Matrix Inverse - Mathematica Code For Inverse of a Matrix
(*To write Mathematica code for the inverse of a matrix, you first need to know the conditions for inverting a matrix: the matrix must be square, and it must be non-singular. Here we declare the matrix entries first, then use a nested If to check that the matrix is square and non-singular, and finally print the result.*)
(* Input matrix entries *)
matrix = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
(* Note: this particular example matrix is singular (Det = 0), so the singular branch runs; change the 9 to a 10 to see an inverse printed. *)
(* Check if the matrix is square *)
If[SquareMatrixQ[matrix],
 (* If the matrix is square, check whether its determinant is non-zero *)
 If[Det[matrix] != 0,
  (* If the determinant is non-zero, report that the matrix is square and non-singular *)
  Print["The matrix is square and its determinant is non-zero."];
  Print["The inverse of the matrix is:"];
  (* Print the inverse of the matrix *)
  Print[MatrixForm[Inverse[matrix]]],
  (* If the determinant is zero, the matrix is square but singular *)
  Print["The matrix is square but its determinant is zero."]
 ],
 (* If the matrix is not square, say so *)
 Print["The matrix is not square."]
]
Solution: ‘Friday the 13th’ | Quanta Magazine
Our Insights questions this month were based on the vagaries of the modern calendar and that eternal question about any specified date: “What day of the week is that?” Our first two questions
concerned the frequency of Friday the 13th’s, which some consider an unlucky day.
Question 1:
The year 2017 began with a Friday the 13th in January, and another one is due in October. What’s the maximum and minimum number of Friday the 13th’s that there can be in a Gregorian calendar year?
This question can, of course, be solved by brute force methods, but can you find an easy way to answer it that you could conceivably even do in your head?
The maximum number of Friday the 13ths in a year is three, and the minimum is one, as Cameron Eggins explains. Here’s a way to think about it: If two given months start on the same day of the week,
then the day of the week for the 13th of both months will also be concordant. So all we need to do is figure out which months will start on the same days of the week, for both nonleap and leap years.
Let’s map the days of the week to the numbers 0 through 6, and assign the base number 0 for January. We can now find the offsets to this base number for each subsequent month by casting out complete
weeks. Basically, we perform modulo arithmetic: Take the number of days in the month, divide by 7 and add the remainder to the previous month’s number, and reduce the sum to a number below 7, if
necessary, to get the number for the next month. January has 31 days, which is four complete weeks plus three days, so February’s base number is 0 + 3, which is 3. Continuing this way, we obtain the
numbers 0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5, which give the offsets for the 12 months in a nonleap year. The number 3 occurs three times, for February, March and November, and this is the maximum
number of occurrences of all the numbers. If any one of these three months has a Friday the 13th, so will the other two months. Hence three is the maximum number of Friday the 13ths that it is
possible to have in a nonleap year. Now notice that each of the seven numbers occurs at least once, which means that all starting days of the week are represented, so you cannot avoid having at least
one Friday the 13th in a nonleap year. Doing the same procedure for leap years does not change our maximum and minimum, which remain 3 and 1 respectively.
Incidentally, if you memorize this string of numbers that represent the months — 0 3 3 6 1 4 6 2 5 0 3 5 — you can figure out the day of the week for any date using simple addition and casting out
7s. Let’s take Oct. 13, 2017. Map the weekdays to the digits 0 through 6 such that 0 = Sunday and 6 = Saturday. Add the number of years since 2001 to the number of leap years since 2001: 16 + 4 = 20
= 6 (mod 7). Now add the date and the offset number for the month, giving 6 + 13 + 0 = 19 = 5, which is a Friday. You can do it pretty quickly in your head with some practice. For dates in the 1900s
use the number of years plus number of leap years since 1900.
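The trick above translates directly into a few lines of code. Here is a sketch in Python (the function name and layout are mine); it uses the nonleap-year offsets, so in leap years it is off by one for dates after February:

```python
# Day-of-week trick from the text: month offsets for a nonleap year,
# with 0 = Sunday, ..., 6 = Saturday. Valid for 2001-2099; in leap
# years it is off by one for dates after February.
OFFSETS = [0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5]  # Jan..Dec
WEEKDAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"]

def weekday(year, month, day):
    years = year - 2001               # years since 2001
    leaps = (year - 2001) // 4        # leap years since 2001
    return WEEKDAYS[(years + leaps + day + OFFSETS[month - 1]) % 7]

print(weekday(2017, 10, 13))  # Friday
```

For dates in the 1900s, the same code works with 1900 as the base year, as noted above.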
Question 2:
Suppose that instead of being spooked by Friday the 13th, you consider it to be your lucky day, and you want to maximize the number of Friday the 13th’s in a year. You are allowed to tamper with the
monthly distribution of days in a normal nonleap year in the following way: You can take away one day from any month of the year and add it to any other. For instance, you could, like Robin Hood, rob
the day-rich December of one day, reducing it to 30 days, and bump up February’s quota to 29 days. Or you could, like a kleptocrat, decree that January has 32 days while poor February has just 27.
What’s the maximum number of Friday the 13th’s you could create in this way in a single year? What if you could do the above procedure for two pairs of months, without using any month twice?
The answers to the two questions are 4 and 5.
You can solve this by inspecting the string of numbers we obtained above: 0 3 3 6 1 4 6 2 5 0 3 5. There are already three 3s, so it is simplest to try to maximize them. Notice that there is a 4 and
a 2, and we can convert both to 3s by performing the day-borrowing procedure described on adjacent months. By taking a day from May and giving it to June, we can surgically alter the 4 to a 3, thus
adding a fourth 3 and potentially creating a fourth Friday the 13th. Similarly, by borrowing a day from August and giving it to July, we can change the 2 to a fifth 3, without changing any of the
other offsets. So our string of offsets is now 0 3 3 6 1 3 6 3 5 0 3 5, which includes five 3s. This means that if February has a Friday the 13th, so will March, June, August and November. Lucky you!
Pete Winkler pointed out that the 13th is in fact more likely to be a Friday than any other day of the week, something, he said, that was proved by a 13-year-old! Just knowing this fact, it is
possible to conclude that there is an integral number of weeks in a time period of 400 years. Do you see how?
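Winkler's claim is easy to verify by machine, since the Gregorian calendar repeats exactly every 400 years. A quick count in Python (my sketch, not part of the original column):

```python
# Count the weekday of every 13th over one full 400-year Gregorian
# cycle: 4800 months in all. 400 Gregorian years are exactly 146097
# days, a whole number of weeks, so any 400-year window gives the
# long-run weekday frequencies.
from collections import Counter
from datetime import date

NAMES = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]

assert (400 * 365 + 97) % 7 == 0  # 146097 days = 20871 weeks exactly
counts = Counter(NAMES[date(y, m, 13).weekday()]
                 for y in range(2000, 2400) for m in range(1, 13))
print(counts.most_common(1))  # [('Friday', 688)]
```

Friday's 688 occurrences out of 4800 edge out every other day of the week.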
Question 3:
On the subject of days and dates, there is something special about the 12th of March, the 9th of May and the 11th of July that requires a global perspective to appreciate.
Each of these dates has a “twin” date with which it shares a special property. Can you figure out what it is? Note: There are some other pairs of twin dates (how many?) that have a similar property,
but the three that are mentioned above (with their twin dates) possess it to a degree that is ahead of the others by leaps and bounds.
This question was correctly answered by amrith raghavan:
“The 12th of March, the 9th of May and the 11th of July share the property that they fall on the same day of the week even if the month and day of the dates are switched. That is, these dates would
fall on the same day of the week if written the American way (mm/dd/yyyy) or the British way (dd/mm/yyyy).”
Yes, indeed! These dates fall on the same day whether they are written in the globally more common European/imperial format (DD-MM-YYYY) or in the American format (MM-DD-YYYY). These three pairs of dates are unique in that they fall on the same day in both nonleap and leap years: 12-03/03-12, 09-05/05-09 and 11-07/07-11.
There are six other pairs of dates that share this property, but three of them work only in nonleap years (01-07/07-01, 01-11/11-01 and 02-08/08-02) or leap years (01-06/06-01, 02-03/03-02 and 02-12/
12-02). Several years ago, I constructed a science fiction adventure based on this fact.
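All of these pairs can be recovered by brute force; here is a sketch in Python (the helper name `twins` is mine):

```python
# Find all "twin" dates: day d and month m (both at most 12, d != m)
# such that the date d-m and its swap m-d fall on the same weekday.
from datetime import date

def twins(year):
    return {tuple(sorted([(d, m), (m, d)]))
            for m in range(1, 13) for d in range(1, 13)
            if d != m and date(year, m, d).weekday()
                       == date(year, d, m).weekday()}

always = twins(2017) & twins(2016)  # a nonleap year and a leap year
print(sorted(always))  # the three pairs from the puzzle
```

Intersecting the nonleap-year and leap-year results leaves exactly the three pairs named in the answer.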
In the Insights column, we also discussed calendar reform. In response to one suggestion that involves having 13 months of 28 days each, with one or two additional extracalendar holidays, I commented
that the above system does not preserve quarters of three months and had asked, “Can you think of a way that preserves weeks of 7 days, has months of about 30 days (let’s say 30 plus or minus no more
than 5 days), preserves equal quarters and equal seasons, and does not need a new calendar every year?” However, after reading the comment by David Prentiss, I agree that there is no need to try to
preserve three-month quarters, as it would require the insertion of extracalendar days four times in the year. Instead, it is much easier to redefine the quarter to be exactly 13 weeks (three 28-day months plus one week, a very small adjustment). The beauty of the 13-month calendar, as David Prentiss stated, is that it avoids this by putting all adjustments (which are limited to just one or two days) at the year's end. So I rescind the question: I think there is no doubt that the 13-month calendar is most logical, and we should move to it. Of course, there is little chance of that
happening in the near future, if ever. Calendar reform faces a great deal more entrenched resistance than just from triskaidekaphobes!
Finally, I asked for readers’ views on what the base year for a universal human calendar should be. Michael Ahern suggested 1969, when human beings first landed on the moon. That’s definitely a good
candidate. But for me, no other choice can come close to the year when Charles Darwin published what the philosopher Daniel Dennett has called “the greatest idea that anyone ever had” — the theory of
evolution by natural selection. For the first time in our history, we as a species glimpsed our true origins. The year 1859 marks the emergence of our species from its intellectual childhood, from
the realm of magic and fantasy into the world of rational thought.
As usual, it was not easy to decide the winner of the Quanta T-shirt this month. I’ve decided to give it to amrith raghavan, based on his answer to Question 3. Considering how he ended his comment, I
hope he is sober now! Cameron Eggins just misses out. See you next month for new insights.
Computing Integer Roots
Today I’m going to talk about the generalization of the integer square root algorithm to higher roots. That is, given \(n\) and \(p\), computing \(\iroot(n, p) = \lfloor \sqrt[p]{n} \rfloor\), or the
greatest integer whose \(p\)th power is less than or equal to \(n\). The generalized algorithm is straightforward, and it’s easy to generalize the proof of correctness, but the run-time bound is a
bit trickier, since it has a dependence on \(p\).
First, the algorithm, which we’ll call \(\NewtonRoot\):
1. If \(n = 0\), return \(0\).
2. If \(p \ge \Bits(n)\), return \(1\).
3. Otherwise, set \(i\) to \(0\) and set \(x_0\) to \(2^{\lceil \Bits(n) / p\rceil}\).
4. Repeat:
1. Set \(x_{i+1}\) to \(\lfloor ((p - 1) x_i + \lfloor n/x_i^{p-1} \rfloor) / p \rfloor\).
2. If \(x_{i+1} \ge x_i\), return \(x_i\). Otherwise, increment \(i\).
and its implementation in Javascript:
// iroot returns the greatest number x such that x^p <= n. The type of
// n must behave like BigInteger (e.g.,
// https://github.com/akalin/jsbn ), n must be non-negative, and
// p must be a positive integer.
// Example (open up the JS console on this page and type):
// iroot(new BigInteger("64"), 3).toString()
function iroot(n, p) {
  var s = n.signum();
  if (s < 0) {
    throw new Error('negative radicand');
  }
  if (p <= 0) {
    throw new Error('non-positive degree');
  }
  if (p !== (p|0)) {
    throw new Error('non-integral degree');
  }
  if (s == 0) {
    return n;
  }
  var b = n.bitLength();
  if (p >= b) {
    return n.constructor.ONE;
  }
  // x = 2^ceil(Bits(n)/p)
  var x = n.constructor.ONE.shiftLeft(Math.ceil(b/p));
  var pMinusOne = new n.constructor((p - 1).toString());
  var pBig = new n.constructor(p.toString());
  while (true) {
    // y = floor(((p-1)x + floor(n/x^(p-1)))/p)
    var y = pMinusOne.multiply(x).add(n.divide(x.pow(pMinusOne))).divide(pBig);
    if (y.compareTo(x) >= 0) {
      return x;
    }
    x = y;
  }
}
This algorithm turns out to require \(Θ(p) + O(\lg \lg n)\) loop iterations, with the run-time for a loop iteration depending on what kind of arithmetic operations are used.
Again we look at the iteration rule: \[ x_{i+1} = \left\lfloor \frac{(p - 1) x_i + \left\lfloor \frac{n}{x_i^{p-1}} \right\rfloor}{p} \right\rfloor \] Letting \(f(x)\) be the right-hand side, we can again use basic properties of the floor function to remove the inner floor: \[ f(x) = \left\lfloor \frac{1}{p} ((p-1) x + n/x^{p-1}) \right\rfloor \] Letting \(g(x)\) be its real-valued equivalent: \[ g(x) = \frac{1}{p} ((p-1) x + n/x^{p-1}) \] we can, again using basic properties of the floor function, show that \(f(x) \le g(x)\), and for any integer \(m\), \(m \le f(x)\) if and only if \(m \le g(x)\).
Finally, let’s give a name to our desired output: let \(s = \iroot(n, p) = \lfloor \sqrt[p]{n} \rfloor\).^[2]
Unsurprisingly, \(f(x)\) never underestimates:
(Lemma 1.) For \(x \gt 0\), \(f(x) \ge s\).
Proof. By the basic properties of \(f(x)\) and \(g(x)\) above, it suffices to show that \(g(x) \ge s\). \(g'(x) = (1 - 1/p) (1 - n/x^p)\) and \(g''(x) = (p - 1) (n/x^{p+1})\). Therefore, \(g(x)\) is
concave-up for \(x \gt 0\); in particular, its single positive extremum at \(x = \sqrt[p]{n}\) is a minimum. But \(g(\sqrt[p]{n}) = \sqrt[p]{n} \ge s\). ∎
Also, our initial guess is always an overestimate:
(Lemma 2.) \(x_0 \gt s\).
Proof. \(\Bits(n) = \lfloor \lg n \rfloor + 1 \gt \lg n\). Therefore, \[ \begin{aligned} x_0 &= 2^{\lceil \Bits(n) / p \rceil} \\ &\ge 2^{\Bits(n) / p} \\ &\gt 2^{\lg n / p} \\ &= \sqrt[p]{n} \\ &\ge
s\text{.} \; \blacksquare \end{aligned} \]
Therefore, we again have the invariant that \(x_i \ge s\), which lets us prove partial correctness:
(Theorem 1.) If \(\NewtonRoot\) terminates, it returns the value \(s\).
Proof. Assume it terminates. If it terminates in step \(1\) or \(2\), then we are done. Otherwise, it can only terminate in step \(4.2\) where it returns \(x_i\) such that \(x_{i+1} = f(x_i) \ge x_i
\). This implies \(g(x_i) = ((p-1)x_i + n/x_i^{p-1}) / p \ge x_i\). Rearranging yields \(n \ge x_i^p\) and combining with our invariant we get \(\sqrt[p]{n} \ge x_i \ge s\). But \(s + 1 \gt \sqrt[p]
{n}\), so that forces \(x_i\) to be \(s\), and thus \(\NewtonRoot\) returns \(s\) if it terminates. ∎
Total correctness is also easy:
(Theorem 2.) \(\NewtonRoot\) terminates.
Proof. Assume it doesn’t terminate. Then we have a strictly decreasing infinite sequence of integers \(\{ x_0, x_1, \dotsc \}\). But this sequence is bounded below by \(s\), so it cannot decrease
indefinitely. This is a contradiction, so \(\NewtonRoot\) must terminate. ∎
Note that, as in the square root case, the check in step \(4.2\) cannot be weakened to \(x_{i+1} = x_i\), as doing so would cause the algorithm to oscillate. In fact, as \(p\) grows, so does the number of values of \(n\) that exhibit this behavior, and so does the number of possible oscillations. For example, \(n = 972\) with \(p = 3\) would yield the sequence \(\{ 16, 11, 10, 9, 10, 9, \dotsc \}\), and
\(n = 80\) with \(p = 4\) would yield the sequence \(\{ 4, 3, 2, 4, 3, 2, \dotsc \}\).
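These oscillations are easy to reproduce with plain Python integers; this standalone snippet (mine, not part of the article's code) mirrors the iteration rule only, not the full algorithm:

```python
# One step of the Newton iteration for the integer p-th root:
# x_{i+1} = floor(((p-1) x + floor(n / x^(p-1))) / p).
def step(n, p, x):
    return ((p - 1) * x + n // x ** (p - 1)) // p

seq = [16]                      # x_0 for n = 972, p = 3
for _ in range(6):
    seq.append(step(972, 3, seq[-1]))
print(seq)                      # [16, 11, 10, 9, 10, 9, 10]

seq2 = [4]                      # x_0 for n = 80, p = 4
for _ in range(6):
    seq2.append(step(80, 4, seq2[-1]))
print(seq2)                     # [4, 3, 2, 4, 3, 2, 4]
```

Both runs settle into the cycles described above, which is why the termination check compares \(x_{i+1} \ge x_i\) rather than testing for equality.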
We will show that \(\NewtonRoot\) takes \(Θ(p) + O(\lg \lg n)\) loop iterations. Then we will analyze a single loop iteration and the arithmetic operations used to get a total run-time bound.
Analogous to the square root case, define \(\Err(x) = x^p/n - 1\) and let \(ϵ_i = \Err(x_i)\). First, let’s prove our lower bound for \(ϵ_i\), which translates directly from the square root case:
(Lemma 3.) \(x_i \ge s + 1\) if and only if \(ϵ_i \ge 1/n\).
Proof. \(n \lt (s + 1)^p\), so \(n + 1 \le (s + 1)^p\), and therefore \((s + 1)^p/n - 1 \ge 1/n\). But the expression on the left side is just \(\Err(s + 1)\). \(x_i \ge s + 1\) if and only if \(ϵ_i
\ge \Err(s + 1)\), so the result immediately follows. ∎
Now for the next few lemmas we need to do some algebra and calculus. Inverting \(\Err(x)\), we get that \(x_i = \sqrt[p]{(ϵ_i + 1) \cdot n}\). Expressing \(g(x_i)\) in terms of \(ϵ_i\) and \(q = 1 -
1/p\) we get \[ g(x_i) = \sqrt[p]{n} \left( \frac{ϵ_i q + 1}{(ϵ_i + 1)^q} \right) \] and \[ \Err(g(x_i)) = \frac{(q ϵ_i + 1)^p}{(ϵ_i + 1)^{p-1}} - 1\text{.} \] Let \[ f(ϵ) = \frac{(q ϵ + 1)^p}{(ϵ +
1)^{p-1}} - 1\text{.} \] Then computing derivatives, \[ \begin{aligned} f'(ϵ) &= q ϵ \frac{(q ϵ + 1)^{p-1}}{(ϵ + 1)^p}\text{,} \\ f''(ϵ) &= q \frac{(q ϵ + 1)^{p-2}}{(ϵ + 1)^{p + 1}}\text{, and} \\
f'''(ϵ) &= -q (2 + q (2 + 3 ϵ)) \frac{(q ϵ + 1)^{p-3}}{(ϵ + 1)^{p + 2}}\text{.} \end{aligned} \] Note that \(f(0) = f'(0) = 0\), and \(f''(0) = q\). Also, for \(ϵ > 0\), \(f'(ϵ) \gt 0\), \(f''(ϵ) \gt
0\), and \(f'''(ϵ) < 0\).
Now we’re ready to show that the \(ϵ_i\) shrink quadratically:
(Lemma 4.) \(f(ϵ) \lt (ϵ/\sqrt{2})^2\) for \(ϵ \gt 0\).
Proof. Taylor-expand \(f(ϵ)\) around \(0\) with the Lagrange remainder form to get \[ f(ϵ) = f(0) + f'(0) ϵ + \frac{f''(0)}{2} ϵ^2 + \frac{f'''(\xi)}{6} ϵ^3 \] for some some \(\xi\) such that \(0 \lt
\xi \lt ϵ\). Plugging in values, we see that \(f(ϵ) = \frac{1}{2} q ϵ^2 + \frac{1}{6} f'''(\xi) ϵ^3\) with the last term being negative, so \(f(ϵ) \lt \frac{1}{2} q ϵ^2 \lt \frac{1}{2} ϵ^2\). ∎
But this is only a useful upper bound when \(ϵ_i \le 1\). In the square root case this was okay, since \(ϵ_1 \le 1\), but that is not true for larger values of \(p\). In fact, in general, the \(ϵ_i\) start off shrinking only linearly:
(Lemma 5.) For \(ϵ \gt 1\), \(f(ϵ) \gt ϵ/8\).
Proof. Since \(f(0) = f'(0) = 0\), and \(f''(ϵ) \gt 0\) for \(ϵ \ge 0\), \(f'(ϵ)\) and \(f(ϵ)\) are increasing, and thus \(f(1) \gt 0\) and \(f(ϵ)\) is a concave-up curve.
Then \((0, 0)\) and \((1, f(1))\) are two points on a concave-up curve, and thus geometrically the line \(y = f(1) ϵ\) must lie below \(y = f(ϵ)\) for \(ϵ \gt 1\), and thus \(f(ϵ) \gt f(1) ϵ\) for \
(ϵ \gt 1\). Algebraically, this also follows from the definition of (strict) convexity (with \(x_1 = 0\), \(x_2 = ϵ\), and \(t = 1 - 1/ϵ\)).
But \(f(1) = (2 - 1/p)^p/2^{p-1} - 1 = 2 \left(1 - \frac{1}{2p}\right)^p - 1\), which is always increasing as a function of \(p\), as you can see by calculating its derivative. Therefore, its minimum
is at \(p = 2\), which is \(1/8\), and so \(f(ϵ) \gt f(1) ϵ \ge ϵ/8\). ∎
Finally, let’s bound our initial values:
(Lemma 6.) \(x_0 \le 2s\) and \(ϵ_0 \le 2^p - 1\).
Proof. This is a straightforward generalization of the equivalent lemma from the square root case. Let’s start with \(x_0\): \[ \begin{aligned} x_0 &= 2^{\lceil \Bits(n) / p \rceil} \\ &= 2^{\lfloor
(\lfloor \lg n \rfloor + 1 + p - 1)/p \rfloor} \\ &= 2^{\lfloor \lg n / p \rfloor + 1} \\ &= 2 \cdot 2^{\lfloor \lg n / p \rfloor}\text{.} \end{aligned} \] Then \(x_0/2 = 2^{\lfloor \lg n / p \
rfloor} \le 2^{\lg n / p} = \sqrt[p]{n}\). Since \(x_0/2\) is an integer, \(x_0/2 \le \sqrt[p]{n}\) if and only if \(x_0/2 \le \lfloor \sqrt[p]{n} \rfloor = s\). Therefore, \(x_0 \le 2s\).
As for \(ϵ_0\): \[ \begin{aligned} ϵ_0 &= \Err(x_0) \\ &\le \Err(2s) \\ &= (2s)^p/n - 1 \\ &= 2^p s^p/n - 1\text{.} \end{aligned} \] Since \(s^p \le n\), \(2^p s^p/n \le 2^p\) and thus \(ϵ_0 \le 2^p
- 1\). ∎
Now we’re ready to show our main result, which involves calculating how long the \(ϵ_i\) shrink linearly:
(Theorem 3.) \(\NewtonRoot\) performs \(Θ(p) + O(\lg \lg n)\) loop iterations.
Proof. Assume that \(ϵ_i \gt 1\) for \(i \le j\), \(ϵ_{j+1} \le 1\), and \(j+k\) is the number of loop iterations performed when running the algorithm for \(n\) and \(p\) (i.e., \(x_{j+k} \ge x_
{j+k-1}\)). Using Lemma 5, \[ \left( \frac{1}{8} \right)^{j+1} ϵ_0 \lt ϵ_{j+1} \le 1\text{,} \] which implies \[ j \gt \frac{\lg ϵ_0}{3} - 1\text{.} \]
Similarly, \[ \left( \frac{1}{8} \right)^j ϵ_0 \ge ϵ_j \gt 1\text{,} \] which implies \[ j \lt \frac{\lg ϵ_0}{3} \text{.} \] Therefore, \(j = Θ(\lg ϵ_0)\), which is \(Θ(p)\) by Lemma 6.
Now assume \(k \ge 5\). Then \(x_i \ge s + 1\) for \(i \lt j + k - 1\). Since \(ϵ_{j+1} \le 1\) by assumption, \(ϵ_{j+3} \le 1/2\) and \(ϵ_i \le (ϵ_{j+3})^{2^{i-j-3}}\) for \(j + 3 \le i \lt j + k -
1\) by Lemma 4, then \(ϵ_{j+k-2} \le 2^{-2^{k-5}}\). But \(1/n \le ϵ_{j+k-2}\) by Lemma 3, so \(1/n \le 2^{-2^{k-5}}\). Taking logs to bring down the \(k\) yields \(k - 5 \le \lg \lg n\). Then \(k \
le \lg \lg n + 5\), and thus \(k = O(\lg \lg n)\).
Therefore, the total number of loop iterations is \(Θ(p) + O(\lg \lg n)\). ∎
Note that \(p \le \lg n\), so we can just say that \(\NewtonRoot\) performs \(O(\lg n)\) loop iterations. But that obscures rather than simplifies. Note that the proof above is very similar to the proof
of the worse run-time of \(\mathrm{N{\small EWTON}\text{-}I{\small SQRT}'}\) where the initial guess varies. In this case, the error in our initial guess is magnified, since we raise it to the \
((p-1)\)th power, and so that manifests as the \(Θ(p)\) term.
Furthermore, unlike the square root case, the number of arithmetic operations in a loop iteration isn’t constant. In particular, the sub-step to compute \(x_i^{p-1}\) takes a number of arithmetic
operations dependent on \(p - 1\). Using repeated squarings, this computation would take \(Θ(\lg p)\) squarings and at most \(Θ(\lg p)\) multiplications.
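For reference, repeated squaring looks like this; a generic sketch in Python, not the big-integer version the article's JavaScript relies on:

```python
# Exponentiation by repeated squaring: Theta(lg e) squarings and at
# most that many extra multiplications, one per set bit of e.
def power(x, e):
    result = 1
    while e > 0:
        if e & 1:            # low bit set: fold x into the result
            result *= x
        x *= x               # square for the next bit
        e >>= 1
    return result

print(power(3, 13))  # 1594323
```

In the algorithm this would be applied with exponent \(p - 1\), giving the \(Θ(\lg p)\) operation count quoted above.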
If the cost of an arithmetic operation is constant, e.g., we’re working with fixed-size integers, then the run-time bound is the iteration count above multiplied by \(Θ(\lg p)\).
Otherwise, if the cost of an arithmetic operation depends on the length of its arguments, then we only have to multiply by a constant factor to get the run-time bounds in terms of arithmetic
operations. If the cost of multiplying two numbers \(\le x\) is \(M(x) = O(\lg^k x)\), then the cost of computing \(x^p\) is \(O((p \lg x)^k)\). But \(x\) is \(Θ(n^{1/p})\), so the cost of computing
\(x^p\) is \(O(\lg^k n)\), which is on the order of the cost of multiplying two numbers \(\le n\). Furthermore, note that we divide the result into \(n\), so we can stop once the computation of \(x_i
^{p-1}\) exceeds \(n\). So in that case, we can treat a loop iteration as if it were performing a constant number of arithmetic operations on numbers of order \(n\), and so, like in the square root
case, we pick up a factor of \(D(n)\), where \(D(n)\) is the run-time of dividing \(n\) by some number \(\le n\).
[1] Go and JS implementations are available on my GitHub. ↩
[2] Here, and in most of the article, we’ll implicitly assume that \(n \gt 0\) and \(p \gt 1\). ↩
Statistical Pi(e)
Statistics is probably the branch of math most closely related to science. Stats is what sets the scientific ball rolling: if you have a hypothesis, you get data, then use stats to analyze the data
and come to conclusions about your hypothesis. It’s hard to imagine a way to make stats useless. Of course, this can be interpreted as a challenge.
Buckle up, folks, for we are going on a statistical journey. We are going to do a statistical analysis of the digits of π.
I: Is Pi Approximately Uniform?
Ever look at the digits of π? Those infinite digits, stretching out into the cosmos and beyond? You probably have. You’ve probably also wondered if there’s some sort of pattern behind them. The goal
of this post is to come out of this at least 95% confident that the answer to that question, in at least one respect, is “no”.
First, we ask the question that is the title of this section: is π approximately uniform? That is, is there any one digit that occurs more frequently in pi than the others? We can ask this question
graphically by asking about the distribution of the digits of pi. If all these digits are uniform, each digit should occur 1/10 of the time, since there are ten digits. So the distribution of the
digits of pi should be this:
If we play the same game with the first 314 digits of pi, this is the distribution that results:
The fun thing, however, is we’re working with computers, so we can jack up the number of digits however high we want! This is a million:^1The digits come from http://piday.org/million.
When I was making this, my computer didn’t throw a fit at all! Way to go, computer!
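For the record, the counting itself is only a few lines. A sketch in Python, where the `pi_digits` string is my 20-digit stand-in for the million-digit file:

```python
# Tally digit frequencies in a string of digits of pi.
from collections import Counter

pi_digits = "31415926535897932384"  # placeholder: first 20 digits
freq = Counter(pi_digits)
for digit in sorted(freq):
    print(digit, freq[digit], freq[digit] / len(pi_digits))
```

Swap in the full million-digit string and the same loop produces the bar chart's underlying counts.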
This looks pretty convincing; the bars appear to be of the same size, which is what we’re aiming for. However, we can do this even better by formalizing “appear”. And you know what that means!
It’s time for
The best statistical test here is a chi-squared test for goodness of fit^2When I first heard this term in Stats class, the word goodness felt wrong to me. If you’re thinking the same thing, know that
goodness is absolutely a real word, and it’s just the linguists messing with our minds..
So, how does it work?
II: A Crash Course in Statistical Tests
The basic format of any statistical test involves a null hypothesis (H_0) and an alternative hypothesis (H_\mathrm{a}). The null states that what we are trying to prove is false, and the alternative
is the opposite, i.e., it states that what we are trying to prove is true. In the context of our digits example, if we were for some reason trying to prove the digits were biased in some fashion
(which is usually the case in the real world—you almost never encounter a situation where you need to prove that a set of frequencies are the same), they would be written as follows:
\begin{cases}H_0\!\!: \text{The digits are of equal frequency.}\\
H_\mathrm{a}\!\!: \text{The digits are not of equal frequency.}\end{cases}
Actually, now that I’ve written it out, we’ll roll with it. Let’s see what happens if we try to prove that the digits aren’t of equal frequency.
The idea behind statistical tests is this: we assume the null is true, and then follow this idea to some logical conclusion, which is very unlikely to be true. At this point we can say the null
itself is very unlikely to be true, and claim that there is evidence to support the alternative, which is the desired result.
What is this logical conclusion? This is where the p-value comes in. This is calculated by taking our data and calculating the probability that we would have gotten results this extreme or more
entirely by chance given that the null is true. In context, this is the probability, assuming that the digits are of equal frequency, that the digits just happened to line up the way they are due to
sheer coincidence.
If the p-value is low enough, we get to reject the null and claim that there is evidence to support the alternative. If it is not low enough we cannot reject the null, and we typically keep assuming
the null. This is like most legal systems: in that case the null would be that the defendant is innocent of whatever crime they are accused of, and it is necessary to prove that they are not
innocent, i.e., to reject the null, to convict them. Otherwise, they are presumed innocent. This is what “innocent until proven guilty” means.
In our case, even though we are trying to prove that the digits don’t line up, we should expect to obtain a somewhat large p-value. The typical threshold is 0.05, or one in twenty. If we don’t obtain a value lower than 0.05, we keep the null (i.e., they line up) by default.
What is the p-value of our test, you ask? Well, we’ll just have to run it, won’t we?
III: Yes, Pi in Fact Is Approximately Uniform
The exact manner in which the p-value is computed depends on the type of test we’re running. In this case, we’re using a chi-squared (\chi^2) distribution. The exact shape of the distribution depends
on a variable called the degrees of freedom, which is equal to the number of categories (digits, in our test) minus one. As there are ten digits in π (0 through 9), there are nine degrees of freedom,
represented by a subscript _9. The distribution, in full denoted by \chi^2_9, ends up looking like this:
So! What do we actually do with this distribution?
Nothing, yet. First we need to find a test statistic—basically a measure of how close our data is to the target distribution. This can be calculated using the formula

\chi^2 = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}}

where observed and expected are measured by counts, summing this value over all categories (all ten digits). Doing so for all ten digit frequencies, we find that \chi^2 \approx 5.5114, or roughly five and a half.
Calculating probabilities with probability density functions (PDFs) such as our chi-squared distribution is simple: simply find the area under the curve for the x-values you want. Calculating this
area is done using an integral. This means that, for example, the full area under any PDF is guaranteed to be 1, since this calculation can be simplified to the probability that you get any value,
which is of course 1.
Now, we need to put all of this together to find the p-value. To repeat, the p-value is the probability, assuming that the digits are of equal frequency, that they would stray this far (or farther) from equality by coincidence. Our \chi^2_9 PDF measures this, but with areas. So, the p-value is some area on our chi-squared distribution. What area? Well, our \chi^2 test statistic is a measure of the closeness of our data to the distribution, and the probability of getting a value of 5.5114 or greater is the area on our chi-squared distribution to the right of the line x=5.5114. For perspective, the x-axis on this distribution, repeated below, ranges from 0 to 25.
Some technical details: the \chi^2 distribution has no value over negative numbers, but has values for all positive real numbers, so the part of the graph with area under it ranges from 0 to \infty;
it’s just that said area becomes really really small for larger values.
Finding the area is done in practice using a calculator or computer. Unfortunately, when using distributions such as these on laptops such as mine, Desmos gets iffy. On to WolframAlpha!
What’s that, WolframAlpha?
*sigh*^3What WolframAlpha is saying here is “upgrade or I’m not computing this any farther”. Basically, it says that I’m not paying it enough to do integrals like the ones I gave it. Which is fair,
because I’m not paying it at all, so…
The moral of this story is: don’t use numbers above 1 million in computations involving probability distributions, because they’re cursed and your calculator and/or computer will hate you forever.
Anyway, Python didn’t let me down, and returned a p-value of about 0.84, meaning that we cannot reject the null, which if you remember is that the digits are of equal frequency. It only gave this
verdict after about ten minutes, however.
Protip: If your computer swears revenge on you, try restarting it.
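If you’d rather not fight WolframAlpha (or wait ten minutes), the tail area can be approximated in plain Python with a crude midpoint-rule integral of the \chi^2_9 density. This is my sketch, not the author’s script; the exact figure depends a little on the statistic and the method, but either way it lands far above 0.05:

```python
# Right-tail area of the chi-squared density with k degrees of freedom,
# approximated by a midpoint-rule integral from the statistic out to a
# point where the density is negligible.
import math

def chi2_pdf(x, k=9):
    return x ** (k / 2 - 1) * math.exp(-x / 2) / (2 ** (k / 2) * math.gamma(k / 2))

def p_value(stat, k=9, upper=200.0, steps=200_000):
    h = (upper - stat) / steps
    return h * sum(chi2_pdf(stat + (i + 0.5) * h, k) for i in range(steps))

print(round(p_value(5.5114), 3))
```

As a sanity check, feeding in the standard 5% critical value for nine degrees of freedom (about 16.92) gives back roughly 0.05.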
Now, you can’t discuss stats without a good discussion of misleading people with stats. On to the next section!
IV: How Can I Use This to Deceive People?
Let’s go back to our bar chart.
Here are the exact frequencies:
Digit Count Frequency
0 99959 9.9959%
1 99757 9.9757%
2 100026 10.0026%
3 100230 10.023%
4 100230 10.023%
5 100359 10.0359%
6 99548 9.9548%
7 99800 9.98%
8 99985 9.9985%
9 100106 10.0106%
As you can see, there is some slight variation! This is good news, because we can blow up the y-axis and make it look like large variation:
However, the differences here are so minuscule that the number of significant figures we have to use to make this convincingly misleading draws attention to the y-axis, which is precisely what we do not want to be doing. We can do better than this.^4
Recall the chi-squared test we did earlier in this post to determine whether there was a significant difference in the distribution of the digits. Recall that the answer was “no”. Now, how can we
change the result of this test while still keeping the reasoning mostly plausible?
Right on the mark! This is called p-hacking. P-hacking is genuinely complex and is causing real problems in science, and it's better suited to a separate post, since this one is already running a bit long. To cut it short, p-hacking is when you do various bad things to data, such as using small sample sizes, cherry-picking your favorite data, removing "outliers"^5 subjectively, and being generally biased, in the hope of making your results significant (i.e., getting the p-value below the magical 0.05 figure).
Anyway, let’s see p-hacking for ourselves. I’m taking the same 1 million digits, but I’m taking 50 random samples of 50 digits each. The full results are here, but the key here is that 3 of these
came out significant. Here is the most visible result, with a \chi^2 of 30.8:
+10 alertness points if you noticed that the graph goes up to 0.5.
The best part is drawing fallacious conclusions, though:
Look, all I’M saying is that if 8 isn’t ∞ in disguise then why are they both greater than π? OPEN YOUR EYES
It’s good to keep a watch on situations like these: just because a conclusion fits the data doesn’t mean it’s supported by them.
• 1
• 2 When I first heard this term in Stats class, the word felt wrong to me. If you're thinking the same thing, know that it is absolutely a real word, and it's just the linguists messing with our minds.
• 3
What WolframAlpha is saying here is “upgrade or I’m not computing this any farther”. Basically, it says that I’m not paying it enough to do integrals like the ones I gave it. Which is fair,
because I’m not paying it at all, so…
• 4
Or worse, I guess, depending on whether you believe “misleading the public about the distribution of digits of pi” to be good or bad.
• 5
Generally speaking, outliers are anomalous data points. P-hackers will remove data that doesn’t fit their hypothesis by calling them outliers, even if they’re not.
2 responses to “Statistical Pi(e)”
1. A fun read because our comments are funny. However, you take no time justifying the use of Chi Squared. You seem to presume we should just “blindly” use Chi Squared; because you said so? Perhaps
this is beyond the scope of this blog but to me it is critically important, i.e., if you did a simulation using a Monte Carlo method, would results be the same as the Chi Squared.
□ Hi! Glad you liked the post. In response to your comment, I picked χ^2 somewhat at random, because using all possible methods would result in an infeasibly long post; there's nothing specific
about it that makes it special. You mention Monte Carlo; the idea is that you would get a p-value that would be similar (but not the same–randomness and all that) to the p-value you’d get
from χ^2.
Boolean - (Logical) Operator (OR, AND, XOR, ...)
A Boolean operator manipulates truthy and falsy values, which can come from:
General syntax is
lhs operator rhs
Any Boolean expression (such as the outcome of a comparison operator) may be negated with not.
Syntax example:
# or
Operator Description Logic
&& Logical AND If lhs is falsy, the expression evaluates to lhs; otherwise it evaluates to rhs
|| Logical OR If lhs is truthy, the expression evaluates to lhs; otherwise it evaluates to rhs
& Bitwise AND
| Bitwise OR
^ Bitwise exclusive OR
&& and || exhibit "short-circuiting" behavior, which means that the second operand is evaluated only if needed.
In a computer, a logical operator is implemented by a logic gate.
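Python's and/or happen to follow exactly the operand-returning, short-circuit rules in the table above (with and/or playing the role of &&/||), so it makes for a quick demonstration. Its &, | and ^ are the bitwise operators:

```python
# `and`/`or` return one of their operands, short-circuiting as described above
print(0 and "right")   # 0     (lhs falsy: evaluates to lhs; rhs never runs)
print("left" or "x")   # left  (lhs truthy: evaluates to lhs)

# Bitwise operators combine the binary representations bit by bit
print(0b1100 & 0b1010)  # 8   (0b1000)
print(0b1100 | 0b1010)  # 14  (0b1110)
print(0b1100 ^ 0b1010)  # 6   (0b0110)
```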
And, Or and Xor seem incorrect.
Citation: Philipp Niemann, Alwin Walter Zulehner, Rolf Drechsler, Robert Wille, "Overcoming the Trade-off Between Accuracy and Compactness in Decision Diagrams for Quantum Computation", in IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems (TCAD), 2020, ISSN: 1937-4151
Original title: Overcoming the Trade-off Between Accuracy and Compactness in Decision Diagrams for Quantum Computation
Language of the title: English
Original abstract: Quantum computation promises to solve many hard or infeasible problems substantially faster than classical solutions. The involvement of big players like
Google, IBM, Intel, Rigetti, or Microsoft furthermore led to a momentum which increases the demand for automated design methods for quantum computations.
In this context, decision diagrams for quantum computation provide a major pillar as they allow to efficiently represent quantum states and quantum
operations which, otherwise, have to be described in terms of exponentially large state vectors and unitary matrices. However, current decision diagrams
for the quantum domain suffer from a trade-off between accuracy and compactness, since (1) small errors that are inevitably introduced by the limited
precision of floating-point arithmetic can harm the compactness (i.e., the size of the decision diagram) significantly and (2) overcompensating these
errors (to increase compactness) may lead to an information loss and introduces numerical instabilities. In this work, we describe and evaluate the
effects of this trade-off which clearly motivates the need for a solution that is perfectly accurate and compact at the same time. More precisely, we show
that the trade-off indeed weakens current design automation approaches for quantum computation (possibly leading to corrupted results or infeasible
run-times). To overcome this, we propose an alternative approach that utilizes an algebraic representation of the occurring complex and irrational numbers
and outline how this can be incorporated in a decision diagram which is suited for quantum computation. Evaluations show that - at the cost of an overhead
which is moderate in many cases - the proposed algebraic solution indeed overcomes the trade-off between accuracy and compactness that is present in
current numerical solutions.
Language of the abstract: English
Journal: IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems (TCAD)
Year of publication: 2020
ISSN: 1937-4151
Number of pages: 12
DOI: 10.1109/TCAD.2020.2977603
Scope: international
Publication type: Article / paper in an SCI-Expanded journal
Authors: Philipp Niemann, Alwin Walter Zulehner, Rolf Drechsler, Robert Wille
Research units: LIT Secure and Correct Systems Lab
Abteilung für Integrierten Schaltungs- und Systementwurf
Fields of science: Computer science (ÖSTAT:102)
Electrical engineering, electronics, information technology (ÖSTAT:202)
User support: Sandra Winzer, last modified:
2. Polynomial Regression
(AI Studio Core)
This operator generates a polynomial regression model from the given ExampleSet. Polynomial regression is considered to be a special case of multiple linear regression.
Polynomial regression is a form of linear regression in which the relationship between the independent variable x and the dependent variable y is modeled as an nth order polynomial. In Altair
RapidMiner, y is the label attribute and x is the set of regular attributes that are used for the prediction of y. Polynomial regression fits a nonlinear relationship between the value of x and the
corresponding conditional mean of y, denoted E(y | x), and has been used to describe nonlinear phenomena such as the growth rate of tissues and the progression of disease epidemics. Although
polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is linear in the unknown parameters that
are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression.
The goal of regression analysis is to model the expected value of a dependent variable y in terms of the value of an independent variable (or vector of independent variables) x. In simple linear
regression, the following model is used:
y = w0 + ( w1 * x )
In this model, for each unit increase in the value of x, the conditional expectation of y increases by w1 units.
In many settings, such a linear relationship may not hold. For example, if we are modeling the yield of a chemical synthesis in terms of the temperature at which the synthesis takes place, we may
find that the yield improves by increasing amounts for each unit increase in temperature. In this case, we might propose a quadratic model of the form:
y = w0 + (w1 * x) + (w2 * x^2)
In this model, when the temperature is increased from x to x + 1 units, the expected yield changes by w1 + w2 + (2 * w2 * x). The fact that the change in yield depends on x is what makes the
relationship nonlinear (this must not be confused with saying that this is nonlinear regression; on the contrary, this is still a case of linear regression). In general, we can model the expected
value of y as an nth order polynomial, yielding the general polynomial regression model:
y = w0 + (w1 * x) + (w2 * x^2) + . . . + (wm * x^m)
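To make the "linear in the unknown parameters" point concrete, here is a pure-Python sketch that fits the quadratic model by solving the normal equations of ordinary least squares (a real workflow would use this operator, or a library routine such as numpy.polyfit). The data are synthetic, generated from a known quadratic so the recovered coefficients can be checked:

```python
# Fit y = w0 + w1*x + w2*x^2 by least squares on a Vandermonde matrix.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1 + 2 * x + 3 * x ** 2 for x in xs]  # true weights: w0=1, w1=2, w2=3

n = 3  # number of coefficients for a degree-2 polynomial
V = [[x ** j for j in range(n)] for x in xs]  # Vandermonde rows [1, x, x^2]
A = [[sum(row[r] * row[c] for row in V) for c in range(n)] for r in range(n)]
b = [sum(V[i][r] * ys[i] for i in range(len(xs))) for r in range(n)]

# Solve the normal equations (V^T V) w = V^T y by Gaussian elimination
# with partial pivoting.
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]

w = [0.0] * n
for r in range(n - 1, -1, -1):
    w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]

print(w)  # approximately [1.0, 2.0, 3.0]
```

The system being solved is linear in w0, w1, w2 even though the fitted curve is nonlinear in x, which is exactly why polynomial regression counts as linear regression.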
Regression is a technique used for numerical prediction. It is a statistical measure that attempts to determine the strength of the relationship between one dependent variable ( i.e. the label
attribute) and a series of other changing variables known as independent variables (regular attributes). Just like Classification is used for predicting categorical labels, Regression is used for
predicting a continuous value. For example, we may wish to predict the salary of university graduates with 5 years of work experience, or the potential sales of a new product given its price.
Regression is often used to determine how much specific factors such as the price of a commodity, interest rates, particular industries or sectors influence the price movement of an asset.
Polynomial regression is a form of linear regression in which the relationship between the independent variable x and the dependent variable y is modeled as an nth order polynomial.
• (Data table)
This input port expects an ExampleSet. This operator cannot handle nominal attributes; it can be applied on data sets with numeric attributes. Thus often you may have to use the Nominal to
Numerical operator before application of this operator.
• (Model)
The regression model is delivered from this output port. This model can now be applied on unseen data sets.
• (Data table)
The ExampleSet that was given as input is passed without any modifications to the output through this port. This is usually used to reuse the same ExampleSet in further operators or to view the
ExampleSet in the Results Workspace.
• max iterationsThis parameter specifies the maximum number of iterations to be used for the model fitting.
• replication factorThis parameter specifies the amount of times each input variable is replicated, i.e. how many different degrees and coefficients can be applied to each variable.
• max degreeThis parameter specifies the maximal degree to be used for the final polynomial.
• min coefficientThis parameter specifies the minimum number to be used for the coefficients and the offset.
• max coefficientThis parameter specifies the maximum number to be used for the coefficients and the offset.
• use local random seedThis parameter indicates if a local random seed should be used for randomization. Using the same value of the local random seed will produce the same randomization.
• local random seedThis parameter specifies the local random seed. This parameter is only available if the use local random seed parameter is set to true.
Tutorial Processes
Applying the Polynomial Regression operator on the Polynomial data set
The 'Polynomial' data set is loaded using the Retrieve operator. The Split Data operator is applied on it to split the ExampleSet into training and testing data sets. The Polynomial Regression
operator is applied on the training data set with default values of all parameters. The regression model generated by the Polynomial Regression operator is applied on the testing data set of the
'Polynomial' data set using the Apply Model operator. The labeled data set generated by the Apply Model operator is provided to the Performance (Regression) operator. The absolute error and the
prediction average parameters are set to true. Thus the Performance Vector generated by the Performance (Regression) operator has information regarding the absolute error and the prediction average
in the labeled data set. The absolute error is calculated by summing the absolute differences between the predicted and actual values of the label attribute, and dividing this sum by the total number of
predictions. The prediction average is calculated by adding all actual label values and dividing this sum by the total number of examples. You can verify this from the results in the Results Workspace.
z³ + Az² + Bz + 26 = 0, where A ∈ ℝ, B ∈ ℝ
Question asked by Filo student
One of the roots of the above cubic equation is . a) Find the real root of the equation. b) Determine the values of A and B.
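The given root did not survive the page scrape above, so here is the general method with a hypothetical root z0 = 2 + 3i standing in for it. For a cubic with real coefficients, non-real roots come in conjugate pairs, and Vieta's formulas for z³ + Az² + Bz + 26 give: product of the roots = -26, sum of the roots = -A, and sum of pairwise products = B.

```python
# HYPOTHETICAL given root -- substitute the actual root from the problem.
z0 = complex(2, 3)
z0c = z0.conjugate()   # also a root, since the coefficients are real

# Vieta for z^3 + A z^2 + B z + 26: the product of the three roots is -26.
r = -26 / (z0 * z0c).real             # a) the real root
A = -(z0 + z0c + r).real              # sum of roots = -A
B = (z0 * z0c + r * (z0 + z0c)).real  # b) sum of pairwise products = B

print(r, A, B)  # -2.0 -2.0 5.0 for this choice of z0
```

Whatever the actual given root is, the same three Vieta relations determine r, A and B.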
Question Text One of the roots of the above cubic equation is . a) Find the real root of the equation. b) Determine the values of A and B.
Updated On May 9, 2024
Topic Complex Number and Binomial Theorem
Subject Mathematics
Class Class 11
Answer Type Video solution: 1
Upvotes 85
Avg. Video Duration 8 min
Printed copies of Elements of Data Science are available now, with a full color interior, from Lulu.com.
DataFrames and Series#
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + str(local))
return filename
import utils
This chapter introduces Pandas, a Python library that provides functions for reading and writing data files, exploring and analyzing data, and generating visualizations. And it provides two new types
for working with data, DataFrame and Series.
We will use these tools to answer a data question – what is the average birth weight of babies in the United States? This example will demonstrate important steps in almost any data science project:
1. Identifying data that can answer a question.
2. Obtaining the data and loading it in Python.
3. Checking the data and dealing with errors.
4. Selecting relevant subsets from the data.
5. Using histograms to visualize a distribution of values.
6. Using summary statistics to describe the data in a way that best answers the question.
7. Considering possible sources of error and limitations in our conclusions.
Let’s start by getting the data.
Reading the Data#
To estimate average birth weight, we’ll use data from the National Survey of Family Growth (NSFG), which is available from the National Center for Health Statistics. To download the data, you have to
agree to the Data User’s Agreement. URLs for the data and the agreement are in the notebook for this chapter.
The NSFG data is available from https://www.cdc.gov/nchs/nsfg.
The Data User’s Agreement is at https://www.cdc.gov/nchs/data_access/ftp_dua.htm.
You should read the terms carefully, but let me draw your attention to what I think is the most important one:
Make no attempt to learn the identity of any person or establishment included in these data.
NSFG respondents answer questions of the most personal nature with the expectation that their identities will not be revealed. As ethical data scientists, we should respect their privacy and adhere
to the terms of use.
Respondents to the NSFG provide general information about themselves, which is stored in the respondent file, and information about each time they have been pregnant, which is stored in the pregnancy
We will work with the pregnancy file, which contains one row for each pregnancy and one column for each question on the NSFG questionnaire.
The data is stored in a fixed-width format, which means that every row is the same length and each column spans a fixed range of characters. For example, the first six characters in each row
represent a unique identifier for each respondent; the next two characters indicate whether a pregnancy is the respondent's first, second, etc.
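In miniature, fixed-width parsing is just string slicing. The sketch below uses the two column positions described above (the identifier in the first six characters, the pregnancy order in the next two); the raw line itself is made up for illustration.

```python
# Each column is a (start, end) character range -- this is what a data
# dictionary encodes, and what read_fwf's colspecs argument expects.
colspecs = {"CASEID": (0, 6), "PREGORDR": (6, 8)}

line = "70627  1"  # hypothetical raw line: CASEID 70627, PREGORDR 1
row = {name: line[a:b].strip() for name, (a, b) in colspecs.items()}
print(row)  # {'CASEID': '70627', 'PREGORDR': '1'}
```

read_fwf does the same thing for every row and every column, which is why it needs the names and colspecs from the data dictionary.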
To read this data, we need a data dictionary, which specifies the names of the columns and the index where each one begins and ends. The data and the data dictionary are available in separate files.
Instructions for downloading them are in the notebook for this chapter.
dict_file = '2015_2017_FemPregSetup.dct'
data_file = '2015_2017_FemPregData.dat'
Pandas can read data in most common formats, including CSV, Excel, and fixed-width format, but it cannot read the data dictionary, which is in Stata format. For that, we’ll use a Python library
called statadict.
From statadict, we’ll import parse_stata_dict, which reads the data dictionary.
from statadict import parse_stata_dict
stata_dict = parse_stata_dict(dict_file)
<statadict.base.StataDict at 0x7fe63035c7f0>
The result is an object that contains
• names, which is a list of column names, and
• colspecs, which is a list of tuples, where each tuple contains the first and last index of a column.
These values are exactly the arguments we need to use read_fwf, which is the Pandas function that reads a file in fixed-width format.
import pandas as pd
nsfg = pd.read_fwf(data_file,
                   names=stata_dict.names,
                   colspecs=stata_dict.colspecs)
The result is a DataFrame, which is the primary type Pandas uses to store data. DataFrame has a method called head that shows the first 5 rows:

nsfg.head()
│ │CASEID│PREGORDR│HOWPREG_N │HOWPREG_P │MOSCURRP│NOWPRGDK│PREGEND1│
│0│70627 │1 │NaN │NaN │NaN │NaN │6.0 │
│1│70627 │2 │NaN │NaN │NaN │NaN │1.0 │
│2│70627 │3 │NaN │NaN │NaN │NaN │6.0 │
│3│70628 │1 │NaN │NaN │NaN │NaN │6.0 │
│4│70628 │2 │NaN │NaN │NaN │NaN │6.0 │
The first two columns are CASEID and PREGORDR, which I mentioned earlier. The first three rows have the same CASEID, which means this respondent reported three pregnancies. The values of PREGORDR
indicate that they are the first, second, and third pregnancies, in that order. We will learn more about the other columns as we go along.
In addition to methods like head, a DataFrame object has several attributes, which are variables associated with the object. For example, nsfg has an attribute called shape, which is a tuple containing the number of rows and columns:

nsfg.shape

(9553, 248)

There are 9553 rows in this dataset, one for each pregnancy, and 248 columns, one for each question. nsfg also has an attribute called columns, which contains the column names:

nsfg.columns
Index(['CASEID', 'PREGORDR', 'HOWPREG_N', 'HOWPREG_P', 'MOSCURRP', 'NOWPRGDK',
'PREGEND1', 'PREGEND2', 'HOWENDDK', 'NBRNALIV',
'SECU', 'SEST', 'CMINTVW', 'CMLSTYR', 'CMJAN3YR', 'CMJAN4YR',
'CMJAN5YR', 'QUARTER', 'PHASE', 'INTVWYEAR'],
dtype='object', length=248)
The column names are stored in an Index, which is another Pandas type, similar to a list.
Based on the names, you might be able to guess what some of the columns are, but in general you have to read the documentation. When you work with datasets like the NSFG, it is important to read the
documentation carefully. If you interpret a column incorrectly, you can generate nonsense results and never realize it.
So, before we start looking at data, let’s get familiar with the NSFG codebook, which describes every column. Instructions for downloading it are in the notebook for this chapter.
You can download the codebook for this dataset from AllenDowney/ElementsOfDataScience.
If you search that document for “weigh at birth” you should find these columns related to birth weight.
• BIRTHWGT_LB1: Birthweight in Pounds - 1st baby from this pregnancy
• BIRTHWGT_OZ1: Birthweight in Ounces - 1st baby from this pregnancy
There are similar columns for a 2nd or 3rd baby, in the case of twins or triplets. For now we will focus on the first baby from each pregnancy, and we will come back to the issue of multiple births.
In many ways a DataFrame is like a Python dictionary, where the column names are the keys and the columns are the values. You can select a column from a DataFrame using the bracket operator, with a
string as the key.
pounds = nsfg['BIRTHWGT_LB1']
The result is a Series, which is a Pandas type that represents a single column of data. In this case the Series contains the birth weight, in pounds, for each live birth.
head shows the first five values in the Series, the name of the Series, and the data type:

pounds.head()
0 7.0
1 NaN
2 9.0
3 6.0
4 7.0
Name: BIRTHWGT_LB1, dtype: float64
One of the values is NaN, which stands for “Not a Number”. NaN is a special value used to indicate invalid or missing data. In this example, the pregnancy did not end in live birth, so birth weight
is inapplicable.
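A few properties of NaN are worth knowing up front; they hold for plain Python floats as well as for values inside a Series:

```python
import math

nan = float('nan')
print(nan == nan)       # False -- NaN compares unequal to everything, itself included
print(math.isnan(nan))  # True -- the reliable way to test for NaN
print(nan + 1)          # nan -- NaN propagates through arithmetic
```

This is why Pandas provides dedicated tools like isna and dropna rather than relying on equality comparisons.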
Exercise: The column BIRTHWGT_OZ1 contains the ounces part of birth weight. Select this column from nsfg and assign it to a new variable called ounces. Then display the first 5 elements of ounces.
Exercise: The Pandas types we have seen so far are DataFrame, Index, and Series. You can find the documentation of these types at:
This documentation can be overwhelming – I don’t recommend trying to read it all now. But you might want to skim it so you know where to look later.
At this point we have identified the columns we need to answer the question and assigned them to variables named pounds and ounces.
pounds = nsfg['BIRTHWGT_LB1']
ounces = nsfg['BIRTHWGT_OZ1']
Before we do anything with this data, we have to validate it. One part of validation is confirming that we are interpreting the data correctly. We can use the value_counts method to see what values
appear in pounds and how many times each value appears. With dropna=False, it includes NaNs. By default, the results are sorted with the highest count first, but we can use sort_index to sort them by
value instead, with the lightest babies first and heaviest babies last.

pounds.value_counts(dropna=False).sort_index()
0.0 2
1.0 28
2.0 46
3.0 76
4.0 179
5.0 570
6.0 1644
7.0 2268
8.0 1287
9.0 396
10.0 82
11.0 17
12.0 2
13.0 1
14.0 1
98.0 2
99.0 89
NaN 2863
Name: count, dtype: int64
The values are in the left column and the counts are in the right column. The most frequent values are 6-8 pounds, but there are some very light babies, a few very heavy babies, and two special
values, 98, and 99. According to the codebook, these values indicate that the respondent declined to answer the question (98) or did not know (99).
We can validate the results by comparing them to the codebook, which lists the values and their frequencies.
Value Label Total
. INAPPLICABLE 2863
0-5 UNDER 6 POUNDS 901
6 6 POUNDS 1644
7 7 POUNDS 2268
8 8 POUNDS 1287
9-95 9 POUNDS OR MORE 499
98 Refused 2
99 Don’t know 89
The results from value_counts agree with the codebook, so we can be confident that we are reading and interpreting the data correctly.
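The cross-check itself takes only a couple of lines. Using the counts from the value_counts output above, the codebook's binned totals come out exactly:

```python
# Counts from pounds.value_counts(), copied from the output above
counts = {0: 2, 1: 28, 2: 46, 3: 76, 4: 179, 5: 570,
          6: 1644, 7: 2268, 8: 1287,
          9: 396, 10: 82, 11: 17, 12: 2, 13: 1, 14: 1,
          98: 2, 99: 89}

under_6 = sum(v for k, v in counts.items() if k <= 5)
nine_or_more = sum(v for k, v in counts.items() if 9 <= k <= 95)
print(under_6, nine_or_more)  # 901 499, matching the codebook's binned rows
```

The grand total, 6690, also matches the non-NaN count we will see from describe.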
Exercise: In nsfg, the OUTCOME column encodes the outcome of each pregnancy as shown below:
Value Meaning
1 Live birth
2 Induced abortion
3 Stillbirth
4 Miscarriage
5 Ectopic pregnancy
6 Current pregnancy
Use value_counts to display the values in this column and how many times each value appears. Are the results consistent with the codebook?
Summary Statistics#
Another way to validate the data is with describe, which computes statistics that summarize the data, like the mean, standard deviation, minimum, and maximum. Here are the results for pounds.

pounds.describe()
count 6690.000000
mean 8.008819
std 10.771360
min 0.000000
25% 6.000000
50% 7.000000
75% 8.000000
max 99.000000
Name: BIRTHWGT_LB1, dtype: float64
count is the number of values, not including NaN. mean and std are the mean and standard deviation. min and max are the minimum and maximum values, and in between are the 25th, 50th, and 75th
percentiles. The 50th percentile is the median.
The mean is about 8.0, but that doesn't mean much because it includes the special values 98 and 99. Before we can really compute the mean, we have to replace those values with NaN to identify them
as missing data. The replace method does what we want:
import numpy as np
pounds_clean = pounds.replace([98, 99], np.nan)
replace takes a list of the values we want to replace and the value we want to replace them with. np.nan means we are getting the special value NaN from the NumPy library, which is imported as np.
The result is a new Series, which I assign to pounds_clean. If we run describe again, we see that count is smaller now because it includes only the valid values.

pounds_clean.describe()
count 6599.000000
mean 6.754357
std 1.383268
min 0.000000
25% 6.000000
50% 7.000000
75% 8.000000
max 14.000000
Name: BIRTHWGT_LB1, dtype: float64
The mean of the new Series is about 6.7 pounds. Remember that the mean of the original Series was more than 8 pounds. It makes a big difference when you remove a few 99-pound babies!
The effect on standard deviation is even more dramatic. If we include the values 98 and 99, the standard deviation is 10.8. If we remove them – as we should – the standard deviation is 1.4.
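A toy example (with made-up numbers) shows why a single sentinel value distorts the mean and, even more so, the standard deviation:

```python
from statistics import mean, stdev

weights = [6.0, 7.0, 7.0, 8.0, 99.0]     # one "don't know" code (99) left in
cleaned = [w for w in weights if w < 98]  # drop the sentinel

print(mean(weights), stdev(weights))  # mean 25.4, stdev about 41.1
print(mean(cleaned), stdev(cleaned))  # mean 7.0, stdev about 0.82
```

The standard deviation is a sum of squared deviations, so a single far-out value dominates it, which is exactly the pattern we saw with pounds.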
Exercise: Use describe to summarize ounces. Then use replace to replace the special values 98 and 99 with NaN, and assign the result to ounces_clean. Run describe again. How much does this cleaning
affect the results?
Series Arithmetic#
Now we want to combine pounds and ounces into a single Series that contains total birth weight. With Series objects, the arithmetic operators work elementwise, as they do with NumPy arrays.
So, to convert pounds to ounces, we can write pounds * 16. Then, to add in ounces we can write pounds * 16 + ounces.
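For example, with made-up values (this assumes pandas is importable; the names are illustrative only):

```python
import pandas as pd

pounds_demo = pd.Series([6.0, 7.0, 8.0])
ounces_demo = pd.Series([4.0, 8.0, 12.0])

# Arithmetic operators act elementwise, pairing values by position
total_ounces = pounds_demo * 16 + ounces_demo
print(total_ounces.tolist())  # [100.0, 120.0, 140.0]
```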
Exercise: Use pounds_clean and ounces_clean to compute the total birth weight expressed in kilograms (there are roughly 2.2 pounds per kilogram). What is the mean birth weight in kilograms?
Let’s get back to the original question: what is the average birth weight for babies in the U.S.?
As an answer we could take the results from the previous section and compute the mean:
pounds_clean = pounds.replace([98, 99], np.nan)
ounces_clean = ounces.replace([98, 99], np.nan)
birth_weight = pounds_clean + ounces_clean / 16
birth_weight.mean()
But it is risky to compute a summary statistic, like the mean, before we look at the whole distribution of values. A distribution is a set of possible values and their frequencies. One way to
visualize a distribution is a histogram, which shows values on the x-axis and their frequencies on the y-axis. Series provides a hist method that makes histograms, and we can use Pyplot to label the axes and title the figure.
import matplotlib.pyplot as plt
birth_weight.hist(bins=30)
plt.xlabel('Birth weight in pounds')
plt.ylabel('Number of live births')
plt.title('Distribution of U.S. birth weight');
The keyword argument, bins, tells hist to divide the range of weights into 30 intervals, called bins, and count how many values fall in each bin. The x-axis is birth weight in pounds; the y-axis is
the number of births in each bin.
The distribution looks like a bell curve, but the tail is longer on the left than on the right – that is, there are more light babies than heavy babies. That makes sense, because the distribution
includes some babies that were born preterm.
Exercise: hist takes keyword arguments that specify the type and appearance of the histogram. Read the documentation of hist at https://pandas.pydata.org/docs/reference/api/pandas.Series.hist.html
and see if you can figure out how to plot the histogram as an unfilled line against a background with no grid lines.
Exercise: The NSFG dataset includes a column called AGECON that records a woman’s age at conception for each pregnancy. Select this column from the DataFrame and plot the histogram of the values with
20 bins. Label the axes and add a title.
Boolean Series#
We have seen that the distribution of birth weights is skewed to the left – that is, the left tail extends farther from the center than the right tail. That’s because preterm babies tend to be
lighter. To see which babies are preterm, we can use the PRGLNGTH column, which records pregnancy length in weeks. A baby is considered preterm if it is born prior to the 37th week of pregnancy.
preterm = (nsfg['PRGLNGTH'] < 37)
When you compare a Series to a value, the result is a Boolean Series – that is, a Series where each element is a Boolean value, True or False. In this case, it’s True for each preterm baby and False
otherwise. We can use head to see the first 5 elements.
preterm.head()
0 False
1 True
2 False
3 False
4 False
Name: PRGLNGTH, dtype: bool
For a Boolean Series, the sum method treats True as 1 and False as 0, so the result is the number of True values, which is the number of preterm babies.
If you compute the mean of a Boolean Series, the result is the fraction of True values. In this case, it's about 0.38 – which means about 38% of the pregnancies are less than 37 weeks in duration.
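As a small aside with a toy Boolean Series (made-up values, not the NSFG data), here is how sum and mean behave:

```python
import pandas as pd

# A toy Boolean Series: sum counts the True values, mean gives their fraction.
flags = pd.Series([True, False, True, True])

print(flags.sum())   # 3
print(flags.mean())  # 0.75
```

The 38% figure above is exactly this kind of mean, computed on preterm.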
However, this result is misleading because it includes all pregnancy outcomes, not just live births. We can use the OUTCOME column to create another Boolean Series to indicate which pregnancies ended
in live birth.
live = (nsfg['OUTCOME'] == 1)
Now we can use the & operator, which represents the logical AND operation, to identify pregnancies where the outcome is a live birth and preterm:
live_preterm = (live & preterm)
About 9% of all pregnancies resulted in a preterm live birth.
The other common logical operators that work with Series objects are:
• |, which represents the logical OR operation – for example, live | preterm is true if either live is true, or preterm is true, or both.
• ~, which represents the logical NOT operation – for example, ~live is true if live is not true.
The logical operators treat NaN the same as False, so you should be careful about using the NOT operator with a Series that contains NaN values. For example, ~preterm would include not just full term
pregnancies, but also pregnancies with unknown duration.
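The caveat about NaN is easy to see with a toy Series (hypothetical pregnancy lengths, not the NSFG data):

```python
import numpy as np
import pandas as pd

length = pd.Series([30.0, 40.0, np.nan])
preterm = (length < 37)

# Comparing NaN to a number yields False...
print(preterm.tolist())     # [True, False, False]

# ...so negating the result counts the unknown row as "not preterm".
print((~preterm).tolist())  # [False, True, True]
```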
Exercise: Of all pregnancies, what fraction are live births at full term (37 weeks or more)? Of all live births, what fraction are full term?
Filtering Data#
We can use a Boolean Series as a filter – that is, we can select only rows that satisfy a condition or meet some criterion. For example, we can use preterm and the bracket operator to select values
from birth_weight, so preterm_weight gets birth weights for preterm babies.
preterm_weight = birth_weight[preterm]
To select full-term babies, we can create a Boolean Series like this:
fullterm = (nsfg['PRGLNGTH'] >= 37)
And use it to select birth weights for full term babies:
full_term_weight = birth_weight[fullterm]
As expected, full term babies are heavier, on average, than preterm babies. To be more explicit, we could also limit the results to live births, like this:
full_term_weight = birth_weight[live & fullterm]
But in this case we get the same result because birth_weight is only valid for live births.
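Here is the filtering pattern on its own, with made-up weights (not the NSFG data):

```python
import pandas as pd

weights = pd.Series([5.5, 7.2, 8.1, 6.9])
heavy = (weights > 7)

# The bracket operator keeps only the rows where the condition is True.
print(weights[heavy].tolist())  # [7.2, 8.1]
```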
Exercise: Let’s see if there is a difference in weight between single births and multiple births (twins, triplets, etc.). The column NBRNALIV represents the number of babies born alive from a single
nbrnaliv = nsfg['NBRNALIV']
1.0 6573
2.0 111
3.0 6
Name: count, dtype: int64
Use nbrnaliv and live to create a Boolean series called multiple that is true for multiple live births. Of all live births, what fraction are multiple births?
Exercise: Make a Boolean series called single that is true for single live births. Of all single births, what fraction are preterm? Of all multiple births, what fraction are preterm?
Exercise: What is the average birth weight for live, single, full-term births?
Weighted Means#
We are almost done, but there's one more problem we have to solve: oversampling. The NSFG sample is not exactly representative of the U.S. population. By design, some groups are more likely to appear in the sample than others – that is, they are oversampled. Oversampling helps to ensure that you have enough people in every group to get reliable statistics, but it makes data analysis a little more complicated.
Each pregnancy in the dataset has a sampling weight that indicates how many pregnancies it represents. In nsfg, the sampling weight is stored in a column named WGT2015_2017. Here's what it looks like.
sampling_weight = nsfg['WGT2015_2017']
sampling_weight.describe()
count 9553.000000
mean 13337.425944
std 16138.878271
min 1924.916000
25% 4575.221221
50% 7292.490835
75% 15724.902673
max 106774.400000
Name: WGT2015_2017, dtype: float64
The median value (50th percentile) in this column is about 7292, which means that a pregnancy with that weight represents 7292 total pregnancies in the population. But the range of values is wide, so
some rows represent many more pregnancies than others.
To take these weights into account, we can compute a weighted mean. Here are the steps:
1. Multiply the birth weights for each pregnancy by the sampling weights and add up the products.
2. Add up the sampling weights.
3. Divide the first sum by the second.
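With made-up numbers (not the NSFG data), the three steps look like this:

```python
import pandas as pd

values = pd.Series([6.0, 7.0, 8.0])        # e.g. birth weights
weights = pd.Series([1000, 1000, 2000])    # e.g. sampling weights

# Steps 1 and 2 are the two sums; step 3 is their ratio.
weighted_mean = (values * weights).sum() / weights.sum()
print(weighted_mean)  # 7.25
```

The heaviest value counts twice as much as the others, so the weighted mean (7.25) is higher than the unweighted mean (7.0).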
To do this correctly, we have to be careful with missing data. To help with that, we’ll use two Series methods, isna and notna. isna returns a Boolean Series that is True where the corresponding
value is NaN.
missing = birth_weight.isna()
In birth_weight there are 3013 missing values (mostly for pregnancies that did not end in live birth). notna returns a Boolean Series that is True where the corresponding value is not NaN.
valid = birth_weight.notna()
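As a quick check with a toy Series, isna and notna are complements:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

print(s.isna().tolist())   # [False, True, False]
print(s.notna().tolist())  # [True, False, True]
```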
We can combine valid with the other Boolean Series we have computed to identify single, full term, live births with valid birth weights.
single = (nbrnaliv == 1)
selected = valid & live & single & fullterm
You can finish off this computation as an exercise.
Exercise: Use selected, birth_weight, and sampling_weight to compute the weighted mean of birth weight for live, single, full term births. You should find that the weighted mean is a little higher
than the unweighted mean we computed in the previous section. That’s because the groups that are oversampled in the NSFG tend to have lighter babies, on average.
Making an Extract#
The NSFG dataset is large, and reading a fixed-width file is relatively slow. So now that we’ve read it, let’s save a smaller version in a more efficient format. When we come back to this dataset in
Chapter 13, here are the columns we’ll need.
variables = ['CASEID', 'OUTCOME', 'BIRTHWGT_LB1', 'BIRTHWGT_OZ1',
'PRGLNGTH', 'NBRNALIV', 'AGECON', 'AGEPREG', 'BIRTHORD',
'HPAGELB', 'WGT2015_2017']
And here’s how we can select just those columns from the DataFrame.
subset = nsfg[variables]
DataFrame provides several methods for writing data to a file – the one we’ll use is to_hdf, which creates an HDF file. The parameters are the name of the new file, the name of the object we’re
storing in the file, and the compression level, which determines how effectively the data are compressed.
filename = 'nsfg.hdf'
subset.to_hdf(filename, key='nsfg', complevel=6)
The result is much smaller than the original fixed-width file, and faster to read. We can read it back like this.
nsfg = pd.read_hdf(filename, key='nsfg')
Summary#
This chapter poses what seems like a simple question: what is the average birth weight of babies in the United States?
To answer it, we found an appropriate dataset and downloaded the files. We used Pandas to read the files and create a DataFrame. Then we validated the data and dealt with special values and missing
data. To explore the data, we used value_counts, hist, describe, and other methods. And to select relevant data, we used Boolean Series objects.
Along the way, we had to think more about the question. What do we mean by “average”, and which babies should we include? Should we include all live births or exclude preterm babies or multiple births?
And we had to think about the sampling process. By design, the NSFG respondents are not representative of the U.S. population, but we can use sampling weights to correct for this effect.
Even a simple question can be a challenging data science project.
A note on vocabulary: In a dataset like the one we used in this chapter, we could say that each column represents a “variable”, and what we called column names might also be called variable names. I
avoided that use of the term because it might be confusing to say that we select a “variable” from a DataFrame and assign it to a Python variable. But you might see this use of the term elsewhere, so
I thought I would mention it.
NCERT Solutions for Class 11 Maths Chapter 14 Exercise 14.2 - Access free PDF
A day full of math games & activities. Find one near you.
NCERT Solutions Class 11 Maths Chapter 14 Exercise 14.2 Mathematical Reasoning
NCERT Solutions for Class 11 Maths Chapter 14 Exercise 14.2 Mathematical Reasoning is based on compound statements. A compound statement is a statement that is made up of two or more statements. Apart from this, the topic of deducing new statements from old ones and negating statements is also included. NCERT Solutions Class 11 Maths Chapter 14 Exercise 14.2 comprises questions, sample problems, examples, and illustrations that efficiently explain this topic.
There are three questions in this exercise that are simply based on writing negations and compound statements of the given statements. With practice of these questions, students can easily learn the logic to deduce compound statements from two statements. Such skills are extremely beneficial in solving problems based on logical connectives that are usually asked in various competitive exams. Class 11 Maths NCERT Solutions Chapter 14 Exercise 14.2 is also available in a scrollable PDF format that students can download by clicking on the links below.
☛ Download NCERT Solutions Class 11 Maths Chapter 14 Exercise 14.2
Exercise 14.2 Class 11 Chapter 14
More Exercises in Class 11 Maths Chapter 14
NCERT Solutions Class 11 Maths Chapter 14 Exercise 14.2 Tips
NCERT Solutions for Class 11 Maths Chapter 14 Exercise 14.2 are prepared by math experts in a simple and understandable format to promote efficient math learning. Solving the sample problems and illustrations provided in these solutions is pretty beneficial for kids to master this topic. As it is a fundamental subject matter for related math topics, students should focus more on clearing doubts and forming logic.
Kids should carefully read all the definitions, terms, and notes provided in these solutions to build a firm knowledge of the important concepts. Solving the sample papers along with the sums provided in Class 11 Maths NCERT Solutions Chapter 14 Exercise 14.2 will help students understand the pattern of questions asked in the annual exam. This is also an efficient way to obtain excellent marks in exams.
Download Cuemath NCERT Solutions PDF for free and start learning!
Math worksheets and
visual curriculum
Non-cooperative game theory
In game theory, a non-cooperative game is a game with competition between individual players and in which only self-enforcing (e.g. through credible threats) alliances (or competition between groups
of players, called "coalitions") are possible due to the absence of external means to enforce cooperative behavior (e.g. contract law), as opposed to cooperative games.
Non-cooperative games are generally analysed through the framework of non-cooperative game theory, which tries to predict players' individual strategies and payoffs and to find Nash equilibria.^[1]^
[2] It is opposed to cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs, and does not analyze the strategic bargaining that occurs within each coalition and affects the distribution of payoffs between members of the same coalition.
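To make the idea concrete, here is a minimal sketch of finding pure-strategy Nash equilibria by brute force. The payoffs are the classic Prisoner's Dilemma (an illustration, not part of the article above): each cell gives (row player, column player) payoffs, and a profile is an equilibrium when neither player can gain by deviating unilaterally.

```python
# Payoffs for cooperate ('C') and defect ('D').
payoffs = {
    ('C', 'C'): (-1, -1),
    ('C', 'D'): (-3, 0),
    ('D', 'C'): (0, -3),
    ('D', 'D'): (-2, -2),
}
actions = ['C', 'D']

def is_nash(a, b):
    u1, u2 = payoffs[(a, b)]
    # Neither the row player nor the column player can do strictly better
    # by switching actions while the other player's action stays fixed.
    row_ok = all(payoffs[(a2, b)][0] <= u1 for a2 in actions)
    col_ok = all(payoffs[(a, b2)][1] <= u2 for b2 in actions)
    return row_ok and col_ok

equilibria = [(a, b) for a in actions for b in actions if is_nash(a, b)]
print(equilibria)  # [('D', 'D')] -- mutual defection is the only equilibrium
```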
Non-cooperative game theory provides a low-level approach as it models all the procedural details of the game, whereas cooperative game theory only describes the structure, strategies and payoffs of
coalitions. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient
assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation. While it would thus be optimal to have all games
expressed under a non-cooperative framework, in many instances insufficient information is available to accurately model the formal procedures available to the players during the strategic bargaining
process, or the resulting model would be too complex to offer a practical tool in the real world. In such cases, cooperative game theory provides a simplified approach that makes it possible to analyze the game at large without having to make any assumptions about bargaining power.
This article is issued from a version dated 10/6/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
Hiding Assessment | Assessing Math Concepts | Math Perspectives
Hiding Assessment
Some of our first grade teachers are tracking their students on the Hiding Assessment. They set goals for where they would like to see students in a month's time... and then choose activities from
the Developing Number Concepts books to help guide student development.
At the end of four weeks, it is almost routine that they won't meet their goal, and will then wonder what they've done wrong, or which activities they should be doing instead, etc. I'm wondering if we are just asking too much of their students. This is where I thought it would be useful to know what numbers a "typical" first grader is able to "know" combinations for at the A level, the P level, and so on.
The results of the Hiding Assessment are often a shock to all who give it. First Graders typically know parts to 6 by the end of the year. A few may know parts of 7. Once children are Ready to Apply
for 6 or 7, they can usually get Ps for the rest of the numbers.
Some children may learn parts of 10 before they know 7, 8 and 9.
I tried to allow for showing stages of growth by including minuses and pluses in addition to A, P, I and N.
If teachers feel they are not seeing progress, they may want to take notes while they are giving the assessment.
They may be able to see smaller steps of growth than is shown by recording only the instructional level. For example, a child may only know 4 and 1 at first and when reassessed know 1 and 4. Or they
may just be more confident when figuring out a missing part.
Below are descriptions of the instructional levels in case they can also be of help to you.
6. Hiding Assessment
Part One
(A) Ready to Apply
Knows all quickly, no errors
Students are ready to apply if they know all the parts of numbers to 10 with automaticity with counters.
(P) Needs Practice
Students need practice if they know some parts, count on, or use relationships for parts they don't know.
(P+) Knows all but 1 quickly, no errors, no counting all (may count on or back or use relationships for one combination)
(P) Figures out two or more, may have one error, may not count all
(P-) May have one error, counts all for up to half of the combinations
(I) Needs Instruction
Children need instruction if they often make errors, or if they must "count on" or "count all" most of the time. May have two errors, counts all for more than half of the combinations
(N) Needs Prerequisite
Three or more errors or guesses.
One clarifying question: Our data reveals a great disparity between what the students can do with a model vs. what they can do without a model. Is that also typical? I am looking at one class, for
example, in which a majority of the students are "A" with 5 and 6, using models. However, when assessed without models, only two students reached the "A" level. When you responded earlier to what a
typical first grader knows, were you speaking with or without models?
Thanks again.
Children in first grade typically need models in order to think about the parts so they usually are lower on Part Two than Part One. This becomes less of an issue for most second graders.
First grade teachers need to help the kids move to this level by asking them "what if" questions once in a while.. What if you had 4 cookies and you gave me 2, how many would you have left? They
should do this with smaller numbers that the children know well with models.
If this is still an issue with 2nd graders, direct teachers to the tasks where the kids are asked to "pretend": Developing Number Concepts Book 2, 3-6 through 3-12.
I administered a Hiding Assessment yesterday (using AMCAnywhere web-version) and have a question.... I wanted to assess starting at five and no matter what happened, I wanted to assess ten.
Typically, kids can do ten before 7, 8, and 9. Also, in the new Framework, sums to ten will be an expectation. I was unable to skip around within one assessment session and when I did ten by
reassessing, the other data disappears from some reports. I am pretty sure that there is only one report where I can retrieve the information.
The Assessing Math Concepts Assessments are intended to help teachers determine the instructional needs of their students. The Hiding Assessment is based on the idea that when a child knows the parts
of numbers so well that they can immediately identify the missing parts, they essentially know the "basic facts" for that range of numbers. That means the assessment can be used to determine what
addition and subtraction "facts" the students knows and what facts they still need to learn. In order for the assessment to really hone in on what students know, we designed the path through the
assessment to identify all the numbers the child is Ready to Apply from the smallest number to the largest number. Children generally learn the smaller numbers before the larger numbers. The
exception to this is the number 10. Students do often learn parts of 10 before they learn parts of 8 and 9. However, 10 is just a small part of the whole picture, so we did not set up the assessment
to assess only that number out of the context of what else the child knows or doesn't know. A child who knows the parts of 10 without knowing parts for 7, 8, or 9 is at a much different level than
one who knows parts of 10 and all the parts for the smaller numbers. This seems to be the important information that would not show up if you skipped those numbers to assess 10.
If you are required to assess whether your students know parts of 10 specifically, you could assess everyone on 10 and end the assessment after getting that information. The class summary report
always shows the last assessment given so if you go on to assess other numbers without including 10, it would not show up on that report. You could however, save that report as a PDF before
reassessing your class on other numbers. You can also retrieve that information on the Student Progress Reports.
Please let me know if you have any additional questions.
I wanted to assess a student on specific numbers: 4, 5, and 10 on Part 1. So I begin with 4 and then go to 5. In order to do 10, I have to end the assessment and then enter again as a new assessment.
I am thinking that the first session isn't being recorded.
Yesterday, I assessed a number of students and the information is not being recorded. I am wondering if it is because of going out of one session and starting another.
There is an explanation for why you are not seeing the session where you assessed children on 4 and 5. The class summary always shows the latest assessment which, in this case, is the assessment on
the number 10. You would have to go to the child's progress report to see both sessions. The only thing I can suggest if you really need to assess every child on the number 10, no matter how they do
on numbers from 6 through 9, is that you assess all the children on 10 first. You could then make PDF copies of the class summaries that show these results. Then you could assess 4 and 5, and because
it would be the latest assessment, those results would show up on the class report.
I think it would be useful for you to know why the Hiding Assessment works the way it does. I designed the assessment so teachers could find out what number combinations a child knows without
counting and what number combinations the child still needs to work on. It is not really intended to find out whether a child knows or does not know any particular number. This is because it is
possible for one student to know 4 and 5 quickly and easily, but not to be able to do 6 at all, and for another child to know not only 4 and 5, but also 6, 7, and 8. These children are at very
different places and need very different instruction.
If you want to set a benchmark for knowing 4 and 5, you can do that and find out which children meet the benchmark and which children do not.
Let me know if you have any further questions about the Hiding Assessment or any others.
The objective of this activity is to be able to apply the key concepts involved in hypothesis testing.
Included in this is understanding and being able to apply the concept of statistical power.
Important definitions
Null hypothesis
the statistical hypothesis that there is no (important) difference between experimental treatment and control in relation to the primary endpoint
Alternative hypothesis
the statistical hypothesis that there is a difference between experimental treatment and control in relation to the primary endpoint (the alternative hypothesis may be directional and hypothesise
that the different is positive or negative, or it may be non-directional and hypothesise merely that there is a difference)
Sampling distribution
is the probability distribution of results expected assuming a particular hypothesis about the effect size is true (e.g. the null hypothesis), all the assumptions associated with the statistical model are true, and the trial is conducted as planned.
Type I error (\(\alpha\))
is the pre-test probability of rejecting the null hypothesis when the null hypothesis is true. It is usually set at 0.05.
Type II error (\(\beta\))
\(\beta\) is the pre-test probability of accepting the null hypothesis when the alternative hypothesis is true.
Power \((1-\beta)\)
is the pre-study probability that the study will produce a statistically significant result for a given sample size and postulated effect size
\(p\) value
a measure of the compatibility of the observed data with the data that would be expected if the null hypothesis were true, when all other statistical and methodological assumptions are met
Confidence interval
is the range of values that is considered more compatible with the observed data assuming the statistical and methodological assumptions of the study are met. A 95% confidence interval provides
the range of values for which a test of an effect size within the range against the observed data would provide a \(p\) value \(> 0.05\).
Standard hypothesis tests are set up in such a way that if the observed data falls into the \(\alpha\) region, then observing such a result is more likely if the alternative hypothesis is true
compared to if the null hypothesis is true. This is an important aspect of the justification for rejecting the null hypothesis and accepting the alternative hypothesis.
Assuming the outcome is a continuous variable that is normally distributed, the following figure illustrates the set-up of a hypothesis test.
A hypothesis test is set up in such a way as to avoid Type I and Type II errors. Type I errors occur when you reject the null hypothesis when the null hypothesis is true. Type II errors occur when
you fail to reject the null hypothesis despite the null hypothesis being false.
One way to avoid type I and type II errors is to ensure the statistical test is adequately powered. A suitable rule of thumb is that \(\beta \approx 0.2\) (which is equivalent to a power of 80%).
In an overpowered test, it may be the case that a result in the \(\alpha\) region is more likely if the null hypothesis is true rather than if the alternative hypothesis is true.
In an underpowered test, it may be the case that a result in the \(\alpha\) region is not much more likely if the alternative hypothesis is true compared if the null hypothesis is true. In this
situation, the result falling in the \(\alpha\) region may not be a reliable indicator that the null is false.
Setting up a hypothesis test
If you are doing a hypothesis test, you need to think about power. The power of the test is considered before collecting any data, but after you have specified your research question and identified
an appropriate statistical test. At this point I am going to assume that you have already selected an appropriate statistical test. You now need to determine how big your study needs to be to be able to reliably answer your research question.
The information you need will depend on the specific statistical test, but here are some general principles. The statistical power of the test depends on:
• Sample size: the larger the sample, the higher the power.
You also need to think about loss to follow-up, drop-outs and non-adherence to the study.
• Estimated effect size: this is the effect size you are trying to detect (perhaps the difference between two means). The smaller the estimated effect size, the lower the power
This might be determined by existing evidence, or it might be the “minimally clinically important difference”.
• Variability in the sample: the more variability in the sample, the lower the power of the test
• Predetermined \(\alpha\), \(\beta\): standard values are \(\alpha = 0.05\), \(\beta = 0.2\), but if you change these, you will change the power of the test (obviously, I hope)
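To make these determinants concrete, here is a rough sketch using the standard normal-approximation formula for comparing two means (the specific numbers are illustrative, not taken from this module):

```python
import math

def sample_size_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    # Approximate per-group n for a two-sample comparison of means:
    # n = 2 * (z_alpha + z_beta)**2 * sigma**2 / delta**2,
    # with z values for two-sided alpha = 0.05 and power = 0.80.
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Halving the effect size roughly quadruples the required sample:
print(sample_size_per_group(delta=5, sigma=10))    # 63 per group
print(sample_size_per_group(delta=2.5, sigma=10))  # 251 per group
```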
The best way to get across this is with examples—which we do later in the module. For now it is enough to appreciate the determinants of statistical power.
How Much Does a 40 Pack of Water Weigh? Weight Check
How much does a 40 pack of water weigh?
This simple question may have crossed your mind while grocery shopping, planning a camping trip, or stocking up emergency supplies.
Water might seem straightforward, but pack sizes can throw in a curveball.
You'll soon be the go-to person for all water-weight related queries among your friends.
Ready for a splash of knowledge? Dive in!
Understanding Water Weight
As one of the fundamental elements of life, water is a substance we interact with daily. However, have you ever wondered about its weight?
In our daily lives, we may overlook how the weight of water plays a crucial role in everything from filling our bottles to understanding the effort required to carry a 40-pack of water. Let's dive
into understanding the weight of water!
The Density of Water
To truly grasp the weight of water, we first need to comprehend its density. Density is a measure of mass per unit of volume. Remarkably, the density of water remains constant at approximately 1 gram
per milliliter (g/ml) or equivalently 1 kilogram per liter (kg/l). This holds true whether you're measuring a single drop or an entire ocean!
This constancy is due to water's unique properties, specifically the way its molecules are packed. Each water molecule (comprising two hydrogen atoms and one oxygen atom, thus H2O) occupies a fixed
amount of space. Consequently, no matter the volume of water, the density remains unchanged.
Let's break this down. If you have 1 liter of water, based on its density, it will weigh approximately 1 kilogram. It doesn't matter if this liter is in a bucket, a fish tank, or a water bottle; the
weight remains the same. It's this reliability that makes water an ideal reference in many scientific and practical applications.
The Weight of a Single Unit of Water
Now that we've established the principle of water's density, we can use it to determine the weight of a standard unit of water. Let's consider a commonly used volume measurement: the gallon.
A US gallon holds approximately 3.785 liters of liquid. Therefore, using the density we discussed above, a gallon of water weighs about 3.785 kilograms or roughly 8.34 pounds.
We can apply the same principle to different units, as seen in the following table:
Volume Unit Volume (in liters) Weight (in kilograms) Weight (in pounds)
Gallon (US) 3.785 3.785 8.34
Liter 1 1 2.205
Ounce (US Fluid) 0.02957 0.02957 0.065
Bottle (500ml) 0.5 0.5 1.102
As the table illustrates, the weight of water in different units all circles back to the unchanging density of water.
Understanding Water Bottle Sizes
To carry our understanding of water weight forward, let's consider a practical real-world example that you're likely to encounter: water bottles.
Common Water Bottle Sizes
Water bottles come in a variety of sizes, suited for different needs and preferences. Some common water bottle sizes include:
• Small: These are typically around 8 ounces (0.236 liters), weighing approximately 0.236 kilograms or 0.52 pounds.
• Medium: Often seen in vending machines or convenience stores, these are usually 16.9 ounces (0.5 liters), weighing about 0.5 kilograms or 1.1 pounds.
• Large: These larger bottles can hold around 33.8 ounces (1 liter), weighing approximately 1 kilogram or 2.2 pounds.
Each size has its purpose, with smaller bottles being easy to carry on a jog and larger bottles being ideal for keeping hydrated throughout a workday.
A Closer Look at a 40-Pack of Water
Let's now tackle the question at hand: How much does a 40-pack of water weigh? For this example, we'll consider medium-sized water bottles, which are most commonly found in multipacks.
As we established above, a 500 ml (16.9 oz) water bottle weighs around 0.5 kilograms (1.1 pounds). So, if you have a 40-pack of these bottles, you simply multiply the weight of one bottle by 40.
0.5 kilograms/bottle x 40 bottles = 20 kilograms or 1.1 pounds/bottle x 40 bottles = 44 pounds
So, a 40-pack of medium-sized water bottles weighs around 20 kilograms or 44 pounds. This weight is substantial, equivalent to lifting a medium-sized dog or carrying a couple of bags of groceries!
Also learn: How Much Does a 5 Gallon Bottle of Water Weigh
Determining the Weight of a Single Water Bottle
As we've delved into the weight of water, an essential aspect of our discussion is understanding how a single water bottle factors into this equation. After all, it's not just the water itself that
adds weight, but the bottle too.
Average Weight of a Water Bottle
While it's true that the water inside a bottle accounts for most of the weight, the bottle itself is not weightless. For instance, the typical weight of a 500ml (16.9 oz) empty plastic water bottle
is about 12.7 grams or roughly 0.03 pounds.
When filled, the weight of the water (0.5 kilograms or 1.1 pounds) plus the weight of the bottle (0.0127 kilograms or 0.03 pounds) equals approximately 0.5127 kilograms or 1.13 pounds. So, while the
bottle's weight might seem insignificant next to the water, it still contributes to the total weight you're carrying.
Variations in Weight Due to Bottle Material
Remember, though, that the material of the bottle can significantly impact its weight. For example, glass bottles are heavier than plastic ones. A 500ml glass bottle can weigh up to 400 grams (0.88
pounds) when empty, making it significantly heavier than its plastic counterpart.
Know more: How Big is a Water Bottle in Inches
The Weight of a 40-Pack of Water
Now, let's revisit our original query: the weight of a 40-pack of water. By this point, we have a solid foundation to approach this calculation.
Calculating the Weight of a 40-Pack of Water
We've established that a filled 500ml plastic water bottle weighs around 0.5127 kilograms (1.13 pounds). Therefore, to calculate the weight of a 40-pack of such bottles, we'll multiply this weight by 40:
0.5127 kilograms/bottle x 40 bottles = 20.508 kilograms
1.13 pounds/bottle x 40 bottles = 45.2 pounds
Considerations for Different Bottle Sizes
Of course, this calculation is based on a medium-sized 500ml water bottle. If you were dealing with a 40-pack of 1L water bottles, the weight would roughly double, since each bottle holds twice the
volume of water. Likewise, a 40-pack of smaller 250ml bottles would weigh about half as much.
Additional Weight Considerations
Another factor to consider is the weight of the packaging that holds the water bottles together. A plastic wrap or cardboard box could add a few additional pounds to the total weight.
Moreover, if the water bottles are made from heavier material like glass, the total weight will increase substantially.
As a summary, here is a detailed table for the weight of a 40-pack of water, considering 500ml plastic bottles:
Pack Dimension Volume (40 x 500ml) Weight of Water (in kilograms) Weight of Bottles (in kilograms) Total Weight (in kilograms) Total Weight (in pounds)
40-Pack 20 liters 20 0.508 20.508 45.2
This breakdown emphasizes the factors contributing to the weight of a 40-pack of water. It's not just about the water – every element, from the bottle to the packaging, plays a part in determining
the total weight that you'll be carrying.
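The breakdown above is easy to reproduce yourself. Here is a quick sketch in Python, using the bottle weights assumed in this article (`pack_weight_kg` is just an illustrative helper name):

```python
KG_TO_LB = 2.20462

def pack_weight_kg(bottles, volume_l, bottle_kg):
    """Total pack weight: the water (about 1 kg per liter) plus the empty bottles."""
    water_kg = bottles * volume_l * 1.0   # density of water ~= 1 kg/l
    return water_kg + bottles * bottle_kg

# 40-pack of 500 ml plastic bottles, each empty bottle weighing ~12.7 g
plastic = pack_weight_kg(40, 0.5, 0.0127)
print(round(plastic, 3), "kg =", round(plastic * KG_TO_LB, 1), "lb")  # 20.508 kg = 45.2 lb

# The same pack in glass (empty bottle ~400 g) is much heavier
glass = pack_weight_kg(40, 0.5, 0.400)
print(round(glass, 1), "kg")  # 36.0 kg
```

Swapping in a different bottle count, volume, or empty-bottle weight reproduces the other figures quoted in this article.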
Converting Water Weight to Pounds or Kilograms
Converting between different units of measurement, such as pounds and kilograms, can be a handy tool, especially when dealing with international standards. Fortunately, the conversion between these
two units is straightforward: 1 kilogram is equivalent to approximately 2.20462 pounds. Conversely, 1 pound is about 0.453592 kilograms.
Here's a quick reference table to help you convert water weights between kilograms and pounds:
Water Weight in Liters Weight in Kilograms Weight in Pounds
1 liter (1,000ml) 1 kg 2.20462 lbs
0.5 liters (500ml) 0.5 kg 1.10231 lbs
0.25 liters (250ml) 0.25 kg 0.551155 lbs
This table demonstrates that the weight of water remains consistent, irrespective of the unit of measurement used.
Read more: How Tall is a Water Bottle in Inches
Practical Examples of 40-Pack Water Weights
Understanding water weight becomes even more interesting when we put it into a practical context. Let's examine some real-life examples:
Plastic Bottles
As we've determined earlier, a 40-pack of 500ml plastic water bottles would weigh approximately 20.508 kilograms or 45.2 pounds.
Glass Bottles
Glass bottles are significantly heavier than plastic ones. If we consider that a 500ml glass bottle could weigh up to 400 grams when empty, the total weight of a 40-pack would be drastically
different. Filled, each glass bottle would weigh about 0.9 kilograms or nearly 2 pounds. Therefore, a 40-pack of glass bottles would weigh approximately 36 kilograms or 79.36 pounds.
Different Bottle Sizes
A 40-pack of 1L bottles would weigh around double compared to a 40-pack of 500ml bottles. For example, in the case of plastic bottles, a 40-pack of 1L would weigh around 41.016 kilograms or 90.4
pounds. On the other hand, a 40-pack of 250ml plastic bottles would weigh around half or approximately 10.254 kilograms or 22.6 pounds.
Here's a summary table of these practical examples:
Bottle Material Bottle Size 40-Pack Weight in Kilograms 40-Pack Weight in Pounds
Plastic 500ml 20.508 kg 45.2 lbs
Glass 500ml 36 kg 79.36 lbs
Plastic 1L 41.016 kg 90.4 lbs
Plastic 250ml 10.254 kg 22.6 lbs
Applications and Importance of Knowing the Weight
Being aware of water weight has real-world implications. It can affect how we manage physical tasks, transportation, and shipping.
Physical Considerations
Imagine you're tasked with carrying a 40-pack of water from your car to your home. Without understanding the weight, you might strain yourself or even risk injury. Knowing the weight in advance
allows you to gauge if it's a one-person job or if you'll need assistance.
Transportation and Shipping
Weight is also a critical factor in transportation and shipping. Shipping companies charge based on weight, so knowing how much a 40-pack of water weighs could impact your shipping costs.
Additionally, weight limits for vehicles and planes also factor in, meaning that a miscalculation could have serious implications.
Considerations When Carrying or Transporting 40-Packs of Water
When it comes to handling large quantities of water, such as a 40-pack, some considerations can make the task easier and safer.
Weight Distribution
A crucial factor to consider is weight distribution. When you're carrying a 40-pack of water, it's essential that the weight is evenly distributed. This can prevent the packaging from tearing and
reduce the strain on your body. If you're loading a vehicle, you'll want to distribute the weight evenly to maintain the vehicle's balance and stability.
Handling and Lifting Techniques
Proper handling and lifting techniques are vital to avoid injuries. When lifting a 40-pack of water, use your legs and not your back to prevent strain. Keep the pack close to your body, and avoid
twisting while carrying it. When loading a vehicle, be mindful of the height from which you're lifting and whether you might need a ramp or other equipment.
FAQs About Water Pack Weight
How much does a pallet of 40 pack water weigh?
A pallet of 40-pack water typically includes 48 cases. Assuming each bottle holds 500ml, the total weight can be approximately 960 kg (2116 lbs), not considering the pallet or packaging weight.
How much does a 24 pack of water weigh?
A 24-pack of standard 16.9-ounce water bottles typically weighs approximately 27.69 pounds, considering the weight of the water itself, the plastic bottles, and the packaging. Learn how to calculate
the weight of a 24-pack of water here.
How heavy is a 40 pack of water at Costco?
A 40-pack of water at Costco typically consists of 16.9 oz (500 ml) bottles. Therefore, it would weigh about 20 kg or 44 lbs, not including the packaging.
How much does a 35 pack of waters weigh?
A 35 pack of 500ml waters holds 17.5 liters of water. So, it would weigh around 17.5 kg or 38.58 lbs, excluding the packaging weight.
How much does a bag of 40 water bottles weigh?
A bag of 40 water bottles (assuming each bottle is 500ml) contains 20 liters of water. This means it would weigh roughly 20 kg or 44 lbs, without considering the bag's weight.
How heavy is a full pallet of water?
A full pallet of water typically contains 60 to 80 cases. Given each case holds 24 bottles of 500 ml each, the total weight would range from 720 kg to 960 kg (1587 lbs to 2116 lbs), excluding the
pallet's weight.
Final Thoughts
In this article, we dove into the fascinating world of water weights, and we found out just how much a 40-pack of water might weigh. The answer is, of course, it depends! The weight of a 40-pack can
vary depending on the size of the individual bottles and the materials used to manufacture them.
However, based on the density of water, which is 1 kilogram per liter, a 40-pack of 500ml plastic water bottles would weigh approximately 20 kilograms, or 44 pounds. If you're dealing with
glass bottles, the weight would increase due to the heavier container, whereas smaller bottles would reduce the overall weight.
Understanding these weights isn't just a scientific exercise. It has practical implications for carrying, transportation, and shipping. Proper handling and lifting techniques, combined with a good
understanding of weight distribution, can help you manage these tasks more effectively and safely.
Ovi Tanchangya
Hey there, fellow explorers! This is Ovi Tanchangya, passionate blogger and avid outdoorsman. I want to share my thoughts about my past outdoor experiences, and of course, I will continue to do so.
The past is very practical and can't be forgotten. I don't know which is unique about camping, but I can't forget the campfire smoke and the smell of the camp foods. When I am in mechanical society,
I try to recall my memories by watching various camp videos and listening to the sound of the forest raining. And this is me. | {"url":"https://theoutdoorinsider.com/hiking/hydration-and-beverages/how-much-does-a-40-pack-of-water-weigh/","timestamp":"2024-11-11T18:18:59Z","content_type":"text/html","content_length":"205606","record_id":"<urn:uuid:5d47ed2c-e272-466b-8543-f90a49cd0a4f>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00089.warc.gz"} |
Binomial Distribution
Binomial Probability Distribution
To understand binomial distributions and binomial probability, it helps to understand binomial experiments and some associated notation; so we cover those topics first.
Binomial Experiment
A binomial experiment is a statistical experiment that has the following properties:
• The experiment consists of n repeated trials.
• Each trial can result in just two possible outcomes. We call one of these outcomes a success and the other, a failure.
• The probability of success, denoted by P, is the same on every trial.
• The trials are independent; that is, the outcome on one trial does not affect the outcome on other trials.
Consider the following statistical experiment. You flip a coin 2 times and count the number of times the coin lands on heads. This is a binomial experiment because:
• The experiment consists of repeated trials. We flip a coin 2 times.
• Each trial can result in just two possible outcomes - heads or tails.
• The probability of success is constant - 0.5 on every trial.
• The trials are independent; that is, getting heads on one trial does not affect whether we get heads on other trials.
The following notation is helpful, when we talk about binomial probability.
• x: The number of successes that result from the binomial experiment.
• n: The number of trials in the binomial experiment.
• P: The probability of success on an individual trial.
• Q: The probability of failure on an individual trial. (This is equal to 1 - P.)
• n!: The factorial of n (also known as n factorial).
• b(x; n, P): Binomial probability - the probability that an n-trial binomial experiment results in exactly x successes, when the probability of success on an individual trial is P.
• nCr: The number of combinations of n things, taken r at a time.
Binomial Distribution
A binomial random variable is the number of successes x in n repeated trials of a binomial experiment. The probability distribution of a binomial random variable is called a binomial distribution.
Suppose we flip a coin two times and count the number of heads (successes). The binomial random variable is the number of heads, which can take on values of 0, 1, or 2. The binomial distribution is
presented below.
Number of heads Probability
0 0.25
1 0.50
2 0.25
The binomial distribution has the following properties:
• The mean of the distribution (μ) is equal to n * P.
• The variance (σ^2) is n * P * ( 1 - P ).
• The standard deviation (σ) is sqrt[ n * P * ( 1 - P ) ].
Binomial Formula and Binomial Probability
The binomial probability refers to the probability that a binomial experiment results in exactly x successes. For example, in the above table, we see that the binomial probability of getting exactly
one head in two coin flips is 0.50.
Given x, n, and P, we can compute the binomial probability based on the binomial formula:
Binomial Formula. Suppose a binomial experiment consists of n trials and results in x successes. If the probability of success on an individual trial is P, then the binomial probability is:
b(x; n, P) = nCx * P^x * (1 - P)^(n - x)
b(x; n, P) = { n! / [ x! (n - x)! ] } * P^x * (1 - P)^(n - x)
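The formula translates directly into code. Here is a short Python sketch using only the standard library (`binom_pmf` is just an illustrative name):

```python
from math import comb

def binom_pmf(x, n, p):
    """b(x; n, P) = nCx * P^x * (1 - P)^(n - x)"""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Reproduces the coin-flip table above: P(exactly 1 head in 2 flips)
print(binom_pmf(1, 2, 0.5))  # 0.5
```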
Example 1
Suppose a die is tossed 5 times. What is the probability of getting exactly 2 fours?
Solution: This is a binomial experiment in which the number of trials is equal to 5, the number of successes is equal to 2, and the probability of success on a single trial is 1/6 or about 0.167.
Therefore, the binomial probability is:
b(2; 5, 0.167) = 5C2 * (0.167)^2 * (0.833)^3
b(2; 5, 0.167) = 10 * (0.167)^2 * (0.833)^3
b(2; 5, 0.167) = 0.161
Example 2
What is the probability that the world series will last 4 games? 5 games? 6 games? 7 games? Assume that the teams are evenly matched.
Solution: The solution to this problem requires a creative application of the binomial formula. If you can follow the logic of this solution, you have a good understanding of the material covered in
the tutorial, to this point.
In the world series, there are two baseball teams. The series ends when the winning team wins 4 games. Therefore, we define a success as a win by the team that ultimately becomes the world series winner.
For the purpose of this analysis, we assume that the teams are evenly matched. Therefore, the probability that a particular team wins a particular game is 0.5.
Let's look first at the simplest case. What is the probability that the series lasts only 4 games? This can occur if one team wins the first 4 games. The probability of the National League team
winning 4 games in a row is:
b(4; 4, 0.5) = 4C4 * (0.5)^4 * (0.5)^0 = 0.0625
Similarly, when we compute the probability of the American League team winning 4 games in a row, we find that it is also 0.0625. Therefore, the probability that the series ends in four games would be
0.0625 + 0.0625 = 0.125, since the series would end if either the American or National League team won 4 games in a row.
Now let's tackle the question of finding the probability that the world series ends in 5 games. The trick in finding this solution is to recognize that the series can only end in 5 games if one team has
won 3 out of the first 4 games. So let's first find the probability that the American League team wins exactly 3 of the first 4 games.
b(3; 4, 0.5) = 4C3 * (0.5)^3 * (0.5)^1 = 0.25
Okay, here comes some more tricky stuff, so listen up. Given that the American League team has won 3 of the first 4 games, the American League team has a 50/50 chance of winning the fifth game to end
the series. Therefore, the probability of the American League team winning the series in 5 games is 0.25 * 0.50 = 0.125. Since the National League team could also win the series in 5 games, the
probability that the series ends in 5 games would be 0.125 + 0.125 = 0.25.
The rest of the problem would be solved in the same way. You should find that the probability of the series ending in 6 games is 0.3125; and the probability of the series ending in 7 games is also 0.3125.
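The whole computation can be checked in a few lines of Python (a sketch assuming evenly matched teams, as above; the function name is illustrative):

```python
from math import comb

def series_length_prob(k):
    """P(series ends in exactly k games), assuming evenly matched teams:
    the eventual winner takes 3 of the first k-1 games (each won with
    probability 0.5), then wins game k; doubled because either team can win."""
    p = 0.5
    return 2 * comb(k - 1, 3) * p**3 * (1 - p)**(k - 4) * p

for k in (4, 5, 6, 7):
    print(k, series_length_prob(k))  # 0.125, 0.25, 0.3125, 0.3125
```

The four probabilities sum to 1, as they must, since an evenly matched series always ends in 4 to 7 games.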
Cumulative Binomial Probability
A cumulative binomial probability refers to the probability that the binomial random variable falls within a specified range (e.g., is greater than or equal to a stated lower limit and less than or
equal to a stated upper limit).
To compute a cumulative binomial probability, we find the sum of relevant individual binomial probabilities, as illustrated in the examples below.
Example 3
The probability that a student is accepted to a prestigious college is 0.3. If 5 students from the same school apply, what is the probability that at most 2 are accepted?
Solution: To solve this problem, we compute 3 individual probabilities, using the binomial formula. The sum of all these probabilities is the answer we seek. Thus,
b(x ≤ 2; 5, 0.3) = b(x = 0; 5, 0.3) + b(x = 1; 5, 0.3) + b(x = 2; 5, 0.3)
b(x ≤ 2; 5, 0.3) = 0.1681 + 0.3601 + 0.3087
b(x ≤ 2; 5, 0.3) = 0.8369
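The same summation is easy to automate. A Python sketch using only the standard library (`binom_cdf` is an illustrative helper name):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k): the sum of the individual binomial probabilities."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k + 1))

print(round(binom_cdf(2, 5, 0.3), 4))  # 0.8369, matching Example 3
```

The same helper handles the larger examples below, where summing dozens of terms by hand would be tedious.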
Example 4
What is the probability of obtaining 45 or fewer heads in 100 tosses of a coin?
Solution: To solve this problem, we compute 46 individual binomial probabilities, using the binomial formula. The sum of all these binomial probabilities is the answer we seek. Thus,
b(x ≤ 45; 100, 0.5) = b(x = 0; 100, 0.5) + b(x = 1; 100, 0.5) + . . . + b(x = 44; 100, 0.5) + b(x = 45; 100, 0.5)
b(x ≤ 45; 100, 0.5) = 0.184
Binomial Calculator
As you may have noticed, the binomial formula requires many time-consuming computations. The Binomial Calculator can do this work for you - quickly, easily, and error-free. Use the Binomial
Calculator to compute binomial probabilities and cumulative binomial probabilities. The calculator is free. It can be found in the Stat Trek main menu under the Stat Tools tab.
Example 5
Suppose it were possible to take a simple random sample of 120 newborns. Find the probability that no more than 40% will be boys. Assume equal probabilities for the births of boys and girls.
Solution: We know that 40% of 120 is 48. Therefore, we want to know the probability that a random sample of 120 newborns will include no more than 48 boys. The solution to this problem requires that
we compute the following cumulative binomial probability.
b(x ≤ 48; 120, 0.5) = b(x = 0; 120, 0.5) + b(x = 1; 120, 0.5) + ... + b(x = 48; 120, 0.5)
b(x ≤ 48; 120, 0.5) = 0.0 + 0.0 + ... + 0.00662
b(x ≤ 48; 120, 0.5) = 0.01766
Note: Finding this cumulative binomial probability requires computing 49 individual binomial probabilities. It can be done by hand, but it is much easier to use the Binomial Calculator.
Ernie's 3D Pancakes
Here are a couple more computational geometry highlights from yesterday's FOCS schedule.
• Ankur Moitra and Tom Leighton solved the greedy embedding conjecture for 3-connected planar graphs, posed by Papadimitriou and Ratajczak in 2004. A path in the plane is distance-decreasing if the
distance to one endpoint is monotonically decreasing as we walk along the path from the other endpoint. An embedding of a graph in the plane is greedy if every pair of nodes in the graph is
connected by a distance-decreasing path. Papadimitriou and Ratajczak conjectured that every 3-connected planar graph has a greedy embedding. 3-connected planar graphs have tons of lovely
geometric properties via theorems of Cauchy, Tutte, Steinitz, Koebe-Andreev-Thurston, and others, none of which proved useful in Moitra and Leighton's solution! Instead, they argue that any
3-connected planar graph (in fact, any circuit graph) has a spanning Christmas cactus, a subgraph in which every edge belongs to at most one cycle and deleting any vertex leaves at most two
components, and then show how to greedily embed any Christmas cactus.
• Tasos Sidiropoulos described inapproximability results for metric embeddings into R^d, which he developed with Jiri Matousek. Given an n-point metric space M and a target dimension d, it is
natural to ask for the minimum-distortion embedding of M into Euclidean d-space. Unfortunately, even for the case d=1, approximating the minimum distortion to small polynomial factors is NP-hard.
Tasos and Jirka bootstrapped this known result to arbitrary dimensions, using a lovely product construction. If M is a bad example for embeddng into the line, then MxS is a bad example for
embedding into R^d, where S is a sufficiently dense net on the (d-1)-sphere. Each copy of S must be embedded on an approximate sphere. A clever cohomology argument (hooray for Poincaré-Alexander
duality!) then implies that these quasi-spheres are properly nested, which implies that the distortion is at least as bad as the distortion of M into the line. This result implies that random
projection not only gives near-optimal worst-case distortion, but it also gives a nearly optimal approximation of the distortion for any metric space.
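To make the definitions in the first item concrete: an embedding is greedy exactly when, for every target t, every other vertex has a neighbor strictly closer to t, since that closer neighbor is the next hop of a distance-decreasing path. A small illustrative Python checker (not from the paper):

```python
import math

def is_greedy_embedding(adj, pos):
    """True iff for every target t, every other node u has a neighbor
    strictly closer to t; equivalently, every ordered pair of nodes is
    joined by a distance-decreasing path."""
    d = lambda a, b: math.dist(pos[a], pos[b])
    return all(
        any(d(v, t) < d(u, t) for v in adj[u])
        for t in adj for u in adj if u != t
    )

# A triangle drawn as an equilateral triangle is greedy.
tri_adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
tri_pos = {0: (0, 0), 1: (1, 0), 2: (0.5, 0.866)}
print(is_greedy_embedding(tri_adj, tri_pos))   # True

# A bad drawing of the path 0-1-2: from node 0, its only neighbor (1)
# is farther from node 2 than 0 itself is, so greedy routing gets stuck.
path_adj = {0: [1], 1: [0, 2], 2: [1]}
path_pos = {0: (0, 0), 1: (10, 0), 2: (1, 0)}
print(is_greedy_embedding(path_adj, path_pos))  # False
```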
On our way to dinner last night, Mihai Patrascu asked me if the stories about my faculty job interview at MIT were true. Apparently, I impressed somebody (Mihai wouldn't say who) enough to become a
cautionary example to MIT theory students. The story has become a little distorted over the last ten years, but yes, the essential details are accurate.
For any students entering the academic job market, who desperately want to avoid an offer from MIT, let me offer the following script, which proved wildly successful for me:
Gerry Sussman:
What's going to be the most important development in computer science in the next 20 years?
Yours truly:
I don't know, and neither does anyone else!
For extra added bonus points, make sure your tone of voice conveys just how incredibly
you think the question is.
I'm not crazy enough to think that was the only reason MIT didn't hire me—truthfully, I think they made the right decision—but it certainly didn't help. On the other hand, aside from a few added
details about high school students and pornography, I'm not sure my answer would be much different today.
[Libre-soc-dev] effect of more decode pipe stages on hardware requirements for execution resources for OoO processors
Jacob Lifshay programmerjake at gmail.com
Wed Feb 16 16:34:09 GMT 2022
On Wed, Feb 16, 2022, 02:32 lkcl <luke.leighton at gmail.com> wrote:
> On Wed, Feb 16, 2022 at 6:02 AM Jacob Lifshay <programmerjake at gmail.com>
> wrote:
> let us start again.
> let us use mathematical notation
> "For the infinite set of all possible instruction sequences, there
> exists a sequence of 40 instructions such that there are 39 RaW-WaR
> hazards between each pair in sequence such that 40 RSes *are* required
> to hold the full chain"
> let us call that Chain40
> the *actual* instructions within Chain40 are completely irrelevant.
> it is the fact that there *is* a chain that is the sole exclusive
> critical fact. please do not place or create barriers or argue with
> the fact that such a chain exists, nor argue or advocate any
> additional circumstances which make Chain40 a non-possibility.
> now let us also create some additional groups:
> "For the infinite set of all possible instruction sequences, there
> exists 40 instruction sequences of length 1 (one), such that they have
> no Hazards at all onto Chain40 *and* have no Hazards with each other"
> let us call those NonChain1-40
> so:
> * there are 40 instructions in a chain of 39 hazards with each other,
> called Chain40
> * there are 40 instructions with *no* Hazards either on each other or
> with Chain40, called NonChain40
> now let us define the hardware:
> * let the pipeline depth be 2 for ALL instructions
> * let the instructions to be executed be: Chain40 followed by
> NonChain1...NonChain40
> * let us assume 100-way multi-issue (please do not argue that this is
> impractical at this point in time)
> QUESTION: how many Reservation Stations are required to ensure that an
> issue-stall does not occur?
that depends on what you mean by pipeline depth...which pipeline(s)? how is
it distributed? is it 2 cycles for every execution pipeline and 0 cycles in
the fetch/decode pipeline? is it 1 cycle for every execution pipeline and 1
cycle in the fetch/decode pipeline? is it 2 cycles in the fetch/decode
pipeline and 1 cycle in every execution pipeline?
> now let us change one parameter:
> * let the pipeline depth increase to 10
> QUESTION: what effect does this have on the number of Reservation
> Stations required?
that depends on what you mean by pipeline depth...which pipeline(s)? if
it's the fetch/decode pipeline only that is increasing in depth, then
exactly the same number of RSes are needed. if execution pipelines also
increase in depth, then more RSes may be needed, dependent only on the
fetch width in instructions and the execution pipelines' depth and other
factors (that we're assuming don't occur here) that cause instructions to
be delayed in the execution or scheduling stages (we're assuming scheduling
takes 0 cycles if there's no stall), such as running out of RSes or
insufficient execution pipelines or stalled instructions (e.g. tlb miss in
a load/store).
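As a toy illustration of Luke's Chain40 scenario (a deliberately crude, hypothetical model, not LibreSOC code): issue chained instructions into reservation stations at a given width, execute them strictly one after another, and record the peak RS occupancy. Under this model the fully dependent chain pins all 40 RSes, and the execution pipeline depth does not change that peak for this particular workload:

```python
def peak_rs_occupancy(chain_len, exec_depth, issue_width):
    """Toy model of a single chain of dependent instructions.
    An instruction occupies an RS from issue until it finishes executing;
    it may start executing only after its predecessor completes."""
    issued = completed = occupancy = peak = t = 0
    done_at = None                      # completion time of the instr in flight
    while completed < chain_len:
        if done_at is not None and t >= done_at:    # retire, free the RS
            completed += 1
            occupancy -= 1
            done_at = None
        if done_at is None and issued > completed:  # start the next chained instr
            done_at = t + exec_depth
        n = min(issue_width, chain_len - issued)    # issue into RSes
        issued += n
        occupancy += n
        peak = max(peak, occupancy)
        t += 1
    return peak

# With 100-way issue the whole chain lands in RSes in the first cycle,
# so 40 RSes are needed whether the execution pipe is 2 or 10 deep.
print(peak_rs_occupancy(40, 2, 100), peak_rs_occupancy(40, 10, 100))  # 40 40
```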
Constant False Alarm Rate (CFAR) Detection
This example introduces constant false alarm rate (CFAR) detection and shows how to use CFARDetector and CFARDetector2D in the Phased Array System Toolbox™ to perform cell averaging CFAR detection.
One important task a radar system performs is target detection. The detection itself is fairly straightforward. It compares the signal to a threshold. Therefore, the real work on detection is coming
up with an appropriate threshold. In general, the threshold is a function of both the probability of detection and the probability of false alarm.
In many phased array systems, because of the cost associated with a false detection, it is desirable to have a detection threshold that not only maximizes the probability of detection but also keeps
the probability of false alarm below a preset level.
There is extensive literature on how to determine the detection threshold. Readers might be interested in the Signal Detection in White Gaussian Noise and Signal Detection Using Multiple Samples
examples for some well known results. However, all these classical results are based on theoretical probabilities and are limited to white Gaussian noise with known variance (power). In real
applications, the noise is often colored and its power is unknown.
CFAR technology addresses these issues. In CFAR, when the detection is needed for a given cell, often termed as the cell under test (CUT), the noise power is estimated from neighboring cells. Then
the detection threshold, $T$, is given by

$T=\alpha {P}_{n}$
where ${P}_{n}$ is the noise power estimate and $\alpha$ is a scaling factor called the threshold factor.
From the equation, it is clear that the threshold adapts to the data. It can be shown that with the appropriate threshold factor, $\alpha$, the resulting probability of false alarm can be kept at a
constant, hence the name CFAR.
Cell Averaging CFAR Detection
The cell averaging CFAR detector is probably the most widely used CFAR detector. It is also used as a baseline comparison for other CFAR techniques. In a cell averaging CFAR detector, noise samples
are extracted from both leading and lagging cells (called training cells) around the CUT. The noise estimate can be computed as [1]
${P}_{n}=\frac{1}{N}\sum _{m=1}^{N}{x}_{m}$
where $N$ is the number of training cells and ${x}_{m}$ is the sample in each training cell. If ${x}_{m}$ happens to be the output of a square law detector, then ${P}_{n}$ represents the estimated
noise power. In general, the number of leading and lagging training cells are the same. Guard cells are placed adjacent to the CUT, both leading and lagging it. The purpose of these guard cells is to
avoid signal components from leaking into the training cell, which could adversely affect the noise estimate.
The following figure shows the relation among these cells for the 1-D case.
With the above cell averaging CFAR detector, assuming the data passed into the detector is from a single pulse, i.e., no pulse integration involved, the threshold factor can be written as [1]
$\alpha =N\left({P}_{fa}^{-1/N}-1\right)$
where ${P}_{fa}$ is the desired false alarm rate.
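Plugging in the values used later in this example (N = 20 training cells, desired Pfa = 0.001), the threshold factor comes out to about 8.25. A quick check in Python:

```python
N = 20          # number of training cells
pfa = 1e-3      # desired probability of false alarm
alpha = N * (pfa ** (-1 / N) - 1)   # CA-CFAR threshold factor, single pulse
print(round(alpha, 2))  # 8.25
```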
CFAR Detection Using Automatic Threshold Factor
In the rest of this example, we show how to use Phased Array System Toolbox to perform a cell averaging CFAR detection. For simplicity and without losing any generality, we still assume that the
noise is white Gaussian. This enables the comparison between the CFAR and classical detection theory.
We can instantiate a CFAR detector using the following command:
cfar = phased.CFARDetector('NumTrainingCells',20,'NumGuardCells',2);
In this detector we use 20 training cells and 2 guard cells in total. This means that there are 10 training cells and 1 guard cell on each side of the CUT. As mentioned above, if we assume that the
signal is from a square law detector with no pulse integration, the threshold can be calculated based on the number of training cells and the desired probability of false alarm. Assuming the desired
false alarm rate is 0.001, we can configure the CFAR detector as follows so that this calculation can be carried out.
exp_pfa = 1e-3;
cfar.ThresholdFactor = 'Auto';
cfar.ProbabilityFalseAlarm = exp_pfa;
The configured CFAR detector is shown below.
cfar =
phased.CFARDetector with properties:
Method: 'CA'
NumGuardCells: 2
NumTrainingCells: 20
ThresholdFactor: 'Auto'
ProbabilityFalseAlarm: 1.0000e-03
OutputFormat: 'CUT result'
ThresholdOutputPort: false
NoisePowerOutputPort: false
We now simulate the input data. Since the focus is to show that the CFAR detector can keep the false alarm rate under a certain value, we just simulate the noise samples in those cells. Here are the assumptions:
• The data sequence is 23 samples long, and the CUT is cell 12. This leaves 10 training cells and 1 guard cell on each side of the CUT.
• The false alarm rate is calculated using 100 thousand Monte Carlo trials.
rs = RandStream('mt19937ar','Seed',2010);
npower = db2pow(-10); % Assume 10dB SNR ratio
Ntrials = 1e5;
Ncells = 23;
CUTIdx = 12;
% Noise samples after a square law detector
rsamp = randn(rs,Ncells,Ntrials)+1i*randn(rs,Ncells,Ntrials);
x = abs(sqrt(npower/2)*rsamp).^2;
To perform the detection, pass the data through the detector. In this example, there is only one CUT, so the output is a logical vector containing the detection result for all the trials. If the
result is true, it means that a target is present in the corresponding trial. In our example, all detections are false alarms because we are only passing in noise. The resulting false alarm rate can
then be calculated based on the number of false alarms and the number of trials.
x_detected = cfar(x,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
The result shows that the resulting probability of false alarm is below 0.001, just as we specified.
CFAR Detection Using Custom Threshold Factor
As explained in the earlier part of this example, there are only a few cases in which the CFAR detector can automatically compute the appropriate threshold factor. For example, using the previous
scenario, if we employ a 10-pulses noncoherent integration before the data goes into the detector, the automatic threshold can no longer provide the desired false alarm rate.
npower = db2pow(-10); % Assume 10dB SNR ratio
xn = 0;
for m = 1:10
    rsamp = randn(rs,Ncells,Ntrials)+1i*randn(rs,Ncells,Ntrials);
    xn = xn + abs(sqrt(npower/2)*rsamp).^2; % noncoherent integration
end
x_detected = cfar(xn,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
One may be puzzled why we think a resulting false alarm rate of 0 is worse than a false alarm rate of 0.001. After all, isn't a false alarm rate of 0 a great thing? The answer to this question lies
in the fact that when the probability of false alarm is decreased, so is the probability of detection. In this case, because the true false alarm rate is far below the allowed value, the detection
threshold is set too high. The same probability of detection can be achieved with our desired probability of false alarm at lower cost; for example, with lower transmitter power.
In most cases, the threshold factor needs to be estimated based on the specific environment and system configuration. We can configure the CFAR detector to use a custom threshold factor, as shown below:
cfar.ThresholdFactor = 'Custom';
Continuing with the pulse integration example and using empirical data, we found that we can use a custom threshold factor of 2.35 to achieve the desired false alarm rate. Using this threshold, we
see that the resulting false alarm rate matches the expected value.
cfar.CustomThresholdFactor = 2.35;
x_detected = cfar(xn,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
CFAR Detection Threshold
A CFAR detection occurs when the input signal level in a cell exceeds the threshold level. The threshold level for each cell depends on the threshold factor and the noise power in that cell, derived from the training cells. To maintain a constant false alarm rate, the detection threshold will increase or decrease in proportion to the noise power in the training cells. Configure the CFAR detector to
output the threshold used for each detection using the ThresholdOutputPort property. Use an automatic threshold factor and 200 training cells.
cfar.ThresholdOutputPort = true;
cfar.ThresholdFactor = 'Auto';
cfar.NumTrainingCells = 200;
Next, create a square-law input signal with increasing noise power.
rs = RandStream('mt19937ar','Seed',2010);
Npoints = 1e4;
rsamp = randn(rs,Npoints,1)+1i*randn(rs,Npoints,1);
ramp = linspace(1,10,Npoints)';
xRamp = abs(sqrt(npower*ramp./2).*rsamp).^2;
Compute detections and thresholds for all cells in the signal.
[x_detected,th] = cfar(xRamp,1:length(xRamp));
Next, compare the CFAR threshold to the input signal.
xlabel('Time Index')
Here, the threshold increases with the noise power of the signal to maintain the constant false alarm rate. Detections occur where the signal level exceeds the threshold.
Comparison Between CFAR and Classical Neyman-Pearson Detector
In this section, we compare the performance of a CFAR detector with the classical detection theory using the Neyman-Pearson principle. Returning to the first example and assuming the true noise power
is known, the theoretical threshold can be calculated as
T_ideal = npower*db2pow(npwgnthresh(exp_pfa));
The false alarm rate of this classical Neyman-Pearson detector can be calculated using this theoretical threshold.
act_Pfa_np = sum(x(CUTIdx,:)>T_ideal)/Ntrials
Because we know the noise power, classical detection theory also produces the desired false alarm rate. The false alarm rate achieved by the CFAR detector is similar.
cfar.ThresholdOutputPort = false;
cfar.NumTrainingCells = 20;
x_detected = cfar(x,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
Next, assume that both detectors are deployed to the field and that the noise power is 1 dB more than expected. In this case, if we use the theoretical threshold, the resulting probability of false
alarm is four times more than what we desire.
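This mismatch effect can be reproduced outside MATLAB as well. The following Python/NumPy sketch is a simplified scalar analogue of the example (an assumption, not the toolbox computation): it models the square-law detector output as exponential noise, for which the fixed Neyman-Pearson threshold satisfies Pfa = exp(-T/npower).

```python
import numpy as np

# Fixed (non-adaptive) threshold designed for a noise power of -10 dB,
# using Pfa = exp(-T / npower) for exponential (square-law) noise.
pfa_design = 1e-3
npower_design = 10 ** (-10 / 10)                 # db2pow(-10)
T_ideal = npower_design * np.log(1 / pfa_design)

rng = np.random.default_rng(1)
trials = 500_000

# Matched case: the true noise power equals the design value.
x_matched = rng.exponential(npower_design, size=trials)
pfa_matched = np.mean(x_matched > T_ideal)

# Mismatched case: the true noise power is 1 dB higher (-9 dB).
npower_true = 10 ** (-9 / 10)
x_mismatched = rng.exponential(npower_true, size=trials)
pfa_mismatched = np.mean(x_mismatched > T_ideal)  # roughly 4x the design Pfa
```

With only a 1 dB noise-power error, the fixed threshold's false alarm rate roughly quadruples, matching the claim above.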
npower = db2pow(-9); % Assume 9dB SNR ratio
rsamp = randn(rs,Ncells,Ntrials)+1i*randn(rs,Ncells,Ntrials);
x = abs(sqrt(npower/2)*rsamp).^2;
act_Pfa_np = sum(x(CUTIdx,:)>T_ideal)/Ntrials
On the contrary, the CFAR detector's performance is not affected.
x_detected = cfar(x,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
Hence, the CFAR detector is robust to noise power uncertainty and better suited to field applications.
Finally, use a CFAR detection in the presence of colored noise. We first apply the classical detection threshold to the data.
npower = db2pow(-10);
fcoeff = maxflat(10,'sym',0.2);
x = abs(sqrt(npower/2)*filter(fcoeff,1,rsamp)).^2; % colored noise
act_Pfa_np = sum(x(CUTIdx,:)>T_ideal)/Ntrials
Note that the resulting false alarm rate cannot meet the requirement. However, using the CFAR detector with a custom threshold factor, we can obtain the desired false alarm rate.
cfar.ThresholdFactor = 'Custom';
cfar.CustomThresholdFactor = 12.85;
x_detected = cfar(x,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
CFAR Detection for Range-Doppler Images
In the previous sections, the noise estimate was computed from training cells leading and lagging the CUT in a single dimension. We can also perform CFAR detection on images. Cells correspond to
pixels in the images, and guard cells and training cells are placed in bands around the CUT. The detection threshold is computed from cells in the rectangular training band around the CUT.
In the figure above, the guard band size is [2 2] and the training band size is [4 3]. The size indices refer to the number of cells on each side of the CUT in the row and column dimensions, respectively. The guard band size can also be defined as 2, since the size is the same along the row and column dimensions.
Next, create a two-dimensional CFAR detector. Use a probability of false alarm of 1e-5 and specify a guard band size of 5 cells and a training band size of 10 cells.
cfar2D = phased.CFARDetector2D('GuardBandSize',5,'TrainingBandSize',10,...
    'ProbabilityFalseAlarm',1e-5);
Next, load and plot a range-Doppler image. The image includes returns from two stationary targets and one target moving away from the radar.
[resp,rngGrid,dopGrid] = helperRangeDoppler;
Use CFAR to search the range-Doppler space for objects, and plot a map of the detections. Search from -10 to 10 kHz and from 1000 to 4000 m. First, define the cells under test for this region.
[~,rangeIndx] = min(abs(rngGrid-[1000 4000]));
[~,dopplerIndx] = min(abs(dopGrid-[-1e4 1e4]));
[columnInds,rowInds] = meshgrid(dopplerIndx(1):dopplerIndx(2),...
    rangeIndx(1):rangeIndx(2));
CUTIdx = [rowInds(:) columnInds(:)]';
Compute a detection result for each cell under test. Each pixel in the search region is a cell in this example. Plot a map of the detection results for the range-Doppler image.
detections = cfar2D(resp,CUTIdx);
The three objects are detected. A data cube of range-Doppler images over time can likewise be provided as the input signal to cfar2D, and detections will be calculated in a single step.
In this example, we presented the basic concepts behind CFAR detectors. In particular, we explored how to use the Phased Array System Toolbox to perform cell averaging CFAR detection on signals and
range-Doppler images. The comparison between the performance offered by a cell averaging CFAR detector and a detector equipped with the theoretically calculated threshold shows clearly that the CFAR
detector is more suitable for real field applications.
[1] Mark Richards, Fundamentals of Radar Signal Processing, McGraw Hill, 2005 | {"url":"https://kr.mathworks.com/help/radar/ug/constant-false-alarm-rate-cfar-detection.html","timestamp":"2024-11-08T15:04:58Z","content_type":"text/html","content_length":"93040","record_id":"<urn:uuid:ba2de2b6-3765-4db9-9f4b-c65aa202144f>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00228.warc.gz"} |
Precalculus (6th Edition) Blitzer Chapter 8 - Section 8.5 - Determinants and Cramer’s Rule - Exercise Set - Page 945 6
The general formula to calculate the determinant of a $2 \times 2$ matrix, which has 2 rows and 2 columns, is $D=\begin{vmatrix}a&b\\c&d\end{vmatrix}=ad-bc$. Now, $D=\begin{vmatrix}1&-3\\-8&2\end{vmatrix}=(1)(2)-(-3)(-8)=2-24=-22$.
You can help us out by revising, improving and updating this answer.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | {"url":"https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-8-section-8-5-determinants-and-cramer-s-rule-exercise-set-page-945/6","timestamp":"2024-11-09T20:31:29Z","content_type":"text/html","content_length":"72101","record_id":"<urn:uuid:939eda6b-9a41-44e8-a0f2-0e0b9128015c>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00173.warc.gz"} |
April 3, 2008 - 22:40
hw help
A wooden plank 2.5 m long, 40 cm wide and 60 mm thick floats in water with 15 mm of its thickness above the surface. Calculate the plank's mass.
i don't know how to go about solving this problem. any help would be awesome.
October 13, 2008 - 21:41
(Reply to #1) #2
GOODSTUDENT;77230 wrote:A wooden plank 2.5 m long, 40 cm wide and 60 mm thick floats in water with 15 mm of its thickness above the surface. Calculate the plank's mass.
i don't know how to go about solving this problem. any help would be awesome.
Use the buoyancy force and set it equal to mg. The buoyancy force points upward while mg points downward. We know that 15 mm is above water, so 60 - 15 = 45 mm is under water, which means that 45/60 of the plank's thickness is submerged.
set rho(wood) V(wood) g = rho(water) V(displaced water) g (the plank's weight on one side, the buoyant force on the other)
then f = V(of water)/V (wood) = rho (wood)/rho(water) = 45/60 (simplified) | {"url":"https://course-notes.org/forum/science/ap_physics/hw_help","timestamp":"2024-11-01T23:02:09Z","content_type":"text/html","content_length":"50180","record_id":"<urn:uuid:ada285d0-bc0e-4f40-96f1-da9c13f2f37f>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00540.warc.gz"} |
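For completeness, the numbers in the reply above work out as follows. This sketch assumes fresh water at 1000 kg/m^3 and takes the plank to be 40 cm wide (the problem statement's second "long" is presumably the width).

```python
# Worked numbers for the plank problem, in SI units.
rho_water = 1000.0             # kg/m^3, fresh water
length = 2.5                   # m
width = 0.40                   # m (40 cm)
thickness = 0.060              # m (60 mm)
submerged = thickness - 0.015  # 15 mm above the surface -> 45 mm submerged

# Floating equilibrium: rho_water * V_submerged * g = m * g
mass = rho_water * length * width * submerged        # 45 kg
density_wood = mass / (length * width * thickness)   # 750 kg/m^3, i.e. 45/60 of water's
```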
Python - Change The Bin Size Of An Histogram+ With Code Examples
In this article, we will look at how to get the solution for the problem, Python - Change The Bin Size Of An Histogram+ With Code Examples
How do you change the bin size on a histogram?
To adjust the bin width, right click the horizontal axis on the histogram and then click Format Axis from the dropdown: What is this? In the window that appears to the right, we can see that Excel
chose the bin width to be 29,000. We can change this to any number we'd like.
import numpy as np
import matplotlib.pyplot as plt

x = np.random.randn(1000) # Generate 1000 random numbers
plt.hist(x, bins=20) # Select the number of bins
plt.hist(x, bins=range(-4, 5)) # Select explicit bin edges over a range
Can histograms have different bin widths?
Most histograms use bin widths that are as equal as possible, but it is also possible to use unequal bin widths (see the 'Variable bin widths' section of Histogram). A recommended strategy is to size
bins so the number of values they contain is approximately equal.
What does bins mean in Python?
Bins are the number of intervals you want to divide all of your data into, such that it can be displayed as bars on a histogram. A simple method to work our how many bins are suitable is to take the
square root of the total number of values in your distribution.
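The square-root rule of thumb described above can be sketched as a small helper (the function name `suggest_bins` is ours, for illustration, not a library function):

```python
import math

def suggest_bins(values):
    """Square-root rule of thumb: number of bins ~ sqrt(number of data points)."""
    return max(1, round(math.sqrt(len(values))))

n_bins = suggest_bins(range(1000))  # sqrt(1000) is about 31.6, so 32 bins
```

The result can then be passed straight to `plt.hist(data, bins=n_bins)`.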
What is bin size in histogram Python?
The default value of the number of bins to be created in a histogram is 10. However, we can change the size of bins using the parameter bins in matplotlib.pyplot.hist().
How do I set the bin size in Python?
How to set the bin size of a Matplotlib histogram in Python
• data = np.random.normal(50, 10, size=10000) Creating random data.
• ax = plt.hist(data)
• bins_list = [-10, 20, 40, 50, 60, 80, 110] specify bin start and end points.
• ax = plt.hist(data, bins=bins_list)
How do you make a histogram with unequal class widths?
This is called unequal class intervals. To draw a histogram for this information, first find the class width of each category. The area of the bar represents the frequency, so to find the height of
the bar, divide frequency by the class width. This is called frequency density.
How do you make a bin in Python?
The following Python function can be used to create bins.
• def create_bins(lower_bound, width, quantity): """ create_bins returns an equal-width (distance) partitioning.
• bins = create_bins(lower_bound=10, width=10, quantity=5) bins.
What is bucket size in a histogram?
A histogram displays numerical data by grouping data into "bins" of equal width. Each bin is plotted as a bar whose height corresponds to how many data points are in that bin. Bins are also sometimes
called "intervals", "classes", or "buckets".
How do you binning data in Python?
Smoothing by bin means : In smoothing by bin means, each value in a bin is replaced by the mean value of the bin. Smoothing by bin median : In this method each bin value is replaced by its bin median
What is the bin width of the histogram?
Calculate the number of bins by taking the square root of the number of data points and round up. Calculate the bin width by dividing the specification tolerance or range (USL-LSL or Max-Min value)
by the # of bins.
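Putting the two rules together, a rough sketch (the helper name `bin_width` is illustrative, not a library function):

```python
import math

def bin_width(values):
    """Bin width = data range divided by ceil(sqrt(n)) bins."""
    n_bins = math.ceil(math.sqrt(len(values)))
    return (max(values) - min(values)) / n_bins, n_bins

data = list(range(100))          # 100 points spanning 0..99
width, n_bins = bin_width(data)  # 10 bins, each 9.9 wide
```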
{"url":"https://www.isnt.org.in/python-change-the-bin-size-of-an-histogram-with-code-examples.html","timestamp":"2024-11-10T21:16:29Z","content_type":"text/html","content_length":"150380","record_id":"<urn:uuid:80bdcf39-b8ed-4bae-bcd6-ecef00e8443f>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00055.warc.gz"}
Library Guides: Scholarly Impact Research Guide: Definitions & Metrics
Journal Metrics can help track citation patterns within journals and determine which journals are highly-cited.
Journal Impact Factor: the average number of times, in the past two years, that articles from a journal have been cited in the Journal Citation Reports (JCR). It is the most commonly used journal impact metric.
Journal Normalized Citation Impact: the ratio of the actual number of citing items to the average citation rate of publications in the same journal in the same year and with the same document type.
CiteScore: the ratio of citations to documents published over a four year period over number of documents in the same four year period. CiteScore only includes peer-reviewed research: articles,
reviews, conference papers, data papers and book chapters, covering 4 years of citations and publications.
SNIP (Source Normalized Impact per Paper) Measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is
given higher value in subject areas where citations are less likely, and vice versa.
5-Year Journal Impact Factor: the average number of times articles from the journal published in the past five years have been cited in the JCR year. It is calculated by dividing the number of
citations in the JCR year by the total number of articles published in the five previous years.
Journal Immediacy Index: Citations to articles from the current year, divided by the total number of articles from the current year.
Eigenfactor Score: Similar to the 5-Year Journal Impact Factor, but weeds out journal self-citations. Includes both the hard sciences and the social sciences.
Article Influence Score: the average influence of a journal's articles over the first five years after publication. The Eigenfactor score divided by the number of articles published in journal. Mean
score is 1.
SJR - SCImago Journal Rank: Doesn't consider all citations of equal weight; the prestige of the citing journal is taken into account.
h5-index: the h-index for articles published in the last 5 complete years. It is the largest number h such that h articles published in 2010-2014 have at least h citations each | {"url":"https://libguides.tulane.edu/c.php?g=455984&p=3130617","timestamp":"2024-11-04T00:52:39Z","content_type":"text/html","content_length":"31046","record_id":"<urn:uuid:24c58d42-0066-47a3-8e6b-6ad5f25f1903>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00683.warc.gz"} |
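The h-index family of metrics is straightforward to compute from a list of citation counts. A minimal sketch of the standard definition above (to get the h5-index, restrict the input to articles from the last five complete years):

```python
def h_index(citations):
    """Largest h such that h of the articles have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

h = h_index([10, 8, 5, 4, 3])  # four articles each have at least 4 citations
```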
AstroScans and Wave Amplifiers
This little tool helps you calculate the success probability of your AstroScans in Planetarion. Enter the number of Asteroids and Wave Amplifiers you've got, and click on "calculate". Probabilities
above 100% mean certain success (almost).
If you want to calculate how many WaveAmps you need for the success probability you want,
try this page
The formula is:
probability = (waveamps/(asteroids*2)+1)*30
This is the formula fudge gave us.
From our personal experience, the real success rate is much higher than the official formula suggests. So we put our own formula here, too. You get both results and can choose which you trust more. | {"url":"http://www.lytha.com/planetarion/tools/astroscans.phtml","timestamp":"2024-11-04T09:07:32Z","content_type":"text/html","content_length":"3900","record_id":"<urn:uuid:811d490e-d8da-4861-b651-0986f5ec9390>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00332.warc.gz"} |
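For reference, the official formula quoted above translates directly into code. Note this implements only the official formula; as stated, observed success rates may be higher.

```python
def astroscan_success(asteroids, waveamps):
    """Official formula: values above 100 mean (almost) certain success."""
    return (waveamps / (asteroids * 2) + 1) * 30

p = astroscan_success(asteroids=10, waveamps=20)  # (20/20 + 1) * 30 = 60
```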
curtains.js | Documentation | Vec3 class
Vec3 class
Basic usage
Vec3 is the class used for all 3 dimensions vector manipulations.
To create a new Vec3, just use its constructor.
If used without parameters, it will create a null vector Vec3 with X, Y and Z components equal to 0.
// create two new Vec3 vectors
const nullVector = new Vec3();
const vector = new Vec3(1, 1, 1);
Use chaining
Since most of the Vec3 methods return the vector itself, you can use chaining:
// create a new Vec3 vector and chain methods
const vector = new Vec3().addScalar(2).normalize();
onChange callback
Vec3 class is using getters and setters, allowing to execute a function each time one of the vector components changes:
// create a new Vec3 vector and listen to its changes
const vector = new Vec3().onChange(() => {
// one of the vector component just changed, do something...
const normalizedVector = vector.normalize();
// this will trigger our onChange callback
vector.x = 1;
// be careful, this will trigger onChange again!
vector.y = 2;
• X float, optional value along X axis. Default to 0.
• Y float, optional value along Y axis. Default to 0.
• Z float, optional value along Z axis. Default to 0.
• x (float): v7.0
Value along X axis.
• y (float): v7.0
Value along Y axis.
• z (float): v7.0
Value along Z axis.
• add(vector): v7.0
Adds a vector to this vector.
□ vector Vec3 class object vector to add
returns: this vector after addition.
• addScalar(scalar): v7.0
Adds a scalar to this vector.
□ scalar float number to add
returns: this vector after addition.
• applyMat4(matrix): v7.0
Apply a 4 dimensions Mat4 matrix to this vector.
□ matrix Mat4 class object matrix to apply
returns: this vector after matrix application.
• applyQuat(quaternion): v7.1
Apply a quaternion (rotation in 3D space) to this vector.
□ quaternion Quat class object quaternion to apply
returns: this vector after quaternion application.
• clone(): v7.0
Clone this vector.
returns: new cloned vector.
• copy(vector): v7.0
Copy a vector into this vector.
□ vector Vec3 class object vector to copy
returns: this vector after copy.
• dot(vector): v7.0
Calculates the dot product of 2 vectors.
□ vector Vec3 class object vector to use for dot product
returns: a float representing the dot product of the 2 vectors.
• equals(vector): v7.0
Checks if 2 vectors are equal.
□ vector Vec3 class object vector to compare
returns: true if the 2 vectors are equals, false otherwise.
• max(vector): v7.0
Apply max values to this vector.
□ vector Vec3 class object vector representing max values
returns: vector with max values applied.
• min(vector): v7.0
Apply min values to this vector.
□ vector Vec3 class object vector representing min values
returns: vector with min values applied.
• multiply(vector): v7.1
Multiplies a vector with this vector.
□ vector Vec3 class object vector to use for multiplication
returns: this vector after multiplication.
• multiplyScalar(scalar): v7.1
Multiplies a scalar with this vector.
□ scalar float number to use for multiplication
returns: this vector after multiplication.
• normalize(): v7.0
Normalize this vector.
returns: normalized vector.
• project(camera): v7.1
Project 3D coordinate to 2D point.
□ camera Camera class object. Use a plane camera. camera to use to project this vector from 3D to 2D space.
returns: this vector after having been projected.
• sanitizeNaNValuesWith(vector): v7.0
Merges this vector with a vector when values are NaN. Mostly used internally to avoid errors.
□ vector Vec3 class object vector to use for sanitization (i.e. set this vector value if original vector value is NaN).
returns: sanitized vector.
• set(x, y, z): v7.0
Sets the vector from values.
□ x float X component of our vector.
□ y float Y component of our vector.
□ z float Z component of our vector.
returns: this vector after being set.
• sub(vector): v7.0
Subtracts a vector from this vector.
□ vector Vec3 class object vector to use for subtraction.
returns: this vector after subtraction.
• subScalar(scalar): v7.0
Subtracts a scalar to this vector.
□ scalar float number to use for subtraction.
returns: this vector after subtraction.
• unproject(camera): v7.1
Unproject 2D point to 3D coordinate.
□ camera Camera class object. Use a plane camera. camera to use to unproject this vector from 2D to 3D space.
returns: this vector after having been unprojected.
• onChange(callback): v8.0
Execute a function each time the x, y or z component of the vector changed.
□ callback function function to execute.
returns: your Vec3 object, allowing it to be chainable. | {"url":"https://www.curtainsjs.com/vec-3-class.html","timestamp":"2024-11-14T19:58:05Z","content_type":"text/html","content_length":"47997","record_id":"<urn:uuid:81d74f63-1ee1-4ae8-b57e-0714017cd7e6>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00691.warc.gz"} |
KSEEB Solutions for Class 9 Maths Chapter 9 Coordinate Geometry Ex 9.2
KSEEB Solutions for Class 9 Maths Chapter 9 Coordinate Geometry Ex 9.2 are part of KSEEB Solutions for Class 9 Maths. Here we have given Karnataka Board Class 9 Maths Chapter 9 Coordinate Geometry
Exercise 9.2.
Karnataka Board Class 9 Maths Chapter 9 Coordinate Geometry Ex 9.2
Question 1.
Write the answer of each of the following questions :
(i) What is the name of horizontal and the vertical lines drawn to determine the position of any point in the Cartesian plane ?
(ii) What is the name of each part of the plane formed by these two lines?
(iii) Write the name of the point where these two lines intersect.
(i) In a Cartesian plane, to determine the position of any point, the horizontal line is called the x-axis and the vertical line is called the y-axis.
The y-coordinate of a point is also called the ordinate.
Horizontal line → xOx’
Vertical line → yOy’
(ii) The name of each part of the plane formed by these two lines is called Quadrant.
I Quadrant → xOy
II Quadrant → yOx’
III Quadrant → x’Oy’
IV Quadrant → y’Ox
(iii) The name of the point where these two lines intersect is called the origin. Coordinates of origin are (0. 0).
Question 2.
See Fig.9.14, and write the following:
1. The coordinates of B : (-5, +2)
2. The coordinates of C. : (+5, -5)
3. The point identified by the coordinates (-3, -5) : E.
4. The point identified by the coordinates (2, -4) : G.
5. The abscissa of the point D : 6
6. The ordinate of the point H: -3
7. The coordinates of the point L : (0, +5)
8. The coordinates of the point M : (-3, 0).
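The quadrant of any point can be read off from the signs of its coordinates. A small sketch (the `quadrant` function is ours, written for illustration) that checks a few of the answers above:

```python
def quadrant(x, y):
    """Quadrant of a point in the Cartesian plane, or 'axis' if it lies on an axis."""
    if x == 0 or y == 0:
        return "axis"
    if x > 0:
        return "I" if y > 0 else "IV"
    return "II" if y > 0 else "III"

q_b = quadrant(-5, 2)   # point B: x negative, y positive -> quadrant II
q_c = quadrant(5, -5)   # point C: x positive, y negative -> quadrant IV
q_l = quadrant(0, 5)    # point L lies on the y-axis
```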
We hope the KSEEB Solutions for Class 9 Maths Chapter 9 Coordinate Geometry Ex 9.2 helps you. If you have any query regarding Karnataka Board Class 9 Maths Chapter 9 Coordinate Geometry Exercise 9.2,
drop a comment below and we will get back to you at the earliest. | {"url":"https://www.kseebsolutions.com/kseeb-solutions-class-9-maths-chapter-9-ex-9-2/","timestamp":"2024-11-07T16:54:00Z","content_type":"text/html","content_length":"63632","record_id":"<urn:uuid:57354f7b-1dd4-4b2d-aeab-287c5f065d32>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00009.warc.gz"} |
Calculus for Beginners: A Friendly Guide to Mastering the Basics
1. Introduction to Calculus
What is Calculus?
Calculus is the branch of mathematics that deals with change. Unlike algebra, which is static, calculus allows us to understand and describe changes. It’s a powerful tool used in various fields such
as physics, engineering, economics, and even biology.
Why Learn Calculus?
Learning calculus opens up a new world of mathematical understanding. It provides the foundation for more advanced studies in mathematics and science. Whether you’re looking to pursue a career in a
STEM field or just want to challenge yourself, calculus is a great place to start.
2. The Building Blocks of Calculus
Functions and Their Importance
At its core, calculus deals with functions. A function is a relationship between two variables where each input is related to exactly one output. Understanding functions is crucial as they form the
basis of most calculus problems.
Different Types of Functions
There are various types of functions you’ll encounter in calculus. These include linear functions, polynomial functions, exponential functions, and trigonometric functions. Each type behaves
differently and has unique properties.
3. Limits: The Foundation of Calculus
Understanding Limits
Limits are the cornerstone of calculus. They help us understand the behavior of functions as they approach a particular point. Limits can be tricky, but they are essential for defining derivatives
and integrals.
Calculating Limits
To calculate a limit, you approach the value from both sides and see what the function tends to. Sometimes, you can find limits algebraically, while other times you might need to use a graph or
numerical methods.
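The numerical approach can be sketched in a few lines of Python (`numeric_limit` is an illustrative helper, not a standard function): approach the point from both sides with shrinking steps and watch where the values settle.

```python
import math

def numeric_limit(f, a, steps=(1e-1, 1e-3, 1e-5)):
    """Approach a from both sides with shrinking steps and watch what f tends to."""
    return [(f(a - h), f(a + h)) for h in steps]

# Classic example: sin(x)/x -> 1 as x -> 0, even though f(0) itself is undefined.
pairs = numeric_limit(lambda x: math.sin(x) / x, 0.0)
left, right = pairs[-1]  # values just to the left and right of 0
```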
4. Derivatives: Measuring Change
What is a Derivative?
A derivative represents the rate of change of a function with respect to a variable. In simpler terms, it tells you how fast something is changing at any given point. Derivatives are fundamental in
understanding motion, growth, and other dynamic processes.
Basic Rules of Differentiation
There are several rules to make finding derivatives easier. These include the power rule, product rule, quotient rule, and chain rule. Mastering these rules is essential for solving more complex problems.
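Before memorizing the rules, it helps to see that a derivative really is a rate of change. The sketch below approximates one numerically with a central difference (an illustrative helper) and checks it against the power rule:

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Power rule check: d/dx of x^3 is 3x^2, so the derivative at x = 2 should be 12.
d = derivative(lambda x: x ** 3, 2.0)
```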
5. Applications of Derivatives
Motion and Velocity
One of the most common applications of derivatives is in physics, where they are used to describe motion. The derivative of the position function with respect to time gives you the velocity, which
tells you how fast an object is moving.
Optimization Problems
Derivatives are also used in optimization problems. These problems involve finding the maximum or minimum values of a function, which is essential in fields like economics and engineering.
6. Integrals: Accumulating Change
What is an Integral?
An integral is the opposite of a derivative. While derivatives measure the rate of change, integrals measure the total accumulation of change. Integrals are used to calculate areas under curves,
among other things.
Basic Integration Techniques
There are several techniques for finding integrals, including substitution and integration by parts. Learning these techniques is crucial for solving a wide range of problems.
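Substitution and integration by parts are symbolic techniques; when neither applies neatly, an integral can also be approximated numerically. A minimal trapezoid-rule sketch (the function and interval are chosen only for illustration):

```python
def trapezoid(f, a, b, n=1000):
    """Approximate the definite integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

approx = trapezoid(lambda x: x ** 2, 0.0, 1.0)
print(approx)  # ≈ 0.333..., close to the exact value 1/3
```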
7. The Fundamental Theorem of Calculus
Connecting Derivatives and Integrals
The Fundamental Theorem of Calculus links the concepts of differentiation and integration. It states that differentiation and integration are inverse processes. This theorem is a central idea in
calculus and has profound implications.
Practical Implications
Understanding this theorem allows you to solve complex problems more efficiently. It shows that if you know the antiderivative of a function, you can easily compute the integral.
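In symbols, if F is any antiderivative of f (that is, F'(x) = f(x)), the theorem states:

```latex
\int_a^b f(x)\,dx \;=\; F(b) - F(a).
```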
8. Differential Equations
What are Differential Equations?
Differential equations involve equations that relate a function to its derivatives. They are used to model a wide range of real-world phenomena, from population growth to the motion of planets.
Solving Simple Differential Equations
Solving differential equations can be challenging, but many techniques can simplify the process. For beginners, focusing on first-order differential equations is a good starting point.
9. Series and Sequences
Understanding Sequences
A sequence is an ordered list of numbers that often follow a specific pattern. Sequences are the building blocks of series, which are sums of sequences. Understanding sequences is important for
studying series and their convergence.
Convergence and Divergence
A series can either converge (approach a finite value) or diverge (grow without bound). Knowing whether a series converges or diverges is crucial in many areas of mathematics and applied science.
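A quick numerical contrast in Python: partial sums of the geometric series with terms 1/2^k settle toward 1, while partial sums of the harmonic series with terms 1/k keep growing (roughly like ln n):

```python
def partial_sums(term, n):
    """Return the first n partial sums of the series with general term term(k)."""
    total, sums = 0.0, []
    for k in range(1, n + 1):
        total += term(k)
        sums.append(total)
    return sums

geometric = partial_sums(lambda k: 1 / 2 ** k, 50)  # converges to 1
harmonic = partial_sums(lambda k: 1 / k, 50)        # diverges, grows without bound

print(geometric[-1], harmonic[-1])
```

Fifty terms are of course not a proof; convergence tests (ratio, comparison, integral) make the distinction rigorous.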
10. Multivariable Calculus
Extending Calculus to Multiple Variables
Multivariable calculus extends the concepts of single-variable calculus to functions of several variables. This includes studying partial derivatives and multiple integrals.
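For instance, a partial derivative of a two-variable function differentiates with respect to one variable while treating the other as a constant:

```latex
f(x, y) = x^2 y
\quad\Longrightarrow\quad
\frac{\partial f}{\partial x} = 2xy,
\qquad
\frac{\partial f}{\partial y} = x^2.
```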
Applications in the Real World
Multivariable calculus is used in many fields, such as physics for describing electromagnetic fields, in economics for optimizing functions with several variables, and in engineering for analyzing
systems with multiple inputs.
11. Tips for Studying Calculus
Practice Regularly
The key to mastering calculus is practice. Work on problems regularly to build your understanding and improve your problem-solving skills. Don’t just stick to examples in textbooks—challenge
yourself with new and different problems.
Seek Help When Needed
Don’t be afraid to seek help when you’re stuck. Use resources like online tutorials, study groups, and tutoring services. Sometimes, a different perspective can make a complex topic much clearer.
12. Real-World Applications of Calculus
Engineering and Physics
Calculus is indispensable in engineering and physics. It’s used to design and analyze systems, from the simple mechanics of a lever to the complex dynamics of a spacecraft.
Economics and Biology
In economics, calculus helps in modeling and predicting economic trends. In biology, it’s used to model population dynamics and the spread of diseases.
13. Technology and Calculus
Calculus in Computer Science
Calculus plays a crucial role in computer science, particularly in areas like machine learning and algorithms. Understanding calculus can give you a significant advantage in this rapidly growing field.
Software Tools for Calculus
There are numerous software tools available that can help you with calculus. These include graphing calculators, computer algebra systems, and educational apps that provide interactive
problem-solving experiences.
14. Historical Perspectives
The Origins of Calculus
Calculus has a rich history that dates back to ancient times. However, it was formalized in the 17th century by mathematicians Isaac Newton and Gottfried Wilhelm Leibniz. Understanding the historical
context can give you a deeper appreciation of the subject.
Key Mathematicians
Many mathematicians have contributed to the development of calculus. Learning about their lives and work can inspire you and provide valuable insights into the subject.
15. Advanced Topics in Calculus
Vector Calculus
Vector calculus deals with vector fields and is essential in physics and engineering. It extends the concepts of calculus to more complex systems and provides powerful tools for analyzing them.
Fourier Analysis
Fourier analysis is a method of expressing functions as sums of sinusoidal functions. It’s widely used in signal processing, image analysis, and other fields that require analyzing periodic data.
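On the interval [−π, π], the standard Fourier series of a function f takes the form:

```latex
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \bigl( a_n \cos nx + b_n \sin nx \bigr),
\qquad
a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx \, dx,
\quad
b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx \, dx.
```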
16. Calculus in Everyday Life
Calculus in Medicine
In medicine, calculus is used in various ways, such as modeling the growth of tumors or the spread of diseases. It’s also used in medical imaging techniques like MRI and CT scans.
Environmental Science
Environmental scientists use calculus to model and predict changes in the environment. This includes studying climate change, pollution levels, and the dynamics of ecosystems.
17. Learning Resources
Textbooks and Online Courses
There are many excellent textbooks and online courses available for learning calculus. Some popular ones include “Calculus” by James Stewart and online courses from platforms like Khan Academy.
Interactive Learning
Interactive learning tools, such as simulations and educational software, can make studying calculus more engaging and effective. These tools allow you to visualize concepts and practice problems in
a hands-on way.
18. Common Mistakes and How to Avoid Them
Misunderstanding Limits
One common mistake is misunderstanding the concept of limits. It’s essential to grasp this fundamental idea as it underpins much of calculus. Practice different types of limit problems to build a
strong foundation.
Skipping Steps
Another mistake is skipping steps in problem-solving. Calculus problems can be complex, and skipping steps often leads to errors. Write out each step clearly to ensure you understand the process.
19. Preparing for Calculus Exams
Study Strategies
Effective study strategies include reviewing notes regularly, practicing problems, and understanding key concepts rather than memorizing formulas. Creating a study schedule can help manage your time effectively.
Practice Exams
Taking practice exams can help you prepare for the format and types of questions you’ll encounter. It also helps you identify areas where you need further review and practice.
20. Calculus and Beyond
Transitioning to Advanced Mathematics
Calculus is a gateway to more advanced areas of mathematics, such as differential equations, linear algebra, and complex analysis. Mastering calculus prepares you for these more challenging subjects.
Career Opportunities
A strong understanding of calculus opens up many career opportunities in fields like engineering, data science, economics, and more. It’s a valuable skill that can set you apart in the job market.
21. Conclusion: Embracing the Journey
The Joy of Learning Calculus
Learning calculus can be challenging, but it’s also incredibly rewarding. The skills and knowledge you gain will serve you well in many areas of your life and career. Calculus is just the
beginning. Keep exploring and learning new mathematical concepts. The journey of learning never ends, and each new topic builds on the foundation you’ve established with calculus.
| {"url":"https://mathematicalexplorations.co.in/calculus-for-beginners-a-friendly-guide-to-mastering-the-basics/","timestamp":"2024-11-08T15:49:18Z","content_type":"text/html","content_length":"250848","record_id":"<urn:uuid:c61feb18-8e07-4952-b76f-130b2668fed8>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00553.warc.gz"}
{rsimsum}: Analysis of Simulation Studies Including Monte Carlo Error
rsimsum is an R package that can compute summary statistics from simulation studies. rsimsum is modelled upon a similar package available in Stata, the user-written command simsum (White I.R., 2010).
The aim of rsimsum is to help to report simulation studies, including understanding the role of chance in results of simulation studies: Monte Carlo standard errors and confidence intervals based on
them are computed and presented to the user by default. rsimsum can compute a wide variety of summary statistics: bias, empirical and model-based standard errors, relative precision, relative error
in model standard error, mean squared error, and coverage. Further details on each summary statistic are presented elsewhere (White I.R., 2010; Morris et al, 2019).
The main function of rsimsum is called simsum and can handle simulation studies with a single estimand of interest at a time. Missing values are excluded by default, and it is possible to define
boundary values to drop estimated values or standard errors exceeding such limits. It is possible to define a variable representing methods compared with the simulation study, and it is possible to
define by factors, that is, factors that vary between the different simulated scenarios (data-generating mechanisms, DGMs). However, methods and DGMs are not strictly required: in that case, a
simulation study with a single scenario and a single method is assumed. Finally, rsimsum provides a function named multisimsum that allows summarising simulation studies with multiple estimands as well.
An important step of reporting a simulation study consists in visualising the results; therefore, rsimsum exploits the R package ggplot2 to produce a portfolio of opinionated data visualisations for
quick exploration of results, inferring colours and facetting by data-generating mechanisms. rsimsum includes methods to produce (1) plots of summary statistics with confidence intervals based on
Monte Carlo standard errors (forest plots, lolly plots), (2) zipper plots to graphically visualise coverage by directly plotting confidence intervals, (3) plots for method-wise comparisons of
estimates and standard errors (scatter plots, Bland-Altman plots, ridgeline plots), and (4) heat plots. The latter is a visualisation type that has not been traditionally used to present results of
simulation studies, and consists in a mosaic plot where the factor on the x-axis is the methods compared with the current simulation study and the factor on the y-axis is the data-generating factors.
Each tile of the mosaic plot is coloured according to the value of the summary statistic of interest, with a red colour representing values above the target value and a blue colour representing
values below the target.
You can install rsimsum from CRAN:
Alternatively, it is possible to install the development version from GitHub using the remotes package:
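The code chunks appear to have been lost in extraction; the standard commands would presumably be as follows (the GitHub repository path is an assumption and should be checked against the package page):

```r
# Install the released version from CRAN:
install.packages("rsimsum")

# Or install the development version from GitHub:
# remotes::install_github("ellessenne/rsimsum")  # repository path assumed
```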
This is a basic example using data from a simulation study on missing data (type help("MIsim", package = "rsimsum") in the R console for more information):
We set x = TRUE as it will be required for some plot types.
Summarising the results:
rsimsum comes with 5 vignettes. In particular, check out the introductory one:
The list of vignettes could be obtained by typing the following in the R console:
Visualising results
As of version 0.2.0, rsimsum can produce a variety of plots: among others, lolly plots, forest plots, zipper plots, etc.:
With rsimsum 0.5.0 the plotting functionality has been completely rewritten, and new plot types have been implemented:
• Scatter plots for method-wise comparisons, including Bland-Altman type plots;
Nested loop plots have been implemented in rsimsum 0.6.0:
Finally, as of rsimsum 0.7.1 contour plots and hexbin plots have been implemented as well:
They provide a useful alternative when there are several data points with large overlap (e.g. in a scatterplot).
The plotting functionality now extend the S3 generic autoplot: see ?ggplot2::autoplot and ?rsimsum::autoplot.simsum for further details.
More details and information can be found in the vignettes dedicated to plotting:
If you find rsimsum useful, please cite it in your publications:
• White, I.R. 2010. simsum: Analyses of simulation studies including Monte Carlo error. The Stata Journal, 10(3): 369-385
• Morris, T.P., White, I.R. and Crowther, M.J. 2019. Using simulation studies to evaluate statistical methods. Statistics in Medicine, 38: 2074-2102
• Gasparini, A. 2018. rsimsum: Summarise results from Monte Carlo simulation studies. Journal of Open Source Software, 3(26):739 | {"url":"https://pbil.univ-lyon1.fr/CRAN/web/packages/rsimsum/readme/README.html","timestamp":"2024-11-05T15:53:41Z","content_type":"application/xhtml+xml","content_length":"28240","record_id":"<urn:uuid:0512e397-ccd2-4654-8b39-03c1e1709aa3>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00747.warc.gz"} |
date and times calculating the difference - on two different rows
• Hi - This is a great formula. I am trying to get it working where the date and times were are calculating the difference on are on two different rows.
EG Row 1 has a Date and Time
Row 2 has a Date and time
Calculate the difference between the Date and Time in Row 2 versus row 1.
I have formula working in each row, just not sure how to adjust it to compare two different rows. Any suggestions?
• If you want to calculate between two different rows then you'll need a way to identify those rows.
One way is to add a couple of columns to get the Row Number onto the row data. Then you can do something like "calculate the difference between the date on the current row, and the row just above
To do that:
1. Add an Auto Number type column called Auto
2. Add a Text/Number type column called Row Number
3. In one of the Row Number cells, enter the following formula which will find the current row's auto-number in the column of auto-numbers and return it's position…which equates to the row
number. Once you enter the formula, right click and choose Convert to Column Formula: =MATCH(Auto:Auto,Auto@row)
4. Now you can write your Date formula like this and set it to be a Column Formula (right click and choose Convert to Column Formula after you have entered it): =Date@row - INDEX(Date:Date, [Row Number]@row - 1)
The INDEX function looks at the Date column and returns the date from the position that equals the current row number minus 1…the previous row.
You can adjust this based on how you want to do the lookup. If instead you're using a checkbox instead of a Row Number, then you can ignore the row number setup above and use a formula like this
= Date@row - INDEX (Date:Date, MATCH (1,Checkbox:Checkbox,0))
In this formula, the MATCH function finds the first checked checkbox (the "1" means true) in the Checkbox column. The 0 in MATCH means that it's searching the whole column in unordered fashion
instead of alphabetically. Then the INDEX takes the row number that MATCH provides and returns the Date from that row. That's then subtracted from the Date on your current row.
You can vary the criteria to fit your needs in the MATCH statement.
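For readers who want to prototype the same previous-row logic outside Smartsheet, a plain-Python sketch (the timestamps are invented for illustration):

```python
from datetime import datetime

# Each row's Date value, in sheet order (sample data only)
dates = [
    datetime(2024, 3, 1, 9, 30),
    datetime(2024, 3, 2, 14, 0),
    datetime(2024, 3, 4, 8, 15),
]

# Difference between each row and the row just above it,
# mirroring =Date@row - INDEX(Date:Date, [Row Number]@row - 1)
diffs = [dates[i] - dates[i - 1] for i in range(1, len(dates))]
for d in diffs:
    print(d.total_seconds() / 3600, "hours")
```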
BRIAN RICHARDSON | PMO TOOLS AND RESOURCES | HE|HIM
SEATTLE WA, USA
| {"url":"https://community.smartsheet.com/discussion/123047/date-and-times-calculating-the-difference-on-two-different-rows","timestamp":"2024-11-15T00:59:01Z","content_type":"text/html","content_length":"394101","record_id":"<urn:uuid:596ae9f1-4861-40b5-b85a-bf24666b8b1c>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00554.warc.gz"}
For a symmetric positive semidefinite linear system of equations $\mathcal{Q} {\bf x} = {\bf b}$, where ${\bf x} = (x_1,\ldots,x_s)$ is partitioned into $s$ blocks, with $s \geq 2$, we show that each
cycle of the classical block symmetric Gauss-Seidel (block sGS) method exactly solves the associated quadratic programming (QP) problem but added with an … Read more | {"url":"https://optimization-online.org/tag/schur-complement/","timestamp":"2024-11-10T05:10:51Z","content_type":"text/html","content_length":"89026","record_id":"<urn:uuid:54d47cce-e34a-4c20-9d19-19372611c3c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00070.warc.gz"} |
composition series
In group theory, a composition series is a special kind of subnormal series which gives a decomposition of a group into simple groups.
Specifically, if G is a group, then a composition series for G is a sequence
G = G[1] > G[2] > ... > G[n] > G[n+1] = {1}
(where {1} denotes the trivial group and G>H means "H is a subgroup of G") such that for i=1,...,n,
1. G[i+1] is a normal subgroup of G[i] and
2. the quotient group G[i]/G[i+1] is a simple group.
These quotient groups are called composition factors for G.
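A small worked example: for the additive cyclic group Z/12Z, writing <k> for the subgroup generated by k, one composition series is

```latex
\mathbb{Z}/12\mathbb{Z} \;>\; \langle 2 \rangle \;>\; \langle 4 \rangle \;>\; \{0\},
\qquad
\text{with composition factors } \mathbb{Z}/2,\ \mathbb{Z}/2,\ \mathbb{Z}/3.
```

Another series for the same group, Z/12Z > <3> > <6> > {0}, has factors Z/3, Z/2, Z/2: a different order, but the same factors counted with multiplicity, just as the Jordan-Holder theorem below guarantees.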
It can be shown, using induction on the size of the group and the isomorphism theorems, that every finite group has a composition series. In general, the composition series will not be unique, but
the Jordan-Hölder theorem states that the composition factors will be unique for each group (counting multiplicity). | {"url":"https://m.everything2.com/title/composition+series","timestamp":"2024-11-07T14:07:25Z","content_type":"text/html","content_length":"26591","record_id":"<urn:uuid:694aeb80-9f6d-466d-8ecf-b33f1bcbf668>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00205.warc.gz"}
DbSchema | How to Perform Aggregate Functions in PostgreSQL?
In PostgreSQL, aggregate functions allow you to perform calculations on sets of values and return a single result. These functions are commonly used in database queries to summarize or transform
data. In this article, we will explore how to use aggregate functions in both psql (PostgreSQL command-line interface) and DbSchema (a graphical database tool).
What is an Aggregate Function?
An aggregate function is a function that operates on a set of values and returns a single value. It takes multiple input values and produces a single output value based on those inputs. Examples of
aggregate functions include calculating the average, sum, count, maximum, and minimum values from a dataset.
Advantages and Limitations of Using Aggregate Functions
Using aggregate functions in your queries offers several advantages, such as:
• Simplifying complex calculations: Aggregate functions provide a concise way to perform calculations on a large amount of data.
• Efficient data summarization: They allow you to summarize data quickly without having to retrieve and process every individual record.
• Improved query readability: By using aggregate functions, you can express your queries in a more intuitive and readable manner.
However, there are also some limitations to keep in mind when using aggregate functions:
• Grouping requirements: Many aggregate functions require grouping the data by one or more columns to produce meaningful results.
• Limited flexibility: Aggregate functions operate on entire columns or groups, so they may not be suitable for all types of calculations or transformations.
• Performance considerations: When dealing with large datasets, aggregate functions can impact query performance, so optimization techniques may be necessary.
Restrictions on Using Aggregate Functions
While aggregate functions offer powerful capabilities, there are a few restrictions to be aware of:
• Cannot be nested within each other: Aggregate functions cannot be used as arguments for other aggregate functions. However, subqueries can be employed to work around this limitation.
• Ambiguity in column selection: When using aggregate functions with non-aggregated columns, you must specify how those columns should be grouped or aggregated.
Aggregate Functions Overview
Here is a brief explanation of some commonly used aggregate functions:
Aggregate Function Description
AVG() Calculates the average of a set of values.
COUNT() Counts the number of rows in a dataset. Can be used with or without specifying a column.
MIN() Retrieves the minimum value from a dataset. Can be used with numerical, string, or date/time values.
MAX() Retrieves the maximum value from a dataset. Can be used with various data types.
SUM() Calculates the sum of a set of values. Works with numerical data and returns the total sum.
Performing Aggregate Functions in psql and DbSchema
Using Aggregate Functions in psql
To perform aggregate functions in psql, follow these steps:
1. Connect to your PostgreSQL database using psql.
For installation and establishing connection refer to PostgreSQL-How to create a database?
2. Construct a SELECT statement with the desired aggregate function(s) and column(s).
3. Optionally, use the GROUP BY clause to group the data based on one or more columns.
4. Execute the query to retrieve the aggregated result.
Sample Database:
employee Table:
id name age salary
1 John 25 5000
2 Sarah 28 6000
3 Michael 30 5500
4 Jessica 27 6500
5 William 32 7000
The AVG() function calculates the average of a set of values. It takes a column or an expression as input and returns the average value.
-- Calculate the average salary of employees
SELECT AVG(salary) AS average_salary FROM employee;
Result from Query:
For the sample data above, the query returns average_salary = 6000.
The COUNT() function counts the number of rows in a dataset. It can be used with or without specifying a column. When used without a column, it counts all the rows in the result set.
-- Count the number of employees
SELECT COUNT(*) AS employee_count FROM employee;
Result from Query:
For the sample data above, the query returns employee_count = 5.
The MIN() function retrieves the minimum value from a dataset. It can be used with numerical, string, or date/time values.
-- Find the minimum age of employee in the employee table
SELECT MIN(age) AS minimum_age FROM employee;
Result from Query:
For the sample data above, the query returns minimum_age = 25.
The MAX() function retrieves the maximum value from a dataset. It can also be used with various data types.
-- Find the maximum salary of employee in the employee table
SELECT MAX(salary) AS maximum_salary FROM employee;
Result from Query:
For the sample data above, the query returns maximum_salary = 7000.
The SUM() function calculates the sum of a set of values. It works with numerical data and returns the total sum.
-- Calculate the total salary of all the employees present in the employee table
SELECT SUM(salary) AS total_salary FROM employee;
Result from Query:
For the sample data above, the query returns total_salary = 30000.
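As a sanity check on the five examples, the same aggregates can be reproduced from the sample rows in plain Python (an illustration only; PostgreSQL computes these server-side):

```python
from statistics import mean

# The sample employee table from above: (id, name, age, salary)
employees = [
    (1, "John", 25, 5000),
    (2, "Sarah", 28, 6000),
    (3, "Michael", 30, 5500),
    (4, "Jessica", 27, 6500),
    (5, "William", 32, 7000),
]

salaries = [row[3] for row in employees]
ages = [row[2] for row in employees]

print("AVG(salary) =", mean(salaries))  # 6000
print("COUNT(*)    =", len(employees))  # 5
print("MIN(age)    =", min(ages))       # 25
print("MAX(salary) =", max(salaries))   # 7000
print("SUM(salary) =", sum(salaries))   # 30000
```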
Using Aggregate Functions in DbSchema
To perform aggregate functions in DbSchema, follow these steps:
1. Launch DbSchema and connect to your PostgreSQL database.
2. Navigate to the SQL editor or query builder interface.
3. Build your query using the graphical tools or write the SQL code directly.
4. Include the desired aggregate function(s) and column(s) in your query.
5. Execute the query to retrieve the aggregated result.
-- Calculate the total salary of all the employees present in the employee table
SELECT SUM(salary) AS total_salary FROM employee;
-- Calculate the average salary of employees
SELECT AVG(salary) AS average_salary FROM employee;
-- Count the number of employees
SELECT COUNT(*) AS employee_count FROM employee;
-- Find the minimum age of employee in the employee table
SELECT MIN(age) AS minimum_age FROM employee;
-- Find the maximum salary of employee in the employee table
SELECT MAX(salary) AS maximum_salary FROM employee;
Implement Aggregate Functions and Visually Manage PostgreSQL using DbSchema
DbSchema is a PostgreSQL client and visual designer. DbSchema has a free Community Edition, which can be downloaded here.
Implement Aggregate Functions
• Start the application and connect to the Postgres database.
• Navigate to SQL Editor section and build your query.
• Include the desired aggregate function in your query.
• Execute the query.
Aggregate functions play a crucial role in data analysis and reporting tasks. They allow you to derive valuable insights from your database by summarizing and transforming data. In this article, we
explored the concept of aggregate functions, their advantages, limitations, and how to use them in both psql and DbSchema. By mastering these functions, you can enhance your ability to extract
meaningful information from your PostgreSQL database.
Remember, the official documentation for both PostgreSQL and DbSchema is the most reliable source for up-to-date information. These resources can provide more in-depth knowledge and cover other
complex aspects of creating and managing databases. | {"url":"https://dbschema.com/2023/05/30/postgres/aggregate-functions/","timestamp":"2024-11-04T23:14:11Z","content_type":"application/xhtml+xml","content_length":"35259","record_id":"<urn:uuid:2cae41e4-f974-4762-ab74-cf92bce0afd6>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00033.warc.gz"} |
Retirement calculators & financial tools. Want help creating a budget? Calculating your needs for retirement? Saving for your children's education? TIAA's. Retirement calculator · Annual Income
Required (today's dollars) · Number of years until retirement · Number of years required after retirement · Annual Inflation. Your calculation includes an assumed amount for Canada Pension Plan (CPP)
/ Quebec Pension Plan (QPP) and Old Age Security (OAS). Calculate your results. Why no single retirement target covers everyone · Start by calculating your future expenses · Next, add up all your
potential income sources · Plan ahead to close. Are you saving enough money for retirement? Use our retirement savings calculator to help find out how much money you need to save for retirement.
Use this calculator to find out how much money you might need in retirement and whether your current savings plan could get you to your goal. Use your current income as a starting point · Project
your retirement expenses · Decide when you'll retire · Estimate your life expectancy · Identify your sources. This calculator can help with planning the financial aspects of your retirement, such as
providing an idea where you stand in terms of retirement savings. How old are you? At what age would you like to retire? At today's prices, how much monthly income will you need while retired? How
much do you have to invest. We are assuming that you are between 50 and 70 years old and that you will retire by a certain age. This worksheet will calculate how many years you will live in
will you need in retirement? Are you on track? Compare what you may have to what you will need. Curious to know how much you need to retire? Our retirement savings calculator will help you understand
how much you'll have and how much you'll need. Saving for retirement can be daunting. Use our retirement calculator to see how much you should be saving each month to retire when and how you want to.
Our retirement calculator estimates your savings based on your current contributions and then calculates how that money will stretch in today's dollars. Here's a simple rule for calculating how much
money you need to retire: at least 1x your salary at 30, 3x at 40, 6x at 50, 8x at 60, and 10x at retirement. Orange Money® is the money you save for tomorrow, today. myOrangeMoney® will show you the future
monthly income you may need and your progress toward that goal.
You can estimate what you will need to retire comfortably by using your current level of expenses, compounded yearly to the retirement age with an appropriate. Our retirement calculator estimates
your retirement savings based on your current contributions, and then calculates how your savings will stretch in. Unsure how much you need to save for retirement? Our calculator can help bring
clarity and offer tips on saving for the retirement of your dreams. "Full Retirement Age" is a point in time between age 66 and 67, which we use to determine your benefit amount, as well as your
family's benefits. Calculating the amount of money you'll need for retirement can be confusing. Here are the factors to consider — and a helpful tool to make it easier. The Personal
Retirement Calculator is provided by one or more third party service providers. However, the information generated by the calculator is developed. To estimate your retirement incomes from various
sources, you will need to work through a series of modules. You will then need to compare them to your goal. Typically 10 to 12 times your annual income at retirement age. While there is no
one-size-fits-all plan, there are some common guidelines and benchmarks. A retirement savings calculator is a handy planning tool that lets you see how much you might end up with during retirement
based on how much you save monthly.
To determine how much money you'll need when you retire, enter your desired retirement age and the number of years you expect to draw on your retirement income. Are you saving enough for retirement?
SmartAsset's award-winning calculator can help you determine exactly how much you need to save to retire. This calculator helps you work out how much income you will need in retirement. Or, to
estimate how much super you will have, try our super calculator. The good news? You don't have to do it yourself. One option is using an online retirement tool, like a retirement expenses worksheet
or calculator. Both can. Our Retirement Calculator estimates the future value of your retirement savings and determines how much more you need to save each month.
How To Calculate Your Retirement Savings Goal
Assumptions Required To Estimate How Much Money You Need To Retire: all retirement calculators require the same basic inputs to work their magic. Estimate your retirement expenses by considering factors such as housing, healthcare, daily living costs, travel and hobbies. It takes planning and commitment and, yes, money. Only about half of Americans have calculated how much they need to save for retirement. Use this retirement calculator to create your retirement plan. View your retirement savings balance and calculate your withdrawals for each year. The first step to retirement planning is determining the amount of money you would need to maintain your lifestyle and fulfil your post-retirement aspirations. We built a simple retirement savings calculator to help you answer the question: "how much do I need to retire?". Plan your retirement effortlessly. Find out how much you need, whether you have enough, and actionable steps to achieve the retirement you want. Calculating the future balance for each month until retirement: each month, the value of the account is multiplied by a monthly market growth factor. An industry rule of thumb for estimating how much retirement savings you'll need is to assume you'll withdraw 4% of your retirement savings each year in retirement. Experts estimate that you'll need between 7 to 10 times your annual salary in order to retire comfortably. However, this amount is dependent on many different factors.
corpus? = %/12 = PMT = Inflation adjusted monthly income at retirement = 18,02,/12 = Rs 1,50,
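A sketch of the arithmetic behind two rules of thumb mentioned above (the 4% withdrawal guideline and saving toward a target). All figures and the assumed 5% return are illustrative and not taken from this page:

```python
def nest_egg_4pct(annual_expenses):
    """Savings needed so a 4% first-year withdrawal covers expenses."""
    return annual_expenses / 0.04

def monthly_saving(target, years, annual_return=0.05):
    """Level monthly deposit that grows to `target` after `years`,
    assuming returns compound monthly (future value of an annuity)."""
    r = annual_return / 12
    n = years * 12
    return target * r / ((1 + r) ** n - 1)

print(nest_egg_4pct(40_000))                    # 1000000.0
print(round(monthly_saving(1_000_000, 30), 2))  # deposit needed at an assumed 5% return
```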
{"url":"https://mto-yug.ru/learn/how-to-calculate-retirement-needs.php","timestamp":"2024-11-04T12:18:18Z","content_type":"text/html","content_length":"16443","record_id":"<urn:uuid:89a2170c-678f-48b5-89d3-623420b8aad3>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00252.warc.gz"}
Advanced Python Arrays - Introducing NumPy
Written by Alex Armstrong
Sunday, 21 April 2013
Integer Indexing
NumPy arrays have other tricks when it comes to indexing mostly borrowed from Matlab and Octave. For example you can set up an indexing array of integers and these will be used to select elements
along the corresponding dimension.
So for example, the expression myArray[[0,2],[1,2]] picks out myArray[0,1] and myArray[2,2], i.e.
array([2, 9])
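A self-contained version of the example above (the 3×3 array itself is reconstructed from the printed results, since its definition appears on an earlier page of the article):

```python
import numpy as np

# Array assumed from the outputs shown in this article
myArray = np.array([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]])

# Row indices [0, 2] pair elementwise with column indices [1, 2],
# so this selects myArray[0, 1] and myArray[2, 2]
selected = myArray[[0, 2], [1, 2]]
print(selected)  # [2 9]
```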
You can use this technique to access arbitrary sub matrices of elements or regular sub matrices with indexes determined by other matrices.
There are lots of rules that govern how the lists of integers are applied, i.e. what happens if you don't specify enough lists and so on, but they are all obvious defaults.
Boolean Indexing
This is another indexing method borrowed from Matlab and Octave. You can use a Boolean array to pick out the elements you want to work with. If the Boolean array has a true element then the element
in the array being indexed is selected.
Now at first thought this doesn't sound like a very useful option. Why would you go to the trouble of constructing a Boolean array to pick elements from an array? The answer is that it is very easy to generate one automatically.
One of the things that NumPy introduces is elementwise functions and operations including elementwise comparisons.
For example, the elementwise comparison myArray > 1 produces a Boolean array:
[[False, True, True],
[ True, True, True],
[ True, True, True]]
This makes it easy to index an array on a condition. For example, suppose you wanted to set every element of an array smaller than 0.1 to zero: a single Boolean-masked assignment, myArray[myArray < 0.1] = 0, does it.
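A runnable sketch of that idiom (the data values here are made up for illustration):

```python
import numpy as np

data = np.array([0.05, 0.2, 0.01, 0.9])

# The comparison produces a Boolean mask; assigning through the
# mask overwrites only the selected elements, in place
data[data < 0.1] = 0
print(data)  # [0.  0.2 0.  0.9]
```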
Now you can see why a Boolean index is useful!
Array Functions
There are many features of NumPy that are beyond the scope of this introduction but the availability of array functions is one that you need to know about. NumPy provides many apparently redundant copies of functions that are already provided in math or other modules. For example, np.sqrt seems to be identical to math.sqrt but it can be applied to arrays and it will be computed on each element. For example, np.sqrt(myArray) returns:
array([[ 1. , 1.41421356, 1.73205081],
[ 2. , 2.23606798, 2.44948974],
[ 2.64575131, 2.82842712, 3. ]])
If you want to compute an elementwise function on a NumPy array then check to see if it is provided within NumPy first - there are a lot of functions for general mathematics, bit manipulation and matrix arithmetic. For example, to multiply two matrices together, A.B, just use np.dot(A, B).
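A short check of the difference between np.dot and the elementwise * operator (matrices chosen for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# True matrix product A.B
C = np.dot(A, B)
print(C)      # [[19 22]
              #  [43 50]]

# By contrast, A * B multiplies elementwise
print(A * B)  # [[ 5 12]
              #  [21 32]]
```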
There is so much more to say about NumPy so look out for another installment on working with arrays.
Last Updated ( Wednesday, 27 February 2019 ) | {"url":"https://www.i-programmer.info/programming/python/5785-advanced-python-arrays-introducing-numpy.html?start=2","timestamp":"2024-11-14T12:00:28Z","content_type":"text/html","content_length":"32321","record_id":"<urn:uuid:667ef30d-ef94-437a-831b-e13ea0f0db6a>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00195.warc.gz"} |
What factors determine nominal interest rates
Nominal interest rate refers to the interest rate before taking inflation into account. Nominal can also refer to the advertised or stated interest rate on a loan, without taking into account any
fees or compounding of interest. The nominal interest rate formula can be calculated as: r = m × [(1 + i)^(1/m) - 1].
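As a quick sketch of the nominal-rate compounding formula and the Fisher relation quoted in this article (the rates used are illustrative):

```python
# Nominal annual rate for an effective annual rate i compounded
# m times per year: r = m * ((1 + i)**(1/m) - 1)
def nominal_rate(i, m):
    return m * ((1 + i) ** (1 / m) - 1)

# Fisher relation: nominal = (1 + real) * (1 + inflation) - 1
def fisher_nominal(real, inflation):
    return (1 + real) * (1 + inflation) - 1

print(round(nominal_rate(0.06, 12), 6))      # monthly compounding of a 6% effective rate
print(round(fisher_nominal(0.02, 0.03), 4))  # 0.0506
```

Subtracting instead of compounding (nominal minus inflation) gives the commonly used approximation of the real rate, which the article notes is fairly accurate at low inflation.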
Six factors that determine the nominal interest rate on a security “Inflation – A continual increase in the price level of a baskets of goods and services throughout the economy as a whole. Real
risk-free rate – Risk-free rate adjusted for inflation; generally lower than nominal risk-free rates at any particular time. Nominal interest rate formula = [(1 + Real interest rate) * (1 + Inflation
rate)] – 1. Real Interest Rate is the interest rate that takes inflation, compounding effect and other charges into account. Inflation is the most important factor that impacts the nominal interest
rate. Like many economic variables in a reasonably free-market economy, interest rates are determined by the forces of supply and demand. Specifically, nominal interest rates, which is the monetary
return on saving, is determined by the supply and demand of money in an economy. The real interest rate is the nominal rate of interest minus inflation, which can be expressed approximately by the
following formula: Real Interest Rate = Nominal Interest Rate – Inflation Rate = Growth of Purchasing Power. For low rates of inflation, the above equation is fairly accurate. Defining Interest Rate
Components. The interest rate components are the factors that determine the interest rate for investments. Interest Rate Components Real Interest Rates. One of the interest rate components is the
real interest rate, which is the compensation, over and above inflation, that a lender demands to lend his money. Answer - 5 Sep, 2012 The six factors are Interest rate = RRF + IP + DRP + LP + PRP +
CRP wher.
Nominal interest rates refer to interest rates that do not take into consideration inflation. Nominal interest rates increase when the central bank feels that inflation is high and that the money
supply should be tightened.
Factors that can affect nominal interest rates in financial transactions includes a) special provisions b) liquidity and default risk c) inflation and real interest rate d) 19 Sep 2016 In short, the
real interest rate is a critical factor in almost every decision faced real interest rates and highlights two key forces that help determine them. Second, the likelihood of nominal interest rates
hitting the zero lower 29 Sep 2017 Understand the key factors that affect your interest rate. Use our Explore Rates Tool to see how they may affect interest rates for loans in your inflation is low
and the (nominal) policy rate is tied to a floor (the 'lower bound'). It is also determines the estimations and explanatory factors of r*. 1.3.1 Time In other words, to determine the expected real
interest rate, the investor would need to subtract the expected inflation rate from the nominal interest rate. 11 Dec 2019 What is Bank Rate? How changes in Bank Rate affect the economy. What are
interest rates? Interest Since 1870, nominal interest rates in the core advanced economies have never Instead, deleveraging seems to be the key factor determining the speed of
If you have an interest in interest, read on to learn more. Factors out of your control. Interest rates are partly based on economic factors that shift over time. You may not have any sway over
these, but once you know what to look for, you can watch for changes and take advantage of them. Supply and demand: When you think of interest rates as a price for borrowing money, it makes sense
that they would be affected by supply and demand. In lending, an increase in the demand for money, or a
Nominal interest rates are the rates advertised for investments or loans that do
not factor in the rate of inflation. The primary difference between nominal interest rates and real interest rates is, in fact, simply whether or not they factor in the rate of inflation in any given
market economy. How Interest Rates are Determined. Supply and Demand. Interest rate levels are a factor of the supply and demand of credit: an increase in the demand for money or credit will raise
Inflation. Government. Interest keeps the economy moving by encouraging people to borrow, to lend—and to spend.
mestic factors affect nominal interest rates, while domestic interest rates in. Singapore are fully determined by the foreign interest rates and exchange rate rela-.
These firms use capital, labor, and oil as factor inputs. Goods prices are determined by Calvo-Yun staggered contracts. Trade occurs at the level of intermediate goods. The amount of interest paid is determined by the amount of money borrowed, the length of the loan, and the interest rate. This interest rate is known as the nominal rate and it may vary from time to time. The difference between the interest rates for different types of loans reflects the risk factor.
{"url":"https://topbitxxrbckmy.netlify.app/burrall23219zu/what-factors-determine-nominal-interest-rates-zat","timestamp":"2024-11-08T21:25:08Z","content_type":"text/html","content_length":"38610","record_id":"<urn:uuid:0523760e-9182-44b5-834d-965a1f8581fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00325.warc.gz"}
Analysis and Construction of Artificial Neural Networks for the Heat Equations, and Their Associated Parameters, Depths, and Accuracies.
Date of Graduation
Document Type
Degree Name
Doctor of Philosophy in Mathematics (PhD)
Mathematical Sciences
Padgett, Joshua L.
Committee Member
Nakarmi, Ukash
Second Committee Member
Chen, Jiahui
Third Committee Member
Kaman, Tulin
Artificial neural networks; Multi-Level Picard; Feynman-Kac; Monte Carlo method
This dissertation seeks to explore a certain calculus for artificial neural networks. Specifically, we will be looking at versions of the heat equation and exploring strategies for approximating them. Our strategy towards the beginning will be to take a technique called Multi-Level Picard (MLP) and present a simplified version of it, showing that it converges to a solution of the equation (∂u_d/∂t)(t, x) = (∇²_x u_d)(t, x). We will then take a small detour exploring the viscosity super-solution properties of solutions to such equations. It is here that we will first encounter Feynman-Kac, and see that solutions to these equations can be expressed as the expected value of a certain stochastic integral. The final part of the dissertation will be dedicated to expanding a certain neural network framework. We will build on this framework by introducing new operations, namely raising to a power, and use this to build out neural network polynomials. This opens the gateway for approximating transcendental functions such as exp(x), sin(x), and cos(x). This, coupled with a trapezoidal rule mechanism for integration, allows us to approximate expressions of the form exp(∫_a^b □ dt). We will, in the last chapter, look at how the technology of neural networks developed in the previous two chapters works towards approximating the expression that Feynman-Kac asserts must be the solution to these modified heat equations. We will then end by giving approximate bounds for the error in the Monte Carlo method. All the while we will maintain that the parameter estimates and depth estimates remain polynomial in 1/ε. As an added bonus we will also look at the simplified MLP technique from the previous chapters of this dissertation and show that yes, they can indeed be approximated with artificial neural networks, and that yes, they can be done so with neural networks whose parameters and depth counts grow only polynomially in 1/ε.
Our appendix will contain code listings of these neural network operations, some of the architectures, and some small scale simulation results.
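The Feynman-Kac representation described in the abstract lends itself to a very small Monte Carlo illustration. The following is not code from the dissertation; it is a minimal sketch, assuming the 1-dimensional heat equation u_t = u_xx with initial data g, whose Feynman-Kac representation is u(t, x) = E[g(x + sqrt(2t) Z)] with Z standard normal. For g(x) = x^2 the exact solution is u(t, x) = x^2 + 2t, which gives a check on the estimator.

```python
import math
import random

def heat_mc(g, t, x, n_samples=200_000, seed=0):
    """Monte Carlo Feynman-Kac estimate of u(t, x) for u_t = u_xx,
    u(0, .) = g, using u(t, x) = E[g(x + sqrt(2t) * Z)]."""
    rng = random.Random(seed)
    scale = math.sqrt(2.0 * t)
    total = 0.0
    for _ in range(n_samples):
        total += g(x + scale * rng.gauss(0.0, 1.0))
    return total / n_samples

# Check against the exact solution u(t, x) = x**2 + 2*t for g(x) = x**2
est = heat_mc(lambda y: y * y, t=0.5, x=1.0)
print(est)  # close to 2.0; Monte Carlo error shrinks like 1/sqrt(n_samples)
```

The error bounds discussed in the final chapter concern exactly this kind of estimator, whose accuracy improves like O(n^(-1/2)) in the number of samples.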
Rafi, S. A. (2024). Analysis and Construction of Artificial Neural Networks for the Heat Equations, and Their Associated Parameters, Depths, and Accuracies.. Graduate Theses and Dissertations
Retrieved from https://scholarworks.uark.edu/etd/5277 | {"url":"https://scholarworks.uark.edu/etd/5277/","timestamp":"2024-11-11T22:58:24Z","content_type":"text/html","content_length":"41992","record_id":"<urn:uuid:66775c92-8d3c-41c9-bccd-951f63676618>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00680.warc.gz"} |
Lesson 11
Approximating Pi
11.1: More Sides (10 minutes)
The goal of this activity is to further reinforce the concept that polygons with many sides are nearly circular. Students find the difference in area between a square and the circle it is inscribed
in, then compare it to the difference in area between a hexagon and the circle it is inscribed in. It is also an opportunity to practice decomposing a shape, which will be essential to the
generalization in this lesson.
Student Facing
Calculate the area of the shaded regions.
Anticipated Misconceptions
If students are struggling, ask them what shapes they could decompose the hexagon into. (Two trapezoids or six triangles.)
Activity Synthesis
Use the applet to demonstrate what happens to the areas of the polygons as the number of sides increases.
“What if the polygon has 10 sides? 20?” (The shaded region would be very small.) Reinforce the idea that the more sides an inscribed polygon has, the closer it is to a circle.
11.2: N Sides (15 minutes)
In this activity students build off the specific calculations from the previous lesson to generalize the perimeter of a polygon inscribed in a circle of radius 1. The relatively unstructured
presentation of this activity is purposeful. Students work with their groups to determine what information they need, how to calculate in the specific cases, and how they can express those repeated
procedures in a generalized formula (MP8). Monitor for groups who have generalized any part of the process.
Encourage students to refer to the examples from the previous lesson as they work to generalize.
Action and Expression: Internalize Executive Functions. Chunk this task into manageable parts for students who benefit from support with organizational skills in problem solving. Check in with
students after the first 2–3 minutes of work time. Look for students who are struggling to begin and review examples from the previous lesson. Record their thinking on a display and keep the work
visible as students continue to work.
Supports accessibility for: Organization; Attention
Student Facing
The applet shows a regular \(n\)-sided polygon inscribed in a circle.
Come up with a general formula for the perimeter of the polygon in terms of \(n\). Explain or show your reasoning.
Student Facing
Here is one part of a regular \(n\)-sided polygon inscribed in a circle of radius 1.
Come up with a general formula for the perimeter of the polygon in terms of \(n\). Explain or show your reasoning.
Anticipated Misconceptions
If students are struggling invite them to go back to the problems from the previous lesson to generalize the process. (Draw in the altitude. Find the measure of the central angle. Find the length of
the opposite leg.) Suggest students generalize each step before trying to write a single formula.
Activity Synthesis
Invite previously selected groups to share one step at a time:
• generalize the angle measure
• generalize the segment length
• extend to the whole perimeter
Invite another student to summarize by explaining where each piece of \(P=2n \boldcdot \sin \left( \frac{360}{2n} \right)\) appears in the diagram.
Representing, Conversing: MLR7 Compare and Connect. Use this routine to prepare students for the whole-class discussion. At the appropriate time, invite students to create a visual display showing
their general formula for the perimeter of the polygon in terms of \(n\) and their reasoning. Allow students time to quietly circulate and analyze the formulas and reasoning in at least 2 other
displays in the room. Give students quiet think time to consider what is the same and what is different. Next, ask students to find a partner to discuss what they noticed. Listen for and amplify
observations that highlight what information each group used, any calculations they completed, and how they expressed those procedures in a generalized formula.
Design Principle(s): Optimize output (for generalization); Cultivate conversation
11.3: So Many Sides (15 minutes)
In this activity students will use the formula they developed in the previous activity. They will see how quickly this formula approximates \(\pi\) and consider how accurate the approximation is for
polygons of various side lengths.
Invite students to use the formula from the previous activity to calculate the perimeter of a square. (5.657) Tell students to round to the thousandths place for this activity. “Does that seem close
to the perimeter of the circle? What is the circumference of a circle with radius 1?” (\(2\pi=6.283\)) “How close is the approximation?” (\(6.283-5.657=0.626\))
“Since the circumference is \(2\pi\) we could use this formula to approximate pi. This is what mathematicians did before they knew the value of pi. Rewrite the formula to find an expression that
gives the value of \(\pi\) rather than \(2\pi\).” (\(n \boldcdot \sin \left( \frac{360}{2n} \right)\))
“How could we get a better approximation of \(\pi\) than the square gives?” (More sides!)
Engagement: Develop Effort and Persistence. Encourage and support opportunities for peer interactions. Prior to the whole-class discussion, invite students to share their work with a partner. To
support student conversation, display sentence frames such as: “First, I _____ because . . .”, “I noticed _____ so I . . .”, “Why did you . . .?”, “I agree/disagree because . . . .”
Supports accessibility for: Language; Social-emotional skills
Student Facing
Let's use the expression you came up with to approximate the value of \(\pi\).
1. How close is the approximation when \(n=6\)?
2. How close is the approximation when \(n=10\)?
3. How close is the approximation when \(n=20\)?
4. How close is the approximation when \(n=50\)?
5. What value of \(n\) approximates the value of \(\pi\) to the thousandths place?
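These evaluations are easy to script. A short sketch using the standard library (math.sin takes radians, so 360/(2n) degrees becomes π/n):

```python
import math

def pi_estimate(n):
    # half the perimeter of a regular n-gon inscribed in a unit circle:
    # n * sin(360/(2n) degrees) = n * sin(pi/n)
    return n * math.sin(math.pi / n)

for n in [6, 10, 20, 50]:
    approx = pi_estimate(n)
    print(n, round(approx, 5), round(math.pi - approx, 5))
```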
Student Facing
Are you ready for more?
Describe how to find the area of a regular \(n\)-gon with side length \(s\). Then write an expression that will give the area.
Activity Synthesis
Invite students to share the values of \(n\) they chose and how close to \(\pi\) the approximation is. Invite students who chose 72 and students who chose 102 to debate. (72 sides is enough because
there are 3 accurate digits after the decimal place. 72 sides isn't enough, you need 102 sides in order for the approximation to round to the thousandths place correctly.)
Share that people often employ this kind of thinking to program calculators to get very accurate approximations without the calculator needing to store a very long string of digits to represent \(\pi
Speaking: MLR8 Discussion Supports. Use this routine to support students in producing statements to critique the reasoning of others when sharing their responses to the last question. Provide
sentence frames for students to use, such as “That could (or couldn’t) be true because . . .”, “We can agree that . . .”, “_____ and _____ are different because . . .”, and “Another way to look at it
is . . . .” Encourage students to press each other for details as they explain by requesting each other to elaborate on an idea or give an example.
Design Principle(s): Support sense-making
Lesson Synthesis
Display the images and ask students, “What’s the same? What’s different?” (Both are approaching the circle. One estimate is too small, one is too large.)
Display the applet for all to see. Demonstrate the calculations for \(\pi\) throughout the discussion.
“In mathematical language we say that the perimeter of a polygon inscribed in a circle estimates the circumference. It gives you the lower bound for the circumference of the circle since the
perimeter is smaller than the circumference. Similarly, we say that the circumference of a circle inscribed in a polygon is estimated by the perimeter of the polygon, but since it is now outside the
circle, the perimeter of the polygon gives you the upper bound. The formula for the perimeter of the polygon with the circle inscribed inside it is \(P=2n \boldcdot \tan \left( \frac{360}{2n} \right)
\) What is the expression to approximate \(\pi\)?” (\(n \boldcdot \tan \left( \frac{360}{2n} \right)\)) “What is the range for the value of \(\pi\) starting with \(n=10\)?” (\(3.090<\pi<3.249\))
“What about \(n=50\)?” (\(3.140<\pi<3.146\))
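The quoted ranges can be verified with a few lines; the inscribed polygon gives the lower bound (sine) and the circumscribed polygon the upper bound (tangent):

```python
import math

def pi_bounds(n):
    lower = n * math.sin(math.pi / n)  # inscribed n-gon
    upper = n * math.tan(math.pi / n)  # circumscribed n-gon
    return lower, upper

for n in [10, 50]:
    lower, upper = pi_bounds(n)
    print(n, round(lower, 3), round(upper, 3))  # matches the ranges quoted above
```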
“Archimedes did this without a calculator to tell him the values of sine or tangent. In fact he didn’t even have a concept of decimals! He was able to calculate the perimeter of a 96 sided regular
polygon both inscribed in a circle and with a circle inscribed in it to say that \(3 \frac{10}{71} < \pi < 3 \frac{1}{7}\) which is impressively accurate for 250 BCE. Chinese mathematicians Liu and
Chongzhi took a similar approach but found a method that was much faster to calculate and by 480 CE calculated the range of a 12,288-sided polygon which is accurate for the first eight digits. This
was the most accurate approximation of \(\pi\) anyone could come up with for the next 800 years. Mathematicians from many other countries continued to independently discover and refine methods, and
even today people are working on better ways to use supercomputers to calculate \(\pi\) to still greater accuracy. In 2019, a team led by Emma Haruka Iwao set the world record by calculating over 31
trillion digits of \(\pi\).”
Student Facing
It's easier to work with polygons than with circles because we can decompose polygons into simple shapes such as triangles. We can use polygons to figure out things about circles. For example, we
know how to calculate the area of regular polygons inscribed in a circle of radius 1.
To find the area of this regular pentagon, let's find the area of one triangle and then multiply by 5. Drawing in the altitude creates a right triangle, so we can use trigonometry to calculate the
lengths of both \(x\) and \(h\). To find \(\theta\) use the fact that a full rotation is \(360^\circ\) and that in an isosceles triangle the altitude is also an angle bisector. So \(\theta=360 \div
10\). \(\sin(36)=\frac{x}{1}\) so \(x\) is about 0.59 units. \(\cos(36)=\frac{h}{1}\) so \(h\) is about 0.81 units. The area of the isosceles triangle is about 0.48 square units and the area of the
pentagon is 5 times that, or about 2.4 square units.
That's not very close to the area of the circle, but if we add more and more sides to the regular polygon, its area gets closer and closer to covering the entire circle. | {"url":"https://curriculum.illustrativemathematics.org/HS/teachers/2/4/11/index.html","timestamp":"2024-11-07T23:37:08Z","content_type":"text/html","content_length":"247105","record_id":"<urn:uuid:bb2408a8-5d72-4c11-9c67-ae50a0d97a3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00606.warc.gz"} |
FIC-UAI Publication Database -- Query Results
Berkovits, N., & Chandia, O. (2014). Simplified pure spinor b ghost in a curved heterotic superstring background. J. High Energy Phys., (6), 12 pp.
Chandia, O. (2014). The non-minimal heterotic pure spinor string in a curved background. J. High Energy Phys., (3), 16 pp.
Chandia, O., & Vallilo, B. C. (2015). C Ambitwistor pure spinor string in a type II supergravity background. J. High Energy Phys., (6), 15 pp.
Chandia, O., & Vallilo, B. C. (2015). Non-minimal fields of the pure spinor string in general curved backgrounds. J. High Energy Phys., (2), 16 pp.
Chandia, O., & Vallilo, B. C. (2016). On-shell type II supergravity from the ambitwistor pure spinor string. Class. Quantum Gravity, 33(18), 9 pp.
Chandia, O., Bevilaqua, L. I., & Vallilo, B. C. (2014). AdS pure spinor superstring in constant backgrounds. J. High Energy Phys., (6), 16 pp.
Chandia, O., Linch, W. D., & Vallilo, B. C. (2011). Compactification of the heterotic pure spinor superstring II. J. High Energy Phys., (10), 22 pp.
Donnay, L., Giribet, G., González, H., Puhm, A., & Rojas, F. (2023). Celestial open strings at one-loop. J. High Energy Phys., (10), 47.
Kalyaan, A., Pinilla, P., Krijt, S., Mulders, G. D., & Banzatti, A. (2021). Linking Outer Disk Pebble Dynamics and Gaps to Inner Disk Water Enrichment. Astrophys. J., 921(1), 84. | {"url":"https://ficpubs.uai.cl/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%2C%20area%20FROM%20refs%20WHERE%20keywords%20RLIKE%20%22RINGS%22%20ORDER%20BY%20first_author%2C%20author_count%2C%20author%2C%20year%2C%20title&client=&formType=sqlSearch&submit=Cite&viewType=Print&showQuery=0&showLinks=0&showRows=20&rowOffset=&wrapResults=1&citeOrder=&citeStyle=APA&exportFormat=RIS&exportType=html&exportStylesheet=&citeType=html&headerMsg=","timestamp":"2024-11-03T20:14:28Z","content_type":"text/html","content_length":"24589","record_id":"<urn:uuid:efdbef18-36a6-4598-b8d0-724343a66eb0>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00102.warc.gz"} |
On the center of fusion categories
Müger proved in 2003 that the center of a spherical fusion category C of non-zero dimension over an algebraically closed field is a modular fusion category whose dimension is the square of that of C. We generalize this theorem to a pivotal fusion category C over an arbitrary commutative ring...
Re: [sfepy-devel] Re: specifying an initial guess for non-linear problems with no ebcs
21 Dec 2016, 7:14 p.m.
Hi Robert.
I've now got the interactive form working nicely, with an initial guess passed to the problem solver as you describe in your first reply. I'm having some issues with passing function values to the
Material properties, but that might be best as a separate thread.
Thanks for all your help so far! David
On Wednesday, 21 December 2016 14:09:03 UTC+1, Robert Cimrman wrote:
Hi David,
note "approx_order=2" in the field definition in your "interactive" script (line 77). Use 1 to have the smaller matrix as in the problem description file. Or, if you do want to use the
bi-quadratic order, increase the order of numerical integration (line 93) to two. Currently you are under-integrating and that is why you keep getting a singular matrix.
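As an aside, the under-integration effect can be reproduced in isolation with plain NumPy (a generic Gauss-quadrature illustration, not SfePy code; the helper name is ours): an n-point Gauss-Legendre rule is exact only for polynomials of degree up to 2n - 1, so a 1-point rule loses the quadratic content that second-order shape functions produce, which is how spurious zero-energy modes and a singular matrix arise.

```python
import numpy as np

def gauss_integrate(f, n):
    # n-point Gauss-Legendre rule on [-1, 1]; exact for polynomials
    # of degree <= 2*n - 1.
    x, w = np.polynomial.legendre.leggauss(n)
    return float(np.dot(w, f(x)))

# The integral of x**2 over [-1, 1] is 2/3.
exact = 2.0 / 3.0
one_point = gauss_integrate(lambda x: x**2, 1)  # under-integrates: the quadratic content vanishes
two_point = gauss_integrate(lambda x: x**2, 2)  # exact for this integrand
```

The 1-point result is 0.0, so any stiffness contribution that is purely quadratic in the reference coordinate simply disappears, exactly the mechanism behind the singular matrix above.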
On 12/21/2016 01:59 PM, David Jessop wrote:
I'm getting some very strange behaviour in my "interactive" solution. For the time being I'm just solving a linear Poisson equation on a square region with a constant source term everywhere and Dirichlet BCs on the left and right boundaries. I've set up and solved the same problem using a problem description file with the simple.py routine (see myPoisson_Soln.pdf), yet the interactive form has massive oscillations in the solution (see myPoissonInteractive_solution.pdf). The error seems to be that the matrix for the interactive case is too large (1599x1599 for a domain of 21x21 cells; myPoisson.py gives 399x399 for the same number of cells in the domain) and so the inversion is singular, yet I can't find where the error in my code could be. Would someone please mind pointing it out to me?
Thanks. D
Output of ./simple.py myPoisson.py:

sfepy: left over: ['verbose', '__builtins__', 'n_step', 'dims', 'shape', '__file__', '__name__', 't1', 'center', 'UserMeshIO', 'gen_block_mesh', 't0', '__package__', 'output_dir', '_filename', 'np', 'output', '__doc__', 'mesh_hook']
sfepy: reading mesh [user] (function:mesh_hook)...
sfepy: ...done in 0.00 s
sfepy: creating regions...
sfepy:     Right
sfepy:     Top
sfepy:     Bottom
sfepy:     Omega
sfepy:     Left
sfepy: ...done in 0.00 s
sfepy: equation "Temperature":
sfepy: dw_laplace.i.Omega(cond.val, s, T) - dw_surface_integrate.2.Top(insulated.val, s) - dw_volume_lvf.2.Omega(G.val, s)
sfepy: using solvers: ts: ts nls: newton ls: ls
sfepy: updating variables...
sfepy: ...done
sfepy: setting up dof connectivities...
sfepy: ...done in 0.00 s
sfepy: matrix shape: (399, 399)
sfepy: assembling matrix graph...
sfepy: ...done in 0.00 s
sfepy: matrix structural nonzeros: 3355 (2.11e-02% fill)
sfepy: ====== time 0.000000e+00 (step 1 of 2) =====
sfepy: updating materials...
sfepy:     G
sfepy:     cond
sfepy:     insulated
sfepy: ...done in 0.00 s
sfepy: nls: iter: 0, residual: 2.530862e+01 (rel: 1.000000e+00)
sfepy: rezidual: 0.00 [s]
sfepy: solve: 0.00 [s]
sfepy: matrix: 0.00 [s]
sfepy: nls: iter: 1, residual: 6.515061e-14 (rel: 2.574246e-15)
sfepy: ====== time 1.000000e-01 (step 2 of 2) =====
sfepy: updating variables...
sfepy: ...done
sfepy: updating materials...
sfepy:     G
sfepy:     cond
sfepy:     insulated
sfepy: ...done in 0.00 s
sfepy: nls: iter: 0, residual: 6.515061e-14 (rel: 1.000000e+00)
Output of python myPoissonInteractive.py:

sfepy: saving regions as groups...
sfepy:     Omega
sfepy:     Left
sfepy:     Right
sfepy:     Bottom
sfepy:     Top
sfepy: ...done
sfepy: updating variables...
sfepy: ...done
sfepy: setting up dof connectivities...
sfepy: ...done in 0.00 s
sfepy: matrix shape: (1599, 1599)
sfepy: assembling matrix graph...
sfepy: ...done in 0.00 s
sfepy: matrix structural nonzeros: 24311 (9.51e-03% fill)
sfepy: updating materials...
sfepy:     cond
sfepy:     insulated
sfepy:     G
sfepy: ...done in 0.00 s
sfepy: nls: iter: 0, residual: 6.169742e+01 (rel: 1.000000e+00)
sfepy: rezidual: 0.00 [s]
sfepy: solve: 0.01 [s]
sfepy: matrix: 0.00 [s]
sfepy: warning: linear system solution precision is lower
sfepy: then the value set in solver options! (err = 2.856021e+01 < 1.000000e-10)
sfepy: nls: iter: 1, residual: 2.946994e+01 (rel: 4.776527e-01)
IndexedStruct
  condition: 1
  err: 29.4699421316
  err0: 61.6974210411
  n_iter: 1
  time_stats: dict with keys: ['rezidual', 'solve', 'matrix']
On Friday, 16 December 2016 10:54:39 UTC+1, Robert Cimrman wrote:
Hi David,
The main problem is having 'vector' in the field creation (line 119) - this means, that your variables are vectors, and not scalars, as required by the Laplace term. Try replacing that
with 'scalar' or 1.
As for materials, you can use simply:
coef = Material('coef', val=2.0)
On 12/16/2016 10:46 AM, David Jessop wrote:
Here's the complete problem description file.
On Tuesday, 6 December 2016 15:26:44 UTC+1, David Jessop wrote:
I have a non-linear form of the Poisson equation where the
coefficient depends on the derivatives of the state variable (see image). There are no Dirichlet-type boundary conditions (i.e. no ebcs) so I really need to specify a good initial
guess at the solution.
My questions are:
1. How are initial guesses passed to the solver? From the documentation, it doesn't seem like initial guesses can be directly passed to the nls.newton solver as an argument
(though they are an argument of the subroutine __call__).
2. In what form should the initial guess vector/array be written? Can a function be passed? Do numerical values have be be specified at the mesh nodes?
Thanks for any help on this matter. | {"url":"https://mail.python.org/archives/list/sfepy@python.org/message/LI2BXAHVX77FW2MUZSOG5RKA7VMDQXHE/","timestamp":"2024-11-05T23:38:45Z","content_type":"text/html","content_length":"19852","record_id":"<urn:uuid:d9d6dcc3-f210-40a3-ac40-a80280dc417f>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00317.warc.gz"} |
Coordinate Geometry Formulas List for Class 9 & 10 - Leverage Edu
Considered as one of the most interesting chapters, Coordinate Geometry is a treat for those who love to deal with the numbers. The chapter is not only a prominent part of the classes 9th and 10th
syllabus but it also an important topic for various competitive exams. For all those who want to understand its meaning and concepts, we bring you a full fledged blog on Coordinate Geometry formulas!
What Is Coordinate Geometry?
Coordinate geometry is a discipline of mathematics that aids in the presentation of geometric forms on a two-dimensional plane and the learning of their properties. To get a rudimentary understanding
of Coordinate geometry, we will learn about the coordinate plane and the coordinates of a point.
Understanding the Cartesian Coordinate Geometry
Before we begin to explore the formulas of this chapter, let us first understand how coordinate geometry works. In the Cartesian coordinate system, the Cartesian plane comprises two number lines perpendicular to each other, the x-axis (horizontal) and the y-axis (vertical), which represent the two variables. These two perpendicular lines are called the coordinate axes. Some important pointers of coordinate geometry are:
• Origin: It is the intersecting point of two lines i.e, the centre of the coordinate plane and its coordinates are (0,0)
• Any point on the coordinate plane is represented as an ordered pair of numbers. If (c, d) is an ordered pair, then c is the x-coordinate and d is the y-coordinate
• The x-coordinate, or abscissa, is the distance of a point from the y-axis, whereas the distance of a point from the x-axis is called the y-coordinate, or ordinate
• The cartesian plane is always divided into four quadrants as I, II, III, IV
• The positive x-axis is to the right of the origin, while the negative x-axis is to the left of the origin. Also, the positive y-axis is above the origin O, while the negative y-axis is below the
origin O.
• The point (3, 1) in the first quadrant has both values positive and is plotted with reference to the positive x-axis and the positive y-axis.
• The point (-3, 2) in the second quadrant is plotted with reference to the negative x-axis and positive y-axis.
• The point (-3, -4) in the third quadrant is plotted with reference to the negative x-axis and negative y-axis.
• The point (1, -2) in the fourth quadrant is plotted with reference to the positive x-axis and negative y-axis.
• The coordinates of a point can be used to conduct a variety of operations such as calculating distance, determining the midpoint, calculating the slope of a line, and calculating the equation of
a line.
Topics in Coordinate Geometry
Before moving on to the coordinate geometry formulas in this blog, let us first look at the important topics and sub-topics that make up this branch of mathematics. The following are the topics addressed in coordinate geometry:
• The Coordinate Plane and the terms associated with it.
• Understand a point’s coordinates and how they are written in different quadrants.
• The distance between two points in the coordinate plane.
• The formula for calculating the slope of a line that connects two points.
• To get the midpoint of a line connecting two points using the Mid-point Formula.
• Section Formula for finding the point that divides the join of two points in a given ratio
• Finding the centroid of a triangle with the specified three points.
• Finding the area of a triangle with three vertices.
• A line’s equation and the various forms of a line’s equation
Important Coordinate Geometry Formulas
Being a one-of-a-kind chapter in the class 9th and 10th syllabus, Coordinate Geometry includes a variety of formulas which a student must learn while preparing for the exam. Learning these formulas beforehand will not only help you save time during the exam but will also speed up your calculations. Listed below are the important coordinate geometry formulas:
• Distance between two points P(x1, y1) and Q(x2, y2): PQ = √[(x2 − x1)² + (y2 − y1)²]
• Slope of a line through (x1, y1) and (x2, y2): m = (y2 − y1)/(x2 − x1)
• Equation of a line: y = mx + c
• The product of the slopes of two perpendicular lines is -1
• The distance between the points (x1, y1) and (x2, y2) is √[(x2 − x1)² + (y2 − y1)²]
• If the point P(x, y) divides the segment AB, where A = (x1, y1) and B = (x2, y2), internally in the ratio m:n, then x = (mx2 + nx1)/(m + n) and y = (my2 + ny1)/(m + n)
• If P is the midpoint, then x = (x1 + x2)/2 and y = (y1 + y2)/2
• If G(x, y) is the centroid of triangle ABC, with A = (x1, y1), B = (x2, y2) and C = (x3, y3), then x = (x1 + x2 + x3)/3 and y = (y1 + y2 + y3)/3
• If I(x, y) is the in-centre of triangle ABC, with side lengths a = BC, b = CA and c = AB, then x = (ax1 + bx2 + cx3)/(a + b + c) and y = (ay1 + by2 + cy3)/(a + b + c)
• y = mx + c is the equation of a straight line, where m is the slope and c is the y-intercept; m = tan Θ, where Θ is the angle that the line makes with the positive x-axis
• To find the coordinates of a point that divides the line segment joining (x1, y1) and (x2, y2) in the ratio m:n: the dividing point (x, y) lies either on the segment joining the two points (internal division) or outside it (external division). By the section formula, for internal division (x, y) = ((mx2 + nx1)/(m + n), (my2 + ny1)/(m + n))
• The area of the triangle having vertices P(x1, y1), Q(x2, y2), and R(x3, y3) is 1/2 |x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)|
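All of these are standard formulas, so they translate directly into a few Python helpers (a sketch; the function names are ours):

```python
import math

def distance(p, q):
    # distance between points p = (x1, y1) and q = (x2, y2)
    return math.hypot(q[0] - p[0], q[1] - p[1])

def slope(p, q):
    # slope of the line through p and q (undefined for vertical lines)
    return (q[1] - p[1]) / (q[0] - p[0])

def section_point(a, b, m, n):
    # point dividing segment AB internally in the ratio m:n
    return ((m * b[0] + n * a[0]) / (m + n), (m * b[1] + n * a[1]) / (m + n))

def midpoint(a, b):
    # the midpoint is the 1:1 internal section point
    return section_point(a, b, 1, 1)

def centroid(a, b, c):
    return ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)

def triangle_area(a, b, c):
    # half the absolute value of the "shoelace" expression
    return abs(a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1])) / 2
```

For example, the triangle with vertices (3, 2), (6, 7) and (-5, 8) has area 29, and the point dividing the join of (-3, 4) and (9, 4) internally in the ratio 5:1 is (7, 4).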
Coordinate Geometry – Types of Questions
Now that you are familiar with the coordinate geometry formulas, let us look at what we can deduce using them when we are given the coordinates of some points:
• You can calculate the distance between them
• Determine whether the lines formed using them are perpendicular or parallel
• Finding the slope, midpoint or equation of a line segment
• Define the equations of various geometric figures
• Calculating the area or perimeter of the figure formed through various points
Practise Questions
Now that you are through with the coordinate geometry formulas and concepts, here are some practice questions for you:
• Calculate the area of a triangle formed by the vertices (3,2) (6,7) and (-5,8).
• Find the coordinates of the point which will divide the line formed by joining the points (-10,7) and (8,-6) externally in ratio 3:2.
• Find the equation of the line passing through (4,6) and perpendicular to the line 6x + 2y = 8.
• Determine the slope of the line passing through A(-14,5) and B(8,-11).
• Determine the equation of the line whose slope is -3 and x intercept is 7.
• Find the coordinates of the point which will divide the line formed by joining the points (- 3,4) and (9,4) internally in ratio 5:1
• If the coordinates of one end of a diameter of a circle are(4,10) and the coordinates of its centre are (6,13), then determine the coordinates of the other end of the diameter.
• If two vertices of an equilateral triangle are (-7,4) and (-3, 12) calculate the third vertex.
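A few of the questions above lend themselves to a quick numeric check using the standard external-section and midpoint relations (the variable names are ours):

```python
def external_section(a, b, m, n):
    # point dividing segment AB externally in the ratio m:n (requires m != n)
    return ((m * b[0] - n * a[0]) / (m - n), (m * b[1] - n * a[1]) / (m - n))

# Divide the join of (-10, 7) and (8, -6) externally in ratio 3:2.
q_external = external_section((-10, 7), (8, -6), 3, 2)

# Slope of the line through A(-14, 5) and B(8, -11).
q_slope = (-11 - 5) / (8 - (-14))

# Other end of a diameter with one end (4, 10) and centre (6, 13):
# the centre is the midpoint, so reflect the known end through it.
q_diameter = (2 * 6 - 4, 2 * 13 - 10)
```

These give (44, -32) for the external division, -8/11 for the slope, and (8, 16) for the other end of the diameter.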
Best Books
Students often face difficulties in understanding coordinate geometry concepts. In that case, a good reference book can go a long way towards explaining the concepts properly. Here is a list of coordinate geometry books that can help you do just that:
• The Elements of Coordinate Geometry, Part 1: Cartesian Coordinates, by S. L. Loney
• Skills in Mathematics: Coordinate Geometry for JEE Main and Advanced 2020, by Arihant Experts
• Mathematics for Joint Entrance Examination JEE (Advanced): Coordinate Geometry, by G. Tewani
• Advanced Problems in Coordinate Geometry for JEE (Main & Advanced), by Vikas Gupta
• Co-ordinate Geometry (2-D and 3-D), by Ajay Kumar and Ravi Prakash
• The Elements of Co-ordinate Geometry, Part 1, by S. N. Loney
• Balaji Advanced Problems in Co-ordinate Geometry for JEE Main & Advanced, by Vikas Gupta
• Eduwiser Coordinate Geometry for JEE Main and Advanced, by K. C. Sinha
• Coordinate Geometry and Vectors for JEE Main and Advanced (Mathematics Module III), by Ajay Kumar
• Fundamentals of Mathematics: Co-ordinate Geometry, 2nd ed., by Sanjay Mishra
We hope these Coordinate Geometry formulas helped you understand the concept better! Are you struggling in making the crucial decision of stream selection after class 10th? If yes, reach out to our
experts at Leverage Edu and they will provide you with the best career guidance. | {"url":"https://leverageedu.com/blog/coordinate-geometry-formulas/","timestamp":"2024-11-10T11:06:35Z","content_type":"text/html","content_length":"350335","record_id":"<urn:uuid:f282c603-f3a0-43ad-9348-22a0e86e48a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00850.warc.gz"} |
Normalized Device Coordinates
What is really interesting here is that we still see only the original triangle. The original triangle, at z=0,
(-1.0, -1.0, 0.0) (1.0, -1.0, 0.0) (0.0, 1.0, 0.0)
is blocking the view of the new triangle at z=0.5.
(-1.0, -1.0, 0.5) (1.0, -1.0, 0.5) (0.0, 1.0, 0.5)
This tells us that z increases into the screen. This completes our picture of the coordinate system we are drawing into. Each of x, y, and z range from -1 to plus 1.
The gl_Position output from the vertex shader will always fall within these normalized device coordinates. Before the end of this presentation we will see how arbitrary scenes are mapped into this
coordinate system. | {"url":"http://www.vizitsolutions.com/portfolio/webgl/normalizedDeviceCoordinates.html","timestamp":"2024-11-13T09:27:01Z","content_type":"text/html","content_length":"2328","record_id":"<urn:uuid:d9504cab-9da3-4a2b-8f3c-b71d7b03805c>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00804.warc.gz"} |
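The coordinate conventions described here can be written down as tiny helper functions (a plain-Python illustration of the convention; the function names are ours, and this is not WebGL API code):

```python
def to_ndc(clip):
    # Perspective divide: clip-space (x, y, z, w) -> normalized device
    # coordinates (x, y, z).
    x, y, z, w = clip
    return (x / w, y / w, z / w)

def in_ndc_cube(point):
    # A point survives clipping only if each of x, y, z lies in [-1, 1].
    return all(-1.0 <= c <= 1.0 for c in point)

def wins_depth_test(z_new, z_stored):
    # z increases into the screen, so the smaller NDC z is nearer the
    # viewer; with the usual "less" depth test the nearer fragment wins.
    return z_new < z_stored
```

With this convention the triangle at z = 0 wins the depth test against the one at z = 0.5, which is exactly why only the original triangle is visible.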
How Frequently Does Your math websites for kids Make Your Neighbors Say That
Learn algebra—variables, equations, functions, graphs, and more. Learn Algebra 2 aligned to the Eureka Math/EngageNY curriculum—polynomials, rational functions, trigonometry, and more. Learn differential equations—separable equations, exact equations, integrating factors, homogeneous equations, and more. Learn statistics and probability—everything you'd want to know about descriptive and inferential statistics. This course will give an overview of geometry from a modern viewpoint. Axiomatic, analytic, transformational, and algebraic approaches to geometry will be used.
• Properties and graphs of exponential, logarithmic, and trigonometric functions are emphasized.
• Learning mathematical finance can help you gain knowledge about concepts like portfolio optimization, predictive modelling, machine learning, financial engineering, stock option pricing, and risk management.
• The department offers 3 sequences in multivariable mathematics.
• Created by experts, Khan Academy's library of trusted practice and lessons covers math, science, and more.
Use the knowledge and skills you have gained to drive impact at work and grow your career. Increase your quantitative reasoning skills through a deeper understanding of probability and statistics. Learn advanced approaches to genomic visualization, reproducible analysis, data architecture, and exploration of cloud-scale consortium-generated genomic data. Created by experts, Khan Academy's library of trusted practice and lessons covers math, science, and more.
The Released Secret to kodu game lab play free Discovered
Dads with children under 18 in their household spend, on average, 1.02 hours caring for and helping them per day. This includes 0.36 hours playing with them and 0.32 hours providing physical care. Fathers report spending much less time each day reading to and with their children (0.05 hours on average) and on activities related to their children's education, such as helping with homework or school projects (0.05 hours on average).
Use kodu game lab buttons like a ‘profession’
Learning mathematical finance can help you gain knowledge about concepts like portfolio optimization, predictive modelling, machine learning, financial engineering, stock option pricing, and risk management. These subjects all form part of understanding mathematical finance, and are in growing use in financial business today. Learn the skills that will set you up for success in negative number operations; fractions, decimals, and percentages; rates and proportional relationships; expressions, equations, and inequalities; geometry; and statistics and probability. Learn Algebra 1 aligned to the Eureka Math/EngageNY curriculum—linear functions and equations, exponential growth and decay, quadratics, and more. Learn sixth grade math aligned to the Eureka Math/EngageNY curriculum—ratios, exponents, long division, negative numbers, geometry, statistics, and more. Learn fifth grade math aligned to the Eureka Math/EngageNY curriculum—arithmetic with fractions and decimals, volume problems, unit conversion, graphing points, and more.
The Hidden Truth on play kodu game lab free online Exposed
OpenLearn works with other organisations by providing free courses and resources that support our mission of opening up educational opportunities to more people in more places. This free course consolidates and builds on group theory studied at OU level 2 or equivalent. Section 1 describes how to construct a group called the direct product of two given groups, and then describes certain conditions under which a group can be regarded as the direct product of its subgroups. This free course examines the basic kinematics of two-dimensional fluid flows. Section 1 introduces the differential equations for pathlines and streamlines. Section 2 introduces a scalar field, called the stream function, which for an incompressible fluid provides an alternative method of modelling the flow and finding the streamlines.
Integrated optimization of gear design and manufacturing
The integration, optimization, design, and manufacturing of gears using state-of-the-art software allows cutting tools to be harmonized.
The word “optimization” is becoming fashionable, also with regard to gear design. It is applied to both macro-geometry and micro-geometry. The approach can be of various types: analytical
pre-optimization with different objectives, bulk generation of variants, multi-objective and multi-disciplinary commercial optimizers, generative optimization, and even artificial intelligence.
Sometimes the best solution is presented directly; other times the choice is left to the user according to multiple criteria. However, these are all scenarios that assume the manufacturer will accept any geometry indicated by the designer. This is certainly not the case for catalog industrial gearboxes, for which standard cutting tools are used to reduce cost and keep suppliers interchangeable, nor for special "goods to order" gearboxes, in which producers try to use cutting tools already in the tool room. Even in the automotive industry, manufacturers try to use existing cutting tools as much as possible, at least during prototyping and for small batches.
After presenting some design optimization techniques adopted in different companies, the focus of the article shifts to business scenarios where manufacturing has been equipped with software for the semi-automatic selection of hobbing and pinion-type tools starting from the macro-geometry of the gear. In particular, it looks at a case where a paper database of more than 10,000 hobs, with different dimensioning modes, needed to be harmonized into a single computer database. The software allows the search for a hob even with "modified rolling," a method very widespread in the automotive industry but practically unknown for industrial gearboxes.
Finally, for companies that have both design and manufacturing departments, a design optimization with a list of cutting tools as the main constraint will be presented.
1 Introduction
The key issues of this article are design and manufacturing. So, our starting point will be the opening words of two classic university books focusing on these issues:
• “The main task of engineers is to apply their scientific and engineering knowledge to the solution of technical problems, and then to optimize those solutions within the requirements and
constraints set by material, technological, economic, legal, environmental and human-related considerations.” [1]
• “Machine tools are used for the purpose of manufacturing parts, which meet design requirements concerning shape, size tolerance, and surface characteristics from both a technical and economic
viewpoint.” [2]
It is clear how the three requisites — material, technological, and economic — listed in the work focusing on design are linked to manufacturing, and, reciprocally, manufacturing refers to design
The need for increasing integration of these two phases is also pursued by CAD/CAM system developers to the extent that books such as “Integrated Design-to-Manufacturing Solutions: Lower Costs and
Improve Quality” [3] are distributed online by these types of companies.
Moreover, the term “optimization” is becoming increasingly popular, especially in papers presented at various conferences, such as AGMA’s Fall Technical Meeting.
So, firstly, we have to focus individually on the four terms found in the title of this article (integration, optimization, design, and manufacturing), clearly restricting ourselves to the field of
gears. We will look at them in “chronological” order:
• Firstly design, because man is above all homo sapiens, a thinker, able to plan and project.
• Followed by manufacturing, in other words, the ability to construct, which is a hallmark of homo faber, a Latin expression that became popular once more during the Renaissance.
• Lastly, optimization and integration, which are new words.
We will limit ourselves to cylindrical gears, which are the most common. We will put to one side wormgears, which I have already covered in other publications [4] [5], and bevel gears, which are
highly branded [6].
2 Design
Generally speaking, gear design is lifetime-based: The aim is to transmit a specific load for a set period of time. The ways in which tooth failure occurs are taken into account in order to satisfy
this requisite.
Recent updating of document numbers ISO 6336 allows for an easy overview of the main ones (bending, pitting, micropitting, scuffing, TFF) and stresses the importance of focus on the type of failure
in order to achieve correct sizing.
The terms used in the title of the standards (Table 1 and Table 2) play on the nuances that can be given to design goals: calculation or rating, strength, load capacity, durability, resistance.
Table 1: ISO standards, technical specifications, and technical reports for cylindrical gear design.
Table 2: AGMA standards for cylindrical gear design.
From a historical viewpoint, the geometrical principles of toothing were established first of all, especially involute toothing, and then rating criteria, above all for bending and surface fatigue
[7, 8]. The formulas contained in the various standards and bibliographical references were then implemented in manual calculation sheets (Figure 1) and subsequently in electronic spreadsheets and
software (Figure 2) to simplify the work of designers.
Figure 1: “Historical” manual calculation sheet.
Figure 2: Example of modern software for gear calculation.
One of the first areas of focus in all publications dating from the last century was the definition of the proportions to be given to gears following the rules cited at the beginning of this article,
in other words “within the requirements and constraints set by material, technological, economic considerations.” Here are just a few examples:
• Dudley, in his book unmistakably titled “Practical Gear Design,” later re-titled “Handbook of Practical Gear Design” [9].
• Niemann [10] with his formulas to split the transmission ratios of a parallel-axis reducer so as to minimize the costs of gear materials and housing (a concept further developed by Schlecht [11] at
a later date).
• Severin [12] with his translation of the Russian work titled “Increasing the load on gearing and decreasing its weight.”
From a standardization viewpoint, the ISO documents listed in Table 1 provide methods to verify gears whose geometry is known. In some of the AGMA documents listed in Table 2, design suggestions
depending on the application are also provided. There are no universal criteria. While in the automotive field small b/d ratios are common, in rolling plants, b/d ratios are often greater than 1.
3 Manufacturing
We will limit ourselves to looking at metal cylindrical gears, cut mostly using hobs, pinion-type cutters or power skiving, with possible grinding for finishing, to correct distortion error caused by
any thermal or surface treatments or to define micro-geometric modifications [13].
For obvious reasons of space, we will put to one side gears boasting a “free” geometry: plastic, sintered, obtained by additive manufacturing, 5-axis milling, or form cutting.
Therefore, the main job of the person who receives the gear drawing, such as the one in Figure 3, is to define the dimensions of the most suitable tool, in this case a hob, trying not to buy a new
one, but to choose from those already available (Figure 4).
Figure 3: Gear data in the drawing.
Figure 4: Different hobs with the same module and pressure angle of gear in the previous figure.
Let us now try to describe some atypical situations that can occur in the hob’s dimensioning that may result in a difficult interpretation of the geometry for the reader of the drawing in Figure 5,
which is often not even to scale. There is no standard that regulates a single method of dimensioning for these tools.
Figure 5: Nonstandard dimensioning for hobs.
When there is protuberance:
• Only two out of its three dimensions are independent.
• If the wording “full-radius” is included for the hob’s tip radius, an iterative calculation is needed to calculate the value of the root radius.
• The reference line, in relation to which the other dimensions such as addendum and dedendum are provided, may not be the line that divides the space thickness the same as the tooth thickness as,
instead, assumed by some calculation software.
• Semi-topping can have a double inclination or radius not dimensioned in the change of the pressure angle.
As for design, the focus in this case is also on aim and criteria. The aim is the one cited in the introduction “to manufacture parts that meet design requirements concerning shape, size tolerance,
and surface characteristics from both a technical and economic viewpoint.”
The choice of a hob that produces the required shape can be made by entering the data of the required geometry and the data of the hob (uniquely defined, as we said) into specific calculation software (Figure 6) and superimposing the calculated geometry on the one produced by enveloping (Figure 7). For example, when a pre-grinding tool without protuberance is used, it is easy to note the grinding notch. The grinding notch is accepted for small-size industrial gearboxes, but clearly not for automotive or aerospace gears.
Figure 7: From left to right: Required tooth form, hobbing simulation, comparison between required (black) and ground (blue) tooth form, focus on the grinding notch.
Once the technical aim has been achieved, there is not a single criterion for the most economic choice. For example, one could try to maximize the efficiency K of the hob [14], where:
K is the efficiency of the hob, in m/tooth.
p is the number of gears (pieces).
z is the number of gear teeth.
l is the face width, mm.
t[os] is the axial pitch of the hob, mm.
i[os] is the number of hob gashes or flutes.
β is the helix angle of the gear.
b[1] is the working length of the hob (Figure 8).
Figure 8: Hob and gear.
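The efficiency formula itself does not survive in this text. As a hedged sketch, assuming (consistently with the listed variables and the m/tooth unit) that K is the total length of cut delivered per hob cutting edge, it could be computed as follows; the exact formula should be checked against [14], and the batch numbers below are purely illustrative:

```python
from math import cos, radians

def hob_efficiency_K(p, z, l, t_os, i_os, beta_deg, b1):
    """Hob efficiency K in m/tooth (reconstruction, not the exact formula of [14]).

    Total cut length for the batch, p*z*l/cos(beta) in mm, divided by the
    number of cutting edges on the hob's working length: b1/t_os axial
    pitches, each carrying i_os gashes. Converted from mm to m.
    """
    total_cut_mm = p * z * l / cos(radians(beta_deg))
    cutting_edges = (b1 / t_os) * i_os
    return total_cut_mm / cutting_edges / 1000.0  # mm -> m

# Hypothetical batch: 1,000 spur gears, z = 30, face width 20 mm,
# hob with 100 mm working length, 10 mm axial pitch, 12 gashes.
K = hob_efficiency_K(p=1000, z=30, l=20, t_os=10, i_os=12, beta_deg=0, b1=100)
print(round(K, 2))  # 5.0 -> at the top of the "good" 4-5 m/tooth interval
```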
The efficiency K should be between 4 and 5 m/tooth in order to be assessed as good. Before calculating K, the level of wear of the hob to be reached prior to replacement needs to be set and the cost
of the tool and grinding taken into account.
Even if more advanced methods have been proposed [15], Hoffmeister's formula can still be used to calculate the chip's maximum thickness for a given feed per part revolution.
h[1,max] is the maximum chip thickness.
m[n] is the standard module.
β[0] is the angle of the hob’s helix.
x[p] is the addendum modification factor.
f[a] is the axial feed.
d[a0] is the hob head’s diameter.
i[0] is the number of gashes.
z[0] is the number of threads.
h is the cutting depth.
Figure 9: Profile and helix deviations generated by the hob.
As regards cutting parameters, it must be remembered that it is possible to estimate the profile ε[1] and helix ε[2] deviations caused by the feed value (Figure 9).
ε[1] is the profile deviation.
ε[2] is the helix deviation.
z[0] is the number of hob teeth.
i[0] is the number of hob starts.
z is the number of gear teeth.
R[p] is the pitch radius of the hob; mm.
f[a] is the feed per part revolution; mm/rev.
β[0] is the angle of the hob’s helix.
α is the pressure angle.
Therefore, with the same reference profile of the hob, the choice of hob is determined by:
• Other geometric characteristics of the hob, such as the number of gashes, outside diameter, number of starts, helix angle, and length.
• Cutting parameters, such as cutting speed and feed per tooth, the recommended values for which can be found in the bibliography [14].
• Number of parts to be cut.
All these values can be used in Equations 2 and 1, in order to check that:
• The chip thickness is not excessive.
• Efficiency falls into the 4-5 m/tooth interval.
But this is not the only criterion for assessing the advantage of specific working conditions. For example, the choice of favoring an increase in cutting speed, and hence a reduction in cutting time, is commonplace, at the price of giving up a good level of hob efficiency.
We have tried to present simple formulas with a deep educational value [14]. Some other examples are in [16] and [17]. A more precise approach can be found in [18]. For pinion-type cutter, see [19].
4 Optimization
We will focus on the optimization of design and the optimization of manufacturing
as separate, independent activities: The former to be adopted in the technical department and the latter in the workshop, even in the case of two different companies,
i.e., an engineering company and a subcontractor.
4.1 Optimization of design
As stated in the introduction, design consists of a choice of variants, while generating and selecting them forms part of the optimization process. Without going into detail, the notion of optimization is based on three concepts: objective(s), constraints, and variables. Once these three concepts have been established, a multitude of variants is generally obtained, and the optimal solution is chosen among them based on well-defined criteria.
Let us have a look at some cases of optimization applied solely to gear design:
4.1.1 Analytical optimization
Some years ago, Schöler [20] presented an evolution, hence an optimization, of the traditional proportioning and pre-dimensioning formulas. The paper refers to beveloid gears, but it offers a clear
idea of what has also been done with regard to cylindrical gears.
4.1.2 Fast generation of variants
Kissling [21] has shown how quick the generation of macro-geometry variants can be, using software already widely adopted in technical departments (Figure 10). The numerous variants generated
(Figure 11) are then selected by the designer with the help of filters and graphs (Figure 12). The choice is up to the designer. The same approach is used to generate micro-geometry variants, as
presented in a recent FTM [22].
Figure 10: Optimizer interface: goal, variables, and boundary condition for the DOE.
4.1.3 Multi-objective commercial optimizers
Bonfiglioli [23] and Noesis [24] presented the use of a multi-objective optimizer interfaced with gear calculation software. ModeFrontier and Optimus took care of the design of experiments (DOE), while KISSsoft calculated each individual variant. The variant-generation criterion performs better and the reporting is more functional, at the cost of longer processing times.
Figure 11: Optimizer interface: generated variants (list).
4.1.4 Optimizers for supercomputers
UniMoRe has recently made available to some companies [25] a genetic algorithm optimizer developed by the university [26], which works exclusively on supercomputers.
Figure 12: Optimizer interface: graphic with Pareto front.
4.1.5 Artificial intelligence
Schlecht [27] even decided to make use of artificial intelligence in order to find the optimal flank modification for a pair of cylindrical gears. Compared to all the methods described earlier, a
training phase for the AI engine is needed in this case, but advanced contact analysis software can be done away with.
4.2 Optimization of manufacturing
The same concepts seen for the optimization of design can be applied to manufacturing.
In this case, too, optimization involves the choice of the best variant, in other words, the choice of the hob that “copies” the geometry of the gear under design at the lowest cost, also taking into
account the stock allowance.
As mentioned previously, the aim could be the hob’s performance, which falls within the values listed earlier. The constraints concern generation of the desired profile and the maintenance of cutting
parameters within the recommended ranges. The only variable is the hob. Software able to perform the calculations listed under point 3 is obviously required. An example of hob selection from a database in gear calculation software is shown in Figure 13.
Figure 13: Tool selection from Database in KISSsoft.
4.2.1 Tool database
Before explaining the hob’s optimized selection process, let us take a deeper look at the tool database. It is necessary to have a computer database containing all the hobs’ characteristics. The
platform used can range from a straightforward Excel spreadsheet to PLM.
There are workshops that cut gears for medium-size reducers (with a module from 0.5 to 7 mm), that have 400 hobs entered into an Excel spreadsheet (Figure 14), and there are workshops working
especially for the automotive industry that have 650 hobs in an Oracle database that also lists the resharpening (Figure 15).
Figure 14: Hobs database in Excel.
Figure 15: Hobs and pinion-type cutters database in Oracle.
There are also workshops working for the automotive and agricultural industries that handle more than 10,000 hobs for which only printed information sheets are available. Therefore, the first step is
to enter data into a computer database. An Excel spreadsheet has been prepared with some formulas in order to harmonize the various ways of sizing the hobs mentioned previously. The enormous amount
of work involved in compiling the database can only be justified by the savings, in economic terms, obtained by the process listed in the paragraph below.
In any case, the hobs database must contain this information:
• Reference profile (module, pressure angle, addendum, dedendum, tip and root radius, protuberance, semitopping).
• Geometric characteristics (hob diameter, cutting edge length, helix angle and hand, number of gashes, number of starts, material, coating).
• Working conditions (recommended stock).
Sometimes, the database also includes data [28] or drawings of dressers (Figure 16). This method is thus also used to choose the dressing roll used to dress the grinding wheels and obtain the tip relief listed in the drawing.
Figure 16: From left to right: CAD drawing of a dresser for gear grinding wheel, comparison between the required geometry of the gear and the ground one.
4.2.2 Workflow
The process to be followed in order to generate a list of hobs used to cut the required toothing is shown in the flowchart in Figure 17, taking into account also the stock allowance.
Figure 17: Workflow to generate a list of hobs (variants).
Among the proposed variants, the optimal solution is the one that best meets the set criteria. Similarly to what we saw in Point 4.1.2, the Pareto front must also be adopted in this case.
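The Pareto-front selection mentioned above can be sketched as a simple non-dominance filter. The two objectives used here, hob cost and cycle time, are illustrative assumptions, not criteria taken from the paper:

```python
def pareto_front(variants):
    """Keep the variants that no other variant dominates.

    Variant a dominates b if a is no worse on every objective and
    strictly better on at least one (all objectives minimized).
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a["obj"], b["obj"])) and \
               any(x < y for x, y in zip(a["obj"], b["obj"]))
    return [v for v in variants
            if not any(dominates(o, v) for o in variants if o is not v)]

# Hypothetical hob variants scored on (cost, cycle time)
variants = [
    {"hob": "A", "obj": (100, 9.0)},
    {"hob": "B", "obj": (120, 6.5)},
    {"hob": "C", "obj": (110, 9.5)},  # dominated by A on both objectives
    {"hob": "D", "obj": (150, 6.0)},
]
front = pareto_front(variants)
print([v["hob"] for v in front])  # ['A', 'B', 'D']
```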
The process can be more advanced and take into account "modified rolling" or a "short pitch tool" [29]. In this case, the hobs will not be filtered strictly on the basis of module and pressure angle, but also by base pitch, optionally inside a tolerance range.
The short-pitch tool is usually selected to reduce undercut when there is a protuberance, to achieve a smaller root form circle after grinding, and to increase the lifetime of the hob. The tooth form changes only in the root, and this change should be considered in the strength calculation. In this case, both the tool and the gear have the same base pitch.
In another case, the tool can have a base pitch different from the gear. Checking of the geometry obtained via enveloping will not be solely of the tip and root diameters, but also of the profile
deviation, which must remain within values that can be removed by grinding. This operation can be performed only if the first selection failed to result in a solution or if the workshop normally
adopts modified rolling for cutting or if a prototype or small batch is being manufactured.
Figure 18 shows the same gear, with m = 2.5 mm and α = 20°, hobbed and then completely ground (flank and root). The hob in (A) has the same base pitch as the gear (m = 2.4701 mm, α = 18°); the hob in (B) has a different base pitch (m = 2.5 mm, α = 18°); the grinding allowance is not constant, but it is acceptable for a prototype.
Figure 18: Gear hobbed and ground completely. Hob with the same base pitch as the gear (A), hob with a different base pitch from the gear (B).
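The base-pitch filtering described above can be sketched as follows, using p_b = π·m_n·cos(α_n) for the base pitch. The database records and the tolerance value are illustrative assumptions; the second entry reproduces the same-base-pitch hob (A) of Figure 18:

```python
from math import pi, cos, radians

def base_pitch(m_n, alpha_deg):
    """Base pitch p_b = pi * m_n * cos(alpha), mm."""
    return pi * m_n * cos(radians(alpha_deg))

def candidate_hobs(hobs, gear_m, gear_alpha, tol=0.02):
    """Hobs whose base pitch matches the gear's within tol (mm)."""
    target = base_pitch(gear_m, gear_alpha)
    return [h for h in hobs
            if abs(base_pitch(h["m"], h["alpha"]) - target) <= tol]

# Hypothetical database entries (module in mm, pressure angle in degrees)
hobs = [
    {"id": "H1", "m": 2.5,    "alpha": 20},  # same m and alpha as the gear
    {"id": "H2", "m": 2.4701, "alpha": 18},  # same base pitch, like hob (A)
    {"id": "H3", "m": 2.5,    "alpha": 18},  # different base pitch, like hob (B)
]
matches = candidate_hobs(hobs, gear_m=2.5, gear_alpha=20)
print([h["id"] for h in matches])  # ['H1', 'H2']
```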
5 Integration
The meaning of the term “integration” in this article goes beyond the one adopted by Norton in the title of his book “Machine Design: An Integrated Approach” [30] where, instead, he refers to the
educational approach. The approach tackles numerous machine parts within the same whole that are often mutually dependent.
As mentioned in the introduction, the integration we are focusing on is that of design for manufacturing; this has become a must, or at least a leitmotif for many companies.
Indeed, design decisions have a significant impact on manufacturing costs and product quality; 70 to 80 percent of the end manufacturing costs and 80 percent of the work that affects product quality
are established by the end of the design phase (Figure 19). Moreover, the further along you are in the development phase, the more expensive it becomes to make modifications (Figure 20). For example,
once the hob has been ordered, any geometric modifications to the design have an extremely costly impact.
Figure 19: Up to 80 percent of product costs locked in at design (Source: Dowlatshahi in [3]).
Figure 20: Closing window of opportunity for changes (Source: Tech-Clarity in [3]).
6 Integrated optimization
We do not need to go deeper into the importance of integration. We have reached the apex of this ascent of the four terms listed in the article’s title. It is just a small step to achieve integrated
optimization of design and manufacturing. The following are necessary:
• Adoption of a single gear calculation software package in the technical department and in the workshop. Usually, it is first chosen by the technical office and then adopted by the workshop.
• Sharing of the same hobs database by the design and manufacturing divisions. If, as listed earlier, the first step is taken by the technical office, then it is the workshop that must share its hobs database.
• In the design software, the DOE of the optimizer searches for solutions limited to those obtainable with the hobs available in the database (Figure 21) [21]. For each variant found (Figure 11), the efficiency of the hob could be added as a result, to help the designer select the best solution: "best" for the designer and "best" for the workshop.
Figure 21: Example of DOE where the hobs database is a boundary.
The advantages are for the whole company:
• Saving money on the purchase of new hobs and time on procurement, because the designer tries to limit himself to proposing geometries generated using just the hobs available in the workshop, rather than coming up with geometric variables at a purely mathematical level (e.g., pressure angle, module, addendum, dedendum).
• The designer has a greater awareness of what will be produced, even at the level of efficiency of the hob, pre-grinding quality and grinding twist, especially if the software used conveys the
skills of gear designers and machine tool manufacturers [21].
• The workshop already has the files with toothing and hob data; it does not have to interpret the drawing or enter data related to the hob, if chosen from the database. Therefore, it can focus
exclusively on the technological aspects.
7 Conclusions
The sharing of information and the desire to network, which is the same as the goal of AGMA, and especially of the FTM, is the spirit that lies behind the drafting of this article. No new formulas or
technologies are presented in this article. The state-of-the-art, good practices, and some real cases encountered in various situations and inside companies are presented in order for us to draw from
them. “Uncomplicated” instruments that are already on hand have been described:
• To some designers in order to see whether there is already a tool to manufacture the gear wheel they have in mind.
• To the relative workshops in order to avoid having to spend time re-interpreting designs and to speed up the search for the ideal tool.
If a drawing is a way to encode design information and reading of the drawing represents decoding, an example of CODEC (an IT term used in relation to audio and video meaning COde-DECode) involving
design and manufacturing is shown.
8 Acknowledgements
The author wishes to thank KISSsoft (a Gleason company) for the software. Thanks also to the companies Varvel — Mechnology, CIMA (Coesia group), Graziano (now parts of Dana group), and CEI, that
provided some pictures for this article and, over the years, have dedicated resources for the integrated optimization of design and manufacturing. They have shared among themselves the same software
and same database of hobs and pinion-type cutters in design and manufacturing, not without some initial problems.
1. Pahl, G., and Beitz, W., 1988, Engineering Design: A Systematic Approach, Springer Verlag, London.
2. Zurla, O., 1984, Appunti di macchine utensili, CLUEB.
3. 2007, Integrated Design to Manufacturing Solutions: Lower Costs and Improve Quality, Dessault Système, Waltham MA.
4. Turci, M., and Giacomozzi, G., 2016, “The Whirling Process in a Company That Produces Worm Gear Drives,” Fall Technical Meeting (FTM), AGMA, Pittsburgh.
5. Turci, M., Ferramola, E., Bisanti, F., and Giacomozzi, G., 2015, “Worm Gear Efficiency Estimation and Optimization,” Fall Technical Meeting (FTM), AGMA, Detroit.
6. Stadtfeld, H. J., 2019, Practical Gear Engineering, The Gleason Works, Rochester, N.Y.
7. Buckingham, E., 1928, Spur Gears: Design, Operation, and Production, McGraw-Hill, New York.
8. Soria, L., 1948, Tecnica degli Ingranaggi, Viglongo, Torino.
9. Radzevich, S. P., ed., 2012, Dudley’s Handbook of Practical Gear Design and Manufacture, CRC Press, Boca Raton.
10. Niemann, G., and Winter, H., 1983, Maschinenelemente: Band 2: Getriebe allgemein, Zahnradgetriebe – Grundlagen, Stirnradgetriebe, Springer Nature, Berlin Heidelberg.
11. Schlecht, B., 2009, Maschinenelemente 2: Getriebe, Verzahnungen und Lagerungen, Pearson Studium, München.
12. Saverin, M. M., tran., 1961, Increasing the Loading on Gearing and Decreasing Its Weight (Original Work: Povysheniye Nagruzochnoy Sposobnosti Zubchatykh Peredach i Snizheniye Vesa), Pergamon
Press, New York.
13. Turci, M., 2018, “Design and Optimization of a Hybrid Vehicle Transmission,” Fall Technical Meeting (FTM), AGMA, Chicago.
14. Bianco, G., 2004, La Dentatura Con Creatore, Samp Utensili, Bologna.
15. Brecher, C., Brumm, M., and Krömer, M., 2015, “Design of Gear Hobbing Processes Using Simulations and Empirical Data,” Procedia CIRP, 33, pp. 484–489.
16. Momper, F., 2017, “Un Approccio Moderno Alla Scelta Del Creatore,” Gleason Technology Days, AGMA, Rezzato BS.
17. Zhou, J., and Sari, D., 2020, “Selecting Correct Size of Hob/Gashing Cutter (Ask the Expert),” GEAR TECHNOLOGY.
18. Radzevich, S. P., 2010, Gear Cutting Tools: Science and Engineering, CRC Press, Boca Raton.
19. 1980, Gear Cutting Tools, Manual for Design and Manufacturing, Verzahntechnik Lorenz GmbH & Co, Ettlingen.
20. Schöler, T., Binz, H., and Bachmann, M., 2017, "Method for the Pre-Dimensioning of Beveloid Gears – Efficient Design of Main Gearing Data," International Conference on Gears, VDI Verlag GmbH, Munich.
21. Kissling, U., Stolz, U., and Turich, A., 2017, “Combining Gear Design with Manufacturing Process Decisions, VDI International Conference on Gears,” International Conference on Gears, VDI Verlag
GmbH, Munich.
22. Kissling, U., 2019, “Sizing of Profile Modifications for Asymmetric Gears,” Fall Technical Meeting (FTM), AGMA, Detroit.
23. Franchini, M., 2016, “Multi-Objective Optimization of a Transmission System for an Electric Counterbalance Forklift,” International CAE Conference, Parma.
24. Olson, M., 2018, “Optimierung Eines Stirnradpaares in Einer Kontaktanalyse,” Schweizer Maschinenelemente Kolloquium (SMK).
25. Pellicano, F., 2018, “Overview,” The Gear Day, Modena.
26. Bonori, G., Barbieri, M., and Pellicano, F., 2008, “Optimum Profile Modifications of Spur Gears by Means of Genetic Algorithms,” Journal of Sound and Vibration, 313(3), pp. 603–616.
27. Schlecht, B., and Schulze, T., 2019, “Improved Tooth Contact Analysis by Using Virtual Gear Twins. How Helpful Is AI for Finding Best Gear Design?,” International Conference on Gears, VDI Verlag
GmbH, Munich.
28. Stangl, M. F., Kissling, U., and Pogacnik, A., 2017, “A Procedure to Find Best Fitting Dresser with Grinding Worm for a New Design,” International Conference on Gears, VDI Verlag GmbH, Munich.
29. Liston, K., 1993, “Hob Basics – Part II,” Gear Technology.
30. Norton, R. L., 2013, Machine Design: An Integrated Approach, Pearson College Div, Boston.
Printed with permission of the copyright holder, the American Gear Manufacturers Association, 1001 N. Fairfax Street, Suite 500, Alexandria, Virginia 22314. Statements presented in this paper are those of the authors and may not represent the position or opinion of the American Gear Manufacturers Association (AGMA). This paper was presented November 2021 at the AGMA Fall Technical Meeting.
Understanding the Dynamic Programming Algorithm Design Paradigm
Dynamic programming is a powerful algorithm design paradigm that provides an efficient approach to solving problems by breaking them down into smaller and overlapping subproblems. It eliminates
redundant computations and optimizes the overall time complexity of the algorithm.
What is Dynamic Programming?
Dynamic programming is an algorithmic technique that solves complex problems by dividing them into smaller, simpler subproblems and storing the solutions to these subproblems in some form of data
structure, such as an array or a table. These stored solutions can later be used to solve the larger problem efficiently.
Dynamic programming is often used for optimization problems, where the goal is to find the best solution from a set of possible solutions. It is particularly useful when the problem has overlapping
subproblems and exhibits optimal substructure, meaning that the optimal solution for the problem can be constructed from the optimal solutions of its subproblems.
Steps Involved in Dynamic Programming
The dynamic programming algorithm design paradigm generally involves the following steps:
1. Define the problem: Clearly define the problem and determine what needs to be optimized or computed.
2. Identify the subproblems: Break down the problem into smaller, overlapping subproblems. The key idea is to divide the problem into subproblems that can be solved independently and whose solutions
can be combined to find the optimal solution for the larger problem.
3. Formulate a recursive relation: Define a recursive relation that expresses the solution to the larger problem in terms of the solutions to its subproblems. This relation provides the foundation
for the dynamic programming approach.
4. Create a memoization table: Set up a data structure, such as an array or a table, to store the solutions to the subproblems. This helps avoid redundant computations by caching previously computed results.
5. Solve the subproblems: Use the recursive relation and the memoization table to solve the subproblems in a bottom-up or top-down manner. By solving the subproblems, we can gradually build up the
solution to the larger problem.
6. Construct the final solution: Once all the subproblems have been solved, use the solutions stored in the memoization table to construct the final solution to the original problem.
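The six steps above can be sketched with the classic Fibonacci example, shown here in Python with the memoization table of step 4 as a plain dictionary:

```python
def fib(n, memo=None):
    """Step 3's recursive relation F(n) = F(n-1) + F(n-2),
    with step 4's memoization table so each subproblem is
    solved only once (step 5, top-down)."""
    if memo is None:
        memo = {}
    if n <= 1:                 # base cases F(0)=0, F(1)=1
        return n
    if n not in memo:          # solve each subproblem at most once
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # 55
```

Without the memo table the same recursion takes exponential time; with it, each value of n is computed once, giving linear time.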
Examples of Dynamic Programming
Dynamic programming can be applied to a wide range of problems, such as:
• Computing the nth Fibonacci number
• Finding the longest common subsequence between two sequences
• Calculating the shortest path between two nodes in a graph
• Solving the knapsack problem
• Determining the optimal strategy for playing a game
In each of these examples, dynamic programming enables us to solve the problem efficiently by avoiding redundant computations and leveraging the solutions to smaller subproblems.
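For instance, the longest common subsequence problem from the list above has a standard bottom-up formulation, where dp[i][j] holds the LCS length of the first i characters of one sequence and the first j of the other:

```python
def lcs_length(a, b):
    """Bottom-up DP: dp[i][j] = length of the LCS of a[:i] and b[:j]."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1            # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[len(a)][len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4  (e.g. "BCAB")
```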
Advantages and Limitations of Dynamic Programming
Dynamic programming offers several advantages:
• Efficiency: Dynamic programming optimizes the time complexity of algorithms by reusing previously computed solutions, resulting in faster computations.
• Simplicity: Dynamic programming breaks down complex problems into simpler subproblems, making them easier to understand and solve.
• Optimality: When the problem exhibits optimal substructure, dynamic programming yields a provably optimal solution, since it solves the subproblems exactly and combines their solutions.
However, dynamic programming may not be suitable for all problems. It requires the problem to have overlapping subproblems and optimal substructure for the technique to be applicable. Additionally,
the creation of a memoization table or the recursive relation may add some overhead to the algorithm.
Dynamic programming is a powerful algorithmic technique that enables us to solve complex optimization problems efficiently. By breaking down the problem into smaller subproblems and reusing computed
solutions, dynamic programming reduces redundant computations and improves the overall time complexity of the algorithm. Understanding and implementing dynamic programming can greatly enhance
problem-solving skills and algorithm design abilities. | {"url":"https://noobtomaster.com/algorithms-using-java/understanding-the-dynamic-programming-algorithm-design-paradigm/","timestamp":"2024-11-12T14:21:36Z","content_type":"text/html","content_length":"29859","record_id":"<urn:uuid:bc566bdd-7642-4651-96f0-eab59d46b6b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00252.warc.gz"} |
Effortless Integration with Simpson's Rule Calculator - Adama Stoken
Simpson’s Rule is a numerical method for approximating the definite integral of a function. It is a more accurate alternative to the trapezoidal rule and is particularly useful when dealing with
functions that are difficult to integrate using traditional methods. The rule is based on approximating the area under a curve by using quadratic polynomials.
The basic idea behind Simpson’s Rule is to divide the interval of integration into an even number of subintervals and then use quadratic interpolation to approximate the area under the curve within
each subinterval. The formula for Simpson’s Rule involves taking the average of the function values at the endpoints of each subinterval, as well as the midpoint, and then multiplying this average by
a factor that depends on the width of the subinterval. By summing up these approximations for each subinterval, we can obtain an estimate of the definite integral of the function over the entire interval.
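Summing the quadratic approximations over the subintervals gives the standard composite formula, for an even number of subintervals n and equally spaced points:

```latex
\int_a^b f(x)\,dx \;\approx\; \frac{h}{3}\Bigl[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\Bigr],
\qquad x_i = a + ih,\quad h = \frac{b-a}{n}.
```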
Simpson’s Rule is particularly useful when dealing with functions that are smooth and well-behaved, as it tends to provide more accurate results compared to other numerical integration methods. It is
also relatively easy to implement and can be applied to a wide range of functions, making it a valuable tool for engineers, scientists, and mathematicians.
Key Takeaways
• Simpson’s Rule is a numerical method for approximating the definite integral of a function, by using quadratic approximations to the function.
• Using a Simpson’s Rule calculator can save time and reduce the chances of human error when calculating definite integrals.
• To use a Simpson’s Rule calculator, input the function, the limits of integration, and the number of subintervals desired for the approximation.
• Simpson’s Rule can be used to approximate the area under a curve, the length of an arc, and the volume of a solid of revolution.
• When using a Simpson’s Rule calculator, it’s important to choose an appropriate number of subintervals for accurate results and to double-check input values for accuracy.
Benefits of Using a Simpson’s Rule Calculator
Using a Simpson’s Rule calculator offers several benefits, especially when dealing with complex functions or large datasets. One of the main advantages of using a calculator is the speed and
efficiency it provides in obtaining numerical approximations of definite integrals. Instead of manually performing the calculations for each subinterval, a Simpson’s Rule calculator can quickly
generate accurate results, saving time and effort.
Another benefit of using a Simpson’s Rule calculator is its ability to handle large amounts of data with ease. When dealing with numerous data points or complex functions, performing the calculations
by hand can be tedious and prone to errors. A calculator can handle these tasks effortlessly, providing reliable results in a fraction of the time it would take to do them manually.
Additionally, using a Simpson’s Rule calculator can help users gain a better understanding of the underlying principles of numerical integration. By inputting different functions and experimenting
with various parameters, users can explore how Simpson’s Rule works and gain insights into its behavior under different conditions. This can be particularly valuable for students and professionals
looking to deepen their understanding of numerical methods and their applications.
How to Use a Simpson’s Rule Calculator
Using a Simpson’s Rule calculator is a straightforward process that involves inputting the necessary parameters and obtaining the numerical approximation of the definite integral. To use a Simpson’s
Rule calculator, follow these steps:
1. Input the function: Enter the function for which you want to calculate the definite integral. This may involve typing in the function directly or selecting it from a list of predefined functions.
2. Specify the interval: Define the interval over which you want to calculate the definite integral by entering the lower and upper limits of integration.
3. Choose the number of subintervals: Select the number of subintervals to use in the approximation. This will depend on the level of accuracy required and the complexity of the function.
4. Obtain the result: Once you have inputted the function, interval, and number of subintervals, click on the “calculate” or “solve” button to obtain the numerical approximation of the definite
integral using Simpson’s Rule.
By following these steps, users can quickly and easily obtain accurate approximations of definite integrals for a wide range of functions, making it a valuable tool for both educational and
professional purposes.
Examples of Simpson’s Rule in Action
| Function | Interval | Approximate Integral |
|----------|----------|----------------------|
| x^2      | [0, 2]   | 2.6667               |
| sin(x)   | [0, π]   | 2.0009               |
| e^x      | [0, 1]   | 1.7183               |
To illustrate how Simpson’s Rule works in practice, consider the following examples:
Example 1:
Calculate the definite integral of f(x) = x^2 over the interval [0, 2] using Simpson’s Rule with 4 subintervals.
Using Simpson’s Rule with 4 subintervals, we can approximate the definite integral of f(x) = x^2 over [0, 2] as follows:
h = (2-0)/4 = 0.5
I ≈ (0.5/3) * [f(0) + 4*f(0.5) + 2*f(1) + 4*f(1.5) + f(2)]
≈ (0.5/3) * [0 + 4*(0.5)^2 + 2*1^2 + 4*(1.5)^2 + 2^2]
≈ (0.5/3) * [0 + 4*0.25 + 2 + 4*2.25 + 4]
≈ (0.5/3) * [0 + 1 + 2 + 9 + 4]
≈ (0.5/3) * 16
≈ 1/6 * 16
≈ 8/3
≈ 2.6667
Example 2:
Calculate the definite integral of f(x) = sin(x) over the interval [0, π] using Simpson’s Rule with 6 subintervals.
Using Simpson’s Rule with 6 subintervals, we can approximate the definite integral of f(x) = sin(x) over [0, π] as follows:
h = (π-0)/6 = π/6
I ≈ (π/18) * [f(0) + 4*f(π/6) + 2*f(π/3) + 4*f(π/2) + 2*f(2π/3) + 4*f(5π/6) + f(π)]
≈ (π/18) * [0 + 4*sin(π/6) + 2*sin(π/3) + 4*sin(π/2) + 2*sin(2π/3) + 4*sin(5π/6) + sin(π)]
≈ (π/18) * [0 + 4*(1/2) + 2*(√3/2) + 4*1 + 2*(√3/2) + 4*(1/2) + 0]
≈ (π/18) * [0 + 2 + √3 + 4 + √3 + 2 + 0]
≈ (π/18) * [8 + 2√3]
≈ (8 + 2√3)π/18
≈ (4 + √3)π/9
≈ 2.0009
These examples demonstrate how Simpson’s Rule can be used to approximate definite integrals for different types of functions over specified intervals, providing accurate results with relatively
simple calculations.
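A minimal composite-Simpson implementation (a sketch in Python, not any particular calculator's code) reproduces both results:

```python
from math import sin, pi

def simpson(f, a, b, n):
    """Composite Simpson's Rule with n subintervals (n must be even)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)  # 4 at odd nodes, 2 at even
    return h / 3 * total

print(round(simpson(lambda x: x * x, 0, 2, 4), 4))  # 2.6667 (exact: 8/3)
print(round(simpson(sin, 0, pi, 6), 4))             # 2.0009 (exact: 2)
```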
Tips for Efficient Integration with Simpson’s Rule Calculator
When using a Simpson’s Rule calculator for numerical integration, there are several tips that can help ensure efficient and accurate results:
1. Choose an appropriate number of subintervals: The accuracy of the approximation obtained using Simpson’s Rule depends on the number of subintervals used. In general, using more subintervals will
result in a more accurate approximation, but it will also require more computational effort. It is important to strike a balance between accuracy and computational efficiency when choosing the number
of subintervals.
2. Check for convergence: When using a Simpson’s Rule calculator, it is important to check for convergence by increasing the number of subintervals and comparing the results. If the approximations
obtained with increasing numbers of subintervals are converging towards a certain value, this can provide confidence in the accuracy of the result.
3. Understand the limitations: While Simpson’s Rule is a powerful method for numerical integration, it is not suitable for all types of functions. It is important to understand its limitations and
consider alternative methods for functions that may not be well-suited for this approach.
By keeping these tips in mind, users can make efficient use of Simpson’s Rule calculators and obtain reliable numerical approximations for definite integrals with confidence.
Common Mistakes to Avoid when Using Simpson’s Rule Calculator
When using a Simpson’s Rule calculator for numerical integration, there are several common mistakes that should be avoided to ensure accurate results:
1. Incorrect input of function: One common mistake is entering the function incorrectly into the calculator, which can lead to inaccurate results. It is important to double-check that the function is
inputted accurately, including any constants or coefficients.
2. Inaccurate interval specification: Another common mistake is specifying the interval incorrectly, leading to incorrect results. It is important to carefully define the lower and upper limits of
integration to obtain accurate approximations.
3. Choosing an inappropriate number of subintervals: Selecting too few or too many subintervals can lead to inaccurate results when using Simpson’s Rule. It is important to consider the complexity of
the function and choose an appropriate number of subintervals to balance accuracy and computational efficiency.
By being mindful of these common mistakes and taking care to avoid them when using a Simpson’s Rule calculator, users can ensure that they obtain reliable numerical approximations for definite integrals.
Advanced Applications of Simpson’s Rule Calculator
In addition to its basic applications, Simpson’s Rule has several advanced applications that make it a valuable tool in various fields:
1. Engineering: In engineering applications, Simpson’s Rule can be used to approximate complex integrals that arise in areas such as structural analysis, fluid dynamics, and control systems design.
By providing accurate numerical approximations, it helps engineers make informed decisions and optimize designs.
2. Scientific research: In scientific research, Simpson’s Rule can be applied to analyze experimental data and obtain numerical approximations for integrals that arise in various scientific
disciplines such as physics, chemistry, and biology. It provides researchers with a powerful tool for data analysis and hypothesis testing.
3. Financial modeling: In financial modeling and risk analysis, Simpson’s Rule can be used to calculate expected values and probabilities by approximating integrals that arise in pricing models and
risk assessment methodologies. It helps financial analysts make informed decisions and manage uncertainties effectively.
By leveraging its advanced applications, Simpson’s Rule calculator becomes an indispensable tool for professionals across diverse fields, enabling them to tackle complex problems and make informed
decisions based on accurate numerical approximations.
What is Simpson’s rule?
Simpson’s rule is a method for numerical integration, which is used to approximate the definite integral of a function. It is based on approximating the area under a curve by using quadratic polynomials.
How does Simpson’s rule work?
Simpson’s rule works by dividing the interval of integration into subintervals and approximating the area under the curve within each subinterval using quadratic polynomials. These approximations are
then summed up to give an overall approximation of the integral.
What is the formula for Simpson’s rule?
The formula for the basic (single-application, three-point) Simpson’s rule is:
\[ \int_{a}^{b} f(x) \, dx \approx \frac{b-a}{6} \left[ f(a) + 4f\left(\frac{a+b}{2}\right) + f(b) \right] \]
where \(a\) and \(b\) are the limits of integration and \(f(x)\) is the function being integrated.
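A direct transcription of this three-point formula (the helper name is illustrative):

```python
# Basic Simpson's rule: exact for polynomials up to degree 3.
def simpson_basic(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

approx = simpson_basic(lambda x: x**3, 0.0, 2.0)  # integral of x^3 on [0, 2] is 4
```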
How can I use a Simpson’s rule calculator?
To use a Simpson’s rule calculator, you simply input the function you want to integrate, the limits of integration, and the number of subintervals you want to use for the approximation. The
calculator will then provide you with an approximation of the definite integral using Simpson’s rule.
When is Simpson’s rule used?
Simpson’s rule is used when an exact solution to a definite integral is difficult or impossible to obtain analytically. It is commonly used in numerical analysis and scientific computing to
approximate integrals.
{"url":"https://www.adamastoken.com/effortless-integration-with-simpsons-rule-calculator/","timestamp":"2024-11-14T17:05:09Z","content_type":"text/html","content_length":"56997","record_id":"<urn:uuid:f938ae03-ea1c-4b69-bc4b-4f15c19040ce>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00758.warc.gz"}
Suppose that the points (h,k),(1,2) and (−3,4) lie on the line ... | Filo
Suppose that the points (h, k), (1, 2) and (−3, 4) lie on the line L₁. If a line L₂ passing through the points ... and ... is perpendicular to L₁, then ... equals
Sol. (c) Given, the points (h, k), (1, 2) and (−3, 4) lie on line L₁, so the slope of line L₁ is (4 − 2)/(−3 − 1) = −1/2 ... (i)
and the slope of line L₂ joining the points (h, k) and ... is ... (iii)
Since lines L₁ and L₂ are perpendicular to each other, the product of their slopes is −1 [from Eqs. (i) and (iii)]
On solving Eqs. (ii) and (iv), we get ...
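Using only the two numeric points given in the statement, the slope of L₁ and the collinearity condition for (h, k) can be checked; the helper names are illustrative, and the elided parts of the problem are not reconstructed:

```python
from fractions import Fraction

# Slope of L1 from the given points (1, 2) and (-3, 4).
x1, y1, x2, y2 = 1, 2, -3, 4
slope_L1 = Fraction(y2 - y1, x2 - x1)  # (4 - 2) / (-3 - 1) = -1/2

# (h, k) collinear with those points means (k - 2)/(h - 1) = -1/2,
# which rearranges to h + 2k = 5.
def on_L1(h, k):
    return h + 2 * k == 5
```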
Updated On Mar 5, 2023
Topic Straight Lines
Subject Mathematics
Class Class 11
Answer Type Text solution:1 Video solution: 9
Upvotes 831
Avg. Video Duration 7 min | {"url":"https://askfilo.com/math-question-answers/suppose-that-the-points-h-k12-and-34-lie-on-the-line-l_1-if-a-line-l_2-passing-209015","timestamp":"2024-11-05T16:02:28Z","content_type":"text/html","content_length":"577053","record_id":"<urn:uuid:ac5dac18-fb5d-4b58-8dc7-183f36649cc2>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00168.warc.gz"} |
Enhanced Analysis of Falling Weight Deflectometer Data for Use With Mechanistic-Empirical Flexible Pavement Design and Analysis and Recommendations for Improvements to Falling Weight Deflectometers
CHAPTER 4. VISCOELASTIC APPROACH
Flexible pavements are multilayered structures typically with viscoelastic AC as the top layer and with unbound/bound granular layers below it. The combined response of linear viscoelastic and
elastic materials that are in perfect bonding is linear viscoelastic. Assuming there is full bonding between the asphalt layer and the underlying base and subgrade layers, the overall response of the
entire pavement system becomes viscoelastic. The characteristic mechanistic properties of an isotropic-thermorheologically simple viscoelastic systems are the relaxation modulus E(t), the creep
compliance D(t), the complex (dynamic) modulus |E*|, and the time-temperature shift factors (a[T](T)). These characteristic properties are often expressed at a specific reference temperature, in
terms of a “master curve.” For thermorheologically simple materials, these characteristic properties can be generated at any time (or frequency) and temperature using the time-temperature
superposition principle. It can be shown that if any of the three properties E(t), D(t), or |E*| is known, the other two can be obtained through an interconversion method such as the Prony series.^
(57) The dynamic modulus (|E*|) master curve of an AC layer is a fundamental material property that is required as an input in the MEPDG for a flexible pavement analysis. Knowledge of the |E*| master
curve of an in-service pavement using FWD data can lead to a more accurate estimation of its remaining life and rehabilitation design.
The specific objectives of this component of the project were to (1) develop a layered viscoelastic flexible pavement response model in the time domain; (2) investigate whether the current FWD
testing protocol generated data that were sufficient to backcalculate the |E*| master curve using such a model; and (3) if needed, recommend enhancements to the FWD testing protocol to be able to
accurately backcalculate the |E*| master curve as well as the unbound material properties of in-service pavements. Readers should note that the methods presented in this report were developed for a
single AC layer. However, in-service pavements may be composed of multiple layers of different types of asphalt mixtures. In such cases, the present form of the backcalculation algorithms would
provide a single equivalent |E*| master curve for the asphalt mixture sublayers.
The models presented in this chapter can consider the unbound granular material as both linear elastic as well as nonlinear stress-dependent material. Depending on the assumed unbound granular
material property, two generalized viscoelastic flexible pavement models were developed. The developed forward and backcalculation models for linear viscoelastic AC and elastic unbound layers are
referred to as LAVA and BACKLAVA, respectively, in this report. The developed forward and backcalculation models for linear viscoelastic AC and nonlinear elastic unbound layers are termed LAVAN and
BACKLAVAN, respectively, in the report. The LAVA and BACKLAVA algorithms assumed a constant temperature along the depth of the AC layer. The algorithms were subsequently modified for the temperature
profile in the AC layer, and modified versions are referred to as LAVAP and BACKLAVAP in this report. The viscoelastic properties backcalculated from FWD data included two functions—a time function
and a temperature function. The time function referred to the relaxation modulus master curve E(t,T[0]) in which t was physical time and T[0] was the corresponding (constant) reference temperature.
The temperature function referred to the time-temperature shift factor a[T](T), which was a positive definite dimensionless scalar. In the present study, AC was assumed to be thermorheologically
simple, which allowed applying E(t,T[0]) for any temperature level T (different than T[0]) by simply replacing physical time with a reduced time t[R][ ]= t/a[T](T); therefore, a[T](T) is a function
of both T and T[0], such that a[T](T)=1 if T = T[0].
Typically, a load-displacement history of 60 ms is recorded in an FWD test (which constitutes 25 to 35 ms of applied load pulse); it is generally observed that the later portion of deflection history
(after the peak) is not reliable. This is due to the numerical error generated by velocity integration. (Most FWD sensors measure velocity or acceleration, which is integrated to obtain the
deflections.) This can give only limited information about the time-varying E(t) behavior of the AC layer. However, in theory, it should be possible to obtain the two sought-after functions (i.e., E(
t) and a[T](T)). In this report, two different approaches are discussed to obtain the comprehensive behavior of asphalt: (1) using a series of FWD deflection time histories at different temperature levels and (2) using the uneven temperature profile across the thickness of the asphalt layer during the deflection histories of single or multiple FWD drops.^(59,60) Both of the models
are presented in detail. Finally, the models were validated using frequency and FEM-based solutions.
Further, the effect of FWD test temperatures and number of surface deflection sensors on backcalculation of the |E*| master curve were studied. These suitable FWD test data requirements are discussed
in the key findings from the study.
Traditionally, flexible pavements are analyzed using analytic multilayered elastic models (e.g., KENLAYER, BISAR, and CHEVRONX), which are based on Burmister’s elastic solution of multilayered
structures. (See references 21, 23, 27, and 61–64.) These models assume the material in each pavement layer is linearly elastic. However, the AC (typically the top layer) is viscoelastic at low
strain.^(60,65,66) As with any viscoelastic material, it shows properties dependent on time (or frequency) as well as temperature.
In the proposed approach, the AC pavement system was modeled as a layered half-space, with the top layer as a linear viscoelastic solid. All other layers (base, subbase, subgrade, and bedrock) in the
pavement were assumed linear elastic. Assuming there was full bonding between the AC layer and the underlying base and subgrade layers, the overall response of the entire pavement system became
viscoelastic. Therefore, its response under arbitrary loading was obtained using Boltzmann’s superposition principle (i.e., the convolution integral) as shown in figure 26.^(65,66)
Figure 26. Equation. Boltzmann’s superposition principle.
\[ R^{ve}(x,y,z,t) = \int_{0}^{t} R_{H}^{ve}(x,y,z,\,t-\tau)\, dI(\tau) \]
R^ve(x, y, z, t) = the linear viscoelastic response at coordinates (x,y,z) and time t.
R[H]^ve (x, y, z, t) = the (unit) viscoelastic response of the pavement system to a Heaviside step function input (H(t)).
dI(τ) is the change in input at time τ.
It is worth noting that for a uniaxial viscoelastic system (e.g., a cylindrical AC mixture), if response R^ve= ε (t) = strain, then R[H]^ve = D(t) = creep compliance and I(t) = σ(t) = stress. Using
Schapery’s quasi-elastic theory, the viscoelastic response at time t to a unit input function was efficiently and accurately approximated by the elastic response obtained using relaxation modulus at
time t as shown in figure 27.^(67,68)
Figure 27. Equation. Quasi-elastic approximation of a unit response function such as the creep compliance.
\[ R_{H}^{ve}(x,y,z,t) \cong R_{H}^{e}\big(x,y,z,\,E(t)\big) \]
Where R[H]^e(x,y,z, E(t)) is the unit elastic response at elastic modulus equal to relaxed modulus (E(t)). Flexible pavements are exposed to different temperatures over time, which in turn influence
their response. For thermorheologically simple materials, this variation in response can be predicted by extending the equations shown in figure 26 and figure 27 to the equation shown in figure 28.
Figure 28. Equation. Hereditary integral using quasi-elastic approximation of a unit response function such as the creep compliance.
\[ R^{ve}(x,y,z,t) = \int_{0}^{t} R_{H}^{e}\big(x,y,z,\,E(t_{R}-\tau)\big)\, dI(\tau) \]
t[R] = t/a[T](T).
log a[T](T) = a[1](T^2 - T^2[ref]) + a[2](T - T[ref]) gives the shift factor at temperature T (so that a[T](T[ref]) = 1).
T[ref] = the reference temperature.
a[1] and a[2] are the shift factor’s polynomial coefficients.
Using figure 28, the formulation for predicting vertical deflection of a linear viscoelastic AC pavement system subjected to an axisymmetric loading can be expressed as the equation shown in figure
Figure 29. Equation. Hereditary integral using quasi-elastic approximation of unit vertical deflection at the surface.
\[ u_{vertical}^{ve}(r,z,t) = \int_{0}^{t} u_{H\text{-}vertical}^{e}\big(E(t_{R}-\tau),\,r,z\big)\; d\sigma(\tau) \]
u^ve[vertical] (r,z,t) = the viscoelastic response of the viscoelastic multilayered structure at time t and coordinates (r,z).
u^e[H-vertical](E(t[R] - τ), r, z) = the elastic unit response of the pavement system at reduced time t[R] due to the unit (Heaviside step) contact stress (i.e., σ(t) = 1).
σ (τ) is the applied stress at the pavement surface.
Detailed derivation of the equation in figure 29 can be found in Levenberg’s research and are not repeated here for brevity.^(69) In this implementation, the vertical surface displacements, i.e., u^
ve[H] (t) ≅ u^e[H] (t) values at the points of interest, were computed using the CHEVRONX layered elastic analysis program. Then the convolution integral in figure 29 was used to calculate the
viscoelastic deflection u^ve(t). A description of the algorithm is given in the following section.
Layered Viscoelastic (Forward) Algorithm (LAVA)
The algorithm steps were as follows:
1. Define the geometric (layer thicknesses, contact radius) and material (E(t), E[base], E[subgrade], and Poisson’s ratio) properties of a layered system similar to the one in figure 30.
2. Select a stress versus time history (σ (τ)) and divide the data into N[s] discrete intervals as shown in figure 31.
Figure 30. Diagram. Typical flexible pavement geometry for analysis.
Figure 31. Graph. Discretization of stress history in forward analysis.
3. Divide the relaxation modulus master curve into N[E] time steps in log scale. The relaxation modulus E(t) can be approximated with a sigmoid function as shown in figure 32.
Figure 32. Equation. Sigmoid form of relaxation modulus master curve.
Where t[R] is the reduced time (t[R] = t/a[T](T)) and c[i] are sigmoid coefficients. The shift factor coefficients were computed using the second order polynomial given in figure 33.
Figure 33. Equation. Shift factor coefficient polynomial.
\[ \log a_{T}(T) = a_{1}\,\big(T^{2}-T_{ref}^{2}\big) + a_{2}\,\big(T-T_{ref}\big) \]
Where a[1] and a[2] are the shift factor coefficients.
4. Calculate the elastic response (i.e., vertical surface deflections) of the structure to a unit stress using E(t[i]) evaluated at different reduced times (i.e., t[1], t[2], t[3] ….t[NE]). In this
implementation, the surface deflections at several radial distances to a circular plate load shown in figure 30 were of interest. Therefore, these surface deflections were computed using the
CHEVRONX program with the AC modulus value corresponding to different times in figure 34 (i.e., E(t[1)], E(t[2)], E(t[3)], E(t[4)]… E(t[NE])) as shown in figure 35.^(67,68)
Figure 34. Graph. Discretization of the relaxation modulus master curve.
Figure 35. Equation. Quasi-elastic approximation of unit vertical deflection at the surface.
\[ u_{H}^{ve}(r,z,t_{i}) \cong u_{H}^{e}\big(r,z,\,E(t_{i})\big) \]
The equation in figure 35 is calculated using E(t[i]) where i = 1,2,3…N[E]. Figure 36 shows the u^e[H] values calculated for points at different distances from the centerline of the circular load
at the surface. These curves are called unit response master curves.
Figure 36. Graph. Deflections calculated under unit stress for points at different distances from the centerline of the circular load at the surface.
5. Calculate the viscoelastic response using the discrete form of figure 29 given in figure 37. The equation was evaluated at each discrete time t using the stress history shown in figure 38. Figure
38 illustrates the dσ(τ[j]) in figure 37 for each time step τ[j].
Figure 37. Equation. Discrete formulation.
\[ u^{ve}(r,z,t_{i}) = \sum_{j=1}^{i} u_{H}^{e}\big(E(t_{R,i}-\tau_{j}),\,r,z\big)\; d\sigma(\tau_{j}) \]
Where i = 1, 2, … N[S].
Figure 38. Graph. dσ( τ[j] ) for each time step τ[j]
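Steps 3 through 5 above can be sketched end-to-end in Python. This is an illustration only: the scalar, compliance-like unit response stands in for the CHEVRONX multilayer solution, the sigmoid's sign convention and the reference-temperature conversion (66.2 °F, assumed 19 °C) are assumptions, and the stress pulse is a generic 30-ms haversine; the sigmoid and shift-factor coefficients are the values listed in table 12.

```python
import math

# Step 3: relaxation modulus master curve and shift factor.
# Coefficients from table 12; the sigmoid's exact form is an assumption,
# since the report's equation image (figure 32) is not reproduced here.
C1, C2, C3, C4 = 0.841, 3.54, 0.86, -0.515   # sigmoid coefficients c1..c4
A1, A2 = 4.42e-4, -1.32e-1                   # shift-factor polynomial a1, a2
T_REF = 19.0                                 # assumed deg C (66.2 deg F)

def shift_factor(T):
    """log a_T(T) = a1*(T^2 - T_ref^2) + a2*(T - T_ref), so a_T(T_REF) = 1."""
    return 10.0 ** (A1 * (T ** 2 - T_REF ** 2) + A2 * (T - T_REF))

def relax_modulus(t, T):
    """Sigmoid master curve evaluated at reduced time t_R = t / a_T(T)."""
    t_r = max(t, 1e-12) / shift_factor(T)
    return 10.0 ** (C1 + C2 / (1.0 + math.exp(-(C3 + C4 * math.log10(t_r)))))

# Step 4: unit response -- a toy compliance-like stand-in for the CHEVRONX
# elastic surface deflection under a unit contact stress.
def unit_response(E):
    return 1.0 / E

# Step 5: discrete hereditary integral (superposition over stress increments).
def viscoelastic_deflection(times, stress, T):
    d_sigma = [stress[0]] + [stress[j] - stress[j - 1]
                             for j in range(1, len(stress))]
    return [sum(unit_response(relax_modulus(t - times[j], T)) * d_sigma[j]
                for j in range(i + 1))
            for i, t in enumerate(times)]

times = [k * 0.001 for k in range(61)]                 # 60-ms FWD record
stress = [80.0 * math.sin(math.pi * t / 0.03) ** 2 if t <= 0.03 else 0.0
          for t in times]                              # 30-ms haversine pulse
u = viscoelastic_deflection(times, stress, T=19.0)
```

Because the unit response here is a scalar, the output shows only the qualitative superposition behavior; the actual algorithm evaluates the unit deflection at each sensor offset with the layered elastic solution.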
To illustrate an example, the viscoelastic surface deflections of the three-layer pavement structure shown in figure 39 were computed. Figure 40 shows the vertical surface deflections at points
located at different radial distances from the centerline of the load. Figure 40 clearly shows the relaxation behavior of deflection at each point.
Figure 39. Diagram. Example problem geometry.
Figure 40. Graph. Examples of computed viscoelastic surface deflections at different radial distances from the centerline of the load.
One of the primary reasons for implementing Schapery’s quasi-elastic solution is its extreme computational efficiency. Using a Pentium 2.66 GHz computer with 3.25 gigabytes (GB) of random access
memory (RAM), the computation of the results shown in figure 40 took 1.96 s to calculate the solution for the three-layer system shown in figure 39 and N[S] = 50, N[E][ ]= 50.
Table 10 shows the computation times for different numbers of discrete time steps in the three-layer system.
Table 10. LAVA computation times for different numbers of discrete time steps.
│ N[S] │ N[E] │ Computation Time (s) │
│ 50 │ 50 │ 1.96 │
│ 24 │ 100 │ 2.88 │
│ 50 │ 100 │ 3.03 │
│ 100 │ 100 │ 3.05 │
│ 100 │ 200 │ 5.01 │
│ 200 │ 200 │ 5.13 │
Verification of the Proposed Layered Viscoelastic Solution (LAVA)
The layered viscoelastic algorithm was verified by using two pavement structures selected from the SPS‑1 experiment of the LTPP database (table 11). Surface deflections at different radial distances
due to a circular loading pulse of 0.045 s followed by 0.055-s rest period were calculated using two commonly known software packages, SAPSI and LAMDA, and compared with the layered viscoelastic
solution implemented in this research. SAPSI is based on damped-elastic layer theory and FEM, whereas LAMDA is based on the spectral element technique, axisymmetric dynamic solution.^(32,2) These
software packages were selected because they were known to provide robust dynamic solutions, and their algorithms were based on frequency-domain calculations. This allows truly independent
verification because the layered viscoelastic solution is in the time domain, whereas SAPSI and LAMDA are in the frequency domain.
Table 11. Pavement properties used in LAVA validation with SAPSI and LAMDA.
│ Case No. │ Physical Layer │ Elastic Modulus │ Thickness (inches) │ Poisson’s Ratio │
│ 116 │ AC │ |E*|-f[ro] │ 3.9 │ 0.35 │
│ 116 │ TB base │ 29 ksi │ 12.0 │ 0.40 │
│ 116 │ Subgrade (SS) │ 14.5 ksi │ Infinity │ 0.45 │
│ 120 │ AC │ |E*|-f[ro] │ 3.6 │ 0.35 │
│ 120 │ PATB base │ 26.1 ksi │ 4 │ 0.40 │
│ 120 │ GB base │ 21.8 ksi │ 8 │ 0.40 │
│ 120 │ Subgrade (SS) │ 14.5 ksi │ Infinity │ 0.45 │
TB = Treated base.
SS = Sandy subgrade.
PATB = Permeable asphalt treated base.
GB = Granular base.
Figure 41 and figure 42 show the comparison between the layered viscoelastic solution and solutions calculated with SAPSI and LAMDA. The figures clearly show that the layered viscoelastic result
matched very well to the SAPSI and LAMDA solutions. Note that the layered viscoelastic algorithm did not consider the dynamics, whereas SAPSI and LAMDA did.
The dynamic solution developed a time delay in response to wave propagation. However, because the scope of this portion of the project included the backcalculation of viscoelastic characteristics of
the AC layer, the effect of dynamics was not considered. Therefore, the time delay in dynamic solutions was eliminated by shifting the deflection curves to the left such that the beginning of each
sensor response matched.
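A minimal sketch of such an onset alignment (the threshold and the helper name are assumptions; this is not the exact procedure applied to the SAPSI/LAMDA outputs):

```python
import numpy as np

def remove_time_delay(history, threshold=1e-3):
    """Drop the leading quiet samples so the sensor's response starts at t = 0."""
    idx = np.flatnonzero(np.abs(history) > threshold)
    return history if idx.size == 0 else history[idx[0]:]

# Example: a deflection trace that starts responding at the fourth sample.
trace = np.array([0.0, 0.0, 0.0, 0.02, 0.45, 1.10, 0.80])
aligned = remove_time_delay(trace)
```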
VE = viscoelastic.
Figure 41. Graphs. Comparison of dynamic solutions (time delay removed) and viscoelastic solution for case 116.
VE = viscoelastic.
Figure 42. Graphs. Comparison of dynamic solutions (time delay removed) and viscoelastic solution for case 120.
Temperature in pavements typically varies with depth, which affects the response of the HMA to the applied load. As shown in figure 43, the temperature may increase with depth (profile 1: linear, profile 2: piecewise, profile 3: nonlinear) or decrease with depth (profile 4: linear, profile 5: piecewise, profile 6: nonlinear), depending on the time of day. This variation in temperature with
depth can be approximated with a piecewise continuous temperature profile function as shown in figure 43 (profiles 2 and 5). The advantage of using a piecewise function is that it can be used to
approximate any arbitrary function.
Figure 43. Diagram. Schematic of temperature profile.
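A piecewise-constant (step) approximation of an arbitrary temperature-depth profile, in the spirit of profiles 2 and 5 above, can be sketched as follows; the helper name and breakpoints are illustrative, and the three sublayer temperatures match the {104, 86, 68} °F profile used in the later comparisons:

```python
import bisect

def step_profile(bottom_depths, temps):
    """Return T(z): piecewise-constant temperature vs. depth z (inches).
    bottom_depths[i] is the bottom of sublayer i; temps[i] its temperature."""
    def T(z):
        i = bisect.bisect_left(bottom_depths, z)
        return temps[min(i, len(temps) - 1)]
    return T

# Three 2-inch AC sublayers at 104, 86, and 68 deg F.
T = step_profile([2.0, 4.0, 6.0], [104.0, 86.0, 68.0])
```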
An algorithm that considers HMA sublayers with different temperatures within the HMA layer was developed and is referred to as LAVAP (T-profile LAVA). The algorithm was compared with LAVA as well as
ABAQUS. Comparison with LAVA was made for deflection response at all the sensors for constant temperature throughout all the sublayers. The pavement section and layer properties used in the forward
analysis are shown in table 12. Figure 44 shows that the response obtained from the temperature profile algorithm at 32, 86, and 122 °F matched very well with LAVA.
Table 12. Pavement properties used in (T-profile LAVA) LAVAP validation with ABAQUS.
│ Property │ Constant Temperature │ Temperature Profile (Three-Step Function) │
│ Thickness (inches): AC sublayers │ 6 │ 2, 2, 2 │
│ Thickness (inches): granular layers │ 20, infinite │ 20, infinite │
│ Poisson ratio {layer 1, 2, 3…} │ 0.35, 0.3, 0.45 │ 0.35, 0.3, 0.45 │
│ E[unbound] {layer 2, 3…}, psi │ 11,450, 15,000 │ 11,450, 15,000 │
│ Total number of sensors │ 8 │ 8 │
│ Sensor spacing from the center of load (inches) │ 0, 7.99, 12.01, 17.99, 24.02, 35.98, 47.99, 60 │ 0, 7.99, 12.01, 17.99, 24.02, 35.98, 47.99, 60 │
│ E(t) sigmoid coefficients {AC} │ 0.841, 3.54, 0.86, -0.515 │ 0.841, 3.54, 0.86, -0.515 │
│ a(T) shift factor polynomial coefficients {AC} │ 4.42E-04, -1.32E-01 │ 4.42E-04, -1.32E-01 │
Figure 44. Graphs. Comparison of response calculated using (T-profile LAVA) LAVAP and original LAVA.
To qualitatively examine the response of flexible pavement predicted using the (T-profile LAVA) LAVAP algorithm, the response obtained under temperature profile was compared with the response
obtained under constant temperatures. As an example, a comparison of the response under a temperature profile of {104, 86, 68} °F, with that corresponding to a constant temperature of 104, 86, and 68
°F for the entire depth, is shown in figure 45, figure 46, and figure 47, respectively. It can be seen from the figures that the effect of AC temperature was most prominent in sensors closer to the
load center (sensors 1 through 4). For sensors away from the loading center (sensors 5, 6, 7, and 8), the deflection histories were not influenced by the AC temperature. Figure 48 shows the region of
the E(t) master curve (at 66.2 °F reference temperature) used by the (T-profile LAVA) LAVAP algorithm in calculating time histories.
Figure 45. Graph. Comparison of responses calculated using (T-profile LAVA) LAVAP at
temperature profile {104, 86, 68} °F and original LAVA at constant 104 °F temperature.
Figure 46. Graph. Comparison of responses calculated using (T-profile LAVA) LAVAP at
temperature profile {104, 86, 68} °F and original LAVA at a constant temperature of 86 °F.
Figure 47. Graph. Comparison of responses calculated using (T-profile LAVA) LAVAP at
temperature profile {104, 86, 68} °F and original LAVA at a constant temperature of 68 °F.
Figure 48. Graph. Region of E(t) master curve (at 66.2 °F reference temperature) used by
(T-profile LAVA) LAVAP for calculating response at temperature profile {104, 86, 68} °F.
As expected, for a condition of higher temperature at the top and lower temperature at the bottom, the response with a higher constant temperature was always greater than the response with a
temperature profile. The response with a lower constant temperature was always less than the response with a temperature profile. The response with a medium constant temperature may or may not be
less than the profile response, depending on the temperature profile and thickness of the sublayering.
Next, the LAVA algorithm was validated against a well-known FEM software, ABAQUS, where the temperature profile in the AC layer was simulated as two sublayers of AC with different temperatures. For
this purpose, two different HMA types were considered, Terpolymer and SBS64-40. The viscoelastic properties of these two mixes are shown in figure 49. As shown in table 13, for both mixes, the AC
layer was divided into two sublayers, with temperature in the top and bottom sublayer assumed to be 66 and 86 °F, respectively.
Figure 49. Graphs. Relaxation modulus and shift factor master curves at a reference temperature of 66 °F.
Table 13. Pavement section used in (T-profile LAVA) LAVAP validation.
│ Layer │ Modulus (E(t) or E) │ Thickness (inches) (Temperature °F) │ Poisson’s Ratio │
│ AC │ Mix 1: Terpolymer (E(t), see figure 49); Mix 2: SBS 64-40 (E(t), see figure 49) │ Sublayer 1 = 3.94 (66 °F); Sublayer 2 = 3.94 (86 °F) │ 0.45 │
│ Base │ 15,000 psi (linear elastic) │ 7.88 │ 0.35 │
│ Subgrade │ 10,000 psi (linear elastic) │ Infinity │ 0.45 │
Figure 50 and figure 51 show a comparison of surface deflection time histories measured at radial distances of 0, 7.99, 12.01, 17.99, 24.02, 35.98, 47.99, and 60 inches for mixes 1 and 2. From the
figures, it can be observed that the results obtained from LAVAP and ABAQUS matched well.
Figure 50. Graph. Comparison between LAVAP and ABAQUS at a temperature profile of {66, 86} °F (terpolymer).
Figure 51. Graph. Comparison between LAVAP and ABAQUS at a temperature profile of {66, 86} °F (SBS 64-40).
As expected, it can be seen from table 14 that for both mixes, surface deflection in the pavement section at the two-step AC temperature profile of {66, 86} °F was between the deflections obtained
for constant AC temperatures of 66 and 86 °F.
Table 14. Peak deflections at temperature profile {66, 86} °F and at constant temperatures of 66 and 86 °F using LAVA.
│ Mix │ Temperature (°F) │ Sensor 1 │ Sensor 2 │ Sensor 3 │ Sensor 4 │ Sensor 5 │ Sensor 6 │ Sensor 7 │ Sensor 8 │
│ Terpolymer │ Constant: 66 │ 28.1 │ 24.7 │ 22.5 │ 19.5 │ 16.8 │ 12.4 │ 9.3 │ 7.2 │
│ Terpolymer │ Profile: 66, 86 │ 33.0 │ 28.1 │ 24.9 │ 20.7 │ 17.3 │ 12.2 │ 9.1 │ 7.0 │
│ Terpolymer │ Constant: 86 │ 38.4 │ 30.9 │ 26.8 │ 21.8 │ 17.8 │ 12.2 │ 8.9 │ 6.9 │
│ SBS 64-40 │ Constant: 66 │ 35.2 │ 29.1 │ 25.7 │ 21.3 │ 17.6 │ 12.3 │ 9.1 │ 7.0 │
│ SBS 64-40 │ Profile: 66, 86 │ 40.9 │ 32.5 │ 27.7 │ 22.0 │ 17.7 │ 12.0 │ 8.8 │ 6.9 │
│ SBS 64-40 │ Constant: 86 │ 47.6 │ 35.1 │ 29.2 │ 22.7 │ 17.9 │ 12.0 │ 8.7 │ 6.8 │
(Sensor deflections in mil.)
This section presents a computationally efficient layered viscoelastic nonlinear model called LAVAN. LAVAN can consider linear viscoelasticity of AC layers as well as the stress-dependent modulus of
granular layers. The formulation was inspired by quasilinear-viscoelastic (QLV) constitutive modeling, which is often used in analyzing nonlinear viscoelastic materials. In the literature, the
various forms of the model are also called Fung’s model, Schapery’s nonlinearity model, and modified Boltzmann’s superposition. (See references 70–75.)
LAVAN combines Schapery’s quasi-elastic theory with generalized QLV theory to predict the response of multilayered viscoelastic nonlinear flexible pavement structures. Before introducing the
generalized QLV model, a brief overview of granular nonlinear pavement models is presented. This is followed by development of a generalized QLV model for a multilayered system and numerical
validation in which the response of flexible pavements under the FWD test was analyzed. The model was validated against the general-purpose FEM software, ABAQUS.
Layered Nonlinear Elastic Solutions
Under constant amplitude cyclic loading, granular unbound materials exhibit plastic deformation during the initial cycles. As the number of load cycles increases, plastic deformation ceases to occur,
and the response becomes elastic in further load cycles. Often, elastic response in a triaxial cyclic loading is defined by resilient modulus (M[R]) at that load level, which is expressed as shown in
figure 52.
Figure 52. Equation. Resilient modulus.
\[ M_{R} = \frac{\sigma_{d}}{\varepsilon_{r}} \]
σ[d] = (σ[1] − σ[3]) is the deviatoric stress in a triaxial test.
ε[r] = recoverable strain.
If the granular layer reaches this steady state under repeated vehicular loading, then further response can be considered recoverable, and figure 52 can be used to characterize the material. The M[R]
value shown in figure 52 is affected by the stress state (or load level). Typically, unbound granular materials exhibit stress hardening, i.e., M[R] increases with increasing stress.^(76,77) As shown in figure 53, Hicks and Monismith related the resilient modulus obtained from figure 52 to the bulk stress to characterize the stress dependency of the material.^(78)
Figure 53. Equation. Resilient modulus as a function of stress invariant.
θ = the sum of principal stresses (i.e., θ = σ[1] + σ[2] + σ[3])
k[1] and k[2] = regression constants.
The models suggested by Uzan and by Witczak and Uzan (figure 54 and figure 55, respectively) incorporated the distortional shear effect using the deviatoric and octahedral shear stresses, respectively.
Figure 54. Equation. Uzan’s nonlinearity model.
p[a] = atmospheric pressure.
θ = the sum of principal stresses (i.e., θ = σ[1] + σ[2] + σ[3]).
σ[d] = deviatoric stress.
k[1], k[2], and k[3] = regression constants.
Figure 55. Equation. Witczak and Uzan’s nonlinearity model.
τ[oct] = octahedral shear stress.
p[a] = atmospheric pressure.
θ = the sum of principal stresses (i.e., θ = σ[1] + σ[2] + σ[3]).
σ[d] = deviatoric stress.
k[1], k[2], and k[3] = regression constants.
The model has been further modified by various researchers. Yau and Von Quintus analyzed LTPP M[R] test data using the generalized form of the Uzan model expressed as the equation in figure 56.
Figure 56. Equation. Generalized Uzan’s model.
Where k[1], k[2], k[3], k[6], k[7] are regression constants. They found that parameter k[6] regressed to zero for more than half of the tests, and hence the coefficient was set to zero for the
subsequent analysis. The modified equation is shown in figure 57.
Figure 57. Equation. MEPDG model for resilient modulus.
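As a concrete illustration, the stress-dependent modulus of figure 57 (the same k-θ-τ form adopted for the base layer later in this chapter, M[R] = k[1](θ/p[a])^k2(τ[oct]/p[a] + 1)^k3) can be sketched as follows. The function name and the default atmospheric pressure of 14.7 psi are illustrative assumptions, not taken from the report:

```python
def resilient_modulus(theta, tau_oct, k1, k2, k3, p_a=14.7):
    """Stress-dependent resilient modulus (k-theta-tau form, figure 57):
        M_R = k1 * (theta / p_a)**k2 * (tau_oct / p_a + 1)**k3
    theta   -- bulk stress (sum of principal stresses), psi
    tau_oct -- octahedral shear stress, psi
    p_a     -- atmospheric pressure, psi (illustrative default)
    """
    return k1 * (theta / p_a) ** k2 * (tau_oct / p_a + 1.0) ** k3
```

With the base-layer constants of table 15 (k[1] = 3,626 psi, k[2] = 0.5, k[3] = -0.5), the modulus increases with bulk stress (stress hardening) and decreases with octahedral shear, consistent with the behavior described above.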
Although the resilient modulus, M[R], is not Young's modulus (E), it is often used in place of E in the equation in figure 58 when formulating granular material constitutive equations.^(81)
Figure 58. Equation. Elasticity constitutive equation.
E = Young’s modulus.
v = Poisson’s ratio.
σ[ij] = the stress tensor.
ε[ij] = the strain tensor.
ε[kk] = (ε[11] + ε[22] + ε[33]).
δ[ij] = the Kronecker delta.
Nonlinear M[R] in flexible pavements has been implemented in many FEM-based models that assume the AC layer to be elastic. These include GTPAVE, ILLIPAVE, and MICHPAVE.^(82–84) Typically, FEM-based
nonlinear pavement analysis is performed by choosing a user-defined material (UMAT) in FEM-based software packages such as ABAQUS and ADINA.^(10,85,86) However, although FEM-based solutions are
promising, they are computationally expensive.
An approximate nonlinear analysis of pavement can also be performed using Burmister’s multilayered elastic based solution.^(62,63) However, because the multilayer elastic theory assumes each
individual layer is both vertically and horizontally homogeneous, it can be used to depict nonlinearity only through approximation. For incorporating variation in modulus with depth, Huang suggested
dividing the nonlinear layer into multiple sublayers.^(27) Furthermore, he suggested choosing a representative location in the nonlinear layers to evaluate modulus based on the stress state of the
point. He showed that when the midpoint of the nonlinear layer under the load was selected to calculate modulus values, the predicted response near the load was close to the actual response.
However, the difference between actual and predicted response increased at points away from the loading. Zhou studied stress dependency of base layer modulus obtained from base layer mid-depth stress
state.^(87) He analyzed FWD testing at multiple load levels on two different pavement structures. The study showed that reasonable nonlinearity parameters k[1] and k[2] (figure 53) can be obtained,
regressing backcalculated modulus with stress state at mid-depth of the base layer.
In the present study, the elastic nonlinearity was solved iteratively assuming an initial set of elastic moduli. The stresses computed at mid-depth of each nonlinear layer using the initial values of
modulus were used to compute the new set of moduli. The iteration was continued until E computed from the stresses predicted by the layered solution and the E used in the layered solution converged.
Note that the appropriate stress adjustments were made because unbound granular material cannot take tension. This means that in such a case, either residual stress would be generated such that the
stress state obeyed a yield criterion or the tensile stresses would be replaced with zero.
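The iterative scheme described above can be sketched as a simple fixed-point loop. The layered elastic analysis itself is not reproduced here; `stress_at_middepth` is a hypothetical stand-in for it, and the tolerance and iteration limit are illustrative:

```python
def iterate_nonlinear_modulus(stress_at_middepth, mr_model, E_init,
                              tol=1e-3, max_iter=50):
    """Fixed-point iteration for a stress-dependent layer: assume a
    modulus, evaluate the mid-depth stress state from the layered
    elastic solution, recompute the modulus from that stress state,
    and repeat until the assumed and computed moduli converge.

    stress_at_middepth(E) -> (theta, tau_oct)  # stand-in for the
                                               # layered elastic analysis
    mr_model(theta, tau_oct) -> resilient modulus
    """
    E = E_init
    for _ in range(max_iter):
        theta, tau_oct = stress_at_middepth(E)
        # Unbound granular material cannot take tension: clip negative
        # bulk stress to zero, per the stress adjustment in the text.
        theta = max(theta, 0.0)
        E_new = mr_model(theta, tau_oct)
        if abs(E_new - E) / E_new < tol:
            return E_new
        E = E_new
    return E
```

The returned modulus satisfies the consistency condition of the text: the modulus used in the layered solution equals (within tolerance) the modulus implied by the stresses that solution predicts.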
The algorithm developed to obtain the response of a nonlinear system was compared with a robust nonlinear FEM software package, MICHPAVE. The algorithm was compared for the cases in which the unbound layer was considered as a single layer for nonlinearity calculations (algorithm 1) and in which the layer was divided into two sections (algorithm 2). The analysis results are presented in appendix A. From the results, it was observed that subdividing the unbound base layer into two layers for computing nonlinearity did not produce much improvement in the results; hence, the base layer was treated as a single layer in further analysis.
Proposed Layered Viscoelastic Nonlinear (LAVAN) Pavement Model
Mechanistic solutions for nonlinear viscoelastic materials exhibit variation depending on the type of nonlinearity present. Typical nonlinear viscoelasticity equations involve convolution integrals
that are based on unit responses (e.g., relaxation modulus and creep compliance), which are a function of stress or strain. Figure 59 and figure 60 show typical forms of such expressions.
Figure 59. Equation. Nonlinear viscoelastic formulation for stress when relaxation modulus is a function of strain.
Figure 60. Equation. Nonlinear viscoelastic formulation for strain.
ε = strain.
σ = stress.
E(t, ε) = strain-dependent relaxation modulus.
D(t, σ) = stress-dependent creep compliance.
Typically, in many nonlinear materials, the shape of the relaxation modulus of the material is preserved, even though the material presents stress or strain dependency.^(74,87) Such nonlinear
viscoelastic (NLV) problems are solved by assuming that time dependence and stress (or strain) dependence can be decomposed into two functions as shown in figure 61 and figure 62.
Figure 61. Equation. Nonlinear creep compliance formulation.
Figure 62. Equation. Nonlinear relaxation modulus formulation.
g(σ) = a function of stress.
D[t](t) = the (only) time-dependent creep compliance.
f(ε) = a function of strain.
E[t](t) = the (only) time-dependent relaxation modulus.
For such materials, the expression in figure 63 has been typically used in NLV formulations to develop the convolution integral.
Figure 63. Equation. Nonlinear viscoelastic formulation for stress when relaxation modulus is separated from strain dependence function.
Where E[t] is a relaxation function that remains unchanged at any strain level and f(ε (τ)) is a function of strain, such that df(ε (τ))/dε represents the elastic tangent stiffness.
These models are designated as Fung’s nonlinear viscoelastic material models, which were first proposed by Leaderman in 1943.^(70) A generalized form of this nonlinearity model was presented by
Schapery using thermodynamic principles.^(71) Yong et al. used the model to describe nonlinear viscoelastic viscoplastic behavior of asphalt sand, whereas Masad et al. used the model to describe
nonlinear viscoelastic creep behavior of binders.^(72,73) The model suggests that the nonlinear relaxation function can be expressed as a product of the function of time (E[t](t[R]– τ)) and the
function of strain df(ε (τ))/dε. In figure 63, nonlinearity was introduced by the elastic component, df(ε (τ))/dε, and the viscoelasticity comes from E[t].
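A minimal discretization of the hereditary integral in figure 63 is sketched below. The midpoint rule and the piecewise-linear strain history are implementation choices, not taken from the report:

```python
def fung_qlv_stress(times, strains, E_t, f):
    """Discretized Fung/QLV hereditary integral (figure 63):
        sigma(t) = integral of E_t(t - tau) * d f(eps(tau))/d tau  d tau.
    times, strains -- sampled strain history (equal lengths)
    E_t            -- time-dependent relaxation function E_t(t)
    f              -- strain-dependence function f(eps)
    """
    sigma = [0.0] * len(times)
    for n in range(1, len(times)):
        s = 0.0
        for j in range(n):
            d_f = f(strains[j + 1]) - f(strains[j])          # increment of f(eps)
            t_lag = times[n] - 0.5 * (times[j] + times[j + 1])  # midpoint rule
            s += E_t(t_lag) * d_f
        sigma[n] = s
    return sigma
```

For a linear material (f(ε) = ε and constant E[t]), the sum collapses to σ(t) = E·ε(t), which is a quick sanity check on the discretization.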
Concepts of nonlinear viscoelastic material behavior can be used to develop formulations for a layered system where the unbound layer is nonlinear and the AC layer is linear viscoelastic. If the
previous argument is directly adopted, then the corresponding QLV analysis of viscoelastic nonlinear multilayered analysis can be represented as shown in figure 64.
Figure 64. Equation. Nonlinear viscoelastic formulation for stress when relaxation modulus is separated from strain dependence function and when formulation is applied to a multilayered pavement structure.
Where E[t](x,y,z,t[R]) is the relaxation function, and f(x,y,z,ε(τ)) is a function of strain ε(τ) at location (x,y,z). Alternatively, to obtain vertical surface deflection in pavements, figure 64
can be expressed in terms of vertical deflection response to Heaviside step loading as shown in figure 65.
Figure 65. Equation. Nonlinear viscoelastic formulation for deflection.
u^ve(t) = the surface (nonlinear viscoelastic) displacement.
u^e[H-t] (t, σ = 1) = the unit nonlinear elastic response due to a unit stress.
g(σ) = a function of stress, which can be expressed as shown in figure 66.
Figure 66. Equation. Nonlinear viscoelastic formulation.
Where u^e[H](t[R], σ) is the nonlinear elastic unit displacement due to a given stress (σ). For Fung’s theory to hold (i.e., figure 63 through figure 66), g(σ) must be purely a function of stress.
To investigate this, the g(σ) values were computed using figure 66 and plotted against surface stress and relaxation modulus (i.e., time). The LAVA algorithm was modified to implement an iterative nonlinear solution for the granular base, which was assumed to follow one of two nonlinearity expressions: M[R] = k[1](θ/p[a])^k2 and M[R] = k[1](θ/p[a])^k2(τ[oct]/p[a] + 1)^k3. Analysis using the k-θ-τ model is presented here, whereas analysis using the k-θ model is presented in appendix A. The pavement section properties and material properties are shown in table 15 and figure 67.
Table 15. Pavement geometric and material properties for nonlinear viscoelastic pavement analysis.
│ Property │ Value │
│ Thickness (inches) │ 5.9 (AC), 9.84 (base), infinity (subgrade) │
│ Poisson ratio (ν) │ 0.35 (AC), 0.4 (base), 0.4 (subgrade) │
│ Density (pci) │ 0.0752 (AC), 0.0752 (base), 0.0752 (subgrade) │
│ Nonlinear E[base] (psi) │ k[0] = 0.6; k[1] = 3,626; k[2] = 0.5; k[3] = -0.5 │
│ E[subgrade] (psi) │ 10,000 │
│ AC: E(t) sigmoid coefficient (ci) │ 1.598, 2.937, 0.512, -0.562 │
│ Haversine stress: 35 ms │ Peak stress = 137.79 psi │
│ Sensor spacing from the center of load (inches) │ 0, 7.99, 12.01, 17.99, 24.02, 35.98, 47.99, 60 │
Figure 67. Diagram. Flexible pavement cross section for nonlinear viscoelastic pavement analysis.
The u^e[H](t, σ) in figure 66 was calculated at a range of stress values from 0.1 to 140 psi, using E(t) values for the AC evaluated at times from 10^-8 to 10^8 s. Then, u^e[H-t](t, σ = 1) was calculated for unit stress, and g(σ) was calculated using figure 66. Figure 68 shows the variation of g(σ), where the g(σ) values decrease with increasing stress (σ).
Figure 68. Graph. Variation of g(σ) with stress and E(t) of AC layer.
This is expected behavior for a nonlinear material because, as the stress increases, the unbound layer moduli also increase. However, figure 68 also illustrates that g(σ) varies with change in E(t). This means that g(σ) is not solely a function of stress and, as a result, Fung’s model cannot be used in a layered pavement structure. This is meaningful because the change in the stress distribution within the pavement layers due to the viscoelastic effect (as E(t) varies) imposes changes in the behavior of the stress-dependent granular layer. Note that, as shown in appendix A, similar results were obtained when nonlinearity of the k-θ type was assumed. Hence, even though the viscoelastic layer in a nonlinear multilayered system is linear, the system cannot be formulated as a Fung-type QLV model. The QLV model can still be formulated as a convolution integral, provided the stress-dependent relaxation function of the multilayered structure under all the load levels is known. Such a
generalized QLV model for a multilayered structure can be expressed as nonlinear viscoelasticity equations involving the convolution integrals of unit response function of the structure, which is a
function of stress or strain as shown in figure 69.
Figure 69. Equation. Generalized nonlinear viscoelastic formulation.
R^ve(x,y,z,t) = the nonlinear viscoelastic response of the layered pavement structure.
R^e[H](x, y, z, I(τ), t[R] - τ) = the unit response function, which is a function of both the input I(τ) and time.
I(τ) = the input, i.e., the stress applied at the surface of the pavement.
Note that in this formulation, unlike Fung’s QLV model, time dependence and stress (or strain) dependence were not separated.
Forward Algorithm: Numerical Implementation of the Proposed Model (LAVAN)
Figure 69 can be rewritten in terms of vertical surface deflection under axisymmetric surface loading (see figure 67) as shown in figure 70.
Figure 70. Equation. Generalized nonlinear viscoelastic formulation for deflection.
u^ve[vertical](z,r,t) = the vertical deflection at time t and location (z,r).
u^e[H-vertical] (z,r,σ(τ), t[R] - τ) = u^e[vertical] (z,r,σ(τ),t[R] - τ)/σ where u^e[vertical] (z,r,σ,t[R] - τ) is the nonlinear response of the pavement at a loading stress level of σ.
The model in figure 70 can be expressed in discretized formulation as shown in figure 71.
Figure 71. Equation. Discretized nonlinear formulation.
Where τ[1] = 0, τ[N] = t. The u^e[H](σ, t[Ri] - τ[j]) values are computed via interpolation using the two-dimensional matrix precomputed for u^e[H](t, σ) (which was computed at a range of stress values and E(t) values). The developed model has been referred to as LAVAN (short for LAVA-Nonlinear) in this report. The following step-by-step procedure can be used to numerically compute the nonlinear viscoelastic response:
1. Define a discrete set of surface stress values: σ[k] = 0.1 to 140 psi.
2. Calculate the nonlinear elastic response u^e(t[Ri], σ[k]) at a range of t[Ri] values by using E[AC] = E(t[Ri]) for each t[Ri] value. Iteratively compute E[base] until the stress in the middle of
the base layer, at a radial distance r, results in the same E[base] as the one used in the layered elastic analysis (within acceptable error). For this step, the nonlinear formulation shown in
figure 72 is assumed for the base.
Figure 72. Equation. Resilient modulus.
θ = σ[1] + σ[2] + σ[3] + γz(1 + 2K[0]) (where K[0] is the coefficient of earth pressure at rest and γz is the overburden stress at depth z).
τ[oct] = octahedral shear stress.
k[1], k[2], and k[3] = regression constants.
p[a] = atmospheric pressure.
3. Compute the unit response: u^e[H-vertical](r,z,t[i],σ[k]) = u^e[vertical](r,z,t[i],σ[k])/σ[k].
4. Perform convolution shown in figure 71 to calculate the nonlinear viscoelastic response.
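Steps 1 through 4 can be sketched as follows. Here `u_eH` is a callable standing in for interpolation of the precomputed two-dimensional u^e[H](t, σ) matrix from step 2; the midpoint evaluation of the stress level and time lag is an implementation choice:

```python
def lavan_deflection(t_grid, sigma_hist, u_eH):
    """Discretized generalized QLV convolution (figure 71):
        u_ve(t_n) = sum over j of u_eH(sigma(tau_j), t_n - tau_j)
                    * [sigma(tau_{j+1}) - sigma(tau_j)]
    t_grid, sigma_hist -- sampled surface-stress history (equal lengths)
    u_eH(sigma, t_lag) -- stress-dependent unit response, interpolated
                          from the precomputed matrix of step 2
    """
    n_pts = len(t_grid)
    u = [0.0] * n_pts
    for n in range(1, n_pts):
        for j in range(n):
            d_sig = sigma_hist[j + 1] - sigma_hist[j]
            t_lag = t_grid[n] - 0.5 * (t_grid[j] + t_grid[j + 1])
            # unit response evaluated at the stress level of the increment
            sig_mid = 0.5 * (sigma_hist[j] + sigma_hist[j + 1])
            u[n] += u_eH(sig_mid, t_lag) * d_sig
    return u
```

When the unit response does not depend on stress, the sum reduces to the linear viscoelastic (LAVA) convolution, so the nonlinear model degenerates gracefully to the linear one.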
Verification of the LAVAN Model
To validate the LAVAN algorithm, ABAQUS was used. A flexible pavement was modeled as a three-layer structure, with a viscoelastic AC top layer over a stress-dependent granular base layer on an elastic half-space (subgrade). Figure 67 shows the geometric properties of the pavement structure used in the validation, where h[AC] = 5.9 inches and h[base] = 9.84 inches. The viscoelastic properties of two HMA mixes, called crumb rubber terminal blend (CRTB) and control (two materials from FHWA’s Accelerated Load Facility 2002 experiment), were used for the AC layer in the analysis as cases 1 and 2; the relaxation modulus master curves for the two mixes are shown in figure 73.^(89) These curves were computed from their |E*| master curves by following a published interconversion procedure.^(58) The pavement properties in the analysis for each test case were the same, as shown in table 15.
Figure 73. Graph. Relaxation moduli of mixes used in LAVAN validation.
In ABAQUS, the viscoelastic properties of the HMAs were input in the form of normalized bulk modulus (K) and normalized shear modulus (G).^(90) For the unbound nonlinear layer, a UMAT was written,
incorporating the nonlinear constitutive modeling as explained in the previous section. ABAQUS requires that any UMAT have at least two main components: (1) update of the stiffness Jacobian Matrix
and (2) stress increment. Figure 74 and figure 75 show the mathematical expressions for these two operations implemented in the UMAT.
Figure 74. Equation. ABAQUS Jacobian formulation.
Figure 75. Equation. ABAQUS stress update formulation.
Where J is the Jacobian matrix; σ[ij]^n+1 is the updated stress; and i, j, k, and l represent r, z, t, and θ in the cylindrical coordinate system. For nonlinear analysis using LAVAN, the unbound
modulus was calculated using the stress state at the midpoint of the unbound base layer (vertically). Because LAVAN cannot incorporate nonlinearity along the horizontal direction, for comparison,
modulus values were calculated using stress at r = 3.5a (r shown in figure 67). In ABAQUS, the FE domain size of 133R in the vertical direction and 53R in the horizontal direction was found to
produce stable surface deflection (with less than 1-percent error at the center). For the selected domain size, the FEM mesh refinement of 0.4 inch in the AC layer and 1 inch in the base layer were
used. ABAQUS took approximately 17 min to analyze a haversine loading of 138 psi and 35 ms, whereas LAVAN could generate the results in 3.6 min.
Comparison of surface deflection between LAVAN and ABAQUS for the control mix (figure 76) and CRTB mix (figure 77) shows good predictability of LAVAN. As expected, the stiffer mix (control) generated
a lower response compared with the softer mix (CRTB) under the same geometric and loading conditions. The top graph in figure 76 shows the results when stress at r = 0 is used in LAVAN and was
provided for comparison purposes. The bottom graph in figure 76 shows the results when stress at r = 3.5a was used in LAVAN. Note that S1, S2, S3, S4, S5, S6, S7, and S8 in the figures correspond to
surface deflection Sensor-1 (r = 0 inches), Sensor-2 (r = 8 inches) etc. Sensors 1 through 8 were 0, 8, 12, 18, 24, 36, 48, and 60 inches away from the centerline of the load.
Figure 76. Graphs. Surface deflection comparison of ABAQUS and LAVAN for the control mix.
Figure 77. Graphs. Surface deflection comparison of ABAQUS and LAVAN for the CRTB mix.
The difference between ABAQUS and LAVAN was quantified using the two variables shown in figure 78 and figure 79.
Figure 78. Equation. Error in peak deflection.
Figure 79. Equation. Average error in normalized deflection history.
PE[peak] = Percent error in the peaks.
δ^peak[ABAQUS] = Peak deflection predicted by ABAQUS.
δ^peak[LAVAN] = Peak deflection predicted by LAVAN.
PE[avg] = Average percent error in normalized deflection history.
δ[ABAQUS](t[i]) = Deflection predicted by ABAQUS at time t[i].
δ[LAVAN](t[i]) = Deflection predicted by LAVAN at time t[i].
N = Number of time intervals in the deflection time history.
Because the model integrates both viscoelastic and nonlinear material properties, both peak deflection and creeping of deflection should be predicted with accuracy. PE[avg] was used to examine the
model performance in creeping.
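A plausible reading of the two error measures in figures 78 and 79 is sketched below. The normalization of each history by its own peak in PE[avg] is inferred from the phrase “normalized deflection history” rather than quoted from the figures:

```python
def pe_peak(d_ref, d_model):
    """Percent error in the peak deflections (figure 78)."""
    peak_ref, peak_model = max(d_ref), max(d_model)
    return 100.0 * abs(peak_ref - peak_model) / peak_ref

def pe_avg(d_ref, d_model):
    """Average percent error over the peak-normalized deflection
    histories (figure 79). Normalizing each history by its own peak
    isolates the creep (shape) of the response from its magnitude."""
    peak_ref, peak_model = max(d_ref), max(d_model)
    n = len(d_ref)
    return (100.0 / n) * sum(abs(r / peak_ref - m / peak_model)
                             for r, m in zip(d_ref, d_model))
```

Note that two histories with the same shape but different magnitudes give PE[avg] = 0 while PE[peak] is nonzero, which is exactly why both measures are needed to judge a viscoelastic nonlinear model.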
As shown in figure 80 and figure 81, the PE[peak] and PE[avg] values for the control mix showed a slight improvement in the results when r = 3.5a was used. However, from figure 82 and figure 83, it
can be seen that PE[peak] and PE[avg] values for CRTB mix showed more sensitivity to the location of the stress state.
In general, for the deflection basin at farther sensors, a better match between the ABAQUS and LAVAN results was found when the stress state at r = 3.5a was used while incorporating nonlinearity. However, note that r = 0 also produced relatively good results, especially in the first four to five sensors. Also note that, for the structure in table 15, the procedure leads to r = 2.8a when a trapezoidal stress distribution (0.5 horizontal to 1 vertical slope) is assumed.^(27)
Figure 80. Graph. Percent error (PE[peak]) calculated using the peaks of the deflections for LAVAN-ABAQUS comparison (control mix).
Figure 81. Graph. Average percent error (PE[avg]) calculated using the entire time history for the LAVAN-ABAQUS comparison (control mix).
Figure 82. Graph. Percent error (PE[peak]) calculated using the peaks of the deflections for the LAVAN-ABAQUS comparison (CRTB mix).
Figure 83. Graph. Average percent error (PE[avg]) calculated using the entire time history for the LAVAN-ABAQUS comparison (CRTB mix).
Backcalculation of pavement properties using FWD data is essentially an optimization problem. The analysis is based on formulating an objective function, which is minimized by varying the pavement
properties. Response obtained from the forward analysis is matched with response obtained from the FWD test, and the difference is minimized by adjusting the layer properties of the system until a
best match is achieved. Typically, the existing backcalculation methods either use RMS or percentage error of peak deflections as the objective function. However, because the viscoelastic properties
are time dependent, the entire deflection history needs to be used. Hence, the primary component of the proposed backcalculation procedure was a layered viscoelastic forward solution. Such a solution
should provide accurate and rapid displacement response histories owing to a time-varying (stationary) surface loading. For a linear viscoelastic pavement model, the research team used the
computationally efficient layered viscoelastic algorithm LAVA to support the backcalculation algorithm called BACKLAVA, whereas for viscoelastic nonlinear pavement model, the team used the
computationally efficient layered viscoelastic algorithm LAVAN to support the backcalculation algorithm called BACKLAVAN.^(65)
Whenever mechanical properties are derived with inverse analysis, it is desirable to minimize the number of undetermined parameters by using an economical scheme. Such an approach is both
advantageous from a computational speed perspective and addresses the non-uniqueness issue, i.e., test data may not be detailed, accurate, or precise enough to allow calibration of a complicated
model. Moreover, it is beneficial to have some inherent “protection” within the formulation, forcing the analysis to a meaningful convergence—fully compliant with the physics of the problem.
Therefore, as discussed before, the relaxation modulus (E(t)) master curve (figure 32) was initially assumed to follow a sigmoid shape defined by the equation in figure 84:
Figure 84. Equation. Sigmoid form of relaxation modulus curve.
Where c[i] are the sigmoid coefficients and t[R] is the reduced time, which is defined as t[R] = t/a[T](T) (or log(t[R]) = log(t) – log(a[T](T))), where, as discussed before (figure 33), a[T](T) is the shift factor, which is a function of temperature (T), and t is time. The shift factor has been defined as a second-order polynomial of the form log(a[T](T)) = a[1](T^2 – T[ref]^2) + a[2](T – T[ref]), where a[1] and a[2] are the shift factor coefficients. As shown by the relaxation modulus and shift factor equations, a total of six coefficients were needed to develop the E(t) master curve, including the temperature dependency (i.e., the shift factor coefficients).
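The six-coefficient master curve can be sketched as below. The exact sigmoid of figure 84 is not reproduced in the text, so the common four-coefficient form log10 E = c1 + c2/(1 + exp(c3 + c4·log10 t[R])) is assumed here, together with the quadratic shift factor defined above:

```python
import math

def log_shift_factor(T, T_ref, a1, a2):
    """log(a_T(T)) = a1*(T**2 - T_ref**2) + a2*(T - T_ref)."""
    return a1 * (T ** 2 - T_ref ** 2) + a2 * (T - T_ref)

def relaxation_modulus(t, T, T_ref, c, a1, a2):
    """E(t) from a sigmoid master curve (assumed form, see lead-in),
    evaluated at physical time t and temperature T via reduced time:
        log(t_R) = log(t) - log(a_T(T))."""
    log_tR = math.log10(t) - log_shift_factor(T, T_ref, a1, a2)
    c1, c2, c3, c4 = c
    return 10.0 ** (c1 + c2 / (1.0 + math.exp(c3 + c4 * log_tR)))
```

At T = T[ref] the shift factor is 1 (log a[T] = 0), so reduced time equals physical time; the four c[i] and two a[i] values are exactly the six unknowns that the backcalculation must recover (besides the unbound layer moduli).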
In theory, it should be possible to obtain these six coefficients in two ways: (1) using data containing time-changing response at different temperature levels and (2) using uneven temperature
profile information existing across the thickness of the asphaltic layer during a single drop containing time changing response data.
Reliability and accuracy of the backcalculated results depend on the optimization technique used. In the present work, several optimization techniques were tried to formulate a procedure to
backcalculate these six viscoelastic properties along with unbound material properties. These optimization techniques can be broadly classified as classical methods and evolutionary methods. In this
study, a simplex-based classical optimization method was performed using the MATLAB® function fminsearch, whereas a genetic algorithm (GA)-based evolutionary optimization method was performed using the MATLAB® function ga. The objective function, which is based on deflection differences in the current work, is a multidimensional surface that can include many local minima. In elastic backcalculation methods, the modulus of the AC layer is defined using a single value. However, in the present problem, the AC properties were represented by a sigmoid containing four parameters for E(t) and by a polynomial containing two parameters for a[T](T). Hence, it was naturally expected that the number of local minima would increase. In traditional methods, because of the presence of multiple local minima, selection of different initial solutions may lead to different subsequent solutions. Typically, classical optimization methods (such as fminsearch) have the following disadvantages:
• Solution may depend on initial seed values.
• Convergence can be achieved at a local minimum.
These disadvantages do not mean that classical methods cannot be used in the backcalculation procedure. In fact, the classical methods can be hybridized along with evolutionary optimization
techniques in developing more effective backcalculation procedures.
It is important to develop a backcalculation process such that FWD data obtained at a relatively small range of pavement temperatures can be sufficient to derive the viscoelastic properties of AC.
Among various optimization techniques, GA was chosen because of its capability to converge to a unique global minimum solution, irrespective of the presence of local solutions.^(91–93) GA was
implemented using MATLAB® function ga. In general terms, GA performs the following operations: (1) initialization, (2) selection, (3) generation of offspring, and (4) termination. In initialization,
GA generates a pool of solutions using a subset of the feasible search space, the so-called “population.” Each solution is a vector of feasible variable values. In the selection process, each
solution is evaluated using an objective function, and the best fitted solutions are selected. The selected solutions are then used to generate the next generation population (offspring). This
process mainly involves two operators: crossover and mutation. In crossover, a new solution is formed by exchanging information between two parent solutions, which is done by swapping a portion of
parent vectors. In mutation, a new solution is formed by randomly changing a portion of the parent solution vector. The newly generated population is evaluated using the objective function. This
process is repeated until a termination criterion is reached. Through guided random search from one generation to another, GA minimizes the desired objective function.
Formulation of the optimization model using GA is shown in figure 85.
Figure 85. Equation. Optimization model.
m = Number of sensors.
d[i]^k = Input deflection history obtained from the field at sensor k.
d[o]^k = Output deflection information obtained from forward analysis at sensor k.
n = Total number of deflection data points recorded by a sensor.
c[i] = Sigmoid coefficients.
E[b] and E[s] = Base and subgrade moduli.
a[i] = Shift factor polynomial coefficients.
l = Lower limit.
u = Upper limit.
This model is also subject to the following constraints:
• c^l[1] ≤ c[1] ≤ c^u[1].
• c^l[2] ≤ c[2] ≤ c^u[2].
• c^l[3] ≤ c[3] ≤ c^u[3].
• c^l[4] ≤ c[4] ≤ c^u[4].
• a^l[1] ≤ a[1] ≤ a^u[1].
• a^l[2] ≤ a[2] ≤ a^u[2].
• E^l[b] ≤ E[b] ≤ E^u[b].
• E^l[s] ≤ E[s] ≤ E^u[s].
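The GA loop described above (initialization, selection, generation of offspring, termination) can be sketched in a few dozen lines. This is a generic real-coded GA standing in for MATLAB’s ga, not the project’s actual implementation; tournament selection, blend crossover, and elitism are illustrative choices, and in practice `objective` would be the deflection-matching function of figure 85 evaluated with LAVAN under the bound constraints listed above:

```python
import random

def ga_minimize(objective, lower, upper, pop_size=40, generations=30,
                crossover_rate=0.8, mutation_rate=0.1, seed=0):
    """Minimal real-coded genetic algorithm with bound constraints."""
    rng = random.Random(seed)
    dim = len(lower)
    # Initialization: random population within the bounds
    pop = [[rng.uniform(lower[i], upper[i]) for i in range(dim)]
           for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        new_pop = [best[:]]                      # elitism: keep best solution
        while len(new_pop) < pop_size:
            # Selection: tournament of 3 for each parent
            p1 = min(rng.sample(pop, 3), key=objective)
            p2 = min(rng.sample(pop, 3), key=objective)
            child = p1[:]
            if rng.random() < crossover_rate:    # blend crossover
                w = rng.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            for i in range(dim):                 # bounded mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.uniform(lower[i], upper[i])
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=objective)
    return best
```

Because crossover forms convex combinations of in-bound parents and mutation redraws within the bounds, every candidate automatically satisfies the box constraints of figure 85.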
To obtain the lower and upper limits of c[i] and a[i], values of sigmoid and shift factor coefficients of numerous HMA mixtures were calculated. Table 16 shows these limits, which were used in the
GA constraints shown in figure 85. Limits to the elastic modulus were arbitrarily selected (based on typical values presented in the literature). Note that the sigmoid obtained by using the lower or
upper limits of the coefficients gave a larger range compared with the actual range of E(t). This could potentially slow down the backcalculation process. Therefore, as described later in the report,
additional constraints were defined to narrow the search window.
Table 16. Upper and lower limit values in backcalculation.
│ Limit │ c[1] │ c[2] │ c[3] │ c[4] │ a[1] │ a[2] │ E[1] │ E[2] │
│ Lower │ 0.045 │ 1.80 │ -0.523 │ -0.845 │ -5.380E-04 │ -1.598E-01 │ 10,000 │ 22,000 │
│ Upper │ 2.155 │ 4.40 │ 1.025 │ -0.380 │ 1.136E-03 │ -0.770E-01 │ 13,000 │ 28,000 │
The duration of a single pulse of an FWD test is very short, which limits the portion of the E(t) curve used in the forward calculation using LAVA. As a result, it was not possible to backcalculate
the entire E(t) curve accurately using deflection data of such short duration. The longer the duration of the pulse, the larger portion of the E(t) curve used in LAVA in the forward calculation
process. Therefore, one may conclude that FWD tests need to produce a long-duration deflection-time history. However, owing to the thermorheologically simple behavior of AC, the time-temperature
superposition principle can be used to obtain longer duration data by simply running the FWD tests at different temperatures and using the reduced time concept described at the beginning of this chapter.
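The reduced-time argument can be made concrete with a small sketch: map the same short pulse window, observed at several temperatures, onto reduced time using the quadratic shift factor. The shift-factor coefficients below are the layer 1 values from table 17, and the 1-35 ms window is illustrative of an FWD pulse:

```python
import math

def reduced_time_coverage(temps, a1, a2, T_ref, t_min, t_max):
    """Union of the log10 reduced-time windows covered by a physical time
    window [t_min, t_max] observed at each test temperature, using
        log10(t_R) = log10(t) - log10(a_T(T)),
        log10(a_T(T)) = a1*(T**2 - T_ref**2) + a2*(T - T_ref).
    Returns the (lowest, highest) log10 reduced time reached."""
    lows, highs = [], []
    for T in temps:
        log_aT = a1 * (T ** 2 - T_ref ** 2) + a2 * (T - T_ref)
        lows.append(math.log10(t_min) - log_aT)
        highs.append(math.log10(t_max) - log_aT)
    return min(lows), max(highs)
```

With the table 17 shift factors, the two extreme test temperatures (32 and 176 °F) stretch a 1.5-decade pulse window to roughly 7 decades of reduced time, which is why testing at multiple temperatures exercises a much larger portion of the E(t) curve than a single drop can.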
Before discussing the details of the required number of FWD test temperatures and magnitudes, an analysis of the effects of different FWD deflection sensor data on the backcalculated E(t) master curve is presented in the following section.
Sensitivity of E(t) Backcalculation to the Use of Data From Different FWD Sensors
This section presents an analysis of the contribution of individual sensors and groups of sensors to the backcalculation of the E(t) master curve. Note that the analysis was based on a real-coded GA, which uses double vector variables. All the existing applications of GA in pavement inverse analysis were based on a binary-coded GA, and hence the GA parameters suggested in these references were not applicable to the approach presented in this section.^(91–93) As a result, a new set of optimum parameters was determined. The backcalculation process was run using a population size of 70 and 15 generations (selected after trying various combinations), using FWD time histories obtained at a temperature set of {32, 50, 68, 86, 104, 122, 140, 158, and 176} °F. The pavement properties
used (see table 17) were kept the same throughout the study.
Table 17. Pavement properties in viscoelastic backcalculation of optimal number of sensors.
│ Property │ Case 1 │
│ Thickness (AC followed by granular layers) (inches) │ 10, 20, infinity │
│ Poisson ratio {layer 1,2,3…} │ 0.35, 0.3, 0.45 │
│ E[unbound] {layer 2,3…} (psi) │ 11,450, 15,000 │
│ E(t) sigmoid coefficient {layer 1} │ 0.841, 3.54, 0.86, -0.515 │
│ a(T) shift factor polynomial coefficients {layer 1} │ 4.42E-04, -1.32E-01 │
│ Sensor spacing from the center of load (inches) │ 0, 8, 12, 18, 24, 36, 48, 60 │
Convergence was evaluated based on the backcalculated moduli of the base and subgrade layers as well as the E(t) curve of the AC layer. Average error in the moduli of base and subgrade are defined as
shown in figure 86.
Figure 86. Equation. Average error in backcalculated moduli of base and subgrade layers.
ξ[unbound] = Absolute value of the error in the backcalculated unbound layer modulus.
E[act] and E[bc] are the actual and backcalculated moduli (of the unbound layer), respectively.
The variation of error in the backcalculated E(t) at different reduced times is defined as shown in figure 87.
Figure 87. Equation. Error in backcalculated relaxation moduli at different reduced times.
ξ[AC](t[i]) = E(t) error at reduced time t[i], where i ranges from 1 to n such that t[1] = 10^-8 and t[n] = 10^8 s.
n = Total number of discrete points on the E(t) curve.
E[act](t[i]) = Actual E(t) value at point i.
E[bc](t[i]) = Backcalculated E(t) value at i.
Finally, average error in E(t) is defined as shown in figure 88.
Figure 88. Equation. Average error in backcalculated relaxation moduli.
Where ξ^avg[AC] is the average error in the E(t) of the AC layer.
Figure 89 shows the variation of ξ[unbound] when data from different FWD sensors are used. As shown, the error decreased as data from farther sensors were incorporated in the backcalculation. This
may be because at farther sensors, the deflections were primarily, if not solely, due to deformation in the lower layers.
Figure 89. Graph. Error in unbound layer modulus in optimal number of sensor analysis.
Figure 90 shows the actual and backcalculated E(t) curve, which is only based on data from sensor 1 (at the center of load plate). As shown, there was a very good match between the backcalculated and
actual curves. Figure 91 shows the variation of percentage error in E(t) (calculated using figure 87) with time. The magnitude of percent error ranged from about -9 to 23 percent and increased with reduced time. This was expected because the E(t) values at longer durations (> 10^6 s) were not used in the forward computations. Note that the result is shown over a time range of 10^-8 to 10^8 s.
However, the forward calculations were actually made using temperatures ranging from 32 to 176 °F, which corresponded to a reduced time range of approximately 10^-6 to 10^6 s.
Figure 90. Graph. Backcalculated and actual E(t) master curve at the reference temperature of 66 °F using FWD data from only sensor 1.
Figure 91. Graph. Variation of error when using FWD data from only sensor 1.
To investigate whether using just the farther sensors improved the backcalculated E[unbound] values, backcalculations were performed using data from different combinations of farther sensors. Figure 92 shows the error in the backcalculated moduli of the base (layer 2) and the subgrade (layer 3) when data from only the farther sensors were used. As shown, for layer 3, the error ranged from 0.27 to 1.43 percent, with no specific trend. The error in the modulus of the base (layer 2) was higher, ranging from 1 to 8.96 percent, again with no clear trend. Comparing with figure 89, one can conclude that using all the sensors produces the least error in E[unbound].
Figure 92. Graph. Error in unbound layer modulus using FWD data from only farther sensors.
Effect of Temperature Range of Different FWD Tests on Backcalculation
It is typically not feasible to run the FWD test over a wide range of temperatures (e.g., from 32 to 176 °F). Depending on the region and the month of the year, the variation of temperature in a day
can be anywhere between 50 and 86 °F during the fall, summer, and spring when most data collection is done. This means that the performance of the backcalculation algorithm needs to be checked for
various narrow temperature ranges. The purpose of the study explained in this section was to determine the effect of different temperature ranges on the backcalculated E(t) values. Further, it was
recognized that the results obtained from GA might not be exact but only an approximation of the overall solution. Hence a local search method was carried out through fminsearch using the results
obtained from GA as seed. Figure 93 shows the error in the backcalculated elastic modulus values of base and subgrade when different pairs of temperatures were used. As shown, in most cases, the
error was less than 0.1 percent. Note that the errors shown in figure 93 were less than the ones shown in figure 89 (when all sensors were used). This was because in figure 89, only GA was used,
whereas in figure 93, fminsearch was used after the GA, which improved the results.
Figure 93. Graph. Variation of error in backcalculated unbound layer moduli when FWD data run at different sets of pavement temperatures are used.
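The two-stage strategy above (a global GA search refined by fminsearch) can be illustrated with SciPy, using `differential_evolution` as a stand-in for the GA and Nelder-Mead (the simplex algorithm behind MATLAB's fminsearch) for the local refinement. The objective below is a toy curve-fitting error, not the actual deflection-matching function.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy stand-in for the deflection-matching error: recover (a, b) of
# y = a * exp(-b * t) from synthetic "measured" data.
t = np.linspace(0.0, 1.0, 50)
y_meas = 3.0 * np.exp(-2.0 * t)

def error(p):
    a, b = p
    return np.sum((a * np.exp(-b * t) - y_meas) ** 2)

# Stage 1: population-based global search (analogous to the GA).
bounds = [(0.1, 10.0), (0.1, 10.0)]
coarse = differential_evolution(error, bounds, maxiter=15, popsize=20, seed=0)

# Stage 2: simplex refinement seeded with the global result
# (analogous to MATLAB's fminsearch).
refined = minimize(error, coarse.x, method="Nelder-Mead")
a_hat, b_hat = refined.x
```

The global stage only needs to land in the right basin of attraction; the derivative-free simplex stage then polishes the answer, which is why the seeded runs reduced the errors relative to GA alone.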
Figure 94 shows the average error in E(t) (i.e., ξ^avg[AC] given in figure 88), where a pattern was observed. The error was the least when intermediate temperatures (i.e., {50-68}, {50-68-86},
{68-86-104}, {86-104}, {86-104-122} °F) were used. At low temperatures, the error seemed to increase. This was meaningful because at low temperatures, a small portion (upper left in figure 90) was
used in BACKLAVA. Therefore, the chance of mismatch at the later portions of the curve (lower right in figure 90) was high. At high temperatures, error also seemed to increase. Theoretically, the
higher the temperature, the larger the portion of the E(t) curve that was used because of the nature of the convolution integral, which starts from zero (figure 29). However, if only the high
temperatures were used, the discrete nature of load and deflection time history led to a big jump from zero to the next time t[i], during evaluation of the convolution integral. This jump occurred
because when the physical time at high temperatures was converted to reduced time, actual magnitudes became large and, in a sense, a large portion at the upper left side of the E(t) curve was skipped
during the convolution integral. At intermediate temperatures, however, a more balanced use of the E(t) curve in BACKLAVA improved the results.
Figure 94. Graph. Error in backcalculated E(t) curve in optimal backcalculation temperature set analysis minimizing percent error.
When results from GA were used as seed values in fminsearch, it was observed that in general, error in E(t) was reduced. Figure 95, figure 96, and figure 97 show backcalculated E(t) master curve
using GA and corresponding backcalculated E(t) master curves obtained using GA and fminsearch. As shown, combined use of GA and fminsearch resulted in improved backcalculation.
Figure 95. Graphs. Results for backcalculation at {50, 86} °F temperature set: left side—only GA used, right side—GA+fminsearch used.
Figure 96. Graphs. Results for backcalculation at {86, 104} °F temperature set.
Figure 97. Graphs. Results for backcalculation at {86, 104, 122} °F temperature set.
Table 18 shows the time it takes to run the GA with a population size of 70 and 15 generations, followed by fminsearch. The results are shown for a computer with an Intel Core 2 processor at 2.40 GHz and 1.98 GB of RAM.
Table 18. Backcalculation runtime for GA-fminsearch seed runs.
│ Number of Temperature Data │ Backcalculation Time (min) │
│ Two (e.g., {50, 86} °F) │ 30 │
│ Three (e.g., {50, 68, 86} °F) │ 40 │
│ Seed run (fminsearch) │ 15–20 │
Normalization of Error Function (Objective Function) to Evaluate Range of Temperatures
In the analysis presented in the previous sections, percent error between the computed and measured displacement was used as the minimizing error. However, the deflection curve obtained from the
field often includes noise, especially after the end of load pulse. If percent error is used as the minimizing objective, it may lead to overemphasis of lower magnitudes of deflections at the later
portion of the time history, which typically includes noise and integration errors. Hence, another fit function was proposed in which the percent error was calculated with respect to the peak of
deflection at each sensor. This approach penalized the tail data by normalizing it with respect to the peak, as shown in figure 98.
Figure 98. Equation. Normalized error in deflection history.
{d^k}[max] = Peak response at sensor k.
m = Number of sensors.
d[i]^k = Measured deflection at sensor k.
d[o]^k = Output (calculated) deflection from forward analysis at sensor k.
n = Total number of deflection data points recorded by a sensor.
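The peak-normalized objective of figure 98 can be sketched as follows; since the equation itself is a figure placeholder, the mean-of-absolute-differences aggregation below is an assumption consistent with the variable definitions above.

```python
import numpy as np

def normalized_error(d_meas, d_calc):
    """Peak-normalized deflection error (figure 98, assumed form).

    d_meas, d_calc : arrays of shape (m sensors, n time steps).
    Dividing by each sensor's peak {d^k}_max, rather than by the local
    deflection, keeps noisy near-zero tail values from dominating the fit.
    """
    d_meas = np.asarray(d_meas, dtype=float)
    d_calc = np.asarray(d_calc, dtype=float)
    peaks = d_meas.max(axis=1, keepdims=True)  # {d^k}_max for each sensor k
    return np.mean(np.abs(d_meas - d_calc) / peaks) * 100.0
```

With a plain percent error, a 0.1-mil mismatch on a noisy 0.2-mil tail value counts as 50 percent; normalized by a 20-mil peak, the same mismatch counts as 0.5 percent, which is the penalization of the tail data described above.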
The limits considered for E(t) so far were the limits on the individual parameters of the sigmoid curve (table 16). The E(t) curves obtained by considering the upper and lower limits of the
parameters represent curves well beyond the actual data base domain. To curtail this problem, constraints were introduced putting limits shown in figure 99 on the sum of the sigmoid coefficients c[1]
and c[2].
Figure 99. Equation. Constraints in optimization model.
Where s[1] and s[2] are arbitrary constants.
The arbitrary constants s[1] and s[2] were obtained by calculating the maximum and minimum values of the sum of sigmoid coefficients c[1] and c[2] from numerous HMA mixes. Alternatively, the problem was reframed by incorporating the constraints in limit form by redefining the variables as shown in figure 100.
Figure 100. Equation. Sigmoid variables in optimization model.
Where c[1] through c[4] are sigmoidal function coefficients, and x^u and x^l are the upper and lower limits of c[1] + c[2], respectively. The problem was then resolved after replacing the inequality constraint with limits on the variables. The new function gave good results at temperature sets of {50, 86} °F, {86, 104} °F, {50, 68, 86} °F, {68, 86, 104} °F, and {86, 104, 122} °F. The backcalculated E(t) curves were then converted to |E*| using the interconversion relationship given in Kutay et al.^(65) Mathematically, the dynamic modulus can be defined as shown in figure 101:
Figure 101. Equation. Dynamic modulus in complex form.
Where f is frequency, E'(f) is the storage modulus, and E"(f) is the loss modulus, which can be obtained for a generalized Maxwell model using the equations shown in figure 102 and figure 103:
Figure 102. Equation. Real component of dynamic modulus.
Figure 103. Equation. Imaginary component of dynamic modulus.
Φ = Phase angle.
|E*| = Absolute value of the complex E* function (figure 101).
E[i] = Modulus of each Maxwell spring.
ρ[i] = η[i]/E[i] = Relaxation time.
η[i] = Viscosity of each dashpot element in the generalized Maxwell model, as shown in figure 104.
Figure 104. Equation. Dynamic modulus and phase angle.
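For a generalized Maxwell model, the storage and loss moduli in figures 102 and 103 take the standard Prony-series forms E'(ω) = E∞ + Σ E[i](ωρ[i])²/(1 + (ωρ[i])²) and E"(ω) = Σ E[i]ωρ[i]/(1 + (ωρ[i])²), from which |E*| and the phase angle follow. A sketch (the equilibrium-modulus term E∞ is an assumption about the figure's exact form):

```python
import numpy as np

def dynamic_modulus(f, e_inf, e_i, rho_i):
    """|E*| and phase angle (degrees) of a generalized Maxwell model.

    f     : frequencies, Hz
    e_inf : equilibrium (long-time) modulus
    e_i   : Maxwell spring moduli E_i
    rho_i : relaxation times, rho_i = eta_i / E_i
    """
    w = 2.0 * np.pi * np.atleast_1d(np.asarray(f, dtype=float))  # angular freq.
    wr = np.outer(w, rho_i)                                      # omega * rho_i
    e_i = np.asarray(e_i, dtype=float)
    e_storage = e_inf + (e_i * wr**2 / (1.0 + wr**2)).sum(axis=1)  # figure 102
    e_loss = (e_i * wr / (1.0 + wr**2)).sum(axis=1)                # figure 103
    e_star = np.hypot(e_storage, e_loss)                # |E*|, figure 101
    phi = np.degrees(np.arctan2(e_loss, e_storage))     # phase angle, figure 104
    return e_star, phi
```

As a sanity check, a single Maxwell element at ωρ = 1 gives E' = E" = E[1]/2, i.e., a 45-degree phase angle, which is the crossover behavior expected of the model.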
Backcalculated E(t) and |E*| master curves were compared with the actual curves for temperature sets {50, 86} °F and {50, 68, 86} °F in figure 105 and figure 106, respectively.
Figure 105. Graph. Backcalculated |E*| master curve using FWD data at temperature set {50,86} °F, minimizing normalized error.
Figure 106. Graph. Backcalculated E(t) master curve using FWD data at temperature set {50-68-86} °F, minimizing normalized error.
It can be seen from figure 107 that the results obtained for E(t) errors over temperature sets showed a distinct pattern. The E(t) and E[unbound] errors with respect to temperature sets showed a trend similar to that observed in the case of percentage error (figure 93 and figure 94, respectively). The error was observed to be high at sets of low ({32, 50} °F) and high ({122, 140} °F, {104, 122, 140} °F, {122, 140, 158} °F) temperatures. This is because the backcalculated E(t) at lower temperatures represents the left portion of the sigmoidal E(t) curve, and at higher temperatures it represents the right. As explained earlier, both regions are fairly flat and hence represent nearly constant values of E(t), which may not optimize to the actual E(t) curve. Better results were obtained for the temperature range of {50, 68} °F to {86, 104, 122} °F (figure 108). The backcalculated E(t) master curves and corresponding errors obtained at {50, 86} °F and {68, 86, 104} °F for the proposed backcalculation model are shown in figure 109.
Figure 107. Graph. Variation of ξ^avg[AC] at different FWD temperature sets.
Figure 108. Graph. Variation of ξ[unbound] at different FWD temperature sets.
Figure 109. Graphs. Backcalculation results obtained using modified sigmoid variables.
Backcalculation of Viscoelastic Properties Using Various Asphalt Mixtures
In the previous sections, analyses were performed using only a single mix. To verify the conclusions made in the previous sections regarding the optimum range of temperatures of FWD testing,
backcalculations were performed on nine typical mixtures. Actual viscoelastic properties—relaxation modulus and shift factors of the selected mixtures—are shown in figure 110.
Figure 110. Graphs. Viscoelastic properties of field mix in optimal temperature analysis.
A comparison of the average error in the backcalculated relaxation modulus function, calculated over three time ranges—10^-5 to 1 s, 10^-5 to 10^2 s, and 10^-5 to 10^3 s—is shown in figure 111. It can be seen from the figure that, for all the mixes, the relaxation modulus curve can be predicted to within about 15 percent over relaxation times of less than 10^3 s. Furthermore, as suggested earlier, the backcalculated relaxation modulus provided a good match over an approximate temperature range of 50 to 86 °F.
Figure 111. Graphs. Variation of error calculated over three ranges of reduced time—top = 10^-5 to 1 s, middle= 10^-5 to 10^2 s, and bottom = 10^-5 to 10^3 s.
Theoretical Analysis on Multiple-Pulse FWD in Backcalculation
In theory, it should be possible to obtain a relaxation modulus master curve if data containing the time-changing response at different temperature levels were known. The available analysis window in
the current FWD devices is short, extending up to 25 to 35 ms of stress pulse, which could be used to infer part of the relaxation function. Although series of FWD tests at different temperatures
could be useful in developing the entire master curve, in theory, the prediction could be improved if information at different rates of loading or over a larger time interval were known.
As shown in figure 112, a load of four successive pulses with a duration of 35 ms each, followed by four pulses of 10-s duration each, was simulated to generate the deflection basin. This example was used
to investigate whether a different loading history could result in better estimation of E(t). Figure 113 shows the backcalculated E(t), where a good fit is visible. Note that the accuracy of the
backcalculated E(t) depended on the duration of the stress pulse, where longer duration allowed calculation of E(t) at longer durations. It is also important to apply a high-frequency (short
duration) pulse load to increase the accuracy of E(t) at very short times. Note that backcalculation of E(t) for this example took less than 5 min in the MATLAB® optimization tool fminsearch. These
possibilities are explored in detail in appendix B.
Figure 112. Graphs. Applied stress and resulting deflection basin for multiple pulse loading analysis.
Figure 113. Graph. Backcalculated E(t) and deflection histories using the multiple stress pulses.
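The forward response to such a multi-pulse history follows from Boltzmann superposition, ε(t) = Σ_j D(t − τ_j)Δσ_j. The sketch below convolves a short pulse and a long pulse with a purely hypothetical power-law creep compliance to show how each pulse probes a different part of the material's time scale; it is not the LAVA forward model.

```python
import numpy as np

def creep_compliance(t):
    """Hypothetical creep compliance for illustration only (1/psi)."""
    t = np.asarray(t, dtype=float)
    d = np.zeros_like(t)
    pos = t > 0.0                      # causality: no response before loading
    d[pos] = 1.0e-4 * (1.0 + t[pos] ** 0.3)
    return d

def strain_response(t, sigma):
    """Discrete Boltzmann superposition: eps(t) = sum_j D(t - t_j) * dsigma_j."""
    d_sigma = np.diff(np.concatenate(([0.0], sigma)))  # stress increments
    eps = np.zeros_like(t)
    for j, ds in enumerate(d_sigma):
        if ds != 0.0:
            eps += ds * creep_compliance(t - t[j])     # shifted compliance
    return eps

# A 35-ms pulse followed by a 10-s pulse, echoing the multiple-pulse concept:
# the short pulse probes short times, the long pulse long times, in D(t).
t = np.linspace(0.0, 12.0, 1201)
sigma = np.zeros_like(t)
sigma[(t >= 0.0) & (t < 0.035)] = 100.0   # short pulse (psi)
sigma[(t >= 1.0) & (t < 11.0)] = 100.0    # long pulse (psi)
eps = strain_response(t, sigma)
```

The long pulse accumulates far more creep strain than the short one, which is exactly the information about the long-time end of D(t), and hence E(t), that a standard 25- to 35-ms FWD pulse cannot supply.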
The uneven temperature profile existing across the thickness of the AC layer during a single FWD drop can theoretically be used to backcalculate the E(t) master curve and the shift factor
coefficients (a[T](T)). The AC layer can be divided into several sublayers with same viscoelastic properties but with different temperature levels. Two different approaches of backcalculation are
discussed in this section. In the first approach, all the unknown variables (sigmoid coefficients, shift factor coefficients, and unbound modulus) in the forward algorithm were varied during
backcalculation. In the second approach, a two-staged backcalculation procedure was adopted. The two-stage method involved static backcalculation in the first stage (unbound modulus assuming elastic
AC layer) followed by viscoelastic backcalculation in the second (sigmoid and shift factor coefficients). Both approaches were explored in the present study.
Linear Viscoelastic Backcalculation Using Single Stage Method
As discussed earlier, a total of six coefficients are needed to represent the relaxation properties of the AC layer, including the temperature dependency. The backcalculation procedure used was the same as in the previous section (i.e., BACKLAVA), except the forward analysis was replaced by LAVAP, which can consider varying temperature along the depth of the AC layer. Accordingly, the new backcalculation algorithm was referred to as BACKLAVAP. For executing the GA, the same lower and upper limits of c[i] and a[i] (sigmoid and shift factor coefficients) and other specifications were used.
As a first step, the backcalculation algorithm was validated with a synthetic FWD deflection history, under two different temperature profiles. The data were generated using LAVAP and then used in
BACKLAVAP for backcalculation of E(t). The AC layer was divided into three equal sublayers with three different temperatures. Pavement section, properties, and temperature used in the forward
analysis are shown in table 19.
Table 19. Details of the pavement properties used in single FWD test backcalculation under a known temperature profile.
│ Property │ AC Sublayer 1 │ AC Sublayer 2 │ AC Sublayer 3 │ Granular Base │ Subgrade │
│ Thickness (inches) │ 2 │ 2 │ 2 │ 20 │ Semi-infinite │
│ Temperature, Case 1 (°F) │ 68 │ 59 │ 50 │ N/A │ N/A │
│ Temperature, Case 2 (°F) │ 86 │ 77 │ 68 │ N/A │ N/A │
│ Poisson's ratio │ 0.35 │ 0.35 │ 0.35 │ 0.4 │ 0.45 │
│ Relaxation modulus │ E(t) coefficients (c[1], c[2], c[3], c[4]) backcalculated │ Backcalculated │ Backcalculated │
│ Time-temperature shifting coefficients │ (a[1], a[2]) backcalculated │ N/A │ N/A │
│ Sensor spacing from the center of load (inches): 0, 8, 12, 18, 24, 36, 48, 60 │
│ N/A = Not applicable. │
For the case of backcalculation using a temperature profile, the GA parameters—population size and generation numbers—were again selected after several trials of combinations. It was observed that at
population size of 300, improvement in the best solution was marginal after 12 to 15 generations, and the population converged to the best solution at about 45 generations. Similarly, for a
population size of 400, improvement in the best solution was marginal after 10 to 15 generations. Figure 114 shows the backcalculation results at the temperature sets given in table 19, where a good
match was visible. Error in the backcalculated E(t) was quantified relative to the actual E(t) using ξ[AC](t[i]), given in figure 87. The ξ[AC](t[i]) calculation was performed over a reduced time
interval from 10^-8 to 10^+8 s. Then, the average error (ξ ^avg[AC]) was computed using figure 88. The average error level for the first temperature profile was found to be 5.2 percent, and for the
second, it was 4.4 percent.
To further investigate the effect of the magnitude of the pavement temperature profile on backcalculation of the E(t) master curve, synthetic FWD deflection histories were generated. The synthetic data were then used in backcalculation. The structure was divided into three layers with different temperatures, and E(t) was backcalculated using these data. The pavement section properties used in the study were the same as shown in table 19.
Backcalculation was performed assuming the temperature of the top, middle, and bottom sublayers of the asphalt layer as {68, 59, 50} °F, {86, 77, 68} °F, {104, 95, 86} °F, and
{122, 113, 104} °F. It was again observed that the problem converged well with 300 GA populations at 45 GA generations. The results shown in figure 115 did exhibit a trend, suggesting that there was
a good potential for backcalculation of E(t) using a single FWD response for the lower temperature ranges, assuming that the temperature profile of the pavement was known.
Figure 114. Graphs. Comparison of actual and backcalculated values in backcalculation using temperature profile.
Figure 115. Graph. Error in backcalculated E(t) curve for a three-temperature profile.
Backcalculation of the Viscoelastic Properties of the LTPP Sections Using a Single FWD Test With Known Temperature Profile
The BACKLAVAP algorithm was next used with field data to backcalculate the viscoelastic properties of nine LTPP sections. With the exception of sections 10101, 300113, and 340801, selection of the
sites was done based on the following rules:
• Section comprised three layers with only one AC layer.
• Total number of constructions of the section was one.
• Section was an SPS section type (experiment number 1 or 8).
• Section was flexible pavement.
Table 20 and table 21 contain general and structural information about each selected LTPP site. As shown in table 21, section 10101 had a total of four layers, including two AC layers. However,
because the D(t) of the two AC layers reported in the LTPP database were very close, the section was included in the list, treating the two layers as a single AC layer in the analysis. Section 300113
comprised two AC layers of thickness 0.2 and 5.8 inches. Because the top AC layer of the section was very thin compared with the second AC layer, the AC layers were treated as a single layer.
Furthermore, the sectional composition of sections 300113 and 340801 was not changed during the various constructions; therefore, they were included in the analysis. However, it is not clear from the
LTPP database whether the D(t) was measured before or after the constructions were done.
Table 20. List of LTPP sections used in the analysis.
│ │ │ Year of │ Total Number of │ Section │ Experiment │
│ State │ Section │ Construction │ Constructions │ Type │ Number │
│ 1 │ 0101 │ 4/30/1991 │ 1 │ SPS │ 1 │
│ 6 │ A805 │ 5/1/1999 │ 1 │ SPS │ 8 │
│ 6 │ A806 │ 5/1/1999 │ 1 │ SPS │ 8 │
│ 30 │ 0113 │ 9/18/1997 │ 5 │ SPS │ 1 │
│ 34 │ 0801 │ 1/1/1993 │ 2 │ SPS │ 8 │
│ 34 │ 0802 │ 1/1/1993 │ 1 │ SPS │ 8 │
│ 35 │ 0801 │ 9/11/1995 │ 1 │ SPS │ 8 │
│ 35 │ 0802 │ 9/11/1995 │ 1 │ SPS │ 8 │
│ 46 │ 0804 │ 1/1/1992 │ 1 │ SPS │ 8 │
Table 21. Structural properties of the LTPP sections used in the analysis.
│ │ │ Total │ Number │ AC Layer │ Base Layer │
│ State │ │ Number │ of AC │ Thickness │ Thickness │
│ Code │ Section │ of Layers │ Layers │ (inches) │ (inches) │
│ 1 │ 0101 │ 4 │ 2 │ AC1 = 1.2, │ 7.9 │
│ │ │ │ │ AC2 = 6.2 │ │
│ 6 │ A805 │ 3 │ 1 │ 3.9 │ 8.2 │
│ 6 │ A806 │ 3 │ 1 │ 6.8 │ 12.1 │
│ 30 │ 0113 │ 4 │ 1 │ AC1 = 0.2, │ 8.4 │
│ │ │ │ │ AC2 = 5.8 │ │
│ 34 │ 0801 │ 3 │ 2 │ 3.6 │ 7.8 │
│ 34 │ 0802 │ 3 │ 1 │ 6.7 │ 11.6 │
│ 35 │ 0801 │ 3 │ 1 │ 4.2 │ 9.7 │
│ 35 │ 0802 │ 3 │ 1 │ 7 │ 12.7 │
│ 46 │ 0804 │ 3 │ 1 │ 6.9 │ 12 │
In the LTPP Program, each section is tested according to a specific FWD testing plan, which consists of one or more test passes. Both SPS 1 and SPS 8 sections are tested along two test passes (test pass 1 and test pass 3) using test plan 4 in LTPP. Test pass 1 data include FWD testing performed along the midlane (ML), whereas test pass 3 data include FWD testing performed along the OW path. Because testing with the ML test pass represents the axisymmetric assumption better, it was used here in the analysis. Furthermore, for each section, testing was done at several longitudinal locations (in
the direction of traffic) in every test pass. Typically, for a 500-ft test section, FWD testing is performed at every 50 ft longitudinally along the test pass. In the LTPP testing protocol,
temperature gradient measurements are taken every 30 min, plus or minus 10 min. The necessary temperature profile data were obtained by interpolating the temperature measured during the FWD testing.
The AC layer was divided into three equal sublayers, and a constant temperature for each sublayer was estimated. Table 22 shows the interpolated temperatures at the middle of the three sublayers. From the table, it can be seen that the maximum temperature difference (between sublayer 1 and sublayer 3) was 11.2 °F in section 350801 and the minimum was 5.3 °F in section 6A805.
Table 22. AC temperature profile during LTPP FWD test.
│ │ │ │ Temperature Profile (°F) │
│ │ │ ├──────────┬──────────┬──────────┤
│ State │ │ Test │ Sublayer │ Sublayer │ Sublayer │
│ Code │ Section │ Date │ 1 │ 2 │ 3 │
│ 1 │ 0101 │ 04/28/05 │ 100.0 │ 92.5 │ 91.6 │
│ 6 │ A805 │ 11/16/11 │ 73.6 │ 69.1 │ 68.2 │
│ 6 │ A806 │ 11/16/11 │ 79.2 │ 75.2 │ 71.2 │
│ 30 │ 0113 │ 07/12/10 │ 84.7 │ 80.1 │ 79.2 │
│ 34 │ 0801 │ 08/26/98 │ 102.4 │ 98.8 │ 93.6 │
│ 34 │ 0802 │ 08/26/98 │ 79.3 │ 83.3 │ 86.5 │
│ 35 │ 0801 │ 04/09/05 │ 74.1 │ 65.1 │ 63.0 │
│ 35 │ 0802 │ 05/26/00 │ 89.8 │ 84.4 │ 83.3 │
│ 46 │ 0804 │ 05/02/01 │ 75.9 │ 70.7 │ 67.5 │
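The sublayer-midpoint estimation described above reduces to interpolating the measured in-depth gradient at the midpoint of each of the three equal sublayers. A sketch with hypothetical thermometer depths and readings (the LTPP gradient depths are not given in the text):

```python
import numpy as np

ac_thickness = 6.0                               # inches (hypothetical section)
depth_meas = np.array([0.5, 2.5, 4.5, 6.0])      # thermometer depths, inches
temp_meas = np.array([100.0, 95.0, 92.0, 91.0])  # readings, deg F

n_sub = 3
# Midpoint depth of each of the three equal AC sublayers: 1, 3, and 5 inches.
midpoints = (np.arange(n_sub) + 0.5) * ac_thickness / n_sub
# Piecewise-linear interpolation of the gradient at the sublayer midpoints.
sublayer_temps = np.interp(midpoints, depth_meas, temp_meas)
```

Each sublayer is then assigned its midpoint temperature as a constant, which is the form BACKLAVAP consumes.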
Except for section 350801, the FWD deflection data obtained showed no or minimal waviness at the end of the load pulse, which indicated that there was no shallow stiff layer. The FWD deflection data
obtained from section 350801 showed some waviness at the end of the load pulse. This indicated the possibility of a medium-depth stiff layer or high water table. The presence of a stiff layer was
further evaluated using a graphical method suggested by Ullidtz.^(94) The method involves plotting peak deflections obtained from FWD testing versus the reciprocal of the corresponding sensor
location (measured from the center of loading).^(95) Depths of stiff layer in each LTPP section estimated using Ullidtz’s method are shown in table 23.^(94) Note that negative depth to the stiff
layer was interpreted as absence of the stiff layer in the method. The results indicate that stiff layers were generally deeper than 18 ft (except section 350801). It was suggested by Lei that if the
stiff layer was below 18 ft, the effect of dynamics was not observed on the surface deflections.^(96) Section 350801 was also included in the analyses because the depth of the stiff layer was close
to the limit of 18 ft.
Table 23. Depths of stiff layer in each LTPP section estimated using Ullidtz’s method.
│ │ │ Depth of Stiff Layer From Surface (ft) │
│ State │ ├────────────────┬────────────────┬────────────────┬────────────────┤
│ Code │ Section │ Drop 1 │ Drop 2 │ Drop 3 │ Drop 4 │
│ 1 │ 0101 │ 86.9 │ 32.7 │ 109.8 │ 26.6 │
│ 6 │ A805 │ 70.4 │ No stiff layer │ No stiff layer │ No stiff layer │
│ 6 │ A806 │ No stiff layer │ No stiff layer │ No stiff layer │ No stiff layer │
│ 30 │ 0113 │ 96.4 │ 45.3 │ 49.9 │ 35.6 │
│ 34 │ 0801 │ 38.7 │ 38.9 │ 37.9 │ 38.4 │
│ 34 │ 0802 │ No stiff layer │ No stiff layer │ 284.5 │ 63.8 │
│ 35 │ 0801 │ 15.5 │ 15.6 │ 15.5 │ 15.2 │
│ 35 │ 0802 │ No stiff layer │ No stiff layer │ No stiff layer │ No stiff layer │
│ 46 │ 0804 │ 232.2 │ 183.1 │ 61.2 │ 31.6 │
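One simplified reading of the graphical check described above is to fit a line to peak deflection versus 1/r and extrapolate to zero deflection; a zero crossing at negative 1/r then corresponds to the "no stiff layer" (negative depth) outcome in table 23. Note that Ullidtz's full procedure applies regression constants not reproduced here, so the offset returned below is only a proxy, and the deflection values are hypothetical.

```python
import numpy as np

def stiff_layer_proxy(r, d):
    """Fit peak deflection d versus 1/r and locate the zero-deflection
    crossing. Returns the corresponding offset r0 (a stand-in for depth to
    a stiff layer), or inf when the crossing falls at negative 1/r,
    i.e., no stiff layer is indicated."""
    inv_r = 1.0 / np.asarray(r, dtype=float)
    slope, intercept = np.polyfit(inv_r, np.asarray(d, dtype=float), 1)
    inv_r0 = -intercept / slope          # 1/r where the fitted line hits zero
    return 1.0 / inv_r0 if inv_r0 > 0.0 else np.inf

# Hypothetical peak deflections (mils) at the farther sensor offsets (inches).
r = np.array([12.0, 18.0, 24.0, 36.0, 48.0, 60.0])
d = np.array([9.0, 5.5, 3.8, 1.9, 1.0, 0.5])
depth_proxy = stiff_layer_proxy(r, d)
```

Deflections that decay toward zero at the outer sensors yield a finite positive crossing (a shallow stiff layer); deflections that level off well above zero extrapolate to a negative crossing, i.e., no stiff layer.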
Section properties used for elastic and viscoelastic backcalculations were the same (see table 20) except that the modulus of the AC layer in the elastic backcalculation was assumed constant (modulus
unknown). For the elastic backcalculation, an in-house genetic algorithm was developed. The Poisson's ratios for the AC, granular base, and subgrade layers were assumed to be 0.3, 0.4, and 0.45, respectively. The results obtained from the elastic backcalculation were used to define bounds for the base and subgrade moduli in BACKLAVAP. Table 24 shows the elastic backcalculation results obtained
using data from each FWD drop. With the exception of section 350802, the static backcalculated base modulus values varied between 8,425 psi and 64,479 psi, and the subgrade modulus values varied
between 16,142 psi and 42,615 psi.
Table 24. Elastic backcalculation results for LTPP sections.
│ │ │ Drop 1 (psi) │ Drop 2 (psi) │ Drop 3 (psi) │ Drop 4 (psi) │
│ State │ ├─────────┬─────────────┼─────────┬─────────────┼─────────┬─────────────┼─────────┬─────────────┤
│ Code │ Section │ E[base] │ E[subgrade] │ E[base] │ E[subgrade] │ E[base] │ E[subgrade] │ E[base] │ E[subgrade] │
│ 1 │ 0101 │ 34,129 │ 46,835 │ 29,941 │ 45,478 │ 36,065 │ 42,122 │ 22,264 │ 42,615 │
│ 6 │ A805 │ 44,302 │ 19,612 │ 44,985 │ 19,533 │ 44,176 │ 19,707 │ 64,479 │ 19,599 │
│ 6 │ A806 │ 32,252 │ 23,219 │ 29,063 │ 23,281 │ 34,932 │ 22,511 │ 43,687 │ 22,636 │
│ 30 │ 0113 │ 10,908 │ 26,856 │ 10,030 │ 26,754 │ 9,617 │ 27,328 │ 8,425 │ 27,690 │
│ 34 │ 0801 │ 19,853 │ 20,918 │ 19,446 │ 21,150 │ 26,287 │ 21,392 │ 26,063 │ 22,062 │
│ 34 │ 0802 │ 59,182 │ 43,534 │ 51,674 │ 42,828 │ 53,932 │ 43,013 │ 56,114 │ 42,112 │
│ 35 │ 0801 │ 25,508 │ 23,061 │ 20,762 │ 22,567 │ 20,995 │ 21,738 │ 22,113 │ 21,649 │
│ 35 │ 0802 │ 83,664 │ 35,574 │ 74,723 │ 34,993 │ 83,675 │ 34,619 │ 84,015 │ 34,552 │
│ 46 │ 0804 │ 19,699 │ 16,142 │ 23,137 │ 16,207 │ 18,359 │ 16,282 │ 13,676 │ 16,545 │
Next, the viscoelastic backcalculation was performed. The backcalculated unbound layer moduli obtained from the viscoelastic backcalculation are presented in table 25. For the viscoelastic backcalculation, the GA algorithm in BACKLAVAP used a population of 300 over 15 generations, except for sections 10101 and 350801, where a population of 400 and 15 generations were used. Note that the search approximately converged after 10 generations with a population of 300. As shown in table 25, with the exception of section 350802, the backcalculated base modulus values varied between 10,292 and 64,466 psi, and the subgrade modulus values varied between 17,114 and 44,906 psi. Comparing table 24 and table 25, it can be seen that the elastic and viscoelastic backcalculations predict similar modulus values for the unbound layers.
Table 25. Viscoelastic backcalculation results for LTPP sections.
│ │ │ Drop 1 (psi) │ Drop 2 (psi) │ Drop 3 (psi) │ Drop 4 (psi) │
│ State │ ├─────────┬─────────────┼─────────┬─────────────┼─────────┬─────────────┼─────────┬─────────────┤
│ Code │ Section │ E[base] │ E[subgrade] │ E[base] │ E[subgrade] │ E[base] │ E[subgrade] │ E[base] │ E[subgrade] │
│ 1 │ 0101 │ 28,799 │ 44,906 │ 26,431 │ 44,035 │ 28,026 │ 42,682 │ 25,621 │ 41,470 │
│ 6 │ A805 │ 44,377 │ 17,523 │ 44,929 │ 17,114 │ 44,928 │ 18,234 │ 43,871 │ 18,436 │
│ 6 │ A806 │ 26,977 │ 21,273 │ 24,441 │ 20,724 │ 28,809 │ 20,903 │ 29,150 │ 22,615 │
│ 30 │ 0113 │ 10,491 │ 24,972 │ 10,292 │ 26,127 │ 10,391 │ 26,253 │ 11,257 │ 26,254 │
│ 34 │ 0801 │ 22,337 │ 20,569 │ 20,282 │ 19,901 │ 18,243 │ 20,569 │ 14,824 │ 22,710 │
│ 34 │ 0802 │ 64,466 │ 41,648 │ 61,782 │ 38,700 │ 62,967 │ 40,227 │ 48,242 │ 42,904 │
│ 35 │ 0801 │ 22,337 │ 20,569 │ 20,282 │ 19,901 │ 18,243 │ 20,569 │ 14,824 │ 22,710 │
│ 35 │ 0802 │ 84,339 │ 37,787 │ 84,338 │ 33,631 │ 84,339 │ 32,521 │ 84,825 │ 32,653 │
│ 46 │ 0804 │ 26,191 │ 14,746 │ 17,922 │ 14,827 │ 15,427 │ 15,125 │ 12,575 │ 16,373 │
Figure 116 shows example backcalculated (and measured) deflection time histories for sections 10101 and 350801, where section 10101 exhibited a better match than section 350801. This was attributed
to the stiff layer being close to the 18-ft limit in section 350801.
To validate the backcalculated results, creep compliance data available in the LTPP database were converted into relaxation modulus E(t). Creep data were available in tabulated form at three
temperatures—14, 41, and 77 °F—and seven different times—1, 2, 5, 10, 20, 50, and 100 s. Assuming the classical power law function for the creep compliance (figure 117), the available data were
fitted separately to each temperature.
Figure 116. Graphs. Backcalculated and measured deflection time histories for LTPP sections10101 and 350801.
Figure 117. Equation. Creep compliance power law.
The associated relaxation modulus was then obtained using the mathematically exact formula given in figure 118.^(85)
Figure 118. Equation. Relaxation modulus and creep compliance relationship.
Where D[1] and n are the power function coefficients of D(t). The discrete relaxation modulus functions were then shifted to obtain a relaxation modulus master curve. Two different relaxation master
curves were calculated. The first relaxation modulus master curve approximation was obtained when the time-temperature shift factors determined from the measured creep data were used to develop the
relaxation master curve (labeled as “Measured 1”). The second relaxation modulus master curve approximation was obtained when the time-temperature shift factors determined from backcalculation were
used to develop the relaxation master curve (labeled as “Measured 2”). This was done because laboratory creep compliance tests are usually not reliable in determining time-temperature superposition
properties because a perfect stress-step function is very difficult to achieve in the laboratory and also because the results are contaminated with viscoplasticity, especially at the high
temperatures and long creep times. Finally, for comparison, dynamic modulus and phase angle master curves were calculated from the relaxation modulus via interconversion.^(65) For further
verification, the estimated dynamic modulus obtained from ANN-based model ANNACAP was also compared. In the present work, all the estimated dynamic modulus master curve and time-temperature shift
factors obtained from ANNACAP were based on the M[R] model in ANNACAP.^(97) From the results, it was found that the dynamic modulus curves estimated using the ANNACAP model, especially at higher
frequencies, agreed well with the dynamic modulus curves obtained through interconverted creep data. Backcalculated E(t) and a[T](T) for the sections are shown in figure 119 to figure 127, and
backcalculated dynamic modulus and phase angle are shown in figure 128 to figure 136. It can be seen from the figures that in general, the independent test drops within each section resulted in very
similar predicted curves. Note that each FWD drop had a different load level and load history. Although the results were encouraging, backcalculated curves for sections 06A806, 350801, and 350802
showed noticeable disagreement with values derived from creep. However, although the dynamic modulus master curve predicted by ANNACAP matched well at the higher frequencies, it typically predicted
higher values at reduced frequencies of less than 10^-2 Hz.
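As an illustrative sketch of this conversion step, the power-law fit (figure 117) and the exact interconversion (figure 118) can be coded as follows. The creep values below are synthetic stand-ins with assumed coefficients, not LTPP data:

```python
import numpy as np

# Synthetic creep compliance data at one temperature (1/psi); the time
# points match the LTPP tabulation: 1, 2, 5, 10, 20, 50, and 100 s.
t = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
D = 2.0e-6 * t ** 0.35  # assumed D(t) = D1 * t^n with D1 = 2e-6, n = 0.35

# Fit D(t) = D1 * t^n by linear regression in log-log space.
n, logD1 = np.polyfit(np.log(t), np.log(D), 1)
D1 = np.exp(logD1)

# Exact power-law interconversion (figure 118):
#   E(t) = sin(n*pi) / (n*pi * D(t))
E = np.sin(n * np.pi) / (n * np.pi * D1 * t ** n)
```

The E(t) values fitted at each of the three LTPP temperatures would then be shifted along the log-time axis to assemble the relaxation modulus master curve.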
Figure 119. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 10101.
Figure 120. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 06A805.
Figure 121. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 06A806.
Figure 122. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 300113.
Figure 123. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 340801.
Figure 124. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 340802.
Figure 125. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 350801.
Figure 126. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 350802.
Figure 127. Graphs. Comparison of measured and backcalculated E(t) and aT(T) for LTPP section 460804.
Figure 128. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 10101.
Figure 129. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 06A805.
Figure 130. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 06A806.
Figure 131. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 300113.
Figure 132. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 340801.
Figure 133. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 340802.
Figure 134. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 350801.
Figure 135. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 350802.
Figure 136. Graphs. Comparison of measured and backcalculated |E*| and phase angle for LTPP section 460804.
As shown in figure 119, for section 10101, the relaxation modulus master curves matched very well when the time-temperature shift factor obtained from backcalculation was used (to shift the discrete
relaxation modulus functions obtained through LTPP creep data) to develop a measured master curve (labeled as “Measured 2” in figure 119 (left)). On the other hand, when the time-temperature shift factors determined from the measured creep data were used to develop the relaxation master curve (labeled as “Measured 1” in figure 119 (left)), the curve deviated from the backcalculated curves. The
backcalculated time-temperature shift factors were compared with creep and ANNACAP-computed results in figure 119 (right). It can be seen from the figure that the backcalculated time-temperature
shift factor functions for all the drops showed a good match over the temperature range of 50 to 131 °F. As shown in figure 128 (left), the backcalculated and measured dynamic modulus curve obtained
from Measured 2 also matched well over the entire frequency range. The backcalculated phase angles were compared with measured results in figure 128 (right). The phase angles showed some deviation at
frequencies less than 10^-2 Hz. This was further verified by the dynamic modulus master curve estimated using ANNACAP, which showed a good match over a reduced frequency greater than 10^-2 Hz.
For section 06A805, the backcalculated relaxation modulus master curves were compared with those measured in figure 120 (left). As shown in the figure, a better match with the backcalculated curves was found when the time-temperature shift factor obtained from the measured creep data was used to develop the measured master curve (labeled as “Measured 1” in figure 120 (left)). On the other hand, when the time-temperature shift determined from the backcalculation data was used to develop the relaxation master curve (labeled as “Measured 2” in figure 120 (left)), the curve deviated from the backcalculated curves. This disagreement in the time-temperature shift can also be seen in the time-temperature shift factors in figure 120 (right) and the dynamic modulus and phase angle curves in figure 129.
Relaxation modulus and time-temperature shift factor curves for section 06A806 were compared in figure 121 (left) and figure 121 (right), respectively. For this section, the measured relaxation modulus master curves predicted higher values compared with the backcalculated results, and the difference increased with reduced time. Both the predicted a[T](T) curves obtained from ANNACAP and backcalculation deviated from the measured results. From figure 130 (left) and figure 130 (right), it can be seen that the deviation in relaxation modulus values with time was reflected in the dynamic modulus and phase angle curves at lower frequencies.
Relaxation modulus and time-temperature shift factor curves for section 300113 were compared in figure 122 (left) and figure 122 (right), respectively. Although the predicted a[T](T) curve at drop 1
for section 300113 showed some deviation after 86 °F, both the backcalculated E(t) and a[T](T) curves showed good agreement with laboratory results as well as ANNACAP data. Although the dynamic
modulus predicted for drop 1 (see figure 131) showed lower values at frequencies greater than 10^2 Hz, an agreement in backcalculated relaxation modulus curves was also reflected in the dynamic
modulus and phase angle curves.
Relaxation modulus and time-temperature shift factor curves for sections 340801 and 340802 were compared in figure 123 and figure 124, respectively. Although the predicted E(t) curves for sections
340801 and 340802 showed some deviation at reduced time greater than 10 s, in general, the two curves showed good agreement with the measurement. Comparison of the backcalculated a[T](T) curves for
both sections 340801 and 340802 (see figure 123 (right) and figure 124 (right)) shows a good agreement with ANNACAP and measured curves over the temperature range of 50 to 104 °F. Further, it can be
seen from figure 132 and figure 133 that although the dynamic modulus and phase angle curves predicted by individual drops were the same, deviation at reduced time greater than 10^1 s in relaxation
modulus was reflected at frequencies greater than 10^-1 Hz.
Figure 125 shows the backcalculated E(t) and a[T](T) functions for section 350801. Similar results were obtained using drops 1, 3, and 4, whereas E(t) from drop 2 deviated from the other drops. The
reason for this deviation may be the relatively low base modulus (20,241 psi) backcalculated using this drop, as seen in table 25. The average of base moduli in drops 1, 3, and 4 was 28,468 psi,
which is about 40-percent higher than the above base modulus value of drop 2. As shown in figure 126 (left) and figure 126 (right), comparison of backcalculated E(t) and a[T](T) curves with measured
and ANNACAP results for section 350802 showed complete disagreement. As shown in figure 135 (left) and figure 135 (right), similar discrepancies were reflected in the dynamic and phase angle curves.
For section 460804, the backcalculated relaxation modulus master curves were compared with measured results in figure 127 (left). It can be seen from the figure that the relaxation modulus master
curves matched very well when the time-temperature shift factor obtained from backcalculation was used to develop the measured master curve (labeled as “Measured 2” in figure 127 (left)). On the
other hand, when the time-temperature shift determined from the measured creep data was used to develop the relaxation master curve (labeled as “Measured 1” in figure 127 (left)), the backcalculated
modulus values at reduced time less than 10^-1 s were found to be low. Further, although the predicted a[T](T) curve for drop 1 showed some deviation after 86 °F, the curves showed good agreement
with laboratory as well as ANNACAP data. A comparison of backcalculated and measured dynamic modulus and phase angle for section 460804 is shown in figure 136 (left) and figure 136 (right),
respectively. Dynamic modulus values predicted using backcalculation were higher at frequencies less than 1 Hz, and a better prediction was observed at frequencies greater than 1 Hz.
Note that although measured creep data were used for comparison in the present study, it is not clear from the LTPP database whether D(t) was measured before or after the FWD tests were conducted.
Backcalculation of Linear Viscoelastic Pavement Properties Using Two-Stage Method
In the previous backcalculation process, viscoelastic and unbound properties were calculated during the same step; however, in this section, a two-stage linear viscoelastic backcalculation scheme is
presented. The first stage was to perform linear elastic backcalculation of unbound material properties, which was followed by linear viscoelastic backcalculation (using BACKLAVA/BACKLAVAP) of AC
layer viscoelastic properties (E(t) sigmoid coefficients c[1], c[2], c[3], and c[4] and shift factor a[T](T) coefficients a[1] and a[2]). Details of stage 1 and stage 2 steps are presented in the
following sections.
Stage 1: Elastic Backcalculation for Unbound Layer Properties
It is important to verify that the elastic backcalculation (stage 1) gives unbound granular modulus values close to the actual values. If this is verified, the backcalculated E[unbound] values can be
fixed in viscoelastic backcalculation (stage 2) and only the six unknowns of the AC layer can be backcalculated. Known and unknown variables in the first and second stages of backcalculation are
listed in table 26. In stage 1, elastic backcalculation was performed assuming the AC layer was linear elastic. In stage 2, viscoelastic backcalculation was performed keeping the unbound granular
layer modulus values obtained in the first stage fixed.
To perform the verification, first, various synthetic deflection time histories were obtained by running LAVA on the structure shown in table 27 at various temperature profiles (also shown in table
27). These synthetic deflections were used in stage 1, which computed E[unbound] values. Then these backcalculated E[unbound] values were compared with the original E[unbound] values used in the
original layered viscoelastic forward computation.
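The stage 1 logic can be sketched as follows. The `forward` argument stands in for an elastic layered solution, which is not reproduced here; the toy model and the grid-search simplification (BACKLAVA actually uses a GA over several layer moduli at once) are purely illustrative:

```python
import numpy as np

def backcalc_unbound(measured, forward, grid):
    """Return the unbound modulus (psi) on `grid` that minimizes the
    squared error between measured and computed peak deflections."""
    errors = [np.sum((measured - forward(E)) ** 2) for E in grid]
    return grid[int(np.argmin(errors))]

# Toy forward model standing in for the elastic layered solution:
# deflections inversely proportional to the unbound modulus.
toy_forward = lambda E: 1000.0 / E * np.ones(3)
measured = toy_forward(25000.0)
E_best = backcalc_unbound(measured, toy_forward,
                          np.linspace(10000.0, 40000.0, 31))
```

With noise-free synthetic deflections, the search recovers the modulus used in the forward computation, which mirrors the verification exercise described above.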
Table 26. Variables in two-stage linear viscoelastic backcalculation analysis.
│ Stage │ Known Parameters │ Unknown (Backcalculated) Parameters │
│ 1 │ Thickness and Poisson’s ratio of each layer │ E[ac], elastic modulus of AC layer │
│ ├─────────────────────────────────────────────┼────────────────────────────────────────────────────────┤
│ │ FWD parameters (contact radius, │ E[unbound ][(i)], unbound layer moduli, i = 1…N[L], │
│ │ pressure, locations of the sensors, etc.) │ N[L] = number of unbound layers │
│ 2 │ Thickness and Poisson’s ratio of each │ E(t) sigmoid coefficients: c[1], c[2], c[3], and c[4] │
│ │ layer and FWD parameters │ │
│ ├─────────────────────────────────────────────┼────────────────────────────────────────────────────────┤
│ │ E[unbound ][(i)], unbound layer moduli │ Shift factor a[T](T) coefficients a[1] and a[2] │
│ │ backcalculated in stage 1 │ │
Table 27. Pavement properties in two-stage linear viscoelastic backcalculation analysis.
│ Property │ Values │
│ Thickness (AC followed by granular layers) (inches) │ 6, 20, infinite │
│ Poisson ratio {layer 1, 2, 3…} │ 0.35, 0.3, 0.45 │
│ E[unbound] {layer 2, 3…} (psi) │ 25,560, 11,450 │
│ E(t) sigmoid coefficient {layer 1} │ 0.841, 3.54, 0.86, -0.515 │
│ a[T](T) shift factor polynomial coefficients {layer 1} │ 4.42E-04, -1.32E-01 │
│ Total number of sensors │ 8 │
│ Sensor spacing from the center of load (inches) │ 0, 8, 12, 18, 24, 36, 48, 60 │
│ AC layer temperature profile {T1-T2} or │ {50-32}, {59-50}, {68-50}, {68-59}, {77-68}, │
│ {T1-T2-T3} (°F)                                        │ {86-68}, {86-77}, {95-86}, {59-50-41},       │
│ │ {68-59-50}, {77-68-59}, {86-77-68}, │
│ │ {95-86-77}, {104-95-86} │
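For illustration, the sigmoid and shift-factor forms implied by the table 27 coefficients can be evaluated as below. The exact functional forms, the reference temperature (66 °F, taken from later text, is assumed), the reduced-time convention, and the output units are all assumptions here, not taken from the LAVA source:

```python
import math

def log_shift_factor(T, a1, a2, T_ref=66.0):
    # Assumed form: log10 a_T = a1*(T^2 - T_ref^2) + a2*(T - T_ref),
    # normalized so that a_T = 1 at the reference temperature T_ref.
    return a1 * (T ** 2 - T_ref ** 2) + a2 * (T - T_ref)

def relaxation_modulus(t, T, c, a):
    # Sigmoidal master curve: log10 E = c1 + c2/(1 + exp(c3 + c4*log10 t_r)),
    # with reduced time taken as t_r = t * a_T(T) (a convention assumption).
    log_tr = math.log10(t) + log_shift_factor(T, a[0], a[1])
    log_E = c[0] + c[1] / (1.0 + math.exp(c[2] + c[3] * log_tr))
    return 10.0 ** log_E

# AC layer coefficients from table 27; the units of the result are
# whatever units the sigmoid coefficients encode.
c = (0.841, 3.54, 0.86, -0.515)
a = (4.42e-4, -1.32e-1)
E_cold = relaxation_modulus(0.03, 50.0, c, a)   # 30 ms, 50 °F
E_hot = relaxation_modulus(0.03, 104.0, c, a)   # 30 ms, 104 °F
```

Under these assumed conventions the AC stiffness at a fixed loading time decreases as temperature increases, which is the qualitative behavior the backcalculation relies on.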
Figure 137 and figure 138 show the average base and subgrade modulus values obtained by elastic backcalculation of two- and three-step temperature profile, respectively. Error bars in the figures
represent the standard deviation of 10 GA runs performed for each temperature set.
subg = subgrade.
Figure 137. Graph. Elastic backcalculation of two-step temperature profile FWD data, assuming AC as a single layer in two-stage backcalculation.
subg = subgrade.
Figure 138. Graph. Elastic backcalculation of three-step temperature profile FWD data, assuming AC as a single layer in two-stage backcalculation.
The analysis results shown in figure 137 and figure 138 were based on elastic backcalculations that assume a single AC layer. However, in the LAVA forward computations, because of different
temperatures with depth, multiple layers of AC (two layers for the figure 137 analysis and three layers for the figure 138 analysis) were used. To investigate whether selection of the number of AC layers affected the
results for the elastic backcalculation, the computations were repeated assuming the AC layer consisted of two or three independent elastic layers. Average backcalculated base and subgrade modulus
values for two-step and three-step temperature profiles are shown in figure 139 and figure 140, respectively. Comparing figure 137 with figure 139 and figure 138 with figure 140, it can be seen that
assuming single or multiple AC layers did not significantly affect backcalculation of base and subgrade elastic modulus. From these analyses (figure 137 through figure 140), it can be concluded that
it is possible to first perform elastic backcalculation (stage 1) for the unbound layer properties and fix these in stage 2.
subg = subgrade.
Figure 139. Graph. Elastic backcalculation of two-step temperature profile FWD data, assuming two AC sublayers in two-stage backcalculation.
subg = subgrade.
Figure 140. Graph. Elastic backcalculation of three-step temperature profile FWD data, assuming three AC sublayers in two-stage backcalculation.
Stage 2: Viscoelastic Backcalculation for E(t) of AC Layer
After fixing the unbound layer modulus values, the AC layer properties (E(t) sigmoid coefficients: c[1], c[2], c[3], and c[4] and shift factor a[T](T) coefficients a[1] and a[2]) were
backcalculated using the viscoelastic backcalculation algorithm (BACKLAVA). Note that for viscoelastic backcalculation, as done earlier, a set of FWD test data at different temperature can be used
for backcalculation. This is because even though the temperatures are different, the characteristic properties of the AC layer (E(t) or |E*| master curves) remain the same. In this stage,
viscoelastic backcalculation was performed on a set of temperature profiles keeping the actual unbound modulus values constant.
Average errors (over reduced times from 10^-8 to 10^8 s) in the E(t) master curve, obtained from a set of two two-step and two three-step temperature profiles, are shown in figure 141 and figure 142,
respectively. It can be observed from figure 141 that, for all the cases of the presented two-step temperature profile sets, average error in backcalculated E(t) was below 10 to 12 percent except for
the FWD test at {86-68} and {104-86} °F. It can be observed from figure 142 that, for all the cases presented for the three-step temperature profile sets, average error in backcalculated E(t) was
below 5.5 percent except for the FWD test at {86-77-68} and {77-68-59} °F. Subfigures in each of the figures were included to illustrate how the given percent error relates to the actual E(t) curves
that were being compared. These results indicate that the two-stage algorithm worked well in backcalculating the E(t) of the AC layer. From figure 141 and figure 142, it can be observed that E(t)
errors obtained in the two-stage backcalculation were lower than those from single-stage backcalculation (figure 120). However, note that the results presented in figure 141 and figure 142 are from backcalculation using a set of two FWD test datasets, each obtained at a different temperature profile, whereas the results in figure 120 are from backcalculation using a single FWD dataset. Nevertheless, the result does indicate that backcalculation using a set of FWD test datasets, each obtained under a different temperature profile, may improve the accuracy.
Figure 141. Graphs. Error in backcalculated E(t) curve from two-step temperature profile FWD test data in two-stage backcalculation.
Figure 142. Graphs. Error in backcalculated E(t) curve from three-step temperature profile FWD test data in two-stage backcalculation.
In the previous sections, the backcalculation scheme and results were developed for a viscoelastic multilayer pavement model consisting of a linear viscoelastic AC layer and linear elastic unbound
layers. This section describes a backcalculation scheme (called BACKLAVAN) that was developed for the layered viscoelastic-nonlinear pavement model consisting of a linear viscoelastic AC layer and
nonlinear elastic unbound layers. Because of computational limitations of the current version of the LAVAN algorithm, it can take a very long time to compute all the parameters (i.e., c[1], c[2], c
[3], and c[4] of the AC and k[1], k[2], and k[3] of the unbound layer) during the backcalculation stage. Therefore, a two-stage backcalculation scheme was proposed to backcalculate viscoelastic as
well as nonlinear unbound layer properties of the pavement layers. The two-stage nonlinear backcalculation model was very similar to the two-stage linear backcalculation model discussed in the
earlier section. In the nonlinear model, the first stage involved nonlinear elastic backcalculation of the properties (i.e., k[1], k[2], and k[3]) of the unbound granular layer. In the second stage,
the backcalculated unbound properties (i.e., k[1], k[2], and k[3]) were fixed, and the layered viscoelastic-nonlinear model (LAVAN) was used to backcalculate the linear viscoelastic properties of AC
layer. Details of known and unknown properties used during these two stages are shown in table 28. Note that the current forward algorithm (LAVAN) can easily be extended to include the nonlinearity
of subgrade layers. However, when such forward solution was used in a backcalculation algorithm, computational efficiency decreased significantly. Typically, the effect of surface stress in the
subgrade was limited (stress “bulb” effect) and assumption of linear elastic subgrade, with increasing E (due to geostatic stress) with depth may be sufficient for most design purposes.
Table 28. Pavement properties and test inputs in two-stage nonlinear viscoelastic backcalculation.
│ │ Stage 1: │ Stage 2: │
│ Property │ Nonlinear Elastic │ Nonlinear Viscoelastic │
│ Thickness (inches) │ Known (AC), known (BASE), infinite (SUBGRADE) │ Known (AC), Known (BASE), infinite (SUBGRADE) │
│ Poisson ratio │ Known (AC), known (BASE), known (SUBGRADE) │ Known (AC), Known (BASE), Known (SUBGRADE) │
│ E[base ](psi) │ Unknowns (k[1], k[2], k[3]) │ Obtained from stage 1 │
│ E[subgrade] (psi) │ Unknown │ Obtained from stage 1 │
│ E(t)[AC] (psi) │ Unknown (E(t) = constant) │ Unknown (sigmoid coefficient) │
│ Test Inputs │
│ Surface loading (psi) │ Known peak stress │ Known load history │
│ Surface deflection (inches) │ Known peak deflection │ Known deflection history │
The algorithm was used to backcalculate two HMA mixes, namely, Control and CRTB (for mix properties, refer to table 29), on a 35-ms haversine load (synthetic FWD pulse load). The section properties
were as shown in table 29. Stresses at distance r = 0 (center of loading) and layer mid-depth were used in calculating unbound base modulus value for both forward calculation and backcalculation of
synthetic data.
Table 29. Pavement geometric and material properties in two-stage nonlinear viscoelastic backcalculation.
│ Property │ Value │
│ Thickness (inches) │ 5.9 (AC), 9.84 (base), infinite (subgrade) │
│ Poisson ratio (ν) │ 0.35 (AC), 0.4 (base), 0.4 (subgrade) │
│ Density (pci) │ 0.0752 (AC), 0.0752 (base), 0.0752 (subgrade) │
│ Nonlinear E[base](psi) │ K[o] = 0.6; k[1] = 3,626; k[2] = 0.5; k[3] = -0.5 │
│ E[subgrade] (psi) │ 10,000 │
│ AC: E(t) sigmoid │ Control: 1.598, 2.937, -0.272, -0.562 │
│ coefficient (psi) (c[i]) │ CRTB: 0.895, 3.411, 0.634, -0.428 │
│ Shift factor coefficients (a[i]) │ Control: 5.74E-04, -1.55E-01 │
│ │ CRTB: 4.42E-04, -1.32E-01 │
│ Haversine stress │ Peak stress = 137.79 psi │
│ 35 ms (psi) │ │
│ Sensor spacing from the center of load (inches): 0, 8, 12, 18, 24, 36, 48, 60 │
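The 35-ms synthetic haversine pulse listed in table 29 can be generated as follows (a standard sin²-shaped haversine is assumed):

```python
import numpy as np

def haversine_pulse(t, peak_psi=137.79, duration=0.035):
    """Haversine (sin^2) load pulse, zero outside [0, duration]."""
    p = peak_psi * np.sin(np.pi * t / duration) ** 2
    return np.where((t >= 0) & (t <= duration), p, 0.0)

# Sample the pulse at 0.1-ms resolution over a 60-ms window.
t = np.linspace(0.0, 0.06, 601)
p = haversine_pulse(t)
```

The pulse rises from zero, peaks at the table 29 stress of 137.79 psi at mid-duration (17.5 ms), and returns to zero at 35 ms.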
Stage 1: Nonlinear Elastic Backcalculation
Nonlinear elastic model is based on the assumption that the structure is time independent, with the AC layer assumed to be elastic and the unbound base layer assumed to be a stress-dependent
nonlinear material. Each FWD test was generally composed of four independent test drops, where each drop corresponded to a different stress level. Typical ranges of stress levels in each drop are
shown in table 30.
Table 30. Typical FWD test load levels.
│ │ Allowable Range for 11.81-inch │ Used Surface │
│ Load Level │ Diameter Plate (psi) │ Load (psi) │
│ Drop 1 │ 49–60 │ 55 │
│ Drop 2 │ 74–96 │ 80 │
│ Drop 3 │ 99–120 │ 110 │
│ Drop 4 │ 132–161 │ 137.8 │
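The stress-dependent base behavior can be sketched with the generalized (MEPDG-type) resilient modulus model; whether this is the exact form implemented in LAVAN, the atmospheric-pressure normalization, and the role of K[o] are assumptions here:

```python
P_A = 14.7  # atmospheric pressure (psi), assumed normalizing constant

def resilient_modulus(theta, tau_oct, k1=3626.0, k2=0.5, k3=-0.5):
    """Assumed generalized stress-dependent model:
       M_R = k1 * p_a * (theta / p_a)**k2 * (tau_oct / p_a + 1)**k3
    where theta is the bulk stress and tau_oct the octahedral shear
    stress, both in psi; the k-values default to the table 29 base
    coefficients."""
    return k1 * P_A * (theta / P_A) ** k2 * (tau_oct / P_A + 1.0) ** k3
```

Because the modulus depends on the stress state, which in turn depends on the modulus, evaluating the mid-depth stresses at r = 0 (as noted above) is typically resolved iteratively in the forward solution.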
In stage 1, peak stress and deflection values during all the test drops (drop 1 through 4) were used as input. Peak stress values in each drop (drops 1 through 4) for synthetic haversine FWD loading
used in the present analysis were 55, 80, 110, and 138 psi, respectively (refer to table 30). The AC layer modulus and unbound layer properties (k[1], k[2], and k[3]) backcalculated from synthetic
deflection at different temperatures are shown in figure 143 and figure 144, respectively.
Figure 143. Graph. Nonlinear elastic backcalculated AC modulus for control and CRTB mixes using FWD data at different test temperatures.
Figure 144. Graphs. Nonlinear elastic backcalculated unbound layer properties for control and CRTB mixes, using FWD data at different test temperatures.
As expected, for both control and CRTB mixes, the backcalculated elastic AC modulus values dropped with increase in temperature. Note that the forward solutions for the FWD surface deflections were
computed using the LAVAN (layered viscoelastic-nonlinear) algorithm. The horizontal dashed lines in figure 144 show the actual inputs used in the LAVAN forward computation. As shown, the coefficients
were close to the actual values but they were generally underpredicted by the backcalculation algorithm.
Stage 2: Nonlinear Viscoelastic Backcalculation
In stage 2, the backcalculated unbound layer properties from stage 1 were used as known fixed values, and the viscoelastic layer properties of the AC layer were obtained using viscoelastic-nonlinear
backcalculation. The performance of the backcalculation algorithm was checked for the set of FWD data at temperatures ({50, 68}, {68, 86}, {86, 104}, {104, 122} °F) to determine the effect of
different temperature ranges on the backcalculated E(t) values. The backcalculated unbound layer properties obtained in stage 1 at each independent temperature were averaged when a set of
temperatures was used in viscoelastic backcalculation. Average error in the E(t) master curve backcalculated in the second stage (refer to figure 145 and figure 146) was calculated using the error definition in figure 87. The error was calculated over four time ranges: (1) 10^-5 to 10^+1 s, (2) 10^-5 to 10^+2 s, (3) 10^-5 to 10^+3 s, and (4) 10^-5 to 10^+5 s. From figure 145 and figure 146, it can be
seen that for lower temperatures, the backcalculated E(t) master curve showed some deviation at higher reduced time, whereas, for higher temperatures, the backcalculated E(t) master curve showed some
deviation at lower reduced time. The backcalculated results for the mixes showed good predictability of the E(t) master curve using the two-stage nonlinear backcalculation scheme.
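The average E(t) error over a reduced-time range can be computed as below. The exact error definition in figure 87 is not reproduced in this excerpt, so an average absolute percent error is assumed:

```python
import numpy as np

def avg_percent_error(E_back, E_meas):
    """Average absolute percent error between backcalculated and measured
    E(t) sampled on a common reduced-time grid."""
    return 100.0 * np.mean(np.abs(E_back - E_meas) / E_meas)

# The four reporting ranges above would correspond to masking the
# reduced-time grid (e.g., 1e-5 to 1e+1 s) before averaging.
```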
Figure 145. Graphs. Control mix backcalculation results from two-stage nonlinear viscoelastic backcalculation.
Figure 146. Graphs. CRTB mix backcalculation results from two-stage nonlinear viscoelastic backcalculation.
The developed two-stage backcalculation algorithm was next used with field data to backcalculate the viscoelastic properties of LTPP section 0101 from State 1 (Alabama), i.e., section 10101. The section was tested in consecutive years, 2004 and 2005, with the average AC temperatures of the two tests differing by more than 40 °F (refer to table 31 and table 32). Further, because the section was not modified between the two tests, it was selected for the analysis.
Table 31. FWD test data from LTPP section 10101 for 2004–2005.
│ Test Date │ Drop Level │ Peak Stress (psi) │ Deflection at Each Sensor (mil) │
│ 2/23/2004 │ 1 │ 54.1 │ 5.00 │ 4.25 │ 3.70 │ 2.99 │ 2.44 │ 1.57 │ 1.10 │
│ ├────────────┼────────────────────┼───────┼───────┼───────┼───────┼──────┼──────┼──────┤
│ │ 2 │ 83.5 │ 8.11 │ 6.93 │ 6.10 │ 4.92 │ 4.06 │ 2.56 │ 1.81 │
│ ├────────────┼────────────────────┼───────┼───────┼───────┼───────┼──────┼──────┼──────┤
│ │ 3 │ 113.3 │ 11.65 │ 10.00 │ 8.82 │ 7.13 │ 5.87 │ 3.78 │ 2.60 │
│ ├────────────┼────────────────────┼───────┼───────┼───────┼───────┼──────┼──────┼──────┤
│ │ 4 │ 148.7 │ 16.14 │ 13.90 │ 12.17 │ 9.92 │ 8.19 │ 5.24 │ 3.62 │
│ 4/28/2005 │ 1 │ 52.2 │ 8.90 │ 6.46 │ 5.04 │ 3.54 │ 2.48 │ 1.46 │ 0.94 │
│ ├────────────┼────────────────────┼───────┼───────┼───────┼───────┼──────┼──────┼──────┤
│ │ 2 │ 80.2 │ 13.98 │ 10.47 │ 8.50 │ 5.83 │ 4.17 │ 2.40 │ 1.73 │
│ ├────────────┼────────────────────┼───────┼───────┼───────┼───────┼──────┼──────┼──────┤
│ │ 3 │ 111.2 │ 20.20 │ 15.55 │ 12.91 │ 8.90 │ 6.38 │ 3.74 │ 2.76 │
│ ├────────────┼────────────────────┼───────┼───────┼───────┼───────┼──────┼──────┼──────┤
│ │ 4 │ 139.1 │ 26.06 │ 20.28 │ 16.93 │ 11.81 │ 8.39 │ 4.84 │ 3.50 │
Stage 1: Nonlinear Elastic Backcalculation
In stage 1, peak stress and deflection values during all the test drops in table 31 were used as inputs in nonlinear elastic backcalculation. The backcalculation results in table 32 show that the
unbound base properties (k[1], k[2], and k[3]) were very close between the two test years; this shows that although the AC was affected by the temperature of the test, the effect on the unbound layer was not
significant. As expected, the backcalculated elastic AC modulus values dropped with increase in temperature.
Table 32. Nonlinear elastic backcalculation results for LTPP section 10101.
│ │ FWD Test Year │
│ ├─────────┬─────────┤
│ Results │ 2004 │ 2005 │
│ Average AC temperature (°F) │ 53.4 │ 95.4 │
│ Properties │ AC modulus (psi) │ 941,526 │ 227,346 │
│ ├───────────────────┼─────────┼─────────┤
│ │ k[1] │ 17,984 │ 15,972 │
│ ├───────────────────┼─────────┼─────────┤
│ │ k[2] │ 0.16 │ 0.17 │
│ ├───────────────────┼─────────┼─────────┤
│ │ k[3] │ -0.59 │ -0.58 │
│ ├───────────────────┼─────────┼─────────┤
│ │ E[subg] │ 29,832 │ 26,097 │
Stage 2: Nonlinear Viscoelastic Backcalculation
Unbound layer material properties may vary depending on environmental factors (seasons). However, because the unbound layer properties obtained for the two tests in stage 1 were found to be very
close, they were used in the second stage of the backcalculation without any correction. Stage 2 uses the nonlinear viscoelastic forward algorithm during backcalculation of the E(t) of the AC layer.
Note that viscoelastic backcalculation requires the entire time history for backcalculation. For the LTPP section test in year 2004, the entire deflection history was available only for drop 1, hence
the backcalculation was performed using only drop 1. Figure 147 and figure 148 show results obtained by two independent backcalculation attempts using data from 2 years of field testing. As shown, a
very good match was seen in E(t), and a reasonable match was seen in the shift factor function. The measured viscoelastic properties in the figures were obtained using D(t) data available in the LTPP
database (refer to figure 117 and figure 118). As explained earlier (refer to figure 101 through figure 104), the dynamic modulus and phase angle master curve for the backcalculated E(t) were
calculated at 66 °F.
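The interconversion from E(t) to dynamic modulus and phase angle used at several points in this chapter can be approximated from the local log-log slope of the relaxation curve. The specific quasi-elastic approximations below are assumptions for illustration, not the exact method of the interconversion reference:

```python
import math

def storage_modulus(E_of_t, omega):
    """Crude quasi-elastic interconversion: E'(omega) ~ E(t) at t = 1/omega.
    A common approximation, assumed here."""
    return E_of_t(1.0 / omega)

def phase_angle_deg(E_of_t, omega, h=0.05):
    """Phase angle estimated from the local log-log slope of E(t),
    assuming phi ~ 90 * |d log E / d log t| (in degrees)."""
    t = 1.0 / omega
    slope = (math.log10(E_of_t(t * 10 ** h)) -
             math.log10(E_of_t(t * 10 ** -h))) / (2.0 * h)
    return -90.0 * slope
```

For a pure power law E(t) = E0 * t^(-n), this approximation gives a constant phase angle of 90n degrees, which matches the exact result for that special case.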
Figure 147. Graphs. Comparison of nonlinear viscoelastic backcalculated and measured E(t) and a[T](T) for LTPP section 10101.
Figure 148. Graphs. Comparison of nonlinear viscoelastic backcalculated and measured |E*| and phase angle for LTPP section 10101.
This chapter presented two methodologies for determining the E(t)/|E*| master curve and unbound material properties of in-service pavements. As part of this effort, two multilayered viscoelastic
algorithms were developed. The first algorithm, called LAVA/LAVAP (LAVA can consider constant AC layer temperature, and LAVAP can consider a temperature profile for the AC layer), assumed the AC
layer was a linear viscoelastic material and the unbound layers were linear elastic. The second algorithm (called LAVAN) also assumed the AC layer was a linear viscoelastic material; however, it can
consider the nonlinear (stress-dependent) elastic moduli of the unbound layers. These two models were used to develop two genetic algorithm-based backcalculation algorithms (called BACKLAVA/BACKLAVAP
for the linear model and BACKLAVAN for the nonlinear model) for determining E(t)/|E*| master curve of AC layers and unbound material properties of in-service pavements.
The following conclusions can be drawn regarding the FWD data collection:
• Careful collection of FWD deflection data is crucial. The accuracy of the deflection time history needs to be improved. As a minimum, a highly accurate deflection time history at least until the
end of the load pulse duration is needed for E(t) or |E*| master curve backcalculation. The longer the duration of the deflection time history, the better.
• The temperature of the AC layer needs to be collected during the FWD testing. Preferably, temperatures should be collected at every 2 inches of depth of the AC layer.
• Either a single FWD run on AC with a large temperature gradient or FWDs run at different temperatures can be sufficient to compute the E(t)/|E*| master curve of AC pavements.
• For backcalculation using multiple FWD test datasets, tests should be conducted at a minimum of two different temperatures, preferably 18 °F or more apart. FWD data collected at a set of
temperatures between 68 and 104 °F will maximize the accuracy of the backcalculated E(t)/|E*| master curve up to less than a 10-percent error.
• For backcalculation using a single FWD test dataset at a known AC temperature profile, the FWD test should be conducted under a temperature variation of preferably ± 9 °F or more.
• An FWD configuration composed of multiple pulses (as presented in appendix B) will improve the accuracy of the E(t) master curve prediction. However, to obtain the time-temperature shift
factor coefficients, either temperature variation with depth needs to be measured (and included in the analysis) or the FWD test (with multiple pulses) needs to be run at different pavement
temperatures (e.g., different times of the day).
• Study of the effect of FWD sensor data on backcalculation indicates that the influence of unbound layer properties increases with incorporation of data from farther sensors and with increase in
test temperature. Further, it can be concluded that all sensors in the standard FWD configuration are needed for accurate backcalculation of the viscoelastic AC layer and unbound layers.
The following conclusions can be drawn for the backcalculation procedure:
• Viscoelastic properties of AC layer can be obtained using a two-stage scheme. The first stage is an elastic backcalculation to determine unbound layer properties, which is followed by
viscoelastic backcalculation of E(t) of the AC layer while keeping the unbound layer properties fixed.
• The examples presented in this study show that, in the case of the presence of considerable dynamic effects, the algorithms (BACKLAVA/BACKLAVAP and BACKLAVAN) should be used with caution. The
algorithms presented in this chapter predict the behavior of flexible pavement as a viscoelastic damped structure, assuming it to be massless.
• For the GA-based backcalculation procedures, the following population and generation sizes are recommended:
○ For the BACKLAVA model, use a set of FWD tests run at different (but constant) AC layer temperatures with a population size of 70 and 15 generations.
○ For the BACKLAVAP model, use a single FWD test with a known AC temperature profile and a population size of 300 and 15 generations.
○ For the BACKLAVAN (nonlinear) model, use FWD tests run at different (but constant) AC layer temperatures and a population size of 100 and 15 generations.
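The GA population and generation sizes recommended above can be pictured with a toy genetic algorithm. The sketch below is purely illustrative — it fits a single scalar to a synthetic target rather than actual FWD deflection data, and nothing in it reproduces the BACKLAVA code; the operators (tournament selection, blend crossover, Gaussian mutation) and all numeric settings other than the population/generation sizes are our own choices:

```python
import random

random.seed(0)

def ga_minimize(fitness, bounds, pop_size=70, generations=15):
    """Toy real-coded genetic algorithm: tournament selection,
    blend crossover, and Gaussian mutation over a bounded scalar."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection (size 3) of two parents.
            p1 = min(random.sample(pop, 3), key=fitness)
            p2 = min(random.sample(pop, 3), key=fitness)
            a = random.random()
            child = a * p1 + (1 - a) * p2               # blend crossover
            child += random.gauss(0, 0.05 * (hi - lo))  # Gaussian mutation
            nxt.append(min(max(child, lo), hi))         # clamp to bounds
        pop = nxt
    return min(pop, key=fitness)

# Toy "backcalculation": recover a modulus-like scalar minimizing the
# squared error against a synthetic target of 42.0.
best = ga_minimize(lambda e: (e - 42.0) ** 2, bounds=(0.0, 100.0))
print(f"best = {best:.2f}")
```

In a real backcalculation the fitness would instead measure the mismatch between measured and simulated deflection time histories, and the chromosome would encode the layer parameters.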
ConocoPhillips Total Assets from 2010 to 2024 | Stocks: COP - Macroaxis
COP Stock: USD 112.05 (0.46, 0.41%)
ConocoPhillips' Total Assets yearly trend continues to be relatively stable, with very little volatility. Total Assets are projected to drop to about 74.6 B. Total Assets is the total value of all owned resources that are expected to provide future economic benefits to the business, including cash, investments, accounts receivable, inventory, property, plant, equipment, and intangible assets.
Total Assets — First Reported: 1985-12-31 | Previous Quarter: 96 B | Current Value: 96.7 B | Quarterly Volatility: 54.8 B
Check ConocoPhillips' financial statements over time to gain insight into future company performance. You can evaluate financial statements to find patterns among ConocoPhillips' main balance sheet or income statement drivers, such as Depreciation and Amortization of 4.8 B, Interest Expense of 1.3 B, and Selling, General & Administrative of 1.3 B, as well as many indicators such as Price to Sales Ratio of 2.62, Dividend Yield of 0.042, or PTB Ratio of 2.97. ConocoPhillips financial statement analysis is a perfect complement when working with ConocoPhillips Valuation.
ConocoPhillips Total Assets
Check out the analysis of ConocoPhillips Correlation against competitors. To learn how to invest in ConocoPhillips stock, please use our How to Invest in ConocoPhillips guide.
Latest ConocoPhillips' Total Assets Growth Pattern
Below is the plot of the Total Assets of ConocoPhillips over the last few years. Total assets refers to the total amount of assets ConocoPhillips owns. Assets are items that have some economic value and are expended over time to create a benefit for the owner. These assets are usually recorded in ConocoPhillips' books under different categories, such as cash, marketable securities, accounts receivable, prepaid expenses, inventory, fixed assets, intangible assets, and other assets. Total Assets is the total value of all owned resources that are expected to provide future economic benefits to the business, including cash, investments, accounts receivable, inventory, property, plant, equipment, and intangible assets. ConocoPhillips' Total Assets historical data analysis aims to capture, in quantitative terms, the overall pattern of growth or decline in ConocoPhillips' financial position and show how it relates to other accounts over time.
Last Reported: 95.92 B (10-year trend)
ConocoPhillips Total Assets Regression Statistics
Arithmetic Mean: 89,186,460,342
Geometric Mean: 80,937,997,462
Coefficient of Variation: 35.63
Mean Deviation: 22,661,580,923
Median: 90,661,000,000
Standard Deviation: 31,774,796,201
Sample Variance: 1,009,637,673.6 T
Range: 139.2 B
R-Value: (0.19)
Mean Square Error: 1,047,351,577.2 T
R-Squared: 0.04
Significance: 0.49
Slope: (1,361,927,372)
Total Sum of Squares: 14,134,927,430.2 T
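The regression statistics above (slope, R-value, R-squared) can be reproduced for any total-assets series with an ordinary least-squares fit. The sketch below uses a small, purely hypothetical series of yearly values in billions — not ConocoPhillips' actual filings:

```python
import math

# Hypothetical yearly total-assets values (billions) -- illustrative only.
years = [0, 1, 2, 3, 4]
assets = [120.0, 110.0, 97.0, 93.0, 96.0]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(assets) / n

# Least-squares slope and intercept of the trend line.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, assets))
sxx = sum((x - mean_x) ** 2 for x in years)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Pearson R and R-squared measure how well the trend line fits.
syy = sum((y - mean_y) ** 2 for y in assets)
r_value = sxy / math.sqrt(sxx * syy)
r_squared = r_value ** 2

print(f"slope={slope:.2f} B/yr  R={r_value:.2f}  R^2={r_squared:.2f}")
```

A negative slope, as in this toy series, corresponds to the declining-trend projection quoted for Total Assets.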
ConocoPhillips Total Assets History
Other Fundamentals of ConocoPhillips
ConocoPhillips Total Assets component correlations
About ConocoPhillips Financial Statements
ConocoPhillips shareholders use historical fundamental indicators, such as Total Assets, to determine how well the company is positioned to perform in the future. Although ConocoPhillips investors may analyze each financial statement separately, they are all interrelated. The changes in ConocoPhillips' assets and liabilities, for example, are also reflected in the revenues and expenses on ConocoPhillips' income statement. Understanding these patterns can help investors time the market effectively. Please read more on our fundamental analysis page.
Total Assets — Last Reported: 95.9 B | Projected for Next Year: 74.6 B
Pair Trading with ConocoPhillips
One of the main advantages of trading using pair correlations is that every trade hedges away some risk. Because two separate transactions are required, even if the ConocoPhillips position performs unexpectedly, the other equity can make up some of the losses. Pair trading also minimizes risk from directional movements in the market. For example, if an entire industry or sector drops because of unexpected headlines, the short position in ConocoPhillips will appreciate, offsetting losses from the drop in the long position's value.
0.78 — PR (Permian Resources)
0.63 — SD (SandRidge Energy)
0.74 — SM (SM Energy)
The ability to find closely correlated positions to ConocoPhillips could be a great tool in your tax-loss harvesting strategies, allowing investors a quick way to find a similar-enough asset to
replace ConocoPhillips when you sell it. If you don't do this, your portfolio allocation will be skewed against your target asset allocation. So, investors can't just sell and buy back ConocoPhillips
- that would be a violation of the tax code under the "wash sale" rule, and this is why you need to find a similar enough asset and use the proceeds from selling ConocoPhillips to buy it.
The correlation of ConocoPhillips is a statistical measure of how it moves in relation to other instruments. This measure is expressed in what is known as the correlation coefficient, which ranges
between -1 and +1. A perfect positive correlation (i.e., a correlation coefficient of +1) implies that as ConocoPhillips moves, either up or down, the other security will move in the same direction.
Alternatively, perfect negative correlation means that if ConocoPhillips moves in either direction, the perfectly negatively correlated security will move in the opposite direction. If the
correlation is 0, the equities are not correlated; they are entirely random. A correlation greater than 0.8 is generally described as strong, whereas a correlation less than 0.5 is generally
considered weak.
Correlation analysis
and pair trading evaluation for ConocoPhillips can also be used as hedging techniques within a particular sector or industry or even over random equities to generate a better risk-adjusted return on
your portfolios.
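The correlation coefficient described above is the Pearson coefficient of two return series. A minimal sketch, using made-up daily returns rather than real COP data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily returns for two equities.
cop = [0.010, -0.005, 0.020, 0.003, -0.012]
peer = [0.008, -0.004, 0.018, 0.002, -0.010]

r = pearson(cop, peer)
print(f"correlation = {r:.2f}")
```

A value near +1, as these toy series produce, indicates the two equities move together (a candidate pair for hedged trades); a value below 0.5 is generally considered weak.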
Additional Tools for ConocoPhillips Stock Analysis
When running ConocoPhillips' price analysis, check to measure ConocoPhillips' market volatility, profitability, liquidity, solvency, efficiency, growth potential, financial leverage, and other vital indicators. We have many different tools that can be utilized to determine how healthy ConocoPhillips is operating at the current time. Most of ConocoPhillips' value examination focuses on studying past and present price action to predict the probability of ConocoPhillips' future price movements. You can analyze the entity against its peers and the financial market as a whole to determine factors that move ConocoPhillips' price. Additionally, you may evaluate how the addition of ConocoPhillips to your portfolios can decrease your overall portfolio volatility.
10.3 Young’s Double Slit Experiment
Learning Objectives
By the end of this section, you will be able to do the following:
• Explain the phenomena of interference
• Define constructive interference for a double slit and destructive interference for a double slit
The information presented in this section supports the following AP® learning objectives and science practices:
• 6.C.3.1 The student is able to qualitatively apply the wave model to quantities that describe the generation of interference patterns to make predictions about interference patterns that form
when waves pass through a set of openings whose spacing and widths are small, but larger than the wavelength. (S.P. 1.4, 6.4)
• 6.D.1.1 The student is able to use representations of individual pulses and construct representations to model the interaction of two wave pulses to analyze the superposition of two pulses. (S.P.
1.1, 1.4)
• 6.D.1.3 The student is able to design a plan for collecting data to quantify the amplitude variations when two or more traveling waves or wave pulses interact in a given medium. (S.P. 4.2)
• 6.D.2.1 The student is able to analyze data or observations or evaluate evidence of the interaction of two or more traveling waves in one or two dimensions (i.e., circular wave fronts) to
evaluate the variations in resultant amplitudes. (S.P. 5.1)
Although Christiaan Huygens thought that light was a wave, Isaac Newton did not. Newton felt that there were other explanations for color, and for the interference and diffraction effects that were
observable at the time. Owing to Newton’s tremendous stature, his view generally prevailed. The fact that Huygens’s principle worked was not considered evidence that was direct enough to prove that
light is a wave. The acceptance of the wave character of light came many years later when, in 1801, the English physicist and physician Thomas Young (1773–1829) did his now-classic double slit
experiment (see Figure 10.10).
Why do we not ordinarily observe wave behavior for light, such as observed in Young’s double slit experiment? First, light must interact with something small, such as the closely spaced slits used by
Young, to show pronounced wave effects. Furthermore, Young first passed light from a single source—the sun—through a single slit to make the light somewhat coherent. By coherent, we mean waves are in
phase or have a definite phase relationship. Incoherent means the waves have random phase relationships. Why did Young then pass the light through a double slit? The answer to this question is that
two slits provide two coherent light sources that then interfere constructively or destructively. Young used sunlight, where each wavelength forms its own pattern, making the effect more difficult to
see. We illustrate the double slit experiment with monochromatic (single $\lambda$) light to clarify the effect. Figure 10.11 shows the pure constructive and destructive interference of two waves having the same wavelength and amplitude.
When light passes through narrow slits, it is diffracted into semicircular waves, as shown in Figure 10.12(a). Pure constructive interference occurs where the waves are crest to crest or trough to
trough. Pure destructive interference occurs where they are crest to trough. The light must fall on a screen and be scattered into our eyes for us to see the pattern. An analogous pattern for water
waves is shown in Figure 10.12(b). Note that regions of constructive and destructive interference move out from the slits at well-defined angles to the original beam. These angles depend on
wavelength and the distance between the slits, as we shall see below.
Making Connections: Interference
In addition to light waves, the phenomenon of interference also occurs in other waves, including water and sound waves. You will observe patterns of constructive and destructive interference if you
throw two stones in a lake simultaneously. The crests and troughs of the two waves interfere constructively whereas the crest of a wave interferes destructively with the trough of the other wave.
Similarly, sound waves traveling in the same medium interfere with each other. Their amplitudes add if they interfere constructively or subtract if there is destructive interference.
To understand the double slit interference pattern, we consider how two waves travel from the slits to the screen, as illustrated in Figure 10.13. Each slit is a different distance from a given point
on the screen. Thus, different numbers of wavelengths fit into each path. Waves start out from the slits in phase—crest to crest—but they may end up out of phase—crest to trough—at the screen if the
paths differ in length by half a wavelength, interfering destructively as shown in Figure 10.13(a). If the paths differ by a whole wavelength, then the waves arrive in phase—crest to crest—at the
screen, interfering constructively as shown in Figure 10.13(b). More generally, if the paths taken by the two waves differ by any half-integral number of wavelengths [$(1/2)\lambda$, $(3/2)\lambda$, $(5/2)\lambda$, etc.], then destructive interference occurs. Similarly, if the paths taken by the two waves differ by any integral number of wavelengths ($\lambda$, $2\lambda$, $3\lambda$, etc.), then constructive interference occurs.
Take-Home Experiment: Using Fingers as Slits
Look at a light, such as a street lamp or incandescent bulb, through the narrow gap between two fingers held close together. What type of pattern do you see? How does it change when you allow the
fingers to move a little farther apart? Is it more distinct for a monochromatic source, such as the yellow light from a sodium vapor lamp, than for an incandescent bulb?
Figure 10.14 shows how to determine the path length difference for waves traveling from two slits to a common point on a screen. If the screen is a large distance away compared with the distance between the slits, then the angle $\theta$ between the path and a line from the slits to the screen (see the figure) is nearly the same for each path. The difference between the paths is shown in the figure; simple trigonometry shows it to be $d \sin\theta$, where $d$ is the distance between the slits. To obtain constructive interference for a double slit, the path length difference must be an integral multiple of the wavelength, or
10.3 $d \sin\theta = m\lambda, \text{ for } m = 0,\ 1,\ -1,\ 2,\ -2, \ldots \text{ (constructive)}.$
Similarly, to obtain destructive interference for a double slit, the path length difference must be a half-integral multiple of the wavelength, or
10.4 $d \sin\theta = \left(m + \tfrac{1}{2}\right)\lambda, \text{ for } m = 0,\ 1,\ -1,\ 2,\ -2, \ldots \text{ (destructive)},$
where $\lambda$ is the wavelength of the light, $d$ is the distance between slits, and $\theta$ is the angle from the original direction of the beam as discussed above. We call $m$ the order of the interference. For example, $m = 4$ is fourth-order interference.
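The constructive and destructive conditions are easy to evaluate numerically. This sketch assumes illustrative values for the slit separation and wavelength (0.0100 mm and 633 nm; the choice of values is ours):

```python
import math

d = 0.0100e-3   # slit separation in meters (0.0100 mm)
lam = 633e-9    # wavelength in meters (633 nm)

def bright_angle(m):
    """Angle of the m-th order bright fringe: d*sin(theta) = m*lambda."""
    return math.degrees(math.asin(m * lam / d))

def dark_angle(m):
    """Angle of the m-th dark fringe: d*sin(theta) = (m + 1/2)*lambda."""
    return math.degrees(math.asin((m + 0.5) * lam / d))

for m in range(4):
    print(f"m={m}: bright at {bright_angle(m):6.2f} deg, "
          f"dark at {dark_angle(m):6.2f} deg")
```

Note that the bright and dark fringes alternate, with each dark fringe falling between two adjacent bright ones, exactly as the two conditions require.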
The equations for double slit interference imply that a series of bright and dark lines are formed. For vertical slits, the light spreads out horizontally on either side of the incident beam into a
pattern called interference fringes, illustrated in Figure 10.15. The intensity of the bright fringes falls off on either side, being brightest at the center. The closer the slits are, the more the bright fringes spread out. We can see this by examining the equation
10.5 $d \sin\theta = m\lambda, \text{ for } m = 0,\ 1,\ -1,\ 2,\ -2, \ldots.$
For fixed $\lambda$ and $m$, the smaller $d$ is, the larger $\theta$ must be, since $\sin\theta = m\lambda/d$. This is consistent with our contention that wave effects are most noticeable when the object the wave encounters (here, slits a distance $d$ apart) is small. Small $d$ gives large $\theta$, hence, a large effect.
Making Connections: Amplitude of Interference Fringe
The amplitude of the interference fringe at a point depends on the amplitudes of the two coherent waves ($A_1$ and $A_2$) arriving at that point and can be found using the relationship
10.6 $A^2 = A_1^2 + A_2^2 + 2 A_1 A_2 \cos\delta,$
where δ is the phase difference between the arriving waves.
This equation is also applicable for Young's double slit experiment. If the two waves come from the same source or two sources with the same amplitude, then $A_1 = A_2$, and the amplitude of the interference fringe can be calculated using
10.7 $A^2 = 2 A_1^2 (1 + \cos\delta).$
The amplitude will be maximum when $\cos\delta = 1$, i.e., $\delta = 0$. This means the central fringe has the maximum amplitude. Also, the intensity of a wave is directly proportional to the square of its amplitude (i.e., $I \propto A^2$), and consequently the central fringe also has the maximum intensity.
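The relationship between phase difference and fringe amplitude can be tabulated directly from Equation 10.7 for equal-amplitude waves ($A_1 = 1$ is an arbitrary unit choice):

```python
import math

A1 = 1.0  # arbitrary unit amplitude for both waves

def fringe_intensity(delta):
    """Squared resultant amplitude A^2 = 2*A1^2*(1 + cos(delta))."""
    return 2 * A1**2 * (1 + math.cos(delta))

# delta = 0  -> pure constructive interference (maximum, A^2 = 4*A1^2)
# delta = pi -> pure destructive interference (zero intensity)
for delta in (0, math.pi / 2, math.pi):
    print(f"delta = {delta:.2f} rad -> A^2 = {fringe_intensity(delta):.2f}")
```

The maximum is four times the single-wave value, not two: amplitudes add before squaring, which is the essence of interference.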
Example 10.1 Finding a Wavelength from an Interference Pattern
Suppose you pass light from a He-Ne laser through two slits separated by 0.0100 mm and find that the third bright line on a screen is formed at an angle of $10.95^\circ$ relative to the incident beam. What is the wavelength of the light?
The third bright line is due to third-order constructive interference, which means that $m = 3$. We are given $d = 0.0100\ \text{mm}$ and $\theta = 10.95^\circ$. The wavelength can thus be found using the equation $d \sin\theta = m\lambda$ for constructive interference.
The equation is $d \sin\theta = m\lambda$. Solving for the wavelength $\lambda$ gives
10.8 $\lambda = \dfrac{d \sin\theta}{m}.$
Substituting known values yields
10.9 $\lambda = \dfrac{(0.0100\ \text{mm})(\sin 10.95^\circ)}{3} = 6.33 \times 10^{-4}\ \text{mm} = 633\ \text{nm}.$
To three digits, this is the wavelength of light emitted by the common He-Ne laser. Not by coincidence, this red color is similar to that emitted by neon lights. More important, however, is the fact
that interference patterns can be used to measure wavelength. Young did this for visible wavelengths. This analytical technique is still widely used to measure electromagnetic spectra. For a given
order, the angle for constructive interference increases with $\lambda$, so that spectra—measurements of intensity versus wavelength—can be obtained.
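The arithmetic in Example 10.1 can be checked in a few lines:

```python
import math

d = 0.0100e-3                 # slit separation: 0.0100 mm in meters
theta = math.radians(10.95)   # angle of the third bright line
m = 3                         # third-order constructive interference

lam = d * math.sin(theta) / m   # lambda = d*sin(theta)/m
print(f"wavelength = {lam * 1e9:.0f} nm")  # prints 633 nm
```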
Example 10.2 Calculating Highest Order Possible
Interference patterns do not have an infinite number of lines, since there is a limit to how big $m$ can be. What is the highest-order constructive interference possible with the system described in the preceding example?
Strategy and Concept
The equation $d \sin\theta = m\lambda$ (for $m = 0,\ 1,\ -1,\ 2,\ -2, \ldots$) describes constructive interference. For fixed values of $d$ and $\lambda$, the larger $m$ is, the larger $\sin\theta$ is. However, the maximum value that $\sin\theta$ can have is 1, for an angle of $90^\circ$. Larger angles imply that light goes backward and does not reach the screen at all. Let us find which $m$ corresponds to this maximum diffraction angle.
Solving the equation $d \sin\theta = m\lambda$ for $m$ gives
10.10 $m = \dfrac{d \sin\theta}{\lambda}.$
Taking $\sin\theta = 1$ and substituting the values of $d$ and $\lambda$ from the preceding example gives
10.11 $m = \dfrac{(0.0100\ \text{mm})(1)}{633\ \text{nm}} \approx 15.8.$
Therefore, the largest integer $m$ can be is 15, or
10.12 $m = 15.$
The number of fringes depends on the wavelength and slit separation. The number of fringes will be very large for large slit separations. However, if the slit separation becomes much greater than the
wavelength, the intensity of the interference pattern changes so that the screen has two bright lines cast by the slits, as expected when light behaves like a ray. We also note that the fringes get
fainter further away from the center. Consequently, not all 15 fringes may be observable.
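The highest-order calculation generalizes to any slit spacing and wavelength; a short sketch:

```python
import math

def highest_order(d, lam):
    """Largest integer m satisfying sin(theta) = m*lam/d <= 1."""
    return math.floor(d / lam)

# Values from the preceding examples.
d = 0.0100e-3   # 0.0100 mm
lam = 633e-9    # 633 nm
print(highest_order(d, lam))  # prints 15
```

As the text notes, a much larger $d/\lambda$ ratio gives many more fringes, but the pattern then approaches the ray-optics limit of two bright lines.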
Applying the Science Practices: Double Slit Experiment
Design an Experiment
Design a double slit experiment to find the wavelength of a He-Ne laser light. Your setup may include the He-Ne laser, a glass plate with two slits, paper, measurement apparatus, and a light
intensity recorder. Write a step-by-step procedure for the experiment, draw a diagram of the set-up, and describe the steps followed to calculate the wavelength of the laser light.
Analyze Data
A double slit experiment is performed using three lasers. The table below shows the locations of the bright fringes that are recorded in meters on a screen.
Fringe Location for Laser 1 Location for Laser 2 Location for Laser 3
3 0.371 0.344 0.395
2 0.314 0.296 0.330
1 0.257 0.248 0.265
0 0.200 0.200 0.200
-1 0.143 0.152 0.135
-2 0.086 0.104 0.070
-3 0.029 0.056 0.005
a. Assuming the screen is 2.00 m away from the slits, find the angles for the first, second, and third bright fringes for each laser.
b. If the distance between the slits is 0.02 mm, calculate the wavelengths of the three lasers used in the experiment.
c. If the amplitudes of the three lasers are in the ratio 1:2:3, find the ratio of intensities of the central bright fringes formed by the three lasers.
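One way to sketch the analysis asked for in parts (a) and (b), using the tabulated fringe locations with the screen distance L = 2.00 m and slit separation d = 0.02 mm (the central fringe at 0.200 m is the reference point):

```python
import math

L = 2.00      # slit-to-screen distance in meters
d = 0.02e-3   # slit separation in meters
x0 = 0.200    # central (m = 0) fringe location in meters

def wavelength(x_m, m):
    """Infer lambda from the m-th bright fringe at screen position x_m."""
    theta = math.atan((x_m - x0) / L)   # part (a): fringe angle
    return d * math.sin(theta) / m      # part (b): d*sin(theta) = m*lambda

# First-order (m = 1) fringe locations from the table.
for name, x1 in (("Laser 1", 0.257), ("Laser 2", 0.248), ("Laser 3", 0.265)):
    lam = wavelength(x1, 1)
    print(f"{name}: lambda = {lam * 1e9:.0f} nm")
```

Higher-order fringes from the table give the same wavelengths, which is a good consistency check. Part (c) follows without code: since $I \propto A^2$, an amplitude ratio of 1:2:3 gives a central-fringe intensity ratio of 1:4:9.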
Smart Lithium-Ion Battery Monitoring in Electric Vehicles: An AI-Empowered Digital Twin Approach
Division of Electronics & Electrical Engineering, Dongguk University, Seoul 04620, Republic of Korea
Author to whom correspondence should be addressed.
Submission received: 1 November 2023 / Revised: 30 November 2023 / Accepted: 1 December 2023 / Published: 4 December 2023
This paper presents a transformative methodology that harnesses the power of digital twin (DT) technology for the advanced condition monitoring of lithium-ion batteries (LIBs) in electric vehicles
(EVs). In contrast to conventional solutions, our approach eliminates the need to calibrate sensors or add additional hardware circuits. The digital replica works seamlessly alongside the embedded
battery management system (BMS) in an EV, delivering real-time signals for monitoring. Our system is a significant step forward in ensuring the efficiency and sustainability of EVs, which play an
essential role in reducing carbon emissions. A core innovation lies in the integration of the digital twin into the battery monitoring process, reshaping the landscape of energy storage and
alternative power sources such as lithium-ion batteries. Our comprehensive system leverages a cloud-based IoT network and combines both physical and digital components to provide a holistic solution.
The physical side encompasses offline modeling, where a long short-term memory (LSTM) algorithm trained with various learning rates (LRs) and optimized by three types of optimizers ensures precise
state-of-charge (SOC) predictions. On the digital side, the digital twin takes center stage, enabling the real-time monitoring and prediction of battery activity. A particularly innovative aspect of
our approach is the utilization of a time-series generative adversarial network (TS-GAN) to generate synthetic data that seamlessly complement the monitoring process. This pioneering use of a TS-GAN
offers an effective solution to the challenge of limited real-time data availability, thus enhancing the system’s predictive capabilities. By seamlessly integrating these physical and digital
elements, our system enables the precise analysis and prediction of battery behavior. This innovation—particularly the application of a TS-GAN for data generation—significantly contributes to
optimizing battery performance, enhancing safety, and extending the longevity of lithium-ion batteries in EVs. Furthermore, the model developed in this research serves as a benchmark for future
digital energy storage in lithium-ion batteries and comprehensive energy utilization. According to statistical tests, the model has a high level of precision. Its exceptional safety performance and
reduced energy consumption offer promising prospects for sustainable and efficient energy solutions. This paper signifies a pivotal step towards realizing a cleaner and more sustainable future
through advanced EV battery management.
1. Introduction
Electric vehicles (EVs) have become a symbol of hope in the face of the energy crisis and the impending global greenhouse effect. There has been notable progress in EV technology sectors in 2023,
including those of battery technology, autonomous driving, and other innovative technologies that are driving the industry forward, thus signaling their indispensable role in shaping our sustainable
future. Lithium-ion batteries (LIBs) are at the forefront of this revolution, known for their remarkable energy density, efficiency, and cycle life. LIBs are enormously significant not only for EVs, but also for renewable energy storage and portable electronic devices. The state of charge (SOC) of LIBs is critical, as it serves as a guide for users
and is a crucial component of battery management systems. As we enter a new era of renewable energy in which mitigating climate change is paramount, accurate SOC estimation for LIBs is becoming
increasingly important. Battery energy storage systems are emerging as key players in this transformation, paving the way for economic, environmental, and social sustainability. This pursuit is
driven by high-fidelity digital twin (DT) models, which provide insights into the complexities of battery system performance across various domains, including the fight against climate change.
The efficient management of these batteries is paramount to their longevity and optimal performance. However, accurately predicting the SOC in these batteries remains a complex challenge.
In this paper, we delve into the realm of digital twin technology, a cutting-edge approach that promises to revolutionize SOC forecasting. By creating virtual replicas of real-world batteries, we aim
to enhance our understanding of their behavior and provide more accurate SOC predictions. This research holds the potential to transform battery management systems, prolong battery life, and enable
smarter energy consumption.
EVs need a reliable battery management system (BMS) to monitor the battery state. The SOC is a crucial factor of a BMS that determines the remaining battery energy and the time that it can last
before charging. SOC estimation is complicated due to the complex dynamics of LIBs and changing external conditions. There are three primary categories of SOC estimation methods in the literature:
physics-based electrochemical models [
], electrical equivalent circuit models [
], and data-driven models based on artificial intelligence algorithms, such as neural networks [
], support vector machines [
], random forests [
], and many others. The choice of model depends on factors such as accuracy requirements, data availability, computational resources, and real-world application needs, and the strengths and
limitations of each category are considered. Physics-based models are highly detailed and complex, as they consider electrochemical reactions, diffusion, and heat transfer phenomena. Accurate
predictions of the SOC can be made with these models, but they need a significant number of parameters related to the battery’s materials, geometry, and operating conditions. Proper calibration is
necessary for optimal results. Electrical equivalent circuit models simplify the complex electrochemical processes inside a battery into an electrical circuit. These methods use empirical data to
estimate circuit parameters and are computationally efficient, which makes them appropriate for real-time applications. For certain applications, physics-based models may have a higher level of
accuracy than that of other models.
Data-driven methods have become popular in recent years due to their ability to estimate the SOC using only battery measurement data. Machine learning algorithms such as support vector machines
(SVMs) [
], artificial neural networks [
], and fuzzy logic [
] have been used for SOC estimation. According to [
], there are three types of neural network methods: feed-forward neural networks, deep learning neural networks, and hybrid neural networks. The authors discussed the accurate estimation of a
lithium-ion battery’s state of charge for high-level electric vehicles to support carbon neutrality and emission peak policies. The focus is on neural network techniques that provide precise SOC
estimates without considering the battery’s internal electrochemical state. However, these methods have limitations in accurately estimating the SOC due to the diversity and complexity of LIBs.
Significant amounts of human labor and expertise are required for their training, and the learning architectures that they employ are relatively shallow. In response to these issues, the deep neural
network (DNN) has proven to be a viable solution. By using a single DNN layer, it is possible to automatically learn a novel representation of the input. Additionally, the multi-layered structure of
the network allows for the extraction of intricate feature information, which can be built from the original input. Currently, most researchers propose the estimation of batteries’ SOC using
DNN-based techniques, such as long short-term memory (LSTM) and a gated recurrent unit (GRU) [
]. However, deep learning algorithms may overfit if they are not trained on high-quality datasets. To tackle this problem, regularization methods such as L1/L2 regularization and dropout layers are
employed. Nevertheless, implementing these techniques necessitates specialized hardware, considerable computational resources, energy, and extensive fine-tuning. To mitigate these challenges, hybrid
strategies that combine deep learning with other techniques are often employed. On the other hand, the growing complexity of battery systems coupled with the need for safety, optimization,
sustainability, and cost-effectiveness has driven the demand for digital twins in the battery industry. To achieve energy autonomy and sustainability, renewable energy sources, storage systems,
and energy management systems [
] must be efficiently integrated. In this context, our paper suggests a framework for digital twins that has the potential to transform battery management systems. This framework has the ability to
extend battery life, enhance energy consumption intelligence, and align with evolving energy management technologies.
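To make the recurrent architectures mentioned above concrete, the sketch below implements a single step of a scalar (one-unit) GRU in plain Python. It is a deliberately minimal illustration of the gating mechanism that lets GRUs (and, with an additional cell state, LSTMs) track temporal dependencies in voltage, current, and temperature sequences; the weight values are hypothetical, and a practical estimator would use a deep learning framework with vector-valued states and trained parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One scalar GRU step: x is the input sample, h the previous hidden state.

    p holds the (hypothetical) weights: w* act on the input, u* on the
    hidden state, b* are biases.
    """
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])                # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])                # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])   # candidate state
    return (1.0 - z) * h + z * h_cand                               # gated blend

# Toy weights, illustrative only (not trained values).
params = {"wz": 0.5, "uz": -0.3, "bz": 0.0,
          "wr": 0.8, "ur": 0.1, "br": 0.0,
          "wh": 1.2, "uh": -0.4, "bh": 0.1}

h = 0.0
for x in [0.2, -0.1, 0.4, 0.0]:   # e.g. a short run of normalized current samples
    h = gru_step(x, h, params)
```

Because each step is a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays in (-1, 1); an output layer (for instance a sigmoid) would map it to an SOC estimate in [0, 1].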
A digital twin is a virtual model that replicates a physical object or system in great detail and in real time. It captures the characteristics, behavior, and performance of its real-world
counterpart and is utilized for analysis, monitoring, and simulation [
]. Digital twins are employed in multiple industries, such as manufacturing [
] and healthcare, to enhance comprehension, efficiency, and decision making. Digital twins are crucial for advancing battery technology and addressing challenges related to energy storage, control,
and management. In a recent study [
], researchers introduced a control technique called the deep deterministic policy gradient (DDPG) that used reinforcement learning to improve the performance of maximum-power-point tracking (MPPT)
controllers in photovoltaic (PV) energy systems. By utilizing a digital twin for simulation training, the RL approach was able to operate effectively under different weather conditions. The study
found a significant improvement in the real-time total power output, with a 51.45% increase and a 24.54-times-faster settling time compared to conventional P&O (perturb and observe) controllers. Digital twins offer several
benefits for predicting the SOC. Firstly, they can generate synthetic data that accurately reflect the battery’s behavior, even in extreme or rare situations. Secondly, they can speed up the
development of SOC prediction models by providing a virtual platform for training and testing. Moreover, they can improve the reliability of SOC prediction models by creating synthetic data that
cover a wider range of operating conditions. Lastly, digital twins can produce a realistic dataset without compromising sensitive information. To ensure authenticity, statistical distributions or
generative models can be used to manage the random processes involved in data generation.
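As a simple illustration of managing the random processes behind synthetic data generation, the sketch below draws a mean-reverting (Ornstein-Uhlenbeck-style) current profile and labels it with coulomb-counted SOC values. This is a hand-rolled statistical stand-in, not the TS-GAN itself; the capacity, time step, and noise parameters are illustrative.

```python
import random

def synth_profile(n_steps, capacity_ah=2.0, dt_s=1.0, seed=42):
    """Generate a synthetic (current, SOC) time series.

    The current follows a mean-reverting random walk around a nominal
    1 A discharge; the SOC label is obtained by coulomb counting.
    """
    rng = random.Random(seed)          # seeded, so the dataset is reproducible
    current, soc = 1.0, 1.0            # start at 1 A discharge, full charge
    currents, socs = [], []
    for _ in range(n_steps):
        # Mean reversion toward 1 A plus Gaussian noise.
        current += 0.1 * (1.0 - current) + rng.gauss(0.0, 0.05)
        soc -= current * dt_s / (capacity_ah * 3600.0)   # coulomb counting
        soc = min(max(soc, 0.0), 1.0)                    # keep SOC physical
        currents.append(current)
        socs.append(soc)
    return currents, socs

I, soc = synth_profile(600)   # ten minutes of 1 Hz samples
```

Seeding the generator is one way to "manage" the randomness: the same seed reproduces the same synthetic profile, while different seeds broaden the coverage of operating conditions.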
Our digital twin framework incorporates a TS-GAN to produce synthetic battery data, which greatly improves the capabilities of our system. GANs are a valuable tool for addressing data limitations in machine learning models: they can enhance small datasets, protect privacy, produce outliers, balance class distributions, fill in missing data points, create realistic testing environments, reduce the need for exhaustive data collection, and overcome data bias. By generating synthetic data that closely resemble real-world measurements, TS-GANs increase the diversity and robustness of the resulting models while remaining cost- and time-effective. This innovative approach helps us overcome the challenge of limited
and unreliable real-world battery data. By creating synthetic data that closely resemble real-world behavior, our training dataset is expanded, resulting in more accurate predictions of the SOC and
improved battery management. Briefly, this paper proposes a digital twin framework for SOC prediction in EVs using a TS-GAN. The framework can generate synthetic battery data that accurately reflect
the battery’s behavior. In comparison with previous non-digital-twin methodologies, the accuracy of estimating the SOC of batteries improved with the integration of digital twin technology and the
TS-GAN. The method used in our study outperforms previous methods by utilizing high-quality data and a combination of the digital twin framework and a TS-GAN. This integration not only addresses the
inherent limitations of utilizing limited and unreliable real-world battery data but also leads to a substantial improvement in the performance and capabilities of machine learning models.
Beyond overcoming data constraints, the TS-GAN plays a pivotal role in augmenting the diversity, robustness, and accuracy of SOC predictions. It provides a practical solution to the pressing need to
estimate the SOC more accurately in EVs and battery management systems. In addition to marking a significant advancement in the field, this work also paves the way for future developments at the
intersection of digital twin technology and sustainable energy solutions. This study is structured as follows: Section 1 presents the research background and motivation, Section 2 discusses related work, Section 3 outlines the system framework encompassing offline modeling and a digital twin, Section 4 presents the outcomes of the simulation, Section 5 provides an in-depth discussion, and Section 6 presents concluding remarks.
1.1. Motivation
LIBs are the most common type of battery used in EVs. However, they are susceptible to degradation over time, which can lead to decreased performance, greater safety risks, and a shorter lifespan. It
is possible for a battery to degrade in various ways, including by experiencing low capacity, overheating, unstable voltage, high internal resistance, the risk of overcharging or over-discharging,
reduced lifespan, and thermal runaway. However, these risks can be minimized by employing battery management systems that constantly monitor and regulate the battery’s condition through predictive
algorithms, ensuring its safe operation despite its degradation. An EV requires a BMS to ensure safe and efficient operation by continuously monitoring and managing the battery’s health. The BMS
prevents overcharging, over-discharging, overheating, and imbalances. Additionally, it predicts maintenance requirements, thus greatly contributing to the overall safety and performance of EVs.
Therefore, it is essential to develop effective methods for monitoring and managing lithium-ion batteries in EVs.
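For reference, the classical coulomb-counting heuristic that many traditional BMSs rely on fits in a few lines. The sketch below is a minimal illustration (the capacity and sampling interval are hypothetical); its well-known weakness, which motivates the data-driven methods discussed here, is that sensor bias and an uncertain initial SOC accumulate as unbounded drift over time.

```python
def coulomb_count(soc0, currents_a, dt_s, capacity_ah):
    """Integrate measured current to track SOC (positive current = discharge)."""
    soc = soc0
    trace = []
    for i in currents_a:
        # Charge removed this step, as a fraction of total capacity.
        soc -= i * dt_s / (capacity_ah * 3600.0)
        trace.append(soc)
    return trace

# One hour of a constant 2 A discharge on a 2 Ah cell drains it completely.
trace = coulomb_count(soc0=1.0, currents_a=[2.0] * 3600, dt_s=1.0, capacity_ah=2.0)
```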
Traditional BMSs are typically based on simple heuristics and do not take the complex electrochemical processes that govern battery behavior into account. As a result, they can be inaccurate and
unreliable, especially under challenging conditions such as high temperatures or fast charging. Recent advances in machine learning and artificial intelligence (AI) have made it possible to develop
more sophisticated methods for monitoring and managing LIBs. These methods can learn the complex relationships between battery parameters and use this knowledge to make accurate predictions about
battery behavior. In [
], a BPNN was trained using three distinct classifications of input features, along with random numbers of weights, biases, and hidden neurons. The resulting model was able to attain an average SOC
error of 3.8% during the US06 (one of the commonly used drive cycles assessing the capabilities of vehicles—especially electric ones; this is known as a standardized testing method) drive cycle at a
temperature of 25 °C. An RNN based on the NARXNN architecture was proposed in [
] for the accurate estimation of the lithium-ion battery SOC. The NARXNN technique emphasizes global feedback and backpropagation learning in order to improve robustness and computational
intelligence. LiFP and lithium titanate (LiTO) were used in dynamic charge and discharge current profiles to experimentally test the effectiveness of the method in SOC estimation. The results showed
that NARXNN was accurate and had a low computational cost. The integration of digital twins and AI into battery management systems can become a game changer in battery management intelligence.
Digital twin integration can revolutionize the real-time monitoring and optimization of lithium-ion batteries by leveraging state-of-the-art techniques for accurate SOC estimation by creating virtual
replicas that mirror real-world batteries. Digital twins can predict the SOC, health, and performance in real time to improve energy efficiency, extend battery life, and prevent failures. In recent
studies, digital twins were utilized to optimize energy consumption and facilitate proactive maintenance. The main objective of the authors of [
] was to create a digital twin specifically for hydropower turbines. They emphasized the importance of dynamic modeling, data interfaces, and adaptive learning for accurately representing system
dynamics. The researchers utilized the recursive least squares algorithm as an adaptive learning method to develop hydropower turbine models for the DT. The successful application of this approach
was demonstrated through its implementation in a pilot system at the Norwegian University of Science and Technology, which yielded positive adaptive learning and validation outcomes. To ensure
accurate predictions and enable responsive battery management, one of the key research questions in this area was that of how to effectively model and simulate complex battery behaviors within a
digital twin.
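The recursive least squares (RLS) scheme used for adaptive learning in the hydropower digital twin study can be illustrated with a scalar example. The sketch below estimates an assumed ohmic resistance R from streaming current and voltage-drop pairs; a real battery digital twin would estimate a vector of equivalent-circuit parameters, but the structure of the update (gain, innovation, covariance) is the same.

```python
def rls_scalar(pairs, theta0=0.0, p0=100.0, lam=0.99):
    """Recursive least squares for the model v = theta * i.

    pairs: iterable of (i, v) measurements; lam is the forgetting factor,
    which lets the estimate track slowly drifting parameters.
    """
    theta, p = theta0, p0
    for i, v in pairs:
        k = p * i / (lam + i * p * i)      # gain
        theta += k * (v - i * theta)       # correct with the innovation
        p = (p - k * i * p) / lam          # update the (scalar) covariance
    return theta

# Noiseless data generated by a hypothetical 50 mOhm resistance.
true_r = 0.05
data = [(i, true_r * i) for i in (1.0, 2.0, 0.5, 1.5, 3.0)]
r_hat = rls_scalar(data)
```

With noiseless data the estimate converges to the true resistance within a handful of samples; with noisy data the forgetting factor trades tracking speed against variance.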
It is important to note, however, that the low availability of real-time data poses one of the major challenges of utilizing digital twin technology for BMSs. A digital twin's accuracy is limited in many cases by the inability to collect data directly from a physical battery in real time, which hinders the timely decisions that would otherwise improve battery management.
Various methods have been proposed to supplement digital twins for lithium-ion batteries, including the use of historical battery data and the creation of synthetic data through computer simulations.
Developing efficient battery management systems that conserve energy is crucial for the advancement of the time-series analysis of lithium-ion batteries. The battery industry faces growing
complexities and demands for enhanced safety, optimization, sustainability, and cost-effectiveness. Developing batteries with higher energy density—particularly for electric vehicles—is a significant
challenge that requires advanced materials and improved battery chemistry. Digital twins serve as virtual replicas that are capable of propelling battery technology forward by improving SOC
anticipation and creating realistic datasets without compromising sensitive information. Incorporating a TS-GAN into the SOC prediction process can address the challenge of limited access to
real-world battery data by generating synthetic data that closely mimic real-world behavior. This comprehensive SOC prediction system bridges the gap between physical and digital realms to contribute
to optimal battery performance, safety, and longevity. It has the potential to revolutionize the way in which lithium-ion batteries are managed in electric vehicles, thus ensuring a more sustainable and effective future for EVs through precise SOC prediction, real-time monitoring, lifespan prediction, improved battery management, synthetic data production, and privacy preservation.
1.2. Contribution of Our Work
This section outlines the key contributions of our work, which advance the field of battery management systems by integrating digital twins and AI. This integration
brings about a new era of energy-efficient smart battery management and addresses crucial research questions related to energy consumption. Our system utilizes advanced techniques for accurate SOC
estimation by creating virtual replicas that predict the SOC, health, and performance in real time. This leads to improved energy efficiency, extended battery life, and the prevention of failures. We
also introduce methods for generating supplementary data to enhance the accuracy and timeliness of our digital twin’s predictions, which aids in responsive battery management and extends the
capabilities of our system in optimizing energy consumption. Our system has the potential to revolutionize the management of LIBs in EVs, thus contributing to optimal energy performance, enhanced
safety, and prolonged battery longevity. We also utilize TS-GANs to generate synthetic data that closely mimic real-world behavior to expand our training dataset and lead to more accurate predictions
of the SOC. Our approach aligns with the goals of sustainable transportation and reduced greenhouse gas emissions, and promotes responsible battery usage, financial savings, and environmental
advantages. Overall, our contributions advance our understanding of LIB management in EVs, offering an innovative, comprehensive solution that fosters a more sustainable and energy-efficient future
for electric vehicles and the broader energy landscape.
2. Related Work
This section provides a summary of previously published literature on the estimation of the SOC of LIBs and the utilization of digital twins that are integrated into battery management
systems for SOC estimation. Due to its critical role in minimizing energy consumption in electric vehicles, the accurate estimation of the SOC has gained significant attention in recent years.
A major goal of this research area is to provide manufacturers and BMS designers with the tools that they need to create high-performance batteries. While advances have been made in estimation
methodologies for the SOC, generative adversarial networks (GANs) stand out as a promising approach. Through this innovative technique, synthetic time-series data are created that simulate battery behavior in real-life settings, thus ultimately improving the accuracy of SOC estimations. Achieving precise estimation not only facilitates the production of
low-energy batteries but also allows the fine-tuning of battery management strategies and the extension of battery life. The purpose of this section is to contribute to a greater understanding of
BMSs through the synthesis of existing research and the use of this novel approach to the estimation of SOC.
2.1. SOC Estimation Based on LSTM Networks in BMSs
LSTM networks are increasingly used to estimate the SOC in LIBs due to their ability to capture complex battery behavior in real time. They excel at modeling the nonlinear relationships among variables such as voltage, current, and temperature. Combining LSTM networks with other techniques further enhances accuracy. This approach is vital for optimizing
battery performance in electric vehicles and renewable energy systems. A new model for SOC estimation in LIBs was proposed in [
]. LSTM cells were used to model state and process information together. A two-stage pre-training strategy was used to improve the feature-learning capabilities and resolve dynamic differences
between loading profiles and sampling frequencies. Despite the variable sampling frequencies and unknown loading profiles, the proposed method achieved high accuracy in two cases with
different batteries.
In [
], an encoder–decoder framework with bidirectional LSTM networks was used to capture sequential patterns in battery behavior. The bidirectional LSTM networks improved the model’s ability to capture
long-term dependencies from both past and future data points, resulting in improved estimation accuracy. The method was evaluated on publicly available battery datasets—particularly those with
dynamic loading profiles—and demonstrated precise SOC estimation across diverse temperature conditions with mean absolute errors as low as 1.07%. This approach significantly enhanced the reliability
and applicability of battery management systems for real-world scenarios with varying ambient conditions. A more comprehensive evaluation of the practical usefulness of this method can be achieved by
examining its applicability to a broader range of battery types, as battery characteristics can differ significantly. The authors of [
] presented a new neural network called the uncorrelated sparse autoencoder with long short-term memory (USAL) for estimating the SOC in battery-powered machines over a long period of time. USAL
combined the benefits of sparse autoencoders and LSTM networks by using a multi-task learning approach to penalize for high multi-collinearity between encodings and identify long and short temporal
correlations between them. Using three publicly available accelerated aging datasets, the network outperformed existing models after training on five initial charge–discharge cycles. Overall, USAL
showed promise for identifying patterns relevant to long-term SOC estimation that are typically missed by other methods, even with limited initial battery history. In [
], a novel technique was recommended for determining the SOC of large-scale LIB storage systems. The method employed an LSTM neural network model that could handle a nonlinear battery model and the
uncertainties involved in the estimation process. The LSTM model surpassed the feed-forward neural network (FFNN) and deep-feed-forward neural network (DFFNN) models on three datasets. Real-world
data from the Al-Manara PV power plant were used to train the model, which consistently produced precise SOC calculations with a mean squared error (MSE) of less than 0.62%. In contrast, the FFNN
and DFFNN models exhibited MSEs ranging from 5.37% to 9.22% and 4.03% to 7.37%, respectively. In order to enhance the accuracy of SOC estimation in LIBs, a new technique called PSO-LSTM was
introduced [
]. This approach combined an LSTM neural network with particle swarm optimization (PSO). The PSO algorithm was utilized to optimize the LSTM parameters to align them with the unique characteristics
of a battery for optimal SOC prediction. Moreover, random noise was added at the input layer to improve the network’s robustness against interference. The experimental results demonstrated that the
PSO-LSTM framework consistently achieved a small error margin of only 0.5% when predicting the actual state of charge. The authors of [
] introduced an optimization technique for enhancing SOC estimation using LSTM networks. The novel approach incorporated fractal derivatives—specifically, the improved Borges derivative (a local
fractional-order derivative that can be used to describe the dynamic behavior of complex systems)—into LSTM parameter optimization. This integration extended the concept to adaptive momentum
estimation (Adam) algorithms, replacing integer-order derivatives with improved Borges derivatives. A comparative analysis of the improved Borges derivative’s speed relative to the conventional
integer-order derivative was conducted. That study also proposed an order-tuning method to effectively adjust the parameter training speed. The authors of [
] employed an LSTM recurrent neural network architecture to directly estimate the SOC of a battery by observing variables such as temperature, current, and voltage. This approach eliminated the need
for utilizing inference systems from the Kalman filter family. The results demonstrated that the average mean absolute error (MAE) was consistently reduced to less than 1% across multiple test
scenarios, underscoring the potential of deep learning in SOC estimation. This achievement was significant within the domain of cell state-of-charge estimation, as it replaced conventional estimation
methods that were reliant on cell circuit modeling and fine-tuning SOC estimation accuracy through the use of Kalman filter family techniques. Additionally, the researchers made the battery test data
publicly available to facilitate further research. The SOC estimation methods discussed in this section are largely effective.
However, our proposed method has the following advantages:
• It uses a more robust LSTM architecture with four hidden layers. This allows it to capture more intricate temporal dependencies in the data, resulting in more accurate predictions;
• It uses three different optimizers (SWATS, Adam, and SGD) during training. This allows the model to select the most effective optimizer for the data, resulting in faster convergence and better accuracy;
• It is less sensitive to noise and outliers in the data;
• It compares the performance of LSTM models trained with different optimizers against a GRU network. This provides a more comprehensive understanding of the strengths and weaknesses of each
architecture, allowing the researcher to choose the best one for the task at hand.
As a result, the proposed method is more robust and accurate in general. Battery management systems and other applications that require accurate SOC estimation could benefit from this approach.
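To illustrate the optimizer comparison, the sketch below implements the plain SGD and Adam update rules on a toy scalar objective. This is not the paper's training code; a SWATS-style strategy would begin with the Adam branch and switch to SGD once the Adam step sizes stabilize. The hyperparameters are the common defaults, chosen here purely for illustration.

```python
import math

def minimize(opt, steps=5000, lr=0.01, x0=0.0):
    """Minimize f(x) = (x - 3)^2 with either 'sgd' or 'adam'."""
    x, m, v = x0, 0.0, 0.0
    b1, b2, eps = 0.9, 0.999, 1e-8
    for t in range(1, steps + 1):
        g = 2.0 * (x - 3.0)                    # gradient of the toy objective
        if opt == "sgd":
            x -= lr * g                        # vanilla gradient step
        else:  # adam
            m = b1 * m + (1 - b1) * g          # first-moment (momentum) estimate
            v = b2 * v + (1 - b2) * g * g      # second-moment estimate
            m_hat = m / (1 - b1 ** t)          # bias corrections
            v_hat = v / (1 - b2 ** t)
            x -= lr * m_hat / (math.sqrt(v_hat) + eps)
        # A SWATS-style schedule would switch from 'adam' to 'sgd' about here.
    return x

x_sgd = minimize("sgd")
x_adam = minimize("adam")
```

On this smooth quadratic both optimizers approach the minimum at x = 3; Adam typically makes faster early progress on poorly scaled problems, while SGD settles more tightly once near the optimum, which is exactly the trade-off SWATS exploits.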
2.2. Integration of Digital Twins into BMSs
In recent years, the use of cyber–physical systems—also known as digital twins—has become more widespread due to the availability of low-cost sensors and the deployment of Internet of Things (IoT)-enabled devices. To create a virtual representation of a physical system, remote sensing is combined with cloud-based models. Incorporating cloud technology into a BMS facilitates the real-time
monitoring and control of battery performance, thus improving energy storage efficiency and providing data-driven insights.
The authors of [
] tackled the difficulties of managing batteries by combining edge and cloud technologies to monitor and regulate charging and discharging. This approach resolved problems that arise with solely
edge-based systems, such as limited computational power, speed, and data storage capacity. The system involved a web interface for remote monitoring and control, and the prototype successfully
executed battery control commands while ensuring precise data transmission between the edge and cloud, leading to the comprehensive storage of historical data.
Digital twins of batteries can be used to develop multi-scale intelligent management systems. However, there are challenges such as the need for multiphysics models, nano-/microscale characterization, and low-latency communication networks. Effective data pre-processing and increased data security must also be considered. Battery control and lifetime estimation have
shifted from an empirical approach to model-driven techniques, with data-driven and machine learning (ML) approaches gaining popularity. For instance, the authors of [
] highlighted the significance of effective lithium-ion battery management for a low-carbon future, particularly in applications such as electric vehicles and grid-scale energy storage. The article
suggested the possibility of improving real-world control by combining knowledge of battery degradation, modeling tools, diagnostics, and new machine learning techniques to create a digital twin of a
battery. This cyber–physical system allowed for the closer integration of the physical and digital aspects of batteries, resulting in smarter control and a longer lifespan and providing a useful
framework for future intelligent and interconnected battery management. Nonetheless, challenges in real-world applicability persist. Digital twin technology in EV battery management systems offers
advantages such as the real-time monitoring, analysis, and simulation of battery behavior, which enhance the SOC estimation accuracy. Factors such as temperature variations, aging effects, and load
fluctuations are incorporated, contributing to prolonged battery life, optimized energy management, and increased confidence in EV range estimation. An efficient and informed decision-making process
is enabled by integrating digital twin and AI technologies into BMS SOC estimation, thus promoting a more reliable and efficient EV ecosystem. The authors of [
] suggested a combination of the digital twin approach and parameter estimation methods to estimate the online SOC of a battery pack. They devised a unique approach that combined offline parameter
estimation with recursive least squares for online updates to estimate battery parameters. Monitoring the SOC and offering diagnostics are possible using this method. The results obtained from EV
battery packs demonstrated that this approach was effective in accurately estimating parameters and the SOC. The authors of [
] proposed a digital twin platform for the degradation assessment of lithium-ion battery packs in spacecraft. The platform was composed of visual software and an assessment unit. Through remote
sensing links, the visual software received and analyzed real-time data from the battery pack. Through the use of models and algorithms, the assessment unit determined the battery pack’s state of
charge (SOC), state of health (SOH), and remaining useful life (RUL). This study used a Kalman filter–least squares support vector machine (KF-LSSVM) for SOC estimation and an autoregressive particle
filter (AR-PF) for the evaluation of the SOH. Based on the results, the platform was capable of accurately estimating battery packs’ SOC and RUL. The authors of [
] described a cloud-based management system for battery systems. By seamlessly transmitting battery data to the cloud, the system aimed to enhance batteries’ computational power and data storage
capacity. Afterward, battery diagnostic algorithms were used to determine the charge level and aging of batteries. Using equivalent circuit modeling and cloud-based SOC and SOH estimations, a digital
twin of the battery system was created. An adaptive extended H-infinity filter was proposed for accurate state-of-charge estimation for lithium-ion and lead–acid batteries. In addition, a particle
swarm optimization algorithm was used for state-of-health estimation to monitor the capacity and power fading during aging. The authors of [
] recommended a performance degradation evaluation model for LIBs in dynamic operating conditions. Using a digital twin, the model calculated the actual capacity of the battery. The digital twin
model was generated using the LSTM algorithm, which utilized a health indicator (HI) as a temporal measurement. The HI was derived from measurable parameters to represent the battery’s performance
degradation. The results of their experiments demonstrated that the proposed model could precisely estimate the actual capacity of the battery under dynamic operating conditions. The authors of [
] discussed challenges such as range anxiety and slow charging. A special emphasis was placed on the incorporation of digital twin technology in order to identify critical gaps and highlight advanced
technologies. They suggested a thorough plan for creating an effective BMS, facilitating live monitoring, and tackling battery recycling in a complete and unified system. The authors of [
] investigated the feasibility of using a digital twin for battery cells. They used a systematic approach and applied it to a Doyle–Fuller–Newman model. There are several benefits of using a battery
DT, including improved representation, performance estimation, and behavioral predictions based on real data. To update the battery model parameters, they used PyBaMM, a Python package. Using PyBaMM
to develop a digital twin of a battery can be effective, especially for accurately updating parameters during battery cycles. In [
], the authors suggested a digital twin model for BMSs that can forecast battery characteristics such as the temperature, current, and state of charge by solely measuring voltage. They employed
linear and multi-linear regression models to make predictions. The experimental outcomes indicated that this method attained a high level of prediction accuracy (over 90%) for variables such as the
current, maximum cell voltage, and state of charge. However, the results of maximum/minimum cell temperature prediction were not as impressive. To ensure the dependability and safety of lithium-ion
batteries, it is essential to have precise SOH estimation. However, the existing methods that used digital twins necessitated complete charge/discharge cycles, which is not practical for situations
with partial discharges. To overcome this challenge, in [
], a new digital twin framework that allowed on-the-go battery SOH sensing and updates to the physical battery model was proposed. This framework included energy-discrepancy-aware cycling
synchronization, a time-attention SOH estimation model, and a data reconstruction method based on similarity analysis. Comprehensive benchmark tests showed that this solution provided real-time SOH
estimation accuracy within a 1% error range for most sampling instances during ongoing cycles. The critical role of machine learning (ML) models in optimizing battery thermal management system (BTM)
design was emphasized by the authors of [
], as they provided a thorough review of the literature on ML-based BTMs. A digital-twin-based method for enhancing BTMs was also proposed.
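Several of the platforms above lean on Kalman-filter-family estimators for the SOC. The scalar sketch below shows the basic fusion idea: a coulomb-counting prediction step corrected by a hypothetical voltage-derived SOC measurement. Real systems use extended or H-infinity variants over full equivalent-circuit states; the capacity, noise covariances, and the simulated measurement here are illustrative only.

```python
def kf_soc(soc0, steps, q=1e-6, r=1e-2, current_a=2.0, dt_s=1.0, cap_ah=2.0):
    """1-D Kalman filter: predict SOC by coulomb counting, correct with a
    voltage-based SOC measurement (simulated here as the true SOC)."""
    soc_est, p = soc0, 0.25           # deliberately wrong initial SOC, high variance
    soc_true = 1.0
    for _ in range(steps):
        # Predict: both truth and estimate discharge by coulomb counting.
        d = current_a * dt_s / (cap_ah * 3600.0)
        soc_true -= d
        soc_est -= d
        p += q                        # process noise inflates the variance
        # Correct: measurement z from a hypothetical voltage-to-SOC map.
        z = soc_true
        k = p / (p + r)               # Kalman gain
        soc_est += k * (z - soc_est)  # innovation update
        p *= (1.0 - k)                # posterior variance
    return soc_est, soc_true

est, true = kf_soc(soc0=0.5, steps=200)
```

Even starting from a badly wrong initial SOC, the correction step pulls the estimate onto the true trajectory within a few hundred samples, which is why filter-based fusion is a common complement to pure coulomb counting.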
Our approach integrated and monitored battery management in a holistic manner. By incorporating digital twin technology, we surpassed existing methodologies in robustness and real-time accuracy and created a comprehensive framework for EV battery management systems.
2.3. GANs for Generating Synthetic Data
Artificial intelligence has made great strides toward producing data that mimic real-life patterns. One of the most important techniques in this area is the use of GANs, which allow for the creation
of synthetic data that are very similar to actual datasets. The potential of GANs is vast, as they can generate a wide variety of datasets that accurately represent different industries, such as
finance, healthcare, and manufacturing, without compromising sensitive information. Additionally, they provide a unique opportunity to delve into the complexities that drive real-world phenomena.
Digital twins use GANs to obtain high-fidelity, real-time data in order to perform precise simulations and predictions, which rely on accurate and up-to-date data. Digital twin simulations and
predictions are made more accurate by combining the generator and discriminator components of GANs. This approach also enables digital twins to operate with up-to-date, high-quality data even in
scenarios where access to real-time data is limited.
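The adversarial objective behind the generator/discriminator interplay can be written compactly. The sketch below computes the standard discriminator loss and the non-saturating generator loss from discriminator output probabilities; it shows the objective only, not a trained TS-GAN, and the probability values are illustrative.

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator loss: score real samples high and generated samples low."""
    return -(sum(math.log(p) for p in d_real) / len(d_real)
             + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: push D's score on fakes toward 1."""
    return -sum(math.log(p) for p in d_fake) / len(d_fake)

# A confident, correct discriminator incurs a lower loss than a guessing one.
confident = d_loss([0.9, 0.95], [0.1, 0.05])
guessing = d_loss([0.5, 0.5], [0.5, 0.5])
```

Training alternates gradient steps on these two losses: the discriminator learns to separate real battery traces from synthetic ones, while the generator learns to close that gap, which is what drives the synthetic data toward realistic behavior.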
Gene expression datasets in cancer research often have a small sample size due to privacy constraints, making it difficult to accurately classify cancer types. Data augmentation can increase the
dataset size by generating synthetic data points. GANs are capable of generating synthetic data, and a modified generator GAN (MG-GAN) generates synthetic data that conform to a Gaussian
distribution [
]. In comparison to traditional data augmentation methods, an MG-GAN significantly improved cancer type classification accuracy for a breast cancer patient gene expression dataset. A significant
reduction in error function loss was achieved with the MG-GAN (from 0.6978 to 0.0082), demonstrating its high sensitivity. In [
], a framework for generating synthetic data that mimicked the joint distribution of variables within original electronic health record (EHR) datasets while managing ambiguities in anonymization was
proposed. The methodology leveraged conditional generative adversarial networks (CGANs) to synthesize data while prioritizing patient confidentiality, resulting in a model that was named ADS-GAN.
ADS-GAN’s ability to replicate joint distributions and uphold patient privacy was validated through rigorous evaluations against real datasets. In [
], synthetic data creation was emphasized as a growing necessity in financial services. The main areas of focus were the generation of genuine artificial datasets, assessment of the resemblance
between actual and produced information, and maintenance of confidentiality throughout the creation procedure. The financial sector faces unique challenges related to regulations and privacy
mandates. This paper addressed these challenges and aimed to establish a shared framework and terminology for generating synthetic financial data.
There are numerous studies proposing GAN-based algorithms for generating synthetic data for a variety of industrial applications, including healthcare and finance, but GAN applications in the realm
of digital twins remain unexplored. An innovative approach within the digital twin domain is the generation of real-time synthetic data, which this study pioneers. The study also introduces a secure
framework that combines Digital Twin technology with a cloud environment specifically for BMSs. The following section provides more details on the complexities of the proposed framework.
3. Proposed System Framework
Our proposed system provides a comprehensive solution for monitoring LIBs in EVs by addressing both physical and digital aspects. The system consists of two main components: the physical side, which encompasses offline modeling, and the digital side, which includes real-time monitoring and prediction (Figure 1).
Physical Side: The physical side of our system involves a cloud-based IoT network that connects and monitors all EVs equipped with LIBs on the road. According to [
], today, data from various sources and IoT devices are generated and transmitted over cloud-based networks. Virtualized resources, such as servers, storage, databases, and networking, are available
on demand through cloud computing. We consider that each EV is equipped with a BMS responsible for monitoring key battery parameters, such as the voltage, temperature, and SOC. This ensures optimal
battery performance, safety, and longevity. Additionally, we employ machine learning algorithms to train an offline model, which accurately represents a physical battery system. A powerful LSTM
network is used to train the offline model and incorporates data from four types of LIBs (B0005, B0006, B0007, and B0018). The physical battery system, along with the offline model, allows precise
SOC prediction in the digital twin. This ensures that the virtual representation of the battery closely resembles the real one (Figure 2).
Digital Side: The digital aspect of our system involves a cloud-based digital twin that acts as a virtual version of a lithium-ion battery (
Figure 3
). This digital twin allows for the real-time monitoring and prediction of battery activity, thus facilitating efficient battery management. To address the challenge of limited access to real-time
battery data, we employ a TS-GAN to generate synthetic data that mimic real-time battery performance. These synthetic data supplement the physical infrastructure and enhance the digital twin’s
ability to accurately predict the SOC. It generates additional data samples that are similar to the real data, thus expanding the training dataset for improved model generalization. Additionally, it
mitigates the issue of mode collapse by diversifying the generated samples and creating a more balanced dataset when combined with real data. Furthermore, the TS-GAN addresses class imbalance by
generating synthetic samples for minority classes. Lastly, it enables the fine-tuning of pre-trained models with synthetic data, making it valuable when limited task-specific data are available,
thereby reducing the need for collecting a large labeled dataset, which can be costly and time-consuming. The digital twin utilizes these synthetic data to provide real-time information in the
virtual environment, thus improving its SOC prediction accuracy. The digital twin’s SOC prediction results are then compared with the offline model’s SOC prediction results to enable
performance evaluation.
Combined System: By seamlessly integrating physical and digital components, our system offers an integrated approach to effective LIB monitoring in EVs. This framework enables the precise analysis
and prediction of battery behavior, thus contributing to optimal performance, safety, and longevity. Our solution paves the way for a more sustainable and efficient future for electric vehicles. All
of the processes mentioned above are demonstrated in Algorithm 1.
Algorithm 1: Battery Monitoring System
Historical data: X
Manufacturer’s SOC estimates: Y
Real-time battery data: $X ′$ (generated by the TS-GAN)
SOC prediction results from the offline model: Y (Trained Model)
Real-time SOC predictions from the digital twin: $Y ′$
Step 1: Offline Modeling
Train an offline model (M) using historical data (X) and the manufacturer’s SOC estimates (Y).
Input: $X , Y$
Output: Offline model (M)
Step 2: Online SOC Prediction by the Digital Twin
Generate real-time synthetic battery data using the time-series generative adversarial network (TS-GAN).
Input: X
Output: $X ′$ (Synthetic data generated by the TS-GAN)
Perform online SOC prediction using the digital twin.
Input: $X ′$
Output: $Y ′$
Step 3: Evaluation of Online Prediction in Comparison with the Offline Model
Compare the digital twin’s SOC prediction results ($Y ′$) with the offline model’s SOC prediction results (Y) to evaluate performance.
Input: $Y , Y ′$
Output: Performance evaluation metrics (e.g., accuracy, precision, recall)
3.1. Modeling of the Physical Battery System
This section provides an overview of the physical battery system and its offline model, which are important for ensuring that the battery performs properly, is safe to use, and lasts for a long time.
By utilizing the offline model, it is possible to accurately monitor and predict battery behavior. The offline model utilizes historical data from battery manufacturers and physical sensors to
establish a comprehensive dataset that reveals the relationships between physical battery parameters and their associated behaviors. This dataset enables the offline model to accurately predict
battery behavior and facilitate proactive decision making. To develop the offline model, data are collected from physical sensors and battery manufacturers. Manufacturer-provided information,
including battery specifications and manufacturing details, offers invaluable insights into battery chemistry, electrode materials, cell composition, and other relevant parameters. A physical battery
system can be represented using LSTM networks [
], which are a form of recurrent neural network (RNN), through the implementation of the offline model. LSTM networks excel at capturing temporal dependencies and patterns in time-series data, making
them suitable for analyzing battery behavior.
During the training process, three distinct optimizers were examined, namely, SWATS (Switching from Adam to SGD), Adam, and SGD (stochastic gradient descent). Optimizers play a crucial role in
updating a model’s weights and biases. The SWATS technique [
] is an innovative optimization approach that merges adaptive techniques such as Adam with stochastic gradient descent (SGD) to tackle the problem of inadequate generalization during deep learning
training. By utilizing a triggering condition linked to the projection of the steps of Adam on the gradient subspace, SWATS seamlessly transitions from Adam to SGD. Through experiments conducted on
different benchmarks, SWATS was proven to significantly reduce the generalization gap between SGD and Adam in various tasks, including image classification and language modeling. Adam is an optimizer
that adapts learning rates (LRs) based on gradient moments, while SGD updates parameters by using gradients from the loss function.
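The difference between the two baseline update rules can be sketched in a few lines of NumPy. The toy quadratic objective and the learning rates below are illustrative only, not the paper's training setup; SWATS would simply run the Adam rule until its switching condition fires and then continue with the SGD rule:

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    """Plain SGD: step against the raw gradient of the loss."""
    return w - lr * grad

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: per-parameter step sizes derived from gradient moments."""
    m = b1 * m + (1 - b1) * grad        # first moment (running mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment (running mean of squares)
    m_hat = m / (1 - b1 ** t)           # bias correction for the warm-up phase
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize the toy loss f(w) = ||w||^2, whose gradient is 2w, with each rule.
w_sgd = np.array([1.0, -2.0])
w_adam = w_sgd.copy()
m = v = np.zeros_like(w_adam)
for t in range(1, 51):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd)
    w_adam, m, v = adam_step(w_adam, 2 * w_adam, m, v, t)

print(np.linalg.norm(w_sgd), np.linalg.norm(w_adam))  # both shrink toward 0
```

Note how Adam's effective step depends on the ratio of the two moment estimates, while SGD's step is proportional to the raw gradient; this is the behavioral difference that SWATS exploits by switching between the two.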
The performance of the LSTM-based offline model trained with these optimizers was compared with that of a model trained using GRU networks, another popular RNN variant. This comparative analysis
revealed the strengths and weaknesses of each model architecture, aiding in accurate battery behavior forecasting.
LSTM Model Architecture
The offline model was trained on a separate server from the cloud-based IoT network that included all sensors. This saved time and resources by allowing the model to be trained offline. Additionally,
a larger dataset of historical data was used to train the model, thus improving the prediction accuracy. An LSTM network was utilized to train the model offline. LSTM networks are a suitable choice
for training offline models in lithium-ion battery monitoring due to the following:
• Their ability to capture long-term dependencies;
• Their handling of vanishing/exploding gradients;
• Their preservation of memory and context;
• Their accommodation of variable-length sequences;
• Their provision of robustness against overfitting.
These characteristics enabled the model to effectively capture complex dynamics and temporal patterns in battery behavior, filter out irrelevant information, and adapt to varying data sampling rates.
Leveraging these properties can enhance the accuracy and reliability of battery behavior forecasting in electric vehicles. Due to their struggle to retain input information from the past over
extended periods of time, traditional recurrent neural networks have difficulty modeling long-term structures; LSTM networks offer a more effective solution to this issue. This challenge becomes even more difficult in conditional generative models, where predictions depend solely on the network’s own generated inputs. Although injecting noise into predictions can be helpful, LSTM is specifically designed to enhance information storage and access in recurrent neural networks. It provides a longer-term memory that references past information, which improves predictions.
An LSTM unit comprises three main components: the input gate, forget gate, and output gate, which control the flow of information and allow the unit to selectively retain or discard information over
time. The memory cell is the internal memory of the LSTM unit, and it retains information over long sequences. The combination of the gates and memory cell enables the LSTM to capture long-term
dependencies in sequential data, making it valuable for tasks such as time-series prediction and natural language processing. During training, the LSTM learns optimal gate parameters and memory cell
values through backpropagation through time.
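The gate mechanics described above can be sketched as a single NumPy forward step. The layer sizes, random weights, and single-cell setup are illustrative only and do not reproduce the paper's four-layer model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One LSTM step: input, forget, and output gates plus candidate memory.

    W maps the concatenated [h_prev, x_t] to the four gate pre-activations.
    """
    z = np.concatenate([h_prev, x_t]) @ W + b   # shape (4 * hidden,)
    H = h_prev.size
    i = sigmoid(z[0 * H:1 * H])      # input/update gate: what to add
    f = sigmoid(z[1 * H:2 * H])      # forget gate: what to discard
    o = sigmoid(z[2 * H:3 * H])      # output gate: what to expose
    c_hat = np.tanh(z[3 * H:4 * H])  # candidate memory content
    c = f * c_prev + i * c_hat       # memory cell update
    h = o * np.tanh(c)               # hidden state / unit activation
    return h, c

rng = np.random.default_rng(0)
n_features, hidden = 5, 8            # e.g., five battery features per time step
W = rng.normal(scale=0.1, size=(hidden + n_features, 4 * hidden))
b = np.zeros(4 * hidden)
h = c = np.zeros(hidden)
for x_t in rng.normal(size=(10, n_features)):   # a toy 10-step sequence
    h, c = lstm_cell(x_t, h, c, W, b)
print(h.shape)  # (8,)
```

In a trained model, `W` and `b` would be learned through backpropagation through time rather than sampled at random.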
3.2. Proposed Digital-Twin-Based BMS Framework
The digital twin system is a key component of our proposed framework for monitoring LIBs in EVs. The digital twin system serves as a virtual model of the actual battery, allowing for the continuous
observation and anticipation of battery performance.
In order to address the issue of limited access to real-time data, we integrated a TS-GAN into the digital twin system to generate synthetic data. The generation of synthetic data is important, as it
offers a viable solution for improving the accessibility of data for real-time monitoring and prediction. If synthetic data are easily accessible and kept current, the digital twin can make accurate
predictions even when there is a lack of real-world data.
Within this section, we explore the combination of a TS-GAN for the generation of artificial data and an LSTM-based SOC estimator. The battery monitoring and forecasting become more precise and
reliable with these components. We will describe the implementation details, training processes, and evaluation results of the TS-GAN-based synthetic data generation and the LSTM-based SOC estimator.
TS-GAN-Based Real-Time Data Generation
In the field of digital twin research, not having prompt access to up-to-date information is a significant issue. There are several factors and challenges that restrict access to real-time data about
EVs and their batteries. These include privacy and security concerns, ownership and control issues, regulatory and legal constraints, communication and connectivity challenges, data collection costs,
infrastructure limitations, user consent and awareness issues, and technical compatibility. Obtaining real-time data from EVs can be difficult due to limitations and obstacles. However, a TS-GAN can
generate synthetic data that closely resemble real-world data, allowing the digital twin system to accurately simulate and predict battery behavior even without real-time data. This approach provides
a practical solution for overcoming data scarcity and ensuring continuous monitoring and precise forecasting of EV battery performance. Accordingly, we integrated a TS-GAN into the digital twin framework to generate synthetic data, which allows for better access to data for real-time prediction and monitoring.
“Real-time monitoring” means continuously observing and analyzing data as they appear, without any delay. “Prediction” means using data to make educated guesses about future events or trends. In this
section, we delve into the details of the TS-GAN-based synthetic data generation, including the implementation, training processes, and evaluation results. The use of synthetic data enhances the
precision and reliability of monitoring and forecasting batteries.
The TS-GAN framework, which was presented at NeurIPS in December 2019 by Yoon, Jarrett, and van der Schaar, is a significant development in the field of synthetic time-series data generation [
]. TS-GANs are specifically designed to address the unique challenges associated with generating time-series data, which extend beyond creating cross-sectional distributions of features at each time
point. Additionally, a TS-GAN aims to capture the temporal dynamics that govern the sequence of observations in time-series data, reflecting their autoregressive nature.
A TS-GAN is a specialized machine learning model that is intended to operate with time-series data. In order to learn the distribution of transitions between different points in time, it combines
unsupervised and supervised learning techniques. Synthetic data that are indistinguishable from real data are the ultimate goal of a TS-GAN.
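A minimal sketch of the adversarial objective is shown below. It computes only the standard binary cross-entropy losses for the two players on made-up discriminator logits, and omits the supervised and embedding losses that the full TS-GAN framework adds on top of the adversarial game:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_loss(d_real_logits, d_fake_logits):
    """Discriminator objective: label real windows 1, synthetic windows 0."""
    return -(np.mean(np.log(sigmoid(d_real_logits) + 1e-12))
             + np.mean(np.log(1.0 - sigmoid(d_fake_logits) + 1e-12)))

def g_loss(d_fake_logits):
    """Generator objective: make the discriminator score synthetic windows as real."""
    return -np.mean(np.log(sigmoid(d_fake_logits) + 1e-12))

# Hypothetical discriminator logits for a batch of real and generated sequences.
real_logits = np.array([2.1, 1.7, 2.5])     # confidently scored "real"
fake_logits = np.array([-1.9, -2.2, -1.5])  # confidently scored "fake"

print(d_loss(real_logits, fake_logits))  # low: D separates the two sets well
print(g_loss(fake_logits))               # high: G is not fooling D yet
```

Training alternates gradient steps on these two losses until the generated windows become statistically indistinguishable from the real ones.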
4. Simulation Results
4.1. Dataset Description
In this study, a dataset comprising four LIBs (labeled #5, #6, #7, and #18) was utilized. This type of data is known as historical or physical data, and the experimental setup closely followed the methodology outlined in []. At room temperature (25 °C), batteries were subjected to three distinct operational profiles: charge, discharge, and impedance measurements. Charging was accomplished by applying a constant current of 1.5 A until the voltage
reached 4.2 V and then switching to a constant voltage until the charge current dropped to 20 mA. In order to discharge, a constant current of 2 A was applied until specific voltage thresholds were
reached. In addition, electrochemical impedance spectroscopy was performed. During repeated cycles, batteries were exposed to accelerated aging, allowing for the observation of changes in internal
battery parameters until the end-of-life criteria were met, which were defined as a 30% decrease in the rated capacity. Finally, the dataset included the battery capacity for discharge until 2.7 V,
which was recorded and analyzed. The multivariate time-series data obtained from Li-ion batteries included 45,122 samples and eight features, namely, the ID cycle, measured voltage, measured current,
measured temperature, capacity, charging current, charging voltage, and time. Additionally, temperature is a crucial factor that significantly affects the performance and overall health of battery
systems. Our framework addressed this by incorporating temperature as an important aspect in the following ways.
• Addition to the Feature Set: Temperature was a fundamental feature in our dataset, and it was included as one of the key input parameters for our model. Our framework was able to consider the
changing temperature levels that occur during battery operation thanks to the inclusion of these data in the dataset.
• Feature Importance: We recognize the significance of temperature in influencing battery behavior. In our feature engineering process, temperature was assigned an appropriate weight based on its
impacts on various performance metrics, such as the state of charge (SOC), capacity, and voltage.
• Dynamic Modeling: Using our framework, we employed a dynamic temperature model that reflected real-time changes. This allowed the model to understand and represent the subtle impacts of
temperature changes on the battery’s functionality.
• Temperature-Dependent Responses: The model responses, including its state estimation and performance predictions, were inherently linked to the temperature feature. The model adjusted its
predictions to maintain precision in varying temperature conditions.
4.1.1. Time-Series Data
Time-series data are a type of data that are collected over time at regular intervals or time steps, and they are commonly used in various fields to analyze and forecast trends, patterns, and
behaviors that evolve over time. Key characteristics include the temporal order, time intervals, seasonality, trends, and seasonal and residual components. Analyzing and modeling time-series data is
essential for tasks such as forecasting [
], anomaly detection [
], pattern recognition [
], time-series imputation, simulation, and digital twin applications. A time series is a set of observations that are arranged in chronological order and captured at regular intervals. Each data
point in the series is influenced by its previous values, indicating a temporal correlation or dependence between consecutive observations. The joint distribution of the sequence of observations can
be represented using the chaining product rule as follows:
$p(x_1, x_2, x_3, \ldots, x_T) = p(x_1) \prod_{t=2}^{T} p(x_t \mid x_1, x_2, \ldots, x_{t-1})$
where $x_t$ is a data point in the series, and the conditional probability $p(\cdot \mid \cdot)$ for each event signifies the relationship between the current state and its preceding ones in terms of time. It represents a factorization of a joint probability distribution over a time series $x_1, x_2, x_3, \ldots, x_T$ into a product of conditional probabilities. In this research, a dataset consisting of multiple batteries’ voltage, current, capacity, and temperature measurements was used. The study focused on four
lithium-ion batteries, namely, B0005, B0006, B0007, and B0018. After 168 cycles of charging and discharging, B0005, B0006, and B0007 showed a 30% decrease in their rated capacity, which met the
end-of-life (EOL) criteria. Similarly, B0018 reached the EOL criteria after 132 cycles of charging and discharging. This dataset is useful for various purposes, such as predicting and identifying
anomalies and recognizing patterns. By utilizing chaining product rules, the researchers were able to effectively model the joint distribution of sequential observations and take advantage of the
temporal correlations between these data points.
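The factorization above can be checked numerically on a toy first-order Markov chain, a special case in which each observation depends only on its immediate predecessor; the states and transition probabilities here are invented for illustration:

```python
import numpy as np

# A toy two-state chain over, say, "low" (0) and "high" (1) battery load.
p0 = np.array([0.6, 0.4])      # p(x_1)
P = np.array([[0.7, 0.3],      # P[i, j] = p(x_t = j | x_{t-1} = i)
              [0.2, 0.8]])

def joint_prob(seq):
    """p(x_1, ..., x_T) = p(x_1) * prod_t p(x_t | x_{t-1})."""
    prob = p0[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        prob *= P[prev, cur]
    return prob

# The joint probabilities over all length-3 sequences must sum to 1.
total = sum(joint_prob((a, b, c))
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(total)  # 1.0
```

A TS-GAN targets the general factorization, where each conditional may depend on the entire history rather than only the previous step.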
Figure 4
shows how the voltage of the battery decreased at various time intervals for B0005. As the battery was utilized, its voltage was reduced with every cycle. Moreover, the battery’s ability to hold a
charge also diminished with an increase in the number of cycles. With a higher cycle count, the battery reached its discharge voltage threshold, which was 2.7 volts, more quickly. Consequently, as
the battery aged, its capacity diminished at an accelerated rate.
4.1.2. Preprocessing
In order to process the raw data for model training, we performed a series of preprocessing steps. To read and display information from the CSV file, we used the pandas library. Subsequently, we used
the Matplotlib library to generate a correlation map.
Figure 5
displays the connections among the variables based on a heatmap analysis that we conducted as part of our exploratory analysis of the essential features. Five critical features were identified in our
analysis: the ID cycle, time, temperature, voltage, and current. These variables directly influenced the SOC. Through this analysis, we sought to quantify the impact of each feature on SOC
prediction, thereby enhancing our comprehension of battery behavior and improving the predictive accuracy.
We then partitioned the data into training and testing sets, allocating the initial 35,000 samples for training and the remainder for testing. To ensure efficient calculations during our operations,
we normalized the data, scaling the columns to a specific range (0 to 1). Lastly, accounting for the data series’ length, we transformed them into fixed-length sequences or windows, creating windows
of 1000 time steps each. This process yielded two arrays, X-train and X-test, with dimensions of (33,950, 100, 8) and (9073, 100, 8), respectively. The sampling sequence for voltage, temperature, and current for battery B0005 is depicted in Figure 6. Figure 7 displays the SOC graphs for B0005, B0006, B0007, and B0018 based on information provided by the manufacturer. The batteries were subjected to multiple cycles of charging and discharging until their
SOC reached 72%. Batteries B0005, B0006, and B0007 underwent 166 cycles each, while battery B0018 underwent 132 cycles. It can be observed from the graph that the SOC of battery B0018 decreased at a
faster rate.
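The windowing and normalization steps can be sketched as follows; the array sizes and window length are toy values for illustration and do not match the battery dataset's dimensions:

```python
import numpy as np

def make_windows(data, length):
    """Slice a (samples, features) series into overlapping fixed-length windows."""
    n = data.shape[0] - length + 1
    return np.stack([data[i:i + length] for i in range(n)])

# Toy stand-in for the battery series: 500 samples, 8 features.
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 8))

# Min-max normalize each column to the range [0, 1], as in the preprocessing step.
lo, hi = raw.min(axis=0), raw.max(axis=0)
scaled = (raw - lo) / (hi - lo)

# Split into training and testing sets, then window each split.
train, test = scaled[:350], scaled[350:]
X_train = make_windows(train, length=100)
X_test = make_windows(test, length=100)
print(X_train.shape, X_test.shape)  # (251, 100, 8) (51, 100, 8)
```

With a stride of 1, a split of N samples and window length L yields N − L + 1 windows, which is why the first array dimension is slightly smaller than the split size.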
4.1.3. Problem Formulation with Physical Batteries and Simulation Results
Overall, the SOC is an important factor when it comes to overseeing and controlling the energy of a battery, as it indicates the present amount of charge that it holds. The SOC at time $t$ can be obtained as follows:
$SOC_t = SOC_{t_0} - \frac{1}{C} \int_{t_0}^{t} CE \cdot I_t \, dt$
where $SOC_{t_0}$ and $SOC_t$ are the state of charge of the battery at the starting time $t_0$ and the present time $t$, respectively, $C$ is the capacity of the battery, $CE$ is the Coulomb efficiency, and $I_t$ represents the current that passes through the voltage source. Accurate prediction of SOC values is crucial for efficient energy utilization and optimization of battery performance, and the LSTM utilizes sequential patterns in battery data to achieve this goal. Each LSTM unit uses a memory $C_t$ at time $t$. Here, $h_t$ is an output or LSTM unit activation determined by
$h_t = o_t \odot \tanh(C_t)$
where $o_t$ is the output gate that controls the amount of content that is provided through memory. The output gate is calculated through the following:
$o_t = \sigma(w_o \cdot [h_{t-1} + x_t] + b_o)$
where $\sigma$ is the sigmoid activation function, $w_o$ is the weight matrix, $x_t$ is the input at time $t$, $b_o$ is a model parameter, and $h_{t-1}$ is the hidden state from the previous time step.
With the partial forgetting of the current memory and the addition of new memory content $\hat{C}_t$, memory cell $C_t$ is updated to
$C_t = f_t \odot C_{t-1} + i_t \odot \hat{C}_t$
where $f_t$ is the activation vector of the forget gate, and $i_t$ is the activation vector of the input/update gate. The amount of current memory that should be forgotten is controlled by the forget gate ($f_t$), and the amount of new memory content that should be added to the memory cell is controlled by the update gate ($i_t$), which is sometimes known as the input gate. These are computed with the following calculations:
$f_t = \sigma(w_f \cdot [h_{t-1} + x_t] + b_f)$
$i_t = \sigma(w_u \cdot [h_{t-1} + x_t] + b_u)$
The new memory content is expressed through the following:
$\hat{C}_t = \tanh(w_c \cdot [h_{t-1} + x_t] + b_c)$
At every time step, an LSTM is provided with details regarding the battery, such as its voltage, current, and temperature. Additionally, the LSTM takes the previous hidden state $h_{t-1}$, as well as the previous cell state $C_{t-1}$ from the previous time step $t-1$, into account. Through this formulation, an LSTM makes predictions of the SOC. So, we can define the SOC at time $t$ as
$SOC_t = \sigma(w \cdot [h_{t-1} + x_t] + b_0)$
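The Coulomb-counting relation introduced at the start of this subsection can be approximated with a discrete sum; the cell capacity, sampling interval, and current profile below are hypothetical:

```python
import numpy as np

def soc_coulomb(soc0, current, dt, capacity_ah, ce=1.0):
    """Discrete Coulomb counting: SOC_t = SOC_t0 - (1/C) * sum(CE * I * dt).

    `current` is in amperes (positive = discharge), `dt` in hours,
    `capacity_ah` in ampere-hours, and `ce` is the Coulomb efficiency.
    """
    return soc0 - np.cumsum(ce * current * dt) / capacity_ah

# Hypothetical profile: a 2 Ah cell discharged at a constant 2 A,
# sampled every 36 s (0.01 h), starting from 100% SOC.
current = np.full(50, 2.0)
soc = soc_coulomb(1.0, current, dt=0.01, capacity_ah=2.0)
print(soc[-1])  # 0.5 after 0.5 h of 2 A discharge from a 2 Ah cell
```

In practice, Coulomb counting drifts with current-sensor error, which is one motivation for learning the SOC from the full feature sequence with an LSTM instead.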
The problem of estimating the SOC was approached as a supervised learning problem in this study, wherein the model was provided with numerous input–output pairs from which to learn. An offline model
was trained using a dataset with historical information obtained from the battery manufacturer. The dataset consisted of 45,122 samples of multivariate time-series data with eight characteristics,
including the cycle ID, measured voltage, measured current, measured temperature, capacity, charging current, charging voltage, and time. The objective of the offline model was to utilize the known
SOC values to compare and evaluate them against the digital twin’s SOC predictions. During the offline model’s training, five essential input features, namely, the measured voltage, measured current,
cycle ID, measured temperature, and time, were selected to capture the battery’s behavior over time. The output vector for the offline model was the known SOC values. In a given sample, the input
comprised concatenated values of the five selected input features, represented by $[x_1, x_2, \ldots, x_n]$. The output vector, represented by $[y_1, y_2, \ldots, y_n]$, corresponded to the SOC values $(SOC_0, SOC_1, \ldots, SOC_n)$ for the input sample. The goal was to train the LSTM model to accurately map the input vector to the output vector. The LSTM neural network used in the offline model comprised four hidden layers, as shown in Figure 8. The LSTM’s hidden layers sequentially processed the input and passed the hidden state to the following layer. This model received sequential input data at the input layer, where each time step’s
features were processed. Each LSTM unit in a layer comprised an input gate, a forget gate, and an output gate, which determined what information to store in the cell state, what information to
discard, and what part of the cell state to output as the hidden state to the next layer or the final output. The activation functions utilized by the LSTM units were the sigmoid and hyperbolic tangent ($\tanh$) functions. The input, forget, and output gates utilized the sigmoid function to regulate the flow of information based on relevance and importance, and the values were limited to a range of 0 to 1. On the other hand, $\tanh$ was employed to compute new candidate values that could be included in the memory cell; values were compressed between $-1$ and 1 while taking the magnitude and significance of new data into account. The output layer processed the final hidden state or cell state to produce the desired output, that is, the prediction of
the next value or the estimation of the SOC of a battery. During training, backpropagation through time (BPTT) was used to compute gradients and update the weights. To prevent overfitting, the L1/L2
regularization technique was included in the LSTM model by adding a penalty term to the loss function. Furthermore, two dense layers were used in the output layer to take the sequence of hidden
states produced by the LSTM layers and transform them into a meaningful prediction of the state of charge.
Figure 8 illustrates that the input was a matrix with dimensions of $(n, 256)$ and a batch size of 256. After passing through the LSTM and dense layers, the output was transformed into a matrix of dimensions $(n, 1)$.
The LSTM model was trained using three different types of optimizers, namely, SWATS, Adam, and SGD, with different learning rates. This approach provided several benefits, such as the ability to
choose the most effective optimizer, ensuring good generalization and convergence, adapting to various data characteristics and learning rates, being robust against local minima, and gaining insights
for future optimization strategies. Despite the presence of L1/L2 regularization, overfitting could still occur in the LSTM model, particularly if the learning rate was high. This was because a high
learning rate could lead to the model learning the training data too well, including the noise, and consequently, the model may not have performed well on unseen data. To mitigate this problem, one
can use the SWATS optimizer. The SWATS optimizer is a new optimizer that prevents overfitting by adaptively adjusting the learning rate during training. It is based on the Adam optimizer but adds
features such as a moving average of gradients and a decay factor to gradually decrease the learning rate over time. It can be used with any machine learning model but is particularly useful for LSTM
models that are prone to overfitting. The Switching Adam to SGD optimizer is a variant that improves the convergence speed of the optimizer by switching to SGD after a certain number of epochs. The
SWATS optimizer has shown promising results in preventing overfitting in LSTM models.
In this study, the SWATS optimizer was used to train the LSTM model with different learning rates. The results showed that the SWATS optimizer was able to achieve better performance than that of the
other optimizers, including Adam and SGD. The SWATS optimizer prevented overfitting while still achieving good accuracy on the training data.
The LSTM model was trained using the battery manufacturer’s past data to create an offline model that could be used as a benchmark to measure the reliability and precision of the digital twin’s SOC
forecasts. This enabled a quantitative assessment of the digital twin’s effectiveness by comparing its predictions with the actual SOC values.
Additionally, we compared the LSTM models trained with different optimizers (SWATS, Adam, and SGD) with a GRU network to understand their strengths and weaknesses in LIB behavior forecasting. LSTM
and GRU are both RNN variants but differ in their architecture, which impacts their ability to capture temporal dependencies and handle long-term sequences. GRU networks have a simplified
architecture compared to that of LSTM [
]. GRU networks have a reset gate and an update gate, which control the flow of information within the network. The reset gate determines which parts of the past information should be forgotten,
while the update gate decides which parts of the new information should be incorporated. By comparing the performance of both models, we evaluated their suitability for accurate battery behavior
forecasting while considering factors such as prediction accuracy, convergence speed, generalization capabilities, and computational efficiency. Two different evaluation metrics, the root mean square
error (RMSE), and mean absolute error (MAE), were utilized to evaluate the effectiveness of the proposed model. The formulas for calculating these metrics are presented below.
$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| SOC_{\mathrm{actual}} - SOC_{\mathrm{pred}} \right|$
$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( SOC_{\mathrm{actual}} - SOC_{\mathrm{pred}} \right)^2}$
Following every forward propagation, the model loss was computed as the mean square error (MSE), which involved assessing the deviation between the predicted SOC value and the actual SOC value.
$Loss = \frac{1}{N} \sum_{i=1}^{N} \left| SOC_{\mathrm{actual}} - SOC_{\mathrm{pred}} \right|^2$
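These metrics can be computed directly; the SOC values below are made up for illustration and are not from the paper's dataset:

```python
import numpy as np

def mae(actual, pred):
    """Mean absolute error between actual and predicted SOC."""
    return np.mean(np.abs(actual - pred))

def rmse(actual, pred):
    """Root mean square error: penalizes large deviations more heavily."""
    return np.sqrt(np.mean((actual - pred) ** 2))

def mse_loss(actual, pred):
    """Mean square error, the training loss after each forward propagation."""
    return np.mean((actual - pred) ** 2)

soc_actual = np.array([0.95, 0.90, 0.84, 0.79])
soc_pred = np.array([0.94, 0.92, 0.83, 0.77])

print(mae(soc_actual, soc_pred))      # 0.015
print(rmse(soc_actual, soc_pred))
print(mse_loss(soc_actual, soc_pred))
```

Note that the RMSE is simply the square root of the MSE loss, so the two always rank models identically; the MAE can rank them differently when errors are unevenly distributed.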
Table 1
presents a summary of the MAE and RMSE outcomes obtained from the LSTM and GRU models while utilizing the three distinct optimizers with several different learning rates. Compared to the other two optimization techniques, the SWATS optimizer exhibited superior performance, effectively mitigating issues related to overfitting and gradient instability. Furthermore, it
was evident that the choice of learning rate significantly impacted the accuracy and quality of the predictions. To assess their impacts, we used a range of learning rate values in our simulations.
Notably, for battery B0005, a particular learning rate in conjunction with the SWATS optimizer yielded the best results in terms of SOC calculations. This configuration proved to be highly effective for estimating the SOC in this
battery model. Moreover, we provide similar simulation results for battery B0006 in
Table 2
, battery B0007 in
Table 3
, and battery B0018 in
Table 4. This gives a thorough understanding of how the proposed method performs on and adapts to different battery models. Finally, the loss function results for the initial 100 epochs of the LSTM with the
SWATS optimizer are shown in
Table 5
. Based on the obtained results, the optimal learning rate was battery-specific: for each of batteries B0006, B0007, and B0018, a different learning rate (reported in the corresponding tables) paired with the SWATS optimizer produced the most favorable outcomes and the lowest loss function values for the SOC calculation, highlighting the adaptability of the SWATS optimizer across battery models. Notably, the SWATS optimizer consistently yielded the best outcomes across these battery types, as characterized by the lowest loss function values in the SOC calculation process. In
Figure 9
, we present detailed results of our simulations of the B0005 battery. The findings are summarized in
Figure 10
, which showcases the use of the SWATS optimizer for batteries B0005, B0006, B0007, and B0018. In addition, we expanded our investigation to encompass the assessment of the effectiveness of the LSTM
network in conjunction with the GRU network. The results of the SOC estimation for the B0005 battery are depicted in
Figure 11
, where we utilized different optimizers and learning rates to gain insights into the performance behavior of the network. We will specifically examine the outcomes of our LSTM modeling efforts that
were conducted offline with a particular focus on forecasting the SOC using various learning rates in
Section 5.
4.1.4. Problem Formulation for the Generation of Real Data and Simulation Results
GANs were introduced in 2014 [
] and have proven effective in producing high-quality outputs via a mutual game-learning process between two modules: a generative model and a discriminative model. Generative models—or
generators—are responsible for recovering the real data distribution. The algorithm takes a random noise vector $z$ as input and generates an output, $G(z)$, which can be an image or another data format. On the other hand, the discriminator is a discriminative model whose job is to differentiate between data samples drawn from the training set and those created by the generator. It receives an input $x$, which can be either actual training data or data generated by the generator. The discriminator outputs a score between 0 and 1: a score close to 1 indicates that the input is real, while a score close to 0 indicates that the input is fake (generated). The aim of the generative model $G$
is to learn a distribution $p_g$ over the data $x$. This is achieved by using a function $G(z; \theta_g)$ that maps a prior noise distribution $p_z(z)$ to the data space, where $\theta_g$ represents the model’s parameters; these parameters can be the weights of a multilayer perceptron that is used in $G$. On the other hand, the discriminative model $D(x; \theta_d)$ is a binary classifier whose scalar output gives the probability that the input $x$ came from the training data rather than from $p_g$.
The training process involves a game between two models, which continues until they reach a stable balance point. This ensures that the generator can produce realistic data samples, while the
discriminator can accurately distinguish between real and generated data. Both the generator and discriminator networks are trained simultaneously using min–max adversarial loss functions. The
generation module’s objective function is stated as follows:
$\min_{G} \max_{D} V(D, G) = \mathbb{E}_{x \sim p_x}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]$
where $D$ represents the discriminator, $G$ represents the generator, $V(D, G)$ is the adversarial loss function, the real data distribution is denoted by
$p_x$
, and the latent space distribution is represented by
$p_z$
. In order to transform a traditional GAN into a TS-GAN, some modifications in the architecture and training process have to be made. The following is a step-by-step guide for accomplishing this
task. The TS-GAN architecture comprises two main components: an autoencoder and an adversarial network (
Figure 12). The autoencoder is responsible for learning a time-series embedding space that can capture the underlying patterns in the data, while the adversarial network generates artificial time-series data and
distinguishes them from real data. The TS-GAN uses both supervised and unsupervised learning objectives during training, and it applies the adversarial loss to both real and synthetic sequences [
]. In addition, the TS-GAN includes a stepwise supervised loss that rewards the model for accurately learning the distribution over transitions from one time point to the next, as observed in historical data.
To implement the TS-GAN architecture, several steps are required; firstly, real historical time-series data, such as EV battery system data, are collected and made ready for training. Alongside this,
random time-series data are generated to be used as a benchmark for comparison with the synthetic data that are produced. Next, the key components of the TS-GAN model, which include the autoencoder,
sequence generator, and sequence discriminator, are established.
The TS-GAN architecture is made up of various components such as an Embedder, a Recovery, a Generator, a Discriminator, and a Supervisor. Its training involves 10,000 iterations. The Embedder is an
RNN-based model that maps real data sequences $x_t$ to a lower-dimensional space $e_t$ and captures temporal dependencies.
The Recovery (another RNN-based model) maps embeddings $e_t$ back to the original data space to reconstruct the time-series data.
The Generator is an RNN-based model that generates synthetic data sequences $x_{\mathrm{fake},t}$ from random noise sequences $z_t$:
$x_{\mathrm{fake},t} = G(z_t)$,
while the Discriminator distinguishes between real time-series data $x_{\mathrm{real},t}$ and generated data $x_{\mathrm{fake},t}$:
• $D(x_{\mathrm{real},t})$ represents the probability that $x_{\mathrm{real},t}$ is real;
• $D(x_{\mathrm{fake},t})$ represents the probability that $x_{\mathrm{fake},t}$ is real, which should be close to 0 when the Discriminator correctly identifies it as fake.
The Supervisor acts as an intermediary between the Embedder and the Generator to enhance the quality of the generated sequences. There are two main objectives in the training process: adversarial
loss and supervised loss. The adversarial loss can be defined as
$L_{\mathrm{adv}} = \mathbb{E}_{x_{\mathrm{real}} \sim p_{\mathrm{data}}}\left[\log D(x_{\mathrm{real}})\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]$
The supervised loss can be defined as
$L_{\mathrm{sup}} = \mathbb{E}_{x_{\mathrm{real}} \sim p_{\mathrm{data}}}\left[\sum_{t} \mathrm{loss}\left(x_{\mathrm{real},t+1}, x_{\mathrm{fake},t+1}\right)\right]$
The combined adversarial and supervised losses make up the following overall objective:
$L_{\mathrm{TS\text{-}GAN}} = L_{\mathrm{adv}} + \lambda \cdot L_{\mathrm{sup}}$
where $\lambda$ is a hyperparameter that balances the two objectives. During the initialization phase of the TS-GAN model, an autoencoder is employed to integrate the Generator and the Embedder. The main objective of this
approach is to reconstruct genuine data sequences and obtain significant embeddings of the real data. During training, the Generator and Embedder are trained twice as often as the Discriminator to
maintain balance. After training, the Generator generates synthetic data sequences, which are transformed back to the original data space using the Recovery model and inverse-scaled to obtain
realistic-looking synthetic data. The results of the data generated by the TS-GAN algorithm are shown in
Figure 13
Furthermore, the battery time-series data for B0018 and B0005 are compared in
Figure 14
and Figure 15
to show the differences between the actual and generated data.
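The adversarial and supervised loss terms defined above can be illustrated with a small numerical sketch. The discriminator scores, next-step values, and the weight `lam` below are hypothetical, chosen only to show how the combined objective $L_{\mathrm{adv}} + \lambda \cdot L_{\mathrm{sup}}$ is assembled:

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    """L_adv: log-scores of the discriminator on real and generated batches."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def supervised_loss(x_real_next, x_fake_next):
    """L_sup: stepwise squared error between real and generated next-step values."""
    diff = np.asarray(x_real_next, dtype=float) - np.asarray(x_fake_next, dtype=float)
    return np.mean(diff ** 2)

def tsgan_loss(d_real, d_fake, x_real_next, x_fake_next, lam=1.0):
    """Combined objective L_adv + lambda * L_sup."""
    return adversarial_loss(d_real, d_fake) + lam * supervised_loss(x_real_next, x_fake_next)

# Toy batch: a fairly confident discriminator and closely matching next-step voltages
loss = tsgan_loss(d_real=[0.9, 0.8], d_fake=[0.1, 0.2],
                  x_real_next=[3.7, 3.6], x_fake_next=[3.68, 3.63], lam=0.5)
```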
4.1.5. Evaluation of the Synthetic Data
The next step after synthesizing our data was to verify that the new data accurately reproduced the initial battery data. Using evaluation metrics is one of the best ways to compare real and
synthetic data. In order to ensure the TS-GAN’s reliability and applicability in real-world scenarios, it was important to accurately evaluate the data generated by it. It is important for the
evaluation metric to be carefully selected when dealing with multivariate time-series data, such as those obtained from LIBs. A model’s performance or predictive capabilities may not be significantly
affected by small changes in time-series data. Hence, it is crucial to strike a balance between capturing meaningful differences in the generated data and being robust enough to tolerate minor
variations. These requirements can be met by the Fréchet inception distance (FID), which provides an objective measure of similarity between the generated and real data distributions without being excessively influenced by minor fluctuations in the data.
When evaluating data from a TS-GAN—especially historical data instead of real-time data—it is imperative to use an accurate evaluation metric, such as the FID. The FID approach provides objective and
quantitative insights into the performance and generalization capabilities of a TS-GAN model, despite the fact that historical data are not ideal for evaluation. As a result, potential overfitting
can be detected, generalization can be validated, and the model can be iteratively improved. Through the use of the FID, one can obtain significant insights into the TS-GAN model’s ability to learn
from past data and the similarity between the generated data and the actual data distribution, even without real-time data.
Formulation of the Fréchet inception distance: The Fréchet inception distance (FID) is a popular metric used in generative modeling, including for time-series data. The authors of [
] employed the FID metric to evaluate the performance of a new update rule called the two-time-scale update rule (TTUR) on various datasets. The TTUR is a strategy used in training generative
adversarial networks (GANs) with stochastic gradient descent. It is designed to address convergence issues and enhance learning for GANs, leading to improved results in tasks such as image
generation. The evaluation involves deep learning and feature extraction to gauge the dissimilarity between two probability distributions. In order to evaluate the accuracy of the data produced by
TS-GAN, we utilized the FID to measure the disparity between the feature representations of the generated time-series data and the original time-series data. This was achieved by utilizing a
pre-trained neural network, which is usually an Inception-v3 network (a deep convolutional neural network architecture), to extract high-level features from both types of time-series data. The FID is
calculated as follows:
$\mathrm{FID}^{2} = \left\| \mu_{\mathrm{real}} - \mu_{\mathrm{fake}} \right\|^{2} + \mathrm{Tr}\left( \Sigma_{\mathrm{real}} + \Sigma_{\mathrm{fake}} - 2 \left( \Sigma_{\mathrm{real}} \Sigma_{\mathrm{fake}} \right)^{\frac{1}{2}} \right)$
• $\mu_{\mathrm{real}}$ is the mean of the feature representations of the real data samples;
• $\mu_{\mathrm{fake}}$ is the mean of the feature representations of the generated data samples;
• $\Sigma_{\mathrm{real}}$ is the covariance matrix of the feature representations of the real data samples;
• $\Sigma_{\mathrm{fake}}$ is the covariance matrix of the feature representations of the generated data samples;
• $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix.
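A self-contained NumPy sketch of this computation is given below, using hypothetical Gaussian feature arrays in place of Inception-v3 activations. For symmetric positive semi-definite matrices the square root can be taken via an eigendecomposition, and the cross term uses the identity $\mathrm{Tr}((\Sigma_{\mathrm{real}}\Sigma_{\mathrm{fake}})^{1/2}) = \mathrm{Tr}((\Sigma_{\mathrm{real}}^{1/2}\Sigma_{\mathrm{fake}}\Sigma_{\mathrm{real}}^{1/2})^{1/2})$:

```python
import numpy as np

def sqrtm_psd(a):
    """Matrix square root of a symmetric positive semi-definite matrix via eigh."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def fid(feat_real, feat_fake):
    """Frechet distance between Gaussians fitted to two feature sets (rows = samples)."""
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)
    s_r = sqrtm_psd(cov_r)
    cross = sqrtm_psd(s_r @ cov_f @ s_r)  # symmetric form of (cov_r cov_f)^(1/2)
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * cross))

# Two samples from the same distribution should give a score near zero
rng = np.random.default_rng(0)
real_features = rng.normal(size=(500, 4))
fake_features = rng.normal(size=(500, 4))
score = fid(real_features, fake_features)
```

A lower score indicates closer agreement between the two distributions, matching the interpretation of the FID results discussed above.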
By taking the multivariate nature of time-series data into account, the FID metric captures the relevant relationships and patterns for precise evaluation. It offers a reliable and informative
measure of the similarity between the two distributions. A lower FID score suggests a higher similarity, indicating that the generated data closely resemble the actual data. This demonstrates the
effectiveness of the TS-GAN model in replicating the underlying dynamics of the target system. In conclusion, the FID metric is a valuable tool for impartial and precise evaluation of
TS-GAN-generated data, contributing to the advancement and application of this technology in real-world scenarios involving LIBs and beyond. Based on the FID metric,
Figure 16
illustrates the results of the evaluation of the TS-GAN data.
To apply the FID on two distinct datasets, namely, a historical dataset and a real-time dataset of lithium-ion batteries produced by the TS-GAN, we utilized TensorFlow. The process involved loading
and preprocessing the pre-trained Inception-v3 model, extracting features from both datasets using the same pre-trained Inception-v3 model, computing the mean and covariance matrix for each dataset,
and, finally, computing the FID score based on the mean and covariance matrix of the two datasets. The results are shown in
Table 6
4.2. Training and Evaluation of the LSTM-Based SOC Estimator for the Digital Twin
To train the LSTM in the digital twin, these steps were followed:
• Data Preparation: The real-time data obtained from the TS-GAN model were preprocessed to make them compatible with the LSTM model’s input format. The preprocessed data were split into training
and validation sets. In the training set, 67% of the 45,122 samples were included (30,080 samples), while in the validation set, 33% of the 45,122 samples were included (14,890 samples).
Additionally, five important features were selected: the measured voltage, measured current, cycle ID, measured temperature, and time.
• Model Architecture: The LSTM model architecture within the digital twin should be the same or similar to that of the offline LSTM model used for historical data training. LSTM models with four
layers, 25 hidden units in each layer, sigmoid activation functions, and L1/L2 regularization layers were used.
• Training Process: The LSTM model was compiled with an SWATS optimizer. To assess the accuracy and generalization capabilities of the LSTM model, a separate test dataset was used after training.
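The data-preparation step above can be sketched as follows. The window length and the toy data are assumptions for illustration; only the five-feature layout and the 67/33 chronological split mirror our setup:

```python
import numpy as np

def make_windows(features, soc, window=10):
    """Slice a multivariate time series into (window, n_features) inputs with next-step SOC targets."""
    x, y = [], []
    for i in range(len(features) - window):
        x.append(features[i:i + window])
        y.append(soc[i + window])
    return np.array(x), np.array(y)

def train_val_split(x, y, train_frac=0.67):
    """Chronological 67/33 split of the windowed samples."""
    n_train = int(len(x) * train_frac)
    return (x[:n_train], y[:n_train]), (x[n_train:], y[n_train:])

# Toy data: 100 time steps of the five selected features
# (measured voltage, measured current, cycle ID, measured temperature, time)
feats = np.random.rand(100, 5)
soc = np.random.rand(100)
x, y = make_windows(feats, soc, window=10)
(train_x, train_y), (val_x, val_y) = train_val_split(x, y)
```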
A comparison between the SOC predicted using the digital twin and the anticipated SOC obtained from historical or real data is shown in
Figure 17
. The gradient of the LSTM model for battery B0005 with learning rates of 0.05, 0.03, 0.01, and 0.008 is depicted in
Figure 18
. Additionally,
Figure 19
illustrates the LSTM model’s loss during state-of-charge estimation for battery B0005, showcasing the impact of the learning rate (0.05, 0.03, 0.01, and 0.008) on the training convergence. Finally,
Figure 20
and Figure 21
present the distribution of prediction errors in SOC estimation for lithium-ion batteries using the LSTM network in the digital twin, thus providing insights into the accuracy and reliability of the
model’s predictions during both the training and testing phases, respectively.
4.3. Evaluation of Digital-Twin-Based SOC Estimation
In this section, we compare the accuracy of the SOC estimation results obtained from two distinct models: the LSTM model within the digital twin and the LSTM model trained offline using historical
data. Since both models are regression-based, we will be using error as the primary metric for evaluating their accuracy. The MSE serves as a key measure, with lower values indicating superior
performance. Simply comparing means between two models has limitations due to the effects of sample variation and inherent randomness. It is not practical to expect identical means between the two
models. To effectively determine which model is superior, we followed standard statistical procedures that are more robust. In statistical testing, we established a null hypothesis and an alternative
hypothesis that differed from the null. We then analyzed the data to demonstrate that the null hypothesis could not stand, thereby accepting the alternative hypothesis. This process ensured robust
and credible claims. To evaluate the relative accuracy of the digital twin and offline LSTM model, we employed parametric tests—specifically, the t-test and analysis of variance (ANOVA).
These parametric tests were chosen for several important reasons. Firstly, parametric tests assume that the data follow a specific distribution (usually normal) and are designed for comparing means
and variances. Secondly, they offer greater statistical power, which means that they are more likely to detect true differences if they exist.
Additionally, parametric tests provide precise estimates of the effect size, thus aiding in understanding the practical significance of the results. The
$t$-test was used to analyze the variations in the average values of metrics such as the MSE and MAE. ANOVA was used to compare the variance in the LIB datasets. The test results showed that there were no
significant differences among the independent variables in the different groups (
$p > 0.05$).
Table 7
and Table 8
display the results of the two statistical tests that were utilized to assess the performance of the two models. The usage of these parametric tests led to accurate statistical conclusions regarding
the model’s precision. This method offered a thorough evaluation of the model’s performance that took the differences in both mean and variance into account.
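For reference, the two-sample $t$ statistic used in this comparison can be computed directly. The sketch below uses Welch's unequal-variance formulation on hypothetical per-run MSE values, not our recorded measurements:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic for groups with possibly unequal variances."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    var_a = a.var(ddof=1) / len(a)
    var_b = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(var_a + var_b)

# Hypothetical per-run MSE samples from the digital twin and the offline LSTM
twin_mse = [0.91, 0.88, 0.93, 0.90, 0.89]
offline_mse = [0.90, 0.92, 0.89, 0.91, 0.88]
t_stat = welch_t(twin_mse, offline_mse)
# |t| far below the 1.645 critical value -> no significant difference
```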
5. Discussion
In this section, we discuss the findings and significance of the various simulation studies centered on the use of digital twins for enhanced accuracy in forecasting the state of charge of lithium-ion batteries.
5.1. Performance of LSTM-Based Offline Modeling
This section discusses the performance of the LSTM-based offline modeling stage of the digital twin framework for SOC prediction. We trained the LSTM model with four layers, three optimizers (Adam,
SGD, and SWATS), and four learning rates (0.05, 0.03, 0.01, and 0.008) for four LIBs (B0005, B0006, B0007, and B0018).
Figure 7
shows the results for the Adam and SGD optimizers for battery B0005. As we can see, the plot for LR = 0.008 was the most similar to the real data plot. Additionally,
Table 1
shows that the LSTM model with the Adam optimizer and LR = 0.008 had the lowest loss value (MAE = 1.432 and RMSE = 1.456). The LSTM model with the SGD optimizer and LR = 0.01 also had a low loss
value (MAE = 2.945 and RMSE = 3.231).
To make a comparison, we trained a GRU model while using identical optimizers and learning rates.
Figure 9
shows the results for battery B0005. The plot shows that the GRU model with the Adam optimizer and LR = 0.05 had the best results. Moreover,
Table 1
shows that the GRU model with the Adam optimizer and LR = 0.05 had the lowest loss value (MAE = 1.432 and RMSE = 1.456). The GRU model with the SGD optimizer and LR = 0.05 also had a low loss value
(MAE = 1.714 and RMSE = 1.773). The LSTM model was ultimately trained using the SWATS optimizer, as it demonstrated superior performance in previous experiments.
Figure 8
and Table 1
show that the LSTM model with the SWATS optimizer and LR = 0.03 had the lowest loss value (MAE = 0.888 and RMSE = 0.912) for battery B0005.
Table 2
Table 3
, and
Table 4
show the results for batteries B0006, B0007, and B0018, respectively.
Table 5
shows the loss values for the LSTM model with the SWATS optimizer for various epochs. The results show that the loss value after 90 epochs no longer changed significantly, so we considered 100 epochs
to be sufficient to reduce the calculation time. Based on the results of the offline modeling stage of the digital twin framework for SOC prediction, we can conclude that the LSTM model with the
SWATS optimizer and LR = 0.03 achieved the best results. This model can be used to generate accurate SOC predictions for the online stage of the digital twin framework.
Extensive training of the LSTM and GRU models was conducted for various Li-ion batteries during the offline modeling stage of the digital twin framework for SOC prediction. The LSTM model that
utilized the SWATS optimizer and had an LR of 0.03 consistently displayed superior performance to that of the other models. When applied to battery B0005, this particular LSTM setup resulted in an
MAE of 0.888 and an RMSE of 0.912. These outcomes emphasized the ability of the LSTM model to produce accurate SOC forecasts for the digital twin’s online phase.
5.2. Evaluating the Synthetic Data Generated by the TS-GAN
The discriminator and generator have contrasting roles in a GAN: generators aim to create data that cannot be distinguished from real data, while discriminators determine the authenticity of the data. Data
generation is continuously improved as a result of this dynamic interplay. In
Figure 11
, we visualize the generator’s output for battery B0005 at various epochs, ranging from 5 to 100. As the number of epochs increased, the synthetic data generated by the GAN progressively resembled
real data more closely.
To evaluate the quality of the synthetic data produced by the TS-GAN, we employed the FID metric. According to
Figure 15
, there were no significant changes in the FID values after epoch 95, indicating that the generated data closely resembled the real data. Furthermore,
Table 6
presents the FID results for the voltage, current, and temperature, demonstrating that at epoch 1000, we achieved the lowest FID scores. These findings demonstrate that the TS-GAN model generated
synthetic data that closely resembled real-world observations of Li-ion batteries. As the FID values gradually became more stable after epoch 95, the TS-GAN model appeared to learn the underlying
data distribution over time. Generating synthetic data is important in situations where obtaining real-time data is difficult, as it can be a useful replacement for real-time information.
Moreover, the lowest FID scores achieved at epoch 1000, as indicated in
Table 6
, signified the model’s potential to consistently produce high-quality synthetic data. This has significant implications for scenarios where historical data must be leveraged for predictive modeling
and decision making, as it offers a means of bridging the gap between historical and real-time data.
5.3. Performance of the LSTM-Based SOC Estimator in the Digital Twin
We used an LSTM in our digital twin application to estimate the real-time state of charge for lithium-ion batteries. It was crucial for the LSTM used in the online component to have the same
architecture as that used offline.
Figure 16
shows the forecasted SOC values for batteries B0005, B0006, B0007, and B0018, which closely resembled the anticipated SOC values from the manufacturer data. We used a
$t$-test and an F-test to compare these two SOC values. It appeared that all of the batteries (B0005, B0006, B0007, and B0018) exhibited similar error metrics (MSE and MAE) between the digital twin and
offline LSTM model based on the provided table of statistical test results.
The values of t in
Table 7
(test statistic) indicate how many standard errors the means of the two groups (digital twin and offline LSTM) were separated by. These t-values were very close to zero, indicating that the means of
the two groups were very similar.
The data points used in the analysis were expressed in degrees of freedom (df). A large dataset suggests a robust analysis in this case, as there are many degrees of freedom. The critical value (cv)
was used to compare the test statistic (t) against a threshold. The threshold in this case was set to 1.645, which is commonly used for a significance level of 0.05. The p-value represents the
probability of obtaining results as extreme as those observed in the sample, assuming that the null hypothesis is true. In all cases, the p-values were very high (close to 1), suggesting that there
was no statistically significant difference in the error metrics between the digital twin and the offline LSTM model. The interpretations via the critical value and p-value both led to the same
conclusion: We accepted the null hypothesis. This meant that there was no statistically significant difference in the accuracy (as measured with the MSE and MAE) between the digital twin and the
offline LSTM model for any of the batteries (B0005, B0006, B0007, and B0018).
When it came to estimating the SOC of these batteries, the digital twin model was just as effective as the offline LSTM model. Moreover,
Table 8
displays the results of an ANOVA that was conducted to evaluate the digital twin’s performance across different Li-ion batteries, namely, B0005, B0006, B0007, and B0018. The column labeled “State”
represents the level of diversity present within each group of batteries. When considering this information, it can be inferred that values such as 0.015, 0.025, 0.009, and 0.024 indicate a
relatively low degree of variability. The “
$p$” column provides the $p$-values associated with the ANOVA. The higher $p$-values, such as 0.902, 0.874, 0.941, and 0.876, indicate that there was no strong evidence to reject the null hypothesis, suggesting that the performance metrics across these batteries were probably
from the same distribution. Put simply, the ANOVA suggested that the digital twin’s accuracy remained consistent across all of the tested batteries, regardless of the specific type of Li-ion battery
used, indicating that there was no significant difference in its performance.
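The one-way ANOVA underlying this comparison reduces to an F statistic, the ratio of the between-group to the within-group mean square. A compact sketch with hypothetical per-battery error samples, not our recorded values:

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical SOC-error samples for batteries B0005, B0006, B0007, and B0018
groups = [[0.90, 0.92, 0.89], [0.91, 0.90, 0.93], [0.89, 0.91, 0.90], [0.92, 0.90, 0.91]]
f_stat = anova_f(groups)
# A small F (with a correspondingly large p-value) indicates the groups are statistically alike
```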
6. Conclusions
We have demonstrated the immense potential of digital twin technology in simulating and predicting lithium-ion batteries’ SOC. We found that our results were closely aligned with manufacturer data
when we used a time-series generative adversarial network (TS-GAN) to generate synthetic data. These results have great potential for reducing energy usage in EVs and present interesting
opportunities for future research efforts. We have provided valuable insight into SOC estimation for lithium-ion batteries, but we must acknowledge that there are certain limitations, particularly
those related to data availability. Despite these challenges, our efforts are a significant addition to the rapidly developing area of energy for electric cars, and they highlight the importance of
digital twin technology in overseeing batteries. The use of digital twin technology presents a significant change in the way that we oversee and enhance lithium-ion batteries, as demonstrated in our
study. Moreover, by applying t-tests and ANOVAs, we demonstrated the consistency and accuracy of the digital twin across various Li-ion battery types. This comparison with previous work illustrates
the robustness and versatility of our approach. Digital twins are used to generate a virtual replica of an actual battery, allowing for precise monitoring and prediction in real time and ensuring
optimal energy usage. This research will serve as a catalyst for further exploration and innovation in green energy, with the ultimate goal of fostering a sustainable and environmentally friendly future.
Author Contributions
Conceptualization, M.P.; investigation, M.P.; writing—original draft preparation, M.P.; writing—review and editing, I.S.; supervision, I.S.; funding acquisition, I.S. All authors have read and agreed
to the published version of the manuscript.
This work was supported by a grant from the National Research Foundation of Korea (NRF) funded by the Korean government (MSIT) (No. RS-2023-00252328).
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare no conflict of interest.
1. Zhang, Q.; Wang, D.; Yang, B.; Dong, H.; Zhu, C.; Hao, Z. An electrochemical impedance model of lithium-ion battery for electric vehicle application. J. Energy Storage 2022, 50, 104182. [Google
Scholar] [CrossRef]
2. Xiong, R.; Tian, J.; Shen, W.; Sun, F. A novel fractional order model for state of charge estimation in lithium ion batteries. IEEE Trans. Veh. Technol. 2018, 68, 4130–4139. [Google Scholar] [CrossRef]
3. Ma, L.; Xu, Y.; Zhang, H.; Yang, F.; Wang, X.; Li, C. Co-estimation of state of charge and state of health for lithium-ion batteries based on fractional-order model with multi-innovations
unscented Kalman filter method. J. Energy Storage 2022, 52, 104904. [Google Scholar] [CrossRef]
4. Cao, M.; Zhang, T.; Wang, J.; Liu, Y. A deep belief network approach to remaining capacity estimation for lithium-ion batteries based on charging process features. J. Energy Storage 2022, 48,
103825. [Google Scholar] [CrossRef]
5. Wang, Y.; Huang, H.; Wang, H. A new method for fast state of charge estimation using retired battery parameters. J. Energy Storage 2022, 55, 105621. [Google Scholar] [CrossRef]
6. Li, Y.; Zou, C.; Berecibar, M.; Nanini-Maury, E.; Chan, J.; Van den Bossche, P.; Van Mierlo, J.; Omar, N. Random forest regression for online capacity estimation of lithium-ion batteries. Appl.
Energy 2018, 232, 197–210. [Google Scholar] [CrossRef]
7. Nuhic, A.; Terzimehic, T.; Soczka-Guth, T.; Buchholz, M.; Dietmayer, K. Health diagnosis and remaining useful life prognostics of lithium-ion batteries using data-driven methods. J. Power Source
2013, 239, 680–688. [Google Scholar] [CrossRef]
8. Muh, K.; Caliwag, A.; Jeon, I.; Lim, W. Co-Estimation of SoC and SoP Using BiLSTM. J. Korean Inst. Commun. Sci. 2021, 46, 314–323. [Google Scholar]
9. Du, J.; Liu, Z.; Wang, Y.; Wen, C. A fuzzy logic-based model for Li-ion battery with SOC and temperature effect. In Proceedings of the 11th IEEE International Conference on Control & Automation
(ICCA), Taichung, Taiwan, 18–20 June 2014; pp. 1333–1338. [Google Scholar] [CrossRef]
10. Cui, Z.; Wang, L.; Li, Q.; Wang, K. A comprehensive review on the state of charge estimation for lithium-ion battery based on neural network. Int. J. Energy Res. 2022, 46, 5423–5440. [Google
Scholar] [CrossRef]
11. Du, Z.; Zuo, L.; Li, J.; Liu, Y.; Shen, H. Data-driven estimation of remaining useful lifetime and state of charge for lithium-ion battery. IEEE Trans. Transp. Electrif. 2021, 8, 356–367. [Google
Scholar] [CrossRef]
12. Tightiz, L.; Dang, L.; Yoo, J. Novel deep deterministic policy gradient technique for automated micro-grid energy management in rural and islanded areas. Alex. Eng. J. 2023, 82, 145–153. [Google
Scholar] [CrossRef]
13. Pooyandeh, M.; Han, K.; Sohn, I. Cybersecurity in the AI-Based metaverse: A survey. Appl. Sci. 2022, 12, 12993. [Google Scholar] [CrossRef]
14. Alabugin, A.; Osintsev, K.; Aliukov, S.; Almetova, Z.; Bolkov, Y. Mathematical Foundations for Modeling a Zero-Carbon Electric Power System in Terms of Sustainability. Mathematics 2023, 11, 2180.
[Google Scholar] [CrossRef]
15. Artetxe, E.; Uralde, J.; Barambones, O.; Calvo, I.; Martin, I. Maximum Power Point Tracker Controller for Solar Photovoltaic Based on Reinforcement Learning Agent with a Digital Twin. Mathematics
2023, 11, 2166. [Google Scholar] [CrossRef]
16. Tong, S.; Lacap, J.; Park, J. Battery state of charge estimation using a load-classifying neural network. J. Energy Storage 2016, 7, 236–243. [Google Scholar] [CrossRef]
17. Chaoui, H.; Ibe-Ekeocha, C. State of charge and state of health estimation for lithium batteries using recurrent neural networks. IEEE Trans. Veh. Technol. 2017, 66, 8773–8783. [Google Scholar] [CrossRef]
18. Wang, H.; Ou, S.; Dahlhaug, O.; Storli, P.; Skjelbred, H.; Vilberg, I. Adaptively Learned Modeling for a Digital Twin of Hydropower Turbines with Application to a Pilot Testing System.
Mathematics 2023, 11, 4012. [Google Scholar] [CrossRef]
19. Ma, L.; Wang, Z.; Yang, F.; Cheng, Y.; Lu, C.; Tao, L.; Zhou, T. Robust state of charge estimation based on a sequence-to-sequence mapping model with process information. J. Power Source 2020,
474, 228691. [Google Scholar] [CrossRef]
20. Bian, C.; He, H.; Yang, S.; Huang, T. State-of-charge sequence estimation of lithium-ion battery based on bidirectional long short-term memory encoder-decoder architecture. J. Power Source 2020,
449, 227558. [Google Scholar] [CrossRef]
21. Savargaonkar, M.; Oyewole, I.; Chehade, A.; Hussein, A. Uncorrelated Sparse Autoencoder with Long Short-Term Memory for State-of-Charge Estimations in Lithium-Ion Battery Cells. IEEE Trans.
Autom. Sci. Eng. 2022, 1–12. [Google Scholar] [CrossRef]
22. Almaita, E.; Alshkoor, S.; Abdelsalam, E.; Almomani, F. State of charge estimation for a group of lithium-ion batteries using long short-term memory neural network. J. Energy Storage 2022, 52,
104761. [Google Scholar] [CrossRef]
23. Ren, X.; Liu, S.; Yu, X.; Dong, X. A method for state-of-charge estimation of lithium-ion batteries based on PSO-LSTM. Energy 2021, 234, 121236. [Google Scholar] [CrossRef]
24. Jia, K.; Gao, Z.; Ma, R.; Chai, H.; Sun, S. An Adaptive Optimization Algorithm in LSTM for SOC Estimation Based on Improved Borges Derivative. IEEE Trans. Ind. Inform. 2023, 1–12. [Google Scholar
] [CrossRef]
25. Chemali, E.; Kollmeyer, P.J.; Preindl, M.; Ahmed, R.; Emadi, A. Long short-term memory networks for accurate state-of-charge estimation of Li-ion batteries. IEEE Trans. Ind. Electron. 2017, 65,
6730–6739. [Google Scholar] [CrossRef]
26. Pascual, A.; Caliwag, A.; Lim, W. Implementation of the Battery Monitoring and Control System Using Edge-Cloud Computing. Korean J. Commun. Stud. 2022, 47, 770–780. [Google Scholar] [CrossRef]
27. Wu, B.; Widanage, W.; Yang, S.; Liu, X. Battery digital twins: Perspectives on the fusion of models, data and artificial intelligence for smart battery management systems. Energy AI 2020, 1,
100016. [Google Scholar] [CrossRef]
28. Ramachandran, R.; Subathra, B.; Srinivasan, S. Recursive estimation of battery pack parameters in electric vehicles. In Proceedings of the 2018 IEEE International Conference on Computational
Intelligence and Computing Research (ICCIC), Madurai, India, 13–15 December 2018; pp. 1–7. [Google Scholar]
29. Peng, Y.; Zhang, X.; Song, Y.; Liu, D. A low cost flexible digital twin platform for spacecraft lithium-ion battery pack degradation assessment. In Proceedings of the 2019 IEEE International
Instrumentation and Measurement Technology Conference (I2MTC), Auckland, New Zealand, 20–23 May 2019; pp. 1–6. [Google Scholar]
30. Li, W.; Rentemeister, M.; Badeda, J.; Jöst, D.; Schulte, D.; Sauer, D. Digital twin for battery systems: Cloud battery management system with online state-of-charge and state-of-health
estimation. J. Energy Storage 2020, 30, 101557. [Google Scholar] [CrossRef]
31. Qu, X.; Song, Y.; Liu, D.; Cui, X.; Peng, Y. Lithium-ion battery performance degradation evaluation in dynamic operating conditions based on a digital twin model. Microelectron. Reliab. 2020, 114
, 113857. [Google Scholar] [CrossRef]
32. Panwar, N.; Singh, S.; Garg, A.; Gupta, A.; Gao, L. Recent advancements in battery management system for Li-ion batteries of electric vehicles: Future role of digital twin, cyber-physical
systems, battery swapping technology, and nondestructive testing. Energy Technol. 2021, 9, 2000984. [Google Scholar] [CrossRef]
33. Singh, S.; Weeber, M.; Birke, K. Implementation of battery digital twin: Approach, functionalities and benefits. Batteries 2021, 7, 78. [Google Scholar] [CrossRef]
34. Li, H.; Kaleem, M.; Chiu, I.; Gao, D.; Peng, J. A digital twin model for the battery management systems of electric vehicles. In Proceedings of the 2021 IEEE 23rd Int Conf on High Performance
Computing & Communications; 7th Int Conf on Data Science & Systems; 19th Int Conf on Smart City; 7th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/
SmartCity/DependSys), Haikou, China, 20–22 December 2021; pp. 1100–1107. [Google Scholar] [CrossRef]
35. Qin, Y.; Arunan, A.; Yuen, C. Digital twin for real-time Li-ion battery state of health estimation with partially discharged cycling data. IEEE Trans. Ind. Inform. 2023, 19, 7247–7257. [Google
Scholar] [CrossRef]
36. Li, A.; Weng, J.; Yuen, A.; Wang, W.; Liu, H.; Lee, E.; Wang, J.; Kook, S.; Yeoh, G. Machine learning assisted advanced battery thermal management system: A state-of-the-art review. J. Energy
Storage 2023, 60, 106688. [Google Scholar] [CrossRef]
37. Chaudhari, P.; Agrawal, H.; Kotecha, K. Data augmentation using MG-GAN for improved cancer classification on gene expression data. Soft Comput. 2020, 24, 11381–11391. [Google Scholar] [CrossRef]
38. Yoon, J.; Drumright, L.; Van Der Schaar, M. Anonymization through data synthesis using generative adversarial networks (ads-gan). IEEE J. Biomed. Health Inform. 2020, 24, 2378–2388. [Google
Scholar] [CrossRef] [PubMed]
39. Assefa, S.; Dervovic, D.; Mahfouz, M.; Tillman, R.; Reddy, P.; Veloso, M. Generating synthetic data in finance: Opportunities, challenges and pitfalls. In Proceedings of the First ACM
International Conference on AI in Finance, New York, NY, USA, 15–16 October 2020; pp. 1–8. [Google Scholar] [CrossRef]
40. Pooyandeh, M.; Sohn, I. Edge network optimization based on ai techniques: A survey. Electronics 2021, 10, 2830. [Google Scholar] [CrossRef]
41. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
42. Keskar, N.; Socher, R. Improving generalization performance by switching from adam to sgd. arXiv 2017, arXiv:1712.07628. [Google Scholar] [CrossRef]
43. Yoon, J.; Jarrett, D.; Van der Schaar, M. Time-series generative adversarial networks. Adv. Neural Inf. Process. Syst. 2019, 32, 5509–5519. [Google Scholar]
44. Fei, C. Lithium-Ion Battery Data Set. 2022. Available online: https://ieee-dataport.org/documents/lithium-ion-battery-data-set (accessed on 30 November 2023).
45. Sagheer, A.; Kotb, M. Unsupervised pre-training of a deep LSTM-based stacked autoencoder for multivariate time series forecasting problems. Sci. Rep. 2019, 9, 19038. [Google Scholar] [CrossRef]
46. Munir, M.; Siddiqui, S.; Dengel, A.; Ahmed, S. DeepAnT: A deep learning approach for unsupervised anomaly detection in time series. IEEE Access 2018, 7, 1991–2005. [Google Scholar] [CrossRef]
47. Fontes, C.; Pereira, O. Pattern recognition in multivariate time series—A case study applied to fault detection in a gas turbine. Eng. Appl. Artif. Intell. 2016, 49, 10–18. [Google Scholar] [CrossRef]
48. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar] [CrossRef]
49. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [
Google Scholar]
50. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. arXiv 2017, arXiv:1706.08500. [Google Scholar]
Figure 1. Incorporated digital twin and physical system architecture for lithium-ion battery monitoring.
Figure 6. The temporal variations in voltage, current, and temperature for battery B0005 over 168 cycles of charging and discharging.
Figure 7. A comparison of the changes in SOC over 168 cycles of charging and discharging for battery B0005 and 132 cycles of charging and discharging for batteries B0006, B0007, and B0018.
Figure 9. Comparative analysis of SOC prediction using offline LSTM modeling for 45,122 samples with learning rates of 0.05, 0.03, 0.01, and 0.008 with Adam and SGD in battery B0005.
Figure 10. Comparative analysis of SOC prediction using offline LSTM modeling for 45,122 samples with learning rates of 0.05, 0.03, 0.01, and 0.008 with SWATS in batteries (a) B0005, (b) B0006, (c)
B0007, and (d) B0018.
Figure 11. Comparative analysis of SOC prediction using offline GRU modeling for 45,122 samples with learning rates of 0.05, 0.03, 0.01, and 0.008 with SWATS, Adam, and SGD in battery B0005.
Figure 12. The hybrid TS-GAN architecture: combining a GAN with an autoencoder to generate lithium-ion battery data.
Figure 13. Synthetic voltage generated using the TS-GAN. (a) Epoch = 5, (b) epoch = 25, (c) epoch = 50, (d) epoch = 75, (e) epoch = 100, and (f) real data.
Figure 14. Analysis of actual (a) voltages, (b) temperatures, (c) currents, and their generated counterparts.
Figure 15. Analysis of actual (a) voltages, (b) currents, (c) temperatures, and their generated counterparts for B0005.
Figure 16. FID score results for the (a) voltage, (b) temperature, and (c) current generated by the TS-GAN.
Figure 17. A comparison of the predicted SOC from the digital twin and the expected SOC from historical data.
Figure 18. LSTM model gradient for battery B0005 in SOC estimation by the digital twin with learning rates of 0.05, 0.03, 0.01, and 0.008.
Figure 19. Training convergence analysis of the LSTM model loss in SOC estimation by the digital twin for battery B0005 with learning rates of 0.05, 0.03, 0.01, and 0.008.
Figure 20. Distribution of prediction errors in the LSTM model’s training in SOC estimation for lithium-ion batteries in the digital twin.
Figure 21. Distribution of prediction errors in the testing of the LSTM model in SOC estimation for lithium-ion batteries in the digital twin.
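Figure 16 above reports Fréchet Inception Distance (FID) scores for the generated voltage, temperature, and current signals. The general FID compares multivariate Gaussians fitted to feature embeddings; for a single scalar signal it reduces to a simple closed form. A minimal univariate sketch (my own illustration, not the paper's code):

```python
import math

def fid_1d(xs, ys):
    """Frechet distance between univariate Gaussians fitted to two
    samples: (mu1 - mu2)^2 + (sigma1 - sigma2)^2."""
    m1 = sum(xs) / len(xs)
    m2 = sum(ys) / len(ys)
    s1 = math.sqrt(sum((x - m1) ** 2 for x in xs) / len(xs))
    s2 = math.sqrt(sum((y - m2) ** 2 for y in ys) / len(ys))
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

# Identical samples give a distance of 0; shifted samples do not.
real = [3.7, 3.8, 4.0, 4.1]   # illustrative voltages, not NASA dataset values
fake = [3.6, 3.9, 4.0, 4.2]
print(fid_1d(real, real))  # 0.0
print(fid_1d(real, fake))
```

Lower scores mean the synthetic distribution is closer to the real one, which is how the values in Figure 16 should be read.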
Table 1. Evaluating the forecasting performance of the LSTM and GRU Models with varied optimizers and learning rates for battery B0005.
Type MAE (LR = 0.008) RMSE (LR = 0.008) MAE (LR = 0.01) RMSE (LR = 0.01) MAE (LR = 0.03) RMSE (LR = 0.03) MAE (LR = 0.05) RMSE (LR = 0.05)
LSTM (Adam) 1.432% 1.456% 1.447% 1.466% 2.525% 2.743% 7.182% 7.223%
LSTM (SGD) 9.432% 10.156% 2.943% 3.231% 11.112% 11.143% 11.109% 11.140%
GRU (Adam) 5.716% 5.762% 5.341% 5.373% 4.232% 4.336% 1.432% 1.456%
GRU (SGD) 3.286% 3.293% 3.777% 3.746% 3.214% 3.273% 1.714% 1.773%
LSTM (SWATS) 1.591% 1.598% 1.599% 1.603% 0.888% 0.912% 1.388% 1.452%
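Tables 1–4 score the SOC forecasts by mean absolute error (MAE) and root mean squared error (RMSE). For reference, these metrics are computed as follows — a minimal pure-Python sketch with toy SOC values (the function names and data are mine, not the paper's):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the prediction errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors more than MAE."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy SOC values in percent (illustrative only).
actual    = [90.0, 80.0, 70.0, 60.0]
predicted = [89.0, 81.0, 70.0, 58.0]
print(mae(actual, predicted))   # 1.0
print(rmse(actual, predicted))  # ~1.2247
```

RMSE ≥ MAE always holds, so a large gap between the two columns signals occasional large errors rather than uniformly small ones.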
Table 2. Evaluating the forecasting performance of the LSTM and GRU Models with varied optimizers and learning rates for battery B0006.
Type MAE (LR = 0.008) RMSE (LR = 0.008) MAE (LR = 0.01) RMSE (LR = 0.01) MAE (LR = 0.03) RMSE (LR = 0.03) MAE (LR = 0.05) RMSE (LR = 0.05)
LSTM (Adam) 3.743% 3.559% 3.243% 3.319% 3.243% 3.319% 6.901% 7.103%
LSTM (SGD) 5.634% 6.751% 5.953% 6.761% 5.953% 6.761% 10.334% 10.440%
GRU (Adam) 10.023% 10.306% 9.841% 9.956% 9.841% 9.956% 1.257% 1.266%
GRU (SGD) 7.965% 7.944% 7.677% 7.416% 7.677% 7.416% 1.342% 1.553%
LSTM (SWATS) 3.265% 3.273% 3.117% 3.168% 3.211% 3.298% 1.005% 1.023%
Table 3. Evaluating the forecasting performance of the LSTM and GRU models with varied optimizers and learning rates for battery B0007.
Type MAE (LR = 0.008) RMSE (LR = 0.008) MAE (LR = 0.01) RMSE (LR = 0.01) MAE (LR = 0.03) RMSE (LR = 0.03) MAE (LR = 0.05) RMSE (LR = 0.05)
LSTM (Adam) 9.198% 9.911% 8.850% 9.080% 9.652% 9.701% 9.530% 9.670%
LSTM (SGD) 11.440% 11.990% 11.124% 11.320% 11.777% 11.889% 11.651% 11.872%
GRU (Adam) 2.690% 2.454% 1.607% 1.645% 2.021% 2.102% 1.153% 1.243%
GRU (SGD) 2.223% 2.303% 1.742% 1.943% 2.228% 2.501% 1.225% 1.382%
LSTM (SWATS) 1.889% 1.908% 1.365% 1.420% 1.301% 1.901% 1.023% 1.102%
Table 4. Evaluating the forecasting performance of LSTM and GRU models with varied optimizers and learning rates for battery B0018.
Type MAE (LR = 0.008) RMSE (LR = 0.008) MAE (LR = 0.01) RMSE (LR = 0.01) MAE (LR = 0.03) RMSE (LR = 0.03) MAE (LR = 0.05) RMSE (LR = 0.05)
LSTM (Adam) 1.877% 1.910% 1.996% 2.027% 9.332% 3.319% 6.901% 7.103%
LSTM (SGD) 10.672% 10.712% 10.856% 10.943% 11.602% 11.693% 11.777% 11.927%
GRU (Adam) 4.475% 4.631% 4.452% 4.651% 2.690% 2.721% 2.881% 2.911%
GRU (SGD) 2.372% 2.523% 2.697% 2.887% 2.438% 2.434% 2.6048% 2.674%
LSTM (SWATS) 1.355% 1.442% 1.612% 1.724% 1.923% 2.002% 2.281% 2.339%
Table 5. Comparison of the loss function results for LSTM with the SWATS optimizer based on the number of epochs.
Epoch B0005 (LR = 0.03) B0006 (LR = 0.05) B0007 (LR = 0.008) B0018 (LR = 0.01)
10 $1.765 \times 10^{-1}$ $1.266 \times 10^{-1}$ $2.443 \times 10^{-1}$ $1.978 \times 10^{-1}$
20 $9.219 \times 10^{-2}$ $7.219 \times 10^{-2}$ $3.053 \times 10^{-2}$ $2.016 \times 10^{-2}$
30 $8.031 \times 10^{-3}$ $3.178 \times 10^{-2}$ $1.724 \times 10^{-2}$ $9.711 \times 10^{-3}$
40 $5.045 \times 10^{-3}$ $2.730 \times 10^{-2}$ $7.203 \times 10^{-3}$ $4.932 \times 10^{-3}$
50 $3.724 \times 10^{-3}$ $2.12 \times 10^{-2}$ $5.125 \times 10^{-3}$ $2.311 \times 10^{-3}$
60 $9.589 \times 10^{-4}$ $1.621 \times 10^{-2}$ $2.489 \times 10^{-3}$ $1.554 \times 10^{-3}$
70 $7.956 \times 10^{-4}$ $1.067 \times 10^{-2}$ $1.223 \times 10^{-3}$ $8.329 \times 10^{-4}$
80 $9.271 \times 10^{-5}$ $9.944 \times 10^{-3}$ $8.175 \times 10^{-4}$ $5.956 \times 10^{-4}$
90 $8.331 \times 10^{-5}$ $1.118 \times 10^{-4}$ $2.105 \times 10^{-4}$ $1.280 \times 10^{-4}$
100 $8.317 \times 10^{-5}$ $1.047 \times 10^{-4}$ $2.079 \times 10^{-4}$ $1.214 \times 10^{-4}$
Battery Voltage Temperature Current
B0005 0.044 0.034 0.069
B0006 0.041 0.032 0.072
B0007 0.038 0.036 0.082
B0018 0.031 0.042 0.075
Battery t df cv p Result
B0005 0.012 90242 1.645 0.990 Fail to reject null hypothesis ^1
B0006 0.039 88704 1.645 0.969 Fail to reject null hypothesis
B0007 0.043 96592 1.645 0.966 Fail to reject null hypothesis
B0018 0.086 311980 1.645 0.931 Fail to reject null hypothesis
^1 A low p-value (<0.05) leads to rejection of the null hypothesis, while a high p-value (>0.05) means the null hypothesis cannot be rejected.
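The t statistics above are compared against the critical value (cv) to decide whether the real and generated data differ in mean. A minimal pure-Python sketch of Welch's two-sample t statistic with illustrative voltages (the function and data are mine, not the paper's; the p-value additionally requires the t distribution's CDF, omitted here):

```python
import math

def welch_t(xs, ys):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (Welch-Satterthwaite), without assuming equal variances."""
    n1, n2 = len(xs), len(ys)
    m1, m2 = sum(xs) / n1, sum(ys) / n2
    v1 = sum((x - m1) ** 2 for x in xs) / (n1 - 1)  # sample variances
    v2 = sum((y - m2) ** 2 for y in ys) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t([3.9, 4.0, 4.1, 4.0], [4.0, 3.9, 4.1, 4.1])
print(t, df)  # |t| well below cv = 1.645 -> fail to reject the null hypothesis
```

With |t| smaller than the critical value, the test gives no evidence that the two means differ, which is exactly the pattern the table reports for all four batteries.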
Battery State p Result
B0005 0.015 0.902 Probably ^1 the same distribution
B0006 0.025 0.874 Probably the same distribution
B0007 0.009 0.941 Probably the same distribution
B0018 0.024 0.876 Probably the same distribution
^1 “Probably” indicates a high likelihood that the two compared SOC sequences follow the same distribution.
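The comparison above resembles a two-sample distribution test such as Kolmogorov–Smirnov (the excerpt does not name the exact test, so treat this as an assumption). As an illustration, the two-sample KS statistic is the largest gap between the two empirical CDFs:

```python
def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    gap between the empirical CDFs of the two samples."""
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for v in xs + ys:  # the supremum is attained at a sample point
        f1 = sum(1 for x in xs if x <= v) / len(xs)
        f2 = sum(1 for y in ys if y <= v) / len(ys)
        d = max(d, abs(f1 - f2))
    return d

print(ks_statistic([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.0 -> same distribution
print(ks_statistic([1, 2, 3, 4], [5, 6, 7, 8]))  # 1.0 -> disjoint supports
```

A small statistic with a large p-value, as in the table, is consistent with the two SOC sequences sharing a distribution.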
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Pooyandeh, M.; Sohn, I. Smart Lithium-Ion Battery Monitoring in Electric Vehicles: An AI-Empowered Digital Twin Approach. Mathematics 2023, 11, 4865. https://doi.org/10.3390/math11234865
AMA Style
Pooyandeh M, Sohn I. Smart Lithium-Ion Battery Monitoring in Electric Vehicles: An AI-Empowered Digital Twin Approach. Mathematics. 2023; 11(23):4865. https://doi.org/10.3390/math11234865
Chicago/Turabian Style
Pooyandeh, Mitra, and Insoo Sohn. 2023. "Smart Lithium-Ion Battery Monitoring in Electric Vehicles: An AI-Empowered Digital Twin Approach" Mathematics 11, no. 23: 4865. https://doi.org/10.3390/
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/2227-7390/11/23/4865","timestamp":"2024-11-04T14:14:24Z","content_type":"text/html","content_length":"634407","record_id":"<urn:uuid:321522b9-548c-4c9a-83d7-0501a72dffa2>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00655.warc.gz"} |
Set* (pointed sets) and Set non-equivalence
This is my first post here at nForum, and I’m a bit unsure whether my question is appropriate here, so please let me know if it should be moved or deleted. However, I’m just beginning to study
Category Theory on my own, using Adamek, Herrlich and Strecker’s “Abstract and Concrete Categories”. I’m having a bit of a stumbling block with one of the exercises and would like to discuss it with you.
The exercise just asks to show that $Set \ncong Set_*$, where $Set_*$ is the category of pointed sets and $\cong$ denotes equivalence of categories. Could somebody provide me with a little direction? I’d
rather work from a hint than receive an answer, but I know that’s sometimes difficult for these elementary problems.
Think about the ’obvious’ functor $Set_* \to Set$, and see if it satisfies the criteria for a functor to be an equivalence of categories. I would link to equivalence of categories, but it takes a
very sophisticated, n-categorical look at the concept, so I don’t recommend it.
but it takes a very sophisticated, n-categorical look
I’d say what makes this entry look sophisticated is not its $n$-categorical attitude, but that it pays so much attention to the axiom of choice.
I don’t recommend it.
Hopefully then somebody finds the time to improve it!
Hopefully then somebody finds the time to improve it!
Okay, I did it myself. I reworked the beginning of equivalence of categories. And stated more prominently the central theorem that David wanted to point out to xelxebar, that if the axiom of choice
does hold, then a functor is part of an equivalence precisely if it is essentially surjective and full and faithful.
if the axiom of choice does hold, then a functor is part of an equivalence precisely if it is essentially surjective and full and faithful
But that doesn’t depend on the axiom of choice at all! It’s just that people who believe in choice use a different definition, which is then wrong in the absence of choice.
The old version of the article was more or less even-handed, but if you’re going to rewrite it to single out a particular definition, then let’s make it the right one.
@xelxebar: think a bit about initial and terminal objects in each of those categories. (And I wouldn’t worry much at the beginning about all this talk of the axiom of choice, not at this point of
your education. You can use the definition that involves two functors and two natural isomorphisms.)
I wrote:
It’s just that people who believe in choice use a different definition, which is then wrong in the absence of choice.
Actually, on second thought, that’s not really true; Urs’s definition is fine even in the absence of choice, as long as you define ‘functor’ correctly. So I’ll put the article back more like how he
had it.
@ xelxebar
Todd is right; don’t worry at all about this axiom-of-choice business. Your problem is much simpler than any of that. Take any characterisation of equivalence of categories and prove that an
equivalence must map initial objects to initial objects and terminal objects to terminal objects. Then combine this with Todd’s suggestion, and you should have your answer.
Perhaps xelxebar might think about initial and terminal objects in the two categories. Then think about why the possession of zero objects must be preserved under equivalence.
Edit: I see similar advice came in while writing the above. Must be good advice then.
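A worked summary of the argument the hints point at, for readers checking their answer afterwards (this sketch is mine, not quoted from the thread):

```latex
% Equivalences preserve initial and terminal objects, since both are
% defined by universal properties, which are invariant under equivalence.
\[
\mathbf{Set}:\quad \text{initial} = \varnothing,\qquad
\text{terminal} = \{\ast\},\qquad \varnothing \ncong \{\ast\},
\]
so $\mathbf{Set}$ has no zero object, while in $\mathbf{Set}_\ast$ the
one-point pointed set $(\{\ast\},\ast)$ is both initial and terminal,
i.e.\ a zero object. An equivalence $F\colon \mathbf{Set}_\ast \to
\mathbf{Set}$ would carry this zero object to an object of $\mathbf{Set}$
that is simultaneously initial and terminal, which does not exist; hence
$\mathbf{Set} \not\simeq \mathbf{Set}_\ast$.
```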
Thank you very much for your advice. It got me on the right track pretty quickly. | {"url":"https://nforum.ncatlab.org/discussion/2189/set-pointed-sets-and-set-nonequivalence/","timestamp":"2024-11-09T07:12:23Z","content_type":"application/xhtml+xml","content_length":"27531","record_id":"<urn:uuid:85ce22d2-d5bd-40b2-85a3-4692fdd9c513>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00271.warc.gz"} |
Equations in BallBerry4aP
Equations in Environment
Variable C_a : Carbon dioxide concentration (umol CO2 (mol air)^-1)
C_a = graph(time())
Typical diurnal curve in forest canopy
Variable H : Relative humidity (proportion)
H = graph(time())
Typical diurnal graph (24 hour)
Variable Q : Photon flux density (umol m^-2 s^-1)
Q = graph(time())
Graph for a sunny day (24 hours)
Equations in Ball-Berry
Variable Gs_start
Gs_start = if time()==0 then g_0 else last(Gs_0)
Gs_0=Iteration time step/Gs_0
Variable g_0 : Stomatal conductance in the dark (mol m^-2 s^-1)
g_0 = 0.01
Variable g_1 : Ball-Berry stomatal conductance coefficient
g_1 = 23
Equations in Iteration time step
Variable Gs
Gs = if loop_count==0 then Gs_start else Gs_0
Variable Gs_0 : Stomatal conductance (mol m^-2 s^-1)
Gs_0 = g_0+g_1*A*H/C_a
Ball-Berry equation
Variable loop_count
loop_count = iterations(al1)
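The listing couples stomatal conductance Gs and assimilation A (the Assimilation equations below set A = A_Q·Gs), so Gs_0 = g_0 + g_1·A·H/C_a is solved by iterating to a fixed point. A minimal Python sketch of that loop, using the listing's g_0 = 0.01 and g_1 = 23 (the function name and example inputs are mine):

```python
def ball_berry_gs(A_Q, H, C_a, g0=0.01, g1=23.0, tol=1e-9, max_iter=100):
    """Iterate the coupled Ball-Berry equations to a fixed point:
       A  = A_Q * Gs              (assimilation scales with conductance)
       Gs = g0 + g1 * A * H / C_a (Ball-Berry equation)
    A_Q is the light-response factor at the current photon flux density."""
    gs = g0  # start from the dark conductance, as Gs_start does at time 0
    for _ in range(max_iter):
        a = A_Q * gs
        gs_new = g0 + g1 * a * H / C_a
        if abs(gs_new - gs) < tol:  # converged
            return gs_new
        gs = gs_new
    return gs

# Illustrative daytime inputs, not values read off the model's graphs.
print(ball_berry_gs(A_Q=10.0, H=0.6, C_a=380.0))
```

Because the update is linear in Gs with slope g_1·A_Q·H/C_a, the iteration converges geometrically whenever that slope is below 1, which holds for typical inputs like these.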
Equations in Assimilation
Variable A : Assimation (umol CO2 m^-2 s^-1)
A = A_Q*Gs
Variable A_Q : Assimilation light response curve
A_Q = graph(Q)
Relationship of Assimilation with photon flux density (light) when stomatal conductance (Gs) is maximum | {"url":"https://simulistics.com/modeltags/iteration","timestamp":"2024-11-10T12:54:32Z","content_type":"application/xhtml+xml","content_length":"29805","record_id":"<urn:uuid:ab636389-5601-4055-ada9-081ba5974755>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00469.warc.gz"} |
General convergent expectation maximization (EM)-type algorithms for image reconstruction
| {"url":"https://www.aimsciences.org/article/doi/10.3934/ipi.2013.7.1007","timestamp":"2024-11-07T23:21:19Z","content_type":"text/html","content_length":"140092","record_id":"<urn:uuid:4ccfbc4d-8b81-4ab0-ae5a-9756667b2ea1>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00572.warc.gz"}
Improved 3D biomechanical model for evaluation of mass and inertial parameters in few body positions from NASA classification
The aim of the current article is: 1) to present a 20-segmental biomechanical model of the male human body generated within a SolidWorks® environment; 2) to improve the 16-segmental biomechanical model of the human body described in our previous investigation, shaping the body with 20 instead of 16 segments; 3) to determine the mass-inertial characteristics of the human body of the average Bulgarian male based on the model; 4) to verify the proposed 3D CAD model of the human body against the analytical results from our previous investigation, as well as through comparison with data available in the provided reports; 5) to predict a human body's mass and inertial properties in several body positions as classified by NASA. The comparison performed between our model results and data reported in the literature gives us confidence that this model could be reliably used to calculate these parameters at various postures of the body.
1. Introduction
In order to study human movement, we need to know the geometric and mass-inertial characteristics (MIC) such as volume, mass, the centre of mass, moments of inertia of human body (HB) segments in
various body positions. Moments of inertia and centres of gravity of the body are among its fundamental characteristics. Definitely, these characteristics are required to perform computations,
simulations, and predictions in various scientific areas such as biomechanics, biomedical engineering, medicine, ergonomics and sport. For example, in the field of anthropomorphic and rehabilitation
robotics they are essential in the design of rehabilitation devices that aid the patient to perform a given movement; there the orthopaedic device should have appropriate geometry and suitable mass
distribution. Obviously, the same is true in medicine and more particular in orthopaedics, traumatology, orthotics and prosthesis design. One of the modern directions of research where such knowledge
is mandatory is the analysis of body motion under microgravity. There, the body rotation of an individual is easily produced by its own action or by external forces. In criminalistics, such
information is necessary in order to study body fall or body impact (travel accidents) cases. Let us also mention the problem for the design of air or space transport systems, where human weight is a
significant percentage of vehicle weight [1]-[5], etc.
The field of human motion research, due to its importance, has been the subject of intensive simulations and mathematical modelling [6], [7]. These sources also consider results on problems like the
movement of segments of the HB during walking, running, the role of the muscles and skeleton system, etc.
The anthropometric lengths and the data for moments of inertia and centres of gravity of male individuals of the Air Force crews reported by NASA [8] generated our interest in the problem. We wanted to find out to what extent NASA's experimental data can be reproduced via mathematical modelling, and how much the data for an "average" astronaut differ from those for an "average" Bulgarian male. The methods presented in [1]-[5] and the results obtained there give a solid ground for the determination of the geometric and mass-inertial parameters of the HB, and we will therefore closely follow the corresponding line of action.
The purpose of this study is to improve the 16-segmental biomechanical model of the human body described in our previous investigation [9] by shaping the body with 20 instead of 16 segments. The model is generated in a computer environment that allows not only the mass-inertial parameters of the single body segments to be calculated, which we have also done analytically, but also these characteristics to be obtained for the whole body in any position of interest. We devise a model of the human thigh that solves a long-standing problem in the mathematical modelling of the body: achieving continuous tailoring of the thigh to the torso while, at the same time, keeping the corresponding anthropometric angles as given in the experimental literature.
In Section 2 of this study, we present a 20-segmental 3D model of the HB of the Bulgarian man, which allows the inertial properties of the HB to be predicted in any fixed body position, including those provided in NASA's classification. In Section 3, we obtain the MIC of an average Bulgarian male using the SolidWorks CAD software and compare the results with our previous ones, reported in [9] for the 16-segmental 3D model, as well as, whenever possible, with other data for Caucasians reported in the literature.
2. The model
The proposed model consists of 20 segments: head + neck, upper, middle, and lower part of the torso, three elements of the thigh, shank, foot, upper arm, forearm, and hand. All segments are assumed
to be relatively simple geometrical bodies, as depicted in Fig. 1.
Fig. 1. 20-segmental model of the human body
When one represents the HB via a mathematical model, one should successively solve specific problems: 1. appropriate body decomposition, with descriptions of the anthropometric points outlining the segments and the relevant characteristic lengths; 2. generation of a proper 3D model, including the decision of which geometrical body is used to model each segment; 3. analytical determination of the characteristics of the segments of interest; 4. computer realization of the 3D model with the anthropometric data; 5. verification of the computer-generated model by comparing the HB mass properties obtained from the analytically derived results with those obtained from the computer realization; 6. determination of the characteristics of interest of a particular part of the body, or of the body as a whole, using the computer realization.
In the present article, we use the aforementioned recipe to study the mass-inertial properties of the HB via a combination of mathematical and computer modelling. For our study, we assume full-body
symmetry to the midsagittal plane. Details of the exact dimensions, decomposition of the body for all segments except those of the thigh, which will be given in the following paragraph, can be found
in [9].
The geometrical data needed for the determination of all required lengths are taken from a comprehensive representative anthropological investigation of the Bulgarian population [10], carried out during the period 1989-1993, in which 2435 males were investigated. The average values found in the above study are used to design a model that characterizes the "average" Bulgarian man.
We model each segment of the HB with 3D geometrical forms as follows: the torso is decomposed into three parts; the upper part is modelled as a right inverted elliptical cone, while the middle and lower torso are shaped as an elliptical cylinder and an elliptical cylinder + inverted elliptical cone, respectively. The lower part of the torso is defined exactly as in [6]: it extends from omphalion to iliospinale, with a plane passing through the iliospinales and forming an angle of 37° with the sagittal plane. We consider the thigh as divided into three elements; we examined this question in detail in our previous article [11]. Here, for the convenience of the readers, we repeat in Fig. 2 a visualization of the way we model the thigh, with the different segments of the thigh shown in different colours. We represent the upper arm, lower arm and shank as frusta of cones, and the hand is approximated as a sphere.
Fig. 2. a) General idea for the 3D model of the human thigh; b) a computer realization of all three geometrical parts embraced within the mathematical model of the thigh
Knowing the anthropometric parameters of the segments, one can derive analytically all the properties of interest, such as volume, mass, and centre of mass and moments of inertia.
For instance, analytical expressions for two of the central inertial moments of a frustum of an elliptic cone, $I_{XX}^{CM}$ and $I_{ZZ}^{CM}$, are given in [9] as closed-form functions of the frustum height $h$, the material density $\rho$, and the semi-axes $r_1$, $R_1$ and $r_2$, $R_2$ of the two bases; the full expressions are lengthy and are not reproduced here.
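Such closed-form results can be cross-checked numerically. The sketch below (Python; not part of the paper, and the test dimensions are illustrative) computes the mass, centre-of-mass height and longitudinal moment of inertia of an elliptic-cone frustum by midpoint-rule slicing: a slice at height $z$ has linearly interpolated semi-axes $a(z)$, $b(z)$, area $\pi a b$, and inertia $\mathrm{d}m\,(a^2+b^2)/4$ about the longitudinal axis.

```python
import math

def elliptic_frustum_props(a1, b1, a2, b2, h, rho=1.0, n=20000):
    """Mass, centre-of-mass height and longitudinal moment of inertia of a
    frustum of an elliptic cone, by midpoint-rule slicing along its axis.
    (a1, b1) / (a2, b2) are the semi-axes of the lower / upper base."""
    dz = h / n
    V = Sz = Izz = 0.0
    for i in range(n):
        z = (i + 0.5) * dz                      # slice midpoint
        t = z / h
        a = a1 + (a2 - a1) * t                  # linear taper of semi-axes
        b = b1 + (b2 - b1) * t
        dV = math.pi * a * b * dz               # elliptical slice volume
        V += dV
        Sz += z * dV
        Izz += rho * dV * (a * a + b * b) / 4   # slice inertia about the long axis
    return rho * V, Sz / V, Izz

# Sanity check against a cylinder (all semi-axes equal to R): the exact
# values are m = rho*pi*R^2*h, z_cm = h/2, Izz = m*R^2/2.
m, zc, Izz = elliptic_frustum_props(0.1, 0.1, 0.1, 0.1, 0.4, rho=1000.0)
```

Setting both bases equal reduces the frustum to a cylinder, for which the exact textbook values are recovered, which makes the routine a convenient check on lengthy closed-form expressions.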
We perform a computer realization of the model in CAD software SolidWorks. Then we verify the computerized model by comparing the calculated results for the MIC of the segments of the body with the
ones reported in [9]. When the mass-inertial parameters of the segments are obtained, the corresponding characteristics of the whole body can be found assuming it to be in any position of interest.
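For a fixed posture, the whole-body values follow from the segment values by mass-weighted averaging of the centres of mass and by the parallel-axis theorem for the inertial moments. A minimal sketch (Python; the two segments and their numbers are invented for illustration, not taken from the anthropometric data):

```python
def body_properties(segments):
    """segments: list of (mass, z_cm, I_central), where z_cm is the segment
    centre-of-mass height and I_central its central moment of inertia about a
    horizontal axis. Returns total mass, body centre-of-mass height, and the
    body moment of inertia about the parallel axis through the body z_cm."""
    M = sum(m for m, _, _ in segments)
    z_body = sum(m * z for m, z, _ in segments) / M
    # parallel-axis theorem: shift each central inertia to the body axis
    I_body = sum(I + m * (z - z_body) ** 2 for m, z, I in segments)
    return M, z_body, I_body

# two illustrative segments: (mass [kg], z_cm [m], I_central [kg*m^2])
M, zc, I = body_properties([(50.0, 1.0, 2.0), (25.0, 0.4, 0.5)])
```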
As specified before, the fundamental postures of the body of significance for NASA are categorized in [8], as well as in [1]-[5]. Normally, eight principal body positions are of special interest.
Here, due to space constraints, data will be shown for three of these positions: the so-called “standing position” (SP)– see Fig. 3(a), “standing, arms over head” (SAOH)– see Fig. 3(b), and “sitting,
thighs elevated position” (STE) – see Fig. 3(c). In the remainder of the article, we comment on the computer-based simulation model realization and report data for characteristics of interest of the
body in any of the stated postures.
3. Determination of mass-inertial characteristics in different body positions
The 3D CAD software SolidWorks is used to generate the model. This computer-generated model is validated by comparing its results with the analytical results we have obtained for the segments of the body. The software generates segment-by-segment data on volume, mass, centre of mass and moments of inertia, which provides confidence for using the model to calculate these characteristics in different body positions. As stated above, we consider the SP, SAOH and STE positions. A laboratory coordinate system is defined, as shown in Fig. 3, for each of these positions. The axes coincide with the approximate body axes: the frontal ($x$), the sagittal ($y$), and the longitudinal ($z$) ones.
Fig. 3. a) Standing position – subject stands erect with head oriented in the Frankfort plane and with arms hanging naturally at the sides, as when measuring stature; b) standing, arms over head – legs, torso, and head same as position 1; upper extremities raised over head, parallel to the Y-axis; wrist axes parallel to the X-axis; hands slightly clenched; c) sitting, thighs elevated position – thighs and forearms placed parallel to the Z-axis; the upper arms, shanks, and spine parallel to the Y-axis; the soles parallel to the X-Z plane; wrist axes parallel to the Z-axis; the head lies in the Frankfort plane
The mass inertial characteristics of the HB in these three positions are given in Tables 1, 2, and 3. Table 1 contains the data for the SP, Table 2 – for SAOH, and Table 3 – for the STE position.
A comparison of our new data with those from our previous study of the 16-segmental biomechanical model [12], [13], as well as with data from other literature sources, is also reported in the tables. Naturally, all units, as well as the reference systems used in [1, 2, 4, 5, 8, 12, 13], have been converted to the ones utilized in the current study.
The 50 % and 95 % marks labelling the data of Hanavan [4] and NASA [8] indicate that this percentage of the measured data lies below the value stated in the table. We observe that our data are in good accord with those described in the literature.
Table 1. Standing position

| Characteristic | NASA [8], 50 % | NASA [8], 95 % | Chandler [5] | Santschi [2] | Hanavan [4], 50 % | Hanavan [4], 95 % | Nikolova [9] | Our data |
|---|---|---|---|---|---|---|---|---|
| $I_{XX}$ [kg·cm²×10³] | 14.4 | 18.5 | 17.2 | 12.7 | 9.1 | 14.1 | 9.7 | 9.9 |
| $I_{YY}$ [kg·cm²×10³] | 129.2 | 163.4 | 118.9 | 116.0 | 116.2 | 161.9 | 105.3 | 108.0 |
| $I_{ZZ}$ [kg·cm²×10³] | 144.5 | 182.3 | 134.0 | 129.5 | 122.3 | 171.1 | 112.0 | 117.0 |
| Center of mass [cm] | 80.2 | 84.7 | 72.3 | 78.7 | 80.0 | 83.8 | 74.6 | 71.8 |
| Total mass [kg] | 82.2 | 98.5 | 65.2 | 75.5 | 73.4 | 90.9 | 72.5 | 72.5 |
| Height [cm] | 179.9 | 190.1 | 172.1 | 176.3 | 175.5 | 185.7 | 171.5 | 171.5 |
Table 2. Standing, arms over head

| Characteristic | NASA [8], 50 % | NASA [8], 95 % | Santschi [2] | Hanavan [4], 50 % | Hanavan [4], 95 % | Nikolova [9] | Our data |
|---|---|---|---|---|---|---|---|
| $I_{XX}$ [kg·cm²×10³] | 14.1 | 17.5 | 12.5 | 9.08 | 14.1 | 9.7 | 11.7 |
| $I_{YY}$ [kg·cm²×10³] | 172.9 | 221.0 | 154.3 | 153.2 | 213.0 | 145.6 | 146.0 |
| $I_{ZZ}$ [kg·cm²×10³] | 191.9 | 242.6 | 171.2 | 159.3 | 222.2 | 152.2 | 137.0 |
| Center of mass [cm] | 73.9 | 77.9 | 72.6 | 73.7 | 77.2 | 68.4 | 65.2 |
Table 3. Sitting, thighs elevated position

| Characteristic | NASA [8], 50 % | NASA [8], 95 % | Clauser [3] | Nikolova [9] | Our data |
|---|---|---|---|---|---|
| $I_{XX}$ [kg·cm²×10³] | 13.1 | 15.2 | 17.9 | 18.3 | 17.9 |
| $I_{YY}$ [kg·cm²×10³] | 48.6 | 55.8 | 42.8 | 33.3 | 35.1 |
| $I_{ZZ}$ [kg·cm²×10³] | 48.7 | 59.8 | 44.0 | 31.4 | 32.8 |
| $L(x)$ [cm] | 59.4 | 61.5 | 58.7 | 53.6 | 54.5 |
| $L(y)$ [cm] | 0 | 0 | 0 | 0 | 0 |
| $L(z)$ [cm] | 9.7 | 10.5 | 11.7 | 7.3 | 9.0 |
4. Conclusions
The current article presents the CAD design of a newly proposed 20-segmental biomechanical model of the HB of the average Bulgarian man. The model is an improvement over our previous model, which consisted of 16 segments. Obviously, the inertial moments depend on the actual shape of the segments and on the mass distribution in the body. In the existing models of the HB, even the question of how the hip enters the torso, and of the appropriate division of the body into a part belonging to the torso and another belonging to the hip, is not satisfactorily resolved. The point is that the torso and the thigh are normally modelled with relatively simple geometrical figures that do not tailor continuously into each other and that may even overlap. The current study presents some progress in this direction: the thigh consists of three parts (see Fig. 2), and modelling them with geometrical figures closer to the actual form of these segments improves the model of the thigh and, thus, of the body.
In the current article we present data for the mass-inertial parameters of the body of the average Bulgarian male in three essential positions – SP, SAOH, and STE (see Tables 1-3) – and compare our findings with other data reported in the literature. The specific anthropometric lengths needed to perform the calculations are taken from Ref. [10]. The results obtained and the procedure proposed in the present work allow us to claim that a more realistic modelling of the shape of the HB has been achieved, which gives us confidence that this model can be used to calculate the mass-inertial characteristics in any specific posture of the body.
Let us stress that, although the model is applied here to the average Bulgarian male, it is applicable to any gender, any race, and any specific person, provided that suitable anthropometric measurements are available.
• Dempster W. T. Space Requirements of the Seated Operator. WADC-TR-55-159, WPAFB, Ohio, 1955.
• Santschi W. R., Dubois J., Omoto C. Moments of Inertia and Centers of Gravity of the Living Human Body. AMRL TR 63-36, WPAFB, Ohio, 1963.
• Clauser C. E., Mcconville J. T., Young J. W. Weight, Volume, and Center of Mass of Segments of the Human Body. AMRL-TR-69-70, WPAFB, Ohio, 1969.
• Hanavan E. P. A Mathematical Model of the Human Body. AMRL-TR-64-102. Aero-space Medical Research Laboratories, WPAFB, Ohio, 1964.
• Chandler R. F., Clauser C. E., Mcconville J. T., Reynolds H. M., Young J. W. Investigation of Inertial Properties of the Human Body. AMRL-TR-74-137, WPAFB, Ohio, 1975.
• Zatsiorsky V. M. Kinetics of Human Motion. IL: Human Kinetics, Champaign, 2002.
• Winter D. A. Biomechanics and Motor Control of Human Movement. Wiley, New Jersey, 2009.
• The Man-Systems Integration Standards, Anthropometry and Biomechanics. NASA-STD-3000, https://msis.jsc.nasa.gov/sections/section03.htm#3.2.1.
• Nikolova G., Toshev Y. Estimation of male and female body segment parameters of the Bulgarian population using a 16-segmental mathematical model. Journal of Biomechanics, Vol. 40, 2007, p.
• Yordanov Y., Nacheva A., Tornjova S., Kondova N., Dimitrova B., Topalova D. Anthropology of the Bulgarian Population at the End of the 20th Century (30-40 Years Old Persons). Academic Publishing House, Sofia, Bulgaria, 2006.
• Nikolova G. S., Tsveov M. S., Dantchev D. M. A mathematical model of the human thigh and its connection with the torso. AIP Conference Proceedings, Vol. 2164, 2019, p. 080006.
• Kotev V. K., Nikolova G. S., Dantchev D. M. Determination of mass-inertial characteristics of the human body in basic body positions: computer and mathematical modelling. European Medical and
Biological Engineering Conference Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, 2017.
• Nikolova G. S., Kotev V. K., Dantchev D. M. CAD design of human male body for mass-inertial characteristics studies. MATEC Web of Conferences, Vol. 145, 2018, p. 04005, https://doi.org/10.1051/
About this article
Biomechanics and biomedical engineering
3D human body modelling
mass-inertial characteristics
The financial support by the Bulgarian National Science Fund: Contract DN-07/5 “Study of anthropometric and mass-inertial characteristics of the Bulgarian men and women via mathematical models of the
human body” is gratefully acknowledged.
Copyright © 2021 Gergana Nikolova, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/22097","timestamp":"2024-11-12T12:04:52Z","content_type":"text/html","content_length":"134300","record_id":"<urn:uuid:d033f6f9-c8e9-46bf-875d-0cfdbd72fff8>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00600.warc.gz"} |
Volatility Fails as a Proxy for Risk
"volatility per se, be it related to weather, timing of the morning newspaper, is simply a benign statistical probability factor that tells us nothing about risk until it is coupled with a
consequence," - Robert H. Jeffrey, "A New Paradigm for Risk," Journal of Portfolio Management, Volume 11, Number 1 (Fall), pp. 33-40.
When we speak about capturing ranges of duration or cost, or speak about compliance with technical performance measures and do not speak about the consequences, then we're pretty much wasting our
time with Risk Management.
Here's The Core Problem(s)
• When capturing the estimates using whatever method you choose - I'd recommend NOT asking the estimator for the Hi, Most Likely, and Lo, for lots of reasons discussed elsewhere - more information is needed than just the numbers.
• After eliciting the ranges of values, the probability distributions can then be used to drive a Monte Carlo simulator. This approach produces a Credible forecast of the probability of completing
On or Before a Date or having the project cost A Value or Less.
• When static 3 point estimates are used, there can be up to a 27% unfavorable (underestimating) bias in the completion date. So don't use 3 points and don't use PERT.
• There is a whole cottage industry on why the PERT formulas are bogus and the problems with them. Here's a start "Why PERT Has Problems."
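To make the Monte Carlo bullet concrete, here's a toy sketch (Python; the three task distributions are made up) that samples triangular durations for a small serial schedule and reads off the probability of finishing on or before a target date - which is exactly what a static 3-point roll-up can't tell you:

```python
import random

random.seed(1)

# (low, most likely, high) duration estimates in days - illustrative only
tasks = [(4, 5, 9), (3, 4, 10), (6, 8, 12)]

def simulate(n=20000):
    """Sample the total duration of the serial schedule n times."""
    totals = []
    for _ in range(n):
        # random.triangular takes (low, high, mode)
        totals.append(sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks))
    return totals

totals = simulate()
target = 20.0
p_on_time = sum(t <= target for t in totals) / len(totals)  # P(finish <= target)
mean_total = sum(totals) / len(totals)
```

The output is a probability of completing On or Before the target date, with the whole distribution available for percentile (confidence-level) readouts, rather than a single deterministic sum.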
For places to look for PERT background start with:
[1] "Quantitative Risk Analysis for Project Management,: A Critical Review," Lionel Galway, Rand Working Paper, WR-112-RC, February 2004.
[2] "The Polaris System Development: Bureaucratic and Programmatic Success in Government," Harvey M. Sapolsky, Harvard University Press, 1972.
[3] "PERT Completion Times Revisited," Ted Williams, University of Michigan-Flint, School of Management, Working Paper 2005-02, September 2005.
[4] "Hidden Assumptions in Project Management Tools," Dr. Eva Reginier, DRMI Newsletter, January 10, 2006, Naval Postgraduate School, Monterey CA.
[5] "Activity Completion Times in PERT and Scheduling Network Simulation," Dr. Eva Reginier, DRMI Newsletter, April 8, 2005, Naval Postgraduate School, Monterrey CA.
Recent Comments | {"url":"https://herdingcats.typepad.com/my_weblog/2010/07/volatility-fails-as-a-proxy-for-risk.html","timestamp":"2024-11-04T20:31:36Z","content_type":"application/xhtml+xml","content_length":"46661","record_id":"<urn:uuid:03daf94b-5f63-4498-8bd5-6f4237627cd0>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00805.warc.gz"} |
SYD Function [VBA]
Returns the arithmetic-declining depreciation rate.
SYD (Cost as Double, Salvage as Double, Life as Double, Period as Double)
Cost is the initial cost of an asset.
Salvage is the value of an asset at the end of the depreciation.
Life is the depreciation period determining the number of periods in the depreciation of the asset.
Period is the period number for which you want to calculate the depreciation.
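For reference, SYD implements the sum-of-years-digits schedule: the depreciation for a given period is (Cost − Salvage) × (Life − Period + 1) / (1 + 2 + … + Life). A quick sketch of that formula (written in Python for illustration, not in Basic):

```python
def syd(cost, salvage, life, period):
    """Sum-of-years-digits depreciation for one period."""
    digit_sum = life * (life + 1) / 2            # 1 + 2 + ... + life
    return (cost - salvage) * (life - period + 1) / digit_sum

# Year-1 depreciation of a $10,000 asset with a $1,000 salvage value over
# 5 years, matching the Basic example below.
dep1 = syd(10000, 1000, 5, 1)
```

The call returns 3000, reproducing the Basic example, and the per-period amounts over the full life sum to Cost − Salvage.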
REM ***** BASIC *****
Option VBASupport 1
Sub ExampleSYD
REM Calculate the yearly depreciation of an asset that cost $10,000 at
REM the start of year 1, and has a salvage value of $1,000 after 5 years.
Dim syd_yr1 As Double
REM Calculate the depreciation during year 1.
syd_yr1 = SYD( 10000, 1000, 5, 1 )
print syd_yr1 ' syd_yr1 is now equal to 3000.
End Sub | {"url":"https://help.libreoffice.org/latest/bo/text/sbasic/shared/03140012.html","timestamp":"2024-11-09T06:48:33Z","content_type":"text/html","content_length":"11256","record_id":"<urn:uuid:5e42724c-7a03-4117-b623-ecf42f763849>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00394.warc.gz"} |
Estatistika eta egia
Text written in Basque and translated automatically by
without any subsequent editing.
Statistics and truth
2009/05/01 | Etxeberria Murgiondo, Juanito - Lecturer at the University of the Basque Country (UPV/EHU), Department of Research and Diagnostic Methods in Education. Source: Elhuyar magazine
The phrases "I already know the answer, give me a statistic to justify it" and "Politicians use statistics as lampposts: for support rather than illumination" summarize a rather widespread opinion, something of an urban legend: statistics is a fraud.
There are many who confuse statistics, the discipline, with statistics in the sense of mere figures. Statistics is the branch of mathematics charged with collecting, organizing and analyzing numerical data. And not only that: it helps us to solve the problems and make the decisions that arise in the design of experiments. Despite its short history as a scientific discipline, it has a long history as a tool for synthesizing and publishing numerical information. Statistics and its instrumental function extend to all branches of science.
In cases where no data are available on all elements of the study population, one must work under conditions of uncertainty and randomness to draw conclusions. In these cases, inferential data analysis uses statistical methodology to estimate unknown parameters, contrast concrete hypotheses, foresee future behaviour, make decisions, perform individual and collective diagnoses, quantify uncertainty and even limit the margin of error. This is how the weather is forecast, a person's state of health assessed, the results of two procedures compared, or the reliability of a machine's components over several years estimated. Specific forecasts take the form: tomorrow there is an 87 % probability of rain, there is something wrong with your brain with a probability of 93 %, or bulb A is better than bulb B with a margin of error of 5 %. But neither the weather forecaster, nor the doctor, nor the bulb seller seems to take on the task of determining the degree of error of their predictions.
It should be noted, moreover, that the notions of randomness and uncertainty sometimes confound intuition. Thus, in a group of 30 people, the probability that two people have their birthday on the same day is higher than the probability that no two do, that is, more than 50 %. With only thirty people this seems unbelievable, but probability theory proves that a shared birthday is more likely than not.
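The birthday figure is easy to verify: the probability that $n$ people all have distinct birthdays is $\frac{365}{365}\cdot\frac{364}{365}\cdots\frac{365-n+1}{365}$, and the probability of at least one shared birthday is its complement. A quick check (Python sketch, ignoring leap years):

```python
def shared_birthday_prob(n):
    """Probability that at least two of n people share a birthday."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365   # k-th person avoids the first k birthdays
    return 1.0 - p_distinct
```

For 23 people the probability already exceeds 50 %, and for 30 people it is about 71 %.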
A process of inferential data analysis involves defining the population, determining the sample size and selecting its elements, measuring the variables under study, analysing the data and presenting the results. At each of these stages we can make mistakes, which in some cases are difficult to quantify. The objective of statistical inference is to quantify the probability of each possible error. However, just as one can lie with language, one can also lie with numbers: manipulating the results, splitting up the information, keeping part of it in the back pocket, or presenting the results fraudulently...
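As an illustration of quantifying uncertainty and limiting the margin of error, the usual 95 % confidence interval for a proportion is the estimate plus or minus $1.96\sqrt{\hat p(1-\hat p)/n}$. A small sketch (Python; the survey of 1000 people with an observed 50 % is invented):

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95 % confidence interval for a proportion: estimate +/- margin of error."""
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - moe, p_hat + moe, moe

# an invented poll: 1000 respondents, 50 % observed
low, high, moe = proportion_ci(0.5, 1000)
```

With 1000 respondents the margin of error is roughly three percentage points, the familiar figure quoted alongside opinion polls.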
Let us look at two quite naive examples. The two attached charts show the profiles of two races with a long tradition in the Basque Country. One of them, a stage of the Tour de France, includes several mountain passes that cyclists climb along 159.5 kilometres, among them the Tourmalet, 2,114 metres high. The other chart corresponds to the Behobia-San Sebastián race, in which runners cover roughly 20 kilometres between the two towns over an essentially flat route whose highest point, Gaintxurizketa, stands at 84 metres. Look at the profiles in the two charts: they are similar. The scales used to create the graphics are very different, but they have allowed me to draw two very similar profiles. Very different data, but the same graphics. Similar examples can be seen every day.
From very different data you can generate two graphics with a very similar profile. Everything is a matter of scale.
Let us now go to Oñati. It is well known as a town worth seeing. But in the magazine Concelupetik (published in Oñati), when explaining the number of visitors to the municipality in 2007, a small error slipped in. To begin with, what is surprising is the precision of the headline: "20,293 tourists visited Oñati in 2007". This raises doubts and questions. Are all those who wander around the university counted? And all those who come for Corpus Christi? And all those who go to Arantzazu? And all those who visit the Arrikrutz caves? How do they manage to count them all with this kind of precision? Oñati being so beautiful and having so many tourists, are these not rather few? An average of fewer than 60 per day.
After reading the news item, our doubts are clarified: 20,293 people passed through the tourist office. The headline mixes two concepts: the sample and the statistical population. Unfortunately, this type of error occurs very frequently in presentations of statistical results.
News item published in the magazine: the sample is mixed up with the statistical population.
Statistics is a tool that helps us to know the "truth" of a reality, and it pervades many areas of life. However, misuse and statistical excesses sometimes justify the statistical reticence of part of the population. The only vaccine against this misuse is more statistical training.

I think it is time to call for the inclusion of more statistical concepts in the school mathematics curriculum. Statistics for life, or something similar... Or indeed, why not Mathematics for Life? It would help reduce "social innumeracy" and, with it, would help us stay alert to the misuses of statistics. For although numbers do not lie, liars are countless.
Juanito Etxeberria Murgiondo. Professor of the UPV. Department of Research and Diagnostic Methods in Education.
Elhuyarrek garatutako teknologia | {"url":"https://zientzia.eus/artikuluak/estatistika-eta-egia/en/","timestamp":"2024-11-09T06:43:54Z","content_type":"text/html","content_length":"50085","record_id":"<urn:uuid:4b7a2a88-3819-428a-b5bf-39d1c01d8799>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00327.warc.gz"} |
Let $f(x)=\begin{cases}1-x, & 0\le x\le 1\\ 0, & 1<x\le 2\\ (2-x)^2, & 2<x\le 3\end{cases}$ and $F(x)=\int_0^x f(t)\,dt$. Th... | Filo
Question asked by Filo student
Let $f(x)=\begin{cases}1-x, & 0\le x\le 1\\ 0, & 1<x\le 2\\ (2-x)^2, & 2<x\le 3\end{cases}$ and $F(x)=\int_0^x f(t)\,dt$. Then find the area enclosed by $y=f(x)$ and the $x$-axis as $x$ varies from 0 to 3.
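Reading the question as asking for the area between $y=f(x)$ and the $x$-axis on $[0,3]$ (the wording is partly cut off in the extract), the exact value is $\int_0^1(1-x)\,dx+\int_2^3(2-x)^2\,dx=\tfrac12+\tfrac13=\tfrac56$, since $f\ge 0$ throughout. A numeric cross-check (Python sketch using a simple midpoint rule):

```python
def f(x):
    # piecewise definition from the problem statement
    if 0 <= x <= 1:
        return 1 - x
    if 1 < x <= 2:
        return 0.0
    if 2 < x <= 3:
        return (2 - x) ** 2
    raise ValueError("x outside [0, 3]")

def midpoint_integral(g, a, b, n=30000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

area = midpoint_integral(f, 0.0, 3.0)   # exact value is 1/2 + 1/3 = 5/6
```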
Question Text | Let $f(x)=\begin{cases}1-x, & 0\le x\le 1\\ 0, & 1<x\le 2\\ (2-x)^2, & 2<x\le 3\end{cases}$ and $F(x)=\int_0^x f(t)\,dt$. Then find the area enclosed by $y=f(x)$ and the $x$-axis as $x$ varies from 0 to 3.
Updated On Nov 3, 2023
Topic Integration
Subject Mathematics
Class Class 12
Answer Type Video solution: 1
Upvotes 71
Avg. Video Duration 5 min | {"url":"https://askfilo.com/user-question-answers-mathematics/let-and-then-find-the-area-enclosed-by-and-axis-as-axis-35393834393437","timestamp":"2024-11-03T03:25:51Z","content_type":"text/html","content_length":"308904","record_id":"<urn:uuid:8151e0c8-38b7-412e-8eed-3697ead0d165>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00653.warc.gz"} |
Development of Predictive Model for Surface Roughness in End Milling of Al-SiCp Metal Matrix Composites using Fuzzy Logic
Authors: M. Chandrasekaran, D. Devarasiddappa
Metal matrix composites have been increasingly used as materials for components in the automotive and aerospace industries because of their improved properties compared with non-reinforced alloys. During machining, the selection of appropriate machining parameters to produce jobs with the desired surface roughness is of great concern, considering the economy of the manufacturing process. In this study, a surface roughness prediction model using fuzzy logic is developed for end milling of an Al-SiCp metal matrix composite component using a carbide end mill cutter. The surface roughness is modeled as a function of spindle speed (N), feed rate (f), depth of cut (d) and the SiCp percentage (S). The predicted values of surface roughness are compared with experimental results. The model predicts an average percentage error of 4.56% and a mean square error of 0.0729. It is observed that surface roughness is most influenced by feed rate, spindle speed and SiC percentage; depth of cut has the least influence.
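The two error measures quoted for the model, average percentage error and mean square error, are computed from measured vs. predicted roughness values. A generic sketch (Python; the sample Ra arrays are invented for illustration, not the paper's data):

```python
def error_metrics(measured, predicted):
    """Average absolute percentage error (%) and mean square error."""
    n = len(measured)
    ape = sum(abs(m - p) / m for m, p in zip(measured, predicted)) * 100 / n
    mse = sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n
    return ape, mse

# illustrative Ra values in micrometres
measured = [2.0, 2.5, 3.2, 4.0]
predicted = [2.1, 2.4, 3.0, 4.2]
ape, mse = error_metrics(measured, predicted)
```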
Keywords: End milling, fuzzy logic, metal matrix composites, surface roughness
Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1080135
[1] U. Zuperl, F. Cus, M. Milfelner, "Fuzzy control strategy for an adaptive force control in end-milling", Journal of Materials Processing Technology Vol. 164, 2005, pp. 1472-1478.
[2] J. T Lin, D Bhattacharyya, V Kecman, "Multiple regression and neural networks analyses in composites machining", Composites Science and Technology, Vol. 63, 2003, pp. 539-548
[3] D. R Cramer and D. F. Taggart, "Design and manufacture of an affordable advanced composite automotive body structure", Proc. 19th International Battery, Hybrid and Fuel Cell Electric Vehicle
Symposium and Exhibition, October 19-23, 2002, pp. 1-12.
[4] M. Chandrasekaran, M. Muralidhar, C. M. Krishna and U.S. Dixit, "Application of soft computing techniques in machining performance prediction and optimization: a literature review", Int J Adv
Manuf Technol, Vol. 46, 2010, pp. 445-464
[5] L. A. Zadeh., "Fuzzy sets", Information and Control, Vol. 8, 1965, pp. 338-353
[6] N. R. Abburi and U. S. Dixit, "A knowledge based system for the prediction of surface roughness in turning process", Robotics and Computer Integrated Manufacturing, Vol. 22, 2006, pp. 363-372
[7] T. Rajasekaran, K. Palanikumar and B.K Vinayagam, "Application of fuzzy logic for modeling surface roughness in turning CFRP composites using CBN tool", Prod. Eng. Res. Devel, Vol. 5, 2011, pp.
[8] Harun Akkus and Ilhan Asilturk, "Predicting surface roughness of AISI 4140 steel in hard turning process through artificial neural network, fuzzy logic and regression models", Scientific Research
and Essays, Vol. 6 (13), 2011, pp. 2729-2736
[9] M. K. Pradhan and C. K. Biswas, "Neuro-fuzzy and neural network-based prediction of various responses in electrical discharge machining of AISI D2 steel", Int J Adv Manuf Technol, Vol. 50, 2010,
pp. 591-610. doi: 10.1007/s00170-010-2531-8
[10] J. P. Davim and C. A. Conceicao Antonio, "Optimal drilling of particulate metal matrix composites based on experimental and numerical procedures", International Journal of Machine Tools and
Manufacture, Vol. 41, 2001, pp. 21-31.
[11] S. Basavarajappa, G. Chandramohan, M. Prabhu, K. Mukund and M. Ashwin, "Drilling of hybrid metal matrix composites - workpiece surface integrity", International Journal of Machine Tools and
Manufacture, Vol. 47, 2007, pp. 92-96
[12] R. Arokiadass, K. Palanirajda and N. Alagumoorthi, "Predictive modeling of surface roughness in end milling of Al/SiCp metal matrix composite", Archives of Applied Science Research, Vol. 3(2),
2011, pp. 228-236.
[13] C. C. Tsao and H. Hocheng, "Evaluation of thrust force and surface roughness in drilling composite material using Taguchi analysis and neural network", Journal of material processing technology,
Vol. 203, 2008, pp. 342-348
[14] N. Muthukrishnan and P. J. Davim, "Optimization of machining parameters of Al/SiC-MMC with ANOVA and ANN analysis", Journal of Materials Processing Technology, Vol. 209, 2009, pp. 225-232
[15] P. J. Davim, "Design of optimization of cutting parameters for turning of metal matrix composites based on the orthogonal arrays", Journal of Materials Processing Technology, Vol. 132, 2003 pp.
[16] K. A. Risbood, U. S. Dixit and A. D. Sahasrabudhe, "Prediction of surface roughness and dimensional deviation by measuring cutting forces and vibrations in turning process", J Mater Process
Technol, Vol. 132, 2003 pp. 203-214. doi: 10.1016/s0924-0136(02)00920-2
[17] D. K. Sonar, U. S. Dixit and D. K. Ojha, "The application of radial basis function for predicting the surface roughness in a turning process", Int J Adv Manuf Technol, Vol. 27, 2006, pp.
661-666. doi: 10.1007/s00170- 004-2258-5
[18] Y. M. Ali and L. C. Zhang, "Surface roughness prediction of ground components using a fuzzy logic approach", Journal of Materials Processing Technology, Vol. 89(90), 1999 pp. 561-568.
[19] D. Devarasiddappa, M. Chandrasekaran and A. Mandal, "Artificial neural network modeling for predicting surface roughness in end milling of Al-SiCp metal matrix composite and its evaluation",
Proc. International Conference on Intelligent Manufacturing Systems (ICIMS 2012), SASTRA University, Thanjavur, Tamilnadu (India), pp. 119-125
[20] J. C. Chen, J. T. Black, "A fuzzy-nets in-process (FNIP) system for tool breakage monitoring in end-milling operations", Int J Mach Tools Manuf, Vol. 37(6), 1997, pp. 783-800. | {"url":"https://publications.waset.org/12908/development-of-predictive-model-for-surface-roughness-in-end-milling-of-al-sicp-metal-matrix-composites-using-fuzzy-logic","timestamp":"2024-11-14T05:33:20Z","content_type":"text/html","content_length":"21506","record_id":"<urn:uuid:8ca45a6e-8895-4d09-a50a-76b208ebdfe7>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00816.warc.gz"}
The Stacks project
This is one of the main results of [Serre_algebre_locale].

Proposition 43.19.3. Let $X$ be a nonsingular variety. Let $V \subset X$ and $W \subset X$ be closed subvarieties which intersect properly. Let $Z \subset V \cap W$ be an irreducible component. Then $e(X, V \cdot W, Z) > 0$.

Proof. By Lemma 43.19.2 we have \[ e(X, V \cdot W, Z) = e(X \times X, \Delta \cdot V \times W, \Delta (Z)) \] Since $\Delta : X \to X \times X$ is a regular immersion (see Lemma 43.13.3), we see that $e(X \times X, \Delta \cdot V \times W, \Delta (Z))$ is a positive integer by Lemma 43.16.3. $\square$
| {"url":"https://stacks.math.columbia.edu/tag/0B0V","timestamp":"2024-11-12T00:37:03Z","content_type":"text/html","content_length":"14723","record_id":"<urn:uuid:71c6bf40-d543-4388-abd1-aa6e84d0af1a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00223.warc.gz"}
Azarbaijan Shahid Madani University Communications in Combinatorics and Optimization 2538-2128 10 1 2025 03 01 On coherent configuration of circular-arc graphs 1 19 14629 10.22049/cco.2023.28816.1735
EN Fatemeh Raei Barandagh Department of Mathematics Education, Farhangian University, P.O. Box 14665-889, Tehran, Iran Amir Rahnamai Barghi Department of Mathematics, K. N. Toosi University of
Technology, Tehran, Iran Journal Article 2023 07 09 For any graph, Weisfeiler and Leman assigned the smallest matrix algebra which contains the adjacency matrix of the graph. The coherent
configuration underlying this algebra for a graph $\Gamma$ is called the coherent configuration of $\Gamma$, denoted by $\mathcal{X}(\Gamma)$. In this paper, we study the coherent configuration of
circular-arc graphs. We give a characterization of the circular-arc graphs $\Gamma$, where $\mathcal{X}(\Gamma)$ is a homogeneous coherent configuration. Moreover, all homogeneous coherent
configurations which are obtained in this way are characterized as a subclass of Schurian coherent configurations. https://comb-opt.azaruniv.ac.ir/article_14629_d9f1f125040619de623f0d486b420b49.pdf | {"url":"https://comb-opt.azaruniv.ac.ir/?_action=xml&issue=2366","timestamp":"2024-11-12T14:58:47Z","content_type":"application/xml","content_length":"45770","record_id":"<urn:uuid:15196339-0bd1-4c61-9a13-ab7aab831c66>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00090.warc.gz"} |
how do you find error in code?
in coding, how do you actually track down the bug??? example this code where we want to find the quartile....
ignore the "odd" part, not done on there yet, but the even part just successfully running but produce no result after input (0, random huge number, and another 0)
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
using std::cin;
using std::cout;
using std::endl;
using std::string;
using std::vector;
int main()
{
    vector<int> numberstore;
    int input;

    // real input
    while (cin >> input)
        numberstore.push_back(input);  // keep every number read

    // re-arranged
    sort(numberstore.begin(), numberstore.end());

    vector<double> quartilestore;

    // even amount
    if (numberstore.size() % 2 == 0)
    {
        // compute
        int quadro = 4;
        for (int quartile = 1; quartile < quadro; ++quartile)
        {
            double curr_quartile = (quartile/quadro) * numberstore.size();
            if (quartile == 2)
                curr_quartile = (curr_quartile + ((quartile/quadro) * numberstore.size() - 1)) / 2;
        }
    }
    // odd amount
    else
    {
    }

    // output
    for (vector<double>::size_type counter = 0; counter < quartilestore.size(); ++counter)
        cout << quartilestore[counter] << endl;

    return 0;
}
First of all, you could save yourself some time on lines 6-10 by just using using namespace std; Also, I assume you are trying to find the interquartile range? In that case this can be done in very
simple linear time. Think about this, there are three quartiles (the points separating the four quarters of the data) one of them is the first, one of them is the third, and the one in the middle is
usually called the median. Once you have sorted your array, you do not care at all about the values in it. You can access the quartiles simply by using:
int median=numberstore[numberstore.size()/2];//it really is as simple as that
int firstQuartile=numberstore[numberstore.size()/4];//we want the element that is a quarter of the way in
int thirdQuartile=numberstore[(numberstore.size()/4)*3];//this is the most complicated one, but it can easily be derived from the one above
Hope that helped! (I also hope that I am not entirely wrong :P)
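The index arithmetic above is easy to sanity-check. The same idea in Python (the data values are arbitrary; like the C++ snippet, this picks the upper-middle element as the median for even-sized data):

```python
data = sorted([4, 8, 1, 5, 7, 3, 6, 2])  # 8 values -> [1, 2, ..., 8]
n = len(data)

median = data[n // 2]                # element halfway in
first_quartile = data[n // 4]        # a quarter of the way in
third_quartile = data[(n // 4) * 3]  # three quarters of the way in

print(first_quartile, median, third_quartile)  # -> 3 5 7
```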
Listing individual items in a namespace you will use in a program and not "polluting" by writing for the entire namespace is a very acceptable method, and many would feel is the preferred method,
though many of us use the "convenient" path of exposing the whole namespace.
in coding, how do you actually track down the bug??? example this code where we want to find the quartile....
Too late now, but before you write too much code, compile and test in smaller pieces.
Now, start outputting variables at key places to follow what the code is doing. Compare the values output with the values you calculate they should be.
In this case after the sort display the sorted values.
At line 28 display the value numberstore.size() .
In the for loop, output the values being computed.
If you can't see where you have an error or a bug then go for debugging -:)
Debug your code; when you debug your code you see what your code really does, and you see what funny errors and bugs you may have coded :-)
If you are using an IDE to do your programming, then most decent IDEs have a debugger included in them. The basic way to use a debugger is to set a break-point (a point in the code at which the
execution will be suspended). Then, you run the program until the break-point and then step through each line one by one. Between each step you can usually evaluate the variables (local variables,
arrays, etc.) to see if they have the value they should have at that point. See your IDE's documentation or help or tutorials to find out how to do these things (set break-points, step through lines
of code, and evaluate variables).
If you are not using an IDE and don't have access to this type of debugger, then do as WaltP suggested: just print out all relevant information at all relevant points in your code, run the program
and inspect the output to see if it makes sense and where it goes wrong.
thnx WaltP, ur post always been a help for me :)
btw, I'm using Code::Blocks, and have never used a debugger before, can anyone give a little insight on that? wiki just no help this time...
I don't know what wiki you are referring to, but the code-blocks wiki has pretty detailed instruction on doing debugging in code-blocks.
| {"url":"https://www.daniweb.com/programming/software-development/threads/415042/how-do-you-find-error-in-code","timestamp":"2024-11-11T06:56:31Z","content_type":"text/html","content_length":"91530","record_id":"<urn:uuid:dcd1bfec-4a32-4f2a-8e4f-43bc41343930>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00205.warc.gz"}
Nanotechnology on Surfaces and Plasma
F. Yubero, N. Pauli, A. Dubus, S. Tougaard
Physical Review B, 77 (2008) 245405 (11)
doi: 10.1103/PhysRevB.77.245405
An electron reaching the detector after being backscattered from a solid surface in a reflection electron energy loss spectroscopy (REELS) experiment follows a so-called V-type trajectory if it is
reasonable to consider that it has only one large elastic scattering event along its total path length traveled inside the solid. V-type trajectories are explicitly assumed in the dielectric model
developed by Yubero et al.[Phys. Rev. B 53, 9728 (1996)] for quantification of electron energy losses in REELS experiments. However, the condition under which this approximation is valid has not
previously been investigated explicitly quantitatively. Here, we have studied to what extent these REELS electrons can be considered to follow near V-type trajectories. To this end, we have made
Monte Carlo simulations of trajectories for electrons traveling at different energies in different experimental geometries in solids with different elastic scattering properties. Path lengths up to
three to four times the corresponding inelastic mean free paths have been considered to account for 80–90% of the total electrons having one single inelastic scattering event. On this basis, we have
made detailed and systematic studies of the correlation between the distribution of path lengths, the maximum depth reached, and the fraction of all electrons that have experienced near V-type
trajectories. These investigations show that the assumption of V-type trajectories for the relevant path lengths is, in general, a good approximation. In the rare cases, when the detection angle
corresponds to a scattering angle with a deep minimum in the cross section, very few electrons have experienced true V-type trajectories. However, even in these extreme cases, a large fraction of the
relevant electrons have near V-type trajectories.
Test of validity of the V-type approach for electron trajectories in reflection electron energy loss spectroscopy | {"url":"https://sincaf.icms.us-csic.es/test-of-validity-of-the-v-type-approach-for-electron-trajectories-in-reflection-electron-energy-loss-spectroscopy/","timestamp":"2024-11-09T19:56:06Z","content_type":"text/html","content_length":"67500","record_id":"<urn:uuid:f340e841-ca48-4a17-bb2e-b219fd06ac91>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00583.warc.gz"} |
Lesson 1: Pre-visit | More Than Math
Students will solve a challenging three-dimensional geometry problem, investigate and discuss a Josef Albers’ artwork, and replicate a Josef Albers design by measuring and drawing.
• We will enhance our mental imagery through a Quick Draw activity
• We will solve the painted cube problem and identify our solutions
• Through close looking and discussion, we will interpret meaning from an abstract artwork
• Participation in class discussions
• Participation in The Painted Cube activity
• Quick Draw worksheet
• The Painted Cube worksheet
• Homage to the Square worksheet
Time Required
1–1.5 hours
Materials/Resources Needed
• Document camera/projector
• Art images (linked below in the activities)
• Pencils
• 20 snap cubes per student
• Colored pencils or crayons
North Carolina Curriculum Alignment
Activity One: Quick Draw
NOTE: All pages referenced below are linked above in Materials/Resources Needed as “Activity pages.” Quick Draw will be used as a class opener, with the whole class participating. Quick Draw was
designed by Dr. Grayson H. Wheatley, Professor Emeritus of Mathematics Education at Florida State University (http://www.mathematicslearning.org). Quick Draw develops spatial sense, encourages the
transformation of self-constructed images, and develops geometric intuitions through discussion.
1. Provide each student one Quick Draw worksheet (activity page 1).
2. Prepare for the activity by telling students that you are going to show them a shape for just three seconds and you want them to study it, building a mental picture of what they see. Avoid the
temptation to show it for a longer period of time. It is important that students work from mental imagery rather than copying what they are seeing. Students will draw what they saw in the upper
left box on their worksheet. Project Quick Draw #1 for three seconds. Tell students to draw what they saw in box #1 on their worksheet. After a few moments, show the shape again and let students
adjust their drawings. If you feel it is necessary, briefly show the shape a third time. This should only be necessary for more complex figures or groups that are struggling. Two times is the
norm, and three times is usually sufficient.
3. Instruct students to put their pencils down. Show the shape so students can compare their drawing to the actual picture. With the image in view, ask students what they saw, how they drew the
shape, and which part they drew first. Talking about mathematics encourages students to reflect on their imaging. Geometric language will be used naturally. You may wish to supply mathematical
names for objects such as trapezoids and parallelograms as needed.
4. Repeat the same procedure for Quick Draw #2 and #3. Collect papers or go over together as a class.
Activity Two: The Painted Cube
NOTE: All pages referenced below are linked above in Materials/Resources Needed as “Activity pages.”
1. Project Jon Kuhn’s Crystal Cream as inspiration for the activity. Ask students to describe what they see. If students describe the form as a cube ask them how they know it’s a cube and to define a
2. Organize students in pairs, like ability works best. Provide students with The Painted Cube worksheet (activity page 2) and 20 snap cubes each (or for each pair if you don’t have enough).
3. Ask students to work in pairs to solve the following problem and explain their solution in writing: A large cube is formed from smaller individual cubes. Four of the smaller cubes fit along each
edge of the large cube. Imagine if the 4 by 4 by 4 large cube is dipped in paint. How many of the individual cubes will have paint on them?
4. Walk around and observe your students, helping those that need assistance. You can prompt them by asking if they are counting cubes or faces of cubes.
5. After most pairs of students have a solution, have the class come together and ask two pairs to explain in front of the class how they solved the problem. If another pair has a different solution
with the correct answer allow them to explain too. Point out that some problems have multiple correct solutions.
Activity Extension #1: Ask students to determine how many individual cubes there are.
Activity Extension #2: Ask students how they could sort the individual cubes into like families if they took the cube apart once the paint dried. What criteria would students use to sort them?
The large cube is composed of 64 cubes, four layers of 16 cubes each. Of these 64 cubes, 56 have paint on them.
Solution 1. 16 on the top layer and 16 on the bottom layer are painted. Front and back then have 8 with paint on them. When these cubes (not faces) are considered, there are only 4 cubes in the
middle of the sides that have paint on them. Thus 16 + 16 + 8 + 8 + 4 + 4 = 56.
Solution 2. In the middle of the 4 by 4 by 4 cube there is a 2 by 2 by 2 cube that is not painted. When these 8 cubes are subtracted from the total of 64 cubes, the difference is 56.
There are numerous other ways of solving the problem. Please be open to other methods and avoid imposing or even emphasizing one particular method.
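Both solutions, as well as the sorting into families suggested in Extension #2, can be checked by brute force. A short sketch (variable names are illustrative):

```python
n = 4
counts = {0: 0, 1: 0, 2: 0, 3: 0}  # painted faces -> how many unit cubes
for x in range(n):
    for y in range(n):
        for z in range(n):
            # each coordinate on the outer layer contributes one painted face
            faces = [x, y, z].count(0) + [x, y, z].count(n - 1)
            counts[faces] += 1

total = sum(counts.values())   # 64 unit cubes in all
painted = total - counts[0]    # cubes with at least one painted face
print(counts, painted)         # -> {0: 8, 1: 24, 2: 24, 3: 8} 56
```

The families match the geometry: 8 corners (3 faces), 24 edge cubes (2 faces), 24 face cubes (1 face), and 8 hidden interior cubes.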
Activity Three: Homage to the Square
NOTE: Read the artist biography of Josef Albers with your students before you begin the lesson. All pages referenced below are linked above in Materials/Resources Needed as “Activity pages.”
1. Project Josef Albers’ Formulation: Articulation, Folio II, Folder 13. Have students look carefully and discuss what they see. This should be a lively discussion. Discussion of color and the
interaction of the colors might arise.
2. Provide students with Homage to the Square worksheets (activity pages 3-6). Students can begin by completing questions #1-10. The directions and answers are below. Answers have not been provided
for those questions whose responses are subjective. Question #6 may require some class discussion.
• What is a square? A figure with four equal sides and four right angles.
• How many squares do you see in this artwork? Possible responses: 2 sets of four, eight
• What words could you use to describe the colors of these squares?
• Would you want your bedroom painted like this?
• Why or why not?
• Compare the relationship between the widths of the borders at the sides of the squares to the widths of the borders at the bottom of the squares. How are they similar? How are they different? The
width of the side borders are twice the size of the width of the bottom border.
• In the space provided on the next page, draw one square inside another so that the distance between the squares at the sides (the side border) is twice the width of the border at the bottom, just
like Albers’ squares. Use a ruler and draw carefully. Comment: This is likely to be challenging for the students. They have to measure and draw precisely.
• Measure the distance of the border at the top. Is it related to the other two distances? How? The top border is three times the size of the bottom border.
• Add a third square to your drawing that is outside the second, so that the border at the bottom is equal to the border just above it and the side borders are the same, just like in Albers’
• Using colored pencils or crayons, fill the squares with colors from the same color family. In the example, this color wheel shows four colors for each color family, but students do not need to
limit themselves to just these colors. Any values (lighter or darker versions) or a color can be in a color family. If students don’t have many colors to choose from, they can create multiple
values of one color by pressing down lighter or harder when coloring.
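The border arithmetic in questions #6 to #9 can be checked numerically: if the bottom border has width b, the sides 2b and the top 3b, then a figure nested that way inside a square is itself a square. A small sketch (measurements in arbitrary units, chosen for illustration):

```python
def inner_square(side, bottom):
    """Return (width, height) of the figure nested inside a square of the
    given side, using Albers-style borders: sides = 2*bottom, top = 3*bottom."""
    side_border = 2 * bottom
    top_border = 3 * bottom
    width = side - 2 * side_border       # side - 4*bottom
    height = side - bottom - top_border  # side - 4*bottom
    return width, height

print(inner_square(20, 2))  # -> (12, 12): still a square
```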
3. Take time to review students drawings individually while walking around the room. When most have completed their first drawings, the class can move on to questions #11–13.
• In the space provided on the next page, draw a large square. Draw a second square inside the first one so that the border at the sides is twice the width of the border at the bottom. Comment: It
is not easy to draw a square with just a ruler and pencil. They must measure carefully. How will students form right angles? Not easy.
• Try to draw another square inside the second square meeting the same conditions. Can you draw a fourth square inside the third square? Answer depends on the squares they drew and borders used. It
is possible that only one square can be drawn or it could be more.
• Using colored pencils or crayons, fill the squares with colors from the same color family. | {"url":"https://morethanmath.org/lessons/lesson-1-pre-visit-2/","timestamp":"2024-11-06T01:05:08Z","content_type":"text/html","content_length":"36910","record_id":"<urn:uuid:89d5c6ad-4781-40e0-9c0f-57ef40efea35>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00759.warc.gz"} |
Find and Sketch the Step response of following RLC circuit.
1) Series RLC circuit
2)Parallel RLC circuit
Simulate these in Multisim software and also show the overdamped, critically damped, and underdamped graphs in transient analysis.
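Before simulating, it helps to predict which damping regime a given set of component values will produce. For a series RLC circuit, standard second-order analysis gives the damping ratio zeta = (R/2)*sqrt(C/L) (for the parallel circuit it is zeta = (1/(2R))*sqrt(L/C)); zeta > 1 is overdamped, zeta = 1 critically damped, zeta < 1 underdamped. A quick sketch (the component values are arbitrary examples, not part of the assignment):

```python
import math

def series_damping(R, L, C):
    """Classify the step response of a series RLC circuit."""
    zeta = (R / 2) * math.sqrt(C / L)  # damping ratio
    if math.isclose(zeta, 1.0):
        return "critically damped"
    return "overdamped" if zeta > 1 else "underdamped"

print(series_damping(10.0, 1e-3, 1e-6))  # zeta ~ 0.16 -> underdamped
```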
note: Circuit should be any and simple of your choice. | {"url":"https://wizedu.com/questions/1394649/find-and-sketch-the-step-response-of-following","timestamp":"2024-11-06T10:24:21Z","content_type":"text/html","content_length":"34419","record_id":"<urn:uuid:9a4e6999-9570-450e-a8b6-9e96b13b9248>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00034.warc.gz"} |
Acoustic rheometer
An acoustic rheometer employs a piezo-electric crystal that can easily launch a successive wave of extensions and contractions into the fluid. It applies an oscillating extensional stress to the
system. System response can be interpreted in terms of extensional rheology.
This interpretation is based on a link between shear rheology, extensional rheology and acoustics. The relationship between these scientific disciplines was described in detail by Litovitz and Davis in [1].
It is well known that the properties of a viscoelastic fluid are characterised in shear rheology by a shear modulus G, which links shear stress \( T_{ij} \) and shear strain \( S_{ij} \)
\( T_{ij} = G \cdot S_{ij} \)
There is a similar linear relationship in extensional rheology between extensional stress P, extensional strain S and extensional modulus K:
\( P = -K \cdot S \)
Detailed theoretical analysis indicates that the propagation of sound or ultrasound through a viscoelastic fluid depends on both the shear modulus G and the extensional modulus K.[2][3] It is convenient to introduce a combined longitudinal modulus M:
\( M = M' + iM'' = K + \frac{4}{3}G \)
There are simple equations that express longitudinal modulus in terms of acoustic properties, sound speed V and attenuation α
\( M' = \rho \cdot V^{2} \)
\( M'' = \frac{2\rho \alpha V^{3}}{\omega} \)
Acoustic rheometer measures sound speed and attenuation of ultrasound for a set of frequencies in the megahertz range. These measurable parameters can be converted into real and imaginary components
of longitudinal modulus.
Sound speed determines M', which is a measure of system elasticity. It can be converted into fluid compressibility.
Attenuation determines M", which is a measure of viscous properties (energy dissipation). This parameter can be considered as an extensional viscosity.
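The two conversions above are direct to compute once V and alpha are measured. In this sketch the density, sound speed, attenuation and angular frequency are made-up illustrative values, not measurements:

```python
def longitudinal_moduli(rho, V, alpha, omega):
    """Real and imaginary parts of the longitudinal modulus from
    measured sound speed V and attenuation alpha."""
    M_real = rho * V ** 2                      # elastic (storage) part
    M_imag = 2 * rho * alpha * V ** 3 / omega  # dissipative (loss) part
    return M_real, M_imag

# e.g. a water-like fluid probed at omega = 1e6 rad/s (illustrative numbers)
print(longitudinal_moduli(rho=1000.0, V=1500.0, alpha=0.02, omega=1e6))
```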
In the case of a Newtonian liquid, attenuation yields information on the volume viscosity. Stokes' law (sound attenuation) provides a relationship among the attenuation, dynamic viscosity and volume viscosity of a Newtonian fluid.
This type of rheometer works at much higher frequencies than others. It is suitable for studying effects with much shorter relaxation times than any other rheometer.
Litovitz, T.A. and Davis, C.M. In "Physical Acoustics", Ed. W.P.Mason, vol. 2, chapter 5, Academic Press, NY, (1964)
Morse, P. M. and Ingard, K. U. "Theoretical Acoustics", Princeton University Press (1986)
Dukhin, A.S. and Goetz, P.J. "Characterization of liquids, nano- and micro- particulates and porous bodies using Ultrasound", Elsevier, 2017 ISBN 978-0-444-63908-0
Hellenica World - Scientific Library
Retrieved from "http://en.wikipedia.org/"
All text is available under the terms of the GNU Free Documentation License | {"url":"https://www.hellenicaworld.com/Science/Physics/en/Acousticrheometer.html","timestamp":"2024-11-06T17:04:41Z","content_type":"application/xhtml+xml","content_length":"7797","record_id":"<urn:uuid:acd5e051-14c0-4bc7-ac74-c5986f6214ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00570.warc.gz"} |
Exercise 6.1: Demand, supply, cost, revenue and profit functions, Elasticity - Problem Questions with Answer, Solution | Applications of Differentiation | Mathematics
Business Mathematics and Statistics Book back answers and solution for Exercise questions - Mathematics: Applications of Differentiation: Demand, supply, cost, revenue and profit functions,
| {"url":"https://www.brainkart.com/article/Exercise-6-1--Demand,-supply,-cost,-revenue-and-profit-functions,-Elasticity_40259/","timestamp":"2024-11-13T11:42:04Z","content_type":"text/html","content_length":"30385","record_id":"<urn:uuid:632f9921-ba6a-41ce-9d7b-1ec13a6d4374>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00508.warc.gz"}
NodeBox Support
"gradient" fill
i am trying to colorize circles in a grid with gradient based on three rgb colors - with those i have created a range of values.
but so far i have only managed to colorize a grid with a consistent number of points in each row. (eg. the amount of colors in the range is the same as the number of rows in the grid)
as i am generating (rasterized) shapes based on this grid, the number of points in the rows is not consistent anymore - and this workflow (of course) breaks.
what could be a possible approach?
the idea is to somehow either
a) limit the duplication of each color to the amount of points in the corresponding row.
b) color based on the y-position of each point
- different shapes, with a different amount of rows are generated
• i am not able to access the y-position key of an ellipse
any ideas?
1. Tom,
To access the y-position of an ellipse (or any other path) just feed it into a centroid node to get its center point, and then feed that into a lookup node with the key set to y.
Once you have the y value for each dot, you can assign each row a color by using a slice node to slice one color from the list of colors forming your gradient.
You can use a convert_range node to convert the range of Y values to a sequence from 0 to the number of colors in your gradient. You can then feed that into the start parameter of a slice node to
assign a color to each dot. Dots with the same Y value will get the same color.
In case that is not clear, I have attached a demo showing the basic technique (see screenshot and zipped file).
□ I create a collection of dots in the shape of a lower case a
□ I use the Y values of those dots and a convert_range node to find a sequence number within the gradient for each dot
□ I form the gradient by extending the 3 colors from your example into a color list using my palette node
□ Then I use a slice node to pick each sequence number from the gradient list and feed that into a colorize node
Hope that helps. Let me know if you have any more questions.
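Outside NodeBox, the convert_range-plus-slice mapping described above reduces to a few lines. A sketch in Python (the palette and y-range are placeholders):

```python
def color_for_y(y, y_min, y_max, palette):
    """Map a point's y position to one color of a gradient list,
    mimicking NodeBox's convert_range + slice combination."""
    t = (y - y_min) / (y_max - y_min)  # normalise to 0..1
    index = min(int(t * len(palette)), len(palette) - 1)
    return palette[index]

gradient = ["#f5d0a9", "#a9bcf5", "#a9f5d0"]  # placeholder colors
print([color_for_y(y, 0, 10, gradient) for y in (0, 5, 10)])
# -> ['#f5d0a9', '#a9bcf5', '#a9f5d0']
```

Points with the same y value get the same color, exactly as in the node network.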
2. hey John, thanks a lot!
i managed to find a solution myself, but yours is way more efficient.
for my solution i used this custom node to count the points in each row based on their y-attribute.
and then i wrote two more custom nodes:
- one for remapping rgb-values based on min, mid and max by an amount of n
- one for duplicating these in a list based on the count
all the best
3. tom closed this discussion on .
| {"url":"http://support.nodebox.net/discussions/nodebox-2-3/7186-gradient-fill","timestamp":"2024-11-08T20:55:12Z","content_type":"text/html","content_length":"27000","record_id":"<urn:uuid:44442a41-41b8-434b-97fa-82d7b28fd7c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00777.warc.gz"}
Clinical Trial Design & Sample Size Calculation Mistakes To Avoid
The calculation of the correct sample size is one of the first and most important steps in study design. Below is a list of sample size determination practices to be avoided as per the E9 Statistical
Principles for Clinical Trials found in the FDA Guidance for industry.
Before we continue, let us recap what is required for sample size estimation.
Using the usual method for determining the appropriate sample size, the following items need to be specified:
• A primary variable(s)
• The test statistic(s)
• The null hypothesis
• The alternative (working) hypothesis at the chosen dose(s), i.e. the effect size
• The probability of erroneously rejecting the null hypothesis (The type I error)
• The probability of erroneously failing to reject the null hypothesis (The type II error)
• Adjustments for treatment withdrawals and protocol violations
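As a rough illustration of how these inputs combine, here is a minimal Python sketch of the standard normal-approximation sample-size formula for a two-arm comparison of means. The 5 mmHg difference and 12 mmHg standard deviation are hypothetical values, not from any guidance document.

```python
from statistics import NormalDist
from math import ceil

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two means:
    n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2"""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # controls the type I error (two-sided)
    z_beta = z.inv_cdf(power)            # controls the type II error (power = 1 - beta)
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical trial: detect a 5 mmHg difference, SD 12 mmHg, alpha 0.05, power 0.80
n = n_per_group(delta=5, sigma=12)
print(n)   # → 91 per arm, before any adjustment for withdrawals
```

In practice this figure would then be inflated to allow for the anticipated treatment withdrawals and protocol violations listed above.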
How many subjects do I need to obtain
a significant result for my study?
The most common question posed to a biostatistician
Mistake #1
Failing To Fully Examine Existing Research
Novice researchers often do not spend the appropriate amount of time examining existing literature that may have already addressed part of the question they wish to research.
Important insights can be gained from these before proceeding with your trial. You will most likely have many internal resources available to search a large number of databases and journals.
In addition to paid resources, there are many available to the public such as:
Mistake #2
Failing To Design Techniques That Avoid Bias
The most important design techniques for avoiding bias in clinical trials are blinding and randomization. These should be normal features of most controlled clinical trials.
It is desirable for trials to follow a double-blind approach. A double-blind trial is one in which neither the subject nor any of the investigator or sponsor staff involved in the treatment or
clinical evaluation of the subjects are aware of the treatment received.
Treatments are prepacked in accordance with a suitable randomization schedule, and supplied to the trial center(s) labelled only with the subject number and the treatment period, so that no one
involved in the conduct of the trial is aware of the specific treatment allocated to any particular subject, not even as a code letter.
If a double-blind trial is not feasible, then the single-blind option should be considered. In some cases only an open-label trial is practically or ethically possible.
Bias can also be reduced at the design stage by specifying procedures in the protocol aimed at minimizing any anticipated irregularities in trial conduct that might impair a satisfactory analysis,
including various types of protocol violations, withdrawals and missing values. The protocol should consider ways both to reduce the frequency of such problems and to handle the problems that do
occur in the analysis of data.
Randomization introduces a deliberate element of chance into the assignment of treatments to subjects in a clinical trial. In combination with blinding, randomization helps to avoid possible bias in
the selection and allocation of subjects arising from the predictability of treatment assignments.
Although unrestricted randomization is an acceptable approach, some advantages can generally be gained by randomizing subjects in blocks.
This helps to increase the comparability of the treatment groups, particularly when subject characteristics may change over time, as a result, for example, of changes in recruitment policy.
Mistake #3
Not Using Multiple Primary Variables Where Necessary
It may sometimes be desirable to use more than one primary variable, each of which (or a subset of which) could be sufficient to cover the range of effects of the therapies. The planned manner of
interpretation of this type of evidence should be carefully spelled out.
It should be clear whether an impact on any of the variables, some minimum number of them,or all of them, would be considered necessary to achieve the trial objectives.
The primary hypothesis or hypotheses and parameters of interest (e.g. mean, percentage,distribution) should be clearly stated with respect to the primary variables identified, and the approach to
statistical inference described. The effect on the Type I error should be explained because of the potential for multiplicity problems and the method of controlling Type I error should be given in
the protocol.
The extent of intercorrelation among the proposed primary variables may be considered in evaluating the impact on Type I error. If the purpose of the trial is to demonstrate effects on all of the
designated primary variables, then there is no need for adjustment of the Type I error, but the impact on Type II error and sample size should be carefully considered.
Mistake #4
Not Accounting For The Loss Of Power
From Categorized Variables
Dichotomization or other categorization of continuous or ordinal variables may sometimes be desirable. Criteria of success and response are common examples of dichotomies that should be specified
precisely in terms of, for example, a minimum percentage improvement (relative to baseline) in a continuous variable or a ranking categorized as at or above some threshold level (e.g., good) on an
ordinal rating scale.
The reduction of diastolic blood pressure below 90 mmHg is a common dichotomization. Categorizations are most useful when they have clear clinical relevance.
The criteria for categorization should be predefined and specified in the protocol, as knowledge of trial results could easily bias the choice of such criteria.
As categorization normally implies a loss of information, a consequence will be a loss of power in the analysis; this should be accounted for in the sample size calculation.
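The size of this power loss can be quantified in a simple case. For a median split of a normally distributed outcome, the asymptotic relative efficiency of the dichotomized analysis versus the continuous one is the well-known value 2/π ≈ 0.64, so roughly 57% more subjects are needed for the same power. The sketch below illustrates the arithmetic; the starting sample size of 91 is a hypothetical figure.

```python
from math import pi, ceil

# Asymptotic relative efficiency of analysing a median-dichotomised normal
# outcome versus the underlying continuous outcome (sign test vs t-test)
ARE = 2 / pi  # ≈ 0.637

n_continuous = 91                        # hypothetical size for the continuous analysis
n_dichotomised = ceil(n_continuous / ARE)  # size needed after dichotomisation
print(n_dichotomised)   # → 143, i.e. ~57% more subjects for the same power
```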
Mistake #5
Changes In Inclusion And Exclusion Criteria
Inclusion and exclusion criteria should remain constant, as specified in the protocol, throughout the period of subject recruitment.
Changes may occasionally be appropriate, for example, in long-term trials, where growing medical knowledge either from outside the trial or from interim analyses may suggest a change of entry criteria. Changes may also result from the discovery by monitoring staff that regular violations of the entry criteria are occurring or that seriously low recruitment rates are due to over-restrictive criteria.
Changes should be made without breaking the blind and should always be described by a protocol amendment. This amendment should cover any statistical consequences, such as sample size adjustments
arising from different event rates, or modifications to the planned analysis, such as stratifying the analysis according to modified inclusion/exclusion criteria.
Mistake #6
Not Preparing For Sample Size Adjustment
In long-term trials there will usually be an opportunity to check the assumptions which underlie the original design and sample size calculations. This may be particularly important if the trial
specifications have been made on preliminary and/or uncertain information.
An interim check conducted on the blinded data may reveal that overall response variances, event rates or survival experience are not as anticipated.
A revised sample size may then be calculated using suitably modified assumptions, and should be justified and documented in a protocol amendment and in the clinical study report. The steps taken to
preserve blindness and the consequences, if any, for the Type I error and the width of confidence intervals should be explained. The potential need for re-estimation of the sample size should be
envisaged in the protocol whenever possible.
For example, a trial sized on the basis of safety questions or requirements or important secondary objectives may need larger numbers of subjects than a trial sized on the basis of the primary
efficacy question.
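The blinded re-check described above can be sketched in a few lines: pool the blinded interim responses, re-estimate the standard deviation, and re-run the same sample-size formula. All numbers here — the interim data, the planned SD of 12, and the 5-unit difference — are hypothetical.

```python
from statistics import NormalDist, stdev
from math import ceil
import random

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two means."""
    z = NormalDist()
    return ceil(2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sigma / delta) ** 2)

# Hypothetical blinded interim responses (treatment labels unknown to the analyst)
random.seed(1)
interim = [random.gauss(100, 15) for _ in range(60)]

sigma_planned = 12                # design-stage assumption
sigma_observed = stdev(interim)   # blinded pooled estimate from the interim check
# (a blinded pooled SD slightly overstates the within-arm SD if a treatment effect exists)

print("planned n/arm:", n_per_group(5, sigma_planned))
print("revised n/arm:", n_per_group(5, sigma_observed))
```

Any such revision should, as noted above, be documented in a protocol amendment along with its consequences for the type I error.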
Mistake #7
Incorrectly Reporting Sample Size
According to the CONSORT statement, sample size calculations should be reported and justified in all published Randomised Controlled Trials (RCTs). Correct output statements should be applied in many
situations in addition to RCTs, such as funding applications.
Output statements are almost meaningless if they neglect full details such as the estimates for the effect of interest and the variability. nQuery will automatically write up your sample size
statement in the correct language and the format required for regulatory agencies.
Sources: https://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm073137.pdf
About nQuery Sample Size Software:
nQuery is now the world's most trusted sample size and power analysis software.
In 2017, 90% of organizations with FDA approved clinical trials used nQuery as their sample size calculator. It is used by Biostatisticians of all levels of expertise. Created by sample size experts,
nQuery boasts an extensive list of easy-to-use but powerful features for sample size calculation and power analysis. | {"url":"https://www.statsols.com/articles/common-clinical-trial-design-sample-size-calculation-mistakes-to-avoid","timestamp":"2024-11-02T20:18:00Z","content_type":"text/html","content_length":"97786","record_id":"<urn:uuid:4af2164a-1c06-439b-9539-4e02ad307323>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00698.warc.gz"}
How to Cultivate the Philosophical Thinking Ability of Science and Engineering Students in Classroom Teaching
1. Introduction
Philosophy is a discipline that conducts research on basic and universal issues, and is a theoretical system about world outlook and methodology. The world outlook is a general understanding of basic
issues such as the nature of the world, the fundamental laws of development, and the fundamental relationship between human thinking and existence. Methodology is the way humans understand the world
based on the world outlook. Philosophy is a method, not a set of propositions or theories. The study of philosophy is grounded in rational thinking, subjecting hypotheses to critical review rather than accepting them by pure analogy. Philosophy is also a thinking ability: starting from an established system of concepts and integrating historical modes of thought, it makes one's thinking more powerful and thereby helps to create new knowledge.
With the rapid development and accelerated iteration of modern science and technology, cross-penetration between disciplines is becoming more frequent, and the boundaries between disciplines, including those between the liberal arts and the sciences, are becoming increasingly blurred. Contemporary university education in science and engineering is therefore more inclined to cultivate talents who combine advanced, deep professional knowledge in science and engineering with profound humanistic literacy. The literature (Lin & Shen, 2018; Xu, 2019; Li & Wang,
2019; Chen, Zhang, & Dong, 2015; Wang, Liu, & Liang, 2017; Zeng, 2011) has made useful explorations in this area. At the same time, classroom teaching runs through the whole of university education and is the most direct and important way for students to receive education. In the classroom teaching of science and engineering college students, how to inspire students to carry out
philosophical thinking about what they have learned, so as to enhance the depth and breadth of knowledge understanding, the author of the literature (Tong, Sun, & Zheng, 2011; Ding, Chen, & Ren,
2014; Yan-Yan M, 2019) has conducted a more in-depth study on this issue.
“Signal and System” is an important professional basic course for electrical majors. Its task is to study the basic theories and basic analysis methods of signals and linear time-invariant systems.
It requires mastering the most basic signal transformation theory and the analysis methods of linear time-invariant systems, which lays a solid theoretical foundation for follow-up courses and for engineering and scientific research work in related fields. Through the study of this course, students will understand the function representation of signals and
system analysis methods, and master the time domain analysis and frequency domain analysis methods of continuous-time systems and discrete-time systems.
Because this course involves many derivations and formulas, many students find it boring and complicated, and feel that it is disconnected from everyday experience, so some easily lose interest in learning. To increase students' interest, the author integrates philosophical knowledge into the teaching of this course. This makes the lessons more lively and interesting, and at the same time makes students more proactive in the learning process. This method of philosophical speculation enables students to think about the origins of the formulas while understanding the meaning of their symbols, and also cultivates their creative thinking ability to a certain extent. Teaching practice shows that this method has achieved good results.
2. Introduce the Principle of Universality and Particularity of Contradiction to Fourier Series
The French mathematician Fourier discovered that any periodic function can be represented by an infinite series composed of sine and cosine functions (chosen as basis functions because they are orthogonal); this is what was later called the Fourier series, a special trigonometric series. By Euler's formula the trigonometric functions can be transformed into exponential form, which gives the exponential form of the Fourier series.
When explaining the Fourier series, the author first introduces the concept of expansion starting from the familiar one-, two- and three-dimensional spaces. As shown in Figure 1, in one-dimensional space any point at a distance from the origin has a value, which can be regarded as the projection of that point in the one-dimensional space.
Similarly, given a two-dimensional and a three-dimensional rectangular coordinate system (Figure 2 & Figure 3):
Assuming that the coordinates of A are (2, 2), then by the vector representation $OA = 2x + 2y$, where $x$ and $y$ denote the unit (direction) vectors along the x and y axes.
Also given a three-dimensional rectangular coordinate system:
Assuming that the coordinates of B1 are (3, 3, 3), then by the vector representation $OB_1 = 3x + 3y + 3z$, where $x$, $y$ and $z$ denote the unit (direction) vectors along the x, y and z axes.
In other words, the rectangular coordinate system decomposes any point in space onto the different coordinate axes. Obviously, the x, y, and z axes are mutually perpendicular, and in real space no further axis can be found that is perpendicular to all of them, so for physical space the decomposition of a point stops at three axes. To elicit the concept of n-dimensional space, we appeal to the principle of the universality and particularity of contradiction in philosophy: universality resides in particularity and is expressed through it; without particularity there is no universality, and at the same time particularity cannot be separated from universality. On this basis, 1-, 2- and 3-dimensional spaces can be regarded as special expressions of a universal n-dimensional space, which leads to the concept of an n-dimensional space in which the axes are mutually perpendicular (Figure 4):
Two functions $f_1$ and $f_2$ are orthogonal on an interval $[t_1, t_2]$ if they satisfy $\int_{t_1}^{t_2} f_1(t)\,f_2(t)\,\mathrm{d}t = 0$.
The function system $\{1, \cos x, \sin x, \cos 2x, \sin 2x, \ldots, \cos nx, \sin nx, \ldots\}$ is called the trigonometric function system. Obviously $2\pi$ is a period of all of the above functions.
The orthogonality of the trigonometric function system on $[-\pi, \pi]$ means that for any positive integers $n, m$ ($m \ne n$):

$\int_{-\pi}^{\pi} \cos nx\,\mathrm{d}x = 0, \qquad \int_{-\pi}^{\pi} \sin nx\,\mathrm{d}x = 0,$

$\int_{-\pi}^{\pi} \sin mx \cos nx\,\mathrm{d}x = 0, \qquad \int_{-\pi}^{\pi} \sin mx \sin nx\,\mathrm{d}x = 0, \qquad \int_{-\pi}^{\pi} \cos mx \cos nx\,\mathrm{d}x = 0,$

$\int_{-\pi}^{\pi} 1^2\,\mathrm{d}x = 2\pi, \qquad \int_{-\pi}^{\pi} \cos^2 nx\,\mathrm{d}x = \int_{-\pi}^{\pi} \sin^2 nx\,\mathrm{d}x = \pi.$
Then $f(x)$ can be expanded as

$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos nx + b_n \sin nx \right)$

where

$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx\,\mathrm{d}x, \quad n = 0, 1, 2, \ldots$

$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx\,\mathrm{d}x, \quad n = 1, 2, \ldots$
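As a concrete check of these coefficient formulas (this sketch is the editor's illustration, not part of the paper), the script below numerically evaluates $a_1$ and $b_1$ for a square wave by the midpoint rule; for a square wave the known result is $a_n = 0$ and $b_n = 4/(n\pi)$ for odd $n$.

```python
from math import sin, cos, pi

def f(x):
    """Square wave with period 2*pi: +1 on (0, pi), -1 on (pi, 2*pi)."""
    return 1.0 if (x % (2 * pi)) < pi else -1.0

def coeff(n, N=20000):
    """Midpoint-rule approximation of the Fourier coefficients a_n, b_n on [-pi, pi]."""
    a = b = 0.0
    dx = 2 * pi / N
    for k in range(N):
        x = -pi + (k + 0.5) * dx
        a += f(x) * cos(n * x) * dx
        b += f(x) * sin(n * x) * dx
    return a / pi, b / pi

a1, b1 = coeff(1)
print(a1, b1, 4 / pi)   # a1 ≈ 0, b1 ≈ 4/pi ≈ 1.2732
```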
Philosophical epistemology points out that perceptual knowledge and rational knowledge are two different stages in the process of cognition. Perceptual knowledge is the lower stage, characterized by directness and concreteness; rational knowledge is the advanced stage, taking the forms of concepts, judgments and reasoning, and is characterized by indirectness and abstraction. In terms of content, the object of perceptual knowledge is the appearance of things, while the object of rational knowledge is their essence. In this example the low-dimensional spaces can be regarded as perceptual knowledge and the n-dimensional space as rational knowledge, so deriving the Fourier series expansion in n-dimensional space from the simple coordinates of low-dimensional space can also be seen as a movement from perceptual knowledge to rational knowledge.
3. Conclusion
Philosophy has a guiding significance for the research of natural sciences. Therefore, by introducing philosophical viewpoints into a specific course, not only can the teaching effect be improved,
but it can also guide students to think independently and enhance their innovative thinking ability to a certain extent. Of course, this method may not be very rigorous when linking a specific
science formula or concept with a philosophical principle, but this does not prevent the achievement of the teaching purpose.
This paper is supported by Research Foundation of the Nanchang Normal University for Doctors (NSBSJJ2018014).
Key R & D Projects of Jiangxi Provincial Department of Science and Technology: 20192BBHL80002, 20192BBEL50040. | {"url":"https://www.scirp.org/journal/paperinformation?paperid=104056","timestamp":"2024-11-12T03:48:55Z","content_type":"application/xhtml+xml","content_length":"108082","record_id":"<urn:uuid:907badd7-abab-4316-a4f7-cd5e25d94845>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00712.warc.gz"} |
Stacks Of Sums
Write down many different types
of calculations which give the answer 41.
Note to teacher: Doing this activity once with a class helps students develop strategies. It is only when they do this activity a second time that they will have the opportunity to practise those
strategies. That is when the learning is consolidated. Click the button above to regenerate another version of this starter from random numbers.
Teacher, do your students have access to computers such as tablets, iPads or Laptops? This page was really designed for projection on a whiteboard but if you really want the students to have access
to it here is a concise URL for a version of this page without the comments:
However it would be better to assign one of the student interactive activities below.
Here is the URL which will take them to today's pupil activity. | {"url":"https://transum.org/Software/SW/Starter_of_the_day/starter_August16.ASP","timestamp":"2024-11-07T12:54:02Z","content_type":"text/html","content_length":"23819","record_id":"<urn:uuid:f8057f22-47bd-454f-9dc2-0261ffbec8d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00813.warc.gz"} |
Stress Concentration Calculator v1
Price: £26.00 (£26.00 Inc. VAT)
A stress concentration factor is an intensification multiplier by which stress rises due to a discontinuity, which may be a notch, hole, groove or a change in material. The multiplier is influenced by two principal factors: the size of the discontinuity and the radius at its root.
Stress concentration factors are applied to the otherwise evenly distributed stresses that would normally be seen in plain shapes of the same cross-section with smooth and continuous edges, exposed to axial, bending and/or shear forces.
All you need to do is multiply your calculated stress by the concentration factor to identify its magnification at the root of the discontinuity.
Stress concentrations are extremely important in defining the size and shape of any reinforcement you may need to incorporate into structural recesses and/or joints.
The stress concentration calculator includes 12 calculation options covering various notches, holes and grooves in bars, plates and tubes.
Stress Concentration generates factors that should be applied to particular stresses: bending, axial and torsional.
All you need to do is multiply your calculated stress by the factor(s) produced to identify the maximum intensification around a stress raiser such as a notch, groove or hole. The factors generated
in this calculator can be applied to the stresses identified in any section or shape irrespective of complexity.
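In use, the factor simply scales the nominal stress. The minimal sketch below uses made-up numbers — the force, section dimensions and Kt = 2.3 are assumptions for illustration, not values produced by the calculator:

```python
# Hypothetical case: a flat bar in tension with a notch
force = 50_000.0        # applied axial force, N
width = 0.060           # net section depth at the notch, m
thickness = 0.010       # bar thickness, m
kt = 2.3                # assumed stress concentration factor for the notch

nominal = force / (width * thickness)   # evenly distributed stress, Pa
peak = kt * nominal                     # intensified stress at the notch root
print(nominal / 1e6, peak / 1e6)        # ≈ 83.3 MPa nominal, ≈ 191.7 MPa peak
```

The peak value is the one to compare against the material's allowable stress when sizing reinforcement around the discontinuity.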
For help using this calculator see Technical Help
Stress Concentration Factor Calculator - Options
Stress Concentration includes the following calculation options:
FLAT – One U-Notch
Calculates the maximum stress concentration factor for a flat bar with parallel sides and a U-shaped notch in one edge normal to the axis of the bar.
You enter:
• Bar depth
• Notch root radius
• Notch depth
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Out-of-plane bending
FLAT – Two U-Notches
Calculates the maximum stress concentration factor for a flat bar with parallel sides and two equal U-shaped notches in opposite edges normal to the axis of the bar.
You enter:
• Bar depth
• Notch root radius
• Notch depth
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Out-of-plane bending
FLAT – One V-Notch
Calculates the maximum stress concentration factor for a flat bar with parallel sides and a V-shaped notch in one edge normal to the axis of the bar.
You enter:
• Bar depth
• Notch root radius
• Notch depth
• V-angle
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Out-of-plane bending
FLAT – Two V-Notches
Calculates the maximum stress concentration factor for a flat bar with parallel sides and two equal V-shaped notches in opposite edges normal to the axis of the bar.
You enter:
• Bar depth
• Notch root radius
• Notch depth
• V-angle
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Out-of-plane bending
FLAT – Fillet
Calculates the maximum stress concentration factor for a flat bar with parallel sides and both sides equally recessed (filleted) on opposite sides.
You enter:
• Bar depth
• Fillet root radius
• Fillet depth
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Out-of-plane bending
FLAT – Hole
Calculates the maximum stress concentration factor for a flat bar with parallel sides and a through hole between one edge and the centreline of the bar.
You enter:
• Bar depth
• Hole radius
• Hole centre to edge
• Bar thickness
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Out-of-plane bending
• Cylindrical bending
FLAT – Reinforced Hole
Calculates the maximum stress concentration factor for a flat bar with parallel sides and a reinforced through hole on its centreline. This calculation option only provides a result for an applied
tensile load. See the Technical Help page for suitable bending stress concentration calculation methods.
You enter:
• Reinforcement depth
• Root radius
• Bar thickness
• Hole diameter
• Reinforcement diameter
and the stress concentration calculator will provide a factor for:
• Axial
PLATE – Hole
Calculates the maximum stress concentration factor for a flat bar with parallel sides and a through hole between one edge and the centreline of the bar.
You enter:
• Plate thickness
• Hole radius
• Notch depth
and the stress concentration calculator will provide factors for:
• In-plane tension (+ve & +ve)
• In-plane tension (+ve)
• In-plane tension (+ve & -ve)
• Out-of-plane bending
• Tubular bending
• Membrane bending
Round – U-Groove
Calculates the maximum stress concentration factor for a round bar with parallel sides and a U-shaped groove turned into its surface anywhere along its length.
You enter:
• Bar diameter
• Groove root radius
• Groove depth
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Out-of-plane bending (torsion)
Round – V-Groove
Calculates the maximum stress concentration factor for a round bar with parallel sides and a V-shaped groove turned into its surface anywhere along its length.
You enter:
• Bar diameter
• Groove root radius
• Groove depth
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Torsion
Round – Fillet
Calculates the maximum stress concentration factor for a round bar with parallel sides turned down to a smaller diameter for part of its length.
You enter:
• Bar diameter
• Groove root radius
• Groove depth
• Groove angle
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Torsion
Tube – Hole
Calculates the maximum stress concentration factor for a tubular bar with parallel sides and a hole drilled through both opposite walls on its centreline.
You enter:
• Outside diameter
• Inside diameter
• Hole diameter
and the stress concentration calculator will provide factors for:
• Axial
• In-plane bending
• Torsion | {"url":"https://www.calqlata.com/proddetail.asp?prod=00036","timestamp":"2024-11-09T12:48:08Z","content_type":"text/html","content_length":"23412","record_id":"<urn:uuid:b4c12390-9ab6-47bc-b6eb-76cc1070c255>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00760.warc.gz"}
Quantum Computing vs. Encryption - A Clash of Technologies | DigiNudge
Published on November 6th, 2023 by Rahul Kumar Singh
Will quantum computers spell the end of modern encryption? As quantum computing harnesses the bizarre properties of subatomic particles to process data in new ways, it threatens to crack the very
math securing our digital world. Experts warn that today’s encryption could soon meet its match against the exponential power of quantum.
But while the threat is real, quantum computers are still in their infancy. Cryptographers are racing to shield encryption before this clash of technologies hits. Though an “encryption apocalypse”
dominates headlines, quantum-resistant encryption and prudent planning may just save our data in the end.
According to ExpressVPN’s research on encryption’s history, our modern public key infrastructure is potentially vulnerable to the looming quantum threat.
A Brief History of Encryption
Before diving into the details of quantum computing and encryption, it helps to understand the history of encryption.
Early Ciphers and Cryptography
• Encryption dates back thousands of years, with simple ciphers used by ancient cultures to protect messages.
• Cryptography advanced alongside mathematics, with cipher techniques like substitution and transposition becoming more complex.
• Early encryption was used mainly for military and diplomatic purposes.
The Information Age and Public Key Cryptography
• The digital revolution increased the need for encryption to protect computer systems and data.
• In the 1970s, public key cryptography was invented, allowing strangers to communicate securely.
• Public key systems like RSA and elliptic curve cryptography power ecommerce and secure internet today.
The Quantum Threat Emerges
• In the 1990s, quantum computing was proposed, with implications for breaking encryption.
• Shor’s algorithm could theoretically crack RSA and ECC public keys on a large enough quantum computer.
• Though the threat exists, practical quantum computers are still emerging. Defenses are being developed.
As this history shows, encryption has evolved alongside technology to secure communications. Today’s widespread public key systems could be threatened as quantum matures.
The Power of Quantum Computing
To understand the interaction between quantum computing and encryption, we must explore what gives quantum computing its power.
Qubits and Superposition
• Quantum computers use qubits instead of binary bits. Qubits can represent 1 and 0 simultaneously via superposition.
• This allows quantum computers to perform calculations on many states at once in parallel.
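A toy classical simulation makes the idea concrete: a qubit's state is a length-2 complex amplitude vector, and a Hadamard operation puts the |0⟩ state into an equal superposition. This sketch only illustrates the math on an ordinary computer — it is not a quantum computer.

```python
from math import sqrt

# A qubit state as a length-2 complex vector: amplitudes of |0> and |1>
zero = [1 + 0j, 0 + 0j]

def hadamard(state):
    """Apply the Hadamard gate: maps |0> to (|0> + |1>) / sqrt(2)."""
    a, b = state
    s = 1 / sqrt(2)
    return [s * (a + b), s * (a - b)]

plus = hadamard(zero)                      # equal superposition of |0> and |1>
probs = [abs(amp) ** 2 for amp in plus]    # measurement probabilities (Born rule)
print(probs)                               # [0.5, 0.5] up to floating-point rounding
```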
Entanglement and Interference
• Separate qubits can be entangled and act as one unit even when physically apart.
• Quantum interference from superposition and entanglement creates advantages in computing.
The Promise and Challenges
• In theory, quantum allows certain problems, like factorization, to be solved much faster.
• Technical challenges exist in building stable qubits and scaling up systems.
• If achieved, quantum supremacy over classical computing is possible for some but not all problems.
Quantum introduces fundamental computing advantages, though practical systems are still emerging. This leads to the encryption threat.
Cracking Encryption with Quantum Computers
Most encryption today relies on mathematical problems that are very difficult for normal computers to solve, like factoring large prime numbers. However, quantum computing changes the game.
Shor’s Algorithm for Factorization
• Discovered in the 1990s, Shor’s algorithm leverages quantum properties to factor large numbers efficiently.
• This could be used to break popular public key systems like RSA, threatening security.
• However, substantially more qubits than today's machines possess would be needed to run Shor's algorithm.
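Shor's speedup lies specifically in finding the multiplicative order r of a modulo N; once r is known, the factors follow classically. The toy sketch below factors N = 15 using brute-force order finding — exactly the step that scales badly on classical hardware and that a quantum computer would accelerate.

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r ≡ 1 (mod n): the step Shor's algorithm speeds up."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 7
r = order(a, n)                 # for a=7, n=15 the order is r=4
p = gcd(pow(a, r // 2) - 1, n)  # gcd(48, 15) = 3
q = gcd(pow(a, r // 2) + 1, n)  # gcd(50, 15) = 5
print(r, p, q)                  # → 4 3 5
```

For a 2048-bit RSA modulus this brute-force loop is hopeless, which is why the quantum period-finding subroutine matters.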
Grover’s Algorithm and Symmetric Keys
• Grover’s algorithm could speed up brute-force attacks against encryption keys.
• Symmetric systems with keys like AES may be impacted but not necessarily broken.
• Again, large qubit quantum computers would be needed to see benefits.
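Grover's amplitude amplification can also be simulated classically for tiny cases. For N = 4 items a single oracle-plus-diffusion iteration drives the marked item's probability to 1 (in general about (π/4)√N iterations are needed). This is a toy illustration of the math, not a real quantum speedup.

```python
# Toy classical simulation of one Grover iteration over N = 4 items
N, marked = 4, 2
amps = [1 / N ** 0.5] * N          # uniform superposition: amplitude 1/2 each

# Oracle: flip the sign of the marked item's amplitude
amps[marked] = -amps[marked]

# Diffusion: reflect every amplitude about the mean amplitude
mean = sum(amps) / N
amps = [2 * mean - a for a in amps]

probs = [a * a for a in amps]
print(probs)   # marked item reaches probability ~1 after one iteration
```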
When Will the Threat Be Real?
• Currently, quantum computers are not capable enough to break encryption schemes.
• Predicting the exact timeline is difficult, but 10-30 years is a common estimate.
• The threat is real, but encryption is still secure against modern quantum prototypes.
Quantum algorithms like Shor’s and Grover’s pose future risks. But when will quantum computers be ready for such applications?
The Race to Practical Quantum Computing
To assess the quantum threat, we have to examine the state of quantum computer development:
Major Tech Players and Startups
• Tech giants like IBM, Google, Intel, and Microsoft are all investing in quantum research.
• Startups are also entering the space, trying to pioneer quantum technologies.
• Government labs and academia are pushing theoretical and practical development.
Current Scale and Limitations
• Right now, quantum computers are limited to less than 100 qubits with high error rates.
• This small size limits the complexity of problems they can currently solve.
• Scaling up stable qubits remains a huge engineering challenge.
Timeframes for Cryptanalysis Capabilities
• Most experts think it will take at least 10 years to develop quantum computers capable of breaking modern encryption.
• Some predictions put the timeline at 20 years or more from today.
• Progress is uneven, making timeline predictions difficult. The threat is real but still years away.
Though the pace is accelerating, quantum computers are still quite far from breaking encryption schemes in a practical sense.
Defending Encryption from the Quantum Threat
The good news is that work is already underway to enhance encryption against the risk of future quantum attacks:
Promising Post-Quantum Cryptographic Schemes
• Cryptographers are developing new public key algorithms resistant to quantum techniques.
• Leading proposals include lattice-based and multivariate cryptosystems.
• Work is being done to standardize post-quantum algorithms before quantum matures.
Hybrid Encryption Approaches
• Current encryption can be strengthened by combining algorithms and keys.
• For example, double-encrypting communication with both RSA and AES may prevent easy breaking.
• Splitting data across quantum-resistant and conventional schemes also adds complexity for code breakers.
Managing Cryptographic Agility
• Encryption methods can be swapped out and upgraded as risks emerge.
• However, this requires careful planning to scale across users and systems.
• Cryptographic agility will become more critical as the post-quantum transition approaches.
Researchers are already developing encryption designed to withstand the quantum threat. Deploying these updated schemes in time will be key.
The Path Forward
Quantum computing and encryption are on a collision course, but the reality is nuanced. While we must take the risks seriously, encryption can adapt to the challenges ahead. Wise preparation now will
enable a smooth transition to post-quantum security. With vigilance and continued innovation, our encrypted digital infrastructure can withstand this clash of technologies.
The exponential power promised by quantum computing could upend many fields, including the ability to break modern encryption. However, practical quantum computers on this scale are still years away.
In the interim, cryptographers are racing to implement quantum-resistant encryption schemes. With prudent planning and upgrades to existing systems, we can defend against this future threat when it
finally emerges. The path forward will require agility and foresight to keep our data secure in the coming quantum age.
Bigmart Sales Prediction Analysis using Python | Regression | Machine Learning Project Tutorial
Hackers Realm
Unlock the secrets of Bigmart sales prediction with Python! This project tutorial delves into regression and machine learning, enabling you to forecast sales. Explore data preprocessing, feature
engineering, and model evaluation. Gain practical experience with regression algorithms like linear regression, decision trees, and random forests. Supercharge your Python programming, data analysis,
and machine learning skills. Dominate the art of Bigmart sales prediction! #BigmartSalesPrediction #Python #Regression #MachineLearning #DataAnalysis
Big Mart Sales Prediction - Regression
In this project tutorial, we will analyze and predict the sales of Bigmart. Furthermore, we will apply one-hot encoding to improve the accuracy of our prediction models.
You can watch the video-based tutorial with step by step explanation down below
Dataset Information
The data scientists at BigMart have collected 2013 sales data for 1559 products across 10 stores in different cities. Also, certain attributes of each product and store have been defined. The aim is
to build a predictive model and find out the sales of each product at a particular store.
Using this model, BigMart will try to understand the properties of products and stores which play a key role in increasing sales.
Download the Dataset here
Import modules
Let us import all the basic modules we will be needing for this project.
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
• pandas - used to perform data manipulation and analysis
• numpy - used to perform a wide variety of mathematical operations on arrays
• matplotlib - used for data visualization and graphical plotting
• seaborn - built on top of matplotlib with similar functionalities
• %matplotlib - to enable the inline plotting.
• warnings - to manipulate warnings details
• filterwarnings('ignore') is to ignore the warnings thrown by the modules (gives clean results)
Loading the dataset
df = pd.read_csv('Train.csv')
# statistical info
df.describe()
Statistical Information of Dataset
• We will fill the missing values using the range values (mean, minimum and maximum values).
# datatype of attributes
df.info()
• We have categorical as well as numerical attributes which we will process separately.
# check unique values in dataset
df.apply(lambda x: len(x.unique()))
• Attributes containing many unique values are of numerical type. The remaining attributes are of categorical type.
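The apply/len(unique) idiom can be checked on a toy frame; pandas also provides nunique(), which returns the same per-column counts (the column names and values below are made up for illustration):

```python
import pandas as pd

# Toy frame; column names and values are made up for illustration
toy = pd.DataFrame({
    'Item_Type': ['Dairy', 'Soft Drinks', 'Dairy', 'Meat'],
    'Item_MRP':  [249.8, 48.3, 141.6, 53.9],
})

via_apply = toy.apply(lambda x: len(x.unique()))  # the tutorial's idiom
via_nunique = toy.nunique()                       # built-in equivalent

print(via_apply['Item_Type'], via_apply['Item_MRP'])  # 3 4
assert (via_apply == via_nunique).all()
```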
Preprocessing the dataset
Let us check for NULL values in the dataset.
# check for null values
df.isnull().sum()
• We observe two attributes with many missing values (Item_Weight and Outlet_Size).
# check for categorical attributes
cat_col = []
for x in df.dtypes.index:
    if df.dtypes[x] == 'object':
        cat_col.append(x)
• The for loop iterates over the columns of the dataset; if a column's datatype is object, it is added to the list of categorical attributes.
• Above shown are the categorical columns of the dataset.
• We can eliminate a few columns like 'Item_Identifier' and 'Outlet_Identifier'.
Let us remove unnecessary columns.
• The remaining are the necessary columns for this project.
Let's print the categorical columns.
# print the categorical columns
for col in cat_col:
    print(col)
    print(df[col].value_counts())
    print()
• value_counts() displays the count of each distinct value in the column.
• We will combine the repeated attributes which represents the same information.
• We can also combine the attributes which contain low values. This practice will boost our prediction.
Let us now fill in the missing values.
# fill the missing values
item_weight_mean = df.pivot_table(values = "Item_Weight", index = 'Item_Identifier')
• We have calculated the mean based on the 'Item_Identifier'.
• pivot_table() is used to create a categorical column and fill the missing values based on those categories.
• As a result, we have the average weight of each row of Item_Identifer.
Let's check for the missing values of Item_Weight.
miss_bool = df['Item_Weight'].isnull()
• Rows will be represented as (True when having missing values) or (False when there are no missing values.)
• In the case of True, we will fill the missing values for that row.
• Let's fill in the missing values of Item_weight.
for i, item in enumerate(df['Item_Identifier']):
    if miss_bool[i]:
        if item in item_weight_mean:
            df['Item_Weight'][i] = item_weight_mean.loc[item]['Item_Weight']
        else:
            df['Item_Weight'][i] = np.mean(df['Item_Weight'])
• We have iterated in terms of Item_Identifier.
• This if-else condition will get the average weight of that particular item and assigned it to that particular row.
• As a result, the missing values have been filled with the average weight of the corresponding item.
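For reference, the same per-item mean fill can be done without an explicit loop. This is a self-contained sketch on a made-up toy frame, not the tutorial's actual data:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the tutorial's frame; identifiers and weights are made up
toy = pd.DataFrame({
    'Item_Identifier': ['FDA15', 'FDA15', 'DRC01', 'NCD19'],
    'Item_Weight':     [9.3,     np.nan,  np.nan,  np.nan],
})
item_mean = toy.pivot_table(values='Item_Weight', index='Item_Identifier')

# Map each row's identifier to its per-item mean and fill NaNs with it;
# identifiers with no per-item mean fall back to the global mean.
per_item = toy['Item_Identifier'].map(item_mean['Item_Weight'])
toy['Item_Weight'] = toy['Item_Weight'].fillna(per_item)
toy['Item_Weight'] = toy['Item_Weight'].fillna(toy['Item_Weight'].mean())

print(toy['Item_Weight'].tolist())
```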
Let's handle the missing values of Outlet_Size, using Outlet_Type.
outlet_size_mode = df.pivot_table(values='Outlet_Size', columns='Outlet_Type', aggfunc=(lambda x: x.mode()[0]))
• We use the aggregation function from the pivot table.
• Since the Outlet_Type is a categorical attribute we will use Mode. In the case of numerical attributes, we have to use mean or median.
Let's fill in the missing values for Outlet_Size.
miss_bool = df['Outlet_Size'].isnull()
df.loc[miss_bool, 'Outlet_Size'] = df.loc[miss_bool, 'Outlet_Type'].apply(lambda x: outlet_size_mode[x])
• In the subscript of location operation, we have set a condition for filling the missing values for 'Outlet_Size'.
• As a result, it will fill the missing values.
Similarly, we can check for Item_Visibility.
• We have some missing values for this attribute.
• Let's fill in the missing values.
# replace zeros with mean
df.loc[:, 'Item_Visibility'].replace([0], [df['Item_Visibility'].mean()], inplace=True)
• inplace=True, will keep the changes in the dataframe.
• All the missing values are now filled.
Let us combine the repeated Values of the categorical column.
# combine item fat content
df['Item_Fat_Content'] = df['Item_Fat_Content'].replace({'LF':'Low Fat', 'reg':'Regular', 'low fat':'Low Fat'})
• It will combine the values into two separate categories (Low Fat and Regular).
Creation of New Attributes
We can create new attributes 'New_Item_Type' using existing attributes 'item_Identifier'.
df['New_Item_Type'] = df['Item_Identifier'].apply(lambda x: x[:2])
After creating a new attribute, let's fill in some meaningful value in it.
df['New_Item_Type'] = df['New_Item_Type'].map({'FD':'Food', 'NC':'Non-Consumable', 'DR':'Drinks'})
• Map or Replace is used to change the values.
• We have three categories of (Food, Non-Consumables and Drinks).
• We will use this 'Non_Consumable' category to represent the 'Fat_Content' which are 'Non-Edible'.
df.loc[df['New_Item_Type']=='Non-Consumable', 'Item_Fat_Content'] = 'Non-Edible'
• This will create another category for 'Item_Fat_Content'.
Let us create a new attribute to show small values for the establishment year.
# create small values for establishment year
df['Outlet_Years'] = 2013 - df['Outlet_Establishment_Year']
Creation of New Attribute Outlet Years
• It will return the difference between 2013 (when the dataset was collected) and the 'Outlet_Establishment_Year', and store it into the new attribute "Outlet_Years'.
• Since the values are smaller than the previous, it will improve our model performance.
Let's print the dataframe.
Exploratory Data Analysis
Let us explore the numerical columns.
Distribution of Item Weight
• We observe higher mean values.
• And many items don't have enough data, thus showing zero.
Distribution of Item Visibility
• We have filled the zero values with the mean, and the curve is right-skewed.
• All the values are small. Hence, we don't have to worry about normalizing the data.
• This graph shows four peak values.
• Using this attribute we can also create other categories depending on the cost.
Distribution of Item Outlet Sales
• The values are large and the curve is right-skewed.
• We will normalize this using a log transformation.
Log transformation helps to make the highly skewed distribution less skewed.
# log transformation
df['Item_Outlet_Sales'] = np.log(1+df['Item_Outlet_Sales'])
Distribution of Item Outlet Sales after Log Transformation
• After using log transformation, the curve is normalized.
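The effect of the transform can be quantified with pandas' skew(). A small synthetic check on right-skewed (log-normal) data, independent of the tutorial's dataset:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
sales = pd.Series(rng.lognormal(mean=7, sigma=1, size=5000))  # right-skewed

before = sales.skew()
after = np.log(1 + sales).skew()

print(f"skew before: {before:.2f}, skew after: {after:.2f}")
assert abs(after) < abs(before)  # log1p pulls in the long right tail
```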
Let us explore the categorical columns.
Distribution of Item Fat Content
• We observe that most items are low-fat content.
plt.figure(figsize=(15,5))
l = list(df['Item_Type'].unique())
chart = sns.countplot(df["Item_Type"])
chart.set_xticklabels(labels=l, rotation=90)
Distribution of Item Type
• plt.figure() is to increase the figure size.
• chart.set_xticklabels() is to display the labels in a vertical manner as shown in the graph.
Distribution of Outlet Establishment Year
• Most outlets are established in an equal distribution.
Distribution of Outlet Size
Distribution of Outlet Location Type
Distribution of Outlet Type
• You can also combine the low values into one category.
Correlation Matrix
A correlation matrix is a table showing correlation coefficients between variables. Each cell in the table shows the correlation between two variables. The value is in the range of -1 to 1. If two
variables have a high correlation, we can neglect one variable from those two.
corr = df.corr()
sns.heatmap(corr, annot=True, cmap='coolwarm')
• Since we have derived 'Outlet_Years' from 'Outlet_Establishment_Year', we observe a highly negative correlation between the two.
• And a positive correlation is between 'Item_MRP' and 'Item_Outlet_Sales'.
Let's check the values of the dataset.
Label Encoding
Label encoding is to convert the categorical column into the numerical column.
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['Outlet'] = le.fit_transform(df['Outlet_Identifier'])
cat_col = ['Item_Fat_Content', 'Item_Type', 'Outlet_Size', 'Outlet_Location_Type', 'Outlet_Type', 'New_Item_Type']
for col in cat_col:
df[col] = le.fit_transform(df[col])
• We iterate over each column in the cat_col list; for each one, le.fit_transform() converts the values to integers and stores them back in the corresponding column.
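One detail worth seeing on a toy column: LabelEncoder assigns integer codes in sorted order of the category names, so the resulting integers carry an artificial ordering:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
codes = le.fit_transform(['Small', 'Medium', 'High', 'Medium'])

print(list(le.classes_))  # ['High', 'Medium', 'Small'] -- sorted order
print(list(codes))        # [2, 1, 0, 1]
```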
One Hot Encoding
We can also use one hot encoding for the categorical columns.
df = pd.get_dummies(df, columns=['Item_Fat_Content', 'Outlet_Size', 'Outlet_Location_Type', 'Outlet_Type', 'New_Item_Type'])
• It will create a new column for each category. Hence, it will add the corresponding category instead of numerical values.
• If the corresponding location type is present it will show as '1', or else it will show '0'.
• We have around 26 features, which may increase the training time.
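Here is what the indicator columns look like on a toy column (values made up), forcing integer dtype so the 0/1 encoding described above is explicit:

```python
import pandas as pd

toy = pd.DataFrame({'Outlet_Size': ['Small', 'Medium', 'Small']})
dummies = pd.get_dummies(toy, columns=['Outlet_Size'], dtype=int)

print(list(dummies.columns))                  # ['Outlet_Size_Medium', 'Outlet_Size_Small']
print(dummies['Outlet_Size_Small'].tolist())  # [1, 0, 1]
```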
Splitting the data for Training and Testing
Let us drop some columns before training our model.
X = df.drop(columns=['Outlet_Establishment_Year', 'Item_Identifier', 'Outlet_Identifier', 'Item_Outlet_Sales'])
y = df['Item_Outlet_Sales']
Model Training
Now the preprocessing has been done, let's perform the model training and testing.
Note: in practice, don't train and evaluate on the full data as below; split the data into training and test sets. For this project, use the cross-validation score to compare model performance.
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error
def train(model, X, y):
    # train the model
    model.fit(X, y)
    # predict the training set
    pred = model.predict(X)
    # perform cross-validation
    cv_score = cross_val_score(model, X, y, scoring='neg_mean_squared_error', cv=5)
    cv_score = np.abs(np.mean(cv_score))
    print("Model Report")
    print("MSE:", mean_squared_error(y, pred))
    print("CV Score:", cv_score)
• X contains input attributes and y contains the output attribute.
• We use cross_val_score() for better validation of the model.
• Here, cv=5 means that the cross-validation will split the data into 5 parts.
• np.abs() will convert the negative score to positive and np.mean() will give the average value of 5 scores.
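The sign convention is easy to verify once: scikit-learn scorers follow a greater-is-better convention, so MSE is reported negated, and np.abs(np.mean(...)) flips it back. A minimal self-contained check:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X = np.arange(20, dtype=float).reshape(-1, 1)
y = 3 * X.ravel() + 1  # an exactly linear relationship

scores = cross_val_score(LinearRegression(), X, y,
                         scoring='neg_mean_squared_error', cv=5)

assert (scores <= 0).all()          # the scorer returns *negative* MSE
cv_score = np.abs(np.mean(scores))  # flip back to a positive error
print(cv_score)                     # ~0 for a perfect linear fit
```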
from sklearn.linear_model import LinearRegression, Ridge, Lasso
model = LinearRegression(normalize=True)
train(model, X, y)
coef = pd.Series(model.coef_, X.columns).sort_values()
coef.plot(kind='bar', title="Model Coefficients")
Model report: MSE = 0.288 CV Score = 0.289
• The positive values are attributes with positive coefficients and negative values are attributes with negative coefficients.
• Attributes near the centre of the plot have coefficients close to zero, indicating they contribute little information.
model = Ridge(normalize=True)
train(model, X, y)
coef = pd.Series(model.coef_, X.columns).sort_values()
coef.plot(kind='bar', title="Model Coefficients")
Model report: MSE = 0.142 CV Score = 0.429
model = Lasso()
train(model, X, y)
coef = pd.Series(model.coef_, X.columns).sort_values()
coef.plot(kind='bar', title="Model Coefficients")
Model report: MSE = 0.762 CV Score = 0.763
• Both the MSE and CV score have increased.
• Let's try some advanced models
from sklearn.tree import DecisionTreeRegressor
model = DecisionTreeRegressor()
train(model, X, y)
coef = pd.Series(model.feature_importances_, X.columns).sort_values(ascending=False)
coef.plot(kind='bar', title="Feature Importance")
Model report: MSE = 2.7767015e-34 CV Score = 0.567651
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
train(model, X, y)
coef = pd.Series(model.feature_importances_, X.columns).sort_values(ascending=False)
coef.plot(kind='bar', title="Feature Importance")
Model report: MSE = 0.04191 CV Score = 0.30664
from sklearn.ensemble import ExtraTreesRegressor
model = ExtraTreesRegressor()
train(model, X, y)
coef = pd.Series(model.feature_importances_, X.columns).sort_values(ascending=False)
coef.plot(kind='bar', title="Feature Importance")
Model report: MSE = 1.0398099e-28 CV Score = 0.3295
• The MSE has decreased, but the CV score is greater than that of the random forest.
Final Thoughts
• Out of the 6 models, linear regression is the top performer, with the lowest CV score.
• You can also use hyperparameter tuning to improve the model performance.
• You can further try other models like XGBoost, CatBoost etc.
In this project tutorial, we have explored the Bigmart Sales dataset. We learned the uses of one hot encoding in the dataset. We also compared different models to train the data starting from basic
to advanced models.
Get the project notebook from here
Thanks for reading the article!!!
Check out more project videos from the YouTube channel Hackers Realm
"Codes, arithmetic and manifolds" by Matthias Kreck
I will explain the problem behind codes. When sending and receiving information, errors can happen. One wants to reconstruct the original information from the received. This leads to the concept of error-correcting codes. This is so simple that one wonders why not everything is known. To indicate that this is completely wrong, I will explain a relation between codes and unimodular lattices, which leads to very difficult problems in analysis. On the way, very important lattices like E_8 or the Leech lattice will occur. If time permits, I will explain how codes occur in a very natural way in topology from manifolds with a bit of symmetry. This leads to a new construction of codes, which I will explain.
Data 100, Midterm 1 Fall 2019
Student ID:
Exam Room:
All work on this exam is my own (please sign):
• This midterm exam consists of 90 points and must be completed in the 80 minute time period ending at 9:30, unless you have accommodations supported by a DSP letter.
• Note that some questions have circular bubbles to select a choice. This means that you should only select one choice. Other questions have boxes. This means you should select all that apply.
• When selecting your choices, you must fully shade in the box/circle. Check marks will likely be mis-graded.
• You may use a one-sheet (two-sided) cheat sheet, in addition to the included Midterm 1 Reference Sheet.
Data 100 Midterm 1, Page 2 of 11 October 8, 2019
1 Cereal (Pandas)
You are given a Pandas DataFrame cereal with information about 80 different cereals.
The figure above is the result of running cereal.head(). All values are per-serving. type can be either cold or hot. rating is the average score (out of 100) given by customers.
(a) [1 Pt] What is the granularity of the cereal data frame? ⃝serving of cereal
⃝type of cereal ⃝manufacturer ⃝name of cereal
(b) [3 Pts] Add a new column to cereal named low calorie which has the boolean value True if the cereal is low-calorie and False otherwise. A cereal is low-calorie if it has less than or equal to 100
calories per serving.
cereal[______________________] = _________________________
(c) [3 Pts] Identify the type for each of the following variables.
⃝continuous type
⃝continuous low calorie ⃝continuous
(d)
[6 Pts] For this problem and the next problem below you may use the following functions:
groupby, agg, filter, merge
unique, value_counts, sort_values, apply
max, min, mean, median, std, count,
np.mean, sum, any, all, isnull, len
You can also use any other methods you have used in class, for example, anything in the Pandas or Numpy libraries. You can leave lines inside parentheses blank to represent a function call with no
Create a Series indexed by manufacturer. For each manufacturer, the value should be equal to the maximum sugars value of all cereals by that manufacturer. Your series should be sorted by the value in
decreasing order. You may not need all lines.
max_sugar = cereal._________________(__________________)[“sugars”]
For example, the first few entries of the Series would be:
Kelloggs 15
Post 14
Quaker Oats 14
[8 Pts] Which manufacturers make only cold cereals? Return an array, list, or series of these manufacturers that exclusively manufacture cold cereals, i.e. your list should not include a company if it makes ANY hot cereals. You may not need all lines. You can leave lines inside parentheses blank to represent a function call with no arguments.
def f(df):
cold_only =
(f)
[2 Pts] Consider the data frame below. Assume cereal was modified correctly in part a. The Interior values are average rating per category, e.g. 32.026596 is the average rating of the low calorie
cereals made by General Mills.
Which of the following four lines of code could be used to create this data frame?
pd.pivot table(data=cereal, index=’manufacturer’, columns=’low calorie’, values=’rating’, aggfunc=np.mean)
cereal.groupby([’manufacturer’,’low calorie’])[’rating’] .mean()
pd.pivot table(data=cereal, index=’low calorie’, columns=’manufacturer’, values=’rating’, aggfunc=np.mean)
cereal.groupby(’rating’)[[’manufacturer’,’low calorie’]] .mean()
[2 Pts] The above table contains NaNs because some companies don’t make cereals with the given calorie level. By calorie level, we mean whether low calorie is True or False. E.g. Nabisco does not
make any low calorie cereals. If we wanted to show a colleague this pivot table to illustrate the average rating across manufacturer and calorie level combinations for cereals, what should we do with
these NaN values? Pick the one best option that applies.
⃝Fill them with the average rating across all the cereals in the same calorie level
⃝Fill them with the average rating across all cereals from the same manufacturer
⃝Both A and B are acceptable
⃝Leave as-is
⃝Replace with a rating randomly selected from a cereal with the same calorie level
2 Computing Summary Statistics
Suppose we’re given the set of points {−15, 10, 20, 30, 30, 35, 40, 50}, and we want to deter- mine a summary statistic c. For each of the following loss functions, determine or select the optimal
value of c, cˆ, that minimizes the corresponding empirical risk.
For (a), (b), and (c), select the correct answer. For (d), write your answer in the provided box. To help you with this task, we have computed the following: mean = 25, median = 30, SD =
20, and n = 8.
(a) [2 Pts] (x_i − c)^2 ⃝20
⃝25 ⃝5 ⃝50 ⃝0
(b) [2 Pts] 5(x_i − c)^2 ⃝20
⃝125 ⃝25 ⃝100 ⃝-15
(c) [2 Pts] |x_i − c| ⃝25
⃝50 ⃝40 ⃝30 ⃝-15
(d) [5 Pts] (3x_i − c)^2
For part d, write only your answer in the box below. Feel free to show your work else-
where on this page.
cˆ =
3 Regex
You are interviewing for a job at Triple Rock, and they want you to prove your skills with regular expressions on synthetic data.
(a) [4 Pts] First they give you a list of two of their distributors.
distributors[0] = “Geyser Beverage: 55 Wright Brothers Ave”,
distributors[1] = “Mindful Distribution: 2935 Adeline St”
Give a regular expression that extracts the name and street number from such strings. Your regex should work for either string. For example, after running the code below, name should be ’Geyser
Beverage’ and street number should be 55.
regex_1 = r’ ___________________________________________’
name, street_number = re.findall(regex_1, distributors[0])[0]
(b) Next they give you information regarding how each of two table paid its bill. The first two entries are:
paid bills[0] = “4123713131673827 paid $30.50, and $37 paid by 5612512165638672.”,
paid bills[1] = “$171.25 was charged to 4612512165638672.”
i. [4 Pts] Give a regular expression that can extracts Visa and Mastercard credit card numbers from such strings. Your regex should work for either string. A Visa credit card number is any sequence
of 16 digits that starts with a 4, and a Mastercard is any sequence of 15 digits that starts with 5. Observe there are no dashes or other extra- neous characters in a credit card number. For example,
after running the code below, cc nums should be [’4123713131673827’, ’5612512165638672’].
regex_2 = r’ ___________________________________________’
cc_nums = re.findall(regex_2, paid_bills[0])
ii. [4 Pts] Write a regular expression which will correctly find and return the dollar amounts including whatever is to the right of the optional decimal point. Your regex should work for either
string. For example, after running the code below, amountsshouldbe[’30.50’, ’37’].
regex_3 = r’ ___________________________________________’
amounts = re.findall(regex_3, paid_bills[0])
4 EDA
(a) [2 Pts] Which of the following transformations could help make linear the relationship shown in the plot below? Select all that apply:
log(y)  log(x)  e^x
y^3
None of the Above
(b) Sally likes making desserts, and she wants to learn more about the sugar content of her recipes. For each of 100 recipes, she records the amount of sugar (in grams) per serving. A histogram of
the sugar measurements appears in the plot below on the left.
Her friend, Max, makes a similar histogram of his 100 recipes, which appears below on the right. Note: The two images shown are exactly alike.
i. [3 Pts] How many of Sally’s recipes have more than 10 grams of sugar per serving? Do not worry about interval endpoints, i.e. assume that no recipes have exactly 0, 5, 10, or 20 grams.
⃝15 ⃝30 ⃝60 ⃝Impossible to tell
ii. [3 Pts] How would you describe the distribution of sugar in Sally’s recipes? Check
all that apply.
unimodal multimodal symmetric
skew left skew right contains outliers
iii. [2 Pts] Do Max’s recipes have the exact same sugar content as Sally’s?
⃝Yes ⃝No ⃝Impossible to tell
5 Visualizations
(a) [6 Pts] Consider plots A and B below. For each plot, identify its primary flaw (if any)
and give a recommendation in the provided box.
Does Plot A have a significant flaw? ⃝Yes ⃝No
If you picked yes, in the box below, make a recommendation to fix the primary flaw.
Does Plot B have a significant flaw? ⃝Yes ⃝No
If you picked yes, in the box below, make a recommendation to fix the primary flaw.
(b)
[6 Pts] Consider plots C and D below. For each plot, identify its primary flaw (if any) and give a recommendation in the provided box.
Does Plot C have a significant flaw? ⃝Yes ⃝No
If you picked yes, in the box below, make a recommendation to fix the primary flaw.
Does Plot D have a significant flaw? ⃝Yes ⃝No
If you picked yes, in the box below, make a recommendation to fix the primary flaw.
[2 Pts] For plot D above, which species of Iris has the highest frequency in the dataset? ⃝Virginica
⃝Impossible to tell
6 Sampling
(a) Professor Hug is an instructor for both Data 100 and CS W186 this semester. Stu- dents come to his office hours for both of these classes, but some students also come for other reasons. Professor
Hug is interested in knowing how many Data 100 students this semester have taken CS 61B. He takes a convenience sample of people that come to office hours.
i. [2 Pts] Name a group or individual that is included in the sampling frame but is not in the population of interest.
ii. [2 Pts] Name a group or individual that is in the population of interest but not in the sampling frame.
(b) For the rest of the question, assume that the sampling frame is exactly the same as the population. That is, both the population and the sampling frame are the population of Data 100 students. Also, assume that there are 1000 students in Data 100 and that 500 of them have taken CS 61B. Using the class list, Professor Hug takes a simple random sample of 50 students in Data 100. Let X_i be 1 if the ith person sampled took CS 61B and 0 otherwise, for i = 1, …, 50.
Find the following quantities. Somewhere in this problem you might need the "finite population correction factor" given by (N − n)/(N − 1).
i. [3 Pts] P(X_5 = 1) =
ii. [4 Pts] P(X_5 = 1, X_50 = 1) =
iii. [3 Pts] Var(X_1 + X_2 + ··· + X_50) =
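These quantities can be sanity-checked by simulation (illustrative only, not a substitute for the analytic answers): draw repeated simple random samples of 50 from a population of 1000 with 500 successes, and estimate P(X_5 = 1) and the variance of the sample total:

```python
import random

random.seed(0)
population = [1] * 500 + [0] * 500   # 1 = took CS 61B, 500 of 1000
trials = 20000

hits = 0
totals = []
for _ in range(trials):
    sample = random.sample(population, 50)  # SRS without replacement
    hits += sample[4]                       # the 5th person drawn, X_5
    totals.append(sum(sample))

p5 = hits / trials
mean_total = sum(totals) / trials
var_total = sum((t - mean_total) ** 2 for t in totals) / trials

print(round(p5, 2))        # close to 500/1000 = 0.5
print(round(var_total, 1)) # close to 50*(1/2)*(1/2)*(950/999), about 11.9
```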
(c) [4 Pts] Now suppose Professor Hug takes a census of the class. Let X_i be 1 if the ith person sampled took CS 61B and 0 otherwise, for i = 1, …, 1000. Find the given quantity.
Var(X_1 + X_2 + ··· + X_1000) =
Characteristics of Coarse Aggregates - Unit Weight, Water Absorption and Specific Gravity - TechConsults
(IS 2386 (Part 3), 2016: Methods of Test for Aggregates for Concrete - Specific Gravity, Density, Voids, Absorption and Bulking.)
This method covers the procedure for determining unit weight or bulk density, Specific Gravity, Density, Voids, Absorption and Bulking of aggregate.
Bulk density of Aggregates
The bulk density of an aggregate = mass of aggregate / volume of aggregate (volume of the container), in kg/litre.
The aggregates are placed and compacted in the container. The degree of compaction depends on the shape of the aggregates: flaky and elongated particles hinder compaction and ultimately lower the bulk density. A low bulk density is often due to such unfavourably shaped particles, whose percentage should not exceed 15 to 20 percent.
Apparatus for Bulk density Test:
Cylindrical Measure as follows:
Size of Aggregates (mm)    Volume of Container (litre)    Tamping Rod Size (mm)
Less than 4.75             3                              3.15
4.75 to 40                 15                             4.0
Over 40                    30                             5.0
Tamping rods of sizes as above
Test procedure for testing Bulk Density of aggregates
1. The designated container (of known volume A, in litres) is filled in three layers.
2. Each layer is compacted with 25 strokes of the designated tamping rod.
3. The net weight of the aggregate, B (in kg), is determined.
Bulk density = B / A kg per litre, which can be converted to other units such as kg per cubic metre.
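As a minimal sketch of the formula above (the example weights are illustrative, not prescribed by the standard):

```python
def bulk_density(net_weight_kg, container_volume_litre):
    """Bulk density = net weight of aggregate / volume of the container (kg/litre)."""
    return net_weight_kg / container_volume_litre

rho = bulk_density(24.0, 15.0)   # e.g. 24 kg of aggregate in the 15-litre measure
rho_per_m3 = rho * 1000          # 1 kg/litre = 1000 kg/m^3
print(rho, rho_per_m3)           # 1.6 1600.0
```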
Bulk Density of Aggregates – Test Format
Test covers the fine and coarse aggregates classified as below:
Less than 4.75 mm.
4.7m mm to 40 mm and
above 40 mm.
Accordingly, the code suggests the different volume measures (3, 15 and 30 litres respectively), along with the sizes of the tamping rod, as given in the table above.
Water Absorption and Specific Gravity of Aggregate
(IS 2386 2016 Part 3 Method of test for Aggregates for Concrete- Density SG Bulking Absorption)
Three main methods are specified for use according to whether the size of the aggregate is: larger than 10 mm (Method I); between 40 and 10 mm (Method I or II may be used); or smaller than 10 mm (Method III).
An alternative method (Method IV) is also permitted.
The specific gravity (SG) of an aggregate = mass of solid in a given volume of sample / mass of an equal volume of water at the same temperature. It is a dimensionless number.
(When a substance is immersed in water, it loses weight, and the loss of weight is equal to the weight of the water displaced.)
The value of SG of aggregates ranges from 2.6 to 3.0. Below this range the aggregates have less strength and may not be used in structural concrete. The value of SG depends on the quality of the aggregates, such as their shape and internal grading.
Since aggregates generally contain voids, there are different types of specific gravities.
For the absolute (or complete) SG, the volume in the denominator of the SG definition is the volume of the solid material excluding the voids:
Absolute SG = mass of solid / mass of an equal void-free volume of water at a stated temperature.
A = the weight of the saturated aggregate in water (A1 - A2), as detailed in the calculation sheet;
B = the weight of the saturated surface-dry (SSD) aggregate in air; and
C = the weight of the oven-dried aggregate in air.
Bulk specific gravity = C / (B - A)
Apparent specific gravity = C / (C - A)
Water Absorption and Specific Gravity of Aggregate > 10 mm _ test
1. A 2000 g sample of aggregate shall be thoroughly washed, placed in the wire basket and immersed in water at a temperature of 22 °C to 32 °C for 24 hours.
2. The basket and the sample shall then be jolted/shaken and weighed in the water (let this be A1 = weight of basket and aggregate in water).
3. The aggregate shall then be removed from the water and allowed to drain for 5 minutes. The aggregate is then dried with a dry cloth and weighed (let this be B = weight of the SSD aggregate in air).
4. Weigh the empty basket in the water (let this be A2).
5. The aggregate shall then be placed in an oven at 100-110 °C for 24 hours and weighed (let this be C).
Specific gravity = C / (B - A)
Apparent specific gravity = C / (C - A)
Water Absorption
Water absorption is the amount of water absorbed by the aggregate. It is the amount of water retained by the aggregate in saturated surface dry condition (SSD).
Dry a sample of aggregate at 100 °C until constant weight is obtained (let it be C).
Wet the sample for 24 hours, surface dry it and weigh it (let it be B).
Water absorption (percent of dry weight) = 100 × (B - C) / C
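A minimal sketch of these calculations in code (the weights are illustrative, not taken from the standard):

```python
def aggregate_properties(A1, A2, B, C):
    """A1: basket + saturated aggregate in water; A2: empty basket in water;
    B: SSD aggregate in air; C: oven-dried aggregate in air (all in grams)."""
    A = A1 - A2                        # weight of saturated aggregate in water
    bulk_sg = C / (B - A)
    apparent_sg = C / (C - A)
    absorption = 100 * (B - C) / C     # water absorption, percent of dry weight
    return bulk_sg, apparent_sg, absorption

# Illustrative figures for a 2000 g sample
bulk, apparent, wa = aggregate_properties(A1=2350.0, A2=1100.0, B=2010.0, C=1990.0)
print(round(bulk, 3), round(apparent, 3), round(wa, 2))
```

With these illustrative weights the bulk specific gravity works out to about 2.62, which falls inside the typical 2.6 to 3.0 range mentioned above.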
Water Absorption and SG of Aggregates – Test Format
Water Absorption and Specific Gravity of Aggregate
< 10 mm _ test Procedure
Pycnometer Method
Sample size:
1.0 kg for aggregates from 4.75 mm to 10 mm, and
0.5 kg for aggregates finer than 4.75 mm.
Put the sample for 24 hours in water at 22 to 32 degree C temperature.
The sample is taken out and drained and is put to SSD conditions and weighed. Let it be A
Fill water in pycnometer and add aggregates to it. Shake it to remove air entrapped. The pycnometer now has aggregates and water above it. Maintain a certain level and put a mark on pycnometer.
Find the weight of Pycnometer + aggregates+ water> Let it be B.
Empty the pycnometer over an arrangement to drain out the water and keep the aggregates in oven at 110 degree centigrade. Let this oven dry weight be D.
Refill it with water up to marked level.
Find weight of pycnometer + water. Let it be C.
Specific gravity = D / (A - (B - C))
Apparent specific gravity = D / (D - (B - C))
Water absorption (percent of dry weight) = 100 × (A - D) / D
Bulking of Sand
Dry sand has a certain volume. The same quantity of sand in a wet condition occupies a larger volume. This increase in volume is called bulking.
Test Method for Bulking of Sand
Take a container and fill it loosely with sand up to two-thirds of the height of the container. Level the sand and measure the depth of the fill at the centre. Let it be h.
Add water to submerge the sand and compact it with a rod. Level the sand surface under water and measure the depth of the sand at the centre. Let it be H.
The bulking of sand (percent) = ((h - H) / H) × 100
When wet sand is used at open sites for manufacturing concrete on a volume basis, the bulking percentage is used to adjust the quantity of sand.
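A minimal sketch of the bulking formula (the depths are illustrative):

```python
def sand_bulking_percent(h, H):
    """h: depth of loose moist sand; H: depth after submerging and compacting (same units)."""
    return (h - H) / H * 100

bulking = sand_bulking_percent(20.0, 16.0)   # e.g. 20 cm loose fill settles to 16 cm
print(bulking)                               # 25.0 -> increase sand volume by 25%
```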
[Solved] Find the general solution of the differential equation... | Filo
Find the general solution of the differential equation dy/dx = (x + 1)/(2 - y), (y ≠ 2)
We have
dy/dx = (x + 1)/(2 - y), (y ≠ 2) ... (1)
Separating the variables in equation (1), we get
(2 - y) dy = (x + 1) dx ... (2)
Integrating both sides of equation (2), we get
2y - y²/2 = x²/2 + x + C₁
or x² + y² + 2x - 4y + C = 0, where C = 2C₁,
which is the general solution of equation (1).
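As a quick numerical check of this general solution (a sketch, not part of the original answer: it takes C = 4 to pick one curve x² + y² + 2x - 4y + 4 = 0 from the family and verifies that it satisfies dy/dx = (x + 1)/(2 - y)):

```python
import math

# One member of the family x^2 + y^2 + 2x - 4y + C = 0, taking C = 4:
# (y - 2)^2 = 1 - (x + 1)^2, lower branch y = 2 - sqrt(1 - (x + 1)^2)
def y_of(x):
    return 2 - math.sqrt(1 - (x + 1) ** 2)

def rhs(x, y):
    return (x + 1) / (2 - y)           # right-hand side of the equation

# Compare a numerical derivative of the curve with (x + 1)/(2 - y)
for x in (-1.5, -1.0, -0.5):
    h = 1e-6
    dydx = (y_of(x + h) - y_of(x - h)) / (2 * h)
    assert abs(dydx - rhs(x, y_of(x))) < 1e-4
print("the implicit solution satisfies the ODE")
```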
Question: Find the general solution of the differential equation dy/dx = (x + 1)/(2 - y), (y ≠ 2)
Updated on: Aug 14, 2023
Topic: Differential Equations
Subject: Mathematics
Class: Class 12
Designed experiments with replicates: Principal components or Canonical Variates?
A few days ago, a colleague of mine wanted to hear my opinion about what multivariate method would be the best for a randomised field experiment with replicates. We had a nice discussion and I
thought that such a case-study might be generally interesting for the agricultural sciences; thus, I decided to take my Apple MacBook Pro, sit down, relax and write a new post on this matter.
My colleague’s research study was similar to this one: a randomised block field experiment (three replicates) with 16 durum wheat genotypes, which was repeated in four years. The quality of grain
yield was assessed by recording the following four traits:
1. kernel weight per hectoliter (WPH)
2. percentage of yellow berries (YB)
3. kernel weight (grams per 1000 kernels; TKW)
4. protein content (% d.m.; PC)
My colleague had averaged the three replicates for each genotype in each year, so that the final dataset consisted of a matrix of 64 rows (i.e. 16 varieties x 4 years) and 4 columns (the 4 response
variables). Taking the year effect as random, we have four random replicates for each genotype, across the experimental seasons.
You can have a look at this dataset by loading the ‘WheatQuality4years.csv’ file, that is online available, as shown in the following box.
fileName <- "https://www.casaonofri.it/_datasets/WheatQuality4years.csv"
dataset <- read.csv(fileName)
dataset$Year <- factor(dataset$Year)
## Genotype Year WPH YB TKW PC
## 1 ARCOBALENO 1999 81.67 46.67 44.67 12.71
## 2 ARCOBALENO 2000 82.83 19.67 43.32 11.90
## 3 ARCOBALENO 2001 83.50 38.67 46.78 13.00
## 4 ARCOBALENO 2002 78.60 82.67 43.03 12.40
## 5 BAIO 1999 80.30 41.00 51.83 13.91
## 6 BAIO 2000 81.40 20.00 41.43 12.80
My colleague’s question
My colleague’s question was: “can I use a PCA biplot, to have a clear graphical understanding about (i) which qualitative trait gave the best contribution to the discrimination of genotypes and (ii)
which genotypes were high/low in which qualitative traits?”.
I think that the above question may be translated into the more general question: “can we use PCA with data from designed experiments with replicates?”. For this general question my general answer
has to be NO; it very much depends on the situation and aims. In this post I would like to show my point of view, although I am open to discussion, as usual.
Independent subjects or not?
I must admit that I appreciated that my colleague wanted to use a multivariate method; indeed, the quality of winter wheat is a ‘multivariable’ problem and the four recorded traits are very likely
correlated to each other. Univariate analyses, such as a set of four separate ANOVAs (one per trait) would lead us to neglect all the reciprocal relationships between the variables, which is not an
efficient way to go.
PCA is a very widespread multivariate method and it is useful whenever the data matrix is composed by a set of independent subjects, for which we have recorded a number of variables and we are only
interested in the differences between those independent subjects. In contrast to this, data from experiments with replicates are not composed by independent subjects, but there are groups of subjects
treated alike. For example, in our case study we have four replicates per genotype and these replicates are not independent, because they share the same genotype. Our primary interest is not to study
the differences between replicates, but, rather, the differences between genotypes, that are groups of replicates.
What happens if we submit our matrix of raw data to PCA? The subjects are regarded as totally independent from one another and no effort is made to keep them together, depending on the group
(genotype) they belong to. Consequently, a PCA biplot (left side, below) offers little insight: when we isolate, e.g., the genotypes ARCOBALENO and COLORADO, we see that the four replicates are
spread around the space spanned by the PC axes, so that we have no idea about whether and how these two groups are discriminated (right side, below).
# PCA with raw data
par(mfrow = c(1,2))
pcdata <- dataset[,3:6]
row.names(pcdata) <- paste(abbreviate(dataset$Genotype, 3),
dataset$Year, sep = "-")
pcaobj <- vegan::rda(pcdata, scale = T)
biplot(pcaobj,
       cex = 0.5, xlim = c(-1,1), ylim = c(-2,2),
       expand = 0.5)
biplot(summary(pcaobj)$sites[c(1:4, 13:16),],
cex = 0.5, xlim = c(-1,1), ylim =c(-2,2),
expand = 3)
After seeing this, my colleague came out with his next question: “what if we work on the genotype means?”. Well, when we make a PCA on genotype means, the resulting biplot appears to be clearer (see
below), but such clarity is not to be trusted. Indeed, the year-to-year variability of genotypes has been totally ‘erased’ and played no role in the construction of such biplot. Therefore, there is
no guarantee that, for each genotype, all the replicates can be found in the close vicinity of the genotype mark. For example, in the biplot below we see that COLORADO and ARCOBALENO are very
distant, although we have previously seen that the replicates were not very well discriminated.
# PCA with genotype means
par(mfrow = c(1,1))
avgs <- aggregate(dataset[,3:6], list(dataset$Genotype),
rownames(avgs) <- avgs[,1]
avgs <- avgs[,-1]
pcaobj2 <- vegan::rda(avgs, scale = T)
biplot(pcaobj2, scaling = 2)
In simple words, PCA is not the right tool, because it looks at the distance between individuals, but we are more concerned about the distance between groups of individuals, which is a totally
different concept.
Obviously, the next question is: “if PCA is not the right tool, what is the right tool, then?”. My proposal is that Canonical Variate Analysis (CVA) is much better suited to the purpose of group
What is CVA?
Canonical variates (CVs) are similar to principal components, in the sense that they are obtained by using a linear transformation of the original variables (\(Y\)), such as:
\[CV = Y \times V\]
where \(V\) is the matrix of transformation coefficients. Unlike PCA, the matrix \(V\) is selected in a way that, in the resulting variables, the subjects belonging to the same group are kept close
together and, thus, the discrimination of groups is ‘enhanced’.
This is clearly visible if we compare the previous PCA biplot with a CVA biplot. Therefore, let’s skip the detail (so far) and perform a CVA, by using the CVA() function in the aomisc package, that
is the companion package for this blog. Please, note that, dealing with variables in different scales and measurement units, I decided to perform a preliminary standardisation process, by using the
scale() function.
# Loads the packages
library(aomisc)
# Standardise the data
groups <- dataset$Genotype
Z <- apply(dataset[,3:6], 2, scale, center=T, scale=T)
## WPH YB TKW PC
## [1,] 0.3375814 -0.003873758 -0.37675661 -0.5193726
## [2,] 0.7681267 -1.154020460 -0.59544503 -1.5621410
## [3,] 1.0168038 -0.344657966 -0.03495471 -0.1460358
## [4,] -0.8018791 1.529655178 -0.64242254 -0.9184568
## [5,] -0.1709075 -0.245404565 0.78310197 1.0254693
## [6,] 0.2373683 -1.139963112 -0.90160881 -0.4035095
# Perform a CVA with the aomisc package
cvaobj <- CVA(Z, groups)
The CVA biplot
The main results of a CVA consist of a matrix of canonical coefficients and a matrix of canonical scores. Both these entities are available as the output of the CVA() function.
Vst <- cvaobj$coef # canonical coefficients
CVst <- cvaobj$scores # canonical scores
These two entities resemble, respectively, the rotation matrix and principal component scores from PCA and, although they have different properties, they can be used to draw a CVA biplot.
# biplot code
par(mfrow = c(1, 2))
row.names(CVst) <- paste(abbreviate(dataset$Genotype, 3),
dataset$Year, sep = "-")
biplot(CVst[,1:2], Vst[,1:2], cex = 0.5,
xlim = c(-3,4), ylim = c(-3, 4))
abline(h=0, lty = 2)
abline(v=0, lty = 2)
biplot(CVst[c(1:4, 13:16),1:2], Vst[,1:2], cex = 0.5,
xlim = c(-3,4), ylim = c(-3, 4),
expand = 24)
We see that, in contrast to the PCA biplot, the four replicates of each variable are ‘kept’ relatively close together, so that the groups are well discriminated. For example, we see that the genotype
COLORADO is mainly found on the second quadrant and it is pretty well discriminated by the genotype ARCOBALENO, which is mainly found on the third quadrant.
Furthermore, we can also plot the scores of centroids for all groups, that are available as the output of the CVA() function.
cscores <- cvaobj$centroids
# biplot code
par(mfrow = c(1,1))
biplot(cscores[,1:2], Vst[,1:2], cex = 0.5,
xlim = c(-3,3.5), ylim = c(-3, 3.5))
abline(h=0, lty = 2)
abline(v=0, lty = 2)
Due to the fact that the groups are mostly kept together in a CVA biplot, we can expect that subjects belonging to a certain group, with highest probability, are found in the close proximity of the
respective centroid (which is not true for a PCA biplot, obtained from group means). As the reverse, we can say that the group centroid is a good representative of the whole group and the distances
between the centroids will reflect how well the respective groups are discriminated.
Having said so, we can read the biplot by using the usual ‘inner product rule’ (see this post here): the average value of one genotype in one specific variable can be approximated by considering how
long are the respective trait-arrow and the projection of the group-marker on the trait-arrow.
We can see that COLORADO, BAIO and SANCARLO are mainly discriminated by high protein content (PC) and low number of yellow berries (YB). On the other hand, CLAUDIO and COLOSSEO are discriminated by
their low PC and high number of YB.
GRAZIA showed high weight per hectoliter (WPH), together with high PC and low Thousand Kernel Weight (TKW). ARCOBALENO and IRIDE were discriminated by high WPH, high number of YB, low PC and low TKW.
Other genotypes were very close to the origin of axes, and thus they were very little discriminated, showing average values for most of the qualitative traits.
Nasty detail about CVA
With this swift example I hope that I have managed to convince my colleague (and you) that, while a PCA biplot is more suited to focus on the differences between subjects, a CVA biplot is more suited
to focus on the differences between groups and, therefore, it is preferable for designed experiments with replicates.
In the next part I would like to give you some ‘nasty’ detail about how the CVA() function works; if you are not interested in such detail, you can safely skip this and I thank you anyway for having
followed me up to this point!
Performing a CVA is a four step process:
1. data standardisation
2. ANOVA/MANOVA
3. eigenvalue decomposition
4. linear transformation
Step 1: standardisation
As we said, standardisation is often made as the preliminary step, by taking the values in each column, subtracting the respective column mean and dividing by the respective column standard
deviation. Although this is the most widespread method, it is also possible to standardise by using the within group standard deviation (Root Mean Squared Error from one-way ANOVA), as done, for
example, in SPSS. In this post we stick to the usual technique, but, please, take this difference in mind if you intend to compare the results obtained with R with those obtained with other
statistical packages.
Step 2: ANOVA/MANOVA
The central point to CVA is to define the discriminating ability of the original variables. In the univariate realm, we use one-way ANOVA to split the total sum of squares into two components, the
between-groups sum of squares (\(SS_b\); roughly speaking, the amount of variability between group means) and the within-groups sum of squares (\(SS_w\); roughly speaking, the amount of variability
within each treatment group). We know that the total sum of squares \(SS_T\) is equal to the sum \(SS_b + SS_w\) and, therefore, we could use the ratio \(SS_b/SS_w\) as a measure of the discriminating ability of each variable.
The multivariate analogue of ANOVA is MANOVA, where we should also consider the relationships (codeviances) between all pairs of variables. In particular, with four variables, we have a \(4 \times 4
\) matrix of total deviances-codeviances (\(T\)), that needs to be split into the sum of two components, i.e. the matrix of between-groups deviances-codeviances (\(B\)) and the matrix of
within-groups deviances-codeviances (\(W\)), so that:
\[ T = B + W \]
These three matrices (\(T\), \(B\) and \(W\)) can be obtained by matrix multiplication, starting from, respectively, (i) the \(Z\) matrix of standardised data, (ii) the \(Z\) matrix where each value
has been replaced by the mean of the corresponding variable and genotype and (iii) the matrix of residuals from the group means. More easily, we can derive these matrices from the output of the CVA()
# Solution with 'CVA()' function in 'aomisc' package
TOT <- cvaobj$TOT
B <- cvaobj$B
W <- cvaobj$W
print(TOT, digits = 4)
## WPH YB TKW PC
## WPH 63.000 -26.212 34.639 -1.293
## YB -26.212 63.000 -3.271 -7.053
## TKW 34.639 -3.271 63.000 30.501
## PC -1.293 -7.053 30.501 63.000
print(B, digits = 4)
## WPH YB TKW PC
## WPH 20.7760 -2.707 7.986 -0.7946
## YB -2.7071 12.009 2.640 -8.5455
## TKW 7.9862 2.640 27.191 11.6353
## PC -0.7946 -8.545 11.635 21.0150
print(W, digits = 4)
## WPH YB TKW PC
## WPH 42.2240 -23.505 26.652 -0.4986
## YB -23.5053 50.991 -5.911 1.4928
## TKW 26.6524 -5.911 35.809 18.8661
## PC -0.4986 1.493 18.866 41.9850
Analogously to one-way ANOVA, we can calculate the ratio \(WB = W^{-1} B\):
WB <- solve(W) %*% B
print(WB, digits = 4)
## WPH YB TKW PC
## WPH 1.2427 -0.4432 -1.0966 -0.5767
## YB 0.4149 0.1283 -0.2258 -0.3764
## TKW -0.8169 0.7038 1.8276 0.5567
## PC 0.3481 -0.5296 -0.5491 0.2569
What do the previous matrices tell us? First of all, there are notable total, between-groups and within-groups codeviances between the four quality traits which suggests that these traits are
correlated and the contributions they give to the discrimination of genotypes are, partly, overlapping and, thus, redundant.
The diagonal elements in \(WB\) can be regarded as measures of the ‘discriminating power’ for each of the four variables: the higher the value the higher the differences between the behaviour of
genotypes across years. The total ‘discriminating power’ of the four variables is, respectively, \(1.243 + 0.128 + 1.828 + 0.257 = 3.456\).
Step 3: eigenvalue decomposition
While total deviances-codeviances are central to PCA, the \(WB\) matrix is central to CVA, because it contains relevant information for group discrimination. Therefore, we submit this matrix to
eigenvalue decomposition and calculate its scaled eigenvectors (see code below), to obtain the so-called canonical coefficients.
# Eigenvalue decomposition
V1 <- eigen(WB)$vectors
# get the centered canonical variates and their RMSEs
VCC <- Z %*% V1
aovList <- apply(VCC, 2, function(col) lm(col ~ groups))
RMSE <- lapply(aovList, function(mod) summary(mod)$sigma)
# Scaling process
scaling <- diag(1/unlist(RMSE))
Vst <- V1 %*% scaling
## [,1] [,2] [,3] [,4]
## [1,] -1.9722706 -0.5941660 0.73150227 0.2436692
## [2,] -0.4953078 -0.7217420 0.06457254 0.8815688
## [3,] 2.3158763 -0.3791882 0.13441587 -0.4566838
## [4,] -0.8919442 0.7719516 0.66575788 0.6881214
Step 4: linear transformation
The canonical coefficients can be used to transform the original variables into a set of new variables, the so-called canonical variates or canonical scores:
CVst <- Z %*% Vst
colnames(CVst) <- c("CV1", "CV2", "CV3", "CV4")
## CV1 CV2 CV3 CV4
## [1,] -1.0731535 -0.4558524 -0.1497271 -0.10648959
## [2,] -0.9289930 -0.6036012 -0.6326765 -1.63319215
## [3,] -1.7853954 -0.4548743 0.6196159 -0.14060305
## [4,] 0.1553135 -1.0929723 -1.1856243 0.81447716
## [5,] 1.3575325 0.7733359 0.6471100 0.09003153
## [6,] -1.6316285 0.7121127 -0.2898050 -0.81303000
Now, we have four new canonical variates in place of the original quality traits. What did we gain? If we calculate the matrices of total, between-groups and within-groups deviances-codeviances for
the CVs, we see that the off-diagonal elements are all 0 which implies that canonical variates are uncorrelated.
# Deviances-codeviances for the canonical variates
# $Total
# CV1 CV2 CV3 CV4
# CV1 1.515993e+02 -5.329071e-15 4.152234e-14 4.884981e-15
# CV2 -5.329071e-15 8.418391e+01 4.019007e-14 -1.287859e-14
# CV3 4.152234e-14 4.019007e-14 7.090742e+01 2.398082e-14
# CV4 4.884981e-15 -1.287859e-14 2.398082e-14 5.117518e+01
# $Between-groups
# CV1 CV2 CV3 CV4
# CV1 1.035993e+02 2.886580e-15 2.797762e-14 1.976197e-14
# CV2 2.886580e-15 3.618391e+01 3.330669e-15 -3.774758e-15
# CV3 2.797762e-14 3.330669e-15 2.290742e+01 8.326673e-15
# CV4 1.976197e-14 -3.774758e-15 8.326673e-15 3.175176e+00
# $Within-groups
# CV1 CV2 CV3 CV4
# CV1 4.800000e+01 -4.329870e-15 6.217249e-15 -1.260103e-14
# CV2 -4.329870e-15 4.800000e+01 3.674838e-14 -1.443290e-14
# CV3 6.217249e-15 3.674838e-14 4.800000e+01 1.776357e-14
# CV4 -1.260103e-14 -1.443290e-14 1.776357e-14 4.800000e+01
# $`B/W`
# CV1 CV2 CV3 CV4
# CV1 2.158318e+00 1.281369e-16 5.210524e-16 4.290734e-16
# CV2 2.548295e-16 7.538314e-01 -2.959802e-16 -5.875061e-17
# CV3 3.033087e-16 -5.077378e-16 4.772379e-01 1.489921e-16
# CV4 9.783126e-16 1.480253e-16 -3.141157e-18 6.614950e-02
Furthermore, the \(BW\) matrix above shows that the ratios of ‘between-groups/within-groups’ deviances are in decreasing order and their sum is equal to the sum of the diagonal elements of the \(BW\)
matrix for the original variables.
In simpler words, the total 'discriminating power' of the CVs is the same as that of the original variables, but the first CV, by itself, has a very high 'discriminating power', equal to about 62.5% of the 'discriminating power' of the original variables (\(2.158/3.456 \cdot 100\)). If we add a second CV, the 'discriminating power' rises to about 84% of that of the original variables. It means
that, if we use two CVs in place of the four original variables, the discrimination of genotypes across years is almost as good as that of the original four variables. Therefore, we can conclude that
the biplot above is relevant.
Please, note that the output of the CVA() function also contains the proportion of total discriminating ability that is contributed by each canonical variate (see box below).
## [1] 0.62459704 0.21815174 0.13810817 0.01914304
As the final remark, the canonical coefficients can be used to calculate the canonical scores for centroids, which we used for the biplot above:
avg <- aggregate(Z, list(groups), mean)
row.names(avg) <- avg[,1]
avg <- as.matrix(avg[,-1])
head(avg %*% Vst)
## [,1] [,2] [,3] [,4]
## ARCOBALENO -0.9080571 -0.6518251 -0.3371030 -0.26645191
## BAIO -0.7823965 0.8928011 0.5747970 0.32086167
## CLAUDIO -0.5496314 -1.3845288 0.3590569 0.16423169
## COLORADO -1.1481765 1.4231696 -0.5535939 -0.13201115
## COLOSSEO 1.0654126 -1.3108147 -0.1201515 -0.05709275
## CRESO 0.3070820 -0.3947049 0.6469757 -0.09565658
In conclusion, canonical variate analysis is the best way to represent the multivariate data in reduced rank space, preserving the discrimination between groups. Therefore, it may be much more
suitable than PCA with designed experiments with replicates.
Thanks for reading!
Prof. Andrea Onofri
Department of Agricultural, Food and Environmental Sciences
University of Perugia (Italy)
Send comments to: andrea.onofri@unipg.it
Further readings
1. Crossa, J., 1990. Advances in Agronomy 44, 55-85.
2. NIST/SEMATECH, 2004. In “e-Handbook of Statistical Methods”. NIST/SEMATECH, http://www.itl.nist.gov/div898/handbook/.
3. Manly F.J., 1986. Multivariate statistical methods: a primer. Chapman & Hall, London, pp. 159.
4. Adugna W. and Labuschagne M. T., 2003. Cluster and canonical variate analyses in multilocation trials of linseed. Journal of Agricultural Science (140), 297-304.
5. Barberi P., Silvestri N. and Bonari E., 1997. Weed communities of winter wheat as influenced by input level and rotation. Weed Research 37, 301-313.
6. Casini P. and Proietti C., 2002. Morphological characterisation and production of Quinoa genotypes (Chenopodium quinoa Willd.) in the Mediterranean environment. Agricoltura Mediterranea 132, 15-26.
7. Onofri A. and Ciriciofolo E., 2004. Characterisation of yield quality in durum wheat by canonical variate analysis. Proceedings VIII ESA Congress "European Agriculture in a global context", Copenhagen, 11-15 July 2004, 541-542.
8. Shresta A., Knezevic S. Z., Roy R. C., Ball-Cohelo B. R. and Swanton C. J., 2002. Effect of tillage, cover crop and crop rotation on the composition of weed flora in a sandy soil. Weed Research 42 (1), 76-87.
9. Streit B., Rieger S. B., Stamp P. and Richner W., 2003. Weed population in winter wheat as affected by crop sequence, intensity of tillage and time of herbicide application in a cool and humid climate. Weed Research 43, 20-32.
Adjacent Angles
Two angles are Adjacent when they have a common side and a common vertex (corner point), and don't overlap.
Angle ABC is adjacent to angle CBD
• they have a common side (line CB)
• they have a common vertex (point B)
What Is and Isn't an Adjacent Angle
Adjacent Angles
they share a vertex and a side
NOT Adjacent Angles
they only share a vertex, not a side
NOT Adjacent Angles
they only share a side,
not a vertex
Don't Overlap!
ALSO the angles must not overlap.
NOT Adjacent Angles
angles a and b overlap
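A small sketch of the two conditions in code (the representation is my own, and it does not check the no-overlap condition, which would need actual coordinates):

```python
def are_adjacent(angle1, angle2):
    """Each angle is (vertex, frozenset of its two side rays).
    Adjacent = same vertex and exactly one shared side."""
    v1, sides1 = angle1
    v2, sides2 = angle2
    return v1 == v2 and len(sides1 & sides2) == 1

# Angle ABC and angle CBD from the figure: vertex B, common side BC
abc = ("B", frozenset({"BA", "BC"}))
cbd = ("B", frozenset({"BC", "BD"}))
xyz = ("X", frozenset({"XY", "XZ"}))

print(are_adjacent(abc, cbd))   # True: common vertex B and common side BC
print(are_adjacent(abc, xyz))   # False: no common vertex
```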
Slope Intercept Form Calculator: Convert Linear Equations Easily
Introduction to Slope Intercept Form Calculator:
The slope intercept form calculator with two points is a tool that helps you find the slope-intercept equation of a line in less than a minute. It evaluates the given problem and converts it into slope-intercept form.
The slope intercept calculator has an advanced algorithm, so it can also take a point that the line passes through as input when finding the slope-intercept equation.
The slope-intercept form is a way of writing the equation of a line as y = mx + b, where y is the dependent variable, x is the independent variable, m is the slope of the line, and b is the y-intercept (the value of y when x = 0).
How to Calculate Slope Intercept Form?
To convert a given equation of a line (or a pair of points) into slope-intercept form, there are some basic rules, used by the slope intercept form calculator, that you should know.
The calculator uses these rules to evaluate the given problem easily, without any difficulty. Let's see the basic principles for solving slope-intercept form problems.
Method 1:
If you have two points that the line passes through, then, as per the gradient intercept calculator:
• Find the slope value with the help of its formula as
$$ m \;=\; \frac{y_2 - y_1}{x_2 - x_1} $$
• Put the coordinates (x1, y1) and (x2, y2) of the two points into it to get the slope m.
• Then put the slope value m into the equation y = mx + b.
• Now, for the value of b, substitute one of the known points into the equation and solve it: b = y1 - m·x1.
• Lastly, put the value of b into the slope-intercept equation, so that you get the solution in the form,
$$ y \;=\; mx + b $$
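The steps of Method 1 can be sketched in code; the two points below are illustrative, and exact fractions are used to avoid rounding:

```python
from fractions import Fraction

def slope_intercept_from_points(x1, y1, x2, y2):
    """Return (m, b) for the line through two points (vertical lines excluded)."""
    m = Fraction(y2 - y1, x2 - x1)     # slope formula m = (y2 - y1)/(x2 - x1)
    b = y1 - m * x1                    # from y1 = m*x1 + b
    return m, b

# Illustrative points (0, 2) and (4, 0)
m, b = slope_intercept_from_points(0, 2, 4, 0)
print(f"y = {m}x + {b}")               # y = -1/2x + 2
```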
Method 2:
If you have the equation of a straight line then as per the slope intercept form solver,
• Determine whether the equation is in the form of a straight-line equation or not. If it is not then rewrite it in the form of y=mx+b.
• Apply the y-intercept method in which, put x=0 to find the value of b.
• After this apply the x-intercept method to find the value of m, put y=0 and b value in the equation of line.
• After calculation, you get the solution of slope intercept form equation y=mx+b.
The slope-intercept formula calculator gives the slope-intercept form of a line when either the slope and y-intercept or two points pass through the line.
Practical Example of Slope Intercept Form
The slope intercept form calculator with two points can be used to solve for the slope-intercept form, but it is important to understand the manual calculation as well. For that, here is an example:
What is x+2y=4 in slope-intercept form?
To express the given equation x+2y=4 in slope-intercept form (y=mx+b), use the y-intercept method,
First, convert the given equation in the form of y=mx+b as
$$ x + 2y \;=\; 4 $$
$$ 2y \;=\; 4 - x $$
$$ y \;=\; 2 - \frac{x}{2} $$
For the value of b, put x=0:
$$ y \;=\; 2 $$
So b=2, and comparing with y=mx+b gives the slope m=-1/2.
Put the values of m and b into the equation to get the solution in slope-intercept form:
$$ y \;=\; -\frac{1}{2}x + 2 $$
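The worked example can be checked numerically: every point on y = -x/2 + 2 should satisfy the original equation x + 2y = 4. This is a quick sanity check, not part of the calculator:

```python
# Verify that y = -x/2 + 2 is the slope-intercept form of x + 2y = 4:
# substituting it back should satisfy the original equation for any x.

def y(x):
    return -0.5 * x + 2  # m = -1/2, b = 2

for x in [-3, 0, 1, 2.5, 10]:
    assert x + 2 * y(x) == 4  # original equation holds at every sample point
print("all points satisfy x + 2y = 4")
```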
How to Use Slope Intercept Form Calculator?
The slope intercept calculator has a simple interface that allows you to solve for the slope-intercept form of an equation instantly. You just need to enter your problem. The following
instructions help you get results in slope-intercept form:
• Enter the given points of both coordinates in the input box.
• Enter the given equation of a straight line in the input fields.
• Review your given input value to get the exact solution of slope intercept form value.
• Click the “Calculate” button for the evaluation of slope intercept form equation problems.
• If you want to check the calculation of our slope-intercept form calculator then you can use the load example for its solution.
• Click the “Recalculate” button of the gradient intercept calculator for the evaluation of more examples of the slope intercept form with the solution.
What Output Does the Slope Intercept Calculator Give?
The slope intercept form calculator with two points provides a solution for your input problem (either an equation or points of a line) when you click on the calculate button. The output may include:
In the Result Box,
Click on the result button to see the solution of your slope-intercept form question.
Steps Box
When you click on the steps option, you get the solution of the slope-intercept question in a step-by-step method.
Advantages of Slope-Intercept Form Calculator:
The slope intercept form solver has several advantages whenever you use it to solve slope-intercept form equation problems. The tool takes only the input value and
gives a solution without any trouble. These advantages are:
• The slope intercept equation calculator is a trustworthy tool as it always provides you with accurate solutions of the slope-intercept form of the equation.
• The slope-intercept formula calculator is a speedy tool that evaluates slope intercept equations from the given point problems with solutions in a few seconds.
• The slope intercept form equation calculator is a learning tool that helps children about the concept of slope intercept form very easily on online platforms at home.
• It is a handy tool that solves slope-intercept equation problems quickly, without requiring any extra effort from you.
• The slope intercept calculator is a free tool that allows you to use it for the calculation of slope intercept form without getting any fee.
• The slope intercept form calculator with two points is an easy-to-use tool; anyone, even a beginner, can easily use it for the solution of slope-intercept equation problems.
Grothendieck universe
The basis of it all
Set theory
• fundamentals of set theory
• presentations of set theory
• structuralism in set theory
Foundational axioms
Removing axioms
Problems of set theory arise from the unjustified recursion of the naive notion of a ‘collection of things.’ If ‘Col’ is one notion of collections (such as ‘Set’ or ‘Class’), then the notion ‘Col of
all Cols’ is in general problematic, as it is subject to the construction of Russell-style paradoxes (although it is not the only source of such paradoxes).
One way out is to consider a hierarchy of notions of collections: Postulate that the collection of all ‘Col’s is not a ‘Col’ itself but instead is another notion of collection, ‘Col+’. We may thus
speak of the ‘Col+ of all Cols.’ Similarly, the collection of all ‘Col+’-type collections may be taken to be ‘Col++,’ and so on.
One formalization of this idea is that of a Grothendieck universe. This is defined to be a set $U$ that behaves like a ‘set+ of all sets’ in that all the standard operations of set theory (union,
power set, etc.) can be performed on its elements.
Although developed for application to category theory, the definition is usually given in a form that only makes sense in a membership-based set theory. On this page, we consider only that version;
for a form that makes sense in structural set theory, please see universe in a topos.
A Grothendieck universe is a pure set $U$ such that:
1. for all $u \in U$ and $t\in u$, we have $t \in U$ (i.e., $U$ is transitive);
2. for all $u \in U$, we have $\mathcal{P}(u) \in U$;
3. $\varnothing \in U$;
4. for all $I \in U$ and functions $u: I \to U$, we have $\displaystyle \bigcup_{i \in I} u_{i} \in U$.
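As an illustration, the four conditions transcribe almost directly into a proof assistant. The following Lean 4 sketch is illustrative only: the names are not Mathlib's, membership and the set-forming operations are taken as parameters, and the definability side conditions one would need over a concrete type of pure sets are elided.

```lean
-- Sketch of conditions (1)–(4) as a structure over a type `V` of
-- ZF-style sets, with `∈` its membership relation; `pow`, `iUnion`,
-- and `empty` stand in for power set, indexed union, and ∅.
structure GrothendieckUniverse (V : Type) [Membership V V]
    (pow : V → V) (iUnion : V → (V → V) → V) (empty : V) where
  U : V
  transitive : ∀ u ∈ U, ∀ t ∈ u, t ∈ U                 -- (1)
  pow_mem    : ∀ u ∈ U, pow u ∈ U                      -- (2)
  empty_mem  : empty ∈ U                               -- (3)
  union_mem  : ∀ I ∈ U, ∀ f : V → V,
                 (∀ i ∈ I, f i ∈ U) → iUnion I f ∈ U   -- (4)
```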
Some authors leave out (3), which allows the empty set $\varnothing$ itself to be a Grothendieck universe. Other authors use the set $\mathbb{N}$ of natural numbers in place of $\varnothing$ in (3), which
prevents the countable set $V_{\omega}$ of hereditarily finite sets from being a Grothendieck universe.
From the definition above, one can prove additional closure properties of a universe $U$, including the usual codings in pure set theory of function sets and cartesian products and disjoint unions of
sets, using the following lemmata:
If $t$ is a subset of $u$ and $u \in U$, then $t \in U$.
By (2), $\mathcal{P}(u) \in U$. Then as $t \in \mathcal{P}(u)$, we have $t \in U$ by (1).
If $u, v \in U$, then $u \cup v \in U$.
As $\varnothing \in U$ by (3), so are $\star \stackrel{\text{df}}{=} \mathcal{P}(\varnothing)$ and $TV \stackrel{\text{df}}{=} \mathcal{P}(\star)$ by (2). Even in constructive mathematics, $2 = \{ \bot,\top \}$ is a subset of $TV$, so $2 \in U$ by Lemma 1. Then $(\bot \mapsto u,\top \mapsto v)$ is a function from $2 \to U$, so the union $u \cup v$ is in $U$ by (4).
Then using their usual encodings in set theory:
• the nullary cartesian product $\star$ is $\mathcal{P}(\varnothing)$ as in the previous proof;
• the binary cartesian product $u \times v$ is a subset of $\mathcal{P}(\mathcal{P}(u \cup v))$;
• the general cartesian product $\displaystyle \prod_{i \in I} u_{i}$ is a subset of $\displaystyle \mathcal{P} \left( I \times \bigcup_{i \in I} u_{i} \right)$;
• the nullary disjoint union is $\varnothing$;
• the binary disjoint union $u \uplus v$ is a subset of $2 \times (u \cup v)$;
• the general disjoint union $\displaystyle \biguplus_{i \in I} u_{i}$ is a subset of $\displaystyle I \times \bigcup_{i \in I} u_{i}$;
• the set of functions $u \to v$ is a subset of $\mathcal{P}(u \times v)$.
Terminology: Small/Large
Given a universe $U$, an element of $U$ is called a $U$-small set, while a subset of $U$ is called $U$-moderate. Every $U$-small set is $U$-moderate by requirement (1) of the definition. If the
universe $U$ is understood, we may simply say small and moderate.
The term $U$-large is ambiguous; it sometimes means ‘not small’ but sometimes means the same as ‘moderate’ (or ‘moderate but not small’). The reason is that language that distinguishes ‘small’ from
‘large’ in terms of sets and proper classes translates fairly directly into terms of $U$-small and $U$-moderate sets. To be precise, if we redefine ‘set’ to mean ‘$U$-small set,’ then every proper
class in this new world of sets will be represented by a $U$-moderate set (a subset of $U$). Those sets that are not even $U$-moderate are ‘too large’ to be translated into the language of proper classes.
(Note, though, that not all $U$-moderate sets represent proper classes in the language of set theory relative to the world of $U$-small sets, only those that are first-order definable from $U$-small
sets. In fact, if $\kappa$ is the cardinality of the universe $U$, then there are only $\kappa$ proper classes relative to $U$, but there are $2^{\kappa}$-many $U$-moderate sets.)
As defined above, these concepts violate the principle of equivalence, as two sets may be isomorphic yet have different properties with respect to $U$. However, a set which is isomorphic to a $U$
-small or $U$-moderate set is called essentially $U$-small or $U$-moderate; these respect the principle of equivalence.
Axiom of Universes
If $U$ is a Grothendieck universe, then it is easy to show that $U$ is itself a model of ZFC (minus the axiom of infinity unless you modify (3) to rule out countable universes). Therefore, one cannot
prove in ZFC the existence of a Grothendieck universe containing $\mathbb{N}$, and so we need extra set-theoretic axioms to ensure that uncountable universes exist. Grothendieck’s original proposal
was to add the following axiom of universes to the usual axioms of set theory:
• For every set $s$, there exists a universe $U$ that contains $s$, i.e., $s \in U$.
In this way, whenever any operation leads one outside of a given Grothendieck universe (see applications below), there is guaranteed to be a bigger Grothendieck universe in which one lands. In other
words, every set is small if your universe is large enough!
Later, Mac Lane pointed out that often, it suffices to assume the existence of one uncountable universe. In particular, any discussion of ‘small’ and ‘large’ that can be stated in terms of sets and
proper classes can also be stated in terms of a single universe $U$ (with ‘large’ meaning ‘$U$-moderate but not $U$-small’).
Large cardinals
If $U$ is a Grothendieck universe, then one can prove in ZFC that it must be of the form $V_{\kappa}$, where $\kappa$ is a (strongly) inaccessible cardinal (Williams). Here, $V_{\kappa}$ is the $\
kappa$-th set in the von Neumann hierarchy of pure sets. Conversely, every such $V_{\kappa}$ is a Grothendieck universe. Thus, the existence of Grothendieck universes is equivalent to the existence
of inaccessible cardinals, and so the axiom of universes is equivalent to the ‘large cardinal axiom’ that ‘there exist arbitrarily large inaccessible cardinals.’
It is worth noting, for those with foundational worries, that the axiom of universes is much, much weaker than many large cardinal axioms which are routinely used, and believed to be consistent, by
modern set theorists. Of course, one cannot prove the consistency of any large cardinal axiom (if it really is consistent) except by invoking a stronger one.
Structural Version
An equivalent concept (at least for the purposes of category theory) can also be defined in structural set theories (like ETCS). Please see universe in a topos.
The set $V_{\omega}$ of hereditarily finite sets (finite sets of finite sets of…) is a Grothendieck universe, unless you phrase axiom (3) in the definition to specifically rule it out. In this way,
the axiom of infinity can be seen as a simple universe axiom (stating that at least one universe exists), and Mac Lane’s axiom that an uncountable universe exists is merely one step further.
If you refrain from using the axiom of universes (except perhaps once, to get $\mathbb{N}$ as above), then the set of all sets (or cardinal numbers) that you can actually construct is a Grothendieck
universe. Of course, you cannot possibly have proved that this universe exists, but the intuition that you ought to be able to form the collection of ‘everything that we’ve used so far’ is the
justification for the axiom of universes.
Similarly, if you use the axiom of universes at most $n$ times, then the set of all sets that you can construct with this restriction is a Grothendieck universe. Thus, we can find a sequence $U_{1} \
in U_{2} \in U_{3} \in \ldots$ of universes. The axiom of replacement then allows us to form the union (a directed colimit) $\bigcup_{n \lt \omega} U_{n}$. This will not be a universe (it violates
(4), by definition), but we can use the axiom of universes again to show that it is in some universe $U_{\omega}$. Proceeding in this way, we can construct a tower of universes indexed by the ordinal numbers.
The set of all sets that can be constructed using the axioms of ZFC together with the axiom of universes is, if it exists, again a universe which contains all the $U_{\alpha}$ constructed above. Of
course, it cannot be shown to exist using only ZFC and the axiom of universes; the axiom of universes is not the final word on large cardinal axioms by any means.
Let $U Set$ be the category of $U$-small sets, a full subcategory of Set. It is common, especially when $U$ is understood, to redefine $Set$ to mean $U Set$; here we keep the distinction for clarity.
However, when $Set$ means $U Set$, sometimes $SET$ is used to mean the category of all sets.
A category whose set of morphisms is (essentially) $U$-small may be called a $U$-small category; it can also be thought of as an internal category in $U Set$. A category whose hom-sets are all
(essentially) $U$-small may be called locally $U$-small; it can also be thought of as an enriched category over $U Set$. Every $U$-small category is locally $U$-small.
A category whose set of morphisms is $U$-moderate may be called a $U$-moderate category; again ‘$U$-large’ may mean ‘not $U$-small,’ ‘$U$-moderate,’ or both. In practice, most $U$-moderate categories
are locally $U$-small and vice versa, but there is no theorem that this must be true. Note that $U Set$ itself is $U$-moderate and locally $U$-small but not $U$-small.
All notions of category theory that reference size, such as completeness and local presentability, must then be relativized to $U$. In order to move from a category defined in one universe to
another, we need a procedure of universe enlargement.
Presheaf Categories
Let $C$ be a $U$-small category. Then the category of $U$-presheaves on $C$ (the functor category $[C^{op},U Set]$) is also $U$-moderate and locally $U$-small but not $U$-small unless $C$ is empty. (
$U Set$ itself is the special case of this where $C$ is the point.) These arguments go as follows:
• $U PSh(C)$ is $U$-moderate: An upper bound for the size of $[C^{op},U Set]$, hence of the set $Obj([C^{op},U Set])$ is the size of $\{ F: Obj(C) \times Mor(C) \to U \}$, where both $Obj(C)$ and
$Mor(C)$ are in $U Set$. Hence, we are looking at the cardinal number $|U|^{|u| \times |v|}$, where $u = Obj(C)$ and $v = Mor(C)$. Use the fact that any Grothendieck universe must be infinite
(since it has $\varnothing$, $\mathcal{P}(\varnothing)$, etc.), and the result follows from cardinal arithmetic that $\kappa^{\lambda} = \kappa$ if $\lambda \lt \kappa$ and $\kappa$ is infinite.
• $U PSh(C)$ is locally $U$-small: An upper bound for the size of the set of morphisms between two functors $F,G: C^{op} \to U Set$ is the disjoint union indexed by the objects $c$ of $C$ over the
$U$-sets $G(c)^{F(c)}$. Now $G(c)^{F(c)} \in U$ as it is a function set, and $\displaystyle \bigcup_{c \in Obj(C)} G(c)^{F(c)} \in U$ by the assumption that unions stay in $U$.
Now let $C$ be a $U$-moderate category (and not small). Then the category of $U$-presheaves on $C$ is not even locally $U$-small, nor is it even $U$-moderate (it is ‘too large’). However, it is
locally $U$-moderate. Also, it is quite possible, if $C$ is a $U$-moderate site, that the category of $U$-sheaves on $C$ is $U$-moderate and locally $U$-small.
Note: Here we are considering presheaves on $C$ with values in $U$-small sets. In many cases, a more appropriate notion of ‘$U$-small presheaf’ is that discussed at small presheaf, namely a presheaf
that is a $U$-small colimit of representables.
Alternative Approaches
• A different, potentially much more elegant and natural proposal for solving the problem to be solved by Grothendieck universes is that described at category of all sets. Don’t get your hopes up
too high, though; even if it works, it isn’t quite the category theory you’re used to.
As a solution to foundational problems of category theory the concept was apparently brought up by Grothendieck (cf. below, p.149) in the context of 1958 Bourbaki discussions. It makes a sketchy
appearance in SGA1 in exposé VI on fibered categories.
Fuller early accounts are
The “official” account is then:
Further early discussion:
• Saunders MacLane, One universe as a foundation for category theory, In: Reports of the Midwest Category Seminar III, Lecture Notes in Mathematics 106, Springer (1969) 192-200
Comprehensive historical review with further references:
• Ralf Krömer, §6.4.4 in: Tool and object: A history and philosophy of category theory, Science Networks. Historical Studies 32, Springer (2007) [doi:10.1007/978-3-7643-7524-9]
• Ralf Krömer, La « machine de Grothendieck », se fonde-t-elle seulement sur des vocables métamathématiques? Bourbaki et les catégories au cours des années cinquante, Revue d’Histoire des
Mathématiques 12 111–154 (Numdam)
For more information on the origin of the terminology consult:
Most texts on category theory and related topics mention the topic of Grothendieck universes without providing details. Exceptions are:
Discussions spelling out more details:
The proof that a Grothendieck universe is equivalently a set of $\kappa$-small sets for $\kappa$ an inaccessible cardinal is in
• N. H. Williams, On Grothendieck universes, Compositio Mathematica, tome 21 no 1 (1969) (numdam)
SGA uses universes, and many modern results in algebraic geometry use general results from SGA, including Wiles’ proof of Fermat’s theorem. Colin McLarty discusses how to remove the need for
universes in Wiles’ proof in
• Colin McLarty, What does it take to prove Fermat’s last theorem? – Grothendieck and the logic of number theory, pdf
The Discriminatory Power of Diagnostic Information from Discrete Medical Tests
A useful diagnostic test should provide information that helps to discriminate between competing hypotheses. But any practical diagnostic will be imperfect: both false positive and false negative indications are to be expected. So just how useful is a diagnostic test when it is, necessarily, imperfect? In [1], p. 44 shows a static, graphical example of how Bayes's theorem may be used to understand the factors determining the discriminatory power of diagnostic tests. This Demonstration is a dynamic version of that argument.
Consider a binary state variable (e.g., a disease or health risk is present or absent) and the binary outcome of an indicative but imperfect diagnostic test (e.g., an X-ray or blood test measurement is either definitely positive or negative for this disease), each encoded as a logical truth value (1 or 0). From a statistical perspective there are three precise numerical inputs that feed into a coherent posterior inference about the binary state after having observed the result of the binary diagnostic signal: a sensitivity number, a specificity number, and a base rate number. The first two characterize uncertainty about the outcome of the diagnostic as a conditional probability under two different information conditions about the state. The sensitivity number expresses uncertainty about whether the diagnostic test will be positive, assuming that the condition is present. The specificity number expresses uncertainty about whether the diagnostic test will be negative, assuming that the condition is absent. The third number, the base rate, is a marginal or unconditional probability characterizing uncertainty about the binary state variable in the absence of, or prior to knowing, any diagnostic information.
The discriminatory power of diagnostic information can be measured by the levels of, and the difference between, two inverse conditional probability assessments, one for each possible diagnostic test result: the probability that the condition is present given a positive test, and the probability that it is present given a negative test. This interactive Demonstration creates a graphical depiction of these inverse probabilities as functions of the underlying sensitivity, specificity, and base rate inputs. A natural frequency representation of the full joint probability distribution over the state and test variables is provided in a truth table format above the graph, where the column entries are frequency counts or "cases" in a hypothetical population of a fixed size.
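The posterior calculation described above can be sketched in a few lines of Python. This is a minimal sketch of Bayes's theorem for a binary state and binary test, not the Mathematica code behind the Demonstration; variable names are illustrative:

```python
# Posterior probability of the condition given each test result, from
# sensitivity, specificity, and base rate, via Bayes's theorem.

def posteriors(sensitivity: float, specificity: float, base_rate: float):
    """Return (P(condition | positive test), P(condition | negative test))."""
    # Total probability of a positive test:
    # P(T=1) = P(T=1|D=1)P(D=1) + P(T=1|D=0)P(D=0)
    p_pos = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    p_neg = 1 - p_pos
    p_d_given_pos = sensitivity * base_rate / p_pos
    p_d_given_neg = (1 - sensitivity) * base_rate / p_neg
    return p_d_given_pos, p_d_given_neg

# Example: a fairly accurate test for a rare condition still yields a
# modest positive predictive value, because of the low base rate.
ppv, fomr = posteriors(sensitivity=0.9, specificity=0.9, base_rate=0.01)
print(round(ppv, 4), round(fomr, 4))  # 0.0833 0.0011
```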
What Is the Average SAT Score?
It’s currently possible to earn a combined SAT score anywhere between 400 and 1600 in ten-point increments. We all know it’s highly unusual to score only a 400 or to top out at the elusive, perfect
But what’s an average SAT score?
Ever since SAT scores were recentered in the 1990s, the test has been engineered so that the median SAT score is 1000, the sum of two scores of 500 on the Evidence-Based Reading and Writing (ERW)
section and Math section. The mean average scores on the SAT change, however–even annually.
In fact, that’s why the SAT was recentered in the first place: at that time the average SAT score was consistently well below the intended median. You can imagine the collective outrage from one
group of students and elation from another after recentering: one year everyone’s average SAT score suddenly jumped.
As of 2018, official SAT data showed that the average SAT score was slightly above 1000.
2018 Average SAT Scores
MEAN TOTAL SCORE ERW MATH # TEST TAKERS
w/o essay 1068 536 531 2,136,539
with essay 1096 549 546 1,449,142
The College Board releases an annual report full of SAT data, so it’s relatively easy to view SAT scores through the several lenses we’ll use here. (You can read the full reports here.)
Naturally, you’ll want to know more than what the average SAT score is, and a snapshot of the percentiles for SAT scores can be useful for understanding how your scores stack up against those of
other students. After all, most students who are applying to extremely competitive colleges and universities will be competing with students nationwide for admission.
You’ll see here that the SAT releases percentile scores that compare your score with those of all the students in the United States and just the students who took the SAT, the “SAT Users.”
Total Score Percentile Ranking for 2018
Score Nationally Representative Sample SAT User
1600 99+ 99+
Find out more about what a good SAT score is here.
What Is the Average SAT Score by State?
You can begin to better understand why it’s tricky to pinpoint the meaning of any given average SAT score when you consider SAT scores by state. Here we see that the average SAT score varies–as does
the number of students who take the test in any given region.
This matters as colleges try to admit students from a wide variety of states; in other words, if you live in California or Texas, you’re competing with more students from your area and will also be
compared to them. Your SAT score may be relatively strong or weak compared to the smaller population you’re competing with in your state.
This is also a great lesson on why the basics of statistics are included on the SAT test itself: you’ll see that the average SAT score in Alabama exceeds the average SAT score in California by 90
points. Only 2,878 students took the SAT in Alabama last year, though, so they probably have a different (and more consistent) profile than those students in California.
You’d also need to take averages in states like Maine and Florida with a grain of salt; most students in those states take the SAT because of graduation assessment requirements.
Average SAT Scores by State
STATE AVERAGE SAT SCORE ERW MATH # TEST TAKERS # HS GRADUATES % OF STUDENTS WHO TOOK THE SAT
Alabama 1166 595 571 2,878 49,844 6%
California 1076 540 536 262,228 435,365 60%
Florida 1014 522 493 176,746 181,306 97%
Maine 1013 512 501 14,310 14,428 99%
Ohio 1099 552 547 22,992 124,473 18%
Oregon 1117 564 553 17,476 36,734 48%
South Carolina 1070 547 523 25,390 46,536 55%
Texas 1032 520 512 226,374 341,613 66%
Is your state not listed in my samples? You can find every state’s individual report here.
What Is the Average SAT score at Ivy League Schools?
Here’s a sampling of the average SAT Scores at some Ivy League Schools.
• Harvard: Average Total SAT Score 1515 (2016)
• Princeton: Average Total SAT Score 1495 (2016)
• Yale: middle 50%
• SAT-Evidence-Based Reading and Writing: 720-770
• SAT-Math: 740-790
• Brown: Average SAT Score 1470 (2016)
• Stanford: middle 50%
• SAT Math Section: 720-800
• SAT Evidence-Based Reading and Writing: 700-770
Remember that the average score at an Ivy League school is about as valuable to you as the published score range: there are always allowances for certain subsets of populations (major donors,
students with significant legacy, recruited athletes, etc.), and while it’s not always the case, sometimes those students don’t score as high on the SAT as the average SAT score of students who do
not enjoy those benefits while applying.
In other words, if you’re a regular student applying to an Ivy League school without “pull,” shoot for a score higher than the average SAT at that school.
What Is the Average SAT Score for the Duke TIP Program?
Duke University has a famous Talent Identification Program that uses 7th grade SAT test scores to invite gifted young people to engage in a variety of academic enrichment programs. Duke seeks out
students who test in the top five percent of their grade, and one of the ways they do that is through an early SAT test.
Duke TIP Average SAT Scores
ERW Math Total
Average SAT Score 500 480 980
Top SAT Score 780 800 1570
You’d need a special registration to take the SAT this young; find out more about the Duke TIP Program here.
What Is the Average SAT Score for Johns Hopkins SET Program?
The Johns Hopkins Center for Talented Youth offers a program called the Study for Exceptional talent. This free program enrolls students who score at least a 700 on either the SAT Math or the SAT
Evidence-Based Reading and Writing by their 13th birthday or 700 plus increments of ten for every month beyond their 13th birthday.
Hopkins doesn’t release the average SAT score for students in the SET program, but you can be sure it’s at least 700.
You can find out more about Hopkins SET eligibility here.
What Is the Average SAT Score at Community College?
Most community colleges don’t require SAT scores to enroll, so there isn’t information generally available about their average SAT scores. That being said, many community colleges use SAT scores in
lieu of placement tests, which can help you get out of taking introductory classes, which saves you time and money.
Now that you know the average SAT score, you can set some goals and start prepping. Find out when you should take the SAT here.
Convert Joule to Gigajoule
Please provide values below to convert joule [J] to gigajoule [GJ], or vice versa.
Joule to Gigajoule Conversion Table
Joule [J] Gigajoule [GJ]
0.01 J 1.0E-11 GJ
0.1 J 1.0E-10 GJ
1 J 1.0E-9 GJ
2 J 2.0E-9 GJ
3 J 3.0E-9 GJ
5 J 5.0E-9 GJ
10 J 1.0E-8 GJ
20 J 2.0E-8 GJ
50 J 5.0E-8 GJ
100 J 1.0E-7 GJ
1000 J 1.0E-6 GJ
How to Convert Joule to Gigajoule
1 J = 1.0E-9 GJ
1 GJ = 1000000000 J
Example: convert 15 J to GJ:
15 J = 15 × 1.0E-9 GJ = 1.5E-8 GJ
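The conversion rule above is a one-liner in code. A minimal Python sketch (the function names are my own):

```python
# Joule <-> gigajoule conversion: 1 GJ = 1e9 J, so divide by 1e9 to go
# from J to GJ and multiply by 1e9 to go back.

def joules_to_gigajoules(j: float) -> float:
    return j / 1e9

def gigajoules_to_joules(gj: float) -> float:
    return gj * 1e9

print(joules_to_gigajoules(15))  # 1.5e-08, matching the example above
```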
Popular Energy Unit Conversions
Convert Joule to Other Energy Units
Window Aggregate Functions - Ocient Documentation
Window Aggregate Functions
Window functions operate on a set of rows and return a single value for each row from the underlying query. Like an aggregate function, a window function operates on a set of rows, but it does not
reduce the number of rows returned by the query. Unlike regular aggregate functions, use of a window function does not cause rows to become grouped into a single output row – the rows retain their
separate identities and the window function is able to access more than just the current row in the query result. The term window describes the set of rows on which the window function operates. A
window function returns a value from the rows in a window.
The database uses the OVER() and PARTITION BY clauses to define the scope of the window.
<PARTITION BY value_expression> Defines a window or user-specified set of rows within a result set. It does not reduce the number of rows in the result set like GROUP BY can. The value_expression
specifies the column by which the rows are partitioned.
<ORDER BY order_by_expression [ASC | DESC] [NULLS FIRST | NULLS LAST], …> Defines the logical order of the rows within each partition of the result set. The order_by_expression specifies a column or
expression on which to sort.
ASC | DESC Specifies that the values in the specified column should be sorted in ascending or descending order. ASC is the default sort order.
NULLS FIRST | NULLS LAST Specifies if NULLs should be sorted first or last. The default is last.
<ROWS or RANGE clause> Further limits the rows within the partition by specifying start and end points within the partition. This is done by specifying a range of rows with respect to the current row
either by logical association or physical association. Physical association is achieved by using the ROWS clause.
The ROWS clause limits the rows within a partition by specifying a fixed number of rows preceding or following the current row. Alternatively, the RANGE clause logically limits the rows within a
partition by specifying a range of values with respect to the value in the current row. Preceding and following rows are defined based on the ordering in the ORDER BY clause. The window frame “RANGE
… CURRENT ROW …” includes all rows that have the same values in the ORDER BY expression as the current row. For example, ROWS BETWEEN 2 PRECEDING AND CURRENT ROW means that the window of rows that
the function operates on is three rows in size, starting with 2 rows preceding until and including the current row.
UNBOUNDED PRECEDING Specifies that the window starts at the first row of the partition. UNBOUNDED PRECEDING can only be specified as window starting point.
row_integer PRECEDING Specified with <unsigned integer literal> to indicate the number of rows or values to precede the current row. This specification is not allowed for RANGE.
CURRENT ROW Specifies that the window starts or ends at the current row when used with ROWS or the current value when used with RANGE. CURRENT ROW can be specified as both a starting and ending point.
BETWEEN <window_frame_bound> AND <window_frame_bound> Used with either ROWS or RANGE to specify the lower (starting) and upper (ending) boundary points of the window. The first <window_frame_bound>
defines the boundary starting point and the subsequent <window_frame_bound> defines the boundary end point. The upper bound cannot be smaller than the lower bound.
UNBOUNDED FOLLOWING Specifies that the window ends at the last row of the partition. UNBOUNDED FOLLOWING can only be specified as a window end point. For example RANGE BETWEEN CURRENT ROW AND
UNBOUNDED FOLLOWING defines a window that starts with the current row and ends with the last row of the partition.
row_integer FOLLOWING Specified with <unsigned value specification> to indicate the number of rows or values to follow the current row. When <unsigned value specification> FOLLOWING is specified as
the window starting point, the ending point must be <unsigned value specification> FOLLOWING. For example, ROWS BETWEEN 2 FOLLOWING AND 10 FOLLOWING defines a window that starts with the second row
that follows the current row and ends with the tenth row that follows the current row. This specification is not allowed for RANGE.
ROWS vs RANGE The difference is that RANGE always includes all rows with equal values on the ORDER BY keys even if they are outside the range, whereas ROWS does not.
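As a sketch of the frame syntax above, the following runs against SQLite, which supports the same ROWS/RANGE frame grammar as of version 3.25 (shipped with modern Python). It shows a three-row moving window, and then the ROWS vs RANGE distinction on tied ORDER BY keys. Table names and data here are illustrative, not from any real schema.

```python
import sqlite3

# Moving three-row window: ROWS BETWEEN 2 PRECEDING AND CURRENT ROW.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (t INTEGER, val INTEGER)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(1, 10), (2, 20), (3, 30), (4, 40)])

moving = conn.execute("""
    SELECT t, SUM(val) OVER (ORDER BY t
                             ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
    FROM readings ORDER BY t
""").fetchall()
print(moving)  # [(1, 10), (2, 30), (3, 60), (4, 90)]

# ROWS vs RANGE on tied ORDER BY keys: RANGE includes all peer rows of the
# current row in the frame, while ROWS stops at the physical current row.
conn.execute("CREATE TABLE scores (k INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?)", [(1,), (2,), (2,), (3,)])

tied = conn.execute("""
    SELECT k,
           SUM(k) OVER (ORDER BY k ROWS  BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
           SUM(k) OVER (ORDER BY k RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
    FROM scores ORDER BY k
""").fetchall()
print(tied)  # RANGE column reads 1, 5, 5, 8 -- both tied rows see the same frame
```

Note that for the two rows tied on k = 2, the RANGE running sum is identical (5 for both), while the ROWS running sum differs (3 and 5), because RANGE always includes equal-valued peers.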
These are the supported window functions in addition to those listed under V4 Aggregate Functions which can also be used as window functions. The DISTINCT keyword cannot be used with window
aggregation functions.
Returns a number 0 < n ≤ 1 and can be used to calculate the percentage of values less than or equal to the current value in the group
Computes the finite difference between successive values of expression under the specified ordering. This is a backwards difference, which means that, at degree one, the value for a given row is the
difference between the value of expression for that row and the previous row.
The expression values have the following requirements:
• Expressions must be numeric types.
• The optional degree argument can be any floating point number with a default value of one. If you set the degree argument to a positive integer that is greater than one, the window function
calculates the higher order finite difference without using nested DELTA calls. If the degree argument is a negative integer, the calculation is an anti-difference (i.e. discrete integration).
You can specify fractional values, either positive or negative, and these values mean fractional differences or sums.
The function requires two rows of input data to calculate a first degree difference, three rows of input data to calculate a second degree difference, and so on. However, rather than
returning NULL for the first few rows, the function returns the correct values such that using DELTA of degree n followed by DELTA of degree -n cancel each other out to return the initial result
(other than floating point error accumulation). In general, when you use this window function over larger result sets with larger degrees (the absolute value of the degree matters), the floating
point error accumulation worsens. Integer degree values in the range [-3, 3] are generally stable up to approximately 5 million rows. Fractional degree values in the range [-3, 3] are generally
stable up to approximately 1 million rows. The stable degree range is wider over smaller result sets and narrower over larger result sets. For example, fractional and integer degrees are stable over
the range [-4, 4] up to approximately 80,000 rows.
The function treats any NULL values in the input data as zero, which might cause unexpected results. Filter out NULL values before you use the window function.
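Since DELTA is a vendor-specific window function, a pure-Python sketch of the first-degree behavior described above may help. The helper names here are illustrative, not part of any database API; the first-row convention mirrors the cancellation property described above (keeping the initial value so that a degree -1 pass undoes the difference).

```python
# First-degree backwards difference: each output is the current value
# minus the previous value. The first element is kept as-is so that the
# anti-difference (running sum) exactly inverts it.

def delta(values):
    """Backwards difference; keeps values[0] so integration inverts it."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_inverse(diffs):
    """Degree -1: running sum (discrete anti-difference)."""
    out, total = [], 0
    for d in diffs:
        total += d
        out.append(total)
    return out

series = [3, 7, 12, 12, 20]
d = delta(series)          # [3, 4, 5, 0, 8]
print(d)
print(delta_inverse(d))    # recovers [3, 7, 12, 12, 20]
```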
Assigns a number to each row in the result set with equal values having the same number. There will be no gaps between ranks.
Computes the difference quotient between successive values of expression with respect to expression2. If expression is a sampled time-series variable, then this calculation is equivalent to the
derivative of expression with respect to expression2 (i.e. the DELTA of expression divided by the DELTA of expression2). This is a backwards difference, which means that, at degree one, the
calculation uses the value for the current and previous row. Both expression arguments must be numeric types.
For higher order calls, the sample points for expression must be equally spaced or the results are incorrect. In other words, the DELTA calls between consecutive expression2 values must always be the
same. If this is not true, use some form of interpolation to create an evenly spaced series of samples before you use the window function.
The optional degree argument can be any floating point number with a default value of one. If you set the degree argument to a positive integer that is greater than one, the window function
calculates the higher order difference quotients (discrete derivatives) without using nested DERIVATIVE calls. If the degree argument is a negative integer, the calculation is a discrete
anti-differentiation or integration. You can specify fractional values, either positive or negative, and these values mean fractional differentiation or integration.
The function requires two rows of input data to calculate a first degree derivative, three rows of input data to calculate a second degree derivative, and so on. However, rather than
returning NULL for the first few rows, the function returns the correct values such that using DERIVATIVE of degree n followed by DERIVATIVE of degree -n cancel each other out to return the initial
result (other than floating point error accumulation). In general, when you use this window function over larger result sets with larger degrees (the absolute value of the degree matters), the
floating point error accumulation worsens. Integer degree values in the range [-3, 3] are generally stable up to approximately 5 million rows. Fractional degree values in the range [-3, 3] are
generally stable up to approximately 1 million rows. The stable degree range is wider over smaller result sets and narrower over larger result sets. For example, fractional and integer degrees are
stable over the range [-4, 4] up to approximately 80,000 rows.
The function treats any NULL values in the input data for either expression or expression2 arguments as zero, which might cause unexpected results. Filter out NULL values before you use the window function.
Returns the first value in the ordered result set.
Returns the row which is the specified number backward from the current row. Default is 1 if offset is omitted.
Returns the last value in the ordered result set.
Returns the row which is the specified number forward from the current row. Default is 1 if offset is omitted.
Returns the nth value in the ordered result set.
Returns the value that corresponds to the specified percentile (0 ≤ n ≤ 1) within the group. The set of values is treated as a continuous distribution, so the computed value may not appear in the result set.
The value returned is 0 < n ≤ 1 and can be used to calculate the percentage of values less than the current group, excluding the highest value. The highest value in a group will always be 1.
Assigns a number to each row in the result set with equal values having the same number. There can be gaps between ranks. Similar to the ranking used in sporting events.
Computes the ratio of a value to the sum of the set of values. For example, RATIO_TO_REPORT can calculate the ratio of an employee’s salary relative to the total salary in a department.
Assigns a unique number to each row in the result set.
Zscore of the sample based on the stddev() function
Zscore of the sample based on the stddevp() function
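Several of the behaviors described in the list above can be sketched in SQLite (3.25+), which implements the standard RANK, DENSE_RANK, and ROW_NUMBER semantics. RATIO_TO_REPORT is not available there, so its calculation is written out directly as value / SUM(value) OVER (...). The table and data are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (player TEXT, score INTEGER)")
conn.executemany("INSERT INTO results VALUES (?, ?)",
                 [("a", 100), ("b", 90), ("c", 90), ("d", 80)])

ranks = conn.execute("""
    SELECT score,
           RANK()       OVER (ORDER BY score DESC),  -- gaps after ties: 1,2,2,4
           DENSE_RANK() OVER (ORDER BY score DESC),  -- no gaps:         1,2,2,3
           ROW_NUMBER() OVER (ORDER BY score DESC)   -- always unique
    FROM results ORDER BY score DESC
""").fetchall()
print(ranks)

# RATIO_TO_REPORT equivalent: each score as a fraction of the total.
ratios = conn.execute("""
    SELECT score, score * 1.0 / SUM(score) OVER () FROM results ORDER BY score DESC
""").fetchall()
print(ratios)
```

The two tied scores of 90 get the same RANK and DENSE_RANK, but distinct ROW_NUMBER values; which tied row gets which ROW_NUMBER is not deterministic.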
Memoization and Dynamic Programming
Explore how these techniques can be used to optimize depth-first search algorithms.
Our topological sort algorithm is arguably the model for a wide class of dynamic programming algorithms. Recall that the dependency graph of a recurrence has a vertex for every recursive subproblem
and an edge from one subproblem to another when evaluating the first subproblem requires a recursive evaluation of the second. The dependency graph must be acyclic, or the naïve recursive
algorithm would never halt.
Evaluating any recurrence with memoization is exactly the same as performing a depth-first search of the dependency graph. In particular, a vertex of the dependency graph is “marked” if the value of
the corresponding subproblem has already been computed. The black-box subroutines PreVisit and PostVisit are proxies for the actual value computation.
Dynamic programming and reverse topological ordering
Carrying this analogy further, evaluating a recurrence using dynamic programming is the same as evaluating all subproblems in the dependency graph of the recurrence in reverse topological order—every
subproblem is considered after the subproblems it depends on. Thus, every dynamic programming algorithm is equivalent to a postorder traversal of the dependency graph of its underlying recurrence.
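The correspondence can be sketched with the simplest possible recurrence; the choice of Fibonacci here is purely illustrative:

```python
import functools

@functools.lru_cache(maxsize=None)
def fib_memo(n):
    # Memoization = DFS of the dependency graph: "marking" a vertex is
    # caching its value, and the recursive calls are the DFS edges into
    # the subproblems this one depends on.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_dp(n):
    # Tabulation: subproblems 0..n are a reverse topological order of the
    # dependency DAG, so each is computed after its dependencies.
    table = [0, 1] + [0] * max(0, n - 1)
    for k in range(2, n + 1):
        table[k] = table[k - 1] + table[k - 2]
    return table[n]

print(fib_memo(20), fib_dp(20))  # both 6765
```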
Optimal Sketching for Kronecker Product Regression and Low Rank Approximation
We study the Kronecker product regression problem, in which the design matrix is a Kronecker product of two or more matrices. Given A_i ∈ R^(n_i × d_i) for i = 1, 2, ..., q, where n_i ≫ d_i for each i, and b ∈ R^(n_1 n_2 ... n_q), let A = A_1 ⊗ A_2 ⊗ ... ⊗ A_q. Then for p ∈ [1, 2], the goal is to find x ∈ R^(d_1 ... d_q) that approximately minimizes ‖Ax - b‖_p. Recently, Diao, Song, Sun, and Woodruff (AISTATS, 2018) gave an algorithm which is faster than forming the Kronecker product A. Specifically, for p = 2 their running time is O(∑_i=1^q nnz(A_i) + nnz(b)), where nnz(A_i) is the number of non-zero entries in A_i. Note that nnz(b) can be as large as n_1 ... n_q. For p = 1, q = 2 and n_1 = n_2, they achieve a worse bound of O(n_1^(3/2) poly(d_1 d_2) + nnz(b)). In this work, we provide significantly faster algorithms. For p = 2, our running time is O(∑_i=1^q nnz(A_i)), which has no dependence on nnz(b). For p < 2, our running time is O(∑_i=1^q nnz(A_i) + nnz(b)), which matches the prior best running time for p = 2. We also consider the related all-pairs regression problem, where given A ∈ R^(n × d) and b ∈ R^n, we want to solve min_x ‖A̅x - b̅‖_p, where A̅ ∈ R^(n^2 × d) and b̅ ∈ R^(n^2) consist of all pairwise differences of the rows of A and b. We give an O(nnz(A)) time algorithm for p ∈ [1, 2], improving on the Ω(n^2) time needed to form A̅. Finally, we initiate the study of Kronecker product low rank and low t-rank approximation. For input A as above, we give O(∑_i=1^q nnz(A_i)) time algorithms, which is much faster than computing A.
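As a small illustration (this is not the paper's sketching algorithm), the following pure-Python check shows why a matrix-vector product with A never needs the explicit Kronecker matrix: for row-major vectorization, (A_1 ⊗ A_2) vec(X) = vec(A_1 X A_2^T). All shapes and entries below are made up for the demo.

```python
# Check (A1 kron A2) @ vec(X) == flatten(A1 @ X @ A2^T), row-major.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    # Row index i*rows(B)+j, column index k*cols(B)+l.
    return [[A[i][k] * B[j][l] for k in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for j in range(len(B))]

A1 = [[1, 2], [3, 4], [5, 6]]          # 3 x 2
A2 = [[1, 0, 2], [0, 1, 1]]            # 2 x 3
X  = [[1, 2, 3], [4, 5, 6]]            # 2 x 3, so vec(X) has d1*d2 = 6 entries

# Direct: form the 6 x 6 Kronecker product and multiply by vec(X).
vecX = [v for row in X for v in row]
direct = [sum(row[c] * vecX[c] for c in range(6)) for row in kron(A1, A2)]

# Fast: A1 @ X @ A2^T, then flatten row-major -- no Kronecker matrix formed.
A2T = [[A2[j][i] for j in range(len(A2))] for i in range(len(A2[0]))]
fast = [v for row in matmul(matmul(A1, X), A2T) for v in row]

print(direct == fast)  # True
```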
i : the Imaginary Number
After the amazing response to my post about zero, I thought I'd do one about something that's fascinated me for a long time: the number *i*, the square root of -1. Where'd this strange thing come
from? Is it real (not in the sense of real numbers, but in the sense of representing something *real* and meaningful)?
The number *i* has its earliest roots in some of the work of early Arabic mathematicians; the same people who really first understood the number 0. But they weren't quite as good with *i* as they
were with 0: they didn't really get it. They had some concept of roots of a cubic equation, where sometimes the tricks for finding the roots of the equation *just didn't work*. They knew there was
something going on, some way that the equation needed to have roots, but just what that really mean, they didn't get.
Things stayed that way for quite a while. Various others, like the Greeks, encountered them in various ways when things didn't work, but no one *really* grasped the idea that algebra required numbers
that were more than just points on a one-dimensional number-line.
The next step was in Italy, over 1000 years later. During the 16th century, people were searching for solutions to the cubic equations - the same thing that the Arabic scholars were looking at. But
getting some of the solutions - even solutions to equations with real roots - required playing with the square root of -1 along the way. It was first really described by Rafael Bombelli in the
context of the solutions to the cubic; but Bombelli didn't really think that they were *real*, *meaningful* numbers: it was viewed as a useful artifact of the process of solving the equations, but it
wasn't accepted.
It got its name as the *imaginary number* as a result of a diatribe by Rene Descartes, who believed it was a phony artifact of sloppy algebra. He did not accept that it had any meaning at all: thus
it was an "imaginary" number.
They finally came into wide acceptance as a result of the work of Euler in the 18th century. Euler was probably the first to really, fully comprehend the complex number system created by the
existence of *i*. And working with that, he discovered one of the most fascinating and bizarre mathematical discoveries ever, known as *Euler's equation*. I have no idea how many years it's been
since I was first exposed to this, and I *still* have a hard time wrapping my head around *why* it's true.
e^iθ = cos θ + i sin θ
And what *that* really means is:
e^iπ = -1
That's just astonishing. The fact that there is *such* a close relationship between i, π, and e is just shocking to me.
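The identity is easy to check numerically with Python's complex-math library:

```python
import cmath

# Numerical check of Euler's equation: e^(i*theta) = cos(theta) + i*sin(theta),
# and the special case e^(i*pi) = -1 (up to floating-point rounding).
theta = 0.7
lhs = cmath.exp(1j * theta)
rhs = cmath.cos(theta) + 1j * cmath.sin(theta)
print(abs(lhs - rhs) < 1e-12)            # True

euler = cmath.exp(1j * cmath.pi)
print(euler)                             # approximately (-1+0j)
print(abs(euler - (-1)) < 1e-12)         # True
```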
What *i* does
Once the reality of *i* as a number was accepted, mathematics was changed irrevocably. Instead of the numbers described by algebraic equations being points on a line, suddenly they become points *on
a plane*. Numbers are really *two dimensional*; and just like the integer "1" is the unit distance on the axis of the "real" numbers, "i" is the unit distance on the axis of the "imaginary" numbers.
As a result numbers *in general* become what we call *complex*: they have two components, defining their position relative to those two axes. We generally write them as "a + bi" where "a" is the real
component, and "b" is the imaginary component.
The addition of *i* and the resulting addition of complex numbers is a wonderful thing mathematically. It means that *every* polynomial equation has roots; in particular, a polynomial equation in "x"
with maximum exponent "n" will always have exactly "n" complex roots.
But that's just an effect of what's really going on. The real numbers are *not* closed algebraically under multiplication and addition. With the addition of *i*, multiplicative algebra becomes
closed: every operation, every expression in algebra becomes meaningful: nothing escapes the system of the complex numbers.
Of course, it's not all wonderful joy and happiness once we go from real to complex. Complex numbers aren't ordered. There is no < comparison for complex numbers. The ability to do meaningful
inequalities evaporates when complex numbers enter the system in a real way.
What *i* means
But what do complex numbers *mean* in the real world? Do they really represent real phenomena? Or are they just a mathematical abstraction?
They're very real. There's one standard example that everyone uses: and the reason that we all use it is because it's such a perfect example. Take the electrical outlet that's powering your computer.
It's providing alternating current. What does that mean?
Well, the *voltage* - which (to oversimplify) can be viewed as the amount of force pushing the current - is complex. In fact, if you've got a voltage of 110 volts AC at 60 hz (the standard in the
US), what that means is that the voltage is a number of magnitude "110". If you were to plot the "real" voltage on a graph with time on the X axis and voltage of the Y, you'd see a sine wave:
But that's not really accurate. If you grabbed the wire when the voltage is supposedly zero on that graph, *you'd still get a shock*! Take the moment marked "t1" on the graph above. The voltage at
time t1 on the complex plane is a point at "110" on the real axis. At time t2, the voltage on the "real" axis is zero - but on the imagine axis it's 110. In fact, the *magnitude* of the voltage is
*constant*: it's always 110 volts. But the vector representing that voltage *is rotating* through the complex plane.
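The rotating-vector picture can be sketched in a few lines of Python. The 110 V / 60 Hz figures follow the text above (treating 110 as the phasor magnitude, as the post does): the real part traces the sine wave from the graph, while the magnitude stays constant.

```python
import cmath, math

# Model the voltage as a complex phasor V(t) = V0 * e^(i*2*pi*f*t).
# Its real part oscillates; its magnitude never changes.
V0, f = 110.0, 60.0

for t in (0.0, 1 / 240, 1 / 120):      # start, quarter cycle, half cycle
    v = V0 * cmath.exp(2j * math.pi * f * t)
    print(f"t = {t:.6f} s   Re(v) = {v.real:8.2f}   |v| = {abs(v):.2f}")
```

At the quarter-cycle point the real part passes through zero, but the magnitude is still 110: that is the "t2" moment on the graph above.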
You also see it in the Fourier transform: when we analyze sound using a computer, one of the tricks we use is decomposing a complex waveform (like a human voice speaking) into a collection of basic
sine waves, where the sine waves added up equal the wave at a given point in time. The process by which we
do that decomposition is intimately tied with complex numbers: the Fourier transform, and all of the analyses and transformations built on it are dependent on the reality of complex numbers (and in
particular on the magnificent Euler's equation up above).
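A toy version of that decomposition, using a naive discrete Fourier transform rather than any library FFT; the two component frequencies here are made up for the demo. The complex-valued spectrum is the point: its magnitude gives each component's strength, and its phase carries the time-shift information.

```python
import cmath, math

# Build a waveform from two sine components, then recover their
# frequencies from the peaks of the DFT magnitude spectrum.
N = 64
signal = [math.sin(2 * math.pi * 5 * n / N) + 0.5 * math.sin(2 * math.pi * 12 * n / N)
          for n in range(N)]

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

spectrum = dft(signal)
mags = [abs(c) for c in spectrum[: N // 2]]          # bins 0 .. Nyquist-1
peaks = sorted(range(len(mags)), key=mags.__getitem__)[-2:]
print(sorted(peaks))  # [5, 12] -- the two component frequencies (in bins)
```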
I love i. Question! Having only a bare minimum of physical science knowledge (hate physics, love chemistry), is that graph you have of the real voltage what you would get off of a voltmeter? That is,
does the voltmeter only measure the real voltage?
If so, that is fascinating. I know that properly-built circuits can do quite interesting things to electricity, but I didn't realize that it could actually cut out the imaginary part of the voltage.
Also, I had heard that with a properly set up circuit, you can get a resistor with imaginary resistance. Is this true? If so, what does this *do* when it interacts with current with real voltage? Or
does the circuit setup only allow imaginary voltage differences in that area? I clearly have no idea what I'm talking about, this is why I ask!
My favorite teacher in high school was my math teacher. He would go on a rant about 'i' and the term imaginary number. There is No Such Thing as a Real number he would say. Go out in the back yard
and dig up a three for me. Go to the store and buy a bag of three's. You can't do it. 'i' is as REAL as the number 3 and don't let anyone tell you otherwise.
If I haven't made my point, he 'hated' the terms REAL and IMAGINARY. For me, I just think of 'i' as being a mechanism for thinking two dimensionally rather than one.
If you get an AC voltmeter, it will read constant for AC current - no variation in the force behind the current. If you're using a DC voltmeter, you'll see it swing back and forth, because it's only
measuring the component of the force in one dimension, when it's actually a two-dimensional property.
That's the trick to it: i isn't imaginary. Complex numbers are as "real" as what we call real numbers. Complex numbers are two dimensional.
You aren't "cutting out" the imaginary part of voltage; you're using an instrument which measures the *projection* of a two-dimensional value onto one-dimension.
Great post - imaginary numbers have always fascinated me. I think it was one of my math teachers who said, when asked about whether imaginary numbers were real things, "they're as real as any other
One little quibble, though - the voltage at the wall socket in North America is about 110V rms, so the peak voltage (assuming a perfect sinusoid) is sqrt(2)*110, or about 156V. Also, when you say you
would still get a shock if you grabbed the wires when the voltage is zero, I disagree - assuming you mean you could grab and let go before the (real) voltage increased to a shocking level. The real
voltage is what drive real electric current through a resistive circuit (the human body can be considered purely resistive at 60Hz - the capacitive and inductive reactances will be negligible at such
a low frequency). And it's the real current which causes the shock (muscles contracting involuntarily, pain, and even burns at high enough current levels).
Yeah, that's it exactly. Complex numbers are numbers in two dimensions; algebra (and many physical phenomena) require two-dimensional numbers in order to really make sense.
I rather like the expression as:
(e to the power of (i * pi) ) + 1 = 0
as that way you can get e, i, pi, zero and 1 into a single equation. Neat
It's been years since I took serious math so forgive me if I'm rusty, but how can i show up in cubic equations of "real" numbers. Doesn't "cubic" imply a x^3 somewhere? And if so, since the exponent
is odd, doesn't i have to show up bare somewhere in the original equation? I always thought i was a consequence of some quadratic equations, that is, equations with 2 as the highest exponent.
Most of the early efforts to solve cubics were based on doing transformations that allowed you to find a transformation that gave you a quadratic equation which you could solve, and then pass that
through the reverse transformation. My understanding is that the inner quadratic used in the solution process can have imaginary roots.
I take issue with your electrical and acoustic example. In those examples the imaginary number is used purely as a mathematical abstraction. Yes a very powerful and convenient abstraction, but in
classical E+M, there is no such thing as imaginary voltage or current(at least not as a physical quantity).
However, a far better example of physical complex numbers would be probability amplitudes from Quantum Mechanics. Of course, you can not actually measure 'probability amplitudes', but still it
requires complex numbers to be represented.
That being said I too love 'i'.
"If you grabbed the wire when the voltage is supposedly zero on that graph, you'd still get a shock!"
Good math, bad physics. The real power follows the voltage amplitude, so for an ideal resistance we have P = UrmsIrms = Irms^2*R. During 0 passage you aren't shocked, if you are earthed. That is why
it is much easier to let go of an AC than a DC wire at the same voltage magnitude.
"is that graph you have of the real voltage what you would get off of a voltmeter?"
A modern digital oscilloscope meter lets you track the waveform. Earlier you switched between a DC setting to see voltage magnitude |U| and an AC setting to see RMS voltage Urms. (RMS = Root Mean Square; Urms = sqrt(Ur^2 + Ui^2) where Ur = real(U) and Ui = imag(U), that is, the instantaneous real and imaginary components of the voltage U.)
"I had heard that with a properly set up circuit, you can get a resistor with imaginary resistance."
No, but you can get reactances, that is components with imaginary loads (impedances) Z. Then you need to follow through Mark's analysis on complex currents, ie currents with magnitude and phase (time
An ideal resistance Zr = U/I, an ideal capacitor Zc = 1/jwC (j = imaginary unit, w = frequency), an ideal inductance Zi = jwL. So if you had ideal components, a capacitor or an inductor gives you a
pure imaginary impedance. (With a capacitor, the current is leading the voltage with pi/2 radians, with an inductor coil it's the reverse.)
Xanthir: As for imaginary resistance, look up "impedance". The imaginary component of impedance is actually the capacitance/inductance we observe. But all of this only because relevant for the case
of varying signals. In a DC world, capacitance and inductance don't affect your circuit. If the imaginary value of your voltage is zero, the imaginary value of your impedance is ignored.
As a BSEE and a guy who digs signals processing, i (or rather j, mwahaha) is definitely my favourite number.
Mark, I'm afraid that you're mistaken when you say "If you grabbed the wire when the voltage is supposedly zero on that graph, you'd still get a shock!". That's true in the sense that it's impossible
to grab the wire for a period of time with duration = 0, but there will be a zero-crossing in the current. Exactly when that zero-crossing will take place depends on the complex impedance Z of the
If the load is purely resistive (real), i.e., Z = R + j0, (in electrical engineering i is called j because the letter "i" already has a meaning: instantaneous current), the current will pass through zero at the same time the voltage does. If the load is purely reactive (imaginary), i.e., Z = 0 ± jX, the current will lag or lead the voltage by π/2 radians, depending on the sign of the imaginary
component, so the current zero will take place 1/4 cycle ahead or behind the voltage zero. If the load is complex, i.e., Z = R ± jX. the current will lead or lag the voltage by an angle equal to
arctan (X/R).
In any case, if the circuit is composed entirely of linear, bidirectional components and the voltage source is a single-phase sine, there will be two current zero-crossings per cycle just as there
are two voltage zero-crossings per cycle.
BTW, that's why the DC current rating is often less than the AC current rating for switches. It's easier to break an AC arc than a DC arc because in an arc conducting AC, the current goes through
zero twice per cycle, which means that the arc is extinguished twice per cycle, so you just need to move the contacts apart far enough that it can't re-ignite on the next half-cycle (the "firing"
voltage for an arc gap is generally higher than the voltage drop across the arc once it's established), while breaking a DC arc requires the contacts to separate enough that the arc's conduction
voltage drop is greater than the voltage available from the source.
I taught math (including 7th - 9th grades) for 15 years. I found it helpful when teaching about fractions to introduce the idea of ordered pairs - that 1/3 is just an invented symbol for (1,3), and
that fractions were invented so as to have a way to represent the solution of certain equations. That makes it much easier to introduce complex numbers also as an ordered pair with an invented symbol
of "i" to distinguish it from the "/" used in fractions. That way it can be presented as just as "real" as fractions are, it is just something that was invented in order to be able to solve certain
equations. And in both cases there are "real world" interpretations of them.
What's the error in the following:
e^iπ = -1
(e^iπ)^2 = (-1)^2
e^2iπ = 1
ln(e^2iπ) = ln(1)
2iπ = 0
i = 0
The really interesting thing, I think, is that the complex plane gives analytic functions (differentiable at every point) a lot of nice properties, such as analytic derivatives of all orders. And it lets one play around with residues, cuts and conformal mappings when doing integrals. What is complex with real numbers becomes easy with complex numbers.
The easiest way to see that imaginary numbers turn up for higher order equations must be to solve x^n = 1. You get complex roots symmetrically set on the unit circle.
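That observation is easy to verify numerically; here n = 6 is an arbitrary choice:

```python
import cmath

# The n solutions of x^n = 1 are e^(2*pi*i*k/n) for k = 0..n-1,
# spaced evenly around the unit circle.
n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for r in roots:
    # Each root has magnitude 1 and satisfies r^n = 1, up to rounding.
    print(f"{r.real:+.3f}{r.imag:+.3f}i   |r| = {abs(r):.3f}")
```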
Nice post.
I bet you could make differential geometry look simple...
"there will be a zero-crossing in the current"
Whether this is interesting depends on whether it is the voltage or the current that contracts muscles and makes wires hard to release, and whether it is current or voltage that causes the shocking feeling. I believe it is the voltage that messes with muscles, which are made of cells closely related to nerve cells, not the current, and that it is the muscle spasm that gives the shock feeling. OTOH, it is the
current that gives tissue damage AFAIK.
qetzal - Maybe someone else has a better-formed answer for you, but just recall that the angle 2π is equivalent to the angle zero. e^x is not a one-to-one function; it's periodic. I think you're therefore probably taking an illegal inverse when you use the ln function. It's just like saying sin^-1(sin(0)) = sin^-1(sin(2π)), therefore, 2π = 0.
Someone help if you think I'm floundering. Thanks.
ThePolynomial: Actually, e^x is one-to-one. However, e^ix is periodic rather than one-to-one, so your argument is still correct.
The logarithmic function on the complex numbers is multi-valued, i.e. one argument gives you infinitely many values. So log(re^(ix)) for real r, x gives the values ln(r) + i(x + 2nπ), where n takes the value of all integers.
The correct argument is :
e^2iπ = 1
log(e^2iπ) = log(1)
ln(1) + i(2nπ + 2n'π) = ln(1) + i(2n''π)
which is fine for n+n'=n''.
ThePolynomial's explanation is right: ln is not the right inverse; you need the multivalued log function.
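A numerical companion to this: Python's cmath.log returns only the principal branch, so the log of e^(2iπ) comes back as 0 rather than 2iπ, which is exactly the step where the "proof" above breaks down.

```python
import cmath

# The principal-branch log discards the 2*n*pi*i ambiguity.
z = cmath.exp(2j * cmath.pi)
print(z)                    # numerically (1+0j)
print(cmath.log(z))         # ~0, not 2*pi*i
print(cmath.log(-1))        # principal value: pi*i
```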
Thank you, all. I had gone through Physics2 last year, but I've already forgotten it all. >_< I remember impedance dealing with imaginary numbers, though.
Essentially, impedance is the 'resistance' to the magnetic part of the EM wave, right? Resistance (proper) is the resistance to the electrical part. That's why a current with purely imaginary Z
doesn't hurt you - all the energy is magnetic at that point.
If I'm still wrong about the details at this point, don't worry - as I said, I am *not* a physics guy (though I do love reading about the bleeding edge of physics).
Ktesibios et al:
Ok, so I blew it a tad with the electricity example. I should stick to the math, and stay away from the physics :-). The electrical is still a very cool example because the power transmission is
still constant, but it's changing form through the cycle.
No, I couldn't make differential geometry look easy, which is why I'll never write about it here.
But there's actually an interesting autobiographical story about me and differential equations. Back when I started college, I was an EE major. Unfortunately, I was a *very* bad engineer; the way
that an engineer solves problems just *does not work* for my warped brain. So I flunked out of the engineering program. During my last semester in EE, I took diffeqs. At the time, I was pretty much
convinced that I was a moron, that I couldn't do anything right, etc. (Knowing that you're probably going to get kicked out of school at the end of the year will do that to you.) So I nearly flunked
diffeqs; I ended up with a D, and frankly, that was a gift from the prof.
I took a year off college, and then came back and started doing CS, which I was very good at. After a year, one of my best friends was signing up to take diffeqs, and I said "hey, I'll sign up for it
with you, just to prove to myself that I can do better than a D".
For the first half of the semester, I did even worse than the first time I took it. Failed the first exam *badly*, got atrocious scores on homeworks, couldn't understand a damned thing. (It didn't
help that the teacher sucked, but that wasn't my problem.)
Anyway, about halfway through the semester, my friend and I had appropriated a classroom to work on homework for diffeqs, and for the discrete math class we were also taking at the same time. It
happened that the homework for the discrete class was solving recurrence relations. So we're scribbling away on the chalkboard trying to do the diffeqs, and I got really frustrated, and said "let's
switch to the discrete". So we start working on the discrete - and I'm just tearing through these recurrence relations. And after I explain to my friend how I did one, she says "Hey, you know what,
that's *exactly* what you needed to do in that diffeq that you couldn't solve".
I stared at her. And then went back and looked at the diffeqs.
It was the same thing. The recurrence relation was a discrete version of the same form of equation as the diffeq. I could do it for the recurrence, but not the diffeq. Until that moment.
From that moment on, I suddenly could do diffeqs. It had been completely psychological: I had convinced myself that I couldn't do it, and so I couldn't. The moment my friend pointed out that the
method of solving the diffeqs and the recurrence relations were basically the same thing, and I realized I *was* doing it, the whole problem just disappeared, and I wound up getting As on the second
midterm and final, and a B for the class. (Which would have been an A if it weren't for the failed first exam.)
(I also heard that four years later, the dreadful prof that taught that class was fired for sexual harassment. Which he deserved. He'd ask you questions, and if you couldn't answer them, he'd scream
and shout and call you names - *unless* you were a pretty girl, in which case he'd stop yelling at you if you would give him a "pretty smile".)
I think i is my favorite number. It still confuses the heck out of me, though.
Moving away from all this "ivory tower" math and towards the silly concerns of us geeks who stick primarily with "real" numbers... ;)
On the whole electricity thing: I once got the idea of a D&D spellcaster who primarily uses lightning spells and has a penchant for math. In the unlikely event that he bumps into someone who doubts
imaginary numbers, he would ask, "Did you know that you can kill a man with an imaginary number?" or something to that effect. The "Geometer" prestige class would probably be a good career move for a
character concept like that.
Completely silly question: In D&D, you've got positive energy (which heals living creatures and harms undead) and negative energy (the reverse, of course)... So, what would imaginary energy do?
That's what you use to power Plane Shift.
"After the amazing response to my post about zero, I thought I'd do one about something that's fascinated me for a long time: the number i, the square root of -1."
I was only recently made aware of the fact that there is no such thing as the square root of minus one. :)
Great post. I never made the connection that numbers are two-dimensional. I also appreciated the physics commentary, since my AC electricity skills are woefully unpracticed. I have a pedagogical
question. The imaginary and complex numbers are normally taught in high school Algebra II and precalculus courses, but I never get the feeling that the students ever have a clue what they are good
for. Solving polynomials is one example, but a high school math student thinks solving polynomials is drudge work. Are there other "real world" applications of complex numbers, other than in AC
electricity and quantum mechanics?
Complex numbers show up all over the place once you start looking. The two-dimensional number plane ends up having a lot of uses. Fourier transforms (which show up in almost everything involving
generating or listening to sound using a computer) are based on complex numbers.
Dynamical systems are almost always described using complex numbers.
Fluid dynamics computations often involve integrations of complex equations.
As someone pointed out in another comment in this thread, imaginary numbers get used frequently in quantum mechanics.
Many fractals - and thus the applications of fractal math - are based on complex numbers.
Some kinds of tomographic imaging systems use complex numbers.
And that's just a small sample.
One of the most beautiful things I ever saw was a diagram demonstrating Euler's formula in a textbook on complex analysis from a geometric perspective. It showed the partial sums of the Taylor expansion of e^(iθ) spiraling into the limit. It really made clear why the terms can be broken up into the cosine series for the real part and the sine series for the imaginary part.
That same textbook also pointed out and demonstrated that the theory of convergence of Taylor expansions is most sensibly formulated in the complex plane, even if you only want to evaluate the Taylor
expansion using real arguments.
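That spiral is easy to reproduce numerically. The sketch below (plain Python, no plotting; `exp_partial_sums` is just an illustrative helper name) computes the partial sums of the Taylor series for e^(iθ) and checks that they converge to cos(θ) + i·sin(θ):

```python
import cmath
import math

def exp_partial_sums(theta, n_terms):
    """Partial sums of the Taylor series 1 + i*theta + (i*theta)^2/2! + ..."""
    sums = []
    total = 0j
    term = 1 + 0j
    for k in range(n_terms):
        total += term
        sums.append(total)
        term *= 1j * theta / (k + 1)  # next term of the series
    return sums

theta = math.pi / 3
sums = exp_partial_sums(theta, 20)
target = cmath.exp(1j * theta)

# The partial sums spiral in toward e^(i*theta) on the unit circle;
# the real parts form the cosine series, the imaginary parts the sine series.
print(abs(sums[-1] - target))
```

Plotting the successive values of `sums` in the plane reproduces the spiral the textbook diagram showed.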
Doesn't "cubic" imply an x^3 somewhere? And if so, since the exponent is odd, doesn't i have to show up bare somewhere in the original equation?
Chris, it's actually pretty easy to force complex numbers to appear when solving a cubic. For example, the equation x^3 - 1 = 0 has three roots, two of which are complex (the third of which is x = 1).
However, the complex roots of a cubic polynomial always result from a quadratic factor of the polynomial.
In the case of the polynomial I gave, what's going on is that x^3 - 1 = (x - 1)(x^2 + x + 1), and the second factor has complex roots.
Just keep in mind this is only true for cubics, though -- it's a result of the facts that any cubic polynomial has at least one real root, and complex roots come in pairs. (So your options are 3 real
roots, or 1 real root and a quadratic factor with complex roots.)
A friend showed me a fun alternative way to construct the complex numbers as a subset of the collection of 2 x 2 matrices. The complex number z=a+bi is represented as a matrix
[ a b ]
[-b a ],
which I'll call Z. A little bit of matrix algebra shows that the matrix Z^2 represents the complex number z^2. Moreover, complex conjugation is just the operation of swapping the off-diagonal elements.
And i is just the matrix
[ 0 1 ]
[-1 0 ].
The utility of this approach may be limited, but it provides a good way of demonstrating to laypeople that complex numbers really aren't so weird.
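The matrix construction above is straightforward to check with NumPy (`to_matrix` here is an illustrative helper, not a library function):

```python
import numpy as np

def to_matrix(z):
    """Represent the complex number a+bi as the real 2x2 matrix [[a, b], [-b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, b], [-b, a]])

# The matrix for i squares to minus the identity, i.e. it represents -1:
i_mat = to_matrix(1j)
print(i_mat @ i_mat)  # [[-1, 0], [0, -1]]

# Matrix multiplication matches complex multiplication:
z, w = 3 + 4j, 1 - 2j
print(to_matrix(z) @ to_matrix(w))
print(to_matrix(z * w))  # same matrix

# Conjugation is swapping the off-diagonal entries (here, the transpose):
print(to_matrix(z).T)
print(to_matrix(z.conjugate()))  # same matrix
```

So the complex numbers sit inside the 2x2 real matrices as a subalgebra, with all the arithmetic inherited from matrix arithmetic.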
Total layperson here but... I still don't get it at all. Why do you need "i" to plot something in two dimensions? x and y aren't good enough? What does "i" mean? Why do all equations have to have solutions: why can't they just be denied (i.e. x^2 != -1) if they don't make sense?
>>I think it was one of my math teachers who said, when asked about whether imaginary numbers were real things, "they're as real as any other number!"
Funny, I once heard Peter Sarnak say that p-adic numbers are as real as any other number.
Actually, I seem to recall that he may have said that the real numbers are no less unreal than the p-adics.
Numbers. . . they all look alike anyway. . .
x and y aren't good enough because they aren't *a number*. (x,y) is a point on a graph; (x + yi) can be drawn as a point on a graph, *but it's a number*. The reason that that's important isn't because it means you can plot it on a graph; ordered pairs (x,y) are fine for plotting things on a graph. But *numbers* that are intrinsically two-dimensional have very different properties from ordered pairs - and those properties are important and meaningful.
Just to give a trivial example: if you've got two points, how do you multiply them? There are a few possible different answers. One is to treat them as vectors - and then the result is *outside* the plane where the points lie. Another is to treat them as a polynomial: (a,b)*(c,d) can be treated as (ax + by)(cx + dy) = acx^2 + (bc+ad)xy + bdy^2. But again - that's now something different - instead of being a point in a plane, it's a messy quadratic expression. But the third is to treat it as a complex: (a + bi)(c + di) starts out the same as the polynomial multiplication, giving you ac(1^2) + (bc+ad)(1)(i) + bd(i^2). But that simplifies out to (ac - bd) + (bc+ad)i - that is, another complex number. It's two dimensional, but it's *closed* under algebra - no algebraic operation will give you a result that is not a complex number.
And finally, for "why do all equations have to have solutions?" - they don't have to: the fact of the matter is that there are some equations that *don't* have solutions.
But with things like the basic polynomials, *they do have solutions*: some equations have meaningful solutions - but those solutions aren't real numbers. You can say that there's no solution to x^2 + 1 = 0 if you want; but you're just *defining* that it has no solution. The fact will always remain that there *is* a solution to it, a solution that works, and that's useful for describing real things.
There's a rather simple metaphor that a professor of mine once used. Some people don't like cats as pets. Suppose that one of them, in an attempt to get rid of cats, changes the definition of the
word "pet" so that cats can no longer be called pets. Now nobody has a cat as a pet. But the cats are still there. Redefining the word did nothing but change the definition - it didn't change the
fact. Simple polynomials *do* always have solutions; you can redefine solution so that it means "real number solution" to exclude the complex ones; but they're still there.
Your question has been bothering me too. What is the difference between a two-dimensional vector and an imaginary number? What defines an application in which we should use vectors versus one in which imaginary numbers are better?
I mean, we could use vectors to represent alternating voltage in electrical circuits but we don't, we chose to use imaginary numbers. I know it's gotta have something to do with Euler's equation, as it wouldn't apply to vectors. I wonder how much harder it would be to do circuits with only vectors. Which operations would be harder to do?
The difference between the two-dimensional vector and the complex numbers is in the definition of what they mean - see my answer to plunge wrt multiplication. There are two ways to multiply vectors -
dot product, and cross product. Dot product gives you a scalar, not a vector. Cross product gives you a vector - but it's a vector *outside* the plane where the vectors belong.
Complex numbers have two dimensions, and they have well-defined addition and multiplication operators that *are closed* over the complex numbers, and that produce exactly the same results as addition
and multiplication for real numbers on complex with a 0i component.
Multiplying two vectors and getting a vector outside the plane is just very different from multiplying two complex numbers and always getting a complex.
My favorite intuitive description of complex numbers is the one Feynman uses in QED to avoid having to assume the reader understands complex numbers or scare off those who don't. He defines a "times" operation on arrows in which you multiply their lengths to get the new length and add their angles to get the new angle. Of course, if you're limited to 0 and 180 degree angles, you just get the real numbers. Otherwise, an arrow at any other angle has a non-zero imaginary component.
What I like about this explanation is that it is a concrete and visual explanation of a very "real" operation easily understood in terms of geometry. You can also see immediately how the real numbers
are not closed and that you necessarily get something off the real number line when trying to find a value whose square is an arrow with a 180 degree angle (negative real) but just sort of
conveniently get another real when taking the square root of a nonnegative real.
I wish the word "imaginary" could be banned from math pedagogy, as it suggests to students that there is something shady about complex numbers. As the "arrow arithmetic" model shows, complex numbers
represent planar rotation and scaling, of which reals are simply a special case of pure scaling, as the roots of unity are a special case of pure rotation.
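Feynman's "arrow arithmetic" - multiply the lengths, add the angles - is exactly what complex multiplication does, which a few lines with Python's cmath can confirm:

```python
import cmath

# Build two "arrows" from a length and an angle:
z = cmath.rect(2.0, cmath.pi / 6)  # length 2, angle 30 degrees
w = cmath.rect(3.0, cmath.pi / 3)  # length 3, angle 60 degrees
p = z * w

print(abs(p))          # 6.0: the lengths multiplied
print(cmath.phase(p))  # pi/2: the angles added (30 + 60 = 90 degrees)

# A square root of -1 is an arrow of length 1 at 90 degrees:
# two quarter-turns make a half-turn, i.e. multiplication by -1.
quarter_turn = cmath.rect(1.0, cmath.pi / 2)
print(quarter_turn ** 2)  # ~ -1
```

Reals are the special case of pure scaling (angle 0 or 180 degrees), just as the comment above says.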
I also wish that C-like languages had had the foresight to make complex an intrinsic type. Too many otherwise sensible people have been led astray into using Fortran for numerical analysis. I hope
that this situation will soon be remedied.
Question: How did we figure out which one is i and which one is -i? It seems like it would be really easy to get them backwards.
(I'm not the first person to ask this question...I think I got it from Hofstadter but I couldn't swear to it.)
The only difference between the complex numbers and any other two dimensional real vector space is that we have a rule for multiplying two complex numbers. In general, a vector space with a
"multiplication" of any sort is called an "algebra"; the complex numbers are a very special algebra for several reasons, as mentioned above.* You would use vectors instead of complex numbers if, for
whatever reason, the multiplication rule for complex numbers wasn't appropriate for the situation. (Kind of like you wouldn't call something a vector space if you weren't going to use scalar
multiplication at all.)
* My favorite reason: it's possible to divide one complex number by another. That sounds pretty innocuous until you find out that there are essentially only four finite-dimensional algebras where
that's possible!
I'm glad you brought up Feynman's "arrows". Now that I've given it a little thought, I find that a number like 3 + 4i makes more sense --- is more "real" --- to me than a supposedly simple quantity
like five hundred thousand. I can get what 3 + 4i means in terms of shifts and turns, but I couldn't tell you how big a pile of half a million jelly beans will look.
Nobody "figured out" which way to put the minus signs. It makes a weird kind of sense: if the only property you have which defines "i" is that the square of it equals -1, then there will have to be
two quantities which fit the bill, one of them equal to minus the other. If I work all my problems in terms of what I call i, and you come along and say, "I want to use -i: my imaginary unit is minus
your imaginary unit!" then there isn't a way to judge who chose more wisely.
Feynman has a wonderful chapter in his Lectures on Physics entitled "Algebra". It builds up to Euler's Theorem --- the e^(iθ) relation MarkCC gave above --- in a truly beautiful way. I also recommend
The Book of Numbers by John Conway and Richard Guy, which gets into the complex kind in chapter 8. For online reading, check out "Trigonometry and Complex Numbers" by J. Baez. "Curious Quaternions"
is also nice.
Cat lover, good answer!
To give a concrete example, vectors (but not complex numbers) are used in mechanics to represent velocities, because addition and scalar multiplication (multipying a vector by a real number) make
sense in this interpretation but multiplying two velocities together would not make sense, so we don't need that additional structure.
Whereas. . . my mind kind of fritzes trying to come up with a physics example going the other way. . . all I can come up with is the use of a complex number to represent a rotation (by its argument =
angle) and a dilation (= stretching or shrinking, by its length); there the product of two complex numbers give the net effect of one such rotation-dilation followed by another. So you would need the
additional structure in this case.
Sorry that one's so geometrical but un-physical: it's too darn hot here for straightforward thought.
I put the equation e^(iπ) + 1 = 0 in needlepoint on plastic canvas when I was in my teens, I liked it so much....
I really enjoyed this post! :)
The imaginary numbers also have a geometric interpretation. In geometric algebra (also known as Clifford algebra), there are actually multiple distinct entities which have the property that their square is equal to -1. There is insufficient space here to adequately describe where this comes from; however, there is an excellent article by researchers at Cambridge which does an excellent job of introducing the algebra and illustrating the geometric interpretation of the imaginary numbers. The article is titled Imaginary Numbers are not Real and can be found at the following URL.
If you find this article interesting and would like to learn more, many more technical papers can be found at the cambridge site:
and David Hestenes' Geometric Calculus R&D site:
For those who are interested, I'd say that the first chapter of Tristan Needham's "Visual Complex Analysis" is absolutely the best, most inspired, most intuitive inspiration of the complex numbers
that I've seen.
I don't really like complex numbers mainly because it is often very hard to visualize them when they are used to represent a space. For instance I find the idea of complex manifolds to be very
unintuitive. Same with complex projective geometry.
However I am very impressed with the way in which complex numbers simplify analysis. I don't understand the deep reason for this and I have never really seen a good explanation as to why complex analysis helps in understanding real analysis. Examples of this have already been given above: convergence of real power series is explained by singularities/branch points in the complex plane (e.g. the Taylor series of the real function 1/(1+x^2) converges only for -1 < x < 1, because of the poles at ±i).
I just finished being introduced to complex numbers and my text-book completely skipped on real world examples. I've now printed out your article to stick it in my notes, thank you.
"Essentially, impedance is the 'resistance' to the magnetic part of the EM wave, right? Resistance (proper) is the resistance to the electrical part. That's why a current with purely imaginary Z
doesn't hurt you - all the energy is magnetic at that point."
No, impedance is a load to both parts. Imaginary Z is both capacitive and inductive.
If you use the maxwellian formulation of EM, you will see that it is more complex than a separate electric and magnetic field.
An inductive load consists of a magnetic field around a coil created by changing current through it. A capacitive load consists of the load for a displacement current between capacitor plates caused
by changing potential across.
But the displacement current is connected to the magnetic field since it is needed to close the equations for cases involving nonstationary currents. And in the symmetrical formulation for radiation
fields the E and M field will be seen as married together nicely into the EM fields. Magnetism is our one daily seen relativistic effect (from moving charges) at low speeds. We are just not used to seeing it as such.
"Did you know that you can kill a man with an imaginary number?"
I can't imagine that. But you can try trap him with complex numbers, kill him with a pole, and leave no residue to trace. ;-)
"what would imaginary energy do?"
If we are looking at alternating currents, the imaginary power is the reactive part of the complex power.
"we could use vectors to represent alternating voltage in electrical circuits but we don't"
No, but vectors do come in when we stop looking at this simple model and start looking at EM fields instead. The electric field is a scalar field (a value in every point) but the magnetic field is a vector field (a direction and magnitude in every point). And then we can use stuff like Poynting vectors that show how EM power flows in the circuit you mentioned.
The perhaps most fundamental property of complex numbers, outside the closure of addition and multiplication already mentioned, is analyticity: a function on complex numbers is often analytic, which means that derivatives of all orders exist, which in turn are also analytic functions. This makes most everything easier to do in complex analysis than in real analysis, especially integration.
"The electric field is a scalar field (a value in every point) but the magnetic field is a vector field (a direction and magnitude in every point)."
Duh! I was thinking about the radiative EM formulation where one has two such fields. In the maxwellian formulation both fields are vectors, of course!
Mark - re: your answer to plunge's post -
You appear to denigrate the idea that complex numbers could be seen as ordered pairs, but I'm sure I've seen at least one explanation of complex numbers that treat them as exactly that - that is,
that treat the set of complex numbers as the set of ordered pairs of real numbers, augmented with particular rules for addition and multiplication; so that the complex number a + bi is the ordered
pair of real numbers (a,b).
The "addition" is defined componentwise, so that (a,b)+(c,d)=(a+c,b+d), and the "multiplication" is defined so that (a,b)*(c,d)=(ac-bd,ad+bc).
After all, the set of complex numbers is closely related to the real plane, as you obviously appreciate.
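The ordered-pair construction sketched above can be implemented directly and checked against Python's built-in complex type (the class name is just for illustration):

```python
class OrderedPairComplex:
    """Complex numbers as ordered pairs of reals, with componentwise
    addition and the special multiplication rule."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, other):
        # (a, b) + (c, d) = (a + c, b + d)
        return OrderedPairComplex(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a, b) * (c, d) = (ac - bd, ad + bc)
        return OrderedPairComplex(self.a * other.a - self.b * other.b,
                                  self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"({self.a}, {self.b})"

# (0, 1) plays the role of i: its square is (-1, 0).
i = OrderedPairComplex(0, 1)
print(i * i)  # (-1, 0)

# Agrees with Python's built-in complex type:
z, w = OrderedPairComplex(3, 4), OrderedPairComplex(1, -2)
print(z * w)  # (11, -2), matching (3+4j)*(1-2j) = 11-2j
```

Nothing about this construction ever mentions "the square root of -1"; i arises purely from the multiplication rule on pairs.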
Chaos_engineer -
That is what my first post was about. There's no real distinction between the two possible square roots of -1; If there exists one of them, then according to the rules we use, there must be another
one, its negative, and it has the same properties as the original. They're not different like 1 and -1 are. We chose one arbitrarily and call it i, so the other one is -i. But "choosing the other
one" as i doesn't matter in the slightest - assuming you don't swap them around halfway through whatever you're doing, of course.
While my post from last night waits in the spam queue (I included two URLs for reference), I'll throw in another.
@Philip Eve:
One such presentation of complex numbers --- as ordered pairs with special multiplication rules --- is Walter Rudin's Principles of Mathematical Analysis. We used this book in my one and only
analysis class, which may explain why I remember so little of the analysis I "learned". It's a much better book for looking things up after you've already seen them than it is for studying things the
first time! If my memory is not too fallible, Rudin does it that way specifically to avoid the confusion about choosing which i you want to call i.
The point I was trying to make was that as we generally understand them, ordered pairs are *pairs*. That is, they're a set of two different numbers. A complex number is *not* a pair of numbers; it's a single number with two components. It's a difference in how we generally understand the concepts. As far as *representation* goes, an ordered pair is a fine representation for a complex number; it's just important to make the distinction between the *representation* of the complex number as a pair of reals, and the *reality* of the complex number as a single, atomic number in two dimensions.
...a function on complex numbers is often analytic, which mean that derivatives of all order exists, which in turn also are analytic functions.
Let me point out that complex numbers are especially nice in this regard. If a complex function is differentiable, it is also infinitely differentiable and analytic (analytic = representable by power
series, not just infinitely differentiable).
On the other hand, there exist real functions that are differentiable but not infinitely differentiable, and real functions that are infinitely differentiable but not analytic.
An amazing thing about imaginary numbers:
Add them and you have to associate them with a real number (e.g., i + i = 2i). You cannot have an "imaginary" quantity of "imaginary" things, else they become "negative real numbers" (e.g.: i x 2i = -2).
Of course there are quaternions, and so on, but these are non-commutative.
analytic = representable by power series, not just infinitely differentiable.
Thanks for the correction! I didn't have my complex analysis book handy, and I felt I left something out.
Torbjorn, the canonical example if you want to remember the difference between analytic and infinitely differentiable is
f(x) = e^(-1/x^2) for x not 0, f(0) = 0.
Homework: check that all derivatives of f(x) exist at 0, and are equal to 0 there. In particular, this implies that the Taylor series of f(x) centered at x=0 is identically 0. Thus f(x) is infinitely
differentiable at x=0, but not analytic there.
On the other hand, this function is not differentiable at 0 as a function on the complex numbers.
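A quick numerical sketch of why this function is so "flat" at 0: f(x) = e^(-1/x^2) vanishes faster than any power of x, which is what forces every Taylor coefficient at 0 to be zero:

```python
import math

def f(x):
    # The classic smooth-but-not-analytic function: infinitely
    # differentiable everywhere on the reals, yet not analytic at 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f(x)/x^n -> 0 as x -> 0 for every n: the function is flatter at 0
# than any polynomial, so all its Taylor coefficients there vanish,
# and the Taylor series (identically 0) fails to represent f.
for n in (1, 5, 20):
    print(n, f(0.1) / 0.1 ** n)
```

Even divided by x^20, the value at x = 0.1 is astronomically small, while f itself is plainly nonzero away from 0.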
You know the usual; it has been a long time.
I still can't find my original course book. Here are the definitions I find:
"an analytic function is a function that is locally given by a convergent power series." ( http://en.wikipedia.org/wiki/Analytic_function )
"A complex function is said to be analytic on a region R if it is complex differentiable at every point in R." ( http://mathworld.wolfram.com/AnalyticFunction.html )
"The function f(z) is analytic in a domain Omega if f(z) is differentiable at every point of Omega." (BETA Mathematics Handbook)
And of course from (complex) differentiation comes regular derivatives: "Assume that f(z) is analytic in Omega with boundary C. Then in Omega, 1. Any order derivative of f(z) exists and is an
analytic function." (BETA)
The difference in definitions is due to wikipedia discussing both real and complex analytic functions, while mathworld, BETA (and my course book) and I are discussing complex analyticity.
I will try to remember your more general definition, it is of course better.
I am way late on this, but, in essence, you have divided by 0
While I agree that complex numbers are frequently useful in physics, I've often wondered if they are necessary. In other words, couldn't a group of Martians, who avoid complex numbers for theological reasons, still solve all the problems of electrical circuits just by considering voltage and impedance to be two (real) cycles out of phase with each other? Are there any physical problems that require complex numbers? The only candidate I can think of is quantum mechanics, and even there I'm not sure that the Martians would be out of luck.
What I really want to know is, does nature really use complex numbers? Or are they just a convenient shortcut for us to make solving the equations easier?
Yes, complex numbers are really necessary. You need to forget the idea that there's something "imaginary" about them. A complex number *is* just a pair of "out-of-phase" real numbers with a
peculiar multiplication operation on those pairs. They're not two disconnected real numbers: when you do a multiplication, you get a transfer effect between the two sides of the pair: the "real"
component affects the "imaginary" component, and vice versa.
To make the equations of electrical physics work, you need *both* the pairing of the two numbers, *and* the multiplication behavior. So if you somehow removed complex numbers, you'd basically just be
rewriting all of the equations to *recreate* the complex numbers and their multiplication operation.
I need more explanation on how you got x.
Should Y = x^2+9 look different if it were graphed on the complex plane? What would it look like at 3i and -3i... and if there is no order in imaginary numbers, is it possible to graph an equation like y = x?
Should Y = x^2+9 look different if it were graphed on the complex plane? What would it look like at 3i and -3i... and if there is no order in imaginary numbers, how does one proceed to graph it?
1.6. Nearest Neighbors
sklearn.neighbors provides functionality for unsupervised and supervised neighbors-based learning methods. Unsupervised nearest neighbors is the foundation of many other learning methods, notably
manifold learning and spectral clustering. Supervised neighbors-based learning comes in two flavors: classification for data with discrete labels, and regression for data with continuous labels.
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples can be a
user-defined constant (k-nearest neighbor learning), or vary based on the local density of points (radius-based neighbor learning). The distance can, in general, be any metric measure: standard
Euclidean distance is the most common choice. Neighbors-based methods are known as non-generalizing machine learning methods, since they simply "remember" all of their training data (possibly transformed into a fast indexing structure such as a Ball Tree or KD Tree).
Despite its simplicity, nearest neighbors has been successful in a large number of classification and regression problems, including handwritten digits and satellite image scenes. Being a
non-parametric method, it is often successful in classification situations where the decision boundary is very irregular.
The classes in sklearn.neighbors can handle either NumPy arrays or scipy.sparse matrices as input. For dense matrices, a large number of possible distance metrics are supported. For sparse matrices,
arbitrary Minkowski metrics are supported for searches.
There are many learning routines which rely on nearest neighbors at their core. One example is kernel density estimation, discussed in the density estimation section.
1.6.1. Unsupervised Nearest Neighbors
NearestNeighbors implements unsupervised nearest neighbors learning. It acts as a uniform interface to three different nearest neighbors algorithms: BallTree, KDTree, and a brute-force algorithm
based on routines in sklearn.metrics.pairwise. The choice of neighbors search algorithm is controlled through the keyword 'algorithm', which must be one of ['auto', 'ball_tree', 'kd_tree', 'brute'].
When the default value 'auto' is passed, the algorithm attempts to determine the best approach from the training data. For a discussion of the strengths and weaknesses of each option, see Nearest
Neighbor Algorithms.
Regarding the Nearest Neighbors algorithms, if two neighbors \(k+1\) and \(k\) have identical distances but different labels, the result will depend on the ordering of the training data.
1.6.1.1. Finding the Nearest Neighbors
For the simple task of finding the nearest neighbors between two sets of data, the unsupervised algorithms within sklearn.neighbors can be used:
>>> from sklearn.neighbors import NearestNeighbors
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
>>> distances, indices = nbrs.kneighbors(X)
>>> indices
array([[0, 1],
[1, 0],
[2, 1],
[3, 4],
[4, 3],
[5, 4]]...)
>>> distances
array([[0. , 1. ],
[0. , 1. ],
[0. , 1.41421356],
[0. , 1. ],
[0. , 1. ],
[0. , 1.41421356]])
Because the query set matches the training set, the nearest neighbor of each point is the point itself, at a distance of zero.
It is also possible to efficiently produce a sparse graph showing the connections between neighboring points:
>>> nbrs.kneighbors_graph(X).toarray()
array([[1., 1., 0., 0., 0., 0.],
[1., 1., 0., 0., 0., 0.],
[0., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 0.],
[0., 0., 0., 1., 1., 0.],
[0., 0., 0., 0., 1., 1.]])
The dataset is structured such that points nearby in index order are nearby in parameter space, leading to an approximately block-diagonal matrix of K-nearest neighbors. Such a sparse graph is useful
in a variety of circumstances which make use of spatial relationships between points for unsupervised learning: in particular, see Isomap, LocallyLinearEmbedding, and SpectralClustering.
1.6.1.2. KDTree and BallTree Classes#
Alternatively, one can use the KDTree or BallTree classes directly to find nearest neighbors. This is the functionality wrapped by the NearestNeighbors class used above. The Ball Tree and KD Tree
have the same interface; we’ll show an example of using the KD Tree here:
>>> from sklearn.neighbors import KDTree
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> kdt = KDTree(X, leaf_size=30, metric='euclidean')
>>> kdt.query(X, k=2, return_distance=False)
array([[0, 1],
[1, 0],
[2, 1],
[3, 4],
[4, 3],
[5, 4]]...)
Refer to the KDTree and BallTree class documentation for more information on the options available for nearest neighbors searches, including specification of query strategies, distance metrics, etc.
For a list of valid metrics use KDTree.valid_metrics and BallTree.valid_metrics:
>>> from sklearn.neighbors import KDTree, BallTree
>>> KDTree.valid_metrics
['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity']
>>> BallTree.valid_metrics
['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity', 'seuclidean', 'mahalanobis', 'hamming', 'canberra', 'braycurtis', 'jaccard', 'dice', 'rogerstanimoto', 'russellrao', 'sokalmichener', 'sokalsneath', 'haversine', 'pyfunc']
1.6.2. Nearest Neighbors Classification#
Neighbors-based classification is a type of instance-based learning or non-generalizing learning: it does not attempt to construct a general internal model, but simply stores instances of the
training data. Classification is computed from a simple majority vote of the nearest neighbors of each point: a query point is assigned the data class which has the most representatives within the
nearest neighbors of the point.
scikit-learn implements two different nearest neighbors classifiers: KNeighborsClassifier implements learning based on the \(k\) nearest neighbors of each query point, where \(k\) is an integer value
specified by the user. RadiusNeighborsClassifier implements learning based on the number of neighbors within a fixed radius \(r\) of each training point, where \(r\) is a floating-point value
specified by the user.
The \(k\)-neighbors classification in KNeighborsClassifier is the most commonly used technique. The optimal choice of the value \(k\) is highly data-dependent: in general a larger \(k\) suppresses
the effects of noise, but makes the classification boundaries less distinct.
In cases where the data is not uniformly sampled, radius-based neighbors classification in RadiusNeighborsClassifier can be a better choice. The user specifies a fixed radius \(r\), such that points
in sparser neighborhoods use fewer nearest neighbors for the classification. For high-dimensional parameter spaces, this method becomes less effective due to the so-called “curse of dimensionality”.
The basic nearest neighbors classification uses uniform weights: that is, the value assigned to a query point is computed from a simple majority vote of the nearest neighbors. Under some
circumstances, it is better to weight the neighbors such that nearer neighbors contribute more to the fit. This can be accomplished through the weights keyword. The default value, weights = 'uniform'
, assigns uniform weights to each neighbor. weights = 'distance' assigns weights proportional to the inverse of the distance from the query point. Alternatively, a user-defined function of the
distance can be supplied to compute the weights.
1.6.3. Nearest Neighbors Regression#
Neighbors-based regression can be used in cases where the data labels are continuous rather than discrete variables. The label assigned to a query point is computed based on the mean of the labels of
its nearest neighbors.
scikit-learn implements two different neighbors regressors: KNeighborsRegressor implements learning based on the \(k\) nearest neighbors of each query point, where \(k\) is an integer value specified
by the user. RadiusNeighborsRegressor implements learning based on the neighbors within a fixed radius \(r\) of the query point, where \(r\) is a floating-point value specified by the user.
The basic nearest neighbors regression uses uniform weights: that is, each point in the local neighborhood contributes uniformly to the classification of a query point. Under some circumstances, it
can be advantageous to weight points such that nearby points contribute more to the regression than faraway points. This can be accomplished through the weights keyword. The default value, weights =
'uniform', assigns equal weights to all points. weights = 'distance' assigns weights proportional to the inverse of the distance from the query point. Alternatively, a user-defined function of the
distance can be supplied, which will be used to compute the weights.
The use of multi-output nearest neighbors for regression is demonstrated in Face completion with a multi-output estimators. In this example, the inputs X are the pixels of the upper half of faces and
the outputs Y are the pixels of the lower half of those faces.
1.6.4. Nearest Neighbor Algorithms#
1.6.4.1. Brute Force#
Fast computation of nearest neighbors is an active area of research in machine learning. The most naive neighbor search implementation involves the brute-force computation of distances between all
pairs of points in the dataset: for \(N\) samples in \(D\) dimensions, this approach scales as \(O[D N^2]\). Efficient brute-force neighbors searches can be very competitive for small data samples.
However, as the number of samples \(N\) grows, the brute-force approach quickly becomes infeasible. In the classes within sklearn.neighbors, brute-force neighbors searches are specified using the
keyword algorithm = 'brute', and are computed using the routines available in sklearn.metrics.pairwise.
1.6.4.2. K-D Tree#
To address the computational inefficiencies of the brute-force approach, a variety of tree-based data structures have been invented. In general, these structures attempt to reduce the required number
of distance calculations by efficiently encoding aggregate distance information for the sample. The basic idea is that if point \(A\) is very distant from point \(B\), and point \(B\) is very close
to point \(C\), then we know that points \(A\) and \(C\) are very distant, without having to explicitly calculate their distance. In this way, the computational cost of a nearest neighbors search can
be reduced to \(O[D N \log(N)]\) or better. This is a significant improvement over brute-force for large \(N\).
An early approach to taking advantage of this aggregate information was the KD tree data structure (short for K-dimensional tree), which generalizes two-dimensional Quad-trees and 3-dimensional
Oct-trees to an arbitrary number of dimensions. The KD tree is a binary tree structure which recursively partitions the parameter space along the data axes, dividing it into nested orthotropic
regions into which data points are filed. The construction of a KD tree is very fast: because partitioning is performed only along the data axes, no \(D\)-dimensional distances need to be computed.
Once constructed, the nearest neighbor of a query point can be determined with only \(O[\log(N)]\) distance computations. Though the KD tree approach is very fast for low-dimensional (\(D < 20\))
neighbors searches, it becomes inefficient as \(D\) grows very large: this is one manifestation of the so-called “curse of dimensionality”. In scikit-learn, KD tree neighbors searches are specified
using the keyword algorithm = 'kd_tree', and are computed using the class KDTree.
1.6.4.3. Ball Tree#
To address the inefficiencies of KD Trees in higher dimensions, the ball tree data structure was developed. Where KD trees partition data along Cartesian axes, ball trees partition data in a series
of nesting hyper-spheres. This makes tree construction more costly than that of the KD tree, but results in a data structure which can be very efficient on highly structured data, even in very high dimensions.
A ball tree recursively divides the data into nodes defined by a centroid \(C\) and radius \(r\), such that each point in the node lies within the hyper-sphere defined by \(r\) and \(C\). The number
of candidate points for a neighbor search is reduced through use of the triangle inequality:
\[|x+y| \leq |x| + |y|\]
With this setup, a single distance calculation between a test point and the centroid is sufficient to determine a lower and upper bound on the distance to all points within the node. Because of the
spherical geometry of the ball tree nodes, it can out-perform a KD-tree in high dimensions, though the actual performance is highly dependent on the structure of the training data. In scikit-learn,
ball-tree-based neighbors searches are specified using the keyword algorithm = 'ball_tree', and are computed using the class BallTree. Alternatively, the user can work with the BallTree class directly.
1.6.4.4. Choice of Nearest Neighbors Algorithm#
The optimal algorithm for a given dataset is a complicated choice, and depends on a number of factors:
• number of samples \(N\) (i.e. n_samples) and dimensionality \(D\) (i.e. n_features).
□ Brute force query time grows as \(O[D N]\)
□ Ball tree query time grows as approximately \(O[D \log(N)]\)
□ KD tree query time changes with \(D\) in a way that is difficult to precisely characterise. For small \(D\) (less than 20 or so) the cost is approximately \(O[D\log(N)]\), and the KD tree
query can be very efficient. For larger \(D\), the cost increases to nearly \(O[DN]\), and the overhead due to the tree structure can lead to queries which are slower than brute force.
For small data sets (\(N\) less than 30 or so), \(\log(N)\) is comparable to \(N\), and brute force algorithms can be more efficient than a tree-based approach. Both KDTree and BallTree address
this through providing a leaf size parameter: this controls the number of samples at which a query switches to brute-force. This allows both algorithms to approach the efficiency of a brute-force
computation for small \(N\).
• data structure: intrinsic dimensionality of the data and/or sparsity of the data. Intrinsic dimensionality refers to the dimension \(d \le D\) of a manifold on which the data lies, which can be
linearly or non-linearly embedded in the parameter space. Sparsity refers to the degree to which the data fills the parameter space (this is to be distinguished from the concept as used in
“sparse” matrices. The data matrix may have no zero entries, but the structure can still be “sparse” in this sense).
□ Brute force query time is unchanged by data structure.
□ Ball tree and KD tree query times can be greatly influenced by data structure. In general, sparser data with a smaller intrinsic dimensionality leads to faster query times. Because the KD
tree internal representation is aligned with the parameter axes, it will not generally show as much improvement as ball tree for arbitrarily structured data.
Datasets used in machine learning tend to be very structured, and are very well-suited for tree-based queries.
• number of neighbors \(k\) requested for a query point.
□ Brute force query time is largely unaffected by the value of \(k\)
□ Ball tree and KD tree query time will become slower as \(k\) increases. This is due to two effects: first, a larger \(k\) leads to the necessity to search a larger portion of the parameter
space. Second, using \(k > 1\) requires internal queueing of results as the tree is traversed.
As \(k\) becomes large compared to \(N\), the ability to prune branches in a tree-based query is reduced. In this situation, Brute force queries can be more efficient.
• number of query points. Both the ball tree and the KD Tree require a construction phase. The cost of this construction becomes negligible when amortized over many queries. If only a small number
of queries will be performed, however, the construction can make up a significant fraction of the total cost. If very few query points will be required, brute force is better than a tree-based method.
Currently, algorithm = 'auto' selects 'brute' if any of the following conditions are verified:
• input data is sparse
• metric = 'precomputed'
• \(D > 15\)
• \(k \geq N/2\)
• effective_metric_ isn’t in the VALID_METRICS list for either 'kd_tree' or 'ball_tree'
Otherwise, it selects the first out of 'kd_tree' and 'ball_tree' that has effective_metric_ in its VALID_METRICS list. This heuristic is based on the following assumptions:
• the number of query points is at least the same order as the number of training points
• leaf_size is close to its default value of 30
• when \(D > 15\), the intrinsic dimensionality of the data is generally too high for tree-based methods
Effect of leaf_size#
As noted above, for small sample sizes a brute force search can be more efficient than a tree-based query. This fact is accounted for in the ball tree and KD tree by internally switching to brute
force searches within leaf nodes. The level of this switch can be specified with the parameter leaf_size. This parameter choice has many effects:
construction time
A larger leaf_size leads to a faster tree construction time, because fewer nodes need to be created
query time
Both a large or small leaf_size can lead to suboptimal query cost. For leaf_size approaching 1, the overhead involved in traversing nodes can significantly slow query times. For leaf_size
approaching the size of the training set, queries become essentially brute force. A good compromise between these is leaf_size = 30, the default value of the parameter.
memory
As leaf_size increases, the memory required to store a tree structure decreases. This is especially important in the case of ball tree, which stores a \(D\)-dimensional centroid for each node.
The required storage space for BallTree is approximately 1 / leaf_size times the size of the training set.
leaf_size is not referenced for brute force queries.
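Since leaf_size only changes where the tree-vs-brute-force switch happens, it affects speed and memory but never the answer. A small sketch (illustrative random data) verifying that trees built with very different leaf sizes return identical neighbors:

```python
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.RandomState(42)
X = rng.rand(200, 3)

# Different leaf sizes trade construction time, query time and memory,
# but the neighbors found are identical.
small = KDTree(X, leaf_size=2)
large = KDTree(X, leaf_size=100)

idx_small = small.query(X[:10], k=3, return_distance=False)
idx_large = large.query(X[:10], k=3, return_distance=False)
print(np.array_equal(idx_small, idx_large))  # → True
```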
Valid Metrics for Nearest Neighbor Algorithms#
For a list of available metrics, see the documentation of the DistanceMetric class and the metrics listed in sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS. Note that the “cosine” metric uses cosine_distances.
A list of valid metrics for any of the above algorithms can be obtained by using their valid_metric attribute. For example, valid metrics for KDTree can be generated by:
>>> from sklearn.neighbors import KDTree
>>> print(sorted(KDTree.valid_metrics))
['chebyshev', 'cityblock', 'euclidean', 'infinity', 'l1', 'l2', 'manhattan', 'minkowski', 'p']
1.6.5. Nearest Centroid Classifier#
The NearestCentroid classifier is a simple algorithm that represents each class by the centroid of its members. In effect, this makes it similar to the label updating phase of the KMeans algorithm.
It also has no parameters to choose, making it a good baseline classifier. It does, however, suffer on non-convex classes, as well as when classes have drastically different variances, as equal
variance in all dimensions is assumed. See Linear Discriminant Analysis (LinearDiscriminantAnalysis) and Quadratic Discriminant Analysis (QuadraticDiscriminantAnalysis) for more complex methods that
do not make this assumption. Usage of the default NearestCentroid is simple:
>>> from sklearn.neighbors import NearestCentroid
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = NearestCentroid()
>>> clf.fit(X, y)
NearestCentroid()
>>> print(clf.predict([[-0.8, -1]]))
[1]
1.6.5.1. Nearest Shrunken Centroid#
The NearestCentroid classifier has a shrink_threshold parameter, which implements the nearest shrunken centroid classifier. In effect, the value of each feature for each centroid is divided by the
within-class variance of that feature. The feature values are then reduced by shrink_threshold. Most notably, if a particular feature value crosses zero, it is set to zero. In effect, this removes
the feature from affecting the classification. This is useful, for example, for removing noisy features.
In the example below, using a small shrink threshold increases the accuracy of the model from 0.81 to 0.82.
1.6.6. Nearest Neighbors Transformer#
Many scikit-learn estimators rely on nearest neighbors: Several classifiers and regressors such as KNeighborsClassifier and KNeighborsRegressor, but also some clustering methods such as DBSCAN and
SpectralClustering, and some manifold embeddings such as TSNE and Isomap.
All these estimators can compute internally the nearest neighbors, but most of them also accept precomputed nearest neighbors sparse graph, as given by kneighbors_graph and radius_neighbors_graph.
With mode='connectivity', these functions return a binary adjacency sparse graph as required, for instance, in SpectralClustering, whereas with mode='distance', they return a distance sparse
graph as required, for instance, in DBSCAN. To include these functions in a scikit-learn pipeline, one can also use the corresponding classes KNeighborsTransformer and RadiusNeighborsTransformer. The
benefits of this sparse graph API are multiple.
First, the precomputed graph can be re-used multiple times, for instance while varying a parameter of the estimator. This can be done manually by the user, or using the caching properties of the
scikit-learn pipeline:
>>> import tempfile
>>> from sklearn.manifold import Isomap
>>> from sklearn.neighbors import KNeighborsTransformer
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.datasets import make_regression
>>> cache_path = tempfile.gettempdir() # we use a temporary folder here
>>> X, _ = make_regression(n_samples=50, n_features=25, random_state=0)
>>> estimator = make_pipeline(
... KNeighborsTransformer(mode='distance'),
... Isomap(n_components=3, metric='precomputed'),
... memory=cache_path)
>>> X_embedded = estimator.fit_transform(X)
>>> X_embedded.shape
(50, 3)
Second, precomputing the graph can give finer control on the nearest neighbors estimation, for instance enabling multiprocessing through the parameter n_jobs, which might not be available in all estimators.
Finally, the precomputation can be performed by custom estimators to use different implementations, such as approximate nearest neighbors methods, or implementation with special data types. The
precomputed neighbors sparse graph needs to be formatted as in radius_neighbors_graph output:
• a CSR matrix (although COO, CSC or LIL will be accepted).
• only explicitly store nearest neighborhoods of each sample with respect to the training data. This should include those at 0 distance from a query point, including the matrix diagonal when
computing the nearest neighborhoods between the training data and itself.
• each row’s data should store the distance in increasing order (optional. Unsorted data will be stable-sorted, adding a computational overhead).
• all values in data should be non-negative.
• there should be no duplicate indices in any row (see scipy/scipy#5807).
• if the algorithm being passed the precomputed matrix uses k nearest neighbors (as opposed to radius neighborhood), at least k neighbors must be stored in each row (or k+1, as explained in the
following note).
When a specific number of neighbors is queried (using KNeighborsTransformer), the definition of n_neighbors is ambiguous since it can either include each training point as its own neighbor, or
exclude them. Neither choice is perfect, since including them leads to a different number of non-self neighbors during training and testing, while excluding them leads to a difference between fit
(X).transform(X) and fit_transform(X), which is against scikit-learn API. In KNeighborsTransformer we use the definition which includes each training point as its own neighbor in the count of
n_neighbors. However, for compatibility reasons with other estimators which use the other definition, one extra neighbor will be computed when mode == 'distance'. To maximise compatibility with all
estimators, a safe choice is to always include one extra neighbor in a custom nearest neighbors estimator, since unnecessary neighbors will be filtered by following estimators.
1.6.7. Neighborhood Components Analysis#
Neighborhood Components Analysis (NCA, NeighborhoodComponentsAnalysis) is a distance metric learning algorithm which aims to improve the accuracy of nearest neighbors classification compared to the
standard Euclidean distance. The algorithm directly maximizes a stochastic variant of the leave-one-out k-nearest neighbors (KNN) score on the training set. It can also learn a low-dimensional linear
projection of data that can be used for data visualization and fast classification.
In the above illustrating figure, we consider some points from a randomly generated dataset. We focus on the stochastic KNN classification of point no. 3. The thickness of a link between sample 3 and
another point is proportional to their distance, and can be seen as the relative weight (or probability) that a stochastic nearest neighbor prediction rule would assign to this point. In the original
space, sample 3 has many stochastic neighbors from various classes, so the right class is not very likely. However, in the projected space learned by NCA, the only stochastic neighbors with
non-negligible weight are from the same class as sample 3, guaranteeing that the latter will be well classified. See the mathematical formulation for more details.
1.6.7.1. Classification#
Combined with a nearest neighbors classifier (KNeighborsClassifier), NCA is attractive for classification because it can naturally handle multi-class problems without any increase in the model size,
and does not introduce additional parameters that require fine-tuning by the user.
NCA classification has been shown to work well in practice for data sets of varying size and difficulty. In contrast to related methods such as Linear Discriminant Analysis, NCA does not make any
assumptions about the class distributions. The nearest neighbor classification can naturally produce highly irregular decision boundaries.
To use this model for classification, one needs to combine a NeighborhoodComponentsAnalysis instance that learns the optimal transformation with a KNeighborsClassifier instance that performs the
classification in the projected space. Here is an example using the two classes:
>>> from sklearn.neighbors import (NeighborhoodComponentsAnalysis,
... KNeighborsClassifier)
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import Pipeline
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
... stratify=y, test_size=0.7, random_state=42)
>>> nca = NeighborhoodComponentsAnalysis(random_state=42)
>>> knn = KNeighborsClassifier(n_neighbors=3)
>>> nca_pipe = Pipeline([('nca', nca), ('knn', knn)])
>>> nca_pipe.fit(X_train, y_train)
>>> print(nca_pipe.score(X_test, y_test))
The plot shows decision boundaries for Nearest Neighbor Classification and Neighborhood Components Analysis classification on the iris dataset, when training and scoring on only two features, for
visualisation purposes.
1.6.7.2. Dimensionality reduction#
NCA can be used to perform supervised dimensionality reduction. The input data are projected onto a linear subspace consisting of the directions which minimize the NCA objective. The desired
dimensionality can be set using the parameter n_components. For instance, the following figure shows a comparison of dimensionality reduction with Principal Component Analysis (PCA), Linear
Discriminant Analysis (LinearDiscriminantAnalysis) and Neighborhood Component Analysis (NeighborhoodComponentsAnalysis) on the Digits dataset, a dataset with size \(n_{samples} = 1797\) and \(n_{features} = 64\). The data set is split into a training and a test set of equal size, then standardized. For evaluation the 3-nearest neighbor classification accuracy is computed on the
2-dimensional projected points found by each method. Each data sample belongs to one of 10 classes.
1.6.7.3. Mathematical formulation#
The goal of NCA is to learn an optimal linear transformation matrix of size (n_components, n_features), which maximises the sum over all samples \(i\) of the probability \(p_i\) that \(i\) is
correctly classified, i.e.:
\[\underset{L}{\arg\max} \sum\limits_{i=0}^{N - 1} p_{i}\]
with \(N\) = n_samples and \(p_i\) the probability of sample \(i\) being correctly classified according to a stochastic nearest neighbors rule in the learned embedded space:
\[p_{i}=\sum\limits_{j \in C_i}{p_{i j}}\]
where \(C_i\) is the set of points in the same class as sample \(i\), and \(p_{i j}\) is the softmax over Euclidean distances in the embedded space:
\[p_{i j} = \frac{\exp(-||L x_i - L x_j||^2)}{\sum\limits_{k \ne i} {\exp(-||L x_i - L x_k||^2)}} , \quad p_{i i} = 0\]
Mahalanobis distance#
NCA can be seen as learning a (squared) Mahalanobis distance metric:
\[|| L(x_i - x_j)||^2 = (x_i - x_j)^TM(x_i - x_j),\]
where \(M = L^T L\) is a symmetric positive semi-definite matrix of size (n_features, n_features).
1.6.7.4. Implementation#
This implementation follows what is explained in the original paper [1]. For the optimisation method, it currently uses scipy’s L-BFGS-B with a full gradient computation at each iteration, to avoid having to tune the learning rate and to provide stable learning.
See the examples below and the docstring of NeighborhoodComponentsAnalysis.fit for further information.
1.6.7.5. Complexity#
1.6.7.5.1. Training#
NCA stores a matrix of pairwise distances, taking n_samples ** 2 memory. Time complexity depends on the number of iterations done by the optimisation algorithm. However, one can set the maximum
number of iterations with the argument max_iter. For each iteration, time complexity is O(n_components x n_samples x min(n_samples, n_features)).
1.6.7.5.2. Transform#
Here the transform operation returns \(LX^T\), therefore its time complexity equals n_components * n_features * n_samples_test. There is no added space complexity in the operation.
Question and Answers Forum
Question Number 153864 by liberty last updated on 11/Sep/21
Commented by MJS_new last updated on 11/Sep/21
$$\mathrm{1}\:\mathrm{equation}\:/\:\mathrm{3}\:\mathrm{unknown}\:??? \\ $$
Reconstruction of traffic state using autonomous vehicles
A coupled PDE-ODE model
We consider the following hybrid PDE-ODE model
Above, (1a) and (1b) consists of a scalar conservation proposed by Lighthill-Whitham-Richards law modeling the evolution of vehicular traffic. (1c) and (1d) represent the trajectory of autonomous
vehicles. stands for the density of cars and the flux function is defined by with the average speed .
We assume that the initial density is an unknown function, and our goal is to find a time at which it is possible to reconstruct the true density between two autonomous vehicles based only on the measured local density of each autonomous vehicle.
We introduce the two following operators
• is the solution of (1a) at time and at the position with initial density
• where is the solution of (1) with initial data .
Figure 1.1 - if and if with two autonomous vehicles positioned at and . Figure 1.2 - if and if and if with two connected vehicles positioned at and
In Figure 1.1 and Figure 1.2, we construct two initial data and such that and for every , for every , . Thus, there isn’t backward uniqueness from the trajectory of autonomous vehicles.
Figure 1.3 if and if with two autonomous vehicles positioned at $x=8$ and $x=12$ respectively
In Figure Figure 1.3, we cannot reconstruct the traffic state at any time using data collected by the two autonomous vehicles. Some additional conditions on the set of initial densities will be
Theorem Let and we assume that for every , . For every there exists such that for every and for almost every we have
The reconstruction time is obtained using the notion of generalized characteristics and has an implicit form.
Numerical simulations
In Figure 2.1 and Figure 2.2, we consider the example of two shocks with a fan of rarefaction shocks between the two shocks. A total of three AVs, denoted by , and , are used to reconstruct the
traffic state resulting in two regions of reconstruction between and , and between and . Specifically, the initial density is defined as follows: $\rho_0(x) = 0.0938$ for $x\in (-\infty, 8]$, $\rho_0(x) = 0.9062$ for , for $x\in (10, 13]$ and $\rho_0(x) = 0.9062$ for . , , and start at , , and , respectively. The resulting traffic state solved using wave front tracking over the first 20
seconds is shown in Figure 2.1, while the reconstructed state between the AVs is shown in Figure 2.2. The time at which the reconstruction becomes valid is between and , and between and .
Figure 2.1 Figure 2.2
In Figure 3.1 and Figure 3.2, one autonomous vehicle, denoted by , is deployed on a ring. The initial density is a -periodic function defined as follows: for , for . Since and are also periodic
functions, both trajectories plotted in red in Figure 3.2 are the ones of . The traffic state on the whole ring can be reconstructed after .
Figure 3.1 Figure 3.2
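The LWR dynamics in (1a)–(1b) can be cross-checked numerically with a standard Godunov finite-volume scheme. The sketch below is not the authors' wave-front-tracking code: it assumes the Greenshields flux \(f(\rho)=\rho(1-\rho)\) on a periodic domain, purely for illustration.

```python
import numpy as np

def flux(rho):
    # Greenshields flux f(rho) = rho * (1 - rho), maximal at rho = 0.5.
    return rho * (1.0 - rho)

def godunov_flux(rl, rr):
    # Exact Godunov numerical flux for a concave flux with maximum at 0.5.
    if rl <= rr:
        return min(flux(rl), flux(rr))
    if rr <= 0.5 <= rl:
        return flux(0.5)
    return max(flux(rl), flux(rr))

def step(rho, dx, dt):
    # One conservative update on a periodic domain (CFL: dt <= dx / max|f'|, |f'| <= 1).
    left = np.roll(rho, 1)
    F = np.array([godunov_flux(a, b) for a, b in zip(left, rho)])
    # F[i] approximates the flux through the left edge of cell i.
    return rho - dt / dx * (np.roll(F, -1) - F)

# Riemann-type initial datum: a congested region inside free-flow traffic.
n, dx, dt = 200, 1.0 / 200, 0.5 / 200
x = np.arange(n) * dx
rho = np.where((x > 0.4) & (x < 0.6), 0.9, 0.1)

mass0 = rho.sum() * dx
for _ in range(100):
    rho = step(rho, dx, dt)
print(abs(rho.sum() * dx - mass0))  # mass is conserved up to round-off
```

The scheme is monotone, so the density stays within the range of the initial data, and the conservative update preserves total mass exactly on the periodic domain.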
[1] T. Liard, B. Piccoli, R. Stern, D. Work. Traffic reconstruction using autonomous vehicles; submitted (2019)
Iteration over lists/arrays/etc in a function to calculate values during simulation
Q: I just have a simple question, because I cannot find some example about it. It is possible to iterate over a collection (arrays, list, etc) just to calculate some values in a function, when
executing the simulation?
I want to calculate “mean” and “deviation” values, taking them from indexing 2 arrays I have in the declaration, to later send them as parameters for a normal distribution function (simulating a
delay()). I need to use direct indexes like: if I have an array a=[100,250,300] in the declaration, then in a function I can get a[i], being i: 1..3. So, I can do calculations like a[i]+b[i]; a[i]/b
[i]; etc.
I tried defining “colset”, “list” and also with “val”. Any of them is fine, because the values are constant and defined in the declaration section; to index them I receive an “i” value during
simulation, so I need to do a[i], that’s it.
Can you help me please? Do you have some easy example?. I already checked CPN Tools examples and SML manuals, but I didn’t see this case.
Thank you in advance for any help you can give me.
A: You could use the list functions for this. In your case, if a is your list and i is your index (with base 1), then List.nth(a, i-1) provides you with a[i]. Note that you use arrays with base 1,
whereas Standard ML uses lists with base 0, that’s why we need i-1.
You may also want to take a look at the Standard ML list structure. It contains generic list functions, and in particular the two fold functions: http://sml-family.org/Basis/list.html#
The fold functions implement the same paradigm you may know as Google's MapReduce (actually, MapReduce was inspired by the fold functions).
That allows you to implement a list sum and average functions as:
fun intSum (a, b) = a + b
fun listSum lst = List.foldl intSum 0 lst
fun listAvg [] = 0 | listAvg lst = (listSum lst) div (List.length lst)
Note the average has several issues: it uses integer division, so listAvg [1, 2] = 1 and it is prone to overflows, so listAvg [1000000000, 1000000000] will fail.
The intSum function is a misnomer; it will also work with lists of real. listSum and listAvg will only work with ints (because of 0 and div; use 0.0 and / instead to make it use with reals). | {"url":"https://cpntools.org/2018/02/26/iteration-over-lists-arrays-etc-in-a-function-to-calculate-values-during-simulation/","timestamp":"2024-11-09T06:43:48Z","content_type":"text/html","content_length":"58644","record_id":"<urn:uuid:a68461d0-3c5b-4c59-afae-e70d8cd6fb17>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00275.warc.gz"} |
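For readers more comfortable in Python, the same fold pattern (and the same integer-division pitfall noted above) can be sketched like this; the function names mirror the SML ones but are otherwise my own.

```python
from functools import reduce

def list_sum(lst):
    # foldl with (+) and initial value 0, like `List.foldl intSum 0 lst` in SML
    return reduce(lambda a, b: a + b, lst, 0)

def list_avg(lst):
    # Mirrors the SML version, including its quirks: an empty list
    # averages to 0, and // truncates just like SML's `div`.
    if not lst:
        return 0
    return list_sum(lst) // len(lst)

print(list_sum([100, 250, 300]))  # 650
print(list_avg([1, 2]))           # 1, not 1.5 -- integer division truncates
```

Swapping `//` for `/` gives the real-valued average, which is the fix the answer suggests for the SML code as well.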
Average Roofing Cost per Square Foot in context of roofing cost
04 Oct 2024
Title: The Average Roofing Cost per Square Foot: A Comprehensive Analysis
Abstract: This study aims to investigate the average roofing cost per square foot, a crucial factor in determining the overall expenditure for roof replacement or installation. By analyzing various
factors influencing roofing costs, this research provides a comprehensive framework for understanding the relationship between roofing cost and square footage.
Introduction: Roofing is a critical component of any building’s infrastructure, requiring periodic maintenance and eventual replacement. The cost of roofing can vary significantly depending on
several factors, including material type, installation method, and geographic location. This study focuses on the average roofing cost per square foot, providing a formula-based approach to
understanding this crucial metric.
Methodology: To determine the average roofing cost per square foot, we analyzed data from various sources, including industry reports, academic studies, and online databases. We considered factors
such as:
• Material type (e.g., asphalt shingles, metal, clay tile)
• Installation method (e.g., manual labor, mechanized installation)
• Geographic location (e.g., urban, rural, coastal)
• Roof complexity (e.g., multiple layers, irregular shapes)
Formula: The average roofing cost per square foot can be calculated using the following formula:
Average Roofing Cost per Square Foot = (Total Roofing Cost / Total Square Footage) + Additional Costs
• Total Roofing Cost is the total cost of the roofing project
• Total Square Footage is the total area of the roof being replaced or installed
• Additional Costs include factors such as permits, inspections, and labor costs
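As a sketch, the article's formula can be expressed as a small function. Note that the formula only makes dimensional sense if Additional Costs are already expressed per square foot; the function name and the sample numbers below are invented for illustration.

```python
def average_cost_per_sqft(total_roofing_cost, total_sqft, additional_costs_per_sqft=0.0):
    """Average Roofing Cost per Square Foot, per the article's formula:
    (Total Roofing Cost / Total Square Footage) + Additional Costs."""
    if total_sqft <= 0:
        raise ValueError("square footage must be positive")
    return total_roofing_cost / total_sqft + additional_costs_per_sqft

# Hypothetical numbers: a $12,000 asphalt-shingle job on a 2,000 sq ft roof,
# plus $0.50/sq ft in permits and inspections.
print(average_cost_per_sqft(12_000, 2_000, 0.50))  # 6.5
```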
Discussion: The average roofing cost per square foot is influenced by various factors, including material type, installation method, and geographic location. For example:
• Asphalt shingles may have a lower average cost per square foot compared to metal or clay tile roofs.
• Mechanized installation methods may result in higher costs due to equipment rental fees.
• Coastal regions may experience higher costs due to increased labor demands and material prices.
Conclusion: This study provides a comprehensive framework for understanding the average roofing cost per square foot. By considering factors such as material type, installation method, and geographic
location, building owners and architects can better estimate the total cost of a roofing project. The formula presented in this article serves as a useful tool for calculating the average roofing
cost per square foot.
Note: This is an abstract-style academic article, providing a general overview of the topic without numerical examples or specific data.
| {"url":"https://blog.truegeometry.com/tutorials/education/7c9f2788bd8792acf8dfe642ddbab5b4/JSON_TO_ARTCL_Average_Roofing_Cost_per_Square_Foot_in_context_of_roofing_cost_.html","timestamp":"2024-11-04T14:34:15Z","content_type":"text/html","content_length":"16599","record_id":"<urn:uuid:b3400101-145c-439a-8e08-01312c9faff3>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00045.warc.gz"}
DC GENERATORS by CK Joe-uzuegbu PDF download - 1113
DC GENERATORS by CK Joe-uzuegbu PDF free download
CK Joe-uzuegbu's DC GENERATORS PDF was published in 2020 and uploaded for 400-level Engineering students of the Federal University of Technology, Owerri (FUTO), offering the PSE411 course. This ebook can be downloaded for FREE online on this page.
DC GENERATORS ebook can be used to learn generator, GENERATED EMF, Self-excited Generator, Separately excited Generator. | {"url":"https://carlesto.com/books/1113/dc-generators-pdf-by-ck-joe-uzuegbu","timestamp":"2024-11-05T22:36:18Z","content_type":"text/html","content_length":"69730","record_id":"<urn:uuid:fb9edfc9-83cd-4279-a051-841491790f75>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00566.warc.gz"} |
Business Modelling Revision
Mutually exclusive events: add the probabilities together to find the probability that one or other of the events will occur, e.g. men/women.
P(A or B) = P(A) + P(B)
Non-mutually exclusive events (shared characteristic):
P(A or B) = P(A) + P(B) - P(A and B)
Independent events: one outcome is known to have no effect on another outcome.
P(A and B) = P(A) x P(B)
Dependent events: the outcome of one event affects the probability of the outcome of the other. The probability of the second event is said to be dependent on the outcome of the first.
P(A and B) = P(A) x P(B|A)
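These rules can be sanity-checked by brute-force enumeration. The two-dice events below are my own example, not from the notes: A = "first die is even", B = "total exceeds 7". A and B are not mutually exclusive, and they are dependent.

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 rolls of two dice
A = {o for o in outcomes if o[0] % 2 == 0}        # first die is even
B = {o for o in outcomes if sum(o) > 7}           # total exceeds 7

p = lambda s: len(s) / len(outcomes)

# Not mutually exclusive, so P(A or B) = P(A) + P(B) - P(A and B)
assert abs(p(A | B) - (p(A) + p(B) - p(A & B))) < 1e-12

# Dependent events, so P(A and B) = P(A) x P(B|A), with P(B|A) = |A and B| / |A|
assert abs(p(A & B) - p(A) * (len(A & B) / len(A))) < 1e-12

print(p(A), round(p(B), 3), p(A & B), round(p(A | B), 3))  # 0.5 0.417 0.25 0.667
```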
Binomial distribution: used when a series of trials has the following characteristics:
Each trial has two possible outcomes
The two outcomes are mutually exclusive
There are constant probabilities of success, p and failure, q=1-p
P(r successes in n trials) = nCr x p^r x q^(n-r)
Example using the binomial distribution
A company employs a large number of graduates each September. In spite of careful recruitment, including interviews and assessment centres, the company finds that 2% of graduates have left the company within one month of starting. Use the binomial distribution to find the probability that:
i) Three out of a sample of 10 graduates will have left the company within one month of starting. Binomial with n = 10, p = 0.02: P(three leave) = 10C3 x 0.02^3 x 0.98^7 = 0.0008334. Three out of ten leaving is clearly very unlikely.
ii) The same, but with n = 100, p = 0.02: P(three leave) = 100C3 x 0.02^3 x 0.98^97 = 0.182.
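Both calculations can be verified in a few lines of Python using `math.comb`:

```python
from math import comb

def binomial_pmf(r, n, p):
    # P(r successes in n trials) = nCr * p**r * (1-p)**(n-r)
    return comb(n, r) * p**r * (1 - p)**(n - r)

# i) n = 10: P(exactly 3 of 10 graduates leave)
print(round(binomial_pmf(3, 10, 0.02), 7))   # 0.0008334
# ii) n = 100: P(exactly 3 of 100 graduates leave)
print(round(binomial_pmf(3, 100, 0.02), 3))  # 0.182
```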
Poisson distribution: a close relative of the binomial distribution, and can be used to approximate it when
The number of trials, n is large
The probability of success, p, is small
Also useful for solving problems where events occur at random
The main difference between the binomial and Poisson distributions is that the binomial distribution uses the probabilities of both success and failure, while the Poisson uses only the probability of success.
P(r successes) = e^(-lambda) x lambda^r / r!, where lambda = np
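For the graduate example with n = 100, the Poisson approximation with lambda = np = 2 comes out close to the exact binomial value; this comparison is my own illustration, not from the notes.

```python
from math import comb, exp, factorial

def poisson_pmf(r, lam):
    # P(r successes) = e**(-lam) * lam**r / r!
    return exp(-lam) * lam**r / factorial(r)

def binomial_pmf(r, n, p):
    return comb(n, r) * p**r * (1 - p)**(n - r)

# Graduate example: n = 100, p = 0.02, so lam = n*p = 2 (n large, p small).
print(round(poisson_pmf(3, 2.0), 3))         # 0.18
print(round(binomial_pmf(3, 100, 0.02), 3))  # 0.182 -- close agreement
```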
Normal distribution
Both the binomial and Poisson distributions are used | {"url":"https://www.studymode.com/essays/Business-Modelling-66808573.html","timestamp":"2024-11-08T15:54:31Z","content_type":"text/html","content_length":"95777","record_id":"<urn:uuid:a0d78f41-01b8-43a5-84e7-79a00b18ef07>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00363.warc.gz"}
How to Solve Complicated Circuits with Kirchhoff's Voltage Law (KVL)?
We have gone over Kirchhoff’s Current Law (KCL) in a previous tutorial and Kirchhoff’s Voltage Law (KVL) is very similar but focused on the voltage in a circuit, not the current. Kirchhoff’s Voltage
law states that the sum of the voltages in a closed loop will equal zero. In other words, if you look at any loop that goes completely all the way around, any increases in voltage throughout the loop
will be offset with an equal amount of decreases in voltage. Visually, this can be seen in the image below.
Using this concept, much as how we can use nodal analysis with KCL, we can use mesh analysis because of KVL. While a mesh is basically any loop within a circuit, for mesh analysis, we will need to
define meshes that don’t enclose any other meshes.
You can see that if we make a loop around the ‘outside’ of the entire circuit, technically it is a mesh because the loop can be completed. However, for purposes of analysis, we need to break it into
three different meshes. So let’s go over the steps of how to solve a circuit using mesh analysis before jumping into a few examples.
There are 5 steps that we recommend, and as we did with the KCL/nodal analysis steps, two of the steps are to calm down and step back, making sure that everything makes sense intuitively.
1. Take your time, breathe, and assess the problem. Write down what info you’ve been given and any intuitive insights you have.
2. Assign mesh currents to all of the meshes. There should be one current assigned per mesh. You need to choose which direction your current is flowing - this is semi-arbitrary because as long as
you do your math right, it doesn’t really matter. But in most cases, people assume a clockwise current direction.
3. Apply KVL to each of the meshes, using Ohm’s Law to show the voltages in terms of the current.
4. Solve the simultaneous equations (like we did with KCL) to find the actual values.
5. Sanity check. Take a moment to review what you’ve done and see if the numbers make sense and are internally consistent.
We'll go over some examples now and, frankly, after these examples the only real additions and changes will be complications that make the math more difficult. The problems shouldn't get much harder conceptually, but the math can get significantly harder. Please, don't get lost in the math. If the numbers start to lose their meaning, make sure to come up for air and remember what you're doing and what you're trying to do.
Example 1
A simple example - 1 mesh.
Let’s start here! This is a simple circuit, so simple that we could solve this using tools we already know. But I want to start simple so that we can focus on the concepts and the steps. So, let’s do
Step 1: Let’s take stock of the circuit. It obviously only has one loop, and we’ve got a voltage source and two resistors. We’ve been given the value of the voltage source and both resistors, so all
we need is to find out the current around the loop and the voltage drops over the resistors. And as soon as we find one, we can quickly use Ohm’s Law to get the other. This is going to be easy.
Step 2: We already noticed in step 1 that there will only be one mesh, so let’s draw in our mesh current, give it a direction, and give it a name. We’ll go clockwise and call it i[1]. Now, I’m
usually sloppy and don’t distinguish between i[1] and I[1], but in this case, we will do a lowercase ‘i’. This will be important in later examples. And, we know that since we have one mesh, there
will only be one equation.
Step 3: Let’s create our equations based off of KVL. This is the first step that requires any math. So, with KVL, let’s figure out our equation.
There are two ways of looking this, which can cause untold confusion. I will explain the differences and, as long as you’re consistent in each equation (not even necessarily in each problem, but
sheesh, why would you confuse yourself unnecessarily?) then everything will be fine.
In the first option, as we go around the loop, we see that we increase by 5V across the voltage source and then drop voltage across R[1] and R[2], giving us our positive 5 volts and then our two
negatives. To me, this is more intuitive because you’re going up in voltage across the voltage source in the way we defined the current flow, and you drop voltage across the resistors as the current
flows through them. However, it is extremely common for people to learn it the second way.
In the second option, you just use the sign of the voltage on the side of your branch that the current enters into. With the voltage source, since we are going clockwise, the current sees the
negative sign first, so it is a minus. As the voltage is dropping from positive to negative over the resistors, the current sees the positive sign on the resistors first, so you add them. If this is
more intuitive for you - use it! Neither of these options are wrong, you see that you get the same equations (just multiply both sides by -1) but make sure you’re consistent with each equation.
Step 4: Since there are no unknowns, we can simply plug in the values for R[1] and R[2] and find out what i[1] is.
And now we can find the voltages across R[1] and R[2].
Step 5: Sanity check! Note that V[1] + V[2] basically equals 5V (rounding errors!) which means that the voltage that drops over the two resistors is the same as the voltage increase from the voltage
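Example 1's arithmetic can be checked in a few lines of Python. The single KVL equation is 5 - i1*R1 - i1*R2 = 0, so i1 = 5 / (R1 + R2); the resistor values below are assumed for illustration, since the originals appeared only in the circuit figure.

```python
# KVL around the single loop: 5 - i1*R1 - i1*R2 = 0  =>  i1 = V / (R1 + R2)
V = 5.0
R1, R2 = 1_000.0, 2_000.0  # ohms (assumed values, not from the text)

i1 = V / (R1 + R2)
V1 = i1 * R1  # voltage drop across R1 (Ohm's law)
V2 = i1 * R2  # voltage drop across R2

print(f"i1 = {i1 * 1000:.3f} mA")          # 1.667 mA
print(f"V1 + V2 = {round(V1 + V2, 9)} V")  # ~5 V: drops balance the source
```

The last line is exactly the step-5 sanity check: the resistor drops sum to the source voltage.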
Let’s make things a little more complicated.
Example 2
Step 1: What have we got here? It looks like we have two meshes that share a common resistor in the middle, R[3]. Again, we have all the values of voltage sources and resistors, so we should be able
to get actual values for the current and voltages through/across those resistors. Even without any values, we could do the analysis and show relationships but it is a bit more satisfying to me to
actually come up with a numerical answer. We do need to know how to treat R[3], but we’ll take care of that in step 3.
Step 2: Let’s identify the meshes. We’ll make both current loops flow clockwise and we’ll name the left hand one i[1] and the right hand one i[2]. Note that these are still lower case. And it matters
this time, because we also have the current through the resistor R[1], which is I[1] (note the capitalized “I”), the current through resistor R[2], which is I[2], and then through R[3], which is I
[3]. The capitalization is how to distinguish between the mesh currents (i[1] and i[2]) and the branch currents (I[1], I[2], and I[3]).
Step 3: Create the equations for the meshes. This will be quite straightforward but we need to know what to do about the voltage across R[3]. Let’s actually do the equation for i[1] and then talk
about it for a moment.
So, looking at that equation, you're probably wondering why there is i[2] in our equation for mesh current i[1]. Remember that each section is in reference to voltage. We increase by 10V, which is straightforward. We drop a voltage across R[1] that is equal to i[1]*R[1], still fairly straightforward. But the voltage drop across R[3] is the amount of current flowing downward as i[1] minus the amount of current flowing upward as i[2], multiplied by R[3].
With our clockwise direction, we have stated that i[2] is flowing up through R[3]. Obviously, in reality, current is only flowing one way, but we don’t know which way right now, and mathematically,
we have said that there is both current flowing down through R[3] as i[1] and flowing up through R[3] as i[2]. The trick here is that if we had defined i[2] in the opposite (counterclockwise)
direction, we would have to add the current i[2] to i[1] to figure out the voltage drop across R[3].
So with this, pause, take a second, make sure you understand why we created the equation we did for the first mesh current. Then see what you come up with for the second mesh current before checking
to see what we come up with. You’re going to have to control your eyeballs, though, because the answer is right below this text.
Is this what you got? Remember that, with our definition that the current is flowing clockwise, the voltage is dropping as we go from ground across R[3], and still dropping as we go across R[2],
before coming to the voltage source, which since we’ve defined this clockwise direction, gives us a negative 5 volts. This is where it’s incredibly important to understand what’s going on intuitively
- if you get bogged down into the math too much without knowing what’s going on, you’re going to be setting up and solving the wrong equations! Trust me - I speak from much painful experience.
So now we have two equations and two unknowns. We can either solve this with substitution or by getting ready to do some linear algebra. Let’s do substitution.
Insert the values for the resistors.
Simplify the first equation.
Simplify the second equation a bit before replacing i[1].
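The substitution above can be reproduced numerically. Only the two source voltages, 10 V and 5 V, come from the text; the resistor values below are assumed, since the originals were in the figure.

```python
# Mesh equations from the text, rearranged into standard form:
#   mesh 1:  10 - i1*R1 - (i1 - i2)*R3 = 0   ->  (R1 + R3)*i1 -        R3*i2 = 10
#   mesh 2:  -(i2 - i1)*R3 - i2*R2 - 5 = 0   ->         R3*i1 - (R2 + R3)*i2 =  5
R1, R2, R3 = 100.0, 200.0, 300.0  # ohms (assumed values, not from the text)

a11, a12, b1 = R1 + R3, -R3, 10.0
a21, a22, b2 = R3, -(R2 + R3), 5.0

# Solve by substitution, as in the text: from equation 1, i2 = (b1 - a11*i1)/a12;
# substitute into equation 2 and solve for i1 first.
i1 = (b2 - a22 * b1 / a12) / (a21 - a22 * a11 / a12)
i2 = (b1 - a11 * i1) / a12

print(round(i1, 5), round(i2, 5))  # 0.03182 0.00909 (amps)

# Sanity check: both KVL equations should balance (to rounding)
assert abs(10 - i1 * R1 - (i1 - i2) * R3) < 1e-9
assert abs(-(i2 - i1) * R3 - i2 * R2 - 5) < 1e-9
```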
Example 3 (Supermeshes)
With KCL, if we had a voltage source that wasn’t connected directly to reference ground, we would create a supernode and then, as part of the process, we would need to do a bit of KVL to finish the
analysis. With KVL, if we have a current source that is shared between two meshes, we need to treat it in a similar way. We get rid of the current source and anything that is connected in series with
it. We then treat the remainder as a single, larger supermesh.
Once we make that mesh, we create the equation to describe it. In this case, we get:
Now we have the equation for the supermesh but we have two unknowns and only one equation. So let’s put the current source back in with any elements that were in series with it and do KCL at the node
where they connect to the bigger circuit. Once it’s in place, we use KCL at that node to create a second equation.
Now we have two equations and two unknowns! Let’s put this in the format needed to do some linear algebra and see what we get.
So our two equations are:
Which we put into a linear equation solver to get:
As I'm prone to making math errors, I prefer the linear equation method as it is usually faster and less likely that I'll screw it up. With the supermesh, this isn't a common concern as, unless you're dealing with transistors or CMOS-level circuit design, current sources aren't very typical. However, it's a good tool to have in your toolbag in case it crops up, and it helps us better understand the relationship between the physical circuits and the mathematical representations.
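When there are only two mesh equations, a "linear equation solver" can be as small as Cramer's rule. This sketch is generic; the coefficients in the example call are placeholders, since the actual supermesh coefficients depend on the figure's component values.

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve  a11*x + a12*y = b1,  a21*x + a22*y = b2  by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("system is singular: the meshes are not independent")
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - b1 * a21) / det
    return x, y

# Placeholder coefficients standing in for the supermesh + KCL equations:
i1, i2 = solve_2x2(30.0, -10.0, 7.0, -10.0, 25.0, 2.0)
print(round(i1, 3), round(i2, 3))  # 0.3 0.2
```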
That is our brief overview of Kirchhoff’s Voltage Law and how that leads to mesh analysis. You’ll note that we sometimes used mesh analysis and KVL interchangeably. While technically not the same, it
is very common to hear them used like that. Depending on where you are and who you’ve studied with, you may find some other differences in approach, naming conventions, and even direction
assumptions. However, despite these superficial differences, all mesh analysis comes down to finding the voltage across the different elements in a mesh. As long as you’re consistent and have a good
understanding of what you’re doing, you should be able to get the answer you’re looking for.
| {"url":"https://www.circuitbread.com/tutorials/how-to-solve-complicated-circuits-with-kirchhoffs-voltage-law-kvl","timestamp":"2024-11-03T07:49:20Z","content_type":"text/html","content_length":"214158","record_id":"<urn:uuid:917042ad-1273-489d-885d-fa26b6a77a5a>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00096.warc.gz"}
Lionel Levine's hat challenge has t players, each with a (very large or infinite) stack of hats on their head, each hat independently colored at random black or white. The players are allowed to
coordinate before the random colors are chosen, but not after. Each player sees all hats except for those on her own head. They then proceed to simultaneously try and each pick a black hat from their
respective stacks. They are proclaimed successful only if they are all correct. Levine's conjecture is that the success probability tends to zero when the number of players grows. We prove that this
success probability is strictly decreasing in the number of players, and present some connections to problems in graph theory: relating the size of the largest independent set in a graph and in a
random induced subgraph of it, and bounding the size of a set of vertices intersecting every maximum-size independent set in a graph.
All Science Journal Classification (ASJC) codes
• combinatorics
• hats
• independent sets
• random graphs
Dive into the research topics of 'THE SUCCESS PROBABILITY IN LEVINE'S HAT PROBLEM, AND INDEPENDENT SETS IN GRAPHS'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/the-success-probability-in-levines-hat-problem-and-independent-se","timestamp":"2024-11-11T18:34:06Z","content_type":"text/html","content_length":"50280","record_id":"<urn:uuid:a35d3d2b-a32b-4ccf-975a-549527e74095>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00295.warc.gz"} |
Glenn Shafer - Probabilistic Expert Systems
Review by Slawomir T. Wierzchon, Polish Academy of Sciences, in Control Engineering Practice, March 1997, pp. 442-443.
Glenn Shafer is known to the AI community as the author of "A Mathematical Theory of Evidence" - a monograph published exactly twenty years ago, and devoted to a generalization of the Bayesian theory
of subjective probability judgement. Within this new and not uncontroversial theory, subjective judgements are expressed in terms of so-called "belief functions", i.e. using not necessarily additive
but monotone "probabilities". However, there is one serious problem: how to combine such functions effectively, particularly when they are defined on different domains. An algorithm for doing this
task, called the propagation algorithm, was worked out by G. Shafer and his colleagues. During further studies the algorithm proved to be a useful tool in solving discrete optimization problems,
decision-making problems and reasoning in predicate calculus. Since the probability measure is a special kind of belief function, the propagation algorithm supports the task of probabilistic
reasoning, which relies upon computing the conditional probability of a variable, given a set of observations. It is remarkable that, although almost trivial from a textbook point of view, this
problem is extremely difficult from the computational standpoint.
In 1978 two influential AI researchers, P. Szolovits and S.G. Pauker, argued in their paper "Categorical and probabilistic reasoning" (published in Artificial Intelligence, 11, pp. 115-144) that,
because of their enormous data and space requirements, the pure probabilistic models may be used only in small problems. For instance, if a model engages only 56 binary variables, then 2^56 values
must be computed to specify the joint probability distribution. If the computer can calculate the terms of the probability distribution at one million values per second, then it will take 2,283 years
to come up with the whole probability distribution. Hopeless! But no-one knew that an effective method for performing this task in a reasonable time already existed. This had been suggested in 1970
by geneticists analyzing pedigree data. Unfortunately, it was published in Clinical Genetics, a journal unknown to the AI community, which had to wait until 1983 when J. Pearl and J. Kim developed the
first effective algorithm for computing the posterior probabilities in problems represented by tree structures. In the late eighties, algorithms oriented toward probability propagation in general
graphical structures were developed by S. Lauritzen & D. Spiegelhalter, G. Shafer & P. Shenoy, and F.V. Jensen.
The three approaches referred to above are discussed in depth in Shafer's new book. When implementing these algorithms it is better to think of appropriate probabilities as tables (vectors) of
numbers satisfying certain conditions. This point of view is consequently used in the whole book, and the author starts his presentation by tracing the main properties of probability tables that
represent conditional and unconditional probabilities.
Under certain conditions imposed on the set of conditionals, their products represent a joint probability distribution, and a sequence of conditionals satisfying these requirements is said to be a
"construction sequence". So the author studies the main properties of such sequences, and he mentions their various graphical representations including belief (or Bayesian) networks, bubble graphs
(this is a new notion introduced by him), chain graphs (studied by N. Wermuth and S.L. Lauritzen) and valuation networks (introduced by P. Shenoy). All these representations are discussed in the context of a practical example concerning the external audit of an organization's financial statement, which allows a reader to see the advantages and disadvantages of the corresponding representations.
Knowing a construction sequence, the problem of probabilistic reasoning can be briefly described as the process of the elimination, one by one, of "unnecessary" (at the current moment) variables.
This is a rather old idea, used in 1972 by U. Bertele and F. Brioschi to solve so-called "nonserial dynamic programming problems". Shafer and Shenoy renewed this approach, and gave it a new appeal.
Instead of working with graphs (as its originators have done) they use so-called "join trees", well known to those working on the mathematical foundations of database systems. A particularly good
part of the book is the presentation of how the join trees occur naturally during the elimination of variables from a construction sequence. The rest of the book is devoted to the various implementations of the propagation algorithm in join trees.
In summary, this is a clever and concise book guaranteeing a quick and detailed introduction to the domain of probability propagation. Exercises added at the end of each chapter extend its content, and allow a reader a deeper understanding of the core ideas. The short comments added to the bibliographical material included at the end of the book open a perspective on other, newer approaches to this problem (especially those based on Markov-chain Monte Carlo simulation, mentioned in the closing section). Lastly, the book contains the electronic addresses of the main web sites where it is possible to find further information and computer software.
Bayesian networks, a type of probabilistic system, are currently used in domains such as diagnosis, planning and control, dynamic systems and time series, data analysis or computer vision. Hence, the
material presented in the book may be of use, not only for students, but also for research workers from these domains. | {"url":"http://www.glennshafer.com/books/pes.html","timestamp":"2024-11-09T23:12:14Z","content_type":"text/html","content_length":"18488","record_id":"<urn:uuid:8f3e5e7c-fd6f-4f1f-984e-a65eb6e41f2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00029.warc.gz"} |
Random Walk Theory Gotchas (Hidden Dangers)
Discover the Surprising Hidden Dangers of Random Walk Theory – Don’t Fall for These Gotchas!
What are Statistical Arbitrage Strategies and How Can They Impact Random Walk Theory?
Step Action Novel Insight Risk Factors
Statistical arbitrage strategies involve using quantitative analysis and These strategies can impact the Random Walk Theory by challenging the The risk of overfitting the data and
1 algorithmic trading to identify and exploit market inefficiencies. idea that stock prices follow a random walk and instead suggest that creating a strategy that only works in
there are patterns and relationships that can be exploited for profit. the past but not in the future.
Mean reversion is a statistical arbitrage strategy that involves buying stocks This strategy challenges the idea that stock prices follow a random walk The risk of the mean not reverting,
2 that have underperformed and selling stocks that have overperformed, with the by suggesting that there are mean-reverting tendencies in the market. leading to losses.
expectation that they will eventually revert to their mean.
Correlation trading is a statistical arbitrage strategy that involves identifying This strategy challenges the idea that stock prices follow a random walk The risk of the correlation breaking
3 stocks that have a high correlation and taking advantage of any divergences in by suggesting that there are relationships between stocks that can be down, leading to losses.
their prices. exploited for profit.
Pair trading is a statistical arbitrage strategy that involves buying one stock This strategy challenges the idea that stock prices follow a random walk The risk of the correlation breaking
4 and shorting another stock that is highly correlated, with the expectation that by suggesting that there are relationships between stocks that can be down, leading to losses.
any divergences in their prices will eventually converge. exploited for profit.
Convergence trades are a statistical arbitrage strategy that involves buying a This strategy challenges the idea that stock prices follow a random walk The risk of the mispricings not
5 stock that is undervalued and shorting a stock that is overvalued, with the by suggesting that there are mispricings in the market that can be converging, leading to losses.
expectation that their prices will eventually converge. exploited for profit.
Divergence trades are a statistical arbitrage strategy that involves buying a This strategy challenges the idea that stock prices follow a random walk The risk of the mispricings not
6 stock that is overvalued and shorting a stock that is undervalued, with the by suggesting that there are mispricings in the market that can be diverging, leading to losses.
expectation that their prices will eventually diverge. exploited for profit.
| Step | Action | Novel Insight | Risk Factors |
|---|---|---|---|
| 7 | Volatility arbitrage is a statistical arbitrage strategy that involves taking advantage of differences between implied and realized volatility, with the expectation that the market is overestimating or underestimating future volatility. | This strategy challenges the idea that stock prices follow a random walk by suggesting that there are patterns in volatility that can be exploited for profit. | The risk of the market not behaving as expected, leading to losses. |
| 8 | Risk management strategies are essential when using statistical arbitrage strategies to manage the risk of losses. | These strategies can help mitigate the risk of losses and ensure that the strategy is profitable over the long term. | The risk of the risk management strategy not working as expected, leading to losses. |
| 9 | High-frequency trading (HFT) is a type of algorithmic trading that involves using sophisticated algorithms to execute trades at high speeds. | HFT can be used to implement statistical arbitrage strategies and take advantage of market inefficiencies. | The risk of technical glitches or errors leading to losses. |
| 10 | Liquidity provision is a strategy that involves providing liquidity to the market by buying and selling stocks, with the expectation of profiting from the bid-ask spread. | This strategy can be used to implement statistical arbitrage strategies and take advantage of market inefficiencies. | The risk of the market not behaving as expected, leading to losses. |
| 11 | Alpha generation is the process of generating excess returns above a benchmark. | Statistical arbitrage strategies can be used to generate alpha by exploiting market inefficiencies. | The risk of the strategy not generating alpha, leading to losses. |
| 12 | Trading signals are indicators that suggest when to buy or sell a stock. | Statistical arbitrage strategies rely on trading signals to identify market inefficiencies and execute trades. | The risk of the trading signals not being accurate, leading to losses. |
| 13 | Market neutral is a strategy that involves taking long and short positions in equal amounts, with the expectation of profiting from the difference in returns. | This strategy can be used to implement statistical arbitrage strategies and take advantage of market inefficiencies. | The risk of the market not behaving as expected, leading to losses. |
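The volatility arbitrage strategy described above turns on comparing realized volatility with implied volatility. The sketch below shows the standard way realized volatility is computed from daily returns; the simulated return series and the implied-volatility quote are illustrative assumptions, not market data.

```python
import math
import random
import statistics

random.seed(42)

# Simulated daily log returns for one trading year (illustrative data only).
daily_returns = [random.gauss(0.0, 0.01) for _ in range(252)]

# Realized volatility: annualized standard deviation of daily returns.
realized_vol = statistics.stdev(daily_returns) * math.sqrt(252)

# A volatility arbitrageur compares this with the implied volatility quoted in
# the options market; the quote below is a hypothetical number.
implied_vol = 0.20
spread = implied_vol - realized_vol  # positive: options may be overpricing volatility
print(f"realized {realized_vol:.2%}  implied {implied_vol:.2%}  spread {spread:.2%}")
```

A positive spread would suggest selling volatility and a negative one buying it, but as the table's risk column notes, the gap can widen further before it closes.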
The Mean Reversion Bias: A Hidden Danger in Random Walk Theory
| Step | Action | Novel Insight | Risk Factors |
|---|---|---|---|
| 1 | Define Mean Reversion Bias | Mean Reversion Bias is the tendency for stocks that have performed well or poorly in the past to return to their average performance over time. | Investors may assume that a stock will continue to perform well or poorly based on past performance, leading to overconfidence bias and misinterpretation of data. |
| 2 | Explain how Mean Reversion Bias affects Random Walk Theory | Random Walk Theory assumes that stock prices move randomly and cannot be predicted. Mean Reversion Bias, however, suggests that stocks may not move randomly but rather tend to revert to their mean performance over time. This means that investors may be able to predict future stock performance based on past performance, contradicting the assumptions of Random Walk Theory. | Trend extrapolation fallacy may occur when investors assume that a stock will continue to perform well or poorly based on past performance, leading to a false sense of security and market anomalies. |
| 3 | Discuss the limitations of statistical analysis in predicting Mean Reversion Bias | Statistical analysis can be limited by the amount and quality of data available, as well as by the assumptions made in the analysis. Mean Reversion Bias may not be present in all stocks or markets, and may be affected by external factors such as economic conditions or company news. | Confirmation bias may occur when investors only look for data that supports their assumptions about Mean Reversion Bias, leading to investment strategy pitfalls. |
| 4 | Explain the importance of risk management techniques in dealing with Mean Reversion Bias | Risk management techniques such as diversification and stop-loss orders can help investors manage the risks associated with Mean Reversion Bias. Diversification spreads investments across different stocks and markets, reducing the impact of any one stock's performance. Stop-loss orders can limit losses if a stock's performance does not revert to its mean as expected. | The market efficiency hypothesis suggests that all available information is already reflected in stock prices, making it difficult to consistently outperform the market. Risk management techniques can help investors manage their risk without relying on the assumption that they can consistently beat the market. |
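A common way to operationalize a mean-reversion view, subject to all the caveats in steps 3 and 4, is a z-score signal. This sketch simulates a toy mean-reverting price series (an Ornstein-Uhlenbeck-style update pulled toward 100) and flags the latest price; the series, the pull strength, and the +/-1 thresholds are illustrative assumptions, not a recommended trading rule.

```python
import random
import statistics

random.seed(0)

# Illustrative mean-reverting price series: each step is pulled 10% of the way
# back toward a long-run level of 100, plus Gaussian noise.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + 0.1 * (100.0 - prices[-1]) + random.gauss(0.0, 1.0))

mean = statistics.mean(prices)
sd = statistics.stdev(prices)

# Z-score of the latest price: a bare-bones mean-reversion signal with
# arbitrary thresholds (hypothetical choices for illustration).
z = (prices[-1] - mean) / sd
signal = "buy" if z < -1 else "sell" if z > 1 else "hold"
print(f"z = {z:+.2f} -> {signal}")
```

Note that a genuine random walk would produce no such pull toward a mean, which is exactly why this signal fails, sometimes expensively, when the mean-reversion assumption is wrong.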
Overfitting Data Sets: How it Can Mislead Your Understanding of Random Walk Theory
Step Action Novel Insight Risk Factors
1 Collect Overfitting occurs when a model is too complex and fits the data too closely, resulting in inaccurate predictions Data manipulation, sample size issues, selection bias
data when applied to new data.
2 Analyze Curve fitting bias is a common form of overfitting where a model is too complex and fits the data too closely, False correlations, overly complex models, extrapolation errors
data resulting in inaccurate predictions when applied to new data.
3 Build Spurious relationships can occur when a model is too complex and fits the data too closely, resulting in inaccurate Cherry-picking data, confirmation bias, data dredging
model predictions when applied to new data.
4 Test model Model overconfidence can occur when a model is too complex and fits the data too closely, resulting in inaccurate Inaccurate predictions, selection bias, P-hacking
predictions when applied to new data.
5 Validate Overfitting data sets can mislead your understanding of random walk theory by making it seem like there is a pattern Risk of relying on inaccurate predictions, risk of making
model or trend when there is not. decisions based on false correlations
In summary, overfitting data sets can lead to inaccurate predictions and false correlations, which can mislead your understanding of random walk theory. To avoid overfitting, it is important to be
aware of curve fitting bias, spurious relationships, and model overconfidence. Additionally, it is important to validate your model and be cautious of data manipulation, sample size issues, selection
bias, cherry-picking data, confirmation bias, data dredging, and P-hacking. By managing these risks, you can improve the accuracy of your predictions and better understand random walk theory.
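The in-sample versus out-of-sample gap can be demonstrated directly. The sketch below commits deliberate data dredging: it searches many unrelated random series for the one that best "fits" a target on the first half of the data, then re-measures that fit on the second half. All data is simulated, and the typical outcome (a strong in-sample correlation that collapses out of sample) depends on the random seed.

```python
import random

def pearson(xs, ys):
    # Plain Pearson correlation coefficient (no external libraries).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
target = [random.gauss(0.0, 1.0) for _ in range(100)]

# Deliberate data dredging: generate 200 unrelated random series and keep the
# one that correlates best with the target on the first 50 points (in-sample).
candidates = [[random.gauss(0.0, 1.0) for _ in range(100)] for _ in range(200)]
best = max(candidates, key=lambda s: abs(pearson(s[:50], target[:50])))

in_sample = pearson(best[:50], target[:50])
out_of_sample = pearson(best[50:], target[50:])
print(f"in-sample r = {in_sample:.2f}, out-of-sample r = {out_of_sample:.2f}")
```

The "best" series was selected precisely because it looked predictive in sample; on held-out data it is just another unrelated random series, which is the overfitting trap in miniature.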
Survivorship Bias Effect: Why It Matters in Evaluating the Validity of Random Walk Theory
Black Swan Events and Their Implications for Random Walk Theory
| Step | Action | Novel Insight | Risk Factors |
|---|---|---|---|
| 1 | Define Black Swan Events | Black Swan Events are low-probability, unforeseeable, rare and unexpected incidents that have a significant impact on the market. | The risk of Black Swan Events is often underestimated, leading to inadequate risk management strategies. |
| 2 | Explain the implications for Random Walk Theory | Black Swan Events challenge the assumptions of Random Walk Theory, which assumes that market movements are random and follow a normal distribution. Black Swan Events are non-linear phenomena that can cause catastrophic disruptions, leading to systemic risk and market shocks. | Random Walk Theory does not account for tail risk events and fat-tailed distributions, which are more common than assumed. |
| 3 | Discuss the Black Swan Theory | The Black Swan Theory, developed by Nassim Nicholas Taleb, argues that Black Swan Events are more common than assumed and have a significant impact on the market. The theory emphasizes the importance of managing risk and preparing for unexpected events. | The Black Swan Theory is criticized for being too pessimistic and for underestimating the role of human agency in shaping events. |
| 4 | Explain the Uncertainty Principle | The Uncertainty Principle, formulated by Werner Heisenberg in quantum mechanics, is often invoked by analogy: the future cannot be predicted with certainty, and unexpected market events can occur at any time. | The analogy underscores that no model, including ones that extrapolate from past data, can reliably predict future market outcomes. |
| 5 | Discuss risk management | Risk management strategies should focus on managing tail risk and preparing for Black Swan Events. This includes diversification, hedging, and stress testing. | Risk management strategies are not foolproof and can be costly to implement. |
| 6 | Explain volatility spikes | Volatility spikes are sudden increases in market volatility that can occur during Black Swan Events. These spikes can lead to significant losses for investors who are not prepared. | Volatility spikes are difficult to predict and can occur at any time; investors should prepare for them by implementing risk management strategies. |
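The stress testing mentioned in the table can be illustrated with a toy calculation of portfolio value under a sudden crash, with and without a simple tail hedge. Every number here (shock size, hedge strike, notional, premium) is a hypothetical assumption chosen for illustration, not a calibrated value.

```python
# Toy stress test: portfolio value after a hypothetical one-day "black swan"
# crash, with and without an out-of-the-money put as a tail hedge.
portfolio = 1_000_000.0
shock = -0.30                # assumed crash: the market falls 30% in a day
hedge_strike_drop = -0.10    # put protection kicks in beyond a 10% fall
hedge_notional = 1_000_000.0
hedge_cost = 10_000.0        # assumed premium paid for the protection

unhedged = portfolio * (1 + shock)
put_payoff = hedge_notional * max(0.0, hedge_strike_drop - shock)
hedged = portfolio * (1 + shock) + put_payoff - hedge_cost

print(f"unhedged: {unhedged:,.0f}  hedged: {hedged:,.0f}")
```

The hedge cost is a drag in normal times, which is the table's point that tail-risk protection is "not foolproof and can be costly"; the stress scenario is where it pays.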
Fat Tail Risks and Their Relationship to the Assumptions Underlying Random Walk Theory
| Step | Action | Novel Insight | Risk Factors |
|---|---|---|---|
| 1 | Understand the assumptions of Random Walk Theory | Random Walk Theory assumes that stock prices follow a normal distribution and that past prices do not affect future prices. | Non-normal distributions, heavy-tailed distributions, outlier events, black swan events |
| 2 | Recognize the limitations of Random Walk Theory | Fat tail risks, or the occurrence of extreme events, are not accounted for in Random Walk Theory. | Tail risk hedging, risk management strategies |
| 3 | Understand the impact of non-normal distributions on fat tail risks | Non-normal distributions, such as heavy-tailed distributions, have a higher probability of extreme events occurring. | Volatility clustering, market inefficiencies |
| 4 | Recognize the importance of risk management strategies | Tail risk hedging can help mitigate the impact of fat tail risks on investment portfolios. | Long-term memory effects, autocorrelation in returns |
| 5 | Consider the impact of skewness and kurtosis on the return distribution | Leptokurtosis of returns, meaning a higher peak and fatter tails, can lead to more extreme events. Skewness of the return distribution can also affect the likelihood of extreme events. | Kurtosis of the return distribution |
| 6 | Explore alternative models to Random Walk Theory | Stochastic volatility models can better account for fat tail risks and non-normal distributions. | None |
Overall, it is important to recognize the limitations of Random Walk Theory and to implement risk management strategies, such as tail risk hedging, to mitigate the impact of fat tail risks on
investment portfolios. Additionally, alternative models, such as stochastic volatility models, can better account for non-normal distributions and extreme events.
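The gap between normal and fat-tailed return models can be made concrete by simulation. The sketch below compares how often 4-standard-deviation moves occur under a normal model versus a Student-t model with 3 degrees of freedom (rescaled to unit variance so the comparison is fair); the distributions, threshold, and sample size are illustrative choices.

```python
import math
import random

random.seed(7)

def student_t(df):
    # Sample from a Student-t: standard normal over sqrt(chi-square / df).
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

N = 100_000
df = 3
scale = math.sqrt(df / (df - 2))  # std of t(df); divide out to get unit variance

normal_moves = [random.gauss(0.0, 1.0) for _ in range(N)]
fat_moves = [student_t(df) / scale for _ in range(N)]

# Empirical frequency of "4-sigma" moves under each model.
normal_tail = sum(abs(x) > 4 for x in normal_moves) / N
fat_tail = sum(abs(x) > 4 for x in fat_moves) / N
print(f"P(|move| > 4 sd): normal ~ {normal_tail:.5f}, fat-tailed ~ {fat_tail:.5f}")
```

Under the normal model a 4-sigma day is a handful-per-100,000 event; under the fat-tailed model it is orders of magnitude more common, which is exactly why a risk model that assumes normality understates extreme losses.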
Illiquid Markets Risk: What It Means for Applying Random Walk Theory
Step Action Novel Insight Risk Factors
1 Define illiquid markets Illiquid markets risk refers to the difficulty of selling assets due to low liquidity, which can lead to reduced market Thinly traded securities, market depth issues
risk efficiency, increased transaction costs, and limited arbitrage opportunities. , liquidity crunches impact
Explain how illiquid Illiquid markets can make it difficult to execute trades and can lead to price volatility concerns, high bid-ask spreads, Inability to execute trades, market
2 markets impact random and lack of price transparency. This can make it challenging to apply random walk theory, which assumes that asset prices manipulation potential, challenges for
walk theory move randomly and cannot be predicted. portfolio diversification
Discuss the importance of Managing illiquid markets risk is crucial for investors who want to minimize the impact of market inefficiencies and Reduced market efficiency, increased
3 managing illiquid markets reduce the potential for losses due to price volatility. This can be done through diversification, careful selection of transaction costs, limited arbitrage
risk assets, and monitoring market conditions. opportunities
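The transaction-cost point can be quantified directly from the bid-ask spread. The quotes below are hypothetical; the function simply measures the round-trip cost of buying at the ask and immediately selling at the bid, which is the minimum toll an illiquid market charges regardless of the price path.

```python
# Round-trip transaction cost implied by the bid-ask spread, for a liquid
# and an illiquid market. The quotes are hypothetical examples.
def round_trip_cost(bid, ask):
    """Fraction of the mid price lost by buying at the ask and selling at the bid."""
    mid = (bid + ask) / 2
    return (ask - bid) / mid

liquid_cost = round_trip_cost(bid=99.99, ask=100.01)    # tight spread
illiquid_cost = round_trip_cost(bid=97.00, ask=103.00)  # wide spread
print(f"liquid: {liquid_cost:.4%}  illiquid: {illiquid_cost:.2%}")
```

A strategy whose expected edge is smaller than this round-trip cost loses money on average even if its price forecasts are correct, which is one reason random-walk-based arbitrage arguments weaken in thin markets.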
Correlation Assumptions Error: A Common Pitfall When Using Random Walk Models
| Step | Action | Novel Insight | Risk Factors |
|---|---|---|---|
| 1 | Understand the limitations of random walk theory. | Random walk theory assumes that stock prices move randomly and cannot be predicted. However, this assumption is not always accurate, and there are market inefficiencies and anomalies that can be exploited. | Overreliance on past data, unforeseen market changes, fluctuating stock prices, market volatility. |
| 2 | Be aware of the correlation assumptions error. | Correlation assumptions error is a common pitfall when using random walk models. It occurs when the model assumes that two variables are correlated when they are not, or vice versa. This can lead to misleading results and inaccurate financial predictions. | Misleading results from correlations, inaccurate financial predictions. |
| 3 | Use risk management strategies. | To mitigate the risk of correlation assumptions error, it is important to use risk management strategies such as portfolio diversification techniques. This helps reduce the impact of any one stock or asset on the overall portfolio. | Risk management strategies, portfolio diversification techniques. |
| 4 | Be aware of financial forecasting challenges. | Financial forecasting is a complex process that involves many variables and assumptions. It is important to be aware of the challenges involved, such as the limitations of random walk theory and the risk of correlation assumptions error. | Limitations of random walk theory, financial forecasting challenges. |
| 5 | Incorporate market inefficiencies and anomalies into the investment decision-making process. | By incorporating market inefficiencies and anomalies into the investment decision-making process, investors can take advantage of opportunities that may not be captured by random walk models. This can improve the accuracy of financial predictions and reduce the risk of correlation assumptions error. | Market inefficiencies and anomalies, investment decision-making process. |
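A classic illustration of the correlation assumptions error: two completely independent random walks frequently look highly correlated when you correlate their price *levels*, even though their *increments* are unrelated. The sketch below simulates this; results vary with the seed, so the comments describe typical behavior rather than guaranteed output.

```python
import random

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def random_walk(n, seed):
    # Cumulative sum of independent Gaussian steps.
    rng = random.Random(seed)
    walk = [0.0]
    for _ in range(n - 1):
        walk.append(walk[-1] + rng.gauss(0.0, 1.0))
    return walk

a = random_walk(500, seed=1)
b = random_walk(500, seed=2)  # generated independently of `a`

# Correlation of price *levels* is often spuriously large for independent
# walks, while correlation of their *increments* (returns) stays near zero.
diff = lambda w: [y - x for x, y in zip(w, w[1:])]
level_corr = pearson(a, b)
return_corr = pearson(diff(a), diff(b))
print(f"levels r = {level_corr:.2f}, increments r = {return_corr:.2f}")
```

This is why practitioners compute correlations on returns rather than prices: correlating non-stationary levels is exactly the kind of assumption error the table warns about.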
Behavioral Biases Impact on Our Interpretation of Results from a Random Walk Model
| Step | Action | Novel Insight | Risk Factors |
|---|---|---|---|
| 1 | Understand the random walk model | The random walk model assumes that stock prices move randomly and cannot be predicted. | Misunderstanding the model can lead to incorrect interpretations of results. |
| 2 | Identify behavioral biases | Behavioral biases are psychological tendencies that can affect decision-making. Common biases include overconfidence bias, confirmation bias, hindsight bias, anchoring bias, the availability heuristic, the gambler's fallacy, herding behavior, loss aversion, regret avoidance, self-attribution bias, the sunk cost fallacy, the illusion of control, and the framing effect. | Failure to recognize and account for biases can lead to inaccurate interpretations of results. |
| 3 | Recognize the impact of biases on interpretation | Biases can lead to overconfidence in predictions, selective interpretation of data, and a tendency to ignore evidence that contradicts preconceived notions. | Ignoring the impact of biases can lead to poor decision-making and increased risk. |
| 4 | Manage biases through quantitative risk management | Quantitative risk management involves using data and statistical analysis to identify and manage risks. By using data-driven approaches, biases can be minimized and more accurate interpretations of results can be made. | Failure to use quantitative risk management can lead to increased risk and poor decision-making. |
Overall, it is important to recognize the impact of behavioral biases on our interpretation of results from a random walk model. By understanding the model, identifying biases, and using quantitative
risk management, we can make more accurate decisions and manage risk effectively.
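One concrete form of the quantitative risk management described above is a mechanical position-sizing rule, which replaces discretionary (and therefore bias-prone) sizing with a formula. The risk budget, return series, and sizing rule below are illustrative assumptions, not a recommended policy.

```python
import statistics

# Volatility-targeted position sizing: deploy enough capital that the
# position's estimated daily risk equals a fixed budget. A hypothetical rule
# that removes gut-feel (and its biases) from the sizing decision.
def position_size(daily_returns, capital, risk_budget=0.01):
    """Capital to deploy so estimated daily risk ~= risk_budget of capital."""
    vol = statistics.stdev(daily_returns)
    return capital * risk_budget / vol

# Illustrative return histories for a calm and a turbulent market.
calm = [0.001, -0.002, 0.0015, -0.001, 0.0005, 0.002, -0.0015]
stormy = [0.03, -0.04, 0.02, -0.05, 0.01, 0.04, -0.03]

print(position_size(calm, 100_000))    # larger position when volatility is low
print(position_size(stormy, 100_000))  # smaller position when volatility is high
```

Because the size is a deterministic function of measured volatility, the rule cannot be nudged by overconfidence, loss aversion, or recent wins in the way a discretionary sizing decision can.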
Common Mistakes And Misconceptions
| Mistake/Misconception | Correct Viewpoint |
|---|---|
| Random Walk Theory is always true. | The Random Walk Theory is a useful model, but it does not hold in all situations. It assumes that stock prices are unpredictable and follow a random path, which may not be the case in reality. Therefore, it should be used with caution and combined with other models to manage risk effectively. |
| Past performance can predict future returns accurately. | While past performance can provide some insight into future returns, it cannot guarantee them. Market conditions change over time, and what worked in the past may not work in the future due to factors such as economic changes or shifts in investor sentiment. Investors should therefore use multiple sources of information when making investment decisions rather than relying solely on historical data. |
| Technical analysis can predict market movements accurately. | Technical analysis uses charts and patterns to identify trends and potential price movements based on historical data; however, these methods do not always produce accurate predictions, since they rely on assumptions about human behavior that may not hold up over time or under different market conditions. |
| The Efficient Market Hypothesis (EMH) implies that markets are always efficient. | EMH suggests that financial markets incorporate all available information into asset prices quickly and efficiently; however, this does not mean that markets are always perfectly efficient or free from anomalies or inefficiencies at any given moment, since new information constantly emerges, causing fluctuations in asset prices. |
| Diversification eliminates all risks associated with investing. | Diversification helps reduce portfolio risk by spreading investments across different asset classes; however, diversification cannot eliminate all investment risk, because systematic risks remain inherent in each asset class regardless of how diversified your portfolio is. |