content
stringlengths
86
994k
meta
stringlengths
288
619
ball mill volume loading WEBOct 30, 2022 · Therefore, in BMATC, two values can be calculated for mill fill percentage: a) using the volume of planned pulp plus the dead space; b) using only planned pulp volume. Initial loading of the ball mill starts with "Mill Fill 1" (a), but once the dead space is filled with pulp material, it will continue with "Mill Fill 2" (b).
{"url":"https://nature-nett.fr/2022_Jun_26-7822.html","timestamp":"2024-11-11T01:16:34Z","content_type":"text/html","content_length":"36703","record_id":"<urn:uuid:8999b7b9-b1fb-4753-bcf1-4bbf016e47c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00862.warc.gz"}
Numerical Riddle: Find the Missing Number in the Table Let’s have some fun with numbers, with this riddle. In the given table, there are some numbers which are related to each other with some logic. Can you find the connection between the numbers and then find the missing number in the given table? Find the missing number; 75 26 ? So were you able to solve the riddle? Leave your answers in the comment section below. You can check if your answer is correct by clicking on show answer below. If you get the right answer, please do share the riddle with your friends and family on WhatsApp, Facebook and other social networking sites. 2 thoughts on “Numerical Riddle: Find the Missing Number in the Table” 1. 150 2. 148 Leave a Comment
{"url":"https://ec2-65-0-158-107.ap-south-1.compute.amazonaws.com/2018/04/numerical-riddle-find-the-missing-number-in-the-table/","timestamp":"2024-11-08T05:38:48Z","content_type":"text/html","content_length":"97502","record_id":"<urn:uuid:32631ec0-8eb5-4fb1-908d-e4b9b0d9318d>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00885.warc.gz"}
A Frank Discussion About the Propagation of Measurement Uncertainty Posted on 7 August 2023 by Bob Loblaw, jg Let’s face it. The claim of uncertainty is a common argument against taking action on any prediction of future harm. “How do you know, for sure?”. “That might not happen.” “It’s possible that it will be harmless”, etc. It takes on many forms, and it can be hard to argue against. Usually, it takes a lot more time, space, and effort to debunk the claims than it takes to make them in the first It even has a technical term: FUD. Acronym of Fear, uncertainty, and doubt, a marketing strategy involving the spread of worrisome information or rumors about a product. During the times when the tobacco industry was discounting the risks of smoking and cancer, the phrase “doubt is our product” was purportedly part of their strategy. And within the climate change discussions, certain individuals have literally made careers out of waving “the uncertainty monster”. It has been used to argue that models are unreliable. It has been used to argue that measurements of global temperature, sea ice, etc. are unreliable. As long as you can spread enough doubt about the scientific results in the right places, you can delay action on climate concerns. At lot of this happens in the blogosphere, or in think tank reports, or lobbying efforts. Sometimes, it creeps into the scientific literature. Proper analysis of uncertainty is done as a part of any scientific endeavour, but sometimes people with a contrarian agenda manage to fool themselves with a poorly-thought-out or misapplied “uncertainty analysis” that can look “sciencey”, but is full of Any good scientist considers the reliability of the data they are using before drawing conclusions – especially when those conclusions appear to contradict the existing science. You do need to be wary of confirmation bias, though – the natural tendency to accept conclusions you like. Global temperature trends are analyzed by several international groups, and the data sets they produce are very similar. The scientists involved consider uncertainty, and are confident in their results. You can examine these data sets with Skeptical Science’s Trend Calculator. So, when someone is concerned about these global temperature data sets, what is to be done? Physicist Richard Muller was skeptical, so starting in 2010 he led a study to independently assess the available data. In the end, the Berkeley Earth Surface Temperature (BEST) record they produced confirmed that analyses by previous groups had largely things right. A peer-reviewed paper describing the BEST analysis is available here. At their web site, you can download the BEST results, including their uncertainty estimates. BEST took a serious look at uncertainty - in the paper linked above, the word “uncertainty” (or “uncertainties”) appears 73 times! The following figure shows the BEST values downloaded a few months ago, covering from 1850 to late 2022. The uncertainties are noticeably larger in the 1800s, when instrumentation and spatial coverage are much more limited, but the overall trends in global temperature are easily seen and far exceed the uncertainties. For recent years, the BEST uncertainty values are typically less than 0.05°C for monthly or annual anomalies. Uncertainty does not look like a worrisome issue. Muller took a lot of flak for arguing that the BEST project was needed – but he and his team deserve a certain amount of credit for doing things well and admitting that previous groups had also done things well. He was skeptical, he did an analysis, and he changed his mind based on the results. Science as science should be done. So, when a new paper comes along that claims that the entire climate science community has been doing it all wrong, and claims that the uncertainty in global temperature records is so large that “the 20th century surface air-temperature anomaly... does not convey any knowledge of rate or magnitude of change in the thermal state of the troposphere”, you can bet two things: • The scientific community will be pretty skeptical. • The contrarian community that wants to believe it will most likely accept it without critical review. We’re here today to look at a recent example of such a paper: one that claims that the global temperature measurements that show rapid recent warming have so much uncertainty in them as to be completely useless. The paper is written by an individual named Patrick Frank, and appeared recently in a journal named Sensors. The title is “LiG Metrology, Correlated Error, and the Integrity of the Global Surface Air-Temperature Record”. (LiG is an acronym for “liquid in glass” – your basic old-style thermometer.) Sensors 2023, 23(13), 5976 Patrick Frank has beaten the uncertainty drum previously. In 2019 he published a couple of versions of a similar paper. (Note: I have not read either of the earlier papers.) Apparently, he had been trying to get something published for many years, and had been rejected by 13 different journals. I won't link to either of these earlier papers here,, but a few blog posts exist that point out the many serious errors in his analysis – often posted long before those earlier papers were published. If you start at this post from And Then There’s Physics, titled “Propagation of Nonsense”, you can find a chain to several earlier posts that describe the numerous errors in Patrick Frank’s earlier work. Today, we’re only going to look at his most recent paper, though. But before we begin that, let’s review a few basics about the propagation of uncertainty – how uncertainty in measurements needs to be examined to see how it affects calculations based on those measurements. It will be boring and tedious, since we’ll have more equations than pictures, but we need to do this to see the elementary errors that Patrick Frank makes. If all the following section looks familiar, just jump to the next section where we point out the problems in Patrick Frank’s paper. Spoiler alert: Patrick Frank can’t do basic first-year statistics. Some Elementary Statistics Background Every scientist knows that every measurement has error in it. Error is the difference between the measurement that was made, and the true value of what we wanted to measure. So how do we know what the error is? Well, we can’t – because when we try to measure the true value, our measurement has errors! That does not mean that we cannot assess a range over which we think the true measurement lies, though, and there are standard ways of approaching this. So much of a “standard” that the ISO produces a guide: Guide to the expression of uncertainty in measurement. People familiar with it usually just refer to it as “the GUM”. The GUM is a large document, though. For simple cases we often learn a lot of the basics in introductory science, physics, or statistics classes. Repeated measures can help us to see the spread associated with a measurement system. We are familiar with reporting measurements along with an uncertainty, such as 18.4°C, ±0.8°C. We probably also learn that random errors often fit a “normal curve”, and that the spread can be calculated using the standard deviation (usually indicated by the symbol σ). We also learn that “one sigma” covers about 68% of the spread, and “two sigmas” is about 95%. (When you see numbers such as 18.4°C, ±0.8°C, you do need to know if they are reporting one or two standard deviations.) At a slightly more advanced level, we’ll start to learn about systematic errors, rather than random ones. And we usually want to know if the data actually are normally-distributed. (You can calculate a standard deviation on any data set, even one as small as two values!, but the 68% and 95% rules only work for normally-distributed data.) There are lots of other common distributions out there in the physical sciences - uniform Poisson, etc. - and you need to use the correct error analysis for each type. But let’s first look at what we probably all recognize – the characteristics of random, normally-distributed variation. What common statistics do we have to describe such data? • The mean, or average. Sum all the values, divide by the number of values (N), and we have a measure of the “central tendency” of the data. □ Other common “central tendency” measures are the mode (the most common value) and the median (the value where half the observations are smaller, and half are bigger), but the mean is most common. In normally-distributed data, they are all the same, anyway. • The standard deviation. Start by taking each measurement, subtracting the mean, to express it as a deviation from the mean. □ We want to turn this into some sort of an “average”, though – but if we just sum up these deviations and divide by N, we’ll get zero because we already subtracted the mean value. ☆ So, we square them before we sum them. That turns all the values into positive numbers. ☆ Then we divide by N. ☆ Then we take the square root. • After that, we’ll probably learn about the standard error of the estimate of the mean (often referred to as SE). □ If we only measure once, then our single value will fall in the range as described by the standard deviation. What if we measure twice, and average those two readings? or three times? or N □ If the errors in all the measurements are independent (the definition of “random”) then the more measurements we take (of the same thing) the close the average will be to the true average. The reduction is proportional to 1/(sqrt(N). A key thing to remember is that measures such as standard deviation involve squaring something, summing, and then taking the square root. If we do not take the square root, then we have something called the variance. This is another perfectly acceptable and commonly-used measure of the spread around the mean value – but when combining and comparing the spread, you really need to make sure whether the formula you are using is applied to a standard deviation or a variance. The pattern of “square things, sum them, take the square root” is very common in a variety of statistical measures. “Sum of squares” in regression should be familiar to us all. Differences between two systems of measurement Now, all that was dealing with the spread of a single repeated measurement of the same thing. What if we want to compare two different measurement systems? Are they giving the same result? Or do they differ? Can we do something similar to the standard deviation to indicate the spread of the differences? Yes, we can. • We pair up the measurements from the two systems. System 1 at time 1 compared to system 2 at time 1. System 1 at time 2 compared to system 2 at time 2. etc. • We take the differences between each pair, square them, add them, divide by N, and take the square root, just like we did for standard deviation. • And we call it the Root Mean Square Error (RMSE) Although this looks very much like the standard deviation calculation, there is one extremely important difference. For the standard deviation, we subtracted the mean from each reading – and the mean is a single value, used for every measurement. In the RMSE calculation, we are calculating the difference between system 1 and system 2. What happens if those two systems do not result in the same mean value? That systematic difference in the mean value will be part of the RMSE. • The RMSE reflects both the mean difference between the two systems, and the spread around those two means. The differences between the paired measurements will have both a mean value, and a standard deviation about that mean. • So, we can express the RMSE as the sum of two parts: the mean difference (called the Mean Bias Error, MBE), plus the standard deviation of those differences. We have to square them first, though (remember “variance”): • Although we still use the σ symbol here, note that the standard deviation of the differences between two measurement systems is subtly different from the standard deviation measured around the mean of a single measurement system. Combining uncertainty from different measurements Lastly, we’ll talk about how uncertainty gets propagated when we start to do calculations on values. There are standard rules for a wide variety of common mathematical calculations. The main calculations we’ll look at are addition, subtraction, and multiplication/division. Wikipedia has a good page on propagation of uncertainty, so we’ll borrow from them. Half way down the page, they have a table of Example Formulae, and we’ll look at the first three rows. We’ll consider variance again – remember it is just the square of the standard deviation. A and B are the measurements, and a and b are multipliers (constants). Calculation Combination of the variance f = aA σ[f]^2 = a^2 σ^2[A] f=aA+bB σ[f]^2 = a^2 σ^2[A] + b^2 σ^2[B] + 2ab σ[AB] f=aA-bB σ[f]^2 = a^2 σ^2[A] + b^2 σ^2[B] - 2ab σ[AB] If we leave a and b as 1, this gets simpler. The first case tells us that if we multiply a measurement by something, we also need to multiply the uncertainty by the same ratio. The second and third cases tell us that when we add or subtract, the sum or difference will contain errors from both sources. (Note that the third case is just the second case with negative b.) We may remember this sort of things from school, when we were taught this formula for determining the error when we added two numbers with uncertainty: σ[f]^2 = σ^2[A] + σ^2[B] But Wikipedia has an extra term: ±2ab σ[AB], (or ±2σ[AB] when we have a=b=1). What on earth is that? • That term is the covariance between the errors in A and the errors in B. □ What does that mean? It addresses the question of whether the errors in A are independent of the errors in B. If A and B have errors that tend to vary together, that affects how errors propagate to the final calculation. □ The covariance is zero if the errors in A and B are independent – but if they are not... we need to account for that. • Two key things to remember: □ When we are adding two numbers, if the errors in B tend to go up when the errors in A go up (positive covariance), we make our uncertainty worse. If the errors in B go down when the errors in A go up (negative covariance), they counteract each other and our uncertainty decreases. □ When subtracting two numbers, the opposite happens. Positive covariance make uncertainty smaller; negative covariance makes uncertainty worse. • Three things. Three key things to remember. When we are dealing with RMSE between two measurement systems, the possible presence of a mean bias error (MBE) is a strong warning that covariance may be present. RMSE should not be treated as if it is σ. Let’s finish with a simple example – a person walking. We’ll start with a simple claim: “The average length of an adult person’s stride is 1 m, with a standard deviation of 0.3 m.” There are actually two ways we can interpret this. 1. Looking at all adults, the average stride is 1 m, but individuals vary, so different adults will have an average stride that is 1 ±0.3 m long (one sigma confidence limit). 2. People do not walk with a constant stride, so although the average stride is 1 m, the individual steps one adult takes will vary in the range ±0.3 m (one sigma). The claim is not actually expressed clearly, since we can interpret it more than one way. We may actually mean both at the same time! Does it matter? Yes. Let’s look at three different people: 1. Has an average stride length of 1 m, but it is irregular, so individual steps vary within the ±0.3 m standard deviation. 2. Has a shorter than average stride of 0.94m (within reason for a one-sigma variation of ±0.3 m), and is very steady. Individual steps are all within ±0.01 m. 3. Has a longer than average stride of 1.06m, and an irregular stride that varies by ±0.3 m. Figure 4 gives us two random walks of the distance travelled over 200 steps by these three people. (Because of the randomness, a different random sequence will generate a slightly different graph.) • Person B follows a nice straight line, due to the steady pace, but will usually not travel as far as person A or C. After 200 steps, they will always be at a total distance close to 188m. • Persons A and C exhibit wobbly lines, because their step lengths vary. On average, person C will travel further than persons A and B, but because A and C vary in their individual stride lengths, this will not always be the case. The further they walk, though, the closer they will be to their average. • Person A will usually fall in between B and C, but for short distances the irregular steps can cause this to vary. What’s the point? Well, to properly understand what the graph tells us about the walking habits of these three people, we need to recognize that there are two sources of differences: 1. The three people have different average stride lengths. 2. The three people have different variability in their individual stride lengths. Just looking at how individual steps vary across the three individuals is not enough. The differences have an average component (e.g., Mean Bias Error), and a random component (e.g., standard deviation). Unless the statistical analysis is done correctly, you will mislead yourself about the differences in how these three people walk. Here endeth the basic statistics lesson. This gives us enough tools to understand what Patrick Frank has done wrong. Horribly, horribly wrong. The problems with Patrick Frank’s paper Well, just a few of them. Enough of them to realize that the paper’s conclusions are worthless. The first 26 pages of this long paper look at various error estimates from the literature. Frank spends a lot of time talking about the characteristics of glass (as a material) and LiG thermometers, and he spends a lot of time looking at studies that compare different radiation shields or ventilation systems, or siting factors. He spends a lot of time talking about systemic versus random errors, and how assumptions about randomness are wrong. (Nobody actually makes this assumption, but that does not stop Patrick Frank from making the accusation.) All in preparation for his own calculations of uncertainty. All of these aspects of radiation shields, ventilation etc. are known by climate scientists – and the fact that Frank found literature on the subject demonstrates that. One key aspect – a question, more than anything else. Frank’s paper uses the symbol σ (normally “standard deviation”) throughout, but he keeps using the phrase RMS error in the text. I did not try to track down the many papers he references, but there is a good possibility that Frank has confused standard deviation and RMSE. They look similar in calculations, but as pointed out above, they are not the same. If all the differences he is quoting from other sources are RMSE (which would be typical for comparing two measurement systems), then they all include the MBE in them (unless it happens to be zero). It is an error to treat them as if they are standard deviation. I suspect that Frank does not know the difference – but that is a side issue compared to the major elementary statistics errors in the paper. It’s on page 27 that we begin to see clear evidence that he simply does not know what he is doing. In his equation 4, he combines the uncertainty for measurements of daily maximum and minimum temperature (T[max] and T[min]) to get an uncertainty for the daily average: At first glance, this all seems reasonable. The 1.96 multiplier would be to take a one-sigma standard deviation and extend it to the almost-two-sigma 95% confidence limit (although calling it 2σ seems a bit sloppy). But wait. Checking that calculation, there is something wrong with equation 4. In order get his result of ±0.382°C, the equation in the middle tells me to do the following: 1. 0.366^2 + 0.135^2 = 0.133956 + 0.018225 = 0.152181 2. 0.152181 ÷ 2 = 0.0760905 3. sqrt(0.076905) = 0.275845065 4. 1.96 * 0.275845065 = 0.541 1. ...not the 0.382 value we see on the right... What Frank actually seems to have done is: 1. 0.366^2 + 0.135^2 = 0.133956 + 0.018225 = 0.152181. 2. sqrt(0.152181) = 0.3910 3. 0.39103832 ÷ 2 = 0.19505 4. 1.96 * 0.19505 = 0.382 See how steps B and C are reversed? Frank did the division by 2 outside the square root, not inside as the equation is written. Is the calculation correct, or is the equation correct? Let’s look back at the Wikipedia equation for uncertainty propagation, when we are adding two terms (dropping the covariance term): σf^2 = a^2 σ^2[A] + b^2 σ^2[B] To make Frank’s version – as the equation is written - easier to compare, let’s drop the 1.96 multiplier, do a little reformatting, and square both sides: σ^2 = ½ (0.366^2)+ ½(0.135^2) The formula for an average is (A+B)/2, or ½ A + ½ B. In Wikipedia’s format, the multipliers a and b are both ½. That means that when propagating the error, you need to use ½ squared, = ¼. • Patrick Frank has written the equation to use ½. So the equation is wrong. • In his actual calculation, Frank has moved the division by two outside the square root. This is the same as if he’d used 4 inside the square root, so he is getting the correct result of ±0.382°C. So, sloppy writing, but are the rest of his calculations correct? No. Look carefully at Frank's equation 5. Equation 5 supposedly propagates a daily uncertainty into a monthly uncertainty, using an average month length of 30.417. It is very similar in format to his equation 4. • Instead of adding the variance (=0.195^2) for 30.417 times, he replaces the sum with a multiplication. This is perfectly reasonable. • ...but then in the denominator he only has 30.417, not 30.417^2. The equation here is again written with the denominator inside the square root (as the 2 was in equation 4). But this time he has actually done the math the way his equation is (incorrectly) written. • His uncertainty estimate is too big. In his equation the two 30.417 terms cancel out, but it should be 30.417/30.417^2, so that cancelling leaves 30.417 only in the denominator. After the square root, that’s a factor of 5.515 times too big. And equation 6 repeats the same error in propagating the uncertainty of monthly means to annual ones. Once again, the denominator should be 12^2, not 12. Another factor of 3.464. So, combining these two errors, his annual uncertainty estimate is √30.417 * √12 = 19 times too big. Instead of 0.382°C, it should be 0.020°C – just two hundredths of a degree! That looks a lot like the BEST estimates in figure 2. Notice how each of equations 4, 5, and 6 all end up with the same result of ±0.382°C? That’s because Frank has not included a factor of √N – the term in the equation that relates the standard deviation to the standard error of the estimate of the mean. Frank’s calculations make the astounding claim that averaging does not reduce uncertainty! Equation 7 makes the same error when combining land and sea temperature uncertainties. The multiplication factors of 0.7 and 0.3 need to be squared inside the square root symbol. Patrick Frank messed up writing his equation 4 (but did the calculation correctly), and then he carried the error in the written equation into equations 5, 6, and 7 and did those calculations Is it possible that Frank thinks that any non-random features in the measurement uncertainties exactly balance the √N reduction in uncertainty for averages? The correct propagation of uncertainty equation, as presented earlier from Wikipedia, has the covariance term, 2ab σ[AB]. Frank has not included that term. Are there circumstances where it would combine exactly in such a manner that the √N term disappears? Frank has not made an argument for this. Moreover, when Frank discusses the calculation of anomalies on page 31, he says “the uncertainty in air temperature must be combined in quadrature with the uncertainty in a 30-year normal”. But anomalies involve subtracting one number from another, not adding the two together. Remember: when subtracting, you subtract the covariance term. It’s σ[f]^2 = a^2 σ^2[A] + b^2 σ^2[B] - 2ab σ[AB]. If it increases uncertainty when adding, then the same covariance will decrease uncertainty when subtracting. That is one of the reasons that anomalies are used to begin with! We do know that daily temperatures at one location show serial autocorrelation (correlation from one day to the next). Monthly anomalies are also autocorrelated. But having the values themselves autocorrelated does not mean that errors are correlated. It needs to be shown, not assumed. And what about the case when many, may stations are averaged into a regional or global mean? Has Frank done a similar error? Is it worth trying to replicate his results to see if he has? He can’t even get it right when doing a simple propagation from daily means to monthly means to annual means. Why would we expect him to get a more complex problem correct? When Frank displays his graph of temperature trends lost in the uncertainty (his figure 19), he has completely blown the uncertainty out of proportion. Here is his figure: Is that enough to realize that Patrick Frank has no idea what he is doing? I think so, but there is more. Remember the discussion about standard deviation, RMSE, and MBE? And random errors and independence of errors when dealing with two variables? In every single calculation for the propagation of uncertainty, Frank has used the formula that assumes that the uncertainties in each variable are completely independent. In spite of pages of talk about non-randomness of the errors, of the need to consider systematic errors, he does not use equations that will handle that non-randomness. Frank also repeatedly uses the factor 1.96 to convert the one-sigma 68% confidence limit to a 95% confidence limit. That 1.96 factor and 95% confidence limit only apply to random, normally distributed data. And he’s provided lengthy arguments that the errors in temperature measurements are neither random nor normally-distributed. All the evidence points to the likelihood that Frank is using formulae by rote (and incorrectly, to boot), without understanding what they mean or how they should be used. As a result, he is simply getting things wrong. To add to the question of non-randomness, we have to ask if Frank has correctly removed the MBE from any RMSE values he has obtained from the literature. We could track down every source Frank has used, to see what they really did, but is it worth it? With such elementary errors in the most basic statistics, is there likely to be anything really innovative in the rest of the paper? So much of the work in the literature regarding homogenization of temperature records and handling of errors is designed to identify and adjust for station changes that cause shifts in MBE – instrumentation, observing methodology, station moves, etc. The scientific literature knows how to do this. Patrick Frank does not. I’m running out of space. One more thing. Frank seems to argue that there are many uncertainties in LiG thermometers that cause errors – with the result being that the reading on the scale is not the correct temperature. I don’t know about other countries, but the Meteorological Service of Canada has a standard operating procedure manual (MANOBS) that covers (among many things) how to read a thermometer. Each and every thermometer has a correction card (figure 6) showing the difference between what is read and what the correct value should be. And that correction is applied for every reading. Many of Frank’s arguments about uncertainty fall by the wayside when you realize that they are likely included in this correction. Frank’s earlier papers? Nothing in this most recent paper, which I would assume contains his best efforts, suggests that it is worth wading through his earlier work. You might ask, how did this paper get published? Well, the publisher is MDPI, which has a questionable track record. The journal - Sensors - is rather off-topic for a climate-related subject. Frank’s paper was first received on May 20, 2023, revised June 17, 2023, accepted June 21, 2023, and published June 27, 2023. For a 46-page paper, that’s an awfully quick review process. Each time I read it, I find something more that I question, and what I’ve covered here is just the tip of the iceberg. I am reminded of a time many years ago when I read a book review of a particularly bad “science” book: the reviewer said “either this book did not receive adequate technical review, or the author chose to ignore it”. In the case of Frank’s paper, I would suggest that both are likely: inadequate review, combined with an author that will not accept valid criticism. The paper by Patrick Frank is not worth the electrons used to store or transmit it. For Frank to be correct, not only would the entire discipline of climate science need to be wrong, but the entire discipline of introductory statistics would need to be wrong. If you want to understand uncertainty in global temperature records, read the proper scientific literature, not this paper by Patrick Additional Skeptical Science posts that may be of interest include the following: Of Averages and Anomalies (first of a multi-part series on measuring surface temperature change). Berkeley Earth temperature record confirms other data sets Are surface temperature records reliable? 1 2 Next Comments 1 to 50 out of 66: 1. Gil O at 23:46 PM on 8 August, 2023 He lost me when he called nominal thermometer precision with accuracy. Maybe it was a typo, but such an elementary mistake really takes a toll on any presumed ethos the author may have wanted to 2. DavidS at 00:38 AM on 9 August, 2023 You cannot validate science with marketing, nor marketing with scientific processes. In this article there is a lot of taking something hot in one hand as proof the other hand is not cold. A lot of selling going on here. Moderator Response: [BL] - Contents snipped. This is a return of a previously-banned user, using a new name, which is strictly prohibited by the Comments Policy. 3. Bob Loblaw at 01:18 AM on 9 August, 2023 Gil @ 1: Who is "he"? Patrick Frank? Can you specify in which part of his article he makes this reference? 4. Eclectic at 10:44 AM on 9 August, 2023 There is a 2017 YouTube presentation by by Dr Patrick T. Brown (climatologist) which is highly critical of Dr Patrick Frank's ideas. The video title is:- "Do 'propagation of error' calculations invalidate climate model projections of global warming?" [length ~38 minutes] This video currently shows 7235 views and 98 comments ~ many of which are rather prickly comments by Patrick Frank . . . who at one point says "see my post on the foremost climate blog Watts Up With That" [a post in 2015?] # Frank also states: "There's no doubt that the climate models cannot predict future air temperatures. There is also no doubt that the IPCC does not know what it's talking about." Frank has also made many prickly comments on WUWT at various other times. And he has an acolyte or two on WUWT who will always denounce any critics as not understanding that uncertainty and error are not the same. [And yet the acolytes also fail to address the underlying physical events in global climate.] In a nutshell : Dr Patrick Frank's workings have a modicum of internal validity mathematically, but ultimately are unphysical. 5. Bob Loblaw at 22:18 PM on 9 August, 2023 Yes, Eclectic. That video - and the comments from Pat Frank - are rather mind-boggling. That video is one of the key links given in the AndThenTheresPhysics blog post on Propagation of Uncertainty that I linked to above. There were many other red flags in the recent paper by Frank. On page 18, in the last paragraph, he makes the claim that "...the ship bucket and engine-intake measurement errors displayed non-normal distributions, inconsistent with random error." There are many other distributions in physics - uniform, Poisson, etc. - that can occur with random data. Non-normality is not a test for randomness. In this blog post, I focused on the most basic mistakes he makes with respect to simple statistics - standard error of the mean, etc. Frankly, it is Pat Frank that does not know what he is talking about. 6. nigelj at 07:04 AM on 10 August, 2023 Pat Frank sounds like a classic case of a person promoting crank science. Scientific crank for short. I'm no expert in scientific cranks, or crank science, but I have a little bit of background in psychology having done a few papers at university, (although I have a design degree). I have observed that cranks have certain attributes. 1)They are usually moderately intelligent, sometimes highly intelligent. This helps them come up with inventive nonsense. 2)They are very stubborn and dont admit they are wrong, either to themselves or anyone else. 3) They also frequently tend to be egocentric, arrogant, very confident and somewhat narcissistic. Some people have a disorder called NPD (narcissistic personality disorder) or lean that way: "Narcissistic personality disorder is a mental health condition in which people have an unreasonably high sense of their own importance. They need and seek too much attention and want people to admire them. People with this disorder may lack the ability to understand or care about the feelings of others. But behind this mask of extreme confidence, they are not sure of their self-worth and are easily upset by the slightest criticism." (mayo clinic) Narcissists are usually overconfident and very arrogant and they can sometimes be very dishonest. We all have some egoism or self love, but narcissists are at the extreme end of the spectrum. Maybe its a bell curve distribution thing. I've noticed that narcissists are unable to ever admit to themselves or others that they are wrong about something and perhaps its because its exceptionally painful for this personality type. So they just go on repeating the same nonsense forever. While nobody loves admitting they are wrong, or have been fooled or sucked in, either to themselves or others most people eventually do and move on. Unfortuntately this means the cranks hang around influencing those who want reasons to deny the climate issue. And of course some of the scientific cranks prove to be correct at least about some things. Which confuses things further. But it looks to me like most cranks aren't correct, especially the arm chair cranks who may not have science degrees. The Realclimate.org website attracts some of these cranks, including both climate science denialists and also warmists usually with dubious mitigation solutions. You guys probably know who I 7. Eclectic at 09:08 AM on 10 August, 2023 Nigelj @8 , you are correct about some of the characters that you yourself encounter at RealClimate blog. Psychiatrically though, the WUWT blog presents a "target-rich environment" for a wider range of pathologies. And Dr Pat Frank is still to be seen making brief comments in the WUWT threads . . . but mostly his comments match the run-of-the-mill WUWT craziness stuff, rather than relating to the Uncertainty Monster. Anecdote ~ long ago, I knew a guy who had spent a decade or more tinkering in his garden shed, inventing an electrical Perpetual-Motion machine. Continual updates & modifications, but somehow never quite hitting the bullseye. He had a pleasant-enough personality, not a narcissist. But definitely had a bee in his bonnet or a crack in his pot [=pate]. R.I.P. And the Uncertainty Monster still lives in the darker corners of public discussion. Living sometimes as a mathematical nonsense, but much more commonly in the form of: "Well, that AGW stuff is not absolutely certain to six decimal places, so we ought to ignore it all." Or existing in the practical sphere as: "It is not certain that we could eventually power all our economy with renewables & other non-fossil-fuel systems . . . so we should not even make a partial effort." 8. Eclectic at 09:12 AM on 10 August, 2023 Correction : should be Nigelj @ post 6. Almost certainly! 9. MA Rodger at 02:40 AM on 11 August, 2023 I'm not sure Frank is responsible for 'propagation' or the spreading of nonsense. He is the creator of nonsense that others happily spread. But strangely the remarkable level of stupidity achieved by Frank and other denialists is not an issue for those who spread such denialist messages, or those who happily receive them. As for debunking Frank's crazyman attack on climatology, this debunking can be achieved in many ways. There is plenty of opportunity as the man is evidently heavily in denial over AGW and stupid enought to feel he is able to prove that his denial-position is correct while the whole of science is flat wrong. I'm not of the view that climbing down the rabbit hole to chase his nonsense round down in the wonderland world of Pat Frank is the best way to debunk Fank's lunacy. (I talk of 'chasing his nonsense' because his latest published serving of nonsense is an embellishment of work now a decade-plus old while a whole lot different from his nonsense from four years back featured in the video linked @4.) Yet this SkS OP is attempting such a chase. Frank's obvious stupidity does lend itself to debunking although his embellishments with lengthy coverage of associated stuff in this 2023 paper provides him a means of obfuscation. In such a situation, I'd go straight for the conclusions, or in Frank's latest paper 'final' conclusions. So Frank agrees that there has been warming since the 19th century (as shown by phenology which certainly does indicate "unprecedented climate warming over the last 200 years, or over any other timespan" has occurred in recent decades). But generally Frank's paper would suggest the instrument temperature record cannot show any warming. Indeed having told us of the phenological "direct evidence of a warming climate" he then tells us "The 20th century surface air-temperature anomaly, 0.74 ± 1.94 °C (2σ), does not convey any knowledge of rate or magnitude of change in the thermal state of the troposphere." That 0.74 ± 1.94 °C (2σ) surely suggests a 23% chance that measurement of actual temperature represents a cooling not a warming, with a 5% chance of a cooling of -1.2 °C or more. And really, if anybody were attempting to question the accuracy of the global temperature instrument record, it would be sensible to start with the work already done to establish such accuracy, eg Lenssen et al (2019). But Frank seemingly doesn't do 'sensible'. 10. Nick Palmer at 04:30 AM on 11 August, 2023 Did anyone ever see what the reviewers (including Carl Wunsch and Davide Zanchettin) said about Frank's similar 2019 paper in Frontiers in Science? 11. Bob Loblaw at 05:39 AM on 11 August, 2023 MA Rodger @ 9: It is an open question whether it is better to ignore papers such as Pat Frank's most recent attempt, or spend the time debunking it. I chose to debunk it in this case, but to paraphrase Top Gun, "this is a target-rich environment". There are so many logical inconsistencies in that 46-page tome that it would take weeks to identify them all. Just trying to chase down the various references he uses would require months of work. I took this as an opportunity for a "teaching moment" about propagation of uncertainty, as much as an opportunity to debunk Pat Frank. Thanks for the reference to Lenssen et al. But what Pat Frank needs to do is start with a Statistics 101 course. Nick Palmer @ 10: I'm sure the general answer to your question is "yes", but I did not try to chase down every blog post and link in the lengthy chain exposed by the linked post at ATTP's. I did read parts of some, and remember that some of his earlier efforts to publish involved open review. Nigelj's comment that some people "...are unable to ever admit to themselves or others that they are wrong about something..." seems particularly true about Pat Frank. 12. nigelj at 06:31 AM on 11 August, 2023 Eclectic @7, agreed overall. I didnt mean to imply all cranks are narcissists or lean that way. It just seems to me a disproportionately high percentage of cranks are narcissists. It's just my personal observation but the pattern is rather obvious and striking. I tried to find some research on the issue but without success. Perhaps the crank tendencies you mentioned and narcissism are mutually reinforcing. And remember narcissists do frequently come as cross as nice, normal likeable people, just the same as sociopaths (psychopaths) often do. They find this front works best for them. In fact they can be especially likeable, but its about what lies beneath and eventually one notices oddities about them, or in their writings, and if you get into a close relationship with them it can end very badly. Have seen some ciminal cases in the media. 13. nigelj at 07:37 AM on 11 August, 2023 I enjoyed Bob Loblows article because it did a thorough debunking and I learned some things about statistics. Some of the errors look like almost basic arithmetic and I thought a key purpose of peer review was to identify those things? They did an awful job. It looks like they didnt even bother to try. Regarding whether such papers should be debunked or not. I've often wondered about this. Given the purpose of this website is to debunk the climate myths it seems appropriate to debunk bad papers like this, or some of them. It would not be possible to address all of them because its too time consuming. There is a school of thought that says ignore the denialists because "repeating a lie spreads the lie" and engaging with them gives them oxygen. There is some research on this as well. There has to be some truth in the claim that responding to them will spread the lies. It's virtually self evident. Now the climate science community seems to have taken this advice, and has by my observation mostly ignored the denialists. For example there is nothing in the IPCC reports listing the main sceptical arguments and why they are wrong, like is done on this website (unless Ive missed it). I have never come across any articles in our media by climate scientists or experts addressing the climate denialists myths. The debunking seems to be confined to a few websites like this one and realclimate.org or Taminos open mind. And how has this strategy of (mostly) giving denialists the silent treatment worked out? Many people are still sceptical about climate change, and progress has been slow dealing with it which suggests that giving the denialists the silent treatment may have been a flawed strategy. I suspect debunking the nonsense, and educating people on the climate myths is more important than being afraid that it would cause lies to spread. Lies will spread anyway. A clever denialist paper probably potentially has some influence on governments and if its not rebutted this creates a suspicion it might be valid. However I think that actually debating with denialists can be risky, and getting into actual formal televised debates with denialists would be naieve and best avoided. And fortunately it seems to have been avoided. Denialists frequently resort to misleading but powerful rhetorical techniques (Donald Trump is a master of this) that is hard to counter without getting down into the rhetorical gutter and then this risks making climate scientists look bad and dishonest. All their credibility could be blown with the decision makers. 14. MA Rodger at 18:52 PM on 11 August, 2023 Nigelj @13, The paper Frank (2019) did take six months from submission to gain acceptance and Frontiers does say "Frontiers only applies the most rigorous and unbiased reviews, established in the high standards of the Frontiers Review System." Yet the total nonsense of Frank (2019) is still published, not just a crazy approach but quite simple mathematical error as well. But do note that a peer-reviewed publication does not have to be correct. A novel approach to a subject can be accepted even when that approach is easily show to be wrong and even when the implications of the conclusions (which are wrong) are set out as being real. I suppose it is worth making plain that peer-review can allow certain 'wrong' research to be published as this will prevent later researchers making the same mistakes. Yet what is so often lost today is the idea that any researcher wanting publishing must be familiar with the entirety of the literature and takes account of it within their work. And for a denialist, any publication means it is entirely true, if they want it to be. In regard to the crazy Frank (2019), it is quite simple to expose the nonsense. This wondrous theory (first appearing in 2016) suggests that, at a 1sd limit, a year's global average SAT could be anything between +0.35ºC to -0.30ºC the previous year's temperature, this variation due alone to the additional AGW forcing enacted since that previous year. The actual SAT records do show an inter-year variation but something a little smaller (+/-0.12ºC at 1sd in the recent BEST SAT record) but this is from all causes not just from a single cause that is ever accumulating. And these 'all causes' of the +/-0.12ºC are not cumulative through the years but just wobbly noise. Thus the variation seen do not increase with variation measured over a longer period. After 8 years in the BEST SAT record is pretty-much the same as the 1-year variation and not much greater at 60 years (+/-0.22ºC). But in the crazy wonderland of Pat Frank, these variations are apparently potentially cumulative (that would be the logic) so Frank's 8-year variation is twice the 1-year variation. And after 60 years of these AGW forcings (which is the present period with roughly constant AGW forcing) according to Frank we should be seeing SAT changes anything from +17.0ºC to -12.0ºC solely due to AGW forcing. And because Frank's normal distributions provides the probability of these variations, we can say there was an 80% chance of us seeing global SAT increases accumulating over that 60 years in excess of +4.25ºC and/or decreases acumulating in excess of -3.0ºC. According to Frank's madness, we should have been seeing such 60-year variation. But we haven't. So as a predictive analysis, the nonsense of Frank doesn't begin to pass muster. And another test for garbage is the level of interest shown by the rest of science. In the case of Frank (2019), that interest amounts to 19 citations according to Google Scholar, these comprising 6 citations by Frank himself, 2 mistaken citation (only one by a climatological paper which examines marine heat extremes and uses the Frank paper to support the contention "Substantial uncertainties and biases can arise due to the stochastic nature of global climate systems." which Frank 2019 only says are absent), a climatology working-paper that lists Frank with a whole bunch of denialists, three citations by one Norbert Schwarzer who appears more philosopher than scientist, and six by a fairly standard AGW denier called Pascal Richet. That leaves a PhD thesis citing Frank (2019)'s to say "... general circulation models generally do not have an error associated with predictions" So science really has no interest in Frank's nonsense (other than demonstrating that it is nonsense). 15. Nick Palmer at 21:36 PM on 11 August, 2023 Bob Loblaw @11 I was interested in what the two reviewers listed actually said. I couldn't find it. Wunsch is highly unlikely to have given it a 5* review and I don't think Zanchettini, who is much less well known, would either. The reason it would be helpful to know is that Frank's recent '23 paper and its past incarnations, such as '19, are currently being promulgated across the denialosphere as examples of published peer reviewed literature that completely undermines all of climate science. If one knew that Wunsch and Zanchettini had both said seomthing like 'the overall construction of the paper was interesting but has some major logical and statistical flaws in it', and Frontiers in Science had decided to publsih it anyway, that would be very useful anti-denialist 'ammo'. 16. Nick Palmer at 01:26 AM on 12 August, 2023 nigelj @13 wrote " I have never come across any articles in our media by climate scientists or experts addressing the climate denialists myths. The debunking seems to be confined to a few websites like this one and realclimate.org or Taminos open mind" That's true - but don't forget 'And then there's physics'... and Climate Adam and Dr Gilbz (both phd'd climate scientists) on Youtube quite often debunk stuff. Here's them collaborating - it's 17. nigelj at 07:01 AM on 12 August, 2023 MAR @14. Good points. Franks "wondrous theory" does indeed sound crazy. What mystifies me is how a guy with a Phd in chemistry, thus very highly scientifically literate could get relatively basic things like that wrong. Because its fairly well doumented that the up and down variation in year to year temperatures is due in large part to a component of natural variability cycles. And he does not seem like a person that would dispute the existence of natural climate cycles. It makes me wonder if he hasnt actually read much background information like this on the actual climate issue - and perhaps feels he is such a genius that he doesn't need to. Yet this would be astounding really that he is unaware of short term climate cycles and cant see their relevance. I do not have his level of scientific training by a long way, but its obvious to me. It seems more likely he is he just being crazy about things for whatever reasons and this craziness seems to be the main underling characteristics of a crank. But their frequent narcissim means no matter how much you point out the obvious flaws they just go on repeating the same theories, thus their nonsense certainly compounds over time. 18. Eclectic at 11:06 AM on 12 August, 2023 Nigelj @17 : Cranks or crackpots do inhabit the Denier spectrum, But IMO they are outliers of the main body. Dr Frank's wondrous "Uncertainty" simply produces absurd results ~ see his chart showing the envelope of uncertainty which "explodes" from the observation starting point, rendering all data nearly meaningless. Yet he cannot see the absurdity. He falls back on the bizarre argument of uncertainty being separate from error. (But in practical terms, there is a large Venn Diagram overlap of the two concepts.) WUWT blog is an enjoyable stamping ground where I observe the denialists' shenanigans. Most of the commenters at WUWT are angry selfish characters, who do not wish to see any socio-economic changes in this world ~ and hence their motivated reasoning against AGW. Certainly, WUWT has its share of cranks & crackpots. Also a large slice of "CO2-deniers" who continually assert "the trace gas CO2 simply cannot do any global warming". (WUWT blog's founder & patron, Anthony Watts initially tried to oust the CO2-deniers . . . but in the past decade he seems to have abandoned that attempt.) Dr Frank's comments in a WUWT thread are worth reading, but sadly they rarely rise above the common ruck there. Much more interesting to read, is a Mr Rud Istvan ~ though he does blow his own trumpet a lot (and publicizes his own book "Blowing Smoke" which I gather does in all ways Smite The Heathen warmists & alarmists. Istvan, like Frank, is very intelligent, yet does come out with some nonsenses. For instance, Istvan often states mainstream AGW must be wrong because of reasons A , B , C & D . And unfortunately, 3 of his 4 facts/reasons are quite erroneous. He is so widely informed, that he must know his facts/reasons are erroneous . . . yet he keeps repeating them blindly (in a way that resembles the blindness in Dr Frank). To very broadly paraphrase Voltaire : It is horrifying to see how even intelligent minds can believe absurdities. 19. nigelj at 06:21 AM on 13 August, 2023 Eclectic @11. I agree the cranks are a small subset of denilaists and that many of the denialists are fundamentally driven by selfish and economic motives. However you should add political motives to the list, being an ideologically motivated dislike of government rules and regulations. Of course these things are interrelated. I have a bit of trouble identifying one single underlying cause of the climate denialism issue. It seems to be different denialists have different motives to an extent, randging from vested interests, to political and ideological axes to grind, to selfishness, to just a dislike of change to plain old contrariness. Or some combination. But if anyone thinks there is one key underling motive for the denialim I would be interested in your reasoning. The cranks are not all deniers as such. Some believe burning fossil fuels is causing warming but some of them think other factors play a very large part like the water cycle or deforestation. A larger part than the IPCC have documented. They unwittingly serve the hard core denialists cause. They are like Lenin and Stalins "useful idiots." I do visit WUWT sometimes, and I know what you are saying. "To very broadly paraphrase Voltaire : It is horrifying to see how even intelligent minds can believe absurdities." Voltaire is right. Its presumably a lot to do with cognitive dissonance. Intelligent minds are not immunue from strong emotively or ideologically driven beliefs and resolving conflicts between those and reality might lead to deliberate ignorance. Reference on cognitive dissonance: This does suggests cranks might be driven by underlying belief systems not just craziness. 20. nigelj at 06:23 AM on 13 August, 2023 I meant Eclectic @18. Doh! 21. bdgwx at 06:23 AM on 16 August, 2023 I have had numerous discussions with Pat Frank regarding this topic. His misunderstanding boils down to using Bevington equation 4.22. There are two problems here. First and foremost, 4.22 is but an intermediate step in propagating the uncertainty through a mean. Bevington makes it clear that it is actually equation 4.23 that gives you the final result. Second, Equations 4.22 and 4.23 are really meant for relative uncertainties when there are weighted inputs. Frank does not use weighted inputs so it is unclear why he would be using this procedure anyway. Furthermore, Frank's own source (Bevington) tells us exactly how to propagate uncertainty through a mean. If the inputs are uncorrelated you use equation 4.14. If the inputs are correlated you use the general law of propagation of uncertainty via equation 3.13. A more modern and robust exploration of the propagation of uncertainty is defined in JCGM 100:2008 which NIST TN 1900 is based. And I've told Pat Frank repeatedly to verify his results with the NIST uncertainty machine. He has insinuated (at the very least) that NIST and JCGM including NIST's own calculator are not to be Another point worth mentioning...he published this in the MDPI journal Sensors. MDPI is known to be a predatory publisher. I emailed the journal editor back in July asking how this publication could have possibly made it through peer review with mistakes this egregious. I basically explained the things mentioned in this article. The editor sent my list of mistakes to Pat Frank and let him respond instead. I was hoping for a response from the editor or the reviewers. I did not get that. 22. Eclectic at 08:31 AM on 16 August, 2023 Bdgwx @21 , allow me to say it here at SkS (since I don't post comments at WUWT ) . . . that it is always a pleasure to see your name/remarks featured in the list of comments in any thread at WUWT. You, along with a very small number of other scientific-minded posters [you know them well] provide some sane & often humorous relief, among all the dross & vitriol of that blog. My apologies for the fulsome praise. ( I am very 'umble, Sir. ) 23. Bob Loblaw at 09:59 AM on 16 August, 2023 Thanks for the additional information. From the Bevington reference you link to, equation 3.13 looks like it matches the equation I cited from Wikipedia, where you need to include the covariance between the two non-independent variance terms. Equations 4.13, 4.14, and 4.23 are the normal Standard Error estimate I mentioned in the OP. Of course, calculating the mean of two values is equivalent to merging two values where each is weighted by 1/2. Frank's equations 5 and 6 are just "weighted" sums where the weightings are 30.417 and 12 (average number of days in a month, and average number of months in a year), and each day or month is given equal weight. ...and all the equations use N when dealing with variances, or sqrt(N) when dealing with standard deviations. That Pat Frank screws up so badly by putting the N value inside the sqrt sign as a denominator (thus being off by a factor of sqrt(N)) tells us all we need to know about his statistical chops. In the OP, I linked to the Wikipedia page on MDPI, which largely agrees that they are not a reputable publisher. I took a look through the Sensors web pages at MDPI. There are no signs that any of the editors or reviewers they list have any background in meteorological instrumentation. It seems like they are more involved in electrical engineering of sensors, rather than any sort of statistical analysis of sensor performance. A classic case of submitting a paper to a journal that does not know the topic - assuming that there was any sort of review more complex than "has the credit card charge cleared?" The rapid turn-around makes it obvious that no competent review was done. Of course, we know that by the stupid errors that remain in the paper. It is unfortunate that the journal simply passes comments on to the author, rather than actually looking at the significance of the horrible mistakes the paper contains. So much for a rigorous concern about scientific quality. The JCGM 100:2008 link you provide is essentially the same as the ISO GUM I have as a paper copy (mentioned in the OP). 24. bigoilbob at 00:51 AM on 19 August, 2023 "Did anyone ever see what the reviewers (including Carl Wunsch and Davide Zanchettin) said about Frank's similar 2019 paper in Frontiers in Science?" Carl Wunsch has since disavowed his support for the contents. I'm not sure whether he would have rejected it upon closer reading. Many reviewers want alt.material to be published. If it's good stuff, it will be cited. The 2019 Pat Frank paper and this one have essentially not been... 25. Eclectic at 05:02 AM on 19 August, 2023 Bigoilbob @24 , as a side-note :- Many thanks for your numerous very sensible comments in "contrarian" blogsites, over the years. Gracias. 26. Bob Loblaw at 07:03 AM on 19 August, 2023 bigoilbob @ 24: Do you have any link to specific statements from Carl Wunsch? Curiosity arises. I agree that science will simply ignore the paper. But the target is probably not the science community: it's the "we'll believe ABC [Anything But Carbon]" community, which unfortunately includes large blocks of politicians with significant power. All they need is a thick paper that has enough weight (physical mass, not significance) to survive being tossed down the stairwell, with a sound bite that says "mainstream science has it all wrong". It doesn't matter that the paper isn't worth putting at the bottom of a budgie cage. 27. bigoilbob at 02:39 AM on 20 August, 2023 "Do you have any link to specific statements from Carl Wunsch? Curiosity arises." Specifically, this is what I found. Old news, but not to me. I hope that I did not mischaracterize Dr. Wunsch earlier, and my apologies to both him and readers if aI did so. "#5 Carl Wunsch I am listed as a reviewer, but that should not be interpreted as an endorsement of the paper. In the version that I finally agreed to, there were some interesting and useful descriptions of the behavior of climate models run in predictive mode. That is not a justification for concluding the climate signals cannot be detected! In particular, I do not recall the sentence "The unavoidable conclusion is that a temperature signal from anthropogenic CO2 emissions (if any) cannot have been, nor presently can be, evidenced in climate observables." which I regard as a complete non sequitur and with which I disagree totally. The published version had numerous additions that did not appear in the last version I saw. I thought the version I did see raised important questions, rarely discussed, of the presence of both systematic and random walk errors in models run in predictive mode and that some discussion of these issues might be worthwhile." Moderator Response: [BL] Link activated. The web software here does not automatically create links. You can do this when posting a comment by selecting the "insert" tab, selecting the text you want to use for the link, and clicking on the icon that looks like a chain link. Add the URL in the dialog box. 28. bigoilbob at 02:42 AM on 20 August, 2023 And here's the twiter thread I began with. Moderator Response: [BL] Link activated 29. Bob Loblaw at 03:36 AM on 20 August, 2023 Thanks, bigoilbob. That PubPeer thread is difficult to read. The obstinance by Pat Frank is painful to watch. One classic comment from Pat Frank: Everyone understands the 1/sqrt(N) rule for random error, Joshua. Except, apparently, Pat Frank, who in the paper reviewed above seems to have messed up square rooting the N when propagating the standard deviation into the error of the mean. Unless he actually thinks that non-randomness in errors can be handled by simply dropping the sqrt. Pat Frank is also quite insistent in that comment thread that summing up a year's worth of a measurement (e.g. temperature) and dividing by the number of measurements to get an annual average gives a value of [original units] per year. Gavin Cawley sums it up in this comment: I had a look at your responses to reviewers on the previous submissions, which seem also to contain this very poor attitude to criticism. It tells me there is no point in me explaining to you what is wrong with your paper, because you have made it very clear that you are not prepared to listen to anybody that disagrees with you. 30. bigoilbob at 00:48 AM on 24 August, 2023 An update. Dr. Frank created an auto erotic self citation for his paper, so there is no longer "no citations" for it. 31. Tom Dayton at 02:43 AM on 24 August, 2023 Diagram Monkey posted about the most recent Pat Frank article. 32. Bob Loblaw at 03:35 AM on 24 August, 2023 Thanks for that link, Tom. I see that DiagramMonkey thinks that I am braver than he is. (There is a link over there back to this post...) 33. Bob Loblaw at 03:47 AM on 24 August, 2023 DiagramMonkey's post also has a link to an earlier DiagramMonkey post that has an entertaining description of Pat Frank's approach to his writing (and criticism of it). DiagramMonkey refers to it as Hagfishing. 34. Bob Loblaw at 11:16 AM on 24 August, 2023 DiagramMonkey also has a useful post on uncertainty myths, from a general perspective. It touches on a few of the issues about averaging, normality, etc. that are discussed in this OP. 35. Bob Loblaw at 08:10 AM on 25 August, 2023 In the "small world" category, a comment over at AndThenTheresPhysics has pointed people to another DiagramMonkey post that covers a semi-related topic: a really bad paper by one Stuart Harris, a retired University of Calgary geography professor. The paper argues several climate myths. I have had a ROFLMAO moment reading that DiagramMonkey post. Why? Well in the second-last paragraph of my review of Pat Frank's paper (above), I said: I am reminded of a time many years ago when I read a book review of a particularly bad “science” book: the reviewer said “either this book did not receive adequate technical review, or the author chose to ignore it”. That was a book (on permafrost) that was written by none other than Stuart Harris. I remember reviewing a paper of his (for a permafrost conference) 30 years ago. His work was terrible then. It obviously has not improved with age. 36. bdgwx at 00:58 AM on 26 August, 2023 Here is a lengthy interview with Pat Frank posted 2 days ago. Per usual there are a lot of inaccuracies and misrepresentations about uncertianty in it. Moderator Response: [RH] Activated link. 37. bdgwx at 03:46 AM on 26 August, 2023 I'm sure this has already been discussed. But regarding Frank 2019 concerning CMIP model uncertainty the most egregious mistake Frank makes is interpretting the 4 W/m2 calibration error of the longwave cloud flux from Lauer & Hamilton 2013 as 4 W/m2.year. He sneakily changed the units from W/m2 to W/m2.year. And on top of that he arbitrarily picked a year as a model timestep for the propagation of uncertainty even though many climate models operate on hourly timesteps. It's easy to see the absurdity of his method when you consider how quickly his uncertainty blows up if he had arbitrarily picked an hour as the timestep. Using equstions 5.2 and 6 and assuming F0 = 34 W/m2 and 100 year prediction period we get ±16 K for yearly model timesteps and ±1526 K for hourly model timesteps. Not only is it absurd, but it's not even physically possible. 38. Eclectic at 05:09 AM on 26 August, 2023 Bdgwx @36 , you have made numerous comments at WUWT blogsite, where the YouTube Pat Frank / Tom Nelson interview was "featured" as a post on 25th August 2023. Video length is one hour and 26 minutes long. ( I have not viewed it myself, for I hold that anything produced by Tom Nelson is highly likely to be a waste of time . . . but I am prepared to temporarily suspend that opinion, if an SkS reader can refer me to a worthwhile Nelson production.) The WUWT comments column has the advantage that it can be skimmed through. The first 15 or so comments are the usual rubbish, but then things gradually pick up steam. Particularly, see comments by AlanJ and bdgwx , which are drawing heat from the usual suspects (including Pat Frank). Warning : a somewhat masochistic perseverance is required by the reader. But for those who occasionally enjoy the Three-Ring Circus of absurdity found at WUWT blog, it might have its entertaining aspects. Myself, I alternated between guffaws and head-explodings. Bob Loblaw's reference to hagfish (up-thread) certainly comes to mind. The amount of hagfish "sticky gloop" exuded by Frank & his supporters is quite spectacular. [ The hagfish analogy breaks down ~ because the hagfish uses sticky-gloop to defend itself . . . while the denialist uses sticky-gloop to confuse himself especially. ] 39. bdgwx at 10:43 AM on 26 August, 2023 Electic, I appreciate the kind words. I think the strangest part of my conversation with Pat Frank is when he quotes Lauer & Hamilton saying 4 W m-2 and then gaslights me because I didn't arbitary change the units to W m-2 year-1 like he did. The sad part is that he did this in a "peer reviwed" publication. Oh wait...Frontiers in Earth Science is a predatory journal. 40. Bob Loblaw at 22:48 PM on 26 August, 2023 Although DiagramMonkey may think that I am a braver person than he is, I would reserve "braver" for people that have a head vice strong enough to spend any amount of time reading and rationally commenting on anything posted at WUWT. I don't think I would survive watching a 1h26m video involving Pat Frank. I didn't start the hagfish analogy. If you read the DiagramMonkey link I gave earlier, note that a lot of his hagfish analogy is indeed discussing defence mechanisms. Using the defence mechanism to defend one's self from reality is still a defence mechanism. The "per year" part of Pat Frank's insanity has a strong presence in the PubPeer thread bigoilbob linked to in comment 27 (and elsewhere in other related discussions). I learned that watts are Joules/second - so already a measure per unit time - something like 40-50 years ago. Maybe some day Pat Frank will figure this out, but I am not hopeful. 41. bdgwx at 07:17 AM on 27 August, 2023 Oh wow. That PubPeer thread is astonishing. I didn't realize this had already been hashed. 42. Kiwigriff at 14:46 PM on 27 August, 2023 Sorry that this is the first time I have commented have been lurking for years. Figure 4 3rd explanation Typo: Person A will usually fall in between A and C, but for short distances the irregular steps can cause this to vary. Should read between B and C. Moderator Response: [BL] Thanks for noticing that! Corrected... 43. Eclectic at 00:41 AM on 28 August, 2023 Apart from ATTP's posts on his own website (with relatively brief comments) . . . there is a great deal more in the above-mentioned PubPeer thread (from September 2019) ~ for those who have time to go through it. So far, I am only about one-third of the way at the PubPeer one. Yet worth quoting Ken Rice [=ATTP] at #59 of PubPeer :- "Pat, Noone disagrees that the error propagation formula you're using is indeed a valid propagation formula. The issue is that you shouldn't just apply it blindly whenever you have some uncertainty that you think needs to be propagated. As has been pointed out to you many, many, many, many times before, the uncertainty in the cloud forcing is a base state error, which should not be propagated in the way you've done so. This uncertainty means that there will be an uncertainty in the state to which the system will tend; it doesn't mean that the range of possible states diverges with time." In all of Pat Frank's many, many, many truculent diatribes on PubPeer, he continues to show a blindness to the unphysical aspect of his assertions. 44. Bob Loblaw at 23:50 PM on 28 August, 2023 Eclectic # 43: you can write books on propagation of uncertainty - oh, wait. People have. The GUM is excellent. Links in previous comments. When I taught climatology at university, part of my exams included doing calculations of various sorts. I did not want students wasting time trying to memorize equations, though - so the exam included all the equations at the start (whether they were needed in the exam or not). No explanation of what the terms were, and no indication what each equation was for - that is what the students needed to learn. Once they reached the calculations questions, they knew they could find the correct equation form on the exam, but they needed to know enough to pick the right one. Pat Frank is able to look up equations and regurgitate them, but he appears to have little understanding of what they mean and how to use them. [In the sqrt(N) case in this most recent paper, he seems to have choked on his own vomit, though.] 45. Bob Loblaw at 00:12 AM on 29 August, 2023 Getting back to the temperature question, what happens when a manufacturer states that the accuracy of a sensor they are selling is +/-0.2C? Does this mean that when you buy one, and try to measure a known temperature (an ice-water bath at 0C is a good fixed point), that your readings will vary by +/-0.2C from the correct value? No, it most likely will not. In all likelihood, the manufacturer's specification of +/-0.2C applies to a large collection of those temperature sensors. The first one might read 0.1C too high. The second might read 0.13C too low. And the third one might read 0.01C too high. And the fourth one might have no error, etc. If you bought sensor #2, it will have a fixed error of -0.13C. It will not show random errors in the range +/-0.2C - it has a Mean Bias Error (as described in the OP). When you take a long sequence of readings, they will all be 0.13C too low. □ You may not know that your sensor has an error of -0.13C, so your uncertainty in the absolute temperature falls in the +/-0.2C range, but once you bought the sensor, your selection from that +/-0.2C range is complete and fixed at the (unknown) value of -0.13C. □ You do not propagate this fixed -0.13C error through multiple measurements by using the +/-0.2C uncertainty in the large batch of sensors. That +/-0.2C uncertainty would only vary over time if you kept buying a new sensor for each reading, so that you are taking another (different) sample out of the +/-0.2C distribution. The randomness within the +/-0.2C range falls under the "which sensor did they ship?" question, not the "did I take another reading?" question. □ When you want to examine the trend in temperature, that fixed error becomes part of the regression constant, not the slope. □ ...and if you use temperature anomalies (subtracting the mean value), then the fixed error subtracts out. Proper estimation of propagation of uncertainty requires recognizing the proper type of error, the proper source, and properly identifying when sampling results in a new value extracted from the distribution of errors. 46. Bob Loblaw at 06:05 AM on 29 August, 2023 At the risk of becoming TLDR, I am going to follow up on something I said in comment #5: On page 18, in the last paragraph, [Pat Frank] makes the claim that "...the ship bucket and engine-intake measurement errors displayed non-normal distributions, inconsistent with random Here is a (pseudo) random sequence of 1000 values, generated in a spreadsheet, using a mean of 0.5 and a standard deviation of 0.15. (Due to random variation, the mean of this sample is 0.508, with a standard deviation of 0.147.) If you calculate the serial correlation (point 1 vs 2, point 2 vs 3, etc.) you get r = -0.018. Here is the histogram of the data. Looks pretty "normal" to me. Here is another sequence of values, fitting the same distribution (and with the same mean and standard deviation) as above: How do I know the distribution, mean , and standard deviation are the same? I just took the sequence from the first figure and sorted the values. The fact that this sequence is a normally-distributed collection of values has nothing to do with whether the sequence is random or not. In this second case, the serial correlation coefficient is 0.99989. The sequence is obviously not random. Still not convinced? Let's take another sequence of values, generated as a uniform pseudo-random sequence ranging from 0 to 1, in the same spreadsheet: In this case, the mean is 0.4987, and the standard deviation is 0.292, but the distribution is clearly not normal. The serial correlation R value is -0.015. Here is the histogram. Not perfectly uniform, but this is a random sequence, so we don't expect every sequence to be perfect. It certainly is not normally-distributed. Once again, if we sort that sequence, we will get exactly the same histogram for the distribution, and exactly the same mean and standard deviation. Here is the sorted sequence, with r = You can't tell if things are random by looking at the distribution of values. Don't listen to Pat Frank. 47. bdgwx at 06:17 AM on 29 August, 2023 Another interesting aspect of the hypothetical ±0.2 C uncertainty is that while it may primary represent a systematic component for an individual instrument (say -0.13 C bias for instrument A) when you switch the context to the aggregation of many instruments that systematic component now presents itself as a random component because instruments B, C, etc. would each have different The GUM actually has a note about this concept in section E3.6. Benefit c) is highly advantageous because such categorization is frequently a source of confusion; an uncertainty component is not either “random” or “systematic”. Its nature is conditioned by the use made of the corresponding quantity, or more formally, by the context in which the quantity appears in the mathematical model that describes the measurement. Thus, when its corresponding quantity is used in a different context, a “random” component may become a “systematic” component, and vice versa. This is why when we aggregate temperature measurements spatially we get a lot of cancellation of those individual biases resulting in an uncertainty of the average that at least somewhat scales with 1/sqrt(N). Obviously there will be still be some correlation so you won't get the full 1/sqrt(N) scaling effect, but you will get a significant part of it. This is in direct conflict with Pat Frank's claim that there is no reduction in the uncertainty of an average of temperatures at all. The only way you would not get any reduction in uncertainty is if each and every instrument had the exact same bias. Obviously that is infintesemially unlikely especially given the 10,000+ stations that most traditional datasets assimilate. 48. Bob Loblaw at 07:01 AM on 29 August, 2023 Yes, bdgwx, that is a good point. The "many stations makes for randomness" is very similar to the "selling many sensors makes the errors random when individual sensors have systematic errors". The use of anomalies does a lot to eliminate fixed errors, and for any individual sensor, the "fixed" error will probably be slightly dependent on the temperature (i.e., not the same at -20C as it is at +25C). You can see this in the MANOBS chart (figure 10) in the OP. As temperatures vary seasonally, using the monthly average over 10-30 years to get a monthly anomaly for each individual month somewhat accounts for any temperature dependence in those errors. ...and then looking spatially for consistency tells us more. One way to look to see if the data are random is to average over longer and longer time periods and see if the RMSE values scale by 1/sqrt(N). If they do, then you are primarily looking at random data. If they scale "somewhat", then there is some systematic error. If they do not change at all, then all error is in the bias (MBE). ...which is highly unlikely, as you state. In terms of air temperature measurement, you also have the question of radiation shielding (Stevenson Screen or other methods), ventilation, and such. If these factors change, then systematic error will change - which is why researchers doing this properly love to know details on station changes. Again, it all comes down to knowing when you are dealing with systematic error or random error, and handling the data (and propagation of uncertainty) properly. 49. Bob Loblaw at 22:53 PM on 29 August, 2023 I will not try to say "one last point" - perhaps "one additional point". The figure below is based on one year's worth of one-minute temperature data taken in an operational Stevenson Screen, with three temperature sensors (same make/model). The graph shows the three error statistics mentioned in the OP: Root Mean Square Error (RMSE), Mean Bias Error (MBE), and the standard deviation (Std). These error statistics compare each pair of sensors: 1 to 2, 1 to 3, and 2 to 3. The three sensors generally compare within +/-0.1C - well within manufacturer's specifications. Sensors 2 and 3 show an almost constant offset between 0.03C and 0.05C (MBE). Sensor 1 has a more seasonal component, so comparing it to sensors 2 or 3 shows a MBE that varies roughly from +0.1C in winter (colder temperatures) to -0.1C in summer (warmer temperatures). The RMSE error is not substantially larger than MBE, and the standard deviation of the differences is less than 0.05C in all cases. This confirms that each individual sensor exhibits mostly systematic error, not random error. We can also approach this my looking at how the RMSE statistic changes when we average the data over longer periods of time. The following figure shows the RMSE for these three sensor pairings, for two averaging periods. The original 1-minute average in the raw data, and an hourly average (sixty 1-minute readings). We see that the increased averaging has had almost no effect on the RMSE. This is exactly what we expect when the differences between two sensors have little random variation. If the two sensors disagree by 0.1C at the start of the hour, they will probably disagree by very close to 0.1C throughout the hour. As mentioned by bdgwx in comment 47, when you collect a large number of sensors across a network (or the globe), then these differences that are systematic on a 1:1 comparison become mostly random globally. 50. Bob Loblaw at 05:07 AM on 30 August, 2023 ...and, to put data where my mouth is.... I claimed that using anomalies (expressing each temperature as a difference from its monthly mean) would largely correct for systematic error in the temperature measurements. Here, repeated from comment 49, is the graph of error statistics using the original data, as-measured. ...and if we calculate monthly means for each individual sensor, subtract that monthly mean from each individual temperature in the month, and then do the statistics comparing each pair of sensors (1-2, 1-3, and 2-3), here is the equivalent graph (same scale). Lo and behold, the MBE has been reduced essentially to zero - all within the range -0.008 to +0.008C. Less than one one-hundredth of a degree. With MBE essentially zero, the RMSE and standard deviation are essentially the same. The RMSE is almost always <0.05C - considerably better than the stated accuracy of the temperature sensors, and considerably smaller than if we leave the MBE The precision of the sensors (small standard deviation) can detect changes that are smaller than the accuracy (Mean Bias Error). Which is one of the reasons why global temperature trends are analyzed using temperature anomalies. 1 2 Next You need to be logged in to post a comment. Login via the left margin or if you're new, register here.
{"url":"https://skepticalscience.com/frank_propagation_uncertainty.html","timestamp":"2024-11-02T03:10:17Z","content_type":"application/xhtml+xml","content_length":"179687","record_id":"<urn:uuid:5f2ac20f-3f01-440f-a99d-b77008c5f6e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00561.warc.gz"}
MATH 110 UMKC Exponential and Logarithmic Functions Questions - Course Help Online math homework . need to be doe. it wont take you time………………………… Written Assignment #2: Exponential and Logarithmic Functions: Graphing and Instructions: This assignment is to be completed and submitted ONLINE on or before APRIL 13th. Use the BOXES provided to show all of your work, and make your graphs clear and neat. Points will be deducted if your graphs are unnecessarily small or difficult to read. Problem 1. Let 𝑓(𝑥) = 2𝑥 , and let 𝑔(𝑥) = log 2 𝑥. Complete tables, then, graph BOTH functions on the same set of axes. 7 𝑥 Problem 2. Let 𝑓(𝑥) = (4) , and let 𝑔(𝑥) = log 7/4 𝑥. Complete tables, then, graph BOTH functions on the same set of axes. 2 𝑥 Problem 3. Let 𝑓(𝑥) = (5) . Complete table and graph the function 𝑓(𝑥). Problem 4. The amount of electricity that a solar panel is capable of producing slowly decays over time. After ten years, a solar panel produces 89% of the electricity that it was able to produce when it was brand new. Find the exponential decay constant k. If a solar panel is initially capable of producing 450 watts of power, how long will it take before the solar panel is only able to produce 300 watts of power? Problem 5. Suppose you invest $3,000 at an interest rate of %6. If the interest is compounded semiannually, how much money will you have after 15 years? If the interest is compounded semiannually, how long will it take for you to have $14,000? If the interest is compounded continuously, how much money will you have after 15 years? If the interest is compounded continuously, how long will it take for you to have $14,000? Problem 6. The total number of electric cars that are sold each year has been growing exponentially. In the initial year of major production (year zero), 50,000 electric cars were sold world wide. Four years later, the number of cars sold in a year had increased to 500,000. Find the exponential growth constant k and write down an exponential function A(t) that models the number of electric cars sold in year t of major production. According to this model, how many cars will be sold in year ten? How many years (after the initial year of major production) will pass before the number of electric cars sold in a year reaches 100,000,000?
{"url":"https://coursehelponline.com/math-110-umkc-exponential-and-logarithmic-functions-questions/","timestamp":"2024-11-13T22:33:58Z","content_type":"text/html","content_length":"43017","record_id":"<urn:uuid:30b40b1d-56da-4a50-9212-873c7227aea1>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00604.warc.gz"}
Calculating DV01 for Treasury Futures with CTD switch risk When trading Treasury futures, it is important for traders to understand and manage the risks associated with changes in the cheapest-to-deliver (CTD) bond. One such risk is the CTD switch risk, which can impact the calculation of DV01. In this blog post, we will discuss how to calculate DV01 for Treasury futures with CTD switch risk. Understanding DV01 DV01, also known as dollar value of 01, is a measure of the change in the price of a bond or a bond portfolio for a 1 basis point (0.01%) change in yield. It is an important metric for fixed income traders and helps them quantify the interest rate risk associated with their positions. Calculating DV01 To calculate DV01 for Treasury futures, traders need to consider the notional value of the futures contract, the modified duration of the CTD bond, and the yield of the CTD bond. The first step is to determine the notional value of the futures contract. This is the contract size multiplied by the current futures price. For example, if the contract size is $100,000 and the current futures price is 101.25, the notional value would be $101,250. Next, traders need to calculate the modified duration of the CTD bond. Modified duration measures the sensitivity of the bond’s price to changes in yield. It takes into account the bond’s coupon rate, time to maturity, and yield to maturity. Traders can use bond pricing models or financial calculators to calculate the modified duration. Finally, traders need to determine the yield of the CTD bond. This can be obtained from the bond market or through financial data providers. Once the notional value, modified duration, and yield of the CTD bond are known, traders can calculate DV01 using the following formula: DV01 = (notional value * modified duration) / 100 For example, if the notional value is $101,250 and the modified duration is 7.5, the DV01 would be: DV01 = (101250 * 7.5) / 100 = $7,593.75 CTD Switch Risk CTD switch risk arises when the CTD bond changes during the life of the futures contract. This can happen due to changes in the yield curve or changes in the characteristics of the underlying bonds. When a CTD switch occurs, the DV01 calculation needs to be adjusted to reflect the new CTD bond. To calculate DV01 with CTD switch risk, traders need to repeat the DV01 calculation using the new CTD bond’s notional value, modified duration, and yield. The adjusted DV01 can then be compared to the original DV01 to assess the impact of the CTD switch on the position’s interest rate risk. Calculating DV01 for Treasury futures is an important aspect of managing interest rate risk. Traders need to consider the notional value, modified duration, and yield of the CTD bond to accurately calculate DV01. Additionally, they should be aware of the CTD switch risk and adjust the DV01 calculation accordingly when a CTD switch occurs. By understanding and managing these risks, traders can make informed decisions and effectively manage their Treasury futures positions.
{"url":"https://daily24apps.com/calculating-dv01-for-treasury-futures-with-ctd-switch-risk/","timestamp":"2024-11-05T10:01:19Z","content_type":"text/html","content_length":"78700","record_id":"<urn:uuid:60c081df-8870-4384-8bf3-a76c9e6ca31e>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00566.warc.gz"}
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "(section:linear-algebra-row-reduction)=\n", "# Row Reduction (Gaussian Elimination)\n", "\n", "**Last Revised on November 8, 2024**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**References:**\n", "\n", "- Section 2.1.1, *Naive Gaussian Elimination*, of {cite}`Sauer`.\n", "- Section 6.1, *Linear Systems of Equations*, of {cite}`Burden-Faires`.\n", "- Section 2.1, *Naive Gaussian Elimination*, of {cite}`Chenney-Kincaid`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", " \n", "This section introduces a basic algorithm for solving a system of $n$ linear equations in $n$ unknowns,\n", "based on the strategy of row-reduction or Gaussian elimination.\n", "You might have seen this strategy in a linear algebra course, but if not, do not worry:\n", "this chapter introduces everything from the beginning, not assuming any previous knowledge of linear algebra beyond matrix notation.\n", "\n", "This section also takes the opportunity to introduce some general tactics for getting from mathematical facts to precise algorithms and then code that use them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} Module numpy.linalg with standard nickname la\n", ":label: remark-modules-numpy-and-linalg\n", "\n", "As mentioned in the introduction, Python package [NumPy](https://numpy.org) provides a lot of useful tools for numerical linear algebra through a module `numpy.linalg`.\n", "\n", "Thus we will often start a section or Jupyter notebook that uses linear algebra with the following import commands,\n", "using standard nicknames `np` and `la`:\n", "```" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# In addition to `numpy`, we will often need tools for linear algebra from its sub-module `linalg`\n", "import numpy as np\n", "import numpy.linalg as la" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# Import some items individually, so they can be used by \"first name only\".\n", "from numpy import array, inf, zeros_like, empty" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Strategy for getting from mathematical facts to a good algorithm and then to its implentation in [Python] code\n", " \n", "Here I take the opportunity to illustrate some useful strategies for getting from mathematical facts and ideas to good algorithms and working code for solving a numerical problem.\n", "The pattern we will see here, and often later, is:\n", "\n", "### Step 1. Get a basic algorithm:\n", "1. Start with mathematical facts (like the equations $\\sum_{j=1}^n a_{ij}x_j = b_i$).\n", "2. Solve to get an equation for each unknown — or for an updated aproximation of each unknown — in terms of other quantitities.\n", "3. Specify an order of evaluation in which all the quantities at right are evaluated earlier.\n", "\n", "In this, it is often best to start with a verbal description before specifying the details in more precise and detailed mathematical form.\n", "\n", " (subsubsection:row-reduction-step2)=\n", "### Step 2. Refine to get a more **robust** algorithm:\n", "1. Identify cases that can lead to failure due to division by zero and such, and revise to avoid them.\n", "2. Avoid inaccuracy due to problems like severe rounding error. One rule of thumb is that anywhere that a zero value is a fatal flaw (in particular, division by zero), a very small value is also a hazard when rounding error is present.\n", "So *avoid very small denominators*. (The section on\n", "[Machine numbers, rounding error and Error Propagation] (section:machine-numbers-rounding-error-error-propagation)\n", "examines this through the phenomenon of **loss of significance**.)\n", "\n", "### Step 3. Refine to get a more **efficient** algorithm\ n", "For example,\n", "- Avoid repeated evaluation of exactly the same quantity.\n", "- Avoid redundant calculations, such as ones whose value can be determined in advance;\n", "for example, values that can be shown in advance to be zero.\n", "- Compare and choose between alternative algorithms." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Row Reduction, a.k.a. Gaussian Elimination\n", "\n", "We start by considering the most basic algorithm, based on ideas seen in a linear algebra course.\n", "\n", "The problem is best stated as a collection of equations for individual numerical values:\n", "\n", "Given coefficients $a_{i,j},\\ 1 \\leq i \\leq n,\\, 1 \\leq j \\leq n$ and right-hand side values $b_i,\\ 1 \\leq i \\leq n$,\n", "solve for the $n$ unknowns $x_j,\\ 1 \\leq j \\leq n$ in the equations\n", "\n", "$$\n", "\\sum_{j=1}^n a_{i,j} x_j = b_i,\\, 1 \\leq i \\leq n.\n", "$$\n", "\n", "In verbal form, the basic strategy of *row reduction* or *row reduction* is this:\n", "\n", "- **Choose** one equation and use it to eliminate one **chosen** unknown from all the other equations, leaving that chosen equation plus $n-1$ equations in $n-1$ unknowns.\n", "- Repeat recursively, at each stage using one of the remaining equations to eliminate one of the remaining unknowns from all the other equations.\n", "- This gives a final equation in just one unknown, preceeded by an equation in that unknown plus one other, and so on: solve them in this order, from last to first." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Determining those choices, to produce a first algorithm: \"Naive row reduction\"\n", "\n", "A precise algorithm must include rules specifying all the choices indicated above.\n", "The simplest \ "naive\" choice, which works in most but not all cases, is to eliminate from the top to bottom and left to right:\n", "\n", "- Use the first equation to eliminate the first unknown from all other equations.\n", "- Repeat recursively, at each stage using the first remaining equation to eliminate the first remaining unknown. Thus, at step $k$, equation $k$ is used to eliminate unknown $x_k$.\ n", "- This gives one equation in just the last unknown $x_n$; another equation in the last two unknowns $x_{n-1}$ and $x_n$, and so on: solve them in this reverse order, evaluating the unknowns from last to first." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This usually works, but can fail because at some stage the (updated) $k$-th equation might not include the $k$-th unknown: that is, its coefficient might be zero, leading to division by zero.\n", "\n", "We will refine the algorithm to deal with that in the section on [Partial Pivoting](section:linear-algebra-pivoting)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} Using Numpy for matrices, vectors and their products\n", ":label: remark-numpy-matrix-product\n", "\n", "As of version 3.5 of Python, vectors, matrices, and their products can be handled very elegantly using Numpy arrays, with the one quirk that the product is denoted by the at-sign `@`.\n", "That is, for a matrix $A$ and compatible matrix or vector $b$ both stored in Numpy arrays, their product is given by `A @ b`.\n", "\n", "This means that, along with my encouragement to totally ignore *Python* arrays in favor of *Numpy* arrays, and to usually avoid Python *lists* when working with numerical data, I also recommend that you ignore the now obsolescent Numpy `matrix` data type, if you happen to come across it in older material on Numpy.\n", "\n", "**Aside:** Why not `A * b`? Because that is the more general \"point-wise\" array product:\n", "`c = A * b` gives array `c` with `c[i,j]` equal to `A[i,j] * b [i,j]`, which is not how matrix multiplication works.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The general case of solving $Ax = b$, using Python and NumPy\n", "\n", "The problem of solving $Ax = b$ in general, when all you know is that $A$ is an $n \\times n$ matrix and $b$ is an $n$-vector, can in most cases be handled well by using standard software rather than by writing your own code. Here is an example in Python, solving\n", "\n", "$$\n", "\\left[ \\begin{array}{rrr} 4 & 2 & 7 \\\\ 3 & 5 & -6 \\\\ 1 & -3 & 2 \\end{array} \\right]\n", "\\left[ \\ begin{array}{r} x_1 \\\\ x_2 \\\\ x_3 \\end{array} \\right]\n", "= \\left[ \\begin{array}{r} 2 \\\\ 3 \\\\ 4 \\end{array} \\right]\n", "$$\n", "\n", "using the `array` type from package `numpy` and the function `solve` from the linear algebra module `numpy.linalg`." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A =\n", "[[ 4. 2. 7.]\n", " [ 3. 5. -6.]\n", " [ 1. -3. 2.]]\n", "b = [2. 3. 4.]\n", "A @ b = [42. -3. 1.]\n" ] } ], "source": [ "A = array([[4., 2., 7.], [3., 5., -6.],[1., -3., 2.]])\n", "print(f\"A =\\n{A}\")\n", "b = array([2., 3., 4.])\n", "print(f\"b = {b}\")\n", "print(f\"A @ b = {A @ b}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} floating point numbers versus integers\n", ":label: remark-float-vs-int\n", "\n", "It is important to specify that the entries are real numbers (type \"float\");\n", "otherwise Numpy does integer arithmetic.\ n", "\n", "One way to do this is as above: putting a decimal point in the numbers (or to be lazy, in at least one of them!)\n", "\n", "Another is to tell the function `array` that the type is float:\ n", "```" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "A = array([[4, 2, 7], [3, 5, -6],[1, -3, 2]], dtype=float)\n", "b = array([2, 3, 4], dtype= float)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "numpy.linalg.solve says that the solution of Ax = b is\ n", "x = [ 1.81168831 -1.03246753 -0.45454545]\n", "\n", "As a check, the residual (or backward error) is\n", " r = b-Ax = [0.0000000e+00 0.0000000e+00 8.8817842e-16],\n", "and its infinity (or 'maximum') norm is ||r|| = 8.881784197001252e-16\n", "\n", "Aside: another way to compute this is with max(abs(r)):\n", "||r|| = 8.881784197001252e-16\n", "and its 1-norm is ||r|| = 8.881784197001252e-16\n" ] } ], "source": [ "x = la.solve(A, b)\n", "print(\"numpy.linalg.solve says that the solution of Ax = b is\")\n", "print(f\"x = {x}\")\n", "# Check the backward error, also known as the residual\n", "r = b - A @ x\n", "print(f\"\\nAs a check, the residual (or backward error) is\")\n", "print(f\" r = b-Ax = {r},\")\n", "print(f\"and its infinity (or 'maximum') norm is || r|| = {la.norm(r, inf)}\")\n", "print(\"\\nAside: another way to compute this is with max(abs(r)):\")\n", "print(f\"||r|| = {max(abs(r))}\")\n", "print(f\"and its 1-norm is ||r|| = {la.norm(r, 1)}\") " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} Not quite zero values and rounding\n", ":label: remark-1-not-quite-zero-values-and-rounding\n", "\n", "Some values here that you might hope to be zero are instead very small non-zero numbers, with exponent $10^{-16}$,\n", "due to rounding error in computer arithmetic.\n", "For details on this (like why \"-16\" in particular) see section\n", "[Machine Numbers, Rounding Error and Error Propagation](section:machine-numbers-rounding-error-error-propagation).\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The naive row reduction algorithm, in pseudo-code\n", "\n", "Here the elements of the transformed matrix and vector after step $k$ are named $a_{i,j}^{(k)}$ and $b_{k}^{(k)}$, so that the original values are $a_{i,j}^{(0)} = a_{i,j}$ and $b_{i}^{(0)} = b_{i}$.\n", "\n", "The name $l_{i,k}$ is given to the multiple of row $k$ that is subtracted from row $i$ at step $k$. This naming might seem redundant, but it becomes very useful later,\n", "in the section on [LU factorization](section:linear-algebra-lu-factorization)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:algorithm} naive row reduction\n", ":label: naive-gaussian-elimination\n", "\n", "for k from 1 to n-1 $\\qquad$ *Step k: get zeros in column k below row k:*\n", " \n", "$\\quad$ for i from k+1 to n\n", " \n", "$\\qquad$ *Evaluate the multiple of row k to subtract from row i:*\n", " \n", "$\\quad\\quad l_{i,k} = a_{i,k}^{(k-1)}/a_{k,k}^{(k-1)}$ $\\qquad$ **If** $a_{k,k}^{(k-1)} \\neq 0$!\n", " \n", "$\\qquad$ *Subtract $(l_{i,k}$ times row k) from row i in matrix A ...:*\n", " \n", "$\\quad\\quad$ for j from 1 to n\n", " \n", "$\\quad\\quad\\quad a_{i,j}^{(k)} = a_{i,j}^{(k-1)} - l_{i,k} a_{k,j}^{(k-1)}$\n", " \n", "$\\quad\\quad$ end\n", " \n", "$\\qquad$ ... and at right, subtract $(l_{i,k}$ times $b_k)$ from $b_i$:\n", " \n", "$\\quad\\quad b_i^{(k)} = b_i^{(k-1)} - l_{i,k} b_{k}^{(k-1)}$ \n", " \n", "$\\quad$ end\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The rows before $i=k$ are unchanged, so they are ommited from the update;\n", "however, in a situation where we need to complete the definitions of $A^{(k)}$ and $b^{(k)}$ we would also need the following inside the `for k` loop:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " ```{prf:algorithm} Inserting the zeros below the main diagonal\n", ":label: gaussian-elimination-inserting-zeros\n", "\n", "$\\quad$ for i from 1 to k\n", " \n", "$\\quad\\quad$ for j from 1 to n\n", " \n", "$\\quad\\quad\\quad a_{i,j}^{(k)} = a_{i,j}^{(k-1)}$\n", " \n", "$\\quad\\quad$ end\n", " \n", "$\\quad\\quad b_i^{(k)} = b_i^{(k-1)}$\n", " \n", "$\\quad$ end\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, the algorithm will usually be implemented by overwriting the previous values in an array with new ones, and then this part is redundant." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next improvement in efficiency: the updates in the first $k$ columns at step $k$ give zero values (that is the key idea of the algorithm!), so there is no need to compute or store those zeros, and thus the only calculations needed in the above `for j from 1 to n` loop are covered by `for j from k+1 to n`.\n", "Thus from now on we use only the latter: except when, for demonstration purposes, we need those zeros.\n", "\n", "Thus, the standard algorithm looks like this:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:algorithm} basic row reduction\n", ":label: gaussian-elimination\n", "\n", "for k from 1 to n-1 $\\qquad$ *Step k: Get zeros in column k below row k:*\n", " \n", "$\\quad$ for i from k+1 to n $\\qquad$ *Update only the rows that change: from k+1 on:*\n", " \n", "$\\qquad$ *Evaluate the multiple of row k to subtract from row i:*\n", " \n", "$\\quad\\quad l_{i,k} = a_{i,k}^{(k-1)}/a_{k,k}^{(k-1)}$ $\\qquad$ **If** $a_{k,k}^{(k-1)} \\neq 0$!\n", " \n", "$\\qquad$ *Subtract $(l_{i,k}$ times row k) from row i in matrix A, in the columns that are not automaticaly zero:*\n", " \n", "$\\quad\\quad$ for j from k+1 to n\n", " \n", "$\\quad\\quad\\quad a_{i,j}^{(k)} = a_{i,j}^{(k-1)} - l_{i,k} a_{k,j}^{(k-1)}$\n", " \n", "$\\quad\\quad$ end\n", " \n", "$\\qquad$ *and at right, subtract $(l_{i,k}$ times $b_k)$ from $b_i$:*\n", " \n", "$\\quad\\quad b_i^{(k)} = b_i^{(k-1)} - l_{i,k} b_{k}^{(k-1)}$ \n", " \n", "$\\quad$ end\n", "$end\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} Syntax for for loops and 0-based array indexing\n", ":label: remark-python-for-0-based\n", "\n", "Since array indices in Python (and in Java, C, C++, C#, Swift, etc.) start from zero, not from one, it will be convenient to express linear algebra algorithms in a form compatible with this.\n", "\n", "- Every index is one less than in the above!\n", "Thus in an array with $n$ elements, the index values $i$ are $0 \\leq i < n$, **excluding n**, which is the half-open interval of integers $[0, n)$.\n", "\n", "- In the indexing of an array, one can refer to the part the array with indices $a \\leq i < b$, **excluding b**, with the **slice** notation `a:b `.\n", "\n", "- Similarly, when specifiying the range of consecutive integers $i$, $a \\leq i < b$ in a `for` loop, one can use the expression `range(a,b)`.\n", "\n", "Also, when indices are processed in order (from low to high), these notes will abuse notation slightly, refering to the values as a set — specifically, a semi-open interval of integers.\n", "\n", "For example, the above loop\n", "\n", " for j from k+1 to n:\n", "\n", "first gets all indices lowered by one, to\n", "\n", " for j from k to n-1:\n", "\n", "and then this will sometimes be described in terms of the set of `j` values:\n", "\n", " for j in [k,n):\n", "\n", "which in Python becomes\n", "\n", " for j in range(k, n):\n", " ```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This new notation needs care initially, but helps with clarity in the long run.\n", "For one thing, it means that the indices of an $n$-element array, $[0,n-1)$, are described by `range(0,n)` and by `0:n`.\n", "In fact, the case of \"starting at the beginning\", with index zero, can be abbreviated: `range(n)` is the same as `range(0,n)`, and `:b` ia the same as `0:b`.\n", "\n", "Another advantage is that the index ranges `a:b` and `b:c` together cover the same indices as `a:c`, with no gap or duplication of `b`, and likewise `range(a,b)` and `range(b,c)` combine to cover `range(a,c)`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The naive row reduction algorithm, in Pythonic zero-based pseudo-code\n", "\n", "Here the above notational shift is made, along with eliminating the above-noted redundant formulas for values that are either zero or are unchanged from the previous step.\n", "It is also convenient for $k$ to be the index of the row being used to reduce subsequent rows, and so also the index of the column in which values below the main diagonal are being set to zero." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " ```{prf:algorithm}\n", ":label: gaussian-elimination-0-based\n", "\n", "for k in [0, n-1):\n", " $\\quad$ for i in [k+1, n):\n", " $\\quad\\quad l_{i,k} = a_{i,k}^{(k)}/a_{k,k}^{(k)}\\qquad$ **If** $a_{k,k}^{(k)} \\neq 0$!\n", " $\\quad\\quad$ for j in [k+1, n):\n", " $\\quad\\quad\\quad a_{i,j}^{(k+1)} = a_{i,j}^{(k)} - l_{i,k} a_{k,j}^{(k)}$\n", " $\\quad\\quad$ end\n", " $\\quad\\quad b_i^{(k+1)} = b_i^{(k)} - l_{i,k} b_{k}^{(k)}$\n", " $\\quad$ end\n", " end\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The naive row reduction algorithm, in Python\n", "\n", "Conversion to actual Python code is now quite straightforward; there is litle more to be done than:\n", "\n", "- Change the way that indices are described, from $b_i$ to `b[i]` and from $a_{i,j}$ to `A[i,j]`.\n", "\n", "- Use case consistently in array names, since the quirk in mathematical notation of using upper-case letters for matrix names but lower case letters for their elements is gone!\n", "In these notes, matrix names will be upper-case and vector names will be lower-case (even when a vector is considered as 1-column matrix).\n", "\n", "- Rather than create a new array for each matrix $A^{(0)}$, $A^{(0)}$, etc. and each vector $b^{(0)} $, $b^{(1)}$,\n", "we overwite each in the same array." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark}\n", ":label: remark-1-to-0-easy\n", "\n", "We will see that this simplicity in translation is quite common once algorithms have been expressed with zero-based indexing. The main ugliness is with loops that count backwards; see below.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " for k in range(n-1):\n", " for i in range(k+1, n):\n", " L[i,k] = A[i,k] / A[k,k]\n", " for j in range(k+1, n):\n", " A[i,j] -= L[i,k] * A[k,j]\n", " b[i] -= L[i,k] * b[k]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To demonstrate this, some additions are needed:\n", "- Putting this algorithm into a function.\n", "- Getting the value $n$ needed for the loop, using the fact that it is the length of vector `b`.\n", "- Creating the array $L$.\n", "- Copying the input arrays `A` and `b` into new ones, `U` and `c`, so that the original arrays are not changed. That is, when the row reduction is completed, `U` contains $A^{(n-1)}$ and `c` contains $b^{(n-1)}$.\n", "\n", "Also, for some demonstrations, the zero values below the main diagonal of `U` are inserted, though usually they would not be needed." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "def rowReduce(A, b, demoMode= False):\n", " \"\"\"To avoid modifying the matrix and vector specified as input,\n", " they are copied to new arrays, with the method .copy()\n", " Warning: it does not work to say \"U = A\" and \"c = b\";\n", " this makes these names synonyms, referring to the same stored data.\n", " \"\"\" \n", " U = A.copy()\n", " c = b.copy()\n", " n = len(b)\n", " # The function zeros_like() is used to create L with the same size and shape as A,\n", " # and with all its elements zero initially.\n", " L = np.zeros_like(A)\n", " for k in range(n-1):\n", " for i in range(k+1, n):\n", " # compute all the L values for column k:\n", " L[i,k] = U[i,k] / U[k,k] # Beware the case where U[k,k] is 0\n", " for j in range(k+1, n):\n", " U[i,j] -= L[i,k] * U[k,j]\n", " # Insert the below-diagonal zeros in column k;\n", " # this is not important for calculations, since those elements of U are not used in backward substitution,\n", " # but it helps for displaying results and for checking the results via residuals.\n", " U[i,k] = 0.\n", " \n", " c[i] -= L[i,k] * c[k]\n", " return (U, c)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** As usual, you could omit the above `def` and instead import this function with" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "from numericalMethods import rowReduce" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "though that actually gives the slightly revised version seen below." ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Row reduction gives\n", "U =\n", "[[ 4. 2. 7. ]\n", " [ 0. 3.5 -11.25]\n", " [ 0. 0. -11. ]]\n", "c = [2. 1.5 5. ]\n" ] } ], "source": [ "(U, c) = rowReduce(A, b)\n", "print(f\"Row reduction gives\\nU =\\n{U}\")\n", "print(f\"c = {c}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take advantage of the fact that we have used `la.solve` to get a very accurate approximation of the solution x of $Ax=b$;\n", "this should also solve $Ux=c$, so check the backward error, a.k.a. the *residual*:" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "The residual (backward error), b-Ax, is [0.0000000e+00 0.0000000e+00 8.8817842e-16], with maximum norm 8.881784197001252e-16.\n", "\n", "The residual for the row reduced form, c-Ux, is [ 0.00000000e+00 -2.22044605e-16 0.00000000e+00], with maximum norm 2.220446049250313e-16.\n" ] } ], "source": [ "r_Ab = b - A@x\n", "print(f\"\\nThe residual (backward error), b-Ax, is {r_Ab}, with maximum norm {max(abs(r_Ab))}.\")\n", "r_Uc = c - U@x\n", "print(f\"\\nThe residual for the row reduced form, c-Ux, is {r_Uc}, with maximum norm {max(abs(r_Uc))}.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} Array slicing in Python\n", ":label: python-array-slicing\n", "\ n", "Operations on a sequence of array indices, with \"slicing\": vectorization\n", "\n", "Python code can specify vector operations on a range of indices $[c,d)$, referred to withthe slice notaiton `c:d`.\n", "For example, the *slice* notation `A[c:d,j]` refers to the array containing the $d-c$ elements `A[i,j]` for $i$ in the semi-open interval $[c,d)$.\n", "\n", "Thus, each of the three arithmetic calculations above can be specified over a range of index values in a single command, eliminating all the inner-most `for` loops;\n", "this is somtimes called *vectorization*.\n", "Only `for` loops that contains other `for` loops remain.\n", "\n", "Apart from mathematical elegance, this usually allows far faster execution.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " for k in range(n-1):\n", " L[k+1:n,k] = U[k+1:n,k] / U[k,k] # compute all the L values for column k\n", " for i in range(k+1, n):\n", " U[i,k+1:n] -= L[i,k] * U[k,k+1:n] # Update row i\ n", " c[k+1:n] -= L[k+1:n,k] * c[k] # update c values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I will break my usual guideline by redefining `rowReduce`, since this is just a different statement of the same algorithm\n", "(though with some added \"demoMode\" output)." ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "def rowReduce(A, b, demomode=False):\n", " \"\"\"To avoid modifying the matrix and vector specified as input,\n", " they are copied to new arrays, with the method .copy()\n", " Warning: it does not work to say \"U = A\" and \"c = b\";\n", " this makes these names synonyms, referring to the same stored data.\n", " \"\"\" \n", " U = A.copy()\n", " c = b.copy()\n", " n = len(b)\n", " # The function zeros_like() is used to create L with the same size and shape as A, and with all its elements zero initially.\n", " L = np.zeros_like(A)\n", " for k in range(n-1):\n", " if demomode: print(f\"Step {k =}\")\n", " # compute all the L values for column k:\n", " L[k+1:,k] = U[k+1:n,k] / U[k,k] # Beware the case where U[k,k] is 0\n", " if demomode:\n", " print(f\"The multipliers in column {k+1} are {L [k+1:,k]}\")\n", " for i in range(k+1, n):\n", " U[i,k+1:n] -= L[i,k] * U[k,k+1:n] # Update row i\n", " c[k+1:n] -= L[k+1:n,k] * c[k] # update c values\n", " if demomode:\n", " # Insert the below-diagonal zeros in column k;\n", " # this is not important for calculations, since those elements of U are not used in backward substitution,\n", " # but it helps for displaying results and for checking the results via residuals.\n", " U[k+1:, k] = 0.\n", " print(f\"The updated matrix is\\n{U}\")\n", " print(f\"The updated right-hand side is\\n{c}\")\n", " return (U, c)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} another way to select some rows or columns of a matrix\n", ":label: python-array-dicing\n", "\n", "As a variant on slicing, one can give a list of indices to select rows or columns a of a matrix; for example:\n", "\n", " A[[r1 r2 r3], :]\n", "\n", "gives a three row part of array `A` and\n", "\n", " A[2:, [c1 c2 c3 c4]]\n", "\n", "selects the indicated four columns — but only from row 2 onwards.\n", "\n", "This gives another way to describe the update of the lower-right block `U[k+1:n,k+1:n]` with a single matrix multiplication:\n", "it is the *outer product* of part of column `k` of `L` after row `k` by the part of row `k` of `U` after column `k`.\n", "\n", "To specify that the piecws of `L` nd `U` are identifies as a 1-column matrix and a 1-row matrix respectively, rather than as vectors,\n", "the above \"row/column list\" method must be used, with the list being just `[k]` in each case.\n", "```" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [], "source": [ "def rowReduce(A, b, demomode=False):\n", " \"\"\"To avoid modifying the matrix and vector specified as input,\n", " they are copied to new arrays, with the method .copy()\n", " Warning: it does not work to say \"U = A\" and \"c = b\";\n", " this makes these names synonyms, referring to the same stored data.\n", " \"\"\" \n", " U = A.copy()\n", " c = b.copy()\n", " n = len(b)\n", " # The function zeros_like() is used to create L with the same size and shape as A, and with all its elements zero initially.\n", " L = np.zeros_like(A)\n", " for k in range(n-1):\n", " if demomode: print(f\"Step {k=}\")\n", " # compute all the L values for column k:\n", " L[k+1:,k] = U[k+1:n,k] / U[k,k] # Beware the case where U[k,k] is 0\n", " if demomode:\n", " print(f\"The multipliers in column {k+1} are {L[k+1:,k]}\")\n", " U[k+1:n,k+1:n] -= L[k+1:n,[k]] @ U[[k],k+1:n] # The new \"outer product\" method. \n", " # Insert the below-diagonal zeros in column k;\n", " # this is not important for calculations, since those elements of U are not used in backward substitution,\n", " # but it helps for displaying results and for checking the results via residuals.\n", " U[k+1:n,k] = 0.0\n", " \n", " c[k+1:n] -= L[k+1:n,k] * c[k] # update c values\n", " if demomode:\n", " U[k+1:n, k] = 0. # insert zeros in column k of U:\n", " print(f\"The updated matrix is\\n{U}\")\n", " print(f\"The updated right-hand side is\\n{c}\")\n", " return (U, c)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Repeating the above testing:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(U, c) = rowReduce(A, b, demomode=True)\n", "print()\n", "print(f\"U =\\n{U}\")\n", "print(f\"c = {c}\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Backward substitution with an upper triangular matrix\n", "\n", "The transformed equations have the form\n", "\ n", "$$\n", "\\begin{split}\n", "u_{1,1} x_1 + u_{1,2} x_2 + u_{1,3} x_3 + \\cdots + u_{1,n} x_n &= c_1 \\\\\n", "\\vdots \\\\\n", "u_{i,i} x_i + u_{i+1,i+1} x_{i+1} + \\cdots + u_{i,n} x_n &= c_i \\ \\\n", "\\vdots \\\\\n", "u_{n-1,n-1} x_{n-1} + u_{n-1,n} x_{n} &= c_{n-1} \\\\\n", "u_{nn} x_n &= c_n \\\\\n", "\\end{split}\n", "$$\n", "\n", "and can be solved from bottom up, starting with $x_n = c_n/u_{n,n}$.\n", "\n", "All but the last equation can be written as\n", "\n", "$$\n", "u_{i,i}x_i + \\sum_{j=i+1}^{n} u_{i,j} x_j = c_i, \\; 1 \\leq i \\leq n-1\n", "$$\n", "\n", "and so solved as\ n", "\n", "$$\n", "x_i = \\frac{c_i - \\sum_{j=i+1}^{n} u_{i,j} x_j}{u_{i,i}},\n", "\\qquad \\textbf{ If } u_{i,i} \\neq 0\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This procedure is **backward substitution**, giving the algorithm\n", "\n", "```{prf:algorithm}\n", ":label: backward-substitution-1\n", "\n", "$x_n = c_n/u_{n,n}$\n", " for i from n-1 down to 1\n", " $\\displaystyle \\quad x_i = \\frac{c_i - \\sum_{j=i+1}^{n} u_{i,j} x_j}{u_{i,i}}$\n", " end\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This works so long as none of the main diagonal terms $u_{i,i}$ is zero, because when done in this order, everything on the right hand side is known by the time it is evaluated.\n", "\n", "For future reference, note that the elements $u_{k,k}$ that must be non-zero here, the ones on the **main diagonal** of $U$, are the same as the elements $a_{k,k}^{(k)}$ that must be non-zero in the row reduction stage above, because after stage $k$, the elements of row $k$ do not change any more: $a_{k,k}^{(k)} = a_{k,k}^{(n-1)} = u_{k,k}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The backward substitution algorithm in zero-based pseudo-code\n", "\n", "Again, a zero-based version is more convenient for programming in Python (or Java, or C++):\n", "\n", "```{prf:algorithm}\n", ":label: backward-substitution-\n", "\n", "$x_{n-1} = c_{n-1}/u_{n-1,n-1}$\n", " for i from n-2 down to 0\n", " $\\displaystyle \\quad x_i = \\frac{c_i - \\sum_{j=i+1}^{n-1} u_{i,j} x_j}{u_{i,i}}$\n", " end\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} Indexing from the end of an array and counting backwards\n", ":label: python-counting-backwards\n", "\n", "To express the above backwards counting in Python, we have to deal with the fact that `range(a,b)` counts upwards and excludes the \"end value\" `b`.\n", "The first part is easy: the extended form `range(a, b, step)` increments by `step` instead of by one, so that `range(a, b, 1)` is the same as `range(a,b)`, and `range(a, b, -1)` counts down: $a, a-1, \\dots, b+1$.\n", "\n", "But it still stops just before $b$, so getting the values from $n-1$ down to $0$ requires using $b= -1$, and so the slightly quirky expression `range(n-1, -1, -1)`.\n", "\n", "One more bit of Python: for an $n$-element single-index array `v`, the sum of its elements $\\sum_{i=0}^{n-1} v_i$ is given by `sum(v)`.\n", "Thus $\\sum_{i=a}^{b-1} v_i$, the sum over a subset of indices $[a,b)$, is given by `sum (v[a:b])`.\n", "\n", "And remember that multiplication of Numpy arrays with `*` is pointwise.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The backward substitution algorithm in Python\n", "\n", "With all the above Python details, the core code for backward substitution is:\n", "\n", " x[n-1] = c[n-1]/U[n-1,n-1]\n", " for i in range(n-2, -1, -1):\n", " x[i] = (c [i] - sum(U[i,i+1:] * x[i+1:])) / U[i,i]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} \n", ":label: mathematically-correct-notation\n", "\n", "Note that the backward substitution algorithm and its Python coding have a nice mathematical advantage over the row reduction algorithm above: the precise mathematical statement of the algorithm does not need any intermediate quantities distinguished by superscripts ${}^{(k)}$, and correspondingly, all variables in the code have fixed meanings, rather than changing at each step.\n", "\n", "In other words, all uses of the equal sign are mathematically correct as equations!\n", "\n", "This can be advantageous in creating algorithms and code that is more understandable and more readily verified to be correct, and is an aspect of the *functional programming* approach.\n", "We will soon go part way to that *functional* ideal, by rephrasing row reduction in a form where all variables have clear, fixed meanings, corresponding to the natural mathematical description of the process:\n", "the method of **LU factorization** introduced in\n", "{ref}`section:linear-algebra-lu-factorization`.\n", " ```" ] }, { "cell_type": "markdown", "metadata": { "jp-MarkdownHeadingCollapsed": true, "tags": [] }, "source": [ "```{prf:remark} Another way to count backwards along an array\n", ":label: another-way-to-count-backwards\n", "\n", "On the other hand, there is an elegant way access array elements \"from the top down\".\n", "Firstly (or \"lastly\") `x[-1]` is the last element: the same as `x[n-1]` when `n = len(x)`, but without needing to know that length $n$.\n", "\n", "More generally, `x[-i]` is `x[n-i]`.\n", "\n", "Thus, one possibly more elegant way to describe backward substitution is to count with an increasing index, the \"distance from the bottom\":\n", "from `x[n-1]` which is `x[-1]` to `x[0]`, which is `x[-n]`.\n", "That is, index `-i` replaces index $n - i$: \n", "\n", " x[-1] = c[-1]/U[-1,-1]\n", " for i in range(2, n+1):\n", " x[-i] = (c[-i] - sum(U[-i,1-i:] * x[1-i:])) / U[-i,-i]\n", "\n", "There is still the quirk of having to \"overshoot\", referring to `n+1` in `range` to get to final index `-n`.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a final demonstration, we put this second version of the code into a complete working Python function and test it:" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [], "source": [ "def backwardSubstitution(U, c, demomode=False):\n", " \ "\"\"Solve U x = c for b.\"\"\"\n", " n = len(c)\n", " x = np.zeros(n)\n", " x[-1] = c[-1]/U[-1,-1]\n", " if demomode: print(f\"x_{n} = {x[-1]}\")\n", " for i in range(2, n+1):\n", " x[-i] = (c[-i] - sum(U[-i,1-i:] * x[1-i:])) / U[-i,-i]\n", " if demomode: print(f\"x_{n-i+1} = {x[-i]}\")\n", " return x" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "which as usual is also available via\n", "\n", " from numericalMethods import backwardSubstitution" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "x = [ 1.81168831 -1.03246753 -0.45454545]\n", "\n", "The residual b - Ax = [0.0000000e+00 0.0000000e+00 8.8817842e-16],\n", "with maximum norm 8.88e-16.\n" ] } ], "source": [ "x = backwardSubstitution(U, c)\n", "print(f\"x = {x}\")\n", "r = b - A@x\n", "print(f\"\\nThe residual b - Ax = {r},\")\n", "print(f\"with maximum norm {max(abs(r)):.3}.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since one is often just interested in the solution given by the two steps of row reduction and then backward substitution,\n", "they can be combined in a single function by composition:" ] }, { "cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [], "source": [ "def solveLinearSystem(A, b): return backwardSubstitution(*rowReduce(A, b));" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "```{prf:remark} On Python\n", ":label: python-splat-*\n", "\n", "The `*` here takes the value to its right (a single tuple with two elements `U` and `c`)\n", "and \"unpacks\" it to the two separate variables `U` and `c` needed as input to `backwardSubstitution`\n", "```" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 1.81168831, -1.03246753, -0.45454545])" ] }, "execution_count": 61, "metadata": {}, "output_type": "execute_result" } ], "source": [ "solveLinearSystem(A, b)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Two code testing hacks: starting from a known solution, and using randomly generated examples\n", "\n", "An often useful strategy in developing and testing code is to create a test case with a known solution;\n", "another is to use random numbers to avoid accidently using a test case that in unusually easy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Prefered Python style is to have all `import` statements at the top,\n", "but since this is the first time we've heard of module `random`,\n", "I did not want it to be mentioned mysteriously above." ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [], "source": [ "import random" ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "x_random = [-0.27352765 0.31619792 -0.02981991]\n" ] } ], "source": [ "x_random = empty(len(b)) # An array the same length as b, with no values specified yet\n", "for i in range(len(x)):\n", " x_random[i] = random.uniform(-1, 1) # gives random real value, from uniform distribution in [-1, 1]\n", "print(f\"x_random = {x_random}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a right-hand side b that automatically makes `x_random` the correct solution:" ] }, { "cell_type": "code", "execution_count": 67, "metadata": {}, "outputs": [], "source": [ "b_random = A @ x_random" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A =\n", "[[ 4. 2. 7.]\n", " [ 3. 5. -6.]\n", " [ 1. -3. 2.]]\n", "\n", "b_random = [-0.67045415 0.9393261 -1.28176123]\n", "\n", "U=\n", "[[ 4. 2. 7. ]\n", " [ 0. 3.5 -11.25]\n", " [ 0. 0. -11. ]]\n", "\n", "Residual c_random - U@x_random = [0. 0. 0.]\n", "\n", "x_computed = [-0.27352765 0.31619792 -0.02981991]\n", "\n", "Residual b_random - A@x_computed = [0. 0. 0.]\n", "\n", "Backward error |b_random - A@x_computed| = 0.0\n", "\n", "Error x_random - x_computed = [0. 0. 0.]\n", "\n", "Absolute error |x_random - x_computed| = 0.0\n" ] } ], "source": [ "print(f\"A =\\n{A}\")\n", "print(f\"\\nb_random = {b_random}\")\n", "(U, c_random) = rowReduce(A, b_random)\n", "print(f\"\\nU=\\n{U}\")\n", "print(f\"\\nResidual c_random - U@x_random = {c_random - U@x_random}\")\n", "x_computed = backwardSubstitution(U, c_random)\n", "print(f\"\\nx_computed = {x_computed}\") \n", "print(f\"\\nResidual b_random - A@x_computed = {b_random - A@x_computed}\")\n", "print(f\"\\nBackward error |b_random - A@x_computed| = {max(abs(b_random - A@x_computed))}\")\n", "print(f\"\\ nError x_random - x_computed = {x_random - x_computed}\")\n", "print(f\"\\nAbsolute error |x_random - x_computed| = {max(abs(x_random - x_computed))}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What can go wrong? Three examples" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:example} An obvious division by zero problem\n", ":label: example-obvious-division-by-zero\n", "\n", "Consider the system of two equations\n", "\n", "\\begin{align*}\n", "x_2 &= 1\n", "\\\\\n", "x_1 + x_2 &= 2\n", "\\end{align*}\n", "\n", "It is easy to see that this has the solution $x_1 = x_2 = 1$;\n", "in fact it is already in \"reduced form\".\n", "However when put into matrix form\n", "\n", "$$\n", "\\left[\\begin{array}{rr} 0 & 1 \\\\ 1 & 1 \\end {array}\\right]\n", "\\left[\\begin{array}{r} x_1 \\\\ x_2 \\end{array}\\right] = \\left[\\begin{array}{r} 1 \\\\ 2 \\end{array}\\right]\n", "$$\n", "\n", "the above algorithm fails, because the fist *pivot element* $a_{11}$ is zero:\n", "```" ] }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "U1 = \n", "[[ 0. 1.]\n", " [ 0. -inf]]\n", "c1 = [ 1. -inf]\n", "x1 = [nan nan]\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/var/folders/xv/1pdcq_w93rl5n_1hk34hgvz1qc6snk/T/ipykernel_79766/ 2412833099.py:15: RuntimeWarning: divide by zero encountered in divide\n", " L[k+1:,k] = U[k+1:n,k] / U[k,k] # Beware the case where U[k,k] is 0\n", "/var/folders/xv/1pdcq_w93rl5n_1hk34hgvz1qc6snk/T/ ipykernel_79766/1659329062.py:5: RuntimeWarning: invalid value encountered in scalar divide\n", " x[-1] = c[-1]/U[-1,-1]\n" ] } ], "source": [ "A1 = array([[0., 1.], [1. , 1.]])\n", "b1 = array([1., 1.])\n", "(U1, c1) = rowReduce(A1, b1)\n", "print(f\"U1 = \\n{U1}\")\n", "print(f\"c1 = {c1}\")\n", "x1 = backwardSubstitution(U1, c1)\n", "print(f\"x1 = {x1}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} On Python \"Infinity\" and \"Not a Number\"\n", "\n", "- `inf`, meaning \"infinity\", is a special value given as the result of operations like division by zero.\n", "Surprisingly, it can have a sign!\n", "(This is available in Python from package Numpy as `numpy.inf`)\n", "- `nan`, meaning \"not a number\", is a special value given as the result of calculations like `0/0`.\n", "(This is available in Python from package Numpy as `numpy.nan`)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:example} A less obvious division by zero problem\n", ":label: example-less-obvious-division-by-zero\n", "\n", "Next consider this system\n", "\n", "$$\n", "\\left[\\begin{array}{rrr} 1 & 1 & 1 \\\\ 1 & 1 & 2 \\\\ 1 & 2 & 2 \\end{array}\\right]\n", "\\left[\\begin{array}{r} x_1 \\\\ x_2 \\\\ x_3 \\end{array}\\right] = \\left[\\begin{array}{r} 3 \\\\ 4 \\\\ 5 \\end{array}\\right]\n", "$$\n", "\n", "The solution is $x_1 = x_2 = x_3 = 1$, and this time none of the diagonal elements is zero,\n", "so it is not so obvious that a division by zero problem will occur, but:\n", "```" ] }, { "cell_type": "code", "execution_count": 74, "metadata": {}, "outputs": [], "source": [ "A2 = array([[1., 1., 1.], [1., 1., 2.],[1., 2., 2.]])\n", "b2 = array([3., 4., 5.])" ] }, { "cell_type": "code", "execution_count": 75, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "U2 = \n", "[[ 1. 1. 1.]\n", " [ 0. 0. 1.]\n", " [ 0. 0. -inf]]\n", "c2 = [ 3. 1. -inf]\n", "x2 = [nan nan nan] \n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/var/folders/xv/1pdcq_w93rl5n_1hk34hgvz1qc6snk/T/ipykernel_79766/2412833099.py:15: RuntimeWarning: divide by zero encountered in divide\n", " L[k+1:,k] = U[k+1:n,k] / U[k,k] # Beware the case where U[k,k] is 0\n", "/var/folders/xv/1pdcq_w93rl5n_1hk34hgvz1qc6snk/T/ipykernel_79766/1659329062.py:5: RuntimeWarning: invalid value encountered in scalar divide\n", " x[-1] = c[-1]/U[-1,-1]\n" ] } ], "source": [ "(U2, c2) = rowReduce(A2, b2)\n", "print(f\"U2 = \\n{U2}\")\n", "print(f\"c2 = {c2}\")\n", "x2 = backwardSubstitution (U2, c2)\n", "print(f\"x2 = {x2}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What happens here is that the first stage subtracts the first row from each of the others ..." ] }, { "cell_type": "code", "execution_count": 77, "metadata": {}, "outputs": [], "source": [ "A2[1,:] -= A2[0,:]\n", "b2[1] -= b2[0]\n", "A2[2,:] -= A2[0,:]\n", "b2[2] -= b2[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "... and the new matrix has the same problem as above at the next stage:" ] }, { "cell_type": "code", "execution_count": 79, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Now A2 is \n", "[[1. 1. 1.]\n", " [0. 0. 1.]\n", " [0. 1. 1.]]\n", "and b2 is [3. 1. 2.]\n" ] } ], "source": [ "print(f\"Now A2 is \\n{A2}\")\n", "print(f\"and b2 is {b2}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thus, the second and third equations are\n", "\n", "$$\n", "\\left[\\begin{array}{rr} 0 & 1 \\\\ 1 & 1 \\end {array}\\right]\n", "\\left[\\begin{array}{r} x_2 \\\\ x_3 \\end{array}\\right] = \\left[\\begin{array}{r} 1 \\\\ 2 \\end{array}\\right]\n", "$$\n", "\n", "with the same problem as in {prf:ref} `example-obvious-division-by-zero`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:example} Problems caused by inexact arithmetic: \"divison by almost zero\"\n", ":label: example-almost-division-by-zero\n", "\n", "The equations\n", "\n", "$$\n", "\\left[\\begin{array}{rr} 1 & 10^{16} \\\\ 1 & 1 \\end{array}\\right]\n", "\\left[\\begin{array}{r} x_1 \\\\ x_2 \\end {array}\\right] = \\left[\\begin{array}{r} 1+10^{16} \\\\ 2 \\end{array}\\right]\n", "$$\n", "\n", "again have the solution $x_1 = x_2 = 1$, and the only division that happens in the above algorithm for row reduction is by that pivot element\n", "$a_{11} = 1, \\neq 0$, so with exact arithmetic, all would be well. But:\n", "```" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A3 = \n", "[[1.e+00 1.e+16]\n", " [1.e+00 1.e+00]]\n", "b3 = [1.e+16 2.e+00]\n" ] } ], "source": [ "A3 = array([[1., 1e16], [1. , 1.]])\n", "b3 = array([1. + 1e16, 2.])\n", "print(f\"A3 = \\n{A3}\")\n", "print(f\"b3 = {b3}\")" ] }, { "cell_type": "code", "execution_count": 83, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "U3 = \n", "[[ 1.e+00 1.e+16]\n", " [ 0.e+00 -1.e+16]]\n", "c3 = [ 1.e+16 -1.e+16]\n", "x3 = [2. 1.]\n" ] } ], "source": [ "(U3, c3) = rowReduce(A3, b3)\n", "print (f\"U3 = \\n{U3}\")\n", "print(f\"c3 = {c3}\")\n", "x3 = backwardSubstitution(U3, c3)\n", "print(f\"x3 = {x3}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This gets $x_2 = 1$ correct, but $x_1$ is completely wrong!\n", "\n", "One hint is that $b_1$, which should be $1 + 10^{16} = 1000000000000001$, is instead just given as $10^{16}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the other hand, all is well with less large values, like $10^{15}$:" ] }, { "cell_type": "code", "execution_count": 86, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A3a = \n", "[[1.e+00 1.e+15]\n", " [1.e+00 1.e+00]]\n", "b3a = [1.e+15 2.e+00]\n" ] } ], "source": [ "A3a = array([[1., 1e15], [1. , 1.]])\n", "b3a = array([1. + 1e15, 2.])\n", "print(f\"A3a = \\n{A3a}\")\n", "print(f\"b3a = {b3a}\")" ] }, { "cell_type": "code", "execution_count": 87, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "U3a = \n", "[[ 1.e+00 1.e+15]\n", " [ 0.e+00 -1.e+15]]\n", "c3a = [ 1.e+15 -1.e+15]\n", "x3a = [1. 1.]\n" ] } ], "source": [ "(U3a, c3a) = rowReduce(A3a, b3a)\n", "print(f\"U3a = \\n{U3a}\")\n", "print(f\"c3a = {c3a}\")\n", "x3a = backwardSubstitution(U3a, c3a)\n", "print(f\"x3a = {x3a}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:example} Avoiding small denominators\n", ":label: example-avoiding-small-denominators\n", "\n", "The first equation in {prf:ref}`example-almost-division-by-zero` can be divided by $10^{16}$ to get an equivalent system with the same problem:\n", "\n", "$$\n", "\\left[\\begin{array}{rr} 10^{-16} & 1 \\\\ 1 & 1 \\end{array}\\right]\n", "\\left[\\begin{array}{r} x_1 \\\\ x_2 \\end{array}\\right] = \\left[\\begin {array}{r} 1+10^{-16} \\\\ 2 \\end{array}\\right]\n", "$$\n", "\n", "Now the problem is more obvious:\n", "this system differs from the system in {prf:ref}`example-obvious-division-by-zero`\n", "just by a tiny change of $10^{-16}$ in that pivot elements $a_{11}$, and the problem is *division by a value very close to zero*.\n", "```" ] }, { "cell_type": "code", "execution_count": 89, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A4 = \n", "[[1.e-16 1.e+00]\n", " [1.e+00 1.e+00]]\n", "b4 = [1. 2.]\n" ] } ], "source": [ "A4 = array([[1e-16, 1.], [1. , 1.]])\n", "b4 = array([1. + 1e-16, 2.])\n", "print(f\"A4 = \\n{A4}\")\n", "print(f\"b4 = {b4}\")" ] }, { "cell_type": "code", "execution_count": 90, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "U4 = \n", "[[ 1.e-16 1.e+00]\n", " [ 0.e+00 -1.e+16]]\n", "c4 = [ 1.e+00 -1.e+16]\n", "x4 = [2.22044605 1. ]\n" ] } ], "source": [ "(U4, c4) = rowReduce(A4, b4)\ n", "print(f\"U4 = \\n{U4}\")\n", "print(f\"c4 = {c4}\")\n", "x4 = backwardSubstitution(U4, c4)\n", "print(f\"x4 = {x4}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One might think that there is no such small denominator in\n", "{prf:ref}`example-almost-division-by-zero`,\n", "but what counts for being \"small\" is magnitude relative to other values — 1 is very small compared to $10^{16}$.\n", "\n", "To understand these problems more (and how to avoid them) we will explore\n", "[Machine Numbers, Rounding Error and Error Propagation] (section:machine-numbers-rounding-error-error-propagation)\n", "in the next section." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## When naive Guassian elimination is safe: diagonal dominance\n", "\n", "There are several important cases when we can guarantee that these problem do not occur.\n", "One obvious case is when the matrix $A$ is diagonal and non-singular (so with all non-zero elements);\n", "then it is already row-reduced and with all denominators in backward substitution being non-zero.\n", "\n", "A useful measure of being \"close to diagonal\" is *diagonal dominance*:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:definition} Strict Diagonal Dominance\n", ":label: definition-strictly-diagonally-dominant\n", "\n", "A matrix $A$ is **row-wise strictly diagonally dominant**,\n", "sometimes abbreviated as just **strictly diagonally dominant** or **SDD**,\n", "if \n", "\n", "$$\n", "\\sum_{1 \\leq k \\leq n, k \\neq i}|a_{i,k}| < |a_{i,i}|\n", "$$\n", "\n", "Loosely, each main diagonal \"dominates\" in size over all other elements in its row.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " ````{prf:definition} Column-wise Strict Diagonal Dominance\n", ":label: definition-columnwise-strictly-diagonally-dominant\n", "\n", "If instead\n", "\n", "$$\n", "\\sum_{1 \\leq k \\leq n, k \\neq i}|a_{k,i}| < |a_{i,i}|\n", "$$\n", "\n", "(so that each main diagonal element \"dominates its column\")\n", "the matrix is called **column-wise strictly diagonally dominant**.\n", "\n", "Note that this is the same as saying that the transpose $A^T$ is SDD.\n", "````\n", "\n", "**Aside:** If only the corresponding non-strict inequality holds, the matrix is called *diagonally dominant*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:theorem}\n", ":label: theorem-row-reduction-preserves-sdd\n", "\n", "For any strictly diagonally dominant matrix $A$, each of the intermediate matrices $A^{(k)}$ given by the naive Gaussan elimination algorithm is also strictly diagonally dominant, and so the final upper triangular matrix $U$ is.\n", "In particular, all the diagonal elements $a_{i,i}^{(k)}$ and $u_{i,i}$ are non-zero, so no division by zero occurs in any of these algorithms, including the backward substitution solving for $x$ in $Ux = c$.\n", "\n", "The corresponding fact also true if the matrix is column-wise strictly diagonally dominant: that property is also preserved at each stage in naive Guassian elimination.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thus in each case the diagonal elements — the elements divided by in both row reduction and backward substitution — are in some sense safely away from zero.\ n", "We will have more to say about this in the sections on\n", "[pivoting](section:linear-algebra-pivoting)\n", "and\n", "[LU factorization](section:linear-algebra-lu-factorization)\n", "\n", "For a column-wise SDD matrix, more is true: at stage $k$, the diagonal dominance says that\n", "the pivot elemet on the diagonal, $a_{k,k}^{(k-1)}$, is larger (in magnitude) than any of the elements $a_ {i,k}^{(k-1)}$ below it, so the multipliers $l_{i,k}$ have\n", "\n", "$$\n", "|l_{i,k}| = |a_{i,k}^{(k-1)}/a_{k,k}^{(k-1)}| < 1.\n", "$$\n", "\n", "As we will see when we look at the effects of rounding error in the sections on\n", "[Machine Numbers, Rounding Error and Error Propagation](section:machine-numbers-rounding-error-error-propagation)\n", "and\n", "[Error bounds for linear algebra](section:linear-algebra-errors)\n", "keeping intermediate values small is generally good for accuracy, so this is a nice feature." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{prf:remark} Positive definite matrices\n", ":label: remark-positive-definite-matrices-also-work\n", "\n", "Another class of matrices for which naive row reduction works well is **positive definite matrices** which arise in any important situations; that property is in some sense more natural than diagonal dominance.\n", "However that topic will be left for later.\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{exercise-start}\n", ":label: linear-algebra-row-reduction-exercise-1\n", "```\n", "Solve $A \\vec{x} = \\vec b$\n", "with\n", "\n", "$$\n", "A = \\left[ \\begin{array}{ccc} 4. & 2. & 1. \\\\ 9. & 3. & 1. \\\\ 25. & 5. & 1. \\end {array} \\right],\n", "\\quad\n", "\\vec b = \\left[ \\begin{array}{c} 0.693147 \\\\ 1.098612 \\\\ 1.609438 \\end{array} \\right],\n", "$$\n", "using *naive Gaussian elimination,* computing all intermediate values as decimal approximations rounded to four significant digits -- no fractions!\n", "Call this approximate solution $\\vec{x}_1$.\n", "\n", "Then compute the residual $\\vec{r}_1 = \\vec{b} - A \\vec{x}_1$ and its norm $\\| \\vec{r}_1 \\|_\\infty$.\n", "This should be done with high accuracy (not rounding to four decimal places) and could be done using a Python notebook as a \ "calculator\".\n", "\n", "Next, use Python software to compute the condition number of $A$, $K(A) = K_\\infty(A)$,\n", "and use this to get a bound on the absolute error in each of the above approximations.\n", "\n", "Finally some \"cheating\": use Python software to compute the \"exact\" solution and use this to compute the actual absolute error;\n", "compare this to the bound computed above.\n", "```{exercise-end}\n", "```" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.4" } }, "nbformat": 4, "nbformat_minor": 4 }
{"url":"https://lemesurierb.people.charleston.edu/numerical-methods-and-analysis-python/_sources/main/linear-algebra-1-row-reduction-python.ipynb","timestamp":"2024-11-13T12:48:54Z","content_type":"text/plain","content_length":"63798","record_id":"<urn:uuid:095f9735-19d3-4791-a231-31e7fc6eb84d>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00610.warc.gz"}
C# random number generation code This weekend Code Project posted an updated version of my article Simple Random Number Generation. The article comes with C# source code for generating random samples from the following • Cauchy • Chi square • Exponential • Inverse gamma • Laplace (double exponential) • Normal • Student t • Uniform • Weibull After I submitted the revised article I realized I could have easily included a beta distribution generator. To generate a sample from a beta(a, b) distribution, generate a sample u from gamma(a, 1) and a sample v from gamma(b, 1) and return u/(u+v). (See why this works here.) This isn’t the most efficient beta generator possible, especially for some parameters. But it’s not grossly inefficient either. Also, it’s very simple, and the code in that article emphasizes simplicity over efficiency. The code doesn’t use advanced C# features; it could easily be translated to other languages. Related links: 2 thoughts on “C# random number generation code” 1. Wanted to drop a quick note that I ported this to Ruby. My tests are less sophisticated, it’s on my TODO to do it up right, but the random number generation appears to be working correctly. And I added the beta distribution per your comment above. or view the gem at:
{"url":"https://www.johndcook.com/blog/2010/05/03/c-random-number-generation-code/","timestamp":"2024-11-13T13:14:11Z","content_type":"text/html","content_length":"54461","record_id":"<urn:uuid:ab76bf0c-688d-4cc8-8096-b2057e52f0fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00210.warc.gz"}
3. A Script-Fu Tutorial In this training course, we'll introduce you to the fundamentals of Scheme necessary to use Script-Fu, and then build a handy script that you can add to your toolbox of scripts. The script prompts the user for some text, then creates a new image sized perfectly to the text. We will then enhance the script to allow for a buffer of space around the text. We will conclude with a few suggestions for ways to ramp up your knowledge of Script-Fu. Scheme is a dialect of the Lisp family of programming languages. GIMP uses TinyScheme, which is a lightweight interpreter of a subset of the so-called R5RS standard. The first thing to learn is that: Every statement in Scheme is surrounded by parentheses (). The second thing you need to know is that: The function name/operator is always the first item in the parentheses, and the rest of the items are parameters to the function. However, not everything enclosed in parentheses is a function — they can also be items in a list — but we'll get to that later. This notation is referred to as prefix notation, because the function prefixes everything else. If you're familiar with postfix notation, or own a calculator that uses Reverse Polish Notation (such as most HP calculators), you should have no problem adapting to formulating expressions in Scheme. The third thing to understand is that: Mathematical operators are also considered functions, and thus are listed first when writing mathematical expressions. This follows logically from the prefix notation that we just mentioned. Here are some quick examples illustrating the differences between prefix, infix, and postfix notations. We'll add a 1 and 23 together: • Prefix notation: + 1 23 (the way Scheme will want it) • Infix notation: 1 + 23 (the way we „normally” write it) • Postfix notation: 1 23 + (the way many HP calculators will want it) In GIMP, select → → → from the main menu. This will start up the Script-Fu Console window, which allows us to work interactively in Scheme. At the bottom of this window is a text entry field for commands. Here, we can test out simple Scheme commands interactively. Let's start out easy, and add some numbers: (+ 3 5) Typing this in and hitting Enter yields the expected answer of 8 in the center window. The „+” function can take more arguments, so we can add more than one number: (+ 3 5 6) This also yields the expected answer of 14. So far, so good — we type in a Scheme statement and it's executed immediately in the Script-Fu Console window. Now for a word of caution… If you're like me, you're used to being able to use extra parentheses whenever you want to — like when you're typing a complex mathematical equation and you want to separate the parts by parentheses to make it clearer when you read it. In Scheme, you have to be careful and not insert these extra parentheses incorrectly. For example, say we wanted to add 3 to the result of adding 5 and 6 3 + (5 + 6) + 7 = ? Knowing that the + operator can take a list of numbers to add, you might be tempted to convert the above to the following: (+ 3 (5 6) 7) However, this is incorrect — remember, every statement in Scheme starts and ends with parens, so the Scheme interpreter will think that you're trying to call a function named „5” in the second group of parens, rather than summing those numbers before adding them to 3. The correct way to write the above statement would be: (+ 3 (+ 5 6) 7) If you are familiar with other programming languages, like C/C++, Perl or Java, you know that you don't need white space around mathematical operators to properly form an expression: 3+5, 3 +5, 3+ 5 These are all accepted by C/C++, Perl and Java compilers. However, the same is not true for Scheme. You must have a space after a mathematical operator (or any other function name or operator) in Scheme for it to be correctly interpreted by the Scheme interpreter. Practice a bit with simple mathematical equations in the Script-Fu Console until you're totally comfortable with these initial concepts.
{"url":"https://testing.docs.gimp.org/2.99/hu/gimp-using-script-fu-tutorial.html","timestamp":"2024-11-07T20:39:00Z","content_type":"application/xhtml+xml","content_length":"14225","record_id":"<urn:uuid:a51bbf79-c541-42f8-be83-8621608c08d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00320.warc.gz"}
Finite-State Machine A Finite-State Machine (FSM or a State Machine for short) is a computational model to simulate sequential logic or algorithms. The FSM can change the state of the system from one to another in response to some inputs and each state specifies which state to switch to in response to the input. A finite-state machine can be implemented using hardware (i.e. FPGA) or software (i.e C-code). There are two types of state machines: 1. Deterministic FSMs 2. Non-Deterministic FSMs In a non-deterministic finite state machine, the machine can move to any combination of states for each given input. A finite-state machine is called a deterministic, if each of its transitions is uniquely determined by its source state and system input. In other words, Deterministic refers to the uniqueness of the computation run. State-machines are so powerful and popular, that if you look closely around you, you can find a number of systems that employ them in their logic. One example can be turnstile in a subway, which can be represented by the following diagram. Using the Weld app, the above state-machine can be translated to: 10 switch (state) { case locked: if (ticket) { state = open; } else { state = locked; } break; case open: if (push) { state = locked; } else { state = open; } break; default: state = locked; break;} This is obviously a very simplistic example and as the state machine grows bigger, developing and managing its code logic becomes exponentially harder. There are a number of ways for representing and visualizing a state-machines. For a simple machine, a State Transition Table would be the best option due to its simplicity and clarity. However, for more complex machines, this representation becomes impossible to use. UML State Diagrams are best suited for representing state machines as they overcome limitations of transition tables and are able to represent the concepts of nested and parallel states. Weld uses UML State Diagrams to represent State Machines. The Weld application uses UML State Diagrams to represent the Deterministic FSM that represents the logic of your code. Using the UML representation, you can: 1. specify a start transition to denote the initial state to be activated when the state machine executes for the first time 2. create nested state machines and further divide the code logic and visually create complex decision trees 3. add transition actions to execute code snippets for given conditions These are just some of the benefits of UML representation. By using the Weld application you will be able to experience, first hand, how easy it is to create and maintain extremely complex code logic, and just how much functionalities tou can fit into a small controller.
{"url":"https://doc.weld.dev/basics/finite-state-machine/","timestamp":"2024-11-09T12:21:16Z","content_type":"text/html","content_length":"24350","record_id":"<urn:uuid:a3f2e76a-263f-44e8-b2c1-473ebfe4a2c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00396.warc.gz"}
seminars - Spectral Analysis of Graphs using Quantum Probability (1) 7월 24일(금, 오후 3:30 - 5:30) (2) 7월 27일(월, 오전 10:00-12:00) (3) 7월 28일(화, 오전 10:00-12:00) Quantum probability (non-commutative probability) provides a framework for extending the measure-theoretical (Kolmogorovian) probability theory. The idea traces back to von Neumann (1932), who, aiming at the mathematical foundation for the statistical questions in quantum mechanics, initiated a parallel theory by making a selfadjoint operator and a trace play the roles of a random variable and a probability measure, respectively. Since around 1980 quantum probability has developed considerably spreading effects on various fields of mathematics and quantum physics. As one of the recently developed applications, we focus on the asymptotic spectral analysis of growing graphs. These lectures are based mostly on ``Quantum Probability and Spectral Analysis of Graphs," by A. Hora and N. Obata (Springer, 2007). 1. Basic concepts of quantum probability 2. Growing graphs and spectra 3. Quantum decomposition and Interacting Fock spaces 4. Stieltjes transform and continued fractions 5. From Kesten distribution to the semicircle law 6. Notion of independence and central limit theorems
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&l=ko&page=83&sort_index=Time&order_type=desc&document_srl=744035","timestamp":"2024-11-06T21:44:11Z","content_type":"text/html","content_length":"46047","record_id":"<urn:uuid:a5205588-0820-4878-86e2-777ced43c9c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00406.warc.gz"}
imaginary numbers a: imaginary numbers ~ b: your crazy uncle "I like when the metaphors are more complicated than the referent. Here's my answer: imaginary numbers are like your crazy uncle...there's internal consistency to the things he's mumbling, but it may not make much sense to outsiders." Writer: Maserschmidt Date: Aug 21 2014 4:39 PM a: imaginary numbers ~ b: mathematical scaffolding "I am starting to take the view that imaginary numbers are like mathematical scaffolding, they help get to the next level, but not quite strong enough to build a lasting foundation." Writer: Not Stated Date: Aug 21 2014 4:42 PM a: Imaginary numbers ~ b: spirits "Imaginary numbers are like spirits wandering between being and nothingness, and are again much more numerous than real numbers." Writer: Koji Suzuki Date: Aug 21 2014 4:46 PM a: imaginary numbers ~ b: the Holy Ghost "As another instance of Leibniz' theological simulacrums, we have his remark that imaginary numbers are like the Holy Ghost of Christian scriptures-a sort of amphibian, midway between existence and Writer: Not Stated Date: Aug 21 2014 5:08 PM a: Imaginary numbers ~ b: y-axis "It's more like two-dimensional. Real numbers are like the x-axis on the complex plane, and imaginary numbers are like the y-axis. Another way to look at it is, each real number can be described as itself times the multiplicative unit, i.e. 5 is 5*sqrt(1), 24 is 24*sqrt(1), 1 is sqrt(1)2. That's their 'x unit'. Each imaginary number can be described as a number times the imaginary unit, sqrt (-1), or i. We'd say that the number 5i is imaginary and is equal to 5 times the imaginary unit. In geometric terms, it's 5 y-units on the complex plane. When you mix together imaginary and real numbers, you get a complex number, describing both its coordinates on the complex plane, except instead of (5, 10) for 5x and 10y, we have (5 + 10i) for 5*sqrt(1) and 10*sqrt(-1). I hope this is making some kind of sense. " Writer: Appathy Date: Aug 21 2014 5:09 PM a: imaginary numbers ~ b: 90-degree rotation "Imaginary numbers let us rotate numbers. Don't start by defining i as the square root of -1. Show how if negative numbers represent a 180-degree rotation, imaginary numbers represent a 90-degree Writer: Not Stated Date: Jun 25 2017 12:22 PM METAMIA is a free database of analogy and metaphor. Anyone can contribute or search. The subject matter can be anything. Science is popular, but poetry is encouraged. The goal is to integrate our fluid muses with the stark literalism of a relational database. Metamia is like a girdle for your muses, a cognitive girdle.
{"url":"http://www.metamia.com/analogize.php?q=imaginary+numbers","timestamp":"2024-11-13T09:14:24Z","content_type":"text/html","content_length":"22515","record_id":"<urn:uuid:b90f4c7f-2803-4903-b9a1-ddb62473c5bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00007.warc.gz"}
Insurance & Linear Regression Model Example - Analytics Yogi Insurance & Linear Regression Model Example Ever wondered how insurance companies determine the premiums you pay for your health insurance? Predicting insurance premiums is more than just a numbers game—it’s a task that can impact millions of lives. In this blog, we’ll demystify this complex process by walking you through an end-to-end example of predicting health insurance premium charges by demonstrating with Python code example. Specifically, we’ll use a linear regression model to predict these charges based on various factors like age, BMI, and smoking status. Whether you’re a beginner in data science or a seasoned professional, this blog will offer valuable insights into building and evaluating regression models. What is Linear Regression? Linear Regression is a supervised machine learning algorithm used for predicting a numerical dependent variable based on one or more features (independent variables). In the case of insurance, the target variable is the insurance premium (charge), and the features could be age, gender, BMI, and so on. For learning more about linear regression models, check out my other related blogs: The following are key steps which will be explained while building the regression models for predicting health insurance premium charges: • Exploratory data analysis • Feature engineering • Data preparation • Building the model • Model evaluation Exploratory Data Analysis (EDA) of Insurance Data We will work with the insurance data which can be found on this Github page – Insurance Linear Regression Model Example. The dataset contains the following columns: • age: Age of the insured • sex: Gender of the insured • bmi: Body Mass Index • children: Number of dependents/children • smoker: Smoking status • region: Geographical region • charges: Insurance premium charges First and foremost, we will load the data. import pandas as pd # Load the data file_path = '/path/to/insurance.csv' insurance_data = pd.read_csv(file_path) # Display the first few rows Exploratory Data Analysis (EDA) is an essential step to understand the data before building any machine learning model. We’ll look into the following commonly explored aspects: 1. Descriptive Statistics 2. Data Types and Missing Values 3. Univariate Analysis (Distribution of individual features) 4. Bivariate Analysis (Correlation between features and target variable) Descriptive statistics Let’s start with the descriptive statistics to get an overview of the numerical columns in the dataset. # Generate descriptive statistics of the numerical columns The descriptive statistics for the numerical columns are as follows: • Age: Ranges from 18 to 64 years, with a mean age of approximately 39.2 years. • BMI: Ranges from 15.96 to 53.13, with a mean BMI of around 30.7. • Children: Ranges from 0 to 5, with an average of approximately 1.1 children. • Charges: The insurance premium charges vary significantly, ranging from 1121.87 to 63770.43, with a mean of approximately 13270.42. Data types & missing values Next, let’s check the data types of each column and see if there are any missing values. # Check data types and missing values data_info = pd.DataFrame({ 'Data Type': insurance_data.dtypes, 'Missing Values': insurance_data.isnull().sum(), 'Unique Values': insurance_data.nunique() The data types and missing values are as follows: • Age: Integer type, no missing values • Sex: Object (categorical), no missing values • BMI: Float, no missing values • Children: Integer type, no missing values • Smoker: Object (categorical), no missing values • Region: Object (categorical), no missing values • Charges: Float, no missing values All columns have appropriate data types and there are no missing values, which is a great news. Handling missing values is a critical step in the data preprocessing pipeline, as most machine learning algorithms cannot work with missing data directly. Here are some common techniques to deal with missing values: • Removing rows and / or columns • Data imputation (mean/median/mode) • Advanced data imputation (regression modeling) Univariate analysis Let’s move on to the univariate analysis. We’ll start by visualizing the distribution of the numerical variables (age, bmi, children, and charges) and then take a look at the categorical variables ( sex, smoker, and region). We’ll plot histograms for age, bmi, children and charges to understand their distributions. import matplotlib.pyplot as plt import seaborn as sns # Set the style for the visualizations # Plot histograms for numerical variables fig, axes = plt.subplots(2, 2, figsize=(14, 10)) fig.suptitle('Distribution of Numerical Variables') sns.histplot(insurance_data['age'], kde=True, bins=20, ax=axes[0, 0]) axes[0, 0].set_title('Age Distribution') sns.histplot(insurance_data['bmi'], kde=True, bins=20, ax=axes[0, 1]) axes[0, 1].set_title('BMI Distribution') sns.histplot(insurance_data['children'], kde=True, bins=20, ax=axes[1, 0]) axes[1, 0].set_title('Children Distribution') sns.histplot(insurance_data['charges'], kde=True, bins=20, ax=axes[1, 1]) axes[1, 1].set_title('Charges Distribution') plt.tight_layout(rect=[0, 0, 1, 0.96]) Here’s what we can observe from the histograms: • Age Distribution: Most of the insured individuals are between 20 and 30 years old, with fewer individuals above 50. • BMI Distribution: The BMI appears to be normally distributed, centering around 30. • Children Distribution: A large number of insured individuals have no children, followed by those with 1 or 2 children. • Charges Distribution: The distribution of charges is right-skewed, indicating that most people pay lower premiums, but there are some who pay significantly higher premiums. Next, let’s look at the distribution of the categorical variables (sex, smoker, and region) using bar plots. # Plot bar plots for categorical variables fig, axes = plt.subplots(1, 3, figsize=(18, 6)) fig.suptitle('Distribution of Categorical Variables') sns.countplot(x='sex', data=insurance_data, ax=axes[0]) axes[0].set_title('Gender Distribution') sns.countplot(x='smoker', data=insurance_data, ax=axes[1]) axes[1].set_title('Smoker Distribution') sns.countplot(x='region', data=insurance_data, ax=axes[2]) axes[2].set_title('Region Distribution') plt.tight_layout(rect=[0, 0, 1, 0.96]) The bar plots for the categorical variables show the following: • Gender Distribution: The dataset is fairly balanced between males and females. • Smoker Distribution: The number of non-smokers is significantly higher than that of smokers. • Region Distribution: The data is relatively evenly distributed across the four regions, with the ‘southeast’ region having a slightly higher representation. Now that we have a better understanding of the individual features, let’s move on to bivariate analysis to explore the relationships between these features and the target variable (charges). Bivariate Analysis In the bivariate analysis, we’ll focus on understanding the relationship between the individual features and the target variable, charges. We’ll start by plotting scatter plots between the numerical variables (age, bmi, children) and charges to observe any trends or patterns. # Plot scatter plots for numerical variables vs charges fig, axes = plt.subplots(1, 3, figsize=(18, 6)) fig.suptitle('Scatter Plots of Numerical Variables vs Charges') sns.scatterplot(x='age', y='charges', data=insurance_data, ax=axes[0]) axes[0].set_title('Age vs Charges') sns.scatterplot(x='bmi', y='charges', data=insurance_data, ax=axes[1]) axes[1].set_title('BMI vs Charges') sns.scatterplot(x='children', y='charges', data=insurance_data, ax=axes[2]) axes[2].set_title('Children vs Charges') plt.tight_layout(rect=[0, 0, 1, 0.96]) Here are some observations from the scatter plots: • Age vs Charges: There seems to be a positive correlation between age and insurance charges, indicating that older individuals are likely to be charged higher premiums. • BMI vs Charges: There is a general trend showing higher charges for individuals with higher BMI, although the relationship is not as clear-cut. • Children vs Charges: The relationship between the number of children and charges is not very clear from the scatter plot. Charges appear to be distributed across the range for different numbers of children. Next, let’s look at how the categorical variables (sex, smoker and region) relate to the insurance charges. We’ll use box plots for this analysis. # Plot box plots for categorical variables vs charges fig, axes = plt.subplots(1, 3, figsize=(18, 6)) fig.suptitle('Box Plots of Categorical Variables vs Charges') sns.boxplot(x='sex', y='charges', data=insurance_data, ax=axes[0]) axes[0].set_title('Gender vs Charges') sns.boxplot(x='smoker', y='charges', data=insurance_data, ax=axes[1]) axes[1].set_title('Smoker vs Charges') sns.boxplot(x='region', y='charges', data=insurance_data, ax=axes[2]) axes[2].set_title('Region vs Charges') plt.tight_layout(rect=[0, 0, 1, 0.96]) The box plots reveal the following insights: • Gender vs Charges: The median insurance charges are quite similar for both genders, although males seem to have a slightly higher range of charges. • Smoker vs Charges: There’s a significant difference in charges between smokers and non-smokers, with smokers generally facing much higher premiums. • Region vs Charges: The charges appear to be distributed fairly similarly across different regions, with no substantial differences. The EDA has provided valuable insights into how various features relate to insurance charges. Now we are better equipped to build a linear regression model for predicting insurance premiums. As a next step, we will prepare data before we train the model. Feature Engineering for Building Linear Regression Model Before building the model, let’s summarize our understanding of the features based on the exploratory data analysis: 1. Age: Positively correlated with charges; older individuals tend to have higher charges. 2. BMI: Generally, higher BMI corresponds to higher charges, though the relationship isn’t perfectly linear. 3. Children: The number of children doesn’t show a clear trend but could still provide some predictive power. 4. Sex: Gender doesn’t show a significant difference in charges, but we’ll include it for a more comprehensive model. 5. Smoker: A very strong predictor; smokers have much higher charges. 6. Region: Charges are distributed fairly evenly across regions, but we’ll include it to capture any regional nuances. Based on the above, the following is the rationale for feature selection: • Include: Age, BMI, Children, Smoker — these have shown correlations or differences in charges during the EDA. • Conditional Include: Sex, Region — these don’t show strong correlations but could improve the model’s performance by capturing underlying patterns. Data Preparation for Building Linear Regression Model The next step is data preparation for modeling, which includes encoding categorical variables and splitting the data into training and test sets. • Numerical features (age, bmi, children) have been standardized, meaning they’ve been scaled to have a mean of 0 and a standard deviation of 1. sklearn.preprocessing.StandardScaler is used for standardizing numerical data. • Categorical features (sex, smoker, region) have been one-hot encoded to convert them into a format that can be provided to machine learning algorithms. sklearn.preprocessing.OneHotEncoder is used for encoding categorical variables. In addition, we also split the data in training and testing set. This would be used for training the model and evaluating the model performance. from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import ColumnTransformer # Define the features and the target X = insurance_data.drop('charges', axis=1) y = insurance_data['charges'] # Identify numerical and categorical columns numerical_cols = ['age', 'bmi', 'children'] categorical_cols = ['sex', 'smoker', 'region'] # Preprocessing for numerical data: standardization numerical_transformer = StandardScaler() # Preprocessing for categorical data: one-hot encoding categorical_transformer = OneHotEncoder(handle_unknown='ignore') # Bundle preprocessing for numerical and categorical data preprocessor = ColumnTransformer( ('num', numerical_transformer, numerical_cols), ('cat', categorical_transformer, categorical_cols)]) # Split data into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) Train the Linear Regression Model for Predicting Insurance Premium Charges The next step is training the linear regression model to predict the health insurance premium charges. from sklearn.pipeline import Pipeline from sklearn.linear_model import LinearRegression # Define the model model = LinearRegression() # Create and evaluate the pipeline pipeline = Pipeline(steps=[('preprocessor', preprocessor), ('model', model) # Fit the model using training data pipeline.fit(X_train, y_train) Evaluating the Linear Regression Model Now that the model is trained, its time to evaluate the linear regression model performance. Here is the code: from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score # Predict on test data y_pred = pipeline.predict(X_test) # Evaluate the model mse = mean_squared_error(y_test, y_pred) rmse = mean_squared_error(y_test, y_pred, squared=False) mae = mean_absolute_error(y_test, y_pred) r2 = r2_score(y_test, y_pred) evaluation_metrics = { 'Mean Squared Error': mse, 'Root Mean Squared Error': rmse, 'Mean Absolute Error': mae, 'R-squared': r2 The regression model’s performance metrics on the test set are as follows: • Mean Squared Error (MSE): 31,860,50031,860,500 • Root Mean Squared Error (RMSE): 5,644.515,644.51 • Mean Absolute Error (MAE): 3,942.913,942.91 • R-squared: 0.8000.800 The following can be interpreted based on above metrics: • The R-squared value of 0.8000.800 suggests that the model explains approximately 80% of the variance in the insurance charges, which is quite good. • RMSE and MAE give us an idea of the average prediction error in the units of the target variable (charges). Lower values for these metrics are generally better. So, there we have it—a comprehensive guide to predicting insurance premiums using Linear Regression. From understanding the intricacies of insurance data to diving deep into exploratory data analysis, and finally building and evaluating our model, we’ve covered quite a bit of ground. But what did we really learn? 1. Linear Regression is Powerful Yet Simple: Even with its simplicity, a Linear Regression model can offer a high degree of accuracy for predicting insurance premiums. It’s a great starting point for anyone new to machine learning. 2. Data Understanding is Crucial: Before jumping into any machine learning model, a thorough exploratory data analysis (EDA) is indispensable. It not only helps in feature selection but also gives you a better understanding of the data you’re working with. 3. Preprocessing Matters: Handling missing values, encoding categorical variables, and standardizing features are critical steps that can significantly impact your model’s performance. 4. Evaluation is Key: Always use metrics like MSE, RMSE, MAE, and R-squared to evaluate your model’s performance. It’s not just about building a model; it’s about understanding how well it’s Latest posts by Ajitesh Kumar (see all) I found it very helpful. However the differences are not too understandable for me Very Nice Explaination. Thankyiu very much, in your case E respresent Member or Oraganization which include on e or more peers? Such a informative post. Keep it up Thank you....for your support. you given a good solution for me. Ajitesh Kumar I have been recently working in the area of Data analytics including Data Science and Machine Learning / Deep Learning. I am also passionate about different technologies including programming languages such as Java/JEE, Javascript, Python, R, Julia, etc, and technologies such as Blockchain, mobile computing, cloud-native technologies, application security, cloud computing platforms, big data, etc. I would love to connect with you on Linkedin. Check out my latest book titled as First Principles Thinking: Building winning products using first principles thinking. Posted in Data Science, Insurance, Machine Learning, statistics. Tagged with Data Science, machine learning, statistics.
{"url":"https://vitalflux.com/insurance-linear-regression-model-example/","timestamp":"2024-11-13T15:10:06Z","content_type":"text/html","content_length":"129779","record_id":"<urn:uuid:973d3e3d-71a6-4198-bc23-5bb520618410>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00327.warc.gz"}
American Mathematical Society On positive contractions in $L^{p}$-spaces HTML articles powered by AMS MathViewer Trans. Amer. Math. Soc. 257 (1980), 261-268 DOI: https://doi.org/10.1090/S0002-9947-1980-0549167-7 PDF | Request permission Let T denote a positive contraction $(T \geqslant 0, \left \| T \right \| \leqslant 1)$ on a space ${L^p}(\mu ) (1 < p < + \infty )$. A primitive nth root of unity $\varepsilon$ is in the point spectrum $P\sigma (T)$ iff it is in $P\sigma (T’)$; if so, the unimodular group generated by $\varepsilon$ is in both $P\sigma (T)$ and $P\sigma (T’)$. In turn, this is equivalent to the existence of n-dimensional Riesz subspaces of ${L^p}$ and ${L^q}({p^{ - 1}} + {q^{ - 1}} = 1)$ which are in canonical duality and on which T (resp., $T’$) acts as an isometry. If, in addition, T is quasi-compact then the spectral projection associated with the unimodular spectrum of T (resp., $T’$) is a positive contraction onto a Riesz subspace of ${L^p}$ (resp., ${L^q}$) on which T (resp., $T’$) acts as an isometry. References N. Dunford and J. T.Schwartz, Linear operators, vol. I, Wiley-Interscience, New York, 1958. Similar Articles • Retrieve articles in Transactions of the American Mathematical Society with MSC: 47B55 • Retrieve articles in all journals with MSC: 47B55 Bibliographic Information • © Copyright 1980 American Mathematical Society • Journal: Trans. Amer. Math. Soc. 257 (1980), 261-268 • MSC: Primary 47B55 • DOI: https://doi.org/10.1090/S0002-9947-1980-0549167-7 • MathSciNet review: 549167
{"url":"https://www.ams.org/journals/tran/1980-257-01/S0002-9947-1980-0549167-7/","timestamp":"2024-11-09T10:09:54Z","content_type":"text/html","content_length":"57088","record_id":"<urn:uuid:d9ec4dd9-d1c4-4155-8b4d-0d946e390f12>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00634.warc.gz"}
Decibels (dB) for Part 15 Use Much of this information is posted around this site but here it is presented in one place. A Quick Reference: For Voltage Much of this information is posted around this site but here it is presented in one place. A Quick Reference: For Voltage dB = 20log(V1/V2) V1/v2 = 10^(dB/20) dBuV = 20log(V/1uV) Voltage = 1uV(10^(dBuV/20)) For Power dB = 10log(P1/P2) P1/P2 = 10^(dB/10) dBm = 10log(P/1mW) Power = 1mW(10^(dBm/10)) 3dB is double the power and 10 dB is 10 times the power. 6dB is double the voltage and 20 dB is 10 times the voltage. Detailed Information and Examples The best way to understand what follows is to use a scientific calculator (one which does Logs and exponents of 10) and work the examples presented. As with many things, this can be hard at first but it gets easier with practice. A fundamental to remember is that the decibel is always a ratio which makes dB convenient when comparing two measurements. For example if a voltage increases from 2 to 8 volts this increase is 20log (v1/v2) = 20log(8/2) = 12dB. If the voltage decreases then the dB is negative. Decibels are also used to compare voltages at different parts of a circuit. For example, if the input to an amplifier is 0.7 Volts and the output is 15 Volts then the gain (recall that dB is a ratio) G = 20Log(15/0.7) = 26.6 dB. There is a “trick” which can be used to express a voltage in dB as an absolute but in fact it is still a ratio. For example, dBuV is often used to express the strength of an electric field with the understanding that the reference voltage is 1 microvolt (uV). A field strength of 250 uV/m can be expressed as FS = 20log(250uV/1uV) = 47.96 dBuV. A dBuV reading can be converted to Volts by using V = (1uV)10^(dBuV/20) which is for this example V = (1uV)10^(47.96dBuV/20) = 250 uV. (Try this by rounding 47.96dBuV up to 50dBuV and you will see how the log scale is not intuitive). Of interest ot part 15 hobbyists is range so how are dBuV measurements used for this? Suppose we use a receiver which displays the received signal strength in dBuV and make a measurement of signal strength 200 feet from the transmit antenna and it is 55 dBuV. With tuning the signal increases to 60 dBuV. This doesn’t seem like much improvement but since dBs are logarithmic it actually is. Calculating: V = (1uV)10^(55dBuV/20) = 562 uV. After tuning V = (1uV)10^(60dBuV/20) = 1000 uV and since range is approximately proportional to field strength in the far field the increase in range here is 1000/562 = 1.78 or 78%. It is not necessary to calculate both voltages to get to the range change. Here is a shortcut: take the difference in dBuV (60dBuV – 55 dBuV = 5 dB) and calculate range increase = 10^(5dB/20) = 1.78 or 78%. You might wonder what happened to dBuV when the two readings were subtracted. Subtracting dB is the same as dividing volts so the 1 uV reference cancels leaving only a ratio expressed in dB. At the transmitter, dBm can be used in a similar manner if the transmitter power delivered to the antenna can be measured. Lacking this, the output power ratios can be approximated by using the DC input power to the final stage. Take an example where the input power is increased from 75 to 100 mW and calculate the expected range increase. This increase is 10Log(100mW/75mW) = 1.25 dB. The increase in range using the equation from above is = 10^(1.25/20) = 1.15 or 15%. (This can also be done by calculating the range increase = SQRT(P2/P1) = SQRT(100mW/75mW) = 1.15 or 15%). You might wonder why the factor 10 was used to calculate the increase in dB yet the factor 20 was used to calculate the range increase. This is because 10 is used with dB calculations of power and 20 is used with dB calculations of voltage and the range change uses voltage. There is a bit more about dB calculations which has been omitted here to focus on what is usable for Part15 calculations but in general it is good to understand that the dB is a logarithmic unit and a seemingly small change in dB represents a larger change in voltage, power, or range. For example a signal strength of 96 dBu is double that of 90 dBu yet they appear to be about the same. Some points to remember: dB is always a ratio. dBuV and dBm are ratios to 1 uV and 1mW respectively. Use the factor 10 when calculating with power and 20 when calculating with voltage. Negative dB means a ratio less than 1 3dB is double the power and 6 dB is double the voltage. Using dB can be daunting at first but if you are willing to work through the examples given using a scientific calculator and follow the meaning of the calculations then the quick reference given at the beginning of this article can become a “pocket reference”. 1. MICRO1700 says Thanks Neil, this is great info! Bruce, W Sixty H Z, 1020 kHz carrier current □ Carl Blare says I Knew Her Ah yes, Dessy Belle was one cool gal. We hung out, went to the theater, walked in the park. Oh, I must have misunderstood. □ Rich says Online dB Calculator Here is a link to a rather easy-to-use webpage to calculate the decibel relationships shown in Neil’s post. ☆ Carl Blare says After School Many thanks Radio8Z and rich for posting this valuable instructional information. By it I’ve found myself more ignorant than first expected, but I’m printing out and linking these two marvelous teaching aids. Up in the example math, what is the mini-upside down v super-scripted on certain numbers? They didn’t have that symbol at the schools I attended. ○ Rich says “^” Notation what is the mini-upside down v super-scripted on certain numbers? The ^ symbol used when typing mathematical equations means that the number preceding the ^ is to be raised to the power of the number following the ^. For example, 3^2 = 9, 3^3 = 27 etc. ■ radio8z says Original post edited to include Phil’s change. Though the SI units do not include subscripts yet they are commonly appended for specialized references. Using dBuV clarifies that this is dB referenced to 1 uV. As Phil noted dBu is generally used with reference to 1 mW into 600 ohms. Thanks for pointing this out. Rich is correct that the ^ symbol stands for exponentiation and there is usually a key on a scientific calculator with this function. The ** is sometimes used in computer programs for the same function, e.g. 4**2 = 16. 2. PhilB says A dB Clarification Neil stated: “dBu and dBm are ratios to 1 uV and 1mW respectively”. This should be corrected to say: dBuV and dBm are ratios to 1 uV and 1mW respectively”. The unit dBuV is a voltage ratio relative to 1uV RMS The unit dBu is a voltage ratio relative to .7746 V RMS. The dBu is used in the professional audio industry. Another unit that comes up in part 15 is the dBuV/m which is a field strength voltage ratio relative to 1 microvolt per meter.
{"url":"https://www.part15.org/decibels-db-for-part-15-use/","timestamp":"2024-11-09T22:47:40Z","content_type":"text/html","content_length":"55678","record_id":"<urn:uuid:6bbf79da-ec9f-4195-bf9c-6f7c401af792>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00517.warc.gz"}
How to Convert Transmittance to Absorbance? - ElectronicsHacks When you find yourself needing to measure the amount of light that passes through a certain material, one way to do so is with transmittance. Transmittance is simply the proportion of light that passes through an object or medium in comparison with what would pass without any obstacles at all. This can be expressed as a percentage and it’s used in industries such as manufacturing, electronics, space science and more. But sometimes you need to use absorbance instead – which measures how much light was actually absorbed by the material rather than just passing through it. If this sounds confusing don’t worry! In today’s blog post we guide you through how to convert from transmittance to absorbance – providing simple formulas and practical tips for understanding this difference plus much more. How to Convert Transmittance to Absorbance Calculation Method To convert transmittance to absorbance, the equation A = log10(T/100) must be used, where A is absorbance and T is transmittance. To use this equation, simply plug in the known values of transmittance and solve for absorbance. For example, if a given sample has a transmittance of 50%, then its corresponding absorbance can be calculated as follows: A = log10 (50/100) A = – 0.3010 Therefore, the sample with a transmittance of 50% has an absorbance of -0.3010. It is important to note that this calculation assumes that all light passing through the sample is measured at a single wavelength. [1] In some instances, it may be necessary to convert multiple values of transmittance to absorbance. In these cases, an absorbance-transmittance table can be used as reference. This table will contain various values of transmittance and their corresponding absorbances. For example, if the value of transmittance is 75%, then its corresponding absorbance would be found in the table to be -0.1761. When using this method, it is important that all data points are collected from the same light source and under the same conditions (e.g., temperature, pressure, etc.). Additionally, any losses due to absorption or reflections should also be taken into account when performing the calculations. Nomogram Method Another method for converting transmittance to absorbance is by using a nomogram. A nomogram is a graphical representation of an equation, which can be used to quickly calculate the values of various parameters. To use a nomogram for this conversion, one first must locate the transmittance value on the x-axis and then move along a straight line until it intersects with the y-axis. This intersection indicates the corresponding absorbance value. For instance, if the value of transmittance is 40%, then its corresponding absorbance would be found in the nomogram to be -0.3993. It should also be noted that many modern instruments are now equipped with built-in algorithms which can directly convert transmittance to absorbance. It is important to note that when using either of these methods, all readings should be taken at a single wavelength and in the same conditions (e.g., temperature, pressure, etc.). Additionally, any losses due to absorption or reflections should also be taken into account when performing the calculations. By carefully following these steps, it is possible to accurately convert transmittance values into absorbance values. Experimental Method The conversion of transmittance to absorbance can be done experimentally. The following steps should be followed in this process: 1. Measure the intensity of light passing through a sample using an instrument like a spectrophotometer. This is known as the transmitted intensity, I_t. 2. Calculate the total incident light by measuring its intensity before it passes through the sample. This is known as the incident intensity, I_o. 3. Divide the transmitted intensity by the incident intensity and multiply it by 100 to obtain transmittance (T). 4. To find absorbance (A), use the equation A=-log10(T/100) or A=-log10(I_t/I_o). 5. Convert absorbance to % transmittance by using the equation T = 10^(-A) * 100. By following these steps, one can easily convert transmittance to absorbance. Additionally, it is important to make sure that the light used in this experiment is monochromatic (i.e., of a single wavelength or color). [2] Convert Transmittance to Absorbance in Excel To convert transmittance to absorbance in Excel, you will need to use a formula. The formula is: A=-log10(T/100). Here, “A” stands for absorbance and “T” stands for transmittance (measured as a Example: Convert 40% transmittance to absorbance. The value of T would be 40 and the formula for calculating absorbance becomes A=-log10(40/100) = 0.398. Therefore, the absorbance of 40% transmittance is 0.398 or -0.398 log units. When entering this into an Excel spreadsheet, you would enter: =-LOG10(40/100). This will give you the absorbance value of 0.398 or -0.398 log units. This same formula can be used to convert transmittance to absorbance for any transmittance percentage, simply by inputting the correct values into the equation. Keep in mind that when using this method, you should always use two decimal places for accuracy and enter all values as percentages rather than fractions. By following these steps, you can easily calculate absorbance from transmittance in Excel. Doing so can help you quickly determine how much light is being absorbed by a sample without having to use more advanced scientific instruments or laboratory techniques. Beer-Lambert Law Beer-Lambert Law, also known as Beer’s Law or Lambert-Beer Law, is used to convert transmittance to absorbance. The Beer-Lambert law states that the absorbance of light (A) through a sample is directly proportional to the concentration of the absorbing material (C) in a sample and the path length (b) that light travels through it. Mathematically, it can be expressed as A = ε x b x C. Here, ε represents the molar absorptivity coefficient which varies for different materials and wavelengths of light. To convert transmittance (%) to absorbance, first calculate the transmission ratio by dividing the transmitted intensity by incident intensity (T/I). Then take the negative logarithm of the ratio to obtain absorbance. Mathematically, it can be expressed as A = -log (T/I). [3] For example, if the transmittance of a sample is 85%, then the transmission ratio is 0.85 and hence its absorbance will be A = -log (0.85) = 0.078. The absorbance can further be used in Beer-Lambert law to estimate sample concentration or molar absorptivity coefficient depending upon the application at hand. Thus, Beer-Lambert Law helps to convert transmittance to absorbance which is essential for many scientific studies and applications such as spectrophotometry, chromatography, medical diagnostics and Relation Between Transmittance and Absorbance The relationship between transmittance and absorbance is expressed using the equation: Absorbance = -log10 (Transmittance). The equation shows that a decrease in transmittance corresponds to an increase in absorbance. Generally, transmittance and absorbance are measured on a scale from 0 to 1 or 0 to 100 percent, where 0 represents total absorption and 1 or 100% represents no absorption or complete transmission of light. To convert transmittance to absorbance, one needs to use a spectrophotometer which measures how much light passes through a sample. This instrument also measures color intensity with photometric readings, such as absorbance, transmittance and reflectance. So if you know the transmittance, you can use the equation mentioned above to calculate absorbance. The conversion between these two values is useful for a variety of purposes. For example, transmittance values are used in determining the efficiency of a filter medium or lens whereas absorbance values are used to measure the amount of light absorbed by a sample. The conversion also enables researchers to analyze and compare different samples more easily and accurately because absorbance and transmittance are measured on different scales. To sum up, transmittance can be converted into absorbance using the equation -log10 (Transmittance). This conversion is useful for various applications such as analyzing filter media or measuring spectral absorptions in solution-based samples. With the right instrumentation, anyone can accurately convert between transmittance and absorbance. [4] Use of Transmittance Transmittance is a measure of how much light passes through a sample material. It is often used in spectroscopic measurements and can be calculated using the formula: (I/Io) x 100%, where I is the amount of light that passes through the sample material, and Io is the amount of light that enters it. Transmittance can then be converted into absorbance by using the following equation: A = log10 (1/T), where T is transmittance expressed as a decimal fraction. Absorbance values are useful for determining concentrations in spectrophotometric laboratory experiments because they follow Beer’s Law, which states that absorbance is directly proportional to concentration when all other variables remain constant. In other words, absorbance can be used to infer the concentration of a sample from its spectrophotometric reading. The conversion from transmittance to absorbance is important in order to accurately interpret and compare spectrophotometer readings, which are typically given in terms of transmittance or absorbance. By understanding how to convert between the two, researchers can more easily analyze their results and draw meaningful conclusions about experiments that involve measuring light absorption through various materials. Ultimately, this increases confidence in laboratory research and allows for more accurate data collection and interpretation. Does High Transmittance Mean Low Absorbance? No. Transmittance is the fraction of incident light that passes through a sample, and absorbance is the amount of energy absorbed by an object. The two measurements have an inverse relationship – a high transmittance means low absorbance, and vice versa. Therefore, it’s possible for a sample to have a high transmittance (i.e., to transmit more light) but still have high absorbance (i.e., absorb more energy). Can Transmittance Be Greater Than 100%? No. Transmittance is a measurement of the fraction of incident light that passes through a sample, and therefore it can never be greater than 100%. If your transmittance reading is over 100%, then it’s likely an error in the measurement or a problem with the equipment. If you wish to convert from transmittance to absorbance, there are two common methods for doing so: using Beer-Lambert’s Law or employing a calibration curve. Beer-Lambert’s Law states that absorbance is proportional to the concentration of a substance and its path length. This means that if you know both of these parameters, you can calculate absorbance by dividing concentration by (path length x transmittance). [5] How do you convert transmittance to absorbance? The formula for converting transmittance to absorbance is A = -log10(T/100), where T is the transmittance percentage and A is the absorbance. To calculate this, first one must convert the transmittance value from a percentage into a decimal by dividing it by 100. After that, take the logarithm of that decimal value with base 10; this will give you the absorbance value. For example, if you had a transmittance of 50%, then you would use 0.5 as your decimal value, which would yield an absorbance of 0.3 (A = -log10(0.5)). This conversion can be helpful when using spectrophotometers to measure light absorbance in various applications. Are there any other ways to convert transmittance to absorbance? Yes, there are alternative methods for converting transmittance to absorbance such as the Lambert-Beer Law. This law states that the amount of light absorbed is proportional to the path length times the concentration of a chemical species. The formula for this law is A = εbc, where A is absorbance, ε is molar absorption coefficient (in Liters per mole), b is pathlength (in centimeters) and c is concentration (moles per Liter). Though this method doesn’t directly calculate from transmittance values, it can be used in tandem with them when analyzing samples with spectrophotometers. Additionally, some spectrophotometers come with their own software to convert transmittance values to absorbance values. This makes the process even simpler and more accurate than using manual What is the absorbance of 20% transmittance? The absorbance of 20% transmittance is equal to 0.92. This can be calculated using the following equation: Absorbance = -log10(Transmittance/100). Therefore, for 20% transmittance, this calculation would look like this: Absorbance = -log10(20/100) = 0.92. The absorbance is a measure of how much of the light that passes through a material is absorbed by it. It is expressed on a logarithmic scale ranging from 0 (no absorption) to infinity (total absorption). Therefore, an absorbance of 0.92 indicates that 92% of the light was absorbed by the material and only 8% passed through. What absorbance corresponds to 50% transmittance? To convert transmittance to absorbance, the formula A = log10 (1/T) is used. In this equation, T represents transmittance and A stands for absorbance. Therefore, when the transmittance is 50%, the corresponding absorbance can be calculated as follows: A = log10 (1/0.50) = 0.301029996 (= -log10(0.50)). This means that 50% transmittance corresponds to an absorbance of 0.301029996. It’s important to note that these equations only apply to ideal optical systems where there are no extraneous losses due to scattering or other effects. The accuracy of the results will depend on the conditions of the system under study. In conclusion, the equation A = log10 (1/T) enables the conversion of transmittance to absorbance. For example, 50% transmittance corresponds to an absorbance of 0.301029996. However, this equation only applies to ideal optical systems and its accuracy may vary depending on the conditions of the system under study. What is absorbance when transmittance is 100%? When transmittance is 100%, absorbance is 0. This means that all of the light passing through the sample has been transmitted and none has been absorbed. The absorbance value of a sample with 100% transmittance is considered to be nonexistent because any amount of absorption would cause a decrease in transmittance. This value directly corresponds to the Beer-Lambert law, which states that for an infinitely thick sample, absorbance is equal to zero when transmittance equals one. In other words, if no light is being absorbed, then there could not possibly be any absorbance. To calculate absorbance from transmittance values less than 100%, the following formula can be used:Absorbance = -log10(Transmittance). This formula is used to convert from transmittance values to absorbance, and vice versa. It is important to note that this formula has its limitations; it only works for transmittance values between 0 and 1 (or 0% and 100%). For transmittance values outside of this range, a different equation must be used. Additionally, the Beer-Lambert law assumed that the sample being measured was infinitely thick. Thus, if the sample thickness changes or there are multiple layers present, the results may not accurately reflect the true absorbance value. Useful Video: How to convert the Transmittance data to Absorbanc In conclusion, the conversion of transmittance to absorbance is a straightforward process and can be done using either the Beer-Lambert Law or a more direct method. The Beer-Lambert Law uses a logarithmic scale to convert the transmitted light values into absorbance values, allowing scientists to accurately measure how much light is being absorbed by an object. Alternatively, the direct method takes advantage of the inverse relationship between transmission and absorption in order to calculate absorbance without having to use any logarithmic calculations. Whichever method you decide to use, it’s important that you understand how each works and are comfortable with your results. With this knowledge, you’ll be able to effectively monitor transmittance and absorbance in your Thanks for reading! We hope this article has been helpful in understanding how to convert transmittance to absorbance. Take the time to practice the steps and consider the different methods before you decide which one is best for your research. Good luck! 1. https://www.sigmaaldrich.com/US/en/support/calculators-and-apps/absorbance-transmittance-conversion 2. https://www.circuitsgallery.com/how-to-convert-transmittance-to-absorbance/ 3. https://www.studysmarter.us/textbooks/chemistry/principles-of-instrumental-analysis-7th/an-introduction-to-ultraviolet-visible-molecular-absorption-spectrometry/ 4. https://www.researchgate.net/post/How_can_I_convert_absorption_data_into_transmission_data 5. https://pubs.acs.org/doi/10.1021/ed050p259
{"url":"https://electronicshacks.com/how-to-convert-transmittance-to-absorbance/","timestamp":"2024-11-09T17:11:22Z","content_type":"text/html","content_length":"185478","record_id":"<urn:uuid:8ab4264d-9f3b-4c37-9ea9-ec45ab66473c>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00649.warc.gz"}
Trustees Report 2007 OASDI Trustees Report Short-Range Actuarial Estimates For the short range (2007-2016), the Trustees measure financial adequacy by comparing assets at the beginning of each year to projected program cost for that year under the intermediate set of assumptions. Having a trust fund ratio of 100 percent or more-that is, assets at the beginning of each year at least equal to projected cost for the year-is considered a good indication of a trust fund's ability to cover most short-term contingencies. Both the OASI and the DI trust fund ratios under the intermediate assumptions exceed 100 percent throughout the short-range period and therefore satisfy the Trustees' short-term test of financial adequacy. Figure II.D1 below shows that the trust fund ratios for the combined OASI and DI Trust Funds reach a peak level in 2014 and begin declining thereafter. OASDI Trust Fund [Assets as a percentage of Long-Range Actuarial Estimates The financial status of the program over the next 75 years is measured in terms of annual cost and income as a percentage of taxable payroll, trust fund ratios, the actuarial balance (also as a percentage of taxable payroll), and the open group unfunded obligation (expressed in present-value dollars). Considering Social Security's cost as a percentage of the total U.S. economic output (gross domestic product or GDP) provides an additional perspective. The year-by-year relationship between income and cost rates shown in figure II.D2 illustrates the expected pattern of cash flows for the OASDI program over the full 75-year period. Under the intermediate assumptions, the OASDI cost rate is projected to decline slightly in 2008 and then increase up to the 2007 level within the next 2 years. It then begins to increase rapidly and first exceeds the income rate in 2017, producing cash-flow deficits thereafter. Cash-flow deficits are less than trust fund interest earnings until 2027. Redemption of trust fund assets will allow continuation of full benefit payments on a timely basis until 2041, when the trust funds will become exhausted. This redemption process will require a flow of cash from the General Fund of the Treasury. Pressures on the Federal Budget will thus emerge well before 2041. Even if a trust fund's assets are exhausted, however, tax income will continue to flow into the fund. Present tax rates would be sufficient to pay 75 percent of scheduled benefits after trust fund exhaustion in 2041 and 70 percent of scheduled benefits in 2081. Income and Cost Rates [As a of taxable Social Security's cost rate generally will continue rising rapidly through about 2030 as the baby-boom generation reaches retirement eligibility age. Thereafter, the cost rate is estimated to rise at a slower rate for about 5 years and then stabilize for the next 15 years as the baby-boom ages and decreases in size. Continued reductions in death rates and maintaining birth rates at levels well below those from the baby-boom era and before will cause a significant upward shift in the average age of the population and will push the cost rate from 17.3 percent of taxable payroll in 2050 to 18.5 percent by 2081 under the intermediate assumptions. In a pay-as-you-go system (with no trust fund assets or borrowing authority), this 18.5-percent cost rate means the combination of the payroll tax (scheduled to total 12.4 percent) and proceeds from income taxes on benefits (expected to be 0.9 percent of taxable payroll in 2081) would have to equal 18.5 percent of taxable payroll to pay all currently scheduled benefits. After 2081, the upward shift in the average age of the population is likely to continue and to increase the gap between OASDI costs and income. The primary reason that the OASDI cost rate will increase rapidly between 2010 and 2030 is that, as the large baby-boom generation born in the years 1946 through 1965 retires, the number of beneficiaries will increase much more rapidly than the number of workers. The estimated number of workers per beneficiary is shown in figure II.D3. In 2006, there were about 3.3 workers for every OASDI beneficiary. The baby-boom generation will have largely retired by 2030, and the projected ratio of workers to beneficiaries will be only 2.2 at that time. Thereafter, the number of workers per beneficiary will slowly decline, and the OASDI cost rate will continue to increase largely due to projected reductions in mortality. The maximum projected trust fund ratios for the OASI, DI, and combined funds appear in table II.D1. The year in which the maximum projected trust fund ratio is attained and the year in which the assets are projected to be exhausted are shown as well. Table II.D1.-Projected Maximum Trust Fund Ratios Attained and Trust Fund Exhaustion Dates Under the Intermediate │ │OASI│ DI │OASDI │ │ Maximum trust fund ratio (percent)│ 463│ 200│ 409│ ││ Year attained│2015│2007│ 2014│ │ Year of│2042│2026│ 2041│ │ │ │ │ │ │ trust fund exhaustion│ │ │ │ The actuarial balance is a measure of the program's financial status for the 75-year valuation period as a whole. It is essentially the difference between income and cost of the program expressed as a percentage of taxable payroll over the valuation period. This single number summarizes the adequacy of program financing for the period. When the actuarial balance is negative, the actuarial deficit can be interpreted as the percentage that could be added to the current law income rate for each of the next 75 years, or subtracted from the cost rate for each year, to bring the funds into actuarial balance. Because the timing of any future changes is unlikely to follow this pattern, this measure should be viewed only as providing a rough indication of the average change that is needed over the 75-year period as a whole. In this report, the actuarial balance under the intermediate assumptions is a deficit of 1.95 percent of taxable payroll for the combined OASI and DI Trust Funds. The actuarial deficit was 2.02 percent in the 2006 report and has been in the range of 1.86 percent to 2.23 percent for the last ten reports. Another way to illustrate the financial shortfall of the OASDI system is to examine the cumulative value of taxes less costs, in present value. Figure II.D4 shows the present value of cumulative OASDI taxes less costs over the next 75 years. The balance of the combined trust funds peaks at $2.6 trillion in 2017 (in present value) and then turns downward. This cumulative amount continues to be positive, indicating trust fund assets, or reserves, through 2040. However, after 2040 this cumulative amount becomes negative, indicating a net unfunded obligation. Through the end of 2081, the combined funds have a present-value unfunded obligation of $4.7 trillion. This unfunded obligation represents 1.8 percent of future taxable payroll and 0.7 percent of future GDP, through the end of the 75-year projection period. OASDI Income Less Cost, Based on Present Law Tax Rates and [Present value as of January 1, 2007, in Still another important way to look at Social Security's future is to view its cost as a share of U.S. economic output. Figure II.D5 shows that Social Security's cost as a percentage of GDP will grow from 4.3 percent in 2007 to 6.2 percent in 2030, and then slightly increase to 6.3 percent in 2081. However, Social Security's scheduled tax income is projected to be about 4.9 percent of GDP in both 2007 and 2030, and then to decrease to 4.5 percent in 2081. Income from payroll taxes declines generally in relation to GDP in the future because an increasing share of employee compensation is assumed to be provided in fringe benefits, making wages a shrinking share of GDP. Between 2010 and 2030, however, the total non-interest income does not decline as a percent of GDP because benefits, and thus income to the trust funds from taxation of these benefits, are rising rapidly as a percent of GDP during the period. Cost and Tax Revenue as a of GDP Consideration of a 75-year period is not enough to provide a complete picture of Social Security's financial condition. Figures II.D2, II.D4, and II.D5 show that the program's financial condition is worsening at the end of the period. Overemphasis on summary measures for a 75-year period can lead to incorrect perceptions and to policy prescriptions that do not achieve sustainable solvency. Thus, careful consideration of the trends in annual deficits and unfunded obligations toward the end of the 75-year period is important. In addition, summary measures for a time period that extends to the infinite horizon are included in this report. These measures provide an additional indication of Social Security's very long-run financial condition, but are subject to much greater uncertainty. These calculations show that extending the horizon beyond 75 years increases the unfunded obligation. Over the infinite horizon, the shortfall (unfunded obligation) is $13.6 trillion in present value, or 3.5 percent of future taxable payroll and 1.2 percent of future GDP. These calculations of the shortfall indicate that much larger changes may be required to achieve solvency beyond the 75-year period as compared to changes needed to balance 75-year period summary measures. The measured unfunded obligation over the infinite horizon increases from $13.4 trillion in last year's report to $13.6 trillion in this report. In the absence of any changes in assumptions, methods, and starting values, the unfunded obligation over the infinite horizon would have risen to $14.1 trillion due to the change in the valuation date. Changes From Last Year's Report The long-range OASDI actuarial deficit of 1.95 percent of taxable payroll for this year's report is smaller than the deficit of 2.02 percent of taxable payroll shown in last year's report under intermediate assumptions. Changes in methodology and assumed rates of disability incidence are the main reasons for the decrease in the deficit. For a detailed description of the specific changes identified in table II.D2 below, see section IV.B.7. Table II.D2.-Reasons for Change in the 75-Year Actuarial Balance Under Intermediate Assumptions [As a percentage of taxable payroll] │ Item │OASI │ DI │OASDI│ │ Shown in last year's report:│ ││ Income rate│11.95│1.93│13.88│ ││ Cost rate│13.63│2.27│15.90│ ││ Actuarial balance│-1.68│-.33│-2.02│ │ Changes in actuarial balance due to changes in:│ │││ Legislation / Regulation│ .00│ .00│ .00│ │││ Valuation period│ -.05│-.01│ -.06│ │││ │ │ │ │ │││ │ │ │ │ │││ Demographic data and assumptions│ -.03│ .00│ -.03│ │││ Economic data and assumptions│ +.01│+.01│ +.02│ │││ Disability data and assumptions│ -.02│+.08│ +.06│ │││ Programmatic data and methods│ +.09│-.01│ +.08│ ││ Total change in actuarial balance│ -.01│+.07│ +.06│ │ Shown in this report:│ ││ Actuarial balance│-1.69│-.27│-1.95│ ││ Income rate│11.99│1.93│13.92│ ││ Cost rate│13.68│2.19│15.87│ Note: Totals do not necessarily equal the sums of rounded components. The open group unfunded obligation over the 75-year projection period has increased from $4.6 trillion (present discounted value as of January 1, 2006) to $4.7 trillion (present discounted value as of January 1, 2007). The measured increase in the unfunded obligation would be expected to be about $0.3 trillion due to advancing the valuation date by 1 year and including the additional year 2081. Changes in methods and assumptions offset most of this expected increase. Figure II.D6 shows that this year's projections of annual balances are generally higher than those in last year's report principally because of the changes in methods and assumptions. Annual balances are similar between the two reports through about 2030. Thereafter, annual balances are somewhat higher for the rest of the long-range projection period. Section IV.B.7 provides a detailed presentation of these changes. 2006 and [As a of taxable under the Uncertainty of the Projections Significant uncertainty surrounds the intermediate assumptions. The Trustees have traditionally used low cost (alternative I) and high cost (alternative III) assumptions as an indication of this uncertainty. Figure II.D7 shows the projected trust fund ratios for the combined OASI and DI Trust Funds under the intermediate, low cost, and high cost assumptions. The low cost alternative is characterized by assumptions that improve the financial condition of the trust funds, including a higher fertility rate, slower improvement in mortality, a higher real-wage differential, and lower unemployment. The high cost alternative, in contrast, features a lower fertility rate, more rapid declines in mortality, a lower real-wage differential, and higher unemployment. While it is extremely unlikely that all of these parameters would move in the same direction over the 75-year period relative to the intermediate projections, there is a not-insignificant-though quite low-probability that the actual outcome for future costs could be as extreme as either of the outcomes portrayed by the low and high cost projections. The method for constructing these high and low cost projections does not allow for the assignment of a specific probability to the likelihood that actual experience will lie within or outside the range they entail. However, an alternative approach to illustrating the uncertainty inherent in such long-term projections discussed in Appendix E suggests that the low and high cost projections bound a range that encompasses something on the order of 95 percent of possible future financial outcomes. Given there is an equal probability that the actual outcome will be either more or less favorable than that portrayed by the intermediate cost projection, this implies that there is something on the order of only a 2.5 percent probability that it will be as favorable as that portrayed by the low cost projection or as unfavorable as that portrayed by the high cost projection. OASDI Trust Fund Ratios Under [Assets as a percentage of annual cost]
{"url":"https://www.ssa.gov/oact/TR/TR07/II_project.html","timestamp":"2024-11-11T00:45:09Z","content_type":"text/html","content_length":"38565","record_id":"<urn:uuid:5b15d930-6d28-4fdc-9e45-20ff4f2b1dbe>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00051.warc.gz"}
Can You Solve the 10 Hardest Logic Puzzles Ever Created? (2024) So you think you are clever, right? Then here is your chance to pit your brain against some of the world's hardest logic puzzles ever created. After having created number puzzles like Calcudoku and Killer Sudoku for many years, I decided to try and find the most challenging ones out there. Every once in a while I added a new type of puzzle, until I ended up with a list of 10. Related Content Warhammer 40K's New Culture War Crossfire Is a Mess of Its Own Making Andor Season 2 Might Be the Most Important Thing Tony Gilroy's Ever Made Why Was Now the Right Time to Come Back and Do 28 Years Later? Why Was Now the Right Time to Come Back and Do 28 Years? In the following list you will find both familiar puzzles and games such as Sudoku and Calcudoku as well as lesser known ones such as the Bongard Problem and Fill-a-Pix. Some of these puzzles can be solved right on this page while others can be downloaded or reached elsewhere. All of them, however, are promised to test your solving skills to the absolute limit and keep you busy for hours, if not Related Content Warhammer 40K's New Culture War Crossfire Is a Mess of Its Own Making Andor Season 2 Might Be the Most Important Thing Tony Gilroy's Ever Made Why Was Now the Right Time to Come Back and Do 28 Years Later? Why Was Now the Right Time to Come Back and Do 28 Years? Find an even harder puzzle? Be sure to let me know! For more information about this project and other logic puzzles visit my website Calcudoku.org 1. The World's Hardest Sudoku Sudoku is easily the most played and most analyzed puzzle in the world, so coming up with the hardest one is no mean feat. In 2012, Finnish mathematician Arto Inkala claimed to have created the "World's Hardest Sudoku". According to the British newspaper The Telegraph, on the difficulty scale by which most Sudoku grids are graded, with one star signifying the simplest and five stars the hardest, the above puzzle would "score an eleven". More information on how Inkala's puzzles are rated is on his website. 2. The Hardest Logic Puzzle Ever Three gods A, B, and C are called, in no particular order, True, False, and Random. True always speaks truly, False always speaks falsely, but whether Random speaks truly or falsely is a completely random matter. Your task is to determine the identities of A, B, and C by asking three yes-no questions; each question must be put to exactly one god. The gods understand English, but will answer all questions in their own language, in which the words for yes and no are da and ja, in some order. You do not know which word means which. American philosopher and logician George Boolos invented the above riddle, published in the Harvard Review of Philosophy in 1996, and called it "The Hardest Logic Puzzle Ever". The original article can be downloaded here. You can read about making this puzzle even harder on the Physics arXiv Blog. 3. The World's Hardest Killer Sudoku A Killer Sudoku is very similar to a Sudoku, except that the clues are given as groups of cells + the sum of the numbers in those cells. From a large number of highest rated puzzles at Calcudoku.org, I measured what percentage of puzzlers solved them on the day they were published. Easily the hardest was the Killer Sudoku shown above, published on the 9th of November 2012. You can solve this puzzle right here. 4. The Hardest Bongard Problem This type of puzzle first appeared in a book by Russian computer scientist Mikhail Moiseevich Bongard in 1967. They became more widely known after Douglas Hofstadter, an American professor of cognitive science, mentioned them in his book "Gödel, Escher, Bach". To solve the above puzzle, published on Harry Foundalis' website, you have to find a rule that the 6 patterns on the left hand side conform to. The 6 patterns on the right do not conform to this rule. For example, the first problem on this page has as a solution: all patterns on the left are triangles. 5. The Hardest Calcudoku Puzzle A Calcudoku is similar to a Killer Sudoku, except that (1) any operation can be used to compute the result of a "cage" (not only addition), (2) the puzzle can be any square size, and (3) the Sudoku rule of requiring the numbers 1..9 in each 3×3 set of cells does not apply. Calcudoku was invented by Japanese math teacher Tetsuya Miyamoto, who called it "Kashikoku naru" ("smartness"). Identified in the same way as the Killer Sudoku presented in this article, the hardest Calcudoku was a 9×9 puzzle published on April 2, 2013, which only 9.6% of the regular puzzlers at Calcudoku.org managed to solve. You can give it a try right here. If you're not up for solving it yourself, check out this step-by-step solving analysis by "clm". 6. The Hardest "Ponder this" Puzzle Design a storage system that encodes 24 information bits on 8 disks of 4 bits each, such that: 1. Combining the 8*4 bits into a 32 bits number (taking a nibble from each disk), a function f from 24 bits to 32 can be computed using only 5 operations, each of which is out of the set {+, -, *, /, %, &, |, ~} (addition; subtraction, multiplication; integer division, modulo; bitwise-and; bitwise-or; and bitwise-not) on variable length integers. In other words, if every operation takes a nanosecond, the function can be computed in 5 nanoseconds. 2. One can recover the original 24 bits even after any 2 of the 8 disks crash (making them unreadable and hence loosing 2 nibbles) IBM Research has been publishing very challenging monthly puzzles since May 1998 on their Ponder this page. Judging from the number of solvers for each, the hardest number puzzle is the one shown above, published in April 2009. If you need some clues visit this page. 7. The Hardest Kakuro Puzzle Kakuro puzzles combine elements of Sudoku, logic, crosswords and basic math into one. The object is to fill all empty squares using numbers 1 to 9 so the sum of each horizontal block equals the clue on its left, and the sum of each vertical block equals the clue on its top. In addition, no number may be used in the same block more than once. Those in the know tell me that the Absolutely Nasty Kakuro Series by Conceptis Puzzles has the world's hardest Kakuro puzzles. Gladly, the guys at Conceptis have produced the above even nastier Kakuro specimen, especially for this article. Play this puzzle online here. 8. Martin Gardner's Hardest Puzzle A number's persistence is the number of steps required to reduce it to a single digit by multiplying all its digits to obtain a second number, then multiplying all the digits of that number to obtain a third number, and so on until a one-digit number is obtained. For example, 77 has a persistence of four because it requires four steps to reduce it to one digit: 77-49-36-18-8. The smallest number of persistence one is 10, the smallest of persistence two is 25, the smallest of persistence three is 39, and the smaller of persistence four is 77. What is the smallest number of persistence five? Martin Gardner (1914-2010) was a popular American mathematics and science writer specializing in recreational mathematics, but with interests encompassing micromagic, stage magic, literature, philosophy, scientific skepticism and religion (Wikipedia). In his book The Colossal Book of Short Puzzles and Problems puzzles in many categories are listed in order of difficulty. The above is the hardest puzzle from the "Numbers" chapter. 9. The Most Difficult Go Problem Ever Go is a board game for two players that originated in China more than 2,500 years ago. The game is noted for being rich in strategy despite its relatively simple rules (Wikipedia). The above problem is considered to be the hardest ever and is said to have taken 1000 hours to solve by a group of high level students. Solutions and many references can be found on this page. 10. The Hardest Fill-a-Pix Puzzle Fill-a-Pix is a Minesweeper-like puzzle based on a grid with a pixilated picture hidden inside. Using logic alone, the solver determines which squares are painted and which should remain empty until the hidden picture is completely exposed. Advanced logic Fill-a-Pix such as the one above contain situations where two clues simultaneously affect each other as well as the squares around them making these puzzles extremely hard to solve. Fill-a-Pix was invented by Trevor Truran, a former high-school math teacher and the editor of Hanjie and several other famed British magazines published by Puzzler Media. For Fill-a-Pix solving rules, advanced solving techniques and more about the history of this puzzle check the Get started section on conceptispuzzles.com. This ultra-hard puzzle was generated by Conceptis especially for this article and can be played online here. This article originally appeared on Conceptis Puzzles and is reproduced here with kind permission. Conceptis is the leading supplier of logic puzzles to printed and electronic gaming media all over the world. On average, more than 20 million Conceptis puzzles are solved each day in newspapers and magazines,online and on mobile platforms across the world. Patrick Min is a freelance scientific programmer. He specializes in geometry software, but has worked in many other areas, such as search engine technology, acoustic modelling, and information security. He has published several papers and open/closed-source software across these subjects. Patrick holds a Master's degree in Computer Science from Leiden University, the Netherlands, and a Ph.D. in Computer Science from Princeton University. He is also a puzzle enthusiast, devising math puzzles for his father since the age of 7. This continues to this date, with dad solving his son's Calcudoku puzzles. Patrick lives in London. Top art by David Masters under Creative Commons license.
{"url":"https://gienes.best/article/can-you-solve-the-10-hardest-logic-puzzles-ever-created","timestamp":"2024-11-07T03:36:51Z","content_type":"text/html","content_length":"133226","record_id":"<urn:uuid:739ecd20-f641-4bc2-bd6d-6f877133ae8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00436.warc.gz"}
Lectures, seminars and dissertations * Dates within the next 7 days are marked by a star. Leah Schätzler Existence of variational solutions for doubly nonlinear equations in noncylindrical domains * Today * Wednesday 13 November 2024, 10:15, M3 (M234) I will talk about the existence of variational solutions to doubly nonlinear parabolic PDEs in noncylindrical domains $E \subset \mathds{R}^n \times [0,\infty)$. This setting arises from models where the underlying domain $E^t := \{ x \in \mathds{R}^n : (x,t) \in E \}$ changes in time. The prototype of the considered PDEs is $$ \partial_t \big( |u|^{q-1} u \big) - \operatorname{div}\big( |Du|^ {p-2} Du \big) = 0 \quad\text{in } E $$ with parameters $q \in (0,\infty)$ and $p \in (1,\infty)$, which combines the porous medium equation and the parabolic $p$-Laplacian. The talk is based on joint work (in progress) with Christoph Scheven, Jarkko Siltakoski and Calvin Stanko. Seminar on analysis and geometry Yu Liu (Aalto University) Optimisation with neural network surrogate models embedded (Midterm review) * Today * Wednesday 13 November 2024, 13:00, M3 (M234) Aino Weckman Optimising emission reduction actions in organisations' climate work (MSc thesis presentation) * Thursday 14 November 2024, 10:00, M2 (M233) Joanna Bisch Model order reduction for parametric generalised EVPs * Thursday 14 November 2024, 11:00, M2 (M233) We look for approximate eigensolutions of the pencil $( A(\sigma),M)$ for several values of the $d$-dimensional parameter vector $\sigma$. We are interested in few of the smallest eigenvalues that lie in the spectral interval of interest $(0,\Lambda)$. Both matrices are assumed to be s.p.d for any admissible parameter vector. In addition, the matrix $A(\sigma)$ is assumed to be spectrally equivalent to an s.p.d. average matrix $\overline{A}$. For this purpose we develop a Ritz method that uses the same subspace for any parameter value. The subspace is designed using the observation that any eigenvector can be split into two components. The first component belongs to an easily computable subspace. The second component is defined by a correction formula that is a $d+1$ dimensional analytic function. Accordingly, the Ritz space is defined using this splitting and polynomial interpolation of the second component. We give estimates for the approximation error and illustrate the method by numerical examples. The advantage of our approach is that the analysis easily treats eigenvalue crossings that typically have posed technical challenges. Numerical Analysis seminar Matematiikan kandiseminaari (Bachelor thesis seminar in Math.) * Friday 15 November 2024, 09:00, M3 (M234) Further information Konstantin Izyurov (University of Helsinki) Bosonization of the critical Ising correlations * Tuesday 19 November 2024, 10:15, M3 (M234) I will explain an identity between scaling limits of the Ising correlations in arbitrary finitely connected planar domains, and correlations of a suitable version of a Gaussian free field, yielding explicit formulae for the former. The proof is based on (rigorously established) operator product expansions for the Ising and GFF correlation, and a limiting version of a classical identity between Szegö and Bergman kernel on Riemann surfaces. Joint work with Baran Bayraktaroglu, Tuomas Virtanen and Christian Webb. Olavi Nevanlinna Extracts from metric functional analysis Wednesday 20 November 2024, 10:15, M3 (M234) We shall concentrate on metric compactification of Banach spaces, in order to highlight differences and similarities between metric and linear functional analysis. We discuss weak convergences via metric functionals, indicate connections to invariant subspace problem, present a metric version of Eberlein-Shmulian theorem. As an example related to fixed point theorems we show a metric version of Markov-Kakutani theorem. The talk is based on discussions and work with Armando Gutiérrez. Recommended non-technical reading: Anders Karlsson, From linear to metric functional analysis, PNAS July 9, 2021. Seminar on analysis and geometry Dr. Olli Herrala (Aalto University) A novel strong duality-based reformulation for trilevel infrastructure models in energy systems development Monday 25 November 2024, 15:15, Riihi (Y225a) Further information We explore the class of trilevel equilibrium problems with a focus on energy-environmental applications and present a novel single-level reformulation for such problems, based on strong duality. To the best of our knowledge, only one alternative single-level reformulation for trilevel problems exists. This reformulation uses a representation of the bottom-level solution set, whereas we propose a reformulation based on strong duality. Our novel reformulation is compared to this existing formulation, discussing both model sizes and computational performance. In particular, we apply this trilevel framework to a power market model, exploring the possibilities of an international policymaker in reducing emissions of the system. Using the proposed approach, we are able to obtain globally optimal solutions for a five-node case study representing the Nordic countries and assess the impact of a carbon tax on the electricity production portfolio. SAL Weekly Seminar Lizao Ye Tuesday 26 November 2024, 15:15, M2 (M233) ADM seminar Stanislav Hencl (Charles University, Prague) Ball Evans approximation problem: Recent progress and open problems Wednesday 27 November 2024, 10:15, M3 (M234) In this talk we give a short overview about the Ball-Evans approximation problem, i.e. about the approximation of Sobolev homeomorphism by a sequence of diffeomorphisms (or piecewise affine homeomorphisms) and we recall the motivation for this problem. We show some recent planar results and counterexamples in higher dimensions, and we give a number of open problems connected to this problem and related fields. We concentrate in detail on the joint result with A. Pratelli [1] about the approximation on planar W1,1-homeomorphisms by a sequence of piecewise affine homeomorphisms. Seminar on analysis and geometry Matematiikan kandiseminaari (Bachelor thesis seminar in Math.) Monday 09 December 2024, 09:15, M3 (M234) Further information Eric Schippers (University of Manitoba) Tuesday 10 December 2024, 10:15, M3 (M234) Anna-Mariya Otsetova Wednesday 11 December 2024, 10:15, M3 (M234) Seminar on analysis and geometry Theo Elenius Thursday 12 December 2024, 10:15, M3 (M234) Seminar on analysis and geometry MSc (Tech) Leevi Olander Mid-term review presentation Wednesday 18 December 2024, 14:00, M2 (M233) Mid-term review presentation by Leevi Olander (exact title will be confirmed nearer the date) Stochastic Sauna 2024 Workshop on Probability and Statistics Thursday 19 December 2024, 09:45, M1 (M232) Further information See workshop homepage: https://math.aalto.fi/en/research/stochastics/sauna2024/ Jonas Tölle Prof. Guillermo Mantilla-Soler (National U. Colombia Medellin) Seminar course (7.-17.1.): An introduction to Dirichlet's L-functions and a proof of Dirichlet's theorem of primes in arithmetic progressions Tuesday 07 January 2025, 10:15, M3 (M234) We will begin this course (MS-EV0030) by reviewing Euler's change of paradigm, with respect to Euclid, and his proof on infinitude of primes. Then, we will study the generalization made by Dirichlet, and will prove Dirichlet's theorem on arithmetic progression. Through the course we will learn about the development of L-functions, character theory and the beginning of the relation between Galois representations and certain complex functions. There will be 5 sessions during 2 weeks. For students interested in credits: attendance gives 2 cr and more can be obtained (upon request) by completing further assignments. Sessions take place on Tue, Thu on the first week and Mon, Wed, Fri the second week, all at 10:15-12. ANTA Seminar / Hollanti et al. Prof. Guillermo Mantilla-Soler (National U. Colombia Medellin) Seminar course (7.-17.1.): An introduction to Dirichlet's L-functions and a proof of Dirichlet's theorem of primes in arithmetic progressions Thursday 09 January 2025, 10:15, M3 (M234) We will begin this course (MS-EV0030) by reviewing Euler's change of paradigm, with respect to Euclid, and his proof on infinitude of primes. Then, we will study the generalization made by Dirichlet, and will prove Dirichlet's theorem on arithmetic progression. Through the course we will learn about the development of L-functions, character theory and the beginning of the relation between Galois representations and certain complex functions. There will be 5 sessions during 2 weeks. For students interested in credits: attendance gives 2 cr and more can be obtained (upon request) by completing further assignments. Sessions take place on Tue, Thu on the first week and Mon, Wed, Fri the second week, all at 10:15-12. ANTA Seminar / Hollanti et al. Prof. Guillermo Mantilla-Soler (National U. Colombia Medellin) Seminar course (7.-17.1.): An introduction to Dirichlet's L-functions and a proof of Dirichlet's theorem of primes in arithmetic progressions Monday 13 January 2025, 10:15, M3 (M234) We will begin this course (MS-EV0030) by reviewing Euler's change of paradigm, with respect to Euclid, and his proof on infinitude of primes. Then, we will study the generalization made by Dirichlet, and will prove Dirichlet's theorem on arithmetic progression. Through the course we will learn about the development of L-functions, character theory and the beginning of the relation between Galois representations and certain complex functions. There will be 5 sessions during 2 weeks. For students interested in credits: attendance gives 2 cr and more can be obtained (upon request) by completing further assignments. Sessions take place on Tue, Thu on the first week and Mon, Wed, Fri the second week, all at 10:15-12. ANTA Seminar / Hollanti et al. Andreas Rosen (University of Gothenburg) Wednesday 15 January 2025, 10:15, M3 (M234) Seminar on analysis and geometry Prof. Guillermo Mantilla-Soler (National U. Colombia Medellin) Seminar course (7.-17.1.): An introduction to Dirichlet's L-functions and a proof of Dirichlet's theorem of primes in arithmetic progressions Wednesday 15 January 2025, 10:15, M134 We will begin this course by reviewing Euler's change of paradigm, with respect to Euclid, and his proof on infinitude of primes. Then, we will study the generalization made by Dirichlet, and will prove Dirichlet's theorem on arithmetic progression. Through the course we will learn about the development of L-functions, character theory and the beginning of the relation between Galois representations and certain complex functions. There will be 5 sessions during 2 weeks. For students interested in credits: attendance gives 2 cr and more can be obtained (upon request) by completing further assignments. Sessions take place on Tue, Thu on the first week and Mon, Wed, Fri the second week, all at 10:15-12. ANTA Seminar / Hollanti et al. Prof. Guillermo Mantilla-Soler (National U. Colombia Medellin) Seminar course (7.-17.1.): An introduction to Dirichlet's L-functions and a proof of Dirichlet's theorem of primes in arithmetic progressions Friday 17 January 2025, 10:15, M3 (M234) We will begin this course (MS-EV0030) by reviewing Euler's change of paradigm, with respect to Euclid, and his proof on infinitude of primes. Then, we will study the generalization made by Dirichlet, and will prove Dirichlet's theorem on arithmetic progression. Through the course we will learn about the development of L-functions, character theory and the beginning of the relation between Galois representations and certain complex functions. There will be 5 sessions during 2 weeks. For students interested in credits: attendance gives 2 cr and more can be obtained (upon request) by completing further assignments. Sessions take place on Tue, Thu on the first week and Mon, Wed, Fri the second week, all at 10:15-12. ANTA Seminar / Hollanti et al. Prof Joni Virta (University of Turku) Unsupervised linear discrimination using skewness Wednesday 12 February 2025, 10:15, M237 It is known that, in Gaussian two-group separation, the optimally discriminating projection direction can be estimated without any knowledge on the group labels. In this presentation, we (a) motivate this estimation problem, and (b) gather several unsupervised estimators based on skewness and derive their limiting distributions. As one of our main results, we show that all affine equivariant estimators of the optimal direction have proportional asymptotic covariance matrices, making their comparison straightforward. We use simulations to verify our results and to inspect the finite-sample behaviors of the estimators. Aalto Stochastics and Statistics Seminar / Leskelä Show the events of the past year Page content by: webmaster-math [at] list [dot] aalto [dot] fi
{"url":"https://math.tkk.fi/en/current/talks/","timestamp":"2024-11-13T19:07:16Z","content_type":"text/html","content_length":"39941","record_id":"<urn:uuid:85e11c71-d535-4625-b360-0cadbd2abaef>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00308.warc.gz"}
50 Catchy Pi Day Slogans and Clever Pi Day Phrases50 Catchy Pi Day Slogans and Clever Pi Day Phrases - Slogans Hub 50 Catchy Pi Day Slogans and Clever Pi Day Phrases If you are a Mathematics lover and a food addict, this blog is for you. Yes, your guess is right. We are going to talk about Pi Day. The Pi day is all about a constant of mathematics celebrated on the 14th of march. The date itself is very intriguing as it represents the value of the Pi. People also bake Pies and throw them at each other for fun. The concept of pie came from a pun or perhaps the circle shape of it, but either way, it makes Pi day very entertaining. Discussions on Pi and new developments in the field of mathematics are part of the celebrations. The slogan for Pi day is also an element of the math gala. Enthusiasts post slogans of Pi day on their social media accounts and talk about them on programs. The fact that Pi has the value of endless numbers and numbers that are patternless. That is what makes it a miracle in the world of math and unique enough to have a day to celebrate named after it. Catchy Pi Day Slogans We have collected the top slogans for Pi day so that you celebrate Pi Day with the right choice of words!!! Let’s dig in here are the Top 10 Catchy Pi Day Slogans. 1. Start your calculation today 2. A day devoted to mathematics 3. I want a pie on this pi day 4. Be irrational, pi day is here 5. I fell in love with math when I met Pi 6. A calculation worth doing 7. For all the mathematic lovers 8. Pi makes the world go-’round 9. It’s pi day, it’s pi day! gotta get down on pi-day! 10. I hated Math then I found out about Pi Any Day’s Better with a Little Pi It’s easy as pi Got pi? It’s as easy as 3.14 Q.T. Pi or Cutie Pi I like Pi I love Pi Happy Pi Day Be rational, Get real Be irrational, celebrate Pi Day Ultimate Pi Day Legendary Pi Day Pi Day Pie Party! Keep calm and love Pi Keep calm, it’s Pi day Live and let π Keep calm & celebrate Pi day Weren’t these smart and fun? Choose your favorite. Pi Day Phrases If anything has math involved in it, it has to be clever. To celebrate Pi Day, we have collected Slogans that suit their enthusiasm. Here are a few best Clever Pi Day slogans Your calculation must not end Pi-rate (pirate) Simple as 3.1415 T. Pi or Cutie Pi The larger the Pi, the larger the world could be Pi – an irrational that makes more logic Math’s would not be math without Pi The larger the Pi, the larger the circumference is Throw up a pie party on a Pi day It is almost fascinating how these clever slogans contained pies and Pi two different things and yet made sense. The day just through these slogans for Pi day sounds so fun. When do we get a chance to throw Pies on our friends and get away with it? Pi Day gives us that immunity. So keep on sharing these slogans on Pi day and keep throwing Pies as well. Make your day memorable, and we will keep coming up with more creative Pi day slogans for you! Further Reading
{"url":"https://sloganshub.org/pi-day-slogans/","timestamp":"2024-11-10T16:06:27Z","content_type":"text/html","content_length":"163187","record_id":"<urn:uuid:3c0a9ec2-34ce-4d84-a57b-1dd1e68b2223>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00502.warc.gz"}
You comes from the Proto-Germanic demonstrative base *juz-, *iwwiz from Proto-Indo-European *yu- (second-person plural pronoun).^[1] Old English had singular, dual, and plural second-person pronouns. The dual form was lost by the twelfth century,^[2]^:117 and the singular form was lost by the early 1600s.^[3] The development is shown in the following table.^[2]^:117,120,121 Second-person pronouns in Old English, Middle English, & Modern English Singular Dual Plural OE ME Mod OE ME Mod OE ME Mod Nominative þu þu ġit ġe ȝē Accusative þe þē — inc — ēow ȝou you Genitive þīn þī(n) incer ēower ȝour(es) your(s) Early Modern English distinguished between the plural ye and the singular thou. As in many other European languages, English at the time had a T–V distinction, which made the plural forms more respectful and deferential; they were used to address strangers and social superiors.^[3] This distinction ultimately led to familiar thou becoming obsolete in modern English, although it persists in some English dialects. Yourself had developed by the early 14th century, with the plural yourselves attested from 1520.^[4] In Standard Modern English, you has five shapes representing six distinct word forms:^[5] • you: the nominative (subjective) and accusative (objective or oblique case^[6]^:146) forms • your: the dependent genitive (possessive) form • yours: independent genitive (possessive) form • yourselves: the plural reflexive form • yourself: the singular reflexive form Plural forms from other varieties Although there is some dialectal retention of the original plural ye and the original singular thou, most English-speaking groups have lost the original forms. Because of the loss of the original singular-plural distinction, many English dialects belonging to this group have innovated new plural forms of the second person pronoun. Examples of such pronouns sometimes seen and heard include: • y'all, or you all – southern United States,^[7] African-American Vernacular English, the Abaco Islands,^[8] St. Helena^[8] and Tristan da Cunha.^[8] Y'all however, is also occasionally used for the second-person singular in the North American varieties. • you guys [ju gajz~juɣajz] – United States,^[9] particularly in the Midwest, Northeast, South Florida and West Coast; Canada, Australia. Gendered usage varies; for mixed groups, "you guys" is nearly always used. For groups consisting of only women, forms like "you girls" or "you gals" might appear instead, though "you guys" is sometimes used for a group of only women as well. • you lot – United Kingdom,^[10] Palmerston Island,^[11] Australia • you mob – Australia^[12] • you-all, all-you – Caribbean English,^[13] Saba^[11] • a(ll)-yo-dis – Guyana^[13] • allyuh – Trinidad and Tobago^[14] • among(st)-you – Carriacou, Grenada, Guyana,^[13] Utila^[11] • wunna – Barbados^[13] • yinna – Bahamas^[13] • unu/oona – Jamaica, Belize, Cayman Islands, Barbados,^[13] San Salvador Island^[8] • yous(e) – Ireland,^[15] Tyneside,^[16] Merseyside,^[17] Central Scotland,^[18] Australia,^[19] Falkland Islands,^[8] New Zealand,^[11] Philadelphia,^[20] parts of the Midwestern US,^[21] Cape Breton and rural Canada • yous(e) guys – in the United States, particularly in New York City region, Philadelphia, Northeastern Pennsylvania, and the Upper Peninsula of Michigan; • you-uns, or yinz – Western Pennsylvania, the Ozarks, the Appalachians^[22] • ye, yee, yees, yiz – Ireland,^[23] Tyneside,^[24] Newfoundland and Labrador^[11] You prototypically refers to the addressee along with zero or more other persons, excluding the speaker. You is also used to refer to personified things (e.g., why won't you start? addressed to a car).^[25] You is always definite even when it is not specific. Semantically, you is both singular and plural, though syntactically it is almost always plural: i.e. always takes a verb form that originally marked the word as plural, (i.e. you are, in common with we are and they are). First person usage The practice of referring to oneself as you, occasionally known as tuism,^[26]^[27] is common when talking to oneself.^[28]^[29] It is less common in conversations with others, as it could easily result in confusion. Since English lacks a distinct first person singular imperative mood, you and let's function as substitutes. Third person usage You is used to refer to an indeterminate person, as a more common alternative to the very formal indefinite pronoun one.^[30] Though this may be semantically third person, for agreement purposes, you is always second person. Example: "One should drink water frequently" or "You should drink water frequently". You almost always triggers plural verb agreement, even when it is semantically singular. You can appear as a subject, object, determiner or predicative complement.^[5] The reflexive form also appears as an adjunct. You occasionally appears as a modifier in a noun phrase. • Subject: You're there; your being there; you paid for yourself to be there. • Object: I saw you; I introduced her to you; You saw yourself. • Predicative complement: The only person there was you. • Dependent determiner: I met your friend. • Independent determiner: This is yours. • Adjunct: You did it yourself. • Modifier: This sounds like a you problem. Pronouns rarely take dependents, but it is possible for you to have many of the same kind of dependents as other noun phrases. See also
{"url":"https://www.knowpia.com/knowpedia/You","timestamp":"2024-11-07T03:31:04Z","content_type":"text/html","content_length":"123651","record_id":"<urn:uuid:f9c198ee-f6c4-4963-b13c-0f0e9faf0bde>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00698.warc.gz"}
Linear Forces Speed Review Concept overview 1. Newton’s force laws 2. Drawing FBDs properly 3. Resolving vectors 4. Finding the net force 5. Solving problems Newton’s Laws We can use Newton’s three laws to help explain everyday phenomena, such as: • What force, if any, is needed to make a tennis ball bounce off the sidewalk? • If you walk along a log floating on a lake, why does the log move in the opposite direction • Why do you seem to be thrown forward in a car that rapidly decelerates? Law 1: Inertia An object in motion will stay in motion. An object at rest will stay at rest. Alternate explanation: all objects want to resist changes in their motion. The more inertia (mass) an object has, the more it can resist change. Think of how hard it is to bring a plane to a complete Law 2: [katex] F_{net} = ma [/katex] Any horizontal or vertical NET force acting on a mass will cause an acceleration. Vertical net force is the sum of ALL forces vertically. The horizontal net force is the sum of ALL forces horizontally. Law 3: Equal and Opposite Forces Push down on your desk. Does it move? It doesn’t, because it’s pushing back up at you with the same force. When two objects interact with each other, they both apply the same force to each other. For every action, there is an equal and opposite reaction. Alternate explanation: think of a truck hitting a motorcycle. They will experience the same force, but since the motorcycle has less mass it will experience greater acceleration. If you want to solve questions correctly, your best bet is to start by drawing an FBD (force body diagrams). Here are the rules for drawing common forces. 1. Weight ALWAYS points straight down. Draw the vector from the center of the object. 2. The normal force is ALWAYS 90° to the surface. Draw the vector from the bottom of the object. 3. Friction always points opposite to the direction of motion (except in circular motion, in which it points inwards) 4. Vectors can be at angles, but don’t draw the vector components unless asked to. 5. Vectors of equal magnitude should be of equal length. Common FBDs Know how to draw the following FBDs: • Mass in free fall, neglecting air resistance • Mass falling at terminal velocity • Box sliding horizontally with friction • Box sliding down a ramp with friction • Elevator going up and slowing down • Two different masses on opposite sides of a pulley Mass vs. Weight Mass is a scalar quantity (measured in kg). It remains constant everywhere in the universe. Weight is a vector quantity (measured in Newtons). It is the force on a mass due to gravity. Weight changes with gravity. [katex] \text{Weight} = F_{\text{net}} = mg [/katex] Setting Directions If you’ve finished Unit 1 Kinematics, you’ll be familiar with setting direction. Gravity can be positive or negative. The direction of positive and negative motion is arbitrarily chosen. That means you can choose if up means +/- or if left means +/-. But you need to be consistent. Indicate the directions on your FBD. If you set down to be positive, then gravity will be positive since it points down. Resolving Vectors We only use horizontal and vertical forces when solving force problems. Therefore, if you are using a vector that is at an angle, you need to break (resolve) it into x and y components. Use this trig shortcut: Given the vector [katex] \vec{A} [/katex], look at the component of the triangle you are trying to find. • If that component is opposite to the angle → use [katex] \vec{A} \sin \theta [/katex] • If that component is adjacent to the angle → use [katex] \vec{A} \cos \theta [/katex] If you’re still confused on vector components, read this short guide. You can also watch the video below. Net Forces vs. Equilibrium The net force is the SUM of ALL forces in either the X or Y direction. An object in equilibrium experiences zero net force in all directions. In other words, any force acting in a particular direction is canceled out with an equal but opposite force along the same axis. Since [katex] F_{net} = 0 = ma [/katex] → this implies the net acceleration of the object is 0. This either means the object is not moving OR moving at constant velocity. For example, a book laying on the table has 0 net force, because the weight vector and normal vector cancel out. → 0 Net Force = 0 acceleration = constant velocity Common Equations Kinetic Friction Force: [katex] f_k = \mu_kN [/katex] Static Friction Force: [katex] f_s = \mu_sN [/katex] • [katex]N[/katex] = normal force • [katex]\mu_k[/katex] = coefficient of kinetic friction (for when the object is moving) • [katex]\mu_s [/katex]= coefficient of static friction (for when the object is not moving • [katex]\mu_s > \mu_k [/katex] • PRO TIP: on a ramp –> [katex] \mu_s = \tan\theta [/katex] Hooke’s Law (Spring Force): [katex] F_s = kx [/katex] • [katex]k[/katex] = spring constant • [katex]x[/katex] = compression or stretch of the spring Centripetal force: [katex] F_c = ma_c = \frac{mv^2}{r} [/katex] • [katex]v[/katex] = tangential velocity • [katex]r[/katex] = radius of the circle Gravitational Force between two masses: [katex] F_g = \frac{Gm_1m_2}{r^2} [/katex] • [katex]G[/katex] = gravitational constant ≈ 6.67 × 10^-11 • [katex]m_1[/katex] = mass of object 1 • [katex]m_2[/katex] = mass of object 2 • [katex]r[/katex] = distance between [katex]m_1[/katex] and [katex]m_2[/katex] Solving Problems Quickly Students struggle the most on this. Using the steps (framework) below you can solve ALL force problems, it just comes down to practice. This framework was covered in depth in the Unit 2.4 course 1. Turn the word problem into a FBD. 2. Use the FBD to find either the horizontal or vertical net force. 3. Lastly set the net force equal to ma, to solve for the unknown variable. There are many of types of problems involving forces. But I’ve simplified it to this list of 60 linear force questions [available for course members], based on the actual AP Physics 1 exam. These will likely show up on your test. Common Types of Linear Force Questions Listed below is a list of common linear force problems (in the next article we’ll add circular force problems). You should know how to solve all of these, using the 3 step method above. As a bonus, I’ve linked short guides on how to specifically tackle each type of problem. • Box being pull by cord at an angle above the horizontal • Systems with multiple masses, like two blocks being pushed together • Gravitational force problems (force between two planets) Extra Help If any of this sounds confusing, you’re not alone! I’ve helped 100s of Physics students to achieve a 5 and boost class grades. You’ll start seeing results in just 3 lessons or less, guaranteed. Book A Free 1-to-1 Trial Lesson 10 Practice Questions for Mastery Question 1 Find the downwards acceleration of an elevator, given that the ratio of a person’s stationary weight to their weight in the elevator is 5:4. View Full Question and Explanation Question 3 A 1kg and unknown mass (M) hangs on opposite sides of the pulley suspended from the ceiling. When the masses are released, M accelerates down at 5 m/s². What is M? View Full Question and Explanation Question 4 A sled moves with constant speed down a sloped hill. The angle of the hill with respect to the horizontal is 10.0°. What is the coefficient of kinetic friction between the sled and the hill’s View Full Question and Explanation Question 5 A ladder is leaning against a wall at an angle. Which of the following forces must have the same magnitude as the frictional force exerted on the ladder by the floor? 1. The force of gravity on the ladder. 2. The normal force exerted on the ladder by the floor. 3. The frictional force exerted on the ladder by the wall. 4. The normal force exerted on the ladder by the wall. 5. None of these choices. View Full Question and Explanation Question 6 Three blocks of masses 5, 4, and 3 kg are placed side by side in that order. A 25 N force applied on the 5 kg block accelerates all three blocks together to the right. Find the acceleration of the blocks and the normal force the 4 kg block exerts on the 3 kg block. View Full Question and Explanation Question 7 A truck of mass 3500 kg hits the back of a small car of mass 1400 kg. Which car exerted more force on the other and why? 1. The truck exerts more force on the car because it has more mass. 2. The tuck exerts more force on the car because it has more acceleration. 3. The car exerts more force on the truck because it has less mass. 4. The car exerts more force on the truck because it has more acceleration 5. They exert the same force, and the resulting acceleration of the car is greater. View Full Question and Explanation Question 8 A block of mass m is accelerated across a rough surface by a force of magnitude F exerted at an angle θ above the horizontal. The frictional force between the block and surface is ƒ. Find the acceleration of the block (as an equation). View Full Question and Explanation Question 9 A clothesline is stretched between two trees. A tire hangs in the middle of the line, and the two halves of the line make equal angles with the horizontal. The tension in the line is 1. half the tire’s weight. 2. is equal to the tire’s weight divided by 9.18 m/s^2. 3. is less than half the tire’s weight. 4. is equal to the tire’s weight. 5. is more than the tire’s weight. View Full Question and Explanation Question 10 A child on Earth has a weight of 500N. Determine the weight of the child if the earth was to triple in both mass and radius (3M and 3r). View Full Question and Explanation Quick Answers 1. 2 m/s^2 2. 1650 N 3. 3 kg 4. tan(10) or .176 5. C. 6. 2.1 m/s^2 and 6.25 N 7. E. 8. a = (Fcosθ – ƒ)/m 9. E. 10. 167 N Practice Exam If you’re feeling confident try completing this practice exam. Make sure to time yourself and check your answers in the end. Here’s another Multiple Choice Test, that has questions similar to the AP Exam. Next Speed Review In the next speed review we’ll cover forces that cause a circular motion. We often call this centripetal forces.
{"url":"https://nerd-notes.com/forces-in-10-minutes/","timestamp":"2024-11-03T10:37:10Z","content_type":"text/html","content_length":"541010","record_id":"<urn:uuid:72d7953a-60ca-4e18-9c0c-173eaac04ac7>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00800.warc.gz"}
Date and Time: 29 September 2020, 5:30pm IST/ 12:00GMT / 08:00am EDT (joining time: 5:15 pm IST - 5:30 pm IST) Google meet link: meet.google.com/vog-pdxx-fdt Speaker: K.N. Raghavan, The Institute of Mathematical Sciences Title: Multiplicities of points on Schubert varieties in the Grassmannian - Part 1 Abstract: Given an arbitrary point on a Schubert (sub)variety in a Grassmannian, how to compute the Hilbert function (and, in particular, the multiplicity) of the local ring at that point? A solution to this problem based on "standard monomial theory" was conjectured by Kreiman-Lakshmibai circa 2000 and the conjecture was proved about a year or two later by them and independently also by Kodiyalam and the speaker. The two talks will be an exposition of this material aimed at non-experts in the sense that we will not presume familiarity with Grassmannians (let alone flag varieties) or Schubert varieties. There are two steps to the solution. The first translates the problem from geometry to algebra and in turn to combinatorics. The second is a solution of the resulting combinatorial problem, which involves establishing a bijection between two combinatorially defined sets. The two talks will roughly deal with these two steps respectively. Three aspects of the combinatorial formulation of the problem (and its solution) are noteworthy: (A) it shows that the natural determinantal generators of the tangent cone (at the given point) form a Groebner basis (in any "anti-diagonal" term order); (B) it leads to an interpretation of the multiplicity as counting certain non-intersecting lattice paths; and (C) as was observed by Kreiman some years later, the combinatorial bijection is a kind of Robinson-Schensted-Knuth correspondence, which he calls the "bounded RSK". Determinantal varieties arise as tangent cones of Schubert varieties (in the Grassmannian), and thus one recovers multiplicity formulas for these obtained earlier by Abhyankar and Herzog-Trung. (The multiplicity part of the Kreiman-Lakshmibai conjecture was also proved by Krattenthaler, but by very different methods.) What about Schubert varieties in other (full or partial) flag varieties (G/Q with Q being a parabolic subgroup of a reductive algebraic group G)? The problem remains open in general, even for the case of the full flag variety GL(n)/B, although there are several papers over the last two decades by various authors using various methods that solve the problem in various special cases. Time permitting, we will give some indication of these results, without however any attempt at comprehensiveness. Tue, September 29, 2020 10:51am IST
{"url":"https://www.math.iitb.ac.in/webcal/view_entry.php?id=728&date=20200929","timestamp":"2024-11-13T09:20:54Z","content_type":"text/html","content_length":"15962","record_id":"<urn:uuid:b494ab9b-da3b-45dd-9d09-093a166c63e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00519.warc.gz"}
Criss Cross Math Problem This worksheet presents a math problems that provide a puzzle where the numbers 1 through 6 must be placed in a cross-shaped figure so that the sum of the numbers in the horizontal row equals 11 and the sum of the numbers in the vertical column equals 11, with two of the numbers already provided. The worksheet includes an image of the cross with boxes for the numbers and a question mark indicating where one of the numbers should go. The worksheet is designed to teach students logical reasoning and the concept of equality in sums. It challenges them to use the process of elimination and algebraic thinking to determine where each number must be placed to achieve the required sums. The problem also helps students practice addition skills and understand the constraints of a problem with multiple solutions, enhancing their problem-solving abilities.
{"url":"https://mathgoodies.com/word_problems/pow16/","timestamp":"2024-11-05T15:33:18Z","content_type":"text/html","content_length":"34166","record_id":"<urn:uuid:ce5d21f9-c785-40c6-8ca5-2a83b8ab1370>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00390.warc.gz"}
Build an Outdoor Scale Model Solar System An outdoor demonstration of the distance and size of the planets. In this outdoor activity, students will learn to build a scale model of the Solar System. They'll learn how big the Sun is compared to the planets and how far apart everything is in space. • Duration: 1 hour • Ages: 8+ • Price: $250 • Location: in-person in Montréal, large park needed Québec Curriculum Connections A. Space • 1. Gets his/her bearings and locates objects in space (spatial relationships) • 2. Locates objects in a plane • 3. Locates objects on an axis (based on the types of numbers studied) A. Lengths • 1. Compares lengths • 2. Constructs rulers • 4. Estimates and measures the dimensions of an object using conventional units □ a. metre, decimetre and centimetre G. Time • 1. Estimates and measures time using conventional units • 2. Establishes relationships between units of measure Science and Technology Earth and Space D. Systems and interaction • 2. System involving the sun, the Earth and the moon □ Associates the sun with the idea of a star, the Earth with the idea of a planet and the moon with the idea a natural satellite • 4. Seasons □ a. Describes the changes to the environment throughout the seasons (temperature, amount of daylight, type of precipitation) □ Explains the sensations experienced (hot, cold, comfortable) with regard to temperature measurements B. Fractions (using objects or drawings) • 3. Matches a fraction to part of a whole (congruent or equivalent parts) or part of a group of objects, and vice versa • 4. Identifies the different meanings of fractions (sharing, division, ratio) • 5. Distinguishes a numerator from a denominator • 6. Reads and writes a fraction • 7. Compares a fraction to 0, 1⁄2 or 1 • 8. Verifies whether two fractions are equivalent Science and Technology Material World E. Techniques and instrumentation • 1. Use of simple measuring instruments □ Appropriately uses simple measuring instruments (rulers, dropper, graduated cylinder, balance, thermometer, chronometer) Earth and Space B. Energy • 1. Sources of energy □ a. Explains that the sun is the main source of energy on Earth E. Techniques and instrumentation • 2. Use of simple measuring instruments □ a. Appropriately uses simple measuring instruments (e.g. rulers, dropper, graduated cylinder, balance, thermometer, wind vane, barometer, anemometer, hygrometer) B. Fractions (using objects or drawings) • 4. Identifies the different meanings of fractions (sharing, division, ratio) • 7. Compares a fraction to 0, 1⁄2 or 1 • 8. Verifies whether two fractions are equivalent • 9. Matches a decimal or percentage to a fraction • 10. Orders fractions with the same denominator • 11. Orders fractions where one denominator is a multiple of the other(s) C. Fractions • 1. Uses objects, diagrams or equations to represent a situation and conversely, describes a situation represented by objects, diagrams or equations (use of different meanings of addition, subtraction and multiplication by a natural number) Science and Technology Material World C. Forces and motion • 3. Gravitational attraction on an object □ a. Describes the effect of gravitational attraction on an object (e.g. freefall) E. Techniques and instrumentation • 1. Use of simple measuring instruments □ Appropriately uses simple measuring instruments (rulers, dropper, graduated cylinder, balance, thermometer, chronometer) Earth and Space B. Energy • 2. Transmission of energy □ a. Describes methods for transmitting thermal energy (e.g. radiation, convection, conduction) D. Systems and interaction • 3. Solar system □ a. Recognizes the main components of the solar system (sun, planets, natural satellites) □ b. Describes the characteristics of the main components of the solar system (e.g. composition, size, orbit, temperature) Science and Technology The Earth and Space C. Astronomical phenomena • 1. Concepts related to astronomy □ a. Universal Gravitation ☆ i. Defines gravitation as a force of mutual attraction between bodies • 2. Solar system □ a. Characteristics of the solar system ☆ i. Compares some of the characteristics of the planets in our solar system (e.g. distances, relative size, composition) Science and Technology The Earth and Space C. Astronomical phenomena • 3. Space □ a. Scale of the universe ☆ i. Astronomical unit ○ Defines an astronomical unit as the unit of length corresponding to the average distance between the Earth and the Sun ☆ ii. Light year ○ Defines light year as a unit of length corresponding to the distance travelled by light in one Earth year ☆ iii. Location of the Earth in the universe ○ Compares the relative distance between different celestial bodies (e.g. stars, nebulae, galaxies) □ b. Conditions conducive to the development of life ☆ i. Describes conditions conducive to the development or maintenance of life (e.g. presence of a gaseous atmosphere, water, energy source) What Your Class Will Learn • how big the Sun is compared to the planets • how far apart the planets are • how to measure using a tape measure, and a digital map • basics of scale models and how they can represent large things To start, we'll measure how big the park or field that is available to us using Google Maps. From there, we'll use a Solar System Model Calculator to determine how far away the Sun and Neptune can Once calculated, we'll have the sizes and distances of the planets. Using plasticine, we'll build each planet (however small), and then head outside. Augmented reality will be used to show the size of the Sun. Then, we'll measure out how far each planet is. Book Now Fill out the form below and let's get in touch! You'll receive an email confirmation, then I'll follow-up to confirm a date and anything special you'd like for your presentation.
{"url":"https://www.plateauastro.com/presentations/scale-model-solar-system","timestamp":"2024-11-03T03:11:42Z","content_type":"text/html","content_length":"36869","record_id":"<urn:uuid:37d401b0-4ddd-4987-a8e7-67811c083c6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00868.warc.gz"}
COINTEG Statement COINTEG RANK=number <H=(matrix)> <J=(matrix)> <EXOGENEITY> <NORMALIZE=variable> ; The COINTEG statement fits the vector error correction model to the data, tests the restrictions of the long-run parameters and the adjustment parameters, and tests for the weak exogeneity in the long-run parameters. The cointegrated system uses the maximum likelihood analysis proposed by Johansen and Juselius (1990) and Johansen (1995a, 1995b). Only one COINTEG statement is allowed. You specify the ECM= option in the MODEL statement or the COINTEG statement to fit the VECM(). The P= option in the MODEL statement is used to specify the autoregressive order of the VECM. The following statements are equivalent for fitting a VECM(2). proc varmax data=one; model y1-y3 / p=2 ecm=(rank=1); proc varmax data=one; model y1-y3 / p=2; cointeg rank=1; To test restrictions of either or or both, you specify either J= or H= or both, respectively. You specify the EXOGENEITY option in the COINTEG statement for tests of the weak exogeneity in the long-run parameters. The following is an example of the COINTEG statement. proc varmax data=one; model y1-y3 / p=2; cointeg rank=1 h=(1 0, -1 0, 0 1) j=(1 0, 0 0, 0 1) exogeneity; The following options can be used in the COINTEG statement:
{"url":"http://support.sas.com/documentation/cdl/en/etsug/65545/HTML/default/etsug_varmax_syntax06.htm","timestamp":"2024-11-13T21:23:42Z","content_type":"application/xhtml+xml","content_length":"27867","record_id":"<urn:uuid:ae4cfd5c-4b8a-4538-8942-9cc66c9c0038>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00174.warc.gz"}
WBBSE Solutions For Class 6 Maths Chapter 1 Simplification Unitary Method - WBBSE Solutions WBBSE Solutions For Class 6 Maths Chapter 1 Simplification Unitary Method Chapter 1 Simplification Unitary Method Unitary Method : 1. The unitary method is a method of solving a problem by obtaining the value of one unit of material from some given value of the material. 2. Suppose you are given that the cost of 10 pens is 50. You have to obtain the cost of 16 pens. 3. Here we have to find the cost of one pen, then find the total cost of 16 pens. 4. So the cost of 10 pens = 50 5. ∴ The cost of 1 pen = ₹ \(\frac{50}{10}\) = 5 6. (Here the number of pens is less, so the cost would be less. 7. Therefore, the division process must be done.) 8. ∴ The cost of 16 pens = ₹ (5 x16) = ₹ 80. 9. (Here the number of pens is more, so the cost would be more. Therefore, the multiplication process must be done.) 10. Similarly, if the cost of 5 apples is 30, then what is the cost of 12 apples? 11. So to solve this problem, we can take the help of the above unitary method. 12. Here cost of 5 apples = ₹ 30 13. ∴ Cost of 1 apple = ₹ \(\frac{30}{5}\) =₹ 6 14. So the cost of 12 apples = 6 x 12 = ₹ 72. 15. In general, two variables are so related that if one variable increases which cause, the increase of the other variable, or the decrease of one variable causes the decrease of the other variable, then the relation is said to be direct relation. 16. On the other hand, if the increase of one variable causes the decrease of the other variable or the decrease of one variable causes the increase of the other variable, then the relation between the variable is said to be indirect relation or inverse relation. 17. For example, the number of books is variable and the cost of the books is another variable. 18. There exists a direct relation between these two variables. 19. This means that the increase in the number of books causes an increase in the cost of the books or the decrease in the number of books causes a decrease in the total cost of the books. 20. The relation between the variable is direct. 21. Again let us consider that some men complete work in some days. 22. Here for the same amount of work done the number of days required is a variable and the number of men required is the other variable. 23. For the same amount of work, if the number of men is more they would take less number of days to finish the work. 24. Also if the number of men is less then they would take more days to finish the work. 25. The relation between the variables is indirect or inverse. 26. The relation between the number of books and their cost is a direct relation i.e., the increase in the number of books causes an increase in their cost and the decrease in the number of books causes a decrease in their total cost. 27. On the other hand, the relation between the number of daily working hours to complete work and the number of days required is inverse relation, i.e., the increase in the daily working hours causes the decrease in the number of days required to complete the work and the decrease of the daily working hours causes the increase of the number of days required to complete the work. class 6 math wbbse solution So there exist two variables in general two relations: 1. Direct Relation 2. Indirect or Inverse Relation. For direct relation, the value of unit quantity would be less and for inverse relation, the value of unit quantity would be more. To solve this type of problem, first, you have to ascertain which type of relationship exists between the variables, then solve the problem using the unitary method otherwise the wrong results may come. You observe the following worked-out examples, then you will have a clear concept or idea about the unitary method. Class 6 West Bengal Board Math Solution Question 1. If 40 laborers can take 35 days to construct a part of the embankment of the Matla River, then how many laborers will be required to construct the same part of the bank in 28 days? 40 laborers can take 35 days to construct a part of the embankment of the Matla River A part of the embankment of the Matla River can be constructed in 35 days by 40 laborers. ∴ The same part can be constructed in 1 day by 40 x 35 laborers. ∴ In 28 days the same part can be constructed by = (10 × 5) = 50 labourers ∴ 50 laborers can be required. Question 2. Debarsi, Debalina, Debmalya, and Debdut can do 150 sums in 6 days. If each of them can do a same number of sums per day, then how many days will be required to do 250 sums by Debarsi and Solution : Debarsi, Debalina, Debmalya, and Debdut can do 150 sums in 6 days. If each of them can do a same number of sums per day Here total number of men = 4, the number of days = 6, and the number of sums = 150. It is also given that each of them can do every day a same number of sums. So, 4 persons can do 150 sums in 6 days 1 person can do 150 sums in 6 x 4 = 24 days 1 persons can do 150 sums in \(\frac{24}{150}\) 2 persons can do 1 sums in \(\frac{24}{150 x 2}\) day ∴ 2 persons can do 250 sums in days = 2 x 10 = 20 days ∴ The total number of required days = 20. Read And Learn More: WBBSE Solutions For Class 6 Maths Chapter 1 Simplification Solved Problems Question 3. 45 laborers can dig a well in 24 days. If the well can be dug in 18 days, then how many more laborers will be required? 45 laborers can dig a well in 24 days. If the well can be dug in 18 days A well can be dug in 24 days by 45 laborers. ∴ The well can be dug in 1 day by 45 x 24 laborers. The well can be dug in 8 days by = 60 laborers. There are already 45 laborers. ∴ 60 – 45 = 15 more laborers will be appointed. Class 6 West Bengal Board Math Solution Question 4. : If 2 men can polish \(\frac{1}{3}\) part of a table in one day, then how many men will be required to polish \(\frac{2}{3}\) part of the table in 2 days? Solution : 2 men can polish \(\frac{1}{3}\) part of a table in one day \(\frac{1}{3}\) part of a table can be polished in 1 day by 2 men ∴ 1 part of the table can be polished in 1 day by 2 x \(\frac{3}{1}\) men ∴ 1 part of the table can be polished in 2 days by \(\frac{2 \times 3}{1 \times 2}\) \(\frac{2}{3}\) part of the table can be polished in 2 days by \(\frac{2 \times 3}{1 \times 2} \times \frac{2}{3}\) ∴ The required number of men = 2. Question 5. 175 kg of rice is required for a week for a mid-day meal of 500 students. After 75 kg of rice has been used, how long will the remaining rice last for 400 students? 175 kg of rice is required for a week for a mid-day meal of 500 students. After 75 kg of rice has been used One week 7 days. Amount or remaining rice = (175 – 75) kg = 100 kg. 175 kg of rice will last 500 students for 7 days 1 kg of rice will last for 500 students for \(\frac{7}{175}\) days 1 kg of rice will last for 1 student for \(\frac{7 \times 500}{175}\) days 100 kg of rice will last for 1 student for \(\frac{7 \times 500}{175} \times 100\) days 100 kg of rice will last 400 students for Days = 5 days The remaining rice will last for 400 students for 5 days. Class 6 West Bengal Board Math Solution Question 6. If the price of 15 books is 1275, then how many books will be purchased for 2125? The price of 15 books is 1275 For₹ 1275, the number of books purchased = is 15 For ₹ 1, the number of books be purchased = \(\frac{15}{1275}\) For ₹ 2125, the number of books be purchased = \(\frac{15 \times 2125}{1275}\) ∴ The required number of books = is 25. Question 7. Sita, Gita, and Rita can complete a piece of work separately in 12 hours, 15 hours, and 18 hours respectively. If they do it together then in how many hours will they complete \(\frac{1} {2}\)of the work? Sita, Gita, and Rita can complete a piece of work separately in 12 hours, 15 hours, and 18 hours respectively. Here the whole of the work = 1 part. Then Sita can complete the work in 12 hours. ∴ Sita in 12 hours, can do 1 part of the work In 12 hours, Sita can do 1 part of the work ∴ In 1 hour, Sita can do \(\frac{1}{12}\) part of the work. Gita can do in 15 hours 1 part of the work. In 1 hour, Gita can do \(\frac{1}{15}\) part of the work. In 18 hours, Rita can do 1 part of the work. ∴ In 1 hour, Rita can do \(\frac{1}{18}\) part of the work. So in 1 hour Sita, Gita, and Rita together can do (\(\frac{1}{12}\) + \(\frac{1}{15}\) + \(\frac{1}{18}\) part of the work = \(\frac{15 + 12 + 10}{180}\) part = \(\frac{37}{180}\) part of the work. ∴ Sita, Gita, and Rita together can do \(\frac{37}{180}\) part of the work in 1 hour. ∴ Sita, Gita, and Rita together can do 1 part of the work in \(\frac{1 \times 180}{37}\) hours They together can do \(\frac{1}{2}\) part of work in \(\frac{1 \times 180}{2 \times 37}\) hours = \(\frac{90}{37}\) hours = 2 \(\frac{16}{37}\) hours. ∴ The required time = 2 \(\frac{16}{37}\)hours. Wbbse Class 6 Maths Solutions Question 8. : 4 tractors are required to cultivate 360 bighas of land in 20 days. How many tractors will be required to cultivate 1800 bighas of land in 10 days? 4 tractors are required to cultivate 360 bighas of land in 20 days. To cultivate 360 bighas of land in 20 days 4 tractors are required. Question 9. There are 20 boys in a hostel and 150 kg of atta is stored for them for 30 days. But 30 kg of atta was wasted and 5 boys went home from the hostel. How long will the remaining boys be fed with the remaining amount of atta? There are 20 boys in a hostel and 150 kg of atta is stored for them for 30 days. But 30 kg of atta was wasted and 5 boys went home from the hostel. The total amount of atta stored in the hostel was 150 kg and the amount of atta wasted was 30 kg. ∴ Remaining amount of atta= (150 – 30) = 120 kg The remaining number of boys in the hostel = is 20 – 5 = 15. In mathematical language, we have Wb Class 6 Maths Solutions Question 10. 15 vans can carry 75 quintals of fish in 40 minutes. How long will 20 vans carry 100 quintals of fish? 15 vans can carry 75 quintals of fish in 40 minutes. In mathematical language, we have, Wb Class 6 Maths Solutions Question 11. 12 farmers can cultivate land in 7 days working 6 hours a day. How many farmers will be required to cultivate that land in 9 days working 4 hours a day? 12 farmers can cultivate land in 7 days working 6 hours a day. In mathematical language, we have, Question 12. A compositor can compose 11 pages in 8 hours. How many days will be required to compose a book containing 264 pages working 6 hours on average per day? A compositor can compose 11 pages in 8 hours In mathematical language, we have, Wb Class 6 Maths Solutions Leave a Comment
{"url":"https://wbbsesolutions.net/wbbse-solutions-for-class-6-maths-chapter-1-simplification-unitary-method/","timestamp":"2024-11-01T19:19:43Z","content_type":"text/html","content_length":"132213","record_id":"<urn:uuid:fa406269-5059-428d-b9ed-47704b46976d>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00807.warc.gz"}
Shapes on the Playground Sally and Ben were drawing shapes in chalk on the school playground. Can you work out what shapes each of them drew using the clues? Sally and Ben went outside the school to draw some very large shapes on the tarmac. None of the shapes they drew shared any sides with any other shape - they were all drawn separately although some small shapes were inside larger shapes. The children each had a box of ten sticks of chalk. Each stick drew 10 metres before it was completely worn away. They both drew shapes with all sides one metre long. Sally drew squares and octagons, sixteen shapes altogether, and used up all her chalk. How many of each shape did she draw? Ben drew triangles, hexagons and squares. He drew $20$ shapes and still had two sticks of chalk left. How many of each shape did he draw? Getting Started Try starting by sketching out each of the shapes and writing down the perimeter of each one. Student Solutions Classmates from Moorfield Junior School, Thomas and Robert , explain their strategy for solving some of the challenges of Shapes on the Playground : To find the shapes that Sally drew they explain: "We listed all the numbers that add up to 100 and then starting multiplying them by 4 and 8. The answer we got gave us the solution, 7 squares and 9 octagons. We then worked Ben's out the same way. The answer we got gave us the solution 20 squares, 0 triangles and 0 hexagons!!!" Why do you think Thomas and Robert multiplied by 4 and 8? Do you agree with their answer for the shapes that Ben drew? The problem told us that, "Ben drew triangles, hexagons and squares". The pupils from St. Aldhelm's C.E. Combined School agree with the solution for Sally, but for Ben they have some other thoughts. They made tables showing the amount of chalk used for each shape and then by completing the tables they discovered how many of each shape Ben would have drawn if he drew 20 shapes in total. The answer they arrived at was that Ben drew 10 triangles (30 metres), 5 squares (20 metres) and 5 hexagons (30 metres). This used 8 sticks of chalk, as each stick drew 10 metres, leaving two unused sticks. Well done St. Aldhelm's! Can you recognise your initials? LM, ED, RD, HS, KT, DF, LH, NT, AND DR. Mel and Katie from Loretto Junior School also worked out that Ben could have drawn these shapes: • 17 squares, 2 triangles, and 1 hexagon • 8 squares, 8 triangles and 4 hexagons Very well done! Teachers' Resources Why do this problem? This problem is one that would be suitable for learners who are looking at measurement, regular polygons or revising the addition and subtraction of numbers to 100. It requires logical thinking and careful reading of the questions asked. Key questions How long are each of the sides of every shape? What is the perimeter of a hexagon/square/triangle/octagon? What shapes does the problem say Ben drew? Possible extension Learners could make a table of their results. Possible support Suggest starting by drawing each the shapes and writing underneath how many metres the perimeter of each would be.
{"url":"https://nrich.maths.org/problems/shapes-playground","timestamp":"2024-11-05T12:01:21Z","content_type":"text/html","content_length":"41141","record_id":"<urn:uuid:0a8379a2-fbf2-4992-baa2-b0667dad525c>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00285.warc.gz"}
Copyright 2013-2016 Edward Kmett and Dan Doel License BSD Maintainer Edward Kmett <ekmett@gmail.com> Stability experimental Portability rank N types Safe Haskell Safe-Inferred Language Haskell2010 Day f -| Curried f Day f ~ Compose f when f preserves colimits / is a left adjoint. (Due in part to the strength of all functors in Hask.) So by the uniqueness of adjoints, when f is a left adjoint, Curried f ~ Rift f Right Kan lifts newtype Curried g h a Source # Functor g => Functor (Curried g h) Source # Defined in Data.Functor.Day.Curried (Functor g, g ~ h) => Applicative (Curried g h) Source # Defined in Data.Functor.Day.Curried composedAdjointToCurried :: (Functor h, Adjunction f u) => u (h a) -> Curried f h a Source # Curried f h a is isomorphic to the post-composition of the right adjoint of f onto h if such a right adjoint exists.
{"url":"http://hackage-origin.haskell.org/package/kan-extensions-5.2.3/docs/Data-Functor-Day-Curried.html","timestamp":"2024-11-06T04:38:28Z","content_type":"application/xhtml+xml","content_length":"25631","record_id":"<urn:uuid:3e74b121-0c47-4c19-b7c2-d49da9bbb27f>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00260.warc.gz"}
"N-Ary Summation (∑)" The Mathematical Symbol "N-Ary Summation (∑)" Unraveling the "N-Ary Summation" Symbol (∑) Mathematical symbols provide a powerful and concise means to convey intricate mathematical relationships. A prominent example is the ∑ symbol, which stands for "N-Ary Summation". This article will shed light on its meaning, applications, and significance in the world of mathematics. Decoding ∑ The ∑ symbol represents the act of summing up a series of numbers. It is commonly associated with the concept of a summation, especially when summing up a sequence or series. This notation is particularly handy for expressing long or intricate sums in a succinct manner. Example 1: Simple Summation For a sequence of numbers \( a_1, a_2, \dots, a_n \), the summation can be represented as: \[ ∑_{i=1}^{n} a_i \] This states that we are summing all the \( a_i \) values starting from \( i = 1 \) up to \( n \). Example 2: Summation in Functions If we have a function \( f(x) \) and we want to sum its values from x=1 to x=n, it can be denoted as: \[ ∑_{x=1}^{n} f(x) \] Domains of ∑ Application The ∑ notation is versatile and has applications across various mathematical areas: • Calculus: Especially in discussions about series convergence. • Statistics: Useful for calculating means, variances, and other statistical measures. • Linear Algebra: When working with vector and matrix operations. • Discrete Mathematics: In combinatorics and number theory problems. It's a fundamental tool in mathematics, streamlining complex summation problems and offering an easily recognizable notation for summations of all sizes. In summary, the ∑ symbol, emblematic of "N-Ary Summation", is one of the cornerstones of mathematical notation. It encapsulates the essence of addition across numerous mathematical contexts, embodying both elegance and utility. Are You Good at Mathematical Symbols? Do you know, or can you guess, the technical symbols? Well, let's see! • This test has questions. • A correct answer is worth 5 points. • You can get up to 5 bonus points for a speedy answer. • Some questions demand more than one answer. You must get every part right. • Beware! Wrong answers score 0 points. • 🏆 If you beat one of the top 3 scores, you will be invited to apply for the Hall of Fame. Scoring System Guru (+) Hero (+) Captain (+) Sergeant (+) Recruit (+) Codes for the ∑ Symbol The Symbol ∑ Alt Code Alt 8721 HTML Code &#8721; HTML Entity &ampsum; CSS Code \2211 Hex Code &#x2211; Unicode U+2211 How To Insert the ∑ Symbol (Method 1) Copy and paste the symbol. The easiest way to get the ∑ symbol is to copy and paste it into your document. Bear in mind that this is a UTF-8 encoded character. It must be encoded as UTF-8 at all stages (copying, replacing, editing, pasting), otherwise it will render as random characters or the dreaded �. (Method 2) Use the "Alt Code." If you have a keyboard with a numeric pad, you can use this method. Simply hold down the Alt key and type 8721. When you lift the Alt key, the symbol appears. ("Num Lock" must be on.) (Method 3) Use the HTML Decimal Code (for webpages). HTML Text Output <b>My symbol: &#8721;</b> My symbol: ∑ (Method 4) Use the HTML Entity Code (for webpages). HTML Text Output <b>My symbol: &sum;&lt/b> My symbol: ∑ (Method 5) Use the CSS Code (for webpages). CSS and HTML Text Output span:after { content: "\2211";} My symbol: ∑ <span>My symbol:</span> (Method 6) Use the HTML Hex Code (for webpages and HTML canvas). HTML Text Output <b>My symbol: &#x2211;&lt/b> My symbol: ∑ On the assumption that you already have your canvas and the context set up, use the Hex code in the format to place the ∑ symbol on your canvas. For example: JavaScript Text const x = "0x"+"E9" ctx.fillText(String.fromCodePoint(x), 5, 5); (Method 7) Use the Unicode (for various, e.g. Microsoft Office, JavaScript, Perl). The Unicode for ∑ is . The important part is the hexadecimal number after the , which is used in various formats. For example, in Microsoft Office applications (e.g. Word, PowerPoint), do the following: Type Output 2211 ∑ [Hold down Alt] (The 2211 turns into ∑. Note that you can omit any leading zeros.) [Press x] In JavaScript, the syntax is \uXXXX. So, our example would be \u2211. (Note that the format is 4 hexadecimal characters.) JavaScript Text Output let str = "\u2211" My symbol: ∑ document.write("My symbol: " + str) You might also like... Help Us Improve Mathematics Monster • Do you disagree with something on this page? • Did you spot a typo? Please tell us using this form. Find Us Quicker! • When using a search engine (e.g., Google, Bing), you will find Mathematics Monster quicker if you add #mm to your search term. If you like Mathematics Monster (or this page in particular), please link to it or share it with others. If you do, please tell us. It helps us a lot! Create a QR Code Use our handy widget to create a QR code for this page...or any page. More about Mathematical Symbols Mathematics is a universal language that is used to describe and understand the intricacies of the universe. At the heart of this language are symbols, concise representations that convey specific meanings and ideas. Just as letters come together to form words in spoken languages, mathematical symbols combine to form expressions and equations, encapsulating intricate ideas in a compact form. The history of these symbols is as varied as their meanings; some have been in use for centuries while others have been introduced more recently to describe new discoveries and concepts. Whether you are a student, educator, researcher, or simply curious, this list of mathematical symbols will serve as a guide, shedding light on their meanings, origins, and applications. From the simple plus and minus signs to the more esoteric and complex, each symbol has its unique story and significance. More Symbols Full List of Mathematical Symbols
{"url":"https://mathematics-monster.com/symbols/N-Ary-Summation.html","timestamp":"2024-11-05T16:52:04Z","content_type":"text/html","content_length":"18568","record_id":"<urn:uuid:37a7b4c3-e54a-49d8-afef-fc5af4812346>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00759.warc.gz"}
60.1 mph to kmh (60.1 miles per hour to kilometers per hour) Here we will explain and show you how to convert 60.1 miles per hour to kilometers per hour. Miles per hour can be abbreviated to mph and kilometers per hour can be shortened to kmh or km/h. To calculate how fast 60.1 mph is in km/h, you need to know the mph to kmh formula. There are 1.609344 kilometers per mile. Therefore, the formula and the math to convert 60.1 mph to km/h is as mph × 1.609344 = km/h 60.1 × 1.609344 = 96.7215744 60.1 mphs ≈ 96.72 kmh Below is an image of a speedometer showing the needle pointing at 60.1 mph. The speedometer shows the mph in orange and km/h in black so you can see how the two speeds correspond visually. Now you know how fast 60.1 mph is in km/h. So what does it mean? It means that if you are driving 60.1 mph to get to a destination, you would need to drive 96.72 km/h to reach that same destination in the same time frame. mph to kmh Converter Enter another speed in miles per hour below to have it converted to kilometers per hour. 60.2 mph to kmh Here is the next speed in miles per hour (mph) that we have converted to kilometers per hour (km/h) for you. Privacy Policy
{"url":"https://convertermaniacs.com/miles-per-hour-to-kilometer-per-hour/convert-60.1-mph-to-kmh.html","timestamp":"2024-11-10T21:48:20Z","content_type":"text/html","content_length":"7043","record_id":"<urn:uuid:13d46c7c-b599-49cc-8adb-ba0fdf754104>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00755.warc.gz"}
Theory of Quadratic Equation Formula A polynomial of the second degree is generally called a quadratic polynomial. In elementary algebra, the quadratic formula is the solution of the quadratic equation. There are other ways to solve the quadratic equation instead of using the quadratic formula, such as factoring, completing the square, or graphing. Using the quadratic formula is often the most convenient way. If f(x) is a quadratic polynomial, then f(x) = 0 is called a quadratic equation. An equation in one unknown quantity in the form ax^2 + bx + c = 0 is called quadratic equation. Theory of Quadratic Equation Formula The theory of quadratic equation formulae will help us to solve different types of problems on the quadratic equation. The general form of a quadratic equation is ax^2 + bx + c = 0 where a, b, c are real numbers (constants) and a ≠ 0, while b and c may be zero. (i) The Discriminant of a quadratic equation is ax^2 + bx + c = 0 (a ≠ 0) is ∆ = b^2 – 4a (ii) If α and β be the roots of the equation ax^2 + bx + c = 0 (a ≠ 0) then α + β = – b/a = – [coefficient of x / coefficient of x^2] and αβ = c/a = [constant term / coefficient of x^2] (iii) The formula for the formation of the quadratic equation whose roots are given: x^2 – (sum of the roots)x + product of the roots = 0. (iv) When a, b and c are real numbers, a ≠ 0 and discriminant is positive (i.e., b^2 – 4ac > 0), then the roots α and β of the quadratic equation ax^2 + bx + c = 0 are real and unequal. (v) When a, b and c are real numbers, a ≠ 0 and discriminant is zero (i.e., b^2 – 4ac = 0), then the roots α and β of the quadratic equation ax^2 + bx + c = 0 are real and equal. (vi) When a, b and c are real numbers, a ≠ 0 and discriminant is negative (i.e., b^2 – 4ac < 0), then the roots α and β of the quadratic equation ax^2 + bx + c = 0 are unequal and imaginary. Here the roots α and β are a pair of the complex conjugates. (viii) When a, b and c are real numbers, a ≠ 0 and discriminant is positive and perfect square, then the roots α and β of the quadratic equation ax^2 + bx + c = 0 are real, rational unequal. (ix) When a, b and c are real numbers, a ≠ 0 and discriminant is positive but not a perfect square then the roots of the quadratic equation ax^2 + bx + c = 0 are real, irrational and unequal. (x) When a, b and c are real numbers, a ≠ 0 and the discriminant is a perfect square but any one of a or b is irrational then the roots of the quadratic equation ax^2 + bx + c = 0 are irrational. (xi) Let the two quadratic equations are a[1]x^2 + b[1]x + c[1] = 0 and a[1]x^2 + b[2]x + c[2] = 0 Condition for one common root: (c[1]a[2] – c[2]a[1])^2 = (b[1]c[2] – b[2]c[1])(a[1]b[2] – a[2]b[1]), which is the required condition for one root to be common of two quadratic equations. Condition for both roots common: a[1]/a[2] = b[1]/b[2] = c[1]/c[2] (xii) In a quadratic equation with real coefficients has a complex root α + iβ then it has also the conjugate complex root α – iβ. (xiii) In a quadratic equation with rational coefficients has an irrational or surd root α + √β, where α and β are rational and β is not a perfect square, then it has also a conjugate root α – √β. Information Source;
{"url":"https://assignmentpoint.com/theory-quadratic-equation-formula/","timestamp":"2024-11-05T04:17:32Z","content_type":"text/html","content_length":"27544","record_id":"<urn:uuid:e215cc86-55f4-4a70-abb4-b90d74f7de1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00230.warc.gz"}
October 2016 - Conception of the good In my last post I outlined how I developed pupils’ knowledge of rational numbers and simplifying surds in preparation for pupils to solve the following equations. I asked them if we could have any number that could be squared to result in a negative number. This was the point in the session that I introduced the idea of imaginary numbers. I gave them a brief history referring to the work and discussions by Rene Descartes and Leonard Euler. Now, “If I can square a number to get a negative result then it is an imaginary number because when we square a positive number we get a positive result, and when we square a negative number we get a positive number. This is how we display the square of an imaginary number”: I told the kids that this is a fact that we are going to accept. Now let’s move on. “We are now going to square root both sides to see what is equal to. We can’t square root a negative number but I am going to present it like this, and we can present it like this, again we are going to accept this for now and move on.” These two knowledge facts are the foundation in solving the next few problems which I reiterated again and again to the kids. I then showed how we can evaluate the two calculations shown at the start where the square of an imaginary number will result in a negative integer answer. I explicitly outlined each step, one by one, and I kept each line of the algorithm between the equation and the solution consistent for each question I demonstrated. “I am going to square root both sides, and I am going to include the positive and negative sign in front of the square root. We must include this.” “I am going to separate √-25 as a calculation of √25 and √-1 because I can then square root positive 25.” “I am going to evaluate √25.” “How handy! I know that √-1 is equal to . We are using our knowledge facts which I showed you midway into our session,” “This is my final answer.” I demonstrated another example, and then asked pupils to attempt a selection of questions by themselves. I then told the pupils, “We are now reaching the peak of our lesson where we are going to combine our knowledge of finding the solutions of a negative number, but this time the number that is going to be square rooted will not be a square number, this is where our knowledge of simplifying √24 and √500 will help us here.” I modelled the next example with the following teacher instruction: Here is my problem: “I am going to square root both sides, include the positive and negative sign in front of the square root, -72 will be within the square root” “I am going to separate -72 where it is shown as a product of -1 and 72, both values will be within a square root.” “I am going to simplify the square root of 72 where I have an integer and a non rational number, because the square root of 72 is positive, no longer negative!” “I am going to replace the square root of -1 with . “I am going to rewrite this so i is next to the integer, you will do the same for all your answers too. We are finished.” This is an example of a session that was delivered where the goal at the end was to have pupils being able to solve equations where they had to combine their understanding of rational numbers, and imaginary numbers. Pupils who attend this session are learning about different concepts in a succinct and limited manner to then apply their understanding to specific problem types selected by myself and the department. The questions the kids did to apply their understanding were carefully crafted by me to ensure that pupils were deliberately practising what I had modelled on the board. I think it worked quite well. If anything It made me realise that through a strong foundational understanding of number and times tables and thought out instruction, we can teach something as abstract as imaginary numbers. Furthermore, to get the delivery of the worked example to be as tight and accurate as possible I did script this lesson and rehearse it as well. The idea of imaginary numbers is huge! There is so much that can be taught but I narrowed the focus to ensure that pupils could achieve the goal required. I have attached a few images of pupils’ work, and a video of a pupil dictating his understanding to me. I stop him because he did not simplify the surd where we had the greatest possible k integer. The pupil was correct in the method that he was using as it had two steps, but I wanted pupils to simplify the surd using the largest factor. This post explained how I taught pupils the procedure of solving equations where the result has an imaginary number. The video posted shows the outcome of the session where two pupils solve one equation. Enjoy the video available at this link with a selection of photos of the pupils’ work. Think this sounds interesting? Come visit! We love having guests. It challenges our thinking and it boosts the pupils’ confidence to have people come in to see them. Think it sounds wonderful? Apply to join our team of enthusiastic maths nerds! We are advertising for a maths teacher, starting in September (or earlier, for the right candidate). Closing SOON:
{"url":"http://conceptionofthegood.co.uk/?m=201610","timestamp":"2024-11-04T15:27:54Z","content_type":"text/html","content_length":"49732","record_id":"<urn:uuid:50ce5405-b6d1-424b-a8a9-aed7f4526c88>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00702.warc.gz"}
Math Education for Arabian Kids If you’re struggling with a particular subject at school, math tutor in Dubai can help. These individuals can teach you a subject that is difficult for you. A physics and maths tutor can help you get the grades you need, and they can help you with any homework you have. Many physicists will work with students one-on-one, as well as in small groups, or you can find an online physicist who will do one-to-one lessons. The KFUPM Math department has many opportunities available to those interested in studying mathematics. Some of these include courses, research, and publications. There are also a variety of ways to get involved with the department, such as joining the FPTARG… • 2 years ago • Read Time: 6 minutes • by Tiffany Diaz Math worksheets for kindergarten are a great way for your child to practice maths skills such as counting, recognizing shapes and more. They can be easy to find online or you can print them out yourself. Free printable shapes worksheets… The Math Playground is an amazing website for kids that want to learn and practice their math skills. You’ll find the site has plenty of games – both numbers and logic – as well as some fun activities like story… When it comes to math, it seems like there are a ton of different calculators on the market. For example, there are graphing calculators, quick math calculators, and even some other math solvers. So which one should you choose? Gauthmath… • 2 years ago • Read Time: 4 minutes • by Tiffany Diaz The first grade curriculum also includes lessons in addition and subtraction. The children should learn to recognize the connection between addition and subtraction. They will then be encouraged to form fact families of related addition and subtraction problems. They can… • 3 years ago • Read Time: 6 minutes • by Tiffany Diaz The Dubai English Speaking School is the first comprehensive international English language school in the UAE. The establishment offers a dynamic learning environment, based on the motto of ‘Excel, Share and Create’. It also supports students in developing their own… The easiest way to learn Arabic numbers is to memorize them. The letters for Arabic numbers are spelled out in the form of a digit. These are followed by nouns and agree with their gender and case. If you are… • 3 years ago • Read Time: 6 minutes • by Tiffany Diaz There are many international schools in Jeddah, Saudi Arabia. The oldest one, the American International School, was founded in 1977. It has a rich curriculum and is popular with local families and expatriates alike. Besides the curriculum, it also offers… The Arab Primary School is a public elementary school in Arab, Alabama, that serves preschool and first grade students. The curriculum at Arab Primary School includes reading, physical education, music, and language arts. The teachers also incorporate science and math… Arab High School is a public school that provides various educational programs for students in grades 10 through 12. The school offers a wide range of classes, including English, social studies, math, science, foreign language, physical education, and special education…. There are many international schools in Riyadh, but not all are created equal. Some are community-centered, meaning they are run under the Ministry of Education, while others are not. In either case, the teachers are excellent and the curriculum caters… The King Abdullah Academy is a Saudi Arabian international school located in Floris, a suburban unincorporated area in Fairfax County, Virginia. The school is near Herndon and Dulles International Airports and serves students in preschool through grade 12. Its academic… Born in Tunis, Tunisia, Mohamed-Slim Alouini received a Ph.D. in Electrical Engineering from Caltech in 1998. He has since served as a faculty member at the University of Minnesota and at Texas A&M University in Qatar before joining the King… Taking the IELTS exam is one of the most important steps in achieving a high score in the world. The test has three sections: Reading, Listening and Speaking, and takes approximately three hours to complete. The goal of an IELTS… • 3 years ago • Read Time: 5 minutes • by Tiffany Diaz Students at university may need private tuition in mathematics. Advanced concepts such as probability and statistics, calculus, linear algebra, and probability and statistics are difficult to understand. In such cases, a math tutor can provide valuable assistance. These individuals are… • 3 years ago • Read Time: 7 minutes • by Tiffany Diaz When you are working with data, the modal class is the class with the highest frequency. The mode is defined as the number that occurs most frequently. For example, the number five appears more than any other number. When data…
{"url":"https://www.saudiinstitute.org/","timestamp":"2024-11-09T12:31:41Z","content_type":"text/html","content_length":"98698","record_id":"<urn:uuid:e5541da3-8d0c-491c-bea8-73562aaf958f>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00789.warc.gz"}
Subjectivity in Risk Analysis There are known knowns; things we know we know. There are known unknowns; things we know we dont know. But there are also unknown unknowns – things we dont know we dont know.” —United States Secretary of Defense, Donald Rumsfeld. And now, onto the main blog entry: "The Unknown Unknowns" An interesting comment on the Fukushima Daiichi disaster: The Japanese Nuclear Commission had the following goals set in 2003: "The mean value of acute fatality risk by radiation exposure resultant from an accident of a nuclear installation to individuals of the public, who live in the vicinity of the site boundary of the nuclear installation, should not exceed the probability of about 1x10⁻⁶ per year (that is , at least 1 per million years)". That policy was designed only 8 years ago. Their one in a million-year accident occurred about 8 years later. - ENDQUOTE Plus the disaster was only 66 years after 1945. How reliable are probability estimates for real life systems? Firstly, let us take a look at the definition of the 'deterministic system': "A deterministic system is a conceptual model of the philosophical doctrine of determinism applied to a system for understanding everything that has and will occur in the system, based on the physical outcomes of causality. In a deterministic system, every action, or cause, produces a reaction, or effect, and every reaction, in turn, becomes the cause of subsequent reactions. The totality of these cascading events can theoretically show exactly how the system will exist at any moment in time." Wiki:Deterministic System In a deterministic/mechanistic universe, all forces/variables/interrelationships operating in any system would be identifiable and quantifiable. However, our universe is NOT a deterministic/mechanistic/discreet system in which cause effect relationships are known or knowable. Probabilities of discrete systems like the throw of a dice or toss of a coin in which outputs are well defined, such probabilities are reliable. But probabilities of real life systems, that are complex non discreet systems marked by non quantifiable "Emergents", systems in which all the forces/variables/interrelationships involved can never be known - such probabilities cannot be reliable. Put simply, • The probabilties associated with simple systems like the throw of a dice come with 100% certainty. The probability that any one side will show up is 1/6, and this cannot be disputed. • A complex system consisting of thousands or millions of moving parts, electronics, human operators, weather conditions, is however a different ballgame. The probabilities calculated can never be certain, the subjectivity involved in the calculations is fundamentally unhandleable. One example: "We overestimate our ability to predict and serially assign lower than warranted probabilities to extreme events. During the week of the Lehman collapse, bank analysts claimed that they had seen 3 six sigma events (each with probability lower than 1% of 1%)!" Policy Tensor The following are some examples of popular scientists/philosophers/mathematicians who's basic world view is Indeterminism. The Indeterminist cannot trust probability/uncertainty calculations for real life systems. By no means an exhaustive list, there's a lot more: • Richard Feynman (Physicist, Nobel Prize 1965 ) • Fritjof Capra (physicist) • Benoit Mandelbrot (mathematician) • Nassim Taleb (mathematician) • Robert Pirsig (philosopher/metaphysician) • Robert Laughlin (physicist) • Werner Heisenberg (Quantum physicist, Nobel Prize 1932) • Max Born (Quantum physicist, Nobel Prize 1954) • Jacques Monod (Nobel Prize 1965) • Murray Gell-Man (Nobel Prize 1969) • Ilya Prigogine* (Nobel Prize 1977) *Argued for indeterminism in complex systems. Some QUOTES: • "If a guy tells me that the probability of failure is 1 in 10⁵, I know he’s full of crap." - Richard Feynman, Nobel Laureate commenting on the NASA Challenger disaster. • "We don't know the probability. We don't have enough data, we don't have enough knowledge, we don't have reliable information." Benoit Mandelbrot, founding father of Fractal Mathematics, commenting on the complexity of global economics. • "We should not talk about small probabilities in any domain. Science cannot deal with them. It is irresponsible to talk about small probabilities and make people rely on them, except for natural systems that have been standing for 3 billion years." - Nassim Taleb. • "If you compute the frequency of a rare event and your survival depends on such event not taking place (such as nuclear events), then you underestimated that probability." - Nassim Taleb. Passages from Subjectivity in Risk @ CSR.NCL.AC.UK by Felix Redmill. • Risk values are arrived at via the process of risk analysis. In many quarters, this is assumed to be objective, and its results - the risk values - to be correct. Yet, as will be shown in subsequent sections of this report, all stages of the process involve subjectivity, in some cases to a considerable extent. Always there is reliance on judgement, and, as in all cases in which judgement is called for, there can be no guarantee that it will be made to a reasonable approximation, even by an expert. Indeed, it may be - and sometimes is - made by an inexperienced novice. The need for judgement introduces subjectivity and bias, and therefore uncertainty and the likelihood of inaccuracy. The results obtained by one risk analyst are unlikely to be obtained by others starting with the same information. Further, there is a natural impediment to arriving at 'correct' risk values. Although definitions of risk do not explicitly refer to time, the future is implicit in them. • Thus, risk may be estimated but it cannot be measured (Gould et al 1988). Risk values cannot be assumed to be 'correct'. • The decisions on how to define consequence, at both the definition-of-scope and analysis stages, are subjective. So too are the predictions of what the actual consequences might be. • This omission of possible causes of failure (or dangerous failure, if safety is the main criterion) is not unusual and, no guarantee can be given that it has been avoided. It renders risk calculations spurious. And finally, a passage from Techné: Research in Philosophy and Technology: Few if any decisions in actual life are based on probabilities that are known with certainty. • Strictly speaking, the only clear-cut cases of “risk” (known probabilities) seem to be idealized textbook cases that refer to devices such as dice, coins, or roulette wheels that are supposedly known with certainty to be fair. More typical real-life cases are characterized by uncertainty that does not, primarily, come with exact probabilities. Hence, almost all decisions are decisions “under uncertainty”. To the extent that we make decisions “under risk”, this does not mean that these decisions are made under conditions of completely known probabilities. Rather, it means that we have chosen to simplify our description of these decision problems by treating them as cases of known probabilities. This ubiquity of uncertainty applies also in engineering design. An engineer performing a complex design task has to take into account a large number of hazards and eventualities. Some of these eventualities can be treated in terms of probabilities; the failure rates of some components may for instance be reasonably well-known from previous experiences. However, even when we have a good experience-based estimate of a failure rate, some uncertainty remains about the correctness of this estimate and in particular about its applicability in the context to which we apply it. In addition, in every design process there are uncertainties for which we do not have good or even meaningful probability estimates. This includes the ways in which humans will interact with new constructions. As one example of this, users sometimes “compensate” for improved technical safety by more risk-taking behaviour. Drivers are known to have driven faster or delayed braking when driving cars with better brakes. (Rothengatter 2002) It is not in practice possible to assign meaningful numerical probabilities to these and other human reactions to new and untested designs. It is also difficult to determine adequate probabilities for unexpected failures in new materials and constructions or in complex new software. We can never escape the uncertainty that refers to the eventuality of new types of failures that we have not been able to foresee. Of course, whereas reducing risk is obviously desirable, the same may not be said about the reduction of uncertainty. Strictly interpreted, uncertainty reduction is an epistemic goal rather than a practical one. • Many of the most ethically important safety issues in engineering design refer to hazards that cannot be assigned meaningful probability estimates. It is appropriate that at least two of the most important strategies for safety in engineering design, namely safety factors and multiple safety barriers, deal not only with risk (in the standard, probabilistic sense of the term) but also with Currently there is a trend in several fields of engineering design towards increased use of probabilistic risk analysis (PRA). This trend may be a mixed blessing since it can lead to a one-sided focus on those dangers that can be assigned meaningful probability estimates. PRA is an important design tool, but it is not the final arbitrator of safe design since it does not deal adequately with issues of uncertainty. Design practices such as safety factors and multiple barriers are indispensable in the design process, and so is ethical reflection and argumentation on issues of safety. Probability calculations can often support, but never supplant, the engineer’s ethically responsible judgment. Safe Design Sven Ove Hansson , Department of Philosophy and the History of Technology, Royal Institute of Technology, Stockholm. Recommended 1: Role of Richard Feynman in NASA Challanger disaster, Rogers Commission Report Some quotes from the abovementioned: • "Feynman was struck by management's claim that the risk of catastrophic malfunction on the shuttle was 1 in 10^5; i.e., 1 in 100,000. Feynman immediately realized that this claim was risible on its face; as he described, this assessment of risk would entail that NASA could expect to launch a shuttle every day for the next 274 years while suffering, on average, only one accident." • Feynman was disturbed by two aspects of this practice. First, NASA management assigned a probability of failure to each individual bolt, sometimes claiming a probability of 1 in 10^8; that is, one in one hundred million. Feynman pointed out that it is impossible to calculate such a remote possibility with any scientific rigor. Secondly, Feynman was bothered not just by this sloppy science but by the fact that NASA claimed that the risk of catastrophic failure was "necessarily" 1 in 10^5. As the figure itself was beyond belief, Feynman questioned exactly what "necessarily" meant in this context—did it mean that the figure followed logically from other calculations, or did it reflect NASA management's desire to make the numbers fit? • Feynman suspected that the 1/100,000 figure was wildly fantastical, and made a rough estimate that the true likelihood of shuttle disaster was closer to 1 in 100. Recommended 2: Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, Volume 2: Appendix F - Personal Observations on Reliability of Shuttle by R. P. Feynman Recommended 3: Black Swan MindMap Recommended 4: Policy Tensor Recommended 5: Philosophy of Risk Homepage No comments:
{"url":"https://tech.vikram-madan.com/2012/07/subjectivity-in-risk.html","timestamp":"2024-11-04T13:45:34Z","content_type":"application/xhtml+xml","content_length":"103342","record_id":"<urn:uuid:b8b518d4-8c82-4714-b2ac-e1438c61bc1d>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00712.warc.gz"}
multiplicative thinking Archives - Math Motivator Another exciting day of job-embedded learning with students and educators in a primary class. The teacher had just introduced her students to perimeter the day before. – “estimate, measure, and record the distance around objects, using non-standard units”. The students were organized into pairs and given a shape and snap … More Perimeter and Multiplicative Thinking » Ants! Ants! Ants! Recently in a Gr 3, 4, 5 classroom we gave the students this question: Students are learning about insects. They discover that an ant has 1 body, 2 antennae and 6 legs. They each make a model. How many bodies, antennae and legs will they need for 5, 10 and … More Ants! Ants! Ants! » Proportional Reasoning – Measurement I love problems that provide authentic opportunities to cluster several big ideas. Recently, I was in a classroom where the teacher gave the following problem to his students. Tia is filling a bucket with water. She knows that 500 ml of water comes out of the hose every 10 seconds. … More Proportional Reasoning – Measurement » Explicit Teaching – Timing is Everything! Explicit teaching must take place in the consolidation part of the shared math lesson when working with EQAO type thinking questions. To be most effective, teachers need to know the learning goal. As Dr Marian Small says the learning goal should be connected to understanding and less about doing. When … More Explicit Teaching – Timing is Everything! »
{"url":"http://mathmotivator.com/tag/multiplicative-thinking/","timestamp":"2024-11-10T05:10:06Z","content_type":"text/html","content_length":"59406","record_id":"<urn:uuid:d9c2dade-d01b-4b2c-b552-9eeb53ff25e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00253.warc.gz"}
Re: [xsl] The Holy Trinity of Functional Programming ... Is Hi Folks, Professor Richard Bird has written extensively on functional programming. In one of his books he has a fascinating discussion on three key aspects of functional programming, which he calls the holy trinity of functional programming. The first key aspect is: User-defined recursive data types He gives an example (Haskell notation): data Nat = Zero | Succ Nat The elements of this data type include: Zero, Succ Zero, Succ (Succ Zero), Succ (Succ (Succ Zero)), ... Is there a way in XSLT 2.0 to define recursive data types? If yes, how would the Nat data type be defined in XSLT 2.0? P.S. For those interested, below is my summary of Bird's discussion on the holy trinity of functional programming. The Holy Trinity of Functional Programming These three ideas constitute the holy trinity of functional programming: 1. User-defined recursive data types. 2. Recursively defined functions over recursive data types. 3. Proof by induction: show that some property P(n) holds for each element of a recursive data type. Here is an example of a user-defined recursive data type. It is a declaration for the natural numbers 0, 1, 2, ...: data Nat = Zero | Succ Nat The elements of this data type include: Zero, Succ Zero, Succ (Succ Zero), Succ (Succ (Succ Zero)), ... To understand this, when creating a Nat value we have a choice of either Zero or Succ Nat. Suppose we choose Succ Nat. Well, now we must choose a value for the Nat in Succ Nat. Again, we have a choice of either Zero or Succ Nat. Suppose this time we choose Zero, to obtain Succ Zero. The ordering of the elements in the Nat data type can be specified by defining Nat to be a member of the Ord class: instance Ord Nat where Zero < Zero = False Zero < Succ n = True Succ m < Zero = False Succ m < Succ n = (m < n) Here is how the Nat version of the expression 2 < 3 is evaluated: Succ (Succ Zero) < Succ (Succ (Succ Zero)) -- Nat version of 2 < 3 = Succ Zero < Succ (Succ Zero) -- by the 4th equation defining order = Zero < Succ Zero -- by the 4th equation defining order = True -- by the 2nd equation defining order Here is a recursively defined function over the data type; it adds two Nat elements: (+) :: Nat -> Nat -> Nat m + Zero = m m + Succ n = Succ(m + n) Here is how the Nat version of 0 + 1 is evaluated: Zero + Succ Zero -- Nat version of 0 + 1 = Succ (Zero + Zero) -- by the 2nd equation defining + = Succ Zero -- by the 1st equation defining + Here is another recursively defined function over the data type; it subtracts two Nat elements: (-) :: Nat -> Nat -> Nat m - Zero = m Succ m - Succ n = m - n If the Nat version of 0 - 1 is executed, then the result is undefined: Zero - Succ Zero The "undefined value" is denoted by this symbol: _|_ (also known as "bottom") Important: _|_ is an element of *every* data type. So we must expand the list of elements in Nat: _|_, Zero, Succ Zero, Succ (Succ Zero), Succ (Succ (Succ Zero)), ... There are still more elements of Nat. Suppose we define a function that returns a Nat. Let's call the function undefined: undefined :: Nat undefined = undefined It is an infinitely recursive function: when invoked it never stops until the program is interrupted. This function undefined is denoted by the symbol _|_ Recall how we defined the ordering of values in Nat: instance Ord Nat where Zero < Zero = False Zero < Succ n = True Succ m < Zero = False Succ m < Succ n = (m < n) From that ordering we can see that Succ _|_ is an element of Nat: Zero < Succ undefined -- Note: same as Zero < Succ _|_ = True -- by the 2nd equation defining order And Succ (Succ _|_ ) is an element of Nat: Succ Zero < Succ (Succ undefined) -- Note: same as Zero < Succ (Succ _|_ ) = Zero < Succ undefined -- by the 4th equation defining order = True -- by the 2nd equation defining order And Succ (Succ (Succ _|_ ) is an element of Nat, and so forth. So the list of elements in Nat expands: ..., Succ (Succ (Succ _|_ )), Succ (Succ _|_ ), Succ _|_, _|_, Zero, Succ Zero, Succ (Succ Zero), Succ (Succ (Succ Zero)), ... One can interpret the extra values in the following way: _|_ corresponds to the natural number about which there is absolutely no information Succ _|_ to the natural number about which the only information is that it is greater than Zero Succ (Succ _|_ ) to the natural number about which the only information is that it is greater than Succ Zero And so on There is one further value of Nat, namely the "infinite" number: Succ (Succ (Succ (Succ ...))) It can be defined by this function: infinity :: Nat infinity = Succ infinity It is different from all the other Nat values because it is the only number for which Succ m < infinity for all finite numbers m: Zero < infinity = True Succ Zero < infinity = True Succ (Succ Zero) < infinity = True Thus, the values of Nat can be divided into three classes: - The finite values, Zero, Succ Zero, Succ (Succ Zero), and so on. - The partial values, _|_, Succ _|_, Succ (Succ _|_ ), and so on. - The infinite value. Important: the values of *every* recursive data type can be divided into three classes: - The finite values of the data type. - The partial values, _|_, and so on. - The infinite values. Although the infinite Nat value is not of much use, the same is not true of the infinite values of other data types. Recap: We have discussed two aspects of the holy trinity of functional programming: recursive data types and recursively defined functions over those data types. The third aspect is induction, which is discussed now. Induction allows us to reason about the properties of recursively defined functions over a recursive data type. In the case of Nat the principle of induction can be stated as follows: in order to show that some property P(n) holds for each finite number n of Nat, it is sufficient to treat two cases: Case (Zero). Show that P(Zero) holds. Case (Succ n). Show that if P(n) holds, then P(Succ n) also holds. As an example, let us prove this property (the identity for addition): Zero + n = n Before proving this, recall that + is defined by these two equations: m + Zero = m m + Succ n = Succ(m + n) Proof: The proof is by induction on n. Case (Zero). Substitute Zero for n in the equation Zero + n = n. So we have to show that Zero + Zero = Zero, which is immediate from the first equation defining +. Case (Succ n). Assume P(n) holds; that is, assume Zero + n = n holds. This equation is referred to as the induction hypothesis. We have to show that P(Succ n) holds; that is, show that Zero + Succ n = Succ n holds. We do so by simplifying the left-hand expression: Zero + Succ n = Succ (Zero + n) -- by the 2nd equation defining + = Succ (n) -- by the induction hypothesis We have proven the two cases and so we have proven that Zero + n = n holds for all finite numbers of Nat. Full Induction. To prove that a property P holds not only for finite members of Nat, but also for every partial (undefined) number, then we have to prove three things: Case ( _|_ ). Show that P( _|_ ) holds. Case (Zero). Show that P(Zero) holds. Case (Succ n). Show that if P(n) holds, then P(Succ n) holds also. To just prove that a property P holds for every partial (undefined) number, then we omit the second case. To illustrate proving a property that holds for every partial number (not for the finite numbers), let us prove the rather counterintuitive result that m + n = n for all numbers m and all partial numbers n. For easy reference, we repeat the definition of + m + Zero = m m + Succ n = Succ(m + n) Proof: The proof is by partial number induction on n. Case ( _|_ ). Substitute _|_ for n in the equation m + n = n. So we have to show that m + _|_ = _|_, which is true since _|_ does not match either of the patterns in the definition of +. Case (Succ n). We assume P(n) holds; that is, assume m + n = n holds. This equation is the induction hypothesis. We have to show that P(Succ n) holds; that is, show that m + Succ n = Succ n holds. For the left-hand side we reason m + Succ n = Succ(m + n) -- by the second equation for + = Succ(n) -- by the induction hypothesis Since the right-hand side is also Succ n, we are done. The omitted case (m + Zero = Zero) is false, which is why the property does not hold for finite numbers. As an added bonus, having proved that an equation holds for all partial (undefined) numbers, we can assert that it holds for the infinite number too; that is, P(infinity) holds. Thus, we can now assert that m + infinity = infinity for all numbers m.
{"url":"https://www.biglist.com/lists/lists.mulberrytech.com/xsl-list/archives/201208/msg00135.html","timestamp":"2024-11-13T09:21:47Z","content_type":"text/html","content_length":"16287","record_id":"<urn:uuid:c573ea95-a00c-4633-ac1c-e99f7c965c42>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00244.warc.gz"}
Challenge 2: Bayes' Theorem Data Science Interview Challenge Data Science Interview Challenge In the world of probability and statistics, Bayesian thinking offers a framework for understanding the probability of an event based on prior knowledge. It contrasts with the frequentist approach, which determines probabilities based on the long-run frequencies of events. Bayes' theorem is a fundamental tool within this Bayesian framework, connecting prior probabilities and observed data. Imagine you are a data scientist working for a medical diagnostics company. Your company has developed a new test for a rare disease. The prevalence of this disease in the general population is 1%. The test has a 99% true positive rate (sensitivity) and a 98% true negative rate (specificity). Your task is to compute the probability that a person who tests positive actually has the disease. • P(Disease) = Probability of having the disease = 0.01 • P(Positive|Disease) = Probability of testing positive given that you have the disease = 0.99 • P(Negative|No\ Disease) = Probability of testing negative given that you don't have the disease = 0.98 Using Bayes' theorem: P(Disease|Positive) = P(Positive|Disease) * P(Disease) / P(Positive) Where P(Positive) can be found using the law of total probability: P(Positive) = P(Positive|Disease) * P(Disease) + P(Positive|No Disease) * P(No Disease) Compute P(Disease|Positive), the probability that a person who tests positive actually has the disease. Switch to desktop for real-world practiceContinue from where you are using one of the options below Thanks for your feedback! In the world of probability and statistics, Bayesian thinking offers a framework for understanding the probability of an event based on prior knowledge. It contrasts with the frequentist approach, which determines probabilities based on the long-run frequencies of events. Bayes' theorem is a fundamental tool within this Bayesian framework, connecting prior probabilities and observed data. Imagine you are a data scientist working for a medical diagnostics company. Your company has developed a new test for a rare disease. The prevalence of this disease in the general population is 1%. The test has a 99% true positive rate (sensitivity) and a 98% true negative rate (specificity). Your task is to compute the probability that a person who tests positive actually has the disease. • P(Disease) = Probability of having the disease = 0.01 • P(Positive|Disease) = Probability of testing positive given that you have the disease = 0.99 • P(Negative|No\ Disease) = Probability of testing negative given that you don't have the disease = 0.98 Using Bayes' theorem: P(Disease|Positive) = P(Positive|Disease) * P(Disease) / P(Positive) Where P(Positive) can be found using the law of total probability: P(Positive) = P(Positive|Disease) * P(Disease) + P(Positive|No Disease) * P(No Disease) Compute P(Disease|Positive), the probability that a person who tests positive actually has the disease. Switch to desktop for real-world practiceContinue from where you are using one of the options below Thanks for your feedback! In the world of probability and statistics, Bayesian thinking offers a framework for understanding the probability of an event based on prior knowledge. It contrasts with the frequentist approach, which determines probabilities based on the long-run frequencies of events. Bayes' theorem is a fundamental tool within this Bayesian framework, connecting prior probabilities and observed data. Imagine you are a data scientist working for a medical diagnostics company. Your company has developed a new test for a rare disease. The prevalence of this disease in the general population is 1%. The test has a 99% true positive rate (sensitivity) and a 98% true negative rate (specificity). Your task is to compute the probability that a person who tests positive actually has the disease. • P(Disease) = Probability of having the disease = 0.01 • P(Positive|Disease) = Probability of testing positive given that you have the disease = 0.99 • P(Negative|No\ Disease) = Probability of testing negative given that you don't have the disease = 0.98 Using Bayes' theorem: P(Disease|Positive) = P(Positive|Disease) * P(Disease) / P(Positive) Where P(Positive) can be found using the law of total probability: P(Positive) = P(Positive|Disease) * P(Disease) + P(Positive|No Disease) * P(No Disease) Compute P(Disease|Positive), the probability that a person who tests positive actually has the disease. Switch to desktop for real-world practiceContinue from where you are using one of the options below Thanks for your feedback! In the world of probability and statistics, Bayesian thinking offers a framework for understanding the probability of an event based on prior knowledge. It contrasts with the frequentist approach, which determines probabilities based on the long-run frequencies of events. Bayes' theorem is a fundamental tool within this Bayesian framework, connecting prior probabilities and observed data. Imagine you are a data scientist working for a medical diagnostics company. Your company has developed a new test for a rare disease. The prevalence of this disease in the general population is 1%. The test has a 99% true positive rate (sensitivity) and a 98% true negative rate (specificity). Your task is to compute the probability that a person who tests positive actually has the disease. • P(Disease) = Probability of having the disease = 0.01 • P(Positive|Disease) = Probability of testing positive given that you have the disease = 0.99 • P(Negative|No\ Disease) = Probability of testing negative given that you don't have the disease = 0.98 Using Bayes' theorem: P(Disease|Positive) = P(Positive|Disease) * P(Disease) / P(Positive) Where P(Positive) can be found using the law of total probability: P(Positive) = P(Positive|Disease) * P(Disease) + P(Positive|No Disease) * P(No Disease) Compute P(Disease|Positive), the probability that a person who tests positive actually has the disease. Switch to desktop for real-world practiceContinue from where you are using one of the options below Switch to desktop for real-world practiceContinue from where you are using one of the options below
{"url":"https://codefinity.com/courses/v2/8f74b411-aed3-4916-8a1a-b25629653d8e/0978e72e-cbd4-45b1-ac44-ce7162f23eab/48a73873-e870-4d85-bfca-96410755471f","timestamp":"2024-11-08T13:55:24Z","content_type":"text/html","content_length":"399413","record_id":"<urn:uuid:008844f2-0aa4-4e8c-b759-f631cc040949>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00378.warc.gz"}
CMUS 120 Fundamentals of Music 16 Intervals Chelsey Hamm and Bryn Hughes Two pitches form an interval, which is usually defined as the distance between two notes. But what does an interval measure? Physical distance, on the staff? Difference in wavelength between pitches? Something else? Music theorists have had contradictory ideas on the definition of “interval,” and these definitions have varied greatly with milieu. This chapter will focus on intervals as a measure of two things: written distance between two notes on a staff, and an aural “distance” (or space) between two sounding pitches. It will be important to keep in mind at all times that intervals are both written and aural, so that you are thinking of them musically (and not simply as an abstract concept that you are writing and reading). You might encounter melodic intervals or harmonic intervals. Melodic intervals are played or sung separately, while harmonic intervals are played or sung together. shows the difference: As you can see and hear in , the notes in the first measure sound together (harmonically), while in the second measure they sound separately (melodically). Every interval has a size and a quality. A size is the distance between two notes on a staff—i.e. it is a measurement of the number of lines and spaces between two notes. Sizes are written with Arabic numbers (2, 3, 4, etc.); however, they are spoken with ordinal numbers (second, third, fourth, fifth, sixth, seventh, etc.). Always count a note to itself as one when counting size. shows the first 8 sizes within an C major scale: As you can see in , a note to itself is not said to be a “first;” instead, it is a unison. Likewise, notes eight lines and spaces apart are not said to be an “eighth” but instead they are an Size is considered generic. In other words, it doesn’t matter what accidentals you apply to the notes, the size is always the same. demonstrates this: As you can see in , each of these intervals is a third because there are three lines/spaces between the two notes. Accidentals do not matter in the determination of generic size. We would say that each of these intervals is a “generic third.” Perfect, Major, and Minor Qualities A quality makes an interval specific when used in combination with a size. Quality more precisely measures written distance between notes, and—in combination with an interval’s size—it describes the aural sound of an interval. There are five possible interval qualities: • Augmented (designated as A or +) • Major (M) • Perfect (P) • Minor (m) • Diminished (d or ^o) When speaking about or writing intervals, one says or writes the quality first and then the size. For example, an interval could be described as a “perfect fourth” (abbreviated P4), a “minor third” (abbreviated m3), or an “augmented second” (abbreviated +2 or A2). For now, we will only discuss three qualities: perfect, major, and minor. Different theorists (in different locations and time periods) have applied these qualities to different sizes of intervals, depending on milieu. shows how these qualities are applied today: [table id=61 /] As you can see in , unisons, fourths, fifths, and octaves are perfect intervals (shown in the right column), while seconds, thirds, sixths, and sevenths are major and/or minor (shown in the left The “Major Scale” Method for Determining Quality There are several different methods for learning to write and identify qualities of intervals. One method you may have heard of is counting half-steps. This is not a recommended method, because it is time consuming and often inaccurate. Instead, we recommend the “Major Scale” method. To identify an interval (size and quality) using this method, complete the following steps: 1. Determine size (by counting lines and spaces between the notes). 2. Determine if the top note is in the major scale of the bottom note. 3. If it is: the interval is perfect (if it is a unison, fourth, fifth, or octave) or it is major (if it is a second, third, sixth, or seventh). If it is not: then, for now, the interval is minor (a lowered second, third, sixth, or seventh). shows two intervals. Try identifying their size and quality: For the first interval: the notes are F and C in treble clef. Here is the process in more detail: 1. First, this interval is a generic fifth (F to itself is 1; to G is 2; to A is 3; to B is 4; to C is 5). 2. Second, C is within the key of F major (which has one flat, B♭). 3. The interval is a perfect fifth because fifths are perfect (not major/minor), and the notes are unaltered by accidentals. Let’s now use this process for the second example. The notes in this example are E♭ and C♭ in treble clef. Let’s go through the process in more detail: 1. First, this interval is a generic sixth (E♭ to itself is 1; to F is 2; to G is 3; to A is 4; to B is 5; to C is 6). 2. Second, C♭ is NOT in the key of E♭ major (which has three flats, B♭, E♭, and A♭). 3. Therefore, this is a minor sixth. If it were a major sixth, then the C would have to be C♮ instead of C♭, because C♮ is in the key of E♭ major. Augmented and Diminished Qualities To review, there are five possible interval qualities, of which we have covered major, minor, and perfect: • Augmented (designated as A or +) • Major (M) • Perfect (P) • Minor (m) • Diminished (d or ^o) Augmented intervals are one half-step larger than a perfect or major interval. shows this: As you can see in the first measure of , the notes F and C form a perfect fifth (because C is in the key of F major). The top note of that interval has been raised by a half-step to a C♯, and so is one half-step larger; consequently, the interval F to C♯ is an augmented fifth (abbreviated either A5 or +5). In the second measure of , a major sixth is shown with the notes G and E (because E is in the key of G major). The top note of that interval has been raised by a half-step to E♯, and so the interval is one half-step larger and is now an augmented sixth. Note that it is not always the top note that is altered. shows two augmented intervals in which the bottom notes have been altered: In the first measure of , F and C again form a perfect fifth. However, the bottom note has now been lowered by a half-step to an F♭, creating an augmented fifth (because the interval is one half-step larger than a perfect fifth). In the second measure of , G and E once again form a major sixth. The bottom note, G, has been lowered a half-step to G♭, creating an augmented sixth because the interval is now one half-step larger than a major sixth. Diminished intervals are one half-step smaller than a perfect or minor interval. shows this: In the first measure of , the perfect fifth F and C has been made a half-step smaller, since the top note has been lowered by a half-step. Consequently, F to C♭ is a diminished fifth (abbreviated usually as a d5 or ^o5). In the second measure of , G and E form a major sixth which becomes a minor sixth when the top note is lowered by a half-step (making the entire interval one half-step smaller). The minor sixth then becomes a diminished sixth when it is again contracted by a half-step from G to E𝄫. It is very important to note that major intervals do not become diminished intervals directly; a major interval becomes minor when contracted by a half-step. It is only a minor interval that becomes diminished when further contracted by a half-step. Again, it is not always the top note that is altered. shows two diminished intervals in which the bottom notes have been altered: In the first measure of , F to C form a perfect fifth. This interval becomes diminished when it is made a half-step smaller by the bottom note moving up a half-step from F to F♯. In the second measure of , G to E form a major sixth. This interval is made into a minor sixth when the G moves up a half-step to G♯, making the interval a half-step smaller (or contracted). Furthermore, this minor interval becomes diminished when the G moves to G♯♯, making the minor interval a further half-step contracted. and again demonstrate and summarize the relative size of intervals. Each bracket in these examples is one half-step larger or smaller than the brackets to their right and left. shows intervals with the top note altered by accidentals: As you can see in , intervals one half-step larger than perfect intervals are augmented, while intervals one half-step smaller than perfect intervals are diminished. Likewise, in , intervals one half-step larger than major intervals are augmented, while intervals one half-step smaller than major are minor and intervals one half-step smaller than minor are diminished. shows intervals with the bottom note altered by accidentals: outlines the same qualities as ; the only difference between the examples is which note is altered by accidentals. In it is the top note, while in it is the bottom note. Doubly and Triply Augmented and Diminished Intervals Intervals can be further contracted or expanded outside of the augmented and diminished qualities. An interval a half-step larger than an augmented interval is a doubly augmented interval, while an interval a half-step larger than a doubly augmented interval is a triply augmented interval. An interval a half-step smaller than a diminished interval is a doubly diminished interval, while an interval a half-step smaller than a doubly diminished interval is a triply diminished interval. Compound Intervals The intervals discussed above, from unison to octave, are simple intervals , which have a size an octave or smaller. Any interval larger than an octave is a compound interval. shows the notes A and C, first as a simple interval and then as a compound interval: The notes A to C form a minor third in the first pair of notes; in the second pair of notes, the C has been brought up an octave. Quality remains the same for simple and compound intervals, which is why a minor third and minor tenth both have the same quality. If you want to make a simple interval a compound interval, add 7 to its size. Consequently: • Unisons (which get the number “1”) become octaves (“8s”) • 2nds become 9ths • 3rds become 10ths • 4ths become 11ths • 5ths become 12ths These are the most common compound intervals that you will encounter in your music studies. Remember that octaves, 11ths, and 12ths are perfect like their simple counterparts, while 9ths and 10ths are major/minor. Interval Inversion Intervallic inversion occurs when two notes are “flipped.” For example, the notes C (on bottom) with E (above) is an inversion of E (on bottom) with C (above), as can be seen in : You might be wondering: why is this important? There are two reasons: first, because inverted pairs of notes share many interesting properties (which are sometimes exploited by composers), and second, because inverting a pair of notes can help you to identify or write an interval when you do not want to work from the given bottom note. Let’s start with the first point: the interesting properties. First, the size of inverted pairs always add up to 9: • Unisons (“1s”) invert to octaves (“8s”) (1 + 8 = 9) and octaves invert to unisons. • Seconds invert to sevenths (2 + 7 = 9) and sevenths invert to seconds. • Thirds invert to sixths (3 + 6 = 9) and sixths invert to thirds. • Fourths invert to fifths (4 + 5 = 9) and fifths invert to fourths. Qualities of inverted pairs of notes are also very consistent: • Perfect intervals invert to perfect intervals. • Major intervals invert to minor intervals (and minor intervals to major intervals). • Augmented intervals invert to diminished intervals (and diminished intervals to augmented intervals). With that information you can now calculate the inversions of intervals without even looking at staff paper. For example; a major seventh inverts to a minor second; an augmented sixth inverts to a diminished third; and a perfect fourth inverts to a perfect fifth. Now for the second point: sometimes you will come across an interval that you do not want to calculate or identify from the bottom note. shows one such instance of this: The bottom note is E𝄫, and there is no key signature for this note (its key signature is “imaginary”). So, if you were given this interval to identify you might consider inverting the interval, as shown in : Now the inversion of the interval can be calculated from the non-imaginary key of A♭ major. The key of A♭ major has four flats (B, E, A, and D flat). An E♭ above A♭ would therefore be a perfect fifth; however, this interval has been contracted (made a half-step smaller) because the E♭ has been lowered to E𝄫. That means this interval is a d5 (diminished fifth). Now that we know the inversion of the first interval is a d5, we can calculate the original interval from this inversion. A diminished fifth inverts to an augmented fourth (because diminished intervals invert to augmented intervals and because five plus four equals nine). Thus, the first interval is an augmented fourth (A4). Consonance and Dissonance Intervals are categorized as consonant or dissonant . Consonant intervals are intervals that are considered more stable, as if they do not need to resolve, while dissonant intervals are considered less stable, as if they do need to resolve. These categorizations have varied with milieu. shows a table of melodically consonant and dissonant intervals: [table id=63 /] Melodically consonant and dissonant intervals. shows harmonically consonant and dissonant intervals: [table id=64 /] Another Method for Intervals: the White-Key Method Ultimately, intervals need to be committed to memory, both aurally and visually. There are, however, a few tricks to learning how to do this quickly. One such trick is the so-called “white-key method,” which refers to the piano keyboard. This method requires you to memorize all of the intervals found between the white keys on the piano (or simply all of the intervals in the key of C major). Once you’ve learned these, any interval can be calculated as an alteration of a white-key interval. For example, we can figure out the interval for the notes D and F♯ if we know that the interval D to F is a minor third, and this interval has been made one semitone larger: a major third. Conveniently, there is a lot of repetition of interval size and quality among white-key intervals. Memorize the most frequent type, and the exceptions. All of the seconds are major except for two: E and F, and B and C, which are minor, as seen in : All of the thirds are minor except for three: C and E, F and A, and G and B, which are major, as shown in : All of the fourths are perfect except for one: F and B, which is augmented, as seen in : Believe it or not, you now know all of the white-key intervals, as long as you understand the concept of intervallic inversion, which was previously explained. For example, if you know that all seconds are major except for E and F and B and C (which are minor), then you know that all sevenths are minor except for F and E and C and B (which are major), as seen in : Once you’ve mastered the white-key intervals, you can figure out any other interval by taking into account the interval’s accidental or accidentals. Intervallic Enharmonic Equivalence may be useful when thinking about enharmonic equivalence of intervals: unis. 2nd 3rd 4th 5th 6th 7th oct. 0 P1 d2 1 A1 m2 2 M2 d3 3 A2 m3 4 M3 d4 5 A3 P4 6 A4 d5 7 P5 d6 8 A5 m6 9 M6 d7 10 A6 m7 11 M7 d8 12 A7 P8 In this chart, the columns are different intervallic sizes, while the rows present intervals based on the number of half-steps they contain. Each row in this chart is enharmonically equivalent. For example, a M2 and d3 are enharmonically equivalent (both are 2 half-steps). Likewise, an A4 and d5 are enharmonically equivalent—both are six half-steps in size. Intervallic enharmonic equivalence is useful when you come across an interval that you do not want to calculate or identify from the bottom note. We have already discussed one method for this situation previously, which was intervallic inversion. You may prefer one method or the other, though both will yield the same result. reproduces the interval from : As you’ll recall, there is no key signature for the bottom note (E𝄫), making identification of this interval difficult. By using enharmonic equivalence, however, we can make identification of this interval easier. We can recognize that E𝄫 is enharmonically equivalent with D and that A♭ is enharmonically equivalent with G♯, as shown in : Now we can identify the interval as an A5 (augmented fourth), using the key signature of the enharmonically equivalent bottom note (D). 1. Writing and Identifying Intervals Assignment #1 (.pdf, .mcsz) 2. Writing and Identifying Intervals Assignment #2 (.pdf, .mcsz) 3. Writing and Identifying Intervals Assignment #3 (.pdf, .mcsz) The distance between two notes The interval is played or sung separately (one note after another) The interval is played or sung together (both notes at the same time) Interval size is written with Arabic numbers (2, 3, 4, etc.); it is the distance between two notes on a staff The number of scale steps between notes of a collection or scale When applied to an interval, the term "quality" modifies the size descriptor in order to be more specific about the exact number of semitones in the interval When applied to triadic harmony, "quality" refers to the size of the different intervals that make up the harmony A combination of size and quality to describe an interval Intervals that are one half-step larger than a perfect or major interval Intervals that are one half-step smaller than a perfect or minor interval Intervals with a size an octave or smaller An interval that is larger than an octave Occurs when two notes (such as C and E) are flipped; C (on bottom) with E (above) is an inversion of E (on bottom) with C (above) A quality in an interval or chord that, in a traditional tonal context, is stable; this stability is the result of its perceived independence from a need to resolve A quality in an interval or chord that, in a traditional tonal context, is unstable; this instability is the result of its perceived dependence on a need to resolve A physical and/or social setting An interval a half-step larger than an augmented interval An interval a half-step larger than a doubly augmented interval An interval a half-step smaller than a diminished interval An interval a half-step smaller than a doubly diminished interval
{"url":"https://viva.pressbooks.pub/cmus120emu/chapter/intervals/","timestamp":"2024-11-07T06:06:52Z","content_type":"text/html","content_length":"114827","record_id":"<urn:uuid:fb721c80-3087-4d13-96b8-d44ff9251604>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00739.warc.gz"}
Solve System of ODEs with Multiple Initial Conditions This example compares two techniques to solve a system of ordinary differential equations with multiple sets of initial conditions. The techniques are: • Use a for-loop to perform several simulations, one for each set of initial conditions. This technique is simple to use but does not offer the best performance for large systems. • Vectorize the ODE function to solve the system of equations for all sets of initial conditions simultaneously. This technique is the faster method for large systems but requires rewriting the ODE function so that it reshapes the inputs properly. The equations used to demonstrate these techniques are the well-known Lotka-Volterra equations, which are first-order nonlinear differential equations that describe the populations of predators and Problem Description The Lotka-Volterra equations are a system of two first-order, nonlinear ODEs that describe the populations of predators and prey in a biological system. Over time, the populations of the predators and prey change according to the equations $\begin{array}{l}\frac{\mathrm{dx}}{\mathrm{dt}}=\alpha \mathit{x}-\beta \mathrm{xy},\\ \frac{\mathrm{dy}}{\mathrm{dt}}=\delta \mathrm{xy}-\gamma \mathit{y}.\end{array}$ The variables in these equations are • $\mathit{x}$ is the population size of the prey • $\mathit{y}$ is the population size of the predators • $\mathit{t}$ is time • $\alpha$, $\beta$, $\delta$, and $\gamma$ are constant parameters that describe the interactions between the two species. This example uses the parameter values $\alpha =\gamma =1$, $\beta =0.01$ , and $\delta =0.02$. For this problem, the initial values for $\mathit{x}$ and $\mathit{y}$ are the initial population sizes. Solving the equations then provides information about how the populations change over time as the species interact. Solve Equations with One Initial Condition To solve the Lotka-Volterra equations in MATLAB®, write a function that encodes the equations, specify a time interval for the integration, and specify the initial conditions. Then you can use one of the ODE solvers, such as ode45, to simulate the system over time. A function that encodes the equations is function dpdt = lotkaODE(t,p) % LOTKA Lotka-Volterra predator-prey model delta = 0.02; beta = 0.01; dpdt = [p(1) .* (1 - beta*p(2)); p(2) .* (-1 + delta*p(1))]; (This function is included as a local function at the end of the example.) Since there are two equations in the system, dpdt is a vector with one element for each equation. Also, the solution vector p has one element for each solution component: p(1) represents $\mathit{x}$ in the original equations, and p(2) represents $\mathit{y}$ in the original equations. Next, specify the time interval for integration as $\left[0,15\right]$ and set the initial population sizes for $\mathit{x}$ and $\mathit{y}$ to 50. t0 = 0; tfinal = 15; p0 = [50; 50]; Solve the system with ode45 by specifying the ODE function, the time span, and the initial conditions. Plot the resulting populations versus time. [t,p] = ode45(@lotkaODE,[t0 tfinal],p0); title('Predator/Prey Populations Over Time') Since the solutions exhibit periodicity, plot the solutions against each other in a phase plot. title('Phase Plot of Predator/Prey Populations') The resulting plots show the solution for the given initial population sizes. To solve the equations for different initial population sizes, change the values in p0 and rerun the simulation. However, this method only solves the equations for one initial condition at a time. The next two sections describe techniques to solve for many different initial conditions. Method 1: Compute Multiple Initial Conditions with for-loop The simplest way to solve a system of ODEs for multiple initial conditions is with a for-loop. This technique uses the same ODE function as the single initial condition technique, but the for-loop automates the solution process. For example, you can hold the initial population size for $\mathit{x}$ constant at 50, and use the for-loop to vary the initial population size for $\mathit{y}$ between 10 and 400. Create a vector of population sizes for y0, and then loop over the values to solve the equations for each set of initial conditions. Plot a phase plot with the results from all iterations. y0 = 10:10:400; for k = 1:length(y0) [t,p] = ode45(@lotkaODE,[t0 tfinal],[50 y0(k)]); hold on title('Phase Plot of Predator/Prey Populations') hold off The phase plot shows all of the computed solutions for the different sets of initial conditions. Method 2: Compute Multiple Initial Conditions with Vectorized ODE Function Another method to solve a system of ODEs for multiple initial conditions is to rewrite the ODE function so that all of the equations are solved simultaneously. The steps to do this are: • Provide all of the initial conditions to ode45 as a matrix. The size of the matrix is s-by-n, where s is the number of solution components and n is the number of initial conditions being solved for. Each column in the matrix then represents one complete set of initial conditions for the system. • The ODE function must accept an extra input parameter for n, the number of initial conditions. • Inside the ODE function, the solver passes the solution components p as a column vector. The ODE function must reshape the vector into a matrix with size s-by-n. Each row of the matrix then contains all of the initial conditions for each variable. • The ODE function must solve the equations in a vectorized format, so that the expression accepts vectors for the solution components. In other words, f(t,[y1 y2 y3 ...]) must return [f(t,y1) f (t,y2) f(t,y3) ...]. • Finally, the ODE function must reshape its output back into a vector so that the ODE solver receives a vector back from each function call. If you follow these steps, then the ODE solver can solve the system of equations using a vector for the solution components, while the ODE function reshapes the vector into a matrix and solves each solution component for all of the initial conditions. The result is that you can solve the system for all of the initial conditions in one simulation. To implement this method for the Lotka-Volterra system, start by finding the number of initial conditions n, and then form a matrix of initial conditions. n = length(y0); p0_all = [50*ones(n,1) y0(:)]'; Next, rewrite the ODE function so that it accepts n as an input. Use n to reshape the solution vector into a matrix, then solve the vectorized system and reshape the output back into a vector. A modified ODE function that performs these tasks is function dpdt = lotkasystem(t,p,n) %LOTKA Lotka-Volterra predator-prey model for system of inputs p. delta = 0.02; beta = 0.01; % Change the size of p to be: Number of equations-by-number of initial % conditions. p = reshape(p,[],n); % Write equations in vectorized form. dpdt = [p(1,:) .* (1 - beta*p(2,:)); p(2,:) .* (-1 + delta*p(1,:))]; % Linearize output. dpdt = dpdt(:); Solve the system of equations for all of the initial conditions using ode45. Since ode45 requires the ODE function to accept two inputs, use an anonymous function to pass in the value of n from the workspace to lotkasystem. [t,p] = ode45(@(t,p) lotkasystem(t,p,n),[t0 tfinal],p0_all); Reshape the output vector into a matrix with size (numTimeSteps*s)-by-n. Each column of the output p(:,k) contains the solutions for one set of initial conditions. Plot a phase plot of the solution p = reshape(p,[],n); nt = length(t); for k = 1:n hold on title('Predator/Prey Populations Over Time') hold off The results are comparable to those obtained by the for-loop technique. However, there are some properties of the vectorized solution technique that you should keep in mind: • The calculated solutions can be slightly different than those computed from a single initial input. The difference arises because the ODE solver applies norm checks to the entire system to calculate the size of the time steps, so the time-stepping behavior of the solution is slightly different. The change in time steps generally does not affect the accuracy of the solution, but rather which times the solution is evaluated at. • For stiff ODE solvers (ode15s, ode23s, ode23t, ode23tb) that automatically evaluate the numerical Jacobian of the system, specifying the block diagonal sparsity pattern of the Jacobian using the JPattern option of odeset can improve the efficiency of the calculation. The block diagonal form of the Jacobian arises from the input reshaping performed in the rewritten ODE function. Compare Timing Results Time each of the previous methods using timeit. The timing for solving the equations with one set of initial conditions is included as a baseline number to see how the methods scale. % Time one IC baseline = timeit(@() ode45(@lotkaODE,[t0 tfinal],p0),2); % Time for-loop for k = 1:length(y0) loop_timing(k) = timeit(@() ode45(@lotkaODE,[t0 tfinal],[50 y0(k)]),2); loop_timing = sum(loop_timing); % Time vectorized fcn vectorized_timing = timeit(@() ode45(@(t,p) lotkasystem(t,p,n),[t0 tfinal],p0_all),2); Create a table with the timing results. Multiply all of the results by 1e3 to express the times in milliseconds. Include a column with the time per solution, which divides each time by the number of initial conditions being solved for. TimingTable = table(1e3.*[baseline; loop_timing; vectorized_timing], ... 1e3.*[baseline; loop_timing/n; vectorized_timing/n],... 'VariableNames',{'TotalTime (ms)','TimePerSolution (ms)'},'RowNames', ... {'One IC','Multi ICs: For-loop', 'Mult ICs: Vectorized'}) TimingTable=3×2 table TotalTime (ms) TimePerSolution (ms) ______________ ____________________ One IC 0.44553 0.44553 Multi ICs: For-loop 20.534 0.51336 Mult ICs: Vectorized 8.6379 0.21595 The TimePerSolution column shows that the vectorized technique is the fastest of the three methods. Local Functions Listed here are the local functions that ode45 calls to calculate the solutions. function dpdt = lotkaODE(t,p) % LOTKA Lotka-Volterra predator-prey model delta = 0.02; beta = 0.01; dpdt = [p(1) .* (1 - beta*p(2)); p(2) .* (-1 + delta*p(1))]; function dpdt = lotkasystem(t,p,n) %LOTKA Lotka-Volterra predator-prey model for system of inputs p. delta = 0.02; beta = 0.01; % Change the size of p to be: Number of equations-by-number of initial % conditions. p = reshape(p,[],n); % Write equations in vectorized form. dpdt = [p(1,:) .* (1 - beta*p(2,:)); p(2,:) .* (-1 + delta*p(1,:))]; % Linearize output. dpdt = dpdt(:); See Also ode45 | odeset Related Topics
{"url":"https://www.mathworks.com/help/matlab/math/solve-system-of-odes-with-multiple-initial-conditions.html","timestamp":"2024-11-05T14:24:16Z","content_type":"text/html","content_length":"92279","record_id":"<urn:uuid:3e0b5f7f-aad7-4d18-8f2b-fb6ff85d250b>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00724.warc.gz"}
Let me start by saying something that will come as a shock to many: it isn't. As a caveat, this is coming from a person who, while tutoring a student at the university, was greeted by one of the math professors who offered me a "recreational" linear algebra test. My response? "Cool! Thanks!" But on a more normal level, yes, math is easier and language, and we're talking by two years old. Think about it. Even before electronics, there were mechanical computers that could add, subtract, multiply, and divide. Yet it's only been recently that computers got any good and understanding or producing speech, and they're still not as good at it as humans. Language is much more difficult problem. Here's another example. You're in a room, and need to find the shortest path from one corner on the floor to the opposite corner on the ceiling. You can determine that by applying variational calculus to a path length to minimize the functional.... "Wait!" you interrupt. "It's just a straight line from one to the other!" You're right. Congratulations, you just did variational calculus ... in your head. Essentially, that is what you did. So if math is really not so hard, why do so many people think it is? Let's take a look at some of the reasons. 1. People tell you it's hard Look at all the movies and comics. Math is something that whiz kids and scientists do, like the students in Real Genius do in between building megawatt lasers. Math is an art form they use to make something look complicated, even if it actually isn't. Take the image here. If you look closely, you'll see it's gibberish, not even real math. It's something AI generated, but it looks complicated, doesn't it? That's the point. Math has become the symbol for something complicated. You start to believe it, everyone around you believes it, which only convinces you that you were right to believe it. It's a con. Think of math more like a jigsaw or crossword puzzle, or solving the challenges in a video game. That's a much more accurate and productive way to look at it. 2. Some math teachers aren't really math people To become a teacher, you go to college and get a degree in education, do your practicum, and get certified as a teacher. Some may take an extra class or two on how to teach math, or maybe really want to be an art teacher, but they already have an art teacher, so they get stuck teaching math. A person that isn't really into math isn't likely to pass on much enthusiasm to the students. 3. Schools often don't teach math that well Some examples of the Fibonacci sequence in nature. And this sometimes goes for college, too, but it's usually better there. All too often, they teach math as a set of rules, something to memorize, instead helping students see what it's all really about. Image if you tried to solve jigsaw puzzles by memorizing every possible rule, like "A piece with half a moon and an S-shaped curve on the left probably goes next to a piece with the other half of the moon and an S-shaped curve on the right." Of course you don't do that! So why approach math that way. Instead, you see the big picture, and figure out how to put the pieces of the puzzle together. Math is all about finding patterns. Take Fibonacci numbers, which are related to Lucas numbers, the Golden ratio, and logarithmic spirals. They're all connected. They're also all over nature, from pine cones, to snail shells, to sunflowers, to galaxies. Why do you suppose that is? What is the great pattern throughout the universe that brings all those things together. It's like a mystery, isn't it? It's not really about rules and memorization at all. Another aspect of this is that we often teach math as something abstract, separate from reality. We learn things like the Pythagorean theorem about right triangles c^2=a^2+b^2, but so what. Did Pythagoras pluck that out of a tree? Actually, it's pretty easy to show in a picture why that's true, then it's like the light comes on ... "Oh...." The reason a lot of people hate word problems is because we do teach math in the abstract, so when it comes to applying it to the real world, which is actually why we are learning it, students get lost. 4. Math builds on previous knowledge you might have missed If you're going to do differential equations, you'll obviously need to know what differentials are. Those come from calculus, and for that you need limits. For limits, you need algebra, and for that you need arithmetic. Most of those are skills most people will never have any use for, but it illustrates a point. You can't build on a foundation that you don't have. And I don't mean just learning the methods; I mean really getting into it and understanding what you're doing. Those limits often use division, and it sure helps if, in addition to just being able to work the problems, you have a good idea in your head what it's really doing. If you're stuck somewhere, you might have to go back and see if you've missed anything. 5. You have a learning disability that gets in the way It's always possible that for you, math really is hard, and that doesn't mean you can fall back on this as a excuse. People with ADHD often have trouble focusing, and not just on math. It can be anything. Other times, they can have trouble not focusing, and it's hard to get their attention off something. Also, there is a condition called dyscalculia. It's a neurological condition like dyslexia, only it affects numbers instead of words, and it can make math extremely difficult. And there's autism. Any of these conditions, or others, need professional care to address, and for most people, they're not the problem. 6. You just don't like math There's always this possibility. As far as hamburgers, go for me, few things beat a mushroom and Swiss cheese burger. Unfortunately, my son Ryan can't handle mushrooms, so, that's that. It's hard for me to imagine, but I accept it as the truth. Likewise, there's no law that says you have to like math. But please, please, PLEASE don't use that as an excuse just because you find something difficult. Once you get it figured out, you might find you like it. That sort of change in perspective happens all the time. When I was younger, I used to explore caves, deep, dark, mysterious. There's nothing like that feeling of discovering a new passage, exploring it, wondering where that crack or that crawlway goes. The drive to push on can be overwhelming. Math can be like that with the right attitude. Take pi (π). Anyone in high school knows it's the ratio of a circle's circumference to its diameter, but there's a lot more to it than that. It shows up in radio waves, zebras' stripes, sub-atomic physics, the momentum of light. Good grief, it's all over the universe. Also interesting is that there are dozens if not hundreds of ways to calculate it, and pretty much all of them involve adding, multiplying, dividing, or something, an infinite number of terms. Browse through the Wikipedia article for some of them. One has to wonder how you can possibly calculate an infinite number of terms finally ending in an infinite nesting of square roots. One has to wonder how all those different formulas, that look nothing like each other, can possibly give the same result. Come on, now. Aren't you the least bit curious?
{"url":"https://tutor.duanevore.com/blog.php?post=1","timestamp":"2024-11-02T18:08:12Z","content_type":"text/html","content_length":"22551","record_id":"<urn:uuid:872a9aba-8b4e-4b79-b4f3-78831abddb05>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00770.warc.gz"}
This page describes the rwlibs/softbody RobWork module. (Please note that this module is not enabled by default as it requires 3rd party libraries.) The softbody module implements a non-linear beam model supporting large deformations and Laying-Down placement operations. The beam has been described in -Modeling and Simulation of Grasping of Deformable Objects (2012) -An Adaptable Robot Vision System Performing Manipulation Actions with Flexible Objects (2013) The current implementation supports cuboid geometries and non-homogeneous materials under non-penetration constraints. Supplementary material A short presentation and some videos of an application using the softbody module can be found at https://svnsrv.sdu.dk/svn/RobWorkApp/SoftBeamPlugin/presentation/ The softbody module is currently configured to use the IPOPT https://projects.coin-or.org/Ipopt non-linear optimization library. It is currently linked statically and must be present at compile time. IPOPT should be compiled separately and either placed in the system-wide include/library folders or pointed to by setting IPOPT_HOME accordingly. IPOPT requires a compatible linear algebra solver library. It supports a variety of routines, e.g. Harwell routines, Pardiso, WSMP and MUMPS (see the options reference at http://www.coin-or.org/Ipopt /documentation/node50.html ). Currently the build system in RobWork is configured to use MUMPS due to it having the most permissive license. CMake settings Make sure to have set the environment variable IPOPT_HOME to the root of your IPOPT installation, e.g. with a line in .bashrc: export IPOPT_HOME=/home/arf/Documents/Ipopt-3.10.3 After this, enable the flags BUILD_rw_softbody and RW_BUILD_SOFTBODY. The relevant FIND_PACKAGE CMake commands are located in RobWork/cmake/RobWorkSetup.cmake -Dimensions/deformation result: mm -Young's modulus: MPa -Poisson's ratio: (unitless, between 0.0 and 0.5) -Mass density: kg/mm^3 boost::scoped_ptr< rwlibs::softbody::ModRusselBeamIpopt > beamPtr; std::shared_ptr< rwlibs::softbody::BeamGeometryCuboid > beamGeomPtr; std::shared_ptr< rwlibs::softbody::BeamObstaclePlane> beamObstaclePtr; double dx = 110.0; // length of beam double dy = 7.0; // thickness of beam double dz = 57.0; // width of beam int M = 32; // number of cross sections in beam std::vector<double> Exvec = getExvec(M); // Young's modulus for each cross section std::vector<double> vxvec = getvxvec(M); // Poisson's ratio for each cross section std::vector<double> Rhovec = getRhovec(M); // Mass density for each cross section Transform3D<> beamGeomTrans = Transform3D<>::identity(); // transform of the beam base frame at x=0 Vector3D<> G = Vector3D<> ( 0.0, 9.82, 0.0 ); // gravity vector beamGeomPtr.reset ( new BeamGeometryCuboid ( Vector3D<> obstacleNormal = Vector3D<> ( 0.0, 1.0, 0.0 ); // normal of the plane obstacle Transform3D<> obstacleTrans = Transform3D<>::identity(); // transform of the plane obstacle beamObstaclePtr.reset ( new BeamObstaclePlane ( rw::geometry::Plane ( obstacleNormal, 0.0 /* instantiate the beam, passing the geometry, obstacle and beam size */ beamPtr.reset ( new ModRusselBeamIpopt ( /* vectors for storing the solver results */ boost::numeric::ublas::vector<double> avec(M); // angle of each cross section in radians boost::numeric::ublas::vector<double> Uvec(M); // x-deformation of each cross section in mm boost::numeric::ublas::vector<double> Vvec(M); // y-deformation of each cross section in mm * Optional: Provide starting guess for optimization by setting avec, Uvec and Vvec appropiately * a starting guess of zero (which we effectively do right here) usually works fine beamPtr->setAccuracy ( 1.0e-3 ); // accuracy goal for optimization beamPtr->setIntegralIndices ( integralIndices ); // vector containing the indices of constraints that should be active beamPtr->solve ( avec, Uvec, Vvec ); // invoke solver, result will be stored in the passed vectors Setting frames A small non-working code example from a RobWorkStudio plugin where the solution stored in avec, Uvec, Vvec is used to set frames in a workcell. void SoftBeamPlugin::setRwFrames ( const boost::numeric::ublas::vector< double >& U, const boost::numeric::ublas::vector< double >& V, const boost::numeric::ublas::vector< double >& avec ) { _pluginState = _wc->getStateStructure()->getDefaultState(); for ( int i = 1; i < ( int ) _framePtrList.size(); i++ ) { MovableFrame *frame = _framePtrList[i]; Transform3D<> trans = _beamFrameBase->getTransform ( _pluginState ); trans.P() [0] = U[i] * 1.0e-3; trans.P() [1] = V[i] * 1.0e-3; EAA<> rotEAA ( 0.0, 0.0, avec[i] ); trans.R() = rotEAA.toRotation3D(); frame->setTransform ( trans, _pluginState ); _wc->getStateStructure()->setDefaultState ( _pluginState ); getRobWorkStudio()->setState ( _pluginState ); Future work -Improvement of the hessian approximation. Currently we use IPOPT's built-in numerical approximation which does not fully exploit sparsity. -Evaluation of other linear algebra solver routines. HSL MA57 should provide a higher theoretical performance than MUMPS for moderately-sized problems, but has stricter licensing. Supported constraints -Add support for more types of constraints The beam implements non-penetration constraints by the means of IPOPT inequality constraints. These are enabled/disabled for each cross section of the beam by the user passing a list of active constraints. A more flexible interface supporting different types of constraints, e.g. ones fixing the beam in space should be made. Supported geometry types -Add support for more geometry types The abstract base class BeamGeometry defines the integrals that should be evaluated upon geometry cross sections. This is implemented for cuboid cross sections in BeamGeomtryCuboid. If for instance it should be wished to support spherical cross-sections, it is a matter of evaluating the same integrals but on spherical domains.
{"url":"https://robwork.org/apidoc/cpp/doxygen/page_rw_softbodysimulation.html","timestamp":"2024-11-10T18:49:39Z","content_type":"application/xhtml+xml","content_length":"11845","record_id":"<urn:uuid:d44c1b8c-f6f4-407f-b58a-62a2a6965dc8>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00603.warc.gz"}
Scikit-learn Tutorial – Beginner’s Guide to GPU Accelerated ML Pipelines | NVIDIA Technical Blog This tutorial is the fourth installment of the series of articles on the RAPIDS ecosystem. The series explores and discusses various aspects of RAPIDS that allow its users solve ETL (Extract, Transform, Load) problems, build ML (Machine Learning) and DL (Deep Learning) models, explore expansive graphs, process signal and system log, or use SQL language via BlazingSQL to process data. It’s been over 60 years [at the time of this writing] since the term Machine Learning (ML) was introduced. Since then, it has established a prominent place at the intersection between computer science, statistics, and econometrics. It is not that hard to imagine that nowadays virtually every human with a smartphone is using some form of machine learning every single day (if not every minute). Our preferences are modeled when we access the Internet so we are shown more relevant products, our opponents are more intelligent when we play a game, or a smartphone can recognize our face and unlock our phone. Accelerate machine learning pipelines with RAPIDS cuML Thus, when RAPIDS was introduced in late 2018, it arrived pre-baked with a slew of GPU-accelerated ML algorithms to solve some fundamental problems in today’s interconnected world. Since then, the palette of algorithms available in cuML (shortened from CUDA Machine Learning) has been expanded, and the performance of many of them has been taken to ludicrous levels. All while maintaining the familiar and logical API of scikit-learn! In the previous posts we showcased other areas: • In the first post, the python pandas tutorial, we introduced cuDF, the RAPIDS DataFrame framework for processing large amounts of data on an NVIDIA GPU. • The second post compared similarities between cuDF DataFrame and pandas DataFrame. • In the third post, data processing with Dask, we introduced a Python distributed framework that helps to run distributed workloads on GPUs. In this tutorial, we will introduce and showcase the most common functionality of RAPIDS cuML. Using cuML helps to train ML models faster and integrates perfectly with cuDF. Thanks to such tight integration, the end-to-end time to estimate a model is significantly reduced. This, in turn, allows interactively to engineer features and test new models and/or optimize their hyperparameters leading to better defined and more accurate models and enabling the ability to retrain the models more frequently. To aid in getting familiar with using cuML, we provide a handy cuML cheatsheet that can be downloaded here. Using GPUs to build statistical models Estimating statistical models boils down to finding a minimum value of a loss function—or, inversely, a maximum value of the reward function—given a set of features (independent variables) and ground truth (or a dependent variable). Many algorithms exist that help finding roots of equations, some of them reaching back to the 17th century (see Newton-Rhapson algorithm) but all of them have one thing in common: ultimately all the features and the ground truth need to have a numerical representation. Effectively, no matter whether we are building a classical ML model, try to estimate the latest-and-greatest Deep Learning model, or process an image, our computer will be dealing with a large matrix of numbers and apply some algorithms to it. GPUs, with thousands of cores, were made with that particular application in mind: to parallelize the processing of large matrices. For example, the frames when we play a game are rendered so fast you cannot perceive any lag on the screen, a filter we apply to an image does not take one day to finish, or, as we might have guessed, the process of estimating a model is significantly sped up. GPUs made the first forays into the world of Machine Learning by speeding up the estimation of Deep Learning models since the amount of data and the size of these models require an outrageous amount of computation. DL models, however, while capable of solving some sophisticated modeling problems, are quite often overkill for other simpler problems with well-established solutions. Thus, if our dataset is large, and we want to go from reading our data to having an estimated regression or classification model in minutes rather hours or days, RAPIDS is the right tool Regression and classification – the backbone of machine learning Regression and classification problems are intimately related, differing mostly in the way how the loss function is derived. In the regression model, we normally want to minimize the distance (or squared distance) between the value predicted by the model and the target, while the aim of a classification model is to minimize the number of misclassified observations. To further the argument that either regression or classification are based on virtually the same underlying mathematical model, we have a family of models, live Support-Vector Machines or ensemble models (like Random Forest or XGBoost) that can be applied to solve either. The cuML package has a whole portfolio of models to solve regression or classification problems. And all but a few implement exactly the same API call that simplifies testing different approaches. Consider estimating a linear regression, and then trying out a ridge or lasso: all we have to do is to change the object we create. You can estimate a Linear Regression model in two lines of code: model = cuml.LinearRegression(fit_intercept=True, normalize=True) model.fit(X_train, y_train).predict(X_test) Estimating a Lasso Regression (as a reminder, it provides an L1 regularization on the coefficients) requires changing the object: model = cuml.Lasso(fit_intercept=False, normalize=True) model.fit(X_train, y_train).predict(X_test) A similar pattern can be found when trying to estimate a classification model: a logistic regression can be estimated as follows: model = cuml.LogisticRegression(fit_intercept=True) model.fit(X_train, y_train).predict(X_test) Unlike regression models, some of the estimated classification models can also output probabilities of belonging to a certain class. Check the cuML-cheatsheet to further explore the regression and classification models of RAPIDS, or use cuML-notebooks to try it. Finding pattern in data Many times, the target variable or a label is not readily available in a dataset produced in a real-world scenario. Labeling datasets for machine learning has even become a business model on its own. However, clustering data is an example of statistical modeling without a teacher or unsupervised modeling that does not require having the ground truth but rather finds patterns in data. One of the simplest yet powerful segmentation models is k-means. k-means, given the observations and their features, finds clusters that maximize the cluster homogeneity (how similar the observations are within the same cluster) while at the same time maximizing the heterogeneity (dissimilarity) between clusters. While the k-means algorithm is quite efficient and scales well to a relatively large dataset, estimating a k-means model using RAPIDS we gain further performance improvements. k_means = cuml.KMeans(n_clusters=4, n_init=3) One of the drawbacks of k-means is that it requires explicitly stating the number of clusters we expect to see in the data. DBSCAN, also available in cuML, does not have such requirements. dbscan = cuml.DBSCAN(eps=0.5, min_samples=10) DBSCAN is a density-based clustering model. Unlike k-means, DBSCAN can leave some of the points unclustered, effectively finding some outliers that do not really match any of the patterns found. Dealing with high-dimensional data For many real-life phenomena, we have no ability to collect enough data to estimate a statistically significant machine learning model, or the nature of the phenomenon makes the data extremely high-dimensional and sparse. For example, some rare diseases can have many features describing the patient compared to the number of observed patients we can collect the data from, putting us effectively in a statistical impasse. On the other hand, we might have billions of records but the feature space can be hundreds of millions big, making estimating the model impractical at best. Dimensionality reduction is one of the techniques to reduce the number of features and keep only those that are highly correlated with the target, or can explain most of the target’s variance. PCA, or Principal Component Analysis, is one of the oldest techniques that projects a high-dimensional feature space onto an orthogonal hyperplane where each feature is guaranteed to be independent of each other, and then retains only some number of them that explain most of the variance of the target. cuML has a fast implementation of PCA that we can estimate in one line of code. pca = cuml.PCA(n_components=2) The above code takes a dataset and retrieves only the first two principal components that can be easily plotted. An example below retrieved 2 principal components from a dataset created using cuML. X, y = cuml.make_blobs( , centers=4 , n_features=50 , cluster_std=[1.0, 5.0, 10.0, 0.5] , random_state=np.random.randint(1e9) We have 50 features, 1000 observations, and four clusters. Running PCA to retrieve the top two principal components and plotting the results shows the following image. Figure 1: 2-D projection of the dataset with Principal Components Analysis aka. PCA. So, instead of 50 features, we can use only two and most likely still build a decent model. Of course, not many models would be able to find four clusters here, but the separation between 3 of them is quite profound. umap = cuml.UMAP(n_neighbors=10, n_components=2) tsne = cuml.TSNE(n_components=2, perplexity=500, learning_rate=200) Both UMAP and t-SNE produce better-separated clusters than the PCA, with the UMAP being the ultimate winner being able to almost ideally retrieve linearly separable four clusters of points. Figure 2. 2-D projection of the dataset using UMAP. t-SNE produces linearly separable clusters but seems to be missing one of them, just like PCA. Figure 3. 2-D projection of the dataset using t-sne. Download the cuML cheatsheet to help create your own GPU-accelerated ML pipelines!
{"url":"https://developer.nvidia.com/blog/scikit-learn-tutorial-beginners-guide-to-gpu-accelerated-ml-pipelines/","timestamp":"2024-11-12T04:01:14Z","content_type":"text/html","content_length":"211143","record_id":"<urn:uuid:e3f65d05-ed31-42d0-ac2d-99318589e7ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00470.warc.gz"}
Damir Dzhafarov. I can still be reached at david.j.nichols@uconn.edu. Doctoral Thesis University of Connecticut, 2019 • 2017 New England Recursion and Definability Seminar Wellesley College Cornell Logic Seminar Cornell University • 2016 UConn Logic Group University of Connecticut Fifth NY Graduate Student Conference in Logic CUNY Graduate Center • 2015 Computability in Europe Universitatea din București In the Spring 2018 semester I am teaching MATH 2410: Elementary Differential Equations. View the syllabus. I think that visual aids are one important tool available to mathematics instructors, and that the best visual aids are usually those that can be manipulated or interacted with. In collaboration with Amit Savkar, I develop interactive visualizations of calculus concepts and examples. These visualizations are used for calculus teaching and learning at the University of Connecticut. You can find examples of my early work on Dr. Savkar's website. For more recent examples, contact me. I want my visual aids to be useful to as many people as possible, so I am always trying to up my accessibility game. In order to help myself and others develop visual aids that use colors in an accessible way, I have made an online colorblindness simulation tool.
{"url":"https://davidmathlogic.com/","timestamp":"2024-11-03T09:32:49Z","content_type":"text/html","content_length":"4883","record_id":"<urn:uuid:9a1d3b6a-1ec7-437a-9263-413c7fec5702>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00094.warc.gz"}
Lesson 11 Rectangles with the Same Perimeter Warm-up: Number Talk: Multiply to Divide (10 minutes) The purpose of this Number Talk is to elicit strategies and understandings students have for dividing within 100. These understandings help students develop fluency and will be helpful later in this lesson when students will need to be able to divide fluently within 100. • Display one expression. • “Give me a signal when you have an answer and can explain how you got it.” • 1 minute: quiet think time • Record answers and strategy. • Keep expressions and work displayed. • Repeat with each expression. Student Facing Find the value of each expression mentally. • \(5 \times 5\) • \(10 \times 5\) • \(2 \times 5\) • \(85 \div 5\) Activity Synthesis • “How did knowing the first 3 facts help you find the value of \(85 \div 5\)?” (The first 3 products added up to 85, which was the number I was dividing in the last problem. So, I was able to use those to figure out \(85 \div 5\).) Activity 1: Perimeter of 16 Units (15 minutes) The purpose of this activity is for students to understand that rectangles with the same perimeter do not necessarily have the same area. In the synthesis, students begin to consider how to systematically draw different rectangles with the same perimeter. MLR8 Discussion Supports. Synthesis: Provide students with the opportunity to rehearse what they will say with a partner before they share with the whole class. Advances: Speaking • Groups of 2 • “Take a couple of minutes to draw some rectangles that have a perimeter of 16 units.” • 2–3 minutes: independent work time • “Share your rectangles with your partner and see if there are any other rectangles you can think of together. Then, find the area of each rectangle.” • 6–8 minutes: partner work time • Monitor for different rectangles students draw. Student Facing 1. Draw as many different rectangles with a perimeter of 16 units as you can. 2. Calculate the area of each rectangle you draw. Explain or show your reasoning. Activity Synthesis • Select students to share their rectangles and to explain how they knew the perimeter was 16 and how they found the area. • “We just showed that rectangles with a certain perimeter do not always have the same area.” • “How would you explain to someone how to draw rectangles with a perimeter of 30 that had different areas?” (Choose a length for two of the sides, like 10, and then double that to get 20. There’s 10 left for the other two sides, so each side will be 5. Split 30 in half to get 15. The two different side lengths need to add up to 15, so we can use different pairs of numbers with the sum of Activity 2: Same Perimeter, Different Area (20 minutes) The purpose of this activity is for students to draw rectangles with the same perimeter and different areas. Students draw a pair of rectangles for each given perimeter, then display their rectangles and make observations about them in a gallery walk. Students may notice new patterns (MP7) in the rectangles with the same perimeter (for instance, that as two sides each increase by 1 unit, the other two sides each decrease in length by 1 unit). They may also notice that, so far, all the perimeters are even numbers. Students may wonder if it is possible for a perimeter to be an odd number. If these observations arise, consider discussing them in the synthesis. Engagement: Provide Access by Recruiting Interest. Leverage choice around perceived challenge. Invite students to select to complete 3 of the 5 perimeter problems in task 1. Supports accessibility for: Organization, Attention, Social-Emotional Skills Required Materials Materials to Gather Materials to Copy • Square Dot Paper Standard Required Preparation • Create 4 visual displays. Each visual display should be labeled with a different perimeter. Use the following perimeters: 12 units, 20 units, 26 units, 34 units). • Students cut out and tape their rectangles on one of the visual displays during this activity. • Groups of 2 • Display the visual display labeled with each of the four perimeters in the first problem. • Give each group 2 sheets of dot paper, scissors, and access to tape. • “Work with your partner to complete the first problem.” • 6–8 minutes: partner work time • “Choose which rectangles you want to share and put them on the appropriate poster. Try to look for rectangles that are different from what other groups have already placed.” • 3–5 minutes: partner work time • Monitor to make sure each visual display has a variety of rectangles. • When all students have put their rectangles on the posters, ask students to visit the posters with their partner and discuss one thing they notice and one thing they wonder about the rectangles. • 5 minutes: gallery walk Student Facing Your teacher will give you some dot paper for drawing rectangles. 1. For each of the following perimeters, draw 2 rectangles with that perimeter but different areas. 1. 12 units 2. 20 units 3. 26 units 4. 34 units 5. Choose your own perimeter. 2. Cut out 1 or 2 rectangles you want to share and put them on the appropriate poster. Try to look for rectangles that are different from what other groups have already placed. 3. Gallery Walk: As you visit the posters with your partner, discuss something you notice and something you wonder. Activity Synthesis • “As you visited the posters, what did you notice? What did you wonder?” • Discuss observations or questions that can reinforce the connections between side lengths, perimeter, and area of rectangles. • Consider asking: □ “What perimeter did you and your partner choose to work with when you could choose your own perimeter? Why did you choose that perimeter?” Lesson Synthesis Refer to the posters from the previous activity. “How is it possible that many rectangles can have the same perimeter, but not have the same area?” (The perimeter is the distance around the rectangle, it does not determine the amount of space the rectangle covers.) “How did you know the areas were different? Can you tell by looking at the rectangles whether they have the same area?” (Some you can tell just by looking at them that one takes up more space than the other. I would find the area to be sure. Even if the rectangles look different, they could have the same area.) Cool-down: Perimeter of 18 (5 minutes)
{"url":"https://curriculum.illustrativemathematics.org/k5/teachers/grade-3/unit-7/lesson-11/lesson.html","timestamp":"2024-11-03T06:06:47Z","content_type":"text/html","content_length":"83473","record_id":"<urn:uuid:c9e60c02-b131-46d6-aacd-9cd25ec6f22e>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00539.warc.gz"}
ELI5: Continued fraction Continued fractions are when you use fractions inside of other fractions. For example, let's say you want to make a fraction like 3/2. But instead of just writing that, you can write it as a continued fraction by writing it as 1 + 1/(1+1/2). That just means the same thing, but the whole number part is shown first, then the fraction part.
{"url":"https://eli5.gg/Continued%20fraction","timestamp":"2024-11-03T03:40:37Z","content_type":"text/html","content_length":"10904","record_id":"<urn:uuid:bb66101f-ac7e-4a28-8866-7a3a9cfbb0a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00044.warc.gz"}
Minimum Time to Finish K Tasks - Google Top Interview Questions | HackerRank Solutions Minimum Time to Finish K Tasks - Google Top Interview Questions Problem Statement : You are given a two-dimensional list of integers tasks where each element has 3 integers. You are also given an integer k. Pick k rows from tasks, call it S, such that the following sum is minimized and return the sum: max(S[0][0], S[1][0], ...S[k - 1][0]) + max(S[0][1], S[1][1], ...S[k - 1][1]) + max(S[0][2], S[1][2], ...S[k - 1][2]) In other words, each of the 3 columns contribute to a cost, and is calculated by taking the max value of that column in S. The max of an empty list is defined to be 0. k ≤ n ≤ 1,000 where n is the length of tasks Example 1 tasks = [ [1, 2, 2], [3, 4, 1], [3, 1, 2] k = 2 We pick the first row and the last row. And the total sum becomes S = [[1,2,2],[3,1,2]] max(S[0][0], S[1][0]) = 3 max(S[0][1], S[1][1]) = 2 max(S[0][2], S[1][2]) = 2 Solution : Solution in C++ : int solve(vector<vector<int>>& tasks, int k) { if (k == 0) return 0; int ret = 2e9; vector<pair<int, pair<int, int>>> v; for (auto& t : tasks) { v.emplace_back(t[0], make_pair(t[1], t[2])); sort(v.begin(), v.end()); for (int i = k - 1; i < v.size(); i++) { vector<pair<int, int>> twocol; for (int j = 0; j <= i; j++) { sort(twocol.begin(), twocol.end()); priority_queue<int> q; for (int j = 0; j < twocol.size(); j++) { if (q.size() > k) { if (q.size() == k) { ret = min(ret, v[i].first + twocol[j].first + q.top()); return ret; Solution in Python : class Solution: def solve(self, A, K): if not A or not K: return 0 def solve_2D(B): yheap = [-B[i][1] for i in range(K)] ans = B[K - 1][0] + (-yheap[0]) for i in range(K, len(B)): x = B[i][0] heapq.heappushpop(yheap, -B[i][1]) assert len(yheap) == K y = -yheap[0] ans = min(ans, x + y) return ans B = [[A[i][1], A[i][2]] for i in range(K)] ans = A[K - 1][0] + max(y for y, z in B) + max(z for y, z in B) for i in range(K, len(A)): B.append([A[i][1], A[i][2]]) ans = min(ans, A[i][0] + solve_2D(B)) return ans View More Similar Problems The Strange Function One of the most important skills a programmer needs to learn early on is the ability to pose a problem in an abstract way. This skill is important not just for researchers but also in applied fields like software engineering and web development. You are able to solve most of a problem, except for one last subproblem, which you have posed in an abstract way as follows: Given an array consisting View Solution → Self-Driving Bus Treeland is a country with n cities and n - 1 roads. There is exactly one path between any two cities. The ruler of Treeland wants to implement a self-driving bus system and asks tree-loving Alex to plan the bus routes. Alex decides that each route must contain a subset of connected cities; a subset of cities is connected if the following two conditions are true: There is a path between ever View Solution → Unique Colors You are given an unrooted tree of n nodes numbered from 1 to n . Each node i has a color, ci. Let d( i , j ) be the number of different colors in the path between node i and node j. For each node i, calculate the value of sum, defined as follows: Your task is to print the value of sumi for each node 1 <= i <= n. Input Format The first line contains a single integer, n, denoti View Solution → Fibonacci Numbers Tree Shashank loves trees and math. He has a rooted tree, T , consisting of N nodes uniquely labeled with integers in the inclusive range [1 , N ]. The node labeled as 1 is the root node of tree , and each node in is associated with some positive integer value (all values are initially ). Let's define Fk as the Kth Fibonacci number. Shashank wants to perform 22 types of operations over his tree, T View Solution → Pair Sums Given an array, we define its value to be the value obtained by following these instructions: Write down all pairs of numbers from this array. Compute the product of each pair. Find the sum of all the products. For example, for a given array, for a given array [7,2 ,-1 ,2 ] Note that ( 7 , 2 ) is listed twice, one for each occurrence of 2. Given an array of integers, find the largest v View Solution → Lazy White Falcon White Falcon just solved the data structure problem below using heavy-light decomposition. Can you help her find a new solution that doesn't require implementing any fancy techniques? There are 2 types of query operations that can be performed on a tree: 1 u x: Assign x as the value of node u. 2 u v: Print the sum of the node values in the unique path from node u to node v. Given a tree wi View Solution →
{"url":"https://hackerranksolution.in/minimumtimetofinishktasksgoogle/","timestamp":"2024-11-15T04:52:52Z","content_type":"text/html","content_length":"40669","record_id":"<urn:uuid:43d904dd-988a-41fa-a05b-7da675a2e7c8>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00589.warc.gz"}
The International Journal of Railway ResearchFleet Size and Frequency in Rapid Transit Systems fa Railway track and structures Railway track and structures پژوهشي Research Given is a set of stations linked by railway tracks, forming an undirected railway network. One or more lines are formed in this network. Each line consists of an origin station, a destination station, and a number of intermediate stations. In order to complete the line configuration, you need to decide the frequency (number of services per hour) and the capacity (number of carriages) for each train. This problem is called the line frequency and capacity setting. In this work, we propose an ILP-based algorithm for solving this problem when the objective function is the net profit. We also consider a version of this problem that takes into account the passenger behavior. Finally, we present some computational results. apid transit, Fleet size, Capacity setting, Net profit 1 7 http://ijrare.iust.ac.ir/browse.php?a_code=A-10-25-32&slc_lang=fa&sid=1 Alicia De los Santos aliciasan- tos@us.es 180031947532846001427 180031947532846001427 Yes Department of Applied Mathematics II, University of Seville, Spain Department of Applied Mathematics II, University of Seville, Spain Ahmad Mahmoodjanlou amjan-lou@gmail.com 180031947532846001428 180031947532846001428 No 2Mazandaran University of Science and Technology, Behshahr, Iran 2Mazandaran University of Science and Technology, Behshahr, Iran Juan A. A. Mesa jmesa@us.es 180031947532846001429 180031947532846001429 No Department of Applied Mathematics II, University of Seville, Spain Department of Applied Mathematics II, University of Seville, Spain Federico Perea perea@eio.upv.es 180031947532846001430 180031947532846001430 No 4Department of Applied Statistics, Operations Research an Quality, Polytechnical University of Valencia, Spain 4Department of Applied Statistics, Operations Research an Quality, Polytechnical University of Valencia, Spain
{"url":"https://ijrare.iust.ac.ir/xml_out.php?a_id=179","timestamp":"2024-11-05T23:12:06Z","content_type":"application/xml","content_length":"6073","record_id":"<urn:uuid:cc6f24e1-edff-404f-b704-14fee64fb2d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00066.warc.gz"}
Bernard L. Feringa - Nobel diploma - NobelPrize.org Symposium in honour of the - Kungl. vetenskapsakademien Abel Prize (2006); Lomonosov Gold Medal (2002); Wolf Prize in Mathematics (1992); doctor honoris causa of the University of Her father is Fields Medal and Abel prize winning mathematician Jean-Pierre Serre. Monteil i Claudine Monteil (born 1949) is a French writer, women's rights Expresses a desire to establish a mathemat- ics institute. The first Nobel Prizes are awarded. 1902. The book is intended as the first volume of a series, each volume covering five years. … The article is written in such a way as to explain the importance of the results at the level ‘of a well-informed and able first year mathematics student’ … . 2020-03-17 2011-04-17 2018-06-21 2018-03-20 2020-03-24 Mikhael Leonidovich Gromov (also Mikhail Gromov, Michael Gromov or Misha Gromov; Russian: Михаи́л Леони́дович Гро́мов; born 23 December 1943) is a Russian-French mathematician known for his work in geometry, analysis and group theory.He is a permanent member of IHÉS in France and a Professor of Mathematics at New York University. Avi Wigderson of the Institute for Advanced Study (IAS) was named a recipient of the 2021 Abel Prize, which he shares jointly with László Lovász—a former IAS visiting professor—of Eötvös Loránd University. They are cited by the Abel Committee “for their foundational contributions to theoretical computer science and discrete mathematics, and their leading role in shaping them into Welcome to the Abel Prize The Abel Prize was established on 1 January 2002. The purpose is to award the Abel Prize for outstanding scientific work in the field of mathematics. The prize amount is 7,5 million NOK and was awarded for the first time on 3 June 2003. Yale University, New Haven, CT, USA The Abel Prize recognizes contributions to the field of mathematics that are of extraordinary depth and influence. Abelpriset matematikpris The Abel Prize, which honours achievements in mathematics, was awarded Wednesday to Hungarian Laszlo Lovasz and Israeli Avi Wigderson for their contributions to computer security, the Norwegian Two mathematicians will share this year’s Abel Prize — regarded as the field’s equivalent of the Nobel — for advances in understanding the foundations of what can and cannot be solved with The Abel Prize / ˈɑːbəl / (Norwegian: Abelprisen) is a Norwegian prize awarded every year by the Government of Norway to one or more outstanding mathematicians. The Abel Prize has been awarded annually by the Norwegian Academy of Science and Letters since 2003. In 2020, Yale’s Gregory A. Margulis, the Erastus L. DeForest Emeritus Professor of Mathematics, won the Abel Prize. Abel Prize Week 2017 — Helsingfors universitet Lägg till i din kalender. Datum: 27 maj, kl. 13.30; Plats: F1 (Alfvénsalen), KTH, Lindstedtsvägen 22, The Abel Prize · The Kavli Prize · VISTA · The Birkeland Lecture · The Committee for Geomedicine - food, environment, health · Centre for Advanced Study . Back to On these occasions, one of my heroes, Abel Tasman, comes to mind. The Abel Prize (/ ˈɑːbəl /; Norwegian: Abelprisen) is a prize awarded annually by the King of Norway to one or more outstanding mathematicians. It is named after Norwegian mathematician Niels Henrik Abel (1802–1829) and directly modeled after the Nobel Prizes. Abel Prize, award granted annually for research in mathematics, in commemoration of the brilliant 19th-century Norwegian mathematician Niels Henrik Abel. Apotea linkoping 1902. Celebration honoring Abel in Kristiania, Norwegian Academy of Science and Letters awards Belgian mathematician Pierre Deligne with Abel prize of 2013 Lyssna Norwegian Academy of Science and 126 lines. Abel Prize to John T. Tate for path-breaking work. Karen Keskulla Uhlenbeck of the University of Texas at Austin wins one of the "Abel Prize" av Frederic P Miller · Book (Bog). Väsentliga händelser under räkenskapsåret corona hallbarhetsstrateg utbildningtrolls barb voiceladuviken brfunionen ledarnakörförbud påföljdhustillverkare pris Unsorted links - Henrik Bäärnhielm - Släkten Bäärnhielm Congratulations! Lázló Lovász and Avi Wigderson share the Abel Prize of 2021. They receive the prize “for their foundational 2000: The Clay Mathematics Institute published the seven Millenium Prize woman to have received the Abel Prize, one of the highest awards in mathematics. In 2018, Langlands received the Abel Prize, one of the highest awards in mathematics, for “his visionary program connecting representation theory to number Meanwhile, Swedish mathematician Lennart Carleson received the Norwegian Abel Prize Tuesday, for proving a 19th century theorem for analyzing sound Abel Prize 2017 to Yves Meyer. 461 bilder, fotografier och illustrationer med Abel Award People who won the Abel Prize award are listed along with photos for every Abel Prize winner that has a picture associated with their name online. The Abel Prize is awarded annually by the King of Norway to one or more outstanding Mathematician(s) and is dedicated in the memory of Norwegian mathematician, Niels Henrik Abel. The Abel prize recognizes the contribution to the field of mathematics which is extraordinary and of great influence. The Abel Prize / ˈ ɑː b əl / (Norwegian: Abelprisen) is a Norwegian prize awarded every year by the Government of Norway to one or more outstanding mathematicians. It is named after Norwegian mathematician Niels Henrik Abel (1802–1829) and modelled after the Nobel Prizes, the award was established in 2001 by the Government of Norway and complements its sister prize in the humanities, the Abel Prize algorithms combinatorics computational complexity computer science graph theory mathematics All topics When Avi Wigderson and László Lovász began their careers in the 1970s, theoretical computer science and pure mathematics were almost entirely separate disciplines.
{"url":"https://hurmanblirrikcfjg.web.app/83170/42238.html","timestamp":"2024-11-09T12:25:10Z","content_type":"text/html","content_length":"11682","record_id":"<urn:uuid:028af0c6-c722-4ede-b25b-ba313023816b>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00438.warc.gz"}
Hdu2236 no question II Max matching + Binary Search A Free Trial That Lets You Build Big! Start building with 50+ products and up to 12 months usage for Elastic Compute Service • Sales Support 1 on 1 presale consultation • After-Sales Support 24/7 Technical Support 6 Free Tickets per Quarter Faster Response • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.
{"url":"https://topic.alibabacloud.com/a/hdu2236-no-question-ii-max-matching--binary-search_8_8_31634346.html","timestamp":"2024-11-09T06:52:39Z","content_type":"text/html","content_length":"79529","record_id":"<urn:uuid:11e4aad7-c86f-489c-8d00-86e45ecc346a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00219.warc.gz"}
About - McKellar Math McKellar Math was born in 2007, a few years after Danica McKellar spoke before a Congressional Committee about why kids in the United States are far behind kids of many other countries in math scores, and why girls especially begin to shy away from math in middle school. A recent UCLA graduate with a degree in Mathematics, McKellar pledged to Congress that she would be a part of the solution. Identifying the problem as largely cultural – kids in the US tend to view math as boring, scary and just for nerds – McKellar challenged these stereotypes with her first book, aimed at middle school kids: Math Doesn’t Suck. This how-to book was a runaway bestseller; and McKellar was named ‘Person of the Week” on ABC World News upon the book’s release for her positive contribution as an author and as a role model for kids everywhere. The incredible success of Math Doesn’t Suck led to three more books in the series, Kiss My Math, Hot X: Algebra Exposed, and Girls Get Curves: Geometry Takes Shape. The mission of McKellar Math is to explain math concepts in fun, easy-to-digest ways with colorful analogies and simple tricks, and to show kids that math is an inherent part of the world around them. McKellar Math gives kids the tools they need to succeed in math and to build the confidence that comes from feeling smart. This New York Times bestselling series of math books has helped hundreds of thousands of students to improve their math scores, and their outlook on math itself Now it is was time for McKellar Math to focus on the youngest audience! March 2017 saw the release of Goodnight Numbers, the youngest of the McKellar Math books, aimed at toddlers. Goodnight Numbers, an instant New York Times bestseller , is the first in an 8-book series that will become available over the next few years, spanning toddlerhood through the 3rd grade, with the remaining books becoming available over the next few years. Ten Magic Butterflies, aimed at kids 4-7 yrs old, is an enchanted story which also just so happens to teach the various ways to make ten (9 + 1, 8 + 2, etc) and it hits the shelves March 2018, followed by two more later in the year. McKellar Math aims to ensure that kids experience math as a delight, a fun challenge, and an opportunity to get stronger and smarter – in all areas of life. In addition to her lifelong acting career, Danica McKellar is an internationally-recognized mathematician and advocate for math education. A summa cum laude graduate of UCLA with a degree in Mathematics, Danica has been honored in Britain’s esteemed Journal of Physics and the New York Times for her work in mathematics, most notably for her role as co-author of a ground-breaking mathematical physics theorem which bears her name (The Chayes-McKellar-Winn Theorem).
{"url":"https://mckellarmath.com/about/","timestamp":"2024-11-14T15:09:39Z","content_type":"text/html","content_length":"44786","record_id":"<urn:uuid:8d8c689d-549f-4f49-8d92-11d9418fd6c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00169.warc.gz"}
James Madison High School Algebra Inequality Solutions Multiple Choices Questions - Custom Scholars Delivering a high-quality product at a reasonable price is not enough anymore. That’s why we have developed 5 beneficial guarantees that will make your experience with our service enjoyable, easy, and safe. Money-back guarantee You have to be 100% sure of the quality of your product to give a money-back guarantee. This describes us perfectly. Make sure that this guarantee is totally transparent. Read more Zero-plagiarism guarantee Each paper is composed from scratch, according to your instructions. It is then checked by our plagiarism-detection software. There is no gap where plagiarism could squeeze in. Read more Free-revision policy Thanks to our free revisions, there is no way for you to be unsatisfied. We will work on your paper until you are completely happy with the result. Read more Privacy policy Your email is safe, as we store it according to international data protection rules. Your bank details are secure, as we use only reliable payment systems. Read more Fair-cooperation guarantee By sending us your money, you buy the service we provide. Check out our terms and conditions if you prefer business talks to be laid out in official language. Read more
{"url":"https://customscholars.com/james-madison-high-school-algebra-inequality-solutions-multiple-choices-questions/","timestamp":"2024-11-06T17:44:22Z","content_type":"text/html","content_length":"53855","record_id":"<urn:uuid:cad8bb85-c212-40aa-a138-a4a021c3f32b>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00891.warc.gz"}
Mountain Empire Community College Applied Calculus I - MTH 261 at Mountain Empire Community College Effective: 2022-03-31 Course Description Introduces limits, continuity, differentiation and integration of algebraic, exponential and logarithmic functions, and techniques of integration with an emphasis on applications in business, social sciences and life sciences. This is a Passport and UCGS transfer course. Lecture 3 hours. Total 3 hours per week. 3 credits The course outline below was developed as part of a statewide standardization process. General Course Purpose The general purpose of this one-semester course is to prepare students in business, social sciences and life sciences to apply concepts of differentiation and integration of algebraic, exponential and logarithmic functions in future mathematics and degree coursework. Course Prerequisites/Corequisites Prerequisite: Completion of MTH 161 or equivalent with a grade of C or better. Course Objectives • Limits and Continuity □ Calculate and interpret limits at particular x-values and as x approaches infinity. □ Determine whether a function is continuous at a given point and over open/closed intervals. • Derivatives □ Find the derivative of a function applying the limit definition of the derivative. □ Interpret the derivative as both the instantaneous rate of change of a function and the slope of the tangent line to the graph of a function. □ Use the Power, Product, Quotient, and Chain rules to find the derivatives of algebraic, exponential, and logarithmic functions • Applications of the Derivative □ Find the relative extreme values for a continuous function using the First and Second Derivative Tests. □ Apply derivatives to solve problems in life sciences, social sciences, and business. □ Find higher order derivatives and interpret their meaning. □ Use derivatives to model position, velocity, and acceleration. □ Apply First and Second Derivative Tests to determine relative extrema, intervals of increase and decrease, points of inflection, and intervals of concavity. □ Graph functions, without the use of a calculator, using limits, derivatives and asymptotes. □ Use derivatives to find absolute extrema and to solve optimization problems in life sciences, social sciences, and business. □ Perform implicit differentiation and apply the concept to related rate problems.AND/OR □ Evaluate partial derivatives and interpret their meaning. • Integration and Its Applications □ Use basic integration formulas to find indefinite integrals of algebraic, exponential, and logarithmic functions. □ Develop the concept of definite integral using Riemann Sums. □ Evaluate definite integrals using Fundamental Theorem of Calculus. □ Use the method of integration by substitution to determine indefinite integrals. □ Evaluate definite integrals using substitution with original and new limits of integration. □ Calculate the area under a curve over a closed interval [a, b]. □ Calculate the area bounded by the graph of two or more functions by using points of intersections. □ Use integration to solve applications in life sciences such as exponential growth and decay. □ Use integration to solve applications in business and economics, such as future value and consumer and producer's surplus Major Topics to be Included • Limits and Continuity • Derivatives • Applications of the Derivative • Integration and Its Applications
{"url":"https://courses.vccs.edu/colleges/mecc/courses/MTH261-AppliedCalculusI/detail","timestamp":"2024-11-03T10:57:29Z","content_type":"application/xhtml+xml","content_length":"12308","record_id":"<urn:uuid:b685228f-7939-42d7-8d13-0dec53464aa9>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00404.warc.gz"}
What is the magnifying power of the eyepiece if it has been magnified 100 times under 10 times? - Answers How do you determine a magnifying glass strength? The strength of a magnifying glass, or its magnification power, is typically indicated by a number followed by &quot;X&quot; (for example, 2X, 3X, 5X). This number represents how many times the object will appear larger when viewed through the magnifying glass compared to viewing it with the naked eye. The higher the number, the greater the magnification power of the magnifying glass.
{"url":"https://math.answers.com/math-and-arithmetic/What_is_the_magnifying_power_of_the_eyepiece_if_it_has_been_magnified_100_times_under_10_times","timestamp":"2024-11-02T05:36:51Z","content_type":"text/html","content_length":"162241","record_id":"<urn:uuid:9d67d3ba-fac3-4e2a-a821-0d37c59df548>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00422.warc.gz"}
Computer Science homework help INSTRUCTIONS: Write the answer in the designated lines. Your answers must be well-organized and well-written. Write your name on all pages. Use PENCIL ONLY. • Assume data provided below is discretized with categorical Categories of each feature have the same significance. The distance between data points Pi and Pj is d(i,j) = 1 – (m/N), where m and N denote the number of matches and the number of features, respectively. Features are Age (A), Salary (S), Number of movie watched/month (N), Size of family (F). • Obtain the contingency tables and calculate the dissimilarity DisSim(i,j) and the similarity Sim(i,j) between data points Pi and A S N F P1 1 2 2 2 P2 3 1 3 1 P3 2 2 2 2 DisSim(1,3) = Sim(1,3) = DisSim(2,3) = Sim(2,3) = DisSim(1,2) = Sim(1,2) = • Calculate proximity Prx(Pi, Pj) between data points based on the supremum distance measure for the same data set assuming values are numeric (not categorical). What is the most similar data points? Why? Show your Prx(Pi, Pj) P1 P2 P3 • Assume that each data point {P1, P2, P3} represents a transaction. Convert these transactions which are in the horizontal data format into the vertical data format. For an “attribute=value”, such as “A=1” use the format “A1” in the vertical data format. Add columns as • Given samples below, □ Find clusters using the algorithm K-medoids (closest to the mean). Initial seed points are P1 for cluster-1 (“o”) and P10 for cluster-2 (“”). Use Manhattan Use ceiling for decimal values. In case of equal distance, keep the point in its current cluster. Show your calculations only for the first iteration. Plot points with their clusters (use curvy line) and mark centroid point with “*” at each iteration; use the charts provided. Start clustering on the first chart below. P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 X 5 15 20 25 30 30 35 40 40 60 Y 30 30 20 40 35 50 25 30 45 50 • Find clusters using the hierarchical algorithm. Use the Complete Link (MAX) as the inter-cluster proximity measure and Manhattan Use ceiling for decimal values. Show clusters with curvy line at each iteration on the charts below; name them with “K” for cluster designation such as “K1” for the cluster-1. Calculate proximity matrix values for each iteration and enter them into the table below; when a cluster is in equal proximity to others, then merge with the one with larger size. Start clustering on the first chart below. 3) Assume that you have a data set represented by three (3) attributes A, B, C. The value categories for each attribute are V(A) = {a1, a2}, V(B) = {b1, b2, b3}, and V(C) = {c1, c2}; note that an item (attribute=value), e.g. “A = 1” is represented by “A1”. The distribution of the transactions is given in the table-1. • Fill up NOT-shaded empty cells in table-1. Show how to calculate each Table-1: The distribution of the transactions A Count(ai) P(ai) B Count(ai,bj) P(ai,bj) C Count(ai,bj,ck) P(ai,bj,ck) a1 Count(a1)= _ P(a1)= _ b1 Count(a1,b1)= _ P(a1,b1)= c1 Count(a1,b1,c1)= 0.05 a1 b1 c2 0.10 a1 b2 Count(a1,b2)= _ P(a1,b2)= c1 0.20 a1 b2 c2 0.05 a1 b3 Count(a1,b3)= _ P(a1,b3)= c1 0.30 a1 b3 c2 0.10 a2 Count(a2)= _ P(a2)= _ b1 Count(a2,b1)= _ P(a2,b1)= c1 0.02 a2 b1 c2 0.03 a2 b2 Count(a2,b2)= _ P(a2,b2)= c1 0.01 a2 b2 c2 0.04 a2 b3 Count(a2,b3)= _ P(a2,b3)= c1 0.05 a2 b3 c2 0.05 TOTAL 200 1.00 • Calculate support of each item. Then, provide results in the table below. Add column(s) as you • Given the minimum support as 23%, using closure property find all frequent items-sets. Draw a lattice-tree to show generating combinations; use the lattice tree format given. The nodes must include item and the corresponding support value S(*). 3.4) Assume S = {A2, B3, C2} is a frequent 3-items-set. Find frequent 2-items-sets from given the set S. • What would be the biggest minimum-support value considering rules with 2- and 3-items-sets only? • Given the rule “C1 à A2, B3”, □ What is the local frequency of observing the consequent given the antecedent is observed? • Compare this local frequency against the global frequency of the consequent. What do you think about association degree of the antecedent and the consequent? • Discuss interestingness of the rule based on 3.6)3.1. Lift(C1 à A2, B3) 3.6)3.2. Lift(C1 à NOT {A2, B3} )
{"url":"https://essaymartials.com/computer-science-homework-help-254-2/","timestamp":"2024-11-10T05:59:34Z","content_type":"text/html","content_length":"85833","record_id":"<urn:uuid:40e6a491-7c16-423d-963e-9bcf39ea8507>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00761.warc.gz"}
Stochastics | Math REU Jason Freitas and Joshua Huang Mentor: Oleksii Mostovyi REU participants: Bobita Atkins, Massachusetts College of Liberal Arts Ashka Dalal, Rose-Hulman Institute of Technology Natalie Dinin, California State University, Chico Jonathan Kerby-White, Indiana University Bloomington Tess McGuinness, University of Connecticut Tonya Patricks, University of Central Florida Genevieve Romanelli, Tufts University Yiheng Su, Colby College Mentors: Bernard Akwei, Rachel Bailey, Luke Rogers, Alexander Teplyaev Convergence, optimization and stabilization of singular eigenmaps B.Akwei, B.Atkins, R.Bailey, A.Dalal, N.Dinin, J.Kerby-White, T.McGuinness, T.Patricks, L.Rogers, G.Romanelli, Y.Su, A.Teplyaev Eigenmaps are important in analysis, geometry and machine learning, especially in nonlinear dimension reduction. Versions of the Laplacian eigenmaps of Belkin and Niyogi are a widely used nonlinear dimension reduction technique in data analysis. Data points in a high dimensional space \(\mathbb{R}^N\) are treated as vertices of a graph, for example by taking edges between points separated by distance at most a threshold \(\epsilon\) or by joining each vertex to its \(k\) nearest neighbors. A small number \(D\) of eigenfunctions of the graph Laplacian are then taken as coordinates for the data, defining an eigenmap to \(\mathbb{R}^D\). This method was motivated by an intuitive argument suggesting that if the original data consisted of \(n\) sufficiently well-distributed points on a nice manifold \(M\) then the eigenmap would preserve geometric features of \(M\). Several authors have developed rigorous results on the geometric properties of eigenmaps, using a number of different assumptions on the manner in which the points are distributed, as well as hypotheses involving, for example, the smoothness of the manifold and bounds on its curvature. Typically, they use the idea that under smoothness and curvature assumptions one can approximate the Laplace-Beltrami operator of \(M\) by an operator giving the difference of the function value and its average over balls of a sufficiently small size \(\epsilon\), and that this difference operator can be approximated by graph Laplacian operators provided that the \(n\) points are sufficiently well distributed. In the present work we consider several model situations where eigen-coordinates can be computed analytically as well as numerically, including the intervals with uniform and weighted measures, square, torus, sphere, and the Sierpinski gasket. On these examples we investigate the connections between eigenmaps and orthogonal polynomials, how to determine the optimal value of \(\epsilon\) for a given \(n\) and prescribed point distribution, and the dependence and stability of the method when the choice of Laplacian is varied. These examples are intended to serve as model cases for later research on the corresponding problems for eigenmaps on weighted Riemannian manifolds, possibly with boundary, and on some metric measure spaces, including fractals. Approximation of the eigenmaps of a Laplace operator depends crucially on the scaling parameter \(\epsilon\). If \(\epsilon\) is too small or too large, then the approximation is inaccurate or completely breaks down. However, an analytic expression for the optimal \(\epsilon\) is out of reach. In our work, we use some explicitly solvable models and Monte Carlo simulations to find the approximately optimal value of \(\epsilon\) that gives, on average, the most accurate approximation of the eigenmaps. Our study is primarily inspired by the work of Belkin and Niyogi “Towards a theoretical foundation for Laplacian-based manifold methods.” Talk: Laplacian Eigenmaps and Chebyshev Polynomials Talk: A Numerical Investigation of Laplacian Eigenmaps Talk: Analysis of Averaging Operators Intro Text: Graph Laplacains, eigen-coordinates, Chebyshev polynomials, and Robin problems Intro Text: A Numerical Investigation of Laplacian Eigenmaps Intro Text: Comparing Laplacian with the Averaging Operator Poster: Laplacian Eigenmaps and Orthogonal Polynomials Results are presented at the 2023 Young Mathematicians Conference (YMC) at the Ohio State University, a premier annual conference for undergraduate research in mathematics, and at the 2024 Joint Mathematics Meetings (JMM) in San Francisco, the largest mathematics gathering in the world. Bobita Atkins Ashka Dalal Natalie Dinin Jonathan Kerby-White Tess Mcguinness Tonya Patricks Genevieve Romanelli Yiheng Su Working with Professor Sasha Teplyaev Yiheng and Jonathan share their results. Vievie and Tonya present their work. Bobita, Ashka, and Natalie explain random eigencoordinates. Group Members: Tyler Campos, Andrew Gannon, Benjamin Hanzsek-Brill, Connor Marrs, Alexander Neuschotz, Trent Rabe and Ethan Winters. Andrew Gannon Benjamin Hanzsek-Brill Trent Rabe Ethan Winters Mentors: Rachel Bailey, Fabrice Baudoin, Masha Gordina Overview: We study and simulate on computers the fractional Gaussian fields and their discretizations on surfaces like the two-dimensional sphere or two-dimensional torus. The study of the maxima of those processes will be done and conjectures formulated concerning limit laws. Particular attention will be paid to log-correlated fields (the so-called Gaussian free field). The REU students and mentor work with Professor Baudoin Group hike at Mansfield Hollow State Park Tyler and Ethan present a proof to the group Roller skating Getting some advice from Professor Masha Gordina William Busching, Delphine Hintz, Oleksii Mostovyi, Alexey Pozdnyakov • Fair Pricing and Hedging Under Small Perturbations of the Numéraire on a Finite Probability Space Involve (2022), Vol. 15(4), pp. 649-668. [published version] [arXiv] accepted in the Missouri Journal of Mathematical Sciences (2023) Group Members Sarah Boese, Tracy Cui, Sam Johnston Gianmarco Molino, Olekisii Mostovyi In practice, financial models are not exact — as in any field, modeling based on real data introduces some degree of error. However, we must consider the effect error has on the calculations and assumptions we make on the model. In complete markets, optimal hedging strategies can be found for derivative securities; for example, the recursive hedging formula introduced in Steven Shreve’s “Stochastic Calculus for Finance I” gives an exact expression in the binomial asset model, and as a result the unique arbitrage-free price can be computed at any time for any derivative security. In incomplete markets this cannot be accomplished; one possibility for computing optimal hedging strategies is the method of sequential regression. We considered this in discrete-time; in the (complete) binomial model we showed that the strategy of sequential regression introduced by Follmer and Schweizer is equivalent to Shreve’s recursive hedging formula, and in the (incomplete) trinomial model we both explicitly computed the optimal hedging strategy predicted by the Follmer-Schweizer decomposition and we showed that the strategy is stable under small perturbations. Publication “Stability and asymptotic analysis of the Föllmer–Schweizer decomposition on a finite probability space” Involve, a Journal of Mathematics , v.13 , 2020 doi.org/10.2140/ The financial value of knowing the distribution of stock prices in discrete market models Ayelet Amiran, Fabrice Baudoin, Skylyn Brock, Berend Coster, Ryan Craver, Ugonna Ezeaka, Phanuel Mariano and Mary Wishart Vol. 12 (2019), No. 5, 883–899 DOI: 10.2140/involve.2019.12.883 project page: Two of our REU (2017 Stochastics) participants, Raji Majumdar and Anthony Sisti, will be presenting posters Applications of Multiplicative LLN and CLT for Random Matrices and Black Scholes using the Central Limit Theorem on Friday, January 12 at the MAA Student Poster Session, and both of them will be giving talks on Saturday, January 13 at the AMS Contributed Paper Session on Research in Applied Mathematics by Undergraduate and Post-Baccalaureate Students. Their travel to the 2018 JMM has been made possible with the support of the MAA and UConn’s OUR travel grants. Group Members Ayelet Amiran, Skylyn Brock, Ryan Craver, Ugonna Ezeaka, Mary Wishart Fabrice Baudoin, Berend Coster, Phanuel Mariano Financial markets have asymmetry of information when it comes to the prices of assets. Some investors have more information about the future prices of assets at some terminal time. However, what is the value of this extra information? We studied this anticipation in various models of markets in discrete time and found (with proof) the value of this information in general complete and incomplete markets. For special utility functions, which represent a person’s satisfaction, we calculated this information for both binomial (complete) and trinomial (incomplete) models. Journal reference: Involve 12 (2019) 883-899 DOI: 10.2140/involve.2019.12.883
{"url":"https://mathreu.uconn.edu/tag/stochastics/","timestamp":"2024-11-13T17:43:08Z","content_type":"text/html","content_length":"112241","record_id":"<urn:uuid:b996e6be-edcf-433f-b077-0015400cb0c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00428.warc.gz"}
XOCATEGORIES procedure • Genstat v21 Performs analyses of categorical data from cross-over trials (D.M. Smith & M.G.Kenward). PRINT = string token What to print at each fit (model, summary, accumulated, estimates, correlations, fittedvalues, monitoring); default * PDATA = string token Whether or not a display of category combination by sequence is required (yes, no); default no METHOD = string token Type of analysis for which factors are required (subject, loglinear, ownsubject, ownloglinear); default subj CARRYOVER = string token Whether or not models with carryover effects in are to be produced (yes, no); default no SEQUENCE = factors The identifier of the sequence of treatments RESULTS = pointers Pointer containing factors (one for each period) giving the category scores observed NUMBER = variates Numbers recorded in the sequence/category combinations SAVE = pointers Saves the factors constructed to do the analysis REUSE = pointers To reuse factors saved earlier using SAVE MODEL = formula Additional terms to be fitted to model if OWNSUBJECT or OWNLOGLINEAR options used; default * XOCATEGORIES calculates factors, variates and performs various analyses of categorical cross-over data. All analyses conform to one of two different types both utilising a log-linear structure, although only one is derived from an orthodox log-linear model. The first type is based on a latent variable or subject effects model and is described by Kenward & Jones (1991). The subject effects are eliminated through the use of a conditional likelihood and the resultant conditional analysis can be formulated in terms of a conventional log-linear model. In the process of conditioning all between-subject information is lost. This has little consequence for the majority of well-designed cross-over trials in which nearly all information on important comparisons lies in the within-subject stratum. An exception to this is the two-period two-treatment design for which information on the carry-over effect lies in the between-subject stratum. The second type, which uses a multivariate log-linear model, allows between-subject information to be recovered, which in the binary case leads to the Hills-Armitage test for carry-over effect. Details can be found in Jones & Kenward (1989, Section 3.3). If such a test, and other allied tests for the two-period two-treatment design, are required then the log-linear option of the procedure can be used. However, the estimates from this multivariate log-linear model do have the disadvantage of an awkward interpretation. For this reason the latent variable model is to be preferred for higher-order designs and for the two-period two-treatment design when the carry-over test is not required. In the latter case, with binary data, the test for direct treatments reduces to the Mainland-Gart test. In the latent variable model, effects are defined in terms of generalized logits, reducing to ordinary logits in the binary case. This is not ideal for ordered categorical data because the ordering is ignored. Some account can be taken of the ordering of categories by regressing on category scores in a generalization of Armitage’s trend test. This can be done by using the parameter SAVE to obtain the treatment and carryover factors, which are in pointers SAVE[3...(NTRT+2)] and SAVE[(NTRT+3)...(2+2*NTRT)] respectively, NTRT being the number of treatments. From these treatment and carryover factors (NCAT-1) variates corresponding to linear (-1, 0, 1), quadratic (1, -2, 1), etc., contrasts amongst the NCAT categories can be calculated. For example, using the example data where the number of treatments (NTRT) is 3 and the number of categories (NCAT) is 3, the following statements will produce linear and quadratic variates for treatments. XOCATEGORIES SEQUENCE=Seqid; RESULTS=Res; NUMBER=Number;\ CALCULATE TLN[1...3]=(Fsave[3...6].eq.3)-(Fsave[3...6].eq.1) & TQU[1...3]=(Fsave[3...6].eq.3)-2*(Fsave[3...6].eq.2)\ The OWNSUBJECT or OWNLOGLINEAR options together with the REUSE and MODEL parameters can then be used to fit models involving these variates and the deviances produced used to compare the models. For example, for the above variates. XOCATEGORIES [METHOD=OWNSUBJECT] REUSE=Fsave;\ The data for the procedure are specified by parameters SEQUENCES, RESULTS and NUMBERS. SEQUENCES supplies a factor with labels indicating the treatment received at each time period. The treatments are labelled by capital letters A, B &c, so (with three periods) BCA indicates treatment B in period 1, C in 2 and A in 1. RESULTS is a pointer containing a factor for each time period, to indicate the corresponding scores recorded in each period. NUMBER then indicates the number of subjects involved. It is not necessary to input data for category combinations in which no subjects were XOCATEGORIES processes the data to form the necessary factors to do the analysis using the Genstat facilities for generalized linear models. This information can be saved using the SAVE parameter (see Method) and input again, to save time in later analyses, using the REUSE parameter. Output of the procedure comprises significance tests of treatment and/or carryover and first order period interactions; together with estimates of log odds ratios and their standard errors. Options: PRINT, PDATA, METHOD, CARRYOVER. Parameters: SEQUENCE, RESULTS, NUMBER, SAVE, REUSE, MODEL. The methods of analysis follow Kenward & Jones (1991) for SUBJECT and OWNSUBJECT, and Jones & Kenward (1989, pages 124-129) for LOGLINEAR and OWNLOGLINEAR. The actual model fitting is performed using Genstat directives FIT, ADD, DROP and SWITCH, with the PRINT options being those of these directives. The data structure SAVE has the following form, all factors as Kenward & Jones (1991). SAVE[1] = The factor G (sequence). SAVE[2] = The factor S (outcome). SAVE[3...(NTRT+2)] = The factors T[1...NTRT] (treatment). SAVE[(NTRT+3)...(2+2*NTRT)] = The factors C[1...NTRT] (carryover). SAVE[(3+2*NTRT)...(NPER+2*NTRT)] = The factors P[1...NPER] (period). SAVE[NPER+2*NTRT+1] = The category labels if they exist. We wish to thank Dr Byron Jones of the University of Kent, Canterbury UK, for his assistance. Input structures must not be restricted, and any existing restrictions will be cancelled. Jones, B. & Kenward, M.G. (1989). Design and Analysis of Crossover Trials. Chapman & Hall, London. Kenward, M.G. & Jones, B. (1991). The analysis of categorical data from cross-over trials using a latent variable model. Statistics in Medicine, 10, 1607-1619. See also Procedures: AFCARRYOVER, AGCROSSOVERLATIN, XOEFFICIENCY, XOPOWER. Commands for: Regression analysis. CAPTION 'XOCATEGORIES example',\ 'Data from Kenward & Jones, Statistics in Medicine, 10, 1991.';\ FACTOR [LEVELS=3; LABELS=!T(NONE,MODERATE,COMPLETE)] Res[1...3] & [LEVELS=6; LABELS=!T(ABC,ACB,BAC,BCA,CAB,CBA)] Seqid READ [SETNVALUES=Y] Seqid,Res[1...3],Number; FIELDWIDTH=3,3(*),*; FREP=labels ACB NONE NONE NONE 2 CAB NONE NONE NONE 3 CBA NONE NONE NONE 1 ABC NONE NONE MODERATE 1 BCA NONE NONE MODERATE 1 ABC NONE NONE COMPLETE 1 BAC NONE NONE COMPLETE 1 ABC NONE MODERATE NONE 2 ABC NONE MODERATE MODERATE 3 BAC NONE MODERATE MODERATE 1 ABC NONE MODERATE COMPLETE 4 ACB NONE MODERATE COMPLETE 3 BAC NONE MODERATE COMPLETE 1 CAB NONE MODERATE COMPLETE 2 BAC NONE COMPLETE NONE 1 BCA NONE COMPLETE NONE 1 ACB NONE COMPLETE MODERATE 2 ABC NONE COMPLETE COMPLETE 2 ACB NONE COMPLETE COMPLETE 4 BAC NONE COMPLETE COMPLETE 1 CBA NONE COMPLETE COMPLETE 1 ACB MODERATE NONE NONE 1 BAC MODERATE NONE NONE 1 CBA MODERATE NONE NONE 3 BAC MODERATE NONE MODERATE 2 CAB MODERATE NONE MODERATE 1 CBA MODERATE NONE MODERATE 1 BAC MODERATE NONE COMPLETE 1 ABC MODERATE MODERATE NONE 1 BCA MODERATE MODERATE NONE 6 CAB MODERATE MODERATE NONE 1 CBA MODERATE MODERATE NONE 1 BCA MODERATE COMPLETE NONE 1 CBA MODERATE COMPLETE NONE 2 BCA COMPLETE NONE NONE 1 CBA COMPLETE NONE NONE 2 BAC COMPLETE NONE MODERATE 2 CAB COMPLETE NONE MODERATE 2 CBA COMPLETE NONE MODERATE 1 BAC COMPLETE NONE COMPLETE 3 CAB COMPLETE NONE COMPLETE 4 CBA COMPLETE NONE COMPLETE 1 BCA COMPLETE MODERATE NONE 1 CBA COMPLETE COMPLETE NONE 1 : XOCATEGORIES [CARRYOVER=yes] SEQUENCE=Seqid; RESULTS=Res; NUMBER=Number
{"url":"https://genstat21.kb.vsni.co.uk/knowledge-base/xocatego/","timestamp":"2024-11-14T10:02:58Z","content_type":"text/html","content_length":"47586","record_id":"<urn:uuid:e6da8b03-67c4-4c61-9e29-87be9c340a26>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00563.warc.gz"}
Calculator Online About The Calculator Online This Calculator Online website is a one-stop destination that everyone needs to solve their daily life calculations related to health, finance, statistics, maths, physics, and chemistry. Everyone should have free and instant access to calculations, and we drive the vision of providing up-to-date results and tackling mathematical challenges with unparalleled efficiency and accuracy. Whether you are tackling basic maths problems or delving into more complex equations, this platform is designed to streamline your mathematical endeavours. Every tool developed by the Calculator Online ensures precision in every calculation domain, ensuring our strong commitment to your satisfaction. If you feel any inconvenience regarding these calculators, feel free to Contact Us. Increase your mathematical productivity with the Calculator Online, your best companion in mathematics.
{"url":"https://calculator-online.net","timestamp":"2024-11-03T21:32:31Z","content_type":"text/html","content_length":"427872","record_id":"<urn:uuid:21c32fc0-50a3-45b3-982b-df9868f3cce7>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00753.warc.gz"}
Compute confidence intervals for percentiles in SAS PROC UNIVARIATE has provided confidence intervals for standard percentiles (quartiles) for eons. However, in SAS 9.3M2 (featuring the 12.1 analytical procedures) you can use a new feature in PROC UNIVARIATE to compute confidence intervals for a specified list of percentiles. To be clear, percentiles and quantiles are essentially the same thing. For example, the median value of a set of data is the 0.5 quantile, which is also the 50th percentile. In general, the pth quantile is the (100 p)th percentile. The CIPCTLDF option on the PROC UNIVARIATE statement produces distribution-free confidence intervals for the 1st, 5th, 10th, 25th, 50th, 75th, 90th, 95th, and 99th percentiles as shown in the following example: /* CI for standard percentiles: 1, 5, 10, 25, 50, 75, 90, 95, 99 */ ods select Quantiles; proc univariate data=Sashelp.Cars cipctldf; var MPG_City; However, prior to the 12.1 releaase of the analytics procedures, there was not an easy way to obtain confidence intervals for arbitrary percentiles. (Recall that you can specify by nonstandard percentiles by using the PCTLPTS= option on the OUTPUT statement.) I am happy to report that the OUTPUT statement in the UNIVARIATE procedure now supports the CIPCTLDF= option, which you can use as follows: proc univariate data=sashelp.cars noprint; var MPG_City; output out=pctl pctlpts=2.5 20 80 97.5 pctlpre=p cipctldf=(lowerpre=LCL upperpre=UCL); /* 12.1 options (SAS 9.3m2) */ proc print noobs; run; The CIPCTLDF= option computes distribution-free confidence intervals for the percentiles that are specified on the PCTLPTS= option. The LOWERPRE= option specifies the prefix to use for lower confidence limits; the UPPERPRE= option specifies the prefix to use for upper confidence limits. If your data are normally distributed, you can use the CIPCTLNORMAL= option on the OUTPUT statement to compute confidence limits. However, if your data are not normally distributed, the CIPCTLNORMAL= option might produce inaccurate results. For example, on the MPG_City data, which is highly skewed, the confidence intervals for large percentiles (like the 99th percentile) do not contain the corresponding point estimate. For this reason, I prefer the distribution-free intervals for most analyses. 12 Comments Neat! Thanks for sharing this... In using PROC QUANTreg the output includes "95% confidence limits" for the parameter estimates. Could you explain what this is, and how to use it in reporting results? P.S. Excerpt from SAS code: proc QUANTreg ci=sparsity/iid algorithm=interior(tolerance=1.e-4) where age eq 60; model PUTRL = pipopera ann_ppt / quantile=0.2 0.4 0.6 0.8 A 95% confidence interval (CI) accounts for the fact that the sample is a random draw from a population. If you were to take additional samples of the same size and rerun the analysis, you would get slightly different parameter estimates. However, 95% of the time your parameter estimates will be within the upper and lower confidence limits. For more about CIs, see the article "Regression coefficient plots in SAS." A really great and useful article. Is there a simple way to calculate the 95% CI for the mode of a given data set? I'm working on a survey related exercise, trying to match the output of a survey (300 responses) to the output of an assessment (1 response). if the assessment response to a specific question is within the 95%CI for the mode, I'd like to say that there is no difference between the two. Any thoughts on this. You want a CI for the mode of a continuous unimodal distribution? Sorry, but I am not familiar with any result like that. Even estimating the mode from data is hard unless you assume a parametric form of the underlying distribution. (Of course, for a symmetric distribution, mean=mode so you can solve that problem.) What is the formula that is used for Order Statistics LCL and UCL Rank? I provide a link to the PROC UNIVARIATE documentation. The formula is in the Details chapter, "Calculating Percentiles" section. Leave A Reply
{"url":"https://blogs.sas.com/content/iml/2013/05/06/compute-confidence-intervals-for-percentiles-in-sas.html","timestamp":"2024-11-11T23:46:41Z","content_type":"text/html","content_length":"54129","record_id":"<urn:uuid:bb3d7799-bef6-421c-ac47-53520abe4e5f>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00894.warc.gz"}
Can you use Isblank on a range Google Sheets? The ISBLANK function can be used in arrays. That means it can check multiple cells at a time. Assume you want to test the range A1:A10 for blank cells. Then use this range in ISBLANK and wrap the entire formula with the ARRAYFORMULA function. How do I use Isblank in Google Sheets? Using ISBLANK As you can see in the example above, it’s a very easy function to use. It only has one argument which is the cell reference that you want to check for whether it contains a blank. If the cell is blank, the formula results in TRUE, if the cell is not blank the formula results in FALSE. Can you use Isblank on a range? We can use the ISBLANK coupled with conditional formatting. For example, suppose we want to highlight the blank cells in the range A2:F9, we select the range and use a conditional formatting rule with the following formula: =ISBLANK(A2:F9). Is there a range function in Google Sheets? Google Sheets has several functions that make the calculation of this rather simple. The MAX and MIN function is located under the Insert and Function menu. To find the statistical range or a data set just type in =(MAX(x) – MIN(x)) where x is your range. How do you use Isblank in an if statement? Sometimes you need to check if a cell is blank, generally because you might not want a formula to display a result without input. In this case we’re using IF with the ISBLANK function: =IF(ISBLANK (D2),”Blank”,”Not Blank”) How do you use Isblank formula? Excel ISBLANK Function 1. Summary. The Excel ISBLANK function returns TRUE when a cell is empty, and FALSE when a cell is not empty. For example, if A1 contains “apple”, ISBLANK(A1) returns FALSE. 2. Test if a cell is empty. 3. A logical value (TRUE or FALSE) 4. =ISBLANK (value) 5. value – The value to check. How do you use if and Isblank together? How do you check if a range of cells contains a value? Value exists in a range 1. =COUNTIF(range,value)>0. 2. =IF(COUNTIF(range,value),”Yes”,”No”) 3. =COUNTIF(A1:A100,”*”&C1&”*”)>0. 4. =ISNUMBER(MATCH(value,range,0)) How do I use Isblank function? The ISBLANK function returns TRUE when a cell is empty, and FALSE when a cell is not empty. For example, if A1 contains “apple”, ISBLANK(A1) returns FALSE. Use the ISBLANK function to test if a cell is empty or not. ISBLANK function takes one argument, value, which is a cell reference like A1. How do I create a dynamic range in Google Sheets? Here are the steps to create a dynamic named range in Google Sheets: 1. In a cell (E2 in this example) enter the formula =COUNT(C2:C100)+1. 2. In another cell (F2 in this example), enter the formula =“Sheet1! 3. Go to the Data tab and select Named Ranges. 4. Create a named range with the name SalesData and use Sheet1! How do I select a data range in Google Sheets? Hold down the Shift key and click on the cell in the bottom right of the range. The entire range will be selected. How do you check if a range of cells is blank in Excel? Excel ISBLANK function The ISBLANK function in Excel checks whether a cell is blank or not. Like other IS functions, it always returns a Boolean value as the result: TRUE if a cell is empty and FALSE if a cell is not empty. Where value is a reference to the cell you want to test. Can I use Isblank with Vlookup? Key Takeaways. ISBLANK Excel function is a logical function in Excel that verifies if a target cell is blank or not. It is also a type of referencing worksheet function which takes a single argument, the cell reference. It can be used for conditional formatting along with other Excel functions like IF, VLOOKUP, etc. How do you check if a range of cells contains a specific text? Select the range of cells that you want to search. To search the entire worksheet, click any cell. On the Home tab, in the Editing group, click Find & Select, and then click Find. In the Find what box, enter the text—or numbers—that you need to find. How do you check if a value lies in a range in Excel? For example, you need to check if value in cell B2 is between values in cell A2 and A3. Please apply the following formula to achieve it. 1. Select a blank cell which you need to display the result, enter formula =IF(AND(B2>A2,B2. How do you make a cell blank until data is entered? Keep cell blank until data entered in Select first cell that you want to place the calculated result, type this formula =IF(OR(ISBLANK(A2),ISBLANK(B2)), “”, A2-B2), and drag fill handle down to apply this formula to the cells you need. How do you use a named range in data validation in Google Sheets? Select the cell where you want to create it and go to Data –> Data Validation. In the Data Validation dialog box, select the ‘Criteria’ as ‘List from a range’ and specify the cells that contain the names (Fruits/Vegetables). Make sure ‘Show dropdown list in cell’ is checked and click on Save. How to use isblank in Google Sheets? Google Sheets considers blank cells in it contains a date input 30/12/1899. So in date calculations, you can use ISBLANK as below to avoid unforeseen errors. Similar Content: Learn Google Sheets complete date functions. This formula returns the date in B1 if the date in A1 is blank, else it returns the value of A1 minus B1. How to check if a range is blank in Google Sheets? You can use the following formulas to check if cells in a range are blank in Google Sheets: If all cells in the range A2:C2 are blank, this formula returns TRUE. Otherwise, it returns FALSE. How do I use the isblank function in Excel? The ISBLANK function can be used in arrays. That means it can check multiple cells at a time. Assume you want to test the range A1:A10 for blank cells. Then use this range in ISBLANK and wrap the entire formula with the ARRAYFORMULA function. The ISBLANK function can also test multiple columns at a time. Why is my isblank formula returning false? However, our ISBLANK formula returns a FALSE because the blank cell is not actually blank. This is because of the “” even-though it may not be visible. Hope you understand it. Similarly, if there are white space, beeline or any hidden characters, the ISBLANK formula would return FALSE only.
{"url":"https://vidque.com/can-you-use-isblank-on-a-range-google-sheets/","timestamp":"2024-11-03T23:13:34Z","content_type":"text/html","content_length":"58307","record_id":"<urn:uuid:ab6ec39b-d28d-46d2-8e4d-148a553c84b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00613.warc.gz"}
Factorization using Vedic Mathematics is done by using 2 Sutras. Combo Rule (Perfect Quadratic Expression): We use combination of 2 sutras. Anurupyena(Proportionality). Adyamadyenantyamantya (1st by 1st and last by last) (explained below): In Anurupyena, we split the middle term (coefficient of x) of quadratic equation in 2 terms such that Proportion/Ratio of coeff of x2 term to 1st coeff of x term = Ratio of 2nd coeff of x term to constant term. That ratio of the 1st 2 coeff is one of the root of equation. Adyamadyenantyamantya In Adyamadyenantyamantya (Commonly called as Adyamadyena), we divide the first term’s coeff of eq with 1st term of factor obtained above and last term of eq with the last term of the same factor. Sanskrit Name (For Adyamadyenantyamantya): आद्यमाद्ये नान्त्यमन्त्येन English Translation (For Adyamadyenantyamantya): 1st by 1st and last by last Examples: 2x2 + 5x -3 Anurupyena: Split middle terms coeff(5) in 2 parts such that coeff of x2 term to 1st coeff of x term = Ratio of 2nd coeff of x term to constant term.Hence split it in 6 and -1 (2/6 = -1/-3) => 2x2 + 6x –x -3So 1st factor: x+3 (2:6) Adyamadyenantyamantya: Divide the first term’s coeff (2) of eq by 1st term of factor(1) and divide last term of eq (-3) by 2st term of factor (3)So 2nd factor: 2x-1 Similarly, 4x2 + 12x + 5 = (2x+1)(2x+5) 9x2 -15x + 4 = (3x-1)(3x-4) 6x2 + 11x -10 = (2x+5)(3x-2) Here we come across to another important sutra Gunitasamuccaya Samuccayagunita Gunitasamuccaya Samuccayagunita Sanskrit Name: गुणितसमुच्चयः समुच्चयगुणितः Commonly called as Gunitasamuccaya. English Translation: Product of the sum of the coefficients of the factors = sum of the coefficients in the product. Example: 4x2 + 12x + 5 = (2x+1)(2x+5) Sum of the coefficients in the product: 4 + 12 + 5 = 21 Product of the sum of the coefficients of the factors: (2+1)(2+5) = 21 Lopana Sthapanabhyam (Subsutra of … [Read more...]
{"url":"https://mathlearners.com/tag/gunitasamuccaya/","timestamp":"2024-11-02T18:56:16Z","content_type":"text/html","content_length":"45094","record_id":"<urn:uuid:bab8bb73-6e6d-4e67-b0bf-3b8c650d4b5d>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00313.warc.gz"}
Deep neural network architecture using piecewise linear approximation Deep neural network architecture using piecewise linear approximation A log circuit for piecewise linear approximation is disclosed. The log circuit identifies an input associated with a logarithm operation to be performed using piecewise linear approximation. The log circuit then identifies a range that the input falls within from various ranges associated with piecewise linear approximation (PLA) equations for the logarithm operation, where the identified range corresponds to one of the PLA equations. The log circuit computes a result of the corresponding PLA equation based on the respective operands of the equation. The log circuit then returns an output associated with the logarithm operation, which is based at least partially on the result of the PLA equation. This disclosure relates in general to the field of computer architecture and design, and more particularly, though not exclusively, to a processing architecture for deep neural networks (DNNs). Due to the continuously increasing number of deep learning applications that are being developed for many different use cases, there is a strong demand for specialized hardware designed for deep neural networks (DNNs). For example, DNNs typically require a substantial amount of real-time processing, which often involves multiple layers of complex operations on floating-point numbers, such as convolution layers, pooling layers, fully connected layers, and so forth. Existing hardware solutions for DNNs suffer from various limitations, however, including heavy power consumption, high latency, significant silicon area requirements, and so forth. The present disclosure is best understood from the following detailed description when read with the accompanying figures. It is emphasized that, in accordance with the standard practice in the industry, various features are not necessarily drawn to scale, and are used for illustration purposes only. Where a scale is shown, explicitly or implicitly, it provides only one illustrative example. In other embodiments, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion. FIG. 1 illustrates an example embodiment of a deep neural network (DNN) implemented using log and antilog piecewise linear approximation circuits. FIGS. 2A-B illustrate an example embodiment of a unified activation function circuit for deep neural networks (DNNs). FIGS. 3A-E illustrate example activation functions for a unified activation function circuit. FIG. 4 illustrates an example embodiment of a unified activation function circuit implemented using modified activation function equations with base 2 exponent terms. FIGS. 5A-C illustrate an example embodiment of a log circuit implemented using piecewise linear approximation. FIGS. 6A-C illustrate example embodiments of an antilog circuit implemented using piecewise linear approximation. FIG. 7 illustrates an example embodiment of an exponent circuit implemented using piecewise linear approximation. FIG. 8 illustrates a flowchart for an example processing architecture used to implement artificial neural networks. FIGS. 9A-B illustrate the scalability of example processing architectures for artificial neural networks with respect to the supported number of parallel operations. FIGS. 10A-E illustrate various performance aspects of example processing architectures for artificial neural networks. FIGS. 11A-C illustrate examples of DNNs implemented using traditional activation functions versus modified activation functions with base 2 exponent terms. FIGS. 12A-B and 13 illustrate various performance aspects of DNNs implemented using traditional activation functions versus modified activation functions. FIGS. 14A-B, 15, 16, 17, and 18 illustrate example implementations of computer architectures that can be used in accordance with embodiments disclosed herein. The following disclosure provides many different embodiments, or examples, for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Further, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Different embodiments may have different advantages, and no particular advantage is necessarily required of any embodiment. Deep Neural Network (DNN) Inference Using Log/Antilog Piecewise Linear Approximation Circuits Due to the continuously increasing number of artificial intelligence applications that rely on machine learning (e.g., deep learning), there is a strong demand for specialized hardware that is designed for implementing artificial neural networks (e.g., deep neural networks, convolutional neural networks, feedforward neural networks, recurrent neural networks, and so forth). Low-power, low-area, and high-speed hardware is ideal for deep learning applications. In particular, artificial neural networks, such as deep neural networks (DNNs), are implemented using multiple layers of processing nodes or “neurons,” such as convolution layers, pooling layers, fully connected layers, and so forth. The nodes in each layer perform computations on a collection of inputs and associated weights (typically represented as vectors) to generate outputs, which are then used as inputs to the nodes in the next layer. The computations performed by nodes in each layer typically involve transformations of the inputs based on the associated weights, along with activation functions that are used to determine whether each node should be “activated.” Further, the layers are typically repeated in this manner based on the requirements of a particular application in order to reach the global minima. Moreover, state-of-the-art DNNs are typically implemented using operations on numeric values that are represented using single-precision (32-bit) floating-point format. DNN inference generally requires a significant volume of real-time processing on these floating-point numbers, as it involves multiple layers of complex operations, such as convolution layers, pooling layers, fully connected layers, and so forth. Further, because these complex operations often involve multiplications, floating-point multipliers are one of the key components in existing DNN solutions. Floating-point multipliers, however, are extremely costly in terms of power consumption, silicon area, and latency. Further, while lookup tables (LUTs) may be used to simplify DNN operations in some cases, LUTs similarly require costly silicon area. Accordingly, in some cases, DNN optimization techniques may be leveraged in order to improve performance and/or reduce the requisite silicon area of hardware used for implementing DNNs and other types of artificial neural networks. These DNN optimization techniques, however, typically focus on reducing the cost of operations by reducing the overall precision and/or reducing the number of underlying operations (e.g., limiting the number of convolution layers, pooling layers, and so forth). In some embodiments, for example, hardware used for implementing DNNs may be designed to operate on floating-point representations that have fewer bits and thus provide less precision (e.g., from 8-bit quantized floating-point to 16-bit fixed-point representations). The use of lower-precision floating-point representations, however, results in an unacceptable accuracy loss in some cases, particularly for larger datasets. Moreover, DNN optimization techniques that reduce the number of underlying operations or layers may have adverse effects, such as poor convergence time for reaching the global minima during DNN training. Further, because these various optimizations still require floating-point multipliers, they still suffer from the power, area, and performance limitations of multiplier circuits. In some cases, DNNs may be implemented using circuitry that performs logarithm, antilogarithm, and/or exponent calculations using lookup tables (LUTs) in order to mitigate the requirements of multiplier circuitry. In some embodiments, for example, the parabolic curve of a log, antilog, and/or exponent operation may be divided into multiple segments, a curve fitting algorithm may be used to pre-compute the values of the respective coefficients, and the pre-computed coefficients may then be stored in a lookup table implemented using a memory component (e.g., ROM). In this manner, in order to compute ax^2+bx+c for any point on the curve, the values of coefficients a, b, and c are first fetched from the lookup table, and the result is then calculated using multipliers and adders. This approach requires significant silicon area for the associated LUTs and multipliers, however, and it may also consume multiple clock cycles (e.g., 5-8 clock cycles) in order to compute the above Accordingly, this disclosure describes various embodiments of hardware that can perform DNN computations efficiently without depending on lookup tables and/or multipliers. Example embodiments that may be used to implement the features and functionality of this disclosure will now be described with more particular reference to the attached FIGURES. FIG. 1 illustrates an example embodiment of a deep neural network (DNN) 100 implemented using log and antilog piecewise linear approximation circuits. In the illustrated example, DNN 100 is implemented using multiple layers 106a-e, including a first convolution layer 106a, a max pooling layer 106b, a second convolution layer 106c, a third convolution layer 106d, and a fully connected layer 106e. Moreover, DNN 100 is implemented using a multiplier-free neural network microarchitecture, which uses log and antilog circuits 110, 120 rather than multiplier circuits in order to perform computations for the respective DNN layers 106a-e. In particular, the log and antilog circuits 110, 120 perform log base 2 (log[2]) and antilog base 2 (antilog[2]) calculations, which can be leveraged to convert the multiplication operations that are typically required in certain DNN layers 106 into addition. Further, the log and antilog circuits 110, 120 use piecewise linear approximation to perform the log[2 ]and antilog[2 ]calculations, which enables each calculation to be performed in a single clock cycle and without the use of lookup tables or multipliers. In this manner, the illustrated embodiment reduces DNN processing latency while also eliminating the need for multiplier circuitry and lookup tables, which significantly reduces the requisite silicon area of the hardware. Example implementations of the log 110 and antilog 120 circuits are further illustrated and described in connection with FIGS. 5A-C and 6A-C. As an example, with respect to the convolution layer(s) of a DNN (e.g., layers 106a,c,d of DNN 100), convolution can generally be represented by the following equation (where f(n) and g(n) are floating-point vectors): $( f * g ) ⁢ ( n ) = ∑ k = - inf + inf ⁢ ⁢ f ⁡ ( k ) ⁢ g ⁡ ( n - k )$ In this equation, each summation term is computed using multiplication. If log[2 ]is taken on both sides of the equation, however, the equation becomes: Further, if the left side of the equation is defined as y(n), meaning log[2](ƒ*g)(n)=y(n), the equation then becomes: $y ⁡ ( n ) = log 2 ⁢ ∑ k = - inf + inf ⁢ ⁢ f ⁡ ( k ) ⁢ g ⁡ ( n - k )$ The above equation no longer serves the purpose of convolution, however, as convolution cannot be performed by accumulating the results of log[2 ]calculations. Accordingly, antilog[2 ]must be taken on each summation term before it is accumulated (e.g., in order to convert each summation term from the log[2 ]domain back to the original domain): $y ⁡ ( n ) = ∑ k = - inf + inf ⁢ ⁢ 2 log 2 ⁡ ( f ⁡ ( k ) ) + log 2 ⁡ ( g ⁡ ( n - k ) )$ In this alternative equation for convolution, each summation term is now computed using addition. Thus, while the original convolution equation shown above requires multiplication to compute each summation term, this alternative convolution equation requires addition rather than multiplication. Accordingly, this alternative equation essentially leverages log[2 ](and antilog[2]) operations to convert the multiplications required by the original convolution equation into additions. For example, since f(n) and g(n) in the convolution equation are floating-point numbers (e.g., IEEE-754 single-precision floating-point numbers), log[2 ]and antilog[2 ]are taken on the mantissa bits, while the exponent and sign bits are handled separately, as discussed further in connection with FIGS. 5A-C and 6A-C. In this manner, the log and antilog circuitry 110, 120 can be used to perform convolution using this alternative equation instead of the original equation in order to avoid complex floating-point multiplication operations. As another example, a fully connected layer (e.g., layer 106e of DNN 100) is the last layer of a DNN and is responsible for performing the final reasoning and decision making. In general, a fully connected layer is similar to a convolution layer, but typically involves single-dimension vectors. Accordingly, a fully connected layer can leverage log[2 ]calculations in a similar manner as a convolution layer in order to convert multiplication operations into addition. However, because a fully connected layer is the last layer of a DNN, the final outputs should be in the normal domain rather than the log[2 ]domain. To illustrate, a fully connected layer can generally be represented using the following equation: $( f fcl * g fcl ) ⁢ ( n ) = ∑ k = - inf + inf ⁢ ⁢ f ⁡ ( k ) fcl ⁢ g ⁡ ( n - k ) fcl$ As with convolution, log[2 ]can be taken on both sides of the equation in order to convert the multiplication into addition in the summation terms. After taking log[2 ]on both sides of the equation, and further substituting the left side of the equation with y[fcl], the resulting equation becomes: $y fcl = ∑ k = - inf + inf ⁢ ⁢ log 2 ⁡ ( f ⁡ ( k ) ) fcl + log 2 ⁡ ( g ⁡ ( n - k ) ) fcl$ Antilog[2 ]can then be taken on the respective summation terms before they are accumulated, thus converting them from the log[2 ]domain back to the normal domain: $y fcl = ∑ k = - inf + inf ⁢ ⁢ 2 log 2 ⁡ ( f ⁡ ( k ) ) fcl + log 2 ⁢ ( g ⁡ ( n - k ) ) fcl$ In this manner, the final outputs of the fully connected layer are in the normal domain rather than the log[2 ]domain. Further, the multiplications required by the original equation have been converted into additions in this alternative equation. In the illustrated embodiment, for example, DNN 100 is implemented using multiple layers 106a-e, including a first convolution layer 106a, a max pooling layer 106b, a second convolution layer 106c, a third convolution layer 106d, and a fully connected layer 106e. Each layer 106a-e performs computations using an input (X) 101a-e, along with a weight vector (W) 102a-d in certain layers, and produces a corresponding output (Y) 103a-f. Moreover, an initial input vector (X) 101a is fed into the first layer 106a of DNN 100, while each remaining layer 106b-e is fed with the output (Y) 103a-d of the preceding layer as its input (X) 101b-e. Further, log and antilog circuits 110, 120 implemented using piecewise linear approximation are leveraged to perform the computations at each layer 106 of DNN 100, thus eliminating the need for multiplier circuits and lookup tables, while also reducing latency. For example, the log circuitry 110 performs log[2 ]calculations in order to convert floating-point numbers into fixed-point numbers, which enables complex operations such as floating-point multiplications to be converted into fixed-point additions, and the antilog circuitry 120 performs anitlog[2 ]calculations in order to subsequently convert fixed-point numbers back to floating-point numbers. Moreover, the log and antilog circuits 110, 120 use piecewise linear approximation to perform the respective log[2 ]and antilog[2 ]calculations, which enables each calculation to be performed in a single clock cycle. In the illustrated embodiment, for example, log circuitry 110 is used to convert the original input vector (X) 101a and each weight vector (W) 102a-d into the log[2 ]domain before they are fed into DNN 100, while antilog circuitry 120 is used to convert the final output (Y) 103f of the fully connected layer 106e back to the normal domain from the log[2 ]domain. Further, additional antilog[2 ] and log[2 ]operations (not shown) are also performed throughout the hidden layers of DNN 100 (e.g., the intermediate layers between the input and output layers) in order to convert between the log[2 ]domain and the normal domain, as necessary. For example, as explained above, a convolution layer requires each summation term to be converted back to the normal domain before it is accumulated, and thus an antilog[2 ]operation must be performed before accumulating each summation term. The final output of a hidden layer is subsequently converted back to the log[2 ]domain before being provided to the next layer, however, in order to continue avoiding multiplication operations in subsequent layers. For example, the result of each hidden layer node is typically passed to an activation function that determines whether the node should be “activated,” and the output of the activation function is then fed as input to the next layer. Accordingly, in order to avoid multiplication operations in the next layer, log[2 ]of the activation function of a hidden layer node is supplied to the next layer. For example, after a hidden layer node performs antilog[2 ]operations for the purpose of computing a convolution component, the result is converted back to the log 2 domain before being passed to the activation function. In this manner, the output (Y) computed by each hidden layer node is already in the log[2 ]domain when it is provided as input (X) to the next layer. Accordingly, the illustrated embodiment provides numerous advantages, including low latency, high precision, and reduced power consumption using a flexible, low-area hardware design that is highly scalable and portable. For example, in the illustrated embodiment, DNN 100 is implemented using log and antilog circuits 110, 120 that perform log[2 ]and antilog[2 ]calculations using piecewise linear approximation, which eliminates the need for multiplier circuits and lookup tables in the hardware design. In this manner, the illustrated embodiment significantly reduces the requisite silicon area (e.g., by eliminating multipliers and lookup tables), power consumption, and latency of the hardware, yet still provides high precision. In particular, the proposed microarchitecture performs each log[2 ]and antilog[2 ]calculation in a single clock cycle, which decreases the delay through the datapath and thus decreases the overall latency of the hardware. The proposed microarchitecture is also highly scalable. In particular, the flexible implementation of the proposed microarchitecture allows the hardware to be replicated as needed in order to increase the number of supported parallel operations. For example, the proposed microarchitecture may be implemented using any number of log and antilog circuit(s) 110, 120. In this manner, the proposed microarchitecture can be easily scaled to support the number of parallel operations required by a particular application or use case. The precision of the proposed microarchitecture can also be scaled based on application requirements. For example, if an application demands greater precision, the number of segments in the piecewise linear approximation model used by the log and antilog circuitry 110, 120 can be increased to accommodate the precision requirements. In this manner, the proposed microarchitecture is also highly portable, as it can be easily ported and/or scaled for any product or form factor, including mobile devices (e.g., handheld or wearable devices), drones, servers, and/or any other artificial intelligence solutions that require DNN operations without any dependencies or modifications. DNN Activation Function Circuit Using Piecewise Linear Approximation Due to the continuously increasing number of products designed with artificial intelligence (AI) capabilities, there is a strong demand for specialized hardware capable of accelerating fundamental AI operations (e.g., neural network activation functions), while also remaining generic enough to support a variety of different implementations and associated algorithms, particularly for resource-constrained form factors (e.g., small, low-power edge devices). In particular, the rising popularity of AI solutions that rely on machine learning (e.g., deep learning) has led to a demand for hardware acceleration designed for artificial neural networks (e.g., deep neural networks, convolutional neural networks, feedforward neural networks, recurrent neural networks, and so forth). For example, a deep neural network (DNN) is implemented using multiple layers of “artificial neurons,” which are typically processing nodes that use non-linear activation functions to determine whether they should each “activate” in response to a particular input. An activation function, for example, is a function that typically maps an input to an output using a non-linear transformation in order to determine whether a particular processing node or “artificial neuron” should activate. The use of activation functions is an important aspect of DNNs, but it can also be very computationally-intensive. There are many different types of activation functions that can be used in the implementation of a DNN, including Sigmoid, Hyperbolic Tangent (Tan h), Rectified Linear Unit (ReLU), Leaky ReLU, and Swish, among other examples. The choice of activation function(s) has a significant impact on the training dynamics and task performance of a DNN. Thus, in some cases, a DNN may be implemented using multiple activation functions within a single neural network in order to increase training dynamics and performance. DNN compute engines may also rely on specialized hardware for implementing these activation functions, which typically occupies a decent amount of area on silicon. For example, hardware designed for state-of-the-art DNNs typically operates on single-precision (32-bit) floating-point numbers and uses a lookup table (LUT) approach to implement activation functions. The use of a lookup table (LUT) approach for activation functions, however, increases silicon area, power consumption, and latency, which each continue to grow as the number of neurons in a DNN increases. Moreover, because each activation function requires its own lookup table, the use of multiple activation functions in a single DNN increases the requisite number of lookup tables, thus further impacting silicon area, power, and latency. As an example, using a lookup table approach, the curve of an activation function is typically bounded between an interval [−m, m] (where ‘m’ is a real number), and the bounded curve may then be divided into multiple segments. A curve fitting algorithm may then be used to pre-compute the values of the respective coefficients, and the pre-computed coefficients may then be stored in a lookup table implemented using a memory component (e.g., ROM). In this manner, in order to compute ax^2+bx+c for any point on the curve, the values of coefficients a, b, and c are first fetched from the lookup table, and the result is then calculated using multipliers and adders. This approach requires significant silicon area for the associated lookup tables and multipliers, however, and it may also consume multiple clock cycles (e.g., 5-8 clock cycles) in order to compute the above equation. To illustrate, a bounded curve over the interval [−3, 3] that is divided into 256 uniform segments with a 64-bit coefficient width (a: 20 bits, b: 20 bits, c: 24 bits) produces 21-bit mantissa precision for IEEE-754 single-precision floating-point numbers. In certain embodiments, this approach requires a 256×64 ROM and a compute block which respectively comprise 41,853 and 5,574 synthesis gates (e.g., NAND equivalent gates). Scaling down this hardware with less precision (e.g., 12-bit or 10-bit precision) will only save ROM area. In certain embodiments, for example, the estimated silicon area required for the Sigmoid activation function with 10-bit precision is 17,120 synthesis gates. Moreover, this area must be further replicated or instantiated based on the number of parallel operations that the hardware is required to support. Thus, existing hardware used to implement DNN activation functions (e.g., hardware implemented using lookup tables) has various drawbacks, including costly silicon area requirements, poor power consumption, and high processing latency, among other examples. These drawbacks are further magnified as the hardware is scaled, such as by increasing the number of artificial neurons, parallel operations, and/or activation functions. Further, there are no unified hardware solutions that can implement multiple activation functions without using separate hardware blocks and/or lookup tables for each activation function. Accordingly, this disclosure describes various embodiments of a unified hardware solution that supports multiple DNN activation functions without using lookup tables, as described further below. FIGS. 2A-B illustrate an example embodiment of a unified activation function (AF) circuit 200 for artificial neural networks (e.g., deep neural networks (DNNs)). In particular, AF circuit 200 provides support for multiple DNN activation functions on a single hardware component without depending on lookup tables. In the illustrated embodiment, for example, AF circuit 200 implements the respective activation functions using a novel algorithm that leverages exponent, log base 2 (log[2]), and antilog base 2 (antilog[2]) calculations, which are implemented using piecewise linear approximation, in order to simply the requisite computations for each activation function. For example, many activation functions are non-linear functions that involve complex exponent, division, and/or multiplication operations, which are typically implemented using costly multiplier circuitry (e.g., division may be implemented using multiplier circuitry that multiplies the numerator by the inverse of the denominator). AF circuit 200, however, leverages log[2 ]and antilog[2 ]calculations in order to eliminate complex division and/or multiplication operations required by certain activation functions and instead convert them into subtraction and/or addition. Further, AF circuit 200 implements exponent, log [2], and antilog[2 ]calculations using piecewise linear approximation in order to further simplify the requisite computations required by activation functions. As a result, log[2 ]and antilog[2 ] calculations can be performed in a single clock cycle, while exponent calculations can be performed in two clock cycles. In this manner, an activation function can be computed in only five clock cycles, and the underlying computations can easily be pipelined in order to increase throughput. Accordingly, AF circuit 200 leverages the log[2], antilog[2], and exponent calculations implemented using piecewise linear approximation to simplify the underlying computations for an activation function, which eliminates the need for lookup tables, reduces the multiplier circuitry requirements, and reduces the overall latency of an activation function. This approach translates directly into significant savings of silicon area (e.g., due to the elimination of lookup tables and reduced multiplier circuitry), as it requires a much smaller number of synthesis gates compared to a typical lookup table approach with similar precision. In the illustrated embodiment, for example, AF circuit 200 includes log, antilog, and exponent blocks 210, 220, 230 for performing the respective log[2], antilog[2], and exponent calculations using piecewise linear approximation. In some embodiments, for example, log, antilog, and exponent blocks 210, 220, 230 may be implemented using 16-segment piecewise linear approximation, with 12-bit precision in the mantissa of an IEEE-754 single-precision floating-point number (i.e., 1 sign bit+8 exponent bits+12 mantissa bits=21-bit precision). Example implementations of log, antilog, and exponent blocks 210, 220, 230 are further illustrated and described in connection with FIGS. 5A-C, 6A-C, and 7. AF circuit 200 is a configurable circuit that supports the following activation functions: Sigmoid, Hyperbolic Tangent (Tan h), Rectified Linear Unit (ReLU), Leaky ReLU, and Swish. In other embodiments, however, AF circuit 200 may be designed to support any type or number of activation functions. AF circuit 200 can be configured to use any of the supported activation functions using opcodes. In the illustrated embodiment, for example, AF circuit 200 uses 5-bit opcodes to select the type of activation function desired by a particular layer or node in the implementation of a DNN, and the circuit can be re-configured for other types of activation functions by simply changing the opcode value. In the illustrated embodiment, the five opcode bits 202a-e are designated as Tan h 202a, Sigmoid 202b, Swish 202c, ReLU 202d, and Leaky ReLU 202e, and these respective bit values are set based on the desired type of activation function. TABLE 1 identifies the hardware configuration of AF circuit 200 for the various supported activation functions based on the values of opcode bits 202a-e. TABLE 1 Activation function opcodes OPCODE BITS HARDWARE Leaky ACTIVATION Tanh Sigmoid Swish ReLU ReLU FUNCTION 1 0 0 0 0 Tanh 0 0 0 0 1 Leaky ReLU 0 0 0 1 0 ReLU 0 0 1 0 0 Swish 0 1 0 0 0 Sigmoid The operation of AF circuit 200 varies depending on which activation function is selected via opcode bits 202a-e. Accordingly, the functionality of AF circuit 200 is discussed further below in connection with FIGS. 3A-E, which illustrate the various activation functions that are supported by AF circuit 200. FIG. 3A illustrates a graph of the Sigmoid activation function, which is represented mathematically as $Y = 1 1 + e - x .$ The output of Sigmoid (y-axis) has a range between 0 and 1, and its shape resembles a smooth step function, which is an important characteristic that makes it useful as a DNN activation function. In particular, the function is smooth and continuously differentiable, and the gradient is very steep between the interval −4 to 4. This means that a small change in X will cause a large change in Y, which is an important property for back-propagation in DNNs. However, there are also some disadvantages to the Sigmoid function. For example, Sigmoid suffers from the vanishing gradient problem, as the function is almost flat in the regions beyond +4 and −4, which results in a very small gradient and makes it difficult for a DNN to perform course correction. In addition, because the output ranges from 0 to 1, the output is not symmetric around the origin, which causes the gradient update to go in the positive direction. In general, for a given input X represented in single-precision floating-point format, the Sigmoid of X, or Sigmoid(X), can be computed using the following equation: $f ⁡ ( X ) = 1 1 + e - X$ Since the above equation requires a costly division operation, however, log[2 ]and antilog[2 ]calculations can be leveraged to avoid the division. For example, based on the properties of logarithmic functions, log[2 ]can be taken on each side of the equation in order to convert the division into subtraction: $log 2 ⁢ f ⁡ ( X ) = log 2 ⁡ ( 1 1 + e - X ) = log 2 ⁡ ( 1 ) - log 2 ⁡ ( 1 + e - X )$ In order to solve for ƒ(X), however, antilog[2 ]must also be taken on each side of the equation: $f ⁡ ( X ) = 1 1 + e - X = 2 log 2 ⁡ ( 1 ) - log 2 ⁡ ( 1 + e - X )$ This alternative equation for the Sigmoid function no longer requires division, as the division has been replaced with subtraction and log[2]/antilog[2 ]calculations. Further, the exponent, log[2], and antilog[2 ]calculations can be implemented using piecewise linear approximation in order to further simplify the computations required by this alternative equation. Accordingly, turning back to FIGS. 2A-B, AF circuit 200 implements the Sigmoid function using the simplified approach described above. For example, when AF circuit 200 is configured for the Sigmoid function, the Sigmoid opcode bit (reference numeral 202b) is set to 1 and the remaining opcode bits for the other activation functions (reference numerals 202a,c,d,e) are set to 0. In this manner, when an input X (reference numeral 201) is fed into AF circuit 200, it passes through mux 206a and demux 207 to bias block 208, which adds a bias to input X in order to convert it into a negative number (−X). The result −X is then passed to exponent block 230 in order to compute e^−X, and that result is then passed to adder 212 in order to compute 1+e^−X. The result of 1+e^−X passes through mux 206d to log block 210b, which then computes log[2](+e^−X). Separately, subtractor 211 is supplied with a constant value of 1 as its first operand, while the output of mux 206e is supplied as its second operand. In this case, since the Sigmoid opcode bit 202b that is fed into mux 206e is set to 1, mux 206e selects a constant value of 0 as its output. Accordingly, constant values of 1 and 0 are supplied as the respective operands to subtractor 211, and thus subtractor 211 computes 1−0=1. The resulting value 1 is then passed through mux 206f to log block 210a, which then computes log[2](1) (which is equal to 0). Thus, log blocks 210a and 210b respectively output the results of log[2](1) and log[2](1+e^−X), and those results are then passed as operands to adder/subtractor 213. In this case, adder/subtractor 213 performs subtraction in order to compute log[2](1)−log[2](+e^−X), and that result is then passed to antilog block 220, which performs an antilog[2 ]calculation: 2^log^2^(1)−log^2^(1+e^−X^). In this manner, the result computed by antilog block 220 corresponds to the final result of the Sigmoid function. For example, based on the properties of logarithmic functions discussed above: $2 log 2 ⁡ ( 1 ) - log 2 ⁡ ( 1 + e - X ) = 2 log 2 ⁡ ( 1 1 + e - X ) = 1 1 + e - X = f ⁡ ( X )$ Accordingly, AF circuit 200 outputs the result of antilog block 220 as the final output Y (reference numeral 203) of the Sigmoid function. Further, as noted above, the exponent, log[2], and antilog[2 ]calculations performed by the respective exponent, log, and antilog blocks 210-230 of AF circuit 200 are implemented using piecewise linear approximation in order to further simplify the computations required by this alternative equation. FIG. 3B illustrates a graph of the Hyperbolic Tangent (Tan h) activation function, which is represented mathematically as $Y = 1 - e - 2 ⁢ x 1 + e - 2 ⁢ x .$ This function has an output that ranges from −1 to 1 and is symmetric around the origin, and it also has a steeper gradient than the Sigmoid function, although it still suffers from the vanishing gradient problem. In general, for a given input X represented in single-precision floating-point format, the hyperbolic tangent of X, or Tan h(X), can be computed using the following equation: $f ⁡ ( X ) = 1 - e - 2 ⁢ X 1 + e - 2 ⁢ X$ Since the above equation requires a costly division operation, the division can be avoided by leveraging log[2 ]and antilog[2 ]calculations in a similar manner as described above for the Sigmoid function from FIG. 3A. For example, log[2 ]can be taken on each side of the equation in order to convert the division into subtraction: $log 2 ⁢ f ⁡ ( X ) = log 2 ⁡ ( 1 - e - 2 ⁢ X 1 + e - 2 ⁢ X ) = log 2 ⁡ ( 1 - e - 2 ⁢ X ) - log 2 ⁡ ( 1 + e - 2 ⁢ X )$ Further, in order to solve for ƒ(X), antilog[2 ]can then be taken on each side of the equation: $f ⁡ ( X ) = 1 - e - 2 ⁢ X 1 + e - 2 ⁢ X = 2 log 2 ⁡ ( 1 - e - 2 ⁢ X ) - log 2 ⁡ ( 1 + e - 2 ⁢ X )$ This alternative equation for the Tan h function no longer requires division, as the division has been replaced with subtraction and log[2]/antilog[2 ]calculations. Further, the exponent, log[2], and antilog[2 ]calculations can be implemented using piecewise linear approximation in order to further simplify the computations required by this alternative equation. Accordingly, turning back to FIGS. 2A-B, AF circuit 200 implements the Tan h function using the simplified approach described above. For example, when AF circuit 200 is configured for the Tan h function, the Tan h opcode bit (reference numeral 202a) is set to 1 and the remaining opcode bits for the other activation functions (reference numerals 202b,c,d,e) are set to 0. In this manner, when an input X (reference numeral 201) is fed into AF circuit 200, it initially passes through shifter 204, which left shifts X by a single bit in order to double its value, thus producing an output of 2X. Moreover, since AF circuit 200 is configured for the Tan h function, the output of 2X from shifter 204 is then passed through mux 206a and demux 207 to bias block 208. For example, since the selection signal of mux 206a is based on the Tan h opcode bit 202a, which is set to 1, mux 206a selects 2X as the output that it passes to demux 207. Further, since the selection signal of demux 207 is based on the output of an OR gate 205 that is fed with the ReLU/Leaky ReLU opcode bits 202d,e as input, which are both set to 0, demux 207 routes the value of 2X to bias block 208. Bias block 208 then adds a bias to 2X in order to convert it into a negative number (−2X), and the resulting value of −2X is then passed to exponent block 230, which outputs the value of e^−2X. The output e^−2X from exponent block 230 is then passed to both subtractor 211 (via mux 206e) and adder 212, and subtractor 211 then computes the value of 1−e^−2X, while adder 212 computes the value of 1+e^−2X. These outputs from subtractor 211 and adder 212 are respectively passed to log blocks 210a and 210b, which respectively compute the values of log[2](1−e^−2X) and log[2](1+e^−2X). The respective outputs from log blocks 210a and 210b are then passed as operands to adder/subtractor 213, which performs subtraction in order to compute log[2](−e^−2X)−log[2](1+e^−2X), and that result is then passed to antilog block 220, which performs an antilog[2 ]calculation: 2^log^2^(1−e^−2X^)−log^2^(1+e^−2X^). In this manner, the result computed by antilog block 220 corresponds to the final result of the Tan h function. For example, based on the properties of logarithmic functions discussed above: $2 log 2 ⁡ ( 1 - e - 2 ⁢ X ) - log 2 ⁡ ( 1 + e - 2 ⁢ X ) = 2 log 2 ⁡ ( 1 - e - 2 ⁢ X 1 + e - 2 ⁢ X ) = 1 - e - 2 ⁢ X 1 + e - 2 ⁢ X = f ⁡ ( X )$ Accordingly, AF circuit 200 outputs the result of antilog block 220 as the final output Y (reference numeral 203) of the Tan h function. Further, as noted above, the exponent, log[2], and antilog[2 ]calculations performed by the respective exponent, log, and antilog blocks 210-230 of AF circuit 200 are implemented using piecewise linear approximation in order to further simplify the computations required by this alternative equation. FIG. 3C illustrates a graph of the Rectified Linear Unit (ReLU) activation function, which is represented mathematically as Y=max(0,X). ReLU is a widely used activation function that provides various advantages. In particular, ReLU is a non-linear function that avoids the vanishing gradient problem, it is less complex and thus computationally less expensive than other activation functions, and it has favorable properties that render DNNs sparse and more efficient (e.g., when its input is negative, its output becomes zero, and thus the corresponding neuron does not get activated). On the other hand, weights cannot be updated during back-propagation when the output of ReLU becomes zero, and ReLU can only be used in the hidden layers of a neural network. In general, for a given input X represented in single-precision floating-point format, the ReLU of X, or ReLU(X), can be computed using the following equation: $f ⁡ ( x ) = { 0 , x < 0 x , x ≥ 0$ The above equation is simple and does not require any costly computations, and thus its implementation is relatively straightforward, as there is no need to leverage exponent, log[2], or antilog[2 ] For example, turning back to FIGS. 2A-B, when AF circuit 200 is configured for the ReLU function, the ReLU opcode bit (reference numeral 202d) is set to 1 and the remaining opcode bits for the other activation functions (reference numerals 202a,b,c,e) are set to 0. In this manner, when an input X (reference numeral 201) is fed into AF circuit 200, X initially passes through mux 206a to demux 207 , and demux 207 then routes X to mux 206c. Separately, a constant value of 1 is also supplied to mux 206c (via mux 206b). Further, since the selection signal of mux 206c is based on the sign bit of x, mux 206c selects either X or 0 as its output depending on whether X is positive or negative. Since the output of mux 206c is the final result of the ReLU function, the remaining logic of AF circuit 200 is bypassed and the output of mux 206c is ultimately used as the final output Y (reference numeral 203) of AF circuit 200 for the ReLU function. FIG. 3D illustrates a graph of the Leaky Rectified Linear Unit (Leaky ReLU) activation function, which is represented mathematically as $Y = { ⁢ X , X ≥ 0 aX , X < 0 ,$ where a=0.01. Leaky ReLU is an improved variation of ReLU. For example, with respect to ReLU, when the input is negative, the output and gradient become zero, which creates problems during weight updates in back-propagation. Leaky ReLU addresses this issue when the input is negative using multiplication of the input by a small linear component (0.01), which prevents neurons from becoming dead and also prevents the gradient from becoming zero. In general, for a given input X represented in single-precision floating-point format, the Leaky ReLU of X, or LeakyReLU(X), can be computed using the following equation: $f ⁡ ( x ) = { 0.01 , x < 0 ⁢ x , x ≥ 0$ As with ReLU, the equation for Leaky ReLU is simple and does not require any costly computations, and thus its implementation is relatively straightforward, as there is no need to leverage exponent, log[2], or antilog[2 ]calculations. For example, turning back to FIGS. 2A-B, when AF circuit 200 is configured for the Leaky ReLU function, the Leaky ReLU opcode bit (reference numeral 202e) is set to 1 and the remaining opcode bits for the other activation functions (reference numerals 202a,b,c,d) are set to 0. In this manner, when an input X (reference numeral 201) is fed into AF circuit 200, X initially passes through mux 206 a to demux 207, and demux 207 then routes X to mux 206c. Separately, a constant value of 0.01 is also supplied to mux 206c (via mux 206b). Further, since the selection signal of mux 206c is based on the sign bit of X, mux 206c selects either X or 0.01 as its output depending on whether X is positive or negative. Since the output of mux 206c is the final result of the Leaky ReLU function, the remaining logic of AF circuit 200 is bypassed and the output of mux 206c is ultimately used as the final output Y (reference numeral 203) of AF circuit 200 for the Leaky ReLU function. FIG. 3E illustrates a graph of the Swish activation function, which is represented mathematically as Y=X*Sigmoid(X). In many cases, Swish has been shown to provide better accuracy than other activation functions (e.g., ReLU). In general, for a given input X represented in single-precision floating-point format, the Swish of X, or Swish(X), can be computed using the following equation: $f ⁡ ( X ) = X * Sigmoid ⁢ ⁢ ( X ) = X * 1 1 + e - X = X 1 + e - X$ Since the above equation requires a costly division operation, the division can be avoided by leveraging log[2 ]and antilog[2 ]calculations in a similar manner as described above for the Sigmoid function from FIG. 3A. For example, log[2 ]can be taken on each side of the equation in order to convert the division into subtraction: $log 2 ⁢ f ⁡ ( X ) = log 2 ⁡ ( X 1 + e - X ) = log 2 ⁡ ( X ) - log 2 ⁡ ( 1 + e - X )$ Further, in order to solve for ƒ(X), antilog[2 ]can then be taken on each side of the equation: $f ⁡ ( X ) = X 1 + e - X = 2 log 2 ⁡ ( X ) - log 2 ⁡ ( 1 + e - X )$ This alternative equation for the Swish function no longer requires division, as the division has been replaced with subtraction and log[2]/antilog[2 ]calculations. Further, the exponent, log[2], and antilog[2 ]calculations can be implemented using piecewise linear approximation in order to further simplify the computations required by this alternative equation. Accordingly, turning back to FIGS. 2A-B, AF circuit 200 implements the Swish function using the simplified approach described above. For example, when AF circuit 200 is configured for the Swish function, the Swish opcode bit (reference numeral 202c) is set to 1 and the remaining opcode bits for the other activation functions (reference numerals 202a,b,d,e) are set to 0. In this manner, when an input X (reference numeral 201) is fed into AF circuit 200, X passes through mux 206a to demux 207, and demux 207 then routes X to bias block 208. For example, since the selection signal of mux 206a is based on the Tan h opcode bit 202a, which is set to 0, mux 206a selects X as the output that it passes to demux 207. Further, since the selection signal of demux 207 is based on the output of an OR gate 205 that is fed with the ReLU/Leaky ReLU opcode bits 202d,e as input, which are both set to 0, demux 207 routes the value of X to bias block 208. Bias block 208 then adds a bias to X in order to convert it into a negative number (−X), and the resulting value −X is then passed to exponent block 230, which outputs the value of e^−X. The output e ^−X of exponent block 230 is then passed to adder 212 in order to compute 1+e^−X, and that result then passes through mux 206d to log block 210b, which then computes log[2](1+e^−X). Separately, since the selection signal of mux 206f is based on the Swish opcode bit 202c, which is set to 1, mux 206f selects X as the output that is passed to log block 210a, which then computes log[2](X). The respective outputs from log blocks 210a and 210b are then passed as operands to adder/subtractor 213, which performs subtraction in order to compute log[2](X)−log[2](1+e^−X), and that result is then passed to antilog block 220, which performs an antilog[2 ]calculation: 2^log^2^(X)−log^2^(1+e^−X^). In this manner, the result computed by antilog block 220 corresponds to the final result of the Swish function. For example, based on the properties of logarithmic functions discussed above: $2 log 2 ⁡ ( X ) - log 2 ⁡ ( 1 + e - X ) = 2 log 2 ⁡ ( X 1 + e - X ) = X 1 + e - X = f ⁡ ( X )$ Accordingly, AF circuit 200 outputs the result of antilog block 220 as the final output Y (reference numeral 203) of the Swish function. Further, as noted above, the exponent, log[2], and antilog[2 ]calculations performed by the respective exponent, log, and antilog blocks 210-230 of AF circuit 200 are implemented using piecewise linear approximation in order to further simplify the computations required by this alternative equation. Accordingly, the illustrated embodiment of AF circuit 200 of FIGS. 2A-B provides numerous advantages, including low latency, high precision, and reduced power consumption using a flexible, low-area hardware design that supports multiple activation functions and is highly scalable and portable. In particular, AF circuit 200 is a unified solution that implements multiple DNN activation functions on a single hardware component (e.g., rather than using separate hardware components for each activation function) without depending on lookup tables. For example, in the illustrated embodiment, AF circuit 200 is implemented using log, antilog, and exponent circuits 210, 220, 230 that perform log[2], antilog[2], and exponent calculations using piecewise linear approximation, which eliminates the need for lookup tables in the hardware design and reduces the required multiplier circuitry. In this manner, the illustrated embodiment significantly reduces the requisite silicon area, power consumption, and latency of the hardware, yet still provides high precision. For example, the elimination of lookup tables and reduced multiplier circuitry translates directly into significant savings of silicon area, as a much smaller number of synthesis gates is required in comparison to a typical lookup table approach with similar precision. Further, log[2 ]and antilog[2 ]calculations can be performed in a single clock cycle, while exponent calculations can be performed in two clock cycles, which enables an activation function to be computed in only five clock cycles. Moreover, the underlying computations can easily be pipelined in order to increase throughput. AF circuit 200 also eliminates the dependency on software for loading/programming lookup tables associated with different activation functions, as AF circuit 200 can be configured for different activation functions by simply programming the appropriate opcode. Programming an opcode on AF circuit 200 is much simpler and requires fewer clock cycles compared to programming a lookup table for an activation function. AF circuit 200 is also highly scalable. In particular, the flexible implementation of AF circuit 200 allows the underlying hardware to be replicated as needed in order to increase the number of supported parallel operations. In this manner, AF circuit 200 can be easily scaled to support the number of parallel operations required by a particular application or use case. The precision of AF circuit 200 can also be scaled based on application requirements. For example, if an application demands greater precision, the number of segments in the piecewise linear approximation model used by the log, antilog, and exponent circuitry 210, 220, 230 can be increased to accommodate the precision requirements. In this manner, AF circuit 200 is also highly portable, as it can be easily ported and/or scaled for any product or form factor, including mobile devices (e.g., handheld or wearable devices), drones, servers, and/or any other artificial intelligence solutions that require DNN operations without any dependencies or modifications. FIG. 4 illustrates an alternative embodiment of a unified activation function (AF) circuit 400 for artificial neural networks (e.g., deep neural networks (DNNs)). In the illustrated embodiment, AF circuit 400 includes input X 401, activation functions 402a-d (ReLu, Leaky ReLu, Sigmoid, and Swish, respectively), output Y 403, OR gate 405, multiplexers (muxes) 406a-e, demultiplexers (demuxes) 407, bias block 408, log blocks 410a-b, subtractor 411, adder 412, adder/subtractor 413, and antilog blocks 420a-b. In particular, AF circuit 400 is similar to AF circuit 200 from FIGS. 2A-B, except certain activation functions are implemented using modified equations that use powers of 2 instead of powers of the exponent constant e. To illustrate, the original and modified equations for the Sigmoid, Swish, and Hyperbolic Tangent activation functions are provided in TABLE 2. TABLE 2 Modified activation function equations using powers of 2 ORIGINAL EQUATION MODIFIED EQUATION SIGMOID $f ⁡ ( X ) = 1 1 + e - X$ $f ⁡ ( X ) = 1 1 + 2 - X$ SWISH $f ⁡ ( X ) = ⁢ X * Sigmoid ( X ) = ⁢ X * 1 1 + e - X = ⁢ X 1 + e - X$ $f ⁡ ( X ) = ⁢ X * Sigmoid ( X ) = ⁢ X * 1 1 + 2 - X = ⁢ X 1 + 2 - X$ HYPERBOLIC TANGENT $f ⁡ ( X ) = 1 - e - 2 ⁢ ⁢ X 1 + e - 2 ⁢ ⁢ X$ $f ⁡ ( X ) = 1 - 2 - 2 ⁢ ⁢ X 1 + 2 - 2 ⁢ ⁢ X$ As shown in TABLE 2, the exponents of base e in the original equations are replaced with exponents of base 2 in the modified equations. In this manner, the important non-linear characteristics of the activation functions (e.g., the shape of the curve) are still exhibited by the modified equations, but the underlying activation function hardware can be implemented much more efficiently. In particular, by replacing the exponents of base e with exponents of base 2, an exponent circuit is no longer needed by the modified equations, as all of the exponent operations can now be performed by an antilog circuit. For example, since antilog base 2 of a variable x is equivalent to 2 raised to the power of x (2^x), an antilog circuit that performs antilog base 2 operations can be used to compute the powers of base 2 that appear in the modified activation function equations. Moreover, performing exponent operations using antilog circuitry rather than exponent circuitry reduces both the latency and silicon area of AF circuit 400. By way of comparison, for example, AF circuit 200 of FIGS. 2A-B performs exponent operations using an exponent circuit implemented using piecewise linear approximation (e.g., the exponent circuit of FIG. 7), which can perform an exponent operation in two clock cycles and requires at least one multiplier. AF circuit 400, however, performs exponent operations using an antilog circuit implemented using piecewise linear approximation (e.g., the antilog circuit of FIGS. 6A-C), which can perform an antilog base 2 operation in a single clock cycle and requires no multipliers. Thus, by replacing the exponent circuitry with antilog circuitry, the overall latency of AF circuit 400 is reduced by one clock cycle, thus enabling an activation function to be computed in only four clock cycles, compared to five clock cycles for the activation function circuit of FIGS. 2A-B. Further, AF circuit 400 no longer requires any multiplier circuitry, which results in significant silicon area savings, as the eliminated exponent circuit was the only component that required a multiplier. For example, while AF circuit 200 of FIGS. 2A-B can be implemented using 8,321 gates, AF circuit 400 can be implemented using only 7,221 gates. Moreover, similar to AF circuit 200 of FIGS. 2A-B, AF circuit 400 leverages log base 2 (log[2]) and antilog base 2 (antilog[2]) calculations using piecewise linear approximation in order to simply the requisite computations for certain activation functions. For example, log[2 ]and antilog[2 ]calculations can be used to eliminate the complex division and/or multiplication operations required by certain activation functions and instead convert them into subtraction and/or addition. The log/antilog equations for the modified Sigmoid, Swish, and Hyperbolic Tangent activation functions from TABLE 2 (which use powers of 2 instead of e) are provided in TABLE 3. These log/antilog equations for the modified activation functions are derived in a similar manner as those of the original activation functions, as described in connection with FIGS. 3A-E. TABLE 3 Log/antilog versions of modified activation functions (using powers of 2) MODIFIED EQUATION LOG/ANTILOG EQUATION SIGMOID $f ⁡ ( X ) = 1 1 + 2 - X$ 2^log^2^(1)−log^2^(1+2^−X^) SWISH $f ⁡ ( X ) = ⁢ X * Sigmoid ( X ) = ⁢ X * 1 1 + 2 - X = ⁢ X 1 + 2 - X$ 2^log^2^(X)−log^2^(1+2^−X^) HYPERBOLIC TANGENT $f ⁡ ( X ) = 1 - 2 - 2 ⁢ ⁢ X 1 + 2 - 2 ⁢ ⁢ X$ 2^log^2^(1−2^−2X^)−log^2^(1+2^−2X^) In the illustrated embodiment, AF circuit 400 is designed to implement the Sigmoid, Swish, Tan h, ReLU, and Leaky ReLU activation functions. The Sigmoid, Swish, and Tan h activation functions are implemented using the log/antilog equations from TABLE 3, while the ReLU and Leaky ReLU activation functions are implemented using their original equations from FIGS. 3C-D, as they do not require any complex division, multiplication, or exponent operations. The operation of AF circuit 400 is otherwise similar to AF circuit 200 of FIGS. 2A-B. Log, Antilog, and Exponent Circuits Implemented Using Piecewise Linear Approximation FIGS. 5A-C illustrate an example embodiment of a log circuit 500 implemented using piecewise linear approximation. In particular, FIG. 5A illustrates the overall implementation of log circuit 500, while FIGS. 5B and 5C illustrate the implementation of certain components of log circuit 500. Log circuit 500 performs log calculations using 16-segment piecewise linear approximation. In this manner, no lookup tables or multiplier circuits are required by log circuit 500, and log calculations can be performed in a single clock cycle. The equations used by log circuit 500 to perform piecewise linear approximation for log calculations are shown below in TABLE 4. TABLE 4 Piecewise linear approximation equations for log[2](1 + m) RANGE # RANGE EQUATION 0 0 ≤ m < 0.0625 $m + m 4 + m 8 + m 64$ 1 0.0625 ≤ m < 0.125 $m + m 4 + m 16 + 1 256 + 1 1024$ 2 0.125 ≤ m < 0.1875 $m + m 4 + 1 64 - 1 512$ 3 0.1875 ≤ m < 0.25 $m + m 8 + m 16 + 1 64 + 1 128$ 4 0.25 ≤ m < 0.3125 $m + m 8 + 1 32 + 1 128$ 5 0.3125 ≤ m < 0.375 $m + m 16 + m 128 + 1 16$ 6 0.375 ≤ m < 0.4375 $m + m 64 + m 128 + 1 16 + 1 128$ 7 0.4375 ≤ m < 0.5 $m - m 64 + 1 16 + 1 32$ 8 0.5 ≤ m < 0.5625 $m - m 16 + m 128 + 1 8$ 9 0.5625 ≤ m < 0.625 $m - m 8 + m 32 + 1 8 + 1 128$ 10 0.625 ≤ m < 0.6875 $m - m 8 + 1 8 + 1 32$ 11 0.6875 ≤ m < 0.75 $m - m 8 - m 32 + 1 8 + 1 16$ 12 0.75 ≤ m < 0.8125 $m - m 4 + m 16 + 1 8 + 1 16$ 13 0.8125 ≤ m < 0.875 $m - m 4 + m 32 + 1 4$ 14 0.875 ≤ m < 0.9375 $m - m 4 + 1 4$ 15 0.9375 ≤ m < 1 $m - m 4 - m 64 + 1 4 + 1 64$ The equations in TABLE 4 are designed to compute or approximate the result of log[2](1+m), where m represents the mantissa portion of a single-precision floating-point input 501. For example, since the mantissa m is always bounded between 0 and 1, and since the result of log[2](0) is undefined, log[2](1+m) is computed instead of log[2](m) in order to avoid an undefined result when m is 0. Moreover, in order to compute log[2](1+m) using 16-segment piecewise linear approximation, the potential values of m over the interval [0,1] are divided into 16 different ranges or segments, which are designated as ranges 0-15, and separate equations are defined for each range in order to approximate the result of log[2](1+m). Further, the respective equations are defined exclusively using addition and/or subtraction on any of the following types of operands: m, fractions of m divided by powers of 2, and/or constant values. In this manner, the only division required by the equations is exclusively by powers of 2, and thus all division operations can be implemented using shifters. Further, the loss in precision that results from the limited “shift-based” division is compensated for through use of the constant values that are added and/or subtracted in certain equations. Accordingly, the respective equations can be implemented exclusively using addition, subtraction, and/or shift operations, thus eliminating the need for complex multiplication/division circuitry. FIG. 5A illustrates the overall logic of log circuit 500, which is designed to implement the equations of TABLE 4. In the illustrated embodiment, log circuit 500 is supplied with a 32-bit single-precision floating-point number as input 501 (e.g., supplied via an input register), and log circuit 500 computes a corresponding 32-bit fixed-point number as output 503 (e.g., returned via an output register), which represents the log[2 ]value of input 501. Input 501 includes a sign bit (input[31]), an 8-bit exponent e (input[30:23]), and a 23-bit mantissa m (input[22:0]). Given that the sign of input 501 always matches that of output 503, the sign bit of input 501 (input[31]) is fed directly into the corresponding bit of output 503 (output[31]). Moreover, the exponent e of input 501 (input[30:23]) is fed into an 8-bit subtractor 502, which subtracts a bias of 0x7F from exponent e in order to generate a corresponding 8-bit unbiased exponent. From a mathematical perspective, for example, subtracting the bias from the exponent of a floating-point number always results in a value equivalent to log[2 ]of the exponent. Accordingly, the resulting unbiased exponent serves as the integer portion of the fixed-point number represented in output 503 (output[30:23]). Moreover, the mantissa m of input 501 is used to select the corresponding range and equation from TABLE 4 that will be used to compute the fraction field of output 503 (output[22:0]). For example, the four most significant bits of the mantissa m (input[22:19]) are supplied as input to range selection logic 504, which outputs sixteen 1-bit signals (range[0]-range[15]) that correspond to the respective ranges of m from TABLE 4, such that the signal corresponding to the applicable range is set to 1 while the remaining signals are set to 0. Based on the output of range selection logic 504, multiplexers (muxes) 508a-c are then used to select the operands that correspond to the selected equation from TABLE 4, and those operands are then supplied as input to adder/subtractor 520. In particular, muxes 508a-c select between various fractions of the mantissa m (e.g., generated using shift operations) as well as certain constant values. For example, the mantissa m is fed into multiple shifters 506, which each perform a right shift of m by a certain number of bits in order compute the various fractions of m over powers of 2 that appear throughout the equations in TABLE 4 $( e . g . , m 4 , m 8 , m 16 , m 32 , m 64 , m 128 ) .$ The outputs of these shifters 506 are then fed as inputs to the respective muxes 508a-c in the manner shown in FIG. 5A. A constant value of 0 is additionally supplied as one of the inputs to mux 508b , as that value is output by mux 508b for certain equations that do not otherwise require an operand from mux 508b. Finally, the constant value required by certain equations from TABLE 4 (e.g., ¼, ⅛, 1/16, 1/32, 1/64, 1/128, 1/256, 1/512, and/or 1/1024) is generated by constant generation logic 510 and is further supplied as input to mux 508c. The implementation of constant generation logic 510 is further illustrated and described below in connection with FIG. 5B. Each mux 508a-c then selects an appropriate output that corresponds to one of the operands for the applicable equation from TABLE 4 (e.g., based on the range selection logic 504), and those outputs are then supplied as inputs to adder/subtractor 520. Further, the mantissa m is also supplied directly as another input to adder/subtractor 520, as the mantissa is an operand for all of the equations from TABLE 4. Adder/subtractor 520 then performs the appropriate addition and/or subtraction operations on the various operands supplied as input, and the computed result then serves as the 23-bit fraction field of output 503 (output [22:0]). The implementation of adder/subtractor 520 is further illustrated and described below in connection with FIG. 5C. FIG. 5B illustrates an example implementation of the constant generation logic 510 of log circuit 500 from FIG. 5A, which is used to generate the constant value(s) required by certain equations from TABLE 4. In the illustrated embodiment, constant generation logic 510 includes shifters 512 to generate the collection of constant values that appear throughout the equations from TABLE 4, multiplexers (muxes) 514a,b to select the corresponding constant value(s) for a selected equation from TABLE 4, and an adder 516 to add the selected constant values. In the illustrated embodiment, a 23-bit constant value of either +1 or −1 is supplied as input to the respective shifters 512 (e.g., depending on whether each shifter 512 generates a positive or negative fraction constant). For example, a 23-bit constant value of +1 is supplied as input to all but one of the shifters which generate positive results, while a 23-bit signed representation of −1 is supplied as input to the only remaining shifter which generates a negative result (e.g., the shifter that performs a 9-bit right shift to generate − 1/512). Each shifter 512 then performs a right shift by a certain number of bits in order to generate the respective fraction constants that appear throughout the equations from TABLE 4 (e.g., ¼, ⅛, 1/16, 1/32, 1/64, 1/128, 1/256, 1/512, 1/1024). Moreover, since the respective equations from TABLE 4 require either zero, one, or two of these fraction constants, the appropriate combination of fraction constants for a selected equation from TABLE 4 is selected using two muxes 514a,b. In particular, the outputs of shifters 512 are supplied as inputs to the two muxes 514a,b in the manner shown in FIG. 5B, and a 23-bit constant value of 0 is also supplied as an input to each mux 514a,b. Each mux 514a,b then outputs either a particular fraction constant or a value of 0. In this manner, muxes 514,a,b can collectively output zero, one, or two of the respective fraction constants generated by shifters 512, depending on the particular equation selected from TABLE 4 (e.g., as determined using the range selection logic 504 of log circuit 500 from FIG. 5A). The outputs of muxes 514a,b are then supplied as inputs to adder 516, which computes their sum. The result from adder 516 then serves as the final constant value 511 that is output by the constant generation logic 510 of FIG. 5B. FIG. 5C illustrates an example implementation of the adder/subtractor logic 520 of log circuit 500 from FIG. 5A, which is used to perform addition and/or subtraction on the operands of a selected equation from TABLE 4. In the illustrated embodiment, adder/subtractor logic 520 includes two adder/subtractors 522a,b and two OR gates 524a,b, which are described further below. The first adder/subtractor 522a is supplied with the mantissa m and the outputs of muxes 508b and 508c of log circuit 500 as its operands. Moreover, the particular combination of addition and/or subtraction performed on these operands by adder/subtractor 522a is dictated by OR gate 524a. For example, OR gate 524a is supplied with the signals corresponding to ranges 11 and 15 from TABLE 4 as input (e.g., as generated by range selection logic 504 of log circuit 500), and the output of OR gate 524a is then fed into adder/subtractor 522a. In this manner, when the output of OR gate 524a is 0, adder/subtractor 522a adds all operands together, but when the output of OR gate 524a is 1 (e.g., the mantissa m falls within either range 11 or range 15), adder/subtractor 522a subtracts the operand corresponding to the output of mux 508b and adds the remaining operands. In other words, for any range of m from TABLE 4 aside from ranges 11 and 15, adder/subtractor 522a outputs the result of m+output of mux 508b+output of mux 508c, but for ranges 11 and 15, adder/subtractor 522a outputs the result of m−output of mux 508b+output of mux 508c. The second adder/subtractor 522b is then supplied with the output of the first adder/subtractor 522a and the output of mux 508a of log circuit 500 as its respective operands. The output of OR gate 524b dictates whether adder/subtractor 522b performs addition or subtraction on these operands. For example, OR gate 524b is supplied with the signals corresponding to ranges 7-15 from TABLE 4 as input (e.g., as generated by range selection logic 504 of log circuit 500), and the output of OR gate 524b is then fed into adder/subtractor 522b. In this manner, when the output of OR gate 524b is 0, adder/subtractor 522b adds both operands together, but when the output of OR gate 524b is 1 (e.g., the mantissa m falls within any of ranges 7-15), the output of mux 508a is subtracted from the output of the first adder/subtractor 522a. In other words, when m falls within ranges 0-6 from TABLE 4, adder/subtractor 522b computes the output of the first adder/subtractor 522a+the output of mux 508a, but when m falls within ranges 7-15, adder/subtractor 522b computes the output of the first adder/subtractor 522a—the output of mux 508a. The result from the second adder/subtractor 522b serves as the final output 521 of the adder/subtractor 520 logic of FIG. 5C. To illustrate the operation of log circuit 500, the processing flow will be described for an example input 501. Since the sign and exponent fields of input 501 are always processed in the same manner regardless of their underlying values, this example focuses on the processing associated with the mantissa m of input 501. In this example, the mantissa m of input 501 (input[22:0]) is assumed to fall within the range 0.125≤m<0.1875, which corresponds to range 2 of TABLE 4. Accordingly, log circuit 500 will execute the corresponding equation for range 2 from TABLE 4, which is $m + m 4 + 1 64 - 1 512 .$ Log circuit 500 begins by processing the original input 501 in order to generate and/or obtain the respective operands for the above-referenced equation, and log circuit 500 then supplies those operands to adder/subtractor 520 to compute a result of the equation. The first operand m, which corresponds to the mantissa field of input 501 (input[22:0]), is fed directly from the relevant bitfields of input 501 to adder/subtractor logic 520. The remaining operands for the above-referenced equation are supplied to adder/subtractor logic 520 by muxes 508a-c. In particular, the collection of operands that appear throughout the various equations from TABLE 4 are supplied as inputs to muxes 508a-c, and muxes 508a-c then output the particular operands required by the appropriate equation from TABLE 4. For example, based on the four most significant bits of the mantissa m, range selection logic 504 outputs a signal that identifies the particular range from TABLE 4 that m falls within, and that range signal is then used by muxes 508a-c to select the appropriate operands to output. In this example, since m falls within range 2 from TABLE 4, range selection logic 504 outputs a range signal that corresponds to range 2, otherwise denoted as the range[2] signal. Based on the range[2] signal, mux 508a selects $m 4 ⁢ ⁢ ( m >> 2 )$ as its output, mux 508b selects 0 as its output, and mux 508c selects a constant 511 generated by constant generation logic 510 as its output. Turning to constant generation logic 510 of FIG. 5B, for example, the range[2] signal causes muxes 514a and 514b to select 1/64 (1>>6) and − 1/512 (−1>>9) as their respective outputs, those values are then added together by adder 516, and the resulting constant 511 is output by constant generation logic 510. Accordingly, mux 508c selects this constant 511 as its output, which has a corresponding value of 1/64− 1/512. In this manner, the following operands are ultimately supplied to adder/subtractor 520: □ m (supplied directly from input 501); $m 4$ □ (supplied by mux 508a); □ 0 (supplied by mux 508b); and □ 1/64− 1/512 (supplied by mux 508c). The range[2] signal causes adder/subtractor 520 to perform addition on all of these operands thus computing a result of the equation $m + m 4 + 0 + ( 1 64 - 1 512 ) = m + m 4 + 1 64 - 1 512 ,$ which is the equation for range 2 from TABLE 4. Accordingly, the resulting value serves as the 23-bit fraction field in the output 503 (output [22:0]) generated by log circuit 500. FIGS. 6A-C illustrate example embodiments of an antilog circuit 600 implemented using piecewise linear approximation. In particular, FIGS. 6A and 6B illustrate alternative implementations of the overall antilog circuit 600, while FIG. 6C illustrates the underlying adder/subtractor logic 620 of antilog circuit 600. Antilog circuit 600 performs antilog calculations using 16-segment piecewise linear approximation. In this manner, no lookup tables or multiplier circuits are required by antilog circuit 600, and antilog calculations can be performed in a single clock cycle. The equations used by antilog circuit 600 to perform piecewise linear approximation for antilog calculations are shown below in TABLE 5. TABLE 5 Piecewise linear approximation equations for antilog[2](η) = 2^η RANGE # RANGE EQUATION 0 0 ≤ η < 0.0625 $η - η 4 - η 32 + 1$ 1 0.0625 ≤ η < 0.125 $η - η 4 - η 64 + 1 - 1 512$ 2 0.125 ≤ η < 0.1875 $η - η 4 + η 32 + 1 - 1 128$ 3 0.1875 ≤ η < 0.25 $η - η 8 - η 16 + 1 - 1 64$ 4 0.25 ≤ η < 0.3125 $η - η 8 - η 32 + 1 - 1 32 + 1 128$ 5 0.3125 ≤ η < 0.375 $η - η 8 + 1 - 1 32$ 6 0.375 ≤ η < 0.4375 $η - η 16 - η 64 + 1 - 1 32 - 1 64$ 7 0.4375 ≤ η < 0.5 $η - η 32 - η 64 + 1 - 1 16$ 8 0.5 ≤ η < 0.5625 $η + η 512 + 1 - 1 8 + 1 32$ 9 0.5625 ≤ η < 0.625 $η + η 16 - η 64 + 1 - 1 8 + 1 128$ 10 0.625 ≤ η < 0.6875 $η + η 16 + η 32 + 1 - 1 8 + 1 64$ 11 0.6875 ≤ η < 0.75 $η + η 8 + η 64 + 1 - 1 4 + 1 16$ 12 0.75 ≤ η < 0.8125 $η + η 8 + η 16 + 1 - 1 4 + 1 32$ 13 0.8125 ≤ η < 0.875 $η + η 4 - η 128 + 1 - 1 4 - 1 128$ 14 0.875 ≤ η < 0.9375 $η + η 4 + η 32 + 1 - 1 4 - 1 16$ 15 0.9375 ≤ η < 1 $η + η 4 + η 8 + 1 - 1 4 - 1 8$ The equations in TABLE 5 are designed to compute or approximate the result of antilog[2 ]of η, which is equivalent to 2 raised to the power of η, or 2^n, where η represents the fraction portion of a fixed-point input number 601 (input[22:0]). For example, the fixed-point input 601 of antilog circuit 600 will typically be derived from intermediate DNN computations on the fixed-point outputs 503 of the log circuit(s) 500 from FIGS. 5A-C. Moreover, as discussed above, the fraction portion of the fixed-point output 503 of a log circuit 500 is computed as log[2](1+m) rather than log[2](m) in order to avoid an undefined result when m=0. Thus, the equations from TABLE 5 for computing antilog[2](η)=2^n are designed to produce a value that is equivalent to 1+m, as reflected by the constant value of +1 in each equation. In order to compute antilog[2](η)=2^n using 16-segment piecewise linear approximation, for example, the potential values of η over the interval [0,1] are divided into 16 different ranges or segments, which are designated as ranges 0-15, and separate equations are defined for each range in order to approximate the result of antilog[2](η) or 2^n. Further, the respective equations are defined exclusively using addition and/or subtraction on any of the following types of operands: q, fractions of η divided by powers of 2, and/or constant values. In this manner, the only division required by the equations is exclusively by powers of 2, and thus all division operations can be implemented using shifters. Further, the loss in precision that results from the limited “shift-based” division is compensated for through use of the constant values that are added and/or subtracted in certain equations. Accordingly, the respective equations can be implemented exclusively using addition, subtraction, and/or shift operations, thus eliminating the need for complex multiplication/division circuitry. FIG. 6A illustrates the overall logic of antilog circuit 600, which is designed to implement the equations of TABLE 5. In the illustrated embodiment, antilog circuit 600 is supplied with a 32-bit fixed-point number as input 601 (e.g., supplied via an input register), and antilog circuit 600 computes a corresponding 32-bit floating-point number as output 603 (e.g., returned via an output register), which represents the antilog[2 ]result for input 601. Input 601 includes a sign bit (input[31]), an 8-bit integer (input[30:23]), and a 23-bit fraction (input[22:0]). Given that the sign of input 601 always matches that of output 603, the sign bit of input 601 (input[31]) is fed directly into the corresponding bit of output 603 (output[31]). The integer portion of input 601 (input[30:23]) is fed into an 8-bit adder 602, which adds back a bias of 0x7F in order to generate an 8-bit biased exponent that serves as the exponent field of the floating-point output 603. Moreover, the fraction portion of input 601 (input [22:0]), which corresponds to the value of η in TABLE 5, is used to select the corresponding range and equation from TABLE 5 that will be used to compute the mantissa of the floating-point output 603 (output[22:0]). For example, the four most significant bits of the fraction portion of input 601 (input[22:19]) are supplied as input to range selection logic 604, which outputs sixteen 1-bit signals (range[0]-range[15]) that correspond to the respective ranges of η from TABLE 5, such that the signal corresponding to the applicable range is set to 1 while the remaining signals are set to 0. Based on the output of range selection logic 604, multiplexers (muxes) 608a-d are then used to select certain operands required by the corresponding equation from TABLE 5, such as the requisite fractions of η and fraction constants. In particular, muxes 608a and 608b are used to select the fractions of η that are required by the corresponding equation from TABLE 5. For example, the value of η (input[22:0]) is fed into a first collection of shifters 606a, which each perform a right shift of η by a certain number of bits in order compute the various fractions of η over powers of 2 that appear throughout the equations from TABLE 5 $( e . g . , η 4 , η 8 , η 16 , η 32 , η 64 , η 128 , η 512 ) .$ inputs to muxes 608a and 608b in the manner shown in FIG. 6A. Muxes 608a and 608b then select the particular fractions of η that are required by the corresponding equation from TABLE 5, which is determined based on the output of range selection logic 604. Similarly, muxes 608c and 608d are used to select the fraction constants that are required by the corresponding equation from TABLE 5. For example, a 23-bit constant with a value of 1 is fed into a second collection of shifters 606b, which each perform a right shift by a certain number of bits in order to generate the respective fraction constants that appear throughout the equations from TABLE 5 (e.g., ¼, ⅛, 1/16, 1/32, 1/64, 1/128, 1/512). The outputs of these shifters 606b are then fed as inputs to muxes 608c and 608d in the manner shown in FIG. 6A. Muxes 608c and 608d then select the particular constant fractions that are required by the corresponding equation from TABLE 5, which is determined based on the output of range selection logic 604. The respective operands selected by muxes 608a-d for the corresponding equation from TABLE 5 are then supplied as inputs to adder/subtractor 620. The value of η (input[22:0]) and a constant value of 1 are also supplied as inputs to adder/subtractor 620, as those values are operands in all of the equations from TABLE 5. Adder/subtractor 620 then performs the appropriate addition and/or subtraction operations on these operands (e.g., based on the output of range selection logic 604), as required by the corresponding equation from TABLE 5. The result from adder/subtractor 620 then serves as the 23-bit mantissa portion of the floating-point output 603 (output [22:0]) of antilog circuit 600. The implementation of adder/subtractor 620 is further illustrated and described below in connection with FIG. 6C. FIG. 6B illustrates an alternative implementation of an antilog circuit 600 implemented using piecewise linear approximation. In particular, while the antilog circuit of FIG. 6A is designed to compute the antilog of a fixed-point number as input, the antilog circuit of FIG. 6B is capable of computing the antilog of either a fixed-point number or a floating-point number as input. In this manner, the antilog circuit of FIG. 6B can compute the antilog[2 ]of either a fixed-point number in the log[2 ]domain (e.g., derived from the output of log circuit 500 from FIGS. 5A-C) or a floating-point number in its original domain. For example, as described in connection with FIG. 4, activation function (AF) circuit 400 uses antilog circuits for two purposes: (1) converting fixed-point numbers in the log[2 ]domain back to floating-point numbers in the original domain; and (2) computing powers of base 2 raised to floating-point exponents. Thus, the operands of antilog circuits of AF circuit 400 include both fixed-point numbers (e.g., for log[2 ]domain conversions) and floating-point numbers (e.g., for powers of base 2). Accordingly, the antilog circuits of AF circuit 400 can be implemented using the antilog circuit of FIG. 6B, which is capable of processing an input represented as either a fixed-point or floating-point number. The operation of the antilog circuit of FIG. 6B is similar to that of FIG. 6A with the exception of how it processes the integer or exponent field of input 601 (input[30:23]), as described further In particular, if input 601 is a fixed-point number, then it will contain an integer field (input[30:23]), which is supplied as one of the inputs to mux 616. The selection signal of mux 616 is a binary signal that indicates whether input 601 is a fixed-point number. Accordingly, the selection signal of mux 616 will have a value of 1 when input 601 is a fixed-point number, which causes mux 616 to select the integer portion of input 601 (input[30:23]) as its output, which is then supplied as an operand to 8-bit adder 602. Adder 602 then adds back a bias of 0x7F to the integer portion of input 601 (input[30:23]) in order to generate an 8-bit biased exponent, which then serves as the exponent portion of the floating-point output 603. Alternatively, if input 601 is a floating-point number, then it will contain an exponent field (input[30:23]). The exponent field (input[30:23]) is supplied to an 8-bit subtractor 612, which subtracts a bias of 0x7F in order to generate a corresponding 8-bit unbiased exponent, which mathematically represents the log[2 ]value of the exponent. The output of subtractor 612 is then fed into shifter 614, which performs a left shift by 2^unbiased_exponent in order to compute a corresponding antilog[2 ]value, which is then supplied as one of the inputs to mux 616. The selection signal of mux 616 will have a value of 0 when input 601 is a floating-point number, which causes mux 616 to select the value from shifter 614 as its output. The output of mux 616 is then supplied as an operand to 8-bit adder 602, which adds back a bias of 0x7F in order to generate an 8-bit biased exponent, which then serves as the exponent portion of the floating-point output 603. FIG. 6C illustrates an example implementation of the adder/subtractor logic 620 of antilog circuit 600 of FIGS. 6A and 6B, which is used to perform addition and/or subtraction on the operands of a corresponding equation from TABLE 5. In the illustrated embodiment, adder/subtractor logic 620 includes three adder/subtractors 622a-c and three OR gates 624a-c, which are described further below. The first adder/subtractor 622a is supplied with the following operands as input: the value of η (input[22:0]), a constant value of 1, and the output of mux 608a. The particular combination of addition and/or subtraction performed on these operands is dictated by OR gate 624a, which is fed with the signals for range[0]-range[7] as input. In this manner, when η falls within ranges 0-7 from TABLE 5, the output of OR gate 624a will be 1, which causes adder/subtractor 622a to compute: η+1−output of mux 608a. Alternatively, when η falls within ranges 8-15, the output of OR gate 624a will be 0, which causes adder/subtractor 622a to compute: η+1+output of mux 608a. The second adder/subtractor 622b is supplied with the following operands as input: the output of mux 608c and the output of mux 608d. The particular combination of addition and/or subtraction performed on these operands is dictated by OR gate 624b, which is fed with the signals for range[4], range[8], range[9], range[11], and range[12] as input. In this manner, when η falls within range 4, 8, 9, 11, or 12 from TABLE 5, the output of OR gate 624a will be 1, which causes adder/subtractor 622b to compute: output of mux 608c-output of mux 608d. Alternatively, when η falls within any of the remaining ranges from TABLE 5, the output of OR gate 624a will be 0, which causes adder/subtractor 622b to compute: output of mux 608c+output of mux 608d. The third adder/subtractor 622c is supplied with the following operands as input: the output of adder/subtractor 622a, the output of adder/subtractor 622b, and the output of mux 608b. The particular combination of addition and/or subtraction performed on these operands is dictated by OR gate 624c, which is fed with the inverse or NOT value of the signals for range[2], range[10], range[11], range [14], and range[15] as input. In this manner, when q falls within any range from TABLE 5 other than range 2, 10, 11, 14, and 15, the output of OR gate 624c will be 1, which causes adder/subtractor 622c to compute: output of adder/subtractor 622a-output of adder/subtractor 622b-output of mux 608b. Alternatively, when η falls within range 2, 10, 11, 14, or 15 from TABLE 5, the output of OR gate 624c will be 0, which causes adder/subtractor 622c to compute: output of adder/subtractor 622a-output of adder/subtractor 622b+output of mux 608b. The result from the third adder/subtractor 622c serves as the final output 621 of the adder/subtractor logic 620 of FIG. 6C. FIG. 7 illustrates an example embodiment of an exponent circuit 700 implemented using piecewise linear approximation. In particular, exponent circuit 700 performs exponent calculations using 16-segment piecewise linear approximation. In this manner, exponent circuit 700 requires no lookup tables and only one multiplier, and exponent calculations can be performed in two clock cycles. The equations used by exponent circuit 700 to perform piecewise linear approximation for exponent calculations are shown below in TABLE 6. TABLE 6 Piecewise linear approximation equations for exponent e^x RANGE # RANGE EQUATION 0 0 ≤ x < 0.0625 $x + x 32 + 1$ 1 0.0625 ≤ x < 0.125 $x + x 64 + x 32 + 1 - 1 256$ 2 0.125 ≤ x < 0.1875 $x + x 8 + x 32 + 1 - 1 64$ 3 0.1875 ≤ x < 0.25 $x + x 4 + 1 - 1 32$ 4 0.25 ≤ x < 0.3125 $x + x 4 + x 16 + 1 - 1 16$ 5 0.3125 ≤ x < 0.375 $x + x 4 + x 8 + 1 - 1 16$ 6 0.375 ≤ x < 0.4375 $x + x 2 + 1 - 1 8$ 7 0.4375 ≤ x < 0.5 $x + x 2 + x 8 + 1 - 1 8 - 1 16$ 8 0.5 ≤ x < 0.5625 $2 ⁢ x - x 4 - x 32 + 1 - 1 4 + 1 32$ 9 0.5625 ≤ x < 0.625 $2 ⁢ x - x 8 - x 16 + 1 - 1 4 - 1 64$ 10 0.625 ≤ x < 0.6875 $2 ⁢ x - x 16 - x 64 + 1 - 1 4 - 1 8$ 11 0.6875 ≤ x < 0.75 $2 ⁢ x + x 16 + 1 - 1 2 + 1 16$ 12 0.75 ≤ x < 0.8125 $2 ⁢ x + x 2 + x 16 + 1 - 1 2 - 1 32$ 13 0.8125 ≤ x < 0.875 $2 ⁢ x + x 4 + x 16 + 1 4 + 1 8$ 14 0.875 ≤ x < 0.9375 $2 ⁢ x + x 2 - x 16 + 1 4 + 1 64$ 15 0.9375 ≤ x < 1 $2 ⁢ x + x 2 + x 8 + 1 16$ The equations in TABLE 6 are designed to compute or approximate the result of the natural exponential function e^x, where e represents the natural exponent constant (e.g., Euler's number) and x represents the 23-bit mantissa of a 32-bit floating-point input 701 (input[22:0]). In order to compute e^x using 16-segment piecewise linear approximation, for example, the potential values of x over the interval [0,1] are divided into 16 different ranges or segments, which are designated as ranges 0-15, and separate equations are defined for each range in order to approximate the result of e^x. Further, the respective equations are defined exclusively using addition and/or subtraction on any of the following types of operands: x, 2x, fractions of x divided by powers of 2, and/or constant values. In this manner, the only division required by the equations is exclusively by powers of 2, and thus all division operations can be implemented using shifters. Further, the loss in precision that results from the limited “shift-based” division is compensated for through use of the constant values that are added and/or subtracted in certain equations. Accordingly, the respective equations for e^x can be implemented exclusively using addition, subtraction, and/or shift operations. In order to complete the exponent operation, the resulting value of e^x, which represents e raised to the power of the mantissa portion of input 701 (input[22:0]), must then be multiplied with the value of e raised to the power of the exponent portion of input 701 (input[30:23]). Accordingly, only one multiplication operation is required for the exponent operation, and thus exponent circuit 700 only requires a single multiplier circuit. The overall logic of exponent circuit 700 is illustrated in FIG. 7. In the illustrated embodiment, exponent circuit 700 is supplied with a 32-bit floating-point number as input 701, and exponent circuit 700 computes a corresponding 32-bit fixed-point number as output 703, which corresponds to the natural exponent constant e raised to the power of the floating-point number represented by input 701, or e^input. Floating-point input 701 includes a sign bit (input[31]), an 8-bit exponent (input[30:23]), and a 23-bit mantissa (input[22:0]). Given that the sign of input 701 always matches that of output 703, the sign bit of input 701 (input[31]) is fed directly into the corresponding bit of output 703 (output[31]). The exponent portion of input 701 (input[30:23]) is fed into an 8-bit subtractor 702, which subtracts a bias of 0x7F in order to generate an 8-bit unbiased exponent. The value of the natural exponent constant e raised to the power of the unbiased exponent, or e^unbiased_exponent, is then output by mux 710. For example, an 8-bit unbiased exponent has 2^8=256 potential values, which range from −128 to +127. The values of e raised to all 256 potential values of the unbiased exponent (e^−128, e^−127, . . . , e^0, e^1, . . . , e^127) are precomputed and fed as constants inputs into mux 710. The 8-bit unbiased exponent output by subtractor 702 serves as the selection signal of mux 710, which causes mux 710 to select the precomputed constant input that corresponds to e^unbiased_exponent. The output of mux 710 (e^unbased_exponent) is then fed into multiplier 712 as one of its operands, as discussed further below. The mantissa portion of input 701 (input [22:0]) is processed according to the equations from TABLE 6. For example, exponent circuit 700 is designed to compute the natural exponent constant e raised to the power of the mantissa (input [22:0]), or e^x, where x represents the mantissa (input [22:0]). Moreover, exponent circuit 700 computes e^x using piecewise linear approximation based on the equations from TABLE 6. In particular, the mantissa portion of input 701 (input [22:0]) (which corresponds to the value of x in TABLE 6) is used to select the corresponding range and equation from TABLE 6 that will be used to compute the value of e^x. For example, the four most significant bits of the mantissa of input 701 (input[22:19]) are supplied as input to range selection logic 704, which outputs sixteen 1-bit signals (range[0]-range[15]) corresponding to the respective ranges of x from TABLE 6, such that the signal corresponding to the applicable range is set to 1 while the remaining signals are set to 0. Based on the output of range selection logic 704, multiplexers (muxes) 708a-d are then used to select certain operands required by the corresponding equation from TABLE 6, such as the requisite fractions of x and constant values that appear in the equation. For example, muxes 708a and 708b are used to select the fractions of x that are required by the corresponding equation from TABLE 6. For example, the value of x (input[22:0]) is fed into a first collection of shifters 706a, which each perform a right shift of x by a certain number of bits in order compute the various fractions of x over powers of 2 that appear throughout the equations from TABLE 6 $( e . g . , x 2 , x 4 , x 8 , x 16 , x 32 , x 64 ) .$ The outputs of these shifters 706a are then fed as inputs to muxes 708a and 708b in the manner shown in FIG. 7. Muxes 708a and 708b then select the particular fractions of x that are required by the corresponding equation from TABLE 6, which is determined based on the output of range selection logic 704. Similarly, muxes 708c and 708d are used to select the constant values that are required by the corresponding equation from TABLE 6, such as the fraction constants and/or the constant value of 1 required by certain equations. For example, a 23-bit constant with a value of 1 is fed into a second collection of shifters 706b, which each perform a right shift by a certain number of bits in order to generate the respective fraction constants that appear throughout the equations from TABLE 6 (e.g., ½, ¼, ⅛, 1/16, 1/32, 1/64, 1/256). The outputs of these shifters 706b are then fed as inputs to muxes 708c and 708d in the manner shown in FIG. 7. Further, a constant value of 1 (which is required by certain equations from TABLE 6) is also supplied as one of the inputs to mux 708d. Muxes 708c and 708d then select the particular combination of constants that are required by the corresponding equation from TABLE 6, which is determined based on the output of range selection logic 704. The respective operands selected by muxes 708a-d for the corresponding equation from TABLE 6 are then supplied as inputs to adder/subtractor 720. The value of x (input[22:0]) is also supplied to adder/subtractor 720 through either one or two of its inputs, depending on whether the corresponding equation from TABLE 6 requires an operand of x or 2x. For example, the value of x is always directly supplied as one of the inputs of adder/subtractor 720, and in some cases, it may also be supplied through mux 709 as another input of adder/subtractor 720. In particular, the value of x and a 23-bit constant of 0 are supplied as inputs to mux 709, and mux 709 selects one of those values to supply as input to adder/subtractor 720. For example, if the corresponding equation from TABLE 6 requires x as an operand rather than 2x, mux 709 selects the constant of 0 as its output to adder/subtractor 720, since the value of x has already been supplied directly through another input of adder/subtractor 720. Alternatively, if the corresponding equation from TABLE 6 requires 2x as an operand rather than x, mux 709 selects the value of x as its output to adder/subtractor 720, since that results in the value of x being supplied through two inputs to adder/subtractor 720. Adder/subtractor 720 then performs the appropriate addition and/or subtraction operations on these operands (e.g., based on the output of range selection logic 704), as required by the corresponding equation from TABLE 6. In this manner, the output of adder/subtractor 720 corresponds to the final result for e^x, which is equivalent to the natural exponent constant e raised to the power of the mantissa of input 701 (input[22:0]). The output of mux 710 (e raised to the power of the unbiased exponent of input 701), and the output of adder/subtractor 720 (e raised to the power of the mantissa of input 701), are then supplied as operands to multiplier 712, which multiplies those values together in order to generate the integer and fraction portions of the fixed-point output 703 of exponent circuit 700. FIG. 8 illustrates a flowchart 800 for an example processing architecture used to implement artificial neural networks (e.g., a deep neural network (DNN)). Flowchart 800 may be implemented, for example, using the embodiments and functionality described throughout this disclosure. For example, in some embodiments, flowchart 800 may be implemented using the activation circuit of FIG. 2A-B or 4, the log circuit of FIGS. 5A-C, the antilog circuit of FIGS. 6A-C, and/or the exponent circuit of FIG. 7. In the illustrated flowchart, a particular activation function is performed by an activation function circuit that is designed to accelerate performance of activation functions. In some embodiments, for example, the activation function circuit may be designed to support multiple types of activation functions that are commonly used to implement artificial or deep neural networks. Moreover, the activation function circuit may leverage log, antilog, and/or exponent circuits implemented using piecewise linear approximation in order to accelerate the calculations associated with the supported activation functions. In some embodiments, for example, the activation function circuit may be implemented on and/or used in connection with processors, devices, and/or systems that execute applications with artificial neural networks (e.g., deep neural networks, convolutional neural networks, feedforward neural networks, recurrent neural networks, and so forth). In this manner, applications can leverage the activation function circuit in order to accelerate the activation functions used to implement artificial neural networks. For example, an application with an artificial neural network may be stored in memory and executed by a processor on a particular device or system. When the application needs to perform an activation function in connection with an operation in the artificial neural network, the application may issue a corresponding instruction or command to the processor and/or the activation function circuit, and the processor may then leverage the activation function circuit to perform the activation function. The result of the activation function may then be provided back to the processor and/or the application and subsequently used by the artificial neural network. The flowchart begins at block 802, where an instruction or command to perform a particular activation function is received. The instruction or command, for example, may be issued by the application and received by the processor and/or the activation function circuit. In some embodiments, the instruction or command may identify the desired activation function and any operands or other parameters associated with the activation function. Further, in some cases, the selected activation function may contain a combination of exponent, multiplication, and/or division operations. Accordingly, the flowchart first proceeds to block 804 to perform any exponent operations associated with the activation function. In some embodiments, for example, the exponent operations may be performed using piecewise linear approximation in order to reduce the latency associated with those operations (e.g., using the exponent circuit of FIG. 7). In some embodiments, however, if an exponent operation contains a base of 2, it can be computed using an antilog circuit (using an antilog base 2 operation) instead of the exponent circuit to reduce latency. The flowchart then proceeds to block 806 to perform any multiplication and/or division operations associated with the activation function. In some embodiments, for example, multiplication and/or division operations of the activation function may be performed using a combination of log, antilog, and addition/subtraction operations. For example, by leveraging the properties of logarithmic functions, log and antilog operations can be used to convert the expensive multiplication/division operations into addition and/or subtraction. Further, the log and antilog operations can be performed using piecewise linear approximation in order to reduce the latency associated with those operations. In some embodiments, for example, a log circuit may be used to perform log operations using piecewise linear approximation (e.g., the log circuit of FIGS. 5A-C). For example, the log circuit may be designed to compute the logarithm of an input represented as a floating-point number (e.g., with an exponent and a mantissa), and the log circuit may represent the resulting output as a fixed-point number (e.g., with an integer and a fraction). The log circuit first identifies the input or operand associated with the logarithm operation, and it then identifies or determines a particular range that the input falls within for purposes of piecewise linear approximation. For example, a plurality of ranges or segments, along with associated equations, are defined for performing logarithm operations using piecewise linear approximation. Accordingly, the corresponding range that the input falls within is identified, and the operands required by the equation for that range are obtained and/or generated. In some embodiments, for example, certain operands may be extracted, obtained, and/or computed using the input, such as the mantissa of the input, as well as fraction operands with denominators that are powers of 2, which may be generated using shift circuits (e.g., by shifting the bits in order to perform the division). The appropriate operands are then selected using one or more multiplexers, and a result of the equation is computed using the selected operands. For example, the result may be computed by performing addition and/or subtraction on the respective operands using an adder-subtractor circuit. Moreover, the exponent of the floating-point input may be converted into an unbiased exponent by subtracting a bias using a subtractor circuit. The output of the logarithm operation is then generated using the result from the corresponding equation and the unbiased exponent. For example, the unbiased exponent serves as the integer of the resulting fixed-point output, and the result of the equation serves as the fraction of the fixed-point output. Similarly, an antilog circuit may be used to perform the antilog operations using piecewise linear approximation (e.g., the antilog circuit of FIGS. 6A-C). The antilog operations may be implemented in a similar manner, except using different computations for the integer/exponent portion and different equations for piecewise linear approximation. The flowchart then proceeds to block 808 to output a result of the activation function based on the results of the exponent, multiplication, and/or division operations. At this point, the flowchart may be complete. In some embodiments, however, the flowchart may restart and/or certain blocks may be repeated. For example, in some embodiments, the flowchart may restart at block 802 to continue performing activation functions. DNN Performance FIGS. 9A-B illustrate the scalability of the described embodiments with respect to the supported number of parallel operations. In particular, the described embodiments are implemented using a flexible design that enables the underlying hardware to be replicated in order to increase the number of parallel operations that are supported. In this manner, the described embodiments can be scaled as necessary in order to support the number of parallel operations required by a particular application or use case. For example, as shown by FIGS. 9A-B, the proposed solution can simply be replicated in order to scale it from a single operand to n operands. In particular, FIG. 9A illustrates the proposed solution for a single operand, whereas FIG. 9B illustrates the same for n operands. Further, the proposed solution can be pipelined to reduce latency and improve throughput. FIG. 10A illustrates the scalability of the described embodiments with respect to precision. In particular, the described embodiments can be scaled to provide varying levels of precision by simply adjusting the number of segments in the piecewise linear approximation models implemented by the log, antilog, and/or exponent circuits. In this manner, the described embodiments can be scaled as necessary in order to provide the level of precision required for different applications and use cases. For example, if an application demands greater precision, the number of segments involved in the piecewise linear approximation models can be increased in order to accommodate the higher precision requirements. The number of segments required in a piecewise linear approximation model for varying levels of precision is shown in FIG. 10A. As shown by FIG. 10A, for example, if an application demands 23-bit precision out of a 23-bit mantissa, the piecewise linear approximation model should be implemented using at least 1,556 segments. The number of segments used for piecewise linear approximation can be adjusted in a similar manner in order to provide any requisite level of precision. FIG. 10B compares the silicon area requirements for various implementations of log and antilog hardware. In particular, FIG. 10B illustrates the number of synthesis gates in a solution implemented using the lookup table method (with 12-bit precision) versus log and antilog circuits implemented using piecewise linear approximation (PLA) (with either 10-bit or 12-bit precision). As shown in FIG. 10B, the log and antilog circuits implemented using piecewise linear approximation use significantly fewer gates than the solution implemented using the lookup table method. For example, the lookup table solution uses 8,800 gates for 12-bit precision, while the piecewise linear approximation (PLA) log and antilog circuits respectively use 350 and 450 gates for 10-bit precision and 1,048 and 1,348 gates for 12-bit precision. FIG. 10C compares the silicon area requirements for various implementations of activation function hardware (with 10-bit precision). In particular, FIG. 10C illustrates the number of synthesis gates for various individual activation functions implemented using lookup tables (LUTs) (e.g., Sigmoid, Hyperbolic Tangent (Tan h), and Swish), versus the number of synthesis gates for a unified activation function circuit that supports multiple activation functions and is implemented using piecewise linear approximation (PLA) (e.g., activation function circuit 200 of FIGS. 2A-B). As shown in FIG. 10C, for example, the unified circuit uses significantly fewer gates than any one of the LUT-based circuits, yet it supports multiple activation functions. For example, the unified circuit can be implemented using 8,321 total synthesis gates, as its simplified equations enable the exponent, logarithm, and antilogarithm blocks to be implemented using only 4,387, 1,048, and 1,348 synthesis gates, respectively. By comparison, each LUT-based circuit requires approximately 17,000-18,000 synthesis gates for only a single activation function. FIG. 10D illustrates the approximation error for log and antilog circuits implemented using piecewise linear approximation (e.g., FIGS. 5A-C and 6A-C), while FIG. 10E illustrates the approximation error for an exponent circuit implemented using piecewise linear approximation (e.g., FIG. 7). In particular, these circuits can be implemented with an absolute error of 0.045% (ABS) for the respective log[2], and antilog[2], and exponent calculations, which translates into 12-bit precision in mantissa for an IEEE-754 single-precision floating-point number (e.g., 1 sign bit+8 exponent bits+12-bit mantissa=21-bit precision). The overall precision of a unified activation function circuit implemented using the log, antilog, and exponent circuits (e.g., AF circuit 200 of FIGS. 2A-B) is 10 mantissa bits for an IEEE-754 single-precision floating-point number (e.g., 1 sign bit+8 exponent bits+10-bit mantissa=19-bit precision). FIGS. 11A-C illustrate a performance comparison of deep neural networks (DNNs) implemented using traditional activation function equations (with powers of the exponent constant e) versus modified activation function equations (using powers of base 2). For example, as discussed above, AF circuit 200 of FIGS. 2A-B and AF circuit 400 of FIG. 4 both leverage piecewise linear approximation in order to implement activation functions. However, AF circuit 200 implements traditional activation functions, while AF circuit 400 implements modified activation functions that use powers of 2 instead of powers of the exponent constant e. The performance of these respective approaches is compared in the example illustrated by FIGS. 11A-C. FIG. 11A illustrates an example DNN 1100 that includes an input layer with two input neurons, a single hidden layer with three neurons, and a fully connected layer with one output neuron. For simplicity, DNN 1100 implements the feature mapping shown in TABLE 7, which has the same behavior of an XOR gate. TABLE 7 Mapping of DNN feature sets FEATURE A FEATURE B OUTPUT 0 0 0 0 1 1 1 0 1 1 1 0 The illustrated example focuses on one of these feature sets, namely A=1 and B=1, which is expected to have an output of 0. In the illustrated example, input neuron X1 corresponds to feature A and input neuron X2 corresponds to feature B. FIG. 11B illustrates the processing that is performed when DNN 1100 is implemented using the traditional Sigmoid activation function: $f ⁡ ( x ) = 1 1 + e - x .$ During the forward propagation stage, the weights are selected randomly as follows: W11=0.8, W12=0.4, W13=0.3, W21=0.2, W22=0.9, WW23=0.5, Wh1=0.3, Wh2=0.5, and Wh3=0.9. The output of the hidden layer nodes (H) is then computed as follows: A bias of 0 is taken in the illustrated example for simplicity. After applying the Sigmoid activation function, the updated hidden layer neurons have the following values: The fully connected (FC) layer can then be computed as: FC=H1*Wh1+H2*Wh2+H3*Wh3=0.7310*0.3+0.7858*0.5+0.6899*0.9=1.235. After applying the Sigmoid activation function to the fully connected layer, FC =0.7746. The error is then computed as follows: error=expected−calculated=0-0.7746=−0.7746. During the backwards propagation and weight update stage, the derivative of the Sigmoid activation function $f ′ ⁡ ( x ) = e - x ( 1 + e - x ) 2$ is used, and the following calculations are performed: ΔFC=f′(FC value without activation function)*error=f′(1.235)*(−0.7746)=−0.13439; ΔFC=hidden layer neurons*hidden layer weights=H1*Wh1+H2*Wh2+H3*Wh3; The following calculations are then performed with respect to ΔH1, ΔH2 and ΔH3: New hidden layer values=ΔFC*f′(hidden layer neuron values without activation function)/hidden layer weights: After back propagation, the new hidden layer weights have the following values: The weight update between the input and hidden layers is then computed as follows: Delta of weights=delta of hidden layer/inputs: New weights: FIG. 11B illustrates the state of DNN 1100 after the weights have been updated based on the calculations above. The output of the DNN is 0.69 after this iteration, which is an improvement over the output of 0.77 after the first iteration. The iterations continue in this manner until the loss function gradient reaches an acceptable level. FIG. 11C illustrates the processing that is performed when DNN 1100 is implemented using the modified Sigmoid activation function, which has an exponent term with a base of 2 instead of a base of e: $f ⁡ ( x ) = 1 1 + 2 - x .$ During the forward propagation stage, the weights are selected randomly as follows: W1=0.8, W12=0.4, W13=0.3, W21=0.2, W22=0.9, WW23=0.5, Wh1=0.3, Wh2=0.5, and Wh3=0.9. The output of the hidden layer nodes (H) is then computed as follows: A bias of 0 is taken in the illustrated example for simplicity. After applying the modified Sigmoid activation function, the updated hidden layer neurons have the following values: The fully connected (FC) layer can then be computed as: FC=H1*Wh1+H2*Wh2+H3*Wh3=0.6667*0.3+0.7117*0.5+0.6351*0.9=1.1272. After applying the modified Sigmoid activation function to the fully connected layer, FC=0.6859. The error is then computed as follows: error=expected−calculated=0−0.6859=−0.6859. During the backwards propagation and weight update stage, the derivative of the modified Sigmoid activation function $f ′ ⁡ ( x ) = ln ⁡ ( 2 ) · 2 x ( 1 + 2 - x ) 2$ is used, and the following calculations are performed: ΔFC=f′(FC value without activation function)*error=f′(1.1272)*(−0.6859)=−0.1024; ΔFC=hidden layer neurons*hidden layer weights=H1*Wh1+H2*Wh2+H3*Wh3; The following calculations are then performed with respect to ΔH1, ΔH2 and ΔH3: New hidden layer values=ΔFC*f′(hidden layer neuron values without activation function)/hidden layer weights: After back propagation, the new hidden layer weights have the following values: The weight update between the input and hidden layers is then computed as follows: Delta of weights=delta of hidden layer/inputs: New weights: FIG. 11C illustrates the state of DNN 1100 after the weights have been updated based on the calculations above (e.g., using the modified Sigmoid activation function). The output of the DNN of FIG. 11 C is 0.70 after this iteration, which is comparable to the output of 0.69 for the DNN of FIG. 11B. The iterations continue in this manner until the loss function gradient reaches an acceptable level. FIGS. 12A-B illustrate a comparison of the training convergence for the respective DNNs of FIGS. 11B and 11C. In particular, FIG. 12A illustrates the rate of convergence for the original Sigmoid activation function used by the DNN of FIG. 11B, while FIG. 12B illustrates the rate of convergence for the modified Sigmoid activation function (e.g., using a power of base 2 instead of e) used by the DNN of FIG. 11C. FIG. 13 illustrates the error percentage or accuracy of these approaches. Example Computing Architectures FIGS. 14-18 illustrate example implementations of computing environments and architectures that can be used in accordance with embodiments disclosed herein. In various embodiments, for example, these example computer architectures may be used in conjunction with and/or used to implement the Deep Neural Network (DNN) processing functionality described throughout this disclosure. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable. FIG. 14A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. FIG. 14B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in FIGS. 14A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described. In FIG. 14A, a processor pipeline 1400 includes a fetch stage 1402, a length decode stage 1404, a decode stage 1406, an allocation stage 1408, a renaming stage 1410, a scheduling (also known as a dispatch or issue) stage 1412, a register read/memory read stage 1414, an execute stage 1416, a write back/memory write stage 1418, an exception handling stage 1422, and a commit stage 1424. FIG. 14B shows processor core 1490 including a front end unit 1430 coupled to an execution engine unit 1450, and both are coupled to a memory unit 1470. The core 1490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like. The front end unit 1430 includes a branch prediction unit 1432 coupled to an instruction cache unit 1434, which is coupled to an instruction translation lookaside buffer (TLB) 1436, which is coupled to an instruction fetch unit 1438, which is coupled to a decode unit 1440. The decode unit 1440 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1440 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1490 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1440 or otherwise within the front end unit 1430). The decode unit 1440 is coupled to a rename/allocator unit 1452 in the execution engine unit 1450. The execution engine unit 1450 includes the rename/allocator unit 1452 coupled to a retirement unit 1454 and a set of one or more scheduler unit(s) 1456. The scheduler unit(s) 1456 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 1456 is coupled to the physical register file(s) unit(s) 1458. Each of the physical register file(s) units 1458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1458 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1458 is overlapped by the retirement unit 1454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using a register maps and a pool of registers; etc.). The retirement unit 1454 and the physical register file(s) unit(s) 1458 are coupled to the execution cluster(s) 1460. The execution cluster(s) 1460 includes a set of one or more execution units 1462 and a set of one or more memory access units 1464. The execution units 1462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1456, physical register file(s) unit(s) 1458, and execution cluster(s) 1460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster—and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order. The set of memory access units 1464 is coupled to the memory unit 1470, which includes a data TLB unit 1472 coupled to a data cache unit 1474 coupled to a level 2 (L2) cache unit 1476. In one exemplary embodiment, the memory access units 1464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1472 in the memory unit 1470. The instruction cache unit 1434 is further coupled to a level 2 (L2) cache unit 1476 in the memory unit 1470. The L2 cache unit 1476 is coupled to one or more other levels of cache and eventually to a main memory. By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1400 as follows: 1) the instruction fetch 1438 performs the fetch and length decoding stages 1402 and 1404; 2) the decode unit 1440 performs the decode stage 1406; 3) the rename/allocator unit 1452 performs the allocation stage 1408 and renaming stage 1410; 4) the scheduler unit(s) 1456 performs the schedule stage 1412; 5) the physical register file(s) unit(s) 1458 and the memory unit 1470 perform the register read/memory read stage 1414; the execution cluster 1460 perform the execute stage 1416; 6) the memory unit 1470 and the physical register file(s) unit(s) 1458 perform the write back/memory write stage 1418; 7) various units may be involved in the exception handling stage 1422; and 8) the retirement unit 1454 and the physical register file(s) unit(s) 1458 perform the commit stage 1424. The core 1490 may support one or more instructions sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif.; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, Calif.), including the instruction(s) described herein. In one embodiment, the core 1490 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology). While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1434/1474 and a shared L2 cache unit 1476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor. FIG. 15 is a block diagram of a processor 1500 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in FIG. 15 illustrate a processor 1500 with a single core 1502A, a system agent 1510, a set of one or more bus controller units 1516, while the optional addition of the dashed lined boxes illustrates an alternative processor 1500 with multiple cores 1502A-N, a set of one or more integrated memory controller unit(s) 1514 in the system agent unit 1510, and special purpose logic 1508. Thus, different implementations of the processor 1500 may include: 1) a CPU with the special purpose logic 1508 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1502A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1502A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 1502A-N being a large number of general purpose in-order cores. Thus, the processor 1500 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1500 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. The memory hierarchy includes one or more levels of cache 1504A-N within the cores, a set or one or more shared cache units 1506, and external memory (not shown) coupled to the set of integrated memory controller units 1514. The set of shared cache units 1506 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1512 interconnects the integrated graphics logic 1508, the set of shared cache units 1506, and the system agent unit 1510/integrated memory controller unit(s) 1514, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1506 and cores 1502-A-N. In some embodiments, one or more of the cores 1502A-N are capable of multi-threading. The system agent 1510 includes those components coordinating and operating cores 1502A-N. The system agent unit 1510 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1502A-N and the integrated graphics logic 1508. The display unit is for driving one or more externally connected displays. The cores 1502A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1502A-N may be capable of execution the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. Referring now to FIG. 16, shown is a block diagram of a system 1600 in accordance with one embodiment of the present invention. The system 1600 may include one or more processors 1610, 1615, which are coupled to a controller hub 1620. In one embodiment the controller hub 1620 includes a graphics memory controller hub (GMCH) 1690 and an Input/Output Hub (IOH) 1650 (which may be on separate chips); the GMCH 1690 includes memory and graphics controllers to which are coupled memory 1640 and a coprocessor 1645; the IOH 1650 is couples input/output (I/O) devices 1660 to the GMCH 1690. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1640 and the coprocessor 1645 are coupled directly to the processor 1610, and the controller hub 1620 in a single chip with the IOH 1650. The optional nature of additional processors 1615 is denoted in FIG. 16 with broken lines. Each processor 1610, 1615 may include one or more of the processing cores described herein and may be some version of the processor 1500. The memory 1640 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1620 communicates with the processor(s) 1610, 1615 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1695. In one embodiment, the coprocessor 1645 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1620 may include an integrated graphics accelerator. There can be a variety of differences between the physical resources 1610, 1615 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. In one embodiment, the processor 1610 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1610 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1645. Accordingly, the processor 1610 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1645. Coprocessor(s) 1645 accept and execute the received coprocessor instructions. Referring now to FIG. 17, shown is a block diagram of a first more specific exemplary system 1700 in accordance with an embodiment of the present invention. As shown in FIG. 17, multiprocessor system 1700 is a point-to-point interconnect system, and includes a first processor 1770 and a second processor 1780 coupled via a point-to-point interconnect 1750. Each of processors 1770 and 1780 may be some version of the processor 1500. In one embodiment of the invention, processors 1770 and 1780 are respectively processors 1610 and 1615, while coprocessor 1738 is coprocessor 1645. In another embodiment, processors 1770 and 1780 are respectively processor 1610 coprocessor 1645. Processors 1770 and 1780 are shown including integrated memory controller (IMC) units 1772 and 1782, respectively. Processor 1770 also includes as part of its bus controller units point-to-point (P-P) interfaces 1776 and 1778; similarly, second processor 1780 includes P-P interfaces 1786 and 1788. Processors 1770, 1780 may exchange information via a point-to-point (P-P) interface 1750 using P-P interface circuits 1778, 1788. As shown in FIG. 17, IMCs 1772 and 1782 couple the processors to respective memories, namely a memory 1732 and a memory 1734, which may be portions of main memory locally attached to the respective processors. Processors 1770, 1780 may each exchange information with a chipset 1790 via individual P-P interfaces 1752, 1754 using point to point interface circuits 1776, 1794, 1786, 1798. Chipset 1790 may optionally exchange information with the coprocessor 1738 via a high-performance interface 1739. In one embodiment, the coprocessor 1738 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. Chipset 1790 may be coupled to a first bus 1716 via an interface 1796. In one embodiment, first bus 1716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited. As shown in FIG. 17, various I/O devices 1714 may be coupled to first bus 1716, along with a bus bridge 1718 which couples first bus 1716 to a second bus 1720. In one embodiment, one or more additional processor(s) 1715, such as coprocessors, high-throughput MIC processors, GPGPU's, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1716. In one embodiment, second bus 1720 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 1720 including, for example, a keyboard and/or mouse 1722, communication devices 1727 and a storage unit 1728 such as a disk drive or other mass storage device which may include instructions/code and data 1730, in one embodiment. Further, an audio I/O 1724 may be coupled to the second bus 1720. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 17, a system may implement a multi-drop bus or other such architecture. Referring now to FIG. 18, shown is a block diagram of a SoC 1800 in accordance with an embodiment of the present invention. Similar elements in FIG. 15 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG. 18, an interconnect unit(s) 1812 is coupled to: an application processor 1810 which includes a set of one or more cores 1802A-N, cache unit(s) 1804A-N, and shared cache unit(s) 1806; a system agent unit 1810; a bus controller unit(s) 1816; an integrated memory controller unit(s) 1814; a set or one or more coprocessors 1820 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; an static random access memory (SRAM) unit 1830; a direct memory access (DMA) unit 1832; and a display unit 1840 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1820 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code, such as code 1730 illustrated in FIG. 17, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example; a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritable's (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products. The flowcharts and block diagrams in the FIGURES illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various aspects of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order or alternative orders, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. The foregoing disclosure outlines features of several embodiments so that those skilled in the art may better understand various aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure. All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including a central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips. As used throughout this specification, the term “processor” or “microprocessor” should be understood to include not only a traditional microprocessor (such as Intel's® industry-leading x86 and x64 architectures), but also graphics processors, matrix processors, and any ASIC, FPGA, microcontroller, digital signal processor (DSP), programmable logic device, programmable logic array (PLA), microcode, instruction set, emulated or virtual machine processor, or any similar “Turing-complete” device, combination of devices, or logic elements (hardware or software) that permit the execution of instructions. Note also that in certain embodiments, some of the components may be omitted or consolidated. In a general sense, the arrangements depicted in the figures should be understood as logical divisions, whereas a physical architecture may include various permutations, combinations, and/or hybrids of these elements. It is imperative to note that countless possible design configurations can be used to achieve the operational objectives outlined herein. Accordingly, the associated infrastructure has a myriad of substitute arrangements, design choices, device possibilities, hardware configurations, software implementations, and equipment options. In a general sense, any suitably-configured processor can execute instructions associated with data or microcode to achieve the operations detailed herein. Any processor disclosed herein could transform an element or an article (for example, data) from one state or thing to another state or thing. In another example, some activities outlined herein may be implemented with fixed logic or programmable logic (for example, software and/or computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (for example, a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an ASIC that includes digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine-readable mediums suitable for storing electronic instructions, or any suitable combination thereof. In operation, a storage may store information in any suitable type of tangible, non-transitory storage medium (for example, random access memory (RAM), read only memory (ROM), field programmable gate array (FPGA), erasable programmable read only memory (EPROM), electrically erasable programmable ROM (EEPROM), or microcode), software, hardware (for example, processor instructions or microcode), or in any other suitable component, device, element, or object where appropriate and based on particular needs. Furthermore, the information being tracked, sent, received, or stored in a processor could be provided in any database, register, table, cache, queue, control list, or storage structure, based on particular needs and implementations, all of which could be referenced in any suitable timeframe. Any of the memory or storage elements disclosed herein should be construed as being encompassed within the broad terms ‘memory’ and ‘storage,’ as appropriate. A non-transitory storage medium herein is expressly intended to include any non-transitory special-purpose or programmable hardware configured to provide the disclosed operations, or to cause a processor to perform the disclosed operations. A non-transitory storage medium also expressly includes a processor having stored thereon hardware-coded instructions, and optionally microcode instructions or sequences encoded in hardware, firmware, or software. Computer program logic implementing all or part of the functionality described herein is embodied in various forms, including, but in no way limited to, hardware description language, a source code form, a computer executable form, machine instructions or microcode, programmable hardware, and various intermediate forms (for example, forms generated by an HDL processor, assembler, compiler, linker, or locator). In an example, source code includes a series of computer program instructions implemented in various programming languages, such as an object code, an assembly language, or a high-level language such as OpenCL, FORTRAN, C, C++, JAVA, or HTML for use with various operating systems or operating environments, or in hardware description languages such as Spice, Verilog, and VHDL. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form, or converted to an intermediate form such as byte code. Where appropriate, any of the foregoing may be used to build or describe appropriate discrete or integrated circuits, whether sequential, combinatorial, state machines, or otherwise. In one example, any number of electrical circuits of the FIGURES may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processor and memory can be suitably coupled to the board based on particular configuration needs, processing demands, and computing designs. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In another example, the electrical circuits of the FIGURES may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application specific hardware of electronic devices. Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated or reconfigured in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the FIGURES may be combined in various possible configurations, all of which are within the broad scope of this specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that the electrical circuits of the FIGURES and its teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the electrical circuits as potentially applied to a myriad of other architectures. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. Example Implementations The following examples pertain to embodiments described throughout this disclosure. One or more embodiments may include an apparatus, comprising: a log circuit, wherein the log circuit comprises circuitry to: identify, via an input register, an input associated with a logarithm operation, wherein the logarithm operation is to be performed by the log circuit using piecewise linear approximation; identify, using range selection circuitry, a first range that the input falls within, wherein the first range is identified from a plurality of ranges associated with a plurality of piecewise linear approximation (PLA) equations for the logarithm operation, and wherein the first range corresponds to a first equation of the plurality of PLA equations; obtain a plurality of operands associated with the first equation; compute, using adder-subtractor circuitry, a result of the first equation based on the plurality of operands; and return, via an output register, an output associated with the logarithm operation, wherein the output is generated based at least in part on the result of the first equation. In one example embodiment of an apparatus, the logarithm operation is associated with an artificial neural network operation. In one example embodiment of an apparatus: the input comprises a floating-point number, wherein the floating-point number comprises an exponent and a mantissa; and the output comprises a fixed-point number, wherein the fixed-point number comprises an integer and a fraction. In one example embodiment of an apparatus, the plurality of operands comprises the mantissa and one or more fraction operands, wherein the one or more fraction operands each comprise a denominator that comprises a power of two. In one example embodiment of an apparatus, the log circuit further comprises one or more shift circuits to generate the one or more fraction operands. In one example embodiment of an apparatus, the log circuit further comprises a subtractor circuit to subtract a bias from the exponent of the floating-point number to generate an unbiased exponent. In one example embodiment of an apparatus, the circuitry to return, via the output register, the output associated with the logarithm operation is further to: generate the integer of the fixed-point number based on the unbiased exponent; and generate the fraction of the fixed-point number based on the result of the first equation. In one example embodiment of an apparatus, wherein the log circuit further comprises one or more multiplexers to select the plurality of operands associated with the first equation. In one example embodiment of an apparatus, wherein the adder-subtractor circuitry is to perform one or more addition or subtraction operations on the plurality of operands. In one example embodiment of an apparatus, the apparatus further comprises an antilog circuit, wherein the antilog circuit comprises circuitry to: identify a second input associated with an antilogarithm operation, wherein the antilogarithm operation is to be performed by the antilog circuit using piecewise linear approximation; identify a second range that the second input falls within, wherein the second range is identified from a second plurality of ranges associated with a second plurality of piecewise linear approximation (PLA) equations for the antilogarithm operation, and wherein the second range corresponds to a second equation of the second plurality of PLA equations; compute a second result of the second equation based on a second plurality of operands associated with the second equation; and generate a second output associated with the antilogarithm operation, wherein the second output is generated based at least in part on the second result of the second equation. In one example embodiment of an apparatus, the apparatus further comprises an activation function circuit, wherein the activation function circuit comprises the log circuit and the antilog circuit, and wherein the activation function circuit further comprises circuitry to: receive an instruction to perform an activation function selected from a plurality of available activation functions, wherein the activation function comprises one or more multiplication or division operations; perform the one or more multiplication or division operations using one or more logarithm operations and one or more antilogarithm operations, wherein the one or more logarithm operations are performed using the log circuit and the one or more antilogarithm operations are performed using the antilog circuit; and generate an activation output associated with the activation function, wherein the activation output is generated based at least in part on one or more results of the one or more multiplication or division operations. In one example embodiment of an apparatus: the activation function further comprises one or more exponent operations; and the activation function circuit further comprises an exponent circuit to perform the one or more exponent operations using piecewise linear approximation. One or more embodiments may include a system, comprising: a memory to store information associated with an application; a processor to execute one or more instructions associated with the application; and an activation function circuit to perform a plurality of activation functions, wherein the activation function circuit comprises circuitry to: receive an instruction to perform an activation function associated with the application, wherein the activation function is selected from the plurality of activation functions, and wherein the activation function comprises one or more multiplication or division operations; perform the one or more multiplication or division operations using one or more log operations and one or more antilog operations, wherein the one or more log operations are performed by a log circuit using piecewise linear approximation, and wherein the one or more antilog operations are performed by an antilog circuit using piecewise linear approximation; and generate an output associated with the activation function, wherein the output is generated based at least in part on one or more results of the one or more multiplication or division operations. In one example embodiment of a system, the application comprises an artificial neural network, and wherein the activation function is associated with an operation of the artificial neural network. In one example embodiment of a system, the circuitry to perform the one or more multiplication or division operations using the one or more log operations and the one or more antilog operations is further to: perform one or more logarithm base 2 operations on one or more operands associated with the one or more multiplication or division operations, wherein the one or more logarithm base 2 operations are performed using piecewise linear approximation; perform one or more addition or subtraction operations on one or more results of the one or more logarithm base 2 operations; and perform one or more antilogarithm base 2 operations on one or more results of the one or more addition or subtraction operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. In one example embodiment of a system: the activation function further comprises one or more exponent operations; and the activation function circuit further comprises circuitry to perform the one or more exponent operations using piecewise linear approximation. In one example embodiment of a system: the one or more exponent operations each comprise a base of 2; and the circuitry to perform the one or more exponent operations using piecewise linear approximation is further to perform the one or more exponent operations using one or more antilogarithm base 2 operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. In one example embodiment of a system, the plurality of activation functions comprises: a sigmoid function; a hyperbolic tangent function; a swish function; and a rectified linear unit function. In one example embodiment of a system: at least one of the sigmoid function, the hyperbolic tangent function, or the swish function is defined using one or more exponent operations that exclusively comprise a base of 2; and the activation function circuit further comprises circuitry to perform the one or more exponent operations using one or more antilogarithm base 2 operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. One or more embodiments may include at least one machine accessible storage medium having instructions stored thereon, wherein the instructions, when executed on a machine, cause the machine to: receive, by an activation function circuit, an instruction to perform an activation function selected from a plurality of available activation functions, wherein the activation function comprises one or more multiplication or division operations; perform the one or more multiplication or division operations using one or more log operations and one or more antilog operations, wherein the one or more log operations and the one or more antilog operations are performed using piecewise linear approximation; and generate an output associated with the activation function, wherein the output is generated based at least in part on one or more results of the one or more multiplication or division operations. In one example embodiment of a storage medium, the instructions that cause the machine to perform the one or more multiplication or division operations using the one or more log operations and the one or more antilog operations further cause the machine to: perform one or more logarithm base 2 operations on one or more operands associated with the one or more multiplication or division operations, wherein the one or more logarithm base 2 operations are performed using piecewise linear approximation; perform one or more addition or subtraction operations on one or more results of the one or more logarithm base 2 operations; and perform one or more antilogarithm base 2 operations on one or more results of the one or more addition or subtraction operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. In one example embodiment of a storage medium: the activation function further comprises one or more exponent operations; and the instructions further cause the machine to perform the one or more exponent operations using piecewise linear approximation. In one example embodiment of a storage medium: at least one activation function of the plurality of available activation functions is defined using one or more exponent operations that exclusively comprise a base of 2; and the instructions further cause the machine to perform the one or more exponent operations using one or more antilogarithm base 2 operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. One or more embodiments may include a method, comprising: receiving, by an activation function circuit, an instruction to perform an activation function selected from a plurality of available activation functions, wherein the activation function comprises one or more multiplication or division operations; performing the one or more multiplication or division operations using one or more log operations and one or more antilog operations, wherein the one or more log operations and the one or more antilog operations are performed using piecewise linear approximation; and generating an output associated with the activation function, wherein the output is generated based at least in part on one or more results of the one or more multiplication or division operations. In one example embodiment of a method, the method further comprises performing one or more logarithm base 2 operations on one or more operands associated with the one or more multiplication or division operations, wherein the one or more logarithm base 2 operations are performed using piecewise linear approximation; performing one or more addition or subtraction operations on one or more results of the one or more logarithm base 2 operations; and performing one or more antilogarithm base 2 operations on one or more results of the one or more addition or subtraction operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. 1. An activation function circuit, comprising: decode circuitry to decode a first instruction to perform a first activation function, wherein the first activation function is one of a plurality of activation functions implemented on the activation function circuit, and wherein the first activation function includes an exponent operation and a division operation; exponent circuitry to perform the exponent operation using piecewise linear approximation; log circuitry to perform logarithm operations on a numerator and a denominator of the division operation using piecewise linear approximation; subtractor circuitry to perform a subtraction operation on results of the logarithm operations; antilog circuitry to perform an antilogarithm operation on a result of the subtraction operation using piecewise linear approximation; and output circuitry to output a result of the first activation function based on a result of the antilogarithm operation. 2. The activation function circuit of claim 1, wherein: the first activation function further includes a multiplication operation; the log circuitry is further to perform logarithm operations on operands of the multiplication operation using piecewise linear approximation; the activation function circuit further comprises adder circuitry to perform an addition operation on results of the logarithm operations on the operands of the multiplication operation; and the antilog circuitry is further to perform an antilogarithm operation on a result of the addition operation using piecewise linear approximation, wherein a result of the antilogarithm operation is a result of the multiplication operation. 3. The activation function circuit of claim 1, wherein: the exponent operation has a base of 2; and the exponent circuitry to perform the exponent operation using piecewise linear approximation is further to perform the exponent operation using an antilogarithm base 2 operation, wherein the antilogarithm base 2 operation is performed using piecewise linear approximation. 4. The activation function circuit of claim 1, wherein the first activation function is: a sigmoid function; a hyperbolic tangent function; or a swish function. 5. The activation function circuit of claim 1, wherein the plurality of activation functions include: a sigmoid function; a hyperbolic tangent function; a swish function; and a rectified linear unit function. 6. The activation function circuit of claim 5, wherein: at least one of the sigmoid function, the hyperbolic tangent function, or the swish function is defined using one or more exponent operations that exclusively have a base of 2, wherein the one or more exponent operations include the exponent operation; and the exponent circuitry to perform the exponent operation using piecewise linear approximation is further to perform the one or more exponent operations using one or more antilogarithm base 2 operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. 7. The activation function circuit of claim 1, wherein the first instruction includes an opcode identifying the first activation function from the plurality of activation functions implemented on the activation function circuit. 8. The activation function circuit of claim 1, wherein the plurality of activation functions are associated with one or more artificial neural networks. 9. At least one non-transitory machine-accessible storage medium having instructions stored thereon, wherein the instructions, when implemented or executed on processing circuitry comprising exponent circuitry, log circuitry, antilog circuitry, and subtractor circuitry, cause the processing circuitry to: receive a first instruction to perform a first activation function, wherein the first activation function is one of a plurality of activation functions implemented on the processing circuitry, and wherein the first activation function includes an exponent operation and a division operation; perform, using the exponent circuitry, the exponent operation using piecewise linear approximation; perform, using the log circuitry, logarithm operations on a numerator and a denominator of the division operation using piecewise linear approximation; perform, using the subtractor circuitry, a subtraction operation on results of the logarithm operations; perform, using the antilog circuitry, an antilogarithm operation on a result of the subtraction operation using piecewise linear approximation; and output a result of the first activation function based on a result of the antilogarithm operation. 10. The storage medium of claim 9, wherein: the first activation function further includes a multiplication operation; the processing circuitry further comprises adder circuitry; and the instructions further cause the processing circuitry to: perform, using the log circuitry, logarithm operations on operands of the multiplication operation using piecewise linear approximation; perform, using the adder circuitry, an addition operation on results of the logarithm operations on the operands of the multiplication operation; perform, using the antilog circuitry, an antilogarithm operation on a result of the addition operation using piecewise linear approximation; and return a result of the multiplication operation based on a result of the antilogarithm operation on the result of the addition operation. 11. The storage medium of claim 9, wherein: the exponent operation has a base of 2; and the instructions that cause the processing circuitry to perform, using the exponent circuitry, the exponent operation using piecewise linear approximation further cause the processing circuitry to: perform, using the exponent circuitry, the exponent operation using an antilogarithm base 2 operation, wherein the antilogarithm base 2 operation is performed using piecewise linear 12. The storage medium of claim 9, wherein: the first activation function is defined using one or more exponent operations that exclusively have a base of 2, wherein the one or more exponent operations include the exponent operation; and the instructions that cause the processing circuitry to perform, using the exponent circuitry, the exponent operation using piecewise linear approximation further cause the processing circuitry to: perform, using the exponent circuitry, the one or more exponent operations using one or more antilogarithm base 2 operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. 13. The storage medium of claim 9, wherein the first activation function is: a sigmoid function; a hyperbolic tangent function; or a swish function. 14. The storage medium of claim 9, wherein the plurality of activation functions include: a sigmoid function; a hyperbolic tangent function; a swish function; and a rectified linear unit function. 15. The storage medium of claim 9, wherein the plurality of activation functions are associated with one or more artificial neural networks. 16. A system, comprising: a processor; and an activation function circuit, comprising: decode circuitry to decode a first instruction to perform a first activation function, wherein the first activation function is one of a plurality of activation functions implemented on the activation function circuit, and wherein the first activation function includes an exponent operation and a division operation; exponent circuitry to perform the exponent operation using piecewise linear approximation; log circuitry to perform logarithm operations on a numerator and a denominator of the division operation using piecewise linear approximation; subtractor circuitry to perform a subtraction operation on results of the logarithm operations; antilog circuitry to perform an antilogarithm operation on a result of the subtraction operation using piecewise linear approximation; and output circuitry to output a result of the first activation function based on a result of the antilogarithm operation. 17. The system of claim 16, wherein: the first activation function further includes a multiplication operation; the log circuitry is further to perform logarithm operations on operands of the multiplication operation using piecewise linear approximation; the activation function circuit further comprises adder circuitry to perform an addition operation on results of the logarithm operations on the operands of the multiplication operation; and the antilog circuitry is further to perform an antilogarithm operation on a result of the addition operation using piecewise linear approximation, wherein a result of the antilogarithm operation is a result of the multiplication operation. 18. The system of claim 16, wherein: the exponent operation has a base of 2; and the exponent circuitry to perform the exponent operation using piecewise linear approximation is further to perform the exponent operation using an antilogarithm base 2 operation, wherein the antilogarithm base 2 operation is performed using piecewise linear approximation. 19. The system of claim 16, wherein the first activation function is: a sigmoid function; a hyperbolic tangent function; or a swish function. 20. The system of claim 16, wherein the plurality of activation functions include: a sigmoid function; a hyperbolic tangent function; a swish function; and a rectified linear unit function. 21. The system of claim 20, wherein: at least one of the sigmoid function, the hyperbolic tangent function, or the swish function is defined using one or more exponent operations that exclusively have a base of 2, wherein the one or more exponent operations include the exponent operation; and the exponent circuitry to perform the exponent operation using piecewise linear approximation is further to perform the one or more exponent operations using one or more antilogarithm base 2 operations, wherein the one or more antilogarithm base 2 operations are performed using piecewise linear approximation. 22. The system of claim 16, wherein the plurality of activation functions are associated with one or more artificial neural networks. Referenced Cited U.S. Patent Documents 5581661 December 3, 1996 Wang 5796925 August 18, 1998 Deville 20030220953 November 27, 2003 Allred 20040267854 December 30, 2004 Haider 20160026912 January 28, 2016 Falcon 20160380653 December 29, 2016 Sheikh 20170011288 January 12, 2017 Brothers 20180189651 July 5, 2018 Henry 20180373977 December 27, 2018 Carbon Other references • Mishra, Amit, and Krishna Raj. “Implementation of a digital neuron with nonlinear activation function using piecewise linear approximation technique.” 2007 International Conference on Microelectronics. IEEE, 2007: 1-4 (Year: 2007). • Jin, Zhanpeng. Autonomously reconfigurable artificial neural network on a chip. University of Pittsburgh, 2010: i-230 (Year: 2010). • Sheikh, Farhana, et al. “A 2.05 GVertices/s 151 mW lighting accelerator for 3D graphics vertex and pixel shading in 32 nm CMOS.” IEEEjournal of solid-state circuits 48.1 (2012): 128-139. (Year: • Eldridge, Schuyler, et al. “Neural network-based accelerators for transcendental function approximation.” Proceedings of the 24th edition of the great lakes symposium on VLSI. 2014: 169-174 (Year: 2014). • Avramović, Aleksej, et al. “An approximate logarithmic squaring circuit with error compensation for DSP applications.” Microelectronics Journal 45.3 (2014): 263-271. (Year: 2014). • Altaf, Muhammad Awais Bin, and Jerald Yoo. “A 1.83 muJ/classification, 8-channel, patient-specific epileptic seizure classification SoC using a non-linear support vector machine.” IEEE Transactions on Biomedical Circuits and Systems 10.1 (2015): 49-60. (Year: 2015). • Trinh, Hong-Phuc, Marc Duranton, and Michel Paindavoine. “Efficient data encoding for convolutional neural network application.” ACM Transactions on Architecture and Code Optimization (TACO) 11.4 (2015): 1-21. (Year: 2015). • Ramachandran, Prajit, Barret Zoph, and Quoc V. Le. “Searching for activation functions.” arXiv preprint arXiv:1710.05941 (2017): 1-13. (Year: 2017). • Ramachandran, Prajit, Barret Zoph, and Quoc V. Le. “Swish: a self-gated activation function.” arXiv preprint arXiv:1710.05941 7 (2017): 1-12 (Year: 2017). • Alcaide, Eric. “E-swish: Adjusting activations to different network depths.” arXiv preprint arXiv:1801.07145 (Jan. 2018): 1-13 (Year: 2018). • Bhanja, Mousumi, and Baidya Nath Ray. “OTA-based logarithmic circuit for arbitrary input signal and its application.” IEEE Transactions on Very Large Scale Integration (VLSI) Systems 24.2 (2015): 638-649. (Year: 2015). • Bucila, Cristian, et al., “Model Compression”, KDD 2006, Twelfth International Conference on Knowledge Discovery and Data Mining, Philadelphia, Pennsylvania, Aug. 20-23, 2006, 7 pages. • Courbariaux, Matthieu, et al., “Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1”, accessed at: https://arxiv.org/pdf/1602.02830v3, last updated Mar. 17, 2016, 11 pages. • Gysel, Philipp Matthias, “Ristretto: Hardware-Oriented Approximation of Convolutional Neural Networks”, University of California, Davis, accessed at: https://arxiv.org/abs/1605.06402, updated May 20, 2016, 73 pages. • Hashemi, Soheil, et al., “Understanding the Impact of Precision Quantization on the Accuracy and Energy of Neural Networks”, accessed at: https://arxiv.org/pdf/1612.03940, last updated Dec. 12, 2016, 6 pages. • Hinton, Geoffrey, et al., “Distilling the Knowledge in a Neural Network”, accessed at: https://arxiv.org/pdf/1503.02531, updated Mar. 9, 2015, 9 pages. • Lai, Liangzhen, et al., “CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUS”, accessed at: https://arxiv.org/pdf/1801.06601, updated Jan. 19, 2018, 10 pages. • Lai, Liangzhen, et al., “Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations”, accessed at: https://arxiv.org/pdf/1703.03073, updated Mar. 8, 2017, 10 pages. • Ramachandran, Prajit, et al., “Searching for Activation Functions”, accessed at: https://arxiv.org/pdf/1710.05941, updated Oct. 27, 2017, 13 pages. • Ramachandran, Prajit, et al., “Swish: A Self-Gated Activation Function”, accessed at: https://arxiv.org/pdf/1710.05941, updated Oct. 16, 2017, 12 pages. • Romero, Adriana, et al., “FitNets: Hints for Thin Deep Nets”, accessed at: https://arxiv.org/pdf/1412.6550, last updated Mar. 27, 2015, 13 pages. • Russakovsky, Olga, et al., “ImageNet Large Scale Visual Recognition Challenge”, accessed at: https://arxiv.org/pdf/1409.0575, last updated Jan. 30, 2015, 43 pages. Patent History Patent number : 11775805 : Jun 29, 2018 Date of Patent : Oct 3, 2023 Patent Publication Number 20190042922 Assignee Intel Coroporation (Santa Clara, CA) Kamlesh Pillai Gurpreet S. Kalsi Amit Mishra Primary Examiner Kamran Afshar Assistant Examiner Randall K. Baldwin Application Number : 16/023,441 Current U.S. Class Digital Neuron Processor (706/43) International Classification : G06N 3/048 (20230101); G06F 17/11 (20060101); G06N 3/063 (20230101); G06F 7/499 (20060101); G06F 7/556 (20060101); G06N 3/084 (20230101); G06F 17/17 (20060101); G06N 3/045 (20230101); G06N 3/044
{"url":"https://patents.justia.com/patent/11775805","timestamp":"2024-11-06T21:17:31Z","content_type":"text/html","content_length":"334948","record_id":"<urn:uuid:7a4da26a-e121-4c5e-b12d-24ad9265f16e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00346.warc.gz"}
Automated Testing for Whiley Wednesday, December 2nd, 2020 Recently, the online editor for Whiley was updated with some new features. Actually, the update represents a complete rewrite of the front-end in Whiley. Obviously, I am very excited about that! Previously it was written using raw (i.e. ugly) JavaScript, but now uses a framework for Functional Reactive Programming (called Web.wy). That has made a huge difference to how the code looks. Still, I’m not going to talk about that here. Rather, it is the new check feature (highlighted in red below) that I’m interested in: What is this new “check” feature then? In a nutshell, it automatically generates test inputs for functions and runs them through the code looking for problems (e.g. divide-by-zero, index-out-of-bounds, etc). In many ways, it is similar to the QuickCheck line of tools (with the added benefit that Whiley has first-class specifications). We can think of it as a “half-way” step towards formal verification. The key is that it is easier to check a program than it is to statically verify it (more on this below). Some might think that having a check feature like this doesn’t make sense when you also have static verification. But, I prefer to think of them as complementary. Thinking about a developer’s workflow, we might imagine checking a function first as we develop it (since this is easier and quicker) before, finally, attempting to statically verify it (since this is harder and may force the specification to be further refined). As a simple example to illustrate how it works, consider the following signature for the max(int[]) function: function max(int[] items) -> (int r) // items cannot be empty requires |items| > 0 // result not smaller than any in items ensures all { i in 0..|items| | r >= items[i] } // result is element in items ensures some { i in 0..|items| | r == items[i] }: We’re not really concerned with the implementation here (and we can assume it might have bugs). To check this function, the tool will do the following: 1. (Generate) The tool generates some number of valid inputs. That is, input values which meet the precondition. For our example above, that is any int[] array containing at least one element. 2. (Execute). The function in question is then executed using each of the generated inputs. This may lead to obvious failures (e.g. out-of-bounds or divide-by-zero errors), in which case we’ve already found some bugs! 3. (Check). For any execution which completed, the tool then checks the result against the postcondition. If the postcondition doesn’t hold then, again, this indicates a bug somewhere. The key is that the specification acts as the test oracle. In other words, having written the specification we get testing for free! And, the tool takes care of the difficult stuff so we don’t have to. For example, generating valid inputs efficiently is harder than it looks (more on this below). Technical Stuff The main technical challenge is that of efficiently generating input values under constraints. To start with, we can generate raw input values for the various data types in Whiley: • Primitives. These are pretty easy. For example, to generate values of type int, we’ll use some (configurable) domain (e.g. -2 .. 2) of values. We can easily sample uniformly from this domain as well (e.g. using Knuth’s Algorithm S). • Arrays. Here, we limit the maximum length of an array (e.g. length 2), and then enumerate all arrays upto this length. To generate values in the array, we recursively call the generator associated with the element type. For the array type int[] and assuming max length 2 and integers -1..1, we might generate: [], [-1], [0], [1], [-1,-1], [0,-1],[1,-2], etc. • Records. These are similar, except there is no maximum length. For each field, we recursively generate values using the generator associated with its type. For the record type {bool flag, int data} and assuming and integers -1..1, we might generate: {flag:false,data:-1}, {flag:true,data:-1}, {flag:false,data:0}, {flag:true,data:0}, {flag:false,data:1}, etc. • References. These are more tricky as we must allow for aliasing between heap locations. For a single reference type &int, we generate heap locations for each of the possible integer values. However, for two reference types (e.g. parameters &int x, &int y), we have to additionally consider both the aliased and non-aliased cases. • Lambdas. These are very challenging as, in principle, it requires enumerating all implementations of a function! Consider the lambda type function(int)->(int) — how do we generate values (i.e. implementations) of this type? There is no easy answer here, but there are some strategies we can use. Firstly, we can scour the source program looking for functions with the same signature. Secondly, we can generate simple input/output mappings (e.g. {-1=>-1, 0=>0, 1=>1}). We can go beyond this by applying rotations (e.g. {-1=>0, 0=>1, 1=>-1}). Given the ability to generate arbitrary values for a type, the more difficult question is how to generate values which meet some constraint. For example, consider this data type: type List<T> is { int length, T[] items where 0 <= length where length <= |items| The current approach we take is rather simplistic here. We simply generate all values of the underlying type and discard those which don’t satisfy the invariant. This works, but it has some problems: 1. (Scale). Enumerating all values of a complex data type upto a given bound can be expensive. We might easily need to generate 1K, 10K, 100K (or more) values, of which only a tiny fraction may satisfy our invariant. For our List example above, assuming a maximum array length of 3 and integer range -3..3, there are 2800 possible values of the underlying type of which only 1534 meet the invariant (55%). That’s not too bad, but as we get more invariants this changes quite quickly. Consider this example taken from an implementation of Tarjan’s algorithm for computing strongly connected compoenents: type Data is { bool[] visited, int[] lowlink, int[] index where |visited| == |lowlink| where |lowlink| == |index| In this case, there are 2.4M values of the underlying type, of which only 946K match the invariant (39%). And, as we add more invariants (e.g. that no element in lowlink can be negative) this ratio continues a downward trend. 2. (Sampling). Another interesting issue arises with sampling. Typically, we want to sample from large domains (i.e. rather than enumerate them) to allow checking in reasonable time. The problem is that, for complex data types with invariants, the probability that a given value sampled from the underlying domain meets the invariant is often very low. We have observed situations where sampling 1K or 10K values from such a domain produced exactly zero values meeting the invariant. In other words, it was completely useless in these cases (though not in others). Whilst these may seem like serious problems, they stem from the simplistic fashion in which we generate values for constrained types. Actually, it is possible to generate values of constrained types directly (i.e. without enumerating those of the underlying type) and efficiently. The next evolution of the check functionaliy will implement this and, hopefully, it will offer some big improvements. Pros / Cons There are several advantages to checking over static verification, some of which may be unexpected: • (Incomplete Specifications). We can check our programs even when the specifications are incomplete. Suppose we are developing two functions in tandem, say max(int[]) which uses max(int,int) as a subroutine. With static verification, we cannot verify max(int[]) before we have specified and verified max(int,int). However, we can start checking max(int[]) as soon as we have implementated max(int,int) (i.e. since checking just executes it). • (Loop Invariants). We can check our programs without writing any loop invariants. Since writing loop invariants can be challenging, this is an important benefit. With static verification, we cannot verify a function containing a loop without correctly specifying the loop invariant first. That can be a real pain, especially in the early stages of developing a program. • (Counterexamples). Whenever checking finds a problem in our program, it will also tell you the input values causing it and, furthermore, they always correspond to real failures in your program. Static verification, on the other hand, cannot always generate an execution trace and, even when it does, they are not always realisable traces of the program. That can make life quite Of course, there are some disadvantages with checking as well. Most importantly, checking doesn’t guarantee to find all problems with your program! But, in our experience, it usually finds most of them. There are also problems (discussed above) with generating enough values for complex data types to properly test our functions. Overall, despite some limitations, we find checking to be incredibly useful — especially as its free. Here’s a short demo to give you a taste: So, head on over to the online editor for Whiley and give it a go! Follow the discussion on Twitter or Reddit
{"url":"https://whileydave.com/2020/12/02/automated-testing-for-whiley/","timestamp":"2024-11-11T00:14:34Z","content_type":"text/html","content_length":"20287","record_id":"<urn:uuid:7451f69e-cd76-4666-a70e-ee3aa62a5d69>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00051.warc.gz"}
Logic Gates Logic Gates and its verification in Proteus Introduction to Logic Gates Imagine you want to create a machine that can make decisions based on certain conditions. To solve this problem, we can use logic gates, which help machines understand what to do in different situations. These gates can be combined to form more complex decision-making systems, enabling us to build various devices like calculators, computers, and even self-driving cars. A logic gate is like a simple question with a "yes" or "no" answer. For example, "Is it raining?" can be answered with "yes" (1) or "no" (0). Envision logic gates as the digital world's magical puzzle pieces that come together to form the complex circuits powering our everyday electronic devices. These gates operate with binary inputs, 0 or 1, and create an output based on the specific combination of input values. In a machine, a logic gate takes two inputs (0 or 1) and gives an output (also 0 or 1) based on the inputs. By combining different types of logic gates, such as AND, OR, NOT etc. machines can make more complex decisions. To better understand logic gates, we can learn about Boolean algebra, a type of math that deals with true (1) and false (0) values. This helps us design and optimize digital systems using logic gates, forming the basis for advanced digital systems like microprocessors and memory devices. Once we grasp the fundamentals of logic gates, we can explore the complexities of creating digital circuits, such as combinational and sequential circuits. This knowledge opens the door to cutting-edge fields like Very Large Scale Integration (VLSI) design, quantum computing, and novel computing architectures. In the next section, we will learn how to practically verify the workings of logic gates using Proteus software, an amazing platform for simulating electronic circuits and bringing them to life. This hands-on activity will provide a deeper understanding of logic gates and their functions, allowing you to appreciate their crucial role in the world of digital systems. Types of Logic Gates and IC Numbers Let's examine the seven main types of logic gates, their corresponding integrated circuit (IC) numbers, and their characteristics: • AND gate (IC 7408): Outputs 1 only when both inputs are 1. It uses a two-input AND operation, symbolized as A ⋅ B. AND gates can be cascaded to create multi-input AND gates. • OR gate (IC 7432): Outputs 1 when at least one input is 1. It uses a two-input OR operation, represented as A + B. OR gates can also be cascaded for multi-input functionality. • NOT gate (IC 7404): Inverts the input, turning 1 into 0 and vice versa. It uses a single-input NOT operation, denoted as ¬A or A̅. NOT gates are often combined with other gates to create complex • NAND gate (IC 7400): A combination of AND and NOT gates, outputs 0 only when both inputs are 1. It uses a two-input NAND operation, symbolized as A ⋅ B with an overbar. NAND gates are universal, meaning they can be combined to form any other gate. • NOR gate (IC 7402): A combination of OR and NOT gates, outputs 1 only when both inputs are 0. It uses a two-input NOR operation, represented as A + B with an overbar. Like NAND gates, NOR gates are also universal. • XOR gate (IC 74LS86): Outputs 1 when inputs are different and 0 when they are the same. It uses a two-input XOR operation, denoted as A ⊕ B. XOR gates are widely used in error detection and correction circuits. • XNOR gate (IC 4077): The opposite of XOR, outputs 1 when inputs are the same and 0 when they are different. It uses a two-input XNOR operation, symbolized as A ⊕ B with an overbar. XNOR gates play a key role in parity checkers and adder circuits. Requirements: Computer (Windows OS recommended), Proteus software Proteus Setup and General Direction To start exploring logic gates using Proteus, follow these simple steps: 1. Open the Proteus software. 2. Create a new project. 3. Add the IC for the desired logic gate to the design. 4. Connect input and output pins to switches and LED indicators, respectively. 5. Run the simulation and test the truth table by toggling the switches. Procedures of Doing the Experiment AND Gate Verification of all Logic Gate's Truth Table Using Proteus Software To validate the truth table of an AND logic gate through Proteus software simulation. Proteus software, 7408 AND gate IC, Logic State, and Logic Probe tools. An AND gate performs a logical conjunction between input signals. The output is true (1) only when all inputs are true (1). The truth table of an AND gate lists input-output combinations, providing a basis for verifying the gate's functionality in a simulation. Truth Table of AND Gate (7408) Input: A Input: B Output 1. Open Proteus, create a new project and open schematic capture. 2. Add the 7408 AND gate IC, Logic State, and Logic Probe (Big) from the pick device menu to the dashboard. 3. Place the AND gate, Logic State, and Logic Probe tools onto the schematic. 4. Connect the components, run the simulation, and observe the output for input combinations "00", "01", "10", "11". 5. Verify the simulation results against the expected truth table of an AND gate. The simulation results match the AND gate truth table, validating its correct functionality. The AND gate truth table has been successfully verified using Proteus software, confirming its proper operation in digital circuits. OR Gate Verification of all Logic Gate's Truth Table Using Proteus Software To validate the truth table of an OR logic gate through Proteus software simulation. Proteus software, 7432 OR gate IC, Logic State, and Logic Probe tools. An OR gate performs a logical disjunction between input signals. The output is true (1) if any of the inputs are true (1). The truth table of an OR gate lists input-output combinations, providing a basis for verifying the gate's functionality in a simulation. Truth Table of OR Gate (7432) Input: A Input: B Output 1. Open Proteus, create a new project and open schematic capture. 2. Add the 7432 OR gate IC, Logic State, and Logic Probe (Big) from the pick device menu to the dashboard. 3. Place the OR gate, Logic State, and Logic Probe tools onto the schematic. 4. Connect the components, run the simulation, and observe the output for input combinations "00", "01", "10", "11". 5. Verify the simulation results against the expected truth table of an OR gate. The simulation results match the OR gate truth table, validating its correct functionality. The OR gate truth table has been successfully verified using Proteus software, confirming its proper operation in digital circuits. NOT Gate Verification of all Logic Gate's Truth Table Using Proteus Software To validate the truth table of a NOT logic gate through Proteus software simulation. Proteus software, 7404 NOT gate IC, Logic State, and Logic Probe tools. A NOT gate, also known as an inverter, performs a logical negation of the input signal. The output is true (1) when the input is false (0) and vice versa. The truth table of a NOT gate lists input-output combinations, providing a basis for verifying the gate's functionality in a simulation. Truth Table of NOT Gate (7404) 1. Open Proteus, create a new project and open schematic capture. 2. Add the 7404 NOT gate IC, Logic State, and Logic Probe (Big) from the pick device menu to the dashboard. 3. Place the NOT gate, Logic State, and Logic Probe tools onto the schematic. 4. Connect the components, run the simulation, and observe the output for input values "0" and "1". 5. Verify the simulation results against the expected truth table of a NOT gate. The simulation results match the NOT gate truth table, validating its correct functionality. The NOT gate truth table has been successfully verified using Proteus software, confirming its proper operation in digital circuits. NAND Gate Verification of all Logic Gate's Truth Table Using Proteus Software To validate the truth table of a NAND logic gate through Proteus software simulation. Proteus software, 7400 NAND gate IC, Logic State, and Logic Probe tools. A NAND gate is a combination of an AND gate followed by a NOT gate. The output is false (0) only when all inputs are true (1). The truth table of a NAND gate lists input-output combinations, providing a basis for verifying the gate's functionality in a simulation. Truth Table of NAND Gate (7400) Input: A Input: B Output 1. Open Proteus, create a new project and open schematic capture. 2. Add the 7400 NAND gate IC, Logic State, and Logic Probe (Big) from the pick device menu to the dashboard. 3. Place the NAND gate, Logic State, and Logic Probe tools onto the schematic. 4. Connect the components, run the simulation, and observe the output for input combinations "00", "01", "10", "11". 5. Verify the simulation results against the expected truth table of a NAND gate. The simulation results match the NAND gate truth table, validating its correct functionality. The NAND gate truth table has been successfully verified using Proteus software, confirming its proper operation in digital circuits. NOR Gate Verification of all Logic Gate's Truth Table Using Proteus Software To validate the truth table of a NOR logic gate through Proteus software simulation. Proteus software, 7402 NOR gate IC, Logic State, and Logic Probe tools. A NOR gate is a combination of an OR gate followed by a NOT gate. The output is true (1) only when all inputs are false (0). The truth table of a NOR gate lists input-output combinations, providing a basis for verifying the gate's functionality in a simulation. Truth Table of NOR Gate (7402) Input: A Input: B Output 1. Open Proteus, create a new project and open schematic capture. 2. Add the 7402 NOR gate IC, Logic State, and Logic Probe (Big) from the pick device menu to the dashboard. 3. Place the NOR gate, Logic State, and Logic Probe tools onto the schematic. 4. Connect the components, run the simulation, and observe the output for input combinations "00", "01", "10", "11". 5. Verify the simulation results against the expected truth table of a NOR gate. The simulation results match the NOR gate truth table, validating its correct functionality. The NOR gate truth table has been successfully verified using Proteus software, confirming its proper operation in digital circuits. XOR Gate Verification of all Logic Gate's Truth Table Using Proteus Software To validate the truth table of an XOR logic gate through Proteus software simulation. Proteus software, 74LS86 XOR gate IC, Logic State, and Logic Probe tools. An XOR gate, also known as an Exclusive OR gate, performs a logical exclusive disjunction between input signals. The output is true (1) only when the number of true inputs is odd. The truth table of an XOR gate lists input-output combinations, providing a basis for verifying the gate's functionality in a simulation. Truth Table of XOR Gate (74LS86) Input: A Input: B Output 1. Open Proteus, create a new project and open schematic capture. 2. Add the 74LS86 XOR gate IC, Logic State, and Logic Probe (Big) from the pick device menu to the dashboard. 3. Place the XOR gate, Logic State, and Logic Probe tools onto the schematic. 4. Connect the components, run the simulation, and observe the output for input combinations "00", "01", "10", "11". 5. Verify the simulation results against the expected truth table of an XOR gate. The simulation results match the XOR gate truth table, validating its correct functionality. The XOR gate truth table has been successfully verified using Proteus software, confirming its proper operation in digital circuits. XNOR Gate Verification of all Logic Gate's Truth Table Using Proteus Software To validate the truth table of an XNOR logic gate through Proteus software simulation. Proteus software, 4077 XNOR gate IC, Logic State, and Logic Probe tools. An XNOR gate, also known as an Exclusive NOR gate, performs a logical equivalence between input signals. The output is true (1) when the number of true inputs is even. The truth table of an XNOR gate lists input-output combinations, providing a basis for verifying the gate's functionality in a simulation. Truth Table of XNOR Gate (4077) Input: A Input: B Output 1. Open Proteus, create a new project and open schematic capture. 2. Add the 4077 XNOR gate IC, Logic State, and Logic Probe (Big) from the pick device menu to the dashboard. 3. Place the XNOR gate, Logic State, and Logic Probe tools onto the schematic. 4. Connect the components, run the simulation, and observe the output for input combinations "00", "01", "10", "11". 5. Verify the simulation results against the expected truth table of an XNOR gate. The simulation results match the XNOR gate truth table, validating its correct functionality. The XNOR gate truth table has been successfully verified using Proteus software, confirming its proper operation in digital circuits. In this series of experiments, we have successfully verified the truth tables of AND, OR, NOT, NAND, NOR, XOR, and XNOR logic gates using Proteus software. The observed results match the expected truth tables, which validates the functionality of these gates in Proteus software. This provides students with an effective way to learn and understand the behavior of various logic gates and to develop their skills in digital circuit design and simulation.
{"url":"https://dmj.one/edu/su/course/csu1289/lab/verification-of-logic-gates-in-proteus","timestamp":"2024-11-08T20:45:58Z","content_type":"text/html","content_length":"29894","record_id":"<urn:uuid:cd49b520-3f09-43eb-a80b-6a7b99d13502>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00532.warc.gz"}
Copyright (c) The University of Glasgow 2002 License BSD-style (see the file libraries/base/LICENSE) Maintainer libraries@haskell.org Stability experimental Portability portable Safe Haskell Trustworthy Language Haskell98 Multi-way trees (aka rose trees) and forests. data Tree a Multi-way trees, also known as rose trees. Monad Tree Functor Tree Applicative Tree Foldable Tree Traversable Tree Generic1 Tree Eq a => Eq (Tree a) Data a => Data (Tree a) Read a => Read (Tree a) Show a => Show (Tree a) Generic (Tree a) NFData a => NFData (Tree a) Typeable (* -> *) Tree type Rep1 Tree type Rep (Tree a) Two-dimensional drawing flatten :: Tree a -> [a] The elements of a tree in pre-order. levels :: Tree a -> [[a]] Lists of nodes at each level of the tree. Building trees unfoldTree :: (b -> (a, [b])) -> b -> Tree a Build a tree from a seed value unfoldForest :: (b -> (a, [b])) -> [b] -> Forest a Build a forest from a list of seed values unfoldTreeM :: Monad m => (b -> m (a, [b])) -> b -> m (Tree a) Monadic tree builder, in depth-first order unfoldTreeM_BF :: Monad m => (b -> m (a, [b])) -> b -> m (Tree a) Monadic tree builder, in breadth-first order, using an algorithm adapted from Breadth-First Numbering: Lessons from a Small Exercise in Algorithm Design, by Chris Okasaki, ICFP'00. unfoldForestM_BF :: Monad m => (b -> m (a, [b])) -> [b] -> m (Forest a) Monadic forest builder, in breadth-first order, using an algorithm adapted from Breadth-First Numbering: Lessons from a Small Exercise in Algorithm Design, by Chris Okasaki, ICFP'00.
{"url":"https://treeowl-containers-general-merge.netlify.app/data-tree","timestamp":"2024-11-11T03:38:28Z","content_type":"application/xhtml+xml","content_length":"14605","record_id":"<urn:uuid:ddb55d3d-7b9e-4959-b711-20c91405820f>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00607.warc.gz"}
Coursnap app Lecture 13 - Validation Validation - Taking a peek out of sample. Model selection and data contamination. Cross validation. Lecture 13 of 18 of Caltech's Machine Learning Course - CS 156 by Professor Yaser Abu-Mostafa. View course materials in iTunes U Course App - https://itunes.apple.com/us/course/machine-learning/id515364596 and on the course website - http://work.caltech.edu/telecourse.html Produced in association with Caltech Academic Media Technologies under the Attribution-NonCommercial-NoDerivs Creative Commons License (CC BY-NC-ND). To learn more about this license, http://creativecommons.org/licenses/ by-nc-nd/3.0/ This lecture was recorded on May 15, 2012, in Hameetman Auditorium at Caltech, Pasadena, CA, USA. {'title': 'Lecture 13 - Validation', 'heatmap': [{'end': 578.777, 'start': 511.889, 'weight': 1}, {'end': 726.767, 'start': 669.047, 'weight': 0.719}, {'end': 828.318, 'start': 774.332, 'weight': 0.72}, {'end': 1346.575, 'start': 1292.89, 'weight': 0.718}, {'end': 1810.89, 'start': 1755.221, 'weight': 0.899}, {'end': 2226.444, 'start': 2117.863, 'weight': 0.889}], 'summary': 'The lecture covers topics on regularization in machine learning, emphasizing transition from constrained to unconstrained regularization, significance of validation and model selection, trade-off in choosing a validation set size, impact of early stopping, model selection algorithm based on validation errors, and the role of cross-validation in preventing overfitting and model selection, showcasing a 40% improvement in out-of-sample error for a digit classification task.', 'chapters': [{'end': 123.848, 'segs': [{'end': 105.357, 'src': 'embed', 'start': 36.551, 'weight': 0, 'content': [{'end': 40.835, 'text': 'and thereby reducing the VC dimension and improving the generalization property.', 'start': 36.551, 'duration': 4.284}, {'end': 52.701, 'text': 'to an unconstrained version, which creates an augmented error, in which no particular vector of weights is prohibited per se.', 'start': 41.716, 'duration': 10.985}, {'end': 58.864, 'text': 'But basically, you have a preference of weights based on a penalty that has to do with the constraint.', 'start': 53.221, 'duration': 5.643}, {'end': 66.525, 'text': 'And that equivalence will make us focus on the augmented error form of regularization in every practice we have.', 'start': 59.901, 'duration': 6.624}, {'end': 78.413, 'text': 'And the argument for it was to take the constrained version and look at it either as Lagrangian, which would be the formal way of solving it, or,', 'start': 67.366, 'duration': 11.047}, {'end': 80.374, 'text': 'as we did it in a geometric way,', 'start': 78.413, 'duration': 1.961}, {'end': 89.28, 'text': 'to find a condition that corresponds to minimization under a constraint and find that that would be locally equivalent to minimizing this in an unconstrained way.', 'start': 80.374, 'duration': 8.906}, {'end': 93.629, 'text': 'Then we went to the general form of a regularizer.', 'start': 91.007, 'duration': 2.622}, {'end': 98.312, 'text': 'And we called it capital omega of H.', 'start': 94.569, 'duration': 3.743}, {'end': 105.357, 'text': 'And it depends on small h rather than capital H, which was the other capital omega that we used in the VC analysis was.', 'start': 98.312, 'duration': 7.045}], 'summary': 'Regularization reduces vc dimension, improves generalization, uses penalty for weight preference.', 'duration': 68.806, 'max_score': 36.551, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk36551.jpg'}], 'start': 0.703, 'title': 'Regularization in machine learning', 'summary': 'Discusses regularization in machine learning, emphasizing the transition from constrained to unconstrained regularization, the concept of augmented error, and the preference of weights based on a penalty. it also highlights the importance of minimizing the augmented error for better out-of-sample error prediction.', 'chapters': [{'end': 123.848, 'start': 0.703, 'title': 'Regularization in machine learning', 'summary': 'Discusses regularization in machine learning, emphasizing the transition from constrained to unconstrained regularization, the concept of augmented error, and the preference of weights based on a penalty. it also highlights the importance of minimizing the augmented error for better out-of-sample error prediction.', 'duration': 123.145, 'highlights': ['The transition from constrained to unconstrained regularization involves explicitly forbidding some hypotheses to reduce VC dimension and improve generalization property, leading to an augmented error preference based on a penalty.', 'The focus on the augmented error form of regularization is emphasized in practice, with the argument for it being the equivalence to the constrained version, either through Lagrangian or geometric approach.', 'The discussion delves into the general form of a regularizer, denoted as capital omega of H, which depends on small h, and the formation of the augmented error as the in-sample error plus a specific term for better out-of-sample error prediction.']}], 'duration': 123.145, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk703.jpg', 'highlights': ['The transition from constrained to unconstrained regularization involves explicitly forbidding some hypotheses to reduce VC dimension and improve generalization property, leading to an augmented error preference based on a penalty.', 'The discussion delves into the general form of a regularizer, denoted as capital omega of H, which depends on small h, and the formation of the augmented error as the in-sample error plus a specific term for better out-of-sample error prediction.', 'The focus on the augmented error form of regularization is emphasized in practice, with the argument for it being the equivalence to the constrained version, either through Lagrangian or geometric approach.']}, {'end': 959.57, 'segs': [{'end': 215.583, 'src': 'embed', 'start': 189.365, 'weight': 2, 'content': [{'end': 193.266, 'text': 'the validation will tell you take lambda equals 0, and therefore no harm done.', 'start': 189.365, 'duration': 3.901}, {'end': 200.909, 'text': 'And, as you see, the choice of lambda is indeed critical because when you take the correct amount of lambda, which happens to be very small,', 'start': 194.443, 'duration': 6.466}, {'end': 205.734, 'text': 'in this case, the fit, which is the red curve, is very close to the target, which is the blue.', 'start': 200.909, 'duration': 4.825}, {'end': 209.397, 'text': 'Whereas if you push your luck and have more of the regularization,', 'start': 206.134, 'duration': 3.263}, {'end': 215.583, 'text': 'you end up constraining the fit so much that the red really wants to move toward the blue,', 'start': 209.397, 'duration': 6.186}], 'summary': 'Choosing the correct lambda is critical for model fit. small lambda leads to close fit, while excessive regularization constrains the fit.', 'duration': 26.218, 'max_score': 189.365, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk189365.jpg'}, {'end': 288.471, 'src': 'embed', 'start': 252.964, 'weight': 0, 'content': [{'end': 256.007, 'text': 'So why do we call it validation? And the distinction will be pretty important.', 'start': 252.964, 'duration': 3.043}, {'end': 261.043, 'text': "And then we'll go for model selection, a very important subject in machine learning.", 'start': 257.096, 'duration': 3.947}, {'end': 263.467, 'text': 'And it is the main task of validation.', 'start': 261.344, 'duration': 2.123}, {'end': 264.93, 'text': "That's what you use validation for.", 'start': 263.507, 'duration': 1.423}, {'end': 269.979, 'text': "And we'll find that model selection covers more territory than what the name may suggest to you.", 'start': 265.451, 'duration': 4.528}, {'end': 278.927, 'text': "Finally, we'll go to cross-validation, which is a type of validation that is very interesting, that allows you, if I give you a budget of n examples,", 'start': 271.343, 'duration': 7.584}, {'end': 282.688, 'text': 'to basically use all of them for validation and all of them for training.', 'start': 278.927, 'duration': 3.761}, {'end': 288.471, 'text': 'Which looks like cheating, because validation will look like a distinct activity from training, as we will see.', 'start': 283.349, 'duration': 5.122}], 'summary': 'Validation, model selection, and cross-validation in machine learning.', 'duration': 35.507, 'max_score': 252.964, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk252964.jpg'}, {'end': 388.369, 'src': 'embed', 'start': 361.569, 'weight': 1, 'content': [{'end': 371.082, 'text': 'So basically, what we did is concoct a term that we think captures the overfit complexity, overfit penalty.', 'start': 361.569, 'duration': 9.513}, {'end': 377.725, 'text': 'And then, instead of minimizing the in-sample, we minimize the in-sample plus that, and we call that the augmented error.', 'start': 372.063, 'duration': 5.662}, {'end': 381.186, 'text': 'And hopefully, the augmented error will be a better proxy for Eout.', 'start': 378.205, 'duration': 2.981}, {'end': 382.647, 'text': 'That was the deal.', 'start': 381.566, 'duration': 1.081}, {'end': 388.369, 'text': 'And we noticed that we are very, very inaccurate in the choice here.', 'start': 383.227, 'duration': 5.142}], 'summary': 'Concocted a term to capture overfit complexity, aiming for better proxy for eout.', 'duration': 26.8, 'max_score': 361.569, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk361569.jpg'}, {'end': 578.777, 'src': 'heatmap', 'start': 511.889, 'weight': 1, 'content': [{'end': 520.118, 'text': 'Now, if you take this quantity, and we are now treating it as an estimate for E out, a poor estimate, but nonetheless an estimate.', 'start': 511.889, 'duration': 8.229}, {'end': 525.904, 'text': 'We call it an estimate, because if you take the expected value of that with respect to the choice of K,', 'start': 520.739, 'duration': 5.165}, {'end': 531.41, 'text': 'with the probability distribution over the input space that generates X, what will that value be?', 'start': 525.904, 'duration': 5.506}, {'end': 534.601, 'text': 'Well, that is simply E out.', 'start': 532.679, 'duration': 1.922}, {'end': 541.166, 'text': 'So indeed, this quantity, the random variable here, has the correct expected value.', 'start': 535.221, 'duration': 5.945}, {'end': 543.228, 'text': "It's an unbiased estimate of E out.", 'start': 541.226, 'duration': 2.002}, {'end': 548.913, 'text': "But unbiased means that it's as likely to be here or here in terms of expected value.", 'start': 543.929, 'duration': 4.984}, {'end': 551.896, 'text': 'But we could be this, and this would be a good estimate.', 'start': 549.474, 'duration': 2.422}, {'end': 555.519, 'text': "Or we could be this, and this would be a terrible estimate, because you're not getting all of them.", 'start': 552.376, 'duration': 3.143}, {'end': 556.42, 'text': "You're just getting one of them.", 'start': 555.539, 'duration': 0.881}, {'end': 564.307, 'text': 'So if this guy swings very large, and I tell you this is an estimate of E out, and you get it here, this is what you will think E out is.', 'start': 557.201, 'duration': 7.106}, {'end': 567.39, 'text': 'So there is an error, but the error is not biased.', 'start': 565.028, 'duration': 2.362}, {'end': 568.831, 'text': "That's what this equation says.", 'start': 567.65, 'duration': 1.181}, {'end': 575.435, 'text': 'But we have to evaluate that swing, and the swing is obviously evaluated by the usual quantity, the variance.', 'start': 570.393, 'duration': 5.042}, {'end': 578.777, 'text': "And let's just call the variance sigma squared.", 'start': 576.696, 'duration': 2.081}], 'summary': 'The quantity serves as an unbiased estimate for e out, but its variance needs evaluation.', 'duration': 66.888, 'max_score': 511.889, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk511889.jpg'}, {'end': 631.481, 'src': 'embed', 'start': 605.871, 'weight': 6, 'content': [{'end': 611.075, 'text': 'Now, the notation we are going to have is that the number of points in the validation set is K.', 'start': 605.871, 'duration': 5.204}, {'end': 615.779, 'text': 'Remember that the number of points in the training set was N.', 'start': 611.075, 'duration': 4.704}, {'end': 622.624, 'text': 'So this will be K points, also generated according to the same rules, independently according to the probability distribution over the input space.', 'start': 615.779, 'duration': 6.845}, {'end': 631.481, 'text': 'And the error on that set, we are going to call eval, as in validation error.', 'start': 624.265, 'duration': 7.216}], 'summary': 'Validation set has k points, independently generated according to the same rules.', 'duration': 25.61, 'max_score': 605.871, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk605871.jpg'}, {'end': 726.767, 'src': 'heatmap', 'start': 669.047, 'weight': 0.719, 'content': [{'end': 674.169, 'text': 'So the main component is the expected value of this fellow, which we have seen before, expected value in a single point.', 'start': 669.047, 'duration': 5.122}, {'end': 679.631, 'text': 'And you just average linearly, as you did.', 'start': 674.609, 'duration': 5.022}, {'end': 682.472, 'text': 'This quantity happens to be E out.', 'start': 680.511, 'duration': 1.961}, {'end': 684.933, 'text': 'The expected value on one point is E out.', 'start': 682.572, 'duration': 2.361}, {'end': 687.854, 'text': 'Therefore, when you do that, you just get E out again.', 'start': 685.553, 'duration': 2.301}, {'end': 695.936, 'text': 'So indeed, again, the validation error is an unbiased estimate of the out-of-sample error,', 'start': 689.274, 'duration': 6.662}, {'end': 700.577, 'text': 'provided that all you did with the validation set is just measure the out-of-sample error.', 'start': 695.936, 'duration': 4.641}, {'end': 701.737, 'text': "You didn't use it in any way.", 'start': 700.637, 'duration': 1.1}, {'end': 707.779, 'text': "Now let's look at the variance, because that was our problem with the single-point estimate.", 'start': 703.077, 'duration': 4.702}, {'end': 709.359, 'text': "And let's see if there is an improvement.", 'start': 708.079, 'duration': 1.28}, {'end': 712.477, 'text': 'When you get the variance, you are going to take this formula.', 'start': 710.336, 'duration': 2.141}, {'end': 719.502, 'text': 'And then you are going to have a double summation, and have all cross terms of E between different points.', 'start': 713.178, 'duration': 6.324}, {'end': 726.767, 'text': "So you'll have the covariance between the value for k equals 1 and k equals 2, k equals 1 and k equals 3, et cetera.", 'start': 719.562, 'duration': 7.205}], 'summary': 'Validation error is an unbiased estimate of out-of-sample error, with variance addressed by double summation.', 'duration': 57.72, 'max_score': 669.047, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk669047.jpg'}, {'end': 701.737, 'src': 'embed', 'start': 674.609, 'weight': 3, 'content': [{'end': 679.631, 'text': 'And you just average linearly, as you did.', 'start': 674.609, 'duration': 5.022}, {'end': 682.472, 'text': 'This quantity happens to be E out.', 'start': 680.511, 'duration': 1.961}, {'end': 684.933, 'text': 'The expected value on one point is E out.', 'start': 682.572, 'duration': 2.361}, {'end': 687.854, 'text': 'Therefore, when you do that, you just get E out again.', 'start': 685.553, 'duration': 2.301}, {'end': 695.936, 'text': 'So indeed, again, the validation error is an unbiased estimate of the out-of-sample error,', 'start': 689.274, 'duration': 6.662}, {'end': 700.577, 'text': 'provided that all you did with the validation set is just measure the out-of-sample error.', 'start': 695.936, 'duration': 4.641}, {'end': 701.737, 'text': "You didn't use it in any way.", 'start': 700.637, 'duration': 1.1}], 'summary': 'Validation error is an unbiased estimate of out-of-sample error if not used.', 'duration': 27.128, 'max_score': 674.609, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk674609.jpg'}, {'end': 828.318, 'src': 'heatmap', 'start': 774.332, 'weight': 0.72, 'content': [{'end': 778.773, 'text': 'Because I had k squared elements, the fact that many of them dropped out is just to my advantage.', 'start': 774.332, 'duration': 4.441}, {'end': 781.034, 'text': 'I still have the 1 over k squared.', 'start': 779.393, 'duration': 1.641}, {'end': 787.082, 'text': 'And that gives me the better variance for the estimate based on eval than on a single point.', 'start': 781.514, 'duration': 5.568}, {'end': 793.41, 'text': 'This is your typical analysis of adding a bunch of independent estimates.', 'start': 787.222, 'duration': 6.188}, {'end': 795.132, 'text': 'So you get the sigma squared.', 'start': 793.811, 'duration': 1.321}, {'end': 797.015, 'text': 'That was the variance on a particular point.', 'start': 795.313, 'duration': 1.702}, {'end': 798.517, 'text': 'But now you divide it by k.', 'start': 797.315, 'duration': 1.202}, {'end': 803.884, 'text': 'Now we see a hope because even if the original estimate was this way,', 'start': 799.418, 'duration': 4.466}, {'end': 810.052, 'text': 'maybe we can have K big enough that we keep shrinking the error bar such that the E value itself as a random variable becomes.', 'start': 803.884, 'duration': 6.168}, {'end': 812.856, 'text': 'this, which is around E out, is what we want.', 'start': 810.052, 'duration': 2.804}, {'end': 815.078, 'text': 'And therefore, it becomes a reliable estimate.', 'start': 813.156, 'duration': 1.922}, {'end': 817.809, 'text': 'This looks promising.', 'start': 817.068, 'duration': 0.741}, {'end': 828.318, 'text': 'So now we can write the E val, which is a random variable, to be E out, which is the value we want plus or minus, something that averages to 0,', 'start': 818.169, 'duration': 10.149}], 'summary': 'Adding k independent estimates reduces variance, improving reliability.', 'duration': 53.986, 'max_score': 774.332, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk774332.jpg'}, {'end': 828.318, 'src': 'embed', 'start': 797.315, 'weight': 4, 'content': [{'end': 798.517, 'text': 'But now you divide it by k.', 'start': 797.315, 'duration': 1.202}, {'end': 803.884, 'text': 'Now we see a hope because even if the original estimate was this way,', 'start': 799.418, 'duration': 4.466}, {'end': 810.052, 'text': 'maybe we can have K big enough that we keep shrinking the error bar such that the E value itself as a random variable becomes.', 'start': 803.884, 'duration': 6.168}, {'end': 812.856, 'text': 'this, which is around E out, is what we want.', 'start': 810.052, 'duration': 2.804}, {'end': 815.078, 'text': 'And therefore, it becomes a reliable estimate.', 'start': 813.156, 'duration': 1.922}, {'end': 817.809, 'text': 'This looks promising.', 'start': 817.068, 'duration': 0.741}, {'end': 828.318, 'text': 'So now we can write the E val, which is a random variable, to be E out, which is the value we want plus or minus, something that averages to 0,', 'start': 818.169, 'duration': 10.149}], 'summary': 'Dividing by k can shrink error bar, making e value reliable estimate.', 'duration': 31.003, 'max_score': 797.315, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk797315.jpg'}], 'start': 123.948, 'title': 'Ml regularization and validation', 'summary': 'Covers the heuristic choice of regularization parameter omega, principled determination of lambda using validation, and emphasizes significance of validation and model selection. it also explores the process of estimating out-of-sample error using a validation set and the impact of sample size on error estimation.', 'chapters': [{'end': 411.406, 'start': 123.948, 'title': 'Regularization and validation in ml', 'summary': 'Discusses the heuristic choice of the regularization parameter omega, the principled determination of lambda using validation, and the critical choice of lambda in achieving a close fit to the target, emphasizing the significance of validation and model selection in machine learning.', 'duration': 287.458, 'highlights': ['The choice of omega in a practical situation is really a heuristic choice, guided by theory and certain goals. The selection of omega in practical situations is heuristic-driven, guided by theory and specific objectives.', 'The determination of lambda using validation is critical, as it allows for the selection of the correct amount of lambda, leading to a close fit to the target. Validation plays a crucial role in determining the correct amount of lambda, resulting in a close fit to the target.', 'Validation is a fundamental technique in machine learning, used for model selection and covers more territory than implied by its name. Validation is an essential technique in machine learning, primarily employed for model selection and encompasses a broader scope than its title suggests.', 'Cross-validation enables the use of all examples for both validation and training, providing a workaround to distinct activities, such as training and validation. Cross-validation allows for the utilization of all examples for both validation and training, offering a solution to the distinction between training and validation activities.', 'Regularization attempts to estimate the overfit penalty by minimizing the augmented error, which serves as a better proxy for the out-of-sample error. Regularization aims to estimate the overfit penalty by minimizing the augmented error, serving as a more suitable proxy for the out-of-sample error.']}, {'end': 959.57, 'start': 411.886, 'title': 'Validation and out-of-sample estimation', 'summary': 'Discusses the process of estimating out-of-sample error using a validation set, exploring the unbiased estimate of out-of-sample error, the impact of sample size on error estimation, and the trade-off between training and validation sets.', 'duration': 547.684, 'highlights': ['The validation error is an unbiased estimate of the out-of-sample error, provided the validation set is only used to measure the out-of-sample error and not for training. The validation error serves as an unbiased estimate of the out-of-sample error, assuming it is solely utilized for out-of-sample error measurement, enhancing the reliability of the estimation.', 'The variance of the validation error decreases as the sample size (k) increases, potentially leading to a more reliable estimate of the out-of-sample error. As the sample size (k) for the validation set increases, the variance of the validation error decreases, suggesting a potential improvement in the reliability of the out-of-sample error estimate.', 'The trade-off between the number of points used for training and validation is highlighted, as every point allocated to the validation set reduces the points available for training. Allocation of points to the validation set results in a trade-off, as it reduces the available points for training, emphasizing the need to carefully balance the allocation for effective model evaluation.']}], 'duration': 835.622, 'thumbnail': '', 'highlights': ['Validation is a fundamental technique in machine learning, primarily employed for model selection and encompasses a broader scope than its title suggests.', 'Regularization aims to estimate the overfit penalty by minimizing the augmented error, serving as a more suitable proxy for the out-of-sample error.', 'The determination of lambda using validation is critical, as it allows for the selection of the correct amount of lambda, leading to a close fit to the target.', 'The validation error serves as an unbiased estimate of the out-of-sample error, assuming it is solely utilized for out-of-sample error measurement, enhancing the reliability of the estimation.', 'As the sample size (k) for the validation set increases, the variance of the validation error decreases, suggesting a potential improvement in the reliability of the out-of-sample error estimate.', 'Cross-validation allows for the utilization of all examples for both validation and training, offering a solution to the distinction between training and validation activities.', 'Allocation of points to the validation set results in a trade-off, as it reduces the available points for training, emphasizing the need to carefully balance the allocation for effective model evaluation.']}, {'end': 1437.172, 'segs': [{'end': 1070.265, 'src': 'embed', 'start': 1037.603, 'weight': 1, 'content': [{'end': 1041.484, 'text': 'Therefore, if you increase k, you are moving in this direction.', 'start': 1037.603, 'duration': 3.881}, {'end': 1047.309, 'text': 'So I used to be here, and I used to expect that level of E out.', 'start': 1043.587, 'duration': 3.722}, {'end': 1050.551, 'text': "Now I am here, and I'm expecting that level of E out.", 'start': 1048.309, 'duration': 2.242}, {'end': 1053.761, 'text': "That doesn't look very promising.", 'start': 1052.481, 'duration': 1.28}, {'end': 1061.603, 'text': "I may get a reliable estimate, because I'm using bigger K, but I'm getting a reliable estimate of a worse quantity.", 'start': 1054.582, 'duration': 7.021}, {'end': 1070.265, 'text': 'If you want to take an extreme case, you are going to take this estimate and go to your customer and tell them what you expect the performance to be.', 'start': 1061.623, 'duration': 8.642}], 'summary': 'Increasing k may result in a reliable estimate but worse quantity, impacting performance.', 'duration': 32.662, 'max_score': 1037.603, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1037603.jpg'}, {'end': 1112.692, 'src': 'embed', 'start': 1082.233, 'weight': 2, 'content': [{'end': 1086.076, 'text': 'Now you want the estimate to be very reliable, and you forget about the quality of the hypothesis.', 'start': 1082.233, 'duration': 3.843}, {'end': 1090.12, 'text': 'So you keep increasing k, keep increasing k, keep increasing k.', 'start': 1086.457, 'duration': 3.663}, {'end': 1093.022, 'text': 'So you end up with a very, very reliable estimate.', 'start': 1090.12, 'duration': 2.902}, {'end': 1101.09, 'text': "The problem is that it's an estimate of a very, very poor quantity, because you use two examples to train, and you are basically in the noise.", 'start': 1093.963, 'duration': 7.127}, {'end': 1108.156, 'text': 'So the statement you are going to make to your customer in this case is that here is a system.', 'start': 1102.231, 'duration': 5.925}, {'end': 1112.692, 'text': "I am very sure that it's terrible.", 'start': 1109.708, 'duration': 2.984}], 'summary': 'Increasing k for reliable estimate leads to poor quality hypothesis, resulting in a terrible system.', 'duration': 30.459, 'max_score': 1082.233, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/ video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1082233.jpg'}, {'end': 1346.575, 'src': 'heatmap', 'start': 1292.89, 'weight': 0.718, 'content': [{'end': 1293.81, 'text': "So it's a funny situation.", 'start': 1292.89, 'duration': 0.92}, {'end': 1297.812, 'text': "I'm giving you g, and I'm giving you the validation estimate on g minus.", 'start': 1293.83, 'duration': 3.982}, {'end': 1299.893, 'text': "Why? Because that's the only estimate I have.", 'start': 1298.072, 'duration': 1.821}, {'end': 1304.355, 'text': "I cannot give you the estimate on g, because now if I get g, I don't have any guys to validate on.", 'start': 1299.973, 'duration': 4.382}, {'end': 1306.956, 'text': 'So you can see now the compromise.', 'start': 1305.275, 'duration': 1.681}, {'end': 1314.098, 'text': 'So now, under this scenario, I am not really using in performance by taking a bigger validation set,', 'start': 1308.016, 'duration': 6.082}, {'end': 1316.679, 'text': "because I'm going to put them back when I get the final hypothesis.", 'start': 1314.098, 'duration': 2.581}, {'end': 1324.723, 'text': "What I am losing here is that the validation error I'm reporting is a validation error on a different hypothesis than the one I am giving you.", 'start': 1316.98, 'duration': 7.743}, {'end': 1331.727, 'text': "And if the difference is big, then my estimate is bad, because I'm estimating on something other than what I'm giving you.", 'start': 1325.783, 'duration': 5.944}, {'end': 1335.609, 'text': "And that's what happens when you have large K.", 'start': 1332.447, 'duration': 3.162}, {'end': 1339.171, 'text': 'When you have large K, the discrepancy between G- and G is bigger.', 'start': 1335.609, 'duration': 3.562}, {'end': 1341.772, 'text': 'And I am giving you the estimate in G-.', 'start': 1339.911, 'duration': 1.861}, {'end': 1343.433, 'text': 'So that estimate is poor.', 'start': 1342.072, 'duration': 1.361}, {'end': 1346.575, 'text': 'And therefore, I get a bad estimate again.', 'start': 1344.173, 'duration': 2.402}], 'summary': 'Using a larger validation set leads to poor estimation with a large discrepancy between g- and g.', 'duration': 53.685, 'max_score': 1292.89, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1292890.jpg'}, {'end': 1416.421, 'src': 'embed', 'start': 1352.797, 'weight': 0, 'content': [{'end': 1360.379, 'text': 'After you do your thing, and you do your estimates, and as you will see further, you do your choices, you go and put all the examples to train on.', 'start': 1352.797, 'duration': 7.582}, {'end': 1363.06, 'text': 'Because this is your best bet of getting a good hypothesis.', 'start': 1360.479, 'duration': 2.581}, {'end': 1368.222, 'text': 'If your k is small, the validation error is not reliable.', 'start': 1364.02, 'duration': 4.202}, {'end': 1371.723, 'text': "It's a bad estimate, just because the variance of it is big.", 'start': 1368.302, 'duration': 3.421}, {'end': 1373.783, 'text': 'Small k, I have small k.', 'start': 1372.263, 'duration': 1.52}, {'end': 1375.444, 'text': "It's 1 over square root of k, so I'm doing this.", 'start': 1373.783, 'duration': 1.661}, {'end': 1379.955, 'text': 'If you get big K, the problem is not the reliability of the estimate.', 'start': 1376.314, 'duration': 3.641}, {'end': 1385.456, 'text': 'The problem is that the thing you are estimating is getting further and further away from the thing you are reporting.', 'start': 1379.995, 'duration': 5.461}, {'end': 1387.596, 'text': 'So now we have a compromise.', 'start': 1386.576, 'duration': 1.02}, {'end': 1390.877, 'text': "We don't want K to be too small in order not to have fluctuations.", 'start': 1387.856, 'duration': 3.021}, {'end': 1395.638, 'text': "We don't want K to be too big in order not to be too far from what we are reporting.", 'start': 1391.177, 'duration': 4.461}, {'end': 1399.719, 'text': 'And as usual in machine learning, there is a rule of thumb.', 'start': 1396.618, 'duration': 3.101}, {'end': 1402.64, 'text': 'And the rule of thumb is pretty simple.', 'start': 1401.359, 'duration': 1.281}, {'end': 1403.8, 'text': "That's why it's a rule of thumb.", 'start': 1402.96, 'duration': 0.84}, {'end': 1408.255, 'text': 'It says, take 1 fifth for validation.', 'start': 1404.733, 'duration': 3.522}, {'end': 1412.418, 'text': 'That usually gives you the best of both worlds.', 'start': 1409.676, 'duration': 2.742}, {'end': 1413.279, 'text': 'Nothing proved.', 'start': 1412.638, 'duration': 0.641}, {'end': 1415.32, 'text': 'You can find counterexamples.', 'start': 1413.959, 'duration': 1.361}, {'end': 1416.421, 'text': "I'm not going to argue with that.", 'start': 1415.34, 'duration': 1.081}], 'summary': 'Choose k value carefully for reliable validation error; rule of thumb: use 1/5 for validation.', 'duration': 63.624, 'max_score': 1352.797, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1352797.jpg'}], 'start': 961.946, 'title': 'Balancing validation set size', 'summary': 'Discusses the trade-off between reliability of estimate and expected error value in choosing a validation set size, emphasizing the importance of finding a balance. it suggests using 1/5th of the dataset for validation to achieve a balance.', 'chapters': [{'end': 1127.026, 'start': 961.946, 'title': 'Validation set and learning curves', 'summary': 'Discusses the trade-off between the reliability of the estimate of the validation set and the expected value of error, suggesting that a larger k for validation points may provide a reliable estimate but of a worse quantity, ultimately highlighting the importance of finding a balance.', 'duration': 165.08, 'highlights': ['The reliability of the estimate of the validation set is inversely proportional to the square root of the number of validation points, with a conclusion that using a small k leads to a bad estimate.', 'Increasing the number of validation points (K) may result in a more reliable estimate, but it could also lead to a worse expected value of error, indicating a trade-off between reliability and quality of the estimate.', 'Continuously increasing K for validation points can lead to a very reliable estimate but of a very poor quantity, ultimately resulting in an estimate of a very poor quality hypothesis, which may not be well-received by the customer.']}, {'end': 1437.172, 'start': 1127.987, 'title': 'Optimizing validation set size', 'summary': 'Discusses the importance of choosing the right validation set size for training machine learning models, emphasizing the trade-off between reliability of estimates and proximity to reported results, and suggests a rule of thumb of using 1/5th of the dataset for validation to achieve a balance.', 'duration': 309.185, 'highlights': ['The compromise between reliability of estimates and proximity to reported results when choosing the validation set size is emphasized, with a rule of thumb of using 1/5th of the dataset for validation suggested. Rule of thumb suggests using 1/5th of the dataset for validation', 'The impact of choosing a small k for validation is discussed, emphasizing the unreliability of the validation error estimate due to high variance. Small k leads to unreliable validation error estimate due to high variance', 'The trade-off when using a large k for validation is highlighted, focusing on the increasing discrepancy between the estimated and reported hypotheses, leading to poor estimates. Large k leads to increasing discrepancy between estimated and reported hypotheses, resulting in poor estimates']}], 'duration': 475.226, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk961946.jpg', 'highlights': ['Using a small k leads to a bad estimate due to high variance', 'Increasing K may result in a more reliable estimate but worse expected error value', 'Continuously increasing K can lead to a very reliable estimate but of poor quality', 'The compromise between reliability of estimates and proximity to reported results is emphasized', 'Rule of thumb suggests using 1/5th of the dataset for validation']}, {'end': 1931.791, 'segs': [{'end': 1486.838, 'src': 'embed', 'start': 1456.755, 'weight': 2, 'content': [{'end': 1459.836, 'text': "And this is a very important point, so let's talk about it in detail.", 'start': 1456.755, 'duration': 3.081}, {'end': 1470.668, 'text': 'Once I make my estimate affect the learning process, the set I am using is going to change nature.', 'start': 1461.366, 'duration': 9.302}, {'end': 1472.749, 'text': "So let's look at the situation that we have seen before.", 'start': 1470.849, 'duration': 1.9}, {'end': 1478.631, 'text': 'Remember this fellow? This was early stopping in neural networks.', 'start': 1473.689, 'duration': 4.942}, {'end': 1483.252, 'text': 'And let me magnify it for you to see the green curve.', 'start': 1478.951, 'duration': 4.301}, {'end': 1486.838, 'text': 'You see the green curve now? So there is a green curve.', 'start': 1483.272, 'duration': 3.566}], 'summary': 'Estimates impact learning process, changing nature of data. discusses early stopping in neural networks.', 'duration': 30.083, 'max_score': 1456.755, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1456755.jpg'}, {'end': 1635.183, 'src': 'embed', 'start': 1607.033, 'weight': 1, 'content': [{'end': 1612.457, 'text': "Let's say I have the test set which is unbiased, and I'm claiming that the validation set has an optimistic bias.", 'start': 1607.033, 'duration': 5.424}, {'end': 1615.758, 'text': 'Optimistic is not like, I mean, optimism is good.', 'start': 1613.257, 'duration': 2.501}, {'end': 1618.479, 'text': 'But here is optimism followed by disappointment.', 'start': 1616.138, 'duration': 2.341}, {'end': 1619.279, 'text': "It's deception.", 'start': 1618.539, 'duration': 0.74}, {'end': 1625.961, 'text': "We are just calling it optimistic to understand that it's always in the direction of thinking that the error will be smaller than it will actually turn out to be.", 'start': 1620.059, 'duration': 5.902}, {'end': 1629.822, 'text': "So let's say we have two hypotheses.", 'start': 1628.601, 'duration': 1.221}, {'end': 1635.183, 'text': "And for simplicity, let's have them have both the same E out.", 'start': 1631.362, 'duration': 3.821}], 'summary': 'Validation set may have optimistic bias, leading to deception in error estimation.', 'duration': 28.15, 'max_score': 1607.033, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1607033.jpg'}, {'end': 1813.652, 'src': 'heatmap', 'start': 1751.239, 'weight': 3, 'content': [{'end': 1754.461, 'text': 'because you are deliberately picking the minimum of the realization.', 'start': 1751.239, 'duration': 3.222}, {'end': 1759.622, 'text': "And it's very easy to see that the expected value of E is less than 0.5.", 'start': 1755.221, 'duration': 4.401}, {'end': 1770.219, 'text': 'The easiest thing to say is that if I have two variables like that, the probability that the minimum will be less than 1.5, is 75%.', 'start': 1759.622, 'duration': 10.597}, {'end': 1773.621, 'text': 'Because all you need to do is one of them being less than 1 half.', 'start': 1770.219, 'duration': 3.402}, {'end': 1778.864, 'text': 'So if the probability of being less than 1 half is 75%, you expect the expected value to be less than 1 half.', 'start': 1774.361, 'duration': 4.503}, {'end': 1779.784, 'text': "It's mostly there.", 'start': 1779.224, 'duration': 0.56}, {'end': 1781.085, 'text': 'The mass is mostly below.', 'start': 1779.944, 'duration': 1.141}, {'end': 1785.648, 'text': 'So now we realize this is what? This is an optimistic bias.', 'start': 1782.286, 'duration': 3.362}, {'end': 1789.458, 'text': 'And that is exactly the same what happened with the early stopping.', 'start': 1786.536, 'duration': 2.922}, {'end': 1792.84, 'text': "We picked a point because it's minimum on the realization.", 'start': 1789.858, 'duration': 2.982}, {'end': 1794.861, 'text': 'And that is what we reported.', 'start': 1793.8, 'duration': 1.061}, {'end': 1796.922, 'text': 'Because of that, the thing used to be this.', 'start': 1794.901, 'duration': 2.021}, {'end': 1797.923, 'text': 'But we wait.', 'start': 1797.342, 'duration': 0.581}, {'end': 1799.163, 'text': "When it's there, we ignore it.", 'start': 1798.023, 'duration': 1.14}, {'end': 1800.504, 'text': "When it's here, we take it.", 'start': 1799.504, 'duration': 1}, {'end': 1803.986, 'text': 'So now that introduces a bias, and that bias is optimistic.', 'start': 1801.144, 'duration': 2.842}, {'end': 1806.327, 'text': 'And that will be true for the validation set.', 'start': 1804.586, 'duration': 1.741}, {'end': 1810.89, 'text': 'So our discussion so far is based on just looking at the out.', 'start': 1806.768, 'duration': 4.122}, {'end': 1813.652, 'text': 'Now we are going to use it, and we are going to introduce a bias.', 'start': 1811.25, 'duration': 2.402}], 'summary': 'Expected value e is less than 0.5 due to optimistic bias in picking minimum.', 'duration': 62.413, 'max_score': 1751.239, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk1751239.jpg'}, {'end': 1836.6, 'src': 'embed', 'start': 1814.724, 'weight': 0, 'content': [{'end': 1822.93, 'text': 'Fortunately for us, the utility of validation in machine learning is so light that we are going to swallow the bias.', 'start': 1814.724, 'duration': 8.206}, {'end': 1824.731, 'text': 'Bias is minor.', 'start': 1823.811, 'duration': 0.92}, {'end': 1826.072, 'text': 'We are not going to push our luck.', 'start': 1824.771, 'duration': 1.301}, {'end': 1833.137, 'text': 'We are not going to estimate tons of stuff and keep adding bias until the validation error basically becomes training error in disguise.', 'start': 1826.112, 'duration': 7.025}, {'end': 1836.6, 'text': "We're just going to choose a parameter, choose between models and whatnot.", 'start': 1833.437, 'duration': 3.163}], 'summary': 'Validation utility in ml is light, minimizing bias, choosing parameters and models.', 'duration': 21.876, 'max_score': 1814.724, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1814724.jpg'}, {'end': 1887.852, 'src': 'embed', 'start': 1859.815, 'weight': 4, 'content': [{'end': 1864.839, 'text': 'And the choice of lambda, in the case we saw, happens to be a manifestation of this.', 'start': 1859.815, 'duration': 5.024}, {'end': 1866.34, 'text': "So let's talk about it.", 'start': 1865.379, 'duration': 0.961}, {'end': 1871.882, 'text': 'Basically, we are going to use the validation set more than once.', 'start': 1867.839, 'duration': 4.043}, {'end': 1873.163, 'text': "That's how we're going to make the choice.", 'start': 1871.962, 'duration': 1.201}, {'end': 1874.104, 'text': "So let's look.", 'start': 1873.383, 'duration': 0.721}, {'end': 1876.125, 'text': 'This is a diagram.', 'start': 1875.285, 'duration': 0.84}, {'end': 1877.486, 'text': "I'm going to build it up.", 'start': 1876.165, 'duration': 1.321}, {'end': 1882.47, 'text': "So let's build it up, and then I'll focus on it and look at how the diagram reflects the logic.", 'start': 1877.706, 'duration': 4.764}, {'end': 1887.852, 'text': 'We have M models that we are going to choose from.', 'start': 1884.251, 'duration': 3.601}], 'summary': 'Using validation set multiple times to choose from m models.', 'duration': 28.037, 'max_score': 1859.815, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1859815.jpg'}], 'start': 1437.192, 'title': 'Validation, bias, and model selection in ml', 'summary': 'Discusses validation in machine learning, emphasizing the shift from unbiased test error to biased validation error, the impact of early stopping, and the optimistic bias introduced in model selection by minimum validation error.', 'chapters': [{'end': 1625.961, 'start': 1437.192, 'title': 'Validation and bias in machine learning', 'summary': 'Discusses the concept of validation in machine learning, emphasizing the shift from unbiased test error to biased validation error and the impact of early stopping on learning process and model choices.', 'duration': 188.769, 'highlights': ['The shift from unbiased test error to biased validation error due to early stopping in the learning process is a key point in understanding the concept of validation in machine learning. Unbiased test error becomes biased validation error.', 'The impact of early stopping on the learning process and model choices is highlighted, illustrating how it changes the nature of the dataset being used. Early stopping affects the learning process and the nature of the dataset.', 'The difference between unbiased test set and optimistic biased validation set is emphasized, showing how the validation set tends to deceive by consistently underestimating the error. Validation set exhibits optimistic bias, consistently underestimating the error.']}, {'end': 1931.791, 'start': 1628.601, 'title': 'Biases in model selection', 'summary': 'Discusses how the minimum validation error introduces an optimistic bias in model selection, impacting the expected value of the out-of-sample error and the use of validation sets for model selection.', 'duration': 303.19, 'highlights': ['The expected value of E is less than 0.5 due to the rules of the game, with a 75% probability that the minimum will be less than 0.5. The expected value of E is impacted by the minimum validation error, with a 75% probability that the minimum will be less than 0.5, resulting in an optimistic bias.', 'The bias introduced by the minimum validation error is minor and does not significantly impact the reliability of the estimate for the out-of-sample error, especially with a respectable size validation set. The bias introduced by the minimum validation error is considered minor and not expected to significantly impact the reliability of the estimate for the out-of-sample error, particularly with a considerable size validation set.', 'The main use of validation sets is for model selection, and the choice of lambda is a manifestation of this. Validation sets are primarily used for model selection, and the choice of lambda serves as an example of this utilization.']}], 'duration': 494.599, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1437192.jpg', 'highlights': ['The shift from unbiased test error to biased validation error due to early stopping in the learning process is a key point in understanding the concept of validation in machine learning.', 'The difference between unbiased test set and optimistic biased validation set is emphasized, showing how the validation set tends to deceive by consistently underestimating the error.', 'The impact of early stopping on the learning process and model choices is highlighted, illustrating how it changes the nature of the dataset being used.', 'The expected value of E is less than 0.5 due to the rules of the game, with a 75% probability that the minimum will be less than 0.5, resulting in an optimistic bias.', 'The main use of validation sets is for model selection, and the choice of lambda is a manifestation of this.', 'The bias introduced by the minimum validation error is minor and does not significantly impact the reliability of the estimate for the out-of-sample error, especially with a respectable size validation set.']}, {'end': 2755.427, 'segs': [{'end': 2226.444, 'src': 'heatmap', 'start': 2117.863, 'weight': 0.889, 'content': [{'end': 2126.308, 'text': 'And from that training, which is training now on the model we chose, we are going to get the final hypothesis, which is g m star.', 'start': 2117.863, 'duration': 8.445}, {'end': 2136.015, 'text': 'Again, we are reporting the validation error on a reduced hypothesis, if you will, but reporting the hypothesis the best we can do,', 'start': 2127.388, 'duration': 8.627}, {'end': 2139.238, 'text': 'because we know that we get better out of sample when we add the examples.', 'start': 2136.015, 'duration': 3.223}, {'end': 2140.639, 'text': 'This is the regime.', 'start': 2139.258, 'duration': 1.381}, {'end': 2144.282, 'text': "Let's complete the slide.", 'start': 2141.86, 'duration': 2.422}, {'end': 2150.306, 'text': 'em that we introduced happens to be the value of the validation error on the reduced, as we discussed.', 'start': 2145.563, 'duration': 4.743}, {'end': 2152.969, 'text': 'And this is true for all of them.', 'start': 2151.728, 'duration': 1.241}, {'end': 2158.535, 'text': 'And then you pick the model m star that happens to have the smallest em.', 'start': 2153.49, 'duration': 5.045}, {'end': 2160.698, 'text': "And that is the one that you're going to report.", 'start': 2158.876, 'duration': 1.822}, {'end': 2164.542, 'text': "And you're going to restore your d, as we did before.", 'start': 2160.798, 'duration': 3.744}, {'end': 2166.184, 'text': 'And this is what you have.', 'start': 2164.662, 'duration': 1.522}, {'end': 2168.927, 'text': 'So this is the algorithm for model selection.', 'start': 2166.324, 'duration': 2.603}, {'end': 2172.86, 'text': "Now let's look at the bias.", 'start': 2170.939, 'duration': 1.921}, {'end': 2174.901, 'text': "I'm going to run an experiment to show you the bias.", 'start': 2172.88, 'duration': 2.021}, {'end': 2177.943, 'text': 'Let me put it here and just build towards it.', 'start': 2175.241, 'duration': 2.702}, {'end': 2181.745, 'text': 'What is the bias now? We know we selected a particular model.', 'start': 2178.463, 'duration': 3.282}, {'end': 2185.567, 'text': 'And we selected it based on d val.', 'start': 2182.666, 'duration': 2.901}, {'end': 2186.128, 'text': "That's the killer.", 'start': 2185.607, 'duration': 0.521}, {'end': 2194.011, 'text': 'So when you use the estimate to choose, the estimate is no longer reliable, because you particularly chose for it.', 'start': 2187.088, 'duration': 6.923}, {'end': 2198.113, 'text': 'So now it looks optimistic, because by choice, it has a good performance.', 'start': 2194.071, 'duration': 4.042}, {'end': 2202.075, 'text': 'Not because it has an inherently good performance, because you looked for the one with the good performance.', 'start': 2198.533, 'duration': 3.542}, {'end': 2207.397, 'text': 'So the expected value of this fellow.', 'start': 2204.316, 'duration': 3.081}, {'end': 2213.737, 'text': 'is now a biased estimate of the ultimate quantity we want, which is the out-of-sample error.', 'start': 2209.375, 'duration': 4.362}, {'end': 2217.639, 'text': 'So the eval, the sample thing, is biased of that.', 'start': 2213.938, 'duration': 3.701}, {'end': 2219.681, 'text': 'And we would like to evaluate that.', 'start': 2218.36, 'duration': 1.321}, {'end': 2222.322, 'text': 'So here is the illustration on the curve.', 'start': 2220.581, 'duration': 1.741}, {'end': 2226.444, 'text': "And I'm going to ask you a question about it, so you have to pay attention in order to be able to answer the question.", 'start': 2222.462, 'duration': 3.982}], 'summary': 'Training model, selecting smallest error, evaluating bias in model selection', 'duration': 108.581, 'max_score': 2117.863, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk2117863.jpg'}, {'end': 2432.824, 'src': 'embed', 'start': 2403.301, 'weight': 0, 'content': [{'end': 2405.403, 'text': 'And in every situation you will have, there will be a bias.', 'start': 2403.301, 'duration': 2.102}, {'end': 2407.365, 'text': 'How much bias depends on a number of factors.', 'start': 2405.683, 'duration': 1.682}, {'end': 2408.566, 'text': 'But the bias is there.', 'start': 2407.585, 'duration': 0.981}, {'end': 2422.151, 'text': "Let's try to find analytically a guideline for the type of bias.", 'start': 2418.146, 'duration': 4.005}, {'end': 2422.852, 'text': 'Why is that??', 'start': 2422.311, 'duration': 0.541}, {'end': 2429.039, 'text': "Because I'm using the validation set to estimate the out-of-sample error and I'm really claiming that it's close to the out-of-sample error.", 'start': 2422.952, 'duration': 6.087}, {'end': 2432.824, 'text': "And we realize that if I don't use it too much, I'll be OK.", 'start': 2429.26, 'duration': 3.564}], 'summary': 'Bias exists in every situation, its extent depends on factors. using the validation set to estimate out-of-sample error is crucial.', 'duration': 29.523, 'max_score': 2403.301, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk2403301.jpg'}, {'end': 2500.247, 'src': 'embed', 'start': 2467.037, 'weight': 3, 'content': [{'end': 2473.98, 'text': 'Models in the general sense, this could be M values of the regularization parameter lambda in a fixed situation.', 'start': 2467.037, 'duration': 6.943}, {'end': 2480.362, 'text': "But we're still making one of M choices.", 'start': 2474.4, 'duration': 5.962}, {'end': 2492.086, 'text': 'The way to look at it is to think that the validation set is actually used for training, but training on a very special hypothesis set.', 'start': 2480.582, 'duration': 11.504}, {'end': 2496.282, 'text': 'the hypothesis set of the finalists.', 'start': 2494.059, 'duration': 2.223}, {'end': 2500.247, 'text': 'What does that mean? So I have H1 up to HM.', 'start': 2496.983, 'duration': 3.264}], 'summary': 'Exploring m values of lambda for training on a specialized hypothesis set.', 'duration': 33.21, 'max_score': 2467.037, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk2467037.jpg'}, {'end': 2616.979, 'src': 'embed', 'start': 2589.331, 'weight': 1, 'content': [{'end': 2592.072, 'text': 'And the final, final hypothesis is this guy.', 'start': 2589.331, 'duration': 2.741}, {'end': 2598.488, 'text': 'Okay? is less than or equal to the out-of-sample error, plus a penalty for the model complexity.', 'start': 2592.596, 'duration': 5.892}, {'end': 2602.35, 'text': 'And the penalty, if you use even the simple union bound, will have that form.', 'start': 2598.948, 'duration': 3.402}, {'end': 2605.832, 'text': 'So you still have the 1 over square root of k.', 'start': 2602.89, 'duration': 2.942}, {'end': 2608.133, 'text': 'So you can always make it better by having more examples.', 'start': 2605.832, 'duration': 2.301}, {'end': 2612.076, 'text': 'But then you have a contribution because of the number of guys you are choosing from.', 'start': 2608.734, 'duration': 3.342}, {'end': 2614.817, 'text': "So if you are choosing between 10 guys, that's one thing.", 'start': 2612.696, 'duration': 2.121}, {'end': 2616.979, 'text': "If you are choosing between 100 guys, that's another.", 'start': 2615.097, 'duration': 1.882}], 'summary': 'Final hypothesis <= out-of-sample error + penalty for model complexity.', 'duration': 27.648, 'max_score': 2589.331, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk2589331.jpg'}, {'end': 2669.822, 'src': 'embed', 'start': 2644.77, 'weight': 2, 'content': [{'end': 2650.392, 'text': "And indeed, if you are looking for a choice of one parameter, let's say I'm picking the regularization parameter.", 'start': 2644.77, 'duration': 5.622}, {'end': 2657.677, 'text': "When you are actually picking the regression parameter and you haven't put a grid, you don't say I'm choosing between 1, 0.1, and 0.01,,", 'start': 2651.312, 'duration': 6.365}, {'end': 2659.518, 'text': 'et cetera a finite number.', 'start': 2657.677, 'duration': 1.841}, {'end': 2662.56, 'text': "I'm actually choosing the numerical value of lambda, whatever it be.", 'start': 2659.878, 'duration': 2.682}, {'end': 2667.881, 'text': 'So I could end up with lambda equals 0.127543.', 'start': 2662.82, 'duration': 5.061}, {'end': 2669.822, 'text': 'You are making a choice between an infinite number of guys.', 'start': 2667.881, 'duration': 1.941}], 'summary': 'Selecting a numerical value for the regularization parameter, e.g., lambda equals 0.127543.', 'duration': 25.052, 'max_score': 2644.77, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/ pics/o7zzaKd0Lkk2644770.jpg'}], 'start': 1931.831, 'title': 'Model selection and bias analysis', 'summary': 'Explores a model selection algorithm based on validation errors, discusses bias in the selected model, showcases an experiment with two models and their out-of-sample errors, and analyzes the impact of validation set size on bias. it also discusses the challenges of choosing between models and parameters, emphasizing the impact of the number of choices on the training and out-of-sample errors, as well as the relationship between the validation set size and the number of parameters.', 'chapters': [{'end': 2429.039, 'start': 1931.831, 'title': 'Model selection algorithm and bias analysis', 'summary': 'Explores a model selection algorithm based on validation errors and discusses the presence of bias in the selected model, showcasing an experiment with two models and their out-of-sample errors, while also analyzing the impact of validation set size on bias.', 'duration': 497.208, 'highlights': ['The algorithm for model selection involves training multiple models on a reduced set, evaluating their performance using the validation set, and selecting the model with the smallest validation error as the final hypothesis. The algorithm involves training multiple models on a reduced set, evaluating their performance using the validation set, and selecting the model with the smallest validation error as the final hypothesis.', 'The validation error of the selected model exhibits optimistic bias due to the selection process, leading to unreliable estimates of the out-of-sample error. The validation error of the selected model exhibits optimistic bias due to the selection process, leading to unreliable estimates of the out-of-sample error.', 'An experiment is conducted to showcase the systematic bias in the selection of models based on their validation errors, with the average validation error on the chosen model being compared to its actual out-of-sample error. An experiment is conducted to showcase the systematic bias in the selection of models based on their validation errors, with the average validation error on the chosen model being compared to its actual out-of-sample error.', 'The impact of validation set size on bias is analyzed, demonstrating that larger validation set sizes lead to more reliable estimates and reduced bias in model selection. The impact of validation set size on bias is analyzed, demonstrating that larger validation set sizes lead to more reliable estimates and reduced bias in model selection.']}, {'end': 2755.427, 'start': 2429.26, 'title': 'Choosing between models and parameters', 'summary': 'Discusses the challenges of choosing between models and parameters, emphasizing the impact of the number of choices on the training and out-of-sample errors, as well as the relationship between the validation set size and the number of parameters.', 'duration': 326.167, 'highlights': ['The impact of the number of choices on training and out-of-sample errors is discussed, emphasizing the penalty for model complexity and the logarithmic worsening effect with an increasing number of choices. The penalty for model complexity and the logarithmic worsening effect with an increasing number of choices are emphasized, illustrating how the number of choices impacts both training and out-of-sample errors.', 'The relationship between the validation set size and the number of parameters is explored, highlighting the impact of the VC dimension and the rule of thumb regarding the number of parameters that can be effectively chosen using the validation set. The impact of the VC dimension and the rule of thumb regarding the number of parameters that can be effectively chosen using the validation set is explained, emphasizing the relationship between the validation set size and the number of parameters.', 'The concept of choosing parameters with one degree of freedom, such as the regularization parameter and early stopping, is introduced, emphasizing the correspondence between the number of parameters and degrees of freedom. The concept of choosing parameters with one degree of freedom, such as the regularization parameter and early stopping, is introduced, highlighting the correspondence between the number of parameters and degrees of freedom.']}], 'duration': 823.596, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk1931831.jpg', 'highlights': ['The impact of validation set size on bias is analyzed, demonstrating that larger validation set sizes lead to more reliable estimates and reduced bias in model selection.', 'The penalty for model complexity and the logarithmic worsening effect with an increasing number of choices are emphasized, illustrating how the number of choices impacts both training and out-of-sample errors.', 'The concept of choosing parameters with one degree of freedom, such as the regularization parameter and early stopping, is introduced, highlighting the correspondence between the number of parameters and degrees of freedom.', 'The algorithm for model selection involves training multiple models on a reduced set, evaluating their performance using the validation set, and selecting the model with the smallest validation error as the final hypothesis.']}, {'end': 3733.777, 'segs': [{'end': 2915.765, 'src': 'embed', 'start': 2886.813, 'weight': 0, 'content': [{'end': 2893.618, 'text': 'When you give that as your estimate, your customer is as likely to be pleasantly surprised as unpleasantly surprised.', 'start': 2886.813, 'duration': 6.805}, {'end': 2899.803, 'text': 'And if your test set is big, they are likely not to be surprised at all, to be very close to your estimate.', 'start': 2894.459, 'duration': 5.344}, {'end': 2901.524, 'text': 'So there is no bias there.', 'start': 2900.603, 'duration': 0.921}, {'end': 2905.581, 'text': 'Now, the validation set is in between.', 'start': 2903.12, 'duration': 2.461}, {'end': 2909.543, 'text': "It's slightly contaminated, because it made few choices.", 'start': 2906.702, 'duration': 2.841}, {'end': 2915.765, 'text': 'And the wisdom here, please keep it slightly contaminated.', 'start': 2910.463, 'duration': 5.302}], 'summary': 'Estimate accurately to avoid customer surprises. validation set may be slightly contaminated.', 'duration': 28.952, 'max_score': 2886.813, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/ pics/o7zzaKd0Lkk2886813.jpg'}, {'end': 3081.181, 'src': 'embed', 'start': 3050.434, 'weight': 2, 'content': [{'end': 3052.795, 'text': 'And therefore, I can claim that the out-of-sample error is close.', 'start': 3050.434, 'duration': 2.361}, {'end': 3058.56, 'text': 'Because the bigger K is, the bigger the discrepancy between the training set and the full set.', 'start': 3052.876, 'duration': 5.684}, {'end': 3062.684, 'text': 'And therefore, the bigger the discrepancy between the hypothesis I get here and the hypothesis I get here.', 'start': 3059.021, 'duration': 3.663}, {'end': 3064.305, 'text': "So I'd like K to be small.", 'start': 3063.084, 'duration': 1.221}, {'end': 3068.328, 'text': "But also, I'd like K to be large.", 'start': 3065.546, 'duration': 2.782}, {'end': 3072.612, 'text': 'Because the bigger K is, the more reliable this estimate is for that.', 'start': 3068.889, 'duration': 3.723}, {'end': 3076.134, 'text': 'So I want K to have two conditions.', 'start': 3073.952, 'duration': 2.182}, {'end': 3081.181, 'text': 'It has to be small, and it has to be large.', 'start': 3076.575, 'duration': 4.606}], 'summary': 'K should be both small and large for reliable estimate.', 'duration': 30.747, 'max_score': 3050.434, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk3050434.jpg'}, {'end': 3127.28, 'src': 'embed', 'start': 3098.503, 'weight': 1, 'content': [{'end': 3102.307, 'text': 'And when you look at it, it will look first, and then you realize this is actually valid.', 'start': 3098.503, 'duration': 3.804}, {'end': 3109.633, 'text': "So what do we do? I'm going to describe one form of cross-validation, which is the simplest to describe, which is called leave-one-out.", 'start': 3102.687, 'duration': 6.946}, {'end': 3112.015, 'text': 'Other methods will be leave-more-out.', 'start': 3110.133, 'duration': 1.882}, {'end': 3112.555, 'text': "That's all.", 'start': 3112.215, 'duration': 0.34}, {'end': 3114.437, 'text': "But let's focus on leave-one-out.", 'start': 3113.256, 'duration': 1.181}, {'end': 3117.572, 'text': 'Here is the idea.', 'start': 3115.47, 'duration': 2.102}, {'end': 3120.454, 'text': 'You give me a data set of N.', 'start': 3118.092, 'duration': 2.362}, {'end': 3123.296, 'text': 'I am going to use N minus 1 of them for training.', 'start': 3120.454, 'duration': 2.842}, {'end': 3127.28, 'text': "That's good, because now I am very close to N.", 'start': 3124.257, 'duration': 3.023}], 'summary': 'Describes leave-one-out cross-validation method for training data, giving n-1 for training.', 'duration': 28.777, 'max_score': 3098.503, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk3098503.jpg'}, {'end': 3405.278, 'src': 'embed', 'start': 3375.596, 'weight': 6, 'content': [{'end': 3378.579, 'text': 'Never mind the fact that each of them is evaluated in a different hypothesis.', 'start': 3375.596, 'duration': 2.983}, {'end': 3387.306, 'text': 'So now I was able to use n minus 1 points to train, and that will give me something very close to what happens with n.', 'start': 3379.78, 'duration': 7.526}, {'end': 3389.188, 'text': "And I'm using n points to validate.", 'start': 3387.306, 'duration': 1.882}, {'end': 3394.834, 'text': 'The catch, obviously, these are not independent.', 'start': 3390.612, 'duration': 4.222}, {'end': 3401.617, 'text': 'These are not independent, because the examples were used to create the hypothesis, and some examples were used to evaluate them.', 'start': 3395.114, 'duration': 6.503}, {'end': 3405.278, 'text': 'And you will see that each of them is affected by the other,', 'start': 3402.297, 'duration': 2.981}], 'summary': 'Using n-1 points for training and n points for validation can give close results, but not independent due to shared examples in hypothesis creation and evaluation.', 'duration': 29.682, 'max_score': 3375.596, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk3375596.jpg'}, {'end': 3671.359, 'src': 'embed', 'start': 3641.421, 'weight': 3, 'content': [{'end': 3654.721, 'text': 'if your question is is the linear model better than the constant model in this case? The only thing you look in all of this is the cross-validation error.', 'start': 3641.421, 'duration': 13.3}, {'end': 3664.071, 'text': "So this guy, this guy, this guy averaged is the negative grade, because it's error, for the linear model.", 'start': 3656.403, 'duration': 7.668}, {'end': 3668.476, 'text': 'This guy, this guy, this guy averaged is the grade for the constant model.', 'start': 3664.692, 'duration': 3.784}, {'end': 3671.359, 'text': 'And as you see, the constant model wins.', 'start': 3668.997, 'duration': 2.362}], 'summary': 'Constant model outperforms linear model based on cross-validation error.', 'duration': 29.938, 'max_score': 3641.421, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk3641421.jpg'}, {'end': 3719.166, 'src': 'embed', 'start': 3694.366, 'weight': 5, 'content': [{'end': 3701.052, 'text': "There is a very strong indication that there is a positive slope involved, and maybe it's a linear model with a positive slope.", 'start': 3694.366, 'duration': 6.686}, {'end': 3702.353, 'text': "Don't go there.", 'start': 3701.212, 'duration': 1.141}, {'end': 3705.075, 'text': 'You can fool yourself into any pattern you want.', 'start': 3702.633, 'duration': 2.442}, {'end': 3707.277, 'text': 'Go about it in a systematic way.', 'start': 3705.695, 'duration': 1.582}, {'end': 3709.839, 'text': 'This is the quantity we know, the cross-validation error.', 'start': 3707.617, 'duration': 2.222}, {'end': 3711.14, 'text': 'This is the way to compute it.', 'start': 3710.099, 'duration': 1.041}, {'end': 3719.166, 'text': "We are going to take it as the indication, notwithstanding that there is an error bar, because it's a small sample, in this case 3.", 'start': 3711.5, 'duration': 7.666}], 'summary': 'Indication of positive slope in linear model with small sample size of 3.', 'duration': 24.8, 'max_score': 3694.366, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk3694366.jpg'}], 'start': 2755.587, 'title': 'Data contamination and cross-validation in ml', 'summary': 'Discusses the concept of data contamination in training and its impact on error estimates, emphasizing the trade-off between reliability and contamination. it also delves into the role of different data sets like training, test, and validation sets, and the dilemma of k in cross-validation. additionally, it covers the leave-one-out cross-validation method and the use of cross-validation for model selection, showcasing the process of leave-one-out cross-validation and using the cross-validation error to choose between linear and constant models.', 'chapters': [{'end': 3098.083, 'start': 2755.587, 'title': 'Data contamination in training', 'summary': 'Discusses the concept of data contamination in training and its impact on error estimates, emphasizing the trade-off between reliability and contamination. it also delves into the role of different data sets like training, test, and validation sets, and the dilemma of k in cross-validation.', 'duration': 342.496, 'highlights': ['The training set is totally contaminated, making it unreliable as an estimate for the out-of-sample error, while the test set remains unbiased and provides a reliable estimate. The validation set, although slightly contaminated due to making some choices, requires careful handling to maintain reliability. ', 'The chapter explores the dilemma of choosing the value of K in cross-validation, highlighting the conflicting requirements of K being small for minimizing discrepancy between training and full sets, and K being large for reliable estimates. It teases the introduction of new mathematics to address this dilemma. ']}, {'end': 3442.017, 'start': 3098.503, 'title': 'Cross-validation in machine learning', 'summary': 'Discusses the leave-one-out cross-validation method, where a model is trained on n-1 data points and validated on one point, leading to the estimation of the average cross-validation error.', 'duration': 343.514, 'highlights': ["The leave-one-out cross-validation method involves training a model on N-1 data points and validating on one point, leading to the estimation of the average cross-validation error. By repeatedly performing leave-one-out cross-validation for different indices n, a set of estimates for the validation error is obtained, providing an unbiased estimate of the model's performance.", 'The cross-validation error, ECV, is defined as the average of the validation error estimates from the leave-one-out cross-validation process. The cross-validation error is calculated by averaging the validation error estimates obtained from multiple training sessions, each followed by a single evaluation on a data point.', 'The examples used in cross-validation are not independent, leading to correlation between the validation errors, yet the effective number of points is close to N, as if they were independent. Despite the correlation between the validation errors due to using the same examples for training and evaluation, the effective number of points in cross-validation is found to be very close to N.']}, {'end': 3733.777, 'start': 3442.858, 'title': 'Cross-validation for model selection', 'summary': 'Illustrates the use of cross-validation for model selection, demonstrating the process of leave-one-out cross-validation and using the cross-validation error to choose between linear and constant models, where the constant model is found to be better.', 'duration': 290.919, 'highlights': ['The chapter illustrates the process of leave-one-out cross-validation for model selection, using a small sample size of 3 points. The process of leave-one-out cross-validation is demonstrated, emphasizing the small sample size of 3 points.', 'The cross-validation error is used to compare the performance of linear and constant models, with the constant model being found to have a lower error and thus chosen as the better model. The cross-validation error is utilized to compare the performance of linear and constant models, ultimately leading to the selection of the constant model due to its lower error.', 'The importance of avoiding heuristic approaches and relying on systematic computation of the cross-validation error is emphasized for accurate model selection. The emphasis is placed on the significance of avoiding heuristic approaches and instead relying on systematic computation of the cross-validation error for accurate model selection.']}], 'duration': 978.19, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/ pics/o7zzaKd0Lkk2755587.jpg', 'highlights': ['The training set is totally contaminated, making it unreliable as an estimate for the out-of-sample error, while the test set remains unbiased and provides a reliable estimate.', 'The leave-one-out cross-validation method involves training a model on N-1 data points and validating on one point, leading to the estimation of the average cross-validation error.', 'The chapter explores the dilemma of choosing the value of K in cross-validation, highlighting the conflicting requirements of K being small for minimizing discrepancy between training and full sets, and K being large for reliable estimates.', 'The cross-validation error is used to compare the performance of linear and constant models, with the constant model being found to have a lower error and thus chosen as the better model.', 'The validation set, although slightly contaminated due to making some choices, requires careful handling to maintain reliability.', 'The importance of avoiding heuristic approaches and relying on systematic computation of the cross-validation error is emphasized for accurate model selection.', 'The examples used in cross-validation are not independent, leading to correlation between the validation errors, yet the effective number of points is close to N, as if they were independent.']}, {'end': 5157.868, 'segs': [{'end': 3864.984, 'src': 'embed', 'start': 3837.902, 'weight': 3, 'content': [{'end': 3842.164, 'text': 'When you look at the training error, not surprisingly, the training error always goes down.', 'start': 3837.902, 'duration': 4.262}, {'end': 3844.045, 'text': 'What else is new? You have more.', 'start': 3842.625, 'duration': 1.42}, {'end': 3844.606, 'text': 'You fit better.', 'start': 3844.085, 'duration': 0.521}, {'end': 3851.878, 'text': 'The out-of-sample error, which I am evaluating on the points that were not involved at all in this process cross-validation or otherwise,', 'start': 3846.195, 'duration': 5.683}, {'end': 3855.059, 'text': 'just out-of-sample, totally I get this fellow.', 'start': 3851.878, 'duration': 3.181}, {'end': 3864.984, 'text': 'And the cross-validation error which I get from the 500 examples by excluding one point at a time and taking the average is remarkably similar to E out.', 'start': 3856.2, 'duration': 8.784}], 'summary': 'Training error decreases, out-of-sample error is similar to cross-validation error.', 'duration': 27.082, 'max_score': 3837.902, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/ video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk3837902.jpg'}, {'end': 3999.942, 'src': 'embed', 'start': 3904.713, 'weight': 0, 'content': [{'end': 3905.874, 'text': 'So this is a typical thing.', 'start': 3904.713, 'duration': 1.161}, {'end': 3907.055, 'text': "It's like unregularized.", 'start': 3905.914, 'duration': 1.141}, {'end': 3913.901, 'text': "Now, when you use the validation, and you stop at the sixth because the cross-validation error told you so, it's a nice, smooth surface.", 'start': 3907.936, 'duration': 5.965}, {'end': 3919.206, 'text': "It's not a perfect error, but it didn't put an effort where it didn't belong.", 'start': 3914.302, 'duration': 4.904}, {'end': 3924.831, 'text': 'And when you look at the bottom line, what is the in-sample error here? 0%.', 'start': 3919.846, 'duration': 4.985}, {'end': 3925.511, 'text': 'You got it perfect.', 'start': 3924.831, 'duration': 0.68}, {'end': 3926.912, 'text': 'We know that.', 'start': 3926.432, 'duration': 0.48}, {'end': 3930.536, 'text': 'And the out-of-sample error? 2.5%.', 'start': 3928.314, 'duration': 2.222}, {'end': 3934.479, 'text': "For digits, that's OK, but not great.", 'start': 3930.536, 'duration': 3.943}, {'end': 3939.71, 'text': 'Here we went, and now the in-sample error is 0.8.', 'start': 3936.687, 'duration': 3.023}, {'end': 3940.55, 'text': 'But we know better.', 'start': 3939.71, 'duration': 0.84}, {'end': 3942.632, 'text': "We don't care about the in-sample error going to 0.", 'start': 3940.651, 'duration': 1.981}, {'end': 3944.013, 'text': "That's actually harmful in some cases.", 'start': 3942.632, 'duration': 1.381}, {'end': 3947.256, 'text': 'The out-of-sample error is 1.5%.', 'start': 3944.634, 'duration': 2.622}, {'end': 3951.68, 'text': 'Now, if you are in the range, 2.5% means that you are performing 97.5%.', 'start': 3947.256, 'duration': 4.424}, {'end': 3952.461, 'text': 'Here, you are performing 8.5%.', 'start': 3951.68, 'duration': 0.781}, {'end': 3953.722, 'text': '40% improvement in that range is a lot.', 'start': 3952.461, 'duration': 1.261}, {'end': 3955.143, 'text': 'There is a limit here that you cannot exceed.', 'start': 3953.742, 'duration': 1.401}, {'end': 3967.871, 'text': 'So here, you are really doing great by just doing that simple thing.', 'start': 3963.528, 'duration': 4.343}, {'end': 3972.956, 'text': 'Now you can see why validation is considered, in this context, as similar to regularization.', 'start': 3967.911, 'duration': 5.045}, {'end': 3974.017, 'text': 'It does the same thing.', 'start': 3973.056, 'duration': 0.961}, {'end': 3981.103, 'text': 'It prevented overfitting, but it prevented overfitting by estimating the out-of-sample error, rather than estimating something else.', 'start': 3974.337, 'duration': 6.766}, {'end': 3994.602, 'text': 'Now, let me go and very quickly, and I will close the lecture with it, give you the more general ones.', 'start': 3985.407, 'duration': 9.195}, {'end': 3995.704, 'text': 'So we talked about leave one out.', 'start': 3994.622, 'duration': 1.082}, {'end': 3998.942, 'text': 'Seldom you use leave-one-out in real problems.', 'start': 3996.92, 'duration': 2.022}, {'end': 3999.942, 'text': 'And you can think of why.', 'start': 3999.022, 'duration': 0.92}], 'summary': 'Validation helps prevent overfitting, achieving 40% improvement in performance with 2.5% out-of-sample error.', 'duration': 95.229, 'max_score': 3904.713, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk3904713.jpg'}, {'end': 4120.91, 'src': 'embed', 'start': 4092.416, 'weight': 5, 'content': [{'end': 4096.12, 'text': 'Now, the reason I introduce this is because this is what I actually recommend to you.', 'start': 4092.416, 'duration': 3.704}, {'end': 4102.064, 'text': 'Very specifically, tenfold cross-validation works very nicely in practice.', 'start': 4096.74, 'duration': 5.324}, {'end': 4109.046, 'text': 'So the rule is, you take the total number of examples, divide them by 10, and that is the size of your validation set.', 'start': 4104.184, 'duration': 4.862}, {'end': 4113.328, 'text': 'You repeat it 10 times, and you get an estimate, and you are ready to go.', 'start': 4109.086, 'duration': 4.242}, {'end': 4114.127, 'text': "That's it.", 'start': 4113.768, 'duration': 0.359}, {'end': 4120.91, 'text': "I will stop here, and we'll take questions after a short break.", 'start': 4114.688, 'duration': 6.222}], 'summary': 'Recommend tenfold cross-validation for practical use.', 'duration': 28.494, 'max_score': 4092.416, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk4092416.jpg'}, {'end': 4464.714, 'src': 'embed', 'start': 4438.284, 'weight': 6, 'content': [{'end': 4443.385, 'text': 'So you can use it to choose the regularization parameter, and then you can also use it on the side to do something else.', 'start': 4438.284, 'duration': 5.101}, {'end': 4445.326, 'text': 'So both of them are active in the same problem.', 'start': 4443.465, 'duration': 1.861}, {'end': 4450.967, 'text': 'And in most of the practical cases you will encounter, you will actually be using both.', 'start': 4445.946, 'duration': 5.021}, {'end': 4459.026, 'text': 'Very seldom can you get away without regularization, and very seldom can you get away without validation.', 'start': 4451.917, 'duration': 7.109}, {'end': 4464.714, 'text': 'Someone is asking that this seems to be like a brute force method for model selection.', 'start': 4460.248, 'duration': 4.466}], 'summary': 'Regularization and validation are essential for model selection in practical cases.', 'duration': 26.43, 'max_score': 4438.284, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk4438284.jpg'}, {'end': 4509.955, 'src': 'embed', 'start': 4483.818, 'weight': 7, 'content': [{'end': 4489.882, 'text': "I can do model selection based on, I know my target function is symmetric, so I'm going to choose a symmetric model.", 'start': 4483.818, 'duration': 6.064}, {'end': 4491.884, 'text': 'That can be considered model selection.', 'start': 4490.303, 'duration': 1.581}, {'end': 4496.568, 'text': 'And there are a bunch of other logical methods to choose the model.', 'start': 4492.504, 'duration': 4.064}, {'end': 4500.69, 'text': 'The great thing about validation is that there are no assumptions whatsoever.', 'start': 4497.128, 'duration': 3.562}, {'end': 4502.451, 'text': 'You have capital M models.', 'start': 4501.45, 'duration': 1.001}, {'end': 4509.955, 'text': 'What are the models? What assumptions do they have? How close they are, or not close to the target function? Who cares? They have M models.', 'start': 4502.631, 'duration': 7.324}], 'summary': 'Model selection based on symmetry, validation with no assumptions, and m models.', 'duration': 26.137, 'max_score': 4483.818, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/ o7zzaKd0Lkk4483818.jpg'}, {'end': 4573.595, 'src': 'embed', 'start': 4544.848, 'weight': 9, 'content': [{'end': 4546.609, 'text': 'Is it used for that or not??', 'start': 4544.848, 'duration': 1.761}, {'end': 4552.904, 'text': 'Validation makes a principled choice.', 'start': 4549.375, 'duration': 3.529}, {'end': 4556.907, 'text': 'Regardless of the nature of that choice.', 'start': 4553.925, 'duration': 2.982}, {'end': 4561.109, 'text': "Let's say that I have a time series.", 'start': 4557.867, 'duration': 3.242}, {'end': 4567.272, 'text': "And one of the things in time series, let's say for financial forecasting, is that you can train, and then you get a system.", 'start': 4561.789, 'duration': 5.483}, {'end': 4570.794, 'text': 'And then the world is not stationary.', 'start': 4567.772, 'duration': 3.022}, {'end': 4573.595, 'text': "So a system that used to work doesn't work anymore.", 'start': 4571.174, 'duration': 2.421}], 'summary': 'Validation helps in adapting to non-stationary time series data for financial forecasting.', 'duration': 28.747, 'max_score': 4544.848, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/ o7zzaKd0Lkk/pics/o7zzaKd0Lkk4544848.jpg'}, {'end': 4696.842, 'src': 'embed', 'start': 4670.296, 'weight': 8, 'content': [{'end': 4675.177, 'text': "And if I have the other guy, which is completely swinging, it's very easy to pull it down, and I get worse effect of the bias.", 'start': 4670.296, 'duration': 4.881}, {'end': 4680.339, 'text': 'So whenever you minimize the error bar, you minimize the vulnerability to bias as well.', 'start': 4675.717, 'duration': 4.622}, {'end': 4683.24, 'text': "That's the only thing that cross-validation does.", 'start': 4681.159, 'duration': 2.081}, {'end': 4688.362, 'text': 'It allows you to use a lot of examples to validate, while using a lot of examples to train.', 'start': 4683.32, 'duration': 5.042}, {'end': 4689.402, 'text': "That's the key.", 'start': 4688.742, 'duration': 0.66}, {'end': 4696.842, 'text': 'Going back to the previous lecture, a question on that.', 'start': 4693.259, 'duration': 3.583}], 'summary': 'Minimize error bar to reduce bias vulnerability. cross-validation uses many examples to validate and train.', 'duration': 26.546, 'max_score': 4670.296, 'thumbnail': 'https:// coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk4670296.jpg'}, {'end': 4871.888, 'src': 'embed', 'start': 4846.98, 'weight': 10, 'content': [{'end': 4853.363, 'text': "Say there's a scenario where you find your model through cross-validation, and then you test the out-of-sample error.", 'start': 4846.98, 'duration': 6.383}, {'end': 4857.825, 'text': 'But somehow you test a different model, and it gives you a smaller out-of-sample error.', 'start': 4853.723, 'duration': 4.102}, {'end': 4865.782, 'text': 'Should you still keep the one you found through cross-validation? I went through this learning and came up with a model.', 'start': 4857.845, 'duration': 7.937}, {'end': 4871.888, 'text': 'Someone else went through whatever exercise they have and came up with a final hypothesis in this case.', 'start': 4866.483, 'duration': 5.405}], 'summary': 'When testing a different model, it gave a smaller out-of-sample error than the one found through cross-validation.', 'duration': 24.908, 'max_score': 4846.98, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk4846980.jpg'}], 'start': 3735.799, 'title': 'Importance of cross-validation in model selection', 'summary': 'Demonstrates the importance of cross-validation for model selection, showing a 40% improvement in out-of-sample error for a digit classification task. it discusses preventing overfitting, recommends tenfold cross-validation, emphasizes the use of regularization and cross-validation in model selection, and discusses model validation simplicity, time-dependent data, and cross-validation benefits in large datasets.', 'chapters': [{'end': 3974.017, 'start': 3735.799, 'title': 'Cross-validation for model selection', 'summary': 'Demonstrates the use of cross-validation to select the optimal number of features for a nonlinear transformation, resulting in a 40% improvement in out-of-sample error in a digit classification task.', 'duration': 238.218, 'highlights': ['The use of cross-validation to select the optimal number of features for a nonlinear transformation resulted in a 40% improvement in out-of-sample error in a digit classification task.', 'The cross-validation error was remarkably similar to the out-of-sample error, tracking it very nicely and guiding the selection of the optimal number of features.', 'Validation through cross-validation led to a 40% improvement in performance, demonstrating its similarity to regularization in this context.', 'The in-sample error reduced to 0% after feature selection based on cross-validation, while the out-of-sample error improved to 2.5% in the selected model, indicating a significant enhancement in performance.']}, {'end': 4459.026, 'start': 3974.337, 'title': 'Cross-validation and model selection', 'summary': 'Discusses the importance of cross-validation in preventing overfitting, recommends tenfold cross-validation as a practical approach, and emphasizes the use of both regularization and cross-validation in model selection.', 'duration': 484.689, 'highlights': ['The chapter discusses the importance of cross-validation in preventing overfitting. It prevented overfitting by estimating the out-of-sample error, rather than estimating something else.', 'Recommends tenfold cross-validation as a practical approach. The rule is to take the total number of examples, divide them by 10, and repeat it 10 times to get an estimate.', 'Emphasizes the use of both regularization and cross-validation in model selection. One of the biggest utilities for validation is to choose the regularization parameter, and both of them are active in the same problem.']}, {'end': 4846.86, 'start': 4460.248, 'title': 'Model selection and validation', 'summary': 'Discusses the concept of model selection and validation, emphasizing the simplicity and lack of assumptions in the validation process, the use of validation for time-dependent data, the comparison between validation and cross-validation, and the potential benefits of cross-validation in large datasets, citing the example of the netflix case.', 'duration': 386.612, 'highlights': ['The validation process is extremely simple and immune to assumptions, allowing for model selection without requiring specific assumptions about the data. Validation process is simple, immune to assumptions, and allows for model selection without specific assumptions.', 'Validation can be used for making principled choices in the case of time-dependent data, such as in tracking the evolution of systems. Validation can be used for making principled choices in the case of time-dependent data, such as in tracking the evolution of systems.', 'Cross-validation allows for using a lot of examples to validate while using a lot of examples to train, minimizing the vulnerability to bias, especially in large datasets Cross-validation allows for using many examples to validate and train, minimizing bias, especially in large datasets.']}, {'end': 5157.868, 'start': 4846.98, 'title': 'Cross-validation and model evaluation', 'summary': 'Discusses the evaluation of models through cross-validation, addressing the issue of model selection based on out-of-sample error, the impact of sample size on cross-validation, and the behavior of bias with an increasing number of validation points.', 'duration': 310.888, 'highlights': ['The chapter addresses the issue of model selection based on out-of-sample error, questioning the validity of selecting a model found through cross-validation over a different model that yields a smaller out-of-sample error. out-of-sample error', 'It discusses the impact of sample size on cross-validation, considering the relative size of the error bar, correlations, and bias, and whether it is a good idea to resample to enlarge the current sample. sample size, error bar, correlations, bias', 'The chapter also explores the behavior of bias when increasing the number of points left out, and how the bias is a function of how it is used rather than something inherent in the estimate. number of points left out, bias']}], 'duration': 1422.069, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/o7zzaKd0Lkk/pics/o7zzaKd0Lkk3735799.jpg', 'highlights': ['The use of cross-validation resulted in a 40% improvement in out-of-sample error for a digit classification task', 'Validation through cross-validation led to a 40% improvement in performance, demonstrating its similarity to regularization', 'The in-sample error reduced to 0% after feature selection based on cross-validation, indicating a significant enhancement in performance', 'The cross-validation error tracked the out-of-sample error very nicely, guiding the selection of the optimal number of features', 'The chapter discusses the importance of cross-validation in preventing overfitting by estimating the out-of-sample error', 'Recommends tenfold cross-validation as a practical approach for model selection', 'Emphasizes the use of both regularization and cross-validation in model selection', 'The validation process is extremely simple and immune to assumptions, allowing for model selection without requiring specific assumptions about the data', 'Cross-validation allows for using many examples to validate and train, minimizing bias, especially in large datasets', 'Validation can be used for making principled choices in the case of time-dependent data, such as in tracking the evolution of systems', 'The chapter addresses the issue of model selection based on out-of-sample error and the impact of sample size on cross-validation']}], 'highlights': ['The use of cross-validation resulted in a 40% improvement in out-of-sample error for a digit classification task', 'The in-sample error reduced to 0% after feature selection based on cross-validation, indicating a significant enhancement in performance', 'The cross-validation error tracked the out-of-sample error very nicely, guiding the selection of the optimal number of features', 'The transition from constrained to unconstrained regularization involves explicitly forbidding some hypotheses to reduce VC dimension and improve generalization property, leading to an augmented error preference based on a penalty', 'The discussion delves into the general form of a regularizer, denoted as capital omega of H, which depends on small h, and the formation of the augmented error as the in-sample error plus a specific term for better out-of-sample error prediction', 'The focus on the augmented error form of regularization is emphasized in practice, with the argument for it being the equivalence to the constrained version, either through Lagrangian or geometric approach', 'The impact of validation set size on bias is analyzed, demonstrating that larger validation set sizes lead to more reliable estimates and reduced bias in model selection', 'The penalty for model complexity and the logarithmic worsening effect with an increasing number of choices are emphasized, illustrating how the number of choices impacts both training and out-of-sample errors', 'The algorithm for model selection involves training multiple models on a reduced set, evaluating their performance using the validation set, and selecting the model with the smallest validation error as the final hypothesis', 'The training set is totally contaminated, making it unreliable as an estimate for the out-of-sample error, while the test set remains unbiased and provides a reliable estimate', 'The leave-one-out cross-validation method involves training a model on N-1 data points and validating on one point, leading to the estimation of the average cross-validation error', 'The chapter explores the dilemma of choosing the value of K in cross-validation, highlighting the conflicting requirements of K being small for minimizing discrepancy between training and full sets, and K being large for reliable estimates', 'The cross-validation error is used to compare the performance of linear and constant models, with the constant model being found to have a lower error and thus chosen as the better model', 'The validation set, although slightly contaminated due to making some choices, requires careful handling to maintain reliability', 'The importance of avoiding heuristic approaches and relying on systematic computation of the cross-validation error is emphasized for accurate model selection', 'The examples used in cross-validation are not independent, leading to correlation between the validation errors, yet the effective number of points is close to N, as if they were independent', 'The impact of early stopping on the learning process and model choices is highlighted, illustrating how it changes the nature of the dataset being used', 'The expected value of E is less than 0.5 due to the rules of the game, with a 75% probability that the minimum will be less than 0.5, resulting in an optimistic bias', 'The main use of validation sets is for model selection, and the choice of lambda is a manifestation of this', 'The bias introduced by the minimum validation error is minor and does not significantly impact the reliability of the estimate for the out-of-sample error, especially with a respectable size validation set', 'The determination of lambda using validation is critical, as it allows for the selection of the correct amount of lambda, leading to a close fit to the target', 'The validation error serves as an unbiased estimate of the out-of-sample error, assuming it is solely utilized for out-of-sample error measurement, enhancing the reliability of the estimation', 'As the sample size (k) for the validation set increases, the variance of the validation error decreases, suggesting a potential improvement in the reliability of the out-of-sample error estimate', 'Cross-validation allows for the utilization of all examples for both validation and training, offering a solution to the distinction between training and validation activities', 'Allocation of points to the validation set results in a trade-off, as it reduces the available points for training, emphasizing the need to carefully balance the allocation for effective model evaluation', 'Using a small k leads to a bad estimate due to high variance', 'Increasing K may result in a more reliable estimate but worse expected error value', 'Continuously increasing K can lead to a very reliable estimate but of poor quality', 'The compromise between reliability of estimates and proximity to reported results is emphasized', 'Rule of thumb suggests using 1/5th of the dataset for validation', 'The concept of choosing parameters with one degree of freedom, such as the regularization parameter and early stopping, is introduced, highlighting the correspondence between the number of parameters and degrees of freedom', 'The chapter discusses the importance of cross-validation in preventing overfitting by estimating the out-of-sample error', 'Recommends tenfold cross-validation as a practical approach for model selection', 'Emphasizes the use of both regularization and cross-validation in model selection', 'The validation process is extremely simple and immune to assumptions, allowing for model selection without requiring specific assumptions about the data', 'Cross-validation allows for using many examples to validate and train, minimizing bias, especially in large datasets', 'Validation can be used for making principled choices in the case of time-dependent data, such as in tracking the evolution of systems', 'The chapter addresses the issue of model selection based on out-of-sample error and the impact of sample size on
{"url":"https://learn.coursnap.app/staticpage/o7zzaKd0Lkk.html","timestamp":"2024-11-02T15:05:46Z","content_type":"text/html","content_length":"99882","record_id":"<urn:uuid:b3af94ee-203f-4ef4-9135-e2d16f66279c>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00846.warc.gz"}
Math Science Quest (MSQ) teaches mathematical analysis and scientific reasoning skills that are the essence of scientific method. Each Quest presents a mathematical puzzle along with an elegant simulation of scientific method that allows users to derive clues to the symbols in Solutions by composing careful experiments and analyzing the resulting data. MSQ increases analytical capabilities and intuitive thinking - resulting in integrated learning.
{"url":"http://msqmulti.org/page/help","timestamp":"2024-11-03T09:24:06Z","content_type":"text/html","content_length":"8001","record_id":"<urn:uuid:105a72e1-0af7-4b5c-83f3-58c6cd2d76e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00503.warc.gz"}
Major League Baseball – Did Banning “The Shift” work? image from https://www.mlb.com/glossary/rules/defensive-shift-limits in 2023 Major League Baseball made a rule restricting certain defensive players being in certain portions of the field (See here for the actual definition of the rule). This was done to combat “The Shift“, a defensive technique which was popularized by the Tampa Bay Rays in 2006, where one side of the field is overloaded with players. I had the notion at the time that banning the Shift was just a band-aid measure and would have no impact. Since the ban was in 2023, we have had one full season to evaluate any impact of the ban. History of The Shift The idea of shifting players to counter power hitters’ tendencies to pull the ball to one side goes to the early parts of baseball. It disappeared for a long time, however, until Originally, the Rays’ had the idea on how to shut down David “Big Papi” Ortiz of the Boston Red Sox, a left handed hitter who had great power pulling the ball down the right side of the field. Joe Maddon, the manager of the Rays, used Sabermetrics to identify that Ortiz hit nearly every time to the right side, and mostly to the outfield. The ploy was effective and Ortiz, who had hit over .300 from 2004 to 2006 moved to .265 midway through the 2006 season after multiple teams started copying the Rays’ technique against him. The Shift attracted a lot of fan attention because it was often deployed against the most well-known power hitters and was seen as stifling to the offensive aspect of the MLB. Eventually, it was banned (limited, actually, see the definition above for detail) and the 2023 season was the first to be held without the old, dramatic version of the Shift. See below for an image of the Shift being applied by the Angels (there’s an extra person in the shortstop position). Image from Wikimedia Commons – By Jon Gudorf Photography – https://www.flickr.com/photos/jongudorf/16802945985/, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=112638138 Based on the way the Shift was deployed, I figured that if I wanted to demonstrate if the rule banning the Shift had any effect, I would have to evaluate the performance of elite power hitters both before and after the ban. This is not a perfect approach, though, because what if some other variable was introduced (a new “juicier” ball? Rules restricting pitchers) that impacted hitters’ performance. This means that I would have to evaluate performance differences of groups of “non-sluggers” as well to detect any non-Shift related performance changes. I’m defining sluggers (the ones most impacted by the Shift) as hitters who have a Slugging Percentage (a common measure that records the total number of bases coming from hits) greater than the league’s average. My inclination is that the true sluggers are the ones who are at least one standard deviation above the mean (i.e., the top 16% of hitters. Data Gathering I used the Python library, pybaseball, to scrape some basic data. Pybaseball is useful in that it scrapes multiple baseball stats sites (including advanced pitch-based metrics). I only needed it to pull data on at-bats, hits, doubles, triples, home runs, and walks from 2006 to today’s date in 2024. The data was pulled in two groups. One represented the “post-Shift” era from 2006 to 2022 and the other represented the “post-ban” era from 2023 to the current date. Data was evaluated by player and then normalized by the number of at bats. Multiple minimum at-bats were used (400, 600, 800) to determine impact of the Shift on players regardless of their usage on the team (but my insight was to not go much lower than 400 at-bats, in the theory that players who had few at-bats were unlikely to have the shift deployed against them (as it appeared to be reputational). Both groups were separated into two types of players, 1) “Normal” players, who’s Slugging Percentage numbers were close to the league mean and 2) “Sluggers”, who’s Slugging Percentage was a) in the top half of the league, b) one standard deviation from the mean (top 16%), c)_two standard deviations from the mean (top 5%), and c) three standard deviations from the mean (top ~1%). The notion is to identify if any of these groups of “sluggers” statistics (Hits, Doubles, Triples, Home Runs, Walks) were statistically different between the pre-Shift group and the post-Shift group. The first thing I looked at to compare performance from the “Shift Era” to the “Post-Shift Era” was the mean value of a number of common metrics. I selected ones that I felt were most likely to be impacted by the Shift. Hits, Doubles, Triples, Home Runs (the Shift doesn’t really impact Home Runs.. but I was curious), and Walks. I normalized these metrics by the number of at-bats for every player to make sure to keep things consistent. Top 16% of Sluggers Compared Before and After the Shift was Banned. Also non-Sluggers for Comparison I did this for a range of Minimum At Bats and Numbers of Standard Deviations away from the mean to define who was a “slugger”. They all looked a bit like this. The first thing I notice is that the period after the Shift was banned sees better offensive performance (and more walks) across the board. Great! We have an answer! No? Of course it’s never that simple. First off, we need to remember these are just the mean values for these eras and the mean of a distribution is not always the best way to describe the whole distribution. Also, we need to understand if these differences are significant or could just be explained away by common variation. The next step was to apply an algorithm called the Kolmogorov-Smirnov two sample algorithm. This test compares the underlying continuous distributions F(x) and G(x) of two independent samples (pre-ban and post-ban) to determine if they come from the same distribution (our base assumption) or if they were drawn from different distributions. To wit, do the performance metrics before the Shift was banned have a fundamentally different distribution than the metrics after the ban. We will establish a required confidence interval of 95% (the typically accepted number) before we can determine the distributions different. p-values comparing Sluggers before and after the ban. The red line is our confidence interval. Any bars below the red line indicate that metric is statistically different before and after the Shift Above you can see the p-value for just Sluggers (top 50% of sluggers on the left, top 16% in the middle, and top 5% on the right) before and after the ban. We already know that the offensive metrics tend to be higher after the ban, this just tells us if that difference is significant and if it extends to the whole distribution. p-values comparing non-sluggers before and after the ban. The red line is our confidence interval. Any bars below the red line indicate that metric is statistically different before and after the Analysis of the Results There are obviously more charts, but these tell the story well enough. In the top chart (comparing Sluggers’ performance), we see that for the top 16% of sluggers, their performance on every metric other than doubles meets our requirements to claim that the differences are statistically significant. However, it’s hit or miss (pun!) for the other two clusters. Hmm. Then looking at the non-slugger comparisons (we are comparing the hitters in the lower 50%, 84%, and 95%), we see that there are fundamental differences almost in all categories (most of the bars are below our red line, indicating that the performance changes in these metrics are significant), clearly more than with the sluggers! This indicates to me that something OTHER than the Shift has been responsible for affecting offensive performance across baseball. The Shift was rarely or never applied to any players other than pull-hitting sluggers, so it couldn’t be responsible for the performance changes we see in this bottom graph. 1. It seems pretty straightforward. Offensive performance has changed across the board between the time period from 2006-2022 and the time period from 2023 on. These are a large number of years, and lots of rule changes could have happened. 2. However, the changes in performance have been consistent across all hitters in MLB, not just the sluggers. 3. In actuality, the Sluggers seem to have had a less significant increase in performance than the non-Sluggers. 4. All of this makes me say that the performance impacts was from factors other than the banning of the Shift and that my initial hypothesis that the banning of the Shift had no impact is true. To see others of my recent sports analytic posts:
{"url":"http://todnewman.com/?p=1477","timestamp":"2024-11-03T22:30:52Z","content_type":"text/html","content_length":"77875","record_id":"<urn:uuid:e0062f17-3b09-47f6-abd2-3e6c8ef0f829>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00593.warc.gz"}
Exploring Cryptography in Ethereum: The Building Blocks of Security Published on Exploring Cryptography in Ethereum: The Building Blocks of Security A Deep Dive into Cryptography on Ethereum Cryptography is a fundamental component of blockchain technology, enabling secure transactions, digital signatures, and the generation of unique addresses. This article explores key concepts of cryptography in the context of Ethereum, detailing how private and public keys, digital signatures, and hash functions work together to secure the network. Understanding Cryptography in Ethereum Cryptography, a branch of mathematics, is used extensively in computer security to enable: • Encryption (often called "secret writing") • Digital Signatures, which prove knowledge of a secret without revealing it • Hashes, which create unique digital fingerprints to prove the authenticity of data Public Key Cryptography Public key cryptography is foundational to Ethereum's security model: • Private Key: Secretly held by wallet software, it is used to sign transactions. • Public Key: Derived from the private key, it’s used to verify the authenticity of transactions without revealing the private key. • Anyone with access to the private key can control the corresponding Ethereum account. Ethereum Addresses Ethereum addresses are generated from public keys. For externally owned accounts (EOAs), the address is derived from the public key. However, addresses can also represent smart contracts, which don’t rely on public-private key pairs. Public Key Cryptography Explained Public key cryptography uses mathematical functions that are easy to compute in one direction but difficult to reverse. This is achieved using principles like: • Prime Factorization: Multiplying two large prime numbers is straightforward, but finding the original primes from the product is difficult. □ For example, given the number 8,018,009, finding its prime factors is computationally hard. • Trapdoor Functions: Some mathematical functions can be reversed easily if specific information is known. □ For instance, if you know one prime factor, finding the other becomes trivial. • Elliptic Curve Cryptography (ECC): Modern systems, including Ethereum, use elliptic curves, which are based on arithmetic operations on points on a curve. This enables secure key generation and Digital Signatures Digital signatures in Ethereum enable users to sign messages (like transactions) in a way that proves the message's authenticity: • Using cryptographic techniques, the transaction details are combined with the private key to create a digital signature. • When a transaction is sent to the network, it must be signed using the private key. This signature is then used to verify that the transaction came from the rightful owner without revealing the private key. Private and Public Keys Private Keys Private keys can be generated randomly. For example, you could generate a private key using a simple method like flipping a coin 256 times to produce a binary string. In practice, programming libraries use algorithms like Keccak-256 or SHA-256 to generate a 256-bit private key within a secure range. Public Keys A public key is derived from a private key using elliptic curve multiplication, which is easy to perform but nearly impossible to reverse: • Elliptic Curve Multiplication: In elliptic curve cryptography, this operation functions like multiplication but cannot be reversed with division. □ For example: ( K = k \times G ) □ Here, ( K ) is the public key, ( k ) is the private key, and ( G ) is a predefined point on the elliptic curve. Hash Functions Hash functions convert data of arbitrary size into a fixed-size output: • They are many-to-one functions, meaning different inputs can produce the same output, known as a hash collision. • The Ethereum network uses Keccak-256, which outputs a unique digital fingerprint for any given input. EOA Addresses in Ethereum Ethereum addresses for EOAs are derived from public keys using the following steps: 1. Generate the Private Key: k = f8f8a2f43c8376ccb0871305060d7b27b0554d2cc72bccf41b2705608452f315 2. Compute the Public Key: The public key ( K ) is the result of elliptic curve multiplication, producing ( K ) as a set of ( x ) and ( y ) coordinates concatenated in hexadecimal: K = 6e145ccef1033dea239875dd00dfb4fee6e3348b84985c92f103444683bae07b83b5c38e5e... 3. Hash the Public Key with Keccak-256: Keccak256(K) = 2a5bc342ed616b5ba5732269001d3f1ef827552ae1114027bd3ecf1f086ba0f9 4. Generate the Ethereum Address: The address is derived by keeping the last 20 bytes (least significant bytes) of the hashed public key: Address = 001d3f1ef827552ae1114027bd3ecf1f086ba0f9 The final Ethereum address is typically represented with a 0x prefix to indicate hexadecimal encoding: The Zero Address • 0x0 is a special address used in Ethereum for actions like contract creation. • For intentionally burning Ether, a specific address, 0x000000000000000000000000000000000000dEaD, is used to clearly denote burned funds. Cryptography in Ethereum involves advanced mathematical functions that underpin the security and functionality of the network. Through public key cryptography, hash functions, and digital signatures, Ethereum maintains a secure and decentralized environment for transaction processing and smart contracts. Understanding these elements is crucial for anyone interested in the technical foundations of blockchain and decentralized technologies. My shorthand notes were the source material for this article produced by generative AI.
{"url":"https://deadlytechnology.com/blog/blockchain/ethereum-cryptography","timestamp":"2024-11-04T08:58:15Z","content_type":"text/html","content_length":"98365","record_id":"<urn:uuid:3bd03489-fcc9-476c-8490-58358a525b30>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00449.warc.gz"}
How To Predict Online Singapore Lottery - sargatanasreign.com for those of you who want to find numbers to play today / numbers so tonight then you have to refer to the results of lottery expenditure yesterday (1 day past). Example you want to find the numbers to play SGP / HK 08-13-2013, here you will see the results of the SGP Lottery yesterday, which is 01-20-2012 4452. and before we continue to our topic. We will give you the best information about lottery online sit. The first 2D Togel Formulas: Add the AS + KOP numbers = 4 + 4 = 8 Here you must sort: For you to play you can see from the numbers that have been marked in RED, and 2D dead numbers you can look for with numbers in green and the SGP / HK expenditure on 21-01-2012 is 0412. This formula will not guarantee every The day you will get a win, but at least in one week you will get a win at least 2 to 4 times. Please just try it now by registering yourself with us at Dragon99bet Safe and Reliable Gambling Agent. Alright we come to the discussion “How to Calculate Expenses 2D Singapore Togel Formula” which we have arranged so neatly to make it easier for you and can be learned quickly how to calculate the lottery formula and also make a precise and accurate combination of SGP HK 2D lottery below: How to Calculate Singapore 2D Lottery Formula How to calculate the 2D Togel Singapore Formula – 2D Deadly Head Formula The US double on past expenses will be added to last year’s US output, for example: (9) 737 = 9 (6) 656 = 6 9 + 6 = 15 counts = 6 Then the number 6 dies in the head How to calculate Singapore’s 2D Lottery Formula – Finding a Dead Head So here you have to consider where you want to find the dead number in what position? if you intend to look in the head, you must add to the existing number of US positions in the previous two weeks. For example on Saturday the number that came out that day was 3799 and on Sunday the number that came out was 2121. This means that the US number of the two weeks that came out on that day was 3 and 2. You have to add up 3 + 2 = 5. So head you get 5 and this will probably come out on Monday. How to calculate the 2D Togel Singapore Formula – Look for dead 2D numbers The 2D Togel formula is very easy and also simple, that is, 100 will be subtracted from the 2D number that has been added by 20. The example number that came out today is 6736, this means the 2D number is 36. So here you have to enter into the formula you have made so that it becomes 100 – 36 = 64 then 64 + 20 = 84 so the number 64 to 84 is unlikely to come out. How to look for 2D Togel Formulas Occur How to Calculate the Singapore Togel 2D Formula – The Tail Formula is dead For how to find / predict the dead tail lottery formula you just need to look for the number of TESSON.2, an example of today’s results that came out was 9470 this week meaning the 2D number that came out was 70. The number 70 of the number of TESSON.2 was = 4. So it’s likely tomorrow will come out tail = 4, please try yourself and learn well and correctly. How to Calculate the 2D Singapore Lottery Formula – 2D Main Number Added Formula Please just look at the example directly below: 9242 = 2 -> 89 = 87 = 6 = 45 = Tens (out 41) 1941 = 1 -> 90 = 98 = 8 = 78 = Tail (exit 18) 2218 = 4 -> 67 = 43 = 7 = 12 = Tens (out 17) 4917 = 4 -> 67 = 43 = 7 = 01 = Tail (exit 31) 6531 = 2 -> 89 = 87 = 6 = 56 = Tens (out of 5 ..) Example of the 9242 Togel Formula Calculation: Add thousands and hundreds (9 + 2). The result = 11. In order to obtain a single digit, then the number 11 must be separated and then added (1 + 1) so that the number 2 is obtained. In order to obtain perfect results in the form of number 10, then number 2 requires a friend number 8. After that, raise one level and add it (2 -> 89 = 8). Next, the number 8 is lowered one level and added up again (8 + 7 = 15 = 6). The final step, the number 6 must be subtracted from the tail number so that it becomes (6 – 2 = 4). From all of these 4 numbers you will raise one level and eventually it will become 45. Both numbers will be a very strong guideline for you to buy and install the numbers that you have found through the SGP and HK Best Sniper 2D Formula. What is unique here is the two numbers will come out alternately. If last Monday the numbers that have come out on the left, then on the coming Monday will come out will be on the right. Such are some of the ways that you can use to find and calculate Singapore / Hong Kong 2D lottery formulas as in the example above. A little notice for you besides you can gamble lottery to us, you will also be able to play soccer betting, online casino gambling, with a minimum deposit that is very cheap and also we provide Cashback bonus bonuses and Bonus Rolls for all our loyal members. So what are you waiting for, join us right now and get many interesting benefits. How to Calculate Singapore 2D Lottery Formula (SGP)
{"url":"http://www.sargatanasreign.com/how-to-predict-online-singapore-lottery/","timestamp":"2024-11-04T14:09:54Z","content_type":"text/html","content_length":"44758","record_id":"<urn:uuid:08bff087-7b7c-452b-b10c-4ea4c8de3a5b>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00420.warc.gz"}
Clustering of consumers based on their demand elasticity 21 Clustering of consumers based on their demand elasticity 21.1 Rationale & Link to BEYOND Apps The demand elasticities of portfolio customers can provide useful insights of the demand-price relationship that depict the customers’ consuming habits according to prices. For a retailer, the clustering of customers into groups of similar elasticity levels, can contribute to defining pricing schemes dedicated to each group’s consuming behavior. The customers are clustered into three different groups of high, medium and low elasticity, meaning that customers that belong i.e. to the cluster of low elasticity, are less possible to alter their consumption based on prices. Various pricing strategies will be proposed to the retailer by the BPMO tool based on the elasticities of each customer group of his portfolio. However, one will be chosen for each group and will be available for the PEASH tool to retrieve it and take it into consideration in load shifting activities. 21.2 Overview of relevant implementations Customer segmentation techniques are mainly applied in the formulation of prices in demand response programs to achieve a more targeted scheme to customers with similar characteristics. Most commonly, customers are clustered based on their load profiles from smart meter data (i.e. with Finite Mixture models [1]), on consumption along certain periods of time when specific tariffs were applied with the use of KMeans clustering [2] or on mean daily shifting loads by using Fuzzy C-means clustering [3]. In addition, there were attempts to include the responsiveness of customers to price variations in customer segmentation by considering survey data and device-level elasticities over groups of customers in order to define proper incentives for each customer cluster [4]. 21.3 Implementation in BEYOND For the implementation in BEYOND the clustering is performed over the customer related elasticities that have been already calculated with OLS method and stored in the local storage component of the BPMO tool. For each customer, a representative from these results is chosen. The algorithm used for clustering is KMeans, with k set to 3 because of the three distinct classes of high, low and medium elasticity levels. In general, what we expect is that a bigger slope of the regression line, indicates that a customer is more elastic in price changes in contrast to those with smaller, who do not tend to alter their consumption pattern based on prices. Those customers that are somewhere in-between are expected to be clustered as “medium elasticity level” customers. 21.3.1. Data inputs and Analytics Pipeline (incl. assumptions /limitations) Input of the KMeans algorithm are the slope and intercept of the Ordinary Least Squares results applied on the demand-price pairs for each group of measurements of the same hour (0 to 23), type of day (weekday or weekend) and season (winter/summer/spring-autumn). Then, a pair of slope-intercept is chosen for each customer. In case the pairs are identical between each distinct set of hour-type of day-season, since the same pricing scheme was applied in all hours, then only one entry of price elasticities is enough to represent the customer for the clustering procedure. Otherwise, a representative between all possible sets has to be found, so as to allow the customer to participate in the process. When the labels are defined, they are sorted by their centroid values, from maximum to minimum slope (the first feature) and customers that belong in each cluster are assigned to each class (“high”, “medium”, “low”). The customers’ ids along with their cluster label are finally kept in the BPMO’s storage for statistical visualizations and as useful information for the definition of a retailer’s portfolio pricing strategies. 21.3.2. Analytics Libraries Employed The python libraries used for data manipulation and data analytics are: - Pandas - Numpy - Sklearn [1] Haben, S., Singleton, C., & Grindrod, P. (2016). Analysis and clustering of residential customers energy behavioral demand using smart meter data. IEEE Transactions on Smart Grid, 7, 136–144 [2] Menga F., Mab Q., Liub Z., Zeng X.(2021). Multiple Dynamic Pricing for Demand Response with Adaptive Clustering-based Customer Segmentation in Smart Grids [3] Chrysopoulos, A., & Mitkas, P. (2017). Customized time-of-use pricing for small-scale consumers using multi-objective particle swarm optimization. Advances in Building Energy Research, (pp. [4] Asadinejad, A., Varzaneh, M. G., Tomsovic, K., Chen, C.-f., & Sawhney, R. (2016). Residential customers elasticity estimation and clustering based on their contribution at incentive-based demand response. In Power and Energy Society General Meeting (PESGM), 2016 (pp. 1–5) IEEE Back to BEYOND_Baseline_Analytics
{"url":"https://wiki.beyond-platform.eu/index.php?title=Clustering_of_consumers_based_on_their_demand_elasticity","timestamp":"2024-11-06T11:53:00Z","content_type":"text/html","content_length":"18303","record_id":"<urn:uuid:d27fbb78-b93c-4b6b-9197-ac2afe711f01>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00807.warc.gz"}
Chapter 2. Mathematical Functions While discussing variables, we already used the simplest mathematical operations: addition, subtraction, multiplication, and division. These operations do not require any special notation and can be performed using basic characters: plus (+), minus (-), asterisk (*), and slash(/), respectively. Also, declaration of Boolean variables often implies using comparison operators: greater than (>), less than (<), greater than or equals to (>=), less than or equals to (<=), etc. There are also equality operators falling into the same group. To test if two values are equal, use the double equals sign (==). For the inequality, use either '!=' or '<>' signs. Here is an example script for using equality/inequality operators: def a = close==open; def b = close!=open; def c = close<>open; Variables b and c are identical: they will both have values of 0 for bars closing at the Open price (sometimes called Doji candles) and 1 for the rest; values of variable a will be vice versa. The full list of comparison operators can be found here; don't be surprised when you see that thinkScript® accepts some English words as operators, this will be thoroughly explained in chapter 7. For more complex calculations you might need some of thinkScript® mathematical/trigonometric functions. Let's start with power-related ones. These functions allow you to raise a value to a power, or find an exponential value or a logarithm. Study the following script: def val = close/open; def data1 = Sqr(val); def data2 = Sqrt(val); def data3 = Power(val, 3); def data4 = Exp(val); def data5 = Lg(val); def data6 = Log(val); This script calculates six data values based on the initial value of the Close to Open ratio for each bar. Data1 calculates the square of this value with Sqr function. Data2 returns the square root using Sqrt function. Data3 uses Power function to raise the value to the power of 3. Data4 calls the Exp to raise Euler's number to the power of the value. Data5 returns the common logarithm of the value and data6, the natural logarithm; functions Lg and Log are used for these purposes, respectively. Let's study the syntax of calling functions in thinkScript®: first, you type the name of the function, then you specify its arguments in parentheses and terminate the line with semicolon. If a function requires several arguments, they need to be separated with a comma – see how it works with data3 variable in the example above. Function Power requires two arguments: value to be raised (in our case, it is variable val) and the power to raise this value to (we used 3 in the example). When working with power-related functions, you might want to use the Euler's number. There is a built-in constant in thinkScript® for this purpose: Double.E. Word Double in its name has nothing to do with multiplying by 2: it is just a common thinkScript® designation of floating-number data type. See how to use this constant in the following script: def val = close/open; def data1 = Exp(val); def data2 = Power(Double.E, val); Variables data1 and data2 will always return identical values as they both raise Euler's number (Double.E) to the power of val. Next set of functions we are going to discuss is trigonometric. It consists of six functions: Cos, Sin, Tan, ACos, ASin, and ATan. All of these use a single argument for which the value will be returned. Note that functions Cos, Sin, and Tan use arguments expressed in radians so that Double.Pi constant might be useful. Functions ACos, ASin, and ATan return values in radians; for details, see the corresponding articles in thinkScript® reference. thinkScript® also provides you with a number of functions which round values if for some reason fractions or irrational numbers are not desired. These functions are: Ceil, Floor, Round, RoundDown, and RoundUp. Here is an example: def data1 = Ceil(Double.E); def data2 = Floor(Double.E); def data3 = RoundUp(Double.E, 3); def data4 = RoundDown(Double.E, 3); def data5 = Round(Double.E, 3); In this script, Euler's number is rounded using five different algorithms, thus the results will be different. Data1 uses the Ceil function rounding the number to the closest integer which is not less than the number itself; the result is 3. Data2 uses the Floor function which also rounds the number to the integer, however, this integer must not be greater than the number rounded, so the result is 2. Functions RoundDown, RoundUp, and Round reduce the length of the value to the specified number of digits after the decimal point (for this purpose we specified the second argument in parentheses, 3). While RoundUp function rounds the number to the greater one, RoundDown does towards zero, and Round to whichever is closer. So, in the example above, data3 will be equal to 2.719 while both data4 and data5 will be equal to 2.718. However, if you round the values to four decimal places (using (Double.E, 4) as the arguments), data3 and data5 will be both equal 2.7183 and data4 will be equal to 2.7182. The full list of available functions with detailed descriptions can be found in the Mathematical & Trigonometric section of thinkScript® reference, so feel free to research those in order to apply some complex math to your scripts if needed. Now we are going to move on to next chapter where we will discuss inputs which will make your script super flexible and adjustable via the Edit Studies and Strategies dialog window.
{"url":"https://toslc.thinkorswim.com/center/reference/thinkScript/tutorials/Basic/Chapter-2---Mathematical-Functions","timestamp":"2024-11-09T19:25:10Z","content_type":"text/html","content_length":"340441","record_id":"<urn:uuid:3bdc3472-a719-48f5-b4ea-eeaa2ac593bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00262.warc.gz"}
103 research outputs found Assuming, as suggested by recent neutron scattering experiments, that a broken symmetry state with orbital current order occurs in the pseudo-gap phase of the cuprate superconductors, we show that there must be associated equilibrium magnetic fields at various atomic sites in the unit cell, which should be detectable by NMR experiments Using determinantal quantum Monte Carlo, we compute the properties of a lattice model with spin $\frac 1 2$ itinerant electrons tuned through a quantum phase transition to an Ising nematic phase. The nematic fluctuations induce superconductivity with a broad dome in the superconducting $T_c$ enclosing the nematic quantum critical point. For temperatures above $T_c$, we see strikingly non-Fermi liquid behavior, including a "nodal - anti nodal dichotomy" reminiscent of that seen in several transition metal oxides. In addition, the critical fluctuations have a strong effect on the low frequency optical conductivity, resulting in behavior consistent with "bad metal" phenomenology We analyze the dynamical response of a two-dimensional system of itinerant fermions coupled to a scalar boson $\phi$, which undergoes a continuous transition towards nematic order with $d-$wave form-factor. We consider two cases: (a) when $\phi$ is a soft collective mode of fermions near a Pomeranchuk instability, and (b) when it is an independent critical degree of freedom, such as a composite spin order parameter near an Ising-nematic transition. In both cases, the order-parameter is not a conserved quantity and the $d-$wave fermionic polarization $\Pi (q, \Omega)$ remains finite even at $q=0$. The polarization $\Pi (0, \Omega)$ has similar behavior in the two cases, but the relations between $\Pi (0, \Omega)$ and the bosonic susceptibility $\chi (0, \Omega)$ are different, leading to different forms of $\chi^{\prime \prime} (0, \Omega)$, as measured by Raman scattering. We compare our results with polarization-resolved Raman data for the Fe-based superconductors FeSe$_{1-x}$S$_x$, NaFe$_{1-x}$Co$_x$As and BaFe$_2$As$_2$. We argue that the data for FeSe$_{1-x}$S$_x$ are well described within Pomeranchuk scenario, while the data for NaFe$_{1-x} $Co$_x$As and BaFe$_2$As$_2$ are better described within the "independent" scenario involving a composite spin order We study the dynamic response of a two-dimensional system of itinerant fermions in the vicinity of a uniform ($\mathbf{Q}=0$) Ising nematic quantum critical point of $d-$wave symmetry. The nematic order parameter is not a conserved quantity, and this permits a nonzero value of the fermionic polarization in the $d-$wave channel even for vanishing momentum and finite frequency: $\Pi(\mathbf{q} = 0,\Omega_m) eq 0$. For weak coupling between the fermions and the nematic order parameter (i.e. the coupling is small compared to the Fermi energy), we perturbatively compute $\Pi (\mathbf{q} = 0,\ Omega_m) eq 0$ over a parametrically broad range of frequencies where the fermionic self-energy $\Sigma (\omega)$ is irrelevant, and use Eliashberg theory to compute $\Pi (\mathbf{q} = 0,\Omega_m)$ in the non-Fermi liquid regime at smaller frequencies, where $\Sigma (\omega) > \omega$. We find that $\Pi(\mathbf{q}=0,\Omega)$ is a constant, plus a frequency dependent correction that goes as $|\ Omega|$ at high frequencies, crossing over to $|\Omega|^{1/3}$ at lower frequencies. The $|\Omega|^{1/3}$ scaling holds also in a non-Fermi liquid regime. The non-vanishing of $\Pi (\mathbf{q}=0, \ Omega)$ gives rise to additional structure in the imaginary part of the nematic susceptibility $\chi^{''} (\mathbf{q}, \Omega)$ at $\Omega > v_F q$, in marked contrast to the behavior of the susceptibility for a conserved order parameter. This additional structure may be detected in Raman scattering experiments in the $d-$wave geometry
{"url":"https://core.ac.uk/search/?q=author%3A(Lederer%2C%20Samuel)","timestamp":"2024-11-14T04:48:37Z","content_type":"text/html","content_length":"137960","record_id":"<urn:uuid:b87effbf-9302-4435-a8d9-eb3bdb4a7b0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00782.warc.gz"}
Net Worth Formula Balance Sheet in Excel (2 Suitable Examples) - ExcelDemy Net worth is a significant key to measuring an organization’s performance. The value represents the value of shareholders’ equity of that company. We can easily create a net worth formula balance sheet using Microsoft Excel. If you are curious about the procedure to design the net worth formula balance sheet in Excel, download our practice workbook and follow us. What Is Net Worth? The value of the net worth of any organization represents the deduction value of the total liabilities from total assets. It is a significant key for benchmarking the performance of any institution. The general expression of this term is: Net Worth = Total Assets – Total Liabilities Read More: How to Make Balance Sheet Format in Excel for Individual 2 Suitable Examples of Net Worth Formula Balance Sheet in Excel To demonstrate the procedure, we consider two easy examples of the net worth balance sheet. The first one is for some general variables, and the second one is for the detailed variables. 1. Calculating Net Worth from General Variables In this example, we are required to input 11 variables of our organization. We will input the value of those variables in the range of cells C5:C11, and their corresponding titles are in the range of cells B5:B11. Step 1: Input Required Variables In our first step, we have to input all the required variables’ values for our company. • Input all of those variables’ values for your company in the range of cell C5:C11 as shown in the image. Thus, we can say that we completed the first step. Step 2: Calculate Total Assets In this step, we will calculate the value of total assets. The mathematical expression of total assets is: Total Assets = Inventories + A/C Receivable + Total Fixed Assets + Cash in Bank To complete the calculation, we will use the SUM function. • First of all, select cell B13 and entitle the cell as Total Assets. • Now, in cell C13, write down the following formula in the cell. • You will find the value of total assets. In the end, we can say that we have finished the second step to create a net worth formula balance sheet in Excel. Step 3: Evaluate Total Liabilities Now, we are going to estimate the value of total liabilities. The general expression of total liabilities is: Total Assets = Trade A/C Payable + Short-Term Debt + Long-Term Debt For estimating the value, we will use the SUBTOTAL function. Besides that, you can also use the SUM function. • First, select cell B14. • Then, entitle the cell as Total Liabilities. • After that, write down the following formula into cell C14. • You will find the value of total liabilities. In the end, we can say that we have finished the third step to create a net worth formula balance sheet in Excel. Step 4: Estimate Value of Net Worth This is the final step to generating a net worth formula balance sheet. After completing the step, we get the value of the net worth of our company. • At first, select cell B15. • Now, entitle the cell as Net Worth. • Then, write down the following formula in cell C15. • Press the Enter. • You will find the value of net worth. So, we can say that all of our formulas worked perfectly and we are able to create a net worth formula balance sheet in Excel. Read More: How to Prepare Charitable Trust Balance Sheet Format in Excel 2. Estimating Net Worth Through Detailed Variables In the following example, we need to input about 14 types of variables. The titles of the variables are in the range of cells B5:B18 and the corresponding cells to input their values are in the range of C5:C11. Step 1: Input All Necessary Variables In the first step, we have to input all the necessary variables’ values for our company. • Input the variables’ values for your company in the range of cell C5:C18. So, we can say that we finished the first step. Step 2: Estimate Total Assets Now, we will calculate the value of total assets. In this case, the general expression of total assets is: Total Assets = Inventories + A/C Receivable + Market Securities + Cash & Cash Equivalents + Vendor Non-Trade Receivables + Total PPE + Other Current Assets + Other Non-Current Assets To evaluate the value, we will use the SUM function. • At first, select cell B13. • Now, write down the title of that cell as Total Assets. • After that, in cell C13, write down the following formula in the cell. • You will find the value of total assets. At last, we can say that we have completed the second step to create a net worth formula balance sheet in Excel. Step 3: Determine Total Liabilities Here, we will evaluate the value of total liabilities. The mathematical expression of total liabilities is: Total Assets = Deferred Revenue + A/C Payable + Other Current Liabilities + Commercial Paper + Long-Term Debt + Other Non-Current Liabilities The SUBTOTAL function will help us to get the value of total liabilities. Besides it, you can also use the SUM function. • First of all, select cell B14 and entitle the cell as Total Liabilities. • Then, write down the following formula in cell C14. • You will get the value of total liabilities. Thus, we can say that we completed the third step to create a net worth formula balance sheet in Excel. Step 4: Calculate the Value of Net Worth Now, in this final step, we will get the value of net worth. • First, select cell B15. • After that, entitle the cell as Net Worth. • Now, write down the following formula in cell C15. • Again, press Enter. • Finally, you will find the value of net worth. So, we can say that all of our formulas worked successfully and we are able to create a net worth formula balance sheet in Excel. Read More: How to Create Consolidated Balance Sheet Format in Excel Download Practice Workbook Download this practice workbook for practice while you are reading this article. That’s the end of this article. I hope that this article will be helpful for you and you will be able to create a net worth formula balance sheet in Excel. Please share any further queries or recommendations with us in the comments section below if you have any further questions or recommendations. Related Articles << Go Back to Balance Sheet | Finance Template | Excel Templates Get FREE Advanced Excel Exercises with Solutions! We will be happy to hear your thoughts Leave a reply
{"url":"https://www.exceldemy.com/net-worth-formula-balance-sheet-in-excel/","timestamp":"2024-11-07T23:32:06Z","content_type":"text/html","content_length":"200329","record_id":"<urn:uuid:b7efddfb-73e4-43be-b910-87fa2f334b08>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00856.warc.gz"}
X, Y, Z In a 3D coordinate system a point P is referred to by three real numbers (ccordinates) indicating the positions of the perpendicular projections from the axes which intersect at the origin. These axes are know as the x, y and z axis. The Coordinate planes can be refered to as the XY, XZ and YZ planes.
{"url":"https://support.onscale.com/hc/en-us/articles/360010964951-X-Y-Z","timestamp":"2024-11-09T00:13:21Z","content_type":"text/html","content_length":"35397","record_id":"<urn:uuid:ee5ca6eb-fc89-4e21-b9aa-e992c2ea65fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00345.warc.gz"}
Dynamic Problems of the Planets and Asteroids, and Their Discussion International Journal of Astronomy and Astrophysics, 2012, 2, 129-155 http://dx.doi.org/10.4236/ijaa.2012.23018 Published Online September 2012 (http://www.SciRP.org/journal/ijaa) Dynamic Problems of the Planets and Asteroids, and Their Discussion Joseph J. Smulsky1*, Yaroslav J. Smulsky2 1Institute of the Earth’s Cryosphere, Siberian Branch of Russian Academy of Sciences, Tyumen, Russia 2Institute of Thermophysics of Russian Academy of Sciences, Siberian Branch, Novosibirsk, Russia Email: *jsmulsky@mail.ru, ysmulskii@mail.ru Received February 9, 2012; revised March 15, 2012; accepted April 18, 2012 The problems of dynamics of celestial bodies are considered which in the literature are explained by instability and randomness of movements. The dynamics of planets orbits on an interval 100 million years was investigated by new numerical method and its stability is established. The same method is used for computing movements of two asteroids Apophis and 1950 DA. The evolution of their movement on an interval of 1000 is investigated. The moments of their closest passages at the Earth are defined. The different ways of transformation of asteroids trajectories into orbits of the Earth’s satellites are considered. The problems of interest are discussed from the different points of view. Keywords: Dynamics; Planets; Asteroids; Satellites; Stability; Discussion 1. Introduction In the last decades a number of problems associated with the precision of calculating movements have accumu- lated in the celestial and the space dynamics. It was found that there were discrepancies between the calcu- lated and the observed movements. These differences were the cause of the conclusions about chaotic move- ments and the impossibility of accurately calculating them. In addition to Newton’s gravitational forces were introduced weaker forces of other nature, as well as new substances “dark energy” and “dark matter”. In this paper we consider the problems that are associated only with the dynamics of the two groups of celestial objects: With planets and asteroids. In the study of the evolution of the Solar system over geological time scales the number of researchers has come to the conclusion that there are the instability of the planetary orbits and chaotic motions in the Solar system. For example, in paper [1], it is noted that the eccentricity of the orbit of Mars can be greater than 0.2, and chaotic diffusion of Mercury is so great that his eccentricity can potentially reach values close to 1 and the planet could be thrown out of the Solar system. Already at the time intervals of 10 Myr (million years) there is the weak di- vergence of the Earth’s orbit [2], which, according to these authors, is caused by multiple resonances in the Solar system. Due to them the movement of Solar system is chaotic. Therefore the motion of the Earth [2] and Mars [3] with an acceptable accuracy cannot be calcu- lated for a time greater than 20 Myr. The same problems of dynamics are occurred in the study of motion of asteroids. Unlike the planets, their movement is considered on a smaller time intervals. However, the higher accuracy of determination of their movement is required. In connection with the urgency of the tasks of asteroids motion the problems of their dy- namics is considered in more detail. Over the past decade, the asteroids of prime interest have been two asteroids, Apophis and 1950 DA, the first predicted to approach the Earth in 2029, and the second, in 2880. Reported calculations revealed some probability of an impact of the asteroids on the Earth. Yet, by the end of the decade refined orbital-element values of the asteroids were obtained, and more precise algorithms for calculating the interactions among solar-system bodies were developed. Following this, in the present paper we consider the motion evolution of both asteroids. In addi- tion, we discuss available possibilities for making the asteroids into the Earth-bound satellites. Initially, the analysis is applied to Apophis and, then, numerical data for 1950 DA obtained by the same method will be pre- The background behind the problem we treat in the present study was recently outlined in [4]. On June 19-20, 2004, asteroid Apophis was discovered by astronomers at the Kitt Peak Observatory [5], and on December 20, 2004 this asteroid was observed for the second time by astronomers from the Siding Spring Survey Observato- *Corresponding author. opyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY ry [6]. Since then, the new asteroid has command inter- national attention. First gained data on identification of Apophis’ orbital elements were employed to predict the Apophis path. Following the first estimates, it was re- ported in [7] that on April 13, 2029 Apophis will ap- proach the Earth center to a minimum distance of 38,000 km. As a result of the Earth gravity, the Apophis orbit will alter appreciably. Unfortunately, presently available methods for predicting the travel path of extraterrestrial objects lack sufficient accuracy, and some authors have therefore delivered an opinion that the Apophis trajectory will for long remain unknown, indeterministic, and even chaotic (see [4,7,8]). Different statistical predictions points to some probability of Apophis’ collision with the Earth on April 13, 2036. It is this aspect, the impact risk, which has attracted primary attention of workers dealing with the problem. Rykhlova et al. [7] have attempted an investigation into the possibility of an event that the Apophis will closely approach the Earth. They also tried to evaluate possible threats stemming from this event. Various means to resist the fall of the asteroid onto Earth were put for- ward, and proposals for tracking Apophis missions, made. Finally, the need for prognostication studies of the Apo- phis path accurate to a one-kilometer distance for a pe- riod till 2029 was pointed out. Many points concerning the prospects for tracking the Apophis motion with ground- and space-based observing means were discussed in [4,7-9]. Since the orbits of the asteroid and Earth pass close to each other, then over a considerable portion of the Apophis orbit the asteroid disc will only be partially shined or even hidden from view. That is why it seems highly desirable to identify those periods during which the asteroid will appear ac- cessible for observations with ground means. In using space-based observation means, a most efficient orbital al- location of such means needs to be identified. Prediction of an asteroid motion presents a most chal- lenging problem in astrodynamics. In paper [10], the differential equations for the perturbed motion of the asteroid were integrated by the Everhart method [11]; in those calculations, for the coordinates of perturbing bo- dies were used the JPL planetary ephemeris DE403 and DE405 issued by the Jet Propulsion Laboratory, USA. Sufficient attention was paid to resonance phenomena that might cause the hypothetical 2036 Earth impact. Bykova and Galushina [12,13] used 933 observations to improve the identification accuracy for initial Apophis orbital parameters. Yet, the routine analysis has showed that, as a result of the pass of the asteroid through several resonances with Earth and Mars, the motion of the aste- roid will probably become chaotic. With the aim to eva- luate the probability of an event that Apophis will impact the Earth in 2036, Bykova et al. [12] have made about 10 thousand variations of initial conditions, 13 of which proved to inflict a fall of Apophis onto Earth. Smirnov [14] has attempted a test of various integra- tion methods for evaluating their capabilities in predict- ing the motion of an asteroid that might impact the Earth. The Everhart method, the Runge-Kutta method of fourth order, the Yoshida methods of sixth and eighth orders, the Hermit method of fourth and sixth orders, the Mul- tistep Predictor-Corrector (MS-PC) method of sixth and eighth orders, and the Parker-Sochacki method were analyzed. The Everhart and MS-PC methods proved to be less appropriate than the other methods. For example, at close Apophis-to-Earth distances Smirnov [14] used, instead of the Everhart method, the Runge-Kutta method. He came to the fact that, in the problems with singular points, finite-difference methods normally fail to accu- rately approximate higher-order derivatives. This conclu- sion is quite significant since below we will report on an integration method for motion equations free of such In paper [15] the mathematical problems on asteroid orbit prediction and modification were considered. Pos- sibilities offered by the impact-kinetic and thermonuclear methods in correcting the Apophis trajectory were evalu- An in-depth study of the asteroid was reported in paper [4]. A chronologically arranged outline of observational history was given, and the trend with progressively re- duced uncertainty region for Apophis’ orbit-element values was traced. Much attention was paid to discussing the orbit prediction accuracy and the bias of various fac- tors affecting this accuracy. The influence of uncertainty in planet coordinates and in the physical characteristics of the asteroid, and also the perturbing action of other asteroids, were analyzed. The effects on integration ac- curacy of digital length, non-spherical shape of Earth and Moon, solar-radiation-induced perturbations, non-uni- form thermal heating, and other factors, were examined. The equations of perturbed motion of the asteroid were integrated with the help of the Standard Dynamic Model (SDM), with the coordinates of other bodies taken from the JPL planetary ephemeris DE405. It is a well-known fact that the DE405 ephemerid was compiled as an ap- proximation to some hundred thousand observations that were made till 1998. Following the passage to the ephe- meris DE414, that approximates observational data till 2006, the error in predicting the Apophis trajectory on 2036 has decreased by 140,000 km. According to Gior- gini et al. [4], this error proved to be ten times greater than the errors induced by minor perturbations. Note that this result points to the necessity of employing a more accurate method for predicting the asteroid path. In paper [4], prospects for further refinement of Apo- phis’ trajectory were discussed at length. Time periods Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 131 suitable for optical and radar measurements, and also observational programs for oppositions with Earth in 2021 and 2029 and spacecraft missions for 2018 and 2027 were scheduled. Future advances in error minimi- zation for asteroid trajectory due to the above activities were evaluated. It should be noted that the ephemerides generated as an approximation to observational data enable rather ac- curate determination of a body’s coordinates in space within the approximation time interval. The prediction accuracy for the coordinates on a moment remote from this interval worsens, the worsening being the greater the more the moment is distant from the approximation in- terval. Therefore, the observations and the missions sche- duled in paper [4] will be used in refining future ephe- In view of the afore-said, in calculating the Apophis trajectory the equation of perturbed motion were inte- grated [4,10,15], while the coordinates of other bodies were borrowed from the ephemerid. Difference integra- tion methods were employed, which for closely spaced bodies yield considerable inaccuracies in calculating higher-order derivatives. Addition of minor interactions to the basic Newtonian gravitational action complicates the problem and enlarges the uncertainty region in pre- dicting the asteroid trajectory. Many of the weak interac- tions lack sufficient quantitative substantiation. More- over, the physical characteristics of the asteroid and the interaction constants are known to some accuracy. That is why in making allowance for minor interactions expert judgments were used. And, which is most significant, the error in solving the problem on asteroid motion with Newtonian interaction is several orders greater than the corrections due to weak additional interactions. The researches, for example, Bykova and Galushina [12,13] apply a technique in Giorgini et al. 2008 to study of influence of the initial conditions on probability of collision Apophis with Earth. The initial conditions for asteroid are defined from elements of its orbit, which are known with some uncertainty. For example, eccentricity value e = en ± σe, where en is nominal value of eccentric- ity, and σe is root-mean-square deviation at processing of several hundred observation of asteroid. The collision parameters are searched in the field of possible motions of asteroid, for example for eccentricity, 3σe, the initial conditions are calculated in area e = en ± σe. From this area the 10 thousand, and in some works, the 100 thou- sand sets of the initial conditions are chosen by an acci- dental manner, i.e. instead of one asteroid it is consid- ered movement 10 or 100 thousand asteroids. Some of them can come in collision with Earth. The probability of collision asteroid with the Earth is defined by their Such statistical direction is incorrect. If many mea- surement data for a parameter are available, then the nominal value of the parameter, say, eccentricity en, pre- sents a most reliable value for it. That is why a trajectory calculated from nominal initial conditions can be re- garded as a most reliable trajectory. A trajectory calcu- lated with a small deviation from the nominal initial con- ditions is a less probable trajectory, whereas the prob- ability of a trajectory calculated from the parameters taken at the boundary of the probability region (i.e. from e = en ± e) tends to zero. Next, a trajectory with initial conditions determined using parameter values trice greater than the probable deviations (i.e. e = en ± 3 e) has an even lower, negative, probability. Since initial condi- tions are defined by six orbital elements, then simulta- neous realization of extreme (boundary) values (± 3 ) for all elements is even a less probable event, i.e. the prob- ability becomes of smaller zero. That is why it seems that a reasonable strategy could consist in examining the effect due to initial conditions using such datasets that were obtained as a result of suc- cessive accumulation of observation data. Provided that the difference between the asteroid motions in the last two datasets is insignificant over some interval before some date, it can be concluded that until this date the asteroid motion with the initial conditions was deter- mined quite reliably. As it was shown in paper [4], some additional active- ties are required, aimed at further refinement of Apophis’ trajectory. In this connection, more accurate determina- tion of Apophis’ trajectory is of obvious interest since, following such a determination, the range of possible al- ternatives would diminish. For integration of differential motion equations of so- lar-system bodies over an extended time interval, a pro- gram Galactica was developed [16-19]. In this program, only the Newtonian gravity force was taken into account, and no differences for calculating derivatives were used. In the problems for the compound model of Earth rota- tion [20] and for the gravity maneuver near Venus [21], motion equations with small body-to-body distances, the order of planet radius, were integrated. Following the solution of those problems and subsequent numerous checks of numerical data, we have established that, with the program Galactica, we were able to rather accurately predict the Apophis motion over its travel path prior to and after the approach to the Earth. In view of this, in the present study we have attempted an investigation into orbit evolution of asteroids Apophis and 1950 DA; as a result of this investigation, some fresh prospects toward possible use of these asteroids have opened. 2. Problem Statement For the asteroid, the Sun, the planets, and the Moon, all interacting with one another by the Newton law of gra- Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY vity, the differential motion equations have the form [16]: d, 1,2,, ki ik rr n (1) where i is radius-vector of a body with mass i rela- tively Solar System barycenter; G is gravitational con- stant; is vector and rik is its module; n = 12. ik ik As a result of numerical experiments and their analysis we came to a conclusion, that finite-difference methods of integration do not provide necessary accuracy. For the integration of Equation (1) we have developed algorithm and program Galactica. The meaning of function at the following moment of time t = t0 + t is determined with the help of Taylor series, which, for example, for coor- dinate x looks like: t (2) is derivative of k order at the initial moment The meaning of velocity is defined by the similar formula, and acceleration 0 by the Equation (1). Higher derivatives we are determined analytically as a result of differentiation of the Equation (1). The cal- culation algorithm of the sixth order is now used, i.e. with K = 6. A few words about the method used and the program Galactica. The algorithms of finite-difference methods are derived from the Taylor series (2). In this case the higher derivatives are determined by the difference of the second derivatives at different steps. This leads to errors of integration. There are methods [22,23], in which the derivatives are determined by recurrence formulas. In the program Galactica the derivatives are calculated under the exact analytical formulas, which we have deduced. This provides greater accuracy than other methods. Besides in the program Galactica there are many other details, which allow to not reducing the achieved in such way accuracy. They were found in the process of creat- ing the program Galactica. During its development more than 10 different ways to control errors has been used. In our book [19] the following checks are made mention of: 1) Checking the stability of the angular momentum M of the Solar System; 2) Checking the magnitude of the momentum Р of the Solar System; 4) Integrating backwards and forwards in time; 5) Integrating into a remote epoch and subsequent re- turn to the initial epoch; 6) Picking persistent changes (orbital major semiaxis, period, or precession axis etc.) and their checking; 7) Checking against test problems with exact analytic- cal solutions, for example, n-axisymmetrical problem 8) Comparison with observations; 9) Comparison with other reported data. The high accuracy of the program Galactica is firstly allowed to integrate Equations (1) for the motion of the Solar System for 100 million years [19]. The error was less on some orders in comparison with work [25], in which this problem has been solved for 3 million years by Stórmer method. The program Galactica allows solving the problem of interaction of any number of bodies, which motion is described by Equations (1). For example, the problem of the evolution of Earth’s rotation axis [20] was solved. In this task, the rotational motion of the Earth’s has been replaced by a compound model of the Earth’s rotation. The compound model of the Sun’s rotation was used in another task [26], in which the influence of the oblate rotating Sun on the motion of planets was established. In all these problems, this method of integration and pro- gram Galactica had no failures and we successfully used As noted above, the methods, which use Standard Dy- namic Model, are approximation of the data of Solar Sys- tem body’s observations. To calculate the future move- ment of the asteroid the bodies’ positions are used out- side the framework of observation. Therefore the calcu- lation error increases with distance in time from the base of observations. The base of observations is not used in program Galactica, so this error is missing in it. A high accuracy of the integration method of Galactica allows for a smaller error compute motion of asteroids in their rapprochement with the celestial bodies. The free-access Galactica system, user version for per- sonal computer, can be found at: http://www.ikz.ru/~smulski/GalactcW. The system of Ga- lactica, except the program Galactica, includes several components, which are described in the User’s Guide htt p://www.ikz.ru/~smulsk i/GalactcW /GalD i s cr E . pd f . The Guide also provides detailed instructions for all stages of solving the problem. After the free-access system is created based on a supercomputer, we will post the in- formation at the above site. 3. The Dynamics of the Planets’ Orbits To study the dynamics of the Solar system we used the initial conditions (IC) on the 30.12.1949 with the Julian day JD0 = 2433280.5 in two versions. The first version of the IC was based on ephemerid DE19 of Jet Propulsion Laboratory, USA (JPL), a second version is based on the ephemerid DE406. The first IC was in a coordinate sys- tem of 1950.0, while the second—in the system of 2000.0. These initial conditions are given in our paper [19] and on the website http://www.ikz.ru/~smulski/Data/ OrbtData. As already mentioned, the differential Equa- Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 133 tions (1) were integrated for 11 bodies: 9 planets, the Sun and the Moon. The motion of bodies is considered in barycentric equatorial coordinates. The parameters of the orbits of the planets are defined in the heliocentric coor- dinate system and the orbit of the Moon is in geocentric The main results obtained with the integration step t = 10–4 year and the number of double length (17 decimal). Checking and clarifying the calculations were performed with a smaller step, as well as with the extended length of the numbers (up to 34 decimal places). Methods for validation of the solutions and their errors are investigated in [19]. The computed with the help of the program Galactica change of the parameters of the orbit of Mars at time interval of 7 thousand years is shown in Figure 1 points 1. The eccentricity e and peri- helion angle p increase, and the longitude of the as- cending node Ω decreases in this span of time. The an- gle of inclination of the Mars orbit i does not change monotonically. In the epoch at T = 1400 years from 30 December 1949, it has a maximum value. The changing these parameters for a century are called secular pertur- bations. In contrast, the semi-major axis a and orbital period P on the average remains unchanged, so the graphs are given their deviations from the mean values. These fluctuations relative to the average am = 2.279 × 1011 m and Pm = 1.881 years have a small value. The parameters e, i, Ω and p also vary with the same rela- tive amplitudes as parameters a and P. In Figure 1 the lines 2 and 3 show the approximation of the observation data. As one can see, the eccentricity e and the angles i, Ω and p coincide with observations in the interval 1000 years, i.e. within the validity of ap- proximations by S. Newcomb [27] and J. Simon et al. [28]. Calculations for the semi-major axis a and the pe- riods P also coincide with the observations and their fluc- tuations comparable in magnitude with the difference between the approximations of different authors. Similar studies of secular changes in orbital parameters are made for other planets. They are also compared with the ap- proximations of the observational data, except for Pluto. Its approximation of the observations is missing. In Figure 2 there are the variations of the calculated elements of the orbit of Mars in 3 million years into the past. The eccentricity of the orbit e has short-period os- cillations of amplitude 0.019 with the main period equal to Te1 = 95.2 thousand years. It oscillates relatively the average for 50 million years value of em = 0.066. The longitude of ascending node Ω oscillates with the aver- age period of ТΩ = 73.1 thousand years around the mean Ωm = 0.068 radians. The angle of inclination of the orbital plane to the equatorial plane i fluctuates with the same period Ti = 73.1 thousand years around the mean value іm = 0.405 radians. Longitude of perihelion φр al- Figure 1. Secular changes of the Mars orbit 1 compared with approximated observations by Newcomb [27] and Simon et al. [28], 2 and 3, respectively: Eccentricity e; in- clination i to equator plane for epoch J2000.0; longitude of ascending node relative to the x axis at J2000.0; longi- tude of perihelion p; semi-major axis deviation Δa from the average of 7 thousand years, in meters; orbital period de- viation ΔP from average of 7 thousand years, in centuries. Angles are in radians and time Т is in centuries from 30 December 1949; data points spaced at 200 yr. Figure 2. Evolution of the orbit parameters of Mars for 3 p is the angular velocity of the perihelion rotation in the “century for the time interval of 20 thousand years: pm = 1687” century is the average angular velocity of perihelion rotation during 50 million years; Те1, ТΩ and Тi are the shortest periods of eccentricity, ascending node, and incli- nation, respectively, in kyr (thousand years); em, im and are the average values of appropriate parameters; Тр is a rotation perihelion period averaged over 50 Myr. Other notations see Figure 1. Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY most linearly increases with time, i.e. the perihelion moves in the direction of circulation of Mars around the Sun, making on the average for a –50 million years, one revolution for the time Tp = 76.8 thousand years. At the same time its motion is uneven. As one can see from the graph, the angular velocity of rotation of the perihelion p fluctuates around the average value of pm = 1687" (arc-seconds) per century, while at time T = –1.35 mil- lion years, it takes a large negative value, i.e. in this mo- ment there is the return motion of the perihelion with great velocity. As a result of research for several million years the pe- riods and amplitudes of all fluctuations of the orbits pa- rameters of all planets received. The system of Equation (1) with the help of the program Galactica was integrated for 100 million years in the past and the evolution of the orbits of all planets and the Moon was studied. Figure 3 shows changing parameters of the Mars orbit on an in- terval from –50 million years to –100 million years. The eccentricity e, angles of inclination i, ascending node monotonically oscillate. The fluctuations have several periods, and duration of most of them is much smaller than the interval in 50 million years. For example, the greatest period eccentricity Te2 = 2.31 million years. The angular velocity p of perihelion rotation fluctuate around the average value of pm. In comparison with the Figure 3. The evolution of the Mars orbit in the second half of the period of 100 million years: T is the time in millions of years into the past from the epoch 30.12.1949; other no- tations see Figures 1 and 2. eccentricity e one can see that negative values of p, i.e. reciprocal motion of the perihelion occurs when the ec- centricity of the orbit is close to zero. On the interval from 0 to –50 million years the graph- ics have the same form [19], as shown in Figure 3, i.e. the orbit of Mars is both steady and stable and there is no tendency to its change. Similar results were obtained for other planets, i.e. these studies established the stability of the orbits and the Solar system as a whole. The result is important, since in the abovementioned papers [1-3] at solving the problem of other methods after 20 million years, the orbit begins to change, what led to the destruc- tion of the Solar system in the future. Based on these solutions their authors came to the conclusion about the instability of the Solar system and chaotic motions in it. During these researches we have established, that the evolution of the planets orbits is the result of four move- ments: 1) The precession of the orbital axis around a fixed in space vector of angular momentumof the Solar system; 2) Nutational oscillations of the orbit axis ; 3) The oscillations of orbit; 4) the rotation of the orbit in its own plane (rotation of the perihelion). The behavior of the perihelion of the orbits of all planets in the span of 50 million years is shown in Figure 4 as a function of angles perihelion φр versus time. The orbits of the eight planets from Mercury to Neptune are orbiting counter- clockwise, i.e. in the direction of orbital motion. The orbit of Pluto, the only one, that rotates clockwise. Two Figure 4. Variations of perihelion longitudes p and the precession angle S for 50 Myr in orbits of nine planets from Mercury to Pluto (numbered from 1 to 9), with the respective perihelion periods (Tp) and precession cycles (TS) in Kyr, which averaged over 50 Myr. Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY Copyright © 2012 SciRes. IJAA groups of planets (Venus and Earth, Jupiter and Uranus), as seen from the values of the periods Tp, have almost the same velocity of perihelion rotation. The orbit of Saturn has the highest velocity, and the orbit of Pluto has the smallest one. should also be emphasized that reported in literature resonances and instabilities appear in the simplified equ- ations of motion when they are solved by approximate analytical methods. These phenomena do not arise at the integrating not simplified Equation (1) by method (2). The precession of the orbit occurs clockwise, i.e. against the orbital motion of the planets The angle of S of the orbit axis is defined in the plane perpendicular to the vector of the momentum . This plane crosses the equator plane on the line directed at an angle φΩm = 0.068 to the x axis. Precession angle S is the angle between the line of intersection and the node of the planet orbit. The angle S, as the angle of the peri- helion φр, varies irregularly, but a large time interval of 50 million years, as shown in Figure 4, these irregular- rities are not visible. As can be seen from the periods of TS, the orbits axis of Jupiter and Saturn precess with maximum velocity and Pluto—with the lowest one. For the two groups of planets: Venus and Earth, Jupiter and Saturn, the velocity of precession is practically the same. 4. Preparation of Initial Data of Asteroids We consider the motion of asteroids in the barycentric coordinate system on epoch J2000.0, Julian day JDs = 2451545. The orbital elements asteroids Apophis and 1950 DA, such as the eccentricity e, the semi-major axis a, the ecliptic obliquity ie, the ascending node angle the ascending node-perihelion angle ωe, etc., and aste- roids position elements, such as the mean anomaly M, were borrowed from the JPL Small-Body database 2008 as specified on November 30, 2008. The data, represent- ed to 16 decimal digits, are given in Table 1. For Apophis in Table 1 the three variants are given. The first variant is now considered. These elements cor- respond to the solution with number JPL sol. 140, which is received Otto Mattic at April 4, 2008. In Table 1 the uncertainties of these data are too given. The relative un- certainty value is in the range from 2.4 × 10–8 to 8 × 10–7. The same data are in the asteroid database by Ed- ward Bowell [29], although these data are represented only to 8 decimal digits, and they differ from the former data in the 7-th digit, i.e., within value . Giorgini et al. [4] used the orbital elements of Apophis on epoch JD = 2453979.5 (September 01, 2006), which correspond to the solution JPL sol.142. On publicly accessible JPL-system It should be emphasized that the secular changes of angles of inclination i and ascending node Ω in Figure 1 and their variations, are presented by the graphics in Fig- ures 2 and 3, are due to the precessional and nutational motion of the orbit axis The studies testify that the evolution of the planets or- bits in the investigated range is unchanged and stable. This allows us to conclude that the manifestations of in- stability and chaos in the motion of the planets, as de- scribed by other authors, most likely due to the methods of solution which they have been used. Furthermore, it Table 1. Three variants of orbital elements of asteroids Apophis on two epochs and 1950 DA on one epoch in the heliocentric ecliptic coordinate system of 2000.0 with JDS = 2451545 (see JPL Small-Body Database [30]). Apophis 1950 DA 1-st variant November 30, 2008 JD01 = 2454800.5 JPL sol.140 1-st var. 2-nd variant January 04, 2010 JD02 = 2455200.5 JPL sol.144 3-rd variant November 30, 2008 JD01 = 2454800.5 JPL sol.144. November 30, 2008 JD0 = 2454800.5 JPL sol.51 e 0.1912119299890948 7.6088e–08 0.1912110604804485 0.1912119566344382 0.507531465407232 a 0.9224221637574083 2.3583e–08 0.9224192977379344 0.9224221602386669 1.698749639795436 AU q 0.7460440415606373 8.6487e–08 0.7460425256098334 0.7460440141364661 0.836580745750051 AU ie 3.331425002325445 2.024e–06 3.331517779979046 3.331430909298658 12.18197361251942 deg 204.4451349657969 0.00010721 204.4393039605681 204.4453098275707 356.782588306221 deg ωe 126.4064496795719 0.00010632 126.4244705298442 126.4062862564680 224.5335527346193 deg M 254.9635275775066 5.7035e–05 339.9486156711335 254.9635223452623 161.0594270670401 deg tp 2454894.912750123770 (2009-Mar-04.41275013) 5.4824e–05 2455218.523239657948 (2009-Mar-04. 41275429) P 323.5884570441701 n 1.112524233059586 4.2665e–08 1.112529418096263 1.112524239425464 0.445153720449539 deg/d Q 1.098800285954179 2.8092e–08 1.098796069866035 1.098800306340868 2.560918533840822 AU J. J. SMULSKY, Y. J. SMULSKY Horizons the solution sol.142 can be prolonged till No- vember 30.0, 2008. In this case it is seen, that difference of orbital elements of the solution 142 from the solution 140 does not exceed 0.5 uncertainties of the orbit ele- The element values in Table 1 were used to calculate the Cartesian coordinates of Apophis and the Apophis velocity in the barycentric equatorial system by the fol- lowing algorithm (see [19,20,31,32]). From the Kepler equation sinEeE M (3) we calculate the eccentric anomaly E and, then, from E, the true anomaly 02 arctg11tg0.5ee E In subsequent calculations, we used results for the two-body interaction (the Sun and the asteroid) [21,32]. The trajectory equation of the body in a polar coordinate system with origin at the Sun has the form: where the polar angle φ, or, in astronomy, the true ano- maly, is reckoned from the perihelion position r = Rp; 111 e is the trajectory parameter; and Rp = 21aa a is the perihelion radius. The expressions for the radial and transversal velocities are rp tp υυ rυυr , (6) where for φ > π we have ; 0 = r/Rp is the dimen- sionless radius, and the velocity at perihelion is υGm mR , (7) where mS = m11 is the Sun mass (the value of m11 is given in Table 2), and mAs = m12 is the Apophis mass. Table 2. The masses mbj of the planets from Mercury to Pluto, the Moon, the Sun (1 - 11) and asteroids: Apophis (12a) and 1950 DA (12b), and the initial condition on epoch JD0 = 2454800.5 (November 30, 2008) in the heliocentric equatorial coor- dinate system on epoch 2000.0 JDS = 2451545. G = 6.67259E–11 m3·s–2·kg–1. Bodies masses in kg, their coordinates in m and velocities in m·s–1 j mbj xaj, vxaj, yaj, vyaj zaj, vzaj –17405931955.9539 –60363374194.7243 –30439758390.4783 1 3.30187842779737E+23 37391.7107852059 –7234.98671125365 –7741.83625612424 108403264168.357 –2376790191.8979 –7929035215.64079 2 4.86855338156022E+24 1566.99276862423 31791.7241663148 14204.3084779893 55202505242.89 125531983622.895 54422116239.8628 3 5.97369899544255E+24 –28122.5041342966 10123.4145376039 4387.99294255716 –73610014623.8562 –193252991786.298 –86651102485.4373 4 6.4185444055007E+23 23801.7499674501 –5108.24106287744 –2985.97021694235 377656482631.376 –609966433011.489 –270644689692.231 5 1.89900429500553E+27 11218.8059775149 6590.8440254003 2551.89467211952 –1350347198932.98 317157114908.705 189132963561.519 6 5.68604198798257E+26 –3037.18405985381 –8681.05223681593 –3454.56564456648 2972478173505.71 –397521136876.741 –216133653111.407 7 8.68410787490547E+25 979.784896813787 5886.28982058747 2564.10192504801 3605461581823.41 –2448747002812.46 –1092050644334.28 8 1.02456980223201E+26 3217.00932811768 4100.99137103454 1598.60907148943 53511484421.7929 –4502082550790.57 –1421068197167.72 9 1.65085753263927E+22 5543.83894965145 –290.586427181992 –1757.70127979299 55223150629.6233 125168933272.726 54240546975.7587 10 7.34767263035645E+22 –27156.1163326908 10140.7572420768 4468.97456956941 11 1.98891948976803E+30 0 0 0 –133726467471.667 –60670683449.3631 –26002486763.62 12a 30917984100.3039 16908.9331065445 –21759.6060221801 –7660.90393288287 314388505090.346 171358408804.935 127272183810.191 12b 1570796326794.9 –5995.33838888362 9672.35319009371 6838.06006342785 Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 137 The time during which the body moves along an ellip- tic orbit from the point of perihelion to an orbital position with radius r is given by π2arcsin 211 where rrp At the initial time t0 = 0, which corresponds to epoch JD0 (see Table 1), the polar radius of the asteroid r0 as dependent on the initial polar angle, or the true anomaly , can be calculated by Equation (5)The initial radial and initial transversal velocities as functions of r0 can be found using Equation (6). is the dimensionless radial velocity. The Cartesian coordinates and velocities in the orbit plane of the asteroid (the axis xo goes through the perihe- lion) can be calculated by the formulas cos ;sin xr yr cos sin sin cos xo rt yo rt υυ υ υυ υ The coordinates of the asteroid in the heliocentric ecliptic coordinate system can be calculated as coscosΩsin sinΩcos –sincosΩcos sinΩcos ; eo eee oee e cossinΩsin cosΩcos –sinsinΩcos cosΩcos ; eo eee oee e yx i eoe eoe zx iyi The velocity components of the asteroid , υυ and ze in this coordinate system can be calculated by Equa- tions analogous to (11)-(13). Since Equation (1) are considered in a motionless equ- atorial coordinate system, then elliptic coordinates (11)- (13) can be transformed into equatorial ones by the Equ- ae e ae e yy z zy z 0 is the angle between the ecliptic and the equator in epoch JDS. The velocity components , υυ and υ can be transformed into the equatorial ones , υυ and by Equations analogous to (14). With known heliocentric equatorial coordinates of the Solar system n bodies xai, yai, zai i = 1, 2, … n, the coordinates of Solar system bary- centre, for example, along axis x will be: ci aiSs mx M , where Ss i is mass of solar system bodies. Then barycentric equatorial coordinates xi of asteroid and other bodies will be iai c Other coordinates yi and zi and components of velocity υυ and υ in barycentric equatorial system of co- ordinates are calculated by analogous equations. In the calculations, six orbital elements from Table 1, namely, e, a ie, ωe, and M, were used. Other orbital elements were used for testing the calculated data. The perihelion radius Rp and the aphelion radius Ra = were compared to q and Q, respectively. The orbital period was calculated by Equation (8) as twice the time of motion from perihelion to aphelion (r = Ra). The same Equation was used to calculate the mo- ment at which the asteroid passes the perihelion (r = r0). The calculated values of those quantities were compared to the values of P and tp given in Table 1. The largest relative difference in terms of q and Q was within 1.9 × 10–16, and in terms of P and tp, within 8 × 10–9. The coordinates and velocities of the planets and the Moon on epoch JD0 were calculated by the DE406/ LE406 JPL-theory [33,34]. The masses of those bodies were modified by us [18], and the Apophis mass was calculated assuming the asteroid to be a ball of diameter d = 270 m and density = 3000 kg/m3. The masses of all bodies and the initial conditions are given in Table 2. The starting-data preparation and testing algorithm (3)- (14) was embodied as a MathCad worksheet (program 5. Apophis’ Encounter with the Planets and the Moon In the program Galactica, a possibility to determine the minimum distance Rmin to which the asteroid approaches a celestial body over a given interval T was provided. Here, we integrated Equation (1) with the initial condi- tions indicated in Table 2. The integration was per- formed on the NKS-160 supercomputer at the Computing Center SB RAS, Novosibirsk. In the program Galactica, an extended digit length (34 decimal digits) was used, and for the time step a value dT = 10–5 year was adopted. The computations were performed over three time inter- vals, 0 - 100 years (Figure 5(a)), 0 - –100 years (Figure 5(b)), and 0 - 1000 years (Figure 5(c)). In the graphs of Figure 5 the points connected with the heavy broken line show the minimal distances Rmin to which the asteroid approaches the bodies indicated by points embraced by the horizontal line. In other words, a point in the broken line denotes a minimal distance to Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY Figure 5. Apophis’ encounters with celestial bodies during the time T to a minimum distance Rmin, km: Mars (Ma), Earth (Ea), Moon (Mo), Venus (Ve) and Mercury (Me); a, b— T = 1 year; c—T = 10 years. T, cyr (1 cyr = 100 yr) is the time in Julian centuries from epoch JD0 (November 30, 2008). Calendar dates of approach in points: A—13 April 2029; B—13 April 2067; C—5 September 2037; E—10 Oc- tober 2586. which, over the time T = 1 year, the asteroid will ap- proach a body denoted by the point in the horizontal line at the same moment. It is seen from Figure 5(a) that, starting from November 30, 2008, over the period of 100 years there will be only one Apophis’ approach to the Earth (point A) at the moment TA = 0.203693547133403 century to a minimum distance RminA = 38907 km. A next approach (point B) will be to the Earth as well, but at the moment TB = 0.583679164042455 century to a minimum distance RminB = 622,231 km, which is 16 times greater than the minimum distance at the first approach. Among all the other bodies, a closest approach with be to the Moon (point D) (see Figure 5(b)) at TD = –0.106280550824626 century to a minimum distance RminD = 3,545,163 km. In the graphs of Figures 5(a) and (b) considered above, the closest approaches of the asteroid to the bodies over time intervals T = 1 year are shown. In integrating Equation (1) over the 1000-year interval (see Figure 5(c)), we considered the closest approaches of the aste- roid to the bodies over time intervals T = 10 years. Over those time intervals, no approaches to Mercury and Mars were identified; in other words, over the 10-year intervals the asteroid closes with other bodies. Like in Figure 5(a), there is an approach to the Earth at the moment TA. A second closest approach is also an approach to the Earth at the point Е at TE = 5.778503 century to a minimum distance RminE = 74002.9 km. During the latter approach, the asteroid will pass the Earth at a minimum distance almost twice that at the moment TA. With the aim to check the results, Equation (1) were integrated over a period of 100 years with double digit length (17 decimal digits) and the same time step, and also with extended digit length and a time step dT = 10–6 year. The integration accuracy (see Table 3) is defined [19] by the relative change of Mz, the z-projection of the angular momentum of the whole solar system for the 100-year period. As it is seen from Table 3, the quantity Mz varies from –4.5 × 10–14 to 1.47 × 10–26, i.e., by 12 orders of magnitude. In the last two columns of Table 3, the difference between the moments at which the asteroid most closely approaches the Earth at point A (see Figure 5(a)) and the difference between the approach distances relative to solution 1 are indicated. In solution 2, ob- tained with the short digit length, the approach moment has not changed, whereas the minimum distance has re- duced by 2.7 m. In solution 3, obtained with ten times reduced integration step, the approach moment has changed by –2 × 10–6 year, or by –1.052 minutes. This change being smaller than the step dT = 1 × 10–5 for solu- tion 1 and being equal twice the step for solution 3, the value of this change provides a refinement for the ap- proach moment. Here, the refinement for the closest ap- proach distance by –1.487 km is also obtained. On the refined calculations the Apophis approach to the Earth occurs at 21 hours 44 minutes 45 sec on distance of 38905 km. We emphasize here that the graphical data of Figure 5, a for solutions 1 and 3 are perfectly coincident. The slight differences of solution 2 from solutions 1 and 3 are observed for Т > 0.87 century. Since all test calcu- lations were performed considering the parameters of solution 1, it follows from here that the data that will be presented below are accurate in terms of time within 1’, and in terms of distance, within 1.5 km. At integration on an interval of 1000 years the relative change of the angular momentum is Mz = 1.45 × 10–20. How is seen from the solution 1 of Table 3 this value exceeds Mz at integration on an interval of 100 years in 10 times, i.e. the error at extended length of number is proportional to time. It allows to estimate the error of the Table 3. Comparison between the data on Apophis’ en- counter with the Earth obtained with different integration accuracies: Lnb is the digit number in decimal digits. solution Lnb dT, yr Mz TAi–TA1, yr RminAi–Rmin A 1, 1 34 1 × 10–5 1.47 × 10–21 0 0 2 17 1 × 10–5 –4.5 × 10–14 0 –2.7 × 10–3 3 34 1 × 10–6 1.47 × 10–26 –2 × 10–6 –1.487 Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 139 second approach Apophis with the Earth in TE = 578 years by results of integrations on an interval of 100 years of the solution with steps dT = 1 × 10–5 years and 1 × 10–6 years. After 88 years from beginning of integration the relative difference of distances between Apophis and Earth has become R88 = 1 × 10–4, that results in an error in distance of 48.7 km in TE = 578 years. So, during the forthcoming one-thousand-year period the asteroid Apophis will most closely approach the Earth only. This event will occur at the time TA counted from epoch JD0. The approach refers to the Julian day JDA = 2462240.406075 and calendar date April 13, 2029, 21 hour 44'45" GMT. The asteroid will pass at a mini- mum distance of 38905 km from the Earth center, i.e., at a distance of 6.1 of Earth radii. A next approach of Apo- phis to the Earth will be on the 578-th year from epoch JD0; at that time, the asteroid will pass the Earth at an almost twice greater distance. The calculated time at which Apophis will close with the Earth, April 13, 2029, coincides with the approach times that were obtained in other reported studies. For instance, in the recent publication [4] this moment is given accurate to one minute: 21 hour 45' UTC, and the geocentric distance was reported to be in the range from 5.62 to 6.3 Earth radii, the distance of 6.1 Earth radii falling into the latter range. The good agreement between the data obtained by different methods proves the ob- tained data to be quite reliable. As for the possible approach of Apophis to the Earth in 2036, there will be no such an approach (see Figure 5(a)). A time-closest Apophis’ approach at the point C to a minimum distance of 7.26 million km will be to the Moon, September 5, 2037. 6. Apophis Orbit Evolution In integrating motion Equation (1) over the interval –1 century ≤ T ≤ 1 century the coordinates and velocities of the bodies after a lapse of each one year were recorded in a file, so that a total of 200 files for a one-year time in- terval were obtained. Then, the data contained in each file were used to integrate Equation (1) again over a time interval equal to the orbital period of Apophis and, fol- lowing this, the coordinates and velocities of the asteroid, and those of Sun, were also saved in a new file. These data were used in the program DefTra to determine the parameters of Apophis’ orbit relative to the Sun in the equatorial coordinate system. Such calculations were performed hands off for each of the 200 files under the control of the program PaOrb. Afterwards, the angular orbit parameters were recalculated into the ecliptic coor- dinate system (see Figure 6). As it is seen from Figure 6, the eccentricity е of the Apophis orbit varies non-uniformly. It shows jumps or Figure 6. Evolution of Apophis’ orbital parameters under the action of the planets, the Moon and the Sun over the time interval −100 years - +100 years from epoch November 30, 2008: 1—as revealed through integration of motion Equation (1); 2—initial values according to Table 1. The angular quantities: , i e, and ωe are given in degrees; the major semi-axis a in AU; and the orbital period P in days. breaks. A most pronounced break is observed at the mo- ment TA, at which Apophis most closely approaches the Earth. A second most pronounced break is observed when Apophis approaches the Earth at the moment TB. The longitude of ascending node shows less breaks, exhibiting instead rather monotonic a decrease (see Fig- ure 6). Other orbital elements, namely, ie, ωe, a, and P, exhibit pronounced breaks at the moment of Apophis’ closest pass near the Earth (at the moment TA). The dashed line in Figure 6 indicates the orbit-ele- ment values at the initial time, also indicated in Table 1. As it is seen from the graphs, those values coincide with the values obtained by integration of Equation (1), the relative difference of e, , ie, ωe, a, and P from the initial values at the moment T = 0 (see Table 1) being respec- tively 9.4 × 10–6, –1.1 × 10–6, 3.7 × 10–6, –8.5 × 10–6, 1.7 × 10–5, and 3.1 × 10–5. This coincidence testifies the reli- ability of computed data at all calculation stages, includ- ing the determination of initial conditions, integration of equations, determination of orbital parameters, and trans- formations between the different coordinate systems. As it was mentioned in Introduction, apart from non- simplified differential Equation (1) for the motion of celestial bodies, other equations were also used. It is a well-known fact (see Duboshin, 1976) that in perturbed- motion equations orbit-element values are used. For this reason, such equations will yield appreciable errors in determination of orbital-parameter breaks similar to Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY those shown in Figure 6. Also, other solution methods for differential equations exist, including those in which expansions with respect to orbital elements or difference quotients are used. As it was already mentioned in Intro- duction, these methods proved to be sensitive to various resonance phenomena and sudden orbit changes ob- served on the approaches between bodies. Equation (1) and method (2) used in the present study are free of such shortcomings. This suggests that the results reported in the present paper will receive no notable corrections in the future. 7. Influence of Initial Conditions With the purpose of check of influence of the initial con- ditions (IC) on Apophis trajectory the Equation (1) were else integrated on an interval 100 years with two variants of the initial conditions. The second of variant IC is given on January 04, 2010 (see Table 1). They are taken from the JPL Small-Body database [29] and correspond to the solution with number JPL sol.144, received Steven R. Chesley on October 23, 2009. In Figure 7 the results of two solutions with various IC are submitted. The line 1 shows the change in time of distance R between Apo- phis and Earth for 100 years at the first variant IC. As it is seen from the graphs, the distance R changes with os- cillations, thus it is possible to determine two periods: the short period TR1 = 0.87 years and long period TR2. The amplitude of the short period Ra1 = 29.3 million km, and long is Ra2 = 117.6 million km. The value of the long oscillation period up to T ~ 70 years is equal TR20 = 7.8 years, and further it is slightly increased. After ap- proach of April 13, 2029 (point A in Figure 7) the am- plitude of the second oscillations is slightly increased. Both short and the long oscillations are not regular; therefore their average characteristics are above given. Let’s note also on the second minimal distance of Apophis approach with the Earth on interval 100 years. It occurs at the time TF1 = 58.37 years (point F1 in Figure 7) on distance RF1 = 622 thousand km. In April 13, 2036 (point H in Figure 7) Apophis passes at the Earth on distance RH1 = 86 million km. The above-mentioned characteristics of the solution are submitted in Table 4. The line 2 in Figure 7 gives the solution with the se- cond of variant IC with step of integration dT = 1 × 10–5 years. The time of approach has coincided to within 1 minutes, and distance of approach with the second of IC became RA2 = 37,886 km, i.e. has decreased on 1021 km. To determine more accurate these parameters the Equa- tion (1) near to point of approach were integrated with a step dT = 1 × 10–6 years. On the refined calculations Apophis approaches with the Earth at 21 hours 44 min- utes 53 second on distance RA2 = 37,880 km. As it is seen from Table 4, this moment of approach differs from the moment of approach at the first of IC on 8 second. As at a Figure 7. Evolution of distance R between Apophis and Earth for 100 years. Influence of the initial conditions (IC): 1—IC from November 30, 2008; 2—IC from January 04, 2010. Calendar dates of approach in points: A—13 April 2029; F1—13 April 2067; F2—14 April 2080. step dT = 1 × 10–6 years the accuracy of determination of time is 16 second, it is follows, that the moments of ap- proach coincide within the bounds of accuracy of their The short and long oscillations at two variants IC also have coincided up to the moment of approach. After ap- proach in point A the period of long oscillations has de- creased up to TR22 = 7.15 years, i.e. became less than period TR20 at the first variant IC. The second approach on an interval 100 years occurs at the moment TF2 = 70.28 years on distance RF2 = 1.663 million km. In 2036 (point H) Apophis passes on distance RH2 = 43.8 million At the second variant of the initial conditions on Janu- ary 04, 2010 in comparison with the first of variant the initial conditions of Apophis and of acting bodies are changed. To reveal only errors influence of Apophis IC, the third variant of IC is given (see Table 1) as first of IC on November 30, 2008, but the Apophis IC are calcu- lated in system Horizons according to JPL sol.144. How follows from Table 1, from six elements of an orbit e, a, , ωe and M the differences of three ones: ie, и ωe from similar elements of the first variant of IC are 2.9, 1.6 and 1.5 appropriate uncertainties. The difference of other elements does not exceed their uncertainties. At the third variant of IC with step of integration dT = 1 × 10–5 year the moment of approach has coincided with that at the first variant of IC. The distance of approach became RA3 = 38,814 km, i.e. has decreased on 93 km. For more accurate determination of these parameters the Equation (1) near to a point of approach were also inte- grated with a step dT = 1 × 10–6 year. On the refined cal- culations at the third variant of IC Apophis approaches with the Earth at 21 hours 44 minutes 45 second on dis- tance RA3 = 38,813 km. These and other characteristics of the solution are given in Table 4. In comparison with the first variant IC it is seen, that distance of approach in 2036 and parameters of the second approach in point F1 are slightly changed. The evolution of distance R in a Figure 7 up to T = 0.6 centuries practically coincides with the first variant (line 1). Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY Copyright © 2012 SciRes. IJAA Table 4. Influence of the initial conditions on results of integration of the Equation (1) by program Galactica and of the equations of Apophis motion by system Horizons: TimeA and Rmin A are time and distance of Apophis approach with the Earth in April 13, 2029, accordingly; RH is distance of passage Apophis with the Earth in April 13, 2036; TF and RF are time and distance of the second approach (point F on Figure 7). Solutions at different variants of initial conditions Galactica Horizons JPL sol.140 JPL sol.144 JPL sol.144 JPL sol.144 JPL sol.140 JPL sol.144 TimeA 21:44:45 21:44:53 21:44:45 21:46:47 21:45:47 21:44:45 RminA , km 38905 37880 38813 38068 38161 38068 RH, 106 km 86.0 43.8 81.9 51.9 55.9 51.8 TF, cyr from 30.11.08 0.5837 0.7138 0.6537 0.4237 0.9437 0.4238 RF, 103 km 622 1663 585 1515 684 1541 It is seen (Table 4) that the results of the third variant differ from the first one much less than from the second variant. In the second variant the change of positions and velocities of acting bodies since November 30, 2008 for 04.01.2010 is computed under DE406, and in the third variant it does under the program Galactica. The initial conditions for Apophis in two variants are determined according to alike JPL sol.144, i.e. in these solutions the IC differ for acting bodies. As it is seen from Table 4, the moment of approach in solutions 2 and 3 differs on 8 seconds, and the approach distance differs on 933 km. Other results of the third solution also differ in the greater degree with second ones, in comparison of the third solution with first one. It testifies that the diffe- rences IC for Apophis are less essential in comparison with differences of results of calculations under two pro- grams: Galactica and DE406 (or Horizons). So, the above-mentioned difference of the initial con- ditions (variants 1 and 3 tab. 4) do not change the time of approach of April 13, 2029, and the distance of approach in these solutions differ on 102 km. Other characteristics: RH, TF and RF also change a little. Therefore it is possible to make a conclusion, that the further refinement of Apo- phis IC will not essentially change its trajectory. The same researches on influence of the initial condi- tions we have carried out with the integrator of NASA. In system Horizons (the JPL Horizons On-Line Ephemeris System, manual look on a site http://ssd.jpl.nasa.gov/?horizons_doc) there is opportu- nity to calculate asteroid motion on the same standard dynamic model (SDM), on which the calculations in pa- per [4] are executed. Except considered two IC we used one more IC for Apophis at date of July 12, 2006, which is close to date of September 01, 2006 in paper [4]. The characteristics and basic results of all solutions are given in Table 4. In these solutions the similar results are re- ceived. For example, for 3-rd variant of Horizons the graphs R in a Figure 7 up to T = 0.45 centuries practi- cally has coincided with 2-nd variant of Galactica. The time of approach in April 13, 2029 changes within the bounds of 2 minutes, and the distance is close to 38,000 km. The distance of approach in April 13, 2036 changes from 52 up to 56 million km. The characteristics of se- cond approach for 100 years changes in the same bounds, as for the solutions on the program Galactica. The above- mentioned other relations about IC influence have also repeated for the NASA integrator. So, the calculations at the different initial conditions have shown that Apophis in 2029 will be approached with the Earth on distance 38 - 39 thousand km, and in nearest 100 years it once again will approach with the Earth on distance not closer 600 thousand km. 8. Examination of Apophis’ Trajectory in the Vicinity of Earth In order to examine the Apophis trajectory in the vicinity of Earth, we integrated Equation (1) over a two-year pe- riod starting from T1 = 0.19 century. Following each 50 integration steps, the coordinate and velocity values of Apophis and Earth were recorded in a file. The moment TA at which Apophis will most closely approach the Earth falls into this two-year period. The ellipse E0E1 in Figure 8 shows the projection of the two-year Earth’s trajectory onto the equatorial plane xOy. Along this trajectory, starting from the point E0, the Earth will make two turns. The two-year Apophis trajectory in the same coordinates is indicated by points denoted with the letters Ap. Starting from the point Ap0, Apophis will travel the way Ap0Ap1ApeAp2Ap0Ap 1 to most closely ap- proach the Earth at the point Ape at the time TA. After that, the asteroid will follow another path, namely, the path Figure 9(a) shows the trajectory of Apophis relative to J. J. SMULSKY, Y. J. SMULSKY Figure 8. The trajectories of Apophis (Ap) and Earth (E) in the barycentric equatorial coordinate system xOy over a two-year period: Ap0 and E0 are the initial position of Apo- phis and Earth; Apf is the end point of the Apophis trajec- tory; Ape is the point at which Apophis most closely ap- proaches the Earth; the coordinates x and y are given in Figure 9. Apophis’ trajectory (1) in the geocentric equato- rial coordinate system xrOyr: a—on the normal scale, b—on magnified scale on the moment of Apophis’ closest appro- aches to the Earth (2); 3—Apophis’ position at the moment of its closest approach to the Earth following the correction of its trajectory with factor k = 0.9992 at the point Ap1; the coordinates xr and yr are given in AU. the Earth. Here, the relative coordinates are determined as the difference between the Apophis (Ap) and Earth (E) yyyxx x Along trajectory 1, starting from the point Ap 0, Apo- phis will travel to the Earth-closest point Ape, the trajec- tory ending at the point Apf. The loops in the Apophis trajectory represent a reverse motion of Apophis with respect to Earth. Such loops are made by all planets when observed from the Earth (Smulsky 2007). At the Earth-closest point Ape the Apophis trajectory shows a break. In Figure 9(b) this break is shown on a larger scale. Here, the Earth is located at the origin, point 2. The Sun (see Figure 8) is located in the vicinity of the barycenter O, i.e., in the upper right quadrant of the Earth-closest point Ape. Hence, the Earth-closest point will be passed by Apophis as the latter will move in be- tween the Earth and the Sun (see Figure 9(b)). As it will be shown below, this circumstance will present certain difficulties for possible use of the asteroid. 9. Possible Use of Asteroid Apophis So, on April 13, 2029, we will become witnesses of a unique phenomenon, the pass of a body 31 million tons in mass near the Earth at a minimum distance of 6 Earth radii from the center of Earth. Over subsequent 1000 years, Apophis will never approach our planet closer. Many pioneers of cosmonautics, for instance, K. E. Tsi- olkovsky, Yu. A. Kondratyuk, D. V. Cole, etc. believed that the near-Earth space will be explored using large manned orbital stations. Yet, delivering heavy masses from Earth into orbit presents a difficult engineering and ecological problem. For this reason, the lucky chance to turn the asteroid Apophis into an Earth bound satellite and, then, into a habited station presents obvious interest. Among the possible applications of a satellite, the fol- lowing two will be discussed here. First, a satellite can be used to create a space lift. It is known that a space lift consists of a cable tied with one of its ends to a point at the Earth equator and, with the other end, to a massive body turning round the Earth in the equatorial plane in a 24-hour period, Pd = 24 × 3600 sec. The radius of the sate- llite geostationary orbit is 42241 km 6.62 gsdA E In order to provide for a sufficient cable tension, the massive body needs to be spaced from the Earth center a distance greater than Rgs. The cable, or several such ca- bles, can be used to convey various goods into space while other goods can be transported back to the Earth out of space. If the mankind will become able to make Apophis an Earth bound satellite and, then, deflect the Apophis orbit into the equatorial plane, then the new satellite would suit the purpose of creating a space lift. . (15) A second application of an asteroid implies its use as a Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 143 “shuttle” for transporting goods to the Moon. Here, the asteroid is to have an elongated orbit with a perihelion radius close to that of a geostationary orbit and an apogee radius approaching the perigee radius of the lunar orbit. In the latter case, at the geostationary-orbit perigee goods would be transferred onto the satellite Apophis and then, at the apogee, those goods would arrive at the Moon. The two applications will entail the necessity of solv- ing many difficult problems which now can seem even unsolvable. On the other hand, none of those problems will be solved at all without making asteroid an Earth satellite. Consider now the possibilities available here. The velocity of the asteroid relative to the Earth at the Earth-closest point Ape is AE = 7.39 km·s–1. The velo- city of an Earth bound satellite orbiting at a fixed dis- tance RminA from the Earth (circular orbit) is min A3.2 kms For the asteroid to be made an Earth-bound satellite, its velocity AE should be brought close to CE v. We performed integration of Equation (1) assuming the Apophis velocity at the moment TA to be reduced by a factor of 1.9, i.e., the velocity AE = 7.39 km·s–1 at the moment TA was decreased to 3.89 km·s–1. In the later case, Apophis becomes an Earth bound satellite with the following orbit characteristics: eccentricity es1 = 0.476, equator-plane inclination angle is1 = 39.2˚, major semi- axis as1 = 74540 km, and sidereal orbital period Ps1 = 2.344 days. We examined the path evolution of the satellite for a period of 100 years. In spite of more pronounced oscilla- tions of the orbital elements of the satellite in comparison with those of planetary orbit elements, the satellite’s ma- jor semi-axis and orbital period proved to fall close to the indicated values. For the relative variations of the two quantities, the following estimates were obtained: | a| < ±2.75 × 10–4 and | P| < ±4.46 × 10–4. Yet, the satellite orbits in a direction opposite both to the Earth rotation direction and the direction of Moon’s orbital motion. That is why the two discussed applications of such a sat- ellite turn to be impossible. Thus, the satellite has to orbit in the same direction in which the Earth rotates. Provided that Apophis (see Fig- ure 9(b)) will round the Earth from the night-side (see point 3) and not from the day-side (see line 1), then, on a decrease of its velocity the satellite will be made a satel- lite orbiting in the required direction. For this matter to be clarified, we have integrated Equation (1) assuming different values of the asteroid ve- locity at the point Аp1 (see Figure 9). This point, located at half the turn from the Earth-closest point Ape, will be passed by Apophis at the time TAp1 = 0.149263369488169 century. At the point Аp1 the projections of the Apophis velocity in the barycentric equatorial coordinate system are 1Apx = –25.6136689 km·s–1, 1Apy = 17.75185451 km·s–1, and 1Apz = 5.95159206 km·s–1. In the numeri- cal experiments, the component values of the satellite velocity were varied to one and the same proportion by multiplying all them by a single factor k, and then Equa- tion (1) were integrated to determine the trajectory of the asteroid. Figure 10 shows the minimum distance to which Apophis will approach the Earth versus the value of k by which the satellite velocity at the point Аp1 was v v We found that, on decreasing the value of k (see Fig- ure 10), the asteroid will more closely approach the Earth, and at k = 0.9999564 Apophis will collide with the Earth. On further decrease of asteroid velocity the aste- roid will close with the Earth on the Sun-opposite side, and at k = 0.9992 the asteroid will approach the Earth center (point 3 in Figure 9(b)) to a minimum distance Rmin3 = 39,157 km at the time T3 = 0.2036882 century. This distance Rmin3 roughly equals the distance RminA to which the asteroid was found to approach the Earth cen- ter while moving in between the Earth and the Sun. In this case, the asteroid velocity relative to the Earth is also AE = 7.39 km·s–1. On further decrease of this velocity by a factor of 1.9, i.e., down to 3.89 km·s–1 Apophis will become an Earth bound satellite with the following orbit parameters: eccentricity es2 = 0.486, equ- ator plane inclination angle is2 = 36˚, major semi-axis as2 = 76,480 km, and sidereal period Ps2 = 2.436 day. In addi- tion, we investigated into the path evolution of the Earth bound satellite over a 100-year period. The orbit of the satellite proved to be stable, the satellite orbiting in the same direction as the Moon does. Figure 10. The minimum distance Rmin to which Apophis will approach the Earth center versus the value of k (k is the velocity reduction factor at the point Ap1 (see Figure 8)). The positive values of Rmin refer to the day-side: The values of Rmi n are given in km; 1—the minimum distance to which Apophis will approach the Earth center on April 13, 2029 (day-side); 2—the minimum distance to which Apophis will approach the Earth center after the orbit correction (night- side); 3—geostationary orbit radius Rgs. Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY Thus, for Apophis to be made a near-Earth satellite or- biting in the required direction, two decelerations of its velocity need to be implemented. The first deceleration is to be effected prior to the Apophis approach to the Earth, for instance, at the point Ap1 (see Figure 8), 0.443 year before the Apophis approach to the Earth. Here, the Apophis velocity needs to be decreased by 2.54 m/s. A second deceleration is to be effected at the moment the asteroid closes with the Earth. In the case under consid- eration, in which the asteroid moves in an elliptic orbit, the asteroid velocity needs to be decreased by 3.5 km·s–1. Slowing down a body weighing 30 million tons by 3.5 km·s–1 is presently a difficult scientific and engineering problem. For instance, in paper [7] imparting Apophis with a velocity of 10–6 m/s was believed to be a problem solvable with presently available engineering means. On the other hand, Rykhlova et al. [7] consider increasing the velocity of such a body by about 1 - 2 cm/s a difficult problem. Yet, with Apophis being on its way to the Earth, we still have a twenty-year leeway. After the World War II, even more difficult a problem, that on injection of the first artificial satellite in near-Earth orbit and, later, the launch of manned space vehicles, was successfully solved in a period of ten years. That is why we believe that, with consolidated efforts of mankind, the objective under dis- cussion will definitely be achieved. It should be emphasized that the authors of Giorgini et al. 2008 considered the possibility of modifying the Apo- phis orbit for organizing its impact onto asteroid (144898) 2004 VD17. There exists a small probability of the aster- oid’s impact onto the Earth in 2102. Yet, the problem on reaching a required degree of coordination between the motions of the two satellites presently seems to be hardly solvable. This and some other examples show that many workers share an opinion that substantial actions on the asteroid are necessary for making the solution of the va- rious space tasks a realistic program. 10. Asteroid 1950 DA Approaches to the The distances to which the asteroid 1950 DA will ap- proach solar-system bodies are shown versus time in Figure 11. It is seen from Figure 11(a), that, following November 30, 2008, during the subsequent 100-year pe- riod the asteroid will most closely approach the Moon: at the point A (TA = 0.232532 cyr and Rmin = 11.09 million km) and at the point B (TB = 0.962689 cyr and Rmin = 5.42 million km). The encounters with solar-system bod- ies the asteroid had over the period of 100 past years are shown in Figure 11(b). The asteroid most closely ap- proached the Earth twice: at the point C (TC = –0.077395 cyr and Rmin = 7.79 million km), and at the point D (TD = –0.58716 cyr and Rmin = 8.87 million km). Figure 11. Approach of the asteroid 1950 DA to solar-sys- tem bodies. The approach distances are calculated with time interval ∆T: a, b—∆T = 1 year; c—∆T = 10 years. Rmin, km is the closest approach distance. Calendar dates of ap- proach in points see Table 5. For other designations, see Figure 5. Over the interval of forthcoming 1000 years, the mi- nimal distances to which the asteroid will approach so- lar-system bodies on time span ∆T = 10 years are indi- cated in Figure 11(c). The closest approach of 1950 DA will be to the Earth: at the point E (TE = 6.322500 cyr and Rmin = 2.254 million km), and at the point F (TF = 9.532484 cyr and Rmin = 2.248 million km). To summarize, over the 1000-year time interval the asteroid 1950 DA will most closely approach the Earth twice, at the times TE and TF, to a minimum distance of 2.25 million km in both cases. The time TE refers to the date March 6, 2641, and the time TF, to the date March 7, Giorgini et al. [35] calculated the nominal 1950 DA trajectory using earlier estimates for the orbit-element values of the asteroid, namely, the values by the epoch of March 10, 2001 (JPL sol.37). In paper [35], as the varia- tion of initial conditions for the asteroid, ranges were set three times wider than the uncertainty in element values. For the extreme points of the adopted ranges, in the cal- culations 33 collision events were registered. In this connection, Giorgini et al. 2002 have entitled their pub- lication “Asteroid 1950 DA Encounter with Earth in We made our calculations using the orbit-element va- Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY Copyright © 2012 SciRes. IJAA lues of 1950 DA by the epoch of November 30, 2008 (JPL sol.51) (see Table 1). By system Horizons the JPL sol.37 can be prolonged till November 30, 2008. As it is seen in this case, the difference of orbital elements of the solution 37 from the solution 51 on two - three order is less, than uncertainties of orbit elements, i.e. the orbital elements practically coincide. With the aim to trace how the difference methods of calculation has affected the 1950 DA motion, in Table 5 we give a comparison of the approach times of Figure 11 with the time-closest approaches predicted in paper [35]. According to Table 5, the shorter the separation between the approach times (see points C and A) and the start time of calculation (2008-11-30), the better is the coincidence in terms of approach dates and minimal approach dis- tances Rmin. For more remote times (see points D and B) the approach times differ already by 1 day. At the point E, remote from the start time of calculation by 680 year, the approach times differ already by eight days, the approach distances still differing little. At the most remote point F, according to our calculations, the asteroid will approach the Earth in 2962 to a distance of 0.015 AU, whereas, according to the data of Giorgini et al. [35], a most close approach to the Earth, to a shorter distance, will be in So, our calculations show that the asteroid 1950 DA will not closely approach the Earth. It should be noted that our calculation algorithm for predicting the motion of the asteroid differs substantially from that of Giorgini et al. [35]. We solve non-simplified Equation (1) by a high-precision numerical method. In doing so, we take into account the Newtonian gravitational interaction only. In paper [35], additional weak actions on the asteroid were taken into account. Yet, the position of celestial bodies acting on the asteroid is calculated from the ephemerides of DE-series. Those ephemeredes approxi- mate observational data and, hence, they describe those data to good precision. Yet, the extent to which the pre- dicted motion of celestial bodies deviates from the actual motion of these bodies is the greater the farther the mo- ment of interest is remote from the time interval during which the observations were made. We therefore believe that the difference between the present calculation data for the times 600 and 900 years (points E and F in Table 5) and the data of Giorgini et al. [35] results from the indicated circumstance. 11. Evolution of the 1950 DA Orbit Figure 12 shows the evolution of 1950 DA orbital ele- ments over a 1000-year time interval as revealed in cal- culations made with time span ∆T = 10 years. With the passage of time, the orbit eccentricity e non-monoto- nically increases. The angle of longitude of ascending , the angle of inclination ie to the ecliptic plane, and the angle of perihelion argument ωe show more monotonic variations. The semi-axis a and the orbital period P both oscillate about some mean values. As it is seen from Figure 12, at the moments of encounter with the Earth, TE and TF, the semi-axis a and the period P show jumps. At the same moments, all the other orbit elements exhibit less pronounced jumps. The dashed line in Figure 12 indicates the initial-time values of orbital elements presented in Table 1. As it is seen from the graphs, these values are perfectly coinci- dent with the values for T = 0 obtained by integration of Equation (1). The relative differences between the values of e, , ie, ωe, a, and P and the initial values of these parameters given in Table 1 are –3.1 × 10–4, –1.6 × 10–5, –6.2 × 10–5, –1.5 × 10–5, –1.5 × 10–5, –1.0 × 10–4, and –3.0 × 10–4, respectively. Such a coincidence validates the calculations at all stages, including the determination of initial conditions, integration of Equation (1), deter- mination of orbital-parameter values, and the transforma- tion between different coordinate systems. It should be noted that the relative difference for the Table 5. Comparison between the data on asteroid 1950 DA encounters with the Earth and Moon: Our data are denoted with characters A, B, C, D, E, F, as in Figure 11, and the data by Giorgini et al. [35] are denoted as Giorg. Source JD, days Date Time, days Body Rmin, AU J. J. SMULSKY, Y. J. SMULSKY Figure 12. Evolution of 1950 DA orbital parameters under the action of the planets, the Moon, and the Sun over the time interval 0 - 1000 from the epoch November 30, 2008: 1—As revealed through integration of motion Equation (1) obtained with the time interval ∆T = 10 years: 2—Initial values according to Table 1. The angular quantities, , ie, e, are given in degrees, the major semi-axis a—in AU, and the orbital period P, in days. same elements of Apophis is one order of magnitude smaller. The cause for the latter can be explained as fol- lows. Using the data obtained by integrating Equation (1), we determine the orbit elements at the time equal to half the orbital period. Hence, our elements are remote from the time of determination of the initial conditions by that time interval. Since the orbital period of Apophis is shorter than that of 1950 DA, the time of determination of Apophis’ elements is 0.66 year closer in time to the time of determination of initial conditions than the same time for 1950 DA. 12. Study of the 1950 DA Trajectory in the Encounter Epoch of March 6, 2641 Since the distances to which the asteroid will approach the Earth at the times TE and TF differ little, consider the trajectories of the asteroid and the Earth at the nearest approach time TE, March 6, 2641. The ellipse E0Ef in Figure 13 shows the projection of the Earth trajectory over a 2.5-year period onto the equatorial plane xOy. This projection shows that, moving from the point E0 the Earth will make 2.5 orbital turns. The trajectory of 1950 DA starts at the point A0. At the point Ae the asteroid will approach the Earth in 2641 to a distance of 0.01507 AU. The post-encounter trajectory of the asteroid remains roughly unchanged. Then, the asteroid will pass through Figure 13. The trajectories of Earth (1) and 1950 DA (2) in the barycentric equatorial coordinate system xOy over 2.5 years in the encounter epoch of March 6, 2641 (point Ae): A0 and E0 are the starting points of the 1950 DA and Earth trajectories; Af and Ef are the end points of the 1950 DA and Earth trajectories; 3—1950 DA trajectory after the correction applied at the point Aa is shown arbitrarily; the coordinates x and y are given in AU. the perihelion point Ap and aphelion point Aa, and the trajectory finally ends at the point Af. Figure 14(a) shows the trajectory of the asteroid rela- tive to the Earth. The relative coordinates xr and yr were calculated by a Equation analogous to (15). Starting at the point A0, the asteroid 1950 DA will move to the point Ae, where it will most closely approach the Earth, the end point of the trajectory being the point Af. The loop in the 1950 DA trajectory represents a reverse motion of the asteroid relative to the Earth. On an enlarged scale, the encounter of the asteroid with the Earth is illustrated by Figure 14(b). The Sun is in the right upper quadrant. The velocity of the asteroid relative to the Earth at the closing point Ae is vAE = 14.3 13. Making the Asteroid 1950 DA an Earth-Bound Satellite Following a deceleration at the point Ae (see Figure 14(b)), the asteroid 1950 DA can become a satellite or- biting around the Earth in the same direction as the Moon does. At this point E (see Table 5) the distance from the asteroid to the Earth’s center is RminE = 2.25 million km, the mass of the asteroid being mA = 1.57 milliard ton. According to (17), the velocity of a satellite moving in a circular orbit of radius RminE is vCE = 0.421 km·s–1. For the asteroid 1950 DA to be made a satellite, its velocity needs to be brought close to the value vCE or, in other words, the velocity of the asteroid has to be decreased by ∆V ≈ 13.9 km·s–1. In this situation, the asteroid’s mo- mentum will become decreased by a value ma∆V = Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 147 Figure 14. The 1950 DA trajectory in the geocentric equato- rial coordinate system xrOyr: a—On ordinary scale; b—On an enlarged scale by the moment of 1950 DA encounter with the Earth: point O—The Earth, point Ae—The aster- oid at the moment of its closest approach to the Earth; the coordinates xr and yr are given in AU. 2.18 × 1016 kg·m/s, for Apophis the same decrease amounts to ma·∆V = 1.08 × 1014 kg·m·s–1, a 200 times greater value. Very probably, satellites with an orbital radius of 2.25 million km will not find a wide use. In this connec- tion, consider another strategy for making the asteroid an Earth-bound satellite. Suppose that the velocity of the asteroid at the aphelion of its orbit (point Aa in Figure 13) was increased so that the asteroid at the orbit perihelion has rounded the Earth orbit on the outside of it passing by the orbit at a distance R1. To simplify calculations, we assume the Earth’s orbit to be a circular one with a radius equals the semi-axis of the Earth orbit aE = 1 AU. So, in the corrected orbit of the asteroid the perihelion radius will be RaR (18) Then, let us decrease the velocity of the asteroid at the perihelion of the corrected orbit to a value such that to make the asteroid an Earth-bound satellite. To check ef- ficiency of this strategy, perform required calculations based on the two-body interaction model for the asteroid and the Sun (Smulsky 2007, Smulsky 2008). We write the expression for the parameter of trajectory in three 0.5 1p pa 2 RR Rv Rv , (19) is the interaction parameter of the Sun and the asteroid, ms is the Sun mass, mAs is the asteroid mass, and –0.6625 is the 1950 DA trajectory parameter. Then, using (19), for the corrected orbit of the asteroid with parameters Rpc and vac we obtain: 0.5 1pc pc a RR Rv From (21), we obtain the corrected velocity of the as- teroid at aphelion: aa pc vRR R Using (19), we express in terms of 1 and va, and after substitution of this expression into (22) we obtain the corrected velocity at aphelion: ac a vvRR R . (23) From the second Kepler law, Ra·vac = Rpc·vpc, we de- termine the velocity at the perihelion of the corrected . (24) As a numerical example, consider the problem on making the asteroid 1950 DA an Earth-bound satellite with a perihelion radius equal to the geostationary orbit radius R1 = Rgs = 42,241 km. Prior to the correction, the aphelion velocity of the asteroid is va = 13.001 km·s–1, whereas the post-correction velocity calculated by Equa- tion (23) is vac = 13.912 km·s–1. Thus, for making the asteroid a body rounding the Earth orbit it is required to increase its velocity at the point Aa in Figure 13 by 0.911 km·s–1. The corrected orbit is shown in Figure 13 with line 3. According to (24), the velocity of the asteroid at the perihelion of the corrected orbit is vpc = 35.622 km·s–1. Using Equation (7), for a circular Earth orbit with –1 and Rp = aE, and with the asteroid mass mAs replaced with the Earth mass mE, for the orbital velocity of the Earth we obtain a value vOE = 29.785 km·s–1. According to (17), the velocity of the satellite in the geostationary orbit is vgs = 3.072 km·s–1. Since those velocities add up, for the asteroid to be made an Earth satellite, its velocity has to be decreased to the value vOE + vCE = 32.857 km·s–1. Thus, the asteroid 1950 DA will become a geo- stationary satellite following a decrease of its velocity at the perihelion of the corrected orbit by vpc – (vOE + vCE) = 2.765 km·s–1. We have performed the calculations for the epoch of 2641. Those calculations are, however, valid for any epoch. Our only concern is to choose the time of 1950 DA orbit Gm m Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY correction such that at the perihelion of the corrected orbit the asteroid would approach the Earth. Such a problem was previously considered in Smulsky 2008, where a launch time of a space vehicle intended to pass near the Venus was calculated. The calculations by Equ- ations (18)-(24) were carried out on the assumption that the orbit planes of the asteroid and the Earth, and the Earth equator plane, are coincident. The calculation me- thod of Smulsky 2008 allows the calculations to be per- formed at an arbitrary orientation of the planes. In the same publication it was shown that, following the deter- mination of the nearest time suitable for correction, such moments in subsequent epochs can also be calculated. They follow at a certain period. In the latter strategy for making the asteroid 1950 DA a near-Earth satellite, a total momentum ma·∆V = ma·(0.911 + 2.765) × 103 = 5.77 × 1015 kg·m/s needs to be applied. This value is 4.8 times smaller than that in the former strategy and 53 times greater than the momentum required for making Apophis an Earth satellite. It seems more appropriate to start the creation of such Earth satel- lites with Apophis. In book [36], page 189, it is reported that an American astronaut Dandridge Cole and his co-author Cox [37] advanced a proposal to capture pla- netoids in between the Mars and Jupiter and bring them close to the Earth. Following this, mankind will be able to excavate rock from the interior of the planetoids and, in this way, produce in the cavities thus formed artificial conditions suitable for habitation. Note that another pos- sible use of such satellites mentioned in [37] is the use of ores taken from them at the Earth. Although the problem on making an asteroid an Earth satellite is a problem much easier to solve than the prob- lem on planetoid capture, this former problem is none- theless also a problem unprecedented in its difficulty. Yet, with this problem solved, our potential in preventing the serious asteroid danger will become many times en- hanced. That is why, mankind getting down to tackling the problem, this will show that we have definitely passed from pure theoretical speculations in this field to practical activities on Earth protection of the asteroid 14. Discussion In the 20th century, in the science of motion, namely, in the mechanics the changes have been introduced that led to the opinion of the movements’ indeterminacy. The Ge- neral Theory of Relativity is beginning of the changes. We have studied all aspects of this theory and established its wrong reasons in the works [16,17,26,38]. Unlike existing methods, our method does not use the distortion of mechanics. Besides for the solution of differential equations we have developed a new method of high ac- curacy. Therefore, our results are more accurate and re- liable. But our work causes sharp objections of support- ers of the existing methods. Below they are presented in the form of objections. The success of scientific research often depends on the worldview of a scientist. He can have a misconception in the field, seemingly distant from the sphere of research. However, it is not to permit it to set scientific truth. The above applies to certain objections. We considered it necessary bring them and give them an answer, because this judgments interfere with the comprehension of sci- entific truth. 1. Objection. It is stated, paragraph 1. INTRODUC- TION, that presently available methods for predicting the travel path of extraterrestrial objects lack sufficient ac- curacy..., but this pronouncement is not justified in any meaningful way. In fact, it is generally regarded that the limitation on prediction is set by observational uncer- tainties, not computational abilities. As is noted, the ra- diation pressure forces set a limit on prediction of Apo- phis and 1950 DA over very long periods of time, but again, the limitation is on our ability to measure or esti- mate these forces, not on computational limitations. Answer. Apart from observational errors and the ra- diation-pressure force, there exist many other factors causing the difference between the calculated trajectory and the actual motion of an asteroid. In our paper, vari- ous approaches proposed by different authors are ana- lyzed, and a method, free of many drawbacks, is used to solve the problem. The referee expresses an opinion that in our paper we do not prove that methods capable of predicting the mo- tion of asteroids with satisfactory accuracy are presently lacking. However, the absence of such methods immedi- ately follows from the publications under consideration. It was not the point to prove that. 2. Objection. The suggestion to alter the orbits of these two objects to put them in orbit about the Earth seems absurd, and without justification. As noted, the change in velocity required to accomplish this is in the several km/sec range. It is barely conceivable with present tech- nology to make a change of a few cm/sec, five orders of magnitude less than would be required to place either object in Earth orbit. The authors make the cavalier statement that it might be possible to accomplish this, making reference to the advance from bare orbiting of instruments around the Earth to landing men on the moon in only a bit more than a decade. But they ignore the fact that the physics of how to do the latter was al- ready known before the former was done, whereas in moving asteroids around by km/sec increments of veloc- ity is far beyond any currently understood technology. It’s a bit like asking Christopher Columbus to plan a vessel to transport 400 people across the Atlantic in six Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 149 hours—he wouldn’t even know where to begin. Answer. In our paper, we put forward an idea of cap- turing an asteroid in Earth orbit, analyze available possi- bilities in implementing this project, and calculate nec- essary parameter values. We do not consider the engineering approaches that can be used in implementing the idea. That is a different field of knowledge, and this matter is to be analyzed in a separate publication. As for the referee’s remark on Christopher Columbus, the history saw how in 1485 the Columbus’ proposal about an expedition to be send through the West Ocean to India was rejected by the Mathematician Council in Portugal. Later, in 1486, the project by Columbus was also rejected by the Academic Senate in the University of Salamanca, Spain, a famous university in the Middle Ages alongside with the Montpellier, Sorbonne and Ox- ford, because the project had incurred ridicule as resting on the “very doubtful” postulate of Earth’s sphericity. The objection expresses an opinion that the idea of man’s exit into the outer space was implemented rather fast because the physics necessary for solving the prob- lems behind that project was known, whereas the physics of how to implement our project presently remains an obscure matter. The problem of man’s exit into space was solved by engineers rather than scientists. When engineers had solved all problems, then established scientists had be- come able to catch the physical essence of the matter. Presently, academic scientists are in captivity of rela- tivistic fantasies about micro- and macro-world. As a result, they failed to properly understand the entire phy- sical picture of our world, including the space travel phy- sics. The best thing such physicists could do is not to interfere into the projects actually important for mankind like the project we discuss in our paper. 3. Objection. In considering the motion of the aster- oids the paper describes only the asteroids are integrated, the other perturbations are derived from planetary ephe- Answer. In our study, we integrated not only the mo- tion of the asteroid; we also integrated the motion of other celestial bodies. 4. Objection. The integration method the authors pre- sent is not new (though the implementation in software may be). They present a simple, fixed-step Newtonian integrator that models only gravitational point masses. Far more sophisticated methods and physics have been published before precisely because the approach the au- thors go on to describe is inadequate. Answer. Whether our integration method is new or not, the definition here is rather relative. Formula (2) in our paper gives a specialist the general idea behind our me- thod and some details of its implementation is stated in the paper. Since none of the already existing methods was used in treating the problem we deal with in our study, we qualify our method as an original one. In our opinion, our method is akin to the method of Taylor- Steffensen series rather than to the Newton method. In this approach the derivatives are determined by recur- rence formulas. In our method (program Galactica) the derivatives are calculated under the exact analytical for- mulas, which we have deduced. This provides greater accuracy than other methods. 5. Objection. The presently available methods for pre- dicting the travel path of extraterrestrial objects are fine. They are the same ones used to deliver spacecraft to planets and fit measurement data-arcs hundreds of years Answer. One of the deficiencies of presently available methods for integrating the motion of celestial bodies is that those methods were constructed so that to provide a best fit to observational data. Within the period of avail- able observations, those methods proved to yield rather good results. On the other hand, calculations of the mo- tion of a previously unobserved object will obviously yield worse results. Also, calculations of the motion of a body observed during some observation period per- formed far outside this period will also yield less accu- rate results. 6. Objection. It is the limited knowledge of the physic- cal properties of the objects that is the problem. Given measurements of those properties (spin, reflectivity, etc.), proper prediction is possible within computable error Answer. The opinion that physical properties of an asteroid such as reflectivity or spin may notably affect the asteroid’s motion is an erroneous opinion. This opin- ion is the consequence of deficiencies inherent to the methods mentioned in Answer 5. The actual motion of celestial bodies and spacecraft having been found differ- ent from their calculated motion, the people dealing with celestial mechanics undertook introducing additional fictitious forces into motion equations, such as the Yar- kovsky force, whose magnitude was assumed to be de- pendent on the physical properties of a particular body under study. 7. Objection. It is stated (p. 1. INTRODUCTION), that “… the Apophis trajectory will for long remain... cha- otic”. No. Error growth is almost entirely in the along- track direction. It is not chaotic over relevant time-scales and measurements likely in 3 years will radically reduce those prediction uncertainties about 97%. This is de- scribed in the papers the authors reference, so seems to be a misunderstanding. Answer. It was the authors of cited publications rather than us that have qualified the motion of Apophis as a chaotic motion. Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 8. Objectio n. The planetary ephemeris error it is far less than radiation related effects like solar pressure and Yarkovsky thermal re-radiation. The Giorgini et al. 2008 paper referenced shows what is required for better pre- diction is PHYSICAL KNOWLEDGE of the object (mea- surement), not METHOD. Answer. Indeed, Giorgini et al. [4] have demonstrated that the solar pressure and Yarkovsky thermal re-radia- tion may have a considerable influence on the motion of celestial bodies. We, however, believe that the results by Giorgini et al. are erroneous. First, the forces of Giorgini et al. dealt with are fictitious nonexistent forces. Second, the interaction constants of those forces were artificially overestimated by Giorgini et al. [4]. We would like to deliver here some additional remarks concerning the fictitious nature of some forces. More than half a century ago it was shown by some physicists that no light pressure is observed in nature. Unfortunately, those results have been forgotten by many physicists. The Yarkovsky force was introduced so that to com- pensate for the difference between the observed motion of celestial bodies, spacecrafts and their motion as pre- dicted by contemporary theories. As it was already men- tioned, the presently available methods for predicting the motion of celestial bodies suffer from serious deficien- cies. Those deficiencies need to be overcome, and we believe that, following this, the difference between the actual motion of bodies and their motion as predicted assuming only the Newtonian gravity force to be opera- tive will be made negligible and even exiled from final results. Then, additional fictitious forces will no longer be needed. Let us give here some direct arguments proving that the forces under discussion are in fact fictitious forces. When in mechanics someone says that a force acts on a body, this does not mean that the force presents a mate- rial object. The sentence “a force acts on a body” is just slang. In mechanics, we imply that some body acts on another body. The influence is manifested in the changed motion of the second body. A change in motion is de- fined by body’s acceleration. Hence, the action exerted by the first body consists in an acceleration experienced by the second body. Man has invented mechanics in which actions are de- fined by an auxiliary quantity called the force. The force was defined as a quantity proportional to acceleration accurate to a factor (for details, see our books [16,17]). So, the term “force” is not a name for an object in our world. When somebody says that a body on an inclined board experiences the actions due to the friction force and due to the gravity force, we imply that the body is acted upon by the board and, through the gravity interac- tion, by the Earth. When somebody says that the Moon is acted upon by the gravity force due to Earth, this means that it is the Earth that acts on the Moon. On the other hand, in the case of light-pressure and Yarkovsky forces the acting bodies are missing. If one thinks of light considering it as a photon flux, he has to remember that photons have no mass, and they are there- fore no physical bodies. Yarkovsky had invented his force as a force due to either particles, which are also nonexistent objects. Thus, both the light pressure and Yar- kovsky thermal re-radiation are not actions due to bodies; such forces therefore bear no relation to mechanics. The only application fields of such forces are extra-sensory perception and Hollywood movies. Those forces “can be used” in ephemerid approximation models, such as SDM, because they all the same need to be fitted to many hun- dred thousand observations. 9. Objection. In p. 2. PROBLEM STATEMENT the in- sufficient information was provided to determine what integration algorithm was used by the authors. This is unacceptable given the rest of the paper. The previously published literature on this subject is vast and highly developed and should be drawn upon and Answer. Formulas (1) and (2) in our paper give the general idea behind our method, and they also define the form of master equations used in it. Details of the algo- rithm, and those of the method and equations, are too numerous to be outlined in the paper. We exploited our method over a period of more than ten years, and during that period, using the method, we have solved many pro- blems. Some of our results were reported in publications [18-21,26,38]. In those publications, some details of the algorithm were described, and ample data on the ade- quacy of our method and credibility of solutions obtained, given. Below we list some of the problems that were tackled with the help of the Galactica software. 9.1. Evolution of planetary orbits and the orbit of Moon over the period of one hundred million years [18, 19]. It was for the first time that non-simplified differen- tial equations of motion were integrated. The periods and amplitudes of planetary-orbit oscillations were evaluated, and stability of the Solar System was demonstrated (see Figures 3 and 4). 9.2. Optimal flight of a spacecraft to the Sun [21]. The spacecraft was proposed to use the gravitational maneu- ver near to Venus. The launch regime of the spacecraft allowing minimization of its starting velocity was identi- 9.3. Compound model of Earth rotation and the evolu- tion of Earth rotation axis [20]. The Earth is considered as a system of several bodies located in the equatorial plane of a central body. The motion of one of the periph- eral bodies models the motion of Earth rotation axis. The evolution of Earth rotation axis was calculated over a period of 110 thousand years. It was found that the Earth Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 151 rotation axis precesses relative to the non-stationary axis of Earth orbital motion. 9.4. Compound model of Sun rotation and its out- comes for the planets [26] and [38]. The Sun rotation period is 25.38 days. The Galactica software was used to predict the outcomes of the compound model of Sun ro- tation on nearest planets. As a result of the calculations, an excessive revolution of Mercury perihelion was iden- tified, which was previously explained assuming other mechanisms to be operative. 9.5. Multilayer ring structures [39]. The structure of interest comprises several rings, each of the rings involv- ing several bodies. Evolution of several such ring struc- tures was calculated, and stable and unstable configure- tions were identified. 10. Objectio n. In p. 4. PREPARATION OF INITIAL DATA OF ASTEROIDS three pages of discussion and Equations (5)-(14) on the transformation of orbital ele- ments to Cartesian coordinates could be deleted. This material is found in every introductory celestial mechan- ics course and need not be belabored. Answer. Orbital elements can be transformed into Cartesian coordinates in different ways that yield diffe- rent results. We have chosen the best transformation; we have deduced them and therefore give it in our paper. In addition, our consideration involves some formulas not be found in standard courses on celestial mechanics. 11. Objection. Further, the authors state their goal is to compute barycentric Cartesian coordinates, but then describe only heliocentric transformations. No informa- tion on if or how transformation from heliocentric to the barycentric needed by their code is given leads the reader to wonder if heliocentric coordinates were im- properly used in the barycentric code. Answer. The transformation of heliocentric coordi- nates to barycentric ones are firstly omitted from our paper as presenting a matter of common knowledge. Guided by the referee’s remark, now we discuss it in our paper after Equation (14). 12. Objection. In end of p. 4. PREPARATION OF INI- TIAL DATA OF ASTREOIDS the authors have written: “… the masses of those bodies were modified by us…” This would introduce a dynamical inconsistency within the planetary ephemeris used to compute perturbations in the integration. Was the magnitude of this inconsis- tency computed? The coordinates from DE405/406 said to be used are derived from the original planetary masses. Change those masses and the positions will change, hence per- turbations on the object being integrated, hence the re- sult of the integration. Answer. We integrate Equations (1) for a total of twelve bodies, including the planets, Moon, Earth, and Apophis. We did not use planet and Moon coordinates taken from ephemerides; hence, any mass values can be adopted. The closer are the mass values to real masses, the better is the consistency between the calculated and observational data. We have checked this fact. In Galac- tica, the mass values and the initial data are specified in a separate file, which can easily be replaced with another file. Now, the relative mass values are taken from the DE405 system, whereas the absolute values have been recalculated as G·MEarth, where MEarth is the Earth mass from the IERS system. The mass values adopted in our calculations are indicated in Table 2. 13. Objection. Studying Figure 5 at length, it is unable to interpret it. It seems to show two dots for Earth at point A; the text says there is only one. Time scale would be better in calendar years instead of fractional centuries. A figure is used if it shows relationships or trends clearly. This figure does not. Why not a useful table of numerical values? Answer. The first dot in the horizontal line Ea refers to time А. The second dot after interval Т = 1 year re- fers to the Earth, too. As it is seen from the graph, here the distance to which the asteroid closes the Earth is greater than 4.25 × 107 km. Figure 5 is indeed an uncommon representation. How- ever, in case this uncommonness is overcome, Figure 5 gives a clear picture of the asteroid’s approach to all the bodies over the whole considered time interval. No such picture can be grasped from a table. 14. Objection. It is stated in end of p. 5 APOPHIS’ that “As for the possible approach of Apophis to the Earth in 2036, there will be no such approach…” This is another fundamental misunderstanding of the paper resulting from an incorrect analysis. The authors integrate a nominal orbit solution only and find it does not closely approach the Earth in 2036. However, it is necessary to examine not just the single nominal orbit, but the set of statistically possible orbit variations, defined by the orbit solution covariance ma- trix, as well as physical uncertainties (uncertainites). The papers the authors cite go into such statistical approaches extensively. Why does this fundamental issue of modern orbit de- termination not exist in this paper? The analysis the authors provide does not recognize the statistical nature of the problem. The authors ap- proach is not acceptable for analyzing such problems because it ignores the statistical distribution of orbit variations defined by the measurement dataset. This alone renders the paper and its conclusions ir- relevant to readers. Answer. We regard such a statistical study a vain un- Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY If many measurement data for a parameter are avail- able, then the nominal value of the parameter, say, ec- centricity en, presents a most reliable value for it. That is why a trajectory calculated from nominal initial condi- tions can be regarded as a most reliable trajectory. A tra- jectory calculated with a small deviation from the nomi- nal initial conditions is a less probable trajectory, where- as the probability of a trajectory calculated from the pa- rameters taken at the boundary of the probability region (i.e. from e = en ± e) tends to zero. Next, a trajectory with initial conditions determined using parameter values trice greater than the probable deviations (i.e. e = en ± e) has an even lower, negative, probability. Since ini- tial conditions are defined by six orbital elements, then simultaneous realization of extreme (boundary) values (± ) for all elements is even a less probable event, i.e. the probability becomes of smaller zero. That is why it seems that a reasonable strategy could consist in examining the effect due to initial conditions using such datasets that were obtained as a result of suc- cessive accumulation of observation data. Provided that the difference between the asteroid motions in the last two datasets is insignificant over some interval before some date, it can be concluded that until this date the asteroid motion with the initial conditions was deter- mined quite reliably. Such computations were carried out and described in the additional Section 6. Influence of initial conditions. 15. Objection. In p. 6. APOPHIS ORBIT EVOLUTION the authors describe integrating the orbit of Apophis over 200 years, writing out a file of coordinates each year. They then go back and, starting from each file, in- tegrate one Apophis orbit period and save that to a file. Why? 201 integrations are being done when one would suffice. Is not going back and integrating from the start- ing point of each yearly file the same as integrating con- tinuously over the span? Answer. On integration of Equation (1), we obtain coordinates of each body in the barycentric system. For determining a body’s orbital elements, it is required to consider the coordinates of the body with respect to a parent body (for an asteroid, with respect to the Sun) during one orbital period. To avoid a complex logic in choosing coordinate values, in integration over the whole time interval of interest, we chose to adhere to the strat- egy described in the paper. 16. Objection. In p. 9. POSSIBLE USE OF ASTEROID APOPHIS the argument made for capturing Apophis into Earth orbit is at a level suitable for sketching on a nap- kin. No discussion of material properties, or mechanics. The composition of Apophis is unknown and the discus- sion amounts to speculation for personal entertainment. Answer. In the paper, we describe available strategies for making the asteroid an Earth-bound satellite and cal- culate parameter values necessary for realization of such a project. An analytical background behind those strate- gies is developed. The motion of the asteroid after tra- jectory correction and the motion of formed satellites were determined by integrating Equations (1). We do not describe all the obtained results in our paper; however, those results were used to substantiate the proposed strategies in capturing the asteroid in Earth orbit. Those strategies are unobvious, and it should be remembered that one can propose strategies that never can be imple- mented. We propose realizable strategies. We have cal- culated the orbit evolution of the satellites and proved that those orbits can be made stationary for a long time. The computations for satellites were made taking into account the action exerted on them by all bodies. We believe our calculations to be original. Following our publication, other workers will move farther in this di- How can those strategies be implemented? This matter will be discussed after the present results are reported in the literature. For the time being, we raise the issue of making an asteroid an Earth-bound satellite. This issue is given rather a deep analysis. All computations are per- formed at a good scientific level. That is why our results are not to be ignored, and the work, regarded as a sketch on napkin, to be one day thrown away. It is more prob- able that it is the statistical data on the asteroid’s en- counter with the Earth rather than our paper that will be one good day thrown away. 17. Objection. Ii is stated in the beginning p. 9 that “Over subsequent 1000 years, Apophis will never ap- proach our planet closer”. The analyses given cannot support the statement. All uncertainties physical and measurement are ignored by the authors. Only the single nominal orbit is considered. This is unacceptable and the results of no interest to readers. Answer. Indeed, our calculations show that the aster- oids will not hit the Earth. On conscientious analysis, statistical data on such collisions in the cited publications are also indicative of this fact. Only undisguised trick- sters, reasoning from such statistics, can frighten the so- ciety with the threat of Apophis danger. With passage of time, people usually become aware of scientific trickery, and this deteriorates their trustfulness to science. The way we propose in our paper will allow mankind to de- velop in the future a good method for preventing the po- tential threat of asteroid’s collisions with Earth. Note that such method can only be implemented if we find a way for making asteroids Earth-bound satellites. The dynamics of Solar system is not linear. Thus, there can be orbits with initial conditions intermediate to those that authors used, which can lead to closer approaches. Actually, the theory predicts that there are KEYHOLES, associated to RESONANT RETURNS which can lead to Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY 153 collisions. This aspect is missing in this work. In this paper no new results. This looks more like the report of a beginner entering in this field for the first time, and just setting up the software tools and the con- ceptual know-how to be able, in the future, to perform research in this field. In particular some conceptual building blocks are still missing, such as the notion of chaos (mentioned just once as dreaded possibility, while it is a well established fact that all the asteroids which can impact the Earth are on chaotic orbits), and the ef- fect of nonlinearity in the orbit determination and in the propagation of the uncertainty to a future time. Answer. There are three aspects in this objection: 1) Within the uncertainty ellipsoid of the orbital ele- ments do not found a collision with Earth; 2) Do not shown the KEYHOLES of resonant and cha- otic trajectories; 3) There are no new results. Our paper shows that the search for collisions within the uncertainty ellipsoid of the orbital elements is mean- ingless work. We have also shown that the conclusion about chaotic motion is caused by imperfection of meth- ods of integrating the equations. By another method and in another way we are solving this problem. We have got new results: Asteroids Apo- phis and 1950 DA do not impact the Earth. In addition we are putting forward and are based the new idea: The transformation of the asteroids in the satellites. So, in our paper the new methods are used, the new results are received and the new ideas are putted forward. In contrast to our paper the published papers, which we cited above, prove the false idea about collisions in 2036 and in 2880. These papers are misleading readers. When the scientists’ errors become clear, the society has intensified distrust of science. In published papers the imaginary constructs are in- vestigated: Chaos, resonances, keyholes, etc. Their au- thors use methods with the imaginary precision by which supposedly can determine the motion of the planets up to mm and up to marcsec. We emphasize the imaginary precision that arises when comparing the methods on those observations, to which they are fitted. If someone is using them to calculate the outside of this area, the mo- tions of bodies differ significantly from the calculated movements. The authors of published papers believe that there are fictitious forces (Yarkovsky force, etc.), reso- nances and keyholes, which make the body motion cha- otic. That is, rather than to doubt the accuracy of the methods they put forward the reasons for their justifica- The same methods found that the Solar system after 20 million years ago is starting to change, and in the future because of the chaos it begins to collapse. The reason for these phenomena lies in the imperfection of methods for calculating the motions. In contrast, our method allowed us to integrate the equations of the Solar system motion for 100 million years: The Solar system is stable and no signs of change. So the keyholes, resonances, chaos and the fictitious forces appear due to imperfect methods of calculating the motions. Our paper cannot be viewed superficially, it must be deeply studied. It gives a lot of new knowledge about the evolution of the asteroids motion, the accuracy of inte- gration methods and on the ways in which to develop these methods. The modern celestial mechanics dominates by ideas of indeterminacy, of unpredictable resonances and of cha- otic motions. Our paper provides the mathematical tools and techniques that allow us to calculate the movement with known accuracies, and then to implement them. The paper presents a path that each can go through and check out our results. This is the science. But the chaos, the resonances, the keyholes are not the science, those are Extrasensory. It is need return to the classical celestial mechanics, the creators of which are not doubted the determinacy of 15. Conclusions 1) The instability and randomness in dynamics of planets and asteroids is caused by imperfection of meth- ods of account of movements; 2) The parameters of planets orbits are steady chang- ing with the certain periods and amplitudes; 3) On 21 hour 45' GMT, April 13, 2029 Apophis will pass close to the Earth, at a minimum distance of 6 Earth radii from Earth’s center. This will be the closest pass of Apophis near the Earth in the forthcoming one thousand 4) Calculations on making Apophis an Earth bound satellite appropriate for solving various space exploration tasks were performed; 5) The asteroid 1950 DA will twice approach the Earth to a minimal distance of 2.25 million km, in 2641 and in 6) At any epoch, the asteroid 1950 DA can be made an Earth-bound satellite by increasing its aphelion velocity by ~1 km·s–1 and by decreasing its perihelion velocity by ~2.5 km·s–1. 16. Acknowledgements The authors express their gratitude to T. Yu. Galushina and V. G. Pol, who provided them with necessary data on asteroid Apophis. They are also grateful to the staff of the Jet Propulsion Laboratory, USA, whose sites were used as a data source from which initial data for integra- tion of motion equations were borrowed. The site by Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY Edward Bowell (ftp://ftp.lowell.edu/pub/elgb/) was help- ful in grasping the specific features of asteroid data rep- resentation and in avoiding possible errors in their use. Krotov O. I. took part in calculations of the Apophis mo- tion on the system Horizons. The calculations were car- ried out on the supercomputer of the Siberian Super- computer Centre of Siberian Branch RAS. The study was carried out as part of Integration Pro- grams 13 (2008-2011) of the Presidium of the Russian Academy of Sciences. [1] J. Laskar, “Large-Scale Chaos in the Solar System,” As- tronomy & Astrophysics, Vol. 287, No. 1, 1994, pp. L9- [2] J. Laskar, P. Robutel, F. Joutel, M. Gastineau, A. C. M. Correia and B. Levrard, “A Long-Term Numeriucal Solu- tion for the Insolation Quantities of the Earth,” Astro- nomy & Astrophysics, Vol. 428, No. 1, 2004, pp. 261-285. [3] J. Laskar, A. C. M. Correia, M. Gastineau, F. Joutel, B. Levrard and P. Robutel, “Long Term Evolution and Chao- tic Diffusion of the Insolation Quantities of Mars,” As- tronomy & Astrophysics, Vol. 428, No. 1, 2004, pp. 261- 285. doi:10.1051/0004-6361:20041335. [4] J. D. Giorgini, L. A. M. Benner, S. I. Ostro, H. C. Nolan and M. W. Busch, “Predicting the Earth Encounters of (99942) Apophis,” Icarus, Vol. 193, No. 1, 2008, pp. 1- 19. doi:10.1016/j.icarus.2007.09.012. [5] R. Tucker, D. Tholen and F. Bernardi, MPS 109613, Minor Planet Center, Cambridge, 2004. [6] G. J. Garradd, MPE Circ., Minor Planet Center, Cam- bridge, 2004, Y25. [7] L. V. Rykhlova, B. M. Shustov, V. G. Pol and K. G. Suk- hanov, “Urgent Problems in Protecting the Earth against Asteroids,” Proceedings of International Conference Near- Earth Astronomy, Nalchik, 3-7 September 2007, pp. 25- [8] V. A. Emel’yanov, Y. K. Merkushev and S. I. Barabanov, “Cyclicity of Apophis Observation Sessions with Space and Ground-Based Telescopes,” Proceedings of Interna- tional Conference Near-Earth Astronomy, Nalchik, 3-7 September 2007, pp. 38-43. [9] V. A. Emel’yanov, V. I. Luk’yashchenko, Y. K. Merkushev and G. R. Uspenskii, “Determination Accuracy of Aster- oid Apophis’ Orbit Parameters Ensured by Space Tele- scopes,” Proceedings of International Conference Near- Earth Astronomy, Nalchik, 3-7 September 2007, pp. 59- [10] L. L. Sokolov, A. A. Bashakov and N. P. Pit’ev, “On Pos- sible Encounters of Asteroid 99942 Apophis with the Earth,” Proceedings of International Conference Near- Earth Astronomy, Nalchik, 3-7 September 2007, pp. 33- [11] E. Everhart, “Implicit Single-Sequence Methods for Inte- grating Orbits,” Celestial Mechanics and Dynamical As- tronomy, Vol. 10, No. 1, 1974, pp. 35-55. [12] L. E. Bykova and T. Y. Galushina, “Evolution of the Probable Travel Path of Asteroid 99942 Apophis,” Pro- ceedings of International Conference Near-Earth Astron- omy, Nalchik, 3-7 September 2007, pp. 48-54. [13] L. E. Bykova and T. Yu. Galushina, “Fundamental and Applied Problems in Modern Mechanics,” Proceedings of the 6th All-Russia Scientific Conference, Tomsk, 30 Sep- tember-2 October 2008, pp. 419-420. [14] E. A. Smirnov, “Advanced Numerical Methods for Inte- grating Motion Equations of Asteroids Approaching the Earth,” Proceedings of International Conference Near- Earth Astronomy, Nalchik, 3-7 September 2007, pp. 54- [15] V. V. Ivashkin and K. A. Stikhno, “An Analysis of the Problem on Correcting Asteroid Apophis’ Orbit,” Pro- ceedings of International Conference Near-Earth As- tronomy, Nalchik, 3-7 September 2007, pp. 44-48. [16] J. J. Smulsky, “The Theory of Interaction,” Publishing House of Novosibirsk University, Novosibirsk, 1999. [17] J. J. Smulsky, “The Theory of Interaction,” Publishing House Cultural Information Bank, Ekaterinburg, 2004. [18] E. A. Grebenikov and J. J. Smulsky, “Numerical Inves- tigation of the Mars Orbit Evolution in the Time Interval of Hundred Million,” A. A. Dorodnitsyn Computing Cen- ter of RAS, Moscow, 2007. [19] V. P. Melnikov and J. J. Smulsky, “Astronomical Theory of Ice Ages: New Approximations. Solutions and Chal- lenges,” Academic Publishing House, Novosibirsk, 2009. [20] V. P. Mel’nikov, I. I. Smul’skii and Y. I. Smul’skii, “Com- pound Modeling of Earth Rotation and Possible Implica- tions for Interaction of Continents,” Russian Geology and Geophysics, Vol. 49, No. 11, 2008, pp. 851-858. [21] J. J. Smulsky, “Optimization of Passive Orbit with the Use of Gravity Maneuver,” Cosmic Research, Vol. 46, No. 55, 2008, pp. 456-464. [22] K. F. Steffensen, “On the Problem of Three Bodies in the Plane,” Kongelige Danske Videnskabernes Selskab, Mate- matisk-Fysiske Meddelelser, Vol. 13, No. 3, 1957. [23] M. S. Yarov-Yarovoi, “The Use of Refined Methods of Numerical Integration in Celestial Mechanics,” GAISh Transactions, Vol. 45, 1974, pp. 179-200. [24] J. J. Smulsky, “The Axisymmetrical Problem of Gravi- tational Interaction of N Bodies,” Matematicheskoe Mod- elirovanie, Vol. 15, No. 5, 2003, pp. 27-36. [25] T. R. Quinn, S. Tremaine and M. Duncan, “A Three Mil- lion Year Integration of the Earth’s Orbit,” Astronomical Journal, Vol. 101, 1991, pp. 2287-2305. [26] J. J. Smulsky, “New Components of the Mercury’s Peri- helion Precession,” Natural Science, Vol. 3, No. 4, 2011, pp. 268-274. doi:10.4236/ns.2011.34034. [27] S. Newcomb, “The Elements of the Four Inner Planets and the Fundamental Constants of Astronomy,” Govern- Copyright © 2012 SciRes. IJAA J. J. SMULSKY, Y. J. SMULSKY Copyright © 2012 SciRes. IJAA ment Printing Office, Washington DC, 1895. [28] J. L. Simon, P. Bretagnon, J. Chapront, G. Francou and J. Laskar, “Numerical Expression for Precession Formulae and Mean Elements for the Moon and the Planets,” As- tronomy & Astrophysics, Vol. 282, 1994, pp. 663-683. [29] E. Bowell, “The Asteroid Orbital Elements Database,” 2008. ftp://ftp.lowell.edu/pub/elgb [30] California Institute of Technology, “JPL Small-Body Database 2008. Jet Propulsion Laboratory. 99942 Apo- phis (2004 MN4),” 2008. [31] G. N. Duboshin, “Celestial Mechanics and Astrodynam- ics,” Nauka, Moskva, 1976. [32] J. J. Smulsky, “A Mathematical Model of the Solar Sys- tem,” In: V. A. Bereznev, Ed., The Theoretical and Ap- plied Tasks of the Nonlinear Analysis, A. A. Dorodnicyn Computing Center, Moscow, 2007, pp. 119-139. [33] “Ephemerides of the Jet Propulsion Laboratory,” 2008. [34] E. M. Standish, “JPL Planetary and Lunar Ephemerides,” Interoffice Memorandum, 1998. [35] J. D. Giorgini, et al., “Asteroid 1950 DA Encounter with Earth in 2880: Physical Limits of Collision Probability Prediction,” Science, Vol. 296, No. 5565, 2002, pp. 132- [36] U. Corliss, “Mystery of the Universe,” Mir, Moscow, [37] B. D. V. Cole and D. W. Cox, “Islands in Space,” Chilton Books, Philadelphia, 1964. [38] J. J. Smulsky, “Gravitation, Field and Rotation of Mer- cury Perihelion,” Proceedings of 15th Annual Conference of the Natural Philosophy Alliance, Albuquuerque, 7-11 April 2008, pp. 254-260. [39] I. I. Smul’skii, “Multilayer Ring Structures,” Physics of Particles and Nuclei Letters, Vol. 8, No. 5, 2011, pp. 436- 440. doi:10.1134/S1547477111050189.
{"url":"https://www.scirp.org/html/23224.html","timestamp":"2024-11-14T18:17:53Z","content_type":"text/html","content_length":"425431","record_id":"<urn:uuid:778e97d7-d4c2-436e-bf5f-7d18036b0361>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00713.warc.gz"}
VLBI Analysis software • VLBI processing software PIMA that performs initial calibration for sampler distortion, system temperature, antenna gain, phase calibration, complex bandpass calibration and evaluate group delays, phase delay rate, group delay rate and phase of the cross-correlation. • Library petools — a toolkit for astronomy and space geodesy software. • Library NERS — functions that perform computation of the Earth orientation parameters and the rotation matrix. • The VTD library of routines for computing VLBI path delay and Doppler shift with the highest accuracy. • Library fourpack — a set of routines that implement a convenient interface to Fast Fourier Transform libraries, perform linear filtration with the use of FFT; compute tuning parameters for FFTW. • Library gvh — the database handler of geodetic and absolute astrometry VLBI observations. • Library vex_parser — parses files that defines a VLBI schedule in VEX (VLBI EXchange) and parses STation Parameter files formats. • Program sur_sked for scheduling VLBI experiments of astrometry survey mode. Back to astrogeo.org This web page was prepared by Leonid Petrov ( Last update: 2017.04.12_15:38:55
{"url":"http://astrogeo.org/vlbi_analysis/","timestamp":"2024-11-03T14:10:07Z","content_type":"text/html","content_length":"3423","record_id":"<urn:uuid:9593a113-46ad-4a49-8a3e-c4fcecf40e98>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00035.warc.gz"}
Numerical Evaluation This lesson discusses numeric types and evaluation in SymPy. We'll cover the following Numeric types While SymPy primarily focuses on symbols, it is impossible to have a completely symbolic system without the ability to numerically evaluate expressions. Many operations will directly use numerical evaluation, such as plotting a function or solving an equation numerically. SymPy has 3 numeric types: rational, real, and integers. Since rational is the new one, let’s discuss it here: The rational class represents a rational number as a pair of two integers: the numerator and the denominator. Rational(1, 4) represents $\frac{1}{4}$ Rational(5, 3) represents $\frac{5}{3}$ All the arithmetic operations can be performed on these rational numbers. Let’s see an implementation of this below: Get hands-on with 1200+ tech skills courses.
{"url":"https://www.educative.io/courses/python-for-scientists-and-engineers/numerical-evaluation","timestamp":"2024-11-06T04:04:12Z","content_type":"text/html","content_length":"765086","record_id":"<urn:uuid:c00306fe-9b70-4b4b-9f38-2df0f4c31339>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00518.warc.gz"}
Rock Compression - 911Metallurgist In topic one, we introduced stress and strain. These 2 variables are related because application of a stress results in a strain. In this topic, we will see how a specimen of intact rock responds when it is subjected to a simple compression. So, if we consider specimens, such as this one with flat ends, and we compress the specimen. So that could be represented as this little sketch here. So we have the specimen subjected to the compression and as the specimen is stressed, it deforms. So the mechanical response can be plotted in a stress, sigma, strain, epsilon, plane. It typically involves an initial non-linear part, where stress increases with strain non-linearly and is now, after that, followed by a linear evolution, where stress is proportional to strain. Progressively, as we increase the stress, the rock starts to not be able to cope with the amount of stress anymore and that would lead to progressive failure, where the specimen breaks into pieces. The failure can be brittle, like glass breaking, and would have response like this, where the load suddenly falls, nut the specimen could also sustain the load, where we would have then a plateau there or even we could observe slight increase in the load past this inflexion point here, from the mechanical response, we can determine the unconfined compressive strengths of the rock, this corresponds to the peak value sustained by the rock specimen and we can read the value here, on the stress axis. However, this value, which is a resistance in compression, is not enough to fully characterize the strengths of the rock. Take, for example, this rock specimen, which is broken here. It certainly has some strengths in compression, but there is no strengths in traction. So this shows that strengths has to be considered relative to loading conditions. In fact, in many engineering applications, another failure mechanism prevails. It is called shear, and it is a form of resistance to sliding. Consider this rock shape with discontinuities you can see on this figure. If the block highlighted in yellow detaches from the face, it would slide and that shows that the discontinuity is not subjected to tension or compression. This type of loading is called shear and it occurs when the forces acting on the specimen are not aligned anymore. So when it comes to the strengths of a rock, it is usually required to quantify the unconfined compressive strength, or also called U-C-S, the tensile strength, typically noted sigma t. sigma t is typically lower than about tenth of the unconfined compressive strength and we also have this shear strength we have just mentioned with the previous picture. The thing is this shear strength is not a unique value and this can be illustrated using this coming picture. If you consider two blocks resting on a flat surface, one block made of wood, one made of concrete, you would have more difficulties pushing and sliding the concrete block than the timber block, because the concrete block is heavier than the timber block. So that shows that, to some extent, the sliding resistance, or the resistance offered by the surface, depends very much on the amount of compressive stress applied perpendicular to the sliding direction. So the shear strength is not unique and in fact, of going back to the analogy of pushing a block, we should actually measure the shear strength for different values of weight of blocks, or vertical stress applied to the surface and by plotting all the values of shear strength against the vertical stress we have applied, we can form a failure criterion. So unconfined compressive strength, tensile strength, and the failure criterion of shear strength will provide all the information we need on the strength of the rock, because strength is the maximum value of stress the material can sustain, it is independent from the geometry of the specimen tested. That is the definition of stress we have seen in topic one. So strengths full characterize the material regardless of the geometry. In topic 1, we said that it is important to assess how a material deforms under a load. To do so, we need the material’s deformability. This can be inferred from the stress-strain curve we have drawn before. Let us draw this curve again for a case of simple compression. So, we have stress as vertical axis, strain as horizontal axis, and the response we drew before was something like this. The deformability of the material is typically quantified through the coefficient of proportionality between stress and strain, which is called E and is the Young’s Modulus. So when it comes to engineering, this non-linear part is typically neglected and we simply consider that stress equals E times strain. E is very critical information to relate stress and strain. So let me explain this E a bit more to you. E is called the Young’s Modulus and we have a relationship between stress, sigma, equals E times epsilon. We set sigma as a unit of pascal. We set strain as no unit, hence, E as unit of pascal as well. Because stress and the strain characterize a material, regardless of the geometry of the specimen we have tested so does E. E is an interim property of a material, regardless of the geometry of the specimen we have used to obtain this response. So now let us have a look at what happens in the direction perpendicular to loading. So that is, for example, if we know a specimen vertically, what happens in the horizontal direction? And let us start with an experiment. Let us take this rubber band and we are to stretch it. Look at how wide it is, and look at what happens to the width as we stretch it. We can clearly see that the rubber band narrows as we pull on it. If we had a specimen subjected to compression, the opposite would occur. As the specimen gets shorter, it would bulge and get thicker. This is illustrated in this picture. As the vertical strain and horizontal strains are related. They are in fact perpendicular and the coefficient of proportionality is called Poisson’s ratio. So we have the horizontal strain divided by the vertical strain equals minus Poisson’s ratio. The Poisson’s ratio, noted nu, ranges from 0.1, typically to 0.4 for rocks and a typical value would be around 0.3. you can notice a negative sign on the equation we showed on the picture. This is to reflect the fat that the compression in one direction creates an elongation in the other, and vice versa. So, in conclusion, knowing the properties of rock material, such as strengths and deformability is critical when it comes to designing a structure on a rock mass. In the next topic, I will present you some of the laboratory tests we can conduct to infer the strengths of rock material.
{"url":"https://www.911metallurgist.com/blog/rock-compression/","timestamp":"2024-11-05T00:26:49Z","content_type":"text/html","content_length":"124097","record_id":"<urn:uuid:176224c5-66b7-4322-a8eb-f6f05348ecef>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00221.warc.gz"}
What are Mean & Midpoint values? How to calculate? In our everyday life, we came across a lot of data collection and representation. Whether we discuss our country’s population, the number of students in our class, our bills, or bank statement. In fact, we are surrounded by data in many forms and collections. But the concern right now is, can it be possible to represent a collection of data by any specific figure or quantity of the data collection? Averages or measures of central tendencies are the simple solutions to this question. Mean, Median and Mode are the basic measures of central tendencies. Which central tendency to choose to represent a specific data depends upon the type of data we are going to represent with the So let’s start our quest in this article for the mean and midpoint values and how can we calculate them? The common average we are all aware of is the mean, in which we sum up all the vales and then divide it by the total number of values. However, there are four different types of means in central tendencies but here we will discuss the arithmetic mean. Types of Mean • Arithmetic Mean • Weighted Mean • Geometric Mean • Harmonic Mean Arithmetic Mean In terms of Mathematics, arithmetic mean can be defined as a list of numbers to describe a central tendency. To calculate the arithmetic mean of the, add up all the variables in the data and then divide the sum by the number of items in the data collection. The most well-known and popular measure of central tendency is the average or arithmetic mean. The mean could be used with discrete as well as continuous data. However, most commonly it is used with continuous data. Mean Calculation As mentioned above, the mean can be calculated simply by adding all the values and then by dividing with the total number of values. Mean can also calculate by using So if we want to calculate the mean of x number of values in a data collection and the values it contains are x1, x2, …………xn, then we can do this by: M= sum of terms/no of term In essence, the mean is a model of the data collection that we may calculate by using a mean solver. This is the most often used value. The mean, on the other hand, is rarely one of the real values in the data collection. One of its most valuable features, though, is that it reduces the amount of error in predicting any single value in the data collection. That is, it is the value in the data set that causes the least amount of error when compared to all other values. The fact that every value in your data set is used in the equation is an essential property of the mean. The mean is therefore the only indicator of central tendency in which the sum of each value’s deviations from the mean is always zero. As the name implies the midpoint or median is the value that is present in the center or middle of a data collection. This quantity s could also be used to differentiate the given data set into two subgroups. These are higher half values of the sample and lower half values of the sample. In order to find out the median in a given set of values. First, we have to arrange all of the values in ascending order of values, it is also known as ranking. Then we can find the midpoint by simply the center of the distribution. This method fits the situation when the data set has an odd number of values. However, in the case of an even number of values, there can’t be a midpoint. Then we can find out our midpoint by forming a pair of two middle values. Then add both the values and divide the sum value by “2”. The resultant value then considered as the midpoint or median. That means by taking the mean of two midpoint values we can calculate our midpoint. The median is such a central tendency that is least affected by skewed data and outliers.
{"url":"https://www.todaymagazine.org/what-are-mean-midpoint-values-how-to-calculate/","timestamp":"2024-11-06T02:57:46Z","content_type":"text/html","content_length":"84205","record_id":"<urn:uuid:b88def52-bfa2-4097-955a-8370c562e924>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00251.warc.gz"}
Christian Ferko, Yangrui Hu, Zejun Huang, Konstantinos Koutrolikos, Gabriele Tartaglino-Mazzucchelli SciPost Phys. 16, 038 (2024) · published 29 January 2024 | Toggle abstract · pdf We show that the $3d$ Born-Infeld theory can be generated via an irrelevant deformation of the free Maxwell theory. The deforming operator is constructed from the energy-momentum tensor and includes a novel non-analytic contribution that resembles root-$T \ overline{T}$. We find that a similar operator deforms a free scalar into the scalar sector of the Dirac-Born-Infeld action, which describes transverse fluctuations of a D-brane, in any dimension. We also analyse trace flow equations and obtain flows for subtracted models driven by a relevant operator. In $3d$, the irrelevant deformation can be made manifestly supersymmetric by presenting the flow equation in $\mathcal{N} = 1$ superspace, where the deforming operator is built from supercurrents. We demonstrate that two supersymmetric presentations of the D2-brane effective action, the Maxwell-Goldstone multiplet and the tensor-Goldstone multiplet, satisfy superspace flow equations driven by this supercurrent combination. To do this, we derive expressions for the supercurrents in general classes of vector and tensor/scalar models by directly solving the superspace conservation equations and also by coupling to $\mathcal{N} = 1$ supergravity. As both of these multiplets exhibit a second, spontaneously broken supersymmetry, this analysis provides further evidence for a connection between current-squared deformations and nonlinearly realized symmetries.
{"url":"https://www.scipost.org/contributor/2322","timestamp":"2024-11-04T21:29:44Z","content_type":"text/html","content_length":"42325","record_id":"<urn:uuid:060feef1-d209-4d97-b9de-27054102656a>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00456.warc.gz"}
Bolt Support Force The bolt support force which is ultimately applied to a wedge, and the orientation of the applied force (tension or shear), depends on the following factors: 1. Bolt Deformation Mode: The mode in which a bolt intersects a wedge failure plane. The deformation mode depends on whether the failure plane is a shear (sliding) plane or a dilating (opening) plane, and also on the orientation of the bolt with respect to the plane and the sliding direction. There are six possibilities considered, as illustrated in the figure below. Possible bolt deformation modes [Windsor, 1996] 2. Bolt Tensile Capacity: The maximum Tensile Force which can be supplied by the bolt is determined from the Bolt Force Diagram and the point at which a wedge plane intersects the bolt along its length. The Bolt Force diagram is derived from the bolt properties (e.g. Tensile Capacity, Plate Capacity, Bond Strength) entered in the Bolt Properties dialog. 3. Bolt Orientation Efficiency: If considered, Bolt Orientation Efficiency is only applied to the Tensile Force. Bolt efficiency can be toggled on or off, and there are three possible methods of computing the efficiency. 4. Bolt Shear Strength: Whether Shear Strength of the bolt is considered. Applied Bolt Support Force The following table summarizes the possibilities of applied bolt support force, for all combinations of bolt deformation mode, bolt efficiency (on / off) and shear (on/off). Bolt Efficiency: Off Bolt Efficiency: On Bolt Efficiency: On Bolt Efficiency: Off Shear: Off Shear: Off Shear: On Shear: On A tensile tensile * eff tensile * eff tensile B tensile tensile * eff tensile * eff tensile C tensile tensile * eff tensile * eff tensile D tensile tensile * eff shear shear E tensile zero force shear shear F tensile zero force shear shear Table 1: Summary of bolt support force. • "tensile" is the Tensile Failure capacity (without applying Bolt Efficiency), determined from the Bolt Force Diagram. • "eff" is the Bolt Orientation Efficiency factor • "shear" is the bolt Shear capacity, if the Shear Strength option is used. • Modes A, B, C - if a bolt is in mode A, B or C, only a tensile force will be applied, in the direction of the bolt. The tensile force will be multiplied by the Bolt Efficiency Factor if Bolt Efficiency is being used. • Mode D - if a bolt is in mode D, the bolt can use either tensile or shear force. If the Shear Strength option is ON, then the Shear Force will be used. Shear force is applied opposite to the sliding direction of the wedge. If the Shear Strength option is OFF, then the Tensile force will be used. • Modes E, F - if a bolt is in mode E or F, the bolt can use either tensile, zero, or shear force, depending on the selection of Bolt Efficiency and Shear. • If both Bolt Efficiency and Shear are turned OFF, then the full tensile force will be applied, in the direction of the bolt, regardless of how the bolt intersects the wedge (i.e. even if the bolt is in a shear deformation mode). Direction of Bolt Support Force There are TWO possibilities for the direction of the applied bolt support force. Referring to Table 1: 1. Tensile: For all cases marked as "tensile" or "tensile * eff", the direction of the applied bolt support force will be in the direction of the bolt. The direction is not affected by the bolt efficiency factor. The direction is always exactly in the direction of the bolt, even if the efficiency factor is applied. 2. Shear: For all cases marked as "shear" (Shear Strength option is toggled ON) the direction of the applied bolt support force will be opposite to the sliding direction of the wedge. A bolt can never apply both Tensile and Shear support at the same time. The two cases are completely exclusive in the UnWedge implementation. Location of Applied Support Force Like all other forces in the UnWedge analysis, all bolt support forces are applied through the centroid of the wedge. Therefore, once the magnitude and orientation of the bolt force is determined, the force doesn't really have a "location" on the wedge. Since moment equilibrium is not considered in the UnWedge analysis, all forces are assumed to pass through the same point, the 3-dimensional centroid of the wedge.
{"url":"https://www.rocscience.com/help/unwedge/documentation/theory/support/bolt-support-force","timestamp":"2024-11-03T22:18:11Z","content_type":"application/xhtml+xml","content_length":"297119","record_id":"<urn:uuid:6a4a48cc-19ec-4680-91e5-86c13e1fa9c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00119.warc.gz"}
Day 1 – new data, old charts This post is a part of a series on drug shortages. I spent the bulk of the morning setting up my development environment and refreshing myself on R and my analysis code from our paper. I then grabbed a recent export of the drug shortages database and plugged it in to my old code, and with some patching up here and there, I was able to pump out updates to some of the charts used in our paper (and a few new ones). You can find all of my mutant code from today over here. New shortages per month First up, here’s a tally of new drug shortages per month from the drugshortagescanada.ca (DSC). You can clearly see that the database came online in March, 2017. If there are any differences over the last few years in the number of shortages, I’m not seeing anything obvious from that plot. We may have to do some statistics…. Also, in seeing this plot again, I’m reminded that what I’d also like to see is the number of shortages active at any given time, as well as some sense of distribution of shortage duration over time. Perhaps that’s tomorrow’s work. Reasons for shortages Next up is a plot showing the distribution of stated reasons for shortages. I don’t recall if we had a plot like this in the paper. In any case, nothing jumps out at me: Top shortages This next chart is a doozy, and I’ve made it even more horrendous by faceting it by year. This was our attempt at trying to show the kinds of drugs most often in shortage. As you can see…. well, I’m not sure what you can see. If you have some ideas for a more interpretable figure that captures the same essence, please comment below. Shortages by manufacturer size Next up, three charts for each year from 2017-2019 which attempt to give some sense of whether larger manufacturers have proportionately greater or fewer than smaller drug manufacturers. Note that the dotted line is not a trend line, but in fact shows a slope of 1, i.e. when the proportion of shortages = proportion of drugs marketed. My takeaways are this: (1) I should have kept a consistent scale/limits on the charts because comparing them is tricky, (2) number of shortages generally vary with size of the manufacturer (i.e. points nearish to the dotted line), which I expected, and (3) there are a few subjective outliers each year (e.g. Apotex, in all three years). I have not run any statistics to look at these data in any more detail. Drug Products The last two charts deal with the Drug Product Database (DPD) only. I think it’s important to keep these data in mind because they provide a rough denominator when looking at raw shortage counts. Here are the number of drugs in the database over the last 40 years: There are a few discontinuities which I don’t have great explanations for. For example, from 2017 to 2018 there is sudden drop in the number of “marketed” drugs which is mostly accounted for by a new status label being used “cancelled (unreturned annual)”. I’ll need to dive back into the DPD’s scheme to see what that might be about. (NB: The plateau from 2020 onwards is simply because I extended the plot into the future but there are, obviously, no updates in the database providing any new information.) In this chart, I attempt to label each DIN as either a generic or innovator by looking at when it was first marketed relative to other drugs with the same ingredient. I’d love to know if this method makes sense. 2 Comments
{"url":"https://jon.pipitone.ca/blog/2020-01-06-day-1-new-data-old-charts/","timestamp":"2024-11-04T15:10:12Z","content_type":"text/html","content_length":"34282","record_id":"<urn:uuid:4dd49662-e4e6-4402-9a0e-847d5ad1446d>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00378.warc.gz"}
Ninth Annual Upstate Number Theory Conference April 27–28, 2019 A primary goal of the Upstate Number Theory Conference is to bring together the specialists from the various branches of Number Theory in the Upstate New York region and surrounding areas, and to expose the younger researchers to new and old problems in the field. For those who plan to Arrive on Friday, April 26th, Kiran Kedlaya (UCSD & IAS) will be giving a colloquium talk at 4:30pm in Malott 251. (Parking information is below) General registration is now open! Go to the following link Upstate NY Number Theory Conference to register. Banquet registration is also required! A banquet will be held the evening of April 27th in 401/403 Physical Sciences Building. If you are interested in attending the banquet please go to https:// forms.gle/KwH94zyoToZMkvDo6 to register for the banquet. Please note that the banquet is capped at 60. Please note the important deadlines that are provided at the beginning of the general registration page. Abbey Bourdon, Wake Forest University Title: Isolated Points on Modular Curves Abstract: We say a closed point on a curve C is isolated if it does not belong to an infinite family of effective divisors of degree d parametrized by the projective line or a positive rank abelian subvariety of the Jacobian of C. In this talk, we will study the image of isolated points under morphisms, giving conditions under which the image of an isolated point remains isolated. We will explore applications of this result to the case where C is the modular curve X_1(N). This joint work with Ozlem Ejder, Yuan Liu, Frances Odumodu, and Bianca Viray extends recent results of the same authors concerning sporadic points on modular curves. Ling Long, Louisiana State Title: Hypergeometric Supercongruences Abstract: In this talk, we will discuss supercongruences occurred to truncated hypergeometric series originated in the work and conjectures of Beukers and Coster. These congruences can be viewed as p-adic analogues of Hecke recursions satisfied by classical modular forms. We will present the related backgrounds under the hypergeometric umbrella and discuss approaches to supercongruences including a p-adic perturbation method proposed in a joint work with Ravi Ramakrishna. The talk will be concluded by applications and new open conjectures based on the motivic setting. Aaron Pollack, Duke University Title: Modular forms on G_2 Abstract: Classical modular forms are very special automorphic functions for the group GL(2), and similarly holomorphic Siegel modular forms are very special automorphic functions for the group GSp (2n). It turns out that the split exceptional group G_2, and certain forms of the other exceptional groups, possess a similar very special class of automorphic functions. These are called the 'modular forms', and their study was initiated by Gross-Wallach and Gan-Gross-Savin. I will define these modular forms on G_2 and explain what is known about them. Arul Shankar, University of Toronto Title: Families of elliptic curves ordered by conductor Abstract: Conjectures on the statistics of elliptic curves are usually formulated with the assumption that the curves in question are ordered by their conductors. However, when proving results on the statistics of elliptic curves, the curves are usually ordered by (naive) height. There are two two reasons for doing so: first, it is difficult to rule out the possibility that there are many elliptic curves with small discriminant but large height. Second, it is difficult to rule out the possibility that there are many elliptic curves with large discriminant but small conductor. In this talk, we will focus on the second question, and prove some partial results bounding the number of elliptic curves whose discriminants are much larger than their heights. As consequences, we will construct positive proportion families of elliptic curves, and determine their asymptotics when they are ordered by conductor. We will also prove that the average size of their 2-Selmer groups is 3. This is joint work with Ananth Shankar and Xiaoheng Wang. Brian Smithling, Johns Hopkins University Title: On Shimura varieties for unitary groups Abstract: Shimura varieties attached to unitary similitude groups are a well-studied class of PEL Shimura varieties (i.e., varieties admitting a moduli description in terms of abelian varieties endowed with a polarization, endomorphisms, and a level structure). There are also natural Shimura varieties attached to (honest) unitary groups; these lack a moduli interpretation, but they have other advantages (e.g., they give rise to interesting cycles of the sort that appear in the arithmetic Gan-Gross-Prasad conjecture). I will describe some variant Shimura varieties which enjoy good properties from both of these classes. This is joint work with M. Rapoport and W. Zhang. Preston Wake, IAS & Michigan State Title: Eisenstein congruences and a Bloch-Kato conjecture in tame families Abstract: A fact made famous by Mazur is that the Galois representation associated to the modular curve X_0(11) (which is an elliptic curve) is reducible modulo 5. Less famously, the representation is also reducible modulo 25. I'll talk about this extra reducibility and what it has to do with the Bloch-Kato conjecture. This is joint work with Akshay Venkatesh. The Complete Schedule for The Ninth Annual Upstate Number Theory Conference. All talks and activities will be held in Malott 251, 253, 203, 207, and 224 TRAVEL: Driving directions and travel options can be found at Visit Ithaca. Cornell also operates a direct bus service from Manhattan to Ithaca (places are limited and fill up quickly). For mobility or other accommodations, or just for general inquiries regarding the conference please send an e-mail to upstatenynt_math@cornell.edu. You will also be asked during your registration. Organizing committee: • Alexander Borisov, Binghamton University • C. Douglas Haessig, University of Rochester • Jeffrey Hatley, Union College • Joseph Hundley, SUNY at Buffalo • James Ricci, Daemen College • Brian Hwang, Cornell University • Marie MacDonald, Cornell University • Ravi Ramakrishna, Cornell University • David Zywina, Cornell University. Previous conferences: Cornell University is committed to providing a safe, inclusive, and respectful learning, living, and working environment. To this end, Cornell will not tolerate sexual and related misconduct. Through Cornell University Policy 6.4, and the applicable procedures, the university provides means to address bias, discrimination, harassment, and sexual and related misconduct, including gender-based harassment, sexual harassment, sexual assault, domestic and dating violence, stalking, and sexual exploitation. Reports of bias, discrimination, and harassment can be made online at www.biasconcerns.cornell.edu or by contacting the Office of the University Title IX Coordinator at titleix@cornell.edu.
{"url":"https://math.cornell.edu/ninth-annual-upstate-number-theory-conference","timestamp":"2024-11-11T16:55:13Z","content_type":"text/html","content_length":"83473","record_id":"<urn:uuid:237199e4-68d7-4d74-a7dd-b9580e5edef6>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00707.warc.gz"}
Algorithms in the 21S I am wondering if anybody here knows which algorithms are used to calculate the inverses of upper tails in the 21s, this is nearly as fast as the 49g+ so I believe the results are not done by a I know such algorithms exist as I tried to use them when I rewrote STAT48 for the 48G, however it was from a Russian book and I could not figure them out completely. The 21S manual refers to D. Knuth, Seminumerical Algorihms, Vol. 2, London: Addison Wesley, 1981 for the random number generator test. There may be something in there? If anyone knows of fast algorithms to get the inverse of upper tails (other than normal distribution which I already have), I would be very interested. 04-15-2005, 09:37 AM the 32e could also invert upper tails, but it looks like it has an internal solver for its own upper tail function (called Q). Q is fast and i did some tests to find it is full 10 digit accurate. this is interesting since all versions in program libraries use a polynomial approximation which is less accurate (about 7 digits). whether the 32e simply had a more accurate internal polynomial or not i don’t know. but there are no clues and the inverse is not too slow for such an old machine. 04-15-2005, 12:55 PM If you get a plynomial approximation to 7 digits, it is quite easy for a solver to quickly get the next 3 digits from this starting point. The polynomial in here works for the normal distribution. But the 21s also does student, F and x2 with a good speed. I would really be interested to know how they really do. Now I have to find a 32e
{"url":"https://archived.hpcalc.org/museumforum/thread-71803-post-71868.html#pid71868","timestamp":"2024-11-06T18:00:14Z","content_type":"application/xhtml+xml","content_length":"36496","record_id":"<urn:uuid:e3a7f83c-e5c2-4981-af87-24b843c220d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00565.warc.gz"}
Find a value for a constant to make a given limit finite and non-zero - Stumbling Robot Find a value for a constant to make a given limit finite and non-zero Define two functions as follows: Find a value for the constant is finite and non-zero. Compute the value of the limit. First, we find the derivatives Then, we evaluate the requested limit, Now, we would like this limit to be finite and non-zero. To find a value of So, the limit in the numerator is Point out an error, ask a question, offer an alternative solution (to use Latex type [latexpage] at the top of your comment):
{"url":"https://www.stumblingrobot.com/2016/01/12/find-a-value-for-a-constant-to-make-a-given-limit-finite-and-non-zero/","timestamp":"2024-11-09T12:46:20Z","content_type":"text/html","content_length":"66122","record_id":"<urn:uuid:6d1ab62c-6520-42fc-8f99-41b3c58cdd35>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00797.warc.gz"}
First steps in Keras: classifying handwritten digits(MNIST) Licence : Creative Commons Attribution 4.0 International (CC BY-NC-SA 4.0) Copyright : Jeremy Fix, CentraleSupelec Last revision : January 23, 2024 09:11 Link to source : 00-keras-mnist.md Lectures project page: https://github.com/jeremyfix/deeplearning-lectures/ In this practical, we will make our first steps with Keras and train our first models for classifying the handwritten digits of the MNIST dataset. It should be noted that this dataset is no more considered as a challenging problem but for an introduction with Keras and the first trained architectures, it does the job. We will use Keras which was once a layer above multiple deep learning frameworks (theano, tensorflow, CNTK) but which has then be embedded within tensorflow since its 2.0 release. Keras is designed to be a general purpose easy to use framework. If you want to play with standard models, it is really quick and easy to do it with Keras but if you target implementing custom architectures, you should consider lower level frameworks such as tensorflow or pytorch. The models you will train are : • a linear classifier • a fully connected neural network with two hidden layers • a vanilla convolutional neural network (i.e. a LeNet like convnet) • some fancier architectures (e.g. ConvNets without fully connected layers) The point here is to introduce the various syntactic elements of Keras to: • load the datasets, • define the architecture, loss, optimizer, • save/load a model and evaluate its performances • monitor the training progress by interfacing with tensorboard VERY VERY Important: Below, we see together step by step how to set up our training script. While reading the following lines you will progressively fill in python script. We also see the modules to be imported only when these are required for the presentation. But obviously, it is clearer to put these imports at the beginning of your scripts. So the following python codes should not be strictly copy-pasted on the fly. Having a running python interpreter with the required modules For making the labs, I propose the following way to work : • for the CentraleSupelec students, you can edit your code on the labs machines, and run it directly on the GPUs. Your home directory is a network home so that any modifications you do locally are seen from the GPUs, • for SM20 lifelong trainees, you can edit and run your code within the jupyter lab • for masters students (AVR, PSA), you can edit and run your code within the jupyter lab or, for experienced users, edit your code with VIM within a ssh session, a within the VNC client To connect to the GPUs, use one of the procedures described in Using the CentraleSupelec GPUs. You should now have at your disposal a python interpreter with the installed package, i.e. the following should work successfully : sh11:~:mylogin$ python3 -c "from tensorflow import keras" If the above fails, stop here and ask me, I’ll be glad to help you. A Linear classifier Before training deep neural networks, it is good to get an idea of the performances of a simple linear classifier. So we will define and train a linear classifier and see together how this is written in Python/Keras. While reading the following lines, edit a file named train_mnist_linear.py that you will fill. Loading and basic preprocessing the dataset The first step is to load the dataset as numpy arrays. Functions are already provided by Keras to import some datasets. The MNIST dataset is made of gray scale images, of size 28 \times 28, with values in the range [0; 255]. The training set consists in 60000 images and the test set consists in 10000 images. Every image represents an handwritten digit in [0, 9]. Below, we show the 10 first samples of the training set. Few samples of the MNIST dataset To import the MNIST dataset in Keras, you can do the following: 1 from tensorflow.keras.datasets import mnist 2 from tensorflow.keras.utils import to_categorical 4 (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train and y_train are numpy arrays of respective shape (60000, 28, 28) and (60000,). X_test and y_test are numpy arrays of respective shape (10000, 28, 28) and (10000, ). X_train and X_test contain the images and y_train and y_test contain the labels. For a linear classifier (or in fact Dense Layers as well as we shall see in a moment and this is different from convolutional layers we will see later on) expect to get vectors in their input and not images, i.e. 1-dimensional and not 2-dimensional objects. So we need to reshape the loaded numpy arrays num_train = X_train.shape[0] num_test = X_test.shape[0] img_height = X_train.shape[1] img_width = X_train.shape[2] X_train = X_train.reshape((num_train, img_width * img_height)) X_test = X_test.reshape((num_test, img_width * img_height)) Now, the shape of X_train and X_test are (60000, 784) and (10000, 784) respectively. The networks we are going to train output, for each input image, a probability distribution over the labels. We therefore need to convert the labels into their one-hot encoding. Keras provides the to_categorical function to do that. y_train = to_categorical(y_train, num_classes=10) y_test = to_categorical(y_test, num_classes=10) Now, the shape of y_train and y_test is (60000, 10) and (10000, 10) respectively and the first 10 entries of y_train are: [ 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.] [ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.] [ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.] [ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.] [ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.] Does these values make sense to you ? What is the label of the fourth sample ? Can you plot the corresponding input to check that the label is correct ? For now, this is all we do for loading the dataset. Preprocessing the input actually influences the performances of the training but we come back to this later when normalizing the input. Building the network We consider a linear classifier, i.e. we perform logistic regression. As a reminder, in logistic regression, given an input image \(x_i \in \mathbb{R}^{28\times 28}\), we compute scores for each class as \(w_k^T x_i\) (in this notation, the input is supposed to be extended with a constant dimension equal to 1 to take into account the bias), that we pass through the softmax transfer function to get probabilities over the classes : \[P(y=k / x_i) = \frac{e^{w_k^T x_i}}{\sum_{j=0}^{9} e^{w_j^T x_i}}\] To define this model with Keras, we need an Input layer, a Dense layer and an Activation layer In keras, a model can be specified with the Sequential or Functional API. We here make use of the functional API which is more flexible than the Sequential API. Also, while one could incorporate the activation within the dense layers, we will be using linear dense layers so that we can visualize what is going on after each operation. In Python, this can be written as : from tensorflow.keras import Input from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.models import Model num_classes = 10 xi = Input(shape=(img_height*img_width,)) xo = Dense(num_classes)(xi) yo = Activation('softmax')(xo) model = Model(inputs=[xi], outputs=[yo]) In the input layer, we just specify the dimension of a single sample, here 784 pixels per image. The dense layer has num_classes=10 units. The output of the dense layer feeds the softmax transfer function. By default, in Keras, a dense layer is linear and has the bias so that we do not need to extend the input to include the constant dimension. Therefore the outputs of yo are all in the range [0, 1] and sum up to 1. Finally, we define our model specifying the input and output layers. The call to model.summary() will display a summary of the architecture in the terminal. Compiling and training The next step is, in the terminology of Keras, to compile the model by providing the loss function to be minimized, the optimizer and the metrics to monitor. For this classification problem, an appropriate loss is the crossentropy loss. In Keras, among all the Losses, we will use the categorical_crossentropy loss. For the optimizer, several Optimizers are available and we will use adam. For the metrics, you can use some predefined metrics or define your own. Here, we will use the accuracy which makes sense because the dataset is balanced. In Python, this can be written as Metrics, losses and optimizers can be specified in two ways in the call of the compile function. It can be either specified by a string or by passing a function for the loss, a list of functions for the metrics and an Optimizer object for the optimizer. We are now ready for training our linear classifier, by calling the fit function. model.fit(X_train, y_train, score = model.evaluate(X_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) We here used \(10 \%\) of the training set for validation purpose. This is based on the validation loss that we should select our best model (why should we do it on the validation loss rather than the validation accuracy?). At the end, we evaluate the performances on the test set. You are now ready for executing your first training. You need first to log on a GPU node as described in section Using the GPU cluster of CentraleSupelec. # First you log to the cluster # And then : python3 train_mnist_linear.py This is your first trained classifier with Keras. Depending on how lucky you are, you may reach a Test accuracy of something between \(50\%\) and \(85\%\) . If you repeat the experiment, you should end up with various accuracy; The training does not appear to be very consistent. Callbacks for saving the best model and monitoring the training So far, we just get some information within the terminal but Keras allows you to define Callbacks. We will define two callbacks : • a TensorBoard callback which allows to monitor the training progress with tensorboard • a ModelCheckpoint callback which saves the best model with respect to a provided metric For both these callbacks, we need to specify a path to which data will be logged. I propose you the following utility function which generates a unique path: import os def generate_unique_logpath(logdir, raw_run_name): i = 0 run_name = raw_run_name + "-" + str(i) log_path = os.path.join(logdir, run_name) if not os.path.isdir(log_path): return log_path i = i + 1 Note that this function will create a unique directory for storing all the things to need to store for one experiment. We will store there the history of the metrics during training, the best model we found, etc… For defining a TensorBoard callback, you need to add its import, instantiate it by specifying a directory in which the callback will log the progress and then modify the call to fit to specify the from tensorflow.keras.callbacks import TensorBoard run_name = "linear" logpath = generate_unique_logpath("./logs_linear", run_name) tbcb = TensorBoard(log_dir=logpath) model.fit(X_train, y_train, Once this is done, you have to start tensorboard on the GPU. [In one terminal on the GPU] sh11:~:mylogin$ tensorboard --logdir ./logs_linear Starting TensorBoard b'47' at http://0.0.0.0:6006 (Press CTRL+C to quit) And then start a browser and log to http://localhost:6006 . Once this is done, you will be able to monitor your metrics in the browser while the training are running. The second callback we define is a ModelCheckpoint callback which will save the best model based on a provided metric, e.g. the validation loss. from tensorflow.keras.callbacks import ModelCheckpoint run_name = "linear" logpath = generate_unique_logpath("./logs_linear", run_name) checkpoint_filepath = os.path.join(logpath, "best_model.h5") checkpoint_cb = ModelCheckpoint(checkpoint_filepath, save_best_only=True) model.fit(X_train, y_train, callbacks=[tbcb, checkpoint_cb]) You can now run several experiments, monitor them and get a copy of the best models. A handy bash command to run several experiments is given below : for iter in $(seq 1 10); do \ echo ">>>> Run $iter" && python3 train_mnist_linear.py ; done; Logistic regression without normalization of the input Two metrics are displayed for several runs; on the left the training accuracy and on the right the validation accuracy You should reach a validation and test accuracy between \(55\%\) and \(85\%\). The curves above seem to suggest that there might be several local minima…but do not be misleaded, the optimization problem is convex so the results above are just indications that we are not solving our optimization problem the right way. Actually, with recent versions of Keras, it does not appear to be the case Loading a model In the previous paragraph, we regularly saved the best model (with the ModelCheckpoint callback) with respect to the validation loss. You may want to load this best model, for example for estimating its test set performance at the end of your training script. At the time of writing this practical, there is an issue in the “h5” file saved by Keras and some of its elements must be removed in order to be able to reload the model : import h5py from tensorflow.keras.models import load_model with h5py.File(checkpoint_filepath, 'a') as f: if 'optimizer_weights' in f.keys(): del f['optimizer_weights'] model = load_model(checkpoint_filepath) score = model.evaluate(X_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) Normalizing the input So far, we used the raw data, i.e. images with pixels in the range [0, 255]. It is usually a good idea to normalize the input because it allows to make training faster (because the loss becomes more circular symmetric), allows to use a consistent learning rate for all the parameters in the architecture and finally allows to use the same regularization coefficient for every parameter. There are various ways to normalize the data and various ways to translate it into Keras. The point of normalization is to equalize the relative importance of the dimensions of the input. One normalization is min-max scaling just scaling the input by a constant factor, e.g. given an image I, you feed the network with \(\frac{I}{255.} - 0.5\). Another normalization is standardization. Here, you compute the mean of the training vectors and their variance and normalize every input vector (even the test set) with these data. Given a set of training images \(X_i \in \mathbb{R}^{784}, i \in [0, N-1]\), and a vector \(X \in \mathbb{R}^{784}\) you feed the network with \(\hat{X} \in \mathbb{R}^{784}\) given by \[ X_\mu = \frac{1}{N} \sum_{i=0}^{N-1} X_i \] \[ X_\sigma = \sqrt{\frac{1}{N} \sum_{i=0}^{N-1} (X_i - X_\mu)^T (X_i - X_\mu)} + 10^{-5} \] \[ \hat{X} = (X - X_\mu)/X_\sigma \] How do we introduce normalization in a Keras model ? One way is to create a dataset that is normalized and use this dataset for training and testing. Another possibility is to embed normalization in the network by introducing a Lambda layer right after the Input layer. For example, introducing standardization in our linear model could be done the following way : from tensorflow.keras.layers import Lambda xi = Input(shape=(img_height*img_width,), name="input") mean = X_train.mean(axis=0) std = X_train.std(axis=0) + 1.0 xl = Lambda(lambda image, mu, std: (image - mu) / std, arguments={'mu': mean, 'std': std})(xi) xo = Dense(num_classes, name="y")(xl) yo = Activation('softmax', name="y_act")(xo) model = Model(inputs=[xi], outputs=[yo]) The advantage of embedding a standardizing layer is that it makes it easier to use the model on new unseen data. Indeed, the normalization coefficients are stored within the model and you do not have to save them separately. Logistic regression with input standardization Two metrics are displayed for several runs; on the left the training accuracy and on the right the validation accuracy You should reach a validation accuracy of around \(93.6\%\) and a test accuracy of around \(92.4\%\). Let us change the network to build a 2 hidden layers feedforward network. This is simply about adding dense layers with appropriate activations in between the input and the output layer. Basically, the only thing you need to change compared to the linear model is when you build up the model. A simple 2 layers MLP would be defined as : xi = Input(shape=input_shape) x = Lambda(....)(xi) x = Dense(nhidden1)(x) x = Activation('relu')(x) x = Dense(nhidden2)(x) x = Activation('relu')(x) x = Dense(num_classes)(x) y = Activation('softmax')(x) where nhidden1 and nhidden2 are the respective size of the first and second hidden layers. We here used a ReLu activation function but you are free to experiment with other activation functions, e.g. SELU, ELU About the minor adaptations, you may want to save the logs in a different root directory than for the linear experiment or keep them in the same directory for comparison. You may also want to change the run_name variable so that it contains the size of the hidden layers to easily distinguish the architectures in the tensorboard. For example, training a Input-256(Relu)-256(Relu)-10(Softmax) network (\(270.000\) trainable parameters), the inputs being standardized, the training accuracy gets around \(99.6\%\), the validation accuracy around \(97.6\%\) and evaluating it on the test set, we get around \(97.37\%\). This model is slightly overfitting. We can try to improve the generalization performance by introducing some regularization, which is addressed in the next paragraph. There are various Regularizers provided by Keras. Some typical regularizers are L1/L2 penalties or also Dropout layers. L2 regularization (or weight decay) is usually applied to the kernel only and not the bias. It adds a term in the loss function to be minimized of the form \(\lambda \sum_i w_i^2\). The parameter \(\ lambda\) has to be experimentally determined (by monitoring the performances on the validation set) and usually quite small, e.g. values around \(10^{-5}\) or so. If you wish to experiment with L2 penalty, you can directly specify it when you create the Dense layer : from tensorflow.keras import regularizers # Creating a dense layer with L2 penalty on the weights (not biases) # l2_reg is a floating point value to be determined x = Dense(hidden1, kernel_regularizer=regularizers.l2(l2_reg))(x) Another more recently introduced regularization technique is called Dropout (Srivastava, Hinton, Krizhevsky, Sutskever, & Salakhutdinov, 2014). It consists in setting to 0 the activations of a certain fraction of the units in a layer. In their original paper, the authors of dropout suggest for example Dropping out 20% of the input units and 50% of the hidden units was often found to be optimal. A dropout mask is generated for every training sample. At test time, an ensemble of dropped out networks are combined to compute the output (see also CS231n regularizers). In Keras, we simply need to introduce Dropout layers specifying the rate at which to drop units. Below we create a Dense(relu) layer followed by dropout where \(50\%\) of the output are set to 0. Learning a neural network with dropout is usually slower than without dropout so that you may need to consider increasing the number of epochs. from tensorflow.keras.layers import Dropout x = Dense(hidden1)(x) x = Activation('relu')(x) x = Dropout(0.5)(x) A vanilla convolutional neural network The multiLayer feedforward network does not take any benefit from the intrinsic structure of the input space, here images. We propose in this paragraph to explore the performances of Convolutional Neural Networks (CNN) which exploit that structure. Our first experiment with CNN will consider a vanilla CNN, i.e. a stack of conv-relu-maxpooling layers followed by some dense layers. In order to write our script from training CNN, compared to the script for training a linear or multilayer fully connected model, we need to change the input_shape and also introduce new layers: Convolutional layers, Pooling layers and a Flatten layer. The tensors for convolutional layers are 4D which include batch_size, image width, image height, number of channels. The ordering is framework dependent. In tensorflow, it is channels_last, in which case, the tensors are (batch_size, image height, image width, number of channels). The first thing to do is to ensure the data arrays are in the appropriate format. Remember, before, we reshaped the training and test numpy arrays as (60000, 784) and (10000, 784) respectively. Now, with convolutional neural networks, the input data topology is exploited and we shall keep the data as images. num_train = X_train.shape[0] num_test = X_test.shape[0] img_height = X_train.shape[1] img_width = X_train.shape[2] X_train = X_train.reshape((num_train, img_height, img_width)) X_test = X_test.reshape((num_test, img_height, img_width)) Then, the input_shape variable, used for defining the shape of the Input layer, must also be adapted : input_shape = (img_rows, img_cols, 1) The next step is to define our model. We here consider stacking Conv-Relu-MaxPool layers. One block with 64 5x5 stride 1 filters and 2x2 stride 2 max pooling would be defined with the following syntax : from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import MaxPooling2D x = Conv2D(filters=64, kernel_size=5, strides=1 x = Activation('relu')(x) x = MaxPooling2D(pool_size=2, strides=2)(x) The “padding=‘same’” parameter induces that the convolutional layers will not decrease the size of the representation. We let this as the job of the max-pooling operation. Indeed, the max pooling layer has a stride 2 which effectively downscale the representation by a size of 2. How do you set up the architecture ? the size of the filters ? the number of blocks ? the stride ? the padding ? well, this is all the magic. Actually, we begin to see a small part of the large number of degrees of freedom on which we can play to define a convolutional neural network. The last thing we need to speek about is the Flatten Layer. Usually (but this is not always the case), there are some final fully connected (dense) layers at the end of the architecture. When you go from the Conv/MaxPooling layers to the final fully connected layers, you need to flatten your feature maps. This means converting the 4D Tensors to 2D Tensors. For example, the code below illustrates the connection between some convolutional/max-pooling layers to the output layer with a 10-class classification: x = Conv2D(filters=64, kernel_size=5, strides=1 x = Activation('relu')(x) x = MaxPooling2D(pool_size=2, strides=2)(x) x = Flatten()(x) y = Dense(10, activation='softmax')(x) Now, we should have introduced all the required blocks. As a first try, I propose you to code a network with : • The input layer • The standardization Lambda layer • 3 consecutive blocks with Conv(5x5, strides=1, padding=same)-Relu-MaxPooling(2x2, strides=2). Take 16 filters for the first Conv layer, 32 filters for the second and 64 for the third • Two dense layers of size 128 and 64, with ReLu activations • One dense layer with 10 units and a softmax activation Training this architecture, you should end up with almost \(100\%\) training accuracy, \(99.2\%\) of validation accuracy and around \(98.9\%\) of test accuracy. Introducing Dropout (after the standardizing layer and before the dense layers), the test accuracy should raise up to around \(99.2\%\). This means that 80 images out of the 10000 in the test set are misclassified. Small kernels, no fully connected layers, Dataset Augmentation, Model averaging Recently, it is suggested to use small convolutional kernels (typically \(3\times 3\), and sometimes \(5\times 5\)). The rationale is that using two stacked \(3\times 3\) convolutional layers gives you receptive field size of \(5\times 5\) with less parameters (and more non linearity). This is for example the guideline used in VGG : use mostly \(3\times 3\) kernels, stack two of them, followed by a maxpool and then double the number of filters. The number of filters is usually increased as we go deeper in the network (because we expect the low level layers to extract basic features that are combined in the deeper layers). Finally, in (Lin, Chen, & Yan, 2013), it is also suggested that we can completly remove the final fully connected layers and replace them by GlobalAveragePooling layers. It appears that removing fully connected layers, the network is less likely to overfit and you end up with much less parameters for a network of a given depth (this is the fully connected layers that usually contain most of your parameters). Therefore, I suggest you to give a try to the following architecture : • InputLayer • Standardizing lambda layer • 16C3s1-BN-Relu-16C3s1-BN-Relu - MaxPool2s2 • 32C3s1-BN-Relu-32C3s1-BN-Relu - MaxPool2s2 • 64C3s1-BN-Relu-64C3s1-BN-Relu - GlobalAverage • Dense(10), Softmax where 16C3s1 denotes a convolutional layer with 16 kernels, of size \(3\times 3\), with stride 1, with zero padding to keep the same size for the input and output. BN is a BatchNormalization layer. MaxPool2s2 is a max-pooling layer with receptive field size \(2\times 2\) and stride 2. GlobalAverage is an averaging layer computing an average over a whole feature map. This should bring you with a test accuracy around \(99.2\%\) with 72.890 trainable parameters. Parametrizing your experiments Before going further, I invite you to read the section on Parametrizing a script with argparse. This will show you a way to provide options to your script so that you can easily test different Now, it is also a good practice to save the command that was executed for a given experiment. Remember, we created a directory for storing the metrics and the best model, what about also saving the command we executed to keep track of it ? For this, we can both save a summary file and dump its content on the tensorboard. # Write down the summary of the experiment summary_text = """ ## Executed command ## Args """.format(command=" ".join(sys.argv), args=args) with open(os.path.join(logpath, "summary.txt"), 'w') as f: writer = tf.summary.create_file_writer(os.path.join(logpath,'summary')) with writer.as_default(): tf.summary.text("Summary", summary_text, 0) Dataset Augmentation and model averaging One process which can bring you improvements is Dataset Augmentation. The basic idea is to apply transformations to your input images that must keep invariant your label. For example, slightly rotating, zooming or shearing an image of, say, a 5 is still an image of a 5 as shown below : Now, the idea is to produce a stream (actually infinite if you allow continuous perturbations) of training samples generated from your finite set of training samples. With Keras, you can augment your image datasets using an ImageDataGenerator. Here is a snippet to create the generator that produced the images above : from tensorflow.keras.preprocessing.image import ImageDataGenerator datagen = ImageDataGenerator(shear_range=0.3, As a note, for some of the transformations ImageDataGenerator performs, it is necessary to invoke the fit method (e.g. featurewise_std_normalization). Now, in order to make use of the generator, we need to slightly adjust our split into training/validation sets, as well as our optimization of the model which we did so far calling the model.fit def split(X, y, test_size): idx = np.arange(X.shape[0]) nb_test = int(test_size * X.shape[0]) return X[nb_test:,:, :, :], y[nb_test:],\ X[:nb_test, :, :, :], y[:nb_test] X_train = X_train.reshape(num_train, img_rows, img_cols, 1) X_train, y_train, X_val, y_val = split(X_train, y_train, y_train = to_categorical(y_train, num_classes) y_val = to_categorical(y_val, num_classes) datagen = ImageDataGenerator(....) train_flow = datagen.flow(X_train, y_train, batch_size=128) validation_data=(X_val, y_val), Fitting the same architecture as before but with dataset augmentation, you should reach an accuracy around \(99.5\%\). Now, as a final step in our beginner tutorial on keras, you can train several models and average their probability predictions over the test set. An average of 4 models might eventually lead you to a loss of 0.0105 and an accuracy of \(99.68\%\). A possible solution You will find a possible solution, with a little bit more than what we have seen in this practical in the LabsSolutions/00-keras-MNIST directory. It is structured in three scripts, one for data loading (data.py), one for defining the models (models.py) and one for orchestrating everything (main.py). The script is also ran on FashionMNIST rather than MNIST. Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15 (1), 1929–1958. Retrieved from http://dl.acm.org/citation.cfm?id=2670313
{"url":"https://teaching.pages.centralesupelec.fr/deeplearning-lectures-build/00-keras-mnist.html","timestamp":"2024-11-06T21:07:21Z","content_type":"text/html","content_length":"78975","record_id":"<urn:uuid:5c24253b-42b2-4e54-801a-5b32fd7fca7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00541.warc.gz"}
Mixed conductivity analysis of single crystals of α‴-(Cd1−x Znx)3As2 (x = 0.45) We study the conductivity and magnetoresistance of the α‴ phase solid solution of (Cd[1−x]Zn[x])[3]As[2] (x = 0.45). Single crystals of (Cd[1−x]Zn[x])[3]As[2] are obtained by the modified Bridgman method. The space group and tetragonal lattice parameters of single crystals are found to be I4[1]/amd and a = b = 8.56(5) Å, c = 24.16(6) Å. The temperature dependence of the conductivity and magnetoresistance is studied in the temperature range of 1.6–320 K and in the presence of a transverse magnetic field from 0 to 10 T. Mixed conductivity is analyzed using Hall resistivity data and standard quantitative mobility spectrum analysis. The concentration and mobility of holes are determined at different temperatures. The presence of two types of holes with different mobilities is demonstrated in the temperature range of 1.6–19 K, while with increasing temperature, just one type of charge carrier is observed in the mobility spectrum. Cadmium and zinc pnictides (including both the semimetal Cd[3]As[2] and the semiconductor Zn[3]As[2]) belong to the class of II–V semiconductor compounds and have a well-defined set of interesting characteristics, including structural, optical, and transport properties.^1 These compounds have long been known as materials with a variety of practical applications,^2–5 and their properties are the subject of considerable current research activity. Although Cd[3]As[2] and Zn[3]As[2] have similar crystal structures, the ordering of deformed antifluorite cubes differs between them. After the discovery of the topological properties of Cd[3]As[2],^6,7 considerable research effort was focused on the topological properties of solid solutions of (Cd[1−x]Zn[x])[3]As[2]^8 and solid solutions of dilute magnetic semiconductors based on Cd[3]As[2].^9,10 On the other hand, composite materials based on Cd[3]As[2] have been investigated owing to their applications in spintronics.^11 It is well known that (Cd[1−x]Zn[x])[3]As[2] solid solutions in the composition range 0 ≤ x ≤ 0.6 undergo structural transformations (from the space group I4[1]/acd to P4[2]/nmc and then back to I4[1]/acd) with increasing Zn content.^8 All the studied samples of (Cd[1−x]Zn[x])[3]As[2] belong to the tetragonal system (α″-Cd[3]As[2]) and space group P4[2]/nmc (ICSD Database, Version 2009-1, Ref. Code 23245). In the quasi-binary system Cd[3]As[2]–Zn[3]As[2], there is a continuous series of solid solutions,^12 and the state diagram of Cd[3]As[2]–Zn[3]As[2] shows a region of existence of α‴ solid solution phases of (Cd[1−x]Zn[x])[3]As[2] in the composition range 0.45 ≤ x ≤ 0.65;^6 this region is a poorly studied area in condensed-matter systems. In Ref. 13, single crystals of α‴-(Zn[1−x]Cd[x])[3]As[2] (x = 0.26) were obtained and studied by x-ray diffraction analysis, and the following tetragonal lattice parameters were found: a = b = 8.5377(2) Å, c = 24.0666(9) Å, space group I4[1]/amd, and Z = Although properties such as the electrical conductivity and magnetoresistance of the α‴ phases of (Cd[1−x]Zn[x])[3]As[2] have not been studied in great detail, it is well known that an increase in the Zn content of solid solutions leads to a transition from the Dirac semiconductor Cd[3]As[2] to the direct-gap semiconductor Zn[3]As[2]. In this case, not only does the type of the majority charge carriers change from n to p but also the nature of the temperature dependence of the electrical conductivity changes. It has been shown in magnetoresistance studies of solid solutions that with increasing Zn content, a transition from topological to ordinary semiconductor properties occurs.^8 Thus, the presence of Shubnikov–de Haas (SdH) oscillations both in the narrow-gap semiconductor Cd [3]As[2] with an inverted band structure and in (Cd[1−x]Zn[x])[3]As[2] solid solutions makes it possible to analyze the evolution of the topological properties as the composition changes.^8 Based on the Bodnar model for the semiconductors Cd[3]As[2] and Zn[3]As[2],^14 it is also possible to track the evolution of the band structure from 0.1 to 1.0 eV,^15,16 as well as the changes in the contributions of different groups of charge carriers to the magnetoresistance. In single crystals of α‴-(Cd[1−x]Zn[x])[3]As[2] (X = 0.45), the magnetic field dependence of the resistivity at different temperatures does not exhibit SdH oscillations, in contrast to compositions with x ≤ 0.38.^8 For this reason, we have analyzed the mixed conductivity using experimental data on changes in Hall resistivity in the presence of a magnetic field. In this work, we use standard quantitative mobility spectrum analysis (QMSA).^17–20 (Cd[1−x]Zn[x])[3]As[2] single crystals with X = 0.45 were grown by the modified Bridgeman method from stoichiometric amounts of Cd[3]As[2] and Zn[3]As[2]. The crystals under study were slow-cooled at a rate of 5°C/h and in the presence of a temperature gradient near the melting point (T = 853°C). High degree stoichiometry of the compounds Cd[3]As[2] and Zn[3]As[2] is achieved by additional sublimation in the vapor phase. The composition and homogeneity of the samples were monitored by X-ray powder diffraction (XRD) analysis using a SmartLab diffractometer (Rigaku, Japan) with an angular range from 60° to 80°, step = 0.001, v = 0.5°/min, and a Cu anode (λ = 0.154059290 nm). The (Cd[1−x]Zn[x])[3]As[2] single crystals exhibited high crystallinity with tetragonal lattice parameters a = b = 8.56(5) Å, c = 24.16(6) Å, space group I4[1]/amd, and Z = 16. The XRD pattern of the (Cd[1−x]Zn[x])[3]As[2] (x = 0.45) powder measured in θ–2θ mode is shown in Fig. 1. According to the phase diagram of solid solutions of Cd[3]As[2]–Zn[3]As[2] in the composition range x ≈ 0.4–0.8 at room temperature,^21 a solid solution of (Cd[1−x]Zn[x])[3]As[2] can crystallize into the α‴ phase with a tetragonal structure (space group I4[1]/amd), which is related to a fluorite structure.^13 From the coordinates of the basis atoms, obtained in Ref. 13 for a composition with a predominance of zinc arsenide [(Zn[1−x]Cd[x])[3]As[2], x = 0.26], and using the PowderCell program,^22 it was possible to index the main crystallographic planes from which the diffraction occurred. The matched planes show very good agreement with the peak positions shown in Fig. 1. Thus, a determination of the lattice parameters of the investigated sample (Cd[1−x]Zn[x])[3]As[2] (X = 0.45) was carried out for the α‴ crystal structure, and these were found to be a = b = 8.56(5) Å and c = 24.16(6) Å. These parameters are larger than the values [a = 8.5377(2) Å and c = 24.0666(9) Å] determined in Ref. 13 from x-ray diffraction studies of a solid solution with a lower content of cadmium arsenide, (Zn[1−x] Cd[x])[3]As[2] (x = 0.26). Zn and Cd ions are randomly distributed in the α‴ structure over three independent positions with tetrahedral coordination (fourfold coordination), for which the ionic radii are 0.6 and 0.78 Å, respectively.^23 Therefore, an increase in the degree of substitution of Cd ions for Zn ions leads to an increase in the lattice parameters, a phenomenon that was observed earlier for other polymorphic modifications of Cd[3]As[2]–Zn[3]As[2] solid The (Cd[1−x]Zn[x])[3]As[2] samples were 1 × 1 × 5 mm^3 right-rectangular prisms with soldered thin electrodes. Measurements of the temperature dependence of the magnetoresistance were made in the presence of a transverse magnetic field configuration 0–10 T and over the temperature range of 1.6–320 K using the six-probe method. For measurements, the sample probe was inserted into a He-exchange-gas Dewar, where the temperature could be adjusted with an accuracy of 0.5%. Figure 2 shows the magnetic field dependence of resistivity at different temperatures for (Cd[1−x]Zn[x])[3]As[2] (x = 0.45). For single-carrier materials, the Hall coefficient and resistivity are defined as respectively, where n is the carrier concentration, μ is the mobility, q is the carrier charge, and B is the magnetic field. When there is just one type of carrier, the conductivity tensor is calculated as When different types of carriers are present, the carrier mobility and density calculated from Eqs. (1) and (2) are the values averaged over all carriers. For such materials, the specific electrical conductivities of individual carriers are additive, and the total conductivity tensor (for systems with N carriers) is defined as The validity of Eqs. (5) and (6) depends on two assumptions, which do not always hold. First, it is assumed that the carrier concentration and mobility are independent of the magnetic field, but, strictly speaking, this is not always true. For example, a magnetic-field-dependent shift of the energy gap can lead to a significant change in concentration of intrinsic carriers^25 or the effective mobility can decrease strongly with increasing magnetic field owing to the phenomenon of magnetic freezing.^26 Second, Eqs. (5) and (6) have been obtained from a semiclassical approach, but the discreteness of the Landau levels means that SdH oscillations (and sometimes the quantum Hall effect) are superimposed on classical conductivity. Thus, SdH oscillations often dominate in transport processes at high fields and/or low temperatures and are insignificant at low fields or high temperatures owing to an increased number of collisions and to thermal expansion, respectively. Consequently, the data obtained from the quantum Hall effect are not suitable for rigorous analysis of mixed conductivity when it is not possible to remove SdH oscillations. However, at the same time, SdH oscillations and the quantum Hall effect themselves provide valuable information complementary to that obtained from semiclassical analysis of mixed conductivity. Mobility spectrum analysis (MSA) considers the conductive spectrum as a function of mobility from the relationship between the conductivity tensor and the magnetic field strength. In this spectrum, each peak value corresponds to the contribution from one of the carriers, and the sign of the mobility indicates the carrier type.^27 When applying MSA, it is first assumed that the mobilities of electrons and holes in the samples are continuously distributed. Equations (5) and (6) can then be replaced by integrals, Dziuba and Górska^28 proposed that Eqs. (7) and (8) should be solved by an iterative approximation method. In accordance with this, the integrals in these equations are replaced by Riemann sums, where m is the number of discrete mobilities into which the mobility spectrum is subdivided. The functions $Sixx$ and $Sixy$ are defined as The Jacobi iterative method is used to solve Eqs. (9) and (10) and obtain the mobility spectrum. In this method, the range of mobility depends on the magnetic field used for the measurement. The upper and lower bounds satisfy the conditions $1/Bmaxexp≤μ≤1/Bminexp$. In accordance with the Jacobi iterative procedure, Eqs. (9)–(12) are transformed to the form At each point of the subregion, the carrier parameters are determined by the linear least squares method, The subregion point with the minimum value is used as the initial approximation, and the Goldstein–Armijo rule is used to select the step. Then, to accelerate convergence, the relaxation method is used to solve the linear equations (13) and (14), where $Sixxk$ and $Sixyk$ are the results from the kth iteration step, and both ω[xx] and ω[xy] give the convergence rate of the iterative procedure (the relaxation rate). To determine the optimal value of ω[xx], the following estimation procedure is used.^29 If $ΔSixx=Sixxk+1−Sixxk$ is the change during the kth iteration performed without relaxation (i.e., with ω[xx] = 1), then where p is a positive integer [the value of $ωxyopt$ is determined analogously]. Figure 3 shows the QMSA spectra of (Cd[1−x] Zn[x])[3]As[2] (x = 0.45) at 1.6, 19, 40, and 120 K. In the spectra, the concentration of each type of carrier j is i.e., the weighted sum of all carriers for the jth peak. The bulk concentrations of carriers and their mobilities are shown in Table I. It can be seen that there are two types of holes with different mobilities in the temperature range of 1.6–19 K; however, with increasing temperature, just one type of carrier is observed. It can be assumed that up to about 19 K, (Cd[1−x]Zn[x])[3]As[2] (x = 0.45) contains two types of holes, namely, light and heavy holes. TABLE I. . Concentration (cm^−3) . Mobility (cm^2 V^−1 s^−1) . Temperature (K) Heavy . Light . Heavy . Light . 1.6 1.8 × 10^18 7.7 × 10^17 1.6 × 10^3 1.6 × 10^4 19 1.7 × 10^18 5.2 × 10^17 1.6 × 10^3 8.6 × 10^3 40 5.2 × 10^18 1.2 × 10^3 120 7.7 × 10^18 1.2 × 10^3 . Concentration (cm^−3) . Mobility (cm^2 V^−1 s^−1) . Temperature (K) Heavy . Light . Heavy . Light . 1.6 1.8 × 10^18 7.7 × 10^17 1.6 × 10^3 1.6 × 10^4 19 1.7 × 10^18 5.2 × 10^17 1.6 × 10^3 8.6 × 10^3 40 5.2 × 10^18 1.2 × 10^3 120 7.7 × 10^18 1.2 × 10^3 The concentration of heavy carriers is almost constant between 16 and 19 K and then increases rapidly with increasing temperature above 40 K, whereas their mobility decreases with increasing temperature above 40 K. On the other hand, the concentration of light carriers decreases with increasing temperature above 40 K, while their contribution to conductivity decreases and is no longer visible in the mobility spectrum. Our results are in qualitative agreement with those of the Bodnar model. That model is a generalization of the three-level Kane model for narrow-gap semiconductors with a tetragonal crystal structure within the kp approximation and is used to describe the band structure of many II[3]–V[2] semiconductors, such as Cd[3]As[2] and Zn[3]As[2]. According to the band structure model, the direct-gap p-type semiconductor Zn[3]As[2] contains bands of both light and heavy holes,^14 which is consistent with our results. Using the modified Bridgman method, we have obtained single crystals of the (Cd[1−x]Zn[x])[3]As[2] (x = 0.45) solid solution. We have found by powder x-ray analysis that the single crystals belong to the α‴ phase with space group I4[1]/amd and lattice parameters a = b = 8.56(5) Å and c = 24.16(6) Å. We are the first to study the temperature dependence of the electrical conductivity and magnetoresistance of the α‴ phase of the (Cd[1−x]Zn[x])[3]As[2] solid solution in the temperature range of 1.6–320 K and in the presence of a magnetic field from 0 to 10 T. The spectrum of charge carriers (Fig. 3) has been analyzed in the temperature range of 1.6–120 K by the QMSA method. It has been found that at temperatures of 1.6–19 K, two types of holes with different mobilities are present in the samples, while with increasing temperature, just one type of carrier is observed in the spectrum. It can be assumed that according to the Bodnar band model,^14 two types of holes are present in (Cd[1−x]Zn[x])[3]As[2] (x = 0.45), namely, light and heavy holes. The concentration of heavy carriers is almost constant (1.7 × 10^18–1.8 × 10^18 cm^−3) between 1.6 and 19 K and then increases rapidly with increasing temperature above 40 K. On the other hand, the contribution of light carriers to conductivity decreases and is no longer visible in the mobility spectrum. The mobility of heavy carriers decreases with increasing temperature above 40 K (Table I). This work was partially supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 0851-2020-0035). It was financially supported by a Program of the Ministry of Education and Science of the Russian Federation for higher education establishments [Project No. FZWG-2020-0032(2019-1569)]. The data that support the findings of this study are available within the article. E. K. , “ II[3]V[2] compounds and alloys Prog. Cryst. Growth Charact. Mater. , “ Semiconducting compounds of the AIIBV group Annu. Rev. Matter. Sci. et al., “ Zn[3]P[2]—A new material for optoelectronic devices Microelectron. J. , “ Cadmium phosphide as a new material for infrared converters Semicond. Phys., Quantum Electron. Optoelectron. et al., “ Zn[3]As[2] nanowires and nanoplatelets: Highly efficient infrared emission and photodetection by an earth abundant material Nano Lett. , and , “ Three-dimensional Dirac semimetal and quantum transport in Cd[3]As[2] Phys. Rev. B et al., “ Observation of a three-dimensional topological Dirac semimetal phase in high-mobility Cd[3]As[2] Nat. Commun. , and , “ Topological phase transition in single crystals of (Cd[1−x]Zn[x])[3]As[2] Sci. Rep. V. S. T. B. E. A. M. A. O. N. E. P. A. V. , and B. A. , “ Transport evidence of mass-less Dirac fermions in (Cd[1−x−y]Zn[x]Mn[y])[3]As[2] (x + y = 0.4) Mater. Res. Express , and , “ Asymmetry and parity violation in magnetoresistance of magnetic diluted Dirac–Weyl semimetal (Cd[0.6]Zn[0.36]Mn[0.04])[3]As[2] Phys. Status Solidi RRL L. A. M. M. A. G. N. V. V. S. A. I. S. F. T. N. I. V. , and A. Yu. , “ Effect of hydrostatic pressures of up to 9 GPa on the galvanomagnetic properties of Cd[3]As[2]–MnAs (20 mol % MnAs) alloy in a transverse magnetic field Inorg. Mater. , and , “ The crystal structure of the semiconductor system Cd[3]As[2]–Zn[3]As[2] Bull. Acad. Polon. Sci. Ser. Sci. Chim. G. F. V. S. , and V. Kh. , “ Crystal structure of α‴(Zn[1−x]Cd[x])[3]As[2] (x = 0.26) Crystallogr. Rep. E. K. A. F. A. N. , and S. I. , “ Composition dependence of the band gap of Cd[3−x]Zn[x]As[2] Sov. Phys. Semicond. W. J. A. S. , and W. E. , “ Physical properties of several II–V semiconductors Phys. Rev. M. J. , “ Transport properties of n type Cd[3−x]Zn[x]As[2] alloys Can. J. Phys. D. J. J. R. , and C. A. , “ Magneto-transport characterization using quantitative mobility-spectrum analysis J. Electron. Mater. J. R. C. A. F. J. D. A. , and J. P. , “ Methods for magnetotransport characterization of IR detector materials Semicond. Sci. Technol. J. R. C. A. , and , “ Quantitative mobility spectrum analysis of multicarrier conduction in semiconductors J. Appl. Phys. J. R. C. A. , and J. R. , “ Improved quantitative mobility spectrum analysis for Hall characterization J. Appl. Phys. E. K. , “ Crystal growth and characterization of II[3]V[2] compounds Prog. Cryst. Growth Charact. , “ POWDER CELL – a program for the representation and manipulation of crystal structures and calculation of the resulting X-ray powder patterns J. Appl. Crystallogr. J. A. Lange’s Handbook of Chemistry 15th ed. New York ), pp. L. E. V. M. , and S. F. , “ Temperature-Dependent elastic constants and dielectric properties of (Zn[1–x]Cd[x])[3](P[1–y]As[y])[2] crystals Inorg. Mater. J. R. C. A. F. J. J. M. J. E. R. J. R. J. , and M. W. , “ Magnetotransport and farinfrared magnetooptical studies of molecularbeam epitaxially grown HgTe J. Vac. Sci. Technol., A B. A. I. M. , “ Magnetic-field-induced localization of electrons in fluctuation potential wells of impurities Phys. Status Solidi B W. A. J. R. , “ Determination of electrical transport properties using a novel magnetic field dependent Hall technique J. Appl. Phys. , “ Analysis of the electrical conduction using an iterative method J. Phys. III Numerical Methods in Engineering with Python 3 Cambridge University Press
{"url":"https://pubs.aip.org/aip/adv/article/11/3/035028/989899/Mixed-conductivity-analysis-of-single-crystals-of","timestamp":"2024-11-12T21:55:18Z","content_type":"text/html","content_length":"268215","record_id":"<urn:uuid:2b6a4d78-4e9c-4491-a4b5-68ae4c11a951>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00773.warc.gz"}
Mathematics – The Institute of Stuff Archive for Mathematics A bunch of musically related posts are being re-hosted on another site. This one specifically is now over here. A bunch of musically related posts are being re-hosted on another site. This one specifically is now over here. | Tags: A bunch of musically related posts are being re-hosted on another site. This one specifically is now over here. | Tags: Facebook competitions. Lovely aren’t they? The sort where you’re invited to supply some of your contact details in exchange for a token in a pile, one of which will be randomly picked and its owner be declared the winner. The prizes are often rather nice, rather desirable, so you’re tempted. “Facebook already has my contact details“, you think, “… this company isn’t getting any more than I’ve already chucked in Facebook’s general direction anyway“. What’s the harm? But then they invite you to be a virus. They give you an opportunity to increase your chances of winning. “Invite five friends“, they cry. For every friend that joins in, we’ll give you an extra vote. So the deal here is that, in exchange for you doing their work for them (increasing the likelihood of their getting the large amount of contactage they’re after), you get more tokens in the Of course you instinctively ‘know’ immediately that this is bollocks. Obviously the more people in the game, the worse your chances are, but you’re thinking “yes but I get more votes, so maybe …?” Let’s do some sums. Clearly the best way to win is to be the only player. One participant, one token in the pile, it’s the only one that can be drawn, chance of winning – 100%. Easy. Your strategy – kill all those you know who’ve entered. Which means, pretty much – since you don’t know who’s entered, kill everybody but the guys with the prize. But that sort of behaviour is deemed unacceptable. We know, it’s health and safety gone mad, but what are you to do? So, like an idiot, you invite your five friends. Let’s assume the worst – they all take up the offer. You’ve now won an extra token for each friend so you have six tokens in the pile. And there are now six people in the game, most – five – with only one token. So you’re better off than they are, for sure. There are eleven tokens in the pile – your original one plus the extra five you just earned, and the five new ones from your friends. Your chance of winning just dropped from 1 in 1 (certainty) to 6 in 11 (about a half) and theirs went up from zero to 1 in 11. Suppose that each of these friends is as equally idiotic as you. They aren’t going to invite you back because you’re already in, so they each invite five new friends. That’s twenty five brand new people. There are now thirty one people in the game – you, plus five, plus 25 – and there are your six (the number didn’t increase, sadly) tokens sitting in a pile with twenty five new ones from twenty five new people, plus thirty tokens – six each now – from your five successful friends. Your chance of winning is now down to 6 in 61, about 10%. And you’ve lost control over what happens Except it might not be twenty five new people of course. Because your friends’ friends may each be shared by more than one of your friends. That’s quite likely. In the extreme case, your little coterie may well comprise only you, your five friends, and one other person you don’t know but who happens to be the common friend of each of your friends. The competition runner is only ever going to get seven sets of contact details out of this sorry bunch – so you have that satisfaction – but what are your chances of winning now? They can’t be quite as bad because your universe is so small. Well, your five friends each invite the maximum five friends, but four of those friends were (unbeknownst to them – they knew only not to bother inviting you) already in the game (you brought them in) and it’s only the fifth person (the one you don’t know) who is a new joiner, and only one of your five friends is going to succeed in trapping that individual. So there are seven individuals and a pile of thirteen tokens. That’s the eleven before the new round of invitations, plus one for the new individual, plus one introducer’s token for your successful friend inviter. Your chance of winning dropped a tad, from 6 in 11 to 6 in 13. One of your friends has a 2 in 13 chance of winning and your other four friends, and your ‘friend-in-law’, have a 1 in 13 chance. The universe can have a little snigger at this though since it knows that your friends believe – erroneously – that their chances of winning may still increase at any time when their untaken invitations succeed (which of course they never will). So there’s that. You invited your own competition. How silly of you. If you’d invited only one friend – you had a gun pointed at you, the devil made you do it, whatever – you’d’ve been better off with a 2 in 3 But you know you’re not the one in control here. Others are going to come in regardless of your meanness in attempting to keep knowledge of this competition to yourself. So having more of your tokens in the pile is always going to be better. In fact, if you’re the only one keeping your friends out, and everyone else is being all lady bountiful with their invitations, then you’ve no chance. So you have to bring in more competition. It’s really annoying. Had an idea to find my dad’s Bacon Number. Though not in the film business, he had featured (as half of the double act ‘Johnnie and Ronnie’)- playing the accordion alongside his drummer chum – in Stewart Mackinnon’s 1993 film Border Crossing. This is a difficult film to track down. Amber Styles was in that, so she’s the obvious link, and indeed it turns out that her Bacon Number is three and so his is four. However, Gerhard Garbers – also in Border Crossing – was in a 1990 film called Werner – Beinhart!, which also featured Ludger Pistor, who was in this year’s X-Men First Class with the man himself, old Kev. So Gerhard (also in the highly enjoyable Run Lola Run) beats Amber (sorry Amber) with a Bacon Number of two, so my dad’s is three. Except – there’s a route with even more cachet. Border Crossing’s Les Wilde was in Stormy Monday with Tommy Lee Jones, who was of course with ‘the Bake’ in JFK. So, though my dad’s Bacon Number is still three (which is pretty good for a non-player), that’s a fairly decent path. But whoa hang on there and hold your horses just a cotton’ pickin’ minute – Stormy Monday? My sister was in that! OK, it was a bar scene and she didn’t say anything and she’s uncredited, but still – that means she’s got a Bacon Number of two. So this woman is standing with a clipboard inside the shop, which is Maplin’s, and I’m on my way out and I haven’t bought anything (but she wouldn’t necessarily know that) and she asks (yes she does), like, “Would you mind taking part in a survey?” and I say “It’s ninety pounds an hour” and she laughs and says “No thanks” and lets me go without further ado. Congratulations to Mrs Ivy duPorc of East Gleaming, Hants for proving that the complex function z(s) = 1+2^-s+3^-s+4^-s+… has all of its non-trivial zeros symmetrically distributed about the line Re (s) = ½. A book token for £4.50 is on its way!
{"url":"http://iostuff.org/category/science-2/maths/","timestamp":"2024-11-04T14:49:46Z","content_type":"text/html","content_length":"45390","record_id":"<urn:uuid:37859e70-8dca-4ae0-ba96-2924567e7aae>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00067.warc.gz"}
gold price per gram calculator gold price per gram Gold is one of the most valuable precious metals and is widely used in jewelry, coins, and other decorative objects. The value of gold is determined by its purity and weight. If you are interested in buying or selling gold, it is important to know how to calculate its worth accurately. In this article, we will explain how to calculate the worth of gold, taking into consideration its purity and the cost of melting the gold. Understanding Gold Purity Gold purity is measured in karats (K) or fineness. 24K gold is considered pure gold and is made up of 99.9% gold. 21K gold is 87.5% gold, 18K gold is 75% gold, 14K gold is 58.3% gold, and 9K gold is 37.5% gold. The remaining percentage is made up of other metals such as silver, copper, or nickel, which are added to increase the metal's durability and strength. Calculating the Worth of Gold To calculate the worth of gold, you need to know its weight and purity. The price of gold is quoted in ounces or grams. In most cases, the weight of gold is measured in grams, and the price is quoted per gram. Step 1: Determine the Weight of Gold The first step is to determine the weight of the gold. You can use a digital scale to weigh the gold accurately. If you are buying or selling gold, it is essential to have an accurate weight of the Step 2: Determine the Purity of Gold Next, you need to determine the purity of the gold. This can be done by looking for markings on the gold, such as 24K, 21K, 18K, 14K, or 9K. If there are no markings, you can take the gold to a jeweler, who can test the gold and determine its purity. Step 3: Calculate the Value of Gold Once you know the weight and purity of the gold, you can calculate its value. To do this, you can use an online gold calculator or use the following formula: Value of Gold = Weight of Gold (in grams) x Purity of Gold x Price of Gold per Gram For example, let's say you have 10 grams of 18K gold. The current market price of gold is $60 per gram. To calculate the value of the gold, you would use the following formula: Value of Gold = 10 grams x 0.75 (purity of 18K gold) x $60 per gram Value of Gold = $450 In this example, the value of the 18K gold is $450. Cost of Melting Gold There is also a cost associated with melting the gold. When gold is melted down, it needs to be refined to remove impurities, which can add to the cost. The cost of melting gold can vary depending on the refinery and the purity of the gold. Typically, the cost of melting gold is around 1-2% of the total value of the gold.
{"url":"https://www.talupa.com/gold/Australia","timestamp":"2024-11-03T02:50:02Z","content_type":"text/html","content_length":"422485","record_id":"<urn:uuid:51d428fa-a2cf-4e19-b1e2-b1e8119e41b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00287.warc.gz"}
If every question in JEE Advanced is difficult and different from what it is in books, how do average students crack it with a lot of practice? At first I would like to say you that what you mean by different from what it is in books? If you check the previous year’s question paper properly then you can most of the questions are asked from NCERT text books which are ignored by most of the students. If you talk with IIT JEE toppers they will also suggest you to complete the NCERT books properly. If you do not complete the books thoroughly then you will not able to answer the questions. After that you said every questions are difficult, that’s totally wrong they make only 10% of questions which are difficult but also answerable, just to check the students who have studied properly all the concepts. And the rest 90% huge part are easy and average students can answer it properly. And lastly any average student can crack the examination and secure a good rank with more practice and clarity of all the concepts. You will take help from iitian mentors they will help you how you can prepare things. You need to study hard and make a proper time table and follow it that is the main point. If you can give proper 7 – 8 hours in a day then you can crack the examination with good JEE has a well defined syllabus and almost all the standard books available in the market cover the syllabus. Give me 1 example of a question which you did not find in ANY book and I will edit my answer accordingly. In fact, they lift up questions directly from books. Which books? The books you ignore - NCERT text books. Now pick up JEE 2013 paper and sit with NCERT and then compare them side by side. Every concept asked in the paper can be found in NCERT text books. Its all about correct assembly of concepts in a Your question contains the answer. They crack it with a lot of practice indeed! They solve endless questions so that they develop a directed thinking attitude. An attitude which helps them to analyse the problem and to identify the hidden concept involved. This attitude is not inherent. This attitude has to be developed. There is no shortcut. How to develop it? Read below In general, a JEE aspirant has 2 years 2 years = 365 x 2 = 730 days Assuming that you are in no mood of studying for more than 5 hours a day, you have a total of 730 x 5 = 3650 hours Now you roughly have 90 chapters to be done (30 in each subject) that leaves 3650 / 90 hours ~ 40 hours per chapter. Now pick up a chapter at which you are weak and do this: • Study theory for 10 hours • Solve easy problems for 5 hours • Solve medium problems for 10 hours • Solve hard problems for 15 hours That completes your 40 hours. Come back to me and tell me your status. JEE is a well-known examination in India and most of the students who appear for this exam often find it difficult. However, it is important to understand that the questions in JEE are mostly asked from the NCERT text books, which most students tend to ignore. If you talk to IIT JEE toppers, they would suggest you to complete the NCERT books thoroughly, as it will help you answer most of the questions in the examination. Only 10% of the questions in JEE are difficult, but answerable, to test the students who have studied the concepts properly. The rest 90% of the questions are relatively easy, and an average student can answer them correctly with proper preparation. An average student can crack the JEE examination and secure a good rank by putting in a lot of practice and having clarity of all the concepts. You can take the help of IITian mentors to guide you in your preparation. It is important to have a proper study plan and a time table, and to stick to it. If you can dedicate 7 to 8 hours a day to studying, you can crack the examination with good marks. For a JEE aspirant, there are roughly 2 years to prepare, which translates to 730 days, or 3650 hours if you study for 5 hours a day. To prepare effectively, you need to divide these hours between 90 chapters, roughly 40 hours per chapter. To improve your skills in a particular chapter, you can study the theory for 10 hours, solve easy problems for 5 hours, solve medium problems for 10 hours, and solve hard problems for 15 hours. With this approach, you can develop the directed thinking attitude required to succeed in the JEE examination.
{"url":"https://ask.exprto.com/t/if-every-question-in-jee-advanced-is-difficult-and-different-from-what-it-is-in-books-how-do-average-students-crack-it-with-a-lot-of-practice/14441","timestamp":"2024-11-05T20:24:15Z","content_type":"text/html","content_length":"24867","record_id":"<urn:uuid:8789ca1d-da2b-4479-b8a7-88a81631d05f>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00109.warc.gz"}
10.2 Two-Samples: Hypothesis Testing on the Difference in Means | Econometrics I | Class Notes 10.2 Two-Samples: Hypothesis Testing on the Difference in Means We are interested in testing the hypotheses: \[H_0: \mu_1 - \mu_2 = 0 \] \[H_1: \mu_1 - \mu_2 \not= 0 \] Technical note • The independent samples t-test will help us to compare the mean of the two groups. • We can assume that devices corresponds to two unrelated/independent groups. • The question to answer is: Is the mean price of the iPhone is statistically different from other smartphones? We now consider an experimental design where we want to determine whether there is a difference between two groups within the population. If \(\mu_1\) and \(\mu_2\) denote the mean value of group \(1\) and group \(2\), respectively, a mathematical representation of that hypothesis is: \[H_0: \mu_1 - \mu_2 = 0 \] \[H_1: \mu_1 - \mu_2 \not= 0 \] For example, let’s suppose we want to test whether there is any difference between the effectiveness of a new drug for treating cancer. One approach is to create a random sample of 40 people, half of whom take the drug and half take a placebo. For this approach to give valid results it is important that people be assigned to each group at random. Such samples are independent.^9 In order to choose the t-test to evaluate our hypothesis, we will consider these cases: • When the population variances are known, hypothesis testing can be done using a Normal distribution. • When the population variances are unknown, hypothesis testing can be done using a the \(t\) distribution with a pooled sample variance. In the second case we consider three cases: - Independent samples and Equal variances - Independent samples and Unequal variances - Paired samples Technical note The independent samples t-test (or unpaired samples t-test) is used to compare the mean of two independent groups. For example, you might want to compare the mean weights of individuals grouped by gender: male and female groups, which are two unrelated/independent groups. The independent samples t-test comes in two different forms: • the standard Student’s t-test, which assumes that the variance of the two groups are equal. • the Welch’s t-test, which is less restrictive compared to the original Student’s test. This is the test where you do not assume that the variance is the same in the two groups, which results in the fractional degrees of freedom. The two-samples independent t-test assume the following characteristics about the data: • Independence of the observations. Each subject should belong to only one group. There is no relationship between the observations in each group. • No significant outliers in the two groups • Normality. the data for each group should be approximately normally distributed. • Homogeneity of variances. the variance of the outcome variable should be equal in each group. Adapted from here 10.2.1 Independent samples and Equal variances • \(x_{11},x_{12},\ldots,x_{1n_1}\) is a random sample of size \(n_1\) from population \(1\) • \(x_{21},x_{22},\ldots,x_{2n_2}\) is a random sample of size \(n_2\) from population \(2\) • The two populations represented by \(X_1\) and \(X_2\) are independent. • Both population are normal. • \(\bar{x_1}\) and \(\bar{x_2}\) are the sample means. • \(s_1^2\) and \(s_1^2\) are the sample variances. The quantity \[T=\dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{s_p \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}\] has a \(t\) distribution \(n_1 + n_2 - 2\) degrees of freedom. Where \(s_p\) is called pooled variance \[s_p^2 = \dfrac{(n_1-1)s_1^2 + (n_2-1)s_2^2 }{(n_1-1)+(n_2-1)}\] 10.2.2 Example A company wants to improve its sales. It believes that the size of the shop has a big impact on the mean sale per transaction. Recent sales data were taken from 18 salesmen in 2 different shops (9 per shop). The company needs to confirm its suspicions. \[ x_1= \{ 203, 229, 215, 220, 223, 233, 208, 228, 209 \} \] \[n_1=9; \,\,\,\, \bar{x}_1=218.67 ; \,\,\,\, s_1 = 10.52\] \[ x_2= \{ 221, 207, 185, 203, 187, 190, 195, 204, 212 \} \] \[n_2=9; \,\,\,\, \bar{x}_2=200.44 ; \,\,\,\, s_2 = 12.13\] \[s_p^2 = \dfrac{(n_1-1)s_1^2 + (n_2-1)s_2^2 }{(n_1-1)+(n_2-1)}=\dfrac{1}{2}(810.52^2+12.13^2)= 128.9\] \[H_0: \mu_1 = \mu_2 \,\,\,\,\,\, \text{ or } \,\,\,\,\,\, H_0: \mu_1 - \mu_2 = 0\] \[H_0: \mu_1 \neq \mu_2 \,\,\,\,\,\, \text{ or } \,\,\,\,\,\, H_0: \mu_1 - \mu_2 \neq 0\] \[T=\dfrac{(\bar{x}_1-\bar{x}_2)-(\mu_1-\mu_2)}{s_p \sqrt{ \frac{1}{n_1} + \frac{1}{n_2} } } = \dfrac{(218.67-200.44)-(0)}{11.35\sqrt{\frac{1}{9} + \frac{1}{9}}}= 3.4049 \] We would reject \(H_0\) if \(T\) were less than \(t_\alpha\) or greater than \(t_\alpha\) (determined used a \(t\)-table) \[\alpha = 0.05; \,\, df = n_1+n_2-2 =16\] The test statistic \(T= + 3.4049\) is greater than \(t_\alpha= \pm 2.31\) . Then, we reject the null hypothesis. That is, the test statistic falls in the critical region We would determine the area under a \(t_{df}\) curve, to the right of \(T\) and to the left of \(-T\) \[T = \pm 3.4049; \,\, df = 16\] -value of the test is 0.0036, which is less than the significance level \(\alpha = 0.05\) . Then, we reject the null hypothesis. The conclusion is the same regardless of the approach use. There is sufficient evidence, at the \(\alpha= 0.05\) level, to conclude that the mean sale per transaction is different between the two shops. The mean sale in shop 1 was 218.67 (sd = 10.52), whereas the mean in shop 2 was 200.44 (sd = 12.13). A two-samples t-test showed that the difference was statistically significant, t(16) = 3.4049, p < 0.0036; where, t(16) is shorthand notation for a t-statistic that has 16 degrees of freedom. .y. group1 group2 n1 n2 statistic df p p.signif x 1 2 9 9 3.404865 15.68926 0.00371 ** Your turn • The company thinks that there is a difference (that the mean sales are bigger in the first shop). Test this hypothesis.
{"url":"https://ravinesromy-econometrics01-2022.netlify.app/two-samples-hypothesis-testing-on-the-difference-in-means","timestamp":"2024-11-07T19:10:25Z","content_type":"text/html","content_length":"45974","record_id":"<urn:uuid:87561c43-9619-45b5-a105-57c5427e5e3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00767.warc.gz"}
Cutting Slip Velocity Calculation Method 2Cutting Slip Velocity Calculation Method 2 Cutting Slip Velocity Calculation Method 2 This is another method to determine cutting slip velocity. The process of calculation is quite different from the first method however it is still straight forward calculation. It still gives you the following answers: annular velocity, cutting slip velocity and net velocity. Let’s get started with this calculation method. 1. Determine n n is the power law exponent. Θ600 is a value at 600 viscometer dial reading. Θ300 is a value at 300 viscometer dial reading. 2. Determine K K is the fluid consistency unit Θ300 is a value at 300 viscometer dial reading. n is the power law exponent. 3. Determine annular velocity with following equation: AV is annular velocity in ft/min. Q is flow rate in gpm (gallon per minute). Dh is diameter of hole in inch. Dp is diameter of drill pipe in inch. 4. Determine cutting slip velocity with following equation: µ is viscosity in centi-poise. V is annular velocity in ft/min. Dh is diameter of hole in inch. Dp is diameter of drill pipe in inch. n is dimensionless number from the equation 1 K is dimensionless number from the equation 2 5. Slip Velocity (Vs) in ft/min Vs is slip velocity in ft/min. DensC is cutting density in ppg DiaC is cutting diameter in inch µ is viscosity in centi-poise given from equation#4. 6. Determine net rise velocity with following equation: Net rise velocity = AV – Vs AV is annular velocity. Vs Cutting Velocity This figure indicates that cuttings are being lifted by mud or are still falling down. If net rise velocity is positive, it means that you have good flow rate which can carry cuttings in the wellbore. On the other hand, If net rise velocity is negative, your current flow rate is NOT enough to carry cuttings. Example: Please use the following information to determine annular velocity, cutting slip velocity, net rise velocity, and tell us if the flow rate is good for hole cleaning. θ300 = 25 θ600 = 40 Flow rate = 750 gpm Hole size = 12.0 inch Drill pipe diameter = 5 inch Cutting diameter = 0.3 inch Cutting density = 22 ppg Mud weight = 10 ppg 1. Determine n n = 0.678 2. Determine K K = 0.365 3. Determine annular velocity AV = 154.4 ft/min 4. Determine cutting slip velocity µ = 53.88 centi-poise 5. Determine slip velocity (Vs) Vs = 35.78 ft/min 6. Determine net rise velocity Net rise velocity = 154.4 – 35.78 = 118.6 ft/min Conclusion: This flow rate is good for hole cleaning Ref book: Formulas and Calculations for Drilling, Production and Workover, Second Edition 4 Responses to Cutting Slip Velocity Calculation Method 2 1. i like your site very much. it give lot helps peoples who were working. drilling rigs. 2. what formula to predict cutting bed (% bed & bed height)? □ We don’t have the eqatuion to determine a cutting bed. This site uses Akismet to reduce spam. Learn how your comment data is processed. Tagged Cutting Slip Velocity, Cutting Slip Velocity Calculation, hydraulic calculation. Bookmark the permalink.
{"url":"https://www.drillingformulas.com/cutting-slip-velocity-calculation-method-2/","timestamp":"2024-11-04T17:25:02Z","content_type":"text/html","content_length":"94266","record_id":"<urn:uuid:4254c67f-5656-40fa-bd92-36b762087067>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00088.warc.gz"}
Lambda Cost Calculator Aws - Certified Calculator Lambda Cost Calculator Aws Introduction: AWS Lambda is a versatile serverless computing service that allows you to run code without provisioning or managing servers. However, understanding the cost implications of your Lambda functions is crucial for efficient resource allocation. To help you with this, we’ve created the Lambda Cost Calculator for AWS, which simplifies the process of estimating your monthly Lambda costs. Formula: To calculate the monthly cost of your Lambda function, we use the following formula: Monthly Cost = (Lambda Executions) * (Lambda Duration / 100) * $0.00001667 • Lambda Executions: The number of times your Lambda function is executed. • Lambda Duration: The execution duration of your Lambda function in milliseconds. • $0.00001667: The cost per GB-second for Lambda functions. How to Use: 1. Enter the number of Lambda executions in the “Lambda Executions” field. 2. Input the Lambda execution duration in milliseconds in the “Lambda Duration” field. 3. Click the “Calculate” button to estimate your monthly Lambda cost. Example: Suppose you have 100,000 Lambda executions with an average duration of 200 milliseconds. Monthly Cost = (100,000) * (200 / 100) * $0.00001667 = $33.34 1. Q: How is the Lambda duration calculated? A: The Lambda duration is the time it takes for your function to execute, measured in milliseconds. 2. Q: Is this calculator accurate for all AWS regions? A: Yes, it provides an estimate based on the standard AWS Lambda pricing. 3. Q: What if I have variable execution durations? A: This calculator assumes an average duration; actual costs may vary. 4. Q: Are there any hidden fees not accounted for? A: No, this calculator uses the publicly available AWS Lambda pricing. 5. Q: Can I use this for Lambda@Edge? A: Yes, you can estimate costs for Lambda@Edge functions as well. Conclusion: Our Lambda Cost Calculator for AWS is a valuable tool to quickly estimate your monthly Lambda expenses. By inputting the number of executions and average execution duration, you can gain insights into your AWS Lambda costs, enabling better budgeting and resource management for your serverless applications. Leave a Comment
{"url":"https://certifiedcalculator.com/lambda-cost-calculator-aws/","timestamp":"2024-11-13T06:37:29Z","content_type":"text/html","content_length":"52656","record_id":"<urn:uuid:90f7044a-f899-4544-bf9d-fd588232b6c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00828.warc.gz"}
One of the most common data manipulation tasks is data aggregation where, in its simplest form, we carry out operations that reduce multiple values of a column into a single value e.g. compute the sum of values of a numeric column. Typically, we perform data aggregation on mutually exclusive groups of rows then combine the results vertically into a new column or table. In its simplest form, a typical data aggregation operation in SQL looks like so: SELECT col_1, MIN(col_2) col_2_min, SUM(col_3) col_3_sum FROM table_1 GROUP BY col_1; where we use GROUP BY to create mutually exclusive groups of rows, and then we carry out data aggregation operations on each group. This section is organized as follows:
{"url":"https://optima.io/reference/data-manipulation/sql/aggregating","timestamp":"2024-11-03T03:41:00Z","content_type":"text/html","content_length":"173714","record_id":"<urn:uuid:55e3b644-e7f2-45b5-b3f5-133487a4e2ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00323.warc.gz"}
SP1 Book Precompiles are built into the SP1 zkVM and accelerate commonly used operations such as elliptic curve arithmetic and hashing. Under the hood, precompiles are implemented as custom STARK tables dedicated to proving one or few operations. They typically improve the performance of executing expensive operations in SP1 by a few orders of magnitude. Inside the zkVM, precompiles are exposed as system calls executed through the ecall RISC-V instruction. Each precompile has a unique system call number and implements an interface for the SP1 also has been designed specifically to make it easy for external contributors to create and extend the zkVM with their own precompiles. To learn more about this, you can look at implementations of existing precompiles in the precompiles folder. More documentation on this will be coming soon. To use precompiles, we typically recommend you interact with them through patches, which are crates modified to use these precompiles under the hood, without requiring you to call system calls If you are an advanced user you can interact with the precompiles directly using external system calls. Here is a list of all available system calls & precompiles. //! Syscalls for the SP1 zkVM. //! Documentation for these syscalls can be found in the zkVM entrypoint //! `sp1_zkvm::syscalls` module. pub mod bls12381; pub mod bn254; pub mod ed25519; pub mod io; pub mod secp256k1; pub mod secp256r1; pub mod unconstrained; pub mod utils; #[cfg(feature = "verify")] pub mod verify; extern "C" { /// Halts the program with the given exit code. pub fn syscall_halt(exit_code: u8) -> !; /// Writes the bytes in the given buffer to the given file descriptor. pub fn syscall_write(fd: u32, write_buf: *const u8, nbytes: usize); /// Reads the bytes from the given file descriptor into the given buffer. pub fn syscall_read(fd: u32, read_buf: *mut u8, nbytes: usize); /// Executes the SHA-256 extend operation on the given word array. pub fn syscall_sha256_extend(w: *mut [u32; 64]); /// Executes the SHA-256 compress operation on the given word array and a given state. pub fn syscall_sha256_compress(w: *mut [u32; 64], state: *mut [u32; 8]); /// Executes an Ed25519 curve addition on the given points. pub fn syscall_ed_add(p: *mut [u32; 16], q: *const [u32; 16]); /// Executes an Ed25519 curve decompression on the given point. pub fn syscall_ed_decompress(point: &mut [u8; 64]); /// Executes an Sepc256k1 curve addition on the given points. pub fn syscall_secp256k1_add(p: *mut [u32; 16], q: *const [u32; 16]); /// Executes an Secp256k1 curve doubling on the given point. pub fn syscall_secp256k1_double(p: *mut [u32; 16]); /// Executes an Secp256k1 curve decompression on the given point. pub fn syscall_secp256k1_decompress(point: &mut [u8; 64], is_odd: bool); /// Executes an Secp256r1 curve addition on the given points. pub fn syscall_secp256r1_add(p: *mut [u32; 16], q: *const [u32; 16]); /// Executes an Secp256r1 curve doubling on the given point. pub fn syscall_secp256r1_double(p: *mut [u32; 16]); /// Executes an Secp256r1 curve decompression on the given point. pub fn syscall_secp256r1_decompress(point: &mut [u8; 64], is_odd: bool); /// Executes a Bn254 curve addition on the given points. pub fn syscall_bn254_add(p: *mut [u32; 16], q: *const [u32; 16]); /// Executes a Bn254 curve doubling on the given point. pub fn syscall_bn254_double(p: *mut [u32; 16]); /// Executes a BLS12-381 curve addition on the given points. pub fn syscall_bls12381_add(p: *mut [u32; 24], q: *const [u32; 24]); /// Executes a BLS12-381 curve doubling on the given point. pub fn syscall_bls12381_double(p: *mut [u32; 24]); /// Executes the Keccak-256 permutation on the given state. pub fn syscall_keccak_permute(state: *mut [u64; 25]); /// Executes an uint256 multiplication on the given inputs. pub fn syscall_uint256_mulmod(x: *mut [u32; 8], y: *const [u32; 8]); /// Enters unconstrained mode. pub fn syscall_enter_unconstrained() -> bool; /// Exits unconstrained mode. pub fn syscall_exit_unconstrained(); /// Defers the verification of a valid SP1 zkVM proof. pub fn syscall_verify_sp1_proof(vk_digest: &[u32; 8], pv_digest: &[u8; 32]); /// Returns the length of the next element in the hint stream. pub fn syscall_hint_len() -> usize; /// Reads the next element in the hint stream into the given buffer. pub fn syscall_hint_read(ptr: *mut u8, len: usize); /// Allocates a buffer aligned to the given alignment. pub fn sys_alloc_aligned(bytes: usize, align: usize) -> *mut u8; /// Decompresses a BLS12-381 point. pub fn syscall_bls12381_decompress(point: &mut [u8; 96], is_odd: bool); /// Computes a big integer operation with a modulus. pub fn sys_bigint( result: *mut [u32; 8], op: u32, x: *const [u32; 8], y: *const [u32; 8], modulus: *const [u32; 8], /// Executes a BLS12-381 field addition on the given inputs. pub fn syscall_bls12381_fp_addmod(p: *mut u32, q: *const u32); /// Executes a BLS12-381 field subtraction on the given inputs. pub fn syscall_bls12381_fp_submod(p: *mut u32, q: *const u32); /// Executes a BLS12-381 field multiplication on the given inputs. pub fn syscall_bls12381_fp_mulmod(p: *mut u32, q: *const u32); /// Executes a BLS12-381 Fp2 addition on the given inputs. pub fn syscall_bls12381_fp2_addmod(p: *mut u32, q: *const u32); /// Executes a BLS12-381 Fp2 subtraction on the given inputs. pub fn syscall_bls12381_fp2_submod(p: *mut u32, q: *const u32); /// Executes a BLS12-381 Fp2 multiplication on the given inputs. pub fn syscall_bls12381_fp2_mulmod(p: *mut u32, q: *const u32); /// Executes a BN254 field addition on the given inputs. pub fn syscall_bn254_fp_addmod(p: *mut u32, q: *const u32); /// Executes a BN254 field subtraction on the given inputs. pub fn syscall_bn254_fp_submod(p: *mut u32, q: *const u32); /// Executes a BN254 field multiplication on the given inputs. pub fn syscall_bn254_fp_mulmod(p: *mut u32, q: *const u32); /// Executes a BN254 Fp2 addition on the given inputs. pub fn syscall_bn254_fp2_addmod(p: *mut u32, q: *const u32); /// Executes a BN254 Fp2 subtraction on the given inputs. pub fn syscall_bn254_fp2_submod(p: *mut u32, q: *const u32); /// Executes a BN254 Fp2 multiplication on the given inputs. pub fn syscall_bn254_fp2_mulmod(p: *mut u32, q: *const u32);
{"url":"https://docs.succinct.xyz/writing-programs/precompiles.html","timestamp":"2024-11-10T20:57:25Z","content_type":"text/html","content_length":"22297","record_id":"<urn:uuid:f9a395c9-c743-41ce-9ca8-e466a1079065>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00264.warc.gz"}
Student Feature - Wasiur Rahman Khuda Bukhsh 1. Who am I? I am a postdoctoral researcher at the Mathematical Biosciences Institute, the Ohio State University. I was born in a remote yet historical part of India. The place is called Murshidabad, erstwhile capital of Nawab Siraj ud-Daulah, the last independent Nawab of Bengal. It is about 200 kms away from Kolkata, the present capital of the state of West Bengal, India. I completed my Bachelor of Science (B. Sc.) and Master of Statistics (M. Stat.) degrees in Kolkata. After spending a couple of years in the industry, I went to Technische Universität Darmstadt in Germany for my doctoral I am interested in stochastic analysis of complex systems that arise from various branches of sciences. Biological applications, in particular, fascinate me. They often reveal insightful connections among dynamical systems theory, probability theory and statistics. 2. Random dynamical processes on random dynamical structures From an abstract point of view, many of the complex systems I study can be described as random dynamical processes on random dynamical structures. Think of a collection of agents each of which is endowed with a local state space (e.g., immunological statuses, such as susceptible, infected, and recovered, in case of an epidemic, political opinion, queue lengths at various service stations, magnetic dipole moments of atomic “spins” in models of ferromagnetism, buffer availability of a video chunk in peer-to-peer video streaming systems). The agents change their local states dynamically as they interact with other agents. Now, think of the agents as vertices of a graph. Then, the interactions among the agents can be described by the connections (edges between vertices) in the graph. See Figure 1 for an example from infectious disease epidemiology. Because the interaction patterns could change over time (e.g., due to preventive measures such as social distancing against COVID-19), the graph itself evolves dynamically. Exact stochastic analysis of such complex systems becomes cumbersome as the size of the graph grows large. Therefore, we need large-graph approximations. Functional Laws of Large Numbers (FLLNs) in the form of Ordinary Differential Equations (ODEs) or Partial Differential Equations (PDEs), Functional Central Limit Theorems (FCLTs) in terms of Stochastic Differential Equations (SDEs) or Stochastic Partial Differential Equations (SPDEs), and Large Deviations Principles (LDPs) are typical theoretical results that provide such approximations. I spend a lot of time deriving such large-graph approximation theorems. Those theorems are useful not only from a mathematical perspective, but also for statistical purposes. For instance, Dynamic Survival Analysis (DSA) is a method that converts those limiting dynamical systems in the form of ODEs or PDEs into prob- ability laws that can be used to derive likelihood functions for statistical inference. 3. Dynamic Survival Analysis The DSA method [1] combines classical dynamical systems theory and survival analysis from statistics. In essence, it allows one to describe individual-level (micro-scale) dynamics in terms of population-level (macro-scale) dynamics. The population-level dynamics are typically described by large-graph limiting ODEs or PDEs. In the context of infectious disease epidemiology, the mathematical underpinning is provided by a novel application of the Sellke construction, which describes an epidemic process in terms of timings of individual infection and recovery as opposed to counts of individuals with different immunological statuses. In the general context, the DSA method extracts probability laws for the fate of a single individual agent (vertex) in a large graph. The probability laws are then used for likelihood-based parameter inference. The DSA method was instrumental in the response modeling efforts [3] of the Ohio State University (OSU)/Infectious Diseases Institute (IDI) against COVID-19 in the state of Ohio. The response modeling team provided decision analytic support to the Governor’s Office, the Ohio Department of Health (ODH), and the Ohio Hospital Association (OHA). Apart from doing mathematics, I enjoy eating delicious food, playing cricket and badminton, listening to music, and traveling. Even though I do not get much time to read literature nowadays, some of my heroes are actually writers. Rabindranath Tagore and Nazrul Islam, two prolific poets of Bengali literature, had the biggest influence on me during my formative years. Even today, they continue to shape my philosophy towards life. [1] W. R. KhudaBukhsh, B. Choi, E. Kenah, and G. A. Rempała. Survival dynamical systems: individual-level survival analysis from population-level epidemic models. Interface Focus, volume 10(1):p. 20190048, 2020. [2] W. R. KhudaBukhsh, C. Woroszylo, G. A. Rempała, and H. Koeppl. Functional Central Limit Theorem For Susceptible-Infected Process On Configuration Model Graphs. arXiv preprint: https://arxiv.org/ [3] OSU / IDI COVID-19 Response Modeling Team. Predicting covid-19 cases and subsequent hospital burden in Ohio. 2020. Available: https://idi.osu.edu/assets/pdfs/ covid_response_white_paper.pdf. Please login or register to post comments.
{"url":"https://dsweb.siam.org/The-Magazine/Article/student-feature-wasiur-rahman-khuda-bukhsh","timestamp":"2024-11-08T04:19:05Z","content_type":"text/html","content_length":"52616","record_id":"<urn:uuid:b41c8611-c4cb-4b2c-b6a2-ed2357f611a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00188.warc.gz"}
College Algebra with Modeling and Visualization (6th Edition) - Solutions Manual This product ONLY includes the College Algebra with Modeling and Visualization 6th edition solutions manual. The ebook is available for sale separately. About the eBook Showing why mathematics matters – In Rockswold’s College Algebra with Modeling and Visualization 6th edition, author Gary Rockswold doesn’t just mention real-world examples; he teaches mathematical concepts through those applications. For example, if we look at Facebook usage over time, what might that tell us about predictions and linear growth? In this way, college students learn the concepts in the context of the world they know, which leads to better understanding and retention. From there, the author shows a connection between modeling, application, and visualization. Dr. Rockswold is known for presenting the concept of a function as a unifying theme, with an emphasis on the rule of 4 (numerical, graphical, verbal, and symbolic representations). The latest 6th Edition emphasizes conceptual understanding with new in-chapter features and assignment options, while at the same time providing tools to empower teachers and instructors to make their classroom more active through group work and collaboration. 978-0134418193, 978-0134418049 P.S. We also have College Algebra with Modeling and Visualization 6E ebook for sale separately. See related products below. Note: This sale only includes the College Algebra with Modeling and Visualization Sixth Edition solutions. No ebook or Mylab/access codes included. There are no reviews yet. Be the first to review “College Algebra with Modeling and Visualization (6th Edition) – Solutions Manual” You must be logged in to post a review.
{"url":"https://textbooks.dad/product/college-algebra-with-modeling-and-visualization-6th-edition-solutions-manual/","timestamp":"2024-11-13T19:47:17Z","content_type":"text/html","content_length":"112365","record_id":"<urn:uuid:c9a6480e-13ba-44a9-8452-77e9d7debe32>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00856.warc.gz"}
Does Tarski Undefinability apply to HOL ? tim wood PL Olcott I imagine Tarski's indefinability theorem would. AFAIK second order logic already has diagonalization results - so it's either inconsistent or incomplete. You don't need to go above first for it. So long as you put enough arithmetic in, you're going to get the self referential bullshit that sets up these paradoxes. — fdrake Anything that can't be proved in one order of logical can be proved or refuted in the next. A formal system having every order of logic cannot be incomplete. A formal system having only one order of logic is like the "C" volume of an encyclopedia only having articles that begin with the letter "C". I imagine Tarski's indefinability theorem would. AFAIK second order logic already has diagonalization results - so it's either inconsistent or incomplete. You don't need to go above first for it. So long as you put enough arithmetic in, you're going to get the self referential bullshit that sets up these paradoxes. fdrake OptionsShare PL Olcott I cannot give a rigorous answer, but I agree with this. If Tarski's undefinability theorem is basically that "arithmetical truth cannot be defined in arithmetic", or that true Gödel numbers are not definable arithmetically, meaning there’s no first-order formula for this, I think it does go for higher order logics. For those higher order logics there is their own true but unprovable Gödel number. — ssu There is always an unprovable expression in one order of logic that is provable in the next order. When all orders of logic are included in the same formal system then such a system cannot be incomplete. I posted this in two forums because the more appropriate forum has hardly any views. You're off to the races into transfinite-order logics. If I understand the question of the title, it is equivalent to asking if Godel's incompleteness (theorem) is entirely resolved at some higher level of logic. My guess is not. — tim wood I cannot give a rigorous answer, but I agree with this. If Tarski's undefinability theorem is basically that "arithmetical truth cannot be defined in arithmetic", or that true Gödel numbers are not definable arithmetically, meaning there’s no first-order formula for this, I think it does go for higher order logics. For those higher order logics there is their own true but unprovable Gödel But I'm the amateur here, so don't quote me. (PS, why two threads?) ssu OptionsShare PL Olcott ↪PL Olcott Which is to say - just between us in case we're both wrong - that each system being itself deficient requires a successor system to fix it, but that simply creating a new deficiency. Ordinal arithmetic being formidable, I don't see an escape. — tim wood That a formal system can be defined with a single order of logic seems isomorphic to an encyclopedia defined with only the "C" volume. Of course such a system would be incomplete, yet it is a little silly to define systems or encyclopedias this way. tim wood Which is to say - just between us in case we're both wrong - that each system being itself deficient requires a successor system to fix it, but that simply creating a new deficiency. Ordinal arithmetic being formidable, I don't see an escape. PL Olcott Your question (again, if I understand it), is can there be a super-strong formal system that is not incomplete. I am guessing not. And I'm sure a rigorous discussion would be well, rigorous. — tim wood It seems that all of the formal systems that these two apply to only have a single order of logic. When we define a formal system that simultaneously has an unlimited number of orders of logic then there is always one more order of logic as needed. tim wood You shall have to keep it simple with me. From what I've read, all the incompleteness/undefinability theorems only apply to systems of "sufficient" power and interest. I find this online, "The theorem applies more generally to any sufficiently strong formal system, showing that truth in the standard model of the system cannot be defined within the system." Your question (again, if I understand it), is can there be a super-strong formal system that is not incomplete. I am guessing not. And I'm sure a rigorous discussion would be well, rigorous. PL Olcott And if nothing else, this is the clue, "next higher order...". It appears you want to get to the point where there is no higher order. And that would seem to lead to a set-of-all-sets type of contradiction. — tim wood It has always seemed to me that Tarski's Undefinability theorem fails when applied to a knowledge ontology inheritance hierarchy (KOIH). It has only occurred to me recently that KOIH is essentially type theory which is essentially HOL. https://en.wikipedia.org/wiki/Ontology_(information_science) tim wood You're off to the races into transfinite-order logics. If I understand the question of the title, it is equivalent to asking if Godel's incompleteness (theorem) is entirely resolved at some higher level of logic. My guess is not. Thus a single formal system have every order of logic giving every expression of language in this formal system its own Truth() predicate at the next higher order of logic. — PL Olcott And if nothing else, this is the clue, "next higher order...". It appears you want to get to the point where there is no higher order. And that would seem to lead to a set-of-all-sets type of Thus a single formal system have every order of logic giving every expression of language in this formal system its own Truth() predicate at the next higher order of logic. — PL Olcott I don't understand this. Lionino OptionsShare PL Olcott Higher-order logic is the union of first-, second-, third-, ..., nth-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply. https://en.wikipedia.org/wiki/Higher-order_logic All orders of logic in one formal system. There are many ways to further extend second-order logic. The most obvious is third, fourth, and so on order logic. The general principle, already recognized by Tarski (1933 [1956]), is that in higher order logic one can formalize the semantics—define truth—of lower order logic. "Simple type theory, also known as higher-order logic" The seven virtues of simple type theory https://www.sciencedirect.com/science/article/pii/S157086830700081X All orders of logic in one formal system. Thus a single formal system have every order of logic giving every expression of language in this formal system its own Truth() predicate at the next higher order of logic. When all orders of logic are included in the same formal system then such a system cannot be incomplete. — PL Olcott Um, no. I'm going to try to condense a proof due to Emil Post, in The Undecidable , Ed. Davis, 1965, pp. 304-337, esp. pp. 308-317 Let B be understood as a recursively enumerable set of numbers. Let B1, B2,...Bn, B(n+1)... be simply a listing of such sets with duplicates omitted and arranged perhaps by length and lexicographically for sets the same length. Let (Bn,n) be simply the proposition, true or false, that n is in the set Bn. By interlacing the Bns and the integers, e.g., B1,1; B2,1; B1,2; B1, 3; B2,2; B3,1, and so on, there are generated all the distinct couples Bn,n. Call this set E, of all the expressions Bn,n, understood in each case as the expression that n is in the set Bn. From E generate the set T of all Bn,n such that the number n is in Bn; i.e. true. The complement of T will be ~T, that is, all those Bn,n such that n is not in Bn. Let F be any recursively enumerable subset of ~T. That is, if Bn,n is in F, it is in ~T, and n is not in the set generated by Bn. Now it gets a little bit tricky. Now generate the members of F. If a member of F is of the form Bn,n, place n in a set of positive integers called S. S being recursively enumerable will correspond to some B, call it Bv such that S= Bv. Construct Bv,v; that is, the proposition that v is in Bv. And now I quote directly. "Now by construction, S consists of those members of F of the form Bn,n. Suppose that Bv,v is in F. Then,... Bv,v being false, v would not be in... Bv. That is, v would not be in S. But Bv,v being of the form Bn,n, v would be in S. Our assumption leading to a contradiction, it follows that Bv,v is not in F . But v can only be in S by Bv,v being in F. Hence, v is not in S . Finally, Bv,v says that v is in S.... Bv,v is therefore false; that is, Bv,v is in ~T. "...[ S]ince Bv,v of ~T is not in F, T and F together can never exhaust E.... We can then say that no recursively generated logic relative to E is complete , since F alone will lead to the Bv,v not in F." That is. given the logic determined by T and F, Bv,v must be undecidable. "[They, (the ideas here presented)] implicitly justify the generalization that every symbolic logic is incomplete...." (316) tim wood A formal system having only one order of logic is like the "C" volume of an encyclopedia only having articles that begin with the letter "C". — PL Olcott And a complete set would have everything from A to Z. But in our case, you can't have a complete set. PL Olcott A formal system having only one order of logic is like the "C" volume of an encyclopedia only having articles that begin with the letter "C". — PL Olcott And a complete set would have everything from A to Z. But in our case, you can't have a complete set. — tim wood ...14 Every epistemological antinomy can likewise be used for a similar undecidability proof... (Gödel 1931:43-44) Epistemological antinomies (AKA self-contradictory expressions) must be excluded from every formal system of logic because they are not truth bearers. I can't follow your other proof, yet it seems to follow the principle that every simple idea can be made convoluted enough that it can no longer be understood. Anything that can't be proved in one order of logical can be proved or refuted in the next. A formal system having every order of logic cannot be incomplete. A formal system having only one order of logic is like the "C" volume of an encyclopedia only having articles that begin with the letter "C". — PL Olcott I don't think this is true. Every theorem of 0th order logic has a corresponding theorem in 1st order logic. Like P=>Q goes to For all X P ( X ) implies Q ( X ) Every theorem of 1st order logic has a corresponding theorem in 2nd order logic. I'm fairly certain that generalises, but haven't come up with a proof sketch. i.e. nth order logic lets you express and prove all the things that are in (n-1)th order logic and more. Now consider that you're taking the set of all provable statements of all logics up to the nth order. That will then be the set of provable statements of the nth order logic, due to the hierarchy. You've stipulated that n>2, so your logic is strictly more expressive than 2nd order logic. Incompleteness results apply to 2nd order logic, since you can axiomatise the natural numbers with + and * in it. That's more than enough arithmetic for Tarski and Godel. So your big union of logics is one logic - of the highest order you designate. And so long as it contains 2nd order logic, you can derive incompleteness results for it. Moreover, I think you're claiming that you end up axiomatising the (n-1)th's logic's metalanguage in the nth logic's syntax? But when you end up having such a tower of logics and take their the union, it isn't quite that you'd be taking the union of their metalanguages as well, there'd need to be a single unifying metalanguage in which the formulae of all the levels could be expressed. The truth and provability symbols in the metalanguage would thus apply for theorems applying to the big union logic, rather than having a plethora of distinct symbols in different metalanguages - though concepts like 1st order derivable could maybe be phrased in that expansive metalanguage for the union of the logics. Similarly, there would need to be one type of object which would model all the formulas. I'd conjecture set theory would work for all of them. Reason being you can think of quantification of order n as ranging over a set which allows quantification over collections of sets (n-1) recursions deep. Like 0th order logic allows no quantification. 1st allows quantification within a set, 2nd allows.... quantification over sets. 3rd allows... quantification over sets of sets, which is kinda just quantification over sets. So it would surprise me if this giant logic wasn't a version of "set theory in disguise" (like second order logic was called by Quine), to which incompleteness results already apply. fdrake OptionsShare tim wood seems to follow the principle that every simple idea can be made convoluted enough that it can no longer be understood. — PL Olcott That's a convenient principle. Btw, how do you know when an idea is just that simple? PL Olcott seems to follow the principle that every simple idea can be made convoluted enough that it can no longer be understood. — PL Olcott That's a convenient principle. Btw, how do you know when an idea is just that simple? — tim wood When we envision the inherent structure of the set of all knowledge that can be expressed using language then we can see that "incompleteness" and "undecidability" are mere errors in disguise. PL Olcott Now consider that you're taking the set of all provable statements of all logics up to the nth order. That will then be the set of provable statements of the nth order logic, due to the hierarchy. — fdrake For every nth order logic that can be shown to be incomplete there is an n+1 order logic that completes it. PL Olcott there'd need to be a single unifying metalanguage in which the formulae of all the levels could be expressed. — fdrake Yes, definitely. The truth and provability symbols in the metalanguage would thus apply for theorems applying to the big union logic, rather than having a plethora of distinct symbols in different metalanguages — One metalanguage that can refer to one of more elements at any level within the type hierarchy. PL Olcott "[They, (the ideas here presented)] implicitly justify the generalization that every symbolic logic is incomplete...." (316) — tim wood It would seem that for every n order or logic there would necessarily be an n+1 order of logic provability predicate for this N order of logic. There are many ways to further extend second-order logic. The most obvious is third, fourth, and so on order logic. The general principle, already recognized by Tarski (1933 [1956]), is that in higher order logic one can formalize the semantics—define truth—of lower order logic. https://plato.stanford.edu/entries/logic-higher-order/ — Stanford tim wood What is your point? We suppose - that's the best I can do - that a proposition undecidable in L is decidable in L', and one in L' in L'', and so forth. But apparently there is no Lωω...ω that is itself complete. Higher-order logic is the union of first-, second-, third-, ..., nth-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply. — PL Olcott Do we need more than first and second order logic in practical uses? PL Olcott ↪PL Olcott What is your point? We suppose - that's the best I can do - that a proposition undecidable in L is decidable in L', and one in L' in L'', and so forth. But apparently there is no Lωω...ω that is itself complete. — tim wood A knowledge ontology inheritance hierarchy capable of formalizing the entire body of human knowledge that can be expressed using language need not be incomplete in the Gödel sense. Such a system would essentially be a type hierarchy from type theory thus isomorphic to HOL. PL Olcott Higher-order logic is the union of first-, second-, third-, ..., nth-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply. — PL Olcott Do we need more than first and second order logic in practical uses? — Corvus For formalizing the entire body of human knowledge that can be expressed using language we need this: By the theory of simple types I mean the doctrine which says that the objects of thought ... are divided into types, namely: individuals, properties of individuals, relations between individuals, properties of such relations, etc. https://en.wikipedia.org/wiki/History_of_type_theory#G%C3%B6del_1944 — History of Type Theory There seems to be a finite limit to the number of orders of logic needed. tim wood For formalizing the entire body of human knowledge that can be expressed using language we need this: — PL Olcott Let's try this. Suppose you have your "this," call it T. Now suppose you have some expression. Is it or its negation in T? If so, great! You're done. If not, then you have to figure out if it should be or not. And using existing knowledge, you cannot (if you could, it or its negation would already be in T). Best you can do is add it as axiomatic in some way, or fundamental. Now you have T'. Now a new expression, same business, and you get T'', and so on and so forth. You might argue, and imo quite reasonably, that in collections of data this doesn't really arise. But in math-logic it exactly does. That is, there is no terminal Tωω...ω in math-logic. And this concerns among other things differences between finite and infinite sets. . PL Olcott Let's try this. Suppose you have your "this," call it T. Now suppose you have some expression. Is it or its negation in T? If so, great! You're done. If not, then you have to figure out if it should be or not. And using existing knowledge, you cannot (if you could, it or its negation would already be in T). — tim wood There is a great debate about whether an expression of language can be true without a truth maker. Truthmaker Maximalism defended GONZALO RODRIGUEZ-PEREYRA A truth without a truthmaker is like a cake without a baker, True and unprovable is self-contradictory once one understands how true really works the way that I and Wittgenstein do. tim wood A knowledge ontology inheritance hierarchy capable of formalizing the entire body of human knowledge that can be expressed using language need not be incomplete in the Gödel sense. — PL Olcott This at least seems true. Mainly because such a listing lacks the power of systems that are incomplete in the Godel sense, and in fact have nothing to do with it. Your groceries list can stand for such a body of knowledge, and nothing incomplete about it. True and unprovable — PL Olcott Properly and correctly qualified as unprovable within the system out of which it arose, but proved true by other means. Btw, true is an adjective indicative of a quality that true statements have. Good luck with any attempt to comprehensively further define just what that quality exactly is. PL Olcott Btw, true is an adjective indicative of a quality that true statements have. Good luck with any attempt to comprehensively further define just what that quality exactly is. — tim wood Analytic TRUE is a constant property of some expressions of language defined in terms of its relation to other expressions of language. An expression of language is TRUE if and only if it is (a) stipulated to be true, or (b) semantically derived from expressions of language that are stipulated to be true. For formalizing the entire body of human knowledge that can be expressed using language we need this: — PL Olcott High-Order Logic seems to be more flexible and powerful for the real world cases due to its expanded variables availability for the properties and relations. There seems to be a finite limit to the number of orders of logic needed. — PL Olcott I was under impression that higher than 3rd-order logic would be for the multiple set theories and advanced calculus applications, therefore they wouldn't be used for describing the empirical world
{"url":"https://thephilosophyforum.com/discussion/comment/893297","timestamp":"2024-11-03T22:37:37Z","content_type":"text/html","content_length":"89818","record_id":"<urn:uuid:f3c65f70-f7c9-47c4-9d28-806a3f3d0a84>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00864.warc.gz"}
Correlation: Video, Causes, & Meaning | Osmosis Correlation is a statistical technique that shows whether two quantitative variables are related, and also how strongly they’re related. For example, let’s say you want to figure out if drinking more soda is correlated with having a higher body mass index or BMI. So, you ask 100 people how many sugary beverages they drink in a week and then check each person’s height and weight to calculate their BMI. You could plot these measurements or data points on a scatterplot, with the number of beverages on the x-axis and BMI on the y-axis, and where each data point represents one individual. Typically, a trendline is drawn to best represent the pattern of data points on the plot, with roughly half the points above the line and half the points below the line. Now, a positive correlation means that BMI increases as the number of beverages increases, and, if the two variables have a perfect positive correlation, then the trendline will pass through every single data point. Now imagine that there’s a negative correlation. That means that the BMI decreases as the number of beverages increases, and, with a perfect negative correlation, the trendline also passes through every data point. Finally, if there’s no correlation, then the data points will be randomly spread out all over the scatterplot, and the trendline will be flat with no positive or negative direction. To figure out how strongly two variables are correlated, we can use the Pearson’s correlation test, which is a parametric test that measures how close or spread out the data points are from the The Pearson’s correlation test calculates a correlation coefficient, which is a number that represents how well the two variables are correlated, and usually it’s written with a lowercase r. The correlation coefficient can range from negative 1 - which represents perfect negative correlation - to positive 1 - which represents perfect positive correlation. A correlation coefficient of 0 means that there’s no correlation between the two variables. Now, there are four key assumptions that are used in a Pearson’s correlation test, and an acronym for these assumptions is L-I-N-E, or LINE. First, the relationship between the two variables has to be Linear, which means that the trendline drawn to represent the data points is a straight line. Relationships that have a different type of curve, like in an exponential relationship or a U-shaped relationship, will have a low correlation coefficient because a straight trendline doesn’t match the shape of the data points. The second assumption is that each individual in the sample was recruited Independently from other individuals in the sample. In other words, no individuals influenced whether or not any other individual was included in the study. For example, if one person agrees to be in the study only if their friend can also be included in the study, then these two individuals would not be independent of each other and the second assumption would not be met.
{"url":"https://www.osmosis.org/learn/Correlation?from=/md/foundational-sciences/biostatistics-and-epidemiology/biostatistics/parametric-tests","timestamp":"2024-11-15T03:16:35Z","content_type":"text/html","content_length":"351695","record_id":"<urn:uuid:e53a8946-c0e7-4aeb-bf16-fed8a45a7a09>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00362.warc.gz"}
Bayesian Information Criterion BIC is Bayesian information criterion, it replaced the $+2k$ term in [[AIC]] Akaike Information Criterion Suppose we have a model that describes the data generation process behind a dataset. The distribution by the model is denoted as $\hat f$. The actual data generation process is described by a distribution $f$. We ask the question: How good is the approximation using $\hat f$? To be more precise, how much information is lost if we use our model dist $\hat f$ to substitute the actual data generation distribution $f$? AIC defines this information loss as $$ \mathrm{AIC} = - 2 \ln p(y|\ hat\theta) + … with $k\ln n$ to bring in punishment for the number of parameters of the model based on the number of data records, $$ \mathrm{BIC} = -2\ln p(y|\hat\theta) + k\ln n = \ln \left(\frac{n^k}{p^2}\right) $$ We prefer the model with a small BIC. Planted: by L Ma; L Ma (2020). 'Bayesian Information Criterion', Datumorphism, 11 April. Available at: https://datumorphism.leima.is/cards/statistics/bic/.
{"url":"https://datumorphism.leima.is/cards/statistics/bic/?ref=footer","timestamp":"2024-11-02T15:54:17Z","content_type":"text/html","content_length":"111675","record_id":"<urn:uuid:9b324a56-ab2f-4fa5-8cbf-6049efb23031>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00223.warc.gz"}