38 Comments

Is this genuine? Like is the unit circle really fucking you up that bad?

You leave my unit circle alone!

Ah, sohcahtoa. Not to be confused with

Thx, this is not helpful, but I will never be able to think of it any other way now

Sine of theta is the ratio of the lengths of the Opposite side over the Hypotenuse (SOH). Cosine of theta is Adjacent over Hypotenuse (CAH). Tangent of theta is Opposite over Adjacent (TOA).

Oh, you're saying opposite side and adjacent side in English. Anyway, how is the r=1 circle harmful for that?

How do you know which one is "opposite" and which is "adjacent"? They could literally be exactly the same, and in any case they both touch the hypotenuse?

Adjacent or opposite to the angle you're referring to.

I'm pretty dumb, so I don't understand this one.

Pythagorean theorem. Although, don't solve this or else Pythagoras might throw you off of a boat

Pretty sure this is the Pythagorean identity; the theorem is a²+b²=c²

Probably that trigonometry is the highest math the average person will take? I got to calculus 2 and proceeded to forget everything through algebra II once I took a break from math for a semester.

Believe this was featured in a paper that recently used trig to prove the Pythagorean theorem (previously thought to require a circular argument). I think some high schoolers cracked it as part of a mathematics challenge or something.

It was freely chosen for simplicity. If you choose another R, the other sides (x and y) become R*cos(th) and R*sin(th)

I don't understand what is harmful about the unit circle either. Any circle could have its radius technically be 1, as long as you set the units of measurement so that 1 equals the radius of the circle.

Because it's a unit circle. For the same reason that rulers start with 1, it would be utterly pointless to use anything else.

Your ruler starts with 1? How do you measure stuff between 0 and 1?

Those are unlabeled. Have you ever seen a ruler?

I am fascinated, can you show a picture

Zero is not unlabelled lol

Someone needs to turn this into loss

Took me a minute to notice, but it was worth it

Notice what??

You can use the spoiler tag

No spoiler

Notice what, exactly. I still don't get it.

Take a look again, it's still there

I only see the double arrows.

When you look at it for a minute, your phone will dim the screen and in the reflection you'll see the person who has harmed you.

I've seen it! The bastard!

SOH CAH TOA is just a trick to make rote memorization of procedure easier. Understanding the unit circle will let you understand what trigonometry is actually *doing.*

Yeah, dividing a circle into 360 parts, then subdividing those by 60, and further subdividing those by 60 makes so much more sense than just using a ratio of a number fundamental to circles themselves.
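The ratio definitions traded back and forth above can be checked numerically; here is a small sketch (not from the thread) using a 3-4-5 right triangle, and showing why the unit circle is just the R = 1 case:

```python
import math

# A 3-4-5 right triangle: for the angle theta opposite the side of length 3,
# opposite = 3, adjacent = 4, hypotenuse = 5.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
theta = math.atan2(opposite, adjacent)

# SOH, CAH, TOA as ratios
assert math.isclose(math.sin(theta), opposite / hypotenuse)   # 0.6
assert math.isclose(math.cos(theta), adjacent / hypotenuse)   # 0.8
assert math.isclose(math.tan(theta), opposite / adjacent)     # 0.75

# On a circle of radius R, the point at angle theta is (R*cos(theta), R*sin(theta));
# the unit circle is simply the R = 1 case.
R = 2.0
x, y = R * math.cos(theta), R * math.sin(theta)
assert math.isclose(x**2 + y**2, R**2)
```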
A quick example on using next day open-to-open returns for Tactical Asset Allocation

[This article was first published on R – QuantStrat TradeR, and kindly contributed to R-bloggers.]

First off, for the hiring managers out there: after about a one-year contracting role at Bank of America doing some analytical reporting coding for them in Python, I am on the job market. Feel free to find my LinkedIn here.

This post will cover how to make tactical asset allocation strategies a bit more realistic with regards to execution. That is, by using next-day open-to-open returns rather than observe-the-close-get-the-close, it's possible to see how much this change affects a strategy, and potentially it's something that a site like AllocateSmartly could implement to actively display in their simulations (e.g., a checkbox that says "display next-day open-to-open returns").

Now, onto the idea of the post: generally, when doing tactical asset allocation backtests, we tend to just use one set of daily returns. Get the adjusted close data, and at the end of every month, allocate to selected holdings. That is, most TAA backtests we've seen generally operate under the assumption of: "Run the program at 3:50 PM EST on the last day of the month, scrape the prices for the assets, allocate assets in the next ten minutes." This can generally work for small accounts, but for institutions interested in scaling these strategies, maybe not so much.

So, the trick here, first, in English, is this: compute open-to-open returns, then lag them by negative one.
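The lag-by-negative-one idea can be sketched outside of R as well. Here is a minimal pandas version (hypothetical prices and dates, chosen only for illustration; the original post does all of this in R):

```python
import pandas as pd

# Hypothetical open prices for one asset.
opens = pd.Series(
    [100.0, 102.0, 101.0, 103.0, 104.0],
    index=pd.date_range("2021-06-28", periods=5, freq="B"),
)

# Ordinary open-to-open returns: r_t = O_t / O_{t-1} - 1.
oo_rets = opens.pct_change()

# Lag by -1: the return stamped on day t is now the one earned from day t's
# open to day (t+1)'s open, i.e. what you would realize by entering on the
# next open after a close-of-day signal.
next_day_oo = oo_rets.shift(-1)

# After computing portfolio returns on these, shift by +1 to put each return
# back on the date it was actually realized.
realigned = next_day_oo.shift(1)
```

The shift(-1)/shift(+1) pair is the pandas analogue of the two `lag(..., -1)` and `lag(...)` calls discussed below in the R code.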
While this may sound really spooky at first, because a lag of negative one implies the potential for lookahead bias, in this case it's carefully done to use future returns. That is, the statement in English is (for example): "given my weights for data at the close of June 30, what would my open-to-open returns have been if I entered on the open of July 1st, instead of the June 30 close?". Of course, this sets the return dates off by one day, so you'll have to lag the total portfolio returns by a positive one to readjust for it.

This is how it works in code, using my KDA asset allocation algorithm:

# required libraries
require(quantmod)
require(PerformanceAnalytics)
require(quadprog)

# compute strategy statistics
stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}

# symbols
symbols <- c("SPY", "VGK", "EWJ", "EEM", "VNQ", "RWX", "IEF", "TLT", "DBC", "GLD", "VWO", "BND")

# get data
rets <- list()
prices <- list()
op_rets <- list()
for(i in 1:length(symbols)) {
  tmp <- getSymbols(symbols[i], from = '1990-01-01', auto.assign = FALSE, use.adjusted = TRUE)
  price <- Ad(tmp)
  returns <- Return.calculate(Ad(tmp))
  op_ret <- Return.calculate(Op(tmp))
  colnames(returns) <- colnames(price) <- colnames(op_ret) <- symbols[i]
  prices[[i]] <- price
  rets[[i]] <- returns
  op_rets[[i]] <- op_ret
}
rets <- na.omit(do.call(cbind, rets))
prices <- na.omit(do.call(cbind, prices))
op_rets <- na.omit(do.call(cbind, op_rets))

# algorithm
KDA <- function(rets, offset = 0, leverageFactor = 1, momWeights = c(12, 4, 2, 1),
                op_rets = NULL, use_op_rets = FALSE) {

  # get monthly endpoints, allow for offsetting ala AllocateSmartly/Newfound Research
  ep <- endpoints(rets) + offset
  ep[ep < 1] <- 1
  ep[ep > nrow(rets)] <- nrow(rets)
  ep <- unique(ep)
  epDiff <- diff(ep)
  if(last(epDiff)==1) { # if the last period only has one observation, remove
it
    ep <- ep[-length(ep)]
  }

  # initialize vector holding zeroes for assets
  emptyVec <- data.frame(t(rep(0, 10)))
  colnames(emptyVec) <- symbols[1:10]

  allWts <- list()

  # we will use the 13612F filter
  for(i in 1:(length(ep)-12)) {

    # 12 assets for returns -- 2 of which are our crash protection assets
    retSubset <- rets[c((ep[i]+1):ep[(i+12)]),]
    epSub <- ep[i:(i+12)]
    sixMonths <- rets[(epSub[7]+1):epSub[13],]
    threeMonths <- rets[(epSub[10]+1):epSub[13],]
    oneMonth <- rets[(epSub[12]+1):epSub[13],]

    # compute 13612 fast momentum
    moms <- Return.cumulative(oneMonth) * momWeights[1] +
      Return.cumulative(threeMonths) * momWeights[2] +
      Return.cumulative(sixMonths) * momWeights[3] +
      Return.cumulative(retSubset) * momWeights[4]
    assetMoms <- moms[,1:10] # Adaptive Asset Allocation investable universe
    cpMoms <- moms[,11:12] # VWO and BND from Defensive Asset Allocation

    # find qualifying assets
    highRankAssets <- rank(assetMoms) >= 6 # top 5 assets
    posReturnAssets <- assetMoms > 0 # positive momentum assets
    selectedAssets <- highRankAssets & posReturnAssets # intersection of the above

    # perform mean-variance/quadratic optimization
    investedAssets <- emptyVec
    if(sum(selectedAssets)==0) {
      investedAssets <- emptyVec
    } else if(sum(selectedAssets)==1) {
      investedAssets <- emptyVec + selectedAssets
    } else {
      idx <- which(selectedAssets)

      # use 1-3-6-12 fast correlation average to match with momentum filter
      cors <- (cor(oneMonth[,idx]) * momWeights[1] +
        cor(threeMonths[,idx]) * momWeights[2] +
        cor(sixMonths[,idx]) * momWeights[3] +
        cor(retSubset[,idx]) * momWeights[4])/sum(momWeights)
      vols <- StdDev(oneMonth[,idx]) # use last month of data for volatility computation from AAA
      covs <- t(vols) %*% vols * cors

      # do standard min vol optimization
      minVolRets <- t(matrix(rep(1, sum(selectedAssets))))
      n.col = ncol(covs)
      zero.mat <- array(0, dim = c(n.col, 1))
      one.zero.diagonal.a <- cbind(1, diag(n.col), 1 * diag(n.col), -1 * diag(n.col))
      min.wgt <- rep(.05, n.col)
      max.wgt <- rep(1, n.col)
      bvec.1.vector.a <-
c(1, rep(0, n.col), min.wgt, -max.wgt)
      meq.1 <- 1
      mv.port.noshort.a <- solve.QP(Dmat = covs, dvec = zero.mat, Amat = one.zero.diagonal.a,
                                    bvec = bvec.1.vector.a, meq = meq.1)
      min_vol_wt <- mv.port.noshort.a$solution
      names(min_vol_wt) <- rownames(covs)
      #minVolWt <- portfolio.optim(x=minVolRets, covmat = covs)$pw
      #names(minVolWt) <- colnames(covs)
      investedAssets <- emptyVec
      investedAssets[,selectedAssets] <- min_vol_wt
    }

    # crash protection -- between aggressive allocation and crash protection allocation
    pctAggressive <- mean(cpMoms > 0)
    investedAssets <- investedAssets * pctAggressive
    pctCp <- 1-pctAggressive

    # if IEF momentum is positive, invest all crash protection allocation into it
    # otherwise stay in cash for crash allocation
    if(assetMoms["IEF"] > 0) {
      investedAssets["IEF"] <- investedAssets["IEF"] + pctCp
    }

    # leverage portfolio if desired in cases when both risk indicator assets have positive momentum
    if(pctAggressive == 1) {
      investedAssets = investedAssets * leverageFactor
    }

    # append to list of monthly allocations
    wts <- xts(investedAssets, order.by=last(index(retSubset)))
    allWts[[i]] <- wts
  }

  # put all weights together and compute cash allocation
  allWts <- do.call(rbind, allWts)
  allWts$CASH <- 1-rowSums(allWts)

  # add cash returns to universe of investments
  investedRets <- rets[,1:10]
  investedRets$CASH <- 0

  # compute portfolio returns
  out <- Return.portfolio(R = investedRets, weights = allWts)
  if(use_op_rets) {
    if(is.null(op_rets)) {
      stop("You didn't provide open returns.")
    } else {
      # cbind a cash return of 0 -- may not be necessary in current iterations of PerfA
      investedRets <- cbind(lag(op_rets[,1:10], -1), 0)
      out <- lag(Return.portfolio(R = investedRets, weights = allWts))
    }
  }
  return(list(allWts, out))
}

Essentially, the salient part of the code is at the start, around line 32, when the algorithm gets the data from Yahoo, in that it creates a new set of returns using open adjusted data, and at the end, at around line 152, when the code lags the open returns by -1, i.e.
lag(op_rets[,1:10], -1), and then lags the portfolio returns again to realign the correct dates: out <- lag(Return.portfolio(R = investedRets, weights = allWts)).

And here is the code for the results:

KDA_100 <- KDA(rets, leverageFactor = 1)
KDA_100_open <- KDA(rets, leverageFactor = 1, op_rets = op_rets, use_op_rets = TRUE)
compare <- na.omit(cbind(KDA_100[[2]], KDA_100_open[[2]]))
colnames(compare) <- c("Obs_Close_Buy_Close", "Buy_Open_Next_Day")

With the following results:

> stratStats(compare)
                          Obs_Close_Buy_Close Buy_Open_Next_Day
Annualized Return                   0.1069000        0.08610000
Annualized Std Dev                  0.0939000        0.09130000
Annualized Sharpe (Rf=0%)           1.1389000        0.94300000
Worst Drawdown                      0.0830598        0.09694208
Calmar Ratio                        1.2870245        0.88815920
Ulcer Performance Index             3.8222753        2.41802066

As one can see, there's definitely a bit of performance deterioration, to the tune of about 2% per year. While the strategy may still be solid, a loss of about 20% of the CAGR means that the other risk/reward statistics suffer proportionally as well. In other words, this is a strategy that is fairly sensitive to the exact execution due to its fairly high turnover. However, the good news is that there is a way to considerably reduce turnover, as suggested by AllocateSmartly, which would be to reduce the impact of relative momentum on turnover. A new post on how to do *that* will be forthcoming in the near future.

One last thing: as I did pick up some Python skills, as evidenced by the way I ported the endpoints function into Python, and the fact that I completed an entire Python data science bootcamp in four months instead of six, I am also trying to port over the Return.portfolio function into Python as well, since that would allow a good way to compute turnover statistics as well.
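The size of the deterioration quoted above can be verified directly from the two annualized-return figures in the table:

```python
# CAGR with close-to-close vs. next-day open-to-open execution (from the table above)
cagr_close = 0.1069
cagr_open = 0.0861

annual_drag = cagr_close - cagr_open       # absolute performance lost per year (~2%)
relative_loss = annual_drag / cagr_close   # fraction of the original CAGR given up (~20%)
```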
However, as far as I've seen with Python, I'm not sure there's a well-maintained library similar to PerformanceAnalytics, and I do not know of libraries in Python that compare with rugarch, PortfolioAnalytics, and quantstrat; if anyone wants to share the go-to, generally-accepted Python libraries beyond the usual numpy/pandas/matplotlib/cvxpy/sklearn (AKA the usual data science stack), please do. Thanks for reading.

Lastly, some other news: late last year, I was contacted by the NinjaTrader folks to potentially become a vendor/give lectures on the NinjaTrader site about how to create quantitative/systematic strategies. While I'm not a C# coder, I can potentially give lectures on how to use R (and maybe Python in the future) to implement them.

Finally, if you wish to get in touch with me, my email is [email protected], and I can be found on my LinkedIn. Additionally, if you wish to subscribe to my volatility strategy, which I've been successfully trading since 2017, feel free to check it out here.
Kelly Pipe Calculator Online - CalculatorsHub

Calculators play an instrumental role in various industries, making complex calculations more manageable. One such specialized tool, a vital asset to the oil and gas industry, is the Kelly Pipe Calculator.

The Kelly Pipe Calculator is a numerical tool that calculates the weight of a pipe based on its dimensions and material density. This calculator helps engineers and field workers estimate the weight of pipes to maintain safety and efficiency in operations.

Understanding the Kelly Pipe Calculator

The Kelly Pipe Calculator operates on an easy-to-understand principle. Users input key data points, including the pipe's outer diameter, wall thickness, length, and material density. After inputting these values, the calculator processes this information to provide a weight estimate for the pipe.

Calculation Formula and Variables

The calculation formula for the Kelly Pipe Calculator is as follows:

Weight = (Outer Diameter – Wall Thickness) × Wall Thickness × Length × Density

Each of the variables plays a specific role. The Outer Diameter and Wall Thickness provide the physical dimensions of the pipe, the Length gives the total measurement of the pipe, and the Density gives the weight of the pipe material per cubic meter.

Practical Example of Using Kelly Pipe Calculator

Let's take an example: if we have a pipe with an outer diameter of 2 m, a wall thickness of 0.1 m, a length of 10 m, and a material density of 7850 kg/m³, we can input these values into the Kelly Pipe Calculator to find the weight of the pipe.

Applications of Kelly Pipe Calculator

• In the Oil and Gas Industry: The Kelly Pipe Calculator is primarily used in the oil and gas industry to calculate pipe weight. Accurate weight measurements are crucial for safe operations.
• In the Construction Industry: This tool can also aid the construction industry, where large pipes are frequently used.
• In Water Management: Additionally, it is beneficial for water management systems, helping ensure pipes can withstand water flow.

Frequently Asked Questions (FAQs)

What is the primary use of the Kelly Pipe Calculator? The Kelly Pipe Calculator is predominantly used to calculate the weight of pipes based on their dimensions and material density. This helps to ensure safety and efficiency in operations.

In which industries is the Kelly Pipe Calculator most beneficial? The Kelly Pipe Calculator is particularly beneficial in industries that handle heavy pipe material, such as the oil and gas industry, the construction industry, and water management systems.

The Kelly Pipe Calculator, categorized under the Engineering Calculator tools, has proven to be a useful tool in various industries. This calculator takes specific data inputs and outputs weight estimates. As industries grow and calculations become more complex, the need for specialized tools like the Kelly Pipe Calculator will continue to rise.
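The stated formula and worked example can be sketched in Python. Note a caveat: the article's formula omits the factor of π that appears in the standard hollow-cylinder weight approximation, π × (OD − t) × t × L × ρ. Both versions are shown below for comparison; the function names are our own, not part of the calculator.

```python
import math

def pipe_weight_article(od_m, wall_m, length_m, density_kg_m3):
    """Weight per the article's stated formula (note: no factor of pi)."""
    return (od_m - wall_m) * wall_m * length_m * density_kg_m3

def pipe_weight_standard(od_m, wall_m, length_m, density_kg_m3):
    """Conventional hollow-cylinder approximation: pi * (OD - t) * t * L * rho."""
    return math.pi * (od_m - wall_m) * wall_m * length_m * density_kg_m3

# The article's worked example: OD = 2 m, wall = 0.1 m, length = 10 m, steel at 7850 kg/m^3
w_article = pipe_weight_article(2.0, 0.1, 10.0, 7850.0)    # 14915 kg per the stated formula
w_standard = pipe_weight_standard(2.0, 0.1, 10.0, 7850.0)  # roughly pi times larger
```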
Math scores for 2nd generation immigrants in Norway by country of origin

Obscure report yields few surprises

A reader sent me this interesting report that I had previously somehow not heard of:

The paper studies how the cultural orientation of immigrant parents affects the school performance of their children. We use detailed register data on native and immigrant parents, featuring students born in Norway with one or two immigrant parents. The cultural values of immigrant parents are measured by cohort- and education-level specific survey responses in the World Value Survey and the European Value Survey, which are merged with individual-level data. The paper estimates effects on several indicators of school performance, controlling for detailed measures of the human capital of parents, school fixed-effects and relevant country-of-origin characteristics. We also analyze students' schooling progression and control for initial performance. Finally, we exploit differences in cultural exposure as consequence of immigrant students having a native and an immigrant parent vs. two immigrant parents from the same country of origin, which allows us to include country-of-origin fixed effects. The results show that an independence-oriented parenting style yields better educational performance than parenting that values children's obedience. A lenient child-rearing practice reduces student effort and weakens educational outcomes, which accounts for the modest schooling performance of Scandinavian students.

I can see that this report was first seen online in 2015 on Semantic Scholar, but appears still not to be formally published. Maybe it never will be, then. Many economist papers live a life of perpetual limbo.

The data are great, as usual with Nordic register studies:

Schooling is fully financed by the government, and enrollments in private schools are marginal.
We have access to individual-level register data from Statistics Norway covering the entire student population for the 2007–2015 period. These data yield a direct and precise identification of native and immigrant parents and their country of origin. Immigrants to Norway are a self-selected group from the country of origin. We adjust for self-selection using individual-level data in the World Value Survey (WVS) and the European Value Survey. We identify the country-specific cultural values of sub-populations defined by cohort and education levels and we merge these cultural indicators with corresponding individual-level data on immigrant parents. The data allows us to analyze student tests scores, exam results and learning progression in primary and upper-secondary school as well as choice of specializations at the secondary level. We present regression models separately for students with one and two immigrant parents, and include control for immigration reasons, parental education levels, labor-market participation, household income levels, school fixed-effects and country-of-origin characteristics. These statistics cover about 100,000 observations where students have one immigrant parent (and one native parent) originating from 101 countries, and about 59,000 observations where students have two immigrant parents originating from 97 countries. The students with one immigrant parent attend 2,961 schools, while those with two immigrant parents attend 2,086 schools. The data provides information on tests conducted when students were in fifth and eighth grade (in primary and lower secondary schools), exam results from the 10th grade, and whether they were enrolled in an academic track at the higher secondary level and the extent to which they chose theoretical math at this level. 
The authors adopt a typical culturalist model about rearing methods (yawn!), which strains the imagination: The baseline epidemiological model assumes that people coming from different countries differ in cultural orientation only, conditional on the controls included in the regression models. This approach might produce biased estimates of causal effects as consequence of unmeasured country-of-origin features affecting both parental values and school performance. One way of accounting for this is the value-added model, where we estimate cultural effects, controlling for students' initial test scores. Another strategy exploits that children with identical ancestry have been exposed to different doses of country-of-origin culture. Cultural effects are diluted for students who have received a larger infusion of Norwegian culture. We estimate models for students with two immigrant parents coming from the same country vs. students with one immigrant and one native parent, which allows us to include country-of-origin fixed effects. Naturally, the report has no mentions of intelligence, national IQs, Richard Lynn, Heiner Rindermann, and of course not genetics or heritability. Though they do allow that "parents coming from different cultures differ with respect to relevant human capital indicators", they are thinking of education here. Anyway, since their data are so good, let's look at the primary results: A quick skim of the results suggests they look quite normal, with African origins at the bottom and Asians at the top. There are some selection effects; Sri Lanka is among the top scorers, though the country has average intelligence of about 80 (87 in Becker 1.3.3, 79 in Lynn 2012, 78 in Rindermann 2018, respectively). The results for mixed offspring seem a lot more noisy, probably because the effect sizes are halved.
Authors note: "The two estimates correlate positively; the bivariate correlation coefficients are r = 0.58 (unweighted) and r = 0.74 (using number of students as weights).". What gaps do we expect to see for the mixed children, in general? It depends on patterns of assortative mating and specific origins, and so the expected gap of halfway between Norwegian and origin-country IQs for these matches is not exactly right. If for whatever reason it is associated with intelligence to marry a Nigerian in Norway (social signaling perhaps), then offspring of Nigerian-Norwegian couples will score somewhat higher than expected from the country IQs. Overall, the gaps are not that big. The African origins have gaps of about 0.75 d at most. This is based on the national-level SD, so these are a bit too small, since the SD is inflated by the group differences; but removing this issue would probably only increase them to about 0.80 d. Overall, the mean for African countries seems to be about 0.50 SD. Now, we know the national IQ gap is more like 30 IQ (100ish vs. 70ish). We expect something like 65% of this gap to be genetic. So assuming all that in a simple model, the gap for Africans should be about 20 IQ, or 1.33 d. So what gives? The answer lies exactly in their caption: The models include controls for mothers' and fathers' education levels, wage income, reason for immigration (refugee/work-related), student gender, number of siblings and parity. This is of course the sociologist fallacy (see here for more details, for those not familiar with this idea). The problem is that adjusting for these factors also adjusts for genetic gaps, if such exist. Thus, gaps are expected to be smaller after such adjustment both in the case of no genetic causation and in the case of genetic causation, but results like this are very, very typically and erroneously interpreted as supporting an environmentalist model prediction over a genetic one.
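The back-of-the-envelope expectation in the paragraph above works out as follows, on the usual IQ scale with SD = 15:

```python
# Expected African-origin gap under the simple partly-genetic model from the text
national_iq_gap = 30    # ~100 (Norway) vs ~70 (African origins), in IQ points
genetic_share = 0.65    # assumed fraction of the gap that is genetic

expected_gap_iq = national_iq_gap * genetic_share  # 19.5, "about 20 IQ"
expected_gap_d = expected_gap_iq / 15              # ~1.3 d (1.33 d after rounding to 20 IQ)
```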
The authors do compare these country effects (fixed effects from a model that adjusts for all that other stuff mentioned above) to indicators of national intelligence, though not in those words: If immigrants to Norway have cultural values in line with the population in the country of origin, we would expect the country-level estimates to correlate positively with test scores obtained by students living in the homeland (cf. Levels et al. 2008). In Appendix Figure A3, we display a plot where the country-level estimates (i.e., a complete set of estimates) are measured on the horizontal axis, while the vertical axis measures the test scores in math obtained in the TIMSS 2011 and the PISA 2012 studies. The bubble sizes are proportional to the number of immigrant students used to estimate the baseline regression model. The plot indicates a positive relationship between the international test scores and the estimates obtained in the Norwegian national tests. A regression with PISA and TIMSS scores as response variables indicates an R-square test statistic of 0.55 and 0.45 respectively for students with one immigrant parent, 0.49 and 0.26 respectively for students with two immigrant parents. This indicates a high degree of external validity in the cross-national pattern observed in Figure 2. Since they only use a single year of data, this method is suboptimal. It is better to take the national IQs from Rindermann (mainly based on scholastic tests) and compare them. The sample sizes are given in the appendix, and the effect sizes I have extracted from the figures. You can find the Norwegian data here. So let's look at the results. OK, so it looks pretty normal. Both generations supply values that correlate approximately equally well with national IQs.
In a simple world, we would expect this correlation to be near 1.00, but in the real world, one has to deal with sampling error, non-random mating, non-random immigration selection across countries, and unclear effects of controlling for education between groups (a PhD in Ghana does not mean the same as a PhD in Germany, but that's what the model assumes). Anyway, so we are reasonably happy with r = .60ish. (The authors report much higher R²s, 26% to 55%; ours are about 36%. Who can say exactly what they did? I guess they reported the wrong numbers.) We can also combine the two sets of estimates to gain a bit more precision, which should improve the correlation with national intelligence a bit: Indeed, the correlation increased by about .08 (not a significant change, no weights used here). To their credit, the authors try to poke at causation by looking at Asian adoptees (why not others? Probably the results were unflattering and best left out): The school performance of adopted children If parent culture has a causal effect on children's school performance, we should see no similar effects when native parents adopt children from other countries. Students from Korea and China do exceptionally well in international tests as well as in the Norwegian context. We would not expect to see similar effects for children adopted from these countries. We analyze the school results of adoptees from these countries, and compare them with results for students with immigrant parents. Native students are used as the reference group. Dummy variables capture the "Korean" and "Chinese" effects: the adoptees (N = 1823, Korea; N = 2993, China), and students with one or two immigrant parents from Korea (N = 724) and China (N = 1550). The regression specification is otherwise similar to the model in Figure 2. Appendix Table B7 shows the results. Students born in Norway with one Chinese or Korean immigrant parent do significantly better than native students do.
The two-parent effects are larger than the effects for students with one immigrant parent from China. The estimates for students with two immigrant parents from Korea are negative, while we should expect a positive coefficient. However, there are only 25 students in this group, so we must be careful with the interpretations. The estimate for the adoptees is negative, but much smaller (in absolute value) than the effects for students with two immigrant parents. This lends additional confidence to the interpretation that parents' cultural backgrounds account for the estimates presented in Figure 2. The results are peculiar. The authors don't have any explanation. I mean, the simple answer is that it is likely chance, since n = 25 for that group. The model output says the standard errors are tiny (in parens), but the authors tell us the sample size is 25, so something is wrong with their model output. From a genetic perspective, the results are not particularly surprising. The children of Asian immigrants do somewhat better than the Norwegian average. The adoptees do about the same as the Norwegians. Adoptees are not selected for intelligence for migration, and are probably somewhat anti-selected, but Asian immigrants are selected. The authors state their conclusion bluntly: Immigrant students to Norway display substantial differences in mathematics achievements when they are classified by country of ancestry. These differences persist after employing extensive controls for family background, including several indicators measuring parents' human capital. These country-of-origin differences go in the same direction for students with one and two immigrant parents, the latter usually being larger. We also see that the country-of-origin disparities correlate positively with national test scores as observed in PISA and TIMSS. Then they finish with some more culturalist explanations we don't care about.
All in all, a cool study that confirms the usual cognitive gaps even with extensive sociologist-fallacy controls in place. It can no longer be claimed that group gaps are just due to "mothers' and fathers' education levels, wage income, reason for immigration (refugee/work-related), student gender, number of siblings and parity", as these were controlled. No surprise at all if you're aware of genetics.
Rust Interview Puzzles: Find the lowest index of a repeating element in a vector

In a provided vector vec we need to find the minimum index of a repeating element.

Example 1: Input: vec = [6, 7, 4, 5, 4, 7, 5] Output: 1 Explanation: There are several repeating elements in the vector, but the element 7 has the lowest index, 1.

Example 2: Input: vec = [1, 2, 5, 3, 4, 7, 3, 5, 8, 9] Output: 2 Explanation: There are several repeating elements in the vector, but the element 5 has the lowest index, 2.

Example 3: Input: vec = [1, 2, 3, 4, 5, 6] Output: None Explanation: There are no repeating elements in the vector.

use std::collections::HashSet;

fn find_min_index_of_repeating_element_in_vector(vec: &Vec<i32>) -> Option<usize> {
    let mut set = HashSet::new();
    (0..vec.len())
        .rev()
        .flat_map(|i| {
            let found = if set.contains(&vec[i]) { Some(i) } else { None };
            set.insert(vec[i]);
            found
        })
        .last()
}

To solve the puzzle in linear time we can use an auxiliary data structure: a HashSet. We go backwards over the reversed iterator of the vector indices i. When we encounter an element vec[i] that is already contained in the set (which means the element is repeating), we wrap the index i of that element in Some; otherwise we return None for that iteration. Also, on each iteration we insert the current element vec[i] into the set. And since we use the flat_map adapter, it automatically unwraps found indices from Some and discards None results. All transformations up to last are lazy and do not trigger any traversal, but the last call consumes the resulting iterator and traverses it until the last element of the resulting sequence is obtained. Because we walk the indices in reverse, that last element is the lowest index of a repeating element, wrapped in Some, or None if there are no repeating elements. The time complexity of this solution is O(n) (n is the size of the input vector). The auxiliary space complexity is O(n) because we used the additional HashSet data structure to solve the puzzle.
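The same reverse-scan idea can be expressed in a few lines of Python for comparison (this sketch is not part of the original post; it mirrors the Rust logic with an explicit loop instead of iterator adapters):

```python
def find_min_index_of_repeating_element(vec):
    """Reverse scan with a set: the last hit found while walking backwards
    is the lowest index of a repeating element."""
    seen = set()
    result = None
    for i in range(len(vec) - 1, -1, -1):  # iterate indices in reverse
        if vec[i] in seen:
            result = i                     # keep overwriting; the final value is the lowest index
        seen.add(vec[i])
    return result

print(find_min_index_of_repeating_element([6, 7, 4, 5, 4, 7, 5]))           # 1
print(find_min_index_of_repeating_element([1, 2, 5, 3, 4, 7, 3, 5, 8, 9]))  # 2
print(find_min_index_of_repeating_element([1, 2, 3, 4, 5, 6]))              # None
```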
{"url":"https://go4fun.fun/rust-interview-puzzles-find-the-lowest-index-of-a-repeating-element-in-a-vector","timestamp":"2024-11-07T16:35:15Z","content_type":"text/html","content_length":"100927","record_id":"<urn:uuid:aa5e0d77-b986-42bc-b877-08bb39614103>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00690.warc.gz"}
Skills Assessments are available for the following trades. Each subject area (Communications Skills, Concepts of Science, and Math) is offered as a separate assessment.

Auto Body and Collision Damage Repairer: Communications Skills, Concepts of Science, Math
Baker and Cook: Communications Skills, Math
Brick and Stone Restoration Mason: Communications Skills, Concepts of Science, Math
Child and Youth Worker: Communications Skills
Drywall Installer: Communications Skills, Math
Electrician - Construction, Maintenance & Industrial: Communications Skills, Concepts of Science, Math
Facilities Maintenance Mechanic: Communications Skills, Concepts of Science, Math
General Carpenter: Communications Skills, Concepts of Science, Math
General Machinist: Communications Skills, Concepts of Science, Math
Hairstylist: Communications Skills, Concepts of Science, Math
Heavy Duty Equipment Mechanic: Communications Skills, Concepts of Science, Math
Horticulturalist: Communications Skills, Concepts of Science, Math
Marine and Small Powered Equipment Mechanic: Communications Skills, Concepts of Science, Math
Millwright: Concepts of Science, Math
Mobile Crane Operator: Concepts of Science, Math
Motive Power Equipment Mechanic: Concepts of Science, Math
Motive Power Partsperson: Communications Skills, Concepts of Science, Math
Motorcycle Mechanic: Communications Skills, Concepts of Science, Math
Plumber: Communications Skills, Concepts of Science, Math
Precision Machining and Tooling: Communications Skills, Concepts of Science, Math
Refrigeration and Air Conditioning Mechanic: Communications Skills, Concepts of Science, Math
Sheet Metal Worker: Communications Skills, Concepts of Science, Math
Sprinkler and Fire Protection Technician: Communications Skills, Concepts of Science, Math
Tower Crane Operator: Concepts of Science, Math
Truck and Coach Technician: Communications Skills, Concepts of Science, Math
{"url":"https://readyfortrades.ca/","timestamp":"2024-11-15T04:45:16Z","content_type":"text/html","content_length":"10528","record_id":"<urn:uuid:2d682cd7-c93c-4047-ac2e-d272a64420c2>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00268.warc.gz"}
Fundamental Properties of the Eulerian Granum

Mathematical analysis has several important connections with graph theory. Although they may initially seem like two separate branches of mathematics, they are related in several respects; for instance, graphs are mathematical objects that can be analyzed using concepts from mathematical analysis. In graph theory one often studies distance, connectivity, and paths within a graph, and these can be further analyzed with analytic tools, such as on the structure of the natural numbers. Literature studies in graph theory, especially on Eulerian graphs, are interesting to explore. An Eulerian path in a graph G is a path that includes every edge of G exactly once; an Eulerian path is called closed if it starts and ends at the same vertex. The concept of granum theory, as a generalization of undirected graphs on number structures, provides a rigorous approach to graph theory and demonstrates some fundamental properties of the undirected-graph generalization. The focus of this study is to introduce the connectivity properties of the Eulerian granum. The granum G(e, M) is called connected if for every u, v ∈ M with u ≠ v there exists a path subgranum G′(e, M′) ⊆ G(e, M) with u, v ∈ M′, and it is called an Eulerian granum if there exists a surjective mapping φ: [‖E(G(e, M))‖ + 1] → M such that e(φ(n), φ(n + 1)) = 1 for every n ∈ [‖E(G(e, M))‖]. This property provides a deeper understanding of the structure and characteristics of the Eulerian granum, which has not been fully understood until now.

Natural Numbers; Graph Theory; Granum
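Setting the granum formalism aside, the classical criterion for ordinary undirected multigraphs may help fix intuition: a connected multigraph has a trail covering every edge exactly once if and only if it has zero odd-degree vertices (a closed Eulerian trail) or exactly two (an open one). A small illustrative sketch; the function name is mine, not from the paper:

```python
from collections import Counter

def eulerian_trail_kind(edges):
    """Return 'closed', 'open', or None for a connected multigraph
    given as a list of undirected edges (u, v)."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    if odd == 0:
        return "closed"   # Eulerian circuit: trail starts and ends at the same vertex
    if odd == 2:
        return "open"     # Eulerian path with distinct endpoints
    return None           # no trail covers every edge exactly once

# A triangle has all even degrees -> closed Eulerian trail.
print(eulerian_trail_kind([(1, 2), (2, 3), (3, 1)]))   # closed
# The path graph 1-2-3 has two odd-degree vertices -> open trail.
print(eulerian_trail_kind([(1, 2), (2, 3)]))           # open
```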
Limits: Journal of Mathematics and its Applications by Pusat Publikasi Ilmiah LPPM Institut Teknologi Sepuluh Nopember is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license, based on a work at https://iptek.its.ac.id/index.php/limits.
{"url":"https://iptek.its.ac.id/index.php/limits/article/view/20164","timestamp":"2024-11-04T15:34:58Z","content_type":"text/html","content_length":"32955","record_id":"<urn:uuid:df091f88-9c65-4c31-a644-354b397c2bd6>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00407.warc.gz"}
Law of Large Numbers - (Bayesian Statistics) - Vocab, Definition, Explanations | Fiveable Law of Large Numbers from class: Bayesian Statistics The law of large numbers is a statistical theorem that states as the size of a sample increases, the sample mean will get closer to the expected value (or population mean). This principle is foundational in probability and helps to justify the use of probability distributions in estimating outcomes, ensuring that the more observations we collect, the more accurate our estimations become. congrats on reading the definition of Law of Large Numbers. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. The law of large numbers can be split into two types: the weak law and the strong law, with the strong law providing a more rigorous guarantee about convergence. 2. This law underpins many statistical methods and practices, including hypothesis testing and confidence intervals, by assuring that averages stabilize as more data is collected. 3. It highlights why larger sample sizes yield more reliable estimates and reduce variability, which is crucial when making inferences about a population. 4. In practical applications, understanding this law helps to mitigate risks in fields such as finance and insurance by allowing for better predictions based on larger data sets. 5. The convergence described in the law of large numbers applies not only to means but can also be extended to other statistics, promoting overall better accuracy in analyses. Review Questions • How does the law of large numbers contribute to the reliability of probability distributions when making estimates? □ The law of large numbers ensures that as we collect more data points, the sample mean will converge towards the true population mean. This convergence reinforces the reliability of probability distributions, as it demonstrates that larger samples yield estimates that are less prone to random variation. 
Therefore, when using probability distributions to make predictions or decisions based on samples, understanding this law allows statisticians and researchers to have greater confidence in their findings. • Discuss how the law of large numbers relates to the central limit theorem and its implications for statistical analysis. □ The law of large numbers and the central limit theorem are closely related concepts in statistics. While the law of large numbers states that sample means converge to the population mean as sample size increases, the central limit theorem explains that regardless of the population's distribution, the distribution of sample means approaches normality with larger samples. Together, they provide a powerful foundation for statistical analysis by ensuring that as we gather more data, our results become both stable and predictable, enabling better decision-making. • Evaluate how a misunderstanding of the law of large numbers could lead to errors in interpreting data in real-world situations. □ Misunderstanding the law of large numbers can result in significant errors when interpreting data, especially if one mistakenly assumes that small samples are representative of larger populations. For example, in finance, if an investor believes that a few successful trades indicate a reliable strategy without considering larger sample sizes, they may overlook risks associated with variability. This misjudgment can lead to overconfidence in decisions based on insufficient data. Ultimately, recognizing that only larger samples yield stable estimates is crucial for accurate analysis and informed decision-making.
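The convergence the law describes is easy to see numerically. The sketch below (illustrative only, with an arbitrary fixed seed) draws uniform(0, 1) samples, whose expected value is 0.5, and prints how far the sample mean sits from that value at several sample sizes:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

for n in (10, 1_000, 100_000):
    draws = [random.random() for _ in range(n)]
    sample_mean = sum(draws) / n
    # Distance from the true mean 0.5; it tends to shrink as n grows.
    print(n, round(abs(sample_mean - 0.5), 4))
```

At n = 100,000 the gap is on the order of a thousandth, exactly the stabilization of averages the law guarantees.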
{"url":"https://library.fiveable.me/key-terms/bayesian-statistics/law-of-large-numbers","timestamp":"2024-11-07T00:24:29Z","content_type":"text/html","content_length":"169678","record_id":"<urn:uuid:0ffbb549-763d-41f4-8149-52da94bec47b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00416.warc.gz"}
Correlation Dynamics Between the BeNeLux and the UK Stock Markets

Investigaciones Europeas de Dirección y Economía de la Empresa, Vol. 5, N° 2, 1999, pp. 93-102

Christodoulakis, G.A., City University Business School, UK
Satchell, S.E., University of Cambridge, UK

We present a model for the correlation dynamics between the BeNeLux and UK stock markets. Using daily data we estimate the Correlated GARCH model of Christodoulakis and Satchell (1998) to uncover the dynamics of the pair-wise correlations between markets. We then propose how such a model can be used as a risk management tool. The empirical results suggest that a constant correlation model would cause significant misallocations in any process that attempted to jointly allocate investment or capital between BeNeLux, UK and other markets.

Risk Management; Stock Markets.

There is substantial evidence that correlations vary over time, especially between high-frequency financial asset returns. Such time variation implies that a pair of markets, asset classes or individual assets may experience common innovation shocks, but that the degree of commonality is non-constant over time. Such a relationship has important implications for the management of financial risks, as correlations are key inputs for optimal financial decisions such as asset allocation, pricing and the design of derivatives. The evidence on the time variation of correlations goes back to the work of Kaplanis (1988), Fustenberg and Jeon (1989), Bertero and Mayer (1990), Koch and Koch (1991), King, Sentana and Wadhwani (1994) and Longin and Solnik (1995), among others. They use various kinds of statistical procedures to test for the stability of a correlation matrix and present substantial evidence against such a hypothesis. Further, correlations were found to increase immediately after or during commonly high-volatility periods in the markets, and to be related to observable factors, such as proxies for the business cycle, as well as to latent factors.
Christodoulakis and Satchell (1998), henceforth CS, model correlation as a stationary discrete-time stochastic process and show explicitly how persistent common (unobservable) innovations for a pair of asset returns lead to a stationary model for correlations, generating correlation clustering and implying predictability. In this paper we use daily data on European stock markets to estimate their joint correlation structure based on the CS (1998) model, and discuss how such a methodology can be used for more efficient risk management. Our data set consists of daily returns over the period July 1988 to July 1998 from the UK, Belgium, Netherlands and Luxemburg stock markets, as well as a joint BeNeLux market index. We use the DataStream-calculated indexes in US dollars for these markets and consider the pair-wise correlations between the UK and each of the BeNeLux markets; in each case we eliminate from the sample the data points corresponding to the non-common trading days for the two markets. A preliminary statistical analysis of the data set uncovered no significant serial correlation for the levels of returns, but highly significant ARCH-type effects, as expected. We believe that the joint correlation structure of the BeNeLux and the UK is likely to vary over time; this time variation can be captured by our methodology, which in turn has important risk implications. Section 2 discusses how our methodology can address issues in risk management, section 3 describes the mathematics of the Correlated ARCH model, and section 4 presents our empirical results and conclusions. There are many areas of risk management where variable correlation influences risk management decisions; we shall discuss just three.
The first is asset allocation, the second is what we might term correlation products, and the third is conventional hedging. In a standard Global Asset Allocation model, the fund manager elects to build a global equity/bond product by holding appropriate indices in a set of countries loosely coincident with the OECD; the manager may choose to include currency as well. Thus he would have a portfolio of the order of 80 to 90 components; we shall call this number N. His model would consist of forecasts of expected returns for each index together with time-varying volatility. Usually, the manager would assume a constant correlation between each pair of the N indices. Since he would work with a rolling window of 60 to 80 monthly data points, the correlation would change through time, but only very slowly. The advantage of our model is that we could now capture large correlation clusters: if the UK and the Belgian equity markets both moved substantially last period, our model would forecast a high co-movement next period. Using this higher correlation would assign a higher risk to the UK and Belgium in our global portfolio, and our mean-variance optimizer would accordingly reduce our positions in these two markets, assuming that our forecasts of expected returns and volatility were unchanged. Turning to the use of correlation products in risk management, we first need to define what they are. Broadly speaking, they consist of the class of derivative products whose payoffs are non-linear and depend upon more than one asset. These two features guarantee, under fairly general conditions, that the fair price of the product will depend upon the correlation between individual assets. These products have captured the imagination of financial engineers and financial product purveyors, but they are not used to the same extent as conventional derivatives.
Products that fall under this category include diff swaps, quanto swaps, options that pay the minimum or the maximum price of a number of assets, etc. Mahoney (1995) provides definitions of these types of contracts and includes a discussion of the risk management issues associated with correlation products. He notes that conventional contracts have risk aspects that are additive, so that risk can be enumerated at the trading level and global risk issues require only aggregation with respect to separate risk factors. With correlation products, however, the global risk manager requires a more active and quantitative role, as he has to consider correlation risk as well as the individual risks of the two or more correlated assets. The last issue, hedging, can be dealt with straightforwardly. It is natural to think of an asset, call it X, that an institution is obliged to hold, and a second asset Y, called the hedge, which the institution will buy/sell to reduce the risk of its fixed position in X. For many hedging schemes, and in particular regression-based ones, the optimal hedging coefficient will depend upon, inter alia, the correlation coefficient between X and Y. Thus if our model were, ceteris paribus, to forecast an increase in correlation, we would need to hold fewer units of Y to achieve the degree of hedge that we had previously. Concluding, we see that a time-varying correlation model will have immediate and important implications for a wide range of risk management problems.

Following the notation of CS (1998), let y_t be a 2 × 1 vector of asset returns with conditional mean equation

    y_t = μ + ε_t,    (1)

where μ is a 2 × 1 vector that can have a general structure and ε_t is a vector of error terms such that ε_{i,t} = σ_{i,t} v_{i,t}, with σ_{i,t} representing the conditional standard deviation of asset i returns and v_{i,t} an innovation process. The joint generating process of the two innovations is assumed to follow

    (v_{1,t}, v_{2,t})′ ~ D( 0, [ 1, ρ(z_{12,t}); ρ(z_{12,t}), 1 ] ),    (2)

an independent but non-identically distributed sequence, the distribution D of which will be specified later in the text. Under this framework, asset returns experience common innovation shocks through the covariance term, which is allowed to vary over time. Let I_{t−1} be the sigma field generated by the available information set and H_t the time-varying covariance matrix, H_t = C_t R_t C_t, where C_t = diag(σ_{1,t}, σ_{2,t}) and R_t is the conditional correlation matrix. The conditional variances are assumed to follow a GARCH process of any type, e.g. σ²_{i,t} = ω_i + α_i ε²_{i,t−1} + β_i σ²_{i,t−1}. Provided that the variances are guaranteed to be positive through the GARCH parameter constraints, H_t is positive definite for every t if the conditional correlation is less than one in absolute value.

By the definition of conditional correlation we have

    ρ_t = E( v_{1,t} v_{2,t} | I_{t−1} ),    (3)

and the sequence of estimated standardized innovation products u_t = ε_{1,t} ε_{2,t} / (σ_{1,t} σ_{2,t}) = v_{1,t} v_{2,t} is also available in the information set I_{t−1}, as it is generated by the joint modelling of the conditional means and variances. This forms a real-valued serially uncorrelated sequence and provides a minimal information set driving the evolution of correlation. To ensure that |ρ_t| < 1 for all t, we adopt the Fisher-z transformation of the correlation coefficient,

    z_{12,t} = ln( (1 + ρ_t) / (1 − ρ_t) ),    (3a)

which is a one-to-one function mapping (−1, 1) onto the real line. We now let z_{12,t} evolve as a linear function of the available information, that is,

    z_{12,t} = b_0 + Σ_{i=1..q} b_i u_{t−i},    (4)

where ρ = E(v_{1,t} v_{2,t}) is the first joint unconditional moment of a process as in (2). We define (1), (2) and (4) as a Correlated ARCH (CorrARCH) process of order q. The order of lag q determines the length of time for which a shock persists in conditioning the correlation of subsequent return errors. As q increases, the memory of shocks becomes longer, and a very long lag structure in (4) will eventually call for a more parsimonious representation, as usual in time series analysis. Under the usual stability conditions, the process can be represented as

    z_{12,t} = b_0 + Σ_{i=1..q} b_i u_{t−i} + Σ_{j=1..p} c_j z_{12,t−j}.    (5)

We define (1), (2) and (5) as a Correlated GARCH (CorGARCH) process of order (p, q). This specification allows for longer memory and a more flexible lag structure. For p = 0 the CorGARCH(p, q) process reduces to the CorrARCH(q) process (4), and for p = q = 0 the model reduces to the constant conditional correlation model of Bollerslev (1990). For further technical details on correlation processes as well as their autocorrelation structure see CS (1998).

In the case that the innovations are normally distributed and independent of any exogenous variables that may contribute to the information set, the conditional density is bivariate Gaussian, which in log form is written as

    ℓ_t = −(N/2) ln 2π − ½ ln|H_t| − ½ ε_t′ H_t^{−1} ε_t,

where N = 2 is the number of rows in y_t. For a sample of T observations the conditional log-likelihood function is the sum of the conditionally normal log-probabilities,

    L = Σ_{t=1..T} ℓ_t.    (6)

Recalling that H_t = C_t R_t C_t, we can write (6) more analytically in terms of the innovations v_{i,t}, which are functions of the mean parameters μ_i; of the σ_{i,t}, which evolve as univariate GARCH processes of any type; and of ρ_t, which evolves as a Correlated ARCH or GARCH process of any order. Our purpose is to estimate the values of the unknown parameters involved in the conditional mean, conditional variance and conditional correlation equations that give the globally maximum value of (6). We thus need to evaluate its first- and second-order derivatives with respect to the vector of unknown parameters. For an explicit derivation of the score and the Hessian see CS (1998).
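To make the Fisher-z device concrete: the correlation is modelled linearly on the z scale, and the inverse transform maps every z value back into (−1, 1). The sketch below is illustrative only; the coefficients and innovation products are made up, and the function names are mine, not from CS (1998).

```python
import math

# Fisher-z transform in the ln((1+rho)/(1-rho)) form used by the paper,
# and its inverse, which always returns a value strictly inside (-1, 1).
def fisher_z(rho):
    return math.log((1 + rho) / (1 - rho))

def inv_fisher_z(z):
    return (math.exp(z) - 1) / (math.exp(z) + 1)

# Round-trip check.
assert abs(inv_fisher_z(fisher_z(0.3)) - 0.3) < 1e-12

# Toy CorGARCH(1,1)-style recursion on the z scale with hypothetical
# coefficients b0, b1, c1 and hypothetical innovation products u.
b0, b1, c1 = 0.05, 0.3, 0.9
z = 0.0
for u in (1.5, -0.8, 2.0, 0.1, -1.2):
    z = b0 + b1 * u + c1 * z
    rho = inv_fisher_z(z)
    assert -1.0 < rho < 1.0   # guaranteed by the transform
    print(round(rho, 3))
```

A common textbook variant includes a factor of ½ in the transform; either version maps (−1, 1) one-to-one onto the real line, which is all the model needs.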
As the log-likelihood function is highly non-linear and a closed-form solution of the first-order conditions is not available, (6) can be numerically maximized through the Newton-Raphson algorithm, which we find more stable than the alternatives. For non-normal innovations, such as t-distributed ones, the log-likelihood in (6) is based on the conditional t distribution and involves further unknown shape-related parameters to be estimated from the data. We present our empirical results in Table 1. For each pair of markets we first estimate a bivariate uncorrelated GARCH process. Then we relax the no-correlation assumption and allow initially for constant correlation and eventually for time-varying correlations. Our model selection procedure is based on Likelihood Ratio statistics as well as information criteria such as Akaike (AIC) and Schwartz (SIC). In selecting the particular models presented in our tables, we estimated several different specifications within each category and then performed model selection procedures as described above. We found convergent estimates for the UK versus Belgium, the Netherlands, and the BeNeLux index. For Luxemburg versus the UK we failed to obtain convergent results; there may be some institutional explanation for this, since even simple models such as the uncorrelated bivariate GARCH could not be calculated. The t statistics suggest that the estimated parameters are in all cases statistically significant. An inspection of the log-likelihood values uncovers that, in all cases, relaxing the no-correlation assumption improves the log-likelihood value dramatically, as can be seen from the likelihood ratio test statistics and from information criteria such as the Akaike and Schwartz.
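The persistence estimates discussed next can be converted into shock half-lives directly: for an autoregressive parameter ξ, the half-life is the s solving ξ^s = ½, i.e. s = ln(½)/ln(ξ). A short check against the values reported below (0.94, 0.97 and 0.96); the variable and function names are mine:

```python
import math

def half_life(xi):
    # Number of periods s such that xi**s == 0.5.
    return math.log(0.5) / math.log(xi)

for market, xi in [("Belgium", 0.94), ("Netherlands", 0.97), ("BeNeLux", 0.96)]:
    print(market, round(half_life(xi), 1))
# Roughly 11, 23 and 17 days, matching the reported range of
# eleven to twenty-three days.
```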
A similar picture is revealed when we relax the constant correlation assumption to allow for CorGARCH effects, which further increases the log-likelihood values. Thus, our results strongly suggest that the joint distribution of the BeNeLux and UK stock markets is best described by a Correlated GARCH process. Furthermore, we see that there is persistence in common innovation shocks between the markets, implying strong time variation and predictability of the correlation; see the CorGARCH block of Table 1, where the autoregressive correlation parameter takes the values 0.94, 0.97 and 0.96 for the UK versus Belgium, the Netherlands and the BeNeLux respectively. These high values, always greater than 0.9, indicate strong persistence of shocks to the correlation. We can gain some intuition about the degree of persistence from the average half-life of a shock, that is, the number of periods s for which the autoregressive parameter raised to the power s equals one half. Table 2 presents these estimates, from which we can see that a common shock needs on average between eleven and twenty-three days to exhaust its half-life.

Table 1. UK versus Belgium, Netherlands and BeNeLux. (The numerical entries of the table did not survive extraction.) Notes: Bel, Ne and BNL stand for Belgium, Netherlands and BeNeLux respectively. AIC = ln L − k, SIC = ln L − 0.5 k ln T; t-statistics in round brackets, chi-square critical values in square brackets; T = 2560 observations. Parameter notation follows section 3.

Table 2. Half-Life Estimates. Table 3. Steady-State Correlations (column headers: Raw Data, Constant Correlation; the numerical entries did not survive extraction). As a final check we calculate the "steady-state" values of the correlation and of the Fisher-z, as defined in equations (3) and (3a) respectively. These results are presented in Table 3 and show a pleasing consistency between the three methods, i.e.
the raw sample correlation, the bivariate constant-correlation GARCH and the steady-state correlation implied by our model. To obtain some intuition about the differences in our results, the reader should inspect Figure 1, where we plot, as an example, the UK vs Belgium correlation as estimated by the CorGARCH, constant-correlation GARCH and raw correlation methods for the whole sample period. It is evident that our model reveals substantial fluctuations in the correlation and correlation clustering, and shows how poor an approximation constant correlation is.

Figure 1. UK-Belgium correlation (1988-1998). (The plot itself did not survive extraction.)

Overall, we present empirical evidence on the correlation dynamics between the UK and BeNeLux stock markets, applying the Correlated ARCH model of Christodoulakis and Satchell (1998). Our results strongly support time-varying joint correlations and correlation clustering. The joint evolution of the markets is shown to be best described by the CorGARCH model, implying an explicit autocorrelation structure for the correlation coefficient and predictability. Based on these results, we discuss important implications for risk management. We have not focussed in our application on risk management calculations, e.g. plotting time-varying hedge ratios or values-at-risk. We hope to do this in future research; we also hope to apply evolutionary models to this problem to capture the non-stationary shift from a pre-Euro to a post-Euro world.

We are grateful to participants of the "Risk Management in Finance" session of the EURO XVI conference, Brussels, July 1998, and to Prof. Constantin Zopounidis for useful comments.

Notes

1. We use the daily percentage change of the price index as a measure of the daily return.
For small changes, the difference of the natural logarithm of price could also be adopted.
2. For more details on ARCH and GARCH processes see Engle (1982), Bollerslev (1986), as well as the excellent survey paper by Bera and Higgins (1993).
3. The Fisher-z ranges over the whole real line. As a standard result, it approaches normality much more rapidly than the correlation coefficient; see Muirhead (1982).

References

BERA A. K. AND M. L. HIGGINS (1992), "A test for conditional heteroscedasticity in time-series models", Journal of Time Series Analysis, 13: 501-19.
BERTERO E. AND C. MAYER (1990), "Structure and performance: global interdependence of stock markets around the crash of October 1987", European Economic Review, 34: 1155-80.
BOLLERSLEV T. (1986), "Generalized autoregressive conditional heteroscedasticity", Journal of Econometrics, 51.
BOLLERSLEV T. (1990), "Modelling the coherence in short-run nominal exchange rates: a multivariate generalized ARCH approach", Review of Economics and Statistics, 72: 498-505.
CHRISTODOULAKIS G. A. AND S. E. SATCHELL (1998), "Correlated ARCH", Institute for Financial Research, Birkbeck College, University of London, working paper IFR48.
ENGLE R. (1982), "Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation", Econometrica, 50: 987-1008.
FUSTENBERG VON G. M. AND B. N. JEON (1989), "International stock price movements: links and messages", Brookings Papers on Economic Activity, 125-80.
KAPLANIS E. C. (1988), "Stability and forecasting of the comovement measures of international stock market returns", Journal of International Money and Finance, 7: 63-75.
KING M., E. SENTANA AND S. WADHWANI (1994), "Volatility and links between national stock markets", Econometrica, 62, No 4: 901-33.
KOCH P. D. AND T. W.
KOCH (1991), "Evolution in dynamic linkages across national stock markets", Journal of International Money and Finance, 10: 231-51.
LONGIN F. AND B. SOLNIK (1995), "Is correlation in international equity returns constant?", Journal of International Money and Finance, 14, No 1: 3-26.
MAHONEY J. M. (1995), "Correlation Products and Risk Management Issues", Economic Policy Review, Federal Reserve Bank of New York.
MUIRHEAD R. J. (1982), "Aspects of Multivariate Statistical Theory", Wiley Series in Probability and Mathematical Statistics.
{"url":"https://studylib.es/doc/5506457/correlation-dynamics-between-the-benelux-and-the-uk-stock","timestamp":"2024-11-07T20:27:30Z","content_type":"text/html","content_length":"77891","record_id":"<urn:uuid:f02b40f6-650c-485b-bc88-b3cf3da2db7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00491.warc.gz"}
If You Find Cymath Useful, Try Cymath Plus Today!

Cymath is an add-in for Excel that provides symbolic calculation and regression analysis. It also supports computer algebra, which means Cymath can input and output mathematical expressions. Cymath is ideal if you have to solve equations or research topics involving the numeric properties of variables. It's useful in a variety of fields, such as chemistry or biology, where you need to analyze large data sets. If any of this sounds like something you need, keep reading for details about Cymath and what it does.

How to Install and Use Cymath

Cymath works with Microsoft Excel. Before you can use it, you'll need to install it. Here's how to do that: First, download Cymath from the link in this article. You'll have a choice between a 32-bit or 64-bit version. Make sure you download the right version for your computer; if you don't know which version you need, ask your IT department. When the file has finished downloading, open it to trigger the installation process. Follow the directions and choose the install location; if you're not sure where to install it, pick the default option. After installation, restart your computer to complete the process. When you open Excel, you'll see a new tab called Cymath.

Symbolic Calculation

This is Cymath's core functionality. Symbolic calculation lets you write expressions, or formulas, and have Cymath evaluate them for you. For example, you can write "100 * 2" in a cell, and Cymath will evaluate that as 200. You can also write "(100 * 2)^2" and have Cymath evaluate that as 40000. There are a few rules you have to follow when writing expressions. First, you have to use parentheses to group your sub-expressions ("100 * 2" is fine, but "100 * 2 * 3" is not). Second, all sub-expressions must end with an operand, i.e. the number you want them to evaluate as ("100 * 2" is fine, but "100 * 2 + 3" is not).
Cymath's Functions

Cymath has more than 50 functions that help you with everything from calculating standard deviations to finding the roots of equations. Here are some of the most common functions:

• root : finds the roots of complex equations. Two operands: the complex number to find the root of, and the value of its imaginary part.
• real : finds the real part of a complex number. One operand: the complex number.
• imag : finds the imaginary part of a complex number. One operand: the complex number.
• pi : returns the value of pi. No operands.
• sin : finds the sine of a given number. One operand: the number.
• cos : finds the cosine of a given number. One operand: the number.
• tan : finds the tangent of a given number. One operand: the number.
• e : returns the value of e (the base of natural logarithms). No operands.
• log : finds the logarithm of a given number. One operand: the number.
• log10 : finds the base-10 logarithm of a given number. One operand: the number.
• sqrt : finds the square root of a given number. One operand: the number.
• sinh : finds the hyperbolic sine of a given number. One operand: the number.
• cosh : finds the hyperbolic cosine of a given number. One operand: the number.
• tanh : finds the hyperbolic tangent of a given number. One operand: the number.
• asin : finds the inverse sine of a given number. One operand: the number.
• acos : finds the inverse cosine of a given number. One operand: the number.
• atan : finds the inverse tangent of a given number. One operand: the number.

Regression Analysis

Cymath's regression analysis tools let you build models to analyze and predict data. Just input variables and expected values, then let Cymath do the rest. It can even help you find missing variables. Cymath has three different regression tools:

• Simple regression : models one dependent variable as a function of a single independent variable.
• Multiple regression : models one dependent variable as a function of several independent variables.
• Logistic regression : models how a binary dependent variable is related to a set of independent variables.

If you have to analyze large data sets or solve complex equations, you need Cymath. It's an add-in that provides symbolic calculation and regression analysis. Cymath can input and output mathematical expressions and supports computer algebra, which makes it ideal for mathematical research projects.
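Cymath's regression runs inside Excel, so its exact interface is not reproduced here. As a plain-Python illustration of what the "simple regression" tool computes — an ordinary least-squares fit of one variable against another — here is a hedged sketch (the function name and data are mine, not Cymath's):

```python
def simple_regression(xs, ys):
    """Ordinary least-squares fit of y = a + b*x.

    Illustrates what a 'simple regression' tool computes;
    this is plain Python, not Cymath's Excel interface.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)           # spread of x
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, ys))                 # co-variation of x and y
    b = sxy / sxx          # slope
    a = mean_y - b * mean_x  # intercept
    return a, b

# Perfectly linear data y = 1 + 2x recovers a = 1, b = 2.
a, b = simple_regression([0, 1, 2, 3], [1, 3, 5, 7])
```

The same closed-form estimates underlie any simple-regression tool; multiple and logistic regression generalize this to several predictors and to binary outcomes respectively.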
Area of curves and surfaces in context of reduction of area

27 Aug 2024

Title: Reduction of Area: A Study on the Minimization of Curve and Surface Areas

Abstract: The concept of reducing the area of curves and surfaces has significant implications in various fields, including engineering, physics, and computer science. This article delves into the mathematical framework underlying the minimization of areas, focusing on the application of calculus to curves and surfaces. We explore the theoretical foundations of area reduction, highlighting key concepts and formulas.

Introduction: The area of a curve or surface is a fundamental property that has been extensively studied in mathematics and physics. The reduction of this area has practical applications in fields such as:

• Engineering: Minimizing the area of curved structures can lead to weight savings and improved structural integrity.
• Physics: Understanding the behavior of surfaces under various conditions, such as temperature or pressure changes, is crucial for predicting material properties.
• Computer Science: Efficient algorithms for calculating surface areas are essential in computer-aided design (CAD) software.

Theoretical Background: The "area" of a curve — more precisely its arc length, the one-dimensional analogue of area — can be calculated using the formula

A = ∫ √(1 + (y')²) dx

where y' is the derivative of the function y(x) representing the curve.

The area of a surface given as a graph z = f(x, y) can be calculated using the formula

A = ∬ √(1 + (∂f/∂x)² + (∂f/∂y)²) dx dy

and, for a surface given parametrically by r(u, v), by A = ∬ |∂r/∂u × ∂r/∂v| du dv. This double integral measures the total area of the surface.
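As a numerical sanity check of the arc-length formula above, the integral can be approximated with a simple midpoint rule. The example curve y = x² and the tolerance are illustrative choices, not from the article:

```python
import math

def arc_length(dy_dx, a, b, n=100_000):
    """Approximate A = integral of sqrt(1 + (y')^2) dx over [a, b]
    with the midpoint rule, given the derivative y' = dy_dx(x)."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h          # midpoint of the i-th subinterval
        total += math.sqrt(1.0 + dy_dx(x) ** 2)
    return total * h

# Example: y = x^2 on [0, 1], where y' = 2x.
# The exact arc length is sqrt(5)/2 + asinh(2)/4.
approx = arc_length(lambda x: 2 * x, 0.0, 1.0)
exact = math.sqrt(5) / 2 + math.asinh(2) / 4
```

Since the midpoint rule is second-order accurate, the approximation agrees with the closed form to well below 1e-6 at this resolution.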
Reduction of Area: The minimization of curve and surface areas can be achieved through various mathematical techniques, including:

• Calculus: applying optimization methods, such as finding critical points or using Lagrange multipliers.
• Geometry: exploiting the properties of curves and surfaces, such as symmetry or convexity.

These approaches reduce area by controlling the curvature of the geometry.

Conclusion: The reduction of area is a fundamental concept in mathematics and physics, with significant implications for various fields. This article has provided an overview of the theoretical foundations underlying this concept, highlighting key formulas and techniques. Further research into the minimization of curve and surface areas can lead to breakthroughs in engineering, physics, and computer science.
An enhanced algorithm for online Proper Orthogonal Decomposition and its parallelization for unsteady simulations

X. Li, S. Hulshoff, S. Hickel (2022) Computers & Mathematics with Applications 126: 43-59. doi: 10.1016/j.camwa.2022.09.007

We present an enhanced online algorithm based on incremental Singular Value Decomposition (SVD), which can be used to efficiently perform a Proper Orthogonal Decomposition (POD) analysis on the fly. The proposed enhanced algorithm for modal analysis has significantly better computational efficiency than the standard incremental SVD and good parallel scalability, such that the strong reduction of computational cost is maintained in parallel computations. POD plays an important role in the analysis of complex nonlinear systems governed by partial differential equations (PDEs), since it can describe the full-order system in a simplified but representative way using a handful of dominant dynamic modes. However, determining a POD from the results of complex unsteady simulations is often impractical using traditional approaches due to the need to store a large number of high-dimensional solutions. As an alternative, incremental SVD can be used to avoid the storage problem by performing the analysis on the fly using a single-pass updating algorithm. Nevertheless, the total computing cost of incremental SVD is higher than that of traditional approaches. In order to reduce this total cost, we incorporate POD mode truncation into the incremental procedure, leading to an enhanced algorithm for incremental SVD. The accuracy of the method depends on the truncation number (M) of the enhanced process. Results obtained with the enhanced method converge to the results of a standard SVD for large M. Two error estimators are formulated for this enhanced incremental SVD based on an aggregated expression of the snapshot solutions, equipping the proposed algorithm with criteria for choosing the truncation number.
The effectiveness of these estimators and the parallel efficiency of the enhanced algorithm are demonstrated using transient solutions from representative model problems. Numerical results show that the enhanced algorithm can significantly improve the computing efficiency for different kinds of datasets, and that the proposed algorithm is scalable in both the strong and weak sense.
Introduction to statistical thinking

Statistical thinking is an approach that processes information through the lens of probability and statistics so as to make informed decisions. This series of blogs takes you through a journey where we begin by introducing statistical thinking, make a brief stopover to understand Bayesian statistics, and then dwell on its applications in financial markets using Python.

"Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write!" — H.G. Wells (1866-1946), the father of science fiction

Making choices is a part of our daily lives, be it personal or professional. If you apply statistical thinking wherever possible, you can make better choices. In this article, we'll go step by step in deconstructing the decision-making process under limited information. We'll look at some examples, the jargon, and the importance of statistics in the decision-making process.

What is statistics? There are two ways to define statistics. Formally: "The science of statistics deals with the collection, analysis, interpretation, and presentation of data." Intuitively: "Statistics is the science of making decisions under uncertainty." That is, statistics is a tool that helps you make decisions when you don't have complete information.

What is a statistical question? Looking at the above image, let's address some questions! How many cats does the picture have? 4, right? Do we have all the information to answer this question? Yes, so we can answer with certainty. Now: do all healthy cats have four legs? Do we have all the information to answer that question? No, because this is a picture of only 4 out of all the existing cats in the world! Can we still answer it with certainty? No. So, is it a statistical question? Yes — because if you have all the information to answer a question, or if you can answer it with certainty, it's not a statistical question.
For a question to be a statistical question:

• The question has to go beyond the available information, and
• The question shouldn't be answerable with certainty.

This concept will be reinforced repeatedly in this article, i.e., statistics is the science of decision making under uncertainty.

Why do we need statistics? We now work with a toy example through this post to answer the above question. Suppose we decide to design a Quantra course on Julia programming.

• How do we decide if we should put time and effort into building this course?
• What if our designed course fails and doesn't get many interested users?

These are important business decisions that require substantial resources. Therefore, we decide to survey whether such a course would sell. That raises the following questions:

• Who would our potential paid users be?
• Who should we approach? Programmers? Data scientists? Researchers? College graduates? Quantitative analysts?
• Ideally, all of them, right?
• Can we get access to all of these people? Unlikely.
• So, what should we do?
• Should we drop the idea of designing the new course? That doesn't sound right.

If we had access to all the people, the process would be simple: if the majority say they would buy such a course, you create it; if not, you drop it. However, since we can't do that, we do the next best thing, i.e. we ask the maximum number of people we can reach, and, based on their responses, we estimate the likelihood of this course being successful.

To calculate this estimate, we need statistics. To generalize this idea: in real-world scenarios, we rarely have complete information related to the decision we want to make, whether for individuals or businesses. Hence, we need a tool that can help us decide with limited information. Statistics is one such tool, and making these decisions within a statistical framework is called statistical thinking.
Statistical thinking is not just about using formulas to calculate p-values and z-scores; it's a way to think about the world. Once you internalize this idea, it will change how you see the world. You'll start thinking in terms of probabilities instead of certainties, which will help you make better decisions in your professional and personal life.

Descriptive statistics vs inferential statistics

Descriptive statistics is the process of taking the data and describing its features using measures of central tendency (mean, median and mode), measures of dispersion (standard deviation, interquartile range), etc. Inferential statistics, however, is about working with limited data and using it to infer something about a larger question we pose to ourselves a priori. This question cannot be answered with certainty. Our article focuses on the latter, i.e. inferential statistics.

Should we use descriptive or inferential statistics? It depends on the question you're asking and the available data. A simple question to ask yourself while deciding which one to use is:

• Do we want to describe the existing data? OR
• Do we want to draw inferences from the existing data (sample) to extrapolate about the population?

We go with descriptive statistics for the former and inferential statistics for the latter.

Jargon in statistics

Let's look at some of the key terms used in statistics that will help you understand the concepts better.

Population: The universe of items we're interested in. Going back to our Quantra course example, the population would be every person in this world who would be interested in the Julia course.

Sample: A subset of the population, i.e. the amount of information we can get. This could be the Quantra or EPAT user base we have. We could frame our question as: How likely are you to buy a course on Julia (on a scale of 1 to 10)?

Statistic: A summary measure of the data available, i.e. from the sample.
Here, it could be the average score of, say, 7 obtained from Quantra and EPAT users for the above question.

Parameter: A summary measure of the population. Here, it could be the average score of, say, 6 obtained from the population (as defined above). A statistic is a summary measure of the existing data (sample), whereas a parameter is the same for the population.

Hypothesis: A description of how we think the world works. We hypothesize that EPAT and Quantra users are unlikely to buy a course on Julia. This is the assumption we start with, which we call the null hypothesis.

Null hypothesis: It's crucial to have a null hypothesis before starting any statistical analysis, and the null hypothesis is mostly the status quo. The alternative hypothesis is the theory that you think could be true and are looking for evidence to verify. So, to clarify, our null hypothesis \({H_0}\) and alternative hypothesis \({H_1}\) here are:

\({H_0}\): EPAT and Quantra users are unlikely to buy a course on Julia (mean rating <= 5)

\({H_1}\): EPAT and Quantra users are likely to buy the course (mean rating > 5)

Hypothesis testing: A method to draw conclusions about the population from the sample, i.e. to test whether a hypothesis is correct or not. An estimate can be defined as the best guess of the actual value of the parameter.

Why should we spend time on statistical inference? Let's consider two scenarios:

• Scenario 1 - We had access to only one user, and she rated 6 for the likelihood of buying the course.
• Scenario 2 - We had access to 10 users, and they gave an average rating of 8 for buying the course.

These are our best estimates. However, which one is the better estimate? The one with 10 users, because it has more data. Is the estimate of scenario 2 good enough to act on? Should we create the course because 10 people have a high likelihood of buying it? Maybe not.
Because the response from 10 users is probably not enough, and so could lead to a poorly worked out decision. This is where statistical inference comes in. As we have mentioned before, If you want the correct answer, you will need all the data. No silver bullet can give you the right answer with limited data. But remember, as we discussed, statistics is the science of making decisions under uncertainty. We’re not interested in knowing the correct answer with statistical inference because we can’t! Using inferential statistics, the question you want to answer is: Is the best guess good enough to change our minds? This forms the basis of everything we do in statistical inference. Notice that the question mentions “changing our mind”. This means that we would need to already have something in our minds in the first place, a decision, an opinion. We can only change our minds if we have already decided to do something by default. Remember we mentioned the importance of having a null hypothesis? The hypothesis could be that people are extremely unlikely to buy the Quantra course on Julia programming, so we will not create a new course if the best guess is not good enough to change our minds. This is where the need to have a predefined hypothesis comes in. This is another fundamental concept in inferential statistics. Suppose we are to make statistical inferences. In that case, we need to have a predefined decision or an opinion because, at the cost of being repetitive, the question we’re asking using statistics is: Is the best guess good enough to change our minds? The entire exercise of statistical inference makes sense if you have a default action. If you don’t have a default action, just go with your best guess from the sample data. Let’s take another example to understand this. Imagine if PepsiCo decides to change the colour of its logo to black or green. The responses of 1 million people are recorded as a sample. 
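To make the inference step concrete, the course-rating hypothesis above can be checked with a one-sample t-statistic. The ratings below are made up for illustration, and this is a generic textbook test, not the blog's own code:

```python
import math

def one_sample_t(sample, mu0):
    """One-sample t-statistic for H0: population mean == mu0."""
    n = len(sample)
    mean = sum(sample) / n
    # Unbiased sample variance (divide by n - 1).
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return (mean - mu0) / math.sqrt(var / n)

# Hypothetical survey responses on the 1-10 scale.
ratings = [8, 7, 9, 6, 8, 7, 8, 9, 6, 7]
t = one_sample_t(ratings, mu0=5.0)
# Compare t against the critical value for n - 1 degrees of freedom;
# a large positive t is evidence against "users are unlikely to buy".
```

With these ten hypothetical ratings the statistic comes out strongly positive, which is exactly the "is the best guess good enough to change our minds?" question made quantitative — but as the text notes, ten responses is a small sample and the conclusion remains tentative.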
Now, here's a summary of which decision we can take based on our default action and the data:

Default action | Results from data                 | Decision
Not decided    | Data favours green                | Go with the best guess: green
Don't change   | Data marginally favours black     | Logo remains unchanged
Don't change   | Data overwhelmingly favours green | Change the logo to green

The table above consists of 3 scenarios that illustrate the concepts presented above.

• In the first scenario, there's no default action and the data supports green, so we go ahead and change the logo to green.
• In the second scenario, the default action is "don't change the colour" and the data supports black, but not strongly enough, so the logo colour remains unchanged.
• In the third scenario, the default action is "don't change the colour" but the data strongly supports green, so the logo is changed to green.

Resources for learning about statistical thinking

Here are a few resources that you can refer to for a detailed understanding of the topic. We hope this write-up has piqued your interest in applying a statistical approach when confronted with choices. Do share your thoughts and comments about the blog in the section below. Until next time!

If you desire to equip yourself with lifelong skills that will always help you upgrade your trading strategies — with topics such as Statistics & Econometrics, Financial Computing & Technology, and Machine Learning — this algo trading course ensures that you are proficient in every skill required to excel in the field of trading. Check out EPAT now!

Authors: Vivek Krishnamoorthy and Anshul Tayal

Disclaimer: All data and information provided in this article are for informational purposes only. QuantInsti® makes no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this article and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use.
All information is provided on an as-is basis.
Find a cubic polynomial to satisfy an integral equation - Stumbling Robot

Finally, from the integral equation we have,

Now, we have the following three equations in three unknowns,

Since we don't know any linear algebra, we'll use elimination to solve for each variable. (If you know some linear algebra, you might know quicker ways to solve this system of equations.)

First, adding the first and second equations we obtain

Substituting this value of

Finally, substituting our values of

And thus, using the equations we already have for
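The post's own equations did not survive extraction, but the "quicker way" it alludes to is a direct linear-algebra solution of the 3×3 system. Here is a hedged sketch using Gaussian elimination; the coefficient matrix is a made-up example, not the system from the original post:

```python
def solve3(A, rhs):
    """Gaussian elimination with partial pivoting for a 3x3 system A x = rhs.
    The example system below is illustrative, not the post's."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]   # augmented matrix
    n = 3
    for col in range(n):
        # Swap in the row with the largest pivot for numerical stability.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution.
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical system with known solution (1, 2, 3).
coeffs = solve3([[2, 1, -1], [1, 3, 2], [1, 0, 1]], [1, 13, 4])
```

This is exactly the elimination the post performs by hand, just mechanized; with NumPy available, `numpy.linalg.solve` does the same in one call.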
An O(n log n) unidirectional distributed algorithm for extrema finding in a circle

Journal of Algorithms

View publication

In this paper we present algorithms which, given a circular arrangement of n uniquely numbered processes, determine the maximum number in a distributed manner. We begin with a simple unidirectional algorithm in which the number of messages passed is bounded by 2n log n + O(n). By making several improvements to the simple algorithm, we obtain a unidirectional algorithm in which the number of messages passed is bounded by 1.5n log n + O(n). These algorithms disprove Hirschberg and Sinclair's conjecture that O(n^2) is a lower bound on the number of messages passed in unidirectional algorithms for this problem. At the end of the paper we indicate how our methods can be used to improve an algorithm due to Peterson, obtaining a unidirectional algorithm that uses at most 1.356n log n + O(n) messages. This is the best bound so far on the number of messages passed in both the bidirectional and unidirectional cases. © 1982.
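To see where the n log n behaviour comes from, here is a simulation sketch in the spirit of Peterson's phase-based unidirectional algorithm — not the paper's improved 1.5n log n variant. In each phase every active process learns the values held one and two active hops counterclockwise (relayed around the ring) and survives only if its predecessor's value is a local maximum, so the candidates at least halve every phase:

```python
def ring_election(uids):
    """Phase-based unidirectional leader election on a ring (a sketch
    inspired by Peterson's algorithm, not the paper's 1.5 n log n one).

    Returns (elected_uid, message_hops). Relaying is modeled implicitly:
    each of the two sends per phase costs n hops around the whole ring.
    """
    n = len(uids)
    temps = list(uids)   # values held by the active processes, in ring order
    hops = 0
    while len(temps) > 1:
        k = len(temps)
        t1 = [temps[i - 1] for i in range(k)]   # value one active hop back
        t2 = [temps[i - 2] for i in range(k)]   # value two active hops back
        hops += 2 * n                           # two relayed messages per phase
        # Survive iff the predecessor's value is a local maximum; the
        # survivor adopts that value, so the global maximum always survives.
        temps = [t1[i] for i in range(k)
                 if t1[i] > temps[i] and t1[i] > t2[i]]
    return temps[0], hops
```

At most ⌈log₂ n⌉ phases of 2n hops each gives the 2n log n + O(n) bound of the paper's simple algorithm; the 1.5n log n and 1.356n log n variants refine exactly this phase structure.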
Nonreciprocal phase shift (NRPS)

This example shows how to calculate the Nonreciprocal Phase Shift (NRPS) of some nonreciprocal Magneto-Optical (MO) devices for integrated optics. The NRPS is a widely used phenomenon to describe the nonreciprocal effects of an MO device, whose basis is Faraday rotation. If the medium of the MO device is magnetized in the transverse direction (e.g., z), the anisotropic permittivity can be written, for forward and backward propagating fields, as

$$ \varepsilon_{\text{forward}}=\left(\begin{array}{ccc}{\varepsilon_{xx}} & {i \varepsilon_{xy}} & {0} \\ {-i \varepsilon_{xy}} & {\varepsilon_{yy}} & {0} \\ {0} & {0} & {\varepsilon_{zz}}\end{array}\right) $$

$$ \varepsilon_{\text{backward}}=\left(\begin{array}{ccc}{\varepsilon_{xx}} & {-i \varepsilon_{xy}} & {0} \\ {i \varepsilon_{xy}} & {\varepsilon_{yy}} & {0} \\ {0} & {0} & {\varepsilon_{zz}}\end{array}\right) $$

where the wave is TE polarized (Hz strongest) and travels along the x axis. In this case, the propagating mode experiences a Faraday rotation effect in the xy plane. Due to this perturbation, the forward and backward propagating waves have slightly different propagation constants. The difference between the propagation constants is the NRPS:

$$ NRPS=\beta_{\text{forward}}-\beta_{\text{backward}} $$

where beta_forward and beta_backward are the propagation constants of the forward and backward propagating waves respectively. A phase shift can therefore be achieved, making applications such as optical isolation possible in integrated optics.

Conformal meshing for the grid attribute should be disabled for MO calculations. This avoids significant errors in the mesh cells at the interface between the MO material and the surrounding materials. For more information, please refer to the grid attribute tips page.

Simulation with FDE solver

Since release 2020a R7, the FDE solver in MODE can handle MO simulations.
The simplest analysis of waveguides directly solves for the effective indices (and group indices) of MO waveguides. The file [[MO_NRPS_Balhmann_FDE.lms]] contains the structure defined in [1]. There are 3 designs, A, B, and C, which can be set in the "::model" group, as well as the rib height, rib thickness, and total thickness. The "x max" boundary condition uses PML to eliminate the slab modes, which can cause challenges tracking the desired guided TM mode. Referring to [1], the anisotropic permittivity tensors can be written as

$$ \varepsilon_{\text{forward}}=\left(\begin{array}{ccc}{\varepsilon_{xx}} & {0} & {0} \\ {0} & {\varepsilon_{yy}} & {i \frac{2 n \theta_{F}}{k_{0}}} \\ {0} & {-i \frac{2 n \theta_{F}}{k_{0}}} & {\varepsilon_{zz}}\end{array}\right) $$

$$ \varepsilon_{\text{backward}}=\left(\begin{array}{ccc}{\varepsilon_{xx}} & {0} & {0} \\ {0} & {\varepsilon_{yy}} & {-i \frac{2 n \theta_{F}}{k_{0}}} \\ {0} & {i \frac{2 n \theta_{F}}{k_{0}}} & {\varepsilon_{zz}}\end{array}\right) $$

where Θ[F] is the Faraday rotation angle, k0 is the vacuum wave number, and n is the isotropic refractive index. In this case, the waveguide has a uniform cross-section in the xy plane and the mode propagates in the z direction. The mode has its strongest field components in the y direction - a TM mode. Magnetization is applied along the x direction, so the mode experiences a Faraday rotation effect in the yz plane. Therefore, it is okay to apply symmetry along the y-axis. The script file [[MO_NRPS_Bahlmann_calculation_FDE.lsf]] can be used to calculate the NRPS for all 3 designs. This script also calls [[MO_NRPS_Bahlman_material.lsf]] to update the material properties due to the reversal of the magnetic field. The script produces a figure which accurately reproduces the results shown in fig. 2 of [1].
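The forward and backward tensors above differ only in the sign of the off-diagonal coupling 2nθF/k0, and this structure can be checked programmatically. A sketch follows; setting all diagonal entries equal to n² is a simplifying assumption of mine (ref. [1] allows distinct εxx, εyy, εzz), and the sample values are placeholders:

```python
import numpy as np

def mo_permittivity(n, theta_F, k0, forward=True):
    """Relative permittivity tensor for an x-magnetized MO medium,
    with coupling in the (y, z) block as in the text.

    Assumes eps_xx = eps_yy = eps_zz = n**2 (a simplification).
    theta_F is the Faraday rotation coefficient (rad/m), k0 = 2*pi/lambda0.
    """
    g = 2.0 * n * theta_F / k0        # off-diagonal MO coupling term
    s = 1.0 if forward else -1.0      # propagation-direction sign flip
    return np.array([[n**2, 0.0,       0.0],
                     [0.0,  n**2,      1j * s * g],
                     [0.0, -1j * s * g, n**2]], dtype=complex)

# Placeholder values: n ~ Ce:YIG, theta_F and k0 illustrative only.
eps_f = mo_permittivity(2.22, 3000.0, 4.05e6, forward=True)
eps_b = mo_permittivity(2.22, 3000.0, 4.05e6, forward=False)
```

Two quick checks fall out: each tensor is Hermitian (a lossless MO medium), and the backward tensor is the complex conjugate of the forward one, i.e. only the sign of the imaginary off-diagonal entries flips.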
The noise observed at certain thicknesses is due to coupling with slab waveguide modes, which can be eliminated by increasing the simulation region area. The "x max" boundary can also be set to Metal rather than PML, but more care must be taken with mode tracking through the sweep.

Simple direct simulation in FDTD

In this simple approach, two simulations are run with waves injected in opposite directions near the +x and -x simulation boundaries. The phase accumulated by the forward and backward propagating waves is recorded by an x linear frequency monitor and subtracted according to the above equation. By plotting the phase difference against the propagation distance, the NRPS can be calculated from the slope of the plot. This method is not efficient for studying the NRPS of straight waveguides, but it shows how the simulations can be run in FDTD for more complex geometries.

Simulation setup

In this example [2], we set ε[xy] = 0.005. The isotropic refractive indices of Si and Ce:YIG are 3.477 and 2.22. As discussed in the Matrix transformation page, the anisotropic tensor is diagonalized and transformed by the grid attribute. Below is the top view of the device with a TE mode injected along the x-axis.

[[Note]]: This simulation is set up on the xy plane whereas the publication has the structure set up on the xz plane.

Open [[MO_NRPS_Zhou.fsp]] and run [[MO_NRPS_Zhou_material.lsf]]. The script file shows how eps_forward and eps_backward are diagonalized and the grid attributes are set up. [[MO_NRPS_Zhou_calculation.lsf]] will then run two simulations for each configuration to obtain the propagation constants for the forward and backward propagating waves. Since the forward and backward propagating waves have different propagation constants, the accumulated phase differs; subtracting the phases of the two waves gives the phase difference.
The fit command is used to find the slope of the phase difference and therefore the NRPS. Repeating the process for different silicon thicknesses, the script file can generate the plot shown below for the MO(+)/Si/MO(-) and MO/Si/air configurations, corresponding to figure 2 in ref. [1]. Disable the MO_neg rectangle to run the simulation for the MO/Si/air configuration.

Bandstructure approach in FDTD

The other way to find the NRPS is to make use of the group velocity definition,

$$ \Delta \beta=\frac{\Delta \omega}{V_{g}}=\frac{\omega_{\text{forward}}-\omega_{\text{backward}}}{c} n_{g} $$

where V[g] and n[g] are the group velocity and group index. The bandstructure approach is not efficient compared to using the FDE solver, but it can be extended to study structures with periodic patterning, such as sub-wavelength grating waveguides. This approach employs Bloch boundary conditions to mimic an infinitely long waveguide. Since here we study a waveguide that is not actually periodic, we can use a single mesh cell in the direction of propagation. The bandstructure analysis group is used to find the frequency of the mode supported by the Bloch vector. Due to the perturbation from the magnetization, the forward and backward propagating waves have different propagation constants. Here we use the bandstructure approach to reproduce the results of figure 2 of [1], to compare with the FDE solver results above.

Simulation setup

As discussed in the Matrix transformation page, the anisotropic tensor is diagonalized and transformed by the grid attribute. By switching the sign of the Faraday rotation coefficient, the bandstructure analysis group is able to return the resonance frequencies due to the presence of the perturbation. Symmetric boundary conditions on the y axis can be applied to this simulation because the rotation is only effective in the yz plane; therefore, the time monitors are only present in the +x simulation region.
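Once the forward and backward peak frequencies and the group index are known, the NRPS follows directly from the group-velocity relation above. A minimal sketch; the numbers are placeholders I chose for illustration, not results from the example files:

```python
import math

C0 = 299792458.0   # vacuum speed of light, m/s

def nrps_from_peaks(f_forward, f_backward, n_group):
    """NRPS (Delta beta) in rad/m from the forward/backward band-structure
    peak frequencies (Hz) and the FDE group index:
    Delta beta = (w_f - w_b) / c * n_g, with w = 2*pi*f."""
    return 2.0 * math.pi * (f_forward - f_backward) / C0 * n_group

# Placeholder: a 1 GHz forward/backward splitting near 193.5 THz, n_g = 4.
dbeta = nrps_from_peaks(193.501e12, 193.500e12, 4.0)
```

This also makes the convergence requirements concrete: a gigahertz-scale splitting near 200 THz is a part-per-million effect, which is why a long simulation time and a densely sampled spectrum are needed to resolve the two peaks.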
The image below shows the xy pan view of the device with the TM mode being injected along the z-axis. The group indices are calculated by the FDE solver, open [[MO_NRPS_Bahlmann_group_index.lms]] and run the [[MO_NRPS_Bahlmann_group_index.lsf]]. Run the FDE simulations to generate a .mat that is used to carry the group indices for the NRPS calculation in FDTD. The design parameters for the FDE simulations are set in the script file, make sure the same parameters are set in the FDTD simulations. Open [[MO_NRPS_Bahlmann.fsp]] and run [[MO_NRPS_Bahlmann_calculation.lsf]], the script file will automatically run two simulations by switching the sign of the Faraday rotation angle by calling [[MO_NRPS_Bahlmann_material.lsf]]. The phase calculation script will then extract the resonance frequencies recorded by the bandstructure analysis group to calculate NRPS according to the group velocity equation above. Run the [[MO_NRPS_Bahlmann_calculation.lsf]], it will run two simulations (backward and forward). The bandstructure analysis group can return the spectrum for both simulations. Once the simulations are done, enter the command below to generate a plot for the spectra. plotxy(f_for*1e-12, spectrum_for, f_back*1e-12, spectrum_back); The spectra are slightly offset due to the magnetic perturbation. We can then use the find command to find the peak frequencies. Alongside with the group indices returned by the FDE solver, the NRPS can then be calculated. To obtain some initial results, the simulation time in FDTD is set to 100 fs. Users should use a longer simulation time to better distinguish the peak positions and therefore more accurate NRPS. Note the spectrum returned by the bandstructure analysis has 500000 points to make sure the point at the peak is resolved. Now change the commenting in MO_NRPS_Bahlmann_group_index.lsf to set design={"A","B","C"}; instead of just design={"A"};, and d2=linspace(0.27e-6,0.5e-6,10); instead of a single value for d2. 
Rerun this script to recalculate the group indices. Then, change the commenting in MO_NRPS_Bahlmann_calculation.lsf to use the same values of design and d2. Rerun the script and the following plot will be generated. Here, we have reduced the range of d2 to reduce the number of simulations, but the agreement with the FDE results and with [1] is good. To obtain more reliable and accurate results, convergence testing is required:
• x and y span: In some situations (d2 < 0.3 um), the mode can have a longer tail, requiring a larger source and simulation span to include it.
• Simulation time: The waveguide in the simulation is supposed to be infinitely long; a longer simulation time allows the bandstructure analysis group to collect more data and return a more accurate f_peak.
• Mesh size: A finer mesh may be required to resolve the mode profile and better capture the Faraday rotation effect.
• Bandstructure analysis group: Only a limited number of time monitors are used in the bandstructure analysis group. Their position and number can affect the results slightly, so this should also be considered in convergence testing. Ideally, the monitors should be placed at the strongest field locations.
Related publications
1. N. Bahlmann et al., "Improved Design of Magnetooptic Rib Waveguides for Optical Isolators," Journal of Lightwave Technology, 1998.
2. H. Zhou et al., "Analytical calculation of nonreciprocal phase shifts and comparison analysis of enhanced magneto-optical waveguides on SOI platform," Optics Express, 2012.
See also
F1: Power and Downforce

In any F1 race, the key is tuning the car for the optimal balance of power and drag. With Formula 1 coming to Austin later this year I figured I should probably learn something about the "sport."^[1] The most interesting thing about F1 is less about the petty politics and more about the way that the teams modify cars from track to track. In 2021 the F1 season consists of 23 races which take place everywhere from Azerbaijan to Japan. Each track is unique both in terms of its structure and location, which leads the teams to make various tradeoffs between downforce and power. Downforce is downward aerodynamic lift that pushes the car into the ground, and it increases the faster the car goes.^[2] As a rough heuristic, you want power where the track is very straight and you want downforce when the track has lots of turns.

High Power

The Autodromo Nazionale Monza (Monza) in Italy is a classic "power track"^[3] since it is relatively straight. As you might expect, the Monza track is home to the fastest lap time ever recorded.

High Downforce

On the other hand, the Suzuka Circuit in Japan is a classic "downforce track" since it contains many intricate turns.

How Do Power and Downforce Relate?

Now that we've established tracks that are high "power" and high "downforce" we are in a better position to understand the way these two forces interact.
First, we know that power is the product of force and velocity (strictly, the dot product of the force and velocity vectors), which can be expressed as follows:
\[P = F \times V\]
Second, we know that drag is a function of the density of the fluid the car travels through, the speed of the car relative to the fluid, the drag coefficient, and the cross-sectional area:
\[F_D = \frac{1}{2}\rho \times v^2 \times C_D \times A \]
For reference:
• $F_D$ is the drag force
• $\rho$ is the density of the fluid, which in most races is the air density, itself a function of altitude, time of day, weather and a bunch of other exogenous factors
• $v$ is the speed of the object relative to the fluid
• $C_D$ is the drag coefficient
• $A$ is the cross-sectional area
We can combine the two equations^[4] (taking $F = F_D$ at a steady speed), which gives us:
\[ P = \frac{1}{2}\rho \times C_D \times A \times v^3 \]
In a race you can't really change $\rho$, since that's tantamount to changing the elevation of the track. However, you can reduce the size of the wing hitting the air ($A$) and you can also try to reduce the drag coefficient. Minimizing $C_D$ and $A$ means making the wing smaller and more aerodynamic which, all else equal, means you need less power to sustain a given velocity. This highly simplified and stylized tradeoff is what every F1 team is analyzing at each and every track.

High Power?

Let's look at one last example: the Autódromo Hermanos Rodríguez Grand Prix Circuit in Mexico City. Looking at the track it seems like there are a lot of straights, so you may be tempted to think that it's a track that biases toward power. However, this is where that pesky $\rho$ comes into play. Recall, $\rho$ stands for the density of the fluid, i.e. the air density. Turns out Mexico City is ~7,350 ft above sea level, and air density decreases as you go higher.
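The combined relation is easy to explore numerically. In the sketch below every number is an illustrative assumption: the drag coefficient, frontal area, speed, and the two air densities are ballpark figures, not real F1 data.

```python
def drag_power(rho, c_d, area, v):
    """Power needed to overcome aerodynamic drag: P = 0.5 * rho * C_D * A * v**3."""
    return 0.5 * rho * c_d * area * v ** 3

C_D, AREA = 0.9, 1.5   # hypothetical drag coefficient and frontal area (m^2)
V = 90.0               # speed in m/s, roughly 324 km/h

p_sea_level = drag_power(1.225, C_D, AREA, V)   # standard sea-level air density
p_mexico_city = drag_power(0.95, C_D, AREA, V)  # thinner air at ~2,250 m altitude

# Thinner air means less drag power at the same speed -- and also less downforce.
```

The same v-cubed scaling that saves power on the straights is what robs the wings of downforce at altitude.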
If we decrease $\rho$ and hold everything else constant, we can see that the car will generate less downforce at a given speed, which actually means that teams may be better off with a more aggressive wing setup than they would otherwise use on such a straight track.
Transilvania Quantum

When considering cybersecurity and quantum computing, most would think about Shor's algorithm and how it will one day be used to compromise the current encryption algorithms on the internet. However, there is a different perspective on cybersecurity in the context of quantum computing, namely the security of quantum computers themselves. Together with , we released at Black Hat USA 2024 a white paper documenting the first comprehensive study on the security of quantum computers and quantum computing in the NISQ era. What will be the first concrete practical application for which quantum advantage is achieved? First, we assume that error correction is not yet available, which rules out many well-known quantum algorithms such as Grover's algorithm and Quantum Phase Estimation. Many of the promising applications in machine learning require the existence of QRAM, a device that does not yet exist. For large-scale optimization tasks, while promising quantum heuristics exist, the case for quantum advantage on gate-based quantum computers has not been made convincingly enough theoretically. Simulation of physical systems for chemistry and materials science is one instance of a problem for which the input data is small enough to be easily loaded on current NISQ devices, while the existing classical solutions are known to scale exponentially with system size. This leads many of us to believe that chemistry will be the field where quantum computers first deliver real value. One algorithm that is accessible in the NISQ era is the Variational Quantum Eigensolver (VQE). We conducted a study on the run-time needed to calculate the ground-state energies of several molecules using VQE. The results are not particularly encouraging by themselves, pointing out that more research is needed. On a more positive note, there is some evidence that spin-lattice models scale better than molecules, at the price of using more qubits.
On the Equivalence of Eulerian, Lagrangian, and Broad Continuous Solutions for Balance Laws with Non-Convex Flux: A Study on Source Term Compatibility and Counterexamples

Core Concepts
This paper investigates the relationship between Eulerian, Lagrangian, and Broad solutions for balance laws with non-convex flux, focusing on the compatibility of source terms in different formulations and highlighting unexpected discrepancies through counterexamples.
• Bibliographic Information: Alberti, G., Bianchini, S., & Caravenna, L. (2024). Eulerian, Lagrangian and broad continuous solutions to a balance law with non convex flux II. arXiv preprint
• Research Objective: This research aims to complete the comparison between Eulerian, Lagrangian, and Broad solution concepts for balance laws with non-convex flux initiated in a companion paper. The study focuses on analyzing the relationships between the corresponding notions of source terms and examining the sharpness of the assumption on inflection points for equivalence.
• Methodology: The authors employ a theoretical and analytical approach. They utilize mathematical constructions and counterexamples to demonstrate the intricacies and potential discrepancies between different solution interpretations. Specifically, they analyze the behavior of solutions along characteristic curves, focusing on differentiability and Lipschitz continuity.
• Key Findings:
□ When the inflection points of the flux function are negligible, the source terms in the Eulerian, Lagrangian, and Broad formulations are compatible.
□ Surprisingly, even for the quadratic flux, Lagrangian parameterizations can have a Cantor part, leading to Lagrangian sources that are not Eulerian sources.
□ The study reveals that even for convex fluxes (not uniformly convex), a continuous Eulerian solution might not be differentiable along characteristic curves on a set of positive L^2-measure.
□ When the assumption of negligible inflection points fails, a continuous function that is both an Eulerian and a Lagrangian solution might not be a Broad solution.
• Main Conclusions: The paper demonstrates that while the equivalence of Eulerian, Lagrangian, and Broad solutions holds under certain conditions, particularly the negligibility of inflection points, subtle differences arise in the interpretation and compatibility of source terms. The counterexamples highlight the importance of carefully considering the chosen formulation and its implications for source term representation.
• Significance: This research contributes to a deeper understanding of the nuances and potential pitfalls when working with different solution concepts for balance laws, particularly in the context of non-convex fluxes. The findings are relevant for mathematicians and researchers working on partial differential equations, fluid dynamics, and related fields.
• Limitations and Future Research: The paper focuses on continuous solutions in one spatial dimension. Further research could explore the extension of these findings to discontinuous solutions and higher dimensions. Additionally, investigating the implications of these results for numerical methods used to solve balance laws would be beneficial.
Eulerian, Lagrangian and broad continuous solutions to a balance law with non convex flux II
The key assumption is L^1(clos(Infl(f))) = 0 (negligibility of the closure of the set of inflection points). The paper uses the quadratic flux f(z) = z^2/2 for some counterexamples. It defines a compact set K ⊂ R^2 of positive Lebesgue measure whose intersection with any characteristic curve is H^1-negligible.
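To make the characteristic curves discussed above concrete: for the quadratic flux f(z) = z^2/2, a characteristic of the balance law u_t + f(u)_x = g satisfies dx/dt = f'(u) = u, and u evolves as du/dt = g along the curve. The following Python sketch is not from the paper; it is a plain forward-Euler tracer, valid only for smooth data before characteristics cross.

```python
def trace_characteristic(x0, u0, g, t_end, dt=1e-3):
    """Trace one characteristic of u_t + (u^2 / 2)_x = g(t, x):
    dx/dt = u (since f'(u) = u for the quadratic flux) and
    du/dt = g(t, x) along the curve, via forward Euler steps."""
    t, x, u = 0.0, x0, u0
    while t < t_end - 1e-12:
        x += dt * u
        u += dt * g(t, x)
        t += dt
    return x, u

# With zero source the solution is constant along the characteristic,
# so the curve is the straight line x = x0 + u0 * t.
x1, u1 = trace_characteristic(0.0, 2.0, lambda t, x: 0.0, t_end=1.0)
```

With a nonzero source the traced curve bends, which is precisely the interplay between source terms and characteristics that the paper's solution concepts formalize.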
Deeper Inquiries

How do these findings concerning the compatibility of source terms in different formulations impact the development and analysis of numerical schemes for balance laws with non-convex fluxes?

These findings have significant implications for the development and analysis of numerical schemes for balance laws with non-convex fluxes, highlighting the challenges posed by non-convexity and the careful considerations needed for accurate simulations:
• Choice of Solution Framework: The incompatibility of source terms between Eulerian and Broad formulations, especially when inflection points are not negligible, emphasizes the importance of choosing the appropriate solution framework for numerical schemes. For instance, if a numerical method implicitly assumes a Broad solution structure, but the underlying problem only admits an Eulerian solution with a source term incompatible with the Broad interpretation, the numerical solution may not converge to the correct weak solution.
• Treatment of Inflection Points: The negligibility of inflection points emerges as a crucial condition for compatibility. Numerical schemes must handle these points carefully. Standard methods may require modifications, such as adaptive mesh refinement or specialized shock-capturing techniques near inflection points, to accurately resolve the solution's behavior and prevent spurious oscillations.
• Design of Numerical Fluxes: The design of numerical flux functions, a key component of the finite volume methods commonly used for balance laws, is directly impacted. The choice of numerical flux can influence the scheme's ability to capture the correct weak solution, particularly in the presence of non-convex fluxes and source terms. Schemes that implicitly assume a specific source term structure may need adjustments to accommodate the broader class of admissible source terms.
• Convergence Analysis: The analysis of convergence for numerical methods becomes more intricate. Traditional techniques relying on smooth solutions and Taylor expansions may not be sufficient. The potential for discontinuities and the interplay between the source term and the non-convex flux necessitate the use of more sophisticated tools, such as weak convergence concepts and entropy conditions, to rigorously establish convergence to the physically relevant solution.

Could there be a weaker condition than the negligibility of inflection points that still guarantees the compatibility of Eulerian and Broad sources for a broader class of flux functions?

Finding a weaker condition than the negligibility of inflection points that guarantees compatibility is an open question. Here are some potential research directions:
• Geometric Characterization of Compatibility: Instead of focusing solely on the measure of inflection points, explore geometric conditions on the flux function and its derivatives that ensure compatibility. This could involve analyzing the curvature of level sets of f′(u) or studying the structure of the set where f′′(u) changes sign.
• Generalized Notions of Sources: Investigate whether a more general or relaxed definition of source terms could bridge the gap between Eulerian and Broad formulations. This might involve considering measure-valued sources or distributions with specific regularity properties along characteristic curves.
• Restricted Classes of Solutions: Explore compatibility within specific subclasses of solutions. For instance, restricting to piecewise smooth solutions with a finite number of discontinuities might allow for weaker conditions on the flux function.
• Numerical Investigations: Conduct extensive numerical experiments with various non-convex fluxes and source terms to gain empirical insights into potential weaker conditions. This could involve systematically varying the flux function and observing the compatibility of numerically computed Eulerian and Broad sources.
How can the understanding of the geometric properties of characteristic curves, as highlighted in the paper's counterexamples, be leveraged to develop more robust and accurate numerical methods for solving balance laws?

The geometric insights into characteristic curves provided by the counterexamples offer valuable guidance for developing improved numerical methods:
• Characteristic-Based Mesh Adaptation: Design adaptive mesh refinement strategies that track the behavior of characteristic curves. By concentrating grid points in regions where characteristics converge or exhibit complex behavior, such as near inflection points or where the Lagrangian parameterization has a Cantor part, numerical schemes can achieve higher accuracy and better resolve the solution's structure.
• Lagrangian-Eulerian Methods: Develop hybrid numerical methods that combine the strengths of both Lagrangian and Eulerian approaches. These methods could use a Lagrangian framework to track the evolution of characteristic curves while employing an Eulerian grid to handle the overall flow field. This combination can provide accurate wave propagation while avoiding the mesh tangling issues common in purely Lagrangian methods.
• High-Order Reconstruction along Characteristics: Construct high-order reconstruction procedures that exploit the solution's smoothness along characteristic curves. By using information from neighboring points along the same characteristic, these methods can achieve higher accuracy compared to traditional reconstructions that only consider neighboring cells.
• Data Structures for Characteristic Tracking: Develop efficient data structures and algorithms for tracking characteristic curves in numerical simulations. This could involve using tree-based data structures to represent the characteristic mesh or employing fast marching methods to compute characteristic trajectories.
• Error Estimation and Control: Design error estimators that specifically account for the geometric properties of characteristics. By estimating the error along characteristic curves, numerical schemes can adapt their time step or mesh size to maintain a desired level of accuracy, particularly in regions where characteristics converge or exhibit complex behavior.
What is Escape Velocity in Physics?

Definition: Escape velocity is the minimum velocity required for a body of mass m to escape the gravitational influence of the Earth. More generally, it is the minimum speed an object needs to attain to overcome the gravitational force pulling it back towards the centre of a larger body.

Understanding Escape Velocity

For example, the Earth has a gravitational pull that keeps us all on the ground. To break free from this gravitational pull and venture into space, a spacecraft needs to reach a speed of about 11.2 km/s, which is the escape velocity of Earth. Escape velocity is a fundamental concept in physics and astronomy: it is the speed at which an object needs to travel to break free from the gravitational pull of another object, such as a planet or a star. In the context of space travel, understanding escape velocity is critical to launching and navigating spacecraft beyond the Earth's atmosphere. Have you ever wondered how a spacecraft can break free from the Earth's gravitational pull and venture out into the vastness of space? The answer lies in this concept. A spacecraft at Earth's surface needs to attain a speed of approximately 11,200 metres per second, or about 25,000 miles per hour, to escape into space, disregarding atmospheric losses.
Escape Velocity Formula Derivation

The derivation of the escape velocity is straightforward. Suppose we want to move an object from a point on the surface of the Earth to infinity. For the object to just escape, its kinetic energy must equal the change in its gravitational potential energy:

Kinetic energy (K.E.) = change in gravitational potential energy
(1/2)mv^2 = 0 − (−GMm/r)

since the potential energy at infinity is 0 and at the surface of the Earth it is −GMm/r. Therefore

(1/2)mv^2 = GMm/r

which implies

v = √(2GM/r)

Therefore, the escape velocity formula is v = √(2GM/r) or, equivalently, v = √(2gR), since g = GM/R². The SI unit of escape velocity is metres per second (m/s); it is often quoted in kilometres per second.

The escape velocity formula extends beyond theory, finding practical application in space exploration: it lets us calculate the escape velocity of any celestial body given its mass and radius. Earth, with an escape velocity of about 11,200 m/s, serves as a convenient benchmark.

Escape Velocity Formula Details

Let's dissect the escape velocity formula. The gravitational constant, denoted by G and valued at 6.67408 × 10⁻¹¹ m³ kg⁻¹ s⁻², forms the foundation. Combined with the mass of the planet (M) and the distance from the centre of gravity (R), this formula captures the essence of escape velocity. Alternatively, we can express escape velocity in terms of Earth's surface gravity (g), which gives a more practical viewpoint.

How to Calculate Escape Velocity?

The escape velocity formula is:
Escape Velocity, v = √(2GM/r) or v = √(2gR)
• G is the gravitational constant (6.674 × 10⁻¹¹ N m²/kg²)
• M is the mass of the larger object (in kg)
• r is the distance between the object and the centre of the larger object (in metres)
• R is the radius of the Earth
The escape velocity of a planet or a celestial body depends on its mass and size.
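Since the formula only needs a mass and a radius, it is easy to wrap in code. Below is a minimal Python sketch; the mass and radius values are rounded textbook figures.

```python
import math

G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def escape_velocity(mass_kg, radius_m):
    """Escape velocity v = sqrt(2 * G * M / r), in m/s."""
    return math.sqrt(2.0 * G * mass_kg / radius_m)

v_earth = escape_velocity(5.972e24, 6.371e6)  # roughly 11.2 km/s
v_moon = escape_velocity(7.35e22, 1.74e6)     # roughly 2.4 km/s
```

Swapping in the mass and radius of any other body reproduces the values listed below.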
Here are some examples of the escape velocities of different celestial bodies in our solar system:
• Earth: 11.2 km/s
• Moon: 2.4 km/s
• Mars: 5.03 km/s
• Jupiter: 59.5 km/s
• Sun: 617.5 km/s
• Venus: 10.36 km/s
• Saturn: 36.09 km/s
• Uranus: 21.38 km/s
• Neptune: 23.56 km/s
As you can see, the escape velocity tends to increase with the mass of the celestial body: it takes more energy to escape the gravitational pull of a larger object than a smaller one.

Problem-solving Approach: Navigating Celestial Puzzles

Let us put this knowledge into practice with solved examples. Take Jupiter, for instance. Armed with its mass (1.898 × 10²⁷ kg) and radius (71,492 km), we can employ the escape velocity formula. Plugging these values into the equation Vₑₛc = √(2GM/R), we arrive at a staggering 59.5 km/s, consistent with the table above.

The Moon presents a different set of parameters. With a mass of 7.35 × 10²² kg and a radius of 1.74 × 10⁶ m, the formula gives an escape speed of about 2.38 km/s, far smaller than Earth's, offering a glimpse into the diverse escape velocities in our celestial neighbourhood.

Escape Velocity Solved Problem

What is the escape velocity of a satellite launched from the Earth's surface? [Take g as 10 m/s² and the radius of the Earth as 6.4 × 10⁶ m]

Data: g = 10 m/s² and R = 6.4 × 10⁶ m
Escape velocity, v = √(2gR) = √(2 × 10 × 6.4 × 10⁶) = 1.13 × 10⁴ m/s = 11.3 km/s
Therefore, the escape velocity of the satellite launched from the surface of the Earth is about 11.3 kilometres per second (km/s).

Importance of Escape Velocity in Space Travel

Understanding escape velocity is critical to launching and navigating spacecraft beyond the Earth's atmosphere. To launch a spacecraft into orbit, it needs to reach a minimum speed known as the orbital velocity.
This is the speed at which the spacecraft can maintain a stable orbit around the Earth. Once in orbit, the spacecraft needs to reach the escape velocity to break free from the Earth's gravitational pull and venture out into space. The spacecraft needs to be equipped with powerful engines and enough fuel to reach these speeds and manoeuvre in space.

The Role of Gravity Assist in Space Travel

Gravity assist is a technique used by spacecraft to gain speed and change direction by using the gravity of a planet or a moon to slingshot around it. This technique allows spacecraft to conserve fuel and gain momentum without the need for additional propulsion. Gravity assist is based on an exchange of momentum with the planet: in the planet's own frame of reference the spacecraft leaves with the same speed it arrived, but in the Sun's frame it can pick up speed from the planet's orbital motion. Gravity assist has been used in several space missions, including the Voyager and Cassini missions, to explore the outer planets of our solar system.

Variations in Escape Velocity: Tailoring the Formula for Celestial Diversity

Escape velocity is not a one-size-fits-all concept. The adaptability of the formula to different celestial bodies unveils a fascinating aspect of celestial mechanics. While Earth provides a reference point with its specific escape velocity, the formula seamlessly adjusts to the unique characteristics of each celestial entity, from the gas giants to the smallest moons.

Frequently Asked Questions

Q. How does the escape velocity vary on different planets?
The escape velocity of a planet depends on its mass and size. The larger and more massive the planet, the higher its escape velocity. For example, the escape velocity of Jupiter is much higher than that of Earth or Mars.

Q. Can an object with less velocity than the escape velocity ever escape the gravitational pull of a planet?
No, an object with less velocity than the escape velocity of a planet will not be able to escape its gravitational pull. It will be pulled back towards the planet and eventually fall back to the planet's surface.

Q. How does gravity assist work?
Gravity assist works by using the gravity of a planet or a moon to slingshot a spacecraft around it. As the spacecraft swings past the planet, it exchanges momentum with it: relative to the Sun the spacecraft can gain speed and change direction, borrowing a tiny fraction of the planet's orbital momentum. This technique allows spacecraft to conserve fuel and gain momentum without the need for additional propulsion.

Q. What is the difference between orbital velocity and escape velocity?
Orbital velocity is the speed at which an object needs to travel to maintain a stable orbit around another object. Escape velocity, on the other hand, is the speed at which an object needs to travel to break free from the gravitational pull of another object.

Q. How does escape velocity impact space missions?
Escape velocity plays a crucial role in space missions, as it determines the amount of energy required to launch a spacecraft and navigate it beyond the Earth's atmosphere. To reach escape velocity, a spacecraft needs powerful engines and enough fuel to reach the required speed.

Q. Can escape velocity be exceeded?
Yes, it is possible to exceed escape velocity, but it requires additional energy and propulsion. Once escape velocity is exceeded, the object will continue to move away from the larger object and into space.

In summary, the concept of escape velocity is critical to space travel and exploration. It determines the amount of energy required to launch and navigate spacecraft beyond the Earth's atmosphere. Understanding the principles of escape velocity and gravity assist has allowed us to explore the outer reaches of our solar system and beyond.

You may also like to read: What is the Doppler Effect in Physics?
A manifold is a topological space that locally resembles Euclidean space near each point. That is, around each point on the manifold there exists a neighborhood that is topologically equivalent to the open unit ball in $\mathbb{R}^n$.

Why are manifolds useful

My confusion

I think one thing I've struggled with since encountering manifolds was truly understanding not only what they are, but why they are necessary and what they let us describe that we couldn't otherwise. For some reason the formulation just felt so foreign; while I could grasp its definition and solve problems with manifolds, they never really felt comfortable. This is likely due to lack of overall experience with them; after all,

"In mathematics, you never understand things; you just get used to them." – John von Neumann

Looking back now, I think I just managed to overlook the manifold's slightly simpler origins. That is, when presented with the definitions of manifolds, they immediately feel obscure and confusing. Even when presented with simple examples and making connections to familiar concepts like graphs of functions, I still felt like there was this deeper, mysterious reason why they were here that I wouldn't be able to understand. I think the key insight that resolves much of this confusion is realizing that there are regular situations where the notion of a manifold arises naturally, and we simply recognized that there was something new and not yet formal about the mathematical object we were looking at. In this context the seemingly obscure manifold definitions (which, with time, feel less so) feel far less contrived: they are just a necessary formalization of a natural phenomenon we observe. Even if this wasn't really how the notion of manifold came to be, it helps me to think that they could arise from such simple origins, simply by recognizing that there was no existing machinery to describe what was there.
Most of this little thought spawned from reading these few sentences in the Wikipedia article on manifolds:

The concept of a manifold is central to many parts of geometry and modern mathematical physics because it allows complicated structures to be described and understood in terms of the simpler local topological properties of Euclidean space. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions.

Phrasing things this way made it easy to ponder the necessity of manifolds and to chip away at my lingering confusion about their origin.
Example 65.5 Exact Savage Multisample Test

The NPAR1WAY Procedure

A researcher conducting a laboratory experiment randomly assigned 15 mice to receive one of three drugs. The survival time (in days) was then recorded. The following SAS statements create the data set Mice, which contains the observed survival times for the mice. The variable Treatment denotes the treatment received. The variable Days contains the number of days the mouse survived.

data Mice;
   input Treatment $ Days @@;

The following statements request a Savage test of the null hypothesis that there is no difference in survival time among the three drugs, against the alternative hypothesis of a difference among the drugs. Treatment is the CLASS variable, and Days is the analysis variable. The SAVAGE option in the PROC statement requests an analysis of Savage scores, and the SAVAGE option in the EXACT statement requests exact p-values for the Savage test. Because the sample size is small, the large-sample normal approximation might not be adequate, and it is appropriate to compute the exact test.

proc npar1way savage data=Mice;
   class Treatment;
   var Days;
   exact savage;
run;

Output 65.5.1 shows the results of the Savage test. The exact p-value is 0.0445, which supports a difference in survival times among the drugs at the 0.05 level. The asymptotic p-value based on the chi-square approximation is 0.0638.

Output 65.5.1: Savage Multisample Exact Test

   N   Sum of Scores   Expected Under H0   Std Dev Under H0   Mean Score
   5       -3.367980                 0.0           1.634555    -0.673596
   5        0.095618                 0.0           1.634555     0.019124
   5        3.272362                 0.0           1.634555     0.654472

(The treatment labels for the three rows are not shown in this excerpt.)
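For readers curious about the scores themselves, the Savage (exponential) scores that the test assigns to ranks 1..n can be computed directly. The Python sketch below illustrates the standard formula only; it is not SAS internals, and it ignores the averaging of scores that PROC NPAR1WAY applies to tied values.

```python
def savage_scores(n):
    """Savage scores for ranks 1..n (no ties):
    a(j) = sum_{i=1}^{j} 1 / (n - i + 1) - 1.
    The n scores sum to zero by construction."""
    scores = []
    partial = 0.0
    for j in range(1, n + 1):
        partial += 1.0 / (n - j + 1)
        scores.append(partial - 1.0)
    return scores

scores = savage_scores(5)  # scores for a sample of n = 5 ranked observations
```

For the mice data one would rank all 15 survival times, assign each its Savage score from savage_scores(15), and sum the scores within each treatment group, which is what the "Sum of Scores" column above reports.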
Top 20 College Math Tutors Near Me in Gravesend Top College Math Tutors serving Gravesend Sara: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...whether it is hands-on such as taking a bike apart, or sitting down and tackling mathematical and logical problems while ardently debating plots with my siblings. I come from a small Albanian immigrant family, with an education zealot mother who taught me to work hard at everything given to me and instilled the love of... Subject Expertise • College Math • Linear Algebra • Geometry • Algebra • +302 subjects Friedrich: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...as a tutor has been a passionate part of me even before teaching individuals at high school and university level. During my sabbatical, I travelled across Spain, spending most of that period in Barcelona. Having completed a TEFL instructor course, I applied this acquired skill and knowledge to tutor English and German at prep schools.... Education & Certification • University College London, University of London - Bachelor, BSc Economics anf Geography Subject Expertise • College Math • Competition Math • Math 2 • Grade 10 Math • +68 subjects Mohammed: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...at Cardiff University studying Dental Technology. I am especially passionate about teaching Biology Maths and English. From my time studying through GCSE's and A-Levels I have picked up numerous skills and exam techniques. I am especially passionate about teaching as well as learning about Biology as I find understanding the human anatomy is crucial to... 
Education & Certification • Cardiff Metropolitan University - Bachelor in Arts, Dental Laboratory Technology Subject Expertise • College Math • Elementary School Math • Key Stage 3 Maths • Key Stage 1 Maths • +67 subjects Education & Certification • London Southbank University - Bachelor, Telecommunications and Computer Network Engineering • London Southbank University - Master's/Graduate, Computer Systems and Network Engineering • State Certified Teacher Subject Expertise • College Math • College Algebra • Algebra 2 • Pre-Algebra • +16 subjects Georgy: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...Learning, at the highest-ranked computer science department in the UK. From a young age, I was incredibly interested in mathematics, physics, and computer science, and I have several years of experience tutoring students, helping them understand their subjects and improve their grades in both the short and long terms. I do this by not only... Education & Certification • UCL - Bachelor of Science, Mathematics • UCL - Master of Science, Data Processing Technology Subject Expertise • College Math • AP Calculus BC • Grade 10 Math • Competition Math • +45 subjects Edwin: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...their mathematical skills and knowledge. This is something anyone can and should do! I have been running an after school maths club for 16-17 year olds for 3 years now, which has had outstanding feedback from teachers and which I have found very personally rewarding. I have also tutored many undergraduates while writing my master's at...
Education & Certification • University of Warwick - Bachelor, Mathematics • University of Warwick - Master's/Graduate, Mathematics • University College London, University of London - Doctorate (PhD), Mathematics Subject Expertise • College Math • IB Further Mathematics • IB Mathematics: Applications and Interpretation • IB Mathematics: Analysis and Approaches • +6 subjects Charlotte: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...where I focused my studies on French and English literature. I grew up in a French and British household, so I have both languages as a mother tongue. I have tutored English and French to students of all ages (4 to 30 years old) for over a year now - my studies in Russian have... Subject Expertise • College Math • Elementary School Math (in French) • AP Calculus AB • IB Mathematics: Applications and Interpretation • +87 subjects Yatin: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...in India, studying under the Indian board and tutoring students in Math and English for 2 years under an NGO in Delhi. I believe my strengths lie in quantitative subjects involving calculus, matrix algebra and trigonometry, which I frequently apply in my economics degree. I believe in a steady approach with students, identifying the strengths... Education & Certification • University College London - Bachelor of Economics, Economics Subject Expertise • College Math • Calculus and Vectors • Algebra • Statistics • +31 subjects Pascal: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...received a 1st class (2015-2018) Mathematical Sciences postgraduate diploma at Durham (2018-2019) currently studying a part-time Math's MSc at King's College (2020-2022) Pascal is a Maths Scholar who has a deep passion for education and learning. Pascal teaches and tutors Maths from KS1 all the way up to A-level. His two years tutoring experience at... 
Education & Certification • King's college London - Bachelor in Arts, Mathematics Subject Expertise • College Math • Calculus • Grade 10 Math • Middle School Math • +27 subjects Amit: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...with assignments, homework assistance, test and exam preparation, and revision. I have experience in helping from 6th Grade up to 12th grade Mathematics and Physics. I also tutor Pre-Calculus, Calculus 1, and Calculus 2. I tutor adult learners as well. I have a Masters Degree in Internet Computing and a BEng Degree in Aeronautical Engineering.... Education & Certification • City University - Bachelor of Engineering, Aerospace Engineering • Queen Mary University London - Master of Science, Computer Science Subject Expertise • College Math • College Algebra • Trigonometry • Calculus • +33 subjects Alexandra: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...to achieve high standards. I am approachable which makes it easy to work as part of a team as well as individually and adapt to different working environments. Being self-motivated helps me to exceed in any task embarked upon and I'm determined to broaden my experience and knowledge of working environments. Education & Certification • University of Kent - Bachelor of Science, Mathematics Subject Expertise • College Math • Calculus 2 • Calculus • Elementary School Math • +80 subjects Deniz: Gravesend College Math tutor Certified College Math Tutor in Gravesend Mathematics is a world on its own. But, in the current education system for most, Math is a mere chore. I want to teach in a way that encourages the next generation to love Math and to think mathematically. I have received a First Class Honor degree in Mathematics from the University of Dundee. 
Education & Certification • University of Dundee - Bachelor of Science, Mathematics Subject Expertise • College Math • Pre-Calculus • College Statistics • Calculus and Vectors • +20 subjects Nidhi: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...in Singapore, and finished my schooling years in Jakarta with an International Baccalaureate Diploma. Over the years, I have had numerous opportunities to work with children and students like myself. Whether it was being a teacher's assistant, leading service clubs devoted to teaching underprivileged youth in Indonesia, or tutoring friends and peers, I have always... Education & Certification • University College London - Bachelor of Science, Psychology Subject Expertise • College Math • Grade 10 Math • Elementary School Math • Middle School Math • +26 subjects Amelia Elizabeth: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...this through teaching Maths and helping students who thought Maths was so difficult to find their own rhythm to tackle different Maths problems. I started tutoring students in GCSE and A Level Maths whilst volunteering for a youth charity called Pulse Community. I helped the young people who were having trouble learning and understanding... Subject Expertise • College Math • Geometry • GCSE Chemistry • GCSE • +5 subjects Huma: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...NUST and doing a PhD at Queen Mary University of London for senior research with a fully funded opportunity. I constantly strive to learn and understand the physics discipline and more, as well as provide understandable ways to teach it. I am constantly learning in this subject and hope to provide the best services to you as...
Education & Certification • NUST - Master of Science, Physics • Queen Mary - Doctor of Philosophy, Physics Subject Expertise • College Math • Algebra 2 Class • Trigonometry • Applied Mathematics • +56 subjects Aadam: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...America, teaching English as a second language and learning Spanish. I'm passionate about being able to teach and explain the things I found so fascinating during my education. Having experienced tutoring as a student and as a teacher, I want to impart my passion for maths and physics, so you can enjoy learning. Education & Certification • University of Bristol - Bachelor of Science, Physics • University of York - Master of Science, Physics Subject Expertise • College Math • Grade 9 Mathematics • Grade 10 Math • Grade 11 Math • +18 subjects Hussein: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...a strong foundation and promoting a growth mindset. I hold a BSc. degree/Postgraduate Diploma in Physics and CELTA - Certificate of English Language Teaching to Adults. I have experience teaching and supporting in secondary schools and colleges. Physics, mathematics and ESL are my areas of expertise, and I particularly enjoy tutoring English as a second... Education & Certification • Royal Holloway University of London - Bachelor, Physics • University of St. Andrews - Master's/Graduate, Physics • State Certified Teacher Subject Expertise • College Math • Algebra 2 • Algebra • Study Skills and Organization • +20 subjects Idir: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...and Management with a focus on Business Development Proposal as a final project.
Since graduation, I have been tutoring and have worked in the education industry where I have held various positions, ranging from teaching English as a foreign language, to lecturing at UK universities modules including International Business, Management, Organisational behavior, and Business skills.... Education & Certification • University of Gloucestershire - Bachelor in Arts, Business, General • Cardiff Metropolitan University - Masters in Business Administration, Business Administration and Management Subject Expertise • College Math • AP Microeconomics • International Business • Microeconomics • +28 subjects Efetobore: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...gas technicians at North East Scotland college through foundational learning and specialist modules, I have worked with a range of students ( age 17 - 39), and understand the diversity in learner's needs. However my focus is always on ensuring the learners get the help and support needed to achieve success. So looking forward to... Education & Certification • University of Aberdeen - Master's/Graduate, Electrical & Electronics Engineering • State Certified Teacher Subject Expertise • College Math • Pre-Calculus • Calculus • Math • +17 subjects Amirhossein: Gravesend College Math tutor Certified College Math Tutor in Gravesend ...than 10 years of experience in teaching math and physics in both PERSIAN and ENGLISH languages to high school, college, and university students. My method is based on problem-solving which prepares you for the tests/exams and at the same time gives an intuition about the concepts. Last but not least, we will have a fun... 
Education & Certification • University of Tabriz - Bachelor of Science, Laser and Optical Engineering • Shahid Beheshti University - Master of Science, Optics • University of Manitoba - Doctor of Philosophy, Biomedical Engineering Subject Expertise • College Math • IB Further Mathematics • Key Stage 2 Maths • Key Stage 3 Maths • +486 subjects Private College Math Tutoring in Gravesend Receive personally tailored College Math lessons from exceptional tutors in a one-on-one setting. We help you connect with the best tutor for your particular needs while offering flexible scheduling to fit your busy life. Your Personalized Tutoring Program and Instructor Identify Needs Our knowledgeable directors help you choose your tutor with your learning profile and personality in mind. Customize Learning Your tutor can customize your lessons and present concepts in engaging easy-to-understand-ways. Increased Results You can learn more efficiently and effectively because the teaching style is tailored to you. Online Convenience With the flexibility of online tutoring, your tutor can be arranged to meet at a time that suits you. Call us today to connect with a top Gravesend College Math tutor
GMAT Math

Want to review GMAT Math but don’t feel like sitting for a whole test at the moment? Varsity Tutors has you covered with thousands of different GMAT Math flashcards! Our GMAT Math flashcards allow you to practice with as few or as many questions as you like. Get some studying in now with our numerous GMAT Math flashcards.

The Quantitative Reasoning section of the Graduate Management Admission Test (GMAT) may be the most imposing section that test-takers anticipate facing as they study and prepare. Even students with strong math backgrounds should not neglect this section during their review; while many of the questions it features are of a familiar, if challenging, format, other questions are presented in a way that likely departs from mathematical material one has encountered before on standardized exams. Don’t let the GMAT’s Quantitative Reasoning section deflate your confidence or stall your studies; instead, arm yourself with knowledge of each question type you will face and practice the different approaches required to avoid surprises on exam day. Whether you need top GMAT tutors in New York, GMAT tutors in Chicago, or top GMAT tutors in Los Angeles, working with a pro may take your studies to the next level.

The GMAT Quantitative section consists of thirty-seven questions to be answered in seventy-five minutes, and each of these questions is one of two types: Problem Solving or Data Sufficiency. Each is multiple-choice and presents you with five potential answer choices. Though vastly different in the types of answers and approaches they require, each problem type concerns mathematics principles from algebra, arithmetic, or geometry, and may present information in the context of word problems.
Widely considered the easier of the GMAT Quantitative section’s question types, Problem Solving questions should seem somewhat familiar if you have past experience with the math sections of other standardized tests like the ACT or SAT. These questions present you with a problem and ask you to determine its solution. Despite being relatively straightforward in format, these questions can be difficult, so while one should keep the two types of GMAT Quantitative Reasoning problems distinct, one shouldn’t assume that Problem Solving questions are necessarily easier than Data Sufficiency questions, the other type. Varsity Tutors also offers resources like free GMAT Math Practice Tests to help with your self-paced study, or you may want to consider a GMAT tutor. Data Sufficiency questions are much more likely to be the source of test-takers’ apprehension. While they present a mathematical problem and two points of information that you can use as you try to solve it, these questions are not interested in the correct answer; rather, they are interested in your ability to recognize exactly what information was sufficient to solve the question at hand. This requires a two-step approach; one must first attempt to solve the question presented and then reflect on one’s process. Was the first point sufficient? Did you need the information in point two? Would either of the points alone have sufficed? Did you require aspects of both to figure out the answer? Sometimes the questions featured in Data Sufficiency problems cannot be solved at all with the provided information, and test your ability to recognize this fact. Test-takers have likely not encountered material with such a reflective focus on the process of solving a question on other tests; while such unfamiliarity can lead to confusion and stress, such negative results can be avoided with proper preparation.
In addition to the GMAT Math Flashcards and GMAT tutoring, you may also want to consider using some of our free GMAT Math Diagnostic Tests. Considering the notably different focus of each question type featured on the Quantitative Reasoning section, it is easy to feel overwhelmed when beginning to review for this part of the GMAT. Spending time practicing your methods of solving each kind of problem well before test day can allow you to analyze your abilities, bolster them where necessary, and gain confidence in your knowledge. Your review can benefit from the numerous free GMAT resources provided in the Learning Tools question database. If you are just beginning to study and unsure of the topics on which to focus your attention, our free GMAT Quantitative Reasoning diagnostics can provide guidance in the form of detailed performance reports outlining your current skill in answering each type of question and your knowledge of each of the mathematical topics covered on the test. With these free diagnostics and other valuable GMAT study tools like practice tests and flashcards at your disposal, you can begin your review with efficient focus.
496 drypint to liter - How much is 496 US dry pints in liters?

Conversion formula

How to convert 496 US dry pints to liters? We know (by definition) that:

1 dry pint ≈ 0.5506104713575 L

We can set up a proportion to solve for the number of liters:

(1 dry pint) / (496 dry pints) ≈ (0.5506104713575 L) / (x L)

Now, we cross multiply to solve for our unknown x:

x L ≈ (496 dry pints / 1 dry pint) × 0.5506104713575 L ≈ 273.10279379331996 L

Conclusion: 496 dry pints ≈ 273.10279379331996 L

Conversion in the opposite direction

The inverse of the conversion factor is that 1 liter is equal to 0.00366162493656797 times 496 US dry pints. It can also be expressed as: 496 US dry pints is equal to 1 / 0.00366162493656797 liters.

An approximate numerical result would be: four hundred and ninety-six US dry pints is about two hundred and seventy-three point one zero liters, or alternatively, a liter is about 0.0037 times 496 US dry pints.

The precision is 15 significant digits (fourteen digits to the right of the decimal point). Results may contain small errors due to the use of floating point arithmetic.
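The same arithmetic can be checked in a couple of lines (a sketch using the page's stated conversion factor; the function name is ours):

```python
DRY_PINT_TO_L = 0.5506104713575  # liters per US dry pint (from the page)

def dry_pints_to_liters(pints: float) -> float:
    return pints * DRY_PINT_TO_L

print(dry_pints_to_liters(496))  # ~273.10279379332
```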
Dividing a number greater than 1,000 by a number less than 10

Students learn to divide a number greater than 1,000 by a number less than 10, without remainder. The students first solve division problems with numbers to 1,000. Explain that in division problems with a number greater than 1,000 you first see which number from the table of the divisor is closest to, but not greater than, the dividend. For the problem 9,936 ÷ 3, start with the thousands: since 3 × 3 = 9, you know that 3,000 × 3 = 9,000. Subtract 9,000 from 9,936; you still have 936 left over. You also know that 300 × 3 = 900, so solve 936 - 900 = 36. Next, 12 × 3 = 36, and 36 - 36 = 0. Now add 3,000, 300 and 12 together for your answer (3,312). Have the students solve the division problems on their own. Then walk them through the steps of solving a story problem. Check whether the students can divide a number greater than 1,000 by asking the following question: What steps do you follow to solve the problem 9,999 ÷ 9? The students test their understanding of dividing a number greater than 1,000 by a number less than 10 through ten exercises. For some of the exercises the students must choose from possible answers to a division problem, and for others they must provide the answer on their own. Some of the exercises are story problems. Discuss once again the importance of being able to divide a number greater than 1,000 by a number less than 10. As a closing activity, the students can drag money to visualize a given division problem. Have students that have difficulty dividing a number greater than 1,000 first practice with the division tables and with dividing a number to 1,000. Gynzy is an online teaching platform for interactive whiteboards and displays in schools.
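The lesson's partial-quotients steps can be sketched as follows (our own illustration, not Gynzy material). Note that the sketch splits the final 12 into 10 + 2, while the lesson takes 12 × 3 in one step:

```python
def partial_quotients(dividend: int, divisor: int):
    """Divide by peeling off the largest multiple of the divisor at each
    place value (thousands, hundreds, tens, ones), as in the lesson."""
    remaining = dividend
    parts = []
    for place in (1000, 100, 10, 1):
        count = remaining // (divisor * place) * place
        if count:
            parts.append(count)
            remaining -= divisor * count
    return parts, sum(parts), remaining

parts, quotient, remainder = partial_quotients(9936, 3)
print(parts, quotient, remainder)  # [3000, 300, 10, 2] 3312 0
```

For the lesson's check question, partial_quotients(9999, 9) gives a quotient of 1,111 with remainder 0.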
Plans, Maps & Scales

Plans and Maps

As stated in the definition of surveying, the objective of measurements is to show relative positions of various objects on paper. Such representations on paper are called plans or maps. A plan may be defined as the graphical representation of the features on, near or below the surface of the earth as projected on a horizontal plane to a suitable scale. However, since the surface of the earth is curved and that of the paper is plane, no part of the earth can be represented on such maps without distortion. If the area to be represented is small, the distortion is less and a large scale can be used. Such representations are called plans. If the area to be represented is large, small scales are to be used and the distortion is large. Representations of larger areas are called maps. The representation of a particular locality in a municipal area is a plan, while the representation of a state or country is a map. There is no exact demarcation between a plan and a map.

It is not possible, and also not desirable, to make maps at one-to-one scale. While making maps, all distances are reduced by a fixed proportion. That fixed proportion is called the scale of the map. Thus, if 1 mm on the paper represents 1 metre on the ground, then the scale of the map is 1 mm = 1 m, or 1 mm = 1000 mm, or 1:1000. To make the scale independent of the units it is preferable to use the representative fraction (RF), which may be defined as the ratio of one unit on paper to the number of units it represents on the ground. Thus 1 mm = 1 m is equivalent to RF = 1/1000.

Apart from writing the scale on a map, it is desirable to show it graphically. The reason is that, over time, the paper may shrink, and scaling distances off the map may then mislead. The graphical scale should be sufficiently long (180 mm to 270 mm) and the main scale divisions should represent one, ten or hundred units so that it can be easily read.
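The RF arithmetic can be sketched in a few lines (a hedged illustration; the function name is ours, not from the text):

```python
def representative_fraction(map_mm: float, ground_m: float) -> float:
    """RF = map distance / ground distance, in the same units."""
    return map_mm / (ground_m * 1000.0)  # convert metres to millimetres

# 1 mm on paper representing 1 m on the ground gives RF = 1/1000:
print(representative_fraction(1, 1))  # 0.001
```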
The scale of a map is considered as • large if it is greater than 1 cm = 10 m i.e., RF > 1/1000, • intermediate if it is between RF = 1 1000 and 1/10,000, • small if RF < 1/10,000. In general, scale selected should be as large as possible, since it is not possible for human eye to distinguish between two points if distance between them is less than 0.25 mm. The recommended scales for various types of surveys are as shown in Table 1. Table 1 Types of Graphical Scales The following two types of scales are used in surveying: (i) Plain Scale (ii) Diagonal Scale. Plain Scale On a plain scale it is possible to read two dimensions directly such as unit and tenths. This scale is not drawn like ordinary foot rule (30 cm scale). If a scale of 1:40 is to be drawn, the markings are not like 4 m, 8 m, 12 m etc. at every 1 cm distance. Construction of such a scale is illustrated with the example given below: Example: Construct a plain scale of RF = 1/500 and indicate 66 m on it. Solution. If the total length of the scale is selected as 20 cm, it represents a total length of 500 × 20 = 10000 cm = 100 m. Hence, draw a line of 20 cm and divide it into 10 equal parts. Hence, each part corresponds to 10 m on the ground. First part on extreme left is subdivided into 10 parts, each subdivision representing 1 m on the field. Then they are numbered as 1 to 10 from right to left as shown in Fig. 1. Fig. 1 If a distance on the ground is between 60 and 70 m, it is picked up with a divider by placing one leg on 60 m marking and the other leg on subdivision in the first part. Thus field distance is easily converted to map distance. Table 2 IS 1491—1959 recommends requirements of metric plain scales designated as A, B, C, D, E and F as shown in Table 2. Such scales are commonly available in the market. They are made of either varnished cardboard or of plastic materials. Such scales are commonly used by surveyors and architects. 
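To read the plain-scale example numerically (a sketch under the same RF = 1/500 assumption; the helper name is our choice), the 66 m field distance corresponds to a paper length of:

```python
def map_length_cm(ground_m: float, rf: float) -> float:
    """Paper length, in cm, for a ground distance at a given RF."""
    return ground_m * 100.0 * rf  # metres -> centimetres, then scale down

# 66 m at RF = 1/500 is picked off the scale at 13.2 cm:
print(round(map_length_cm(66, 1/500), 1))  # 13.2
```

Consistently with the example, the full 20 cm scale represents map_length_cm(100, 1/500) = 20 cm of paper for 100 m of ground.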
Diagonal Scale

In a plain scale only units and tenths can be shown, whereas in a diagonal scale it is possible to show units, tenths and hundredths. Units and tenths are shown in the same manner as in a plain scale. To show hundredths, the principle of similar triangles is used. If AB is a small length and its tenths are to be shown, it can be done as explained with Fig. 2 below.

Fig. 2

Draw the line AC of convenient length at right angles to plain scale AB. Divide it into 10 equal parts. Join BC. From each tenth point on line AC draw lines parallel to AB till they meet line BC. Then line 1–1 represents 1/10th of AB, 6–6 represents 6/10th of AB, and so on. Figure 3 shows the construction of a diagonal scale with RF = 1/500 that indicates 62.6 m.

Fig. 3

IS 1562—1962 recommends diagonal scales A, B, C, and D as shown in Table 3.

Table 3

Units of Measurements

According to the Standards of Weights and Measures Act, India decided to give up the FPS system used earlier and switched over to MKS in 1956. In 1960 the System International (SI) system of units was approved by the Conference of Weights and Measures, an international organisation of which most countries are members. In this system also, the unit of linear measurement is the metre; however, the use of centimetres and decametres is discouraged. The major difference between MKS and SI is in the unit of force: in MKS the unit of force is kg-wt (commonly called simply kg), while in SI it is the newton. The recommended multipliers in SI units are given below:
• Giga unit = 1 × 10^9 units
• Mega unit = 1 × 10^6 units
• Kilo unit = 1 × 10^3 units
• unit = 1 × 10^0 units
• Milli unit = 1 × 10^–3 unit
• Micro unit = 1 × 10^–6 unit

Commonly used linear units in surveying are the kilometre, metre and millimetre. However, the centimetre is not yet fully given up. For measuring angles the sexagesimal system is used.
In this system:
• 1 circumference = 360°
• 1 degree = 60′ (minutes of arc)
• 1 minute = 60″ (seconds of arc)
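As a small worked example of the sexagesimal system (an illustration; the function name is ours), degrees-minutes-seconds convert to decimal degrees as:

```python
def dms_to_degrees(d: int, m: int, s: float) -> float:
    """Sexagesimal angle (degrees, minutes, seconds of arc) to decimal degrees."""
    return d + m / 60.0 + s / 3600.0

print(dms_to_degrees(30, 30, 0))  # 30.5
print(dms_to_degrees(45, 15, 0))  # 45.25
```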
I Know First Evaluation Report for Short-Term Currencies

Executive Summary

In this forecast evaluation report, we examine the performance of the forecasts generated by the I Know First AI Algorithm for the currency market with short-term time horizons ranging from 1 to 6 days, which were delivered daily to our institutional clients. Our analysis covers the time period from 28 March 2019 to 9 June 2019. Below, we present our key takeaways from applying signal and volatility filters to pick the best-performing currency pairs:

Currencies Highlights:
• The Top 5 currency pairs without volatility adjustment had better returns in the 1- and 2-day time horizons, providing an average return of 0.19% in the 1-day horizon, which outperformed the benchmark by 1800%.
• The Top 5 currency pairs with volatility adjustment outperformed the benchmark in all time horizons, providing the highest return of 0.19% in 6 days.

Note that the above results were obtained from an evaluation conducted over this specific time period, using a sample approach of consecutive filtering by predictability and by signal indicators, to give a general picture of the forecast performance patterns for specific currency pairs. The following report provides an extensive explanation of our methodology and a detailed analysis of the performance metrics that we obtained during the evaluation. This report begins a new I Know First evaluation series illustrating the ability to provide successful short-term and flexible forecasting for the currency market.

About the I Know First Algorithm

The I Know First self-learning algorithm analyses, models, and predicts the capital market, including stocks, bonds, currencies, commodities and interest rates. The algorithm is based on Artificial Intelligence (AI) and Machine Learning (ML) and incorporates elements of Artificial Neural Networks and Genetic Algorithms.
The system outputs the predicted trend as a number, positive or negative, along with a wave chart that predicts how the waves will overlap with the predicted trend. Consequently, the trader can decide which direction to trade, when to enter the trade, and when to exit the trade. The model is 100% empirical, based only on factual data, thereby avoiding any biases or emotions that may accompany human assumptions. The only human factor in I Know First's model is building the mathematical framework and providing the initial set of inputs and outputs to the system.

The algorithm produces a forecast with a signal and a predictability indicator. The signal is the number in the middle of the box; the predictability is the number at the bottom of the box. At the top, a specific asset is identified. This format is consistent across all predictions. Our algorithm provides two independent indicators for each asset – signal and predictability. The signal is the predicted strength and direction of movement of the asset, measured from -inf to +inf. The predictability indicates our confidence in the signal; it is a Pearson correlation coefficient relating past algorithmic performance and actual market movement, measured from -1 to 1. You can find a detailed description of our heatmap here.

The Currency Picking Method

The method in this evaluation is as follows: to fully utilise the information provided by our forecast, we select the top 30 most predictable currencies, ranked by their predictability value. Next, we pick the top 10 highest signals from the ranked currencies. By doing so, we focus on the most predictable currency pairs while still capturing the highest-signal pairs.

The Performance Evaluation Method

We perform evaluations on the individual forecast level. This means that we calculate the return of each forecast we have issued for each horizon in the testing period.
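A minimal sketch of this two-stage filter (our own illustration; the data layout and function name are assumptions, not I Know First's code):

```python
def pick_pairs(forecasts, n_pred=30, n_sig=10):
    """forecasts: iterable of (pair, signal, predictability) tuples.
    Keep the n_pred most predictable pairs, then the n_sig strongest
    signals among them."""
    top_pred = sorted(forecasts, key=lambda f: f[2], reverse=True)[:n_pred]
    top_sig = sorted(top_pred, key=lambda f: f[1], reverse=True)[:n_sig]
    return [pair for pair, _, _ in top_sig]

demo = [("EUR/USD", 5.0, 0.9), ("GBP/USD", 9.0, 0.1), ("USD/JPY", 3.0, 0.8)]
print(pick_pairs(demo, n_pred=2, n_sig=1))  # ['EUR/USD']
```

In the demo, GBP/USD has the strongest signal but is dropped by the predictability stage first, which is the point of filtering by predictability before signal.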
We then take the average of those results across our positions on different currencies and forecast horizons. For example, to evaluate the performance of our 1-month forecasts, we calculate the return of each trade by using this formula: This simulates a client buying a currency on the day we issue our prediction and selling it exactly 1 month in the future from that day. We iterate this calculation over all trading days in the analysed period and average the results.

The Hit Ratio Calculation

The hit ratio helps us to identify the accuracy of our algorithm's predictions. Using our currency filtering method based on predictability and signal, we predict the direction of movement of different currencies. Our predictions are then compared against the actual movements of these currencies within the same time horizon. The hit ratio is then calculated as follows: For instance, a 90% hit ratio for a top 30 predictability filter with a top 10 signal filter would imply that the algorithm correctly predicted the price movements of 9 out of 10 currencies within this particular set of currencies.

The Benchmarking Method

Our benchmarking method rests on a null hypothesis: buying every currency in the particular currency universe regardless of our I Know First indicators. For instance, if we were to identify the top 10 currency pairs from a universe of 52 currency pairs, we would calculate the rate of return of all 52 currency pairs, where an equal amount of each currency pair is bought at the start of the time horizon and sold at the end of it. This helps us to determine the effectiveness of our predictability-based currency filtering process by comparing the rate of return of the benchmark with the rate of return of our predictability-based strategy.

Universe Under Consideration: Currencies

In this report, we conduct testing on the 52 currency pairs covered by I Know First in the "Currencies" package.
This package includes the major worldwide-traded currency pairs, such as USD/EUR, USD/GBP, etc.

Performance: Short Term Model Evaluation

We conduct our research for the period from 28 March 2019 to 9 June 2019. Following the methodology from the previous sections, we start our analysis by computing the performance of the algorithm's short-term signals for time horizons ranging from 1 day to 6 days. Afterwards, we calculate the returns for the same time horizons for the benchmark using the currencies universe and compare them against the performance of the filtered sets of currency pairs. It is also important to measure the outperformance relative to the benchmark, and for that we apply the formula: Overall, we applied filtering by signal strength to the top 30 currency pairs, after filtering first by predictability. In addition, we applied volatility adjustments to obtain different signals and therefore different subsets. We present our findings in the following sections.

1. Evaluating the Signal Indicator without Volatility Adjustment

In this section, we present our results filtering only by signal strength, without using volatility adjustment. The next tables show the average returns of the subsets over different time horizons. We evaluate the 1- to 6-day timeframes.

Table 1: Average returns per time horizon, without volatility adjustment.
Table 2: Outperformance delta against benchmark, without volatility adjustment.
Figure 1: Average returns per time horizon, without volatility adjustment.
Figure 2: Outperformance delta against benchmark, without volatility adjustment.

From the above set of charts, we can see that both subsets, Top 5 and Top 10 signals, have significantly greater returns than the benchmark over the 1-, 2- and 3-day time horizons. The best outperformance is observed in the 1-day period, and from there the outperformance decreases as the time horizon increases.
The highest return was produced by the Top 5 currency pairs, with a return of 0.19% at the 1-day time horizon. Finally, the Top 5 and Top 10 subsets fell below the benchmark's return over the 5- and 6-day periods.

Table 3: Average hit ratio per time horizon, without volatility adjustment.
Figure 3: Average hit ratio per time horizon, without volatility adjustment.

For the hit ratio performance, both subsets have similar results in each time horizon. The hit ratio doesn't show a clear effect of the signals, but it increases as the time horizon increases.

2. Evaluating the Signal Indicator with Volatility Adjustment

In this section, we present our results filtering by volatility-adjusted signal strength. The following tables show the average return of the subsets over different time horizons. We evaluate the 1- to 6-day timeframes.

Table 4: Average returns per time horizon, with volatility adjustment.
Table 5: Outperformance delta against benchmark, with volatility adjustment.
Figure 4: Average returns per time horizon, with volatility adjustment.
Figure 5: Outperformance delta against benchmark, with volatility adjustment.

From the above set of charts, we can see that both subsets, Top 5 and Top 10, produce greater returns than the benchmark in all timeframes. At the same time, we observe that returns increase as we consider longer time horizons, with the Top 5 subset producing a 0.19% return over the 6-day period. In almost every timeframe the outperformance is at least 100%, so returns were at least double those of the benchmark.

Table 6: Average hit ratio per time horizon, with volatility adjustment.
Figure 6: Average hit ratio per time horizon, with volatility adjustment.

The hit ratio performance for the Top 10 subset trends upward as the time horizon increases. For the Top 5 subset, the hit ratio is always over 60%.

3.
Comparison of Results with and without Volatility Adjustment

The Top 5 signals subset obtained higher returns both with and without the volatility adjustment. The following chart shows the return of the adjusted and unadjusted subsets over different time horizons.

Table 7: Top 5 average returns per time horizon, with and without volatility adjustment.
Figure 7: Top 5 average returns per time horizon, with and without volatility adjustment.
Figure 8: Top 5 outperformance delta per time horizon, with and without volatility adjustment.

For the Top 5 currency pairs without volatility adjustment, we note better performance than the Top 5 with volatility adjustment over the 1- and 2-day time horizons. Most notably, we observe that for the 1-day horizon, the subset without volatility adjustment provides returns of 0.19%, which is 1800% better than the benchmark. When we observe the volatility-adjusted Top 5 currency pairs, we find better performance over the 4-, 5- and 6-day time horizons, outperforming the benchmark by 500% over the 4-day period.

In this analysis, we demonstrated the outperformance of our forecasts for the Top 10 currency pairs picked by I Know First's AI Algorithm during the period from 28 March 2019 to 9 June 2019. Based on the presented observations, we record significant outperformance of the Top 5 currency pairs when our signal indicators are used as an investment criterion. As shown in the above diagram, the Top 5 currency pairs without volatility adjustment yield significantly higher returns over the 1- and 2-day time horizons. Furthermore, we also note that for the 4-, 5- and 6-day periods, the Top 5 volatility-adjusted currency pairs outperform the pairs without volatility adjustment, whereas the Top 5 currency pairs without volatility adjustment may be preferable for time horizons of 1 to 2 days.
Trigonometric functions

In this section we are going to define the trigonometric functions. In the two previous sections, we have seen that given a right triangle $$ABC$$, we can calculate the sine, the cosine and the tangent (and their respective inverse ratios) by means of the quotient between two sides of the triangle. In this section we want to go one step further and define the trigonometric functions.

Given an angle $$x$$, to calculate its sine, for example, we can draw a right triangle and name $$x$$ one of its non-right angles. This way, once the triangle is drawn, and by using the previously given formulas, we can calculate the sine, the cosine and the tangent. Then, since this can be done for any angle $$x$$, we can define a function assigning the corresponding value to each angle.

Thus, we define $$y = \sin(x)$$: $$y$$ is equal to the sine of $$x$$. Its inverse function is $$x = \arcsin(y)$$: $$x$$ is the arc (of the circumference) whose sine equals $$y$$; that is, $$x$$ is the arcsine of $$y$$.

If $$y = \cos(x)$$, it is said that $$y$$ is equal to the cosine of $$x$$, and its inverse function is $$x = \arccos(y)$$. One says that $$x$$ is the arc whose cosine is $$y$$; that is, $$x$$ is the arccosine of $$y$$.

If $$y = \tan(x)$$, it is said that $$y$$ is equal to the tangent of $$x$$, and its inverse function is $$x = \arctan(y)$$. It is said that $$x$$ is the arc whose tangent is $$y$$, or that $$x$$ is equal to the arctangent of $$y$$.

Note that the values of $$x$$ can be expressed both in radians and in degrees.
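To see these definitions numerically, here is a quick sketch using Python's standard `math` module (the 30° angle is an arbitrary choice, not part of the lesson):

```python
import math

# Evaluating the three functions and their inverses numerically.
# math.sin/cos/tan take radians; math.radians converts degrees to radians.
x = math.radians(30)   # the angle x = 30 degrees, an arbitrary example

y = math.sin(x)                       # y = sin(x)
assert math.isclose(math.asin(y), x)  # x = arcsin(y): the arc whose sine is y

y = math.cos(x)                       # y = cos(x)
assert math.isclose(math.acos(y), x)  # x = arccos(y): the arc whose cosine is y

y = math.tan(x)                       # y = tan(x)
assert math.isclose(math.atan(y), x)  # x = arctan(y): the arc whose tangent is y

print(round(math.degrees(x)))  # 30 -- every inverse recovered the original arc
```

Note that each inverse recovers $$x$$ exactly here because 30° lies inside the principal range of all three inverse functions.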
MAT 119 - Mathematics for Elementary Education I

An exploration of order of operations; in-depth work with fractions - visually, computationally, conceptually; graphing lines; visual display of data using graphs; measures of central tendency; geometry of polygons and circles; and perimeter, area, volume, and surface area of solids. Students are expected to explain the material as though to a target audience. The course uses collaborative learning extensively, along with individual projects. Intended only for Elementary Education majors. This course requires MAT 093 Integrated Arithmetic and Basic Algebra, or an equivalent background.

Credits: 3
3 Class Hours

Course Profile

Learning Outcomes of the Course:

Upon successful completion of this course the student will be able to:

1. Add, subtract, multiply, and divide rational numbers, and explain why the basic arithmetic operations on fractions work.
2. Evaluate arithmetic expressions according to the algebraic hierarchy.
3. Add, subtract, and multiply polynomials.
4. Solve equations of a single variable.
5. Define and graph a linear function of a single variable.
6. Identify, interpret, and discuss line charts, bar charts, line graphs, and pie charts.
7. Construct line charts, line graphs, and bar charts.
8. Relate a shape to its place in the geometric hierarchy.
9. Identify various quadrilaterals and triangles.
10. Use formulas to calculate the perimeter and area of various polygons.
11. Use formulas to calculate the circumference and area of a circle.
12. Use the Pythagorean Theorem.
13. Calculate the perimeter of simple and compound planar regions.
14. Use formulas to calculate the surface area and volume of a cone, a cylinder, a prism, and a sphere.
15. Calculate the volume and surface area of simple and compound solids.
16. Solve application problems involving area, perimeter, surface area, and volume.
17.
Calculate the mean, weighted mean, median, and mode, and recognize the appropriate use of each to help describe a data set.
18. Complete and present projects.
19. Participate in cooperative learning activities.

This course prepares students to meet the Mathematics General Education requirement. In the context of the course objectives listed above, upon successful completion of this course the student will be able to:

1. Interpret and draw inferences from appropriate models such as formulas, graphs, tables, or schematics.
2. Represent mathematical information symbolically, visually, numerically, or verbally as appropriate.
3. Employ quantitative methods such as arithmetic, algebra, geometry, or statistics to solve problems.
Shapley, Weyl, Buterin, Freedman, Spinoza, Leibniz, and Kant

I walk into a House of Worship and the guy up front is in the middle of an oration. I don't expect to stay but catch a few words, "... so my friends, reach down and uplift thy less fortunate neighbor—to the $\frac{1}{2}$-power, for so it is written, and so may it be derived." Startled, I take a seat; this sounds like theology on a wavelength I can receive.

This is the opening sentence to the best white paper I've ever read, published by the Fields medalist Michael Freedman, in which he explains how Quadratic Funding mechanisms for public goods uniquely solve Immanuel Kant's Categorical Imperative from Groundwork of the Metaphysics of Morals:^1

"Act only according to that maxim whereby you can at the same time will that it should become universal law."

To back up and explain how we got here: it's been nearly two years since this blog's first foray into voting power in game theoretic contexts, and I am here to present another exercise in pedantry amongst friends, due to a number of goodhearted disputes which arose during a cannonball-run road trip from various corners and edges of the east coast to and from Baton Rouge. Violating the nordic principle of perpetual-favor-owing till death do us part, and true to form, I will browbeat my friends into submission by citing not one source, not two, but three white papers saying "I'm right, actually" in a full tour de force of literature spanning mathematics, economics, and moral philosophy, all so that I can dredge up old beef about "who left the cigs in Verdun" in the groupchat months ago.

In this post, I'm going to try to synthesize the takeaways from three papers: Glen Weyl et al's Quadratic Voting,^2 Buterin et al's addendum on Quadratic Funding,^3 and Michael Freedman's pristine riff on the previous paper, titled Spinoza, Kant, Buterin.^4 And I'll be keeping score on Weyl's outfits, because that guy knows how to dress.
1 | Shapley and his Values

Suppose two players $p_1$ and $p_2$ can cooperate to achieve some goal and be rewarded by prizes:

| placement | prize |
| --- | --- |
| 1 | \$10,000 |
| 2 | \$7,500 |
| 3 | \$5,000 |

By teaming up and forming coalitions (ordered sets of agents), the players can receive the following payouts:

| coalition | prize |
| --- | --- |
| $\{p_1, p_2\}$ | \$10,000 |
| $\{p_1\}$ | \$7,500 |
| $\{p_2\}$ | \$5,000 |
| $\varnothing$ | \$0 |

As we can see, these coalition payout values seem to indicate that player $p_1$ should receive more prize money based on his individual contribution to the coalition. How, then, can we equitably divide the prize money?

We compute the expected marginal contribution of each player in a coalition: the increase in the coalition's total payout due to the addition of that player. For example, the marginal contributions of player $p_1$ to each possible coalition (with subscripts indicating player membership) are:

$$\begin{aligned} C_{1,2} - C_2 &= \text{\textdollar}5,000 \\ C_{1} - C_0 &= \text{\textdollar}7,500 \end{aligned}$$

Therefore, the expected marginal contribution is just the average of these two outcomes:

$$\mathbb{E}[p_1] = \frac{5,000 + 7,500}{2} = \text{\textdollar}6,250$$

Similarly, the expected marginal contributions of $p_2$ are given by:

$$\begin{aligned} \mathbb{E}[p_2] &= \frac{(C_{1,2} - C_1) + (C_2 - C_0)}{2} \\ \\ &= \frac{2,500 + 5,000}{2} = \text{\textdollar}3,750 \end{aligned}$$

These expected marginal contributions are called Shapley values. Note that summing the exhaustive individual expected marginal contributions necessarily equals the expected value of the maximal coalition $C_{1,2}$:

$$\begin{aligned} \mathbb{E}[p_1] + \mathbb{E}[p_2] &= \mathbb{E}[C_{1,2}] \\ \\ \text{\textdollar}6,250 + \text{\textdollar}3,750 &= \text{\textdollar}10,000 \end{aligned}$$

Keeping with conventional notation, an expected value is the weighted sum of possible outcomes, or in our case, coalitions.
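The two-player computation above can be checked in a few lines of Python (the `shapley` helper is my own sketch, not a standard API):

```python
# Each player's Shapley value: their marginal contribution to the
# coalition formed so far, averaged over both possible join orders.
from itertools import permutations
from math import factorial

# Coalition payouts from the table: C_0, C_1, C_2, C_{1,2}
V = {
    frozenset():       0,
    frozenset({1}):    7_500,
    frozenset({2}):    5_000,
    frozenset({1, 2}): 10_000,
}

def shapley(player, players):
    total = 0
    for order in permutations(players):
        # the coalition already formed when `player` joins in this ordering
        before = frozenset(order[:order.index(player)])
        total += V[before | {player}] - V[before]
    return total / factorial(len(players))

print(shapley(1, [1, 2]))  # 6250.0 -- E[p1]
print(shapley(2, [1, 2]))  # 3750.0 -- E[p2]
```

The two values sum to \$10,000, the payout of the maximal coalition, as the section notes.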
So, for more permutations of coalitions where players may not participate, due to arbitrary constraints (like being a party pooper), we have to introduce weights corresponding to the frequency of coalitions. For example, given the following coalition payouts (denoted as utilities $U$):

| coalition | $U$ |
| --- | --- |
| $C_{1,2,3}$ | \$10,000 |
| $C_{1,2}$ | \$7,500 |
| $C_{1,3}$ | \$7,500 |
| $C_{2,3}$ | \$5,000 |
| $C_{1}$ | \$5,000 |
| $C_{2}$ | \$5,000 |
| $C_{3}$ | \$0 |
| $\varnothing$ | \$0 |

We compute the following expected marginal contribution for $p_1$:

$$\begin{aligned} \mathbb{E}[p_1] = \; &w_1 (C_{1,2,3} - C_{2,3}) + w_2 (C_{1,2} - C_{2}) + \\ &w_3 (C_{1,3} - C_{3}) + w_4 (C_{1} - C_{0}) \end{aligned}$$

where the weights $w_i$ are given by the probability that the given player makes that particular marginal contribution, $P\big(V(C \cup \{i\}) - V(C)\big)$, e.g. $w_1 = P\big(C_{1,2,3} - C_{2,3}\big) = 1/3$ since there are $3! = 6$ ways to form a coalition of all three players, and in 2/6 of them ($C_{2,3,1}$ and $C_{3,2,1}$) $p_1$ makes the final, marginal contribution. Repeating this computation of the weights for all coalitions where $p_1$ is the pivotal contributor, we get

$$\begin{aligned} \mathbb{E}[p_1] = \; &w_1 (C_{1,2,3} - C_{2,3}) + w_2 (C_{1,2} - C_{2}) + \\ &w_3 (C_{1,3} - C_{3}) + w_4 (C_{1} - C_{0}) \\ = \; &\frac{1}{3} \text{\textdollar}5,000 + \frac{1}{6} \text{\textdollar}2,500 + \frac{1}{6} \text{\textdollar}7,500 + \frac{1}{3} \text{\textdollar}5,000 \\ = \; &\text{\textdollar}5,000 \end{aligned}$$

In general, for a $P$-player game where we want to compute the marginal contribution of player $i$ to coalition $C$, the Shapley value is:

$$\phi_i = \sum_{C \subseteq \{1, ..., P\} \setminus \{i\}} \frac{|C|!\,(P - |C| - 1)!}{P!} \Big[V(C \cup \{i\}) - V(C) \Big]$$

where:

• $P!$ is the number of ways to order the $P$ players
• $|C|$ is the number of players in coalition $C$
• $|C|!$ is the number of ways that coalition $C$ can be formed
• $(P - |C| - 1)!$ is the number of ways that the remaining players can join after player $i$ joins

## Ass, Grass, or Cash (The Taxi
Problem)

Rather than studying payouts, we can also apply Shapley values to split weird checks. Suppose three friends, call them say ... Howard, Trevor and Peter, want to split the cost of gas (and other supplies) for the return trip from Baton Rouge to their respective headquarters on the east coast. We'll say the total cost of gas and cigarettes is \$300, but –crucially– the amount of gas and cigarettes they each consume is proportionate to how much time they spend on the road. Howard is located in Virginia Beach, Trevor in Smithfield, and Peter in Charlotte. So the total distance traveled, gas consumed, and darts chuffed is not equal!! We can flatten this out and visualize the shared cost of ~~resources~~ (⛽🚬) between the burgeoning coalition as follows:

Intuitively, we might have each participant pay the cost of their leg of the trip:

Now, while I did work out the exact numbers^5 which were included in the preprint of this post which I sent straight to their dumbasses a few months ago, for the sake of readability, we can use rounder numbers to illustrate the point:

• $P = \text{\textdollar} 180$
• $T = \text{\textdollar} 60$
• $H = \text{\textdollar} 60$

Under this division, however, in the absence of the incentive of 15 hours of banter with his two better halves, Peter has no incentive to participate in the coalition and could just drive himself home (hates fun), whereas Howard, who's already going past Charlotte, would much rather pay \$60 than \$300 (and just think about having to smoke 3 persons' worth of cigarettes by your lonesome in that UHaul) – who could blame him for trying to rope in some companionship!
Peter might argue that, since Howard's already going to V.B., and Charlotte is on the way, the distribution should in fact be:

• $P = \text{\textdollar} 0$
• $T = \text{\textdollar} 120$
• $H = \text{\textdollar} 180$

Trevor likes this line of reasoning and also argues that Howard should just facetank the cost and drop us both off since we're on the way, advocating for the following cost distribution:

• $P = \text{\textdollar} 0$
• $T = \text{\textdollar} 0$
• $H = \text{\textdollar} 300$

But Howard, having been rebuked by his so-called "friends," stands to gain nothing from such an arrangement at this point, and doesn't wish to share his UHaul with these freeloading clowns.

By introducing the notion of order to the cannonball campaign, we can address this nonsense about "well, since you're already on the way..." Suppose Peter taps into the group chat announcing that his return trip from Baton Rouge will cost him a mere \$180, and then brother Trevor decides to join the coalition. Here, his contribution is necessarily the additional \$60 to get from Charlotte to Smithfield, joining Peter's existing trip. Lastly, Howard stops pouting and decides to tag along at the cost of the remaining \$60. For only this order of coalition formation do the initial naive marginal contribution amounts check out. Once again, the idea underlying Shapley values is that each participant pays their average marginal contribution over all possible orderings.

Characteristic Function of a Cooperative Game

The characteristic function of a cooperative game takes a subset of players as input and maps them to a cost value:

$$V : 2^N \rightarrow \mathbb R$$

We have $2^N$ possible coalitions of $N$ people, since each person is either in or out of a coalition.
So, the characteristic function of the three champions of the Pain Trust sojourning to^6 –and, more importantly, from– Red Stick would be:

$$V(P, T, H) = \text{\textdollar}300$$

And if Howard for some reason decides to sit out (how're ya gonna get home bucko?) we'd have:

$$V(P, T) = \text{\textdollar}240$$

The marginal contribution of Howard joining this existing coalition is given by taking the difference between the characteristic functions:

$$V(P, T, H) - V(P, T) = \text{\textdollar}60$$

Alternatively, if Trevor bows out for some reason or another, the corresponding marginal contribution of Howard sucking it up and joining Peter is:

$$V(P, H) - V(P) = \text{\textdollar}120$$

So we can reframe the Shapley value as:

$$\phi_i = \frac{1}{N!} \sum_{\pi} \Big[ V(C_\pi \cup \{i\}) - V(C_\pi) \Big]$$

once again:

• $\phi_i$ is the amount player $i$ pays
• averaged over all $N!$ possible orderings $\pi$ of the players
• summing the marginal contributions given by the difference in characteristic functions $V(C_\pi \cup \{i\}) - V(C_\pi)$
• where $C_\pi$ is the coalition of players who joined before $i$ in the ordering $\pi$

We can tabulate the marginal contributions of each player $i$ and compute $\phi_i$ as the row average:

| | $\{P,T,H\}$ | $\{T,P,H\}$ | $\{P,H,T\}$ | $\{T,H,P\}$ | $\{H,P,T\}$ | $\{H,T,P\}$ | $\phi_i$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Peter | 180 | 0 | 180 | 0 | 0 | 0 | \$360/6 = \$60 |
| Trevor | 60 | 240 | 0 | 240 | 0 | 0 | \$540/6 = \$90 |
| Howard | 60 | 60 | 120 | 60 | 300 | 300 | \$900/6 = \$150 |
| $\phi$ | 300 | 300 | 300 | 300 | 300 | 300 | \$300 |

Demonstrating that Howard needs to pipe down.

How do we prove fairness? For problems like these, four axioms are usually invoked: efficiency, symmetry, null-player invariance, and linearity.

Efficiency

A method is said to be efficient if people don't end up paying more than the total cost of the effort. Shapley values are efficient since

$$\sum_{i=1}^N \phi_i = V(C)$$

That is, the sum of all Shapley values is neither more nor less than the characteristic function of the exhaustive coalition.
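The whole taxi table can be reproduced by brute force over the $3! = 6$ orderings. A minimal sketch, assuming the leg costs given earlier (\$180/\$60/\$60) and a characteristic function that charges a coalition the cost of reaching its farthest rider:

```python
# Brute-force Shapley split for the taxi problem: average each rider's
# marginal cost over all join orders. Leg costs assumed from the text:
# BR->Charlotte $180, Charlotte->Smithfield $60, Smithfield->VB $60.
from itertools import permutations
from math import factorial

COST_TO_STOP = {'P': 180, 'T': 240, 'H': 300}  # cumulative cost to each stop

def V(coalition):
    # Serving a coalition means driving to its farthest stop; nearer
    # riders are dropped off along the way at no extra cost.
    return max((COST_TO_STOP[p] for p in coalition), default=0)

def shapley(player, players='PTH'):
    total = 0
    for order in permutations(players):
        before = set(order[:order.index(player)])
        total += V(before | {player}) - V(before)
    return total / factorial(len(players))

print({p: shapley(p) for p in 'PTH'})  # {'P': 60.0, 'T': 90.0, 'H': 150.0}
assert sum(shapley(p) for p in 'PTH') == 300  # efficiency: shares sum to V(C)
```

The final assertion doubles as a check of the efficiency axiom: the three shares exactly cover the \$300 trip.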
Symmetry

If two players are indistinguishable, then their costs should be the same:

If $\forall C, \; V(C \cup \{i\}) = V(C \cup \{j\})$, then $\phi_i = \phi_j$

Null Player

A player who stands to gain nothing from joining a coalition should have zero marginal contribution cost:

If $V(C \cup \{i\}) = V(C) \; \forall C$, then $\phi_i = 0$

Linearity

Multistage cost functions should combine linearly:

If $V(C) = V_1(C) + V_2(C)$, then $\phi_i(V) = \phi_i(V_1) + \phi_i(V_2)$

Theorem: The Shapley value is the unique value that satisfies all four fairness axioms.

This is good and fair and probably enough, but I'm not done throwing the book at Howard. No no no.

2 | Weyl & Quadratic Voting

In the first^2 of his many papers^7 on quadratic mechanisms, Glen Weyl aims to propose a pragmatic mechanism for democratic reform relative to the current ineffective system of 1-person-1-vote (1p1v). Under 1p1v, each voter receives just a single unit of influence on any collective decision, which prevents pareto-optimal improvements since it keeps the degree of preference or knowledge on a given issue from being expressed. 1p1v is therefore a low-bandwidth channel of democratic expression. It ignores the fact that some voters are willing to utterly forfeit their voice on some issues to gain influence on others.

The basic idea (which is proved ad nauseam by Weyl et al, and then also generalized to funding mechanisms by Buterin et al) is:

$$\phi = V^2$$

that is, the cost to each voter on a given issue is equal to the number of votes they wish to cast, squared. As we'll see, this mechanism vastly favors breadth of participation over individual depth of contribution. However, QV shines through in its tolerance to several very realistic violations of the assumptions on which other (arguably more-optimal) voting mechanisms depend.

Weyl's thesis for QV, then, is that the set of robustly optimal Vote Pricing Rules is precisely the set of quadratic rules.
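The breadth-over-depth effect of the quadratic rule can be seen with two lines of arithmetic (the numbers below are illustrative, not from the paper): a budget of credits buys only the square root of that budget in votes, so a dispersed crowd out-votes a concentrated whale.

```python
# Why quadratic pricing favors breadth over depth: with cost(v) = v**2,
# a fixed budget of credits buys sqrt(budget) votes.
import math

def votes_for_budget(credits):
    # invert cost(v) = v**2 to get the most votes `credits` can buy
    return math.sqrt(credits)

one_whale = votes_for_budget(100)   # 1 voter spending 100 credits
crowd = 100 * votes_for_budget(1)   # 100 voters spending 1 credit each

print(one_whale)  # 10.0 votes
print(crowd)      # 100.0 votes -- same total spend, 10x the influence
```

Under linear pricing both sides would cast 100 votes; the quadratic rule is what makes concentrated intensity expensive.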
We'll spend a bit more time delving into Weyl's model and the assumptions/features baked into its nuances, which Buterin appropriates and Freedman also builds on top of.

• We denote $N$ citizens indexed $i = 1, ..., N$,
• A set of $R$ binary referenda,
• Each voter is allocated some finite amount of voter ("voice") credits $v_i$ that they can trade and distribute across measures $r$,
• We/Weyl assume that $|R|$ is large enough and that the impact of each individual measure $r \in R$ is sufficiently inconsequential that each citizen has a quasi-linear continuation value for retaining voice credits for future votes,^8
• Assume credits have been initially distributed according to some "fair" mechanism such that maximizing total equivalent continuation value defines social optimality.
  □ Shrouded references to Rawls' veil of ignorance abound
• For some measure $r$, suppose voters receive utility $2u_i$ if it passes and $-2u_i$ if it fails, casting a proportionate number of votes $v_i$ in support or opposition.
• The community votes on the referendum, each voter choosing a continuous number of votes $\pm v_i$ depending on support or opposition. A measure passes and is implemented iff $\sum_i v_i \geq 0$, and each voter pays $c(v_i)$ for her vote credits, where $c$ is a differentiable, convex, even, and strictly increasing cost function called the pricing rule.
  □ See another one of Weyl's papers on Price Theory for analysis of the distribution of voter beliefs and equilibrium strategy^9
• Assume players weigh the marginal cost of an additional vote against the perceived chance that their potential vote will be pivotal in deciding the outcome of the ballot.
  □ All voters agree on the marginal pivotality $p$ of votes for any given issue
  □ This assumption implies a rational voter will choose the $v_i$ which maximizes $2u_i p v_i - c(v_i)$
• Note most voters are not actually rational!^10
• Per the assumption of fair initial distribution conditions –for some arbitrary definition of "fair"– society collectively wishes to implement a measure when $\sum_i u_i \geq 0$
• A vote pricing mechanism is said to be robustly optimal if, $\forall p > 0, N, \bar{u}$, each price-taking voter $i$ chooses the optimal number of votes $v_i^*$ s.t. $\sum_i v_i^*$ has the same sign as $\sum_i u_i$
  □ Weyl and Lalley prove this in one of the appendices,^11 which is far more cerebral than the rest of the expressions which appear.^12^,^13 Economists love to bastardize notation, only reaching for "rigorous" definitions when they get to use cool symbols
  □ This admittedly seems like a really low bar to cross, but bear in mind that each rational voter also takes into account their own pivotality to the referendum in the context of the full set of issues to be voted on. And again, the beauty of this paper (similar to the assumptions made by solutions to the Byzantine Problem, solved by Strong Eventual Consistency^14) is that competing mechanisms fold under even the slightest modifications to the assumptions, whereas QV alone is robust to the perturbations Weyl considers.
QV: The Quadratic Premise

Quadratic functions are the only ones with linear derivatives, and hence the only pricing rules for which a citizen equates marginal benefits and costs at a number of votes proportional to her perceived utility gained from the passage of the ballot those votes are expended upon.^15

Consider the class of vote pricing rules:

$$\mathcal C(c) = \{ c(x) = x^a \mid a > 1\}$$

The first-order condition for a voter's optimal vote count comes from differentiating her payoff:

$$\begin{aligned} 2pu_i &= a (v_i)^{a-1} \\ &\implies v_i = \text{sign}(u_i)\Big( \frac{2p}{a} \Big)^\frac{1}{a-1} |u_i|^\frac{1}{a-1} \end{aligned}$$

If $a = 2$, this leads to $v_i^*$ being proportional to $u_i$ and thus robustly optimal:

$$\begin{aligned} 2pu_i &= 2(v_i)^{2-1} \\ &\implies v_i = \text{sign}(u_i)\Big( \frac{2p}{2} \Big)^\frac{1}{2-1} |u_i|^\frac{1}{2-1} \\ &= \text{sign}(u_i)\,p\,|u_i| \end{aligned}$$

Formally, Weyl's claim is that for all other values of $a$, the optimal number of votes $v_i^*$ cast by citizen $i$ is not proportional to the utility $u_i$ they gain from casting those votes, and thus the costly voting rule will be sub-optimal for some arrangements of social values (preferences) and pivotality $p$. Readers are encouraged to convince themselves of this truth in Desmos.

Possible price mechanism parameters fall somewhere on the spectrum between linear cost pricing, $\lim\limits_{a \rightarrow 1}$, and the other extremum, $\lim\limits_{a \rightarrow \infty}$.

For the linear case, as $a$ approaches $1$, the exponent $\frac{1}{a-1}$ on $|u_i|$ goes to infinity, and so voters with only slightly greater preference values will be infinitely more influential, leading to a dictatorship of the most intense voter. This pitfall is reflective of the intuitive rationale against vote trading, whereby the most-special interests can capture the whole populace. Thus, Weyl reasons that QV (or any robust voting mechanism, for that matter) should have marginal-at-best incentives for trading.
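The proportionality claim is easy to check numerically as well as in Desmos. The sketch below (with an arbitrary pivotality `p` and made-up utilities) evaluates the closed form of the first-order condition for several exponents `a`:

```python
# Numerical check: with pricing rule c(v) = v**a, the optimal vote count
# solving 2*p*u = a * v**(a - 1) is proportional to u only when a == 2.
# (p and the utilities below are arbitrary test values.)
p = 0.01  # perceived marginal pivotality, shared by all voters

def optimal_votes(u, a):
    # closed form of the first-order condition, for u > 0
    return (2 * p * u / a) ** (1 / (a - 1))

for a in (1.5, 2.0, 3.0):
    ratios = [optimal_votes(u, a) / u for u in (1.0, 2.0, 4.0)]
    print(a, [round(r, 5) for r in ratios])
# Only a = 2.0 yields a constant ratio v*/u (votes proportional to
# utility); a < 2 over-weights intense voters, a > 2 under-weights them.
```

With `a = 2` every ratio collapses to `p`, matching the derivation above; any other exponent makes the ratio drift with `u`.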
On the other end of the spectrum, in the case where $a \rightarrow \infty$, the exponent $\frac{1}{a-1}$ goes to 0 and $|u_i|^{\frac{1}{a-1}}$ goes to 1, so we end up with 1p1v (monogamous) voting. QV, then, is the optimal intermediate between the extrema, via handwavey invocation of the Central Limit Theorem.

Under the appropriate conditions, in all symmetric Bayes-Nash equilibria in large populations, the price-taking assumption approximately holds for almost all voters whose preferences are drawn i.i.d. from a known distribution of preferences, acting as rational and risk-neutral E.V. maximizers.^16 Therefore, welfare losses from QV decay at a rate inversely proportional to the size of the population: welfare decay $\propto \frac{1}{N}$.

In discussing the pragmatism of QV, Weyl poses a few broader questions about the nature of optimal voting: are these common assumptions inherent to why QV works? Or can they be relaxed and QV still work?

Fundamentally, QV works because of the following theorem:

$$v = \epsilon u, \; \epsilon \perp u$$

– that in order to be efficient, the number of votes cast needs to be proportional to the utility gained by the measure in question, and the linear factor relating those two values must be independent of utility itself. This efficiency is only tangentially related to the population size $N$ and to rationality. The underlying optimality of QV is invariant to most other variables and assumptions which are hastily made in most other propositions.

The proof is like 40 pages of supplemental appendices, and when presenting at the Becker Friedman Institute (where he slayed),^17 Weyl underscored that QV's optimality hinges largely on the fair assumption that we have large $N$. For small $N$, the welfare lost by QV relative to the optimum is very small, while that of other methods such as 1p1v may easily be 100%.
The "optimum" here is as defined by Vickrey, Clarke, and Groves in the 70s,^18 which is extremely sensitive to collusion, even by small groups, and which requires large & highly uncertain real world costs like dollars, rather than voice credits. Weyl points out that the Federalist papers urge that democracy be a mechanism to ensure maximal utility; an instrument to augment culture, not the reverse we live in today, let alone the serpentine dystopia required to satiate the problematic assumptions of other mechanisms. The Problematic Assumptions Next, we'll study how these assumptions and violations thereof affect various models including QV. The minimal constraints that we're concerned with for QV to be effective are: 1. Society is not collusive 2. Voter preferences are IID values drawn from a known distribution 3. Homo economicus - that voters are perfectly rational and instrumental in their motivations Immediately, we have complications, for there will always be some degree of collusion. One could even imagine encoding additional preference information over the total continuous utility via votes. That's not even a stretch, that's literally the view of the opposition in politics and your votes express that to others, even with atomic citizens. As for the idealistic notion that a mechanism operates on complete information about the voter demographics which are furthermore independent and identically distributed about some axes – this would be uncharacteristic of any election. And finally, the ever-troublesome assumption about rational behavior. Most behaviors are not rational under these various simple mechanisms. Conceding these points, we can analyze how other mechanisms perform under identical perturbations of these assumptions: 1. Vickrey-Clarke-Groves yields full efficiency even with finite populations 2. The Expected-Externality^19 mechanism yields full efficiency with budget balance 3.
And with large populations, the simplest solution with these mechanisms is just a costly 1p1v. Meaning, if it's known to everybody what the distribution of values is, we/society/the governing authority can just implement the mean – provided it's non-zero – which is basically efficient for large $N$. □ (This works because, under this assumption, only an infinitesimally small fraction of the population will vote, which proxies QV: conditional on every utility level, the number of people who vote matches the number of votes that someone with that utility would've cast under QV.) All of these are either better or simpler than QV, so why pursue Yet Another Voting Mechanism? Weyl argues that a violation of any of the three basic assumptions unravels any of the other mechanisms, whereas QV is more or less invariant to even these base assumptions. The Other Models (Jay Pow, Nate Silver, and Thomas Jefferson all Wept)^20 Let's take a quick peek at how the other mechanisms work to understand why they fail. 1. Vickrey-Clarke-Groves VCG's mechanism essentially poses the question "How much am I willing to pay to guarantee that Gore gets elected over Bush?" If the amount that I say ends up changing the outcome (that is, I am the pivotal voter), I have to pay the amount wagered by the opposition (those damn Bush voters). E.g. if $\sum c(v_B) =$ $1M vs. $\sum c(v_G) =$ $500k, then Bush gets elected and I receive $500k. Alternatively, if $\sum c(v_B) =$ $1M vs. $\sum c(v_G) =$ $1,000,001, then I payout $1M to the aggregate opposition.^21 The problem: two people can demonstrate adept political acumen and wager $10 gazillion dollars for Gore, but since neither of the two conspirators is pivotal, neither one is on the hook to payout the Bush supporters. This is basically how superPACs work IRL btw.
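The pivotal-payment rule above can be sketched as a toy (using the simplified payout described here – per the footnote, the real rule is more involved – with made-up bid amounts):

```python
# Each side reports dollar values for its candidate; the larger total wins.
# A voter is pivotal if removing their bid would flip the outcome, and a
# pivotal voter pays the losing side's total (the simplified rule above).
def vcg(gore_bids, bush_bids):
    g, b = sum(gore_bids), sum(bush_bids)
    winner, winning_bids, losing_total = (
        ("Gore", gore_bids, b) if g > b else ("Bush", bush_bids, g)
    )
    payments = {
        i: losing_total
        for i, bid in enumerate(winning_bids)
        if max(g, b) - bid < losing_total  # outcome flips without this bid
    }
    return winner, payments

# A single decisive Gore voter is pivotal and pays the Bush total:
print(vcg([1_000_001], [1_000_000]))             # ('Gore', {0: 1000000})
# Two colluders each wager huge amounts; neither is individually pivotal,
# so neither owes the Bush supporters anything:
print(vcg([2_000_000, 2_000_000], [1_000_000]))  # ('Gore', {})
```

The second call is the superPAC exploit: run the total to the moon together and no single wager is ever on the hook.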
And so this system is highly susceptible to collusion, as it crumbles under even two rational actors who both select $\Big(\sum c(v_{\text{opp}})\Big) + 1$, which is basically not even seditious, it's rational. If more than one person per party has a brain (so, minimum $N = 4$) then both sides can "collude" to run up the cost function of the opposition to infinity without risking any exposure to having to come up with infinity money. 2. Expected-Externality The underlying idea behind this mechanism is that the governing authority charges everyone a price equal to the expected value of the VCG payments that they would make under VCG – an expectation which is, of course, known by everyone per the assumption of an IID preference distribution. Furthermore, in a world with non-zero uncertainty, if the IID distribution is not known, which it's not, no matter how many interactive maps FiveThirtyEight cooks up, then this mechanism is not even definable. 3. Costly 1p1v Everyone in the contemporary literature pretty much just takes for granted that monogamous voting doesn't work in a world where people have any other motivation for voting other than implementation of policy which aligns with their values, because it relies on infinitesimally small numbers of people voting, which is only going to happen if the only motivation for voting is to be instrumental. However, in the real world, other motivations do exist (like getting the sticker so that you can be a shit on social media), and still the % of the VEP is like 66%.^22 4. QV supremacy Weyl's argument is therefore structured around QV being robust when scrutinized against the same criticisms that political economists normally just eat/ignore when theory crafting. Either the Central Limit Theorem applies, or we can apply laws of large numbers in conjunction with large deviations. These are taken as fact by every serious statistician in like 99% of the literature.
That's not to say it's a field lacking rigor, but rather that everyone starts from square 1, rather than square 0, for the sake of brevity in their findings. But because Weyl's argument hinges on the soundness of his argument as it pertains to square 0, he tacked it on as a supplement. Again, see the 40 pages of proof in the appendices of (Weyl, 2017)^7 to establish the baseline. Calculations are based on the idea that (at least) one of these two statistical approximations holds. QV vs. Problematic Assumptions We can gauge QV's robustness by inquiring about how large a conspiracy needs to become in order to impact social efficiency. • Collusion: We get together & vote more than we otherwise would unilaterally to take advantage of the fact that I haven't exhaustively tit-for-tatted my way up another voter's quadratic function (as I might under VCG) • Fraud: Doing the same as above, as a lone actor, by pretending that I'm multiple people in order to distribute the quadratic penalty – getting around the fact that QV makes it increasingly costly for an individual to blast off their vote stack against a single referendum. Experiments were conducted under a range of assumptions, yielding a combinatorial grid of scenarios, which show that in the worst case we have colluders consisting of a subset of the VEP @ the bottom (derogatory) of the population and in the tails of the value distribution. These people are the most extreme and therefore contribute the most inefficiency to social welfare as a result of their collusion. We compare the worst case against: • randomly sampled colluders, average Joe Shmo who dabbles in election fraud • fraud – which is like when Howard says "actually it's my three votes against your two" The 2nd dimension of model perturbation is quality of collusion.
The three notable degrees of conspiracy are: • Perfect and undetected, • Imperfect - where conspirators might defect from the collusive agreement and therefore need to be monitored by the group, • Perfect, but detectable – collusive efforts might be perfect internally, but nevertheless suspected by the larger voting populace on the whole. The third dimension is the mean of the value distribution: the mean-zero case versus the rest. Elsewhere, Lalley shows that those regimes behave very differently.^23 In the $\mu \neq 0$ case, the key threat to democratic efficiency is that a small group, or a single extremist, will buy enough votes to overturn the will of the people. In $\mu = 0$, the threat comes from extremists buying too few votes because it then becomes easier to become an accidental median voter (who otherwise dies in democracy). So what are the good cases in these perturbations: 1. All average case with colluders randomly sampled. It turns out they have low impact 2. Even if the random sample of colluders happens to be a subset of the worst case extremists from 8chan, so long as society even suspects a possibility of collusion or fraud, it's no big deal, because those extremists' participation dramatically increases the likelihood that an election is tied, because rational non-conspirators buy more votes as insurance, and since the number of seditious actors is necessarily less than the number of non-conspirators (otherwise they wouldn't be deemed seditious), the quadratic mechanism favors breadth of participation rather than depth, running up the linear cost rather than the quadratic part of the cost. 3. In the $\mu = 0$ case, if the conspirators have any internal coordination issues, that removes the possibility of effective collusion. Collusion is not possible unless your firm controls a large share of the market economy of votes, which again it necessarily does not – otherwise it wouldn't be considered a collusive firm, but simply the Republican or Democratic party.
And then the bad cases: 1. The $\mu \neq 0$ case without suspicion. Everyone thinks the chance of a tied election is tiny, so fewer votes are purchased, and even the smallest amount of interference from a collusive group can easily sway the outcome of the election. 2. The other troublesome case is not intuitive. For $\mu = 0$ elections with perfect internal agreement, a sensitive measure is subject to disproportionate interference from a relatively small group of colluders, because there are fewer votes that need to be bought because everyone else already assumes there's a good chance that they're pivotal. Voter Motivation and Rationality Even in highly stylized lab experiments, with small groups of calibrated participants where the chance of being pivotal is higher than it would be for large $N$, people still do not behave rationally, but instead buy way more votes than equilibrium dictates. Despite the overextension, people still vote quite closely to their assigned preferences. These deviations from optimality are explained by a number of possible factors: 1. Expressive motive: people gain utility from expressing their preferences proportionately to their value. 2. Expressive motive (to influence policy tho): The idea that the margin of victory might influence out-of-distribution policy initiatives by conferring some mandate to rule upon the victor, but this dies with large $N$ (whether or not voters realize this is tbd) 3. Erroneous estimation of pivotal likelihood: this was especially prevalent for small $N$, but this is also just hard to know Each of these signals is muddied by noise, but the noise is orthogonal to individual voter preference, which still correlates to the magnitude of their assigned vote credits.
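That orthogonality point can be sanity-checked in a few lines (a toy simulation of mine, not from the paper; the scale parameters are arbitrary):

```python
import numpy as np

# Votes follow the QV-optimal rule v = p*u plus noise drawn independently
# of u (expressive motives, pivotality misestimates, etc.).
rng = np.random.default_rng(0)
u = rng.normal(size=100_000)                 # assigned preferences
noise = rng.normal(scale=0.5, size=u.size)   # orthogonal to u by construction
v = 0.5 * u + noise

# Despite the noise, vote magnitudes still track preference magnitudes.
print(np.corrcoef(u, v)[0, 1])
```

With these scales the correlation sits near $1/\sqrt{2}$: the noise smears votes around but doesn't decouple them from values.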
In the $\mu \neq 0$ case, these irrational behaviors actually help society converge on efficiency: absent them, the public – whose preferences run contrary to the extremists' – doesn't vote very much, leaving the door open to deep-pocketed psychos. These other motivations to vote – which on paper seem irrational – cause people to vote more, which solves the extremist problem in large $N$, and suppresses it for small $N$ (whereas noise further obliterates voting mechanisms in VCG and expected externality, and 1p1v is already obliterated). In the $\mu = 0$ case, there is limiting inefficiency caused by the noise. Recycling the "democracy as a market economy" analogy used thus far, with believably behavioral agents things can be inefficient. In a market economy, we might say "let's just ration all goods and disallow trade because these numb-nuts will engage in trade that will actually lead to a scenario that's worse for everyone." If the variance of the noise exceeds the variance of the underlying value distribution, then 1p1v is better. Democratic society tends to prefer markets to rationing though, go figure, because we believe that heterogeneity in preferences is greater than heterogeneity in noise (Rock Flag and Eagle meme). And QV holds under this same assumption too, so Weyl argues that we mustn't throw the bathwater out lest we part ways with the baby too. The last assumption which QV must withstand is that values are IID, so Weyl presents a model where people don't know the exact distribution of values in the population. This also has limiting inefficiency because of the Bayesian underdog effect. E.g. a Mitt Romney supporter who ardently believes that the polls haven't accounted for him and maybe many people like him, thus his vote is "secretly" pivotal, and therefore the election is closer to being tied than pollsters make it out to be. A real "don't get out of line" typa guy.
Conversely, an Obama supporter makes the exact opposite inference, and concludes that the election is a done deal, pack it up, I've seen enough. It's less likely that the vote is tied because they haven't even accounted for my weird preference distribution yet and I'm voting for Obama, and he's ahead anyways, I'll just stay at home. So, the underdog gets too many votes relative to the expected favorite because of people's estimates of being pivotal. Intuitively, this can't cause much inefficiency because it relies on the underdog remaining the underdog! This is hard to translate into a formal result though. Experimentally, QV cedes about 4% inefficiency in the calibrated scenario and 1p1v buckles under 47%. VCG fails under collusion and fraud, whereas QV is tolerant to the average case. 1p1v fails too, for what it's worth, via vote-buying and coercive tactics. Voluntary voting is inefficient when subjected to external motivations other than being instrumental, whereas QV actually converges on efficiency faster in the average case with believably irrational voters. Expected externality is not even definable outside of a vacuum, and even theoretical constructions are highly sensitive to collusion and fraud. So, at the very least, QV > 1p1v under all reasonable assumptions. Additionally, it's realistic under the complexity of real world constraints: across a large set of specific examples, it's the mechanism that fits reasonably well under constraints where other mechanisms fail. 3 | Buterin et al.: Quadratic Funding Quadratic Funding^3 (which also features Weyl) extends ideas from Quadratic Voting to a funding mechanism for endogenous community formation. The amount of funding received by a project is proportional to the square of the sum of the square roots of contributions received, so citizen $i$'s utility from good $p$ is: $U_i^p = V_i^p \Big(\big(\textstyle\sum_j \sqrt{c_j^p}\big)^2\Big) - c_i^p$ The effect is similar to QV's, rewarding breadth of participation rather than depth.
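The funding rule itself is a one-liner; a quick sketch (contribution amounts made up) shows the breadth-over-depth effect:

```python
import math

def qf_funding(contributions):
    # F = (sum of square roots of individual contributions)^2
    return sum(math.sqrt(c) for c in contributions) ** 2

# The same $100 raised from one donor vs. four donors:
print(qf_funding([100]))              # 100.0 — a lone donor gets no matching
print(qf_funding([25, 25, 25, 25]))   # 400.0 — breadth quadruples the funding
```

Splitting identical total dollars across more contributors strictly increases the funded amount, which is the whole point.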
The Problem Simple private contributory systems famously lead to the under-provision of public goods that benefit many people because of the free-rider problem. Conversely, a system based purely on membership or on some other one-person one-vote (1p1v) system cannot reflect how important various goods are to individuals and will tend to suppress smaller organizations of great value. This is naively circumvented by e.g. “matching” by some larger institution. E.g. “many corporations use similar rules, matching charitable contributions by all full-time employees up to some annual amount. Doing so amplifies small contributions, incents more contributions and greater diversity in potential contributors, and confers a greater degree of influence on stakeholders in determining ultimate funding allocations.” “Tax deductibility for charitable contributions is a form of governmental matching” Unlike the sacrosanctity of 1p1v in democracy, the existence of matching programs in many realms of public goods lends more credence to the admissibility of QF to this domain! I gloss over the model here and go into greater depth in the recap of Freedman’s summary of the elegance of the solution: • $1, …, N$ citizens, • Public goods $p \in P$ which can be proposed by any citizen at any time • $V_i^p(F^p)$ is the currency-equivalent utility citizen $i$ receives if the funding level of good $p$ is $F^p$.
□ The value derived by citizen $i$ from public good $p$ is independent between goods • Each citizen $i$ can make contributions to the funding of each public good $p$ out of her own pocket, $c_i^p$ • The total utility of a citizen $i$ is the sum of utilities across all public goods minus their individual contributions and some tax $t_i$: $\sum_p \big( V_i^p(F^p) - c_i^p \big) - t_i$ • Similarly, total societal welfare is the sum over all public goods and citizens of the utilities gained by each citizen were each good to be funded, minus the actual cost of funding that good $F^p$: $\sum_p\Big( \sum_i V_i^p(F^p) - F^p \Big)$ The optimal mechanism is $\phi^{QF}(\overrightarrow{c}) = \Big\{ \Big(\sum_i \sqrt{c_i^p}\Big)^2 \Big\}_{p \in P}$ The effect is again an elegant mechanism which democratically rewards breadth of participation over depth Play around with it on the website.^25 4 | Freedman: Spinoza, Kant, Weyl Recapitulating the model presented by Buterin et al, Freedman shows how QF uniquely satisfies Kant's Categorical Imperative. Should be familiar by now, but for completeness, since these are derivations worth tracing, unlike –for our purposes– Weyl's appendix.
• Society consists of $N$ well-defined citizens $1, ..., N$ • $p \in P$ are public goods requiring funding • $V_i^p(F^p)$ is the utility function quantifying the utility that citizen $i$ receives if the funding of $p$ is $F^p$ □ this seems like an ass backwards way to measure total societal welfare, but in fact, out of lots of indirection, smoke and mirrors, emerges beauty □ Must be smooth, increasing, concave – though simple monotonicity suffices (Freedman includes this as such a cute & meek lil footnote, Weyl provides the appendix gratia, and Buterin et al spend a lot of time in the derivations) □ All utilities are independent • $C, F$ are the vector spaces of contributions and funding, respectively: □ $\overrightarrow{c} = \{c_i^p\}$ is the vector of individual contributions of the $i$-th citizen towards the $p$-th public good, assumed to be non-negative (no bandits, sry) □ $\overrightarrow{F}$ is the funding vector with components $F^p$ for each good The goal of QF is to find a funding mechanism $\phi: C \rightarrow F$ which maximizes the total societal welfare: $W = \sum_{i,p} V_i^p(F^p) - \sum_p F^p$ Here, Freedman hand-waves away the externalities of taxation, equity, perception, etc. which are all covered by Buterin et al, who show how they perturb behavior at the extrema (turns out, not a whole lot, inherited from the robustness of QV!) Taxation $\{t_i\}$, governed by ~~some arbitrary mechanism~~ we don't really care about, is required to balance the budget, constrained by: $\sum_i t_i = \sum_p (F^p - \sum_ic_i^p)$ So, an individual's tax-corrected utility is $U_i^t = \sum_p \big( V_i^p(F^p) - c_i^p \big) - t_i$ which is included for completeness, but $t_i$ is just some constant which can effectively be omitted since it's not relevant when differentiating. Fixing $p$ and differentiating societal welfare $W^p$ w.r.t.
$F^p$ shows that marginal utility derived from good $p$ should equal 1 wherever $F^p$ is positive: \begin{aligned} W^p = &\sum_i V_i^p(F^p) - F^p \\ (W^p)^\prime = &\sum_i (V_i^p)^\prime - 1 = 0 \\ \implies &\sum_i (V_i^p)^\prime = 1 \\ \end{aligned} Individual utility is given by: \begin{aligned} U_i &= \sum_p \Big( V_i^p(F^p) - c_i^p \Big) \\ &= \sum_p \Big( V_i^p\big(g(\textstyle\sum_j h(c_j^p))\big) - c_i^p \Big) \end{aligned} which assumes the $F^p$ are built from internal analytic functions $h,g$ on the positive reals. $h$ is the weight of a contribution $c$, and $g$ converts total weight into funding. Both $h$ and $g$ scale reciprocally, so funding choices are independent of currency (which seems like a bit of a non-sequitur, but is elucidated a bit more when analyzing $g$). A refined goal, therefore, becomes to find a funding mechanism $\phi_i$ comprised of $h,g$ s.t. $U_i$ is maximized under the constraint that the funding mechanism is democratic, $F$ is assumed to be symmetric in its $N$ variables, and Freedman also injects his own simplifying linearity constraint that: \begin{aligned} \phi(\overrightarrow{c}) = \Big\{ F^p(\overrightarrow{c}) \Big\} = \Big\{ g\Big(\sum_i h(c_i^p)\Big) \Big\} \end{aligned} We can analyze $g,h$ by partially differentiating $U_i$ w.r.t. $c_i^p$ and setting the resultant derivative to 0. \begin{aligned} U_i &= \sum_p \Big( V_i^p(F^p) - c_i^p \Big) \\ \\ &= \sum_p \Big( V_i^p\big(g(\textstyle\sum_j h(c_j^p))\big) - c_i^p \Big) \\ \\ \frac{\partial U_i}{\partial c_i^p} &= \frac{\partial V_i^p}{\partial g} \frac{\partial g}{\partial h(c_i^p)} \frac{d h (c_i^p)}{d c_i^p} - 1 = 0 \end{aligned} $\tag{1} \frac{\partial V_i^p}{\partial g} \frac{\partial g}{\partial h(c_i^p)} \frac{d h(c_i^p)}{d c_i^p} = 1$ Recalling that QF's mechanism $\phi$ is given by: $\phi_{c^p}^{QF} = \Big(\sum_i\sqrt{c_i^p}\Big)^2$ That is, for every good $p$, its level of funding is the square of the sum of the square roots of individual contributions.
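As a quick symbolic sanity check (mine, not Freedman's), the QF pair $g(x) = x^2$, $h(y) = \sqrt{y}$ satisfies the categorical-imperative condition that comes next – marginal funding equals total weight over the contributor's weight – here for a three-citizen toy:

```python
import sympy as sp

# With g(x) = x**2 and h(y) = sqrt(y), funding is F = (sum_i sqrt(c_i))**2.
c1, c2, c3 = sp.symbols("c1 c2 c3", positive=True)
F = (sp.sqrt(c1) + sp.sqrt(c2) + sp.sqrt(c3)) ** 2

# Marginal funding with respect to citizen 1's contribution...
lhs = sp.diff(F, c1)
# ...equals the total weight divided by citizen 1's weight.
rhs = (sp.sqrt(c1) + sp.sqrt(c2) + sp.sqrt(c3)) / sp.sqrt(c1)

print(sp.simplify(lhs - rhs))  # 0
```

The chain rule does all the work: $\partial F/\partial c_1 = 2\Sigma \cdot \tfrac{1}{2\sqrt{c_1}} = \Sigma/\sqrt{c_1}$.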
Here, Freedman explains how Kant's CI ~implies QF: CI implies that if citizen $j$ deems it proper to perturb her weighted contribution $h(c_j)$, say 1%, she should be following, not her limited self-interest, but be justified in expecting all her peers to also see the virtue of such a similar proportional increase in their weighted contribution—“act ... whereby ... it should become universal law.” So, mathematically we may write: $\frac{\partial g(\sum_i h(c_i))}{\partial c_j} = \sum_i \frac{h(c_i)}{h(c_j)}$ Funding should respond to the imputed, community-wide judgment that additional matched resources are required for this good. It follows then, that changes to funding weight impact funding amount: $\tag{3} g'\Big(\sum_i h(c_i)\Big) h'(c_j) = \sum_i \frac{h(c_i)}{h(c_j)}$ Separating and solving the independent equations comprising (3) for some positive constant $k$, we get: \begin{aligned} g'(x) = kx, \quad h'(y) = \frac{1}{k\,h(y)} \end{aligned} And by integrating, we get: \begin{aligned} g(x) = \frac{k}{2}x^2 + m, \quad h(y) = \frac{2}{k}\sqrt{y} + n \end{aligned} Imposing reasonable boundary conditions of $\phi(0) = 0$ (no contributions imply no funding) implies $m = n = 0$, so $k$ must be fixed s.t. for a society of $N = 1$ we get $F(c) = c$: $g(x) = x^2, \quad h(y) = \sqrt{y}$ How quadratic! That was the proof, that's it. We can extremize social utility $U = \sum_i U_i$ by throwing the book (Transcendentals, 5e) at em by tweaking $\phi$ via $h,g$. Rewriting (1) in terms of the partials of $V_i$ w.r.t. $g$, we get: $\frac{\partial V_i}{\partial g} = \frac{1}{\frac{\partial g(\sum)}{\partial h(c_i)} \frac{dh(c_i)}{dc_i} } = \frac{1/\frac{dh(c_i)}{dc_i}}{\frac{\partial g(\sum)}{\partial h(c_i)}}$ where $\sum$ is shorthand for $\sum_{i=1}^N h(c_i)$, which is just the total weight. Summing over $i$, applying the optimality condition, and differentiating w.r.t.
any given citizen's contributions (since they're symmetric, any citizen will do, so let's just choose $c_1$), we get: $\frac{\partial}{\partial c_1} \Bigg( \sum_{i=1}^N \frac{\partial V_i}{\partial g} \Bigg) = \frac{\partial}{\partial c_1} \Bigg( \sum_{i=1}^N \frac{1 / \frac{dh(c_i)}{dc_i} }{\frac{\partial g(\sum)}{\partial h(c_i)}} \Bigg) = 0$ We can expand the outer partial via the quotient rule $(\frac{u}{v})' = \frac{u'v - uv'}{v^2}$, keeping only the numerator and breaking the first variable out of the summation: $0 = \Big(\frac{1}{h'}\Big)'(c_1) \cdot g'(\Sigma) - \frac{g''(\Sigma) h'(c_1)}{h'(c_1)} - \sum_{i=2}^N\frac{1}{h'(c_i)}g''(\Sigma)h'(c_1)$ After collecting terms, we get any of the following, equivalent relations: \begin{aligned} \Big(\frac{1}{h'}\Big)'(c_1) \cdot g'(\Sigma) &= g''(\Sigma)\Big( \sum_{i=1}^N \frac{h'(c_1)}{h'(c_i)} \Big), \\ \\ \frac{g'(\Sigma)}{g''(\Sigma)} &= \frac{\sum_{i=1}^N \frac{h' (c_1)}{h'(c_i)}}{(\frac{1}{h'})'(c_1)}, \\ \\ \big((\log g')'\big)^{-1} &= \frac{h'(c_1)}{-\frac{h''(c_1)}{(h'(c_1))^2}} \Big(\sum_{i=1}^N \frac{1}{h'(c_i)} \Big) \end{aligned} $\tag{13} \big((\log g')'\big)^{-1} = \Big[ -\frac{(h')^3}{h''}(c_1) \Big]\Big(\sum_{i=1}^N \frac{1}{h'(c_i)}\Big)$ The factor in the brackets in (13) depends only on our select citizen $c_1$, whereas $\big((\log g')'(\Sigma)\big)^{-1} \big(\sum_{i=1}^N \frac{1}{h'(c_i)}\big)^{-1}$ is symmetric in all $N$ citizens. So, assuming $N > 1$, it's constant, implying $(h')^3 = -kh''$ for some constant $k$. And we can recursively solve this relation via Taylor series expansion around any positive value of the citizen, yielding the following radical solution: $h(y) = a\sqrt{y} + b$ and we know that $b = 0$ per the boundary conditions, so: \begin{aligned} \frac{1}{h'(c_i)} = \frac{2}{a} \sqrt{c_i} \end{aligned} \begin{aligned} \sum_{i=1}^N \frac{1}{h'(c_i)} = \frac{2}{a^2} \Sigma \end{aligned} and thus (13) becomes: $\frac{g'(\Sigma)}{g''(\Sigma)} = \text{const. } \Sigma$ which, per the Taylor series expansion, yields $g(x) = \text{const. 
} x^2 + \text{const. }'$ with $\text{const. }' = 0$ per the boundary condition, and $\text{const. } = 1$ to match self-funding in the limit of a single, positive contribution. Thus, Freedman recovers $g, h \in \text{QF}$: $g(x) = x^2, h(y) = \sqrt{y}$ concluding that: We can use both reason and love, faith for those who possess it, and the calculus of Leibniz and Newton for those who possess it, to navigate to a fairer, less contentious world. Such a chad to sign off his paper the way I might've in a bad high school English essay overselling the sub-grandiose point I just made (poorly) in 250 words or less, but when he does it – it's based, actually. It was never about the voting mechanisms or the cigarettes, but the friendship strengthened along the way 🫶 1. Kant, Immanuel. "Groundwork of the Metaphysics of Morals." 1785. ↩ 2. Weyl, Glen E. "The Robustness of Quadratic Voting." Public Choice, Vol 172, July 2017. ↩ ↩^2 3. Buterin, Vitalik, Hitzig, Zoë, and Glen Weyl. "A Flexible Design for Funding Public Goods." arXiv, 16 August 2020. ↩ ↩^2 4. Freedman, Michael. "Spinoza, Leibniz, Kant, and Weyl." arXiv, 4 July 2022. ↩ 5. the exact numbers are: □ peter mi / total = 770 mi / 1,109 mi = 0.694 □ trevor mi / total = 1,078 mi / 1,109 mi = 0.972 □ howard mi / total = 1,109 mi / 1,109 mi = 1 6. Canonically, the good natured disputes originated on the trip to Baton Rouge, where we then left brother Howard and flew back to our respective abodes. However, as pointed out in the ensuing debates over the preprint of this post, that setup where we all share a destination rather than an origin is not actually an instance of the Taxi Problem since the rationale for “tagging along” where someone is already headed does not hold. Notably, this actually makes my whole argument crumble, since I wind up owing the most and my motivations for Shapley values falls apart lol. But what was I going to do, not shitpost 35 pages of LaTeX? No. ↩ 7. Steven P. Lalley and E. Glen Weyl. 
"Quadratic Voting: How Mechanism Design Can Radicalize Democracy." American Economic Association Vol 108, May 2018. ↩ ↩^2 8. The thought experiment strictly exists in a pre or post-Brexit universe of measures which would not alter the existence of $R$ itself ↩ 9. Weyl, Glen E. "Price Theory." Journal of Economic Literature, June 2019. ↩ 10. I'm so irrational it will get me killed ↩ 11. Steven P. Lalley and E. Glen Weyl. An Online Appendix to “Quadratic Voting: How Mechanism Design Can Radicalize Democracy.” American Economic Association, 24 December 2017. ↩ 12. +1 to Weyl for his sick ass outfit ↩ 13. -1 for not open sourcing his paper, that’s not very democratically efficient of you, Glen, but fret not dear reader, I’ve taken liberties ↩ 14. I do love that it assumes a pseudo-rational voter who also doesn't understand non-linear proportionality, especially given the myriad empirical examples of calibrated lab participants demonstrating an utter inability to behave rationally. This is not the right hill to crucify this paper on as it actually aids the efficient convergence of price-taking voters for reasons which Weyl covers later ↩ 15. EV maximizer merch here ↩ 16. The Robustness of Quadratic Voting, Becker Friedman Institute University of Chicago ↩ 17. Groves, Theodore. "Incentives in Teams." Econometrica, Vol 41, July 1973. ↩ 18. Gorelkina, Olga. "The Expected Externality Mechanism in a Level-k Environment." Max Planck Institute for Research on Collective Goods, March 2015. ↩ 19. A coworker said "[so and so] wept" after something inconsequential and now I can't stop saying it ↩ 20. The actual amount is a bit more involved than this, as the winner pays out the per-person social welfare delta, which actually solves the Knapsack problem along the way ↩ 21. Voter Turnout, 2018-2022. Pew Research Center. ↩ 22. Steven P. Lalley and E. Glen Weyl. "Nash Equilbria^26 for Quadratic Voting." arXiv, 18 July 2019. ↩ 23. wtfisqf ↩ 24. arXiv is also a typo factory it seems ↩
Federated learning: partitioning non-IID samples via a mixture distribution

In the post Federated learning: partitioning non-IID samples pathologically, we studied the pathological non-IID sample partition used in the founding paper of federated learning [1]. In the last blog post, Federated learning: partitioning non-IID samples via the Dirichlet distribution, we also covered an algorithm for partitioning federated-learning non-IID datasets according to the Dirichlet distribution. Next, let's look at another variant of Dirichlet-based partitioning: splitting non-IID samples via a mixture distribution. This method was first proposed in paper [2]. The paper makes an important assumption: although each client's data in federated learning is non-IID, we assume it all comes from the same mixture distribution (the number of mixture components is a hyperparameter): \[p(x|\theta) = \sum_{k=1}^K\alpha_k p(x|\theta_k) \] A visualization is shown below: With this assumption, we are effectively assuming a similarity between the clients' data, akin to finding hidden IID components inside the non-IID whole. Next, let's look at how to design this partition function. Besides the n_clients, n_classes, \(\alpha\), etc. required by a conventional Dirichlet partition algorithm, it also takes a special n_clusters parameter indicating the number of mixture components. The function prototype: def split_dataset_by_labels(dataset, n_classes, n_clients, n_clusters, alpha, frac, seed=1234): To explain the parameters: dataset is a torch.utils.data.Dataset, n_classes is the number of sample classes in the dataset, and n_clusters is the number of clusters (its meaning will be explained later.
If it is set to -1, it defaults to n_clusters=n_classes, which is equivalent to each class being its own cluster, i.e. abandoning the mixture-distribution assumption). alpha controls the data diversity between clients; frac is the fraction of the dataset to use (default 1, i.e. all data); seed is the random-number seed. The function returns client_idcs, a list of n_clients lists of sample indices, one per client. Now for the body of the function, which can be summarized as follows: first, group all classes into n_clusters clusters; then, for each cluster c, partition its samples across the clients (each client's share is drawn from a Dirichlet distribution). First we check n_clusters; if it is -1, each cluster corresponds to one class by default: if n_clusters == -1: n_clusters = n_classes Then we divide the shuffled label set \(\{0,1,...,n\_classes-1 \}\) into n_clusters evenly-sized ("i.i.d.") clusters.
all_labels = list(range(n_classes))
rng.shuffle(all_labels)  # rng is the random.Random(seed) created at the top of the function

def iid_divide(l, g):
    """Divide list `l` into `g` i.i.d. groups (i.e., split it directly).
    Each group has either int(len(l)/g) or int(len(l)/g)+1 elements.
    Returns a list made up of the groups."""
    num_elems = len(l)
    group_size = int(len(l) / g)
    num_big_groups = num_elems - g * group_size
    num_small_groups = g - num_big_groups
    glist = []
    for i in range(num_small_groups):
        glist.append(l[group_size * i: group_size * (i + 1)])
    bi = group_size * num_small_groups
    group_size += 1
    for i in range(num_big_groups):
        glist.append(l[bi + group_size * i: bi + group_size * (i + 1)])
    return glist

clusters_labels = iid_divide(all_labels, n_clusters)

Next, build a dictionary mapping each label to its cluster id (group_idx) from the clusters_labels above:

label2cluster = dict()  # maps label to its cluster
for group_idx, labels in enumerate(clusters_labels):
    for label in labels:
        label2cluster[label] = group_idx

Then get the indices of the dataset:

data_idcs = list(range(len(dataset)))

After that, we record each cluster's size and collect the sample indices belonging to each cluster:

# Vector recording the size of each cluster
clusters_sizes = np.zeros(n_clusters, dtype=int)
# Stores the data indices corresponding to each cluster
clusters = {k: [] for k in range(n_clusters)}
for idx in data_idcs:
    _, label = dataset[idx]
    # First find the cluster id from the label of the sample
    group_id = label2cluster[label]
    # Then increase the size of the corresponding cluster by 1
    clusters_sizes[group_id] += 1
    # Add the sample index to the list corresponding to its cluster
    clusters[group_id].append(idx)

# Shuffle the sample index list corresponding to each cluster
for _, cluster in clusters.items():
    rng.shuffle(cluster)

Next, we allocate each cluster's samples to the clients according to the Dirichlet distribution.
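Before looking at the function's actual allocation code, here is a standalone toy illustration of this Dirichlet-then-multinomial step (the sizes, seed, and alpha below are made up for the example):

```python
import numpy as np

np.random.seed(42)
n_clients = 4
cluster_size = 100   # samples in one hypothetical cluster
alpha = 0.4          # smaller alpha -> more skewed client shares

# One Dirichlet weight per client, then "roll the dice" cluster_size times
weights = np.random.dirichlet(alpha * np.ones(n_clients))
counts = np.random.multinomial(cluster_size, weights)

print(weights.round(3))      # client shares, summing to 1
print(counts, counts.sum())  # per-client sample counts, summing to 100
```

The function body performs this same step for every cluster.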
# Record the number of samples each client receives from each cluster
clients_counts = np.zeros((n_clusters, n_clients), dtype=np.int64)

# Traverse each cluster
for cluster_id in range(n_clusters):
    # Each client is given a weight drawn from a Dirichlet distribution
    weights = np.random.dirichlet(alpha=alpha * np.ones(n_clients))
    # np.random.multinomial "rolls the dice" clusters_sizes[cluster_id] times,
    # with the probability of landing on each client given by weights.
    # It returns the number of times each client is hit, i.e. the number of
    # samples each client should receive from this cluster.
    clients_counts[cluster_id] = np.random.multinomial(clusters_sizes[cluster_id], weights)

# Prefix-sum (accumulate) each client's counts within each cluster.
# The cumulative counts are the split points that divide each cluster's
# sample list among the clients.
clients_counts = np.cumsum(clients_counts, axis=1)

Then, using these per-cluster split points, we combine and summarize each client's samples across the clusters.
def split_list_by_idcs(l, idcs):
    """Divide list `l` into len(idcs) sublists.
    The i-th sublist goes from index idcs[i] to index idcs[i+1]
    (the sublist from index 0 to idcs[0] is counted separately).
    Returns a list made up of the sublists."""
    res = []
    current_index = 0
    for index in idcs:
        res.append(l[current_index: index])
        current_index = index
    return res

clients_idcs = [[] for _ in range(n_clients)]
for cluster_id in range(n_clusters):
    # cluster_split holds this cluster's samples divided among the clients
    cluster_split = split_list_by_idcs(clusters[cluster_id], clients_counts[cluster_id])
    # Accumulate each client's samples
    for client_id, idcs in enumerate(cluster_split):
        clients_idcs[client_id] += idcs

Finally, we return the sample indices belonging to each client:

return clients_idcs

Next, we call this function on the EMNIST dataset to test it and visualize the result. We set the number of clients \(N=10\), the parameter vector \(\bm{\alpha}\) of the Dirichlet distribution satisfying \(\alpha_i=0.4,\space i=1,2,...N\), and the number of mixture components to 3:

import torch
from torchvision import datasets
import numpy as np
import matplotlib.pyplot as plt

if __name__ == "__main__":
    N_CLIENTS = 10
    DIRICHLET_ALPHA = 1
    N_COMPONENTS = 3

    train_data = datasets.EMNIST(root=".", split="byclass", download=True, train=True)
    test_data = datasets.EMNIST(root=".", split="byclass", download=True, train=False)
    n_channels = 1
    input_sz, num_cls = train_data.data[0].shape[0], len(train_data.classes)
    train_labels = np.array(train_data.targets)

    # Note that each client receives different numbers of samples per label,
    # which achieves the non-IID partition
    client_idcs = split_dataset_by_labels(train_data, num_cls, N_CLIENTS, N_COMPONENTS, DIRICHLET_ALPHA)

    # Display the distribution of the different labels across the clients
    plt.hist([train_labels[idc] for idc in client_idcs], stacked=True,
             bins=np.arange(min(train_labels) - 0.5,
             max(train_labels) + 1.5, 1),
             label=["Client {}".format(i) for i in range(N_CLIENTS)],
             rwidth=0.5)
    plt.xticks(np.arange(num_cls), train_data.classes)

The final visualization (a stacked per-client histogram of label counts, shown in the original post) reveals that although the distribution of the 62 class labels differs across clients, the data distributions of the clients are more similar to each other than under a partition based purely on the Dirichlet distribution, which shows that the mixture-distribution partition algorithm is effective.

References

• [1] McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Artificial Intelligence and Statistics. PMLR, 2017: 1273-1282.
• [2] Marfoq O, Neglia G, Bellet A, et al. Federated multi-task learning under a mixture of distributions[J]. Advances in Neural Information Processing Systems, 2021, 34.
reverse cagr sip calculator

Use this calculator to find the value of your SIPs at the end of your SIP tenure.

CAGR (%): CAGR is the average rate of return of an investment over a period of time, and a CAGR calculator helps you calculate the annual growth rate of an investment. Enter the present value, the future value and the number of years to get the CAGR. It is the rate of return required for an investment to grow from the starting balance to the ending balance, assuming profits are reinvested each year and interest compounds annually. The formula is represented as follows:

CAGR = {(Ending Value / Beginning Value)^(1/n)} - 1

What is a SIP? A SIP (Systematic Investment Plan) is a disciplined way of investing wherein a fixed amount of money is invested in a pre-defined mutual fund scheme at a fixed interval.

You can use the CAGR calculator below to help find the returns on investments made in stocks, lumpsum amounts in SIPs, fixed deposits, investments in an index such as the Nifty50, various mutual fund schemes and many other financial instruments. You just need to enter the initial invested amount, the money you got (or expect to get) at the end of the investment, and the tenure. The correct way to calculate returns from an investor's SIP is to consider the CAGR of each unique SIP instalment separately, and thereafter average these instalments.

Mutual Fund Return Calculator SIP: in mutual funds, or any investment where returns are linked to the market, the returns over a year are shown as the compounded annualised growth rate, or CAGR.

Step 3: Enter the horizon of your goal in years and the percentage of expected return. Step 4: Based on the details you provide, the calculator will tell you how much you should start investing monthly towards a SIP.
SWP Calculator is a tool that helps individuals calculate how their SWP investment will evolve over its tenure. SWP, or Systematic Withdrawal Plan, is a technique of redeeming units or investments in mutual funds. CAGR represents the consistent rate at which the investment would have grown had it compounded steadily.

Suppose you want to have Rs. 25 lakhs in 5 years. To start using the calculator, choose your preference between "Pre-decided" and "Goal Based". This calculator determines the monthly amount you need to save through a SIP to achieve a certain target amount at an assumed rate of return. You can also use XIRR, which will give you the annualised returns you are looking for. Deepesh Raghaw is an alumnus of IIM Lucknow and the Indian Institute of Information Technology, Allahabad.

As per regulations in India, mutual funds are required to use CAGR to report returns for investment periods of more than one year. Method of calculation: CAGR (Compounded Annual Growth Rate). When a lump sum investment is made for more than a year, CAGR provides the right picture of your returns; its limitation is that it does not take the time value of money into account. Niyo Money SIP Calculator will help you calculate returns and plan how much you should invest via SIP to meet your final goals. The online SIP calculator will automatically calculate the maturity amount and the wealth to be gained from the mutual fund investments. How much should you save and invest using a SIP on a monthly basis from now until then? It not only helps you ascertain the monthly amount that you need to invest in SIPs (monthly investment calculator) but also how much corpus you end up creating (SIP returns calculator). To calculate the CAGR of an investment: divide the value of the investment at the end of the period by its value at the beginning of that period.
To go the other way (reverse CAGR), the final value is computed from a known CAGR: in a spreadsheet, =B6*(((B8/100)+1)^B10), where SV (B6) is $15,500, CAGR (B8) is 9.25%, n (B10) is 4 years, and FV is the unknown. You need to input the desired lump sum amount, the number of years you have in hand to achieve it, and the rate of return expected on the investment; we use this formula to calculate the final value. Let's understand with the help of an example: Compound Annual Growth Rate, or CAGR = ((Ending Amount / Beginning Amount)^(1/No. of Years)) - 1. XIRR, or the return on one's SIP investments, comes to 11.88 per cent. A "Reverse Sales Tax Calculator" is useful if you itemize your deductions and claim overpaid local and out-of-state sales taxes on your taxes. To get the CAGR value for your investment, enter the starting value (initial investment amount) along with the expected ending value and the number of months or years for which you want to calculate the CAGR. CAGR considers the whole period you stay invested and helps you factor out the fluctuations within that horizon. We made this app to be a comprehensive investments calculator. Use the Reverse SIP calculator to see how much money you will need to save today to meet your future needs. Happy investing!
Today, we'll take a step further and explore different ways to compute the Compound Annual Growth Rate (CAGR). This article was first published on PersonalFinanceplan.in and can be accessed here. CAGR Calculator is a free online tool to calculate the compound annual growth rate of your investment over a time period. Step 1: Select 'I know my Goal'. Step 2: Enter the amount you want to save for your goal. To calculate CAGR, divide the price at the end by the price at the beginning, then take the nth root of the result, wherein 'n' stands for the number of periods, and finally subtract one from the result. Calculate the returns of your lumpsum investment using the 5paisa Lumpsum calculator and create the best plan to achieve your financial goals. In our example, we know that our CAGR over 4 years is 9.25%.

Suppose you want to have Rs. 2.50 lakhs in 5 years: how much should you save and invest using a SIP on a monthly basis from now until then? The only thing to remember about claiming sales tax on tax forms is to save every receipt for every purchase you intend to claim. In simple words, reverse CAGR calculates the future (maturity) amount of an investment based on a CAGR which is already known. Use the Reverse SIP calculator to see how much money you will need to save today to meet your future needs. SWP Calculator - Systematic Withdrawal Plan Calculator, updated on December 18, 2020. You decide the amount, the SIP date and the schemes in which you wish to invest.
To get the CAGR value for your investment, enter the starting value (initial investment amount) along with the expected ending value and the number of months or years for which you want to calculate the CAGR. You can enter the period in years, months, days, or a combination of all three.

Reverse SIP Calculator: this calculator will tell you how much monthly investment you have to make in order to achieve a certain target value in the future. CAGR Calculator is a free online tool to calculate the compound annual growth rate of your investment over a time period. The calculator app has three modes. CAGR mode: calculate the Compounded Annual Growth Rate of your investments when you know the buying price, the selling price and the period for which you held the investment. Goal mode: you are investing for a goal and want to know the required monthly investment to meet that goal. CAGR Time (Years) mode: calculates how much time is required to convert a starting amount into a final amount (present value into future value) at a fixed CAGR or return rate.

The tutorial explains what the Compound Annual Growth Rate is, and how to make a clear and easy-to-understand CAGR formula in Excel. Since a SIP consists of equal-sized instalments, a weighted average calculation is not needed; a simple average of the instalments' CAGRs can be used. A lumpsum calculator is an automated tool that does all your investment math for you. SIP Planner: do you want to calculate how small investments made at regular intervals can grow into a large figure over a period of time due to the power of compounding? CAGR stands for Compound Annual Growth Rate. Let's calculate a reverse CAGR. At CAGRfunds, we have a unique SIP Calculator. Therefore, if you want to calculate returns for your SIP, use XIRR instead. In response to a query by ET Wealth reader Mr.
Parag P: What if one adds a lumpsum amount during the SIP; how does one calculate XIRR? (2) If any lump sum amount is added, insert its date (in a new row at the right place) and the amount (a negative figure, as it is a cash outflow).
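The formulas discussed on this page can be made concrete with a small Python sketch (the function names and the 12% assumed annual return below are mine, not from any of the calculators mentioned):

```python
def cagr(start_value, end_value, years):
    """CAGR (%) = ((Ending Value / Beginning Value) ** (1/n) - 1) * 100."""
    return ((end_value / start_value) ** (1 / years) - 1) * 100

def reverse_cagr(start_value, cagr_pct, years):
    """Reverse CAGR: future value from a known CAGR, FV = SV * (1 + CAGR/100) ** n."""
    return start_value * (1 + cagr_pct / 100) ** years

def reverse_sip(target, annual_return_pct, years):
    """Monthly SIP needed to reach `target`, assuming monthly compounding and
    contributions at the start of each month (annuity-due)."""
    i = annual_return_pct / 100 / 12   # monthly rate
    n = years * 12                     # number of instalments
    # Future value of an annuity-due: P * (((1 + i) ** n - 1) / i) * (1 + i)
    return target * i / (((1 + i) ** n - 1) * (1 + i))

fv = reverse_cagr(15_500, 9.25, 4)           # the spreadsheet example above
print(round(fv, 2))
print(round(cagr(15_500, fv, 4), 2))         # recovers 9.25
print(round(reverse_sip(250_000, 12, 5), 2)) # monthly SIP for Rs. 2.5 lakhs in 5 years at an assumed 12%
```

Note that reverse_sip inverts the annuity-due future-value formula; an actual SIP's realised return would be computed with XIRR, as the article says.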
Unscramble ABASIAS

How Many Words are in ABASIAS Unscramble?

By unscrambling the letters of ABASIAS, our Word Unscrambler (aka Scrabble Word Finder) easily found 32 playable words for virtually every word-scramble game!

Letter / Tile Values for ABASIAS

Below are the values for each of the letters/tiles in Scrabble. The letters in ABASIAS combine for a total of 9 points (not including bonus squares):

• A [1] • B [3] • A [1] • S [1] • I [1] • A [1] • S [1]

What do the Letters of ABASIAS Unscrambled Mean?

The unscrambled words with the most letters from ABASIAS are below, along with their definitions.

• abasia () - Sorry, we do not have a definition for this word
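The tile arithmetic above can be checked in a few lines (the helper function is mine, using only the letter values listed on this page):

```python
# Scrabble tile values for the letters that occur in ABASIAS, as listed above
TILE_VALUES = {"A": 1, "B": 3, "S": 1, "I": 1}

def scrabble_score(word):
    """Sum the tile values of a word (bonus squares not included)."""
    return sum(TILE_VALUES[ch] for ch in word.upper())

print(scrabble_score("abasias"))  # 9
```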
read-tiny-float: Function, Package: excl

ToC DocOverview CGDoc RelNotes FAQ Index PermutedIndex
Allegro CL version 10.0
Unrevised from 9.0 to 10.0. 9.0 version

Arguments: string &key (return-zero t)

This is a convenience function which allows users to read a small-magnitude float safely (that is, without floating-point-underflow errors). If the float is smaller in magnitude than the least-positive float of the appropriate format, then zero in the appropriate form (in the appropriate format) is returned by default, or, if the return-zero keyword argument is specified as nil, the least-positive or least-negative float of the appropriate format is returned.

string must be a string whose contents read as a single floating-point number. It is an error resulting in undefined behavior if there is more in the string than the representation of a single floating-point number. (The function decides whether the resulting float is positive or negative based on the presence and location of a #\- character. Such characters after the float representation could result in an incorrect value.)

If string represents a float larger than the least-positive float or smaller than the least-negative one, the read is done as if by read-from-string. No protection is provided against floating-point-overflow errors.

;; This function is intended to allow for forms like the following:
;; (defconstant *app-single-eps*
;;   (max least-positive-single-float 1.0e-50))
;; LEAST-POSITIVE-SINGLE-FLOAT is 1.0e-45, so that is the desired value,
;; but reading 1.0e-50 may cause a floating-point-underflow error.
;; That can be protected against with this form:
;; (defconstant *app-single-eps*
;;   (max least-positive-single-float
;;        (read-tiny-float "1.0e-50")))

;; In the remaining examples, we assume *read-default-float-format*
;; is single-float.

(read-tiny-float "1.0") RETURNS 1.0
(read-tiny-float "1.0e-50") RETURNS 0.0
(read-tiny-float "1.0e-50" :return-zero nil) RETURNS 1.0e-45 ;; i.e.
least-positive-single-float
(read-tiny-float "-1.0e-50" :return-zero nil) RETURNS -1.0e-45 ;; i.e. least-negative-single-float
(read-tiny-float "-1.0d-50" :return-zero nil) RETURNS -1.0d-50
(read-tiny-float "-1.0d-550" :return-zero nil) RETURNS -5.0d-324 ;; i.e. least-negative-double-float

See Floating-point infinities and NaNs, and floating-point underflow and overflow in implementation.htm for more information and sample handler code to catch the errors.

Copyright (c) 1998-2019, Franz Inc. Oakland, CA., USA. All rights reserved.
This page was not revised from the 9.0 page.
Created 2015.5.21.
The Chain Rule - It's Actually Quite Easy
My Maths Guy - Online Math Courses & Course Bundles

What is the Chain Rule?

The Chain Rule is used to differentiate composite functions. A COMPOSITE FUNCTION is a 'function inside a function', but composite functions can take many forms. The challenge with The Chain Rule is not just using the rule, but knowing when it can be used in the first place. The original page shows a selection of example composite functions, along with the outer and inner functions for each; as you can see, there are many different types. Notice that in each case there are two functions present: an 'outer' function and an 'inner' function.

The Chain Rule

The Chain Rule itself can be written in several different ways. Some of these are more useful than others, especially when it comes to applying The Chain Rule in different situations. But here is the key to mastering The Chain Rule . . . it's just the derivative of the outer function multiplied by the derivative of the inner function. So a good way to remember The Chain Rule is:

(the derivative of the outer) x (the derivative of the inner)

Translating this intuitive notion into mathematical form gives us the following. Let the outer function be f(x) and let the inner function be u, where u is a function of x. Then each composite function is of the form f(u) and the derivative, using The Chain Rule, is given by:

dy/dx = f'(u) x du/dx

Need Help with Differentiation? We have the solution with our CALCULUS 1 ONLINE COURSE. Featuring 45 step-by-step instructional videos and more than 200 relevant practice questions with full solutions. Ideal to support your classroom work, help with homework, and prepare for final exams.

An Important Point . . . .

Regardless of how well you understand and learn The Chain Rule, you still have to differentiate the outer and inner functions successfully. To do so, you have to learn and practice the rules for differentiating other types of functions.
Common function types are polynomials (which require The Power Rule), trigonometric functions, and exponential & logarithmic functions. Download our FREE DIFFERENTIATION FORMULA LIST and see our Differentiation Q&A post HERE.

The Solutions

Have a go yourself at finding the derivative of the above example functions. Remember to just take the derivative of the outer and inner functions and multiply them together. Then check the solutions below or watch our YouTube VIDEO on those questions.

Any Questions? Drop us a message if you have a question about any of the techniques on this page or about differentiation in general.
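As a quick numerical sanity check (this snippet is mine, not part of the course material), the chain-rule derivative of sin(x²) can be compared against a central finite difference:

```python
import math

def derivative(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Composite function: outer sin(u), inner u = x**2
f = lambda x: math.sin(x ** 2)

# Chain rule: (derivative of the outer) x (derivative of the inner)
#           = cos(x**2) * 2x
chain = lambda x: math.cos(x ** 2) * 2 * x

x0 = 1.0
print(derivative(f, x0), chain(x0))  # the two values agree to roughly 1e-9
```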
[ANN] Numpy 1.4.0 release

I am pleased to announce the release of numpy 1.4.0. The highlights of this release are:

- Faster import time
- Extended array wrapping mechanism for ufuncs
- New neighborhood iterator (C-level only)
- C99-like complex functions in npymath, and a lot of portability fixes for basic floating-point math functions

The full release notes are at the end of the email. The sources are uploaded on PyPI, and the binary installers will soon come on the SourceForge page:

Thank you to everyone involved in this release: developers, users who reported bugs, fixed documentation, etc...

the numpy developers.

NumPy 1.4.0 Release Notes

This minor release includes numerous bug fixes, as well as a few new features. It is backward compatible with the 1.3.0 release.

* Faster import time
* Extended array wrapping mechanism for ufuncs
* New neighborhood iterator (C-level only)
* C99-like complex functions in npymath

New features

Extended array wrapping mechanism for ufuncs

An __array_prepare__ method has been added to ndarray to provide subclasses greater flexibility to interact with ufuncs and ufunc-like functions. ndarray already provided __array_wrap__, which allowed subclasses to set the array type for the result and populate metadata on the way out of the ufunc (as seen in the implementation of MaskedArray). For some applications it is necessary to provide checks and populate metadata *on the way in*. __array_prepare__ is therefore called just after the ufunc has initialized the output array, but before computing the results and populating it. This way, checks can be made and errors raised before operations which may modify data in place.

Automatic detection of forward incompatibilities

Previously, if an extension was built against a version N of NumPy and used on a system with NumPy M < N, import_array was successful, which could cause crashes because version M may lack functions present in N.
Starting from NumPy 1.4.0, this will cause a failure in import_array, so the error will be caught early on.

New iterators

A new neighborhood iterator has been added to the C API. It can be used to iterate over the items in a neighborhood of an array, and can handle boundary conditions automatically. Zero and one padding are available, as well as arbitrary constant values, mirror and circular padding.

New polynomial support

New modules chebyshev and polynomial have been added. The new polynomial module is not compatible with the current polynomial support in numpy, but is much like the new chebyshev module. The most noticeable difference to most users will be that coefficients are specified from low to high power, that the low-level functions do *not* work with the Chebyshev and Polynomial classes as arguments, and that the Chebyshev and Polynomial classes include a domain. Mapping between domains is a linear substitution, and the two classes can be converted one to the other, allowing, for instance, a Chebyshev series in one domain to be expanded as a polynomial in another domain. The new classes should generally be used instead of the low-level functions; the latter are provided for those who wish to build their own classes. The new modules are not automatically imported into the numpy namespace; they must be explicitly brought in with "import numpy.polynomial".

New C API

The following C functions have been added to the C API:

#. PyArray_GetNDArrayCFeatureVersion: return the *API* version of the loaded numpy.
#. PyArray_Correlate2 - like PyArray_Correlate, but implements the usual definition of correlation. Inputs are not swapped, and the conjugate is taken for complex arrays.
#. PyArray_NeighborhoodIterNew - a new iterator to iterate over a neighborhood of a point, with automatic boundary handling. It is documented in the iterators section of the C-API reference, and you can find some examples in the multiarray_test.c.src file in numpy.core.
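A short illustration of the new polynomial classes described above (using today's top-level import path; in 1.4 the module had to be brought in explicitly as noted):

```python
from numpy.polynomial import Polynomial, Chebyshev

# Coefficients are specified from low to high power: 1 + 2x + 3x^2
p = Polynomial([1, 2, 3])

# The same function re-expressed in the Chebyshev basis
c = p.convert(kind=Chebyshev)

# Both representations evaluate identically
print(p(0.5), c(0.5))  # both evaluate to 2.75
```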
New ufuncs

The following ufuncs have been added to the C API:

#. copysign: return the value of the first argument with the sign copied from the second argument.
#. nextafter: return the next representable floating point value of the first argument toward the second argument.

New defines

The alpha processor is now defined and available in numpy/npy_cpu.h. The failed detection of the PARISC processor has been fixed. The defines are:

#. NPY_CPU_HPPA: PARISC
#. NPY_CPU_ALPHA: Alpha

Testing

#. deprecated decorator: this decorator may be used to avoid cluttering testing output while checking that DeprecationWarning is effectively raised by the decorated test.
#. assert_array_almost_equal_nulps: new method to compare two arrays of floating point values. With this function, two values are considered close if there are not many representable floating point values in between, thus being more robust than assert_array_almost_equal when the values fluctuate a lot.
#. assert_array_max_ulp: raise an assertion if there are more than N representable numbers between two floating point values.
#. assert_warns: raise an AssertionError if a callable does not generate a warning of the appropriate class, without altering the warning state.

Reusing npymath

In 1.3.0, we started putting portable C math routines in the npymath library, so that people can use those to write portable extensions. Unfortunately, it was not possible to easily link against this library: in 1.4.0, support has been added to numpy.distutils so that 3rd parties can reuse this library. See the coremath documentation for more information.

Improved set operations

In previous versions of NumPy some set functions (intersect1d, setxor1d, setdiff1d and setmember1d) could return incorrect results if the input arrays contained duplicate items. These now work correctly for input arrays with duplicates.
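As a quick illustration of what the two new ufuncs do from the Python side (a sketch of my own, assuming IEEE-754 doubles; the public numpy names mirror the C ufuncs):

```python
import numpy as np

# copysign: magnitude of the first argument, sign of the second.
# It respects the sign of negative zero, unlike abs()-based tricks.
print(np.copysign(3.0, -0.0))   # -3.0

# nextafter: the adjacent representable float, stepping toward the
# second argument. One step up from 1.0 is exactly 1.0 + machine epsilon.
print(np.nextafter(1.0, 2.0) - 1.0 == np.finfo(np.float64).eps)   # True
```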
setmember1d has been renamed to in1d, as with the change to accept arrays with duplicates it is no longer a set operation, and is conceptually similar to an elementwise version of the Python operator 'in'. All of these functions now accept the boolean keyword assume_unique. This is False by default, but can be set to True if the input arrays are known not to contain duplicates, which can increase the functions' execution speed.

Improvements

#. numpy import is noticeably faster (from 20 to 30% depending on the platform and computer).
#. The sort functions now sort nans to the end.
   * Real sort order is [R, nan]
   * Complex sort order is [R + Rj, R + nanj, nan + Rj, nan + nanj]
   Complex numbers with the same nan placements are sorted according to the non-nan part if it exists.
#. The type comparison functions have been made consistent with the new sort order of nans. searchsorted now works with sorted arrays containing nan values.
#. Complex division has been made more resistant to overflow.
#. Complex floor division has been made more resistant to overflow.

Deprecations

The following functions are deprecated:

#. correlate: it takes a new keyword argument old_behavior. When True (the default), it returns the same result as before. When False, compute the conventional correlation, and take the conjugate for complex arrays. The old behavior will be removed in NumPy 1.5, and raises a DeprecationWarning in 1.4.
#. unique1d: use unique instead. unique1d raises a deprecation warning in 1.4, and will be removed in 1.5.
#. intersect1d_nu: use intersect1d instead. intersect1d_nu raises a deprecation warning in 1.4, and will be removed in 1.5.
#. setmember1d: use in1d instead. setmember1d raises a deprecation warning in 1.4, and will be removed in 1.5.

The following raise errors:

#. When operating on 0-d arrays, ``numpy.max`` and other functions accept only ``axis=0``, ``axis=-1`` and ``axis=None``. Using an out-of-bounds axis is an indication of a bug, so Numpy raises an error for these cases.
#. Specifying ``axis > MAX_DIMS`` is no longer allowed; Numpy now raises an error instead of behaving similarly as for ``axis=None``.

Internal changes

Use C99 complex functions when available

The numpy complex types are now guaranteed to be ABI compatible with the C99 complex type, if available on the platform. Moreover, the complex ufuncs now use the platform C99 functions instead of our own.

Split multiarray and umath source code

The source code of multiarray and umath has been split into separate logical compilation units. This should make the source code more amenable to separate compilation.

Separate compilation

By default, every file of multiarray (and umath) is merged into one for compilation as was the case before, but if the NPY_SEPARATE_COMPILATION env variable is set to a non-negative value, experimental individual compilation of each file is enabled. This makes the compile/debug cycle much faster when working on core numpy.

Separate core math library

New functions which have been added:

* npy_copysign
* npy_nextafter
* npy_cpack
* npy_creal
* npy_cimag
* npy_cabs
* npy_cexp
* npy_clog
* npy_cpow
* npy_csqrt
* npy_ccos
* npy_csin
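To illustrate the improved set operations and the in1d rename from the Python side (a hypothetical session of my own, not part of the release notes; the array values are made up):

```python
import numpy as np

a = np.array([1, 2, 2, 5, 7])
b = np.array([2, 5, 9])

# The 1.4.0 set functions now handle duplicate inputs correctly:
print(np.intersect1d(a, b))          # [2 5]

# in1d (the renamed setmember1d) is an elementwise "in": for each
# element of the first array, is it present in the second?
print(np.in1d(a, b))                 # [False  True  True  True False]

# If both inputs are known to be duplicate-free, assume_unique=True
# skips the internal deduplication pass for speed:
print(np.in1d([1, 3, 5], [3, 4], assume_unique=True))   # [False  True False]
```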
ALcdAzimuthal projections are ILcdProjection objects that have a central point of zero distortion.
An ALcdConic is an ILcdProjection that is derived by projection of geodetic points on a cone which is then unrolled.
An ALcdCylindrical is an ILcdProjection that is obtained by wrapping a cylinder around the earth globe such that it touches the equator.
ALcdGeneralPerspective projections are ILcdProjection objects that have a central point of zero distortion.
An ALcdObliqueCylindrical is an ILcdProjection that is obtained by wrapping a cylinder around the globe.
ALcdPerspective projections are ILcdProjection objects that have a central point of zero distortion.
This abstract class can be used to implement an ILcdProjection.
An ALcdTransverseCylindrical is an ILcdProjection for which a cylinder is wrapped around the globe.
ILcdAzimuthal projections are ILcdProjection objects that have a central point of zero distortion.
An ILcdConic is an ILcdProjection that is derived by projection of geodetic points on a cone which is then unrolled.
An ILcdCylindrical is an ILcdProjection that is obtained by wrapping a cylinder around the earth globe such that it touches the equator.
ILcdGeneralPerspective projections are ILcdProjection objects that have a central point of zero distortion.
An ILcdObliqueCylindrical is an ILcdProjection that is obtained by wrapping a cylinder around the globe.
ILcdPerspective projections are ILcdProjection objects that have a central point of zero distortion.
An ILcdProjection is a map projection.
An ILcdProjection that uses pairs of tie points to map from one coordinate system to another.
An ILcdTransverseCylindrical is an ILcdProjection for which a cylinder is wrapped around the globe.
Albers Equal Area Conic projection.
Azimuthal Equidistant projection.
Ellipsoidal version of the Cassini projection.
Stereographic projection (modified for the Netherlands).
Equidistant Cylindrical projection that uses ellipsoidal calculations.
Lambert Azimuthal Equal-Area projection.
Equidistant Cylindrical projection.
General perspective projection.
Spherical Lambert Azimuthal Equal-Area projection.
Lambert Conformal Conic projection.
Miller Cylindrical projection.
Oblique Mercator projection.
The variant of this projection.
Orthorectified projection.
The perspective projection is an azimuthal projection that maps a 3D scene to a 2D plane as viewed through a camera viewfinder.
Polar stereographic projection.
Custom bean editor to select and edit the properties of ILcdProjection objects.
TLcdProjectionEditor extends PropertyEditorSupport to provide bean property editor support for projections.
Factory class to create ILcdProjection objects from a Properties object, or to serialize a given ILcdProjection object as properties into a Properties object as a side effect.
Pseudo-Mercator projection.
Rectified Polynomial projection.
Rectified Projective projection.
Rectified Rational projection.
A projection that maps image coordinates to ground coordinates based on Rational Polynomial Coefficients.
Stereographic projection.
Swiss Oblique Mercator projection.
Transverse Mercator projection.
Vertical perspective projection.
Math Colloquia - Circular maximal functions on the Heisenberg group

The spherical average has been a source of many problems in harmonic analysis. Since the late 90's, the study of the maximal spherical means on the Heisenberg group $\mathbb{H}^n$ was initiated to show pointwise ergodic theorems on the groups. Later, it turned out to be connected with the fold singularities of Fourier integral operators, which leads to the $L^p$ boundedness of the spherical maximal means on the Heisenberg group $\mathbb{H}^n$ for $n \ge 2$. In this talk, we discuss the $L^p$ boundedness of the circular maximal function on the Heisenberg group $\mathbb{H}^1$. The proof is based on the square sum estimate of the Fourier integral operators associated with the torus arising from the vector fields of the Heisenberg group algebra. We compare this torus with the characteristic cone of the Euclidean space.
In this comprehensive article, we will explore the COVARIANCE.S formula in Excel. The COVARIANCE.S formula is a statistical function that calculates the sample covariance between two sets of data. Covariance is a measure of how two variables change together and can be used to determine the strength and direction of the relationship between them. A positive covariance indicates that the variables tend to increase or decrease together, while a negative covariance indicates that one variable tends to increase as the other decreases. The COVARIANCE.S formula is particularly useful in finance, where it can be used to analyze the relationship between the returns of two different assets. COVARIANCE.S Syntax The syntax for the COVARIANCE.S formula in Excel is as follows: =COVARIANCE.S(array1, array2) • array1 is the first set of data points (required). • array2 is the second set of data points (required). Both arrays must have the same number of data points, and each data point should be a number. The formula will return the sample covariance between the two sets of data points. COVARIANCE.S Examples Let’s look at some examples of how to use the COVARIANCE.S formula in Excel. Example 1: You have two sets of data points, A1:A5 and B1:B5. To calculate the sample covariance between these two sets of data, you would use the following formula: =COVARIANCE.S(A1:A5, B1:B5) Example 2: You have two columns of data, one representing the monthly returns of Stock A and the other representing the monthly returns of Stock B. To calculate the sample covariance between the returns of these two stocks, you would use the following formula: =COVARIANCE.S(StockA_Returns, StockB_Returns) COVARIANCE.S Tips & Tricks Here are some tips and tricks to help you get the most out of the COVARIANCE.S formula in Excel: 1. Remember that covariance is a measure of the relationship between two variables, not the strength of that relationship. 
To measure the strength of the relationship, consider using the CORREL function to calculate the correlation coefficient. 2. When comparing the covariance between multiple pairs of variables, keep in mind that the scale of the covariance depends on the scale of the variables. To compare covariances on a standardized scale, consider using the correlation coefficient instead. 3. If you need to calculate the population covariance instead of the sample covariance, use the COVARIANCE.P function. Common Mistakes When Using COVARIANCE.S Here are some common mistakes to avoid when using the COVARIANCE.S formula in Excel: 1. Using different numbers of data points for array1 and array2. Both arrays must have the same number of data points for the formula to work correctly. 2. Using non-numeric data points in the arrays. The COVARIANCE.S formula requires that all data points be numbers. 3. Confusing sample covariance with population covariance. The COVARIANCE.S formula calculates the sample covariance, which is an unbiased estimate of the population covariance. If you need to calculate the population covariance, use the COVARIANCE.P function instead. Why Isn’t My COVARIANCE.S Working? If your COVARIANCE.S formula isn’t working, consider the following troubleshooting steps: 1. Check that both arrays have the same number of data points. If they don’t, adjust your data or use a different formula. 2. Ensure that all data points in both arrays are numbers. If any data points are non-numeric, the formula will not work correctly. 3. Verify that you are using the correct formula for your needs. If you need to calculate the population covariance, use the COVARIANCE.P function instead of COVARIANCE.S. COVARIANCE.S: Related Formulae Here are some related formulae that you might find useful when working with the COVARIANCE.S formula in Excel: 1. CORREL: Calculates the correlation coefficient between two sets of data points. 
The correlation coefficient is a standardized measure of the strength and direction of the relationship between two variables.

2. COVARIANCE.P: Calculates the population covariance between two sets of data points. Use this function if you need to calculate the population covariance instead of the sample covariance.

3. PEARSON: Calculates the Pearson correlation coefficient between two sets of data points. This is equivalent to the CORREL function.

4. SLOPE: Calculates the slope of the linear regression line between two sets of data points. This can be used to estimate the relationship between two variables.

5. INTERCEPT: Calculates the intercept of the linear regression line between two sets of data points. This can be used in conjunction with the SLOPE function to estimate the relationship between two variables.

By understanding the COVARIANCE.S formula and its related functions, you can effectively analyze the relationship between two sets of data points in Excel. This can be particularly useful in finance, where understanding the relationship between the returns of different assets is crucial for portfolio management and risk assessment.
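To make the sample-versus-population distinction concrete, here is a short Python sketch of what COVARIANCE.S computes: deviations from the means, summed and divided by n − 1 (the function name and sample values below are mine, for illustration only):

```python
def covariance_s(xs, ys):
    """Sample covariance, mirroring Excel's =COVARIANCE.S(array1, array2).

    Divides by n - 1 (sample estimate); COVARIANCE.P would divide by n.
    """
    if len(xs) != len(ys):
        # Excel's #N/A error for mismatched array lengths
        raise ValueError("both arrays must have the same number of data points")
    n = len(xs)
    if n < 2:
        raise ValueError("sample covariance needs at least two data points")
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# Example 1 from the article, with made-up values in A1:A5 and B1:B5:
a = [2, 4, 6, 8, 10]
b = [1, 3, 2, 5, 4]
print(covariance_s(a, b))   # 4.0 -- positive: the series tend to rise together
```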
Efficient exploration and calibration of a semi-analytical model of galaxy formation with deep learning. arXiv e-prints, arXiv:2103.01072, March 2021.

We implement a sample-efficient method for rapid and accurate emulation of semi-analytical galaxy formation models over a wide range of model outputs. We use ensembled deep learning algorithms to produce a fast emulator of an updated version of the GALFORM model from a small number of training examples. We use the emulator to explore the model's parameter space, and apply sensitivity analysis techniques to better understand the relative importance of the model parameters. We uncover key tensions between observational datasets by applying a heuristic weighting scheme in a Markov chain Monte Carlo framework and exploring the effects of requiring improved fits to certain datasets relative to others. Furthermore, we demonstrate that this method can be used to successfully calibrate the model parameters to a comprehensive list of observational constraints. In doing so, we re-discover previous GALFORM fits in an automatic and transparent way, and discover an improved fit by applying a heavier weighting to the fit to the metallicities of early-type galaxies. The deep learning emulator requires a fraction of the model evaluations needed in similar emulation approaches, achieving an out-of-sample mean absolute error at the knee of the K-band luminosity function of 0.06 dex with less than 1000 model evaluations. We demonstrate that this is an extremely efficient, inexpensive and transparent way to explore multi-dimensional parameter spaces, and can be applied more widely beyond semi-analytical galaxy formation models.
title = {Efficient exploration and calibration of a semi-analytical model of galaxy formation with deep learning}, volume = {2103}, url = {http://adsabs.harvard.edu/abs/2021arXiv210301072E}, abstract = {We implement a sample-efficient method for rapid and accurate emulation of semi-analytical galaxy formation models over a wide range of model outputs. We use ensembled deep learning algorithms to produce a fast emulator of an updated version of the GALFORM model from a small number of training examples. We use the emulator to explore the model's parameter space, and apply sensitivity analysis techniques to better understand the relative importance of the model parameters. We uncover key tensions between observational datasets by applying a heuristic weighting scheme in a Markov chain Monte Carlo framework and exploring the effects of requiring improved fits to certain datasets relative to others. Furthermore, we demonstrate that this method can be used to successfully calibrate the model parameters to a comprehensive list of observational constraints. In doing so, we re-discover previous GALFORM fits in an automatic and transparent way, and discover an improved fit by applying a heavier weighting to the fit to the metallicities of early-type galaxies. The deep learning emulator requires a fraction of the model evaluations needed in similar emulation approaches, achieving an out-of-sample mean absolute error at the knee of the K-band luminosity function of 0.06 dex with less than 1000 model evaluations. We demonstrate that this is an extremely efficient, inexpensive and transparent way to explore multi-dimensional parameter spaces, and can be applied more widely beyond semi-analytical galaxy formation models.}, urldate = {2021-05-12}, journal = {arXiv e-prints}, author = {Elliott, Edward J. and Baugh, Carlton M. and Lacey, Cedric G.}, month = mar, year = {2021}, keywords = {Astrophysics - Astrophysics of Galaxies}, pages = {arXiv:2103.01072},
69. A modest proposal regarding TCR

Posted on May 23rd, 2016 in Isaac Held's Blog

The radiative forcing (left) and global mean temperature response (right) using a simple GCM emulator, for the historical CO2 forcing (red) and for the linearly increasing forcing consistent with the simulations used to define the transient climate response (blue), for 3 different ramp-up time scales, the 70 year time scale (solid blue) corresponding to the standard definition.

The terminology surrounding climate sensitivity can be confusing. People talk about equilibrium sensitivity, Charney sensitivity, Earth system sensitivity, effective sensitivity, transient climate response (TCR), etc., making it a challenge to communicate with the public, and sometimes even with ourselves, on this important issue. I am going to focus on the TCR here (yet again). The TCR of a model is determined by what appears to be a rather arbitrary calculation: starting with the climate in equilibrium, increase CO2 at 1% per year until doubling (about 70 years). The global mean warming of the near-surface air temperature at the time of doubling is the TCR. In a realistic model with internal variability, you need to do this multiple times and then average to knock down the noise so as to isolate the forced response if you are trying to be precise. If limited to one or two realizations, you average over years 60-80 or use some kind of low-pass filter to help isolate the forced response. Sometimes the TCR is explicitly defined as the warming averaged over years 60-80. Although I have written several posts emphasizing the importance of the TCR in this series, I would like to argue for a de-emphasis of the TCR in favor of another quantity (admittedly very closely related - hence the modesty of this proposal). If you talk to someone about the TCR you have to explain why this idealized 1%/yr scenario is of interest.
From my perspective, the importance of the TCR stems from its close relationship to the warming from the mid-19th century to the present that can be attributed to the CO2 increase. There is a growing literature on estimating TCR from observations, using the instrumental temperature record over this time frame. But these studies are not direct estimates of TCR; they are estimates of the warming attributable to the CO2 increase, which are then converted to TCR by assuming that the warming is proportional to the CO2 radiative forcing. If the forcing due to a doubling of CO2 is $\mathcal{F}_{2X}$ and the forcing due to the observed increase in CO2 over the period $T = (t_1, t_2)$ is $\mathcal{F}(T)$, then

TCR = WA[CO2](T)/$\xi$

Here I have defined WA[CO2](T) as the global mean Warming Attributable to CO2 over the time interval T, and I have set $\xi = \mathcal{F}(T)/\mathcal{F}_{2X}$. For the rest of this post, I'll just assume that T = (1850, 2010). For this period, $\xi$ is about 0.45. The past warming attributable to CO2 is itself important as a constraint on models used to project this warming into the future. Whether your model is a simple extrapolation or an energy balance model or a full GCM that simulates climate by simulating weather, you obviously want the model you are using to be consistent with the past warming. Estimating WA[CO2](T) from observations over the past century or so is far from straightforward, due primarily to the uncertainty in the cooling due to anthropogenic aerosols, but also due to the presence of other forcing agents, including other well-mixed greenhouse gases, as well as internal variability. But what's the point of converting someone's estimate of the range of values of WA[CO2] consistent with observations into the corresponding range of TCR values? The point is simply that the latter has become a standard for the comparison of GCM responses, so the range of TCR estimates from models is readily available.
But this does not seem like a very good reason to try to communicate the importance of the TCR value rather than the more obviously relevant WA[CO2](T). How good is the proportionality assumption WA[CO2](T) = $\xi$ TCR? And if it is good, why? For concreteness I'll use a very simple three time-scale fit to the response of a particular GCM to an instantaneous doubling of CO2. The model is GFDL's CM3 and the fit is described in Winton et al 2013. The response takes the form

$T(t) = \sum\limits_{i=1}^{3}\alpha_i[1-\exp(-t/\tau_i)]$

with $[\alpha_1, \alpha_2, \alpha_3]$ = [1.5, 1.3, 1.8] K and $[\tau_1, \tau_2, \tau_3]$ = [3, 60, 1000] years. I have rounded off the time scales a bit. Since this model is linear you can scale this response to that for an infinitesimal increase and then add up the responses to the forcing over time for any CO2(t). (See the discussion of the response to volcanic forcing in post #50.) I carried it along for these calculations, but the very long millennial time scale present in the GCM has a negligible effect on WA[CO2] or TCR, so this is really a two-time-scale model for our purposes. And you may have noticed that this is a rather sensitive model. But keep in mind that it is linear, so if you multiply all of the $\alpha$'s by the same factor you change the amplitude of all responses, including WA[CO2] and TCR, by this same factor. [In the calculations to follow, I'm assuming that the radiative forcing due to CO2 is exactly logarithmic in CO2 concentration, so a 1% increase/yr is a linear increase in radiative forcing.]

The red line in the figure on the left above shows the CO2 radiative forcing from 1850 to 2010 from GISS. The solid blue line shows the linear increase in forcing over 70 years that ends up at the same value of forcing at 2010 as the red line. This is the forcing due to a 1%/year increase multiplied by $\xi$ — or, equivalently, it is the forcing due to a $\xi$%/yr increase for 70 years.
Also shown with the blue dashed lines are the linear forcing trajectories that reach the same point in 2010 but increasing the 70 year interval to 90 years or decreasing it to 50. The 70 year linear increase at $\xi$%/yr is evidently a pretty good fit after 1960. It's not relevant whether a 1%/yr increase is larger than the increase in CO2 forcing, since we are assuming linearity and normalizing the TCR anyway. The key is that a linear fit to the recent period of rapid increase in CO2 forcing requires roughly 70 years starting from zero. The figure on the right shows the responses of the three-time-scale model to these forcing trajectories. The standard (70 yr) TCR after normalization underestimates WA[CO2] (the red curve) by about 3%, which is basically negligible given the uncertainties in TCR that we are concerned about. My eyeball estimate of the error, given the forcing that is missed by this linear approximation before 1950, keeping in mind the 60 year intermediate e-folding time in this model, would have been a bit more than this, so I have checked this result a couple of times — which does not guarantee that I did not make a mistake, of course. (It seems that the error made by missing the response to the increases in CO2 in the first half of the 20th century is canceled in part by the fact that the linear fit in the more recent period is not perfect.) Even if you make the sub-optimal choices of 50 or 90 years for the ramp-up, the errors are only of the order of 10%. So the approximation WA[CO2] = $\xi$ TCR looks good, at least for this particular response function. If you want to modify the model to create a larger difference, you will have to decrease the relative importance of the fast response that occurs on time scales shorter than the time scales of the CO2 evolution itself and put more weight on the longer time scales. Using discrete response times is not the only way of emulating a GCM's response function.
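The discrete-time-scale emulator described above is simple enough to reproduce. Here is a minimal Python sketch (my own, not the author's script) using the quoted $\alpha_i$ and $\tau_i$ values; for a linear forcing ramp, superposing the step response gives a closed-form integral:

```python
import math

# Three-time-scale step-response fit (values quoted from Winton et al. 2013)
alphas = [1.5, 1.3, 1.8]      # K, contributions per CO2 doubling
taus   = [3.0, 60.0, 1000.0]  # years

def step_response(t):
    """Warming t years after an instantaneous CO2 doubling."""
    return sum(a * (1.0 - math.exp(-t / tau)) for a, tau in zip(alphas, taus))

def ramp_response(t, ramp_years=70.0):
    """Warming under a linear forcing ramp reaching one doubling at ramp_years.

    Integrating the step response over the ramp gives, analytically,
    T(t) = (1/ramp) * sum_i alpha_i * (t - tau_i * (1 - exp(-t/tau_i))).
    """
    return sum(a * (t - tau * (1.0 - math.exp(-t / tau)))
               for a, tau in zip(alphas, taus)) / ramp_years

tcr = ramp_response(70.0)   # warming at the time of doubling
print(round(tcr, 2))        # about 2.0 K for this (rather sensitive) fit
```

Because the model is linear, rescaling all the α's rescales the TCR by the same factor, as noted in the post.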
Diffusive models have a long history in this regard. But as long as the fast response is as large a part of the response to centennial-scale forcing as it is in GCMs (see Geoffroy et al 2013) you won't get very much of a discrepancy. We could de-emphasize the 1%/yr simulation in favor of just simulating the response to the historical CO2 increase. This simulation is performed routinely by some groups, but for the CMIP projects, including the upcoming CMIP6, it is the response to the historical evolution of all of the well-mixed greenhouse gases (WMGGs) that is typically requested, without breaking out the CO2 contribution. This raises another issue — the validity of assuming that you can get the response to CO2 from the response to the full set of WMGGs by simply normalizing by the ratio of the radiative forcings. Given questions about how best to define radiative forcing (a good topic for another post), this adds an unnecessary layer if you are primarily interested in a model's response to CO2. Rather than focusing on TCR itself, especially when discussing this topic outside of scientific circles, we should think of it as just a standard way of estimating WA[CO2] for a model, a technique that could be improved if desired. Perhaps what we need is a good acronym for the warming attributable to CO2. WACO2 seems less than ideal.

[The views expressed on this blog are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.]

1 thought on "69. A modest proposal regarding TCR"

1. This is perfectly laid out. The lesser alternative is probably to have something called an effective TCR, where the modifier implies that CO2 effectively pulls in the other GHG's along with the baseline impact of just CO2.
It’s frustrating to read blogs such as And Then There’s Physics, where every post is the same confusing message of presenting the dry definition for TCR without the necessary context. One clear post is all that is required. Thanks.
Tauman Kalai, Yael (Ed.)

We introduce and study the communication complexity of computing the inner product of two vectors, where the input is restricted w.r.t. a norm N on the space ℝⁿ. Here, Alice and Bob hold two vectors v,u such that ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1, where N^* is the dual norm. The goal is to compute their inner product ⟨v,u⟩ up to an ε additive term. The problem is denoted by IP_N, and generalizes important previously studied problems, such as:

(1) Computing the expectation 𝔼_{x∼𝒟}[f(x)] when Alice holds 𝒟 and Bob holds f is equivalent to IP_{𝓁₁}.

(2) Computing v^TAv where Alice has a symmetric matrix with bounded operator norm (denoted S_∞) and Bob has a vector v where ‖v‖₂ = 1. This problem is complete for quantum communication complexity and is equivalent to IP_{S_∞}.

We systematically study IP_N, showing the following results, nearly tight in most cases:

1) For any symmetric norm N, given ‖v‖_N ≤ 1 and ‖u‖_{N^*} ≤ 1 there is a randomized protocol using 𝒪̃(ε^{-6} log n) bits of communication that returns a value in ⟨u,v⟩±ε with probability 2/3 - we will denote this by ℛ_{ε,1/3}(IP_N) ≤ 𝒪̃(ε^{-6} log n). In a special case where N = 𝓁_p and N^* = 𝓁_q for p^{-1} + q^{-1} = 1, we obtain an improved bound ℛ_{ε,1/3}(IP_{𝓁_p}) ≤ 𝒪(ε^{-2} log n), nearly matching the lower bound ℛ_{ε,1/3}(IP_{𝓁_p}) ≥ Ω(min(n, ε^{-2})).

2) One-way communication complexity ℛ^{→}_{ε,δ}(IP_{𝓁_p}) ≤ 𝒪(ε^{-max(2,p)}⋅ log n/ε), and a nearly matching lower bound ℛ^{→}_{ε,1/3}(IP_{𝓁_p}) ≥ Ω(ε^{-max(2,p)}) for ε^{-max(2,p)} ≪ n.

3) One-way communication complexity ℛ^{→}_{ε,δ}(N) for a symmetric norm N is governed by the distortion of the embedding 𝓁_∞^k into N. Specifically, while a small distortion embedding easily implies a lower bound Ω(k), we show that, conversely, non-existence of such an embedding implies a protocol with communication k^𝒪(log log k) log² n.
4) For an arbitrary origin-symmetric convex polytope P, we show ℛ_{ε,1/3}(IP_{N}) ≤ 𝒪(ε^{-2} log xc(P)), where N is the unique norm for which P is a unit ball, and xc(P) is the extension complexity of P (i.e. the smallest number of inequalities describing some polytope P' such that P is a projection of P').
Peter Rowlett

In the last Finite Group livestream, Katie told us about emirps. If a number p is prime, and the number formed by reversing its digits is also prime, the reversal is an emirp (‘prime’ backwards, geddit?). For example, 13, 3541 and 9999713 are prime. Reversing their digits we get the primes 31, 1453 and 3179999, so these are all emirps. It doesn’t work for all primes – for example, 19 is prime, but 91 is \(7 \times 13\).

In the livestream chat the concept of primemirp emerged. This would be a concatenation of a prime with its emirp. There’s a niggle here: just like in the word ‘primemirp’ the ‘e’ is both the end of ‘prime’ and the start of ’emirp’, so too in the number the middle digit is the end of the prime and the start of its emirp. Why? Say the digits of a prime number are \( a_1 a_2 \dots a_n \), and its reversal \( a_n \dots a_2 a_1 \) is also a prime. Then the straight concatenation would be \( a_1 a_2 \dots a_n a_n \dots a_2 a_1 \). Each digit \(a_i\) appears once in an even-numbered place and once in an odd-numbered place, so its two place values have exponents of opposite parity. Now, since \[ 10^k \bmod 11 = \begin{cases} 1, & \text{if } k \text{ is even;}\\ 10, & \text{otherwise,} \end{cases} \] it follows that each \(a_i \) contributes \(a_i(1 + 10) = 11a_i\), a multiple of eleven, to the concatenation modulo 11. So the straight concatenation is always divisible by 11 and cannot itself be prime – for instance, sticking 13 and 31 together gives \(1331 = 11^3\). A mismatched central digit breaks this pattern, allowing for the possibility of a prime.

I wrote some code to search for primemirps by finding primes, reversing them and checking whether they were emirps, then concatenating them and checking the concatenation. I found a few! Then I did what is perfectly natural to do when a sequence of integers appears in front of you – I put it into the OEIS search box. Imagine my surprise to learn that the concept exists and is already included in the OEIS! It was added by Patrick De Geest in February 2000, based on an idea from G. L. Honaker, Jr. But there was no program code to find these primes and only the first 32 examples were given.
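A minimal search along the lines described might look like this (a sketch, not the program now in the OEIS entry; the helper names and the trial-division primality test are illustrative):

```python
def is_prime(n: int) -> bool:
    """Trial division -- plenty fast for the small primes searched here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def primemirps(limit: int) -> list[int]:
    """Primemirps built from primes below `limit`: the prime's reversal
    must also be prime, and the overlapped concatenation (middle digit
    shared, as in the word 'primemirp') must itself be prime."""
    found = []
    for p in range(2, limit):
        if not is_prime(p):
            continue
        s = str(p)
        reversed_s = s[::-1]
        if not is_prime(int(reversed_s)):
            continue  # the reversal is not prime, so p has no emirp
        candidate = int(s + reversed_s[1:])  # share the middle digit
        if is_prime(candidate):
            found.append(candidate)
    return found

print(primemirps(100))  # 13 gives 131, 31 gives 313, 37 gives 373, 79 gives 797
```

The `reversed_s[1:]` is the crucial overlap: the straight concatenation (1331 from 13 and 31, say) is always divisible by 11, so only the overlapped version can be prime.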
I edited the entry to include a Python program to search for primemirps and added entries up to the 8,668th, which I believe is all primemirps where the underlying prime is less than ten million. My edits to the entry just went live at A054218: Palindromic primes of the form ‘primemirp’. The 8,668th primemirp is 9,999,713,179,999.

Finite Group: first free live stream

As we wrote about recently, we (Katie and Peter, along with our friends Sophie Maclean and Matthew Scroggs) are involved in an exciting new initiative – an online maths community that gets together via online chat and monthly video events. The first event happened yesterday evening, and will be available to watch for free on YouTube for the next couple of months. This is a taster – if you’d like to join the online community and attend next month’s event, you need to join the Finite Group (starting from £4/month).

2. Four bugs

Reminder: I’m occasionally working to (sort of) recreate Martin Gardner’s cover images from Scientific American, the so-called Gardner’s Dozen. This time I’m looking at the cover image from the July 1965 issue, accompanying the column on ‘op art’ (which became chapter 24 in Martin Gardner’s Sixth Book of Mathematical Diversions from Scientific American).

New Book – Short Cuts: Maths

Our own Katie and Peter have collaborated on a new popular maths book, along with friends of the site Alison Kiddle and Sam Hartburn, which is out today. Short Cuts: Maths is an “expert guide to mastering the numbers behind the mysteries of modern mathematics,” and includes a range of topics from infinity and imaginary numbers to mathematical modelling, logic and abstract structures. We spoke to the four authors to see how they found writing the book and what readers can expect.

How did this project come to be?

Peter: From my point of view, Katie approached me to ask if I’d like to be involved, which was very exciting!
She’d worked on a couple of books with the same publisher and was asked to commission authors for this one.

Katie: The publishers wanted to make this book – one of a ‘Short Cuts’ series which needed a maths title – and asked me to be commissioning editor, which meant I could write some of it and ask others to write the rest. I chose some people I’ve worked with before who I thought would have something interesting to say about some topics in maths (in particular, the topics I know less about, so they could help me with those bits!)

Alison: As I was the last of the four of us to come on board, I think everyone had already expressed a preference for their favourite bits to write about, but luckily that left me with the two best topics, logic and probability.

Do any of you have previous experience of working on a project like this?

Alison: I’ve been involved in writing a book before, but that one was about maths education, for an audience of mainly teachers, so this was a different sort of challenge: writing for a general audience with different levels of maths prior knowledge and enthusiasm.

Sam: I’ve worked on many books in the same genre as a copyeditor and proofreader, but this was my first time as an author. I enjoyed seeing how the publishing process works from the author’s point of view – it’s definitely had an impact on my editorial work!

Peter: My first time in popular book form, though I felt it used a bunch of skills I’ve developed in other work. And Katie is so great at organising projects that it went really smoothly.

What’s the book like?

Sam: It’s a book you can dip into – you don’t need to read it from front to back. Each page is self-contained and answers a question, and we tried to make the questions as interesting as possible (two of my particular favourites are ‘Is a mountain the same as a molehill?’ and ‘Do Nicolas Cage films cause drownings?’).
Alison: We had quite a strict word limit to write to, which was a bit hard to get used to at first as I have a tendency to use ten words when two will do – but this turned out to be a blessing because it focussed us all on what the really important concepts were, and we found ways to express those concepts in a concise manner.

Katie: I love how the style of the book builds in these gorgeous illustrations – we worked with the illustrator to make sure they fit with the text, but also bring out fun aspects of the ideas we’re talking about.

Who do you think would enjoy reading this book?

Sam: I’d like to think that anyone who has a vague interest in maths would get something out of it. Even though it delves into some deep mathematical topics, we’ve (hopefully!) written it in such a way that it’s understandable to anybody with school-level maths. But I’d hope that experienced mathematicians would also be able to find something new, or at least fun, in there.

Alison: I’m definitely going to be recommending it to the students I work with. The bite-size dipping in and out model is great for them to skim read so they can find out a little bit about the mathematical ideas that appeal to them. Particularly useful for people preparing for university interviews where they want to show off that they know some maths beyond the usual curriculum!

Katie: My mum’s definitely getting a copy for Christmas – and not just because I was involved in writing it: she’s not from a mathematical background but I think she’d enjoy the straightforward explanations and discovering new ideas.

What’s your favourite bit?

Sam: The publisher did commission some lovely illustrations. The bear in the modelling cycle is a particular delight!

Katie: Yes! We love the modelling bear. I also liked being able to share ideas people might not otherwise encounter if they read about mathematics, like how mathematical modelling works, or what topology is, or some of the nitty-gritty of mathematical logic.
Peter: There are loads of quick summaries of areas of maths I know less about, which is really nice to have. The illustrations are great – the baby failing to manage a crocodile always makes me chuckle, and I can’t wait to show my son the game theory dinosaurs!

Short Cuts: Maths is available to buy today from all good bookshops.

Mathematical Objects: Post-season 7 update

A short update from Katie and Peter.

Announcing The Finite Group

“Wouldn’t it be nice if there was a place where maths people could hang out and create cool maths things?” This idea was put to me a couple of years ago, and has stuck with me. It does sound nice.

Fast forward to 2023, and social media is collapsing. Some people have chosen a direction and are marching off towards Mastodon, Bluesky, Threads, or a number of other platforms. Some people are trying to keep up with several of these, but feeling spread too thin and wondering if it’s worth the effort (ask me how I know!). But many people are taking the opportunity to step back and think again. People are rethinking whether they want to conduct their online social lives in public. There is a surge in private communities – things like WhatsApp groups, Slack channels and Discord rooms. These have the advantage that you aren’t part of the ‘engagement’-driven content push, but they have disadvantages too – you have to know the right people to get into the group.

Meanwhile, wouldn’t it be nice if there was a place where maths people could hang out and create cool maths things? So we’re creating it. We’re calling it The Finite Group (who doesn’t love a punny maths name?). “We” is Katie Steckles, Sophie Maclean, Matthew Scroggs and me. It’s going to be a maths community that gets together to share and create cool maths things, and that supports creators to do their work within the group and on the wider internet.
Mathematical Objects: Mathematics in Theory and Practice, edited by Warwick Sawyer

A conversation about mathematics inspired by an old textbook, Mathematics in Theory and Practice, edited by Warwick Sawyer. Presented by Katie Steckles and Peter Rowlett.
Modelling Methodologies in Analogue Integrated Circuit Design
1785616951, 9781785616952

Table of contents:

About the editors
1. Introduction / Gunhan Dundar and Mustafa Berke Yelten

Part I: Fundamentals of modelling methodologies

2. Response surface modeling / Jun Tao, Xuan Zeng, and Xin Li
2.1 Introduction 2.2 Problem formulation 2.3 Least-squares regression 2.4 Feature selection 2.4.1 Orthogonal matching pursuit 2.4.2 L1-norm regularization 2.4.3 Cross-validation 2.4.4 Least angle regression 2.4.5 Numerical experiments 2.5 Regularization 2.6 Bayesian model fusion 2.6.1 Zero-mean prior distribution 2.6.2 Nonzero-mean prior distribution 2.6.3 Numerical experiments 2.7 Summary

3. Machine learning / Olcay Taner Yıldız
3.1 Introduction 3.2 Data 3.3 Dimension reduction 3.3.1 Feature selection 3.3.2 Feature extraction 3.4 Clustering 3.4.1 K-Means clustering 3.4.2 Hierarchical clustering 3.5 Supervised learning algorithms 3.5.1 Simplest algorithm: prior 3.5.2 A simple but effective algorithm: nearest neighbor 3.5.3 Parametric methods: five shades of complexity 3.5.4 Decision trees 3.5.5 Kernel machines 3.5.6 Neural networks 3.6 Performance assessment and comparison of algorithms 3.6.1 Sensitivity analysis 3.6.2 Resampling 3.6.3 Comparison of algorithms

4. Data-driven and physics-based modeling / Slawomir Koziel
4.1 Model classification 4.2 Modeling flow 4.3 Design of experiments 4.4 Data-driven models 4.4.1 Polynomial regression 4.4.2 Radial basis function interpolation 4.4.3 Kriging 4.4.4 Support vector regression 4.4.5 Neural networks 4.4.6 Other methods 4.5 Physics-based models 4.5.1 Variable fidelity models 4.5.2 Space mapping 4.5.3 Response correction methods 4.5.4 Feature-based modeling 4.6 Model selection and validation 4.7 Summary

5. Verification of modeling: metrics and methodologies / Ahmad Tarraf and Lars Hedrich
5.1 Overview 5.1.1 State space and normal form 5.2 Model validation 5.2.1 Model validation metrics 5.3 Semiformal model verification 5.4 Formal model verification 5.4.1 Equivalence checking 5.4.2 Other formal techniques 5.5 Formal modeling 5.5.1 Correct by construction: (automatic) abstract model generation via hybrid automata 5.6 Conclusion

Part II: Applications in analogue integrated circuit design

6. An overview of modern, automated analog circuit modeling methods: similarities, strengths, and limitations / Alex Doboli and Ciro D’Agostino
6.1 Introduction 6.2 Symbolic analysis 6.2.1 Fundamental symbolic methods 6.2.2 Beyond linear filters: simplification and hierarchical symbolic expressions 6.2.3 Tackling complexity: advanced symbolic representations and beyond linear analysis 6.3 Circuit macromodeling 6.4 Neural networks for circuit modeling 6.5 Discussion and metatheory on analog circuit modeling 6.6 Conclusions

7. On the usage of machine-learning techniques for the accurate modeling of integrated inductors for RF applications / Fabio Passos, Elisenda Roca, Rafael Castro-Lopez and Francisco V. Fernandez
7.1 Introduction 7.2 Integrated inductor design insight 7.3 Surrogate modeling 7.3.1 Modeling strategy 7.4 Modeling application to RF design 7.4.1 Inductor optimization 7.4.2 Circuit design 7.5 Conclusions

8. Modeling of variability and reliability in analog circuits / Javier Martin-Martinez, Javier Diaz-Fortuny, Antonio Toro-Frias, Pablo Martin-Lloret, Pablo Saraza-Canflanca, Rafael Castro-Lopez, Rosana Rodriguez, Elisenda Roca, Francisco V. Fernandez, and Montserrat Nafria
8.1 Modeling of the time-dependent variability in CMOS technologies: the PDO model 8.2 Characterization of time zero variability and time-dependent variability in CMOS technologies 8.3 Parameter extraction of CMOS aging compact models 8.3.1 Description of the method 8.3.2 Application examples 8.4 CASE: a reliability simulation tool for analog ICs 8.4.1 Simulator features 8.4.2 TZV and TDV studied in a Miller operational amplifier 8.5 Conclusions

9. Modeling of pipeline ADC functionality and nonidealities / Enver Derun Karabeyoglu and Tufan Coskun Karalar
9.1 Pipeline ADC 9.2 Flash ADC 9.3 Behavioral model of pipeline ADCs 9.3.1 A 1.5-bit sub-ADC model 9.3.2 Multiplier DAC 9.3.3 A 3-bit flash ADC 9.4 Sources of nonidealities in pipeline ADCs 9.4.1 Op-amp nonidealities 9.4.2 Switch nonidealities 9.4.3 Clock jitter and skew mismatch 9.4.4 Capacitor mismatches 9.4.5 Current sources matching error 9.4.6 Comparator offset 9.5 Final model of the pipeline ADC and its performance results 9.6 Conclusion

10. Power systems modelling / Jindrich Svorc, Rupert Howes, Pier Cavallini, and Kemal Ozanoglu
10.1 Introduction 10.2 Small-signal models of DC–DC converters 10.2.1 Motivation 10.2.2 Assumptions 10.2.3 Test vehicle 10.2.4 Partitioning of the circuit 10.2.5 Model types 10.2.6 Basic theory for averaged – continuous-time model 10.2.7 Duty-cycle signal model 10.2.8 Pulse width modulator model 10.2.9 Model of the power stage 10.2.10 Complete switching, linear and small-signal model 10.2.11 The small-signal open-loop transfer function 10.2.12 Comparison of various models 10.2.13 Other outputs of the small-signal model 10.2.14 Switching frequency effect 10.2.15 Comparison of the averaged and switching models 10.2.16 Limitations of the averaged model 10.3 Efficiency modelling 10.4 Battery models 10.5 Capacitance modelling 10.6 Modelling the inductors 10.6.1 Spice modelling 10.6.2 Advanced modelling 10.6.3 Saturation current effects and modelling 10.7 Conclusion

11. A case study for MEMS modelling: efficient design and layout of 3D accelerometer by automated synthesis / Steffen Michael and Ralf Sommer
11.1 Introduction 11.2 Synthesis of MEMS designs and layouts – general aspects 11.3 Working principle and sensor structure 11.4 Technology 11.5 Design strategy and modelling 11.5.1 Library approach 11.5.2 Modelling 11.6 MEMS design and layout example 11.6.1 xy Acceleration unit 11.6.2 Accelerometer for z detection 11.6.3 Layout

12. Spintronic resistive memories: sensing schemes / Mesut Atasoyu, Mustafa Altun, and Serdar Ozoguz
12.1 Background 12.1.1 Physical structure of an MTJ 12.1.2 The switching mechanism of STT-MRAM 12.2 Sensing schemes of STT-MRAM 12.3 Conclusion

13.
Conclusion / Mustafa Berke Yelten and Gunhan Dundar

Modelling Methodologies in Analogue Integrated Circuit Design
Edited by Günhan Dündar and Mustafa Berke Yelten
The Institution of Engineering and Technology

Published by The Institution of Engineering and Technology, London, United Kingdom
The Institution of Engineering and Technology is registered as a Charity in England & Wales (no. 211014) and Scotland (no. SC038698).
© The Institution of Engineering and Technology 2020
First published 2020

This publication is copyright under the Berne Convention and the Universal Copyright Convention. All rights reserved. Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may be reproduced, stored or transmitted, in any form or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency.
Enquiries concerning reproduction outside those terms should be sent to the publisher at the undermentioned address: The Institution of Engineering and Technology, Michael Faraday House, Six Hills Way, Stevenage, Herts, SG1 2AY, United Kingdom. www.theiet.org

While the authors and publisher believe that the information and guidance given in this work are correct, all parties must rely upon their own skill and judgement when making use of them. Neither the authors nor publisher assumes any liability to anyone for any loss or damage caused by any error or omission in the work, whether such an error or omission is the result of negligence or any other cause. Any and all such liability is disclaimed.

The moral rights of the authors to be identified as authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

British Library Cataloguing in Publication Data
A catalogue record for this product is available from the British Library
ISBN 978-1-78561-695-2 (hardback)
ISBN 978-1-78561-696-9 (PDF)

Typeset in India by MPS Limited
Printed in the UK by CPI Group (UK) Ltd, Croydon

About the editors

Günhan Dündar is a full professor in the department of electrical and electronic engineering, Boğaziçi University, Turkey.
He has authored and co-authored more than 200 international journal and conference papers in the broad area of circuits and systems, as well as one book and one book chapter. He is also the recipient of several research awards. His research interests lie in the design of, and design methodologies for, analogue integrated circuits.

Mustafa Berke Yelten is an assistant professor in the department of Electronics and Communications Engineering, Istanbul Technical University, Turkey. He previously worked as a quality and reliability research engineer at Intel Corporation between 2011 and 2015. As a senior member of IEEE, he has been a technical program committee member of several IEEE conferences. His research interests include the design, optimization, and modelling of nanoscale transistors and the design of analogue/RF integrated circuits.

Chapter 1
Introduction
Günhan Dündar¹ and Mustafa Berke Yelten²

This book is intended to fill in a missing link between two well-established disciplines, namely modelling and circuit design. From the perspective of circuit designers, simulation times have increased beyond what is practical with the ever-increasing complexity of circuits, even though computers have also got faster. Modelling systems, especially at higher levels of design abstraction, has become a necessity. On the other hand, the designer is faced with a bewildering array of choices in models. In order to make a feasible choice, the circuit designer has to understand the underlying mathematics of each model as well as its limitations. From the perspective of model developers, circuit design offers a wide range of applications for which various modelling approaches can be integrated. Models are required not only for simple analogue/digital circuits but also for components, especially as the technologies move into very deep submicron dimensions.
Nanometre-scale transistors present modelling problems, not only for their nominal behaviour but also for variations and time-dependent drifts. Furthermore, modelling studies of integrated microelectromechanical systems (MEMS)/electronics systems, on-chip inductors, and beyond-complementary metal-oxide-semiconductor (CMOS) devices present ample opportunity to develop novel approaches. In order for a model to be adopted, there are certain specifications to be met. First of all, the models should be accurate enough to capture the experimental results. This requires that the model account for many details, which leads to increased mathematical complexity. Nevertheless, as the complexity is augmented, the convergence in simulation is mostly impaired, thereby leading to a high computation cost. Another issue arises when the model-representation capability of the underlying physics of a phenomenon is assessed. Empirical models can provide excellent accuracy, yet they cannot be leveraged to provide insights to designers about the underlying causes of an unusual observation. Conversely, a truly physics-based model will be unable to explain the time- and process-based random variations, which ultimately lead to inaccuracies. Finally, 'the curse of dimensionality' exacerbates all factors described previously: high dimensionality of a problem requires a plethora of simulations or measurements for high accuracy. In addition, it causes the model to become more convoluted with less intuition to designers, thereby adding to the computation cost.

¹ Electrical and Electronics Engineering Department, Boğaziçi University, Istanbul, Turkey
² Electronics and Communications Engineering Department, Istanbul Technical University, Istanbul, Turkey
This analysis indicates that developing versatile models for analogue integrated circuits comes down to striking a delicate balance of trade-offs between accuracy, physical intuition capability, and computational efficiency. On a higher level, a modelling expert should have vast knowledge and experience in solid-state device physics and microfabrication, analogue circuit design, as well as the mathematical foundations of modelling methodologies. Often, modelling engineers cannot have full expertise in each of these areas. Nevertheless, resources for modelling approaches, where the mechanics of constructing a model meets the practical specifications expected from models in typical applications, are strongly desired by integrated circuit designers. This book can be viewed as a response to this expectation. As the difficulty of both constructing and validating a model has grown significantly, this book aims to revisit some basic modelling concepts and techniques by concurrently considering the challenges that the community of designers is facing. With an in-depth analysis of different modelling approaches, their applications in analogue integrated circuit design can be better understood. Later, during the discussion of modelling applications, different areas where analogue circuits can be employed will be discussed, and the need for modelling is presented along with possible solutions. To address the perspective stated previously, this book is divided into two parts. The first part, entitled "Fundamentals of Modelling Methodologies", surveys various modelling approaches irrespective of circuit applications. We have tried to provide reasonably complete coverage of modelling methodologies with sufficient theoretical treatment. This part is intended for readers with a background in circuit design, but not in modelling. It is inevitable that there are some overlaps between chapters, but these have been minimized.
Where there are overlaps, we have deemed them necessary for completeness within a chapter, since this book can be utilized as a reference book, where the reader can refer to each chapter individually. The second part, entitled "Applications in Analogue Integrated Circuit Design", deals with various problems in circuit design where modelling is applied. These problems are by no means complete, and the reader may find many other applications. This second part is intended for readers with a stronger mathematical background who are looking for applications and/or research areas in modelling. Chapter 2 is about response surface modelling (RSM), which is one of the critical techniques for analysing analogue and mixed-signal circuits. Fitting an RSM to a function involves least-squares regression in its simplest form. However, as circuits become more complicated, the number of variables grows beyond what is tractable, thereby leading to an excessive time cost of simulation. Hence, this chapter introduces several different techniques to tackle this growing complexity. Chapter 3 introduces machine learning. In its most straightforward meaning, machine learning is extracting meaningful information from a large amount of data. Starting from the fundamentals, this chapter takes the reader quickly towards concepts such as clustering, learning, and neural networks. During this journey, the theory is as rigorous as possible without bogging down the reader in detail. The final few pages of the chapter deal with a methodological performance assessment of models, which is a subject often overlooked by model developers and users alike. Chapter 4 concerns itself with data-driven and physics-based modelling. The emphasis in the chapter is on surrogate modelling, where fast replacement models are developed to alleviate the high cost of simulations. Initially, data-driven models are considered, since they are generic and no prior knowledge about the problem is required.
In terms of data-driven models, sampling the data properly is of utmost importance, whereby the design of experiments is introduced. Regarding various types of data-driven models, polynomial regression, radial basis function interpolation, kriging, support vector regression, and neural networks are considered as well as some other less well-known approaches. Physics-based models require knowledge about the system to be modelled but result in much more efficient models. These models are based on suitable correction or enhancement of an underlying low-fidelity model. Among the approaches described are variable fidelity models, space mapping, response correction, and feature-based modelling. The chapter concludes with a discussion on model selection and validation. Chapter 5, which is the last section of the first part, resumes from where the previous chapter left off, namely verification of modelling. This subject has become exceedingly important with more complex systems, where it is virtually impossible to check the equivalence of the system and its model under every scenario possible. Initially, the concept of state space in analogue circuits is defined in the chapter, since it forms the basis of the subsequent discussions. Then, the model validation problem is defined, and the relevant metrics are discussed. The difference between semi-formal and formal model verification is highlighted, and some formal verification techniques are described. Finally, formal modelling methods are introduced and illustrated in an example. Chapter 6 provides an overview of modelling applied to analogue circuits. As such, it also sets the stage for subsequent chapters where more specific models and/or circuits are considered. A significant portion of the chapter is dedicated to symbolic analysis, which has not been discussed in the previous part as it pertains specifically to analogue circuits, and mostly to linear analysis. 
However, one can view symbolic analysis as a special case of physics-based models. Next, circuit macromodelling is introduced; macromodels are more ad hoc models developed to fit the behaviour of the system under consideration. Finally, the use of neural networks for modelling the behaviour of analogue circuits is discussed with many examples from the literature. Chapter 7 deals with machine learning in the modelling of integrated inductors. The integrated inductor is an essential component in radio frequency circuit design, whose modelling has been lacking in accuracy for a long time. The development of the models follows the design of experiments, data generation, model construction, and model validation steps, as discussed earlier in the book. Kriging models were used as surrogate models for the inductors. However, the chapter does not end here but continues with the utilization of the models in an optimizer to choose the best inductors in that particular technology and finally to design circuits around the inductors. Chapter 8 considers modelling, but not in the classical sense of describing the nominal behaviour. Modelling of variability and reliability is considered in this chapter. The reliability models developed are physics-based and obtained from the measured data of the ENDURANCE chip. Finally, a circuit reliability simulator based on these models is presented. Chapter 9 focuses on the impact of modelling in mixed-signal circuits. Particularly, the design of a pipelined analog-to-digital converter (ADC) along with the associated nonidealities has been considered. The chapter starts with a short discussion on the operation of a pipeline ADC.
Then, various types of nonidealities stemming from the shortcomings of the operational amplifiers (finite gain and slew rate, input offset voltage), capacitor and skew mismatches, as well as different non-linearities (charge injection, clock feedthrough, and switch resistance) are explained in conjunction with their simulation models. Finally, simulation results of a designed pipeline ADC, including all modelled nonidealities, are provided. In Chapter 10, modelling in the context of power systems is presented with the example of a buck converter. The switching and averaged models of the converter are separately investigated with their theoretical background. Battery, capacitor, and inductor models are also provided to yield a comprehensive perspective on the realistic operation of a power system. A comparison between switching and averaged models is also made to reflect on their individual advantages and drawbacks. Chapters 11 and 12 are short chapters highlighting some future applications in modelling. Chapter 11 concerns itself with the modelling and design of MEMS devices in an application-specific integrated circuit (ASIC) design environment, whereas Chapter 12 considers spintronic resistive memories. The aim of these two chapters is to give a taste of some future problems, where modelling will be of utmost importance for accurately and precisely describing the operation of novel semiconductor devices and circuits. The last chapter of this book aims to provide an outlook on the future of modelling in analogue integrated circuits by starting from its historical fundamentals. All the chapters of this book are connected to different trends that have been observed throughout the development of modelling in microelectronics. Therefore, in order to forecast the next steps, a better understanding of the current needs and tools, as well as their applications, is mandatory.
The book will conclude by stating the possibilities in which the reader might be interested in investing their time if they decide to continue their work in this research area.

Part I
Fundamentals of modelling methodologies

Chapter 2
Response surface modeling
Jun Tao¹, Xuan Zeng¹, and Xin Li²

During the past decades, response surface modeling (RSM) has been extensively studied as one of the most important techniques for analyzing analog and mixed-signal (AMS) circuits. The objective of RSM is to approximate the circuit-level performance as an analytical function of variables of interest (VoIs) (e.g., design variables, tunable knobs, device-level variations and/or environmental conditions) based on a set of simulated/measured data. The conventional method for RSM is least-squares (LS) regression. However, when modeling modern AMS circuits, LS regression usually suffers from unaffordable modeling cost due to the high-dimensional variable space and expensive simulation/measurement cost. To address this cost issue and make RSM of practical utility, several state-of-the-art techniques have been proposed recently. For instance, by exploring the sparsity of model coefficients, feature selection methods, such as orthogonal matching pursuit (OMP) and L1-norm regularization, have been developed to automatically select important basis functions based on a limited number of training samples. Alternatively, another cost-reduction method, referred to as Bayesian model fusion (BMF), attempts to borrow the information collected from an early stage (e.g., schematic design) to facilitate efficient performance modeling at a late stage (e.g., post layout) during today's multistage design flow. In this chapter, we will review these recent advances and highlight their novelty.

2.1 Introduction

As one of the most important techniques for analyzing AMS circuits, RSM has been extensively studied in the literature during the past decades [1–7].
The objective of RSM is to approximate the circuit-level performance (e.g., gain of an analog amplifier) as an analytical (e.g., linear or quadratic) function of design variables (e.g., transistor widths), tunable knobs (e.g., bias current and capacitor array), device-level variations (e.g., oxide thickness and threshold voltage) and/or environmental conditions (e.g., temperature and power supply). Compared to the expensive transistor-level simulations (e.g., SPICE simulations), such a performance model can evaluate the circuit performances with substantially reduced computational cost. Hence, RSM has been widely adopted to support a number of important applications, e.g., circuit optimization [1,2,8–17], worst-case corner extraction [18,19], and parametric yield estimation [20,21]. Once a response surface model is created, it can be utilized to improve the efficiency of circuit optimization, including pre-silicon sizing and post-silicon tuning. Pre-silicon sizing aims to optimize the design variables before fabrication to satisfy all given specifications [10–16]. Due to large-scale process variations, it is usually difficult to guarantee high parametric yield at advanced technology nodes. For this reason, reconfigurable analog/radio frequency (RF) circuits, including a set of tunable knobs, have been proposed. The objective of post-silicon tuning is to optimize the tunable knob setups and, next, adaptively reconfigure analog/RF circuits after fabrication to satisfy all given specifications over different process corners and environmental conditions [8,9].

¹ State Key Laboratory of ASIC and System, School of Microelectronics, Fudan University, Shanghai, China
² Data Science Research Center, Duke Kunshan University, Kunshan, China
In order to solve these circuit optimization problems, both efficiently (e.g., with low computational cost) and robustly (e.g., with guaranteed global optimum), special constraints are often posed for the performance models [16]. For instance, if posynomial performance models are available, circuit optimization can be formulated as a convex geometric programming problem and solved with guaranteed global optimum [10–12]. To afford further modeling flexibility, we can alternatively approximate analog performances by general non-convex polynomial functions. Even though the resulting polynomial optimization is non-convex, its global optimum can be reliably found by semidefinite programming (SDP) [13–16]. To enable hierarchical optimization for large-scale, complex analog systems, we can also apply RSM to extract the analog performance trade-offs, referred to as Pareto optimal fronts (PoFs), at block level [2,17]. Next, block-level performance constraints can be set up for system-level optimization based on these PoF models in order to guarantee that the optimized design is feasible. Worst-case corner extraction aims to identify the “worst” corners in the device-level variable space (i.e., determining the specific values of device-level variables), at which the performance of a given circuit becomes the worst. Instead of performing expensive Monte Carlo analysis, designers only need to verify or optimize their circuit at these corners and come up with a robust solution [11,22]. Therefore, worst-case corner extraction can facilitate efficient yield optimization. By using RSM, we can cast worst-case corner extraction to a sequential quadratic programming problem [1] or an SDP problem [19] and, next, use existing optimization algorithms [23] to solve them and extract the worst-case corners. Parametric yield can also be efficiently estimated based on the RSM of circuit performance. 
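To make the linear-model case of yield estimation concrete: if a performance is approximated as $y = c_0 + c^T x$ with $x$ a vector of independent standard normal variables (after PCA), then $y$ is itself normal with mean $c_0$ and standard deviation $\|c\|_2$, so the parametric yield against a specification follows directly from the normal CDF. A minimal sketch (the coefficients and specification below are hypothetical placeholders, not values from the chapter):

```python
import math

# Hypothetical linear response surface model: y = c0 + c^T x,
# with x independent standard normal variables (post-PCA).
c0 = 1.0                  # nominal performance (arbitrary units)
c = [0.10, -0.05, 0.02]   # linear sensitivities to process variables

# y is then Gaussian with mean c0 and standard deviation ||c||_2.
sigma_y = math.sqrt(sum(ci * ci for ci in c))

def yield_below(spec):
    """P(y <= spec) via the standard normal CDF."""
    z = (spec - c0) / sigma_y
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(yield_below(1.2))   # probability that the performance stays below 1.2
```

Monte Carlo sampling of the same linear model converges to this closed-form value, which is why the small-variation (linear) case requires no sampling at all.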
If process variations are sufficiently small, we can create a linear regression model to efficiently and accurately approximate a given performance. Without loss of generality, we usually assume that the device-level variations can be represented by a set of independent and standard normal random variables after principal component analysis (PCA). Therefore, the performance will also follow a normal distribution [20]. To capture larger scale process variations, nonlinear response surface models (e.g., quadratic polynomials) must be utilized, and a moment-matching method (e.g., APEX [21]) can be used to estimate the unknown probability density function (PDF) or cumulative distribution function of the performance variations. Given an AMS circuit, most existing RSM methods approximate its performance of interest (PoI) as a linear combination of a set of basis functions (also referred to as feature functions). To calculate the model coefficients, conventional RSM adopts LS fitting [4–7]. First, we generate a set of training samples and run transistor-level simulations to obtain the PoIs over these samples. Next, the unknown model coefficients can be determined by solving an overdetermined linear equation to minimize the sum-of-squares error. In this case, the number of training samples must be equal to or greater than the number of model coefficients (i.e., the number of adopted basis functions) [24]. However, LS usually suffers from unaffordable computational cost when modeling modern AMS circuits because of two reasons [25].

● High-dimensional variable space: First, due to the remarkable increase of AMS circuit size, while a typical analog circuit (e.g., OpAmp and current mirror) consists of only about 100 devices, an entire AMS system (e.g., phase-locked loop, analog-to-digital converter and RF front-end) may assemble a few hundred such circuits and comprise 10⁵ devices or more [2].
To create a performance model for a large-scale AMS system, the number of associated design variables (and also the number of basis functions) will be tremendous (e.g., more than 10⁵). Second, with the aggressive scaling of integrated circuit technology, a large number of device-level random variables must be used to model the process variations. For instance, about 40 independent random variables are required to model the device mismatches of a single transistor for a commercial 32 nm complementary metal oxide semiconductor (CMOS) process. If an AMS system contains 10⁵ transistors, there are about 4 × 10⁶ random variables in total to capture the corresponding device-level variations, resulting in a high-dimensional variation space [25].

● Expensive circuit simulation: The computational cost of circuit simulation substantially increases as the AMS circuit size becomes increasingly large. For instance, it may take a few days or even a few weeks to run transistor-level simulation of a large AMS circuit, such as a phase-locked loop or high-speed link [25].

These recent trends of today's AMS circuits make performance modeling extremely difficult. On one hand, a large number of simulation samples must be generated in order to fit a high-dimensional model. On the other hand, creating a single sample by transistor-level simulation can take a large amount of computational time. The challenge here is how to make performance modeling computationally affordable for today's large-scale AMS circuits [25]. To reduce the modeling cost and make RSM of practical utility for large-scale AMS systems, several state-of-the-art techniques have been proposed. For instance, while numerous basis functions must be used to span the high-dimensional variable space, not all these functions play an important role for a given PoI.
In other words, although there are a large number of unknown coefficients, many of them are close to zero, rendering a unique sparse structure. Taking this sparse structure into account as our prior knowledge, a large number of (e.g., 10⁴–10⁶) model coefficients can be efficiently solved from a small set of (e.g., 10²–10³) sampling points without over-fitting by using OMP [17,26] or L1-norm regularization [27–30]. OMP is a classic algorithm to approximate the optimal solution of L0-norm regularization [24]. Another alternative method to reduce the modeling cost is referred to as BMF. BMF was proposed on the basis of the observation that today's AMS circuits are often designed via a multistage flow. For instance, an AMS design often spans three major stages: (i) schematic design, (ii) layout design and (iii) chip manufacturing and testing [30]. At each stage, simulation or measurement data are collected to validate the circuit design before moving to the next stage. To build performance models, most conventional RSM techniques only rely on the data at a single stage but completely ignore the data that are generated at other stages. The basic idea of BMF is to reuse the early-stage data when fitting a late-stage performance model. As such, the performance modeling cost can be substantially reduced [25,30]. The remainder of this chapter is organized as follows: in Section 2.2, we describe the problem definition of performance modeling. Next, the conventional LS regression method is summarized in Section 2.3. In Section 2.4, we discuss OMP and L1-norm regularization in detail and, consequently, present the general idea of regularization in Section 2.5. BMF and its applications are illustrated in Section 2.6. Finally, a brief summary is given in Section 2.7.

2.2 Problem formulation

Without loss of generality, let $x = [x_1\; x_2\; \cdots\; x_N]^T \in \mathbb{R}^N$ denote the VoIs, including design variables, tunable knobs, device-level variations and/or environmental conditions.
The PoI y can be approximated by using a linear combination of a set of basis functions $\{b_m(x);\, m = 1, 2, \ldots, M\}$:

$$y(x) = \sum_{m=1}^{M} c_m\, b_m(x) \tag{2.1}$$

where $\{c_m;\, m = 1, 2, \ldots, M\}$ contains all unknown model coefficients, and M is the total number of basis functions. The basis functions $\{b_m(x);\, m = 1, 2, \ldots, M\}$ can be chosen as posynomials [10,12], monomials [26,27], polynomials [31,32], etc. Suppose that we have obtained the transistor-level simulation results or measurement data on $N_S$ training samples; RSM attempts to calculate the unknown model coefficients $\{c_m;\, m = 1, 2, \ldots, M\}$ in (2.1) by solving the following linear system:

$$B \cdot c = y^S \tag{2.2}$$

where

$$B = \begin{bmatrix} b_1(x_1) & b_2(x_1) & \cdots & b_M(x_1) \\ b_1(x_2) & b_2(x_2) & \cdots & b_M(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ b_1(x_{N_S}) & b_2(x_{N_S}) & \cdots & b_M(x_{N_S}) \end{bmatrix} \tag{2.3}$$

$$c = [c_1\; c_2\; \cdots\; c_M]^T \tag{2.4}$$

$$y^S = \big[y_1^S\; y_2^S\; \cdots\; y_{N_S}^S\big]^T \tag{2.5}$$

and $x_n$ and $y_n^S$, where $n \in \{1, 2, \ldots, N_S\}$, denote the VoIs and PoI for the nth sampling point, respectively. In the following sections, we will describe several state-of-the-art methods to solve the unknown model coefficients from (2.1).

2.3 Least-squares regression

If the number of training samples (i.e., $N_S$) is greater than the number of unknown model coefficients (i.e., M), (2.2) becomes an overdetermined linear equation and can be solved by adopting the conventional LS regression method. LS regression attempts to determine the model coefficients c by minimizing a specific error function E(c), defined as the sum of squares of the errors between the modeling result $y(x_n)$ and the corresponding simulated/measured performance value $y_n^S$ over each sampling point $x_n$:

$$\min_c\; E(c) = \sum_{n=1}^{N_S} \big[y(x_n) - y_n^S\big]^2 = \sum_{n=1}^{N_S} \Bigg[\sum_{m=1}^{M} c_m\, b_m(x_n) - y_n^S\Bigg]^2 = \big\|B \cdot c - y^S\big\|_2^2 \tag{2.6}$$

where $\|\cdot\|_2$ stands for the L2-norm of a vector.
Since the performance model in (2.1) is a linear function of the model coefficients c, the cost function E(c) in (2.6) is quadratic in c, and its first-order derivative with respect to the unknown coefficients is linear in c:

$$\nabla_c E(c) = 2\,\big(B \cdot c - y^S\big)^T B \tag{2.7}$$

By setting the derivative in (2.7) to zero, we can obtain an analytical LS solution (denoted as $c^{LS}$) for the unknown model coefficients:

$$c^{LS} = \big(B^T B\big)^{-1} B^T y^S \tag{2.8}$$

The aforementioned LS solution can also be derived from maximum likelihood estimation (MLE). Without loss of generality, we can assume that the modeling error follows a zero-mean Gaussian distribution with variance $\sigma_e^2$:

$$y^S = y(x) + e \tag{2.9}$$

$$e \sim N\big(0, \sigma_e^2\big) \tag{2.10}$$

As a result, the distribution of $y^S$ will also be Gaussian:

$$\mathrm{pdf}\big(y^S\big) = \frac{1}{\sqrt{2\pi}\,\sigma_e} \exp\Bigg[-\frac{\big(y^S - y(x)\big)^2}{2\sigma_e^2}\Bigg] \tag{2.11}$$

Since all sampling points are drawn independently and follow the same distribution in (2.11), $\{y_n^S;\, n = 1, 2, \ldots, N_S\}$ is independent and identically distributed (abbreviated to i.i.d.). Consequently, the corresponding likelihood function can be written as a product of the probabilities for each sampling point:

$$\mathrm{pdf}\big(y^S \mid c, \sigma_e\big) = \prod_{n=1}^{N_S} \frac{1}{\sqrt{2\pi}\,\sigma_e} \exp\Bigg[-\frac{\big(y_n^S - y(x_n)\big)^2}{2\sigma_e^2}\Bigg] \tag{2.12}$$

This likelihood function represents the probability of observing all training samples given the model coefficients c and the variance $\sigma_e^2$. After taking the natural logarithm on both sides of (2.12) and following the definition of the sum-of-squares error function E(c) in (2.6), we have:

$$\ln\big[\mathrm{pdf}\big(y^S \mid c, \sigma_e\big)\big] = \sum_{n=1}^{N_S} \ln\Bigg\{\frac{1}{\sqrt{2\pi}\,\sigma_e} \exp\Bigg[-\frac{\big(y_n^S - y(x_n)\big)^2}{2\sigma_e^2}\Bigg]\Bigg\} = -\frac{N_S}{2}\ln(2\pi) - N_S \ln(\sigma_e) - \frac{1}{2\sigma_e^2}\sum_{n=1}^{N_S} \big(y_n^S - y(x_n)\big)^2 = -\frac{N_S}{2}\ln(2\pi) - N_S \ln(\sigma_e) - \frac{1}{2\sigma_e^2}\,E(c) \tag{2.13}$$

Since the natural logarithm function is monotonically increasing, maximizing the likelihood function in (2.12) is equivalent to maximizing its natural logarithm in (2.13).
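The LS fit of (2.6)–(2.8) can be sketched numerically. In the sketch below, the quadratic basis and the "true" performance function are hypothetical stand-ins for simulated data, and `numpy.linalg.lstsq` is used instead of forming the normal equations of (2.8) explicitly, which is the numerically safer choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: one variable-of-interest x, quadratic basis
# b(x) = [1, x, x^2], and a "true" performance y = 2 + 0.5x - 0.3x^2
# observed through slightly noisy, SPICE-like samples.
def basis(x):
    return np.column_stack([np.ones_like(x), x, x**2])

x_train = rng.uniform(-1.0, 1.0, 50)            # N_S = 50 training samples
y_true = 2.0 + 0.5 * x_train - 0.3 * x_train**2
y_s = y_true + 0.01 * rng.standard_normal(50)   # simulated/measured PoI

B = basis(x_train)                               # the matrix B of (2.3)
# Least-squares solution of (2.6); equivalent to (2.8) but avoids
# explicitly inverting B^T B.
c_ls, *_ = np.linalg.lstsq(B, y_s, rcond=None)
print(c_ls)   # close to the true coefficients [2.0, 0.5, -0.3]
```

With 50 samples and 3 coefficients the system is comfortably overdetermined, which is exactly the regime in which plain LS regression is appropriate.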
At the right-hand side of (2.13), only the last term depends on c, and the positive constant $1/(2\sigma_e^2)$ of this term does not affect the optimal solution of the model coefficients c. Therefore, the LS regression in (2.6) is equivalent to the MLE problem. As discussed in the aforementioned section, the LS method has been extensively studied in the literature [4–7,11,33,34]. However, it often becomes computationally unaffordable due to the high dimensionality and expensive simulation/measurement cost in practice. When applying LS regression, the number of training samples (i.e., $N_S$) must be equal to or greater than the number of model coefficients (i.e., M). If M is large (e.g., 10⁴–10⁶ in the high-dimensional variable space), we must collect more than M samples for a large-scale AMS system, which is intractable, if not impossible. For this reason, the traditional LS method is limited to small-size or medium-size problems (e.g., 10–1,000 model coefficients) [26]. To address this cost issue, we will present two feature selection methods in the following section.

2.4 Feature selection

If the number of training samples (i.e., $N_S$) is less than the number of model coefficients (i.e., M), i.e., there are fewer equations than unknowns, the linear equation in (2.2) becomes underdetermined, and the solution for the model coefficients c is not unique. In this case, directly applying LS regression may result in over-fitting [24]. To address this underdetermined issue, we can explore the sparsity of c and introduce additional constraints in order to uniquely and accurately determine its value [26–28]. While numerous basis functions must be used to span the high-dimensional and nonlinear variation space, not all of them play an important role in modeling a given PoI. In other words, although the dimensionality of c may be huge, many of its elements are close to zero, rendering a unique sparse structure.
For instance, for a 65 nm SRAM circuit containing 21,310 independent random variables, only 50 basis functions are required to accurately approximate the delay variation of its critical path [27]. However, we do not know the important basis functions or, equivalently, the exact locations of the nonzero coefficients in advance. To automatically select these important basis functions based on a limited number of training samples, L0-norm regularization can be utilized. Based on the theory of L0-norm regularization, we can formulate the following optimization problem to calculate the sparse solution for the model coefficients c:

$$\min_c\; E(c) = \sum_{n=1}^{N_S} \big[y(x_n) - y_n^S\big]^2 = \big\|B \cdot c - y^S\big\|_2^2 \quad \text{subject to}\quad \|c\|_0 \le \lambda \tag{2.14}$$

where $\|\cdot\|_0$ stands for the L0-norm of a vector, i.e., the number of nonzeros in the vector. Therefore, $\|c\|_0$ measures the sparsity of c. By directly constraining this L0-norm, the optimization in (2.14) attempts to find a sparse c that minimizes the sum-of-squares error E(c) [26–28]. The parameter λ in (2.14) explores the trade-off between the sparsity of the model coefficients c and the minimal value of the cost function E(c). For instance, a large λ will result in a small E(c), but meanwhile, it will increase the number of nonzeros in c. Note that a small cost function does not necessarily mean a small modeling error. Even though the minimal cost function value can be reduced by increasing λ, such a strategy may result in over-fitting, especially because (2.2) is underdetermined. In the extreme case, if λ is sufficiently large and the constraint in (2.14) is not active, we can always find a solution for c that makes the cost function exactly equal to zero. However, such a solution is likely to be useless, since it over-fits the given training samples [26,27]. In practice, the optimal value of λ can be automatically determined by cross-validation, as will be discussed in detail in Section 2.4.3.
While the aforementioned L0-norm regularization can effectively guarantee a sparse solution for the model coefficients c, the optimization in (2.14) is nondeterministic polynomial-time (NP) hard [24,34] and, hence, is extremely difficult to solve. We can approximate its solution by adopting an efficient heuristic algorithm, referred to as OMP, in Section 2.4.1 or, alternatively, by relaxing (2.14) to a computationally efficient L1-norm regularization problem, as shown in Section 2.4.2.

2.4.1 Orthogonal matching pursuit

Given the underdetermined linear equation in (2.2), OMP applies a heuristic and iterative algorithm to identify a small set of important basis functions and use them to approximate the performance y(x). For other noncritical basis functions, the corresponding coefficients are set to zero. If the number of selected basis functions (i.e., l) is substantially less than the total number of basis functions (i.e., M), the resulting solution of the model coefficients c is sparse [26]. To guarantee the fast convergence of OMP, we usually adopt a set of basis functions that are normalized and orthogonal. Namely, the inner product of any two basis functions $b_i(x)$ and $b_j(x)$ must satisfy:

$$\langle b_i(x), b_j(x)\rangle = \int b_i(x)\, b_j(x)\, \mathrm{pdf}(x)\, dx = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{2.15}$$

For instance, if x represents a set of independent random VoIs that follow the standard normal distribution (e.g., the device-level variations after PCA), we can adopt high-dimensional Hermite polynomials as basis functions. Generally speaking, for a given pdf(x), a set of normalized and orthogonal basis functions can be created by using the Gram–Schmidt technique [35].
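The orthonormality condition (2.15) can be checked numerically for the Hermite case. A sketch using NumPy's probabilists' Hermite polynomials: under the standard normal pdf, $E[He_m(x)\,He_n(x)] = n!\,\delta_{mn}$, so $He_n(x)/\sqrt{n!}$ forms an orthonormal basis (the quadrature order of 20 is an arbitrary choice, exact for the low degrees tested here):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Probabilists' Hermite polynomials He_n satisfy
#   E[He_m(x) He_n(x)] = n! * delta_mn   for x ~ N(0, 1),
# so He_n(x) / sqrt(n!) is orthonormal in the sense of (2.15).
# Gauss-HermiteE quadrature has weight exp(-x^2/2), which matches the
# standard normal pdf up to the constant 1/sqrt(2*pi).
nodes, weights = He.hermegauss(20)

def inner(m, n):
    """Numerical inner product <He_m/sqrt(m!), He_n/sqrt(n!)> under N(0,1)."""
    bm = He.hermeval(nodes, [0] * m + [1]) / math.sqrt(math.factorial(m))
    bn = He.hermeval(nodes, [0] * n + [1]) / math.sqrt(math.factorial(n))
    return float(np.sum(weights * bm * bn) / math.sqrt(2.0 * math.pi))

print(inner(2, 2))  # ~1.0 (normalized)
print(inner(1, 2))  # ~0.0 (orthogonal)
```

The same check, with a different weight function and basis, applies to any Gram–Schmidt-constructed basis for a given pdf(x).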
According to (2.1) and (2.15), we can calculate the inner product between y(x) and each basis function b_m(x):

$$\langle y(x), b_m(x) \rangle = \int y(x) \cdot b_m(x) \cdot \mathrm{pdf}(x)\, dx = \int \left[ \sum_{i=1}^{M} c_i \cdot b_i(x) \right] b_m(x) \cdot \mathrm{pdf}(x)\, dx = \sum_{i=1}^{M} c_i \int b_i(x) \cdot b_m(x) \cdot \mathrm{pdf}(x)\, dx = c_m. \tag{2.16}$$

This inner product is equal to the model coefficient c_m. It implies that if ⟨y(x), b_m(x)⟩ (i.e., c_m) is far away from zero, the corresponding b_m(x) must be selected as an important basis function to approximate the performance y(x). Therefore, this inner product ⟨y(x), b_m(x)⟩ can be used as a good criterion to measure the importance of each basis function. However, we do not know the analytical form of y(x) in advance and, hence, can only numerically approximate the integration in (2.16) from a set of training samples:

$$\langle y(x), b_m(x) \rangle \approx \frac{1}{N_S} \sum_{n=1}^{N_S} b_m(x_n) \cdot y_n^S = \frac{1}{N_S}\, b_m^T \cdot y^S = \xi_m, \tag{2.17}$$

where b_m = [b_m(x_1) b_m(x_2) ⋯ b_m(x_{N_S})]^T denotes the m-th column vector of the matrix B defined in (2.3) and is also referred to as the m-th basis vector. The quantity ξ_m in (2.17) gives a statistical estimate of the unknown coefficient c_m. However, since ξ_m is calculated from a set of random sampling data {(x_n, y_n^S); n = 1, 2, …, N_S} that may contain large fluctuations, such an estimate cannot guarantee sufficient accuracy [31,36]. Therefore, instead of directly using (2.17) to determine the value of c_m, OMP applies an iterative algorithm. During each iteration, the inner product in (2.17) is only used to identify one important basis function from all candidates and, next, the model coefficients are calculated by applying LS regression. In what follows, we describe the OMP algorithm in detail. We start from a large set of possible basis functions B_C = {b_m(x); m = 1, 2, …, M} that can be used to approximate the performance function y(x).
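As a quick sanity check of the estimator in (2.17), the following sketch builds a synthetic performance from the first three orthonormal Hermite polynomials and recovers its coefficients by Monte Carlo inner products. The basis, coefficients, and sample count are all hypothetical choices for illustration, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# Orthonormal (probabilists') Hermite polynomials under the standard normal pdf:
# b1(x) = 1, b2(x) = x, b3(x) = (x^2 - 1)/sqrt(2)
basis = [lambda x: np.ones_like(x),
         lambda x: x,
         lambda x: (x**2 - 1) / np.sqrt(2)]

# A synthetic "performance" with known coefficients
c_true = np.array([0.5, 2.0, -1.0])
x = rng.standard_normal(n_samples)
y = sum(c * b(x) for c, b in zip(c_true, basis))

# Monte Carlo estimate xi_m = (1/N_S) * b_m^T y^S of (2.17)
xi = np.array([b(x) @ y / n_samples for b in basis])
```

For a large N_S the estimates approach the true coefficients, but with the small N_S typical of circuit simulation the fluctuations motivate the iterative LS refitting that OMP performs instead.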
Initially, without knowing which basis function is important, we use each basis function from the set B_C (i.e., each column vector of B) to calculate the inner product values {ξ_m; m = 1, 2, …, M} based on (2.17). The basis function b_{mS1}(x) that is most correlated with y^S, i.e., that results in the largest absolute value of the inner product ξ_{mS1}, is chosen as the first important basis function. Once b_{mS1}(x) is identified, OMP uses it to approximate y(x) by solving the following LS fitting problem:

$$\min_{c_{m_{S1}}}\; \sum_{n=1}^{N_S} \left[ c_{m_{S1}} \cdot b_{m_{S1}}(x_n) - y_n^S \right]^2 = \left\| c_{m_{S1}} \cdot b_{m_{S1}} - y^S \right\|_2^2, \tag{2.18}$$

where b_{mS1} = [b_{mS1}(x_1) b_{mS1}(x_2) ⋯ b_{mS1}(x_{N_S})]^T. Next, OMP calculates the residual

$$r = y^S - c_{m_{S1}}^{*} \cdot b_{m_{S1}} \tag{2.19}$$

with the optimal coefficient c*_{mS1} calculated by (2.18) and removes b_{mS1}(x) from the set of possible basis functions B_C. Based on (2.19), OMP further identifies the next important basis function b_{mS2}(x) by estimating the inner product values between the residual r and each basis function included in the current B_C:

$$\xi_m = \frac{1}{N_S} \sum_{n=1}^{N_S} r_n \cdot b_m(x_n) = \frac{1}{N_S}\, b_m^T \cdot r, \tag{2.20}$$

where r_n denotes the n-th element in r. Once b_{mS2}(x) is known, OMP approximates y^S in the directions of both b_{mS1} and b_{mS2} by solving the following optimization problem:

$$\min_{c_{m_{S1}},\, c_{m_{S2}}}\; \sum_{n=1}^{N_S} \left[ c_{m_{S1}} \cdot b_{m_{S1}}(x_n) + c_{m_{S2}} \cdot b_{m_{S2}}(x_n) - y_n^S \right]^2 = \left\| c_{m_{S1}} \cdot b_{m_{S1}} + c_{m_{S2}} \cdot b_{m_{S2}} - y^S \right\|_2^2. \tag{2.21}$$

It is important to note that the coefficient c*_{mS1} calculated by (2.18) may be different from that calculated by (2.21). In other words, every time a new basis function is selected, OMP recalculates all model coefficients to minimize the sum of squared residuals. This recalculation step is required because, even though the basis functions {b_m(x); m = 1, 2, …, M} are orthogonal as defined in (2.15), the basis vectors {b_m; m = 1, 2, …, M} are not necessarily orthogonal due to random sampling, i.e., b_i^T · b_j ≠ 0 when i ≠ j.
Hence, the new basis function selected at the current iteration step may change the model coefficient values calculated at previous iteration steps [26]. The aforementioned iteration of basis function selection and LS regression continues until a sufficient number (i.e., λ) of important basis functions are selected. Algorithm 1 summarizes the major iteration steps of OMP.

Algorithm 1: Orthogonal matching pursuit (OMP)
1. Start from the linear equation (2.2) generated from a set of normalized and orthogonal basis functions, and an integer λ representing the total number of basis functions that should be selected.
2. Initialize the residual r = y^S, the number of selected basis functions M_S = 0, the index set of selected basis functions I_S = ∅ and the index set of all possible basis functions I_C = {1, 2, …, M}.
3. While M_S < λ:
4. For each m ∈ I_C, calculate the inner product ξ_m between the residual r and the corresponding basis vector b_m based on (2.20).
5. Find the index m_S corresponding to the largest absolute value of the inner product ξ_{m_S}.
6. Update M_S = M_S + 1, I_S = I_S ∪ {m_S} and remove m_S from I_C.
7. Solve the following optimization problem by LS regression to determine the optimal model coefficients {c*_m; m ∈ I_S}:

$$\min_{\{c_m;\, m \in I_S\}}\; \sum_{n=1}^{N_S} \left[ \sum_{m \in I_S} c_m \cdot b_m(x_n) - y_n^S \right]^2 = \left\| \sum_{m \in I_S} c_m \cdot b_m - y^S \right\|_2^2. \tag{2.22}$$

8. Update the residual:

$$r = y^S - \sum_{m \in I_S} c_m^{*} \cdot b_m. \tag{2.23}$$

9. End while.
10. Set the model coefficients {c_m; m ∈ I_C} to zero.

It is important to note that even though OMP is a heuristic algorithm to solve the L0-norm regularization problem in (2.14), the quality of its solution is guaranteed according to several theoretical studies from the statistics community [37].
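The steps of Algorithm 1 can be sketched compactly in numpy. This is a minimal illustration under assumed data (a hypothetical 3-sparse coefficient vector with 200 candidate basis vectors and 100 samples), not the production implementation referenced in [26]:

```python
import numpy as np

def omp(B, y, n_select):
    """Sketch of Algorithm 1: greedily pick the basis vector most correlated
    with the residual, then re-fit ALL selected coefficients by LS (step 7)."""
    n_samples = B.shape[0]
    coeffs = np.zeros(B.shape[1])
    selected = []
    residual = y.copy()
    for _ in range(n_select):
        xi = B.T @ residual / n_samples          # inner products, cf. (2.20)
        xi[selected] = 0.0                       # exclude already-chosen indices
        selected.append(int(np.argmax(np.abs(xi))))
        sol, *_ = np.linalg.lstsq(B[:, selected], y, rcond=None)  # (2.22)
        residual = y - B[:, selected] @ sol      # (2.23)
    coeffs[selected] = sol                       # step 10: the rest stay zero
    return coeffs

# Underdetermined demo: 200 candidate basis vectors, 100 samples, 3 true nonzeros
rng = np.random.default_rng(0)
B = rng.standard_normal((100, 200))
c_true = np.zeros(200)
c_true[[3, 17, 42]] = [1.5, -2.0, 0.8]
c_hat = omp(B, B @ c_true, n_select=3)
```

Because the noiseless target here lies exactly in the span of three columns, the greedy selection typically recovers the support and the LS refit then reproduces the coefficients to machine precision, illustrating why M ≫ N_S is not an obstacle when c is sparse.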
Roughly speaking, if the M-dimensional vector c contains λ nonzeros (λ ≪ M) and the linear equation in (2.2) is well conditioned, the actual solution c can be almost uniquely determined (with a probability nearly equal to one) from N_S sampling points, where N_S is on the order of O(λ·log M) [37]. While this theoretical result does not precisely give the number of required sampling points, it presents an important scaling trend. Namely, N_S (i.e., the number of training samples) is a logarithmic function of M (i.e., the number of unknown coefficients). It, in turn, provides the theoretical foundation that, by solving for the sparse solution of an underdetermined equation, a large number of model coefficients can be uniquely determined from a small number of training samples.

2.4.2 L1-norm regularization

Besides OMP, we can also approximate the solution of the L0-norm regularization in (2.14) by relaxing it to a computationally efficient L1-norm regularization problem [27,28]:

$$\min_{c}\; \left\| B \cdot c - y^S \right\|_2^2 \quad \text{s.t.} \quad \|c\|_1 \le \lambda, \tag{2.24}$$

where ||c||_0 in (2.14) is replaced by ||c||_1. Here ||c||_1 denotes the L1-norm of the vector c, i.e., the summation of the absolute values of all elements in c:

$$\|c\|_1 = |c_1| + |c_2| + \cdots + |c_M|. \tag{2.25}$$

The L1-norm regularization in (2.24) can easily be reformulated as a convex optimization problem. By introducing a set of slack variables b = {b_m; m = 1, 2, …, M}, we can rewrite (2.24) in the following equivalent form [27]:

$$\min_{c,\, b}\; \left\| B \cdot c - y^S \right\|_2^2 \quad \text{s.t.} \quad b_1 + b_2 + \cdots + b_M \le \lambda, \quad -b_m \le c_m \le b_m \;\; (m = 1, 2, \ldots, M). \tag{2.26}$$

In (2.26), the cost function is quadratic and positive semi-definite. Hence, it is convex. All constraints are linear and, therefore, the resulting constraint set is a convex polytope. For these reasons, the L1-norm regularization in (2.24) is a convex optimization problem and can be solved both efficiently (i.e., with low computational cost) and robustly (i.e., with a guaranteed global optimum) by using several state-of-the-art methods, e.g., the interior-point method [23].
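Any convex solver can handle (2.26); the sketch below instead uses a simple proximal-gradient iteration (ISTA) on the penalized Lagrangian counterpart of (2.24), which is equivalent for a suitable pairing of λ and the penalty weight. The problem sizes, regularization weight, and iteration count are hypothetical choices, not values from the text:

```python
import numpy as np

def ista(B, y, alpha, n_iter=10_000):
    """Proximal-gradient (ISTA) sketch for min_c ||B*c - y||_2^2 + alpha*||c||_1,
    the penalized counterpart of the constrained problem (2.24)."""
    step = 0.5 / np.linalg.norm(B, 2) ** 2       # safe step for the smooth part
    c = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * B.T @ (B @ c - y)           # gradient of ||B*c - y||^2
        z = c - step * grad
        # soft-thresholding: the proximal operator of the L1 penalty
        c = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)
    return c

# Underdetermined demo: 100 candidate basis vectors, 50 samples, 3 true nonzeros
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 100))
c_true = np.zeros(100)
c_true[[3, 17, 42]] = [1.5, -2.0, 0.8]
c_hat = ista(B, B @ c_true, alpha=1.0)
```

The soft-threshold step is what drives most coefficients exactly to zero, mirroring the vertex-touching geometry discussed next; an interior-point solver on (2.26) would reach essentially the same sparse solution.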
To understand the connection between L1-norm regularization and sparse solutions, we consider the two-dimensional example (i.e., c = [c_1 c_2]^T) in Figure 2.1 [27,28]. Since the cost function ||B·c − y^S||_2^2 is quadratic and positive semi-definite, its contour lines can be represented by multiple ellipsoids. On the other hand, the constraint ||c||_1 ≤ λ corresponds to a number of rotated squares, associated with different values of λ. For example, two such squares are shown in Figure 2.1, where λ_1 < λ_2.

Figure 2.1 The L1-norm regularization ||c||_1 ≤ λ results in a sparse solution (i.e., c_1 = 0) if λ is sufficiently small (i.e., λ = λ_1) [27,28]

Studying Figure 2.1, we notice that if λ is large (e.g., λ = λ_2), both c_1 and c_2 are nonzero (e.g., at the point P_2). However, as λ decreases (e.g., λ = λ_1), the contour of ||B·c − y^S||_2^2 eventually intersects the polytope ||c||_1 ≤ λ at one of its vertices (e.g., at the point P_1). This, in turn, implies that one of the coefficients (e.g., c_1 in this case) becomes exactly zero. From this point of view, by decreasing λ in the L1-norm regularization (2.24), we can pose a strong constraint for sparsity and force a sparse solution. This intuitively explains why L1-norm regularization guarantees sparsity, as is the case for L0-norm regularization. In addition, various theoretical studies from the statistics community demonstrate that under some general assumptions, both L1-norm regularization and L0-norm regularization result in the same solution [37]. However, L1-norm regularization is much more computationally efficient than L0-norm regularization, which is NP hard. This is the major motivation to replace the L0-norm by the L1-norm.

2.4.3 Cross-validation

To make both the OMP algorithm (i.e., Algorithm 1) and the L1-norm regularization in (2.24) of practical utility, we must find the optimal value of λ. In practice, λ is not known in advance.
The appropriate value of λ must be determined by considering the following two important issues. First, if λ is too small, the aforementioned feature selection methods will not select a sufficient number of important basis functions to approximate the given PoI, thereby leading to large modeling error. On the other hand, if λ is too large and too many basis functions are used to approximate y(x), it will result in over-fitting, which again prevents us from extracting an accurate performance model. Hence, in order to achieve the best modeling accuracy, we must accurately estimate the modeling errors for different λ values and find the optimal λ that minimizes the error [26–28]. However, given a limited number of sampling points, accurately estimating the modeling error is not a trivial task. To avoid over-fitting, we cannot simply measure the modeling error on the same sampling data that are used to calculate the model coefficients. Instead, the modeling error must be measured on an independent data set. Cross-validation is an efficient method for model validation that has been widely used in the statistics community [24,34].

Figure 2.2 A 4-fold cross-validation partitions the data set into four groups (grey: for error estimation; white: for coefficient estimation) and the modeling error is estimated from four independent runs: e = (e_1 + e_2 + e_3 + e_4)/4

A Q-fold cross-validation partitions the entire data set into Q groups, as shown by the example in Figure 2.2 (where Q = 4). The modeling error is estimated from Q independent runs. In each run, one of the Q groups is used to estimate the modeling error and all other groups are used to calculate the model coefficients. Different groups should be selected for error estimation in different runs. As such, each run results in an error value e_q (where q ∈ {1, 2, …, Q}) that is measured from a unique group of sampling points.
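The Q-fold procedure can be sketched as follows. For compactness this toy example sweeps a polynomial model order rather than the λ of OMP or (2.24); the degree sweep is a hypothetical stand-in for the λ sweep, and the data (noisy samples of sin(2πx)) are invented for illustration:

```python
import numpy as np

def cv_error(x, y, degrees, q=4):
    """Q-fold cross-validation sketch (cf. Figure 2.2): for each candidate
    model order, average the held-out error over Q independent runs."""
    folds = np.array_split(np.random.default_rng(0).permutation(len(x)), q)
    errors = {}
    for deg in degrees:
        errs = []
        for k in range(q):
            test = folds[k]                       # one group for error estimation
            train = np.concatenate([folds[j] for j in range(q) if j != k])
            coef = np.polyfit(x[train], y[train], deg)
            errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
        errors[deg] = np.mean(errs)               # e = (e_1 + ... + e_Q)/Q
    return errors

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)
errs = cv_error(x, y, degrees=[1, 3, 9])
```

Picking the hyper-parameter that minimizes the averaged held-out error is exactly the rule used for λ in the circuit-modeling flow; an under-parameterized model (degree 1 here) shows a clearly larger cross-validation error.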
In addition, because non-overlapping data sets are used to train and test the model in each run, over-fitting can be easily detected. The final modeling error is computed as the average of {e_q; q = 1, 2, …, Q}, i.e., e = (e_1 + e_2 + ⋯ + e_Q)/Q [26–28]. For our application, OMP or L1-norm regularization is used to identify the important basis functions and calculate the model coefficients for different λ values during each cross-validation run. Next, the modeling error associated with each run is estimated, resulting in {e_q(λ); q = 1, 2, …, Q}. Note that e_q is not simply a value but a one-dimensional function of λ. Once all cross-validation runs are complete, the final modeling error is calculated as e(λ) = [e_1(λ) + e_2(λ) + ⋯ + e_Q(λ)]/Q, which is again a one-dimensional function of λ. The optimal λ is determined by finding the minimal value of e(λ). The major drawback of cross-validation is the need to repeatedly extract the model coefficients Q times. However, for our circuit modeling application, the overall modeling cost is often dominated by the simulation and/or measurement cost that is required to generate the sampling data. Hence, the computational overhead of cross-validation is almost negligible.

2.4.4 Least angle regression

As discussed in the previous subsection, we need to adopt a two-step approach to implement the L1-norm regularization in (2.24) and automatically determine the optimal value of λ: (i) repeatedly solve the convex programming problem in (2.26) by using the interior-point method [23] for different λ's and (ii) select the optimal λ by cross-validation as discussed in Section 2.4.3. This approach, however, is computationally expensive, as we must run a convex solver many times in order to visit a sufficient number of possible values of λ. Instead of applying the interior-point method, we can adopt an efficient algorithm, i.e., least angle regression (LAR), to accomplish these two steps and further reduce the computational cost [27,28].
According to (2.24), the sparsity of the solution c depends on the value of λ. In the extreme case, if λ is zero, all coefficients in c are equal to zero. As λ gradually increases, more and more coefficients in c become nonzero. In fact, it can be proven that the solution c of (2.24) is a piece-wise linear function of λ [38]. As a result, we do not have to repeatedly solve the L1-norm regularization at many different λ's. Instead, we only need to estimate the local linear function in each interval [λ_i, λ_{i+1}]. This property allows us to find the entire solution trajectory c(λ) with low computational cost. Next, we show an iterative algorithm based on LAR [27] to efficiently find the solution trajectory c(λ). Without loss of generality, we first normalize each basis vector:

$$b_m = \frac{b_m}{\left\| b_m \right\|_2}, \tag{2.27}$$

where m ∈ {1, 2, …, M}. Next, we start from the extreme case where λ is zero. In this case, the solution of (2.24) is trivial, i.e., c = 0. As λ increases from zero, we calculate the correlation between y^S and each normalized b_m:

$$\mathrm{cor}_m = b_m^T \cdot y^S, \tag{2.28}$$

and find the vector b_{mS1} that is most correlated with y^S, i.e., for which cor_{mS1} takes the largest value [38]. By using b_{mS1} to approximate y^S, we can express the modeling residual as:

$$r = y^S - \gamma_1 \cdot b_{m_{S1}}, \tag{2.29}$$

where γ_1 is an unknown coefficient to be determined. To intuitively understand the LAR algorithm, we consider the two-dimensional example shown in Figure 2.3. In this example, the vector b_2 has a higher correlation with y^S than the vector b_1. Hence, b_2 is first selected to approximate y^S.
From the geometrical point of view, finding the largest correlation is equivalent to finding the least angle between the normalized vectors {b_m; m = 1, 2, …, M} and the performance y^S. For this reason, the aforementioned algorithm is referred to as LAR in [38].

Figure 2.3 LAR calculates the solution trajectory c(λ) of a two-dimensional example y^S = c_1·b_1 + c_2·b_2 [27]. Iteration 1: y^S ≈ c_2·b_2, where c_2 = γ_1. Iteration 2: y^S ≈ c_1·b_1 + c_2·b_2, where c_1 = γ_2 and c_2 = γ_1 + γ_2

As |γ_1| increases, the correlation between the vector b_{mS1} (e.g., b_2 in Figure 2.3) and the residual r defined in (2.29) (e.g., r = y^S − γ_1·b_2 in Figure 2.3) decreases. LAR uses an efficient algorithm to compute the maximal value of |γ_1| at which the correlation between b_{mS1} and r = y^S − γ_1·b_{mS1} is no longer dominant. In other words, there is another vector b_{mS2} (e.g., b_1 in Figure 2.3) that has the same correlation with the current residual r:

$$\left| b_{m_{S1}}^T \cdot \left( y^S - \gamma_1 \cdot b_{m_{S1}} \right) \right| = \left| b_{m_{S2}}^T \cdot \left( y^S - \gamma_1 \cdot b_{m_{S1}} \right) \right|. \tag{2.30}$$

In this case, instead of continuing along b_{mS1}, LAR proceeds in a direction equiangular between b_{mS1} and b_{mS2}. Namely, it approximates y^S by the linear combination of b_{mS1} and b_{mS2}:

$$y^S \approx \gamma_1 \cdot b_{m_{S1}} + \gamma_2 \cdot \left( b_{m_{S1}} + b_{m_{S2}} \right), \tag{2.31}$$

where γ_1 is fixed and γ_2 is unknown at this second iteration. Taking Figure 2.3 as an example, the residual y^S − γ_1·b_2 is approximated by γ_2·(b_1 + b_2). If |γ_2| is sufficiently large, y^S will be exactly equal to y^S = γ_1·b_2 + γ_2·(b_1 + b_2) = γ_2·b_1 + (γ_1 + γ_2)·b_2. In this example, because only two basis functions b_1(x) and b_2(x) (i.e., two basis vectors b_1 and b_2) are used, LAR stops at the second iteration step. If more than two basis functions are involved, LAR will keep increasing |γ_2| until a third vector b_{mS3} earns its way into the "most correlated" set, and so on. Algorithm 2 summarizes the major iteration steps of LAR.

Algorithm 2: Least angle regression (LAR)
1. Start from the simulated/measured performance y^S defined in (2.5) and the normalized vectors {b_m; m = 1, 2, …, M}.
2. Initialize the residual r_cur = y^S, the index set of selected basis functions I_S = ∅, the index set of all possible basis functions I_C = {1, 2, …, M} and the iteration index p = 1.
3. Calculate the correlation between y^S and each possible basis vector b_m, where m ∈ {1, 2, …, M}, based on (2.28).
4. Select the vector b_{mS} that has the largest correlation cor_{mS}.
5. While r_cur ≠ 0:
6. Update p = p + 1, I_S = I_S ∪ {m_S} and remove m_S from I_C.
7. Use the algorithm in [38] to determine the maximal |γ_p| such that either the resulting residual

$$r = r_{cur} - \gamma_p \cdot \sum_{m \in I_S} b_m \tag{2.32}$$

is equal to 0, or another basis vector b_{mS} (m_S ∈ I_C) has as much correlation with the resulting residual:

$$\left| b_{m_S}^T \cdot r \right| = \left| b_m^T \cdot r \right| \quad (\forall m \in I_S). \tag{2.33}$$

8. Update r_cur = r.
9. End while.

It can be proven that with several small modifications, LAR will generate the entire piece-wise linear solution trajectory c(λ) for the L1-norm regularization in (2.24) [38]. Based on c(λ), the optimal λ can efficiently be determined by cross-validation. First, we partition the entire data set into Q groups and use LAR to extract the model coefficient trajectory c_q(λ) (where q ∈ {1, 2, …, Q}) during each cross-validation run. Next, instead of repeatedly solving (2.26) for different λ's, the modeling error e_q(λ) associated with each run can be estimated directly from c_q(λ). Once all cross-validation runs are complete, the final modeling error is calculated as e(λ) = [e_1(λ) + e_2(λ) + ⋯ + e_Q(λ)]/Q, and the optimal λ is determined by minimizing e(λ) [27]. The computational cost of LAR is similar to that of applying the interior-point method to solve a single convex optimization (2.24) with a fixed λ value. Therefore, compared to the simple L1-norm regularization approach, LAR is typically orders of magnitude more efficient, as demonstrated in [38].
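The two-vector geometry of Figure 2.3 can be worked through numerically. The concrete vectors and coefficients below are hypothetical choices that reproduce the same structure (b_2 more correlated than b_1, both correlations positive):

```python
import numpy as np

# Two normalized basis vectors and y^S = 1*b1 + 2*b2, mirroring Figure 2.3
b1 = np.array([1.0, 0.0])
b2 = np.array([0.6, 0.8])                 # unit length
y = 1.0 * b1 + 2.0 * b2

# b2 is the most correlated vector per (2.28), so LAR moves along b2 first
assert abs(b2 @ y) > abs(b1 @ y)

# Solve (2.30) for gamma_1; with both correlations positive it is linear:
# b2.y - g = b1.y - g*(b1.b2)  =>  g = (b2.y - b1.y)/(1 - b1.b2)
g1 = (b2 @ y - b1 @ y) / (1.0 - b1 @ b2)

# At the equiangular point the two correlations coincide,
# and the residual lies exactly along the equiangular direction b1 + b2,
# so gamma_2 = 1 gives c1 = gamma_2 = 1 and c2 = gamma_1 + gamma_2 = 2
r = y - g1 * b2
```

Here γ_1 = 1 and the residual equals b_1 + b_2, so one further equiangular step recovers the exact coefficients c_1 = 1, c_2 = 2, matching the two-iteration behavior described for Figure 2.3.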
2.4.5 Numerical experiments

In this subsection, we compare the efficacy of several of the aforementioned RSM methods by using a two-stage operational amplifier (OpAmp) designed in a commercial 65 nm process [26]. We aim to build linear performance models for 4 PoIs, i.e., gain, bandwidth, power and offset, considering 630 independent random variables to capture both inter-die and intra-die variations of metal-oxide-semiconductor (MOS) transistors and layout parasitics. Figure 2.4 shows the relative modeling error of three different modeling techniques (i.e., LS fitting, OMP, and LAR) as a function of the number of training samples. To achieve the same accuracy, OMP and LAR require much fewer training samples than LS, because they solve the unknown model coefficients from an underdetermined equation by exploiting the underlying sparsity of the model coefficients. In this example, such a sparse structure exists, since the variability of each circuit-level performance metric is dominated by a few device-level variation sources. For instance, the offset of the OpAmp is mainly determined by the device mismatches of the input differential pair. Studying Figure 2.4, we find that OMP and LAR yield different modeling accuracy, given the same number of training samples. Even though both OMP and LAR build sparse performance models, they rely on different algorithms to select the important basis functions and/or determine the model coefficients. Compared to LAR, OMP shows slightly improved modeling accuracy in most cases. However, there are a few examples where LAR outperforms OMP, as shown in Figure 2.4(b).

Figure 2.4 Performance modeling errors (%) of different RSM approaches (LS, OMP, and LAR) are shown for four performance metrics with different numbers of training samples: (a) gain, (b) bandwidth, (c) power, and (d) offset [26]

To the best of our knowledge, there is no theoretical evidence to prove that one method is always better than the other.

2.5 Regularization

In the literature, the general objective of regularization is to avoid over-fitting when using RSM to approximate a given PoI. Over-fitting usually occurs due to the limited number of available training data (similar to the case discussed in Section 2.4), unavoidable measurement noise and/or a highly complicated model template (e.g., a highly nonlinear model template with a large number of unknown model coefficients). In these cases, by carefully tuning the model coefficients, we may optimize the model to exactly match the training data; however, the resulting model is not able to accurately predict the data points outside the training set. Whenever over-fitting occurs, the magnitude of the obtained model coefficients often becomes extremely large [24]. To intuitively understand the aforementioned over-fitting issue, we take a simple curve-fitting example for illustration purposes [24]. Suppose that we have obtained a set of training samples {(x_n, y_n^S); n = 1, 2, …, 11}, where the dimension of the VoIs (i.e., N) is equal to 1 and the number of training samples (i.e., N_S) is equal to 11, as shown in Figure 2.5. These samples are randomly generated from the sinusoid function y = sin(2πx) within the region x ∈ [0, 1] by adding a small Gaussian noise. In this example, the modeling objective is to approximate the sinusoid function y = sin(2πx) from these 11 training samples by using a set of monomial basis functions {b_m(x) = x^{m−1}; m = 1, 2, …, M}.
By setting M to 4 and 11, respectively, we can calculate the corresponding coefficients c based on (2.8) and then obtain the predictions over the entire region x ∈ [0, 1], shown as the red curves in Figure 2.5(b) and (c). Table 2.1 summarizes the estimated values of the model coefficients for different M. According to Figure 2.5(b) and (c) and Table 2.1, we find that when more basis functions are adopted, the magnitude of the model coefficients gets larger. When M = 11, the fitted model exactly matches each training sample with extremely large coefficients; however, it exhibits large-scale oscillations due to over-fitting, as shown in Figure 2.5(c). In this example, the model template with M = 11 is prohibitively complicated given N_S = 11 training samples, and the model coefficients are over-fitted to incorrectly capture the random noise [24].

Figure 2.5 The modeling results for the sinusoid function y = sin(2πx) in (a), by using different methods and different numbers of basis functions (i.e., M): (b) LS regression with M = 4, (c) LS regression with M = 11, and (d) L2-norm regularization with M = 11

Table 2.1 The estimated model coefficients to approximate the sinusoid function y = sin(2πx) by using different methods

         LS: M = 4    LS: M = 11     L2-norm regularization
c_1      0.038        0.022          0.164
c_2      9.924        1.683x10^4     5.237
c_3      31.447       3.619x10^5     10.871
c_4      21.858       3.348x10^6     4.162
c_5                   1.750x10^7     2.277
c_6                   5.694x10^7     4.958
c_7                   1.198x10^8     4.756
c_8                   1.630x10^8     2.982
c_9                   1.385x10^8     0.579
c_10                  6.673x10^7     1.902
c_11                  1.391x10^7     4.180

To prevent the model coefficients from reaching large values and, hence, avoid over-fitting, we can introduce prior knowledge (e.g., an appropriate prior distribution) for the model coefficients c.
For simplicity, let us first assume that all coefficients are independent and follow a zero-mean Gaussian distribution:

$$\mathrm{pdf}(c) = \prod_{m=1}^{M} \frac{1}{\sqrt{2\pi} \cdot \sigma_G} \exp\left( -\frac{c_m^2}{2 \sigma_G^2} \right), \tag{2.34}$$

where all elements in c share the same standard deviation σ_G. This standard deviation controls the prior distribution of all model coefficients. As shown in Figure 2.6, if σ_G is small (e.g., σ_G = 0.1), almost all model coefficients are constrained to small values (i.e., close to zero). On the other hand, if σ_G is large (e.g., σ_G = 1), there may exist several coefficients with large magnitude (i.e., located far away from zero).

Figure 2.6 The zero-mean Gaussian PDF for c_m with different σ_G (σ_G = 1, σ_G = 0.5, and σ_G = 0.1)

According to Bayes' theorem, given a set of training data y^S, the posterior distribution of c is proportional to the product of its prior distribution and the likelihood function:

$$\mathrm{pdf}\left( c \mid y^S \right) \propto \mathrm{pdf}(c) \cdot \mathrm{pdf}\left( y^S \mid c \right). \tag{2.35}$$

Next, the model coefficients c can be estimated by applying the maximum a posteriori (MAP) method, i.e., maximizing the posterior PDF pdf(c | y^S) in (2.35). Taking the natural logarithm of (2.35) and combining the prior PDF in (2.34) and the likelihood in (2.12), maximizing the posterior pdf(c | y^S) is equivalent to maximizing:

$$\ln\left[ \mathrm{pdf}(c) \cdot \mathrm{pdf}\left( y^S \mid c \right) \right] = -\frac{M + N_S}{2} \ln(2\pi) - N_S \ln(\sigma_e) - M \ln(\sigma_G) - \frac{1}{2 \sigma_e^2} \left\| B \cdot c - y^S \right\|_2^2 - \frac{1}{2 \sigma_G^2} \|c\|_2^2. \tag{2.36}$$

On the right-hand side of (2.36), only the last two terms depend on c. Therefore, by defining a regularization parameter:

$$\rho = \frac{\sigma_e^2}{\sigma_G^2}, \tag{2.37}$$

maximizing (2.36) can be rewritten as minimizing the error function E(c) plus a regularization term over c:

$$\min_{c}\; \left\| B \cdot c - y^S \right\|_2^2 + \rho \cdot \|c\|_2^2 = E(c) + \rho \cdot \|c\|_2^2, \tag{2.38}$$

where the hyper-parameter ρ is used to balance the trade-off between E(c) and the regularization term ρ·||c||_2^2. By using cross-validation to find an appropriate value of ρ, we can approach the optimal solution with minimum modeling error while avoiding over-fitting.
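The objective (2.38) has the familiar closed-form ridge solution c = (B^T B + ρI)^{-1} B^T y^S, which can be tried directly on a curve-fitting setup like the one in Figure 2.5. The noise seed and sample placement below are hypothetical, so the numbers will not reproduce Table 2.1 exactly, only the qualitative contrast:

```python
import numpy as np

rng = np.random.default_rng(3)
n, M, rho = 11, 11, 1e-3

# 11 noisy samples of y = sin(2*pi*x) and the monomial basis 1, x, ..., x^10
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
B = np.vander(x, M, increasing=True)

# Plain LS interpolation versus the closed form of (2.38):
# c = (B^T B + rho*I)^(-1) B^T y
c_ls = np.linalg.lstsq(B, y, rcond=None)[0]
c_l2 = np.linalg.solve(B.T @ B + rho * np.eye(M), B.T @ y)
```

The unregularized fit interpolates the noise with enormous coefficients, while the small ρ penalty keeps the regularized coefficients orders of magnitude smaller, which is the effect summarized by the last column of Table 2.1.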
Based on the theory of Lagrange multipliers [24], the optimization problem in (2.38) is equivalent to the following constrained optimization under the Karush–Kuhn–Tucker (KKT) conditions:

$$\min_{c}\; \left\| B \cdot c - y^S \right\|_2^2 \quad \text{s.t.} \quad \|c\|_2^2 \le \lambda. \tag{2.39}$$

The aforementioned discussions demonstrate that we can adopt the L2-norm regularization in (2.38) or (2.39) to avoid over-fitting by introducing the prior Gaussian distribution in (2.34). Let us return to the curve-fitting example in Figure 2.5. By solving the L2-norm regularization in (2.38) with ρ = 0.001 and M = 11 (i.e., 11 basis functions), we obtain the optimal coefficients shown in Table 2.1; the fitted model is shown as the blue curve in Figure 2.5(d). Compared to the LS solution with M = 11, the magnitudes of the model coefficients calculated by L2-norm regularization are much smaller and, as a result, over-fitting is avoided. Generally speaking, by introducing different prior knowledge (i.e., different prior distributions) over the model coefficients, we can derive different regularization methods. For instance, if we assume that the elements in c are correlated Gaussian variables, the resulting regularization term must contain their covariance matrix [30]. Alternatively, if we assume that all model coefficients are independent and follow the same Laplace distribution, RSM will be cast as the L1-norm regularization problem discussed in Section 2.4.2.

2.6 Bayesian model fusion

In this section, we discuss the mathematical formulation of BMF and its applications in detail.
Unlike the conventional RSM approaches (e.g., the LS regression and regularization methods described in the previous sections) that fit the performance model based on the simulation or measurement data of a single stage only (e.g., post-layout simulation data), BMF attempts to identify the underlying pattern of the unknown model coefficients by reusing the early-stage data (e.g., schematic-level simulation data) in order to efficiently fit a late-stage (e.g., post-layout) performance model. By fusing the early-stage and late-stage data together through Bayesian inference, the simulation and/or measurement cost can be significantly reduced. In particular, BMF consists of the following two major steps: (i) statistically defining the prior knowledge for the unknown model coefficients based on the early-stage simulation data and (ii) optimally determining the late-stage performance model by combining the prior knowledge and very few late-stage samples [8,25,30]. To intuitively explain the key idea of BMF, let us consider two different performance models: the early-stage model y_E(x) and the late-stage model y_L(x):

$$y_E(x) = \sum_{m=1}^{M} c_{E,m} \cdot b_m(x), \tag{2.40}$$

$$y_L(x) = \sum_{m=1}^{M} c_{L,m} \cdot b_m(x), \tag{2.41}$$

where {c_{E,m}; m = 1, 2, …, M} and {c_{L,m}; m = 1, 2, …, M} represent the early-stage and late-stage model coefficients, respectively. For simplicity, we assume that the early-stage model y_E(x) and the late-stage model y_L(x) share the same basis functions. Other special cases, where y_L(x) contains additional basis functions or variables that are not found in y_E(x), can also be appropriately handled [25,30]. The early-stage model y_E(x) is fitted from the early-stage simulation data. In practice, the early-stage simulation data are collected to validate the early-stage design before we move to the next stage. For this reason, we should already know the early-stage model y_E(x) before fitting the late-stage model y_L(x).
Namely, we can assume that the early-stage model coefficients {c_{E,m}; m = 1, 2, …, M} are provided as the input to the BMF method for late-stage performance modeling. Given the early-stage model y_E(x), we have to extract the prior knowledge that can be used to facilitate efficient late-stage modeling. To this end, BMF attempts to learn the underlying pattern of the late-stage model coefficients {c_{L,m}; m = 1, 2, …, M} from the early-stage model coefficients {c_{E,m}; m = 1, 2, …, M}. Remember that both the early-stage and the late-stage models are fitted for the same PoI of the same circuit. Their model coefficients should therefore be similar. We statistically represent such prior knowledge as a prior PDF [24] for the late-stage model coefficients {c_{L,m}; m = 1, 2, …, M}. In particular, we consider two different cases for defining the prior distribution: a zero-mean prior distribution and a nonzero-mean prior distribution [25].

2.6.1 Zero-mean prior distribution

If the early-stage model coefficient c_{E,m} has a large (or small) magnitude, it is likely that the late-stage model coefficient c_{L,m} also has a large (or small) magnitude. Such prior knowledge can be mathematically encoded as a zero-mean Gaussian distribution:

$$c_{L,m} \sim N\left( 0, \sigma_m^2 \right), \tag{2.42}$$

where m ∈ {1, 2, …, M}, and the standard deviation σ_m is a parameter that encodes the magnitude information of the model coefficient c_{L,m}. If the standard deviation σ_m is small, the prior distribution pdf(c_{L,m}) is narrowly peaked around zero, implying that the coefficient c_{L,m} is probably close to zero. Otherwise, if σ_m is large, the prior distribution pdf(c_{L,m}) spreads widely over a large range, and the coefficient c_{L,m} can take a value that is far away from zero [25,30]. Different from (2.34), where we assume that all model coefficients share the same standard deviation σ_G, in (2.42) the values of σ_m for different model coefficients (i.e., different values of m) are different.
Given (2.42), we need to appropriately determine the standard deviation σ_m to fully specify the prior distribution pdf(c_{L,m}). The value of σ_m should be optimized so that the probability distribution pdf(c_{L,m}) correctly represents the prior knowledge. In other words, by appropriately choosing the value of σ_m, the prior distribution pdf(c_{L,m}) should take a large value (i.e., a high probability) at the location where the actual late-stage model coefficient c_{L,m} occurs. However, at this moment we only know the early-stage model coefficient c_{E,m}, instead of the late-stage model coefficient c_{L,m}. Remember that c_{E,m} and c_{L,m} are expected to be similar. Hence, the prior distribution pdf(c_{L,m}) should also take a large value at c_{L,m} = c_{E,m}. Based on this criterion, the optimal prior distribution pdf(c_{L,m}) can be found by maximizing the probability for c_{E,m} to occur:

$$\max_{\sigma_m}\; \mathrm{pdf}\left( c_{L,m} = c_{E,m} \right). \tag{2.43}$$

Namely, given the early-stage model coefficient c_{E,m}, the optimal standard deviation σ_m is determined by solving the MLE in (2.43). To solve for σ_m from (2.43), we consider the following first-order optimality condition [25,30]:

$$\frac{d}{d\sigma_m}\, \mathrm{pdf}\left( c_{L,m} = c_{E,m} \right) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{c_{E,m}^2}{2\sigma_m^2} \right) \cdot \left( \frac{c_{E,m}^2}{\sigma_m^4} - \frac{1}{\sigma_m^2} \right) = 0. \tag{2.44}$$

Hence, the optimal value of σ_m is equal to:

$$\sigma_m = \left| c_{E,m} \right|. \tag{2.45}$$

Equation (2.45) reveals an important fact: the optimal standard deviation σ_m is simply equal to the absolute value of the early-stage model coefficient |c_{E,m}|. This observation is consistent with our intuition. Namely, if the early-stage model coefficient c_{E,m} has a large (or small) magnitude, the late-stage model coefficient c_{L,m} should also have a large (or small) magnitude and, hence, the standard deviation σ_m should be large (or small). To complete the definition of the prior distribution for all late-stage model coefficients {c_{L,m}; m = 1, 2, …, M}, we further assume that these coefficients are statistically independent and, hence, their joint distribution is:
    pdf(c_L) = \frac{1}{(\sqrt{2\pi})^M \prod_{m=1}^{M} \sigma_m} \exp\!\left(-\sum_{m=1}^{M} \frac{c_{L,m}^2}{2\sigma_m^2}\right),    (2.46)

where c_L = [c_{L,1} \; c_{L,2} \; \cdots \; c_{L,M}]^T contains all late-stage model coefficients. The independence assumption in (2.46) simply implies that we do not know the correlation information among these coefficients as our prior knowledge. The correlation information will be learned from the late-stage simulation data when the posterior distribution is calculated by MAP [25,30].

Given a set of N_L late-stage simulation or measurement samples {(x_n, y^S_n); n = 1, 2, ..., N_L}, the objective of MAP is to calculate the model coefficients c_L by maximizing the posterior distribution pdf(c_L | y^S_L), where y^S_L = [y^S_{L,1} \; y^S_{L,2} \; \cdots \; y^S_{L,N_L}]^T. Based on Bayes' theorem, the posterior distribution pdf(c_L | y^S_L) is proportional to the product of the prior distribution pdf(c_L) in (2.46) and the likelihood function pdf(y^S_L | c_L):

    pdf(c_L | y^S_L) \propto pdf(c_L) \cdot pdf(y^S_L | c_L).    (2.47)

Similar to (2.9) and (2.10), to derive the likelihood function pdf(y^S_L | c_L), we assume that the error for the late-stage performance modeling follows a zero-mean Gaussian distribution with variance σ²_{εL}:

    y^S_L = y_L(x) + \varepsilon_L,    (2.48)
    \varepsilon_L \sim N(0, \sigma_{\varepsilon L}^2),    (2.49)

where the value of σ_{εL} can be optimally determined by cross-validation. As a result, the likelihood function pdf(y^S_L | c_L) is a multivariate Gaussian distribution [25]. Combining (2.46), (2.47), and (2.48), it is straightforward to prove that the posterior distribution pdf(c_L | y^S_L) is Gaussian [25,30]:

    c_L | y^S_L \sim N(\mu'_L, \Sigma'_L),    (2.50)

where:

    \mu'_L = \sigma_{\varepsilon L}^{-2} \Sigma'_L B_L^T y^S_L,    (2.51)
    \Sigma'_L = \left[\sigma_{\varepsilon L}^{-2} B_L^T B_L + diag(\sigma_1^{-2}, \sigma_2^{-2}, \ldots, \sigma_M^{-2})\right]^{-1},    (2.52)
    B_L = \begin{bmatrix} b_1(x_1) & b_2(x_1) & \cdots & b_M(x_1) \\ b_1(x_2) & b_2(x_2) & \cdots & b_M(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ b_1(x_{N_L}) & b_2(x_{N_L}) & \cdots & b_M(x_{N_L}) \end{bmatrix},    (2.53)

and diag(•) represents an operator to construct a diagonal matrix.
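The posterior statistics in (2.51) and (2.52) amount to a few lines of linear algebra. The following is a minimal numpy sketch, not code from the chapter: the basis matrix B_L, the samples y^S_L, and the error standard deviation σ_εL are assumed to be given, and σ_m is set to |c_{E,m}| per (2.45).

```python
import numpy as np

def bmf_zero_mean_map(B_L, y_L, c_E, sigma_eL):
    """MAP estimate of the late-stage coefficients under the zero-mean prior.

    B_L      : (N_L, M) basis matrix, B_L[n, m] = b_m(x_n), per (2.53)
    y_L      : (N_L,)   late-stage samples y^S_L
    c_E      : (M,)     early-stage coefficients; sigma_m = |c_E[m]| per (2.45)
    sigma_eL : modeling-error std; chosen by cross-validation in the chapter
    """
    sigma_m = np.abs(c_E)
    # Sigma'_L = [sigma_eL^-2 B^T B + diag(sigma_m^-2)]^-1        (2.52)
    precision = B_L.T @ B_L / sigma_eL**2 + np.diag(1.0 / sigma_m**2)
    Sigma_L = np.linalg.inv(precision)
    # c_MAP = mu'_L = sigma_eL^-2 Sigma'_L B^T y^S_L              (2.51)
    return Sigma_L @ (B_L.T @ y_L) / sigma_eL**2
```

For large M one would replace the explicit inverse with a Cholesky solve, but the structure of the estimator is unchanged: a small |c_{E,m}| shrinks the corresponding late-stage coefficient toward zero, while a large |c_{E,m}| lets the data dominate.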
Since the Gaussian PDF pdf(c_L | y^S_L) reaches its maximum at its mean value, the MAP solution for the model coefficients c_L is equal to the mean vector μ'_L:

    c_{L,MAP} = \mu'_L = \sigma_{\varepsilon L}^{-2} \Sigma'_L B_L^T y^S_L,    (2.54)

where σ_{εL} is a hyper-parameter that can be determined by cross-validation.

2.6.2 Nonzero-mean prior distribution

An alternative prior definition is to construct a nonzero-mean Gaussian distribution for each late-stage model coefficient c_{L,m}:

    c_{L,m} \sim N(c_{E,m}, \lambda^2 c_{E,m}^2),    (2.55)

where λ is a hyper-parameter that can be determined by cross-validation. Figure 2.7 shows a simple example of the nonzero-mean prior distribution for two model coefficients c_{L,1} and c_{L,2}, where the absolute value of c_{E,1} is small and that of c_{E,2} is large [25].

Figure 2.7 A simple example of nonzero-mean prior distribution is shown for two model coefficients c_{L,1} and c_{L,2} (curves pdf(c_{L,1}) ~ N(c_{E,1}, λ²c²_{E,1}) and pdf(c_{L,2}) ~ N(c_{E,2}, λ²c²_{E,2}) plotted against c_{L,1} or c_{L,2}). The coefficient c_{L,1} possibly takes a small magnitude, since its prior distribution is narrowly peaked around a small value. The coefficient c_{L,2} possibly takes a large magnitude, since its prior distribution widely spreads around a large value [25]

The prior distribution in (2.55) has a two-fold meaning. First, the Gaussian distribution pdf(c_{L,m}) is peaked at its mean value c_{L,m} = c_{E,m}, implying that the early-stage coefficient c_{E,m} and the late-stage coefficient c_{L,m} are likely to be similar. In other words, since the Gaussian distribution pdf(c_{L,m}) decays exponentially with (c_{L,m} − c_{E,m})², it is unlikely to observe a late-stage coefficient c_{L,m} that is extremely different from the early-stage coefficient c_{E,m}. Second, the standard deviation of the prior distribution pdf(c_{L,m}) is proportional to |c_{E,m}|. It means that the absolute difference between the late-stage coefficient c_{L,m} and the early-stage coefficient c_{E,m} can be large (or small) if the magnitude of the early-stage coefficient |c_{E,m}| is large (or small).
Restating in words, each late-stage coefficient c_{L,m} (where m ∈ {1, 2, ..., M}) is given a relatively equal opportunity to deviate from the corresponding early-stage coefficient c_{E,m} [25]. Similar to (2.46), we again assume that all late-stage model coefficients {c_{L,m}; m = 1, 2, ..., M} are statistically independent, so their joint distribution is:

    pdf(c_L) = \prod_{m=1}^{M} pdf(c_{L,m}) = \frac{1}{(\sqrt{2\pi})^M \lambda^M \prod_{m=1}^{M} |c_{E,m}|} \exp\!\left(-\sum_{m=1}^{M} \frac{(c_{L,m} - c_{E,m})^2}{2\lambda^2 c_{E,m}^2}\right).    (2.56)

To derive the posterior distribution of c_L, we assume that the error for the late-stage performance model follows the zero-mean Gaussian distribution in (2.48). Next, combining (2.47), (2.48), and (2.56), we find that the posterior distribution pdf(c_L | y^S_L) is Gaussian [25]:

    c_L | y^S_L \sim N(\mu^{no\prime}_L, \Sigma^{no\prime}_L),    (2.57)

where:

    \mu^{no\prime}_L = \Sigma^{no\prime}_L \left[\eta \; diag(c_{E,1}^{-2}, c_{E,2}^{-2}, \ldots, c_{E,M}^{-2}) \, c_E + B_L^T y^S_L\right],    (2.58)
    \Sigma^{no\prime}_L = \left[\eta \; diag(c_{E,1}^{-2}, c_{E,2}^{-2}, \ldots, c_{E,M}^{-2}) + B_L^T B_L\right]^{-1},    (2.59)

c_E = [c_{E,1} \; c_{E,2} \; \cdots \; c_{E,M}]^T, and

    \eta = \frac{\sigma_{\varepsilon L}^2}{\lambda^2}.    (2.60)

Similar to (2.54), the MAP solution for c_L is equal to the mean vector μ^{no′}_L:

    c_{L,MAP} = \mu^{no\prime}_L = \Sigma^{no\prime}_L \left[\eta \; diag(c_{E,1}^{-2}, c_{E,2}^{-2}, \ldots, c_{E,M}^{-2}) \, c_E + B_L^T y^S_L\right].    (2.61)

Studying (2.61) reveals an important observation: we only need to determine η, instead of the individual parameters σ_{εL} and λ, in order to find the MAP solution c_{L,MAP}. Similar to the case of the zero-mean prior distribution, the hyper-parameter η can be optimally determined by using the cross-validation technique discussed in Section 2.4.3.

For a given performance modeling problem solved by BMF, it is important to determine whether a nonzero-mean or zero-mean prior distribution is preferred. Intuitively, a nonzero-mean prior distribution provides stronger prior knowledge than a zero-mean prior distribution.
The nonzero-mean prior distribution encodes both the sign and the magnitude information about the late-stage model coefficients, while the zero-mean prior distribution encodes the magnitude information only. From this point of view, a nonzero-mean prior distribution is preferred if the early-stage and late-stage model coefficients are extremely close and, hence, the prior knowledge learned from the early-stage model coefficients is highly accurate. On the other hand, if the early-stage and late-stage model coefficients are substantially different, we should not impose an overly strong prior distribution and, hence, a zero-mean prior distribution is preferred in this case [25,30].

To solve a broad range of practical problems and further reduce the modeling cost for large-scale AMS circuits, various efficient methods based on the BMF scheme have been developed over the past several years. For instance, to reduce the number of required physical samples, co-learning BMF [39] trains an extremely low-complexity model to generate pseudo samples for fitting a high-complexity model with high accuracy. To further reduce the late-stage modeling cost (e.g., for post-silicon performance modeling) by taking into account multiple sources of prior knowledge (e.g., pre-silicon simulation results and post-silicon measurement data), dual-prior BMF [40] is derived from Bayesian inference represented as a graphical model [24]. To facilitate the performance modeling of tunable AMS circuits, correlated BMF [41] encodes the correlation information for both the model template and the coefficient magnitude among different knob configurations by using a unified prior distribution. The details of these extended versions of BMF can be found in the literature [39–41].
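For concreteness, the nonzero-mean MAP estimate (2.61) can be sketched in the same numpy style; the basis matrix B_L, the samples y^S_L, the early-stage coefficients c_E, and the single hyper-parameter η are assumed given (in the chapter, η comes from cross-validation).

```python
import numpy as np

def bmf_nonzero_mean_map(B_L, y_L, c_E, eta):
    """MAP estimate under the nonzero-mean prior, following (2.58)-(2.61).

    eta = sigma_eL^2 / lambda^2 per (2.60): the single hyper-parameter
    that must be selected (by cross-validation in the chapter).
    """
    D = np.diag(1.0 / c_E**2)                       # diag(c_E,m^-2)
    Sigma = np.linalg.inv(eta * D + B_L.T @ B_L)    # (2.59)
    return Sigma @ (eta * D @ c_E + B_L.T @ y_L)    # (2.61)
```

The two limits make the role of η visible: as η → 0 (weak prior) the estimate approaches the least-squares fit of the late-stage data, and as η → ∞ (strong prior) it collapses onto the early-stage coefficients c_E.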
2.6.3 Numerical experiments

In this subsection, we demonstrate the efficacy of BMF by using a ring oscillator example [30] with 7,177 independent random variables to model device-level process variations, including both inter-die variations and random mismatches at the post-layout stage. The schematic stage is considered the early stage and the post-layout stage is considered the late stage. Our objective is to approximate three post-layout performance metrics (i.e., power, phase noise, and frequency) as linear functions of these random variables.

For testing and comparison purposes, three different performance modeling techniques are implemented: (i) OMP, (ii) the BMF method with zero-mean prior distribution (BMF-ZM), and (iii) the BMF method with nonzero-mean prior distribution (BMF-NZM). The OMP algorithm does not consider any prior information from the schematic stage. When applying BMF-ZM or BMF-NZM, we use the schematic-level performance model to define our prior knowledge for post-layout performance modeling.

Figure 2.8 Performance modeling errors for three PoIs of different methods with different numbers of training samples: (a) power, (b) phase noise, and (c) frequency (each panel plots modeling error (%) versus the number of post-layout samples for OMP, BMF-ZM, and BMF-NZM)

Figure 2.8 summarizes the relative modeling error as a function of the number of post-layout training samples. Studying Figure 2.8 reveals two important observations. First, the modeling error decreases as the number of simulation samples increases. Given the same number of samples, both BMF-ZM and BMF-NZM achieve significantly higher accuracy than OMP. Second, BMF-ZM is less accurate than BMF-NZM for power but is more accurate than BMF-NZM for frequency.
In other words, the optimal prior distribution can vary from case to case in practice. Since the overall modeling cost is dominated by post-layout transistor-level simulations, both BMF methods achieve a 9× runtime speed-up over OMP with superior accuracy in this example.

2.7 Summary

In this chapter, we discuss several state-of-the-art RSM methods for performance modeling of analog and AMS circuits. RSM aims to approximate a given PoI by a linear combination of a set of basis functions. If the number of training samples is much larger than the number of adopted basis functions, the model coefficients can be accurately estimated by using LS regression. To reduce the number of required training samples and, hence, the modeling cost, we can exploit the sparsity of the model coefficients and cast performance modeling as an L0-norm regularization problem. Both OMP and L1-norm regularization can be used to efficiently approximate the sparse solution of L0-norm regularization. Alternatively, based on the observation that today's AMS circuits are often designed via a multistage flow, BMF attempts to reduce the modeling cost by fusing the early-stage and late-stage data together through Bayesian inference. As an important direction for future research, a number of recently developed machine learning techniques, such as deep learning, may be further adopted in RSM for AMS applications.

Acknowledgments

This work is supported partly by National Natural Science Foundation of China (NSFC) research projects 61874032, 61574046, and 61774045, and partly by project 2018MS005 from the State Key Laboratory of ASIC and System at Fudan University.

References

[1] G. Gielen and R. Rutenbar, "Computer-aided design of analog and mixed-signal integrated circuits," Proceedings of the IEEE, vol. 88, no. 18, pp. 1825–1854, 2000.
[2] R. Rutenbar, G. Gielen and J.
Roychowdhury, "Hierarchical modeling, optimization, and synthesis for system-level analog and RF designs," Proceedings of the IEEE, vol. 95, no. 3, pp. 640–669, 2007.
[3] X. Li, J. Le and L. Pileggi, Statistical Performance Modeling and Optimization, Boston, USA: Now Publishers, 2007.
[4] A. Singhee and R. Rutenbar, "Beyond low-order statistical response surfaces: latent variable regression for efficient, highly nonlinear fitting," Design Automation Conference (DAC), San Diego, CA, USA, pp. 256–261, 2007.
[5] X. Li, J. Le, L. Pileggi and A. Strojwas, "Projection-based performance modeling for inter/intra-die variations," International Conference on Computer-Aided Design (ICCAD), San Jose, CA, USA, pp. 721–727, 2005.
[6] Z. Feng and P. Li, "Performance-oriented statistical parameter reduction of parameterized systems via reduced rank regression," International Conference on Computer-Aided Design (ICCAD), San Jose, CA, USA, pp. 868–875, 2006.
[7] A. Mitev, M. Marefat, D. Ma and J. Wang, "Principle Hessian direction based parameter reduction for interconnect networks with process variation," International Conference on Computer-Aided Design (ICCAD), San Jose, CA, USA, pp. 632–637, 2007.
[8] X. Li, F. Wang, S. Sun and C. Gu, "Bayesian model fusion: a statistical framework for efficient pre-silicon validation and post-silicon tuning of complex analog and mixed-signal circuits," International Conference on Computer-Aided Design (ICCAD), San Jose, CA, USA, pp. 795–802, 2013.
[9] J. Plouchart, F. Wang, X. Li, et al., "Adaptive circuit design methodology and test applied to millimeter-wave circuits," IEEE Design & Test, vol. 31, no. 6, pp. 8–18, 2014.
[10] W. Daems, G. Gielen and W. Sansen, "An efficient optimization-based technique to generate posynomial performance models for analog integrated circuits," Design Automation Conference (DAC), New Orleans, Louisiana, USA, pp. 431–436, 2002.
[11] X. Li, P. Gopalakrishnan, Y. Xu and L.
Pileggi, "Robust analog/RF circuit design with projection-based posynomial modeling," International Conference on Computer-Aided Design (ICCAD), San Jose, CA, USA, pp. 855–862, 2004.
[12] M. Hershenson, S. Boyd and T. Lee, "Optimal design of a CMOS op-amp via geometric programming," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 20, no. 1, pp. 1–21, 2001.
[13] Y. Wang, M. Orshansky and C. Caramanis, "Enabling efficient analog synthesis by coupling sparse regression and polynomial optimization," Design Automation Conference (DAC), San Francisco, CA, USA, 2014.
[14] F. Wang, S. Yin, M. Jun, et al., "Re-thinking polynomial optimization: efficient programming of reconfigurable radio frequency (RF) systems by convexification," IEEE Asia and South Pacific Design Automation Conference (ASPDAC), Macau, China, pp. 545–550, 2016.
[15] Y. Wang, C. Caramanis and M. Orshansky, "PolyGP: improving GP-based analog optimization through accurate high-order monomials and semidefinite relaxation," IEEE Design, Automation & Test in Europe Conference (DATE), Dresden, Germany, pp. 1423–1428, 2016.
[16] J. Tao, Y. Su, D. Zhou, X. Zeng and X. Li, "Graph-constrained sparse performance modeling for analog circuit optimization via SDP relaxation," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 38, no. 8, pp. 1385–1398, 2018.
[17] J. Tao, C. Liao, X. Zeng and X. Li, "Harvesting design knowledge from internet: high-dimensional performance trade-off modeling for large-scale analog circuits," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 35, no. 1, pp. 23–36, 2016.
[18] M. Sengupta, S. Saxena, L. Daldoss, G. Kramer, S. Minehane and J. Cheng, "Application-specific worst case corners using response surfaces and statistical models," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 24, no. 9, pp. 1372–1380, 2005.
[19] H. Zhang, T. Chen, M. Ting and X.
Li, "Efficient design-specific worst case corner extraction for integrated circuits," Design Automation Conference (DAC), San Francisco, CA, USA, pp. 386–389, 2009.
[20] S. Nassif, "Modeling and analysis of manufacturing variations," IEEE Custom Integrated Circuits Conference (CICC), San Diego, CA, USA, pp. 223–228, 2001.
[21] X. Li, J. Le, P. Gopalakrishnan and L. Pileggi, "Asymptotic probability extraction for nonnormal performance distributions," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 26, no. 1, pp. 16–37, 2007.
[22] A. Dharchoudhury and S. Kang, "Worst-case analysis and optimization of VLSI circuit performance," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 14, no. 4, pp. 481–492, 1995.
[23] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge, UK: Cambridge University Press, 2004.
[24] C. Bishop, Pattern Recognition and Machine Learning, New York, NY, USA: Springer, 2006.
[25] F. Wang, P. Cachecho, W. Zhang, et al., "Bayesian model fusion: large-scale performance modeling of analog and mixed-signal circuits by reusing early-stage data," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 35, no. 8, pp. 1255–1268, 2016.
[26] X. Li, "Finding deterministic solution from underdetermined equation: large-scale performance modeling of analog/RF circuits," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 29, no. 11, pp. 1661–1668, 2010.
[27] X. Li, "Finding deterministic solution from underdetermined equation: large-scale performance modeling by least angle regression," Design Automation Conference (DAC), San Francisco, CA, USA, pp. 364–369, 2009.
[28] X. Li, W. Zhang and F. Wang, "Large-scale statistical performance modeling of analog and mixed-signal circuits," IEEE Custom Integrated Circuits Conference (CICC), San Jose, CA, USA, 2012.
[29] C. Fang, Q. Huang, F. Yang, X.
Zeng, D. Zhou and X. Li, "Efficient performance modeling of integrated circuits via kernel density based sparse regression," Design Automation Conference (DAC), Austin, TX, USA, 2016.
[30] F. Wang, W. Zhang, S. Sun, X. Li and C. Gu, "Bayesian model fusion: large-scale performance modeling of analog and mixed-signal circuits by reusing early-stage data," Design Automation Conference (DAC), Austin, TX, USA, 2013.
[31] X. Li and H. Liu, "Statistical regression for efficient high-dimensional modeling of analog and mixed-signal performance variations," Design Automation Conference (DAC), Anaheim, CA, USA, pp. 38–43, 2008.
[32] C. Liao, J. Tao, H. Yu, et al., "Efficient hybrid performance modeling for analog circuits using hierarchical shrinkage priors," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 35, no. 12, pp. 2148–2152, 2016.
[33] T. McConaghy and G. Gielen, "Template-free symbolic performance modeling of analog circuits via canonical-form functions and genetic programming," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 28, no. 8, pp. 1162–1175, 2009.
[34] T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning, Berlin, Germany: Springer, 2003.
[35] G. Golub and C. van Loan, Matrix Computations, Baltimore, MD: Johns Hopkins University Press, 2012.
[36] C. Robert and G. Casella, Monte Carlo Statistical Methods, Berlin, Germany: Springer, 2005.
[37] J. Tropp and A. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory (TIT), vol. 53, no. 12, pp. 4655–4666, 2007.
[38] B. Efron, T. Hastie and I. Johnstone, "Least angle regression," The Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.
[39] F. Wang, M. Zaheer, X. Li, J. Plouchart and A.
Valdes-Garcia, "Co-learning Bayesian model fusion: efficient performance modeling of analog and mixed-signal circuits using side information," IEEE/ACM International Conference on Computer-Aided Design (ICCAD), Austin, TX, USA, pp. 575–582, 2015.
[40] Q. Huang, C. Fang, F. Yang, X. Zeng, D. Zhou and X. Li, "Efficient performance modeling via dual-prior Bayesian model fusion for analog and mixed-signal circuits," IEEE/ACM Design Automation Conference (DAC), Austin, TX, USA, 2016.
[41] F. Wang and X. Li, "Correlated Bayesian model fusion: efficient performance modeling of large-scale tunable analog/RF integrated circuits," Design Automation Conference (DAC), Austin, TX, USA, 2016.

Chapter 3
Machine learning

Olcay Taner Yıldız¹

3.1 Introduction

Today, with the advances in hardware technologies, it is possible to store, process, and output large amounts of data. With the increase in the size of data, explaining it, that is, extracting meaningful information from it, becomes a bottleneck. Machine learning, i.e., the science of extracting useful information from data, comes as an aid. Empowered with concepts from mathematics, statistics, and computer science, machine learning is arguably the solution for all of our information extraction problems.

Machine learning is about curve fitting. In regression problems, we try to fit a linear/nonlinear function to data. For example, in support vector machine regression, the current disposition of the data is not enough, so we transform our data by a kernel function to a high-dimensional space and fit the curve in that high-dimensional space [1]. In K-class classification problems, we try to fit a linear/nonlinear function to separate the classes. For example, in naïve Bayes, we assume that the data are Gaussian-distributed, and the separating function is easily calculated [2].

Machine learning is about optimization.
In learning via optimization, we define an error function (loss function) on the model and try to optimize it, i.e., try to find the optimal parameters of the model by techniques borrowed from the optimization literature. For example, in multivariate linear regression, the error function is the sum of squared errors; we take the partial derivatives of the error function with respect to the weights, set the derivatives to zero to get the linear equations, and solve the equations to extract the weights of the model. In neural-network-based classification, the error function is cross-entropy; we take the partial derivatives of the error function with respect to the weights of the network to get the update equations, and using those update equations we train the network [3].

Machine learning is about algorithms. In decision/regression trees, we need to write a recursive learning algorithm to classify the data arriving at each decision node, and we also need to write recursive code to generate the decision tree structure [4]. In Bayesian networks, we use graph algorithms to calculate conditional independences between variables, or to calculate marginal probabilities [5,6]. In hidden Markov models, we use the Viterbi algorithm, a clever application of dynamic programming, to calculate the most probable state sequence given an observation sequence [7].

Machine learning is about statistics. We usually assume Gaussian noise on the data, we assume a multivariate normal distribution on class covariance matrices in quadratic discriminant analysis, and we use cross-validation or bootstrapping to generate multiple training sets. We also use hypothesis testing to decide which classifier is better than another based on their classification performance on several test sets [8,9].

¹ Department of Computer Engineering, Işık University, Istanbul, Turkey

Machine learning is about models.
In decision trees, the data structure is a binary/L-ary tree depending on the type of the features and the decision function used [10]. In rule learning, the model is an ordered set of rules [11]. In Bayesian networks, hidden Markov models, and neural networks, the most complex models, i.e., graphs, are used. In naïve Bayes and linear discriminant analysis (LDA), we use a multivariate polynomial to represent the data.

Machine learning is about performance metrics. In classification, we use accuracy if we want a crude estimate of the performance of the classifier. If we need more details on pairwise classes, the confusion matrix comes in handy. If the dataset has two classes, more metrics follow: precision, recall, true positive rate, false positive rate, F-measure, etc. [12]. In regression, we use mean square error for a general estimate of the performance of the regressor. There are also other performance metrics, such as hinge loss for specific algorithms, i.e., support vector machines [13].

So what is machine learning about? Like defining an elephant, we need the combination of all of these topics to define machine learning: we start machine learning by assuming a certain model on the data, use algorithmic and/or optimization and/or statistical techniques to learn the parameters of that model (which is sometimes described as curve fitting), and use performance metrics to evaluate our model/algorithm/classifier.

This chapter is organized as follows: everything begins with data. We will talk about data representation in Section 3.2. Dimension reduction techniques are reviewed in Section 3.3. We discuss basic clustering algorithms in Section 3.4. Classification and regression algorithms are discussed in Section 3.5. Lastly, we conclude with performance assessment of algorithms in Section 3.6.

3.2 Data

In any machine-learning application, we start with data, i.e., measurements/calculations/observations made by the user.
The multivariate data are represented by a 2D matrix:

    X = \begin{bmatrix} x_1^{(1)} & x_2^{(1)} & \cdots & x_d^{(1)} & y^{(1)} \\ x_1^{(2)} & x_2^{(2)} & \cdots & x_d^{(2)} & y^{(2)} \\ \vdots & \vdots & & \vdots & \vdots \\ x_1^{(N)} & x_2^{(N)} & \cdots & x_d^{(N)} & y^{(N)} \end{bmatrix}

where d represents the number of features (variables, attributes, inputs) and N represents the number of instances (observations, examples) in the dataset. The instances are assumed independent and identically distributed. Each row in the matrix corresponds to a single instance, whereas each column in the matrix corresponds to a single feature. The variable y represents the output feature and takes (i) discrete values for classification and (ii) continuous values for regression problems. Datasets are usually grouped into three: sparse datasets, where d ≫ N (e.g., genome datasets, image datasets); big datasets, where N ≫ d (e.g., text datasets and geo datasets); and standard datasets. Each feature can be continuous (an integer, a real number) or discrete (taking L distinct values) depending on the application. As an example, let us say we have a dataset about N = 1,000 students:

    X = \begin{bmatrix} 1976 & male & 2.34 & amber \\ 1984 & female & 3.12 & brown \\ \vdots & \vdots & \vdots & \vdots \\ 2000 & female & 1.23 & blue \end{bmatrix}

where we have four input features, namely, birth year of the student (continuous), sex of the student (discrete with L = 2), GPA of the student (continuous), and eye color of the student (discrete with L = 4). See Figure 3.1 for the famous Iris dataset plotted with respect to the third and fourth features. Many machine-learning algorithms cannot process discrete features; nevertheless, discrete features can be converted to continuous features via 1-of-L encoding.

Figure 3.1 Iris dataset (feature 3 versus feature 4)

In 1-of-L encoding, each discrete feature is encoded by L extra features, where only one of the new features is 1 (corresponding to the value of that feature), and the remaining are 0.
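A minimal sketch of 1-of-L encoding for one discrete column follows; the function name and the explicit category ordering are our own, not from the chapter.

```python
def one_of_L_encode(values, categories):
    """Encode a discrete feature column by 1-of-L (one-hot) encoding.

    `categories` fixes the order of the L distinct values, so each
    input value maps to a length-L row with exactly one 1.
    """
    return [[1 if v == c else 0 for c in categories] for v in values]

# e.g. the sex column of the student dataset (L = 2):
sex_encoded = one_of_L_encode(["male", "female", "female"],
                              ["male", "female"])
```

Libraries such as scikit-learn and pandas provide equivalent encoders; the point here is only that each discrete value expands into L binary columns.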
As extra features are added, the size of the dataset grows and the application of some algorithms may become impractical. For instance, if we convert the previous dataset, we get the following matrix:

    X = \begin{bmatrix} 1976 & 1 & 0 & 2.34 & 1 & 0 & 0 & 0 \\ 1984 & 0 & 1 & 3.12 & 0 & 1 & 0 & 0 \\ \vdots & & & \vdots & & & & \\ 2000 & 0 & 1 & 1.23 & 0 & 0 & 0 & 1 \end{bmatrix}

Yet other machine-learning algorithms require abstraction of continuous data, whereby estimation of the mean vector:

    m = [\mu_1 \; \mu_2 \; \cdots \; \mu_d]

and the covariance matrix:

    \Sigma = \begin{bmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1d} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2d} \\ \vdots & & \ddots & \vdots \\ \sigma_{d1} & \sigma_{d2} & \cdots & \sigma_d^2 \end{bmatrix}

is required. In the covariance matrix, the diagonal terms are the variances and the off-diagonal terms are the covariances. If the data are normally distributed, we write X ~ N(m, Σ).

3.3 Dimension reduction

As a preprocessing step before classification or regression, we may introduce a dimension reduction step, whereby we reduce the number of dimensions of the dataset. Some of the reasons for dimension reduction are as follows: (i) in many learning algorithms the complexity of the algorithm depends on d, and if we reduce d, the training complexity of our algorithm decreases; (ii) the "Occam's Razor" principle: if we can explain the same data with a model having fewer parameters, we should choose that model; and (iii) if some input features can be removed, they are unnecessary, and we save the cost of extracting those features.

Dimension reduction algorithms can be grouped into two categories: feature-selection algorithms, where we select a subset of the original features (k features out of d features) [14,15], and feature-extraction algorithms, where we generate a new set of k features that are some linear/nonlinear combination of the original features.

3.3.1 Feature selection

In feature selection, in our case subset selection, our objective is to find the best subset of the original features, where the best is determined by the performance metric.
Since there are 2^d possible feature subsets, exhaustive search is impossible and we must resort to ingenious search algorithms. Depending on the trajectory of the number of selected features k, subset selection algorithms are categorized into three types: forward, backward, and floating, where k increases, decreases, and sometimes increases and sometimes decreases, respectively. All three algorithms are actually state-space search algorithms, where a state is a subset of features, and search corresponds to generating a new subset from the current best subset.

In forward selection, we start with the empty feature subset F = Ø. At each step, we produce a candidate subset list F_L, which includes the subsets obtained by adding one possible feature to F (at the first step, F_L = {1}, {2}, ..., {d}). The current best subset F is selected from the candidate subset list according to the performance metric. The algorithm continues by selecting a new F at each step and terminates when the performance metric does not improve.

In backward selection, we start with the original feature subset F = {1, 2, ..., d}. At each step, we again produce a candidate subset list, which includes the subsets obtained by removing one possible feature from F (F_L = {2, 3, ..., d}, {1, 3, ..., d}, ..., {1, 2, ..., d−1}). The current best subset F is selected from the candidate subset list according to the performance metric. The algorithm continues by selecting a new F at each step and terminates when the performance metric does not improve.

Floating selection is the combination of the previous two algorithms. At each step, when producing F_L, we take into account both adding and removing features [16]. Floating selection can start from either the empty or the original feature subset. The best feature-selection algorithm depends on the number of features k in the optimal feature subset.
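The forward-selection loop described above can be sketched in a few lines; `score` is a user-supplied performance metric on feature subsets (higher is better, and in practice it would run cross-validated training), and the greedy stopping rule matches the termination condition in the text.

```python
def forward_selection(d, score):
    """Greedy forward subset selection over features {0, ..., d-1}.

    score(subset) returns the performance metric of a feature subset
    (higher is better); the loop stops when no single-feature addition
    improves the current best score.
    """
    F = set()                      # start from the empty subset
    best = score(F)
    while True:
        # candidate list F_L: add one unused feature to F
        candidates = [F | {f} for f in range(d) if f not in F]
        if not candidates:
            return F               # all features already selected
        cand = max(candidates, key=score)
        if score(cand) <= best:    # no improvement: terminate
            return F
        F, best = cand, score(cand)
```

Backward selection is the mirror image (start from the full set and drop one feature per step), and floating selection interleaves both move types.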
If k is near 0, forward selection is more likely to find the optimal subset, whereas if k is near d, backward selection can be lucky. Although the time complexity of floating selection is higher than that of the other two, it can be very helpful if we do not know k.

3.3.2 Feature extraction

In feature extraction, we generate a totally new set of features from the original feature set. The feature-extraction algorithms map the data to a new space, where the mapping can be linear or nonlinear, and supervised or unsupervised (i.e., it will or will not use the output y). In this chapter, we will talk about principal component analysis and LDA as feature-extraction algorithms, where the aim is to find the projection matrix W to get the new dataset via the following equation:

    \tilde{X} = XW

Principal component analysis (PCA) is an unsupervised linear feature-extraction algorithm, which maximizes the variance along the projected features, where each projection vector w_i is orthogonal to the other projection vectors [2]. So, the problem reduces to an optimization problem:

    maximize    w_i^T \Sigma w_i
    subject to  \|w_i\| = 1, \; i = 1, \ldots, d; \quad w_i^T w_j = 0, \; i, j = 1, \ldots, d, \; i \ne j,

whereby the eigenvectors of the covariance matrix Σ are the solution to the optimization problem. The eigenvectors of Σ are sorted according to their corresponding eigenvalues λ, and the most important k eigenvectors are the ones having the largest k eigenvalues. Figure 3.2(a) shows the application of PCA on the Iris dataset. We can also control the amount of dimension reduction (the value of k) according to the proportion of variance explained:

    \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{d} \lambda_i}

LDA is a supervised linear feature-extraction algorithm which separates instances from different classes as much as possible and brings together instances from the same class as much as possible [2].
The between-class scatter matrix $S_B = \sum_{i=1}^{K} N_i (m_i - m)(m_i - m)^T$ and the within-class scatter matrix $S_W = \sum_{i=1}^{K} S_i$ of a $K$-class dataset quantify the separability between and within classes, respectively. If we write the problem as an optimization problem:

$$\max_{W} \; \frac{|W S_B W^T|}{|W S_W W^T|} \quad \text{subject to} \quad \|w_i\| = 1, \; i = 1, \ldots, d; \quad w_i^T w_j = 0, \; i, j = 1, \ldots, d, \; i \neq j$$

then the eigenvectors of the matrix $S_W^{-1} S_B$ are the solution to this optimization problem. In LDA, the maximum rank of $S_B$ is $K - 1$, therefore $k \leq K - 1$. Figure 3.2(b) shows the application of LDA on the Iris dataset.

Figure 3.2 Feature extraction on the Iris dataset: (a) PCA and (b) LDA

3.4 Clustering

Before delving into the supervised algorithms, we briefly introduce one of the most important topics in unsupervised machine learning, namely clustering. In clustering, the aim is to divide $X$ into a set of groups (clusters), where the groups are disjoint and ideally represent meaningful divisions. After clustering, the groups can be named by experts in the respective domains, which further simplifies the descriptive model of the data. In this chapter, we discuss K-means clustering and hierarchical clustering [17,18].

3.4.1 K-Means clustering

In K-means clustering, we are interested in splitting the data $X$ into $K$ disjoint groups $X_i$, where the group means $m_i$ are the representatives of the groups. In this problem, there are two sets of unknowns. The first is the group membership matrix $G$, where $g_i^{(t)}$ is 1 if the instance $x^{(t)}$ belongs to group $i$, and 0 otherwise. The second is the set of group means $m_i$. The important and interesting point is that these two sets of unknowns are bound to each other: if we know one set, we can optimally determine the other based on minimizing the reconstruction error.
$$E(\{m_i\}, G \mid X) = \sum_t \sum_i g_i^{(t)} \, \|x^{(t)} - m_i\|^2$$

If we know $G$, we can calculate the group means as:

$$m_i = \frac{\sum_t g_i^{(t)} x^{(t)}}{\sum_t g_i^{(t)}} \quad (3.3)$$

On the other hand, if we know the $m_i$, we can extract the group membership of $x^{(t)}$, because the Euclidean distance of an instance $x^{(t)}$ to the group means determines its membership:

$$g_i^{(t)} = \begin{cases} 1 & \text{if } \|x^{(t)} - m_i\| = \min_j \|x^{(t)} - m_j\| \\ 0 & \text{otherwise} \end{cases} \quad (3.4)$$

To solve for these two sets of unknowns, we use a special version of the expectation-maximization optimization technique and proceed iteratively. First, we initialize the mean vectors $m_i$ randomly. Then, at each iteration, we first use (3.4) to calculate the group membership matrix $G$; once we have $G$, we can easily calculate the group means using (3.3). These two steps are repeated (calculate $G$ → calculate $m_i$ → calculate $G$ → calculate $m_i$ → etc.) until the group membership matrix $G$ stabilizes.

The most important disadvantage of this iterative procedure is its initialization. Since the optimization technique is a local search procedure, the final group means depend heavily on the initial group means. Techniques such as randomly selecting $K$ instances as the initial mean vectors have been proposed in the literature to overcome this drawback [2].

K-means clustering can be generalized to soft clustering by:
● Assigning a probability value to $g_i^{(t)}$. Again $\sum_i g_i^{(t)} = 1$, but this way an instance $x^{(t)}$ can belong to more than one cluster, and the cluster-membership decision becomes soft (a probability between 0 and 1) instead of hard (only 0 or 1).
● Assuming the groups are Gaussian distributed with group covariances $S_i$, i.e., $X_i \sim N(m_i, S_i)$.
● Optimizing the log-likelihood instead of the reconstruction error.

The expectation-maximization procedure appears here again [19,20], and with a similar iterative procedure we optimize the parameters $m_i$, $S_i$, and $G$.
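The alternating procedure above can be sketched as follows, a minimal hard K-means that initializes the means by randomly picking $K$ instances (one of the initialization techniques mentioned in the text):

```python
import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    """Hard K-means: alternate (3.4) assignments and (3.3) mean updates."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), K, replace=False)].astype(float)
    for _ in range(n_iter):
        # Step (3.4): assign each instance to its nearest mean.
        labels = np.argmin(
            ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1), axis=1)
        # Step (3.3): recompute each mean from its members
        # (keep the old mean if a cluster ends up empty).
        new_means = np.array([
            X[labels == i].mean(axis=0) if np.any(labels == i) else means[i]
            for i in range(K)])
        if np.allclose(new_means, means):    # memberships have stabilized
            break
        means = new_means
    return labels, means
```

As the text notes, the result depends on the random initialization; in practice the algorithm is run from several seeds and the run with the lowest reconstruction error is kept.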
3.4.2 Hierarchical clustering

Instead of clustering instances with respect to cluster means, we can cluster instances with respect to their similarities to other instances. In other words, instance-to-instance distance matters, not the distance of an instance to a cluster mean. The more distant two instances are, the more dissimilar they are.

Hierarchical clustering is a type of agglomerative clustering, where we start with $N$ clusters (each instance is its own cluster) and merge clusters one by one until only one cluster remains. At each iteration, we select the two closest clusters and merge them into a single cluster. But how do we decide on the closest clusters — in other words, how do we compute the distance between clusters, when until now we have only talked about distances between instances?

In single-link clustering, the distance between two clusters is the smallest distance over all possible pairs of instances from the two groups. Single-link clustering thus has an optimistic point of view: if there is even a single pair of instances that are very near, single-link clustering considers the two clusters very close.

In complete-link clustering, the distance between two clusters is the largest distance over all possible pairs of instances from the two groups. Complete-link clustering has a pessimistic point of view: if there is even a single pair of instances that are very far apart, complete-link clustering considers the two clusters very distant.

In average-link clustering, the distance between two clusters is the average of all distances between all possible pairs of instances from the two groups. In centroid-link clustering, the distance between two clusters is the distance between the cluster centroids.

3.5 Supervised learning algorithms

In this section, we briefly review supervised learning algorithms used in machine learning. Supervised algorithms can be categorized in multiple ways.
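Returning briefly to the linkage criteria above before moving on, they can be sketched as plain pairwise-distance reductions (a minimal illustration, not the full agglomerative loop):

```python
import numpy as np

def single_link(A, B):
    """Smallest pairwise distance between clusters A and B (optimistic)."""
    return min(np.linalg.norm(a - b) for a in A for b in B)

def complete_link(A, B):
    """Largest pairwise distance between clusters A and B (pessimistic)."""
    return max(np.linalg.norm(a - b) for a in A for b in B)

def average_link(A, B):
    """Average over all pairwise distances between A and B."""
    return float(np.mean([np.linalg.norm(a - b) for a in A for b in B]))
```

At each agglomerative step, the pair of clusters minimizing the chosen linkage distance is merged.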
First, supervised algorithms can be parametric vs nonparametric. A parametric algorithm makes certain assumptions about the underlying distribution of the data (usually called its bias) and fits the model based on those assumptions. In the nonparametric case, the situation is reversed: the algorithm makes no assumptions about the data, and the model is effectively the data itself.

Second, supervised algorithms can be discriminative vs generative. A discriminative algorithm aims to find a discriminant that separates the classes as well as possible; therefore, only the instances near the discriminant are important. Generative algorithms, on the other hand, make distributional assumptions about the data and use those assumptions to produce a discriminator. In this case, since all the data are needed to satisfy the assumptions, all instances are equally important.

Third, supervised algorithms can be for classification vs regression. In classification, the output feature $y$ takes discrete values; in regression, it takes continuous values. For classification, counting the mismatches in the output is enough, so the loss function is the misclassification error (the complement of accuracy); for regression, there is no exact mismatch but a real-valued discrepancy in the output, so the loss function is the squared error.

3.5.1 Simplest algorithm: prior

Before introducing more sophisticated supervised algorithms, we must first discuss the simplest algorithm of all, the base case, the so-called prior. According to Occam's razor, if we can explain the same data with a model having fewer parameters, we should choose that model. Following that principle, the simplest model of the data is its prior distribution.
For classification, the prior distribution corresponds to the prior probabilities of the classes, so the prior classifier labels all test data with the most probable class label. For regression, the prior distribution corresponds to the mean value of the output vector $y$, and the prior regressor assigns that mean value to all test data. If we propose a new classification/regression algorithm on a specific dataset, we must ensure that it beats the prior. Note also that, according to the no-free-lunch theorem [21], no algorithm can beat every other algorithm on all datasets; there can be datasets on which the prior is superior.

3.5.2 A simple but effective algorithm: nearest neighbor

The most commonly used representative of nonparametric algorithms is the nearest neighbor. Its assumption is simple: the world does not change much, i.e., similar things behave similarly. Therefore, we only need to store the dataset itself and make the decision on a test instance based on its similarity to the instances in the dataset. In other words, the class label (or regression value) of an instance is strongly influenced by its nearby instances.

More formally, in k-nearest neighbor classification, the class label of a test instance is obtained by taking the majority vote of the $k$ nearest instances in the training set [22]. Ties are broken arbitrarily; $k$ is usually taken as an odd number to mostly avoid ties. In k-nearest neighbor regression, the output value of a test instance is the average of the output values of the $k$ nearest examples in the training set. Euclidean distance is usually used to calculate the distance between two instances.
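The majority-vote rule above can be sketched as follows (Euclidean distance, as in the text; tie-breaking is left to `Counter`'s arbitrary ordering):

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    """k-nearest-neighbor classification by majority vote."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]             # majority label
```

The regression variant replaces the vote with `np.mean(y_train[nearest])`.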
In the machine-learning literature, nearest-neighbor techniques are also called instance-based or memory-based learning algorithms, because they simply store the training set in a table and look up the output values of the nearest $k$ instances [23]. They are also called lazy learning algorithms, since they do nothing in the training phase and start processing only in the testing phase. Although the training complexity of nearest neighbor is as negligible as that of the prior, its test complexity is among the largest of all learning algorithms ($O(N)$ for a single test instance). Despite this drawback, thanks to its simplicity, k-nearest neighbor is one of the most widely used and most successful learning algorithms in practice. Theoretically, it has been shown that for large datasets ($N \to \infty$), the risk of 1-nearest neighbor is never worse than twice the best achievable risk [24].

3.5.3 Parametric methods: five shades of complexity

Starting from Bayes' rule, we have:

$$p(C_i \mid x) = \frac{p(x \mid C_i)\, p(C_i)}{\sum_{j=1}^{K} p(x \mid C_j)\, p(C_j)}$$

where $p(C_i)$, $p(C_i \mid x)$, and $p(x \mid C_i)$ represent the prior probability, the posterior probability, and the class distribution of class $C_i$, respectively. As we can see, if all class distributions are equal, choosing the maximum posterior probability reduces to choosing the maximum prior probability, i.e., the prior algorithm. Since the denominator is the same for all classes, we can simplify the posterior and obtain the discriminant function $f_i(x) = \log p(x \mid C_i) + \log p(C_i)$. If the class distributions are assumed to be Gaussian, $p(x \mid C_i) \sim N(m_i, S_i)$, and the discriminant function reduces to:

$$f_i(x) = -\frac{d}{2}\log 2\pi - \frac{1}{2}\log |S_i| - \frac{1}{2}(x - m_i)^T S_i^{-1} (x - m_i) + \log p(C_i)$$

The first term is the same for all classes, and dropping it yields our first parametric classifier, namely the quadratic discriminant.
$$f_i(x) = -\frac{1}{2}\log |S_i| - \frac{1}{2}\left(x^T S_i^{-1} x - 2 x^T S_i^{-1} m_i + m_i^T S_i^{-1} m_i\right) + \log p(C_i)$$

The number of parameters, i.e., the model complexity, is $Kd + Kd(d+1)/2$, where the first part is for the class means and the second part for the class covariance matrices.

For complexity reduction, we can assume a single shared covariance matrix $S$ for all classes. In this case, the quadratic term is the same for all classes, and dropping it reduces $f_i(x)$ to our second classifier, namely the linear discriminant:

$$f_i(x) = x^T S^{-1} m_i - \frac{1}{2}\, m_i^T S^{-1} m_i + \log p(C_i)$$

The model complexity is $Kd + d(d+1)/2$, where the first part is for the class means and the second part for the shared covariance matrix.

For further reduction, we assume all off-diagonal entries of the shared covariance matrix are zero. In this case, the matrix products simplify to vector operations, and we obtain the naïve Bayes classifier. The model complexity is $Kd + d$, where the first part is for the class means and the second part for the diagonal of the shared covariance matrix. As the fifth algorithm, we reduce further by taking the priors equal and a single covariance value $s$; we then obtain the nearest-mean classifier, whose model complexity is only $Kd + 1$.

Depending on the application, we can choose our complexity level (as a bias) and report the test error. Yet another possibility is to apply all the algorithms and make a selection considering both test error and model complexity [25].

3.5.4 Decision trees

Decision trees differ significantly from the previous models we have discussed. First, they have a tree-based structure where each non-leaf node $m$ implements a decision function $f_m(x)$ and each leaf node corresponds to a class decision. Second, they are among the most interpretable learning algorithms available.
When written as a set of IF-THEN rules, a decision tree can be transformed into a human-readable format, which can then be modified and/or validated by human experts in their corresponding domains.

Decision trees are categorized by the type of the decision function. In univariate decision trees [4] (Figure 3.3(a)), the decision function $f_m(x)$ uses only one feature $x_i$, and depending on the type of that feature, each non-leaf node has two or $L$ children. If $x_i$ is a continuous feature, the decision function has the form $x_i < \theta$, and each non-leaf node has two children, where the instances satisfying (not satisfying) the decision function follow the left (right) child, respectively. If $x_i$ is a discrete feature, the decision function takes the forms $x_i = v_1, x_i = v_2, \ldots, x_i = v_L$, where $v_1, v_2, \ldots, v_L$ are the possible values of the discrete feature $x_i$.

Figure 3.3 Decision trees: (a) univariate and (b) multivariate

In multivariate decision trees (Figure 3.3(b)), the decision function $f_m(x)$ uses all features of $x$ and each non-leaf node has two children. Depending on the type of the multivariate function, we have (i) the multivariate linear tree [26,27], where the decision function has the form $w^T x < \theta$; (ii) the multivariate nonlinear tree [28], where the decision function is a nonlinear function of $x$; and (iii) the omnivariate tree [10], where the decision function can be any of the previous three.

Tree-induction algorithms are recursive: at each decision node $m$, starting from the root node (with the complete training set), we look for the best decision function $f_m(x)$. When the best decision function is found, the training data are split according to $f_m(x)$, and learning continues recursively with the children of $m$.
We continue splitting until there is no need to split further, i.e., the instances at a node are all from the same class. For decision trees, the loss function, i.e., the criterion for comparing candidate decision functions, is quantified by an impurity measure. Several impurity measures have been proposed in the literature, such as the Gini index [29] and the entropy:

$$\text{Entropy} = -\sum_{i=1}^{K} p(C_i) \log p(C_i)$$

where $p(C_i)$ is the probability of class $C_i$ at node $m$ [30].

3.5.5 Kernel machines

Kernel machines, in other words support vector machines [1,31], are maximum-margin methods, where the model is written as a weighted sum of support vectors. Kernel machines are discriminative methods: they are only interested in the instances near the class boundaries in classification, or near the regressor in regression. To obtain the optimal separating hyperplane, kernel machines maximize the separability, or margin, and write the problem as a quadratic optimization problem whose solution gives us the support vectors. Kernel functions broaden our notion of an instance: we can define kernel functions not only on vector instances but also on networks, graphs, words, sentences, or trees. This great range of applicability has made kernel functions popular across many domains, including natural language processing, bioinformatics [32], and robotics.

3.5.5.1 Separable case: optimal separating hyperplane

For a linearly separable two-class problem (see Figure 3.4(a)), we define the output $y^{(t)}$ as $+1/-1$, and the aim is to find the optimal separating hyperplane $w$ and bias $w_0$ satisfying:

$$y^{(t)} (w^T x^{(t)} + w_0) \geq 1$$

Figure 3.4 For a two-class problem, the separating hyperplane and support vectors: (a) separable case and (b) nonseparable case

For the instances of the positive class ($y^{(t)} = +1$), we want to see them on the positive side of the hyperplane.
Similarly, for the instances of the negative class ($y^{(t)} = -1$), we want to see them on the negative side of the hyperplane. Note that we also want the instances some distance away from the hyperplane — the margin — which we want to maximize for the best generalization. We write margin maximization as an optimization problem:

$$\min_{w} \; \frac{\|w\|^2}{2} \quad \text{subject to} \quad y^{(t)} (w^T x^{(t)} + w_0) \geq 1, \quad t = 1, \ldots, N \quad (3.11)$$

This standard quadratic optimization problem can be converted to its dual formulation using Lagrange multipliers $\alpha^{(t)}$:

$$L_p = \frac{\|w\|^2}{2} - \sum_t \alpha^{(t)} \left[ y^{(t)} (w^T x^{(t)} + w_0) - 1 \right]$$

With the Karush–Kuhn–Tucker conditions $\partial L_p / \partial w = 0$ and $\partial L_p / \partial w_0 = 0$, we get the following dual formulation:

$$\max_{\alpha^{(t)}} \; \sum_t \alpha^{(t)} - \frac{1}{2} \sum_t \sum_s \alpha^{(t)} \alpha^{(s)} y^{(t)} y^{(s)} (x^{(t)})^T x^{(s)} \quad \text{subject to} \quad \sum_t \alpha^{(t)} y^{(t)} = 0, \quad \alpha^{(t)} \geq 0, \quad t = 1, \ldots, N$$

Once we solve this dual quadratic optimization problem, we obtain a set of instances for which $\alpha^{(t)} > 0$. These instances are called support vectors and they lie on the margin.

3.5.5.2 Nonseparable case: soft margin hyperplane

For a linearly nonseparable two-class problem (see Figure 3.4(b)), we look for the hyperplane that incurs the minimum error. Since some of the constraints will not be satisfied, we add slack variables $\xi^{(t)} \geq 0$, which store the deviation from the margin. Now the aim is to find the optimal $w$ and bias $w_0$ satisfying:

$$y^{(t)} (w^T x^{(t)} + w_0) \geq 1 - \xi^{(t)}$$

Since each slack variable contributes to the error, the total error is $\sum_t \xi^{(t)}$, and including this error as a weighted penalty in the objective function transforms the margin-maximization problem into:

$$\min_{w} \; \frac{\|w\|^2}{2} + C \sum_t \xi^{(t)} \quad \text{subject to} \quad y^{(t)} (w^T x^{(t)} + w_0) \geq 1 - \xi^{(t)}, \quad \xi^{(t)} \geq 0, \quad t = 1, \ldots, N
Again, we can convert this problem into its dual by introducing Lagrange multipliers $\alpha^{(t)}$ and $\beta^{(t)}$, and obtain:

$$L_p = \frac{\|w\|^2}{2} + C \sum_t \xi^{(t)} - \sum_t \alpha^{(t)} \left[ y^{(t)} (w^T x^{(t)} + w_0) - 1 + \xi^{(t)} \right] - \sum_t \beta^{(t)} \xi^{(t)} \quad (3.16)$$

With the Karush–Kuhn–Tucker conditions solved, we get the following dual formulation:

$$\max_{\alpha^{(t)}} \; \sum_t \alpha^{(t)} - \frac{1}{2} \sum_t \sum_s \alpha^{(t)} \alpha^{(s)} y^{(t)} y^{(s)} (x^{(t)})^T x^{(s)} \quad \text{subject to} \quad \sum_t \alpha^{(t)} y^{(t)} = 0, \quad 0 \leq \alpha^{(t)} \leq C, \quad t = 1, \ldots, N \quad (3.17)$$

Again, once we solve the quadratic problem, the instances with $\alpha^{(t)} > 0$ are the support vectors. $C$ is the regularization hyperparameter of the support vector machine and is usually tuned between $2^{-10}$ and $2^{10}$ in multiplicative steps of 2. The upper bound on the training time complexity is $O(N^3)$.

When there are more than two classes ($K > 2$), we resort to multiclass-to-binary conversion techniques. In the one-vs-all strategy, we define $K$ subproblems, and in each subproblem we try to separate class $i$ from all other classes. In the one-vs-one strategy, we define $K(K-1)/2$ subproblems, and in each subproblem we try to separate class $i$ from class $j$.

3.5.5.3 Kernel trick

If we had stopped here, support vector machines would not have become so popular. To solve a nonlinear problem, we do not fit a nonlinear model; instead, we map the original input space into a high-dimensional space and solve the problem linearly in that space. So instead of linearly combining the instances, we linearly combine the projected instances. Again, the aim is to find the optimal $w$ satisfying:

$$y^{(t)} w^T \Phi(x^{(t)}) \geq 1 - \xi^{(t)}$$

where $\Phi(x^{(t)})$ projects the instance $x^{(t)}$ into a high-dimensional space. Including the error as a weighted penalty in the objective function transforms the margin-maximization problem into:

$$\min_{w} \; \frac{\|w\|^2}{2} + C \sum_t \xi^{(t)} \quad \text{subject to} \quad y^{(t)} w^T \Phi(x^{(t)}) \geq 1 - \xi^{(t)}, \quad \xi^{(t)} \geq 0, \quad t = 1, \ldots, N
Yet again we can convert this problem into its dual by introducing Lagrange multipliers $\alpha^{(t)}$ and $\beta^{(t)}$, and obtain:

$$L_p = \frac{\|w\|^2}{2} + C \sum_t \xi^{(t)} - \sum_t \alpha^{(t)} \left[ y^{(t)} w^T \Phi(x^{(t)}) - 1 + \xi^{(t)} \right] - \sum_t \beta^{(t)} \xi^{(t)} \quad (3.20)$$

With the Karush–Kuhn–Tucker conditions solved, and replacing the inner product $\Phi(x^{(t)})^T \Phi(x^{(s)})$ with the kernel function $K(x^{(t)}, x^{(s)})$, we get the following dual formulation:

$$\max_{\alpha^{(t)}} \; \sum_t \alpha^{(t)} - \frac{1}{2} \sum_t \sum_s \alpha^{(t)} \alpha^{(s)} y^{(t)} y^{(s)} K(x^{(t)}, x^{(s)}) \quad \text{subject to} \quad \sum_t \alpha^{(t)} y^{(t)} = 0, \quad 0 \leq \alpha^{(t)} \leq C, \quad t = 1, \ldots, N$$

So instead of mapping two instances into a high-dimensional space and applying the dot product there, we can apply the kernel function directly in the original space. This is the main idea behind the kernel trick. Kernel functions commonly used in the literature are:

● Polynomial kernel of degree $n$: $K(x^{(t)}, x^{(s)}) = (x^{(t)} \cdot x^{(s)} + 1)^n$
● Radial basis function: $K(x^{(t)}, x^{(s)}) = \exp\left(-\frac{\|x^{(t)} - x^{(s)}\|^2}{2 s^2}\right)$
● Sigmoidal function: $K(x^{(t)}, x^{(s)}) = \tanh(2\, x^{(t)} \cdot x^{(s)} + 1)$

3.5.6 Neural networks

Artificial neural networks (ANN) take their inspiration from the brain. The brain consists of billions of neurons, and these neurons are interconnected and work in parallel, which makes the brain a powerful computing machine. Each neuron is connected through synapses to thousands of other neurons, and the firing of a neuron depends on those synapses. Research on ANN started with the invention of the perceptron [33] but came to its first halt with the perceptron's limitation on the XOR problem [34]. After 15 years of standby, the resurrection of ANN came with the backpropagation paper [35].

3.5.6.1 Neurons (units)

There are three types of neurons (units) in an ANN. Each unit except the input units takes an input and calculates an output. Input units represent a single input feature $x_i$ or the bias $+1$. Hidden units calculate an intermediate output from their inputs.
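The kernel functions listed above can be sketched directly, working in the original input space as the kernel trick prescribes:

```python
import numpy as np

def poly_kernel(x, z, n=2):
    """Polynomial kernel of degree n: (x . z + 1)^n."""
    return (np.dot(x, z) + 1) ** n

def rbf_kernel(x, z, s=1.0):
    """Radial basis function kernel with width s."""
    return np.exp(-np.dot(x - z, x - z) / (2 * s ** 2))
```

Either function can be dropped into the dual objective above in place of the plain inner product $(x^{(t)})^T x^{(s)}$.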
They first combine their inputs linearly as $w^T x$ and then use a nonlinear activation function to map that linear combination to a nonlinear space. Well-known activation functions are the sigmoid $z(x) = 1/(1 + e^{-x})$, the hyperbolic tangent $z(x) = (e^x - e^{-x})/(e^x + e^{-x})$, the inverse tangent $z(x) = \tan^{-1}(x)$, and the rectified linear unit $z(x) = \max(0, x)$.

Output units calculate the output of the ANN. For regression problems, the output unit simply calculates a linear combination of its inputs (see Figure 3.5(a) and (c)). For two-class classification problems, the output unit uses the sigmoid function $1/(1 + e^{-x})$ to map the output to a probability value (see Figure 3.5(a) and (c)). For $K$-class classification problems (see Figure 3.5(b) and (d)), there are $K$ output units, and each output unit $i$ uses the softmax function $e^{o_i} / \sum_{j=1}^{K} e^{o_j}$ to map its output to a probability value, so that the outputs of all units sum to 1.

3.5.6.2 Models and forward-propagation

In this chapter, we are interested in four types of neural network models (see Figure 3.5): (i) the perceptron for regression and two-class classification, (ii) the perceptron for $K$-class classification, (iii) the multilayer perceptron for regression and two-class classification, and (iv) the multilayer perceptron for $K$-class classification.

In the linear perceptron, or simply perceptron, there are two layers: (i) an input layer of $d + 1$ input units (including the bias unit) and (ii) an output layer of one output unit for regression ($K$ output units for $K$-class classification). The weights are represented by $W$, where $w_{ij}$ is the weight of the connection between output unit $i$ and input unit $j$.

Forward-propagation in the linear perceptron calculates the values of the output units. For regression, the output is $o = \sum_{j=0}^{d} w_{0j} x_j$. For two-class classification, the output is $o = 1/(1 + e^{-z})$, where $z = \sum_{j=0}^{d} w_{0j} x_j$. For $K$-class classification, output unit $i$ is $o_i = e^{z_i} / \sum_{k=1}^{K} e^{z_k}$, where $z_i = \sum_{j=0}^{d} w_{ij} x_j$.
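The three perceptron forward-propagation cases above can be sketched in a few lines (the `task` parameter is an illustrative switch, not part of the text's notation):

```python
import numpy as np

def perceptron_forward(W, x, task="regression"):
    """Forward-propagation in the linear perceptron.

    W: (K, d+1) weight matrix; x: input of length d (bias prepended here).
    """
    x = np.concatenate(([1.0], x))           # bias input x0 = +1
    z = W @ x                                # linear combinations z_i
    if task == "regression":
        return z[0]                          # o = sum_j w0j xj
    if task == "two-class":
        return 1.0 / (1.0 + np.exp(-z[0]))   # sigmoid output
    return np.exp(z) / np.exp(z).sum()       # softmax over K outputs
```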
In the multilayer perceptron, there are three layers: (i) an input layer of $d + 1$ input units (including the bias unit), (ii) a hidden layer of $H$ hidden units plus one bias unit, and (iii) an output layer of one output unit for regression ($K$ output units for $K$-class classification).

Figure 3.5 Neural network models: (a) perceptron for regression and two-class classification, (b) perceptron for K-class classification, (c) multilayer perceptron for regression and two-class classification, (d) multilayer perceptron for K-class classification

The weights between the input and hidden units are represented by $W$, where $w_{ij}$ is the weight of the connection between hidden unit $i$ and input unit $j$. The weights between the hidden and output units are represented by $V$, where $v_{ij}$ is the weight of the connection between output unit $i$ and hidden unit $j$.

Forward-propagation in the multilayer perceptron calculates the values of the hidden and output units. For all models, if the activation function is the sigmoid, the value of hidden unit $i$ is $h_i = 1/(1 + e^{-g_i})$, where $g_i = \sum_{j=0}^{d} w_{ij} x_j$. For regression, the output is $o = \sum_{j=0}^{H} v_{0j} h_j$. For two-class classification, the output is $o = 1/(1 + e^{-z})$, where $z = \sum_{j=0}^{H} v_{0j} h_j$. For $K$-class classification, output unit $i$ is $o_i = e^{z_i} / \sum_{k=1}^{K} e^{z_k}$, where $z_i = \sum_{j=0}^{H} v_{ij} h_j$.

3.5.6.3 Backward-propagation

Before delving into learning in ANN, we need to clarify two notions: what is to be learned and what is to be optimized.
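The multilayer forward pass above (sigmoid hidden units, linear output) can be sketched for the regression case:

```python
import numpy as np

def mlp_forward(W, V, x):
    """Forward-propagation in a multilayer perceptron for regression.

    W: (H, d+1) input-to-hidden weights; V: (1, H+1) hidden-to-output
    weights; sigmoid hidden units, linear output unit.
    """
    x = np.concatenate(([1.0], x))           # bias input x0 = +1
    g = W @ x                                # hidden pre-activations g_i
    h = 1.0 / (1.0 + np.exp(-g))             # sigmoid hidden units h_i
    h = np.concatenate(([1.0], h))           # bias hidden unit h0 = +1
    return (V @ h)[0]                        # linear output o
```

The classification variants only change the output stage, applying the sigmoid or softmax exactly as in the perceptron sketch earlier.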
In regression, we optimize the mean squared error:

$$E = (y^{(t)} - o^{(t)})^2$$

in two-class classification, we optimize the cross-entropy:

$$E = -y^{(t)} \log o^{(t)} - (1 - y^{(t)}) \log (1 - o^{(t)})$$

and in $K$-class classification, we again optimize the cross-entropy, but calculated over $K$ classes:

$$E = -\sum_{k=1}^{K} y_k \log o_k$$

For all of these networks, the aim is to learn the parameters of the network, which are simply the weight matrix $W$ (and $V$ for the multilayer perceptron) [3]. Since the functions in an ANN are differentiable but not solvable in closed form, one resorts to gradient-descent-style optimization. In gradient descent, we take the partial derivatives of the function to be optimized with respect to the parameters to be learned ($\partial E / \partial W$, $\partial E / \partial V$) and use those partial derivatives to obtain the update rules.

For regression in the linear perceptron, the update rule is:

$$\Delta w_{0j} = -\eta \frac{\partial E}{\partial w_{0j}} = -\eta \frac{\partial E}{\partial o} \frac{\partial o}{\partial w_{0j}} = \eta\, (y^{(t)} - o^{(t)})\, x_j$$

For two-class classification in the linear perceptron, the update rule is:

$$\Delta w_{0j} = -\eta \frac{\partial E}{\partial o} \frac{\partial o}{\partial z} \frac{\partial z}{\partial w_{0j}} = \eta\, (y^{(t)} - o^{(t)})\, x_j$$

For $K$-class classification in the linear perceptron, the update rule is:

$$\Delta w_{ij} = -\eta \frac{\partial E}{\partial o_i} \frac{\partial o_i}{\partial z_i} \frac{\partial z_i}{\partial w_{ij}} = \eta\, (y_i^{(t)} - o_i^{(t)})\, x_j$$

For regression in the multilayer perceptron, the update rules are:

$$\Delta v_{0j} = -\eta \frac{\partial E}{\partial o} \frac{\partial o}{\partial v_{0j}} = \eta\, (y^{(t)} - o^{(t)})\, h_j$$

$$\Delta w_{ij} = -\eta \frac{\partial E}{\partial o} \frac{\partial o}{\partial h_i} \frac{\partial h_i}{\partial g_i} \frac{\partial g_i}{\partial w_{ij}} = \eta\, (y^{(t)} - o^{(t)})\, v_{0i}\, h_i (1 - h_i)\, x_j$$

For two-class classification in the multilayer perceptron, the update rules are:

$$\Delta v_{0j} = -\eta \frac{\partial E}{\partial o} \frac{\partial o}{\partial z} \frac{\partial z}{\partial v_{0j}} = \eta\, (y^{(t)} - o^{(t)})\, h_j$$

$$\Delta w_{ij} = -\eta \frac{\partial E}{\partial o} \frac{\partial o}{\partial z} \frac{\partial z}{\partial h_i} \frac{\partial h_i}{\partial g_i} \frac{\partial g_i}{\partial w_{ij}} = \eta\, (y^{(t)} - o^{(t)})\, v_{0i}\, h_i (1 - h_i)\, x_j$$

For $K$-class classification in the multilayer perceptron, the update rules are:

$$\Delta v_{ij} = -\eta \frac{\partial E}{\partial o_i} \frac{\partial o_i}{\partial z_i} \frac{\partial z_i}{\partial v_{ij}} = \eta\, (y_i^{(t)} - o_i^{(t)})\, h_j$$
$$\Delta w_{ij} = -\eta \sum_{k} \frac{\partial E}{\partial o_k} \frac{\partial o_k}{\partial z_k} \frac{\partial z_k}{\partial h_i} \frac{\partial h_i}{\partial g_i} \frac{\partial g_i}{\partial w_{ij}} = \eta \left[ \sum_{k=1}^{K} (y_k^{(t)} - o_k^{(t)})\, v_{ki} \right] h_i (1 - h_i)\, x_j$$

3.6 Performance assessment and comparison of algorithms

Machine-learning experiments are usually done to assess the performance of learning algorithms. If we have more than one candidate algorithm to experiment with, we face a second objective: comparing the performance of the algorithms.

The first thing we need to know is that we cannot compare algorithms based on the training error. Training errors are optimistic and overly biased: the algorithm tries its best to learn the dataset and is therefore highly prone to recognizing (remembering) instances instead of learning the relationships among them. This is called overfitting in the machine-learning literature and is the reverse of underfitting, where we have not learned enough. To detect overfitting, we need a dataset separate from the training set on which to compare learning algorithms; this is the test set. We also need a third set, the validation set, on which we tune the hyperparameters of the algorithm. For example, in k-nearest neighbor, $k$ is a hyperparameter, and training with different values of $k$ produces different errors; to finalize the learner before evaluating it on the test set, we need a separate set to differentiate between the different values of $k$.

Another important factor in learning is the impact of chance. Maybe there were mislabeled instances in the training/validation/test set, or outliers in the given training/validation pair, or noise while the features were being obtained. Yet another possibility is the factor of chance in the training of the algorithm itself: many learning algorithms (e.g., neural networks, K-means clustering) use iterative optimization starting from a random initial solution, and the initial solution can have a strong impact on the end result.
So, whatever the reason, training and validating on a single training/validation pair is not healthy. We need to run our learning algorithm(s) on multiple sets, so we need to resample the dataset to obtain multiple training and validation sets.

Last but not least, remember that error is not the only criterion on which we compare algorithms. There are other criteria in real life that may deserve even more of our attention, such as training time and space complexity, testing time and space complexity, interpretability, and the risks when errors are generalized to other loss functions [36].

3.6.1 Sensitivity analysis

The aim of sensitivity analysis is to find the parameter, or set of parameters, with the greatest impact on the output of the model/algorithm [37]. It provides important insight into the model, whereby the most influential parameters are determined. In other words, sensitivity analysis helps the experiment designer understand the input–output relationship, determines how uncertainty in the parameters affects the actual output of the system, and guides the experiment designer in future experiment designs. There are two important types of sensitivity analysis: (i) local and (ii) global.

3.6.1.1 Local sensitivity analysis

In local sensitivity analysis, one evaluates the change in the model output with respect to a change in a single parameter. Only small variations of one parameter are applied, and the effect of this simple variation is captured by local sensitivity indices, usually calculated as the partial derivatives of the model output with respect to that parameter.
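A minimal sketch of such a local index, approximating the partial derivative by a forward finite difference (the `model` callable and step size `eps` are illustrative assumptions, not prescribed by the text):

```python
def local_sensitivity(model, params, i, eps=1e-6):
    """Local sensitivity index of parameter i: a finite-difference
    approximation of the partial derivative of the model output.

    model: callable mapping a list of parameter values to a scalar output.
    """
    perturbed = list(params)
    perturbed[i] += eps                      # vary only parameter i
    return (model(perturbed) - model(params)) / eps
```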
The main limitation of local sensitivity analysis is that it evaluates the effect of one parameter at a time; it therefore allows neither the evaluation of simultaneous changes in multiple parameters nor the calculation of interactions between parameters.

3.6.1.2 Global sensitivity analysis

In global sensitivity analysis, all parameters of the system are varied at the same time over the whole parameter space, which enables us to calculate the respective contribution of each parameter as well as the interactions between parameters; these interactions also contribute to the model output variance. Well-known global sensitivity analysis methodologies are:

● Weighted average of local sensitivity analysis, which calculates local sensitivity indices at different random values of the input parameters; the weighted average of the local indices is then used as the global sensitivity.
● Partial rank correlation coefficient, which uses rank-based correlation coefficients (between −1 and 1) to identify the important parameters.
● The Sobol method, which is based on a variance-decomposition technique to obtain the contribution of each input parameter to the model output variance; Sobol's method can also identify the effect of interactions between input parameters on the overall output variance.

3.6.2 Resampling

In this section, we discuss how to generate $K$ training/validation set pairs. We also want to keep the training and validation sets of each pair from overlapping as much as possible.

3.6.2.1 K-Fold cross-validation

In K-fold cross-validation, the aim is to generate $K$ training/validation set pairs, where the training and validation sets of fold $i$ do not overlap. First, we divide the dataset $X$ into $K$ parts $X_1, X_2, \ldots, X_K$. Then, for each fold $i$, we use $X_i$ as the validation set and $X \setminus X_i$ as the training set. So the training and validation sets are:
$T_1 = X_2 \cup X_3 \cup \cdots \cup X_K \qquad V_1 = X_1$
$T_2 = X_1 \cup X_3 \cup \cdots \cup X_K \qquad V_2 = X_2$
$\vdots$
$T_K = X_1 \cup X_2 \cup \cdots \cup X_{K-1} \qquad V_K = X_K$

Possible values of K are 10 or 30. One extreme case of K-fold cross-validation is leave-one-out, where $K = N$ and each validation set has only one instance. If we have more computation power, we can perform multiple runs of K-fold cross-validation, such as 10×10 cross-validation [38] or 5×2 cross-validation [8,39].

3.6.2.2 Bootstrapping

If we have very small datasets, we do not insist on the non-overlap of training and validation sets. In bootstrapping, we generate K training sets, each containing N examples (like the original dataset). To get N examples, we draw examples with replacement. For the validation set, we use the original dataset. The drawback of bootstrapping is that the bootstrap samples overlap more than the cross-validation samples, and hence are more dependent.

3.6.3 Comparison of algorithms

In this section, we discuss statistical tests that compare two classification algorithms and test whether the two algorithms have the same error rate.

3.6.3.1 K-Fold cv paired t-test

In the K-fold cv paired t-test, we assume that we sampled the training sets with K-fold cross-validation, and the difference of the error rates of the two algorithms on fold i is calculated as $e_i = f_i - s_i$, where $f_i$ and $s_i$ represent the error rates of the first and second algorithms on fold i, respectively. The null hypothesis of the K-fold cv paired t-test is that the distribution of the $e_i$ has zero mean. The test statistic

$t_K = \dfrac{\sqrt{K}\, m}{S}$

is t-distributed with $K - 1$ degrees of freedom, where $m$ and $S$ are the sample mean and standard deviation of the $e_i$'s. The test rejects the null hypothesis at significance level $\alpha$ if $t_K$ is outside the interval $(-t_{\alpha/2,\,K-1},\ t_{\alpha/2,\,K-1})$.
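As a brief numerical illustration of the K-fold cv paired t-test, the statistic can be computed as follows (the per-fold error differences below are made up for the example, and S is taken as the sample standard deviation):

```python
import numpy as np

def kfold_cv_paired_t(e):
    """t_K = sqrt(K) * m / S for the per-fold error differences e_i = f_i - s_i,
    where m and S are the sample mean and standard deviation of the e_i.
    The statistic is compared against the t distribution with K - 1 dof."""
    e = np.asarray(e, dtype=float)
    K = e.size
    m = e.mean()
    S = e.std(ddof=1)  # sample standard deviation
    return np.sqrt(K) * m / S, K - 1

# illustrative error differences over K = 10 folds
e = [0.02, 0.01, 0.03, -0.01, 0.02, 0.01, 0.02, 0.00, 0.01, 0.02]
t_K, dof = kfold_cv_paired_t(e)  # reject H0 if |t_K| > t_{alpha/2, K-1}
```

The null hypothesis is rejected when the returned statistic falls outside the critical interval for the chosen significance level.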
5×2 cv paired t-test

In the 5×2 cv paired t-test [39], we assume that we sampled the training sets with 5×2-fold cross-validation, and the difference of the error rates of the two algorithms on fold j of replication i is calculated as $e_{ij} = f_{ij} - s_{ij}$, where $f_{ij}$ and $s_{ij}$ represent the error rates of the first and second algorithms on fold j of replication i, respectively. Under the null hypothesis that the two classification algorithms have the same error rate, the test statistic

$t_t = \dfrac{e_{11}}{\sqrt{\sum_{i=1}^{5} s_i^2 / 5}}$

is t-distributed with 5 degrees of freedom, where $s_i^2$ is the variance of the error differences on replication i. The test rejects the null hypothesis at significance level $\alpha$ if $t_t$ is outside the interval $(-t_{\alpha/2,5},\ t_{\alpha/2,5})$.

Combined 5×2 cv F-test

In the combined 5×2 cv F-test [8], the assumptions are the same as in the 5×2 cv paired t-test. Under the null hypothesis that the two classification algorithms have the same error rate, the test statistic

$t_F = \dfrac{\sum_{i=1}^{5}\sum_{j=1}^{2} e_{ij}^2}{2 \sum_{i=1}^{5} s_i^2}$   (3.36)

is F-distributed with 10 and 5 degrees of freedom. The test rejects the null hypothesis at significance level $\alpha$ if $t_F$ is greater than $F_{\alpha,10,5}$.

Combined 5×2 cv t-test

In the combined 5×2 cv t-test [9], the assumptions are the same as in the 5×2 cv paired t-test. Under the null hypothesis that the two classification algorithms have the same error rate, the test statistic

$t_C = \dfrac{\sum_{i=1}^{5}\sum_{j=1}^{2} e_{ij}}{\sqrt{\sum_{i=1}^{5}\sum_{j=1}^{2} e_{ij}^2 - 2\sum_{i=1}^{5} e_{i1} e_{i2}}}$   (3.37)

is t-distributed with 5 degrees of freedom. The test rejects the null hypothesis at significance level $\alpha$ if $t_C$ is outside the interval $(-t_{\alpha/2,5},\ t_{\alpha/2,5})$.

References

[1] Vapnik V. The Nature of Statistical Learning Theory. New York, NY: Springer-Verlag; 1995.
[2] Alpaydın E. Introduction to Machine Learning.
Cambridge, MA: The MIT Press; 2010.
[3] Bishop CM. Neural Networks for Pattern Recognition. Oxford, England: Oxford University Press; 1995.
[4] Quinlan JR. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann; 1993.
[5] Pearl J. Causality: Models, Reasoning, and Inference. Cambridge, England: Cambridge University Press; 2000.
[6] Jordan MI. An Introduction to Probabilistic Graphical Models. Cambridge, MA: The MIT Press; 2009.
[7] Rabiner LR. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE. 1989;77:257–286.
[8] Alpaydın E. Combined 5×2 cv F Test for Comparing Supervised Classification Learning Algorithms. Neural Computation. 1999;11:1975–1982.
[9] Yıldız OT. Omnivariate Rule Induction Using a Novel Pairwise Statistical Test. IEEE Transactions on Knowledge and Data Engineering. 2013;25:2105–2118.
[10] Yıldız OT, and Alpaydın E. Omnivariate Decision Trees. IEEE Transactions on Neural Networks. 2001;12(6):1539–1546.
[11] Fürnkranz J. Separate-and-Conquer Rule Learning. Artificial Intelligence Review. 1999;13:3–54.
[12] Fawcett T. An Introduction to ROC Analysis. Pattern Recognition Letters. 2006;27:861–874.
[13] Cherkassky V, and Mulier F. Learning From Data. Hoboken, NJ: John Wiley and Sons; 1998.
[14] Devijver PA, and Kittler J. Pattern Recognition: A Statistical Approach. New York, NY: Prentice-Hall; 1982.
[15] Guyon I, and Elisseeff A. An Introduction to Variable and Feature Selection. Journal of Machine Learning Research. 2003;3:1157–1182.
[16] Pudil P, Novovicova J, and Kittler J. Floating Search Methods in Feature Selection. Pattern Recognition Letters. 1994;15:1119–1125.
[17] Jain AK, and Dubes RC. Algorithms for Clustering Data. New York, NY: Prentice-Hall; 1988.
[18] Jain AK, Murty MN, and Flynn PJ. Data Clustering: A Review. ACM Computing Surveys. 1999;31:264–323.
[19] Dempster AP, Laird NM, and Rubin DB.
Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society B. 1977;39:1–38.
[20] Redner RA, and Walker HF. Mixture Densities, Maximum Likelihood and the EM Algorithm. SIAM Review. 1984;26:195–239.
[21] Wolpert DH. The Relationship between PCA, the Statistical Physics Framework, the Bayesian Framework, and the VC Framework. In: Wolpert DH, editor. The Mathematics of Generalization. Cambridge, MA: Addison-Wesley; 1995. p. 117–214.
[22] Aha DW, Kibler D, and Albert MK. Instance-Based Learning Algorithms. Machine Learning. 1991;6:37–66.
[23] Aha DW. Special Issue on Lazy Learning. Artificial Intelligence Review. 1997;11(1–5):7–423.
[24] Duda RO, Hart PE, and Stork DG. Pattern Classification. Hoboken, NJ: John Wiley and Sons; 2001.
[25] Yıldız OT, and Alpaydın E. Ordering and Finding the Best of K > 2 Supervised Learning Algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006;28(3):392–402.
[26] Murthy SK. Automatic Construction of Decision Trees from Data: A Multi-Disciplinary Survey. Data Mining and Knowledge Discovery. 1998;2(4):345–389.
[27] Yıldız OT, and Alpaydın E. Linear Discriminant Trees. In: 17th International Conference on Machine Learning. Morgan Kaufmann; 2000. p. 1175–1182.
[28] Guo H, and Gelfand SB. Classification Trees With Neural Network Feature Extraction. IEEE Transactions on Neural Networks. 1992;3:923–933.
[29] Breiman L, Friedman JH, Olshen RA, et al. Classification and Regression Trees. Hoboken, NJ: John Wiley and Sons; 1984.
[30] Quinlan JR. Induction of Decision Trees. Machine Learning. 1986;1:81–106.
[31] Vapnik V. Statistical Learning Theory. New York, NY: Wiley; 1998.
[32] Schölkopf B, Tsuda K, and Vert JP, editors. Kernel Methods in Computational Biology. Cambridge, MA: The MIT Press; 2004.
[33] Rosenblatt F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. New York, NY: Spartan; 1962.
[34] Minsky ML, and Papert SA. Perceptrons. Cambridge, MA: MIT Press; 1969.
[35] Hopfield JJ. Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proceedings of the National Academy of Sciences. 1982;79:2554–2558.
[36] Turney PD. Types of Cost in Inductive Concept Learning. In: Workshop on Cost-Sensitive Learning at the 17th International Conference on Machine Learning. Stanford University, CA, USA; 2000. p. 15–21.
[37] Saltelli A, Chan K, and Scott EM. Sensitivity Analysis. Wiley Series in Probability and Statistics. New York, NY: Wiley; 2000.
[38] Bouckaert RR, and Frank E. Evaluating the Replicability of Significance Tests for Comparing Learning Algorithms. In: Advances in Knowledge Discovery and Data Mining, LNCS, vol. 3056. Sydney, Australia: Springer; 2004. p. 3–12.
[39] Dietterich TG. Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms. Neural Computation. 1998;10:1895–1923.

Chapter 4

Data-driven and physics-based modeling

Slawomir Koziel¹

Utilization of computer simulations has become ubiquitous in contemporary engineering design. Accurate simulations can provide a reliable assessment of components and devices, thereby replacing the need for prototyping and reducing both the duration of the design cycle and its cost. At the same time, the computational cost of computer simulations (e.g., full-wave electromagnetic (EM) analysis in microwave or antenna engineering) may be considerable or even unmanageable, particularly for complex structures. This is usually not a problem for design verification, or even for simple simulation-driven design procedures based on parameter sweeping; however, it becomes problematic for procedures that require numerous analyses, such as parametric optimization, statistical analysis, or tolerance-aware design. The difficulties related to the high cost of simulation can be alleviated to a certain extent by utilization of fast replacement models, also referred to as surrogates.
The popularity and importance of surrogate models have been steadily growing over the years, along with the development of various modeling techniques and an increasing number of applications in engineering and science. In this chapter, we give a brief introduction to surrogate modeling, starting with model classification, then explaining the modeling flow and discussing the details of the modeling stages. These include design of experiments (DoE), data acquisition, model identification, and model validation. We also discuss the most popular modeling techniques, including, among others, polynomial regression, kriging, neural networks (NNs), space mapping (SM), and response correction techniques.

4.1 Model classification

Surrogate models can be classified in different ways. Here, we distinguish two major classes of models, referred to as data-driven models [1] (also named functional or approximation models) and physics-based models [2]. In this section, the generic properties of both classes are briefly discussed. Particular techniques are described in further sections of this chapter.

¹School of Science and Engineering, Reykjavik University, Reykjavik, Iceland

Data-driven models are by far the most popular type of surrogates, with a large variety of modeling methods available, e.g., [3–8]. There are certain features that are common to all types of approximation models. These can be identified as follows:

● The models are constructed solely from sampled training data, so that no problem-specific knowledge is required.
● They are generic, and therefore applicable to a wide range of problems and easily transferrable between various areas.
● They are typically based on explicit analytical formulas and are cheap to evaluate.

Due to their low evaluation cost, data-driven models are easy to optimize and can be conveniently used to solve design tasks that require numerous system/device simulations.
Unfortunately, data-driven surrogates have an important disadvantage: the large number of training data points necessary to ensure good predictive power of the model. This number grows rapidly with the number of parameters of the system at hand, a phenomenon often referred to as the curse of dimensionality [5,9]. Furthermore, the number of necessary data points quickly grows with the parameter ranges. The latter is particularly important from the point of view of the practical usefulness of the surrogate in the design process. Depending on the type and nonlinearity of the system responses, the practical limit for the parameter space dimensionality may be as low as a few (four to six) for highly nonlinear systems (e.g., multiband antennas or antenna arrays), or over 20 (e.g., for aerodynamic profiles, such as airfoils). On the other hand, local approximation models can be quite useful as auxiliary optimization tools.

The models of the second class, physics-based ones, utilize a different principle. They are constructed by suitable correction or enhancement of an underlying low-fidelity model. In the case of electronics engineering, the original (or high-fidelity) models are typically evaluated using circuit or full-wave EM analysis [10], the latter being particularly expensive. The low-fidelity models can be obtained as equivalent circuits (e.g., for microwave filters or other components [11]) or coarse-mesh EM simulations (e.g., for antennas or antenna arrays [12]). In some cases, analytical models are used as well, if available. The correction process aims at improving the alignment between the low-fidelity model and the high-fidelity model, either locally or across the entire design space. Because the low-fidelity model embeds certain knowledge about the system of interest, the low- and high-fidelity models are normally well correlated.
Consequently, only a limited amount of high-fidelity data is normally sufficient to ensure reasonable accuracy of the surrogate model. For the same reason, physics-based surrogates typically feature good generalization. At the same time, the effect of the curse of dimensionality is not as pronounced as for data-driven surrogates. A certain disadvantage of this class of models is their limited generality: each type of problem requires careful preparation of the low-fidelity model, the quality of which may be critical for the quality of the surrogate. It should also be mentioned that hybridization of both classes of surrogates is possible (e.g., SM enhanced by a data-driven correction layer [13]).

4.2 Modeling flow

In this section, a typical modeling flow is described. It is mostly pertinent to data-driven models, although it also applies to physics-based surrogates to a certain extent. The overall flowchart of the surrogate modeling process is shown in Figure 4.1. Four stages can be distinguished:

● DoE. DoE refers to a process of allocating a given number of training samples in the design space using a selected strategy. In practice, the available computational budget limits the number of samples that can be used. In the past, factorial types of designs were popular, due to the fact that the majority of data came from physical measurements of the system. Nowadays, as the training data comes from computer simulations, space-filling DoEs are usually preferred [3]. An outline of popular DoE techniques is provided in Section 4.3.
● Data acquisition. This stage consists of acquiring the training data at the points allocated by DoE. Data acquisition is normally the bottleneck of the modeling process, particularly when the system evaluation requires expensive simulations (e.g., full-wave EM analysis).
● Model identification. This step involves the extraction of the parameters of the approximation model of choice.
In most cases (e.g., kriging [15] or NNs [16]), determination of the surrogate model parameters requires solving a suitably posed minimization problem. However, in some situations, the model parameters can be found using explicit formulas, by solving an appropriate regression problem (e.g., polynomial approximation [1]). Sections 4.4 and 4.5 discuss selected data-driven and physics-based modeling techniques, respectively.
● Model validation. This step is used to verify the model accuracy. Normally, the generalization capability of the surrogate is of major concern, i.e., estimating the predictive power at points (designs) not seen during the surrogate identification stage. Consequently, model testing should involve a separate set of testing samples. Maintaining a proper balance between surrogate approximation and generalization (accuracy at the known and at the unknown data sets) is also of interest. A few comments on model validation are provided in Section 4.6.

Figure 4.1 A generic surrogate model construction flowchart [14]. The shaded box shows the main flow, whereas the dashed lines indicate an optional iterative procedure in which additional training data points are added (using a suitable infill strategy) and the model is updated accordingly

As the validation stage may indicate insufficient model accuracy, surrogate model construction may, in practice, be an iterative process, so that the steps shown in the core of the diagram in Figure 4.1 constitute just a single iteration. In case the model needs to be improved, a new set of samples and the corresponding high-fidelity model evaluations may be added to the training set upon model validation and utilized to re-identify the model.
This adaptive sampling process is then continued until the accuracy goals are met or the computational budget is exceeded. In the optimization context, the surrogate update may be oriented more toward finding better designs than toward ensuring global accuracy of the model [5].

4.3 Design of experiments

As explained before, DoE [17–19] is a strategy for allocating the training points in the parameter space. In most practical cases, the space is defined by lower and upper bounds on the parameters, so that DoE can be performed on the unit hypercube and later remapped to the actual space. Following sample allocation, the high-fidelity model data is acquired at the selected locations and utilized to construct the surrogate model. Obviously, there is a trade-off between the number of training samples and the amount of information about the system of interest that can be extracted from these points. The estimated model accuracy can be fed back to the sampling algorithm to allocate more points in the regions that exhibit more nonlinear behavior [6,20].

Figure 4.2(a) shows an example of a factorial design [17], which consists of allocating the samples in the corners, edges, and/or faces of the design space. This is a traditional DoE approach that allows estimating the main effects and interactions between design variables without using too many samples. The rationale behind spreading out the samples is that it minimizes possible errors in estimating the main trends as well as the interactions between design variables, in case the data about the system comes from physical measurements. Nowadays, certain factorial designs are used because they are "economical" in terms of the number of samples, which is important if the computational budget for data acquisition is very limited. An example is the so-called star distribution (cf. Figure 4.2(a)), which is often used in conjunction with SM [21].
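As an illustration, a star distribution on the unit hypercube can be generated with a few lines of code (a sketch; the center point and perturbation sizes below are arbitrary choices):

```python
import numpy as np

def star_distribution(center, delta):
    """Star DoE: the center point plus two axis-aligned perturbations
    per design variable (center +/- delta_k along axis k), 2n + 1 samples."""
    center = np.asarray(center, dtype=float)
    delta = np.asarray(delta, dtype=float)
    samples = [center]
    for k in range(center.size):
        for sign in (1.0, -1.0):
            point = center.copy()
            point[k] += sign * delta[k]
            samples.append(point)
    return np.vstack(samples)

# 7 samples in a 3-dimensional unit cube: the center plus the face centers
points = star_distribution([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
```

With only 2n + 1 evaluations, such a design is indeed economical, although it probes each variable in isolation and therefore captures no interactions.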
Figure 4.2 Popular DoE techniques [14]: (a) factorial designs (here, star distribution); (b) random sampling; (c) uniform grid sampling; (d) Latin hypercube sampling (LHS)

The vast majority of modern DoE procedures are space-filling designs that attempt to allocate the training points uniformly within the design space [1]. This is especially useful for constructing an initial surrogate model when the knowledge about the system is limited. A number of space-filling DoEs are available. The simple ones include pseudorandom sampling [17], cf. Figure 4.2(b), and uniform grid sampling (Figure 4.2(c)). The limitation of the former is poor uniformity; the latter is only practical for low-dimensional spaces, as the number of samples is restricted to $N_1 \cdot N_2 \cdots N_n$, where $N_j$ is the number of samples along the jth axis of the design space.

Arguably the most popular DoE for uniform sample distribution is Latin hypercube sampling (LHS) [22]. In order to allocate p samples with LHS, the range of each parameter is divided into p bins, which, for n design variables, yields a total of $p^n$ bins in the design space. The samples are randomly selected in the design space so that (i) each sample is randomly placed inside a bin and (ii) for all one-dimensional projections of the p samples and bins, there is exactly one sample in each bin. An example LHS distribution for p = 20 in a two-dimensional space is shown in Figure 4.2(d). LHS can be improved to provide more uniform sampling distributions [23–26]. Other commonly used uniform sampling techniques include orthogonal array sampling [1], quasi-Monte Carlo sampling [17], and Hammersley sampling [17]. Space-filling sample distribution can also be posed as an optimization problem, specifically as minimization of a suitably defined nonuniformity measure, e.g., $\sum_{i=1}^{p}\sum_{j=i+1}^{p} d_{ij}^{-2}$ [24], where $d_{ij}$ is the Euclidean distance between samples i and j.
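The LHS construction described above can be sketched as follows (a minimal version; the improved variants [23–26] additionally optimize the uniformity of the resulting distribution):

```python
import numpy as np

def latin_hypercube(p, n, seed=None):
    """Allocate p samples in the n-dimensional unit hypercube so that each
    one-dimensional projection has exactly one sample per bin of width 1/p."""
    rng = np.random.default_rng(seed)
    offsets = rng.random((p, n))           # random position inside each bin
    samples = np.empty((p, n))
    for k in range(n):
        bins = rng.permutation(p)          # one distinct bin per sample
        samples[:, k] = (bins + offsets[:, k]) / p
    return samples

X = latin_hypercube(20, 2, seed=0)  # cf. the p = 20 example of Figure 4.2(d)
```

Each column of X, multiplied by p and rounded down, is a permutation of 0, ..., p − 1, which is exactly the one-sample-per-bin property (ii).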
4.4 Data-driven models

The number of available data-driven modeling techniques is quite considerable. This section provides a brief outline of selected methods; more details can be found in the literature, e.g., [3]. The common notation utilized throughout is the following. The training data samples will be denoted as $\{\mathbf{x}^{(i)}\}$, $i = 1, \ldots, p$, and the corresponding high-fidelity model evaluations as $f(\mathbf{x}^{(i)})$. The surrogate model is constructed by approximating the data pairs $\{\mathbf{x}^{(i)}, f(\mathbf{x}^{(i)})\}$. Here, $f$ is considered to be a scalar function; however, the generalization of most methods to vector-valued cases is rather straightforward.

4.4.1 Polynomial regression

Polynomial regression is one of the simplest approximation techniques [1]. The surrogate is defined as a linear combination of the basis functions $v_j$:

$s(\mathbf{x}) = \sum_{j=1}^{K} \beta_j v_j(\mathbf{x})$,   (4.1)

where $\beta_j$ are unknown coefficients. The model parameters can be found as a least-squares solution to the linear system

$\mathbf{f} = \mathbf{X}\boldsymbol{\beta}$,   (4.2)

where $\mathbf{f} = [f(\mathbf{x}^{(1)})\ f(\mathbf{x}^{(2)})\ \ldots\ f(\mathbf{x}^{(p)})]^T$, $\mathbf{X}$ is a $p \times K$ matrix containing the basis functions evaluated at the sample points, and $\boldsymbol{\beta} = [\beta_1\ \beta_2\ \ldots\ \beta_K]^T$. In order to ensure the uniqueness of the solution to (4.2), one needs to ensure consistency between the number of data points and the number of basis functions (typically, $p \geq K$). If the sample points and basis functions are taken arbitrarily, some columns of $\mathbf{X}$ may be linearly dependent. For $p \geq K$ and $\mathrm{rank}(\mathbf{X}) = K$, a solution to (4.2) can be computed through $\mathbf{X}^{+}$, the pseudoinverse of $\mathbf{X}$ [27], i.e., $\boldsymbol{\beta} = \mathbf{X}^{+}\mathbf{f} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{f}$.

A simple yet useful example of a regression model is a second-order polynomial

$s(\mathbf{x}) = s([x_1\ x_2\ \ldots\ x_n]^T) = \beta_0 + \sum_{j=1}^{n} \beta_j x_j + \sum_{i=1}^{n} \sum_{j \geq i} \beta_{ij} x_i x_j$,   (4.3)

with the basis functions being monomials: 1, $x_j$, and $x_i x_j$.
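A brief numerical illustration of the pseudoinverse solution to (4.2), with a one-variable quadratic and synthetic, noiseless data:

```python
import numpy as np

# Fit s(x) = b0 + b1*x + b2*x^2 via the least-squares solution b = X^+ f.
x = np.linspace(-1.0, 1.0, 11)
f = 2.0 - x + 3.0 * x ** 2                           # synthetic samples
X = np.column_stack([np.ones_like(x), x, x ** 2])    # p x K basis matrix (p=11, K=3)
b = np.linalg.pinv(X) @ f                            # equivalent to inv(X.T @ X) @ X.T @ f
print(b)  # recovers the coefficients [2, -1, 3]
```

Since the data is noiseless and p > K with full column rank, the least-squares fit reproduces the generating polynomial exactly (up to rounding).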
4.4.2 Radial basis function interpolation

Radial basis function (RBF) interpolation/approximation [5,28] is a regression model in which the surrogate is defined as a linear combination of K radially symmetric functions $\phi$:

$s(\mathbf{x}) = \sum_{j=1}^{K} \lambda_j \phi\big(\|\mathbf{x} - \mathbf{c}^{(j)}\|\big)$,   (4.4)

in which $\boldsymbol{\lambda} = [\lambda_1\ \lambda_2\ \ldots\ \lambda_K]^T$ is the vector of model parameters and $\mathbf{c}^{(j)}$, $j = 1, \ldots, K$, are the (known) basis function centers; $\|\cdot\|$ stands for the L2-norm. The model coefficients can be found as $\boldsymbol{\lambda} = \boldsymbol{\Phi}^{+}\mathbf{f} = (\boldsymbol{\Phi}^T\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^T\mathbf{f}$, where $\mathbf{f} = [f(\mathbf{x}^{(1)})\ f(\mathbf{x}^{(2)})\ \ldots\ f(\mathbf{x}^{(p)})]^T$ and $\boldsymbol{\Phi} = [\Phi_{kl}]_{k=1,\ldots,p;\ l=1,\ldots,K}$ is the $p \times K$ matrix with the entries

$\Phi_{kl} = \phi\big(\|\mathbf{x}^{(k)} - \mathbf{c}^{(l)}\|\big)$.   (4.5)

In the special case where the number of basis functions is equal to the number of samples, i.e., $p = K$, and the centers of the basis functions coincide with the (all distinct) data points, $\boldsymbol{\Phi}$ is a regular square matrix; then, $\boldsymbol{\lambda} = \boldsymbol{\Phi}^{-1}\mathbf{f}$. A popular choice of the basis function is the Gaussian, $\phi(r) = \exp(-r^2/2\sigma^2)$, where $\sigma$ is a scaling parameter.

4.4.3 Kriging

Kriging belongs to the most popular data-driven modeling techniques today [3,15]. Kriging is a Gaussian-process-based modeling method, which is compact and cheap to evaluate [29]. In its basic formulation (e.g., [3]), kriging assumes that the function of interest is of the following form:

$f(\mathbf{x}) = \mathbf{g}(\mathbf{x})^T\boldsymbol{\beta} + Z(\mathbf{x})$,   (4.6)

where $\mathbf{g}(\mathbf{x}) = [g_1(\mathbf{x})\ g_2(\mathbf{x})\ \ldots\ g_K(\mathbf{x})]^T$ are known (e.g., constant) functions, $\boldsymbol{\beta} = [\beta_1\ \beta_2\ \ldots\ \beta_K]^T$ are the unknown model parameters (hyperparameters), and $Z(\mathbf{x})$ is a realization of a normally distributed Gaussian random process with zero mean and variance $\sigma^2$. The regression part $\mathbf{g}(\mathbf{x})^T\boldsymbol{\beta}$ is a trend function for $f$, and $Z(\mathbf{x})$ takes into account localized variations. The covariance matrix of $Z(\mathbf{x})$ is given as

$\mathrm{Cov}\big[Z(\mathbf{x}^{(i)}),\, Z(\mathbf{x}^{(j)})\big] = \sigma^2 R\big(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}\big)$,   (4.7)

where $\mathbf{R}$ is a $p \times p$ correlation matrix with $R_{ij} = R(\mathbf{x}^{(i)}, \mathbf{x}^{(j)})$. Here, $R(\mathbf{x}^{(i)}, \mathbf{x}^{(j)})$ is the correlation function between the sampled data points $\mathbf{x}^{(i)}$ and $\mathbf{x}^{(j)}$.
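The interpolation variant of Section 4.4.2 (p = K, centers coinciding with the data points) can be sketched as follows; the one-dimensional test function and the scaling parameter are made up for the example:

```python
import numpy as np

def gaussian_rbf_interpolant(x_train, f_train, sigma=0.15):
    """Build s(x) = sum_j lambda_j * phi(|x - c_j|) with Gaussian phi and
    centers at the training points, so that lambda = Phi^{-1} f."""
    phi = lambda r: np.exp(-r ** 2 / (2.0 * sigma ** 2))
    Phi = phi(np.abs(x_train[:, None] - x_train[None, :]))  # square p x p matrix
    lam = np.linalg.solve(Phi, f_train)
    return lambda x: phi(np.abs(x - x_train)) @ lam

x_train = np.linspace(0.0, 1.0, 9)
f_train = np.sin(2.0 * np.pi * x_train)
s = gaussian_rbf_interpolant(x_train, f_train)
```

By construction, the interpolant reproduces the training data exactly; the scaling parameter sigma controls how the model behaves between the samples.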
The most popular choice is the Gaussian correlation function

$R(\mathbf{x}, \mathbf{y}) = \exp\left[-\sum_{k=1}^{n} \theta_k |x_k - y_k|^2\right]$,   (4.8)

where $\theta_k$ are the unknown correlation parameters, and $x_k$ and $y_k$ are the kth components of the vectors $\mathbf{x}$ and $\mathbf{y}$, respectively. The kriging predictor is defined as

$s(\mathbf{x}) = \mathbf{g}(\mathbf{x})^T\boldsymbol{\beta} + \mathbf{r}^T(\mathbf{x})\mathbf{R}^{-1}(\mathbf{f} - \mathbf{G}\boldsymbol{\beta})$,   (4.9)

where $\mathbf{r}(\mathbf{x}) = [R(\mathbf{x}, \mathbf{x}^{(1)})\ \ldots\ R(\mathbf{x}, \mathbf{x}^{(p)})]^T$, $\mathbf{f} = [f(\mathbf{x}^{(1)})\ f(\mathbf{x}^{(2)})\ \ldots\ f(\mathbf{x}^{(p)})]^T$, and $\mathbf{G}$ is a $p \times K$ matrix with $G_{ij} = g_j(\mathbf{x}^{(i)})$. The vector of model parameters $\boldsymbol{\beta}$ can be computed as $\boldsymbol{\beta} = (\mathbf{G}^T\mathbf{R}^{-1}\mathbf{G})^{-1}\mathbf{G}^T\mathbf{R}^{-1}\mathbf{f}$. Model fitting is accomplished by maximum likelihood estimation of the $\theta_k$ [30].

One of the practically important features of kriging is that the random process $Z(\mathbf{x})$ gives information on the approximation error, which can be used to improve the surrogate, e.g., by allocating additional training samples at the locations where the estimated model error is the highest [5]. This feature is also utilized in various global optimization methods (e.g., [6]). Figure 4.3 shows an example function of two variables and its kriging model obtained using 20, 50, and 100 samples allocated using LHS.

Figure 4.3 Kriging model of a two-parameter scalar function: (a) functional landscape, (b) training data set (o) and kriging model for p = 20 samples, (c) kriging model for p = 50 samples, (d) kriging model for p = 100 samples

4.4.4 Support vector regression

Support vector regression (SVR) [31] exhibits good generalization capability [32] and convenient training through quadratic programming [33]. SVR exploits the structural risk minimization principle, arguably superior [31] to the traditional empirical risk minimization principle employed by, e.g., NNs. SVR has been gaining popularity in various areas, including electrical engineering and aerodynamic design [34–36]. Let $\mathbf{r}_k = \mathbf{f}(\mathbf{x}^k)$, $k = 1, 2, \ldots$
, N denote the sampled high-fidelity model responses (here, SVR is formulated for vector-valued $\mathbf{f}$). The aim of SVR is to approximate $\mathbf{r}_k$ at all base points $\mathbf{x}^k$, $k = 1, 2, \ldots, N$. Let $\mathbf{r}_k = [r_{1k}\ r_{2k}\ \ldots\ r_{mk}]^T$ denote the components of the vector $\mathbf{r}_k$. For linear regression, one aims at approximating the training data set, here, the pairs $D_j = \{(\mathbf{x}^1, r_{j1}), \ldots, (\mathbf{x}^N, r_{jN})\}$, $j = 1, 2, \ldots, m$, by a linear function $f_j(\mathbf{x}) = \mathbf{w}_j^T\mathbf{x} + b_j$. The optimal regression function is given by the minimum of the following functional [33]:

$\Phi_j(\mathbf{w}, \boldsymbol{\xi}) = \frac{1}{2}\|\mathbf{w}_j\|^2 + C_j \sum_{i=1}^{N} \big(\xi_i^{+} + \xi_i^{-}\big)$,   (4.10)

where $C_j$ is a user-defined value, and $\xi_i^{+}$ and $\xi_i^{-}$ are the slack variables representing the upper and lower constraints on the output of the system. The typical cost function used in SVR is the $\varepsilon$-insensitive loss function defined as

$L_\varepsilon(y) = \begin{cases} 0 & \text{for } |f_j(\mathbf{x}) - y| < \varepsilon \\ |f_j(\mathbf{x}) - y| & \text{otherwise} \end{cases}$   (4.11)

The value of $C_j$ determines the trade-off between the flatness of $f_j$ and the amount up to which deviations larger than $\varepsilon$ are tolerated [31]. Here, nonlinear regression employing the kernel approach is described, in which the linear function $\mathbf{w}_j^T\mathbf{x} + b_j$ is replaced by the nonlinear function $\sum_i \gamma_{ji} K(\mathbf{x}^i, \mathbf{x}) + b_j$, where $K$ is a kernel function. Thus, the SVR model is defined as

$\mathbf{s}(\mathbf{x}) = \left[\sum_{i=1}^{N} \gamma_{1i} K(\mathbf{x}^i, \mathbf{x}) + b_1 \;\; \ldots \;\; \sum_{i=1}^{N} \gamma_{mi} K(\mathbf{x}^i, \mathbf{x}) + b_m\right]^T$   (4.12)

with the parameters $\gamma_{ji}$ and $b_j$, $j = 1, \ldots, m$, $i = 1, \ldots, N$, obtained according to the general SVR methodology. In particular, Gaussian kernels of the form $K(\mathbf{x}, \mathbf{y}) = \exp(-0.5\|\mathbf{x} - \mathbf{y}\|^2/c^2)$ with $c > 0$ can be used, where $c$ is a scaling parameter. Both $c$ and the parameters $C_j$ and $\varepsilon$ can be adjusted to minimize the generalization error calculated using a cross-validation method [1].

4.4.5 Neural networks

Artificial neural networks (ANNs) are a large area of research in themselves [37]. From the perspective of surrogate modeling, ANNs can be viewed as just another way of approximating sampled high-fidelity model data to create a surrogate model.
The most important component of an NN is the neuron (or single-unit perceptron) [37]. A neuron realizes the nonlinear operation illustrated in Figure 4.4(a), where $w_1$ through $w_n$ are regression coefficients, $\beta$ is the bias value of the neuron, and $T$ is a user-defined slope parameter. The most common NN architecture is the multilayer feed-forward network shown in Figure 4.4(b).

Figure 4.4 Basic concepts of artificial neural networks: (a) structure of a neuron, with $\eta = \sum_i w_i x_i + \beta$ and output $y = (1 + e^{-\eta/T})^{-1}$, and (b) two-layer feed-forward neural network architecture (input units, hidden layer, output units)

Construction of an NN model is a two-step process: (i) selection of the network architecture and (ii) training, i.e., the assignment of values to the regression parameters. The network training can be formulated as a nonlinear least-squares regression problem. A popular technique for solving this regression problem is the error back-propagation algorithm [3,37].

4.4.6 Other methods

As mentioned before, a large variety of data-driven modeling techniques are available; the ones outlined in this section are the most widely used. Another popular approach is moving least squares [38], where the error contribution from each training point $\mathbf{x}^{(i)}$ is multiplied by a weight $w_i$ that depends on the distance between $\mathbf{x}$ and $\mathbf{x}^{(i)}$. A typical choice for the weights is

$w_i\big(\|\mathbf{x} - \mathbf{x}^{(i)}\|\big) = \exp\big(-\|\mathbf{x} - \mathbf{x}^{(i)}\|^2\big)$.   (4.13)

It should be noted that incorporating weights improves the flexibility of the model, however, at the expense of increased computational complexity, since computing the approximation for each point $\mathbf{x}$ requires solving a new optimization problem.

Gaussian process regression (GPR) [29] is another data-driven modeling technique that, like kriging, addresses the approximation problem from a stochastic point of view.
From this perspective, and since Gaussian processes are mathematically tractable, it is relatively easy to compute error estimates for GPR-based surrogates in the form of uncertainty distributions. Under certain conditions, GPR models can be shown to be equivalent to large NNs, while requiring significantly fewer regression parameters than NNs.

The last technique to be mentioned here is cokriging [39,40]. It is an interesting variation of standard kriging interpolation, which allows combining information from computational models of various fidelities. The major advantage of cokriging is that, by exploiting the knowledge embedded in the low-fidelity model, the surrogate can be created at a much lower computational cost than for models based exclusively on high-fidelity data. Cokriging is a relatively recent method with a few (but a growing number of) applications in engineering [40–43].

4.5 Physics-based models

Data-driven models are suitable for solving a range of practical problems. Their two major advantages are versatility and low evaluation cost. A disadvantage is the considerable computational cost of training data acquisition, which grows quickly with the dimensionality of the design space and the ranges of the parameters. Consequently, in many cases, the construction of approximation surrogates becomes impractical.

Physics-based models are qualitatively different because they are constructed by suitable correction or enhancement of an underlying low-fidelity model. The correction process aims at improving the alignment between the low-fidelity and the high-fidelity models, either locally or across the entire design space. As the low-fidelity model embeds certain knowledge about the system of interest (e.g., due to evaluating the same system at the level of circuit theory rather than EM analysis, as in many microwave engineering problems), the models are normally well correlated.
As a result, only a limited amount of high-fidelity data is normally sufficient to ensure reasonable accuracy of the surrogate model. For the same reason, physics-based surrogates typically feature good generalization. To illustrate the concept of physics-based surrogate modeling, let us consider a simple example. We denote by c(x) a low-fidelity model of the device or system of interest. Consider a simple case of multiplicative response correction, considered in the context of surrogate-based optimization [14]. The optimization algorithm produces a sequence {x^(i)} of approximate solutions to the original problem x* = argmin{x : f(x)}. From the algorithm convergence standpoint (particularly if the algorithm is embedded in the trust-region framework [14]), local alignment between the surrogate and the high-fidelity model is of fundamental importance. The surrogate s^(i)(x) at iteration i can be constructed as:

$$ s^{(i)}(x) = \beta_k(x)\,c(x) \qquad (4.14) $$

where $\beta_k(x) = \beta(x^{(i)}) + \nabla\beta(x^{(i)})^T (x - x^{(i)})$ and $\beta(x) = f(x)/c(x)$. This ensures so-called zero- and first-order consistency between s and f, i.e., agreement of function values and their gradients at x^(i) [44]. Figure 4.5 illustrates correction (4.14) for exemplary models based on analytical functions.

Figure 4.5 Visualization of the response correction (4.14) for the analytical functions c (low-fidelity model) and f (high-fidelity model), showing the responses of the high-fidelity model f, the low-fidelity model c, the surrogate model (response-corrected c), and the first-order Taylor model at x0 = 1. The correction is established at x0 = 1
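The correction (4.14) can be sketched directly from the definitions above. In this hedged example, the models and their gradients are supplied as callables (placeholders for whatever evaluation method is available), and the quotient rule gives the gradient of β analytically:

```python
import numpy as np

def multiplicative_correction(f, c, grad_f, grad_c, x_i):
    """Build the corrected surrogate s(x) = beta_k(x) * c(x) of (4.14).

    beta(x) = f(x)/c(x); beta_k is its first-order Taylor expansion at
    the current iterate x_i, which guarantees zero- and first-order
    consistency: s(x_i) = f(x_i) and grad s(x_i) = grad f(x_i).
    f, c: scalar-valued callables; grad_f, grad_c: callables returning
    gradient vectors (assumed available, e.g., via adjoints or FD).
    """
    fi, ci = f(x_i), c(x_i)
    beta_i = fi / ci
    # Gradient of beta = f/c via the quotient rule at x_i
    grad_beta = (grad_f(x_i) - beta_i * grad_c(x_i)) / ci

    def s(x):
        # First-order Taylor model of beta, multiplied by the cheap model c
        beta_k = beta_i + grad_beta @ (np.asarray(x) - np.asarray(x_i))
        return beta_k * c(x)
    return s
```

Evaluating the returned `s` costs only one call to the low-fidelity model c, which is the whole point of the construction.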
It can be observed that the corrected model (surrogate s) exhibits good alignment with the high-fidelity model in a relatively wide vicinity of x0, especially compared to the first-order Taylor model set up using the same data from f (the value and the gradient at x0).

4.5.1 Variable-fidelity models

Low-fidelity models are fundamental components of physics-based surrogates. They are normally problem dependent, i.e., they need to be set up individually for a given high-fidelity model. In electrical engineering, especially high-frequency electronics, full-wave EM analysis is often used as the original (high-fidelity) model. Typical low-fidelity model choices include analytical representations (rather rare due to the complexity of contemporary components and devices), equivalent circuit models (commonly used for, e.g., microwave devices such as filters, couplers, and power dividers), or coarse-mesh full-wave EM simulation models (utilized for, e.g., antenna structures [12]). The last group is the most versatile because it can always be set up for any EM-based high-fidelity model. Here, we use it to discuss certain practical options concerning low-fidelity model selection. Consider the microstrip antenna shown in Figure 4.6(a) and the discretization of its high-fidelity model (here, using a tetrahedral mesh). With discrete solvers, it is the discretization density that has the strongest impact on the accuracy and computational time of a particular antenna model. At the same time, the discretization density or mesh quality is probably the most efficient way to trade accuracy for speed. Therefore, a straightforward way to create a low-fidelity model of the antenna is through coarser mesh settings compared to those of the high-fidelity antenna model, e.g., as illustrated in Figure 4.6(b). Because of such simplifications, the low-fidelity model can be faster than the high-fidelity one by a factor of 10 to 50.
However, the low-fidelity model is obviously not as accurate as the high-fidelity one. Figure 4.7 shows the high- and low-fidelity model responses at a specific design for the antenna of Figure 4.6, obtained with different meshes, as well as the relationship between mesh coarseness and simulation time.

Figure 4.6 Microstrip antenna [12]: (a) high-fidelity model shown with a fine tetrahedral mesh and (b) low-fidelity model shown with a much coarser mesh

Figure 4.7 Antenna of Figure 4.6 at a selected design simulated with CST Microwave Studio: (a) reflection response |S11| versus frequency for different discretization densities (19,866; 40,068; 266,396; 413,946; 740,740; and 1,588,608 mesh cells); (b) the antenna simulation time versus the number of mesh cells [12]

Clearly, a coarser mesh results in shorter simulation time, but this is obtained at the expense of compromised model alignment. In practice, the appropriate choice of the low-fidelity model discretization density is not a trivial task and may be critical for the predictive power of the physics-based surrogate in which the low-fidelity model is used. More information about possible simplifications that can be made to establish the low-fidelity model, as well as model selection trade-offs, can be found in the literature [12,45].

4.5.2 Space mapping

SM [46] is one of the most popular physics-based modeling techniques in high-frequency electronics. Here, we briefly discuss several types of SM assuming single-point correction (the multipoint generalization is straightforward [47]). The first case is input SM (ISM), where the correction is applied at the level of the model domain. Here, we use vector notation for the models, i.e., c and s for the low-fidelity and the surrogate models, respectively.
The surrogate is defined as:

$$ s(x) = c(x + q) \qquad (4.15) $$

The parameter vector q is obtained by minimizing ||f(x) − c(x + q)||, which represents the misalignment between the surrogate and the high-fidelity model.

Figure 4.8 shows an example of a microwave filter structure [48] evaluated using EM simulation (high-fidelity model), its equivalent circuit (low-fidelity model), and the corresponding responses (here, the so-called transmission characteristics |S21| versus frequency) before and after applying the ISM correction. Figure 4.8(d) indicates excellent generalization capability of the surrogate. A multipoint version of ISM may use a more involved domain mapping, e.g., s(x) = c(Bx + q), where B is a square matrix.

Figure 4.8 Low-fidelity model correction through parameter shift (input space mapping) [14]: (a) microstrip filter geometry (high-fidelity model f evaluated using EM simulation); (b) low-fidelity model c (equivalent circuit built of microstrip-line and coupled-line elements); (c) response of f and c, as well as the response of the surrogate model s created using input space mapping; (d) surrogate model verification at a different design (other than that at which the model was created)

Another type of model correction may involve additional parameters that are normally fixed in the high-fidelity model but can be adjusted in the low-fidelity one. The latter is just an auxiliary design tool after all: it is not supposed to be built or measured.
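The ISM parameter extraction of (4.15) is itself a small optimization problem. The following sketch (an illustration, not the implementation from [46]) extracts the shift q with a derivative-free optimizer, assuming the high-fidelity response at x has already been computed and the low-fidelity model is a cheap callable:

```python
import numpy as np
from scipy.optimize import minimize

def extract_ism_shift(f_response, c, x, q0=None):
    """Extract the input space mapping shift q of (4.15), s(x) = c(x + q).

    f_response: high-fidelity response vector at design x (precomputed,
    e.g., from an EM simulation); c: cheap low-fidelity model callable
    returning a response vector. q is found by minimizing the
    misalignment ||f(x) - c(x + q)||.
    """
    x = np.asarray(x, dtype=float)
    if q0 is None:
        q0 = np.zeros_like(x)
    res = minimize(lambda q: np.linalg.norm(f_response - c(x + q)), q0,
                   method="Nelder-Mead")
    return res.x

def make_ism_surrogate(c, q):
    """The corrected surrogate simply evaluates the shifted cheap model."""
    return lambda x: c(np.asarray(x) + q)
```

Since every objective evaluation only calls the low-fidelity model, the extraction itself stays inexpensive even with many iterations.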
This concept is utilized, among others, in the so-called implicit SM technique [49], where the surrogate is created as:

$$ s(x) = c_I(x, p) \qquad (4.16) $$

where c_I is the low-fidelity model with explicit dependence on the additional parameters p. The vector p is obtained by minimizing ||f(x) − c_I(x, p)||. For the sake of illustration, consider again the filter of Figure 4.8(a) and (b), with the implicit SM parameters being the dielectric permittivities of the microstrip-line component substrates (rectangular elements in Figure 4.8(b)). Normally, the entire circuit is fabricated on a dielectric substrate with specified (and fixed) characteristics. For the sake of correcting the low-fidelity model, the designer is free to adjust these characteristics (in particular, the value of the dielectric permittivity) individually for each component of the equivalent circuit. Figure 4.9 shows the responses before and after applying the implicit SM correction. Again, good generalization of the surrogate model can be observed.

Figure 4.9 Low-fidelity model correction through implicit space mapping applied to the microstrip filter of Figure 4.8 [14]: (a) response of f and c, as well as the response of the surrogate model s created using implicit space mapping; (b) surrogate model verification at a different design (other than that at which the model was created)

In many cases, the vector-valued responses of the system are actually evaluations of the same design at different values of a certain parameter, such as time, frequency (e.g., for microwave structures), or a specific geometry parameter (e.g., the chord line coordinate for airfoil profiles). In such situations, it might be convenient to apply a linear or nonlinear scaling to this parameter so as to shape the response accordingly.
A good example of such a correction procedure is frequency scaling, often utilized in electrical engineering [47]. As a matter of fact, for many components simulated using EM solvers, a frequency shift is the major type of discrepancy between the low- and high-fidelity models. Let us consider a simple frequency scaling procedure, also referred to as frequency SM [47]. We assume that f(x) = [f(x, ω1) f(x, ω2) … f(x, ωm)]^T, where f(x, ωk) is the evaluation of the high-fidelity model at a frequency ωk, whereas ω1 through ωm represent the entire discrete set of frequencies at which the model is evaluated. A similar convention is used for the low-fidelity model. The frequency-scaled model sF(x) is defined as:

$$ s_F(x; [F_0\ F_1]) = [\,c(x, F_0 + F_1\omega_1)\ \ldots\ c(x, F_0 + F_1\omega_m)\,]^T \qquad (4.18) $$

where F0 and F1 are scaling parameters obtained by minimizing the misalignment between sF and f at a certain reference design x^(i):

$$ [F_0, F_1] = \arg\min_{[F_0, F_1]} \left\| f(x^{(i)}) - s_F(x^{(i)}; [F_0\ F_1]) \right\| \qquad (4.19) $$

An example of frequency scaling applied to the low-fidelity model of a substrate-integrated cavity antenna [12] is shown in Figure 4.10. Here, both the low- and high-fidelity models are evaluated using EM simulation (c with coarse discretization).

Figure 4.10 Low-fidelity model correction through frequency scaling [14]: (a) antenna geometry (both f and c evaluated using EM simulation, with coarse discretization used for c); (b) response of f and c, as well as the response of the surrogate model s created using frequency scaling as in (4.18) and (4.19)

4.5.3 Response correction methods

Response correction techniques aim at reducing the misalignment between the low- and high-fidelity models at the level of the model output.
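The fit (4.19) can be sketched as follows. This is an illustrative assumption-laden setup: the low-fidelity sweep is wrapped in a callable `c_eval` that can be evaluated at arbitrary (scaled) frequencies, e.g., by interpolating a precomputed sweep, which is how one would handle real simulator output:

```python
import numpy as np
from scipy.optimize import minimize

def fit_frequency_scaling(freqs, f_resp, c_eval):
    """Fit the frequency SM parameters F0, F1 of (4.18)-(4.19).

    freqs: frequency sweep (omega_1 ... omega_m); f_resp: high-fidelity
    response at the reference design; c_eval(w): low-fidelity response
    evaluated at frequencies w (a callable, assumed here to interpolate
    the low-fidelity sweep). Returns [F0, F1].
    """
    def misalignment(p):
        F0, F1 = p
        # Evaluate the cheap model on the affinely scaled frequency axis
        return np.linalg.norm(f_resp - c_eval(F0 + F1 * freqs))
    # Start from the identity scaling F0 = 0, F1 = 1
    res = minimize(misalignment, x0=[0.0, 1.0], method="Nelder-Mead")
    return res.x
```

Starting from the identity scaling [0, 1] is the natural choice, since the two models are expected to be well correlated to begin with.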
In the optimization context, response correction is often performed to improve the model matching at the current design, so that the surrogate model obtained this way may be used as a reliable prediction tool that allows us to find a better design. There are three main groups of function correction techniques: compositional, additive, and multiplicative corrections. We will briefly illustrate each of these categories for correcting the low-fidelity model c(x), as well as discuss whether zero- and first-order consistency conditions with f(x) [44] can be satisfied. Here, we assume that the models are scalar functions; however, generalizations to vector-valued functions are straightforward in some cases. The compositional correction [4]:

$$ s^{(i+1)}(x) = g(c(x)) \qquad (4.20) $$

is a simple scaling of the objective function. Since the mapping g is a real-valued function of a real variable, a compositional correction will not in general yield first-order consistency. By selecting a mapping g that satisfies:

$$ g'(c(x^{(i)})) = \frac{\nabla f(x^{(i)})\,\nabla c(x^{(i)})^T}{\nabla c(x^{(i)})\,\nabla c(x^{(i)})^T} \qquad (4.21) $$

the discrepancy between ∇f(x^(i)) and ∇s^(i+1)(x^(i)) (expressed in the Euclidean norm) is minimized. The compositional correction can also be introduced in the parameter space [46]:

$$ s^{(i+1)}(x) = c(p(x)) \qquad (4.22) $$

If the ranges of f(x) and c(x) are different, then the condition c(p(x^(i))) = f(x^(i)) is not achievable. This difficulty can be alleviated by combining both compositional corrections so that g and p take the following forms:

$$ g(t) = t - c(x^{(i)}) + f(x^{(i)}), \qquad p(x) = x^{(i)} + J_p\,(x - x^{(i)}) \qquad (4.23) $$

where J_p is an n×n matrix for which $J_p^T \nabla c(x^{(i)}) = \nabla f(x^{(i)})$ guarantees consistency. Additive and multiplicative corrections allow obtaining first-order consistency. For the additive case, we can generally express the correction as:

$$ s^{(i+1)}(x) = \lambda(x) + c(x) \qquad (4.24) $$

The associated consistency conditions require that λ(x) satisfies $\lambda(x^{(i)}) = f(x^{(i)}) - c(x^{(i)})$ and $\nabla\lambda(x^{(i)}) = \nabla f(x^{(i)}) - \nabla c(x^{(i)})$.
These can be obtained by the following linear additive correction:

$$ s^{(i+1)}(x) = f(x^{(i)}) - c(x^{(i)}) + \left(\nabla f(x^{(i)}) - \nabla c(x^{(i)})\right)(x - x^{(i)}) + c(x) \qquad (4.25) $$

Multiplicative corrections (also known as the β-correlation method [4]) can be represented generically by:

$$ s^{(i+1)}(x) = \alpha(x)\,c(x) \qquad (4.26) $$

If c(x^(i)) ≠ 0, zero- and first-order consistency can be achieved if $\alpha(x^{(i)}) = f(x^{(i)})/c(x^{(i)})$ and $\nabla\alpha(x^{(i)}) = \left[\nabla f(x^{(i)}) - \left(f(x^{(i)})/c(x^{(i)})\right)\nabla c(x^{(i)})\right]/c(x^{(i)})$. The requirement c(x^(i)) ≠ 0 is not restrictive in practice, since very often the range of f(x) (and thus of the surrogate c(x)) is known beforehand, and hence a bias can be introduced in both f(x) and c(x) to avoid cost function values equal to zero. In these circumstances, the following multiplicative correction:

$$ s^{(i+1)}(x) = \left[\frac{f(x^{(i)})}{c(x^{(i)})} + \frac{\nabla f(x^{(i)})\,c(x^{(i)}) - f(x^{(i)})\,\nabla c(x^{(i)})}{c(x^{(i)})^2}\,(x - x^{(i)})\right] c(x) \qquad (4.27) $$

ensures both zero- and first-order consistency between s^(i+1) and f.

Response correction may be realized in a multipoint manner (generalizing the so-called output SM concept [46]), with the low-fidelity model correction determined using high-fidelity model data at several points (designs). The multipoint response correction surrogate can be formulated as (here, for vector-valued models, and assuming the optimization context with the surrogate model s^(i) established at the current iteration point x^(i)) [14]:

$$ s^{(i)}(x) = L^{(i)} \odot c(x) + \Delta_r^{(i)} + d^{(i)} \qquad (4.28) $$

where L^(i), Δr^(i), and d^(i) are column vectors, and ⊙ denotes component-wise multiplication. The global response correction parameters L^(i) and Δr^(i) are obtained as:

$$ [L^{(i)}, \Delta_r^{(i)}] = \arg\min_{[L, \Delta_r]} \sum_{k=1}^{i} \left\| f(x^{(k)}) - \left(L \odot c(x^{(k)}) + \Delta_r\right) \right\|^2 \qquad (4.29) $$

i.e., the response scaling is supposed to improve the matching at all previous iteration points. The points x^(k), k = 1, . . .
, i may represent the optimization history (previously considered designs along the optimization path) or be elements of the training set allocated for the purpose of surrogate model construction. The (local) additive response correction term d^(i) is then defined as:

$$ d^{(i)} = f(x^{(i)}) - \left[L^{(i)} \odot c(x^{(i)}) + \Delta_r^{(i)}\right] \qquad (4.30) $$

i.e., it ensures a perfect match between the surrogate and the high-fidelity model at the current design x^(i), s^(i)(x^(i)) = f(x^(i)). Note that the local correction (4.30) is primarily used in the optimization context, when the surrogate has to be at least zero-order consistent with the high-fidelity model at the current iteration point x^(i).

It should be noted that all of the correction terms L^(i), Δr^(i), and d^(i) can be obtained analytically. Let L^(i) = [l_1^(i) l_2^(i) … l_m^(i)]^T, Δr^(i) = [δ_1^(i) δ_2^(i) … δ_m^(i)]^T, c(x) = [c_1(x) c_2(x) … c_m(x)]^T, and f(x) = [f_1(x) f_2(x) … f_m(x)]^T. We have:

$$ \begin{bmatrix} l_k^{(i)} \\ \delta_k^{(i)} \end{bmatrix} = (C_k^T C_k)^{-1} C_k^T F_k \qquad (4.31) $$

where:

$$ C_k = \begin{bmatrix} c_k(x^{(0)}) & \cdots & c_k(x^{(i)}) \\ 1 & \cdots & 1 \end{bmatrix}^T \qquad (4.32) $$

$$ F_k = \left[\, f_k(x^{(0)})\ \cdots\ f_k(x^{(i)}) \,\right]^T \qquad (4.33) $$

for k = 1, …, m, which is the least-squares optimal solution to the linear regression problems $c_k\, l_k^{(i)} + \delta_k^{(i)} = f_k$, k = 1, …, m, equivalent to (4.29). Note that the matrices $C_k^T C_k$ are non-singular for i > 1. For i = 1, only the multiplicative correction with the L^(i) components is used (calculated in a similar way).

In order to illustrate the multipoint response correction, consider the dielectric resonator antenna (DRA) shown in Figure 4.11 [50]. The DRA is suspended above the ground plane in order to achieve enhanced impedance bandwidth. The design specification imposed on the reflection coefficient of the antenna is |S11| ≤ −15 dB for 5.1–5.9 GHz. The design variables are x = [ax ay az ac us ws ys g1 by]^T. The high-fidelity model f is simulated using the CST MWS transient solver (~800,000 mesh cells, evaluation time 20 min).
The low-fidelity model c is also evaluated in CST (~30,000 mesh cells, evaluation time 40 s).

Figure 4.11 Suspended DRA [50]: (a) 3D view of the antenna and its housing, (b) top view, and (c) front view

The initial design is x^(0) = [8.0 14.0 8.0 0.0 2.0 8.0 4.0 2.0 6.0]^T mm. The antenna responses at the initial design and at the design found using the multipoint response correction surrogate are shown in Figure 4.12 (see [50] for details). Figure 4.13 demonstrates the better generalization capability of the multipoint surrogate as compared to a single-point correction. The misalignment between c and f shown in Figure 4.13(a) is perfectly removed by single-point correction at the design where it is established, but the alignment is not as good at other designs.

Figure 4.12 Suspended DRA: high-fidelity model response at the initial design and at the optimized design obtained using multipoint response correction

Figure 4.13 Suspended DRA: (a) low- and high-fidelity model responses at three designs; (b) single-point-corrected low- and high-fidelity model responses at the same designs (single-point correction established at one of the designs); (c) multipoint-corrected low- and high-fidelity model responses

On the other hand, as shown in Figure 4.13(c), the multipoint response correction improves the model alignment at all designs involved in the model construction (Figure 4.13(c) only shows the global part $L^{(i)} \odot c(x) + \Delta_r^{(i)}$ without d^(i), which would give a perfect alignment at x^(i)).
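Because (4.31) is just a per-frequency linear regression, the global correction terms can be computed in a few lines. The sketch below (plain NumPy; the model responses are assumed to be precomputed arrays rather than live simulator calls) solves (4.29) analytically and applies the correction (4.28):

```python
import numpy as np

def multipoint_correction(C_resp, F_resp):
    """Multipoint output-SM correction terms, per (4.28)-(4.33).

    C_resp, F_resp: arrays of shape (p, m) holding low- and high-fidelity
    responses at p designs x^(0)..x^(i) (m frequency points each).
    Returns L and Delta_r (length-m vectors) solving the per-component
    regressions l_k * c_k + delta_k = f_k in the least-squares sense.
    """
    p, m = C_resp.shape
    L = np.empty(m)
    D = np.empty(m)
    for k in range(m):
        # Regression matrix of (4.32): low-fidelity values plus a ones column
        Ck = np.column_stack([C_resp[:, k], np.ones(p)])
        sol, *_ = np.linalg.lstsq(Ck, F_resp[:, k], rcond=None)  # (4.31)
        L[k], D[k] = sol
    return L, D

def corrected_surrogate(c_x, L, D, d=None):
    """Evaluate s(x) = L (*) c(x) + Delta_r (+ local term d of (4.30))."""
    s = L * c_x + D
    return s if d is None else s + d
```

Note that with at least two designs (p ≥ 2) each 2×2 system in the loop is well posed, matching the remark in the text that the matrices C_k^T C_k are non-singular for i > 1.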
4.5.4 Feature-based modeling

The physics-based modeling techniques presented so far in this section are all parameterized ones, i.e., the low-fidelity model correction is realized using certain explicit transformations, the coefficients of which have to be extracted from the training data. Parameter-less methods are also available, including adaptive response prediction [51] and shape-preserving response prediction [52]. Here, a brief overview of another approach, referred to as feature-based modeling [53], is provided. Feature-based modeling aims at reducing the number of training data samples necessary to construct an accurate surrogate model by reformulating the modeling process and conducting it at the level of appropriately defined characteristic points of the system responses (so-called response features) [53]. The methodology is explained and demonstrated using the case of narrow-band antenna modeling. The high-fidelity model f(x) is obtained from full-wave EM simulation. It represents the reflection coefficient |S11| of the antenna evaluated at m frequencies, ω1 to ωm, i.e., we have f(x) = [f(x, ω1) … f(x, ωm)]^T. The objective is to build a replacement model (surrogate) s. The surrogate should represent the EM model in a given region X of the design space. The set of training samples is denoted as XT = {x1, x2, …, xN} ⊂ X. The corresponding EM model responses f(xk) are acquired beforehand. Within a conventional data-driven surrogate modeling paradigm, the responses f(x, ωj), j = 1, …, m, are approximated directly (either separately for each frequency or by treating the frequency as an additional input parameter of the model). The fundamental problem is the nonlinearity of the responses, as shown in Figure 4.14.

Figure 4.14 Exemplary responses of the dielectric resonator antenna considered in Section 3.1 (here, reflection coefficient |S11|).
The responses are evaluated in the region 7.0 ≤ ax ≤ 9.0 and 13.0 ≤ ay ≤ 15.0, at the frequencies of (a) 5.3 GHz and (b) 5.5 GHz. Other variables are fixed at the following values: az = 9, ac = 0, us = 2, ws = 10, ys = 8 (all in mm). Note the rapid changes of the response levels in certain regions of the design space.

Figure 4.15 clarifies the definition of the feature points in the case of narrow-band antennas. The characteristic point set is constructed sequentially as follows: (i) identification of the primary point, which corresponds to the center frequency (antenna resonance) and the response level at that frequency; (ii) allocation of the supplemental points (here, uniformly with respect to the level and separately on the left- and right-hand side of the primary point); (iii) allocation of the infill points, uniformly in frequency, in between the supplemental points. Clearly, one needs to ensure that the number of characteristic points is sufficient to allow reliable synthesis of the antenna response (here, through interpolation). On the other hand, although it is important that the major features of the response (e.g., the antenna resonance) are accounted for, the particular point allocation is not critical. The response features, once defined, can be easily extracted using simple post-processing of EM simulation results. For subsequent considerations, the jth feature point of f(xk) (j = 1, …, K, k = 1, …, N) will be denoted as $f_j^k = [\omega_j^k\ \ l_j^k]$. Here, $\omega_j^k$ and $l_j^k$ represent the frequency and the magnitude (level) components of $f_j^k$, respectively. For the sake of illustration, the frequency and level components of selected feature points are shown in Figure 4.16. The considered design space region is the same as the one in Figure 4.14. It is important that the functional landscapes of the feature points are not as nonlinear as those shown in Figure 4.14.
This is particularly the case for the frequency component. Clearly, it is to be expected that the construction of a reliable surrogate model at the feature point level will require a smaller number of training samples than modeling the reflection response in a traditional manner. Having the response features defined, the surrogate modeling process works as follows: first, the data-driven models $s_{\omega j}(x)$ and $s_{lj}(x)$, j = 1, …, K, of the sets of corresponding feature points are constructed using the available training designs, i.e., from $f_j^1, f_j^2, \ldots, f_j^N$, j = 1, …, K [13]. At this stage, we utilize kriging interpolation.

Figure 4.15 Definition of response features in the case of a narrow-band antenna reflection characteristic: the primary point (here, corresponding to the antenna resonance), the supplemental points distributed equally with respect to the response level, and the infill points distributed equally in frequency between the main and supplemental points (note that the number of points may be different for various intervals)

Figure 4.16 Frequency (left panel) and level (right panel) components as functions of the geometry parameters ax and ay (with other variables fixed) for three selected feature points. The responses are evaluated for the same design space region as that considered in Figure 4.14. Note that the functional landscapes of the feature point coordinates are considerably less nonlinear than those of the original responses (cf. Figure 4.14)
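The feature extraction itself is plain post-processing of the simulated sweep. The following is a minimal sketch of steps (i) and (ii) only; the particular level set is an illustrative assumption, and supplemental points are here taken as the samples nearest to each level rather than exact crossings:

```python
import numpy as np

def extract_features(freq, s11_db, levels=(-15.0, -10.0, -5.0)):
    """Extract narrow-band response features from a sampled |S11| curve.

    Sketch of steps (i)-(ii): the primary point is the antenna resonance
    (minimum of |S11| in dB); supplemental points are the samples closest
    to the prescribed response levels, taken separately on the left- and
    right-hand side of the resonance. Returns (frequency, level) pairs.
    """
    i0 = int(np.argmin(s11_db))
    feats = [(float(freq[i0]), float(s11_db[i0]))]   # primary point
    for lev in levels:
        for side in (slice(0, i0), slice(i0 + 1, len(freq))):
            idx = np.arange(len(freq))[side]
            if idx.size == 0:
                continue
            # Sample on this side of the resonance closest to the level
            j = idx[np.argmin(np.abs(s11_db[side] - lev))]
            feats.append((float(freq[j]), float(s11_db[j])))
    return feats
```

The extracted pairs are exactly the quantities that the per-feature kriging models $s_{\omega j}$ and $s_{lj}$ are trained on.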
The surrogate model itself is defined as:

$$ s(x) = [\,s(x, \omega_1)\ \ldots\ s(x, \omega_m)\,]^T \qquad (4.34) $$

where its jth component is given as:

$$ s(x, \omega_j) = I(W(x), L(x), \omega_j) \qquad (4.35) $$

In (4.35), $L(x) = [s_{l1}(x)\ s_{l2}(x)\ \ldots\ s_{lK}(x)]$ and $W(x) = [s_{\omega 1}(x)\ s_{\omega 2}(x)\ \ldots\ s_{\omega K}(x)]$ are the predicted feature point locations corresponding to the evaluation design x. As we are interested in evaluating the antenna response at a discrete set of frequencies ω1 through ωm, it is necessary to interpolate the level vector L and the frequency vector W into the response at the prescribed set of frequencies. This interpolation is represented as I(W, L, ω).

Feature-based modeling is illustrated using the rectangular-slot-coupled patch antenna shown in Figure 4.17, implemented on a 0.762-mm-thick Taconic RF-35 substrate (εr = 3.5, tanδ = 0.0018). The antenna parameters are represented by the vector x = [d d1 l1 s1 s2 w1]^T. The variable d2 = d1, whereas the dimensions w0 = 3, l0 = 30, and s0 = 0.15 remain fixed (all parameters are in mm). The high-fidelity model is implemented in CST Microwave Studio. The frequency range considered for modeling is from 1 to 4 GHz. The surrogate model of the reflection coefficient |S11| is constructed in the region X = [x(0) − d, x(0) + d], where x(0) = [4.5 4.0 15.0 1.2 1.2 15.0]^T mm and d = [2.5 4.0 5.0 1.0 1.0 5.0]^T mm.

Figure 4.17 Rectangular-slot-coupled patch antenna: geometry with marked design parameters. Gray and white colors represent the ground plane and slots, respectively, whereas the patch radiator (located on the opposite side of the substrate) is marked using a dashed line

Table 4.1 Modeling results of the patch antenna coupled to a rectangular slot (average error in %; N is the number of training points)

| Model                   | N = 20 | N = 50 | N = 100 | N = 200 | N = 400 | N = 800 |
| Feature-based surrogate | 32.3   | 25.9   | 14.4    | 11.7    | 9.3     | 7.5     |
| Kriging interpolation*  | 99.5   | 99.2   | 74.2    | 71.9    | 53.9    | 46.4    |

*Direct kriging interpolation of the high-fidelity model |S11| responses.
The feature-based surrogate has been constructed using six different training sets. Table 4.1 shows the average error values for the particular training sets. It should be noted that, because of the high nonlinearity of the antenna responses and the wide ranges of the geometry parameters, conventional modeling is of poor quality, whereas feature-based modeling ensures practically usable prediction power. The high-fidelity and feature-based surrogate responses at selected test designs are shown in Figure 4.18. For additional verification, we use the feature-based model for optimization of the antenna structure at two operating frequencies, 1.85 and 3.2 GHz. In all cases, the objective is to minimize the reflection level |S11| in the frequency range of 50 MHz around the given operating frequency. Figure 4.19 shows the antenna responses at the designs obtained by optimizing the feature-based surrogate. No further design tuning is necessary because of the good accuracy of the surrogate.

Figure 4.18 Patch antenna: high-fidelity and feature-based model responses at the selected test designs

Figure 4.19 Rectangular-slot-coupled patch antenna optimization: the high-fidelity model response at the initial design and at the design obtained by optimizing the feature-based surrogate, together with the surrogate model response. Results for the operating frequency of (a) 1.83 GHz and (b) 3.2 GHz

4.6 Model selection and validation

Having the model identified, its predictive power has to be estimated in order to assess the actual usefulness of the surrogate. Obviously, it is not a good idea to evaluate the model quality only at the training set locations, because this would usually give an overly optimistic assessment.
In particular, interpolative models exhibit zero error at the training points by definition. Some of the techniques described previously identify a surrogate model together with an estimation of the attendant approximation error (e.g., kriging or GPR). In general, model validation can be realized using stand-alone procedures specifically designed for this purpose. The simplest and probably the most popular way of validating a model is the split-sample method [1], where part of the available data set (the training set) is used to construct the surrogate, whereas the second part (the test set) serves purely for model validation. It should be emphasized that the error estimated by the split-sample method depends strongly on how the set of data samples is partitioned. A more accurate estimation of the model generalization error can be obtained by cross-validation [1]. In this method, the data set is divided into K subsets, and each of these subsets is sequentially used as a test set for a surrogate constructed on the other K − 1 subsets. The prediction error can be estimated from all the K error measures obtained in this process (e.g., as an average value). Cross-validation provides an error estimation that is less biased than that of the split-sample method. The disadvantage of this method is that the surrogate has to be constructed K times. As mentioned on several occasions, the entire procedure of allocating samples, acquiring data, and model identification and validation can be iterated until a prescribed surrogate accuracy level has been reached. In each iteration, a new set of training samples is added to the existing ones. The strategies for allocating the new samples (referred to as infill criteria [5]) usually aim at improving the global accuracy of the model, i.e., inserting new samples at the locations where the estimated modeling error is the highest.
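The K-fold cross-validation procedure just described can be sketched generically. Here `fit` and `predict` are placeholder callables standing in for any surrogate type (kriging, RBF, NN, etc.), which is an assumption of this sketch:

```python
import numpy as np

def cross_validation_error(X, y, fit, predict, K=5, seed=0):
    """Estimate surrogate generalization error by K-fold cross-validation.

    X: (N, n) training inputs, y: (N,) responses. fit(X, y) returns a
    model object; predict(model, X) returns predictions. Returns the
    average RMS error over the K folds (each fold serving once as the
    test set while the model is built on the other K-1 folds).
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))       # shuffle before splitting
    folds = np.array_split(idx, K)
    errors = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        model = fit(X[train], y[train])
        e = predict(model, X[test]) - y[test]
        errors.append(np.sqrt(np.mean(e ** 2)))
    return float(np.mean(errors))
```

The K model constructions in the loop are exactly the computational overhead the text attributes to cross-validation relative to the split-sample method.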
When the model is constructed in the context of design optimization, other infill criteria can be used, for example, maximization of the probability of improvement, i.e., identifying the locations that give the highest chance of improving the objective function value [5]. Selection of the best type of surrogate model to be used in a particular situation is very much problem dependent. For example, from the point of view of simulation-driven design, the main advantage of data-driven surrogates is that they are very fast. Unfortunately, the high computational cost of setting up such models (related to training data acquisition) is a significant disadvantage. In order to ensure decent accuracy, hundreds or even thousands of data samples are required, and the number of training points quickly grows with the dimensionality of the design space. Therefore, approximation surrogates are mostly suitable for building multiple-use library models or sub-models for decomposable systems [54]. Their use for ad hoc optimization of expensive computational models is rather limited unless the dimensionality of the parameter space is relatively low. Availability of fast low-fidelity models is an indication that physics-based surrogates may be a good choice. For example, in high-frequency electronics, a standard choice of the low-fidelity model is an equivalent circuit (versus full-wave EM simulation as the high-fidelity one). In many cases, fast low-fidelity models are not available, but coarse-mesh (or simplified-physics) simulation models may successfully play such a role, e.g., in antenna design [12] or aerospace engineering [45]. Physics-based surrogates are most typically used for solving simulation-driven optimization tasks.
4.7 Summary

Fast replacement models (or surrogates) are becoming more and more popular in contemporary engineering and science, as they are capable of facilitating various simulation-driven design tasks, such as parametric optimization, statistical analysis, or robust design. This chapter gives a brief outline of surrogate modeling techniques. A classification of surrogate models has been followed by a description of the surrogate modeling process, including DoEs, data acquisition, and model identification and validation. Two groups of models have been discussed: data-driven and physics-based ones. The most popular techniques pertinent to each group were formulated, including polynomial regression, radial basis function interpolation, kriging, SVR, SM, response correction techniques, and feature-based modeling. Finally, a qualitative comparison of the various modeling techniques, as well as their advantages and disadvantages, has been carried out. The material in this chapter can be used as a first guide to surrogate modeling and related techniques.

Acknowledgments

The author thanks Dassault Systemes, France, for making CST Microwave Studio available. This work is partially supported by the Icelandic Centre for Research (RANNIS) Grant 174114051.

References

[1] Queipo, N.V., Haftka, R.T., Shyy, W., Goel, T., Vaidyanathan, R., and Tucker, P.K. (2005) Surrogate-based analysis and optimization. Progress in Aerospace Sciences, 41, 1–28.
[2] Koziel, S., Yang, X.S., and Zhang, Q.J. (Eds.) (2013) Simulation-Driven Design Optimization and Modeling for Microwave Engineering. London: Imperial College Press.
[3] Simpson, T.W., Peplinski, J., Koch, P.N., and Allen, J.K. (2001) Metamodels for computer-based engineering design: survey and recommendations. Engineering with Computers, 17, 129–150.
[4] Søndergaard, J. (2003) Optimization using surrogate models – by the space mapping technique. Ph.D.
Thesis, Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby.
[5] Forrester, A.I.J., and Keane, A.J. (2009) Recent advances in surrogate-based optimization. Progress in Aerospace Sciences, 45, 50–79.
[6] Couckuyt, I. (2013) Forward and inverse surrogate modeling of computationally expensive problems. Ph.D. Thesis, Ghent University.
[7] Lophaven, S.N., Nielsen, H.B., and Søndergaard, J. (2002) DACE: A Matlab Kriging Toolbox. Technical University of Denmark.

Modelling methodologies in analogue integrated circuit design

[8] Gorissen, D., Crombecq, K., Couckuyt, I., Dhaene, T., and Demeester, P. (2010) A surrogate modeling and adaptive sampling toolbox for computer based design. Journal of Machine Learning Research, 11, 2051–2055.
[9] Gorissen, D., Couckuyt, I., Laermans, E., and Dhaene, T. (2010b) Multiobjective global surrogate modeling, dealing with the 5-percent problem. Engineering with Computers, 26, 81–98.
[10] Bandler, J.W., Biernacki, R.M., Chen, S.H., Grobelny, P.A., and Hemmers, R.H. (1994) Space mapping technique for electromagnetic optimization. IEEE Transactions on Microwave Theory and Techniques, 42, 2536–2544.
[11] Bandler, J.W., Cheng, Q.S., Gebre-Mariam, D.H., Madsen, K., Pedersen, F., and Søndergaard, J. (2003) EM-based surrogate modeling and design exploiting implicit, frequency and output space mappings. IEEE Int. Microwave Symp. Digest, Philadelphia, PA, 1003–1006.
[12] Koziel, S., and Ogurtsov, S. (2014) Antenna Design by Simulation-Driven Optimization. Surrogate-Based Approach. Springer.
[13] Koziel, S., and Bandler, J.W. (2008) Support-vector-regression-based output space-mapping for microwave device modeling. IEEE MTT-S Int. Microwave Symp. Dig., Atlanta, GA, 615–618.
[14] Koziel, S., and Leifsson, L. (2016) Simulation-Driven Design by Knowledge-Based Response Correction Techniques. New York, NY: Springer.
[15] Kleijnen, J.P.C. (2009) Kriging metamodeling in simulation: a review.
European Journal of Operational Research, 192, 707–716.
[16] Rayas-Sanchez, J.E. (2004) EM-based optimization of microwave circuits using artificial neural networks: the state-of-the-art. IEEE Transactions on Microwave Theory and Techniques, 52, 420–435.
[17] Giunta, A.A., Wojtkiewicz, S.F., and Eldred, M.S. (2003) Overview of modern design of experiments methods for computational simulations. Paper AIAA 2003-0649. American Institute of Aeronautics and Astronautics.
[18] Santner, T.J., Williams, B., and Notz, W. (2003) The Design and Analysis of Computer Experiments. Springer-Verlag.
[19] Koehler, J.R., and Owen, A.B. (1996) Computer experiments. In S. Ghosh and C.R. Rao (Eds.) Handbook of Statistics. Elsevier Science B.V. 13, pp. 261–308.
[20] Devabhaktuni, V.K., Yagoub, M.C.E., and Zhang, Q.J. (2001) A robust algorithm for automatic development of neural-network models for microwave applications. IEEE Transactions on Microwave Theory and Techniques, 49, 2282–2291.
[21] Cheng, Q.S., Koziel, S., and Bandler, J.W. (2006) Simplified space mapping approach to enhancement of microwave device models. International Journal of RF and Microwave Computer-Aided Engineering, 16, 518–535.
[22] McKay, M., Conover, W., and Beckman, R. (1979) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21, 239–245.
[23] Beachkofski, B., and Grandhi, R. (2002) Improved distributed hypercube sampling. Paper AIAA 2002-1274. American Institute of Aeronautics and Astronautics.
[24] Leary, S., Bhaskar, A., and Keane, A. (2003) Optimal orthogonal-array-based Latin hypercubes. Journal of Applied Statistics, 30, 585–598.
[25] Ye, K.Q. (1998) Orthogonal column Latin hypercubes and their application in computer experiments. Journal of the American Statistical Association, 93, 1430–1439.
[26] Palmer, K., and Tsui, K.-L. (2001) A minimum bias Latin hypercube design. IIE Transactions, 33, 793–808.
[27] Golub, G.H., and Van Loan, C.F. (1996) Matrix Computations. 3rd ed. The Johns Hopkins University Press.
[28] Wild, S.M., Regis, R.G., and Shoemaker, C.A. (2008) ORBIT: optimization by radial basis function interpolation in trust-regions. SIAM Journal on Scientific Computing, 30, 3197–3219.
[29] Rasmussen, C.E., and Williams, C.K.I. (2006) Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA.
[30] Journel, A.G., and Huijbregts, C.J. (1981) Mining Geostatistics. Academic Press.
[31] Gunn, S.R. (1998) Support vector machines for classification and regression. Technical Report. School of Electronics and Computer Science, University of Southampton, Southampton.
[32] Angiulli, G., Cacciola, M., and Versaci, M. (2007) Microwave devices and antennas modelling by support vector regression machines. IEEE Transactions on Magnetics, 43, 1589–1592.
[33] Smola, A.J., and Schölkopf, B. (2004) A tutorial on support vector regression. Statistics and Computing, 14, 199–222.
[34] Meng, J., and Xia, L. (2007) Support-vector regression model for millimeter wave transition. Journal of Infrared, Millimeter, and Terahertz Waves, 28, 413–421.
[35] Andrés, E., Salcedo-Sanz, S., Monge, F., and Pérez-Bellido, A.M. (2012) Efficient aerodynamic design through evolutionary programming and support vector regression algorithms. International Journal Expert Systems with Applications, 39, 10700–10708.
[36] Zhang, K., and Han, Z. (2013) Support vector regression-based multidisciplinary design optimization in aircraft conceptual design. AIAA Aerospace Sciences Meeting. AIAA paper 2013-1160.
[37] Haykin, S. (1998) Neural Networks: A Comprehensive Foundation. 2nd ed. Upper Saddle River, NJ: Prentice Hall.
[38] Levin, D. (1998) The approximation power of moving least-squares. Mathematics of Computation, 67, 1517–1531.
[39] Forrester, A.I.J., Sóbester, A., and Keane, A.J. (2007) Multi-fidelity optimization via surrogate modeling. Proceedings of the Royal Society A, 463, 3251–3269.
[40] Huang, L., and Gao, Z. (2012) Wing-body optimization based on multifidelity surrogate model. 28th Int. Congress of the Aeronautical Sciences, Brisbane, Australia.
[41] Toal, D.J.J., and Keane, A.J. (2011) Efficient multipoint aerodynamic design optimization via cokriging. Journal of Aircraft, 48, 1685–1695.
[42] Laurenceau, J., and Sagaut, P. (2008) Building efficient response surfaces of aerodynamic functions with kriging and cokriging. AIAA Journal, 46, 498–507.
[43] Koziel, S., Ogurtsov, S., Couckuyt, I., and Dhaene, T. (2013) Variable-fidelity electromagnetic simulations and co-kriging for accurate modeling of antennas. IEEE Transactions on Antennas and Propagation, 61, 1301–1308.
[44] Alexandrov, N.M., and Lewis, R.M. (2001) An overview of first-order model management for engineering optimization. Optimization and Engineering, 2, 413–430.
[45] Leifsson, L., and Koziel, S. (2015) Simulation-Driven Aerodynamic Design Using Variable-Fidelity Models. Imperial College Press.
[46] Bandler, J.W., Cheng, Q.S., Dakroury, S.A., et al. (2004) Space mapping: the state of the art. IEEE Transactions on Microwave Theory and Techniques, 52, 337–361.
[47] Koziel, S., Bandler, J.W., and Madsen, K. (2006) A space mapping framework for engineering optimization: theory and implementation. IEEE Transactions on Microwave Theory and Techniques, 54, 3721–3730.
[48] Salleh, M.K.M., Pringent, G., Pigaglio, O., and Crampagne, R. (2008) Quarter-wavelength side-coupled ring resonator for bandpass filters. IEEE Transactions on Microwave Theory and Techniques.
[49] Koziel, S., Cheng, Q.S., and Bandler, J.W. (2010b) Implicit space mapping with adaptive selection of preassigned parameters. IET Microwaves, Antennas & Propagation, 4, 361–373.
[50] Koziel, S., and Leifsson, L. (2013) Multi-point response correction for reduced-cost EM-simulation-driven design of antenna structures. Microwave and Optical Technology Letters, 55 (9), 2070–2074.
[51] Leifsson, L., and Koziel, S. (2017) Adaptive response prediction for aerodynamic shape optimization. Engineering Computations, 34 (5), 1485–1500.
[52] Koziel, S. (2010) Shape-preserving response prediction for microwave design optimization. IEEE Transactions on Microwave Theory and Techniques, 58 (11), 2829–2837.
[53] Koziel, S., and Bekasiewicz, A. (2017) Computationally feasible narrowband antenna modeling using response features. International Journal of RF and Microwave Computer-Aided Engineering, 27 (4), e21077.
[54] Koziel, S., and Ogurtsov, S. (2013) Decomposition, response surface approximations and space mapping for EM-driven design of microwave filters. Microwave and Optical Technology Letters, 55 (9), 2137–2141.

Chapter 5
Verification of modeling: metrics and methodologies
Ahmad Tarraf and Lars Hedrich

5.1 Overview

Behavioral modeling of analog circuits is still an open problem with a long research history [1]. Nowadays, designers have the choice of either manually writing models that roughly describe the system behavior with a high simulation speed-up, or spending a lot of time generating complex models that are much more accurate but offer low speed-ups. As the systems integrable on a chip become more complex and heterogeneous, the use of accurate behavioral models for analog signal processing and interfacing would enhance design, simulation and validation routines. Thus, the goal is to achieve a fully automatic modeling process with a high speed-up and the best possible accuracy. Due to the lack of automatic tools, most design groups manually write Verilog-A, VHDL-AMS or MAST models to perform a module or system simulation. Even if a behavioral model is available, the succeeding problem is to prove the validity of the model. For example, due to the increased demand for safety in autonomous driving applications, the need for a verifiable methodology of model generation is obvious [2]. Hence, we can state that there exists a lack of formally verified models today.
In this chapter we want to tackle both problems:

● Propose metrics to judge the model-verification process.
● Describe approaches and methodologies going systematically beyond the standard simulation-driven model verification.
● Propose approaches to generate models in a correct-by-construction manner.

Institute for Computer Science, University of Frankfurt, Frankfurt, Germany

5.1.1 State space and normal form

Many of the approaches presented later depend on the idea of a continuous state space of the underlying analog circuit. Here we give a short description of this concept. In most simulators and more sophisticated tools, the analog circuit is described by a netlist of electrical compact models. The transistors are often modeled by a complex compact model like the BSIM [3] model. The circuit is then mathematically described by a system of implicit differential algebraic equations (DAEs) obtained by applying the modified nodal approach:

    f(\dot{x}(t), x(t), u(t)) = 0    (5.1)

where x is a vector containing the system voltages and currents, and u is a vector of input variables. This system can be locally linearized to:

    C \dot{x} + G x = b u    (5.2)
    y = r^T x    (5.3)

with the conduction matrix G and the capacitance matrix C; the input and output vectors are b and r^T, respectively, and the output is y. Here only single-input single-output systems are considered. However, the methodology can be transferred to multiple inputs and multiple outputs. By using the transformation matrices E and F, the system can be transformed from the x ∈ R^n space to the x_s ∈ R^n space. This can be done with the following transformation:

    x = F x_s    (5.4)

When additionally (5.2) is multiplied with E from the left, (5.2) and (5.3) can be written as:

    E C F \dot{x}_s + E G F x_s = E b u    (5.5)
    y = r^T F x_s    (5.6)

Equations (5.5) and (5.6) can be rewritten as:
    \tilde{C} \dot{x}_s + \tilde{G} x_s = \tilde{b} u    (5.7)
    y = \tilde{r}^T x_s    (5.8)

With the appropriate transformation matrices F and E, (5.7) and (5.8) represent the Kronecker canonical form of the system, with \tilde{C} being a block-diagonal unity and zero matrix and \tilde{G} being a block-diagonal matrix (see (5.11)). For that purpose, F is computed from the right eigenvectors V_R of the generalized eigenproblem, while E is a properly calculated matrix from the same problem, such that:

    F = V_R    (5.9)
    E = \tilde{G} V_R^{-1} G^{-1}    (5.10)

Expanding (5.7) and (5.8) results in the following expanded Kronecker canonical form representation:

    \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}
    \begin{bmatrix} \dot{x}_{s,l} \\ \dot{x}_{s,\infty} \end{bmatrix} +
    \begin{bmatrix} \Lambda & 0 \\ 0 & I \end{bmatrix}
    \begin{bmatrix} x_{s,l} \\ x_{s,\infty} \end{bmatrix} =
    \begin{bmatrix} \tilde{b}_l \\ \tilde{b}_\infty \end{bmatrix} u    (5.11)

    y = \begin{bmatrix} \tilde{r}_l^T & \tilde{r}_\infty^T \end{bmatrix}
    \begin{bmatrix} x_{s,l} \\ x_{s,\infty} \end{bmatrix}    (5.12)

where I is the identity matrix, while Λ is a diagonal or block-diagonal matrix with the generalized eigenvalues of the system in numerically decreasing order. As indicated, transformed quantities are marked with a tilde (~). Equations (5.11) and (5.12) can be divided into a differential part (subscript l) and an algebraic part (subscript ∞). The transformation matrices are also split into two parts, which are later used in Section 5.5.1.3:

    F = \begin{bmatrix} F_l & F_\infty \end{bmatrix}    (5.13)
    E = \begin{bmatrix} E_l \\ E_\infty \end{bmatrix}    (5.14)

The number of nonzero diagonal elements in \tilde{C} corresponds to the number of finite eigenvalues λ_i of the generalized eigenproblem. As is well known, the number of eigenvalues results from the energy-storing elements of the system. However, some of those states are the result of parasitic components. For example, the ports of a transistor have a parasitic resistance along a current path as well as parasitic capacitances between each other. The eigenvalues of those states lie far to the left of zero, corresponding to very high frequencies. To neglect them, a dominant-pole order reduction [4] can be carried out by simply shrinking the dynamic part (subscript l) of (5.11) in favor of the algebraic part, which at the same time reduces (5.12).
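As a toy illustration of the split in (5.11), the following numpy sketch (the matrix values are chosen for illustration only, not taken from the chapter) eliminates the algebraic states of a small descriptor system via a Schur complement and recovers its single finite eigenvalue:

```python
import numpy as np

# Descriptor system  C x' + G x = b u, partitioned so that the second
# block of states carries no dynamics (singular capacitance matrix C):
#   [C11 0] [x1']   [G11 G12] [x1]   [b1]
#   [ 0  0] [x2'] + [G21 G22] [x2] = [b2] u
C11 = np.array([[1.0]])
G11 = np.array([[2.0]]); G12 = np.array([[-1.0]])
G21 = np.array([[-1.0]]); G22 = np.array([[1.0]])
b1 = np.array([[1.0]]);  b2 = np.array([[0.0]])

# The algebraic rows give x2 = G22^{-1} (b2 u - G21 x1); substituting
# yields the reduced differential part (a Schur complement in G):
G_red = G11 - G12 @ np.linalg.solve(G22, G21)
b_red = b1 - G12 @ np.linalg.solve(G22, b2)

# Finite eigenvalues of the pencil = eigenvalues of -C11^{-1} G_red;
# the remaining (eliminated) state corresponds to an infinite eigenvalue.
lam = np.linalg.eigvals(-np.linalg.solve(C11, G_red))
print(lam.real)  # -> [-1.]
```

Here the two-state system has one finite eigenvalue (the differential part) and one infinite eigenvalue (the algebraic part), mirroring the block structure of (5.11).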
This can also be seen as converting a finite eigenvalue λ_i into an infinite eigenvalue. Altogether, we now have a reduced system in Kronecker canonical form (Equations (5.11) and (5.12)) representing a normalized linear state space for a single operating point of a nonlinear analog circuit. As the systems are mostly nonlinear, many operating points x_sp must be sampled and used to build the reduced Kronecker canonical form in order to get an overall impression of the system. However, only the reachable sample points

    x_{sp} \in S_R

being part of the reachable set S_R are important. This is due to the fact that the other points will never be visited by the system. Therefore, a reachability analysis on top of the sampling process and the reduced Kronecker canonical form computation is used [5]. It calculates the reachable set of sample points S_R, approximating the true reachable state-space region. We can now use the set of reachable sample points together with the reduced Kronecker canonical form for coverage measures (see Section 5.2.1.1), equivalence checking (EC) (see Section 5.4.1), and model abstraction and generation (see Section 5.5.1).

5.2 Model validation

The main challenge for the validation of behavioral models is to ensure that all important behaviors have been captured and that no unwanted, possibly spurious, behaviors have been included in the model. This is done in general by simulation. The behavioral model can be compared with the original netlist—if it already exists—by putting both circuits in the same test bench (see Figure 5.1). The input waveforms and the test bench have to be provided by the designer.

Figure 5.1 Test bench comparing the reference circuit against the behavioral model with error bounds ΔA and ΔB (Verilog-A model excerpt not reproduced)

By ensuring these design requirements, the designer can ensure a "good behavior" of the inductor at the operating frequency [3].
7.3 Surrogate modeling

Surrogate models are commonly used when an outcome of interest of a complex system cannot be easily, or cheaply, measured by experiments or simulations [10]. Therefore, an approximate model of the output is developed and used instead. In this section, the basic steps for the generation of a surrogate model are described. Generating a surrogate model usually involves four steps, which are briefly described next.

1. Design of experiments: The objective of surrogate models is to emulate the output response of a given system. Therefore, as they are a type of machine-learning technique, the model has to learn how the system responds to a given input. So, the first step in generating surrogate models is to select the input samples from which the model is going to learn. These samples should evenly cover the design space, so that it can be accurately modeled. In order to perform this sampling, different techniques are available, from classical Monte Carlo (MC) to quasi-MC (QMC) or Latin hypercube sampling [10]. In this chapter, QMC is used.
2. Accurate expensive evaluation: Surrogate models are usually built by learning from the most accurate evaluation possible for a given structure. However, most times these evaluations are time-consuming. In our work, these accurate evaluations are EM simulations, which are performed with ADS Momentum [15]. Depending on the size of the training set (the samples from which the model is going to learn), these simulations could even last for weeks. However, these simulations are only performed once for a given technology node, therefore being useful for several years, as technology nodes do not become obsolete in months. Any modeling technique can later be used in order to build a new model using the same training set.
3. Model construction: This concerns the core functions used to build a surrogate model.
The literature reports approaches based on artificial neural networks, support vector machines, parametric macromodels, Gaussian-process or kriging models, etc. [16]. In this chapter, ordinary kriging models are used. Different MATLAB toolboxes like SUMO [17] or DACE [18] support this type of model. In the examples reported in this chapter, DACE was used. The motivations to use DACE were practical: it is freely available, simple to use, and it runs over a widely used software platform, MATLAB.
4. Model validation: Many different techniques may be used in order to validate the model and assess its accuracy, e.g., cross-validation, bootstrapping and subsampling [19]. In this chapter, in order to validate the model, a set of points was generated independently of the training samples. These samples will be referred to as test samples and were also generated using QMC.

On the usage of machine-learning techniques

7.3.1 Modeling strategy

The modeling of integrated inductors has always been considered one of the biggest bottlenecks in RF design. As previously mentioned, over time, several models have been developed in order to accurately predict the behavior of such passive components [5–9,11,12]. However, most of them have not been successful in modeling the complete design space of the inductors. Therefore, in order to attain such a goal, new modeling strategies have to be developed. It is important to recall that inductance and quality factor are functions of the frequency, as seen in Figure 7.3. Usually, when developing models for integrated inductors, efforts go in the direction of building frequency-dependent models [20]. However, surrogate models become much more complex with the increase in the number of parameters. This exponential complexity growth also applies if the number of training samples increases.
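The four steps above can be sketched end to end in a few lines. This is only a stand-in pipeline, not the chapter's actual setup: a 1-D low-discrepancy (van der Corput/Halton) sequence stands in for the QMC sampler, a toy analytic function stands in for the ADS Momentum EM simulation, and a Gaussian radial-basis interpolator stands in for the DACE ordinary-kriging model (both interpolate the training data exactly):

```python
import numpy as np

def halton(n, base=2):
    """First n points of the 1-D van der Corput/Halton sequence."""
    pts = []
    for i in range(1, n + 1):
        f, r, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        pts.append(r)
    return np.array(pts)

def fit_rbf(x, y, eps=16.0):
    """Gaussian RBF interpolator (a simple kriging stand-in)."""
    K = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    w = np.linalg.solve(K, y)
    return lambda xq: np.exp(-(eps * (xq[:, None] - x[None, :])) ** 2) @ w

f = lambda t: np.sin(2 * np.pi * t)   # toy stand-in for the EM simulation

x_train = halton(31)                  # step 1: DoE (QMC samples)
y_train = f(x_train)                  # step 2: "expensive" evaluation
model = fit_rbf(x_train, y_train)     # step 3: model construction
x_test = halton(25, base=3)           # step 4: independent test samples
max_err = np.max(np.abs(model(x_test) - f(x_test)))
print(max_err < 0.1)  # -> True
```

Using a different base for the test sequence gives validation points independent of the training samples, mirroring the chapter's independently generated QMC test set.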
To alleviate this problem, problem-specific knowledge can be used to reduce the complexity of model creation. In this chapter, the modeling of inductors is performed in a frequency-independent fashion. Hence, an independent model is created for each frequency point. This increases the accuracy, highly reduces the complexity of the models, and also shortens the time needed to generate them [13]. The initial strategy was to create a surrogate model valid in the complete design space. The model was created using 700 inductors generated with the QMC technique. Two different models were developed: one for predicting the inductance and the other for the quality factor (denoted as L and Q models, for the sake of simplicity). Remember that, since the models are frequency-independent, L and Q models have to be created for each frequency point. The technology selected for the model creation was a 0.35 μm CMOS technology, and two different inductor topologies were modeled: asymmetric and symmetric (as shown in Figure 7.2). The selection of the technology process in these experiments was only motivated by the available foundry data for EM simulation needed to build the inductor surrogate model. The created L and Q models were valid for inductors with any given number of turns, inner diameter and turn width within the ranges shown in Table 7.1. Since a validation set of inductors is needed in order to test the model, 210 test inductors were generated, also using a QMC sampling technique.

Table 7.1 Inductor design variables

Parameter   Minimum    Step       Maximum
N           1          1          7
W           5 μm       0.05 μm    25 μm
Din         10 μm      0.05 μm    300 μm
S           2.5 μm     0.05 μm    2.5 μm

Models for three different frequencies were generated (20 MHz, 900 MHz and 2.45 GHz).
It is possible to observe in Tables 7.2 and 7.3 that the mean relative errors of the model are larger at higher frequencies (e.g., 2.45 GHz), for both the asymmetric and the symmetric inductor topologies, respectively. Therefore, a more advanced modeling strategy is needed in order to minimize the modeling error as much as possible. It is well known that the number of inductor turns can only take a few discrete values (in general, just integer values, depending on the implementation of the parameterized layout template). Therefore, it becomes clear that by creating several surrogate models, one for each number of turns (e.g., one model for inductors with two turns, another for inductors with three turns, etc.), the model accuracy can be increased, because the complexity of the modeled design space dramatically decreases (for each model). The generation of separate surrogate models for each number of turns, instead of considering the number of turns as an input parameter of the surrogate model, brings several benefits: not only is the accuracy significantly enhanced, but the computational cost is also significantly decreased, as the computational complexity of the training process grows exponentially with the number of samples. The number of models to create is manageable, as the number of turns is typically between 1 and 7. This strategy increases the overall accuracy and efficiency of the model, as shown by the average errors of the 210 test inductors in Table 7.4 for the asymmetric inductors and in Table 7.5 for the symmetric inductors. However, inductors with many turns still present large L and Q errors, specifically at high frequencies (e.g., 2.45 GHz). The high error obtained at higher numbers of turns can be explained by the fact that some inductors from the training set have their SRF below or around the operating frequency. Kriging surrogate models assume continuity: if an input variable changes by a small amount, the output varies smoothly.
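The one-model-per-turn-count strategy can be sketched as a dictionary keyed by N, so that N never enters any individual model's input space. Everything below is illustrative: the inductance function is a made-up analytic stand-in for EM simulation, and a linear least-squares fit stands in for the per-N kriging models:

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_em_inductance(n, w, d_in):
    """Toy stand-in for the EM-simulated inductance (nH) of an n-turn
    coil; linear in (w, d_in) so this sketch's linear fit is exact."""
    return 0.1 * n ** 2 + 0.002 * n * d_in - 0.01 * w

# One model per number of turns N = 1..7 (ranges as in Table 7.1).
models = {}
for n in range(1, 8):
    wd = rng.uniform([5.0, 10.0], [25.0, 300.0], (50, 2))   # w, d_in samples
    y = fake_em_inductance(n, wd[:, 0], wd[:, 1])
    A = np.column_stack([np.ones(len(wd)), wd])              # [1, w, d_in]
    models[n], *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_L(n, w, d_in):
    """Look up the model trained only on n-turn inductors."""
    return models[n] @ np.array([1.0, w, d_in])

print(abs(predict_L(3, 10.0, 100.0) - fake_em_inductance(3, 10.0, 100.0)) < 1e-8)  # -> True
```

Each model only ever sees the two continuous inputs (w, d_in), which is the design-space reduction the chapter credits for the accuracy gain.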
Table 7.2 Average error (in %) of inductance and quality factor for 210 test asymmetric octagonal inductors: single model for all N, at 20 MHz, 900 MHz and 2.45 GHz (table values not reproduced)

Table 7.3 Average error (in %) of inductance and quality factor for 210 test symmetric octagonal inductors: single model for all N, at 20 MHz, 900 MHz and 2.45 GHz (table values not reproduced)

Table 7.4 Inductance and quality factor average error for 210 test asymmetric octagonal inductors (in %): one model for each N

              N=1    N=2    N=3    N=4    N=5    N=6     N=7
20 MHz    L   0.07   0.04   0.02   0.02   0.02   0.01    0.01
          Q   0.34   0.28   0.24   0.20   0.18   0.16    0.18
900 MHz   L   0.07   0.07   0.06   0.06   0.06   0.15    0.06
          Q   0.43   0.33   0.36   0.36   0.32   0.32    0.37
2.45 GHz  L   0.11   0.09   0.10   0.10   0.60   32.44   13.85
          Q   0.55   0.49   0.21   0.31   0.79   29.25   1.61

Table 7.5 Inductance and quality factor average error for 210 test symmetric octagonal inductors (in %): one model for each N

              N=1    N=2    N=3    N=4    N=5    N=6     N=7
20 MHz    L   0.15   0.01   0.01   0.01   0.01   0.01    0.02
          Q   0.65   0.18   0.11   0.15   0.12   0.16    0.41
900 MHz   L   0.16   0.02   0.02   0.02   0.02   0.02    0.03
          Q   0.71   0.14   0.10   0.13   0.11   0.11    0.09
2.45 GHz  L   0.18   0.02   0.03   0.08   0.39   122.75  41.14
          Q   0.82   0.14   0.13   0.13   0.29   24.17   2.38

However, in the adopted technology node, for operating frequencies such as 2.45 GHz, some inductors with more than five turns have their SRF close to (or even below) the operating frequency, where the inductance does not change smoothly. Therefore, the use of these inductors during the model construction severely degrades the accuracy of the model, because they present a sharp behavior that blurs the model creation. The accuracy of L and Q for the useful inductors is therefore dramatically increased if only inductors with an SRF sufficiently above the desired operating frequency are used for model training. However, this option is only feasible if we can detect which inductors have their SRF sufficiently above the frequency of operation and are, hence, useful. Therefore, in the proposed strategy, the construction of the model is based on a two-step method:
1. Generate surrogate models for the SRF (for each number of turns) using all training inductors.
2. In order to generate highly accurate surrogate models for L and Q, only those inductors from the training set whose SRF is sufficiently above the operating frequency are used. For example, if the operating frequency is 2.45 GHz, only inductors with, e.g., SRF > 2.95 GHz (using 500 MHz of margin) are used to generate the L and Q models.

Consequently, with this methodology, whenever a test inductor is going to be evaluated, its SRF value is predicted first. If the predicted SRF is below 2.95 GHz, the inductor is discarded, since it is not useful for the selected operating frequency. Otherwise, its inductance and quality factor are calculated using the L/Q models. The flowchart of the proposed modeling strategy is shown in Figure 7.4. In order to evaluate the validity of the new proposed strategy, the 210 test inductors were evaluated at three different frequencies. The model errors for L and Q are shown in Tables 7.6 and 7.7, and for the SRF model in Tables 7.8 and 7.9. It can be concluded that, by following this modeling strategy, the average error for inductance and quality factor is usually below 1% (even for the model at 2.45 GHz), which makes for a very accurate model of integrated inductors.
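The two-step evaluation flow just described can be sketched as follows. The model objects here are hypothetical placeholders (anything with a `.predict()` method, e.g. a trained kriging model, would fit); the numeric values are illustrative:

```python
class ConstModel:
    """Hypothetical placeholder for a trained surrogate model."""
    def __init__(self, value):
        self.value = value
    def predict(self, geometry):
        return self.value

SRF_MARGIN_HZ = 500e6   # the 500 MHz margin used in the text

def evaluate_inductor(geometry, f_op_hz, srf_model, l_model, q_model):
    """Two-step flow: screen by predicted SRF first, then query the
    L/Q models only for inductors usable at the operating frequency."""
    if srf_model.predict(geometry) < f_op_hz + SRF_MARGIN_HZ:
        return None                       # discarded: SRF too close to f_op
    return l_model.predict(geometry), q_model.predict(geometry)

ok = evaluate_inductor({}, 2.45e9, ConstModel(16.94e9), ConstModel(1.5), ConstModel(11.7))
bad = evaluate_inductor({}, 2.45e9, ConstModel(2.6e9), ConstModel(1.5), ConstModel(11.7))
print(ok)   # -> (1.5, 11.7)
print(bad)  # -> None
```

Screening first means the L/Q models are never queried outside the smooth region they were trained on, which is exactly why the filtered models in Tables 7.6 and 7.7 stay accurate.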
Figure 7.4 Flowchart of the proposed modeling strategy: in the first step, SRF models (one per N) are created from the complete training set and used to select inductors; in the second step, L/Q models (one per N) are created from the reduced training set and stored

Table 7.6 Inductance and quality factor average error for 210 test asymmetrical octagonal inductors (in %): one model for each N, filtered by SRF (filtering frequency: 1.1 GHz for the 900 MHz models, 2.95 GHz for the 2.45 GHz models)

              N=1    N=2    N=3    N=4    N=5    N=6    N=7
20 MHz    L   0.07   0.04   0.02   0.02   0.02   0.01   0.01
          Q   0.34   0.28   0.24   0.20   0.18   0.16   0.18
900 MHz   L   0.07   0.07   0.06   0.06   0.06   0.15   0.06
          Q   0.43   0.33   0.36   0.36   0.32   0.32   0.37
2.45 GHz  L   0.11   0.09   0.10   0.10   0.24   0.33   0.19
          Q   0.55   0.49   0.21   0.31   0.47   0.64   0.60

Table 7.7 Inductance and quality factor average error for 210 test symmetrical octagonal inductors (in %): one model for each N, filtered by SRF (filtering frequency: 1.1 GHz for the 900 MHz models, 2.95 GHz for the 2.45 GHz models)

              N=1    N=2    N=3    N=4    N=5    N=6    N=7
20 MHz    L   0.15   0.01   0.01   0.01   0.01   0.01   0.02
          Q   0.65   0.18   0.11   0.15   0.12   0.16   0.41
900 MHz   L   0.16   0.02   0.02   0.02   0.02   0.02   0.03
          Q   0.71   0.14   0.10   0.13   0.11   0.11   0.09
2.45 GHz  L   0.18   0.02   0.03   0.08   0.28   0.23   1.14
          Q   0.82   0.14   0.13   0.13   0.21   0.24   0.30

Table 7.8 SRF average error for 210 test asymmetrical octagonal inductors (in %) (table values not reproduced)

Table 7.9 SRF average error for 210 test symmetrical octagonal inductors (in %) (table values not reproduced)

7.4 Modeling application to RF design

The accurate models developed in the previous section can be used by the RF designer as an inductor performance evaluator in optimization processes in order to synthesize inductors. Furthermore, such models can also be used to model inductors at the circuit design stage. In this section, both model applications are exemplified.

7.4.1 Inductor optimization

In this subsection, the model is used as a performance evaluator in several inductor synthesis processes. A typical synthesis process is shown in Figure 7.5.
The single- or multi-objective optimization algorithm is linked with an inductor performance evaluator and automatically varies the inductor variables in order to achieve the optimal inductor performances (optimization objectives), while meeting several design constraints. During the inductor optimization/design stage, the designer is usually interested in obtaining a given inductance at the frequency of operation, with the largest quality factor and the smallest area occupation, since fabrication cost grows linearly with area and inductors represent a large percentage of the RF circuit area. These three parameters (inductance, quality factor and area) are mutually conflicting. In addition, the inductor must be designed in the flat-BW zone and the SRF should be sufficiently above the frequency of operation [3]. Consequently, the inductor optimization problem can be posed as a constrained optimization problem:

    \text{maximize } F(x), \quad F(x) = \{f_1(x), \ldots, f_n(x)\} \in \mathbb{R}^n
    \text{such that } G(x) \geq 0, \quad G(x) = \{g_1(x), \ldots, g_m(x)\} \in \mathbb{R}^m
    \text{where } x_i^L \leq x_i \leq x_i^U, \; i \in [1, p]

Figure 7.5 Optimization-based inductor synthesis loop: a single-/multi-objective optimizer passes the inductor variables (N, Din, W, S) to a performance evaluator, which returns the inductor performances (L(f), Q(f), area) against the objectives and constraints

where x is a vector with p geometric parameters, each design parameter being restricted between a lower limit x_i^L and an upper limit x_i^U. The functions f_j(x), with 1 ≤ j ≤ n, are the objectives to be optimized, where n is the total number of objectives. The functions g_k(x), with 1 ≤ k ≤ m, are design constraints that can be defined independently for each optimization problem. If n = 1, the optimization problem is single-objective; if n > 1, it is multi-objective.
Whereas the solution to the former is a single design point, the solution to the latter is a set of solutions in the search space, the Pareto set, exhibiting the best trade-offs between the objectives, i.e., the Pareto-optimal front (POF). In this chapter, the selection-based differential evolution (SBDE) algorithm [21] is used for single-objective optimization. As in other evolutionary algorithms, the system is initialized with a population of random solutions and searches for optimal solutions over a set of iterations. The population-based evolutionary optimization algorithm NSGA-II [22] was selected as the multi-objective optimization algorithm for the experiments in this chapter. NSGA-II is based on the concept of Pareto dominance and non-dominated ranking of solutions, and the output of the algorithm is a non-dominated set of points in the feasible objective space, i.e., the Pareto-optimal front. The inductor synthesis results reported in this chapter do not exploit any specific characteristics of SBDE and NSGA-II; hence, they can be replaced by any other single- and multi-objective optimization algorithm, respectively. Also, as previously said, it has to be taken into account that the synthesized inductors must be in the flat-BW zone, as described in Section 7.2. Therefore, in all optimizations, constraints are applied in order to guarantee that the selected inductors can operate at the chosen working frequency (WF). These constraints are specified in the following set of equations:

    \begin{cases}
    \text{area} < 400\,\mu\text{m} \times 400\,\mu\text{m} \\
    \left| L_{@\text{WF}} - L_{@\text{WF}+0.05\,\text{GHz}} \right| / L_{@\text{WF}} < 0.01 \\
    \left| L_{@\text{WF}} - L_{@\text{WF}-0.05\,\text{GHz}} \right| / L_{@\text{WF}} < 0.01 \\
    \left| L_{@\text{WF}} - L_{@0.1\,\text{GHz}} \right| / L_{@\text{WF}} < 0.05 \\
    Q_{@\text{WF}+0.05\,\text{GHz}} - Q_{@\text{WF}} > 0
    \end{cases}    (7.8)

where L@WF and Q@WF are the inductor's inductance and quality factor at the WF, respectively, and L@WF±0.05 GHz and Q@WF±0.05 GHz are the inductance and quality factor at WF±0.05 GHz, respectively. The first constraint in (7.8) sets a reasonable upper bound on the size of integrated inductors to be used in practice.
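A feasibility test over (7.8) can be sketched as below. The evaluator functions are hypothetical stand-ins for the frequency-indexed L and Q surrogate models, and the absolute values follow the flatness reading of (7.8); the toy inductors are illustrative only:

```python
def is_feasible(area_um2, L_of, Q_of, wf_ghz):
    """Constraint set (7.8): bounded area, inductance flat around and
    below the working frequency, and Q still rising at the WF (i.e.,
    SRF sufficiently above it). L_of/Q_of map frequency in GHz to the
    surrogate-predicted inductance and quality factor."""
    L_wf = L_of(wf_ghz)
    return (
        area_um2 < 400.0 * 400.0
        and abs(L_wf - L_of(wf_ghz + 0.05)) / L_wf < 0.01
        and abs(L_wf - L_of(wf_ghz - 0.05)) / L_wf < 0.01
        and abs(L_wf - L_of(0.1)) / L_wf < 0.05
        and Q_of(wf_ghz + 0.05) - Q_of(wf_ghz) > 0
    )

# A flat-BW toy inductor: L nearly constant in f, Q still rising at WF.
flat = is_feasible(200.0 * 200.0, lambda f: 2.25 + 0.001 * f, lambda f: 8.0 + f, 0.9)
# A toy inductor already past its Q peak at the WF fails the last test.
peaked = is_feasible(200.0 * 200.0, lambda f: 2.25 + 0.001 * f, lambda f: 8.0 - f, 0.9)
print(flat, peaked)  # -> True False
```

In the synthesis loop of Figure 7.5, a check like this (or an equivalent penalty) is what the optimizer applies to every candidate before it can enter the feasible set.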
The second to fourth constraints ensure that the inductance is sufficiently flat from DC to slightly above the WF. Finally, the last constraint guarantees that the SRF is sufficiently above the WF.

As a first synthesis example, a single-objective optimization, using the SBDE algorithm, is performed where the objective is to achieve a symmetrical octagonal inductor with L = 2.25 nH (±0.025 nH) at 0.9 GHz, while maximizing the quality factor. The optimization was performed with 300 individuals and 100 generations and took just 22.14 s of CPU time. The geometrical parameters of the obtained inductor were N = 2, Din = 286 μm and W = 25 μm, and its performance parameters are L = 2.268 nH and Q = 8.921. The SRF of this inductor is around 13.31 GHz. The inductor layout and behavior over frequency can be observed in Figure 7.6. However, in order to assess the model accuracy, the obtained inductor was EM-simulated and compared with the performances achieved by the surrogate model. The accuracy evaluation can be observed in Table 7.10, where it is possible to conclude that the model is extremely accurate.

Afterward, the same type of single-objective optimization is performed in order to obtain an asymmetric inductor with L = 1.5 nH (±0.05 nH) at 2.45 GHz. Again, the optimization was performed with 300 individuals and 100 generations and took only 23.44 s of CPU time, which, as in the previous example, is an impressive time. The geometrical parameters of the obtained inductor were N = 2, Din = 187 μm and W = 17.05 μm, and its performance parameters are L = 1.548 nH and Q = 11.697. The SRF of this inductor is around 16.94 GHz. The inductor

Modelling methodologies in analogue integrated circuit design

Figure 7.6 (a) Layout of the symmetric inductor obtained with the single-objective optimization.
(b) Illustrating the inductance (red curve and left y-axis) and quality factor (blue curve and right y-axis) as a function of frequency of the same inductor.

Table 7.10 Comparison of inductance and quality factor values obtained with the surrogate model and with EM simulation for the inductor shown in Figure 7.6 (WF = 900 MHz; columns: geometric parameters N, Din (μm), W (μm); LEM (nH); LSUR (nH); error (%); error (%)).

On the usage of machine-learning techniques

Figure 7.7 (a) Layout of the asymmetric inductor obtained with the single-objective optimization. (b) Illustrating the inductance (red curve and left y-axis) and quality factor (blue curve and right y-axis) as a function of frequency of the same inductor.

Table 7.11 Comparison of inductance and quality factor values obtained with the surrogate model and with EM simulation for the inductor shown in Figure 7.7 (WF = 2.45 GHz).

layout and behavior over frequency are shown in Figure 7.7. Again, as performed for the previously obtained inductor, the accuracy evaluation of the model can be observed in Table 7.11, where the inductor obtained in the optimization was EM-simulated and its error assessed.

After the single-objective optimizations were performed, it is interesting to perform some multi-objective optimizations in order to generate Pareto-optimal fronts (POFs) of inductors. As previously said, for this purpose, NSGA-II is used. The first optimization is performed with the objectives of maximizing inductance and quality factor, while minimizing the area of the inductors. The inductor topology considered for this optimization is the symmetric topology and the WF is 900 MHz. The entire optimization with 400 individuals and 100 generations takes 178.24 s of CPU time, and the obtained POF can be seen in Figure 7.8.
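NSGA-II itself is beyond a short listing, but its core notion of Pareto dominance, which defines the POF, reduces to a few lines. Below is a brute-force non-dominated filter over (L, Q, area) points, with L and Q maximized and area minimized; this is an illustrative sketch, not the NSGA-II ranking used in the chapter:

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly
    better in at least one; objectives are (L, Q, area) tuples with L and
    Q maximized and area minimized."""
    no_worse = a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2]
    better = a[0] > b[0] or a[1] > b[1] or a[2] < b[2]
    return no_worse and better

def pareto_front(points):
    """Keep only the non-dominated points (brute force, O(n^2))."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Applied to the final population of an optimization run, such a filter yields the set of best trade-off designs plotted in a POF.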
Due to the efficiency of the model, in just 3 min it is possible to generate a POF comprising the best trade-offs between objectives for this inductor topology, technology and frequency of operation.

Figure 7.8 POF obtained from the optimization maximizing inductance and quality factor, while minimizing the area, for a symmetric inductor topology and working frequency of 900 MHz. The color bar represents the third objective of the optimization, the area.

Figure 7.9 POF obtained from the optimization maximizing inductance and quality factor, while minimizing the area, for an asymmetric inductor topology and working frequency of 2.45 GHz. The color bar represents the third objective of the optimization, the area.

Afterward, for illustration purposes, another optimization was performed with the same objectives of maximizing inductance and quality factor, while minimizing the area of the inductors, but considering an asymmetric topology and a WF of 2.45 GHz. The entire optimization with 400 individuals and 100 generations takes 202.40 s of CPU time, and the obtained POF can be seen in Figure 7.9. As previously said, the POFs in Figures 7.8 and 7.9 show the best inductor designs for the selected objectives, operating frequencies and topologies. By obtaining these POFs, the RF designer can then use them as inductor-optimized databases during the circuit design stages. The idea is that, instead of searching for an inductor in the entire design space, the designer only has to search the POF for the inductor that best suits his/her needs.

7.4.2 Circuit design

In this section, the model will be used during the design stage of two well-known RF circuits.
The model is extremely accurate and therefore its usage in circuit design is highly endorsed. A VCO and an LNA will be manually designed, and the surrogate model is used to design the inductors.

7.4.2.1 Voltage-controlled oscillator

The first circuit to be designed is an LC double-differential cross-coupled VCO, as shown in Figure 7.10, intended to oscillate at 900 MHz. The oscillation frequency, fosc, of an LC VCO is given by:

\[
f_{osc} = \frac{1}{2\pi\sqrt{LC}}
\tag{7.9}
\]

where L is the inductance and C is the tank capacitance, which can be varied by using the varactors (Cvar). Since we are designing the VCO to oscillate at 900 MHz, from (7.9) it is possible to fix the capacitance and find the needed inductance or vice versa. In this example, the VCO is designed using a capacitance bank of six parallel capacitances and two varactors (in series, in order to have symmetry in the circuit), as shown in Figure 7.10. Each capacitance has a value of 100 fF and each varactor has a maximum capacitance, CvarMAX, of 2.8 pF. All of the component values are also shown in Figure 7.10.

Figure 7.10 Topology of the LC double-differential cross-coupled VCO used and sizes of all components used in the design (Mdd: W = 90 μm, L = 0.35 μm; Md: W = 140 μm, L = 0.35 μm; Mp1: W = 10 μm, L = 0.35 μm; Mn1: W = 110 μm, L = 0.35 μm; IBp = 1.5 mA; C = 100 fF; L = 13.96 nH).

The parameter Vtune represents an independent voltage source used to vary the capacitance of the varactors. In this example, we have performed a single-objective optimization in order to obtain an inductor with approximately 14 nH (as performed in Section 7.4.1). The objective was to achieve a symmetrical octagonal inductor with L = 14 nH (±0.1 nH) at 0.9 GHz, while maximizing the quality factor. The geometrical parameters of the obtained inductor were N = 5, Din = 252 μm and W = 5.2 μm, and its performance parameters are L = 13.960 nH and Q = 6.921.
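Equation (7.9) can be checked numerically. In the sketch below, the effective tank capacitance that makes the 13.96 nH inductor resonate at 900 MHz is computed; how that capacitance decomposes into the bank, the varactors and parasitics is not itemized here:

```python
import math

def lc_resonance(l_henry, c_farad):
    """Oscillation frequency of an LC tank, (7.9): f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

def required_capacitance(f_osc, l_henry):
    """Invert (7.9): tank capacitance needed for a target f_osc at fixed L."""
    return 1.0 / ((2.0 * math.pi * f_osc) ** 2 * l_henry)

# With the 13.96 nH inductor of this design, a 900 MHz oscillation needs an
# effective tank capacitance of about 2.24 pF (the physical tank combines
# the 6 x 100 fF bank, the two series varactors and device parasitics).
c_tank = required_capacitance(0.9e9, 13.96e-9)
print(f"required tank C: {c_tank * 1e12:.2f} pF")
print(f"check: f_osc = {lc_resonance(13.96e-9, c_tank) / 1e9:.3f} GHz")
```

The same two functions cover both design directions mentioned in the text: fixing C to find L, or fixing L to find C.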
The SRF of this inductor is around 3.923 GHz. The inductor layout and behavior over frequency can be observed in Figure 7.11. The obtained inductor was EM-simulated later on and the error of the model was assessed. The results are shown in Table 7.12, where it is possible to observe that the model has negligible errors. The inductor optimization was performed with 300 individuals and 100 generations and took 20.81 s of CPU time.

The transient response of the designed VCO can be observed in Figure 7.12 (blue curve). Since an inductor model is being used, it is expected that the performances shift due to the usage of this model. Therefore, in order to inspect the shifts that occur in the VCO performances, the used inductor was EM-simulated and the VCO re-simulated. The differences in the transient response can also be observed in

Figure 7.11 (a) Layout of the 13.960 nH symmetric inductor obtained with the single-objective optimization. (b) Illustrating the inductance (red curve and left y-axis) and quality factor (blue curve and right y-axis) as a function of frequency of the same inductor.

Table 7.12 Comparison of inductance and quality factor values obtained with the surrogate model and with EM simulation for the inductor shown in Figure 7.11 (WF = 900 MHz).

Figure 7.12 Transient response of the designed VCO.
A zoom over the signals is performed in order to illustrate the small difference between both transient responses.

Figure 7.13 DFT of the transient signal shown in Figure 7.12. A zoom over the signals is performed in order to illustrate the small difference between both DFT signals.

Figure 7.12 (red curve), where it is possible to conclude that negligible performance shifts occur (fosc does not shift), confirming once again the model accuracy. Furthermore, the discrete Fourier transform (DFT) of the transient signal can be observed in Figure 7.13. The phase noise of the VCO can be observed in Figure 7.14, and the tuning range in Figure 7.15; for these performances, the differences are negligible even at the centesimal level. It can be concluded that, since the model is so accurate, it is extremely useful in RF circuit design.

7.4.2.2 Low-noise amplifier

In this subsection the design of another well-known RF circuit, an LNA, is considered. The circuit topology considered in this chapter is the source-degenerated topology shown in Figure 7.16. The LNA is manually designed and is intended to operate in the ISM band of 2.4–2.5 GHz.
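For context on the role of the inductors in this topology: a standard small-signal result for the inductively degenerated common-source stage (a textbook relation, not derived in the chapter) is Zin(s) = s(Lg + Ls) + 1/(sCgs) + gm Ls/Cgs, so the source inductor sets the resistive part of the input match while the gate inductor tunes the resonance into the band. A numeric sketch with illustrative device values, none of them taken from the chapter's sizing:

```python
import math

def lna_zin(f_hz, lg, ls, cgs, gm):
    """Input impedance of an inductively degenerated common-source LNA:
    Zin = jw(Lg + Ls) + 1/(jw*Cgs) + gm*Ls/Cgs (ideal, lossless model)."""
    w = 2.0 * math.pi * f_hz
    return 1j * w * (lg + ls) + 1.0 / (1j * w * cgs) + gm * ls / cgs

# Assumed transconductance and gate capacitance (hypothetical values).
gm, cgs = 40e-3, 0.5e-12
ls = 50.0 * cgs / gm                                # Re{Zin} = 50 ohm
f0 = 2.45e9                                         # ISM band center
lg = 1.0 / ((2.0 * math.pi * f0) ** 2 * cgs) - ls   # Im{Zin} = 0 at f0
z = lna_zin(f0, lg, ls, cgs, gm)
```

With these assumptions the sketch yields Ls of roughly 0.6 nH and Lg of a few nH, values of the same order as the LS and LG entries reported later in Table 7.13.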
Figure 7.14 Phase noise of the designed VCO.

Figure 7.15 Tuning range of the designed VCO.

Figure 7.16 Topology of the source-degenerated LNA used and sizes of all components (apart from the inductors) used in the design (M1: W = 555 μm, L = 0.35 μm; M2: W = 600 μm, L = 0.35 μm; C1 = 1.278 pF; C2 = 1.03 pF; C3 = 743 fF).

Figure 7.17 Performances of the designed LNA.

The strategy used in order to design the LNA is different from the one used in the previously designed VCO. In this example, instead of using a single-objective optimization algorithm in order to size each inductor (LS, LG, LB and LD), the POF shown in Figure 7.9 is used as an optimized inductor library from which inductors can be selected to design the LNA. The needed inductors are therefore selected from the library and the LNA is manually designed. The values of each component are shown in Figure 7.16 and the most important LNA performances are shown in Figure 7.17. The gray shaded area corresponds to the ISM band, where the designed LNA is supposed to work. It is possible to observe that at the center frequency of the ISM band (2.45 GHz) the LNA has a gain (S21) of 15.23 dB, an input matching (S11) of −11.01 dB, an output matching (S22) of −21.12 dB and a noise figure of 2.917 dB.
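The "POF as inductor library" strategy reduces to a filtered lookup: keep the entries whose inductance is within the tolerance the circuit needs and take the one with the highest quality factor. A sketch follows; the (L, Q, area, geometry) entry layout is an assumption for illustration:

```python
def pick_inductor(pof, l_target_nh, tol_nh):
    """Select, from a POF of (L_nH, Q, area, geometry) entries, the
    highest-Q design whose inductance is within l_target +/- tol.
    Returns None when no entry of the front meets the requirement."""
    candidates = [e for e in pof if abs(e[0] - l_target_nh) <= tol_nh]
    return max(candidates, key=lambda e: e[1]) if candidates else None
```

Repeating the lookup once per required inductance value sizes all four LNA inductors without running any further optimization.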
Table 7.13 Comparison of inductance and quality factor values obtained with the surrogate model and with EM simulation for the inductors used in the LNA design of Figure 7.16 (WF = 2.45 GHz):

Inductor   W (μm)   LEM (nH)   LSUR (nH)   Error (%)   QEM     QSUR    Error (%)
LS         11.40    0.041      0.041       0.851       3.720   3.682   1.025
LG          6.90    0.138      0.137       0.154       3.154   3.169   0.466
LB          7.00    2.484      2.484       0.004       8.204   8.233   0.356
LD         13.45    1.905      1.903       0.054       11.51   11.55   0.317

Since the surrogate model yields an associated error, the inductors were EM-simulated and the LNA re-simulated. The performance deviations are also shown in Figure 7.17, where it is possible to observe that the deviations with respect to the performances obtained with the surrogate model are negligible. The error associated with the inductor performances can be seen in Table 7.13.

7.5 Conclusions

This chapter presents a surrogate model for integrated inductors based on machine-learning techniques, as well as an enhanced modeling strategy that is able to increase the model accuracy. The model can be used for the simulation of a single inductor and also for different applications inherent to RF design, such as the use of optimization algorithms and usage in circuit simulation during circuit design. In this chapter, several models were created for different operating frequencies and inductor topologies. Afterward, the model was used in both single- and multi-objective optimizations and proved to be very accurate and efficient in several optimizations, when compared to EM simulations. Furthermore, the model was used in circuit design experiments (design of VCOs and LNAs) and has proved to be an accurate and efficient model that can assist the RF designer in many different design examples.

Acknowledgment

This work was supported by TEC2016-75151-C3-3-R Project (AEI/FEDER, UE).

References

[1] R. C. Li, RF Circuit Design, 2nd Ed.
Hoboken, NJ: Wiley, 2012.
[2] G. Zhang, A. Dengi, and L. R. Carley, "Automatic synthesis of a 2.1 GHz SiGe low noise amplifier," in IEEE Radio Frequency Integrated Circuits (RFIC) Symposium, Seattle, WA, USA, 2002, pp. 125–128.
[3] R. González-Echevarría, R. Castro-López, E. Roca, et al., "Automated generation of the optimal performance trade-offs of integrated inductors," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 33, no. 8, pp. 1269–1273, 2014.
[4] C. De Ranter, G. Van der Plas, et al., "CYCLONE: automated design and layout of RF LC-oscillators," in Proceedings 37th Design Automation Conference, Los Angeles, CA, USA, 2000, pp. 11–14.
[5] C. P. Yue and S. S. Wong, "Physical modeling of spiral inductors on silicon," IEEE Transactions on Electron Devices, vol. 47, pp. 560–568, 2000.
[6] C. Wang, H. Liao, C. Li, et al., "A wideband predictive double-π equivalent-circuit model for on-chip spiral inductors," IEEE Transactions on Electron Devices, vol. 56, no. 4, pp. 609–619, 2009.
[7] Y. Cao, R. A. Groves, X. Huang, et al., "Frequency-independent equivalent circuit model for on-chip spiral inductors," IEEE Journal of Solid-State Circuits, vol. 38, no. 3, pp. 419–426, 2003.
[8] F. Passos, M. Kotti, R. González-Echevarría, et al., "Physical vs. surrogate models of passive RF devices," in IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, Portugal, 2015, pp. 117–120.
[9] F. Passos, M. H. Fino, and E. Roca, "Single-objective optimization methodology for the design of RF integrated inductors," in Camarinha-Matos L. M., Barrento N. S., and Mendonça R. (eds.), Technological Innovation for Collective Awareness Systems. DoCEIS 2014. IFIP Advances in Information and Communication Technology, vol. 423. Springer, Berlin, Heidelberg, 2014.
[10] A. I. J. Forrester, A. Sobester, and A. J.
Keane, Engineering Design via Surrogate Modelling – A Practical Guide. Wiley, 2008.
[11] S. K. Mandal, S. Sural, and A. Patra, "ANN- and PSO-based synthesis of on-chip spiral inductors for RF ICs," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 27, no. 1, pp. 188–192, 2008.
[12] I. Couckuyt, F. Declercq, T. Dhaene, H. Rogier, and L. Knockaert, "Surrogate-based infill optimization applied to electromagnetic problems," International Journal of RF and Microwave Computer-Aided Engineering, vol. 20, no. 5, pp. 492–501, 2010.
[13] F. Passos, E. Roca, R. Castro-López, and F. V. Fernández, "Radio-frequency inductor synthesis using evolutionary computation and Gaussian-process surrogate modeling," Applied Soft Computing, vol. 60, pp. 495–507, 2017.
[14] K. Okada and K. Masu, "Modeling of spiral inductors," in Advanced Microwave Circuits and Systems. InTech, 2010. doi: 10.5772/8435.
[15] Keysight, ADS Momentum. http://www.keysight.com/en/pc-1887116/momentum-3d-planar-em-simulator?cc=ES&lc=eng [accessed 2019].
[16] M. B. Yelten, T. Zhu, S. Koziel, P. D. Franzon, and M. B. Steer, "Demystifying surrogate modeling for circuits and systems," IEEE Circuits and Systems Magazine, vol. 12, no. 1, pp. 45–63, 2012.
[17] D. Gorissen, I. Couckuyt, P. Demeester, T. Dhaene, and K. Crombecq, "A surrogate modeling and adaptive sampling toolbox for computer based design," The Journal of Machine Learning Research, vol. 11, pp. 2051–2055, 2010.
[18] DACE – A Matlab Kriging Toolbox. http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=1460 [accessed November 2019].
[19] B. Bischl, O. Mersmann, H. Trautmann, and C. Weihs, "Resampling methods for meta-model validation with recommendations for evolutionary computation," Evolutionary Computation, vol. 20, no. 2, pp. 249–275, 2012.
[20] F. Ferranti, L. Knockaert, T. Dhaene, and G.
Antonini, "Parametric macromodeling based on amplitude and frequency scaled systems with guaranteed passivity," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, vol. 25, no. 2, pp. 139–151, 2012.
[21] K. Zielinski and R. Laur, "Constrained single-objective optimization using differential evolution," in IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 2006, pp. 223–230.
[22] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, pp. 182–197, 2002.

Chapter 8

Modeling of variability and reliability in analog circuits

Javier Martin-Martinez1, Javier Diaz-Fortuny1, Antonio Toro-Frias2, Pablo Martin-Lloret2, Pablo Saraza-Canflanca2, Rafael Castro-Lopez2, Rosana Rodriguez1, Elisenda Roca2, Francisco V. Fernandez2, and Montserrat Nafria1

One of the main challenges in the design of integrated circuits (ICs) in recent ultrascaled complementary metal oxide semiconductor (CMOS) technologies is how to deal with the large variability of the electrical characteristics of devices. In recent years, large progress has been made in understanding the metal oxide semiconductor field effect transistor (MOSFET) variability associated with fabrication processes, namely, the time-zero variability (TZV) introduced by phenomena such as random dopant distribution or line edge roughness [1–7], and their impact on circuits [8,9]. However, the time-dependent variability (TDV), though it can strongly limit the IC reliability, is still a controversial issue. TDV is typically associated with processes such as random telegraph noise (RTN) [10–12], and wear-out mechanisms such as bias temperature instabilities (BTI) [13–17] or hot-carrier injection (HCI) [18–21].
At device level, during circuit operation, these phenomena lead to stochastic and time-dependent variations of several CMOS device parameters (e.g., the threshold voltage, Vth), negatively impacting circuit parameters such as the delay in digital circuits [22], small-signal parameters in low-noise amplifiers (LNAs) [23,24], and retention times and stability in memories [25,26], eventually causing the failure of the IC. Consequently, TDV must be taken into account during the design of digital and analog ICs.

Aging (also known as degradation or wear-out) of MOSFETs has been extensively studied in recent years, and physical models for the BTI and HCI mechanisms have been proposed [27–29]. Nowadays, it is broadly accepted that the aging of the device during its operation is related to the generation of defects (in the bulk of the gate oxide and/or close to its interfaces) that can trap/detrap charges, leading to the observed threshold voltage shifts [30–32]. Because of the complexity of the phenomena, taking into account the details of the related physics during circuit simulation would be computationally unaffordable, so that a more simplistic, though accurate enough, description of the aging effects must be formulated, implementable into commercial circuit simulators. In this regard, physics-based compact models are required, which accurately describe the TDV observed in the device electrical properties and allow the evaluation of the aging of devices during their operation in the circuit within reasonable computation times. These compact models are actually key to bridging the gap between device and circuit levels, but, unfortunately, a limited number is still available [33,34].

1 Electronic Engineering Department, Universitat Autònoma de Barcelona (UAB), Barcelona, Spain
2 Instituto de Microelectrónica de Sevilla, IMSE-CNM, CSIC and Universidad de Sevilla, Sevilla, Spain
In parallel to the models, suitable model parameter extraction procedures must be developed to take into account the particularities of the aging of the underlying technology. However, this second step also poses several challenges when dealing with ultrascaled technologies. First, suitable test structures and a complete and advanced instrumentation system are required to characterize TDV in a large number of transistors [35–40], to get statistically significant results in reasonable testing times. Second, smart post-processing tools [41–44] are needed to handle the huge amount of data coming out of the characterization, to get the parameters of the compact models. Once the devices have been fully modeled, a simulation methodology must be developed, which, considering the physics-based device TDV compact models with the suitable model parameter set, provides an accurate estimation of the impact of device TDV on the circuit performance and reliability [45–47]. In this chapter, some of the most recent approaches adopted in all these fields will be presented, with special emphasis on analog circuits. The chapter is divided into four main sections, covering each of the aforementioned issues. In Section 8.1, the probabilistic defect occupancy (PDO) model, a physics-based compact model, will be introduced, which can be easily implemented into circuit simulators. Section 8.2 describes a purposely designed IC which contains suitable test structures, together with a full instrumentation system for the massive characterization of TZV and TDV in CMOS transistors, from which aging of the technology under study can be statistically evaluated. Section 8.3 is devoted to a smart methodology, which allows extracting the statistical distributions of the main physical parameters related to TDV from the measurements performed with our instrumentation system.
Finally, Section 8.4 describes CASE, a new reliability simulation tool that accounts for TZV and TDV in analog circuits, covering important aspects such as the evaluation of device degradation by means of stochastic modeling and the link between the device biasing and its degradation. As an example, the shifts of the performance of a Miller operational amplifier related to the device TDV are evaluated using CASE. Finally, in Section 8.5 the main conclusions are drawn.

8.1 Modeling of the time-dependent variability in CMOS technologies: the PDO model

During the operation of devices in a circuit, mechanisms such as BTI and/or HCI will lead to the aging of the device, i.e., to a progressive change of its electrical properties, which could end in its final failure, limiting the IC reliability. Fortunately, under nominal operation conditions, the device/IC failure can take years, so that accelerated tests are used in the laboratory to reduce the evaluation of the lifetime to reasonable testing times. In these accelerated tests, temperature and/or drain voltage and/or gate voltage (namely, the stress conditions) are raised above their nominal values during a certain time, i.e., the stress time. After this stress phase, the electrical properties of the device are measured again, during the so-called measurement phase, at lower voltages/temperatures, to evaluate the shift of some device electrical parameter (with respect to that measured on the unstressed or pristine device), as a consequence of the aging induced by the previous stress. These consecutive stress-measurement (SM) phases are repeated several times, so that the temporal evolution of the device aging can be evaluated, in what is known as the SM testing scheme [14]. Physical models allow extrapolating the results obtained under accelerated test conditions to normal operation conditions [33,48].
In the case of BTI, a voltage is applied to the gate (source and drain grounded), eventually at high temperature. To trigger HCI, in addition to the gate voltage, a drain voltage has to be applied. Independently of the aging mechanism (BTI or HCI), the aging of the device leads to an increase of the threshold voltage (or, equivalently, to a decrease of the current). Figure 8.1 shows an example of ID–VG curves recorded during the measurement phases of a 3-cycle SM test on a pMOS device. The stress voltage was 2.1 V and the stress times, ts, were 100, 1,000 and 10,000 s. The ID–VG curve measured on the fresh device is also shown. Note that, after the stress, the curve shifts toward the right (smaller currents), which is interpreted as an increase of the threshold voltage. Figure 8.2(a), left-hand side, shows a typical increase of the threshold voltage, ΔVth, as a function of the stress time. However, it is well known that during the measurement of BTI effects, the threshold voltage of the stressed device starts recovering toward its original value immediately after the removal/decrease of the stress [14], in what is known as the recovery/relaxation phase, with duration tr. An example is shown on the right-hand side of Figure 8.2(a). Note, however, that the recovery is only partial (i.e., the

Figure 8.1 Characteristic ID–VG curve shift after several BTI stresses on a pMOS device (W/L = 0.6/0.13 μm) after different stress times (100, 1,000 and 10,000 s) at a constant stress voltage |Vs| = 2.1 V. The black curve corresponds to the one measured on the fresh (as-grown) device.

Figure 8.2 (a) Example of the threshold voltage shift caused by BTI during 1,000 s of stress.
After that, when the stress is removed, the threshold voltage is partially recovered. In this large-area device, the recovery is continuous with time. (b) In small-area devices, the Vth recovery is observed as discrete drops associated with the discharge of defects that were previously charged during the stress.

value of the threshold voltage previous to the stress is not reached); so it is said that BTI has actually two components: a permanent and a recoverable component [49]. In the recovery trace shown in Figure 8.2(a), the threshold voltage recovers (i.e., decreases) continuously, but in ultrascaled devices this recovery is discrete, being observed as sudden Vth drops, as shown in Figure 8.2(b). This kind of evolution introduces an intrinsic statistical nature into the BTI aging [31,32]. From data like those shown in Figures 8.1 and 8.2, physical models have been developed to explain the device aging and its dependences on time (stress and recovery), voltage and temperature during BTI [14,15,17] and HCI tests [18–21]. In spite of their differences, all of them associate aging with the generation of defects in the oxide bulk and/or its interfaces, which can be charged/discharged. When these defects are charged, they create a barrier that can hinder the electron transport along the channel, reducing the drain current, or equivalently increasing the threshold voltage [50]. Then, Vth is the parameter that is extensively considered in the models to describe the BTI degradation [27–29]. It must be emphasized that a similar physical framework explains the phenomenon of RTN. Actually, the observation of RTN and/or BTI is strongly dependent on the operation conditions (voltage and temperature) of the device [31]. RTN and BTI effects can be observed simultaneously, the first one as fast increase/decrease events in the device current, the second one as slower or permanent changes in the current.
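The discrete charging/discharging picture above can be illustrated with a small Monte Carlo sketch (not taken from the chapter): a handful of two-state defects with exponentially distributed capture and emission events produces a telegraph-like Vth trace with discrete steps, as in Figure 8.2(b). All trap parameters below are hypothetical:

```python
import random

def simulate_vth_trace(traps, t_end, dt, seed=0):
    """Monte Carlo sketch of discrete trap charging/discharging.

    traps: list of (tau_c, tau_e, eta) tuples -- mean capture time, mean
    emission time and per-defect Vth step (all values hypothetical).
    Returns the total Vth shift sampled every dt.
    """
    rng = random.Random(seed)
    charged = [False] * len(traps)
    trace = []
    for _ in range(int(t_end / dt)):
        for i, (tau_c, tau_e, _eta) in enumerate(traps):
            if not charged[i] and rng.random() < dt / tau_c:
                charged[i] = True    # capture event: Vth jumps up by eta
            elif charged[i] and rng.random() < dt / tau_e:
                charged[i] = False   # emission event: discrete Vth drop
        trace.append(sum(eta for (_tc, _te, eta), c in zip(traps, charged) if c))
    return trace
```

With a single trap, the trace occupies two levels only, and the fraction of time spent charged approaches τe/(τc + τe), the steady-state occupancy that also appears in the PDO model of Section 8.1.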
This general picture must be translated into compact models, which provide equations that (i) accurately describe the change of Vth as a function of the device operation and (ii) can be implemented into circuit simulators with a reasonable increase of the computation time. Several attempts have already been presented [51] but, as an example, this work will focus on the PDO model [29], which was initially developed to describe BTI aging. However, it has been recently demonstrated that it is able to describe in a unified way the effects related to aging (BTI and HCI) and RTN transients [16,31]. The PDO model describes the shift of the Vth (ΔVth) in a MOSFET when a stress voltage is applied to the gate (Vs) during a certain stress time (ts), followed by a relaxation/recovery phase at a lower voltage, Vr, of duration tr (usually Vr ≈ Vth). This model assumes that each device has a number of defects (N), which in a set of devices is Poisson distributed [52]. When each of these defects is charged/discharged, a shift of the Vth, η, is registered, different for each defect [32]. Figure 8.3 shows a schematic representation of a pristine device (t = 0) during a BTI stress. When a stress voltage (Vs) is applied to the gate contact, the occupancy probability of traps in the oxide rises, leading to an increase of the device Vth. After this perturbation, the gate voltage is reduced to Vr (to measure the effect of the stress); the charged traps in the oxide are gradually discharged (the occupancy probability of traps in the oxide decreases) and the system returns to the previous state, recovering partially or totally the previous Vth level.
Figure 8.4(a) represents the energy band diagram of a metal oxide semiconductor (MOS) structure, where the tunneling transitions of the electrons from the semiconductor to a trap, which lead to the charge/discharge events that cause the device Vth shifts, are sketched. Each of the N defects in the device has an associated emission time (τe) and capture time (τc), which are the average times that a defect needs to be discharged or charged, respectively. Given these assumptions, the PDO model equation that describes ΔVth, for a given set of tr, ts and operation conditions, is (8.1):

\[
\Delta V_{th} = N \langle\eta\rangle \int_0^{\infty}\!\!\int_0^{\infty} D(\tau_e, \tau_c)\, P_{occ}(\tau_e, \tau_c; t_s, t_r)\, d\tau_e\, d\tau_c + P_p(t_s)
\tag{8.1}
\]

where N is the number of defects and ⟨η⟩ is the average shift of the Vth when the defects change their state. D(τe, τc) describes the distribution of the defects in the τe–τc space and depends on voltage and temperature. Pp (also dependent on V and T) is the BTI permanent part, which is associated with charged defects whose emission time is much larger than the typical experimental window, so that they appear permanently charged. D(τe, τc) and Pp are characteristics of the technology and implicitly include the voltage and temperature dependences of the device aging.

Figure 8.3 Schematic representation of the NBTI phenomenon. Defects are charged during the stress (increasing Vth) and discharged during the recovery (measurement) phase (decreasing Vth).

Figure 8.4 (a) Energy band diagram of an MOS structure, where the tunneling transitions of the electrons from the semiconductor to a trap are shown. (b) Example of defect distribution in a bidimensional graph (τc–τe).
Note, however, that in ultrascaled devices N will be small, so that actually few individual traps (statistically distributed following D(τe, τc)) will have to be considered, leading to device-to-device variability of the aging. Figure 8.4(b) shows a typical D(τe, τc) defect distribution, which must be introduced into (8.1) and integrated to evaluate its impact on the device Vth. Green dots represent the particular case of an ultrascaled device, where only three defects are active (N = 3). Finally, Pocc in (8.1) is the occupancy probability of each defect with given τe and τc, for particular stress/relaxation conditions ts, tr, Vr and Vs, and can be calculated using the following equation:

\[
P_{occ}(t_{r/s}) = P_{occ}(t_i) + \left[\frac{\tau_e(V_{r/s})}{\tau_e(V_{r/s}) + \tau_c(V_{r/s})} - P_{occ}(t_i)\right]\left(1 - \exp\left(-\frac{t_{r/s} - t_i}{\tau_{eff}(V_{r/s})}\right)\right) \qquad (8.2)
\]

where ti is the initial time and τeff(Vr/s) is given by the following equation:

\[
\frac{1}{\tau_{eff}(V_{r/s})} = \frac{1}{\tau_e(V_{r/s})} + \frac{1}{\tau_c(V_{r/s})}
\]

The subscript r/s indicates that the values of τeff, τc and τe correspond to either the stress voltage (Vs) or the relaxation voltage (Vr). Pocc therefore depends on the operation conditions of the device (voltages, temperatures and times) and must be evaluated after a transient simulation of the circuit. N, ⟨η⟩, D(τe, τc) and Pp are the model parameters, which are technology dependent. Then, to evaluate the TDV effects using the PDO model, the values of the model parameters, together with their voltage and temperature dependences, must be extracted for the technology under study. It has been shown that they can be obtained from the relaxation traces measured after the stress of the device [53].
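As an illustration of (8.1) and (8.2), the following sketch (ours, not part of any published PDO implementation; the defect time constants and the η value are invented for illustration) propagates the occupancy of a single defect through a stress phase and a recovery phase and accumulates its Vth contribution:

```python
import math

def tau_eff(tau_e, tau_c):
    # 1/tau_eff = 1/tau_e + 1/tau_c, as in the PDO model
    return 1.0 / (1.0 / tau_e + 1.0 / tau_c)

def p_occ(p0, dt, tau_e, tau_c):
    # Occupancy after a phase of duration dt, following (8.2): exponential
    # relaxation toward the equilibrium value tau_e / (tau_e + tau_c)
    p_eq = tau_e / (tau_e + tau_c)
    return p0 + (p_eq - p0) * (1.0 - math.exp(-dt / tau_eff(tau_e, tau_c)))

# One invented defect: fast capture under stress, slow emission at Vr
eta = 2e-3                            # Vth shift when charged (2 mV, assumed)
p = 0.0                               # trap empty in the pristine device
p = p_occ(p, 100.0, 1e4, 1e-2)        # 100 s stress: trap charges
dvth_stress = eta * p
p = p_occ(p, 100.0, 1e2, 1e4)         # 100 s recovery: partial discharge
dvth_relax = eta * p
print(dvth_stress, dvth_relax)        # Vth recovers toward its fresh value
```

For a device with N defects, the same update would be applied to each trap (with τe and τc drawn from D(τe, τc) at the corresponding voltage) and the individual η·Pocc contributions summed, plus the permanent part Pp, as in (8.1).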
Note that, in the case of ultrascaled devices, where device-to-device variability can be very important, it is necessary to measure many devices in order to get enough statistical data under different stress conditions (to obtain the corresponding dependences), which is a very challenging issue, as will be discussed in the next section.

8.2 Characterization of time zero variability and time-dependent variability in CMOS technologies

Traditionally, TDV effects are analyzed in isolated MOSFETs on a wafer (non-encapsulated). However, as described in Section 8.1, the stochastic nature of the aging phenomena and the need to evaluate the voltage/temperature dependences of the model parameters require stressing thousands of samples under several stress conditions and long stress times. Extensive statistical on-wafer characterization of MOSFET aging effects is therefore hardly affordable, because applying aging tests to a large number of devices results in long measurement times that can take months and even years. In order to perform a statistical characterization of the device TDV, an area-efficient solution is the use of on-chip array structures that contain thousands of devices under test (DUT) [35–40]. Together with the device array, with a well-suited architecture, the complete characterization system should fulfill the following requirements:

● Individual DUT terminal access and accurate control of applied voltages (since the model parameters strongly depend on voltage).
● Possibility to perform parallel stress of devices with accurate timing and accurate control of the stress and recovery times for all transistors during aging tests, minimizing at the same time the time gaps between both phases.
● Possibility to perform electrical characterization of process variability, RTN, BTI and HCI aging.
The design of such a system is not easy, which explains why most of the developed systems only partially fulfill these requirements. As an alternative, we have designed and fabricated a device array chip, called ENDURANCE, whose design satisfactorily addresses all of them [54]. It is equipped with internal full-custom digital control circuitry (for DUT selection) and a Force & Sense architecture (to get precise control of the actual voltages applied to the DUT terminals). The chip has been fabricated in a 1.2 V, 65-nm CMOS technology and occupies an area of 1,800 × 1,800 μm2. To allow statistical characterization, the chip includes 3,136 MOS test transistors, distributed in two electrically isolated nMOS and pMOS DUT blocks, with eight different geometries. For the electrical characterization of the chip, a test setup has been implemented (Figure 8.5), which includes a printed circuit board (PCB), where the chip is inserted; a semiconductor parameter analyzer (SPA), used for voltage and current measurements and DUT biasing; and a power supply for chip biasing.

Figure 8.5 Complete system for the characterization of TZV and TDV in CMOS transistors. The devices are included in a purposely designed array chip

Figure 8.6 Illustration of a parallel 4-DUT 4-cycle SM scheme where the stress phases overlap. The introduction of stand-by periods avoids the overlap of the measurement phases

The Force & Sense voltage system supported by the SPA and incorporated into the PCB and the chip minimizes the negative effects of series resistance and parallel capacitance of cables, connectors and PCB routing, as well as voltage drops (i.e., IR drops) due to metal chip lines and other series resistances.
A temperature system allows the control of the test temperature. The GPIB bus is used for the communication between the controller (a personal computer) and the instrumentation. The digital control of the chip is carried out with a USB digital acquisition system. A dedicated software tool automatically controls the digital inputs of the chip and all the instrumentation for automatic execution of the TDV tests [55]. The architecture of our chip, together with a new stress parallelization algorithm, allows a strong reduction of the testing time, so that aging tests of many devices can be carried out in reasonable testing times. The concept behind this algorithm is the overlap of the stress phases of multiple devices, while only one DUT can be measured at a time. Figure 8.6 shows an example of the parallelization of four DUTs under a typical aging test. Our algorithm avoids overlapping of the measurement phases (i.e., ID–VGS measurement and/or recovery phases, when the device is operated at low voltage and the device current is being registered by the system), but overlapping between the stress phases of many transistors is allowed, leading to a strong reduction of the test time, because the stress phase is typically the most time-consuming part of any aging test. This is done through the addition of suitable stand-by phases in the test (where the same constant voltage is applied to all the terminals of the DUT), which also grants equal recovery and stress times in all of the devices. To give some figures as an example of the reduction of the testing time with this new parallelization capability, a test on a set of 784 DUTs that would last approximately 104 days if performed serially can be completed in 4 days with the new algorithm. To show the potential of the system, some examples of the TZV and TDV characterization results obtained with our chip will be presented.
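The origin of the test-time saving can be illustrated with a toy timing model (a simplification of the scheduler described above, with invented figures; the real algorithm interleaves several stress/measure cycles and stand-by phases):

```python
def serial_time(n_duts, t_stress, t_meas):
    # conventional flow: each DUT is stressed and then measured in turn
    return n_duts * (t_stress + t_meas)

def parallel_time(n_duts, t_stress, t_meas):
    # simplified ENDURANCE-style flow: all stress phases overlap, but
    # only one DUT is measured at a time; the stand-by phases (not
    # modeled here) keep stress/recovery times identical for every DUT
    return t_stress + n_duts * t_meas

n, ts, tm = 784, 10_000, 100        # invented figures, for illustration
speedup = serial_time(n, ts, tm) / parallel_time(n, ts, tm)
print(round(speedup, 1))            # stress-dominated tests gain the most
```

The model makes the trend explicit: the longer the stress phase relative to the measurement phase, the closer the speed-up gets to the number of DUTs.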
Before applying any TDV test, an initial characterization of all the DUTs in the ENDURANCE chip is performed, to evaluate the process variability (TZV) and obtain the threshold voltage of the fresh devices, which will be the reference value from which the aging-induced shifts will be evaluated. The ID–VGS curves of 784 nMOS transistors with W/L = 80/60 nm are shown in Figure 8.7. The ID–VGS curves were measured by sequentially selecting each of the DUTs in the array. To measure the curves, VGS was ramped from 0 to 1.2 V, with VDS = 100 mV and the bulk and source terminals grounded. As can be seen in the figure, a high variability is observed. Using these measured curves and, as a first approach, the constant-current parameter extraction procedure [14], the threshold voltage (Vth) of each DUT can be obtained. As a second example, full RTN characterization can be done automatically on the 3,136 DUTs of the ENDURANCE chip. Two different tests are applied to each DUT for RTN characterization: a ramped voltage test, to measure an initial ID–VGS curve (with VDS = 0.1 V), followed by a constant voltage test to measure the ID–t curve, during 100 s, with fixed VGS = 0.4 V and VDS = 0.1 V (absolute values). As an illustrative example, Figure 8.8 shows a set of seven current traces displaying RTN effects.

Figure 8.7 A total of 784 nMOS (W/L = 80/60 nm) ID–VGS curves measured before any aging test, demonstrating process variability

Figure 8.8 Experimental results of RTN tests on seven pMOS transistors of the ENDURANCE chip

Figure 8.9 Threshold voltage shifts extracted from the currents measured during a BTI aging test on pMOS devices with different geometries (W/L = 1,000/60, 1,000/1,000, 1,000/500 and 600/60 nm). VGS = 2.5 V and VDS = 0 V were applied during the stress phase, at room temperature
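A minimal sketch of a constant-current Vth extraction is shown below (our illustration: the 100 nA × W/L threshold current is an assumed, commonly used criterion, not necessarily the one applied to the chip, and the ID–VGS curve is synthetic):

```python
import numpy as np

def vth_constant_current(vgs, ids, w, l, i_crit_per_square=1e-7):
    # Constant-current criterion: Vth is the VGS at which IDS crosses a
    # fixed current scaled by the aspect ratio W/L; 100 nA/square is an
    # assumed, typical value
    i_crit = i_crit_per_square * w / l
    return float(np.interp(i_crit, ids, vgs))  # ids must be increasing

# synthetic square-law ID-VGS curve just to exercise the routine
vgs = np.linspace(0.0, 1.2, 121)
ids = 1e-4 * np.clip(vgs - 0.45, 0.0, None) ** 2 + 1e-12 * vgs
print(round(vth_constant_current(vgs, ids, w=80e-9, l=60e-9), 3))
```

Applied to each of the 784 measured curves, such a routine yields the per-DUT Vth values from which the TZV statistics are built.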
The inset shows zooms of the recovery phase of the devices. This figure demonstrates that the system is capable of capturing the current fluctuations associated with charge trapping/detrapping in/from defects in the analyzed devices, with different emission and capture times. Multilevel signals, corresponding to more than one defect in the transistor, can also be observed [55]. Finally, Figure 8.9 shows the results of four BTI tests on four pMOS transistors with different geometries. During the test, a stress with VGS = 2.5 V and VDS = 0 V was applied for 100 s, which was periodically interrupted to measure the stress effects. A final recovery phase of 100 s, during which the current was measured at VGS = 0.5 V and VDS = 0.1 V, was considered for each DUT. The corresponding Vth shifts were extracted from the measured currents. As expected, during the stress phase Vth increases, whereas during the recovery phase Vth decreases. Note, however, that for the smallest devices the evolution of the threshold voltage shift is no longer continuous, but discrete, as shown in the inset of Figure 8.9, because only a small number of defects are present in the device.

8.3 Parameter extraction of CMOS aging compact models

During circuit operation, the variability effects related to the trapping/detrapping in/from oxide defects (as a consequence of TDV) can result in circuit malfunction, due to the shift of some transistor parameters, such as the threshold voltage (Vth) [10–21]. Thus, to implement reliability-aware circuits, it is critical for IC designers to take TDV effects into account [22–26]. To this end, appropriate TDV compact models, like the PDO model [29] described in Section 8.1, are essential, together with suitable model parameter extraction procedures.
In the context of the PDO model, the stochastic impact of aging can be modeled through the analysis of the emission time (τe) and the corresponding impact on the variation of the threshold voltage (η) of each defect in the transistor. With these parameters, and their dependences on the operation conditions, the corresponding voltage/temperature-dependent D(τe, τc) can be built. These parameters can be "visually" evaluated from the recovery traces of small-area devices (Figure 8.10(a)). But, since thousands of recovery traces have to be analyzed in massive aging tests, an automatic model parameter extraction procedure must carefully analyze the recovery traces to obtain these parameters.

Figure 8.10 (a) Typical relaxation trace after a BTI stress. (b) Zoom into this trace, showing fast defect transitions (RTN) between levels L0–L3

However, during BTI testing, RTN transients can appear simultaneously, together with other noise sources (Figure 8.10(b)), so that "artifacts" could mask or significantly increase the current increments determined during the extraction procedure. Then, for accurate parameter extraction, one must clearly distinguish the "slow" defects, responsible for the aging-induced degradation, from the "fast" defects causing the RTN transient variations. Here we introduce a smart defect parameter extraction method, which identifies the BTI-related τe and η values from a large number of experimental transistor recovery traces, where fast defect captures/emissions (i.e., RTN) and background noise are present [42].

8.3.1 Description of the method

The defect parameter extraction method analyzes each recovery trace individually. A five-step procedure is followed in order to remove the background noise and the RTN (if any) from the trace, so that the τe and η parameters can be accurately evaluated.
The procedure is described in detail as follows:

1. IDS conversion to ΔVth. For each device, the measured IDS–t recovery traces are converted into the equivalent ΔVth vs. time traces, using information obtained from the IDS–VGS curve measured on the device before the stress was applied, as those in Figure 8.7. For instance, Figure 8.10(a) shows the result of the conversion of a measured IDS–t curve to a ΔVth–t trace, where several charge emissions (denoted with green arrows) can be clearly observed in the 100 s measurement window. In addition to these emissions, which contain the information of interest for parameter extraction, fast transitions associated with RTN are superposed. Figure 8.10(b) reveals fast capture/emission transitions, switching between four different ΔVth levels (i.e., L0–L3) and mixed with background noise. A visual inspection reveals the joint contribution of two individual RTN signals, one switching between levels L0 and L1 and the other one switching, with lower capture/emission times, between L1 and L3 and between L0 and L2.

2. Identification of ΔVth levels. The next step consists in the application of the weighted time lag plot (WTLP) method [41] to each recovery trace, in order to identify the number and magnitude of the ΔVth levels. For instance, Figure 8.11(b) shows the WTLP resulting from the analysis of the trace in Figure 8.11(a). By analyzing the diagonal of the WTLP, four groups of populated data regions, separated in the figure by red dashed arrows, can be distinguished. Each of the data groups corresponds to a different ΔVth level present in the recovery trace. Transitions from one data group to the next are considered BTI slow emissions at a specific τe, and the difference between two ΔVth levels corresponds to the η of the discharged defect.
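A WTLP-like map can be sketched as follows (our simplification of the method of [41]: each pair of consecutive samples deposits a 2-D Gaussian on the (ΔVth(t), ΔVth(t+1)) plane, without the per-pair normalization of the original method; the trace is synthetic):

```python
import numpy as np

def wtlp(trace, grid, alpha=0.05):
    # Each consecutive pair (x_t, x_{t+1}) deposits a 2-D Gaussian of
    # width alpha on the (DVth(t), DVth(t+1)) plane; discrete levels
    # appear as dense spots on the diagonal, transitions off-diagonal
    x, y = trace[:-1], trace[1:]
    gx = np.exp(-((grid[None, :] - x[:, None]) ** 2) / (2 * alpha ** 2))
    gy = np.exp(-((grid[None, :] - y[:, None]) ** 2) / (2 * alpha ** 2))
    psi = gy.T @ gx            # accumulate one outer product per pair
    return psi / psi.max()     # normalized to its maximum

# toy two-level RTN trace: dwell on a level, switch with 5% probability
rng = np.random.default_rng(0)
state, samples = 0.0, []
for _ in range(400):
    if rng.random() < 0.05:
        state = 1.0 - state
    samples.append(state + 0.02 * rng.standard_normal())
trace = np.array(samples)
grid = np.linspace(-0.2, 1.2, 141)
psi = wtlp(trace, grid)        # dense spots appear near the two levels
```

Scanning the diagonal of such a map for local maxima yields the number and position of the ΔVth levels, as described above.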
Furthermore, when present, fast captures/emissions on top of each ΔVth level can also be clearly distinguished as red-colored regions on the diagonal (i.e., ΔVth levels L0–L9), with other less populated regions located outside of the diagonal (i.e., ΔVth level transitions).

Figure 8.11 (a) Typical recovery trace measured during a BTI test and (b) the corresponding WTLP. (c) Zoom into (b), showing the combination of two fast defects (levels L0–L3 and the transitions L1→L3, L0→L2, L3→L1 and L2→L0)

As an example, Figure 8.11(c) shows a zoom-in of the WTLP plotted in Figure 8.11(b), where the four red regions across the diagonal, which correspond to the L0, L1, L2 and L3 ΔVth levels, can be clearly observed. The blue regions out of the diagonal in the WTLP indicate the transitions between levels (e.g., L1→L3). These levels result from the joint combination of the two individual RTN signals produced by the fast charges/discharges of two oxide defects. Figure 8.11 clearly demonstrates that the application of the WTLP to the ΔVth recovery trace allows an accurate location of all the ΔVth levels and is able to distinguish between RTN and BTI contributions.

3. Background noise removal. Once the ΔVth levels are identified in the recovery trace, the procedure assigns the closest ΔVth level to each sample in the ΔVth trace. This step quantizes the trace, removing the background noise and keeping only the ΔVth levels associated with captures/emissions of RTN and BTI contributions. For instance, Figure 8.12(a) displays three ΔVth traces (in blue), showing high ΔVth degradation due to the previous stress, together with the ΔVth trace reconstructions (in red) with the background noise removed.
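The quantization step just described can be sketched in a few lines (the level values are hypothetical, for illustration):

```python
import numpy as np

def quantize_to_levels(trace, levels):
    # Snap every DVth sample to the closest level located by the WTLP,
    # removing the background noise (step 3 of the procedure)
    levels = np.asarray(levels)
    idx = np.abs(np.asarray(trace)[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx], idx

noisy = np.array([0.01, -0.02, 0.98, 1.03, 0.51, 0.49])
clean, idx = quantize_to_levels(noisy, [0.0, 0.5, 1.0])
print(clean.tolist())   # -> [0.0, 0.0, 1.0, 1.0, 0.5, 0.5]
```

The returned level indices, rather than the voltages themselves, are what the next step uses to count transitions.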
During the recovery period, several ΔVth levels can be observed, because of defect emissions mixed with RTN phenomena, as shown in detail in the zoom-in plot in Figure 8.12(b).

4. Removal of RTN-related transients. In order to distinguish between RTN and BTI contributions, we define a square matrix, named the transition matrix (TM), to store the transitions between different ΔVth levels in the recovery trace. The dimension of the TM is the total number of ΔVth levels in the trace, ten in this particular case. Each ΔVth level is denoted as LN, where N ranges from 0 to 9. The rows of the TM are defined as the initial ΔVth levels (i.e., the ith ΔVth level), while the columns are defined as the final ones (i.e., the (i+1)th ΔVth level). During the ΔVth trace analysis, three different ΔVth level transitions can be distinguished: (i) Case i: there is no change of ΔVth level, i.e., two consecutive ΔVth samples stay at level LN. (ii) Case ii: defect charging, i.e., ΔVth shifts to a larger level; for instance, ΔVth switches from level L1 to L3. (iii) Case iii: defect discharging, i.e., ΔVth shifts to a smaller level; for instance, ΔVth switches from level L3 to L1.

Figure 8.12 (a) Several examples of experimental relaxation traces (blue) and clean traces after the background noise removal (red). (b) Detail of the dashed square in (a). (c) Traces after the RTN removal; the transitions here are only due to relaxation of the BTI stress

The TM is constructed by analyzing all the sample values in the ΔVth recovery trace and counting the number of cases i, ii and iii. Case i samples lie on the main diagonal of the TM, while case ii and case iii transitions are located above and below the TM main diagonal, respectively. For instance, in Figure 8.11(a), ten ΔVth levels have been detected by the WTLP method (Figure 8.11(b)).
The resulting 10 × 10 TM is filled with all the ΔVth transitions identified during the sweep of the ΔVth recovery trace, as shown in Figure 8.13. In order to remove fast RTN transitions from the traces, the method analyzes the data in the TM, distinguishing the following two cases: (a) Slow emission recognition: this type of defect emission is characterized by a unique defect discharge to a lower ΔVth level. In the TM, this appears as a "1" in the corresponding element below the TM main diagonal, and a "0" in the symmetric position. Analyzing the TM in Figure 8.13, three different emissions, without any further capture, can be found, marked with circles: from initial to final ΔVth levels L4–L3, L6–L4 and L8–L7. (b) Transient recognition: the method identifies multiple and consecutive transitions between two distinct ΔVth levels. For instance, the transitions between levels L0 to L3 and from L6 to L7, which are attributed to RTN signals, are marked with dashed and solid square boxes in Figure 8.13. Consequently, the construction of the TM can be used to distinguish between transitions provoked by RTN and by BTI. As examples, Figure 8.12(c) displays three resulting traces after the application of the methodology, showing a total of three, eight and four slow emissions, respectively, without artifacts coming from fast defect transitions and background noise.

Figure 8.13 TM corresponding to the trace of Figure 8.11(a). Dashed and solid boxes indicate transitions attributed to RTN; circles indicate the location of slow defect emissions

5. Defect parameter extraction. The last step consists in obtaining the τe and η parameters of the slow emissions. This is done by locating the elements below the TM main diagonal showing a single discharge that have a symmetric zero above the TM main diagonal.
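The TM construction and the slow-emission criterion described above can be sketched as follows (the quantized trace is a toy example, for illustration):

```python
def transition_matrix(level_idx, n_levels):
    # Rows: initial level; columns: final level; the main diagonal
    # counts the 'no change' samples (case i)
    tm = [[0] * n_levels for _ in range(n_levels)]
    for a, b in zip(level_idx[:-1], level_idx[1:]):
        tm[a][b] += 1
    return tm

def slow_emissions(tm):
    # A BTI slow emission is a single, never reversed discharge to a
    # lower level: tm[i][j] == 1 below the diagonal with tm[j][i] == 0;
    # repeated back-and-forth transitions are attributed to RTN
    return [(i, j) for i in range(len(tm)) for j in range(i)
            if tm[i][j] == 1 and tm[j][i] == 0]

# toy quantized trace: RTN between levels 0/1, one slow emission 2 -> 0
idx = [2, 2, 2, 0, 1, 0, 1, 0, 0, 1, 0]
tm = transition_matrix(idx, 3)
print(slow_emissions(tm))   # -> [(2, 0)]
```

The repeated 0↔1 transitions populate both tm[0][1] and tm[1][0] and are therefore discarded as RTN, while the unique 2→0 discharge is kept as a slow emission.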
The η value of each detected defect is obtained by subtracting the final ΔVth from the initial ΔVth. To allow the evaluation of the τe associated with the defect, the times at which the (i, i+1) transitions occurred are also saved when the TM is constructed. When a slow discharge is encountered, its time τe is assigned to the computed η value, so that the defect is characterized by the tuple {τe, η}.

8.3.2 Application examples

As an example of the application of the method, the model parameters obtained from a particular BTI test applied on devices within our chip are described. BTI experiments were performed on a set of 248 pMOS transistors, which were tested using a 6-cycle SM (stress/measure) scheme. The duration of the stress phases increases exponentially (i.e., 1, 10, 100, 1,000, 10,000 and 100,000 s), while the measurement (i.e., recovery) phase always lasts 100 s. During the stress period, a gate-source voltage VGS was applied while maintaining the drain-source voltage VDS at 0 V. Four different VGS voltages were considered (1.2, 1.5, 2.0 and 2.5 V), while for the measurements VGS ≈ Vth and VDS = 100 mV. The total test time required for a 6-cycle SM test on a single device is 111,111 s + (6 × 100 s) = 111,711 s ≈ 31 h. Thus, for the four BTI tests involving 992 devices, the total test time with conventional serial testing procedures would be 3.5 years, a considerably large and prohibitive test time. However, thanks to the ad hoc design of the array-based IC, the stress phases of each BTI test have been parallelized, without overlapping the measurement phases of any device, to significantly reduce the total testing time. In this case, the testing time is reduced to only 64 h per BTI test, so that the four BTI tests last only 10 days, a significant test time reduction when compared to conventional serial testing. From the automated analysis of the data, slow emissions have been identified.
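The test-time figures above can be checked with a few lines of arithmetic:

```python
# 6-cycle SM scheme: exponentially increasing stress phases plus a
# fixed 100 s measurement (recovery) phase after each cycle
stress_phases = [1, 10, 100, 1_000, 10_000, 100_000]        # seconds
per_device = sum(stress_phases) + 6 * 100
print(per_device)                        # 111711 s, about 31 h

# four VGS conditions x 248 pMOS = 992 devices, tested serially
serial_years = 992 * per_device / (365 * 24 * 3600)
print(round(serial_years, 1))            # about 3.5 years
```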
Figure 8.14 shows the histogram plots of the τe values extracted from the defects found during the recovery traces, showing that most charges are emitted during the very first moments of the recovery phase.

Figure 8.14 Histogram plots of the τe values found during the recovery analysis for the 80/60 nm transistor geometry after 10,000 s of pure BTI stress (VGS stress: 2.5 V)

Figure 8.15 (a) Exponential distribution of the η values (symbols), with exponential fittings (lines), extracted from the four 6-cycle BTI tests for each geometry (device area increases from right to left). (b) Distribution of the ⟨η⟩ values of the slow emissions as a function of the transistor area

Figure 8.15 shows the statistical results obtained from the analysis of all the {τe, η} tuples extracted using the method detailed in this work. Figure 8.15(a) shows that η is exponentially distributed. Figure 8.15(b) shows the dependence of the ⟨η⟩ value on the transistor area. The results demonstrate that ⟨η⟩ decreases with the transistor area. These results are in agreement with those in the literature [32], which supports the validity of the methodology. Note that if the RTN-related transients were not removed from the recovery traces before the slow emission identification, a large number of "false" events (with ΔVth values equal to the RTN amplitudes) would have been taken into account during the ⟨η⟩ calculation. Therefore, the resulting ⟨η⟩ value, for all tested geometries, would have been close to the ΔVth of the fastest RTN, masking the actual ⟨η⟩ of the BTI-related defects.

8.4 CASE: a reliability simulation tool for analog ICs

Many challenges have to be faced during the co-optimization of performance and reliability of ICs in the initial design phase.
As shown in the previous sections, the characterization and modeling of the TDV impact on ultrascaled devices are two of the aspects to be considered. The implementation of the stochastic TDV models into circuit simulation tools is a third one. In the case of digital circuits, the aging of many devices, operated under different conditions, has to be evaluated, which could take unaffordable computation time, so that approximations of the models [33] or dedicated computing architectures will have to be considered [51]. For analog circuits, the problem of the number of devices may not be as relevant as in digital circuits, but the accuracy of the model predictions becomes a must. In any case, the number of available circuit reliability simulation tools is still very limited [45–47]. Focusing on analog circuits, the available commercial tools that deal with TDV only offer limited simulation solutions. First, they are based on deterministic aging models, which do not account for the stochastic nature of the wear-out phenomena and, therefore, accuracy errors occur. Second, they cannot carry out a complete reliability analysis taking into account both TZV and TDV (each source of variability is considered uncorrelated, losing accuracy along the way). Third, the bidirectional link between biasing and stress [56,57] is taken into account inefficiently (in the CPU time vs. accuracy trade-off), with linear or, at best, logarithmic scales defining the steps at which the biasing conditions of each device in the circuit are updated. This section presents CASE, a reliability simulator that tackles the abovementioned deficiencies with a streamlined simulation flow, an underlying stochastic physics-based model (i.e., the PDO model) and a user-friendly interface.
As will be shown later, CASE can carry out reliability simulations in a complete manner, the results of the analysis can be plotted easily, and several pieces of information can be extracted: the statistics of the circuit performances, the statistics of the variability of each device, or even the impact that the variability of each device has on a particular circuit performance.

8.4.1 Simulator features

This section provides a brief description of the main features of the tool, as well as its advantages over other existing solutions. The main features of the implemented tool, which allow overcoming the limitations of commercial ones, are as follows:

● Handling of the stochastic nature of aging.
● Inclusion of the combined effect of TZV and TDV.
● Use of an adaptive technique to efficiently consider the variability of each device.

In contrast with the solutions derived from deterministic models, the use of a stochastic model, based on the underlying physics of the aging phenomena, provides more accurate information. In this regard, the tool uses the foundry-provided Monte Carlo models (for TZV) and the PDO model (for TDV), characterized as described in the previous sections. In any case, the simulator allows easily integrating any model to evaluate the aging effects. Reliability simulators commonly use transient analyses to determine the stress conditions of the different devices. With this information, the device degradation at the target time (the time at which the degradation is to be calculated and the aged performances simulated, e.g., 10 years) is estimated to obtain the aged circuit, as depicted in Figure 8.16. A final simulation is carried out to obtain the aged performances of the aged circuit.
To reduce errors in the extrapolation of the conditions at the target time, the calculation process is carried out with several intermediate time steps (feedback loop in Figure 8.16), instead of a single time jump to the target time. One of the improvements provided by CASE is that TZV and TDV are considered in each of the actions taken within the dashed line in Figure 8.16, whereas other reliability simulators only take into account TDV or, at the very best, TDV with TZV at the end of the flow. The feedback loop between the calculation of the degradation and the stress conditions is also improved. The stress conditions change during the circuit operation, precisely because of the device degradation over its lifetime. As is well known, there is a strong feedback between device biasing, stress conditions and device aging, as illustrated in Figure 8.17. For this reason, a number of intermediate steps are used to update the stress conditions. This option is already included in several commercial simulators. However, these tools use fixed scales, where the steps are uniformly distributed along a linear or logarithmic axis. It is important to emphasize that there are two issues to take into account: the number of steps and the distribution of the steps. A higher number of steps implies a higher computational cost but, if few steps are carried out, the required accuracy in the calculation of the degradation might not be achieved. Fixed scales (such as a linear or logarithmic distribution of steps) are used because the degradation of a single device, under unalterable stress conditions, follows a power-law behavior.
This assumption is only correct for a single device operating alone; in a circuit, where different devices coexist, the general behavior cannot be captured with these scales, simply because the degradation of each device can alter the biasing of any other device. Therefore, in contrast to a predefined number of steps over a fixed scale, CASE provides an innovative algorithm [58] that adapts both the number and the distribution of the steps to the progressive aging of the circuit. This algorithm can be used in two ways:

● Setting a maximum value for the degradation of the devices (e.g., in Vth) at each intermediate step, up to which a new calculation (i.e., a new step where a transient analysis is performed to update the stress conditions) is not required. This approach allows reducing the CPU time while maintaining the desired accuracy in the calculation of the circuit degradation.
● Setting a CPU budget (i.e., setting the number of times that a new transient analysis is carried out). In this case, the algorithm optimizes the distribution of the steps at which the stress conditions should be updated.

Figure 8.16 Simulation flow of the CASE simulator

Figure 8.17 Bidirectional link between stress (i.e., biasing) and aging

In contrast to other adaptive solutions presented in the literature, the one used in CASE allows fixing the number of steps and, therefore, the CPU time can be bounded and controlled. CASE links to commercial off-the-shelf electrical simulators, like SPICE or Spectre, to carry out all the required analyses (transient, nominal or Monte Carlo simulations) to evaluate the circuit performance and obtain the stress conditions of each device.
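The first mode of the adaptive algorithm can be sketched as follows (our reconstruction of the idea, not the actual CASE implementation [58]; the power-law aging model and its coefficients are invented for illustration):

```python
def adaptive_steps(target_time, dvth_of_t, max_step_dvth, t0=1.0):
    # Advance to the time at which the modeled degradation has grown by
    # max_step_dvth (found by bisection); there, a new transient analysis
    # would refresh the stress conditions. Early steps come out short and
    # later ones long, with no fixed linear/log scale.
    t, steps = t0, [t0]
    while t < target_time:
        dv = dvth_of_t(t)
        if dvth_of_t(target_time) - dv <= max_step_dvth:
            t = target_time              # final step reaches the target
        else:
            lo, hi = t, target_time
            for _ in range(60):          # bisect for the crossing time
                mid = 0.5 * (lo + hi)
                if dvth_of_t(mid) - dv > max_step_dvth:
                    hi = mid
                else:
                    lo = mid
            t = hi
        steps.append(t)
    return steps

# invented power-law aging, DVth = 1 mV * t^0.2, with a ~1-year target
steps = adaptive_steps(3.15e7, lambda t: 1e-3 * t ** 0.2, 5e-4)
print(len(steps))
```

In the real tool, dvth_of_t would be replaced by the degradation computed from the PDO model under the stress conditions of the last transient analysis, and the second mode would instead fix len(steps) and optimize their placement.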
8.4.2 TZV and TDV studied in a Miller operational amplifier

In this section, the operation of the reliability simulation tool is illustrated through a case study: the TZV and TDV impact on a Miller operational amplifier (Figure 8.18). The configuration settings used for this case study are the following: (i) TZV (process and mismatch) and TDV are selected as variability sources. (ii) The target time is 1 year and the update of the stress conditions uses the adaptive scale with a fixed number of 20 steps. (iii) The temperature of the analysis is set to 25 °C. (iv) The number of samples for the Monte Carlo analysis is 1,000.

Figure 8.18 The Miller operational amplifier considered to illustrate the capabilities of the CASE simulator

Figure 8.19 Histograms of the performances of the circuit: (a) GBW and (b) DC-gain, for TZV only and for TZV+TDV

The target performances of the circuit are the DC-gain and the gain–bandwidth product (GBW) of the operational amplifier. The constraints established are a phase margin larger than 60° and that all transistors are in their correct region of operation (M1–M8 in saturation and M9 operating in the linear region). Once the simulation has been carried out, it is possible to plot the statistics of each performance using the options provided by the GUI. Figure 8.19 shows two examples of these plots for the case study circuit. To analyze the wear-out of the devices, CASE uses the variation in the threshold voltage (as most of today's aging models do). Nevertheless, CASE can handle any other varying parameter (e.g., mobility) as long as the underlying aging model takes that parameter into account. The variation in the threshold voltage of each transistor is shown in Figure 8.20.
Each type of transistor can be represented individually, since the impact of aging can be very different for nMOS and pMOS transistors. The variation of the threshold voltage is presented with the mean value (green for pMOS and red for nMOS), the standard deviation (blue error bars) and the maximum and minimum values (grey error bars).

Figure 8.20 ΔVth of the transistors, including spatial and temporal variability

Figure 8.21(a) shows the statistical distribution of the samples obtained in the Monte Carlo analysis of the performance of the operational amplifier. This plot reveals possible correlations between different performances. The yield obtained in the reliability analysis is 0.993. The tool also provides the option of plotting a selected performance against the degradation of a selected device. Examples are shown in Figure 8.21(b) and (c). As can be seen, while variations in the threshold voltage of device M8 seem to be uncorrelated with variations in the DC-gain, lower values of its threshold voltage seem to follow a decrease in the GBW of the amplifier. In this way, this type of figure can provide information about the correlation between each performance and each device and lets the user take appropriate measures (e.g., further analyzing the correlation by selecting device M8 as the only transistor to be aged in the input file).

8.5 Conclusions

TDV, due to stochastic processes such as RTN, BTI and HCI, and TZV are identified as relevant sources of degradation of circuit functionality in scaled CMOS technological nodes. Reliability-aware circuit design requires accurate physics-based device compact models, together with an accurate, computing-time-efficient reliability simulation methodology. In this chapter, the challenging demands in these fields have been covered, focusing on analog circuits.
As far as the compact modeling of device TDV is concerned, the main features of the PDO model, a physics-based compact model able to describe RTN transients and BTI/HCI aging in a unified manner, have been described. In ultrascaled technologies, the extraction of the model parameters requires the measurement of a large number of devices, subjected to several stress conditions during long testing times. To allow this kind of massive statistical characterization, suitable test structures and smart variability characterization and analysis techniques are required.

Figure 8.21 (a) Frequency vs. gain, (b) DC-gain vs. ΔVth of device M8, (c) GBW vs. ΔVth of device M8. A target time of 1 year has been considered during the simulation

Thus, in Section 8.2, a full TZV and TDV characterization system for a large number of devices has been presented, whose key element is a purposely designed IC that contains thousands of devices in an array arrangement. The chip architecture, together with a new stress parallelization scheme (which allows the simultaneous stress of a large number of devices while preserving identical degradation conditions), reduces the time needed for the statistical characterization of device TDV from years to days. In Section 8.3, a smart methodology developed to massively analyze the huge amount of measured TDV data has been presented. BTI-related events are automatically identified in the measured data by the use of the Weighted Time Lag method.
Further analysis allows distinguishing between events associated with BTI degradation and RTN, separating the effects of two sources of TDV that are inevitably coupled. This methodology has been used to obtain the statistical distributions of the parameters of the physics-based PDO compact model, which are the inputs to a circuit reliability simulation tool. Section 8.4 presents CASE, an example of a reliability simulation tool, which has been optimized to investigate the effects of TZV and TDV on analog circuit functionality. CASE evaluates the degradation of each transistor in the circuit using the PDO model (and the suitable set of model parameters), accounting for the change of the stress conditions during circuit operation. As an example, the effects of device TDV on the performance of a Miller operational amplifier have been evaluated using CASE, identifying the most variability-sensitive transistors and showing that TDV has a strong impact on the circuit performance. In summary, to accurately predict the reliability of analog circuits, strong efforts have to be carried out at different levels: at device level, on device physics (to develop accurate compact models for device TDV) and characterization (to extract the model parameters that describe the device aging in a particular technology), and at circuit level (to develop suitable simulation methodologies that evaluate the shifts of performance and reliability linked to device TDV). Many challenges arise on the way, but several feasible solutions have been proposed, which can help designers to implement reliability-aware circuits by accounting for the impact of device variability during their design.

Acknowledgments

This work has been supported in part by the TEC2013-45638-C3-R and TEC2016-75151-C3-R projects (funded by the Spanish AEI and ERDF). The work of J. Diaz-Fortuny and P.
Saraza-Canflanca was supported by AEI under grants BES-2014-067855 and BES-2017-080160, respectively.

References

[1] A. Asenov, A. R. Brown, G. Roy, et al., “Simulation of statistical variability in nano-CMOS transistors using drift-diffusion, Monte Carlo and non-equilibrium Green’s function techniques,” Journal of Computational Electronics, vol. 8, no. 3–4, pp. 349–373, 2009.
[2] A. Asenov, F. Adamu-Lema, X. Wang, and S. M. Amoroso, “Problems with the continuous doping TCAD simulations of decananometer CMOS transistors,” IEEE Transactions on Electron Devices, vol. 61, no. 8, pp. 2745–2751, 2014.
[3] S. K. Saha, “Modeling process variability in scaled CMOS technology,” IEEE Design & Test of Computers, vol. 27, no. 2, pp. 8–16, 2010.
[4] K. J. Kuhn, M. D. Giles, D. Becher, et al., “Process technology variation,” IEEE Transactions on Electron Devices, vol. 58, no. 8, pp. 2197–2208, 2011.
[5] K. Takeuchi, A. Nishida, and T. Hiramoto, “Random fluctuations in scaled MOS devices,” IEEE International Conference on Simulation of Semiconductor Processes and Devices, 2009.
[6] C. M. Mezzomo, A. Bajolet, A. Cathignol, R. D. Frenza, and G. Ghibaudo, “Characterization and modeling of transistor variability in advanced CMOS technologies,” IEEE Transactions on Electron Devices, vol. 58, no. 8, pp. 2235–2248, 2011.
[7] K. Bernstein, D. J. Frank, A. E. Gattiker, et al., “High-performance CMOS variability in the 65-nm regime and beyond,” IBM Journal of Research and Development, vol. 50, no. 4.5, pp. 433–449, 2006.
[8] J. K. Rangan, N. P. Aryan, J. Bargfrede, C. Funke, and H. Graeb, “Timing variability analysis of digital CMOS circuits,” Reliability by Design 9. ITG/GMM/GI-Symposium, 2017.
[9] A. Ghosh, R. M. Rao, J. J. Kim, C. T. Chuang, and R. B. Brown, “Slew-rate monitoring circuit for on-chip process variation detection,” IEEE Transactions on VLSI Systems, vol. 21, no. 9, pp. 1683–1692, 2013.
[10] C. Claeys, M. G. C.
de Andrade, Z. Chai, et al., “Random telegraph signal noise in advanced high performance and memory devices,” Symposium on Microelectronics Technology and Devices, 2016.
[11] S. Dongaonkar, M. D. Giles, A. Kornfeld, B. Grossnickle, and J. Yoon, “Random telegraph noise (RTN) in 14 nm logic technology: High volume data extraction and analysis,” IEEE Symposium on VLSI Technology, 2016.
[12] F. M. Puglisi, A. Padovani, L. Larcher, and P. Pavan, “Random telegraph noise: Measurement data analysis and interpretation,” IEEE 24th International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA), 2017.
[13] T. Grasser, B. Kaczer, W. Goes, et al., “Recent advances in understanding the bias temperature instability,” IEEE International Electron Devices Meeting, 2010.
[14] B. Kaczer, T. Grasser, J. Roussel, et al., “Ubiquitous relaxation in BTI stressing—new evaluation and insights,” IEEE International Reliability Physics Symposium, 2008.
[15] S. Mahapatra and N. Parihar, “A review of NBTI mechanisms and models,” Microelectronics Reliability, vol. 81, pp. 127–135, 2018.
[16] N. Ayala, J. Martin-Martinez, R. Rodriguez, M. Nafria, and X. Aymerich, “Unified characterization of RTN and BTI for circuit performance and variability simulation,” IEEE European Solid-State Device Research Conference, 2012.
[17] T. Grasser, Bias Temperature Instability for Devices and Circuits, New York, NY: Springer, 2014.
[18] T. Aichinger, M. Nelhiebel, and T. Grasser, Hot Carrier Degradation in Semiconductor Devices, Springer Nature Switzerland AG, 2014.
[19] C. Schlünder, J. Berthold, F. Proebster, A. Martin, W. Gustin, and H. Reisinger, “On the influence of BTI and HCI on parameter variability,” IEEE International Reliability Physics Symposium, 2017.
[20] P. Magnone, F. Crupi, N. Wils, et al., “Impact of hot carriers on nMOSFET variability in 45- and 65-nm CMOS technologies,” IEEE Transactions on Electron Devices, vol. 58, no. 8, pp. 2347–2353, 2011.
[21] E. Amat, T. Kauerauf, R. Degraeve, et al., “Channel hot-carrier degradation under static stress in short channel transistors with high-k/metal gate stacks,” IEEE International Conference on Ultimate Integration of Silicon, pp. 103–106, 2008.
[22] M. Islam, T. Nakai, and H. Onodera, “Measurement of temperature effect on random telegraph noise induced delay fluctuation,” IEEE International Conference on Microelectronic Test Structures, 2018.
[23] D. P. Ioannou, Y. Tan, R. Logan, et al., “Hot carrier effects on the RF performance degradation of nanoscale LNA SOI nFETs,” IEEE International Reliability Physics Symposium, 2018.
[24] A. Crespo-Yepes, E. Barajas, J. Martin-Martinez, et al., “MOSFET degradation dependence on input signal power in an RF power amplifier,” Microelectronic Engineering, vol. 178, pp. 289–292, 2017.
[25] A. Goda, C. Miccoli, and C. Monzio Compagnoni, “Time dependent threshold-voltage fluctuations in NAND flash memories: From basic physics to impact on array operation,” IEEE International Electron Devices Meeting, 2015.
[26] F. Ahmed and L. Milor, “Online measurement of degradation due to bias temperature instability in SRAMs,” IEEE Transactions on VLSI Systems, vol. 24, no. 6, pp. 2184–2194, 2016.
[27] S. Mahapatra, A. Islam, S. Deora, et al., “A critical re-evaluation of the usefulness of R-D framework in predicting NBTI stress and recovery,” IEEE International Reliability Physics Symposium, 2011.
[28] T. Grasser, B. Kaczer, W. Goes, T. Aichinger, P. Hehenberger, and M. Nelhiebel, “A two-stage model for negative bias temperature instability,” IEEE International Reliability Physics Symposium, 2009.
[29] J. Martin-Martinez, B. Kaczer, M. Toledano-Luque, et al., “Probabilistic defect occupancy model for NBTI,” International Reliability Physics Symposium, 2011.
[30] J. P. Campbell, P. M. Lenahan, A. T. Krishnan, and S.
Krishnan, “NBTI: An atomic-scale defect perspective,” IEEE International Reliability Physics Symposium, 2006.
[31] T. Grasser, “Stochastic charge trapping in oxides: From random telegraph noise to bias temperature instabilities,” Microelectronics Reliability, vol. 52, no. 1, pp. 39–70, 2012.
[32] B. Kaczer, T. Grasser, Ph. J. Roussel, et al., “Origin of NBTI variability in deeply scaled pFETs,” IEEE International Reliability Physics Symposium, 2010.
[33] H. Amrouch, J. Martin-Martinez, V. M. van Santen, et al., “Connecting the physical and application level towards grasping aging effects,” IEEE International Reliability Physics Symposium, 2015.
[34] B. Kaczer, S. Mahato, V. Valduga de Almeida Camargo, et al., “Atomistic approach to variability of bias-temperature instability in circuit simulations,” IEEE International Reliability Physics Symposium, 2011.
[35] C. S. Chen, L. Li, Q. Lim, et al., “A compact test structure for characterizing transistor variability beyond 3σ,” IEEE Transactions on Semiconductor Manufacturing, vol. 28, no. 3, pp. 329–336, 2015.
[36] C. Schlünder, J. M. Berthold, M. Hoffmann, J. M. Weigmann, W. Gustin, and H. Reisinger, “A new smart device array structure for statistical investigations of BTI degradation and recovery,” IEEE International Reliability Physics Symposium, 2011.
[37] T. Fischer, E. Amirante, K. Hofmann, M. Ostermayr, P. Huber, and D. Schmitt-Landsiedel, “A 65 nm test structure for the analysis of NBTI induced statistical variation in SRAM transistors,” IEEE European Solid-State Device Research Conference, 2008.
[38] H. Awano, M. Hiromoto, and T. Sato, “Variability in device degradations: Statistical observation of NBTI for 3996 transistors,” IEEE European Solid-State Device Research Conference, 2014.
[39] C. Schlünder, F. Proebster, J. Berthold, W. Gustin, and H.
Reisinger, “Influence of MOSFET geometry on the statistical distribution of NBTI induced parameter degradation,” IEEE International Integrated Reliability Workshop Final Report, March 2016.
[40] M. Simicic, A. Subirats, P. Weckx, et al., “Comparative experimental analysis of time-dependent variability using a transistor test array,” IEEE International Reliability Physics Symposium, 2016.
[41] J. Martin-Martinez, J. Diaz, R. Rodriguez, M. Nafria, and X. Aymerich, “New weighted time lag method for the analysis of random telegraph signals,” IEEE Electron Device Letters, vol. 35, no. 4, pp. 479–481, 2014.
[42] J. Diaz-Fortuny, J. Martin-Martinez, R. Rodriguez, et al., “A noise and RTN-removal smart method for parameters extraction of CMOS aging compact models,” Joint International EUROSOI Workshop and International Conference on Ultimate Integration on Silicon (EUROSOI-ULIS), 2018.
[43] T. Grasser, H. Reisinger, P.-J. Wagner, F. Schanovsky, W. Goes, and B. Kaczer, “The time dependent defect spectroscopy (TDDS) for the characterization of the bias temperature instability,” IEEE International Reliability Physics Symposium, pp. 16–25, May 2010.
[44] T. Nagumo, K. Takeuchi, S. Yokogawa, K. Imai, and Y. Hayashi, “New analysis methods for comprehensive understanding of random telegraph noise,” IEEE International Electron Devices Meeting, 2009.
[45] M. Nafria, R. Rodriguez, M. Porti, J. Martin-Martinez, M. Lanza, and X. Aymerich, “Time-dependent variability of high-k based MOS devices: Nanoscale characterization and inclusion in circuit simulators,” IEEE International Electron Devices Meeting, 2011.
[46] C. Hu, “The Berkeley reliability simulator BERT: An IC reliability simulator,” Microelectronics Journal, vol. 23, no. 2, pp. 97–102, 1992.
[47] P. Martin-Lloret, A. Toro-Frias, R.
Castro-Lopez, et al., “CASE: A reliability simulation tool for analog ICs,” 14th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design, 2017.
[48] S. F. Wan Muhamad Hattam, H. Hussin, F. Y. Soon, et al., “Negative bias temperature instability characterization and lifetime evaluations of submicron pMOSFET,” IEEE Symposium on Computer Applications & Industrial Electronics, 2017.
[49] T. Grasser, B. Kaczer, P. Hehenberger, et al., “Simultaneous extraction of recoverable and permanent components contributing to bias-temperature instability,” IEEE International Electron Devices Meeting, 2007.
[50] V. Velayudhan, J. Martin-Martinez, M. Porti, et al., “Threshold voltage and on-current variability related to interface traps spatial distribution,” IEEE European Solid-State Device Research Conference, 2015.
[51] V. M. van Santen, J. Diaz-Fortuny, H. Amrouch, et al., “Weighted time lag plot defect parameter extraction and GPU-based BTI modeling for BTI variability,” IEEE International Reliability Physics Symposium, 2018.
[52] S. E. Rauch, “Review and reexamination of reliability effects related to NBTI-induced statistical variations,” IEEE Transactions on Device and Materials Reliability, vol. 7, no. 4, pp. 524–530, 2007.
[53] M. Moras, J. Martin-Martinez, R. Rodriguez, M. Nafria, X. Aymerich, and E. Simoen, “Negative bias temperature instabilities induced in devices with millisecond anneal for ultra-shallow junctions,” Solid-State Electronics, vol. 101, pp. 131–136, 2014.
[54] J. Diaz-Fortuny, J. Martin-Martinez, R. Rodriguez, et al., “A versatile CMOS transistor array IC for the statistical characterization of time-zero variability, RTN, BTI and HCI,” IEEE Journal of Solid-State Circuits, vol. 54, no. 2, pp. 476–488, 2019.
[55] J. Diaz-Fortuny, J. Martin-Martinez, R.
Rodriguez, et al., “TARS: A toolbox for statistical reliability modeling of CMOS devices,” International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design, 2017.
[56] E. Maricau and G. Gielen, Analog IC Reliability in Nanometer CMOS, New York: Springer, 2013.
[57] E. Afacan, G. Berkol, G. Dundar, A. E. Pusane, and F. Baskaya, “A deterministic aging simulator and an analog circuit sizing tool robust to aging phenomena,” International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design, 2015.
[58] A. Toro-Frias, P. Martin-Lloret, J. Martin-Martinez, et al., “Reliability simulation for analog ICs: Goals, solutions and challenges,” Integration, the VLSI Journal, vol. 55, pp. 341–348, 2016.

Chapter 9
Modeling of pipeline ADC functionality and nonidealities
Enver Derun Karabeyoğlu¹ and Tufan Coşkun Karalar¹

During the design of mixed-signal circuits and systems, engineers often begin with high-level behavioral descriptions of the target system. The main appeal of this approach is that it reveals theoretical limits and the impact of nonidealities before transistor-level design starts. This method becomes especially valuable during the design of high-performance and high-resolution circuits. Therefore, behavioral models of mixed-signal systems, such as analog-to-digital converters (ADCs), have become a popular research topic. Using such models, the design parameters can be explored through fast, high-level simulations. As will be described in this chapter, behavioral models of the circuit nonidealities can reveal many issues early in the design. Hence, the nonidealities of the circuits should be described and modeled carefully. Sections 9.1 and 9.2 briefly explain the structure of the pipeline ADC and the flash ADC, respectively. Section 9.3 describes an ideal model for a pipeline ADC. Next, in Section 9.4, circuit nonidealities are analyzed and modeled in the MATLAB and Simulink environments.
Finally, the model of the whole ADC with the nonideality models is simulated, and results are presented in Section 9.5.

¹ Electronics and Communications Engineering Department, Istanbul Technical University, Istanbul, Turkey

9.1 Pipeline ADC

Many contemporary applications, such as 5G mobile systems, broadband communications, digital imaging, etc., require high-resolution data converters that can reach 10–12-bit resolution while sampling at rates in excess of 1 Gsps [1]. ADCs in pipeline topology are commonly employed to attain such performance levels. By simply adding extra stages, resolution can be increased at the expense of larger chip area. Each pipeline stage consists of a sub-ADC and a multiplying digital-to-analog converter (MDAC). The sub-ADC compares the sampled signal against reference levels. The MDAC is composed of a sub-DAC and a switched-capacitor circuit that performs subtraction and amplification. The low-resolution digital output is converted back to an analog value via the sub-DAC. This reconstructed signal is subtracted from the held signal, and the difference is amplified by 2^n, where n is the number of output bits in the stage. The output of the MDAC is also known as the residue. The residue is passed to the next pipeline stage input; that stage in turn passes its residue on, and this arrangement repeats itself until the last stage, as seen in Figure 9.1. To compute the overall ADC output, the digital output codes from all stages are combined after proper timing alignment and weighting. The throughput of the pipeline ADC is at the full sampling rate, while the latency is m clock cycles long, where m is the number of pipeline stages. To illustrate the basic principle of pipeline ADCs, an input at 60% of the full-scale voltage (VFS) is applied to an ideal 6-bit pipelined ADC with 1-bit stages. The comparator reference level is half of the full-scale voltage.
Figure 9.1 Generic pipeline ADC

If the sampled signal is greater than the comparator reference level, the digital output becomes high; otherwise, it is low. The residue voltage of a 1-bit pipeline stage can be expressed as:

Vres = 2·Vin − D·VFS    (9.1)

where Vin and D are the input voltage and the digital output of the stage, respectively. For a 1-bit stage, D can be 0 or 1. According to (9.1), the residue voltage and digital output of each stage are demonstrated in Figure 9.2.

Figure 9.2 Residue voltages and final digital code of an ideal 6-bit pipelined ADC [2]

One problem with 1-bit pipeline stages is their vulnerability to comparator offset and residue-amplifier gain errors: missing codes or missed decisions may appear as a result of a small offset voltage or a small gain error in the stage. Consider the transfer function when the comparator offset or residue-amplifier offset is explicitly added, as in Figure 9.3: the residue voltage exceeds the full-scale voltage near the center of the input range. Assuming that the subsequent stages are ideal, the output codes of the following stages remain all high, resulting in missed decision levels.

Figure 9.3 One-bit pipeline stage transfer function with offset of comparator and residue amplifier [2]

This problem can be eliminated by using 1.5-bit pipeline stages. Instead of one comparator, two comparators are used, with reference levels set to 3·VFS/8 and 5·VFS/8. As a result, comparator offsets up to VFS/8 can be tolerated without residue saturation. The digital outputs determine the DAC output, which can assume VFS, VFS/2 and 0.
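The 60%-of-full-scale walk-through above can be reproduced with a short sketch of (9.1); the function name and code reconstruction are our illustrative choices, not part of the chapter's Simulink model:

```python
# Ideal 1-bit-per-stage pipeline per eq. (9.1): D = 1 when the held value
# is at or above VFS/2, and the residue 2*Vin - D*VFS feeds the next stage.

def pipeline_1bit(vin, vfs, nbits):
    bits, residue = [], vin
    for _ in range(nbits):
        d = 1 if residue >= vfs / 2 else 0
        bits.append(d)
        residue = 2 * residue - d * vfs
    return bits

bits = pipeline_1bit(0.6, 1.0, 6)        # the 60%-of-full-scale example
code = int("".join(map(str, bits)), 2)   # MSB-first binary code
# code / 2**6 * VFS recovers the input to within one LSB
```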
The residue voltage of a 1.5-bit stage can ideally be written as:

Vres = 2·Vin − VFS,    if D0 = 1 and D1 = 1
Vres = 2·Vin − VFS/2,  if D0 = 1 and D1 = 0
Vres = 2·Vin,          if D0 = 0 and D1 = 0

It should be noted that, in the case of a fully differential implementation, the input signal can vary between −VFS and +VFS. Then the comparator thresholds should be set to ±VFS/4, and the DAC values can assume −VFS/2, 0 and +VFS/2.

9.2 Flash ADC

The flash architecture is the fastest ADC architecture. It is implemented by comparing the input signal with a set of reference voltages. These reference levels are generated using a resistor ladder. The comparator outputs are thermometer encoded, and a decoding operation is employed to obtain the binary outputs. An n-bit flash ADC requires an array of 2^n − 1 comparators, 2^n + 1 resistors and decoding logic. However, the structure becomes very complex and power consuming as the resolution increases. Also, nonidealities such as comparator offset and resistor mismatch prevent achieving high resolutions. Therefore, low-resolution flash ADCs are generally used as sub-ADCs within the pipeline stages, in addition to digitizing the last pipeline stage residue.

9.3 Behavioral model of pipeline ADCs

For the discussion regarding the behavioral model of a pipeline stage, we need to make some assumptions to define the system clearly. Here we will be considering an implementation that targets an 11-bit pipeline ADC consisting of eight cascaded pipeline stages and a last stage with a 3-bit flash ADC. Each pipeline stage has a 1.5-bit configuration.

9.3.1 A 1.5-bit sub-ADC model

Since the sub-ADC is implemented using two comparators whose reference levels are −VFS/4 and +VFS/4, its behavioral model can be created using compare blocks followed by sample-and-hold blocks, as illustrated in Figure 9.4. Each sub-ADC outputs a 2-bit thermometer-encoded signal. However, after the sub-ADC, an encoder can convert these values into any other encoding that is necessary.
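The VFS/8 offset-tolerance claim can be checked with a sketch of the single-ended 1.5-bit stage (thresholds 3·VFS/8 and 5·VFS/8, DAC levels {0, VFS/2, VFS}). The function, the D1D0 encoding and the sweep below are our illustrative choices, not the chapter's Simulink blocks:

```python
# Single-ended 1.5-bit stage: the piecewise residue from the text, with an
# optional common shift of both comparator thresholds to model offset.

def stage_15bit(vin, vfs, offset=0.0):
    d0 = 1 if vin >= 3 * vfs / 8 + offset else 0   # low comparator
    d1 = 1 if vin >= 5 * vfs / 8 + offset else 0   # high comparator
    vres = 2 * vin - (d0 + d1) * vfs / 2           # 2Vin, 2Vin-VFS/2, 2Vin-VFS
    return (d1, d0), vres

# Sweep the input range: with both thresholds shifted by +/-VFS/8, the
# residue never leaves [0, VFS], so no downstream decision levels are lost.
vfs = 1.0
ok = all(0.0 <= stage_15bit(k / 1000, vfs, off)[1] <= vfs
         for off in (-vfs / 8, vfs / 8) for k in range(0, 1001))
```

Pushing the offset beyond VFS/8 makes the residue exceed VFS for some inputs, which is exactly the saturation the redundancy is designed to avoid.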
Figure 9.4 Model of ideal sub-ADC

9.3.2 Multiplier DAC

The MDAC is the combination of a sub-DAC, a subtractor and an amplifier. The sub-DAC converts the digital bits into the proper analog voltage. Current-steering DACs are generally preferred for high-speed and high-resolution applications. Each data bit controls a current source. Figure 9.5 displays the model of the 1.5-bit DAC; the models of the current sources are encircled with dashed lines. Depending on the digital bits at the input of the switches, the current value is multiplexed with 0 before being multiplied by a gain block to obtain the desired DAC output voltage.

Figure 9.5 Model of ideal sub-DAC

Figure 9.6 shows a flip-around switched-capacitor circuit that is capable of subtracting the DAC output voltage from the input signal and amplifying the difference to obtain the residue voltage of the pipeline stage. During its operation, in the sampling phase, the switches controlled by φ1 are closed and both capacitors are charged by the input signal. In the amplifying phase, when the switches controlled by φ2 are closed, the residue voltage and the DAC output voltage charge the feedback and the sampling capacitors, respectively. Hence, the output voltage of the residue amplifier can be expressed as:

Vout = Vin·(Cf + Cs)/Cf − VDAC·(Cs/Cf)

where VDAC, Cs and Cf are the DAC output voltage, the sampling capacitor and the feedback capacitor, respectively.

Figure 9.6 Flip-around switched-capacitor circuit

In pipeline ADCs, the MDAC of a stage transitions into the amplifying phase as soon as its residue is sampled by the latter stage.
So, in the worst case, during the transition from hold to track mode, the charge due to the previous sample must be discharged before the current input can be sampled. This increased output swing is problematic, especially at high-speed operation, as it can induce inter-sample interference (ISI), causing performance loss. To reduce the voltage swing during the track phase, a short reset phase φS can be added to discharge the previous cycle's charge on the capacitor [3,4], and the track phase can then start from the reset value. As a consequence, the effects of previous samples are reduced and the ISI performance loss can be mitigated.

9.3.3 A 3-bit flash ADC

After the cascaded pipeline stages, a 3-bit flash ADC, which consists of seven comparators and an encoder, is placed as shown in Figure 9.7. The comparator threshold levels increase in steps of VFS/4 from bottom to top. Finally, the outputs of the compare blocks are converted to a 3-bit binary code by an encoder.

Figure 9.7 Model of ideal 3-bit flash ADC

9.4 Sources of nonidealities in pipeline ADCs

9.4.1 Op-amp nonidealities

Op-amps are widely used and are important blocks for the switched-capacitor implementation of pipeline ADCs. Op-amps realized in deep-submicron complementary metal oxide semiconductor (CMOS) technology suffer from significant nonidealities. The impact of these should be analyzed precisely to obtain an accurate model.

9.4.1.1 Finite DC gain

Ideally, the DC gain of an op-amp is infinite; however, the actual open-loop gain A is limited by circuit parameters.
Including the finite DC gain of the op-amp, the total charge stored on both capacitors during the sampling phase becomes:

Qs = Vin·(Cf + Cs)

Taking the op-amp input capacitance Cp into account, the total charge in the amplifying phase is:

Qa = (Vx − VDAC)·Cs + (Vx − Vout)·Cf + Vx·Cp

Based on conservation of the total charge (Qs = Qa):

Vout = Vin·(Cs + Cf)/Cf + Vx·(Cs + Cf + Cp)/Cf − VDAC·(Cs/Cf)

So the feedback factor β, which defines how much of the output voltage is fed back to the input, can be written as:

β = Cf/(Cs + Cf + Cp)

Then, considering the finite DC gain of the op-amp, the output can be written as:

Vout = G·Vx·(1 − 1/(A·β))

where G is the ideal gain of the switched-capacitor circuit. Furthermore, this output characteristic of the residue amplifier can also be approximated as a Taylor series expansion of the input voltage, assuming a fifth-order polynomial [5,6]:

Vout = G·(Vx + Σ (n = 1..5) an·Vxⁿ)

where an denotes the linear and harmonic distortion terms of the amplifier.

9.4.1.2 Bandwidth and slew rate

Ideally, an amplifier has infinite bandwidth. However, in reality, its bandwidth is limited by the gain-bandwidth product: the gain starts to roll off and, beyond the point where the op-amp loop gain falls to 0 dB, the amplifier is no longer useful. Slew rate can be considered as the rate of change of the output voltage in the presence of large input signals. In large-signal conditions, the op-amp provides current to charge the capacitors of the circuit, and the maximum charging rate is the slew rate. If the current is insufficient or the op-amp does not have enough time to charge the capacitors, the output will not be able to reach the required level. At high speeds, this condition can be exacerbated in the presence of the ISI defined earlier. These two effects cause nonlinearity at the output and must be analyzed for two different conditions [7,8]:

1. The slope of the curve is lower than the slew rate. In such a case, there is no slew-rate limitation.
The output response is linear:

Vout = G·Vin·(1 − e^(−t/τ))

where τ = 1/(β·2π·fu) is the time constant of the circuit and fu is the unity-gain frequency.

2. The slope of the curve is higher than the slew rate. In slewing mode, the output of the op-amp behaves as a constant current source charging a capacitor, hence yielding a ramp voltage during the slewing time. Later on, as the required charging current falls below the slew current, it resumes linear behavior, where the feedback takes control of the output voltage:

Vout = SR·t,  t < tslew    (9.11)
Vout = Vout(tslew) + (G·Vin − SR·tslew)·(1 − e^(−(t − tslew)/τ)),  t > tslew    (9.12)

where SR refers to the slew rate and tslew is the time spent in slewing mode. Imposing the condition of continuity of the derivatives of (9.11) and (9.12) at tslew, we obtain:

tslew = (G·Vin)/SR − τ

9.4.1.3 Input offset of an operational amplifier

Ideally, the output voltage of an op-amp is zero when the two input terminals are shorted. However, mismatch of the input transistors during fabrication of the silicon die, and stresses placed on the die during the packaging process, cause unequal currents to flow through the input transistors, resulting in a nonzero output [9]. So, the input offset voltage is defined as the input voltage required to zero the output voltage. The transfer function of a stage with offset voltage can be expressed as:

Vout = 2·(Vin + Voff) − D·Vref

where Voff is the op-amp input offset, modeled as a constant additive term.

9.4.1.4 Noise sources of an operational amplifier

Thermal noise is one of the dominant noise sources in op-amps. It occurs in passive and active elements due to the random vibration of the charge carriers. The AC model of the switched-capacitor circuit in the MDAC is shown in Figure 9.8. Here, the amplifier noise is modeled as an equivalent noise current source at the amplifier output.
With this model, the transfer function can be expressed as:

H(s) = V_out/i_n = r_O / [(1 + g_m r_O β)(1 + s C_o r_O/(1 + g_m r_O β))]

where C_load is the load capacitor and C_o = C_load + β(C_s + C_p). The input-referred noise power can be written as:

V_in² = V_out²/G² = (1/G²) ∫_0^∞ |H(s)|² i_n² df = λ (kT/(β C_o)) (C_f/(C_s + C_f))²   (9.16)

where the noise current source is i_n² = 4kT λ g_m Δf, λ is a coefficient equal to 2/3 for long-channel transistors, Δf is a small bandwidth at frequency f, G is the gain of the MDAC, k is the Boltzmann constant and T is the temperature in Kelvin.
Another important type is flicker noise, also called 1/f noise. It is present in all active devices. It is closely related to random trapping and release of electrons or holes at the interface and to the doping profile. It causes the noise power spectral density to increase by 3 dB/octave towards low frequencies. The flicker noise power can be calculated as:

V_n² = K_f/(C_ox W L f)

Figure 9.8 AC model for noise calculation

where K_f is the flicker noise coefficient, C_ox is the oxide capacitance, and W and L are the width and length of the metal oxide semiconductor field effect transistor (MOSFET). Figure 9.9 illustrates the model used to simulate the thermal noise and flicker noise of the op-amp.

9.4.2 Switch nonidealities
Switches are among the major components in switched-capacitor circuits. Ideally, they have zero resistance when on and infinite resistance when off. However, NMOS and PMOS transistors are used as switches in CMOS technology, and their parasitics can adversely affect the performance of the pipeline ADC. These effects can be categorized as switch thermal noise, charge injection, clock feedthrough and nonlinear switch resistance [8].

9.4.2.1 Switch thermal noise
Thermal noise associated with the sampling switches and the op-amps appears as white noise.
When the input is sampled, thermal noise is also sampled onto the input capacitor. Figure 9.10 provides the equivalent circuit for noise estimation. Its spectral power and the transfer function of the circuit can be calculated as:

V²_n,Rs = 4kT R_s Δf
A(jω) = 1/(1 + jω R_s C)   (9.19)

Figure 9.9 Model of thermal and flicker noise
Figure 9.10 MOS sampling circuit and its equivalent

When the switch is off, the total noise power stored on the sampling capacitor is:

V_n² = (1/2π) ∫_0^∞ V²_n,Rs |A(jω)|² dω = kT/C   (9.20)

where R_s is the on-resistance of the MOS switch and C is the sampling capacitor value. The total noise power of the MDAC is given by:

σ²_in,MDAC = kT (C_s,mdac + C_f,mdac + C_p,mdac)/(C_s,mdac + C_f,mdac)²   (9.21)

Since thermal noise is random, a random number generator is used and multiplied by (9.21) to realize the switch thermal noise, as in the Simulink model given in Figure 9.11.

9.4.2.2 Charge injection
Charge injection in MOS switches is caused by the mobile channel charge being forced to escape when the switch turns off. This charge flows through each terminal depending on the ratio of the terminals' impedances, the switch parameters and the slope of the clock. This nonideality manifests itself as a voltage step at the output. The error for an NMOS switch can be expressed as:

ΔV_inj = Q_ch/C_load = α W L C_ox (V_GS − V_th)/C_load   (9.22)

where W, L, C_ox, V_GS and V_th are the channel width, channel length, gate-oxide capacitance, gate-to-source voltage and threshold voltage of the NMOS, respectively. The factor α is the fraction of the charge that leaks into the sampling capacitor rather than into the input source. A model of the charge injection effect is developed in Simulink as illustrated in Figure 9.12. It implements (9.22), using a random variable block to model the uncertainty of the charge quantity.
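The kT/C result (9.20) and the charge-injection step (9.22) can be sketched numerically; the capacitor and device values below are illustrative assumptions, not from the text:

```python
import math
import random

k = 1.380649e-23   # Boltzmann constant, J/K

# RMS of the kT/C noise sampled onto capacitor C: the Simulink-style model
# simply draws a Gaussian sample with this sigma every clock cycle.
def ktc_sigma(C, T=300.0):
    return math.sqrt(k * T / C)

def sampled_noise(n, C=1e-12, T=300.0, seed=0):
    rng = random.Random(seed)
    s = ktc_sigma(C, T)
    return [rng.gauss(0.0, s) for _ in range(n)]

# Charge-injection step per (9.22): a fraction alpha of the channel charge
# lands on the load capacitor when the switch opens.
def dv_injection(W, L, Cox, Vgs, Vth, Cload, alpha=0.5):
    return alpha * W * L * Cox * (Vgs - Vth) / Cload

print(ktc_sigma(1e-12) * 1e6)   # ~64 uVrms on 1 pF at 300 K
print(dv_injection(W=1e-6, L=0.1e-6, Cox=8e-3, Vgs=1.0, Vth=0.4, Cload=1e-12))
```

Note how the sampled-noise sigma depends only on C and T, not on the switch resistance, which is why enlarging the sampling capacitor is the only way to reduce this term.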
Figure 9.11 Model of switch thermal noise
Figure 9.12 Model of the charge injection and clock feedthrough effects

9.4.2.3 Clock feedthrough
Besides the drain current of the switch, there is an extra current flowing through the two gate capacitances, C_gd and C_gs, when the switch turns off. In other words, the clock transition is coupled to the sampling capacitor through the parasitic capacitances of the MOS switches. This phenomenon is called clock feedthrough. Assuming that the overlap capacitances are fixed, the effect on the output voltage can be calculated as [8,10]:

ΔV = Q_out/(C_H + C_ov) = V_clk C_ov/(C_H + C_ov)   (9.23)

where Q_out is the error charge due to the clock feedthrough, and V_clk, C_ov and C_H represent the clock signal voltage, the overlap capacitor and the hold capacitor, respectively. As seen in (9.23), clock feedthrough is independent of the input signal. It can be modeled as shown in Figure 9.12.

9.4.2.4 Nonlinear switch resistance
Unlike in the ideal case, switches have a nonzero resistance that is a nonlinear function of the input voltage. The nonlinear on-resistance causes harmonic distortion, especially for high-frequency and high-swing signals. It can be calculated as:

R_on = 1/[(1/2) μ C_ox (W/L)(V_in,s − V_th − V_in)]   (9.24)

where μ is the NMOS mobility, C_ox is the gate oxide capacitance per unit area, W and L are the transistor dimensions, and V_in,s is the sampling voltage, which can be calculated as:

V_in,s = V_in (1 − e^{−t_s/(R_on(j) C)})   (9.25)

where j is the number of time intervals of the sampling phase. The switch on-resistance from (9.24) can be approximated by a polynomial function of the input voltage [8]. For the Simulink model, the polynomial calculation is realized with addition and product blocks.

9.4.3 Clock jitter and skew mismatch
Clock jitter is the uncertainty in the clock period caused by thermal and power supply noise, crosstalk and reflections.
These uncertainties can cause an increase in noise power that is proportional to the signal bandwidth and degrades ADC performance [11]. Consider a sinusoidal signal V_in(t) with amplitude A and frequency f_in as input. The sampling error due to jitter can be expressed as:

V_in(t + γ) − V_in(t) ≈ γ dV_in(t)/dt

with the assumption that:

2π f_in σ ≪ 1

where γ is the sampling-instant error, characterized by a jitter process with N(0, σ²) error statistics, and σ is the rms value of the jitter [12].

Figure 9.13 Model of the clock jitter and skew effects

Another performance-limiting factor for high-speed ADCs is the skew in the clock distribution network. This problem arises from unequal clock path lengths to different pipeline stages. Even though the H-tree structure is a common way to balance delay variations among stages, process variations in clock buffers and unequal coupling from neighboring interconnect can introduce uneven delays within the H-tree, introducing noise due to clock skew [13]. These effects can be modeled in Simulink as shown in Figure 9.13. While one of the random sources generates a random number for each cycle of the ADC to model jitter, the other generates a random number once per simulation to model the clock skew effect.

9.4.4 Capacitor mismatches
The ratio between the sampling capacitor and the feedback capacitor determines the gain of the MDAC. Any deviation in the capacitor values introduces an error from the ideal value of the residue. Therefore, adequate matching is needed between the capacitors. The capacitance value can vary due to systematic mismatch errors introduced during layout as well as random mismatch from lithographic nonidealities such as over-etching and oxide thickness gradients [14–16]. For the MDAC configuration, what matters is the ratio of the capacitances.
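The γ·dV_in/dt approximation above implies the well-known jitter-limited SNR = −20·log10(2π f_in σ) for a sinusoidal input. A minimal Monte Carlo check, using the 7.26 MHz input and 250 MHz sampling rate of the later simulation setup and an assumed 1 ps rms jitter:

```python
import math
import random

def jitter_snr_db(fin, sigma):
    """Analytic jitter-limited SNR of a sampled sinusoid."""
    return -20.0 * math.log10(2.0 * math.pi * fin * sigma)

def simulated_snr_db(fin=7.26e6, fs=250e6, sigma=1e-12, n=100000, seed=1):
    """Monte Carlo check: sample a unit sine at jittered instants."""
    rng = random.Random(seed)
    sig_p = err_p = 0.0
    for i in range(n):
        t = i / fs
        g = rng.gauss(0.0, sigma)                  # gamma ~ N(0, sigma^2)
        ideal = math.sin(2 * math.pi * fin * t)
        err = math.sin(2 * math.pi * fin * (t + g)) - ideal
        sig_p += ideal * ideal
        err_p += err * err
    return 10.0 * math.log10(sig_p / err_p)

print(jitter_snr_db(7.26e6, 1e-12))   # ~86.8 dB
```

The simulated figure agrees with the analytic one to a fraction of a dB, confirming that the linearized γ·dV_in/dt error model holds while 2π f_in σ ≪ 1.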
In our system, ideally, the sampling and feedback capacitances are equal to each other. Therefore, including mismatch, the relationship between the capacitances can be written as:

C_s,k = C_f,k + ΔC_k
m_k = ΔC_k/C_f,k

where C_s,k and C_f,k are the sampling and feedback capacitors of the kth pipeline stage. Thus, the residue output becomes:

V_out = V_in (2 + m_k) − D V_ref (1 + m_k)

The accuracy of capacitors can be improved during design and layout. The most straightforward way is to use large capacitors, which has limitations due to silicon area. The use of large unit-sized capacitors and of dummy capacitors to avoid asymmetric loading decreases the effect of systematic mismatch. Furthermore, gradient mismatches can be minimized by applying common-centroid and interdigitation layout techniques. However, even after these improvements, the remaining mismatches can still limit the ADC performance, which may necessitate the use of gain calibration methods.

9.4.5 Current sources matching error
As shown in Figure 9.1, the DAC value subtracted from the input for the residue calculation is generated by a sub-DAC. For this pipeline ADC, the sub-DAC is implemented as a three-level current-steering DAC. Therefore, the mismatch between the current sources becomes a potential error source. Figure 9.14 shows a unit CMOS current mirror where the reference and output currents are equal. To first order, the output current of the mirror is proportional to the W/L ratio between the diode and current-source devices. However, secondary effects such as channel-length modulation, threshold mismatch and W/L mismatch can cause deviation from the ideal current source output. The drain current in device M0 can be written as:

I_ref = I_out,0 = (1/2) μ C_ox (W/L)(V_GS − V_th)²   (9.30)

where μ is the NMOS mobility, C_ox is the gate oxide capacitance per unit area and W, L are the transistor dimensions.
Including mismatch errors, the drain current of M1 can be written as follows:

I_out = (1/2) μ C_ox (W/L + Δ(W/L))(V_GS − (V_th + ΔV_th))² (1 + λ V_DS)   (9.31)

From (9.30) and (9.31), it is seen that the output current includes additional dependencies beyond the W/L of M1. After evaluating the products and ignoring high-order terms, the resulting current can be shown to include a current error ΔI_out [17,18]:

I_out = I_out,0 + ΔI_out

Figure 9.14 Basic CMOS current mirror
Figure 9.15 A current source model with matching error

Figure 9.15 illustrates the Simulink model of the current source with mismatch errors. An error value, which can be extracted from Monte Carlo simulations, is added to the current value to model the given mismatch.

9.4.6 Comparator offset
Comparator offset is the main source of error in the sub-ADC. When comparing two input voltages, the input offset may affect the output decision. For the Simulink model, the dynamic offset is implemented as a random number sampled at every cycle, and the static offset is added to the input signal as a constant number [16,19].

9.5 Final model of the pipeline ADC and its performance results
The Simulink representation of the pipeline ADC models the basic ADC functionality in addition to the nonidealities defined in the previous sections. For the sub-ADC, these include the clock jitter and skew, as well as static and dynamic comparator offsets. Next, for the sub-DAC, a current mismatch model is included. Figure 9.16 presents the subblock models in the pipeline ADC with their error models. For the switched-capacitor MDAC, first, the charge injection, clock feedthrough, switching thermal noise and nonlinear switch resistance effects are captured. Next, the input signal and the DAC output are amplified with gains that include the errors due to capacitive mismatches.
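The capacitor-mismatch residue relation of Section 9.4.4 and the current-mirror mismatch of (9.30) and (9.31) can be sketched together; the mismatch magnitudes and device numbers below are illustrative assumptions, not from the text:

```python
# Stage residue with capacitor mismatch m = dC_k/C_f,k (m = 0 is ideal),
# following Vout = Vin*(2 + m) - D*Vref*(1 + m).
def residue(vin, d, vref=1.0, m=0.0):
    return vin * (2.0 + m) - d * vref * (1.0 + m)

# Current-mirror output with W/L and threshold mismatch plus channel-length
# modulation, per the drain-current expressions for M0 and M1.
def mirror_currents(mu_cox=200e-6, W=10e-6, L=1e-6, Vgs=0.8, Vth=0.4,
                    dWL=0.0, dVth=0.0, lam=0.0, Vds=0.5):
    i_ref = 0.5 * mu_cox * (W / L) * (Vgs - Vth) ** 2
    i_out = (0.5 * mu_cox * (W / L + dWL)
             * (Vgs - (Vth + dVth)) ** 2 * (1.0 + lam * Vds))
    return i_ref, i_out

print(residue(0.3, 1, m=1e-3) - residue(0.3, 1))   # mismatch-induced error
i_ref, i_out = mirror_currents(dVth=2e-3, dWL=0.05, lam=0.02)
print((i_out - i_ref) / i_ref)                     # relative current error
```

Both errors enter the residue directly, which is why gain calibration or careful layout becomes necessary once they exceed a fraction of an LSB at the stage output.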
Residue amplifier nonidealities such as input offset, finite gain error, thermal noise, flicker noise and bandwidth as well as slew-rate limitations are included in our model, which is illustrated in Figure 9.17. It should be noted that during the Taylor series expansion of the finite amplifier gain, only odd-order terms were employed. This is due to the well-known fact that a fully differential implementation would cancel the even-order terms.

Figure 9.16 (a) Sub-flash ADC model (comparator thresholds at ±Vref/4) and (b) sub-DAC model

Finally, the bandwidth and slew-rate limitations are implemented via a MATLAB function block near the output of the model. The resulting Simulink representation of the pipeline stage is illustrated in Figure 9.17. To confirm the functionality of the model, an 11-bit ideal pipeline ADC model is simulated in MATLAB/Simulink. A sine wave with a frequency of 7.26 MHz is applied to the input. The sampling rate of the ADC is selected as 250 MHz. As seen in Figure 9.18 and the left plot in Figure 9.19, the reconstructed digital output validates that the conversion is completed successfully. Furthermore, a simulation of the ADC with the modeled nonidealities is performed under the same conditions. In order to extract the signal-to-noise ratio (SNR) and spurious-free dynamic range (SFDR), the output spectrum is plotted. The degradation of the dynamic performance can be observed by comparing the ideal and nonideal spectra in Figure 9.19. The SNR decreases from 68.09 to 44.55 dB, which means the effective number of bits reduces from 11 to 7.1 bits. It can also be seen that the SFDR loses approximately 40 dB because of harmonic distortion.
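The reported dynamic figures map to effective bits via the standard ENOB relation, and the static test works from a code histogram; the 3-bit histogram below is a synthetic illustration, not simulation data from the text:

```python
# ENOB from SNR via the standard relation ENOB = (SNR - 1.76) / 6.02.
def enob(snr_db):
    return (snr_db - 1.76) / 6.02

# DNL per code = (bin width / ideal width) - 1 from a ramp-test histogram;
# INL is the running sum of DNL. (Real tests usually drop the end codes.)
def dnl_inl(hist):
    ideal = sum(hist) / len(hist)       # ideal hits per code
    dnl = [h / ideal - 1.0 for h in hist]
    inl, acc = [], 0.0
    for d in dnl:
        acc += d
        inl.append(acc)
    return dnl, inl

print(round(enob(68.09), 1))   # ideal model: ~11.0 bits
print(round(enob(44.55), 1))   # nonideal model: ~7.1 bits
dnl, inl = dnl_inl([100, 100, 120, 80, 100, 100, 100, 100])
print([round(x, 2) for x in dnl], [round(x, 2) for x in inl])
```

The two ENOB values reproduce the 11-bit and 7.1-bit figures quoted for the ideal and nonideal simulations above.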
Additionally, Figure 9.20 shows the simulation results of the static performance tests, including differential nonlinearity (DNL) and integral nonlinearity (INL). The results indicate that the DNL and INL are within the ranges of −1 to +1.26 and −13.71 to +15.5 LSB, respectively.

Figure 9.17 Model of a pipeline stage with nonlinearity models
Figure 9.18 Analog input signal and reconstructed ADC digital output
Figure 9.19 FFT spectrum of ideal and nonideal 11-bit pipeline ADC Simulink model (ideal: THD −88.77 dB, SNR 68.09 dB, SINAD 68.06 dB, SFDR 86.70 dB; nonideal: SNR 44.55 dB, SFDR 46.92 dB)
Figure 9.20 DNL and INL of nonideal 11-bit pipeline ADC Simulink model

9.6 Conclusion
In this chapter, we have presented a brief analysis of pipeline ADCs and their submodules. The basic functionality and nonidealities, as well as the modeling of each of these aspects, have been discussed in detail. Specifically, an 11-bit pipeline ADC was considered as the main application. Nonidealities such as the finite DC gain, limited bandwidth and slew rate, thermal noise and input offset of an op-amp, as well as charge injection, clock feedthrough, switching thermal noise, nonlinear switch resistance, clock jitter, skew, and static and dynamic comparator offsets were described and added to the behavioral model of the pipeline ADC. Furthermore, top-level MATLAB/Simulink simulations were performed to demonstrate the performance loss that arises as a result of such nonidealities.
As a result, this model can be used to evaluate any technique to mitigate the impact of these impairments.

Acknowledgement
This work was supported by the Scientific and Technological Research Council of Turkey, Project number 115E752.

References
[1] Ahmed I. Pipelined ADC Design and Enhancement Techniques. 1st ed. New York: Springer; 2010.
[2] Chiu Y. Lecture Notes in EECT 7327-001 Data Converters. The University of Texas at Dallas; Fall, 2014.
[3] Tsang C, and Nishumura KA. Techniques to Reduce Memory Effect or Inter-symbol Interference (ISI) in Circuits Using Op-Amp Sharing. Agilent Technologies Inc.; Application Note, 2014.
[4] Tsui LHY, and Ko IT. Op-Amp Sharing Technique to Remove Memory Effect in Pipelined Circuit. Microchip Technologies Inc.; Application Note, 2015.
[5] Sahoo BD, and Razavi B. A user-friendly ADC simulator for courses on analog design. IEEE International Conference on Microelectronics Systems Education. 2009; p. 77–80.
[6] Panigada A, and Galton I. Digital background correction of harmonic distortion in pipelined ADCs. IEEE Transactions on Circuits and Systems I: Regular Papers. 2006;53:1885–1895.
[7] Malcovati P, Brigati S, Francesconi F, et al. Behavioral modeling of switched-capacitor sigma delta modulators. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications. 2003;50(3):352–364.
[8] Barra S, Kouda S, Dendouga A, et al. Simulink behavioral modeling of a 10-bit pipelined ADC. International Journal of Automation and Computing. 2013;10(2):134–142.
[9] Palmer R. DC Parameters: Input Offset Voltage. Report No: SLOA059. Texas Instruments; 2001.
[10] Xu W, and Friedman E. Clock feedthrough in CMOS analog transmission gate switches. Analog Integrated Circuits and Signal Processing. 2005;44(3):271–281.
[11] Malcovati P, Brigati S, and Francescon F. Behavioral modeling of switched-capacitor sigma-delta modulators. IEEE Transactions on Circuits and Systems I. 2003;50(3):352–364.
[12] Kobayashi H, and Morimura M. Aperture jitter effects in wideband ADC systems. 6th IEEE International Conference on Electronics, Circuits and Systems. 1999; p. 1705–1708.
[13] Zarkesh-Ha P, Mule T, and Meindl J. Characterization and modeling of clock skew with process variations. Proceedings of the IEEE 1999 Custom Integrated Circuits Conference. 1999.
[14] Shyu GT, and Krummenachero F. Random error effects in matched MOS capacitors and current sources. IEEE Journal of Solid-State Circuits. 1984;19(6):948–956.
[15] Saxena S, Hess C, Karbasi H, et al. Variation in transistor performance and leakage in nanometer-scale technologies. IEEE Transactions on Electron Devices. 2008;55(1):131–144.
[16] Kledrowetz V, and Haze J. Analysis of Non-ideal Effects of Pipelined ADC by Using MATLAB–Simulink. Advances in Sensors, Signals and Materials. 2010; p. 85–88.
[17] Hanfoug S, Moulahcene F, and Bouguechal N. Contribution to the modeling and simulation of current mode pipeline ADC based on Matlab. International Journal of Hybrid Information Technology. 2015;8(3):83–96.
[18] Crippa P, Turchetti C, and Conti M. A statistical methodology for the design of high-performance CMOS current-steering digital-to-analog converter. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 2002;21(4):377–394.
[19] Maloberti F, Estrada P, Malcovati P, et al. Behavioral modeling and simulation of data converters. 1992 IEEE International Symposium on Circuits and Systems. 1992; vol. 5, p. 2144–2147.

Chapter 10
Power systems modelling
Jindrich Svorc1, Rupert Howes1, Pier Cavallini1, and Kemal Ozanoglu2

The increasing deployment of electronic systems in many areas of our lives brings with it the need to provide power supplies for those applications, many of which are driven from a battery, increasingly Li-ion based, which itself may also need to be charged. For these systems, efficiency is of high importance to maximize battery runtime.
High efficiency usually demands the use of a DC–DC converter. This chapter starts by discussing how to construct a small-signal model of such a converter, which is inherently a large-signal time-variant system, so that it can be modelled in an electrical circuit simulator. Modelling and predicting the efficiency of the overall power system is then addressed. The behaviour of the power system is also strongly dependent on the battery characteristics, as well as on the passive components, in this case capacitors and inductors. An introduction to the modelling of all these topics is given.

10.2 Small-signal models of DC–DC converters

10.2.1 Motivation
Most of the common DC–DC converters are designed as a feedback system where the output voltage or current is compared with its reference value, and the error between the actual and the reference value is used for controlling the power stage in a way that minimizes the error. The main reason for introducing a feedback system is accuracy, which is usually better in comparison to open-loop systems. One of the drawbacks of feedback systems is that the loop itself needs to be carefully designed and checked; otherwise, it can become unstable. If the system becomes unstable, small or severe oscillations can occur, which in the worst case can damage the controller or the following circuitry. Fortunately, circuit theory gives us various methods by which stability can be checked and the feedback loop correctly adjusted.

1 Dialog Semiconductor, Swindon, UK
2 Dialog Semiconductor, Istanbul, Turkey

The first step is to adjust the DC operating point of the whole circuit. Then the small-signal model can be created. The procedure for the feedback loop adjustment is usually done in the following manner:
1. Create a simplified mathematical small-signal linear model of the loop.
2. Adjust the open-loop response in order to get the correct DC gain, phase margin, and other desired parameters from the specification.
3. Check the final open-loop response in a circuit simulator (such as Spice or Spectre).

The mathematical model should be created in a way that covers all significant contributors affecting the final small-signal open-loop response. This means the mathematical model usually does not need to cover all parasitic capacitances and resistances tied to every device in the circuit. However, it may cover significant parasitic effects if needed – like the series resistance of inductors. On the other hand, the simulator calculates the DC operating point and creates its own internal small-signal model, which takes into account all the parasitic devices covered by the device models used in the simulator, for example, the parasitic capacitances of MOS transistors. If any of those additional parasitic devices cause the check in the simulator to fail (due to effects which are not predicted by the mathematical model), the loop needs to be readjusted. One possible way to handle these types of problems is to use an extra margin for the loop adjustment with the mathematical model. For example – if the model predicts a phase margin of 60° but the simulator check reveals that the phase margin lies below 50°, it is reasonable to adjust the loop for a phase margin better than 70° and expect a similar increase of phase margin in the simulator as well. Although this method is very simple and easy, it may fail, as the adjustment in the model does not aim directly at the root cause of the phase-margin degradation and does not take into account any consequences. A better approach is to include the additional significant parasitic effects from the circuit back into the mathematical model. In that case, the additional effects will be apparent while the loop is being tuned, so the loop adjustment will be made with respect to those effects.
The previous paragraphs describe a very general procedure for adjusting a feedback system that can be easily linearized. Unfortunately, this is not the case for switching circuits, as they are neither time-continuous nor linear. However, we would like to use the well-proven and familiar methods from linear circuits for switching circuits as well. In order to do so, we need to model the switching circuit with a linearized model, and this chapter provides a brief introduction. It should be noted that the chapter does not cover how the switching converter should be designed, where to place the poles and zeroes of the transfer function, or the rigorous derivation of linear models. These topics can be found, for example, in [1–3]. The aim of this chapter is to provide intuitive insight into how the linear model works and how it can be implemented in a circuit simulator. The model-creating procedure is described using the example of a simple voltage-mode buck converter. Because it is just a model, it has its limitations, and the most important of them will be discussed later in this chapter.

10.2.2 Assumptions
For the following sections, it is assumed that the reader has a reasonable knowledge of how common DC–DC converters work and of terms like 'continuous conduction mode', 'discontinuous conduction mode (DCM)', 'DC operating point', and 'small signal' vs. 'large signal'.

10.2.3 Test vehicle
As mentioned before, the main aim of this chapter is to give an introduction to creating a model of a switching converter, which we can use to design stable and well-behaved converters. Let us start with a simple buck converter with a voltage-mode controller, which will serve as a test vehicle. A buck converter is in general one of the simplest converters, so the fundamental model-creating ideas can be well described. Once understood, those ideas can be used for other converters as well.
The buck converter schematic diagram is shown in Figure 10.1. It comprises the input capacitor CIN, power inductor L, output capacitor COUT, power stage (PWR) – which usually contains some logic (DIG) and drivers (DRV) for the power switches – a pulse width modulator (PWM) and an error amplifier (EA) with a compensation network. The block is powered by VIN, VREF is the reference voltage that sets the target for the regulated voltage VOUT, and the load is represented by an external resistor RLOAD. The basic operation is as follows: the EA compares the output voltage VOUT with the reference voltage VREF and, based on the difference between them, sets the error signal ERR. The error signal is then fed into the PWM, which creates a rectangular signal DUTY with fixed frequency and a duty-cycle D proportional to the error signal ERR. The DUTY signal is then transferred to the LX node in the form of a pulsating signal with the same duty-cycle D and amplitude equal to VIN. For the ideal case, the DC voltage at the LX node is given by the following formula:

avg(V_LX) = D V_IN

Figure 10.1 Block diagram of a buck DC–DC converter

The voltage at the LX node has a rectangular shape. In order to remove its switching components, an L–C filter is put in place, so the output voltage VOUT is predominantly a DC component plus a small residual voltage ripple. The output voltage of an ideal buck DC–DC converter is then given by the following equation:

V_OUT = D V_IN

Figure 10.2 Buck converter waveforms example

An example of the time behaviour of the inductor current IL, the voltage at the LX node VLX, and the output voltage VOUT in the buck converter is shown in Figure 10.2. The figure shows a load release event at time 100 μs. The load suddenly changes from 2 to 0 A, and the output voltage shoots up.
The controller lowers the duty-cycle, the coil current decreases, and the output voltage goes back to its normal level.

10.2.4 Partitioning of the circuit
From the perspective of the linear model, the block diagram in Figure 10.1 can be separated into switching and non-switching parts. The non-switching part works with time-continuous signals, whereas the switching part works with switching signals. The border between the switching and non-switching parts is marked with a red rectangle. All the blocks inside the rectangle belong to the switching part, and blocks outside belong to the non-switching part of the system. The PWM and the second-order L–C filter sit at the boundaries between the continuous and non-continuous time domains. Those blocks act as ports between the switching and non-switching parts of the circuit. The non-switching part of the circuit can be linearized with well-known mathematical tools from circuit theory. In addition, the non-switching part can be directly simulated in a circuit simulator, and no special modelling method is needed. This means that, if we want to make a model of a buck converter, we can use the same CIN, L–C filter and RLOAD, and the whole EA also stays unchanged. On the other hand, the switching part of the system needs special attention, as it is not possible to perform standard small-signal analysis on the blocks it contains. In order to do so, we need to model those blocks with simple linearized models, which remove the switching and provide similar small-signal behaviour to the switching original. In fact, there are usually no small-signal-related effects applied to the signal within the switching part of the system, so it is enough to model the blocks that pass the signal in and out of the switching part of the system. In the buck example, these are the PWM and the power stage PWR. Those blocks are our main interest and will be examined later in this chapter.
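The ideal buck relations avg(V_LX) = D·V_IN and V_OUT = D·V_IN from the test-vehicle description can be checked by numerically averaging a rectangular LX waveform; the 3.6 V / 0.3 operating point is an illustrative assumption:

```python
# Average of the rectangular LX waveform (V_IN for the first D*Ts of the
# period, 0 V for the rest) over one switching period, sampled at n points.
def avg_vlx(vin, duty, n=10000):
    return sum(vin if i / n < duty else 0.0 for i in range(n)) / n

vin, duty = 3.6, 0.3
print(avg_vlx(vin, duty), duty * vin)   # both 1.08 V
```

The L–C filter removes the switching components, so the DC value of V_OUT is exactly this average in the ideal, lossless case.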
10.2.5 Model types
In order to start with a clear view of which model is being talked about, a short overview is given in the following sections.

10.2.5.1 Switching model
In general, the switching model covers the normal switching operation of the converter and might cover some additional features as well. There is no fixed distinction between a simple and a complex model, but for the purpose of this chapter, we can split switching models into two types with respect to the complexity level.
● The complex switching model is predominantly built in an advanced simulator (Spectre, PSpice, etc.) and evaluates the time behaviour of the final product. This model should cover all special functionalities, for example coil current limitation, and so on. It should also cover all the parasitic effects related to all the devices. Accuracy and coverage are the most important attributes of these models.
● The simple switching model can be used for initial circuit evaluation, for example in a feasibility study. It does not cover all the functionalities and parasitic effects; however, it may include things like coil current limitation and significant parasitic effects. The number of effects covered depends on the designer's choice. Speed and simplicity are the key features. This model is the starting point for the next listed model.

10.2.5.2 Averaged model – continuous-time model
Both averaged and switching models operate in the time domain. The important difference between the averaged model and the switching model is that the averaged model employs no switching. For the purpose of this text, we can define two levels of complexity of the averaged model.
● The simple averaged model is usually based on the simple switching model. The switching blocks are replaced with their averaged counterparts. All switching signals are replaced with averaged signals.
The instantaneous level of the averaged signals corresponds to the instantaneous average value of the original switching signal when averaged over one switching period. More about averaging will be shown later in this chapter. Since there is no switching, the simple averaged model brings the advantage of running the native small-signal AC analysis provided with the simulator. Although the averaged model operates in the time domain, the simulator internally creates the small-signal linear model in the s-domain. The simple averaged model is the model of main interest further in this chapter.
● The complex averaged model is a fork of the simple averaged model, where the original non-switching models (e.g. the EA) are replaced with full-featured complex models, which cover all the nonideal effects. The switching blocks are still modelled as in the simple averaged model. It is apparent that this model extends the simple averaged model by incorporating the nonidealities of the other time-continuous blocks but does not account for any switching effects.

Small-signal model – AC model
In comparison to the averaged model, the small-signal model describes the behaviour of the circuit in the s-domain instead of the time domain. It is a model that describes the small-signal behaviour of the converter, and it is used mainly for loop adjustment, input and output impedance evaluation, and other AC characteristics. From the simulator-user perspective, there is no special model for the simulator, as the averaged model serves this purpose. Nevertheless, it is always wise to create a mathematical model of the converter, as it helps one gain better insight into the way the parameters affect the response and usually allows very quick adjustment of the loop.

Used models
It should be noted that, unless explicitly stated, no complex model will be discussed in the rest of this chapter.
Therefore, the following apply:

● switching model denotes the simple switching model,
● averaged model denotes the simple averaged model.

10.2.6 Basic theory for averaged – continuous-time model

10.2.6.1 Small-ripple approximation

The small-ripple approximation means that we can neglect the output voltage ripple. This is reasonable in most cases, as the voltage ripple is usually much smaller than the DC level of the output voltage VOUT. A comparison of the VOUT waveform and its DC component is shown in Figure 10.3.

Power systems modelling

[Figure 10.3: VOUT waveform with the DC component]

It is apparent that the peak-to-peak voltage magnitude is in millivolts, whereas the DC component is about 1,000 times larger. Hence, we can use the small-ripple approximation and neglect the ripple for our averaged model. So, for steady-state operation, it applies:

V_OUT(DC) >> V_OUT(peak-to-peak)  =>  V_OUT(t) ≈ V_OUT(DC)

The main reason for the small-ripple approximation is simplifying the equations for the averaged model derivation. However, the important consequence for our model is that we do not need to worry about ripple, as it is assumed to be negligible.

10.2.6.2 Averaging

The process of averaging is very important for a basic understanding of how the averaged model works. The main goal here is to remove the switching and its artefacts by averaging a particular signal over one switching period. We will denote the averaged signal in the following style:

⟨A⟩ = (1/T_S) ∫₀^{T_S} A(t) dt

where T_S is the switching period and f_S is the switching frequency defined as:

T_S = 1/f_S

The following section gives a bit of introduction into the theory. There are two state variables used for evaluation of the DC–DC converter:

● the voltage across an inductor,
● the current through a capacitor.

The reason why those variables are suitable for our evaluation is that they change with a step between two (or potentially more) states with constant level.
It means that during each state, the inductor voltage or capacitor current is not changing, or at least not changing a lot, so the small-ripple approximation is still valid. The duration of each state corresponds to the duty-cycle D with respect to the switching period. An example for VL(t) is shown in Figure 10.4. The theory says that in the steady-state operation, the average value of those state variables is equal to 0:

⟨V_L⟩ = (1/T_S) ∫₀^{T_S} V_L(t) dt = 0

⟨I_C⟩ = (1/T_S) ∫₀^{T_S} I_C(t) dt = 0

This fact is very intuitive. For example – if there is no average current through the output capacitor COUT over the switching period, the output voltage VOUT(t0) at the beginning of a period is equal to the voltage VOUT(t0 + TS) at the end of the switching period – which is exactly the case in steady-state operation when the converter works in equilibrium. A similar situation applies to the inductor voltage, so if there is zero average voltage across the inductor over one switching period, the inductor current at the end of the period IL(t0 + TS) will be equal to the inductor current IL(t0) at the beginning of the period. The case of the inductor voltage is shown in Figure 10.4.

The situation changes when a disturbance hits the controller and the duty-cycle changes. In that case, the average voltage across the inductor is no longer equal to 0 V and, as a consequence, the inductor current at the end of the switching period IL(t0 + TS) is not equal to IL(t0) at the beginning of the switching period – see Figure 10.5.

The most important observation from the previous paragraphs is that the change between the initial level at t0 and the level at the end of the switching period t0 + TS depends only on the change of the duty-cycle (assuming other parameters are constant) and not on the exact shape of the current within one period.
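The volt-second reasoning above can be checked numerically. The following Python sketch uses illustrative component values (my assumptions, not the chapter's case study) and confirms that the inductor-current change over one period depends only on the average inductor voltage:

```python
# Numerical check of one-period averaging and of the volt-second result:
# the inductor current change over a period depends only on <VL>.
# Component values below are illustrative assumptions, not the book's design.
VIN, VOUT = 4.4, 1.1   # input / output voltage (V)
L = 1e-6               # inductance (H)
fS = 1e6               # switching frequency (Hz)
TS = 1.0 / fS          # switching period (s)
D = 0.3                # duty-cycle; not VOUT/VIN, so the period is "disturbed"
IL0 = 0.5              # inductor current at the start of the period (A)

# Average inductor voltage, <VL> = (1/TS) * integral of VL(t) over one period:
# VL = VIN - VOUT while the high-side switch is on, -VOUT while it is off.
avg_VL = ((VIN - VOUT) * D * TS + (-VOUT) * (1 - D) * TS) / TS

# Direct piecewise integration of dIL/dt = VL/L over the period...
IL_TS_direct = IL0 + ((VIN - VOUT) * D * TS - VOUT * (1 - D) * TS) / L
# ...matches IL(TS) = (TS/L) * <VL> + IL(0):
IL_TS_avg = IL0 + TS * avg_VL / L

# In steady state (D = VOUT/VIN) the average inductor voltage is zero,
# so IL(TS) = IL(0).
avg_VL_steady = VIN * (VOUT / VIN) - VOUT
```

Both routes to IL(TS) agree regardless of the waveform shape inside the period, which is exactly the property the averaged model relies on.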
[Figure 10.4: Inductor voltage and current in steady state]

Power systems modelling

[Figure 10.5: Inductor voltage and current with duty-cycle variation]

In the case of the inductor, we can start with the equation for the average voltage across the inductor:

⟨V_L⟩ = (V_IN − V_OUT)·D − V_OUT·(1 − D)

with some algebra:

⟨V_L⟩ = V_IN·D − V_OUT                                                    (10.9)

and the current at the end of the switching period is derived as:

I_L(T_S) = (1/L) ∫₀^{D·T_S} (V_IN − V_OUT) dt − (1/L) ∫_{D·T_S}^{T_S} V_OUT dt + I_L(0)     (10.10)

I_L(T_S) = (1/L)·((V_IN − V_OUT)·D·T_S − V_OUT·(1 − D)·T_S) + I_L(0)                        (10.11)

I_L(T_S) = (1/L)·(T_S·(V_IN·D − V_OUT)) + I_L(0)                                            (10.12)

From (10.9) we know that V_IN·D − V_OUT = ⟨V_L⟩, thus:

I_L(T_S) = (T_S/L)·⟨V_L⟩ + I_L(0)

This equation shows that the change of the inductor current during one switching period is a function of the average voltage applied to the inductor. It implies that for the averaged model, we can replace the instantaneous signals with their averaged counterparts. This step removes the ripple from the model (which is assumed negligible).

10.2.6.3 Example of an averaged signal

Figure 10.6 shows the original triangular coil current IL signal and the corresponding ideal averaged signal. It is apparent that the averaged signal is smooth and can be used for a small-signal analysis.

[Figure 10.6: Switching and averaged coil current]

10.2.7 Duty-cycle signal model

It might not be apparent how the duty-cycle signal DUTY is represented in the averaged model. The duty-cycle in the original switching model is a logic signal, which has its own logic voltage levels. However, the information which the DUTY signal holds is the duty-cycle, which is then transferred to the pass device and to the LX node. The duty-cycle is in the range of 0%–100%, or it can be expressed as a real number from 0 to 1.
The 0–1 representation is used directly in the mathematical model. In the averaged model, it is convenient to use a voltage signal with a range from 0 to 1 V. In that implementation, the voltage itself represents the duty-cycle. Moreover, it can be directly compared with the mathematical model if needed.

10.2.8 Pulse width modulator model

The PWM creates an interface from the linear domain into the switching domain. In the buck example, the PWM takes the error signal as input and creates a pulse-width-modulated logic signal with the corresponding duty-cycle. The common basic topology used for the PWM is shown in Figure 10.7. There is a ramp generator driven with a clock signal, indicated by CLK. The ramp generator provides a saw-tooth-shaped signal RAMP that is then compared with the error signal in the PWM comparator. The output of this comparator is the DUTY signal. Typical waveforms are shown in Figure 10.8. The signals shown here are in the voltage domain, but there is no limitation to using currents instead.

The duty-cycle at the output of the PWM is derived in the following manner:

V_PEDESTAL + V_A·(D·T_S)/T_S = V_ERROR

thus

D = (1/V_A)·(V_ERROR − V_PEDESTAL)

Based on the previous equation, we can create a linear model which follows our choice of duty-cycle representation as a voltage signal in the range from 0 to 1 V (0%–100% duty-cycle). The model employs one voltage-controlled voltage source, as shown in Figure 10.9.

Power systems modelling

[Figure 10.7: Basic PWM block diagram]

[Figure 10.8: Basic PWM waveforms]

[Figure 10.9: PWM linear model]

The fixed voltage VPEDESTAL respects the pedestal voltage from the switching model, and the voltage-controlled voltage source with a gain of 1/VA is the gain of the PWM.

10.2.9 Model of the power stage

The power stage, together with the L–C filter, are the elements which create the interface between the switching and non-switching parts of the buck converter.
As we stated earlier, the L–C filter is already a linear block, so it will be kept unchanged in the averaged model. So, the block which needs to be modelled is the power stage. The power stage in the original switching circuit simply amplifies the DUTY signal in a way that the amplitude at the LX node is in the range between the negative and positive voltage supply levels. In our example, these are 0 V and VIN. The duty-cycle is kept unchanged.

As was shown before, the signals in the averaged model are modelled by the average of the signal over one switching period. The same is still applicable to the VLX voltage. Figure 10.10 shows the VLX waveform and its average signal. Based on Figure 10.10, the following equation can be derived:

⟨V_LX⟩ = (1/T_S)·V_IN·D·T_S = V_IN·D                                      (10.16)

So, the VLX in the averaged model is a simple multiplication of VIN by the duty-cycle. A similar exercise can be done for the input current of the power stage with the following outcome:

⟨I_IN⟩ = I_L·D                                                            (10.17)

The equation shows that the input current IIN can be modelled as a simple multiplication of the inductor current IL and the duty-cycle. The duty-cycle signal is modelled with a continuous-time signal in a meaningful range from 0 to 1 V. Therefore, by combining (10.16) and (10.17), the original power stage can be replaced with its averaged model shown in Figure 10.11.

[Figure 10.10: VLX voltage and average voltage]

[Figure 10.11: Full model of the power stage]

Power systems modelling

[Figure 10.12: Model of the power stage with a single multiplier]

[Figure 10.13: Simplified power-stage model with a voltage-controlled voltage source]

The following paragraphs show an example of model simplification. If there is no need to model the input current of the converter, a simplified version of the power-stage model with a single multiplier can be used.
The model then changes to the version shown in Figure 10.12. This corresponds to the right part of the model in Figure 10.11. The model in Figure 10.12 covers the change in duty-cycle as well as the change in VIN. However, because the net VIN is not loaded, the VIN affects the VLX in one direction only. In addition, an analogue multiplier might not be available in some simulators. In that case, it is possible to simplify the model further and use the one shown in Figure 10.13. Bear in mind that the simplified model from Figure 10.13 does not cover any input-related properties like a change in VIN. However, it can be used for the loop stability investigation if the input voltage is connected to a voltage source with low internal impedance and can be assumed as a fixed value.

10.2.10 Complete switching, linear and small-signal model

The schematic diagram of the switching model of a simple buck converter, including the ideal Error Amplifier (EA) and the compensation network, is shown in Figure 10.14. The model provides the frequency response shown in Figure 10.15. The DC gain is set by the resistors R1 and R2, and there is one zero due to C1. The C1 provides a phase boost in order to get a good phase margin even with the L–C combination in the output filter. A rigorous theory for the compensation adjustment is out of the scope of this chapter, but it can be found in [1,2].

[Figure 10.14: Switching model of a BUCK converter with voltage-mode control]

[Figure 10.15: Error amplifier bode plot]

The switching model from Figure 10.14 is the starting point for the new averaged non-switching model. Two blocks, which need to be replaced, are marked with rounded rectangles. Those are the PWM and the power stage.
If they are replaced with their non-switching models derived earlier in this chapter, we obtain the model shown in Figure 10.16. This model allows us to perform small-signal AC analysis, stability analysis, and a transient analysis with time-continuous signals. However, it does not cover parameters like input impedance, because the simplified power-stage model from Figure 10.12 is used.

The averaged model is very close to the model used for a mathematical evaluation of the loop – the mathematical model. In order to create a mathematical model of the loop, the VIN is set as a DC source. For the small-signal analysis, all the DC sources are replaced with their internal resistance. Voltage sources are replaced with a short circuit and current sources with an open circuit. The resulting model is shown in Figure 10.17.

Power systems modelling

[Figure 10.16: Time-continuous averaged model of the simple BUCK converter]

[Figure 10.17: Simplified small-signal model of the BUCK converter]

The gain of the PWM is simply left unchanged, but the pedestal voltage is removed because it is a DC source. The power stage is replaced with a simple gain of VIN(DC), which is the DC value of the input voltage VIN. It replaces the analogue multiplier with the assumption that the VIN is fixed and no small signal comes from there.

The scissors mark shows a good place where the loop can be cut for the loop investigation. The impedance relation before and after the cut should stay unchanged. If the simulator offers a particular analysis for loop investigation, the impedance will be correctly handled. Otherwise, the highlighted place is a good candidate because the output of the buck can be assumed as a low-impedance node, so the R–C network around the amplifier will not affect it.
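Before moving on to the open-loop transfer function, the two averaged blocks that replace the switching parts can be sketched as plain functions. This is an illustrative Python transcription (the helper names are mine, not from the chapter); VA = 4 V follows the ramp amplitude of the chapter's case study:

```python
# Averaged, non-switching models of the PWM and the power stage.
# Helper names are illustrative; VA = 4 V matches the ramp amplitude of the
# chapter's case study (Table 10.1).
def pwm_avg(v_error, v_pedestal=0.0, v_a=4.0):
    """PWM linear model: D = (VERROR - VPEDESTAL) / VA, clipped to 0..1."""
    d = (v_error - v_pedestal) / v_a
    return min(max(d, 0.0), 1.0)

def power_stage_avg(v_in, duty, i_l):
    """Full averaged power stage: <VLX> = VIN * D and <IIN> = IL * D."""
    return v_in * duty, i_l * duty

# A 1 V error signal above the pedestal gives 25% duty-cycle; with VIN = 4.4 V
# and 2 A of inductor current, the averaged stage yields <VLX> = 1.1 V and
# <IIN> = 0.5 A.
duty = pwm_avg(1.0)
v_lx_avg, i_in_avg = power_stage_avg(4.4, duty, 2.0)
```

Everything here is continuous in time, which is what lets the simulator linearise the loop for a native AC analysis.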
10.2.11 The small-signal open-loop transfer function

After putting all the blocks from Figure 10.17 in the loop, we obtain the open-loop transfer function HOL(s):

H_OL(s) = (R2/R1)·(1 + R1·C1·s) · (1/V_A) · V_IN(DC) · 1/(1 + (L/R_LOAD)·s + C_OUT·L·s²)        (10.18)

The terms in the equation are explained as follows: the first R1, R2, C1 term comes from the EA compensation network. The following terms are the gain of the PWM and the gain of the power stage. The last term with L, RLOAD, and COUT comes from the L–C filter, including the load. There are two complex poles from the output filter, damped by the RLOAD resistance. In order to compensate for the double-pole phase shift, a zero is introduced in the EA.

This transfer function can be directly used for evaluating the open-loop transfer function and stability. Figure 10.18 shows the bode plot of HOL(s).

[Figure 10.18: Ideal open-loop transfer function of the BUCK converter]

The open-loop transfer function shows a cross-over frequency of about 100 kHz and a phase margin of about 60°. The resonant frequency of the L–C filter is very apparent. Since there is no serial resistance of the inductor RL, the damping of the L–C filter is just due to the RLOAD.

10.2.12 Comparison of various models

10.2.12.1 Used parameters

In order to make a good overview and comparison of the various models, the following example of a BUCK is used as a case study. It is supposed to be a small buck for a battery-powered application like mobile phones. Capacitors CIN, COUT, and inductor L are discrete components. All the other circuitry is supposed to be integrated into a widely used common complementary metal oxide semiconductor (CMOS) technology. In order to make the model a bit more realistic, the serial resistance of the inductor is added.
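The ideal open-loop expression (10.18) is easy to evaluate with complex arithmetic. The sketch below uses the case-study component values listed in Table 10.1 and checks the magnitude near the stated 100 kHz cross-over; it is a verification aid, not part of the chapter's flow:

```python
import math

# Evaluates the ideal open-loop transfer function (10.18) at one frequency.
# Component values follow the chapter's case study (Table 10.1).
R1, R2, C1 = 100e3, 1500e3, 30e-12       # EA compensation network
VA, VIN_DC = 4.0, 4.4                    # ramp amplitude, DC input voltage
L_IND, COUT, RLOAD = 1e-6, 100e-6, 10.0  # output filter and load

def h_ol(f):
    s = 2j * math.pi * f
    ea = (R2 / R1) * (1 + R1 * C1 * s)        # compensation: DC gain + zero
    pwm = 1.0 / VA                            # PWM gain
    stage = VIN_DC                            # power-stage gain
    filt = 1.0 / (1 + (L_IND / RLOAD) * s + COUT * L_IND * s**2)  # L-C + load
    return ea * pwm * stage * filt

gain_db_100k = 20 * math.log10(abs(h_ol(100e3)))
# The magnitude passes close to 0 dB near 100 kHz, matching the cross-over
# frequency read from the bode plot in Figure 10.18.
```

The same function evaluated at low frequency reproduces the DC gain (R2/R1)·(VIN(DC)/VA), about 24 dB for these values.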
The other model improvement is that the EA is designed as an operational transconductance amplifier with finite GM and a load capacitance C2 connected between the ERR node and ground. These changes bring the second pole of the amplifier into the picture and provide behaviour that is more realistic. Figure 10.19 shows the simplified diagram of the improved time-continuous averaged model.

Power systems modelling

[Figure 10.19: Improved averaged model of the simple BUCK converter]

[Figure 10.20: More realistic open-loop transfer function of the BUCK converter]

10.2.12.2 Small-signal models

After incorporating the changes into the mathematical model, the open-loop transfer function changes. The second pole of the EA affects the phase above a certain frequency, and the resonant effect of the L–C filter (peak) is more damped, as the RL resistance has been introduced. The transfer function from Figure 10.20 is obviously more realistic than the one from Figure 10.18. Nevertheless, the more elements that are added to the circuit, the greater the complexity of the transfer function; hence, a mathematical expression is no longer easy to handle with a classic manual calculation, requiring instead the use of software tools, for example Wolfram Mathematica, MATLAB, Octave, or similar, capable of computing several complex equations and producing the final transfer function.

When we use the continuous-time model from Figure 10.16 and perform a small-signal analysis in the simulator, we obtain the transfer function shown in Figure 10.21. As expected, the results are identical to the mathematical model.
[Figure 10.21: Open-loop transfer function from AC analysis in the simulator]

Table 10.1 Buck model parameters

Parameter   Value   Unit
VIN         4.4     V
VOUT        1.1     V
fSW         1       MHz
COUT        100     µF
L           1       µH
RL          0.025   Ω
RLOAD       10      Ω
VA (ramp)   4       V
R1          100     kΩ
R2          1,500   kΩ
C1          30      pF
GM          1       mS
C2          10      fF

Note: The same parameters – excluding the newly added – were used for the plots earlier in this chapter.

Linear and switching model

We will now compare the results of the averaged and switching models. The comparison is made with the switching model in line with Figure 10.14 and the time-continuous model from Figure 10.16. Both models use the same parameter set listed in Table 10.1. For a fair comparison, the same load conditions are applied to both models. For this example, the test pattern is a load step from 110 mA to 2.11 A within 1 ms. The condition is obtained by adding a piecewise linear current source in parallel to RLOAD, as shown in Figure 10.22.

[Figure 10.22: Load of the buck model for load transient simulations]

The comparison of the results of both the averaged and switching models is shown in Figure 10.23.

[Figure 10.23: Comparison of the transient response of the switching and averaged model]

Both models react almost identically apart from the high-frequency switching ripple in the switching model. The identical behaviour implies that the small-signal linear model and open-loop transfer function correctly describe the switching system.

10.2.13 Other outputs of the small-signal model

The example of the small-signal analysis shows the open-loop transfer function of the controller loop. In general, the model can be used to extract other small-signal parameters, such as output and input impedances.
Bear in mind that for input impedance simulation, the full power-stage model shown in Figure 10.11 must be used.

However, the model is just a model and does not cover all the effects of the circuit. A good designer must be aware of the limitations of the model and make sure that the model is capable of providing the expected behaviour. For instance, the simplified power-stage model illustrated in Figure 10.13 has no connection to the input node VIN. It means any effect from the input node is not considered. So, the model can be well used in the situation when the converter is powered from a voltage source with low internal impedance. Nevertheless, the simplified power-stage model will not provide correct results when an input filter is placed in front of the converter.

10.2.14 Switching frequency effect

Up to now, we have not investigated the impact of the switching frequency. There are many effects which the switching nature brings into the picture, and it is beyond the scope of this chapter to try to cover them all. However, one of the most important will be briefly discussed.

Switching small-signal analysis

In order to investigate the effect of the changes in the switching frequency behaviour, we now need to introduce a new class of tool that enables the user to test and bench the behaviour in both the time and frequency domains with the same model. One of the widely used tools in the industry is the SIMPLIS simulator, on which the analysis illustrated in the following paragraphs is based. The SIMPLIS simulator is able to perform a small-signal analysis directly on the switching model. The results show the small-signal behaviour, including the effects of the switching nature.

The additional phase shift due to the sampling

Due to the way the DC–DC operates, it belongs to the family of discrete-time sampled systems. The period is given by the clock signal with fixed frequency fS.
One important effect of the sampling system is the delay. In the discussed buck example, the PWM comparator cannot make a new decision until the next period starts. If it takes more decisions within the same period, it indicates an incorrect mode of operation. Every delay in the system is somehow translated into the phase of the response. For example, if the delay is constant and equal to the switching period, the phase shift at the switching frequency is equal to 2π (360°). However, the phase shift due to the switching is more complex and does not have a linear phase. The comparison between the small-signal linear model and small-signal switching model responses is shown in Figure 10.24.

The comparison of the open-loop transfer function of both models clearly shows the effect of the switching frequency. A significant phase shift close to the switching frequency does not allow the designer to push the cross-over frequency of the loop too high. The switching frequency limits the possible speed of the controller. It is a rule-of-thumb that the safe cross-over frequency should be fS/10 to fS/5 maximum. Figure 10.25 shows a comparison of the same design with various switching frequencies.

Power systems modelling

[Figure 10.24: Open-loop response comparison – switching vs. averaged model; blue – switching model; red – averaged model]

Lowering the frequency might have the benefit of better light-load efficiency, but from the perspective of the controller, it makes the design more difficult. The switching small-signal model predicts not just the expected lower phase margin but also a lower DC gain. The lower DC gain should not theoretically be related to the sampling, so it is related to some other effects which are not covered by the linear model. Some of the effects which might affect the response will be shown later in the text.
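The delay-to-phase relation can be sketched quickly. Assuming a pure constant delay Td (a deliberate simplification; as noted above, the real sampling phase shift is not linear in frequency), the extra lag is 360°·f·Td:

```python
# Phase lag contributed by a constant delay Td: phase = -360 * f * Td degrees.
# A pure-delay approximation only; the chapter notes the true sampling
# phase shift is more complex than this.
def delay_phase_deg(f, t_delay):
    return -360.0 * f * t_delay

fS = 1e6          # switching frequency (Hz)
TS = 1.0 / fS     # one switching period of delay

# One full period of delay costs 360 degrees at f = fS, and already
# 36 degrees at fS/10 -- one reason the rule-of-thumb keeps the
# cross-over frequency at fS/10 to fS/5.
lag_at_fs = delay_phase_deg(fS, TS)
lag_at_fs_over_10 = delay_phase_deg(fS / 10, TS)
```

Even this crude model makes the design trade-off visible: halving fS doubles the phase cost of the delay at any given cross-over frequency.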
10.2.14.3 The gain of the PWM

The gain of the PWM was discussed in the section about the PWM averaged model. The PWM linear model is derived with the assumption that the input signal (error signal) is constant over the whole period, or at least that it is kept constant around the decision point where it crosses the ramp signal. If this assumption is fulfilled, the gain of the modulator is equal to 1/VA, as derived earlier. Unfortunately, it is often not the case in a real switching converter – especially in converters with a fast loop. The small-signal gain of the PWM depends on the angle at which the input signal hits the ramp signal. A common shape of the input signal is a type of wave with amplitude and phase shift with respect to the beginning of the switching period. The shape in steady-state operation is usually the same for each period, and an offset does not affect much the shape over one period. The gain of the modulator in the discussed case is as follows:

G_PWM =

Figure 10.26(a) shows the ideal duty-cycle with the flat input signal, whereas (b) and (c) show the impact of different shape and phase shift of the input signal on the gain of the PWM.

[Figure 10.25: Open-loop transfer function with various switching frequencies fS; blue – fS = 1 MHz; yellow – fS = 300 kHz]

[Figure 10.26: Effect of the shape of the error signals: (a) flat error signal; (b) error more vertical when reaching ramp; (c) error more parallel to the ramp]

When the input signal hits the ramp signal in a more vertical manner, the gain of the modulator GPWM can be significantly lower than in the normal case. It is shown in Figure 10.26(b). Theoretically, if the input signal were vertical, the small-signal gain would be 0.
Similarly, when the input signal is close to being parallel to the ramp signal, the gain is going to be very high. This case is depicted in Figure 10.26(c). This effect can change the gain significantly. Fortunately, it is not very apparent when the loop is relatively slow with respect to the switching frequency (cross-over frequency
Asymptotic Notation: A Primer

How well will this code run? If you're like me, you probably ask yourself this question fairly often. It's always disappointing to spend a lot of time on a project only to realize that you've created a bottleneck that is wrestling your program's performance through the floor. If only there were a way to predict a program's performance before writing any code, right?

Counting Instructions

There's something intuitive about the idea that we should be able to extrapolate performance from the number of instructions that get executed. Every instruction a processor executes costs some time, so the more instructions that a processor executes, the more time a process will take to run. Unfortunately, this requires an intimate knowledge of the compiler or interpreter, as well as the architecture of the systems running the program. This kind of optimization would only be meaningful within a very narrow set of system parameters, and would only be useful if memory or processing power are at a premium. Since most of us aren't putting rovers on Mars, however, we are going to need a more generic solution.

Asymptotic Behavior

Simply put, asymptotic notation is a technique for describing how an algorithm behaves as the size of the input grows. Suppose that you have two piles of rice. After every second, you add one grain of rice to the first pile, and then you double the size of the second pile. Given an infinite amount of time, both stacks of rice would be infinitely large. But would they be the same size? Mathematically, the pile of rice that doubles every second will be a much, much larger infinity, and we can prove it (warning: math).

Doing the Math

Let's assume that k is the starting size of the pile of rice, and x is equal to the number of seconds that have elapsed since we started.
We can determine how many grains of rice are in the first pile with the formula f(x) = k + x — the number of grains in the pile is equal to the starting size plus one grain for each second that has passed. Likewise, we can determine how many grains of rice are in the second pile with the formula g(x) = k · 2^x — the number of grains in the pile is equal to the starting size of the pile, doubled every second.

Before we get into the weeds with trying to prove this on paper, let's assume that each pile started with 1 grain of rice and see what that looks like on a graph: The pile that doubles every second — g(x) — grows at a much faster rate than the pile that we add one grain at a time. This trend persists regardless of the starting size of the piles.

That's great, Steve, but what does rice have to do with computing?

Nothing, but the math does. This math lets us figure out how much of a resource (in this case, rice) is consumed by a function as the input (in this case, time) grows. If we were to apply this logic to computing, we would usually measure some computing resource (time, memory, storage, etc.) proportional to some variable input (the length of an array, the magnitude of an integer, etc.).

There's one major difference between counting grains of rice and measuring performance in a computer that changes how we must treat our math: there are a lot of factors that affect computation speed in a computer that wouldn't affect the number of grains of rice. What kind of processor is executing the code? What operating system? How is the operating system allocating resources? There are a host of factors that can affect the actual execution time, so there often isn't any value in trying to determine the precise resource consumption of a function.

But all is not lost! Just because we can't determine the exact number of milliseconds it takes to execute a set of commands doesn't mean that our estimates are useless.
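To make the rice comparison concrete before we strip the constants out, here is a quick tabulation of both formulas with k = 1 (sketched in Python rather than the article's JavaScript):

```python
# Tabulates f(x) = k + x against g(x) = k * 2**x for a starting pile of k = 1.
k = 1

def f(x):           # add one grain per second
    return k + x

def g(x):           # double the pile every second
    return k * 2 ** x

for x in range(6):
    print(x, f(x), g(x))
# g overtakes f almost immediately, and the gap keeps exploding from there.
```

Run it for a few more seconds of x and the doubling pile dwarfs the linear one no matter what k you start with.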
The first step is to remove the constants from the expressions: f(x) ≈ x and g(x) ≈ 2^x. If we say that these functions return the same results, and that their values correspond to resource consumption, then we can extrapolate that the function with the smaller growth rate will have higher performance. Now that we have the theory, let's see the application.

Practical Application

Typically, we will measure asymptotes as a function of the size of the input, where the result is the amount of a resource that is consumed. Usually, that resource is either time or an abstract concept of complexity, but it could be anything. Consider the following code snippet:

/**
 * Basic Fibonacci Sequence Generator
 * 0, 1, 1, 2, 3, 5, ...
 * @param {integer} x x'th number from the sequence
 * @return the value of the x'th number from the sequence
 */
function fib(x) {
  if (!x || x <= 0) {
    return 0;
  }
  if (x <= 1) {
    return 1;
  }
  return fib(x - 2) + fib(x - 1);
}

The basic Fibonacci Sequence Generator is a relatively simple function that performs like absolute crap. I encourage you to try it, but don't do it in a browser you aren't willing to force-close, because the resource consumption of this function is exponential. It takes roughly twice as long to calculate the value of fib(5) as it took to calculate the value of fib(4). Let's compare this to a memoized version of the function:

/**
 * Memoized Fibonacci Sequence Generator
 * 0, 1, 1, 2, 3, 5, ...
 * @param {integer} x x'th number from the sequence
 * @return the value of the x'th number from the sequence
 */
var betterFib = (function() {
  var _memo = {};
  return function betterFib(x) {
    if (!x || x <= 0) {
      return 0;
    }
    if (x <= 1) {
      return 1;
    }
    if ('undefined' === typeof _memo[x]) {
      _memo[x] = betterFib(x - 2) + betterFib(x - 1);
    }
    return _memo[x];
  };
})();

By caching the results of this function, we can ensure that any value of x must only be calculated once. This should give us a pretty decent speed bump, but how would we explain that decision to someone?
Sure, in this instance it's pretty easy, but what if we were trying to decide between something more complicated, or if we're relying on code that we don't maintain? We use asymptotic notation.

Different Notations & What they Mean

There are several different notations that are used to describe asymptotic behaviors, and each one describes a different aspect of that behavior. In a nutshell, Big-Oh represents the worst-case behavior of a function, Big-Omega represents the best-case behavior of a function, and Big-Theta represents the way a function always behaves. Definitions aren't useful unless you understand what they mean, though, so let's apply them to our functions fib and betterFib.

Big-Oh (Ο)

Big-Oh is typically used to describe the upper boundary of a function. That doesn't always mean that it's your worst-case behavior; but if we are measuring resource consumption, then it represents the behavior of the maximum resource consumption of a function. Big-Oh also doesn't imply that this is how a function will perform. It's an informational tool to explain how the "upper bound" of resource consumption will behave proportional to the size of the input. If we look back to fib and betterFib, we have already determined our Big-Oh values:

For the function fib, any value x > 1 causes fib to execute twice as many times as the previous value of x. Since we care about asymptotic behavior, we can represent this by stating that fib executes at Ο(2^n).

For the function betterFib, the memoization ensures that we only need to calculate a value once. Any value x > 1 still causes betterFib to execute itself twice, but the value of betterFib(x - 1) includes the value of betterFib(x - 2). The net result of memoization is that betterFib executes in linear time — Ο(n) — for any value of x.

Big-Oh is the most commonly used asymptotic notation in computing. Since Big-Oh measures the way "poor performance" behaves, it's especially useful for reducing user frustration.
As a general rule, avoiding "poor performance" is more important than maximizing "good performance"; and if you aren't convinced of the veracity of that statement, think back to the last time you became consciously aware that your internet was fast vs. the last time you became consciously aware that your internet was slow.

Big-Omega (Ω)

Big-Omega is less commonly used in computing, but is still occasionally useful. Big-Omega notation looks just like Big-Oh, but uses the capital letter Omega (Ω) instead of Omicron (Ο). Where Big-Oh is used to describe the "worst case" behavior of a function, Big-Omega describes the lower-boundary behavior of a function. Again, this doesn't describe how a function will perform, but it provides some insight into the lowest consumption you can expect from a function. This can be useful in instances where you are tie-breaking between two functions that have similar Big-Oh performance. Let's examine fib and betterFib to determine their Big-Omega behaviors.

The function fib has no variance whatsoever. Regardless of how many times the function is run, it will always perform terribly. Since the function has the same asymptotic behavior in every case, we can represent it the same way: Ω(2^n). The function betterFib, however, uses memoization to improve performance. If you ran betterFib(1000) once, it would execute at Ο(n). If you ran betterFib again with any input x ≤ 1000, it would return the value from the _memo object without recursing. This means that the "best case" you can get out of this function is constant time — Ω(1). This is an even broader difference than we saw from Big-Oh.

Big-Theta (Θ)

Big-Theta is less commonly used than Big-Oh, but still important to recognize. Big-Theta is used to describe functions in which the upper-bound behavior and the lower-bound behavior are the same. The function fib will execute at both Ο(2^n) and Ω(2^n). This means that fib also performs at Θ(2^n).
By comparison, the memoization of betterFib causes it to execute at Ο(n) and Ω(1), so it has no Big-Theta value. When approaching a problem, it's helpful to understand how different solutions may impact the performance of your program. Even seemingly simple optimizations can seriously improve the performance of a piece of software in ways that may not always be obvious. Likewise, some optimizations may be wasting your time without providing a net benefit to your execution time. Asymptotic notation provides a framework to explain how different functions behave with large input values. Understanding how to read and derive Big-Oh, Big-Omega, and Big-Theta will improve your ability to write high-performance code, to integrate other people's code into your projects, and to explain to potential third parties how to best integrate your code into their projects. Sure, it takes a little extra work, but I promise that it's worth it.
Svd — torch_svd

svd(input, some=TRUE, compute_uv=TRUE) -> (Tensor, Tensor, Tensor)

Arguments:

• input: (Tensor) the input tensor of size \((*, m, n)\) where * is zero or more batch dimensions consisting of \(m \times n\) matrices.
• some: (bool, optional) controls the shape of the returned U and V.
• compute_uv: (bool, optional) option whether to compute U and V or not.

The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order. The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the MAGMA routine gesdd as well. Irrespective of the original strides, the returned matrix U will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride().

Extra care needs to be taken when backpropagating through the U and V outputs. Such an operation is really only stable when input is full rank with all distinct singular values. Otherwise, NaN can appear as the gradients are not properly defined. Also, notice that double backward will usually do an additional backward through U and V even if the original backward is only on S. When some = FALSE, the gradients on U[..., :, min(m, n):] and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces. When compute_uv = FALSE, backward cannot be performed since the U and V from the forward pass are required for the backward operation.

This function returns a namedtuple (U, S, V) which is the singular value decomposition of an input real matrix or batches of real matrices input such that \(input = U \times diag(S) \times V^T\). If some is TRUE (default), the method returns the reduced singular value decomposition, i.e., if the last two dimensions of input are m and n, then the returned U and V matrices will contain only \(min(n, m)\) orthonormal columns.
If compute_uv is FALSE, the returned U and V matrices will be zero matrices of shape \((m \times m)\) and \((n \times n)\) respectively. some will be ignored here.

Examples:

    if (torch_is_installed()) {
      a = torch_randn(c(5, 3))
      out = torch_svd(a)
      u = out[[1]]
      s = out[[2]]
      v = out[[3]]
      torch_dist(a, torch_mm(torch_mm(u, torch_diag(s)), v$t()))

      a_big = torch_randn(c(7, 5, 3))
      out = torch_svd(a_big)
      u = out[[1]]
      s = out[[2]]
      v = out[[3]]
      torch_dist(a_big, torch_matmul(torch_matmul(u, torch_diag_embed(s)), v$transpose(-2, -1)))
    }
    #> torch_tensor
    #> 2.91169e-06
    #> [ CPUFloatType{} ]
Neural networks do better as they get bigger

Opposable thumbs have played an essential role in the evolution of our species. However, if evolution had given us two additional thumbs, things would not be much different. A single thumb on each hand is sufficient. Not so for neural networks, the most sophisticated AI technologies for performing human-like activities: as they've grown bigger, they've been able to grasp more information. This has come as a surprise to spectators. Basic mathematical findings had predicted that networks should only require so much space, but modern neural networks are frequently expanded far beyond this theoretical limit – known as overparameterization.

The latest study proved that neural networks must be bigger

In December, Microsoft Research's Sébastien Bubeck and Mark Sellke from Stanford University offered a new theory for the scaling problem in a paper presented at NeurIPS, a major conference. They argue that neural networks must be considerably bigger than previously thought to avoid several basic issues. The discovery provides a broader understanding of an issue that has perplexed experts for years.

"It's a really interesting math and theory result. They prove it in this very generic way. So in that sense, it's going to the core of computer science," said Lenka Zdeborová of the Swiss Federal Institute of Technology Lausanne.

The typical standards for the size of neural networks are based on studies of how they memorize data. However, to understand memory, we must first grasp what networks accomplish. The identification of objects in photographs is a typical operation for neural networks.
To create a network that can do this, researchers first give it many photos and object labels, instructing it to discover the relationships between them. After training, the network will correctly identify the object in a photo it has previously seen. A network memorizes data when it is trained. In addition, once a network has memorized enough training data, it also gains some capacity to predict the labels of objects that it has never seen. This ability of a network is called generalization.

The amount of information a neural network can retain is determined by its size. This may be pictured as follows. Assume you have two data points from which to choose. You can link these points with a line defined by two parameters: the slope and the height where it crosses the vertical axis. Suppose someone else is given the line, as well as the x-coordinate of one of the original data points. In that case, they may deduce the corresponding y-coordinate simply by looking at it (or using the parameters). The line has memorized the two data points.

Similar things may be said of neural networks. Images, for example, are described by hundreds or thousands of numerical values — one for each pixel. This set of many free values is mathematically equivalent to the coordinates of a point in a high-dimensional space; the number of coordinates is known as the dimension.

An old mathematical result says that to fit n data points with a curve, you need a function with n parameters. (In the previous example, the two points were described by a curve with two parameters.) When neural networks first emerged as a force in the 1980s, it made sense to think the same. They should only need n parameters to fit n data points — regardless of the dimension of the data.
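The claim that n points need n parameters is easy to check directly. Here is a minimal sketch of my own (not from the article): the degree-(n-1) Lagrange polynomial through n points has exactly n coefficients and reproduces, or "memorizes", every training point.

```python
# Sketch: a curve with n parameters can exactly "memorize" n data points.
# We build the degree-(n-1) Lagrange interpolating polynomial through
# n points and check that it reproduces every training point.

def lagrange(points):
    """Return a function interpolating the given (x, y) points."""
    def p(x):
        total = 0.0
        for j, (xj, yj) in enumerate(points):
            basis = 1.0
            for k, (xk, _) in enumerate(points):
                if k != j:
                    basis *= (x - xk) / (xj - xk)
            total += yj * basis
        return total
    return p

data = [(0, 1), (1, 3), (2, 2), (3, 5)]  # n = 4 points
poly = lagrange(data)                     # n = 4 parameters (degree 3)

for x, y in data:
    assert abs(poly(x) - y) < 1e-9        # every training point is memorized
print(poly(1.5))                          # a prediction between the points
```

Between the training points the polynomial still produces predictions, which is the curve-fitting analogue of generalization.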
"This is no longer what's happening. Right now, we are routinely creating neural networks that have a number of parameters more than the number of training samples. This says that the books have to be rewritten," explained Alex Dimakis of the University of Texas, Austin.

Bubeck and Sellke weren't attempting to rewrite anything when they decided to tackle the problem. They were examining a different feature that neural networks frequently lack: robustness, which is the ability of a network to deal with minor variations. A network that isn't robust may have learnt to recognize a giraffe, but it would mistake a slightly modified variant for a gerbil. When Bubeck and his colleagues realized in 2019 that the problem was linked to network size, they set out to prove theorems about it. "We were studying adversarial examples — and then scale imposed itself on us. We recognized it was this incredible opportunity, because there was this need to understand scale itself," they explained.

In their new proof, the pair demonstrates that robustness requires overparameterization. They calculate how many parameters are required to properly fit data points with a curve that has a mathematical property equivalent to smoothness. Consider a plane curve, with the x-coordinate representing the color of a single pixel and the y-coordinate representing an image label. Because the curve is smooth, the prediction should only change modestly if you move a little distance along it while changing the pixel's color slightly.
However, if an extremely sharp curve has been fitted instead, small changes to the x-coordinate (the color) can produce huge differences in the y-coordinate (the image label). Giraffes may turn into gerbils. According to Bubeck and Sellke, robustly fitting high-dimensional data points requires not n but roughly n·d parameters, where d is the dimension of the input (for example, 784 for a 784-pixel image). In other words, overparameterization is not only beneficial; it's necessary if you want a network to memorize its training data robustly. The argument is based on a curious feature of high-dimensional geometry: points distributed at random across the surface of a sphere are almost all far apart from one another. Because there is such a big gap between locations, fitting them all with a single smooth curve requires many extra parameters. The finding offers a fresh perspective on why scaling up neural networks has been so successful.

Overparameterization has been shown to be beneficial in a variety of studies. According to other research, it can enhance the efficiency of training and the ability of a network to generalize. While we know that overparameterization is required for robustness, how important robustness is compared to other things remains unclear. The new proof, however, implies that robustness may be more significant than previously thought by connecting it with overparameterization. "Robustness seems like a prerequisite to generalization. If you have a system where you just slightly perturb it, and then it goes haywire, what kind of system is that? That's not reasonable. I do think it's a very foundational and basic requirement," explained Bubeck.
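The "big gaps" intuition is easy to simulate. In the sketch below (my own illustration, not from the article; the sqrt(2) figure is mine as well), pairwise distances between random points on a high-dimensional unit sphere all land close to sqrt(2) times the radius, so every point sits far from every other point:

```python
# Sketch: pairwise distances between random points on a high-dimensional
# sphere concentrate tightly near sqrt(2) times the radius. (The sqrt(2)
# figure is my own illustration; the article only says the points are
# far apart.)
import math
import random

random.seed(0)

def random_unit_vector(d):
    """Sample a point uniformly on the unit sphere in d dimensions."""
    v = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

d = 2000
points = [random_unit_vector(d) for _ in range(10)]

dists = []
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        dists.append(math.dist(points[i], points[j]))

print(min(dists), max(dists))  # both close to sqrt(2), i.e. about 1.414
```

With d = 2000 every one of the 45 pairwise distances falls in a narrow band around 1.414, nowhere near the small distances a low-dimensional intuition would suggest.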
Man of the Match Codechef Solution || Codechef contest solution

In a cricket match, there are two teams, each comprising 11 players. The scorecard of the match lists the runs scored and wickets taken by each of these 22 players. To determine the "Man of the Match", we assess each player's performance. Points are awarded to a player as follows:

• Each run scored earns 1 point.
• Every wicket taken earns 20 points.

The player with the highest total points is awarded the "Man of the Match" title. You are given the scorecard of a cricket match, listing the contributions of all 22 players. The players are numbered from 1 to 22. Find the "Man of the Match". It is guaranteed that for all inputs to this problem, the "Man of the Match" is unique. Note: A player who belongs to the losing team can also win the "Man of the Match" award.

Input Format

• The first line of input will contain a single integer T, denoting the number of test cases.
• Each test case consists of 22 lines of input. The i-th of these 22 lines contains two space-separated integers A_i and B_i — respectively, the runs scored and wickets taken by the i-th player.

Output Format

For each test case, output on a new line a single integer i (1 ≤ i ≤ 22) denoting the index of the player with the maximum score. The tests for this problem are designed such that there will be exactly one player with the maximum score.

• 1 ≤ T ≤ 1000
• 0 ≤ A_i ≤ 200
• 0 ≤ B_i ≤ 10
• There will be exactly 1 player with the maximum score.

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        // your code goes here
        int t;
        cin >> t;
        while (t > 0) {
            int n, m, high = -1, res = 0;
            for (int i = 1; i < 23; i++) {
                cin >> n >> m;
                int temp = n + m * 20;
                if (temp > high) {
                    high = temp;   // remember the best score so far
                    res = i;       // and the player who holds it
                }
            }
            cout << res << endl;
            t--;
        }
        return 0;
    }
How To Play Minesweeper - DECADE THIRTY

Asked By: Carl Wood Date: created: Sep 06 2024
How rare is each number in Minesweeper?
Answered By: Jordan Robinson Date: created: Sep 09 2024

Numbers – Typical games of Minesweeper have a wide assortment of numbers, with 0 (blank) through 3 being the most common, and 7's and 8's being quite rare. These boards show off Expert boards with as many of each number as possible:

• Expert Board with 364 0's
• Expert Board with 371 1's
• Expert Board with 334 2's
• Expert Board with 194 3's (by noob)
• Expert Board with 127 4's (by noob)
• Expert Board with 94 5's
• Expert Board with 72 6's
• Expert Board with 38 7's
• Expert Board with 25 8's
• Expert Board with Equal Number of 1's, 2's, and 3's
• Expert Board with 1 8, 2 7's, 4 6's, etc. (by Bry10022)

As a bonus, it turns out to be possible to create an Expert board with the safe squares having an equal number of 1's, 2's, and 3's. Here is an example. And here is a board with one 8, two 7's, four 6's, etc., all the way up to 128 1's.

Can you lose Minesweeper on the first click? No, it is (normally) not possible to hit a mine on the first click in Microsoft's implementation of Minesweeper. It is pretty easy to convince yourself that it is impossible to lose on the first click.

Does Minesweeper ever require guessing? Guessing – Sometimes you need to guess in Minesweeper. The optimal guessing strategy depends on whether your goal is to win or to win fewer games more quickly. New players often make the mistake of guessing instead of learning how to solve patterns. The first strategy is to guess quickly. This is the best approach when there is no possibility of obtaining further information. It can also be effective if you are happy losing often in order to win fewer games but with better scores. The second strategy is to guess only when you are forced to guess.
If the squares touch other unopened squares, solve the rest of the board first in the hope that approaching from a different direction will eliminate the guess. However, many professionals guess immediately to avoid incurring the time it takes to move the mouse to an easier location. A third strategy is to practice playing with no flags so you become better at looking for empty squares. Players who enjoy flagging often make the mistake of guessing a mine location and chording when they could have opened a safe square instead. This 50:50 guess is unavoidable. The best strategy is to guess quickly. The pink squares are a 50:50 guess. The first strategy guesses quickly. The second strategy solves the rest of the board and hopes new information eliminates the guess. There could be 1, 2 or 3 mines in these four squares. The second strategy solves the rest of the board to determine the number of mines. If there are 1 or 3 mines no guessing is needed. You cannot deduce the mines. The third strategy forgets about mines and opens the safe pink squares. If each pink square is a 1 the blue squares can also be opened. A fourth strategy makes the most useful guess. Sometimes one option eliminates another guess or makes the rest of the board easier to solve. For example, when there is a 33:66 situation on a row it is best to open the right or left squares. Opening the middle square will force you to make a second guess. A fifth strategy considers probability. The mine density on Beginner (8×8) and Intermediate (16×16) is 0.156 and on Expert (16×30) is 0.206. When there is a 50:50 guess it is actually safer to open a random square! Edges are more likely to be openings than squares near the middle. A special case is the top left corner where the probability of being a mine doubles after the first click (due to mine shifting). Avoid unnecessary guesses. Instead of opening three squares in a row open the yellow squares first so you have time to react if the middle square is a mine.
If you open the middle square you need to make another guess. The fourth strategy opens the blue square to eliminate the guess. There are multiple 50:50 guesses. The fourth strategy opens a useful square. If the blue square is a 3 or 7 it eliminates guessing. If the green square is 3 or 6 it could also eliminate guessing. There are multiple 50:50 guesses. The fifth strategy opens a 'random' square. On Expert the blue square is 20:80, does not touch a 50:50 and is on an edge so might be an opening. The sixth strategy is to calculate probability. This is the best strategy for winning games but can be complicated and time consuming. Local probability is easy to calculate but global probability is much more difficult. For example, it is easy to calculate that one mine in two squares is 50:50, but what if probability depends on all possible mine arrangements for the rest of the board? Sean Barrett has written an excellent guide to Minesweeper Advanced Tactics. The following example considers all six strategies. The first strategy is to guess quickly and hope for the best. This approach will give the best score if you survive. The second strategy is solving the rest of the board to determine the number of mines remaining. There are 79 possible mine arrangements but only 1 solution has 9 mines. The third strategy opens a safe square, but in this case there are none. The fourth strategy makes a useful guess. In this case there is one square (I) that solves the board if it is a 4 or 7. The fifth strategy guesses a square that does not touch a number (B, C, F, G), hoping the Expert density of 0.206 comes to the rescue. The sixth strategy calculates global mine probability, which ranges from 0.392 (D, K) to 0.798 (J).

Is Minesweeper always solvable? No. It's not like Sudoku, where every puzzle is created to have a unique solution that can be found solely through logic.
Depending on where you click in a Minesweeper grid, you may find yourself in a spot where your information is limited and you can't go any further without taking a guess.

Is Minesweeper a game of luck or skill? Strategies for Mastering Minesweeper – Although Minesweeper relies partly on luck, several strategies can greatly increase your chances of success. First, players should focus on opening as many squares as possible by clicking on corners or edges of the grid. This minimizes the risk of accidentally clicking on a mine while providing valuable clues about the surrounding squares. As you progress through the game, these clues will help create a clearer picture of where mines are likely to be located, allowing you to make more informed decisions about where to click next. An essential skill in Minesweeper is the ability to recognize patterns. Certain combinations of numbers on the grid provide definitive information about the location of nearby mines. By familiarizing yourself with these patterns, you can rapidly deduce which squares are safe to click and which should be flagged as potential mine locations. To speed up your gameplay, you can use the left and right mouse buttons simultaneously to clear safe squares, a technique known as chording. Developing a winning strategy also involves understanding probability and taking calculated risks when necessary. When faced with an ambiguous situation, try to weigh the likelihood of a mine being present based on the information given and the overall density of mines on the grid. This can help you make more accurate decisions in situations where an educated guess is required. With practice and persistence, you'll find your win rate and speed improving over time.

Asked By: Morgan Gray Date: created: Oct 02 2023
Where do you click first in Minesweeper?
Answered By: Dennis Hernandez Date: created: Oct 03 2023

The First Click – Where is the best place to make the first click in Minesweeper?
You need an opening and some numbers to start playing. Most players begin games with random clicks to find an opening. Clicking in the middle produces bigger openings so you get more numbers and an easier start to the game. However, openings in the middle are less common than openings on edges. The Beginner (8×8) and Intermediate (16×16) levels on Windows Minesweeper generate a limited number of games that repeat in board cycles. In 2002, Tim Kostka generated every board and calculated the probability and average size of openings for each square. The probability of an opening increases towards edges but the size of openings increases towards the middle. Similar calculations were performed for Expert.

Average opening size on Beginner ranges from 18 to 32 squares. Average opening size on Intermediate ranges from 27 to 66 squares. Average opening size on Expert ranges from 16 to 41 squares. Opening probability on Beginner ranges from 0.19 to 0.60. Opening probability on Intermediate ranges from 0.21 to 0.60. Opening probability on Expert ranges from 0.12 to 0.50.

You might notice that the four squares in the top left corner produce the fewest and smallest openings. Windows Minesweeper makes the first click safe by shifting the mine to the first empty square on the top row, starting from the left corner. Windows Vista introduced guaranteed openings on the first click so you should always start in the middle. However, new players should start in the middle on all versions because, despite losing more games in the first few clicks, they will finish more games due to larger openings. For experienced players the best place to start is more complicated and depends on personal preference. For example, do you mind losing thousands of games an hour in the first three clicks just to get bigger openings?

Asked By: Tyler King Date: created: Feb 17 2024
What do blank squares mean in Minesweeper?
Answered By: Fred Moore Date: created: Feb 19 2024
Understand What The Blank Squares Mean In Minesweeper – If you see blank squares on the game board, don't worry, you did not somehow break the game. Sometimes when you click on a square, a cluster of squares will automatically open. These clusters of squares will include both blank and numbered squares.

Asked By: Jake Sanchez Date: created: Sep 14 2024
Can you always beat Minesweeper without guessing?
Answered By: Gavin King Date: created: Sep 14 2024

No. It's not like Sudoku, where every puzzle is created to have a unique solution that can be found solely through logic. Depending on where you click in a Minesweeper grid, you may find yourself in a spot where your information is limited and you can't go any further without taking a guess.

Can you win every Minesweeper game? While some players may have strategies that give them a better chance of winning more often, there is no guaranteed way to always win at Minesweeper.

What is a good score for Minesweeper? Really good times: ~7 seconds on Beginner and ~35 seconds on Intermediate. Be warned though: it just gets tougher to better your times from there on.

How do I play Minesweeper?

Asked By: Bruce Walker Date: created: Aug 03 2023
How do you know if you won Minesweeper?
Answered By: Herbert Flores Date: created: Aug 04 2023

How To Play Minesweeper (Minesweeper on Windows XP with 40 mines solved in 28 seconds.) Minesweeper is a game where mines are hidden in a grid of squares. Safe squares have numbers telling you how many mines touch the square. You can use the number clues to solve the game by opening all of the safe squares. If you click on a mine you lose the game! Windows Minesweeper always makes the first click safe. You open squares with the left mouse button and put flags on mines with the right mouse button. Pressing the right mouse button again changes your flag into a question mark.
When you open a square that does not touch any mines, it will be empty and the adjacent squares will automatically open in all directions until reaching squares that contain numbers. A common strategy for starting games is to randomly click until you get a big opening with lots of numbers. If you flag all of the mines touching a number, chording on the number opens the remaining squares. Chording is when you press both mouse buttons at the same time. This can save you a lot of work! However, if you place the correct number of flags on the wrong squares, chording will explode the mines. The three difficulty levels are Beginner (8×8 or 9×9 with 10 mines), Intermediate (16×16 with 40 mines) and Expert (30×16 with 99 mines). The game ends when all safe squares have been opened. A counter shows the number of mines without flags, and a clock shows your time in seconds. Minesweeper saves your best time for each difficulty level. You can also play Custom games up to 30×24 with a minimum of 10 mines and a maximum of (x-1)(y-1) mines.

Asked By: Carter Brooks Date: created: Sep 11 2024
Has anyone ever got an 8 in Minesweeper?
Answered By: Juan Kelly Date: created: Sep 13 2024

Getting an 8 in a Minesweeper game is a very rare occurrence in a regular game, even the rarest number to get, unless you play a modified version of Minesweeper with a higher mine density (the term "mine density" means the ratio between the total number of mines and the number of squares).

Asked By: Jacob White Date: created: Sep 20 2024
What is the rarest number in Minesweeper?
Answered By: Wallace James Date: created: Sep 22 2024

What Are The Rarest Numbers In Minesweeper? – The rarest numbers are seven and eight. According to StackExchange.com, getting a number 8 in Minesweeper is roughly 6×10^-8, or a 0.0008219 chance of getting an 8. Getting a seven is slightly more feasible, with a 0.02716 chance.
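As a back-of-envelope check of my own (not from the quoted answer), the Expert mine density and the rarity of an 8 can be computed directly; the expected number of 8s per Expert board comes out very close to the 0.0008219 figure quoted above.

```python
# Sketch (my own calculation, not from the quoted answer): mine density
# on Expert, and the chance that one fixed interior cell shows an 8,
# i.e. the cell is safe and all 8 of its neighbours are mines.
from math import comb

width, height, mines = 30, 16, 99
cells = width * height

density = mines / cells
print(density)  # 0.20625, the Expert density quoted earlier in the guide

# Boards where 8 specific neighbour cells are mines and the centre cell is
# safe, divided by all possible placements of the 99 mines.
p_eight = comb(cells - 9, mines - 8) / comb(cells, mines)

interior = (width - 2) * (height - 2)  # cells with a full ring of 8 neighbours
expected_eights = interior * p_eight   # expected number of 8s per board

print(p_eight)          # roughly 2.1e-06 per interior cell
print(expected_eights)  # roughly 0.00082, in line with the quoted 0.0008219
```

Since 8s are so rare, the expected count per board is also approximately the probability that an Expert board contains an 8 at all.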
Volume of Solid Figures - FilipiKnow

So far, we have explored the world of one- and two-dimensional geometric figures such as lines, angles, and plane figures. This time, we are going to the realm of three-dimensional geometric figures called solid figures. Objects around you are primarily solid figures: the ice cube you use for your juice, the basketball you dribble, the traffic cone you see on the road, and much more! It would take you forever to list all of them. This chapter will delve into the world of solid geometric figures such as cubes, cones, pyramids, rectangular prisms, and cylinders. We will also learn how to compute their respective volumes and use this technique to solve real-life word problems.

What Are Solid Figures?

Solid figures are geometric figures that have three dimensions – width, depth, and height. We live in a three-dimensional world where almost every object is three-dimensional. Take a look at the given image below. The geometric figure on the left side is a two-dimensional square (i.e., a plane figure). Meanwhile, you see a three-dimensional cube on the right side. Can you see how these figures are different from each other? A cube is an example of a solid figure since it is three-dimensional. In this module, we shift our focus to the branch of geometry that deals with solid figures: solid geometry.

What Are the Types of Solid Figures?

Let us discuss the most common types of solid figures we can observe in the real world.

1. Cube

A cube is a solid geometric figure with six squares as its "faces." The image above shows an example of a cube. Notice that it is made of squares joined together to form this solid figure. These squares are the faces of the cube.
One of the best examples of a cube is the popular puzzle “Rubik’s cube,” which most of us are familiar with. Other examples of a cube are ice cubes, sugar cubes, and the six-sided dice used in board games. Since cubes have square faces, the sides of a cube are all congruent or have the exact same measurement. 2. Rectangular Prism A rectangular prism has six faces, just like a cube. But instead of squares, its faces are all rectangles. A rectangular prism has length, width, and height. The length of the rectangular prism refers to how long it is, the width refers to how wide or thick it is, and the height refers to how tall it is. Rectangular prisms are also called cuboids or rectangular solids. Examples of a rectangular prism are aquariums (or fish tanks), shoeboxes, cabinets, books, cargo containers, and more. 3. Pyramid A pyramid is a solid figure with triangular sides that meet at a specific point called the apex and a base that is a polygon. Notice that the square pyramid above has four triangular sides that meet at a certain point (the apex). Its base is a square, so we call it a square pyramid. The vertical line that connects the base and the apex (topmost point) of the pyramid is the height of the pyramid. In the figure above, the red line represents the height. If we create such a pyramid with a triangular base, we have a triangular pyramid. Meanwhile, we have a hexagonal pyramid if you use a hexagon as the base. In other words, you can use any polygon as a base to create a pyramid. The Egyptian Pyramids are a real-life example of a pyramid we are all familiar with. These structures were built thousands of years ago and are considered among the most beautiful creations of ancient human civilizations. In fact, it is regarded as one of the world’s seven ancient wonders. Other real-life examples of a pyramid include tents and some perfume bottles. 4. Cylinder A cylinder is a solid figure with two parallel ends that are congruent circles. 
A curved surface connects these circles. A cylinder has a radius in its circular ends and a height, which is the vertical distance from the center of one circular end to another. The height of the cylinder tells you how long (if the cylinder is horizontally placed) or how high (if the cylinder is vertically placed) it is. One of the best visual examples of a cylinder is a tin can. A tin can has two ends, both circles connected by a curved surface. Aside from tin cans, batteries, cylindrical tanks, drinking glasses, toilet paper rolls, and beakers are some real-life examples of cylinders. 5. Cone As you can see in the image above, a cone has a circular base connected to the topmost point (called the apex) through a curved surface. A cone has a radius in its circular base and a height, which is the vertical distance from the apex to the center of the circle. It is not difficult to imagine a cone in real life since it best resembles the ice cream cone we use for our ice cream. You also see them constantly on the road in the form of traffic cones, which help in traffic management. And who can forget those conical birthday hats we used to wear during birthday parties? 6. Sphere The sphere is the solid figure counterpart of a circle. It is the set of all points that are equidistant or have the same distance from a center in a three-dimensional space. In other words, you can think of spheres as solid round figures. Just like a circle, the sphere has a radius. The sphere’s radius is the distance from the sphere’s center to any point on the sphere. Furthermore, if we form a line segment from one point on the sphere to another and let it pass through the center, we form the sphere’s diameter. The length of the diameter of a sphere is twice as long as the length of its radius, just like in a circle. 
Real-life examples of a sphere include globes we use to represent Earth, the balls we use in outdoor sports such as basketball and volleyball, and those marbles (or jolens) we loved collecting as kids. There are more solid figures, but we will focus only on these six common solid figures. Volume of a Solid Figure The space that a solid figure occupies is its volume. Think of an aquarium or a fish tank. A fish tank is an example of a solid figure (specifically, a rectangular prism). The maximum amount of water you can put in that fish tank is equivalent to the volume of that tank. As shown in the image below, you can think of the volume as the number of cubic units you can put inside a solid figure. However, you cannot see cubic units inside a solid figure in real life. Thus, we must use reliable and accurate methods to determine the volume of solid figures. For this reason, mathematicians derived various formulas to find the volume of these solids. In this section, let us talk about these methods and formulas. How To Compute the Volume of Solid Figures Before calculating the volumes of various solid figures, ensure you have mastered the operations on fractions and decimals because you’ll apply them in the succeeding calculations. If you need to brush up on your manual calculation skills, we advise you to do it first. Once you are ready, read the succeeding sections to learn how to find the volume of solid figures. 1. Volume of a Cube The volume of a cube is just the measurement of the cube’s side multiplied by itself three times. In other words, the volume of the cube can be obtained by raising the length of the side to the third power or by “cubing” it. In symbols: V[cube] = s^3 Quite simple, right? Sample Problem 1: A cube has a side length of 3 cm. Determine the volume of the cube. Solution: As we have stated earlier, we can get the volume of this cube just by multiplying the length of the side three times by itself or raising it to the third power. 
The given side of the cube is 3 cm long. Therefore, we have s = 3. Using the formula: V[cube] = s^3 = (3)^3 = 3 x 3 x 3 = 27 Hence, the volume of the cube is 27 cm^3. Note that when we are writing the volume of a solid figure, we express the given units in “cubic units.” If the given measurement is in centimeters (cm), the volume must be written in cubic centimeters (cm^3). Likewise, if the given measurement is in feet (ft), the volume must be cubic feet (ft^3). Sample Problem 2: The packaging used by Lemon Inc. for its lemon-flavored juice drink is cubic. If the side of this cubic packaging material is 15 cm long, how many cubic centimeters of juice drink can be put in the packaging? Solution: To determine the number of cubic centimeters of juice drink that can be put in the packaging, we must determine its volume. Since the packaging is cubic, we can use the formula for the volume of a cube. V = s^3 The side of the packaging is 15 cm long. Thus, we have s = 15. Let us substitute s = 15 into the formula: V = (15)^3 V = 15 x 15 x 15 V = 3375 Thus, the volume of the cubic packaging is 3375 cm^3. It means about 3375 cm^3 of the juice drink can be put in the packaging. 2. Volume of a Rectangular Prism Recall that a rectangular prism has length, width, and height. Computing the volume of a rectangular prism is quite simple. All you have to do is multiply the rectangular prism’s length, width, and height. V[rectangular prism ] = lwh where l stands for the length, w for the width, and h for the height of the rectangular prism. Here are some examples of how to find the volume of a rectangular prism: Sample Problem 1: What is the volume of a shoebox that is 4 inches long, 3 inches wide, and 3 inches high? Solution: A shoebox is an example of a rectangular prism. Since we have the shoebox’s length, width, and height, we can compute its volume. The volume of a rectangular prism is just V = lwh or the product of its length, width, and height. We have l = 4, w = 3, and h = 3. 
Using the formula: V = lwh V = (4)(3)(3) V = 36 Thus, the volume of the shoebox is 36 cubic inches (in^3). Sample Problem 2: Compute the volume of the rectangular prism below. Solution: The given figure above tells us that the length of the rectangular prism is 8 cm, its width is 6 cm, and its height is 5 cm. Recall that the volume of a rectangular prism is just the product of the measurements of its length, width, and height: V = lwh V = (8)(6)(5) V = 240 Thus, the volume of the rectangular prism is 240 cm^3. 3. Volume of a Pyramid The volume of a pyramid is equivalent to ⅓ of the product of the area of its base and height. V = ⅓ bh where b stands for the area of the polygonal base of the pyramid, and h stands for its height. Most of the time, the height of the pyramid is given. However, the value of the area of the pyramid’s base depends on the base type. If we have a square pyramid (a pyramid with a square base), then the area of the pyramid’s base is equivalent to the area of the square used as the base. Meanwhile, if we have a triangular pyramid (a pyramid with a triangular base), the area of the pyramid’s base is the area of that triangular base. Thus, the area of the pyramid’s base is equal to the area of the polygon it uses as the base. To further understand this concept, let us solve some examples. Sample Problem 1: Determine the volume of the pyramid below: Solution: We have a square pyramid above since the base is a square. The height is already given, which is 12 cm. Meanwhile, the side of the square base is given, which is 4 cm. Let us determine the area of this square base first to find the volume of this pyramid. The formula for the area of a square is A = s^2. The side of the pyramid’s square base is 4 cm. Thus, we have s = 4. Therefore, the area of the square is A = (4)^2 = 16. Thus, we will use b = 16 as the area of the base in our formula. We now have base = 16 (as we have computed above) and height = 12 cm (this is given). 
Let us now do the math and compute the volume of the pyramid: V = ⅓ bh V = ⅓ (16)(12) V = 64 Thus, the volume of the square pyramid above is 64 cm^3. Sample Problem 2: A pyramid has a rectangular base. The base has a length of 5 inches and a width of 3 inches. Determine the volume of the pyramid if it is 6 inches tall. Solution: We already have the height of the pyramid, which is 6 inches. However, we still need to determine the area of the base for us to compute the volume of the pyramid. How do we find the area of the base of this pyramid? We have a pyramid that has a rectangular base. Therefore, we need to find the area of that rectangular base. To clarify, we need to look for the area of the rectangular base so that we can use it in the formula for the volume of the pyramid. The rectangular base has a length of 5 inches and a width of 3 inches. The area of a rectangle is computed as A = lw or the product of the length and the width. Therefore, the area of the rectangular base is: A = lw A = 5(3) = 15 in^2 So, the area of the rectangular base is 15 in^2. We now have a value of b for the formula. Let us compute the pyramid’s volume using h = 6 (this is given) and b = 15 (as computed above). V = ⅓ bh V = ⅓ (15)(6) V = 30 Therefore, the volume of the rectangular pyramid is 30 in^3. 
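The cube, rectangular-prism, and pyramid computations above can be double-checked with a short script (a sketch; the function names are ours, not from the article):

```python
# Volume formulas from the sections above, checked against the sample problems.
def volume_cube(s):
    # V = s^3
    return s ** 3

def volume_rectangular_prism(l, w, h):
    # V = l * w * h
    return l * w * h

def volume_pyramid(base_area, h):
    # V = (1/3) * b * h, where b is the area of the polygonal base
    return base_area * h / 3

assert volume_cube(3) == 27                      # Sample Problem 1 (cube), cm^3
assert volume_cube(15) == 3375                   # juice packaging, cm^3
assert volume_rectangular_prism(4, 3, 3) == 36   # shoebox, in^3
assert volume_rectangular_prism(8, 6, 5) == 240  # cm^3
assert volume_pyramid(4 ** 2, 12) == 64          # square pyramid, cm^3
assert volume_pyramid(5 * 3, 6) == 30            # rectangular pyramid, in^3
```

Each assertion reproduces one of the sample answers worked out by hand above.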
To make things easier, I will share with you some “specific” formulas that you can use instead to find the volumes of a square pyramid, rectangular pyramid, and triangular pyramid:
- Square pyramid: V[square pyramid] = ⅓ s^2h, where s is the side of the pyramid’s square base and h is the pyramid’s height.
- Rectangular pyramid: V[rectangular pyramid] = ⅓ lwh, where l and w are the length and width of the rectangular base and h is the pyramid’s height.
- Triangular pyramid: V[triangular pyramid] = ⅙ (b[t]h[t])h[p], where b[t] and h[t] are the base and the height of the triangular base, respectively, and h[p] is the pyramid’s height.
I know that it is taxing to memorize these formulas, so I advise you to stick to the method we discussed earlier: to find the area of the polygonal base of the pyramid first and then compute its volume. However, if you opt to memorize these formulas, there’s no harm in doing it since it takes less time to use these formulas than the long method. Anyway, let us try to solve the volume of a triangular pyramid using both of these methods: Sample Problem 3: A pyramid has a base shaped like a triangle. The triangular base has a height of 5 cm and a base of 4 cm. Meanwhile, the pyramid is 10 cm high. Calculate the volume of the pyramid. Method 1: Longer Method Let us compute the area of the triangular base first. The height of the triangular base is 5 cm while its base is 4 cm. Using the formula for the area of a triangle: A = ½ bh A = ½ (4)(5) A = ½ (20) A = 10 Thus, the area of the triangular base is 10 cm^2. Now, we have base = 10 and height = 10. Let us determine the volume of the pyramid using the “general” formula for the volume of a pyramid: V = ⅓ bh V = ⅓ (10)(10) V = 100/3 V = 33.33 Thus, the volume of the triangular pyramid in the problem is approximately 33.33 cm^3. Method 2: Using the “specific” formula for the triangular pyramid V[triangular pyramid] = ⅙ (b[t]h[t])h[p] b[t] stands for the base of the triangular base. 
So, we have b[t] = 4 cm. Meanwhile, h[t] represents the height of the triangular base. So, we have h[t] = 5 cm. Lastly, h[p] represents the height of the pyramid, so we have h[p] = 10. So, to summarize, we have b[t] = 4, h[t] = 5, and h[p] = 10. Using the formula above: V[triangular pyramid] = ⅙ (b[t]h[t])h[p] V[triangular pyramid] = ⅙ (4)(5)(10) V[triangular pyramid] = ⅙ (20)(10) V[triangular pyramid] = ⅙ (200) V[triangular pyramid] = 200/6 V[triangular pyramid] = 33.33 Thus, the volume of the triangular pyramid is 33.33 cm^3. Notice that you will still arrive at the same answer whether you use the more extended method or the “specific” formula. So, it is up to you which way you prefer. 4. Volume of a Cylinder The volume of a cylinder is the product of the area of its circular base and height. In symbols, V[cylinder] = bh where b is the area of the circular base, and h is the height of the cylinder. Note that since a cylinder has a circular base, the area of the circular base (b) is b = πr^2, where r is the radius of the circular base. We can make the given formula above more specific: V[cylinder] = πr²h where r is the radius of the circular base. You might ask this yourself after looking at the given formulas: Which one should I use? If the circular base area is already given in the problem, then use V = bh. Meanwhile, if the area of the circular base is unknown and only the radius of the base is given, then use V = πr^2h. Sample Problem 1: The radius of the circular base of a cylindrical tank is 3 meters long. The cylindrical tank is 6 meters high. Determine the total amount of water that can be put in the cylindrical tank (Use π = 3.14). Solution: The total amount of water that you can put inside the cylindrical tank depends on the volume of that cylindrical tank. So, to answer the given problem, we must compute the volume of the cylindrical tank. The height of the cylindrical tank is 6 meters. 
Meanwhile, the radius of its circular base is 3 meters long. Since we have the radius of the circular base rather than its area, we must use the formula V = πr^2h. Let us substitute r = 3 and h = 6 in the formula. Take note that the estimate of π that we are going to use is 3.14: V = πr^2h V = (3.14)(3)^2(6) V = (3.14)(9)(6) V = (3.14)(54) V = 169.56 Thus, the volume of the cylindrical tank is 169.56 m^3. This means that the total amount of water that can be poured inside is about 169.56 cubic meters. Sample Problem 2: The diameter of the circular base of the milk can is 10 cm long. If the can is 14 cm tall, determine the amount of milk that can be put in the can (Use π = 3.14). Solution: The total amount of milk inside the can depends on its volume. The circular base of the milk can has a diameter of 10 cm. Take note that the diameter is twice the measurement of the radius. Thus, if the diameter of the circular base is 10 cm, then the radius of the circular base must be 10/2 = 5 cm. So, we have 5 cm as the radius of the circular base (r = 5). Meanwhile, the problem states that the can is 14 cm tall. It means that we have 14 cm as the height of this cylindrical can (h = 14). Since we already have the radius and height of the cylinder, let us use the formula V = πr^2h. V = πr^2h V = (3.14)(5)^2(14) V = (3.14)(25)(14) V = 1099 Thus, the volume of the milk can is 1099 cm^3. It means that 1099 cubic centimeters of milk can be put inside the milk can. Sample Problem 3: A clock’s battery is cylindrical. It has a base with an area of 2.5 cm^2 and a height of 5 cm. Determine the volume of the clock’s battery. Solution: The problem provided us with the area of the circular base of the cylinder and its height. Therefore, we can use the formula V = bh. The given area of the base is 2.5 cm^2. So, we have b = 2.5. Meanwhile, the height of the battery is 5 cm. So, we have h = 5. Let us now substitute these values into the formula. 
V = bh V = (2.5)(5) V = 12.5 Thus, the volume of the clock’s battery is 12.5 cm^3. 5. Volume of a Cone The volume of a cone can be calculated as ⅓ of the product of the area of its circular base and height. In symbols, V[cone] = ⅓ bh where b is the area of the cone’s circular base, and h is its height. Since a cone has a circular base, the area of the circular base (b) is b = πr^2, where r is the radius of the cone’s circular base. Hence, we can write the formula above more precisely: V[cone] = ⅓ πr^2h where r is the radius of the circular base of the cone. As you may have noticed, the formula for the volume of a cone is similar to the formula for the volume of a cylinder. The two solid figures almost have the same formula, except that the cone has an additional ⅓ in its formula. If a cone and a cylinder have the same radius and height, then the volume of the cone is equal to ⅓ of the volume of the cylinder. Going back to how the volume of a cone is computed, if the area of the circular base and the height of the cone are already given in the problem, use the formula V = ⅓ bh. On the other hand, if only the radius of the circular base and its height are presented, then use V = ⅓ πr^2h. Let us try to solve some examples. Sample Problem 1: A cone has a base with an area of 120 mm^2. If the cone is 50 mm high, determine the volume of the cone. Solution: In this given problem, the cone base’s area and height are given. Thus, we can use the formula V = ⅓ bh to determine its volume. So, we have b = 120 and h = 50. Let us calculate the volume of the cone. V = ⅓ bh V = ⅓ (120)(50) V = ⅓(6000) V = 2000 Hence, the volume of the cone is 2000 mm^3. Sample Problem 2: Determine the volume of the cone below (Use π as it is). Solution: Based on the illustration above, the cone has a height of 6 centimeters and a radius of 5 centimeters. Since the cone’s height and radius are already given, using the formula V = ⅓πr²h is easier. We have h = 6 and r = 5. 
Note that the problem requires us to use π as it is, so we don’t have to use an approximate value and will use the Greek letter instead. V = ⅓πr^2h V = ⅓π(5)^2(6) V = ⅓π(25)(6) V = π(25)(2) V = 50π Therefore, the volume of the cone is 50π cm^3. Sample Problem 3: Rose has a cylindrical pencil holder and a cone-shaped mini-lamp, which have the same radius (of the circular base) and height. The cylindrical pencil holder has a height of 25 cm and a radius of 10 cm. Determine the volume of Rose’s cone-shaped mini-lamp using the volume of Rose’s cylindrical pencil holder. Solution: Recall that the volume of a cone with the same height and radius as a cylinder is equivalent to ⅓ of the volume of the cylinder. Thus, to answer the given problem, we need to find first the volume of the cylindrical pencil holder and then take ⅓ of it to obtain the volume of Rose’s cone-shaped mini-lamp. Let us first derive the volume of Rose’s cylindrical pencil holder. The problem states that the pencil holder has a height of 25 cm, and the radius of its circular base is 10 cm. Since we have the height and the radius of the circular base, it is more convenient to use the formula V = πr^2h for the volume of the cylinder. We have r = 10 and h = 25. Take note that there’s no estimate of π that we are required to use, so we use it as it is. Substituting these values into the formula: V[cylinder] = πr^2h V[cylinder] = π(10)^2(25) V[cylinder] = π(100)(25) V[cylinder] = 2500π Thus, the volume of the cylindrical pencil holder is 2500π cm^3. Now, we need to take ⅓ of the volume of the cylindrical pencil holder to obtain the volume of the cone-shaped mini-lamp. Thus: V[cone] = ⅓ (2500π) V[cone] = 833.33π Thus, the volume of the cone-shaped mini-lamp is 833.33π cm^3. 6. Volume of a Sphere The volume of a sphere can be obtained using the formula below: V[sphere] = 4⁄3πr^3 where r is the radius of the sphere. 
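The ⅓ relationship between a cone and a cylinder of the same radius and height, together with the sphere formula just given, can be checked numerically (a sketch using Python's math module; the function names are ours):

```python
import math

def volume_cylinder(r, h):
    # V = pi * r^2 * h
    return math.pi * r ** 2 * h

def volume_cone(r, h):
    # A cone holds 1/3 of the cylinder with the same radius and height.
    return volume_cylinder(r, h) / 3

def volume_sphere(r):
    # V = (4/3) * pi * r^3
    return 4 / 3 * math.pi * r ** 3

# Rose's cylindrical pencil holder (r = 10 cm, h = 25 cm) and the
# cone-shaped mini-lamp with the same dimensions:
assert math.isclose(volume_cylinder(10, 25), 2500 * math.pi)
assert math.isclose(volume_cone(10, 25), 2500 * math.pi / 3)

# A ball with r = 2 m holds (32/3)*pi, roughly 33.51 cubic meters of air.
assert math.isclose(volume_sphere(2), 32 * math.pi / 3)
```

The cone assertion makes the ⅓ relationship explicit: tripling the cone's volume recovers the matching cylinder's volume exactly.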
The formula above was derived by the Greek mathematician Archimedes, who also provided one of the earliest estimates of π. The volume of a sphere can be obtained by multiplying the cube of the sphere’s radius by π, multiplying the result by 4, and then dividing the result by 3. Sample Problem 1: A ball has a radius of 2 meters. Determine the maximum amount of air that the ball can hold. Solution: The amount of air the ball can hold depends on its capacity or volume. The radius of the ball is 2 meters. Thus, we have r = 2. Note that the problem did not provide us with any approximate value of π that we must use, so we use it as it is. Using the formula for the volume of a sphere: V = 4⁄3πr^3 V = 4⁄3π(2)³ V = 4⁄3π(8) V = 32π/3 Thus, the ball can hold 32π/3 (approximately 33.51) cubic meters of air. Sample Problem 2: A newly bought globe has a diameter of about 20 inches. Determine the volume of that globe (Use π = 3.14). Solution: The problem gives the diameter of the globe, which is 20 inches. However, the sphere’s volume formula uses the radius, not the diameter. Thus, we have to determine the radius first. Recall that the diameter of a round figure, such as a sphere or a circle, is twice as long as the radius. Thus, if the diameter of the globe is 20 inches, its radius must be 20/2 = 10 inches. Hence, we have r = 10 inches. Let us now use the formula for the volume of a sphere. V = 4⁄3πr^3 V = 4⁄3(3.14)(10³) V = 4⁄3(3.14)(1000) V = 4⁄3(3140) V = 4186.666… Thus, the volume of the sphere is 4186.67 in^3. Jewel Kyle Fabula graduated Cum Laude with a degree of Bachelor of Science in Economics from the University of the Philippines Diliman. He is also a nominee for the 2023 Gerardo Sicat Award for Best Undergraduate Thesis in Economics. He is currently a freelance content writer with writing experience related to technology, artificial intelligence, ergonomic products, and education. 
Kyle loves cats, mathematics, playing video games, and listening to music.
Roles of sensitivity analysis and uncertainty quantification for science and engineering models, by Dr. Ralph Smith. Monday, February 26, 2024 - 11:30 to 12:30 This presentation will focus on the use of sensitivity analysis and uncertainty quantification for applications arising in science and engineering. First, pertinent issues will be illustrated in the context of weather and climate modeling, applications utilizing smart materials for energy harvesting, biology models, radiation source localization in an urban environment, and simulation codes employed for nuclear power plant design. This will demonstrate that the basic UQ goal is to ascertain uncertainties inherent to parameters, initial and boundary conditions, experimental data, and models themselves to make predictions with quantified and improved accuracy. The use of data, to improve the predictive accuracy of models, is central to uncertainty quantification and we will discuss the use of Bayesian techniques to construct distributions for model inputs. Specifically, the focus will be on algorithms that are both highly robust and efficient to implement. The discussion will subsequently focus on the use of sensitivity analysis to isolate critical model inputs and reduce model complexity. This will include both local sensitivity analysis, based on derivatives of the response with respect to model parameters, and variance-based techniques which determine how uncertainties in responses are apportioned to uncertainties in parameters. The presentation will conclude with a discussion detailing the manner in which model discrepancy must be addressed to construct time-dependent models that can adequately predict future events. An important aspect of this presentation is that all concepts will be illustrated with a suite of both fundamental and large-scale examples from biology and engineering.
The standard deviation for variables x and y be 3 and 4 - class 11 maths JEE_Main Hint: The correlation coefficient is a statistical measure of the strength of the relationship between the relative movements of two variables; for the given variables x and y, the formula becomes coefficient of correlation = $\dfrac{{Cov(x,y)}}{{{\sigma _X}{\sigma _Y}}}$ Complete step by step solution: In the question, we have to find the coefficient of correlation. The correlation coefficient is a statistical measure of the strength of the relationship between the relative movements of two variables. The values range between $ - 1$ and $1$; if the calculated number is greater than $1$ or less than $ - 1$, there is an error in the correlation measurement. A correlation of $ - 1$ shows a perfect negative correlation, while a correlation of $1$ shows a perfect positive correlation. A correlation of $0$ shows no linear relationship between the movements of the two variables. To calculate the product-moment correlation, one must first determine the covariance of the two variables in question. Next, one must calculate each variable's standard deviation. The correlation coefficient is determined by dividing the covariance by the product of the two variables' standard deviations. So the relation is Coefficient of correlation = $\dfrac{{Cov(x,y)}}{{{\sigma _X}{\sigma _Y}}}$ $Cov(x,y)$ is the covariance of x and y, which is $8$. ${\sigma _X}$ is the standard deviation of x, which is $3$. ${\sigma _Y}$ is the standard deviation of y, which is $4$. Hence, on substituting these values: Coefficient of correlation = $\dfrac{8}{{3 \times 4}}$ Hence it is equal to $\dfrac{2}{3}$. Option A will be the correct answer. Note: Correlation coefficients are used to measure the strength of the relationship between two variables. Pearson correlation is the one most commonly used in statistics. This measures the strength and direction of a linear relationship between two variables, which is what we use in this question. 
Correlation coefficient values between $ - 0.8$ and $0.8$ are generally not considered significant.
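The whole computation above reduces to a single division, which can be sketched as follows (illustrative code; the function and variable names are ours, not from the answer):

```python
# Pearson correlation coefficient from the given summary statistics:
# r = Cov(x, y) / (sigma_x * sigma_y), computed exactly with fractions.
from fractions import Fraction

def correlation(cov_xy, sigma_x, sigma_y):
    return Fraction(cov_xy, sigma_x * sigma_y)

r = correlation(8, 3, 4)   # Cov(x, y) = 8, sigma_x = 3, sigma_y = 4
assert r == Fraction(2, 3)  # matches option A
print(r)  # 2/3
```

Using `Fraction` keeps the answer exact, so the result prints as the fraction 2/3 rather than a rounded decimal.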
SIS with Hints Zoo \( \newcommand\ring{\mathcal{R}} \newcommand\ZZ{\mathbb{Z}} \renewcommand{\vec}[1]{{\boldsymbol{\mathbf{#1}}}} \newcommand\mat[1]{\vec{#1}} \newcommand{\sample}{\gets_{\$}} \) An attempt to keep track of all those new SIS-like assumptions that hand out additional hints. Some of these venture into LWE land, but for now I want to keep it more or less SIS focused. • Designers: Please consider whether you can re-use one of those many newfangled assumptions before introducing yet another one. • Cryptanalysts: Analyse them! In the jolly tradition of "Are Graded Encoding Schemes broken yet?" I tag problems as BROKEN if I am aware of cryptanalysis. I also tag them as: if there is a reduction from a standard (non-hint) assumption that covers meaningful parameters, i.e. those useful in applications. For example, k-SIS is tagged like this but k-R-ISIS is not because known reductions only cover parameters that, so far, seem useless in applications. if there is a reduction from another hint assumption. Again, this may only cover some parameters. This is meant to indicate that cryptanalytic efforts might be better directed at the equivalent assumption. Please reach out if you find a mistake or have an update. 1. STANDARD k-SIS We give the module variant defined in (Albrecht, Cini, Lai, Malavolta, & Thyagarajan, 2022). We recover the original definition by setting \(\ring := \ZZ\). For any integer \(k \geq 0\), an instance of the k-M-SIS problem is a matrix \(\mat{A} \in \ring_{q}^{\eta \times \ell}\) and a set of \(k\) vectors \(\vec{u}_{0}, \ldots \vec{u}_{k-1}\) s.t. \(\mat{A}\cdot \vec{u}_{i} \equiv \vec{0} \bmod q\) with \(\|{\vec{u}_i}\| \leq \beta\). 
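The defining relation \(\mat{A}\cdot \vec{u}_{i} \equiv \vec{0} \bmod q\) can be illustrated with a toy, decidedly non-cryptographic instance over \(\ZZ\), planting a short kernel vector by solving for the last column of \(\mat{A}\) (parameters and code are ours, purely for illustration):

```python
# Toy illustration of the (k-)SIS relation A*u = 0 (mod q).
# We plant a short kernel vector u by solving for the last column of A.
# Parameters are far too small to be cryptographically meaningful.
import numpy as np

q = 97                                  # small prime modulus
eta, ell = 4, 8                         # A is eta x ell
rng = np.random.default_rng(2024)

u = rng.integers(-2, 3, size=ell)       # short vector, entries in {-2, ..., 2}
while u[-1] % q == 0:                   # need u[-1] invertible mod q
    u = rng.integers(-2, 3, size=ell)

A = rng.integers(0, q, size=(eta, ell))
inv = pow(int(u[-1]) % q, -1, q)        # modular inverse (q is prime)
A[:, -1] = (-(A[:, :-1] @ u[:-1]) * inv) % q

assert np.all((A @ u) % q == 0)         # u is a short vector in the kernel of A
```

Of course, real instances plant no such vector: the hardness of SIS is precisely that a uniformly random \(\mat{A}\) admits short kernel vectors which are infeasible to find.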
A solution to the problem is a nonzero vector \(\vec{u} \in \ring^{\ell}\) such that \[\|{\vec{u}}\| \leq \beta^*, \quad \mat{A}\cdot \vec{u} \equiv \vec{0} \bmod q,\quad \text{and} \quad \vec{u} \notin \mathcal{K}\text{-}\operatorname{span}(\set{\vec{u}_i}_{0 \leq i < k}).\] An LWE variant, called k-LWE, was introduced in (Ling, Phan, Stehlé, & Steinfeld, 2014). 1.1. Hardness The original paper showed that k-SIS (over \(\ZZ\)) is hard if SIS is hard for uniform \(\mat{A}\) and discrete Gaussian \(\vec{u}_{i}\). This reduction was improved in (Ling, Phan, Stehlé, & Steinfeld, 2014) to cover \(k = \mathcal{O}(\ell)\). The condition \(\vec{u} \notin \mathcal{K}\text{-}\operatorname{span}(\set{\vec{u}_i}_{0 \leq i < k})\) can be dropped when \(k < \ell^{1/4}\) because then the probability that there is an additional short vector in the \(k\)-dimensional sublattice spanned by the \(\vec{u}_i\) is negligible. No proof was provided for the module variant. 2. One-more-ISIS (Agrawal, Kirshanova, Stehlé, & Yadav, 2022) A challenger selects a matrix \(\mat{A} \in \ZZ_{q}^{\eta \times \ell}\) and sends it to the adversary. The adversary can perform two types of queries: Syndrome queries. The adversary can request a challenge vector, which the challenger selects at random, i.e. \(\vec{t} \gets_{\$} \ZZ_{q}^{\eta}\), adds to some set \(\mathcal{S}\), and returns to the adversary. Preimage queries. The adversary submits any vector \(\vec{t}' \in \ZZ_{q}^{\eta}\). The challenger will return a short vector \(\vec{y}' \gets_{\$} D_{\ZZ^\ell,\sigma}\) such that \(\mat{A}\cdot \vec{y}' \equiv \vec{t}' \bmod q\). Denote by \(k\) the number of preimage queries. In the end, the adversary is asked to output \(k+1\) pairs \(\{(\vec{y}_i,\vec{t}_i)\}_{0 \leq i < k+1}\) satisfying: \[\mat{A}\cdot \vec{y}_{i} \equiv \vec{t}_{i} \bmod q, \quad \|\vec{y}_{i}\| \leq \beta, \quad \text{and} \quad \vec{t}_{i} \in \mathcal{S}.\] 2.1. 
Hardness The hardness of the problem is analysed using direct cryptanalysis in the original paper. The authors give a combinatorial attack and a lattice attack. Combinatorial Attack. The adversary requests \(\eta \cdot q\) preimages for all \(\{a \cdot \vec{e}_{i}\ \mid\ a \in \ZZ_{q}, i \in \ZZ_{\eta}\}\), where \(\vec{e}_{i}\) is the \(i\)-th unit vector. Then, adding up \(\eta\) such preimages allows the adversary to construct a preimage for any target. Since the norm of the preimages returned by the challenger is \(\sqrt{\ell} \cdot \sigma\), this solves the One-more-ISIS problem when \(\sqrt{\eta \cdot \ell} \cdot \sigma \leq \beta\). Of course, smaller and larger sets of preimages are possible, increasing and decreasing the output norm respectively. Lattice Attack. The adversary requests \(\geq \ell\) preimages of zero and uses these to produce a short basis \(\mat{T}\) for the kernel of \(\mat{A}\), i.e. \(\mat{A}\cdot\mat{T} \equiv \vec{0} \bmod q\). This constitutes a trapdoor for \(\mat{A}\) and thus permits returning short preimages for any target. The key point here is that this trapdoor is of degraded quality relative to the trapdoor used by the challenger. The key computational challenge then is to fix up or improve this degraded trapdoor in order to be able to sample sufficiently short vectors. 3. k-R-ISIS (Albrecht, Cini, Lai, Malavolta, & Thyagarajan, 2022) Let \(g(\vec{X}) \in \mathcal{R}(\vec{X})\) be a Laurent monomial, i.e. \(g(\vec{X}) = \vec{X}^{\vec{e}} := \prod_{i \in \ZZ_w} X_i^{e_i}\) for some exponent vector \(\vec{e} = (e_i: i \in \ZZ_w) \in \ZZ^w\). Let \(\mathcal{G} \subset \mathcal{R}(\vec{X})\) be a set of Laurent monomials with \(k := |\mathcal{G}|\). Let \(g^{\star} \in \mathcal{R}(\vec{X})\) be a target Laurent monomial. We call a family \(\mathcal{G}\) k-M-ISIS-admissible if 1. all \(g \in \mathcal{G}\) have constant degree, i.e. \(\|\vec{e}\|_{1} \in O(1)\); 2. all \(g \in \mathcal{G}\) are distinct, i.e. \(\mathcal{G}\) is not a multiset; and 3.
\(0 \not\in\mathcal{G}\). We call a family \((\mathcal{G}, g^*)\) k-M-ISIS-admissible if \(\mathcal{G}\) is k-M-ISIS-admissible, \(g^*\) has constant degree, and \(g^* \notin \mathcal{G}\). Now, let \(\vec{t} = (1,0,\ldots,0)\). Let \(\mathcal{G} \subset \mathcal{R}(\vec{X})\) be a set of \(w\)-variate Laurent monomials. Let \(g^{\star} \in \mathcal{R}(\vec{X})\) be a target Laurent monomial. Let \((\mathcal{G},g^\star)\) be k-M-ISIS-admissible. Let \(\mat{A} \gets_{\$} \ring_q^{\eta \times \ell}\), \(\vec{v} \gets_{\$} (\ring_q^\star)^w\). The k-M-ISIS assumption states that given \((\mat{A}, \vec{v}, \vec{t}, \{\vec{u}_{g}\})\) with \(\vec{u}_{g}\) short and \[ g(\vec{v}) \cdot \vec{t} \equiv \mat{A}\cdot \vec{u}_{g} \bmod q \] it is hard to find a short \(\vec{u}_{g^*}\) and small \(s^{*}\) s.t. \[ s^* \cdot g^{*}(\vec{v}) \cdot \vec{t} \equiv \mat{A} \cdot \vec{u}_{g^*} \bmod q. \] When \(\eta = 1\), i.e. when \(\mat{A}\) is just a vector, we call the problem k-R-ISIS. 3.1. Hardness The hardness of the problem is analysed by the authors by providing some partial reductions, i.e. covering some special cases. These special cases are not interesting for applications. The authors show that • k-R-ISIS (with \(g^{*} \equiv 1\)) is as hard as R-SIS when \(\ell > k\) or when the system generated by \(\mathcal{G}\) is efficiently invertible. • k-M-ISIS is at least as hard as k-R-ISIS and that k-M-ISIS is a true generalisation of k-R-ISIS. • \((\mathcal{G},g^*)\) is as hard as \((\mathcal{G},0)\) for any \(\mathcal{G}\), formalising the intuition that the non-homogeneous variant is no easier than the homogeneous variant. • scaling \((\mathcal{G},g^*)\) multiplicatively by any non-zero Laurent monomial does not change the hardness, e.g. we may choose to normalise instances to \(g^{*} \equiv 1\). The authors also explore direct cryptanalysis: • a direct SIS attack on \(\mat{A}\).
• finding short \(\ZZ\)-linear combinations of \(\vec{u}_{i}\) • finding \(\mathbb{Q}\)-linear combinations of \(\vec{u}_{i}\) that produce short images. 4. BROKEN Knowledge k-R-ISIS (Albrecht, Cini, Lai, Malavolta, & Thyagarajan, 2022) The assumption essentially states that for any element \(c \cdot \vec{t}\) that the adversary can produce together with a short preimage, it must have produced it as some small linear combination of the preimages \(\{\vec{u}_{g}\}\) we have given it. Thus, roughly: If an adversary outputs any \(c, \vec{u}_{c}\) s.t. \[ c \cdot \vec{t} \equiv \mat{A} \cdot \vec{u}_{c} \bmod q \] then there is an extractor that – with access to the adversary's randomness – outputs short \(\{c_{g}\}\) s.t. \[ c \equiv \sum_{g \in \mathcal{G}} c_{g} \cdot g(\vec{v}) \bmod q. \] The knowledge assumption only makes sense for \(\eta \geq 2\). However, in order to be able to pun about "crises of knowledge", the authors also define a ring version of the knowledge assumption. In the ring setting, they consider proper ideals rather than submodules. 4.1. Cryptanalysis The assumption is invalidated – at least "morally" – in (Wee & Wu, 2023a). Roughly speaking, the attack uses that the preimages of \(g(\vec{v}) \cdot \vec{t}\) span a short basis of the submodule spanned by \(\vec{t}\): essentially an Ajtai-style trapdoor. Then, sampling an arbitrary, not-necessarily short, preimage of some \(c \cdot \vec{t}\), Babai rounding can be applied to obtain a short preimage of some other, random \(\tilde{c} \cdot \vec{t}\). An implementation of the attack in SageMath is available here. 5. EQUIVALENT Twin k-R-ISIS (Balbás, Catalano, Fiore, & Lai, 2022) Let \(\vec{t} = (1,0,\ldots,0)\). Let \(\mathcal{G}_{A}, \mathcal{G}_{B} \subset \mathcal{R}(\vec{X})\) be sets of \(w\)-variate Laurent monomials. Let \((\mathcal{G}_{A} \cup \mathcal{G}_B)\) be k-M-ISIS-admissible.
Let \(\mat{A} \gets_{\$} \ring_q^{\eta \times \ell}\), \(\mat{B} \gets_{\$} \ring_q^{\eta \times \ell}\), \(\vec{v} \gets_{\$} (\ring_q^\star)^w\). The Twin k-M-ISIS assumption states that given \((\mat{A}, \mat{B}, \{\vec{u}_{g}\}_{g \in \mathcal{G}_A}, \{\vec{w}_{g}\}_{g \in \mathcal{G}_B}, \vec{t}, \vec{v})\) with \(\vec{u}_{g}, \vec{w}_{g}\) short, \[ g(\vec{v}) \cdot \vec{t} \equiv \mat{A}\cdot \vec{u}_{g} \bmod q,\ \forall\ g \in \mathcal{G}_{A}, \] and \[ g(\vec{v}) \cdot \vec{t} \equiv \mat{B}\cdot \vec{w}_{g} \bmod q,\ \forall\ g \in \mathcal{G}_{B} \] it is hard to find short \(\vec{u}^{*}, \vec{w}^{*}\) s.t. \[ \vec{0} \equiv \mat{A} \cdot \vec{u}^{*} + \mat{B} \cdot \vec{w}^{*} \bmod q. \] 6. STANDARD BASIS (Random) We consider BASIS\(_{rand}\) with \(k=2\), for simplicity. Let \(\mat{A} \in \ZZ_{q}^{\eta \times \ell}\). We're given \[\vec{B} := \begin{pmatrix}\mat{A}_{0} & \vec{0} & - \vec{G}_{\eta+1}\\ \vec{0} & \mat{A}_{1} & -\vec{G}_{\eta+1}\end{pmatrix}\] and a short \(\vec{T}\) s.t. \[\vec{G}_{\eta'} \equiv \vec{B} \cdot \vec{T} \bmod q\] where \(\mat{A}_{i}\) are uniformly random for \(i>0\) and \(\mat{A}_{0} := [\vec{a} | \mat{A}^{T}]^{T}\) for uniformly random \(\mat{A}\) and \(\vec{a}\). The BASIS\(_{rand}\) assumption states that given \((\vec{B}, \vec{T})\) it is hard to find a short \(\vec{x}\) s.t. \(\mat{A} \cdot \vec{x} \equiv \vec{0}\). 6.1. Hardness It was shown in the original paper that BASIS\(_{rand}\) is as hard as SIS. The idea is that we can construct \(\vec{B}\) given \(\mat{A}\) since we can trapdoor all \(\mat{A}_{i}\) for \(i > 0\). A way of looking at this is by noting that for each column \(\vec{t} = (\vec{t}^{(0)}, \vec{t}^{(1)}, \vec{t}^{(G)})\) of \(\vec{T}\) we have \(\mat{A}_{i} \cdot \vec{t}^{(i)} \equiv \vec{G}_{\eta+1} \cdot \vec{t}^{(G)}\) where \(\vec{G}_{\eta+1} \cdot \vec{t}^{(G)}\) is close to uniform.
Thus, we can sample \(\vec{t}^{(0)}\), compute \(\vec{y} := \mat{A}_{0} \cdot \vec{t}^{(0)}\) and then use the gadget structure of \(\vec{G}_{\eta+1}\) to find a short \(\vec{t}^{(G)}\) s.t. \(\mat{A}_{i} \cdot \vec{t}^{(i)} \equiv \vec{G}_{\eta+1} \cdot \vec{t}^{(G)}\). Then, using the trapdoors for \(\mat{A}_{i}\) with \(i>0\) we can find \(\vec{t}^{(i)}\) s.t. \(\mat{A}_{i} \cdot \vec{t}^{(i)} \equiv \vec{G}_{\eta+1} \cdot \vec{t}^{(G)}\). 7. EQUIVALENT BASIS (Structured) We consider BASIS\(_{struct}\) with \(k=2\), for simplicity. Let \(\mat{A} \in \ZZ_{q}^{\eta \times \ell}\). We're given \[\vec{B} := \begin{pmatrix}\mat{A}_{0} & \vec{0} & - \vec{G}_{\eta+1}\\ \vec{0} & \mat{A}_{1} & -\vec{G}_{\eta+1}\end{pmatrix}\] and a short \(\vec{T}\) s.t. \[\vec{G}_{\eta'} \equiv \vec{B} \cdot \vec{T} \bmod q\] where \(\mat{A}_{i} := \vec{W}_{i} \cdot \mat{A}\) for known \(\vec{W}_{i} \in \ZZ_{q}^{\eta \times \eta}\). The BASIS\(_{struct}\) assumption states that given \((\vec{B}, \vec{T})\) it is hard to find a short \(\vec{x}\) s.t. \(\mat{A} \cdot \vec{x} \equiv \vec{0}\). 7.1. Hardness The authors also show that given an algorithm for solving BASIS\(_{struct}\) there is an algorithm for solving k-M-ISIS. This reduction outputs a BASIS\(_{struct}\) instance of size \(k < \ell/k'\) where \(k'\) is the parameter \(k\) of the k-M-ISIS instance. 8. PRISIS PRISIS (Fenzi & Nguyen, 2023) is a special case of BASIS. It is more structured than BASIS, so we might expect hardness results on PRISIS to translate to BASIS. It turns out the additional structure makes it possible to prove a broader regime of parameters as hard as M-SIS. Let \(\mat{A} \in \ring_{q}^{\eta \times \ell}\). We're given \[\vec{B} := \begin{pmatrix} \mat{A} & \vec{0} & \cdots & - \vec{G}_{\eta+1}\\ \vec{0} & w \cdot \mat{A} & \cdots & -\vec{G}_{\eta+1} \\ \vec{0} & \vec{0} & \ddots & -\vec{G}_{\eta+1}\\ \vec{0} & \cdots & w^{\ell-1} \cdot \mat{A} & -\vec{G}_{\eta+1} \end{pmatrix}\] and a short \(\vec{T}\) s.t.
\[\vec{G}_{\eta'} \equiv \vec{B} \cdot \vec{T} \bmod q.\] The PRISIS assumption states that given \((\mat{A}, \mat{B}, w, \vec{T})\) it is hard to find a short \(\vec{x}\) s.t. \(\mat{A} \cdot \vec{x} \equiv \vec{0}\). 9. STANDARD \(h\)-PRISIS \(h\)-PRISIS (Albrecht, Fenzi, Lapiha, & Nguyen, 2023) is a multi-instance version of PRISIS. Let \(\mat{A}_{i} \in \ring_{q}^{\eta \times \ell}\) for \(i \in \{0,\ldots,h-1\}\). We're given \[\vec{B}_{i} := \begin{pmatrix} \mat{A}_{i} & \vec{0} & \cdots & - \vec{G}_{\eta+1}\\ \vec{0} & w_{i} \cdot \mat{A}_{i} & \cdots & -\vec{G}_{\eta+1}\\ \vec{0} & \vec{0} & \ddots & -\vec{G}_{\eta+1}\\ \vec{0} & \cdots & w_i^{\ell-1} \cdot \mat{A}_{i} & -\vec{G}_{\eta+1} \end{pmatrix}\] and a short \(\vec{T}_{i}\) s.t. \[\vec{G}_{\eta'} \equiv \vec{B}_{i} \cdot \vec{T}_{i} \bmod q.\] The \(h\)-PRISIS assumption states that given \((\{\mat{A}_i\}, \{\mat{B}_{i}\}, \{w_i\}, \{\vec{T}_i\})\) it is hard to find short \(\vec{x}_{i}\) s.t. \(\sum \mat{A}_{i} \cdot \vec{x}_{i} \equiv \vec{0}\). 10. ISISf (Bootle, Lyubashevsky, Nguyen, & Sorniotti, 2023) Let \(\mat{A} \in \ZZ_{q}^{\eta \times \ell}\) and let \(f: \ZZ_{N} \rightarrow \ZZ_{q}^{\eta}\) be a function. Given \((\mat{A}, f)\) and access to an oracle that when called samples a fresh \(x \sample \ZZ_{N}\) and outputs \(x, \vec{u}_{x}\) s.t. \[ \mat{A} \cdot \vec{u}_{x} \equiv f(x) \bmod q \text{ and } \|\vec{u}_{x}\| \leq \beta, \] the ISISf assumption states that it is hard to output a fresh tuple \((x^{*}, \vec{u}^{*})\) s.t. \[\mat{A} \cdot \vec{u}^{*} \equiv f(x^*) \bmod q \text{ and } \|\vec{u}^{*}\| \leq \beta^*.\] 10.1. Hardness If \(f\) is a random oracle then the problem, in the ROM, is as hard as SIS. In (Bootle, Lyubashevsky, Nguyen, & Sorniotti, 2023) the authors set \(f\) to be a function that first turns \(x \in \ZZ_{N}\) into a binary vector \(\vec{x} \in \{0,1\}^{\log N}\) and then outputs \(\mat{B} \cdot \vec{x}\). They call this problem ISIS\(_{bin}\).
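To make the ISIS\(_{bin}\) map concrete, here is a small numerical sketch of \(f\) (toy parameters chosen purely for illustration; the oracle's short preimages \(\vec{u}_x\) come from trapdoor sampling, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative only; real instantiations use much larger q, eta, N).
q, eta, N = 97, 4, 256                     # modulus, syndrome dimension, domain size
logN = int(np.log2(N))                     # length of the binary decomposition
B = rng.integers(0, q, size=(eta, logN))   # public matrix defining f

def f_bin(x: int) -> np.ndarray:
    """ISIS_bin map: binary-decompose x, then multiply by B mod q."""
    bits = np.array([(x >> i) & 1 for i in range(logN)])
    return (B @ bits) % q

t = f_bin(201)
assert t.shape == (eta,) and np.all((0 <= t) & (t < q))
```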
The authors analyse direct lattice reduction as well as exploiting relations on the image space. The authors also introduce an interactive variant and show its equivalence. Agrawal, S., Kirshanova, E., Stehlé, D., & Yadav, A. (2022). Practical, round-optimal lattice-based blind signatures. In H. Yin, A. Stavrou, C. Cremers, & E. Shi (Eds.), Proceedings of the 2022 ACM SIGSAC conference on computer and communications security, CCS 2022 (pp. 39–53). Albrecht, M. R., Cini, V., Lai, R. W. F., Malavolta, G., & Thyagarajan, S. A. K. (2022). Lattice-based snarks: Publicly verifiable, preprocessing, and recursively composable (extended abstract). In Y. Dodis & T. Shrimpton (Eds.), Advances in cryptology - CRYPTO 2022 (pp. 102–132). Springer. Albrecht, M. R., Fenzi, G., Lapiha, O., & Nguyen, N. K. (2023). Slap: Succinct lattice-based polynomial commitments from standard assumptions. Cryptology ePrint Archive, Paper 2023/1469. Balbás, D., Catalano, D., Fiore, D., & Lai, R. W. F. (2022). Functional commitments for circuits from falsifiable assumptions. Cryptology ePrint Archive, Paper 2022/1365. Boneh, D., & Freeman, D. M. (2011). Linearly homomorphic signatures over binary fields and new tools for lattice-based signatures. In D. Catalano, N. Fazio, R. Gennaro, & A. Nicolosi (Eds.), Public key cryptography - PKC 2011 (pp. 1–16). Springer. Bootle, J., Lyubashevsky, V., Nguyen, N. K., & Sorniotti, A. (2023). A framework for practical anonymous credentials from lattices. Cryptology ePrint Archive, Paper 2023/560. Fenzi, G., & Nguyen, N. K. (2023). Lattice-based polynomial commitments: Towards asymptotic and concrete efficiency. Cryptology ePrint Archive, Paper 2023/846. Ling, S., Phan, D. H., Stehlé, D., & Steinfeld, R. (2014). Hardness of k-LWE and applications in traitor tracing. In J. A. Garay & R. Gennaro (Eds.), Advances in cryptology - CRYPTO 2014 (pp. 315–334). Springer. Wee, H., & Wu, D. J. (2023a).
Lattice-based functional commitments: Fast verification and cryptanalysis. Asiacrypt 2023. Springer-Verlag. Wee, H., & Wu, D. J. (2023b). Succinct vector, polynomial, and functional commitments from lattices. In C. Hazay & M. Stam (Eds.), Advances in cryptology - EUROCRYPT 2023 (pp. 385–416). Springer.
Video - Sums of squares in quadratic number rings It is usually a difficult problem to characterize precisely which elements of a given integral domain can be written as a sum of squares of elements from the integral domain. Let R denote the ring of integers in a quadratic number field. This talk will deal with the problem of identifying which elements of R can be written as a sum of squares. If an element in R can be written as a sum of squares, then the element must be totally positive. This necessary condition is not always sufficient. We will determine exactly when this necessary condition is sufficient. In addition, we will develop several criteria to guarantee that a representation as a sum of squares is possible. The results are based on theorems of I. Niven and C. Siegel from the 1940's, and R. Scharlau from 1980.
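The necessary condition mentioned in the abstract is easy to check computationally in a specific ring. The sketch below (my own illustration, for \(R=\mathbb{Z}[\sqrt{2}]\), where the two real embeddings send \(x+y\sqrt{2}\) to \(x\pm y\sqrt{2}\)) verifies that random sums of squares are always totally positive:

```python
import math
import random

random.seed(1)
SQRT2 = math.sqrt(2)

# Represent x + y*sqrt(2) in Z[sqrt(2)] as the pair (x, y).
def mul(u, v):
    (a, b), (c, d) = u, v
    return (a * c + 2 * b * d, a * d + b * c)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def embeddings(u):
    x, y = u
    return (x + y * SQRT2, x - y * SQRT2)  # the two real embeddings

# A sum of squares must be totally positive: nonnegative under both embeddings.
for _ in range(1000):
    elems = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(3)]
    s = (0, 0)
    for e in elems:
        s = add(s, mul(e, e))
    assert all(v >= -1e-9 for v in embeddings(s))
```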
How to solve the ST3002 – Correlation and Regression task in nursing (Solved) Open the file CAR Measurements using menu option Datasets and then Elementary Stats, 13th Edition in Statdisk. This file contains information, such as size, weight, length, braking distance, cylinders, displacement, city miles per gallon (MPG), highway MPG, and GHG (greenhouse gas emissions), for 21 cars. Perform the following tasks to help you determine which car is right for you: 1. Scatterplots, Correlations, and the Correlation Coefficient 1A. Weight vs. Braking Columns (i). Create a scatterplot for the data in the Weight and Braking columns. Paste it in your report. (ii). Using Statdisk, calculate the linear correlation between the data in the Weight and Braking columns. Paste your results in your Word document. (iii). Explain the mathematical relationship between weight and braking based on the linear correlation coefficient. Be certain to include comments about the magnitude (strength) and the direction (positive or negative) of the correlation. As weight increases, what happens to the braking distance? B. Weight vs. City MPG (i). Create a scatterplot for the data in the Weight and the City MPG columns. Paste it in your report. (ii). Using Statdisk, calculate the linear correlation between the data in the Weight and City MPG columns. Paste your results in your Word document. (iii). Explain the mathematical relationship between weight and city MPG based on the linear correlation coefficient. Be certain to include comments about the magnitude and the direction of the correlation. As weight increases, what happens to the city MPG? (iv). Compare the correlations for weight and braking distance with that of weight and city MPG. How are they similar? How are they different? 2. Linear Regression and Prediction A.
Let’s say that we wanted to be able to predict the braking distance in feet for a car based on its weight in pounds. (i). Using this sample data, perform a linear regression to determine the line of best fit. Use weight as your x (independent) variable and braking distance as your y (response) variable. Use four (4) places after the decimal in your answer. Paste it in your report. (ii). What is the equation of the line of best fit (linear regression equation)? Present your answer in y = bo + b1x form. (iii). What would you predict the braking distance would be for a car that weighs 2650 pounds? Show your calculation. (iv). Let’s say you want to buy a muscle car that weighs 4250 pounds. What would you predict the braking distance would be for a muscle car that weighs 4250 pounds? Show your calculation. (v). What effect would you predict weight would have on the braking distance of the car? Compare the braking distance of the 2650-pound car to the 4250-pound car. (vi). Calculate the coefficient of determination (R2 value) for this data. What does this tell you about this relationship? 3. Multiple Regression A. Let’s say that we wanted to be able to predict the city MPG for a car using weight in pounds, length in inches, and cylinders. Using this sample data, perform a multiple-regression line of best fit using weight, length, cylinder, and city MPG. B. Select City MPG (Column 8) as your dependent variable. Paste it in your report. (i). What is the equation of the line of best fit? The form of the equation is: Y = bo + b1X1 + b2X2 + b3X3 (fill in values for bo, b1, b2, and b3). Round coefficients to three (3) decimal places. (ii). What would you predict for the city MPG of a car whose (1) Weight is 3410 pounds, (2) LENGTH is 130 inches, and (3) Cylinders is 6? C. What is the R2 value for this regression? What does it tell you about the regression? 4. Making Decisions Based on Data A.
Based on the information gathered in this task on the relationship between weight and braking distance and weight and city MPG, which of the 21 cars listed would you choose to buy, and why? 1. Scatterplots, Correlations, and the Correlation Coefficient 2. Weight Vs. Braking Columns 1A (i): Weight Vs. Braking Scatterplot Y values = Braking X values = Weight 1A (ii): Linear Correlation Correlation coeff, r: 0.3513217 Critical r: ±0.4328579 1A (iii): Mathematical Relationship The scatterplot shows a weak positive correlation because r = 0.35 is close to zero. The critical r value is ±0.43; since |r| falls below it, the correlation is not statistically significant and the weight values cannot reliably be used to predict braking distance (Bennett et al., 2018). Looking at the data points, which appear to largely scatter away from the straight line, it can be concluded that the relationship between weight and braking is weak. The direction of the line indicates that as the weight of the vehicle increases the braking distance is also likely to increase. 1. Weight Vs. City MPG B(i): Weight Vs. City MPG Scatter Plot Y values = City MPG X values = Weight B(ii): Linear Correlation Correlation coeff, r: -0.8437604 Critical r: ±0.4328579 B(iii): Mathematical Relationship The negative value observed after calculation indicates that car weight and city MPG have a negative correlation. Negative correlation values range between 0 and -1, and with a correlation coeff, r: -0.8437604, this indicates a strong negative correlation (Schober et al., 2018). The values in the scatter plot are close to the line due to the strong correlation of values. From the scatter plot, it can be observed that the line moves in a downward direction, supporting the negative correlation. Additionally, the downward direction indicates that the heavier the car gets, the fewer MPG the car will have. B(iv): Comparing Correlations The Weight Vs. Braking values demonstrate a weak positive correlation while the Weight Vs.
City MPG demonstrates a strong negative correlation. The values in the braking plot are scattered from the line compared to the values of the City MPG scatter plot. In both correlations, weight has a negative effect on braking and city MPG. When the weight increases, the car's braking distance gets worse, and also as the weight increases the car's MPG decreases. 2. Linear Regression and Prediction 2A(i): Regression Results Y = b0 + b1x: Y Intercept, b0: 125.308 Slope, b1: 0.0031873 2A(ii): Linear Regression Equation Form of the equation: y = bo + b1x Equation based on the regression results is: y = 125.308 + 0.0032x 2A(iii): Braking Distance For a 2650-pound car, the braking distance will be: y = 125.308 + 0.0032x y = 125.308 + 0.0032(2650) y = 125.308 + 8.48 y = 133.788 The predicted braking distance for a car weighing 2650 pounds is 133.788 feet. 2A(iv): Braking Distance For a 4250-pound muscle car, the braking distance will be: y = 125.308 + 0.0032x y = 125.308 + 0.0032(4250) y = 125.308 + 13.6 y = 138.908 The predicted braking distance for a muscle car weighing 4250 pounds is 138.908 feet. 2A(v): Comparison The results above indicate that the greater the weight of the car, the longer the braking distance. A 2650-pound car has a braking distance of 133.788 feet while a 4250-pound muscle car has a braking distance of 138.908 feet. There is a difference in braking distance of about 5.12 feet between the two cars. 2A(vi): Coefficient of Determination R^2 = (0.3513217)^2 The Coeff of Det, R^2 is: 0.123427. This value is not close to 1, indicating that the correlation between the variables is weak. The variation in weight only explains about 12.3% of the variation in braking distance while the rest remains unexplained. 3.
Multiple Regression The multiple regression results are as follows: Number of columns used: 4 Dependent column: 8 Coeff, b0: 46.01974 Coeff, b1: -0.0034893 Coeff, b2: -0.0578495 Coeff, b3: -0.4463389 Total Variation: 288.6667 Explained Variation: 212.7625 Unexplained Variation: 75.90414 Standard Error: 2.113043 Coeff of Det, R^2: 0.7370526 Adjusted R^2: 0.6906502 P Value: 0.0000352 3B(i): Equation Form: Y = bo + b1X1 + b2X2 + b3X3 Equation: Y = 46.02 - 0.003X1 - 0.058X2 - 0.446X3 3B(ii): Prediction The city MPG of a car weighing 3410 pounds, length 130 inches, with 6 cylinders will be: y = 46.02 - 0.003(3410) - 0.058(130) - 0.446(6) y = 25.574 The car will have an MPG of 25.574 in the city. 3C: R^2 Value Coeff of Det, R^2: 0.7370526 The R^2 value is close to 1, indicating that the model explains about 73.7% of the variation in city MPG; it is therefore likely that most values lie close to the line of best fit. 4. Decision Making The information gathered indicates that the higher the weight of the car, the longer the distance it will take to slow down when braking. However, the correlation is not strong, indicating that other factors could contribute to the longer braking time (Bennett et al., 2018). Secondly, weight and city MPG demonstrate a strong correlation where increased weight leads to a decreased city MPG. Based on this information, I would choose to buy the Kia Rio, which has a braking distance of 132 feet and a city MPG of 27. Bennett, J., Briggs, W. L., & Triola, M. F. (2018). Statistical reasoning for everyday life (5th ed.). Boston, MA: Pearson. Schober, P., Boer, C., & Schwarte, L. A. (2018). Correlation coefficients: Appropriate use and interpretation. Anesthesia and Analgesia, 126(5), 1763–1768. https://doi.org/10.1213/
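The arithmetic above can be sanity-checked in a few lines of Python (using the rounded coefficients reported by Statdisk, so the results match the hand calculations to the same precision):

```python
# Simple regression: predicted braking distance (feet) from weight (pounds).
b0, b1 = 125.308, 0.0032

def braking(weight):
    return b0 + b1 * weight

assert abs(braking(2650) - 133.788) < 1e-9
assert abs(braking(4250) - 138.908) < 1e-9
assert abs(braking(4250) - braking(2650) - 5.12) < 1e-9

# Coefficient of determination from the reported r.
r = 0.3513217
assert abs(r**2 - 0.123427) < 1e-5

# Multiple regression: city MPG from weight, length, cylinders.
def city_mpg(weight, length, cylinders):
    return 46.02 - 0.003 * weight - 0.058 * length - 0.446 * cylinders

assert abs(city_mpg(3410, 130, 6) - 25.574) < 1e-9
```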
Vertex Algebras for S-duality Creutzig T, Gaiotto D (2020) Publication Type: Journal article Publication year: 2020 Book Volume: 379 Pages Range: 785-845 Journal Issue: 3 DOI: 10.1007/s00220-020-03870-6 We define new deformable families of vertex operator algebras A[g, Ψ , σ] associated to a large set of S-duality operations in four-dimensional supersymmetric gauge theory. They are defined as algebras of protected operators for two-dimensional supersymmetric junctions which interpolate between a Dirichlet boundary condition and its S-duality image. The A[g, Ψ , σ] vertex operator algebras are equipped with two g affine vertex subalgebras whose levels are related by the S-duality operation. They compose accordingly under a natural convolution operation and can be used to define an action of the S-duality operations on a certain space of vertex operator algebras equipped with a g affine vertex subalgebra. We give a self-contained definition of the S-duality action on that space of vertex operator algebras. The space of conformal blocks (in the derived sense, i.e. chiral homology) for A[g, Ψ , σ] is expected to play an important role in a broad generalization of the quantum Geometric Langlands program. Namely, we expect the S-duality action on vertex operator algebras to extend to an action on the corresponding spaces of conformal blocks. This action should coincide with and generalize the usual quantum Geometric Langlands correspondence. The strategy we use to define the A[g, Ψ , σ] vertex operator algebras is of broader applicability and leads to many new results and conjectures about deformable families of vertex operator algebras. How to cite Creutzig, T., & Gaiotto, D. (2020). Vertex Algebras for S-duality. Communications in Mathematical Physics, 379(3), 785-845. https://doi.org/10.1007/s00220-020-03870-6 Creutzig, Thomas, and Davide Gaiotto. "Vertex Algebras for S-duality."
Communications in Mathematical Physics 379.3 (2020): 785-845.
Writing conic constraints. I need to include a conic constraint in my optimization model. It’s a second-order cone and is of the form x(i,j) + y(i,j) >= sqrt( sqr(a(i,j)) + sqr(b(i,j)) + sqr(x(i,j) - y(i,j)) ) This is actually the quadratic-cone form obtained by converting the original rotated quadratic cone x(i,j)*y(i,j) >= sqr(a(i,j)) + sqr(b(i,j)) How can I write it in the format for GAMS? I tried the user manual, but it wasn’t much help. Any help would be highly appreciated. Thank you. It seems obvious, but perhaps I am missing the point: equation conic(i,j).. x(i,j) + y(i,j) =G= sqrt( sqr(a(i,j)) + sqr(b(i,j)) + sqr(x(i,j) - y(i,j))); Thank you for your response sir. But using sqrt on squares of variables introduces non-linearity. My objective is to transform the whole non-linear problem into a convex problem, which is why I need that conic constraint. Thank you again.
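For what it's worth, the algebra relating the two forms can be checked numerically. Squaring the quadratic-cone form (valid when x + y >= 0) gives 4*x*y >= sqr(a) + sqr(b), which matches the rotated-cone form up to a constant factor; rotated-cone conventions (e.g. 2*x*y >= ||.||^2) differ by exactly such constant factors, which can be absorbed by rescaling one of the variables. A quick Python check of the underlying identity (my own sketch, not GAMS syntax):

```python
import random

random.seed(0)

# Identity behind the conversion: for any x, y, a, b,
#   (x + y)^2 - (a^2 + b^2 + (x - y)^2)  ==  4*x*y - (a^2 + b^2),
# so for x + y >= 0 the cone constraint is equivalent to 4*x*y >= a^2 + b^2.
for _ in range(10000):
    x, y, a, b = (random.uniform(-5, 5) for _ in range(4))
    lhs_gap = (x + y) ** 2 - (a * a + b * b + (x - y) ** 2)
    rot_gap = 4 * x * y - (a * a + b * b)
    assert abs(lhs_gap - rot_gap) < 1e-9
```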
Franklin Pezzuti Dyer Interesting Trigonometric Integrals 2018 June 16 NOTE: I will use a special notation during this post that may be unfamiliar to some. The symbol $\delta_{ij}$, a function of $i$ and $j$, is defined as $1$ if $i=j$ and $0$ otherwise... pretty simple, but handy since it allows one to avoid piecewise notation. NOTE: One more piece of helpful notation - the Iverson Bracket. If $A$ is some logical statement, then $[A]$ is equal to $1$ if $A$ is true and $0$ if $A$ is false. For example, $[2018\space\text{is odd}]=0$, and $\delta_{ij}=[i=j]$. WARNING: Don't try to calculate the last integral in any nice closed form - I calculated it in terms of a fractional value of the Zeta function. As with my last blog post, the integrals provided as "warm-ups" are only specific cases of more useful generalizations that I will derive during this post. Interestingly, some of these integrals involve a bit of number theory, which we will discover shortly... Let us begin with the following integral, which I have not included in the warm-ups, but which is crucial in my derivation of many of them: $$\int_0^\pi \cos(mx)\cos(nx)\,dx$$ ...where $m$ and $n$ are positive integers. Using the following trigonometric identity: $$\cos(mx)\cos(nx)=\frac{1}{2}\big[\cos((m+n)x)+\cos((m-n)x)\big]$$ we can transform the integral into the following: $$\frac{1}{2}\int_0^\pi \cos((m+n)x)\,dx+\frac{1}{2}\int_0^\pi \cos((m-n)x)\,dx$$ which easily evaluates to $$\frac{\sin((m+n)\pi)}{2(m+n)}+\frac{\sin((m-n)\pi)}{2(m-n)}$$ which vanishes for any positive integers $m,n$, since the sine of an integer multiple of $\pi$ is zero. However, this integral isn't valid for $m=n$, because the second term, in that case, is indeterminate. When $m=n$, the integral becomes instead $$\frac{1}{2}\int_0^\pi \cos(2nx)\,dx+\frac{1}{2}\int_0^\pi dx$$ which evaluates to $\pi/2$. Thus, our original integral vanishes when $m\ne n$ and is equal to $\pi/2$ when $m=n$, giving us the following result: $$\int_0^\pi \cos(mx)\cos(nx)\,dx=\frac{\pi}{2}\delta_{mn}$$ ...a rather tame result on its own, but, in conjunction with some other trigonometric series and integral identities, it can lead to some fascinating ones. To start us on this journey, I shall begin with a rather ordinary generating function identity.
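(Before moving on: the orthogonality relation above is easy to sanity-check numerically. A quick sketch using composite Simpson's rule; nothing here is specific to the derivation itself.)

```python
from math import cos, pi

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# Check  ∫_0^π cos(mx)·cos(nx) dx = (π/2)·δ_{mn}  for small m, n.
for m in range(1, 6):
    for n in range(1, 6):
        val = simpson(lambda x: cos(m * x) * cos(n * x), 0.0, pi)
        expected = pi / 2 if m == n else 0.0
        assert abs(val - expected) < 1e-6
```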
As we all know, for $|x|<1$, $$\sum_{k=0}^\infty x^k=\frac{1}{1-x}$$ By replacing $x$ with $ze^{i\theta}$, we have the following: $$\sum_{k=0}^\infty z^k e^{ik\theta}=\frac{1}{1-ze^{i\theta}}$$ By equating real and imaginary parts, this demonstrates that $$\sum_{k=0}^\infty z^k\cos(k\theta)=\frac{1-z\cos\theta}{1-2z\cos\theta+z^2}$$ Now, let's consider what this means when applied to our integral. We know that $$\int_0^\pi \cos(kmx)\cos(nx)\,dx=\frac{\pi}{2}\delta_{km,n}$$ Thus if $k$ is a positive integer, then $$\int_0^\pi z^k\cos(kmx)\cos(nx)\,dx=\frac{\pi}{2}z^{n/m}[n=km]$$ By summing both sides from $k=0$ to $\infty$, we get the following: $$\int_0^\pi \frac{(1-z\cos(mx))\cos(nx)}{1-2z\cos(mx)+z^2}\,dx=\frac{\pi z^{n/m}}{2}[m\mid n]$$ ...as long as $|z|<1$, to ensure convergence. By decomposing that ugly fraction in the integral, we get the following: $$\frac{1}{2}\int_0^\pi \cos(nx)\,dx+\frac{1-z^2}{2}\int_0^\pi \frac{\cos(nx)}{1-2z\cos(mx)+z^2}\,dx=\frac{\pi z^{n/m}}{2}[m\mid n]$$ ...and, recalling the assumption that $n$ is an integer (so that the first integral vanishes), we have $$\int_0^\pi \frac{\cos(nx)}{1-2z\cos(mx)+z^2}\,dx=\frac{\pi z^{n/m}}{1-z^2}[m\mid n]$$ This formula holds for $m,n\in\mathbb Z^+$ and $|z|<1$. Using it, we can easily resolve our first warm-up problem by setting $n=5$, $m=1$, and $z=5-2\sqrt{6}$: ...but we can do even better! In fact, we can do yet another infinite series! Let's begin this time with the formula we just derived: $$\int_0^\pi \frac{\cos(nx)}{1-2a\cos(mx)+a^2}\,dx=\frac{\pi a^{n/m}}{1-a^2}[m\mid n]$$ This time, we'll replace $n$ with $nk$, where $k$ is a positive integer. For convenience's sake, I'll also replace $z$ with $a$. Now multiply both sides by $b^k$, for some $b$ satisfying $|b|<1$: $$\int_0^\pi \frac{b^k\cos(nkx)}{1-2a\cos(mx)+a^2}\,dx=\frac{\pi a^{nk/m}b^k}{1-a^2}[m\mid nk]$$ Finally (can you guess what to do now?) I will sum both sides from $k=0$ to $\infty$, resulting in $$\int_0^\pi \frac{1-b\cos(nx)}{(1-2b\cos(nx)+b^2)(1-2a\cos(mx)+a^2)}\,dx=\frac{\pi}{1-a^2}\sum_{k=0}^\infty a^{nk/m}b^k[m\mid nk]$$ Now notice that $[m|nk]$ is equal to zero unless $k$ is a multiple of $m/\gcd(m,n)$, in which case it is equal to $1$. Thus, our series can be converted to $$\sum_{j=0}^\infty \left(a^{n/\gcd(m,n)}b^{m/\gcd(m,n)}\right)^j$$ which is just a geometric series, converging to $$\frac{1}{1-a^{n/\gcd(m,n)}b^{m/\gcd(m,n)}}$$ and so we have $$\int_0^\pi \frac{1-b\cos(nx)}{(1-2b\cos(nx)+b^2)(1-2a\cos(mx)+a^2)}\,dx=\frac{\pi}{(1-a^2)\left(1-a^{n/\gcd(m,n)}b^{m/\gcd(m,n)}\right)}$$ using the same partial fraction decomposition as before, we end up with the following: $$\int_0^\pi \frac{dx}{(1-2a\cos(mx)+a^2)(1-2b\cos(nx)+b^2)}=\frac{\pi\left(1+a^{n/\gcd(m,n)}b^{m/\gcd(m,n)}\right)}{(1-a^2)(1-b^2)\left(1-a^{n/\gcd(m,n)}b^{m/\gcd(m,n)}\right)}$$ Letting $m=1$, $n=2$, and $a=b=1/2$ gives the answer to the second warm-up: $$\int_0^\pi \frac{dx}{\left(\frac{5}{4}-\cos x\right)\left(\frac{5}{4}-\cos(2x)\right)}=\frac{16\pi}{7}$$ Now we will use the infinite series identity derived earlier to exploit an entirely different type of integral. Consider the well-known identity $$\int_0^\infty \frac{\sin(ax)}{x}\,dx=\frac{\pi}{2}\operatorname{sgn}(a)$$ for any value of $a$. By antidifferentiating both sides with respect to $a$, we obtain the integral $$\int_0^\infty \frac{1-\cos(ax)}{x^2}\,dx=\frac{\pi|a|}{2}+C$$ where $C$ is some constant. By setting $a=0$, we see that $C=0$, and so $$\int_0^\infty \frac{1-\cos(ax)}{x^2}\,dx=\frac{\pi|a|}{2}$$ Then replace $a$ with $ak$ for some positive integer $k$, and multiply both sides by $z^k$, where $|z|\lt 1$.
This gives us Now, as before, we sum both sides from $k=1$ to $\infty$, and use the series identity from before on the LHS: and use the following well-known series identity on the RHS: This gives us the following equality: or, as our final result, Plugging in $z=1/4$, we get the value of our third warm-up problem: For the next warm-up integral, before I start to solve it, I'd like to show a picture of its graph. It is, of course, an improper integral, but over the interval $(0,\infty)$, not only does its integrand contain infinitely many singularities, but they occur more and more frequently, making it very improper (so improper that Wolfram Alpha can't even approximate it effectively), but still convergent. To quickly derive the value of this integral, one can employ the integral identity and the well-known Fourier series By replacing $a$ with $ak$ in the first integral and dividing both sides by $k$, one obtains Then, summing from $k=1$ to $\infty$, one gets By plugging in $a=1$, one gets the value of the fourth and final warm-up: Now, before ending this blog post, I would like to show off an integral that I derived using trigonometric series. Using the Fourier Series that we employed for the previous series, and using the identity from earlier, along with various trigonometric identities, I managed to prove the following awesome identity for $m,n\in\mathbb Z$: If you want, you can try it as well. And, to finish off the post, I propose the following exercise: see what integrals you can derive using trigonometric series and the following identity, derived in my post on the Residue Theorem:
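The orthogonality relation at the heart of the post, $\int_0^\pi \sin(mx)\sin(nx)\,dx=\frac{\pi}{2}\delta_{mn}$ for positive integers $m,n$, is easy to sanity-check numerically; this sketch uses a plain trapezoidal sum with no special libraries:

```python
import math

def sine_product_integral(m, n, steps=200_000):
    """Approximate the integral of sin(m*x)*sin(n*x) over [0, pi]
    with a trapezoidal sum on a uniform grid."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += w * math.sin(m * x) * math.sin(n * x)
    return total * h

# m != n: the integral vanishes; m == n: it equals pi/2.
print(sine_product_integral(3, 5))   # ~0
print(sine_product_integral(4, 4))   # ~pi/2
```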
Nest If/Or with Contains

So with multiple drop downs I want the HC Group to result in one of three returns: Backfill, Backfill Repurposed, or New. I am getting an unparseable error. Following is a snapshot of the attempt

Best Answers

• Hello @Catherine Shea The syntax in the CONTAINS functions was in error and caused the unparseable. The next opportunity with the formula was the order of the IFs. An IF formula will advance until the first true statement is found, then the formula stops. Since the CONTAINS function will find any instance of a word, in your original formula the CONTAINS with only the word "Backfill" would find your desired cells with the word Backfill as well as the cells with the words "Backfill Repurposed". To mitigate this, I swapped the order of the IF statements so that the formula stops if Repurposed is found. In the snippet you showed it didn't look like it was necessary to write out the entire "Backfill Repurposed" so lazily I didn't - you can add "Backfill Repurposed" to the formula if desired.

=IF(CONTAINS("New", [HC Group]@row), "New", IF(CONTAINS("Repurpose", [HC Group]@row), "Backfill Repurposed", IF(CONTAINS("Backfill", [HC Group]@row), "Backfill")))

Will this work for you?

• Thanks Kelly for taking the time for the explanation and the fix on this. I used the example you provided and it indeed was the solution. ☺️
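Kelly's key point (an IF chain stops at the first true branch, so the more specific CONTAINS test must come before the broader one) can be sketched outside Smartsheet; the Python below is a hypothetical stand-in for the formula's logic, not Smartsheet's actual engine:

```python
def classify(hc_group: str) -> str:
    """Mimic the nested IF/CONTAINS: rules are tried in order and the
    first match wins, so 'Repurpose' must be tested before 'Backfill'."""
    rules = [
        ("New", "New"),
        ("Repurpose", "Backfill Repurposed"),
        ("Backfill", "Backfill"),
    ]
    for needle, label in rules:
        if needle.lower() in hc_group.lower():  # case-insensitive match, for illustration
            return label
    return ""

print(classify("Backfill Repurposed"))  # Backfill Repurposed
print(classify("Backfill"))             # Backfill
```

Swapping the first two rules reproduces the original bug: "Backfill" would match "Backfill Repurposed" first and return the wrong label.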
Representing a Tree With an Array

You’ve seen two approaches to implementing a sequence data structure: either using an array, or using linked nodes. With BSTs, we extended our idea of linked nodes to implement a tree data structure. As discussed in the heaps lecture, we can also use an array to represent a complete tree. Here’s how we implement a complete binary tree:
• The root of the tree will be in position 1 of the array (nothing is at position 0). We can define the position of every other node in the tree recursively:
• The left child of a node at position n is at position 2n.
• The right child of a node at position n is at position 2n + 1.
• The parent of a node at position n is at position n/2.

Working With Binary Heaps

Binary Heaps Defined

In this lab, you will be making a priority queue using a binary min-heap (where smaller values correspond to higher priorities). Recall from lecture: binary min-heaps are basically just binary trees (but not binary search trees) – they have all of the same invariants of binary trees, with two extra invariants:
• Invariant 1: the tree must be complete (more on this later)
• Invariant 2: every node is smaller than its descendants (there is another variation called a binary max heap where every node is greater than its descendants)

Invariant 2 guarantees that the min element will always be at the root of the tree. This helps us access that item quickly, which is what we need for a priority queue. We need to make sure binary min-heap methods maintain the above two invariants. Here’s how we do it:

Add an item
1. Put the item you’re adding in the left-most open spot in the bottom level of the tree.
2. Swap the item you just added with its parent until it is larger than its parent, or until it is the new root. This is called bubbling up or swimming.

Remove the min item
1. Swap the item at the root with the item of the right-most leaf node.
2. Remove the right-most leaf node, which now contains the min item.
3.
Bubble down the new root until it is smaller than both its children. If you reach a point where you can either bubble down through the left or right child, you must choose the smaller of the two. This process is also called sinking. Complete Trees There are a couple different notions of what it means for a tree to be well balanced. A binary heap must always be what is called complete (also sometimes called maximally balanced). A complete tree has all available positions for nodes filled, except for possibly the last row, which must be filled from left-to-right. Writing Heap Methods The class ArrayHeap implements a binary min-heap using an array. Fill in the missing methods in ArrayHeap.java. Specifically, you should implement the following methods, ideally in the order shown. • leftIndex • rightIndex • parentIndex • swim • sink • insert • peek • removeMin • changePriority Please try to read over the entire skeleton first before starting! JUnit tests are provided inside ArrayHeap that test these methods (with the exception of peek and changePriority). Try out the tests as soon as you write the corresponding methods. You may find the Princeton implementation of a heap useful. Here is a more detailed explanation of each of the methods. Unlike the Princeton implementation, we store items in the heap as an array of Nodes, instead of an array of Key, because we want to leave open the possibility of priority changing operations. To submit, push your ArrayHeap.java to Github, upload to gradescope, and submit. The toString method is causing a stack overflow and/or the debugger seems super slow. The debugger wants to print everything out nicely as it runs, which means it is constantly calling the toString method. If something about your code causes an infinite recursion, this will cause a stack overflow, which will also make the debugger really slow.
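The lab's ArrayHeap is written in Java against the provided skeleton, whose exact signatures aren't shown here; purely as a language-agnostic sketch of the same 1-indexed arithmetic and swim/sink logic (names are illustrative, not the skeleton's), here is a minimal Python version:

```python
class MinHeap:
    """1-indexed array-backed binary min-heap (position 0 is unused)."""

    def __init__(self):
        self.items = [None]  # placeholder so the root lands at index 1

    def _swim(self, i):
        # Bubble up while smaller than the parent at i // 2.
        while i > 1 and self.items[i] < self.items[i // 2]:
            self.items[i], self.items[i // 2] = self.items[i // 2], self.items[i]
            i //= 2

    def _sink(self, i):
        n = len(self.items) - 1
        while 2 * i <= n:
            child = 2 * i
            # Choose the smaller of the two children, as the lab requires.
            if child < n and self.items[child + 1] < self.items[child]:
                child += 1
            if self.items[i] <= self.items[child]:
                break
            self.items[i], self.items[child] = self.items[child], self.items[i]
            i = child

    def insert(self, x):
        self.items.append(x)            # leftmost open spot in the bottom level
        self._swim(len(self.items) - 1)

    def remove_min(self):
        top = self.items[1]
        self.items[1] = self.items[-1]  # swap root with the rightmost leaf
        self.items.pop()
        if len(self.items) > 1:
            self._sink(1)
        return top

h = MinHeap()
for v in [5, 3, 8, 1, 9, 2]:
    h.insert(v)
print([h.remove_min() for _ in range(6)])  # [1, 2, 3, 5, 8, 9]
```

Repeated remove_min calls yield the items in sorted order, which is a handy invariant check for your own implementation's tests.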
What is 189 Celsius to Fahrenheit? - ConvertTemperatureintoCelsius.info Converting Celsius to Fahrenheit is a common task that many people need to do regularly, especially when dealing with weather forecasts or cooking recipes. In this article, we will explore the conversion of 189 Celsius to Fahrenheit, providing the formula and step-by-step calculation process. To begin, let’s first understand what Celsius and Fahrenheit are. Celsius is the metric unit for measuring temperature, and it is commonly used in most countries around the world. On the other hand, Fahrenheit is the imperial unit for temperature measurement and is predominantly used in the United States. The formula for converting Celsius to Fahrenheit is as follows: F = (C x 9/5) + 32 Now, let’s calculate the conversion of 189 Celsius to Fahrenheit using the formula. F = (189 x 9/5) + 32 F = (340.2) + 32 F = 372.2 So, 189 Celsius is equal to 372.2 Fahrenheit. It is important to note that the conversion formula is based on the freezing and boiling points of water in both Celsius and Fahrenheit scales. In the Celsius scale, the freezing point of water is 0 degrees, and the boiling point is 100 degrees. In the Fahrenheit scale, water freezes at 32 degrees and boils at 212 degrees. Now that we have the conversion for 189 Celsius to Fahrenheit, let’s discuss some practical applications of this knowledge. Understanding temperature conversions is essential when traveling to countries that use different temperature scales. It also comes in handy when dealing with temperature-sensitive materials or when following international recipes. In addition to the practical applications, knowing how to convert Celsius to Fahrenheit can also be beneficial in scientific and academic settings. Many scientific experiments and research papers require temperature conversions, and having a good grasp of the process can be advantageous. 
Furthermore, understanding temperature conversions can also be useful in everyday scenarios, such as adjusting the thermostat in your home or checking the weather forecast. By being able to convert between Celsius and Fahrenheit, you can better understand and interpret temperature readings in various contexts. In conclusion, the conversion of 189 Celsius to Fahrenheit is 372.2. The formula for converting between the two temperature scales is F = (C x 9/5) + 32. This knowledge can be applied in numerous practical, scientific, and everyday situations, making it a valuable skill to have. Whether you’re traveling, cooking, or conducting research, understanding temperature conversions is essential for accurate and informed decision-making.
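The article's formula translates directly into a one-line function; a minimal sketch:

```python
def celsius_to_fahrenheit(c: float) -> float:
    """F = (C x 9/5) + 32."""
    return c * 9 / 5 + 32

print(celsius_to_fahrenheit(189))  # 372.2
print(celsius_to_fahrenheit(0))    # 32.0  (water freezes)
print(celsius_to_fahrenheit(100))  # 212.0 (water boils)
```

The freezing and boiling points of water make convenient spot checks, since they anchor both scales.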
Lower bounds of uncertainty and upper limits on the accuracy of forecasts of macroeconomic variables

Olkhov, Victor (2024): Lower bounds of uncertainty and upper limits on the accuracy of forecasts of macroeconomic variables.

We consider the randomness of values and volumes of market deals as a major factor that describes lower bounds of uncertainty and upper limits on the accuracy of the forecasts of macroeconomic variables, prices, and returns. We introduce random macroeconomic variables, whose average values coincide with usual macroeconomic variables, and describe their uncertainty by coefficients of variation that depend on the volatilities, correlations, and coefficients of variation of random values or volumes of trades. The same approach describes bounds of uncertainty and limits on the accuracy of forecasts for growth rates, inflation, interest rates, etc. Limits on the accuracy of forecasts of macroeconomic variables depend on the certainty of predictions of their probabilities. The number of predicted statistical moments determines the veracity of macroeconomic probability. To quantify macroeconomic 2nd statistical moments, one needs additional econometric methodologies, data, and calculations of variables determined as sums of squares of values or volumes of market trades. Forecasting of macroeconomic 2nd statistical moments requires 2nd order economic theories. All of that is absent and for many years to come, the accuracy of forecasts of the probabilities of random macroeconomic variables, prices, and returns will be limited by the Gaussian approximations, which are determined by the first two statistical moments.
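As a toy numeric illustration of the paper's central quantity (and not the author's own code), the coefficient of variation of a list of hypothetical trade values can be computed with the standard library:

```python
import statistics

def coefficient_of_variation(values):
    """Standard deviation divided by the mean: the relative (scale-free)
    measure of uncertainty of a random variable used in the abstract."""
    return statistics.pstdev(values) / statistics.mean(values)

trade_values = [10.0, 20.0, 30.0]  # made-up deal values, for illustration only
print(coefficient_of_variation(trade_values))  # ~0.408
```

Note the choice of `pstdev` (population standard deviation) over `stdev` (sample); for a small illustrative list the two differ noticeably.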
Item Type: MPRA Paper
Original Title: Lower bounds of uncertainty and upper limits on the accuracy of forecasts of macroeconomic variables
Language: English
Keywords: bounds of uncertainty; limits on the accuracy of forecasts; random macroeconomic variables; market deals; prices and returns
Subjects:
C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General
C - Mathematical and Quantitative Methods > C8 - Data Collection and Data Estimation Methodology; Computer Programs
E - Macroeconomics and Monetary Economics > E0 - General
E - Macroeconomics and Monetary Economics > E0 - General > E01 - Measurement and Data on National Income and Product Accounts and Wealth; Environmental Accounts
E - Macroeconomics and Monetary Economics > E2 - Consumption, Saving, Production, Investment, Labor Markets, and Informal Economy > E27 - Forecasting and Simulation: Models and Applications
E - Macroeconomics and Monetary Economics > E3 - Prices, Business Fluctuations, and Cycles > E37 - Forecasting and Simulation: Models and Applications
G - Financial Economics > G0 - General
Item ID: 121628
Depositing User: Victor Olkhov
Date Deposited: 09 Aug 2024 10:37
Last Modified: 09 Aug 2024 10:37
URI: https://mpra.ub.uni-muenchen.de/id/eprint/121628
Available Versions of this Item
• Lower bounds of uncertainty and upper limits on the accuracy of forecasts of macroeconomic variables. (deposited 09 Aug 2024 10:37) [Currently Displayed]
During a 40-mile trip, Marla traveled at an average | Atlantic GMAT Tutoring

During a 40-mile trip, Marla traveled at an average speed of x miles per hour for the first y miles of the trip and at an average speed of 1.25x miles per hour for the last 40 – y miles of the trip. The time that Marla took to travel the 40 miles was what percent of the time it would have taken her if she had traveled at an average speed of x miles per hour for the entire trip?

(1) x = 48.
(2) y = 20.

Correct Answer: B
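A quick numeric check (a sketch, not the official solution) shows why statement (2) alone suffices: the speed x cancels out of the time ratio, which depends only on y:

```python
def time_ratio(x, y):
    """Ratio of Marla's actual travel time to the time at speed x throughout."""
    actual = y / x + (40 - y) / (1.25 * x)
    baseline = 40 / x
    return actual / baseline

# The ratio is the same for any speed x ...
assert abs(time_ratio(48, 20) - time_ratio(10, 20)) < 1e-12
# ... and with y = 20 it is 90%:
print(time_ratio(48, 20))  # ~0.9
# But knowing x alone (statement 1) fixes nothing, since the ratio varies with y:
print(time_ratio(48, 10), time_ratio(48, 30))  # ~0.85 and ~0.95
```

Algebraically, the ratio is (y + (40 − y)/1.25)/40, so statement (2) pins it down while statement (1) does not.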
Sample Complexity of Testing the Manifold Hypothesis

Part of Advances in Neural Information Processing Systems 23 (NIPS 2010)

Hariharan Narayanan, Sanjoy Mitter

The hypothesis that high dimensional data tends to lie in the vicinity of a low dimensional manifold is the basis of a collection of methodologies termed Manifold Learning. In this paper, we study statistical aspects of the question of fitting a manifold with a nearly optimal least squared error. Given upper bounds on the dimension, volume, and curvature, we show that Empirical Risk Minimization can produce a nearly optimal manifold using a number of random samples that is *independent* of the ambient dimension of the space in which data lie. We obtain an upper bound on the required number of samples that depends polynomially on the curvature, exponentially on the intrinsic dimension, and linearly on the intrinsic volume. For constant error, we prove a matching minimax lower bound on the sample complexity that shows that this dependence on intrinsic dimension, volume and curvature is unavoidable. Whether the known lower bound of $O(\frac{k}{\epsilon^2} + \frac{\log \frac{1}{\delta}}{\epsilon^2})$ for the sample complexity of Empirical Risk Minimization on $k$-means applied to data in a unit ball of arbitrary dimension is tight, has been an open question since 1997 \cite{bart2}. Here $\epsilon$ is the desired bound on the error and $\delta$ is a bound on the probability of failure. We improve the best currently known upper bound \cite{pontil} of $O(\frac{k^2}{\epsilon^2} + \frac{\log \frac{1}{\delta}}{\epsilon^2})$ to $O\left(\frac{k}{\epsilon^2}\left(\min\left(k, \frac{\log^4 \frac{k}{\epsilon}}{\epsilon^2}\right)\right) + \frac{\log \frac{1}{\delta}}{\epsilon^2}\right)$. Based on these results, we devise a simple algorithm for $k$-means and another that uses a family of convex programs to fit a piecewise linear curve of a specified length to high dimensional data, where the sample complexity is independent of the ambient dimension.
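The abstract's k-means results concern the sample complexity of Empirical Risk Minimization rather than any particular clustering routine; purely as background for readers unfamiliar with k-means, here is the standard Lloyd iteration in plain Python (a textbook algorithm, not the authors'):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm on 1-D points; returns sorted centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize from distinct data points
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[j].append(p)
        # Update step: move each center to its cluster mean (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

data = [0.0, 0.1, 0.2, 9.8, 9.9, 10.0]
print(kmeans(data, 2))  # two centers, near 0.1 and 9.9
```

Each iteration weakly decreases the k-means cost (the empirical risk the abstract bounds), though Lloyd's algorithm only guarantees a local optimum.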
2D Beam Planar Approx. Validation | Ansys Innovation Courses "Verification and validation" can be thought of as a formal process for checking results. We previously performed some sanity checks on the deformed shape. A further basic check involves observing how the results change on refining the mesh. The following video shows how to recalculate the results on a refined mesh. Summary of steps in the above video: 1. Easily investigate the change of deformed shape and max deformation of the beam with mesh refinement. Make a copy of the model: 1. In Project Schematic, right click on Model. 2. Select Duplicate. 3. Rename the copied project “2D Beam (refined mesh)". 2. Refine the mesh in the copied project: 1. Double click on Setup in the copied version of the project to launch Ansys Mechanical. 2. In the Project Tree, click on Mesh > Face Sizing. 3. Change Element Size to 0.8in. 4. Click Update. 3. To redo the solution on the refined mesh: 1. Click Solve. We see from the above video that at least one more level of mesh refinement is necessary. Please carry that out. Note that we also need to closely interrogate the comparison with Euler-Bernoulli beam theory as part of the "Verification and validation" process. To save and exit when you are done, select "File > Save" and "File > Exit" in the project view (yellow icon in taskbar). When transferring the project to another location, you need to transfer the "2d_beam.wbpj" file as well as the "2d_beam_files" folder. The project cannot be read into Ansys software without both these entities. The analyst can proceed to simulate the beam using a variety of elements: one- dimensional beam elements, plane strain triangles, plane strain quadrilaterals, plane stress triangles, plane stress quadrilaterals, and three-dimensional brick elements (using what the analyst believes to be sufficiently relaxed end constraints, as per the previous example). The results for maximum transverse deflection are reported in Fig. 4.12. 
All results are reported in dimensionless form, normalized by the characteristic deflection defined in the Problem Specification Section earlier. According to these results, and still believing that Euler-Bernoulli beam theory is correct, the analyst would see that the maximum converged transverse deflection predicted by plane stress conditions underestimates the deflection predicted by Euler-Bernoulli beam theory by nearly 50%; by comparison, the maximum converged transverse deflection predicted by plane strain conditions overestimates the prediction of Euler-Bernoulli theory by 40%. The analyst also realizes that the converged results from the three-dimensional brick elements appear to be in agreement with the converged plane stress results, but that a coarse mesh instance of the plane strain model seems to agree well with the expected Euler-Bernoulli beam theory. How does the analyst sort out these mixed results? The consequences of getting this analysis wrong, in this case, can be far reaching. The analyst who insists on sticking with the Euler-Bernoulli beam theory not only will persist with that error, but as a consequence might make other poor judgements, such as believing, as is apparent in this case, that a relatively coarse mesh under plane strain conditions is also generally correct! This could, in turn, lead to the analyst not performing sufficient mesh refinement studies in other problems, and to accepting other erroneous solutions based on erroneous two-dimensional approximations.
Triangle Graph Paper Printable

Hello friends, are you searching for printable triangle graph paper on the internet? We have various types of graph paper that you can use whenever you need them. Before you download, here are some useful tips about graph paper.

Graph paper is a sheet covered with a grid of small cells; on triangle graph paper the grid is made of small triangles rather than squares. Graph paper is in high demand because it has so many uses: whether you are a student or a working professional, you will reach for it sooner or later. When students start learning mathematics, they can learn a great deal with the help of a graph sheet. There are many designs of graph paper, and you can get every type on our site.

Graph Paper Triangle Graph Paper

Almost every person uses graph paper at some point. If you need a printable graph sheet, read the whole article before downloading, as it will give you some basic knowledge of graph paper.

Triangle Graph Paper

Triangle graph paper carries a grid of triangles, so any design you draw on it follows a triangular layout. Why is a triangular grid useful? You will want one whenever you need to make designs built from triangles, such as isometric drawings. There are various other benefits of graph paper, and you will learn about them in this article.

Do you know the sign conventions on graph paper? There are two signs on any graph: positive and negative. You can read a graph and know the real value on it only if you know these sign conventions. The signs do more than mark positive and negative values; they also help you read things like the profit and loss of a business. The sign of a point depends on only one thing: which quadrant of the graph it lies in.
Yes, every graph has quadrants: four regions, and these quadrants are the basis of the sign conventions. Each quadrant gives a different combination of signs, which is what lets you read values correctly from any graph paper.

Triangle Graph Paper PDF

If you are searching for graph paper, you are probably searching for it in different formats, and each format has its own value. Suppose you make a graph, do not want it edited again, and need to share it with others. In that case you download the PDF format, because PDF does not allow editing.

Triangle Graph Paper Printable

Once you convert a project or graph to PDF, you no longer have to worry about the sheet being edited, even when you send it to others. The graph reaches them exactly as you designed it; they will not see any difference. This is the reason people send resumes, dissertations, and similar documents in PDF format. If you do need to edit the graph, use the Word format instead, which lets you change as much as you like.

Equilateral Triangle Graph Paper

An equilateral triangle has three equal sides and three equal angles. Graph paper has many benefits, and everyone uses it for their own purposes. In school, students have to study many topics, and teachers want their students to learn everything well; teachers often take the help of graphs when teaching mathematics and other calculations.
When students start learning about time and distance, weather, or profit and loss, graphs help them greatly with observation and calculation.

Triangle Grid Paper

So a student will always need graph paper for better learning. Graph paper is also one of the best aids when you are learning to draw. Jewelry designers prefer working on a grid to create the best designs for their clients. One of the best parts of any graph paper is that you do not need to worry about measurements: every cell of the grid has the same size, so you do not need a ruler while designing. Civil engineers also use graph paper when designing the structure of a building.

Triangle Grid Paper Printable

Everyone loves the printable triangle format because it lets you keep the graph paper as a hard copy. If you are not comfortable designing on a computer but are good at hand drawing, printable graph paper is what you need: just print it out and work on the hard copy. Triangle graph paper is one of the best papers for learning design and drawing. Check every format of graph paper available here to get the one that suits you. You can download any number of sheets from our site; all the PDF, Word, and printable formats are free of cost.

How to Use Triangle Graph Paper?

Triangle graph paper has three axes, which makes it useful for drawing designs such as diamonds and other faceted shapes, and for highlighting the relationship between two values. Download the triangle graph paper by tapping on the download link.
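The quadrant and sign-convention idea the article describes can be pinned down with a tiny helper function (illustrative only, not tied to any particular graph paper):

```python
def quadrant(x, y):
    """Quadrant of a point under the usual sign conventions:
    I: (+,+)   II: (-,+)   III: (-,-)   IV: (+,-)
    Points on an axis belong to no quadrant."""
    if x == 0 or y == 0:
        return "axis"
    if x > 0:
        return "I" if y > 0 else "IV"
    return "II" if y > 0 else "III"

print(quadrant(3, 4), quadrant(-3, 4), quadrant(-3, -4), quadrant(3, -4))
# I II III IV
```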
{"url":"https://graphpaperhd.com/tag/triangle-graph-paper-printable/","timestamp":"2024-11-11T16:08:38Z","content_type":"text/html","content_length":"44793","record_id":"<urn:uuid:74f14a9c-c234-4475-866f-c96907dc400a>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00026.warc.gz"}
Upravlenie Bolshimi Sistemami / Large-Scale Systems Control, issue 92, 2021, pp. 28-42. DOI: https://doi.org/10.25728/ubs.2021.92.2

Keywords: linear matrix inequalities, nonlinear matrix inequalities, Schur's lemma, static controller, electromagnetic suspension.

One of the most practically demanded methods of control in linear systems is control in the form of a static controller. To implement this control method, it is not necessary to measure all the phase variables of the system, and the dimension of the closed-loop system coincides with the dimension of the original object. The problem of static controller synthesis, in general, reduces to the search for two mutually inverse matrices that satisfy a system of linear matrix inequalities. Such a problem is nonconvex and therefore cannot be solved using the apparatus of linear matrix inequalities alone. The article considers a special case of the problem of synthesis of static controllers which can be reduced to solving a system of linear matrix inequalities, and states the conditions under which this case applies. Two problems of the synthesis of static controllers are considered: the synthesis of a stabilizing controller and the synthesis of an optimal controller. The obtained results are applied to the stabilization of an electromagnetic suspension when the measured variable is the vertical displacement of the rotor. Graphs of the transients are presented, and a comparative analysis of the quality of transients in the closed-loop system with the calculated static controllers is performed.
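The Lyapunov-based stability check that underlies such synthesis can be illustrated numerically. This is only a sketch: the plant matrices, the measured output, and the gain below are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical second-order plant (illustrative only; NOT the paper's
# electromagnetic-suspension model).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])        # open-loop unstable (det(A) < 0)
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])         # only the displacement is measured

K = np.array([[8.0]])              # static output-feedback gain, u = -K y
Acl = A - B @ K @ C                # closed-loop matrix

# Lyapunov certificate: solve Acl^T P + P Acl = -Q with Q > 0.
# The closed loop is asymptotically stable iff the solution P is
# positive definite.
Q = np.eye(2)
P = solve_continuous_lyapunov(Acl.T, -Q)
print(np.linalg.eigvals(P))        # both eigenvalues positive here
```

The static controller only uses the measured output y = Cx, which is the appeal of the method described in the abstract: no full-state measurement is required.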
{"url":"http://www.mtas.ru/search/search_results_new.php?publication_id=22948&IBLOCK_ID=20","timestamp":"2024-11-08T12:46:25Z","content_type":"text/html","content_length":"14412","record_id":"<urn:uuid:8421b918-512b-4d3c-8924-a322f4df7b9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00312.warc.gz"}
Apply your knowledge of dplyr to do the following two challenges. Number Ones Challenge: boys How many distinct boys' names achieved a rank of Number 1 in any year? Number Ones Challenge: girls How many distinct girls' names achieved a rank of Number 1 in any year? Number Ones Challenge: Plot number_ones is a vector of every boys' name to achieve a rank of one. Use number_ones with babynames to recreate the plot below, which shows the popularity over time for every name in number_ones. babynames |> filter(name %in% number_ones, sex == "M") |> ggplot() + geom_line(aes(x = year, y = prop, color = name)) Name Diversity Challenge: number of unique names Which gender uses more names? In the chunk below, calculate and then plot the number of distinct names used each year for boys and girls. Place year on the x axis, the number of distinct names on the y axis, and color the lines by sex. Name Diversity Challenge: number of boys and girls Let's make sure that we're not confounding our search with the total number of boys and girls born each year. With the chunk below, calculate and then plot over time the total number of boys and girls by year. Is the relative number of boys and girls constant? Name Diversity Challenge: children per name Hmm. Sometimes there are more girls and sometimes more boys. In addition, the entire population has grown over time. Let's account for this with a new metric: the average number of children per name. If girls have a smaller number of children per name, that would imply that they use more names overall (and vice versa). In the chunk below, calculate and plot the average number of children per name by year and sex over time. How do you interpret the results? Good job! In recent years, there are fewer girls (on average) given any particular name than boys. This suggests that there is more variety in girls' names than boys' names once you account for population.
Interestingly, the number of children per name has gone down steeply for each gender since the 1960s, even though the total population has continued to increase. This suggests that there is a greater variety of names today than in the past. Where to from here Congratulations! You can use {dplyr}’s grammar of data manipulation to access any data associated with a table—even if that data is not currently displayed by the table. In other words, you now know how to look at data in R, as well as how to access specific values, calculate summary statistics, and compute new variables. When you combine this with the visualization skills that you learned in Visualization Basics, you have everything that you need to begin exploring data in R. The next tutorial will teach you the last of three basic skills for working with R: 1. How to visualize data 2. How to work with data 3. How to program with R code
{"url":"https://r-primers.andrewheiss.com/transform-data/03-deriving/05-challenges.html","timestamp":"2024-11-06T04:45:55Z","content_type":"application/xhtml+xml","content_length":"119954","record_id":"<urn:uuid:e5f19059-4534-4c2a-b25f-6a2c1c2d1059>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00176.warc.gz"}
Value of Rittinger's constant for ball mill - Bussa Machinery

Oct 10, 2014 · Rittinger's law: putting p = −2 in the general comminution equation and integrating gives the energy per unit mass. Writing C = K_R f_c, where f_c is the crushing strength of the material, Rittinger's law, first postulated in 1867, is obtained: E = K_R f_c (1/L2 − 1/L1). Since the surface area of unit mass of material is proportional to 1/L, the interpretation of this law is that the energy required for size reduction is proportional to the new surface created.
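As a sketch of the integration step behind the quoted law (standard comminution notation: E is the energy per unit mass, L the particle size, L_1 and L_2 the feed and product sizes):

```latex
\frac{\mathrm{d}E}{\mathrm{d}L} = -C\,L^{p}, \qquad p = -2
\quad\Longrightarrow\quad
E = -C \int_{L_1}^{L_2} L^{-2}\,\mathrm{d}L
  = C\left(\frac{1}{L_2} - \frac{1}{L_1}\right)
  = K_R f_c \left(\frac{1}{L_2} - \frac{1}{L_1}\right),
```

with C = K_R f_c. Since L_2 < L_1, the bracket is positive: finer grinding requires more energy, in proportion to the new surface produced.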
{"url":"https://malta-hotele.pl/2017/Jan/28-21742.html","timestamp":"2024-11-02T23:50:33Z","content_type":"text/html","content_length":"36277","record_id":"<urn:uuid:84210988-4115-413c-8e9b-822e184a5112>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00557.warc.gz"}
LL parser In computer science, an LL parser (Left-to-right, Leftmost derivation) is a top-down parser for a subset of context-free languages. It parses the input from Left to right, performing Leftmost derivation of the sentence. An LL parser is called an LL(k) parser if it uses k tokens of lookahead when parsing a sentence. A grammar is called an LL(k) grammar if an LL(k) parser can be constructed from it. A formal language is called an LL(k) language if it has an LL(k) grammar. The set of LL(k) languages is properly contained in that of LL(k+1) languages, for each k≥0.^[1] A corollary of this is that not all context-free languages can be recognized by an LL(k) parser. An LL parser is called an LL(*), or LL-regular,^[2] parser if it is not restricted to a finite number k of tokens of lookahead, but can make parsing decisions by recognizing whether the following tokens belong to a regular language (for example by means of a Deterministic Finite Automaton). LL grammars, particularly LL(1) grammars, are of great practical interest, as parsers for these grammars are easy to construct, and many computer languages are designed to be LL(1) for this reason.^ [3] LL parsers are table-based parsers, similar to LR parsers. LL grammars can also be parsed by recursive descent parsers. According to Waite and Goos (1984),^[4] LL(k) grammars were introduced by Stearns and Lewis (1969).^[5] For a given context-free grammar, the parser attempts to find the leftmost derivation. Given an example grammar ${\displaystyle G}$: 1. ${\displaystyle S\to E}$ 2. ${\displaystyle E\to (E+E)}$ 3. 
${\displaystyle E\to i}$ the leftmost derivation for ${\displaystyle w=((i+i)+i)}$ is: ${\displaystyle S\ {\overset {(1)}{\Rightarrow }}\ E\ {\overset {(2)}{\Rightarrow }}\ (E+E)\ {\overset {(2)}{\Rightarrow }}\ ((E+E)+E)\ {\overset {(3)}{\Rightarrow }}\ ((i+E)+E)\ {\overset {(3)} {\Rightarrow }}\ ((i+i)+E)\ {\overset {(3)}{\Rightarrow }}\ ((i+i)+i)}$ Generally, there are multiple possibilities when selecting a rule to expand a given (leftmost) non-terminal. In the previous example of the leftmost derivation, in step 2: ${\displaystyle S\ {\overset {(1)}{\Rightarrow }}\ E\ {\overset {(?)}{\Rightarrow }}\ ?}$ To be effective, the parser must be able to make this choice deterministically when possible, without backtracking. For some grammars, it can do this by peeking at the unread input (without consuming it). In our example, if the parser knows that the next unread symbol is ${\displaystyle (}$ , the only correct rule that can be used is 2. Generally, an ${\displaystyle LL(k)}$ parser can look ahead at ${\displaystyle k}$ symbols. However, given a grammar, the problem of determining if there exists a ${\displaystyle LL(k)}$ parser for some ${\displaystyle k}$ that recognizes it is undecidable. For each ${\displaystyle k}$, there is a language that cannot be recognized by a ${\displaystyle LL(k)}$ parser, but can be by an ${\displaystyle LL(k+1)}$ parser. We can use the above analysis to give the following formal definition: Let ${\displaystyle G}$ be a context-free grammar and ${\displaystyle k\geq 1}$. We say that ${\displaystyle G}$ is ${\displaystyle LL(k)}$, if and only if for any two leftmost derivations: 1. ${\displaystyle S\ \Rightarrow \ \dots \ \Rightarrow \ wA\alpha \ \Rightarrow \ \dots \ \Rightarrow \ w\beta \alpha \ \Rightarrow \ \dots \ \Rightarrow \ wx}$ 2. 
${\displaystyle S\ \Rightarrow \ \dots \ \Rightarrow \ wA\alpha \ \Rightarrow \ \dots \ \Rightarrow \ w\gamma \alpha \ \Rightarrow \ \dots \ \Rightarrow \ wy}$ the following condition holds: if the prefix of the string ${\displaystyle x}$ of length ${\displaystyle k}$ equals the prefix of the string ${\displaystyle y}$ of length ${\displaystyle k}$, then ${\displaystyle \beta =\gamma }$. In this definition, ${\displaystyle S}$ is the start symbol and ${\displaystyle A}$ any non-terminal. The already derived input ${\displaystyle w}$, and the yet unread ${\displaystyle x}$ and ${\displaystyle y}$, are strings of terminals. The Greek letters ${\displaystyle \alpha }$, ${\displaystyle \beta }$ and ${\displaystyle \gamma }$ represent any string of both terminals and non-terminals (possibly empty). The prefix length corresponds to the lookahead buffer size, and the definition says that this buffer is enough to distinguish between any two derivations of different words. The ${\displaystyle LL(k)}$ parser is a deterministic pushdown automaton with the ability to peek at the next ${\displaystyle k}$ input symbols without reading. This capability can be emulated by storing the lookahead buffer contents in the finite state space, since both buffer and input alphabet are finite in size. As a result, this does not make the automaton more powerful, but is a convenient abstraction. The stack alphabet ${\displaystyle \Gamma =N\cup \Sigma }$, where: • ${\displaystyle N}$ is the set of non-terminals; • ${\displaystyle \Sigma }$ the set of terminal (input) symbols with a special end-of-input (EOI) symbol ${\displaystyle \$}$. The parser stack initially contains the starting symbol above the EOI: ${\displaystyle [\ S\ \$\ ]}$. 
During operation, the parser repeatedly replaces the symbol ${\displaystyle X}$ on top of the stack: • with some ${\displaystyle \alpha }$, if ${\displaystyle X\in N}$ and there is a rule ${\displaystyle X\to \alpha }$; • with ${\displaystyle \epsilon }$ (in some notations ${\displaystyle \lambda }$), i.e. ${\displaystyle X}$ is popped off the stack, if ${\displaystyle X\in \Sigma }$. In this case, an input symbol ${\displaystyle x}$ is read and if ${\displaystyle x\neq X}$, the parser rejects the input. If the last symbol to be removed from the stack is the EOI, the parsing is successful; the automaton accepts via an empty stack. The states and the transition function are not explicitly given; they are specified (generated) using a more convenient parse table instead. The table provides the following mapping: • row: top-of-stack symbol ${\displaystyle X}$ • column: ${\displaystyle |w|\leq k}$ lookahead buffer contents • cell: rule number for ${\displaystyle X\to \alpha }$ or ${\displaystyle \epsilon }$ If the parser cannot perform a valid transition, the input is rejected (empty cells). To make the table more compact, only the non-terminal rows are commonly displayed, since the action is the same for terminals. Concrete example To explain an LL(1) parser's workings we will consider the following small LL(1) grammar: 1. S → F 2. S → ( S + F ) 3. F → a and parse the following input: ( a + a ) We construct a parsing table for this grammar by expanding all the terminals by column and all nonterminals by row. Later, the expressions are numbered by the position where the columns and rows cross. For example, the terminal '(' and non-terminal 'S' match for expression number 2. The table is as follows:

        (    )    a    +    $
   S    2    -    1    -    -
   F    -    -    3    -    -

(Note that there is also a column for the special terminal, represented here as $, that is used to indicate the end of the input stream.) 
Parsing procedure

In each step, the parser reads the next-available symbol from the input stream, and the top-most symbol from the stack. If the input symbol and the stack-top symbol match, the parser discards them both, leaving only the unmatched symbols in the input stream and on the stack. Thus, in its first step, the parser reads the input symbol '(' and the stack-top symbol 'S'. The parsing table instruction comes from the column headed by the input symbol '(' and the row headed by the stack-top symbol 'S'; this cell contains '2', which instructs the parser to apply rule (2). The parser has to rewrite 'S' to '( S + F )' on the stack by removing 'S' from the stack and pushing ')', 'F', '+', 'S', '(' onto the stack, and this writes the rule number 2 to the output. The stack then becomes: [ (, S, +, F, ), $ ] Since the '(' from the input stream did not match the top-most symbol, 'S', from the stack, it was not removed, and remains the next-available input symbol for the following step. In the second step, the parser removes the '(' from its input stream and from its stack, since they now match. The stack now becomes: [ S, +, F, ), $ ] Now the parser has an 'a' on its input stream and an 'S' as its stack top. The parsing table instructs it to apply rule (1) from the grammar and write the rule number 1 to the output stream. The stack becomes: [ F, +, F, ), $ ] The parser now has an 'a' on its input stream and an 'F' as its stack top. The parsing table instructs it to apply rule (3) from the grammar and write the rule number 3 to the output stream. The stack becomes: [ a, +, F, ), $ ] In the next two steps the parser reads the 'a' and '+' from the input stream and, since they match the next two items on the stack, also removes them from the stack. This results in: [ F, ), $ ] In the next three steps the parser will replace 'F' on the stack by 'a', write the rule number 3 to the output stream and remove the 'a' and ')' from both the stack and the input stream. 
The parser thus ends with '$' on both its stack and its input stream. In this case the parser will report that it has accepted the input string and write the following list of rule numbers to the output stream: [ 2, 1, 3, 3 ] This is indeed a list of rules for a leftmost derivation of the input string, which is: S → ( S + F ) → ( F + F ) → ( a + F ) → ( a + a )

Parser implementation in C++

Below follows a C++ implementation of a table-based LL parser for the example language:

#include <iostream>
#include <map>
#include <stack>

enum Symbols {
    // the symbols:
    // Terminal symbols:
    TS_L_PARENS,  // (
    TS_R_PARENS,  // )
    TS_A,         // a
    TS_PLUS,      // +
    TS_EOS,       // $, in this case corresponds to '\0'
    TS_INVALID,   // invalid token

    // Non-terminal symbols:
    NTS_S,        // S
    NTS_F         // F
};

// Converts a valid token to the corresponding terminal symbol
Symbols lexer(char c)
{
    switch (c)
    {
    case '(':  return TS_L_PARENS;
    case ')':  return TS_R_PARENS;
    case 'a':  return TS_A;
    case '+':  return TS_PLUS;
    case '\0': return TS_EOS;  // end of stack: the $ terminal symbol
    default:   return TS_INVALID;
    }
}

int main(int argc, char **argv)
{
    using namespace std;

    if (argc < 2)
    {
        cout << "usage:\n\tll '(a+a)'" << endl;
        return 0;
    }

    // LL parser table, maps < non-terminal, terminal> pair to action
    map< Symbols, map<Symbols, int> > table;
    stack<Symbols> ss;  // symbol stack
    char *p;            // input buffer

    // initialize the symbols stack
    ss.push(TS_EOS);    // terminal, $
    ss.push(NTS_S);     // non-terminal, S

    // initialize the symbol stream cursor
    p = &argv[1][0];

    // set up the parsing table
    table[NTS_S][TS_L_PARENS] = 2;
    table[NTS_S][TS_A] = 1;
    table[NTS_F][TS_A] = 3;

    while (ss.size() > 0)
    {
        if (lexer(*p) == ss.top())
        {
            cout << "Matched symbols: " << lexer(*p) << endl;
            p++;
            ss.pop();
        }
        else
        {
            cout << "Rule " << table[ss.top()][lexer(*p)] << endl;
            switch (table[ss.top()][lexer(*p)])
            {
            case 1:  // 1. S → F
                ss.pop();
                ss.push(NTS_F);        // F
                break;

            case 2:  // 2. S → ( S + F )
                ss.pop();
                ss.push(TS_R_PARENS);  // )
                ss.push(NTS_F);        // F
                ss.push(TS_PLUS);      // +
                ss.push(NTS_S);        // S
                ss.push(TS_L_PARENS);  // (
                break;

            case 3:  // 3. F → a
                ss.pop();
                ss.push(TS_A);         // a
                break;

            default:
                cout << "parsing table defaulted" << endl;
                return 0;
            }
        }
    }

    cout << "finished parsing" << endl;
    return 0;
}

Parser implementation in Python

# All constants are indexed from 0
TERM = 0
RULE = 1

# Terminals
T_LPAR = 0
T_RPAR = 1
T_A = 2
T_PLUS = 3
T_END = 4
T_INVALID = 5

# Non-Terminals
N_S = 0
N_F = 1

# parse table
table = [[ 1, -1, 0, -1, -1, -1],
         [-1, -1, 2, -1, -1, -1]]

RULES = [[(RULE, N_F)],
         [(TERM, T_LPAR), (RULE, N_S), (TERM, T_PLUS), (RULE, N_F), (TERM, T_RPAR)],
         [(TERM, T_A)]]

stack = [(TERM, T_END), (RULE, N_S)]

def lexical_analysis(inputstring):
    print('Lexical analysis')
    tokens = []
    for c in inputstring:
        if c == '+':
            tokens.append(T_PLUS)
        elif c == '(':
            tokens.append(T_LPAR)
        elif c == ')':
            tokens.append(T_RPAR)
        elif c == 'a':
            tokens.append(T_A)
        else:
            tokens.append(T_INVALID)
    tokens.append(T_END)  # mark end of input with the $ terminal
    return tokens

def syntactic_analysis(tokens):
    print('Syntactic analysis')
    position = 0
    while len(stack) > 0:
        (stype, svalue) = stack.pop()
        token = tokens[position]
        if stype == TERM:
            if svalue == token:
                position += 1
                print('pop', svalue)
                if token == T_END:
                    print('input accepted')
            else:
                print('bad term on input:', token)
                break
        elif stype == RULE:
            print('svalue', svalue, 'token', token)
            rule = table[svalue][token]
            print('rule', rule)
            for r in reversed(RULES[rule]):
                stack.append(r)
        print('stack', stack)

inputstring = '(a+a)'
syntactic_analysis(lexical_analysis(inputstring))

As can be seen from the example, the parser performs three types of steps depending on whether the top of the stack is a nonterminal, a terminal or the special symbol $: • If the top is a nonterminal then the parser looks up in the parsing table, on the basis of this nonterminal and the symbol on the input stream, which rule of the grammar it should use to replace the nonterminal on the stack. The number of the rule is written to the output stream. If the parsing table indicates that there is no such rule then the parser reports an error and stops. • If the top is a terminal then the parser compares it to the symbol on the input stream and if they are equal they are both removed. 
If they are not equal the parser reports an error and stops. • If the top is $ and on the input stream there is also a $ then the parser reports that it has successfully parsed the input, otherwise it reports an error. In both cases the parser will stop. These steps are repeated until the parser stops, and then it will have either completely parsed the input and written a leftmost derivation to the output stream or it will have reported an error. Constructing an LL(1) parsing table In order to fill the parsing table, we have to establish what grammar rule the parser should choose if it sees a nonterminal A on the top of its stack and a symbol a on its input stream. It is easy to see that such a rule should be of the form A → w and that the language corresponding to w should have at least one string starting with a. For this purpose we define the First-set of w, written here as Fi(w), as the set of terminals that can be found at the start of some string in w, plus ε if the empty string also belongs to w. Given a grammar with the rules A[1] → w[1], ..., A[n] → w[n], we can compute the Fi(w[i]) and Fi(A[i]) for every rule as follows: 1. initialize every Fi(A[i]) with the empty set 2. compute Fi(w[i]) for every rule A[i] → w[i], where Fi is defined as follows: □ Fi(a w' ) = { a } for every terminal a □ Fi(A w' ) = Fi(A) for every nonterminal A with ε not in Fi(A) □ Fi(A w' ) = Fi(A) \ { ε } ∪ Fi(w' ) for every nonterminal A with ε in Fi(A) □ Fi(ε) = { ε } 3. add Fi(w[i]) to Fi(A[i]) for every rule A[i] → w[i] 4. do steps 2 and 3 until all Fi sets stay the same. Unfortunately, the First-sets are not sufficient to compute the parsing table. This is because a right-hand side w of a rule might ultimately be rewritten to the empty string. So the parser should also use the rule A → w if ε is in Fi(w) and it sees on the input stream a symbol that could follow A. 
Therefore, we also need the Follow-set of A, written as Fo(A) here, which is defined as the set of terminals a such that there is a string of symbols αAaβ that can be derived from the start symbol. We use $ as a special terminal indicating end of input stream and S as start symbol. Computing the Follow-sets for the nonterminals in a grammar can be done as follows: 1. initialize Fo(S) with { $ } and every other Fo(A[i]) with the empty set 2. if there is a rule of the form A[j] → wA[i]w' , then □ if the terminal a is in Fi(w' ), then add a to Fo(A[i]) □ if ε is in Fi(w' ), then add Fo(A[j]) to Fo(A[i]) □ if w' has length 0, then add Fo(A[j]) to Fo(A[i]) 3. repeat step 2 until all Fo sets stay the same. Now we can define exactly which rules will be contained where in the parsing table. If T[A, a] denotes the entry in the table for nonterminal A and terminal a, then T[A,a] contains the rule A → w if and only if a is in Fi(w) or ε is in Fi(w) and a is in Fo(A). If the table contains at most one rule in every one of its cells, then the parser will always know which rule it has to use and can therefore parse strings without backtracking. It is in precisely this case that the grammar is called an LL(1) grammar. Constructing an LL(k) parsing table Until the mid-1990s, it was widely believed that LL(k) parsing (for k > 1) was impractical, since the parser table would have exponential size in k in the worst case. This perception changed gradually after the release of the Purdue Compiler Construction Tool Set around 1992, when it was demonstrated that many programming languages can be parsed efficiently by an LL(k) parser without triggering the worst-case behavior of the parser. Moreover, in certain cases LL parsing is feasible even with unlimited lookahead. By contrast, traditional parser generators like yacc use LALR(1) parser tables to construct a restricted LR parser with a fixed one-token lookahead. 
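The two fixed-point computations above can be sketched in Python for the article's example grammar S → F | ( S + F ), F → a. The function names and the dictionary encoding of the grammar are ours, not from any library; the sketch also handles ε-rules even though this grammar has none.

```python
EPS = 'ε'
grammar = {
    'S': [['F'], ['(', 'S', '+', 'F', ')']],
    'F': [['a']],
}
nonterminals = set(grammar)

def first_of_string(symbols, first):
    """Fi of a string of symbols, given the current Fi sets of nonterminals."""
    out = set()
    for s in symbols:
        if s not in nonterminals:       # terminal: Fi(a w') = { a }
            out.add(s)
            return out
        out |= first[s] - {EPS}
        if EPS not in first[s]:         # this symbol cannot vanish
            return out
    out.add(EPS)                        # the whole string can derive ε
    return out

def compute_first():
    first = {a: set() for a in grammar}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for a, rules in grammar.items():
            for w in rules:
                f = first_of_string(w, first)
                if not f <= first[a]:
                    first[a] |= f
                    changed = True
    return first

def compute_follow(first, start='S'):
    follow = {a: set() for a in grammar}
    follow[start].add('$')              # $ follows the start symbol
    changed = True
    while changed:
        changed = False
        for a, rules in grammar.items():
            for w in rules:
                for i, b in enumerate(w):
                    if b not in nonterminals:
                        continue
                    tail = first_of_string(w[i + 1:], first)
                    add = (tail - {EPS}) | (follow[a] if EPS in tail else set())
                    if not add <= follow[b]:
                        follow[b] |= add
                        changed = True
    return follow

first = compute_first()
follow = compute_follow(first)
print(first)    # FIRST(S) is {'(', 'a'}; FIRST(F) is {'a'}
print(follow)   # FOLLOW(S) is {'$', '+'}; FOLLOW(F) is {'$', '+', ')'}
```

These sets reproduce the parse table shown earlier: rule 2 goes under T[S, (], rule 1 under T[S, a], and rule 3 under T[F, a].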
As described in the introduction, LL(1) parsers recognize languages that have LL(1) grammars, which are a special case of context-free grammars (CFGs); LL(1) parsers cannot recognize all context-free languages. The LL(1) languages are a proper subset of the LR(1) languages, which in turn are a proper subset of all context-free languages. In order for a CFG to be an LL(1) grammar, certain conflicts must not arise, which we describe in this section. Let A be a non-terminal. FIRST(A) is (defined to be) the set of terminals that can appear in the first position of any string derived from A. FOLLOW(A) is the union over FIRST(B) where B is any non-terminal that immediately follows A in the right hand side of a production rule. LL(1) Conflicts There are two main types of LL(1) conflicts: FIRST/FIRST Conflict The FIRST sets of two different grammar rules for the same non-terminal intersect. An example of an LL(1) FIRST/FIRST conflict: S -> E | E 'a' E -> 'b' | ε FIRST(E) = {'b', ε} and FIRST(E 'a') = {'b', 'a'}, so when the table is drawn, there is a conflict under terminal 'b' of production rule S. Special Case: Left Recursion Left recursion will cause a FIRST/FIRST conflict with all alternatives. E -> E '+' term | alt1 | alt2 FIRST/FOLLOW Conflict The FIRST and FOLLOW set of a grammar rule overlap. With an empty string (ε) in the FIRST set it is unknown which alternative to select. An example of an LL(1) conflict: S -> A 'a' 'b' A -> 'a' | ε The FIRST set of A now is {'a', ε} and the FOLLOW set is {'a'}. Solutions to LL(1) Conflicts Left Factoring A common left-factor is "factored out". A -> X | X Y Z becomes A -> X B B -> Y Z | ε Can be applied when two alternatives start with the same symbol, as in a FIRST/FIRST conflict. 
Another example (more complex), using the FIRST/FIRST conflict example above: S -> E | E 'a' E -> 'b' | ε becomes (merging into a single non-terminal) S -> 'b' | ε | 'b' 'a' | 'a' and then, through left-factoring, becomes S -> 'b' E | E E -> 'a' | ε Substitution Substituting a rule into another rule removes indirect or FIRST/FOLLOW conflicts. Note that this may cause a FIRST/FIRST conflict. Left recursion removal^[7] For a general method, see removing left recursion. A simple example of left recursion removal: the following production rule has left recursion on E E -> E '+' T E -> T This rule is nothing but a list of Ts separated by '+'; in regular expression form, T ('+' T)*. So the rule could be rewritten as E -> T Z Z -> '+' T Z Z -> ε Now there is no left recursion and no conflicts on either of the rules. However, not all CFGs have an equivalent LL(k)-grammar, e.g.: S -> A | B A -> 'a' A 'b' | ε B -> 'a' B 'b' 'b' | ε It can be shown that there does not exist any LL(k)-grammar accepting the language generated by this grammar.
{"url":"https://static.hlt.bme.hu/semantics/external/pages/elemz%C3%A9si_fa/en.wikipedia.org/wiki/LL_parser.html","timestamp":"2024-11-03T19:00:59Z","content_type":"text/html","content_length":"148269","record_id":"<urn:uuid:b5b9b1fb-493e-4b11-b399-85208b7318c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00540.warc.gz"}
Lagrange Polynomials · CryptoGroups.jl This Julia code demonstrates the implementation of Lagrange interpolation over a modular field, a crucial component in cryptographic schemes such as Shamir's Secret Sharing. The implementation showcases the flexibility and composability of Julia's ecosystem by seamlessly integrating the external Polynomials package with the custom modular field arithmetic provided by CryptoGroups. CryptoGroups handles the finite field arithmetic, while Polynomials manages the polynomial operations, without creating a direct dependency between the two. The result is a concise yet powerful implementation that can be easily adapted for various cryptographic applications. The example includes a test case that reconstructs a secret (the constant term of the polynomial) using Lagrange interpolation, illustrating its practical application in secret sharing schemes.

using Test
using CryptoGroups.Fields
using Polynomials

function lagrange_interpolation(x::Vector{T}, y::Vector{T}) where T
    n = length(x)
    result = Polynomial{T}([0])
    for i in 1:n
        term = Polynomial{T}([1])
        for j in 1:n
            if i != j
                term *= Polynomial{T}([-x[j], 1]) / (x[i] - x[j])
            end
        end
        result += y[i] * term
    end
    return result
end

p = 23
secret = 3
poly = Polynomial{FP{p}}([secret, 5, 2, 5])
x = FP{p}[2, 4, 6, 8]
y = poly.(x)
interp_poly = lagrange_interpolation(x, y)
@test value(interp_poly(0)) == secret

Test Passed

This page was generated using Literate.jl.
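For readers without a Julia setup, the same reconstruction-at-zero step can be sketched in plain Python over GF(23). The helper names are ours, and modular inverses use Python 3.8+'s three-argument pow; the polynomial coefficients mirror the example above.

```python
p = 23  # small prime modulus, as in the Julia example

def interpolate_at_zero(points, p):
    """points: list of (x, y) shares; returns f(0) mod p via Lagrange weights."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * (-xj)) % p        # factor (0 - xj)
                den = (den * (xi - xj)) % p    # factor (xi - xj)
        # yi * product((0 - xj) / (xi - xj)) mod p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def f(x):
    # f(t) = 3 + 5 t + 2 t^2 + 5 t^3 mod 23, same coefficients as the Julia poly
    return (3 + 5 * x + 2 * x * x + 5 * x ** 3) % p

shares = [(x, f(x)) for x in (2, 4, 6, 8)]
print(interpolate_at_zero(shares, p))  # 3, the reconstructed secret
```

Four shares suffice because the polynomial has degree three; any fewer and the constant term is information-theoretically hidden, which is the point of Shamir's scheme.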
{"url":"https://peacefounder.org/CryptoGroups.jl/dev/generated/lagrange/","timestamp":"2024-11-06T04:34:38Z","content_type":"text/html","content_length":"9938","record_id":"<urn:uuid:69c46e16-fc0a-4c04-836f-2ed20e2013f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00218.warc.gz"}
How To Graph The Inverse Function - Graphworksheets.com Inverse Function Graph Worksheet – If you're looking for graphing-functions worksheets, you've come to the right place. There are several different types of graphing-function worksheets to choose from. Conaway Math offers Valentine's Day-themed worksheets with graphing functions, which is a great way for your child to learn about these functions. … Read more
{"url":"https://www.graphworksheets.com/tag/how-to-graph-the-inverse-function/","timestamp":"2024-11-02T20:52:08Z","content_type":"text/html","content_length":"46915","record_id":"<urn:uuid:b90cc340-ae76-4225-a10d-f1d50b9569b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00147.warc.gz"}
Segregate 0s and 1s in an Array Difficulty Level: Easy Frequently asked in: Accolite, Amazon, Fab, MakeMyTrip, PayPal, Paytm, Zoho Problem Statement Suppose you have an integer array that stores only 0s and 1s. The problem "Segregate 0s and 1s in an array" asks you to rearrange the array into two parts: the 0s should be on the left side of the array and the 1s on the right side. Explanation: All 0s are shifted to the left and all 1s are shifted to the right. 1. Traverse the array and count the total number of zeroes in the array. 2. Write '0' into the first count positions of the array. 3. Write '1' into the remaining (n – count) positions, starting from the position where the zeroes end. 4. Print the array. Explanation for Segregate 0s and 1s in an Array Given an array of integers that contains only 0s and 1s, rearrange it so that all the zeroes are on the left side of the array and all the ones are on the right side. For this, we count the zeroes; that count tells us how many positions from the left must become 0. Traverse the array once and, for each arr[i], check whether it equals 0; if it does, increase count by 1 (count is declared and initialized to 0 before entering the loop). After this traversal we have the count, so we set arr[i] = 0 for the first count positions, from index 0 to count − 1. Now we have the zeroes on the left side of the array. 
Now we have to traverse the array from count to n, where n is the length of the array. So, starting from i = count, keep setting every value to 1. After all the operations, we have the desired array: 0s on the left side of the array and 1s on the right side.

C++ program for Segregate 0s and 1s in an Array

#include <iostream>
using namespace std;

void segregateZeroesOnes(int arr[], int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (arr[i] == 0)
            count++;
    for (int i = 0; i < count; i++)
        arr[i] = 0;
    for (int i = count; i < n; i++)
        arr[i] = 1;
}

void printArray(int arr[], int n)
{
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}

int main()
{
    int arr[] = {1, 0, 1, 1, 0, 1, 1, 0};
    int n = sizeof(arr) / sizeof(arr[0]);
    segregateZeroesOnes(arr, n);
    printArray(arr, n);
    return 0;
}

Java program for Segregate 0s and 1s in an Array

class segregateZeroesOnes
{
    public static void segregateZeroesOnes(int arr[], int n)
    {
        int count = 0;
        for (int i = 0; i < n; i++)
            if (arr[i] == 0)
                count++;
        for (int i = 0; i < count; i++)
            arr[i] = 0;
        for (int i = count; i < n; i++)
            arr[i] = 1;
    }

    public static void printArray(int arr[], int n)
    {
        for (int i = 0; i < n; i++)
            System.out.print(arr[i] + " ");
    }

    public static void main(String[] args)
    {
        int arr[] = new int[] {1, 0, 1, 1, 0, 1, 1, 0};
        int n = arr.length;
        segregateZeroesOnes(arr, n);
        printArray(arr, n);
    }
}

Complexity Analysis for Segregate 0s and 1s in an Array

Time Complexity: O(n), where "n" is the number of elements in the array (the array is scanned a constant number of times). Space Complexity: O(1) auxiliary space, since the array is rearranged in place.
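For contrast with the two-pass counting method above, here is a sketch of a standard single-pass, two-pointer variant (not the article's approach): it swaps a misplaced 1 from the left with a misplaced 0 from the right. The function name is ours.

```python
def segregate(arr):
    """Move all 0s to the left and all 1s to the right, in place, one pass."""
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        if arr[lo] == 0:       # already in place on the left
            lo += 1
        elif arr[hi] == 1:     # already in place on the right
            hi -= 1
        else:                  # arr[lo] == 1 and arr[hi] == 0: swap them
            arr[lo], arr[hi] = arr[hi], arr[lo]
            lo += 1
            hi -= 1
    return arr

print(segregate([1, 0, 1, 1, 0, 1, 1, 0]))  # [0, 0, 0, 1, 1, 1, 1, 1]
```

This keeps the O(n) time of the counting method while touching each element at most once, at the cost of not preserving any other per-element data; for pure 0/1 arrays the two approaches are interchangeable.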
100 Grid
With the recent Coronavirus Disease 2019 (COVID-19), many children are at home because schools are closed. Education and fun, however, should not stop. Here is a spreadsheet I created that gets kids working on their numbers using a 100 grid. First, you can enter the probability of a number being missing from the 100 grid. I usually keep them all the same percent; however, I made the spreadsheet have a probability for each number 1-100 just in case you wanted to concentrate practice on specific numbers. For example, if your student has problems with, say, numbers ending in 5, or all even numbers, just set the probability larger for those numbers. Or perhaps you just want to make a cool pattern of missing and present numbers (smiley face? the word "hi"?). To make a number always missing, enter 1 (i.e. 100%). To make a number always present, enter 0. The spreadsheet will then randomly fill in the 100 grid. You can press F9 to create a new randomly generated grid. I set the Print Area, that is, the area that will be printed out when you click Print, to be the area with the grid in the outlined box. This way, you can just open up the spreadsheet and print, and you're good to go. As mentioned, if you want another realization of the 100 grid, just press F9 to refresh the random numbers. Click here to download the spreadsheet. If you find this spreadsheet useful, please credit Statisticool.com. Thanks for reading.
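The same idea works outside a spreadsheet too. The sketch below is my own illustration (not the spreadsheet's actual formulas): it fills a 10x10 grid with the numbers 1-100 and blanks each number with its own probability:

```python
import random

def make_grid(p_missing, seed=None):
    """Build a 10x10 grid of numbers 1-100; number n is blanked with
    probability p_missing.get(n, 0.0) (0 = always shown, 1 = always missing)."""
    rng = random.Random(seed)
    grid = []
    for row in range(10):
        cells = []
        for col in range(10):
            n = row * 10 + col + 1
            cells.append("" if rng.random() < p_missing.get(n, 0.0) else n)
        grid.append(cells)
    return grid

# Blank every number ending in 5 for targeted practice.
probs = {n: 1.0 for n in range(5, 101, 10)}
grid = make_grid(probs)
```

Re-running `make_grid` plays the role of pressing F9 for a fresh realization.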
Mathematics of the Universe
The universe is a vast and complex system governed by mathematical laws and principles.
published : 09 April 2024
From the motion of celestial bodies to the behavior of subatomic particles, mathematics provides a language for understanding the fundamental structure and dynamics of the cosmos.
Physical Laws and Mathematical Formulations
Many of the physical laws that govern the universe can be described using mathematical equations and formulations. For example, Isaac Newton's laws of motion, Albert Einstein's theory of relativity, and the laws of thermodynamics are all expressed using mathematical equations. These mathematical formulations allow scientists to make predictions about the behavior of physical systems and test these predictions through observation and experimentation. By studying the mathematical underpinnings of the universe, scientists gain insights into the nature of reality and the fundamental forces that shape it.
Quantum Mechanics and Mathematical Formalism
Quantum mechanics, the branch of physics that describes the behavior of particles at the smallest scales, relies heavily on mathematical formalism for its formulation and interpretation. The mathematical framework of quantum mechanics, including wave functions, operators, and probability amplitudes, provides a rigorous foundation for understanding the strange and counterintuitive phenomena of the quantum world. Mathematical concepts such as linear algebra, differential equations, and probability theory are essential tools for solving problems in quantum mechanics and interpreting the results of experiments. By applying mathematical techniques to quantum systems, physicists can make predictions about the behavior of particles and develop new technologies and applications.
Cosmology and Mathematical Models Cosmology, the study of the origin, evolution, and eventual fate of the universe, relies on mathematical models and simulations to explore its vast and complex structure. Mathematical models of the universe, such as the Big Bang theory and inflationary cosmology, provide frameworks for understanding the history and dynamics of the cosmos. By combining observational data with mathematical models, cosmologists can make predictions about the structure and evolution of the universe and test these predictions through observation and experimentation. Mathematical techniques such as differential geometry, topology, and tensor calculus play a crucial role in formulating and solving the equations that describe the universe on the largest scales. The mathematics of the universe is a testament to the power and beauty of mathematical reasoning. By uncovering the mathematical laws and principles that govern the cosmos, scientists gain insights into the nature of reality and the underlying order that pervades the universe. As we continue to explore the depths of mathematical physics and cosmology, let us marvel at the elegance and sophistication of the mathematical descriptions of the universe and the profound insights they offer into the mysteries of existence.
Collection of Solved Problems
Task number: 2323
Positive charge Q is located at all vertices of a square with side length a.
a) Determine the electric field intensity and the electric potential at the centre of the square.
b) What charge q do we have to place at the centre of the square so that the overall force acting on every charge is zero?
• a) Hint: Intensity And Potential Of Electric Field
All charges have the same value and distance from the centre. What will thus apply to the sizes of the electric field intensity vectors? The potential in the centre of the square can be obtained by adding up the potentials of all electric fields that the point charges create.
• a) Solution: Electric Field Intensity
The size of the electric intensity is directly proportional to the size of the charge creating the electric field and inversely proportional to the square of the distance between the charge and the point where we determine the intensity. All charges are of the same size and have the same distance between them and the centre of the square. The size of the electric intensity of all four fields is thus the same. The direction of the vector of the electric intensity is the same as the direction of the electric force that would act on a positive charge at the given point. Since the charges in the vertices are positive, the electric intensity will point away from them. If we draw a picture, it is obvious that the electric intensity vectors of the fields of charges at opposite corners have opposite directions. Since they also have the same size, they cancel out. The total electric field intensity will thus be zero.
• a) Analysis: Potential In the Square Centre
The total electric field potential in the centre of the square can be obtained by adding up the potentials of all electric fields caused by all the charges. The electric field potential of a point charge is directly proportional to the size of the charge and inversely proportional to the distance between the charge and the point where we determine the potential.
Since all charges are of the same size and have the same distance from the centre, the potential caused by each of them at the centre of the square is the same.
• a) Solution: Potential In the Square Centre
The total potential in the centre of the square is equal to the sum of the potentials of all the charges. The following relation applies for the electric potential at distance r caused by charge Q: \[\varphi\,=\, \frac{1}{4 \pi \varepsilon_0} \frac{Q}{r}\,. \] Since all charges have the same value and are the same distance from the centre of the square, the potential is always the same. Each charge is half a diagonal away from the centre of the square. That distance is \(\frac{a \sqrt{2}}{2}\). The potential of every field is equal to: \[\varphi\,=\, \frac{1}{4 \pi \varepsilon_0}\, \frac{Q}{\frac{a \sqrt{2}}{2}}\,=\,\frac{1}{4 \pi \varepsilon_0}\, \frac{2Q }{a\sqrt{2} }\,.\] We will multiply the denominator and the numerator by the square root of 2 to remove the root from the denominator. \[\varphi\,=\,\frac{1}{4 \pi \varepsilon_0}\, \frac{2\sqrt{2}Q }{2a }\,=\,\frac{1}{4 \pi \varepsilon_0}\, \frac{\sqrt{2}Q }{a }\] We obtain the total potential by adding up the potentials of all four fields, i.e. four times the potential of a single field. \[\varphi_c\,=\,4 \varphi\,=\, \frac{1}{4 \pi \varepsilon_0}\, \frac{4\sqrt{2}Q}{a}\]
• b) Hint
What forces act on the charge in the corner of the square? Draw a picture and determine the resultant of these forces.
• b) Picture of Acting Forces
• b) Analysis
Before we calculate the value of the charge q that has to be placed at the centre of the square so that the resultant force acting on the charges in the corners is zero, we have to find the resultant electric force that the charges located at the corners of the square exert on each of the other charges. Charges placed at the corners of the square act upon the other charges with repulsive electric forces. The size of these forces can be found from Coulomb's law.
The resulting force which acts on a single charge in the corner can be obtained by adding up the vectors of every force. We have to place such a charge into the centre of the square that it acts with the same force on the other charges but in the opposite direction. Since the charge in the centre of the square has to attract the other charges, it has to be negative. The value of this charge can be obtained from Coulomb's law.
• b) Solution: Charge q Value
We will calculate the value of the forces from Coulomb's law: \[F\,=\, k \, \frac{Q^2}{r^2}\] Only the forces acting on charge Q[3] are drawn in the picture. The others are analogous. Charges Q[2] and Q[4] are at distance a from charge Q[3] and therefore they act on the charge with the same force: \[F_2\,=\,F_4\,=\, k \, \frac{Q^2}{a^2}\] The distance between Q[1] and Q[3] is \(a\sqrt{2}\), which is the diagonal of a square with side length a. \[F_1\,=\, k \, \frac{Q^2}{\left( \sqrt{2} a \right)^2}\] \[F_1\,=\, k \, \frac{Q^2}{2 a^2}\] We will obtain the resultant of the acting forces by adding up their vectors. We will add up forces \(\vec{F}_2\) and \(\vec{F}_4\) first. The resultant of these two forces is the diagonal of a square with side length F[2], and its value will thus be: \[F_{24}\,=\,\sqrt{2}F_2\,.\] \[F_{24}\,=\,\sqrt{2}k \, \frac{Q^2}{a^2}\] The resulting force \(\vec{F}\) can be obtained by adding up the vectors of forces \(\vec{F}_{24}\) and \(\vec{F}_1\) whose values have been expressed. \[F\,=\,F_{24}+F_1\] \[F\,=\,\sqrt{2}k \, \frac{Q^2}{a^2}\,+\,k \, \frac{Q^2}{2 a^2}\] \[F\,=\,k \, \frac{Q^2}{a^2}\,\left(\sqrt{2}\,+\, \frac{1}{2 }\right)\tag{*}\] We have to place a charge at the centre of the square so that it will act with a force of the same value \(\vec{F'}\) but in the opposite direction. We can find its value from Coulomb's law. \[F'\,=\, k \,\frac{Q q}{\left(\frac{a\sqrt{2}}{2}\right)^2}\,,\] where \(\frac{a\sqrt{2}}{2}\) is half of the diagonal.
We will adjust the denominator of the fraction: \[F'\,=\, k \,\frac{Q q}{\frac{2 a^2}{4}}\,=\, k \,\frac{Q q}{\frac{a^2}{2}}\] \[F'\,=\, k \,\frac{2Q q}{a^2}\tag{**}\] Now we will compare the expressions of F and F' from relations (*) and (**): \[k \,\frac{2Q q}{a^2}\,=\, \,k \, \frac{Q^2}{a^2}\,\left(\sqrt{2}\,+\, \frac{1}{2 }\right)\] We will adjust and then express the searched charge q: \[2q \,=\,Q\,\left(\sqrt{2}\,+\, \frac{1}{2 }\right) \] \[2q \,=\,Q\,\left(\frac{2\sqrt{2}\,+\, 1}{2 }\right) \] \[ q\,=\,Q\,\left(\frac{2\sqrt{2}\,+\, 1}{4 }\right) \]
• Answer
The electric intensity of the field is zero at the centre. The potential at the centre of the square is equal to: \[\varphi_c\,=\,4 \varphi\,=\, \frac{1}{4 \pi \varepsilon_0}\, \frac{4\sqrt{2}Q}{a}\,.\] We have to place a negative charge of value \[ q\,=\,Q\,\left(\frac{2\sqrt{2}\,+\, 1}{4 }\right) \] at the centre of the square so that the resultant force acting on the charges in the corners is zero.
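As a numerical sanity check (my addition, not part of the original solution), we can place the four charges at the corners and the negative charge q = Q(2√2 + 1)/4 at the centre, then verify that the net Coulomb force on a corner charge vanishes:

```python
import math

def net_force_on_corner(a=1.0, Q=1.0, k=1.0):
    """Net force on the corner charge at (0, 0): charges Q at the other
    three corners, charge -q at the centre with q = Q*(2*sqrt(2)+1)/4."""
    q = Q * (2 * math.sqrt(2) + 1) / 4
    sources = [((a, 0.0), Q), ((0.0, a), Q), ((a, a), Q), ((a / 2, a / 2), -q)]
    fx = fy = 0.0
    for (x, y), qs in sources:
        dx, dy = -x, -y                  # vector from the source to (0, 0)
        r = math.hypot(dx, dy)
        f = k * Q * qs / r ** 2          # signed Coulomb magnitude
        fx += f * dx / r                 # repulsive for qs > 0, attractive for qs < 0
        fy += f * dy / r
    return fx, fy
```

The returned components are zero to floating-point precision, independently of a, Q and k.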
Taylor Series Matlab | Examples of Taylor Series Matlab
Updated March 13, 2023
Introduction to Taylor Series Matlab
The following article provides an outline for Taylor Series Matlab. A Taylor series expands a function into an infinite sum of terms, expressing the function in terms of its derivatives at a point. Taylor series of e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5! + …
As can be seen in the above example, we have expanded the function 'e^x' into a polynomial of infinite degree. This finds application in modern-day physics to simplify complex calculations by breaking them down into a simple sum of terms. In Matlab, we use the 'taylor' function to get the Taylor series of any function.
A = taylor (Fx, p)
A = taylor (Fx, p, a)
A = taylor (Fx, Name, Value)
• taylor (Fx, p) will compute the Taylor series of the input function with respect to the variable p. By default, the series is computed up to the 5th order, around the point p = 0.
• taylor (Fx, p, a) will compute the Taylor series of the input function around the point p = a.
• taylor (Fx, Name, Value) can be used to put additional conditions, which can be specified using pair arguments (Name, Value).
Examples of Taylor Series Matlab
Let us now see the code to calculate a Taylor series in Matlab using the 'taylor (Fx, p)' function:
Example #1
In this example, we will use a simple cos function and expand it using the taylor function. We will follow these 2 steps:
• Create the cos function in Matlab.
• Calculate the Taylor series using the 'taylor' function.
syms x
[Initializing the variable 'x']
A = taylor (5* cos (x))
[Creating the cos function and passing it as an input to the taylor function]
[Please note that, since we did not pass any value for the point 'p', the Taylor series will be computed at the point p = 0, by default]
[Mathematically, the Taylor series of 5*cos(x) is (5*x^4)/24 – (5*x^2)/2 + 5]
syms x
A = taylor (5* cos (x))
Let us now see how the code for the Taylor series looks in Matlab if we want to use a point of our choice. For this purpose, we will be using the 'taylor (Fx, p, a)' function and will pass a value for the point.
Example #2
In this example, we will use a function of sine and will find the Taylor series at the point x = 1. We will follow these 2 steps:
• Create the function of sine in Matlab.
• Calculate the Taylor series using the 'taylor (Fx, p, a)' function and pass '1' as the point.
syms x
[Initializing the variable 'x']
A = taylor(4*sin(x), x, 1)
[Creating the polynomial function of sine and passing it as an input to the taylor function]
[Please note that we have passed 'x', '1' as arguments, which represent the point x = 1]
[Mathematically, the Taylor series of 4*sin (x) at the point x = 1 is 4*sin(1) – 2*sin(1)*(x – 1)^2 + (sin(1)*(x – 1)^4)/6 + 4*cos(1)*(x – 1) – (2*cos(1)*(x – 1)^3)/3 + (cos(1)*(x – 1)^5)/30]
syms x
A = taylor(4*sin(x), x, 1)
Let us now see how the code for the Taylor series looks if we need to add more conditions using the 'Name', 'Value' pair arguments. For this purpose, we use the 'taylor (Fx, Name, Value)' function.
Example #3
In this example, we will use a combination of the cos and sine functions from the examples above, but will compute the Taylor series only up to the 2nd order. We will follow these 2 steps:
• Create the function of cos and sine.
• Calculate the Taylor series using the function 'taylor (Fx, Name, Value)'.
syms x
[Initializing the variable 'x']
A = taylor ((5*cos (x) + 4*sin (x)), x, 1, 'Order', 2)
[Creating the polynomial function of cos & sine and passing it as an input to the taylor function]
[Please note that we have passed 'Order', '2' as name-value pair arguments]
[Mathematically, the Taylor series of 5*cos (x) + 4*sin (x) at the point x = 1 and order 2 is 5*cos (1) + 4*sin (1) + (x – 1) * (4*cos (1) – 5*sin (1))]
syms x
A = taylor ((5*cos (x) + 4*sin (x)), x, 1, 'Order', 2)
A function's Taylor series can be found in Matlab using the taylor function. By default, the Taylor series is computed at the point x = 0. If we need the Taylor series about some other point, we can use taylor (Fx, p, a).
This is a guide to Taylor Series Matlab. Here we discuss the introduction to Taylor Series Matlab along with examples.
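As a cross-check outside MATLAB, the default result taylor(5*cos(x)) = (5*x^4)/24 - (5*x^2)/2 + 5 is just the truncated Maclaurin series of cosine scaled by 5. A Python sketch of that truncation (my illustration, not MATLAB code):

```python
import math

def taylor_5cos(x, terms=3):
    """Truncated Maclaurin series of 5*cos(x): 5 * sum (-1)^k x^(2k) / (2k)!."""
    return 5 * sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
                   for k in range(terms))

# terms=3 keeps exactly the degree-4 polynomial that MATLAB prints by default.
print(taylor_5cos(0.0))  # → 5.0
```

Near x = 0 the truncated polynomial agrees with 5*cos(x) to a few parts in a million, which is what makes the expansion useful.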
How to Write an IF Formula for Pass/Fail in Excel
In this tutorial, you will learn how to write an IF formula to determine the pass/fail of test scores completely. When processing test scores in Excel, we might sometimes need to determine their pass/fail marks. We usually base this on the minimum passing score we have for those test scores. If they equal or exceed our minimum passing score, then we consider them to pass. Otherwise, if they are below our minimum passing score, then we consider them to fail. If we know how to write the correct IF formula to determine that, we can get our "pass/fail" marks easily. Want to know the way to write the IF formula to determine the pass/fail of a test score in excel? Read this tutorial until its last part!
Table of Contents: How to Write an IF Formula for Pass/Fail in Excel
The way to write an IF formula to determine pass/fail in excel is quite simple. If you understand the way to use number operators in a condition, you should be able to write it. Here is the general way to write the IF formula for this pass/fail mark purpose.
= IF ( test_score >= minimum_passing_score , "Pass" , "Fail" )
In this writing, we tell our IF to compare our test score with our minimum passing requirement of the test score. If it is equal to or more than the minimum passing score, then our IF will produce "Pass". If otherwise, our IF will produce "Fail". To better understand the formula writing concept, here is its implementation example in excel. As you can see there, we can get the "pass/fail" mark of our test scores from the IF formula writing. Just compare the test score with the minimum passing score and input the "Pass" and "Fail" marks inside the IF.
After we finish writing the IF for the first test score, just copy the formula to mark all the test scores. By doing that, we will immediately get all the pass/fail marks that we need.
How to Write an IF Formula for Pass/Retest/Fail in Excel
What if we also have the option to retest for a certain test score range? If that is the case, then we just need to write two IFs and nest one inside the other, with the retest mark condition in the inner IF. Here is the general writing form of the nested IFs for this purpose in excel.
= IF ( test_score >= minimum_passing_score , "Pass" , IF ( test_score >= minimum_retest_score , "Retest" , "Fail" ) )
In this IF writing, we input the logic condition for passing in the first IF. This IF will catch all the test scores that pass. Thus, we just need to compare the test score with the minimum retest score in the second IF. If the test score doesn't fulfill the requirements to pass or retest, our IF will produce the "Fail" mark. Here is the implementation example of this IF formula writing in excel. For the previous set of test scores, we have a new retest score requirement. Thus, we write nested IFs and we compare the test score with the minimum retest score in the second IF. By doing that, we can mark the test scores which need to take a retest too! After you have learned how to write an IF formula to determine the pass/fail in excel, let's do an exercise. This is so you can understand the tutorial lessons more practically. Download the exercise file and answer the questions below. Download the answer key file if you have done the exercise and want to check your answers. Link to the exercise file: Download here Answer all the questions below in the appropriate gray-colored cell according to the question number! 1. The passing requirement for the test score is 65. What is the pass/fail mark of each test score? 2. We add the chance to retest for the students and the retest requirement is 55. What is the pass/retest/fail mark of each test score?
3. We increase the passing and retest requirement for the test score to 75 and 60. What is the pass/retest/fail mark of each test score? Link to the answer key file: Download here Additional Note You can also use IF if you want to grade test scores into letters (A, B, C, D, E, etc). Just change the "Pass", "Retest", and "Fail" marks to those letters. Add more IFs into your nested IFs if you need them.
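The nested-IF logic maps directly onto ordinary conditionals. Here is a Python sketch of the same decision (my illustration, not an Excel formula; the thresholds are the tutorial's 65/55 example):

```python
def mark(score, pass_min=65, retest_min=55):
    """Mirror of IF(score>=pass_min,"Pass",IF(score>=retest_min,"Retest","Fail"))."""
    if score >= pass_min:
        return "Pass"
    if score >= retest_min:
        return "Retest"
    return "Fail"

print([mark(s) for s in (80, 60, 40)])  # → ['Pass', 'Retest', 'Fail']
```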
Re: How to convert integer number to Q1.15 format?
2024-01-04 04:17 AM
I am using the STM32G491 controller for my application. I am trying to use the FMAC module for an FIR filter implementation. I am referring to the STM32G4-Peripheral-Filter_Math_Accelerator_FMAC.pdf FMAC document. I have also referred to the FMAC example sample code https://github.com/STMicroelectronics/STM32CubeG4/tree/master/Projects/
As the FMAC module accepts its input buffer in Q1.15 format, I have used the formula below to convert an int16_t value to Q1.15 format:

Number = 100;
// First, normalization
float v1_f = 100 / 32767.0;
// Second, convert the normalized result into Q format
v1_q15 = (int16_t)(v1_f * 0x8000 + 0.5);

I placed this v1_q15 value as the buffer element. Is the above formula for Q1.15 conversion correct? I am not getting the expected output which I have calculated from the FIR filter formula. Can anyone help me convert an integer value to Q1.15 format and a Q1.15 output back to an integer? Any help would be appreciated. Thanks in advance.
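For reference, a common Q1.15 convention is to scale by 2^15 = 32768 rather than 32767, round, and saturate. The sketch below is my own illustration of that round-trip (not taken from the ST documentation):

```python
def float_to_q15(x):
    """Float in [-1.0, 1.0) -> Q1.15 integer, with rounding and saturation."""
    n = int(round(x * 32768))          # scale by 2^15 and round to nearest
    return max(-32768, min(32767, n)) # saturate into the int16 range

def q15_to_float(n):
    """Q1.15 integer -> float."""
    return n / 32768.0

raw = 100                              # e.g. a small int16 sample
print(float_to_q15(raw / 32768.0))     # → 100 (exact round trip)
```

With the 32768 scale factor, an int16 sample normalized by 32768.0 converts back to exactly the same integer, which avoids the off-by-one drift that mixing 32767 and 0x8000 can introduce.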
1. Year 7 2. Topic: Number Year 7 Sub-topics: AC9M7N01 describe the relationship between perfect square numbers and square roots, and use squares (3) AC9M7N02 represent natural numbers as products of powers of prime numbers using exponent notation (20) AC9M7N03 represent natural numbers in expanded notation using place value and powers of 10 (1) AC9M7N04 find equivalent representations of rational numbers and represent rational numbers on a num (16) AC9M7N05 round decimals to a given accuracy appropriate to the context and use appropriate rounding (2) AC9M7N06 use the 4 operations with positive rational numbers including fractions, decimals and perce (18) AC9M7N07 compare, order and solve problems involving addition and subtraction of integers (7) AC9M7N08 recognise, represent and solve problems involving ratios (6) AC9M7N09 use mathematical modelling to solve practical problems involving rational numbers and perce (7)
The D5-brane effective action and superpotential in N = 1 compactifications
The four-dimensional effective action for D5-branes in generic compact Calabi-Yau orientifolds is computed by performing a Kaluza-Klein reduction. The N = 1 Kähler potential, the superpotential, the gauge-kinetic coupling function and the D-terms are derived in terms of the geometric data of the internal space and of the two-cycle wrapped by the D5-brane. In particular, we obtain the D5-brane and flux superpotential by integrating out four-dimensional three-forms which couple via the Chern-Simons action. Also the infinitesimal complex structure deformations of the two-cycle induced by the deformations of the ambient space contribute to the F-terms. The superpotential can be expressed in terms of relative periods depending on both the open and closed moduli. To analyze this dependence we blow up along the two-cycle and obtain a rigid divisor in an auxiliary compact threefold with negative first Chern class. The variation of the mixed Hodge structure on this blown-up geometry is equivalent to the original deformation problem and can be analyzed by Picard-Fuchs equations. We exemplify the blow-up procedure for a non-compact Calabi-Yau threefold given by the canonical bundle over del Pezzo surfaces.
#18 With Stableford to the handicap - Kai's Golf Guide
Today it counts. Planned and registered with the club is an EDS round, an "Extra Day Score", i.e. a round that counts towards the handicap. Two or more players go out on the course and count each other's strokes - in strict compliance with the rules, of course. Ultimately, I want to improve my personal handicap. Strictly speaking, with a -51 on my club card I don't even have a handicap, because values of -37 and beyond are called a "club handicap". In order not to make it too complicated, however, I will continue to use the term handicap here.
Fair competition
By the way, the purpose of this handicap is not only to be able to say with a simple number how well or badly one is able to play golf. This number can also be used to allow players to compete against each other who differ significantly in terms of their ability on the course. A conversion system makes it possible. The keyword is "Stableford", which ultimately - after the round - can also be used to calculate the new value for the club card. An example helps to understand:
The holes of a golf course are classified as par 3, par 4 or par 5. This number tells how many strokes a player with handicap 0 needs to hole the ball - for example, 3 strokes on a par 3 hole.
2 points for par
If the player achieves this par, i.e. needs exactly as many strokes as specified, he receives 2 Stableford points for the hole. For every stroke of deviation there is one point more or less: whoever holes a par 3 with four strokes gets only 1 Stableford point, whoever needs only 2 strokes gets 3 Stableford points. What may sound a bit complicated at first is actually quite simple. In short: the fewer strokes per hole, the more Stableford points. The par of the hole sets the baseline.
36 points as a target
So if our example golfer with a handicap of 0 plays par on all 18 holes during the round, he will receive two Stableford points per hole and will accordingly have earned 36 points at the end. He has thus confirmed his handicap.
Let's take a rookie as another example: his handicap is -54. That's where every player starts after having passed the golf license exam. Of course, such a rookie will not be able to reach the par requirement per hole, but will need more strokes.
More strokes allowed per hole
The handicap itself says how many strokes he may play to confirm it: the value indicates how many strokes the player may need beyond the course standard, that is, the pars of all holes added together. This standard is usually 72 for 18 holes. A player with a handicap of -54 may need 54 strokes more than the standard, which is 72 + 54 = 126 strokes. To find out how many extra strokes that is per hole, divide the handicap value by 18, the number of holes of a complete round. In our example, 54 : 18 = 3, so the player may play 3 strokes more per hole on top of the par requirement to reach his handicap. A par 3 hole in 6 strokes therefore corresponds to his ability, his personal par. Likewise a par 4 in 7 strokes and, of course, a par 5 in 8 strokes.
Direct comparison possible
In order to compare oneself directly with any other player, and to calculate after the round whether one has earned one's handicap or was perhaps better or worse, there are the Stableford points. As an example, take a competition between a player with handicap -18 and one with handicap -36. The better player may note one stroke more per hole to play his personal par; the worse player has two strokes extra.
If both players play their personal par on a par 3 hole, they each receive 2 Stableford points and are therefore equally good in relative terms, although they differ in the absolute number of strokes played. If both have scored 36 Stableford points at the end of the round, they have not only been equally good - taking into account their respective personal handicaps - they have also confirmed their handicaps.
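The per-hole rule described above - 2 points for playing one's personal par, plus or minus one per stroke of deviation - can be sketched as a small function (my illustration of the article's simplified rule, with the standard Stableford floor of zero points, and assuming the handicap divides evenly across the 18 holes):

```python
def stableford_points(par, strokes, extra_per_hole=0):
    """Points for one hole: 2 for playing one's personal par
    (par + extra_per_hole), plus/minus one per stroke of difference,
    floored at 0 as in standard Stableford scoring."""
    personal_par = par + extra_per_hole
    return max(0, 2 + personal_par - strokes)

# A handicap -54 rookie gets 54 / 18 = 3 extra strokes per hole:
print(stableford_points(3, 6, extra_per_hole=3))  # → 2 (personal par met)
```

Summing the function over 18 holes gives the round total; 36 points confirms the handicap.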
Students learn the fundamentals of geometry, measurements, and algebra as they venture into more complex mathematical concepts. Our lively math teachers break down the skills into easy-to-understand lessons in on-demand videos. Lessons are available for students who are struggling with a particular skill or who just need a refresher. • Math teachers teach students geometry, measurements, money, and algebra for the fifth-grade curriculum. • Students have instant help to learn the math skills needed for their assignments. • Teachers show students how to use the scratchpad to analyze and breakdown math problems.
Creative Subdivision

Let's start with his Creative Subdivision system. A subdivision divides a rhythmic unit into smaller units. Ronan points out that we tend to always use the same types of subdivision in our compositions and improvisations. With his Creative Subdivision system, he found a systematic way to divide a unit into smaller units, resulting in subdivisions we would not normally think of so quickly. He sees every rhythm as either 3 or 2, or a combination of 3 and/or 2. A dotted quarter note, for example, is a 3 (3 eighth notes), and a half note is a 2 (2 quarter notes). If we combine this, for example for a group of 5, we have either 2+3 or 3+2. And a group of 7 could be 3+2+2, 2+3+2 or 2+2+3. Using this principle, it is very easy to come up with interesting rhythmic combinations over a basic 4/4. Example: 3+2+2+2+3+3+2+3+3+2+2+2+3. Example: grouping with a combination of 3 & 2: 5+5+5+7+7+3. In this example we have 8th notes in groups of 5 and 7 (with one group of 3 to complete the 4-bar cycle). If you put notes to these types of rhythms, you can come up with very original rhythmic melodies. This method is open to almost infinite variation. By practicing this principle, you can begin to open up your rhythmic playing and create fresher melodic shapes as an improviser and composer.
{"url":"https://www.nabouclaerhout.com/creative-subdivision","timestamp":"2024-11-05T03:58:01Z","content_type":"text/html","content_length":"725462","record_id":"<urn:uuid:493664a7-965b-4408-a56d-c25e399d6764>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00486.warc.gz"}
ANSYS Mechanical APDL

MFront version 3.1 provides an interface for the ANSYS Mechanical APDL (MAPDL) finite element solver. This interface is fairly feature complete:
• Isotropic and orthotropic materials are supported.
• Small and finite strain behaviours are supported.
It shall be pointed out that the ANSYS solver has a long history. In particular, the design choices made for the USERMAT interface were meant to allow users to easily write finite-strain behaviours in rate form. MFront strives to provide behaviours that can be used "just like" other USERMAT subroutines, but the USERMAT interface has a number of shortcomings compared with other interfaces:
• External state variables are not supported.
• Internal state variables can not be initialized to values different from zero.
• There is no standard way of defining the orthotropic axes for orthotropic behaviours.
• There is no way of controlling the increase/decrease of the time step.
The choice made to define the orthotropic axes for orthotropic behaviours is detailed below.
There are also cases of misuse of the generated libraries that can not be prevented by MFront. The most important ones are the following:
• The modelling hypothesis is not passed to the mechanical behaviours. As a consequence, there is no way of distinguishing between the plane strain hypothesis and the axisymmetrical hypothesis, which is mandatory for consistent orthotropic axes management. Therefore, the user has to explicitly call the implementation of the behaviour corresponding to the modelling hypothesis he wants to use, but there is no way to ensure that this choice is consistent with the modelling hypothesis made for the mechanical computation.
• Whether the computation takes finite strain transformations into account (non-linear geometric effects in the ANSYS wording) is not known to the behaviour, so some misuses of MFront behaviours can't be prevented.
For example, small strain behaviours can be used under finite strain transformations using the built-in Jaumann corotational framework, which is only valid for isotropic behaviours. Using this framework with orthotropic behaviours can't be prevented.
• The user must respect the order of definition of the material properties, some of them being implicitly defined by MFront. For example, we made the choice to pass the orthotropic axes as material properties. Those are automatically inserted at the beginning of the material properties list, after the thermo-elastic properties, if the latter are requested. As a consequence, the number of material properties depends on the modelling hypothesis. The user is thus strongly advised to look at the input file example generated by MFront to have a consistent definition of the material properties.

Current status

The ansys interface is still in its early stage of development.

How to use MFront behaviours in ANSYS

When compiling mechanical behaviours with the ansys interface, MFront generates:
• Shared libraries containing one implementation of the considered behaviours per supported modelling hypothesis.
• Examples of input files in the ansys directory.
• A copy of a generic usermat.cpp file for the ANSYS solver.
Those various files and their usage are now described.
• The MFront libraries are only generated once; only the generic files need to be recompiled at each run. This is very handy since compiling the MFront libraries can be time-consuming. Those libraries can be shared between computations and/or between users when placed in a shared folder.
• The fact that MFront generates one implementation per modelling hypothesis allows the distinction between the plane strain hypothesis and the axisymmetrical hypothesis. This is mandatory to consistently handle orthotropy.
• The name of the library can be changed by renaming it after the compilation or by using the @Library keyword.
Note on libraries locations

As explained above, MFront libraries will be loaded at runtime. This means that the libraries must be found by the dynamic loader of the operating system.

Under Linux

Under Linux, the search paths for dynamic libraries are specified using the LD_LIBRARY_PATH environment variable. This variable defines a colon-separated set of directories where libraries are searched for first, before the standard set of directories. Depending on the configuration of the system, the current directory may be considered by default.

Under Windows

Under Windows, the dynamic libraries are searched:
• in the current directory
• in the directories listed in the PATH environment variable. This variable defines a semicolon-separated set of directories.

Generated input files

Here is an extract of the generated input file for a MFront behaviour named Plasticity for the plane strain modelling hypothesis for the ANSYS solver:
/com, Example for the 'PlaneStrain' modelling hypothesis
/com, List of material properties
/com, -YoungModulus
/com, -PoissonRatio
/com, -H
/com, -s0
tb,user,<mat_id>,<number of temperatures>,4
/com, you shall now declare your material properties
/com, using the tbtemp and tbdata instructions.
/com, See the Ansys "USER Material Subroutine" guide for details.
/com, Declaration of state variables

The generic usermat.cpp file

When generating the sources of an MFront file with the ansys interface, a subdirectory called ansys is created. This subdirectory contains the following files:
• Examples of usage of the MFront generated behaviours.
• usermat.cpp: the generic usermat.cpp used to generate the usermatLib library, described below.
• test-usermat.cxx: the source of an executable which can be used to test an mfront-usermat.dat file. Its usage is described below.
• CMakeLists.txt: a cmake project file that can be used to generate the usermatLib library and the test-usermat executable.
The generic usermat.cpp file provided by MFront has the following role:
• Reading a file called mfront-usermat.dat that has to be present in the current directory. This file relates an ANSYS material identifier to a MFront behaviour.
• Loading the shared library and the behaviour implementations generated by MFront.
• Calling the MFront behaviour. Warnings and errors are written in a file called mfront-usermat.log.

Compilation of the usermatLib library and the test-usermat executable

This paragraph describes how to use cmake to compile the usermatLib library and the test-usermat executable.

Using Visual Studio 2017

In the ansys subdirectory generated by MFront, type: The usermatLib.dll must be copied in a directory pointed to by the ANS_USER_PATH variable.

Under Unix

In the ansys subdirectory generated by MFront, type: The libusermatLib.so must be copied in a directory pointed to by the ANS_USER_PATH variable.

Detailed procedure

This example shows how to compile a file called ImplicitNorton.mfront and the usermatLib library under Linux.
1. Compile the MFront behaviour and generate the ansys subdirectory:
$ mfront --obuild --interface=ansys ImplicitNorton.mfront
Treating target : all
The following library has been built :
- libAnsysBehaviour.so : ImplicitNorton_axis ImplicitNorton_pstress ImplicitNorton_pstrain ImplicitNorton_3D
2. Change the current directory.
3. Configure the cmake project:
$ cmake .
-DCMAKE_BUILD_TYPE=Release
-- The C compiler identification is GNU 6.3.0
-- The CXX compiler identification is GNU 6.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning (dev) in CMakeLists.txt: No cmake_minimum_required command is present. A line of code such as cmake_minimum_required(VERSION 3.7) should be added at the top of the file. The version specified may be lower if you wish to support older CMake versions for this project. For more information run "cmake --help-policy CMP0000". This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/ansys
4. Compile the usermatLib library and the test-usermat executable:
$ cmake --build .
Scanning dependencies of target test-usermat
[ 25%] Building CXX object CMakeFiles/test-usermat.dir/test-usermat.o
[ 50%] Linking CXX executable test-usermat
[ 50%] Built target test-usermat
Scanning dependencies of target usermatLib
[ 75%] Building CXX object CMakeFiles/usermatLib.dir/usermat.o
[100%] Linking CXX shared library libusermatLib.so
[100%] Built target usermatLib
At the end of those four steps, the ansys subdirectory contains:
• The libusermatLib.so library.
• The test-usermat executable.

Relating material identifiers and MFront behaviours

The mfront-usermat.dat file gives a list of commands aimed at associating an ANSYS material identifier to a MFront behaviour. The syntax used closely follows ANSYS APDL syntax.
Only two commands are actually supported:
• /com, which is used to introduce a comment line.
• tb,mfront, which is used to associate an ANSYS material identifier to a MFront behaviour.
Here is an example of such a file:
/com, Associate the material id 2 to the Chaboche_3D behaviour
/com, implemented in the Zircaloy4Behaviours shared library.
/com, Associate the material id 3 to the Creep_3D behaviour
/com, implemented in the Zircaloy4Behaviours shared library.
New commands will eventually be introduced as needed to circumvent the various shortcomings of the USERMAT interface, notably:
• Adding the ability to give initial values to state variables.
• Adding the ability to override the default values of parameters.
• etc.

Library name

For portability reasons, the library name can be stripped of the standard prefix (lib under UNIX) and of the file extension (.dll under Windows, .dylib under Mac OS, .so under Linux). The usermat function delivered with MFront will try every combination until a suitable one is found.

Testing an mfront-usermat.dat file using test-usermat

The declarations in the mfront-usermat.dat file can lead to the following errors:
• The material libraries can not be found.
• The TFEL libraries can not be found.
• The functions implementing the behaviours can not be found.
Although all warnings and errors are redirected to the mfront-usermat.log file, those errors can be painful to analyse in Ansys because the generic usermat function closes Ansys on error. Thus, a simple executable called test-usermat is also provided. This executable reads the mfront-usermat.dat file in the current directory. If it exits normally, then everything is ok.

Main features of the ANSYS interface

The ANSYS solver provides the USERMAT interface. In this case, the behaviour shall compute:
• The evolution of the state variables.
• The value of the Cauchy stress at the end of the time step.
• The consistent tangent operator.
The definition of the consistent tangent operator is given below. For finite strain analyses, small strain behaviours can be written in rate form. The behaviour is then integrated in the Jaumann framework, which is only suitable for isotropic behaviours.

Supported behaviours

Isotropic and orthotropic behaviours are both supported. Small and finite strain behaviours are supported.

Modelling hypotheses

The following modelling hypotheses are supported:
• tridimensional (3D)
• plane strain (pstrain)
• plane stress (including shell elements) (pstress)
• axisymmetrical (axis)
The generalised plane strain hypothesis is currently not supported.

Orthotropic behaviours

By nature, orthotropic behaviours are expressed in a preferential frame defined by the material orthotropic axes. There is no standard way of defining the orthotropic axes in ANSYS. We chose to add the definition of those axes to the list of material properties. \(2\) additional material properties are required in \(2D\), and \(6\) additional material properties in \(3D\):
• In \(2D\), the two additional material properties are the two components of the vector defining the first axis of the material frame. This vector is thus supposed to be contained in the \(2D\) plane. The second axis is also assumed to be contained in this plane and perpendicular to the first one. With those assumptions, this vector can be deduced from the first one and does not have to be defined. The third direction is required to follow the out-of-plane direction.
• In \(3D\), the six additional material properties define two vectors giving respectively the first and second axes of the material frame. The components of the first vector are given before the components of the second vector. Those vectors are supposed to be orthogonal. The third vector, associated with the third axis, is implicitly given by the cross product of the two first axes and does not have to be defined.
The user shall use the input file example generated by MFront to see the relative positions of the material properties associated with the definition of the orthotropic axes. Those definitions are only meaningful if the directions of orthotropy are constant.

Finite strain strategies

Engineers are used to writing behaviours based on an additive split of strains, as usual in small strain behaviours. Different strategies exist to:
• write finite strain behaviours that preserve this property.
• guarantee some desirable properties such as energetic consistency and objectivity.
Through the @ANSYSFiniteStrainStrategy keyword, the user can select one of the various finite strain strategies supported by MFront, which are described in this paragraph. The usage of the @ANSYSFiniteStrainStrategy keyword is mostly deprecated since MFront 3.1: see the @StrainMeasure keyword.

The Native finite strain strategy

Among them is the Native finite strain strategy, which relies on built-in ANSYS facilities to integrate behaviours written in rate form. The Native finite strain strategy will use the Jaumann rate. This strategy has some theoretical drawbacks (hypoelasticity, restriction to isotropic behaviours, etc.) and is not portable from one code to another.

Recommended finite strain strategies

Two other finite strain strategies are available in MFront for the ansys interface (see the @ANSYSFiniteStrainStrategy keyword):
• FiniteRotationSmallStrain: this finite strain strategy is fully described in [1], [2].
• MieheApelLambrechtLogarithmicStrain: this finite strain strategy is fully described in [3]. This finite strain strategy is yet to be implemented.
Those two strategies use lagrangian tensors, which automatically ensures the objectivity of the behaviour.
Each of these two strategies defines an energetically conjugate pair of strain and stress tensors:
• For the FiniteRotationSmallStrain case, the strain tensor is the Green-Lagrange strain and the conjugate stress is the second Piola-Kirchhoff stress.
• For the MieheApelLambrechtLogarithmicStrain case, the strain tensor is the lagrangian Hencky strain tensor, i.e. the logarithm of the stretch tensor.
The first strategy is suited for reusing behaviours that were identified under the small strain assumptions in a finite rotation context. The usage of this behaviour is still limited to the small strain assumptions.
The second strategy is particularly suited for metals, as incompressible flows are characterized by a deviatoric logarithmic strain tensor, which is the exact transposition of the property used in small strain behaviours to handle plastic incompressibility. This means that all valid constitutive equations for small strain behaviours can be automatically reused in finite strain analyses. This does not mean that a behaviour identified under the small strain assumptions can be directly used in a finite strain analysis: the identification would not be consistent.
Those two finite strain strategies are fairly portable and are available (natively or via MFront) in Cast3M, Code_Aster, Europlexus, Abaqus/Standard, Abaqus/Explicit, Zebulon, etc.

Consistent tangent operator for finite strain behaviours

The "ANSYS User Subroutines Reference Guide" indicates that the tangent moduli required by ANSYS, \(\underline{\underline{\mathbf{C}}}^{MJ}\), is closely related to \(\underline{\underline{\mathbf{C}}}^{\tau\,J}\), the moduli associated with the Jaumann rate of the Kirchhoff stress:
\[ J\,\underline{\underline{\mathbf{C}}}^{MJ}=\underline{\underline{\mathbf{C}}}^{\tau\,J} \]
where \(J\) is the determinant of the deformation gradient \({\underset{\tilde{}}{\mathbf{F}}}\).
By definition, \(\underline{\underline{\mathbf{C}}}^{\tau\,J}\) satisfies:
\[ \overset{\circ}{\underline{\tau}}^{J}=\underline{\underline{\mathbf{C}}}^{\tau\,J}\,\colon\,\underline{D} \]
where \(\underline{D}\) is the rate of deformation.
Most of the information reported here is extracted from the book of Belytschko ([4]).

Relations between tangent operators

Relation with the moduli associated with the Truesdell rate of the Cauchy stress \(\underline{\underline{\mathbf{C}}}^{\sigma\,T}\)

The moduli associated with the Truesdell rate of the Cauchy stress \(\underline{\underline{\mathbf{C}}}^{\sigma\,T}\) is related to \(\underline{\underline{\mathbf{C}}}^{\tau\,J}\) by the following relation:
\[ \underline{\underline{\mathbf{C}}}^{\tau\,J}=J\,\left(\underline{\underline{\mathbf{C}}}^{\sigma\,T}+\underline{\underline{\mathbf{C}}}^{\prime}\right)\quad\text{with}\quad\underline{\underline{\mathbf{C}}}^{\prime}\,\colon\,\underline{D}=\underline{\sigma}\,.\,\underline{D}+\underline{D}\,.\,\underline{\sigma} \]
Thus:
\[ \underline{\underline{\mathbf{C}}}^{MJ}=\underline{\underline{\mathbf{C}}}^{\sigma\,T}+\underline{\underline{\mathbf{C}}}^{\prime} \]

Relation with the spatial moduli \(\underline{\underline{\mathbf{C}}}^{s}\)

The spatial moduli \(\underline{\underline{\mathbf{C}}}^{s}\) is associated with the Lie derivative of the Kirchhoff stress \(\mathcal{L}\underline{\tau}\), which is also called the convected rate or the Oldroyd rate:
\[ \mathcal{L}\underline{\tau}=\underline{\underline{\mathbf{C}}}^{s}\,\colon\,\underline{D} \]
The spatial moduli is related to the moduli associated with the Truesdell rate of the Cauchy stress \(\underline{\underline{\mathbf{C}}}^{\sigma\,T}\):
\[ \underline{\underline{\mathbf{C}}}^{\sigma\,T}=J^{-1}\,\underline{\underline{\mathbf{C}}}^{s} \]
Thus, we have:
\[ \underline{\underline{\mathbf{C}}}^{MJ}=J^{-1}\,\underline{\underline{\mathbf{C}}}^{s}+\underline{\underline{\mathbf{C}}}^{\prime}=J^{-1}\left(\underline{\underline{\mathbf{C}}}^{s}+\underline{\underline{\mathbf{C}}}^{\prime\prime}\right)\quad\text{with}\quad\underline{\underline{\mathbf{C}}}^{\prime\prime}\,\colon\,\underline{D}=\underline{\tau}\,.\,\underline{D}+\underline{D}\,.\,\underline{\tau} \]

Relation with \(\underline{\underline{\mathbf{C}}}^{\mathrm{SE}}\)

The moduli \(\underline{\underline{\mathbf{C}}}^{\mathrm{SE}}\) relates the rate of the second Piola-Kirchhoff stress \(\underline{S}\) and the Green-Lagrange strain rate \(\underline{\dot{\varepsilon}}^{\mathrm{GL}}\):
\[ \underline{\dot{S}}=\underline{\underline{\mathbf{C}}}^{\mathrm{SE}}\,\colon\,\underline{\dot{\varepsilon}}^{\mathrm{GL}} \]
As the Lie derivative of the Kirchhoff stress \(\mathcal{L}\underline{\tau}\) is the push-forward of the second Piola-Kirchhoff stress rate \(\underline{\dot{S}}\) and the rate of deformation \(\underline{D}\) is the push-forward of the Green-Lagrange strain rate \(\underline{\dot{\varepsilon}}^{\mathrm{GL}}\), \(\underline{\underline{\mathbf{C}}}^{s}\) is the push-forward of \(\underline{\underline{\mathbf{C}}}^{\mathrm{SE}}\):
\[ C^{s}_{ijkl}=F_{im}\,F_{jn}\,F_{kp}\,F_{lq}\,C^{\mathrm{SE}}_{mnpq} \]

Link with \(\displaystyle\frac{\partial \underline{\sigma}}{\partial {\underset{\tilde{}}{\mathbf{F}}}}\)

For all variations of the deformation gradient \(\delta\,{\underset{\tilde{}}{\mathbf{F}}}\), the Jaumann rate of the Kirchhoff stress satisfies:
\[ \underline{\underline{\mathbf{C}}}^{\tau\,J}\,\colon\,\delta\,\underline{D}=\delta\,\underline{\tau}-\delta\,{\underset{\tilde{}}{\mathbf{W}}}\,.\,\underline{\tau}+\underline{\tau}\,.\,\delta\,{\underset{\tilde{}}{\mathbf{W}}} \]
with:
• \(\delta\,{\underset{\tilde{}}{\mathbf{L}}}=\delta\,{\underset{\tilde{}}{\mathbf{F}}}\,.\,{\underset{\tilde{}}{\mathbf{F}}}^{-1}\)
• \(\delta\,{\underset{\tilde{}}{\mathbf{W}}}=\frac{1}{2}\left(\delta\,{\underset{\tilde{}}{\mathbf{L}}}-\delta\,{\underset{\tilde{}}{\mathbf{L}}}^{\mathrm{T}}\right)\)
• \(\delta\,{\underset{\tilde{}}{\mathbf{D}}}=\frac{1}{2}\left(\delta\,{\underset{\tilde{}}{\mathbf{L}}}+\delta\,{\underset{\tilde{}}{\mathbf{L}}}^{\mathrm{T}}\right)\)
Thus, the derivative of the Kirchhoff stress with respect to the deformation gradient is:
\[ \frac{\partial \underline{\tau}}{\partial {\underset{\tilde{}}{\mathbf{F}}}}=\underline{\underline{\mathbf{C}}}^{\tau\,J}\,\colon\,\frac{\partial \underline{D}}{\partial {\underset{\tilde{}}{\mathbf{F}}}}+\left(\partial^{\star}_{r}\left(\underline{\tau}\right)-\partial^{\star}_{l}\left(\underline{\tau}\right)\right)\,.\,\frac{\partial {\underset{\tilde{}}{\mathbf{W}}}}{\partial {\underset{\tilde{}}{\mathbf{F}}}} \]
with \(\delta\,\underline{D}=\frac{\partial \underline{D}}{\partial {\underset{\tilde{}}{\mathbf{F}}}}\,\colon\,\delta\,{\underset{\tilde{}}{\mathbf{F}}}\) and \(\delta\,{\underset{\tilde{}}{\mathbf{W}}}=\frac{\partial {\underset{\tilde{}}{\mathbf{W}}}}{\partial {\underset{\tilde{}}{\mathbf{F}}}}\,\colon\,\delta\,{\underset{\tilde{}}{\mathbf{F}}}\).
The derivative of the Cauchy stress follows:
\[ \frac{\partial \underline{\sigma}}{\partial {\underset{\tilde{}}{\mathbf{F}}}}=\frac{1}{J}\left(\frac{\partial \underline{\tau}}{\partial {\underset{\tilde{}}{\mathbf{F}}}}-\underline{\sigma}\,\otimes\,\frac{\partial J}{\partial {\underset{\tilde{}}{\mathbf{F}}}}\right) \]

Numerical approximation of \(\underline{\underline{\mathbf{C}}}^{MJ}\)

Following [5], a numerical approximation of \(\underline{\underline{\mathbf{C}}}^{MJ}\) is given by:
\[ \underline{\underline{\mathbf{C}}}^{MJ}_{ijkl}\approx\frac{1}{J\,\varepsilon}\left(\underline{\tau}_{ij}\left({\underset{\tilde{}}{\mathbf{F}}}+{\underset{\tilde{}}{\mathbf{\delta F}}}^{kl}\right)-\underline{\tau}_{ij}\left({\underset{\tilde{}}{\mathbf{F}}}\right)\right) \]
where the perturbation \({\underset{\tilde{}}{\mathbf{\delta F}}}^{kl}\) is given by:
\[ {\underset{\tilde{}}{\mathbf{\delta F}}}^{kl}=\frac{\varepsilon}{2}\left(\vec{e}_{k}\otimes\vec{e}_{l}+\vec{e}_{l}\otimes\vec{e}_{k}\right)\,.\,{\underset{\tilde{}}{\mathbf{F}}} \]
Such a perturbation leads to the following rate of deformation:
\[ \delta\,\underline{D}=\left({\underset{\tilde{}}{\mathbf{\delta F}}}^{kl}\right)\,.\,{\underset{\tilde{}}{\mathbf{F}}}^{-1}=\frac{\varepsilon}{2}\left(\vec{e}_{k}\otimes\vec{e}_{l}+\vec{e}_{l}\otimes\vec{e}_{k}\right) \]
The spin rate \(\delta\,{\underset{\tilde{}}{\mathbf{W}}}\) associated with \({\underset{\tilde{}}{\mathbf{\delta F}}}^{kl}\) is null.

Relation with other moduli

The previous relation can be used to relate to other moduli. See the section describing the isotropic case for details.

[1] Doghri, Issam. Mechanics of Deformable Solids: Linear, Nonlinear, Analytical, and Computational Aspects. Berlin; New York: Springer, 2000. ISBN 9783540669609.
[2] R5.03.22 révision 11536: Loi de comportement en grandes rotations et petites déformations. Référence du Code Aster. EDF-R&D/AMA, 2013.
[3] Miehe, C., Apel, N. and Lambrecht, M. Anisotropic additive plasticity in the logarithmic strain space: modular kinematic formulation and implementation based on incremental minimization principles for standard materials. Computer Methods in Applied Mechanics and Engineering, November 2002, Vol. 191, no. 47-48, p. 5383-5425.
[4] Belytschko, Ted. Nonlinear Finite Elements for Continua and Structures. Chichester; New York: Wiley-Blackwell, 2000. ISBN 9780471987741.
[5] Sun, Wei, Chaikof, Elliot L. and Levenston, Marc E. Numerical approximation of tangent moduli for finite element implementations of nonlinear hyperelastic material models. Journal of Biomechanical Engineering, December 2008, Vol. 130, no. 6, p. 061003.
{"url":"https://thelfer.github.io/tfel/web/ansys.html","timestamp":"2024-11-02T12:23:10Z","content_type":"text/html","content_length":"61342","record_id":"<urn:uuid:c89274ad-804b-4d56-9d16-fc10af3e2105>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00846.warc.gz"}
Beyond Decimal

2701 = 37×73 and 2701 + 1072 = 3773. This supreme symmetry is so rare that there is only one known instance in the entire decimal (base 10) number system! How about in other number systems? We commonly use the base-10 (decimal) number system, where there are 10 possible digits, 0-9. But there are other ways to represent numbers. For example, in the binary (base-2) number system, only two digits are used: 0 and 1. In the base-3 system, only 3 digits are used: 0, 1, 2. In the base-4 system, only 4 digits are used: 0, 1, 2, 3. Here is a conversion table between them:
│ Conversion Table │
│ Decimal (Base 10) │ 0 │ 1 │ 2 │ 3 │ 4 │ 5 │ 6 │ 7 │ 8 │ 9 │ 10 │ 11 │
│ Binary (Base 2) │ 0 │ 1 │ 10 │ 11 │ 100 │ 101 │ 110 │ 111 │ 1000 │ 1001 │ 1010 │ 1011 │
│ Base 3 │ 0 │ 1 │ 2 │ 10 │ 11 │ 12 │ 20 │ 21 │ 22 │ 100 │ 101 │ 102 │
│ Base 4 │ 0 │ 1 │ 2 │ 3 │ 10 │ 11 │ 12 │ 13 │ 20 │ 21 │ 22 │ 23 │
Let us see if there are other solutions of the supreme symmetry in other number systems. A scan of 200 million integers for bases 2 through 10 reveals one single other solution! In the base 7 number system: 52 × 25 = 2023 and 2023 + 3202 = 5225. Is this solution also related to Genesis 1:1? Incredibly, if we convert these numbers back to decimal (base 10) to see what they are: 52 (base 7) = 37, 25 (base 7) = 19, 2023 (base 7) = 703. This solution converts back to none other than: 703 = 19×37. The Gematria of the last two words in the verse (which contain 7 letters). בְּרֵאשִׁ֖ית בָּרָ֣א אֱלֹהִ֑ים אֵ֥ת הַשָּׁמַ֖יִם וְאֵ֥ת הָאָֽרֶץ Interestingly, the letter expansion (Milui) of these two words gives 2701. Letter Expansion of ואת הארץ ץ ר א ה ת א ו צדי דלת יוד ריש יוד שין אלף למד פי הי יוד תאו אלף ואו אלף למד פי ואו אלף ואו These two share other symmetries such as:
• Small Gematria (Ketana) of 1st verse = 82; Ordinal Gematria of וְאֵ֥ת הָאָֽרֶץ = 82
• Number of words of 1st verse = 7; number of letters of וְאֵ֥ת הָאָֽרֶץ = 7
It seems this is because וְאֵ֥ת הָאָֽרֶץ corresponds to the Sefirah of Malchut which includes everything (Aderet Eliyahu).
Beyond Bases 2-10

What about beyond base 10? Mathematicians and programmers commonly use up to base 36, the digits corresponding to 0-9 and A-Z. A scan of 8 million primes in bases 11-36 reveals an extra 3 solutions! (more primes cause my PC to overflow and crash during the manipulations)
In base 22: 3JCFF × FFCJ3 = 2H26969A21 and 2H26969A21 + 12A96962H2 = 3JCFFFFCJ3
In base 28: 9J × J9 = 6J03 and 6J03 + 30J6 = 9JJ9
In base 31: 4CTG × GTC4 = 2CR2E202 and 2CR2E202 + 202E2RC2 = 4CTGGTC4
However, of these 3 solutions, only the base 28 one is a true solution. For in 2701 = 37×73 there is an extra symmetry: 37 is the midpoint between 1 and 73.
Center Point: 37 is the exact center point of 73.
If we include this criterion, then only the base 28 solution fits (just like in 703 = 19×37, where 19 is the midpoint between 1 and 37). We can see this more clearly if we convert it back to decimal. What is the base 28 solution in decimal (base 10)? 9J = 271, J9 = 541, 6J03 = 146611. Thus, in decimal (base 10) this number is none other than: 146611 = 271×541. 541 is the Gematria of Yisrael! Gematria of Yisrael ל א ר ש י And 271 is the midpoint between 1 and 541.
Center Point: 271 is the exact center point of 541.
As before, the Milui of Yisrael = 2701 - and the first verse has 28 letters. The Milui (letter expansion) of Yisrael = 2701. Letter Expansion of Yisrael ל א ר ש י למד מם דלת אלף למד פה ריש יוד שין שין יוד נון יוד ואו דלת Rashi's commentary on the Torah expounds the word "Beresheit" from the Midrash: "G-d created the world for the sake of the Torah, which is called (Prov. 8:22): 'the beginning of His way', and for the sake of Yisrael who are called (Jer. 2:3) 'the first of His produce'..." As for 146611, perhaps this is a hint to 611 = Torah and 146 = 73×2 (Wisdom×Beit; both 73 and Beit refer to wisdom, as before). It all goes back to the Triangle of 28 Letters. As we saw last time, the central "Yud" is "wisdom", then the inner triangle is Torah (611) and the outer triangle is the "Peh" (mouth), which corresponds to the Sefirah of Malchut.
As known, Yisrael likewise corresponds to "Malchut". To summarize the criteria of these "divine numbers":
1. Two DISTINCT prime factors which are mirror reflections - ex. 37×73 = 2701. Thus, 2701 is a semiprime number (not 9×9 = 81, since those factors are neither distinct nor prime; the mirror reflection is also trivial).
2. Prime factors revealed when the number is added to its own mirror reflection - ex. 2701 + 1072 = 3773.
3. Prime factors which are midpoints of each other - ex. 271 is the exact midpoint between 1 and 541.
Thus, in number bases 2-36, only 3 known solutions exist, and they are all related to Genesis 1:1! A scan beyond base 36 shows more solutions starting from base 70, some with very important numbers related to Genesis 1:1. As to the significance of semiprime numbers and midpoint symmetry, I asked Oren Evran, an Israeli expert in Genesis 1:1. He replied: A semiprime number has a meaning of a very robust construction. They are in one sense as important as primes. For the semiprimes are so robust that they are very strong building blocks, but at the same time they are made of other blocks. Therefore if those other blocks (primes) are center points of one another, the semiprime "building" you are looking at has a special shape of a figurate triangular number, like 2701, and it has more meanings to it way beyond this. Geometrically speaking, center-point factors that are primes would always be odd (not even), and therefore it will have a center point of its own, and the triangle would have inner inclusions just like 2701. 2701 is also unique in that the center-point triangle = T37, as you know, etc. Anyway, I think that this should be the default for the simple reason stated above, and to continue explaining why, I can go on for a long time.
Namely, 37 is the 12th prime and 73 is the 21st prime. Notice that the PRIME INDEXES 12/21 are also MIRROR REFLECTIVE - just like 37/73. Interestingly, 12/21 are also the very first pair of numbers with mirror symmetry. If we add this criterion, then there is NO OTHER SOLUTION in all number bases besides 2701 in base 10. (We will see much more on the prime indexes in the section "Central Prime".)

(As we will see, 2701 is also the ONLY triangle number in all of math whose inner triangle (703) is equal to its sum of thousands, namely 2+701=703. This has special meaning and works out only in base 10.)

Notice also that 2701 and its prime factors all have digit sum = 10. The Hebrew Gematria system is base 10 (it goes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30...100, 200...).

As to why 10: we saw earlier that "Bereisheit" means "with wisdom", for the world was created with divine wisdom, namely the Sefirah of Chochmah (wisdom), which is the 10th Sefirah (including "daat"). It is the highest Sefirah, the one that encapsulates all 10 as a chain reaction; therefore you start with "1" and immediately jump to 10. See the super-symmetry of the gematrias and center points leading to 10, i.e. 73, 37, 19, 10 - 1, as we explained in the chapter "wisdom=Yud".

We humans are not able to grasp something that has no shape or location; therefore the initial wisdom men can grasp is called a point. It is the basic structure of everything, and it is a location in space, so both are sophisticatedly created at the same time. The letter that represents the point is called YUD, which is 10 and ultimately 1 in small gematria. This is the POINT of creation from which the 10 statements come, in the form of the 10 sefirot (heard from Oren Evron).

More on 10

As before, G-d's Name begins with the 10th letter - Yud. The Maharal writes (Derech Chaim, Avot 3:13) that 10 represents a collective (klal). Thus, G-d's Name starts with the tenth letter Yud since G-d is collective in that He includes everything.
Likewise, all numbers after 10 are just an extension of this original collective of 10 (ibid). The Yud is also the smallest letter, reflecting the sublime non-physical nature of G-d. Yud is also the prime Hebrew letter in that all other Hebrew letters can be made from the "Yud" (Maharal). Last but not least, the letter "Yud" means "hand", and of course, we normally have ten fingers and ten toes. But does "Yud" mean 10 because we have 10 fingers and toes, or do we have 10 fingers and toes because "Yud" means 10? You decide.

Yisrael = 541 = the 100th prime, and 100 = 10×10 - the perfection of 10!

Footnotes

• [1] See https://math.stackexchange.com/questions/3567551/

• [2] A scan beyond base 36 shows the next solution is in base-70. Converting to decimal to see what it is: interestingly, 3313 is the Star of David figurate formed from combining two 70th triangles (2485 dots each). The 70th triangle is a very important creation number. It is the inner triangle within the 73rd triangle (of 2701 dots) and holds the vessels of creation (see section "6 Days and Sabbath", footnote 1). 5489641 is the 3313th triangle number, and it is the base for the Star of David 7319521 = 913×8017 (913 is the Gematria of Bereisheit). This needs further investigation.

The next solution found is in base-73. Converting to decimal to see what it is: 3601 is a very important creation number. It is the Star of David figurate formed from combining two triangles of size 73 (2701 dots each), and 6485401 is the 3601st triangle number. Thus, in the wisdom base, i.e. base-73, the solution is the Star of David formed from the merging of two triangles of 2701 dots each.

The next solution is in base-91. In decimal it is 5581×2791. Yes, 5581 is a Star of David number and also the 737th prime. This needs more investigation.

I scanned up to base-136 and there were more solutions, all of which were Star of David numbers.
We are seeing a pattern: every solution is a triangle number, and the larger number is a Star of David number made by combining two triangles of that particular number base (ex. the base-73 solution is the Star of David formed from two 73rd triangles). Thus, the solutions to the supreme symmetry are all directly related to, and point to, the Star of David. Hence it seems there is a simple mathematical reason for these symmetries. But nevertheless, the fact that these solutions are related to Genesis 1:1 is significant.

So is there anything unique in 37×73, the base-10 solution? Yes! Both primes 37 and 73 are themselves solutions to the supreme symmetry, in base-7 and base-10 respectively, and both are Star of David figurate numbers.

Another uniqueness: 2701 = 37×73 is a star of stars figurate number. It is unique and shiny out of all these special star numbers - a star of stars whose stars follow each other and are center points of each other (see section "Geometric Properties").

Another uniqueness: 2701 collapses to 2+701=703 in sum of thousands, and 703 is the inner triangle of 2701. This has special significance and works out only in base-10. Thus 2701 isn't just "rare"; there are no other numbers like this at all in ANY number system.

(Keep in mind, figurate numbers are geometric representations of numbers using dots. Thus, they are a way of representing numbers independently of any numerical system or base - an absolute way to represent numbers.)
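The three criteria summarized earlier can be checked mechanically with integer arithmetic. A minimal Python sketch (digit alphabet 0-9, A-Z, as in the scan above), verifying the base-10 solution 2701 = 37×73 and the base-28 solution 146611 = 271×541:

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base(n, b):
    """Render a non-negative integer in base b using digits 0-9, A-Z."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, b)
        out.append(DIGITS[r])
    return "".join(reversed(out))

def from_base(s, b):
    """Parse a base-b digit string back to an integer."""
    return sum(DIGITS.index(c) * b ** i for i, c in enumerate(reversed(s)))

def mirror_sum(n, b):
    """Criterion 2: add n to its base-b digit reversal."""
    return n + from_base(to_base(n, b)[::-1], b)

def is_midpoint(p, q):
    """Criterion 3: the smaller prime is the midpoint between 1 and the larger."""
    return p == (1 + q) // 2

# Base-10 solution: 2701 = 37 x 73
assert mirror_sum(2701, 10) == 3773        # "37" and "73" concatenated
assert is_midpoint(37, 73)

# Base-28 solution: 146611 = 271 x 541, i.e. 9J x J9 = 6J03
assert to_base(146611, 28) == "6J03"
assert to_base(mirror_sum(146611, 28), 28) == "9JJ9"
assert is_midpoint(271, 541)
```

Running the asserts confirms the arithmetic claims made in the article for both bases; primality of the factors can be checked separately.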
The Application and Order of Discounts

Within account-level billing in Kangarootime, discounts are helpful tools for accurately billing clients who pay reduced tuition. Accounts can be layered with various discounts, leading to the question: How do I know if I am discounting the account correctly? This article aims to answer this question by explaining how each discount will affect an account, whom the discount(s) will cover, and in what order the discounts will appear.

Flat Rate Billing

When applying a dollar discount to an account, one should expect that the discount value will be applied to every child. The itemized bill will show the tuition and the discount transactions as separate line items. In the example below, the $10.00 Discount was put on the account, and the discount appears for both children within their itemized bill.

If one applies a dollar discount to one or more children on an account instead of the overall account, only the child(ren) assigned the discount will receive it when the billing cycle runs. In the example below, the Early Bird Discount was applied to only Raine Rivers, and she was the only child on the account to receive the discount.

When one applies a percentage discount to an account, one should expect the total discount to be applied to all children associated with the account. The itemized bill will show the tuition and the discount transactions as separate line items. The Vacation Discount below is shown as a separate line item and will display the dollar equivalent of the discount percentage (ex. 50% of $80.00).

In a case where you apply a percentage discount to one or more children on an account, only the child(ren) assigned the discount will be given the discount when the billing cycle runs. The Sibling Discount applied below to Kennedy Warner was the only discount applied on this account. The Sibling Discount ended up as $15.00 (5% of $300.00).
• Dollar and Percentage Discount: When one applies both a dollar discount and a percentage discount to an account, one should expect all children associated with the account to receive both discounts when the billing cycle runs. It is important to note that the dollar discount will be taken off first, and then the percentage discount will be taken out of the remaining charge. Each discount will be shown as a separate line item. In the example below, the Employee Discount of $100.00 was taken off both children on the account, and the Sibling Discount is shown as $5.00 (10% of $50.00) and $15.00 (10% of $150.00).

If you apply a dollar and percentage discount to one or more children on an account, the discounts will only affect the children chosen. The dollar discount will be taken off first, and then the percentage discount will be taken out of the remaining tuition charge. Each discount will be shown as a separate line item. The $10.00 Discount and the 5% Discount are both applied to Robert Smith, so he is the only child that receives the discounts. The $10.00 Discount was taken off first; the 5% Discount is represented by the $13.50 amount (5% of $270.00).

Session-Based Billing

When a dollar discount is applied to an account, one should expect the total discount to be applied to each session fee for all children. The discount will appear as a separate line item on the itemized bill. In the example below, the Military Discount was taken off all children on the McGee account.

When you apply a dollar discount to one or more children on an account, one should expect the discount to be applied to each session fee for the selected children. The discount will appear as a separate line item on the itemized bill. In the example below, the Early Bird Discount was applied only to Tom Dolly, so only he will receive the discount for each billed session.
If one applies a percentage discount to an account, one should expect the total discount to be applied to all children associated with the account. Each session fee will be discounted a dollar amount equal to the discounted percentage. The discount will appear as a separate line item per session on the itemized bill. In the example below, all three children received the 25% discount, which is represented as $50.00 (25% of $200.00).

When applying a percentage discount to one or more children on an account, a dollar discount equating to the total percentage discounted will be taken off each session. The discount(s) will appear as separate line items on an itemized bill. In the example below, the Second Sibling Discount was applied to Oria Magooda, the only child to receive the 10% discount for each billed session. The discount is $6.00 (10% of $60.00).

• Dollar and Percentage Discount: When one applies both a dollar discount and a percentage discount to an account, one should expect all children associated with the account to receive both discounts on each session when the billing cycle runs. The dollar discount will be taken off first, and then the percentage discount will be taken out of the remaining tuition charge. Each discount will be shown as a separate line item. In the example below, the $10.00 Discount and the 5% Discount were both applied to the account, so all children on the account receive both discounts. The $10.00 Discount was taken off first, and then $1.55 was discounted from the remaining total (5% of $31.00).

If one applies a dollar discount and a percentage discount to one or more children on an account, the discounts will only affect the children chosen. The dollar discount will be taken off first, and then the percentage discount will be taken out of the remaining tuition charge for each session. Each discount will be shown as a separate line item.
In the example below, the $10.00 Discount and the 5% Discount were applied to Christopher Moltisanti, who was the only child on the account to receive both discounts. The $10.00 Discount was taken off first, and then $1.55 was discounted (5% of $31.00).

Please contact helpdesk@kangarootime.com with any questions.
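The ordering rule described above (the dollar discount comes off first, then the percentage is taken from the remainder) can be sketched in a few lines of Python. This is an illustration of the arithmetic only, not Kangarootime's actual implementation; the figures match the examples in this article:

```python
def apply_discounts(charge, dollar_off=0.0, percent_off=0.0):
    """Apply a dollar discount first, then a percentage discount on the
    remaining charge. Returns (net, dollar_amount, percent_amount)."""
    after_dollar = charge - dollar_off
    percent_amount = round(after_dollar * percent_off / 100.0, 2)
    return after_dollar - percent_amount, dollar_off, percent_amount

# Flat-rate example: $280.00 tuition, $10.00 off, then 5% of $270.00 = $13.50
net, dollar_amount, percent_amount = apply_discounts(280.00, 10.00, 5.00)

# Session example: $41.00 session fee, $10.00 off, then 5% of $31.00 = $1.55
session_net, _, session_pct = apply_discounts(41.00, 10.00, 5.00)
```

Reversing the order (percentage first) would yield a different net, which is why the order matters when layering discounts.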
Techno Press

Volume 4, Number 3, May 1996

One of the major decisions in the mathematical modelling of a tubular structure is whether to include the effect of transverse shear stress and rotary inertia in the vibration of members. During the past three decades, problems of vibration of tubular structures have been considered by several authors, and special attention has been devoted to the Timoshenko theory. There have also been considerable efforts to apply the method of spectral analysis to the vibration of structures with rectangular-section beams. The purpose of this paper is to compare the results of the spectrally formulated finite element analyses for the Timoshenko theory with those derived from the conventional finite element method for a tubular structure. The spectrally formulated finite element starts at the same starting point as the conventional finite element formulation; however, it works in the frequency domain. Using a computer program, the proposed formulation has been extended to derive the dynamic response of a tubular structure under an impact load.

Key Words: spectral, finite element, vibration, tubular structure

Horr AM, UNIV WOLLONGONG, DEPT CIVIL & MIN ENGN, WOLLONGONG, NSW 2522, AUSTRALIA

An important class of problems in the field of geotechnical engineering may be analyzed with the aid of a simple integro-differential equation. Behavior of "rigid" piles (say, concrete piles), "deformable" piles (say, gravel piles), pile groups, pile-raft foundations, heavily reinforced earth, flow within circular silos, and down drag on cylindrical structures (for example, the crusher unit of a mineral processing complex) are the types of situations that can be handled by this type of equation.
The equation under consideration has the form

$$\frac{\partial w(r,z)}{\partial z} + f(z)\int_0^z g(\xi)\left(\frac{\partial^2 w(r,\xi)}{\partial r^2} + \frac{1}{r}\frac{\partial w(r,\xi)}{\partial r}\right)d\xi + h(r,z) = 0$$

where w(r, z) is the vertical displacement of a soil particle expressed as a function of the polar cylindrical space coordinates (r, z), and the symbols f, g and h represent soil properties and the loading conditions. The merit of the analysis is its simplicity (both in concept and in application) and the ease with which it can be expressed in a computer code. In the present paper the analysis is applied to investigate the behavior of a single rigid pile to bedrock. The emphasis, however, is placed on the equation, the numerical technique used in its evaluation, and validation of the technique, hereafter called the ID technique, against a formal program, CRISP, which uses the FEM.

Key Words: soil structure, deep foundation, deformation, negative friction, numerical technique, FEM (CRISP)

Poorooshasb HB, CONCORDIA UNIV, DEPT CIVIL ENGN, MONTREAL, PQ H3G 1M8, CANADA; SAGA UNIV, DEPT CIVIL ENGN, SAGA 840, JAPAN

This paper presents a model of a layered, delaminated composite beam. The beam is modelled by beam finite elements, and the delamination is modelled by additional boundary conditions. In the present study, the laminated beam contains only one delaminated region through the thickness direction, which extends to the full width of the beam. It is also assumed that the delamination is open. The influence of the delamination length and position upon changes in the bending natural frequencies of the composite laminated cantilever beam is investigated.
Key Words: natural frequencies, composite beams, delamination, finite element method

Krawczuk M, POLISH ACAD SCI, INST FLUID FLOW MACHINERY, UL GEN J FISZERA 14, PL-80952 GDANSK, POLAND

In North America, a large number of old concrete slab-on-steel girder bridges, classified as noncomposite, were built without any mechanical connections. The stabilizing effect of slab/girder interface contact and friction on the steel girders was totally neglected in practice. Experimental results indicate that this effect can lead to a significant underestimation of the load-carrying capacity of these bridges. In this paper, the two major components - concrete slab and steel girders - are treated as two deformable bodies in contact. A finite element procedure considering the effect of friction and contact for the analysis of concrete slab-on-steel girder bridges is presented. The interface friction phenomenon and finite element formulation are described using an updated configuration under large deformations to account for the influence of any possible kinematic motions on the interface boundary conditions. The constitutive model for frictional contact is considered slip-work-dependent to account for the irreversible nature of friction forces and the degradation of interface shear resistance. The proposed procedure is further validated by experimental bridge models.

Key Words: composite action, contact, degradation, finite element, friction

Lin JJ, FORINTEK CANADA CORP, NATL WOOD PROD RES INST, VANCOUVER, BC, CANADA; UNIV LAVAL, DEPT CIVIL ENGN, QUEBEC CITY, PQ G1K 7P4, CANADA

A finite element model of a beam element with flexible connections is used to investigate the effect of randomness in the stiffness values on the modal properties of the structural system. The linear behavior of the connections is described by a set of random fixity factors. The element mass and stiffness matrices are functions of these random parameters.
The associated eigenvalue problem leads to eigenvalues and eigenvectors which are also random variables. A second-order perturbation technique is used for the solution of this random eigenproblem. Closed-form expressions for the 1st and 2nd order derivatives of the element matrices with respect to the fixity factors are presented. The mean and the variance of the eigenvalues and vibration modes are obtained in terms of these derivatives. Two numerical examples are presented and the results are validated against those obtained by a Monte-Carlo simulation. It is found that an almost linear statistical relation exists between the eigenproperties and the stiffness of the connections.

Key Words: modal analysis, random eigenvalue problem, flexible connections, finite element modeling

Matheu EE, VIRGINIA TECH, DEPT ENGN SCI & MECH, BLACKSBURG, VA 24061; UNIV PUERTO RICO, DEPT GEN ENGN, MAYAGUEZ, PR 00681

The dynamic behavior of an Euler beam with multiple point constraints traversed by a moving concentrated mass, a "moving-force moving-mass" problem, is analyzed and compared with the corresponding simplified "moving-force" problem. The equation of motion in matrix form is formulated using a Lagrangian approach and the assumed mode method. The effects of the presence of intermediate point constraints in reducing the fluctuation of the contact force between the mass and the beam, and the possible separation of the mass from the beam, are investigated. The equation of motion and the numerical results are expressed in dimensionless form. The numerical results presented are therefore applicable to a large combination of system parameters.

Key Words: moving mass, contact force, multiple supports, beam

Lee HP, NATL UNIV SINGAPORE, DEPT MECH & PROD ENGN, 10 KENT RIDGE CRESCENT, SINGAPORE 0511, SINGAPORE

A mathematical model of a cable as a system of interacting wires, with interwire friction taken into account, is presented in this paper.
The effect of friction forces and interwire slip on the mechanical properties of tension cables is investigated. It is shown that the slip occurs due to the twisting and bending deformations of wires, and that it occurs in the form of micro-slips at the contact patches and macro-slips along the cable. The latter slipping starts near the terminals and propagates towards the middle of the cable with the increase of tension, and its propagation is proportional to the load. As a result of dry friction, the load-elongation characteristics of the cable become quadratic. The energy losses during extension are shown to be proportional to the cube of the load and in inverse proportion to the friction force, a result qualitatively similar to that for lap joints. The presented examples show that the model is in qualitative agreement with the known experimental data.

Key Words: cable mechanics, energy losses, interwire friction

Huang XL, WITTKE WASTE EQUIPMENT, MEDICINE HAT, AB T1C 1K6, CANADA; UNIV CALGARY, DEPT MECH ENGN, CALGARY, AB T2N 1N4, CANADA

A model of a cable comprising interacting wires with dry friction forces at the interfaces is subjected to quasi-static cyclic loading. The first cycle of this process, comprising axial loading, unloading and reloading, is investigated analytically. Explicit load-elongation relationships are obtained for all of the above phases of the cycle. An expression for the hysteretic losses is also obtained in explicit form. It is shown that the losses are proportional to the third power of the amplitude of the oscillating axial force, and in inverse proportion to the interwire friction forces. The results obtained are used to introduce a model of the cable as a solid rod with equivalent stiffness and damping properties of the rod material. It is shown that the stiffness of the equivalent rod is weakly nonlinear, whereas the viscous damping coefficient is proportional to the amplitude of the oscillation.
Some numerical results illustrating the effect of cable parameters on the losses are given. Key Words cable mechanics, cyclic loading, dry friction, hysteresis Huang XL, WITTKE WASTE EQUIPMENT,MEDICINE HAT,AB T1C 1K6,CANADA UNIV CALGARY,DEPT MECH ENGN,CALGARY,AB T2N 1N4,CANADA
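The random-eigenvalue abstract above contrasts a perturbation solution with Monte-Carlo validation. A minimal single-degree-of-freedom sketch of that comparison (the stiffness statistics here are hypothetical, and only a first-order perturbation is used, not the paper's second-order scheme):

```python
import math
import random
import statistics

def natural_frequency(k, m):
    """Undamped natural frequency of a 1-DOF oscillator, w = sqrt(k/m)."""
    return math.sqrt(k / m)

m = 2.0                          # mass (hypothetical value)
k_mean, k_std = 1000.0, 50.0     # random stiffness statistics (hypothetical)

# First-order perturbation about the mean stiffness:
# w(k) ~ w(k0) + (dw/dk)|k0 * (k - k0), so Var[w] ~ (dw/dk)^2 * Var[k].
w0 = natural_frequency(k_mean, m)
dw_dk = 1.0 / (2.0 * math.sqrt(k_mean * m))
w_var_pert = (dw_dk * k_std) ** 2

# Monte-Carlo reference, as in the abstract's validation step.
rng = random.Random(42)
samples = [natural_frequency(rng.gauss(k_mean, k_std), m)
           for _ in range(200_000)]
w_mean_mc = statistics.fmean(samples)
w_var_mc = statistics.pvariance(samples)
```

With a 5% coefficient of variation in stiffness, the linearized variance agrees with the Monte-Carlo estimate to within a few percent, mirroring the "almost linear statistical relation" the abstract reports.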
Body rotation and moving in facing direction

23-04-2007 23:44:30

Hi all, I'm a complete newbie to NxOgre, however given the fact that I need to have a small demo done by next week I'm having to learn pretty quick. Anyway, my problem is as follows: I have a tank body which I want the player to be able to rotate (at a steady rate, as opposed to adding torque) and then move forward in the direction which the tank is facing. I found trying to rotate the tank to be extremely difficult, but settled upon something that worked:

//rotate half a degree each frame, since Quaternion(sqrt(0.5), 0, 0, sqrt(0.5)) is a 90° rotation around the Z axis
quat = p1Tank->getGlobalOrientation() * Ogre::Quaternion(sqrt(0.5), 0, 0, sqrt(0.5/90)/2);

I'm pretty sure there's a better way to do this, but it's all I have that works for now. My next question is how can I get the tank to move at a steady rate (again, not like adding forces) along a flat plane in the direction it is facing? I've tried many different things, such as multiplying the X and Z position vector by the X and Z rotation values (obviously my 3D mathematics is rusty!), but none are giving the desired result. Any help is much appreciated, and if you could make it as clear as possible that would also be helpful
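The standard recipe here is angle-axis form: a yaw of angle a about Z is the quaternion (cos(a/2), 0, 0, sin(a/2)), and the facing direction is the body's local forward axis rotated by its current orientation. A language-neutral sketch in plain Python (the choice of +X as the local forward axis is an assumption; in Ogre you would use the Quaternion angle-axis constructor and the quaternion-times-vector operator instead):

```python
import math

def yaw_quaternion(degrees):
    """Quaternion (w, x, y, z) for a rotation about the Z axis."""
    half = math.radians(degrees) / 2.0
    return (math.cos(half), 0.0, 0.0, math.sin(half))

def quat_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate_vector(q, v):
    """Rotate v by q via q * (0, v) * conj(q)."""
    w, x, y, z = q
    qv = quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))
    return qv[1:]

# Rotate half a degree per frame, then move along the facing direction.
orientation = (1.0, 0.0, 0.0, 0.0)          # identity: facing local +X
for _ in range(180):                        # 180 frames * 0.5 deg = 90 deg
    orientation = quat_mul(orientation, yaw_quaternion(0.5))
forward = rotate_vector(orientation, (1.0, 0.0, 0.0))
# position += speed * dt * forward each frame gives steady movement
```

After the 90° of accumulated half-degree steps, `forward` points along +Y, so moving the body by `speed * dt * forward` each frame gives steady, force-free motion in the facing direction.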
Data Science Interview Questions and Answers

Data Science General Interview Questions - 1

1. How would you create a taxonomy to identify key customer trends in unstructured data?

The best way to approach this question is to mention that it is good to check with the business owner and understand their objectives before categorizing the data. Having done this, it is always good to follow an iterative approach: pull new data samples, improve the model accordingly, and validate it for accuracy by soliciting feedback from the stakeholders of the business. This helps ensure that your model is producing actionable results and improving over time.

2. Python or R - Which one would you prefer for text analytics?

The best possible answer for this would be Python, because it has the Pandas library, which provides easy-to-use data structures and high-performance data analysis tools.

3. Which technique is used to predict categorical responses?

The classification technique is widely used in data mining for classifying data sets.

4. What are Recommender Systems?

A subclass of information filtering systems that are meant to predict the preferences or ratings that a user would give to a product. Recommender systems are widely used in movies, news, research articles, products, social tags, music, etc.

5. What is power analysis?

An experimental design technique for determining the sample size required to detect an effect of a given size.

6. What is Collaborative filtering?

The process of filtering used by most recommender systems to find patterns or information by collaborating viewpoints, various data sources and multiple agents.

7. What is Machine Learning?

The simplest way to answer this question is: we give the data and equation to the machine and ask the machine to look at the data and identify the coefficient values in the equation. For example, for the linear regression y=mx+c, we give the data for the variables x and y, and the machine learns the values of m and c from the data.

8.
During analysis, how do you treat missing values?

The extent of the missing values is identified after identifying the variables with missing values. If any patterns are identified, the analyst has to concentrate on them, as they could lead to interesting and meaningful business insights. If no patterns are identified, then the missing values can be substituted with mean or median values (imputation) or they can simply be ignored. There are various factors to be considered when answering this question:

• Understand the problem statement and the data, then give the answer. Assigning a default value, which can be the mean, minimum or maximum value - getting into the data is important.
• If it is a categorical variable, the missing value is assigned a default value.
• If you have a distribution of data coming in, for a normal distribution give the mean value.
• Should we even treat missing values is another important point to consider. If 80% of the values for a variable are missing, then you can answer that you would drop the variable instead of treating the missing values.

9. How can outlier values be treated?

Outlier values can be identified by using univariate or any other graphical analysis method. If the number of outlier values is small, they can be assessed individually, but for a large number of outliers the values can be substituted with either the 99th or the 1st percentile values. Not all extreme values are outlier values. The most common ways to treat outlier values:

• To change the value and bring it within a range.
• To just remove the value.

10. What is the goal of A/B Testing?

It is a statistical hypothesis test for a randomized experiment with two variables, A and B. The goal of A/B Testing is to identify any changes to a web page that maximize or increase an outcome of interest. An example of this could be identifying the click-through rate for a banner ad.

11. Why does data cleaning play a vital role in analysis?
Cleaning data from multiple sources to transform it into a format that data analysts or data scientists can work with is a cumbersome process, because as the number of data sources increases, the time taken to clean the data increases exponentially due to the number of sources and the volume of data generated by those sources. It might take up to 80% of the total time just to clean the data, making it a critical part of the analysis task.

12. Differentiate between univariate, bivariate and multivariate analysis.

These are descriptive statistical analysis techniques which can be differentiated based on the number of variables involved at a given point in time. For example, a pie chart of sales by territory involves only one variable, and the analysis can be referred to as univariate analysis. If the analysis attempts to understand the difference between 2 variables at a time, as in a scatterplot, then it is referred to as bivariate analysis. For example, analyzing the volume of sales against spending can be considered an example of bivariate analysis. Analysis that deals with the study of more than two variables to understand the effect of the variables on the responses is referred to as multivariate analysis.

13. What do you understand by the term Normal Distribution?

Data is usually distributed in different ways, with a bias to the left or to the right, or it can all be jumbled up. However, there are chances that data is distributed around a central value without any bias to the left or right, reaching a normal distribution in the form of a bell-shaped curve. The random variables are distributed in the form of a symmetrical bell-shaped curve.

Data Science General Interview Questions - 2

14. What is Interpolation and Extrapolation?

Estimating a value between two known values in a list of values is interpolation. Extrapolation is approximating a value by extending a known set of values or facts.

15. Are expected value and mean value different?
They are not different, but the terms are used in different contexts. Mean is generally referred to when talking about a probability distribution or sample population, whereas expected value is generally referred to in a random variable context.

For sampling data: the mean value is the only value that comes from the sampling data. The expected value is the mean of all the means, i.e. the value that is built from multiple samples - the population mean.

For distributions: the mean value and expected value are the same, irrespective of the distribution, under the condition that the distribution is over the same population.

16. What is the difference between Supervised Learning and Unsupervised Learning?

If an algorithm learns from training data so that the knowledge can be applied to test data, then it is referred to as Supervised Learning. Classification is an example of Supervised Learning. If the algorithm does not learn anything beforehand because there is no response variable or any training data, then it is referred to as Unsupervised Learning. Clustering is an example of Unsupervised Learning.

17. What is an Eigenvalue and Eigenvector?

Eigenvectors are used for understanding linear transformations. In data analysis, we usually calculate the eigenvectors for a correlation or covariance matrix. Eigenvectors are the directions along which a particular linear transformation acts by flipping, compressing or stretching. An eigenvalue can be referred to as the strength of the transformation in the direction of its eigenvector, or the factor by which the compression occurs.

18. What are the various steps involved in an analytics project?

• Understand the business problem.
• Explore the data and become familiar with it.
• Prepare the data for modelling by detecting outliers, treating missing values, transforming variables, etc.
• After data preparation, start running the model, analyze the result and tweak the approach.
This is an iterative step until the best possible outcome is achieved.
• Validate the model using a new data set.
• Start implementing the model and track the results to analyze the performance of the model over a period of time.

19. How can you deal with different types of seasonality in time series modelling?

Seasonality in a time series occurs when the time series shows a repeated pattern over time. E.g., stationery sales decrease during the holiday season, air conditioner sales increase during the summer, etc. - these are a few examples of seasonality in a time series. Seasonality makes your time series non-stationary, because the average value of the variable differs across time periods. Differencing a time series is generally known as the best method of removing seasonality from it. Seasonal differencing can be defined as the numerical difference between a particular value and a value with a periodic lag (i.e. 12, if monthly seasonality is present).

20. Why does L1 regularization cause parameter sparsity whereas L2 regularization does not?

Regularization in statistics or in the field of machine learning is used to include some extra information in order to solve a problem in a better way. L1 & L2 regularizations are generally used to add constraints to optimization problems. Geometrically, the L1 constraint region is a diamond with corners, so the optimum has a high likelihood of hitting a corner, where some coefficients are exactly zero; the L2 constraint region is smooth, so it doesn't. Thus in L1 variables are penalized more as compared to L2, which results in sparsity. In other words, errors are squared in L2, so the model sees a higher error and tries to minimize that squared error.

21. What does the P-value signify about the statistical data?

The P-value is used to determine the significance of results after a hypothesis test in statistics. The P-value helps the reader draw conclusions, and it is always between 0 and 1.

• P-value > 0.05 denotes weak evidence against the null hypothesis, which means the null hypothesis cannot be rejected.
• P-value <= 0.05 denotes strong evidence against the null hypothesis, which means the null hypothesis can be rejected.
• P-value = 0.05 is the marginal value, indicating it is possible to go either way.
22. Do gradient descent methods always converge to the same point?
No, they do not, because in some cases the method reaches a local minimum or a local optimum point. You don't reach the global optimum point. It depends on the data and the starting conditions.
23. How can you iterate over a list and also retrieve element indices at the same time?
This can be done using the enumerate function, which takes every element in a sequence (such as a list) and pairs it with its index. For example, enumerate(["a", "b"]) yields (0, "a") and (1, "b").
24. Can you explain the difference between a Test Set and a Validation Set?
A validation set can be considered part of the training process, as it is used for parameter selection and to avoid overfitting of the model being built. The test set, on the other hand, is used for testing or evaluating the performance of a trained machine learning model. In simple terms, the differences can be summarized as:
• Training set: used to fit the parameters, i.e. the weights.
• Validation set: used to tune the parameters.
• Test set: used to assess the performance of the model, i.e. to evaluate its predictive power and generalization.
25. What do you understand by the statistical power of sensitivity and how do you calculate it?
Sensitivity is commonly used to validate the accuracy of a classifier (logistic regression, SVM, random forest, etc.). Sensitivity is "predicted TRUE events / total events". True events here are the events which were true and which the model also predicted as true. The calculation of sensitivity is pretty straightforward:
Sensitivity = True Positives / Positives in Actual Dependent Variable
where true positives are positive events which are correctly classified as positive.
26. How do data management procedures like missing data handling make selection bias worse?
Missing value treatment is one of the primary tasks a data scientist is supposed to do before starting data analysis. There are multiple methods for missing value treatment. If not done properly, it could potentially result in selection bias. Let's see a few missing value treatment examples and their impact on selection:
Complete case treatment: Complete case treatment is when you remove an entire row of data even if only one value is missing. You could introduce selection bias if your values are not missing at random and they have some pattern. Assume you are conducting a survey and a few people didn't specify their gender. Would you remove all those people? Couldn't that tell a different story?
Available case analysis: Say you are trying to calculate the correlation matrix for a dataset, so you remove the missing values from the variables which are needed for each particular correlation coefficient. In this case your values will not be fully correct, as each coefficient is computed from a different subset of the population.
Mean substitution: In this method missing values are replaced with the mean of the other available values. This might bias your distribution: for example, standard deviation, correlation and regression estimates all depend on the mean value of the variables.
Hence, various data management procedures might introduce selection bias into your data if not chosen carefully.
Data Science Confusion Matrix Interview Questions
27. Can you cite some examples where a false positive is more important than a false negative?
Before we start, let us understand what false positives and false negatives are. False positives are the cases where you wrongly classify a non-event as an event, a.k.a. a Type I error. False negatives are the cases where you wrongly classify events as non-events, a.k.a. a Type II error. In the medical field, assume you have to give chemotherapy to patients.
Your lab tests patients for certain vital information and, based on those results, decides whether to give radiation therapy to a patient. Assume a patient comes to that hospital and is tested positive for cancer based on the lab prediction, but he doesn't actually have cancer. What will happen to him? (Assuming sensitivity is 1.)
One more example might come from marketing. Let's say an e-commerce company decides to give a $1000 gift voucher to the customers whom they expect to purchase at least $5000 worth of items. They send the free voucher mail directly to 100 customers without any minimum purchase condition, because they assume they will make at least 20% profit on sold items above $5000. Now what if they have sent it to false positive cases?
28. Can you cite some examples where a false negative is more important than a false positive?
Assume there is an airport 'A' which has received high security threats, and based on certain characteristics they identify whether a particular passenger can be a threat or not. Due to a shortage of staff, they decide to scan only those passengers predicted as risk-positive by their predictive model. What will happen if a true threat customer is flagged as non-threat by the airport's model?
Another example can be the judicial system. What if a jury or judge decides to let a criminal go free?
What if you rejected marrying a very good person based on your predictive model, and you happen to meet him/her after a few years and realize that you had a false negative?
29. Can you cite some examples where both false positives and false negatives are equally important?
In the banking industry, giving loans is the primary source of making money, but at the same time, if your repayment rate is not good you will not make any profit; rather, you will risk huge losses. Banks don't want to lose good customers, and at the same time they don't want to acquire bad customers.
In this scenario both the false positives and the false negatives become very important to measure.
These days we hear many cases of players using steroids during sports competitions. Every player has to go through a steroid test before the game starts. A false positive can ruin the career of a great sportsman, and a false negative can make the game unfair.
30. A test has a true positive rate of 100% and a false positive rate of 5%. There is a population with a 1/1000 rate of having the condition the test identifies. Given a positive test, what is the probability of having that condition?
Let's suppose you are being tested for a disease. If you have the illness, the test will end up saying you have the illness. However, if you don't have the illness, 5% of the time the test will end up saying you have the illness, and 95% of the time the test will give the accurate result that you don't have the illness. Thus there is a 5% error rate in case you do not have the illness.
Out of 1000 people, the 1 person who has the disease will get a true positive result. Of the remaining 999 people, 5% will also test positive even though they do not have the disease, so close to 50 people will get a false positive result. This means that out of 1000 people, 51 people will test positive for the disease even though only one person has the illness. There is therefore only about a 2% probability (1/51 ≈ 0.0196) of you having the disease even if your report says that you have it.
Data Science Classification and Regression Interview Questions
31. What is Linear Regression?
Linear regression is a statistical technique where the score of a variable Y is predicted from the score of a second variable X. X is referred to as the predictor variable and Y as the criterion variable.
32. What is the difference between Cluster and Systematic Sampling?
Cluster sampling is a technique used when it becomes difficult to study a target population spread across a wide area and simple random sampling cannot be applied.
A cluster sample is a probability sample in which each sampling unit is a collection, or cluster, of elements. Systematic sampling is a statistical technique where elements are selected from an ordered sampling frame at regular intervals. In systematic sampling, the list is progressed in a circular manner, so once you reach the end of the list, you continue from the top again. The best example of systematic sampling is the equal probability method.
33. How can you assess a good logistic model?
There are various methods to assess the results of a logistic regression analysis:
• Using the classification matrix to look at the true negatives and false positives.
• Concordance, which helps identify the ability of the logistic model to differentiate between the event happening and not happening.
• Lift, which helps assess the logistic model by comparing it with random selection.
34. How will you define the number of clusters in a clustering algorithm?
Though the clustering algorithm is not specified, this question will mostly be asked in reference to k-means clustering, where "K" defines the number of clusters. The objective of clustering is to group similar entities such that the entities within a group are similar to each other but the groups are different from each other. For example, the following image shows three different groups.
The within-cluster sum of squares (WSS) is generally used to explain the homogeneity within a cluster. If you plot WSS for a range of cluster counts, you will get the plot shown below, generally known as the elbow curve. The red-circled point in the graph, i.e. number of clusters = 6, is the point after which you don't see any significant decrease in WSS. This point is known as the bending point and is taken as K in k-means. This is the widely used approach, but some data scientists also use hierarchical clustering first to create dendrograms and identify the distinct groups from there.
35. Is it possible to perform logistic regression with Microsoft Excel?
It is possible to perform logistic regression with Microsoft Excel. There are two ways to do it using Excel:
• One is to use the add-ins provided by many websites.
• The second is to use the fundamentals of logistic regression and Excel's computational power to build a logistic regression model.
But when this question is asked in an interview, the interviewer is not looking for the name of an add-in, but rather for a method using base Excel functionality. Let's use sample data to learn about logistic regression using Excel. (The example assumes that you are familiar with the basic concepts of logistic regression.) The data shown above consists of three variables, where X1 and X2 are independent variables and Y is a class variable. We have kept only 2 categories for our purpose of building a binary logistic regression classifier. Next we have to create a logit function using the independent variables, i.e.
Logit = L = β0 + β1*X1 + β2*X2
36. You created a predictive model of a quantitative outcome variable using multiple regression. What are the steps you would follow to validate the model?
Since the question asked is about the post-model-building exercise, we will assume that you have already tested for the null hypothesis, multicollinearity and the standard errors of the coefficients. Once you have built the model, you should check the following:
– Global F-test to see the significance of the group of independent variables on the dependent variable
– R^2
– Adjusted R^2
– RMSE, MAPE
In addition to the above quantitative metrics you should also check:
– The residual plot
– The assumptions of linear regression
37. Give some situations where you will use an SVM over a RandomForest Machine Learning algorithm and vice-versa.
SVM and Random Forest are both used in classification problems.
a) If you are sure that your data is outlier-free and clean, then go for SVM. The opposite holds as well: if your data might contain outliers, then Random Forest would be the best choice.
b) Generally, SVM consumes more computational power than Random Forest, so if you are constrained by memory or compute, go for the Random Forest machine learning algorithm.
c) Random Forest gives you a very good idea of variable importance in your data, so if you want to have variable importance, then choose the Random Forest machine learning algorithm.
d) Random Forest machine learning algorithms are preferred for multiclass problems.
e) SVM is preferred in high-dimensional problem sets, like text classification. But as a good data scientist, you should experiment with both of them and test for accuracy, or rather you can use an ensemble of many machine learning techniques.
Data Science R Programming Interview Questions – 1
38. What do you understand by element recycling in R?
If two vectors with different lengths perform an operation, the elements of the shorter vector will be re-used to complete the operation. This is referred to as element recycling. Example: if vector A <- c(1,2,0,4) and vector B <- c(3,6), then the result of A*B will be c(3,12,0,24). Here 3 and 6 of vector B are repeated when computing the result.
39. How can you verify if a given object "X" is a matrix data object?
If the function call is.matrix(X) returns TRUE, then X can be considered a matrix data object; otherwise not.
40. How will you measure the probability of a binary response variable in R language?
Logistic regression can be used for this, and the function glm() in R provides this functionality.
41. What is the use of the sample and subset functions in the R programming language?
The sample() function can be used to select a random sample of size 'n' from a huge dataset. The subset() function is used to select variables and observations from a given dataset.
42. There is a function fn(a, b, c, d, e) = a + b * c – d / e. Write the code to call fn on the vector c(1,2,3,4,5) such that the output is the same as fn(1,2,3,4,5).
do.call(fn, as.list(c(1, 2, 3, 4, 5)))
43. How can you resample statistical tests in R language?
The coin package in R provides various options for re-randomization and permutation-based statistical tests. When test assumptions cannot be met, this package serves as the best alternative to classical methods, as it does not assume random sampling from well-defined populations.
44. What is the purpose of using the next statement in R language?
If a developer wants to skip the current iteration of a loop in the code without terminating it, they can use the next statement. Whenever the R parser comes across the next statement in the code, it skips further evaluation of the loop body and jumps to the next iteration of the loop.
45. How will you create scatterplot matrices in R language?
A matrix of scatterplots can be produced using the pairs() function, which takes various parameters like formula, data, subset, labels, etc. The two key parameters required to build a scatterplot matrix are:
• formula: a formula such as ~a+b+c. Each term gives a separate variable in the pairs plot, and the terms should be numerical vectors. It represents the series of variables used in the pairs plot.
• data: the dataset from which the variables have to be taken for building the scatterplot.
46. How will you check if an element 25 is present in a vector?
There are various ways to do this:
i. It can be done using the match() function; match() returns the first position at which a particular element appears.
ii. Another way is to use %in%, which returns a Boolean value, either TRUE or FALSE.
iii. The is.element() function also returns a Boolean value, TRUE or FALSE, based on whether the element is present in the vector or not.
47. What is the difference between the library() and require() functions in R language?
There is no real difference between the two if the packages are not being loaded inside a function. require() is usually used inside functions and throws a warning whenever a particular package is not found.
On the flip side, library() gives an error message if the desired package cannot be loaded.
Data Science R Programming Interview Questions – 2
48. What are the rules to define a variable name in the R programming language?
A variable name in R can contain letters and digits along with the special characters dot (.) and underscore (_). Variable names in R can begin with a letter or the dot symbol. However, if the variable name begins with a dot, the dot should not be followed by a numeric digit.
49. What do you understand by a workspace in the R programming language?
The current R working environment of a user, containing user-defined objects like lists, vectors, etc., is referred to as the workspace in R.
50. Which function helps you perform sorting in R language?
order() (and sort(), which returns the sorted values directly)
51. How will you list all the data sets available in all R packages?
Using the below line of code:
data(package = .packages(all.available = TRUE))
52. Which function is used to create a histogram visualisation in the R programming language?
hist()
53. Write the syntax to set the path of the current working directory in the R environment.
setwd("path")
54. How will you drop variables using indices in a data frame?
Let's take a dataframe df <- data.frame(v1=c(1:5), v2=c(2:6), v3=c(3:7), v4=c(4:8)):
## v1 v2 v3 v4
## 1 1 2 3 4
## 2 2 3 4 5
## 3 3 4 5 6
## 4 4 5 6 7
## 5 5 6 7 8
Suppose we want to drop variables v2 and v3; they can be dropped using negative indices, as in df[, -c(2, 3)]:
## v1 v4
## 1 1 4
## 2 2 5
## 3 3 6
## 4 4 7
## 5 5 8
55. What will be the output of runif(7)?
It will generate 7 random numbers between 0 and 1.
56. What is the difference between the rnorm and runif functions?
The rnorm function generates "n" normal random numbers based on the mean and standard deviation arguments passed to the function.
Syntax of the rnorm function: rnorm(n, mean = , sd = )
The runif function generates "n" uniform random numbers in the interval between the minimum and maximum values passed to the function.
Syntax of the runif function: runif(n, min = , max = )
57. What will be the output on executing the following R programming code?
mat <- matrix(rep(c(TRUE, FALSE), 8), nrow = 4)
It creates a 4x4 logical matrix filled column-wise, so each column reads TRUE, FALSE, TRUE, FALSE (rows 1 and 3 are all TRUE, rows 2 and 4 are all FALSE).
Data Science R Programming Interview Questions – 3
58. How will you combine multiple different strings like "Data", "Science", "in", "R", "Programming" into the single string "Data_Science_in_R_Programming"?
paste("Data", "Science", "in", "R", "Programming", sep = "_")
59. Write a function to extract the first name from the string "Mr. Tom White".
substr("Mr. Tom White", start = 5, stop = 7)
60. Can you tell if the equation given below is linear or not? Emp_sal = 2000 + 2.5*(emp_age)^2
Yes, it is a linear equation, as it is linear in the coefficients: the model is linear in its parameters even though it is quadratic in emp_age.
61. What will be the output of the following R programming code?
var2 <- c("I","Love,"DeZyre")
It will give an error, because the closing quote after "Love" is missing.
62. What will be the output of the following R programming code?
x <- 5
if (x %% 2 == 0)
print("X is an even number")
else
print("X is an odd number")
Executing the above code at the top level will result in an error as shown below:
## Error: :4:1: unexpected 'else'
## 3: print("X is an even number")
## 4: else
## ^
The R parser does not know whether the else relates to the first 'if' or not, as the first if() statement is a complete command on its own.
63. I have a string "contact@dezyre.com". Which string function can be used to split the string into two different strings "contact@dezyre" and "com"?
This can be accomplished using the strsplit function, which splits a string based on the identifier given in the function call. Note that the split argument is interpreted as a regular expression, so the dot has to be escaped. The output of the strsplit() function is a list.
strsplit("contact@dezyre.com", split = "\\.")
Output of the strsplit function:
## [[1]]
## [1] "contact@dezyre" "com"
64. What is the R Base package?
The R base package is the package that is loaded by default whenever the R programming environment is loaded. The base package provides basic functionality in the R environment, like arithmetic calculations and input/output.
65. How will you merge two dataframes in R programming language?
The merge() function is used to combine two dataframes, and it identifies common rows or columns between the two dataframes. The merge() function basically finds the intersection between two different sets of data.
merge() in R takes a long list of arguments, as follows:
Syntax for using the merge function in R:
merge(x, y, by.x, by.y, all.x, all.y, all)
• x represents the first dataframe.
• y represents the second dataframe.
• by.x: the variable name in dataframe x that is common to y.
• by.y: the variable name in dataframe y that is common to x.
• all.x: a logical value that specifies the type of merge. all.x should be set to TRUE if we want all the observations from dataframe x. This results in a left join.
• all.y: a logical value that specifies the type of merge. all.y should be set to TRUE if we want all the observations from dataframe y. This results in a right join.
• all: the default value for this is FALSE, which means that only matching rows are returned, resulting in an inner join. This should be set to TRUE if you want all the observations from dataframes x and y, resulting in an outer join.
66. Write the R programming code for an array of words so that the output is displayed in decreasing frequency order.
R programming code to display output in decreasing frequency order:
tt <- sort(table(c("a", "b", "a", "a", "b", "c", "a1", "a1", "a1")), dec = T)
depth <- 3
tt[1:depth]
Output:
## a a1 b
## 3 3 2
67. How do you check the frequency distribution of a categorical variable?
The frequency distribution of a categorical variable can be checked using the table function in R. The table() function calculates the count of each category of the categorical variable.
For example:
gender = factor(c("F","M","F","F","M","F"))
table(gender)
Output of the above R code:
## gender
## F M
## 4 2
Programmers can also calculate the percentage of values in each category by storing the output in a dataframe and computing a percent column, as shown below:
t = data.frame(table(gender))
t$percent = round(t$Freq / sum(t$Freq)*100, 2)
Data Science R Programming Interview Questions – 4
68. What is the procedure to check the cumulative frequency distribution of any categorical variable?
The cumulative frequency distribution of a categorical variable can be checked by applying the cumsum() function to its frequency table in R:
gender = factor(c("f","m","m","f","m","f"))
y = cumsum(table(gender))
Output of the above R code (the cumulative counts are 3 and 3+3 = 6):
## f m
## 3 6
69. What will be the result of multiplying two vectors in R having different lengths?
The multiplication of the two vectors will be performed, and the output will be displayed with a warning message like "longer object length is not a multiple of shorter object length". Suppose there is a vector a <- c(1, 2, 3) and a vector b <- c(2, 3); then the multiplication a*b will give the result 2 6 6, with the warning message. The multiplication is performed element-wise, and since the lengths are not the same, the elements of the shorter vector b are recycled, so the first element of b is multiplied with the last element of a.
70. R has several packages for data science which are meant to solve specific problems; how do you decide which one to use?
The CRAN package repository in R has more than 6000 packages, so a data scientist needs to follow a well-defined process and criteria to select the right one for a specific task. When looking for a package in the CRAN repository, a data scientist should list out all the requirements and issues so that an ideal R package can address all those needs and issues. The best way to answer this question is to look for an R package that follows good software development principles and practices.
For example, you might want to look at the quality of the documentation and the unit tests. The next step is to check out how a particular R package is used and to read the reviews posted by other users of the package. It is important to know if other data scientists or data analysts have been able to solve a similar problem. When in doubt about choosing a particular R package, always ask for feedback from R community members or other colleagues to ensure that you are making the right choice.
71. How can you merge two data frames in R language?
Data frames in R can be merged manually using the cbind() function or by using the merge() function on common rows or columns.
72. Explain the usage of the which() function in R language.
The which() function determines the positions of the elements in a logical vector that are TRUE. In the below example, we are finding the row number at which the maximum value of variable v1 is recorded:
mydata = data.frame(v1 = c(2,4,12,3,6))
which(mydata$v1 == max(mydata$v1))
It returns 3, as 12 is the maximum value and it is in the 3rd row of the variable v1.
73. How will you convert a factor variable to numeric in R language?
A factor variable can be converted to numeric using the as.numeric() function in R. However, the variable first needs to be converted to character before being converted to numeric, because applying as.numeric() directly does not return the original values but rather the integer codes of the factor levels.
X <- factor(c(4, 5, 6, 6, 4))
X1 = as.numeric(as.character(X))
Data Science Python Interview Questions – 1
74. Write a function that takes in two sorted lists and outputs a sorted list that is their union.
The first solution that will come to your mind is to merge the two lists and sort them afterwards.
Python code:
def return_union(list_a, list_b):
    return sorted(list_a + list_b)
R code:
return_union <- function(list_a, list_b) {
  list_c <- list(c(unlist(list_a), unlist(list_b)))
  return(list(list_c[[1]][order(list_c[[1]])]))
}
Generally, the tricky part of the question is not to use any sorting or ordering function. In that case you will have to write your own merge logic to answer the question and impress your interviewer.
Python code:
def return_union(list_a, list_b):
    len1 = len(list_a)
    len2 = len(list_b)
    final_sorted_list = []
    j = 0
    k = 0
    for i in range(len1 + len2):
        if k == len1:
            final_sorted_list.extend(list_b[j:])
            break
        elif j == len2:
            final_sorted_list.extend(list_a[k:])
            break
        elif list_a[k] < list_b[j]:
            final_sorted_list.append(list_a[k])
            k += 1
        else:
            final_sorted_list.append(list_b[j])
            j += 1
    return final_sorted_list
A similar function can be written in R by following the same steps:
return_union <- function(list_a, list_b) {
  # Initializing length variables
  len_a <- length(list_a)
  len_b <- length(list_b)
  len <- len_a + len_b
  # Initializing counter variables
  j <- 1
  k <- 1
  # Creating an empty list whose length equals the sum of both lists
  list_c <- list(rep(NA, len))
  # Here goes our for loop
  for (i in 1:len) {
    if (j > len_a) {
      list_c[i:len] <- list_b[k:len_b]
      break
    } else if (k > len_b) {
      list_c[i:len] <- list_a[j:len_a]
      break
    } else if (list_a[[j]] <= list_b[[k]]) {
      list_c[[i]] <- list_a[[j]]
      j <- j + 1
    } else {
      list_c[[i]] <- list_b[[k]]
      k <- k + 1
    }
  }
  return(list_c)
}
75. Name a few libraries in Python used for data analysis and scientific computation.
NumPy, SciPy, Pandas, SciKit-Learn, Matplotlib, Seaborn.
76. Which library would you prefer for plotting in the Python language: Seaborn or Matplotlib?
Matplotlib is the Python library used for plotting, but it needs a lot of fine-tuning to ensure that the plots look polished. Seaborn helps data scientists create statistically and aesthetically appealing, meaningful plots. The answer to this question varies based on the requirements for plotting the data.
77. Which method in pandas.tools.plotting is used to create a scatter plot matrix?
scatter_matrix()
78.
How can you check if a data set or time series is random?
To check whether a dataset is random or not, use a lag plot. If the lag plot for the given dataset does not show any structure, then the data is random.
79. What are the possible ways to load an array from a text data file in Python? How can the efficiency of the code to load the data file be improved?
numpy.loadtxt() (and numpy.genfromtxt() for files with missing values). Loading efficiency can be improved by specifying the dtype explicitly, or by storing the data in NumPy's binary .npy format and loading it with numpy.load().
80. Which is the standard missing data marker used in Pandas?
NaN
81. Which Python library would you prefer to use for data munging?
Pandas
82. Write the code to sort an array in NumPy by the nth column.
This can be achieved using the argsort() function. If there is an array x and you would like to sort it by its nth column (counting from 1), the code for this will be x[x[:, n-1].argsort()].
83. Which Python library is built on top of Matplotlib and Pandas to ease data plotting?
Seaborn
84. Which plot will you use to assess the uncertainty of a statistic?
The bootstrap plot
85. What is pylab?
A package that combines NumPy, SciPy and Matplotlib into a single namespace.
86. Which Python library is used for machine learning?
SciKit-Learn
87. How can you copy objects in Python?
The functions used to copy objects in Python are:
• copy.copy() for a shallow copy
• copy.deepcopy() for a deep copy
However, it is not possible to copy all objects in Python using these functions. For instance, dictionaries have a separate copy method, whereas sequences in Python can also be copied by slicing.
88. What is the difference between tuples and lists in Python?
Tuples can be used as keys for dictionaries, i.e. they can be hashed. Lists are mutable whereas tuples are immutable; they cannot be changed. Tuples should be used when the order of elements in a sequence matters, for example a set of actions that need to be executed in sequence, geographic locations, or a list of points on a specific route.
89. What is PEP8?
PEP8 consists of coding guidelines for the Python language so that programmers can write readable code, making it easy for any other person to use later on.
90.
Is all the memory freed when Python exits?
No, it is not, because the objects that are referenced from the global namespaces of Python modules are not always de-allocated when Python exits.
Data Science Python Interview Questions – 2
91. What does __init__.py do?
__init__.py is an (often empty) .py file used for marking a directory as an importable package. __init__.py provides an easy way to organize the files. If there is a module maindir/subdir/module.py, an __init__.py file is placed in all the directories so that the module can be imported using the following command:
import maindir.subdir.module
92. What is the difference between the range() and xrange() functions in Python?
range() returns a list, whereas xrange() returns an object that acts like an iterator, generating numbers on demand.
93. How can you randomize the items of a list in place in Python?
random.shuffle(lst) can be used for randomizing the items of a list in Python.
94. What is pass in Python?
pass in Python signifies a no-operation statement, indicating that nothing is to be done.
95. If you are given the first and last names of employees, which data type in Python will you use to store them?
You can use a list where each element includes a first name and a last name, or use a dictionary.
96. What happens when you execute the statement mango=banana in Python?
A NameError will occur when this statement is executed in Python, since the name banana is not defined.
97. Optimize the below Python code:
word = 'word'
print word.__len__()
Answer: print 'word'.__len__()
98. What is monkey patching in Python?
Monkey patching is a technique that helps a programmer modify or extend other code at runtime. Monkey patching comes in handy in testing, but it is not a good practice to use in a production environment, as debugging the code could become difficult.
99. What is pickling and unpickling?
The pickle module accepts any Python object and converts it into a string representation and dumps it into a file by using the dump function; this process is called pickling.
The process of retrieving the original Python objects from the stored string representation is called unpickling.
100. What are the tools that help to find bugs or perform static analysis?
PyChecker is a static analysis tool that detects bugs in Python source code and warns about their style and complexity. Pylint is another tool that verifies whether a module meets coding standards.
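The pickling and unpickling round trip described in question 99 can be sketched in a few lines. This is a minimal illustration: the file name data.pkl and the sample dictionary are placeholders chosen for the example, not part of any particular API.

```python
import pickle

# Pickling: serialize a Python object to a byte stream and dump it into a file.
record = {"name": "Tom White", "scores": [88, 92, 75]}
with open("data.pkl", "wb") as f:
    pickle.dump(record, f)

# Unpickling: read the byte stream back and rebuild the original object.
with open("data.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == record)  # prints True: the round trip preserves the object
```

pickle.dumps() and pickle.loads() do the same thing with in-memory byte strings, which is what the "string representation" in question 99 refers to.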
{"url":"https://www.techtutorial.in/data-science-interview-questions-and-answers/","timestamp":"2024-11-07T19:01:27Z","content_type":"text/html","content_length":"113201","record_id":"<urn:uuid:b11cb8c0-a728-4949-8407-d96ad359ce92>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00697.warc.gz"}
The extension of the standard model to SU(3)_L x SU(3)_R x SU(3)_C is considered. Spontaneous symmetry breaking requires two Higgs field multiplets with a strong hierarchical structure of vacuum expectation values. These vacuum expectation values, some of them known from experiment, are used to construct invariant potentials in the form of a sum of individual potentials relevant at the weak scale. As in a previous suggestion one may normalize the most important individual potentials such that their mass eigenvalues agree with their very large vacuum expectation values. In this case (for a wide class of parameters) the scalar field corresponding to the standard model Higgs turns out to have the precise mass value m_Higgs = v/sqrt(2) = 123 GeV at the weak scale. The physical mass (pole mass) is larger and found to be 125 +/- 1.4 GeV. Comment: 5 pages, version appearing in Phys. Rev. We present further tests and applications of the new eta-eta' mixing scheme recently proposed by us. The particle states are decomposed into orthonormal basis vectors in a light-cone Fock representation. Because of flavor symmetry breaking, the mixing of the decay constants can be identical to the mixing of particle states at most for a specific choice of this basis. Theoretical and phenomenological considerations show that the quark flavor basis has this property and allows, therefore, for a reduction of the number of mixing parameters. A detailed comparison with other mixing schemes is also presented. Comment: 9 pages. We analyse the width of the $\theta(\frac12^+)$ pentaquark assuming that it is a bound state of two extended spin-zero $ud$-diquarks and the $\bar s$ antiquark (the Jaffe-Wilczek scenario). The width obtained when the size parameters of the pentaquark wave function are taken to be close to the parameters of the nucleon is found to be $\simeq 150$ MeV, i.e. it has a normal value for a $P$-wave hadron decay with the corresponding energy release. However, we found a strong dynamical suppression of the decay width if the pentaquark has an asymmetric "peanut" structure with the strange antiquark in the center and the two diquarks rotating around it. In this case a decay width of $\simeq 1$ MeV is a natural possibility. Comment: 3 new references added, version accepted to Physics. We present an $E_6$ Grand Unified model with a realistic pattern of fermion masses. All standard model fermions are unified in three fundamental 27-plets (i.e. supersymmetry is not invoked), which involve in addition right-handed neutrinos and three families of vector-like heavy quarks and leptons. The lightest of those can lie in the low TeV range, being accessible to future collider experiments. Due to the high symmetry, the masses and mixings of all fermions are closely related. The new heavy fermions play a crucial role for the quark and lepton mass matrices and the bilarge neutrino oscillations. In all channels generation mixing and ${\cal CP}$ violation arise from a single antisymmetric matrix. The $E_6$ breaking proceeds via an intermediate energy region with SU(3)_L x SU(3)_R x SU(3)_C gauge symmetry and a discrete left-right symmetry. This breaking pattern leads in a straightforward way to the unification of the three gauge coupling constants at high scales, providing for a long proton lifetime. The model also provides for the unification of the top, bottom and tau Yukawa couplings and for new interesting relations in flavor and generation space. Comment: RevTex4, three ps figures, some corrections
{"url":"https://core.ac.uk/search/?q=author%3A(B.%20Stech)","timestamp":"2024-11-08T21:36:16Z","content_type":"text/html","content_length":"117789","record_id":"<urn:uuid:fd27aea3-47e9-4253-b79d-0fe3af4afbf5>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00303.warc.gz"}
A certain children's toy sells 4100 units when its price is $20 and 2800 units when its price is $28. Calculate its price elasticity of demand, using the technique in the PowerPoints. You will interpret this elasticity in the next question. Enter only numbers, a decimal point, and/or a negative sign as needed. Round your final answer to two decimal places as necessary; if you round on intermediate steps, use four places.
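The "technique in the PowerPoints" is not reproduced here; a common classroom choice is the midpoint (arc) method, sketched below under that assumption:

```python
def midpoint_elasticity(q1, q2, p1, p2):
    """Price elasticity of demand via the midpoint (arc) method:
    percentage changes are measured against the average of the two points."""
    pct_dq = (q2 - q1) / ((q1 + q2) / 2)   # -1300 / 3450
    pct_dp = (p2 - p1) / ((p1 + p2) / 2)   # 8 / 24
    return pct_dq / pct_dp

e = midpoint_elasticity(4100, 2800, 20, 28)
print(round(e, 2))  # -1.13
```

Since |e| > 1, demand is price-elastic over this price range.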
{"url":"https://studyhelpme.com/question/6318/A-certain-children39s-toy-sells-4100-units-when-its-price-is-20-and-2800-units-when-its-price-is","timestamp":"2024-11-09T11:20:04Z","content_type":"text/html","content_length":"69328","record_id":"<urn:uuid:e0ececf1-256a-4653-80f1-8525ddaf5952>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00298.warc.gz"}
Cusp Catastrophe Polynomial Model: Power and Sample Size Estimation

1. Introduction

Popularized in the 1970s by Thom [1], Thom and Fowler [2], Cobb and Ragade [3], Cobb and Watson [4], and Cobb and Zack [5], catastrophe theory was proposed to understand a complicated set of behaviors that includes both gradual, continuous changes and sudden, discrete or catastrophic changes. Computationally, there are two directions in which catastrophe theory has been implemented. One direction was operationalized by Guastello [6] [7] as a polynomial regression approach; the other is the stochastic cusp catastrophe model from Cobb and his colleagues [5], implemented in an R package in [8]. This paper discusses the first direction, the polynomial cusp catastrophe regression model, because of its relative simplicity and ease of implementation as a simple regression approach. This model has been used extensively in research. Typical examples include modeling of the accident process [7], adolescent alcohol use [9], changes in adolescent substance use [10], binge drinking among college students [11], sexual initiation among young adolescents [12], nursing turnover [13], and the effect of HIV prevention among adolescents [12] [14]. Even though this polynomial regression method has been widely applied in behavioral studies to investigate the existence of a cusp catastrophe, to the best of our knowledge, no reported research has addressed the determination of sample size and statistical power for this analytical approach. Statistical power analysis is an essential part of efficiently planning and designing a research project, as pointed out in [15] - [17]. To assist and enhance application of the polynomial regression method in behavioral research, this paper aims to fill this methodological gap by reporting the Monte-Carlo simulation-based method we developed to conduct power analysis and to determine sample size. The structure of the paper is as follows. We start with a brief review of the cusp catastrophe model (Section 2), followed by a report of our development of the novel simulation-based approach to calculate statistical power (Section 3). This approach is then verified through Monte Carlo simulations and is further illustrated with data derived from a published study (Section 4). Conclusions and discussions are given at the end of the paper (Section 5).

2. Cusp Catastrophe Model

2.1. Overview

The cusp catastrophe model is proposed to model system outcomes; it incorporates the linear model and extends it to a nonlinear model with discontinuous transitions in equilibrium states as control variables vary. According to catastrophe systems theory [1] [18] - [20], the dynamics of a cusp system is expressed by relating the time derivative of its state variable z (often called the behavioral variable within the context of catastrophe theory) to a potential function V(z; x, y): dz/dt = -∂V(z; x, y)/∂z, where x and y are the control variables. Figure 1 depicts the equilibrium plane, which reflects the response surface of the outcome measure. Furthermore, the paths A, B, and C in Figure 1 depict three typical but different pathways of change in the outcome measure. Figure 1.
Cusp catastrophe model for outcome measures. From the above description, it is clear that a cusp model differs from a linear model in that: 1) A cusp model allows the forward and backward progressions to follow different paths in the outcome measure, and both processes can be modeled simultaneously (see Paths B and C in Figure 1), while a linear model only permits one type of relationship; 2) A cusp model covers both a discrete component and a continuous component of a behavior change, while a linear model covers only a continuous process (Path A). In this sense a linear model can be considered a special case of the cusp model; 3) A cusp model consists of two stable regions and two thresholds for sudden and discrete changes. Therefore, the application of cusp modeling will advance the linear approach and better assist researchers in describing behavioral data, while evidence obtained from such analysis, in turn, can be used to advance theories and models to better explain a behavior.

2.2. Guastello's Cusp Catastrophe Polynomial Regression Model

To operationalize the cusp catastrophe model for behavioral research, Guastello [6] [7] developed the polynomial regression approach to implement the concept of the cusp model. Since the first publication of this method, it has been widely used in analyzing real data, as we described in the Introduction. In this study, we refer to the method as Guastello's polynomial cusp regression. According to Guastello, this approach is derived by inserting regression coefficients into the cusp model. To demonstrate the efficiency of the polynomial regression approach in describing behavioral changes that are cusp, Guastello [7] recommended a comparative approach. In this approach, four alternative linear models of two types can be constructed and used in modeling the same variables: 1) Change-scores linear models; 2) Pre- and post-linear models. These alternative linear models add another analytical strategy to strengthen the polynomial regression method.
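The cusp regression itself (referred to below as Equation (2)) is not reproduced in this text. Guastello's difference-equation form is commonly written as below, where $z_1, z_2$ are the pre- and post-measures, $y$ the bifurcation variable, and $x$ the asymmetry variable; the exact regressor coding varies by study, so treat this as a sketch rather than the authors' exact specification:

```latex
\Delta z = z_2 - z_1 = \beta_0 + \beta_1 z_1^3 + \beta_2 z_1^2 + \beta_3 y z_1 + \beta_4 x + \varepsilon
```

Evidence for a cusp is typically claimed when the cubic term $\beta_1$ is statistically significant and the cusp model fits better than the linear alternatives.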
A better data-model fit (e.g., a larger R²) for the cusp model than for these linear alternatives supports the existence of a cusp.

3. Simulation-Based Power Analysis Approach for Guastello's Cusp Regression

3.1. A Brief Introduction to Statistical Power

In statistics, power is defined as the probability of correctly rejecting the null hypothesis. Stated in common language, power is the fraction of times that the specified null-hypothesis value will be rejected by statistical tests when a specified alternative hypothesis holds. As detailed in Chapter 7 in [17], five factors related to research design interplay with each other to determine the statistical power and sample size for a simple t-test, beginning with the rate of type-I error (α). Extending the same concept to Guastello's polynomial cusp regression, we would need to specify the corresponding parameter effect size for all model parameters.

3.2. Simulation-Based Approach for Power Analysis and Sample Size Determination

Power analysis and sample size determination can be developed for a specific purpose. Typically, it is developed to detect a treatment effect, as in clinical trials, or to detect the effect of a specific risk factor, as in regression. A similar development can be done for Guastello's cusp regression model for a specific regressor in the asymmetry variable: 1) Simulate data with a given sample size; 2) Specify the model parameter effect sizes; 3) Calculate the outcome variable from the specified model; 4) Fit Guastello's cusp regression model (Equation (2)) by the least squares method using the data generated; 5) Repeat Steps 1 to 4 a large number of times (typically 1000) and calculate the proportion of simulations which satisfy Guastello's decision rules. This proportion then provides an estimate of the statistical power for the pre-specified sample size and the study specifications given in Steps 1 and 2; 6) With the above five steps for power assessment established, sample size is then determined to reach a pre-specified level of statistical power.
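Steps 1-5 can be sketched as follows. Since Equation (2), the chosen effect sizes, and Guastello's full decision rules are not reproduced in this text, the sketch reduces the model to a single cubic regressor and uses significance of that term as a placeholder decision rule; it illustrates the simulation logic only, not the authors' code:

```python
import math
import random

random.seed(1)

def power_estimate(n, b1=0.08, sigma=1.0, n_sims=400):
    """Monte-Carlo power sketch. The model is deliberately simplified to a
    single cubic regressor, dz = b1*z1**3 + noise, and the placeholder
    decision rule is 'the cubic term is significant at the 5% level'."""
    hits = 0
    for _ in range(n_sims):
        z1 = [random.gauss(0, 1) for _ in range(n)]          # Step 1: simulate
        u = [z ** 3 for z in z1]                             # cubic regressor
        dz = [b1 * ui + random.gauss(0, sigma) for ui in u]  # Steps 2-3: outcome
        # Step 4: one-predictor least squares and a t-type test on the slope
        mu_u, mu_dz = sum(u) / n, sum(dz) / n
        sxx = sum((ui - mu_u) ** 2 for ui in u)
        sxy = sum((ui - mu_u) * (di - mu_dz) for ui, di in zip(u, dz))
        slope = sxy / sxx
        resid = [di - mu_dz - slope * (ui - mu_u) for ui, di in zip(u, dz)]
        s2 = sum(r * r for r in resid) / (n - 2)
        hits += abs(slope / math.sqrt(s2 / sxx)) > 1.96      # normal approx.
    return hits / n_sims                                     # Step 5: power

# Power rises with sample size, which is how Step 6 picks n for a target power:
print(power_estimate(20), power_estimate(80))
```

Step 6 then amounts to scanning `n` upward until the estimated power first reaches the target (e.g., 85%).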
This is carried out by running Steps 1 to 5 over a range of sample sizes. The simulation-based approach described above is implemented in freely available software.

4. Simulation Study and Real Example

4.1. Monte-Carlo Simulation Analysis

4.1.1. Rationale

To verify the novel approach proposed in Section 3, we simulated four scenarios with different levels of measurement error.

4.1.2. Data Generation

Data are generated from the cusp model with specified asymmetry and bifurcation variables. Figure 2 illustrates one realization of the data generation.

4.1.3. Simulation Analysis

Data sets for the four scenarios were analyzed; the results are summarized in Table 1. Across the scenarios, the estimated power declines as measurement error increases. Figure 2. Example of simulated data exhibiting a cusp.

4.1.4. Sample Size Estimation

To demonstrate the proposed simulation method, we estimated the sample sizes needed for each of the four scenarios to achieve 85% statistical power, employing this method and the parameter estimates from Table 1. Figure 3 summarizes the results. Data in Figure 3 indicate that, for 85% statistical power to detect the underlying cusp, the required sample sizes for Scenarios 1 through 4 are 36, 101, 195 and 293, respectively. The required sample size varies proportionately with measurement error. This result adds more evidence supporting the validity of the simulation-based approach we proposed for power analysis.

4.1.5. Reverse-Verification

If the novel simulation-based approach is valid, the sample size estimate for each of the four scenarios described in the previous section will allow approximately an 85% chance of detecting the underlying cusp. Therefore, we took a reverse approach and computed statistical power by applying the calculated sample size as input for each of the four scenarios. Results in Figure 3 indicated that for Scenario 1, a sample size of 36 observations will be adequate to detect the cusp with 85% statistical power. Figure 3.
Statistical power curves for the four scenarios. Significance codes: *** p < 0.00001, ** p < 0.001, * p < 0.01, "." p < 0.05. To demonstrate this result, we used a Monte-Carlo procedure to randomly sample 36 observations from the simulated data.

4.2. Verification with Published Data

The best approach to demonstrate the validity of the simulation approach would be to test it with observed data. To use our approach, we need the parameter estimates from a reported study to serve as effect sizes. Briefly, in Chen's study, participants were 469 virgins in the control group of a randomized controlled trial to assess the effect of an HIV behavioral prevention intervention program [22] [23]. The participants, in grade 6 in the Bahamian public schools, were randomly assigned to receive either the intervention or control condition. They were followed every 6 months for up to 24 months at the time the analysis was conducted. A participant was categorized as having initiated sex if he or she had the first penile-vaginal sexual intercourse during the follow-up period. In addition to sexual initiation, the likelihood of initiating sex was also assessed using a 5-point rating scale, with 1 = very unlikely to have sex in the next 6 months and 5 = very likely to have sex. A sexual progression index (SPI) was thus created as the dependent variable for the modeling analysis: SPI = 1 for participants who never had sex and reported being very unlikely to have sex; SPI = 2 for participants who never had sex but were unsure if they were going to have sex in the next 6 months; SPI = 3 for participants who never had sex but reported being very likely to have sex in the next 6 months; and SPI = 4 for participants who initiated sex. In addition to SPI, age was used as the asymmetry variable. To verify the simulation-based method, the parameter effect size estimates were obtained from the paper. Figure 4 presents the sample size-power curve.
From the figure it can be seen that the estimated sample size to achieve 85% power is 153. This sample size is much smaller than the sample of 469 used in the original study.

5. Discussions

In cases where an analytical solution to power analysis and sample size determination is difficult, simulation represents an ideal alternative, as recommended in [16] [17] [24]. In this paper, we reported a novel simulation-based approach we developed to estimate statistical power and to compute sample size for Guastello's polynomial cusp catastrophe model. The method was developed based on statistical power theory and our understanding of Guastello's cusp polynomial regression modeling approach. The computing method is programmed in freely available software. With this approach, researchers can compute statistical power and estimate sample size if they plan to conduct cusp modeling analysis using Guastello's polynomial regression method. A detailed introduction to the method can be found in [6] [7] [21]. Data needed for our method include parameter effect size estimates for the intercept and the five model parameters. To make the presentation easier, we confined this novel simulation approach to the situation of one regressor for each control variable in the cusp model. The approach can be easily adopted and extended to multiple regressors for each of the control variables. Figure 4. Power curve for Chen et al. (2010). The estimated sample size for power of 0.85 is 153. More and more data suggest the utility of the cusp modeling approach in characterizing a number of human behaviors, particularly health risk behaviors such as tobacco smoking, alcohol consumption, hardcore drug use, dating violence, and unprotected sex [10] [11] [14] [21] [25] [26]. The methods we reported in this paper provide a useful tool for researchers to more effectively design their research to investigate these risk behaviors and to assess intervention programs for risk reduction.
In conducting this study, we also noted that previous studies published in the literature do not report adequate information for power analysis. We highly recommend that journal editors ask authors to report all parameter estimates. There are a number of strengths to the method we present in this study: the principle and the computing process are not difficult to follow; the data needed for the computation can be obtained from published reports; and the computing software is freely available. This research was supported in part by two NIH grants, one from the National Institute on Drug Abuse (NIDA, R01 DA022730, PI: Chen X) and another from the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD, R01HD075635, PIs: Chen X and Chen D).
{"url":"https://scirp.org/journal/paperinformation?paperid=51467","timestamp":"2024-11-08T18:08:52Z","content_type":"application/xhtml+xml","content_length":"153001","record_id":"<urn:uuid:313886b6-bcc8-4481-ae2c-d6137fcc1879>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00290.warc.gz"}
I am a principal research fellow at Universiteit Antwerpen. Previously I was a postdoctoral research associate in Number Theory at the University of Bath, and in the Browning Group at the Institute of Science and Technology Austria. I obtained my Ph.D. at Leiden University under the supervision of Dr. Martin Bright. My research focusses on the application of algebraic geometry in number theory. I am, for example, interested in local-to-global principles, such as the Hasse principle and several forms of approximation for both rational and integral points, and in geometric obstructions to these principles. I have a particular interest in varieties given by multiple quadratic equations, in particular del Pezzo surfaces of degree 4, and the main geometric obstruction that features in my work is the Brauer–Manin obstruction. I also use analytic number theory in my research, most prominently in my work on the Manin conjecture and the Loughran–Smeets conjecture. I also have an interest in effective techniques for points on curves and surfaces. I am fascinated by the endless geometric and analytical techniques one can use to study some obvious and seemingly simple questions in number theory. Contact: Julian Lyczak Office: M.G.227
{"url":"https://julianlyczak.nl/","timestamp":"2024-11-12T06:06:32Z","content_type":"text/html","content_length":"35066","record_id":"<urn:uuid:1fce4633-d099-415d-86aa-c15cd4818ed3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00382.warc.gz"}
Adjacent Angles - Definition, Examples & Practice Questions

Subtract the known angle measure from the total angle measure of the adjacent angles. Total angle measure = 180^{\circ}. Known angle measure (angle PUL) = 45^{\circ}. 180^{\circ} - 45^{\circ} = 135^{\circ}. So the missing angle measure (angle JUP) is 135^{\circ}.

What are adjacent angles? Adjacent angles are two angles that are side by side and share a common vertex and a common side.

What are the properties of adjacent angles? The properties of adjacent angles are:
- they share a common vertex
- they share a common side
- they do not overlap
- they can be part of a linear pair or have other angle relationships, like being complementary or supplementary

What is a linear pair of angles? A linear pair is a pair of adjacent angles that combine to form a straight angle. The angles in a linear pair are supplementary angles, meaning their measures add up to 180^{\circ}.

When are adjacent angles complementary angles? Adjacent angles are complementary when the sum of their measures is equal to 90^{\circ}.

When are adjacent angles supplementary angles? Adjacent angles are supplementary when the sum of their measures is equal to 180^{\circ}.

Can vertical angles be adjacent angles? No, vertical angles cannot be adjacent angles. Vertical angles are formed when two lines intersect, and they are opposite to each other. Adjacent angles are next to each other.
{"url":"https://thirdspacelearning.com/us/math-resources/topic-guides/geometry/adjacent-angles/","timestamp":"2024-11-13T17:54:01Z","content_type":"text/html","content_length":"252038","record_id":"<urn:uuid:c3ae32a7-8eb8-4744-83ac-07d17601e4c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00684.warc.gz"}
Regular language In theoretical computer science and formal language theory, a regular language (also called a rational language^[1]^[2]) is a formal language that can be expressed using a regular expression, in the strict sense of the latter notion used in theoretical computer science (as opposed to many regular expressions engines provided by modern programming languages, which are augmented with features that allow recognition of languages that cannot be expressed by a classic regular expression). Alternatively, a regular language can be defined as a language recognized by a finite automaton. The equivalence of regular expressions and finite automata is known as Kleene's theorem^[3] (after American mathematician Stephen Cole Kleene). In the Chomsky hierarchy, regular languages are defined to be the languages that are generated by Type-3 grammars (regular grammars). Regular languages are very useful in input parsing and programming language design. Formal definition The collection of regular languages over an alphabet Σ is defined recursively as follows: • The empty language Ø, and the empty string language {ε} are regular languages. • For each a ∈ Σ (a belongs to Σ), the singleton language {a} is a regular language. • If A and B are regular languages, then A ∪ B (union), A • B (concatenation), and A* (Kleene star) are regular languages. • No other languages over Σ are regular. See regular expression for its syntax and semantics. Note that the above cases are in effect the defining rules of regular expression. All finite languages are regular; in particular the empty string language {ε} = Ø* is regular. Other typical examples include the language consisting of all strings over the alphabet {a, b} which contain an even number of as, or the language consisting of all strings of the form: several as followed by several bs. 
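As a concrete illustration of the last example, the "even number of a's" language is accepted by a two-state finite automaton; a minimal sketch in Python (the state encoding is arbitrary):

```python
# Two-state DFA over {a, b} accepting strings with an even number of a's.
# State 0 = even count so far (start state, accepting); state 1 = odd count.
DELTA = {(0, "a"): 1, (0, "b"): 0,
         (1, "a"): 0, (1, "b"): 1}

def accepts(s: str) -> bool:
    state = 0
    for ch in s:
        state = DELTA[(state, ch)]
    return state == 0  # accept iff we end on an even count of a's

print(accepts("abba"), accepts("ab"))  # True False
```

The empty string is accepted (zero is even), matching the fact that the start state is accepting.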
A simple example of a language that is not regular is the set of strings { a^nb^n | n ≥ 0 }.^[4] Intuitively, it cannot be recognized with a finite automaton, since a finite automaton has finite memory and it cannot remember the exact number of a's. Techniques to prove this fact rigorously are given below. Equivalent formalisms A regular language satisfies the following equivalent properties: 1. it is the language of a regular expression (by the above definition) 2. it is the language accepted by a nondeterministic finite automaton (NFA)^[note 1]^[note 2] 3. it is the language accepted by a deterministic finite automaton (DFA)^[note 3]^[note 4] 4. it can be generated by a regular grammar^[note 5]^[note 6] 5. it is the language accepted by an alternating finite automaton 6. it can be generated by a prefix grammar 7. it can be accepted by a read-only Turing machine 8. it can be defined in monadic second-order logic (Büchi-Elgot-Trakhtenbrot theorem^[5]) 9. it is recognized by some finite monoid M, meaning it is the preimage { w∈Σ^* | f(w)∈S } of a subset S of a finite monoid M under a monoid homomorphism f: Σ^* → M from the free monoid on its alphabet^[note 7] 10. the number of equivalence classes of its "syntactic relation" ~ is finite^[note 8]^[note 9] (this number equals the number of states of the minimal deterministic finite automaton accepting L.) Properties 9. and 10. are purely algebraic approaches to define regular languages; a similar set of statements can be formulated for a monoid M⊂Σ^*. In this case, equivalence over M leads to the concept of a recognizable language. Some authors use one of the above properties different from "1." as alternative definition of regular languages. Some of the equivalences above, particularly those among the first four formalisms, are called Kleene's theorem in textbooks. Precisely which one (or which subset) is called such varies between authors. One textbook calls the equivalence of regular expressions and NFAs ("1." 
and "2." above) "Kleene's theorem".^[6] Another textbook calls the equivalence of regular expressions and DFAs ("1." and "3." above) "Kleene's theorem".^[7] Two other textbooks first prove the expressive equivalence of NFAs and DFAs ("2." and "3.") and then state "Kleene's theorem" as the equivalence between regular expressions and finite automata (the latter said to describe "recognizable languages").^[2]^[8] A linguistically oriented text first equates regular grammars ("4." above) with DFAs and NFAs, calls the languages generated by (any of) these "regular", after which it introduces regular expressions which it terms to describe "rational languages", and finally states "Kleene's theorem" as the coincidence of regular and rational languages.^[9] Other authors simply define "rational expression" and "regular expression" as synonymous and do the same with "rational language" and "regular language".

Closure properties The regular languages are closed under various operations; that is, if the languages K and L are regular, so is the result of the following operations: • the set theoretic Boolean operations: union K ∪ L, intersection K ∩ L, and the complement of L, hence also the relative complement K-L.^[10] • the regular operations: union K ∪ L, concatenation K ∘ L, and Kleene star L^*.^[11] • the trio operations: string homomorphism, inverse string homomorphism, and intersection with regular languages. As a consequence they are closed under arbitrary finite state transductions, like quotient K / L with a regular language. Even more, regular languages are closed under quotients with arbitrary languages: if L is regular then L/K is regular for any K. • the reverse (or mirror image) L^R.
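Closure under intersection is witnessed by the product construction: run both DFAs in lockstep, and let a pair state accept iff both components accept. A small illustrative sketch (the two example DFAs are chosen for illustration):

```python
from itertools import product

# Each DFA is (states, start, accepting_set, delta) over the alphabet {a, b}.
even_a = ({0, 1}, 0, {0}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1})
ends_b = ({0, 1}, 0, {1}, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1})

def intersect(d1, d2):
    """Product construction: states are pairs, accepting iff both accept."""
    (q1, s1, f1, t1), (q2, s2, f2, t2) = d1, d2
    states = set(product(q1, q2))
    delta = {((p, q), c): (t1[(p, c)], t2[(q, c)])
             for (p, q) in states for c in "ab"}
    accepting = {(p, q) for (p, q) in states if p in f1 and q in f2}
    return (states, (s1, s2), accepting, delta)

def accepts(dfa, s):
    _, state, accepting, delta = dfa
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

both = intersect(even_a, ends_b)
print(accepts(both, "aab"))  # True: an even number of a's, and ends in b
print(accepts(both, "ab"))   # False: an odd number of a's
```

The same pairing idea, with acceptance defined by "either component accepts", gives closure under union.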
Decidability properties Given two deterministic finite automata A and B, it is decidable whether they accept the same language.^[12] As a consequence, using the above closure properties, the following problems are also decidable for arbitrarily given deterministic finite automata A and B, with accepted languages L[A] and L[B], respectively: • Containment: is L[A] ⊆ L[B] ?^[note 10] • Disjointness: is L[A] ∩ L[B] = {} ? • Emptiness: is L[A] = {} ? • Universality: is L[A] = Σ^* ? • Membership: given a ∈ Σ^*, is a ∈ L[B] ? For regular expressions, the universality problem is NP-complete already for a singleton alphabet.^[13] For larger alphabets, that problem is PSPACE-complete.^[14]^[15] If regular expressions are extended to also allow a squaring operator, with "A^2" denoting the same as "AA", still just regular languages can be described, but the universality problem has an exponential space lower bound,^[16]^[17]^[18] and is in fact complete for exponential space with respect to polynomial-time reduction.^[19] Complexity results In computational complexity theory, the complexity class of all regular languages is sometimes referred to as REGULAR or REG and equals DSPACE(O(1)), the decision problems that can be solved in constant space (the space used is independent of the input size). REGULAR ≠ AC^0, since REGULAR (trivially) contains the parity problem of determining whether the number of 1 bits in the input is even or odd, and this problem is not in AC^0.^[20] On the other hand, REGULAR does not contain AC^0, because some nonregular languages, such as the language of palindromes, can be recognized in AC^0.^[21] If a language is not regular, it requires a machine with at least Ω(log log n) space to recognize it (where n is the input size).^[22] In other words, DSPACE(o(log log n)) equals the class of regular languages. In practice, most nonregular problems are solved by machines taking at least logarithmic space.
Location in the Chomsky hierarchy To locate the regular languages in the Chomsky hierarchy, one notices that every regular language is context-free. The converse is not true: for example the language consisting of all strings having the same number of a's as b's is context-free but not regular. To prove that a language such as this is not regular, one often uses the Myhill–Nerode theorem or the pumping lemma among other methods. Important subclasses of regular languages include • Finite languages - those containing only a finite number of words.^[24] These are regular languages, as one can create a regular expression that is the union of every word in the language. • Star-free languages, those that can be described by a regular expression constructed from the empty symbol, letters, concatenation and all boolean operators including complementation but not the Kleene star: this class includes all finite languages.^[25] • Cyclic languages, satisfying the conditions uv ∈ L ⇔ vu ∈ L and w ∈ L ⇔ w^n ∈ L.^[26]^[27] The number of words in a regular language Let s_L(n) denote the number of words of length n in L. The ordinary generating function for L is the formal power series S_L(z) = Σ_{n≥0} s_L(n) z^n. The generating function of a language L is a rational function if L is regular.^[26] Hence for any regular language L there exist an integer constant n₀, complex constants λ_1, …, λ_k and complex polynomials p_1(x), …, p_k(x) such that for every n ≥ n₀ the number s_L(n) of words of length n in L is p_1(n)λ_1^n + ⋯ + p_k(n)λ_k^n.^[28]^[29]^[30]^[31] Thus, non-regularity of certain languages can be proved by counting the words of a given length in L. Consider, for example, the Dyck language of strings of balanced parentheses. The number of words of length 2n in the Dyck language is equal to the Catalan number C_n, which is not of the form p(n)λ^n, witnessing the non-regularity of the Dyck language. Care must be taken since some of the eigenvalues could have the same magnitude.
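The word counts s_L(n) can be computed from a DFA for L by taking powers of its transfer matrix, whose eigenvalues are exactly the λ_i above. A sketch for the regular language of binary strings avoiding the substring "11" (the example language and state encoding are chosen for illustration):

```python
# Transfer-matrix word counting for binary strings with no "11" substring.
# States: 0 = start / just read '0'; 1 = just read '1'. The rejecting sink
# state is omitted. M[i][j] counts single-symbol transitions from i to j.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def count_words(n):
    m = [[1, 1],   # from state 0: '0' -> state 0, '1' -> state 1
         [1, 0]]   # from state 1: '0' -> state 0 ('1' would reject)
    acc = [[1, 0], [0, 1]]          # identity matrix
    for _ in range(n):
        acc = mat_mul(acc, m)
    return acc[0][0] + acc[0][1]    # start in state 0; both states accept

print([count_words(n) for n in range(1, 6)])  # [2, 3, 5, 8, 13]
```

The counts are Fibonacci numbers: the matrix's eigenvalues are (1 ± √5)/2, so s_L(n) grows like the golden ratio raised to the n-th power, in line with the p(n)λ^n form above.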
For example, the number of words of length n in the language of all binary words of even length is not of the form p(n)λ^n, but the numbers of words of even or odd length are of this form; the corresponding eigenvalues are 2 and -2. In general, for every regular language there exists a constant such that for all , the number of words of length is asymptotically .^[32] The zeta function of a language L is^[26] ζ_L(z) = exp(Σ_{n≥1} s_L(n) z^n / n). The zeta function of a regular language is not in general rational, but that of a cyclic language is.^[33]^[34] The notion of a regular language has been generalized to infinite words (see ω-automata) and to trees (see tree automaton). Rational set generalizes the notion (of regular/rational language) to monoids that are not necessarily free. Likewise, the notion of a recognizable language (by a finite automaton) has a namesake in the recognizable sets over a monoid that is not necessarily free. Howard Straubing notes in relation to these facts that “The term "regular language" is a bit unfortunate. Papers influenced by Eilenberg's monograph^[35] often use either the term "recognizable language", which refers to the behavior of automata, or "rational language", which refers to important analogies between regular expressions and rational power series. (In fact, Eilenberg defines rational and recognizable subsets of arbitrary monoids; the two notions do not, in general, coincide.) This terminology, while better motivated, never really caught on, and "regular language" is used almost universally.”^[36] Rational series is another generalization, this time in the context of a formal power series over a semiring. This approach gives rise to weighted rational expressions and weighted automata. In this algebraic context, the regular languages (corresponding to Boolean-weighted rational expressions) are usually called rational languages.^[37]^[38] Also in this context, Kleene's theorem finds a generalization called the Kleene–Schützenberger theorem. • Berstel, Jean; Reutenauer, Christophe (2011).
Noncommutative Rational Series with Applications. Encyclopedia of Mathematics and Its Applications. 137. Cambridge: Cambridge University Press. ISBN 978-0-521-19022-0. Zbl 1250.68007.
• Eilenberg, Samuel (1974). Automata, Languages, and Machines. Volume A. Pure and Applied Mathematics. 58. New York: Academic Press. Zbl 0317.94045.
• Salomaa, Arto (1981). Jewels of Formal Language Theory. Pitman Publishing. ISBN 0-273-08522-0. Zbl 0487.68064.
• Sipser, Michael (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Zbl 1169.68300. Chapter 1: Regular Languages, pp. 31–90. Subsection "Decidable Problems Concerning Regular Languages" of section 4.1: Decidable Languages, pp. 152–155.
• Philippe Flajolet and Robert Sedgewick, Analytic Combinatorics: Symbolic Combinatorics. Online book, 2002.
• John E. Hopcroft; Jeffrey D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. ISBN 0-201-02988-X.
• Alfred V. Aho, John E. Hopcroft and Jeffrey D. Ullman (1974). The Design and Analysis of Computer Algorithms. Addison-Wesley.
1. Ruslan Mitkov (2003). The Oxford Handbook of Computational Linguistics. Oxford University Press. p. 754. ISBN 978-0-19-927634-9.
2. Mark V. Lawson (2003). Finite Automata. CRC Press. pp. 98–103. ISBN 978-1-58488-255-8.
3. Sheng Yu (1997). "Regular languages". In Grzegorz Rozenberg; Arto Salomaa. Handbook of Formal Languages: Volume 1. Word, Language, Grammar. Springer. p. 41. ISBN 978-3-540-60420-4.
4. Eilenberg (1974), p. 16 (Example II, 2.8) and p. 25 (Example II, 5.2).
5. M. Weyer: Chapter 12 – Decidability of S1S and S2S, p. 219, Theorem 12.26. In: Erich Grädel, Wolfgang Thomas, Thomas Wilke (eds.): Automata, Logics, and Infinite Games: A Guide to Current Research. Lecture Notes in Computer Science 2500, Springer 2002.
6. Robert Sedgewick; Kevin Daniel Wayne (2011). Algorithms. Addison-Wesley Professional. p. 794. ISBN 978-0-321-57351-3.
7.
Jean-Paul Allouche; Jeffrey Shallit (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. p. 129. ISBN 978-0-521-82332-6.
8. Kenneth Rosen (2011). Discrete Mathematics and Its Applications, 7th edition. McGraw-Hill Science. pp. 873–880.
9. Horst Bunke; Alberto Sanfeliu (January 1990). Syntactic and Structural Pattern Recognition: Theory and Applications. World Scientific. p. 248. ISBN 978-9971-5-0566-0.
10. Salomaa (1981), p. 28.
11. Salomaa (1981), p. 27.
12. Hopcroft, Ullman (1979), Theorem 3.8, p. 64; see also Theorem 3.10, p. 67.
13. Aho, Hopcroft, Ullman (1974), Exercise 10.14, p. 401.
14. Hunt, H. B., III (1973), "On the time and tape complexity of languages. I", Fifth Annual ACM Symposium on Theory of Computing (Austin, Tex., 1973), Assoc. Comput. Mach., New York, pp. 10–19.
15. Harry Bowen Hunt III (Aug 1973). On the Time and Tape Complexity of Languages (PDF) (Ph.D. thesis). TR 73-182. Cornell University.
16. Hopcroft, Ullman (1979), Theorem 13.15, p. 351.
17. A. R. Meyer & L. J. Stockmeyer (Oct 1972). The Equivalence Problem for Regular Expressions with Squaring Requires Exponential Space (PDF). 13th Annual IEEE Symp. on Switching and Automata Theory. pp. 125–129.
18. L. J. Stockmeyer & A. R. Meyer (1973). Word Problems Requiring Exponential Time (PDF). Proc. 5th Ann. Symp. on Theory of Computing (STOC). ACM. pp. 1–9.
19. Hopcroft, Ullman (1979), Corollary, p. 353.
20. Furst, M.; Saxe, J. B.; Sipser, M. (1984). "Parity, circuits, and the polynomial-time hierarchy". Math. Systems Theory. 17: 13–27. doi:10.1007/bf01744431.
21. Cook, Stephen; Nguyen, Phuong (2010). Logical Foundations of Proof Complexity (1st publ. ed.). Ithaca, NY: Association for Symbolic Logic. p. 75. ISBN 0-521-51729-X.
22. J. Hartmanis, P. L. Lewis II, and R. E. Stearns. Hierarchies of memory-limited computations. Proceedings of the 6th Annual IEEE Symposium on Switching Circuit Theory and Logic Design, pp. 179–190. 1965.
23.
A finite language should not be confused with a (usually infinite) language generated by a finite automaton.
24. Volker Diekert; Paul Gastin (2008). "First-order definable languages". In Jörg Flum; Erich Grädel; Thomas Wilke. Logic and Automata: History and Perspectives (PDF). Amsterdam University Press. ISBN 978-90-5356-576-6.
25. Honkala, Juha (1989). "A necessary condition for the rationality of the zeta function of a regular language" (PDF). Theor. Comput. Sci. 66 (3): 341–347. doi:10.1016/0304-3975(89)90159-x. Zbl 0675.68034.
26. Berstel & Reutenauer (2011), p. 220.
27. Flajolet & Sedgewick, section V.3.1, equation (13).
28. Proof of the theorem for arbitrary DFAs: http://cs.stackexchange.com/a/11333/683
29. Flajolet & Sedgewick (2002), Theorem V.3.
30. Berstel, Jean; Reutenauer, Christophe (1990). "Zeta functions of formal languages". Trans. Am. Math. Soc. 321 (2): 533–546. doi:10.1090/s0002-9947-1990-0998123-x. Zbl 0797.68092.
31. Berstel & Reutenauer (2011), p. 222.
32. Samuel Eilenberg. Automata, Languages, and Machines. Academic Press. In two volumes, "A" (1974, ISBN 9780080873749) and "B" (1976, ISBN 9780080873756), the latter with two chapters by Bret Tilson.
33. Straubing, Howard (1994). Finite Automata, Formal Logic, and Circuit Complexity. Progress in Theoretical Computer Science. Basel: Birkhäuser. p. 8. ISBN 3-7643-3719-2. Zbl 0816.68086.
34. Berstel & Reutenauer (2011), p. 47.
35. Sakarovitch, Jacques (2009). Elements of Automata Theory. Translated from the French by Reuben Thomas. Cambridge: Cambridge University Press. p. 86. ISBN 978-0-521-84425-3. Zbl 1188.68177.

Further reading
• Kleene, S. C.: Representation of events in nerve nets and finite automata. In: Shannon, C. E., McCarthy, J. (eds.) Automata Studies, pp. 3–41. Princeton University Press, Princeton (1956); a slightly modified version of his 1951 RAND Corporation report of the same title, RM704.
• Sakarovitch, J. "Kleene's theorem revisited". Lecture Notes in Computer Science.
1987: 39–50. doi:10.1007/3540185356_29.

This article is issued from Wikipedia, version of 11/5/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
Representation and Reality by Language: How to make a home quantum computer?

EasyChair Preprint 3371, 14 pages. Date: May 11, 2020

A set theory model of reality, representation and language based on the relation of completeness and incompleteness is explored. The problem of completeness of mathematics is linked to its counterpart in quantum mechanics. The model includes two Peano arithmetics or Turing machines independent of each other. The complex Hilbert space underlying quantum mechanics as the base of its mathematical formalism is interpreted as a generalization of Peano arithmetic: it is a doubled infinite set of doubled Peano arithmetics having a remarkable symmetry with respect to the axiom of choice. The quantity of information is interpreted as the number of elementary choices (bits). Quantum information is seen as the generalization of information to infinite sets or series. The equivalence of this model to a quantum computer is demonstrated. The condition for the Turing machines to be independent of each other is reduced to a state of Nash equilibrium between them. Two related models of language, as a game in the sense of game theory and as an ontology of metaphors (all mappings which are not one-to-one, i.e., not representations of reality in a formal sense), are deduced.

Keyphrases: Turing machine, completeness, halting problem, incompleteness, information, language, quantum computer, quantum information, quantum mechanics, reality, representation

Link: https://easychair.org/publications/preprint/62Gv
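The abstract's reading of the quantity of information as a count of elementary binary choices can be made concrete with a minimal sketch (an illustration of that standard counting idea, not code from the preprint): singling out one of n equally likely alternatives requires ⌈log2 n⌉ yes/no choices.

```python
# Minimal sketch (not from the preprint): information measured as the
# number of elementary binary choices (bits) needed to select one of
# n equally likely alternatives.
import math

def elementary_choices(n):
    """Whole yes/no choices required to single out one alternative of n (n >= 1)."""
    return math.ceil(math.log2(n)) if n > 1 else 0

assert elementary_choices(2) == 1   # one yes/no choice
assert elementary_choices(8) == 3   # three nested choices
assert elementary_choices(5) == 3   # rounded up to whole choices
```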
The Statistical Damage Constitutive Model of the Mechanical Properties of Alkali-Resistant Glass Fiber Reinforced Concrete

College of Civil and Transportation Engineering, Hohai University, Nanjing 210098, China
College of Civil Engineering, Anhui Jianzhu University, Hefei 230022, China
Author to whom correspondence should be addressed.
Submission received: 31 May 2020 / Revised: 30 June 2020 / Accepted: 5 July 2020 / Published: 8 July 2020

Testing has shown that alkali-resistant glass fiber reinforced concrete (AR-GFRC) offers greatly improved tensile strength, toughness, durability, and cracking resistance. However, the constitutive relationship of fiber reinforced concrete under complex stress remains a difficult theoretical problem. In order to investigate the microscopic damage evolution and failure mechanism of AR-GFRC, the meso-statistical damage theory, micro-continuum theory, and composite material theory were applied, and uniaxial tensile tests of two types of AR-GFRC were conducted. A new damage variable expression for the AR-GFRC was proposed, and the stress-strain curve was redefined by considering the residual strength, based on experimentally fitted and statistical parameters. Assuming a Weibull distribution, a statistical damage constitutive model of the deformation process of the AR-GFRC was developed that accounts for the residual strength effect; detailed calculation methods were developed to determine the mechanical and statistical parameters of the concrete. The validation results show that the theoretical stress-strain curve of the constitutive model is in good agreement with the experimental curve, and the trends are consistent.

1. Introduction

Concrete is an indispensable engineering material used in construction for a wide range of long-term applications, but it gradually shows defects such as plastic shrinkage cracking, fatigue, and brittleness [ ].
Once a concrete structure is destroyed under complex geological conditions, it endangers the living environment and safety of human beings [ ]. Experimental research on the long-term mechanical properties of concrete has shown that the tensile properties, crack resistance, and brittleness can be significantly improved when a certain amount of alkali-resistant glass fiber is added [ ]. However, the microscopic damage evolution and failure mechanism of alkali-resistant glass fiber reinforced concrete (AR-GFRC) are not yet clearly understood. Therefore, the long-term mechanical properties of concrete require improvement, and the microscopic damage evolution and failure mechanism of AR-GFRC must be explored and understood; this is an urgent problem that must be addressed to achieve safe, reliable, and stable operation of concrete structures. The tensile properties and crack resistance of concrete are greatly influenced by glass fiber, as demonstrated by Tassew and Lubell [ ] and Kizilkanat [ ]. Tests of AR-GFRC were carried out by Ghugal and Deshmukh [ ], in which the compressive strength, splitting tensile strength, and flexural strength of concrete with different fiber contents were obtained, and an equation determining the mechanical parameters of the concrete was put forward. Yildizel et al. [ ] proposed a stochastic distribution method to investigate micro-cracks and micro-defects and improve the mechanical properties of AR-GFRC. Research was conducted on the durability of AR-GFRC, and X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to determine the damage evolution of AR-GFRC with different fiber dosages (0.6–2.4%). The long-term mechanical properties of AR-GFRC were determined experimentally in references [ ]. Damage in concrete is the result of the origination, nucleation, and expansion of cracks caused by microscopic local tensile strain.
Concrete is a heterogeneous composite material, and its mechanical behavior is nonlinear under complex, high levels of stress (30–80% of the compressive strength). However, the accuracy and precision of a nonlinear analysis depend on the constitutive relationship [ ]. At present, theoretical research on the constitutive relationship of quasi-brittle materials is based on the statistical meso-damage theory. This damage concept was introduced by Dougill [ ], and the damage constitutive model and the evolution equations of the internal variables, based on thermodynamic theory, were established by Lemaitre [ ]. Subsequently, a stochastic damage model for concrete based on microscopic statistical mechanics was put forward by Kandarpa [ ]. Based on these research results, the parallel bar system (PBS) model was proposed by Krajcinovic [ ]. It assumes that the micro-unit strength is subject to some statistical distribution, so the microscopic heterogeneity of the quasi-brittle material is taken into account. Since then, different improvements have been made to the PBS model in references [ ]. In these studies, the constitutive models were based on the macroscopic continuous damage mechanics theory to determine the microscopic damage mechanism, the damage variables, and the damage evolution, and a statistical probability distribution method was used to characterize the damage evolution microscopically. However, most damage constitutive models are based on Lemaitre's hypothesis of strain equivalence [ ], under which the dominant damage type in quasi-brittle materials is the formation of a cavity [ ]. That is, these models assume that no bearing capacity (residual strength) remains after the concrete is destroyed, and this behavior requires further clarification. In this study, the stress-strain curve and a new damage variable considering the post-peak residual strength were redefined based on experimental uniaxial tensile data of concrete.
The meso-statistical damage theory, micro-continuum theory, and composite material theory were used in conjunction to propose a constitutive equation describing the statistical damage evolution of AR-GFRC based on the Weibull distribution. The calculation method and definition of the mechanical and statistical parameters of the concrete are presented, which can provide theoretical support and reference data to ensure structural safety in concrete engineering.

2. Materials and Methods

2.1. Uniaxial Tensile Test Materials

Alkali-resistant glass fiber: Alkali-resistant glass fiber was used in the uniaxial tensile test, including Anti-Crak HD alkali-resistant fiber and Anti-Crak HP alkali-resistant glass fiber. The performance parameters are listed in Table 1.

Concrete matrix materials: In this test, ordinary Portland cement with a strength grade of 42.5 was used, together with a coarse aggregate of 4.75–20 mm continuously graded gravel with high strength, uniform shape and size, and no active silicon. The particle size distribution is shown in Table 2. The fine aggregate was natural medium and coarse river sand that was hard and clean. The mineral powder was S95 grade slag powder, and an SM-IV polycarboxylate superplasticizer was used.

2.2. Mix Proportion Design of Concrete Samples

The mix ratio was calculated using the methods described in [ ]. Zhang and Zhu [ ] carried out an indoor experiment on the mechanical properties of AR-GFRC and concluded that the tensile strength of AR-GFRC is related to the fiber content. Therefore, we used seven groups of mix ratios with two types of fibers and a concrete matrix (C35) to explore the relationship between fiber content and the tensile strength of AR-GFRC. The details are shown in Table 3.

2.3. Preparation and Test Scheme for Concrete Samples

An HJW-60 concrete mixer, an HZJ concrete shaking table, an HBY-40A concrete standard curing box, and a YJ-22 electric measuring instrument were used in the tensile test.
Three test specimens were selected for each group, and the test results were accurate to 0.1 MPa [ ]. The test scheme is shown in Figure 1. After the concrete specimens were cured, the surface moisture of the specimens was wiped off and a force transfer device was placed on both sides of each specimen. The specimen force sensor was attached, and the corresponding stress sensors were placed on both sides, with stress monitors on the left and right sides of the specimens. Each test piece was loaded continuously and uniformly at a rate of 0.05–0.08 MPa/s. When the deformation of the test piece began to increase rapidly, the loading was paused and adjusted until the test piece was destroyed, at which point the failure load was recorded.

3. Test Results and Analysis of the AR-GFRC

3.1. Effect of Fiber Content on the Tensile Strength of Concrete

To investigate the effects of the fiber content on the tensile strength of the concrete, HD12 and HP12 AR-glass fibers were added to the concrete matrix. The strength grade was C35 and the volume contents were 0%, 0.5%, 1%, and 1.5%. The tensile stress-strain curves of the AR-GFRC are shown in Figure 2. The tensile stress-strain curve is a macroscopic reflection of the concrete's tensile performance and of crack appearance and development, damage accumulation, crack penetration, and failure of the concrete under a tensile load. The addition of alkali-resistant glass fibers to concrete can significantly improve its tensile strength, as shown in Figure 2 and Figure 3. Compared with the concrete without fiber, the 3-day tensile strength of the concrete with glass fiber increased by 11.2% to 29.7%, the 28-day tensile strength increased by 7.4% to 24.7%, and the 180-day tensile strength increased by 1.8% to 10.6%.
Moreover, the tensile strength of the concrete first increases and then decreases with increasing alkali-resistant glass fiber content; the peak strength and tensile strength reach their maxima at a fiber content of 1%. The AR-GFRC specimens had higher peak strength and higher residual strength after cracking than the concrete without fiber. The fibers have an obvious crack-controlling effect in concrete. The main reason is that there are micro-cracks of different sizes inside the concrete, and the effect of micro-cracks on the tensile strength is much greater than on the compressive strength. When alkali-resistant glass fibers are mixed in, they hinder the formation of these cracks, reducing the number of cracks and making them smaller, which reduces the stress intensity factor at the crack tip, eases the stress concentration there, and inhibits the initiation and propagation of cracks during loading. When the alkali-resistant glass fiber content is too large, the dispersion of the fibers in the concrete is poor, which reduces the compactness of the concrete and leads to decreases in the cracking and flexural strength.

3.2. Stress-Strain Curve of the Whole Process of the AR-GFRC

To study the complete uniaxial tensile stress-strain curves of the AR-GFRC, typical concrete uniaxial tensile test data from several researchers [ ] were selected for comparison. Yang [ ] introduced a new test method for measuring the complete uniaxial tensile stress-strain curve and obtained a full uniaxial tensile stress-strain curve of concrete. Chen [ ] used independently developed test equipment to conduct an axial tension test on two groups of samples of size 45 cm × 45 cm × 135 cm and, by fitting the test data, obtained an empirical formula for the complete curve of full-grade concrete.
Sun [ ] obtained the full stress-strain curve from a uniaxial tension test and, through data fitting, developed a mathematical model with clear physical significance and practical applicability, as shown in Figure 4. Based on the macroscopic failure morphology and tensile curves of the AR-GFRC obtained from the experiments, the stress-strain curve was redefined and divided into stages. As shown in Figure 5, the uniaxial tensile stress-strain curves of the AR-GFRC can be divided into four parts, of which the elastic deformation to micro-crack development stage (OA region) shows the most pronounced effects. Some important observations can be drawn:

Elastic deformation to micro-crack development stage (OA region)
As shown in Figure 5, the σ-ε curve is approximately linear in this stage; in the elastic deformation sub-stage the σ-ε curve of the concrete rises, and in the micro-crack development sub-stage the σ-ε curve remains linear. The linear segment extends from the start of loading to 70–80% of the ultimate stress.

Stress peak stage (AB region)
As the load continues to increase and the stress exceeds 70–80% of the ultimate stress, the concrete deforms plastically, the stress-deformation curve becomes slightly convex, and its slope decreases. Near the peak stress, the deformation increases while the stress does not; the stress then drops quickly, forming a descending segment roughly symmetrical with the rising segment. The stress-deformation curves measured over different gauge ranges differ considerably, and visible cracks in the test piece appear in the descending segment of the stress-deformation curve.

Progressive rupture stage (BC region)
The strains of both the plain concrete and the AR-GFRC increase rapidly, but the slope of the stress-strain curve of the plain concrete is smaller than that of the AR-GFRC, which indicates that the fiber content affects crack development and propagation.
In the AR-GFRC, therefore, the crack development and expansion speed are slow. As more micro-cracks develop and the cracks finally penetrate the concrete, the compression state changes to an expansion state, and the axial strain and volumetric strain rate increase rapidly. After the peak strength is reached, the internal structure of the AR-GFRC is destroyed and the number of cracks increases; the cracks grow, merge, and penetrate the concrete, eventually forming a macroscopic fracture surface. When the stress in the test piece drops to 45% to 50% of the maximum tensile stress, the stress-deformation curve reaches an inflection point, the deformation increases sharply, and the stress decreases slowly.

Post-rupture stage (CD region)
The stress decreases rapidly while the strain continues to increase, at a faster rate than in the previous stage. The deformation of the concrete specimens occurs through block slip along the macroscopic fracture surface. At the fracture, the alkali-resistant glass fibers are pulled out or broken, and the bearing capacity of the specimen decreases rapidly, although some bearing capacity remains. The test piece still carries a nominal stress of 15% to 20% (at this point, the macro-cracks of the test piece are fully developed and the effective tensile area is greatly reduced), indicating that the deformation retracts after the release of the concrete stress on both sides of the crack; the measured deformation is then mainly the width of the concrete crack. To present the uniaxial tensile results of the AR-GFRC more intuitively, the shapes of the samples after uniaxial tensile testing are shown in Figure 6 (HP-35 concrete is shown as an example).

4. Development of the Statistical Damage Constitutive Model for the AR-GFRC

4.1. Assumptions

There are various micro-cracks, pores, and defects in the interior of quasi-brittle materials such as concrete and rock, and stress concentration occurs under an external load.
After the crack penetration stage has been reached, some bearing capacity remains in the final failure stage of the concrete. The statistical meso-damage theory, micro-continuum theory, and composite material theory were used to define a new damage variable and to provide a more suitable statistical damage constitutive model of the AR-GFRC. The assumptions are as follows:

(1) The AR-GFRC consists of a series of isotropic elastic micro-units, each of which is an ideal elastomer with the same stiffness prior to destruction.
(2) The micro-unit strength follows the Weibull distribution.
(3) The concrete consists of an undamaged part and a damaged part, and the stress-strain relationship of the undamaged part obeys Hooke's law.
(4) Concrete damage occurs only under axial stress, damage occurs after a load has been applied, and failure occurs immediately after damage.

4.2. Derivation of the Statistical Damage Constitutive Model

4.2.1. The Constitutive Equation of the Elastic Section of Uniaxial Tensile Concrete

The uniaxial tensile deformation process of the AR-GFRC is divided into a nondestructive phase and a damage evolution phase. In the nondestructive phase the response is linear elastic, $\sigma_1 = E\varepsilon$, where $E$ is the elastic modulus of the alkali-resistant glass fiber concrete, which was derived and verified in previous research by Zhang Cong et al. [ ] and will not be repeated here.

4.2.2. The Constitutive Equation of the Inelastic Section in the Whole Process of Uniaxial Tensile Concrete

Based on the assumptions, the statistical damage constitutive model of the AR-GFRC, which considers the residual strength, is established. The micro-elements of the AR-GFRC were analyzed. The macroscopic nominal axial stress is denoted $\sigma_1$, and the stresses of the undamaged and damaged parts of the concrete are $\sigma_1'$ and $\sigma_{1r}$, respectively.
The cross-sectional area of the element is $A$, and the cross-sectional areas of the undamaged and damaged parts are $A_1$ and $A_2$, respectively, with $A = A_1 + A_2$. Based on assumption (1), we obtain:

$\sigma_1 A = \sigma_1' A_1 + \sigma_{1r} A_2$ (2)

where $D = A_2/(A_1 + A_2)$ is the axial damage variable or damage factor of the material [ ]. A damage variable correction coefficient $\delta$ is used to describe the residual strength after the peak strength [ ]; it is defined (Equation (3)) in terms of the residual strength $\sigma_r$ after the peak strength and the peak strength $\sigma_c$. Equation (2) can then be rewritten as:

$\sigma_1 = \sigma_1'(1 - \delta D) + \sigma_{1r}\,\delta D$ (4)

The deformation of the undamaged part obeys Hooke's law, $\sigma_1' = E\varepsilon$ [ ] (Equation (5)), so Equation (4) becomes:

$\sigma_1 = E\varepsilon(1 - \delta D) + \sigma_{1r}\,\delta D$ (6)

Based on the stress-strain characteristics of concrete and rock, it has been concluded that the microscopic mechanical properties of quasi-brittle materials follow the Weibull distribution [ ]. Therefore, it is assumed that the concrete damage variable also follows the Weibull distribution. The two-parameter Weibull distribution is defined as [ ]:

$D = 1 - \exp\left[-\left(\varepsilon/a\right)^b\right]$ (7)

where $\varepsilon$ is the strain, $a$ is the scale parameter, and $b$ is the shape parameter of the material. Substituting Equation (7) into Equation (6) gives:

$\sigma_1 = E\varepsilon\left\{1 - \delta\left\{1 - \exp\left[-\left(\varepsilon/a\right)^b\right]\right\}\right\} + \sigma_{1r}\,\delta\left\{1 - \exp\left[-\left(\varepsilon/a\right)^b\right]\right\}$ (8)

Let $\varepsilon_c$ denote the strain at the peak strength. The geometric conditions of the stress-strain curve of quasi-brittle materials are as follows [ ]:

I. $\varepsilon = 0$, $\sigma = 0$;
II. $\varepsilon = 0$, $\mathrm{d}\sigma/\mathrm{d}\varepsilon = E$;
III. $\sigma = \sigma_c$, $\varepsilon = \varepsilon_c$;
IV. $\sigma = \sigma_c$, $\mathrm{d}\sigma/\mathrm{d}\varepsilon = 0$.

Differentiating Equation (8) gives:

$\dfrac{\mathrm{d}\sigma}{\mathrm{d}\varepsilon} = E(1-\delta) + \delta \exp\left[-\left(\varepsilon/a\right)^b\right]\left(E - \dfrac{E b \varepsilon^b - \sigma_{1r}\, b\, \varepsilon^{b-1}}{a^b}\right)$ (9)

Conditions I and II are satisfied by Equation (8).
Substituting condition IV into Equation (9) gives:

$E(1-\delta) + \delta \exp\left[-\left(\varepsilon_c/a\right)^b\right]\left(E - \dfrac{E b \varepsilon_c^b - \sigma_{1r}\, b\, \varepsilon_c^{b-1}}{a^b}\right) = 0$ (10)

When the peak strength is reached, $\sigma_r = \sigma_c$ can be taken in the limit, that is, $(1-\delta) = 0$, while $\exp[-(\varepsilon_c/a)^b] \neq 0$. Hence:

$E - \dfrac{E b \varepsilon_c^b - \sigma_{1r}\,\delta\, b\, \varepsilon_c^{b-1}}{a^b} = 0$ (11)

From Equation (11), we obtain:

$a = \left[\dfrac{b\,\varepsilon_c^{b-1}\left(E\varepsilon_c - \sigma_{1r}\delta\right)}{E}\right]^{1/b}$ (12)

By substituting Equation (12) into Equation (8), $b$ can be obtained from condition III ($\sigma = \sigma_c$, $\varepsilon = \varepsilon_c$):

$b = \dfrac{E\varepsilon_c}{\left(E\varepsilon_c - \sigma_{1r}\delta\right)\ln\left[\dfrac{\left(E\varepsilon_c - \sigma_{1r}\right)\delta}{\sigma_c - E\varepsilon_c + \left(E\varepsilon_c - \sigma_{1r}\right)\delta}\right]}$ (13)

The residual strength of the AR-GFRC is usually determined from the peak strength model. Here, the residual strength is calculated using the Mohr–Coulomb model:

$\sigma_{1r} = \dfrac{2 c_r \cos\varphi_r}{\xi\left(1 - \sin\varphi_r\right)}$ (14)

where $c_r$ and $\varphi_r$ are the residual cohesion and residual internal friction angle of the material, and $\xi$ is a correction factor of the residual strength, which can be obtained by fitting the test data; $\xi$ = 22–25.

The composite material theory was introduced by Hilles and Ziara [ ]. The calculation model of the elastic modulus of the AR-GFRC is obtained from the mechanical equilibrium equation:

$E = E_f \rho_f + E_m \rho_m = E_f \rho_f + E_m\left(1 - \rho_f\right)$ (15)

where $E$ is the elastic modulus of the AR-GFRC, $E_f$ is the elastic modulus of the fiber, $E_m$ is the elastic modulus of the matrix, $\rho_f = A_f/A_a$ is the volume ratio of the fiber, and $\rho_m = A_m/A_a$ is the volume ratio of the matrix, with $A_f + A_m = A_a$. However, in practical applications and during testing, the distribution of the fiber material in the concrete depends on the mixing conditions [ ]. The effects of the fiber content, the fiber length, and the interfacial bonding characteristics on the material strength were considered; these effects are described by the coefficients $\eta_1$, $\eta_2$, and $\eta_3$ ($0 < \eta_i < 1$, $i = 1, 2, 3$), respectively.
We assume that the coefficients are independent, and $\eta$ is used to represent their combined effect (Equation (16)), where $\eta_i$ is the coefficient of the $i$th factor affecting the properties of the composite. Therefore, the elastic modulus of the composite with randomly distributed fiber can be expressed as:

$E = \eta E_f \rho_f + E_m\left(1 - \rho_f\right)$ (17)

By combining Equations (8), (12), (13) and (17), the statistical damage constitutive model of the AR-GFRC considering the residual strength is obtained:

$\sigma_1 = E\varepsilon\left\{1 - \delta + \delta\exp\left[-\left(\varepsilon/a\right)^b\right]\right\} + \delta\sigma_{1r}\left\{1 - \exp\left[-\left(\varepsilon/a\right)^b\right]\right\}$ (18)

The modulus and strength of the AR-GFRC are proportional to the volume content of the fiber, and $\eta$ is less than 1. However, the elastic modulus and strength of the alkali-resistant glass fiber are much greater than those of the concrete matrix. Therefore, the tensile statistical damage constitutive model of the AR-GFRC is:

$\sigma_1 = \begin{cases} E\varepsilon, & 0 \le \varepsilon \le \varepsilon_c \\ E\varepsilon\left\{1 - \delta + \delta\exp\left[-\left(\varepsilon/a\right)^b\right]\right\} + \delta\sigma_{1r}\left\{1 - \exp\left[-\left(\varepsilon/a\right)^b\right]\right\}, & \varepsilon > \varepsilon_c \end{cases}$ (19)

4.3. Parameter Determination of the Statistical Damage Constitutive Model

The parameters in the model include $E_f$, $E_m$, $\sigma_f$, $\sigma_m$, $\sigma_c$, $\varepsilon_c$, $\eta_i$, $n_a$, $c_r$, and $\varphi_r$. The mechanical parameters can be obtained from concrete tests [ ], and $\sigma_{1r}$ can be obtained from Equation (14). The statistical parameters $a$ and $b$ are obtained from Equations (12), (13) and (17); their expressions are repeated here:

$a = \left[\dfrac{b\,\varepsilon_c^{b-1}\left(E\varepsilon_c - \sigma_{1r}\delta\right)}{E}\right]^{1/b}, \qquad b = \dfrac{E\varepsilon_c}{\left(E\varepsilon_c - \sigma_{1r}\delta\right)\ln\left[\dfrac{\left(E\varepsilon_c - \sigma_{1r}\right)\delta}{\sigma_c - E\varepsilon_c + \left(E\varepsilon_c - \sigma_{1r}\right)\delta}\right]}$

5. Verification of the Statistical Damage Constitutive Model of the AR-GFRC

5.1. Verification by Field Test

5.1.1. Project Overview and Field Test

To apply the above research to actual projects, the tunnel portion of the Chenglan Railway Project of China Railway 14th Bureau was selected for field testing to verify the model's validity and applicability.
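As a numerical sketch of Equations (12), (13) and (19), the following computes the statistical parameters $a$ and $b$ and checks that the resulting post-peak branch passes through the peak point $(\varepsilon_c, \sigma_c)$ (geometric condition III) and softens beyond it. All parameter values are assumed for illustration, not taken from the paper's experiments, and $\delta$ is treated as a given input since its defining Equation (3) is not reproduced above.

```python
# Illustrative evaluation of the statistical damage model; assumed
# parameter values, not the paper's measured data.
import math

E = 30000.0      # elastic modulus of the composite, MPa (assumed)
eps_c = 1.5e-4   # strain at peak strength (assumed)
sigma_c = 4.2    # peak tensile strength, MPa (assumed)
sigma_r = 0.84   # residual strength sigma_1r from Eq. (14), MPa (assumed)
delta = 0.95     # damage correction coefficient (treated as given)

# Statistical parameters from Eqs. (13) and (12)
b = E * eps_c / ((E * eps_c - sigma_r * delta) *
                 math.log((E * eps_c - sigma_r) * delta /
                          (sigma_c - E * eps_c + (E * eps_c - sigma_r) * delta)))
a = (b * eps_c ** (b - 1) * (E * eps_c - sigma_r * delta) / E) ** (1.0 / b)

def damage(eps):
    """Weibull damage variable of Eq. (7)."""
    return 1.0 - math.exp(-((eps / a) ** b))

def sigma_post(eps):
    """Post-peak branch of Eq. (19), valid for eps >= eps_c."""
    u = math.exp(-((eps / a) ** b))
    return E * eps * (1 - delta + delta * u) + delta * sigma_r * (1 - u)

# D grows monotonically from 0 toward 1 ...
assert damage(0.0) == 0.0 and damage(eps_c) < damage(1.2 * eps_c) < 1.0
# ... the post-peak branch meets the peak point (condition III) ...
assert abs(sigma_post(eps_c) - sigma_c) < 1e-6 * sigma_c
# ... and the curve softens well below the peak strength afterwards
assert sigma_post(5 * eps_c) < sigma_c
```

Note that, by construction of Equation (13), the peak condition holds exactly; the choice of $\delta$ controls how much of the elastic term $E\varepsilon(1-\delta)$ survives after the peak.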
The mileage of the selected tunnel is DK194 + 198 to DK194 + 368, the tunnel width is 13.70 m, the central angle of the top arch is 120°, and the joints are well developed. For the field test, typical test sections with similar geological conditions, topography, and support conditions were selected: test Section 1 (DK194 + 210 to DK194 + 230), test Section 2 (DK194 + 270 to DK194 + 290), and test Section 3 (DK194 + 325 to DK194 + 345). Test Section 1 has an ordinary concrete sprayed layer, while test Sections 2 and 3 have alkali-resistant glass fiber sprayed layers with fiber contents of 1.0% HP and 1.0% HD, respectively. The trends in the stress of the concrete and the convergence values of the tunnel were obtained by field monitoring and used to qualitatively analyze and verify the validity of the above research.

5.1.2. On-Site Deformation Monitoring and Result Analysis

Stress variation in the concrete lining: The monitoring plan covered key typical locations of the tunnel, such as the tunnel vault, left arch waist, left wall waist, right vault, and right arch waist, lined with the ordinary sprayed layer and the alkali-resistant glass fiber sprayed layer. The monitoring data are shown in Figure 7. It can be seen from Figure 7 that the stress in the shotcrete lining increases with time (the negative sign represents the direction). After 25 days, the stresses in the two types of concrete sprayed layers increase slowly and gradually stabilize, mainly because the stress of the surrounding rock tends to stabilize and reach equilibrium at this time. For both sprayed layer structures, the stress at the left and right wall waists of the tunnel is the most pronounced, and the stress distribution in the alkali-resistant glass fiber reinforced concrete sprayed layer is more uniform than that in the ordinary concrete sprayed layer, which is consistent with the pressure distribution of the surrounding rock.
Therefore, it can be seen that the stress in the alkali-resistant glass fiber sprayed layer is much smaller than that in the ordinary concrete sprayed layer, and its distribution is more uniform. Variation of the convergence value around the tunnel According to the monitoring and analysis of the tunnel displacement under the ordinary spray layer and the alkali-resistant glass fiber spray layer lining, the convergence at the two arch waists and at the vault is shown in Figure 8 and Figure 9. As shown in Figure 8 and Figure 9, the convergence values at the two arch waists and at the vault of both the ordinary spray layer and the AR-GFRC spray layer first increase rapidly, approximately exponentially, and then gradually stabilize; after stabilization, the convergence values tend to equilibrium and no longer change. The convergence values at the two arch waists of the tunnel are greater than the convergence at the tunnel vault. Although the convergence trend of the AR-GFRC sprayed layer is the same as that of the ordinary concrete sprayed layer, the convergence values at the arch waists and vault of the ordinary layer are greater. This shows that the tensile strength of the concrete increased after adding alkali-resistant glass fiber, and that the fiber plays a certain role in restraining the tensile deformation of the concrete, so that the stress distribution in the surrounding rock is uniform and stress concentration does not easily occur. 5.2. Verification of the Theory and Laboratory Experiment To verify the accuracy and applicability of the proposed model, we used data for HD concrete and HP concrete, together with the experimental results for concrete with different fiber contents (0.5%, 1%, and 1.5%) after 7 d and 28 d of curing, to obtain the material parameters and statistical damage parameters.
The uniaxial tensile test curves were compared with the theoretical curves of the constitutive model, as shown in Figure 10 and Figure 11. The proposed constitutive model describes the post-peak damage evolution of the AR-GFRC and the trends and characteristics of the concrete strength in the stress state. The theoretical and experimental stress-strain curves are in good agreement for the different fiber contents, and the early tensile strength of the concrete was improved by the addition of alkali-resistant glass fiber. As shown in Figure 10a–f and Figure 11a–f, the proposed tensile statistical damage constitutive model of AR-GFRC can accurately describe the stress-strain evolution law for the different alkali-resistant glass fiber contents in this tensile test. With fiber contents in the range of 0 to 1.5%, the difference in 7 d peak strength between the experimental and theoretical results for HD and HP concrete under standard curing is small, and the 28 d peak strength shows similar patterns of change. The theoretical curve is consistent with the laboratory test data in the elastic stage, at the peak strength, and in the nonlinear post-peak segment. Different types of fibers (HD and HP) have little effect on the agreement between the theoretical and test curves. This shows that the proposed AR-GFRC tensile statistical damage model is also applicable to describing the stress-strain curves of concrete with HD and HP fibers. 6. Discussion AR-GFRC is used as an anchor spraying support material. The basic mechanical performance tests assume that the alkali-resistant glass fiber is in a continuous and uniform state. In practice, alkali-resistant glass fibers may bond or break during mixing, and it is difficult for them to be evenly distributed in the concrete [ ].
Based on this hypothesis, Cao [42] introduced the strain strength theory and the hypothesis that the particle strength of rock-like materials obeys a Weibull random distribution to build a statistical damage constitutive model under triaxial compression. On this basis, a uniaxial tensile test of AR-GFRC was carried out to further explore the statistical damage relationship. The relationship can be used to reflect the actual tensile state and cracking conditions of concrete structures in field engineering, as given in the verification section. When used in the sprayed concrete layer of a soft rock tunnel, its performance was better than that of an ordinary concrete sprayed layer, giving it great engineering application value. The statistical damage constitutive model of the AR-GFRC, which considers residual strength, is based on statistical meso-damage theory, microcontinuum theory, and composite material theory. It is assumed that the concrete consists of a series of isotropic elastic micro-units prior to failure and that the micro-unit strength follows a Weibull distribution; in addition, the interface between the alkali-resistant glass fiber and the concrete matrix is not considered. However, if these two assumptions are not met, the calculation formula may not be applicable. In the uniaxial tensile test, the lateral parts of the concrete were not damaged, and the part damaged by axial stress carried only the residual stress after axial failure. These observations are consistent with the results and stress-strain behavior of high strength concrete reinforced with glass fiber reported by Muhammed [43]. The stress-strain curve that takes the residual strength effect into consideration is less affected by the interface discontinuity problem and is more representative of actual conditions.
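The residual-strength statistical damage idea discussed above can be sketched numerically. The equation below is a generic Weibull damage form common in this literature (damage variable D = 1 - exp(-(eps/eps0)^m), with the intact fraction responding elastically and the failed fraction carrying only a residual stress sigma_r); it is an assumption for illustration, not the authors' exact fitted model, and all parameter values are hypothetical.

```python
import math

def weibull_damage(eps, eps0, m):
    # Damage variable from Weibull-distributed micro-unit strength:
    # D = 1 - exp(-(eps/eps0)^m); grows from 0 toward 1 with strain.
    return 1.0 - math.exp(-((eps / eps0) ** m))

def tensile_stress(eps, E, eps0, m, sigma_r):
    # Intact fraction (1 - D) responds elastically; the failed
    # fraction D still carries the residual strength sigma_r.
    D = weibull_damage(eps, eps0, m)
    return (1.0 - D) * E * eps + D * sigma_r

# Hypothetical parameters (stress in MPa, dimensionless strain),
# not fitted to the paper's test data:
E, eps0, m, sigma_r = 30e3, 1.2e-4, 2.0, 0.5
curve = [tensile_stress(i * 5e-6, E, eps0, m, sigma_r) for i in range(101)]
# The curve rises to a peak, softens, and decays toward sigma_r --
# the post-peak shape with a residual plateau described in the text.
```

Varying the shape parameter m changes how brittle the post-peak softening is, which is one way fiber content could enter a fitted model of this form.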
In practical engineering, the stress-strain curves of the AR-GFRC with different parameters can be accurately described by the proposed constitutive model, and the parameters of this model have definite physical meaning. 7. Conclusions Uniaxial tensile tests were conducted on concrete with different contents of alkali-resistant glass fiber. The effects of fiber content on the concrete tensile strength were described macroscopically based on the failure modes of the concrete specimens. A new damage variable of the AR-GFRC was defined based on the results of the uniaxial tensile test, and the stress-strain curve was redefined by taking the residual strength into consideration. Meso-statistical damage theory, microcontinuum theory, and composite material theory were used to develop a statistical damage constitutive equation of the AR-GFRC based on a Weibull distribution. The calculation methods for determining the mechanical and statistical parameters of the concrete have been provided, and the research results can provide reference data for practical concrete engineering. The constitutive theoretical curve was verified in MATLAB based on the uniaxial tensile test data of the concrete. The theoretical and experimental stress-strain curves of the AR-GFRC were in good agreement. The fiber contents (0.5%, 1%, and 1.5%) and the size and shape parameters in the constitutive equation were quantified, and it was determined that the proposed model is well suited to describing the microscopic damage evolution and failure mechanism of the concrete while considering the residual strength effects. The results provide a theoretical basis and have practical application value. Author Contributions Conceptualization, X.S. and C.Z.; methodology, C.Z.; validation, X.S. and C.Z.; formal analysis, X.Z.; investigation, C.Z.; resources, C.Z.; data curation, X.S. and C.Z.; writing—original draft preparation, C.Z.; writing—review and editing, X.S.
and C.Z.; visualization, X.S. and C.Z.; supervision, C.Z.; project administration, C.Z.; funding acquisition, X.S., C.Z. and X.Z. All authors have read and agreed to the published version of the manuscript. This research was partially carried out with financial support from the China Scholarship Council (CSC) and supported by the Fundamental Research Funds for the Central Universities and the Postgraduate Research and Practice Innovation Program of Jiangsu Province, grant numbers 2019B74214 and SJKY19_04533, and by the National Natural Science Foundation of China. Conflicts of Interest The authors declare no conflict of interest. 1. Fathi, H.; Lameie, T.; Maleki, M.; Yazdani, R. Simultaneous effects of fiber and glass on the mechanical properties of self-compacting concrete. Constr. Build. Mater. 2017, 133, 443–449. 2. Dimchev, M.; Caeti, R.; Gupta, N. Effect of carbon nanofibers on tensile and compressive characteristics of hollow particle filled composites. Mater. Des. 2010, 31, 1332–1337. 3. Takeda, K.; Nakashima, E. Nuclear Power and Environmental Concerns in the Aftermath of 3/11 Fukushima Daiichi Nuclear Power Plant Accident in Japan. In Globalization, Development and Security in Asia: Environment and Sustainable Development in Asia; World Scientific: Singapore, 2014. 4. Keleştemur, O.; Arıcı, E.; Yıldız, S.; Gökçer, B. Performance evaluation of cement mortars containing marble dust and glass fiber exposed to high temperature by using Taguchi method. Constr. Build. Mater. 2014, 60, 17–24. 5. Tassew, S.T.; Lubell, A.S. Mechanical properties of glass fiber reinforced ceramic concrete. Constr. Build. Mater. 2014, 51, 215–224. 6. Kizilkanat, A.B.; Kabay, N.; Akyüncü, V.; Chowdhury, S.; Akça, A.H.
Mechanical properties and fracture behavior of basalt and glass fiber reinforced concrete: An experimental study. Constr. Build. Mater. 2015, 100, 218–224. 7. Yuwaraj, M.G.; Santosh, B.D. Performance of Alkali-resistant Glass Fiber Reinforced Concrete. J. Reinf. Plast. Compos. 2006, 25, 617–630. 8. Yildizel, S.A.; Ozturk, A.U. Micro Glass Fiber Reinforced Concrete. ICOCEE CESME 2018, 4, 24–27. 9. Kwan, W.H.; Cheah, C.B.; Ramli, M.; Chang, K.Y. Alkali-resistant glass fiber reinforced high strength concrete in simulated aggressive environment. Materiales de Construcción 2018, 68, 1–14. 10. Chandramouli, K.; Rao, P.S.; Pannirselvam, N.; Sekhar, T.S.; Sravana, P. Chloride Penetration Resistance Studies on Concretes Modified with Alkali Resistant Glass Fibers. Am. J. Appl. Sci. 2010, 7, 371–375. 11. Yurdakul, A.; Dölekçekiç, E.; Karasu, B.; Günkaya, G. Characterization of Commercially Available Alkali Resistant Glass Fiber for Concrete Reinforcement and Chemical Durability Comparison with SrO-Mn2O3-Fe2O3-MgO-ZrO2-SiO2 (SMFMZS System Glasses). Anadolu Univ. J. Sci. Technol. A Appl. Sci. Eng. 2012, 13, 95–102. 12. Qin, X.; Li, X.; Cai, X. The applicability of alkaline-resistant glass fiber in cement mortar of road pavement: Corrosion mechanism and performance analysis. Int. J. Pavement Res. Technol. 2017, 10, 536–544. 13. Kumar, J.D.C.; Abhilash GV, S.; Khan, P.K.; Manikantasai, G.; Tarakaram, V. Experimental Studies on Glass Fiber Concrete. Am. J. Eng. Res. 2016, 5, 100–104. 14. Yıldızel, S.A. Effects of Barite Sand Addition on Glass Fiber Reinforced Concrete Mechanical Behavior. Int. J. Eng. Appl. Sci. 2017, 9, 100–105. 15. Krishnan, K.A.; Anjana, R.; George, K.E.
Effect of alkali-resistant glass fiber on polypropylene/polystyrene blends: Modeling and characterization. Polym. Compos. 2014, 37, 398–406. 16. Oh, H.S.; Moon, D.Y.; Kim, S.D. An Investigation on Durability of Mixture of Alkali-Resistant Glass and Epoxy for Civil Engineering Application. Procedia Eng. 2011, 14, 2223–2229. 17. Dougill, J.W. On stable progressively fracturing solids. Zeitschrift für Angewandte Mathematik und Physik ZAMP 1976, 27, 432–437. 18. Lemaitre, J. Coupled elasto-plasticity and damage constitutive equations. Comput. Methods Appl. Mech. Eng. 1985, 51, 31–49. 19. Kandarpa, S.; Krikner, D.J.; Spencer, B.F. Stochastic damage model for brittle materials subjected to monotonic loading. J. Eng. Mech. ASCE 1996, 122, 788–795. 20. Krajcinovic, D.; Silva, M.A.G. Statistical aspects of the continuous damage theory. Int. J. Solids Struct. 1982, 18, 551–562. 21. Krajcinovic, D. Constitutive equations for damaging materials. J. Appl. Mech. 1983, 50, 355–360. 22. Breysse, D. Probabilistic formulation of damage-evolution law of cementitious composites. J. Eng. Mech. 1990, 1169, 1489–1511. 23. Guo, Y.; Kuang, Y. An elastoplastic micro-mechanical damage model for quasi-brittle materials under uniaxial loading. Int. J. Damage Mech. 2019, 28, 1191–1202. 24. Bakhshi, M.; Mobasher, B. Simulated shrinkage cracking in the presence of Alkali Resistant Glass fibers. Spec. Publ. 2011, 280, 1–14. 25. Eiras, J.N.; Kundu, T.; Bonilla, M. Nondestructive Monitoring of Ageing of Alkali Resistant Glass Fiber Reinforced Cement (GRC). J. Nondestruct. Eval. 2013, 32, 300–314. 26. Lemaitre, J. How to use damage mechanics. Nucl. Eng. Des. 1984, 80, 233–245.
27. Wengui, C.; Sheng, Z.; Minghua, Z. Study on a statistical damage constitutive model with conversion between softening and hardening properties of rock. Eng. Mech. 2006, 23, 110–115. 28. Weiya, X.; Lide, W. Study on statistical damage constitutive model of rock. Chin. J. Rock Mech. Eng. 2002, 21, 787–791. 29. Mehta, P.K.; Aitcin, P.C. Microstructural basis of selection of materials and mix proportions for high-strength concrete. Spec. Publ. 1990, 121, 265–286. 30. Alireza, A.; Mehdi, G.; Akram, F. Glass fiber-reinforced epoxy composite with surface modified graphene oxide: Enhancement of interlaminar fracture toughness and thermo-mechanical performance. Polym. Bull. 2018. 31. Abeysinghe, C.M.; Thambiratnam, D.P.; Perera, N.J. Flexural performance of an innovative hybrid composite floor plate system comprising glass–fibre reinforced cement, polyurethane and steel laminate. Compos. Struct. 2013, 95, 179–190. 32. Zhu, Z.; Zhang, C.; Meng, S.; Shi, Z.; Tao, S.; Zhu, D. A Statistical Damage Constitutive Model Based on the Weibull Distribution for Alkali-Resistant Glass Fiber Reinforced Concrete. Materials 2019, 12, 1908. 33. Zhang, C.; Zhu, Z.; Zhu, S.; He, Z.; Zhu, D.; Liu, J.; Meng, S. Nonlinear Creep Damage Constitutive Model of Concrete Based on Fractional Calculus Theory. Materials 2019, 12, 1505. 34. Yang, W.; Xue, M. Test method and equipment of stress-strain curves of concrete in direct tension. Ind. Constr. 2009, 39, 907–909. 35. Chen, Y.; Du, C.; Zhou, W. Experimental study on the complete stress-deformation curve of full grade concrete under axial tension. J. Hydroelectr. Eng. 2010, 29, 76–81. 36. Sun, F.
Experimental Research for Complete Stress-Deformation Curve of Concrete in Uniaxial Tension. Master’s Thesis, Hohai University, Nanjing, China, 2007. 37. Wu, N.; Zhang, C.; Maimaitiyusupu, S.; Zhu, Z. Investigation on Properties of Rock Joint in Compression Dynamic Test. KSCE J. Civ. Eng. 2019, 23, 3854–3863. 38. He, Z. The Research on Nonlinear Rheological Mechanics Models of Deep-Buried Tunnel Surrounding Rock and Its Application; HoHai University: Nanjing, China, 2018. 39. Skrzypacz, P.; Nurakhmetov, D.; Wei, D. Generalized stiffness and effective mass coefficients for power-law Euler–Bernoulli beams. Acta Mech. Sin. 2020, 36, 160–175. 40. Eissa, F.H. Stress-Strength Reliability Model with the Exponentiated Weibull Distribution: Inferences and Applications. Int. J. Stat. Probab. 2018, 7. 41. Kasagani, H.; Rao, C.B.K. Effect of Short Length Glass Fiber on Dilated Concrete in Compression and Tension. Proc. Inst. Civ. Eng. Struct. Build. 2018, 4, 1–12. 42. Cao, R.L.; He, S.H.; Wei, J.; Wang, F. Study of modified statistical damage softening constitutive model for rock considering residual strength. Rock Soil Mech. 2013, 34, 1652–1661. 43. Hilles, M.M.; Ziara, M.M. Mechanical behavior of high strength concrete reinforced with glass fiber. Eng. Sci. Technol. Int. J. 2019, 22, 920–928. 44. Khosravani, M.R.; Nasiri, S.; Anders, D.; Weinberg, K. Prediction of dynamic properties of ultra-high performance concrete by an artificial intelligence approach. Adv. Eng. Softw. 2019, 127, 51–58. 45. Brandon, F.; Pedram, S. Contribution of Longitudinal GFRP Bars in Concrete Cylinders under Axial Compression. Can. J. Civ. Eng. 2018, 45, 458–468. 46. Yu, W.; Xue, H.; Qian, M. Tensile and compressive properties of epoxy syntactic foams reinforced by short glass fiber.
Indian J. Eng. Mater. Sci. 2017, 24, 283–289. 47. Sun, W.; Zuo, Y.; Wang, S.; Wu, Z.; Liu, H.; Zheng, L.; Lou, Y. Pore structures of shale cores in different tectonic locations in the complex tectonic region: A case study of the Niutitang Formation in northern Guizhou, Southwest China. J. Nat. Gas Sci. Eng. 2020, 80, 103398. 48. Sun, W.; Zuo, Y.; Wu, Z.; Liu, H.; Zheng, L.; Wang, H.; Shui, Y.; Lou, Y.; Xi, S.; Li, T.; et al. Pore characteristics and evolution mechanism of shale in a complex tectonic area: Case study of the Lower Cambrian Niutitang Formation in Northern Guizhou, Southwest China. J. Pet. Sci. Eng. 2020, 193, 107373. Figure 2. Tensile curves of alkali-resistant glass fiber reinforced concrete (AR-GFRC) and ordinary concrete. (a) HD 28 d curing and (b) HP 28 d curing. Figure 7. Variation of stress in concrete lining. (a) Common concrete spray; (b) AR-GFRC (1.0% HP); and (c) AR-GFRC (1.0% HD). Figure 10. The stress-strain curve of HD concrete under 7 d and 28 d of curing. (a) 0.5%, 7 d; (b) 1.0%, 7 d; (c) 1.5%, 7 d; (d) 0.5%, 28 d; (e) 1.0%, 28 d; and (f) 1.5%, 28 d. Figure 11. The stress-strain curve of HP concrete under 7 d and 28 d of curing. (a) 0.5%, 7 d; (b) 1.0%, 7 d; (c) 1.5%, 7 d; (d) 0.5%, 28 d; (e) 1.0%, 28 d; and (f) 1.5%, 28 d.
Type | Length/mm | Equivalent Diameter/μm | Fracture Strength | Elongation at Break/% | Modulus/GPa | Melting Point/°C
Anti-Crak® HD | 6/12 | 14 | 1700 | 3.6 | 72 | 1580
Anti-Crak® HP | 6/12 | 700 | 1700 | 3.6 | 72 | 1580

Sieve Hole Size | 31.5 | 26.5 | 16.0 | 4.75 | 2.36
Actual cumulative percentage of sieve remainder/% | 0 | 0~5 | 30~70 | 90~95 | 95~100

Number | Name | Cement | Mineral Powder | Fly Ash | Sand | Stone | Water | Admixture | Fiber Content (%)
C35 | Concrete (without fiber) | 245 | 100 | 95 | 735 | 1040 | 175 | 10.5 | 0
HD-35-1 | Anti-Crak® (HD) Concrete | 245 | 100 | 95 | 735 | 1040 | 175 | 10.5 | 0.5
HD-35-2 | Anti-Crak® (HD) Concrete | 245 | 100 | 95 | 735 | 1040 | 175 | 10.5 | 1
HD-35-3 | Anti-Crak® (HD) Concrete | 245 | 100 | 95 | 735 | 1040 | 175 | 10.5 | 1.5
HP-35-1 | Anti-Crak® (HP) Concrete | 245 | 100 | 95 | 735 | 1040 | 175 | 10.5 | 0.5
HP-35-2 | Anti-Crak® (HP) Concrete | 245 | 100 | 95 | 735 | 1040 | 175 | 10.5 | 1
HP-35-3 | Anti-Crak® (HP) Concrete | 245 | 100 | 95 | 735 | 1040 | 175 | 10.5 | 1.5

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Shi, X.; Zhang, C.; Zhou, X. The Statistical Damage Constitutive Model of the Mechanical Properties of Alkali-Resistant Glass Fiber Reinforced Concrete. Symmetry 2020, 12, 1139. https://doi.org/10.3390/sym12071139
Suffix Tree Data Structure In the previous blog, we studied the properties and applications of the trie data structure. In this blog, we will study suffix trees, an advanced version of tries. In a trie, each node represents a single character. This representation is not efficient, as we can see in the image below, where the words Romane, Romanus, Romulus, Rubens, Ruber, Rubicon, and Rubicundus are represented in a trie structure. We can see that the letters "O" and "A" from the word "ROMANE" and the letters "U", "N", "D", "U", and "S" from the word "RUBICUNDUS" have only one child. If we store these letters together in a single node, we will not lose any information. Although tries have good time complexity, they are space inefficient, as they store only one letter per edge. We remove this space inefficiency by keeping multiple characters of a word in a single node, reducing the number of extra edges and nodes needed while maintaining the same symbolic meaning and the performance of tries. In the worst-case scenario, the space required by a compressed trie will be equivalent to that of a simple trie. The following image shows how the compressed trie will look after inserting all the previously inserted words. The internal nodes have at least two children in a compressed trie. Also, it has at most N leaves, where N is the number of strings inserted in the compressed trie. Combining these facts, we can conclude that there are at most 2N - 1 nodes in the trie, so the space complexity of a compressed trie is O(N), compared to the O(N²) of a standard trie. Efficient storage is one of the reasons to use compressed tries over classic tries. Suffix trees are also a compressed version of the trie that includes all of a string's suffixes. Various string-related problems can be solved using suffix trees, which are discussed later in the blog. Let S be a string of length n. S[i, j] denotes the substring of S from index i to j.
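The chain-merging idea behind compressed tries described above can be sketched with nested dictionaries. The function names, the dictionary representation, and the `$` end marker are illustrative choices, not code from the blog:

```python
def build_trie(words):
    # Plain trie: one character per edge, O(total characters) nodes.
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = {}  # end-of-word marker
    return root

def compress(node):
    # Merge every chain of single-child nodes into one
    # multi-character edge label -- the compression step above.
    out = {}
    for label, child in node.items():
        while len(child) == 1:          # exactly one outgoing edge: merge it
            (nxt, grand), = child.items()
            label, child = label + nxt, grand
        out[label] = compress(child)
    return out

words = ["romane", "romanus", "romulus",
         "rubens", "ruber", "rubicon", "rubicundus"]
compressed = compress(build_trie(words))
# Top-level branching mirrors the classic figure: "r" -> {"om", "ub"}
```

Every internal node of the result has at least two children, which is exactly the property that caps the node count at 2N - 1.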
Before constructing the suffix tree, we concatenate a new character, $, to S. By adding this unique character to the end of S, we can avoid cases when a suffix is a prefix of another suffix. A suffix tree is a rooted, directed tree. It has n leaf nodes labeled from 1 to n, and its edges are labeled by the letters. On a path from the root to the leaf j, one can read the string's suffix S [j, n] and a $ sign. Such a path is denoted by P(j), and its label is referred to as the path label of j and denoted by L(j). Image of suffix tree for the word “aabccb” Suffix trees use much space, as we can see from the above image. There are long branches that can be compressed, as discussed earlier. A compact suffix tree of S is a rooted directed tree with n leaves. Each internal node has at least two children. The label of a path is the concatenation of the labels on its edges. Image of compressed suffix tree for the word “aabccb” Brute Force algorithm for constructing suffix trees The brute force algorithm to construct a suffix tree is to consider all suffixes of a string and insert them one by one into a trie. This will take O(N²) time as there are N suffixes, and each suffix will take O(N) time to insert into the trie. This method of constructing suffix trees is inefficient for large strings, and we need to find a better method to construct the suffix tree. Ukkonen's algorithm for constructing suffix trees The basic idea of Ukkonen's Algorithm is to divide the process of constructing a suffix tree into phases. Further, each phase is divided into extensions. In phase i, the ith character is considered. In phase i, all the suffixes of the string S[1:i] are inserted into the trie, and inserting the jth suffix, S[j: i], in a phase is called the jth extension of that phase. So, in the ith phase, there are i extensions and overall there are N such phases. 
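The brute-force construction described above can be sketched with a plain dictionary trie (the names here are illustrative): inserting all N suffixes of s + "$" costs O(N²), and because every substring of s is a prefix of some suffix, a single walk from the root answers substring queries:

```python
def build_suffix_trie(s):
    # Insert every suffix of s + "$" into a trie: O(N^2) time and nodes.
    s += "$"
    root = {}
    for j in range(len(s)):
        node = root
        for ch in s[j:]:
            node = node.setdefault(ch, {})
    return root

def is_substring(root, pattern):
    # Walk from the root: a pattern occurs in s iff it is a prefix
    # of some suffix, i.e. iff the walk never falls off the trie.
    node = root
    for ch in pattern:
        if ch not in node:
            return False
        node = node[ch]
    return True

trie = build_suffix_trie("aabccb")  # the example string used above
```

Each query costs O(len(pattern)) regardless of how long the original string is; the quadratic cost is paid once, at construction, which is what Ukkonen's algorithm improves.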
It looks like this is an O(N²) task, but the algorithm exploits the fact that these are suffixes of the same string and uses different mechanisms that bring the time complexity down to O(N). We will now see how an extension step works. Consider the jth extension of the ith phase. Extending the path S[j:i-1] by adding the letter S[i] is known as a suffix extension. As we are in the ith phase, all substrings of S[1:i-1] have already been inserted into the tree in the (i-1)th phase. First, find the end of the path S[j:i-1] in the tree and then extend it to make sure that S[j:i-1] + S[i] (that is, S[j:i]) occurs in the tree. For the extension, the following rules have to be considered: 1. The complete string S[j:i-1] is found in the tree, and the path ends at a leaf node. In this case, we append the character to the last edge label, and no new nodes are created. 2. If at the end of S[j:i-1] there is no path starting with S[i], create a new edge from this position to a new leaf and label it with S[i]. If this position is inside an edge, then create a new node here and divide the label of the original edge. 3. If S[j:i] is already in the tree, do nothing. We can see that for each extension, we have to find the path of the string S[j:i-1] in the tree built in previous phases. The complexity of doing this in a brute force manner is O(N) per extension, so the overall time complexity of building the suffix tree would be O(N³). Ukkonen's linear-time algorithm speeds up this procedure by applying a few tricks. Suffix Links Let xS be an arbitrary string, where x is a single character and S some (possibly empty) substring. Suppose both S and xS are present in the tree, and the paths from the root end at nodes u and v, respectively, for the two strings. Then a link from node v (the node for xS) to node u (the node for S) is known as a suffix link. How do suffix links help in creating suffix trees?
While performing the jth extension of phase i, we have to look at the substring S[j:i-1], and in the (j+1)th extension, we have to look at the substring S[j+1:i-1]. We know that before coming to phase i, we have already performed i-1 phases, and the substrings S[j:i-1] and S[j+1:i-1] are already inserted into the tree. Also, S[j:i-1] = S[j]S[j+1:i-1]. We will have a suffix link from the node ending the path S[j:i-1] to the node ending the path S[j+1:i-1]. Now, instead of traversing down from the root for extension j+1 of phase i, we can simply follow the suffix link. Suffix links reduce the time for processing each phase to O(N), as the number of nodes present in the suffix tree is of order N. Thus the overall time complexity of building a suffix tree is reduced to O(N²). Notice that the extension rules discussed earlier are always applied in sequential order within a phase: some number of extensions from the start of the phase apply rule 1, some extensions after that apply rule 2, and the remaining extensions apply rule 3. Thus, if in the ith phase rule 3 is applied in extension j for the first time, then in all the remaining extensions, j+1 to i, rule 3 will be applied. So we can stop a phase as soon as rule 3 starts applying to an extension. If a leaf is created in the ith phase, then it will remain a leaf in all successive phases i', for i' > i (once a leaf, always a leaf!). Reason: a leaf edge is never extended beyond its current leaf. Only the edge label between the leaf node and its parent keeps growing because of the application of rule 1, and for all leaf nodes the end index remains the same. So, in any phase, rule 1 can be applied in constant time by maintaining a global end index for all the leaf nodes. New leaf nodes are created when rule 2 is applied, and every extension in which rule 2 is applied in one phase will apply rule 1 in the next phase.
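The "global end" trick described above (rule 1 in O(1)) can be sketched by giving every leaf edge one shared, mutable end index. The class names below are illustrative, and this is only the bookkeeping piece of Ukkonen's algorithm, not a full implementation:

```python
class GlobalEnd:
    # One shared, mutable end index: bumping it once extends the
    # labels of *all* leaf edges simultaneously (rule 1 in O(1)).
    def __init__(self):
        self.value = -1

class Edge:
    def __init__(self, start, end):
        self.start = start      # inclusive start index into the text
        self.end = end          # inclusive end: an int, or a GlobalEnd for leaf edges

    def label(self, text):
        end = self.end.value if isinstance(self.end, GlobalEnd) else self.end
        return text[self.start:end + 1]

text = "aabccb$"               # the example string used above
g = GlobalEnd()
leaf_a = Edge(0, g)            # leaf edge for the suffix starting at index 0
leaf_b = Edge(1, g)            # leaf edge for the suffix starting at index 1
g.value = 2                    # after phase 3: every leaf edge now ends at index 2
```

Internal edges keep plain integer ends; only leaf edges share the `GlobalEnd` object, which is why "once a leaf, always a leaf" makes the trick sound.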
So rule 2 will be applied a maximum of N times, as there are N leaves, which means that all the phases together can be completed in O(N) overall. Applications of suffix trees Suffix trees can be used to solve many string problems that occur in text editing, free-text search, etc. The ability to search efficiently with mismatches might be considered their greatest strength. Here are some popular applications of suffix trees: • String search • Finding the longest repeated substring • Finding the longest common substring • Finding the longest palindrome in a string • Searching for patterns in DNA or protein sequences • Data compression • Suffix tree clustering algorithms Enjoy learning, Enjoy algorithms.
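As one worked example from the applications listed above, the longest repeated substring falls out of the suffix structure directly: with a unique terminator, a substring occurs at least twice exactly when its node branches, so the answer is the path label of the deepest branching node. A sketch on the simple (uncompressed) suffix trie, with illustrative names:

```python
def build_suffix_trie(s):
    # O(N^2) brute-force suffix trie of s + "$" (unique terminator).
    s += "$"
    root = {}
    for j in range(len(s)):
        node = root
        for ch in s[j:]:
            node = node.setdefault(ch, {})
    return root

def longest_repeated(node, path=""):
    # A path label occurs at least twice iff its node has >= 2
    # children (counting "$"), so the longest repeated substring
    # is the path label of the deepest branching node.
    best = path if len(node) >= 2 else ""
    for ch, child in node.items():
        if ch != "$":
            cand = longest_repeated(child, path + ch)
            if len(cand) > len(best):
                best = cand
    return best

result = longest_repeated(build_suffix_trie("banana"))
```

On a true (compressed, Ukkonen-built) suffix tree the same search runs over O(N) nodes, giving a linear-time solution to this problem.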