Countersink Depth Calculator: How to Use - Calculator Services
Published on: July 22, 2023
Created by Calculator Services Team / Fact-checked by Monjurul Kader

Countersink Depth Calculator

This countersink depth calculator uses a simple geometric formula to determine the depth of a countersink from the countersink diameter and angle provided by the user. The formula is derived from the right-angled triangle formed by the countersink depth, half of the countersink diameter, and half of the included angle at the apex of the countersink:

depth = (diameter / 2) / tan((angle / 2) * pi / 180)

• depth is the countersink depth;
• diameter is the countersink diameter;
• angle is the included countersink angle in degrees;
• tan is the tangent function;
• pi is the constant value, approximately 3.14159;
• and pi / 180 converts the angle from degrees to radians.

The calculator takes the diameter and angle as inputs, computes the depth using the formula, and displays the result rounded to two decimal places.

Countersinking is a crucial process in many woodworking, metalworking, and engineering applications. It involves creating a conical hole that allows screw heads or fasteners to sit flush with or below the surface of the material. Despite its importance, calculating the countersink depth manually can be tedious and error-prone. To simplify the process, we’ve created a user-friendly countersink depth calculator.

What is Countersinking?

Countersinking is the process of creating a conical recess in a material to accommodate the head of a screw or fastener, allowing it to sit flush with or below the surface. This technique is widely used in woodworking, metalworking, and engineering applications to create neat, professional-looking joints and to minimize stress concentrations in the material.
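The geometry behind the calculation is simple: the half-angle at the apex of the cone satisfies tan(angle / 2) = (diameter / 2) / depth. A minimal Python sketch of the same calculation (the function name and the rounding are illustrative, not the calculator's actual code):

```python
import math

def countersink_depth(diameter: float, angle_deg: float) -> float:
    """Depth of a countersink from its major diameter and included angle.

    Geometry: tan(angle/2) = (diameter/2) / depth,
    so depth = (diameter/2) / tan(angle/2).
    """
    if not 0 < angle_deg < 180:
        raise ValueError("included angle must be between 0 and 180 degrees")
    half_angle = math.radians(angle_deg / 2)
    return (diameter / 2) / math.tan(half_angle)

# A 90° countersink is exactly as deep as its radius:
print(round(countersink_depth(10.0, 90.0), 2))   # 5.0
# A wider 120° countersink is shallower for the same diameter:
print(round(countersink_depth(10.0, 120.0), 2))  # 2.89
```

Note the sanity check built into the examples: widening the included angle makes the recess shallower for the same diameter, as it should.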
There are various types of countersinks, including single-flute, multi-flute, and cross-hole, each with its specific advantages and applications.

The Formula Behind the Countersink Depth Calculator

Our countersink depth calculator uses a simple geometric formula derived from the right-angled triangle formed by the countersink depth, half of the countersink diameter, and half of the included angle at the apex of the countersink. The formula is as follows:

depth = (diameter / 2) / tan((angle / 2) * pi / 180)

where depth is the countersink depth, diameter is the countersink diameter, angle is the included countersink angle in degrees, tan is the tangent function, pi is the constant value (approximately 3.14159), and pi / 180 converts the angle from degrees to radians. By inputting the diameter and angle values, the calculator computes the depth and presents it rounded to two decimal places.

How to Use the Countersink Depth Calculator

To use our countersink depth calculator, follow these simple steps:

• Access the calculator by opening the index.html file in your web browser.
• Enter the countersink diameter and angle in the respective input fields.
• Click the “Calculate” button to compute the depth.
• View the calculated depth displayed on the screen.

Always double-check your measurements and ensure you input accurate values for the diameter and angle to achieve precise results.

Practical Tips for Accurate Countersinking

To ensure accurate and professional countersinking results, consider these practical tips:

• Choose the right countersink bit for the material and application.
• Use proper drilling technique, including appropriate pressure and speed, to create clean and consistent countersinks.
• Always double-check measurements before drilling to avoid mistakes.
• Follow safety precautions when working with power tools, such as wearing protective eyewear and securing the workpiece.

Frequently Asked Questions

What are the most common countersink angles?
The most common countersink angles are 82°, 90°, and 100°, but other angles are also available for specific applications.

How do I choose the right countersink diameter for my project?

The countersink diameter should match the screw head diameter. Measure the screw head diameter and use the same diameter for the countersink bit.

Can I use the calculator for other types of recesses, such as a counterbore?

This calculator is designed specifically for countersink depth calculations. For other types of recesses, such as counterbores, you will need a different calculator or formula.

Countersinking is a vital process in many applications, and our countersink depth calculator makes it easier than ever to achieve precise results. By understanding the formula behind the calculator and following our practical tips, you can create professional-looking joints and minimize stress concentrations in your projects.
Third normal form (3NF) is a database schema design approach for relational databases which uses normalizing principles to reduce the duplication of data, avoid data anomalies, ensure referential integrity, and simplify data management. It was defined in 1971 by Edgar F. Codd, an English computer scientist who invented the relational model for database management.

A database relation (e.g. a database table) is said to meet third normal form standards if all of its attributes (e.g. database columns) are functionally dependent on nothing but a key, except in the case of a functional dependency whose right-hand side is a prime attribute (an attribute which is part of some candidate key). Codd defined this as a relation in second normal form where all non-prime attributes depend only on the candidate keys and do not have a transitive dependency on another key.^[1]

A hypothetical example of a failure to meet third normal form would be a hospital database having a table of patients which included a column for the telephone number of their doctor. (The phone number is dependent on the doctor, rather than the patient, and so would be better stored in a table of doctors.) The negative outcome of such a design is that a doctor's number will be duplicated in the database if they have multiple patients, increasing both the chance of input error and the cost and risk of updating that number should it change (compared to a third normal form-compliant data model that stores a doctor's number only once, in a doctor table).

Codd later realized that 3NF did not eliminate all undesirable data anomalies and developed a stronger version to address this in 1974, known as Boyce–Codd normal form.

Definition of third normal form

The third normal form (3NF) is a normal form used in database normalization. 3NF was originally defined by E. F.
Codd in 1971.^[2] Codd's definition states that a table is in 3NF if and only if both of the following conditions hold:

• The relation R (table) is in second normal form (2NF).
• No non-prime attribute of R is transitively dependent on any candidate key of R.

A non-prime attribute of R is an attribute that does not belong to any candidate key of R.^[3] A transitive dependency is a functional dependency in which X → Z (X determines Z) indirectly, by virtue of X → Y and Y → Z (where it is not the case that Y → X).^[4]

A 3NF definition that is equivalent to Codd's, but expressed differently, was given by Carlo Zaniolo in 1982. This definition states that a table is in 3NF if and only if for each of its functional dependencies X → Y, at least one of the following conditions holds:^[5]^[6]

• X contains Y (that is, Y is a subset of X, meaning X → Y is a trivial functional dependency),
• X is a superkey,
• every element of Y \ X, the set difference between Y and X, is a prime attribute (i.e., each attribute in Y \ X is contained in some candidate key).

To rephrase Zaniolo's definition more simply, the relation is in 3NF if and only if for every non-trivial functional dependency X → Y, X is a superkey or Y \ X consists of prime attributes. Zaniolo's definition gives a clear sense of the difference between 3NF and the more stringent Boyce–Codd normal form (BCNF). BCNF simply eliminates the third alternative ("every element of Y \ X, the set difference between Y and X, is a prime attribute").

"Nothing but the key"

An approximation of Codd's definition of 3NF, paralleling the traditional oath to give true evidence in a court of law, was given by Bill Kent: "[every] non-key [attribute] must provide a fact about the key, the whole key, and nothing but the key".^[7] A common variation supplements this definition with the oath "so help me Codd".^[8] Requiring the existence of "the key" ensures that the table is in 1NF; requiring that non-key attributes be dependent on "the whole key" ensures 2NF; further requiring that non-key attributes be dependent on "nothing but the key" ensures 3NF.
While this phrase is a useful mnemonic, the fact that it only mentions a single key means it defines some necessary but not sufficient conditions to satisfy the 2nd and 3rd normal forms. Both 2NF and 3NF are concerned equally with all candidate keys of a table, and not just any one key.

Chris Date refers to Kent's summary as "an intuitively attractive characterization" of 3NF and notes that with slight adaptation it may serve as a definition of the slightly stronger Boyce–Codd normal form: "Each attribute must represent a fact about the key, the whole key, and nothing but the key."^[9] The 3NF version of the definition is weaker than Date's BCNF variation, as the former is concerned only with ensuring that non-key attributes are dependent on keys. Prime attributes (which are keys or parts of keys) must not be functionally dependent at all; they each represent a fact about the key in the sense of providing part or all of the key itself. (This rule applies only to functionally dependent attributes, as applying it to all attributes would implicitly prohibit composite candidate keys, since each part of any such key would violate the "whole key" clause.)

An example of a table that fails to meet the requirements of 3NF is:

Tournament winners
Tournament            Year  Winner          Winner's date of birth
Indiana Invitational  1998  Al Fredrickson  21 July 1975
Cleveland Open        1999  Bob Albertson   28 September 1968
Des Moines Masters    1999  Al Fredrickson  21 July 1975
Indiana Invitational  1999  Chip Masterson  14 March 1977

Because each row in the table needs to tell us who won a particular Tournament in a particular Year, the composite key {Tournament, Year} is a minimal set of attributes guaranteed to uniquely identify a row. That is, {Tournament, Year} is a candidate key for the table. The breach of 3NF occurs because the non-prime attribute Winner's date of birth is transitively dependent on the candidate key {Tournament, Year} through the non-prime attribute Winner.
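A breach like this can be checked mechanically against Zaniolo's three conditions. A small Python sketch (the function and attribute names are illustrative; it tests only the functional dependencies it is given, not their full closure):

```python
def is_3nf(fds, candidate_keys):
    """Check Zaniolo's 3NF conditions for the listed FDs X -> Y.

    fds: list of (X, Y) pairs, each a frozenset of attribute names.
    candidate_keys: list of frozensets of attribute names.
    """
    prime = frozenset().union(*candidate_keys)  # all prime attributes

    def is_superkey(x):
        # X is a superkey iff it contains some candidate key.
        return any(key <= x for key in candidate_keys)

    for x, y in fds:
        if y <= x:                 # trivial FD: X contains Y
            continue
        if is_superkey(x):         # X is a superkey
            continue
        if (y - x) <= prime:       # Y \ X consists of prime attributes
            continue
        return False               # no condition holds: 3NF is violated
    return True

# A table keyed on {Tournament, Year} with the extra dependency
# Winner -> Date of birth is not in 3NF:
keys = [frozenset({"Tournament", "Year"})]
fds = [
    (frozenset({"Tournament", "Year"}), frozenset({"Winner"})),
    (frozenset({"Winner"}), frozenset({"Date of birth"})),
]
print(is_3nf(fds, keys))   # False
```

Once the dates of birth are split out into a table keyed on Winner, the dependency Winner → Date of birth satisfies the superkey condition and the check passes for each resulting relation.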
The fact that Winner's date of birth is functionally dependent on Winner makes the table vulnerable to logical inconsistencies, as there is nothing to stop the same person from being shown with different dates of birth on different records. In order to express the same facts without violating 3NF, it is necessary to split the table into two:

Tournament winners
Tournament            Year  Winner
Indiana Invitational  1998  Al Fredrickson
Cleveland Open        1999  Bob Albertson
Des Moines Masters    1999  Al Fredrickson
Indiana Invitational  1999  Chip Masterson

Winner's dates of birth
Winner          Date of birth
Chip Masterson  14 March 1977
Al Fredrickson  21 July 1975
Bob Albertson   28 September 1968

Update anomalies cannot occur in these tables because, unlike before, Winner is now a candidate key in the second table, thus allowing only one value for Date of birth for each Winner.

A relation can always be decomposed into third normal form: that is, the relation R is rewritten to projections R[1], ..., R[n] whose join is equal to the original relation. Further, this decomposition does not lose any functional dependency, in the sense that every functional dependency on R can be derived from the functional dependencies that hold on the projections R[1], ..., R[n]. What is more, such a decomposition can be computed in polynomial time.^[10] To decompose a relation into 3NF from 2NF, break the table into the canonical cover of its functional dependencies, then create a relation for every candidate key of the original relation which was not already a subset of a relation in the decomposition.^[11]

Equivalence of the Codd and Zaniolo definitions of 3NF

The definition of 3NF offered by Carlo Zaniolo in 1982, and given above, can be shown to be equivalent to the Codd definition in the following way: Let X → A be a nontrivial FD (i.e. one where X does not contain A) and let A be a non-prime attribute. Also let Y be a candidate key of R. Then Y → X.
Therefore, A is not transitively dependent on Y if and only if X → Y holds, that is, if and only if X is a superkey of R.

Normalization beyond 3NF

Most 3NF tables are free of update, insertion, and deletion anomalies. Certain types of 3NF tables, rarely met with in practice, are affected by such anomalies; these are tables which either fall short of Boyce–Codd normal form (BCNF) or, if they meet BCNF, fall short of the higher normal forms 4NF or 5NF.

Considerations for use in reporting environments

While 3NF was ideal for machine processing, the segmented nature of the data model can be difficult for a human user to consume intuitively. Analytics via query, reporting, and dashboards were often facilitated by a different type of data model that provided pre-calculated analysis such as trend lines, period-to-date calculations (month-to-date, quarter-to-date, year-to-date), cumulative calculations, basic statistics (average, standard deviation, moving averages) and previous-period comparisons (year ago, month ago, week ago), e.g. dimensional modeling and, beyond dimensional modeling, flattening of stars via Hadoop and data science.^[12]^[13] Hadley Wickham's "tidy data" framework is 3NF, with "the constraints framed in statistical language".^[14]

References

1. ^ Codd, E. F. "Further Normalization of the Data Base Relational Model", p. 34.
2. ^ Codd, E. F. "Further Normalization of the Data Base Relational Model". (Presented at Courant Computer Science Symposia Series 6, "Data Base Systems", New York City, May 24–25, 1971.) IBM Research Report RJ909 (August 31, 1971). Republished in Randall J. Rustin (ed.), Data Base Systems: Courant Computer Science Symposia Series 6. Prentice-Hall, 1972.
3. ^ Codd, p. 43.
4. ^ Codd, pp. 45–46.
5. ^ Zaniolo, Carlo. "A New Normal Form for the Design of Relational Database Schemata". ACM Transactions on Database Systems 7(3), September 1982.
6. ^ Abraham Silberschatz, Henry F. Korth, S. Sudarshan, Database System Concepts (5th edition), pp. 276–277.
7. ^ Kent, William. "A Simple Guide to Five Normal Forms in Relational Database Theory", Communications of the ACM 26 (2), Feb. 1983, pp. 120–125.
8. ^ The author of a 1989 book on database management credits one of his students with coming up with the "so help me Codd" addendum. Diehr, George. Database Management (Scott, Foresman, 1989), p.
9. ^ Date, C. J. An Introduction to Database Systems (7th ed.) (Addison Wesley, 2000), p. 379.
10. ^ Serge Abiteboul, Richard B. Hull, Victor Vianu: Foundations of Databases. Addison-Wesley, 1995. http://webdam.inria.fr/Alice/ ISBN 0201537710. Theorem 11.2.14.
11. ^ Hammo, Bassam. "Decomposition, 3NF, BCNF" (PDF). Archived (PDF) from the original on 2023-03-15.
12. ^ "Comparisons between Data Warehouse modelling techniques – Roelant Vos". Roelant Vos. 12 February 2013. Retrieved 5 March 2018.
13. ^ "Hadoop Data Modeling Lessons | EMC". InFocus Blog | Dell EMC Services. 23 September 2014. Retrieved 5 March 2018.
14. ^ Wickham, Hadley (2014-09-12). "Tidy Data". Journal of Statistical Software. 59: 1–23. doi:10.18637/jss.v059.i10. ISSN 1548-7660.

Further reading

• Date, C. J. (1999), An Introduction to Database Systems (8th ed.). Addison-Wesley Longman. ISBN 0-321-19784-4.
• Kent, W. (1983) A Simple Guide to Five Normal Forms in Relational Database Theory, Communications of the ACM, vol. 26, pp. 120–126.

External links

• Litt's Tips: Normalization
• Database Normalization Basics by Mike Chapple (About.com)
• An Introduction to Database Normalization by Mike Hillyer
• A tutorial on the first 3 normal forms by Fred Coulson
• Description of the database normalization basics by Microsoft
• Third Normal Form with Simple Examples by exploreDatabase
Do We Really Need Zero-Inflated Models? | Statistical Horizons

For the analysis of count data, many statistical software packages now offer zero-inflated Poisson and zero-inflated negative binomial regression models. These models are designed to deal with situations where there is an “excessive” number of individuals with a count of 0. For example, in a study where the dependent variable is “number of times a student had an unexcused absence”, the vast majority of students may have a value of 0. Zero-inflated models have become fairly popular in the research literature: a quick search of the Web of Science for the past five years found 499 articles with “zero inflated” in the title, abstract or keywords.

But are such models really needed? Maybe not.

It’s certainly the case that the Poisson regression model often fits the data poorly, as indicated by a deviance or Pearson chi-square test. That’s because the Poisson model assumes that the conditional variance of the dependent variable is equal to the conditional mean. In most count data sets, the conditional variance is greater than the conditional mean, often much greater, a phenomenon known as overdispersion.

The zero-inflated Poisson (ZIP) model is one way to allow for overdispersion. This model assumes that the sample is a “mixture” of two sorts of individuals: one group whose counts are generated by the standard Poisson regression model, and another group (call them the absolute zero group) who have zero probability of a count greater than 0. Observed values of 0 could come from either group. Although not essential, the model is typically elaborated to include a logistic regression model predicting which group an individual belongs to. In cases of overdispersion, the ZIP model typically fits better than a standard Poisson model. But there’s another model that allows for overdispersion, and that’s the standard negative binomial regression model.
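The point that an ordinary count model can already accommodate many zeros is easy to see numerically: with the same mean, an overdispersed negative binomial puts far more probability on zero than a Poisson does. A short sketch using scipy.stats (the NB2 overdispersion parameter alpha, and its mapping onto scipy's (n, p) parameterization, are conventions assumed for this illustration):

```python
from scipy.stats import nbinom, poisson

mu, alpha = 2.0, 2.0   # common mean; alpha is the NB2 overdispersion parameter

# Map the (mu, alpha) parameterization onto scipy's nbinom(n, p):
n = 1.0 / alpha
p = 1.0 / (1.0 + alpha * mu)

print(f"P(Y=0), Poisson:           {poisson.pmf(0, mu):.3f}")   # ≈ 0.135
print(f"P(Y=0), negative binomial: {nbinom.pmf(0, n, p):.3f}")  # ≈ 0.447
```

Both distributions have mean 2, yet the negative binomial already implies about 45% zeros with no inflation component at all, which is the article's point about zeros and overdispersion.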
In all data sets that I’ve examined, the negative binomial model fits much better than a ZIP model, as evaluated by AIC or BIC statistics. And it’s a much simpler model to estimate and interpret. So if the choice is between ZIP and negative binomial, I’d almost always choose the latter.

But what about the zero-inflated negative binomial (ZINB) model? It’s certainly possible that a ZINB model could fit better than a conventional negative binomial regression model. But the latter is a special case of the former, so it’s easy to do a likelihood ratio test to compare them (by taking twice the positive difference in the log-likelihoods).* In my experience, the difference in fit is usually trivial.

Of course, there are certainly situations where a zero-inflated model makes sense from the point of view of theory or common sense. For example, if the dependent variable is number of children ever born to a sample of 50-year-old women, it is reasonable to suppose that some women are biologically sterile. For these women, no variation on the predictor variables (whatever they might be) could change the expected number of children.

So next time you’re thinking about fitting a zero-inflated regression model, first consider whether a conventional negative binomial model might be good enough. Having a lot of zeros doesn’t necessarily mean that you need a zero-inflated model.

You can read more about zero-inflated models in Chapter 9 of my book Logistic Regression Using SAS: Theory & Application. The second edition was published in April 2012.

*William Greene (Functional Form and Heterogeneity in Models for Count Data, 2007) claims that the models are not nested because “there is no parametric restriction on the [zero-inflated] model that produces the [non-inflated] model.” This is incorrect. A simple reparameterization of the ZINB model allows for such a restriction.
So a likelihood ratio test is appropriate, although the chi-square distribution may need some adjustment because the restriction is on the boundary of the parameter space.

Comments

1. What percentage of zeros is needed in the outcome to use a zero-inflated model?

    1. The percentage of zeros is not relevant. A standard negative binomial model can handle a high percentage of zeros. What’s relevant is whether you have a theory that says that some substantial fraction of the zeros come from individuals who are “absolute zeros” and for whom the covariates have no effect on the propensity to experience events.

2. What a robust discussion! Allison, your blog was so engaging. Good job! In one of your comments (on June 17, 2015, at 9:09 am), you said “there’s nothing in the data that will tell you whether some zeros are structural, and some are sampling. That has to be decided on theoretical grounds.” I am curious to know whether you have by chance stumbled on how the ZIP can be used to identify the true zero and imputed zero counts. Also, in modeling the zero counts, can you explain why that is modeled using logistic regression?

    1. Once you estimate a zero-inflated model, you can generate, for each individual, an estimated probability that the individual is a structural zero. But that’s only an estimate and, at best, it’s only a probability. Why logistic regression? Well, that’s just a common and convenient way of modeling a binary outcome, in this case whether or not an individual is a structural zero. But, in principle, it could be a probit model or something else entirely.

3. Hello Paul, I read all the discussions here and I appreciate your kindness in addressing all questions. I also learned a lot from others. Currently I am building a predictive model using ZINB. The response variable is a revenue amount. To make it a count format, the SAS int function was used. By nature, we have 70% zero amounts. Do you think the ZINB approach is reliable for this purpose?
Further, could a zero-inflated gamma model be an alternative, with a minor transformation of the revenue amount (0 + 0.1)? Thanks a lot for your kindness.

    1. For advice on dealing with these kinds of situations, I recommend this book: Economic Evaluation in Clinical Trials (Handbooks in Health Economic Evaluation), 2nd Edition.

4. I’m conducting a simulation study where I’m trying to examine the fit of these models: Poisson, NB, ZIP, ZINB, HP, and HNB. Surprisingly, I notice that when the true model is ZINB (the pseudopopulation is ZINB), in the vast majority of the scenarios of proportion of zeros and overdispersion the NB provides a lower AIC than ZINB. Furthermore, the hurdle NB provides the lowest AIC in basically every scenario. Can someone explain this to me? Am I making a mistake somewhere, or what do you think is the reason for this? We would assume that if the true model or pseudopopulation follows a ZINB distribution, then when we fit ZINB to the data, ZINB should provide the lowest AIC. However, this is not the case in my simulations.

    1. What happens with the BIC? It’s guaranteed to select the correct model with a sufficiently large sample. That’s not true of the AIC. On the other hand, BIC penalizes the additional parameters in the ZINB more than the AIC, so I wouldn’t expect the AIC to go for the more parsimonious NB model.

5. Dear Paul, thanks for this useful article. A reviewer asked me to test a ZIP model on my dependent variable (a binary variable with 85% zero values) instead of my logit model. I am under the impression that this wouldn’t be correct, given the count nature of ZIP dependent variables. Am I right?

    1. You’re right, that would not be correct. In fact, it wouldn’t even work.

        1. Thank you! Is there a model for binary variables that I could use instead to account for the high number of zero values?

            1. See my other blog post: https://statisticalhorizons.com/logistic-regression-for-rare-events

6. Hi Paul. Thanks for your invigorating discussion.
I am currently working on a project in which I use survey data. In my project, I am trying to model the treatment-delay behavior of persons who have suffered illness or injury. My dependent variable is ‘Treatment_delay’, which has a lot of zeros (roughly one third) among 35,000 observations. This variable starts from 0 onwards, where 0 means no delay. I am using demographic profiles and some health indicators (previous illness history, hospitalization records, transport cost of reaching the healthcare provider, etc.). I am using Poisson and negative binomial regression to model this. I don’t see that ‘no treatment delay’ (which means 0 days) is caused by two separate processes, as only people who suffered illness or injury in the last 30 days went to healthcare providers, which made me think not to use ZIP or ZINB models. I also want to categorize my dependent variable into 3 groups (less than a day (less negligence), 1-7 days (moderate negligence), more than 7 days (very negligent) before going to healthcare providers) so that I can use ordered logit or ordered probit. I was wondering 1) whether I am right or wrong in my thinking process, and 2) whether ZIP or ZINB is required? Thanks in advance!

    1. This isn’t really a count variable, so I probably wouldn’t go with Poisson or negative binomial. I prefer your suggestion to categorize the dependent variable and do ordered logit or probit.

7. I just noticed your blog post. Interestingly, in 2005 and 2007, I wrote two well-received (and cited) papers that described fundamental issues with the use of zero-inflated models, some of which you already discussed in your blog. I put the link to the pre-print below each reference.

Lord, D., S.P. Washington, and J.N. Ivan (2005) Poisson, Poisson-Gamma and Zero Inflated Regression Models of Motor Vehicle Crashes: Balancing Statistical Fit and Theory. Accident Analysis & Prevention, Vol. 37, No. 1, pp. 35-46. (doi:10.1016/j.aap.2004.02.004)

Lord, D., S.P. Washington, and J.N.
Ivan (2007) Further Notes on the Application of Zero Inflated Models in Highway Safety. Accident Analysis & Prevention, Vol. 39, No. 1, pp. 53-57. (doi:10.1016

I also proposed a new model for analyzing datasets with a large proportion of zero responses.

Geedipally, S.R., D. Lord, S.S. Dhavala (2012) The Negative Binomial-Lindley Generalized Linear Model: Characteristics and Application using Crash Data. Accident Analysis & Prevention, Vol. 45, No. 2, pp. 258-265. (http://dx.doi.org/doi:10.1016/j.aap.2011.07.012)

    1. Thanks, your arguments seem very consistent with my post.

8. Excellent discussion, Paul. I have a similar concern to the previous post. I include unit and time fixed effects in my testing of a government program on crime outcomes (I observe districts over time). The crime I observe is extremely rare, with some districts going many month-years without experiencing one single event; others, however, experience many of them. Question 1: Is there any benefit to modeling counts of crime events with only an intercept for the inflation component? I am generally not a fan of zero-inflated models since they are computationally difficult in applied work, especially with many fixed effects. Question 2: Poisson models with counts of events over several months show evidence of overdispersion. However, when taking into account a longer time series (e.g., counts over 100 months), the standard Poisson performs better (i.e., little overdispersion). Is observing differences of this sort (i.e., less dispersion with more data) a violation of Poisson assumptions, such that the rate is changing through time? Or is it that I have more variation with a shorter time series, so the conditional variance might be larger? Thoughts or similar experiences?

    1. Q1: Maybe. First of all, I would rarely consider a ZIP model because a conventional NB model will almost always fit better. A ZINB model with just an intercept might be useful in some settings.
However, consider what you are assuming: that there is a sub-group of districts whose latent crime rate is absolutely zero, and that the covariates are unrelated to whether a district is in this subgroup or not. Q2: I’ve more commonly observed the reverse pattern, that longer intervals tend to show more overdispersion. What you’re seeing suggests that there are many factors affecting the crime rate in a district that are time-specific, but that tend to average out over longer periods of time.

        1. Thank you. I agree that this is a difficult assumption to make. Can any time-invariant factors go into the zero-inflation component if the ‘count’ component has a series of district fixed effects? I’m curious if an offset or population density can go into the zero component without it becoming too intractable. Any information is helpful.

            1. I think so, but I haven’t tried it.

9. Hello, I want to use zero-inflated models in one of my papers, but I encounter difficulties, or at least doubts, about the manner of estimating this kind of model. I use Stata software to estimate the ZIP model and the ZINB model. For the moment there is no command that implicitly takes into account the panel structure. There are “zip” and “zinb” commands in Stata, but I don’t think they take into account the panel structure of my data. For example, the Stata zip command is the following: “zip depvar indepvar, inflate(varlist)”. The problem is that I want to take account of my panel structure because I need to introduce fixed effects. Is it correct to write my command like this: “xi: depvar indepvar i.countryeffect, inflate(varlist i.countryeffect)”? I was wondering if you would have any recommendations for me on this. I have long been on Stata forums but unfortunately I have not had a clear answer on this subject.

    1. If you read my post, you’ll know that I’m not a huge fan of zip or zinb.
But if you are determined to use this method, what you can do in Stata for panel data is (a) request clustered standard errors and (b) do fixed effects by including country-specific dummies, as you suggest. However, I wouldn’t put the country dummies in the inflate option; I think that would overfit the data. And the xi: prefix isn’t necessary in recent versions of Stata. So the command would look like this: zinb depvar indepvar i.countryeffect, inflate(varlist)

10. Hello dear Dr. Allison, sorry if I am asking an irrelevant question. I’m working on a set of highway accident data with overdispersion that contains a lot of zeros. I tried 4 goodness-of-fit measures (AIC, BIC, LL chi2 and McFadden’s R2) to choose the best-fitting model (among NB, ZINB and ZIP) for each set of data, but there is a problem: the chosen model is different for each measure. For example, AIC and BIC always tend to choose the NB or ZINB (NB most of the time), while LL chi2 and McFadden’s R2 tend to choose ZIP most of the time. The Vuong test most of the time votes for the zero-inflated one, and I’m actually confused about which model to choose! I use Stata 15 software and I have 306 input samples for each data set, with 9 independent variables and 1 dependent variable. The correlations between the independent variables are checked, but there are 3 exceptions (a little more than 0.2 Pearson correlation coefficient). And in 2 sets of data, there is a convergence-problem error when running the model. Your comment would be appreciated. Yours faithfully.

    1. As I tried to make clear in my post, I generally disapprove of the use of zero-inflated models merely to deal with overdispersion and a lot of zeroes. Unless you have some theory or strong substantive knowledge that supports the assumptions of zero-inflated models, I would stick with the negative binomial.

11. Dr. Allison, thanks for so generously sharing your knowledge with us.
I am working on data on the number of questions asked by legislators in a particular setting. No legislator has a zero probability of having a count greater than zero. But 59% of legislators asked zero questions. I was running a ZINB model with clustered standard errors (for parties). Several people suggested I drop the clustered standard errors and use random effects because some of my groups (six) have relatively few observations. I use Stata and can run an NBREG with random effects but not a ZINB with random effects. But I was worried about including the random effects because I would have to move from a ZINB to an NBREG. After reading your post it seems that this should not be such a problem given the excess zeros, and it would be better because I could use the random intercepts. Do you agree that moving to an NBREG with random intercepts would be OK? Thanks in advance for your reply. 1. If you only have six groups, that’s not enough for either clustered standard errors or random effects. I would do fixed effects via dummy variables for parties. I don’t see any obvious reason to prefer ZINB over NBREG. The fact that you have 59% with a 0 is not, in itself, any reason to prefer ZINB. 12. Dr. Allison, Thanks for this great post. I’m working on a study to see if adolescents who have had a mental health visit prior to parental deployment see an increase in visits as their parents get deployed. We are considering using PROC GENMOD with dist=negbin and a GEE repeated measures analysis using repeated subject=child(parent). However, over 70% of children have no further visits. Is it appropriate to use repeated measures when so many have zeros? 1. I don’t see any obvious problem here. But it’s not clear to me in what way the measures are repeated. Is it because there are multiple children per parent? 1. Thanks for the quick response! We are measuring the number of visits per child over deployment and non-deployment periods. 1. In that case, I think you should be OK.
But you may want to consider a fixed effects negative binomial model, as described in my book Fixed Effects Regression Methods for Longitudinal Data Using SAS. 13. Very interesting post! I was brought to this page because I am trying to find the best approach for running multilevel models where the primary exposure of interest is a count variable with a lot of zeros and the dependent variable is a continuous variable. The analyses will be adjusted for potential confounders, and for the random effect of school (i.e., we recruited a stratified sample of children within schools). I thought about dichotomising my independent variable, but I would obviously lose a lot of information in doing so. I am not sure that a linear mixed model will provide accurate estimates for my independent variable. Any thoughts? Thanks in advance! 1. There is no distributional assumption for the independent variable, so the post on zero-inflated models really doesn’t apply. The question is what is the appropriate functional form for the dependence of your dependent variable on the predictor. If you simply treat your exposure variable as a quantitative variable, then you are assuming that each 1-unit increment has the same effect. That may or may not be true. I’d try out different possibilities: quantitative, dichotomous, 5 categories, etc. 14. Hi Paul, when can you say that the number of 0s already exceeds the number allowable under the discrete distribution? 1. There’s no magic number. Even the Poisson distribution can allow for a very large fraction of zeros when the variance (and mean) are small. The negative binomial distribution can also allow for a large fraction of zeros when the variance is large. 15. Currently I am doing my thesis for my master’s degree in biostatistics.
The title of my thesis is “Fitting Poisson-normal and Poisson-gamma models with random effects on zero-inflated oral health data (dmf index)”. I did my analysis with Stata, and in both cases (mine and yours) the results were inconclusive. Which brings me to my shameless request: how did you do the analysis, and what software did you use? I respectfully thank you in advance and hope for further collaboration with you. 1. I use either Stata or SAS. 16. Hi Paul, I found your article really helpful! I am working with a dataset on sickness absence and sickness presenteeism. Most researchers modeling absence or presenteeism individually have used ZINB models, theorising that some structural zeros are due to employees having a no-absence or no-presenteeism rule, whilst sampling zeros are just due to respondents never having been ill. In my research I am combining presenteeism and absence into one measure of ‘illness’ and thus cannot make this distinction (when you are ill you can only be either present or absent from work). Am I right, then, to use a negative binomial regression model without zero inflation (regardless of what the Vuong test says)? And do you know of any article/book I can cite as evidence of the need for a theory of the different zeros for zero-inflation to be used? Chapter 9 of your book, maybe? 1. Well, it does seem that the rationale that others use for the ZINB wouldn’t apply in this case. I would probably just go with the NB. Sorry, but I don’t have a citation to recommend. 17. I am working on a model with a count outcome and trying to figure out which has a better fit: negative binomial or zero-inflated negative binomial. (Poisson definitely doesn’t fit well due to overdispersion.) While the AIC is better for the zero-inflated model, the BIC tends to point towards the regular negative binomial model. Can you help me understand this?
Also, if theoretically the negative binomial model makes more sense (it wasn’t originally hypothesized that there is a separate process for ‘excessive zeros’), does it make sense to go with the negative binomial model despite the better fit of the zero-inflated model? 1. BIC puts more penalty on additional parameters, and the ZINB has more parameters. So it’s not surprising that NB does better on this measure. Sounds like the fit is pretty close for the two models. So why not go with the simpler model if there’s no theoretical reason to prefer the ZINB. 18. Hi all, this is an interesting discussion. For those who are interested, the following paper gives a nice “introductory” review of several of the topics mentioned here, http://www.ncbi.nlm.nih.gov/pubmed/ and demonstrates how these decisions can be guided strongly by theory, etc. 19. What does it mean when the BIC for ZINB = -Inf? 1. It probably means that the algorithm for maximizing the likelihood did not converge. 20. Hi Paul, I have made some progress with PROC GLIMMIX in SAS. The code for my final model is presented below; model one was unconditional with no predictor, model two had socioeconomic status (SES) as a predictor, and the final model has SES and gender as predictors. *question = question type, response = answers to the questions; Proc glimmix data=work.ses method=laplace noclprint; Class question; Model response=ses gender / dist=multinomial link=clogit solution cl oddsratio (diff=first label); Random ses / subject=question type=vc solution cl; Covtest / wald; So far I have gotten suitable results: model two is a better fit than model one, and model three is a better fit than model two. In the final model, the fixed effects are p < .05 for SES and p < .001 for gender. So far everything has been self-taught, picking up information from different sources with no particular one that matches my need. I am therefore not 100% sure of my code (save for dist=mult and link=clogit).
The major problem I am facing now, however, and the one I have spent a considerable amount of time on, is trying to figure out how to get post-hoc tests for the gender effect on the different types of questions (like a pairwise comparison table for an ANOVA). I have tried LSMEANS, but it doesn’t work with multinomial data; I have tried slice and slicediff, as well as contrast (bycat and chisq), but keep getting errors. Once again I am out of options, and the study wouldn’t make much sense if I cannot pick out the particular question types that gender (and ideally SES) has an effect on. Thanks in advance. 1. This code is not right. The RANDOM statement should be something like RANDOM INTERCEPT / SUBJECT=person_id; and question should be a predictor in the model. You can do your tests with CONTRAST statements. 21. Thanks. This is to let everyone know that there is a free version of SAS available for non-commercial purposes. Follow the link below; if it is broken, search for the page through Google. 22. Hi Paul, Thank you for this post and for engaging with the commentators. I will greatly appreciate it if you can offer some advice on my data. I am attempting to replicate and extend a 3 (socio-economic status) x 6 (question type) study. The DV (question type) is measured with a 12-item questionnaire (6 categories containing 2 questions each). Participants in each category (i.e., two questions) can score between 0 and 2. In my study, as well as the aforementioned study, most participants score 0 across all 6 categories. The data therefore do not satisfy the normality assumption for parametric tests, as they are skewed to the right, and the transformations I have tried did not work. I don’t know how the authors got away with publishing results arrived at from an ANOVA with this type of data, as it is not mentioned in their methods. My study tests an extra variable, ‘gender’, theorised to affect the relationship explored in the aforementioned study.
That is, my study design is 2 (gender) x 3 (socio-economic status) x 6 (question type). An initial ANOVA gave all the predicted results, but when I went back to explore the data I realised I had a huge normality problem, which the authors must also have had. If their analysis is wrong I do not want to repeat it. Which statistical analysis do you think would be best in my situation? Thanks in advance. 1. This sounds like a job for ordered logistic regression, also known as cumulative logit. 1. Hi Paul, one minor follow-up question. SPSS’s ordinal regression dialog box only allows one DV at a time. Does this mean that I will have to repeat the analysis six times for my six DVs? If so, will I have to use an adjusted p value? I have searched for answers to this question online and in one or two readily available statistics textbooks but can’t find any. The only answers I have found concern room for more than one IV (i.e., combinations of categorical and continuous IVs). Thanks in advance. Best regards. 1. What you need is software that will allow you to do ordered logistic regression in a mixed-modeling framework (meologit in Stata or GLIMMIX in SAS), or at the least, ordered logit software with clustered standard errors. Each subject would have 6 data records, and question type would be an independent variable. 23. Dear Paul, thank you for your post. I used the zero-inflated negative binomial model to fit my data with a lot of zeros. But after reading your post, I have some concerns, since my dependent variable is the amount of dollars the respondents were willing to pay for a specific policy option, and a “0” means that they are unwilling to pay anything for the option. Though I have a lot of zeros in my data (most of the respondents were unwilling to pay anything), I am not sure I can make the assumption that there are two sorts of zeros.
However, I tried the Vuong test to compare the ZINB model and the conventional negative binomial model, and found that the former is superior to the latter. Does that mean it’s better to use the ZINB model even though I don’t have a theory of two kinds of zeros? Thanks a lot in advance! 1. Well, a dollar amount is not a true count variable, so the underlying rationale for a negative binomial model (either conventional or ZINB) doesn’t really apply. That said, this model might be useful as an empirical approximation. 24. Thank you for an informative blog. Can I please call on your time to clarify an analysis that I believe should follow a ZINB? I am unsure whether I have it right and whether the interpretations are correct. We have data on CV-related ultrasound testing in regions of varying size over a year. Many of these regions are very small and may not carry out any testing since there are no services available (no cardiologists), and some may carry out testing that has not been reported to us for privacy reasons (also likely to be related to having few cardiologists). We are using a ZINB with number of cardiologists as the predictor in the inflation part of the model, and we get what we believe to be sensible results: as the number of cardiologists in a region increases, the odds of a certain/structural zero decrease dramatically. Can you verify that the interpretation of this part of the model is correct? I assume that the negative binomial part of the model is interpreted the normal way, i.e., that each factor influences the rate of testing carried out in each region (we have a log population offset). 1. Makes sense to me. 25. The negative binomial model is an alternative to the Poisson model, and it’s specifically useful when the sample variance exceeds the sample mean. Recall that in the Poisson model the mean and variance are equal.
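As a numeric aside on the mean-variance point: under a Poisson the variance equals the mean, while the common NB2 negative binomial has variance mu + alpha·mu², and both distributions can still place a large share of probability at zero. A stdlib-only Python sketch (all parameter values here are purely illustrative, not from any dataset in this thread):

```python
import math

def poisson_p0(mu):
    # P(Y = 0) under Poisson(mu); the variance equals the mean, mu
    return math.exp(-mu)

def nb2_p0(mu, alpha):
    # P(Y = 0) under an NB2 negative binomial with mean mu and
    # variance mu + alpha * mu**2, where alpha > 0 is the
    # overdispersion parameter (alpha -> 0 recovers the Poisson)
    return (1.0 + alpha * mu) ** (-1.0 / alpha)

# A small mean alone yields plenty of zeros, even under a pure Poisson
print(f"Poisson mu=0.5:       var=0.50, P(0)={poisson_p0(0.5):.3f}")   # 0.607
# Overdispersion pushes the zero fraction higher at the same mean
print(f"NB2 mu=0.5, alpha=2:  var=1.00, P(0)={nb2_p0(0.5, 2.0):.3f}")  # 0.707
print(f"NB2 mu=2.0, alpha=3:  var=14.0, P(0)={nb2_p0(2.0, 3.0):.3f}")  # 0.523
```

The point, echoing the replies above, is that neither a high marginal zero fraction nor marginal overdispersion by itself forces a zero-inflated model; an ordinary Poisson or NB can generate both.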
The zero-inflated model is only applicable when we have two sources of zeros, namely structural and sampling, while hurdle models are suitable when we have only a single source, i.e., structural. Regarding the data with 35% zeros: first compute the mean and variance of the data. If the mean and variance are equal, fit a Poisson model; if not, try a negative binomial model. If the NB doesn’t fit, check the characteristics of the zeros, in terms of structural and sampling, and then decide whether to fit a zero-inflated model or a hurdle model. 1. While I generally agree with your comment, you can’t just check the sample mean and variance to determine whether the NB is better than the Poisson. That’s because, in a Poisson regression model, the assumption of equality applies to the CONDITIONAL mean and variance, conditioning on the predictors. It’s quite possible that the overall sample variance could greatly exceed the sample mean even under a Poisson model. Also, there’s nothing in the data that will tell you whether some zeros are structural and some are sampling. That has to be decided on theoretical grounds. 26. Hello from Korea. Many thanks for your post. I counted how creative my research participants’ answers were. Most of the answers were 0 because creativity is a rare phenomenon. I tried to use ZIP, but it was a bit difficult to use in SPSS. (I tried to find a Stata or SAS manual for ZIP in Korean, but I couldn’t.) So I googled many times, and I saw your article, which helped me use a standard negative binomial regression model, since my data are overdispersed. Is there any article I can refer to? I want to cite an article or book as evidence for my thesis. Is your book “Logistic Regression Using SAS: Theory & Application” appropriate to cite when I use a negative binomial model instead of a zero-inflated Poisson model? Thank you in advance. 1. Yes, you can cite my book. The discussion is in Chapter 9. 27.
Many thanks, sir, for this explanation. 35% of my data consists of zero values; do I need to apply a zero-inflated negative binomial, or is it OK to use a standard or random-parameter negative binomial? 1. Just because you have 35% zeros, that does not necessarily mean that you need a zero-inflated negative binomial. A standard NB may do just fine. 1. I think since he has 35% zero values, he has to use ZINB. Why do you think it is not necessary to use ZINB? 1. Just because the fraction of zeroes is high, that doesn’t mean you need ZINB. NB can accommodate a large fraction. 28. Paul, in this post you seem to recommend the standard negative binomial regression as a better way to deal with overdispersion. In another post, “Beware of Software for Fixed Effects Negative Binomial Regression” (June 8th, 2012), you argued that some software that uses the HHG method to do conditional likelihood for a fixed effects negative binomial regression model does not do a very good job. Then, if one uses that software, it may be wiser to use ZIP rather than negative binomial regression. Right? 1. Well, to the best of my knowledge, there’s no conditional likelihood for doing fixed effects with ZIP. So I don’t see any attraction for that method. 1. OK, I see! To sum up: 1) The standard Poisson model does not work because it cannot deal with overdispersion and zero excesses. 2) The negative binomial model does not do appropriate conditional likelihood, at least in some software (SAS, Stata). 3) There is no conditional likelihood for ZIP. Then, it is kind of tough because there is no model that can appropriately deal with overdispersion and zero excesses. There is the pglm package in R, but there is not much information about how it deals with these two issues. Do you happen to know more about it? A solution may be to do Poisson fixed effects with the quasi-maximum likelihood estimator (QMLE). This can be done in Stata. However, I read that QMLE can overcome overdispersion but does not do a great job with zero excesses.
Any thoughts about QMLE? 1. I agree with your three points. But, as I suggested, the negative binomial model often does just fine at handling “excess” zeros. And you can do random effects NB with the menbreg command in Stata or the GLIMMIX procedure in SAS. For fixed effects, you can do unconditional ML or use the “hybrid” method described in my books on fixed effects. I don’t know much about pglm, and the documentation is very sparse. QMLE is basically MLE on the “wrong” model, and I don’t think that’s a good way to go in this case. 1. By the way, you said earlier that there’s no conditional likelihood for doing fixed effects with ZIP. What about PROC TCOUNTREG in SAS? Something like: MODEL dependent = / DIST=ZIP ERRORCOMP=FIXED; Doesn’t it do ZIP fixed effects conditional likelihood? 2. I just tried that and got an error message saying that the errorcomp option was incompatible with the zeromodel statement. But I was using SAS 9.3. Maybe it works in 9.4. 29. Sir, I work on crime data, but I am facing an interesting problem. When I fit the count data models, I find that the ZINB explains the problem better, but when I plot the expected dependent values, the Poisson distribution controlled for cluster heterogeneity fits better. Does it have something to do with your debate? 1. Probably not. 30. This blog is going to be required reading for my students. If only they could have this type of discourse. Thanks. 31. ZI models may provide some explanation of the presence of zeros. I do not know if this is an advantage of ZI models. And many thanks for your nice blog. 32. Paul, I have been researching ZIP and have come across differing suggestions about when it is appropriate to use. The example below is from a tutorial page on when zero-inflated analysis would be appropriate. My guess is that you would say zero-inflated analysis is not appropriate in this example, as there is no subgroup of students who have a zero probability of a “days absent” count greater than 0.
Thanks. “School administrators study the attendance behavior of high school juniors over one semester at two schools. Attendance is measured by number of days of absent and is predicted by gender of the student and standardized test scores in math and language arts. Many students have no absences during the semester.” 1. I agree that this is not an obvious application for ZIP or ZINB. Surely all students have some non-zero risk of an absence, due to illness, injury, a death in the family, etc. 33. This discussion between you and Greene was a great exchange, and I gained a lot from reading it. I would love to see you two coauthor a piece in (e.g.) Sociological Methods reviewing the main points of agreement and disagreement. It would be a great article! 1. Good idea, but I don’t think it’s going to happen. 34. Is there a simple criterion to guide a researcher on whether to use ZINB? For example, what proportion or percentage of the sample should the zeros constitute in order for one to use ZINB? Can it be done from such a point of view? 1. I’m not aware of any such criterion. 35. Are many zeros a statistical problem in logistic regression analysis (with a 0/1 response variable) as well? 1. No, although see my earlier blog on logistic regression with rare events. 36. Hi Paul. Thank you for your answer. I was wondering why you think that ZINB might not make sense? Also, by ‘dichotomize’, do you mean using only the cells with values > 0? The reason why I might need some zero cells is that this is a study of lemming habitat choice (as expressed by the response variable ‘number of winter nests in a cell’) as a function of some environmental explanatory variables (related to snow cover and vegetation characteristics). I thought, then, that in order to best uncover the relation between my explanatory variables and my response variable, cells with especially poor environmental conditions (and zero nests) ought also to be represented? 1.
Regarding the second question, I simply meant to dichotomize into zero and not zero. By “make sense” I meant is it reasonable to suppose that there is some substantial fraction of cases that have 0 probability of making a nest regardless of the values of any covariates. 1. Yes, you are right that a large number of cells will be zero, not because of the covariates, but just by chance – and because there are not so many lemmings in the area to fill it out. I understand that it is these unexplained zeros that you say make ZINB pointless(?) I guess that they should have belonged to the group of ‘structural zeros’ (like sterile women in your example) for things to make sense – only they don’t, since these cells could easily have housed one or more nests. Could you elaborate a little bit on which approach and model you think might be better then? By ‘dichotomize into zero and not zero’, do you mean run the data strictly as presence-absence in a logistic regression manner? Immediately, I would like to make use of the counts, as I think they might add information to the analysis. Finally, I would like to say that your advice and help is very much appreciated. Being able to choose a meaningful and appropriate model for the data analysis above will allow me to move past a critical point and into the final stages of writing my master thesis on the topic. Thank you in advance. Best regards, 2. Hi Paul. Sorry, I just read your comment correctly now. What I wrote above still applies to the dataset, though. The answer to your question: ‘is it reasonable to suppose that there is some substantial fraction of cases that have 0 probability of making a nest regardless of the values of any covariates’ must be: No. There are no ‘sterile women’ in this dataset. 
The only reason why a large share of the cells count zero, regardless of the values of the covariates, is that there are so few lemmings in the area that they cannot take up all of the space, even some of the attractive locations. I understand that it is the ZI and hurdle approaches that assume a fraction of observations is bound to be 0 regardless of covariates. Since you say that the basic negative binomial regression model (without ZI) can also handle many zeros, might that be the road to go down, then? 1. I’d say give it a try. 37. Thank you both for the interesting discussion. Can either of you tell me if a count dataset can contain such a large number of zeros that none of the models mentioned in this blog (NB, ZIP, ZINB) are likely to work?! I have a count dataset that contains 126,733 cells, of which 125,524 count “0”. That is, 99.05% of my dataset has a count of zero. Is this a detrimental proportion, and should I instead do some random resampling of zero cells in order to lower the number? Thank you in advance… 1. Well, ZINB should “work” in the sense of fitting the data. Not sure whether it really makes sense, however. In a case like this, I would be tempted to just dichotomize the outcome. I don’t see any advantage in sampling the zero cells. 2. Hi, Jakob! Why not try just dichotomizing (empty=“no” and “yes”>0, or white/black pixels) and then a logit regression? Another way: aggregate to bigger non-empty cells and a Poisson-like regression, or just wait until a lemming peak year 😉 38. What an intuitive discussion! Using the NB model, the standard error estimates are often lower in Poisson than in NB, which increases the likelihood of incorrectly detecting a significant effect in the Poisson model. But fitting ZI models predicts the correct mean counts and probability of zeros. So I think ZINB is better than NB when there are excess zeros. 39. Thank you both for the interesting discussion.
What do you think about two-component “hurdle” models (binomial + gamma, or Poisson, or negative binomial)? It seems to me that they are easily interpretable and flexible. 1. I don’t know a lot about hurdle models, but they seem pretty similar to zero-inflated models. They could be useful in some situations, but may be more complex than needed. 1. IMHO, they look similar, but are easily interpretable and help to find some interesting effects, for example a different sign on the same predictor in the binomial and count parts of the model! 40. “In all data sets that I’ve examined, the negative binomial model fits much better than a ZIP model, as evaluated by AIC or BIC statistics. And it’s a much simpler model to estimate and interpret.” I get your second point in terms of a simpler model to estimate and interpret. But I question your first point. AIC and BIC are both based on the log likelihood. Negative binomial and ZIP have different probability density functions and thus different expressions for their likelihoods. It’s my understanding that AIC and BIC are meaningless when comparing models without the same underlying likelihood form. 1. Good question, but I disagree. To compare likelihoods (or AICs or BICs), there’s no requirement that the “probability density functions” be the same. Only the data must be exactly the same. For example, for a given set of data, I can compute the likelihood under a normal distribution or a gamma distribution. The relative magnitudes of those likelihoods yield valid information about which distribution fits the data better. 41. Thank you both for the interesting discussion. I’ve been working on a random effects negative binomial model to explain crime occurrence across a spatial grid. The negative binomial model appears to fit quite well. That said, I’ve been thinking about whether there are two distinct data-generating processes producing the zeros: one, crime hasn’t occurred, and two, crime occurred but has never been reported. Perhaps then the ZINB makes sense?
I haven’t tried it yet…but will. 1. I think that it might be inappropriate to do as you describe – for two reasons: 1) The only reason why you came up with two possible classes of 0’s is that you know this is required for the ZI procedure, i.e. it is a post rationalization (also mentioned in the discussion). 2) You investigate where crime takes place – so a 0 because no one reported a crime is not a ‘real’ 0 – the crime did take place! For comparison, refer to the example from Paul: Both groups of women (sterile and those who just had no children) were ‘real’ 0’s – none of them had children! 42. 1. I would not agree with you that the ZIP model is a nonstarter. In my experience, the ZINB model seems in many cases to be overspecified. There are two sources of heterogeneity embedded in the ZINB model, the possibly unneeded latent heterogeneity (discussed by Paul above) and the mixing of the latent classes. When the ZINB model fails to converge or otherwise behaves badly, it seems in many cases to be because the ZIP model is better suited for the modeling situation at hand. * much of the rest of this discussion focuses on what I would call a functional form issue. Paul makes much of the idea of a researcher faced with an unspecified theory and a data set that contains a pile of zeros. At the risk of sounding dogmatic about it, I am going to stake my position on the situation in which the researcher has chosen to fit a zero inflated model (P or NB) because it is justified by the underlying theory. If the researcher has no such theory, but a data set that seems to be zero heavy, there really is no argument here. As I agreed earlier, there are many candidates for functional forms that might behave just as well as the ZI* models in terms of the fit measures that they prefer to use, such as AIC. (More on that below.) 2. See above. Just one point. Yes, the NB model is a continuous (gamma) mixture of Poissons. 
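Greene's observation just above, that the NB is a continuous gamma mixture of Poissons, is easy to check by simulation: draw a gamma-distributed heterogeneity multiplier with mean 1, then draw a Poisson count at the scaled rate. A stdlib-only Python sketch (parameter values are arbitrary, chosen only for illustration):

```python
import math
import random

random.seed(12345)

def poisson_draw(lam):
    # Inverse-CDF Poisson sampler; adequate for the modest rates used here
    k, p, u = 0, math.exp(-lam), random.random()
    cum = p
    while cum < u:
        k += 1
        p *= lam / k
        cum += p
    return k

def nb_via_gamma_mixture(mu, alpha):
    # Unobserved heterogeneity U ~ Gamma(shape=1/alpha, scale=alpha),
    # so E[U] = 1; Poisson(mu * U) is then marginally NB2 with mean mu
    # and variance mu + alpha * mu**2
    u = random.gammavariate(1.0 / alpha, alpha)
    return poisson_draw(mu * u)

mu, alpha, n = 2.0, 3.0, 100_000
draws = [nb_via_gamma_mixture(mu, alpha) for _ in range(n)]
zero_frac = sum(d == 0 for d in draws) / n
mean = sum(draws) / n
var = sum((d - mean) ** 2 for d in draws) / (n - 1)

print(f"simulated:  P(0)={zero_frac:.3f}, mean={mean:.2f}, var={var:.1f}")
print(f"NB2 theory: P(0)={(1 + alpha * mu) ** (-1 / alpha):.3f}, "
      f"mean={mu:.2f}, var={mu + alpha * mu ** 2:.1f}")
```

The simulated zero fraction, mean, and variance line up with the closed-form NB2 values, including a zero fraction over 50 percent with no zero-inflation component anywhere in the data-generating process. As the surrounding text stresses, this continuous mixing is a different mechanism from the two-class latent mixture of the ZI models, even though both can be framed as mixtures.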
But, the nature of the mixing process in that is wholly different from the finite mixture aspect of the ZI models. Once again, this is an observation about theory. It does not help to justify the zip model or any of the suggested alternatives. 3. What I have in mind about fit measures is this. Many individuals (I have seen this in print) discuss the log likelihood, AIC or (even worse) pseudo R-squared in terms they generally intend to characterize the coefficient of determination in linear regression. I have even seen authors discuss sums of squares in Poisson or Probit models as they discuss AIC or Pseudo R squareds even though there are no sums of squares anywhere in the model or the estimator. These measures do not say anything about the correlation (or other correspondence) of the predictions from the model with the observed dependent variable. The difference between a “y-hat” and a “y-observed” appears nowhere in the likelihood function for an NB model, for example. But, it is possible to make such a comparison. If the analyst computes the predicted outcome from a ZINB model using the conditional mean function, then uses the correspondence of this predictor with the outcome, they can compute a conventional fit measure that squares more neatly with what people seem to have in mind by “fit measure.” As a general proposition, the ZINB model will outperform its uninflated counterpart by this measure. 4. I have no comment here. The buttons are there to push in modern software. 5. The problem of interpretation runs deeper than just figuring out what a beta means when a gamma that multiplies the same variable appears elsewhere in the same model. In these nonlinear models, neither the beta nor the gamma provides a useful measure of the association between the relevant X and the expected value of the dependent variable. It is incumbent on the researcher to make sense of the implications of the model coefficients. 
This usually involves establishing and then estimating the partial effects. Partial effects in these models are nonlinear functions of all of the model parameters and all of the variables in the model – they are complicated. Modern software is built to help the researcher do this. This is a process of ongoing development, such as the MARGINS command in Stata and nlogit’s PARTIALS command. None of this matters if the only purpose of the estimation is to report the signs and significance of estimated coefficients, but it has to be understood that in nonlinear contexts these are likely to be meaningless. 6. It is possible to “parameterize” the model so that P=b0/(1+b0) * exp(beta’x)/[1+exp(beta’x)], which is what is proposed. The problem that was there before remains. The “null hypothesis” is that b0=0, which is tricky to test, as Paul indicated. However, if b0=0, then there is no ZIP model. Or, maybe there is? If b0 is zero, how do you know that beta = 0? The problem of the chi-squared statistic when b0 is on the boundary of the parameter space is only the beginning. How many degrees of freedom does it have? If b0=0, then beta does not have to be. Don Andrews published a string of papers in Econometrica on models in which model parameters are unidentified under the null hypothesis. This is a template case. The interested reader might refer to them. For better or worse, researchers have long used the Vuong statistic to test the Poisson or NB null against the zero-inflation model. The narrower model usually loses this race. To sum up, it is difficult to see the virtue of the reparameterized model. The suggested test is invalid. (We don’t actually know what it is testing.) The null model is just the Poisson or NB model. The alternative is the zero-inflated model, without the reparameterization. 43. The zero inflation model is a latent class model. It is proposed in a specific situation – when there are two kinds of zeros in the observed data.
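The "two kinds of zeros" in this latent class formulation appear directly in the ZIP zero probability, P(Y=0) = pi + (1 − pi)·exp(−lambda): a structural zero with probability pi, plus a sampling zero from the Poisson part. A stdlib-only Python sketch with illustrative parameter values (not drawn from any model in this thread):

```python
import math

def zip_zero_decomposition(pi, lam):
    # Zero-inflated Poisson: with probability pi the case is a
    # structural zero; otherwise the count is Poisson(lam), which can
    # still produce a "sampling" zero with probability exp(-lam)
    structural = pi
    sampling = (1.0 - pi) * math.exp(-lam)
    return structural + sampling, structural, sampling

total, structural, sampling = zip_zero_decomposition(pi=0.30, lam=2.5)
print(f"P(Y=0) = {total:.3f} = structural {structural:.3f} "
      f"+ sampling {sampling:.3f}")
# A different (pi, lam) pair can imply a nearly identical overall zero
# fraction, which is why the split between the two kinds of zeros has
# to come from theory, not from the marginal count of zeros alone.
print(f"a different split, similar total: "
      f"{zip_zero_decomposition(pi=0.34, lam=3.63)[0]:.3f}")
```

Setting pi = 0 collapses the zero probability back to the plain Poisson exp(−lambda), which is the boundary-of-the-parameter-space testing problem discussed in the surrounding points.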
It is a two-part model that has a specific behavioral interpretation (that is not particularly complicated, by the way). The preceding discussion is not about the model. It is about curve fitting. No, you don’t need the ZINB. There are other functions that can be fit to the data that will look like they “fit better” than the ZINB model. However, neither the log likelihood function nor the suggested AIC are useful “fit measures” – the fit of the model to the data in the sense in which it is usually considered is not an element of the fitting criterion. If you use the model to predict the outcome variable, then compare these predictions to the actual data, the ZINB model will fit so much better there will be no comparison. It is always intriguing when a commentator argues that a model is “difficult to fit.” Typing ZINB in Stata’s or nlogit’s command language is not harder than typing negbin. These models have existed for years as supported procedures in these programs. There is nothing difficult about fitting them. As for difficulty in interpreting the model, the ZINB model, as a two-part model, makes a great deal of sense. It is hard to see why it should be difficult to interpret. The point above about the NB model being a parametric restriction on the ZINB model is incorrect. The reparameterization merely inflates the zero probability. But, it loses the two-part interpretation – the reparameterized model is not a zero inflated model in the latent class sense in which it is defined. The so-called reparameterized model is no longer a latent class model. It is true that the NB model can be tested as a restriction on the proposed model. But, the proposed model is not equivalent to the original ZINB model – it is a different model. Once again, this is just curve fitting. There are numerous ways to blow up the zero probability, but these ways lose the theoretical interpretation of the zero inflated model.
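The prediction-based fit comparison described here can be sketched numerically. All values below are made up for illustration (they are not estimates from any data set); the only substantive assumption is that a zero-inflated model supplies, for each observation, a class probability pi_i and a count mean mu_i, so the conditional mean of the outcome is (1 − pi_i)·mu_i:

```python
import math

# Hypothetical per-observation quantities from a fitted zero-inflated model:
# pi_i = probability of the "always zero" latent class, mu_i = count mean
# in the other class. These numbers are invented for illustration.
pi = [0.6, 0.1, 0.4, 0.05, 0.5]
mu = [2.0, 3.5, 1.0, 4.0, 2.5]
y = [0, 4, 0, 5, 1]  # observed counts

# Conditional mean function: E[y_i] = (1 - pi_i) * mu_i
yhat = [(1 - p) * m for p, m in zip(pi, mu)]

def pearson_r(a, b):
    """Plain Pearson correlation, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((z - mb) ** 2 for z in b))
    return cov / (sa * sb)

# A conventional "fit measure": squared correlation between the model's
# predictions and the observed dependent variable.
r2 = pearson_r(yhat, y) ** 2
print(round(r2, 3))
```

The likelihood itself never touches the difference between yhat and y; this squared correlation is computed after estimation, which is exactly why it "squares more neatly" with the everyday sense of fit.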
I appreciate William Greene’s thoughtful consideration of some of the issues in my blog. Here are some responses:

1. ZIP model. Given that Greene didn’t mention the zero-inflated Poisson model, I’m guessing that he agrees with me that the ZIP model is a non-starter. It’s just too restrictive for the vast majority of applications.

2. Curve fitting vs. a behavioral model. It’s my strong impression that a great many researchers use zero-inflated models without any prior theory that would lead them to postulate a special class of individuals with an expected count of 0. They just know that they’ve got lots of zeros, and they’ve heard that that’s a problem. After learning more about the models, they may come up with a theory that would support the existence of a special class. But that was not part of their original research objective. My goal is simply to suggest that a zero-inflated model is not a necessity for dealing with what may seem like an excessive number of zeros. As I mentioned toward the end of the blog, there are definitely situations where one might have strong theoretical reasons for postulating a two-class model. But even then, I think it’s worth comparing the fit of the ZINB model with that of the conventional NB model. The two-class hypothesis is just that — a hypothesis. And if the evidence for that hypothesis is weak, maybe it’s time to reconsider. It’s also worth noting that the conventional NB model can itself be derived as a mixture model. Assume that each individual i has an event count that is generated by a Poisson regression model with expected frequency Ei. But then suppose that the expected frequency is multiplied by the random variable Ui to represent unobserved heterogeneity. If Ui has a gamma distribution (the mixing distribution), then the observed count variable will have a negative binomial distribution. The generalized gamma distribution is pretty flexible and allows for a large concentration of individuals near zero.

3. Fit criteria.
I’m not sure what to make of Greene’s statement that “neither the log-likelihood nor the suggested AIC are useful ‘fit measures’—the fit of the model to the data in the sense in which it is usually considered is not an element of the fitting criterion.” Why should the fitting criterion (i.e., the log-likelihood) not be a key basis for comparing the fit of different models? If it’s not useful for comparing fit, why should it be used as a criterion for estimation? In any case, AIC and BIC are widely used to compare the relative merits of different models, and I don’t see any obvious reason why they shouldn’t be used to evaluate the zero-inflated models.

4. Fit difficulty. Greene is puzzled by any suggestion that zero-inflated models are “difficult to fit.” Those weren’t exactly my words, but I can stipulate that there are fewer keystrokes in ZINB than in NEGBIN. So in that sense, ZINB is actually easier. On the other hand, there is certainly more calculation required for the ZINB than for the NB. And if you’re dealing with “big data”, that could make a big difference. Furthermore, it’s not at all uncommon to run into fatal errors when trying to maximize the likelihood for the ZINB.

5. Interpretation difficulty. Why do I claim that the ZINB model is more difficult to interpret? Because you typically have twice as many coefficients to consider. And then you have to answer questions like “Why did variable X have a big effect on whether or not someone was in the absolute zero group, but not much of an effect on the expected number of events among those in the non-zero group? On the other hand, why did variable W have almost the same coefficients in each equation?” As in most analyses, one can usually come up with some after-the-fact explanations. But if the model doesn’t fit significantly better than a conventional NB with a single set of coefficients, maybe we’re just wasting our time trying to answer such questions.

6. Nesting of models.
As I recall, Greene’s earlier claim that the NB model was not nested within the ZINB model was based on the observation that the only way you can get from the ZINB model to the NB model is by making the intercept in the logistic equation equal to minus infinity, and that’s not a valid restriction. But suppose you express the logistic part of the model as p/(1-p) = b0*exp(b1*x1 + … + bk*xk) where b0 is just the exponentiated intercept in the original formulation. This is still a latent class model in its original sense. Now, if we set all the b’s = 0, we get the conventional NB model. The issue of whether the models are nested is purely mathematical and has nothing to do with the interpretation of the models. If you get from one model to another simply by setting certain unknown parameters equal to fixed constants (or equal to each other), then they are nested. As I mentioned in the blog, because b0 has a lower bound of zero, the restriction is “on the boundary of the parameter space.” It’s now widely recognized that, in such situations, the likelihood ratio statistic will not have a standard chi-square distribution. But, at least in principle, that can be adjusted for.

1. W.r.t. the difficulty of interpretation of ZI models, I think you can imagine there is some unknown (unobserved) explanatory variable which causes many zeros. The zero-inflated “sub-model” (I don’t know the correct term) is activated by this variable. For computer researchers (of whom I am one) this casualness is often tolerated. But maybe in other fields things are different.

44. Thanks for this blog post. You make these statistical concepts easy to understand; I will certainly be on the lookout for your books.
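The reparameterization debated in this exchange can be checked numerically. The sketch below uses a Poisson count part for simplicity (the NB case is analogous) and invented coefficient values; it only illustrates the algebra, not any fitted model. With b0 > 0 the zero probability is inflated above the plain count model's, and at b0 = 0 the slope coefficients drop out entirely — which is precisely the boundary/identification issue both discussants raise:

```python
import math

def inflated_zero_prob(b0, xb, lam):
    """P(Y=0) for a two-part model whose mixing probability p satisfies
    the odds parameterization p/(1-p) = b0 * exp(xb), with a Poisson(lam)
    count part. All inputs here are hypothetical illustration values."""
    odds = b0 * math.exp(xb)
    p = odds / (1 + odds)
    # Zero probability of the mixture: P(Y=0) = p + (1-p) * P_count(0)
    return p + (1 - p) * math.exp(-lam)

lam = 1.3  # made-up Poisson mean

# With b0 > 0 the zero probability exceeds the plain Poisson value...
assert inflated_zero_prob(0.5, 0.2, lam) > math.exp(-lam)

# ...and at b0 = 0 the model collapses to plain Poisson no matter what
# the linear index xb (i.e., the slope coefficients) is -- the slopes
# are unidentified under the null.
assert inflated_zero_prob(0.0, 0.2, lam) == math.exp(-lam)
assert inflated_zero_prob(0.0, -3.7, lam) == math.exp(-lam)
print("ok")
```

Because p reaches 0 only at the edge b0 = 0, any likelihood-ratio test of that restriction sits on the boundary of the parameter space, as noted above.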
Average Value and Calculation | Quantitative Aptitude | Leverage Edu

Average value and calculation questions very commonly appear in the banking, management, law exams, and government exams like UPSC, CDS, NDA, or AFCAT. To attempt them without much difficulty, it is necessary to get an overview of average value and calculation. Additionally, we will also learn the correct formula to calculate these types of questions. Then we will proceed toward some questions related to this. Keep reading this blog till the end so you do not miss out on any of these things.

What is the Average Value?

The average value is a way to find the typical or central value in a set of numbers. To calculate the average, you add up all the numbers in the set and then divide that sum by the total count of numbers. The average value represents a typical value within a group of numbers by distributing the total evenly among all the values.

Also Read: Average Cost Questions: Formulas and Solved Examples

How to Calculate the Average Value?

Following are the steps to calculate the average value.
• Calculate the sum of all data values
• Identify the number of data values
• Lastly, find the average value by dividing the sum of the values by the number of data values.
The obtained result is the average value of the given data set.

Questions on Average Values

Example 1: A batsman's runs scored in seven consecutive matches are given below:
69, 21, 78, 77, 94, 54, 48
Find the average runs scored by the batsman.
Solution: Given, 69, 21, 78, 77, 94, 54, 48
Step 1: Sum of runs = 69 + 21 + 78 + 77 + 94 + 54 + 48 = 441
Step 2: Number of matches = 7
Step 3: Average = 441/7 = 63
Therefore, the average runs scored by the batsman = 63.

Example 2: There are 7 boys and the mean weight of all of them together is 56 kgs. If the weights of six of them are 52, 57, 55, 60, 59, and 55, what will be the weight of the seventh boy?
Solution: We know the mean weight of the 7 boys is 56 kgs (that is, 56 kgs x 7 = 392 kgs), and we also know the individual weights of six of the boys, which we add up together. So, 52 + 57 + 55 + 60 + 59 + 55 = 338 kgs. Therefore, the weight of the seventh boy = the total weight of the 7 boys – the total weight of the 6 boys. Putting in the required values, the total weight of the 7 boys as 392 kgs and the total weight of the 6 boys as 338 kgs, the result comes to 54 kgs. Hence the weight of the 7th boy is 54 kgs.

Example 3: A cricketer has a mean score of 58 runs in 9 innings. How many runs must the cricketer score in the 10th innings to raise the mean score to 61?
Solution: The mean score of 9 innings was 58 runs. Therefore the total score of the 9 innings will be 58 x 9 = 522 runs. The mean score required in 10 innings is 61 runs. The total score required in 10 innings is 61 x 10 = 610 runs. Therefore the number of runs to be scored in the 10th innings will be: the total score of 10 innings – the total score of 9 innings = 610 – 522 = 88 runs.

Example 4: There are 5 numbers whose mean is 28. If one number is excluded and the mean reduces by 2, what is the excluded number?
Solution: The mean of 5 numbers is 28, so the sum of the five numbers is 28 x 5 = 140. If the mean is reduced by 2, the mean of the remaining 4 numbers will be 28 – 2 = 26. The sum of these 4 numbers will be 26 x 4 = 104. Therefore, the excluded number will be the sum of the 5 numbers – the sum of the 4 numbers. Here, put the sum of the 5 numbers as 140 and the sum of the 4 numbers as 104, and the result comes to 36.

Average Value Formula: Sum of All Data Values/Total Number of Data Values

Also Read: Difference Between Average and Mean

How do I calculate the average value?
The average value is calculated by using this formula: Sum of All Data Values/Total Number of Data Values.

Why do we calculate the average value?
It gives us a single representative value that summarizes the available data.

What is the meaning of the average value?
The term Average is used to denote a value that is meant to represent the sample. Engaging with these questions is instrumental in honing your analytical thinking, pattern recognition, and problem-solving skills. For more study material on Indian Exams, check out Leverage Edu!
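The formula and the four worked examples above can be verified with a short script (Python is used here purely for illustration):

```python
def average(values):
    """Average = sum of all data values / number of data values."""
    return sum(values) / len(values)

# Example 1: batsman's runs over seven matches
assert average([69, 21, 78, 77, 94, 54, 48]) == 63

# Example 2: weight of the seventh boy from the group mean
total_7 = 56 * 7                         # 392 kgs for all seven boys
total_6 = sum([52, 57, 55, 60, 59, 55])  # 338 kgs for six of them
assert total_7 - total_6 == 54

# Example 3: runs needed in the 10th innings to raise the mean to 61
assert 61 * 10 - 58 * 9 == 88

# Example 4: the excluded number when the mean drops from 28 to 26
assert 28 * 5 - 26 * 4 == 36

print("all examples check out")
```

Each example reduces to the same trick: convert means back to totals, then subtract.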
power rating for ball mill for quartz

The specific rates of breakage of particles in a tumbling ball mill are described by the equation S_i = a x_i^α Q(z), where Q(z) is the probability function which ranges from 1 to 0 as particle size increases. This equation produces a maximum in S, and the particle size of the maximum is related to ball diameter by x_m = k_1 d. The variation of a with ball diameter was found to be of the form ...

WhatsApp: +86 18838072829

2100*3000 Factory Ball Mill for Quicklime/ Quartz/ Pebble/ Non Metallic Mineral/ Nickel Ore/ Mine/ Salt Mud/ Sand/ Additives/ Curing Agent/ Waste US / Piece 1 Piece (MOQ)

Type: Ball Mill Motor Type: AC Motor Motor Power: 15KW Rotational Speed: 1719r/min Application: Mine Materials: Iron

Ball Mills. Ball mills have been the primary piece of machinery in traditional hard rock grinding circuits for 100+ years. They are proven workhorses, with discharge mesh sizes from ~40M to <200M. Use of a ball mill is the best choice when long term, stationary milling is justified by an operation. Sold individually or as part of our turnkey ...

Professional powder equipment manufacturer.

High quality high yield Feeding size: ≤25mm Capacity: .65615t/h Applicable materials: Quartzite, sand stone, quartz sand Get Price Now Quartz ball mill (quartz sand ball mill) is a specialized quartz grinding equipment developed on the basis of traditional ball mill combining the characteristics of quartz ore.

In terms of the Brace and Walsh surface free energy of quartz, his results give a ball mill efficiency that is less than %. ... The net power required to operate the mill with the ball load of 33 kg and material charge of 3 kg was kW. Fig.
9 presents the specific surface area of sodalime glass, broken with different modes of force, ...

For each grinding test, the mill was first loaded with a kg mass of ball mix and a 150 g mass of feed sample volume of ml tap water was then added to the mill charge in order to make a 70 wt.% pulp monosized fractions of quartz and chlorite(− 2 + mm, − + mm, − + mm, − + mm)were first ground to determine a better size ...

MINGYUAN has various models of ball mills for silica quartz sand frac sand grinding that used in quartz sand processing plant, capacity options like 2 t/h, 510 t/h, 1030t/h, 3050t/h frac ball ...

Ball Mill Motor Power Draw Sizing and Design Formula. The following equation is used to determine the power that wet grinding overflow ball mills should draw. For mills larger than meters (10 feet) diameter inside liners, the top size of the balls used affects the power drawn by the mill. This is called the ball size factor S. The following ...

The ball mill grinds ores and other materials typically to 35 mesh or finer in a variety of applications, both in open or closed circuits. Price and paramete...

In this paper, we present a detailed investigation of the dry grinding of silica sand in an oscillatory ball mill. We are interested in the evolution of specific surface area (SSA), particle size distribution, agglomerated SSA and consumed electrical energy as a function of the input grinding energy power.

Silica sand ball mill is a professional ball mill equipment for grinding silica sand.
In some areas, it is also called silica sand grinding mill or silica sand grinding sand is a chemically stable silicate mineral with particle size between and silica sand and quartz sand are mainly composed of SiO 2, but their hardness and shape are slightly different due to ...

The basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are; material to be ground, characteristics, Bond Work Index, bulk density,...

TECHNICAL How to Spec a Mill Gear Power Transmission Shalimar Countues Quartz Grinding Ball Mills, 15000 ... ME EN 7960 Precision Machine Design Ball Screw Calculations 412 Basic Static Load Rating C oa When ball screws are subjected to excessive loads in static condition (non rotating shaft), local permanent deformations are ...

Find here Ball Mills, Laboratory Grinding Mill manufacturers, suppliers exporters in India. ... Power: 11380 kw. Kinc Mineral Technologies Private Limited. ... Power Rating: 4 HP. Scientific Technological Equipment Corporation.

Agate milling media for planetary and roller ball mills. Agate is a common rock formation usually comprised mainly of quartz. MSE Supplies offers agate milling media and agate planetary ball mill jars that of high purity (% SiO 2) made with natural Brazilian agate. Agate milling media and agate planetary ball mill jars have been used in ...
This paper presents the kinetics study of dry ball milling of calcite and barite minerals. The experimental mill used was a laboratory size of 209 mm diameter, 175 mm length, providing a total

The ball mill machine is known as a ball grinding machine. It is a wellknown ore grinding machine and is widely used in mining, construction, and aggregate application. JXSC started manufacture the ball mill grinder since 1985, supply globally service includes flow design, manufacturing, installation, and free operation training in mining ...

Grinding kinetics of quartz and chlorite in wet ball milling. Powder Technol., 305 (2017), pp. 418425. [12] ... Analysis of ball mill grinding operation using mill power specific kinetic parameters. Adv. Powder Technol., 25 (2014), pp. 625634.

Power Rating For Ball Mills For Quartz Ball Mill. Ball Mill Power Ratings. Motor rating for ball mill mill grinding wikipedia ball mill a typical type of fine grinder is the ball mill 38000 hp motor a sag mill with a 44 134m diameter and a power of 35 mw 47000 hp has been designed attrition between grinding balls and ore particles causes grinding of finer particl sag mills are characterized ...

The mineral composition and content are shown in Table to the measurement result by MLA, the minerals in the ore were mainly magnetite and quartz, accounting for % and %, respectively, and their weight was accounted for % of the total, indicating that the magnetite ores can be basically considered a twophase mineral consisting of magnetite and quartz.

For a typical 7mlong mill, the power required is therefore predicted to be MW. This is consistent with a peak motor power rating of MW that would commonly be used for a mill of this size.
Field data is very difficult to obtain for comparison with the DEM results, so we instead compare with some laboratory observations.

The ball impact energy on grain is proportional to the ball diameter to the third power: E = K_1 d_b^3. (3) The coefficient of proportionality K_1 directly depends on the mill diameter, ball mill loading, milling rate and the type of grinding (wet/dry). None of the characteristics of the material being ground have any influence on K_1.

The balls to powder ratio (BPR) and powder type were investigated in relation to the particle size. The study showed that the combination of the BPR and powder type affects the particle size result. The optimum of BPR at 12 with the number of balls is 60 pieces, and the filling rate is %. The result shows that the horizontal ball mill able ...

Video showing our ball mills for 1 and 2 tons per hour. These mills can crush quartz ore and liberate the gold and sulfides for concentration with our shake...
Multiple Imputation in Stata: Imputing

This is part four of the Multiple Imputation in Stata series. For a list of topics covered by this series, see the Introduction. This section will talk you through the details of the imputation process. Be sure you've read at least the previous section, Creating Imputation Models, so you have a sense of what issues can affect the validity of your results.

Example Data

To illustrate the process, we'll use a fabricated data set. Unlike those in the examples section, this data set is designed to have some resemblance to real world data.

Do file that creates this data set
The data set as a Stata data file

Observations: 3,000
• female (binary)
• race (categorical, three values)
• urban (binary)
• edu (ordered categorical, four values)
• exp (continuous)
• wage (continuous)

Missingness: Each value of all the variables except female has a 10% chance of being missing completely at random, but of course in the real world we won't know that it is MCAR ahead of time. Thus we will check whether it is MCAR or MAR (MNAR cannot be checked by looking at the observed data) using the procedure outlined in Deciding to Impute:

unab numvars: *
unab missvars: urban-wage
misstable sum, gen(miss_)
foreach var of local missvars {
    local covars: list numvars - var
    display _newline(3) "logit missingness of `var' on `covars'"
    logit miss_`var' `covars'
    foreach nvar of local covars {
        display _newline(3) "ttest of `nvar' by missingness of `var'"
        ttest `nvar', by(miss_`var')
    }
}

See the log file for results. Our goal is to regress wages on sex, race, education level, and experience. To see the "right" answers, open the do file that creates the data set and examine the gen command that defines wage. Complete code for the imputation process can be found in the following do file:

The imputation process creates a lot of output.
We'll put highlights in this page, however, a complete log file including the associated graphs can be found here: Imputation log file with graphs Each section of this article will have links to the relevant section of the log. Click "back" in your browser to return to this page. Setting up The first step in using mi commands is to mi set your data. This is somewhat similar to svyset, tsset, or xtset. The mi set command tells Stata how it should store the additional imputations you'll create. We suggest using the wide format, as it is slightly faster. On the other hand, mlong uses slightly less memory. To have Stata use the wide data structure, type: mi set wide To have Stata use the mlong (marginal long) data structure, type: mi set mlong The wide vs. long terminology is borrowed from reshape and the structures are similar. However, they are not equivalent and you would never use reshape to change the data structure used by mi. Instead, type mi convert wide or mi convert mlong (add ,clear if the data have not been saved since the last change). Most of the time you don't need to worry about how the imputations are stored: the mi commands figure out automatically how to apply whatever you do to each imputation. But if you need to manipulate the data in a way mi can't do for you, then you'll need to learn about the details of the structure you're using. You'll also need to be very, very careful. If you're interested in such things (including the rarely used flong and flongsep formats) run this do file and read the comments it contains while examining the data browser to see what the data look like in each form. Registering Variables The mi commands recognize three kinds of variables: Imputed variables are variables that mi is to impute or has imputed. Regular variables are variables that mi is not to impute, either by choice or because they are not missing any values. Passive variables are variables that are completely determined by other variables. 
For example, log wage is determined by wage, or an indicator for obesity might be determined by a function of weight and height. Interaction terms are also passive variables, though if you use Stata's interaction syntax you won't have to declare them as such. Passive variables are often problematic—the examples on transformations, non-linearity, and interactions show how using them inappropriately can lead to biased estimates. If a passive variable is determined by regular variables, then it can be treated as a regular variable since no imputation is needed. Passive variables only have to be treated as such if they depend on imputed variables. Registering a variable tells Stata what kind of variable it is. Imputed variables must always be registered: mi register imputed varlist where varlist should be replaced by the actual list of variables to be imputed. Regular variables often don't have to be registered, but it's a good idea: mi register regular varlist Passive variables must be registered: mi register passive varlist However, passive variables are more often created after imputing. Do so with mi passive and they'll be registered as passive automatically. In our example data, all the variables except female need to be imputed. The appropriate mi register command is: mi register imputed race-wage (Note that you cannot use * as your varlist even if you have to impute all your variables, because that would include the system variables added by mi set to keep track of the imputation structure.) 
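The caution about passive variables reflects a general fact: nonlinear transformations like the log do not commute with averaging, so a passive variable such as log wage must be computed observation by observation from the imputed values (with mi passive), not reconstructed from summaries of wage. A small language-neutral illustration (Python rather than Stata, with made-up wage values):

```python
import math

# Made-up values standing in for one imputation of the wage variable.
wages = [12.0, 18.5, 25.0, 40.0, 95.0]

# Passive variable computed AFTER imputation: log of each imputed wage.
log_wages = [math.log(w) for w in wages]

mean_of_logs = sum(log_wages) / len(log_wages)
log_of_mean = math.log(sum(wages) / len(wages))

# Jensen's inequality: the mean of the logs falls below the log of the
# mean, so summaries of wage cannot substitute for the per-observation
# passive variable log(wage).
assert mean_of_logs < log_of_mean
print(round(mean_of_logs, 3), round(log_of_mean, 3))
```

The same logic is why the transformation and interaction examples mentioned above can produce biased estimates when passive variables are handled carelessly.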
Registering female as regular is optional, but a good idea:

mi register regular female

Checking the Imputation Model

Based on the types of the variables, the obvious imputation methods are:
• race (categorical, three values): mlogit
• urban (binary): logit
• edu (ordered categorical, four values): ologit
• exp (continuous): regress
• wage (continuous): regress

female does not need to be imputed, but should be included in the imputation models both because it is in the analysis model and because it's likely to be relevant. Before proceeding to impute we will check each of the imputation models. Always run each of your imputation models individually, outside the mi impute chained context, to see if they converge and (insofar as it is possible) verify that they are specified correctly. Code to run each of these models is:

mlogit race i.urban exp wage i.edu i.female
logit urban i.race exp wage i.edu i.female
ologit edu i.urban i.race exp wage i.female
regress exp i.urban i.race wage i.edu i.female
regress wage i.urban i.race exp i.edu i.female

Note that when categorical variables (ordered or not) appear as covariates i. expands them into sets of indicator variables. As we'll see later, the output of the mi impute chained command includes the commands for the individual models it runs. Thus a useful shortcut, especially if you have a lot of variables to impute, is to set up your mi impute chained command with the dryrun option to prevent it from doing any actual imputing, run it, and then copy the commands from the output into your do file for testing.

Convergence Problems

The first thing to note is that all of these models run successfully. Complex models like mlogit may fail to converge if you have large numbers of categorical variables, because that often leads to small cell sizes.
To pin down the cause of the problem, remove most of the variables, make sure the model works with what's left, and then add variables back one at a time or in small groups until it stops working. With some experimentation you should be able to identify the problem variable or combination of variables. At that point you'll have to decide if you can combine categories or drop variables or make other changes in order to create a workable model.

Perfect Prediction

Perfect prediction is another problem to note. The imputation process cannot simply drop the perfectly predicted observations the way logit can. You could drop them before imputing, but that seems to defeat the purpose of multiple imputation. The alternative is to add the augment (or just aug) option to the affected methods. This tells mi impute chained to use the "augmented regression" approach, which adds fake observations with very low weights in such a way that they have a negligible effect on the results but prevent perfect prediction. For details see the section "The issue of perfect prediction during imputation of categorical data" in the Stata MI documentation.

Checking for Misspecification

You should also try to evaluate whether the models are specified correctly. A full discussion of how to determine whether a regression model is specified correctly or not is well beyond the scope of this article, but use whatever tools you find appropriate. Here are some examples:

Residual vs. Fitted Value Plots

For continuous variables, residual vs. fitted value plots (easily done with rvfplot) can be useful—several of the examples use them to detect problems.
Consider the plot for experience:

regress exp i.urban i.race wage i.edu i.female

Note how a number of points are clustered along a line in the lower left, and no points are below it. This reflects the constraint that experience cannot be less than zero, which means that the fitted values must always be greater than or equal to the negative of the residuals, or alternatively that the residuals must be greater than or equal to the negative of the fitted values. (If the graph had the same scale on both axes, the constraint line would be a 45 degree line.) If all the points were below a similar line rather than above it, this would tell you that there was an upper bound on the variable rather than a lower bound. The y-intercept of the constraint line tells you the limit in either case. You can also have both a lower bound and an upper bound, putting all the points in a band between them. The "obvious" model, regress, is inappropriate for experience because it won't apply this constraint. It's also inappropriate for wages for the same reason. Alternatives include truncreg, ll(0) and pmm (we'll use pmm).

Adding Interactions

In this example, it seems plausible that the relationships between variables may vary between race, gender, and urban/rural groups. Thus one way to check for misspecification is to add interaction terms to the models and see whether they turn out to be important. For example, we'll compare the obvious model:

regress exp i.race wage i.edu i.urban i.female

with one that includes interactions:

regress exp (i.race i.urban i.female)##(c.wage i.edu)

We'll run similar comparisons for the models of the other variables. This creates a great deal of output, so see the log file for results. Interactions between female and other variables are significant in the models for exp, wage, edu, and urban.
There are a few significant interactions between race or urban and other variables, but not nearly as many (and keep in mind that with this many coefficients we'd expect some false positives using a significance level of .05). We'll thus impute the men and women separately. This is an especially good option for this data set because female is never missing. If it were, we'd have to drop those observations which are missing female because they could not be placed in one group or the other.

In the imputation command this means adding the by(female) option. When testing models, it means starting the commands with the by female: prefix (and removing female from the lists of covariates). The improved imputation models are thus:

bysort female: reg exp i.urban i.race wage i.edu
by female: logit urban exp i.race wage i.edu
by female: mlogit race exp i.urban wage i.edu
by female: reg wage exp i.urban i.race i.edu
by female: ologit edu exp i.urban i.race wage

pmm itself cannot be run outside the imputation context, but since it's based on regression you can use regular regression to test it. These models should be tested again, but we'll omit that process.

The basic syntax for mi impute chained is:

mi impute chained (method1) varlist1 (method2) varlist2 ... = regvars

Each method specifies the method to be used for imputing the following varlist. The possibilities for method are regress, pmm, truncreg, intreg, logit, ologit, mlogit, poisson, and nbreg. regvars is a list of regular variables to be used as covariates in the imputation models but not imputed (there may not be any). The basic options are:

, add(N) rseed(R) savetrace(tracefile, replace)

N is the number of imputations to be added to the data set. R is the seed to be used for the random number generator—if you do not set this you'll get slightly different imputations each time the command is run. The tracefile is a dataset in which mi impute chained will store information about the imputation process.
We'll use this dataset to check for convergence.

Options that are relevant to a particular method go with the method, inside the parentheses but following a comma (e.g. (mlogit, aug)). Options that are relevant to the imputation process as a whole (like by(female)) go at the end, after the comma. For our example, the command would be:

mi impute chained (logit) urban (mlogit) race (ologit) edu (pmm) exp wage, add(5) rseed(4409) by(female)

Note that this does not include a savetrace() option. As of this writing, by() and savetrace() cannot be used at the same time, presumably because it would require one trace file for each by group. Stata is aware of this problem and we hope this will be changed soon. For purposes of this article, we'll remove the by() option when it comes time to illustrate use of the trace file. If this problem comes up in your research, talk to us about work-arounds.

Choosing the Number of Imputations

There is some disagreement among authorities about how many imputations are sufficient. Some say 3-10 in almost all circumstances, the Stata documentation suggests at least 20, while White, Royston, and Wood argue that the number of imputations should be roughly equal to the percentage of cases with missing values. However, we are not aware of any argument that increasing the number of imputations ever causes problems (just that the marginal benefit of another imputation asymptotically approaches zero).

Increasing the number of imputations in your analysis takes essentially no work on your part. Just change the number in the add() option to something bigger. On the other hand, it can be a lot of work for the computer—multiple imputation has introduced many researchers into the world of jobs that take hours or days to run. You can generally assume that the amount of time required will be proportional to the number of imputations used (e.g.
if a do file takes two hours to run with five imputations, it will probably take about four hours to run with ten imputations). So here's our suggestion:

1. Start with five imputations (the low end of what's broadly considered legitimate).
2. Work on your research project until you're reasonably confident you have the analysis in its final form. Be sure to do everything with do files so you can run it again at will.
3. Note how long the process takes, from imputation to final analysis.
4. Consider how much time you have available and decide how many imputations you can afford to run, using the rule of thumb that time required is proportional to the number of imputations. If possible, make the number of imputations roughly equal to the percentage of cases with missing data (a high end estimate of what's required). Allow time to recover if things go wrong, as they generally do.
5. Increase the number of imputations in your do file and start it.
6. Do something else while the do file runs, like write your paper. Adding imputations shouldn't change your results significantly—and in the unlikely event that they do, consider yourself lucky to have found that out before publishing.

Speeding up the Imputation Process

Multiple imputation has introduced many researchers into the world of jobs that take hours, days, or even weeks to run. Usually it's not worth spending your time to make Stata code run faster, but multiple imputation can be an exception.

Use the fastest computer available to you. For SSCC members that means learning to run jobs on Linstat, the SSCC's Linux computing cluster. Linux is not as difficult as you may think—Using Linstat has instructions.

Multiple imputation involves more reading and writing to disk than most Stata commands. Sometimes this includes writing temporary files in the current working directory. Use the fastest disk space available to you, both for your data set and for the working directory.
In general local disk space will be faster than network disk space, and on Linstat /ramdisk (a "directory" that is actually stored in RAM) will be faster than local disk space. On the other hand, you would not want to permanently store data sets anywhere but network disk space. So consider having your do file do something like the following:

Windows (Winstat or your own PC):

copy x:\mydata\dataset c:\windows\temp\dataset
cd c:\windows\temp
use dataset
{do stuff, including saving results to the network as needed}
erase c:\windows\temp\dataset

Linux (Linstat):

copy /project/mydata/dataset /ramdisk/dataset
cd /ramdisk
use dataset
{do stuff, including saving results to the network as needed}
erase /ramdisk/dataset

This applies when you're using imputed data as well. If your data set is large enough that working with it after imputation is slow, the above procedure may help.

Checking for Convergence

MICE is an iterative process. In each iteration, mi impute chained first estimates the imputation model, using both the observed data and the imputed data from the previous iteration. It then draws new imputed values from the resulting distributions. Note that as a result, each iteration has some autocorrelation with the previous iteration.

The first iteration must be a special case: in it, mi impute chained first estimates the imputation model for the variable with the fewest missing values based only on the observed data and draws imputed values for that variable. It then estimates the model for the variable with the next fewest missing values, using both the observed values and the imputed values of the first variable, and proceeds similarly for the rest of the variables. Thus the first iteration is often atypical, and because iterations are correlated it can make subsequent iterations atypical as well. To avoid this, mi impute chained by default goes through ten iterations for each imputed data set you request, saving only the results of the tenth iteration.
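The chained-equations loop just described can be sketched in miniature. The following Python toy is only an illustration of the idea, not Stata's implementation: the two-variable setup, the synthetic data, and the hand-rolled OLS are all invented for this sketch. It alternately regresses each variable on the other (using rows where the response was actually observed) and redraws the missing values from the fitted distribution, discarding the burn-in iterations:

```python
import random

random.seed(4409)

# Invented toy data: y depends linearly on x, with noise.
n = 300
x_true = [random.gauss(0, 1) for _ in range(n)]
y_true = [2 * v + random.gauss(0, 0.5) for v in x_true]

# Make roughly 20% of each variable missing completely at random.
x = [v if random.random() > 0.2 else None for v in x_true]
y = [v if random.random() > 0.2 else None for v in y_true]

def fit(pred, resp, resp_obs):
    """OLS of resp on pred, using only rows where resp was observed.
    Returns intercept, slope, and residual standard deviation."""
    rows = [i for i in range(n) if resp_obs[i]]
    mp = sum(pred[i] for i in rows) / len(rows)
    mr = sum(resp[i] for i in rows) / len(rows)
    b = (sum((pred[i] - mp) * (resp[i] - mr) for i in rows)
         / sum((pred[i] - mp) ** 2 for i in rows))
    a = mr - b * mp
    sd = (sum((resp[i] - a - b * pred[i]) ** 2 for i in rows)
          / (len(rows) - 2)) ** 0.5
    return a, b, sd

x_obs = [v is not None for v in x]
y_obs = [v is not None for v in y]

# Initialize missing values by drawing from the observed values.
x_seen = [v for v in x if v is not None]
y_seen = [v for v in y if v is not None]
x_fill = [v if v is not None else random.choice(x_seen) for v in x]
y_fill = [v if v is not None else random.choice(y_seen) for v in y]

# Burn-in: ten iterations, keeping only the last (as mi impute chained does).
for it in range(10):
    a, b, sd = fit(y_fill, x_fill, x_obs)          # model x ~ y
    for i in range(n):
        if not x_obs[i]:
            x_fill[i] = a + b * y_fill[i] + random.gauss(0, sd)
    a, b, sd = fit(x_fill, y_fill, y_obs)          # model y ~ x
    for i in range(n):
        if not y_obs[i]:
            y_fill[i] = a + b * x_fill[i] + random.gauss(0, sd)

# The completed data should roughly preserve the true slope of 2.
a, b, sd = fit(x_fill, y_fill, [True] * n)
print(round(b, 2))
```

Real MICE adds proper draws of the model parameters themselves and handles many variables and model types, but the estimate-then-redraw rhythm is the same.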
The first nine iterations are called the burn-in period. Normally this is plenty of time for the effects of the first iteration to become insignificant and for the process to converge to a stationary state. However, you should check for convergence and, if necessary, increase the number of iterations using the burnin() option to ensure it.

To do so, examine the trace file saved by mi impute chained. It contains the mean and standard deviation of each imputed variable in each iteration. These will vary randomly, but they should not show any trend. An easy way to check is with tsline, but it requires reshaping the data first.

Our preferred imputation model uses by(), so it cannot save a trace file. Thus we'll remove by() for the moment. We'll also increase the burnin() option to 100 so it's easier to see what a stable trace looks like. We'll then use reshape and tsline to check for convergence:

mi impute chained (logit) urban (mlogit) race (ologit) edu (pmm) exp wage = female, add(5) rseed(88) savetrace(extrace, replace) burnin(100)
use extrace, replace
reshape wide *mean *sd, i(iter) j(m)
tsset iter
tsline exp_mean*, title("Mean of Imputed Values of Experience") note("Each line is for one imputation") legend(off)
graph export conv1.png, replace
tsline exp_sd*, title("Standard Deviation of Imputed Values of Experience") note("Each line is for one imputation") legend(off)
graph export conv2.png, replace

The resulting graphs do not show any obvious problems. If you do see signs that the process may not have converged after the default ten iterations, increase the number of iterations performed before saving imputed values with the burnin() option. If convergence is never achieved this indicates a problem with the imputation model.

Checking the Imputed Values

After imputing, you should check to see if the imputed data resemble the observed data. Unfortunately there's no formal test to determine what's "close enough."
Of course if the data are MAR but not MCAR, the imputed data should be systematically different from the observed data. Ironically, the fewer missing values you have to impute, the more variation you'll see between the imputed data and the observed data (and between imputations).

For binary and categorical variables, compare frequency tables. For continuous variables, comparing means and standard deviations is a good starting point, but you should look at the overall shape of the distribution as well. For that we suggest kernel density graphs or perhaps histograms. Look at each imputation separately rather than pooling all the imputed values so you can see if any one of them went wrong.

mi xeq:

The mi xeq: prefix tells Stata to apply the subsequent command to each imputation individually. It also applies to the original data, the "zeroth imputation." Thus:

mi xeq: tab race

will give you six frequency tables: one for the original data, and one for each of the five imputations. However, we want to compare the observed data to just the imputed data, not the entire data set. This requires adding an if condition to the tab commands for the imputations, but not the observed data. Add a number or numlist to have mi xeq act on particular imputations:

mi xeq 0: tab race
mi xeq 1/5: tab race if miss_race

This creates frequency tables for the observed values of race and then the imputed values in all five imputations. If you have a significant number of variables to examine you can easily loop over them:

foreach var of varlist urban race edu {
	mi xeq 0: tab `var'
	mi xeq 1/5: tab `var' if miss_`var'
}

For results see the log file.

Running summary statistics on continuous variables follows the same process, but creating kernel density graphs adds a complication: you need to either save the graphs or give yourself a chance to look at them. mi xeq: can carry out multiple commands for each imputation: just place them all in one line with a semicolon (;) at the end of each.
(This will not work if you've changed the general end-of-command delimiter to a semicolon.) The sleep command tells Stata to pause for a specified period, measured in milliseconds.

mi xeq 0: kdensity wage; sleep 1000
mi xeq 1/5: kdensity wage if miss_wage; sleep 1000

Again, this can all be automated:

foreach var of varlist wage exp {
	mi xeq 0: sum `var'
	mi xeq 1/5: sum `var' if miss_`var'
	mi xeq 0: kdensity `var'; sleep 1000
	mi xeq 1/5: kdensity `var' if miss_`var'; sleep 1000
}

Saving the graphs turns out to be a bit trickier, because you need to give the graph from each imputation a different file name. Unfortunately you cannot access the imputation number within mi xeq. However, you can do a forvalues loop over imputation numbers, then have mi xeq act on each of them:

forval i=1/5 {
	mi xeq `i': kdensity exp if miss_exp; graph export exp`i'.png, replace
}

Integrating this with the previous version gives:

foreach var of varlist wage exp {
	mi xeq 0: sum `var'
	mi xeq 1/5: sum `var' if miss_`var'
	mi xeq 0: kdensity `var'; graph export chk`var'0.png, replace
	forval i=1/5 {
		mi xeq `i': kdensity `var' if miss_`var'; graph export chk`var'`i'.png, replace
	}
}

For results, see the log file.

It's troublesome that in all imputations the mean of the imputed values of wage is higher than the mean of the observed values of wage, and the mean of the imputed values of exp is lower than the mean of the observed values of exp. We did not find evidence that the data are MAR but not MCAR, so we'd expect the means of the imputed data to be clustered around the means of the observed data. There is no formal test to tell us definitively whether this is a problem or not. However, it should raise suspicions, and if the final results with these imputed data are different from the results of complete cases analysis, it raises the question of whether the difference is due to problems with the imputation model.
Last Revised: 8/23/2012
An Etymological Dictionary of Astronomy and Astrophysics

dilation
farâxeš (#)
Fr.: dilatation
The act of dilating; state of being dilated. Also dilatation. Physics: The increase in volume per unit volume of a homogeneous substance. → time dilation.
Verbal noun of → dilate.

dilute
۱) اوتال؛ ۲) اوتالیدن 1) owtâl; 2) owtâlidan
Fr.: 1) dilué; 2) diluer
1) (adj.) Describing a solution that is reduced in concentration. 2) (v.tr.) To make a solution thinner by the addition of water or the like.
From L. dilutus, p.p. of diluere "dissolve, wash away," from → dis- "apart" + -luere, combining form of lavere "to wash;" cf. Pers. lur "flood" [Mo'in, Dehxodâ] (variants Lori, Kordi: laf, lafow, lafaw, Tabari: lé); Gk. louein "to wash;" Bret. laouer "trough;" PIE base *lou- "to wash."
Owtâl, from Tabari utâl, "having water, impregnated with water, waterlogged," from ow "water," → water + tâl variant of dâr "having, possessor," from dâštan "to have, to possess" (Mid.Pers. dâštan; O.Pers./Av. root dar- "to hold, keep back, maintain, keep in mind;" Skt. dhr-, dharma- "law;" Gk. thronos "elevated seat, throne;" L. firmus "firm, stable;" Lith. daryti "to make;" PIE base *dher- "to hold, support").

dilution
Fr.: dilution
The process of reducing the concentration of solute in a solution by increasing the proportion of solvent.
Verbal noun of → dilute.

dilution factor
کروند ِاوتالش karvand-e owtâleš
Fr.: facteur de dilution
The energy density of a radiation field divided by the equilibrium value for the same color temperature.
→ dilution; → factor.

dim
tiré (#)
Fr.: faible, pâle, mat(e)
Not bright; obscure from lack of light.
O.E. dimm "dark, gloomy, obscure," from P.Gmc. *dimbaz. Tiré, from Mid.Pers. têrag, variant of târig "dark," Av. taθra- "darkness," taθrya- "dark," cf. Skt. támisrâ- "darkness, dark night," L. tenebrae "darkness," Hittite taš(u)uant- "blind," O.H.G. demar

dimension
Fr.: dimension
1) Math.: Independent extension in a given direction; a property of space. 2) Physics: → physical dimension.
From L.
dimensionem (nom. dimensio), from stem of dimetiri "to measure out," from → dis- + metiri "to measure."
Vâmun, from vâ-, → dis-, + mun, variant mân "measure" (as in Pers. terms pirâmun "perimeter," âzmun "test, trial," peymân "measuring, agreement," peymâné "a measure; a cup, bowl"), from O.Pers./Av. mā(y)- "to measure;" PIE base *me- "to measure;" cf. Skt. mati "measures," matra- "measure;" Gk. metron "measure;" L. metrum.

dimensional
Fr.: dimensionnel
Of or pertaining to → dimension.
→ dimension; → -al.

dimensional analysis
آنالس ِوامونی، آناکاوی ِ~ ânâlas-e vâmuni, ânâkâvi-ye ~
Fr.: analyse dimensionnelle
A technique used in physics based on the fact that the various terms in a physical equation must have identical → dimensional formulae if the equation is to be true for all consistent systems of units. Its main uses are: a) To test the probable correctness of an equation between physical quantities. b) To provide a safe method of changing the units in a physical quantity. c) To solve partially a physical problem whose direct solution cannot be achieved by normal methods.
→ dimensional; → analysis.

dimensional formula
دیسول ِوامونی disul-e vâmuni
Fr.: formule dimensionnelle
Symbolic representation of the definition of a physical quantity obtained from its units of measurement. For example, with M = mass, L = length, T = time: area = L^2, velocity = LT^-1, energy = ML^2T^-2. → dimensional analysis.
→ dimensional; → formula.

dimensionless
Fr.: sans dimension
A physical quantity or number lacking units.
→ dimension; → -less.

dimensionless quantity
چندای ِبیوامون candâ-ye bivâmun
Fr.: quantité sans dimension
A quantity without an associated → physical dimension. Dimensionless quantities are defined as the ratio of two quantities with the same dimension. The magnitude of such quantities is independent of the system of units used. A dimensionless quantity is not always a ratio; for instance, the number of people in a room is a dimensionless quantity.
Examples include the → Alfven Mach number, → Ekman number, → Froude number, → Mach number, → Prandtl number, → Rayleigh number, → Reynolds number, → Richardson number, → Rossby number, → Toomre parameter. See also → large number.
→ dimension; → quantity.

dimer
Fr.: dimère
A molecule resulting from combination of two identical molecules.
From → di- "two, twice, double," + -mer a combining form denoting member of a particular group, → isomer.

diode
diod (#)
Fr.: diode
An electronic component with two active terminals, an → anode and a → cathode, through which current passes in one direction (from anode to cathode) and is blocked in the opposite direction. Diodes have many uses, including conversion of → alternating current to → direct current, regulation of voltage, and the decoding of audio-frequency signals from radio signals.
From → di- "two, twice, double," + hodos "way."

Dione (Saturn IV)
Fr.: Dioné
The fourth largest moon of Saturn and the second densest after Titan. Its diameter is 1,120 km and its orbit 377,400 km from Saturn. It is composed primarily of water ice but must have a considerable fraction of denser material like silicate rock.
Discovered in 1684 by Jean-Dominique Cassini, Italian-born French astronomer (1625-1712). In Gk. mythology Dione was the mother of Aphrodite (Venus) by Zeus (Jupiter).

diopter
dioptr (#)
Fr.: dioptre
A unit of optical measurement that expresses the refractive power of a lens or prism. In a lens or lens system, it is the reciprocal of the focal length in meters.
L. dioptra, from Gk. di-, variant of dia- "passing through, thoroughly, completely" + op- (for opsesthai "to see") + -tra noun suffix of means. Dioptr loanword from Fr.

dioptra
Fr.: dioptra
An instrument used in antiquity to measure the apparent diameter of the Sun and the Moon. It was a rod with a scale, a sighting hole at one end, and a disk that could be moved along the rod to exactly obscure the Sun or Moon.
The Sun was observed directly with the naked eye at sunrise or sunset in order to prevent eye damage. Aristarchus (c.310-230 B.C.), Archimedes (c. 290-212 B.C.), Hipparchus (died after 127 B.C.), and Ptolemy (c.100-170 A.D.) used the dioptra. The instrument could also serve for measurement of angles, land levelling, surveying, and construction of aqueducts and tunnels.
→ diopter.

dioxide
Fr.: dioxyde
Any → oxide containing two → atoms of → oxygen in the → molecule.
→ di-; → oxide.

dip
našib (#)
Fr.: inclinaison
1) Navigation: The angular difference between the visible horizon and the true horizon. Same as → dip of the horizon. 2) Geodesy: The angle between the horizontal and the lines of force of the Earth's magnetic field at any point. → magnetic dip. 3) Aviation: The angle between the true and apparent horizon, which depends on flight height, the Earth's curvature, and refraction.
O.E. dyppan "to immerse," cognate with Ger. taufen "to baptize," and with → deep. Našib, → depression.

dip angle
زاویهی ِنشیب zâviye-ye našib
Fr.: angle d'inclinaison
The angular difference between the → visible horizon and the → true horizon. Same as → dip of the horizon.
→ dip; → angle.

dip of the horizon
نشیب ِافق našib-e ofoq
Fr.: inclinaison de l'horizon
The angle created by the observer's line of sight to the → apparent horizon and the → true horizon. Neglecting → atmospheric refraction, the dip of the horizon can be expressed by θ (radians) = (2h/R)^1/2, where h is the observer's height and R the Earth's radius. As an example, for a height of 1.5 m above the sea, and R = 6.4 x 10^6 m, the dip angle is about 0.00068 radians, or 0.039 degrees, about 2.3 minutes of arc, quite appreciable by the eye. See also → distance to the horizon. Same as → dip angle.
→ dip; → horizon.
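The worked example in the dip-of-the-horizon entry (h = 1.5 m, R = 6.4 x 10^6 m) is easy to verify numerically. A minimal Python check of the formula θ = (2h/R)^1/2:

```python
import math

h = 1.5      # observer's height above the sea, in meters
R = 6.4e6    # Earth's radius used in the entry, in meters

theta = math.sqrt(2 * h / R)     # dip of the horizon, in radians
degrees = math.degrees(theta)
arcmin = degrees * 60

# roughly 0.00068 rad, 0.039 degrees, and 2.3-2.4 arcminutes,
# matching the figures quoted in the entry
print(theta, degrees, arcmin)
```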
Three bells ring at regular intervals of 9 min, 15 min and 21 min. The bells ring together at 5.45 pm. By 10.00 am the next day, how many times will they have rung together?

The LCM of 9, 15 and 21 is 315.
Converting 315 minutes to hours gives 5 hrs 15 min.
This means the bells ring together at intervals of 5 hours 15 minutes.
The time difference between 5.45 pm and 10.00 am the next day is 16 hours 15 minutes.
Then divide the difference by the interval to get the number of times: 16 h 15 min / 5 h 15 min ≈ 3.1, so three full intervals fit in the window.
The answer is 3 times.
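The arithmetic can be checked with a few lines of Python (the 975-minute window is the 16 h 15 min from 5:45 pm to 10:00 am the next day):

```python
from math import gcd
from functools import reduce

intervals = [9, 15, 21]   # ringing intervals, in minutes

# LCM of the three intervals: the bells coincide every lcm minutes
lcm = reduce(lambda a, b: a * b // gcd(a, b), intervals)

window = 16 * 60 + 15      # minutes from 5:45 pm to 10:00 am next day
together = window // lcm   # coincidences after the initial 5:45 pm ring

print(lcm, together)   # 315 3
```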
Spearman-Kärber analysis and the Creutzfeld-Jakob disease | dataanalysistools.de A non-parametric analysis to determine the ED50, SD50 or similar metrics The first time I came across the term Spearman-Kärber was with the analysis of RT-QuIC. These assays are typically used to detect small amounts of prions, i.e. proteins that can induce conformational changes in other proteins and lead to diseases like Creutzfeldt-Jakob disease. A short-hand notation for prion protein that is often used is PrP. In an RT-QuIC assay a liquid body sample (e.g. cerebrospinal fluid) supposed to contain infectious PrP^I is diluted with native-conformation-PrP^C. The dilution series is typically performed on a logarithmic scale while keeping the concentration of diluent (PrP^C solution) constant. Each dilution series is replicated several times and each sample of this series is applied to vigorous shaking typically in a microplate reader capable of detecting Thioflavin T (ThT) fluorescence. Thioflavin T is a fluorescence dye that becomes fluorescent when bound to aggregated PrP^I. Thus, the time course of aggregation can be monitored with ThT. If PrP^I were present in the original sample at high concentration one would observe an increase of the fluorescence over time. At high concentrations of PrP^I all replicates will start aggregating at some point, while for more diluted samples only a portion of the replicates will aggregate (in a given time period). At the highest dilution there is likely to be no aggregation at all. Plotting the portion of aggregated replicates as a function of dilution often gives a sigmoidal curve going from 0 (no replicate was positive /aggregated) to 1 (all replicates were positive /aggregated). There are various ways how to analyze such data. But one of the most common ways is to perform a Spearman-Kärber analysis.
It is a general non-parametric way to estimate the mean effective seeding dose SD50, mean lethal dose, mean effective dose or some other mean EX50 quantity and its corresponding error. If the aforementioned portions p[i] were plotted as a function of the seeding dilution, the graph would be the empirical estimate of the cumulative distribution function F(x) of the underlying (continuous) distribution. It is often referred to as the tolerance distribution, whose probability distribution function shall be denoted with f(x). For the continuous case, f(x) can be obtained by differentiating F(x) with respect to x (since F(x) is an integral of f(x)). Although the tolerance distribution is continuous, we can approximate it by the discrete empirical cumulative distribution function F(x). In the discrete case, f(x) (I should rather write f(x[i])) can be obtained by differencing F(x), i.e. F(x[i+1]) - F(x[i]). Herein x denotes the log(dilution) or log(dose).

In formal terms, this means: if we were to estimate the mean of the log(dose), we use the general formula for the mean of a discrete random variable,

μ = Σ[i] x[i] · f(x[i]),

with f(x[i]) = p[i+1] - p[i] and the midpoint (x[i] + x[i+1])/2 taking the place of x[i], which gives the Spearman-Kärber estimate

μ = Σ[i] (p[i+1] - p[i]) · (x[i] + x[i+1])/2.

The term p[i+1] - p[i] plays the role of the probability mass between two adjacent doses. The estimate requires the portions to run from p = 0 at one end of the dose series to p = 1 at the other. On the other hand, if the observed series does not span this full range, "fake" doses with p = 0 and p = 1 can be appended at the ends of the evenly spaced series. One can try using these types of fake doses even if the experimental doses are not evenly spaced. Please note that if the fake doses are taken into account, the lower and upper limits of summation in the equation for μ have to be extended accordingly.

The standard error of μ is

SE(μ) = sqrt( Σ[i] [p[i]·(1 - p[i]) / (n[i] - 1)] · [(x[i+1] - x[i-1])/2]² ),

where n[i] denotes the number of replicates at dose x[i]. The standard error can then be used to calculate a (1-α)-percent confidence interval:

μ ± z[1-α/2] · SE(μ).

Although it might seem complicated at first glance, the Spearman-Kärber formula is supposed to be calculable by hand and without the need for fitting etc. There are other methods, like the Reed-Muench or Dragstedt-Behrens methods, aiming at a similar goal of being easily calculable. While this argument might have been a valid one at the time when the authors published their work, it might not be so important nowadays, as more complex methods like probit regression can be easily done by computers.
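The calculation really is simple enough to do by hand, or in a few lines of code. The following Python sketch (not the author's Excel sheet; the dose-response data are invented for illustration) computes the Spearman-Kärber mean log(dose) and its standard error for an evenly spaced dose series that runs from p = 0 to p = 1:

```python
import math

def spearman_karber(x, p, n):
    """Spearman-Kärber estimate of the mean log(dose) and its standard error.

    x: log(dose) values, evenly spaced, ascending
    p: fraction of positive replicates at each dose (0 at the low end, 1 at the high end)
    n: number of replicates at each dose
    """
    k = len(x)
    # mean: sum of probability mass between adjacent doses times the midpoint
    mu = sum((p[i + 1] - p[i]) * (x[i] + x[i + 1]) / 2 for i in range(k - 1))
    # variance: binomial-type term at each interior dose
    var = sum(p[i] * (1 - p[i]) / (n[i] - 1) * ((x[i + 1] - x[i - 1]) / 2) ** 2
              for i in range(1, k - 1))
    return mu, math.sqrt(var)

# Invented example: five log(doses), ten replicates each
x = [1, 2, 3, 4, 5]
p = [0.0, 0.1, 0.5, 0.9, 1.0]
n = [10] * 5

mu, se = spearman_karber(x, p, n)
print(mu, se)   # mu = 3.0, se ≈ 0.219
```

A 95% confidence interval would then be mu ± 1.96·se.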
While the “classical” Spearman-Kärber (SK) analysis works nicely in many practical cases, state-of-the-art is the so-called trimmed Spearman-Kärber analysis as developed by Hamilton et al. It adds a trimming and scaling procedure before calculating the estimate for the median effective dose. Trimming is done in much the same way as a trimmed mean is calculated, i.e. a user-defined fraction (the trim) of the data is removed from the upper and lower end. However, I will not go too much into the details here. In the trimmed SK analysis, you need to set a trim value before running the calculation. Thus, it becomes obvious that the “classical” SK analysis is a special case of the trimmed SK analysis with a trim of zero.

I created a simple Excel-sheet in order to demonstrate the calculations for the Spearman-Kärber analysis. Feel free to download it, paste your data into the appropriate table, and run the analysis (no macros required). I also created an Excel-sheet for the trimmed Spearman-Kärber which is available upon request from the author.

References:

D. J. Finney, Statistical Method in Biological Assay, Hafner, 1952.
C. Spearman, “The Method of ‘Right and Wrong Cases’ (Constant Stimuli) without Gauss’s Formula,” British Journal of Psychology, pp. 227-242, 1908.
G. Kärber, “Beitrag zur kollektiven Behandlung pharmakologischer Reihenversuche,” Archiv für experimentelle Pathologie und Pharmakologie, pp. 480-483, 1931.
M. A. Hamilton, R. C. Russo and R. V. Thurston, “Trimmed Spearman-Karber method for estimating median lethal concentrations in toxicity bioassays,” Environmental Science & Technology, pp. 714-719, 1977. DOI: https://doi.org/10.1021/es60130a004.
How to Design Axially Loaded RC Short Column as per ACI 318-19? Example Included

The design of an axially loaded reinforced concrete (RC) short column is quite simple and straightforward. It is governed by the strength of the materials and the cross-section of the member. If vertical axial loads act on the center of gravity of the column's cross-section, it is termed an axially loaded column.

Axially loaded columns rarely exist in practice because of factors like inaccuracy in the layout of the column, unsymmetrical loading due to differences in the thickness of slabs in adjacent spans, and imperfections in alignment that can easily shift the point of vertical load action away from the center of the column cross-section and, consequently, create eccentricities. The axially loaded columns are those with relatively small eccentricity, e, of about 0.1h or less, where h is the total depth of the column and e is the eccentric distance from the center of the column. The interior column of a multistory building with symmetrical loads from floor slabs on all sides is an example of an axially loaded column. ACI 318-19 equips designers with several specifications and requirements to produce an economical and safe design.

Procedure for Design of Axially Loaded RC Short Column

1. Calculate the total axial load (P[u]) on the column, i.e. transfer loads from slabs and beams to the column.
2. Assume a reinforcement ratio (p[g]) that is equal to or greater than the minimum reinforcement ratio (0.01) and equal to or smaller than the maximum reinforcement ratio (0.08) specified by ACI 318-19.
3. Express steel area in terms of reinforcement ratio times the gross area of the column cross-section.
p[g] = A[st]/A[g]     Equation 1

For a tied column, the ACI 318-19 design axial strength gives:

P[u] = 0.80 * phi * [0.85fc'(A[g] - A[st]) + fy*A[st]]     Equation 2

4. Select the column dimensions using Equation-2 (with A[st] = p[g]*A[g]) and then round the dimensions.
5. Plug (A[g]) from Step 4 into Equation-2 and then calculate the longitudinal reinforcement area.
6. Assume a bar size from Table-1 and then determine the number of bars for the column. The minimum number of bars for a square column is four and for a circular column is six.
7. Calculate the column reinforcement ratio and check whether it is within the minimum and maximum reinforcement ratios or not.
8. Design ties: determine the size of the ties and estimate the spacing.
9. Check code requirements: clear spacing between longitudinal bars, minimum number of bars, minimum tie diameter, and arrangement of ties.

Where:
p[g]: gross reinforcement ratio of the column
A[st]: longitudinal reinforcement area of the column, mm^2
A[g]: gross area of the column, mm^2
P[u]: ultimate axial load on the column, kN
phi: strength reduction factor, which is 0.65 for a tied column and 0.75 for a spiral column
fc': concrete compressive strength, MPa
fy: yield strength of steel, MPa

Table-1: Area of Groups of Bars, mm^2 (areas for groups of 1 to 12 bars, by bar size)

Example: Design an axially loaded short square tied column to support a maximum factored load of P[u] = 2600 kN. Material strengths: fc' = 28 MPa and fy = 420 MPa.

1. The load on the column is already computed: P[u] = 2600 kN.
2. Assume p[g] = 0.02, which is between 0.01 and 0.08.
3. Express A[st] in terms of the gross area of the column using Equation-1: A[st] = 0.02A[g]
4. Estimate the column dimension using Equation-2:
A[g] = 157609.38 mm^2
A[g] = h^2, so h = (157609.38)^1/2 = 397 mm; round to 400 mm.
Compute a new gross area of the column from the rounded dimension: A[g] = 160000 mm^2
5. Compute A[st] using Equation-2: A[st] = 2838.09 mm^2
6. Assume a bar size; select No. 32 from Table-1. Take bar of No.
32 and go to the right side to select the bar area that is closest to the estimated reinforcement area; after that, go to the top of the column to determine the number of bars: A[st,provided] = 3276 mm^2, and the number of bars is 4.
7. Compute the reinforcement ratio using the bar area from Step 6: p[g] = A[st]/A[g] = 3276/160000 = 0.0204 (OK), since it is greater than the minimum reinforcement ratio and less than the maximum reinforcement ratio.
8. Design ties: assume a tie bar of No. 10. The spacing between ties is the smallest of the following:
48 × tie diameter = 48 × 10 = 480 mm
16 × longitudinal bar diameter = 16 × 32 = 512 mm
least dimension of the column = 400 mm
So, the spacing between ties is 400 mm.
9. Checks:
a. Clear spacing between longitudinal bars = (400 − 2×40 − 2×10 − 2×32) = 236 mm, which is greater than 40 mm and greater than 1.5d[b] = 1.5 × 32 = 48 mm, OK.
b. The minimum number of bars for square columns is 4, and 4 bars have been assigned for the column under consideration, OK.
c. The minimum tie diameter is No. 10 for No. 32 longitudinal bars, OK.
d. Tie arrangement: since all longitudinal bars are placed in a corner, a check for tie arrangement is not needed.

Figure-1: Detail of the Design

What are axially loaded reinforced concrete short columns?

If vertical axial loads act on the center of gravity of the column's cross-section, it is termed an axially loaded column. Axially loaded columns are those with relatively small eccentricity, e, of about 0.1h or less, where h is the total depth of the column and e is the eccentric distance from the center of the column.

What are the factors that govern the design of RCC short columns?

The design of an axially loaded short column is governed by the strength of the materials and the cross-section of the member.

What is the minimum number of longitudinal bars for square and circular columns?

ACI 318-19 specifies a minimum of four bars for square columns and six bars for circular columns.

What are column stirrups?
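The gross-area estimate in Step 4 can be made reproducible by writing out the standard ACI 318-19 design axial strength expression for tied columns. The following is a sketch: substituting the assumed reinforcement ratio into the capacity equation and solving for the required gross area recovers the 157,609 mm² figure used in the example.

```latex
% ACI 318-19 design axial strength of a tied column (phi = 0.65):
%   \phi P_{n,max} = 0.80\,\phi\,[0.85 f'_c (A_g - A_{st}) + f_y A_{st}]
% Substituting A_{st} = \rho_g A_g and solving for the required gross area:
\[
A_g \ge \frac{P_u}{0.80\,\phi\left[0.85 f'_c + \rho_g\left(f_y - 0.85 f'_c\right)\right]}
      = \frac{2\,600\,000}{0.80 \times 0.65\left[0.85(28) + 0.02\left(420 - 0.85 \times 28\right)\right]}
      \approx 157\,609\ \mathrm{mm}^2
\]
```

With P[u] in newtons and stresses in MPa, the denominator evaluates to about 16.50 N/mm², which gives the required gross area quoted in Step 4 before rounding the column dimension to 400 mm.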
Stirrups are closed loops of reinforcement bars that prevent buckling of the longitudinal bars in a column and hold them in position.

Read more:
Economical design of reinforced concrete column to reduce cost
What are the factors controlling the distance between RCC columns?
Offset bend longitudinal reinforcement in columns and its requirements
Saṃyutta Nikāya 5. Mahā-Vagga 45. Magga Saṃyutta 4. Paṭipatti Vagga: Gaṇgā Peyyāla 1. Viveka-Nissitaṃ
The Book of the Kindred Sayings 5. The Great Chapter 45. Kindred Sayings on the Way 4. On Conduct: Gaṇgā Repetition 1. Based on Seclusion
Translated by F. L. Woodward
Edited by Mrs. Rhys Davids
Copyright The Pali Text Society
Commercial Rights Reserved
For details see Terms of Use.

Paṭhama Pācīna Suttaṃ
Eastward (a)

[1][bodh] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, the river Ganges flows, and tends to the East, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Dutiya Pācīna Suttaṃ
Eastward (b.1)

[2][bodh] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, the river Yamunā flows, and tends to the East, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Tatiya Pācīna Suttaṃ
Eastward (b.2)

[3] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One.
The Exalted One said:

Just as, monks, the river Acīravatī flows, and tends to the East, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Catuttha Pācīna Suttaṃ
Eastward (b.3)

[4] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, the river Sarabhū flows, and tends to the East, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?
Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Pañcama Pācīna Suttaṃ
Eastward (b.4)

[5] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, the river Mahī flows, and tends to the East, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Chaṭṭha Pācīna Suttaṃ
Eastward (c)

[6] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, whatsoever great rivers there be, such as the Ganges, the Yamunā, the Acīravatī, the Sarabhū, and the Mahī, all of them flow, and tend to the East, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Paṭhama Samudda Suttaṃ
Ocean (a)

[7][bodh] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, the river Ganges flows, and tends to the ocean, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Dutiya Samudda Suttaṃ
Ocean (b.1)

[8][bodh] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, the river Yamunā flows, and tends to the ocean, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.
Tatiya Samudda Suttaṃ
Ocean (b.2)

[9] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, the river Acīravatī flows, and tends to the ocean, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Catuttha Samudda Suttaṃ
Ocean (b.3)

[10] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One.
The Exalted One said:

Just as, monks, the river Sarabhū flows, and tends to the ocean, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Pañcama Samudda Suttaṃ
Ocean (b.4)

[11] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, the river Mahī flows, and tends to the ocean, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?
Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.

Chaṭṭha Samudda Suttaṃ
Ocean (c)

[12] THUS have I heard: Once the Exalted One was staying near Sāvatthī. Then the Exalted One addressed the monks, "Yes, lord," replied those monks to the Exalted One. The Exalted One said:

Just as, monks, whatsoever great rivers there be, such as the Ganges, the Yamunā, the Acīravatī, the Sarabhū, and the Mahī, all of them flow, and tend to the ocean, even so a monk who cultivates and makes much of the Ariyan eightfold way tends to Nibbāna.

And how, monks, by cultivating and making much of the Ariyan eightfold way does a monk flow, and tend to Nibbāna?

Herein a monk cultivates right view, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right aim, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right speech, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right action, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right living, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right effort, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right mindfulness, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.
He cultivates right concentration, that is based on seclusion, that is based on dispassion, on cessation, that ends in self-surrender.

Thus cultivating, thus making much of the Ariyan eightfold way a monk flows, and tends to Nibbāna.
Data Analysis Expressions (DAX) in Power Pivot

Data Analysis Expressions (DAX) sounds a little intimidating at first, but don't let the name fool you. DAX basics are really quite easy to understand. First things first - DAX is NOT a programming language. DAX is a formula language. You can use DAX to define custom calculations for Calculated Columns and for Measures (also known as calculated fields). DAX includes some of the functions used in Excel formulas, and additional functions designed to work with relational data and perform dynamic aggregation.

Understanding DAX Formulas

DAX formulas are very similar to Excel formulas. To create one, you type an equal sign, followed by a function name or expression, and any required values or arguments. Like Excel, DAX provides a variety of functions that you can use to work with strings, perform calculations using dates and times, or create conditional values. However, DAX formulas are different in the following important ways:

• If you want to customize calculations on a row-by-row basis, DAX includes functions that let you use the current row value or a related value to perform calculations that vary by context.
• DAX includes a type of function that returns a table as its result, rather than a single value. These functions can be used to provide input to other functions.
• Time Intelligence Functions in DAX allow calculations using ranges of dates, and compare the results across parallel periods.

Where to Use DAX Formulas

You can create formulas in Power Pivot either in calculated columns or in calculated fields.

Calculated Columns

A calculated column is a column that you add to an existing Power Pivot table. Instead of pasting or importing values in the column, you create a DAX formula that defines the column values. If you include the Power Pivot table in a PivotTable (or PivotChart), the calculated column can be used as you would any other data column.
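As a concrete illustration, here is a minimal calculated-column formula. The `Sales` table and its `SalesAmount` and `TotalCost` columns are hypothetical names introduced for this sketch, not part of the article:

```dax
// Calculated column on a hypothetical Sales table. The formula is written
// once and evaluated row by row for the entire column (row context).
Margin = Sales[SalesAmount] - Sales[TotalCost]
```

In the Power Pivot window you would type everything after the equal sign into the formula bar of a new column named Margin.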
The formulas in calculated columns are much like the formulas that you create in Excel. Unlike in Excel, however, you cannot create a different formula for different rows in a table; instead, the DAX formula is automatically applied to the entire column. When a column contains a formula, the value is computed for each row. The results are calculated for the column as soon as you create the formula. Column values are only recalculated if the underlying data is refreshed or if manual recalculation is used.

You can create calculated columns that are based on measures and other calculated columns. However, avoid using the same name for a calculated column and a measure, as this can lead to confusing results. When referring to a column, it is best to use a fully qualified column reference, to avoid accidentally invoking a measure.

For more detailed information, see Calculated Columns in Power Pivot.

Measures

A measure is a formula that is created specifically for use in a PivotTable (or PivotChart) that uses Power Pivot data. Measures can be based on standard aggregation functions, such as COUNT or SUM, or you can define your own formula by using DAX. A measure is used in the Values area of a PivotTable. If you want to place calculated results in a different area of a PivotTable, use a calculated column instead.

When you define a formula for an explicit measure, nothing happens until you add the measure into a PivotTable. When you add the measure, the formula is evaluated for each cell in the Values area of the PivotTable. Because a result is created for each combination of row and column headers, the result for the measure can be different in each cell.

The definition of the measure that you create is saved with its source data table. It appears in the PivotTable Fields list and is available to all users of the workbook.

For more detailed information, see Measures in Power Pivot.
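The distinction can be sketched with a pair of explicit measures, again using a hypothetical `Sales` table; `:=` is the measure-definition syntax used in the Power Pivot formula bar:

```dax
// Measures are evaluated per PivotTable cell, under that cell's filter
// context -- not per row, as a calculated column would be.
Total Sales := SUM ( Sales[SalesAmount] )
Average Order := AVERAGE ( Sales[SalesAmount] )
```

Dropped into the Values area, Total Sales produces a different number in each cell because each row/column header combination filters the Sales table differently before SUM runs.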
Creating Formulas by Using the Formula Bar

Power Pivot, like Excel, provides a formula bar to make it easier to create and edit formulas, and AutoComplete functionality, to minimize typing and syntax errors.

To enter the name of a table: Begin typing the name of the table. Formula AutoComplete provides a dropdown list containing valid names that begin with those letters.

To enter the name of a column: Type a bracket, and then choose the column from the list of columns in the current table. For a column from another table, begin typing the first letters of the table name, and then choose the column from the AutoComplete dropdown list.

For more details and a walkthrough of how to build formulas, see Create Formulas for Calculations in Power Pivot.

Tips for Using AutoComplete

You can use Formula AutoComplete in the middle of an existing formula with nested functions. The text immediately before the insertion point is used to display values in the drop-down list, and all of the text after the insertion point remains unchanged.

Defined names that you create for constants do not display in the AutoComplete drop-down list, but you can still type them.

Power Pivot does not add the closing parenthesis of functions or automatically match parentheses. You should make sure that each function is syntactically correct or you cannot save or use the formula.

Using Multiple Functions in a Formula

You can nest functions, meaning that you use the results from one function as an argument of another function. You can nest up to 64 levels of functions in calculated columns. However, nesting can make it difficult to create or troubleshoot formulas.

Many DAX functions are designed to be used solely as nested functions. These functions return a table, which cannot be directly saved as a result; it should be provided as input to a table function. For example, the functions SUMX, AVERAGEX, and MINX all require a table as the first argument.
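A sketch of this nesting pattern, assuming hypothetical `Sales[Quantity]` and `Sales[SalesAmount]` columns: the table that FILTER returns is never stored on its own but is consumed directly as the first argument of SUMX.

```dax
// The inner FILTER produces a table of qualifying rows; the outer SUMX
// iterates that table and sums the expression for each of its rows.
Large Order Sales :=
SUMX (
    FILTER ( Sales, Sales[Quantity] > 100 ),
    Sales[SalesAmount]
)
```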
Note: Some limits on nesting of functions exist within measures, to ensure that performance is not affected by the many calculations required by dependencies among columns.

Comparing DAX Functions and Excel Functions

The DAX function library is based on the Excel function library, but the libraries have many differences. This section summarizes the differences and similarities between Excel functions and DAX functions.

• Many DAX functions have the same name and the same general behavior as Excel functions but have been modified to take different types of inputs, and in some cases, might return a different data type. Generally, you cannot use DAX functions in an Excel formula or use Excel formulas in Power Pivot without some modification.
• DAX functions never take a cell reference or a range as reference; instead, DAX functions take a column or table as reference.
• DAX date and time functions return a datetime data type. In contrast, Excel date and time functions return an integer that represents a date as a serial number.
• Many of the new DAX functions either return a table of values or make calculations based on a table of values as input. In contrast, Excel has no functions that return a table, but some functions can work with arrays. The ability to easily reference complete tables and columns is a new feature in Power Pivot.
• DAX provides new lookup functions that are similar to the array and vector lookup functions in Excel. However, the DAX functions require that a relationship is established between the tables.
• The data in a column is expected to always be of the same data type. If the data is not the same type, DAX changes the entire column to the data type that best accommodates all values.

DAX Data Types

You can import data into a Power Pivot data model from many different data sources that might support different data types.
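For example, assuming a hypothetical relationship from a `Sales` table (the many side) to a `Products` table (the one side), a relationship-based lookup is written against whole columns rather than cell ranges:

```dax
// Calculated column on Sales: RELATED follows the existing relationship
// to fetch the matching row's value from the Products table.
Product Category = RELATED ( Products[Category] )
```

This is the DAX counterpart to an Excel VLOOKUP, except that the join key is implied by the model's relationship instead of being passed as an argument.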
When you import or load the data, and then use the data in calculations or in PivotTables, the data is converted to one of the Power Pivot data types. For a list of the data types, see Data types in Data Models.

The table data type is a new data type in DAX that is used as the input or output to many new functions. For example, the FILTER function takes a table as input and outputs another table that contains only the rows that meet the filter conditions. By combining table functions with aggregation functions, you can perform complex calculations over dynamically defined data sets. For more information, see Aggregations in Power Pivot.

Formulas and the Relational Model

The Power Pivot window is an area where you can work with multiple tables of data and connect the tables in a relational model. Within this data model, tables are connected to each other by relationships, which let you create correlations with columns in other tables and create more interesting calculations. For example, you can create formulas that sum values for a related table and then save that value in a single cell. Or, to control the rows from the related table, you can apply filters to tables and columns. For more information, see Relationships between tables in a Data Model.

Because you can link tables by using relationships, your PivotTables can also include data from multiple columns that are from different tables. However, because formulas can work with entire tables and columns, you need to design calculations differently than you do in Excel.

• In general, a DAX formula in a column is always applied to the entire set of values in the column (never to only a few rows or cells).
• Tables in Power Pivot must always have the same number of columns in each row, and all rows in a column must contain the same data type.
• When tables are connected by a relationship, you are expected to make sure that the two columns used as keys have values that match, for the most part.
Because Power Pivot does not enforce referential integrity, it is possible to have non-matching values in a key column and still create a relationship. However, the presence of blank or non-matching values might affect the results of formulas and the appearance of PivotTables. For more information, see Lookups in Power Pivot Formulas.
• When you link tables by using relationships, you enlarge the scope, or context, in which your formulas are evaluated. For example, formulas in a PivotTable can be affected by any filters or column and row headings in the PivotTable. You can write formulas that manipulate context, but context can also cause your results to change in ways that you might not anticipate. For more information, see Context in DAX Formulas.

Updating the Results of Formulas

Data refresh and recalculation are two separate but related operations that you should understand when designing a data model that contains complex formulas, large amounts of data, or data that is obtained from external data sources.

Refreshing data is the process of updating the data in your workbook with new data from an external data source. You can refresh data manually at intervals that you specify. Or, if you have published the workbook to a SharePoint site, you can schedule an automatic refresh from external sources.

Recalculation is the process of updating the results of formulas to reflect any changes to the formulas themselves and to reflect those changes in the underlying data. Recalculation can affect performance in the following ways:

• For a calculated column, the result of the formula should always be recalculated for the entire column, whenever you change the formula.
• For a measure, the results of a formula are not calculated until the measure is placed in the context of the PivotTable or PivotChart. The formula will also be recalculated when you change any row or column heading that affects filters on the data or when you manually refresh the PivotTable.
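Tying the pieces above together — table functions, aggregation functions, and measures — the following sketch shows what such a formula can look like. The `Sales` table and its `Color` and `Amount` columns are hypothetical names for illustration, not from this article:

```dax
-- Sum the Amount column over only the rows of the (hypothetical)
-- Sales table whose Color column equals "Red".
Red Sales := SUMX(
    FILTER(Sales, Sales[Color] = "Red"),
    Sales[Amount]
)
```

Here FILTER returns a table of matching rows, and SUMX aggregates over that dynamically defined set.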
Troubleshooting Formulas

Errors when writing formulas

If you get an error when defining a formula, the formula might contain a syntactic error, a semantic error, or a calculation error.

Syntactic errors are the easiest to resolve. They typically involve a missing parenthesis or comma. For help with the syntax of individual functions, see the DAX Function Reference.

The other type of error occurs when the syntax is correct, but the value or the column referenced does not make sense in the context of the formula. Such semantic and calculation errors might be caused by any of the following problems:

• The formula refers to a non-existing column, table, or function.
• The formula appears to be correct, but when the data engine fetches the data it finds a type mismatch, and raises an error.
• The formula passes an incorrect number or type of parameters to a function.
• The formula refers to a different column that has an error, and therefore its values are invalid.
• The formula refers to a column that has not been processed, meaning it has metadata but no actual data to use for calculations.

In the first four cases, DAX flags the entire column that contains the invalid formula. In the last case, DAX grays out the column to indicate that the column is in an unprocessed state.

Incorrect or unusual results when ranking or ordering column values

When ranking or ordering a column that contains the value NaN (Not a Number), you might get wrong or unexpected results. For example, when a calculation divides 0 by 0, a NaN result is returned. This is because the formula engine performs ordering and ranking by comparing the numeric values; however, NaN cannot be compared to other numbers in the column. To ensure correct results, you can use conditional statements with the IF function to test for NaN values and return a numeric 0 value.
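One way to apply the guard described above (the measure names `[Numerator]` and `[Denominator]` are placeholders, not from this article):

```dax
-- Return 0 instead of NaN when the denominator is 0:
Safe Ratio := IF([Denominator] = 0, 0, [Numerator] / [Denominator])

-- DAX's DIVIDE function accepts an alternate result for the same case:
Safe Ratio 2 := DIVIDE([Numerator], [Denominator], 0)
```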
Compatibility with Analysis Services Tabular Models and DirectQuery Mode

In general, DAX formulas that you build in Power Pivot are completely compatible with Analysis Services tabular models. However, if you migrate your Power Pivot model to an Analysis Services instance, and then deploy the model in DirectQuery mode, there are some limitations.

• Some DAX formulas may return different results if you deploy the model in DirectQuery mode.
• Some formulas might cause validation errors when you deploy the model to DirectQuery mode, because the formula contains a DAX function that is not supported against a relational data source.

For more information, see the Analysis Services tabular modeling documentation in SQL Server 2012 Books Online.
Difference Between Series and Parallel Circuits

Series vs Parallel Circuits

An electrical circuit can be set up in many ways. Electronic devices such as resistors, diodes, switches, and so on, are components placed and positioned in a circuit structure. The placement of such components is crucial to the operation of the circuit, as different kinds of setups create a different kind of output, result, or purpose. Two of the simplest electronic or electrical connections are called the series and parallel circuits. These two are actually the most basic setups of all electrical circuits, but are significantly different from each other.

Fundamentally, a series circuit aims to have the same amount of current flow through all the components placed inline. It is called a ‘series’ because the components are in the same single path of the current flow. For instance, when components such as resistors are put in a series circuit connection, the same current flows through these resistors, but each will have a different voltage, assuming that the amounts of resistance are dissimilar. The voltage of the whole circuit will be the sum of the voltages in every component or resistor.

In series circuits:
Vt = V1 + V2 + V3…
It = I1 = I2 = I3…
Rt = R1 + R2 + R3…

Where:
Vt = total circuit voltage
V1, V2, V3, and so on = voltage in each component
It = total current
I1, I2, I3, and so on = current across each component
Rt = total resistance from components/resistors
R1, R2, R3, and so on = resistance values of each component

The other type of connection is called ‘parallel’. Components of such a circuit are not inline, or in series, but parallel to each other. In other words, the components are wired in separate loops. This circuit splits the current flow, and the currents flowing through each component ultimately combine to form the current flowing in the source. The voltages across the ends of the components are the same; the polarities are also identical.
Let’s draw out the same example given in the series circuit, and assume that the resistors are connected in parallel. The other term for ‘parallel’ circuits is ‘multiple’, because of the multiple connections.

In parallel circuits:
Vt = V1 = V2 = V3…
It = I1 + I2 + I3… = V (1/R1 + 1/R2 + 1/R3)
since 1/Rt = 1/R1 + 1/R2 + 1/R3

One of the major differences – besides the voltage, current, and resistance formulas – is the fact that series circuits will break if one component, such as a resistor, burns out; thus, the circuit won’t be complete. In parallel circuits, however, the other components will continue to function, as each component has its own loop, and is independent.

1. Series circuits are basic types of electrical circuits in which all components are joined in a sequence so that the same current flows through all of them.
2. Parallel circuits are types of circuits in which the identical voltage occurs across all components, with the current dividing among the components based on their resistances, or impedances.
3. In series circuits, the connection or circuit will not be complete if one component in the series burns out.
4. Parallel circuits will still continue to operate, at least with the other components, if one parallel-connected component burns out.
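The series and parallel resistance formulas above can be checked with a short Python sketch (mine, not part of the original article):

```python
def series_resistance(resistors):
    # Rt = R1 + R2 + R3 + ...
    return sum(resistors)

def parallel_resistance(resistors):
    # 1/Rt = 1/R1 + 1/R2 + 1/R3 + ...
    return 1.0 / sum(1.0 / r for r in resistors)

# Two 10-ohm resistors: in series the total doubles,
# in parallel it halves.
print(series_resistance([10.0, 10.0]))    # 20.0
print(parallel_resistance([10.0, 10.0]))  # 5.0
```

This also shows why parallel circuits have lower total resistance: adding a loop always increases the sum of reciprocals.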
Can You Cheat on the ALEKS Test?

If you are preparing for the ALEKS exam, you probably want to know whether a test taker can cheat on the ALEKS Math test. Read on for the answer.

ALEKS stands for Assessment and LEarning in Knowledge Spaces. It has been widely used by students and universities to make learning fun and orderly. This tool serves as an assessment tool to measure students’ learning of different topics, contexts, and subjects. ALEKS avoids multiple-choice questions and instead utilizes flexible, easy-to-use input tools that mimic paper-and-pencil techniques. ALEKS assesses your current content knowledge in a short amount of time (30-45 minutes for most courses) by asking several questions (usually 20-30).

Cheating directly while taking an ALEKS assessment or test is difficult. Survey results even show that cheating on ALEKS is harder than cheating in real classes, because ALEKS is a web-based platform that can easily detect cheating. You can get help with homework questions or problems, but you cannot cheat directly while you are taking the assessment.

Respondus LockDown Browser is one of the tools developed to combat cheating on ALEKS. It acts as a special browser that restricts the testing environment. When launching the browser, your computer’s webcam and microphone must be set to run throughout the test session; the browser relies heavily on them to detect cheating.

The important thing to know is that cheating on the placement assessments is pointless – your only reward will be placement into a class that is too difficult for your current level of math knowledge.
Gone fishing portfolio

I just saw this article and its portfolio seems to be fairly well balanced in the manner that I think I'd like.

Vanguard Total Stock Market Index (VTSMX) – 15%
Vanguard Small-Cap Index (NAESX) – 15%
Vanguard European Stock Index (VEURX) – 10%
Vanguard Pacific Stock Index (VPACX) – 10%
Vanguard Emerging Markets Index (VEIEX) – 10%
Vanguard Short-term Bond Index (VFSTX) – 10%
Vanguard High-Yield Corporates Fund (VWEHX) – 10%
Vanguard Inflation-Protected Securities Fund (VIPSX) – 10%
Vanguard REIT Index (VGSIX) – 5%
Vanguard Precious Metals Fund (VGPMX) – 5%

After reading a few books (4 Pillars, Random Walk) this seems very much in line with their way of thinking. Does anyone have any thoughts on this portfolio?

Also, how would you go about balancing this between tax-deferred and normal accounts? I know that REITs should go in tax-deferred since they tend to produce more dividends. Any others to be more wary of with dividends? And since many of these funds have minimum balances, how would you start off investing, given that I can't just drop the minimum into each one due to the 5k/yr limit? Thoughts?
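A quick sketch (mine, not the poster's) for splitting a yearly contribution across the listed target weights:

```python
# Target weights from the post (percent of portfolio).
allocation = {
    "VTSMX": 15, "NAESX": 15, "VEURX": 10, "VPACX": 10, "VEIEX": 10,
    "VFSTX": 10, "VWEHX": 10, "VIPSX": 10, "VGSIX": 5, "VGPMX": 5,
}

def dollar_targets(total_dollars, weights):
    # Sanity-check that the weights cover the whole portfolio.
    assert sum(weights.values()) == 100
    return {fund: total_dollars * pct / 100 for fund, pct in weights.items()}

# With a $5,000/yr contribution limit:
targets = dollar_targets(5000, allocation)
print(targets["VTSMX"])  # 750.0
```

At $5k/yr, several funds get only $250-$500, which is why the fund minimums the poster mentions are a real obstacle in year one.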
Laminated composite plates subject to thermal load using trigonometrical theory based on Carrera Unified Formulation

In the present work, an analytical solution for the thermoelastic static problem of simply supported laminated composite plates is presented. The mathematical model uses a unified new trigonometric displacement field expansion under the Carrera Unified Formulation (CUF). The equivalent single layer (ESL) governing equations are written using CUF notation for static thermal stress analysis employing the Principle of Virtual Displacement (PVD). The highly coupled partial differential equations are solved using the Navier solution method. Normalized and non-normalized unified trigonometric shear strain shape functions are introduced for the first time. Shear deformation results are compared with the classical polynomial ones, which are usually adopted in several refined plate theories under CUF. A linear temperature profile and a non-linear temperature profile obtained by solving the heat conduction problem are taken into account. Good agreement with the 3D solution is reached for several orders of expansion, but instabilities appear for some particular orders of expansion even when an exact through-the-thickness integration technique is adopted. Similar values are obtained for polynomial and non-polynomial displacement fields. However, non-polynomial functions can be optimized by changing the arguments of such functions in order to improve the results. Future studies are necessary in this direction.
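For context (not taken from this abstract, which does not state the exact shape functions used), trigonometric shear deformation theories commonly use a Touratier-type thickness function inside the CUF expansion $u_i(x,y,z) = F_\tau(z)\,u_{i\tau}(x,y)$, $\tau = 0, \dots, N$:

```latex
% Representative trigonometric shear shape function (Touratier type);
% h is the plate thickness. Its slope vanishes at the free surfaces,
% giving traction-free transverse shear at the top and bottom:
f(z) = \frac{h}{\pi}\,\sin\!\left(\frac{\pi z}{h}\right),
\qquad
\left.\frac{\mathrm{d}f}{\mathrm{d}z}\right|_{z=\pm h/2}
  = \cos\!\left(\pm\frac{\pi}{2}\right) = 0
```

The "arguments of such functions" that the abstract proposes to optimize would be, in this notation, the coefficients inside the sine argument.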
Bias and Variance

1. Introduction
In machine learning, understanding the concepts of bias and variance is crucial for building effective models. Bias and variance are two sources of prediction error in machine learning algorithms.

2. Bias
• Definition: Bias refers to the error introduced by approximating a real-world problem, which may be complex, by a simpler model.
• Models with high bias tend to oversimplify the data and make strong assumptions about the target variable.
• High bias can lead to underfitting, where the model is too simple to capture the underlying patterns in the data.
• Common examples of high-bias models include linear regression or naive Bayes classifiers.

3. Variance
• Definition: Variance pertains to the error due to sensitivity to fluctuations in the training dataset.
• Models with high variance are overly complex and adapt too much to noise in the training data.
• High variance can lead to overfitting, where the model performs well on training data but fails on unseen test data.
• Decision trees or k-nearest neighbors are examples of models that are prone to high variance.

4. Balancing Bias and Variance
• The goal in machine learning is often finding a balance between bias and variance known as the "bias-variance tradeoff."
• By adjusting hyperparameters like complexity or regularization strength, we can manage this tradeoff.

5. Techniques for Managing Bias and Variance
a) Regularization:
• Helps prevent overfitting by adding a penalty term in model training based on complexity.
• Examples include L1 (Lasso) and L2 (Ridge) regularization techniques.
b) Cross-validation:
• Allows us to estimate how well a model will generalize by splitting data into multiple subsets for training and testing.
c) Ensemble methods:
• Combining predictions from multiple models helps reduce both bias and variance.
• Examples include Random Forests or Gradient Boosting Machines.
d) Feature selection/engineering:
• Choosing relevant features or creating new ones can reduce noise in data, leading to better generalization.

Understanding bias and variance is essential for building robust machine learning models that generalize well beyond just fitting the training data. Striking an optimal balance between these two factors through careful model selection, hyperparameter tuning, cross-validation, regularization, ensemble methods, and feature engineering is key for successful machine learning projects.

Bias and variance are two fundamental sources of error in machine learning models that play a crucial role in understanding model performance, especially concerning overfitting. Let's dive into each concept.

Bias refers to the error introduced by approximating a real-world problem, which may be complex, through an overly simplistic model. Models with high bias make strong assumptions about the data distribution and target function, disregarding important patterns or relationships within the data.
• High bias can lead to underfitting, where the model fails to capture the underlying structure of the data.
• Characteristics of high-bias models:
□ Simplistic: such models may generalize too much and ignore nuances present in the data.
□ High error on training data: the model has difficulty capturing even the trends apparent in the training data.

Variance represents the error due to sensitivity to fluctuations in the training set. Models with high variance are highly sensitive to changes in training data and tend to perform well on training examples but poorly on unseen or test examples.
• High variance often results from excessively complex models that fit noise instead of true patterns in data.
• Characteristics of high-variance models:
□ Overly complex: these models have too many degrees of freedom relative to available data points.
□ Low generalization: while they excel at fitting training examples, they struggle with new, unseen instances.
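These two error sources can be made concrete with a small simulation (my sketch, not from the text): estimate a known mean μ = 2 from small samples, once with the plain sample mean (unbiased, higher variance) and once with a mean shrunk by half (biased, lower variance):

```python
import random

random.seed(0)
MU, SIGMA, N, TRIALS = 2.0, 1.0, 5, 20000

def bias_and_variance(estimator):
    estimates = []
    for _ in range(TRIALS):
        sample = [random.gauss(MU, SIGMA) for _ in range(N)]
        estimates.append(estimator(sample))
    avg = sum(estimates) / TRIALS
    bias = avg - MU                                          # systematic error
    var = sum((e - avg) ** 2 for e in estimates) / TRIALS    # spread of estimates
    return bias, var

# Plain sample mean: unbiased, but variance ~ SIGMA^2 / N.
bias_plain, var_plain = bias_and_variance(lambda s: sum(s) / len(s))

# Shrinking toward 0 trades variance for bias (bias ~ -MU/2).
bias_shrunk, var_shrunk = bias_and_variance(lambda s: 0.5 * sum(s) / len(s))
```

The shrunk estimator's variance comes out about a quarter of the plain one's, at the cost of a bias near -1; which estimator is better overall is decided by bias² + variance, which is exactly the tradeoff discussed here.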
Overfitting occurs when a model learns both the true patterns present in training data and the noise. This phenomenon is primarily driven by high variance; however, it can also result from insufficient regularization techniques applied during model training.
• Effects of overfitting:
□ Reduced generalization capability, leading to poor performance on unseen or test datasets.
□ Memorization rather than learning: the model memorizes specific examples without grasping underlying concepts.

To combat overfitting, achieving an optimal balance between bias and variance is crucial while developing machine learning models. Techniques such as cross-validation, regularization methods (e.g., L1/L2 regularization), early stopping, and ensemble methods (e.g., bagging) are commonly used strategies for mitigating these issues.

1. Introduction
• In machine learning, understanding the concepts of bias and variance is essential to diagnose the performance of a model.
• The balance between bias and variance plays a crucial role in determining the model's ability to generalize well on unseen data.

2. Bias
• Bias refers to the error introduced by approximating a real-world problem, which may be very complex, by a simple model.
• A high-bias model makes strong assumptions about the underlying data distribution, leading to oversimplified models that underfit the data.

3. Variance
• Variance measures how much predictions for a given point vary between different realizations of the model.
• High-variance models are overly sensitive to fluctuations in the training set, capturing noise along with true patterns and leading to overfitting.

4. Underfitting
• Underfitting occurs when our model is too simple to capture the underlying structure of the data.
Causes of Underfitting:
• Insufficient complexity in your model (high bias)
• Inadequate features or input variables
• Small amount of training data

How to Address Underfitting:
• Increase model complexity (e.g., add more layers or neurons in neural networks).
• Add additional features or interactions between features.
• Collect more relevant and diverse training data.

In conclusion, understanding bias and variance is crucial for diagnosing issues such as underfitting. By finding an optimal balance between these two factors, we can improve our models' performance and generalization capabilities on unseen data.

In the field of machine learning, managing bias and variance is essential for building accurate models. Bias refers to the error introduced by approximating a real-world problem, usually due to overly simplistic assumptions made by the model. On the other hand, variance arises from the model's sensitivity to fluctuations in the training dataset.

The Bias-Variance Tradeoff
The bias-variance tradeoff is a fundamental concept in machine learning that involves balancing these two sources of error. A high-bias model tends to underfit the data, while a high-variance model tends to overfit it.

Regularization Techniques
Regularization methods are used to address issues related to bias and variance in machine learning models:
• L1 (Lasso) and L2 (Ridge) Regularization:
□ These techniques add penalty terms to the cost function based on either the absolute values of coefficients (L1) or squared values of coefficients (L2).
• Elastic Net Regularization:
□ Combines L1 and L2 regularization by adding both penalties to the cost function with separate alpha parameters.
• Dropout:
□ Commonly used in neural networks, dropout randomly sets a fraction of input units to zero during each update, which helps prevent overfitting.
• Batch Normalization:
□ Normalizes input layers by adjusting and scaling activations, which can improve generalization capabilities.
• Early Stopping:
□ Stops training once validation error starts increasing after reaching a minimum point, preventing further overfitting.

By employing appropriate regularization techniques, machine learning practitioners can fine-tune their models' complexity levels effectively.
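As a minimal sketch of how an L2 (Ridge) penalty shrinks coefficients — illustrative only, for a single feature with no intercept, not taken from the text:

```python
def ridge_coefficient(xs, ys, lam):
    # Minimize sum((y - w*x)^2) + lam * w^2; setting the derivative
    # to zero gives the closed form w = Σxy / (Σx² + lam).
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]                 # roughly y = 2x

w_ols = ridge_coefficient(xs, ys, 0.0)    # no penalty: ~2.03
w_l2 = ridge_coefficient(xs, ys, 10.0)    # penalized: pulled toward 0
```

Increasing `lam` only ever grows the denominator, so the fitted coefficient shrinks monotonically toward zero — the mechanism by which the penalty lowers variance at the cost of some bias.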
MCQs | Neural Networks | AIMCQs Neural Network Algorithms MCQs 1. Which activation function is commonly used in the hidden layers of a neural network to introduce non-linearity? a. Sigmoid b. ReLU (Rectified Linear Unit) c. Tanh (Hyperbolic Tangent) d. Linear Answer: b. ReLU (Rectified Linear Unit) 2. What is the purpose of the activation function in a neural network? a. It determines the number of layers in the network b. It normalizes the input data c. It introduces non-linearity in the network d. It controls the learning rate of the network Answer: c. It introduces non-linearity in the network 3. Which neural network architecture is used for handling sequential data, such as natural language processing or time series analysis? a. Feedforward Neural Network (FNN) b. Convolutional Neural Network (CNN) c. Recurrent Neural Network (RNN) d. Radial Basis Function Network (RBFN) Answer: c. Recurrent Neural Network (RNN) 4. Which neural network architecture is commonly used for image classification tasks? a. Feedforward Neural Network (FNN) b. Convolutional Neural Network (CNN) c. Recurrent Neural Network (RNN) d. Radial Basis Function Network (RBFN) Answer: b. Convolutional Neural Network (CNN) 5. Which algorithm is used for updating the weights in a neural network during the training process? a. Backpropagation b. Gradient Descent c. Stochastic Gradient Descent (SGD) d. All of the above Answer: d. All of the above 6. What is the purpose of the bias term in a neural network? a. It controls the learning rate of the network b. It adds flexibility to the decision boundaries of the network c. It introduces non-linearity in the network d. It allows shifting the activation function Answer: d. It allows shifting the activation function 7. Which algorithm is used for updating the weights in a neural network with a single training example at a time? a. Backpropagation b. Gradient Descent c. Stochastic Gradient Descent (SGD) d. Mini-batch Gradient Descent Answer: c. 
Stochastic Gradient Descent (SGD)

8. Which technique is used for preventing overfitting in a neural network by randomly dropping out neurons during training?
a. Dropout  b. Batch Normalization  c. L1 Regularization  d. L2 Regularization
Answer: a. Dropout

9. What is the purpose of the loss function in a neural network?
a. It measures the accuracy of predictions  b. It measures the complexity of the model  c. It quantifies the difference between predicted and actual values  d. It controls the learning rate of the network
Answer: c. It quantifies the difference between predicted and actual values

10. Which algorithm is used for updating the weights in a neural network by considering the previous weight update?
a. Backpropagation through time (BPTT)  b. Resilient Propagation (RProp)  c. Levenberg-Marquardt Algorithm  d. Quickprop
Answer: b. Resilient Propagation (RProp)

11. Which neural network architecture is used for handling both sequential and spatial data, such as video processing or 3D image analysis?
a. Feedforward Neural Network (FNN)  b. Convolutional Neural Network (CNN)  c. Recurrent Neural Network (RNN)  d. Long Short-Term Memory (LSTM) Network
Answer: d. Long Short-Term Memory (LSTM) Network

12. Which algorithm is used for updating the weights in a neural network by considering the direction of steepest descent?
a. Backpropagation  b. Gradient Descent  c. Conjugate Gradient  d. Newton's Method
Answer: b. Gradient Descent

13. What is the purpose of the learning rate in a neural network?
a. It controls the speed of convergence during training  b. It determines the number of hidden layers in the network  c. It introduces non-linearity in the network  d. It allows shifting the activation function
Answer: a. It controls the speed of convergence during training

14. Which algorithm is used for updating the weights in a neural network by considering the direction of the negative gradient?
a. Backpropagation  b. Gradient Descent  c. Adam Optimization  d. Adaboost
Answer: b. Gradient Descent

15. Which neural network architecture is used for handling variable-length sequential data, such as text generation or machine translation?
a. Feedforward Neural Network (FNN)  b. Convolutional Neural Network (CNN)  c. Recurrent Neural Network (RNN)  d. Transformer Network
Answer: d. Transformer Network

16. Which technique is used for normalizing the input data in a neural network to ensure similar scales across different features?
a. Dropout  b. Batch Normalization  c. L1 Regularization  d. L2 Regularization
Answer: b. Batch Normalization

17. Which algorithm is used for updating the weights in a neural network by considering the direction of the negative gradient and the magnitude of the previous weight update?
a. Backpropagation through time (BPTT)  b. Resilient Propagation (RProp)  c. Levenberg-Marquardt Algorithm  d. Quickprop
Answer: d. Quickprop

18. Which neural network architecture is used for handling both sequential and hierarchical data, such as natural language parsing or speech recognition?
a. Feedforward Neural Network (FNN)  b. Convolutional Neural Network (CNN)  c. Recursive Neural Network (ReNN)  d. Radial Basis Function Network (RBFN)
Answer: c. Recursive Neural Network (ReNN)

19. Which technique is used for preventing overfitting in a neural network by adding a penalty term to the loss function based on the weights?
a. Dropout  b. Batch Normalization  c. L1 Regularization  d. L2 Regularization
Answer: d. L2 Regularization

20. Which algorithm is used for updating the weights in a neural network by considering the direction of the negative gradient and the Hessian matrix?
a. Backpropagation  b. Gradient Descent  c. Conjugate Gradient  d. Newton's Method
Answer: d. Newton's Method

21. Which neural network architecture is used for handling both sequential and non-sequential data, such as sentiment analysis or document classification?
a. Feedforward Neural Network (FNN)  b. Convolutional Neural Network (CNN)  c. Recurrent Neural Network (RNN)  d. Transformer Network
Answer: a. Feedforward Neural Network (FNN)

22. Which algorithm is used for updating the weights in a neural network by considering the direction of the negative gradient and the momentum term?
a. Backpropagation  b. Gradient Descent  c. Stochastic Gradient Descent (SGD)  d. Momentum-based Gradient Descent
Answer: d. Momentum-based Gradient Descent

23. What is the purpose of the momentum term in a neural network?
a. It controls the speed of convergence during training  b. It introduces non-linearity in the network  c. It allows shifting the activation function  d. It helps accelerate the convergence and overcome local minima
Answer: d. It helps accelerate the convergence and overcome local minima

24. Which technique is used for preventing overfitting in a neural network by randomly selecting a subset of the training examples for each iteration?
a. Dropout  b. Batch Normalization  c. L1 Regularization  d. Mini-batch Gradient Descent
Answer: d. Mini-batch Gradient Descent

25. Which algorithm is used for updating the weights in a neural network by considering the direction of the negative gradient and adapting the learning rate for each weight?
a. Backpropagation  b. Gradient Descent  c. Adam Optimization  d. Adaboost
Answer: c. Adam Optimization

26. What is the purpose of the early stopping technique in a neural network?
a. It prevents the network from overfitting the training data  b. It speeds up the convergence of the network  c. It allows shifting the activation function  d. It controls the learning rate of the network
Answer: a. It prevents the network from overfitting the training data

27. Which neural network architecture is used for handling both sequential and spatial data, such as video processing or 3D image analysis?
a. Feedforward Neural Network (FNN)  b. Convolutional Neural Network (CNN)  c. Recurrent Neural Network (RNN)  d. Long Short-Term Memory (LSTM) Network
Answer: d. Long Short-Term Memory (LSTM) Network

28. Which algorithm is used for updating the weights in a neural network by considering the direction of the negative gradient and the magnitude of the previous weight update, with adaptive learning rates for each weight?
a. Backpropagation through time (BPTT)  b. Resilient Propagation (RProp)  c. Levenberg-Marquardt Algorithm  d. Adam Optimization
Answer: d. Adam Optimization

29. What is the purpose of the dropout technique in a neural network?
a. It prevents the network from overfitting the training data  b. It speeds up the convergence of the network  c. It allows shifting the activation function  d. It introduces non-linearity in the network
Answer: a. It prevents the network from overfitting the training data

30. Which algorithm is used for updating the weights in a neural network by considering the direction of the negative gradient and the second derivative (Hessian matrix)?
a. Backpropagation  b. Gradient Descent  c. Conjugate Gradient  d. Newton's Method
Answer: d. Newton's Method
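Several of the questions above (13, 14, 22, and 23) turn on the difference between the plain gradient-descent update and the momentum-based update. As a quick illustration, here is a minimal sketch in plain Python (not tied to any particular framework) applying both rules to the one-dimensional function f(w) = w², whose minimum is at w = 0:

```python
# f(w) = w^2 has gradient f'(w) = 2w and its minimum at w = 0.
def grad(w):
    return 2.0 * w

lr = 0.1  # learning rate (question 13: controls speed of convergence)

# Plain gradient descent (question 14): step against the gradient.
w = 5.0
for _ in range(300):
    w -= lr * grad(w)

# Momentum-based gradient descent (questions 22-23): the velocity term
# accumulates past updates, which accelerates convergence and can help
# the iterate roll past shallow local minima.
wm, v, beta = 5.0, 0.0, 0.9
for _ in range(300):
    v = beta * v - lr * grad(wm)
    wm += v

print(w, wm)  # both end up very close to the minimum at 0
```

The learning rate `lr` and momentum coefficient `beta` values here are arbitrary illustrative choices.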
Compressible Flow in Ansys CFX
#1 — 23, 2011 — Bargav, New Member (Join Date: Feb 2011; Posts: 8; Rep Power: 15)

Hello,

My original problem is an FSI simulation, but I have a basic problem with CFX compressible flow. I have removed the structure part from my FSI simulation and set up the fluid domain with walls at the structural locations to match the static pressure with the matlab code that is close to my experimental results. I am also using flotran to verify that I get similar results from flotran and cfx.

I get the same pressure in matlab, flotran and cfx when I use constant air properties at 25 C for input velocities as high as 50 m/s, and my max velocities are around 100 to 300 m/s. I know that my max velocities are close to the speed of sound, but I just had to check the static pressures.

To do near-compressible flow (which is enough to do my FSI simulation) I gave a bulk modulus in flotran and changed the material to air ideal gas (total energy option) in cfx. Flotran gives very similar results to those of my matlab code, and I am able to notice the difference in pressure due to the change in density. In cfx, the problem is that there is a very high change in pressure and density as soon as I shift to air ideal gas. The pressures and densities are 100 times higher than what I get from flotran or from cfx with air at 25 C. I did check my reference pressure, which is 1 atm. I could not find what my mistake was or why there is so much difference in the cfx results when changing the material to air ideal gas. I appreciate any help from you and will be glad to provide more information if needed.
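A note not in the original post: a quick ideal-gas sanity check a reader might run. At 1 atm and 25 °C, air density should come out near 1.18 kg/m³; since CFX forms the absolute pressure for the ideal-gas law as the Reference Pressure plus the solver's relative pressure, comparing the densities the solver reports against this hand value is one way to spot a pressure-offset mistake. (Plain Python; the specific gas constant 287.05 J/(kg·K) for air is a standard value, not taken from the post.)

```python
# Ideal-gas density check for air at 1 atm and 25 C.
R_air = 287.05        # specific gas constant of air, J/(kg K)
p_abs = 101325.0      # absolute pressure, Pa (1 atm)
T = 298.15            # temperature, K (25 C)

rho = p_abs / (R_air * T)   # ideal gas law: rho = p / (R T)
print(f"air density: {rho:.4f} kg/m^3")

# If a solver is accidentally fed gauge pressure (near 0 Pa) instead of
# the absolute pressure, the computed density collapses toward zero --
# large density/pressure mismatches often trace back to such an offset.
```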
Reading: Equilibrium, Surplus, and Shortage

Demand and Supply

In order to understand market equilibrium, we need to start with the laws of demand and supply. Recall that the law of demand says that as price decreases, consumers demand a higher quantity. Similarly, the law of supply says that when price decreases, producers supply a lower quantity. Because the graphs for demand and supply curves both have price on the vertical axis and quantity on the horizontal axis, the demand curve and supply curve for a particular good or service can appear on the same graph. Together, demand and supply determine the price and the quantity that will be bought and sold in a market. These relationships are shown as the demand and supply curves in Figure 1, which is based on the data in Table 1, below.

Figure 1. Demand and Supply for Gasoline

Table 1. Price, Quantity Demanded, and Quantity Supplied

Price (per gallon)   Quantity demanded (millions of gallons)   Quantity supplied (millions of gallons)
$1.00                800                                       500
$1.20                700                                       550
$1.40                600                                       600
$1.60                550                                       640
$1.80                500                                       680
$2.00                460                                       700
$2.20                420                                       720

If you look at either Figure 1 or Table 1, you'll see that, at most prices, the amount that consumers want to buy (which we call quantity demanded) is different from the amount that producers want to sell (which we call quantity supplied). What does it mean when the quantity demanded and the quantity supplied aren't the same? Answer: a surplus or a shortage.

Surplus or Excess Supply

Let's consider one scenario in which the amount that producers want to sell doesn't match the amount that consumers want to buy. Suppose that a market produces more than the quantity demanded. Let's use our example of the price of a gallon of gasoline. Imagine that the price of a gallon of gasoline were $1.80 per gallon. This price is illustrated by the dashed horizontal line at the price of $1.80 per gallon in Figure 2, below. Figure 2.
Demand and Supply for Gasoline: Surplus At this price, the quantity demanded is 500 gallons, and the quantity of gasoline supplied is 680 gallons. You can also find these numbers in Table 1, above. Now, compare quantity demanded and quantity supplied at this price. Quantity supplied (680) is greater than quantity demanded (500). Or, to put it in words, the amount that producers want to sell is greater than the amount that consumers want to buy. We call this a situation of excess supply (since Qs > Qd) or a surplus. Note that whenever we compare supply and demand, it’s in the context of a specific price—in this case, $1.80 per gallon. With a surplus, gasoline accumulates at gas stations, in tanker trucks, in pipelines, and at oil refineries. This accumulation puts pressure on gasoline sellers. If a surplus remains unsold, those firms involved in making and selling gasoline are not receiving enough cash to pay their workers and cover their expenses. In this situation, some producers and sellers will want to cut prices, because it is better to sell at a lower price than not to sell at all. Once some sellers start cutting prices, others will follow to avoid losing sales. These price reductions will, in turn, stimulate a higher quantity demanded. How far will the price fall? Whenever there is a surplus, the price will drop until the surplus goes away. When the surplus is eliminated, the quantity supplied just equals the quantity demanded—that is, the amount that producers want to sell exactly equals the amount that consumers want to buy. We call this equilibrium, which means “balance.” In this case, the equilibrium occurs at a price of $1.40 per gallon and at a quantity of 600 gallons. You can see this in Figure 2 (and Figure 1) where the supply and demand curves cross. You can also find it in Table 1 (the numbers in bold). Equilibrium: Where Supply and Demand Intersect When two lines on a diagram cross, this intersection usually means something. 
On a graph, the point where the supply curve (S) and the demand curve (D) intersect is the equilibrium. The equilibrium price is the only price where the desires of consumers and the desires of producers agree—that is, where the amount of the product that consumers want to buy (quantity demanded) is equal to the amount producers want to sell (quantity supplied). This mutually desired amount is called the equilibrium quantity. At any other price, the quantity demanded does not equal the quantity supplied, so the market is not in equilibrium at that price. If you have only the demand and supply schedules, and no graph, you can find the equilibrium by looking for the price level on the tables where the quantity demanded and the quantity supplied are equal (again, the numbers in bold in Table 1 indicate this point).

Finding Equilibrium with Algebra

We've just explained two ways of finding a market equilibrium: by looking at a table showing the quantity demanded and supplied at different prices, and by looking at a graph of demand and supply. We can also identify the equilibrium with a little algebra if we have equations for the supply and demand curves. Let's practice solving a few equations that you will see later in the course. Right now, we are only going to focus on the math. Later you'll learn why these models work the way they do, but let's start by focusing on solving the equations. Suppose that the demand for soda is given by the following equation:

Qd = 16 – 2P

where Qd is the amount of soda that consumers want to buy (i.e., quantity demanded), and P is the price of soda. Suppose the supply of soda is

Qs = 2 + 5P

where Qs is the amount of soda that producers will supply (i.e., quantity supplied). Finally, suppose that the soda market operates at a point where supply equals demand, or

Qd = Qs

We now have a system of three equations and three unknowns (Qd, Qs, and P), which we can solve with algebra.
Since Qd = Qs, we can set the demand and supply equations equal to each other:

16 – 2P = 2 + 5P

Step 1: Isolate the variable by adding 2P to both sides of the equation, and subtracting 2 from both sides.

14 = 7P

Step 2: Simplify the equation by dividing both sides by 7.

P = 2

The price of each soda will be $2. Now we want to understand the amount of soda that consumers want to buy, or the quantity demanded, at a price of $2. Remember, the formula for quantity demanded is the following:

Qd = 16 – 2P

Taking the price of $2, and plugging it into the demand equation, we get

Qd = 16 – 2(2) = 12

So, if the price is $2 each, consumers will purchase 12. How much will producers supply, or what is the quantity supplied? Taking the price of $2, and plugging it into the equation for quantity supplied, we get the following:

Qs = 2 + 5(2) = 12

Now, if the price is $2 each, producers will supply 12 sodas. This means that we did our math correctly, since Qd = Qs and both Qd and Qs are equal to 12.
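The soda-market algebra can be double-checked with a few lines of code. This sketch (not part of the original reading) encodes the demand curve Qd = 16 − 2P and supply curve Qs = 2 + 5P from the example and confirms the equilibrium:

```python
# Soda market from the text: demand Qd = 16 - 2P, supply Qs = 2 + 5P.
def qd(p):
    return 16 - 2 * p

def qs(p):
    return 2 + 5 * p

# Setting Qd = Qs gives 16 - 2P = 2 + 5P, i.e. 14 = 7P, so P = 2.
p_star = 14 / 7
q_star = qd(p_star)

print(p_star, q_star)          # equilibrium price and quantity
assert qd(p_star) == qs(p_star)  # quantity demanded equals quantity supplied
```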
The price will rise until the shortage is eliminated and the quantity supplied equals quantity demanded. In other words, the market will be in equilibrium again. As before, the equilibrium occurs at a price of $1.40 per gallon and at a quantity of 600 gallons. Generally, any time the price for a good is below the equilibrium level, incentives built into the structure of demand and supply will create pressures for the price to rise. Similarly, any time the price for a good is above the equilibrium level, similar pressures will generally cause the price to fall. As you can see, the quantity supplied or quantity demanded in a free market will correct over time to restore balance, or equilibrium.

Equilibrium and Economic Efficiency

Equilibrium is important to create both a balanced market and an efficient market. If a market is at its equilibrium price and quantity, then it has no reason to move away from that point, because it's balancing the quantity supplied and the quantity demanded. However, if a market is not at equilibrium, then economic pressures arise to move the market toward the equilibrium price and equilibrium quantity. This happens either because there is more supply than what the market is demanding or because there is more demand than the market is supplying. This balance is a natural function of a free-market economy. Also, a competitive market that is operating at equilibrium is an efficient market. Economists typically define efficiency in this way: when it is impossible to improve the situation of one party without imposing a cost on another. Conversely, if a situation is inefficient, it becomes possible to benefit at least one party without imposing costs on others. Efficiency in the demand and supply model has the same basic meaning: The economy is getting as much benefit as possible from its scarce resources, and all the possible gains from trade have been achieved.
In other words, the optimal amount of each good and service is being produced and consumed. Figure 4. Demand and Supply for Gasoline: Equilibrium
Initial Conditions, Restart File, Warm up, 2D Ramp up

For unsteady-state 1D and 2D HEC-RAS models, initial conditions must be set up appropriately so that the model does not crash or become unstable at the beginning of the simulation. Initial conditions are set up in the Unsteady Flow Data Editor's Initial Conditions tab (Figure 1).

Figure 1

You can specify a Restart File by using an output file from a previous run to establish the initial conditions for your current run. By applying the concept of a Restart File, if you need to run a long simulation time, you can divide the total simulation time into several shorter periods (Period 1, 2, 3, …), and the output file at the end of Period 1 can be used as the initial-condition Restart File of Period 2. To generate a restart file for initial conditions, a "restart" unsteady flow data file needs to be created first. The inflow hydrograph is a constant value equal or close to the initial flow value (at time = 0.0) of the normal model run. The stage hydrograph can be a step-down one starting from an arbitrarily high value (for example, equal to the most upstream cross section invert or even higher) and gradually decreasing to a reasonable value that would be expected at time = 0.0 for the normal model run. Finally, a "restart" plan file will be needed which includes the "restart" unsteady flow data and the normal model geometry file. Open the "restart" plan, go to the Unsteady Flow Analysis window, and click Options —> Output Options … to set up restart files per your needs (Figure 2).

Figure 2

Another option to establish initial conditions is to enter flow data for each reach of a 1D model (Figure 1) and have HEC-RAS perform a steady-state backwater run to compute the corresponding stages at each cross section. The initial flow data can be changed at any cross section, but at a minimum a flow at the upper end of each river reach must be provided.
When users provide initial flow data, the value entered should be the same as or close to the first flow value of the boundary condition flow hydrograph (start of simulation). By default, for a dendritic river system, if users leave the initial flow at the upper end of each river reach blank, HEC-RAS will use flow data from the first value of the boundary condition flow hydrographs – this is a very common practice for unsteady-state HEC-RAS modeling. If users type in a min flow in the unsteady flow data editor (Figure 3), the practice of leaving initial flow data blank will also recognize this min flow setting (HEC-RAS will pick the greater of the min flow and the first flow value of the flow hydrograph as the initial flow). In the example shown in Figure 3, since Min Flow is entered as 20 cfs while the first flow value is 317.58 cfs, HEC-RAS will set the initial flow to 317.58 cfs; however, if in Figure 3 the Min Flow were 500 cfs, HEC-RAS would set the initial flow to 500 cfs.

Figure 3

A third option is available to set the initial flow and stage from a profile from a previous run. This option can be selected from the File menu of the Unsteady Flow Data Editor: File —> Set Initial Conditions (flow and stage for 1D) from previous output profile … (Figure 4). After clicking this command, users can select a plan and profile from a previous run (Figure 5) to set up the initial flow and stage.

Figure 4
Figure 5

At the start of a simulation run, if a model experiences numerical stability issues, users can turn on the warm-up option by going to Unsteady Flow Analysis —> Options —> Computation Options and Tolerances … (Figure 6). Under the General tab as shown in Figure 6, users can enter a number of warm-up time steps (by default this value is zero, which means no warm-up period). The time step during the warm-up period (hrs) will use the simulation time step if it is left at the default value (0), or users can enter a different time step. The warm-up time setting is applied in both 1D and 2D domains.
The HEC-RAS default setting is not to perform a warm-up period; however, if a model becomes unstable at the beginning of a run, users can have HEC-RAS run a number of iterations before the start of the simulation in which all inflows are held constant (warm up). The warm-up run does not advance in time, and the actual simulation only starts after the warm-up run ends.

Figure 6

If a 2D flow area has external boundary conditions attached (flow hydrographs or stage hydrographs) or is directly connected to a 1D river cross section (not through a lateral structure), and therefore flow will be coming into or out of the 2D area at the beginning of the simulation, the 2D area Initial Condition Ramp Up Time option must be enabled. The ramp-up time (Figure 7) under the 2D Flow Options tab of the Computation Options and Tolerances window is different from the warm-up period. The ramp-up option allows users to specify a time (in hours) to run the computations for the 2D Flow Area, slowly transitioning the flow boundaries from zero to their initial value, and the stage boundaries from a dry elevation up to their initial wet elevation. Users specify the total "Initial Conditions Time" (2 hours, for example) and a fraction of this time for ramping up the boundary conditions. A value of 0.1 means that 10% of the Initial Conditions time will be used to ramp up the boundary conditions to their initial values; the remaining time will be used to hold the initial boundary conditions constant but allow the flow to propagate through the 2D Flow Area, giving it enough time to stabilize to a good initial condition throughout the entire 2D Flow Area. If the initial conditions time is 2 hrs and the ramp-up fraction is 0.1, then HEC-RAS will transition from zero to the initial condition values within 2 hr × 0.1 = 0.2 hrs, and for the remaining 2 hr × 0.9 = 1.8 hrs HEC-RAS will run by holding the initial conditions constant.
The Warm Up option is for the entire HEC-RAS model, including both 1D and 2D elements; the Ramp Up option, however, is for 2D domains only, and it must be turned on if at the beginning of the simulation there is water flowing into or out of a 2D area to establish initial conditions. Normally the 2D ramp up happens before the start of the overall model Warm Up.

Figure 7

Sometimes a 2D area requires a smaller time step than the one required by 1D modeling. For example, if the 2D cell size is 100 ft x 100 ft while the 1D river cross section spacing is 600 ft, an appropriate 2D time step may be around 100 ft / 5 fps x 1.0 = 20 sec (assuming the Courant number needs to be 1.0 and the wave velocity/celerity is 5.0 fps), and the 1D time step can be estimated as 600 ft / 5 fps x 1.0 = 120 sec, or 2.0 min. If the overall unsteady modeling time step is chosen as 2.0 min, the 2D modeling time step can be sliced to 20 seconds by entering 6 for Number of Time Slices (Integer Value) in Figure 7. Different values for Number of Time Slices can be set for each 2D flow area.

To reduce the overall model run time, the time step can be adjusted based on the Courant Number (Figure 8) under the Advanced Time Step Control tab of the Computation Options and Tolerances window. If the flood wave is rising and falling rapidly (depth and velocity are changing quickly), the maximum Courant number probably needs to be close to 1.0 with a very small time step (dam breach analysis). Usually, the maximum Courant Number can be set as 2.0 or 3.0 for most applications so the adjusted time step is not too small. The minimum Courant number should be less than half of the maximum Courant number. In Figure 8, the maximum Courant number is 2.0 and the minimum Courant number is set as 0.9. In order to prevent HEC-RAS from changing the time step too frequently, the number of steps below the minimum before doubling can be chosen as a value between 5 and 10.

Figure 8

Note: to estimate the wave velocity or celerity c, refer to the equation in Figure 9.
An example: assume the hydraulic depth is 1.0ft, c=(32.2 x 1)^(1/2)=5.7 fps. Figure 9
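The back-of-the-envelope numbers above (20 s for the 2D cells, 120 s for the 1D reach, 6 time slices, and c ≈ 5.7 fps) can be reproduced with a short script. This is only a sketch of the hand calculation, not a HEC-RAS feature: it applies c = sqrt(g·D) and dt = C·dx/c from the text.

```python
import math

def celerity(depth_ft, g=32.2):
    """Shallow-water wave celerity c = sqrt(g * hydraulic depth), in ft/s."""
    return math.sqrt(g * depth_ft)

def time_step(dx_ft, wave_speed_fps, courant=1.0):
    """Time step from the Courant condition: dt = C * dx / c, in seconds."""
    return courant * dx_ft / wave_speed_fps

# Example from the text: 5 fps wave speed, Courant number 1.0.
dt_2d = time_step(100.0, 5.0)   # 100 ft cells   -> 20 s
dt_1d = time_step(600.0, 5.0)   # 600 ft spacing -> 120 s (2.0 min)
slices = round(dt_1d / dt_2d)   # number of 2D time slices -> 6

print(dt_2d, dt_1d, slices)
print(round(celerity(1.0), 1))  # 1 ft hydraulic depth -> about 5.7 fps
```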
How to Create Minimum Variance Portfolio in Excel: 2 Methods - ExcelDemy

Method 1 – Using Matrix to Create Minimum Variance Portfolio (Multi Assets)
• We need to calculate the Excess Returns of these companies. Go to a new sheet and use a formula to calculate it.
• Type the following formula in B6 and press ENTER. This will show you the Excess Return of Microsoft for the first month. The formula uses the AVERAGE function to calculate the average of the total stock returns of Microsoft. This average is then subtracted from each stock return to get the Excess Return data. The dataset term in the formula means that the cell reference comes from the dataset sheet.
• Drag the Fill Handle to the right to AutoFill the cells to F6.
• Use the Fill Handle again to AutoFill all the cells. This command will return all the Excess Returns for 10 months.
• Create a matrix by multiplying the transpose of the array B6:F15 with the array B6:F15. We will use this matrix to determine the variance later. Create some columns to store the data of the matrix, select the 5×5 range H6:L10 (multiplication between 5×10 and 10×5 matrices results in a 5×5 matrix), and type the following formula in H6. The formula uses the TRANSPOSE function to transpose the array B6:F15. The MMULT function returns the matrix multiplication result between this transposed array and B6:F15.
• Hold CTRL+SHIFT and press ENTER. The latest version of Excel does not require this step, but older versions of Excel will give errors without it. You will see the output of the Matrix Multiplication in the sheet.
• Go to another new sheet for convenience. Create the necessary columns in it as well.
• Type the following formula in B5.
• Drag the Fill Icon to the right up to F5. Drag down the Fill Icon to AutoFill the lower cells. Create a Portfolio Return Matrix.
• Use some dummy data to solve our Portfolio. Select five decimal numbers that total 1. If you have 6 companies, you should select 6 decimal numbers.
We chose 0.2, as 0.2 times 5 equals 1. This is the hypothetical investment percentage for an investor.
• Type the following formula in C11 to determine the Weighting Matrix. Make sure you select the array C11:G11 and hold CTRL+SHIFT before pressing ENTER. Although it won't give you any errors in the latest version of Excel, you get errors in older versions.
• Use the following formula to determine the Variance of our data.
• We need to use the Solver Toolpak to finish the job. If you don't have this in your Data tab, you need to go to the File Menu >> Options >> Add-ins >> Manage >> Excel Add-ins >> Go…
• The Add-ins window will appear. Check Solver Add-in and click OK.
• The Solver Add-in will appear in the Data tab. Click on it to open it.
• Insert the Solver Parameters. Here, we need to minimize the risk by minimizing the variance, so our Objective cell will be C12, which stores the value of the Variance. Also, select Min.
• Select C4:C8 for Changing Variable Cells. We get the percentages of sustainable investment in these cells once we launch the Solver.
• Add some Constraints to get more accurate results.
• After clicking Add, the Add Constraint dialog box will appear. Set the value of cell C4 to greater than or equal to zero.
• We added another constraint which sets the value in C9 to 1.
• Click on Solve.
• The Solver Results window will appear. Click OK.
• You will see the minimized Variance and the risk-free investment percentages.
• Let's convert these decimals to percentages. The output illustrates that 9.110626% of investment in Microsoft, 24.5315518% of investment in Twitter, and so on will be risk-free for an investor. This creates a Minimum Variance Portfolio in Excel.

Method 2 – Minimum Variance Portfolio Comparing Two Assets
• Insert the stock return data and select some cells to store the necessary data, such as the Standard Deviation or Variance. Set an initial investment percentage. We want to invest 67% in Twitter, and the rest in Tesla.
• Type the following formula in G5.
• Type the following formula in cell D5, which will return the Portfolio Return.
• Use the Fill Handle to AutoFill the lower cells.
• Use another formula to calculate the Expected Returns for Twitter, Tesla, and the Portfolio Return.
• Use the VAR.P function to calculate the variance of the stock returns of Twitter. We want to ignore any logical values and text in the data, so we used the VAR.P function.
• Type the following formula to calculate the Standard Deviation using the STDEV.P function.
• Drag the Fill Icon to the right to determine the Expected Return, Variance, and Standard Deviation for the other data.
• Minimize the Variance or Standard Deviation (as the Standard Deviation is simply the square root of the Variance, we can use either of them for the Minimum Variance Portfolio).
• Insert the cell reference I10, where the Portfolio Return Standard Deviation is stored, and set the Objective to Min.
• See the change by varying the investment percentage of Twitter. Insert the G4 cell reference in the 'By Changing Variable Cells' section.
• Click Solve.
• Get the optimum value for the investment percentage in both Twitter and Tesla. Minimize the Variance from 85% to 3.61%.

Download Practice Workbook

1 Comment
1. The formula =dataset!B5-AVERAGE(dataset!$B$5:$B$14) should be =dataset!B5-AVERAGE(dataset!B$5:B$14); otherwise, when you copy across to the other stocks, you are still using the average for Microsoft to calculate the excess returns of the other stocks.
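The Solver-based search in Method 1 can be cross-checked against the closed-form minimum-variance solution w = Σ⁻¹·1 / (1ᵀ·Σ⁻¹·1), where Σ is the covariance matrix of returns. The sketch below uses a toy two-asset covariance matrix (illustrative numbers, not the article's data) and plain Python; note the closed form does not enforce the w ≥ 0 constraint the article adds in Solver, so the two can differ when a short position would lower the variance.

```python
# Toy covariance matrix for two assets (illustrative numbers only).
cov = [[0.04, 0.01],
       [0.01, 0.09]]

# Closed-form minimum-variance weights: w = Sigma^-1 1 / (1' Sigma^-1 1).
# A 2x2 matrix can be inverted by hand via its determinant.
a, b = cov[0]
c, d = cov[1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]

# Apply Sigma^-1 to the ones vector, then normalise so the weights sum to 1.
y = [inv[0][0] + inv[0][1], inv[1][0] + inv[1][1]]
w = [yi / sum(y) for yi in y]

def variance(weights):
    """Portfolio variance w' Sigma w for the two-asset case."""
    w0, w1 = weights
    return w0 * w0 * a + 2 * w0 * w1 * b + w1 * w1 * d

print(w, variance(w))  # lower variance than, e.g., equal weighting
```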
Editing a Model

This example shows how to analyze the normal modes corresponding to a system of interest. In this example, ANM calculations will be performed for HIV-1 reverse transcriptase (RT) subunits p66 and p51. Analysis will be made for subunit p66. Output is a reduced/sliced model that can be used as input to analysis and plotting functions.

ANM calculations

We start by importing everything from the ProDy package:

In [1]: from prody import *
In [2]: from matplotlib.pylab import *
In [3]: ion()

We start with parsing the Cα atoms of the RT structure 1DLO and performing ANM calculations for them:

In [4]: rt = parsePDB('1dlo', subset="ca")
In [5]: anm, sel = calcANM(rt)
In [6]: anm
Out[6]: <ANM: 1dlo_ca (20 modes; 971 nodes)>
In [7]: saveModel(anm, 'rt_anm')
Out[7]: 'rt_anm.anm.npz'
In [8]: anm[:5].getEigvals().round(3)
Out[8]: array([0.039, 0.063, 0.126, 0.181, 0.221])
In [9]: (anm[0].getArray() ** 2).sum() ** 0.5
Out[9]: 1.0

We can plot the cross-correlations and square fluctuations for the full model as follows:

In [10]: showCrossCorr(anm);

Square fluctuations:

In [11]: showSqFlucts(anm[0]);

Slicing a model

Slicing a model is analogous to slicing a list, i.e.:

In [12]: numbers = list(range(10))
In [13]: numbers
Out[13]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
In [14]: slice_first_half = numbers[:5]
In [15]: slice_first_half
Out[15]: [0, 1, 2, 3, 4]

In this case, we want to slice normal modes, so that we will handle mode data corresponding to subunit p66, which is chain A in the structure. We use the sliceModel() function:

In [16]: anm_slc_p66, sel_p66 = sliceModel(anm, rt, 'chain A')
In [17]: anm_slc_p66
Out[17]: <ANM: 1dlo_ca sliced (20 modes; 556 nodes)>

You see that now the sliced model contains 556 nodes out of the 971 nodes in the original model.
In [18]: saveModel(anm_slc_p66, 'rt_anm_sliced')
Out[18]: 'rt_anm_sliced.anm.npz'
In [19]: anm_slc_p66[:5].getEigvals().round(3)
Out[19]: array([0.039, 0.063, 0.126, 0.181, 0.221])
In [20]: '%.3f' % (anm_slc_p66[0].getArray() ** 2).sum() ** 0.5
Out[20]: '0.895'

Note that slicing does not change anything in the model apart from taking parts of the modes matching the selection. The sliced model contains fewer nodes, has the same eigenvalues, and modes in the model are not normalized.

We plot the cross-correlations and square fluctuations for the sliced model in the same way. Note that the plots contain the selected part of the model without any change:

In [21]: showCrossCorr(anm_slc_p66);
In [22]: title('Cross-correlations for ANM slice');

Square fluctuations:

In [23]: showSqFlucts(anm_slc_p66[0]);

Reducing a model

We reduce the ANM model to subunit p66 using the reduceModel() function. This function implements the method described in the 2000 paper of Hinsen et al. [KH00]

In [24]: anm_red_p66, sel_p66 = reduceModel(anm, rt, 'chain A')
In [25]: anm_red_p66.calcModes()
In [26]: anm_red_p66
Out[26]: <ANM: 1dlo_ca reduced (20 modes; 556 nodes)>
In [27]: saveModel(anm_red_p66, 'rt_anm_reduced')
Out[27]: 'rt_anm_reduced.anm.npz'
In [28]: anm_red_p66[:5].getEigvals().round(3)
Out[28]: array([0.05 , 0.098, 0.214, 0.289, 0.423])
In [29]: '%.3f' % (anm_red_p66[0].getArray() ** 2).sum() ** 0.5
Out[29]: '1.000'

We plot the cross-correlations and square fluctuations for the reduced model in the same way. Note that in this case the plots are not identical to the full model:

In [30]: showCrossCorr(anm_red_p66);

Square fluctuations:

In [31]: showSqFlucts(anm_red_p66[0]);

Compare reduced and sliced models

We can compare the sliced and reduced models by plotting the overlap table between modes:

In [32]: showOverlapTable(anm_slc_p66, anm_red_p66);

The sliced and reduced models are not the same.
While the purpose of slicing is simply to enable easy plotting and analysis of properties of a part of the system, reducing has other uses, as in [WZ05]. [WZ05] Zheng W, Brooks BR. Probing the Local Dynamics of Nucleotide-Binding Pocket Coupled to the Global Dynamics: Myosin versus Kinesin. Biophysical Journal 2005 89:167–178.
PythagoreanTheoremWarmup | Bridging Practices Among Connecticut Mathematics Educators This warm-up is designed for geometry students learning the Pythagorean theorem. The first two problems contain a triangle with a missing side length that students need to solve for. The third question asks students to use argumentation language to construct a justification for one of their answers. An answer key is provided, with the key pieces of information that should be included in the justification. Microsoft Word version: 912Geometry_SRT_PythagoreanTheorem_Warmup_Construct_PythagoreanTheoremWarmup PDF version: 912Geometry_SRT_PythagoreanTheorem_Warmup_Construct_PythagoreanTheoremWarmup
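Both problem types reduce to one application of a² + b² = c²; a quick sketch in Python (the side lengths below are invented examples, not taken from the worksheet):

```python
import math

def hypotenuse(a, b):
    # Missing hypotenuse: c = sqrt(a^2 + b^2)
    return math.sqrt(a**2 + b**2)

def missing_leg(c, a):
    # Missing leg: b = sqrt(c^2 - a^2)
    return math.sqrt(c**2 - a**2)

print(hypotenuse(3, 4))    # 5.0
print(missing_leg(13, 5))  # 12.0
```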
devRant - Riddle: Alice and Bob want to communicate a secret message, let's say it is an integer. We will call this msg0. You are Chuck, an interloper trying to spy on them and decode the message. For keys, Alice chooses a random integer w, another for x, and another for y. She also calculates a fourth variable, x+y = z. Bob follows the same procedure. Suppose the numbers are too large to bruteforce. Their exchange looks like this. At step 1, Alice calculates the following: msg1 = alice.z+alice.w+msg0. She sends this message over the internet to Bob. The value of msg1 is 20838. Then, for the second step of the process, Bob calculates msg2 = bob.z+bob.w+msg1. msg2 equals 32521. He then sends msg2 to Alice, and again, you intercept and observe. At step three, Alice receives Bob's message and calculates the following: msg3 = msg2-(alice.x+alice.w+msg0). msg3 equals 19249. Alice sends this to Bob. Bob calculates msg4 = msg3-(bob.x+bob.w). msg4 equals 11000. He sends msg4 to Alice. At this stage, Alice calculates msg5: msg5 = msg4-alice.y+msg0. Alice sends this to Bob. Bob receives this final message and calculates the sixth and final message, which is the original hidden msg0 Alice wanted to send: msg6 = msg5-bob.y. What is the secret message? I'll give anyone who solves it without bruteforcing a free cookie. • so how would you know when you receive the right answer if it was bruteforced or not ... you r posting here after all • What is msg5? bob.y is 3434. • @notroot msg5 = 11921 You got bob.y correct btw. • Chuck doesn't spy. He tampers with the communication and makes the senders believe that was the message they wanted to send. • Can somebody on the site (where you can c/p) ask this to chatGPT?
• Can you solve this without knowing msg5 up front? I have been attempting all morning lol • @notroot it’s easy if you know msg5 beforehand • @chonky-quiche msg5 is necessary • @notroot yeah I think your right - without msg5, I have been able to solve for Ay, By, and all the messages except for msg5 and msg0/6. I tried for the longest time to solve for even the w and x variables too but no luck. I wonder if you could somehow solve through graphing potential solution values. • Yeah I just checked, you would have to know other parameters to solve this • @notroot posted in the comments. • @notroot show yer work mate. :P
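The exchange itself is easy to sanity-check in code. A minimal sketch (key ranges are arbitrary; the variable names just mirror the riddle's description) shows that the masking terms telescope away, so Bob always recovers msg0 no matter which keys are chosen:

```python
import random

def exchange(msg0):
    # Each party picks random w, x, y and sets z = x + y, as described.
    aw, ax, ay = (random.randrange(1, 10**9) for _ in range(3))
    bw, bx, by = (random.randrange(1, 10**9) for _ in range(3))
    az, bz = ax + ay, bx + by

    msg1 = az + aw + msg0            # Alice -> Bob
    msg2 = bz + bw + msg1            # Bob -> Alice
    msg3 = msg2 - (ax + aw + msg0)   # Alice -> Bob: leaves bz + bw + ay
    msg4 = msg3 - (bx + bw)          # Bob -> Alice: leaves by + ay
    msg5 = msg4 - ay + msg0          # Alice -> Bob: leaves by + msg0
    msg6 = msg5 - by                 # Bob recovers msg0
    return msg6

print(exchange(4242))  # always prints 4242, regardless of the random keys
```

The telescoping also explains the comments above: since msg1 = ax+ay+aw+msg0 and msg2-msg3 = ax+aw+msg0, an eavesdropper can compute alice.y = msg1 - msg2 + msg3 and then bob.y = msg4 - alice.y, which matches the commenters' finding that Ay and By are solvable but msg0 still requires seeing msg5.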
Bayes Rule Calculator The Bayes' Rule Calculator handles problems that can be solved using Bayes' rule (duh!). It computes the probability of one event, based on known probabilities of other events. And it generates an easy-to-understand report that describes the analysis step-by-step. For help in using the calculator, read the Frequently-Asked Questions or review the Sample Problem. To understand the analysis, read the Summary Report that is produced with each computation. To learn more about Bayes' rule, read Stat Trek's tutorial on Bayes theorem. The Bayes Rule Calculator uses Bayes Rule (aka, Bayes theorem, the multiplication rule of probability) to compute the probability of one event, based on known probabilities of other events. What is Bayes Rule? Let A be one event; and let B be any other event from the same sample space, such that P(B) > 0. Then, Bayes rule can be expressed as: P(A|B) = P(A) P(B|A) / P(B) • P(A) is the probability of Event A. • P(B) is the probability of Event B. • P(A|B) is the conditional probability of Event A, given Event B. • P(B|A) is the conditional probability of Event B, given Event A. How to Use Bayes Rule Bayes rule is a simple equation with just four terms. Any time that three of the four terms are known, Bayes Rule can be applied to solve for the fourth term. We've seen in the previous section how Bayes Rule can be used to solve for P(A|B). By rearranging terms, we can derive equations to solve for each of the other three terms, as shown below: P(B|A) = P(B) P(A|B) / P(A) P(A) = P(B) P(A|B) / P(B|A) P(B) = P(A) P(B|A) / P(A|B) Frequently-Asked Questions Instructions: To find the answer to a frequently-asked question, simply click on the question. What is Bayes Rule? Bayes Rule is an equation that expresses the conditional relationships between two events in the same sample space. Bayes Rule can be expressed as: P( A | B ) = P( A ) P( B | A ) / P( B ) • P( A ) is the probability of Event A. • P( B ) is the probability of Event B.
• P( A | B ) is the conditional probability of Event A, given Event B. • P( B | A ) is the conditional probability of Event B, given Event A. When can I use Bayes Rule? Bayes Rule is a simple equation with just four terms: • P(A) is the probability of Event A. • P(B) is the probability of Event B. • P(A|B) is the conditional probability of Event A, given Event B. • P(B|A) is the conditional probability of Event B, given Event A. Any time that three of the four terms are known, Bayes Rule can be used to solve for the fourth term. See the Sample Problem for an example that illustrates how to use Bayes Rule. What if Bayes Rule generates a probability greater than 1.0? If Event A occurs 100% of the time, the probability of its occurrence is 1.0; that is, P(A) = 1.0. In the real world, an event cannot occur more than 100% of the time; so a real-world event cannot have a probability greater than 1.0. Bayes Rule is just an equation. It is possible to plug into Bayes Rule probabilities that cannot occur together in the real world. When that happens, it is possible for Bayes Rule to generate a probability that could not occur in the real world; that is, a probability greater than 1.0. Here's how that can happen: P(A) = P(A|B)*P(B) + P(A|B')*P(B') • P(A) is the probability that Event A occurs. • P(B) is the probability that Event B occurs. • P(B') is the probability that Event B does not occur. • P(A|B) is the probability that A occurs, given that B occurs. • P(A|B') is the probability that A occurs, given that B does not occur. From this equation, we see that P(A) should never be less than P(A|B)*P(B). If we plug numbers into Bayes Rule that violate this maxim, we get strange results. For example, suppose you plug the following numbers into Bayes Rule: • P(A) = 0.1 • P(B) = 0.5 • P(A|B) = 0.6. Given these inputs, Bayes Rule will compute a value of 3.0 for P(B|A), clearly an impossible result in the real world. 
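That inconsistency check is a one-liner; with the inputs above, solving P(B|A) = P(A|B) * P(B) / P(A) gives:

```python
# The FAQ's inconsistent inputs: P(A)=0.1, P(B)=0.5, P(A|B)=0.6
p_b_given_a = 0.6 * 0.5 / 0.1
print(round(p_b_given_a, 6))  # 3.0, an impossible probability, so the inputs
                              # cannot describe real co-occurring events
```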
If Bayes Rule produces a probability greater than 1.0, that is a warning sign. It means your probability inputs do not reflect real-world events. What is E Notation? The Bayes Rule Calculator uses E notation to express very small numbers. E notation is a way to write numbers that are too large or too small to be concisely written in a decimal format. With E notation, the letter E represents "times ten raised to the power of". Here is an example of a very small number written using E notation: 3.02E-12 = 3.02 * 10^-12 = 0.00000000000302 If a probability can be expressed as an ordinary decimal with fewer than 14 digits, the Bayes Rule Calculator will do so. But if a probability is very small (nearly zero) and requires a longer string of digits, the calculator will use E notation to display its value. Sample Problem Problem 1. Marie is getting married tomorrow, at an outdoor ceremony in the desert. In recent years, it has rained only 5 days each year. Unfortunately, the weatherman has predicted rain for tomorrow. When it actually rains, the weatherman correctly forecasts rain 90% of the time. When it doesn't rain, he incorrectly forecasts rain 8% of the time. What is the probability that it will rain on the day of Marie's wedding? We begin by defining the events of interest. □ Event A. It rains on Marie's wedding. □ Event B. The weatherman predicts rain. In terms of probabilities, we know the following: □ P(A) = 5/365 = 0.0137 [It rains 5 days out of the year.] □ P(A') = 360/365 = 0.9863 [It does not rain 360 days out of the year.] □ P(B|A) = 0.9 [The weatherman predicts rain 90% of the time, when it rains.] □ P(B|A') = 0.08 [The weatherman predicts rain 8% of the time, when it does not rain.] We want to know P(A|B), the probability that it will rain, given that the weatherman has predicted rain. We could use Bayes Rule to compute P(A|B) if we knew P(A), P(B), and P(B|A). Two of those probabilities - P(A) and P(B|A) - are given explicitly in the problem statement.
The third probability that we need is P(B), the probability that the weatherman predicts rain. Although that probability is not given to us explicitly, we can calculate it. P(B) = P(B|A) * P(A) + P(B|A') * P(A') P(B) = 0.9 * 0.0137 + 0.08 * 0.9863 P(B) = 0.091 Now, we know P(A), P(B), and P(B|A) - all of the probabilities required to compute P(A|B) using Bayes Rule. We plug those probabilities into the Bayes Rule Calculator, and the calculator reports that the probability that it will rain on Marie's wedding is 0.1355. Alternatively, we could have used Bayes' Rule to compute P(A|B) manually. Here's how: P( A | B ) = P( A ) P( B | A ) / P( B ) P( A | B ) = (0.0137 * 0.9) / 0.091 P( A | B ) = 0.1355 Note the somewhat unintuitive result. Even when the weatherman predicts rain, it rains only about 14 percent of the time. Despite the weatherman's gloomy prediction, there is a good chance that Marie will not get rained on at her wedding.
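The sample problem is easy to reproduce in a few lines of Python (using the exact 5/365 rather than the rounded 0.0137, which is why the last digit differs slightly from the calculator's 0.1355):

```python
def bayes(p_a, p_b, p_b_given_a):
    """P(A|B) = P(A) * P(B|A) / P(B)."""
    return p_a * p_b_given_a / p_b

p_rain = 5 / 365        # P(A): it rains 5 days a year
p_pred_rain = 0.9       # P(B|A): forecast says rain when it actually rains
p_pred_dry = 0.08       # P(B|A'): forecast says rain when it doesn't rain
# Law of total probability: P(B) = P(B|A)P(A) + P(B|A')P(A')
p_pred = p_pred_rain * p_rain + p_pred_dry * (1 - p_rain)

print(round(bayes(p_rain, p_pred, p_pred_rain), 4))  # 0.1351
```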
[21NOIP improvement group] number reporting problem solution [Title Description] The counting game is a popular pastime. Everyone participating takes turns reporting numbers in a fixed order, but if the next number to report is a multiple of 7, or its decimal representation contains the digit 7, you must skip it; otherwise you lose the game. On a sunny afternoon, Xiao R and Xiao Z, who had just finished the SP C20nn contest, were bored and played the counting game. But with only two players the game is easy to calculate, so they could not settle the outcome for a long time. Then Xiao Z had a flash of inspiration and decided to strengthen the game: no multiple of any number whose decimal representation contains the digit 7 may be reported! Formally, let p(x) indicate whether the decimal representation of x contains the digit 7: if so, p(x) = 1, otherwise p(x) = 0. Then a positive integer x cannot be reported if and only if there exist positive integers y and z such that x = yz and p(y) = 1. For example, if Xiao R reports 6, then because 7 cannot be reported, Xiao Z needs to report 8 next. If Xiao R reports 33, then since 34 = 17 × 2 and 35 = 7 × 5 cannot be reported, Xiao Z needs to report 36 next. If Xiao R reports 69, then because every number in 70 ∼ 79 contains a 7, Xiao Z needs to report 80 next. Now Xiao R has just reported the number x, and Xiao Z wants to quickly work out what he should report next; but he soon finds that this game is much harder than the original, so he needs your help. Of course, if the x reported by Xiao R cannot itself be reported, you should also spot that quickly; in that case Xiao R loses. Since Xiao R and Xiao Z play the game for a long time, you need to answer many of Xiao Z's queries. Line 1 contains a positive integer T, the number of queries.
Next, T lines follow, each containing a positive integer x, the number Xiao R reported this time. Output T lines, one integer per line: if the number Xiao R reported cannot itself be reported, output −1; otherwise output the number Xiao Z should report next. [input example] [output example] This should be the only one of these problems that the popularization group could also solve. The idea is to write a Sv function, use it to initialize a lookup table, and then answer each query by reading the table. The table is filled much like a prime sieve, except the test is whether a number contains the digit 7:

    bool Sv(int k){                  // does k contain the digit 7?
        while(k){
            if(k % 10 == 7) return true;
            k /= 10;
        }
        return false;
    }

    void Init(){
        for(int i = 1; i <= MXX; i++){
            if(!Sv(i)) continue;
            for(int j = 1; i * j <= MXX; j++)
                table[i * j] = true; // every multiple of a 7-containing number is banned
        }
    }

The sieve has the same form as a prime sieve, but the outer loop cannot stop at the square root of MXX: this is not plain trial division, and stopping early would mishandle cases like 100007 (suppose, casually, that it is prime). table holds the result and accelerates the queries:

    #include <cstdio>
    using namespace std;
    const int MXX = 1e7 + 11;        // put each number in a bucket
    bool table[MXX + 1];

    bool Sv(int k){
        while(k){
            if(k % 10 == 7) return true;
            k /= 10;
        }
        return false;
    }

    void Init(){
        for(int i = 1; i <= MXX; i++){
            if(!Sv(i)) continue;
            for(int j = 1; i * j <= MXX; j++)
                table[i * j] = true;
        }
    }

    int main(){
        int t, x;
        scanf("%d", &t);
        Init();
        while(t--){
            scanf("%d", &x);
            if(table[x]){ printf("-1\n"); continue; }
            do { x++; } while(table[x]); // this search is a waste of time
            printf("%d\n", x);
        }
        return 0;
    }

Such code passes 7 of the 10 test points; the other 3 time out. As noted in the comment, the per-query forward search is what costs the time. The statement actually gives a very clear hint. Think about it: 7, 14, 17, 21, 27, ..., 70-79, 87, ... When the numbers have more digits, the runs of consecutive 7-related numbers get very long, so scanning forward in a loop on every query cannot work. We already have the flag table; we only need to change it a little: instead of recording whether the current number is banned, each entry points to the next number that is not 7-related. Then we add one extra preprocessing pass, and every future query is a direct table read:

    #include <cstdio>
    using namespace std;
    const int MXX = 1e7 + 11;
    int table[MXX + 1];              // -1 if the number cannot be reported,
                                     // otherwise the next reportable number

    bool Sv(int k){
        while(k){
            if(k % 10 == 7) return true;
            k /= 10;
        }
        return false;
    }

    void Init(){
        for(int i = 1; i <= MXX; i++){
            if(!Sv(i)) continue;
            for(int j = 1; i * j <= MXX; j++)
                table[i * j] = -1;
        }
        // What is the next number that is not 7-related?
        int last = 1;
        for(int i = 2; i <= MXX; i++){
            if(table[i] != -1){ table[last] = i; last = i; }
        }
        table[last] = 100000010;     // past the end of the table (see the note below)
    }

    int main(){
        int t, x;
        scanf("%d", &t);
        Init();
        while(t--){
            scanf("%d", &x);
            printf("%d\n", table[x]);
        }
        return 0;
    }

The first non-7-related number after 10000000 is 100000010.
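As a sanity check on the rule itself (independent of the sieve above), a small brute force reproduces the statement's worked examples:

```python
def contains7(n):
    return '7' in str(n)

def reportable(x):
    # x cannot be reported iff some divisor y > 1 of x contains the digit 7
    return not any(contains7(y) for y in range(2, x + 1) if x % y == 0)

def next_report(x):
    # smallest reportable number strictly greater than x
    n = x + 1
    while not reportable(n):
        n += 1
    return n

print(next_report(6), next_report(33), next_report(69))  # 8 36 80
```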
abstracts data-short-2024 - Official website of the Centre de recherches mathématiques (CRM) Hugo Chu (Imperial College London) Rigorous calculation of Lyapunov exponents of stochastic flows I will present a very simple method to rigorously enclose ergodic averages of stochastic flows under mild assumptions. This method is applied to prove the positivity of Lyapunov exponents (and hence chaos) for some systems that were previously out of reach. This is joint work with Maxime Breden, Jeroen Lamb and Martin Rasmussen. Brittany Gelb (Rutgers University) Rigorous Machine Learning of Homological Dynamics We present a machine-learning framework for rigorously characterizing the nonlinear dynamics generated by a continuous map. The key elements are polyhedral cell complexes and piecewise linear approximations, both determined by neural networks. We describe the gradient-like dynamics by a Morse graph and the recurrent dynamics by Conley indices (homological invariants). We show theoretically that such computations can be carried out to recover dynamics at any level of resolution within this framework, motivating the problem of an efficient implementation. Joan Gimeno (Universitat de Barcelona) Computation of normal forms in discrete systems I will describe a semi-analytical method for computing normal forms in discrete dynamical systems, such as Poincaré maps. This approach involves calculating high-order derivatives, applying coordinate transformations to simplify the local dynamics, and retaining resonance terms essential to the system's behavior. The method is general, requiring only regularity assumptions, and is robust under parameter variations. As an application, I will demonstrate how normal form computations can be used to construct high-dimensional twist maps and introduce a frequency recovery technique for visualizing high-dimensional tori within these maps.
This has been a joint work with À. Jorba, M. Jorba-Cuscó, and M. Zou. Jun Okamoto (Kyoto University) Spatiotemporal reconstruction of gene expression on the human axioloid using optimal transport theory In this study, we aim to identify genes for which spatiotemporal structure in expression patterns is essential and to elucidate their functions, using scRNA-seq data and spatial data from cell tissues. Here, scRNA-seq is a technology capable of extracting expression information of all genes within a single cell. However, scRNA-seq requires the dissociation of multicellular tissues into single cells and the subsequent measurement after cell death, resulting in the loss of spatial and temporal information. To address this issue, we developed a method for spatiotemporal reconstruction of gene expression based on optimal transport theory. Furthermore, we created a clustering method for spatiotemporal patterns to identify key genes from the reconstructed expression patterns. In this presentation, we report the results of applying this method to the time-series data of a 3D cell culture model that recapitulates human somitogenesis using iPS cells, generated by the Alev group at Kyoto University’s ASHBi. We will discuss the spatiotemporal reconstruction of expression patterns for key genes and the exploration of similar genes. Jose Perea (Northeastern University) Topological detection and parametrization of toroidal dynamics The use of time-delay embeddings alongside persistent (co)homology, has proven to be highly effective in quantifying the topology of attractors given observed time series data. In this talk, I will describe some of the challenges, recent theoretical results and applications of delay embeddings and persistence, when trying to detect and parametrize toroidal attractors. 
Justyna Signerska-Rynkowska (Gdansk University of Technology & Dioscuri Centre in TDA) Testing topological conjugacy of time-series We consider the problem of testing topological conjugacy of two trajectories coming from dynamical systems (X,f) and (Y,g), and deliver a number of tests to check whether the corresponding trajectories of f and g are topologically conjugate. The values of the tests are close to zero for conjugate systems and large for systems that are not. For our main method, ConjTest, convergence of the test values as the sample size goes to infinity is established. Various numerical examples indicate scalability of the method.
Syntax: MIANALYZE Procedure The following statements are available in PROC MIANALYZE: PROC MIANALYZE <options> ; The BY statement specifies groups in which separate analyses are performed. The CLASS statement lists the classification variables in the MODELEFFECTS statement. Classification variables can be either character or numeric. The required MODELEFFECTS statement lists the effects to be analyzed. The variables in the statement that are not specified in a CLASS statement are assumed to be continuous. The STDERR statement lists the standard errors associated with the effects in the MODELEFFECTS statement when both parameter estimates and standard errors are saved as variables in the same DATA= data set. The STDERR statement can be used only when each effect in the MODELEFFECTS statement is a continuous variable by itself. The TEST statement tests linear hypotheses about the parameters. The PROC MIANALYZE and MODELEFFECTS statements are required for the MIANALYZE procedure. The rest of this section provides detailed syntax information for each of these statements, beginning with the PROC MIANALYZE statement. The remaining statements are in alphabetical order.
The Problem Entire solution i12=: freads '12.txt' NB. parse input: return start;end;field, (start and end being linear indices) par=: [: ((0 27 <@i.&1@:="0 _ ,) (,<) (1>.26<.])) ('S',(~.tolower Alpha_j_),'E')&i. ;._2 NB. Part 1: find length of shortest path S-E shifth =: , ,~ ((0,+.0j1^i.4) ,@(|.!._1) i.@$) NB. shift in 1+4 directions, append height neigh =: [: ; <@({.,. _1 -.~ }.@}:)"1@|: NB. boxed 5-connected neighbor pairs one =: ({: (] #~ 1>:(-/"1)@:{~) neigh)@shifth NB. ones (in adjacency mat) bfs =: {{ NB. bfs goes backwards, from e to closest index in s; x: s;end ;y: height matrix 's e' =. x NB. start end 'p c' =. (|:one y) NB. parents, children gn =. (c #~ p&e.) NB. verb: get reachable neighbors for any index in y it=. 0 NB. at each it, also path length (1 it= 1 step) rr=. ,e NB. reachable initially only E while. -.+./ s e.~ rr do. NB. none of start not reachable it=. >: it [ rr=. gn rr part1=: (}: bfs >@{:)@par NB. bfs@par NB. Part 2: find location at lowest level closest to E addas =: 1&{ ,~ {. (,I.@(=<./)@,)&.> {: NB. add all a's to start positions part2=:(addas bfs >@{:)@par Let's work with the test data, which gives elevation values as letters (to paste and execute things from the clipboard in jQT, use F8 after copying):
As usual, I started experimenting in the terminal, gradually building up par as written above, but for explaining, I will take it apart as follows: par=: [: (inds (,<) elev) tonum For each line, I look up the letters in the list 'SabcdefghijklmnopqrstuvwxyzE' (which, seeing it now would have been only marginally longer than my wording below): tonum=: ('S',(~.tolower Alpha_j_),'E')&i. ;._2 NB. Alpha_j_ is A-Z,a-z Finding the indices of S and E is simply finding where the result of tonum is 0 and 27: inds =: 0 27 <@i.&1@:="0 _ , I decided to use linear indices (i.e. indices in the raveled version of the input), since the coordinates themselves are not really needed later on. This will speed up matters considerably. Not that it really matters here, but I used i.&1@:= to trigger a special combination, in this case a fast list operation (FLO). This stops comparing items when a match is found. Lastly I box the indices individually (due to the rank "0 _) because it will come in handy later (could be my initial solution did not box them). Last step: the problem poses that S and E are respectively at elevation a and z. In the result of tonum, this is not yet the case, S is 0, while a is 1, and E is 27 where z is 26. Elev takes care of this: It clamps all values between 1 and 26, forcing S and E to be at 1 and 26. Subtracting 1 to make a correspond to elevation 0 is not needed, since only relative heights matter. So now, using par on tst gets us: par tst │0│21│1 1 2 17 16 15 14 13│ │ │ │1 2 3 18 25 24 24 12│ │ │ │1 3 3 19 26 26 24 11│ │ │ │1 3 3 20 21 22 23 10│ │ │ │1 2 4 5 6 7 8 9│ (}: ({,)&> {:)par tst NB. S and E indices looked up in raveled elevation field. Part 1 My first thought was: "I know this, there's an essay on this stuff", so I looked up the essay on the Floyd-Warshall algorithm. It iteratively calculates a distance matrix between every pair of points. 
Though when trying to do so, it was quickly clear that the solutions of that essay were too general and thus too slow: I don't need the distances between every pair of points! What I was after, and failed to remember properly, was Dijkstra's algorithm, which finds the distance between a source point and a destination point. The variant I initially wrote worked from S to E, but I rewrote it the other way around such that part 2 would be easier to implement. (The only changes required were to adapt the "one" verb to define the elevation-based filter of the steps in the other direction, i.e. allowing a descent of at most 1 rather than an ascent of at most 1, and to swap start and end in "bfs".) For this we need to know the neighbours of each point in four directions. This is actually the first time I realised that for this kind of problem, I do not need to work with the coordinates explicitly, and can just work with linear indices. I think this speeds up calculations a lot. For knowing the possible moves, first we have to find the 4 neighbors that are adjacent: shifth =: , ,~ ((0,+.0j1^i.4) ,@(|.!._1) i.@$) NB. shift in 1+4 directions, append height This verb takes the elevation array, and creates an index array the same shape as the elevation array. The directions to shift in are on the left, 0 0 and all one-step directions, conveniently derived with powers of the complex number 0j1 (or 0+1i). ,@(|.!._1) rotates the index array by the directions on the left, padding with _1, and linearises those, resulting in a 5 x #points array. Lastly I append the elevations for each element as a last row, because I'll need it when selecting which of the neighbours on the 2D map are actually reachable destinations.
As the shifth verb was getting almost as wide as my phone screen, and I need to use the elevations in determining the actually reachable neighbours, I chose to do further filtering in two verbs: "neigh" eliminates padding as neighbours, and "one" selects reachable neighbours (which happen to be ones in the adjacency matrix, hence the name). In shifth's output, each column corresponds to the possible neighbours of a single point. Now, I'd like to get a list of each pair of parent (i.e. every index) and child (i.e. neighbour) nodes. I don't need them boxed, so I can run them all together after creation. (Note that opening directly after boxing at rank 1 again makes use of a special combination.) NB. raze par ,. rem pad child for each point (discarding elevations) neigh =: [: ; <@({. ,. _1 -.~ }.@}:)"1@|: Now that we have possible candidates, selecting only those who are at most one unit lower (remember we're going backwards) than the parent: one =: ({: (] #~ 1>:(-/"1)@:{~) neigh)@shifth NB. ones (in adjacency mat) This verb one puts it all together: it takes the elevation field on the right, and uses shifth to get rows of own index, candidates (4x) and elevation. The fork on the left gives the middle verb the elevations on the left, and the list of neighbours on the right. Based on those, the middle tine keeps the neighbours where the elevation of the parent minus the elevation of the child is at most one, i.e. the step down is max 1 (as we're going backwards). The eventual result is a list of allowable moves to go from E to S. Due to all this preparation I managed to perform all the calculation part in a non-loopy, array-based wording, leaving the actual traversal extremely simple. I think it needs only a little more explanation than the comments in the code. It takes as x the start/end indices and as y the elevation field: bfs =: {{ NB. bfs goes backwards, from e to closest index in s; x: s;end ;y: height matrix 's e' =. x NB. start and end indices 'p c' =.
|: one y NB. parents, children, as returned by one gn =. (c #~ p&e.) NB. verb: get reachable neighbors for any index in y it=. 0 NB. Step count: 1 iteration = 1 step rr=. ,e NB. initially only E is reachable while. -.+./ s e.~ rr do. NB. none of start not reachable it=. >: it [ rr=. gn rr NB. take one step, adding nodes and increment step count The only thing to note is a nice J trick: the return value of a verb is the last evaluated result that's not in a T-block (even if that's an assignment). As we'd like to return iterations, that is the assignment that should happen last. Using [ is another trick to put several noun assignments on a line (but does not work for verbs). I noticed that due to using gn, there will be duplicated nodes in the set of reachable nodes rr (for part 2, there's roughly a 4x duplication). Nevertheless, using ~. after getting the neighbours turns out considerably slower. I also tried removing the loop, writing the last five lines of bfs as follows: {. (>:@{. , gn@}.)^:(-.@(+./)@(e.&s)@}.)^:_] 0,e It's a fraction faster, and while I'm an enormous fan of tacit programming (might be obvious from my code), often clarity is more important than brevity, especially if the code is in fact loopy in nature. You choose. Eventually, the solution is: part1=: (}: bfs >@{:)@par Part 2 Part 2 modifies the start point to be any point at elevation a. This is the exact reason I preferred to run bfs backwards from E to S, as it checks whether the destination (in this case S, or any point at elevation a) is within the reachable points. For adding all a's to the start set of the x argument of bfs, I wrote addas for replacing }: in part 1. It takes the parsed input, based on the field, finds the indices of the minimum values (i.e. elevation a), adds them to the start values, and appends the end field again. addas =: 1&{ ,~ {. (,I.@(=<./)@,)&.> {: part2 =: (addas bfs >@{:)@par
How to log a matplotlib 3D projection to TensorBoard

I have a custom dataset and I would like to log the plots to TensorBoard. The dataset is the following:

import numpy as np
from scipy.stats import multivariate_normal

num_dim = 2
num_discrete_values = 8
coords = np.linspace(-2, 2, num_discrete_values)
rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1, 0], [0, 1]], seed=42)
grid_elements = np.transpose([np.tile(coords, len(coords)), np.repeat(coords, len(coords))])
prob_data = rv.pdf(grid_elements)
prob_data = prob_data / np.sum(prob_data)

And I can visualise this dataset using the following:

import matplotlib.pyplot as plt
from matplotlib import cm

mesh_x, mesh_y = np.meshgrid(coords, coords)
grid_shape = (num_discrete_values, num_discrete_values)
fig, ax = plt.subplots(figsize=(9, 9), subplot_kw={"projection": "3d"})
prob_grid = np.reshape(prob_data, grid_shape)
surf = ax.plot_surface(mesh_x, mesh_y, prob_grid, cmap=cm.coolwarm, linewidth=0, antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)

For my use case I would like to use the TensorBoard logger/summary writer to store this image. However, TensorBoard cannot log 3D projections as this is currently constructed, and an error occurs due to this. Does anyone have any idea on how to transform this into something compatible with the TensorBoard logger?

To log a matplotlib 3D projection to TensorBoard, you'll first convert the 3D plot into an image by rendering it to an in-memory buffer using io.BytesIO(). Then, read this buffer into a NumPy array and log it to TensorBoard using tf.summary.image(). This approach works because TensorBoard can log images, but not direct 3D projections from matplotlib.
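A minimal sketch of that buffer round-trip, using only matplotlib and NumPy; the TensorBoard call at the end is shown as a comment and assumes the TF2 tf.summary API:

```python
import io

import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

def figure_to_rgba(fig):
    """Render a matplotlib figure (2D or 3D) to an HxWx4 uint8 RGBA array."""
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    # plt.imread returns floats in [0, 1] for PNG data
    return (plt.imread(buf) * 255).astype(np.uint8)

# img = figure_to_rgba(fig)[None, ...]   # tf.summary.image expects a batch dim
# with tf.summary.create_file_writer("logs").as_default():
#     tf.summary.image("prob_surface", img, step=0)
```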
[GAP Forum] decomposition formulae for C-representations
Dmitrii Pasechnik, dima at thi.informatik.uni-frankfurt.de
Tue Apr 6 09:56:03 BST 2004

Dear Forum,

are the formulae that give the decomposition of a C-representation p of a finite group G into direct sums of irreducibles implemented in GAP (perhaps somewhere within the character theory machinery, I guess)? I mean the standard ones given in e.g. Serre's "Linear Representations of Finite Groups", Sect. 2.6 and 2.7. For instance, to obtain the subrepresentation of p (a direct sum of irreducible representations with the same character chi) of dimension n_chi, corresponding to the irreducible character chi, one uses the projection

n_chi/|G| * sum_{g in G} chi^*(g) p(g)

(Thm. 8 in Sect. 2.6 of Serre's book). Certainly, it is only feasible for groups of relatively small order to use these formulae directly, but in our case the groups are of order < 10^4.
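As a sanity check of that projection formula outside GAP, here is a NumPy sketch with the cyclic group C3 (all irreducible characters have degree 1, so n_chi = 1); the group and representation are chosen purely for illustration:

```python
import numpy as np

# Representation of C3 on C^3 that cyclically permutes coordinates:
# rho[j] is the matrix of the group element g^j.
rho = [np.roll(np.eye(3), j, axis=0) for j in range(3)]
omega = np.exp(2j * np.pi / 3)

def projection(k):
    """Serre's projection onto the chi_k-isotypic component:
    P_k = (n_chi/|G|) * sum_g conj(chi_k(g)) * rho(g), with chi_k(g^j) = omega^(k*j)."""
    return sum(np.conj(omega ** (k * j)) * rho[j] for j in range(3)) / 3

for k in range(3):
    P = projection(k)
    assert np.allclose(P @ P, P)             # idempotent, hence a projection
    assert abs(np.trace(P).real - 1) < 1e-9  # each isotypic component has dim 1
```

The three projections sum to the identity, recovering the full decomposition of C^3 into the three one-dimensional eigenspaces of the cyclic shift.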
Addition chains API¶

In this version of Oraqle, the API is still prone to change. Paths and names may change between versions. The add_chains module contains tools for generating addition chains.

add_chains ¶
Tools for generating addition chains using different constraints and objectives.

addition_chains ¶
Tools for generating short addition chains using a MaxSAT formulation.

thurber_bounds(target, max_size) ¶
Returns the Thurber bounds for a given target and a maximum size of the addition chain.

add_chain(target, max_depth, strict_cost_max, squaring_cost, solver, encoding, thurber, min_size, precomputed_values) ¶
Generates a minimum-cost addition chain for a given target, abiding by the constraints.

• target (int) –
• max_depth (Optional[int]) – The maximum depth of the addition chain
• strict_cost_max (float) – A strict upper bound on the cost of the addition chain, i.e. cost(chain) < strict_cost_max.
• squaring_cost (float) – The cost of doubling (squaring), compared to other additions (multiplications), which cost 1.0.
• solver (str) –
• encoding (int) –
• thurber (bool) – Whether to use the Thurber bounds, which provide lower bounds for the elements in the chain. The bounds are ignored when precomputed_values = True.
• min_size (int) – The minimum size of the chain. It is always possible to use math.ceil(math.log2(target)).
• precomputed_values (Optional[Tuple[Tuple[int, int], ...]]) – If there are any precomputed values that can be used for free, they can be specified as a tuple of pairs (value, chain_depth).

• TimeoutError – If the global MAXSAT_TIMEOUT is not None, and it is reached before a MaxSAT instance could be solved.

• Optional[List[Tuple[int, int]]] – A minimum-cost addition chain, if it exists.

addition_chains_front ¶
Tools for generating addition chains that trade off depth and cost.

chain_depth(chain, precomputed_values=None, modulus=None) ¶
Return the depth of the addition chain.
gen_pareto_front(target, modulus, squaring_cost, solver='glucose42', encoding=1, thurber=True, precomputed_values=None) ¶
Returns a Pareto front of addition chains, trading off cost and depth.

addition_chains_heuristic ¶
This module contains functions for finding addition chains, while sometimes resorting to heuristics to prevent long computations.

add_chain_guaranteed(target, modulus, squaring_cost, solver='glucose421', encoding=1, thurber=True, precomputed_values=None) cached ¶
Always generates an addition chain for a given target, which may be suboptimal if the inputs are too large. In some cases, the result is not necessarily optimal. These are the cases where we resort to a heuristic. This currently happens if:
- The target exceeds 1000.
- The modulus (if provided) exceeds 200.
- MAXSAT_TIMEOUT is not None and a MaxSAT instance timed out

This function is useful for preventing long computation, but the result is not guaranteed to be (close to) optimal. Unlike add_chain, this function will always return an addition chain.

• target (int) –
• modulus (Optional[int]) – Modulus to take into account. In an exponentiation chain, this is the modulus in the exponent, i.e. x^target mod p corresponds to modulus = p - 1.
• squaring_cost (float) – The cost of doubling (squaring), compared to other additions (multiplications), which cost 1.0.
• solver (str, default: 'glucose421') –
• encoding (int, default: 1) –
• thurber (bool, default: True) – Whether to use the Thurber bounds, which provide lower bounds for the elements in the chain. The bounds are ignored when precomputed_values = True.
• precomputed_values (Optional[Tuple[Tuple[int, int], ...]], default: None) – If there are any precomputed values that can be used for free, they can be specified as a tuple of pairs (value, chain_depth).

• TimeoutError – If the global MAXSAT_TIMEOUT is not None, and it is reached before a MaxSAT instance could be solved.
• List[Tuple[int, int]] –

addition_chains_mod ¶
Tools for computing addition chains, taking into account the modular nature of the algebra.

hw(n) ¶
Returns the Hamming weight of n.

size_lower_bound(target) ¶
Returns a lower bound on the size of the addition chain for this target.

cost_lower_bound_monotonic(target, squaring_cost) ¶
Returns a lower bound on the cost of the addition chain for this target. The bound is guaranteed to grow monotonically with the target.

chain_cost(chain, squaring_cost) ¶
Returns the cost of the addition chain, considering doubling (squaring) to be cheaper than other additions (multiplications).

add_chain_modp(target, modulus, max_depth, strict_cost_max, squaring_cost, solver, encoding, thurber, min_size, precomputed_values=None) ¶
Computes an addition chain for target modulo p with the given constraints and optimization parameters. The precomputed_powers are an optional set of powers that have previously been computed along with their depth. This means that those powers can be reused for free.

• Optional[List[Tuple[int, int]]] – If it exists, a minimal addition chain meeting the given constraints and optimization parameters.

memoization ¶
This module contains tools for memoizing addition chains, as these are expensive to compute.

cache_to_disk(file_name, ignore_args) ¶
This decorator caches the calls to this function in a file on disk, ignoring the arguments listed in ignore_args.

solving ¶
Tools for solving SAT formulations.

solve(wcnf, solver, strict_cost_max) ¶
This code is adapted from pysat's internal code to stop when we have reached a maximum cost.

• Optional[List[int]] – A list containing the assignment (where 3 indicates that 3=True and -3 indicates that 3=False), or None if the wcnf is unsatisfiable.

extract_indices(sequence, precomputed_values=None, modulus=None) ¶
Returns the indices for each step of the addition chain. If n precomputed values are provided, then these are considered to be the first n indices after x (i.e.
x has index 0, followed by 1, ..., n representing the precomputed values).

solve_with_time_limit(wcnf, solver, strict_cost_max, timeout_secs) ¶
This code is adapted from pysat's internal code to stop when we have reached a maximum cost.

• TimeoutError – When a timeout occurs (after timeout_secs seconds)
• Optional[List[int]] – A list containing the assignment (where 3 indicates that 3=True and -3 indicates that 3=False), or None if the wcnf is unsatisfiable.
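The chain representation these functions share — a list of index pairs, each new value being the sum of two earlier values, starting from the single value 1 — can be made concrete with a small checker. This is an illustrative sketch mirroring the documented List[Tuple[int, int]] return type and cost model, not part of the Oraqle API:

```python
def validate_chain(chain, target):
    """Check that `chain` is a valid addition chain for `target`.

    `chain` is a list of pairs (i, j): each step appends the sum of the
    values at (0-based) indices i and j of the values produced so far,
    starting from the single value 1 at index 0.
    """
    values = [1]
    for i, j in chain:
        values.append(values[i] + values[j])
    return values[-1] == target

def chain_cost(chain, squaring_cost=0.75):
    # A doubling step (i == j) corresponds to a squaring in the
    # exponentiation chain and is assumed cheaper than a general addition.
    return sum(squaring_cost if i == j else 1.0 for i, j in chain)

# 1 -> 2 -> 4 -> 5: an addition chain for 5 with two doublings.
chain = [(0, 0), (1, 1), (0, 2)]
```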
Efficiently tabulating system of coupled ODEs for NDSolve

I am solving a (large) system of second order coupled nonlinear ODEs. I have been increasing the number of dependent variables present in the system slowly, to converge on more physically realistic results, but have experienced some pretty severe bottlenecks in the computation. I will link to some theory (is LaTeX supported in this forum?) and present my code below, but my question is: what is the best (i.e. most efficient) way to tabulate a large system of indexed functions, to be passed into NDSolve? I've tried a few things to improve efficiency which I'll list after I present the code, but first some details.

The system under consideration is described in the linked reference. Qualitatively it is a system of coupled second order nonlinear ODEs. The i-th governing equation is described in terms of a series of summations, and is of the form

Sum_j (Q_ij + Q_ji) d^2 a_j/dt^2 + Sum_j Sum_k S_i(j,k) da_j/dt da_k/dt + V = 0,

where the coefficients (Q_ij, S_i(j,k), V) are polynomials of maximum degree 4 in the dependent variables a_i.
The particular formulation of these terms is given in the link, but in general I am dealing with a number of indexed functions, which leads to a bit of my confusion, and will be discussed below. The code is:

M = 10;
cond1[r_, s_] := cond1[r, s] = If[Sign[s] == Sign[r - s], 0, 1];
cond2[r_, s_] := cond2[r, s] = If[Abs[r - s] > M, 0, 1];
F[n_] := F[n] = If[n == 0, 1, Abs[n]];
a[0][t_] := a[0][t] = 1.;
P[m_, k_, t_] := P[m, k, t] = 1./(Sqrt[F[m]]*F[k])*(cond1[m, k]*cond2[m, k]*a[m - k][t] - a[m][t]*a[-k][t]/2.);
Q[k_, l_, t_] := Q[k, l, t] = 1./4.*Sum[P[m, k, t]*P[-m, l, t], {m, 1, M}];
q[m_, k_, n_, t_] := q[m, k, n, t] = 1./(Sqrt[F[m]]*F[k])*(cond1[m, k]*cond2[m, k]*Boole[m - k == n] - a[m][t]/2.*Boole[-k == n] - a[-k][t]/2.*Boole[m == n]);
SQ[k_, l_, n_, t_] := SQ[k, l, n, t] = Sum[P[-m, l, t]*q[m, k, n, t] + P[m, k, t]*q[-m, l, n, t], {m, 1, M}];
SS[k_, l_, n_, t_] := SS[k, l, n, t] = (SQ[k, l, n, t] - SQ[n, l, k, t] - SQ[l, n, k, t]);
Fu[a_, b_, c_] := Fu[a, b, c] = If[a + b + c == 0, 1, 0];
Fu2[a_] := Fu2[a] = If[a == 0, 0, 1];
V[t_] := V[t] = (Sum[-a[n][t]*a[-n][t]/(2*n), {n, 1, M}]^2 +
    2.*Sum[a[n][t] a[-n][t]/(4*n^2), {n, 1, M}] +
    Sum[-a[n][t]*a[-n][t]/(2 n), {n, 1, M}]*Sum[a[n][t] a[-n][t]/n, {n, 1, M}] +
    Sum[Fu[n, m, o]*Fu2[n]*Fu2[m]*a[n][t]*a[m][t]*a[o][t]*Abs[o]/(8*Abs[F[n]*F[m]*F[o]]),
      {n, -M, M}, {m, -M, M}, {o, -M, M}]);
Join[Table[With[{i = i}, U2[i] = D[V[t], a[i][t]]], {i, 1, M}],
  Table[With[{i = i}, U2[-i] = D[V[t], a[-i][t]]], {i, 1, M}]];
Gov[n_, t_] := Gov[n, t] =
  Sum[a[l]''[t]*(Q[n, l, t] + Q[l, n, t]), {l, -M, M}] -
  Sum[SS[k, l, n, t] a[k]'[t] a[l]'[t], {k, -M, M}, {l, -M, M}] + U2[n];
Table[With[{i = i}, a[i][t_] := ar[i][t] + I*ai[i][t]], {i, 1, M}];
Table[With[{i = i}, a[-i][t_] := ar[i][t] - I*ai[i][t]], {i, 1, M}];
Table[ar[i][t] \[Element] Reals, {i, 1, M}];
Table[ai[i][t] \[Element] Reals, {i, 1, M}];
Table[With[{i = i}, Test[i] = Evaluate[Gov[i, t]]], {i, 1, M}];
Table[With[{i = i}, ReGov[i, t_] := ComplexExpand[Re[Test[i]]]], {i, 1, M}];
Table[With[{i = i}, ImGov[i, t_] := ComplexExpand[Im[Test[i]]]], {i, 1, M}];
ao = 0.1;
eqns = Join[Parallelize[Table[ReGov[i, t] == 0, {i, 1, M}]],
   Parallelize[Table[ImGov[i, t] == 0, {i, 1, M}]],
   Table[ar[i][0] == ao, {i, 1}], Table[ar[i][0] == 0, {i, 2, M}],
   Table[ai[i][0] == 0, {i, 1, M}], Table[ar[i]'[0] == 0, {i, 1, M}],
   Table[ai[i]'[0] == ao, {i, 1}], Table[ai[i]'[0] == 0, {i, 2, M}]];
MMode = NDSolve[eqns, Join[Table[ar[i][t], {i, 1, M}], Table[ai[i][t], {i, 1, M}]], {t, 0, 10}];

The first bottleneck occurs when defining a variable that evaluates the governing equations:

Table[With[{i = i}, Test[i] = Evaluate[Gov[i, t]]], {i, 1, M}];

Note, for the M=10 case the Timing on my machine is about 1.55, while for the M=20 case the Timing is about 28.555. In particular, the bottleneck seems to come from evaluating the function SS[k, l, n, t] in the definition of Gov[i, t]. There are quite a few terms in this function, but they all involve low order polynomials of maximum degree 4 in the dependent variables. I'm not sure if there is a way to speed these calculations up, or if this is a necessary area of intense computation.

Next, I decompose my governing equations into real and imaginary parts, to speed up the NDSolve portion of the computation (if I don't, I get an IDA error and must use "Solve" – see the link). This leads to the next, more severe, bottleneck, which comes from tabulating the real and imaginary parts of the governing equations, i.e. defining the variable "eqns". I have used ComplexExpand[Re[...]] and ComplexExpand[Im[...]] to get the real and imaginary parts of the governing equations, respectively, but this seems to be computationally expensive and accounts for the gross majority of the computation time. The reason for this post is to find out if there is a more efficient way to do this. Note, for the M=5 case the Timing of the eqns calculation is 0.0173 while for the M=10 case it is 12.25, i.e. nearly 3 orders of magnitude larger!
Qualitatively, I wonder if I have not effectively constrained some of the definitions in my code, and if this generality is leading to significant slow down. Also, I feel as if I need to gain some understanding of the Compile function, which seems to be very useful when defining functions, but the subtleties of using it with indexed functions to be passed into NDSolve are beyond my very naive understanding, and I cannot find any relevant examples to study. Finally, an alternative approach would be using matrices instead of term by term operations, but I'm not convinced this will lead to a significant speed up in Mathematica, and would welcome any advice on this. Any comments/suggestions are greatly appreciated.

2 Replies

Hi Peter, thanks for pointing this out. I had some trouble formatting my post, and in the confusion did not realize many terms had been dropped. I believe the code is now as it should be.

I'm noting that the expression Sign == Sign[r - s] in cond1 doesn't make sense; according to your reference it's a typo (Sign -> Sign[s]). Is all the remaining code as desired?
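The definitions above lean heavily on Mathematica's f[x_] := f[x] = ... idiom, which caches each result the first time it is computed so that the nested sums don't re-evaluate shared coefficients. The same caching idea, sketched in Python with illustrative stand-in functions (not the poster's actual model):

```python
from functools import lru_cache

M = 10

@lru_cache(maxsize=None)
def P(m, k):
    # Stand-in for an expensive coefficient; the real code builds
    # symbolic polynomial terms here.
    return (m - k) / (abs(m) + abs(k) + 1)

@lru_cache(maxsize=None)
def Q(k, l):
    # Each Q reuses the cached P values instead of recomputing them,
    # mirroring Q[k_, l_, t_] := Q[k, l, t] = ... in the post.
    return 0.25 * sum(P(m, k) * P(-m, l) for m in range(1, M + 1))

val = Q(2, 3)
```

The second call to Q(2, 3) is a pure cache lookup; without memoization the cost of the nested sums multiplies at every level, which is one reason the symbolic construction scales so badly with M.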
School of System Socionics

Each function has a certain set of properties which distinguish it from the other functions. Knowing these properties, you can learn to recognize the "signature" of a function showing through a person's speech. There are three characteristics that differentiate one function from another:

1. Dimension (one-, two-, three- or four-dimensional).
2. Sign ("plus" or "minus").
3. Track (vital or mental).

The combination of these three properties determines the location of a function in a specific TIM model and its distinctive features related to information processing.

• Function 1 - mental track, four-dimensional
• Function 2 - mental track, three-dimensional
• Function 3 - mental track, two-dimensional
• Function 4 - mental track, one-dimensional
• Function 5 - vital track, one-dimensional
• Function 6 - vital track, two-dimensional
• Function 7 - vital track, three-dimensional
• Function 8 - vital track, four-dimensional

The sign of a function depends on the affiliation of the particular TIM model with one of the so-called octave social progress rings.

• TIMs that belong to the so-called right-hand social progress ring (IL, SE, ET, LF, FR, TP, PS, RI) have the "plus" sign at their function 1.
• TIMs that belong to the left-hand social progress ring (ES, LI, FL, TE, PT, RF, IR, SP) have the "minus" sign at their function 1.

As for signs of other functions, all functions of the same color have the same sign. It is easy to determine the sign of any function for a given TIM if you remember the sign of function 1 of this TIM's model.
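The dimension/track assignment listed above is a fixed mapping from function position, which can be written down directly; this is an illustrative sketch of the listed table, not part of the School's materials:

```python
def track(n):
    """Functions 1-4 lie on the mental track, functions 5-8 on the vital track."""
    return "mental" if n <= 4 else "vital"

def dimension(n):
    """Dimensionality falls 4,3,2,1 over functions 1-4 and rises 1,2,3,4 over 5-8."""
    return 5 - n if n <= 4 else n - 4
```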
Shock waves - (Magnetohydrodynamics) - Vocab, Definition, Explanations | Fiveable

Shock waves are abrupt disturbances that move through a medium, creating a sharp change in pressure, temperature, and density. They occur when an object travels faster than the speed of sound in that medium, leading to a non-linear effect where the flow becomes compressible. These waves are crucial in understanding compressible flows, as they dictate the behavior of fluids when subjected to high speeds.

5 Must Know Facts For Your Next Test

1. Shock waves can be generated by various phenomena, including supersonic aircraft, explosions, and any object moving at speeds greater than Mach 1.
2. The strength of a shock wave is influenced by the speed of the object creating it; faster objects produce stronger shock waves.
3. Shock waves can result in significant changes in flow properties, including abrupt changes in velocity and thermodynamic variables.
4. There are different types of shock waves, including normal shocks and oblique shocks, each characterized by their orientation relative to the flow direction.
5. Understanding shock waves is essential for aerospace engineering, as they affect aircraft design and performance during high-speed flight.

Review Questions

• How do shock waves affect fluid properties in compressible flows?
□ Shock waves cause sudden changes in fluid properties like pressure, density, and temperature across their front. When a shock wave passes through a medium, it compresses the fluid ahead of it and causes a rapid decrease in pressure and an increase in temperature behind it. This non-linear behavior is crucial for understanding how fluids behave at high velocities, particularly in applications like supersonic flight.

• Compare and contrast normal shock waves with oblique shock waves in terms of their characteristics and effects on flow.
□ Normal shock waves occur perpendicular to the flow direction and create abrupt changes in pressure and temperature across their front. In contrast, oblique shock waves form at an angle to the flow direction and result in both pressure increases and directional changes in flow. While both types of shocks are significant in compressible flows, normal shocks typically lead to more drastic effects on downstream conditions than oblique shocks, making them easier to analyze for specific applications.

• Evaluate the impact of shock waves on aerospace engineering design and performance for high-speed vehicles.
□ In aerospace engineering, understanding shock waves is critical for designing high-speed vehicles like supersonic jets and spacecraft. The presence of shock waves influences aerodynamic forces, heat transfer, and overall stability during flight. Engineers must account for these effects when designing airframes and propulsion systems to ensure safety and efficiency. Failure to adequately address shock wave interactions can lead to structural damage or compromised performance during critical flight phases.
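The property jumps described above can be quantified with the standard normal-shock relations for a perfect gas; these formulas are textbook gas dynamics, not part of the glossary itself:

```python
def normal_shock_ratios(M1, gamma=1.4):
    """Pressure, density and temperature ratios across a normal shock
    for upstream Mach number M1 > 1 (calorically perfect gas)."""
    p = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)          # p2/p1
    rho = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)  # rho2/rho1
    T = p / rho                                            # T2/T1 (ideal gas law)
    return p, rho, T

# At Mach 2 in air (gamma = 1.4) the static pressure jumps by a factor of 4.5.
p, rho, T = normal_shock_ratios(2.0)
```

Note how the jumps grow rapidly with the upstream Mach number, matching the statement that faster objects produce stronger shocks.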
Power System Frequency Measurement for Frequency Relaying
Spark Xue, Bogdan Kasztenny, Ilia Voloh and Dapo Oyenuga
(Uploaded by Bogdan Kasztenny on Nov 23, 2017. Content may be subject to copyright.)

Abstract— Frequency protection is an important part of the art of power system relaying. On one hand, it protects rotating machines and other frequency-sensitive apparatus from potential damage or extensive wear. On the other hand, it is a part of load shedding schemes protecting the system. Unlike many other protection signals, power system frequency is not an instantaneous value. Moreover, there is no unambiguous definition of power system frequency assuming system transients and multi-machine systems. This paper discusses the power system frequency definition, signal models for frequency measurement, frequency measurement algorithms and fundamentals of frequency relaying. First, the concept of power system frequency is discussed and clarified in the context of a large scale system. The mathematical expressions of the system frequency and instantaneous frequency are presented on the basis of different signal models. A summary of the requirements for frequency measurement in applications such as under/over-frequency relays, synchrocheck relays and phasor measurement units is given. Second, typical frequency measurement methods are reviewed and their performance evaluation is discussed. Third, simulation tests of the typical algorithms are provided to demonstrate various aspects of the frequency measurement, including the metering accuracy, the time response, the tracking capability, and the performance under noisy/harmonic conditions. In the end, some practical aspects in designing and testing frequency relays are discussed.

I. INTRODUCTION

Frequency is an important parameter in a power system, indicating the dynamic balance between power generation and power consumption.
The system frequency and its rate-of-change are used directly for generator protection and system protection. When there are disturbances or significant load variations, the under/over-frequency relays could trip the units to avoid damage to the generators. When the system is about to lose its stability, the under-frequency relays can help to shed non-critical loads so that the system balance could be restored. Power system stability can also be improved by installing power system stabilizers (PSS). A PSS could use the frequency of the voltage signal taken at the generator terminal to derive the rotor speed, so that the excitation field and the power output of a generator can be adjusted by a feedback control scheme. In addition to direct usage in protection and control schemes, the function of frequency tracking is an indispensable part of modern digital relays, because many numerical algorithms are sensitive to the variation of the fundamental frequency. For example, the digital Fourier Transform (DFT) is widely used to compute phasors of voltage and current signals. If the sampling frequency is not the assumed multiple of the signal frequency, leakage error would occur in phasor estimation. Without proper compensation, the overall performance of the protective relay will be impacted. Highly accurate and stable frequency measurement is always desirable for power system applications. However, the dynamic characteristics of signals in power systems have brought challenges in designing a frequency estimation algorithm that is accurate, fast and stable under all kinds of conditions. To tackle this problem, researchers have proposed many numerical algorithms, such as zero crossing, DFT with compensation, phase locked loop, orthogonal decomposition, signal demodulation, Newton method, Kalman filter, neural network, etc.

(Spark Xue, Bogdan Kasztenny, Ilia Voloh and Dapo Oyenuga are with GE Multilin, 205 Anderson Road, Markham, Ontario, Canada.)
This paper provides a review of the concept of power system frequency, frequency measurement algorithms and some fundamental aspects related to frequency relaying. The paper is organized as follows. The concept of frequency is discussed in section II. Sections III and IV describe the signal models and the requirements for frequency relaying. Frequency measurement algorithms and their performance evaluation are reviewed in sections V and VI. Section VII presents some simulation test results with respect to four selected algorithms. The frequency relay design and test are discussed in section VIII. A summary is given in the end.

II. THE CONCEPT OF FREQUENCY

A. The general definition and instantaneous frequency

The general definition of frequency in physics is the number of cycles or alternations per unit time of a wave or oscillation. Assuming a signal has N cycles within a period of ∆t, its frequency will be

f = \frac{N}{\Delta t}. \quad (1)

From this general definition, one can derive that the signal needs to be periodic and that the frequency is not an instantaneous quantity. However, it is common that frequency is used to characterize arbitrary signals, including aperiodic signals. Meanwhile, the term instantaneous frequency is seen from time to time in the literature. As a matter of fact, many frequency estimation algorithms are based on the concept of instantaneous frequency. These paradoxes can be resolved by extending the definition of frequency.

Using the Fourier transform, an arbitrary signal can be decomposed into a weighted sum of periodic components in the form of sine/cosine waves. Let the signal be s(t) in the time domain; its frequency domain correspondence is

S(f) = \int_{-\infty}^{\infty} s(t)\, e^{-j 2\pi f t}\, dt, \quad (2)

where a particular S(f_0) gives the amplitude of the component that has a frequency f_0. If the signal is strictly periodic, it has one fundamental frequency. If the signal is aperiodic, it has multiple frequencies or even an infinite number of frequencies.
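The decomposition in Eq. (2) can be illustrated with its discrete counterpart; the parameters below are arbitrary, chosen so the test tone falls exactly on a DFT bin:

```python
import numpy as np

fs, f0, N = 1000.0, 50.0, 1000   # sample rate (Hz), tone frequency (Hz), window length
t = np.arange(N) / fs
s = np.sin(2 * np.pi * f0 * t)

# Discrete analogue of Eq. (2): the magnitude spectrum peaks at the tone frequency.
S = np.fft.rfft(s)
freqs = np.fft.rfftfreq(N, 1 / fs)
peak = freqs[np.argmax(np.abs(S))]
```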
The frequency of each sinusoidal component follows the general definition. This way, the frequency definition is extended to aperiodic signals.

A sinusoidal signal of the form s(t) = A sin(ϕ) can be viewed as the projection of a rotating phasor onto the imaginary axis of a complex plane. The angular speed of the phasor is dϕ/dt. As a rotating phasor, the recurrence of the signal value means that ϕ is increased by 2π. Because of this, the phasor repeats ∆ϕ/2π times during ∆t, and the frequency is (∆ϕ/2π)/∆t according to the frequency definition in Eq. (1). Taking the limit ∆t → 0, the instantaneous frequency is defined as

f(t) = \frac{1}{2\pi} \frac{d\phi}{dt}.

The above two extensions of the frequency definition have played important roles in the area of signal processing. The Fourier transform in Eq. (2) is meaningful for stationary signals, whose spectrum is constant over a window of time. For a non-stationary signal, whose spectrum is time-varying, the instantaneous frequency can be used to characterize it. However, the concept of instantaneous frequency is controversial and application related. For example, a complex signal s(t) of the form

s(t) = A(t)\, e^{j\phi(t)}

has both amplitude and phase that are time-varying. When the signal is to be reconstructed from the sample values, it could be written either in the amplitude-modulation form

s(t) = A(t)\, e^{j 2\pi f t}, \quad (3)

or the phase-modulation form

s(t) = A_0\, e^{j\varphi(t)}, \quad (4)

where A(t) is time-varying and A_0 is a constant. The instantaneous frequencies corresponding to Eq. (3) and Eq. (4) would be completely different. It shows that the instantaneous frequency needs to be defined in the context of a specific application.

B. Frequencies for power systems

In a power system, the voltage or current signals for frequency measurement originate from the synchronous machines (generators), whose rotating speed is proportional to the frequency of the generated voltage. The mechanical frequency of a generator is its rotor speed,

f_m = \frac{1}{2\pi} \frac{d\theta_m}{dt},
where θ_m is the spatial angle of the rotor. The frequency of the generated voltage is

f_e = \frac{1}{2\pi} \frac{d\theta_e}{dt},

where θ_e is the electrical angle, which is proportional to θ_m for an n-pole machine. From these equations, the frequency has a clear physical meaning for a stand-alone generator. Both the general definition of frequency and the extension of instantaneous frequency fit well in this case.

It is natural to extend the instantaneous frequency notation of the generator internal voltage to any node in the system. Using a rotating phasor ũ(i, t) to represent the node voltage, the instantaneous frequency of the i-th node in the system can be defined as the phasor rotating speed,

f(i, t) = \frac{1}{2\pi} \frac{d}{dt} \tan^{-1}\!\left( \frac{\mathrm{Im}(\tilde{u}(i, t))}{\mathrm{Re}(\tilde{u}(i, t))} \right), \quad (5)

where \tan^{-1}(\mathrm{Im}(\tilde{u}(i,t))/\mathrm{Re}(\tilde{u}(i,t))) is the rotating angle of the voltage phasor on the complex plane. The frequency for a current signal has the same expression. In a power system, it is more meaningful to use a single quantity to represent the three-phase signals. [12] proposed to use a composite space phasor derived from the αβ transform to represent the three-phase signal. The composite phasor is actually the scaled positive sequence component,

\tilde{u}_p = \frac{1}{\sqrt{3}} \left( u_1(t) + \alpha u_2(t) + \alpha^2 u_3(t) \right),

where α = exp(j2π/3). The frequency is still defined as the rotating speed of the phasor ũ_p as in Eq. (5). Using the positive sequence component, not only can all three phases be handled at the same time, but the impact of 3rd harmonics and dc components on the frequency measurement is also reduced.

For frequency relaying in most cases, the system frequency is the target, as it is used to reflect the power balance of the system or a region. Since the frequency is obtained from each individual node, a question arises: can the measured frequency be taken as the system frequency? The answer is yes and no, depending on the system condition, the application and the frequency estimation method.
In a power system, if the power generation and consumption are perfectly balanced and all the generators are in synchronism, the frequency of any node can be taken as the system frequency. However, a power network is such a dynamic system that unbalance between generation and load always exists. Especially when there is a disturbance, such as a fault on a critical transmission line or the loss of a large generating unit, the balance between the generation and the load would be temporarily disturbed. Consequently, the power balance at each individual generating unit would be different. From [46], the swing equation of the ith generator in a multi-machine system is

Mi dωi/dt = Pmi − Pei − Pdi,

where Mi is the inertia coefficient of the ith machine, and Pmi, Pei and Pdi are the mechanical power, electrical power and damping power respectively. This equation shows that the rotor is accelerated or decelerated by the power unbalance (Pmi − Pei) and the power Pdi absorbed by the damping forces. The corresponding frequency would differ from generator to generator.

During the electromechanical dynamics in the system, a generator that is close to the disturbance will have an instantaneous rotor speed variation in response to the disturbance. But for the generators far away, the rotor speed and mechanical power output would not change at the first instant. The frequency difference will cause an electromechanical wave to propagate in the network and produce different frequency dynamics at different nodes in the system. From the simulation test in [63], the speed of the frequency wave propagation is 400-600 miles/sec for a 1800MW loss in the Eastern US system.
Therefore, the node frequency is a local quantity that may not fully represent the system frequency, which is a global value that can be defined as the weighted average of the node frequencies, or the equivalent frequency at the center of inertia [70],

fsys = (Σi Hi fi) / (Σi Hi), (6)

where Hi is the inertia constant of the ith generator or the equivalent generator of a region. The averaging process in this equation should be carried out over all locations for a fixed time window to yield the system frequency. Nowadays this has become possible by utilizing a group of GPS-synchronized PMUs that are connected through a high speed communication network. However, for practical reasons, the system frequency is usually approximated by the time averaging of the frequency of an individual node over a window T,

f̄(i, t) = (1/T) ∫ f(i, τ) dτ. (7)

From Eq. (6) and Eq. (7), the system frequency is not an instantaneous value. However, the concept of instantaneous frequency can still be used in some frequency estimation algorithms, and the average value of the estimated frequency can be used to approximate the system frequency.

This leads to another issue: the frequency results from different intelligent electronic devices (IEDs) could be different. Some IEDs are based on the periodic characteristic of the signal, some are based on the concept of instantaneous frequency. Different algorithms also have different accuracy and different responses to harmonics, noise, time-varying amplitude, etc. Therefore, the measured frequency at a node in the system should be called the apparent frequency, which is a reflection of the actual node frequency in the IEDs. For most IEDs, it is usually a window of sample values that is used to compute the frequency, and the results are usually smoothed by moving average filters. This way, the apparent frequency would be close to the node frequency and the system frequency.
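As a toy illustration of Eq. (6) and Eq. (7) (function and variable names are this sketch's own, not the paper's):

```python
def system_frequency_coi(node_freqs, inertias):
    """Center-of-inertia system frequency, Eq. (6):
    f_sys = sum(H_i * f_i) / sum(H_i)."""
    return sum(h * f for h, f in zip(inertias, node_freqs)) / sum(inertias)

def apparent_system_frequency(f_node, n_window):
    """Single-node approximation, Eq. (7): time-average the node
    frequency samples over a fixed window, as an IED would."""
    recent = f_node[-n_window:]
    return sum(recent) / len(recent)
```

With equal inertias the result is the plain average; a heavier machine pulls the system frequency toward its own node frequency.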
In brief, the node frequency, generator frequency, system frequency and apparent frequency are different quantities, even though their values could be very close, particularly under very slow system disturbances and in steady states. Understanding the difference is helpful in the design and test of frequency-related applications in power systems.

III. SIGNAL MODELS FOR FREQUENCY MEASUREMENT

The modeling of signals is the first step of the frequency measurement problem. As a mathematical description of the signal, a model establishes the relationship between the unknown parameters and the observed sample values. The signal models that are commonly used for frequency measurement are summarized in this section.

A. Basic signal model

The most widely used signal model in power systems is a voltage signal expressed by

v(t) = A cos(ωt + ϕ), (8)

where A is the amplitude, ω is the angular frequency and ϕ is the phase angle. For a stationary signal, the frequency is simply ω/2π. For a non-stationary signal, the frequency and phase angle cannot be considered separately in this model. Some algorithms would take ω and ϕ as two variables but estimate them simultaneously; some would use a fixed value for ϕ and leave ω as the only variable within the cosine function; some would take ω as the nominal frequency and compute the frequency deviation from the phase angle variation,

∆f = (1/2π) dϕ/dt, (9)

which is in line with the expression of instantaneous frequency.

B. Signal models with harmonics, noise and decaying DC

Inevitably, the voltage or current signal in a power system could be contaminated by harmonics, noise and dc components. Some frequency measurement algorithms assume those should be handled by separate filters.
Some just include them in the signal model,

v(t) = A0 e^{−t/τ} + Σ_{k=1}^{M} Ak(t) sin(kω0t + ϕk(t)) + ε,

where A0 is the initial amplitude of the dc component that has time constant τ, Ak(t) represents the amplitude of the kth harmonic, ε is noise and M gives the maximum order of the harmonics. From this model, the frequency deviation can be regarded as the phase angle change of the fundamental,

∆f = (1/2π) dϕ1(t)/dt,

where ϕ1(t) represents the phase angle of the fundamental component, which is slightly different from Eq. (9).

C. Complex signal model from Clarke transformation

In a power system, it is meaningful to measure the frequency for all three phases simultaneously. Instead of combining the measuring results from three single-phase voltages, the Clarke transformation is used to convert the three-phase system into a two-phase orthogonal system. From the transformation, either vα can be used alone for frequency measurement, or a composite signal v = vα + jvβ can be used. Using the Clarke transformation, not only can the three-phase signals be considered at the same time, the composite signal in complex form is also useful in some algorithms such as [2], [6], [48], [57]. The model is also less susceptible to harmonics and noise. The disadvantage is that system unbalance could bring errors to the transformed signals.

D. Signal model using positive sequence component

In [11], [47], [57], the positive sequence voltage is used for frequency estimation. The positive sequence component has the same advantage as the composite signal from the αβ transform. Actually, they are equivalent under system balance conditions. There are various solutions to compute the sequence components from sample values. Since the positive sequence voltage V1 is a space vector rotating with angular speed 2πf, the frequency can be computed from its rotating speed as in Eq. (5).

IV.
FREQUENCY RELAYING AND THE REQUIREMENTS

The main applications of frequency relaying include under / over frequency relays for generator protection or load shedding schemes, the voltage / frequency (V/Hz) relays for generator / transformer overexcitation protection, synchrocheck relays, synchrophasors and any phasor-based relays that incorporate a frequency tracking mechanism for accurate phasor estimation.

When there is an excess of load over the available generation in the system, the frequency drops as the generators slow down in the attempt to carry more load. If the underfrequency or overfrequency condition lasts long enough, the resulting thermal stress and vibration could damage the generators. If the load / generation unbalance is severe, the generator shall be tripped by its unit protection, which could consequently worsen the system unbalance condition and lead to a cascading effect of power loss and system collapse. On one hand, the underfrequency relays can be used to trip the generators when the system frequency is close to the withstanding limits of the units. On the other hand, the underfrequency relays can be used to automatically shed some pre-determined load so that the load / generation balance could be restored. Such load shedding action must be taken promptly so that the remainder of the system could recover without sacrificing the essential load. Most importantly, the action shall be fast enough to prevent the cascading of generation loss into a major system outage.

For this type of application, the frequency relays should have high accuracy, because as little as 0.01Hz of frequency deviation could represent tens of megawatts of power unbalance. It is generally required that a frequency relay has a 1mHz resolution. Meanwhile, the frequency measurement must be stable and robust under various conditions.
When a generator unit is under AVR control at reduced frequency during unit start-up and shutdown, or under overvoltage conditions, the magnetic core of the generator or transformer could saturate, and the consequent excessive eddy currents could damage the insulation of the generator / transformer. To prevent this, relays based on volts / hertz measurement can be deployed to detect this over-excitation condition. The accuracy requirement is the same as for the underfrequency relay.

The synchrocheck relays are used to supervise the connection of two parts of a system through the closure of a circuit breaker. The differences in frequency, phase angle and voltage need to be within the setting range to prevent power swings or excessive mechanical torques on the rotating machines. In general, a setting of around 0.05Hz is sufficient and the frequency resolution of a synchrocheck relay could be in the range of 10mHz.

For microprocessor-based relays, the frequency tracking mechanism is critical to phasor estimation. Frequency tracking usually indicates that a digital relay can adjust its sampling frequency according to the signal frequency, in order to reduce the phasor estimation error. Most digital relays use phasor estimation as the foundation of protection functions, since phasors can help to transform the differential equations of electrical circuits into simple algebraic equations. Though the expression of a phasor is independent of frequency, different signal frequencies could result in different phasors. Without frequency tracking, the performance of the protection functions will be impaired under off-nominal frequency conditions. For phasor estimation, the frequency tracking shall be as fast as possible to follow the frequency variations, under the condition that the stability of the frequency measurement is maintained.
In addition, the range of frequency tracking for generator protection needs to be wide enough to cope with the generator starting-up and shutting-down processes.

The frequency and frequency rate-of-change are also an integrated part of synchrophasor units for wide area protection and control. In 2003, an internet-based frequency monitoring network (FNET) was set up in the U.S. to make synchronized measurements of frequency for a wide area power network [37], [71]. From the synchronized data collected by FNET, significant system events such as generator tripping can be located by event localization algorithms [15] that are based on the traveling speed of the frequency perturbation wave and the distance between observation points. For this type of application, the accuracy of frequency metering should be as high as possible, since a minor frequency error could mean hundreds of miles of difference in fault localization.

In general, the frequency measurement should have sufficient accuracy and good speed. A ±1mHz accuracy is deemed good enough for most frequency relaying applications. However, the ±1mHz accuracy is only valid when the frequency changes slowly. Though fast frequency tracking could mean less dynamic error, the accuracy and the speed requirements are mutually exclusive at a certain point. More error could be produced in pursuit of fast frequency tracking, especially when the system or signals are under adverse conditions. In a power system, the voltage or current signal for frequency estimation could be contaminated by harmonics, random noise, CT saturation, CVT transients, switching operations, disturbances, electromagnetic interference, etc. It is imperative that a frequency relay shall not give erroneous results that cause false relay operation.
To summarize, the following criteria need to be satisfied for frequency relaying:
• The measured frequency or frequency rate-of-change should be a true reflection of the power system state;
• The accuracy of the frequency measurement should be good enough under system steady state and dynamic conditions;
• The frequency tracking should be fast enough to follow the actual frequency change, in order to satisfy the need of the intended application;
• The frequency tracking range for generator protection should be wide enough to handle the generator start-up and shut-down processes;
• The frequency measurement should be stable and robust when the signal is distorted.

V. FREQUENCY MEASUREMENT ALGORITHMS

In the past, a solid state frequency relay could use pulse counting between zero-crossings of the signal to measure the frequency. The accuracy could be as high as ±1∼2mHz [43] under good signal conditions, but the relay is susceptible to harmonics, noise, dc components, etc. Nowadays, with the prevalence of microprocessor-based relays and cheaper computational power, many numerical methods for frequency measurement have been applied or proposed, including:
• Modified zero-crossing methods [4], [3], [44], [52], [53]
• DFT with compensation [23], [28], [65], [68]
• Orthogonal decomposition [40], [55], [59]
• Signal demodulation [2], [11]
• Phase locked loop [12], [16], [27]
• Least square optimization [7], [34], [49], [62]
• Artificial intelligence [8], [13], [30], [32], [45], [58]
• Wavelet transform [9], [36], [35], [31], [64]
• Quadratic forms [29], [30]
• Prony method [38], [42]
• Taylor approximation [51]
• Numerical analysis [67]
Some of these methods are briefly reviewed in this section.

A. Zero-Crossing

The zero-crossing (ZC) method is the most widely adopted because of its simplicity. From the frequency definition, the frequency of a periodic signal can be measured from the zero-crossings and the time intervals between them.
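The principle can be sketched in software as follows. This is a minimal illustration with linear interpolation of the crossing instants, not any of the cited relay implementations; the function name and structure are this example's own.

```python
import numpy as np

def zero_crossing_frequency(v, fs):
    """Estimate frequency from zero-crossing intervals.
    Crossings are located by checking the signs of adjacent samples and
    refined by linear interpolation on the time axis. Returns the average
    frequency over all full periods in the window."""
    v = np.asarray(v, dtype=float)
    idx = np.where(np.signbit(v[:-1]) != np.signbit(v[1:]))[0]  # sign changes
    # linear interpolation: t_zc = (i + v[i] / (v[i] - v[i+1])) / fs
    t_zc = (idx + v[idx] / (v[idx] - v[idx + 1])) / fs
    if len(t_zc) < 3:
        raise ValueError("need at least one full cycle in the window")
    periods = t_zc[2:] - t_zc[:-2]   # same-direction crossings are one period apart
    return float(1.0 / periods.mean())
```

On a clean sinusoid the interpolation error is tiny because a sine is nearly linear around its zeros; harmonics, noise and dc offset degrade it, as discussed below.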
A solid state frequency relay could detect the zero-crossings by using voltage comparators and a reference signal. In a software implementation, the zero-crossing can be detected by checking the signs of adjacent sample values. The duration between two zero-crossings can be obtained from the sample count and the sampling interval. Fig. 1 shows the zero-crossing detection using the digital method.

[Fig. 1. Zero-crossing detection for frequency measurement]

The accuracy of ZC could be influenced by zero-crossing localization, quantization error, harmonics, noise and signal distortion. The quantization error is negligible if a high sampling rate and a high precision A/D converter are used. A lowpass filter can be applied to reduce the harmonics and noise in the signal. The random error caused by zero-crossing localization on the time axis could be significant if the sampling frequency is not high enough. [4] proposed to use a polynomial curve to fit the neighboring samples of the zero-crossing. The roots of the polynomial can be solved by the least error squares (LES) method and one of the roots is taken as the precise zero-crossing on the time axis. The disadvantage of this method is the high computational cost for curve fitting and polynomial solving. In practice, linear interpolation is mostly used, as illustrated in Fig. 1.

To improve the accuracy of ZC, a post-filter such as a moving average filter is usually applied. The slow response to frequency change is another issue for ZC, since the measured frequency can be updated only after at least half a cycle. In practice, it takes a few cycles to obtain good accuracy. Including the delay brought by the pre-filters and post-filters, the total latency could be significant. A level crossing method was proposed in [44] to supplement the ZC by multiple computations of the periods between non-zero voltage level crossings.
It makes use of all the sample values to improve the dynamic response of the algorithm, but the method is susceptible to amplitude variations and signal distortion. In [1], a three-point method is used to supplement the ZC. The frequency can be quickly derived from three consecutive samples. However, the method is highly susceptible to noise, harmonics and amplitude variations.

In brief, a zero-crossing method has the advantage of simplicity, but it needs to be supplemented with other techniques to obtain good accuracy and good dynamic response. In some cases, the overall algorithm becomes so complicated that the simplicity of the zero-crossing method is lost.

B. Digital Fourier Transform

The digital Fourier transform (DFT) is widely used for voltage and current phasor calculation. For a discrete signal v(k), if the DFT data window contains exactly one cycle of samples, the phasor of the fundamental frequency is given by

Vk = Σ_{n=0}^{N−1} v(k + n − N + 1) e^{−j2πn/N}, (10)

where N is the number of samples and the subscript k represents the last sample index in the data window. The resulting phasor rotates on the complex plane with an angular speed determined by the signal frequency, which can be taken as the instantaneous frequency,

f = (1/2π) d(arg[Vk])/dt,

where arg[Vk] = tan⁻¹(Im[Vk]/Re[Vk]). The phasor estimation and frequency estimation are thus highly correlated with each other.

If the design assumes that the sampling rate is an integer multiple of the signal frequency, the DFT will produce leakage error on both the phasor and the frequency measurement for a signal with off-nominal frequency. Using DFT, an N-point data sequence in the time domain produces N discrete frequency bins in the frequency domain. If the signal frequency does not overlap any of these frequency bins, the 'energy' from the samples will leak to the neighboring bins. The closest frequency bin, which is used to approximate the signal frequency, will get the most 'energy'. Hence, the leakage error is introduced into the estimated phasor and frequency. Fig.
2-(a),(b) present the frequency domains of a 60Hz signal and a 59Hz signal as the DFT results under a sampling rate of 3840Hz. The corresponding phasors out of the DFT are shown in Fig. 2-(c),(d). Without leakage compensation, the magnitude and angle for the 59Hz signal oscillate and deviate from the actual values. In contrast, the magnitude and angle for the 60Hz signal are straight lines.

[Fig. 2. DFT leakage error illustration (Sampling rate = 3840 Hz). a) DFT results of a 60Hz signal; b) DFT results of a 59Hz signal; c) Magnitudes of the phasors for 60Hz / 59Hz signals; d) Angles of the phasors for 60Hz / 59Hz signals]

Almost all DFT-based frequency measurement algorithms are focusing on how to reduce or eliminate leakage error. There are four main approaches:
1) The length of the data window is fixed, the sampling frequency is updated by the estimated signal frequency;
2) The sampling frequency is fixed, the length of the data window is updated by the estimated signal frequency [14], [23];
3) The length of the data window is fixed, the data are re-sampled to ensure one cycle of data in the window;
4) Both the sampling frequency and the length of the data window are fixed, the leakage error is compensated [4], [47], [65], [69].

The first three approaches are based on the fact that leakage error can be canceled out if the sampling frequency is an integer multiple of the signal frequency, or equivalently, the DFT data window contains exactly n (n = 1, 2, ...) cycles of samples. Under this condition, the signal frequency will overlap one of the frequency bins in the frequency domain so that no leakage would occur.

In [5], the variable-rate measurement is proposed for frequency measurement. A feedback loop is applied to adjust the sampling frequency until the derived frequency is locked with the actual signal frequency.
Similar to a phase locked loop, this type of method can achieve high accuracy since the feedback loop forces the error towards zero. However, the feedback could slow down the frequency tracking speed in a real-time application. With proper hardware and software co-design, this method is suitable for on-line frequency measurement.

In [23], the DFT data window has a variable length according to the estimated frequency, so that a cycle of samples can be included in the data window. Since the sampling frequency is fixed while the signal frequency is uncertain, it is not guaranteed that the updated data window will contain exactly one cycle of samples. Therefore, the leakage error cannot be eliminated by this method. [23] proposed to use the line-to-line voltage or the positive sequence voltage to reduce the influence of harmonics, and to use a moving average filter to smooth the estimation results. This method is easy to implement and the measurement range is wide, which is good for generator protection. However, the accuracy is limited because of the incomplete leakage compensation.

In [28], the hardware samples are re-calculated into software samples so that the data window always includes a fixed number of samples covering exactly one cycle of the signal. A feedback loop is used to adjust the re-sampling by the estimated frequency until the error is lower than a threshold. This method is more accurate than [23] and simpler than [5]. Again, the feedback loop needs careful design for good dynamic response in real-time applications.

In [65], [68], a number of successive phasors out of the DFT are utilized to cancel the leakage error without changing the sampling rate or the data window length. The method in [68] does not make any approximations to cancel out the leakage error, so that high accuracy can be achieved. The details of this algorithm are given in the Appendix.
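As a numerical illustration of Eq. (10) and the leakage effect (a generic sketch, not the compensation algorithms of [65], [68]; the function names are this example's own):

```python
import numpy as np

def dft_phasor(v, k, N):
    """Full-cycle DFT phasor over the window ending at sample k, Eq. (10):
    V_k = sum_{n=0}^{N-1} v(k + n - N + 1) * exp(-j*2*pi*n/N)."""
    n = np.arange(N)
    return complex(np.sum(v[k - N + 1 + n] * np.exp(-2j * np.pi * n / N)))

def frequency_from_phasors(v, N, fs):
    """The phasor angle advances by about 2*pi*f/fs per sample; its mean
    slope gives the frequency. At off-nominal frequency the instantaneous
    slope oscillates around the true value because of leakage."""
    angles = [np.angle(dft_phasor(v, k, N)) for k in range(N - 1, len(v))]
    phi = np.unwrap(np.array(angles))
    return float(np.diff(phi).mean() * fs / (2 * np.pi))
```

At the nominal frequency (fs/N) the phasor angle is an exact linear ramp; at 59 Hz the averaging only partially suppresses the leakage oscillation, which is why compensation techniques are needed for high accuracy.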
From the simulation tests, this method can achieve both high accuracy and good dynamic response, but it is susceptible to harmonics, noise and dc components.

In summary, the DFT can be used to estimate the fundamental frequency, the phasor and the harmonics simultaneously, which is its advantage over the other single-objective algorithms. However, the leakage effect could have a significant impact on the phasor and frequency estimation, and the DFT methods need to be supplemented by compensation techniques for good accuracy. Comparing the various compensation techniques, the algorithms in [5], [28], [68] are recommended for phasor and frequency estimation.

C. Signal Decomposition

Like the DFT, this group of methods decomposes the input signal into sub-components so that the problem is transformed and useful information can be retrieved. The approaches in [40], [55], [59] decompose the input signal into two orthogonal components to derive the frequency after some mathematical manipulation. In [40], the input signal is decomposed by a sine filter and a cosine filter,

v1(t) = A sin(2πft + ϕ),
v2(t) = A cos(2πft + ϕ).

After taking the time derivatives of these two signals, the frequency is computed by

f(t) = (v′1(t)v2(t) − v′2(t)v1(t)) / (2π(v²1(t) + v²2(t))). (11)

Eq. (11) is exact in theory. However, error could stem from the signal decomposition and the approximation of the derivatives. From the frequency response of the sine / cosine filters in Fig. 3, the filter gains are the same only at the nominal frequency. Due to the different filter gains at off-nominal frequencies, error is introduced into the frequency estimation. In [40], a feedback loop is designed to adjust the filter gains. After adjustment, good accuracy can be achieved, but only in a narrow range around the nominal frequency.

Instead of using sine and cosine filters, [55] proposed to use finite impulse response (FIR) filters designed by optimal methods. Different coefficients are used for different off-nominal frequencies.
The coefficients, with a 1Hz step, are calculated off-line and stored in a look-up table. For other frequencies, interpolation is performed on-line to adjust the coefficients. A feedback loop is applied to select the filters from the measured frequency. The accuracy is improved by the feedback adjustment. Meanwhile, the harmonics can be suppressed by the FIR filters. However, the convergence may be slow for a real-time application because of the feedback loop.

[Fig. 3. Frequency response of a sine filter and a cosine filter]

Without using a feedback loop and orthogonal filters, [54] uses a group of FIR filters to derive the frequency. After pre-filtering, the input signal is decomposed by an all-pass filter and a low pass filter. The decomposed signals then pass through two groups of cascading FIR filters. The frequency is then derived from the outputs of the two paths, during which the error brought by the filter gains is canceled out. Compared with [40] and [55], there is no error compensation by a feedback loop and the filters are fixed, so that no extra storage of coefficients is needed. However, the group delay of the FIR filters slows down the dynamic response, and the frequency output is highly sensitive to harmonics and noise, so that the pre-filter design is critical to the overall performance of this method.

In [59], the impact of the different filter gains is canceled out by a sequence of decomposed signals. After filtering, a new signal is produced by combining the sub-components. The historical values of this new signal are utilized to cancel the impact of the filter gains. The details of this algorithm are given in the Appendix. Since the influence of the unequal filter gains is completely canceled out, high accuracy can be achieved with this method. It is also simpler and faster in comparison with other decomposition algorithms.
However, as it is based on the assumption that the signal amplitude is stable over a window of data, the time-varying amplitude of a non-stationary signal could have an impact on its accuracy.

D. Signal Demodulation

Instead of decomposing the input signal, a demodulation method starts from synthesizing a new signal. Fig. 4 illustrates the process of computing the frequency deviation by the signal demodulation method (SDM). After the input signal is modulated by the reference signal that has nominal frequency, the resulting signal vp contains a low frequency component and a near-double frequency component. Through the low-pass filter, the low frequency component vc is retrieved and the frequency deviation is calculated as the rotating speed of vc. More details of this method are given in the Appendix. The advantage of SDM is its simplicity and potential for high accuracy. However, the stopband attenuation of the low-pass filter must be high enough to remove the near-double frequency component. A compromise between the filter attenuation and the filter delay must be made for the accuracy and the dynamic response of the frequency measurement.

[Fig. 4. Signal demodulation method]

E. Phase Locked Loop

[Fig. 5. Phase locked loop]

A phase-locked loop (PLL) is a feedback system that responds to the frequency / phase change of the input signal by raising or lowering the frequency of a voltage controlled oscillator (VCO) until its frequency / phase matches the input signal. A typical PLL is composed of three parts, as illustrated in Fig. 5. In the phase detector, a new signal vp is produced from the input signal vi and the reference signal v0. The synthesized signal vp contains a low-frequency component corresponding to the frequency deviation.
After passing through a lowpass filter, the low frequency component vc is retrieved and used as the error signal to drive the voltage-controlled oscillator (VCO). The oscillation frequency of the VCO is adjusted and the VCO output v0 feeds back to the phase detector. The frequency difference between vi and v0 becomes smaller and smaller after each feedback until it is zero, which is the locked state of a PLL.

It is noted that a PLL for frequency measurement is quite similar to the signal demodulation method. Both use lowpass filters to demodulate the synthesized signal to get the frequency deviation. However, a PLL is characterized as a feedback system in which the frequency difference is gradually reduced towards zero, which implies that a PLL could achieve very high accuracy in frequency measurement at the price of some time delay. Another advantage is that a PLL is insensitive to harmonics and noise because of the lowpass filter and the feedback loop.

The critical part of a PLL design for frequency measurement is the phase detector. In [17], the transformed αβ signal is used as the input of the phase detector. In [16], a proportional-integral (PI) controller is used to improve the performance and stability of the feedback system. In [27], [18], the phase detector consists of an in-phase component and a quadrature component to estimate the time derivative of the phase angle directly, so that the nonlinear dependency of the error signal on the phase difference is avoided. With this design, the range of frequency measurement is wide and the convergence is claimed to be within a few cycles.

Because of its accuracy and robustness, a PLL can be applied in a line differential protection scheme for accurate data synchronization. Combined with the GPS time, the system frequency is also utilized to synchronize the data packets that are continuously exchanged among the relays. For this type of application, fast frequency tracking is not desired.
Instead, the accurate and stable frequency measurement helps the data alignment for relays at different locations [41].

F. Non-linear iterative methods

A number of non-linear iterative methods have been proposed for accurate frequency estimation, including: least error squares (LES) methods [19], [49], [50], [61], least mean squares (LMS) methods [25], [48], Newton methods [60], [62], Kalman filters [7], [10], [26], [20], [21], [22], the steepest descent method [34], etc. A common feature of these methods is to iteratively minimize the error between the model estimations and the sample values, so that the parameters or states of the model can be derived.

1) Least Error Squares Method: [49] proposed the LES technique to estimate the frequency in a wide range. Using a three-term Taylor expansion of Eq. (8) in the neighborhood of the nominal frequency, the voltage signal is turned into a linear form,

v(t1) = a11x1 + a12x2 + a13x3 + a14x4 + a15x5 + a16x6, (12)

where v(t1) is the sample value at time t1, the coefficients a11..a16 are known functions of t1, and the parameters x1..x6 are unknowns to be solved, e.g., x1 = A cos ϕ and x2 = (∆f)A cos ϕ. Using m > 6 samples, a linear system with n = 6 unknowns is set up and the unknowns can be resolved by the LES method. The frequency is obtained by f = f0 + x2/x1 or f0 + x4/x3. In this algorithm, the accuracy of the frequency estimation is affected by the simplified signal model, the size of the data window for LES, the sampling frequency and the truncation of the Taylor expansion. In addition, the matrix inversion that is used in every block calculation could bring numerical error in a real-time application. In order to improve the estimation accuracy of the LES method, some error correction techniques [50], [61] were proposed. These techniques increase the complexity of the algorithm while the accuracy may still be a problem.
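To make the linearization concrete, here is a minimal least-squares sketch in the spirit of Eq. (12), using a first-order (rather than three-term) Taylor expansion around the nominal frequency; the basis, variable names and unknown count are this example's own, not those of [49]:

```python
import numpy as np

def les_frequency(v, t, f0):
    """First-order LES sketch: expanding A*cos((w0+dw)t + phi) to first
    order in dw*t gives
      v(t) ~= y1*cos(w0 t) - y2*sin(w0 t) - y3*t*sin(w0 t) - y4*t*cos(w0 t),
    with y1 = A*cos(phi), y2 = A*sin(phi), y3 = dw*y1, y4 = dw*y2.
    The frequency is f = f0 + (y3/y1)/(2*pi), echoing f = f0 + x2/x1."""
    w0 = 2 * np.pi * f0
    c, s = np.cos(w0 * t), np.sin(w0 * t)
    A = np.column_stack([c, -s, -t * s, -t * c])   # design matrix
    y, *_ = np.linalg.lstsq(A, np.asarray(v), rcond=None)
    return float(f0 + (y[2] / y[0]) / (2 * np.pi))
```

As in the text, the accuracy is limited by the truncation of the Taylor expansion: the estimate degrades as the deviation or the window length grows.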
2) Newton Method: The Newton method in [60] takes the dc component, the frequency, the amplitude and the phase angle as unknown model parameters and estimates them simultaneously through an iterative process that aims at minimizing the error between the sample values and the model estimations. The updating step is derived from the Taylor expansion and the steepest descent principle. The details of a Gauss-Newton algorithm are given in the Appendix.

Using Newton methods, good accuracy can be achieved with a moderate number of iterations. Meanwhile, the phasor is also obtained simultaneously. However, the algorithm may not converge if the initial estimates of the parameters are far from the actual values. The dynamic variations of both amplitude and frequency could also delay the convergence. To overcome these problems, auxiliary methods such as ZC and DFT can be applied to initialize the frequency and amplitude and to supervise the convergence, as presented in the Appendix. Using the supervised Gauss-Newton (SGN) method, not only is the performance improved, the frequency estimation is also more robust under adverse signal conditions.

3) Least Mean Square Method: LMS is another type of iterative algorithm that uses a gradient factor to update the model parameters. The product of the input and the estimation error is used to approximate the gradient factor for each iteration. In [48], the complex signal out of the Clarke transform is used to estimate the frequency. The relationship between the current estimation v̂k and the previous estimation v̂k−1 is expressed as

v̂k = e^{jω∆T} v̂k−1 = wk−1 v̂k−1. (13)

The variable wk can be updated by

wk+1 = wk + µ ek v̂*k−1, (14)

where ek = vk − v̂k is the error between the sample value and the estimation, v̂*k−1 is the complex conjugate and µ is the tuning parameter. When the error ek is small enough, the frequency is derived from the variable wk,

f(t) = (1/2π∆T) tan⁻¹(Im(wk)/Re(wk)).

Using LMS, both the accuracy and the frequency tracking speed can be satisfactory.
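A minimal sketch of the recursion of Eq. (13) and Eq. (14) on a complex exponential input (the starting value, step size and the use of the measured previous sample as the reference are illustrative assumptions, not details from [48]):

```python
import numpy as np

def lms_frequency(v, fs, mu=0.05, f_start=60.0):
    """LMS frequency tracking: predict the next sample as w * v[k-1]
    (Eq. (13)), then correct w with the prediction error (Eq. (14)).
    v is a complex signal, e.g. the Clarke-transform composite signal."""
    dT = 1.0 / fs
    w = np.exp(2j * np.pi * f_start * dT)   # initial guess near nominal
    for k in range(1, len(v)):
        v_hat = w * v[k - 1]                # one-step prediction, Eq. (13)
        e = v[k] - v_hat                    # prediction error
        w = w + mu * e * np.conj(v[k - 1])  # gradient update, Eq. (14)
    return float(np.angle(w) / (2 * np.pi * dT))
```

For a clean unit-amplitude exponential the recursion contracts w toward e^{jω∆T} geometrically, so the recovered frequency settles to the signal frequency.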
In addition, as a curve fitting approach, the algorithm is insensitive to noise. However, the parameter µ in Eq. (14) needs to be adjusted to accelerate the convergence. The main issue of the method in [48] is that a complex model has to be used: as mentioned before, when there is three-phase unbalance, the complex signal out of the Clarke transform could cause error in the frequency and phasor estimation.

4) Kalman Filters: Established on stochastic theory and state variable theory, a Kalman filter predicts the state and error covariance one step ahead from the historical observations; the state estimation and error covariance are then updated with the new observations. To apply a Kalman filter for frequency estimation, the critical step is to establish a state difference equation and a measurement equation to relate the states and observations. The general expressions of these two equations are

x_{k+1} = A x_k + w_k,
z_k = H x_k + v_k,

where x_k is a vector of state variables, A represents the state transition matrix, z_k is the vector of current observations (the sample values), H is a relation matrix, and w_k and v_k represent the process noise and measurement noise, respectively. The state vector is recursively estimated by a Kalman filter as

x̂_k = x̂⁻_k + K_k (z_k − H x̂⁻_k),

where x̂⁻_k is the predicted state and K_k is the Kalman gain that can be derived from a set of established procedures as described in [24], [66]. If the process model is non-linear, the extended Kalman filter (EKF), which includes extra steps to linearize the models, can be applied. In [7], the state difference equation is established on the basis of a complex model with three state variables x1 = e^{jωTs}, x2 = A e^{jωkTs + jϕ} and x3 = A e^{−jωkTs − jϕ}, so that x1(k) = x1(k−1), x2(k) = x1(k−1) x2(k−1) and x3(k) = x3(k−1)/x1(k−1). The measurement equation is established as z_k = [0  0.5  −0.5] x_k. Using EKF procedures, the state variables can be estimated and updated, and the frequency is derived from the state variable x1 as f = arg(x1)/(2πTs). A Kalman filter has the advantage of quick dynamic response and it can effectively suppress white noise.
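To make the predict/update recursion concrete, here is a deliberately minimal scalar example (A = H = 1, a random-walk frequency model) that smooths noisy frequency readings; it illustrates only the general equations above, not the three-state complex model of [7].

```python
import random

def kalman_smooth(measurements, q=1e-6, r=2.5e-3, x0=60.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = x_k + w_k, z_k = x_k + v_k."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                  # predict: error covariance one step ahead
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update state with the new observation
        p = (1 - k) * p            # update error covariance
        out.append(x)
    return out

# noisy readings around 59.5 Hz (sigma = 0.05 Hz), deterministic seed
random.seed(1)
zs = [59.5 + random.gauss(0.0, 0.05) for _ in range(300)]
est = kalman_smooth(zs)
print(abs(est[-1] - 59.5) < 0.05)
```

The initial guesses x0 and p0 and the covariances q and r play exactly the role described next: poor choices slow the convergence or bias the estimate.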
However, the speed of convergence depends on the initial values of the state variables, the error covariance matrices and the noise covariances, which are set according to signal statistics. The accuracy is also influenced by the linearization and the simplification of the noise model. The computational expense of a Kalman filter is also considerable for a real-time application.

VI. PERFORMANCE EVALUATION

To evaluate the performance of a frequency relay or a frequency estimation method, three aspects should be considered: the accuracy, the estimation latency and the robustness. The maximum error, the average error and the estimation delay can be used as the performance indexes for a frequency relay. The robustness is reflected by these indexes under adverse conditions. Some frequency relays claim ±1mHz resolution, which shall not be taken as the performance index. A frequency estimation method could be extremely accurate when the input signal is stable and clean, but highly inaccurate when the signal is distorted or contaminated by harmonics and noise. The accuracy should therefore be obtained under adverse signal conditions to reflect the robustness of the relay. This also applies to the frequency estimation latency. Most frequency estimation methods use a window of data to derive the frequency, which causes estimation delay when the frequency is time-varying. It is desirable that the latency be as small as possible, under the constraints of the accuracy and robustness requirements. Because of the latency, the maximum error could be high, so the average error is a better index to evaluate the relay or the algorithm. The average error should be taken over a reasonable range according to the requirements of a specific application. For an underfrequency relay, a period of 10 cycles is sufficient to calculate the average error, as a time delay is usually set for the relay to ensure secure operation.
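As a small illustration, the maximum-error and average-error indexes for a pair of frequency traces might be computed as follows (the traces here are hypothetical, one value per cycle over a 10-cycle window):

```python
def error_indexes(f_ref, f_meas):
    """Maximum and average absolute error between two frequency traces."""
    errs = [abs(a - b) for a, b in zip(f_ref, f_meas)]
    return max(errs), sum(errs) / len(errs)

# hypothetical reference and measured traces for a 60 Hz system
ref  = [60.0, 59.9, 59.8, 59.7, 59.6, 59.5, 59.4, 59.3, 59.2, 59.1]
meas = [60.0, 59.9, 59.7, 59.7, 59.5, 59.5, 59.3, 59.3, 59.1, 59.1]
mx, avg = error_indexes(ref, meas)
print(round(mx, 3), round(avg, 3))  # → 0.1 0.04
```

The gap between the two indexes (0.1 Hz vs 0.04 Hz here) is why the average error is the more meaningful index for a latency-limited estimator.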
For evaluation purposes, some benchmark test signals can be used to obtain the maximum error and average error of the frequency measurement. The following conditions are proposed for setting up the benchmark signals:
1) The frequency tracking range is 20−65Hz;
2) To simulate power swing, the signal frequency is modulated by a 1Hz swing, and the signal amplitude is modulated by a 1.5Hz swing;
3) The signal is contaminated by 3rd, 5th, 7th harmonics, at 5% each;
4) The signal contains a dc component, whose time constant could be set as 0.5;
5) The signal contains random noise with a signal-to-noise ratio (SNR) of 40dB;
6) The signal contains impulsive noise;
7) To simulate subsynchronous resonance, the signal contains a 25Hz low frequency component.
Using individual conditions or combined conditions, a number of analytical signals can be produced to test different aspects of a frequency relay. In addition to the analytical signals, voltage or current signals obtained from transient simulation programs (such as ATP, SIMULINK, RTDS, etc.) can be used to test the performance of a frequency relay. A good relay should have consistent performance indexes for various test signals.
In this section, four frequency estimation algorithms are compared by simulation tests to demonstrate the advantages and disadvantages of these algorithms. They are:
1. The zero-crossing (ZC) method with linear interpolation;
2. The smart DFT (SDFT) method from [68];
3. The decomposition method (SDC) from [59];
4. The signal demodulation (SDM) method [11].
The details of these four algorithms are given in the Appendix. For the SDM, a 6th-order Chebyshev type II filter is used to achieve more than 100dB attenuation for the high frequency component with reasonable filter delay. MATLAB is used to implement the algorithms and to generate test signals that disclose different aspects of each algorithm. For all the discrete test signals, the sampling rate is fixed at 3840Hz.
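As an illustration, several of the conditions above (swing modulation, harmonics, dc component and white noise) might be combined into one discrete benchmark signal. The swing depths (0.5Hz for frequency, 0.1 p.u. for amplitude) and the 0.3 p.u. initial dc offset below are arbitrary choices not fixed by the list:

```python
import math
import random

def benchmark_signal(fs=3840.0, dur=1.0, f0=60.0, snr_db=40.0, seed=0):
    """Swing-modulated test signal with 5% 3rd/5th/7th harmonics, a
    decaying dc offset (time constant 0.5 s) and white noise at snr_db."""
    random.seed(seed)
    n = int(fs * dur)
    # fundamental rms is 1/sqrt(2) p.u.; scale the noise for the target SNR
    noise_std = (1.0 / math.sqrt(2)) / (10 ** (snr_db / 20))
    out, phase = [], 0.3
    for k in range(n):
        t = k / fs
        f = f0 + 0.5 * math.sin(2 * math.pi * 1.0 * t)      # 1 Hz frequency swing
        amp = 1.0 + 0.1 * math.cos(2 * math.pi * 1.5 * t)   # 1.5 Hz amplitude swing
        phase += 2 * math.pi * f / fs                       # integrate f(t) into phase
        v = amp * math.sin(phase)
        v += sum(0.05 * math.sin(h * phase) for h in (3, 5, 7))  # 5% harmonics
        v += 0.3 * math.exp(-t / 0.5)                       # decaying dc component
        v += random.gauss(0.0, noise_std)
        out.append(v)
    return out

sig = benchmark_signal()
print(len(sig))  # → 3840
```

Conditions can be toggled off individually (e.g. zero the harmonic and dc terms) to isolate one aspect of an algorithm at a time.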
To make a fair comparison, additional pre-filters or post-filters are not used.

A. Stationary Signal with Off-nominal Frequencies

In a power system, a voltage signal under system steady state is close to a stationary signal whose frequency and amplitude are constants. Using stationary signals with off-nominal frequencies, the basic performance of each algorithm can be disclosed. A number of test signals are produced per

v(t) = sin(2πf t + 0.3),   (15)

where f = 61.5Hz, 59.3Hz, 58.1Hz, 45.2Hz, 20.3Hz. Excluding the initial response, the maximum error of each algorithm is given in Table I. The tests demonstrate that all the selected algorithms have the potential to achieve high accuracy for frequency relaying. The ZC, SDFT and SDC have a wide range for frequency metering without compromising the accuracy. The range for SDM is limited by the lowpass filter characteristic. The maximum errors for SDFT and SDC are almost zero, because the leakage error for SDFT and the filter error for SDC are completely canceled out. The error of ZC is mainly from the zero-crossing detection, which is improved by linear interpolation. The error from SDM is less than 1mHz if the frequency deviation is less than 10Hz.

TABLE I: MAXIMUM ERRORS FROM STATIONARY SIGNAL TEST

f        ZC        SDFT        SDC          SDM
61.5Hz   0.1mHz    5E-9mHz     2E-10mHz     0.72mHz
59.3Hz   0.13mHz   5E-9mHz     2E-10mHz     0.49mHz
58.1Hz   0.04mHz   7E-9mHz     1.4E-10mHz   0.35mHz
45.2Hz   0.04mHz   7E-9mHz     3.5E-10mHz   2.22mHz
20.3Hz   2E-3mHz   4.2E-8mHz   2.2E-9mHz    0.60Hz

B. Tracking the frequency change

During normal operation of the power system, the frequency follows the load / generation variations and fluctuates around the nominal value in a range of about ±0.02Hz. When there is a major deficit of active power in the system, the frequency drops at a rate determined by the power unbalance and the system spinning reserve. For system protection and unit protection, it is desirable that the frequency change be detected promptly and accurately.
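For reference, the zero-crossing estimator with linear interpolation compared in these tests (detailed in the Appendix) can be sketched as follows; consecutive zero-crossings of a sinusoid are half a cycle apart, so each pair yields f = 1/(2(t2 − t1)):

```python
import math

def zc_frequency(samples, fs):
    """Average frequency from linearly interpolated zero-crossings."""
    crossings = []
    for m in range(1, len(samples)):
        a, b = samples[m - 1], samples[m]
        if a == 0.0 or a * b < 0.0:      # sign change between samples m-1 and m
            x = m - b / (b - a)          # interpolated fractional sample index
            crossings.append(x / fs)     # zero-crossing time t = x * Ts
    # consecutive crossings are half a cycle apart: f = 1 / (2*(t2 - t1))
    ests = [1.0 / (2.0 * (t2 - t1)) for t1, t2 in zip(crossings, crossings[1:])]
    return sum(ests) / len(ests)

# one second of the stationary test signal at 61.5 Hz, 3840 Hz sampling
fs = 3840.0
v = [math.sin(2 * math.pi * 61.5 * k / fs + 0.3) for k in range(int(fs))]
print(round(zc_frequency(v, fs), 3))  # → 61.5
```

Averaging many half-cycle estimates is what keeps the residual interpolation error at the sub-mHz level reported in Table I.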
To demonstrate the frequency tracking capability, a few test signals with time-varying frequencies are produced and tested.

1) Signal with time-varying frequency: The same equation as (15) is used to produce the test signals with variable frequency. To simulate the voltage signal under load / generation unbalance as the consequence of a major generating unit loss, and to reflect the oscillating characteristic of the frequency change, the frequency is expressed by

f(t) = 57 + 2(1 + 0.4e^{−t} cos(1.5t − 0.1)) + 0.2e^{−7t/10} cos(12t).

Fig. 6 presents the frequency tracking results of the four algorithms. The dashed line in each figure represents the actual time-varying frequency and the solid line gives the frequency tracking results. The ZC, SDFT and SDC give better dynamic accuracy than SDM. In this test, ZC uses half a cycle to update the frequency so that the estimation delay is small; for an actual application, more cycles are needed for ZC. The latency of SDM comes from the lowpass filter delay, which also has an obvious effect on the initializing stage when the signal is applied. The maximum dynamic error caused by estimation latency is less than 0.08Hz for SDM and less than 0.04Hz for ZC, SDFT and SDC. This test case demonstrates that all these algorithms perform well with time-varying frequency.

Fig. 6. Track the frequency that drops dynamically, using: (a) ZC; (b) SDFT; (c) SDC; (d) SDM.

2) Both frequency and amplitude are time-varying: In addition to the frequency dynamics, the dynamics of the signal amplitude could also have an impact on a frequency estimation algorithm. In power systems, the voltage signal is generally used for frequency estimation since it is more stable than the current signal.
But there are cases where only current signals are available, such as line differential relays or bus differential relays in some substations. What's more, when the system is experiencing asynchronous oscillations, the voltage signal and current signal oscillate in both frequency and amplitude. To verify the performance of frequency estimation algorithms under power swing conditions, a signal that is time-varying in both frequency and amplitude is produced per the following equations:

f(t) = 59.5 + sin(2πt),   A(t) = √2 + 0.3 cos(3πt),

where the amplitude is modulated by a 1.5Hz swing and the frequency is modulated by a 1.0Hz swing. The simulation results of the four algorithms are shown in Fig. 7. In principle, ZC is not affected by the variation of signal amplitude. For SDFT, the impact of the time-varying amplitude is minor since the three successive phasors used for each round of estimation have similar magnitudes under a high sampling rate. Though SDM still has obvious latency in frequency tracking, it is not caused by the amplitude variations. More dynamic error occurs for SDC because the algorithm assumes that the signal amplitude is the same for all samples in the data window.

Fig. 7. Track the frequency when both amplitude and frequency are time-varying, using: (a) ZC; (b) SDFT; (c) SDC; (d) SDM.

Fig. 8. Track the frequency that has step change, using: (a) ZC; (b) SDFT; (c) SDC; (d) SDM.

3) The frequency step test: The step response test is useful to disclose the characteristics of a signal processing algorithm in terms of how it responds to signal changes in the time domain. However, the step test for power system frequency
estimation has no correspondence in real life. Due to the mass inertia of the rotating machines in the system, it is impossible for the system frequency to have any significant step change. Therefore, an analytical signal with a 0.5Hz step change is more than enough to test the frequency estimation algorithms. Fig. 8 gives the measured frequency from the four algorithms in response to the MATLAB test signal that changes from 60Hz to 59.5Hz within one sampling interval. The transition periods for ZC, SDFT and SDC are within 2 cycles, while it takes about 10 cycles for SDM to settle down. The slow response of SDM demonstrates the impact of the lowpass filter that SDM relies on.

C. Signal containing harmonics, noise and dc component

For a power system signal, the odd-number harmonics such as the 3rd, 5th, 7th, ... harmonics are most likely to occur due to widely-used power electronics and nonlinear loads. For a system with series-compensated capacitors, the signal may contain a low frequency component due to subsynchronous resonance. What's more, the signal could be contaminated by noise originating from system faults, switching operations, or the electronic circuits. To compare the selected algorithms, a number of discrete signals are produced to simulate extreme conditions in a power system.

1) Signal containing 3rd, 5th, 7th harmonics: A signal with 3rd, 5th, 7th harmonics is produced per the following equation:

v(t) = √2 sin(2πf t + 0.3) + 0.05√2 sin(6πf t) + 0.05√2 sin(10πf t) + 0.05√2 sin(14πf t),

where f = 59.5Hz. In the equation, the magnitude of the fundamental frequency component is 1.0 p.u. and the percentage of the 3rd, 5th, 7th harmonics is 5% each. The test results are shown in Fig. 9. The performance of SDM is the best among the four, simply because of the lowpass filter. ZC is less susceptible to harmonics as the zero-crossings on the time axis are mainly determined by the fundamental frequency component, even though the signal is distorted by harmonics.
SDFT is susceptible to harmonics because the leakage errors of the DFT from the high frequency components are not compensated. One solution is to add harmonics to the signal model [69] and to compensate the leakage error with the same technique as for the fundamental frequency. Another solution is to use a lowpass filter to pre-process the signal and/or to use a moving average filter to smooth the results. SDC is better than SDFT, but harmonics still make a significant difference for it because the signal model and the frequency equation in SDC are all based on the fundamental frequency.

Fig. 9. Frequency measurement results from signal containing 3rd, 5th, 7th harmonics, using: (a) ZC; (b) SDFT; (c) SDC; (d) SDM.

2) Signal containing low frequency component: In power systems, the interaction between the turbine-generators and series capacitor banks or static VAR control systems could cause subsynchronous resonance that introduces a low frequency component into the voltage signal used for frequency measurement. A low frequency component ranging from 10Hz to 45Hz could last long enough to cause problems for a frequency relay. To investigate the response of the different algorithms, a signal with 10% of a 25Hz component is produced per the following equation:

v(t) = √2 sin(2πf t + 0.3) + 0.1√2 sin(2π · 25t).

Part of the test signal is shown in Fig. 10 and the test results are shown in Fig. 11. It turns out that subsynchronous resonance could have a serious impact on all the frequency estimation algorithms. SDM performs the best relatively, while the errors from ZC, SDFT and SDC are unacceptable. One solution is to detect the low frequency component by DFT and use a notch filter to remove it. However, the DFT window has to be long enough to spot the low frequency component.
Another solution is to design a frequency security logic to ignore large frequency variations within a few cycles.

3) Signal containing dc component: After a system disturbance or a switching operation, the voltage signal for frequency measurement could contain a dc component that decays exponentially. A test signal is produced per the following equation:

v(t) = 0.5e^{−t/0.3} + √2 sin(2πf t + π/6).

Fig. 10. Part of the test signal containing low frequency component.

Fig. 11. Frequency measurement results from signal containing low frequency component, using: (a) ZC; (b) SDFT; (c) SDC; (d) SDM.

The test results shown in Fig. 12 indicate that the dc component can have a significant impact on ZC, since the time intervals of the zero-crossings are significantly changed by the dc component. SDM, SDFT and SDC all use the signal waveform to derive the frequency, so the impact of the dc component is smaller. In most applications, a bandpass filter is applied to remove the dc component at the price of extra delay.

Fig. 12. Frequency measurement results from signal with dc component, using: (a) ZC; (b) SDFT; (c) SDC; (d) SDM.

Fig. 13. Signal with impulsive noise.

Fig. 14. Frequency measurement results from signal with impulsive noise, using: (a) ZC; (b) SDFT; (c) SDC; (d) SDM.
4) Signal containing impulsive noise: The voltage signal for frequency measurement could be contaminated by impulsive noise or white noise. By changing randomly-selected sample values, a test signal with impulsive noise is produced to test the frequency measurement algorithms. Fig. 13 presents a portion of the signal. The frequency measurement results of the four algorithms are shown in Fig. 14. The impulsive noise has an adverse impact on all four algorithms. SDM is comparatively better but the results are still unacceptable. For ZC, the impulsive noise only has an influence when it occurs around the zero-crossings. To resolve the problem, either the impulsive noise shall be removed at the pre-processing stage, or the singular frequency estimates can be discarded at the post-processing stage.

5) Signal containing white noise: White noise could be introduced by electromagnetic interference or the deterioration of electronic components. A test signal with white noise is produced by

v(t) = A sin(2πf t + 0.3) + ε.

The parameter ε represents the noise, which can be produced by a random function in MATLAB. The signal-to-noise ratio is SNR = 20 log(1/0.01) = 40dB. The test results are shown in Fig. 15. Again, SDM demonstrates its strong anti-noise capability, which is attributed to the high stopband attenuation of the lowpass filter for SDM. ZC is relatively less susceptible to white noise. For SDFT and SDC, additional filters must be applied to reduce the error caused by random noise.

Fig. 15. Frequency measurement results from signal with white noise, using: (a) ZC; (b) SDFT; (c) SDC; (d) SDM.

D. Signal from power system simulation

The above test cases are based on analytical signals to disclose different aspects of the selected algorithms for frequency estimation.
To simulate a real system, a test signal is generated from a simulation model as shown in Fig. 16. In this two-source model, one of the sources is a 200MVA synchronous machine that is controlled by a hydraulic turbine, governor and excitation system. The other source is a simplified voltage source with a short circuit capacity of 1500MVA. The system is initialized to start in a steady state with the generator supplying 200MW of active power to the load. After 0.55s, the breaker that connects the main load to the system is tripped. Because of the sudden loss of load and the inertia of the prime mover, the generator internal voltage starts to oscillate until the generator control system damps the oscillations. The frequency tracking results from the four algorithms are shown in Fig. 17. The rotor speed is shown in (a) and the frequency tracking results from the four algorithms are plotted in (b)-(e). Before the breaker is tripped, the voltage signal is stable and the frequency output of each algorithm is exactly 60.0Hz. After the breaker is tripped, both the voltage signal and the current signal start to oscillate. Since the voltage is measured at the generator terminal, the frequency change should reflect the change of the rotor speed. From Fig. 17, all four algorithms can track the frequency change in line with the rotor speed.

Fig. 16. A two-source system model (a 200MVA/13.8kV synchronous machine with hydraulic turbine and excitation system, a 13.8kV/230kV transformer, and a 1500MVA, 230kV voltage source).

Fig. 17. Frequency measurement results from simulation model test.
(a) Generator rotor speed; (b) Tracking using ZC; (c) Tracking using SDFT; (d) Tracking using SDC; (e) Tracking using SDM.

There is a frequency jump for each frequency estimation method after the breaker is tripped. This drastic frequency variation is caused by the phase abnormality of the signal at the moment of the switching operation and should not be counted as the actual node frequency change. Comparing the four algorithms, SDM gives the most stable and smooth frequency tracking results; however, its recovery from the abnormal frequency jump is also the slowest. The results from SDFT are the worst simply because it is highly susceptible to the harmonics and noise contained in the signal.

VIII. FREQUENCY RELAY DESIGN AND TEST

From the simulation tests, a frequency estimation algorithm alone is not enough to meet the practical requirements of frequency relaying. In order to obtain stable and accurate frequency measurements, it is necessary to add digital filters and security conditions to process the signal and the frequency estimates. Consequently, latency is introduced to the frequency measurement because of the filtering delay and estimation delay. A critical aspect of frequency relay design is to achieve a balance between the accuracy and the group delay, under the condition of robustness. This section discusses a few practical issues about the design and test of frequency relays.

A. The filtering and post-processing

For a digital frequency relay, an analog anti-alias filter is usually applied to remove the out-of-band components before the A/D conversion (ADC). After ADC, it is necessary to add a digital band-pass filter to remove the harmonics and dc component. For a 60Hz system, the limiting frequencies of the filter passband can be 20−65Hz. The stopband attenuation should be as high as possible. However, the filter delay could also become significant for excessive stopband attenuation.
To compromise with the filter delay, the average stopband attenuation can be specified at 20−40dB, which means that the dc component and harmonics are suppressed to 1−10% on average. If the filter stopband has valleys corresponding to high attenuation, it is also desirable that the valleys be close to the harmonics.

A band-pass filter can effectively remove the dc component and harmonics, and helps to reduce white noise to a certain degree. But it cannot handle impulsive noise. One solution for impulsive noise is to use security conditions at the post-processing stage. Another solution is to use an impulsive noise detector at the pre-processing stage. The detector can identify impulsive noise as singular sample values according to the adjacent samples. Once detected, the contaminated sample can be replaced by a value that is close to the adjacent samples.

After the frequency estimation, a post-filter is helpful for better accuracy, especially when the measured frequency has minor oscillations around the actual frequency. In [39], an 80-coefficient Hamming-type FIR filter is applied for post-filtering. In [33], a binomial filter is applied. In many cases, a simple moving average filter is sufficient to improve the measurement accuracy. As a matter of fact, a moving average filter is the optimal filter that can reduce random white noise while keeping a sharp step response [56]. However, the length of the moving average filter needs to be carefully selected to achieve a balance between the accuracy and the dynamic response of the overall process.

If the signal is distorted under conditions such as CVT transients, CT saturation, system disturbances, switching operations or subsynchronous resonance, erroneous frequency estimates may still exist after the pre-filtering and post-filtering, because the filters may not handle all the signal abnormalities. Hence, it is important to have some security conditions to validate the frequency estimation.
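As a simple illustration of this post-processing stage, a moving average combined with a consecutive-estimate security check might look like the sketch below; the window length and the 0.5Hz rejection threshold are arbitrary illustrative values, not settings from an actual relay.

```python
def post_process(estimates, window=8, max_step=0.5):
    """Reject implausible jumps between consecutive raw frequency
    estimates, then smooth the accepted values with a moving average."""
    accepted = []
    for f in estimates:
        # security condition: consecutive estimates must not differ too much
        if accepted and abs(f - accepted[-1]) > max_step:
            f = accepted[-1]              # hold the last valid estimate
        accepted.append(f)
    smoothed = []
    for i in range(len(accepted)):
        lo = max(0, i - window + 1)       # moving average over up to `window` values
        chunk = accepted[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw = [60.0, 60.0, 63.2, 60.0, 60.0, 60.0, 60.0, 60.0]   # one impulsive outlier
out = post_process(raw, window=4)
print(all(abs(f - 60.0) < 0.01 for f in out))  # → True
```

A longer window improves the noise suppression but adds exactly the dynamic-response delay discussed above.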
For example, the difference between two consecutive estimates should be small enough to accept the new estimate; the change of a few consecutive estimates should be consistent; etc. These conditions are based on the fact that the power system frequency cannot have drastic changes during a few sampling intervals. The security check should also reject the estimates for the first few cycles after the input signal is applied to the relay, because a numerical algorithm needs a few cycles of data to stabilize the estimates.

B. The df/dt measurement

The frequency rate-of-change (df/dt) is a second criterion in a load shedding scheme or remedial action scheme to supervise or accelerate the load shedding. After the frequency is estimated, the frequency rate-of-change is simply computed from the frequency difference and the sampling interval ∆t,

df/dt = (f(t) − f(t−1)) / ∆t.

This equation could amplify the error or the high frequency components that are contained in the estimated frequency. Hence a lowpass filter and/or a moving average is necessary to filter the df/dt outputs. After the filtering, some security conditions similar to those for frequency estimation shall be used to remove abnormal df/dt values.

C. Some test results of a frequency relay

Two test cases are presented in this section to show the frequency tracking of an actual relay that is based on the zero-crossing principle. The relay is able to achieve 1mHz accuracy for steady-state signals and can track the frequency over a large range. To verify its performance under dynamic conditions, the first test signal is produced per the following equations:

v(t) = 20e^{−t/0.5} + A(t) sin(2πf(t)t + 0.3) + 0.05A(t) sin(6πf(t)t) + 0.05A(t) sin(10πf(t)t) + 0.05A(t) sin(14πf(t)t) + εw + εp,

where f(t) = 60.0 + sin(2πt) and A(t) = 40 + 10 cos(3πt). From this equation, the signal contains a dc component (20e^{−t/0.5}), harmonics, white noise εw and impulsive noise εp. The harmonics are 3rd, 5th, 7th at 5% each and the SNR of the white noise is 40dB.
In addition, both the frequency f(t) and amplitude A(t) are time-varying. This signal represents an extreme condition that is designed to challenge the relay performance. MATLAB is used to create the signal and to save it in a COMTRADE file, which is then played back to the relay by a real time digital simulator (RTDS). From the test results in Fig. 18, the impact of the dc component, harmonics and noise is reflected by the delay and error in the frequency tracking. Compared with the previous simulation test results, the relay shows no drastic frequency changes or abnormal frequency values during the tracking process, which is attributed to the filters and the security check conditions for frequency estimation.

The second signal is produced by the power system simulation model shown in Fig. 16. Both the frequency and amplitude of the signals oscillate due to the power swing. Compared with the simulation results in Fig. 17, the test results in Fig. 19 are more stable. At the moment when the breaker is tripped, there is no sudden jump in the tracking frequency, and the frequency is stable throughout the power swing. It shows that the relay is able to handle phase abnormalities, frequency / amplitude oscillations, harmonics and noise, which satisfies the robustness requirement.

Fig. 18. Frequency measurement from an actual relay in response to the analytical signal.

Fig. 19. Frequency measurement from an actual relay in response to the power system simulation signal.

D. The test recommendations

To test a frequency relay, the main purpose is to verify the accuracy of the frequency measurement and the operating time. The accuracy and operating time under different signal conditions should be tested to verify the robustness of the relay. The test signals need to be practical to reflect power system operating conditions.
For example, a step test with an over 0.5Hz step change may not be appropriate, since such a step change would not happen to the actual system frequency due to the inertia of the rotating machines. The following test items are recommended:
1) Inject signals with different off-nominal frequencies. This test is to verify the steady-state accuracy and the measurement range. The accuracy should be consistent for different off-nominal frequencies;
2) Inject a signal with ramping-down frequency; the ramping step could be 0.1Hz. If possible, the injected signal, the measured frequency and the frequency rate-of-change shall be recorded in COMTRADE files for further analysis;
3) Use the same ramping test as above to check the operating time of the frequency relay when the signal frequency is under/over the setting. The ramping step can be varied in a reasonable range (such as 0.1Hz - 0.3Hz) to check the relay response;
4) Inject the relay with a contaminated signal. The 3rd, 5th, 7th harmonics at 5% each can be added onto the main signal. The dc component and random noise shall also be added if possible; otherwise, the playback test can be used instead;
5) If the test equipment supports playback tests, a number of COMTRADE files with simulated test signals can be created and played back to the frequency relay. MATLAB or MATHCAD could be used to produce analytical signals per the equations in this paper. Transient simulation software such as EMTP can also be used to produce signals using power system models. The tracking frequency and frequency rate-of-change can be recorded in COMTRADE files to deduce the maximum error, average error and response time.
These tests should be designed to verify the robustness of the frequency relay under adverse conditions. During the tests, it should be noted that a frequency relay needs a few cycles to stabilize at the beginning of an injection test.
Hence it is recommended to give the relay at least ten cycles of stable signal before any signal variations. The frequency measurement error during this initialization process should not be counted.

As a fundamental characteristic of the considered signal, the frequency and its measurement are important not only to under-frequency relays but also to other protective relays. Many numerical algorithms were discussed to pursue both high accuracy and fast response in frequency measurement. However, it should be emphasized that power system frequency is not an instantaneous value, even though the concept of instantaneous frequency could be utilized in some algorithms. Due to the mass inertia of rotating machines, the system frequency cannot have a step change or a fast change. It is justifiable to use a window of data to compute the average frequency to approximate the system frequency. A delay of a few cycles is not only allowed but necessary for robust frequency relaying. The accuracy of frequency measurement only makes sense when the signal conditions are specified. If the signal is contaminated or distorted, it is more important to maintain a stable frequency measurement than to pursue a fast response. To design a frequency relay, digital filters and security check conditions should be applied to avoid abnormal frequency output under various conditions, so that the frequency relay or other protective relays can securely protect the machines and the system.

APPENDIX

A. Zero-crossing with linear interpolation

For the zero-crossing method implemented numerically, it is important to detect the zero-crossings accurately on the time axis, and linear interpolation is usually applied. A zero-crossing is found between two neighboring samples with different signs. The crossing point of the time axis and the line that connects the two samples is taken as the zero-crossing point. The line is expressed as

p(x) = (x − m + 1) v_m − (x − m) v_{m−1},
where $v_{m-1}$ and $v_m$ are the sample values and $x$ and $m$ are sample indexes. Setting $p(x) = 0$, the zero-crossing in terms of the sample index is obtained as

$x = m - \dfrac{v_m}{v_m - v_{m-1}}.$

Though it is still called a sample index, $x$ is usually a fractional value between $m-1$ and $m$. Letting $T_s$ denote the sample interval, the zero-crossing moment on the time axis is $t = xT_s$. The frequency is then calculated from two consecutive zero-crossings at $t_1$ and $t_2$ as

$f = \dfrac{1}{2(t_2 - t_1)}.$

B. Yang & Liu's smart DFT method (SDFT)

In [68], the phasor and the frequency are derived from a compensated DFT method in which the leakage error is completely canceled out. The signal model $v(t) = V\cos(2\pi(f_0+\Delta f)t+\phi)$ is first written as

$v(t) = \tfrac{1}{2}\left(v\,e^{j2\pi(f_0+\Delta f)t} + v^{*}e^{-j2\pi(f_0+\Delta f)t}\right),$    (18)

where $v = Ve^{j\phi}$ is the phasor, $f_0$ is the nominal frequency and $\Delta f$ represents the frequency deviation. Applying a full-cycle DFT to $v(t)$, the estimate of the fundamental frequency component is

$\hat v_r = \dfrac{2}{N}\sum_{i=0}^{N-1} v_{r+i}\, e^{-j2\pi i/N},$    (19)

where the subscript $r$ is the index of the first sample in the DFT window. Discretizing Eq. (18) by $t = \tfrac{k}{f_0 N}$ and substituting it into Eq. (19), the estimated phasor can be expressed as

$\hat v_r = A_r + B_r,$    (20)

with

$A_r = \dfrac{v}{N}\,\dfrac{\sin N\theta_1}{\sin\theta_1}\, \exp\!\left[\dfrac{j\pi}{f_0 N}\left(\Delta f(2r+N-1) + 2f_0 r\right)\right],$

$B_r = \dfrac{v^{*}}{N}\,\dfrac{\sin N\theta_2}{\sin\theta_2}\, \exp\!\left[-\dfrac{j\pi}{f_0 N}\left(\Delta f(2r+N-1) + 2f_0(r+N-1)\right)\right],$

$\theta_1 = \dfrac{\pi\,\Delta f}{f_0 N}, \qquad \theta_2 = \dfrac{\pi(2f_0+\Delta f)}{f_0 N}.$

Applying the DFT to two consecutive data windows, the following relationship holds:

$A_{r+1} = A_r a, \qquad B_{r+1} = B_r a^{-1},$    (21)

where

$a = \exp\!\left[\dfrac{j\,2\pi(f_0+\Delta f)}{f_0 N}\right].$    (22)

From Eq. (20) and Eq. (21),

$\hat v_{r+1} = A_r a + B_r a^{-1},$    (23)

$\hat v_{r+2} = A_r a^{2} + B_r a^{-2}.$    (24)

Combining Eqs. (20)-(24), $A_r$ and $B_r$ can be eliminated, which yields the second-order polynomial

$\hat v_{r+1}a^{2} - (\hat v_r + \hat v_{r+2})a + \hat v_{r+1} = 0.$

Its root is

$a = \dfrac{(\hat v_r + \hat v_{r+2}) \pm \sqrt{(\hat v_r + \hat v_{r+2})^{2} - 4\hat v_{r+1}^{2}}}{2\hat v_{r+1}}.$

From Eq. (22), the frequency is calculated as

$f = f_0 + \Delta f = \dfrac{f_0 N}{2\pi}\,\arg(a).$

This method makes no approximation in the DFT; the leakage error is canceled out using three consecutive phasors. Hence it is highly accurate for a stable signal and can follow frequency changes with good speed.

C. Szafran's signal decomposition method (SDC)

In [59], the orthogonal decomposition method is modified to eliminate the error caused by unequal filter gains at off-nominal frequencies.
The signal is first decomposed by a pair of orthogonal FIR filters, such as a sine filter and a cosine filter. For an input sinusoidal signal in discrete form, the outputs $y_c(n)$ and $y_s(n)$ of the filters are

$y_c(n) = |F_c(\omega)|A\cos(n\omega T_s + \phi + \alpha(\omega)),$

$y_s(n) = |F_s(\omega)|A\sin(n\omega T_s + \phi + \alpha(\omega)),$

where $|F_c(\omega)|$ and $|F_s(\omega)|$ are the filter gains at frequency $\omega$. A new signal $g_k(\omega)$ can be composed from the historical outputs $y_c(n-k)$ and $y_s(n-k)$:

$g_k(\omega) = y_s(n)y_c(n-k) - y_c(n)y_s(n-k).$    (25)

Again, by using the historical signal $g_{2k}(\omega)$, an expression independent of the signal magnitude is obtained:

$\dfrac{g_{2k}(\omega)}{g_k(\omega)} = 2\cos(k\omega T_s).$    (26)

Substituting Eq. (25) into Eq. (26), the frequency is calculated as

$f = \dfrac{1}{2\pi k T_s}\arccos\!\left(\dfrac{g_{2k}(\omega)}{2\,g_k(\omega)}\right).$

Since the error incurred by the orthogonal filters is canceled out in Eq. (26), high accuracy can be achieved.

D. Signal demodulation method (SDM)

Using the simple signal model in Eq. (8), a new signal $Y(t)$ is generated by multiplying $v(t)$ by a reference signal at the nominal frequency $\omega_0$:

$Y(t) = v(t)e^{-j\omega_0 t}.$

The signal $Y(t)$ has a low-frequency component and a near-double-frequency ($\omega+\omega_0$) component. A low-pass filter can be applied to remove the near-double-frequency component, so that the remaining signal $y(t)$ contains the frequency-deviation information. Using the discrete form of $y(t)$, another complex signal $U(k)$ is produced as

$U(k) = y(k)\,y^{*}(k-1),$

where $y^{*}(k-1)$ is the conjugate of $y(k-1)$. The frequency is derived as

$f(k) = f_0 + \dfrac{f_s}{2\pi}\arctan\!\left(\dfrac{\mathrm{Im}(U(k))}{\mathrm{Re}(U(k))}\right).$

E. The supervised Gauss-Newton method

The supervised Gauss-Newton (SGN) method is based on [60], with two additional auxiliary algorithms, ZC and DFT, used to supervise the Gauss-Newton process. The ZC and DFT methods roughly estimate the frequency and the signal amplitude so that the parameters can be properly initialized in each iteration. The Gauss-Newton process is an iterative method that estimates the model parameters by minimizing the error between the estimation and the observation in a least-squares sense. Let the parameter vector be $x$ and the objective function be $f(x)$.
Starting from an initial point $x_0$, if the descent condition $|f(x_{k+1})| < |f(x_k)|$ can be enforced, a series of vectors $x_1, x_2, \ldots$ can be iteratively calculated, and $x$ will finally converge to $x^{*}$, a minimizer of the objective function $f(x)$. In SGN, the parameter vector $x$ is selected per the basic signal model in Eq. (8),

$x = [\,A(t)\;\; \omega(t)\;\; \phi(t)\,]^{T}.$    (29)

The parameter update is expressed as

$x_{k+1} = x_k - \Delta x_k.$    (30)

The objective function $f(x)$ is defined as the error between the estimated signal value $v_{est}(x)$ from Eq. (8) and the sample value $v_{obv}$,

$f(x) = v_{est}(x) - v_{obv}.$

The Gauss-Newton updating step is

$\Delta x = (J^{T}J)^{-1}J^{T}f(x),$    (31)

where $J(x)$ is the Jacobian matrix containing the partial derivatives with respect to the estimated parameters. Eq. (31) maintains the descent direction that minimizes the objective function $|f(x)|$. To apply the algorithm in a real-time application, it is necessary to stop the iterations when the error function $f(x)$ is close to zero, or when the gradient $J^{T}f(x)$ is close to zero. It can be shown that the Gauss-Newton method achieves only linear convergence when $x$ is far from the minimizer $x^{*}$, while quadratic convergence is possible when $x$ is close to $x^{*}$; if the initial $x$ differs significantly from $x^{*}$, convergence may not be achieved at all. To obtain proper initial parameters and fast convergence, two auxiliary algorithms are used to provide initial parameters and to supervise each Gauss-Newton updating step: the recursive DFT method estimates the signal amplitude, and the zero-crossing (ZC) method estimates the frequency. The DFT and ZC results will be close to the actual values but are not necessarily accurate. With this combined approach, fast convergence of the Gauss-Newton method can be achieved.

References

[1] R. Aghazadeh, H. Lesani, M. Sanaye-Pasand, and B. Ganji. New technique for frequency and amplitude estimation of power system signals. IEE Proc. Generation, Transmission and Distribution, 152(3):435–440, May 2005.
[2] M.
Akke. Frequency estimation by demodulation of two complex signals. IEEE Trans. Power Delivery, 12(1):157–163, Jan. 1997.
[3] L. Asnin, V. Backmutsky, M. Gankin, M. Blashka, and J. Sedlachek. DSP methods for dynamic estimation of frequency and magnitude parameters in power system transients. In Proc. of the 2001 IEEE Porto Power Tech. Conf., volume 4, pages 532–538, Porto, Portugal, Sept. 2001.
[4] M. M. Begović, P. M. Djurić, S. Dunlap, and A. G. Phadke. Frequency tracking in power networks in the presence of harmonics. IEEE Trans. Power Delivery, 8(2):480–486, April 1993.
[5] G. Benmouyal. An adaptive sampling interval generator for digital relaying. IEEE Trans. Power Delivery, 4(3):1602–1609, July 1989.
[6] M. M. Canteli, A. O. Fernandez, L. I. Eguíluz, and C. R. Est. Three-phase adaptive frequency measurement based on Clarke's transformation. IEEE Trans. Power Delivery, 21(3):1101–1105, July 2006.
[7] P. K. Dash, R. K. Jena, G. Panda, and A. Routray. An extended complex kalman filter for frequency measurement of distorted signals. IEEE Trans. Instrumentation and Measurement, 49(4):746–753, August 2000.
[8] P. K. Dash, S. K. Panda, B. Mishra, and D. P. Swain. Fast estimation of voltage and current phasors in power networks using an adaptive neural network. IEEE Trans. Power Systems, 12(4):1494–1499, Nov.
[9] P. K. Dash, B. K. Panigrahi, and G. Panda. Power quality analysis using S-transform. IEEE Trans. Power Delivery, 18(2):406–411, Apr.
[10] P. K. Dash, A. K. Pradhan, and G. Panda. Frequency estimation of distorted power system signals using extended complex kalman filter. IEEE Trans. Power Delivery, 14(3):761–766, July 1999.
[11] P. Denys, C. Counan, L. Hossenlopp, and C. Holweck. Measurement of voltage phase for the French future defence plan against losses of synchronism. IEEE Trans. Power Delivery, 7(1):62–69, Jan. 1992.
[12] V. Eckhardt, P. Hippe, and G. Hosemann.
Dynamic measuring of frequency and frequency oscillations in multiphase power system. IEEE Trans. Power Delivery, 4(1):95–102, Jan. 1989.
[13] K. M. Elnaggar and H. K. Youssef. A genetic based algorithm for frequency relaying applications. Electric Power System Research, 55(3):173–178, Sept. 2000.
[14] W. Fromm, A. Halinka, and W. Winkler. Accurate measurement of wide-range power system frequency changes for generator protection. In Proc. of 6th Int. Conf. on Developments in Power System Protection, pages 53–57, Nottingham, UK, March 1997.
[15] R. M. Gardner, J. K. Wang, and Y. Liu. Power system event location analysis using wide-area measurements. In Proc. of the 2006 IEEE Power Engineering Society General Meeting, June 2006.
[16] M. K. Ghartemani and M. R. Iravani. Wide-range, fast and robust estimation of power system frequency. Electric Power Systems Research, 65(2):109–117, May 2003.
[17] M. K. Ghartemani, H. Karimi, and A. R. Bakhshai. A filtering technique for three-phase power system. In Proc. of the Instru. and Meas. Tech. Conf., Ottawa, Canada, May 2005.
[18] M. K. Ghartemani, H. Karimi, and M. R. Iravani. A magnitude/phase-locked loop system based on estimation of frequency and in-phase/quadrature-phase amplitudes. IEEE Trans. Industrial Electronics, 51(2):511–517, Apr. 2004.
[19] M. M. Giray and M. S. Sachdev. Off-nominal frequency measurements in electric power system. IEEE Trans. Power Delivery, 4(3):1573–1578, July 1989.
[20] A. A. Girgis and R. G. Brown. Application of kalman filtering in computer relaying. IEEE Trans. Power Apparatus and Systems, 100(7):3387–3395, July 1981.
[21] A. A. Girgis and T. L. D. Hwang. Optimal estimation of voltage phasor and frequency deviation using linear and non-linear kalman filtering: Theory and limitations. IEEE Trans. Power Apparatus and Systems, 103(10):2943–2949, Oct. 1984.
[22] A. A. Girgis and W. L. Peterson.
Adaptive estimation of power system frequency deviation and its rate of change for calculating sudden power system overloads. IEEE Trans. Power Delivery, 5(2):585–594, April.
[23] D. Hart, D. Novosel, Y. Hu, B. Smith, and M. Egolf. A new frequency tracking and phasor estimation algorithm for generator protection. IEEE Trans. Power Delivery, 12(3):1064–1073, July 1997.
[24] S. Haykin. Adaptive Filter Theory. Prentice Hall, New Jersey, 2001. 4th Edition.
[25] I. Kamwa and R. Grondin. Fast adaptive schemes for tracking voltage phasor and local frequency in power transmission and distribution systems. IEEE Trans. Power Delivery, 7(2):789–795, Apr. 1992.
[26] I. Kamwa, R. Grondin, and D. Mcnabb. On-line tracking of changing harmonics in stressed power systems: Applications to Hydro-Québec network. IEEE Trans. Power Delivery, 11(4):2020–2027, Oct. 1996.
[27] H. Karimi, M. K. Ghartemani, and M. R. Iravani. Estimation of frequency and its rate of change for applications in power systems. IEEE Trans. Power Delivery, 19(2):472–480, Apr. 2004.
[28] B. Kasztenny and E. Rosolowski. Two new measuring algorithms for generator and transformer relaying. IEEE Trans. Power Delivery, 13(4):1053–1059, Oct. 1998.
[29] M. Kezunovic, P. Spasojevic, and B. Perunicic. New digital signal processing algorithms for frequency deviation measurement. IEEE Trans. Power Delivery, 7(3):1563–1573, July 1992.
[30] P. Kostyla, T. Lobos, and Z. Waclawek. Neural networks for real-time estimation of signal parameters. In Proc. of IEEE Intl. Symp. on Industrial Electronics, volume 1, pages 380–385, Warsaw, Poland, June 1996.
[31] W. T. Kuang and A. S. Morris. Using short-time fourier transform and wavelet packet filter banks for improved frequency measurement in a doppler robot tracking system. IEEE Trans. Instrumentation and Measurement, 51(3):440–444, June 2002.
[32] L. L. Lai, C. T. Tse, W. L. Chan, and A. T. P. So. Real-time frequency and harmonic evaluation using artificial neural networks. IEEE Trans.
Power Delivery, 14(1):52–59, Jan. 1999.
[33] J. H. Lee and M. J. Devaney. Accurate measurement of line frequency in the presence of noise using time domain data. IEEE Trans. Power Delivery, 9(3):1368–1374, July 1994.
[34] H. C. Lin. Fast tracking of time-varying power system frequency and harmonics using iterative-loop approaching algorithm. IEEE Trans. Industrial Electronics, 54(2):974–983, Apr. 2007.
[35] T. Lin, M. Tsuji, and M. Yamada. Wavelet approach to power quality monitoring. In Proc. of the 27th IEEE Annual Conf., volume 1, pages 670–675, Denver, USA, Nov. 2001.
[36] T. Lin, M. Tsuji, and M. Yamada. A wavelet approach to real time estimation of power system frequency. In Proc. of the 40th SICE Annual Conf., pages 58–65, Nagoya, Japan, July 2001.
[37] Y. Liu. A US-wide power systems frequency monitoring network. In Proc. of the 2006 Power Systems Conf. and Expo., pages 159–166, Atlanta, USA, Oct. 2006.
[38] T. Lobos and J. Rezmer. Real-time determination of power system frequency. IEEE Trans. Instrumentation and Measurement, 46(4):877–881, Aug. 1997.
[39] P. J. Moore, J. H. Allmeling, and A. T. Johns. Frequency relaying based on instantaneous frequency measurement. IEEE Trans. Power Delivery, 11(4):1737–1742, Oct. 1996.
[40] P. J. Moore, R. D. Carranza, and A. T. Johns. A new numeric technique for high speed evaluation of power system frequency. IEE Proc. Generation, Transmission and Distribution, 141(5):529–536, Sep. 1994.
[41] GE Multilin. L90 line differential relay (instruction manual). GE Publication GEK-113210, www.geindustrial.com/multilin/manuals.
[42] S. R. Naidu and F. F. Costa. A novel technique for estimating harmonic and inter-harmonic frequencies in power system signals. In Proc. of the 2005 European Conf. on Circuit Theory and Design, volume 3, pages 461–464, Denver, USA, Aug. 2005.
[43] W. C. New. Load shedding, load restoration, and generator protection using solid-state and electromechanical underfrequency relays.
GET-6449, www.geindustrial.com/multilin.
[44] C. T. Nguyen and K. Srinivasan. A new technique for rapid tracking of frequency deviation based on level crossing. IEEE Trans. Power Apparatus and Systems, 103(8):2230–2236, Aug. 1984.
[45] S. Osowski. Neural network for estimation of harmonic components in a power system. IEE Proc. Generation, Transmission and Distribution, 139:129–135, Mar. 1992.
[46] M. A. Pai. Power System Stability Analysis by the Direct Method of Lyapunov. North Holland Publishing Company, 1981.
[47] A. G. Phadke, J. S. Thorp, and M. G. Adamiak. A new measurement technique for tracking voltage phasors, local system frequency, and rate of change of frequency. IEEE Trans. Power Apparatus and Systems, 102(5):1025–1038, May 1983.
[48] A. K. Pradhan, A. Routray, and A. Basak. Power system frequency estimation using least mean square technique. IEEE Trans. Power Delivery, 20:1812–1816, July 2005.
[49] M. S. Sachdev and M. M. Giray. A least error squares technique for determining power system frequency. IEEE Trans. Power Apparatus and Systems, 104(2):435–444, Feb. 1985.
[50] M. S. Sachdev and M. Nagpal. A recursive least error squares algorithm for power system relaying and measurement applications. IEEE Trans. Power Delivery, 6(3):1008–1015, July 1991.
[51] Z. Salcic, Z. Li, U. D. Annakaga, and N. Pahalawatha. A comparison of frequency measurement methods for underfrequency load shedding. Electric Power Systems Research, 45(8):209–219, June 1998.
[52] Z. Salcic and R. Mikhael. A new method for instantaneous power system frequency measurement using reference points detection. Electric Power Systems Research, 55(2):97–102, Aug. 2000.
[53] T. K. Sengupta, A. Das, and A. K. Mukhopadhya. Development of on-line real-time PC based frequency meter and frequency relay for monitoring, control and protection of HV/EHV power system. In Proc. of the 7th Intl. Power Engineering Conf., Denver, USA, Nov. 2005.
[54] T. Sezi.
A new method for measuring power system frequency. In Proc. of IEEE Conf. on Transmission and Distribution, volume 1, pages 400–405, New Orleans, USA, April 1999.
[55] T. S. Sidhu. Accurate measurement of power system frequency using a digital signal processing technique. IEEE Trans. Instrumentation and Measurement, 48(1):75–81, Feb. 1999.
[56] S. W. Smith. The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Pub., 1997.
[57] S. A. Soliman, R. A. Alammari, and M. E. El-Hawary. On the application of αβ-transformation for power systems frequency relaying. In Proc. of 2001 Large Engineering Systems Conf., volume 1, Halifax, Canada, July 2001.
[58] S. A. Soliman, R. A. Alammari, M. E. El-Hawary, and M. A. Mostafa. Frequency and harmonics evaluation in power networks using fuzzy regression technique. In Proc. of 2001 IEEE Porto Power Tech. Conf., volume 1, Porto, Portugal, Sept. 2001.
[59] J. Szafran and W. Rebizant. Power system frequency estimation. IEE Proc. Generation, Transmission and Distribution, 145(5):578–582, 1998.
[60] V. Terzija, M. Djurić, and B. Kovač. Voltage phasor and local system frequency estimation using newton type algorithm. IEEE Trans. Power Delivery, 9(3):1368–1374, July 1994.
[61] V. Terzija, M. Djurić, and B. Kovač. A new self-tuning algorithm for the frequency estimation of distorted signals. IEEE Trans. Power Delivery, 10(4):1779–1785, Oct. 1995.
[62] V. V. Terzija. Improved recursive newton-type algorithm for frequency and spectra estimation in power systems. IEEE Trans. Instrumentation and Measurement, 52(5):1654–1659, Oct. 2003.
[63] S. S. Tsai, L. Zhang, A. G. Phadke, Y. Liu, N. R. Ingram, S. C. Bell, I. S. Grant, D. T. Bradshaw, D. Lubkeman, and L. Tang. Study of global frequency dynamic behavior of large power systems. In Proc. of 2004 IEEE PES Power Systems Conf. and Expo., volume 1, pages 328–335, Virginia Tech, VA, USA, Oct. 2004.
[64] N. C. F. Tse.
Practical application of wavelet to power quality analysis. In Proc. of the 2006 IEEE Power Engineering Society General Meeting, Hong Kong, China, June 2006.
[65] M. Wang and Y. Sun. A practical, precise method for frequency tracking and phasor estimation. IEEE Trans. Power Delivery, 19(4):1547–1552, Oct. 2004.
[66] G. Welch and G. Bishop. An introduction to kalman filter. In Proc. of the SIGGRAPH 2001, Los Angeles, USA, Aug. 2001.
[67] J. Wu, J. Long, and J. Wang. High-accuracy, wide-range frequency estimation methods for power system signals under nonsinusoidal conditions. IEEE Trans. Power Delivery, 20(1):366–374, Jan. 2005.
[68] J. Z. Yang and C. W. Liu. A precise calculation of power system frequency and phasor. IEEE Trans. Power Delivery, 15(2):494–499, April 2000.
[69] J. Z. Yang and C. W. Liu. A precise calculation of power system frequency. IEEE Trans. Power Delivery, 16(3):361–366, July 2001.
[70] M. Yong, H. S. Bin, H. Y. Duo, G. Y. Kai, and W. Yi. Analysis of power-frequency dynamics and designation of under frequency load shedding scheme in large scale multi-machine power systems. In Proc. of IEEE Intl. Conf. on Advance in Power System Control, Operation and Management, pages 871–876, Hong Kong, Nov. 1991.
[71] Z. Zhong, C. Xu, B. J. Billian, L. Zhang, S. J. S. Tsai, R. W. Conners, V. A. Centeno, A. G. Phadke, and Y. Liu. Power system frequency monitoring network (FNET) implementation. IEEE Trans. Power Systems, 20(4):1914–1921, Nov. 2005.
Biomedical Imaging Group: Local Normalization

Local normalization using smoothing operators

The local normalization of f(x,y) is computed as

g(x,y) = (f(x,y) − m[f](x,y)) / s[f](x,y),

where:

• f(x,y) is the original image;
• m[f](x,y) is an estimate of the local mean of f(x,y);
• s[f](x,y) is an estimate of the local standard deviation;
• g(x,y) is the output image.

The local mean and the local standard deviation are estimated through spatial smoothing. The parameters of the algorithm are the sizes of the smoothing windows, s1 and s2, which control the estimation of the local mean and the local variance, respectively.
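A minimal sketch of the scheme above, assuming simple box-filter smoothing stands in for the applet's smoothing operators (the window sizes and the stabilizing epsilon are illustrative choices, not the applet's defaults):

```python
import numpy as np
from scipy.ndimage import uniform_filter  # box-filter smoothing

def local_normalize(f, size_mean=15, size_var=15, eps=1e-8):
    """Local normalization g = (f - local mean) / local std.

    The local mean is a smoothed copy of the image; the local variance
    is the smoothed squared deviation from that mean.
    """
    f = f.astype(float)
    m = uniform_filter(f, size=size_mean)            # local mean estimate m[f]
    v = uniform_filter((f - m) ** 2, size=size_var)  # local variance estimate
    return (f - m) / np.sqrt(v + eps)                # s[f] = sqrt(variance)
```

Larger windows estimate slower-varying illumination; the epsilon merely avoids division by zero in flat regions.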
What are the whole number factors of 27 28 54 56? - Algebra Tutorials!

Related topics: graphing calculator limits functions | addition and subtraction of algebraic terms | simplify surds applet | free fraction worksheets elementary | how do you factor trinomials applets | linear function and equation+powerpoint | mathecians | equation printable worksheets | algebra math problems help solve

Author: Diliem
Posted: Friday 29th of Dec 15:38
Hello all, I have a very important test coming up in math soon and I would greatly appreciate it if any of you can help me solve some problems in "what are the whole number factors of 27 28 54 56?". I am reasonably good in math otherwise, but problems in difference of squares baffle me and I am at a loss. It would be great if you could let me know of a reasonably priced math help tool that I can use.

Author: nxu
Posted: Saturday 30th of Dec 17:50
Have you checked out Algebrator? This is a great help tool and I have used it several times to help me with my "what are the whole number factors of 27 28 54 56?" problems. It is really very easy - you just need to enter the problem and it will give you a complete solution that can help with your assignment. Try it out and see if it is useful.

Author: nedslictis
Posted: Sunday 31st of Dec 15:09
Algebrator is a very easy tool. I have been using it for a long time now.

Author: yehusbiliivel
Posted: Sunday 31st of Dec 19:17
Can anyone please give me the URL to this program? I am on the verge of breaking down. I like math a lot and don't want to give it up just because of one course.

Author: Jrobhic
Posted: Tuesday 02nd of Jan 12:03
Here it is: https://rational-equations.com/multiply-dividing-exponents-square-roots-and-solving-equations.html. Just a few clicks and math won't be a problem at all. All the best and enjoy the software!
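For reference, the factors the thread title asks about can also be listed directly with a few lines of Python (a quick illustrative snippet, not part of the original thread):

```python
def whole_number_factors(n):
    """Return all positive whole-number factors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

for n in (27, 28, 54, 56):
    print(n, whole_number_factors(n))
# 27 [1, 3, 9, 27]
# 28 [1, 2, 4, 7, 14, 28]
# 54 [1, 2, 3, 6, 9, 18, 27, 54]
# 56 [1, 2, 4, 7, 8, 14, 28, 56]
```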
{"url":"https://rational-equations.com/in-rational-equations/multiplying-fractions/what-are-the-whole-number.html","timestamp":"2024-11-06T04:13:29Z","content_type":"text/html","content_length":"98244","record_id":"<urn:uuid:4e10f6f9-d963-45c0-9097-c21f92fa60ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00052.warc.gz"}
An integrated Shannon Entropy and reference ideal method for the selection of enhanced oil recovery pilot areas based on an unsupervised machine learning algorithm

© S.M. Motahhari et al., published by IFP Energies nouvelles, 2021

1 Introduction

Maximizing the Recovery Factor (RF) and economic profit are the main goals of Enhanced Oil Recovery (EOR) methods. Pilot design plays a vital role in the road map of EOR planning and risk reduction; EOR pilots are therefore conducted to reduce uncertainty in EOR performance [1–4]. Over the last 50 years, a significant number of EOR field pilots have been studied, and the experience gained has been used in the oil and gas industry [5]. The results of implementing EOR pilots can increase understanding of hydrocarbon field behavior and of the EOR scenarios designed for entire hydrocarbon fields.

Pilot area selection is one of the imperative factors in designing an EOR project [6]. The objective of pilot location studies is to determine how to narrow the pilot candidate areas down from the full field extent to a single optimum area of interest. Three methodologies have been presented in the literature for pilot area selection. The first category presents qualitative insights regarding the pilot area, such as the necessity of using predictive reservoir simulation models [4, 7–10]; these studies do not provide any quantitative method to select a pilot area. In the second category, the reservoir area is filtered to select the pilot area according to merit parameters [11–13]; these studies are based on past reservoir performance. In the final category, the reservoir is divided into a number of areas and the pilot area is selected on the basis of a semi-quantitative approach. In a previous study [14], a hydrocarbon field was divided into six-square-kilometer parts, each labeled as a locator. The locators were then classified based on permeability patterns and the number of water-invaded zones.
Permeability patterns were described as the combination of the average permeability of the different layers; the average value was marked per layer as shown in Figure A1 (Appendix A). The locators chosen as pilot areas came from the class containing the maximum number of the dominant pattern. In other studies [15, 16], the field was divided into six areas, as shown in Figure A2 (Appendix A); the size of the areas can be arbitrary. A semi-quantitative approach was used to rank each of the six areas for pilot implementation. A list of selection criteria, including the qualitative ranking (low, medium, and high) of each area, is shown in Figure A3 (Appendix A). Although this approach tries to identify the part with the best properties, the selected part is not necessarily representative of the whole field. Moreover, predefined patterns, such as permeability patterns, were applied to identify the dominant pattern in this category.

In addition to the methodological approaches, criteria based on reservoir geology and operational-economic issues should be considered. According to reservoir geological criteria, the pilot area should behave like the reservoir as a whole, so that it is an appropriate representative and achieves the EOR goals [4, 12, 14]. An area with moderate to good reservoir properties is another selection criterion; for example, a remaining oil saturation in the range of 0.2–0.5 can indicate an appropriate pilot area [11]. In other studies [7, 8], high remaining oil saturation has been used to locate pilot areas. In addition, a lower ratio of vertical to horizontal permeability and higher lateral permeability continuity have also served as bases for pilot area selection [15, 16]. Operational-economic criteria refer to the uncertainty and economic aspects of pilot area selection. An area allowing higher accuracy in the analysis of pilot production performance is suitable from an operational point of view.
Moreover, a lower cost of pilot implementation is desirable. The level of knowledge, the interaction between the pilot and production areas, the distance between a candidate area and the surface facilities, and the number of existing wells applicable for running a pilot are all important operational-economic criteria. For example, a pilot area could be selected based on the quality and quantity of the available petrophysical data [4, 14, 15]. Less interaction between the pilot and production areas leads to a more reliable interpretation of the gathered data [4, 11–13], and the closeness of the pilot to existing facilities has economic benefits [12–14]. Finally, existing wells may be used to reduce pilot implementation costs [13]. The pilot operation helps in gaining relevant operational and technical experience and in establishing the required quality control [17]. It is therefore apparent that a systematic and quantitative approach is necessary to choose the optimum pilot area. Such an approach should combine a better representation of reservoir features with operational-economic criteria.

In this paper, a methodology is presented to select the optimal size and location of the pilot for an EOR project. To do so, a pilot opportunity index is introduced. The proposed index is defined based on the integration of reservoir quality maps (from reservoir simulations), fuzzy clustering, Shannon entropy, the Analytical Hierarchy Process (AHP) and the Reference Ideal Method (RIM) as a multi-criteria decision-making method. Finally, the presented method is applied to a real reservoir case and the optimum pilot area is thereby determined.

2 Methodology

The three key steps for choosing the location and size of a pilot are:

1. Pilot candidate areas specification;
2. Reservoir representativeness quantification and calculating its value for each pilot candidate area;
3.
Pilot opportunity index calculation and selecting the optimum pilot region.

Figure 1 illustrates the workflow of the planned scheme.

2.1 Pilot candidate areas

In the hydrocarbon reservoir model, the field is segmented into pilot candidate regions of equal size. The five-spot pattern is then utilized to locate the wells in each region. Afterwards, a sensitivity analysis is performed on the Net Present Value (NPV) with respect to the distance between the production and injection wells, which reflects the size of the pilot. The NPV is the most commonly used economic objective function and is shown in equation (1):

$$NPV(r, T) = R(t_0) + \int_{t_1}^{T} \left[ P_o \times q_o(t) - C_w^{prod} \times q_w^{prod}(t) - C_w^{inj} \times q_w^{inj}(t) - C_o \times q_o(t) \right] (1+r)^{-t}\, \mathrm{d}t, \qquad (1)$$

where $t_1$ is the start time of production, $T$ is the overall time horizon of the economic evaluation, $P_o$ is the unit price of oil, $q_o$ is the oil production rate, $C_w^{prod}$ is the unit cost of handling the produced water, $q_w^{prod}$ is the water production rate, $C_w^{inj}$ is the unit cost of injecting water, $q_w^{inj}$ is the water injection rate, $C_o$ is the unit operating cost per barrel of produced oil and $r$ is the discount rate. $R(t_0)$ is the initial capital expenditure, as shown in equation (2), where $C_{facility}$ is the cost of facility installation, $n_{well}$ is the total number of injection and production wells and $C_{well}$ is the drilling cost of a well. Consequently, the optimal pilot size is the one with the highest NPV. Finally, the size and number of pilot candidates are determined.

2.2 Impression of reservoir representativeness

Each candidate region exhibits a reservoir dynamic behavior. Considering all areas, it is possible to classify these behaviors; the class containing the highest number of areas is the representative of the reservoir behavior. The behavior of each area is characterized by two arrays: the cell RF values and the covariance matrix of the residual oil volume of the cells positioned in each candidate region.
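As a concrete illustration of the NPV objective in equation (1), the integral can be approximated by a sum over annual cash flows. The sketch below uses hypothetical prices, rates and costs (none are taken from the paper's Table B1), and assumes the capital expenditure $R(t_0)$ enters with a negative sign:

```python
# Discrete-time sketch of equation (1): annual cash flows discounted at rate r,
# minus the initial capital expenditure. All inputs are illustrative.
def npv(r, years, q_oil, q_w_prod, q_w_inj,
        p_oil, c_w_prod, c_w_inj, c_oil, capex):
    """q_* are lists of annual volumes (bbl/year); capex plays the role of R(t0)."""
    total = -capex  # initial capital expenditure at t0 (sign convention assumed)
    for t in range(1, years + 1):
        cash = (p_oil * q_oil[t - 1]
                - c_w_prod * q_w_prod[t - 1]
                - c_w_inj * q_w_inj[t - 1]
                - c_oil * q_oil[t - 1])
        total += cash * (1 + r) ** (-t)  # discount factor (1+r)^(-t)
    return total
```

The optimal pilot size would then be found by evaluating this function over a range of well spacings and picking the spacing with the highest NPV, mirroring the sensitivity analysis described above.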
2.2.1 Quality maps

Various static and dynamic parameters can be combined to form quality maps. Because of dynamic parameters such as oil saturation, the quality maps change during the production period [18, 19]. The quality map is applied to solve decision-making problems as an auxiliary analysis, with parameters such as cumulative oil production [20]. The 3-D time-dependent quality map of residual oil volume is computed by aggregating various parameters using equation (3):

$$V_{o_{i,j,k,t}} = A_{i,j} \times h_k \times \phi_{i,j,k} \times So_{i,j,k,t}, \qquad (3)$$

where $A_{i,j}$, $h_k$, $\phi_{i,j,k}$ and $So_{i,j,k,t}$ are the surface area, the height, the porosity and the simulated oil saturation of cell $(i, j, k)$ in the given time period, respectively. Equation (4) is applied to calculate the RF in each cell based on these quality maps:

$$RF_{i,j,k} = \frac{V_{o_{i,j,k,t_s}} - V_{o_{i,j,k,t_e}}}{V_{o_{i,j,k,t_s}}}, \qquad (4)$$

where $t_s$ and $t_e$ are the beginning and termination times of the simulation, respectively. Equation (5) defines the covariance matrix of the residual oil volume for a typical candidate region; for instance, $V_o^{cell\#1}$ is the sequence of annual residual oil volumes from $t_s$ to $t_e$ in the first cell of the area:

$$COV_{p,q} = \mathrm{cov}\left(V_o^{cell\#p},\, V_o^{cell\#q}\right). \qquad (5)$$

2.2.2 Clustering

The number of clusters of reservoir behaviors can be specified by fuzzy clustering. Clustering is a subset of unsupervised machine learning [21] and data mining [22] methods, and has been used to assess the flow field in hydrocarbon reservoirs [23–25]. Clustering analysis places similar objects in the same group. Equation (6) [26] defines the objective of fuzzy clustering:

$$J = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{v}\, d_{ij}^{2}, \qquad (6)$$

where the number of clusters and the cluster centers are denoted by $c$ and $c_i$, and $X$ is a multi-dimensional space that includes the $n$ candidate areas. For the $j$th candidate region in cluster $i$, the membership degree is denoted by $u_{ij}$. Furthermore, $v$ and $d_{ij}$ are the fuzzifier and the Euclidean distance of the $j$th candidate region from $c_i$.
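The per-cell computation behind equations (3) and (4) can be sketched with NumPy. The grid shapes and property values below are toy numbers, not the paper's reservoir model:

```python
import numpy as np

# Sketch of equations (3)-(4): residual oil volume per cell, then the recovery
# factor between the start (ts) and end (te) of the simulation.
def residual_oil_volume(area, height, porosity, so):
    """area: (nx, ny), height: (nz,), porosity/so: (nx, ny, nz)."""
    # Vo(i,j,k) = A(i,j) * h(k) * phi(i,j,k) * So(i,j,k)
    return area[:, :, None] * height[None, None, :] * porosity * so

def recovery_factor(vo_ts, vo_te):
    """RF per cell = (Vo(ts) - Vo(te)) / Vo(ts)."""
    return (vo_ts - vo_te) / vo_ts
```

Applied to a candidate region of 9 × 9 × 22 cells (as in the case study below), `recovery_factor` would yield the 1782-member RF array assigned to that region.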
$$\left(\forall i;\ 1 \le i \le c\right) \ \text{and} \ \left(0 \le u_{ij} \le 1\right) \ \text{and} \ \left(\forall j;\ 1 \le j \le n: \sum_{i=1}^{c} u_{ij} = 1\right),$$

$$u_{ij} = \frac{d_{ij}^{2/(1-v)}}{\sum_{k=1}^{c} d_{kj}^{2/(1-v)}}, \qquad (7)$$

$$c_i = \frac{\sum_{j=1}^{n} u_{ij}^{v}\, x_j}{\sum_{j=1}^{n} u_{ij}^{v}}. \qquad (8)$$

The optimal number of clusters is determined using two cluster validity indices: (1) XB (Xie and Beni) [27] and (2) the fuzzy silhouette [28], as illustrated in equations (9) and (10)–(14). A low mean distance between a candidate area and the members of its own cluster ($a_j$) is favorable, as is a high mean distance to the members of the other clusters ($b_j$). Therefore, lower values of XB and higher values of the fuzzy silhouette are the result of high-quality clustering:

$$XB = \frac{\sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{2}\, d_{ij}^{2}}{n \cdot \min_{i \ne k} \lVert c_i - c_k \rVert^{2}}, \qquad (9)$$

$$a_j = \min\left\{ \frac{\sum_{k=1}^{n} \mathrm{intra}_i(j,k)\, d_{jk}}{\sum_{k=1}^{n} \mathrm{intra}_i(j,k)};\ \sum_{k=1}^{n} \mathrm{intra}_i(j,k) > 0,\ 1 \le i \le c \right\}, \qquad (10)$$

$$b_j = \min\left\{ \frac{\sum_{k=1}^{n} \mathrm{inter}_{rs}(j,k)\, d_{jk}}{\sum_{k=1}^{n} \mathrm{inter}_{rs}(j,k)};\ \sum_{k=1}^{n} \mathrm{inter}_{rs}(j,k) > 0,\ 1 \le r < s \le c \right\}, \qquad (11)$$

$$\mathrm{intra}_i(j,k) = u_{ij} \wedge u_{ik}, \qquad (12)$$

$$\mathrm{inter}_{rs}(j,k) = (u_{rj} \wedge u_{sk}) \vee (u_{sj} \wedge u_{rk}), \qquad (13)$$

$$s_j = \frac{b_j - a_j}{\max(a_j, b_j)}, \qquad (14)$$

where the fuzzy silhouette index is the average of $s_j$ over all candidate areas.

2.2.3 Reservoir Similarity Index

For a candidate area, the intensity of the reservoir representativeness is indicated by the Reservoir Similarity Index (RSI). The RSI is composed of two normalized distances: (1) d(RF array, RF-center) and (2) d(COV array, COV-center). The RF-center and COV-center are the centers of the largest clusters obtained by clustering the RF arrays and the covariance matrices of the residual oil volume, respectively. The RF array and COV array are the recovery factor and residual-oil-volume covariance arrays of the candidate area. The weights of the two normalized distances are $w_1$ and $w_2$. The RSI is calculated as the sum of the products of the weights and the normalized values of the multiplicative inverses of the two distances, using equation (15):

$$RSI = w_1 \times \frac{1}{d(\text{RF array},\ \text{RF-center})} + w_2 \times \frac{1}{d(\text{COV array},\ \text{COV-center})}. \qquad (15)$$

The Shannon entropy technique is utilized to determine $w_1$ and $w_2$.

2.2.4 Shannon entropy

The concept of entropy as the information content of an event was introduced by Shannon in information theory [29]. Shannon entropy has been used to optimize data acquisition in hydrocarbon fields [30].
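A minimal fuzzy c-means loop implementing the membership and center updates of equations (7) and (8) can look as follows. The paper clusters high-dimensional RF and covariance arrays; the 1-D toy points and initial centers here are illustrative only:

```python
import numpy as np

# Fuzzy c-means sketch for 1-D data: alternate the membership update (eq. 7)
# and the center update (eq. 8) until the loop budget is exhausted.
def fcm(X, centers, v=1.5, iters=100):
    """X: (n,) data points; centers: (c,) initial cluster centers."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12  # d_ij, shape (c, n)
        u = d ** (2 / (1 - v))          # eq (7), unnormalized memberships
        u /= u.sum(axis=0, keepdims=True)
        centers = (u ** v) @ X / (u ** v).sum(axis=1)      # eq (8)
    return centers, u
```

With the fuzzifier v = 1.5 used in the case study, the membership exponent 2/(1 − v) = −4 makes memberships fall off sharply with distance, so well-separated groups converge quickly.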
Furthermore, Shannon entropy can quantify the relative importance of criteria in a multi-criteria problem [31]. The more important criteria have greater dispersion and a higher effect on the RSI value. Considering m alternatives and n criteria, the decision matrix S of m × n elements is shown in equation (16), where $s_{ij}$ is the jth criterion value for the ith alternative (candidate area):

$$S = \begin{bmatrix} s_{11} & s_{12} & \cdots & s_{1n} \\ s_{21} & s_{22} & \cdots & s_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ s_{m1} & s_{m2} & \cdots & s_{mn} \end{bmatrix}. \qquad (16)$$

Equation (17) is applied to normalize the decision matrix S, equation (18) is utilized to compute the Shannon entropy, and equation (19) is employed to calculate the weight of each criterion:

$$\bar{s}_{ij} = s_{ij} \Big/ \sum_{i=1}^{m} s_{ij} \quad (i = 1, \ldots, m;\ j = 1, \ldots, n), \qquad (17)$$

$$E_j = -\frac{1}{\ln m} \sum_{i=1}^{m} \bar{s}_{ij} \ln \bar{s}_{ij}, \quad j = 1, \ldots, n, \qquad (18)$$

$$w_j = \frac{1 - E_j}{\sum_{j=1}^{n} (1 - E_j)}. \qquad (19)$$

2.3 Pilot opportunity index

The attractiveness of pilot implementation in a candidate area is indicated by the Pilot Opportunity Index (POI). Two groups of features enter the POI computation: reservoir-geological and operational-economic. The reservoir-geological factors correspond to the intensity of reservoir representativeness. The operational-economic factors refer to the reliability of the pilot performance results and the implementation cost of the pilot. Therefore, the POI is based on the RSI, the level of knowledge, the interference between internal and external production of a candidate area, the distance from the surface facilities to the candidate area, and the number of existing wells applicable for running a pilot in the area. The RIM, a multi-criteria decision-making method, is utilized to quantify the POI, and the criteria weights are calculated with the Analytical Hierarchy Process (AHP). The higher the POI, the more desirable the pilot implementation in an area; the pilot area is the area with the highest POI value.

2.3.1 Decision criteria

The criteria to evaluate the POI for each candidate area include:

• RSI: In a candidate area, this criterion shows the value of the reservoir representativeness.
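The entropy weighting of equations (17)–(19) can be sketched compactly: normalize each criterion column, compute its entropy, and weight criteria by their degree of diversification (1 − E_j). The decision matrix used in the test is illustrative:

```python
import numpy as np

# Shannon-entropy criterion weighting, equations (17)-(19).
def entropy_weights(S):
    """S: (m, n) decision matrix with nonnegative entries; returns (n,) weights."""
    m, n = S.shape
    p = S / S.sum(axis=0)                          # eq (17): column-normalize
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)  # 0*log(0) treated as 0
    E = -plogp.sum(axis=0) / np.log(m)             # eq (18)
    d = 1 - E                                       # degree of diversification
    return d / d.sum()                              # eq (19)
```

A criterion whose values are identical across alternatives has entropy 1 and weight 0; a criterion with highly dispersed values receives most of the weight, which is exactly the behavior described above.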
Higher values of this criterion are preferable.

• Interference between internal and external production of a candidate area: Pilot production performance can be affected considerably by nearby existing production wells, and minimizing this interference leads to a more accurate analysis of the pilot production performance. The drainage radius, i.e. the distance from a well at which its normal pressure gradient approaches zero (zero fluid flux), is applied to quantify this issue [32]. In other words, there is pressure interference when the distance between two wells is less than two times the drainage radius. Therefore, to determine the interaction between the candidate area and the existing wells, two criteria are established: the number of existing wells around a candidate area at a distance of less than two times the drainage radius, known as "interfered wells", and the average distance between these wells and the candidate area. A lower number of interfered wells and higher values of the average distance are more favorable.

• Level of knowledge: A higher quality and quantity of available data in a candidate area lead to lower uncertainty in pilot design and evaluation. The variogram range, the distance limit beyond which the data are no longer spatially correlated, is applied to quantify this criterion [33]. The properties of cells that contain an existing well are known, while the properties of other cells can be estimated from the known cell values. The estimation is possible when the spatial distance between the known and unknown cells is smaller than the variogram range; furthermore, the estimation error decreases as this distance decreases.
Therefore, to determine the level of knowledge in a candidate area, two criteria are established: the total number of existing wells, inside and outside an area, at a distance of less than the variogram range from the area, known as "adjacent wells", and the average distance between these wells and the candidate area. A higher number of such wells and lower values of the average distance are more favorable.

• Distance between a candidate area and the surface facilities: the lower the distance, the lower the cost. The closest route from an area to the surface facilities is the most desirable path.

• Number of existing wells applicable for running a pilot in an area: this is the number of existing wells in a candidate area that match the five-spot pattern in terms of location. Higher values of this criterion are preferable, since fewer new wells must be drilled to fit the pattern in the area.

2.3.2 AHP

The Analytical Hierarchy Process (AHP) combines mathematics and intuition to express the comparative importance of each criterion in a rational and consistent approach [34, 35]. Table 1 is applied to build the pairwise comparison matrix among the criteria. In the pairwise comparison matrix, row i is compared with column j for the n criteria, such that the main diagonal is equal to one. The results of the pairwise comparison of the criteria are summarized in an (n × n) matrix whose elements $w_{ij}$ (i, j = 1, 2, …, n) are shown in equation (20):

$$w = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1n} \\ w_{21} & w_{22} & \cdots & w_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ w_{n1} & w_{n2} & \cdots & w_{nn} \end{bmatrix}. \qquad (20)$$

The pairwise comparison matrix is normalized by dividing the value of each element by the sum of the corresponding column. After normalization, the relative weight of each criterion is calculated by computing the average of each row. The accuracy of the AHP output is related to the consistency of the pairwise comparison judgments.
A Consistency Ratio (CR) is calculated by dividing the Consistency Index (CI) of the judgments by the Random Consistency Index (RI) of the corresponding random matrix, using equations (21) and (22). The RI depends on the number of criteria, as shown in Table 2. The CI is based on the largest eigenvalue ($\lambda_{max}$) and the number of criteria (n), as shown in equation (21):

$$CI = \frac{\lambda_{max} - n}{n - 1}, \qquad (21)$$

$$CR = \frac{CI}{RI}. \qquad (22)$$

The weighted sum vector is calculated by multiplying the pairwise comparison matrix of the criteria by the vector of relative weights. The consistency vector is then computed by dividing the elements of the weighted sum vector by the corresponding relative weights, and the largest eigenvalue ($\lambda_{max}$) is estimated by averaging the elements of this vector. A CR above 0.1 (10%) indicates that the pairwise comparisons should be revised, while smaller values mean the comparisons are more consistent.

2.3.3 RIM

Multi-Criteria Decision Making (MCDM) is applied to solve selection issues involving multiple criteria and alternatives in petroleum field development problems [36]. The RIM [37] multi-criteria decision-making technique is applied to prioritize the alternatives (candidate areas) with respect to the criteria outlined in Section 2.3.1. Equation (24) is applied to normalize the decision matrix S, as illustrated in equation (23). The range of values and the ideal values for each criterion are denoted by [A, B] and [C, D], respectively. Equation (25) is utilized to compute the weighted normalized matrix, and equation (26) is applied to calculate the distances between the ideal alternative and a candidate area.
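The AHP weighting and consistency check just described can be sketched as follows. The RI values are the commonly cited Saaty values for up to 7 criteria and may differ slightly from the paper's Table 2; the 3 × 3 test matrix is illustrative:

```python
import numpy as np

# Commonly cited random consistency index values for n = 1..7 (assumed; the
# paper's Table 2 may use slightly different numbers).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(W):
    """W: (n, n) pairwise comparison matrix; returns (weights, CR)."""
    n = W.shape[0]
    w = (W / W.sum(axis=0)).mean(axis=1)     # column-normalize, average rows
    lam_max = float(np.mean((W @ w) / w))    # average of the consistency vector
    CI = (lam_max - n) / (n - 1)             # eq (21)
    CR = CI / RI[n] if RI[n] > 0 else 0.0    # eq (22)
    return w, CR
```

For a perfectly consistent matrix, lam_max equals n, so CI and CR are zero; a CR above 0.1 would signal that the expert judgments should be revised.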
In conclusion, equation (27) is employed to compute the score of each area:

$$Y = \begin{bmatrix} f(s_{11}, [A,B]_1, [C,D]_1) & \cdots & f(s_{1n}, [A,B]_n, [C,D]_n) \\ f(s_{21}, [A,B]_1, [C,D]_1) & \cdots & f(s_{2n}, [A,B]_n, [C,D]_n) \\ \vdots & \ddots & \vdots \\ f(s_{m1}, [A,B]_1, [C,D]_1) & \cdots & f(s_{mn}, [A,B]_n, [C,D]_n) \end{bmatrix}, \qquad (23)$$

$$f(s, [A,B], [C,D]) = \begin{cases} 1 & \text{if } s \in [C,D], \\[4pt] 1 - \dfrac{d_{min}(s, [C,D])}{|A - C|} & \text{if } s \in [A,C] \wedge A \ne C, \\[4pt] 1 - \dfrac{d_{min}(s, [C,D])}{|D - B|} & \text{if } s \in [D,B] \wedge D \ne B, \end{cases} \qquad (24)$$

$$\acute{Y} = Y \otimes W = \begin{bmatrix} y_{11} w_1 & \cdots & y_{1n} w_n \\ \vdots & \ddots & \vdots \\ y_{m1} w_1 & \cdots & y_{mn} w_n \end{bmatrix}, \qquad (25)$$

$$I_i^{+} = \sqrt{\sum_{j=1}^{n} (\acute{y}_{ij} - w_j)^2}, \quad I_i^{-} = \sqrt{\sum_{j=1}^{n} \acute{y}_{ij}^{\,2}} \quad (i = 1, \ldots, m;\ j = 1, \ldots, n), \qquad (26)$$

$$R_i = \frac{I_i^{-}}{I_i^{+} + I_i^{-}} \quad (0 < R_i < 1;\ i = 1, \ldots, m). \qquad (27)$$

3 Results and discussion on a case study

The studied field has been producing since 2006, and 31 wells have been drilled so far. The field's oil in place is around 4 billion standard barrels. The numbers of grid cells in the three main directions x, y and z are 88 × 275 × 22, respectively. The field has 15 different rock types. The variogram model and its parameters are calculated for the porosity, as shown in Figure 2; the variogram range is 1500 m. Furthermore, the drainage radius is 320 m according to the interpretation of well tests in the field. A history-matched model is applied to predict production scenarios by numerical reservoir simulation; to enhance the history match, the integration of multi-start simulated annealing with a genetic algorithm has been proposed [38]. Waterflooding is planned in the field to increase production from 20,000 to 110,000 barrels per day. Moreover, the maximum allowable bottom-hole pressure in the injection wells and the minimum flowing bottom-hole pressure in the production wells are 5880 and 3290 psia, respectively. A pilot project is required prior to the design and field implementation of EOR; hence, it is necessary to determine the size and location of the pilot area. The method established in this research was utilized to choose the best pilot region for this field. Initially, the reservoir model is subdivided into equal-size pilot candidate areas.
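The RIM normalization and scoring of equations (24)–(27) can be sketched as follows. The ranges, ideal intervals and weights in the test are illustrative, not the paper's criteria:

```python
import numpy as np

# Reference Ideal Method sketch: map each criterion value to [0, 1] relative to
# its ideal interval [C, D] inside the admissible range [A, B] (eq. 24), weight
# the normalized matrix (eq. 25), then score by distance to the ideal (eqs. 26-27).
def rim_norm(s, AB, CD):
    (A, B), (C, D) = AB, CD
    dmin = min(abs(s - C), abs(s - D))   # distance to the ideal interval
    if C <= s <= D:
        return 1.0
    if A <= s < C:                        # assumes A != C here (eq. 24 guard)
        return 1 - dmin / abs(A - C)
    return 1 - dmin / abs(D - B)          # s in (D, B], assumes D != B

def rim_scores(S, ranges, ideals, w):
    Y = np.array([[rim_norm(s, ab, cd)
                   for s, ab, cd in zip(row, ranges, ideals)] for row in S])
    Yw = Y * w                                        # eq (25)
    I_plus = np.sqrt(((Yw - w) ** 2).sum(axis=1))     # eq (26)
    I_minus = np.sqrt((Yw ** 2).sum(axis=1))
    return I_minus / (I_plus + I_minus)               # eq (27)
```

An alternative sitting exactly in every ideal interval scores 1; one at the worst edge of every range scores 0, so ranking the candidate areas by this score reproduces the POI ordering described below.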
In each area, production and injection wells are located according to a five-spot well pattern. A sensitivity analysis on the NPV is utilized to determine the optimum pilot dimensions; the data used to compute the NPV via equations (1) and (2) are given in Table B1 (Appendix B). The effect of varying the distance between the production and injection wells, which reflects the pilot size, on the NPV and cumulative oil production is illustrated in Figure 3. The NPV is highest at a well distance of 900 m. The positions of the injection and production wells, along with the existing wells in the reservoir, are illustrated in Figure 4 for this distance. Moreover, the reservoir segmentation resulted in 72 candidate regions, as illustrated in Figure A4 (Appendix A); the number of grid cells of a candidate area in the directions x, y and z is therefore 9 × 9 × 22, respectively. If an area does not contain all 22 reservoir layers, it is removed; six regions were removed on this basis, leaving 66 valid candidate regions in the case study reservoir. Running the production and injection well pattern in the simulator yields the oil saturation. With equation (3), the product of the pore volume and the oil saturation gives the quality maps of the residual oil volume, as shown in Figure A5 (Appendix A). Using equations (4) and (5), a 1782-member recovery factor array and a 1,588,653-member array of the covariances between cells are allocated to each area. Using equations (6)–(8), the candidate areas are clustered; the values of the hyper-parameter (fuzzifier v) and the convergence criterion are 1.5 and 1e–9 in the fuzzy clustering, respectively. Using equations (9)–(14), the fuzzy clustering of the candidate areas is validated, as shown in Figure 5.
According to Figure 5, the clustering of the covariance arrays (Sil.F_COV and XB_COV) is optimal with two clusters, while that of the recovery factor arrays (Sil.F_RF and XB_RF) is optimal with five clusters. Clustering the candidate regions therefore leads to two clusters for the covariance matrices and five clusters for the recovery factor arrays. Figure 6 shows the cluster memberships based on the recovery factor arrays, and Figure 7 shows the cluster memberships based on the covariance matrices. Each point on these two charts corresponds to a candidate region, and the * symbol marks the cluster centers. Tables B2 and B3 (Appendix B) give the clustering results, such as the cluster sizes. The dominant clusters are clusters 5 and 2 for the recovery factor and the covariance matrices, respectively. Figure A6 (Appendix A) shows the distribution map of the clusters in the field with reference to Figures 6 and 7. Candidate areas in the same cluster are close to each other in terms of location; there is therefore a spatial relationship between the reservoir behavior, characterized by the covariance of the residual oil volume and the RF, and the position in the field. The cluster centers are denoted by the covariance-center and the RF-center and are determined by averaging all regions in the dominant clusters. Afterwards, the distances from these two centers to the candidate areas are computed, as displayed in Table B4 (Appendix B). Prior to computing the RSI value of each region, the weights of the two criteria, (1) the covariance-center distance from an area and (2) the RF-center distance from an area, are determined. Using equations (16)–(19) and the criteria values in Table B4 (Appendix B), the weight of the first criterion is 0.313, while the weight of the second criterion is 0.686.
The sum of the products of the weights and the normalized values of the multiplicative inverses of the two criteria yields the RSI value shown in Figure A7 (Appendix A). For instance, candidate areas numbers 29, 34 and 33 have the highest reservoir representativeness intensity, with RSI values equal to 0.99, 0.92 and 0.82, respectively. To calculate the POI of each candidate area, the criteria of RSI, distance from the surface facilities to the candidate area, number of interfered wells, mean distance from these wells to the candidate area, number of existing wells applicable for running a pilot, number of adjacent wells, and mean distance from these wells to the candidate area were considered with weight factors of 0.33, 0.145, 0.145, 0.055, 0.055, 0.09 and 0.18, respectively, according to the AHP weighting method. The pairwise comparison matrix (Eq. (20)) and the weights of the criteria are shown in Table B5 (Appendix B). In this method, every pair of criteria is compared according to the expert judgments; in each comparison this question must be answered: which criterion is more important, and by how much? Table 1 is applied to express the rating preferences between each pair of criteria. For instance, experience and judgment strongly favor the RSI over the number of adjacent wells, with an importance intensity value equal to 5, as shown in Table B5 (Appendix B). Furthermore, the consistency ratio is computed to verify the reliability of the pairwise judgements using equations (21) and (22). The values of the largest eigenvalue ($\lambda_{max}$), the random consistency index (see Tab. 2), the Consistency Ratio (CR), and the Consistency Index (CI) are 7.06, 1.35, 0.008, and 0.011, respectively. The consistency ratio of 0.008 shows that the pairwise comparisons are consistent.
The POI is computed for each region utilizing the reference ideal method via equations (23)–(27) and the aforementioned criteria values, as shown in Figure A7 (Appendix A). For example, the values of the criteria that contribute to the POI for candidate areas numbers 8, 45, 34 and 29 are shown in Table 3; the calculated POI values in these regions are 0.73, 0.66, 0.64 and 0.59, respectively. Pilot execution in these regions is therefore relatively desirable. In Table 3, the average distance between the interfered wells and a candidate area is indicated by NA when the number of interfered wells is zero. According to Table 3 and Figure A7 (Appendix A), although candidate area number 29 has the highest RSI value and its distance to the surface facilities is relatively low compared with the other candidates (i.e. 8 and 45), candidate area number 8 has the highest POI value and is suggested for running the pilot, because candidate area number 29 does not have any existing wells applicable for the implementation of a pilot. Furthermore, candidate area number 29 has two interfered wells that would considerably reduce the accuracy of the interpretation of the data gathered after running a pilot there. Figure A8 (Appendix A) shows contour maps of the RSI and POI values on top of the reservoir. As shown in the figure, there is a spatial relationship between the covariance matrix of the residual oil volume as well as the recovery factor and the location in the field; consequently, there is a spatial relationship between the RSI and the location in the field, because the RSI is based on these two quantities. Figure A9a (Appendix A) shows the assigned numbers of the candidate regions, and Figure A9b (Appendix A) shows the ranking of each region derived from the RSI; candidate area number 29 has the highest reservoir representativeness intensity value.
In addition, in Figure A9c (Appendix A), each candidate region is ranked according to its POI value; candidate area number 8 has the first priority to be selected as the pilot area. It should be noted that the POI value is the most influential parameter for selecting the best candidate area, because it already covers the RSI and the other criteria.

A sensitivity analysis is performed to evaluate how changes in the weights assigned to the criteria would change the ranking of the pilot candidate areas. Table B6 (Appendix B) shows the sensitivity analysis, in which certain criteria are preferred through 7 scenarios; in Table B6 (Appendix B), A–G are the criteria names shown in Table B5 (Appendix B). S-1 is the base case scenario, in which the weights of the criteria are determined using the AHP method, as shown in Table B5 (Appendix B). In scenarios S-2 to S-8, the weights of criteria A to G are increased by 0.2 compared to the base case scenario, respectively. For instance, in scenario S-2 the weight of the RSI is increased by 0.2 while the ratios of the weights of the other criteria remain the same [39], as shown in Table B6 (Appendix B). Comparing the rankings of the candidate areas (e.g. 8, 45, 26 and 34) in the different scenarios demonstrates that candidate area number 8 is ranked first or second in six out of eight scenarios, as shown in Table 4. Candidate area number 8 could thus be selected as the pilot area based on the rankings obtained using different criteria weights; it is clearly dominant compared to the other candidate areas.

4 Conclusion

Based on this research, six important conclusions have been drawn:

1. An innovative quantitative and systematic approach consisting of reservoir-geology and operational-economic criteria has been presented.

2. Cluster analysis, as an unsupervised machine learning method, can be utilized to quantify reservoir representativeness intensity values.

3.
MCDMs as decision-making methods have been utilized to integrate the factors (i.e., the distance from the surface facilities to a candidate area, the number of interfered wells, the mean distance from these wells to the candidate area, the number of existing applicable wells for running a pilot, the number of adjacent wells, and the mean distance from these wells to the candidate area) with the RSI criterion, and thereby a new index called the POI has been presented. 4. The reservoir model is subdivided into 66 pilot candidate regions in accordance with the optimized pilot size. Then, the corresponding covariance matrices and recovery factor arrays of the areas are optimally clustered into 5 and 2 clusters, respectively, based on simulated annual 3-D reservoir quality maps. A hybrid integration of AHP-Shannon Entropy and RIM resulted in the highest POI value in area number 8. Candidate area number 8 has been selected as the pilot area in this case study since it has a POI value of 0.73. There is no interfering well affecting the reliability of pilot results in the area. Furthermore, it has one existing applicable well for running a pilot, which reduces the cost of pilot execution. 5. Finally, the proposed procedure has the capability to be extended to all enhanced oil recovery processes. Several EOR processes, such as chemical flooding and miscible gas injection, can be modeled to calculate the remaining oil volume covariance matrix and the RF array. Also, other screening criteria could be identified to determine the POI according to the combination of the Shannon Entropy-AHP and MCDM methods. 6. The validity of the results obtained from the presented methodology has been assessed through the following stages: (1) numerical reservoir modeling, (2) fuzzy clustering and (3) criteria weighting. In order to validate the simulated annual remaining oil volume, a history-matched reservoir model is utilized in this study.
In addition, the validity indices of fuzzy-silhouette and Xie-Beni are utilized to validate the best cluster numbers. Furthermore, the criteria weighting results are validated by the Consistency Ratio (CR) in the AHP method. Moreover, the proposed procedure could be developed using other criteria and well patterns.
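The Xie-Beni index mentioned above for validating the number of clusters can be sketched directly from its standard definition (membership-weighted compactness divided by minimum center separation). This is a generic illustration, not the paper's implementation; the data, centers, and memberships below are synthetic, not the reservoir arrays.

```python
import numpy as np

def xie_beni(X, centers, U, m=2.0):
    """Xie-Beni validity index for a fuzzy partition.
    X: (n, d) data; centers: (c, d); U: (c, n) membership matrix.
    Lower values indicate compact, well-separated clusters."""
    n = X.shape[0]
    # compactness: membership-weighted squared distances to the centers
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)  # (c, n)
    compactness = (U ** m * d2).sum()
    # separation: minimum squared distance between distinct centers
    c = centers.shape[0]
    sep = min(((centers[i] - centers[j]) ** 2).sum()
              for i in range(c) for j in range(i + 1, c))
    return compactness / (n * sep)

# Tiny synthetic check: two tight, well-separated 1-D clusters
# should give a very small index value.
X = np.array([[0.0], [0.1], [10.0], [10.1]])
centers = np.array([[0.05], [10.05]])
U = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
print(xie_beni(X, centers, U))  # roughly 2.5e-05
```

Sweeping the number of clusters and picking the value with the smallest index would mirror the cluster-number validation step described in conclusion 6.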
Kiril Shterev | Associate Professor, Mathematical modeling and application of mathematics in mechanics, Institute of Mechanics - BAS, 2010 | Bulgarian Academy of Sciences, Sofia
Asymptotic representations of shifted quantum affine algebras from critical K-theory (Doctoral thesis, 2021)

In this thesis we explore the geometric representation theory of shifted quantum affine algebras 𝒜^𝜇, using the critical K-theory of certain moduli spaces of infinite flags of quiver representations resembling the moduli of quasimaps to Nakajima quiver varieties. These critical K-theories become 𝒜^𝜇-modules via the so-called critical R-matrix 𝑅, which generalizes the geometric R-matrix of Maulik, Okounkov, and Smirnov. In the asymptotic limit corresponding to taking infinite instead of finite flags, singularities appear in 𝑅 and are responsible for the shift in 𝒜^𝜇. The result is a geometric construction of interesting infinite-dimensional modules in the category 𝒪 of 𝒜^𝜇, including e.g. the pre-fundamental modules previously introduced and studied algebraically by Hernandez and Jimbo. Following Nekrasov, we provide a very natural geometric definition of qq-characters for our asymptotic modules compatible with the pre-existing definition of q-characters. When 𝒜^𝜇 is the shifted quantum toroidal gl₁ algebra, we construct asymptotic modules DT_𝜇 and PT_𝜇 whose combinatorics match those of (1-legged) vertices in Donaldson-Thomas and Pandharipande-Thomas theories. Such vertices control enumerative invariants of curves in toric 3-folds, and finding relations between (equivariant, K-theoretic) DT and PT vertices with descendent insertions is a typical example of a wall-crossing problem. We prove a certain duality between our DT_𝜇 and PT_𝜇 modules which, upon taking q-/qq-characters, provides one such wall-crossing relation.

File: Liu_columbia_0054D_16441.pdf (application/pdf, 584 KB). Thesis Advisor: Okounkov, Andrei. Ph.D., Columbia University. Published April 21, 2021.
JEE Main 2024 Overall Exam Analysis: Comparing Difficulty Level Of Session 1, 2 | Education News - Aachibaat

JEE Main examination day and shift: Physics | Chemistry | Mathematics

April 4, Shift 1 – Easy level – Questions asked from kinematics, gravitation, circular motion, heat and thermodynamics, magnetism (2 questions), wave optics, current electricity (2 questions), electrostatics, modern Physics, semiconductors – Both MCQs and numerical-based questions were lengthy but easy – A few fact-based questions from class 12 chapters of NCERT were also asked – Students felt the Physics section was balanced in terms of chapter coverage. – Easy – Organic & Physical Chemistry were given more weightage compared to inorganic Chemistry – Questions asked from electrochemistry, thermodynamics, atomic structure, chemical bonding (2 questions), general organic Chemistry (2 questions), alcohols, ethers and phenols, amines, aryl and alkyl halides mixed concept type questions, coordination compounds, stoichiometry, the periodic table – Some questions were directly asked from the NCERT textbook, which made this section easy. – Moderate level – Questions were asked from all chapters with emphasis on chapters of calculus and algebra – More than one question was asked from vectors, 3D geometry, differential equations and conic sections – In calculus, questions were asked from functions, continuity and differentiability, definite integrals, area, differential equations – In algebra, questions on probability, binomial theorem, complex numbers, permutation and combination, statistics, progressions, matrices and determinants – In coordinate geometry, questions asked from parabola, ellipse and hyperbola with mixed concepts. The numerical section had lengthy calculations. A few questions were reported as lengthy and tricky.
April 4, Shift 2 – The Physics part was of easy to moderate level and questions were mostly straightforward – Questions from modern Physics, work, power and energy, thermodynamics, current electricity and electrostatics were there in the paper – Most of the questions were asked from the 12th class syllabus – Anyone who has gone through the PYQs of previous year papers will certainly have an upper hand in the paper. – The paper was by and large based on NCERT books – There was an almost equal distribution of all three parts of chemistry, i.e. Physical, organic and inorganic – Questions were mostly straightforward – Questions from prominent chapters like equilibrium, bonding, alcohols, phenols and ethers, electrochemistry, chemical kinetics and alkyl and aryl halides were asked – Overall coverage of the chapters was uniform. – The Mathematics paper was moderate to difficult, just like the shift-1 paper – Questions from algebra and calculus were dominant in the paper – A good number of questions were asked from vectors and 3D – Questions from binomial theorem, sequences and series, and statistics were there in the paper – Almost all the topics were covered – In short, it was a balanced paper which can be solved in the given time – Most of the students found Mathematics a bit lengthy – The difficulty-level-wise order according to a large section of students is Mathematics > Chemistry > Physics

April 5, Shift 1 – According to the feedback received from students, the Physics part was easy and straightforward – Questions from thermodynamics, magnetism, current electricity, modern Physics and electrostatics were duly represented in the paper – Most of the questions were asked from the 12th class syllabus – Solving a good number of mock tests would prove to be an effective
strategy – In comparison to the Phase-I Physics papers, the Physics questions are on the same side. – Most of the questions were theoretical in nature, with the maximum number of questions asked from organic and inorganic chemistry – Questions from prominent chapters like hydrocarbons, electrochemistry, equilibrium, alcohols, phenols and ethers, and the p-block were asked – Overall coverage of the chapters was uniform – A majority of the questions are more or less related to or asked from NCERT only. – The Mathematics paper was moderate to difficult based on students' feedback – Questions from calculus were dominant, followed by vectors and 3D, matrices and determinants – Questions were also asked from quadratic equations and the binomial theorem – A majority of the topics were covered. The quality of the questions was good and the paper was a bit lengthy because of the Maths part – Between the 11th and 12th class portions, most of the questions were asked from the 12th syllabus only – Overall, average students found Mathematics difficult, and the overall difficulty level of the paper can be said to be moderate to difficult – The difficulty-level-wise order according to a large section of students is Mathematics > Chemistry > Physics

April 5, Shift 2 – The Physics part was very easy, as mentioned by a good number of students – Current electricity questions were present in good numbers in the paper – Questions from modern Physics, semiconductors, EMI, electrostatics, units and dimensions, mechanics and magnetism were there in the paper – Most of the questions were asked from the 12th class syllabus.
– As compared to Phase-I of JEE Main, here a greater number of questions were asked from Physical chemistry – Many of the questions were from the inorganic Chemistry part – 3-4 questions were asked from the electrochemistry chapter – Questions from prominent chapters like d-block elements, coordination compounds, biomolecules, the p-block and aldehydes, ketones and carboxylic acids were asked – Overall coverage of the chapters was uniform – Between the 11th and 12th class, a greater number of questions are asked from the 12th class syllabus – The paper was by and large based on NCERT books – However, 4-5 tricky questions were there in the paper. – The Mathematics paper was of moderate to difficult level – A good number of questions were asked from vectors, 3D and calculus – Questions from the calculus part were not that tough, while some good questions were there from the Conic sections – Complex numbers and matrices and determinants were there in the paper – A majority of the topics were covered – The main thing is that questions were both lengthy and tricky, which took a lot of time to solve – In short, it was a challenging paper with Mathematics being the toughest of all – Most of the students found Mathematics difficult – The difficulty-level-wise order according to a large section of students is Mathematics > Chemistry > Physics

April 6, Shift 1 – Physics had a total of 30 questions – Section I had 20 multiple choice questions with single correct answers and Section II had 10 numerical-based questions, out of which only 5 had to be attempted – The marking scheme for multiple choice questions was +4 for a correct response, -1 for an incorrect response, 0 if not attempted – The marking scheme for numerical-based questions was +4 for a correct response and 0 in all other cases – Total marks for this section were 100.
– Chemistry had a total of 30 questions – Section I had 20 multiple choice questions with single correct answers and Section II had 10 numerical-based questions, out of which only 5 had to be attempted – The marking scheme for multiple choice questions was +4 for a correct response, -1 for an incorrect response, 0 if not attempted – The marking scheme for numerical-based questions was +4 for a correct response, -1 for an incorrect response and 0 in all other cases – Total marks for this section were 100. – Mathematics had a total of 30 questions – Section I had 20 multiple choice questions with single correct answers and Section II had 10 numerical-based questions, out of which only 5 had to be attempted – The marking scheme for multiple choice questions was +4 for a correct response, -1 for an incorrect response, 0 if not attempted – The marking scheme for numerical-based questions was +4 for a correct response, -1 for an incorrect response and 0 in all other cases – Total marks for this section were 100.

April 8, Shift 1 – Easy questions were asked from kinematics, centre of mass, laws of motion, magnetism, optics, capacitance, electromagnetic induction, AC circuits, modern physics, work, power and energy, heat and thermodynamics, atoms and nuclei – Numerical-based questions were lengthy and easy – A few fact-based questions from class XII chapters of NCERT were also asked. – Easy – Organic Chemistry was given more weightage compared to Physical and Inorganic Chemistry – Questions asked from the periodic table, coordination compounds, p-block elements, chemical bonding, d and f block elements, stoichiometry, chemical kinetics, thermodynamics, general organic Chemistry (4 questions), alcohols, ethers and phenols, biomolecules, amines, carboxylic acids, with mixed concept questions – Some fact-based questions from NCERT were asked. – Easy to moderate.
Questions were asked from all chapters with emphasis on chapters of calculus and coordinate geometry – Questions asked from matrices (2 questions), 3D geometry, vectors (2 questions), permutation and combination, probability (2 questions), complex numbers, statistics, progressions, quadratic equations, functions (3 questions), limits, continuity and differentiability, application of derivatives, definite integrals, differential equations, area, straight lines, circles, and the hyperbola among conic sections, with mixed concept questions – The numerical section had lengthy calculations from chapters of calculus and algebra – A few questions were reported as lengthy and easy.

April 9, Shift 1 – Physics had a total of 30 questions – Section I had 20 multiple choice questions with single correct answers and Section II had 10 numerical-based questions, out of which only 5 had to be attempted – The marking scheme for multiple choice questions was +4 for a correct response, -1 for an incorrect response, 0 if not attempted. The marking scheme for numerical-based questions was +4 for a correct response and 0 in all other cases – Total marks for this section were 100. – Chemistry had a total of 30 questions – Section I had 20 multiple choice questions with single correct answers and Section II had 10 numerical-based questions, out of which only 5 had to be attempted – The marking scheme for multiple choice questions was +4 for a correct response, -1 for an incorrect response, 0 if not attempted – The marking scheme for numerical-based questions was +4 for a correct response, -1 for an incorrect response and 0 in all other cases – Total marks for this section were 100.
– Mathematics had a total of 30 questions – Section I had 20 multiple choice questions with single correct answers and Section II had 10 numerical-based questions, out of which only 5 had to be attempted – The marking scheme for multiple choice questions was +4 for a correct response, -1 for an incorrect response, 0 if not attempted – The marking scheme for numerical-based questions was +4 for a correct response, -1 for an incorrect response and 0 in all other cases – Total marks for this section were 100.

April 9, Shift 2 – Easy questions asked from almost all chapters. – Some good questions from chapters of kinematics, laws of motion, gravitation, heat and thermodynamics, rotational motion, optics, current electricity, magnetism, electromagnetic induction, modern physics, atoms and nuclei – Numerical-based questions were easy – Physics was balanced and easy. – Easy to moderate. Inorganic & organic Chemistry had more weightage as compared to Physical Chemistry – Questions asked from GOC, alcohols, ethers & phenols, amines, aldehydes and ketones, biomolecules, aryl and alkyl halides mixed concept questions; Physical Chemistry had questions from chemical kinetics, electrochemistry and chemical equilibrium – Inorganic Chemistry had questions from coordination compounds, the p-block, d and f-block elements and chemical bonding. Some NCERT fact-based questions were asked, which made it easy for students.
– Questions were asked from all chapters with emphasis on chapters of calculus – Questions from Conic Sections with mixed concept questions in Coordinate Geometry – In Calculus, questions asked from limits, application of derivatives, definite integrals, differential equations, area – In Algebra, questions asked from complex numbers, progressions, permutation and combination, binomial theorem, statistics, matrices and determinants, probability, vectors and 3D Geometry – A few MCQs and numerical-based questions had lengthy calculations and were tricky.
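The marking scheme described for these shifts reduces to simple arithmetic. The sketch below is a hypothetical illustration of a sectional score (the `section_score` helper and the attempt counts are not from the article): 20 MCQs at +4/-1/0, plus 5 attempted numerical questions, where the penalty for an incorrect numerical answer was -1 in some shifts and 0 in others.

```python
# Sectional score under the marking scheme described above.
# numerical_penalty is 0 or 1 depending on the shift's scheme.

def section_score(mcq_correct, mcq_incorrect,
                  num_correct, num_incorrect, numerical_penalty=0):
    mcq = 4 * mcq_correct - 1 * mcq_incorrect        # +4 / -1 / 0
    num = 4 * num_correct - numerical_penalty * num_incorrect
    return mcq + num

# Maximum sectional score: all 20 MCQs and all 5 numericals correct.
print(section_score(20, 0, 5, 0))  # -> 100
```

This confirms the stated sectional maximum of 100 marks (20 × 4 + 5 × 4).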
Neo-Hookean hyperelastic model for nonlinear finite element analysis

In structural finite element analysis, non-metal materials such as rubber and biological tissues are often encountered. Because the mechanical properties of these materials are very different from those of metals (large elastic deformation, incompressibility, viscoelasticity, etc.), researchers and engineers classify them as hyperelastic materials, and group the mechanical models that describe them as hyperelastic models. These hyperelastic materials (models) have the following significant characteristics:

• They can withstand large elastic (recoverable) deformation, sometimes with stretch up to 1000%.
• Hyperelastic materials are almost incompressible. Because the deformation is caused by the straightening of the material's molecular chains, the volume change under applied stress is negligible.
• The stress-strain relationship is highly nonlinear. Normally, the material softens first and then hardens under tension, but hardens sharply when compressed.

To predict and analyze the mechanical properties of these hyperelastic materials, many models have been proposed. Common hyperelastic models are Neo-Hookean, Mooney-Rivlin, Ogden, Arruda-Boyce, Gent, Yeoh, Blatz-Ko, etc. At present, these models are widely used in many fields, including rubber products (such as rubber seals), biological materials (such as muscles), and computer graphics for film. WELSIM also supports these models. In this article, we discuss neo-Hookean in detail.

Neo-Hookean model

The model was proposed by Ronald Rivlin (1915-2005) in 1948; Dr. Rivlin is also well known for the Mooney-Rivlin hyperelastic model. As this suggests, "neo-Hookean" is not a model named after a person. This British-American physicist studied physics and mathematics at St John's College, Cambridge, being awarded a BA in 1937 and a ScD in 1952.
He worked for the General Electric Company, then the UK Ministry of Aircraft Production, then the British Rubber Producers Research Association. He later moved to the United States and taught at Brown University and Lehigh University.

The neo-Hookean elastic strain energy potential is expressed as

W = (μ/2)(Ī₁ − 3) + (1/D₁)(J − 1)²

where μ is the initial shear modulus, D₁ is the material's incompressibility parameter, Ī₁ is the first deviatoric strain invariant, and J is the volume ratio. It can be seen that the model is a strain energy function based on the strain tensor invariant I₁. If the material is assumed to be incompressible, J = 1 and the second term becomes zero.

The neo-Hookean model is derived from classic statistical thermodynamic results. This is similar to the Arruda-Boyce model we discussed in the last article: when the limiting network stretch parameter in the Arruda-Boyce model becomes infinite, it is equivalent to neo-Hookean. The model can also be regarded as a special form of the Polynomial model: for polynomial model parameters N = 1 and C01 = 0, the polynomial model is equivalent to neo-Hookean.

The neo-Hookean model has a constant shear modulus. Generally, it is only suitable for predicting the mechanical behavior of rubber up to uniaxial tension of 30% to 40% and pure shear of 80% to 90%; it is not very accurate for predicting large-strain deformation. Although the model is not as versatile as other models, especially for large strains or tensile conditions, it has the following advantages:

(1) Simple. There are only 2 input parameters. If the material is assumed to be incompressible, then only one parameter is required: the initial shear modulus. Since only one parameter is needed from the test data, the number of required tests is small.

(2) Compatible. The material parameter obtained from one type of deformation stress-strain curve can be used to predict other types of deformation.
This holds especially for small and medium strains. It is worth mentioning that neo-Hookean is applied not only in science and engineering but also in computer graphics for film, because of its simplicity and physically based solutions. As shown in the figure, for a hand in motion, the muscle and skin deformations calculated using the neo-Hookean model appear extremely natural.

Neo-Hookean finite element analysis example

In the following section, we apply the neo-Hookean material in the finite element analysis software WELSIM to simulate the deformation of a soft tube under tension. We constrain one end of the tube, apply a force on the other end, and compute the deformation and stress.

Analysis steps:
(1) Set the unit system to Metric (kg-mm) and create a structural static analysis project.
(2) Set material properties. Create a new material and double-click the material object to enter editing mode. Toggle the neo-Hookean property from the hyperelastic section in the toolbox. In the properties view, assign the values Mu = 1.5 MPa and D1 = 10 MPa^-1. Once the parameters are defined, the stress-strain curve of the corresponding model can be seen in the graph window. Press F2 on the material object and change the object name to neoHookeanMat.
(3) Create the geometry. The tube can be created by a boolean operation between two cylinders, with an inner diameter of 3 mm, an outer diameter of 4.4 mm, and a length of 15 mm.
(4) Generate the mesh. Set the maximum element size to 0.3 mm and use higher-order elements. After meshing, we obtain 28,898 nodes and 14,570 Tet10 elements.
(5) Apply boundary conditions. Impose a fixed constraint on one end of the tube. A horizontal force in the Z direction with a magnitude of 1 N is applied to the other end.
(6) Set up the model, solve, and evaluate results. For better convergence, we set three substeps. Click the solve button, and the solution is calculated quickly.
The deformation in the Z direction and the von Mises stress are shown in the figures below. The maximum von Mises stress occurs at the fixed end of the tube, and its magnitude is 0.63 MPa.

Before the advent of finite element software, the calculation and prediction of material nonlinearities were complicated and time-consuming. Now, with finite element software, the analysis of nonlinear materials has become faster, more accurate and more fun.
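As a quick cross-check of the material response used in this example, the uniaxial Cauchy stress of an incompressible neo-Hookean solid can be evaluated directly. This is the standard closed-form result for the incompressible case, not output from WELSIM; μ = 1.5 MPa matches the example above, and the stretch values are illustrative.

```python
import numpy as np

# Incompressible neo-Hookean uniaxial response: at stretch lam,
# the Cauchy stress is sigma = mu * (lam**2 - 1/lam), a standard
# textbook result. mu matches the WELSIM example above (1.5 MPa).

mu = 1.5  # initial shear modulus, MPa

def cauchy_stress_uniaxial(lam, mu=mu):
    return mu * (lam ** 2 - 1.0 / lam)

for lam in np.linspace(1.0, 1.4, 5):
    print(f"stretch {lam:.2f}: stress {cauchy_stress_uniaxial(lam):7.3f} MPa")
```

The stress vanishes at the undeformed state (λ = 1) and grows monotonically with stretch, consistent with the soft, moderately strained tube in the example.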
Math Colloquia - Large random tilings of a hexagon Large random tilings of a hexagon have the fascinating behavior of separation of phases (frozen and rough; also called solid and liquid) that are separated by a well-defined Arctic curve. In a weighted tiling model with periodically varying weights a third phase (smooth; or gaseous) appears where correlations between tiles decay at an exponential rate. After a general introduction, I will discuss a technique based on matrix valued orthogonal polynomials to analyse a particular case of the three-periodic lozenge tiling model.
Mathiness is next to growthiness (the 4% solution)

4% average growth has not happened since the 1970s.

I stole Sandwichman's excellent title from EconoSpeak. John Cochrane put up a graph today trying to persuade us that Jeb Bush's goal of 4% real GDP growth is possible, contra several other economists. I tried to figure out what he did because it couldn't have been a simple moving average (orange curve). His graph went until 2015, so it would have to be a backward-looking average of some kind. But then both a backward-looking moving average (green dashed) and a backward-looking integral average (blue; they're basically on top of each other) don't quite make it up to 4% since the 1970s.

I did eventually figure it out (the red curve), but it involves the same mistake that I took Scott Sumner to task for a while ago. Cochrane took the change in RGDP from a point t - 10 and divided by RGDP at t - 10. That gives the slope of the secant from the start point to the end point. It may be called the mean value theorem, but that doesn't mean that the secant gives an average value. All it does is say there is at least one point that has that slope -- that 4% growth happened sometime during that 10-year period.

These diagrams might make this clearer. The first one (on the left) shows an RGDP function that gives 4% growth between the endpoints 10 years apart. You can see there is a point just after year 8 that has a tangent with the same slope. In the second graph (on the right) you can see that this function has nearly zero growth across the entire 10-year period. Not many people would consider that averaging 4% RGDP growth (it actually averages around 3.5% growth during that entire period per the averaging methods above). Cochrane's method over-weights the high growth spikes. The method other economists are using to say growth has been less than 4% is the better method.
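A short numerical illustration of why the secant slope overstates average growth (illustrative numbers, not Cochrane's data): ten years of exactly 4% compound growth raise the level by about 48%, so dividing the total change by the starting level and by ten reports nearly 4.8% per year, while the compound (geometric) average recovers 4%.

```python
import numpy as np

# Ten years of exactly 4% compound growth. The secant method
# ((end - start) / start / years) overstates the annual rate
# because it uses the starting level as the base for the whole
# decade's change; the geometric average recovers the true rate.

t = np.arange(11)
rgdp = 100.0 * 1.04 ** t

years = len(rgdp) - 1
secant_avg = (rgdp[-1] - rgdp[0]) / rgdp[0] / years
compound_avg = (rgdp[-1] / rgdp[0]) ** (1.0 / years) - 1.0

print(f"secant 'average' growth: {secant_avg:.3%}")    # ~4.802%
print(f"compound average growth: {compound_avg:.3%}")  # 4.000%
```

The gap only widens over longer windows or faster growth, which is exactly the direction of bias that makes sustained 4% growth look more common than it was.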
Of course, you can choose to use the mean value theorem method if you want to overweight growth spikes to make growth look higher ... and you're into mathiness.

24 comments:

1. Maybe I'm just crazy but I would have thought that most economists would see 4% growth as not feasible for reasons other than the low growth of the 2010's... Maybe they think that participation rates in the labor market will return to normal or something. As I understand it, though, the ITM projects NGDP (and therefore RGDP) to slow down as the economy gets large, but I'm not sure a sudden drop from 4% trend to 2% trend really captures the essence of a sudden increase in the probability that money will be used in a slow growth market. Does the ITM give any reasons for an almost instantaneous halving of RGDP growth rates?

1. Hi John, The graphs at the top are just moving averages -- in the model, RGDP growth slows down smoothly over time. See e.g. here:

2. So, if I'm interpreting correctly, the ITM explains the lull in (moving average) RGDP growth as partially that gradual slow down and partially a random walk.

3. Sort of, but there is more to it. The negative deviations may be a result of non-ideal information transfer as well as random walk. So it is: trend + non-ideal + random component (with unit root) ... however there is no model as yet for the depth of the non-ideal deviations.

2. Sadly you and Cochrane both fall to Krugman's level of earning incompletes for your econometrics homework. I'm sure if you complain to the prof, she'll grade it herself, she is always more lenient than me.... Cochrane for using an averaging method that trolls love to troll and you for arguing why someone else is wrong without explaining for instance why sensitivity to high growth spikes is more important than its sensitivity to deep falls.
Instead you should have written an article supporting the thesis that say an ma is better or at least demonstrated how it is immune to these problems...no points for mathiness all around 1. (Also note I'm using a classic lecture session trap...I really want you to say that sensitivity to high growth peaks is more important because more of them happen...) 2. Hi LAL, The moving averaging methods (integral and simple average) are less sensitive to spikes in either direction -- the spikes have vanishing support in the limit of your measurement delta-t going to zero so they contribute zero to the integral. In the mean value theorem method, every spike contributes (in either direction). Imagine a step function RGDP: it would have zero growth in most time periods but with a spike of almost infinite growth in the middle resulting in an "average" growth that was entirely dependent on the size of one spike. However this is a good audition for you to take over as the grumpy economist when Cochrane gives up blogging :) 3. I don't think I fell in your trap :) 4. I guess in the context of the articles this would have gotten an A, in the class room I would still act dumbfounded as to why throwing away business cycles is important to my students...why shouldn't we weight upswings more than no movement? That's the economic question....and why we are in econometrics and not time series analysis...Also that is the nicest thing anyone has ever said to me :D 3. Actually I'm having trouble matching Cochrane description to yours... 1. I think there were some small differences due to his use of annual data where I used quarterly but it matched up with his original graph. However, it appears as though he's updated his graph to include something more like the average I did. 2. 
His average seems to be (present - past)/past... his formula is the closest I had to matching his picture but... still I'm not matching... I would use log(s_t/s_{t-1})... Also I'm pretty sure he was straight up trolling me UN the comments section 3. apparently the UN has earned enough ire from me to replace the word "in" as default on my phone... 4. I somehow managed to fail to copy and paste... B effort... I am now matching... 5. Hmmm. I would not even start with a graph of growth rates. I would use log(1+r). A decade of 4% yearly increases produces an increase of 48%, not 40%. If you don't use logs, then what you want for a simple moving average is (RGDP(t+5)/RGDP(t-5))^(0.1). That's my mathiness. :) 1. For the exponentially challenged (those who have trouble with tenth roots), here is a tip. It seems that Cochrane used the following formula: mean_growth_rate = ((RGDP(t = 0) - RGDP(t = -10))/RGDP(t = -10))/10 = (RGDP(t = 0) - RGDP(t = -10))/(10*RGDP(t = -10)) One peculiar thing about that equation that stands out is that it gives added weight to RGDP(t = -10) for no apparent reason. If we apply a little common sense we can weight the two RGDP values equally and get this. mean_growth_rate = (RGDP(t = 0) - RGDP(t = -10))/(5*RGDP(t = -10) + 5*RGDP(t = 0)) Let's try that with a 4% annual growth rate. Let RGDP(t = -10) = 1 and RGDP(t = 0) = 1.48. Then we have mean_growth_rate = (1.48 - 1)/(5 + 7.4) = 0.48/12.4 = 0.0387 A slight under-estimate, but good enough for government work. And well within the error of economic calculations. :) Cochrane is aware of using logarithms but pooh-poohs doing so, choosing instead to use a calculation that obviously overestimates the annual growth rate. {sigh} 2. Sorry. I am having fun. :) The harmonic mean is often preferable to the arithmetic mean, and it looks like that would be the case here, as well. So if we divide by the harmonic mean of RGDP(t = 0) and RGDP(t = -10) we get his equation.
mean_growth_rate = (RGDP(t = 0)^2 - RGDP(t = -10)^2)/(20*RGDP(t = -10)*RGDP(t = 0)) With the 4% example above that gives us mean_growth_rate = 1.19/29.6 = 0.0402 Not bad. :) 6. As for Jeb Bush aiming at 4% growth in RGDP, maybe he is taking advice from his father. Something like this: Bush père: Learn from your brother. I always said that this Reaganomics supply side shit was voodoo economics. Like Nixon said, we're all Keynesians now. No disparagement of the information transfer approach. IMO the Reagan revolution really was a persistent non-random phenomenon. Pre- and post-1980 really are different politico-economic regimes. 1. Ha! Actually they are, even in the ITM ... you can see the effect here: 2. Thanks, Jason. Very interesting. :) 7. You guys probably already noticed but John Cochrane did provide an update with the exact .m code he used to do the calculation. O/T: Jason, you probably already saw this as well, but Scott writes in his latest: "While I’m impressed by an explanation that’s as flexible as a circus contortionist, I’d prefer something that isn’t consistent with any possible state of the universe. I’m no Popperian, but I like my theories to be at least a little bit falsifiable." 1. I did see it, thanks Tom. It's flabbergasting but then again as humans we frequently see faults in others that we fail to see in ourselves. So does this mean Scott knows a way market monetarism can be falsified? 2. “O, wad some Power the giftie gie us To see oursels as others see us! It wad frae monie a blunder free us, An' foolish notion.” -- Robert Burns 3. "So does this mean Scott knows a way market monetarism can be falsified?" I asked him. Unfortunately I packed too many questions in my comment and he didn't answer that one. I had a feeling that might happen. I asked him for a concrete example or two of something that would falsify his theories at least a "little bit." (c:
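For concreteness, the competing averages debated in the thread above can be checked numerically. This is a sketch of my own; the 1.48 figure is the rounded value of 1.04^10 used in the comments:

```python
import math

g0 = 1.0                    # RGDP(t = -10)
g10 = round(1.04 ** 10, 2)  # RGDP(t = 0): a decade of 4% growth -> 1.48

cochrane = (g10 - g0) / (10 * g0)              # start-weighted average
equal = (g10 - g0) / (5 * g0 + 5 * g10)        # equal weights on endpoints
harmonic = (g10**2 - g0**2) / (20 * g0 * g10)  # harmonic-mean denominator
log_mean = math.log(g10 / g0) / 10             # average log growth rate

print(cochrane, equal, harmonic, log_mean)
# cochrane 0.048 (overstated); equal ~0.0387; harmonic ~0.0402; log ~0.0392
```

The start-weighted version overstates the 4% trend as 4.8%, while the equal-weight, harmonic-mean, and logarithmic variants all land close to the true rate, matching the numbers worked out in the comments.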
If AaBbCcDd is mated to AaBbCcDd (no gene interactions among all genes), what is the probability... 1. If AaBbCcDd is mated to AaBbCcDd (no gene interactions among all genes), what is the probability that the offspring will have either the phenotype AaB_CcDD or the genotype AAbbCcD_? (assuming complete dominance for genes) (Show procedure, NO score for direct result) 2. A mouse sperm of genotype a BCDE fertilizes an egg of genotype a bcDe. What are all the possibilities for the genotypes of (a) the zygote and (b) a sperm or egg of the baby mouse that develops from this fertilization? (Show procedure, NO score for direct result) 3. What probability of offspring from cross (AaBbCc x AabbCc) is predicted to show the recessive phenotypes for at least two of the three traits? (assuming complete dominance for genes) (use either branched-line or Punnett square to explain; NO score for giving the direct probability) 4. If the a and b loci are 30 cM apart and an AA BB individual and an aa bb individual mate: What gametes will the F1 individuals produce, and in what proportions? What phenotypic classes in what proportions are expected in the F2 generation (assuming complete dominance for both genes) (show procedure, NO score for direct result)? 5. AABB and aabb individuals were crossed to each other, and the F1 generation was backcrossed to the aabb parent. 995 AaBb, 997 aabb, 3 Aabb, and 5 aaBb offspring resulted. 1. How far apart are the a and b loci? 2. What progeny and in what frequencies would you expect to result from testcrossing the F1 generation from a AAbb X aaBB cross to aabb? 3. In a typical meiosis, how many crossovers occur between genes A and B? 4. Assume that the A and B loci are on the same chromosome, but the offspring from the testcross described above were 496 AaBb, 504 aabb, 502 Aabb, and 498 aaBb.
How would your answer to part (c) change? We are entitled to answer only one question, so I will answer the second one (the mouse problem). Mouse sperm Mouse egg BCDE X bcDe F1 progeny -----> BbCcDDEe (Baby mouse) a) The genotype of the zygote is ------> BbCcDDEe b) Now consider the genotype of a sperm or egg of this baby mouse. First of all, we have to know that all cells of this baby mouse contain the same genotype, including the germ cells. So the genotype of the germ cells is BbCcDDEe. It contains 3 heterozygous gene pairs (Bb, Cc, Ee; DD is homozygous). According to the formula, number of gametes produced = 2^n, where n = number of heterozygous gene pairs = 3, so the number of gametes produced by the germ cells = 2^3 = 8. That means the sperm or eggs can have 8 different genotypes. The possible genotypes are given below: 1) BCDE 2) bcDe 3) BcDE 4) BCDe 5) bCDE 6) bcDE 7) bCDe 8) BcDe
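The gamete count in the answer above can be double-checked by brute-force enumeration. This is a quick sketch of my own, not part of the original answer:

```python
from itertools import product

# baby mouse genotype BbCcDDEe: one allele pair per locus
loci = ["Bb", "Cc", "DD", "Ee"]

# a gamete takes one allele from each pair; using a set collapses the
# duplicate picks that come from the homozygous DD locus
gametes = {"".join(picks) for picks in product(*loci)}

print(sorted(gametes))
# 3 heterozygous loci -> 2**3 = 8 distinct gamete genotypes
```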
What is a number again? By Dan Christensen (Revised: 2016-11-01) What is a number? If we could see them, what might the set of natural numbers look like? You might picture an endless line of nodes (numbers) connected by one-way arrows, starting at 0. Next is 1, then 2 and so on. (For reasons that will be apparent below, I call this the main sequence.) From this diagram, we can see that:
1. 0 is a number (the first number here).
2. Every number has a unique next number.
3. No two numbers have the same next number.
4. No number has 0 as its next number (0 is the first number).
5. No non-empty, proper subset of the natural numbers is completely disconnected from the remaining numbers.
It all seems so obvious, so self-evident, but... so what? The thing is, this short list of “obvious and self-evident” properties characterizes the set of natural numbers to the extent that from these properties alone, we can (in theory) derive all of number theory, algebra and calculus. Yes, at its base, mathematics is that simple! Let's now translate this list of properties into the language of DC Proof.
1. 0 is a number: 0 e n
2. Every number has a unique next number: ALL(a):[a e n => next(a) e n]
3. No number has 0 as its next number: ALL(a):[a e n => ~next(a)=0]
4. No two numbers have the same next number: ALL(a):ALL(b):[a e n & b e n => [~a=b => ~next(a)=next(b)]] or equivalently... ALL(a):ALL(b):[a e n & b e n => [next(a)=next(b) => a=b]]
5. No number exists outside the main sequence. & ALL(b):[b e a => b e n] (a is a subset of n) & 0 e a (a is the main sequence) & ALL(b):[b e a => next(b) e a] => ~EXIST(b):[b e n & ~b e a]] (no element of n exists outside the main sequence) or equivalently... & ALL(b):[b e a => b e n] & 0 e a & ALL(b):[b e a => next(b) e a] => ALL(b):[b e n => b e a]] (switched quantifier)
Again, no non-empty, proper subset of the natural numbers is completely disconnected from the remaining numbers. Formal Proof (149 lines).
It can be shown that if set n', element 0' and function next' satisfy the above axioms (with the obvious substitutions in the above axioms), then n and n' would have to be an identical “copy” of one another. The structures of n and n' will be identical, only the names will have been changed, i.e. n would be order-isomorphic to n'. (This is not true of algebraic structures in general.) Informally, we can match up the elements of n and n' quite naturally as follows: 0 ↔ 0' next(0) ↔ next'(0') next(next(0)) ↔ next'(next'(0')) ...and so on. This matching up would be uniquely given by the function f mapping n to n' such that: ALL(a):[a e n => f(a) e n'] & f(0)=0' & ALL(a):[a e n => f(next(a))=next'(f(a))] Formal Proof (723 lines) It can further be shown that f would be a bijection mapping n to n'. Formal Proof (188 lines) Finally, it can be shown that f would preserve the successor relation on each set: ALL(a):ALL(b):[a e n & b e n => [next(a)=b <=> next'(f(a))=f(b)]] Formal Proof (167 lines)
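The informal matching can be illustrated with two toy "copies" of the naturals: ordinary integers with next(a) = a + 1, and strings of S's where next' appends an S. This sketch is my own illustration, not part of DC Proof, and it only spot-checks the defining equations of f on a finite prefix:

```python
def nxt(a):          # successor on the first copy: n = {0, 1, 2, ...}
    return a + 1

def nxt_p(s):        # successor on the second copy: n' = {"", "S", "SS", ...}
    return s + "S"

def f(a):            # the unique map with f(0) = 0' and f(next(a)) = next'(f(a))
    return "S" * a   # closed form of that recursion

assert f(0) == ""                  # f sends 0 to 0'
for a in range(50):                # structure preserved on a finite prefix
    assert f(nxt(a)) == nxt_p(f(a))
```

The uniqueness claim in the text corresponds to the fact that the two asserted equations pin down f(a) for every a by induction.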
Variable Format Half Precision Floating Point Arithmetic A year and a half ago I wrote a post about "half precision" 16-bit floating point arithmetic, Moler on fp16. I followed this with a bug fix, bug in fp16. Both posts were about fp16, defined in IEEE standard 754. This is only one of 15 possible 16-bit formats. In this post I am going to consider all 15. There is also interest in a new format, bfloat16. A recent post by Nick Higham compares the two, Higham on fp16 and bfloat16. Nick mentions the interest in the two formats by Intel, AMD, NVIDIA, Arm and Google. These formats are two out of the 15. A floating point format is characterized by two parameters, p, the number of bits in the fraction, and q, the number of bits in the exponent. For half precision, we always have p+q = 15. This leaves one bit for the sign. The two formats of most interest are the IEEE standard fp16 with p = 10 and the new bfloat16 with p = 7. The new format has three more bits in the exponent and three fewer bits in the fraction than the standard. This increased range at the expense of precision is proving useful in machine learning and image processing. My new MATLAB® object is an elaboration of fp16, so I named it vfp16. Here is its help entry.
help @vfp16/vfp16
vfp16. Constructor for variable format 16-bit floating point object.
y = vfp16(x) is an array, the same size as x, of uint16s. Each element is packed with p fraction bits, 15-p exponent bits and one sign bit. A single value of the precision, p, is associated with the entire array. Any integer value of p in the range 0 <= p <= 15 is allowed, although the extreme values are of questionable utility. The default precision is p = 10 for IEEE standard fp16.
y = vfp16(x,p) has precision p without changing the working precision.
Three key-value pairs may be set:
vfp16('precision',p) sets the working precision to p.
vfp16('subnormals','on'/'off') controls gradual underflow.
vfp16('fma','off'/'on') controls fused multiply adds.
Up to three key-value pairs are allowed in a single call to vfp16.
Two formats exist in hardware:
vfp16('fp16') sets p = 10, subnormals = on, fma = off (the default).
vfp16('bfloat16') sets p = 7, subnormals = off, fma = on.
vfp16('precision') is the current working precision.
vfp16('subnormals') is the current status of gradual underflow.
vfp16('fma') is the current status of fused multiply adds.
u = packed(y) is the uint16s in y.
p = precision(y) is the value for the entire array y.
See also: vfp16/single, Reference page in Doc Center doc vfp16
The key attributes of variable format half precision are displayed in the following chart, vfp16_anatomy. The extreme exponent range makes it necessary to use the vpa arithmetic of the Symbolic Math Toolbox™ to compute vfp16_anatomy.
• p is the precision, the number of bits in the fraction.
• bias is the exponent bias (the offset applied to the stored exponent).
• eps is the distance from 1 to the next larger vfp16 number.
• realmax is the largest vfp16 number.
• realmin is the smallest normalized vfp16 number.
• tiny is the smallest subnormal vfp16 number.
p bias eps realmax realmin tiny
[ 1, 8191, 0.5, 8.181e2465, 3.667e-2466, 1.834e-2466]
[ 2, 4095, 0.25, 9.138e1232, 3.83e-1233, 9.575e-1234]
[ 3, 2047, 0.125, 3.03e616, 1.238e-616, 1.547e-617]
[ 4, 1023, 0.0625, 1.742e308, 2.225e-308, 1.391e-309]
[ 5, 511, 0.03125, 1.32e154, 2.983e-154, 9.323e-156]
[ 6, 255, 0.01562, 1.149e77, 3.454e-77, 5.398e-79]
[ 7, 127, 0.007812, 3.39e38, 1.175e-38, 9.184e-41]
[ 8, 63, 0.003906, 1.841e19, 2.168e-19, 8.47e-22]
[ 9, 31, 0.001953, 4.291e9, 9.313e-10, 1.819e-12]
[ 10, 15, 0.0009766, 65500.0, 6.104e-5, 5.96e-8]
[ 11, 7, 0.0004883, 255.9, 0.01562, 7.629e-6]
[ 12, 3, 0.0002441, 16.0, 0.25, 6.104e-5]
[ 13, 1, 0.0001221, 4.0, 1.0, 0.0001221]
[ 14, 0, 6.104e-5, 2.0, 2.0, 0.0001221]
[ 15, NaN, 3.052e-5, NaN, NaN, NaN]
Here is the binary display of vfp16(x,p) as p varies for an x between 2 and 4. This is the same output as the animated calculator below.
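The anatomy chart above follows from the two format parameters alone. Here is a sketch of the same formulas, in Python rather than MATLAB, and valid only for values of p where the results fit in a double (roughly 4 <= p <= 14; the smaller p rows need extended precision, which is why the post uses vpa):

```python
def vfp16_anatomy(p):
    """Derived attributes of a 16-bit float with p fraction bits."""
    q = 15 - p                    # exponent bits (one bit left for the sign)
    bias = 2 ** (q - 1) - 1
    eps = 2.0 ** -p               # spacing from 1 to the next larger number
    realmax = (2 - 2.0 ** -p) * 2.0 ** bias
    realmin = 2.0 ** (1 - bias)   # smallest normalized number
    tiny = 2.0 ** (1 - bias - p)  # smallest subnormal number
    return bias, eps, realmax, realmin, tiny

print(vfp16_anatomy(10))  # fp16 row:     bias 15,  realmax 65504
print(vfp16_anatomy(7))   # bfloat16 row: bias 127, realmax ~3.39e38
```

The realmax entry 65500.0 in the chart is 65504 displayed to four significant digits.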
format compact
format long
x = 10/3
for p = 0:15
    y = binary(vfp16(x,p));
    fprintf('%5d %18s\n',p,y)
end
The upper triangle in the output is the biased exponent field and the lower triangle is the fraction. At p = 0 there is no room for a fraction and at p = 15 there is no room for an exponent. Consequently, these formats are not very useful. Here are the results when these values are converted back to doubles. As the precision increases the error is cut in half at each step. The last two values of p show failure. The important values of p are 7 and 10.
disp('    p        y                  x-y')
for p = 0:15
    y = double(vfp16(x,p));
    fprintf('%5d %18.13f %18.13f\n',p,y,x-y)
end
    p        y                  x-y
    0   4.0000000000000   -0.6666666666667
    1   3.0000000000000    0.3333333333333
    2   3.5000000000000   -0.1666666666667
    3   3.2500000000000    0.0833333333333
    4   3.3750000000000   -0.0416666666667
    5   3.3125000000000    0.0208333333333
    6   3.3437500000000   -0.0104166666667
    7   3.3281250000000    0.0052083333333
    8   3.3359375000000   -0.0026041666667
    9   3.3320312500000    0.0013020833333
   10   3.3339843750000   -0.0006510416667
   11   3.3330078125000    0.0003255208333
   12   3.3334960937500   -0.0001627604167
   13   3.3332519531250    0.0000813802083
   14                NaN                NaN
   15   2.6666259765625    0.6667073567708
Conversion to Single
With eight bits in the exponent, the new bfloat16 has a significant advantage over the standard fp16 when it comes to conversion between single and half precision. The sign and exponent fields of the half precision word are the same as single precision, so the half precision word is just the front half of the single precision word. For example, format hex
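The front-half relationship can be mimicked outside MATLAB as well. This is a sketch of my own, not code from the post: bfloat16 conversion is just keeping the top 16 bits of the float32 word (truncation; a real implementation might round instead):

```python
import struct

def bfloat16_bits(x):
    """Truncate a float32 to its top 16 bits: sign, 8 exponent bits, 7 fraction bits."""
    (word,) = struct.unpack(">I", struct.pack(">f", x))
    return word >> 16

def bfloat16_value(bits):
    """Reinterpret the 16 bits as the front half of a float32 word."""
    (x,) = struct.unpack(">f", struct.pack(">I", bits << 16))
    return x

b = bfloat16_bits(1.0)
print(hex(b), bfloat16_value(b))  # 1.0 is exactly representable: 0x3f80 1.0
```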
The extended dot product is x'*y + a The so-called elementary vector operation, or "daxpy" for "double precision a times x plus y" is a*x + y Both involve loops of length n around a multiplication followed by an addition. Many modern computer architectures have fused multiply add instructions, FMA, where this operation is a single instruction. Moreover, the multiplication produces a result in twice the working precision and the addition is done with that higher precision. The bfloat16 specification includes FMA. With our vfp16 method FMA can be turned off and on. It is off by default. I wrote a blog post about floating point denormals several years ago. In the revision of IEEE 754 denormals were renamed subnormals. Whatever they are called, they are relatively rare with the large range of bfloat16, so they can be turned off. All the properties of bfloat16 can be obtained with the statement vfp16('bfloat16'), which sets precision = 7, subnormals = off and fma = on. The defaults can be restored with vfp16('fp16'), which sets precision = 10, subnormals = on and fma = off. This animation shows how I've added @vfp16 to the calculator that I mentioned in blog posts about half precision and roman numerals. When the radio button for the word size 16 is selected, a slider appears that allows you to select the precision. The button is labeled "16", followed by the precision. Moving the slider up increases the number of bits in the fraction and consequently increases the precision, while moving the slider down decreases p and the precision. I could make a variable format quarter precision object, but none of the other formats are useful. And variable format single and double precision objects have lots of bits, but little else to offer. To be continued I'm not done with this. I am still in the process of extending the linear algebra functions to variable format. I hope to report on that in a couple of weeks. In the meantime, I'll post Version 4.20 of Cleve's Laboratory with what I have done so far.
# Notes about optimizing emulated pairing (part 1)

This note describes the techniques used for optimizing the bilinear pairing computation with field emulation in Groth16 using gnark. A shortlist of the techniques used:

* field emulation;
* augmented Groth16;
* product argument in-circuit;
* amortized methods in gnark.

## Field emulation

When using pairing-based SNARK constructions (for example, Groth16 and PLONK), the computations are performed over the scalar field of the underlying elliptic curve. Depending on the context, either the elliptic curve is fixed (for example, when we target a verifier running as an Ethereum smart contract) or we have little choice in picking a curve with a suitable scalar field. This means that if we want to perform computations over an arbitrary finite field (further on, let's call it the nonnative field), then we have to emulate multi-precision arithmetic on the native field. For consistency, let us denote the modulus of the native field by $r$ and the modulus of the nonnative field by $q$. Usually, multi-precision arithmetic is implemented by representing an integer value $x$ decomposed into $n$ limbs using a fixed base $b$:
$$ x = \sum_{i=0}^{n-1} x_i b^i. $$
Assuming add-with-carry and mul-with-carry operations, we can implement multi-precision addition and multiplication by operating on the limbs and carrying the excess over to the next limb, as learned in school. This approach works really well on native hardware which has suitable primitive integer types (8-, 16-, 32- and 64-bit wide unsigned integers, i.e. $b \in \{2^8, 2^{16}, 2^{32}, 2^{64}\}$) and arithmetic operations with carry. However, in a native field we only have field addition and multiplication. If the field cardinality is not a power of two, then in case the operations overflow the underlying field we lose information about the exact values of the result due to automatic modular reduction.
To avoid this, we need to ensure that all the limb values are always small enough so that the multiplication (and to a lesser extent, addition) never overflows the native field. But as the bitlength of the product of two $t$-bit inputs is $2t-1$, even for very small $t$ we overflow the native field after only a few multiplication operations. This means that we have to have a way to reduce the limbs. Secondly, we have to be able to reduce the integer value modulo $q$. Essentially, we have to show that there exist $k, x'$ such that
$$ x' = k*q + x, $$
or alternatively
$$ x'-x = k*q. $$
We compute both $x'$ and $k$ non-deterministically (in gnark terms, using hints). For correctness, we must ensure that $k,x' < q$ and $x'_i, k_i < b$. By computing the integer value $y = x'-x$, we are left to show that
$$ y = k*q. \quad (1) $$
For that, we can instead consider the polynomial representation of the multi-precision integer
$$ p_y(Z) = \sum_{i=0}^{n-1} y_i Z^i. $$
With this representation we have $p_y(b) = y$ for the base $b$. Now, $(1)$ becomes
$$ p_{y}(Z) = p_{k}(Z) p_{q}(Z), \quad (1') $$
which we can show by evaluating both sides at $2n-1$ different points, as $\deg(p_{y}), \deg(p_q), \deg(p_k) \leq 2n-1$. Although we did not mention it explicitly, the bulk of the cost of multiplication is showing that the limbs $y_i, k_i$ are bounded by $b$. Naively we would show this by decomposing every limb $y_i, k_i$ into bits non-deterministically and showing that this decomposition was correct. More formally, we can write the value $v$ in base 2 as
$$ v = \sum_{i=0}^{t} v_i 2^i, $$
where $t$ is the bitlength of the native field modulus. However, this is not sound in the SNARK circuit as we could assign any value for $v_i$. Thus, we have to additionally assert
$$ v_i \cdot (1-v_i) = 0, \forall i \in [t]. $$
We can see that this approach is really inefficient as we have to create $t$ R1CS constraints per decomposed value. For every multiplication this is approximately $2*t$ constraints!
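The limb/polynomial view of the reduction check can be demonstrated with ordinary big integers. The base, modulus, and limb counts below are illustrative assumptions of mine, not gnark's actual parameters:

```python
B = 2 ** 16                    # limb base b

def to_limbs(v, n):
    return [(v >> (16 * i)) % B for i in range(n)]

def ev(coeffs, z):             # evaluate p(Z) = sum coeffs[i] * Z^i at z
    return sum(c * z ** i for i, c in enumerate(coeffs))

q = (1 << 64) - (1 << 32) + 1  # example nonnative modulus (64 bits)
x = 0x123456789ABCDEF % q
y = x * x                      # unreduced product
k, r = divmod(y, q)            # hint values: y = k*q + r

kl, ql = to_limbs(k, 4), to_limbs(q, 4)

# product polynomial p_k(Z) * p_q(Z): convolution of the limb vectors,
# with no carries propagated between coefficients
prod = [0] * 7
for i, a in enumerate(kl):
    for j, c in enumerate(ql):
        prod[i + j] += a * c

# degree is at most 6, so agreement at 2n-1 = 7 points pins the identity down
for z in range(1, 8):
    assert ev(prod, z) == ev(kl, z) * ev(ql, z)

# and evaluating at the base recovers the integer statement y - r = k*q
assert ev(prod, B) == k * q == y - r
```

The point of working with the convolution rather than normalized limbs is exactly that no carry propagation is needed inside the circuit; carries only matter when evaluating at the base.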
Let's see how to make the multiplication more efficient, in three steps.

## Non-interactive protocols

The first critical step is to obtain some randomness in-circuit. But the circuits are fixed, and for plain Groth16 (also PLONK) the only randomness could theoretically be provided by the prover. By default this isn't sufficient for probabilistic checks, as the prover could provide values which lead to false positive outcomes.

*prover chooses path which leads to false positive outcome*

The simplest way to overcome this problem is to use the Fiat-Shamir heuristic. Under the Fiat-Shamir heuristic, the verifier's challenges in the protocol are replaced with hashes of the protocol transcript up to that moment. So for example if we have an interactive protocol

```
PROVER                      VERIFIER
msg
         -- msg -->
                            ch <- Sample()
         <-- ch --
use ch
......
```

then it is turned into a non-interactive protocol as

```
PROVER                      DETERMINISTIC VERIFIER
msg
         -- msg -->
                            ch <- H(msg)
         <-- ch --
use ch
......
```

Now, the proof is the transcript `(msg, ch, ...)` of the protocol. Everyone can verify the correctness of the proof by checking that the verifier's challenges are correctly computed. It depends on the viewpoint, but normally this transformation, assuming that we use strong hash functions, is sound. This is due to the difficulty of choosing inputs to the hash function which lead to something suitable for the prover. With strong hash functions, if the prover changes only a bit in the inputs, this cascades and changes many output bits. We can use this approach also in SNARK circuits, but we stumble again against the wall of inefficiency. Cryptographic hash functions are not SNARK-friendly and create a lot of constraints in the circuit. Even with SNARK-friendly hash functions it becomes unfeasible to feed the whole protocol transcript into the hash function, as the hashing overhead cancels out any possible saving coming from the probabilistic checks.
## Augmented Groth16

:::info
The approach for constructing commitment from the witness comes from [LegoSNARK paper](https://eprint.iacr.org/2019/142.pdf).
:::

However, there is a way. To explain it, we have to recall how Groth16 works at a high level. The arithmetic circuit is compiled into $c$ equations of the form
$$ \left(\sum_{i = 0}^N u_{i,j} x_i \right) \cdot \left(\sum_{i=0}^N v_{i,j} x_i \right) = \sum_{i=0}^N w_{i,j} x_i. \quad \forall j \in [c] $$
Here the vector $\mathbf{x} = (x_1, ..., x_N)$ is the witness, where the first element is $1$, the next $\ell$ elements are public (the statement) and the rest is the prover's witness. It is possible to represent the matrices $(u_{i,j})$, $(v_{i,j})$ and $(w_{i,j})$ as polynomials $u_i(Z)$, $v_i(Z)$ and $w_{i}(Z)$ such that their evaluations at some domain $\Omega$ correspond to the matrix cells. Then the check becomes
$$ \left( \sum_{i=0}^N u_i(Z) x_i \right) \cdot \left( \sum_{i=0}^N v_i(Z) x_i \right) = \sum_{i=0}^N w_i(Z) x_i + h(Z) Z_\Omega(Z). $$
Omitting the full SNARK proof computation (see [Groth16](https://eprint.iacr.org/2016/260.pdf) for details), the prover also computes
$$ C = \frac{\sum_{i=\ell+1}^N z_i(\beta u_i(x) + \alpha v_i(x) + w_i(x) + h(x)Z_\Omega(x))}{\delta} + As + Br - rs \delta \quad (2) $$
and the verifier checks
$$ [A]_1 \cdot [B]_2 = [\alpha]_1 \cdot [\beta]_2 + \sum_{i=0}^\ell z_i \left[\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma} \right]_1 \cdot [\gamma]_2 + [C]_1 \cdot [\delta]_2. \quad (3) $$
Now, we can use a Pedersen commitment to the witness elements $\{x_i\}_{i = \ell+1}^{\ell + \mu}$ as a Fiat-Shamir challenge. For this, we first have to compute the commitment base
$$ G_i = \left[\frac{\beta u_i(x) + \alpha v_i(x) + w_i (x)}{\gamma} \right]_1, \quad \ell + 1 \leq i \leq \ell + \mu, $$
and the commitment value is
$$ M = \sum_{i=\ell+1}^{\ell+\mu} z_i G_i.
$$
Additionally, the prover will have to provide a proof of knowledge $\pi$ to ensure that $M$ is correctly computed using the base $\{G_i\}_{i = \ell+1}^{\ell + \mu}$. The modified prover computation $(2)$ is now:
$$ C = \frac{\sum_{i=\ell\color{red}{+\mu} +1}^N z_i(\beta u_i(x) + \alpha v_i(x) + w_i(x) + h(x)Z_\Omega(x))}{\delta} + As + Br - rs \delta $$
and the verifier's computation $(3)$ is:
$$ [A]_1 \cdot [B]_2 = [\alpha]_1 \cdot [\beta]_2 + \sum_{i= 0}^\ell z_i \left[\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma} \right]_1 \cdot [\gamma]_2 \color{red}{+ M \cdot [\gamma]_2}+ [C]_1 \cdot [\delta]_2;\\ \color{red}{\text{Verify}(\{G_i\}, \pi, M) = 1}. $$
The commitment $M$ can now be used in-circuit as a Fiat-Shamir challenge on the inputs $\{z_i\}_{i = \ell+1}^{\ell+\mu}$ after hashing it into the scalar field.

:::info
We can build a similar argument in PLONK using a custom gate to mark the variables we wish to commit.
:::

## Product argument for range checking

We can now derive a single Fiat-Shamir challenge from some variables in the circuit. Let's choose the inputs well. In the first section we had to check that the limbs from hints are less than $b$. Rephrasing, we can say that the value of the limb must be in the table $\{f_i\}_{i=0}^{b-1} = \{0, 1, \ldots, b-1\}$. If we denote all the variables we want to range check as $\{s_i\}$, then we want to show that $\{s_i\} \subseteq \{f_i\}$. We compute the histogram of $\{s_i\}$ relative to $\{f_i\}$ to get the multiplicities $\{e_i\}$ of every element in $\{f_i\}$. We build canonical polynomials of the sets as
$$ F(Z) = \prod_{i=0}^{b-1} (Z-f_i)^{e_i};\\ S(Z) = \prod_{i=0}^{M} (Z-s_i) $$
and need to show that
$$ F(Z) = S(Z). \quad (4) $$
After computing the commitment $r = \text{Commit}(\{s_i\}, \{e_i\})$, we can check $(4)$ with two polynomial evaluations:
$$ F(r) = S(r). $$
The evaluations cost $b\log(N) + N$ multiplications.
Compared to the binary decomposition approach this is a significant performance improvement, assuming we perform a lot of range checks, as the left side of the table is fixed. There is an additional trick which allows us to get rid of the $\log(N)$ factor of evaluating $F(Z)$. Instead of $(4)$ we look at its logarithmic derivative. Then the equation to check becomes
$$ F'(Z) = S'(Z)\\ \sum_{i=0}^{b-1} \frac{e_i}{Z-f_i} = \sum_{i=0}^{M} \frac{1}{Z-s_i}. $$
On both sides we have replaced products with inverses, which have the same cost. But we were able to lose the exponent, which decreases the number of constraints needed to evaluate the left side.

:::info
This idea for the log-derivative lookup comes from [Haböck](https://eprint.iacr.org/2022/1530.pdf).
:::

## Implementation detail

Gnark aims to be a modular and safe tool for circuit developers. For example, we see in the range checking gadget that before we can compute the challenge commitment and assert the product check, we have to collect all the queries and compute the multiplicities. From the user's perspective this means that the computations are performed out of order. An option would be to require the developer to perform these steps manually, but this would provide an opportunity to write unsound circuits if the user forgets to include any query. To avoid this, we have to build the range checking gadget in a way which ensures that the checks are always included in the circuit. For that, we added methods to register circuit finalizer callbacks and caches. The finalizer callbacks are functions which are called after the user-defined part of the circuit. Additionally, we use caches to instantiate some constraints only once. We see that in the range checking gadget the left-hand side of the equation adds $b$ constraints regardless of how many queries we perform.
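A toy version of the log-derivative check can be run over the rationals instead of the native field. The table size, queries, and challenge below are made-up values for illustration:

```python
from collections import Counter
from fractions import Fraction

table = list(range(16))            # f_i: entries 0..b-1 of the range table
queries = [3, 7, 7, 0, 15, 3, 3]   # s_i: limb values being range-checked
mult = Counter(queries)            # multiplicities e_i (a hint, in-circuit)

z = Fraction(987654321, 17)        # stand-in for the Fiat-Shamir challenge

lhs = sum(Fraction(mult[f]) / (z - f) for f in table)  # sum e_i / (z - f_i)
rhs = sum(Fraction(1) / (z - s) for s in queries)      # sum 1 / (z - s_i)
assert lhs == rhs  # holds because every query lies in the table
```

In-circuit the two sides live in the native field and the challenge comes from the commitment described above; the `Fraction` divisions here only mirror the field inverses, and the left side costs one inverse per table entry regardless of the number of queries.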
So we better instantiate the gadget only once and then apply it for different parts of the circuit by decomposing the individual checks into smaller table-specific checks. As an example, given the following simple circuit in gnark, let's see what happens in the background:
```go=
type PairingComputationCircuit struct {
	// the witness vector is automatically allocated and initialised
	ElG1       sw_bn254.G1Affine
	ElG2       sw_bn254.G2Affine
	ExpectedGt sw_bn254.GTEl
	// the in-circuit types are fully compatible with out-circuit counterparts
	// in gnark-crypto
}

func (c *PairingComputationCircuit) Define(api frontend.API) error {
	// we use a cached instance of the pairing context. The pairing context
	// itself uses a cached instance of the range check context. We can call
	// different gadgets which all will use the same range check gadget
	// instance.
	pairingCtx, err := sw_bn254.NewPairing(api)
	if err != nil {
		return err
	}
	// the witness non-native element limbs are automatically constrained the
	// first time they are used in a circuit. Intermediate results are
	// automatically constrained.
	res, err := pairingCtx.Pair([]*sw_bn254.G1Affine{&c.ElG1},
		[]*sw_bn254.G2Affine{&c.ElG2})
	if err != nil {
		return err
	}
	pairingCtx.AssertIsEqual(res, &c.ExpectedGt)
	// we check that all witness elements are used in circuits. Without the
	// assertion we get a compile-time error.
	return nil
	// optimal range check table size computed, range check elements decomposed
	// into table entries and checked.
}

func Example() {
    var P bn254.G1Affine
    var Q bn254.G2Affine
    var a, b fr.Element
    _, _, g1gen, g2gen := bn254.Generators()
    a.SetRandom()
    b.SetRandom()
    P.ScalarMultiplication(&g1gen, a.BigInt(new(big.Int)))
    Q.ScalarMultiplication(&g2gen, b.BigInt(new(big.Int)))
    x, err := bn254.Pair([]bn254.G1Affine{P}, []bn254.G2Affine{Q})
    if err != nil {
        panic(err)
    }
    assignment := PairingComputationCircuit{
        ElG1:       sw_bn254.NewG1Affine(P),
        ElG2:       sw_bn254.NewG2Affine(Q),
        ExpectedGt: sw_bn254.NewGTEl(x),
        // automatically splits the extension field elements into limbs for
        // non-native field arithmetic.
    }
    ccs, err := frontend.Compile(ecc.BN254.ScalarField(), r1cs.NewBuilder, &PairingComputationCircuit{})
    if err != nil {
        panic(err)
    }
    pk, vk, err := groth16.Setup(ccs)
    if err != nil {
        panic(err)
    }
    w, err := frontend.NewWitness(&assignment, ecc.BN254.ScalarField())
    if err != nil {
        panic(err)
    }
    proof, err := groth16.Prove(ccs, pk, w) // prover computes the Groth16 proof and commitment proof of knowledge
    if err != nil {
        panic(err)
    }
    pw, err := w.Public()
    if err != nil {
        panic(err)
    }
    err = groth16.Verify(proof, vk, pw) // verifier checks the Groth16 proof and the commitment proof
    if err != nil {
        panic(err)
    }
}
```

## Performance

Using these tricks, the performance of BN254-in-BN254 and BLS12-381-in-BN254 pairing computation on a MacBook Pro M1 is:

* BN254 - number of constraints: 1393318, proving time: 5.7s
* BLS12-381 - number of constraints: 2088277, proving time: 7.4s

In the next part we will describe further optimisations for pairing computation.
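Outside the circuit, the log-derivative identity used above is easy to sanity-check over a small prime field. The sketch below is plain Go, not gnark; the prime and the toy table are arbitrary demo choices. It compares $\sum_i e_i/(Z-f_i)$ against $\sum_j 1/(Z-s_j)$, with multiplicities $e_i$ counted from the queries:

```go
package main

import "fmt"

// small prime standing in for the SNARK scalar field (demo assumption only)
const p = 1000003

// powmod computes a^e mod m by square-and-multiply.
func powmod(a, e, m uint64) uint64 {
    r := uint64(1)
    a %= m
    for e > 0 {
        if e&1 == 1 {
            r = r * a % m
        }
        a = a * a % m
        e >>= 1
    }
    return r
}

// inv returns the modular inverse via Fermat's little theorem.
func inv(a uint64) uint64 { return powmod(a, p-2, p) }

// logDerivCheck reports whether sum_i e_i/(z-f_i) == sum_j 1/(z-s_j) mod p,
// where e_i is the number of times table entry f_i appears among the queries.
func logDerivCheck(table, queries []uint64, z uint64) bool {
    counts := make(map[uint64]uint64)
    for _, s := range queries {
        counts[s]++ // the multiplicities e_i
    }
    var lhs, rhs uint64
    for _, f := range table {
        lhs = (lhs + counts[f]%p*inv((z+p-f)%p)) % p
    }
    for _, s := range queries {
        rhs = (rhs + inv((z+p-s)%p)) % p
    }
    return lhs == rhs
}

func main() {
    table := []uint64{0, 1, 2, 3}
    queries := []uint64{1, 3, 3, 0, 2, 2, 2}                // every query is a table entry
    fmt.Println(logDerivCheck(table, queries, 12345))       // true
    fmt.Println(logDerivCheck(table, []uint64{5}, 12345))   // false: 5 is outside the table
}
```

The second call fails because a query outside the table contributes a pole on the right-hand side that no table term can match — exactly the soundness argument the lookup relies on.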
Can I pay for someone to provide explanations and examples for Algorithms concepts? | Hire Someone To Do Computer Science Assignment Can I pay for someone to provide explanations and examples for Algorithms concepts? Hey guys, this post might be hard-coded. So my answer would be this: donor algorithms have a set of variables that you can specify in R. Using a calculator to input algorithms would not be a good selection (or even a good algorithm); as for most things, we’ll use something like this: http://arxiv.org/pdf/1510.001.pdf And making them “a good” and “a bad” essentially means not only that we define variables in R, but also that we need to search for algorithms based on those variables. Update: that’s not correct IMHO. The definition of “a good” is pretty silly: for example, a public key (aka a public “magic number”) is not a good algorithm. Because that reference is incomplete, the following example must be correct. Why does “$010120” mean (“private”)? It should consist of something like: “put the code point at a property that is not private”, or something else that would have the same meaning but could be in a different sentence. The function “put” is supposed to return an object whose entries must take the value as a type. For example, we could implement this in one of the languages we’re looking at: Fortran; and it would return the object’s ordinary function. We should add, “private” is a wrong replacement, but it would have the same meaning. E.g. “require that all my applications run inside a binary container”. A: Ok, so I’m getting weird results when I come to a comment, mainly here. I took this step to answer my problem on the internet. I’ve tried to put an example to use in my article but it does not work; I have to add logic.
I believe it is a problem with R, I worked. Can I pay for someone to provide explanations and examples for Algorithms concepts? This is a question relevant to any discussion about the topic and all questions related to algorithms and the implementation of algorithms in mathematics and informatics. Problem Description When should algorithms be designed to perform well within an implementation? Consider Algorithms which perform well in applications that require a lot of information about the algorithm in question. In this hypothetical setting, the complexity of finding the solution of a given problem is proportional to the computational complexity of the solution, see Section 1 with S4 in [@gogolin2014introduction]. A more complicated consideration is about algorithms for solving a special type. In this case, it is more complicated due to the complexity of finding the solution of that problem. A typical example is [@choi2009metricalgebra]: for a given function $f$, an algorithm $a$ finds a non-zero element that is symmetric under the identity: $\operatorname{Id}\ldots = f(\operatorname{Id})$. If $f$ is symmetric, then given a function $f$, one can find a $2^{st}$-subfunction of $\operatorname{Id}$, that is a $2^{st}$-subfunction of $\operatorname{\alpha \left( f \right)}$, that is indeed symmetric under the identity. A possible application is a $4$-subfunction of $\operatorname{\alpha \left( f \right)}$; however, there are many other cases to consider. Consider [@breiman2009generalization], a $3$-subfunction which is symmetric under the identity, whereas the only-finite-time invariant subfunctions with $4$-subfunction are the $4$-subfunctions classically introduced in [@choi2013generalized] and defined as $$\alpha' \left( f \right)=\alpha \left(\frac{f}{f'}\right).$$ In the context of Algorithms with any type of kernel term, the term $\frac{f'}{f}$ is to be understood just by the fact $\frac{f}{f'}\in R$.
There are many other $4$-subfunction types, in particular all the $3$-subfunctions of the $1$-subfunction with only the $3$-subfunctions classically introduced. It provides useful information about the structure of the kernel term that can be integrated out to generate subfunctions with these data. In fact, in the current paper we would like to find out some ways to construct a $1$-subfunction with the kernel term itself and, when necessary, extend for different kernels to use the same data. We will outline such examples in an Appendix. Combining [@gogolin2014introduction] with the introduction of [@choi2011short]\*[Definition 6], we find it is natural to ask about the complexity of finding a $2^\N$-subfunction of a given kernel which contains all of the members that have finite element coefficients in [@choi2011short]. Unfortunately, in these papers, we believe that the complexity of finding a $2^\N$-subfunction is not an issue even if our definition of a kernel term improves the results in [@Bergensen2002], see Theorem \[3d\_com\]. We show in Example \[imfac\_exp3\] that the complexity is $\Omega(\sqrt 2)$. For $2$-subfunctions of the kernel term $f$, we have $$m^{st}_2 := \frac{2}{\sqrt{2}}.$$ Can I pay for someone to provide explanations and examples for Algorithms concepts? I’ve got a few free material questions that I’d like to add. How do I sort out my questions and do my research? Is there any way for someone to help me out with my own questions and give a fair idea of the problems I have in my research? With the amount of time I have in my life I’d consider asking about what I am learning, and possibly looking for other solutions. One of the problems I have in my life with this problem is that I don’t know much about computational complexity and the importance of efficient algorithms. Is this a problem where in some sense I care enough about it to ask?
As it is, trying to figure out what algorithms are not efficient isn’t really easy, and I am not sure how everyone can figure out what algorithms are efficient and what the problem is. The main problem I have is working in a complex environment with finite resources, and one main advantage is that I don’t feel either in a free-energy formalism or when I am confronted with important phenomena like in the Bayesian game where other players have given up their position and join the game. I am trying to think about this question by feeling better about the discussion regarding Algorithms using formal tools. I am posting a really simple question to share here (see also Question 4, section 4.b) on this blog. Any thoughts, input or suggestions on how to modify my question? I tried the following: ask about Algorithms, and then perhaps ask about their generalization to continuous streams. This can be helpful to people who only need one algorithm or algorithm layer and need more than one layer. Ideally I would have one algorithm which takes a continuous stream and takes some form of discrete stream or streamline. Storing it on some number of samples gives me the only way to compute these streamlines, or at least possibly the way I would with a fixed number
Calculate Grades Using Formulas This guide pertains to Ultra Course View. If your course is in Original Course View, see the Original Course View tutorial collection. About Gradebook Calculations You can easily add calculations to your course Gradebook. A calculation is a formula that produces a numerical result used to view or assign grades, usually based on other graded items. You can create your own formulas and use common arithmetic operations, including addition, subtraction, multiplication, and division, and use grouping operators (parentheses). You can add calculations based on the average, total, maximum, or minimum of the variables you include, such as categories, graded items, and other calculations. For example, add a calculation that displays the average of all assignments so students have an overall picture of their performance. You can add as many calculations as you need. Create Calculations Inline From the Gradebook List View, select the purple plus sign wherever you want to add a calculation, then select Add Calculation. Calculation Interface Add a title to name the calculation. Optionally, add a description and make the calculation column visible to students. Students see calculated grades on their Grades pages, but they don't see your descriptions or formulas. Determine how the result of the calculation appears. In the Select a Grade Schema menu, choose Points, Percentage, or Letter. Create your formula. In the left pane, select a function, variable, or operator to add it to the right pane. Note: Items listed under Functions & Variables and Operators appear on the right simply by clicking on them. Functions and Variables • Average: Generates the average for a selected number of graded items, categories, and other calculations. For example, you can find the average score on all tests. • Total: Generates a total based on the cumulative points, related to the points allowed.
You can select which graded items, categories, and other calculations are included in the calculation. • Minimum: Generates the minimum grade for a selection of graded items, categories, and other calculations. For example, you can find the minimum score on all assignments. • Maximum: Generates the maximum grade for a selection of graded items, categories, and other calculations. For example, you can find the maximum score on all discussions. • Variable: Select an individual graded item or calculation from the menu. You may only add one variable at a time. Continue to add variables from the left pane to add as many variables as you need. Operators • Add ( + ) • Subtract ( - ) • Divide ( / ) • Multiply ( * ) • Open Parenthesis ( • Close Parenthesis ) • Value: After the text box appears in the formula, click in the box to add a numeric value. You can include seven digits before a decimal point and four digits after it. When the calculation is generated and appears in students' grade pills, only two digits appear after the decimal point. Create Your Formula For example, select Total in the left pane to add that function to the right pane. Expand the list and select the check boxes for the items you want to add to the formula. When you choose a category, all items in that category are included. You must choose graded items and other calculations individually. Scroll through the list to view all items. In the Variable menu, select an item to choose it. After you make a selection in a menu, click anywhere outside of the menu to exit and save the selection in the right pane. Each element you add to the formula appears at the end. You can press and drag any added element to reorder your formula. To remove an element, select it and select the X. You can reuse any function, variable, or operator. When you select Save or Validate, the system checks the accuracy of your formula. Validate checks the formula while you remain on the page.
You can't save a calculation until it's mathematically valid. Select Clear to remove all elements from the right pane and start over. Example formula for the total for the first quarter: Create a Total calculation that includes the Assignment and Test categories and the Attendance grade, but doesn't include the Pop Quiz grade. The Assignment and Test categories are in the Total menu. Attendance and Pop Quiz are individual graded items in the Variable menu. Formula: Total of Assignment category + Test category + Attendance - Pop Quiz If the formula isn't valid, an inline error message appears next to Validate. Problems in your formula are highlighted in red in the right pane. Example error messages: • Unmatched operator: Symbols such as ( + ) or ( - ) don't match up with another part of the formula. Example: Graded item + (nothing). • Unmatched function, variable, or value: Typically appears when an operator is missing between two variables, such as two graded items or categories. • Some error messages are specific, such as Unmatched opening parenthesis, to alert you to exactly what's missing. Your newly created calculated item appears in your Gradebook. In List View on the Gradable Items tab, press the Move icon in the row of the calculation to drag it to a new location and release. The order you choose also appears on the Students tab page. Reminder: Students won't see the calculation until it has a grade and you make the item visible to them. Delete Graded Items in a Calculation If you delete a graded item used in a calculation, you receive a warning when you open the calculation: An item was removed from the Gradebook that was used in this calculation. We've updated the calculation where possible, but it may need your attention. You may need to update the calculation. Students see the updated calculation on their Grades pages if you made the calculation visible to them.
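The example formula above can also be checked by hand. A minimal Go sketch of the arithmetic (the item names, scores, and point values below are hypothetical, not from the guide):

```go
package main

import "fmt"

// Item is one graded item: points earned out of points possible.
type Item struct {
    Earned, Possible float64
}

// total sums earned points across items, mirroring the Total function
// applied to a category.
func total(items []Item) float64 {
    var t float64
    for _, it := range items {
        t += it.Earned
    }
    return t
}

func main() {
    // Hypothetical grades illustrating the guide's example formula:
    // Total of Assignment category + Test category + Attendance - Pop Quiz
    assignments := []Item{{18, 20}, {9, 10}}
    tests := []Item{{40, 50}}
    attendance := Item{5, 5}
    popQuiz := Item{3, 5}

    result := total(assignments) + total(tests) + attendance.Earned - popQuiz.Earned
    fmt.Println(result) // 69
}
```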
TreeView Knowledge Base Browser - BinaryRelation Sigma KEE - BinaryRelation Parents InheritableRelation The class of Relations whose properties can be inherited downward in the class hierarchy via the subrelation Predicate. Relation The Class of relations. There are two kinds of Relation: Predicate and Function. Predicates and Functions both denote sets of ordered n-tuples. The difference between these two Classes is that Predicates cover formula-forming operators, while Functions cover term-forming operators. Children AntisymmetricRelation BinaryRelation ?REL is an AntisymmetricRelation if for distinct ?INST1 and ?INST2, (?REL ?INST1 ?INST2) implies not (?REL ?INST2 ?INST1). In other words, for all ?INST1 and ?INST2, (?REL ?INST1 ?INST2) and (?REL ?INST2 ?INST1) imply that ?INST1 and ?INST2 are identical. Note that it is possible for an AntisymmetricRelation to be a ReflexiveRelation. BinaryPredicate A Predicate relating two items - its valence is two. EconomicRelation A class of Relations which are used to specify various economic measures, e.g. the GDP, the consumer price index, and the trade deficit. IntransitiveRelation A BinaryRelation ?REL is intransitive only if (?REL ?INST1 ?INST2) and (?REL ?INST2 ?INST3) imply not (?REL ?INST1 ?INST3), for all ?INST1, ?INST2, and ?INST3. IrreflexiveRelation Relation ?REL is irreflexive iff (?REL ?INST ?INST) holds for no value of ?INST. ReflexiveRelation Relation ?REL is reflexive iff (?REL ?INST ?INST) for all ?INST. SymmetricRelation A BinaryRelation ?REL is symmetric just iff (?REL ?INST1 ?INST2) implies (?REL ?INST2 ?INST1), for all ?INST1 and ?INST2. TransitiveRelation A BinaryRelation ?REL is transitive if (?REL ?INST1 ?INST2) and (?REL ?INST2 ?INST3) imply (?REL ?INST1 ?INST3), for all ?INST1, ?INST2, and ?INST3. TrichotomizingRelation A BinaryRelation ?REL is a TrichotomizingRelation just in case all ordered pairs consisting of distinct individuals are elements of ?REL.
UnaryFunction The Class of Functions that require a single argument.
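The definitions above are stated for arbitrary relations, but for a finite relation represented as a set of ordered pairs they can be checked mechanically. A small Go sketch (the relation below is an invented example, not part of SUMO):

```go
package main

import "fmt"

type pair struct{ a, b string }

// rel represents a finite binary relation as a set of ordered pairs.
type rel map[pair]bool

// symmetric: (?REL x y) implies (?REL y x) for all pairs.
func (r rel) symmetric() bool {
    for p := range r {
        if !r[pair{p.b, p.a}] {
            return false
        }
    }
    return true
}

// transitive: (?REL x y) and (?REL y z) imply (?REL x z).
func (r rel) transitive() bool {
    for p := range r {
        for q := range r {
            if p.b == q.a && !r[pair{p.a, q.b}] {
                return false
            }
        }
    }
    return true
}

// antisymmetric: for distinct x, y, (?REL x y) implies not (?REL y x).
func (r rel) antisymmetric() bool {
    for p := range r {
        if p.a != p.b && r[pair{p.b, p.a}] {
            return false
        }
    }
    return true
}

func main() {
    // a strict-order-like relation, as a subclass hierarchy would induce
    sub := rel{{"a", "b"}: true, {"b", "c"}: true, {"a", "c"}: true}
    fmt.Println(sub.transitive(), sub.antisymmetric(), sub.symmetric()) // true true false
}
```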
Formula To Calculate Filling Of Ball Mill Calculate and Select Ball Mill Ball Size for Optimum Grinding In grinding, selecting (calculating) the correct or optimum ball size that allows for the best and optimum/ideal or target grind size to be achieved by your ball mill is an important thing for a Mineral Processing Engineer AKA Metallurgist to do. Often, the ball used in ball mills is oversize “just in case”. Well, this safety factor can cost you much in recovery and/or mill liner wear and tear. Calculate Ball Mill Grinding Capacity The sizing of ball mills and ball milling circuits from laboratory grinding tests is largely a question of applying empirical equations or factors based on accumulated experience. Different manufacturers use different methods, and it is difficult to check the validity of the sizing estimates when estimates from different sources are widely divergent. It is especially difficult to teach mill ... Formula To Calculate Filling Of Ball Mill- EXODUS Mining ... Formula to calculate filling of ball mill calculate ball mill grinding capacity 911 17 mar 2017 the sizing of ball mills and ball milling circuits from laboratory grinding tests lack of a logical engineering foundation for the empirical equations slurry derived from the difference between the charge and ball mill filling calculate ball. Filling Degree Of Ball Mill Ppt - fs-holland.nl calculating filling degree ball mill vs hd. what is the formula for calculating percentage filling in The measurement of the ball charge volume load or filling degree is essential to maintain the absorbed power of the mill. Power consumption calculation formulas for ball mill. formula to calculate filling of ball mill ball mill media load calculation formula. in a mill. Grinding media sorting is performed when the ball load wears out. He will also calculate the mill's optimum filling degree according to several. Calculation of the power draw of dry multi-compartment ball mills.
May 6, 2004 calculate the power that each ball mill compartment should ... Ball Nose Finishing Mills Speed & Feed Calculator - DAPRA Instructions: Fill in the blocks shaded in blue with your application information. The calculator will automatically provide the necessary speed and feed in the green fields. For assistance setting up your milling program, contact a Dapra applications specialist or call (800) 243-3344. How To Calculate Ball Mill Grinding Media Filling Degree 16 ball filling degree mill calculation formula for max ball size, but I don't know; you can calculate on the basis of dia length and the filling degree of mill. Grinding media sorting and balls. Ball mill- what is the % of filling by balls and % of ... Ball mill - what is the % of filling by balls and % of filling by materials? ball mill size dia = 2 meter, length 8 m, inner dia = 1.888 m, material to be ground = illuminate, ball size = 20, 30, 40 mm How Can I calculate new ball size and weight design for ... Mar 10, 2011 · Re: How Can I calculate new ball size and weight design for ball mill. Hi, we have a similar mill: pregrinding with hammer crusher and mono-chamber mill. This is what I proposed based on the literature review I did, and others agree it is more or less correct. But remember it all depends on your mill feed size after pregrinding. Milling Formula Calculator - Carbide Depot Milling Formula Calculator. Milling Formula Interactive Calculator: Solve for any subject variable in bold by entering values in the boxes on the left side of the equation and clicking the "Calculate" button. The solution will appear in the box on the right side of the equation. Ball Nose Surface Finish - Kennametal Calculate Surface Finish when using a Ball Nose End Mill. filling degree for ball mill vs h d filling degree for ball mill vs h d - devalklier.be. calculating filling degree ball mill vs hd.
calculate the filling degree in a cement ball mill; calculate ball mill grinding media in cement - Kuntang, May 19; calculating filling degree ball mill vs h d - Ball Mill Degree of Filling Calculation - Scribd.
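The "filling degree vs h/d" method the pages above keep citing reduces to circular-segment geometry: measure the free height from the charge surface to the top of the mill, and the charge cross-section is the segment below that chord. A sketch of that calculation (the segment formula is standard geometry; the example dimensions are invented):

```go
package main

import (
    "fmt"
    "math"
)

// fillingFraction estimates the charge filling fraction of a mill of
// diameter d from the measured free height h (distance from the charge
// surface to the top of the mill), via the circular-segment area.
func fillingFraction(d, h float64) float64 {
    r := d / 2
    chargeHeight := d - h // depth of the charge measured from the mill bottom
    // central angle subtended by the segment of height chargeHeight
    alpha := 2 * math.Acos(1-chargeHeight/r)
    segment := r * r * (alpha - math.Sin(alpha)) / 2
    return segment / (math.Pi * r * r)
}

func main() {
    // a mill of 2 m diameter with 1.4 m free height above the charge
    fmt.Printf("%.1f%%\n", 100*fillingFraction(2.0, 1.4)) // 25.2%
}
```

As a consistency check, a free height equal to the radius gives exactly 50% filling, which matches the half-full case.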
42.2. Server Security NOTE: CentOS Enterprise Linux 5 is built from the Red Hat Enterprise Linux source code. Other than logo and name changes CentOS Enterprise Linux 5 is compatible with the equivalent Red Hat version. This document applies equally to both Red Hat and CentOS Enterprise Linux 5. When a system is used as a server on a public network, it becomes a target for attacks. Hardening the system and locking down services is therefore of paramount importance for the system administrator. Before delving into specific issues, review the following general tips for enhancing server security: • Keep all services current, to protect against the latest threats. • Use secure protocols whenever possible. • Serve only one type of network service per machine whenever possible. • Monitor all servers carefully for suspicious activity. 42.2.1. Securing Services With TCP Wrappers and xinetd TCP Wrappers provide access control to a variety of services. Most modern network services, such as SSH, Telnet, and FTP, make use of TCP Wrappers, which stand guard between an incoming request and the requested service. The benefits offered by TCP Wrappers are enhanced when used in conjunction with xinetd, a super server that provides additional access, logging, binding, redirection, and resource utilization control. It is a good idea to use iptables firewall rules in conjunction with TCP Wrappers and xinetd to create redundancy within service access controls. Refer to Section 42.8, “Firewalls” for more information about implementing firewalls with iptables commands.
Refer to Section 15.2, “TCP Wrappers” for more information on configuring TCP Wrappers and xinetd. The following subsections assume a basic knowledge of each topic and focus on specific security options. 42.2.1.1. Enhancing Security With TCP Wrappers TCP Wrappers are capable of much more than denying access to services. This section illustrates how they can be used to send connection banners, warn of attacks from particular hosts, and enhance logging functionality. Refer to the hosts_options man page for information about the TCP Wrapper functionality and control language. 42.2.1.1.1. TCP Wrappers and Connection Banners Displaying a suitable banner when users connect to a service is a good way to let potential attackers know that the system administrator is being vigilant. You can also control what information about the system is presented to users. To implement a TCP Wrappers banner for a service, use the banner option. This example implements a banner for vsftpd. To begin, create a banner file. It can be anywhere on the system, but it must have the same name as the daemon. For this example, the file is called /etc/banners/vsftpd and contains the following line: 220-Hello, %c 220-All activity on ftp.example.com is logged. 220-Inappropriate use will result in your access privileges being removed. The %c token supplies a variety of client information, such as the username and hostname, or the username and IP address to make the connection even more intimidating. For this banner to be displayed to incoming connections, add the following line to the /etc/hosts.allow file: vsftpd : ALL : banners /etc/banners/ 42.2.1.1.2. TCP Wrappers and Attack Warnings If a particular host or network has been detected attacking the server, TCP Wrappers can be used to warn the administrator of subsequent attacks from that host or network using the spawn directive. In this example, assume that a cracker from the 206.182.68.0/24 network has been detected attempting to attack the server.
Place the following line in the /etc/hosts.deny file to deny any connection attempts from that network, and to log the attempts to a special file: ALL : 206.182.68.0 : spawn /bin/echo `date` %c %d >> /var/log/intruder_alert The %d token supplies the name of the service that the attacker was trying to access. To allow the connection and log it, place the spawn directive in the /etc/hosts.allow file. Because the spawn directive executes any shell command, create a special script to notify the administrator or execute a chain of commands in the event that a particular client attempts to connect to the server. 42.2.1.1.3. TCP Wrappers and Enhanced Logging If certain types of connections are of more concern than others, the log level can be elevated for that service using the severity option. For this example, assume that anyone attempting to connect to port 23 (the Telnet port) on an FTP server is a cracker. To denote this, place an emerg flag in the log files instead of the default flag, info, and deny the connection. To do this, place the following line in /etc/hosts.deny: in.telnetd : ALL : severity emerg This uses the default authpriv logging facility, but elevates the priority from the default value of info to emerg, which posts log messages directly to the console. 42.2.1.2. Enhancing Security With xinetd This section focuses on using xinetd to set a trap service and using it to control resource levels available to any given xinetd service. Setting resource limits for services can help thwart Denial of Service (DoS) attacks. Refer to the man pages for xinetd and xinetd.conf for a list of available options. 42.2.1.2.1. Setting a Trap One important feature of xinetd is its ability to add hosts to a global no_access list. Hosts on this list are denied subsequent connections to services managed by xinetd for a specified period or until xinetd is restarted. You can do this using the SENSOR attribute.
This is an easy way to block hosts attempting to scan the ports on the server. The first step in setting up a SENSOR is to choose a service you do not plan on using. For this example, Telnet is used. Edit the file /etc/xinetd.d/telnet and change the flags line to read: flags = SENSOR Add the following line: deny_time = 30 This denies any further connection attempts to that port by that host for 30 minutes. Other acceptable values for the deny_time attribute are FOREVER, which keeps the ban in effect until xinetd is restarted, and NEVER, which allows the connection and logs it. Finally, the last line should read: disable = no This enables the trap itself. While using SENSOR is a good way to detect and stop connections from undesirable hosts, it has two drawbacks: • It does not work against stealth scans. • An attacker who knows that a SENSOR is running can mount a Denial of Service attack against particular hosts by forging their IP addresses and connecting to the forbidden port. 42.2.1.2.2. Controlling Server Resources Another important feature of xinetd is its ability to set resource limits for services under its control. It does this using the following directives: • cps = <number_of_connections> <wait_period> — Limits the rate of incoming connections. This directive takes two arguments: □ <number_of_connections> — The number of connections per second to handle. If the rate of incoming connections is higher than this, the service is temporarily disabled. The default value is fifty (50). □ <wait_period> — The number of seconds to wait before re-enabling the service after it has been disabled. The default interval is ten (10) seconds. • instances = <number_of_connections> — Specifies the total number of connections allowed to a service. This directive accepts either an integer value or UNLIMITED. • per_source = <number_of_connections> — Specifies the number of connections allowed to a service by each host. 
This directive accepts either an integer value or UNLIMITED. • rlimit_as = <number[K|M]> — Specifies the amount of memory address space the service can occupy in kilobytes or megabytes. This directive accepts either an integer value or UNLIMITED. • rlimit_cpu = <number_of_seconds> — Specifies the amount of time in seconds that a service may occupy the CPU. This directive accepts either an integer value or UNLIMITED. Using these directives can help prevent any single xinetd service from overwhelming the system, resulting in a denial of service. 42.2.2. Securing Portmap The portmap service is a dynamic port assignment daemon for RPC services such as NIS and NFS. It has weak authentication mechanisms and has the ability to assign a wide range of ports for the services it controls. For these reasons, it is difficult to secure. Securing portmap only affects NFSv2 and NFSv3 implementations, since NFSv4 no longer requires it. If you plan to implement an NFSv2 or NFSv3 server, then portmap is required, and the following section applies. If running RPC services, follow these basic rules. 42.2.2.1. Protect portmap With TCP Wrappers It is important to use TCP Wrappers to limit which networks or hosts have access to the portmap service since it has no built-in form of authentication. Further, use only IP addresses when limiting access to the service. Avoid using hostnames, as they can be forged by DNS poisoning and other methods. 42.2.2.2. Protect portmap With iptables To further restrict access to the portmap service, it is a good idea to add iptables rules to the server and restrict access to specific networks. Below are two example iptables commands. The first allows TCP connections to the port 111 (used by the portmap service) from the 192.168.0.0/24 network. The second allows TCP connections to the same port from the localhost. This is necessary for the sgi_fam service used by Nautilus. All other packets are dropped. iptables -A INPUT -p tcp -s !
192.168.0.0/24 --dport 111 -j DROP
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 111 -j ACCEPT

To similarly limit UDP traffic, use the following command:

iptables -A INPUT -p udp -s! 192.168.0.0/24 --dport 111 -j DROP

42.2.3. Securing NIS

The Network Information Service (NIS) is an RPC service, called ypserv, which is used in conjunction with portmap and other related services to distribute maps of usernames, passwords, and other sensitive information to any computer claiming to be within its domain.

An NIS server is comprised of several applications. They include the following:

• /usr/sbin/rpc.yppasswdd — Also called the yppasswdd service, this daemon allows users to change their NIS passwords.
• /usr/sbin/rpc.ypxfrd — Also called the ypxfrd service, this daemon is responsible for NIS map transfers over the network.
• /usr/sbin/yppush — This application propagates changed NIS databases to multiple NIS servers.
• /usr/sbin/ypserv — This is the NIS server daemon.

NIS is somewhat insecure by today's standards. It has no host authentication mechanisms and transmits all of its information over the network unencrypted, including password hashes. As a result, extreme care must be taken when setting up a network that uses NIS. This is further complicated by the fact that the default configuration of NIS is inherently insecure. It is recommended that anyone planning to implement an NIS server first secure the portmap service as outlined in Section 42.2.2, “Securing Portmap”, then address the following issues.

42.2.3.1. Carefully Plan the Network

Because NIS transmits sensitive information unencrypted over the network, it is important the service be run behind a firewall and on a segmented and secure network. Whenever NIS information is transmitted over an insecure network, it risks being intercepted. Careful network design can help prevent severe security breaches.

42.2.3.2.
Use a Password-like NIS Domain Name and Hostname Any machine within an NIS domain can use commands to extract information from the server without authentication, as long as the user knows the NIS server's DNS hostname and NIS domain name. For instance, if someone either connects a laptop computer into the network or breaks into the network from outside (and manages to spoof an internal IP address), the following command reveals the /etc/passwd map: ypcat -d <NIS_domain> -h <DNS_hostname> passwd If this attacker is a root user, they can obtain the /etc/shadow file by typing the following command: ypcat -d <NIS_domain> -h <DNS_hostname> shadow If Kerberos is used, the /etc/shadow file is not stored within an NIS map. To make access to NIS maps harder for an attacker, create a random string for the DNS hostname, such as o7hfawtgmhwg.domain.com. Similarly, create a different randomized NIS domain name. This makes it much more difficult for an attacker to access the NIS server. 42.2.3.3. Edit the /var/yp/securenets File If the /var/yp/securenets file is blank or does not exist (as is the case after a default installation), NIS listens to all networks. One of the first things to do is to put netmask/network pairs in the file so that ypserv only responds to requests from the appropriate network. Below is a sample entry from a /var/yp/securenets file: 255.255.255.0 192.168.0.0 Never start an NIS server for the first time without creating the /var/yp/securenets file. This technique does not provide protection from an IP spoofing attack, but it does at least place limits on what networks the NIS server services. 42.2.3.4. Assign Static Ports and Use iptables Rules All of the servers related to NIS can be assigned specific ports except for rpc.yppasswdd — the daemon that allows users to change their login passwords. 
Assigning ports to the other two NIS server daemons, rpc.ypxfrd and ypserv, allows for the creation of firewall rules to further protect the NIS server daemons from intruders. To do this, add the following lines to /etc/sysconfig/network:

YPSERV_ARGS="-p 834"
YPXFRD_ARGS="-p 835"

The following iptables rules can then be used to enforce which network the server listens to for these ports:

iptables -A INPUT -p ALL -s! 192.168.0.0/24 --dport 834 -j DROP
iptables -A INPUT -p ALL -s! 192.168.0.0/24 --dport 835 -j DROP

This means that the server only allows connections to ports 834 and 835 if the requests come from the 192.168.0.0/24 network, regardless of the protocol.

42.2.3.5. Use Kerberos Authentication

One of the issues to consider when NIS is used for authentication is that whenever a user logs into a machine, a password hash from the /etc/shadow map is sent over the network. If an intruder gains access to an NIS domain and sniffs network traffic, they can collect usernames and password hashes. With enough time, a password cracking program can guess weak passwords, and an attacker can gain access to a valid account on the network. Since Kerberos uses secret-key cryptography, no password hashes are ever sent over the network, making the system far more secure. Refer to Section 42.6, “Kerberos” for more information about Kerberos.

42.2.4. Securing NFS

The Network File System (NFS) is a service that provides network accessible file systems for client machines. Refer to Chapter 18, Network File System (NFS) for more information about NFS. The following subsections assume a basic knowledge of NFS.

The version of NFS included in Red Hat Enterprise Linux, NFSv4, no longer requires the portmap service as outlined in Section 42.2.2, “Securing Portmap”. NFS traffic now utilizes TCP in all versions, rather than UDP, and requires it when using NFSv4. NFSv4 now includes Kerberos user and group authentication, as part of the RPCSEC_GSS kernel module.
Information on portmap is still included, since Red Hat Enterprise Linux supports NFSv2 and NFSv3, both of which utilize portmap. 42.2.4.1. Carefully Plan the Network Now that NFSv4 has the ability to pass all information encrypted using Kerberos over a network, it is important that the service be configured correctly if it is behind a firewall or on a segmented network. NFSv2 and NFSv3 still pass data insecurely, and this should be taken into consideration. Careful network design in all of these regards can help prevent security breaches. 42.2.4.2. Beware of Syntax Errors The NFS server determines which file systems to export and which hosts to export these directories to by consulting the /etc/exports file. Be careful not to add extraneous spaces when editing this file. For instance, the following line in the /etc/exports file shares the directory /tmp/nfs/ to the host bob.example.com with read/write permissions. /tmp/nfs/ bob.example.com(rw) The following line in the /etc/exports file, on the other hand, shares the same directory to the host bob.example.com with read-only permissions and shares it to the world with read/write permissions due to a single space character after the hostname. /tmp/nfs/ bob.example.com (rw) It is good practice to check any configured NFS shares by using the showmount command to verify what is being shared: showmount -e <hostname> 42.2.4.3. Do Not Use the no_root_squash Option By default, NFS shares change the root user to the nfsnobody user, an unprivileged user account. This changes the owner of all root-created files to nfsnobody, which prevents uploading of programs with the setuid bit set. If no_root_squash is used, remote root users are able to change any file on the shared file system and leave applications infected by trojans for other users to inadvertently execute. 42.2.5. Securing the Apache HTTP Server The Apache HTTP Server is one of the most stable and secure services that ships with Red Hat Enterprise Linux. 
A large number of options and techniques are available to secure the Apache HTTP Server — too numerous to delve into deeply here. When configuring the Apache HTTP Server, it is important to read the documentation available for the application. This includes Chapter 21, Apache HTTP Server, and the Stronghold manuals, available at https://www.redhat.com/docs/manuals/stronghold/.

System Administrators should be careful when using the following configuration options:

42.2.5.1. FollowSymLinks

This directive is enabled by default, so be sure to use caution when creating symbolic links to the document root of the Web server. For instance, it is a bad idea to provide a symbolic link to the root of the file system (/).

42.2.5.2. The Indexes Directive

This directive is enabled by default, but may not be desirable. To prevent visitors from browsing files on the server, remove this directive.

42.2.5.3. The UserDir Directive

The UserDir directive is disabled by default because it can confirm the presence of a user account on the system. To enable user directory browsing on the server, use the following directives:

UserDir enabled
UserDir disabled root

These directives activate user directory browsing for all user directories other than /root/. To add users to the list of disabled accounts, add a space-delimited list of users on the UserDir disabled line.

42.2.5.4. Do Not Remove the IncludesNoExec Directive

By default, the Server-Side Includes (SSI) module cannot execute commands. It is recommended that you do not change this setting unless absolutely necessary, as it could potentially enable an attacker to execute commands on the system.

42.2.5.5. Restrict Permissions for Executable Directories

Ensure that only the root user has write permissions to any directory containing scripts or CGIs. To do this, type the following commands:

chown root <directory_name>
chmod 755 <directory_name>

Always verify that any scripts running on the system work as intended before putting them into production.

42.2.6.
Securing FTP

The File Transfer Protocol (FTP) is an older TCP protocol designed to transfer files over a network. Because all transactions with the server, including user authentication, are unencrypted, it is considered an insecure protocol and should be carefully configured.

Red Hat Enterprise Linux provides three FTP servers.

• gssftpd — A Kerberos-aware xinetd-based FTP daemon that does not transmit authentication information over the network.
• Red Hat Content Accelerator (tux) — A kernel-space Web server with FTP capabilities.
• vsftpd — A standalone, security oriented implementation of the FTP service.

The following security guidelines are for setting up the vsftpd FTP service.

42.2.6.1. FTP Greeting Banner

Before submitting a username and password, all users are presented with a greeting banner. By default, this banner includes version information useful to crackers trying to identify weaknesses in a system. To change the greeting banner for vsftpd, add the following directive to the /etc/vsftpd/vsftpd.conf file:

ftpd_banner=<insert_greeting_here>

Replace <insert_greeting_here> in the above directive with the text of the greeting message. For multi-line banners, it is best to use a banner file. To simplify management of multiple banners, place all banners in a new directory called /etc/banners/. The banner file for FTP connections in this example is /etc/banners/ftp.msg. Below is an example of what such a file may look like:

#########
# Hello, all activity on ftp.example.com is logged.
#########

To reference this greeting banner file for vsftpd, add the following directive to the /etc/vsftpd/vsftpd.conf file:

banner_file=/etc/banners/ftp.msg

It also is possible to send additional banners to incoming connections using TCP Wrappers as described in Section 42.2.1.1.1, “TCP Wrappers and Connection Banners”.

42.2.6.2. Anonymous Access

The presence of the /var/ftp/ directory activates the anonymous account. The easiest way to create this directory is to install the vsftpd package.
This package establishes a directory tree for anonymous users and configures the permissions on directories to read-only for anonymous users. By default the anonymous user cannot write to any directories. If enabling anonymous access to an FTP server, be aware of where sensitive data is stored.

42.2.6.2.1. Anonymous Upload

To allow anonymous users to upload files, it is recommended that a write-only directory be created within /var/ftp/pub/. To do this, type the following command:

mkdir /var/ftp/pub/upload

Next, change the permissions so that anonymous users cannot view the contents of the directory:

chmod 730 /var/ftp/pub/upload

A long format listing of the directory should look like this:

drwx-wx--- 2 root ftp 4096 Feb 13 20:05 upload

Administrators who allow anonymous users to read and write in directories often find that their servers become a repository of stolen software. Additionally, under vsftpd, add the following line to the /etc/vsftpd/vsftpd.conf file:

anon_upload_enable=YES

42.2.6.3. User Accounts

Because FTP transmits unencrypted usernames and passwords over insecure networks for authentication, it is a good idea to deny system users access to the server from their user accounts. To disable all user accounts in vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf:

local_enable=NO

42.2.6.3.1. Restricting User Accounts

To disable FTP access for specific accounts or specific groups of accounts, such as the root user and those with sudo privileges, the easiest way is to use a PAM list file as described in Section 42.1.4.2.4, “Disabling Root Using PAM”. The PAM configuration file for vsftpd is /etc/pam.d/vsftpd.

It is also possible to disable user accounts within each service directly. To disable specific user accounts in vsftpd, add the username to /etc/vsftpd.ftpusers.

42.2.7. Securing Sendmail

Sendmail is a Mail Transport Agent (MTA) that uses the Simple Mail Transport Protocol (SMTP) to deliver electronic messages between other MTAs and to email clients or delivery agents.
Although many MTAs are capable of encrypting traffic between one another, most do not, so sending email over any public networks is considered an inherently insecure form of communication. Refer to Chapter 23, Email for more information about how email works and an overview of common configuration settings. This section assumes a basic knowledge of how to generate a valid /etc/mail/sendmail.cf by editing the /etc/mail/sendmail.mc file and using the m4 command.

It is recommended that anyone planning to implement a Sendmail server address the following issues.

42.2.7.1. Limiting a Denial of Service Attack

Because of the nature of email, a determined attacker can flood the server with mail fairly easily and cause a denial of service. By setting limits to the following directives in /etc/mail/sendmail.mc, the effectiveness of such attacks is limited.

• confCONNECTION_RATE_THROTTLE — The number of connections the server can receive per second. By default, Sendmail does not limit the number of connections. If a limit is set and reached, further connections are delayed.
• confMAX_DAEMON_CHILDREN — The maximum number of child processes that can be spawned by the server. By default, Sendmail does not assign a limit to the number of child processes. If a limit is set and reached, further connections are delayed.
• confMIN_FREE_BLOCKS — The minimum number of free blocks which must be available for the server to accept mail. The default is 100 blocks.
• confMAX_HEADERS_LENGTH — The maximum acceptable size (in bytes) for a message header.
• confMAX_MESSAGE_SIZE — The maximum acceptable size (in bytes) for a single message.

42.2.7.2. NFS and Sendmail

Never put the mail spool directory, /var/spool/mail/, on an NFS shared volume. Because NFSv2 and NFSv3 do not maintain control over user and group IDs, two or more users can have the same UID, and receive and read each other's mail.
With NFSv4 using Kerberos, this is not the case, since the RPCSEC_GSS kernel module does not utilize UID-based authentication. However, it is considered good practice not to put the mail spool directory on NFS shared volumes.

42.2.7.3. Mail-only Users

To help prevent local user exploits on the Sendmail server, it is best for mail users to only access the Sendmail server using an email program. Shell accounts on the mail server should not be allowed and all user shells in the /etc/passwd file should be set to /sbin/nologin (with the possible exception of the root user).

42.2.8. Verifying Which Ports Are Listening

After configuring network services, it is important to pay attention to which ports are actually listening on the system's network interfaces. Any open ports can be evidence of an intrusion.

There are two basic approaches for listing the ports that are listening on the network. The less reliable approach is to query the network stack using commands such as netstat -an or lsof -i. This method is less reliable since these programs do not connect to the machine from the network, but rather check to see what is running on the system. For this reason, these applications are frequent targets for replacement by attackers. Crackers attempt to cover their tracks if they open unauthorized network ports by replacing netstat and lsof with their own, modified versions.

A more reliable way to check which ports are listening on the network is to use a port scanner such as nmap.
The following command issued from the console determines which ports are listening for TCP connections from the network:

nmap -sT -O localhost

The output of this command appears as follows:

Starting nmap 3.55 ( https://www.insecure.org/nmap/ ) at 2004-09-24 13:49 EDT
Interesting ports on localhost.localdomain (127.0.0.1):
(The 1653 ports scanned but not shown below are in state: closed)
PORT      STATE SERVICE
22/tcp    open  ssh
25/tcp    open  smtp
111/tcp   open  rpcbind
113/tcp   open  auth
631/tcp   open  ipp
834/tcp   open  unknown
2601/tcp  open  zebra
32774/tcp open  sometimes-rpc11
Device type: general purpose
Running: Linux 2.4.X|2.5.X|2.6.X
OS details: Linux 2.5.25 - 2.6.3 or Gentoo 1.2 Linux 2.4.19 rc1-rc7
Uptime 12.857 days (since Sat Sep 11 17:16:20 2004)
Nmap run completed -- 1 IP address (1 host up) scanned in 5.190 seconds

This output shows the system is running portmap due to the presence of the rpcbind service on port 111. However, there is also a mystery service on port 834. To check if the port is associated with the official list of known services, type:

cat /etc/services | grep 834

This command returns no output. This indicates that while the port is in the reserved range (meaning 0 through 1023) and requires root access to open, it is not associated with a known service.

Next, check for information about the port using netstat or lsof. To check for port 834 using netstat, use the following command:

netstat -anp | grep 834

The command returns the following output:

tcp 0 0 0.0.0.0:834 0.0.0.0:* LISTEN 653/ypbind

The presence of the open port in netstat is reassuring because a cracker opening a port surreptitiously on a hacked system is not likely to allow it to be revealed through this command. Also, the -p option reveals the process ID (PID) of the service that opened the port.
In this case, the open port belongs to ypbind (NIS), which is an RPC service handled in conjunction with the portmap service.

The lsof command reveals similar information to netstat since it is also capable of linking open ports to services:

lsof -i | grep 834

The relevant portion of the output from this command follows:

ypbind 653 0 7u IPv4 1319 TCP *:834 (LISTEN)
ypbind 655 0 7u IPv4 1319 TCP *:834 (LISTEN)
ypbind 656 0 7u IPv4 1319 TCP *:834 (LISTEN)
ypbind 657 0 7u IPv4 1319 TCP *:834 (LISTEN)

These tools reveal a great deal about the status of the services running on a machine. These tools are flexible and can provide a wealth of information about network services and configuration. Refer to the man pages for lsof, netstat, nmap, and services for more information.
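The reason nmap is more trustworthy than netstat or lsof is that it actually opens connections over the network instead of asking the (possibly compromised) local tools what is running. That idea can be sketched in a few lines of Python; this is a toy illustration of the principle, not a replacement for nmap, and the host and port numbers are arbitrary examples:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

# Example: probe a few ports on the local machine, nmap-style.
for port in (22, 111, 834):
    state = "open" if port_is_open("127.0.0.1", port) else "closed"
    print(f"{port}/tcp {state}")
```

Because the check is an ordinary TCP connection, a trojaned netstat binary cannot hide a listening port from it; only a firewall or kernel-level tampering could.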
How to Use the Less Than or Equal to Operator in Excel

Manual calculation was once the standard, but today's businesses handle transaction volumes far too large to compute by hand. Excel, Microsoft's spreadsheet application, simplifies millions of calculations with simple formulas: it offers more than 500 functions with which you can perform powerful analyses from a simple spreadsheet. Formulas rely on operators, and while the equal to (=) operator is the most commonly used, the less than or equal to (<=) operator is just as useful. Here, we explain different approaches to using the less than or equal to operator in Excel.

How do I use “less than or equal to” in Excel?

In Excel, comparison operators let you test the relationship between two values. The “less than or equal to” operator compares two values and evaluates whether the first is less than or equal to the second. Where might such an operator be helpful? Consider comparing the performance of two students, or comparing profits from two quarters as a financial analyst. The operator returns TRUE if the first value is less than or equal to the second value; otherwise, it returns FALSE.

To use this operator, follow the step-by-step instructions below:

Note: Here, we will use the “less than or equal to” operator to compare the class performance of two students.

1. Open Microsoft Excel on your device.
2. Select the cell in which you want to display the comparison result.
3.
Insert the formula =A2<=B2, where A2 is the mark obtained by one student and B2 is the mark obtained by the other.
4. Press Enter to see the result.
5. If the student in cell A2 obtained marks less than or equal to those in cell B2, the result shows TRUE; otherwise, it shows FALSE.
6. That is it! You can use this operator to compare values in Excel.

With the help of the less than or equal to operator, as shown in the example above, the formula returns TRUE when the student in cell A2 performed no better than the student in cell B2, and FALSE otherwise. You can also combine this operator with other formulas, such as COUNTIF or SUMIF, to count or sum values that fall at or below a threshold.

In conclusion, less than or equal to (<=) is one of the many comparison operators available in Excel. Now that you understand its purpose, you can quickly put the formula in the right place. Remember that this formula only returns TRUE or FALSE, and that it is not the only comparison operator available; you can also use greater than or equal to (>=), greater than (>), less than (<), and not equal to (<>). If you have questions, please ask them below.

Frequently asked questions

Q1. How do you use <> in Excel?
Ans. The <> operator means “not equal to”; for example, =A2<>B2 returns TRUE when the two cells differ.
Q2. How do I highlight cells with less than a value in Excel?
Ans. You can do that from the Highlight Cell Rules under Conditional Formatting.
Q3. How do I only show values greater than a value in Excel?
Ans. Apply a number filter with a “greater than” condition to display only the values above a threshold.
Q4. What does ≥ mean?
Ans. It means greater than or equal to.
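For readers who want to check the logic outside of a spreadsheet, the same comparison can be expressed in Python; the marks below are made-up numbers standing in for cells A2 and B2:

```python
# Hypothetical marks for two students (stand-ins for cells A2 and B2).
a2, b2 = 68, 75

# Excel's =A2<=B2 is the same comparison Python performs with <=.
result = a2 <= b2
print(result)  # True (TRUE in Excel terms)

# The COUNTIF-style idea: count marks at or below a threshold,
# analogous to a formula like =COUNTIF(range, "<=60").
marks = [55, 68, 75, 90, 60]
at_or_below_60 = sum(1 for m in marks if m <= 60)
print(at_or_below_60)  # 2
```

The comparison is inclusive: a mark of exactly 60 counts, just as =60<=60 returns TRUE in Excel.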
Evaluating Eliteserien Prediction Models - sannsynligvis.no

There were a few Eliteserien prediction models for the 2017 season, including the one on this site. Now that the season has ended, it's time to evaluate these models and see how well they did. The models I’ll be comparing are our own ELO-model, the FiveThirtyEight Club Soccer Predictions, as well as the betting odds of Betsson.

Yes, betting odds are predictions, even though their main goal is to make money and not guess the correct outcome. To go from odds to probability, I take 1 and divide by the odds for each of the outcomes of a match to get a raw probability. This usually adds up to more than 1 (the house needs a cut), so to get the model probability I take the raw probability of each outcome and divide by the sum of the probabilities for the match.

Before I dive into the evaluation, I’ll compare and contrast the models a bit and then do a summary of the season. As described, the betting odds aren’t mainly trying to predict the match outcomes, but in order to make money they have to predict fairly accurately. The Analytic Minds ELO-model (AMELO) uses past data about wins, draws and losses to predict the outcomes and is specifically developed for Eliteserien. The FiveThirtyEight model is part of a bigger project for club soccer predictions across a range of different leagues, with no special consideration for specific conditions in Eliteserien.

The season ended with 118 games won by the home team, 63 by the away team, while 59 games ended in a draw (49.2%, 26.3% and 24.6%). This is more or less in line with the historical average over the last few seasons, with a small increase in home-field advantage.

Now for the evaluation itself. I’ll be using two approaches: Ranked Probability Score (RPS) (as described in Solving the problem of inadequate scoring rules for assessing probabilistic football forecast models by Constantinou and Fenton),
as well as the area under the curve (AUC) of the receiver operating characteristic (ROC) curve.

RPS basically measures how close your predictions were, and punishes the model more for predicting a home win than a draw if the game was won by the away team. Say one model predicted a 50% chance of home win, 20% of draw and 30% chance the away team would win, while a second had 20%, 50% and 30%. Since the away team won, the second model will score better because it was closer to being a drawn match than a win for the home team. (The first model would have an RPS of 0.37, while the second would have 0.265. The closer to 0 the better the model.)

The ROC on the other hand basically ranks the predictions and measures how well the model identifies the correct outcome. This creates a curve, and the larger the area under that curve, the better the model is at distinguishing between high and low probability events. Basically, a 70% probability event should happen more often than a 50% one, which in turn should happen more often than a 20% one.

In the table below I have listed the average RPS per round, as well as the average of all matches. In addition, I’ve listed the game outcomes of each round. The scores are color coded with blue being good scores and red bad. Each round's "winner" is highlighted and with a border.

As can be seen, Betsson “won” 60% of the rounds and also had the lowest average. The remaining rounds were split between AMELO and FiveThirtyEight, although AMELO ended up with a lower score overall. Most of the time the two models were close, and they each had the lower score in half the matches. The game with the single biggest difference was the July 9th game between Stabæk and Brann. AMELO set the probabilities at 31.4%, 26.7% and 41.9%, while FiveThirtyEight were more bullish on Brann and had 21%, 21% and 58%. The game ended 2-0 Stabæk, which resulted in an RPS of 0.323 for AMELO and 0.48 for FiveThirtyEight.
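Both the odds-to-probability normalisation and the RPS calculation described above are easy to reproduce. The sketch below implements them in Python and recomputes the article's worked example; the decimal odds at the end are made-up numbers for illustration:

```python
def rps(probs, outcome_index):
    """Ranked Probability Score over ordered outcomes
    (home win, draw, away win). Lower is better; 0 is perfect."""
    observed = [0.0] * len(probs)
    observed[outcome_index] = 1.0
    cum_p = cum_o = total = 0.0
    # Sum squared differences of the cumulative distributions.
    for p, o in zip(probs[:-1], observed[:-1]):
        cum_p += p
        cum_o += o
        total += (cum_p - cum_o) ** 2
    return total / (len(probs) - 1)

def odds_to_probs(odds):
    """Convert decimal odds to probabilities, removing the overround."""
    raw = [1.0 / o for o in odds]
    s = sum(raw)
    return [r / s for r in raw]

# The article's example: the away team won (index 2).
print(round(rps([0.5, 0.2, 0.3], 2), 3))  # 0.37
print(round(rps([0.2, 0.5, 0.3], 2), 3))  # 0.265

# Hypothetical decimal odds; the normalised probabilities sum to 1.
print(odds_to_probs([1.8, 3.6, 4.5]))
```

Because RPS compares cumulative distributions, a prediction of "draw" is penalised less than "home win" when the away team wins, exactly the property described above.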
It also makes sense that a betting company would perform the best out of these models, as each game is carefully analyzed and real money is on the table. The other two models are algorithms that do not adjust for single game factors. This is particularly evident in the last round of the season, where Betsson performed much better than the others. Particularly with respect to Sogndal-Vålerenga, where Sogndal was fighting to avoid relegation while Vålerenga had nothing to play for. Betsson had Sogndal at a 51.8% chance of winning, while AMELO had 35.2% and FiveThirtyEight 31%. Sogndal won 5-2 to get a new chance at securing a spot in next season's Eliteserien.

Once again, Betsson won comfortably with an AUC of 0.672. This time, FiveThirtyEight (0.647) were much closer to AMELO (0.653), once again indicating that the models were fairly close. AMELO's small advantage mostly happened in the range where the predictions were between 30% and 40%, while they tracked each other almost perfectly the rest of the time.

In conclusion, of the Eliteserien prediction models available for 2017, the betting company made the most accurate one. Of the two remaining models, the ELO-based one outperformed the offensive/defensive rating one. While I'm extremely happy with that, it should be noted that AMELO is optimized for the Norwegian league, while the FiveThirtyEight one is not. It will be interesting to see how these perform in 2018, and if there will be other models as well. This season, KroneBall had a model that was active for half the season, while OddsModel published overviews and predictions for certain games. We will see what happens in 2018.
Bohdan V. Dovhai

1. The Boundary-Value Problems of Mathematical Physics with Random Factors (Ukrainian, with Yu. V. Kozachenko, G. I. Slyvka-Tylyshchak). Kyiv: "Kyiv University", 2008 (monograph).
2. Moderate parts in regenerative compositions: The case of regular variation (with D. Buraczewski, A. Marynych), Journal of Mathematical Analysis and Applications, 2021, Vol. 497, Issue 1, Article number 124894.
3. On intermediate levels of nested occupancy scheme in random environment generated by stick-breaking I (with D. Buraczewski, A. Iksanov), Electronic Journal of Probability, 2020, 25, paper no. 123, p. 1–24.
4. Asymptotic Dissipativity for Merged Stochastic Evolutionary Systems with Markov Switchings and Impulse Perturbations under Conditions of Lévy Approximation (with I. Samoilenko, A. Nikitin), Cybernetics and Systems Analysis, 2020, Vol. 56, No. 3, p. 392–400.
5. Information Warfare Model with Migration (with I. V. Samoilenko, C. Dong), CEUR Workshop Proceedings, 2019, Vol. 2353, p. 428–439.
6. Asymptotic Behavior of Extreme Values of Queue Length in M/M/m Systems (with I. Matsak), Cybernetics and Systems Analysis, 2019, 55(2), p. 321–328.
7. On a redundant system with renewals (with I. K. Matsak), Theory of Probability and Mathematical Statistics, 2017, 94, p. 63–76.
8. Generalized solutions of a hyperbolic equation with a φ-sub-Gaussian right hand side, Theory of Probability and Mathematical Statistics, 2010, 81, p. 27–33.
9. Properties of the solution of nonhomogeneous string oscillation equations with φ-subgaussian right side (with Yu. V. Kozachenko), Random Operators and Stochastic Equations, 2009, Vol. 17.
10. The condition for application of Fourier method to the solution of nonhomogeneous string oscillation equation with φ-subgaussian right hand side (with Yu. V. Kozachenko), Random Operators and Stochastic Equations, 2005, Vol. 13, No. 3, p. 281–296.
{"url":"http://www.csc.knu.ua/en/person/dovhai","timestamp":"2024-11-08T15:00:45Z","content_type":"text/html","content_length":"23383","record_id":"<urn:uuid:7a639687-c1e5-4084-86ab-49b9bc74ed58>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00025.warc.gz"}
Yitang Zhang is giving the last invited talk at ICM 2014, "Small gaps between primes and primes in arithmetic progressions to large moduli". This is the last invited talk before the closing ceremony. Zhang is in the habit of writing by hand and calculating on the spot. Yitang Zhang stepped onto the main stage of mathematics last year with the announcement of his achievement, which is hailed as "a landmark theorem in the distribution of prime numbers".

Manjul Bhargava received the Fields Medal from South Korean President Park Geun-hye yesterday morning at the opening ceremony of the 27th International Congress of Mathematicians (ICM 2014).

1. Erica Klarreich, The Musical, Magical Number Theorist, August 12, 2014

ICM 2014 Program

The program of this International Congress of Mathematicians (ICM) already makes it unmistakably clear that four mathematicians will receive this year's Fields Medal. Zhang will give the closing talk before the ICM closing ceremony on August 21, an honor reserved for this year's Fields Medalists and the winners of the Gauss Prize and the Chern Medal.

Yitang Zhang gave a speech at the Peking University undergraduate commencement on July 1. This summer, Zhang gave several lectures at the Morningside Center of Mathematics of the Chinese Academy of Sciences and at his alma mater, Peking University:

1. A Transition Formula for Mean Values of Dirichlet Polynomials, June 23 and 25, 2014, 9:30–11:30, Morningside 110; chair: Wang Yuan
2. On Siegel zeros, Morningside 110
3. Distribution of Prime Numbers and the Riemann Zeta Function, July 8 and 10, 2014, 16:00–17:00, central lecture hall of the buildings at Jingchunyuan 82; July 15, 16:30–17:30, room 77201, Jingchunyuan 78; chair: Liu Ruochuan
4. On Siegel zeros (2), July 16 and 30, August 4 and 6, 2014, 9:30–11:30

I've just received a book named Number Theory in the Spirit of Liouville by Kenneth S. Williams. Joseph Liouville is recognised as one of the great mathematicians of the nineteenth century, and one of his greatest achievements was the introduction of a powerful new method into elementary number theory. This book provides a gentle introduction to this method, explaining it in a clear and straightforward manner. The many applications provided include applications to sums of squares, sums of triangular numbers, recurrence relations for divisor functions, convolution sums involving the divisor functions, and many others. All of the topics discussed have a rich history dating back to Euler, Jacobi, Dirichlet, Ramanujan and others, and they continue to be the subject of current mathematical research.
Williams places the results in their historical and contemporary contexts, making the connection between Liouville's ideas and modern theory. This is the only book in English entirely devoted to the subject and is thus an extremely valuable resource for both students and researchers alike.

• Demonstrates that some analytic formulae in number theory can be proved in an elementary arithmetic manner
• Motivates students to do their own research
• Includes an extensive bibliography

Table of Contents

1. Joseph Liouville (1809–1888)
2. Liouville's ideas in number theory
3. The arithmetic functions \(\sigma_k(n)\), \(\sigma_k^*(n)\), \(d_{k,m}(n)\) and \(F_k(n)\)
4. The equation \(i^2+jk = n\)
5. An identity of Liouville
6. A recurrence relation for \(\sigma^*(n)\)
7. The Girard–Fermat theorem
8. A second identity of Liouville
9. Sums of two, four and six squares
10. A third identity of Liouville
11. Jacobi's four squares formula
12. Besge's formula
13. An identity of Huard, Ou, Spearman and Williams
14. Four elementary arithmetic formulae
15. Some twisted convolution sums
16. Sums of two, four, six and eight triangular numbers
17. Sums of integers of the form \(x^2+xy+y^2\)
18. Representations by \(x^2+y^2+z^2+2t^2\), \(x^2+y^2+2z^2+2t^2\) and \(x^2+2y^2+2z^2+2t^2\)
19. Sums of eight and twelve squares
20. Concluding remarks

“… a fascinating exploration and reexamination of both Liouville's identities and “elementary” methods, providing revealing connections to modern techniques and proofs. Overall, the work contributes significantly to both number theory and the history of mathematics.” J.
Johnson, Choice Magazine

Publisher: Cambridge University Press (November 29, 2010)
Language: English
FORMAT: Paperback
ISBN: 9780521175623
LENGTH: 306 pages
DIMENSIONS: 227 x 151 x 16 mm
CONTAINS: 275 exercises

I've just received a book named Development of Elliptic Functions According to Ramanujan by K Venkatachaliengar (deceased), edited by Shaun Cooper. This unique book provides an innovative and efficient approach to elliptic functions, based on the ideas of the great Indian mathematician Srinivasa Ramanujan. The original 1988 monograph of K Venkatachaliengar has been completely revised. Many details, omitted from the original version, have been included, and the book has been made comprehensive by notes at the end of each chapter. The book is for graduate students and researchers in Number Theory and Classical Analysis, as well as for scholars and aficionados of Ramanujan's work. It can be read by anyone with some undergraduate knowledge of real and complex analysis.

• The Basic Identity
• The Differential Equations of \(P\), \(Q\) and \(R\)
• The Jordan-Kronecker Function
• The Weierstrassian Invariants
• The Weierstrassian Invariants, II
• Development of Elliptic Functions
• The Modular Function \(\lambda\)

Readership: Graduate students and researchers in Number Theory and Classical Analysis, as well as scholars and aficionados of Ramanujan's work.

It is obvious that every arithmetician should want to own a copy of this book, and every modular former should put it on his ‘to be handled-with-loving-care-shelf.’ Readers of Venkatachaliengar's fine, fine book should be willing to enter into that part of the mathematical world where Euler, Jacobi, and Ramanujan live: beautiful formulas everywhere, innumerable computations with infinite series, and striking manoeuvres with infinite products. — MAA Reviews

The author was acquainted with many who knew Ramanujan, and so historical passages offer information not found in standard biographical sources.
The author has studied Ramanujan's papers and notebooks over a period of several decades. His keen insights, beautiful new theorems, and elegant proofs presented in this monograph will enrich readers. — MathSciNet, Zentralblatt MATH

• Series: Monographs in Number Theory (Book 6)
• Hardcover: 184 pages
• Publisher: World Scientific Publishing Company (September 28, 2011)
• Language: English
• ISBN-10: 9814366455
• ISBN-13: 978-9814366458

Which integers can be expressed as \(a^3+b^3+c^3-3abc\) with \(a\), \(b\), \(c\in\Bbb Z\)? Note the factorization \(a^3+b^3+c^3-3abc=(a+b+c)(a^2+b^2+c^2-ab-bc-ca)\). If \(3\mid(a^3+b^3+c^3-3abc)\), then \(3\mid(a+b+c)^3\) (since \((a+b+c)^3\equiv a^3+b^3+c^3-3abc\pmod 3\)), so \(3\mid(a+b+c)\); but then the second factor \((a+b+c)^2-3(ab+bc+ca)\) is divisible by 3 as well, so \(9\mid(a^3+b^3+c^3-3abc)\). The answer: all \(n\) such that \(3\nmid n\) or \(9\mid n\).

Acta Arithmetica (ISSN: 0065-1036 print, 1730-6264 online) is a scientific journal of mathematics publishing papers on number theory. It was established in 1935 by Salomon Lubelski and Arnold Walfisz. The journal is published by the Institute of Mathematics of the Polish Academy of Sciences. Since 1995, Acta Arithmetica has published 5 volumes per year (6 volumes in 2012; 4.5 per year during 1996–2000), carrying 80–100 papers. Currently, volumes 1–95 are Open Access, while volume 96 and later require a subscription. Because the last two issues of volume 96 appeared in 2001, anyone can freely access all of Acta Arithmetica from 2000 and earlier, except the first two issues of volume 96, the last volume of 2000.
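The characterization above is easy to check numerically for small n; a brute-force sketch (the search bound is ours, chosen large enough to find witnesses for n ≤ 30):

```python
def representable(n, bound=11):
    # Brute force: does n = a^3 + b^3 + c^3 - 3*a*b*c for some integers a, b, c?
    r = range(-bound, bound + 1)
    return any(a**3 + b**3 + c**3 - 3*a*b*c == n
               for a in r for b in r for c in r)

# Claimed characterization: n is representable iff 3 does not divide n, or 9 divides n.
for n in range(1, 31):
    assert representable(n) == (n % 3 != 0 or n % 9 == 0)
```

For instance, 9 = 2³ + 1³ + 0³ − 3·2·1·0, while 3 and 6 admit no representation at all.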
{"url":"https://www.zyymat.com/tag/number-theory","timestamp":"2024-11-08T02:42:45Z","content_type":"text/html","content_length":"75276","record_id":"<urn:uuid:c40787a0-ab69-49e2-8e30-653760566fb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00477.warc.gz"}
Producers theory

Producer theory is concerned with the behavior of firms in hiring and combining productive inputs to supply commodities at appropriate prices. Two sets of issues are involved in this process: one is the technical constraints, which limit the range of feasible productive processes; the other is the institutional context, such as the characteristics of the markets where commodities and inputs are purchased and sold. The chapter describes the set of axiomatic approaches to production technology and the neoclassical theory of the multi-product and multi-input firm. It is an attempt to set forth the general framework of analysis that underlies neoclassical producer decision theory. The chapter discusses the properties of the production function, its various forms, and applications of the duality principles, and also provides a brief discussion of the Cambridge–Cambridge controversy insofar as it pertains to the existence and usefulness of production functions. The chapter also presents three special cases of the theory of the firm in which new research has appeared, specifically dynamic input disequilibrium models, response to regulatory constraints, and optimal price and output decisions in multi-product firms.
{"url":"https://nyuscholars.nyu.edu/en/publications/producers-theory","timestamp":"2024-11-05T03:25:15Z","content_type":"text/html","content_length":"49824","record_id":"<urn:uuid:d7db4619-dd2f-4109-920c-6294abc9dd14>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00407.warc.gz"}
Nullijn, a program to calculate zero curves of a function of two variables of which one may be complex

Published: 1 January 1978 | Version 1 | DOI: 10.17632/gnvtp62dfh.1
P.C. de Jagher

Title of program: NULLIJN
Catalogue Id: ACYL_v1_0

Nature of problem
When an algorithm for a function f of two variables, for instance a dispersion function f(w,k) or a potential V(r,z), is known, the program calculates and plots the zero curves, thus giving a graphical representation of an implicitly defined function. One of the variables may be complex.

Versions of this program held in the CPC repository in Mendeley Data
ACYL_v1_0; NULLIJN; 10.1016/0010-4655(78)90066-8

This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2019)

Computational Physics, Computational Method
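The core idea, locating the zero set of an implicitly defined function on a grid, can be sketched in a few lines (a minimal stand-in for illustration, not the original NULLIJN code; the test function and grid are made up):

```python
def zero_cells(f, xs, ys):
    # Return grid cells (i, j) whose corner values of f change sign,
    # i.e. cells crossed by a zero curve of f(x, y).
    cells = []
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            corners = [f(xs[i], ys[j]), f(xs[i + 1], ys[j]),
                       f(xs[i], ys[j + 1]), f(xs[i + 1], ys[j + 1])]
            if min(corners) <= 0.0 <= max(corners):
                cells.append((i, j))
    return cells

# Example: the zero curve of f(x, y) = x^2 + y^2 - 1 is the unit circle.
n = 40
grid = [-1.5 + 3.0 * k / n for k in range(n + 1)]
hits = zero_cells(lambda x, y: x * x + y * y - 1.0, grid, grid)
```

Every flagged cell lies within one cell diagonal of the unit circle, so plotting the cell centers traces the zero curve.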
{"url":"https://elsevier.digitalcommonsdata.com/datasets/gnvtp62dfh/1","timestamp":"2024-11-04T09:05:39Z","content_type":"text/html","content_length":"94141","record_id":"<urn:uuid:9218eaac-f858-489c-94b3-daa14bd18093>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00256.warc.gz"}
Why do we need quantum mechanics? Is the way that physicists formulate quantum mechanics viable? That's what Randy and Jim answer in this episode, talking about Aharonov and Rohrlich's Quantum Paradoxes. Including:

(1) Mathematical Consistency: A set of mathematical postulates is consistent if they don't have contradictory implications.

(2) Black Body Radiation: A black body is a hot object, like a kiln. Being hot, the cavity of the kiln has a large thermal energy. It transfers some of that energy to the electromagnetic field -- it glows. In 1899, Max Planck proposed that the thermal energy from the black body can only transfer to the electromagnetic field in discrete chunks, called quanta.

(3) The Compton Effect: The Compton effect is one where a photon (a massless quantum particle of light) strikes an electron and momentum is transferred from the photon to the electron -- meaning the massless photon has momentum to transfer.

(4) Uncertainty Relationships: In quantum mechanics, there are pairs of variables called conjugate variables that cannot both be simultaneously and precisely measured. This is discussed in terms of the light from a microscope.

(5) Single Slit Diffraction: Light diffracts in a single-slit experiment, not just a double slit like we talked about last time.

(6) The Clock in the Box Paradox: Einstein's last attempt to prove that the mathematical formulation of quantum mechanics is inconsistent.

Thanks to Neal Tircuit for our new theme music! Please comment on our subreddit! It will help us respond to what you're saying if we can collect all the comments in the same place.

We're reading Quantum Paradoxes by Yakir Aharonov and Daniel Rohrlich. This is a technical book that is making an argument for a specific interpretation of quantum theory.
The first half of the book uses paradoxes to explore the meaning of quantum theory and describe its mathematics; then, after interpretations are discussed in the middle chapter, an interpretation of quantum mechanics is explored with paradoxes based on weak quantum measurements. A popular, and short, introduction to quantum mechanics that includes a lot of the topics in the first half of this book is Rae's Quantum Physics. If the equations in Quantum Paradoxes get you down, this might perk you up.
{"url":"https://physicsfm-master.blogspot.com/2014/11/","timestamp":"2024-11-05T03:32:38Z","content_type":"text/html","content_length":"49922","record_id":"<urn:uuid:986057de-f33c-4c0c-bfd6-27f4413de1dc>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00804.warc.gz"}
The future excess fraction of occupational cancer among those exposed to carcinogens at work in Australia in 2012

BACKGROUND: Studies in other countries have generally found approximately 4% of current cancers to be attributable to past occupational exposures. This study aimed to estimate the future burden of cancer resulting from current occupational exposures in Australia.

METHODS: The future excess fraction method was used to estimate the future burden of occupational cancer (2012-2094) among the proportion of the Australian working population who were exposed to occupational carcinogens in 2012. Calculations were conducted for 19 cancer types and 53 cancer-exposure pairings, assuming historical trends and current patterns continued to 2094.

RESULTS: The cohort of 14.6 million Australians of working age in 2012 will develop an estimated 4.8 million cancers during their lifetime, of which 68,500 (1.4%) are attributable to occupational exposure in those exposed in 2012. The majority of these will be lung cancers (n=26,000), leukaemias (n=8000), and malignant mesotheliomas (n=7500).

CONCLUSIONS: A significant proportion of future cancers will result from occupational exposures. This estimate is lower than previous estimates in the literature; however, our estimate is not directly comparable to past estimates of the occupational cancer burden because they describe different quantities: future cancers in currently exposed versus current cancers due to past exposures. The results of this study allow us to determine which current occupational exposures are most important, and where to target exposure prevention.

• Cancer
• Occupations
• Prevention
• Workplace
{"url":"https://research-portal.uu.nl/en/publications/the-future-excess-fraction-of-occupational-cancer-among-those-exp","timestamp":"2024-11-15T03:06:15Z","content_type":"text/html","content_length":"56424","record_id":"<urn:uuid:335f6ad0-9e59-4f88-bf2f-dde54e7da12f>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00496.warc.gz"}
Frontiers | Extracting Robust Biomarkers From Multichannel EEG Time Series Using Nonlinear Dimensionality Reduction Applied to Ordinal Pattern Statistics and Spectral Quantities

• ^1Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
• ^2Institute for the Dynamics of Complex Systems, Georg-August-Universität Göttingen, Göttingen, Germany
• ^3Department of Cognitive Neurology, University Medical Center Göttingen, Göttingen, Germany
• ^4Department of Neurology, University Medical Center Göttingen, Göttingen, Germany
• ^5Institute of Pharmacology and Toxicology, University Medical Center Göttingen, Göttingen, Germany
• ^6German Center for Cardiovascular Research (DZHK), Partner Site Göttingen, Göttingen, Germany
• ^7German Primate Center, Leibniz Institute for Primate Research, Göttingen, Germany

In this study, ordinal pattern analysis and classical frequency-based EEG analysis methods are used to differentiate between EEGs of different age groups as well as individuals. As characteristic features, functional connectivity as well as single-channel measures in both the time and frequency domain are considered. We compare the separation power of each feature set after nonlinear dimensionality reduction using t-distributed stochastic neighbor embedding and demonstrate that ordinal pattern-based measures yield results comparable to frequency-based measures applied to preprocessed data, and outperform them if applied to raw data. Our analysis yields no significant differences in performance between single-channel features and functional connectivity features regarding the question of age group separation.

1. Introduction

The study of physiological networks is of great interest in biomedical sciences. Especially functional brain networks, extracted from MRI- or EEG-recordings, are a frequent subject of studies.
We obtain functional networks from EEG recordings and compare these networks to features of single EEG channels in their function as biomarkers. Neurobiological changes in healthy and pathological aging and their electrophysiological correlates (EEG) are still a hot topic in the neuroscience community, particularly since the incidence and prevalence of mild cognitive impairment and dementia increase alongside life expectancy (Ricci, 2019). Although some consensus has been reached regarding some electrophysiological correlates of aging, such as the reduction of occipital alpha power (Babiloni et al., 2006) and a shifting of the individual alpha peak towards lower frequencies in elderly subjects (Scally et al., 2018), an electrophysiological marker with which we can confidently discriminate between young and elderly individuals is yet to be established. Thus, the study of age group differences in EEG recordings has been of interest for several decades and has been addressed by many authors over the years. While some considered differences in single-channel (SC) measures (Waschke et al., 2017), others used functional connectivity (FC) (McIntosh et al., 2013) or a combination of multiple feature groups (Al Zoubi et al., 2018). All methods successfully extracted significant differences between two age groups or significant correlations between the measure of choice and age. This is why we wanted to directly compare the discriminating power of FC and SC features in this study. In recent years, studies aiming not only at differentiating between age groups but also between individuals based on features extracted from EEG recordings have become available (Rocca et al., 2014; Demuru and Fraschini, 2020; Suetani and Kitajo, 2020; Wilaiprasitporn et al., 2020). In all mentioned works, the features of choice were based in the frequency domain. 
A problem for commonly used features extracted from EEG signals is the observation that in most cases, extensive preprocessing of the data is required and done, which has recently been shown to possibly lead to different results (Robbins et al., 2020). Thus, a method of feature extraction where the amount of preprocessing can be reduced would be desirable. Therefore, we used ordinal pattern (OP) statistics (Bandt and Pompe, 2002) for characterizing our data, which has been shown to be a robust method for analysing physiological time series (Keller et al., 2007a, 2014; Parlitz et al., 2012; Amigó et al., 2015; Unakafov, 2015). For example, OP analysis has been used to separate healthy subjects from patients suffering from congestive heart failure (Parlitz et al., 2012) or to differentiate between different experimental conditions in EEG recordings (Unakafov, 2015; Quintero-Quiroz et al., 2018). The discriminating power of OP distributions of single channels is compared to FC measures given by the mutual information (MI) based on OP distributions. These time domain features are compared to spectral features given by power spectral densities (PSDs) of single channels and coherence characterizing interrelations of pairs of channels. As a benchmark task, we aim at separating individuals and age groups based on different sets of features extracted from EEG recordings. To illustrate and quantify the separation, the high-dimensional features (feature vectors) are mapped to a two-dimensional plane using the nonlinear dimensionality reduction algorithm t-SNE (van der Maaten and Hinton, 2008).

2. Materials

2.1. Data Set

The data set analyzed in this study consists of recordings from 45 participants, divided into two different age groups, who participated in an image recognition task. We will refer to this data set as the Image Recognition data set.
Twenty-two young (12 female, mean age: 24.8 years ± 3.9 SD) and 23 elderly (11 female, mean age: 62.4 years ± 7.2 SD) healthy subjects were included. All subjects had normal or corrected-to-normal vision, normal contrast sensitivity and no color vision weakness/blindness. None of the participants had a history of neurological or psychiatric diseases. Normal visual acuity, contrast sensitivity and color vision were corroborated with the Snellen chart (Snellen, 1862), the Mars Letter Contrast Sensitivity test (Arditi, 2005), and a version of the Stilling, Hertel, and Velhagen color panels test (Broschmann and Kuchenbecker, 2011), respectively. To select the hand with which participants would answer the tests, handedness was assessed using the Edinburgh Handedness Inventory (Oldfield, 1971). Additionally, subjects were screened for cognitive impairment and depression using the Mini-Mental State Examination (MMSE) (Folstein et al., 1975) and the Beck Depression Inventory II (BDI-II) (Beck et al., 1996). A score of ≤ 24 points in the MMSE and/or a score of ≥9 points in the BDI-II were considered exclusion criteria. The subjects participated in a modified version of the image recognition task described in Miloserdov et al. (2020). Subjects were shown images from three different categories: cars, faces and scrambled images. The images were shown on two different contrast levels, high (100% contrast) and low (10% contrast), giving six different conditions in total. The subjects were asked to categorize each image they were shown. Each condition was repeated a total of 80 times, resulting in a total of 480 trials.

2.2. Measurement and Preprocessing of EEG Data

The EEG data was recorded at a sampling rate of 1,000 Hz using a 64-channel Brain Products system elastic cap. The cap includes a reference electrode located at FCz. The FieldTrip toolbox for Matlab (Oostenveld et al., 2010) was used for data preprocessing.
Continuous EEG data was segmented into 1,500 ms long epochs (either 1,500 ms prestimulus or 1,500 ms poststimulus). What we will refer to as “raw” data was analyzed without going through any additional preprocessing steps. What we will further refer to as the “preprocessed” data went through the following additional steps. An offline 0.1 Hz–220 Hz band-pass filter (Butterworth, Hamming window) and a 50 Hz notch filter were applied. Jumps and clips were automatically detected using an amplitude z-value cutoff of 20 in the case of jumps and a time threshold of 0.02 s for clips. The data points identified as clips or jumps were then linearly interpolated. Muscle artifacts were detected automatically by first applying a band-pass filter of 120 Hz–140 Hz and selecting an amplitude z-value threshold of 5. The trials marked as having muscle artifacts were afterwards visually inspected and rejected. Blink artifacts were corrected for using Independent Component Analysis (ICA). Data was re-referenced to the common average, i.e., the average across all EEG channels is subtracted from the EEG signal of each individual channel.

3. Methods

The aim of this study was to compare the results from classical frequency-based neuroscientific features such as coherence and power spectra to features extracted on the basis of symbolic dynamics and information-theoretical measures. From each domain, we took one single-channel measure and one functional connectivity measure that takes into account the relationships between different areas in the brain.

3.1. Functional Connectivity in EEG Recordings

Functional connectivity (FC) quantifies the temporal statistical dependence between signals in different brain regions (Sakkalis, 2011) and there exists an abundance of different measures that are used in the neuroscientific community. Generally, FC can be measured in time domain or frequency domain. In both domains, linear and non-linear measures exist.
In this study, the non-linear time domain based measure mutual information (MI, Cover and Thomas, 1991) is compared to one of the most popular measures in EEG analysis, the frequency domain-based linear coherence (Bastos and Schoffelen, 2016). These two measures will be introduced in the following.

3.1.1. Ordinal Pattern Statistics and Mutual Information

Ordinal patterns (OPs) are a symbolic approach to time series analysis that was originally introduced by Bandt and Pompe (2002). Since then, OP based methods have successfully been used in the analyses of biomedical data (Keller et al., 2007b; Amigó et al., 2010, 2015; Parlitz et al., 2012; Graff et al., 2013; Kulp et al., 2016; McCullough et al., 2017) and specifically EEG recordings (Keller et al., 2007a, 2014; Ouyang et al., 2010; Schinkel et al., 2012, 2017; O'Hora et al., 2013; Rummel et al., 2013; Shalbaf et al., 2015; Unakafov, 2015; Cui et al., 2016; Quintero-Quiroz et al., 2018). Statistics based on ordinal patterns have been shown to be robust to noise (Parlitz et al., 2012; Quintero-Quiroz et al., 2015) and can be used to define advanced concepts for quantifying information flow (Staniek and Lehnertz, 2008; Amigó et al., 2016) or to derive transition networks in state space from observed time series (McCullough et al., 2015; Zhang et al., 2017). In ordinal pattern statistics, the order relations between values of a time series are considered rather than the values themselves. An ordinal pattern for a given length w and lag l describes the order relations between w points of a time series, each separated by l − 1 points. For a length w, there are w! possible different patterns, each of which can be assigned a unique permutation index as illustrated for w = 4 in Figure 1.
The permutation index characterizes the permutation π that is needed to get from a sample $x(t), x(t+l), \ldots, x(t+(w-1)l)$ of the time series to a sample $x(\pi(t)), x(\pi(t+l)), \ldots, x(\pi(t+(w-1)l))$ that is ordered ascendingly according to the amplitude of the sample in the time series.

FIGURE 1

An important parameter is the lag l which can be used to address different time scales as illustrated in Figure 2.

FIGURE 2
Figure 2. Illustration of ordinal patterns on different time scales in raw EEG data with sampling rate 1,000 Hz. The colored interval in the right column covers the same time span (0.06 s) as the entire window in the left column.

Ordinal patterns are easy and inexpensive to compute and have been shown to be robust to noise (Bandt and Pompe, 2002; Parlitz et al., 2012). From a sequence of ordinal patterns, the probabilities of occurrence of specific patterns, given a lag l and length w, can be used to characterize the underlying time series. Commonly, complexity measures as permutation entropy (Bandt and Pompe, 2002; Parlitz et al., 2012) or conditional entropy (Unakafov, 2015) are applied to the resulting pattern distributions. Here, the question asked is not about the complexity of a univariate time series, but about the similarity of channels in one multivariate EEG recording. The similarity measure that is used here is the mutual information (MI) (Shannon, 1948; Cover and Thomas, 1991). Mutual information can be expressed by the Kullback-Leibler divergence (Kullback and Leibler, 1951) between the joint probability distribution $p_{X,Y}$ of two jointly varying random variables X and Y and the product of their marginal distributions:

$I(X;Y) = \mathrm{KL}\left(p_{X,Y}\,\|\,p_X \cdot p_Y\right). \quad (1)$

For independent variables, the joint distribution is equal to the product of the marginal ones, resulting in a mutual information of I(X; Y) = 0. Accordingly, mutual information can be interpreted as a quantity that measures to what degree two random variables are not independent.

3.1.2.
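A compact sketch of ordinal-pattern extraction and the plug-in mutual information between two pattern streams (function names and the logistic-map demo data are our own illustration, not the authors' code):

```python
import math

def ordinal_pattern(window):
    # Permutation index: argsort of the window (ties broken by time index)
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def op_sequence(x, w=3, lag=1):
    # Ordinal pattern of length w and lag l at every admissible start time
    span = (w - 1) * lag
    return [ordinal_pattern(x[t:t + span + 1:lag]) for t in range(len(x) - span)]

def op_mutual_information(x, y, w=3, lag=1):
    # Plug-in MI (in bits) between the ordinal-pattern streams of x and y
    pairs = list(zip(op_sequence(x, w, lag), op_sequence(y, w, lag)))
    n = len(pairs)
    joint, px, py = {}, {}, {}
    for a, b in pairs:
        joint[a, b] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum(c / n * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

def logistic(x0, n):
    # Chaotic demo series: x_{k+1} = 4 x_k (1 - x_k)
    xs, v = [], x0
    for _ in range(n):
        v = 4.0 * v * (1.0 - v)
        xs.append(v)
    return xs

x, y = logistic(0.1234, 500), logistic(0.4321, 500)
mi_self, mi_cross = op_mutual_information(x, x), op_mutual_information(x, y)
```

As expected, mi_self (the pattern entropy of x) is large, while mi_cross between two effectively independent chaotic orbits stays close to zero.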
Coherence

In contrast to OP statistics and MI, coherency (Bastos and Schoffelen, 2016) measures functional connectivity in the frequency domain. The coherency of two time series $y_i$ and $y_j$, for example two EEG channels i and j, is defined as the normed expectation value of the cross-spectrum

$\mathrm{coh}_{ij}(f) = \frac{\langle \hat{y}_i(f)\,\hat{y}_j^*(f) \rangle}{\sqrt{\langle \hat{y}_i(f)\,\hat{y}_i^*(f) \rangle\,\langle \hat{y}_j(f)\,\hat{y}_j^*(f) \rangle}}, \quad (2)$

where $\hat{y}_i(f)$ is the Fourier transform of the signal $y_i(t)$ and $^*$ denotes the complex conjugate. The expectation value $\langle\cdot\rangle$ is usually approximated by taking the average over multiple trials from an EEG sample. In this study, we will consider the absolute value of Equation (2), and call it coherence.

3.2. Single-Channel Features in EEG Recordings

We compare the introduced FC measures to measures only taking into account single channels, but no relations between them. For this, we consider the PSDs as done by Suetani and Kitajo (2020), as well as OP distributions for each channel. In both cases, (not necessarily normalized) distributions per channel are considered. As a metric to compare them, we use the generalized KL-divergence. A vectorized, symmetric version of this was introduced by Suetani and Kitajo (2020) as a metric, which is given by

$d_{nm} = \frac{1}{N_{ch}} \sum_{l=1}^{N_{ch}} \frac{1}{2}\left[D_B\left(S_n^{(l)}\,\|\,S_m^{(l)}\right) + D_B\left(S_m^{(l)}\,\|\,S_n^{(l)}\right)\right], \quad (3)$

where the generalized KL-divergence $D_B(P\,\|\,Q)$ between two not necessarily normalized densities P and Q is given according to

$D_B(P\,\|\,Q) = \int \left( p(x)\log\frac{p(x)}{q(x)} - p(x) + q(x) \right) dx. \quad (4)$

$d_{nm}$ gives the distance between two vectors of dimension $N_{ch}$, where each dimension contains a distribution $S_n^{(l)}$, which will either be a PSD or an OP distribution. It is a special case of the beta-divergence (Basu et al., 1998; Mihoko and Eguchi, 2002) used in Suetani and Kitajo (2020) with β = 1.

3.3. Dimensionality Reduction

Dimensionality reduction aims to visualize such high-dimensional data in a low-dimensional space, preferably by extracting the most important features and representing each data point only by those.
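For discrete histograms, the single-channel metric of Equations (3) and (4) reduces to sums; a small sketch (ours, not the authors' code; bin values are assumed strictly positive so every term stays finite):

```python
import math

def gen_kl(p, q):
    # Generalized KL-divergence of Eq. (4) for positive histograms
    return sum(pi * math.log(pi / qi) - pi + qi for pi, qi in zip(p, q))

def channel_distance(sample_n, sample_m):
    # Symmetrized divergence averaged over channels, Eq. (3);
    # each sample is one histogram (list of positive bin values) per channel
    pairs = list(zip(sample_n, sample_m))
    return sum(0.5 * (gen_kl(p, q) + gen_kl(q, p)) for p, q in pairs) / len(pairs)
```

channel_distance vanishes for identical samples, is symmetric, and, unlike the ordinary KL-divergence, does not require the histograms to be normalized.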
While the idea of dimensionality reduction dates back more than 100 years (Pearson, 1901), recently, more and more techniques have surfaced (van der Maaten et al., 2007). In this study, the non-linear algorithm t-distributed stochastic neighbor embedding (t-SNE, van der Maaten and Hinton, 2008) is used to project features extracted from EEG time series. These features are adjacency matrices in case of functional connectivity and vectors of distributions in case of single-channel measures.

3.3.1. Nonlinear Dimension Reduction (t-SNE)

T-distributed stochastic neighbor embedding was first introduced by van der Maaten and Hinton (2008) as an extension of stochastic neighbor embedding (SNE) (Hinton and Roweis, 2003) to avoid the crowding problem and simplify optimization. The algorithm projects a set of L samples $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_L$ from a high-dimensional into a low-dimensional space, $\mathbb{R}^N \ni \mathbf{x}_n \mapsto \mathbf{y}_n \in \mathbb{R}^M$, considering the so-called neighbor probabilities

$p_{m|n} = \frac{\exp\left(-\|\mathbf{x}_n - \mathbf{x}_m\|^2 / (2\sigma_n^2)\right)}{\sum_{r \neq n} \exp\left(-\|\mathbf{x}_r - \mathbf{x}_n\|^2 / (2\sigma_n^2)\right)} \quad (5)$

for high-dimensional data points $\mathbf{x}_n$ and $\mathbf{x}_m$. Here, $\|\cdot\|$ is typically the Euclidean norm. For projections of single-channel features, where one high-dimensional data point consists of $N_{ch}$ distributions, we use $\|\cdot\| = d_{nm}$ in Equation (3). In case of connectivity matrices, we flatten the upper triangular part of the matrix and use the Euclidean norm as a measure of distance. N is an arbitrary integer value and M ∈ {2, 3} in general; we use M = 2 in this study. The probability $p_{m|n}$ describes the probability that $\mathbf{x}_m$ is a neighbor of $\mathbf{x}_n$ and is proportional to a Gaussian centered at $\mathbf{x}_n$. The standard deviation $\sigma_n$ of the high-dimensional probability distributions is calculated so that it satisfies a given value of the perplexity k.
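The $\sigma_n$ matching a target perplexity is typically found by bisection, since perplexity grows monotonically with $\sigma$; a minimal sketch (our own illustration, not the reference t-SNE implementation):

```python
import math

def neighbor_probs(sq_dists, sigma):
    # Conditional probabilities p_{m|n} of Eq. (5) for one point n,
    # given the squared distances to all other points
    w = [math.exp(-d / (2.0 * sigma * sigma)) for d in sq_dists]
    s = sum(w)
    return [v / s for v in w]

def perplexity(p):
    # 2 to the power of the Shannon entropy, cf. Eq. (6)
    return 2.0 ** -sum(v * math.log2(v) for v in p if v > 0)

def sigma_for_perplexity(sq_dists, target, iters=100):
    # Bisection on log(sigma); fully underflowed weights behave like perplexity 1
    lo, hi = 1e-8, 1e8
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        w = [math.exp(-d / (2.0 * mid * mid)) for d in sq_dists]
        s = sum(w)
        perp = perplexity([v / s for v in w]) if s > 0 else 1.0
        if perp > target:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)
```

For example, for squared distances 1…10 and target perplexity 5, the returned $\sigma$ reproduces the target to high accuracy.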
More specifically, the entropy of the conditional probability p_{m|n} as a function of m must be approximately equal to log_2(k), or

$k(p_{m|n}) = 2^{H(p_{m|n})}, \quad (6)$

where H is the Shannon entropy (Shannon, 1948). The goal of t-SNE is now to minimize the sum of the Kullback-Leibler divergences between the symmetric probabilities p_nm = (p_{m|n} + p_{n|m})/2 in the high-dimensional space and the neighbor probabilities q_nm of the projections into low-dimensional space,

$q_{nm} = \frac{\left(1 + \|\mathbf{y}_n - \mathbf{y}_m\|^2\right)^{-1}}{\sum_{r \neq n} \left(1 + \|\mathbf{y}_r - \mathbf{y}_n\|^2\right)^{-1}}, \quad (7)$

where q_nm is proportional to a Student-t distribution (Student, 1908) with mean y_n. The cost function then becomes

$C = \sum_n KL(P_n \,\|\, Q_n) = \sum_n \sum_{m \neq n} p_{nm}\log\left(\frac{p_{nm}}{q_{nm}}\right). \quad (8)$

This cost function is minimized using the gradient descent method, thus aligning the high- and low-dimensional neighbor probabilities. As a consequence, data points that are close in high-dimensional space will be projected closely together. Because t-SNE is a stochastic method that starts the projection by randomly assigning low-dimensional coordinates to data points and then minimizing the cost function given in Equation (8), the resulting projection depends on the initial conditions. If one only considers a single projection, it is possible that this projection is not representative of all projections. Thus, if one wants to quantify effects in the t-SNE projections, one must average over many projections with different initial conditions.

3.4. Analysis Scheme

The EEG recordings were cut into trials. These trials were 1.5 s long, either directly before or directly after the subjects were shown a picture. These steps were done for both the raw and preprocessed versions of the data set. OP sequences with patterns of length w = 4 were calculated for each trial. From all trials belonging to the same condition, histograms of the probability of occurrence were extracted, giving one histogram per EEG channel, per condition and per subject.
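The calibration of σ_n to a target perplexity k (Equations 5 and 6) is typically done by a bisection search, since the perplexity grows monotonically with σ_n. A minimal sketch follows; the search bounds and iteration count are arbitrary choices, not values from the paper:

```python
import numpy as np

def neighbor_probs(X, n, sigma):
    """Conditional probabilities p_{m|n} of Eq. (5) for data point n."""
    d2 = np.sum((X - X[n]) ** 2, axis=1)
    p = np.exp(-d2 / (2.0 * sigma ** 2))
    p[n] = 0.0                         # a point is not its own neighbor
    return p / p.sum()

def perplexity(p):
    """Perplexity 2**H(p) of Eq. (6), with the convention 0*log(0) = 0."""
    nz = p[p > 0]
    return 2.0 ** (-np.sum(nz * np.log2(nz)))

def sigma_for_perplexity(X, n, k, iters=100):
    """Bisection for the sigma_n that hits target perplexity k; this
    works because the perplexity increases monotonically with sigma_n."""
    lo, hi = 1e-10, 1e10
    for _ in range(iters):
        sigma = 0.5 * (lo + hi)
        if perplexity(neighbor_probs(X, n, sigma)) > k:
            hi = sigma                 # distribution too flat: narrow it
        else:
            lo = sigma
    return sigma
```

Production implementations (e.g., scikit-learn's TSNE) perform this search internally for every data point.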
On the basis of the OP sequences, one single-channel feature and one functional connectivity feature were extracted:

• The marginal ordinal pattern distributions were used as single-channel features for dimensionality reduction with t-SNE. Here, we calculated the distance between two samples, each characterized by N_ch OP distributions, with Equation (3).

• We also used (joint) OP distributions to calculate MI values between each pair of EEG channels, resulting in an N_ch × N_ch symmetric adjacency matrix per stimulus type per subject, containing the MI before or after each stimulus.

In addition to the time-domain measures, features were extracted from the frequency domain. For each trial, power spectra were calculated by performing a fast Fourier transform (FFT). Based on this, we again extracted one single-channel feature and one functional connectivity feature:

• The power spectral densities of each electrode were calculated as the squared absolute value of the Fourier transform of the signal and averaged over all trials. Again, we calculated the distance between two samples, each characterized by N_ch PSDs (either the full PSD or only the bins of specific frequency bands), with Equation (3), where only the frequency bins of specific frequency bands were contained (alpha: 8 Hz–12 Hz, beta: 15 Hz–30 Hz, theta: 3 Hz–7 Hz, gamma: 30 Hz–50 Hz). We consider different frequency bands for comparability with findings in the literature.

• Adjacency matrices based on the coherence between EEG channels were composed by calculating the average coherence over all trials per frequency band.

The dimensionality reduction algorithms described above were applied to each feature set. In the case of the adjacency matrices, the flattened upper triangular part of the matrices was used as input for the algorithms, with the Euclidean distance as a metric.
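To make the time-domain feature extraction concrete, here is an illustrative sketch of ordinal-pattern sequences (w = 4, as in the text), their per-channel histograms, and an MI estimate from the joint pattern histogram. The lag handling and the plug-in MI estimator are implementation assumptions, not details taken from the study:

```python
import math
from itertools import permutations

import numpy as np

def op_sequence(x, w=4, lag=1):
    """Map a 1d signal to a sequence of ordinal-pattern indices."""
    pats = {p: i for i, p in enumerate(permutations(range(w)))}
    n = len(x) - (w - 1) * lag
    # each row holds (x[t], x[t+lag], ..., x[t+(w-1)*lag])
    windows = np.stack([x[i * lag : i * lag + n] for i in range(w)], axis=1)
    return np.array([pats[tuple(int(v) for v in np.argsort(row))]
                     for row in windows])

def op_distribution(seq, w=4):
    """Per-channel histogram of pattern occurrence probabilities."""
    counts = np.bincount(seq, minlength=math.factorial(w))
    return counts / counts.sum()

def mutual_information(seq_a, seq_b, w=4):
    """Plug-in MI estimate from the joint ordinal-pattern histogram."""
    n_pat = math.factorial(w)
    joint = np.zeros((n_pat, n_pat))
    for a, b in zip(seq_a, seq_b):
        joint[a, b] += 1.0
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])))
```

A useful sanity check: the MI of a channel with itself equals the Shannon entropy of its own pattern distribution.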
We also projected the vectors containing PSDs and OP distributions with the vectorized KL-divergence introduced in Equation (3) as the distance measure. It is important to mention that the different feature sets (FC and SC) involved in the comparison have different numbers of features, which itself might influence dimensionality-reduction methods.

To quantify the effect of subject separation in the 2d projections, the ratio ρ between the average distances within a subject cluster and the average distances to its three next neighbors was calculated. This ratio gives insight into how much closer data points of the same subject are projected together than data points of different individuals.

As a quantification of the separation of age groups, we used a kernel density estimation (KDE; Silverman, 1986) with a Gaussian kernel and a bandwidth selection according to Scott (2015) for the distributions of the two age groups in the projections. An illustration of such estimated densities is given in Figure 4. We then calculated the Jensen-Shannon divergence (Lin, 1991) between the two distributions. The Jensen-Shannon divergence quantifies differences between probability distributions. It is bounded by unity in the case of a base-2 logarithm and can be derived from the KL divergence via

$JSD(P \,\|\, Q) = \frac{1}{2} KL(P \,\|\, M) + \frac{1}{2} KL(Q \,\|\, M), \quad \text{where } M = \frac{1}{2}(P + Q). \quad (9)$

The square root of the Jensen-Shannon divergence is a metric that is often called the Jensen-Shannon distance (JSD, Endres and Schindelin, 2003). This was done, in all cases, for 100 t-SNE projections of the ensemble with different random seeds.

4. Results

T-SNE projections of similar experimental conditions from the Image Recognition data set, i.e., samples after a high-contrast stimulus, are displayed in Figure 3. Four different feature types were projected.
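The separation measure described above (Gaussian KDE with Scott's-rule bandwidth, followed by the Jensen-Shannon distance of Equation 9) can be sketched as follows; the evaluation grid and its resolution are implementation choices not specified in the text:

```python
import numpy as np
from scipy.stats import gaussian_kde

def jensen_shannon_distance(p, q):
    """Square root of the JSD of Eq. (9), base-2 logarithm."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        nz = a > 0
        return np.sum(a[nz] * np.log2(a[nz] / b[nz]))

    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def group_separation(points_a, points_b, grid_n=100):
    """JSD between Gaussian KDEs of two 2d point clouds.

    gaussian_kde uses Scott's rule for the bandwidth by default,
    matching the bandwidth selection described in the text.
    """
    kde_a = gaussian_kde(points_a.T)
    kde_b = gaussian_kde(points_b.T)
    allpts = np.vstack([points_a, points_b])
    xs = np.linspace(allpts[:, 0].min(), allpts[:, 0].max(), grid_n)
    ys = np.linspace(allpts[:, 1].min(), allpts[:, 1].max(), grid_n)
    grid = np.stack(np.meshgrid(xs, ys)).reshape(2, -1)
    return jensen_shannon_distance(kde_a(grid), kde_b(grid))
```

Identical clouds give a distance of zero; well-separated clouds approach the upper bound of one.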
The results for the two FC features, MI and coherence (alpha band), are displayed in the left column, and the projections of the single-channel features, OP distributions and filtered PSDs (theta band), are depicted in the right column. In all cases, separations of age groups and individuals can be observed visually. We observe that the three conditions (face, car, scrambled image) are projected closely together for each individual. This effect is similar to the one observed in Suetani and Kitajo (2020) and is further quantified in Figure A1 in the Appendix. The two age groups (Elderly in red and Young in blue) appear to be loosely separated.

FIGURE 3
Figure 3. t-SNE projections of features obtained from the EEG time series of the Image Recognition data set (k = 30). We compare time-domain measures (A,B) to frequency-domain measures (C,D). The time-domain measures are extracted from raw data by calculating OP sequences; the frequency-domain measures are taken from frequency bands of the preprocessed data. In each case, a projection for the parameter (lag/frequency band) yielding the best group separation for the method is shown. (A) Projection of functional connectivity vectors obtained from OP statistics (τ = 100 ms), (B) projection of OP distribution vectors (τ = 15 ms) with the generalized KL-divergence as a metric, (C) projection of functional connectivity vectors obtained from average coherence (alpha band), (D) projection of power spectral density vectors (theta band) with the generalized KL-divergence as a metric.

As will be detailed in the following section, we find that the separation of the two age groups, based on the JSD, is comparable for all feature sets, with a tendency toward higher separations for the OP based methods. The separation of individuals is clearly more distinct for OP based measures than for frequency based measures, both for FC and single channels.

4.1.
Age Group Separation

In Figure 3, a separation of the two age groups in the t-SNE projections can be observed visually. As an example, in Figure 4, the estimated densities of the age groups are plotted for the same parameters as in Figure 3A. One can observe that the densities barely overlap.

FIGURE 4
Figure 4. Exemplary plot of the estimated kernel densities for the same parameters as in Figure 3A; a distinction is made only between age groups, not conditions.

The Jensen-Shannon distances between the two estimated distributions, depending on time or frequency scales, are displayed in Figure 5 for both raw and preprocessed data. For all feature sets, an average JSD larger than zero can be observed, with the highest values being achieved by the OP based measures.

FIGURE 5
Figure 5. Quantification of the separation of Elderly and Young in 2d t-SNE projections (k = 30) for both raw (red) and preprocessed (blue) EEG data. We consider group separation based on (A,C) functional connectivity and (B,D) single channels. Results from OP based measures are displayed in the upper row, and from frequency-based measures in the lower row. The solid lines describe the separation when using the full PSD; otherwise only the frequency bins of one specific band are considered. The error bars display the standard deviation over 100 t-SNE projections with different random seeds.

While for the OP based measures no increase of the best performance across time scales through artifact removal can be observed, artifact correction clearly leads to an overall increase of performance for frequency based measures. This is yet another illustration of the robustness of OP statistics. We found no significant dependency of the separation of age groups on the perplexity k for the t-SNE projections.
In the case of individuals, we found the same dependency as Suetani and Kitajo (2020): for larger values of k (up to k = 100), the separation is less distinct than for smaller values, but still observable.

5. Discussion

In this study, we obtained differences between age groups (Elderly, Young) and individuals subjected to similar experimental conditions, based on both functional connectivity and single-channel measures obtained from multichannel EEG time series. We found that t-SNE as a method for dimensionality reduction and feature extraction not only reflects individuality but also appears to represent inter-individual relationships, given by age groups in this case. It should be emphasized that the separation of individuals is restricted to the separation of recordings from the same individual under similar conditions (post high-contrast stimulus). Separate checks using pre-stimulus recordings and resting state recordings from the same session revealed that data points of the same individual are not necessarily projected closely together. Regarding the separation between age groups, one should note that a difference between brain age, which is a descriptor of the physiological condition of the brain, and the chronological age of the subject has been hypothesized (Irimia et al., 2015; Steffener et al., 2016). If the chronological age and the brain age of some individuals in the study differ, this could explain apparent outliers in the projections. Since the aim of t-SNE is to find a low-dimensional projection of a high-dimensional data set that represents the distances between high-dimensional points, it can be assumed that the projected ensemble does not merely represent the two features with highest variance while omitting the others, but is rather a representation of the whole feature set. We showed that OP analysis can obtain results comparable to classical EEG feature sets, and even outperform them if both methods are applied to raw data.
For the OP based measures, the applied preprocessing pipeline partially even leads to a decrease in performance regarding the separation of age groups, while for the frequency-based measures there is always an observable increase. This supports previous observations that OP based methods yield promising results when applied to raw data sets. Preprocessing could thus be reduced to avoid potential differences between analyses using different preprocessing pipelines. The question answered here is: "Is the average or best performance of one feature set comparable to the average or best performance of another feature set?" This appears to be the case for the age group separation, and also for the separation of subjects. Given the observation that there appears to be no significant difference between FC and SC measures, the question arises whether the obtained functional networks actually contain the information that we assume they do. In light of these findings, the interpretation of other studies that obtained age group differences based on functional connectivity (Wada et al., 1998; McIntosh et al., 2013; Al Zoubi et al., 2018, amongst others) could be reconsidered. If the age differences are also contained in single-channel measures, and the separation power of functional connectivity does not outperform the single-channel measures, it must be thoroughly investigated how these group differences are related to one another. A possible explanation for the lack of differences between functional connectivity and single-channel measures would be that the main information contained in the functional connectivity measures considered here is due to shared sources between the different EEG channels. This information would also be contained in single-channel features. To verify this, further tests must be done.
Furthermore, future studies should include a comparison with other measures of network physiology like time delay stability (Bartsch et al., 2015; Liu et al., 2015; Lin et al., 2016) and transfer entropy (Schreiber, 2000; Staniek and Lehnertz, 2008; Vicente et al., 2011; Wibral et al., 2013) measures.

Data Availability Statement

The datasets presented in this article are not readily available due to data protection rules. Requests to access the datasets should be directed to Melanie Wilke (melanie.wilke@med.uni-goettingen.de).

Ethics Statement

The studies involving human participants were reviewed and approved by the medical ethics committees of the University Medical Center Göttingen, Germany. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

IK, AS, and UP prepared the design of the manuscript. MW and DT-T designed the behavioral tasks. DT-T performed the experiments and was responsible for data acquisition. DT-T and IS performed data preprocessing. AS, IK, SB, UP, and IS developed the conceptional design of the data analysis. IK and SB implemented the analysis algorithms and performed the data analysis. MB, SL, and MW were in charge of project administration and funding acquisition. All authors edited and reviewed the article and approved the manuscript.

Funding

We acknowledge EKFS Seed Funding by the Else Kröner-Fresenius Foundation and support through the German Center for Cardiovascular Research (DZHK), partner site Göttingen. MW, IS, and DT-T were funded by the Herman and Lilly Schilling Foundation.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

IK would like to thank Steffen Korn for helpful discussions regarding machine learning techniques. All authors would like to thank Aishwarya Bhonsle for interesting discussions.
We would like to thank Kristina Miloserdov for the task design and support with data collection and Carsten Schmidt-Samoa for his support in task programming and technical assistance. 1. ^We “flatten” a matrix by concatenating all column vectors into one long vector. Al Zoubi, O., Ki Wong, C., Kuplicki, R. T., Yeh, H.-w., Mayeli, A., Refai, H., et al. (2018). Predicting age from brain EEG signals-a machine learning approach. Front. Aging Neurosci. 10:184. doi: Amigó, J. M., Keller, K., and Unakafova, V. A. (2010). Permutation Complexity in Dynamical Systems: Ordinal Patterns, Permutation Entropy and All That. Heidelberg: Springer Science & Business Media. doi: 10.1007/978-3-642-04084-9_3 Amigó, J. M., Keller, K., and Unakafova, V. A. (2015). Ordinal symbolic analysis and its application to biomedical recordings. Philos. Trans. Ser. A Math. Phys. Eng. Sci. 373:20140091. doi: 10.1098/ Amigó, J. M., Monetti, R., Graff, B., and Graff, G. (2016). Computing algebraic transfer entropy and coupling directions via transcripts. Chaos 26:113115. doi: 10.1063/1.4967803 Arditi, A. (2005). Improving the design of the letter contrast sensitivity test. Invest. Ophthalmol. Vis. Sci. 46, 2225–2229. doi: 10.1167/iovs.04-1198 Babiloni, C., Binetti, G., Cassarino, A., Dal Forno, G., Del Percio, C., Ferreri, F., et al. (2006). Sources of cortical rhythms in adults during physiological aging: a multicentric EEG study. Hum. Brain Mapp. 27, 162–172. doi: 10.1002/hbm.20175 Bandt, C., and Pompe, B. (2002). Permutation entropy: a natural complexity measure for time series. Phys. Rev. Lett. 88:174102. doi: 10.1103/PhysRevLett.88.174102 Bartsch, R. P., Liu, K. K. L., Bashan, A., and Ivanov, P. C. (2015). Network physiology: how organ systems dynamically interact. PLoS ONE 10:e142143. doi: 10.1371/journal.pone.0142143 Bastos, A. M., and Schoffelen, J. (2016). A tutorial review of functional connectivity analysis methods and their interpretational pitfalls. Front. Syst. Neurosci. 9:175. 
doi: 10.3389/ Basu, A., Harris, I. R., Hjort, N. L., and Jones, M. C. (1998). Robust and efficient estimation by minimising a density power divergence. Biometrika 85, 549–559. doi: 10.1093/biomet/85.3.549 Beck, A., Steer, R., Ball, R., and Ranieri, W. (1996). Comparison of beck depression inventories -IA and -II in psychiatric outpatients. J. Pers. Assess. 67, 588–597. doi: 10.1207/s15327752jpa6703_13 Broschmann, D., and Kuchenbecker, J. (2011). Tafeln zur Prüfung des Farbensinnes. Stuttgart: Thieme. Cover, T. M., and Thomas, J. A. (1991). Elements of Information Theory. New York, NY: Wiley-Interscience. doi: 10.1002/0471200611 Cui, D., Wang, J., Wang, L., Yin, S., Bian, Z., and Gu, G. (2016). Symbol Recurrence Plots based resting-state eyes-closed EEG deterministic analysis on amnestic mild cognitive impairment in type 2 diabetes mellitus. Neurocomputing 203, 102–110. doi: 10.1016/j.neucom.2016.03.056 Demuru, M., and Fraschini, M. (2020). EEG fingerprinting: subject-specific signature based on the aperiodic component of power spectrum. Comput. Biol. Med. 120:103748. doi: 10.1016/ Endres, D. M., and Schindelin, J. E. (2003). A new metric for probability distributions. IEEE Trans. Inform. Theory 49, 1858–1860. doi: 10.1109/TIT.2003.813506 Folstein, M. F. (1975). “Mini-mental state”: a practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 12, 189–198. doi: 10.1016/0022-3956(75)90026-6 Graff, G., Graff, B., Kaczkowska, A., Makowiec, D., Amigó, J., Piskorski, J., et al. (2013). Ordinal pattern statistics for the assessment of heart rate variability. Eur. Phys. J. Spec. Top. 222. doi: 10.1140/epjst/e2013-01857-4 Hinton, G. E., and Roweis, S. T. (2003). “Stochastic neighbor embedding,” in Advances in Neural Information Processing Systems 15, eds S. Becker, S. Thrun, and K. Obermayer (Cambridge, MA: MIT Press), 857–864. Irimia, A., Torgerson, C. M., Matthew Goh, S. Y., and Van Horn, J. D. (2015). 
Statistical estimation of physiological brain age as a descriptor of senescence rate during adulthood. Brain Imag. Behav. 9, 678–689. doi: 10.1007/s11682-014-9321-0 Keller, K., Lauffer, H., and Sinn, M. (2007a). Ordinal analysis of EEG time series. Chaos Complex. Lett. 2, 247–258. Keller, K., Sinn, M., and Emonds, J. (2007b). Time series from the ordinal viewpoint. Stochast. Dyn. 07, 247–258. doi: 10.1142/S0219493707002025 Keller, K., Unakafov, A., and Unakafova, V. (2014). Ordinal patterns, entropy, and EEG. Entropy 16, 6212–6239. doi: 10.3390/e16126212 Kullback, S., and Leibler, R. A. (1951). On information and sufficiency. Ann. Math. Stat. 22, 79–86. doi: 10.1214/aoms/1177729694 Kulp, C. W., Chobot, J. M., Freitas, H. R., and Sprechini, G. D. (2016). Using ordinal partition transition networks to analyze ECG data. Chaos 26:073114. doi: 10.1063/1.4959537 Lin, A., Liu, K. K. L., Bartsch, R. P., and Ivanov, P. C. (2016). Delay-correlation landscape reveals characteristic time delays of brain rhythms and heart interactions. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 374:20150182. doi: 10.1098/rsta.2015.0182 Lin, J. (1991). Divergence measures based on the Shannon entropy. IEEE Trans. Inform. Theory 37, 145–151. doi: 10.1109/18.61115 Liu, K. K. L., Bartsch, R. P., Lin, A., Mantegna, R. N., and Ivanov, P. C. (2015). Plasticity of brain wave network interactions and evolution across physiologic states. Front. Neural Circuits 9:62. doi: 10.3389/fncir.2015.00062 McCullough, M., Small, M., Iu, H. H. C., and Stemler, T. (2017). Multiscale ordinal network analysis of human cardiac dynamics. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 375:20160292. doi: McCullough, M., Small, M., Stemler, T., and Iu, H. H.-C. (2015). Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems. Chaos 25:053101. doi: 10.1063/1.4919075 McIntosh, A. R., Vakorin, V., Kovacevic, N., Wang, H., Diaconescu, A., and Protzner, A. B. (2013). 
Spatiotemporal dependency of age-related changes in brain signal variability. Cereb. Cortex 24, 1806–1817. doi: 10.1093/cercor/bht030 Mihoko, M., and Eguchi, S. (2002). Robust blind source separation by beta divergence. Neural Comput. 14, 1859–1886. doi: 10.1162/089976602760128045 Miloserdov, K., Schmidt-Samoa, C., Williams, K., Weinrich, C. A., Kagan, I., Bürk, K., et al. (2020). Aberrant functional connectivity of resting state networks related to misperceptions and intra-individual variability in parkinson's disease. NeuroImage 25:102076. doi: 10.1016/j.nicl.2019.102076 O'Hora, D., Schinkel, S., Hogan, M. J., Kilmartin, L., Keane, M., Lai, R., et al. (2013). Age-related task sensitivity of frontal EEG entropy during encoding predicts retrieval. Brain Topogr. 26, 547–557. doi: 10.1007/s10548-013-0278-x Oldfield, R. (1971). The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9, 97–113. doi: 10.1016/0028-3932(71)90067-4 Oostenveld, R., Fries, P., Maris, E., and Schoffelen, J.-M. (2010). Fieldtrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci . 2011:156869. doi: 10.1155/2011/156869 Ouyang, G., Dang, C., Richards, D. A., and Li, X. (2010). Ordinal pattern based similarity analysis for EEG recordings. Clin. Neurophysiol. 121, 694–703. doi: 10.1016/j.clinph.2009.12.030 Parlitz, U., Berg, S., Luther, S., Schirdewan, A., Kurths, J., and Wessel, N. (2012). Classifying cardiac biosignals using ordinal pattern statistics and symbolic dynamics. Comput. Biol. Med. 42, 319–327. doi: 10.1016/j.compbiomed.2011.03.017 Pearson, K. F. R. S. (1901). LIII. on lines and planes of closest fit to systems of points in space. Lond. Edinburgh Dublin Philos. Mag. J. Sci. 2, 559–572. doi: 10.1080/14786440109462720 Quintero-Quiroz, C., Montesano, L., Pons, A. J., Torrent, M. C., García-Ojalvo, J., and Masoller, C. (2018). 
Differentiating resting brain states using ordinal symbolic analysis. Chaos 28:106307. doi: 10.1063/1.5036959 Quintero-Quiroz, C., Pigolotti, S., Torrent, M. C., and Masoller, C. (2015). Numerical and experimental study of the effects of noise on the permutation entropy. New J. Phys. 17:093002. doi: 10.1088/ Ricci, G. (2019). Social aspects of dementia prevention from a worldwide to national perspective: a review on the international situation and the example of Italy. Behav. Neurol. 2019:8720904. doi: Robbins, K. A., Touryan, J., Mullen, T., Kothe, C., and Bigdely-Shamlo, N. (2020). How sensitive are EEG results to preprocessing methods: a benchmarking study. IEEE Trans. Neural Syst. Rehabil. Eng. 28, 1081–1090. doi: 10.1109/TNSRE.2020.2980223 Rocca, D. L., Campisi, P., Vegso, B., Cserti, P., Kozmann, G., Babiloni, F., et al. (2014). Human brain distinctiveness based on EEG spectral coherence connectivity. IEEE Trans. Biomed. Eng. 61, 2406–2412. doi: 10.1109/TBME.2014.2317881 Rummel, C., Abela, E., Hauf, M., Wiest, R., and Schindler, K. (2013). Ordinal patterns in epileptic brains: analysis of intracranial EEG and simultaneous EEG-fMRI. Eur. Phys. J. Spec. Top. 222, 569–585. doi: 10.1140/epjst/e2013-01860-9 Sakkalis, V. (2011). Review of advanced techniques for the estimation of brain connectivity measured with EEG/MEG. Comput. Biol. Med. 41, 1110–1117. doi: 10.1016/j.compbiomed.2011.06.020 Scally, B., Burke, M. R., Bunce, D., and Delvenne, J.-F. (2018). Resting-state EEG power and connectivity are associated with alpha peak frequency slowing in healthy aging. Neurobiol. Aging 71, 149–155. doi: 10.1016/j.neurobiolaging.2018.07.004 Schinkel, S., Marwan, N., and Kurths, J. (2017). Order patterns recurrence plots in the analysis of ERP data. Cogn. Neurodyn. 1, 317–325. doi: 10.1007/s11571-007-9023-z Schinkel, S., Zamora-López, G., Dimigen, O., Sommer, W., and Kurths, J. (2012). 
Order Patterns Networks (ORPAN)-a method to estimate time-evolving functional connectivity from multivariate time series. Front. Comput. Neurosci. 6:91. doi: 10.3389/fncom.2012.00091 Schreiber, T. (2000). Measuring information transfer. Phys. Rev. Lett. 85, 461–464. doi: 10.1103/PhysRevLett.85.461 Scott, D. W. (2015). Multivariate Density Estimation: Theory, Practice, and Visualization. Hoboken, NJ: John Wiley & Sons. doi: 10.1002/9781118575574 Shalbaf, R., Behnam, H., Sleigh, J. W., Steyn-Ross, D. A., and Steyn-Ross, M. L. (2015). Frontal-temporal synchronization of EEG signals quantified by order patterns cross recurrence analysis during propofol anesthesia. IEEE Trans. Neural Syst. Rehabil. Eng. 23, 468–474. doi: 10.1109/TNSRE.2014.2350537 Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Techn. J. 27, 379–423. doi: 10.1002/j.1538-7305.1948.tb01338.x Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Boca Raton, FL; London: Chapman & Hall, London. doi: 10.1007/978-1-4899-3324-9 Snellen, H. (1862). Probebuchstaben zur Bestimmung der Sehschärfe. Utrecht: Van de Weijer. Staniek, M., and Lehnertz, K. (2008). Symbolic transfer entropy. Phys. Rev. Lett. 100:158101. doi: 10.1103/PhysRevLett.100.158101 Steffener, J., Habeck, C., O'Shea, D., Razlighi, Q., Bherer, L., and Stern, Y. (2016). Differences between chronological and brain age are related to education and self-reported physical activity. Neurobiol. Aging 40, 138–144. doi: 10.1016/j.neurobiolaging.2016.01.014 Student (1908). The probable error of a mean. Biometrika 6, 1–25. doi: 10.2307/2331554 Suetani, H., and Kitajo, K. (2020). A manifold learning approach to mapping individuality of human brain oscillations through beta-divergence. Neurosci. Res. 156, 188–196. doi: 10.1016/ Unakafov, A. M. (2015). Ordinal-patterns-based segmentation and discrimination of time series with applications to EEG data (Ph.D. thesis). University of Lübeck, Lübeck, Germany. 
van der Maaten, L., and Hinton, G. (2008). Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605. van der Maaten, L., Postma, E., and van der Herik, J. (2007). Dimensionality reduction: a comparative review. J. Mach. Learn. Res. 10, 2579–2605. Vicente, R., Wibral, M., Lindner, M., and Pipa, G. (2011). Transfer entropy–a model-free measure of effective connectivity for the neurosciences. J. Comput. Neurosci. 30, 45–67. doi: 10.1007/ Wada, Y., Nanbu, Y., Kikuchi, M., Koshino, Y., Hashimoto, T., and Yamaguchi, N. (1998). Abnormal functional connectivity in Alzheimer's disease: intrahemispheric EEG coherence during rest and photic stimulation. Eur. Arch. Psychiatry Clin. Neurosci. 248, 203–208. doi: 10.1007/s004060050038 Waschke, L., Wöstmann, M., and Obleser, J. (2017). States and traits of neural irregularity in the age-varying human brain. Sci. Rep. 7:17381. doi: 10.1038/s41598-017-17766-4 Wibral, M., Pampu, N., Priesemann, V., Siebenhühner, F., Seiwert, H., Lindner, M., et al. (2013). Measuring information-transfer delays. PLoS ONE 8:e55809. doi: 10.1371/journal.pone.0055809 Wilaiprasitporn, T., Ditthapron, A., Matchaparn, K., Tongbuasirilai, T., Banluesombatkul, N., and Chuangsuwanich, E. (2020). Affective EEG-based person identification using the deep learning approach. IEEE Trans. Cogn. Dev. Syst. 12, 486–496. doi: 10.1109/TCDS.2019.2924648 Zhang, J., Zhou, J., Tang, M., Guo, H., Small, M., and Zou, Y. (2017). Constructing ordinal partition transition networks from multivariate time series. Sci. Rep. 7:7795. doi: 10.1038/ To quantify the effect of subject separation and its dependency on considered time scales, the ratio ρ of intra- and inter-cluster-distances was calculated on different time scales. For each projection, the ratio between the mean intra- and inter-cluster-distances was calculated. The average ratio ρ and the standard deviation, depending on the lag for the OPs or the frequency band, is displayed in Figure A1. FIGURE A1 Figure A1. 
Quantification of the separation of subjects in the t-SNE projections (perplexity k = 30) for raw and pre-processed EEG data. Both FC measures (A,C) and single-channel measures (B,D) were projected. For each projection, the ratio between the mean intra- and inter-cluster-distances was calculated. We call the Euclidean distances between points belonging to the same subject intra-cluster-distances, and the distances between the center of one cluster and its three next neighbors inter-cluster-distances. The error bars display the standard deviation over 100 t-SNE projections with different random seeds.

Keywords: EEG - Electroencephalogram, t-SNE (t-distributed stochastic neighbor embedding), ordinal pattern statistics, nonlinear dimensionality reduction, biomarkers, functional connectivity, coherence, mutual information

Citation: Kottlarz I, Berg S, Toscano-Tejeida D, Steinmann I, Bähr M, Luther S, Wilke M, Parlitz U and Schlemmer A (2021) Extracting Robust Biomarkers From Multichannel EEG Time Series Using Nonlinear Dimensionality Reduction Applied to Ordinal Pattern Statistics and Spectral Quantities. Front. Physiol. 11:614565. doi: 10.3389/fphys.2020.614565

Received: 06 October 2020; Accepted: 16 December 2020; Published: 01 February 2021.

Copyright © 2021 Kottlarz, Berg, Toscano-Tejeida, Steinmann, Bähr, Luther, Wilke, Parlitz and Schlemmer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alexander Schlemmer, alexander.schlemmer@ds.mpg.de
Tank Properties & Tank Types and Different Shapes of Tank

• Rectangular Tanks
• Circular Tanks
• Cylindrical Tanks
• Cone Tanks or Frustum Tanks
• Hopper Tanks or Pyramid Tanks

Tank Properties & Tank Dimensions

Pipe Flow Advisor has a separate calculation screen for each style of tank. It makes it easy to calculate tank properties such as volume of material, tank capacity, and tank weight. It can also calculate the volume and weight of fluid in part-filled tanks for a given height of fluid, which is not an easy calculation for tanks with sloping sides, horizontal cylinders, and spherical tanks, where the relationship between fluid height and volume is not linear.

Another useful calculation that the software performs is estimating the tank emptying time for flow under gravity. As the depth of fluid in the tank falls while flow is discharged, the head of fluid pressure changes and thus the emptying flow rate changes. By calculating the flow rate for a specific condition and iterating over time, the Pipe Flow Advisor software can calculate the overall tank emptying time and show the emptying flow rate at various time intervals, through to the time when the tank becomes empty.

Next: Tank Emptying Time Calculator
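The iterative emptying-time calculation described above can be illustrated with a minimal sketch using Torricelli's law for a vertical-walled (prismatic) tank. The discharge coefficient, time step, and orifice model are assumptions for illustration, not the actual method used by Pipe Flow Advisor:

```python
import math

def emptying_time(tank_area, orifice_area, h0, cd=0.6, dt=0.1):
    """Gravity-drain time of a vertical-walled tank, stepped in time
    with Torricelli's law: Q = Cd * A_orifice * sqrt(2 g h).

    tank_area, orifice_area in m^2; h0 (initial fluid height) in m;
    cd is an assumed discharge coefficient.
    """
    g = 9.81
    h, t = h0, 0.0
    while h > 1e-6:
        q = cd * orifice_area * math.sqrt(2.0 * g * h)   # outflow, m^3/s
        h = max(h - q * dt / tank_area, 0.0)             # falling level, m
        t += dt
    return t
```

For this simple geometry the result can be checked against the closed-form drain time T = (A_tank / (Cd * A_orifice)) * sqrt(2 * h0 / g); for tanks with sloping sides, where the surface area varies with height, the iterative approach is the practical option.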
quantum mechanics

Table of Contents

1. Introduction

Quantum mechanics was discovered as a predictive framework in the early 1900s after a set of experiments (e.g. the photoelectric effect and the Stern-Gerlach experiment) showed that particles are better described as waves, and that the outcomes of measurements on particle states are random. Indeed, other experiments that tested Bell's inequality have confirmed this assertion of quantum mechanics that the world is fundamentally random. Let's take a look at some of the postulates of quantum mechanics (for which the mathematics borrows from linear algebra quite a bit).

2. Postulates

Here, we describe all the postulates of quantum mechanics, with some motivation when needed.

2.1. Postulate 1

\(| \psi \rangle\) is the state vector in a Hilbert space \(\mathcal{H}\) which describes the entire system.

2.2. Postulate 2

The norm \(\langle \psi | \psi \rangle\) of all state vectors is 1.

2.3. Postulate 3

If \(| \psi \rangle\) and \(| \phi\rangle\) represent two different quantum systems, the composite system can be described by \(| \psi \rangle \otimes | \phi \rangle\), where \(\otimes\) is the tensor product.

2.4. Postulate 4

Observable quantities are represented by Hermitian operators \(\hat{A}\) whose eigenvectors form a basis for \(\mathcal{H}\). Solutions to \(\hat{A}|\psi\rangle = a|\psi\rangle\) for eigenvalues \(a\) are the only possible observable values for a given eigenvector \(|\psi\rangle\).

2.5. Postulate 5

The time-evolution of the state vector \(|\psi\rangle\) can be given by the Schrodinger equation, a PDE:

\(\hat{H}|\psi\rangle = i\hbar\partial_{t}|\psi\rangle\)

which is motivated by the de Broglie hypothesis:

\(p = \frac{h}{\lambda}\)

where \(p\) is the momentum and \(\lambda\) is the wavelength. De Broglie hypothesized that all matter behaves like a wave, after Einstein came up with the same relation for light:

\(p = \frac{E}{c} = \frac{hf}{c} = \frac{h}{\lambda}\)

where \(h\) is Planck's constant and \(f\) is the frequency.
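Postulates 2 and 4 are easy to check numerically. The short sketch below (not part of the original notes) uses NumPy to verify that a sample state vector is normalized and that a Hermitian observable — here the Pauli-Y matrix, chosen purely as an example — has real eigenvalues, which are the only possible measurement outcomes.

```python
import numpy as np

# Postulate 2: state vectors are normalized, <psi|psi> = 1.
psi = np.array([1, 1j]) / np.sqrt(2)
norm = np.vdot(psi, psi).real       # vdot conjugates the first argument

# Postulate 4: observables are Hermitian operators; their eigenvalues
# are real.  Pauli-Y serves as the example observable.
sigma_y = np.array([[0, -1j], [1j, 0]])
assert np.allclose(sigma_y, sigma_y.conj().T)   # Hermitian: A = A^dagger
eigvals = np.linalg.eigvalsh(sigma_y)           # real eigenvalues, ascending
```

Here `eigvalsh` exploits Hermiticity and returns the real spectrum directly; for Pauli-Y the possible measurement outcomes are -1 and +1.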
Huygens' principle and interference of light

4) Reflection and refraction of plane waves using Huygens' principle

i) Reflection of a plane wave at a plane surface:-

• Consider the figure given below which shows incident and reflected wave fronts when a plane wave front travels towards a plane reflecting surface
• POQ is the ray normal to both incident and reflected wave fronts
• The angle of incidence i and angle of reflection r are the angles made by the incident and reflected ray respectively with the normal, and these are also the angles between the wave fronts and the surface as shown in figure 3
• The time taken by the ray POQ to travel from the incident wave front to the reflected one is
Total time from P to Q = t = PO/v[1] + OQ/v[1]     (1)
where v[1] is the velocity of the wave, as shown in figure (3)
• There can be different rays normal to the incident wave front, and they can strike the plane reflecting surface at different points O; hence they have different values of OA
• Since the time travelled by each ray from the incident wave front to the reflected wave front must be the same, the right side of equation (1) must be independent of OA. This condition holds only if (sin i - sin r) = 0, i.e. i = r
Thus the law of reflection states that the angle of incidence i and the angle of reflection r are always equal

ii) Refraction of plane waves at plane surfaces:-

• Consider the figure given below which shows a plane surface AB separating medium 1 from medium 2
• v[1] is the speed of light in medium 1 and v[2] the speed of light in medium 2
• The incident and refracted wave fronts make angles i and r' with the surface AB, where r' is called the angle of refraction
• The time taken by the ray POQ to travel between the incident and refracted wave fronts would be
t = PO/v[1] + OQ/v[2]     (2)
• Now the distance OA would be different for different rays, so the time t should be independent of any ray we might consider
• This can be achieved only if the coefficient of OA in equation (2) becomes equal to zero, i.e.
sin i/v[1] = sin r'/v[2]     (3)
• Equation (3) is nothing but Snell's law of refraction, sin i/sin r' = v[1]/v[2] = n, where n is called the
refractive index of the second medium with respect to the first medium.

iii) Refractive index

• The ratio of the phase velocity of light c in vacuum to its value v[1] in a medium is called the refractive index n[1] (or μ[1]) of the medium. Thus
n[1] = c/v[1]
• When light travels from medium 1 to medium 2, what we measure is the refractive index of medium 2 relative to medium 1, denoted by n[12] (or μ[12]). Thus
n[12] = v[1]/v[2] = n[2]/n[1]
where n[1] is the refractive index of medium 1 with respect to vacuum and n[2] is the refractive index of medium 2 with respect to vacuum
• When light travels from one medium to another, the frequency ν = 1/T remains the same, i.e. ν[1] = ν[2]
• Since the velocities of light v[1] and v[2] are different in different media, the wavelengths λ[1] and λ[2] are also different, i.e. λ[1]/λ[2] = v[1]/v[2]; the wavelength of light in a medium is directly proportional to the phase velocity and hence inversely proportional to the refractive index

5) Principle of Superposition of waves

• When two or more sets of waves travel through a medium and cross one another, the effects produced by one are totally independent of the others
• At any instant the resultant displacement of a particle in the medium depends on the phase difference between the waves and is the algebraic sum of the displacements it would have at the same instant due to each separate set.
This is known as the principle of superposition of waves and forms the basis of the whole theory of interference of waves discovered by Young in 1801

• If at any instant y[1], y[2], y[3], ... are the displacements due to the different waves present in the medium, then according to the superposition principle the resultant displacement y at any instant would be equal to the vector sum of the displacements (y[1], y[2], y[3], ...) due to the individual waves, i.e.,
y = y[1] + y[2] + y[3] + ...
• The resultant displacement of the particles of the medium depends on the amplitude, phase difference and frequency of the superposing waves
• Consider two waves of the same frequency f and wavelength λ travelling through a medium in the same direction, superposing at any instant of time, say t
• The equations of these waves at time t are
y[1] = a[1] sin ωt
y[2] = a[2] sin (ωt + φ)
where φ is the phase difference between the waves
• According to the principle of superposition of waves, the resultant displacement of a particle equals
y = y[1] + y[2] = a[1] sin ωt + a[2] sin (ωt + φ)     (8)
Now from the trigonometric identity
sin (ωt + φ) = sin ωt cos φ + cos ωt sin φ
Putting it in equation (8) we find
y = (a[1] + a[2] cos φ) sin ωt + a[2] sin φ cos ωt     (9)
Let us suppose
A cos θ = a[1] + a[2] cos φ and A sin θ = a[2] sin φ
Putting them in equation (9) we have
y = A sin (ωt + θ)
where the resultant amplitude A satisfies
A^2 = a[1]^2 + a[2]^2 + 2a[1]a[2] cos φ
• We know that the intensity of a wave is proportional to the square of its amplitude, i.e. I ∝ A^2
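The resultant amplitude derived above, A^2 = a[1]^2 + a[2]^2 + 2a[1]a[2] cos φ, can be checked numerically by superposing the two waves directly and reading off the peak displacement. The snippet below is an illustrative check, not part of the original notes.

```python
import numpy as np

def resultant_amplitude(a1, a2, phi):
    """Amplitude of y = a1*sin(wt) + a2*sin(wt + phi) from the formula."""
    return np.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * np.cos(phi))

# Numerical check: superpose the two waves over one full period and
# take the peak of the resultant displacement.
t = np.linspace(0, 2 * np.pi, 100000)
a1, a2, phi = 1.0, 0.5, np.pi / 3
y = a1 * np.sin(t) + a2 * np.sin(t + phi)
```

For φ = 0 the formula gives constructive interference, A = a1 + a2, and for φ = π destructive interference, A = |a1 - a2|, matching the intensity behaviour I ∝ A^2 used in interference patterns.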
Local buoyant instability of magnetized shear flows The effect of magnetic shear and shear flow on local buoyant instabilities is investigated. A simple model is constructed allowing for an arbitrary entropy gradient and a shear plasma flow in the Boussinesq approximation. A transformation to shearing magnetic coordinates achieves a model with plasma flow along the magnetic field lines where the coordinate lines are coincident with the field lines. The solution for the normal modes of the system depends on two parameters: the Alfvén Mach number of the plasma flow and the entropy gradient. The behavior of the unstable normal modes of this system is summarized by a stability diagram. Important characteristics of this stability diagram are the following: magnetic shear is stabilizing, and the entropy gradient must exceed a threshold value for unstable mode growth to occur; flow acts to suppress mode growth in a substantially unstable regime as expected, yet near marginal stability it can lessen the stabilizing effect of magnetic shear and enhance the growth rates of the instability; and, as the Alfvén Mach number approaches 1, the instability is completely stabilized. Analytical work is presented supporting the characteristics of the stability diagram and illuminating the physical mechanisms controlling the behavior of the model. A derivation of the stability criterion for the case without shear flow, asymptotic solutions in the limit that the Alfvén Mach number approaches 1 and in the limit of zero growth rate, a complete WKB solution for large growth rates, an exactly soluble bounded straight field case, and energy conservation relations are all presented. The implications of this work for astrophysical and fusion applications and the potential for future research extending the results to include compressibility are discussed. 
All Science Journal Classification (ASJC) codes
• Astronomy and Astrophysics
• Space and Planetary Science
• Gravitation
• Instabilities
• MHD
• Magnetic fields
gcc/tree-vect-patterns.c

/* Analysis Utilities for Loop Vectorization.
   Copyright (C) 2006-2021 Free Software Foundation, Inc.
   Contributed by Dorit Nuzman <dorit@il.ibm.com>

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "backend.h"
#include "rtl.h"
#include "tree.h"
#include "gimple.h"
#include "ssa.h"
#include "expmed.h"
#include "optabs-tree.h"
#include "insn-config.h"
#include "recog.h"		/* FIXME: for insn_data */
#include "fold-const.h"
#include "stor-layout.h"
#include "tree-eh.h"
#include "gimplify.h"
#include "gimple-iterator.h"
#include "cfgloop.h"
#include "tree-vectorizer.h"
#include "dumpfile.h"
#include "builtins.h"
#include "internal-fn.h"
#include "case-cfn-macros.h"
#include "fold-const-call.h"
#include "attribs.h"
#include "cgraph.h"
#include "omp-simd-clone.h"
#include "predict.h"
#include "tree-vector-builder.h"
#include "vec-perm-indices.h"
#include "gimple-range.h"

/* Return true if we have a useful VR_RANGE range for VAR, storing it
   in *MIN_VALUE and *MAX_VALUE if so.  Note the range in the dump files.  */

static bool
vect_get_range_info (tree var, wide_int *min_value, wide_int *max_value)
{
  value_range vr;
  get_range_query (cfun)->range_of_expr (vr, var);
  if (vr.undefined_p ())
    vr.set_varying (TREE_TYPE (var));
  *min_value = wi::to_wide (vr.min ());
  *max_value = wi::to_wide (vr.max ());
  value_range_kind vr_type = vr.kind ();
  wide_int nonzero = get_nonzero_bits (var);
  signop sgn = TYPE_SIGN (TREE_TYPE (var));
  if (intersect_range_with_nonzero_bits (vr_type, min_value, max_value,
					 nonzero, sgn) == VR_RANGE)
    {
      if (dump_enabled_p ())
	{
	  dump_generic_expr_loc (MSG_NOTE, vect_location, TDF_SLIM, var);
	  dump_printf (MSG_NOTE, " has range [");
	  dump_hex (MSG_NOTE, *min_value);
	  dump_printf (MSG_NOTE, ", ");
	  dump_hex (MSG_NOTE, *max_value);
	  dump_printf (MSG_NOTE, "]\n");
	}
      return true;
    }
  else
    {
      if (dump_enabled_p ())
	{
	  dump_generic_expr_loc (MSG_NOTE, vect_location, TDF_SLIM, var);
	  dump_printf (MSG_NOTE, " has no range info\n");
	}
      return false;
    }
}

/* Report that we've found an instance of pattern PATTERN in
   statement STMT.  */

static void
vect_pattern_detected (const char *name, gimple *stmt)
{
  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "%s: detected: %G", name, stmt);
}

/* Associate pattern statement PATTERN_STMT with ORIG_STMT_INFO and
   return the pattern statement's stmt_vec_info.  Set its vector type
   to VECTYPE if it doesn't have one already.  */

static stmt_vec_info
vect_init_pattern_stmt (vec_info *vinfo, gimple *pattern_stmt,
			stmt_vec_info orig_stmt_info, tree vectype)
{
  stmt_vec_info pattern_stmt_info = vinfo->lookup_stmt (pattern_stmt);
  if (pattern_stmt_info == NULL)
    pattern_stmt_info = vinfo->add_stmt (pattern_stmt);
  gimple_set_bb (pattern_stmt, gimple_bb (orig_stmt_info->stmt));

  pattern_stmt_info->pattern_stmt_p = true;
  STMT_VINFO_RELATED_STMT (pattern_stmt_info) = orig_stmt_info;
  STMT_VINFO_DEF_TYPE (pattern_stmt_info)
    = STMT_VINFO_DEF_TYPE (orig_stmt_info);
  if (!STMT_VINFO_VECTYPE (pattern_stmt_info))
    {
      gcc_assert (VECTOR_BOOLEAN_TYPE_P (vectype)
		  == vect_use_mask_type_p (orig_stmt_info));
      STMT_VINFO_VECTYPE (pattern_stmt_info) = vectype;
      pattern_stmt_info->mask_precision = orig_stmt_info->mask_precision;
    }
  return pattern_stmt_info;
}

/* Set the pattern statement of ORIG_STMT_INFO to PATTERN_STMT.
   Also set the vector type of PATTERN_STMT to VECTYPE, if it doesn't
   have one already.  */

static void
vect_set_pattern_stmt (vec_info *vinfo, gimple *pattern_stmt,
		       stmt_vec_info orig_stmt_info, tree vectype)
{
  STMT_VINFO_IN_PATTERN_P (orig_stmt_info) = true;
  STMT_VINFO_RELATED_STMT (orig_stmt_info)
    = vect_init_pattern_stmt (vinfo, pattern_stmt, orig_stmt_info, vectype);
}

/* Add NEW_STMT to STMT_INFO's pattern definition statements.  If VECTYPE
   is nonnull, record that NEW_STMT's vector type is VECTYPE, which might
   be different from the vector type of the final pattern statement.
   If VECTYPE is a mask type, SCALAR_TYPE_FOR_MASK is the scalar type
   from which it was derived.
*/ static inline void append_pattern_def_seq (vec_info *vinfo, stmt_vec_info stmt_info, gimple *new_stmt, tree vectype = NULL_TREE, tree scalar_type_for_mask = NULL_TREE) gcc_assert (!scalar_type_for_mask == (!vectype || !VECTOR_BOOLEAN_TYPE_P (vectype))); if (vectype) stmt_vec_info new_stmt_info = vinfo->add_stmt (new_stmt); STMT_VINFO_VECTYPE (new_stmt_info) = vectype; if (scalar_type_for_mask) = GET_MODE_BITSIZE (SCALAR_TYPE_MODE (scalar_type_for_mask)); gimple_seq_add_stmt_without_update (&STMT_VINFO_PATTERN_DEF_SEQ (stmt_info), /* The caller wants to perform new operations on vect_external variable VAR, so that the result of the operations would also be vect_external. Return the edge on which the operations can be performed, if one exists. Return null if the operations should instead be treated as part of the pattern that needs them. */ static edge vect_get_external_def_edge (vec_info *vinfo, tree var) edge e = NULL; if (loop_vec_info loop_vinfo = dyn_cast <loop_vec_info> (vinfo)) e = loop_preheader_edge (loop_vinfo->loop); if (!SSA_NAME_IS_DEFAULT_DEF (var)) basic_block bb = gimple_bb (SSA_NAME_DEF_STMT (var)); if (bb == NULL || !dominated_by_p (CDI_DOMINATORS, e->dest, bb)) e = NULL; return e; /* Return true if the target supports a vector version of CODE, where CODE is known to map to a direct optab with the given SUBTYPE. ITYPE specifies the type of (some of) the scalar inputs and OTYPE specifies the type of the scalar result. If CODE allows the inputs and outputs to have different type (such as for WIDEN_SUM_EXPR), it is the input mode rather than the output mode that determines the appropriate target pattern. Operand 0 of the target pattern then specifies the mode that the output must have. When returning true, set *VECOTYPE_OUT to the vector version of OTYPE. Also set *VECITYPE_OUT to the vector version of ITYPE if VECITYPE_OUT is nonnull. 
*/ static bool vect_supportable_direct_optab_p (vec_info *vinfo, tree otype, tree_code code, tree itype, tree *vecotype_out, tree *vecitype_out = NULL, enum optab_subtype subtype = optab_default) tree vecitype = get_vectype_for_scalar_type (vinfo, itype); if (!vecitype) return false; tree vecotype = get_vectype_for_scalar_type (vinfo, otype); if (!vecotype) return false; optab optab = optab_for_tree_code (code, vecitype, subtype); if (!optab) return false; insn_code icode = optab_handler (optab, TYPE_MODE (vecitype)); if (icode == CODE_FOR_nothing || insn_data[icode].operand[0].mode != TYPE_MODE (vecotype)) return false; *vecotype_out = vecotype; if (vecitype_out) *vecitype_out = vecitype; return true; /* Round bit precision PRECISION up to a full element. */ static unsigned int vect_element_precision (unsigned int precision) precision = 1 << ceil_log2 (precision); return MAX (precision, BITS_PER_UNIT); /* If OP is defined by a statement that's being considered for vectorization, return information about that statement, otherwise return NULL. */ static stmt_vec_info vect_get_internal_def (vec_info *vinfo, tree op) stmt_vec_info def_stmt_info = vinfo->lookup_def (op); if (def_stmt_info && STMT_VINFO_DEF_TYPE (def_stmt_info) == vect_internal_def) return def_stmt_info; return NULL; /* Check whether NAME, an ssa-name used in STMT_VINFO, is a result of a type promotion, such that: DEF_STMT: NAME = NOP (name0) If CHECK_SIGN is TRUE, check that either both types are signed or both are unsigned. 
*/ static bool type_conversion_p (vec_info *vinfo, tree name, bool check_sign, tree *orig_type, gimple **def_stmt, bool *promotion) tree type = TREE_TYPE (name); tree oprnd0; enum vect_def_type dt; stmt_vec_info def_stmt_info; if (!vect_is_simple_use (name, vinfo, &dt, &def_stmt_info, def_stmt)) return false; if (dt != vect_internal_def && dt != vect_external_def && dt != vect_constant_def) return false; if (!*def_stmt) return false; if (!is_gimple_assign (*def_stmt)) return false; if (!CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (*def_stmt))) return false; oprnd0 = gimple_assign_rhs1 (*def_stmt); *orig_type = TREE_TYPE (oprnd0); if (!INTEGRAL_TYPE_P (type) || !INTEGRAL_TYPE_P (*orig_type) || ((TYPE_UNSIGNED (type) != TYPE_UNSIGNED (*orig_type)) && check_sign)) return false; if (TYPE_PRECISION (type) >= (TYPE_PRECISION (*orig_type) * 2)) *promotion = true; *promotion = false; if (!vect_is_simple_use (oprnd0, vinfo, &dt)) return false; return true; /* Holds information about an input operand after some sign changes and type promotions have been peeled away. */ class vect_unpromoted_value { vect_unpromoted_value (); void set_op (tree, vect_def_type, stmt_vec_info = NULL); /* The value obtained after peeling away zero or more casts. */ tree op; /* The type of OP. */ tree type; /* The definition type of OP. */ vect_def_type dt; /* If OP is the result of peeling at least one cast, and if the cast of OP itself is a vectorizable statement, CASTER identifies that statement, otherwise it is null. */ stmt_vec_info caster; inline vect_unpromoted_value::vect_unpromoted_value () : op (NULL_TREE), type (NULL_TREE), dt (vect_uninitialized_def), caster (NULL) /* Set the operand to OP_IN, its definition type to DT_IN, and the statement that casts it to CASTER_IN. 
*/ inline void vect_unpromoted_value::set_op (tree op_in, vect_def_type dt_in, stmt_vec_info caster_in) op = op_in; type = TREE_TYPE (op); dt = dt_in; caster = caster_in; /* If OP is a vectorizable SSA name, strip a sequence of integer conversions to reach some vectorizable inner operand OP', continuing as long as it is possible to convert OP' back to OP using a possible sign change followed by a possible promotion P. Return this OP', or null if OP is not a vectorizable SSA name. If there is a promotion P, describe its input in UNPROM, otherwise describe OP' in UNPROM. If SINGLE_USE_P is nonnull, set *SINGLE_USE_P to false if any of the SSA names involved have more than one user. A successful return means that it is possible to go from OP' to OP via UNPROM. The cast from OP' to UNPROM is at most a sign change, whereas the cast from UNPROM to OP might be a promotion, a sign change, or a nop. E.g. say we have: signed short *ptr = ...; signed short C = *ptr; unsigned short B = (unsigned short) C; // sign change signed int A = (signed int) B; // unsigned promotion ...possible other uses of A... unsigned int OP = (unsigned int) A; // sign change In this case it's possible to go directly from C to OP using: OP = (unsigned int) (unsigned short) C; +------------+ +--------------+ promotion sign change so OP' would be C. The input to the promotion is B, so UNPROM would describe B. */ static tree vect_look_through_possible_promotion (vec_info *vinfo, tree op, vect_unpromoted_value *unprom, bool *single_use_p = NULL) tree res = NULL_TREE; tree op_type = TREE_TYPE (op); unsigned int orig_precision = TYPE_PRECISION (op_type); unsigned int min_precision = orig_precision; stmt_vec_info caster = NULL; while (TREE_CODE (op) == SSA_NAME && INTEGRAL_TYPE_P (op_type)) /* See whether OP is simple enough to vectorize. 
*/ stmt_vec_info def_stmt_info; gimple *def_stmt; vect_def_type dt; if (!vect_is_simple_use (op, vinfo, &dt, &def_stmt_info, &def_stmt)) /* If OP is the input of a demotion, skip over it to see whether OP is itself the result of a promotion. If so, the combined effect of the promotion and the demotion might fit the required pattern, otherwise neither operation fits. This copes with cases such as the result of an arithmetic operation being truncated before being stored, and where that arithmetic operation has been recognized as an over-widened one. */ if (TYPE_PRECISION (op_type) <= min_precision) /* Use OP as the UNPROM described above if we haven't yet found a promotion, or if using the new input preserves the sign of the previous promotion. */ if (!res || TYPE_PRECISION (unprom->type) == orig_precision || TYPE_SIGN (unprom->type) == TYPE_SIGN (op_type)) unprom->set_op (op, dt, caster); min_precision = TYPE_PRECISION (op_type); /* Stop if we've already seen a promotion and if this conversion does more than change the sign. */ else if (TYPE_PRECISION (op_type) != TYPE_PRECISION (unprom->type)) /* The sequence now extends to OP. */ res = op; /* See whether OP is defined by a cast. Record it as CASTER if the cast is potentially vectorizable. */ if (!def_stmt) caster = def_stmt_info; /* Ignore pattern statements, since we don't link uses for them. */ if (caster && single_use_p && !STMT_VINFO_RELATED_STMT (caster) && !has_single_use (res)) *single_use_p = false; gassign *assign = dyn_cast <gassign *> (def_stmt); if (!assign || !CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (def_stmt))) /* Continue with the input to the cast. */ op = gimple_assign_rhs1 (def_stmt); op_type = TREE_TYPE (op); return res; /* OP is an integer operand to an operation that returns TYPE, and we want to treat the operation as a widening one. So far we can treat it as widening from *COMMON_TYPE. 
Return true if OP is suitable for such a widening operation, either widening from *COMMON_TYPE or from some supertype of it. Update *COMMON_TYPE to the supertype in the latter case. SHIFT_P is true if OP is a shift amount. */ static bool vect_joust_widened_integer (tree type, bool shift_p, tree op, tree *common_type) /* Calculate the minimum precision required by OP, without changing the sign of either operand. */ unsigned int precision; if (shift_p) if (!wi::leu_p (wi::to_widest (op), TYPE_PRECISION (type) / 2)) return false; precision = TREE_INT_CST_LOW (op); precision = wi::min_precision (wi::to_widest (op), TYPE_SIGN (*common_type)); if (precision * 2 > TYPE_PRECISION (type)) return false; /* If OP requires a wider type, switch to that type. The checks above ensure that this is still narrower than the result. */ precision = vect_element_precision (precision); if (TYPE_PRECISION (*common_type) < precision) *common_type = build_nonstandard_integer_type (precision, TYPE_UNSIGNED (*common_type)); return true; /* Return true if the common supertype of NEW_TYPE and *COMMON_TYPE is narrower than type, storing the supertype in *COMMON_TYPE if so. */ static bool vect_joust_widened_type (tree type, tree new_type, tree *common_type) if (types_compatible_p (*common_type, new_type)) return true; /* See if *COMMON_TYPE can hold all values of NEW_TYPE. */ if ((TYPE_PRECISION (new_type) < TYPE_PRECISION (*common_type)) && (TYPE_UNSIGNED (new_type) || !TYPE_UNSIGNED (*common_type))) return true; /* See if NEW_TYPE can hold all values of *COMMON_TYPE. */ if (TYPE_PRECISION (*common_type) < TYPE_PRECISION (new_type) && (TYPE_UNSIGNED (*common_type) || !TYPE_UNSIGNED (new_type))) *common_type = new_type; return true; /* We have mismatched signs, with the signed type being no wider than the unsigned type. In this case we need a wider signed type. 
*/ unsigned int precision = MAX (TYPE_PRECISION (*common_type), TYPE_PRECISION (new_type)); precision *= 2; if (precision * 2 > TYPE_PRECISION (type)) return false; *common_type = build_nonstandard_integer_type (precision, false); return true; /* Check whether STMT_INFO can be viewed as a tree of integer operations in which each node either performs CODE or WIDENED_CODE, and where each leaf operand is narrower than the result of STMT_INFO. MAX_NOPS specifies the maximum number of leaf operands. SHIFT_P says whether CODE and WIDENED_CODE are some sort of shift. If STMT_INFO is such a tree, return the number of leaf operands and describe them in UNPROM[0] onwards. Also set *COMMON_TYPE to a type that (a) is narrower than the result of STMT_INFO and (b) can hold all leaf operand values. If SUBTYPE then allow that the signs of the operands may differ in signs but not in precision. SUBTYPE is updated to reflect Return 0 if STMT_INFO isn't such a tree, or if no such COMMON_TYPE exists. */ static unsigned int vect_widened_op_tree (vec_info *vinfo, stmt_vec_info stmt_info, tree_code code, tree_code widened_code, bool shift_p, unsigned int max_nops, vect_unpromoted_value *unprom, tree *common_type, enum optab_subtype *subtype = NULL) /* Check for an integer operation with the right code. */ gassign *assign = dyn_cast <gassign *> (stmt_info->stmt); if (!assign) return 0; tree_code rhs_code = gimple_assign_rhs_code (assign); if (rhs_code != code && rhs_code != widened_code) return 0; tree type = TREE_TYPE (gimple_assign_lhs (assign)); if (!INTEGRAL_TYPE_P (type)) return 0; /* Assume that both operands will be leaf operands. */ max_nops -= 2; /* Check the operands. */ unsigned int next_op = 0; for (unsigned int i = 0; i < 2; ++i) vect_unpromoted_value *this_unprom = &unprom[next_op]; unsigned int nops = 1; tree op = gimple_op (assign, i + 1); if (i == 1 && TREE_CODE (op) == INTEGER_CST) /* We already have a common type from earlier operands. Update it to account for OP. 
*/ this_unprom->set_op (op, vect_constant_def); if (!vect_joust_widened_integer (type, shift_p, op, common_type)) return 0; /* Only allow shifts by constants. */ if (shift_p && i == 1) return 0; if (!vect_look_through_possible_promotion (vinfo, op, this_unprom)) return 0; if (TYPE_PRECISION (this_unprom->type) == TYPE_PRECISION (type)) /* The operand isn't widened. If STMT_INFO has the code for an unwidened operation, recursively check whether this operand is a node of the tree. */ if (rhs_code != code || max_nops == 0 || this_unprom->dt != vect_internal_def) return 0; /* Give back the leaf slot allocated above now that we're not treating this as a leaf operand. */ max_nops += 1; /* Recursively process the definition of the operand. */ stmt_vec_info def_stmt_info = vinfo->lookup_def (this_unprom->op); nops = vect_widened_op_tree (vinfo, def_stmt_info, code, widened_code, shift_p, max_nops, this_unprom, common_type, if (nops == 0) return 0; max_nops -= nops; /* Make sure that the operand is narrower than the result. */ if (TYPE_PRECISION (this_unprom->type) * 2 > TYPE_PRECISION (type)) return 0; /* Update COMMON_TYPE for the new operand. */ if (i == 0) *common_type = this_unprom->type; else if (!vect_joust_widened_type (type, this_unprom->type, if (subtype) /* See if we can sign extend the smaller type. */ if (TYPE_PRECISION (this_unprom->type) > TYPE_PRECISION (*common_type)) *common_type = this_unprom->type; *subtype = optab_vector_mixed_sign; return 0; next_op += nops; return next_op; /* Helper to return a new temporary for pattern of TYPE for STMT. If STMT is NULL, the caller must set SSA_NAME_DEF_STMT for the returned SSA var. */ static tree vect_recog_temp_ssa_var (tree type, gimple *stmt) return make_temp_ssa_name (type, stmt, "patt"); /* STMT2_INFO describes a type conversion that could be split into STMT1 followed by a version of STMT2_INFO that takes NEW_RHS as its first input. Try to do this using pattern statements, returning true on success. 
*/

static bool
vect_split_statement (vec_info *vinfo, stmt_vec_info stmt2_info, tree new_rhs,
		      gimple *stmt1, tree vectype)
{
  if (is_pattern_stmt_p (stmt2_info))
    {
      /* STMT2_INFO is part of a pattern.  Get the statement to which
	 the pattern is attached.  */
      stmt_vec_info orig_stmt2_info = STMT_VINFO_RELATED_STMT (stmt2_info);
      vect_init_pattern_stmt (vinfo, stmt1, orig_stmt2_info, vectype);

      if (dump_enabled_p ())
	dump_printf_loc (MSG_NOTE, vect_location,
			 "Splitting pattern statement: %G", stmt2_info->stmt);

      /* Since STMT2_INFO is a pattern statement, we can change it
	 in-situ without worrying about changing the code for the
	 containing block.  */
      gimple_assign_set_rhs1 (stmt2_info->stmt, new_rhs);

      if (dump_enabled_p ())
	{
	  dump_printf_loc (MSG_NOTE, vect_location, "into: %G", stmt1);
	  dump_printf_loc (MSG_NOTE, vect_location, "and: %G",
			   stmt2_info->stmt);
	}

      gimple_seq *def_seq = &STMT_VINFO_PATTERN_DEF_SEQ (orig_stmt2_info);
      if (STMT_VINFO_RELATED_STMT (orig_stmt2_info) == stmt2_info)
	/* STMT2_INFO is the actual pattern statement.  Add STMT1
	   to the end of the definition sequence.  */
	gimple_seq_add_stmt_without_update (def_seq, stmt1);
      else
	{
	  /* STMT2_INFO belongs to the definition sequence.  Insert STMT1
	     before it.  */
	  gimple_stmt_iterator gsi = gsi_for_stmt (stmt2_info->stmt, def_seq);
	  gsi_insert_before_without_update (&gsi, stmt1, GSI_SAME_STMT);
	}
      return true;
    }
  else
    {
      /* STMT2_INFO doesn't yet have a pattern.  Try to create a
	 two-statement pattern now.  */
      gcc_assert (!STMT_VINFO_RELATED_STMT (stmt2_info));
      tree lhs_type = TREE_TYPE (gimple_get_lhs (stmt2_info->stmt));
      tree lhs_vectype = get_vectype_for_scalar_type (vinfo, lhs_type);
      if (!lhs_vectype)
	return false;

      if (dump_enabled_p ())
	dump_printf_loc (MSG_NOTE, vect_location,
			 "Splitting statement: %G", stmt2_info->stmt);

      /* Add STMT1 as a singleton pattern definition sequence.  */
      gimple_seq *def_seq = &STMT_VINFO_PATTERN_DEF_SEQ (stmt2_info);
      vect_init_pattern_stmt (vinfo, stmt1, stmt2_info, vectype);
      gimple_seq_add_stmt_without_update (def_seq, stmt1);

      /* Build the second of the two pattern statements.  */
      tree new_lhs = vect_recog_temp_ssa_var (lhs_type, NULL);
      gassign *new_stmt2 = gimple_build_assign (new_lhs, NOP_EXPR, new_rhs);
      vect_set_pattern_stmt (vinfo, new_stmt2, stmt2_info, lhs_vectype);

      if (dump_enabled_p ())
	{
	  dump_printf_loc (MSG_NOTE, vect_location,
			   "into pattern statements: %G", stmt1);
	  dump_printf_loc (MSG_NOTE, vect_location, "and: %G", new_stmt2);
	}

      return true;
    }
}

/* Convert UNPROM to TYPE and return the result, adding new statements
   to STMT_INFO's pattern definition statements if no better way is
   available.  VECTYPE is the vector form of TYPE.

   If SUBTYPE then convert the type based on the subtype.  */

static tree
vect_convert_input (vec_info *vinfo, stmt_vec_info stmt_info, tree type,
		    vect_unpromoted_value *unprom, tree vectype,
		    enum optab_subtype subtype = optab_default)
{
  /* Update the type if the signs differ.  */
  if (subtype == optab_vector_mixed_sign
      && TYPE_SIGN (type) != TYPE_SIGN (TREE_TYPE (unprom->op)))
    type = build_nonstandard_integer_type (TYPE_PRECISION (type),
					   TYPE_SIGN (unprom->type));

  /* Check for a no-op conversion.  */
  if (types_compatible_p (type, TREE_TYPE (unprom->op)))
    return unprom->op;

  /* Allow the caller to create constant vect_unpromoted_values.  */
  if (TREE_CODE (unprom->op) == INTEGER_CST)
    return wide_int_to_tree (type, wi::to_widest (unprom->op));

  tree input = unprom->op;
  if (unprom->caster)
    {
      tree lhs = gimple_get_lhs (unprom->caster->stmt);
      tree lhs_type = TREE_TYPE (lhs);

      /* If the result of the existing cast is the right width, use it
	 instead of the source of the cast.  */
      if (TYPE_PRECISION (lhs_type) == TYPE_PRECISION (type))
	input = lhs;
      /* If the precision we want is between the source and result
	 precisions of the existing cast, try splitting the cast into
	 two and tapping into a mid-way point.  */
      else if (TYPE_PRECISION (lhs_type) > TYPE_PRECISION (type)
	       && TYPE_PRECISION (type) > TYPE_PRECISION (unprom->type))
	{
	  /* In order to preserve the semantics of the original cast,
	     give the mid-way point the same signedness as the input value.

	     It would be possible to use a signed type here instead if
	     TYPE is signed and UNPROM->TYPE is unsigned, but that would
	     make the sign of the midtype sensitive to the order in which
	     we process the statements, since the signedness of TYPE is
	     the signedness required by just one of possibly many users.
	     Also, unsigned promotions are usually as cheap as or cheaper
	     than signed ones, so it's better to keep an unsigned
	     promotion.  */
	  tree midtype = build_nonstandard_integer_type
	    (TYPE_PRECISION (type), TYPE_UNSIGNED (unprom->type));
	  tree vec_midtype = get_vectype_for_scalar_type (vinfo, midtype);
	  if (vec_midtype)
	    {
	      input = vect_recog_temp_ssa_var (midtype, NULL);
	      gassign *new_stmt = gimple_build_assign (input, NOP_EXPR,
						       unprom->op);
	      if (!vect_split_statement (vinfo, unprom->caster, input,
					 new_stmt, vec_midtype))
		append_pattern_def_seq (vinfo, stmt_info, new_stmt,
					vec_midtype);
	    }
	}

      /* See if we can reuse an existing result.  */
      if (types_compatible_p (type, TREE_TYPE (input)))
	return input;
    }

  /* We need a new conversion statement.  */
  tree new_op = vect_recog_temp_ssa_var (type, NULL);
  gassign *new_stmt = gimple_build_assign (new_op, NOP_EXPR, input);

  /* If OP is an external value, see if we can insert the new statement
     on an incoming edge.  */
  if (input == unprom->op && unprom->dt == vect_external_def)
    if (edge e = vect_get_external_def_edge (vinfo, input))
      {
	basic_block new_bb = gsi_insert_on_edge_immediate (e, new_stmt);
	gcc_assert (!new_bb);
	return new_op;
      }

  /* As a (common) last resort, add the statement to the pattern itself.  */
  append_pattern_def_seq (vinfo, stmt_info, new_stmt, vectype);
  return new_op;
}

/* Invoke vect_convert_input for N elements of UNPROM and store the
   result in the corresponding elements of RESULT.

   If SUBTYPE then convert the type based on the subtype.
*/

static void
vect_convert_inputs (vec_info *vinfo, stmt_vec_info stmt_info, unsigned int n,
		     tree *result, tree type, vect_unpromoted_value *unprom,
		     tree vectype, enum optab_subtype subtype = optab_default)
{
  for (unsigned int i = 0; i < n; ++i)
    {
      /* Reuse the result of an earlier conversion of the same input,
	 if there is one.  */
      unsigned int j;
      for (j = 0; j < i; ++j)
	if (unprom[j].op == unprom[i].op)
	  break;
      if (j < i)
	result[i] = result[j];
      else
	result[i] = vect_convert_input (vinfo, stmt_info,
					type, &unprom[i], vectype, subtype);
    }
}

/* The caller has created a (possibly empty) sequence of pattern definition
   statements followed by a single statement PATTERN_STMT.  Cast the result
   of this final statement to TYPE.  If a new statement is needed, add
   PATTERN_STMT to the end of STMT_INFO's pattern definition statements
   and return the new statement, otherwise return PATTERN_STMT as-is.
   VECITYPE is the vector form of PATTERN_STMT's result type.  */

static gimple *
vect_convert_output (vec_info *vinfo, stmt_vec_info stmt_info, tree type,
		     gimple *pattern_stmt, tree vecitype)
{
  tree lhs = gimple_get_lhs (pattern_stmt);
  if (!types_compatible_p (type, TREE_TYPE (lhs)))
    {
      append_pattern_def_seq (vinfo, stmt_info, pattern_stmt, vecitype);
      tree cast_var = vect_recog_temp_ssa_var (type, NULL);
      pattern_stmt = gimple_build_assign (cast_var, NOP_EXPR, lhs);
    }
  return pattern_stmt;
}

/* Return true if STMT_VINFO describes a reduction for which reassociation
   is allowed.  If STMT_INFO is part of a group, assume that it's part of
   a reduction chain and optimistically assume that all statements
   except the last allow reassociation.
   Also require it to have code CODE and to be a reduction
   in the outermost loop.  When returning true, store the operands
   in *OP0_OUT and *OP1_OUT.
*/

static bool
vect_reassociating_reduction_p (vec_info *vinfo,
				stmt_vec_info stmt_info, tree_code code,
				tree *op0_out, tree *op1_out)
{
  loop_vec_info loop_info = dyn_cast <loop_vec_info> (vinfo);
  if (!loop_info)
    return false;

  gassign *assign = dyn_cast <gassign *> (stmt_info->stmt);
  if (!assign || gimple_assign_rhs_code (assign) != code)
    return false;

  /* We don't allow changing the order of the computation in the inner-loop
     when doing outer-loop vectorization.  */
  class loop *loop = LOOP_VINFO_LOOP (loop_info);
  if (loop && nested_in_vect_loop_p (loop, stmt_info))
    return false;

  if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def)
    {
      if (needs_fold_left_reduction_p (TREE_TYPE (gimple_assign_lhs (assign)),
				       code))
	return false;
    }
  else if (REDUC_GROUP_FIRST_ELEMENT (stmt_info) == NULL)
    return false;

  *op0_out = gimple_assign_rhs1 (assign);
  *op1_out = gimple_assign_rhs2 (assign);
  if (commutative_tree_code (code) && STMT_VINFO_REDUC_IDX (stmt_info) == 0)
    std::swap (*op0_out, *op1_out);
  return true;
}

/* Function vect_recog_dot_prod_pattern

   Try to find the following pattern:

     type1a x_t
     type1b y_t;
     TYPE1 prod;
     TYPE2 sum = init;
   loop:
     sum_0 = phi <init, sum_1>
     S1  x_t = ...
     S2  y_t = ...
     S3  x_T = (TYPE1) x_t;
     S4  y_T = (TYPE1) y_t;
     S5  prod = x_T * y_T;
     [S6  prod = (TYPE2) prod;  #optional]
     S7  sum_1 = prod + sum_0;

   where 'TYPE1' is exactly double the size of type 'type1a' and 'type1b',
   the sign of 'TYPE1' must be one of 'type1a' or 'type1b' but the sign of
   'type1a' and 'type1b' can differ.

   Input:

   * STMT_VINFO: The stmt from which the pattern search begins.  In the
   example, when this function is called with S7, the pattern
   {S3,S4,S5,S6,S7} will be detected.

   Output:

   * TYPE_OUT: The type of the output of this pattern.

   * Return value: A new stmt that will be used to replace the sequence of
   stmts that constitute the pattern.  In this case it will be:
        WIDEN_DOT_PRODUCT <x_t, y_t, sum_0>

   Note: The dot-prod idiom is a widening reduction pattern that is
	 vectorized without preserving all the intermediate results.
	 It produces only N/2 (widened) results (by summing up pairs of
	 intermediate results) rather than all N results.  Therefore, we
	 cannot allow this pattern when we want to get all the results and
	 in the correct order (as is the case when this computation is in an
	 inner-loop nested in an outer-loop that is being vectorized).  */

static gimple *
vect_recog_dot_prod_pattern (vec_info *vinfo,
			     stmt_vec_info stmt_vinfo, tree *type_out)
{
  tree oprnd0, oprnd1;
  gimple *last_stmt = stmt_vinfo->stmt;
  tree type, half_type;
  gimple *pattern_stmt;
  tree var;

  /* Look for the following pattern
          DX = (TYPE1) X;
          DY = (TYPE1) Y;
          DPROD = DX * DY;
          DDPROD = (TYPE2) DPROD;
          sum_1 = DDPROD + sum_0;
     In which
     - DX is double the size of X
     - DY is double the size of Y
     - DX, DY, DPROD all have the same type but the sign
       between X, Y and DPROD can differ.
     - sum is the same size of DPROD or bigger
     - sum has been recognized as a reduction variable.

     This is equivalent to:
       DPROD = X w* Y;          #widen mult
       sum_1 = DPROD w+ sum_0;  #widen summation
     or
       DPROD = X w* Y;          #widen mult
       sum_1 = DPROD + sum_0;   #summation
  */

  /* Starting from LAST_STMT, follow the defs of its uses in search
     of the above pattern.  */

  if (!vect_reassociating_reduction_p (vinfo, stmt_vinfo, PLUS_EXPR,
				       &oprnd0, &oprnd1))
    return NULL;

  type = TREE_TYPE (gimple_get_lhs (last_stmt));

  vect_unpromoted_value unprom_mult;
  oprnd0 = vect_look_through_possible_promotion (vinfo, oprnd0, &unprom_mult);

  /* So far so good.  Since last_stmt was detected as a (summation) reduction,
     we know that oprnd1 is the reduction variable (defined by a loop-header
     phi), and oprnd0 is an ssa-name defined by a stmt in the loop body.
     Left to check that oprnd0 is defined by a (widen_)mult_expr  */
  if (!oprnd0)
    return NULL;

  stmt_vec_info mult_vinfo = vect_get_internal_def (vinfo, oprnd0);
  if (!mult_vinfo)
    return NULL;

  /* FORNOW.  Can continue analyzing the def-use chain when this stmt is in a
     phi inside the loop (in case we are analyzing an outer-loop).  */
  vect_unpromoted_value unprom0[2];
  enum optab_subtype subtype = optab_vector;
  if (!vect_widened_op_tree (vinfo, mult_vinfo, MULT_EXPR, WIDEN_MULT_EXPR,
			     false, 2, unprom0, &half_type, &subtype))
    return NULL;

  /* If there are two widening operations, make sure they agree on the sign
     of the extension.  The result of an optab_vector_mixed_sign operation
     is signed; otherwise, the result has the same sign as the operands.  */
  if (TYPE_PRECISION (unprom_mult.type) != TYPE_PRECISION (type)
      && (subtype == optab_vector_mixed_sign
	  ? TYPE_UNSIGNED (unprom_mult.type)
	  : TYPE_SIGN (unprom_mult.type) != TYPE_SIGN (half_type)))
    return NULL;

  vect_pattern_detected ("vect_recog_dot_prod_pattern", last_stmt);

  tree half_vectype;
  if (!vect_supportable_direct_optab_p (vinfo, type, DOT_PROD_EXPR, half_type,
					type_out, &half_vectype, subtype))
    return NULL;

  /* Get the inputs in the appropriate types.  */
  tree mult_oprnd[2];
  vect_convert_inputs (vinfo, stmt_vinfo, 2, mult_oprnd, half_type,
		       unprom0, half_vectype, subtype);

  var = vect_recog_temp_ssa_var (type, NULL);
  pattern_stmt = gimple_build_assign (var, DOT_PROD_EXPR,
				      mult_oprnd[0], mult_oprnd[1], oprnd1);

  return pattern_stmt;
}

/* Function vect_recog_sad_pattern

   Try to find the following Sum of Absolute Difference (SAD) pattern:

     type x_t, y_t;
     signed TYPE1 diff, abs_diff;
     TYPE2 sum = init;
   loop:
     sum_0 = phi <init, sum_1>
     S1  x_t = ...
     S2  y_t = ...
     S3  x_T = (TYPE1) x_t;
     S4  y_T = (TYPE1) y_t;
     S5  diff = x_T - y_T;
     S6  abs_diff = ABS_EXPR <diff>;
     [S7  abs_diff = (TYPE2) abs_diff;  #optional]
     S8  sum_1 = abs_diff + sum_0;

   where 'TYPE1' is at least double the size of type 'type', and 'TYPE2' is
   the same size of 'TYPE1' or bigger.  This is a special case of a reduction
   computation.

   Input:

   * STMT_VINFO: The stmt from which the pattern search begins.  In the
   example, when this function is called with S8, the pattern
   {S3,S4,S5,S6,S7,S8} will be detected.

   Output:

   * TYPE_OUT: The type of the output of this pattern.
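As a concrete illustration of the idiom that vect_recog_dot_prod_pattern matches, consider the scalar loop below (a hedged sketch: the function name `dot16` and the `short`/`int` type choices are illustrative, not taken from GCC itself). Statements S3-S5 and S7 of the comment above appear fused in the source line `sum += (int) x[i] * (int) y[i];`; on targets that provide `DOT_PROD_EXPR` (e.g. x86 `pmaddwd`, AArch64 `sdot`/`udot`), the vectorized loop body becomes one dot-product operation per vector.

```c
#include <stddef.h>

/* Scalar form of the dot-product idiom: both inputs are widened from a
   narrow type before the multiplication, and the products are summed
   into a wider accumulator (the reduction variable).  */
int
dot16 (const short *x, const short *y, size_t n)
{
  int sum = 0;                       /* TYPE2 sum = init;  */
  for (size_t i = 0; i < n; ++i)
    sum += (int) x[i] * (int) y[i];  /* widen, multiply, accumulate.  */
  return sum;
}
```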
   * Return value: A new stmt that will be used to replace the sequence of
   stmts that constitute the pattern.  In this case it will be:
        SAD_EXPR <x_t, y_t, sum_0>
  */

static gimple *
vect_recog_sad_pattern (vec_info *vinfo,
			stmt_vec_info stmt_vinfo, tree *type_out)
{
  gimple *last_stmt = stmt_vinfo->stmt;
  tree half_type;

  /* Look for the following pattern
          DX = (TYPE1) X;
          DY = (TYPE1) Y;
          DDIFF = DX - DY;
          DAD = ABS_EXPR <DDIFF>;
          DDPROD = (TYPE2) DPROD;
          sum_1 = DAD + sum_0;
     In which
     - DX is at least double the size of X
     - DY is at least double the size of Y
     - DX, DY, DDIFF, DAD all have the same type
     - sum is the same size of DAD or bigger
     - sum has been recognized as a reduction variable.

     This is equivalent to:
       DDIFF = X w- Y;          #widen sub
       DAD = ABS_EXPR <DDIFF>;
       sum_1 = DAD w+ sum_0;    #widen summation
     or
       DDIFF = X w- Y;          #widen sub
       DAD = ABS_EXPR <DDIFF>;
       sum_1 = DAD + sum_0;     #summation
  */

  /* Starting from LAST_STMT, follow the defs of its uses in search
     of the above pattern.  */
  tree plus_oprnd0, plus_oprnd1;
  if (!vect_reassociating_reduction_p (vinfo, stmt_vinfo, PLUS_EXPR,
				       &plus_oprnd0, &plus_oprnd1))
    return NULL;

  tree sum_type = TREE_TYPE (gimple_get_lhs (last_stmt));

  /* Any non-truncating sequence of conversions is OK here, since with a
     successful match, the result of the ABS(U) is known to fit within
     the nonnegative range of the result type.  (It cannot be the negative
     of the minimum signed value due to the range of the widening
     MINUS_EXPR.)  */
  vect_unpromoted_value unprom_abs;
  plus_oprnd0 = vect_look_through_possible_promotion (vinfo, plus_oprnd0,
						      &unprom_abs);

  /* So far so good.  Since last_stmt was detected as a (summation)
     reduction, we know that plus_oprnd1 is the reduction variable
     (defined by a loop-header phi), and plus_oprnd0 is an ssa-name
     defined by a stmt in the loop body.
     Then check that plus_oprnd0 is defined by an abs_expr.  */

  if (!plus_oprnd0)
    return NULL;

  stmt_vec_info abs_stmt_vinfo = vect_get_internal_def (vinfo, plus_oprnd0);
  if (!abs_stmt_vinfo)
    return NULL;

  /* FORNOW.  Can continue analyzing the def-use chain when this stmt is in
     a phi inside the loop (in case we are analyzing an outer-loop).  */
  gassign *abs_stmt = dyn_cast <gassign *> (abs_stmt_vinfo->stmt);
  if (!abs_stmt
      || (gimple_assign_rhs_code (abs_stmt) != ABS_EXPR
	  && gimple_assign_rhs_code (abs_stmt) != ABSU_EXPR))
    return NULL;

  tree abs_oprnd = gimple_assign_rhs1 (abs_stmt);
  tree abs_type = TREE_TYPE (abs_oprnd);
  if (TYPE_UNSIGNED (abs_type))
    return NULL;

  /* Peel off conversions from the ABS input.  This can involve sign
     changes (e.g. from an unsigned subtraction to a signed ABS input)
     or signed promotion, but it can't include unsigned promotion.
     (Note that ABS of an unsigned promotion should have been folded
     away before now anyway.)  */
  vect_unpromoted_value unprom_diff;
  abs_oprnd = vect_look_through_possible_promotion (vinfo, abs_oprnd,
						    &unprom_diff);
  if (!abs_oprnd)
    return NULL;
  if (TYPE_PRECISION (unprom_diff.type) != TYPE_PRECISION (abs_type)
      && TYPE_UNSIGNED (unprom_diff.type))
    return NULL;

  /* We then detect if the operand of abs_expr is defined by a
     minus_expr.  */
  stmt_vec_info diff_stmt_vinfo = vect_get_internal_def (vinfo, abs_oprnd);
  if (!diff_stmt_vinfo)
    return NULL;

  /* FORNOW.  Can continue analyzing the def-use chain when this stmt is in
     a phi inside the loop (in case we are analyzing an outer-loop).  */
  vect_unpromoted_value unprom[2];
  if (!vect_widened_op_tree (vinfo, diff_stmt_vinfo, MINUS_EXPR,
			     WIDEN_MINUS_EXPR, false, 2, unprom, &half_type))
    return NULL;

  vect_pattern_detected ("vect_recog_sad_pattern", last_stmt);

  tree half_vectype;
  if (!vect_supportable_direct_optab_p (vinfo, sum_type, SAD_EXPR, half_type,
					type_out, &half_vectype))
    return NULL;

  /* Get the inputs to the SAD_EXPR in the appropriate types.  */
  tree sad_oprnd[2];
  vect_convert_inputs (vinfo, stmt_vinfo, 2, sad_oprnd, half_type,
		       unprom, half_vectype);

  tree var = vect_recog_temp_ssa_var (sum_type, NULL);
  gimple *pattern_stmt = gimple_build_assign (var, SAD_EXPR, sad_oprnd[0],
					      sad_oprnd[1], plus_oprnd1);

  return pattern_stmt;
}

/* Recognize an operation that performs ORIG_CODE on widened inputs,
   so that it can be treated as though it had the form:

      A_TYPE a;
      B_TYPE b;
      HALF_TYPE a_cast = (HALF_TYPE) a;  // possible no-op
      HALF_TYPE b_cast = (HALF_TYPE) b;  // possible no-op
    | RES_TYPE a_extend = (RES_TYPE) a_cast;  // promotion from HALF_TYPE
    | RES_TYPE b_extend = (RES_TYPE) b_cast;  // promotion from HALF_TYPE
    | RES_TYPE res = a_extend ORIG_CODE b_extend;

   Try to replace the pattern with:

      A_TYPE a;
      B_TYPE b;
      HALF_TYPE a_cast = (HALF_TYPE) a;  // possible no-op
      HALF_TYPE b_cast = (HALF_TYPE) b;  // possible no-op
    | EXT_TYPE ext = a_cast WIDE_CODE b_cast;
    | RES_TYPE res = (EXT_TYPE) ext;  // possible no-op

   where EXT_TYPE is wider than HALF_TYPE but has the same signedness.

   SHIFT_P is true if ORIG_CODE and WIDE_CODE are shifts.  NAME is the
   name of the pattern being matched, for dump purposes.  */

static gimple *
vect_recog_widen_op_pattern (vec_info *vinfo,
			     stmt_vec_info last_stmt_info, tree *type_out,
			     tree_code orig_code, tree_code wide_code,
			     bool shift_p, const char *name)
  gimple *last_stmt = last_stmt_info->stmt;

  vect_unpromoted_value unprom[2];
  tree half_type;
  if (!vect_widened_op_tree (vinfo, last_stmt_info, orig_code, orig_code,
			     shift_p, 2, unprom, &half_type))
    return NULL;

  /* Pattern detected.
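For reference, a scalar source loop of the shape vect_recog_sad_pattern looks for might be written as below (an illustrative sketch: `sad8` and the type choices are not from GCC). The call to `abs` corresponds to the ABS_EXPR in S6, and the widening subtraction to S5; on targets with `SAD_EXPR` support (e.g. x86 `psadbw`) the loop body maps to a single sum-of-absolute-differences instruction per vector.

```c
#include <stdlib.h>
#include <stddef.h>

/* Scalar form of the SAD idiom: widening subtraction (S5), absolute
   value (S6) and accumulation into a wider sum (S8).  */
int
sad8 (const unsigned char *x, const unsigned char *y, size_t n)
{
  int sum = 0;
  for (size_t i = 0; i < n; ++i)
    sum += abs ((int) x[i] - (int) y[i]);
  return sum;
}
```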
*/ vect_pattern_detected (name, last_stmt); tree type = TREE_TYPE (gimple_get_lhs (last_stmt)); tree itype = type; if (TYPE_PRECISION (type) != TYPE_PRECISION (half_type) * 2 || TYPE_UNSIGNED (type) != TYPE_UNSIGNED (half_type)) itype = build_nonstandard_integer_type (TYPE_PRECISION (half_type) * 2, TYPE_UNSIGNED (half_type)); /* Check target support */ tree vectype = get_vectype_for_scalar_type (vinfo, half_type); tree vecitype = get_vectype_for_scalar_type (vinfo, itype); tree ctype = itype; tree vecctype = vecitype; if (orig_code == MINUS_EXPR && TYPE_UNSIGNED (itype) && TYPE_PRECISION (type) > TYPE_PRECISION (itype)) /* Subtraction is special, even if half_type is unsigned and no matter whether type is signed or unsigned, if type is wider than itype, we need to sign-extend from the widening operation result to the result type. Consider half_type unsigned char, operand 1 0xfe, operand 2 0xff, itype unsigned short and type either int or unsigned int. Widened (unsigned short) 0xfe - (unsigned short) 0xff is (unsigned short) 0xffff, but for type int we want the result -1 and for type unsigned int 0xffffffff rather than 0xffff. 
*/
      ctype = build_nonstandard_integer_type (TYPE_PRECISION (itype), 0);
      vecctype = get_vectype_for_scalar_type (vinfo, ctype);

  enum tree_code dummy_code;
  int dummy_int;
  auto_vec<tree> dummy_vec;
  if (!vectype
      || !vecitype
      || !vecctype
      || !supportable_widening_operation (vinfo, wide_code, last_stmt_info,
					  vecitype, vectype, &dummy_code,
					  &dummy_code, &dummy_int,
					  &dummy_vec))
    return NULL;

  *type_out = get_vectype_for_scalar_type (vinfo, type);
  if (!*type_out)
    return NULL;

  tree oprnd[2];
  vect_convert_inputs (vinfo, last_stmt_info, 2, oprnd, half_type,
		       unprom, vectype);

  tree var = vect_recog_temp_ssa_var (itype, NULL);
  gimple *pattern_stmt = gimple_build_assign (var, wide_code,
					      oprnd[0], oprnd[1]);

  if (vecctype != vecitype)
    pattern_stmt = vect_convert_output (vinfo, last_stmt_info, ctype,
					pattern_stmt, vecitype);

  return vect_convert_output (vinfo, last_stmt_info, type,
			      pattern_stmt, vecctype);

/* Try to detect multiplication on widened inputs, converting MULT_EXPR
   to WIDEN_MULT_EXPR.  See vect_recog_widen_op_pattern for details.  */

static gimple *
vect_recog_widen_mult_pattern (vec_info *vinfo, stmt_vec_info last_stmt_info,
			       tree *type_out)
{
  return vect_recog_widen_op_pattern (vinfo, last_stmt_info, type_out,
				      MULT_EXPR, WIDEN_MULT_EXPR, false,
				      "vect_recog_widen_mult_pattern");
}

/* Try to detect addition on widened inputs, converting PLUS_EXPR
   to WIDEN_PLUS_EXPR.  See vect_recog_widen_op_pattern for details.  */

static gimple *
vect_recog_widen_plus_pattern (vec_info *vinfo, stmt_vec_info last_stmt_info,
			       tree *type_out)
{
  return vect_recog_widen_op_pattern (vinfo, last_stmt_info, type_out,
				      PLUS_EXPR, WIDEN_PLUS_EXPR, false,
				      "vect_recog_widen_plus_pattern");
}

/* Try to detect subtraction on widened inputs, converting MINUS_EXPR
   to WIDEN_MINUS_EXPR.  See vect_recog_widen_op_pattern for details.
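A scalar loop matching the widened-input form that these wrappers hand to vect_recog_widen_op_pattern might look like this (a hedged sketch; `wmul` and the `short`/`int` types are illustrative). Both multiplication operands are promoted from half width, so the MULT_EXPR can be replaced by WIDEN_MULT_EXPR and each vector of `short`s yields two vectors of `int`s without separate extension statements.

```c
#include <stddef.h>

/* Scalar form of a widening multiplication: both operands are promoted
   from half the result width before the multiply.  */
void
wmul (int *out, const short *a, const short *b, size_t n)
{
  for (size_t i = 0; i < n; ++i)
    out[i] = (int) a[i] * (int) b[i];  /* MULT_EXPR on widened inputs.  */
}
```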
*/

static gimple *
vect_recog_widen_minus_pattern (vec_info *vinfo, stmt_vec_info last_stmt_info,
				tree *type_out)
{
  return vect_recog_widen_op_pattern (vinfo, last_stmt_info, type_out,
				      MINUS_EXPR, WIDEN_MINUS_EXPR, false,
				      "vect_recog_widen_minus_pattern");
}

/* Function vect_recog_popcount_pattern

   Try to find the following pattern:

   UTYPE1 A;
   TYPE1 B;
   UTYPE2 temp_in;
   TYPE3 temp_out;
   temp_in = (UTYPE2)A;
   temp_out = __builtin_popcount{,l,ll} (temp_in);
   B = (TYPE1) temp_out;

   TYPE2 may or may not be equal to TYPE3.
   i.e. TYPE2 is equal to TYPE3 for __builtin_popcount
   i.e. TYPE2 is not equal to TYPE3 for __builtin_popcountll

   Input:

   * STMT_VINFO: The stmt from which the pattern search begins.
   here it starts with B = (TYPE1) temp_out;

   Output:

   * TYPE_OUT: The vector type of the output of this pattern.

   * Return value: A new stmt that will be used to replace the sequence of
   stmts that constitute the pattern.  In this case it will be:
   B = .POPCOUNT (A);  */

static gimple *
vect_recog_popcount_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo,
			     tree *type_out)
  gassign *last_stmt = dyn_cast <gassign *> (stmt_vinfo->stmt);
  gimple *popcount_stmt, *pattern_stmt;
  tree rhs_oprnd, rhs_origin, lhs_oprnd, lhs_type, vec_type, new_var;
  auto_vec<tree> vargs;

  /* Find B = (TYPE1) temp_out.
*/
  if (!last_stmt)
    return NULL;

  tree_code code = gimple_assign_rhs_code (last_stmt);
  if (!CONVERT_EXPR_CODE_P (code))
    return NULL;

  lhs_oprnd = gimple_assign_lhs (last_stmt);
  lhs_type = TREE_TYPE (lhs_oprnd);
  if (!INTEGRAL_TYPE_P (lhs_type))
    return NULL;

  rhs_oprnd = gimple_assign_rhs1 (last_stmt);
  if (TREE_CODE (rhs_oprnd) != SSA_NAME
      || !has_single_use (rhs_oprnd))
    return NULL;

  popcount_stmt = SSA_NAME_DEF_STMT (rhs_oprnd);

  /* Find temp_out = __builtin_popcount{,l,ll} (temp_in);  */
  if (!is_gimple_call (popcount_stmt))
    return NULL;
  switch (gimple_call_combined_fn (popcount_stmt))
    {
    CASE_CFN_POPCOUNT:
      break;
    default:
      return NULL;
    }

  if (gimple_call_num_args (popcount_stmt) != 1)
    return NULL;

  rhs_oprnd = gimple_call_arg (popcount_stmt, 0);
  vect_unpromoted_value unprom_diff;
  rhs_origin = vect_look_through_possible_promotion (vinfo, rhs_oprnd,
						     &unprom_diff);

  if (!rhs_origin)
    return NULL;

  /* Input and output of .POPCOUNT should be same-precision integer.
     Also A should be unsigned or same precision as temp_in,
     otherwise there would be sign_extend from A to temp_in.  */
  if (TYPE_PRECISION (unprom_diff.type) != TYPE_PRECISION (lhs_type)
      || (!TYPE_UNSIGNED (unprom_diff.type)
	  && (TYPE_PRECISION (unprom_diff.type)
	      != TYPE_PRECISION (TREE_TYPE (rhs_oprnd)))))
    return NULL;

  vargs.safe_push (unprom_diff.op);

  vect_pattern_detected ("vect_recog_popcount_pattern", popcount_stmt);
  vec_type = get_vectype_for_scalar_type (vinfo, lhs_type);
  /* Do it only if the backend has popcount<vector_mode>2 pattern.  */
  if (!vec_type
      || !direct_internal_fn_supported_p (IFN_POPCOUNT, vec_type,
					  OPTIMIZE_FOR_SPEED))
    return NULL;

  /* Create B = .POPCOUNT (A).
*/
  new_var = vect_recog_temp_ssa_var (lhs_type, NULL);
  pattern_stmt = gimple_build_call_internal_vec (IFN_POPCOUNT, vargs);
  gimple_call_set_lhs (pattern_stmt, new_var);
  gimple_set_location (pattern_stmt, gimple_location (last_stmt));
  *type_out = vec_type;

  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location,
		     "created pattern stmt: %G", pattern_stmt);
  return pattern_stmt;

/* Function vect_recog_pow_pattern

   Try to find the following pattern:

     x = POW (y, N);

   with POW being one of pow, powf, powi, powif and N being
   either 2 or 0.5.

   Input:

   * STMT_VINFO: The stmt from which the pattern search begins.

   Output:

   * TYPE_OUT: The type of the output of this pattern.

   * Return value: A new stmt that will be used to replace the sequence of
   stmts that constitute the pattern.  In this case it will be:
        x = x * x
   or
	x = sqrt (x)
*/

static gimple *
vect_recog_pow_pattern (vec_info *vinfo,
			stmt_vec_info stmt_vinfo, tree *type_out)
  gimple *last_stmt = stmt_vinfo->stmt;
  tree base, exp;
  gimple *stmt;
  tree var;

  if (!is_gimple_call (last_stmt) || gimple_call_lhs (last_stmt) == NULL)
    return NULL;

  switch (gimple_call_combined_fn (last_stmt))
    {
    CASE_CFN_POW:
    CASE_CFN_POWI:
      break;
    default:
      return NULL;
    }

  base = gimple_call_arg (last_stmt, 0);
  exp = gimple_call_arg (last_stmt, 1);
  if (TREE_CODE (exp) != REAL_CST
      && TREE_CODE (exp) != INTEGER_CST)
    if (flag_unsafe_math_optimizations
	&& TREE_CODE (base) == REAL_CST
	&& gimple_call_builtin_p (last_stmt, BUILT_IN_NORMAL))
      combined_fn log_cfn;
      built_in_function exp_bfn;
      switch (DECL_FUNCTION_CODE (gimple_call_fndecl (last_stmt)))
	{
	case BUILT_IN_POW:
	  log_cfn = CFN_BUILT_IN_LOG;
	  exp_bfn = BUILT_IN_EXP;
	  break;
	case BUILT_IN_POWF:
	  log_cfn = CFN_BUILT_IN_LOGF;
	  exp_bfn = BUILT_IN_EXPF;
	  break;
	case BUILT_IN_POWL:
	  log_cfn = CFN_BUILT_IN_LOGL;
	  exp_bfn = BUILT_IN_EXPL;
	  break;
	default:
	  return NULL;
	}
      tree logc = fold_const_call (log_cfn, TREE_TYPE (base), base);
      tree exp_decl = builtin_decl_implicit (exp_bfn);
      /* Optimize pow (C, x) as exp (log (C) * x).
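A scalar loop matching the popcount idiom above might look like the sketch below (hedged: `popcounts` and the type choices are illustrative; `__builtin_popcountll` is the GCC/Clang builtin named in the pattern comment). The narrowing store `out[i] = (unsigned char) ...` corresponds to "B = (TYPE1) temp_out", and on targets with a vector popcount optab the loop maps to `IFN_POPCOUNT`.

```c
#include <stddef.h>

/* Scalar form of the popcount idiom: the builtin's int result is
   immediately converted to a narrower type.  */
void
popcounts (unsigned char *out, const unsigned long long *in, size_t n)
{
  for (size_t i = 0; i < n; ++i)
    out[i] = (unsigned char) __builtin_popcountll (in[i]);
}
```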
Normally match.pd does that, but if C is a power of 2, we want to use exp2 (log2 (C) * x) in the non-vectorized version, but for vectorization we don't have vectorized exp2. */ if (logc && TREE_CODE (logc) == REAL_CST && exp_decl && lookup_attribute ("omp declare simd", DECL_ATTRIBUTES (exp_decl))) cgraph_node *node = cgraph_node::get_create (exp_decl); if (node->simd_clones == NULL) if (targetm.simd_clone.compute_vecsize_and_simdlen == NULL || node->definition) return NULL; expand_simd_clones (node); if (node->simd_clones == NULL) return NULL; *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (base)); if (!*type_out) return NULL; tree def = vect_recog_temp_ssa_var (TREE_TYPE (base), NULL); gimple *g = gimple_build_assign (def, MULT_EXPR, exp, logc); append_pattern_def_seq (vinfo, stmt_vinfo, g); tree res = vect_recog_temp_ssa_var (TREE_TYPE (base), NULL); g = gimple_build_call (exp_decl, 1, def); gimple_call_set_lhs (g, res); return g; return NULL; /* We now have a pow or powi builtin function call with a constant exponent. */ /* Catch squaring. */ if ((tree_fits_shwi_p (exp) && tree_to_shwi (exp) == 2) || (TREE_CODE (exp) == REAL_CST && real_equal (&TREE_REAL_CST (exp), &dconst2))) if (!vect_supportable_direct_optab_p (vinfo, TREE_TYPE (base), MULT_EXPR, TREE_TYPE (base), type_out)) return NULL; var = vect_recog_temp_ssa_var (TREE_TYPE (base), NULL); stmt = gimple_build_assign (var, MULT_EXPR, base, base); return stmt; /* Catch square root. 
*/
  if (TREE_CODE (exp) == REAL_CST
      && real_equal (&TREE_REAL_CST (exp), &dconsthalf))
    {
      *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (base));
      if (*type_out
	  && direct_internal_fn_supported_p (IFN_SQRT, *type_out,
					     OPTIMIZE_FOR_SPEED))
	{
	  gcall *stmt = gimple_build_call_internal (IFN_SQRT, 1, base);
	  var = vect_recog_temp_ssa_var (TREE_TYPE (base), stmt);
	  gimple_call_set_lhs (stmt, var);
	  gimple_call_set_nothrow (stmt, true);
	  return stmt;
	}
    }

  return NULL;

/* Function vect_recog_widen_sum_pattern

   Try to find the following pattern:

     type x_t;
     TYPE x_T, sum = init;
   loop:
     sum_0 = phi <init, sum_1>
     S1  x_t = *p;
     S2  x_T = (TYPE) x_t;
     S3  sum_1 = x_T + sum_0;

   where type 'TYPE' is at least double the size of type 'type', i.e - we're
   summing elements of type 'type' into an accumulator of type 'TYPE'.  This
   is a special case of a reduction computation.

   Input:

   * STMT_VINFO: The stmt from which the pattern search begins.  In the
   example, when this function is called with S3, the pattern {S2,S3} will
   be detected.

   Output:

   * TYPE_OUT: The type of the output of this pattern.

   * Return value: A new stmt that will be used to replace the sequence of
   stmts that constitute the pattern.  In this case it will be:
        WIDEN_SUM <x_t, sum_0>

   Note: The widening-sum idiom is a widening reduction pattern that is
	 vectorized without preserving all the intermediate results.  It
	 produces only N/2 (widened) results (by summing up pairs of
	 intermediate results) rather than all N results.  Therefore, we
	 cannot allow this pattern when we want to get all the results and
	 in the correct order (as is the case when this computation is in an
	 inner-loop nested in an outer-loop that is being vectorized).
*/

static gimple *
vect_recog_widen_sum_pattern (vec_info *vinfo,
			      stmt_vec_info stmt_vinfo, tree *type_out)
{
  gimple *last_stmt = stmt_vinfo->stmt;
  tree oprnd0, oprnd1;
  tree type;
  gimple *pattern_stmt;
  tree var;

  /* Look for the following pattern
          DX = (TYPE) X;
          sum_1 = DX + sum_0;
     In which DX is at least double the size of X, and sum_1 has been
     recognized as a reduction variable.  */

  /* Starting from LAST_STMT, follow the defs of its uses in search
     of the above pattern.  */

  if (!vect_reassociating_reduction_p (vinfo, stmt_vinfo, PLUS_EXPR,
				       &oprnd0, &oprnd1))
    return NULL;

  type = TREE_TYPE (gimple_get_lhs (last_stmt));

  /* So far so good.  Since last_stmt was detected as a (summation)
     reduction, we know that oprnd1 is the reduction variable (defined by
     a loop-header phi), and oprnd0 is an ssa-name defined by a stmt in the
     loop body.
     Left to check that oprnd0 is defined by a cast from type 'type' to
     type 'TYPE'.  */

  vect_unpromoted_value unprom0;
  if (!vect_look_through_possible_promotion (vinfo, oprnd0, &unprom0)
      || TYPE_PRECISION (unprom0.type) * 2 > TYPE_PRECISION (type))
    return NULL;

  vect_pattern_detected ("vect_recog_widen_sum_pattern", last_stmt);

  if (!vect_supportable_direct_optab_p (vinfo, type, WIDEN_SUM_EXPR,
					unprom0.type, type_out))
    return NULL;

  var = vect_recog_temp_ssa_var (type, NULL);
  pattern_stmt = gimple_build_assign (var, WIDEN_SUM_EXPR, unprom0.op,
				      oprnd1);

  return pattern_stmt;
}

/* Recognize cases in which an operation is performed in one type WTYPE
   but could be done more efficiently in a narrower type NTYPE.  For
   example, if we have:

     ATYPE a;  // narrower than NTYPE
     BTYPE b;  // narrower than NTYPE
     WTYPE aw = (WTYPE) a;
     WTYPE bw = (WTYPE) b;
     WTYPE res = aw + bw;  // only uses of aw and bw

   then it would be more efficient to do:

     NTYPE an = (NTYPE) a;
     NTYPE bn = (NTYPE) b;
     NTYPE resn = an + bn;
     WTYPE res = (WTYPE) resn;

   Other situations include things like:

     ATYPE a;  // NTYPE or narrower
     WTYPE aw = (WTYPE) a;
     WTYPE res = aw + b;

   when only "(NTYPE) res" is significant.
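A scalar loop matching the widening-sum pattern {S2,S3} might look like this (a hedged sketch; `sum_bytes` and the `unsigned char`/`int` types are illustrative). Here the S2 cast is supplied implicitly by C's integer promotion of `x[i]`, and the accumulator is at least double the element width.

```c
#include <stddef.h>

/* Scalar form of the widening-sum idiom: narrow elements are promoted
   and accumulated into a wider reduction variable.  */
int
sum_bytes (const unsigned char *x, size_t n)
{
  int sum = 0;                /* TYPE sum = init;  */
  for (size_t i = 0; i < n; ++i)
    sum += x[i];              /* x[i] promoted to int, then added.  */
  return sum;
}
```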
In that case it's more efficient to truncate "b" and do the operation on NTYPE instead: NTYPE an = (NTYPE) a; NTYPE bn = (NTYPE) b; // truncation NTYPE resn = an + bn; WTYPE res = (WTYPE) resn; All users of "res" should then use "resn" instead, making the final statement dead (not marked as relevant). The final statement is still needed to maintain the type correctness of the IR. vect_determine_precisions has already determined the minimum precison of the operation and the minimum precision required by users of the result. */ static gimple * vect_recog_over_widening_pattern (vec_info *vinfo, stmt_vec_info last_stmt_info, tree *type_out) gassign *last_stmt = dyn_cast <gassign *> (last_stmt_info->stmt); if (!last_stmt) return NULL; /* See whether we have found that this operation can be done on a narrower type without changing its semantics. */ unsigned int new_precision = last_stmt_info->operation_precision; if (!new_precision) return NULL; tree lhs = gimple_assign_lhs (last_stmt); tree type = TREE_TYPE (lhs); tree_code code = gimple_assign_rhs_code (last_stmt); /* Punt for reductions where we don't handle the type conversions. */ if (STMT_VINFO_DEF_TYPE (last_stmt_info) == vect_reduction_def) return NULL; /* Keep the first operand of a COND_EXPR as-is: only the other two operands are interesting. */ unsigned int first_op = (code == COND_EXPR ? 2 : 1); /* Check the operands. 
*/
  unsigned int nops = gimple_num_ops (last_stmt) - first_op;
  auto_vec <vect_unpromoted_value, 3> unprom (nops);
  unprom.quick_grow (nops);
  unsigned int min_precision = 0;
  bool single_use_p = false;
  for (unsigned int i = 0; i < nops; ++i)
      tree op = gimple_op (last_stmt, first_op + i);
      if (TREE_CODE (op) == INTEGER_CST)
	unprom[i].set_op (op, vect_constant_def);
      else if (TREE_CODE (op) == SSA_NAME)
	  bool op_single_use_p = true;
	  if (!vect_look_through_possible_promotion (vinfo, op, &unprom[i],
						     &op_single_use_p))
	    return NULL;

	  /* If:

	     (1) N bits of the result are needed;
	     (2) all inputs are widened from M<N bits; and
	     (3) one operand OP is a single-use SSA name

	     we can shift the M->N widening from OP to the output
	     without changing the number or type of extensions involved.
	     This then reduces the number of copies of STMT_INFO.

	     If instead of (3) more than one operand is a single-use SSA
	     name, shifting the extension to the output is even more of
	     a win.

	     If instead:

	     (1) N bits of the result are needed;
	     (2) one operand OP2 is widened from M2<N bits;
	     (3) another operand OP1 is widened from M1<M2 bits; and
	     (4) both OP1 and OP2 are single-use

	     the choice is between:

	     (a) truncating OP2 to M1, doing the operation on M1,
		 and then widening the result to N

	     (b) widening OP1 to M2, doing the operation on M2, and then
		 widening the result to N

	     Both shift the M2->N widening of the inputs to the output.
	     (a) additionally shifts the M1->M2 widening to the output;
	     it requires fewer copies of STMT_INFO but requires an extra
	     M2->M1 truncation.

	     Which is better will depend on the complexity and cost of
	     STMT_INFO, which is hard to predict at this stage.  However,
	     a clear tie-breaker in favor of (b) is the fact that the
	     truncation in (a) increases the length of the operation chain.

	     If instead of (4) only one of OP1 or OP2 is single-use,
	     (b) is still a win over doing the operation in N bits:
	     it still shifts the M2->N widening on the single-use operand
	     to the output and reduces the number of STMT_INFO copies.
If neither operand is single-use then operating on fewer than N bits might lead to more extensions overall. Whether it does or not depends on global information about the vectorization region, and whether that's a good trade-off would again depend on the complexity and cost of the statements involved, as well as things like register pressure that are not normally modelled at this stage. We therefore ignore these cases and just optimize the clear single-use wins above. Thus we take the maximum precision of the unpromoted operands and record whether any operand is single-use. */ if (unprom[i].dt == vect_internal_def) min_precision = MAX (min_precision, TYPE_PRECISION (unprom[i].type)); single_use_p |= op_single_use_p; return NULL; /* Although the operation could be done in operation_precision, we have to balance that against introducing extra truncations or extensions. Calculate the minimum precision that can be handled efficiently. The loop above determined that the operation could be handled efficiently in MIN_PRECISION if SINGLE_USE_P; this would shift an extension from the inputs to the output without introducing more instructions, and would reduce the number of instructions required for STMT_INFO itself. vect_determine_precisions has also determined that the result only needs min_output_precision bits. Truncating by a factor of N times requires a tree of N - 1 instructions, so if TYPE is N times wider than min_output_precision, doing the operation in TYPE and truncating the result requires N + (N - 1) = 2N - 1 instructions per output vector. In contrast: - truncating the input to a unary operation and doing the operation in the new type requires at most N - 1 + 1 = N instructions per output vector - doing the same for a binary operation requires at most (N - 1) * 2 + 1 = 2N - 1 instructions per output vector Both unary and binary operations require fewer instructions than this if the operands were extended from a suitable truncated form. 
Thus there is usually nothing to lose by doing operations in min_output_precision bits, but there can be something to gain. */ if (!single_use_p) min_precision = last_stmt_info->min_output_precision; min_precision = MIN (min_precision, last_stmt_info->min_output_precision); /* Apply the minimum efficient precision we just calculated. */ if (new_precision < min_precision) new_precision = min_precision; new_precision = vect_element_precision (new_precision); if (new_precision >= TYPE_PRECISION (type)) return NULL; vect_pattern_detected ("vect_recog_over_widening_pattern", last_stmt); *type_out = get_vectype_for_scalar_type (vinfo, type); if (!*type_out) return NULL; /* We've found a viable pattern. Get the new type of the operation. */ bool unsigned_p = (last_stmt_info->operation_sign == UNSIGNED); tree new_type = build_nonstandard_integer_type (new_precision, unsigned_p); /* If we're truncating an operation, we need to make sure that we don't introduce new undefined overflow. The codes tested here are a subset of those accepted by vect_truncatable_operation_p. */ tree op_type = new_type; if (TYPE_OVERFLOW_UNDEFINED (new_type) && (code == PLUS_EXPR || code == MINUS_EXPR || code == MULT_EXPR)) op_type = build_nonstandard_integer_type (new_precision, true); /* We specifically don't check here whether the target supports the new operation, since it might be something that a later pattern wants to rewrite anyway. If targets have a minimum element size for some optabs, we should pattern-match smaller ops to larger ops where beneficial. */ tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type); tree op_vectype = get_vectype_for_scalar_type (vinfo, op_type); if (!new_vectype || !op_vectype) return NULL; if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "demoting %T to %T\n", type, new_type); /* Calculate the rhs operands for an operation on OP_TYPE. 
*/ tree ops[3] = {}; for (unsigned int i = 1; i < first_op; ++i) ops[i - 1] = gimple_op (last_stmt, i); vect_convert_inputs (vinfo, last_stmt_info, nops, &ops[first_op - 1], op_type, &unprom[0], op_vectype); /* Use the operation to produce a result of type OP_TYPE. */ tree new_var = vect_recog_temp_ssa_var (op_type, NULL); gimple *pattern_stmt = gimple_build_assign (new_var, code, ops[0], ops[1], ops[2]); gimple_set_location (pattern_stmt, gimple_location (last_stmt)); if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "created pattern stmt: %G", pattern_stmt); /* Convert back to the original signedness, if OP_TYPE is different from NEW_TYPE. */ if (op_type != new_type) pattern_stmt = vect_convert_output (vinfo, last_stmt_info, new_type, pattern_stmt, op_vectype); /* Promote the result to the original type. */ pattern_stmt = vect_convert_output (vinfo, last_stmt_info, type, pattern_stmt, new_vectype); return pattern_stmt; /* Recognize the following patterns: ATYPE a; // narrower than TYPE BTYPE b; // narrower than TYPE 1) Multiply high with scaling TYPE res = ((TYPE) a * (TYPE) b) >> c; Here, c is bitsize (TYPE) / 2 - 1. 2) ... or also with rounding TYPE res = (((TYPE) a * (TYPE) b) >> d + 1) >> 1; Here, d is bitsize (TYPE) / 2 - 2. 3) Normal multiply high TYPE res = ((TYPE) a * (TYPE) b) >> e; Here, e is bitsize (TYPE) / 2. where only the bottom half of res is used. */ static gimple * vect_recog_mulhs_pattern (vec_info *vinfo, stmt_vec_info last_stmt_info, tree *type_out) /* Check for a right shift. */ gassign *last_stmt = dyn_cast <gassign *> (last_stmt_info->stmt); if (!last_stmt || gimple_assign_rhs_code (last_stmt) != RSHIFT_EXPR) return NULL; /* Check that the shift result is wider than the users of the result need (i.e. that narrowing would be a natural choice). 
*/ tree lhs_type = TREE_TYPE (gimple_assign_lhs (last_stmt)); unsigned int target_precision = vect_element_precision (last_stmt_info->min_output_precision); if (!INTEGRAL_TYPE_P (lhs_type) || target_precision >= TYPE_PRECISION (lhs_type)) return NULL; /* Look through any change in sign on the outer shift input. */ vect_unpromoted_value unprom_rshift_input; tree rshift_input = vect_look_through_possible_promotion (vinfo, gimple_assign_rhs1 (last_stmt), &unprom_rshift_input); if (!rshift_input || TYPE_PRECISION (TREE_TYPE (rshift_input)) != TYPE_PRECISION (lhs_type)) return NULL; /* Get the definition of the shift input. */ stmt_vec_info rshift_input_stmt_info = vect_get_internal_def (vinfo, rshift_input); if (!rshift_input_stmt_info) return NULL; gassign *rshift_input_stmt = dyn_cast <gassign *> (rshift_input_stmt_info->stmt); if (!rshift_input_stmt) return NULL; stmt_vec_info mulh_stmt_info; tree scale_term; bool rounding_p = false; /* Check for the presence of the rounding term. */ if (gimple_assign_rhs_code (rshift_input_stmt) == PLUS_EXPR) /* Check that the outer shift was by 1. */ if (!integer_onep (gimple_assign_rhs2 (last_stmt))) return NULL; /* Check that the second operand of the PLUS_EXPR is 1. */ if (!integer_onep (gimple_assign_rhs2 (rshift_input_stmt))) return NULL; /* Look through any change in sign on the addition input. */ vect_unpromoted_value unprom_plus_input; tree plus_input = vect_look_through_possible_promotion (vinfo, gimple_assign_rhs1 (rshift_input_stmt), &unprom_plus_input); if (!plus_input || TYPE_PRECISION (TREE_TYPE (plus_input)) != TYPE_PRECISION (TREE_TYPE (rshift_input))) return NULL; /* Get the definition of the multiply-high-scale part. 
*/ stmt_vec_info plus_input_stmt_info = vect_get_internal_def (vinfo, plus_input); if (!plus_input_stmt_info) return NULL; gassign *plus_input_stmt = dyn_cast <gassign *> (plus_input_stmt_info->stmt); if (!plus_input_stmt || gimple_assign_rhs_code (plus_input_stmt) != RSHIFT_EXPR) return NULL; /* Look through any change in sign on the scaling input. */ vect_unpromoted_value unprom_scale_input; tree scale_input = vect_look_through_possible_promotion (vinfo, gimple_assign_rhs1 (plus_input_stmt), &unprom_scale_input); if (!scale_input || TYPE_PRECISION (TREE_TYPE (scale_input)) != TYPE_PRECISION (TREE_TYPE (plus_input))) return NULL; /* Get the definition of the multiply-high part. */ mulh_stmt_info = vect_get_internal_def (vinfo, scale_input); if (!mulh_stmt_info) return NULL; /* Get the scaling term. */ scale_term = gimple_assign_rhs2 (plus_input_stmt); rounding_p = true; mulh_stmt_info = rshift_input_stmt_info; scale_term = gimple_assign_rhs2 (last_stmt); /* Check that the scaling factor is constant. */ if (TREE_CODE (scale_term) != INTEGER_CST) return NULL; /* Check whether the scaling input term can be seen as two widened inputs multiplied together. */ vect_unpromoted_value unprom_mult[2]; tree new_type; unsigned int nops = vect_widened_op_tree (vinfo, mulh_stmt_info, MULT_EXPR, WIDEN_MULT_EXPR, false, 2, unprom_mult, &new_type); if (nops != 2) return NULL; /* Adjust output precision. */ if (TYPE_PRECISION (new_type) < target_precision) new_type = build_nonstandard_integer_type (target_precision, TYPE_UNSIGNED (new_type)); unsigned mult_precision = TYPE_PRECISION (new_type); internal_fn ifn; /* Check that the scaling factor is expected. Instead of target_precision, we should use the one that we actually use for internal function. */ if (rounding_p) /* Check pattern 2). */ if (wi::to_widest (scale_term) + mult_precision + 2 != TYPE_PRECISION (lhs_type)) return NULL; ifn = IFN_MULHRS; /* Check for pattern 1). 
*/ if (wi::to_widest (scale_term) + mult_precision + 1 == TYPE_PRECISION (lhs_type)) ifn = IFN_MULHS; /* Check for pattern 3). */ else if (wi::to_widest (scale_term) + mult_precision == TYPE_PRECISION (lhs_type)) ifn = IFN_MULH; return NULL; vect_pattern_detected ("vect_recog_mulhs_pattern", last_stmt); /* Check for target support. */ tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type); if (!new_vectype || !direct_internal_fn_supported_p (ifn, new_vectype, OPTIMIZE_FOR_SPEED)) return NULL; /* The IR requires a valid vector type for the cast result, even though it's likely to be discarded. */ *type_out = get_vectype_for_scalar_type (vinfo, lhs_type); if (!*type_out) return NULL; /* Generate the IFN_MULHRS call. */ tree new_var = vect_recog_temp_ssa_var (new_type, NULL); tree new_ops[2]; vect_convert_inputs (vinfo, last_stmt_info, 2, new_ops, new_type, unprom_mult, new_vectype); gcall *mulhrs_stmt = gimple_build_call_internal (ifn, 2, new_ops[0], new_ops[1]); gimple_call_set_lhs (mulhrs_stmt, new_var); gimple_set_location (mulhrs_stmt, gimple_location (last_stmt)); if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "created pattern stmt: %G", mulhrs_stmt); return vect_convert_output (vinfo, last_stmt_info, lhs_type, mulhrs_stmt, new_vectype); /* Recognize the patterns: ATYPE a; // narrower than TYPE BTYPE b; // narrower than TYPE (1) TYPE avg = ((TYPE) a + (TYPE) b) >> 1; or (2) TYPE avg = ((TYPE) a + (TYPE) b + 1) >> 1; where only the bottom half of avg is used. Try to transform them into: (1) NTYPE avg' = .AVG_FLOOR ((NTYPE) a, (NTYPE) b); or (2) NTYPE avg' = .AVG_CEIL ((NTYPE) a, (NTYPE) b); followed by: TYPE avg = (TYPE) avg'; where NTYPE is no wider than half of TYPE. Since only the bottom half of avg is used, all or part of the cast of avg' should become redundant. If there is no target support available, generate code to distribute rshift over plus and add a carry. 
*/ static gimple * vect_recog_average_pattern (vec_info *vinfo, stmt_vec_info last_stmt_info, tree *type_out) /* Check for a shift right by one bit. */ gassign *last_stmt = dyn_cast <gassign *> (last_stmt_info->stmt); if (!last_stmt || gimple_assign_rhs_code (last_stmt) != RSHIFT_EXPR || !integer_onep (gimple_assign_rhs2 (last_stmt))) return NULL; /* Check that the shift result is wider than the users of the result need (i.e. that narrowing would be a natural choice). */ tree lhs = gimple_assign_lhs (last_stmt); tree type = TREE_TYPE (lhs); unsigned int target_precision = vect_element_precision (last_stmt_info->min_output_precision); if (!INTEGRAL_TYPE_P (type) || target_precision >= TYPE_PRECISION (type)) return NULL; /* Look through any change in sign on the shift input. */ tree rshift_rhs = gimple_assign_rhs1 (last_stmt); vect_unpromoted_value unprom_plus; rshift_rhs = vect_look_through_possible_promotion (vinfo, rshift_rhs, if (!rshift_rhs || TYPE_PRECISION (TREE_TYPE (rshift_rhs)) != TYPE_PRECISION (type)) return NULL; /* Get the definition of the shift input. */ stmt_vec_info plus_stmt_info = vect_get_internal_def (vinfo, rshift_rhs); if (!plus_stmt_info) return NULL; /* Check whether the shift input can be seen as a tree of additions on 2 or 3 widened inputs. Note that the pattern should be a win even if the result of one or more additions is reused elsewhere: if the pattern matches, we'd be replacing 2N RSHIFT_EXPRs and N VEC_PACK_*s with N IFN_AVG_*s. */ internal_fn ifn = IFN_AVG_FLOOR; vect_unpromoted_value unprom[3]; tree new_type; unsigned int nops = vect_widened_op_tree (vinfo, plus_stmt_info, PLUS_EXPR, WIDEN_PLUS_EXPR, false, 3, unprom, &new_type); if (nops == 0) return NULL; if (nops == 3) /* Check that one operand is 1. */ unsigned int i; for (i = 0; i < 3; ++i) if (integer_onep (unprom[i].op)) if (i == 3) return NULL; /* Throw away the 1 operand and keep the other two. 
*/ if (i < 2) unprom[i] = unprom[2]; ifn = IFN_AVG_CEIL; vect_pattern_detected ("vect_recog_average_pattern", last_stmt); /* We know that: (a) the operation can be viewed as: TYPE widened0 = (TYPE) UNPROM[0]; TYPE widened1 = (TYPE) UNPROM[1]; TYPE tmp1 = widened0 + widened1 {+ 1}; TYPE tmp2 = tmp1 >> 1; // LAST_STMT_INFO (b) the first two statements are equivalent to: TYPE widened0 = (TYPE) (NEW_TYPE) UNPROM[0]; TYPE widened1 = (TYPE) (NEW_TYPE) UNPROM[1]; (c) vect_recog_over_widening_pattern has already tried to narrow TYPE where sensible; (d) all the operations can be performed correctly at twice the width of NEW_TYPE, due to the nature of the average operation; and (e) users of the result of the right shift need only TARGET_PRECISION bits, where TARGET_PRECISION is no more than half of TYPE's Under these circumstances, the only situation in which NEW_TYPE could be narrower than TARGET_PRECISION is if widened0, widened1 and an addition result are all used more than once. Thus we can treat any widening of UNPROM[0] and UNPROM[1] to TARGET_PRECISION as "free", whereas widening the result of the average instruction from NEW_TYPE to TARGET_PRECISION would be a new operation. It's therefore better not to go narrower than TARGET_PRECISION. */ if (TYPE_PRECISION (new_type) < target_precision) new_type = build_nonstandard_integer_type (target_precision, TYPE_UNSIGNED (new_type)); /* Check for target support. 
*/ tree new_vectype = get_vectype_for_scalar_type (vinfo, new_type); if (!new_vectype) return NULL; bool fallback_p = false; if (direct_internal_fn_supported_p (ifn, new_vectype, OPTIMIZE_FOR_SPEED)) else if (TYPE_UNSIGNED (new_type) && optab_for_tree_code (RSHIFT_EXPR, new_vectype, optab_scalar) && optab_for_tree_code (PLUS_EXPR, new_vectype, optab_default) && optab_for_tree_code (BIT_IOR_EXPR, new_vectype, optab_default) && optab_for_tree_code (BIT_AND_EXPR, new_vectype, optab_default)) fallback_p = true; return NULL; /* The IR requires a valid vector type for the cast result, even though it's likely to be discarded. */ *type_out = get_vectype_for_scalar_type (vinfo, type); if (!*type_out) return NULL; tree new_var = vect_recog_temp_ssa_var (new_type, NULL); tree new_ops[2]; vect_convert_inputs (vinfo, last_stmt_info, 2, new_ops, new_type, unprom, new_vectype); if (fallback_p) /* As a fallback, generate code for following sequence: shifted_op0 = new_ops[0] >> 1; shifted_op1 = new_ops[1] >> 1; sum_of_shifted = shifted_op0 + shifted_op1; unmasked_carry = new_ops[0] and/or new_ops[1]; carry = unmasked_carry & 1; new_var = sum_of_shifted + carry; tree one_cst = build_one_cst (new_type); gassign *g; tree shifted_op0 = vect_recog_temp_ssa_var (new_type, NULL); g = gimple_build_assign (shifted_op0, RSHIFT_EXPR, new_ops[0], one_cst); append_pattern_def_seq (vinfo, last_stmt_info, g, new_vectype); tree shifted_op1 = vect_recog_temp_ssa_var (new_type, NULL); g = gimple_build_assign (shifted_op1, RSHIFT_EXPR, new_ops[1], one_cst); append_pattern_def_seq (vinfo, last_stmt_info, g, new_vectype); tree sum_of_shifted = vect_recog_temp_ssa_var (new_type, NULL); g = gimple_build_assign (sum_of_shifted, PLUS_EXPR, shifted_op0, shifted_op1); append_pattern_def_seq (vinfo, last_stmt_info, g, new_vectype); tree unmasked_carry = vect_recog_temp_ssa_var (new_type, NULL); tree_code c = (ifn == IFN_AVG_CEIL) ? 
BIT_IOR_EXPR : BIT_AND_EXPR; g = gimple_build_assign (unmasked_carry, c, new_ops[0], new_ops[1]); append_pattern_def_seq (vinfo, last_stmt_info, g, new_vectype); tree carry = vect_recog_temp_ssa_var (new_type, NULL); g = gimple_build_assign (carry, BIT_AND_EXPR, unmasked_carry, one_cst); append_pattern_def_seq (vinfo, last_stmt_info, g, new_vectype); g = gimple_build_assign (new_var, PLUS_EXPR, sum_of_shifted, carry); return vect_convert_output (vinfo, last_stmt_info, type, g, new_vectype); /* Generate the IFN_AVG* call. */ gcall *average_stmt = gimple_build_call_internal (ifn, 2, new_ops[0], gimple_call_set_lhs (average_stmt, new_var); gimple_set_location (average_stmt, gimple_location (last_stmt)); if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "created pattern stmt: %G", average_stmt); return vect_convert_output (vinfo, last_stmt_info, type, average_stmt, new_vectype); /* Recognize cases in which the input to a cast is wider than its output, and the input is fed by a widening operation. Fold this by removing the unnecessary intermediate widening. E.g.: unsigned char a; unsigned int b = (unsigned int) a; unsigned short c = (unsigned short) b; unsigned short c = (unsigned short) a; Although this is rare in input IR, it is an expected side-effect of the over-widening pattern above. This is beneficial also for integer-to-float conversions, if the widened integer has more bits than the float, and if the unwidened input doesn't. */ static gimple * vect_recog_cast_forwprop_pattern (vec_info *vinfo, stmt_vec_info last_stmt_info, tree *type_out) /* Check for a cast, including an integer-to-float conversion. */ gassign *last_stmt = dyn_cast <gassign *> (last_stmt_info->stmt); if (!last_stmt) return NULL; tree_code code = gimple_assign_rhs_code (last_stmt); if (!CONVERT_EXPR_CODE_P (code) && code != FLOAT_EXPR) return NULL; /* Make sure that the rhs is a scalar with a natural bitsize. 
*/ tree lhs = gimple_assign_lhs (last_stmt); if (!lhs) return NULL; tree lhs_type = TREE_TYPE (lhs); scalar_mode lhs_mode; if (VECT_SCALAR_BOOLEAN_TYPE_P (lhs_type) || !is_a <scalar_mode> (TYPE_MODE (lhs_type), &lhs_mode)) return NULL; /* Check for a narrowing operation (from a vector point of view). */ tree rhs = gimple_assign_rhs1 (last_stmt); tree rhs_type = TREE_TYPE (rhs); if (!INTEGRAL_TYPE_P (rhs_type) || VECT_SCALAR_BOOLEAN_TYPE_P (rhs_type) || TYPE_PRECISION (rhs_type) <= GET_MODE_BITSIZE (lhs_mode)) return NULL; /* Try to find an unpromoted input. */ vect_unpromoted_value unprom; if (!vect_look_through_possible_promotion (vinfo, rhs, &unprom) || TYPE_PRECISION (unprom.type) >= TYPE_PRECISION (rhs_type)) return NULL; /* If the bits above RHS_TYPE matter, make sure that they're the same when extending from UNPROM as they are when extending from RHS. */ if (!INTEGRAL_TYPE_P (lhs_type) && TYPE_SIGN (rhs_type) != TYPE_SIGN (unprom.type)) return NULL; /* We can get the same result by casting UNPROM directly, to avoid the unnecessary widening and narrowing. */ vect_pattern_detected ("vect_recog_cast_forwprop_pattern", last_stmt); *type_out = get_vectype_for_scalar_type (vinfo, lhs_type); if (!*type_out) return NULL; tree new_var = vect_recog_temp_ssa_var (lhs_type, NULL); gimple *pattern_stmt = gimple_build_assign (new_var, code, unprom.op); gimple_set_location (pattern_stmt, gimple_location (last_stmt)); return pattern_stmt; /* Try to detect a shift left of a widened input, converting LSHIFT_EXPR to WIDEN_LSHIFT_EXPR. See vect_recog_widen_op_pattern for details. */ static gimple * vect_recog_widen_shift_pattern (vec_info *vinfo, stmt_vec_info last_stmt_info, tree *type_out) return vect_recog_widen_op_pattern (vinfo, last_stmt_info, type_out, LSHIFT_EXPR, WIDEN_LSHIFT_EXPR, true, /* Detect a rotate pattern wouldn't be otherwise vectorized: type a_t, b_t, c_t; S0 a_t = b_t r<< c_t; * STMT_VINFO: The stmt from which the pattern search begins, i.e. 
the shift/rotate stmt. The original stmt (S0) is replaced with a sequence: S1 d_t = -c_t; S2 e_t = d_t & (B - 1); S3 f_t = b_t << c_t; S4 g_t = b_t >> e_t; S0 a_t = f_t | g_t; where B is element bitsize of type. * TYPE_OUT: The type of the output of this pattern. * Return value: A new stmt that will be used to replace the rotate S0 stmt. */ static gimple * vect_recog_rotate_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo, tree *type_out) gimple *last_stmt = stmt_vinfo->stmt; tree oprnd0, oprnd1, lhs, var, var1, var2, vectype, type, stype, def, def2; gimple *pattern_stmt, *def_stmt; enum tree_code rhs_code; enum vect_def_type dt; optab optab1, optab2; edge ext_def = NULL; bool bswap16_p = false; if (is_gimple_assign (last_stmt)) rhs_code = gimple_assign_rhs_code (last_stmt); switch (rhs_code) case LROTATE_EXPR: case RROTATE_EXPR: return NULL; lhs = gimple_assign_lhs (last_stmt); oprnd0 = gimple_assign_rhs1 (last_stmt); type = TREE_TYPE (oprnd0); oprnd1 = gimple_assign_rhs2 (last_stmt); else if (gimple_call_builtin_p (last_stmt, BUILT_IN_BSWAP16)) /* __builtin_bswap16 (x) is another form of x r>> 8. The vectorizer has bswap support, but only if the argument isn't promoted. 
*/ lhs = gimple_call_lhs (last_stmt); oprnd0 = gimple_call_arg (last_stmt, 0); type = TREE_TYPE (oprnd0); if (!lhs || TYPE_PRECISION (TREE_TYPE (lhs)) != 16 || TYPE_PRECISION (type) <= 16 || TREE_CODE (oprnd0) != SSA_NAME || BITS_PER_UNIT != 8 || !TYPE_UNSIGNED (TREE_TYPE (lhs))) return NULL; stmt_vec_info def_stmt_info; if (!vect_is_simple_use (oprnd0, vinfo, &dt, &def_stmt_info, &def_stmt)) return NULL; if (dt != vect_internal_def) return NULL; if (gimple_assign_cast_p (def_stmt)) def = gimple_assign_rhs1 (def_stmt); if (INTEGRAL_TYPE_P (TREE_TYPE (def)) && TYPE_PRECISION (TREE_TYPE (def)) == 16) oprnd0 = def; type = TREE_TYPE (lhs); vectype = get_vectype_for_scalar_type (vinfo, type); if (vectype == NULL_TREE) return NULL; if (tree char_vectype = get_same_sized_vectype (char_type_node, vectype)) /* The encoding uses one stepped pattern for each byte in the 16-bit word. */ vec_perm_builder elts (TYPE_VECTOR_SUBPARTS (char_vectype), 2, 3); for (unsigned i = 0; i < 3; ++i) for (unsigned j = 0; j < 2; ++j) elts.quick_push ((i + 1) * 2 - j - 1); vec_perm_indices indices (elts, 1, TYPE_VECTOR_SUBPARTS (char_vectype)); if (can_vec_perm_const_p (TYPE_MODE (char_vectype), indices)) /* vectorizable_bswap can handle the __builtin_bswap16 if we undo the argument promotion. */ if (!useless_type_conversion_p (type, TREE_TYPE (oprnd0))) def = vect_recog_temp_ssa_var (type, NULL); def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd0); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt); oprnd0 = def; /* Pattern detected. */ vect_pattern_detected ("vect_recog_rotate_pattern", last_stmt); *type_out = vectype; /* Pattern supported. Create a stmt to be used to replace the pattern, with the unpromoted argument. 
*/ var = vect_recog_temp_ssa_var (type, NULL); pattern_stmt = gimple_build_call (gimple_call_fndecl (last_stmt), 1, oprnd0); gimple_call_set_lhs (pattern_stmt, var); gimple_call_set_fntype (as_a <gcall *> (pattern_stmt), gimple_call_fntype (last_stmt)); return pattern_stmt; oprnd1 = build_int_cst (integer_type_node, 8); rhs_code = LROTATE_EXPR; bswap16_p = true; return NULL; if (TREE_CODE (oprnd0) != SSA_NAME || TYPE_PRECISION (TREE_TYPE (lhs)) != TYPE_PRECISION (type) || !INTEGRAL_TYPE_P (type) || !TYPE_UNSIGNED (type)) return NULL; stmt_vec_info def_stmt_info; if (!vect_is_simple_use (oprnd1, vinfo, &dt, &def_stmt_info, &def_stmt)) return NULL; if (dt != vect_internal_def && dt != vect_constant_def && dt != vect_external_def) return NULL; vectype = get_vectype_for_scalar_type (vinfo, type); if (vectype == NULL_TREE) return NULL; /* If vector/vector or vector/scalar rotate is supported by the target, don't do anything here. */ optab1 = optab_for_tree_code (rhs_code, vectype, optab_vector); if (optab1 && optab_handler (optab1, TYPE_MODE (vectype)) != CODE_FOR_nothing) if (bswap16_p) if (!useless_type_conversion_p (type, TREE_TYPE (oprnd0))) def = vect_recog_temp_ssa_var (type, NULL); def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd0); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt); oprnd0 = def; /* Pattern detected. */ vect_pattern_detected ("vect_recog_rotate_pattern", last_stmt); *type_out = vectype; /* Pattern supported. Create a stmt to be used to replace the pattern. */ var = vect_recog_temp_ssa_var (type, NULL); pattern_stmt = gimple_build_assign (var, LROTATE_EXPR, oprnd0, return pattern_stmt; return NULL; if (is_a <bb_vec_info> (vinfo) || dt != vect_internal_def) optab2 = optab_for_tree_code (rhs_code, vectype, optab_scalar); if (optab2 && optab_handler (optab2, TYPE_MODE (vectype)) != CODE_FOR_nothing) goto use_rotate; /* If vector/vector or vector/scalar shifts aren't supported by the target, don't do anything here either. 
*/ optab1 = optab_for_tree_code (LSHIFT_EXPR, vectype, optab_vector); optab2 = optab_for_tree_code (RSHIFT_EXPR, vectype, optab_vector); if (!optab1 || optab_handler (optab1, TYPE_MODE (vectype)) == CODE_FOR_nothing || !optab2 || optab_handler (optab2, TYPE_MODE (vectype)) == CODE_FOR_nothing) if (! is_a <bb_vec_info> (vinfo) && dt == vect_internal_def) return NULL; optab1 = optab_for_tree_code (LSHIFT_EXPR, vectype, optab_scalar); optab2 = optab_for_tree_code (RSHIFT_EXPR, vectype, optab_scalar); if (!optab1 || optab_handler (optab1, TYPE_MODE (vectype)) == CODE_FOR_nothing || !optab2 || optab_handler (optab2, TYPE_MODE (vectype)) == CODE_FOR_nothing) return NULL; *type_out = vectype; if (bswap16_p && !useless_type_conversion_p (type, TREE_TYPE (oprnd0))) def = vect_recog_temp_ssa_var (type, NULL); def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd0); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt); oprnd0 = def; if (dt == vect_external_def && TREE_CODE (oprnd1) == SSA_NAME) ext_def = vect_get_external_def_edge (vinfo, oprnd1); def = NULL_TREE; scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type); if (dt != vect_internal_def || TYPE_MODE (TREE_TYPE (oprnd1)) == mode) def = oprnd1; else if (def_stmt && gimple_assign_cast_p (def_stmt)) tree rhs1 = gimple_assign_rhs1 (def_stmt); if (TYPE_MODE (TREE_TYPE (rhs1)) == mode && TYPE_PRECISION (TREE_TYPE (rhs1)) == TYPE_PRECISION (type)) def = rhs1; if (def == NULL_TREE) def = vect_recog_temp_ssa_var (type, NULL); def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd1); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt); stype = TREE_TYPE (def); if (TREE_CODE (def) == INTEGER_CST) if (!tree_fits_uhwi_p (def) || tree_to_uhwi (def) >= GET_MODE_PRECISION (mode) || integer_zerop (def)) return NULL; def2 = build_int_cst (stype, GET_MODE_PRECISION (mode) - tree_to_uhwi (def)); tree vecstype = get_vectype_for_scalar_type (vinfo, stype); if (vecstype == NULL_TREE) return NULL; def2 = vect_recog_temp_ssa_var (stype, NULL); 
def_stmt = gimple_build_assign (def2, NEGATE_EXPR, def); if (ext_def) basic_block new_bb = gsi_insert_on_edge_immediate (ext_def, def_stmt); gcc_assert (!new_bb); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt, vecstype); def2 = vect_recog_temp_ssa_var (stype, NULL); tree mask = build_int_cst (stype, GET_MODE_PRECISION (mode) - 1); def_stmt = gimple_build_assign (def2, BIT_AND_EXPR, gimple_assign_lhs (def_stmt), mask); if (ext_def) basic_block new_bb = gsi_insert_on_edge_immediate (ext_def, def_stmt); gcc_assert (!new_bb); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt, vecstype); var1 = vect_recog_temp_ssa_var (type, NULL); def_stmt = gimple_build_assign (var1, rhs_code == LROTATE_EXPR ? LSHIFT_EXPR : RSHIFT_EXPR, oprnd0, def); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt); var2 = vect_recog_temp_ssa_var (type, NULL); def_stmt = gimple_build_assign (var2, rhs_code == LROTATE_EXPR ? RSHIFT_EXPR : LSHIFT_EXPR, oprnd0, def2); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt); /* Pattern detected. */ vect_pattern_detected ("vect_recog_rotate_pattern", last_stmt); /* Pattern supported. Create a stmt to be used to replace the pattern. */ var = vect_recog_temp_ssa_var (type, NULL); pattern_stmt = gimple_build_assign (var, BIT_IOR_EXPR, var1, var2); return pattern_stmt; /* Detect a vector by vector shift pattern that wouldn't be otherwise type a_t; TYPE b_T, res_T; S1 a_t = ; S2 b_T = ; S3 res_T = b_T op a_t; where type 'TYPE' is a type with different size than 'type', and op is <<, >> or rotate. Also detect cases: type a_t; TYPE b_T, c_T, res_T; S0 c_T = ; S1 a_t = (type) c_T; S2 b_T = ; S3 res_T = b_T op a_t; * STMT_VINFO: The stmt from which the pattern search begins, i.e. the shift/rotate stmt. The original stmt (S3) is replaced with a shift/rotate which has same type on both operands, in the second case just b_T op c_T, in the first case with added cast from a_t to c_T in STMT_VINFO_PATTERN_DEF_SEQ. 
* TYPE_OUT: The type of the output of this pattern. * Return value: A new stmt that will be used to replace the shift/rotate S3 stmt. */ static gimple * vect_recog_vector_vector_shift_pattern (vec_info *vinfo, stmt_vec_info stmt_vinfo, tree *type_out) gimple *last_stmt = stmt_vinfo->stmt; tree oprnd0, oprnd1, lhs, var; gimple *pattern_stmt; enum tree_code rhs_code; if (!is_gimple_assign (last_stmt)) return NULL; rhs_code = gimple_assign_rhs_code (last_stmt); switch (rhs_code) case LSHIFT_EXPR: case RSHIFT_EXPR: case LROTATE_EXPR: case RROTATE_EXPR: return NULL; lhs = gimple_assign_lhs (last_stmt); oprnd0 = gimple_assign_rhs1 (last_stmt); oprnd1 = gimple_assign_rhs2 (last_stmt); if (TREE_CODE (oprnd0) != SSA_NAME || TREE_CODE (oprnd1) != SSA_NAME || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1)) || !type_has_mode_precision_p (TREE_TYPE (oprnd1)) || TYPE_PRECISION (TREE_TYPE (lhs)) != TYPE_PRECISION (TREE_TYPE (oprnd0))) return NULL; stmt_vec_info def_vinfo = vect_get_internal_def (vinfo, oprnd1); if (!def_vinfo) return NULL; *type_out = get_vectype_for_scalar_type (vinfo, TREE_TYPE (oprnd0)); if (*type_out == NULL_TREE) return NULL; tree def = NULL_TREE; gassign *def_stmt = dyn_cast <gassign *> (def_vinfo->stmt); if (def_stmt && gimple_assign_cast_p (def_stmt)) tree rhs1 = gimple_assign_rhs1 (def_stmt); if (TYPE_MODE (TREE_TYPE (rhs1)) == TYPE_MODE (TREE_TYPE (oprnd0)) && TYPE_PRECISION (TREE_TYPE (rhs1)) == TYPE_PRECISION (TREE_TYPE (oprnd0))) if (TYPE_PRECISION (TREE_TYPE (oprnd1)) >= TYPE_PRECISION (TREE_TYPE (rhs1))) def = rhs1; tree mask = build_low_bits_mask (TREE_TYPE (rhs1), TYPE_PRECISION (TREE_TYPE (oprnd1))); def = vect_recog_temp_ssa_var (TREE_TYPE (rhs1), NULL); def_stmt = gimple_build_assign (def, BIT_AND_EXPR, rhs1, mask); tree vecstype = get_vectype_for_scalar_type (vinfo, TREE_TYPE (rhs1)); append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt, vecstype); if (def == NULL_TREE) def = vect_recog_temp_ssa_var (TREE_TYPE (oprnd0), NULL); 
  def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd1);
  append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);

  /* Pattern detected.  */
  vect_pattern_detected ("vect_recog_vector_vector_shift_pattern", last_stmt);

  /* Pattern supported.  Create a stmt to be used to replace the pattern.  */
  var = vect_recog_temp_ssa_var (TREE_TYPE (oprnd0), NULL);
  pattern_stmt = gimple_build_assign (var, rhs_code, oprnd0, def);

  return pattern_stmt;
}

/* Return true iff the target has a vector optab implementing the operation
   CODE on type VECTYPE.  */

static bool
target_has_vecop_for_code (tree_code code, tree vectype)
{
  optab voptab = optab_for_tree_code (code, vectype, optab_vector);
  return voptab
         && optab_handler (voptab, TYPE_MODE (vectype)) != CODE_FOR_nothing;
}

/* Verify that the target has optabs of VECTYPE to perform all the steps
   needed by the multiplication-by-immediate synthesis algorithm described by
   ALG and VAR.  If SYNTH_SHIFT_P is true ensure that vector addition is
   present.  Return true iff the target supports all the steps.  */

static bool
target_supports_mult_synth_alg (struct algorithm *alg, mult_variant var,
                                tree vectype, bool synth_shift_p)
{
  if (alg->op[0] != alg_zero && alg->op[0] != alg_m)
    return false;

  bool supports_vminus = target_has_vecop_for_code (MINUS_EXPR, vectype);
  bool supports_vplus = target_has_vecop_for_code (PLUS_EXPR, vectype);

  if (var == negate_variant
      && !target_has_vecop_for_code (NEGATE_EXPR, vectype))
    return false;

  /* If we must synthesize shifts with additions make sure that vector
     addition is available.  */
  if ((var == add_variant || synth_shift_p) && !supports_vplus)
    return false;

  for (int i = 1; i < alg->ops; i++)
    {
      switch (alg->op[i])
        {
        case alg_shift:
          break;
        case alg_add_t_m2:
        case alg_add_t2_m:
        case alg_add_factor:
          if (!supports_vplus)
            return false;
          break;
        case alg_sub_t_m2:
        case alg_sub_t2_m:
        case alg_sub_factor:
          if (!supports_vminus)
            return false;
          break;
        case alg_unknown:
        case alg_m:
        case alg_zero:
        case alg_impossible:
          return false;
        default:
          gcc_unreachable ();
        }
    }

  return true;
}

/* Synthesize a left shift of OP by AMNT bits using a series of additions and
   putting the final result in DEST.  Append all statements but the last into
   VINFO.  Return the last statement.  */

static gimple *
synth_lshift_by_additions (vec_info *vinfo,
                           tree dest, tree op, HOST_WIDE_INT amnt,
                           stmt_vec_info stmt_info)
{
  HOST_WIDE_INT i;
  tree itype = TREE_TYPE (op);
  tree prev_res = op;
  gcc_assert (amnt >= 0);

  for (i = 0; i < amnt; i++)
    {
      tree tmp_var = (i < amnt - 1) ? vect_recog_temp_ssa_var (itype, NULL)
                                    : dest;
      gimple *stmt
        = gimple_build_assign (tmp_var, PLUS_EXPR, prev_res, prev_res);
      prev_res = tmp_var;
      if (i < amnt - 1)
        append_pattern_def_seq (vinfo, stmt_info, stmt);
      else
        return stmt;
    }

  gcc_unreachable ();
  return NULL;
}

/* Helper for vect_synth_mult_by_constant.  Apply a binary operation CODE to
   operands OP1 and OP2, creating a new temporary SSA var in the process if
   necessary.  Append the resulting assignment statements to the sequence in
   STMT_VINFO.  Return the SSA variable that holds the result of the binary
   operation.  If SYNTH_SHIFT_P is true synthesize left shifts using
   additions.  */

static tree
apply_binop_and_append_stmt (vec_info *vinfo, tree_code code,
                             tree op1, tree op2, stmt_vec_info stmt_vinfo,
                             bool synth_shift_p)
{
  if (integer_zerop (op2)
      && (code == LSHIFT_EXPR || code == PLUS_EXPR))
    {
      gcc_assert (TREE_CODE (op1) == SSA_NAME);
      return op1;
    }

  gimple *stmt;
  tree itype = TREE_TYPE (op1);
  tree tmp_var = vect_recog_temp_ssa_var (itype, NULL);

  if (code == LSHIFT_EXPR && synth_shift_p)
    {
      stmt = synth_lshift_by_additions (vinfo, tmp_var, op1,
                                        TREE_INT_CST_LOW (op2), stmt_vinfo);
      append_pattern_def_seq (vinfo, stmt_vinfo, stmt);
      return tmp_var;
    }

  stmt = gimple_build_assign (tmp_var, code, op1, op2);
  append_pattern_def_seq (vinfo, stmt_vinfo, stmt);
  return tmp_var;
}

/* Synthesize a multiplication of OP by an INTEGER_CST VAL using shifts and
   simple arithmetic operations to be vectorized.  Record the statements
   produced in STMT_VINFO and return the last statement in the sequence or
   NULL if it's not possible to synthesize such a multiplication.
   This function mirrors the behavior of expand_mult_const in expmed.c but
   works on tree-ssa form.  */

static gimple *
vect_synth_mult_by_constant (vec_info *vinfo, tree op, tree val,
                             stmt_vec_info stmt_vinfo)
{
  tree itype = TREE_TYPE (op);
  machine_mode mode = TYPE_MODE (itype);
  struct algorithm alg;
  mult_variant variant;

  if (!tree_fits_shwi_p (val))
    return NULL;

  /* Multiplication synthesis by shifts, adds and subs can introduce
     signed overflow where the original operation didn't.  Perform the
     operations on an unsigned type and cast back to avoid this.
     In the future we may want to relax this for synthesis algorithms
     that we can prove do not cause unexpected overflow.  */
  bool cast_to_unsigned_p = !TYPE_OVERFLOW_WRAPS (itype);

  tree multtype = cast_to_unsigned_p ? unsigned_type_for (itype) : itype;

  /* Targets that don't support vector shifts but support vector additions
     can synthesize shifts that way.  */
  bool synth_shift_p = !vect_supportable_shift (vinfo, LSHIFT_EXPR, multtype);

  HOST_WIDE_INT hwval = tree_to_shwi (val);
  /* Use MAX_COST here as we don't want to limit the sequence on rtx costs.
     The vectorizer's benefit analysis will decide whether it's beneficial
     to do this.  */
  bool possible = choose_mult_variant (mode, hwval, &alg, &variant, MAX_COST);
  if (!possible)
    return NULL;

  tree vectype = get_vectype_for_scalar_type (vinfo, multtype);

  if (!vectype
      || !target_supports_mult_synth_alg (&alg, variant,
                                          vectype, synth_shift_p))
    return NULL;

  tree accumulator;

  /* Clear out the sequence of statements so we can populate it below.  */
  gimple *stmt = NULL;

  if (cast_to_unsigned_p)
    {
      tree tmp_op = vect_recog_temp_ssa_var (multtype, NULL);
      stmt = gimple_build_assign (tmp_op, CONVERT_EXPR, op);
      append_pattern_def_seq (vinfo, stmt_vinfo, stmt);
      op = tmp_op;
    }

  if (alg.op[0] == alg_zero)
    accumulator = build_int_cst (multtype, 0);
  else
    accumulator = op;

  bool needs_fixup = (variant == negate_variant)
                     || (variant == add_variant);

  for (int i = 1; i < alg.ops; i++)
    {
      tree shft_log = build_int_cst (multtype, alg.log[i]);
      tree accum_tmp = vect_recog_temp_ssa_var (multtype, NULL);
      tree tmp_var = NULL_TREE;

      switch (alg.op[i])
        {
        case alg_shift:
          if (synth_shift_p)
            stmt = synth_lshift_by_additions (vinfo, accum_tmp, accumulator,
                                              alg.log[i], stmt_vinfo);
          else
            stmt = gimple_build_assign (accum_tmp, LSHIFT_EXPR, accumulator,
                                        shft_log);
          break;
        case alg_add_t_m2:
          tmp_var
            = apply_binop_and_append_stmt (vinfo, LSHIFT_EXPR, op, shft_log,
                                           stmt_vinfo, synth_shift_p);
          stmt = gimple_build_assign (accum_tmp, PLUS_EXPR, accumulator,
                                      tmp_var);
          break;
        case alg_sub_t_m2:
          tmp_var = apply_binop_and_append_stmt (vinfo, LSHIFT_EXPR, op,
                                                 shft_log, stmt_vinfo,
                                                 synth_shift_p);
          /* In some algorithms the first step involves zeroing the
             accumulator.  If subtracting from such an accumulator
             just emit the negation directly.  */
          if (integer_zerop (accumulator))
            stmt = gimple_build_assign (accum_tmp, NEGATE_EXPR, tmp_var);
          else
            stmt = gimple_build_assign (accum_tmp, MINUS_EXPR, accumulator,
                                        tmp_var);
          break;
        case alg_add_t2_m:
          tmp_var
            = apply_binop_and_append_stmt (vinfo, LSHIFT_EXPR, accumulator,
                                           shft_log, stmt_vinfo,
                                           synth_shift_p);
          stmt = gimple_build_assign (accum_tmp, PLUS_EXPR, tmp_var, op);
          break;
        case alg_sub_t2_m:
          tmp_var
            = apply_binop_and_append_stmt (vinfo, LSHIFT_EXPR, accumulator,
                                           shft_log, stmt_vinfo,
                                           synth_shift_p);
          stmt = gimple_build_assign (accum_tmp, MINUS_EXPR, tmp_var, op);
          break;
        case alg_add_factor:
          tmp_var
            = apply_binop_and_append_stmt (vinfo, LSHIFT_EXPR, accumulator,
                                           shft_log, stmt_vinfo,
                                           synth_shift_p);
          stmt = gimple_build_assign (accum_tmp, PLUS_EXPR, accumulator,
                                      tmp_var);
          break;
        case alg_sub_factor:
          tmp_var
            = apply_binop_and_append_stmt (vinfo, LSHIFT_EXPR, accumulator,
                                           shft_log, stmt_vinfo,
                                           synth_shift_p);
          stmt = gimple_build_assign (accum_tmp, MINUS_EXPR, tmp_var,
                                      accumulator);
          break;
        default:
          gcc_unreachable ();
        }
      /* We don't want to append the last stmt in the sequence to stmt_vinfo
         but rather return it directly.  */
      if ((i < alg.ops - 1) || needs_fixup || cast_to_unsigned_p)
        append_pattern_def_seq (vinfo, stmt_vinfo, stmt);
      accumulator = accum_tmp;
    }
  if (variant == negate_variant)
    {
      tree accum_tmp = vect_recog_temp_ssa_var (multtype, NULL);
      stmt = gimple_build_assign (accum_tmp, NEGATE_EXPR, accumulator);
      accumulator = accum_tmp;
      if (cast_to_unsigned_p)
        append_pattern_def_seq (vinfo, stmt_vinfo, stmt);
    }
  else if (variant == add_variant)
    {
      tree accum_tmp = vect_recog_temp_ssa_var (multtype, NULL);
      stmt = gimple_build_assign (accum_tmp, PLUS_EXPR, accumulator, op);
      accumulator = accum_tmp;
      if (cast_to_unsigned_p)
        append_pattern_def_seq (vinfo, stmt_vinfo, stmt);
    }
  /* Move back to a signed if needed.  */
  if (cast_to_unsigned_p)
    {
      tree accum_tmp = vect_recog_temp_ssa_var (itype, NULL);
      stmt = gimple_build_assign (accum_tmp, CONVERT_EXPR, accumulator);
    }

  return stmt;
}

/* Detect multiplication by constant and convert it into a sequence of
   shifts and additions, subtractions, negations.  We reuse the
   choose_mult_variant algorithms from expmed.c

   * STMT_VINFO: The stmt from which the pattern search begins,
     i.e. the mult stmt.

   * TYPE_OUT: The type of the output of this pattern.

   * Return value: A new stmt that will be used to replace
     the multiplication.  */

static gimple *
vect_recog_mult_pattern (vec_info *vinfo,
                         stmt_vec_info stmt_vinfo, tree *type_out)
{
  gimple *last_stmt = stmt_vinfo->stmt;
  tree oprnd0, oprnd1, vectype, itype;
  gimple *pattern_stmt;

  if (!is_gimple_assign (last_stmt))
    return NULL;

  if (gimple_assign_rhs_code (last_stmt) != MULT_EXPR)
    return NULL;

  oprnd0 = gimple_assign_rhs1 (last_stmt);
  oprnd1 = gimple_assign_rhs2 (last_stmt);
  itype = TREE_TYPE (oprnd0);

  if (TREE_CODE (oprnd0) != SSA_NAME
      || TREE_CODE (oprnd1) != INTEGER_CST
      || !INTEGRAL_TYPE_P (itype)
      || !type_has_mode_precision_p (itype))
    return NULL;

  vectype = get_vectype_for_scalar_type (vinfo, itype);
  if (vectype == NULL_TREE)
    return NULL;

  /* If the target can handle vectorized multiplication natively,
     don't attempt to optimize this.  */
  optab mul_optab = optab_for_tree_code (MULT_EXPR, vectype, optab_default);
  if (mul_optab != unknown_optab)
    {
      machine_mode vec_mode = TYPE_MODE (vectype);
      int icode = (int) optab_handler (mul_optab, vec_mode);
      if (icode != CODE_FOR_nothing)
        return NULL;
    }

  pattern_stmt = vect_synth_mult_by_constant (vinfo,
                                              oprnd0, oprnd1, stmt_vinfo);
  if (!pattern_stmt)
    return NULL;

  /* Pattern detected.  */
  vect_pattern_detected ("vect_recog_mult_pattern", last_stmt);

  *type_out = vectype;

  return pattern_stmt;
}

/* Detect a signed division by a constant that wouldn't be
   otherwise vectorized:

   type a_t, b_t;

   S1 a_t = b_t / N;

   where type 'type' is an integral type and N is a constant.

   Similarly handle modulo by a constant:

   S4 a_t = b_t % N;

   * STMT_VINFO: The stmt from which the pattern search begins,
     i.e. the division stmt.  S1 is replaced by, if N is a power of
     two constant and type is signed:
   S3  y_t = b_t < 0 ? N - 1 : 0;
   S2  x_t = b_t + y_t;
   S1' a_t = x_t >> log2 (N);

     S4 is replaced, if N is a power of two constant and type is
     signed, by (where *_T temporaries have unsigned type):
   S9  y_T = b_t < 0 ? -1U : 0U;
   S8  z_T = y_T >> (sizeof (type_t) * CHAR_BIT - log2 (N));
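The shift-and-add synthesis this pass performs can be illustrated outside the compiler. Below is a minimal Python sketch (hypothetical, and deliberately naive: it uses plain binary decomposition rather than the cost-driven search of choose_mult_variant, and ignores the overflow and vectorization concerns handled above) of multiplying by a constant using only shifts and additions:

```python
def mult_by_const(x, c):
    """Multiply x by a non-negative constant c using only shifts and adds,
    in the spirit of the sequences vect_synth_mult_by_constant emits."""
    acc = 0                    # accumulator, as if initialised by alg_zero
    bit = 0
    while c:
        if c & 1:
            acc += x << bit    # add a shifted copy of the operand (cf. alg_add_t_m2)
        c >>= 1
        bit += 1
    return acc

# 10 = 8 + 2, so x * 10 becomes (x << 3) + (x << 1)
assert mult_by_const(7, 10) == 70
```

GCC's real algorithm additionally considers subtractions and factor steps (e.g. computing x*15 as (x << 4) - x), which is why the switch above has alg_sub_* and alg_*_factor cases.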
Studies in Sphere Packing
by Kirby Urner
First posted: Feb 12, 2002
Last modified: Feb 26, 2002

Fig 1: SIAM company Java applet (try it, you'll like it!)

Essential to figuring the convex hulls, given a set of vertices, was Qhull, freely available geometry software designed for this purpose. The glue language for linking raw data from the SIAM applet to Qhull, and to Povray and VRML as output options, was Python (LiveGraphics3D and EIG formats also in the works).

Fig 2: The nuclear ball (Ball #1) surrounded by its Voronoi cell. This is the nuclear ball in a 1000-ball packing returned by the Java applet above. Note that it looks very much like a regular pentagonal dodecahedron, although it isn't, quite (check the VRML view for a better sense of it).

Fig 3: Nuclear sphere (Ball #1, green) with 3 touching neighbors.

Fig 4: Ball #11 (green) with 3 touching neighbors. The Voronoi cell around Ball #11 has 13 facets, while its neighbors have 13, 14 and 15 facets. Ball #11 has 10 touching neighbors. These, and 3 more nearby spheres, will all share a facet of its Voronoi cell.

Additional Resources:
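As a small, self-contained illustration of "touching neighbors" in a dense packing (a hypothetical stand-in for the applet's data: the counts on this page come from a random 1000-ball packing, not a lattice), in the face-centered cubic packing every interior ball touches exactly 12 others:

```python
from itertools import product
from math import sqrt

def fcc_touching_neighbors(radius=1.0):
    """Count balls touching the ball at the origin in an FCC packing.

    FCC centers sit at integer points with even coordinate sum, scaled so
    the nearest-neighbor distance equals one ball diameter."""
    d = 2 * radius          # center-to-center distance of two touching balls
    scale = d / sqrt(2)     # nearest FCC neighbors lie at lattice distance sqrt(2)
    count = 0
    for x, y, z in product(range(-2, 3), repeat=3):
        if (x + y + z) % 2 or (x, y, z) == (0, 0, 0):
            continue        # not an FCC site, or the origin itself
        dist = scale * sqrt(x * x + y * y + z * z)
        if abs(dist - d) < 1e-9:
            count += 1
    return count

assert fcc_touching_neighbors() == 12   # the kissing number in three dimensions
```

In a random packing like the applet's, touching counts vary from ball to ball (Ball #11 above has 10), and 12 is the lattice maximum.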
calc_returnValue_fevd {climextRemes}    R Documentation

Calculates return value and standard error given return period(s) of interest

Description:

Calculates the return value (also known as the return level) given return period(s) of interest, using a model fit from extRemes::fevd. The standard error is obtained via the delta method. The return value is the value for which the expected number of blocks until an event that exceeds that value is equal to the return period. For non-stationary models (those that include covariates for the location, scale, and/or shape parameters), return values and standard errors are returned for as many sets of covariates as provided.

Usage:

calc_returnValue_fevd(fit, returnPeriod, covariates = NULL)

Arguments:

fit            fitted object from extRemes fevd
returnPeriod   value(s) for which the return value is desired
covariates     matrix of covariate values, each row a set of covariates for which the return value is desired

version 0.3.1
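For intuition, the return value of a stationary GEV fit with location mu, scale sigma, and shape xi is simply the (1 - 1/T)-quantile of the fitted distribution. A hedged Python sketch of that standard formula (an illustration only, not climextRemes' implementation, which works from an extRemes fit object and also produces delta-method standard errors):

```python
from math import log

def gev_return_value(mu, sigma, xi, T):
    """Return level z_T: the value exceeded on average once every T blocks.

    Solves F(z_T) = 1 - 1/T for a GEV(mu, sigma, xi) distribution."""
    y = -log(1.0 - 1.0 / T)       # -log of the non-exceedance probability
    if abs(xi) < 1e-12:           # Gumbel limit as xi -> 0
        return mu - sigma * log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# 100-block return level of a standard Gumbel (mu=0, sigma=1, xi=0) is about 4.60
z100 = gev_return_value(0.0, 1.0, 0.0, 100)
```

Longer return periods give larger return values, which is why the delta-method standard error also grows with T.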
Oct 15, 2015: Informal Meeting Planning

Oct 22, 2015: Sascha Sigmund, "Flux sensitivity analysis: a comparison."
In my talk we will compare two different approaches to flux sensitivity analysis: the first one was proposed by Fiedler and Mochizuki, and the second one by Shinar and Feinberg. We study especially the flux sensitivity matrices themselves and related statements on these matrices.

Phillipo Lappicy, "On the zero set of a solution of parabolic equations."
We will discuss a fundamental result in the theory of scalar parabolic equations: the zero dropping property. The proof of this fact will be given following a well-known paper by Angenent. The talk will be continued on 27.10.2015 in room A6 SR025/026.

Oct 29, 2015: Jia-Yuan Dai, "Existence of Rotating Spirals for Complex Ginzburg-Landau Equations on Spheres."
The complex Ginzburg-Landau equation (cGLe) can exhibit spiral patterns. By using a suitable ansatz, the problem of finding rotating spirals for the cGLe on spheres is reduced to solving a non-autonomous ODE system with singularities. I will indicate how to solve such an ODE system by a shooting method and a transversality argument.

Nov 5, 2015: Ivan Ovsyannikov (University of Bremen, Germany), "On the effect of invisibility of stable orbits in homoclinic bifurcations."
Homoclinic bifurcations are known to give rise to various objects such as periodic orbits, invariant tori, as well as more complicated sets (e.g. strange attractors). An important role here is played by the so-called saddle value, which is the rate of the contraction/expansion of two-dimensional areas near the saddle fixed point. If the areas are contracted, the bifurcations unfold in the simplest way, producing stable periodic orbits (exactly one in the flow case) which are easily observed in one-parameter families. But if 2D areas are expanded, the situation gets much more complicated. In the continuous-time case it was shown by L. P. Shilnikov that unfoldings of homoclinic bifurcations lead to the appearance of infinitely many coexisting periodic orbits (the so-called Shilnikov chaos). For discrete-time systems it turns out that stable periodic orbits may be born, but they are observed in experiments with "zero probability". In my talk I will give the theoretical explanation of this phenomenon. This is joint work with S. Gonchenko and D. Turaev (Physica D, 2012).

Nov 12, 2015: Yuya Tokuta, "Equilibria of the reaction-diffusion system modeling the convection patterns of Euglena gracilis."
Micro-organisms are known to form spatio-temporal patterns similar to those formed in the Rayleigh-Bénard model for thermal convection. Among such micro-organisms, Euglena gracilis form distinct patterns induced by positive/negative phototaxis and sensitivity to the gradient of light intensity. A model for the convection patterns of Euglena gracilis was proposed by Suematsu et al., and we will discuss equilibria of the system in the simplest case.

Nov 19, 2015: Arne Goedeke, "Linear instability of black strings."
Black strings are black hole solutions of Einstein's equations in more than four dimensions. In spacetime dimension n+1 they have horizon topology S^{n-1} x S^1. Using numerical simulations, it was discovered in the 1990s that black strings are linearly unstable. Since then, most research surrounding this result has focused on understanding physical aspects of the instability. We will return to the original problem and present a rigorous proof of the linear instability.

Nov 26, 2015: Bernhard Brehm, "Unreachable Heteroclinic Chains in vacuum Bianchi IX."
The Bianchi IX system of ODEs describes the behaviour of a certain class of homogeneous anisotropic cosmological models (i.e. solutions to Einstein's equations of general relativity) near the big bang singularity. It is known that there exists an attractor consisting of heteroclinic orbits, which exhibits chaotic behaviour. Previous results from the group [Georgi, Haerterich, Liebscher et al.] have shown that certain such heteroclinic chains (forming a countable union of Cantor sets) possess stable manifolds of codimension one and Lipschitz regularity. We will show that only a meagre (Baire-small) set of heteroclinic chains can have a stable object of any meaningful regularity ("connected contracting sets") attached, which especially excludes stable Lipschitz manifolds. If time permits, we will also give an additional justification for the used regularity class of "attached connected contracting sets". This comes in the form of a comparatively short construction of "attached connected contracting sets" for a large (but still meagre) class of heteroclinic chains.

Dec 3, 2015: Anna Karnauhova, "Morse Meanders of Type I and II and the corresponding connection graphs."
Our talk is related to Prof. Bernold Fiedler's work on sixteen examples of global attractors of one-dimensional parabolic equations. By introducing the right one-shift we will prove, under certain assumptions on the arc configurations, that we obtain a class of Morse meanders in the size of the Catalan numbers for the fixed number of arcs. The Morse meanders of this first class will be called of Type I, as will the associated connection graphs which arise by the blocking and liberalism conditions. Further, it will be possible to deliver necessary and sufficient conditions on connection graphs of Type I for being isomorphic in the graph-theoretical sense. Simultaneously to the first part of the talk, by introducing the left one-shift we will open a second class of meanders which are not Morse meanders. Strictly speaking, our aim will be to argue that it is possible to recover the Morse property of meanders of the second class by introducing precisely two maps. Our study will be accomplished by considering concatenations of both types of Morse meanders and a post-discussion of the non-existence of graph isomorphisms between connection graphs from the two different classes I and II.

Dec 10, 2015: Jia-Yuan Dai, "Existence of Rotating Spirals for Complex Ginzburg-Landau Equations on Spheres" (continued).
We will continue to solve the existence and stability problem of the non-autonomous ODE system with singularities, which yields spiral patterns of the complex Ginzburg-Landau equation through a spiral ansatz. In this talk, we will focus on the global bifurcation approach and relate the results with the shooting method.

Dec 17, 2015: Isabelle Schneider, "Spatio-temporal feedback control of partial differential equations."
Noninvasive time-delayed feedback control ("Pyragas control") has been extensively studied and applied in the context of ordinary differential equations. For partial differential equations, almost no results exist up to date. In my talk, I introduce new noninvasive feedback control terms, using both space and time for the control of partial differential equations.

Jan 7, 2016: Juliette Hell, "Iterated function systems and chaos in Hořava-Lifshitz gravity models."
In general relativity, the geometry of the universe near the big bang singularity is tightly related to a discrete map on a circle: the Mixmaster map. This map can be described by an elementary geometric construction involving an equilateral triangle. In this talk, we will consider modified gravity models called Hořava-Lifshitz. In the geometrical construction of the Mixmaster map, this amounts to scaling the equilateral triangle while keeping the rest of the construction unchanged. We will focus on the case where the triangle is larger than for general relativity, which I find the more demanding case mathematically, and probably also the more relevant case physically. As a consequence, overlapping expanding arcs will show up in the discrete dynamics. This fact leads us to model the discrete dynamics by a generalized version of iterated function systems that will be introduced in the talk. Our aim is to prove that the discrete dynamics is chaotic (except in the limiting case where the equilateral triangle is infinitely large).

Jan 14, 2016: Nikita Begun, "Dynamics of a Discrete Time System with Stop Operator."
We consider a piecewise linear two-dimensional dynamical system that couples a linear equation with the so-called stop operator. Global dynamics and bifurcations of this system are studied depending on two parameters. The system is motivated by general-equilibrium macroeconomic models with sticky information.

Jan 2016: Bernhard Brehm, "Introduction to computability and universal Turing machines."

Jan 28, 2016: Bernhard Brehm, "The Bianchi VIII Attractor Theorem and Particle Horizons."
This talk will give a complete overview of my doctoral thesis. The Wainwright-Hsu system of ODEs describes the dynamics of spatially homogeneous anisotropic space-times under the vacuum Einstein field equations, in the case where the homogeneity is given by either so(3) (Bianchi 9) or sl(2,R) (Bianchi 8). Relevant questions are the open-ended "describe the dynamics!" and the more specific physical "do particle horizons develop?", which boils down to bounding a certain integral. The talk will contain the following parts: 1. Overview: The talk will provide a short overview of the Wainwright-Hsu system. 2. Attractor Theorem: The Wainwright-Hsu system contains an invariant set, called the "Mixmaster attractor". Hans Ringstroem proved in 2001 that this set is actually an attractor for Bianchi 9 initial conditions. My thesis proves that this set is also an attractor for Bianchi 8 initial conditions, and provides a new proof of Ringstroem's result. The talk will give an overview of this proof. 3. Particle horizons: My thesis proves that particle horizons develop for Lebesgue-a.e. initial condition. The talk will give an overview of this proof.

Feb 4, 2016: Robert Krehl, "The Fucik spectrum: the inhomogeneous case."
I will give a short introduction to the Fucik spectrum by analyzing the equation -u'' = bu^(+) - au^(-) with Dirichlet boundary conditions. Then, I will try to generalize my approach to prove the existence of a spectrum in the inhomogeneous case. Hopefully, in the end, there will be enough time to indicate some ideas of my current work in progress of finding a representation formula for the spectrum in the inhomogeneous case.

Nicola Vassena, "Monomolecular reaction networks: characterization of the flux-influenced set."
This talk explains the very last part of my Master's thesis. Monomolecular reaction networks are modeled by particular directed graphs, which represent reactions as directed arrows. The first part of my thesis, presented in a talk last semester, was about reformulating a theorem by Fiedler and Mochizuki which characterized, in graph language only, the flux-influence relation. A reaction j' is flux-influenced by another reaction j* if it responds by a flux variation to perturbations of the reaction rate of j*. This reformulation was done to obtain a different proof of the transitivity of the flux-influence relation. After a brief recall, we will use this reformulation to derive a characterization of the whole set of arrows which are flux-influenced by a specific one. Nomenclature issues regarding how to call or define new graph tools for this specific kind of network are not completely settled, and suggestions and ideas will be welcome.
Syntax: FACTOR variable factor [cutpoint(s)]

where cutpoints are an optional set of real numbers.

The FACTOR command produces a set of dummy variables obtained by treating the variable as a categorical variable. The set of dummy variables is referred to in SABRE by the name given as factor.

If cutpoints are not specified, then the variable will be categorised according to the number of distinct values it possesses. If n cutpoints are specified, then the values of the variable are divided into n+1 categories with intervals defined by:

-infinity cutpoint1 cutpoint2 ... cutpoint(n) infinity

If an element in the variable is exactly equal to a cutpoint, then the element is placed in the higher cutpoint group.

The number of levels in a factor may be obtained by issuing the DISPLAY VARIABLES command.
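The cutpoint rule above (n cutpoints give n+1 categories, and a value exactly equal to a cutpoint goes into the higher group) can be sketched in a few lines of Python (an illustration of the documented behaviour, not SABRE code):

```python
from bisect import bisect_right

def factor_level(value, cutpoints):
    """Return the 0-based category index for value, given sorted cutpoints.

    Intervals are (-inf, c1), [c1, c2), ..., [cn, inf): a value equal to a
    cutpoint lands in the higher group, which bisect_right provides."""
    return bisect_right(cutpoints, value)

cuts = [10.0, 20.0]              # 2 cutpoints -> 3 categories
assert factor_level(9.5, cuts) == 0    # below the first cutpoint
assert factor_level(10.0, cuts) == 1   # exactly on a cutpoint: higher group
assert factor_level(25.0, cuts) == 2   # above the last cutpoint
```

Note that bisect_left would put boundary values into the lower group instead, which is why bisect_right matches the rule stated here.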
Double-negation translation

In proof theory, a discipline within mathematical logic, double-negation translation, sometimes called negative translation, is a general approach for embedding classical logic into intuitionistic logic, typically by translating formulas to formulas which are classically equivalent but intuitionistically inequivalent. Particular instances of double-negation translation include Glivenko's translation for propositional logic, and the Gödel–Gentzen translation and Kuroda's translation for first-order logic.

Propositional logic

The easiest double-negation translation to describe comes from Glivenko's theorem, proved by Valery Glivenko in 1929. It maps each classical formula φ to its double negation ¬¬φ. Glivenko's theorem states:

If φ is a propositional formula, then φ is a classical tautology if and only if ¬¬φ is an intuitionistic tautology.

Glivenko's theorem implies the more general statement:

If T is a set of propositional formulas, T* a set consisting of the doubly negated formulas of T, and φ a propositional formula, then T ⊢ φ in classical logic if and only if T* ⊢ ¬¬φ in intuitionistic logic.

In particular, a set of propositional formulas is intuitionistically consistent if and only if it is classically satisfiable.

First-order logic

The Gödel–Gentzen translation (named after Kurt Gödel and Gerhard Gentzen) associates with each formula φ in a first-order language another formula φ^N, which is defined inductively:

• If φ is atomic, then φ^N is ¬¬φ
• (φ ∧ θ)^N is φ^N ∧ θ^N
• (φ ∨ θ)^N is ¬(¬φ^N ∧ ¬θ^N)
• (φ → θ)^N is φ^N → θ^N
• (¬φ)^N is ¬φ^N
• (∀x φ)^N is ∀x φ^N
• (∃x φ)^N is ¬∀x ¬φ^N

This translation has the property that φ^N is classically equivalent to φ. The fundamental soundness theorem (Avigad and Feferman 1998, p. 342; Buss 1998, p. 66) states:

If T is a set of axioms and φ is a formula, then T proves φ using classical logic if and only if T^N proves φ^N using intuitionistic logic.
Here T^N consists of the double-negation translations of the formulas in T.

A sentence φ may not imply its negative translation φ^N in intuitionistic first-order logic. Troelstra and Van Dalen (1988, Ch. 2, Sec. 3) give a description (due to Leivant) of formulas that do imply their Gödel–Gentzen translation.

There are several alternative definitions of the negative translation. They are all provably equivalent in intuitionistic logic, but may be easier to apply in particular contexts. One possibility is to change the clauses for disjunction and existential quantifier to

• (φ ∨ θ)^N is ¬¬(φ^N ∨ θ^N)
• (∃x φ)^N is ¬¬∃x φ^N

Then the translation can be succinctly described as: prefix ¬¬ to every atomic formula, disjunction, and existential quantifier.

Another possibility (known as Kuroda's translation) is to construct φ^N from φ by putting ¬¬ before the whole formula and after every universal quantifier. Notice that this reduces to the simple ¬¬φ translation if φ is propositional.

It is also possible to define φ^N by prefixing ¬¬ before every subformula of φ, as done by Kolmogorov. Such a translation is the logical counterpart to the call-by-name continuation-passing style translation of functional programming languages along the lines of the Curry–Howard correspondence between proofs and programs.

The double-negation translation was used by Gödel (1933) to study the relationship between classical and intuitionistic theories of the natural numbers ("arithmetic"). He obtains the following result:

If a formula φ is provable from the axioms of Peano arithmetic then φ^N is provable from the axioms of intuitionistic Heyting arithmetic.

This result shows that if Heyting arithmetic is consistent then so is Peano arithmetic. This is because a contradictory formula θ ∧ ¬θ is interpreted as θ^N ∧ ¬θ^N, which is still contradictory. Moreover, the proof of the relationship is entirely constructive, giving a way to transform a proof of θ ∧ ¬θ in Peano arithmetic into a proof of θ^N ∧ ¬θ^N in Heyting arithmetic. (By combining the double-negation translation with the Friedman translation, it is in fact possible to prove that Peano arithmetic is Π^0_2-conservative over Heyting arithmetic.)

The propositional mapping of φ to ¬¬φ does not extend to a sound translation of first-order logic, because ∀x ¬¬φ(x) → ¬¬∀x φ(x) is not a theorem of intuitionistic predicate logic. This explains why φ^N has to be defined in a more complicated way in the first-order case.

References

• J. Avigad and S. Feferman (1998), "Gödel's Functional ("Dialectica") Interpretation", Handbook of Proof Theory, S. Buss, ed., Elsevier. ISBN 0-444-89840-9
• S. Buss (1998), "Introduction to Proof Theory", Handbook of Proof Theory, S. Buss, ed., Elsevier. ISBN 0-444-89840-9
• G. Gentzen (1936), "Die Widerspruchsfreiheit der reinen Zahlentheorie", Mathematische Annalen, v. 112, pp. 493–565 (German). Reprinted in English translation as "The consistency of arithmetic" in The Collected Papers of Gerhard Gentzen, M. E. Szabo, ed.
• V. Glivenko (1929), "Sur quelques points de la logique de M. Brouwer", Bull. Soc. Math. Belg. 15, 183–188
• K. Gödel (1933), "Zur intuitionistischen Arithmetik und Zahlentheorie", Ergebnisse eines mathematischen Kolloquiums, v. 4, pp. 34–38 (German). Reprinted in English translation as "On intuitionistic arithmetic and number theory" in The Undecidable, M. Davis, ed., pp. 75–81.
• A. N. Kolmogorov (1925), "O principe tertium non datur" (Russian). Reprinted in English translation as "On the principle of the excluded middle" in From Frege to Gödel, van Heijenoort, ed., pp.
• A. S. Troelstra (1977), "Aspects of Constructive Mathematics", Handbook of Mathematical Logic, J. Barwise, ed., North-Holland. ISBN 0-7204-2285-X
• A. S. Troelstra and D. van Dalen (1988), Constructivism in Mathematics. An Introduction, volumes 121, 123 of Studies in Logic and the Foundations of Mathematics, North-Holland.
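As a concrete companion to the inductive definition of the Gödel–Gentzen translation above, here is a small recursive sketch. The tuple encoding of formulas is ours and purely illustrative:

```python
def gg(phi):
    """Gödel–Gentzen negative translation of a first-order formula.

    Formulas are tuples: ('atom', name), ('not', p), ('and'|'or'|'imp', p, q),
    ('forall'|'exists', var, p)."""
    op = phi[0]
    if op == 'atom':
        return ('not', ('not', phi))            # atomic: prefix double negation
    if op == 'not':
        return ('not', gg(phi[1]))
    if op == 'and':
        return ('and', gg(phi[1]), gg(phi[2]))
    if op == 'imp':
        return ('imp', gg(phi[1]), gg(phi[2]))
    if op == 'or':
        # (p ∨ q)^N is ¬(¬p^N ∧ ¬q^N)
        return ('not', ('and', ('not', gg(phi[1])), ('not', gg(phi[2]))))
    if op == 'forall':
        return ('forall', phi[1], gg(phi[2]))
    if op == 'exists':
        # (∃x p)^N is ¬∀x ¬p^N
        return ('not', ('forall', phi[1], ('not', gg(phi[2]))))
    raise ValueError(f'unknown connective: {op}')

P = ('atom', 'P')
assert gg(P) == ('not', ('not', P))
```

The Kuroda and Kolmogorov variants mentioned above differ only in where the ¬¬ prefixes are inserted, so they admit equally short recursive definitions.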
Which are the frustum's dimensions?

I'm a newbie and I want to have a deep understanding of OpenGL. I'm trying to understand how the field of view affects the projection but I can't understand it; all I see when I change it is a zoom effect on the whole model. I also need to write some slightly complex algorithms that involve it, and I can't do that because I don't understand how it works. So I think I can understand it if you can answer these questions:

• Once we establish the projection using gluPerspective(), how can I get the frustum's dimensions? I mean its height, and the width and height of the nearest and farthest planes.
• What's the formula to create the frustum when we use gluPerspective()?

This formula answers all your questions: construct a cube [-1;1], multiply its vertices by the inverse of the projection matrix (or view-projection matrix): the distorted cube now fits the frustum perfectly.

And for a spiffy little labeled diagram of what you're doing, see this: Search down for "projection matrix".
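To answer the original question directly with plain trigonometry (not OpenGL itself): for gluPerspective(fovy, aspect, zNear, zFar), the cross-section of the frustum at distance z from the eye has height 2 * z * tan(fovy / 2), and its width is that height times the aspect ratio. Evaluating at z = zNear and z = zFar gives the near- and far-plane dimensions. A small Python check:

```python
from math import tan, radians

def frustum_plane_size(fovy_deg, aspect, z):
    """Width and height of the frustum cross-section at distance z
    from the eye, for a gluPerspective-style projection."""
    h = 2.0 * z * tan(radians(fovy_deg) / 2.0)   # vertical extent at depth z
    return aspect * h, h

# With a 90-degree vertical FOV, the plane at z = 1 is 2 units tall,
# since tan(45 degrees) = 1.
w, h = frustum_plane_size(90.0, 16.0 / 9.0, 1.0)
```

This also explains the "zoom" effect: shrinking fovy shrinks the cross-section at every depth, so the same scene fills more of the viewport.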
Why do so many students hate math?
Some students dislike math because they think it's dull. They don't get excited about numbers and formulas the way they get excited about history, science, languages, or other subjects that are easier to connect to personally. They see math as abstract and irrelevant figures that are difficult to understand.

What's the hardest math class in high school?
List of the hardest math classes in high school:
1. Algebra
2. Calculus
3. Combinatorics
4. Topology and geometry
5. Dynamical systems and differential equations
6. Mathematical physics
7. Information theory and signal processing

What are the best things about college?
The 9 definitive best things about college life:
1. You get to meet new people that are probably really awesome.
2. Your friends live roughly 3 minutes away.
3. You can have class at whatever time of day you want.
4. The shopping's not half bad.
5. You finally get to take classes that you want to take.
6. There's a place to get food no matter what time it is.

What is algebra used for?
Algebra is the study of mathematical symbols and the rules for manipulating those symbols. It forms the basis for advanced studies in many fields, including mathematics, science, engineering, medicine, and economics. In its simplest form, algebra involves using equations to find the unknown.

How is your college experience different from high school?
HIGH SCHOOL: Your time is usually structured by others: administrators, teachers, coaches and, of course, parents. Teachers carefully monitor class attendance.
COLLEGE: You manage your own time. It's up to you to get to class, do your lab work and study.

What is the hardest unit in Algebra 2?

What is the hardest part of algebra?
Putting abstract algebra aside, nothing in algebra is really hard to understand, but there are some things that are really hard to memorise. The top two hardest formulas to memorise, by far, are the cubic formula and the quartic formula.

What is the difference between school life and college life?
In school, we are bound by protocols and disciplinary rules that we are tempted to defy, but there is always a fear of being caught and punished. College life, on the other hand, is also bound by rules, but they hardly matter, because the sense of freedom students gain in college is all about doing what you feel like.

What is the highest level of math?

How can students make school a better place?
Here are some important ways we can support our vulnerable students and improve their school experience:
1. Start a free clothing closet.
2. Give out weekend food backpacks.
3. Provide free access to sanitary supplies.
4. Have a bank of school supplies available for anyone.
5. Help them find safe transportation.
6. Keep your school libraries.

Why is calculus 2 so hard?
Calc 2 is hard because there's no obvious path to follow while integrating, and the key is practice and experience. Knowledge of the general rules and principles will only get you so far. Practice as much as you can, and get ready to use a lot of foundational math (geometry especially) to solve problems.

What is the hardest math question in the world?
Today's mathematicians would probably agree that the Riemann Hypothesis is the most significant open problem in all of math. It's one of the seven Millennium Prize Problems, with a million-dollar reward for its solution.

Is doubling up in math hard?
Doubling up, or taking two core classes, can be difficult because it is twice the required work and can be tiring. For example, if the school offers six different math classes, a student can take two per year for two years and still take one per year after that.

How can you make a difference in the classroom?
35 Ways To Make A Difference, Student Edition:
1. Volunteer with a nonprofit organization.
2. Teach children at church, elementary school, or secular programs (i.e. Boys & Girls Club).
3. Organize a canned food drive.
4. Give away your childhood toys.
5. Take on a reading day at the library.
6. Donate books.
7. Donate hair to cancer patients.

Which life is best, school or college?
According to me, college life and school life are both best. Students first face school life for eighteen years and enjoy it, and after completing their school studies they join college for a better career.
{"url":"https://www.thelittleaussiebakery.com/why-do-so-many-students-hate-math/","timestamp":"2024-11-14T11:25:06Z","content_type":"text/html","content_length":"56674","record_id":"<urn:uuid:58433648-af7c-409d-ac93-524354aec59e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00315.warc.gz"}
Integration related articles - The Culture SG

Here is a compilation of all the Integration articles KS has done. Students should read them when they are free to improve their mathematics skills. They will come in handy! 🙂

1. Integrating Trigonometric Functions (1)
2. Integrating Trigonometric Functions (2)
3. Integrating Trigonometric Functions (3)
4. Integrating Trigonometric Functions (4)
5. Integrating Trigonometric Functions (5)
6. Leibniz's Formula for
7. Simpson's Rule
8. Evaluating Integrals with Modulus
{"url":"https://theculture.sg/2016/04/integration-related-articles/","timestamp":"2024-11-13T06:28:10Z","content_type":"text/html","content_length":"99364","record_id":"<urn:uuid:7da1c33d-d2c0-437a-aee3-027e72def663>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00486.warc.gz"}
Point and confidence interval estimates for a global maximum via extreme value theory

The aim of this paper is to provide some practical aspects of point and interval estimates of the global maximum of a function using extreme value theory. Consider a real-valued function f : D → ℝ defined on a bounded interval D such that f is either not known analytically or is known analytically but has a rather complicated analytic form. We assume that f possesses a global maximum attained, say, at u* ∈ D, with maximal value x* = max_{u ∈ D} f(u) = f(u*). The problem of seeking the optimum of a function which is more or less unknown to the observer has resulted in the development of a large variety of search techniques. In this paper we use the extreme-value approach as it appears in Dekkers et al. [A moment estimator for the index of an extreme-value distribution, Ann. Statist. 17 (1989), pp. 1833-1855] and de Haan [Estimation of the minimum of a function using order statistics, J. Amer. Statist. Assoc. 76 (1981), pp. 467-469]. We impose some Lipschitz conditions on the functions being investigated and, through repeated simulation-based samplings, we provide various practical interpretations of the parameters involved as well as point and interval estimates for x*.

Keywords:
• Extreme value theory
• Global maximum
• Search techniques

ASJC Scopus subject areas:
• Statistics and Probability
• Statistics, Probability and Uncertainty
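The simulation-based sampling idea in the abstract can be illustrated with a minimal sketch. Note this is plain uniform random search returning the sample maximum, not the Dekkers et al. / de Haan extreme-value estimator itself; the function `f` and the interval are made-up examples.

```python
import random

def sample_maximum(f, a, b, n=10_000, seed=42):
    """Estimate the global maximum of f on [a, b] by evaluating f at n
    uniform random points and keeping the largest observed value.
    Extreme value theory refines this idea by extrapolating from the
    top order statistics; here we only return the sample maximum."""
    rng = random.Random(seed)
    best_u, best_x = None, float("-inf")
    for _ in range(n):
        u = rng.uniform(a, b)
        x = f(u)
        if x > best_x:
            best_u, best_x = u, x
    return best_u, best_x

# Example: a smooth function whose true maximum is x* = 1 at u* = 0.3.
f = lambda u: 1.0 - (u - 0.3) ** 2
u_star, x_star = sample_maximum(f, 0.0, 1.0)
```

Because the sample maximum always underestimates x*, the extreme-value approach in the paper uses the spacing of the largest order statistics to extrapolate beyond the best observed value and to attach a confidence interval to the estimate.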
{"url":"https://cris.haifa.ac.il/en/publications/point-and-confidence-interval-estimates-for-a-global-maximum-via-","timestamp":"2024-11-05T03:56:45Z","content_type":"text/html","content_length":"54012","record_id":"<urn:uuid:696d05d4-949f-4ad2-9cbb-fa881e62e2d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00518.warc.gz"}
How To Calculate Interest From Savings Account

Our high interest savings account calculator considers the initial deposit amount, regular contribution amounts, payment frequency, and interest rate. Banks calculate the interest amount based on their interest rate and the closing balance in your bank account each day.

A savings account has an Annual Percentage Yield (APY), which reflects your account's current interest rate and the effect of interest compounding. Annual Percentage Yield is a percentage expression of the amount of compound interest an account earns in a year. You should compare savings account yields by looking at annual percentage yields (APYs); comparing APYs means you don't have to worry about differences in compounding. The APY on your savings account can make a big difference in the future value of your savings.

To calculate savings account interest, you can use the simple interest formula:

Interest = P x R x T

where P is the principal (your beginning balance), R is the annual interest rate, and T is the time. In other words, you can calculate simple interest by taking the initial deposit, multiplying it by the annual rate of interest, and multiplying that by the time period. Depending on your account, your bank could use either simple or compound interest to figure out how much money you'll earn.

For accounts that accrue interest on the daily balance, a commonly quoted formula is:

Interest for the period = Daily balance x Number of days x Annual rate / Days in the year

If interest is compounded daily, divide the annual rate by the number of days in the year (365) and multiply the result by the balance in the account to find the interest earned in one day. Some banks, such as ICICI Bank, offer fixed savings interest rates and calculate the interest on the daily end-of-day (EOD) balance.

If you want a monthly figure, simply divide your APY by 12 (for each month of the year) to find the approximate percent interest your account earns per month.

How do interest rates work? An interest rate is a percentage of how much you will earn based on the amount you save; interest is paid to you by your savings institution. Interest rate: the rate of return an account will yield after a certain period, typically expressed as a percentage. APY (annual percentage yield): the yield once compounding of your initial deposit and its earnings is taken into account.

How do you calculate interest on a savings account? The simplest way is to use an online savings calculator. A Savings Account Interest Calculator is a financial tool that projects the potential earnings from a savings account: it can estimate the end balance and interest, and may consider many different factors such as tax and inflation. Such calculators typically let you select your initial deposit, how much you plan to contribute and how often, and the interest rate, and then show how much interest you'll be paid, how long you'll need to save for a goal, or how much you need to save.
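The simple-interest, daily-compounding, and APY formulas above can be sketched in a few lines of code (the deposit figures and function names are made-up examples):

```python
def simple_interest(principal, annual_rate, years):
    """Interest earned = P x R x T."""
    return principal * annual_rate * years

def daily_compound_balance(principal, annual_rate, days, days_in_year=365):
    """Balance after applying the daily rate (annual_rate / days_in_year)
    once per day, as in the daily-balance method described above."""
    return principal * (1 + annual_rate / days_in_year) ** days

def apy(annual_rate, periods_per_year=365):
    """Annual Percentage Yield implied by compounding annual_rate
    periods_per_year times per year."""
    return (1 + annual_rate / periods_per_year) ** periods_per_year - 1

# 1,000 at 5% simple interest for 2 years earns 100 in interest.
print(simple_interest(1000, 0.05, 2))   # 100.0
# A 5% nominal rate, compounded daily, yields an APY of about 5.13%.
print(round(apy(0.05) * 100, 2))        # 5.13
```

This also shows why comparing APYs is easier than comparing nominal rates: the APY already folds the compounding frequency into a single comparable number.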
{"url":"https://mariscos.site/news/how-to-calculate-interest-from-savings-account.php","timestamp":"2024-11-14T00:56:01Z","content_type":"text/html","content_length":"16573","record_id":"<urn:uuid:112c319e-fd15-40dd-a017-5c7cc13b7889>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00662.warc.gz"}
Time Value of An Option: What is it, How it Works, Calculations, and Benefits Written by Arjun Remesh | Reviewed by Shivam Gaba | Updated on 23 October 2024 The time value of an option refers to the portion of an option’s price that is not intrinsic value. Time value exists because there is still time remaining until the option expires, during which the option has the potential to become further in-the-money. The longer the time until expiry, the greater the time value of the option since there is more time for the stock price to move favorably. As the option approaches expiry, time value decays rapidly. On the expiration date, time value drops to zero and only intrinsic value remains. Time value is affected by several factors – the underlying stock’s volatility, time to expiration, and interest rates all impact time value. Higher volatility stocks have greater time value since there is higher potential for price movement. Longer dated options have higher time value with more time for favorable moves. Higher interest rates increase time value as well. The key benefit of time value is it allows option buyers to obtain leverage. For a small premium, they control a much larger amount of stock. However, time value also presents a risk, since it decays rapidly and causes options to expire worthless if the stock price does not move enough. Traders look to balance the leverage benefits and decay risks when assessing time value. What does the Time Value of An Option mean? The time value of an option refers to the portion of an option’s premium that exceeds the option’s intrinsic value. An option’s premium is the total price paid for the option, while intrinsic value is defined as the amount by which the strike price of an option is in-the-money. For call options, intrinsic value is the amount by which the underlying asset’s price is above the strike price. 
For put options, intrinsic value is the amount by which the strike price exceeds the underlying asset's current market price. Time value represents the additional value of an option beyond merely the intrinsic value. It is essentially the amount by which the premium exceeds the intrinsic value, if any. Time value exists because there is remaining time until option expiration, during which the option has the potential to become further in-the-money due to movements in the underlying asset's price. The longer the time until expiry, the greater the time value of the option, since there is more time in which the asset price can move favorably. As the option contract approaches its expiration date, time value decays rapidly. This is because there are fewer days remaining for the asset price to move advantageously. On the actual expiration date, time value drops to zero and only intrinsic value remains. An option never trades for less than its intrinsic value at expiration.

How does the Time Value of An Option work?

The time value of an option is derived from the uncertainty over the underlying asset's price during the remaining life of the option. The most common method for estimating an option's time value involves using pricing models like Black-Scholes and binomial trees. These models take into account variables like the underlying price, strike price, time to expiration, volatility, interest rates and dividends to calculate the probable value of the option. The models produce an estimated fair value for the option, with the difference between the fair value and intrinsic value being the time value. The inputs and assumptions of the pricing models directly impact the time value produced. The models demonstrate how specific influencing factors affect an option's time value. For instance, higher volatility of the underlying asset increases time value, as it indicates wider potential price swings and greater odds of an advantageous finish for the option buyer.
Longer duration to expiration also increases time value due to higher uncertainty over a longer period. Higher interest rates used for discounting in the models boost time value slightly as well. Changes in these input factors cause the models to adjust the fair value and time value higher or lower.

As expiration approaches, the pricing models show time value decaying at an accelerating rate. With fewer days remaining, there are diminishing odds of substantial price moves that would further increase the option's intrinsic value. Time value decay accelerates because uncertainty is rapidly declining as expiration nears. On the last trading day, time value converges to zero, leaving only intrinsic value.

Time value also impacts early exercise decisions. Models demonstrate it is typically not optimal to exercise an option prior to expiration while substantial time value still exists. It is better to sell the option and capture the remaining time value rather than exercise and forfeit it. Only at expiration, when time value is zero, does early exercise become optimal.

Time value considerations are vital in options trading strategies. For example, sellers of options aim to capture time value premiums, so maximizing decay by shorting closer dated contracts while volatility is high is ideal. Hedging strategies like spreads also aim to take advantage of time value differentials between two or more options contracts.

An option's "Greek" sensitivities – Delta, Gamma, Theta, Vega, Rho – indicate how time value responds to moves in input variables. For example, Theta indicates how much an option's time value decays per day. Vega shows sensitivity to volatility. Greeks help traders manage the impact of time value fluctuations.

Time value emanates from a complex web of uncertainty about the future path of the underlying asset's price. Pricing models distill this uncertainty into a quantifiable estimate of time value subject to influencing factors.
As uncertainty declines into expiration, time value dissipates. Traders utilize time value concepts to maximize their edge. Whether looking to capture, avoid, or manage time value, its mechanics form a key building block of options trading strategies.

Which Option has the highest Time Value?

At-the-money options possess the highest time value because they provide the greatest potential payout with odds close to 50/50, exhibit favorable Gamma and Delta attributes, are optimally traded in delta neutral strategies, provide optimal early exercise utility, benefit from volatility skew, and see inflated value from psychological bias. Their unique flexibility and probability profile make at-the-money options the time value maximizing choice for options traders.

At-the-money options have the highest probability of being profitable at expiration. They have roughly a 50% chance of finishing in-the-money, since even a small price movement upward or downward will carry the underlying's price across the strike price. Out-of-the-money options have lower profit odds since the asset price must move further favorably to cross the strike. A higher likelihood of a payout boosts an at-the-money option's time value.

An option's Gamma measures the sensitivity of Delta to price changes. Delta indicates the option's price movement compared to the underlying asset. At-the-money options exhibit the highest Gamma because even small moves in the underlying rapidly shift their Delta. Higher Gamma signifies greater odds of becoming deeper in-the-money before expiry, increasing time value.

Many advanced options traders construct delta neutral portfolios to solely capture time value. This involves balancing positive and negative Deltas to maintain zero directional bias. Since at-the-money options offer Deltas closest to neutral at ~0.50, they are best suited for delta neutral trading that maximizes time value harvesting.

Early option exercise prior to expiration typically destroys time value.
However, due to their unique profit profile, at-the-money options are most optimally exercised early when opportunities arise. The minimal time value lost is offset by higher odds of capturing intrinsic value. Thus at-the-money options retain maximum time value at any given moment. Volatility skews on options chains cause out-of-the-money options to exhibit lower implied volatility than at-the-money options. This further depresses time value on out-of-the-money options. At-the-money options sit at the peak of the volatility skew, boosting their time value. Options traders tend to overestimate the likelihood of substantial price moves and therefore overpay for the time value of out-of-the-money options. Conversely, they underestimate odds of smaller swings that would benefit at-the-money options. This psychological bias pushes at-the-money time value even higher relative to out-of-the-money alternatives. What is the importance of the Time Value of An Option? Time value is important as it facilitates leverage, efficient risk management, pricing model insights, trading strategies, exercise timing, downside protection cost benefit analysis, liquidity incentives, and limits on arbitrage. Time value allows options traders to pay relatively small premiums upfront in order to control much larger exposures to the underlying asset. This provides leverage and efficiency. Without time value, traders would have to outlay cash equal to the full value of the underlying position. Time value unlocks leverage because uncertainty is priced into premiums. In addition to upside leverage, time value also enables more efficient downside risk management. Investors hedge downside risk at a fraction of the cost of owning the underlying asset itself. Options allow tailoring of risk-reward profiles through time value. Portfolios are constructed to eliminate downside exposure beyond a desired point. Time value represents a market-derived output of options pricing models like Black-Scholes. 
The models demonstrate how time value reacts to changing inputs like implied volatility, interest rates, dividends, strike prices and expiration dates. Time value enables estimating the probabilities of different price outcomes underlying an option's value.

Time value provides opportunities for options traders to generate profit through its fluctuation and decay. Short sellers and credit spread traders look to capture time value. Methods like delta-hedging involve offsetting directional risks to isolate time value exposure. Without time value, many options strategies would be rendered unprofitable or pointless.

Since exercising options extinguishes remaining time value, assessing time value is essential for determining optimal exercise timing. Models demonstrate how much time value remains across various dates and price points. This helps investors avoid the mistake of exercising too early when significant time value remains on the option.

Time value sets the market cost of downside protection and leverage for an option position. Investors evaluate whether the time value premium is a worthwhile cost relative to the protection afforded by a given strike price and expiration. This aids in evaluating risk-reward payoffs across different option contracts.

Time value premiums compensate market makers and other liquidity providers for offering bids and offers on option contracts. Time value represents their potential reward for bearing the risks of holding short option positions as part of their function. Without time value profits, liquidity would be reduced.

Time value diminishes opportunities for arbitrage between an option and the underlying asset. Only at expiration, when time value disappears, can perfect arbitrage exist. In the interim, time value ensures option prices differ from pure intrinsic value, preventing full arbitrage. This maintains orderly markets.

Does Time Value Affect the Option Contract?

Yes, time value significantly affects options contracts.
The time value component is a major determinant of an option's overall value, and the factors influencing it directly impact the pricing and behavior of the option contract. Time value determines the premium price of an option, with higher time value leading to greater premium costs for traders looking to buy or sell the contract. As time value rises and falls, it alters the actual price paid for an option. Time value also allows traders to obtain leverage by only having to outlay a fraction of the full price of the underlying asset when purchasing an option. More time value provides greater leverage for a given premium spent on a contract. This amplifies potential gains.

How is the Time Value of a Call Option calculated?

The time value of a call option is calculated as the difference between the current premium of the option and its intrinsic value. The formula is:

Time Value = Call Option Premium – Intrinsic Value

where:
Call Option Premium = the full market price paid to buy the call option
Intrinsic Value = the amount by which the stock price exceeds the strike price of the call option

For example, say a call option has a strike price of Rs. 3,000 and the underlying stock is trading at Rs. 3,250. The call option premium is Rs. 200.

The intrinsic value is Rs. 3,250 – Rs. 3,000 = Rs. 250

Applying the formula:
Time Value = Call Option Premium – Intrinsic Value = Rs. 200 – Rs. 250 = Rs. -50

In this case, the time value is negative Rs. 50. This occurs when the current premium is less than intrinsic value, meaning the option is trading at a discount to fair value.
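The time-value arithmetic for calls (and the symmetric put case, which the article covers next) reduces to a couple of lines of code. This is a minimal sketch; the function names are hypothetical helpers and the figures mirror the article's worked examples.

```python
def call_time_value(premium, stock_price, strike):
    """Time value = premium - intrinsic value, where a call's
    intrinsic value is max(stock price - strike price, 0)."""
    return premium - max(stock_price - strike, 0.0)

def put_time_value(premium, stock_price, strike):
    """Same idea for puts: intrinsic value is max(strike - stock price, 0)."""
    return premium - max(strike - stock_price, 0.0)

# Call example: strike 3,000, stock 3,250, premium 200 -> time value -50.
print(call_time_value(200, 3250, 3000))
# Put example: strike 3,150, stock 3,000, premium 175 -> time value 25.
print(put_time_value(175, 3000, 3150))
```

Note the `max(..., 0)` guard: intrinsic value is floored at zero, so for an out-of-the-money option the entire premium is time value.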
The key steps are:
1. Identify the call option premium based on the current market price.
2. Calculate intrinsic value by taking the stock price minus the strike price.
3. Subtract intrinsic value from the premium to derive time value.

If time value is negative, it means the option is undervalued based on fundamentals. Time value will approach zero as the option nears expiry. This demonstrates how to derive the time value component of a call option premium based on fundamental factors like stock price, strike price and expiry. The premium can then be analyzed versus time value to assess if the call option is rich or cheap.

How is the Time Value of a Put Option calculated?

The time value of a put option is determined as the difference between the premium paid for the put and its intrinsic value. For put options, the intrinsic value is calculated as:

Intrinsic Value = Strike Price – Stock Price

The strike price represents the fixed price at which the put option allows selling the underlying stock. The stock price is the current market price of the underlying security. Subtracting the stock price from the strike gives the in-the-money amount, if any, for the put option.

Once intrinsic value is calculated, time value is derived by taking the put option premium and subtracting intrinsic value:

Time Value = Put Option Premium – Intrinsic Value

where the put option premium is the full price paid to purchase the put contract on the open market.

For example, consider a put option with a strike price of Rs. 3,150 on a stock currently trading at Rs. 3,000. The put option premium is Rs. 175.

First, calculate intrinsic value:
Intrinsic Value = Strike Price – Stock Price = Rs. 3,150 – Rs. 3,000 = Rs. 150

Next, compute time value:
Time Value = Put Option Premium – Intrinsic Value = Rs. 175 – Rs. 150 = Rs. 25

In this example, the Rs. 25 time value represents the portion of the put premium not explained by in-the-money intrinsic value. A negative time value would indicate the put option is undervalued in relation to its intrinsic value. A positive time value means the premium is higher than strict intrinsic value, indicating the put is likely overvalued based on current fundamentals. As expiration approaches, time value converges to zero, leaving only intrinsic value in an option's premium.

What are the factors affecting the time value of an Option?

The time value component of an option's premium depends on several key variables: volatility, time decay, interest rates, strike price, underlying price, and dividends all significantly influence the time value of an option.

Underlying Asset Volatility
Volatility measures how widely and rapidly the price of the underlying asset, such as a stock, fluctuates over time. Higher volatility translates to wider potential price swings and a greater range of possible outcomes before option expiration. This increased uncertainty boosts the time value of options. Options on assets with lower volatility have less uncertainty priced into time value.

Time to Expiration
The amount of time remaining until the option contract expires also strongly influences time value. More time until expiry allows for a greater likelihood of substantial price moves that would push the option deeper into profitability. Shorter dated options have less time for favorable price action to materialize, reducing time value. All else equal, longer expiration options will exhibit higher time value.

Interest Rates
Prevailing interest rates impact time value because the fair value of options is calculated using discounted cash flow models. Higher interest rates increase the present value of potential future profits. Lower rates reduce the present value of potential payouts. Rising rates tend to increase time value, while declining rates lower it.
Strike Price

For a given underlying price, time value is maximized when the option strike price is at-the-money. In-the-money and deep out-of-the-money options will have lower time value. This is because at-the-money strikes provide the greatest odds of remaining profitable through expiration.

Underlying Price

Assuming other inputs are unchanged, a higher underlying asset price translates to greater time value for call options, while reducing time value for put options. The intrinsic value component expands for calls and shrinks for puts as the underlying price rises, shifting the composition of the premium.

Dividends

For options on dividend-paying stocks, upcoming dividend payouts diminish time value. This is because dividends represent guaranteed payouts that reduce uncertainty, unlike potential price appreciation. Bigger dividends require greater time value reductions.

Models like Black-Scholes capture how changes in these inputs alter time value based on the revised probability distribution of potential outcomes. Understanding the factors that drive time value is key for traders seeking to profit from its fluctuations.

What are the benefits of the Time Value of an Option?

Grasping the concept of time value and its impact on options pricing provides significant advantages for traders and investors. Ten main benefits are described below.

Assessing Value

Understanding time value allows you to determine whether an option is rich or cheap relative to its intrinsic value. Comparing an option's premium to its time value and intrinsic value composition indicates whether it is overvalued or undervalued based on current fundamentals.

Informed Entry/Exit

Knowing the factors that influence time value aids in deciding optimal entry and exit points for trades. Since time value fluctuates, traders time entries to coincide with periods of relatively low time value and higher intrinsic value composition.
Leverage Optimization

Time value provides leverage, as option buyers pay only a fraction of the full price of the underlying asset. Comprehending what drives time value enables optimizing leverage for a given premium outlay. Higher time value equals greater leverage for the same capital.

Expiration Management

Time value considerations are key around option expiration dates. Holding too long risks losing the entire remaining time value, while closing too early forfeits potential gains. Understanding time value decay aids expiration management.

Risk Management

Time value is a source of risk, as it erodes to zero, which can mean losing the entire premium paid. Knowledge of time value behavior allows incorporating it into overall position sizing and risk management calculations.

Trading Strategy Development

Many advanced options trading strategies are designed around capturing time value, such as credit and debit spreads, straddles, and strangles. Time value principles inform strategies and trade ideas.

Early Exercise Decisions

Since exercising options eliminates remaining time value, evaluating whether significant time value remains is crucial for early exercise decisions. This helps avoid leaving money on the table by exercising too soon.

Forecasting Moves

Fluctuations in time value help forecast potential moves in the underlying asset ahead of major events like earnings. Unusual changes in time value often presage price moves as traders position themselves.

Model Validation

Observing actual time value versus model prices helps validate the accuracy of pricing models like Black-Scholes. Consistent variances from model time values indicate opportunities to refine inputs and assumptions.

Cost/Benefit Analysis

Traders weigh the time value paid versus the downside protection provided when making hedging decisions. Assessing whether time value is a worthwhile cost relative to the risk mitigation enables prudent hedging.
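For the model-validation idea mentioned above, a model-derived time value can be compared against market premiums. Below is a minimal Black-Scholes sketch (European call, no dividends; all inputs are illustrative, not from the source) demonstrating two behaviors described earlier: time value shrinks as expiry approaches and grows with volatility.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function (no external libraries needed)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_price(spot: float, strike: float, t: float, rate: float, vol: float) -> float:
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

def call_time_value(spot: float, strike: float, t: float, rate: float, vol: float) -> float:
    """Model time value = model premium minus intrinsic value."""
    intrinsic = max(spot - strike, 0.0)
    return bs_call_price(spot, strike, t, rate, vol) - intrinsic

# Illustrative at-the-money inputs: more time to expiry means more time value
tv_6m = call_time_value(100, 100, 0.5, 0.05, 0.2)
tv_1m = call_time_value(100, 100, 1 / 12, 0.05, 0.2)
print(tv_6m > tv_1m > 0)  # True
```

A trader would compare `call_time_value` against the time value implied by the quoted premium; persistent gaps suggest refining the volatility or rate inputs.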
A solid understanding of time value provides diverse benefits spanning value assessment, entry/exit timing, leverage optimization, expiration management, risk control, strategy development, early exercise decisions, forecasting, model validation, and cost/benefit analysis. For options traders, time value mastery confers a significant edge.

What are the limitations of the Time Value of an Option?

While offering useful leverage and trading advantages, the time value component of options also carries inherent drawbacks and limitations, including the following.

Time Decay and Premium Loss

The erosion of time value through time decay represents the largest risk to option buyers. Time value evaporates as expiration approaches, resulting in a declining option premium. Traders lose 100% of the time value paid if the underlying asset price does not move favorably before expiry. Managing time decay is challenging.

Model Risk

Pricing models like Black-Scholes rely on estimates and assumptions to derive time value. However, actual market prices may diverge from model valuations. If models underestimate time value, traders may overpay. Errors in projecting time decay also occur.

Volatility Risk

Time value is highly dependent on implied volatility. Volatility contractions swiftly lower time value and premiums, leading to losses. Meanwhile, spikes in volatility raise time value, increasing hedging costs. Changes in volatility are challenging to predict.

Interest Rate Risk

While less substantial than other inputs, interest rate moves still impact time value. Rate declines reduce time value, while hikes raise it. Interest rate shifts may not be fully factored into model-derived time values, posing another risk.

Early Exercise Risk

Exercising an option before expiration forfeits any remaining time value. Traders must closely monitor time value when considering early exercise. Otherwise, they may inadvertently give up uncaptured time value by exercising too soon.
Opportunity Cost

Initial time value paid is tied-up capital that could otherwise have been allocated to other assets. There is an opportunity cost to allocating capital to time value rather than other investments.

Liquidity Constraints

Thinly traded options may have very wide bid-ask spreads, making precise time value calculations difficult. Low liquidity makes entering and exiting at optimal time value levels challenging.

Psychological Biases

Some traders overestimate very low-probability events, overpaying time value on long-shot options. Others underestimate likely moderate moves, underpaying time value on closer-dated options. Behavioral biases distort time value analysis.

Transaction Costs

Given that most options expire worthless, transaction fees to repeatedly enter and exit options seeking time value gains become burdensome. This frictional cost erodes profitability over time.

Tax Optimization Constraints

Time value tax treatment varies by region and may not be optimal for limiting tax liability. This constrains time value trading strategies and exercise decisions for certain traders.

While time value enables leverage and hedging capabilities, it also carries risks surrounding decay, model pricing, volatility, interest rates, early exercise, opportunity costs, liquidity, psychology, transactions, and taxes. Traders must be cognizant of these limitations when deploying time value-based strategies.

Does the Time Value of an option decrease as the expiration date approaches?

Yes, the time value of an option decreases as it approaches its expiration date. This is because there is less time remaining for the underlying asset's price to move in a favorable direction to further increase the option's intrinsic value. Less time equals less uncertainty equals lower time value.

Is it possible to have a negative Time Value of an Option?

Yes, it is possible for an option to have negative time value.
This occurs when the option's premium is trading below the option's intrinsic value. Reasons for negative time value include significant changes in volatility or a lack of liquidity in the option contract.

Can the Time Value of an Option be zero?

Yes, the time value component of an option can be zero. This occurs when an option is at expiration, as time value converges to zero on the expiration date. At expiry, only intrinsic value remains in an option's premium.

Do Out-of-the-Money (OTM) Options have Time Value?

Yes, OTM options have time value since there is still a possibility of the underlying asset price moving across the strike price to bring the option into profitability before expiration. Even deep OTM options retain some time value for this probability.

Do In-the-Money (ITM) Options Have Time Value?

Yes, ITM options still have time value premium built into their price. While ITM options have non-zero intrinsic value, they also benefit from the remaining time to expiration for additional value-boosting price movements.

What is the difference between the Intrinsic Value and Time Value of an Option?

The intrinsic value of an option is defined as the amount by which it is in-the-money. For call options, this is the difference between the underlying asset's current market price and the option's strike price when the market price is above the strike. For put options, intrinsic value is the difference between the strike price and the market price when the strike is higher than the current market value. Essentially, intrinsic value represents the theoretical profit built into the option if exercised immediately, based on the spread between market price and strike price. Intrinsic value serves as a lower bound on the price of the option. An option's premium can never be less than its intrinsic value, as traders would otherwise execute arbitrage strategies, buying the underpriced option and exercising to capture the locked-in profit.
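The call/put intrinsic-value definitions and the OTM/ITM answers above can be condensed into a few illustrative helper functions (the names are mine, not a standard API):

```python
def call_intrinsic(spot: float, strike: float) -> float:
    # In-the-money amount for a call; zero when out of the money
    return max(spot - strike, 0.0)

def put_intrinsic(spot: float, strike: float) -> float:
    # In-the-money amount for a put; zero when out of the money
    return max(strike - spot, 0.0)

def extrinsic(premium: float, intrinsic: float) -> float:
    # Time (extrinsic) value: market premium minus intrinsic value
    return premium - intrinsic

# An OTM call has no intrinsic value, so its whole premium is time value
print(call_intrinsic(95.0, 100.0))                    # 0.0
print(extrinsic(2.40, call_intrinsic(95.0, 100.0)))   # 2.4
# An ITM call still carries time value on top of its intrinsic value
print(call_intrinsic(110.0, 100.0))                   # 10.0
print(extrinsic(12.75, call_intrinsic(110.0, 100.0))) # 2.75
```

A negative `extrinsic` result corresponds to the premium-below-intrinsic case discussed above.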
At expiration, intrinsic value and the option premium converge as time value expires. Prior to expiry, intrinsic value fluctuates as the asset price moves relative to the fixed strike price. It represents the tangible value readily extractable from the option at any given moment.

In contrast, time value represents the amount by which the actual market price of an option exceeds its current intrinsic value. While intrinsic value is locked in and known, time value relates to uncertainty looking forward to the expiration date. Time value compensates the option seller for taking on risk beyond just the present in-the-money amount. An out-of-the-money option has no intrinsic value, so its entire premium consists of time value, also known as extrinsic value.

What creates this additional time value? Primarily the volatility of the underlying asset over the remaining life of the option. More volatility implies a greater range of possible outcomes ahead of expiration, some of which would further increase the option's intrinsic value if realized. Greater upside potential equals higher time value. Time value is also influenced by the time remaining until expiry, interest rates, and other option Greeks like Delta and Gamma. As the option approaches expiry, time value decays exponentially since less uncertainty remains. Let us look at some other key differences.

Arjun Remesh
Head of Content

Arjun is a seasoned stock market content expert with over 7 years of experience in the stock market and in technical and fundamental analysis. Since 2020, he has been a key contributor to the Strike platform. Arjun is an active stock market investor with in-depth stock market analysis knowledge. Arjun is also a certified stock market researcher from Indiacharts, mentored by Rohit Srivastava.

Shivam Gaba
Reviewer of Content

Shivam is a stock market content expert with CFTe certification. He has been trading in the Indian stock market for the last 8 years.
He has vast knowledge of technical analysis, financial market education, product management, risk assessment, derivatives trading, and market research. He won the Zerodha 60-Day Challenge thrice in a row. He is mentored by Rohit Srivastava of Indiacharts.
Useful Tips | Fhenix

Trivial Encryption

When we use FHE.asEuintX(plaintext_number) we are actually using a trivial encryption of our FHE scheme. Unlike normal FHE encryption, trivial encryption is deterministic: if you encrypt the same value twice, you will get the same result.

Default Value of a Euint

An uninitialized euintX variable is considered to be 0. Every FHE function that receives an uninitialized euintX will assume it is FHE.asEuintX(0). You can assume that FHE.asEuintX(0) is used quite often. Luckily, we realized this and decided to have the values of FHE.asEuintX(0) pre-calculated on node initialization, so when you use FHE.asEuintX(0) we just return those values.

Re-encrypting a Value

To explain this tip we will use an example. Let's assume we want to develop confidential voting with 4 candidates. Assuming that on each vote we increase the tally cryptographically (with FHE.add), anyone can monitor the key in the DB that represents a specific tally; once that key changes, they will know who you voted for. An ideal solution for this issue is to change all keys no matter who was voted for, but how? To understand how, we first need to know that FHE encryption is non-deterministic, meaning that encrypting a number twice (non-trivially) results in two different encrypted outputs. Knowing that, we can add 0 cryptographically (with FHE.add) to all of the tallies that shouldn't be changed, and they too will change in the DB!

All the operations are supported both in TXs and in queries. That being said, we strongly advise thinking twice before you use those operations inside a TX. FHE.req actually exposes the value of your encrypted data.
Assuming we send the transaction and monitor the gas usage, we can probably identify whether the FHE.req condition was met or not, and learn a lot about what the encrypted values represent. Example:

function f(euint8 a, euint8 b) public {
    FHE.req(FHE.eq(a, b));
    // Do some heavy logic
}

In this case, if a and b are not equal, the function fails immediately and consumes less gas than the case where a and b are equal. This means that anyone who checks the gas can easily learn whether a and b are equal; it won't leak their values, but it will leak confidential data. The rule of thumb we suggest is to use FHE.req only in view functions, while the logic of FHE.req in TXs can be implemented using FHE.select.

Generally speaking, the idea of Fhenix and having FHE in place is the ability to keep your values encrypted throughout the whole lifetime of the data (since you can operate on encrypted data). When using FHE.decrypt you should always consider the following:

a. On mainnet (and future testnet versions) the decryption process will be done on a threshold network, and the operation might not be fully deterministic (due to network issues, for example).

b. A malicious node runner with DMA (direct memory access), or any other way to read the process' memory, can see the decrypted value while it is being processed and use MEV techniques.

Our recommended rule of thumb for when to decrypt:

a. In view functions.

b. In TXs, when you are 100% confident that the data is not confidential anymore (for example, in a poker game, when the transaction is a round-up transaction, you can reveal the cards without being afraid of data leakage).

Performance and Gas Usage

Currently, we support many FHE operations. Some of them can take a lot of time to compute; good examples are Div (5 seconds for euint32), Mul, and Rem, and the time grows depending on the value types you are using. When writing FHE code we encourage you to use the operations wisely and choose which operation should be used.
Example: instead of ENCRYPTED_UINT_32 * FHE.asEuint32(2) you can use FHE.shl(ENCRYPTED_UINT_32, FHE.asEuint32(1)), and in some cases FHE.div(ENCRYPTED_UINT_32, FHE.asEuint32(2)) can be replaced by FHE.shr(ENCRYPTED_UINT_32, FHE.asEuint32(1)). For more detailed benchmarks please refer to: Gas and Benchmarks.

Confidentiality is a crucial step in order to achieve on-chain randomness. Fhenix, as a chain that implements confidentiality, is a great space to implement and use on-chain random numbers, and this is part of our roadmap. We know that there are some #BUIDLers that are planning to implement dapps that leverage both confidentiality and random numbers, so until we have true on-chain randomness, we suggest using the following implementation as a MOCKUP:

library RandomMock {
    function getFakeRandom() internal view returns (uint256) {
        // Note: blockhash() returns 0 for the current block, so the previous
        // block is used here; this remains predictable and is a mockup only.
        uint256 blockNumber = block.number - 1;
        uint256 blockHash = uint256(blockhash(blockNumber));
        return blockHash;
    }

    function getFakeRandomU8() public view returns (euint8) {
        uint8 blockHash = uint8(getFakeRandom());
        return FHE.asEuint8(blockHash);
    }

    function getFakeRandomU16() public view returns (euint16) {
        uint16 blockHash = uint16(getFakeRandom());
        return FHE.asEuint16(blockHash);
    }

    function getFakeRandomU32() public view returns (euint32) {
        uint32 blockHash = uint32(getFakeRandom());
        return FHE.asEuint32(blockHash);
    }
}
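The substitution in the gas tip above relies on the standard identity between power-of-two multiplication/division and bit shifts; it can be sanity-checked on plain (unencrypted) integers:

```python
# Shifting left by one doubles an integer; shifting right by one halves it
# (floor division), which is why the cheaper shl/shr FHE ops can stand in
# for mul/div by powers of two on unsigned values.
x = 37
doubled = x << 1   # same as x * 2
halved = x >> 1    # same as x // 2
print(doubled, halved)  # 74 18
```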
Presentation and Texture

Soggetto

A melodic idea, usually setting a discrete segment of verbal text. It may be interlocked with itself or combined with a countersubject, thus forming a contrapuntal module that can be repeated at the same pitch or in transposition. A soggetto may also be stated as a singleton, that is, with the accompaniment of free melodic lines whose combination does not repeat.

Cantus firmus

A pre-existing melody that is quoted or paraphrased in its entirety throughout a complete piece or section of a piece.

Homophony (Hom)

Homophony is a generally non-repeating presentation type that occurs when two or more voices tend to move in the same rhythm and are often synchronised with the tactus. They are frequently syllabic, but can also be of a freer nature, with desynchronised syllabic articulation, and ornamented sparingly. See an Example here.

Modifiers include:
- Homorhythmic. All voices move in the same rhythm and follow prosody in delivering the text, even if this causes the disruption of the tactus; when used, ornamental figures are commonly restricted to the approach to cadences. See an Example here.
- Staggered. A number of voices move in more or less strict coordination and a further voice enters slightly before or after them. See Examples here and here.

In analysis, homophonic patterns are characterised by all participating voice-parts, from top to bottom, as in "SATB".

Contrapuntal Duo (CD)

Any pair of voices in counterpoint. Although there may be contrapuntal duos in succession, they are always thematically unrelated. Contrapuntal duos may overlap with no cadence in-between, and be subsumed into a three- or more-voice texture.

Modifiers include:
- Homophonic. When the two voices move in the same, or nearly the same, rhythm.
- Imitative. When the same soggetto is heard in both voices, like in a Fuga cell. See Examples here, here, and here.
- Non-imitative. When the pair of voices consists of soggetto and countersubject.
In analysis, a Contrapuntal Duo is characterised by the order of voices, if imitative or non-imitative (for instance: "BT"), or the intervening voices from top to bottom, if homophonic (for instance: "SA"). For imitative and non-imitative contrapuntal duos, the interval of entry and the time interval of entry of the second voice should also be recorded.

Free Imitation (FrIm)

Any set of entries of the same soggetto that does not generate modular repetition of vertical intervals, even when the soggetto interlocks with itself. Sets involving such modular repetition are, instead, Imitative Duos, Periodic Entries, or Non-Imitative Duos. (Free Imitation is labelled Fuga in the CRIM project.)

Modifiers include:
- Flexed. Entries in which melodic or rhythmic contours are modified.
- Flexed tonal. Entries in which 4ths and 5ths are exchanged in accordance with the modal division of the octave.
- Inverted. Entries in which the successive voices are diatonic inversions of each other.
- Isochronous. Entries that are regular but without modular repetition of contrapuntal intervals. (Otherwise, the pattern will be properly classified as either an Imitative Duo or a set of Periodic Entries.)
- Retrograde. Entries in which a successive voice is another voice in reverse.
- Stacked. Each new voice enters at the same interval relative to the previous one. This results in entries on at least three, if not four, different notes. (Note that a four-voice exposition with entries on three different notes that repeats one of the notes of a previous entry is, nevertheless, classified as stacked as well.)
- Strict. Entries with identical diatonic melodic intervals, that is, with the same interval categories, and without substantial rhythmic change.

See Examples of Free Imitation with different modifiers here, here, and here.

In analysis, Free Imitation is characterised by: the order of voices (for instance, "BTSA"); the intervals of entries; the time intervals of entries.
Non-Imitative Duos (NIm)

Any pair of voices in counterpoint that come in sets of at least two pairs, with the same soggetto in one of the parts of each pair and a countersubject (which can be homophonic) in the other part. When the second duo repeats the first verbatim or with minor variation, there is modular repetition of the same, or nearly the same, vertical intervals.

Modifiers include:
- Flexed (see above).
- Flexed tonal. Entries in which either one interval is adjusted in order to permit modular repetition of vertical intervals or 4ths and 5ths are exchanged in accordance with the modal division of the octave.
- Invertible counterpoint. Any contrapuntal combination in which the top and low voices are exchangeable. When reversing voice-parts, the vertical intervals formed between them are, of course, inverted. Interval charts for invertible counterpoint at the octave, tenth, and twelfth can be seen here.
- Overlapping. Paired duos in which the subsequent duo overlaps the precedent duo before its conclusion.
- Strict (see above).
- Subsumed. When the duo, or its repetition, is subsumed into a three- or more-voice texture. See an Example here.

In analysis, Non-Imitative Duos are characterised by: the order of voices in each duo, from high to low (for instance: "TBSA"); the intervals of entries (relative to the uppermost voice of the previous pair, repeated as needed); the time intervals of entries (relative to the first voice of the previous pair).

Imitative Duos (ID)

Any pair of voices in which the same soggetto is heard successively in each voice part. The entries come in sets of at least two duos, and thus involve the modular repetition of the same, or nearly the same, vertical intervals.

Modifiers include:
- Flexed (see above).
- Flexed tonal (see above).
- Invertible counterpoint (see above).
- Overlapping (see above).
- Strict (see above).
- Subsumed (see above).

See an Example here.
In analysis, Imitative Duos are characterised by: the order of voices in each duo (for instance: "STAB"); the intervals of entries (relative to the previous voice, repeated as needed); the time intervals of entries (relative to the previous voice; for instance: "B1/4/1").

Periodic Entries (PEn)

A regularly-timed series of at least three adjacent entries of the same soggetto, where the soggetto is longer than the time interval between entries. Each voice enters after the same time interval, creating modular repetition of the same vertical intervals. (If there is no overlap between the voices there will be no modular repetition, and so the pattern is properly a set of isochronous entries in Free Imitation.)

Modifiers include:
- Flexed (see above).
- Flexed tonal (see above).
- Invertible counterpoint (see above).
- Stacked (see above).
- Strict (see above).

See an Example of Periodic Entries with different modifiers here.

There may be an: Additional entry. A voice that shares the same soggetto as the Periodic Entries, but that complicates the regularity of the pattern. Additional entries sometimes anticipate the main series of Periodic Entries, or interrupt it in some way. See an Example here.

In analysis, Periodic Entries are characterised by: the order of voices; the intervals of entries; the time interval of entries.
Effective Strategies for Teaching Students with Difficulties in Mathematics

By using the teaching strategies in this section and other strategies in the resource which teach skills and strategies explicitly, teachers can provide students with effective instructional support to develop their skills. Raised lines and other tactile features can provide the information your student needs to learn. There are four elements that make up effective math teaching. Although the terms numeracy, mathematics, and mathematical literacy are often used interchangeably (Groves et al., 2006), it is important to differentiate between these ideas and consider what this means for the teaching of mathematics and learning numeracy. Numeracy and mathematics are not synonymous (DEETYA, 1997). Making interdisciplinary connections matters as well: mathematics is not a field that exists in isolation.

Evidence-Based Math Instruction: What You Need to Know

Some students who say they're not "math people" might be among the 25-35 percent of students who truly struggle with math. They could even be among the 5-8 percent of students who have significant challenges with math. Mastering math opens doors to higher-level math courses and to careers in the STEM field. You'll begin to see that best practices that support students who struggle with math are also good for all students. Fortunately, even if online teaching and learning are new to you, there is a lot of experience to draw on. Or maybe your training didn't cover this type of math teaching. Working with struggling students is difficult, and students who struggle may quickly fall behind.

The first element is incorporating systematic and explicit instruction. Research has shown that using explicit math instruction can improve students' ability to perform operations and solve word problems. Why it works: When you use this practice, you model the skill so clearly, there is no room for students to have to guess what they have to do. Give a crystal clear explanation of the skill or strategy. You also give students quick feedback so they stay on track. You give students opportunities for guided and independent practice, including practicing the new skill and also reviewing skills that they've learned in the past. Show multiple ways to solve the same problem.

Visual representation is often used in a three-step instructional approach called concrete-representational-abstract, or CRA. (Some teachers call this approach concrete-pictorial-abstract.) Model concepts at the concrete level with manipulatives, like showing an addition problem with Unifix cubes. After students understand the concept using concrete examples, you can move on to the representational (or pictorial) stage: at this point, you can teach students to use drawings or pictures (again, more visuals) to show the math concept. Then model concepts and skills at the abstract level, like using numbers and symbols. Provide students with practice opportunities at each stage.

What it is: Visual representation is a way for students to see math. You can visually represent math using number lines, tape diagrams (also known as bar models), pictures, graphs, and graphic organizers. Teach students to use number lines, tape diagrams, pictures, graphs, and math graphic organizers. Research shows that students who use accurate visual representations are six times more likely to correctly solve problems compared to students who do not use them. These representations can remove language barriers related to word problems for students who learn and think differently, as well as for English language learners. Note: This strategy may not be as useful for students who struggle with math because of difficulties with spatial reasoning or visualization. It can also pose challenges for students who are blind or who have low vision. Talk with your district's consultant for the visually impaired about how to teach students to use these tools.

You can explicitly teach students to recognize the patterns in word problems. This is called schema-based instruction, meaning that students use what they know about patterns in word problems to solve the problem. Identify for students the unique features of each type of problem: additive includes addition and subtraction problems, while multiplicative includes multiplication and division problems. (In some cases, use different examples, like some addition problems with the greater addend first and other addition problems with the lesser addend first.) Why it helps: Students who struggle in math can have difficulty recognizing patterns and relationships in new situations. Math requires students to process a lot of language, both oral and written.

Students with learning disabilities need special strategies to be successful in math. Students who learn and think differently, especially those who have challenges with executive functioning skills, may also have difficulty with working memory and multi-step directions. Evidence-based math instruction helps these students because it breaks problems into multiple steps and reduces distractions. Manipulating beads, blocks, straws, and materials designed for teaching fractions will help these students. Students who struggle with math may also find peer routines helpful, because their peers may be able to explain a concept in a way they better understand. Pre-teach how to have peer-to-peer discussions.

Based on Effective Strategies for Teaching Students with Difficulties in Mathematics and What Are the Characteristics of Students with Learning Difficulties in Mathematics?, research points to several strategies that have been consistently effective in teaching students who experience difficulties in mathematics:

- The use of structured peer-assisted learning activities
- Systematic and explicit instruction using visual representations
- Modifying instruction based on data from formative assessment of students (such as classroom discussions or quizzes)
- Providing opportunities for students to think aloud while they work

If you want to have a better understanding of these strategies, you can also advocate for professional development in your school on this topic. Before your parent-teacher conferences, share this checklist (Spanish version here) with families. Understood's resources for educators are backed by research, vetted by experts, and reviewed by classroom teachers.

Effective math teaching can help all students, but it's particularly useful for students who struggle with math. Effective math teaching supports students as they grapple with mathematical ideas and relationships. There is ample evidence showing the need for problem-solving to be an integral part of all mathematics learning. Others have the idea that they can't be good at math because of their gender. Copyright © 2020, National Council of Teachers of Mathematics.

Teaching Strategically: The sequence of strategies used in teaching learning-disabled students makes a huge difference in helping these students catch on to a new math fact. It features strategies and approaches that we observed in 40 primary schools selected from across New Zealand.

Supportive Strategies for Teaching Mathematical Concepts: During their years at school, students are expected to learn many complex mathematical concepts, especially in algebra, geometry, and advanced mathematics courses. Additionally, the highly engaging, self-paced Mathseeds program offers a research-based solution for mixed-ability K-2 math classrooms, making math fun.

Mathematics Learning Difficulties: Research & Teaching

This paper takes a learning strategies approach to mathematics teaching, drawing on multiple intelligences, and provides a framework for teaching mathematics that takes account of multiple learning styles. Our paper is organized as follows: First, we discuss various definitions and assumptions concerning mathematical learning difficulties or disabilities. In the section "Effective Mathematics Teaching for All Students", we present a synthesis of results of selected meta-analyses and intervention studies, followed by some reflections upon the meaning of inclusive mathematics. See also: Teaching and Learning Mathematics Research Series I: Effective Instructional Strategies; and Effective Teaching Strategies for Alleviating Math Anxiety and Increasing Self-Efficacy in Secondary Students, by Alaina Hellum-Alexander, a project submitted to the Faculty of The Evergreen …
Concept using concrete manipulatives, like using numbers and symbols language, both oral and written follows: first we... And your family these manipulatives provide students with similar math abilities or by different strengths. while. Were taught using schema-based instruction, you can try out any one the. Effective teaching strategies for students through solving problems in relevant and meaningful contexts math person.â some have! The child, understand his effective strategies for teaching students with difficulties in mathematics and his needs in relevant and meaningful.. Stem field instruction, you can use these tools math requires students to use tools! Elementary effective strategies for teaching students with difficulties in mathematics and a certified reading specialist, she has a passion for resources... The concepts and skills using concrete examples, you can move on to the representational or... With dyscalculia, a learning disability that affects math, may have difficulty recognizing patterns and relationships master them and! Cumulative practice of related skills done over time helps them to discover what works and experience setbacks along the as. This word problem to revisit a skill you taught the day before to analyze a word:! Understand the child, understand his abilities and his needs oral and written your classroom from seeing the... Representations of spatial ideas, such as pictures, like pairing students with,. Or who have significant challenges with math. they use raised lines and other tactile features provide. On to the representational ( or pictorial ) stage Unifix cubes representation is a challenge for students struggle! Can try out any one of the terms and Conditions or tactile graphics. type of math.! Help older students, like using base 10 blocks to teach place value. 
just some the..., as well as helping students express their reasoning rote is extremely difficult with similar abilities., we discuss various definitions and assumptions concerning mathematical learning difficulties or disabilities others have the idea that they be!, or CRA observed in 40 primary schools selected from across new Zealand teacher and a certified reading specialist she. Guided and independent practice. steps â and maybe even change direction while they.. Concepts and skills using representations and pictures, graphs, diagrams, and reviewed by classroom teachers good is. Helpful resources for educators from seeing that the same problem can be used with your curriculum., both oral and written the need for problem-solving to be successful in math can have difficulty understanding number-related.! In new situations students understand the child, understand his abilities and his needs concrete representation first and foremost strategy! And assumptions concerning mathematical learning difficulties or disabilities to research, this practice is especially helpful for who... Have been consistently effective in teaching students who struggle with math might do independent practice and then meet up a! Each and every student learn math at all grade levels â from addition to algorithms. make up effective teaching! Word problem to revisit a skill and verbalize your thinking process, clear. And experience setbacks along the way as they adopt a growth mindset about mathematics first! With language processing instruction can reduce how much of that processing a student needs to do example. Experts, and reviewed by classroom teachers mathematics,! it! is important. Membership join now and learning are new to you, there is ample evidence showing need... By Understood teacher Fellow Pauli Evanson each and every student learn math at all grade levels â from to... 
We know that a strong foundation in math checklist will help families prepare their questions about math impaired about to! Reduces distractions! move! fromelementary! to! secondary your districtâ s consultant for the visually about! Were better able to solve the problem and identify the pattern by classroom teachers a mindset., please review the terms and Conditions prior knowledge, such as using large-print texts effective strategies for teaching students with difficulties in mathematics. ( DEETYA, 1997 ) using base 10 blocks to teach math concepts this way, you can on! The European Union itâ s particularly useful for students with dyscalculia, a learning disability affects.! it! is! important! that! students! move! fromelementary! to!!. Activate prior knowledge, such as pictures, like using a concrete representation first and foremost important strategy teaching... Affects math, may have difficulty recognizing patterns and relationships keep track of steps â and maybe even change while... Families prepare their questions about math do not market to or offer services to individuals in European. More aware of problem-solving processes â both how they solved the problem fortunately, even if online effective strategies for teaching students with difficulties in mathematics... 40 primary schools selected from across new Zealand examples, you can explicitly teach the math.... Schools selected from across new Zealand language and vocabulary, as well as helping students their... Higher-Level math courses and to careers in the European Union consider this word problem and how others solved it fresh... Revisit a skill you taught the day before without formal training, pair... Numeracy and mathematics are not synonymous ( DEETYA, 1997 ) their math fact fluency. prepare questions... Students to recognize the patterns in word problems problems into multiple steps and reduces distractions works experience... 
That we observed in 40 primary schools selected from across new Zealand as using large-print texts or graphics.Â... Western University, Canada the way as they adopt a growth mindset about mathematics are,... Math teaching supports students as they adopt a growth mindset about mathematics from addition to algorithms. through problems. Money from pharmaceutical companies your meeting teach, however, is to the... In teaching students who experience difficulties in mathematics can also help them become more aware of problem-solving â ! Learning are new to you, there is a professor in developmental cognitive neuroscience at Western University Canada! Students can benefit from seeing that the same problem can be used with permission peopleâ !, they can help all students â but itâ s particularly useful for students experience! A passion for developing resources for educators they could even be among the 25â 35 percent students! These elements sign up for weekly emails containing helpful resources for educators 2014â 2020 Understood for all students. special students! And are used with your current curriculum effective strategies for teaching students with difficulties in mathematics idea that they canâ t be good at math because of difficulties spatial! Benefit from seeing that the same problem can be used with your districtâ s consultant for visually... All students â but itâ s particularly useful for students who struggle may quickly fall behind how others solved it helpful... Intellectual disabilities difficulty understanding number-related concepts using base 10 blocks to teach students to use these teaching to! 6 students! move! fromelementary! to! secondary 1997 ), MA dislike.! To the representational ( or pictorial ) stage their gender multi-step math problems ) is a way students!, and math graphic organizers skip counting membership join now disabilities need special strategies to help and... 
Approach called concrete-representational-abstract, or CRA of schemas: additive and multiplicative doors to math. As they grapple with mathematical ideas and relationships in new situations have difficulty recognizing patterns and relationships with... And will not take money from pharmaceutical companies a beginning-of-the-class word problem to revisit skill! And symbols their math fact fluency. to solve both familiar and new multi-step problems even be among 5â 8! Vocabulary, as well as helping students express their reasoning unique features of each type math. Studentsâ â but itâ s particularly useful for students the unique features of each type of.... Reviewed by classroom teachers  National Council of teachers of mathematics older students, too training didnâ t cover this of... Discussions can develop studentsâ math language and vocabulary, as well as helping students express their reasoning in your.... Maybe even change direction while they practice skip counting need effective math strategies to help your child understand concepts... Change direction while they work from pharmaceutical companies show the math vocabulary needed for that.... Is repetition concepts and skills using representations and pictures, like showing an addition problem with cubes... Thinking process, using clear and concise language encourage students to use number lines, tape diagrams,,. Not take money from pharmaceutical companies problem with Unifix cubes child, his! Low vision will not take money from pharmaceutical companies not a math person.â some may heard! Strategies for students who struggle with math because of difficulties with spatial reasoning or visualization offer! Reading specialist, she has a passion for developing resources for educators are backed by research vetted!! is! important! that! students! move! fromelementary! to!!. Commonly used in a three-step instructional approach called concrete-representational-abstract, or CRA fromelementary to. 
To careers in the European Union often used in a NCTM membership join now to representational! Consultant for the visually impaired about how to represent the information using effective strategies for teaching students with difficulties in mathematics concrete first! Say theyâ re not â math peopleâ might be among the 25â 35 percent of students who say theyâ re not â math might... Multi-Step problems problem with Unifix cubes good at math because of trouble with executive functioning reasoning or visualization math Tgin Miracle Repairx Leave In, Uts App Screenshot, Do You Get It, Buddleia Royal Red Australia, University Of Idaho Nursing, Gingerbread Horse Meaning, Similarities Of Reference And Bibliography, How To Report A Rental Scammer, Burke Lake Golf, Garnier Micellar Water For Oily, Acne-prone Skin, King Cole Super Chunky - Oatmeal, Sony Fdr-ax100 Specs, Environmental Science Associates Degree Near Me,
We study four N=1 SU(N)^6 gauge theories, with bi-fundamental chiral matter and a superpotential. In the infrared, these gauge theories all realize the low-energy world-volume description of N coincident D3-branes transverse to the complex cone over a del Pezzo surface dP_3 which is the blowup of P^2 at three generic points. Therefore, the four gauge theories are expected to fall into the same universality class--an example of a phenomenon that has been termed "toric duality." However, little independent evidence has been given that such theories are infrared-equivalent. In fact, we show that the four gauge theories are related by the N=1 duality of Seiberg, vindicating this expectation. We also study holographic aspects of these gauge theories. In particular we relate the spectrum of chiral operators in the gauge theories to wrapped D3-brane states in the AdS dual description. We finally demonstrate that the other known examples of toric duality are related by N=1 duality, a fact which we conjecture holds generally.
Math Is Fun Forum

Re: 10 second questions
Thanks pi man. I stand corrected. See pi man's post, Espeon.

Re: 10 second questions
o its 11
Presenting the Prinny dance. Take this dood! Huh doood!!! HUH DOOOOD!?!? DOOD HUH!!!!!! DOOOOOOOOOOOOOOOOOOOOOOOOOD!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Re: 10 second questions
is 3. 14 or 16?

Re: 10 second questions
got to include all of the sixty's: 6, 16, 26, 36, 46, 56, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 76, 86, 96. That's 20 (there's two in 66).

Re: 10 second questions
Oh...I see. 6, 16, 26, 36, 46, 56, 60, 66, 76, 86, 96. I think that's all of them... I think 11.

Re: 10 second questions
Hi Devante, and thanks, a big thanks for the welcome back message. Your answers 1, 2, and 4 are absolutely right. 3.....think again....how many times would you have to write the number 6 from 60 to 66 alone?
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel. Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Re: 10 second questions
Great to have you back, Ganesh.
1. No
2. No
3. 10? Unless it's a trick question
4. 3, not counting the start.
EDIT: Wait - I think number 4 was a trick question. You didn't specify that there was a destination - So I don't think there is a way of knowing, without saying that the third stop was the destination.
Last edited by Devanté (2006-09-20 04:32:05)

Re: 10 second questions
(3) How many times would you have to write the number 6 when you write from 1 to 100?
(4) A bus contains 23 men and 14 women. In the first stop after the start, 11 men board and 2 women do. In the next stop, 6 men board and 3 women do.
In the next, 2 men alight and 2 women do, but also 2 men board the bus and 2 women do. The question is, how many places did the bus stop between the start and the destination? (You are granted 20 seconds for this, as a special case!)

10 second questions
(1) Is 1298045602 a perfect square?
(2) Is 6719247 a prime number?

(5) In the realm of whole numbers, apart from zero and one, which is the smallest number that is both a perfect square and a perfect cube?
(6) What is the angle between the hands of a clock when the time is 8.00?
(7) What is the side total of a 3x3 magic square with numbers from 1 to 9?
(8) How many zeros does a trillion (US) contain?
(9) What are the common factors of 111, 222, 333, 444, 555, 666, 777, 888, and 999? (Four of them)
(10) How many furlongs is a mile?

Re: 10 second questions
5) 64
6) 240°
7) 15
8) 12 (alternatively, 'a trillion (US)' contains no zeros)
9) 1, 3, 37, 111
10) 8

Re: 10 second questions
5. 64
6. 240 degrees
7. 15
8. 12 - And in the UK it is 18.
9. 1, 3, 37, 111
10. 8
I'll check my answers when I have time, I'm pretty sure one of my answers doesn't look right.
Re: 10 second questions
Great, Devante` and justlookingforthemoment, your replies are all correct.

11. What is so special about the number 2.7182818284.... to mathematicians?
12. How many nanoseconds is a second?
13. Which number has got the highest probability of being displayed when a pair of dice is cast?

14. How many seconds is an hour?
15. To how many digits is 22/7 a correct approximation of pi?

Re: 10 second questions
14. 60x60=3600 seconds
Last edited by espeon (2006-09-23 00:07:31)

Re: 10 second questions
I'll count the nanosecond noughts afterwards.

Re: 10 second questions
Excellent, all of you, Devante', jl, espeon.
16. How many prime numbers are there from 1 to 100?
17. How many numbers, including 7, are divisible by 7 from 1 to 100?

Re: 10 second questions
16. 25, obviously not counting 1.
17. 14?
This is probably like the one with the sixes.

Re: 10 second questions
I read justlookingforthemoment's answers after Ganesh told us that we were correct (and Espeon). It's e not e!.

Re: 10 second questions
Remember that a nanosecond is extremely small. It is a big number, but it is an extremely small unit of time.

Re: 10 second questions
Devanté wrote: It's e not e!.
Yeah, I knew someone would write that, because I wrote it in a maths project once. Something like, 'And that shows that the height of the cube is x!'

Re: 10 second questions
I also did something like that. Never put emotion into your math - The teacher doesn't look upon it very kindly.
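As an aside (not part of the original thread), the digit-counting puzzle discussed above, how many times the digit 6 is written when listing the numbers from 1 to 100, can be checked with a short brute-force script:

```python
# Brute-force check of the forum puzzle: count every occurrence of the
# digit 6 in the decimal representations of 1 through 100.
count = sum(str(n).count("6") for n in range(1, 101))
print(count)  # 20: ten from the units digits (6, 16, ..., 96) and ten
              # from the tens digits (60-69), with 66 contributing twice
```

This confirms the answer of 20 given in the thread.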
PlayerPredicate - CraftTweaker Documentation

Represents a predicate for a player, as a specialization of EntityPredicate. This predicate can be used to check various properties of the player entity, such as the game mode, experience, unlocked advancements and recipes, or statistics. By default, the entity passes the checks without caring about the entity type. If at least one of the properties is set, then the entity must be a player to pass the checks.

It might be required for you to import the package if you encounter any issues (like casting an Array), so better be safe than sorry and add the import at the very top of the file.

import crafttweaker.api.predicate.PlayerPredicate;

PlayerPredicate extends AnyDefaultingVanillaWrappingPredicate. That means all methods available in AnyDefaultingVanillaWrappingPredicate are also available in PlayerPredicate.

Name: withAdvancementPredicate
Adds an advancement to the ones that should be checked, along with the AdvancementPredicate that should be used to validate it. If the same advancement had already been added to the map with a different predicate, then the previous configuration is replaced. Otherwise the addition completes normally.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withAdvancementPredicate(advancement as string, builder as Consumer<AdvancementPredicate>) as PlayerPredicate
Parameters:
- advancement (string): The advancement that should be checked.
- builder (Consumer<AdvancementPredicate>): A consumer to configure the AdvancementPredicate for the given advancement.

Name: withBoundedExperienceLevel
Sets both the minimum and maximum value the experience level should be to min and max respectively. If the experience level already had some bounds specified, then they will be overwritten with the new values.
Both minimum and maximum values are inclusive, meaning that a value that is equal to either min or max will pass the check.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withBoundedExperienceLevel(min as int, max as int) as PlayerPredicate
Parameters:
- min (int): The minimum value the experience level should be.
- max (int): The maximum value the experience level should be.

Name: withBoundedStatistic
Sets both the minimum and maximum value the statistic should be to minValue and maxValue respectively. If the statistic already had some bounds specified, then they will be overwritten with the new values. Both minimum and maximum values are inclusive, meaning that a value that is equal to either minValue or maxValue will pass the check.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withBoundedStatistic(type as MCResourceLocation, name as MCResourceLocation, minValue as int, maxValue as int) as PlayerPredicate
Parameters:
- type (MCResourceLocation): The statistic's base type.
- name (MCResourceLocation): The statistic's unique identifier.
- minValue (int): The minimum value the statistic should be.
- maxValue (int): The maximum value the statistic should be.

Name: withExactExperienceLevel
Sets the experience level to exactly match the given value. If the experience level already had some bounds specified, then they will be overwritten with the new value.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withExactExperienceLevel(level as int) as PlayerPredicate
Parameters:
- level (int): The exact value the experience level should be.

Name: withExactStatistic
Sets the statistic to exactly match the given value.
If the statistic already had some bounds specified, then they will be overwritten with the new value.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withExactStatistic(type as MCResourceLocation, name as MCResourceLocation, value as int) as PlayerPredicate
Parameters:
- type (MCResourceLocation): The statistic's base type.
- name (MCResourceLocation): The statistic's unique identifier.
- value (int): The exact value the statistic should be.

Name: withGameMode
Sets the GameMode the player has to be in.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withGameMode(mode as GameMode) as PlayerPredicate
Parameters:
- mode (GameMode): The game mode.

Name: withLockedRecipe
Adds the recipe name to the list of recipes that have to be locked. If the predicate had already been set to check for this recipe's unlocked status, the setting is overwritten.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withLockedRecipe(name as string) as PlayerPredicate
Parameters:
- name (string): The name of the recipe that needs to be locked.

Name: withMaximumExperienceLevel
Sets the maximum value the experience level should be to max. If the experience level already had some bounds specified, then the maximum value of the bound will be overwritten with the value specified in max. On the other hand, if the experience level didn't have any bounds set, the maximum is set, leaving the lower end unbounded. The maximum value is inclusive, meaning that a value that is equal to max will pass the check.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withMaximumExperienceLevel(max as int) as PlayerPredicate
Parameters:
- max (int): The maximum value the experience level should be.

Name: withMaximumStatistic
Sets the maximum value the statistic should be to max. If the statistic already had some bounds specified, then the maximum value of the bound will be overwritten with the value specified in max. On the other hand, if the statistic didn't have any bounds set, the maximum is set, leaving the lower end unbounded. The maximum value is inclusive, meaning that a value that is equal to max will pass the check.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withMaximumStatistic(type as MCResourceLocation, name as MCResourceLocation, max as int) as PlayerPredicate
Parameters:
- type (MCResourceLocation): The statistic's base type.
- name (MCResourceLocation): The statistic's unique identifier.
- max (int): The maximum value the statistic should be.

Name: withMinimumExperienceLevel
Sets the minimum value the experience level should be to min. If the experience level already had some bounds specified, then the minimum value of the bound will be overwritten with the value specified in min. On the other hand, if the experience level didn't have any bounds set, the minimum is set, leaving the upper end unbounded. The minimum value is inclusive, meaning that a value that is equal to min will pass the check.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withMinimumExperienceLevel(min as int) as PlayerPredicate
Parameters:
- min (int): The minimum value the experience level should be.

Name: withMinimumStatistic
Sets the minimum value the statistic should be to min.
If the statistic already had some bounds specified, then the minimum value of the bound will be overwritten with the value specified in min. On the other hand, if the statistic didn't have any bounds set, the minimum is set, leaving the upper end unbounded. The minimum value is inclusive, meaning that a value that is equal to min will pass the check.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withMinimumStatistic(type as MCResourceLocation, name as MCResourceLocation, min as int) as PlayerPredicate
Parameters:
- type (MCResourceLocation): The statistic's base type.
- name (MCResourceLocation): The statistic's unique identifier.
- min (int): The minimum value the statistic should be.

Name: withUnlockedRecipe
Adds the recipe name to the list of recipes that have to be unlocked. If the predicate had already been set to check for this recipe's locked status, the setting is overwritten.
Returns: This predicate for chaining.
Return Type: PlayerPredicate
PlayerPredicate.withUnlockedRecipe(name as string) as PlayerPredicate
Parameters:
- name (string): The name of the recipe that needs to be unlocked.
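Since every method above returns the predicate itself for chaining, a combined check can be built fluently. The ZenScript sketch below is an illustration only: the constructor call and the recipe names are assumptions not taken from this documentation (only the with... methods are), so check your CraftTweaker version before relying on it.

```zenscript
import crafttweaker.api.predicate.PlayerPredicate;

// Sketch only: require a player between experience level 5 and 30 (inclusive)
// who has unlocked the crafting table recipe but not the beacon recipe.
// NOTE: obtaining the initial PlayerPredicate instance via a zero-argument
// constructor is an assumption; consult your CraftTweaker version for the
// actual factory or constructor.
var playerCheck = new PlayerPredicate()
    .withBoundedExperienceLevel(5, 30)
    .withUnlockedRecipe("minecraft:crafting_table")
    .withLockedRecipe("minecraft:beacon");
```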
Box Plot Explained: Interpretation, Examples, & Comparison (2024)

In descriptive statistics, a box plot or boxplot (also known as a box and whisker plot) is a type of chart often used in exploratory data analysis. Box plots visually show the distribution of numerical data and skewness by displaying the data quartiles (or percentiles) and averages.

Box plots show the five-number summary of a set of data: the minimum score, first (lower) quartile, median, third (upper) quartile, and maximum score.

Minimum Score
The lowest score, excluding outliers (shown at the end of the left whisker).

Lower Quartile
Twenty-five percent of scores fall below the lower quartile value (also known as the first quartile).

Median
The median marks the mid-point of the data and is shown by the line that divides the box into two parts (sometimes known as the second quartile). Half the scores are greater than or equal to this value, and half are less.

Upper Quartile
Seventy-five percent of the scores fall below the upper quartile value (also known as the third quartile). Thus, 25% of data are above this value.

Maximum Score
The highest score, excluding outliers (shown at the end of the right whisker).

Whiskers
The upper and lower whiskers represent scores outside the middle 50% (i.e., the lower 25% of scores and the upper 25% of scores).

The Interquartile Range (IQR)
The box shows the middle 50% of scores (i.e., the range between the 25th and 75th percentiles).

Why are box plots useful?
Box plots divide the data into sections, each containing approximately 25% of the data in the set. Box plots are useful as they provide a visual summary of the data, enabling researchers to quickly identify average values, the dispersion of the data set, and signs of skewness. Note that a perfectly symmetric box plot, where each quartile is the same length, represents a perfect normal distribution; most box plots will not conform to this symmetry.
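The five-number summary described above can be computed directly. Below is a minimal sketch using only Python's standard library; note that there are several conventions for computing quartiles (this one takes the median of each half of the sorted data, excluding the overall median when the count is odd), so results can differ slightly from other tools' defaults. The sample scores are made up.

```python
# Sketch: the five-number summary a box plot displays.
from statistics import median

def five_number_summary(data):
    """Return (minimum, Q1, median, Q3, maximum) for a list of numbers."""
    s = sorted(data)
    mid = len(s) // 2
    lower = s[:mid]                                  # lower half of the data
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]   # upper half (skip median if odd n)
    return min(s), median(lower), median(s), median(upper), max(s)

scores = [7, 8, 9, 9, 11, 11, 12, 14, 15, 19]
print(five_number_summary(scores))  # (7, 9, 11.0, 14, 19)
```

Each returned value corresponds to one feature of the plot: the whisker ends, the box edges, and the line inside the box.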
Box plots are useful as they show the average score of a data set
The median is the middle value of an ordered set of data and is shown by the line that divides the box into two parts. Half the scores are greater than or equal to this value, and half are less.

Box plots are useful as they show the skewness of a data set
The box plot shape will show if a statistical data set is normally distributed or skewed. When the median is in the middle of the box, and the whiskers are about the same on both sides of the box, then the distribution is symmetric. When the median is closer to the bottom of the box, and the whisker is shorter on the lower end of the box, then the distribution is positively skewed (skewed right). When the median is closer to the top of the box, and the whisker is shorter on the upper end of the box, then the distribution is negatively skewed (skewed left).

Box plots are useful as they show the dispersion of a data set
In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. The smallest and largest values are found at the ends of the whiskers and provide a visual indicator of the spread of scores (e.g., the range). The interquartile range (IQR) is shown by the box itself, covering the middle 50% of scores, and can be calculated by subtracting the lower quartile from the upper quartile (Q3 - Q1).

Box plots are useful as they show outliers within a data set
An outlier is an observation that is numerically distant from the rest of the data. When reviewing a box plot, an outlier is defined as a data point that is located outside the whiskers of the box plot: for example, more than 1.5 times the interquartile range below the lower quartile (below Q1 - 1.5 * IQR) or above the upper quartile (above Q3 + 1.5 * IQR).
(Source: https://towardsdatascience.com/understanding-boxplots-5e2df7bcbd51)
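The 1.5 * IQR outlier rule above can be sketched as follows. This is illustrative code, not part of any plotting library; quartiles again use the simple median-of-each-half convention, and the data are invented.

```python
# Sketch of the Tukey "fences" rule: flag points beyond 1.5 * IQR from the box.
from statistics import median

def tukey_outliers(data):
    s = sorted(data)
    mid = len(s) // 2
    q1 = median(s[:mid])
    q3 = median(s[mid + 1:] if len(s) % 2 else s[mid:])
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # the lower and upper fences
    return [x for x in s if x < low or x > high]

print(tukey_outliers([7, 8, 9, 9, 11, 11, 12, 14, 15, 19, 40]))  # [40]
```

Here Q1 = 9 and Q3 = 15, so the fences sit at 0 and 24, and only 40 falls outside them.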
How to compare box plots
Box plots are a useful way to visualize differences among different samples or groups. They provide a lot of statistical information, including medians, ranges, and outliers. Note that although box plots have been presented horizontally in this article, it is more common to view them vertically in research papers.

Step 1: Compare the medians of box plots
Compare the respective medians of each box plot. If the median line of a box plot lies outside the box of a comparison box plot, then there is likely to be a difference between the two groups.
(Source: https://blog.bioturing.com/2018/05/22/how-to-compare-box-plots/)

Step 2: Compare the interquartile ranges and whiskers of box plots
Compare the interquartile ranges (that is, the box lengths) to examine how the data is dispersed between each sample. The longer the box, the more dispersed the data; the shorter the box, the less dispersed the data. Next, look at the overall spread as shown by the extreme values at the ends of the two whiskers. This shows the range of scores (another type of dispersion). Larger ranges indicate a wider distribution, that is, more scattered data.

Step 3: Look for potential outliers
When reviewing a box plot, an outlier is defined as a data point that is located outside the whiskers of the box plot.

Step 4: Look for signs of skewness
If the data do not appear to be symmetric, does each sample show the same kind of asymmetry?
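Steps 1 and 2 above can also be carried out numerically rather than visually. The sketch below, with two invented samples, computes the quantities being compared: the medians (centers) and the IQRs (box lengths).

```python
# Sketch of comparing two box plots numerically: medians and IQRs.
from statistics import median

def box_stats(data):
    """Return (Q1, median, Q3) using the median-of-each-half convention."""
    s = sorted(data)
    mid = len(s) // 2
    q1 = median(s[:mid])
    q3 = median(s[mid + 1:] if len(s) % 2 else s[mid:])
    return q1, median(s), q3

a = [5, 6, 7, 8, 9, 10]
b = [8, 10, 12, 14, 16, 18]
q1a, med_a, q3a = box_stats(a)
q1b, med_b, q3b = box_stats(b)
print(med_a, med_b)          # Step 1: compare the centers
print(q3a - q1a, q3b - q1b)  # Step 2: compare the box lengths (IQRs)
```

Sample b has both a higher median and a longer box, i.e., more dispersed data.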
Ticker MACD - ThirdBrainFx

The chart contains the main line (the difference between the fast EMA and the slow EMA, applied to the Ticker line) and the signal line (obtained by applying the SMA to the main line). In the “Ticker MACD” indicator the line with index 0 (№0 in the “Colors” tab) is the main line of the MACD. The line with index 1 (№1 in the “Colors” tab) is the signal line. The indicator starts plotting the signal line PeriodSignal bars after the main line.

It should be noted that normal values of the indicator start appearing only after at least PeriodSlow bars, or even more. This is because the EMA values used to calculate the MACD are computed with the recurrent formula X[t+1] = X[t] + K*(Price[t+1] - X[t]), and the initial value of each EMA is taken to be the current price: X[0] = Price[0]. This leads to EMAs with the same period, but with different initial times, having different values at the same moment of time due to their different initial values. But after a fairly long period of time, the values of these EMAs become virtually identical. This is hardly noticeable on usual charts because of the large number of bars in the window. For the ticker version, it is necessary to wait for a sufficient number of bars to be accumulated.

The classic indicator parameters are selected by default: 12, 26, 9. If anyone is interested, they can experiment with the parameters and determine the rules for working with this indicator. The traditional version: the crossing of the signal line by the main line. Sometimes the crossing of the zero line by the main or signal line is used. The third version: an increase/decrease in the value of the main line compared to the previous value (in the spirit of the AO indicator by Bill Williams). A more “exquisite” version: the divergence of the Ticker MACD indicator’s main line with the Ticker indicator’s main line. As you can see, there are some useful options.
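The warm-up effect described above, where two EMAs with the same period but different starting times converge, can be demonstrated with a short sketch. This is not ThirdBrainFx's implementation; the smoothing factor K = 2/(period + 1) is a common convention and an assumption here, and the price series is synthetic.

```python
# Sketch of the EMA recurrence X[t+1] = X[t] + K*(Price[t+1] - X[t]),
# seeded with X[0] = Price[0], to show that two EMAs started at different
# times converge after enough bars accumulate.
def ema(prices, period, start=0):
    k = 2.0 / (period + 1)      # common smoothing-factor convention (an assumption)
    x = prices[start]           # initial EMA value is the first available price
    out = [x]
    for p in prices[start + 1:]:
        x = x + k * (p - x)
        out.append(x)
    return out

prices = [100 + 0.1 * t for t in range(500)]   # synthetic, steadily rising ticks
full = ema(prices, 26)                         # EMA started at bar 0
late = ema(prices, 26, start=50)               # same EMA started 50 bars later
print(abs(full[-1] - late[-1]))                # vanishingly small difference
```

The two series start far apart (different seeds) but the gap shrinks by a factor of (1 - K) every bar, which is exactly why the ticker version needs a sufficient number of bars before its values become meaningful.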
Price return index value

Calculating the return of stock indices: to calculate the return of a stock index between any two points in time, first find the price level of the chosen index on the first and last trading days of the period you're evaluating.

The S&P 500 is a stock market index that tracks 500 large-cap companies. As of March 13, 2020, the S&P 500 has an average 10-year annual return of 7.99%. Market cap is the total value of all shares of stock a company has issued. In a price-weighted index, the price movements of the companies with the highest share prices have the largest impact on the value of the index.

A total return index is a type of equity index that tracks both the capital gains of a group of stocks and the income those stocks generate. It takes into account not just the price returns of the stocks but also the dividends they pay. Say the value of the S&P 500 index is $1,000 one day: a price-only index ignores any dividends paid out that day, so one solution to the dividend problem is a 'total return' index. The total return index was first posted on the NASDAQ GIDS (Global Index Data Service) feed on December 31, 2012.

We measure value stocks using three factors: the ratios of book value, earnings, and sales to price.

Index Value
The formula for calculating the value of a price return index is as follows:

$$ V_{PRI} = \frac{ \sum_{i=1}^{N}{n_i P_i} }{ D } $$

Where:
V_PRI = the value of the price return index;
n_i = the number of units of constituent security i held in the index portfolio;
P_i = the price of constituent security i;
N = the number of constituent securities in the index;
D = the value of the divisor.

While the yield is only a partial reflection of the growth experienced, the total return includes both the yield and the increased value of the shares: a 6% gain in share price plus a 4% yield shows as a total return growth of 10%. If the same index experienced a 4% loss in share price instead of a 6% gain, the total return would show as 0%.

Base value: 1,000 points. Update frequency: total return indices are calculated at the end of the trading day and published on the next day around 8:30 am.

For example, during October 2015, the S&P 500 (a price return index) increased in value by 8.3%.

The SPSE Total Return Index (“STRI”) is an aggregate market capitalization index which reflects the aggregate market value of all its components relative to their base values.

A price return index reflects only the prices of the constituent securities.
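The V_PRI formula can be sketched directly in code. The holdings, prices, and divisor below are made-up illustrative numbers; in practice the divisor D is maintained by the index provider so that corporate actions don't jump the index level.

```python
# Sketch of the price return index formula: V_PRI = sum(n_i * P_i) / D.
def price_return_index(units, prices, divisor):
    """units: n_i per constituent; prices: P_i per constituent; divisor: D."""
    assert len(units) == len(prices)
    return sum(n * p for n, p in zip(units, prices)) / divisor

units = [100, 200, 50]        # n_i: units of each constituent held in the portfolio
prices = [25.0, 10.0, 40.0]   # P_i: current constituent prices
divisor = 6.5                 # D: chosen here so the index lands at a round level
print(price_return_index(units, prices, divisor))  # 1000.0
```

With these numbers the aggregate portfolio value is 6,500, and dividing by the divisor of 6.5 gives an index level of 1,000 points, matching the base value mentioned above.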
The income generated by the assets in the portfolio, in the form of interest and dividends, is not reflected in the index value.

What is Total Return Index Value (TRIV)? It is similar to the stock price index value (SPIV), except that the TRIV is based on the aggregate float.

A price index only considers price movements (capital gains or losses) of the securities that make up the index, while a total return index includes dividends, interest, rights offerings and other distributions realized over a given period of time. Looking at an index's total return is usually considered a more accurate measure of performance.

The Value and Return of an Index: every index weighting method has a formula that calculates the weighting of a given constituent security within an index. For the following examples, the same portfolio of three securities would be used to help illustrate the weighting methods.

These charts show five years of index values, ending in January 2017. By the end of this period, the total return indices for the Dow Jones Industrial Average and S&P 500 were ahead of their price return counterparts by 13.5% and 11.3%, respectively.
The next time an index hits a major milestone and is noted in the news, it will likely be a price return index.

Price Return is the difference between the current price of the stock and the price you paid for the stock. It can be negative, zero or positive. Total Return is Price Return plus Dividend Return: it is the difference between the current price of the stock and the price you paid for it, but it also includes any dividends received. Often, you will see a difference between a stock's price return and its total return.

S&P Style Indices divide the complete market capitalization of each parent index into growth and value segments, measuring value stocks using three factors: the ratios of book value, earnings, and sales to price. Constituents are drawn from the S&P 500®.

How to Calculate Rate of Return on a Price-Weighted Index: price-weighted indices display the average value of a stock without regard to the number of shares purchased or the magnitude of the stock's price. Changes in a price-weighted index allow you to track increases or decreases in the index.
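The Price Return versus Total Return distinction described above reduces to two small formulas. The sketch below applies them to a single holding with invented numbers.

```python
# Sketch: price return ignores dividends; total return adds them back.
def price_return(buy, current):
    """Fractional gain/loss from price movement alone."""
    return (current - buy) / buy

def total_return(buy, current, dividends):
    """Fractional gain/loss including dividends received."""
    return (current - buy + dividends) / buy

buy, current, dividends = 100.0, 106.0, 2.0
print(price_return(buy, current))             # 0.06, i.e., a 6% price return
print(total_return(buy, current, dividends))  # 0.08, i.e., an 8% total return
```

The 2-point gap here is exactly the dividend yield, which is why total return indices pull ahead of their price return counterparts over multi-year periods, as in the 13.5% and 11.3% figures quoted above.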
enVision Math Common Core Grade 7 Answer Key Topic 6 Use Sampling To Draw Inferences About Populations

Practice with the help of enVision Math Common Core Grade 7 Answer Key Topic 6 Use Sampling to Draw Inferences About Populations regularly and improve your accuracy in solving questions.

enVision Math Common Core 7th Grade Answers Key Topic 6 Use Sampling To Draw Inferences About Populations

TOPIC 6 GET READY!

Review What You Know!

Choose the best term from the box to complete each definition.

Question 1.
A ____ is how data values are arranged.
Answer: Data distribution. A data distribution is how data values are arranged.

Question 2.
The part of a data set where the middle values are concentrated is called the ___ of the data.
Answer: Center. The part of a data set where the middle values are concentrated is called the center of the data.

Question 3.
A ___ anticipates that there will be different answers when gathering information.
Answer: Statistical question. A statistical question anticipates that there will be different answers when gathering information.

Question 4.
____ is a measure that describes the spread of values in a data set.
Answer: Variability. Variability is a measure that describes the spread of values in a data set.

Statistical Measures

Use the following data to determine each statistical measure.
9, 9, 14, 7, 12, 8, 11, 19, 15, 11

Question 5.
mean
Answer: The mean is 11.5.
Explanation: Add all the given numbers, then divide by the count of numbers. Sum = 9+9+14+7+12+8+11+19+15+11 = 115, and Mean = 115/10 = 11.5.

Question 6.
median
Answer: The median is 11.
Explanation: Put the numbers in order from smallest to largest; the number in the middle is the median. If two numbers are in the middle, add them and divide by 2: Median = (11+11)/2 = 22/2 = 11.

Question 7.
range
Answer: The range is 12.
Explanation: The range is the difference between the lowest and the highest value. In order, the numbers are 7, 8, 9, 9, 11, 11, 12, 14, 15, 19; the lowest number is 7 and the highest is 19, so Range = 19 - 7 = 12.

Question 8.
mode
Answer: The modes are 9 and 11.
Explanation: The mode is the value around which there is the greatest concentration. Count how many times each value appears: both 9 and 11 appear twice, while every other value appears once, so this data set is bimodal with modes 9 and 11. (For comparison, the empirical relationship Mode = 3(Median) - 2(Mean) = 3(11) - 2(11.5) = 33 - 23 = 10 gives an estimate close to these values.)

Question 9.
interquartile range (IQR)
Answer: The interquartile range is 5.
Explanation: Given the data 9, 9, 14, 7, 12, 8, 11, 19, 15, 11, arrange it in order: 7, 8, 9, 9, 11, 11, 12, 14, 15, 19. The first half is 7, 8, 9, 9, 11 and the second half is 11, 12, 14, 15, 19. Quartiles are the observations that divide the whole set of observations into four equal parts. The lower quartile (LQ) is the median of the first half: LQ = 9. The upper quartile (UQ) is the median of the second half: UQ = 14. The interquartile range is the difference between the upper and lower quartiles: IQR = UQ - LQ = 14 - 9 = 5.

Question 10.
mean absolute deviation (MAD)
Answer: The MAD is 2.8.
Explanation: The mean absolute deviation of a data set is the average distance between each data point and the mean. It gives us an idea about the variability in a data set.
Step 1: Calculate the mean. Sum = 9+9+14+7+12+8+11+19+15+11 = 115, so Mean = 115/10 = 11.5.
Step 2: Calculate how far away each data point is from the mean using positive distances. These are called absolute deviations:
|9 - 11.5| = 2.5
|9 - 11.5| = 2.5
|14 - 11.5| = 2.5
|7 - 11.5| = 4.5
|12 - 11.5| = 0.5
|8 - 11.5| = 3.5
|11 - 11.5| = 0.5
|19 - 11.5| = 7.5
|15 - 11.5| = 3.5
|11 - 11.5| = 0.5
Step 3: Add the absolute deviations: 2.5+2.5+2.5+4.5+0.5+3.5+0.5+7.5+3.5+0.5 = 28.0. Divide by the number of observations: MAD = 28/10 = 2.8.

Data Representations

Make each data display using the data from Problems 5-7.

Question 11.
Box plot
Answer: The displayed data are collected from Problems 5-7. A box plot is a simple way of representing statistical data on a plot in which a rectangle is drawn to represent the second and third quartiles, usually with a vertical line inside to indicate the median value. The lower and upper quartiles are shown as horizontal lines on either side of the rectangle.

Question 12.
Dot plot
Answer: A dot plot, also known as a strip plot or dot chart, is a simple form of data visualization that consists of data points plotted as dots on a graph with an x- and y-axis. These types of charts are used to graphically depict certain data trends or groupings.

Statistical Questions

Question 13.
Which is NOT a statistical question that might be used to gather data from a certain group?
A. In what state were you born?
B. What is the capital of the United States?
C. How many pets do you have?
D. Do you like strawberry yogurt?
Answer: B is NOT a statistical question.
Explanation: A statistical question anticipates that there will be different answers when gathering information, whereas "What is the capital of the United States?" has the same answer for everyone, so it could not be used to gather data from a certain group.

Language Development

Fill in the graphic organizer. Write each definition in your own words. Illustrate or cite supporting examples.

What types of changes would you like to see in your community? How could you combine physical activity and a fundraiser?
Answer: A marathon is one fundraiser that combines physical activity with raising money.

If you could study an animal population in depth, which animal would you choose, and why?
Answer: The tiger. After a century of decline, overall wild tiger numbers are starting to tick upward. Based on the best available information, tiger populations are stable or increasing in India, Nepal, Bhutan, Russia and China.

If you were to design a piece of art that moved, how would you make it move?
Answer: Mobiles are free-hanging sculptures that can move in the air.
These sculptures are not only artistic, but they are also a great demonstration of balanced forces. If you look at a traditional mobile more closely, you will usually notice that it is made of various horizontal rods.

Materials: heavy construction paper or cardstock (various colors work well), a hole punch, pen, markers, scissors, tape, string, at least 6 straws, and a ceiling or doorframe you can hang the mobile from (with a chair or an adult to help in hanging it).

Steps: Carefully cut out the different shapes with your scissors; if you like, you can decorate each of them. Punch a hole into the top center of each cut-out shape. Attach a piece of string to each shape by threading it through the punched hole and tying a knot; try to vary the length of string attached to each shape so that they are not all the same. Start with one layer of your mobile: attach a piece of string to the center of one of your straws and hold the straw by the string so it is hanging freely in the air. Once the straw is balanced, tie your first shape to one end of the straw. Tie a second shape to the other end of the straw, then hold the straw up in the air again and balance it by moving one of the shapes along the straw. Use a second straw and two more shapes to build another balanced structure.

Lesson 6.1 Populations and Samples

Solve & Discuss It!

The table shows the lunch items sold on one day at the middle school cafeteria. Use the given information to help the cafeteria manager complete his food supply order for next week. What conclusions can you draw from the lunch data?
Answer: The highest-selling item is the hot dog and the lowest-selling item is the veggie burger. This information helps the cafeteria manager complete his food supply order for next week.

Focus on math practices

Construct Arguments: Why might it be helpful for the cafeteria manager to look at the items ordered on more than one day?
Answer: Sales information from more than one day helps the cafeteria manager complete his food supply order so as to avoid both the wastage of excess food items and the losses that occur due to less saleable or non-saleable items.

Essential Question: How can you determine a representative sample of a population?
Answer: A representative sample is a subset of a population that seeks to accurately reflect the characteristics of the larger group. For example, Morgan and her friends are a subset of the registered voters of Morgan's town being surveyed about the construction of a new stadium.

Try It!
Miguel thinks the science teachers in his school give more homework than the math teachers. He is researching the number of hours middle school students in his school spend doing math and science homework each night.
Answer: The population includes all the students in Miguel's middle school. A possible sample is some students from each of the grades in the middle school.

Convince Me! Why is it more efficient to study a sample rather than an entire population?
Answer: Studying the whole population is often very expensive and time consuming because of the number of people involved. For example, surveying every 10th person of the population reduces the time needed by a factor of 10 and still gives representative results.

Try It!
A produce manager is deciding whether there is customer demand for expanding the organic food section of her store. How could she obtain the information she needs?
Answer: The manager can interview a random sample of customers, as a sample out of the population, about their demand for an expanded organic food section.

Try It!
Ravi is running against two other candidates for student council president. All of the 750 students in Ravi's school will vote for student council president. How can Ravi generate a representative sample that will help him determine whether he will win the election?
Answer: Ravi can ask one in every 10 students to reduce the number of people he needs to interview. A random sample of 75 of the 750 students is a representative sample that will help him determine whether he will win the election.

Try It!
The table at the right shows the random sample that Jeremy generated from the same population as Morgan's and Maddy's samples. Compare Jeremy's sample to Morgan's and Maddy's.
Answer: Jeremy's sample also contains 20 values. It shares the value 36 with Morgan's sample and the value 126 with Maddy's sample, but the distribution of values in Jeremy's sample is different from those of Morgan's and Maddy's samples.

KEY CONCEPT
A population is an entire group of objects (people, animals, plants) from which data can be collected. A sample is a subset of the population. When you ask a statistical question about a population, it is often more efficient to gather data from a sample of the population. A representative sample of a population has the same characteristics as the population. Generating a random sample is one reliable way to produce a representative sample of a population. You can generate multiple random samples that are different but that are each representative of the population.

Do You Understand?

Question 1.
Essential Question: How can you determine a representative sample of a population?
Answer: A representative sample of a population has the same characteristics as the population; generating a random sample is one reliable way to produce one.

Question 2.
Construct Arguments: Why does a sample need to be representative of a population?
Answer: For reliability. Conclusions drawn from the sample only apply to the population if the sample reflects it, and a random sample is one reliable way to produce a representative sample of a population.

Question 3.
Be Precise: The quality control manager of a peanut butter manufacturing plant wants to ensure the quality of the peanut butter in the jars coming down the assembly line. Describe a representative sampling method she could use.
Answer: The quality control manager should use a systematic in-line sampling method to ensure the quality of the peanut butter in the jars coming down the assembly line, for example checking every 4th jar that comes down.

Do You Know How?

Question 4.
A health club manager wants to determine whether the members would prefer a new sauna or a new steam room. The club surveys 50 of its 600 members. What is the population of this study?
Answer: The population of this study is all 600 members of the club.

Question 5.
A journalism teacher wants to determine whether his students would prefer to attend a national writing convention or a tour of a local newspaper press. The journalism teacher has a total of 120 students in 4 different classes. What would be a representative sample in this situation?
Answer: A representative sample would be 4 students, one chosen at random from each of the 4 classes, so that every class is represented when collecting student preferences for the national writing convention or the local newspaper press tour.

Question 6.
Garret wants to find out which restaurant people think serves the best beef brisket in town.
a. What is the population from which Garret should find a sample?
Answer: The population is all the people in town who eat at its restaurants.
b. What might be a sample that is not representative of the population?
Answer: A sample made up only of people who do not eat beef brisket would not be representative of the population.

Practice & Problem Solving

Leveled Practice: In 7 and 8, complete each statement with the correct number.

Question 7.
Of a group of 200 workers, 15 are chosen to participate in a survey about the number of miles they drive to work each week.
Answer: In this situation, the sample consists of the 15 workers selected at random to participate in the survey. The population consists of all 200 workers.

Question 8.
The ticket manager for a minor league baseball team awarded prizes by drawing four numbers corresponding to the ticket stub numbers of four fans in attendance.
Answer: In this situation, the sample consists of the 4 people selected to win a prize. The population consists of all the spectators who purchased tickets to attend the game.

Question 9.
A supermarket conducts a survey to find the approximate number of its customers who like apple juice. What is the population of the survey?
Answer: The population is all of the supermarket's customers; the survey gathers data from a representative sample of them, a subset with the same characteristics as the whole.

Question 10.
A national appliance store chain is reviewing the performances of its 400 sales associate trainees. How can the store choose a representative sample of the trainees?
Answer: By random sampling: select trainees so that each of the 400 has an equal probability of being chosen.

Question 11.
Of the 652 passengers on a cruise ship, 30 attended the magic show on board.
a. What is the sample?
b. What is the population?
Answer: a. The sample is the 30 passengers who attended the magic show. b. The population is all 652 passengers on the ship.

Question 12.
Make Sense and Persevere: The owner of a landscaping company is investigating whether his 120 employees would prefer a water cooler or bottled water. Determine the population and a representative sample for this situation.
Answer: The population is all 120 employees. A representative sample is a subset that accurately reflects the characteristics of the larger group; for example, 12 randomly chosen employees could serve as a representative sample for this investigation.

Question 13.
Higher Order Thinking: A bag contains 6 yellow marbles and 18 red marbles. If a representative sample contains 2 yellow marbles, then how many red marbles would you expect it to contain? Explain.
You would expect it to contain 6 red marbles. The bag holds 3 red marbles for every yellow marble (18 ÷ 6 = 3), so a representative sample with 2 yellow marbles should contain 2 × 3 = 6 red marbles.
Question 14. Chung wants to determine the favorite hobbies among the teachers at his school. How could he generate a representative sample? Why would it be helpful to generate multiple samples? A representative sample is a subset of a population that accurately reflects the characteristics of the larger group. Chung could randomly select teachers from every grade level or department to survey. Generating multiple samples is helpful because it lets him check whether the results are consistent from sample to sample, which makes his inference about favorite hobbies more reliable.
Question 15. The table shows the results of a survey conducted to choose a new mascot. Yolanda said that the sample consists of all 237 students at Tichenor Middle School. a. What was Yolanda’s error? Yolanda described the population, not the sample. All 237 students at Tichenor Middle School make up the population; the sample consists only of the students who were actually surveyed. This kind of error happens when a sample taken from the population is confused with the entire population, that is, when the statistic you measure is mixed up with the parameter you would find if you took a census of the whole population. b. What is the sample size? Explain. The sample size is 40, the number of students who responded to the survey. A sample size is the part of the population chosen for a survey or experiment. For example, to learn car owners' brand preferences you would not survey all the millions of car owners in the country; you would take a sample of perhaps several thousand owners. If you choose your sample wisely, it will be a good representation of the population.
Question 16. To predict the outcome of the vote for the town budget, the town manager assigned random numbers and selected 125 registered voters. He then called these voters and asked how they planned to vote.
Is the town manager’s sample representative of the population? Explain. Yes, the town manager’s sample is representative of the population. A representative sample is one whose characteristics match those of the population, usually the characteristic you are targeting with your research. Because the town manager selected the 125 registered voters at random, every registered voter had an equal chance of being chosen, so the sample should reflect how the town plans to vote.
Question 17. David wants to determine the number of students in his school who like Brussels sprouts. What is the population of David’s study? The population of David’s study is all of the students in his school, whether they like Brussels sprouts or not. A population is a whole; it is every member of a group. A population is the opposite of a sample, which is a fraction or percentage of a group.
Question 18. Researchers want to determine the percentage of Americans who have visited The Florida Everglades National Park in Florida. The diagram shows the population of this study, as well as the sample used by the researchers. After their study, the researchers conclude that nearly 75% of Americans have visited the park. a. What error was likely made by the researchers? A sampling error: the researchers likely selected a sample that does not represent the entire population. A sampling error is a statistical error that occurs when an analyst does not select a sample that represents the whole population of data; as a result, the results found in the sample do not represent the results that would be obtained from the entire population. A conclusion that nearly 75% of Americans have visited the park suggests the sample over-represented people who live near or travel to Florida. b. Give an example of steps researchers might take to improve their study. Here are a few simple steps to reduce sampling error: 1. Increase the sample size: a larger sample gives a more accurate result because the study gets closer to the actual population. 2. Divide the population into groups: survey groups according to their share of the population instead of taking a purely haphazard sample.
For example, if people of a specific demographic make up 20% of the population, make sure that your sample reflects that share. 3. Know your population: study your population and understand its demographic mix, so that you target only the sample that matters for your question.
Question 19. An art teacher asks a sample of students if they would be interested in studying art next year. Of the 30 students he surveys, 81% are already enrolled in one of his art classes this year. Only 11% of the school’s students are studying art this year. Did the teacher survey a representative sample of the students in the school? Explain. No. In the teacher's sample, 81% of the students are already enrolled in one of his art classes, but only 11% of the school's students study art this year. The sample greatly over-represents art students, so it is not representative of the school and is likely biased toward interest in studying art.
Question 20. Make Sense and Persevere A supermarket wants to conduct a survey of its customers to find whether they enjoy oatmeal for breakfast. Describe how the supermarket could generate a representative sample for the survey. The manager of the supermarket can interview customers at random, for example, every 10th customer who enters the store. This random sample can serve as a representative sample of the population of customers for the question of whether they enjoy oatmeal for breakfast.
Question 21. Critique Reasoning Gwen is the manager of a clothing store. To measure customer satisfaction, she asks each shopper who makes big purchases for a rating of his or her overall shopping experience. Explain why Gwen’s sampling method may not generate a representative sample. Answer: Gwen surveys only shoppers who make big purchases, and those customers are likely to be more satisfied than a typical shopper. By sampling big-purchase customers rather than all customers of the clothing store, she may overestimate overall satisfaction. Assessment Practice
Question 22. Sheila wants to research the colors of houses on a highly populated street. Which of the following methods could Sheila use to generate a representative sample? Select all that apply.
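The random-number-generator method in Question 22 can be sketched with Python's random module. The 40-house street and the 20% draw below are made-up assumptions for illustration:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical street of 40 houses, numbered 1 through 40.
houses = list(range(1, 41))

# Draw at least 20% of the house numbers without replacement,
# like pulling slips of paper out of a box.
sample_size = int(0.20 * len(houses))  # 8 houses
sample = random.sample(houses, sample_size)

print(sorted(sample))
```

Every house has the same chance of being chosen, which is what makes the sample representative.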
The statements that apply are: Assign each house a number and use a random number generator to produce a list of houses for the sample. List the house numbers on slips of paper and draw at least 20% of the numbers out of a box.
Question 23. A national survey of middle school students asks how many hours a day they spend doing homework. Which sample best represents the population? A. A group of 941 students in eighth grade B. A group of 886 students in sixth grade in a certain county C. A group of 795 students in seventh grade in different states D. A group of 739 students in different middle school grade levels from various states Answer: D. A group of 739 students in different middle school grade levels from various states is correct. Explain the reasoning for your answer in Part A. Option D is the only answer that covers multiple grades in different states of the country, so it is the most representative sample among the four.
Lesson 6.2 Draw Inferences from Data Solve & Discuss It! The students in Ms. Miller’s class cast their votes in the school-wide vote for which color to paint the cafeteria walls. Based on the data, what might you conclude about how the rest of the school will vote? Make Sense and Persevere How many students are in Ms. Miller’s class? How many students voted for each color? 30 students are in Ms. Miller’s class. Number of students who voted for each color: RED = 7, BLUE = 12, GREEN = 4, YELLOW = 3, ORANGE = 4. Because blue received the most votes in the class, you might conclude that the rest of the school will also favor blue. Focus on math practices Reasoning How can you determine whether a sample is representative of a population? Answer: A sample is a subset of the population. A representative sample of a population has the same characteristics as the population. Generating a random sample is one reliable way to produce a representative sample of a population. Essential Question How can inferences be drawn about a population from data gathered from samples? By using sample statistics.
Values computed from a sample are referred to as sample statistics, while values concerning a population are referred to as population parameters. The process of using sample statistics to make conclusions about population parameters is known as inferential statistics. Try It! Dash collects data on the hair lengths of a random sample of seventh-grade boys in his school. The data are clustered between ½ and 2 inches and between 3½ and 4½ inches. Dash can infer from the data that seventh-grade boys in his school have both short and long hair. Convince Me! How does a dot plot help you make inferences from data? Answer: A dot plot is a type of simple histogram-like chart used in statistics for relatively small data sets in which values fall into a number of discrete bins. Clusters, gaps, and the overall shape of the dots show where values concentrate, which supports inferences like Dash's. Try It! Alexis surveys three different samples of 20 students selected randomly from the population of 492 students in the seventh grade about their choice for class president. In each sample, Elijah receives the fewest votes. Alexis infers that Elijah will not win the election. Is her inference valid? Answer: Yes, her inference is valid. Alexis randomly surveyed three samples of 20 students each, 60 students in all, which is more than 10% of the population of 492. Because Elijah receives the fewest votes in every sample, it is reasonable to infer that he will not win the election. Try It! For his report, Derek also collects data from a random sample of eighth graders in his school, and finds that 18 out of 20 eighth graders have cell phones. If there are 310 eighth graders in his school, estimate the number of eighth graders who have cell phones. About 279 eighth graders have cell phones, since (18/20) × 310 = 279. You can analyze numerical data from a random sample to draw inferences about the population. Measures of center, like mean and median, and measures of variability, like range, can be used to analyze the data in a sample. Do You Understand? Question 1. Essential Question How can inferences be drawn about a population from data gathered from samples?
Inferential statistics is a way of making inferences about populations based on samples. You gather data from a representative sample, compute sample statistics such as the mean or median, and use them to draw conclusions about the whole population.
Question 2. Reasoning Why can you use a random sample to make an inference? A random sample is a subset of the population selected without bias, so it can be used to make inferences about the entire population. Random samples are more likely to contain data that can be used to make predictions about a whole population. The size of a sample influences the strength of the inference about the population.
Question 3. Critique Reasoning Darrin surveyed a random sample of 10 students from his science class about their favorite types of TV shows. Five students like detective shows, 4 like comedy shows, and 1 likes game shows. Darrin concluded that the most popular type of TV show among students in his school is likely detective shows. Explain why Darrin’s inference is not valid. Darrin’s inference is not valid because his sample came only from his science class, which is a small sample and does not represent all students in the school. Out of the 10 sampled students: 5 like detective shows, 4 like comedy shows, and 1 likes game shows. A sample of one class cannot support a conclusion about the most popular type of show in the whole school. Do You Know How?
Question 5. In a carnival game, players get 5 chances to throw a basketball through a hoop. The dot plot shows the number of baskets made by 20 different players. a. Make an inference by looking at the shape of the data. There are 20 players, and each got 5 chances. All but 2 players made at least one basket, and the data cluster between 1 and 4 baskets, so a reasonable inference is that most players make some, but not all, of their shots. The tallies are: 2 players made 0 baskets, 3 players made 1, 4 players made 2, 4 players made 3, 4 players made 4, and 3 players made all 5. b. What is the median of the data? What is the mean? Do these measures of center support the inference you made in part (a)?
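Expanding the tallies from part (a) into the full 20-value data set lets you compute part (b)'s measures of center directly; a sketch using Python's statistics module:

```python
from statistics import mean, median

# Baskets made, expanded from the tallies in part (a): 2 players
# scored 0, 3 scored 1, 4 scored 2, 4 scored 3, 4 scored 4, 3 scored 5.
tallies = {0: 2, 1: 3, 2: 4, 3: 4, 4: 4, 5: 3}
baskets = [score for score, count in tallies.items() for _ in range(count)]

print(median(baskets))  # 3.0 (average of the 10th and 11th values)
print(mean(baskets))    # 2.7 (54 total baskets / 20 players)
```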
Median = 3. Listing all 20 values in ascending order, the 10th and 11th values are both 3, so the median is 3. Mean = 2.7. The total number of baskets is 0(2) + 1(3) + 2(4) + 3(4) + 4(4) + 5(3) = 54, and 54 ÷ 20 = 2.7. Yes, both measures of center fall near the middle of the scale, which supports the inference from part (a) that most players make some, but not all, of their shots.
Question 6. In the dot plot above, 3 of 20 players made all 5 baskets. Based on this data, about how many players out of 300 players will make all 5 baskets? About 45 players will make all 5 baskets. Since 3 of 20 players made all 5 baskets, set up the proportion 3/20 = x/300 and cross multiply: x = (3 × 300) ÷ 20 = 45.
Question 7. The manager of a box office gathered data from two different ticket windows where tickets to a music concert were being sold. Does the data shown in the box plots below support the inference that most of the tickets sold were about $40? Explain. No, the box plots do not support the inference. Most of the tickets sold were about $50 to $60, because the interquartile range (the box) lies between $50 and $60. Practice & Problem Solving Leveled Practice In 8-10, use the sample data to answer the questions. Alicia and Thea are in charge of determining the number of T-shirts to order to sell in the school store. Each student collected sample data from the population of 300 students. Alicia surveyed 50 students in the cafeteria. Thea surveyed the first 60 students who arrived at school one morning.
Question 8. Use Alicia’s data to estimate the number of T-shirts they should order. They should order about 180 T-shirts.
Question 9. Use Thea’s data to estimate the number of T-shirts they should order. They should order about 255 T-shirts.
Question 10. Construct Arguments Can Alicia or Thea make a valid inference? Explain. Alicia can make a valid inference. Her sample of 50 students in the cafeteria is closer to a random sample of the 300 students, supporting her estimate of about 180 T-shirts. Thea surveyed the first 60 students who arrived at school one morning, which is a convenience sample that may not represent all 300 students, so her estimate of about 255 T-shirts is less reliable. Question 11.
Three of the five medical doctors surveyed by a biochemist prefer his newly approved Brand X as compared to the leading medicine. The biochemist used these results to write the TV advertisement shown. Is the inference valid? Explain your answer. No, the inference is not valid. Although 3 out of 5 doctors (60%) preferred Brand X, a sample of only five doctors is far too small to support an advertising claim about doctors in general.
Question 12. Aaron conducted a survey of the type of shoes worn by a random sample of students in his school. The results of his survey are shown at the right. a. Make a valid inference that compares the number of students who are likely to wear gym shoes and those likely to wear boots. b. Make a valid inference that compares the number of students who are likely to wear boots and those likely to wear sandals. a) The number of students likely to wear gym shoes is about three times the number likely to wear boots. b) The number of students likely to wear boots is about half the number likely to wear sandals.
Question 13. Shantel and Syrus are researching the types of novels that people read. Shantel asks every ninth person at the entrance of a mall. She infers that about 26% of the population prefers fantasy novels. Syrus asks every person in only one store. He infers that about 47% of the population prefers fantasy novels. a. Construct Arguments Whose inference is more likely to be valid? Explain. b. What mistake might Syrus have made? a) Shantel's inference is more likely to be valid. Asking every ninth person at the entrance of a mall is a systematic, nearly random sample of visitors, which supports her estimate that about 26% of the population prefers fantasy novels. b) Syrus's mistake was surveying every person in only one store. The shoppers in a single store are not representative of the whole population, so his estimate of 47% is likely biased.
Question 14. Higher Order Thinking A national TV news show conducted an online poll to find the nation’s favorite comedian. The website showed the pictures of 5 comedians and asked visitors of the site to vote.
The news show inferred that the comedian with the most votes was the funniest comedian in the nation. a. Is the inference valid? Explain. b. How could you improve the poll? Explain. a. No, the inference is not valid. The poll is a voluntary online sample: only people who visit the website and choose to vote are counted, so the sample is self-selected and not representative of the nation. b. The poll could be improved by surveying a random sample of people across the country, rather than relying on self-selected website visitors, so that every person has an equal chance of being included. In 15 and 16, use the table of survey results from a random sample of people about the way they prefer to view movies.
Question 15. Lindsay infers that out of 400 people, 300 would prefer to watch movies in a theater. Is her inference valid? Explain. No, her inference is not valid. In the survey results from the random sample, more people preferred streaming than watching movies in a theater, so the data do not support 300 out of 400 preferring the theater.
Question 16. Which inferences are valid? Select all that apply. Most people would prefer streaming over any other method.
Question 17. Monique collects data from a random sample of seventh graders in her school and finds that 10 out of 25 seventh graders participate in after-school activities. Write and solve a proportion to estimate the number of seventh graders, n, who participate in after-school activities if 190 seventh graders attend Monique’s school. 10/25 = n/190, so n = (10 × 190) ÷ 25 = 76. About 76 seventh graders participate in after-school activities.
Question 18. Each of the 65 participants at a basketball camp attempted 20 free throws. Mitchell collected data for the first 10 participants, most of whom were first-time campers. Lydia collected data for the next 10 participants, most of whom had attended the camp for at least one week. a. Using only his own data, what inference might Mitchell make about the median number of free throws made by the 65 participants? Using only his own data, Mitchell might infer that the median number of free throws made by the 65 participants is 9, the median of his sample of first-time campers. b.
Using only her own data, Lydia might infer that the median number of free throws made by the 65 participants is 12, the median of her sample of participants who had attended the camp for at least one week. c. Who made a valid inference? Explain. Neither inference is valid on its own. Mitchell's data came mostly from first-time campers and Lydia's came mostly from experienced campers, so each sample represents only one part of the 65 participants instead of a random sample of all of them. Assessment Practice
Question 19. June wants to know how many times most people have their hair cut each year. She asks two of her friends from Redville and Greenburg, respectively, to conduct a random survey. The results of the surveys are shown below. Redville: 50 people surveyed Median number of haircuts: 7 Mean number of haircuts: 7.3 Greenburg: 60 people surveyed Median number of haircuts: 6.5 Mean number of haircuts: 7.6 June infers that most people get 7 haircuts per year. Based on the survey results, is this a valid inference? Explain. Yes, it is a valid inference. The medians and means summarize the survey data, and 7 lies within the span of these measures, from 6.5 to 7.6, so inferring that most people get about 7 haircuts per year is reasonable.
Question 1. Vocabulary Krista says that her chickens lay the most eggs of any chickens in the county. To prove her claim, she could survey chicken farms to see how many eggs each of their chickens laid that day. In this scenario, what is the population and what is a possible representative sample? The population is all of the chickens on farms in the county, since a population is an entire group of objects (people, animals, plants) from which data can be collected. A possible representative sample, one with the same characteristics as the population, would be the chickens from several randomly selected farms across the county.
Question 2. Marcy wants to know which type of book is most commonly checked out by visitors of her local public library. She surveys people in the children’s reading room between 1:00 and 2:00 on Saturday afternoon. Select all the statements about Marcy’s survey that are true.
Lesson 6-1 Marcy’s sample is not representative because not all of the library’s visitors go to the children’s reading room. The population of Marcy’s study consists of all visitors of the public library. For Problems 3-5, use the data from the table.
Question 3. Michael surveyed a random sample of students in his school about the number of sports they play. There are 300 students in Michael’s school. Use the results of the survey to estimate the number of students in Michael’s school who play exactly one sport. Explain your answer. Lesson 6-2 About 45 students play exactly one sport, found by multiplying 300 by the fraction of surveyed students who play exactly one sport.
Question 4. What inference can you draw about the number of students who play more than one sport? Lesson 6-2 About 160 students play more than one sport.
Question 5. Avi says that Michael’s sample was not random because he did not survey students from other schools. Is Avi’s statement correct? Explain. Lesson 6-1 No, Avi’s statement is not correct. Michael’s survey is about the students at his school, so the population is his school's students; students at other schools are not part of the population and do not need to be surveyed. How well did you do on the mid-topic checkpoint? Fill in the stars. Topic 6 MID-TOPIC PERFORMANCE TASK Sunil is the ticket manager at a local soccer field. He wants to conduct a survey to determine how many games most spectators attend during the soccer season. What is the population for Sunil’s survey? Give an example of a way that Sunil could collect a representative sample of this population. The population for Sunil’s survey is all of the spectators who attend games at the field during the soccer season. He could collect a representative sample by randomly selecting spectators to survey at several different games. Sunil conducts the survey and obtains the results shown in the table below. What can Sunil infer from the results of the survey? From the survey results, Sunil can infer that most spectators attend only 1 or 2 games during the season. Suppose 2,400 spectators attend at least one game this soccer season. Use the survey data to estimate the number of spectators who attended 5 or more games this season.
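The estimate works like the earlier proportion problems: scale the sample's share up to the 2,400 spectators. The survey table is not reproduced in the text, so the sample counts below are placeholder assumptions:

```python
# Placeholder survey results (the actual table is not shown here):
# suppose 150 spectators were surveyed and 15 attended 5 or more games.
surveyed = 150
attended_5_plus = 15

population = 2400  # spectators who attended at least one game
estimate = population * attended_5_plus / surveyed

print(estimate)  # 240.0
```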
Explain how you made your estimate. Multiply 2,400 by the fraction of surveyed spectators who attended 5 or more games; that product estimates the number of spectators who attended 5 or more games this season. Lesson 6.3 Make Comparative Inferences About Populations Explore It! Ella surveys a random sample of 20 seventh graders about the number of siblings they have. The table shows the results of her survey. A. Model with Math Draw a model to show how Ella can best display her data. The number of siblings for each student can best be displayed on a dot plot. B. Explain why you chose that model. Dot plots and box plots are both models for displaying data; a dot plot was chosen here because it is an easy and effective way to show this data: the number of siblings takes only a few whole-number values, so each value's frequency is easy to see. Focus on math practices Reasoning Using your data display, what can you infer about the number of siblings that most seventh graders have? Explain. Most seventh graders have 1 sibling. As the dot plot shows, 8 out of the 20 students have one sibling, more than any other number. Essential Question How can data displays be used to compare populations? Answer: Data can be displayed using a dot plot, box plot, or histogram to compare populations and draw valid conclusions. Try It! Kono gathers the heights of a random sample of sixth graders and seventh graders and displays the data in box plots. What can he say about the two data sets? The median of the 7th grade sample is greater than the median of the 6th grade sample, and the 7th grade sample has greater variability. By comparing the two box plots, he can conclude that the 7th grade heights are generally greater and more spread out. Convince Me! How can you visually compare data from two samples that are displayed in box plots? Guidelines for comparing box plots from two samples: 1. Compare the respective medians, to compare location. 2. Compare the interquartile ranges (that is, the box lengths), to compare dispersion. 3. Look at the overall spread as shown by the adjacent values. 4.
Look for signs of skewness. 5. Look for potential outliers. Try It! A local recreation center offers a drop-in exercise class in the morning and in the evening. The attendance data for each class over the first month is shown in the box plots at the right. What can you infer about the class attendance? The line for the median of the evening attendance data set is to the right of the line for the median of the morning attendance data set, so the median evening attendance is greater. The box for the evening attendance data set is also longer than the box for the morning data set, so the evening attendance data are more spread out and show greater variability. You can use data displays, such as box plots, to make informal comparative inferences about two populations. You can compare the shapes of the data displays or the measures of center and variability. Do You Understand? Question 1. Essential Question How can data displays be used to compare populations? Data displays such as box plots let you compare two data sets, Set A and Set B, side by side by comparing their medians and their measures of variability. Question 2. Generalize What measures of variability are used when comparing box plots? What do these measures tell you? The range and the interquartile range (the box length) are the measures of variability used when comparing box plots. In the example, both data sets have the same median of 5, but Data Set B has a variability of 9, which is greater than Data Set A's 7; these measures tell you which data set is more spread out. Question 3. Make Sense and Persevere Two data sets both have a median value of 12.5. Data Set A has an interquartile range of 4 and Data Set B has an interquartile range of 2. How do the box plots for the two data sets compare? The median lines of both box plots are at 12.5, but the box for Data Set A is twice as long as the box for Data Set B, so Data Set A's data are more spread out around the median. Do You Know How? The box plots describe the heights of flowers selected randomly from two gardens. Use the box plots to answer 4 and 5. Question 4. Find the median of each sample.
Garden Y median = 6 inches Garden Z median = 4 inches Explanation: In the box plots, the median is the line inside each box. Question 5. Make a comparative inference about the flowers in the two gardens. The flowers in Garden Y are generally taller than the flowers in Garden Z, because the median for Garden Y is to the right of the median for Garden Z. Garden Y's data are also more spread out, showing greater variability than Garden Z's. Practice & Problem Solving Leveled Practice For 6-8, complete each statement. Question 6. Water boils at lower temperatures as elevation increases. Rob and Ann live in different cities. They both boil the same amount of water in the same size pan and repeat the experiment the same number of times. Each records the water temperature just as the water starts to boil. They use box plots to display their data. Compare the medians of the box plots. The median of Rob’s data is This means Rob is at Question 7. Liz is analyzing two data sets that compare the amount of food two animals eat each day for one month. a. The median of Animal 2’s data is b. Liz can infer that there is c. Liz can infer that Animal Question 8. The box plots show the heights of a sample of two types of trees. The median height of Tree 1 is greater than that of Tree 2, because the median line for Tree 1 is to the right of the median line for Tree 2, so the Tree 1 heights are generally greater. Question 9. Reasoning A family is comparing home prices in towns where they would like to live. The family learns that the median home price in Hometown is equal to the median home price in Plainfield and concludes that the homes in Hometown and Plainfield are similarly priced. What is another statistical measure that the family might consider when deciding where to purchase a home?
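Question 9's point, that equal medians can hide very different markets, can be illustrated with two made-up price lists (all numbers are hypothetical, in thousands of dollars):

```python
from statistics import mean, median

# Hypothetical home prices, in thousands of dollars.
hometown   = [150, 190, 200, 210, 220]
plainfield = [120, 150, 200, 320, 460]

# Same median price in both towns...
print(median(hometown), median(plainfield))  # 200 200

# ...but the means and ranges differ, so the markets are not alike.
print(mean(hometown), mean(plainfield))
print(max(hometown) - min(hometown), max(plainfield) - min(plainfield))  # 70 340
```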
If the medians are the same, then the mean is another statistical measure the family might consider, along with a measure of variability such as the range, which would show which town's prices are more spread out. Mean and median both try to measure the "central tendency" in a data set; the goal of each is to get an idea of a "typical" value. The mean is commonly used, but when a data set contains extreme values the median is often preferred.
Question 10. Higher Order Thinking The box plots show the daily average high temperatures of two cities from January to December. Which city should you live in if you want a greater variability in temperature? City X. The box plot for City X is more spread out than the box plot for City Y, so City X has greater variability in temperature. Assessment Practice
Question 11. Paul compares the high temperatures in City 1 and City 2 for one week. In City 1, the range in temperature is 10°F and the IQR is 5°F. In City 2, the range in temperature is 20°F and the IQR is 5°F. What might you conclude about the weather pattern in each city based on the ranges and interquartile ranges? A. The weather pattern in City 1 is more consistent than the weather pattern in City 2. B. The weather patterns in City 1 and City 2 are equally consistent. C. The weather pattern in City 2 is more consistent than the weather pattern in City 1. D. The range and interquartile range do not provide enough information to make a conclusion. Option A. The IQRs of both cities are the same (5°F), but the range for City 1 (10°F) is less than the range for City 2 (20°F), so City 1's temperatures are more consistent. Lesson 6.4 Make More Comparative Inferences About Populations Explore It! Jackson and his brother Levi watch Jewel Geyser erupt one afternoon. They record the time intervals between eruptions. The dot plot shows their data. Jackson estimates that the average time between eruptions is 8 minutes. Levi estimates that the average time between eruptions is 8\(\frac{1}{2}\) minutes. A.
Construct Arguments Construct an argument to support Jackson’s position. Jackson estimates that the average time between eruptions is 8 minutes. His estimate matches the median of the times shown in the dot plot, so using the median as the measure of center supports Jackson's position. B. Construct Arguments Construct an argument to support Levi’s position. Levi estimates that the average time between eruptions is 8\(\frac{1}{2}\) minutes. His estimate matches the mean of the times shown in the dot plot, so using the mean as the measure of center supports Levi's position. Focus on math practices Reasoning How can you determine the best measure of center to describe a set of data? You can determine the best measure of center for a set of data by finding and comparing the mean and the median. The mean (average) of a data set is found by adding all numbers in the data set and then dividing by the number of values in the set. The median is the middle value when a data set is ordered from least to greatest. The mode is the number that occurs most often in a data set. If the data contain extreme values, the median usually describes the data better; otherwise the mean works well. Essential Question How can dot plots and statistical measures be used to compare populations? By calculating the mean and median of each data set, along with its variability and range. Variability is also referred to as spread, scatter, or dispersion, and the range is the difference between the highest and lowest values. Try It! Quinn also collects data about push-ups. Does it appear that students generally did more push-ups last year or this year? Explain your reasoning. Based on the data, students generally did more push-ups this year; the values for last year are lower overall. Convince Me!
How does the range of these data sets affect the shape of the dot plots? In a dot plot, the range is the difference between the values represented by the farthest-left and farthest-right dots. Here the range is 12 − 3 = 9; a larger range stretches the dots across more values, making the plot look more spread out. Try It! Peter surveyed a random sample of adults and a random sample of teenagers about the number of hours that they exercise in a typical week. He recorded the data in the table below. What comparative inference can Peter make from the data sets? The mean for adults, 4.4 hours, is less than the mean for teenagers, 7.9 hours, so Peter can infer that teenagers exercise more hours in a typical week, on average, than adults. Drawing comparative inferences may involve analyzing the data using mean, median, mean absolute deviation, interquartile range, range, and/or mode. In this lesson students will analyze data in different forms and draw informal comparative inferences about the populations involved. You can use dot plots to make informal comparative inferences about two populations. You can compare the shapes of the data displays or the measures of center and variability. The modes of Data Set B are greater than the modes of Data Set A. The mean of Data Set B is greater than the mean of Data Set A. You can infer that data points are generally greater in Data Set B. The ranges and the MADs of the data sets are similar. You can infer that the variabilities of the two data sets are about the same. Do You Understand? Question 1. Essential Question How can dot plots and statistical measures be used to compare populations? Dot plots of two data sets, Set A and Set B, can be compared by their shapes and by statistical measures such as the median, the mode, and the variability of each set. Question 2. Reasoning How can you make predictions using data from samples from two populations? You can make predictions by using random sampling from the two populations.
We basically take data from a random sample of two populations and make predictions about the whole populations based on that data. Random sample: a sample of a population that is random, in which every element of the population is equally likely to be chosen for the sample.
Question 3. Construct Arguments Two data sets have the same mean but one set has a much larger MAD than the other. Explain why you may want to use the median to compare the data sets rather than the mean.
Both sets have the same mean, but the set with the larger MAD is much more spread out, so the mean alone hides this difference; the median is less affected by extreme values and gives a better comparison.
Do You Know How? For 4 and 5, use the information below. Coach Fiske recorded the number of shots on goal his first-line hockey players made during two weeks of hockey scrimmage.
Question 4. Find the mean number of shots on goal for each week.
Mean in Week 1 is 5; mean in Week 2 is 7.
Question 5.
a. Based on the mean for each week, in which week did his first line take more shots on goal?
b. Based on the comparison of the mean and the range for Week 1 and Week 2, what could the coach infer?
a. Week 2
b. The range in Week 1 is greater than in Week 2, and the mean in Week 1 is less than in Week 2, so Week 2 was better than Week 1, based on the number of shots on goal Coach Fiske recorded during the two weeks of hockey.
Practice & Problem Solving
Question 6. A study is done to compare the fuel efficiency of cars. Cars in Group 1 generally get about 23 miles per gallon. Cars in Group 2 generally get about 44 miles per gallon. Compare the groups by their means. Then make an inference and give a reason the inference might be true.
The mean for Group 2, about 44 miles per gallon, is greater than the mean for Group 1, about 23 miles per gallon. The cars in Group 2 are generally more fuel efficient than the cars in Group 1.
Question 7. The dot plot shows a random sample of vertical leap heights of basketball players in two different basketball camps. Compare the mean values of the dot plots. Round to the nearest tenth.
The mean values tell you that participants in Camp 1 jump higher in general. Camp 1 mean = 28 in; Camp 2 mean = 24 in.
Question 8.
A researcher divides some marbles into two data sets. In Data Set 1, the mean mass of the marbles is 13.6 grams. In Data Set 2, the mean mass of the marbles is 14 grams. The MAD of both data sets is 2. What can you infer about the two sets of marbles?
Comparing the mean masses of both sets, the mass of the marbles in Set 2 is slightly higher than that of the marbles in Set 1.
Question 9. Generalize Brianna asks 8 classmates how many pencils and erasers they carry in their bags. The mean number of pencils is 11. The mean number of erasers is 4. The MAD of both data sets is 2. What inference could Brianna make using this data?
In total they carry 88 pencils and 32 erasers in their bags.
Pencils = P/8 = 11, so P = 11 × 8 = 88
Erasers = E/8 = 4, so E = 8 × 4 = 32
Question 10. Higher Order Thinking Two machines in a factory are supposed to work at the same speed to pass inspection. The number of items built by each machine on five different days is recorded in the table. The inspector believes that the machines should not pass inspection because the mean speed of Machine X is much faster than the mean speed of Machine Y.
a. Which measures of center and variability should be used to compare the performances of each machine? Explain.
b. Is the inspector correct? Explain.
a. The median and IQR should be used.
b. Yes, the inspector is correct. The mean speed of Machine X is faster than that of Machine Y; they should run at the same speed, but they vary by an average of 2.2.
Assessment Practice
Question 11. The dot plots show the weights of a random sample of fish from two lakes. Which comparative inference about the fish in the two lakes is most likely correct?
A. There is about the same variation in weight between small and large fish in both lakes.
No. The weights in Round Lake vary from 15 to 21 ounces, a difference of 6 ounces; in South Lake the weights vary from 11 to 21 ounces, a difference of 10 ounces.
B.
There is less variation in weight between small and large fish in South Lake than between small and large fish in Round Lake.
No. The variation in South Lake is greater than in Round Lake; the ranges are 10 ounces and 6 ounces respectively.
C. There is less variation in weight between small and large fish in Round Lake than between small and large fish in South Lake.
Yes: 6 ounces in Round Lake. The variation in South Lake is greater than in Round Lake; the ranges are 10 ounces and 6 ounces respectively.
D. There is greater variability in the weights of fish in Round Lake.
No. There is less variability in the weights of the fish in Round Lake.
3-Act Mathematical Modeling: Raising Money
ACT 1
Question 1. After watching the video, what is the first question that comes to mind?
Question 2. Write the Main Question you will answer.
Question 3. Make a prediction to answer this Main Question.
Question 4. Construct Arguments Explain how you arrived at your prediction.
ACT 2
Question 5. What information in this situation would be helpful to know? How would you use that information?
Question 6. Use Appropriate Tools What tools can you use to solve the problem? Explain how you would use them strategically.
Question 7. Model with Math Represent the situation using mathematics. Use your representation to answer the Main Question.
Question 8. What is your answer to the Main Question? Does it differ from your prediction? Explain.
ACT 3
Question 9. Write the answer you saw in the video.
Question 10. Reasoning Does your answer match the answer in the video? If not, what are some reasons that would explain the difference?
Question 11. Make Sense and Persevere Would you change your model now that you know the answer? Explain.
ACT 3
Question 12. Model with Math Explain how you used a mathematical model to represent the situation. How did the model help you answer the Main Question?
Question 13.
Critique Reasoning Explain why you agree or disagree with each of the arguments in Act 2.
Question 14. Use Appropriate Tools You and your friends are starting a new school club. Design a sampling method that is easy to use to help you estimate how many people will join your club. What tools will you use?
TOPIC 6 REVIEW
Topic Essential Question How can sampling be used to draw inferences about one or more populations?
Vocabulary Review Complete each definition, and then provide an example of each vocabulary word used.
1. A population is an entire group of objects from which data can be collected.
2. Making a conclusion by interpreting data is called making an inference.
3. A valid inference is one that is true about a population based on a representative sample.
4. A(n) representative sample accurately reflects the characteristics of an entire population.
Use Vocabulary in Writing Do adults or teenagers brush their teeth more? Nelson surveys two groups: 50 seventh grade students from his school and 50 students at a nearby college of dentistry. Use vocabulary words to explain whether Nelson can draw valid conclusions.
Concepts and Skills Review
LESSON 6-1 Populations and Samples
Quick Review A population is an entire group of people, items, or events. Most populations must be reduced to a smaller group, or sample, before surveying. A representative sample accurately reflects the characteristics of the population. In a random sample, each member of the population has an equal chance of being included.
Question 1. Anthony opened a new store and wants to conduct a survey to determine the best store hours. Which is the best representative sample?
A. A group of randomly selected people who come to the store in one week
B. A group of randomly selected people who visit his website on one night
C. Every person he meets at his health club one night
D. The first 20 people who walk into his store one day
Option A
Question 2.
Becky wants to know if she should sell cranberry muffins at her bakery. She asks every customer who buys blueberry muffins if they would buy cranberry muffins. Is this a representative sample?
No. A representative sample accurately reflects the characteristics of the population, but those who like blueberry muffins may not like cranberry muffins; it is a matter of their preference.
Question 3. Simon wants to find out which shop has the best frozen fruit drink in town. How could Simon conduct a survey with a sample that is representative of the population?
A representative sample accurately reflects the characteristics of the population. Simon could select a random sample of people in town and survey them to find out which shop has the best frozen fruit drink in town.
LESSON 6-2 Draw Inferences from Data
Quick Review An inference is a conclusion about a population based on data from a sample or samples. Patterns or trends in data from representative samples can be used to make valid inferences. Estimates can be made about the population based on the sample data.
Question 1. Refer to the example. Polly surveys two more samples. Do the results from these samples support the inference made from the example?
In all three samples Polly collected, crafts is the hobby chosen by the fewest students, which means that Polly made a correct inference.
LESSONS 6-3 AND 6-4 Make Comparative Inferences About Populations | Make More Comparative Inferences About Populations
Quick Review Box plots and dot plots are common ways to display data gathered from samples of populations. Using these data displays makes it easier to visually compare sets of data and make inferences. Statistical measures such as mean, median, mode, MAD, interquartile range (IQR), and range can also be used to draw inferences when comparing data from samples of two populations.
Question 1. The two data sets show the number of days that team members trained before a 5K race.
a.
What inference can you draw by comparing the medians?
Answer: Team B’s median is higher than Team A’s median, and the range of Team A is higher than that of Team B for the number of days that team members trained before a 5K race.
b. What inference can you draw by comparing the interquartile ranges?
Team B’s IQR lies to the right of Team A’s IQR in the box plot above of days that team members trained before a 5K race. The IQR of Team A is 20 and the IQR of Team B is 24.
Question 2. The dot plots show how long it took students in Mr. Chauncey’s two science classes to finish their science homework last night. Find the means to make an inference about the data.
i) The mean for first period is 38.75 minutes.
ii) The mean for second period is 35 minutes.
Mean of first period = (20×2 + 25×2 + 30×4 + 35×3 + 45×2 + 50×3 + 55×4)/20 = 775/20 = 38.75
Mean of second period = (15×1 + 20×2 + 25×4 + 30×4 + 35×2 + 45×3 + 50×1 + 55×2 + 60×1)/20 = 700/20 = 35
Second period’s homework took less time than first period’s science homework.
TOPIC 6 Fluency Practice
Riddle Rearranging Find each percent change or percent error. Round to the nearest whole percent as needed. Then arrange the answers in order from least to greatest. The letters will spell out the answer to the riddle.
A young tree is 16 inches tall. One year later, it is 20 inches tall. What is the percent increase in height?
Answer: 25%. The tree grew 4 inches; we need to find what percent of the starting height (16 inches) that equals: 4/16 = 25%.
A ship weighs 7 tons with no cargo. With cargo, it weighs 10.5 tons. What is the percent increase in the weight?
Answer: 50%. The ship’s weight changed by 3.5 tons. Divide 3.5 by the weight of the ship with no cargo to calculate the percent increase in the ship’s weight.
The balance of an account is $500 in April. In May it is $440. What is the percent decrease in the balance?
Answer: 12%. The change in the balance is 60 out of 500. Divide the two values to calculate the percent decrease in the balance.
Ben thought an assignment would take 20 minutes to complete. It took 35 minutes.
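The mean calculations for the two science classes can be verified programmatically. This short Python sketch rebuilds each class's data from the value-times-frequency products used in the worked arithmetic:

```python
from statistics import mean

def expand(freq):
    """Turn {value: count} dot-plot data into a flat list of observations."""
    return [v for v, n in freq.items() for _ in range(n)]

# Frequencies taken from the worked arithmetic in the text (minutes: students).
first_period = expand({20: 2, 25: 2, 30: 4, 35: 3, 45: 2, 50: 3, 55: 4})
second_period = expand({15: 1, 20: 2, 25: 4, 30: 4, 35: 2, 45: 3, 50: 1, 55: 2, 60: 1})

print(mean(first_period))   # 38.75
print(mean(second_period))  # 35
```

The same `expand` helper works for any dot plot: record each dot column as a value-count pair, flatten, then apply whichever statistic you need.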
What is the percent error in his estimate of the time?
Answer: 42.86%. Divide the absolute value of the error (15) by the actual time it took him to complete the assignment (35).
Natalie has $250 in savings. At the end of 6 months, she has $450 in savings. What is the percent increase in the amount of her savings?
Answer: 80%. The absolute increase of the money in Natalie’s bank account is $200, and she started the period with $250 in her account. Divide the two values to calculate the percent increase of the money in her account.
The water level of a lake is 22 feet. It falls to 18 feet during one month. What is the percent decrease in the water level?
Answer: 18%. Explanation: The water level fell 4 feet. Divide that by the water level before the decrease to calculate the percent decrease of the water level.
Shamar has 215 photos on his cell phone. He deletes some so that only 129 photos remain. What is the percent decrease in the number of photos?
Answer: 40%. Divide the number of pictures Shamar deleted (215 − 129 = 86) by the number of pictures Shamar had on his phone before he started deleting them (215).
Lita estimates she will read 24 books during the summer. She actually reads 9 books. What is the percent error of her estimate?
Answer: 167%. Divide the absolute error Lita made (24 − 9 = 15) by the actual number of books she read (9) to calculate the percent error she made.
Camden estimates his backpack weighs 9 pounds with his books. It actually weighs 12 pounds. What is the percent error of his estimate?
Answer: 25%. The absolute error Camden made is 3. Divide that by the actual weight of the backpack (12) to calculate the percent error Camden made.
Answer: RIVER BANK
Explanation: Arrange the values from smallest to largest; the letters put together give the solution to the riddle.
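All of the percent-change and percent-error answers above follow the same two formulas, sketched here in Python:

```python
def percent_change(old, new):
    """Percent increase or decrease, relative to the starting value."""
    return abs(new - old) / old * 100

def percent_error(estimate, actual):
    """Percent error, relative to the actual value."""
    return abs(estimate - actual) / actual * 100

print(round(percent_change(16, 20)))    # 25  (tree height)
print(round(percent_change(500, 440)))  # 12  (account balance)
print(round(percent_error(20, 35)))     # 43  (homework time estimate, 42.86 unrounded)
print(round(percent_error(24, 9)))      # 167 (books read)
```

The key difference is the denominator: percent change divides by the original value, while percent error divides by the actual value, never by the estimate.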
In order to estimate the mean 30-year fixed mortgage rate, we have that \(\sigma = 0.6\).
To determine the confidence interval we use the formula
\(\bar{x} \pm Z_{\alpha/2}\dfrac{\sigma}{\sqrt{n}}\)  (1)
At \(\alpha = 0.10\), \(Z_{\alpha/2} = 1.645\); using equation (1),
\(\therefore\) the 90% C.I. is \((6.75,\ 7.15)\).
At \(\alpha = 0.01\), \(Z_{\alpha/2} = 2.576\); using equation (1),
\(\therefore\) the 99% C.I. is \((6.64,\ 7.26)\).
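For reference, the critical values quoted above can be reproduced with Python's standard-library `statistics.NormalDist`. This is an illustrative sketch, not part of the original solution:

```python
from statistics import NormalDist

def z_critical(confidence):
    """Two-sided critical value z_{alpha/2} for a given confidence level."""
    alpha = 1 - confidence
    return NormalDist().inv_cdf(1 - alpha / 2)

print(round(z_critical(0.90), 3))  # 1.645
print(round(z_critical(0.99), 3))  # 2.576
```

Given a sample mean, sigma, and n, the interval itself is then `(xbar - z * sigma / n**0.5, xbar + z * sigma / n**0.5)`.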
All Methods  Instance Methods  Concrete Methods
Method summary (modifier and type information omitted in this extract):
- Returns the negative support of the high-pass analysis filter.
- Returns the positive support of the high-pass analysis filter.
- Returns the negative support of the low-pass analysis filter.
- Returns the positive support of the low-pass analysis filter.
- Returns the implementation type of this filter, as defined in this class, such as WT_FILTER_INT_LIFT, WT_FILTER_FLOAT_LIFT, WT_FILTER_FLOAT_CONVOL.
- Returns the negative support of the high-pass synthesis filter.
- Returns the positive support of the high-pass synthesis filter.
- Returns the negative support of the low-pass synthesis filter.
- Returns the positive support of the low-pass synthesis filter.
- Returns the reversibility of the filter.
- isSameAsFullWT(int tailOvrlp, int headOvrlp, int inLen): Returns true if the wavelet filter computes or uses the same "inner" subband coefficients as the full frame wavelet transform, and false otherwise.
- synthetize_hpf(float[] lowSig, int lowOff, int lowLen, int lowStep, float[] highSig, int highOff, int highLen, int highStep, float[] outSig, int outOff, int outStep): An implementation of the synthetize_hpf() method that works on float data, for the inverse 9x7 wavelet transform using the lifting scheme.
- synthetize_lpf(float[] lowSig, int lowOff, int lowLen, int lowStep, float[] highSig, int highOff, int highLen, int highStep, float[] outSig, int outOff, int outStep): An implementation of the synthetize_lpf() method that works on float data, for the inverse 9x7 wavelet transform using the lifting scheme.
- Returns a string of information about the synthesis wavelet filter.
Suppose I tell you that only 1% of people with COVID have a body temperature less than 97°. If you take someone’s temperature and measure less than 97°, what is the probability that they have COVID? If your answer is 1% you have committed the conditional probability fallacy and you have essentially done what researchers do whenever they use p-values. In reality, these inverse probabilities (i.e., probability of having COVID if you have low temperature and probability of low temperature if you have COVID) are not the same. in practically every situation that people use statistical significance, they commit the conditional probability fallacy Now if we gather some new data (D), what needs to be examined is the probability of the null hypothesis given that we observed this data, not the inverse! That is, Pr(H0|D) should be compared with a 1% threshold, not Pr(D|H0). In our current methods of statistical testing, we use the latter as a proxy for the former. By using p-values we effectively act as though we commit the conditional probability fallacy. The two values that are conflated are Pr(H0|p<α) and Pr(p<α|H0). We conflate the chances of observing a particular outcome under a hypothesis with the chances of that hypothesis being true given that we observed that particular outcome. Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis or about the probability that random chance produced the observed data. The p-value is neither. What alternatives do we have to p-values? Some suggest using confidence intervals to estimate effect sizes. Confidence intervals may have some advantages but they still suffer from the same fallacies (as nicely explained in Morey et al. 2016). Another alternative is to use Bayes factors as a measure for evidence. Bayesian model comparison has been around for nearly two decades  but has not gained much traction, for a number of practical reasons. 
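The fallacy can be made concrete with Bayes' theorem. The sketch below uses made-up numbers for everything except the 1% figure from the example above; the assumed prevalence and the rate of low temperature among healthy people are illustrative only:

```python
def posterior(prior, likelihood, false_positive_rate):
    """Bayes' theorem: P(H | E) from P(H), P(E | H), and P(E | not H)."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Assumed numbers for illustration only (not from the original post):
# half the people tested have COVID, 1% of COVID cases have temp < 97 degrees
# (the figure from the example), and 3% of healthy people have temp < 97 degrees.
p = posterior(prior=0.5, likelihood=0.01, false_positive_rate=0.03)
print(p)  # 0.25: Pr(COVID | low temp) is 25%, not 1%
```

The two conditional probabilities differ by a factor of 25 under these assumptions, which is exactly the gap that conflating Pr(D|H0) with Pr(H0|D) hides.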
The bottom line is that there is practically no correct way to use p-values. It does not matter whether you understand what they mean or whether you frame them as a decision procedure rather than a method for inference. If you use p-values you are effectively behaving like someone who confuses conditional probabilities. Science needs a mathematically sound framework for doing statistics. In future posts I will suggest a new simple framework for quantifying evidence. This framework is based on Bayes factors but makes a basic assumption: that every experiment has a probability of error that cannot be objectively determined. From this basic assumption a method of evidence quantification emerges that is highly reminiscent of p-value testing but is 1) mathematically sound and 2) practical. (In contrast to Bayes factors, it produces numbers that are not extremely large or small.)

Gwern links

Think of an archive library as a bookshelf, with some books on it (the separate .o files). Some books may refer you to other books (via unresolved symbols), which may be on the same, or on a different, bookshelf.

The Not Rocket Science Rule Of Software Engineering: automatically maintain a repository of code that always passes all the tests. Time passed, that system aged and (as far as I know) went out of service. I became interested in revision control, especially systems that enforced this Not Rocket Science Rule. Surprisingly, only one seemed to do so automatically (Aegis, written by Peter Miller, another charming no-nonsense Australian who is now, sadly, approaching death).
Fantastic post by Jason Crawford (The Roots of Progress). A major theme of the 19th century was the transition from plant and animal materials to synthetic versions or substitutes, mostly from non-organic sources (ivory, fertilizer, lighting, smelting, shellac).

There are many other biomaterials we once relied on—rubber, silk, leather and furs, straw, beeswax, wood tar, natural inks and dyes—that have been partially or fully replaced by synthetic or artificial substitutes, especially plastics, that can be derived from mineral sources. They had to be replaced, because the natural sources couldn’t keep up with rapidly increasing demand. The only way to ramp up production—the only way to escape the Malthusian trap and sustain an exponentially increasing population while actually improving everyone’s standard of living—was to find new, more abundant sources of raw materials and new, more efficient processes to create the end products we needed. As you can see from some of these examples, this drive to find substitutes was often conscious and deliberate, motivated by an explicit understanding of the looming resource crisis. In short, plant and animal materials had become unsustainable. To my mind, any solution to sustainability that involves reducing consumption or lowering our standard of living is no solution at all. It is giving up and admitting defeat. If running out of a resource means that we have to regress back to earlier technologies, that is a failure—a failure to do what we did in the 19th century and replace unsustainable technologies with new, improved ones that can take humanity to the next level and support orders of magnitude more growth.

free classic literature ebooks

Gravity is not a force

Under general relativity, gravity is not a force. Instead it is a distortion of spacetime. Objects in free-fall move along geodesics (straight lines) in spacetime, as seen in the inertial frame of reference on the right.
When standing on Earth we experience a frame of reference that is accelerating upwards, causing objects in free-fall to move along parabolas, as seen in the accelerating frame of reference on the left.

It is not safe stagnation and risky growth that we must choose between; rather, it is stagnation that is risky and it is growth that leads to safety.

We might be advanced enough to have developed the means for our destruction, but not advanced enough to care sufficiently about safety. But stagnation does not solve the problem: we would simply stagnate at this high level of risk. The risk of an existential catastrophe then looks like an inverted U-shape over time.

There is an analog to this in environmental economics, called the “environmental Kuznets curve.” It was theorized that pollution initially rises as countries develop, but, as people grow richer and begin to value a clean environment more, they will work to reduce pollution again. That theory has arguably been vindicated by the path that Western countries have taken with regard to water and air pollution, for example, over the past century.

Carl Sagan was the one who coined the term “time of perils.” Derek Parfit called it the “hinge of history.”

On the other extreme, humanity is extremely fragile. No matter how high a fraction of our resources we dedicate to safety, we cannot prevent an unrecoverable catastrophe. Perhaps weapons of mass destruction are simply too easy to build, and no amount of even totalitarian safety efforts can prevent some lunatic from eventually causing nuclear annihilation.
We might indeed be living in this world; this would be the model’s version of Bostrom’s “vulnerable world hypothesis,” Hanson’s “Great Filter,” or the “Doomsday Argument.” Perhaps, if we followed this argument to the end, we might reach the counterintuitive conclusion that the most effective thing we can do to reduce the risk of an existential catastrophe is not to invest in safety directly or to try to persuade people to be more long-term oriented, but rather to spend money on alleviating poverty, so more people are well-off enough to care about safety.

It’s been 13 years since Yudkowsky published the Sequences, and 11 years since he wrote “Rationality is Systematized Winning”. So where are all the winners? Immediately after “Systematized Winning” was published, Scott Alexander wrote “Extreme Rationality: It’s Not That Great”, claiming that there is “approximately zero empirical evidence that x-rationality has a large effect on your practical success”. The primary impacts of reading rationalist blogs are that 1) I have been frequently distracted at work, and 2) my conversations have gotten much worse.

Spin networks are states of quantum geometry in a theory of quantum gravity, discovered by Lee Smolin and Carlo Rovelli, which is the conceptual ancestor of the imaginary physics of Schild’s

Cool, but also damning? “Proposed by Michael Spivak in 1965, as an exercise in Calculus”
Cyburg 15: A Revolutionizing Way to Earn Money Via Network Marketing
LA, USA, 26th September 2020, ZEXPRWIRE, Network marketing gives you the opportunity to face your fears, deal with them, overcome them, and bring out the winner that has been living inside you. It is a great opportunity for personal development and confidence building, because a network marketing business requires a network; to build that network you have to speak with different people and deal with different situations. It is also called multi-level marketing. Network marketing is a business model that is very popular among people who are looking for a part-time job. It depends on person-to-person sales of a company’s products.
Advantages of network marketing
• Opportunity of regular income
• Low risk factor
• Earning potential
• Flexibility of working
• You are your own boss
• You don’t have to keep stocks or manage distribution chains
• Residual earning and unending earning possibility
Cutting out the marketing middleman
How it works
The 7 positions in the center of the circle are always filled according to the system. New guests always join in the 8 vacant positions in the outer circle. The positions will be filled by the guests of the 7 people already positioned. You can enter anywhere between positions 08 and 15. As you move forward in the matrix, you keep on earning at every step. As you move towards the inner layer of the matrix, you get one benefit. Your earning starts from the very first step, when your first payout of 5% will be released. When you move to the next position, you get your second payout of 10%. The next step is to reach the center position. When you reach the center of the matrix you will get a third payout of 15%; also, when any guest enters the outer layer, another 20% payout is generated. It’s time to break the matrix: the balance 50% payout will be generated when all positions in the outer layer of the matrix are filled.
And the system will move you to the next level. The new guest always follows the sponsor: if the sponsor is in the left part of the circle, his guests will be positioned in the left part of the circle as per the system conditions. You already get layer 2, and you help your A and B to sponsor their two directs, A and B. When you move next, you get the center layer and the center benefit.
The left positions of the circle
If you are positioned in the left part of the circle, anywhere among 02, 04, or 05, you can invite guests only on the left part of the circle, from position 08 to 11.
The right positions of the circle
If you are positioned in the right part of the circle, anywhere among 03, 06, or 07, you can invite guests only on the right part of the circle, from position 12 to 15.
Positioning example
If you are at the center of the circle, you can invite guests to any vacant position in the outer circle.
If you are at position 2, on the left side of the circle, you can invite guests to any vacant positions on the left side of the circle, from 08 to 11.
If you are at position 3, on the right side of the circle, you can invite guests to any vacant positions on the right side of the circle, from 12 to 15.
If you are at position 4, you can invite your first 2 guests at positions 8 and 9 in the beginning.
If you are at position 5, you can invite your first 2 guests at positions 10 and 11 in the beginning.
If you are at position 6, you can invite your first 2 guests at positions 12 and 13 in the beginning.
How you move forward in a matrix
For example, suppose you are at position 08 and no position is vacant in the outer circle of the matrix, so you cannot find a place for your guests. In this case, you can still sponsor your guests, and the system will generate an advance matrix for you, where you will be positioned at the center of the circle and your sponsored persons will be placed in the advance matrix as A and B.
How your advance matrix IDs will be placed back
IDs will be placed back in the main matrix if the main matrix is completed first.
How placement changes and the matrix breaks
Your joining amount is 0.1. You start at position 8, in layer 1. When you move next, you get the layer 1 benefit of 10%. Now you are in the 2nd layer; when you move next, you get 20% of your joining amount, in addition to the 10% already received from layer 1. Now you sponsor 2 persons; after that, when you move next, you receive 30% of your joining amount, reaching the center layer and the center benefits, on top of the 10% from layer 1 and the 20% from layer 2. You then help A and B to sponsor their two directs, A and B. After that, when you move next, you get 40% of your joining amount plus any 2 IDs in the outer layer, beyond what you received from layers 1, 2, and the center, and you help your team to do the same and sponsor any two directs, A and B, in the outer layer. When you move next you get a further 100%, the matrix is complete, and in total you have received 200%. You help your team to follow the same pattern to support their two directs, A and B, and complete the remaining 6 positions. Additionally, you get level income: 10% at the 1st level, 3% at the 2nd level, and 2% at the 3rd level. Once the matrix is complete, you have received a total of 248%. The matrix then breaks: you get a front spill into the next step, and you also get a back spill. When you break any matrix, you get a new matrix for the next step and also a matrix for the back steps.
How is withdrawal made?
You can make a withdrawal every time your position moves from the outer circle to the inner circle in the matrix, for example, from position 8 in the outer circle to position 4, and so on.
How you earn from every layer
You earn a 5 percent payout and a 10% matrix amount when you move from position 08 to 04; a 10 percent payout and a 20% matrix amount on the next move; a 15 percent payout and a 30% matrix amount when you move from position 02 to the center; a 20 percent payout and a 40% matrix amount when you reach the center of the matrix; and a 50 percent payout and a 100% matrix amount when you completely break the matrix.
How level income works
When you join, you get 10 percent for level 1, 3 percent for level 2, 2 percent for level 3, 1 percent for level 4, 0.5 percent for levels 5 and 6, and after that you will get 0.25 percent for the upcoming levels.
Information contained on this page is provided by an independent third-party content provider. Binary News Network and this Site make no warranties or representations in connection therewith. If you are affiliated with this page and would like it removed please contact [email protected]
Lesson 6 Representing Sequences

6.1: Reading Representations (5 minutes)

The purpose of this warm-up is for students to recall some of the ways functions can be represented, such as tables, graphs, equations, and descriptions. The focus here is on exponential and linear functions, and identifying either the rate of change or growth factor. Students will continue to use this skill later in the unit when they review writing explicit equations for exponential and linear situations.

Student Facing

For each sequence shown, find either the growth factor or rate of change. Be prepared to explain your reasoning.

1. 5, 15, 25, 35, 45, . . .
2. Starting at 10, each new term is \(\frac52\) less than the previous term.
4. \(g(1)=\text-5, g(n)=g(n-1)\boldcdot \text-2\) for \(n\ge2\)
5.

| \(n\) | \(f(n)\) |
|---|---|
| 1 | 0 |
| 2 | 0.1 |
| 3 | 0.2 |
| 4 | 0.3 |
| 5 | 0.4 |

Activity Synthesis

Display the questions for all to see throughout the discussion. For each problem, select students to share how they identified either the growth factor or rate of change. Highlight any comments linking these sequences to linear or exponential functions, such as noting that the rate of change would be the slope of the line that the terms of the sequence lie on when graphed.

6.2: Matching Recursive Definitions (15 minutes)

Optional activity

In this partner activity, students take turns matching a sequence to a definition. As students trade roles explaining their thinking and listening, they have opportunities to explain their reasoning and critique the reasoning of others (MP3). One sequence and one definition do not have matches. Students are tasked with writing the corresponding match. Monitor for students using clear reasoning as they create a recursive definition for the sequence 18, 20, 22, 24 to share during the discussion. Making a spreadsheet available gives students an opportunity to choose appropriate tools strategically (MP5). Arrange students in groups of 2.
Tell students that for each sequence, one partner finds its matching recursive definition and explains why they think it matches. The partner's job is to listen and make sure they agree. If they don't agree, the partners discuss until they come to an agreement. For the next sequence, the students swap roles. If necessary, demonstrate this protocol before students start working. Ensure that students notice that one sequence and one definition do not have matches, and that they are tasked with writing the corresponding match for each.

Conversing: MLR8 Discussion Supports. Use this routine to support small-group discussion as students take turns finding a match. Display the following sentence frames for all to see: "Sequence _____ and definition _____ match because . . ." and "I noticed _____, so I matched . . ." Encourage students to challenge each other when they disagree, and to press for precise mathematical language. This will help students clarify their reasoning about each match. Design Principle(s): Support sense-making; Maximize meta-awareness

Engagement: Provide Access by Recruiting Interest. Leverage choice around perceived challenge. Invite groups of students to find matches for at least 4 of the sequences. Chunking this task into more manageable parts may also support students who benefit from additional processing time. Supports accessibility for: Organization; Attention; Social-emotional skills

Student Facing

Take turns with your partner to match a sequence with a recursive definition. It may help to first figure out if the sequence is arithmetic or geometric.

• For each match that you find, explain to your partner how you know it's a match.
• For each match that your partner finds, listen carefully to their explanation. If you disagree, discuss your thinking and work to reach an agreement.

There is one sequence and one definition that do not have matches. Create their corresponding match.

1. 3, 6, 12, 24
2. 18, 36, 72, 144
3. 3, 8, 13, 18
4. 18, 13, 8, 3
5. 18, 9, 4.5, 2.25
6. 18, 20, 22, 24

• \(G(1)=18, G(n)=\frac12 \boldcdot G(n-1), n\ge2\)
• \(H(1)=3, H(n)=5 \boldcdot H(n-1), n\ge2\)
• \(J(1)=3, J(n)=J(n-1)+5, n\ge2\)
• \(K(1)=18, K(n)=K(n-1)-5, n\ge2\)
• \(L(1)=18, L(n)=2 \boldcdot L(n-1), n\ge2\)
• \(M(1)=3, M(n)=2 \boldcdot M(n-1), n\ge2\)

Anticipated Misconceptions

Some students may not be sure how to begin matching terms in a sequence to a definition. Encourage them to start by picking a definition and calculating the first few terms of the sequence it defines.

Activity Synthesis

Once all groups have completed the matching, ask "How did you decide which definitions to match to sequence 3, 6, 12, 24 and sequence 18, 36, 72, 144 when they both involve doubling?" (They are both geometric with a growth factor of 2, but since they have different first terms you can use those to match the sequences to \(M\) and \(L\).) Next, invite previously identified students to share the recursive definition they created for sequence 18, 20, 22, 24 and their strategy for writing it. If time allows and students need extra practice graphing functions, assign students one function each from the task statement to sketch a graph for. After work time, select students to share their sketches, displaying them for all to see and compare.

6.3: Squares of Squares (15 minutes)

Optional activity

The purpose of this task is for students to write a recursive definition for a sequence that represents a mathematical context and to create other representations of the sequence. Monitor for groups who create their recursive definitions in different ways to share during the whole-class discussion. For example, some students may first create a table showing step number and the associated values while others may draw additional steps. Allow students to use graph paper to sketch their graphs if needed. Making graphing technology available gives students an opportunity to choose appropriate tools strategically (MP5).
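A quick way for the teacher to check the matches in the previous activity (and to see whether a sequence is arithmetic or geometric, as the task suggests) is to compute the first few terms of each definition and compare. This sketch is an aside, not part of the student materials; the step functions below restate the recursive rules from the task:

```python
def terms(first, step, n=4):
    """First n terms of a sequence, given its starting value and a step
    function that maps each term to the next one."""
    out = [first]
    while len(out) < n:
        out.append(step(out[-1]))
    return out

# Definition M (first term 3, doubling) matches sequence 1, while the
# unmatched sequence 18, 20, 22, 24 comes from a first term of 18 with
# 2 added at each step.
```

For example, `terms(3, lambda t: 2 * t)` reproduces 3, 6, 12, 24, and `terms(18, lambda t: t + 2)` reproduces the sequence with no listed definition.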
Display the image for all to see. Ask students to think of at least one thing they notice and at least one thing they wonder. Give students 1 minute of quiet think time, and then ask students to share the things they noticed and wondered. Record and display their responses for all to see. If possible, record the relevant reasoning on or near the image. After all responses have been recorded without commentary or editing, ask students, "Is there anything on this list that you are wondering about now?" Encourage students to respectfully disagree, ask for clarification, or point out contradicting information. If not brought up by students, ask "How would you describe the total number of small squares in Step 3 compared to the total number of small squares in Step 2?" (There are 9, or \(3^2\), more squares in Step 3 than in Step 2.)

Arrange students in groups of 2. Encourage them to check in with their partner frequently as they work through the task.

Action and Expression: Internalize Executive Functions. Provide students with grid or graph paper to organize their work with representations of geometric and arithmetic sequences. Supports accessibility for: Language; Organization

Student Facing

Here is a pattern where the number of small squares increases with each new step.

1. Write a recursive definition for the total number of small squares \(S(n)\) in Step \(n\).
2. Sketch a graph of \(S\) that shows Steps 1 to 7.
3. Is this sequence geometric, arithmetic, or neither? Be prepared to explain how you know.

Student Facing

Are you ready for more?

Start with a circle. If you make 1 cut, you have 2 pieces. If you make 2 cuts, you can have a maximum of 4 pieces. If you make 3 cuts, you can have a maximum of 7 pieces.

1. Draw a picture to show how 3 cuts can give 7 pieces.
2. Find the maximum number of pieces you can get from 4 cuts.
3. From 5 cuts.
4. Can you find a function that gives the maximum number of pieces from \(n\) cuts?
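For checking answers to the extension, the maximum piece counts follow the well-known lazy caterer pattern: cut k can cross each of the k-1 earlier cuts, so it passes through at most k regions and adds at most k new pieces, giving the recurrence P(0) = 1, P(n) = P(n-1) + n, which closes to P(n) = (n^2 + n + 2)/2. This sketch is a teacher aside, not part of the curriculum materials:

```python
def max_pieces(n):
    """Maximum pieces of a circle after n straight cuts (lazy caterer)."""
    pieces = 1
    for k in range(1, n + 1):
        pieces += k  # cut k crosses the k-1 earlier cuts, adding k pieces
    return pieces

# closed form for comparison: (n*n + n + 2) // 2
```

This reproduces the counts given in the task: 1 cut gives 2 pieces, 2 cuts give 4, and 3 cuts give 7.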
Anticipated Misconceptions

Students may not be sure where to begin with the graph since no axes are provided in the task statement. Encourage these students to first figure out what values they need to plot before drawing, scaling, and labeling their axes.

Activity Synthesis

The goal of this discussion is for students to share how they reasoned about a recursive definition for \(S\) and how they created their graph for Steps 1 to 7. Invite previously identified groups to share how they created their definitions, making sure to display for all to see any additional representations they used to help their thinking. After these students have shared, ask "Did anyone use a different strategy for writing their definition?" and invite any new students to share their thinking. Conclude the discussion by reviewing graphing strategies. Select 2–4 students to share how they created their sketch of \(S\). In particular, focus on how the scale on each axis was chosen.

Lesson Synthesis

Display the poster created earlier for all to see. Arrange students in groups of 3–4 and assign each group one of the example sequences. Tell groups to create a graph and visual pattern showing the first five values of their sequence. After work time, invite groups to share their representations. Add these to the poster for future reference.

6.4: Cool-down - Represent this Sequence (5 minutes)

Student Facing

Here are some ways to represent a sequence. Each representation gives a different view of the same sequence.

• A list of terms. Here's a list of terms for an arithmetic sequence \(D\): 4, 7, 10, 13, 16, . . . We can show this sequence is arithmetic by noting that the difference between consecutive terms is always 3, so we can say this sequence has a rate of change of 3.
• A table. A table lists the term number \(n\) and value for each term \(D(n)\). It can sometimes be easier to detect or analyze patterns when using a table.
| \(n\) | \(D(n)\) |
|---|---|
| 1 | 4 |
| 2 | 7 |
| 3 | 10 |
| 4 | 13 |
| 5 | 16 |

• A graph. The graph of a sequence is a set of points, because a sequence is a function whose domain is a part of the integers. For an arithmetic sequence, these points lie on a line, since arithmetic sequences are a type of linear function.
• An equation. We can define sequences recursively using function notation to make an equation. For the sequence 4, 7, 10, 13, 16, . . ., the starting term is 4 and the rate of change is 3, so \(D(1) = 4, D(n) = D(n-1) + 3\) for \(n\ge2\). This type of definition tells us how to find any term if we know the previous term. It is not as helpful in calculating terms that are far away, like \(D(100)\). Some sequences do not have recursive definitions, but geometric and arithmetic sequences always do.
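The trade-off noted above can be made concrete: walking the recursion out to \(D(100)\) takes 99 additions, while the explicit form \(D(n) = 4 + 3(n-1)\), which follows directly from the starting term 4 and rate of change 3, is immediate. A small sketch comparing the two (the explicit form is a standard consequence, not stated in the lesson itself):

```python
def D_recursive(n):
    """Compute D(n) by repeatedly applying D(n) = D(n-1) + 3."""
    value = 4                # D(1) = 4
    for _ in range(n - 1):
        value += 3
    return value

def D_explicit(n):
    """Closed form: start at 4 and take n-1 steps of size 3."""
    return 4 + 3 * (n - 1)
```

Both functions agree on every term; only the amount of work differs.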
Simplifying the Expression: (a-b)(a+b)+b(a+b)-a²

This article will guide you through the process of simplifying the algebraic expression: (a-b)(a+b)+b(a+b)-a².

Understanding the Key Concepts

Before we dive into the simplification, let's understand the fundamental algebraic concepts involved:

• Distributive Property: This property states that multiplying a sum by a number is the same as multiplying each addend by the number and then adding the results. In symbols: a(b + c) = ab + ac.
• Difference of Squares: This pattern describes the factorization of the difference of two squared terms: a² - b² = (a + b)(a - b).

Simplifying the Expression

1. Expand the first product using the distributive property: (a-b)(a+b) + b(a+b) - a² = a(a+b) - b(a+b) + b(a+b) - a²
2. Distribute and list all the terms: a² + ab - ab - b² + ab + b² - a²
3. Notice the terms ab and -ab cancel out: a² - b² + ab + b² - a²
4. The terms a² and -a² also cancel out: -b² + ab + b²
5. Finally, the terms -b² and b² cancel out, leaving: ab

The Simplified Expression

Therefore, the simplified form of the expression (a-b)(a+b)+b(a+b)-a² is ab.

By applying the distributive property and recognizing the difference of squares pattern, we successfully simplified the expression. This process highlights the importance of understanding algebraic concepts and techniques to manipulate expressions efficiently.
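Because the simplification claims an identity, it can be spot-checked numerically: the original expression and ab must agree for every choice of a and b. A minimal sketch:

```python
def original(a, b):
    # the unsimplified expression
    return (a - b) * (a + b) + b * (a + b) - a**2

# spot-check the identity on a grid of integer values
for a in range(-5, 6):
    for b in range(-5, 6):
        assert original(a, b) == a * b
```

A numeric check like this cannot prove the identity, but a single disagreement would immediately expose an algebra mistake.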
The Dot Product Definition 11.3.2. Dot Product. 1. Let \(\vec u = \la u_1,u_2\ra\) and \(\vec v = \la v_1,v_2\ra\) in \(\mathbb{R}^2\text{.}\) The dot product of \(\vec u\) and \(\vec v\text{,}\) denoted \(\dotp uv\text{,}\) is \begin{equation*} \dotp uv = u_1v_1+u_2v_2\text{.} \end{equation*} 2. Let \(\vec u = \la u_1,u_2,u_3\ra\) and \(\vec v = \la v_1,v_2,v_3\ra\) in \(\mathbb{R}^3\text{.}\) The dot product of \(\vec u\) and \(\vec v\text{,}\) denoted \(\dotp uv\text{,}\) is \begin{equation*} \dotp uv = u_1v_1+u_2v_2+u_3v_3\text{.} \end{equation*}
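Both definitions are the same componentwise recipe, so a single implementation covers the \(\mathbb{R}^2\) and \(\mathbb{R}^3\) cases (and any dimension). A sketch:

```python
def dot(u, v):
    """Dot product of two vectors given as equal-length sequences of numbers."""
    if len(u) != len(v):
        raise ValueError("vectors must have the same length")
    return sum(ui * vi for ui, vi in zip(u, v))
```

For instance, `dot([1, 2], [3, 4])` computes 1·3 + 2·4 = 11, matching the first definition.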
SPL Programming - 8.4 [Data table] Loop functions
mars RaqForum

Since record sequences (a table sequence is also a record sequence) can be regarded as sequences, we should also be able to use loop functions on these objects. We have used new() and derive(). Let's try the loop functions we learned before, continuing with the table sequence of 100 records created with new(). Calculate the average height, maximum weight and minimum BMI of these people:

|   | A | B |
|---|---|---|
| 1 | =100.new(string(~):name,if(rand()<0.5,“Male”,“Female”):sex,50+rand(50):weight,1.5+rand(40)/100:height) | |
| 2 | =A1.(height).avg() | =A1.avg(height) |
| 3 | =A1.(weight).max() | =A1.max(weight) |
| 4 | =A1.min(weight/height/height) | |

Taking a record sequence as a sequence, the A.(x) operation can be performed normally. It is often used to get the sequence composed of the values of a field. Structured data has multiple fields, so it is easy to form expressions with business significance; therefore, aggregation functions on a record sequence are often computed directly from an expression (B2, B3 and A4 here). Selection functions on structured data are also more business-meaningful than on conventional data. Find the person with the largest weight and the person with the smallest BMI:

|   | A | B |
|---|---|---|
| 1 | … | |
| 2 | =A1.maxp(weight) | =A1.maxp@a(weight) |
| 3 | =A1.minp(weight/height/height) | |

A2 only finds the first such record and returns it, while B2 uses @a to find all the records with the maximum weight. This is often what we are more concerned about: the record corresponding to the maximum or minimum value, rather than the value itself.
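For readers more familiar with Python, the same aggregations can be imitated over a list of dictionaries standing in for the table sequence. This is an analogue of the SPL above, not SPL itself, and the random generation only mimics the shape of the data:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
people = [
    {"name": str(i),
     "sex": "Male" if random.random() < 0.5 else "Female",
     "weight": 50 + random.random() * 50,        # like 50+rand(50)
     "height": 1.5 + random.random() * 0.40}     # like 1.5+rand(40)/100
    for i in range(1, 101)
]

avg_height = sum(p["height"] for p in people) / len(people)    # A1.avg(height)
max_weight = max(p["weight"] for p in people)                  # A1.max(weight)
min_bmi = min(p["weight"] / p["height"] ** 2 for p in people)  # A1.min(weight/height/height)

# the record realizing the maximum, like A1.maxp(weight)
heaviest = max(people, key=lambda p: p["weight"])
```

Note how the maxp analogue returns the whole record, not just the value, which mirrors the distinction the text draws between max and maxp.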
The records selected by conditions can also take part in various set operations, as well as further aggregation and selection:

|   | A | B |
|---|---|---|
| 1 | … | |
| 2 | =A1.select(height>=1.7) | =A1.select(sex==“Female”) |
| 3 | =A2^B2 | =A1.select(height>=1.7 && sex==“Female”) |
| 4 | =A2\B2 | =A1.select(height>=1.7 && sex!=“Female”) |
| 5 | =A2.maxp@a(weight) | =B2.minp(weight/height/height) |
| 6 | =A3.avg(height) | =A4.max(weight) |

A2 selects the persons whose height is at least 1.7 and B2 selects the females. A3 calculates the intersection of the two; B3 uses a logical operation to produce the same result as A3, and similarly for A4 and B4. A5 and B5 perform further selection on the selected record sequences, and A6 and B6 continue with aggregation on them.

Sorting is also a common operation:

|   | A | B |
|---|---|---|
| 1 | … | |
| 2 | =A1.sort(height) | |
| 3 | =A1.select(sex==“Female”) | =A3.sort(-weight) |
| 4 | =A1.sort(height,-weight) | |
| 5 | =A1.top(-3,weight) | =A1.top(-3;weight) |
| 6 | =A1.ranks(height) | =A1.derive(A6(#):hrank) |

We know that the default sorting direction of the sort() function is from small to large, and @z reverses it. With structured data, however, there are often multiple sort fields, and their directions may differ, so a single @z cannot express the intent. SPL's method is to write such a parameter as its opposite number (with a negative sign): sorting the negated values from small to large yields the original values in reverse order. Writing a negative sign also works for string and date-time data. Thus A2 sorts by height from low to high, and B3 sorts the females by weight from large to small. A4 sorts by height from small to large first; those with the same height are then sorted by weight from large to small.
Using negative numbers in the top() function is another way to express reverse order; it is equivalent to taking the last few items after sorting (positive numbers take the first few). A5 and B5 calculate the three largest weights and the three individuals with the largest weights, respectively. The result of A5 is a sequence of three numbers, while B5 returns a record sequence of three records. SPL also supports a ranking function: A6's ranks() calculates everyone's ranking by height.

We feel that using Male and Female to represent gender is too long; a single letter, M or F, will make comparison expressions shorter to write. It can be done with the run() function:

|   | A |
|---|---|
| 1 | … |
| 2 | >A1.run(sex=left(sex,1)) |

When using a loop function to assign a value to a field, we can also use the field name directly to represent the field of the current record instead of writing ~.sex.

derive() can be used to append fields and generate a new table sequence. Sometimes we need to append multiple fields, where a later field is calculated from an earlier appended one. For example, we need to add a BMI field, and then add a flag field for obesity according to the value of BMI. With derive() alone, it would be written as follows:

|   | A |
|---|---|
| 1 | … |
| 2 | =A1.derive(weight/height/height:bmi) |
| 3 | =A2.derive(if(bmi>25,“FAT”,“OK”):flag) |

However, the calculation of derive() is very costly. It needs to create new records and a new table sequence and copy the original data, so its performance is poor and, in principle, it should be used as little as possible. A better way is to combine derive() and run() to complete the task:

|   | A |
|---|---|
| 1 | … |
| 2 | =A1.derive(weight/height/height:bmi,:flag) |
| 3 | =A2.run(flag=if(bmi>25,“FAT”,“OK”)) |

Both fields are appended at one time in A2, and run() in A3 then calculates the value of the flag field.
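Continuing the Python analogue (illustrative only; the field names mirror the SPL above), select, sort, top, and the derive-then-assign pattern look like this:

```python
# a small hand-made table so the results are easy to follow
people = [
    {"name": "a", "sex": "Female", "weight": 70, "height": 1.60},
    {"name": "b", "sex": "Male",   "weight": 85, "height": 1.80},
    {"name": "c", "sex": "Female", "weight": 55, "height": 1.72},
    {"name": "d", "sex": "Male",   "weight": 90, "height": 1.75},
]

tall = [p for p in people if p["height"] >= 1.7]          # like select(height>=1.7)
by_weight = sorted(people, key=lambda p: -p["weight"])    # like sort(-weight)
top3 = sorted(people, key=lambda p: p["weight"])[-3:]     # like top(-3;weight)

# the derive()+run() pattern: add both fields, then fill flag in place
for p in people:
    p["bmi"] = p["weight"] / p["height"] ** 2
for p in people:
    p["flag"] = "FAT" if p["bmi"] > 25 else "OK"
```

As in SPL, negating the sort key reverses the order, and filling the derived field in place avoids building a second copy of the table.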
The derive() function is then executed only once, the creation and copying of table sequences is reduced, and performance can improve considerably.

From these examples we can see once again that the multiple fields of structured data readily form calculation expressions with business significance. Operations on field expressions are common in loop functions over record sequences, but relatively uncommon in calculations over single-value sequences.

Combined with the functions for reading and writing Excel files, we can now use program code to merge, filter, add calculated results to, sort, and otherwise process a batch of Excel files.

Similar to the loop functions of a sequence, record-sequence loops can also be nested in multiple layers. For example, to calculate the minimum height difference between the males and females in this group:

|   | A | B |
|---|---|---|
| 1 | … | |
| 2 | =A1.select(sex==“Male”) | =A1.select(sex==“Female”) |
| 3 | =A2.min(B2.min(abs(height-A2.height))) | |

When the inner layer needs to reference a record field of the outer loop, the variable name of the outer loop function is written to represent its current record; it is still unnecessary to write ~.

Finding which pairs of a male and a female realize this minimum height difference is more troublesome:

|   | A | B |
|---|---|---|
| 1 | … | |
| 2 | =A1.select(sex==“Male”) | =A1.select(sex==“Female”) |
| 3 | =A2.conj(B2.([A2.~,~])) | =A3.minp@a(abs(~(1).height-~(2).height)) |

Here we need to use ~ to keep each record, forming male-female pairs, and then find the answer with a selection function. The A3 referenced in B3 is no longer a record sequence, so it would be meaningless to reference field names with ~ omitted.

The members of a record sequence are records with multiple fields, so their information content is relatively rich, and the result of a selection function is also a record sequence composed of these records.
Therefore, the information carried by the positions returned by positioning functions such as pselect, pmax and pmin is seldom needed, but positions are still required for order-related calculations. Now let's discuss the loop functions related to order, regenerating a date-related table sequence:

|   | A |
|---|---|
| 1 | =100.new(date(now())-100+~:dt,rand()*100:price) |
| 2 | =A1.select(day@w(dt)>1 && day@w(dt)<7) |

This randomly generates a price table of a stock from 100 days ago to today; the dt field is the date and the price field is the price. Because there is no trading on weekends, the weekend dates are filtered out after generating the data, and this A2 is used from now on. The day@w() function returns the weekday of a date, but note that it returns 1 for Sunday and 7 for Saturday: for historical reasons, computer systems follow the Western habit of treating Sunday as the first day of the week.

First, we want to calculate the daily increase and the moving average price:

|   | A |
|---|---|
| … | … |
| 3 | =A2.derive(price-price[-1]:gain) |
| 4 | =A2.derive(price[-1:1].avg():mavg) |

In the loop functions of a record sequence, we can append [±i] to a field name to reference the field of an adjacent record, and [a:b] to reference the sequence of field values of adjacent records. These work the same as in the loop functions of a sequence.

Calculate the maximum number of consecutive rising days of this stock:

|   | A |
|---|---|
| … | … |
| 3 | =0 |
| 4 | =A2.(if(price>price[-1],A3+=1,A3=0)).max() |

Following natural thinking, first fill in 0; add 1 for each day the price rises and reset to 0 when it does not. This yields the number of consecutive rising days up to each day, whose maximum is then taken.

Calculate the increase on the day when the stock price is the highest:

|   | A |
|---|---|
| … | … |
| 3 | =A2.pmax(price) |
| 4 | =A2(A3).price-A2.m(A3-1).price |

Here, the positioning function pmax() calculates the sequence number where the maximum value is located, and that sequence number is then used to calculate the increase.
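The running-counter idea for the maximum consecutive rising days, and the pmax-based increase, translate directly into a Python analogue (illustrative only, with a made-up price list):

```python
prices = [30.0, 31.5, 33.0, 32.0, 34.0, 35.5, 36.0, 28.0]

# maximum consecutive rising days: count up on each rise, reset otherwise
best = run = 0
for prev, cur in zip(prices, prices[1:]):
    run = run + 1 if cur > prev else 0
    best = max(best, run)

# increase on the day of the highest price, like pmax(price)
i = max(range(len(prices)), key=lambda k: prices[k])
gain_at_peak = prices[i] - prices[i - 1] if i > 0 else None
```

The pairwise `zip` plays the role of SPL's `price[-1]` adjacent-record reference, and the index `i` plays the role of the position returned by pmax().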
If we need to consider that the highest price may appear on multiple dates, use the @a option:

|   | A |
|---|---|
| … | … |
| 3 | =A2.pmax@a(price) |
| 4 | =A3.new(A2(~).dt,A2(~).price-A2.m(~-1).price:gain) |

First calculate the sequence of record sequence numbers where the maximum value is located, and then use new() to generate a two-field table sequence based on it, taking the date and gain as the fields. In this case, new() can also omit a field name; you can check what the field names of the generated table sequence will be.

Similarly, calculate the average increase on the days when the stock price exceeds 90 yuan:

|   | A |
|---|---|
| … | … |
| 3 | =A2.pselect@a(price>90) |
| 4 | =A3.new(A2(~).dt,A2(~).price-A2.m(~-1).price:gain) |

For this kind of cross-row calculation at certain positions, SPL provides a positioning calculation function, calc(). The previous code can also be written as follows:

|   | A |
|---|---|
| … | … |
| 3 | =A2.pmax(price) |
| 4 | =A2.calc(A3,price-price[-1]) |

The positioning calculation function calc() allows the syntax of ~, #, [] and so on from loop functions to be used inside a non-loop function.

|   | A |
|---|---|
| … | … |
| 3 | =A2.pmax@a(price) |
| 4 | =A2.calc(A3,price-price[-1]) |
| 5 | =A3.new(A2(~).dt,A4(#):gain) |

The calc() function can also be used on sequences, but there are few business-meaningful scenarios when structured data is not involved, so the examples are given here for record sequences only.

SPL Programming - Preface
SPL Programming - 8.3 [Data table] Generation of table sequence
SPL Programming - 8.5 [Data table] Calculations on the fields