content
stringlengths
86
994k
meta
stringlengths
288
619
Introduction to Heat Transfer 5th Edition Chapter 6 Solutions | Chegg.com Consider the equation for temperature profile. Here, the coefficients D, E, F and G are constants. Differentiate the above equation Calculate the velocity gradient at Consider the equation of Newton’s law of cooling and Fourier’s law. Equate equations (1) and (2). Therefore, the convection coefficient is
{"url":"http://www.chegg.com/homework-help/introduction-to-heat-transfer-5th-edition-chapter-6-solutions-9780471457275","timestamp":"2014-04-24T13:48:23Z","content_type":null,"content_length":"39980","record_id":"<urn:uuid:50bad376-bc06-4d49-a523-701eb5516f1f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 63 Small-Area Estimates of School-Age Children in Poverty: Interim Report I: Evaluation of 1993 County Estimates for Title I Allocations APPENDIX C Census Bureau's Methodology for Model-Based Estimates The Census Bureau's estimation methodology for producing county estimates of the number and percentage of related children aged 5–17 in poverty (poor school-age children) can be separated into four distinct steps: (1) the production of county estimates of the number of poor school-age children; (2) the production of state estimates of the number of poor school-age children; (3) the modification of the county estimates so that they add to the state estimates; and (4) the use of the estimated number of related children aged 5–17 as a denominator to produce estimates of the percentage of those children in poverty. Steps 1, 2, and 3 are described below; Appendix D describes the development of the denominators used in step 4. Two time periods must be differentiated in this discussion. The March 1994 CPS supports model-based estimates of the numbers of school-age children who lived in each county in 1994 and were in poverty in 1993 (the reference year for the income questions). Estimates that refer to the March 1994 CPS (or 1994 and surrounding years) are therefore referred to as 1993 estimates, although strictly speaking they involve information for both 1993 and 1994. Similarly, the 1990 decennial census produced estimates of school-age children who lived in each county in 1990 and were in poverty in 1989. The March 1990 CPS (or 1990 and surrounding years) supports estimates that are for the same income reference year (1989) as the census; we refer to these estimates as the 1989 estimates. The 1993 estimates are the current objective of the small-area estimation program; the 1989 estimates are important for evaluation purposes because they can be compared to the census. This appendix considers both the 1993 and 1989 estimates. OCR for page 63 Small-Area Estimates of School-Age Children in Poverty: Interim Report I: Evaluation of 1993 County Estimates for Title I Allocations COUNTY-LEVEL ESTIMATION1 The county-level model uses regression to produce the estimates, with (3-year average) CPS measures as the dependent variable and administrative data and population estimates for the independent variables. In this model:: yi = a + ß1x1i + ß2x2i + ß3x3i + ß4x4i + ß5x5i + ui +ei , where: yi = log(3-year weighted average of the number of poor school-age children in county i),2 x1i = log(number of child exemptions [assumed to be under age 21] reported by families in poverty on tax returns in county i), x2i = log(number of people receiving food stamps in county i), x3i = log(estimated noninstitutionalized population under age 21 in county i),3 x4i = log(number of child exemptions on tax returns in county i), x5i = log(number of poor school-age children in county i in the previous census), u = model error for county i, and ei = sampling error for county i. Variables are transformed using logarithms for two reasons. First, it is more plausible that the model is homoscedastic on the log scale (corresponding to a constant coefficient of variation or equal model variances of share in poverty) than on the original scale (equal model variances of number in poverty) over the extremely wide range of county sizes. Second, the transformed variables have a much more symmetric distribution, and the scatterplots of various covariates with the dependent variable are more linear. Only CPS sample counties that have some poor school-age children in at least one of the 3 years contributing to the 3-year average are used in the regres- 1 The following section draws heavily from the Census Bureau's documentation (Coder et al., 1996). 2 The estimated number of poor school-age children is the product of the weighted 3-year average CPS county poverty rate for related children aged 5–17 and the weighted 3-year average CPS county number of related children aged 5–17. The weights for this average are the fractions of the 3-year total of CPS interviewed housing units containing children aged 5–17 in each year. For estimates from a given year, stratum-level weights ordinarily used have been removed. These stratum-level weights result from an over-or undersampling of counties to account for certain demographic or other characteristics. As a result, for this analysis, counties receive a weight depending directly on their population size and not on other characteristics. 3 For the 1989 model, estimates of this variable are from the 1990 census; for the 1993 model, estimates are from the Census Bureau's population estimates program. OCR for page 63 Small-Area Estimates of School-Age Children in Poverty: Interim Report I: Evaluation of 1993 County Estimates for Title I Allocations sion. For the 1989 model, 1,028 of 3,141 counties were included in the regression; for the 1993 model, 1,184 of 3,143 counties were included (see Coder et al., 1996:Table 3). As represented above, the variability of yi, after the effects of the predictor variables are accounted for, is due to model error and sampling error. Since the sum of these vary substantially among counties, resulting in heterogeneous variances, a weighted least-squares regression is used. The weights are developed as follows. A mean square error is computed from the unweighted regression of log(1990 census estimates of the number of poor school-age children in 1989), using the covariates appropriate for an estimate of the dependent variable for 1989 (e.g., x5i would pertain to the 1980 census) and including only counties that have sample households with poor school-age children in the March 1993, 1994, or 1995 CPS. The mean square error or variance of total error for this regression is the sum of sampling variance and variance due to model error, that is, var(ui) + var(ei). The variance due to model error for this regression can be estimated by subtracting the contribution to mean square error due to the (estimated) sampling variances of log (census poverty estimates) that are derived from published generalized variance function estimates for each county. Since the census sampling variances are relatively small, variance due to model error is about 88 percent of the mean square error in the census regression model. This estimated model variance is assumed to closely approximate the model variance for an (unweighted) regression with the dependent variable of log(3-year average CPS estimates of the number of poor school-age children). Therefore, when this estimate of model error is subtracted from the mean square error for the CPS regression, the remainder is an estimate of the total county-level CPS sampling variability. The individual county-level sampling variance (for the log dependent variable) is then estimated by assuming that it is inversely proportional to sample size. To obtain the individual county-level contribution from model error, the model error is assumed to be homogeneous (i.e., the variance of the model error is assumed to be equal for each county). The mean square error for county i is then the sum of the variance due to model error and the estimated sampling variance (which depends on the county sample size). Most of the CPS mean square error (about 90 percent) is derived from sampling variance. The reciprocals of the mean square errors are then used as weights to recompute the regression using weighted least squares, which provides new weights since the mean square error has changed. Only one iteration is performed. The weights for the 1989 and 1993 CPS regressions differ because of their different data sets and because each year's model uses the counties in the CPS sample for that year. Together, these differences cause the estimated sampling variances to differ. However, the procedure used to develop the weights for the 1989 and 1993 CPS regressions assumes that the CPS regressions have the same model error as the 1989 census regression. Implicit in this assumption are the OCR for page 63 Small-Area Estimates of School-Age Children in Poverty: Interim Report I: Evaluation of 1993 County Estimates for Title I Allocations assumptions that the CPS and census regression models are very similar and that the time from the last census (the 1980 census for the 1989 model and the 1990 census for the 1993 model) is not an important source of differences in mean square error for these models. These assumptions have not been fully validated. For the counties that do not appear in the 3-year CPS sample, estimates of log(number of poor school-age children) are calculated by substituting the covariates for that county in the estimated regression model and computing the model prediction. For the 1,028 (1989 model) or 1,184 (1993 model) counties for which direct CPS estimates are available, the direct 3-year average CPS estimates and the model predictions are combined, using a weighted average (referred to as empirical Bayes or shrinkage estimation) in which the weight for the model prediction is the ratio of the estimated sampling variance to the sum of the estimated sampling variance and the model error variance for that county. It is important to note that for almost all counties, the great majority of the weight is given to the model predictions; for only 13 counties is the weight for the model prediction less than 0.5. The numbers of poor related children aged 5–17 in each county estimated from the county-level model are then controlled to the state poverty estimates. STATE-LEVEL ESTIMATION4 For most states, direct estimates of the number of poor school-age children from the March CPS are insufficiently reliable to be used alone. A model-based approach that borrows strength from administrative records (IRS tax files, food stamp files, etc.), the decennial census, and other states is therefore used. The methodology for development of the 1989 state estimates is described below; similar methods were used for 1993 estimates, following the specifications that were found to work well for 1989. The regression model for producing state estimates of the proportion of school-age children in poverty has the following form (for details, see Fay, 1996): yit = (Sß tjxitj + zit) + eit , where: i = the state of interest, t = the year of estimation, j = the covariate, 4 This section draws heavily from the Census Bureau's documentation (Fay, 1996). OCR for page 63 Small-Area Estimates of School-Age Children in Poverty: Interim Report I: Evaluation of 1993 County Estimates for Title I Allocations yit = direct estimate of the percentage of poor school-age children from the CPS in year t,5 zit = a random effect that represents differences between the model-based estimates and the direct estimates from the CPS, and eit = sampling variance for the dependent variable for state i in year t. The regression coefficients, ßtj, have the subscript t to indicate that they are reestimated for each year, and Sßtjxitj represents the portion of poverty that is linearly related to the covariates described below. The zit, are assumed to be independent and identically distributed in any given year. The eit are normal disturbances resulting from sampling variance. The quantity Sßtjxitj + zit represents the true poverty count for state i, which is the goal of the estimation procedure. The Census Bureau first performed a cross-sectional (linear) regression of the 1980 census estimates of poverty rates (1979 income) for school-age children on a variety of covariates. A cross-sectional regression was also fit for the 1990 census estimates of poverty rates (1989 income) for school-age children. (These regressions used ordinary least-squares estimation.) The residuals for the 1980 and 1990 census models were observed to be correlated (Fay, 1996), indicating that counties that had more poverty than predicted by the cross-sectional model for 1979 also tended to have more poverty than predicted by the cross-sectional model for 1989. This fact can be used to improve the 1989 predictions. Next, a regression model was built for the CPS estimates of school-age children's poverty rates in 1989. The covariates that were predictive in the regression models with census estimates of school-age children's poverty rates as the dependent variable were selected for inclusion in this model, along with the residuals from the regression of the 1980 census estimates of children's poverty rate on the same covariates. These covariates were (1) the percentage of child exemptions reported by families in poverty on tax returns, (2) the percentage of the noninstitutionalized population under age 65 that do not file income tax returns, (3) the percentage of the population that receives food stamps, and (4) the residuals from the regression fit on 1980 census poverty rates (discussed above). The CPS model of the poverty rates for school-age children was not used to select covariates because of the large sampling variability in the dependent variable. During the exploratory phase of model development, various transformations of both the dependent and independent variables were examined. The untransformed versions seemed to fit best, justifying the use of a model that is linear in percentages. 5 The percentage is calculated through the following ratio: the numerator is the number of poor related children aged 5–17 from the CPS, and the denominator is the estimated total population of noninstitutionalized children aged 5–17 (whether related or not) from the CPS. OCR for page 63 Small-Area Estimates of School-Age Children in Poverty: Interim Report I: Evaluation of 1993 County Estimates for Title I Allocations In the basic regression model, the &x98;tj were estimated by weighted least squares, the weights being the inverse of the sum of the estimated sampling variance and the estimated random effects variance. Estimation of the sampling variances of the direct CPS estimates of school-age children's poverty rates was done in several steps. The computer program developed to produce variances for complex samples (VPLX)—with successive difference replication (related to balanced-half sample replication)—was used to provide the original variance estimates for the CPS estimated state poverty rates. To reduce the instability of these variance estimates, they were modeled using a generalized variance function, which is a function of the poverty rate (e.g., &x98;y + &x103;y2, where y is the poverty rate) divided by the state's sample size for each year. The years 1989–1993 were used to estimate the generalized variance function. The estimated variances for the random effects were calculated using maximum likelihood estimation. One complication of this approach is that the mean and the variance of the estimated poverty rates are linked, in the sense that the variance of an estimated proportion (p) is proportional to p(1-p). Therefore, an iteration was performed, in which the estimated variance for the sampling errors was updated to reflect new values for the model predictions. The iteration was repeated six times. Finally, the CPS direct estimates of school-age children's poverty rates were combined with fitted values from the regression, using an empirical Bayes approach similar to that applied in county estimation. These procedures produced CPS estimates of 1989 poverty rates; the same methods were used to produce CPS estimates of 1993 poverty rates. The estimated rates were then multiplied by either census counts (for the 1989 model) or population estimates (for the 1993 model) to arrive at estimates of the number of poor school-age children in each state. The state estimates were then benchmarked to sum to the CPS national estimate of the number of related school-age children in poverty. This adjustment was a minor one, involving multiplying the state estimates from the 1989 model by 1.0168 and those from the 1993 model by 1.0091.
{"url":"http://www.nap.edu/openbook.php?record_id=5885&page=63","timestamp":"2014-04-16T10:40:15Z","content_type":null,"content_length":"52194","record_id":"<urn:uuid:92158eaa-2817-40b2-9a3c-9ee9a001eeec>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
The virtues of eta-expansion Results 11 - 20 of 28 - Proceedings of TLCA '95, Springer LNCS 902 , 1995 "... . We investigate, in a categorical setting, some completeness properties of beta-eta conversion between closed terms of the simplytyped lambda calculus. A cartesian-closed category is said to be complete if, for any two unconvertible terms, there is some interpretation of the calculus in the catego ..." Cited by 10 (0 self) Add to MetaCart . We investigate, in a categorical setting, some completeness properties of beta-eta conversion between closed terms of the simplytyped lambda calculus. A cartesian-closed category is said to be complete if, for any two unconvertible terms, there is some interpretation of the calculus in the category that distinguishes them. It is said to have a complete interpretation if there is some interpretation that equates only interconvertible terms. We give simple necessary and sufficient conditions on the category for each of the two forms of completeness to hold. The classic completeness results of, e.g., Friedman and Plotkin are immediate consequences. As another application, we derive a syntactic theorem of Statman characterizing beta-eta conversion as a maximum consistent congruence relation satisfying a property known as typical ambiguity. 1 Introduction In 1970 Friedman proved that beta-eta conversion is complete for deriving all equalities between the (simply-typed) lambda-definable... , 1996 "... . In this paper we focus on a set of abstract lemmas that are easy to apply and turn out to be quite valuable in order to establish confluence and/or normalization modularly, especially when adding rewriting rules for extensional equalities to various calculi. We show the usefulness of the lemmas by ..." Cited by 9 (3 self) Add to MetaCart . In this paper we focus on a set of abstract lemmas that are easy to apply and turn out to be quite valuable in order to establish confluence and/or normalization modularly, especially when adding rewriting rules for extensional equalities to various calculi. We show the usefulness of the lemmas by applying them to various systems, ranging from simply typed lambda calculus to higher order lambda calculi, for which we can establish systematically confluence and/or normalization (or decidability of equality) in a simple way. Many result are new, but we also discuss systems for which our technique allows to provide a much simpler proof than what can be found in the literature. 1 Introduction During a recent investigation of confluence and normalization properties of polymorphic lambda calculus with an expansive version of the # rule, we came across a nice lemma that gives a simple but quite powerful sufficient condition to check the Church Rosser property for a compound rewriting system... - TYPES FOR PROOFS AND PROGRAMS: INTERNATIONAL WORKSHOP, TYPES ’98, KLOSTER IRSEE , 1997 "... An extension of the simply-typed lambda-calculus allowing iteration and case reasoning over terms defined by means of higher order abstract syntax has recently been introduced by Joëlle Despeyroux, Frank Pfenning and Carsten Schürmann. This thorny mixing is achieved thanks to the help of the operato ..." Cited by 8 (1 self) Add to MetaCart An extension of the simply-typed lambda-calculus allowing iteration and case reasoning over terms defined by means of higher order abstract syntax has recently been introduced by Joëlle Despeyroux, Frank Pfenning and Carsten Schürmann. This thorny mixing is achieved thanks to the help of the operator ` ' of modal logic IS4. Here we give a new presentation of their system, with reduction rules, instead of evaluation judgments, that compute the canonical forms of terms. Our presentation is based on a modal lambda-calculus that is better from the user's point of view, is more concise and we do not impose a particular strategy of reduction during the computation. Our system enjoys the decidability of typability, soundness of typed reduction with respect to typing rules, the Church-Rosser and strong normalization properties. Finally it is a conservative extension of the simply-typed lambda-calculus. "... It is well known that confluence and strong normalization are preserved when combining algebraic rewriting systems with the simply typed lambda calculus. It is equally well known that confluence fails when adding either the usual contraction rule for #, or recursion together with the usual contract ..." Cited by 8 (3 self) Add to MetaCart It is well known that confluence and strong normalization are preserved when combining algebraic rewriting systems with the simply typed lambda calculus. It is equally well known that confluence fails when adding either the usual contraction rule for #, or recursion together with the usual contraction rule for surjective pairing. We show that confluence and strong normalization are modular properties for the combination of algebraic rewriting systems with typed lambda calculi enriched with expansive extensional rules for # and surjective pairing. We also show how to preserve confluence in a modular way when adding fixpoints to di#erent rewriting systems. This result is also obtained by a simple translation technique allowing to simulate bounded recursion. 1 Introduction Confluence and strong normalization for the combination of lambda calculus and algebraic rewriting systems have been the object of many studies [BT88, JO91, BTG94, HM90], where the modularity of these properties is s... - LIENS-DMI, Ecole Normale Superieure , 1996 "... The use of expansionary j-rewrite rules in various typed -calculi has become increasingly common in recent years as their advantages over contractive j-rewrite rules have become apparent. Not only does one obtain the decidability of fij-equality, but rewrite relations based on expansions give a natu ..." Cited by 6 (0 self) Add to MetaCart The use of expansionary j-rewrite rules in various typed -calculi has become increasingly common in recent years as their advantages over contractive j-rewrite rules have become apparent. Not only does one obtain the decidability of fij-equality, but rewrite relations based on expansions give a natural interpretation of long fij-normal forms, generalise more easily to other type constructors, retain key properties when combined with other rewrite relations, and are supported by a categorical theory of reduction. This paper extends the initial results concerning the simply typed -calculus to System F, that is, we prove strong normalisation and confluence for a rewrite relation consisting of traditional fi-reductions and j-expansions satisfying certain restrictions. Further, we characterise the second order long fij-normal forms as precisely the normal forms of the restricted rewrite relation. These results are an important step towards showing that j-expansions are compatible with the m... - PROCEEDINGS OF WCCFL 21 , 2002 "... ..." , 1993 "... We add extensional equalities for the functional and product types to the typed -calculus with not only products and terminal object, but also sums and bounded recursion (a version of recursion that does not allow recursive calls of infinite length). We provide a confluent and strongly normalizing ..." Cited by 3 (0 self) Add to MetaCart We add extensional equalities for the functional and product types to the typed -calculus with not only products and terminal object, but also sums and bounded recursion (a version of recursion that does not allow recursive calls of infinite length). We provide a confluent and strongly normalizing (thus decidable) rewriting system for the calculus, that stays confluent when allowing unbounded recursion. For that, we turn the extensional equalities into expansion rules, and not into contractions as is done traditionally. We first prove the calculus to be weakly confluent, which is a more complex and interesting task than for the usual -calculus. Then we provide an effective mechanism to simulate expansions without expansion rules, so that the strong normalization of the calculus can be derived from that of the underlying, traditional, non extensional system. These results give us the confluence of the full calculus, but we also show how to deduce confluence directly form our - ALP '94, volume 850 of LNCS , 1994 "... We study the extensional version of the simply typed -calculus with product types and fixpoints enriched with layered, wildcard and product patterns. Extensionality is expressed by the surjective pairing axiom and a generalization of the j-conversion to patterns. We obtain a confluent reduction syst ..." Cited by 3 (2 self) Add to MetaCart We study the extensional version of the simply typed -calculus with product types and fixpoints enriched with layered, wildcard and product patterns. Extensionality is expressed by the surjective pairing axiom and a generalization of the j-conversion to patterns. We obtain a confluent reduction system by turning the extensional axioms as expansion rules, and then adding some restrictions to these expansions in order to avoid reduction loops. Confluence is proved by composition of modular properties of the extensional and non-extensional subsystems of the reduction calculus. 1 Introduction Pattern-matching function definitions is one of the most popular features of functional languages, allowing to specify the behavior of functions by cases, according to the form of their arguments. Left-hand sides of function definitions are usually expressed using Layered, Wildcard and Product Patterns (LWPP), as for example the following Caml Light [ea93] program where the function cons new pair ta... - UNDER CONSIDERATION FOR PUBLICATION IN J. FUNCTIONAL PROGRAMMING , 2007 "... Traditionally, decidability of conversion for typed λ-calculi is established by showing that small-step reduction is confluent and strongly normalising. Here we investigate an alternative approach employing a recursively defined normalisation function which we show to be terminating and which reflec ..." Cited by 3 (0 self) Add to MetaCart Traditionally, decidability of conversion for typed λ-calculi is established by showing that small-step reduction is confluent and strongly normalising. Here we investigate an alternative approach employing a recursively defined normalisation function which we show to be terminating and which reflects and preserves conversion. We apply our approach to the simply-typed λ-calculus with explicit substitutions and βη-equality, a system which is not strongly normalising. We also show how the construction can be extended to System T with the usual β-rules for the recursion combinator. Our approach is practical, since it does verify an actual implementation of normalisation which, unlike normalisation by evaluation, is first order. An important feature of our approach is that we are using logical relations to establish equational soundness (identity of normal forms reflects the equational theory), instead of the usual syntactic reasoning using the Church-Rosser property of a term rewriting system.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=497787&sort=cite&start=10","timestamp":"2014-04-18T20:33:07Z","content_type":null,"content_length":"37281","record_id":"<urn:uuid:4a671472-e50b-4274-a9d3-911fcf3785f5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
About lottery July 16th 2010, 07:58 AM About lottery Hey, I was just wondering if there is way to tell how many times person needs to do lottery in order to have 50% chance to actually win the big prize. We use 39 numbers and must pick 7 of them. Chances to win are 1/15380397 and I calculated that if you try to do it for 15380397 weeks always using 1 row at time, chances are 1 - (15380396/15380397)^15380937 = 0,63 So how many weeks is reguired to have almost exact 0,5 chance to win? I think I might have asked my math teacher about this problem, but she said there is no exact way to calculate that with a pen and paper. July 16th 2010, 10:25 AM From what I understand of the question the chance will always be ${49 \choose 7}$ as the events are independent. July 16th 2010, 02:02 PM Yeah but I'm talking about chance to win. For example, if you roll dice for 2 times you have 30& chance to get at least 1 "6" even chance is always 1/6. In this case, I'm asking how many lottery tries is reguired to have 50% chance to win at least once. July 16th 2010, 02:57 PM mr fantastic Hey, I was just wondering if there is way to tell how many times person needs to do lottery in order to have 50% chance to actually win the big prize. We use 39 numbers and must pick 7 of them. Chances to win are 1/15380397 and I calculated that if you try to do it for 15380397 weeks always using 1 row at time, chances are 1 - (15380396/15380397)^15380937 = 0,63 So how many weeks is reguired to have almost exact 0,5 chance to win? I think I might have asked my math teacher about this problem, but she said there is no exact way to calculate that with a pen and paper. Let X be the random variable 'number of times you win'. X ~ Binomial(n = ?, p = 1/15,380,397). You require the smallest integer value of n such that $\Pr(X \geq 1) \geq 0.5 \Rightarrow \Pr(X = 0) \leq 0.5$. Which means finding the smallest integer value of n that solves: $\left( 1 - \frac{1}{15,380,397} \right)^n \leq 0.5$. This can be done by trial and error using a simple scientific calculator. Alternatively an an exact algebraic solution to $\left(1 - \frac{1}{15,380,397} \right)^n = 0.5$ can be easily found and then appropriate rounding done to get the required answer. It should not surprise anyone to find that the value of n is between 10 million and 11 million.
{"url":"http://mathhelpforum.com/statistics/151110-about-lottery-print.html","timestamp":"2014-04-19T02:14:09Z","content_type":null,"content_length":"7216","record_id":"<urn:uuid:545daa52-96a9-4c19-ad1c-a850e99bbdbf>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: What is the inverse of y = 4x - 2 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50f6d175e4b007c4a2eb0e2a","timestamp":"2014-04-17T04:20:32Z","content_type":null,"content_length":"44354","record_id":"<urn:uuid:72c06378-3309-404c-ad8d-16bc578e435c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Earliest Known Uses of Some of the Words of Mathematics (L) Last revision: July 27, 2011 LAG in time series analysis. In 1901 R. H. Hooker described "a measure of the lag of one phenomenon behind another upon which it is in some way dependent" in his paper "Correlation of the Marriage-Rate with Trade," Journal of the Royal Statistical Society, 64, pp. 485-492. David (1998). LAGRANGE MULTIPLIER. Joseph-Louis Lagrange states the general principle for maximising a function of n variables when there are one or more equations between the variables in his Théorie des Fonctions Analytiques (1797, p. 198): "il suffira d'ajouter à la function proposée les functions qui doivent être nulles, multipliées chacune par une quantité indéterminée ...". Lagrange originally applied the multiplier technique to problems in the calculus of variations in his Mécanique Analytique (1788, pp. 46-7). (See H. H. Goldstine A History of the Calculus of Variations from the 17^th through the 19^th Century (1980).) Although "Lagrange multiplier" is the standard term today, "undetermined multiplier" and "indeterminate multiplier" were the usual terms in the 19^th century and for much of the 20^th. The term "Lagrange's method of undetermined multipliers" appears in J. W. Mellor, Higher Mathematics for Students of Chemistry and Physics (1912) [James A. Landau]. The term "Lagrange multiplier rule" appears in "The Problem of Mayer with Variable End Points," Gilbert Ames Bliss, Transactions of the American Mathematical Society, Vol. 19, No. 3. (Jul., 1918). Lagrange multiplier is found in "Necessary Conditions in the Problems of Mayer in the Calculus of Variations," Gillie A. Larew, Transactions of the American Mathematical Society, Vol. 20, No. 1. (Jan., 1919): "The [lambda]'s appearing in this sum are the functions of x sometimes called Lagrange multipliers." The use of Lagrangian for the augmented function dates from the 1960s, see e.g. Samuel Zahl "A Deformation Method for Quadratic Programming," Journal of the Royal Statistical Society, B, 26, (1964), p. 153. (JSTOR search) The Lagrangian function or the Lagrangian expression were once the popular terms. Lagrange multiplier test in Statistics. This test principle was introduced by S. D. Silvey "The Lagrangian Multiplier Test," Annals of Mathematical Statistics, 30, (1959), 389-407. However, while Silvey’s derivation was new, the test statistic was already in the literature as the "score test." Econometricians tend to favour the Lagrange term and statisticians the score term. See the entries SCORE TEST and WALD TEST. LAGRANGE'S THEOREM. Formule de Lagrange appears in Traité élémentaire de calcul différentiel et de calcul intégral (1797-1800) by Lacroix. Lagrange's theorem appears in An Elementary Treatise on Curves, Functions and Forms (1846) by Benjamin Peirce: "The theorem (650) under this form of application, has been often called Laplace's Theorem; but, regarding this change as obvious and insignificant, we do not hesitate to discard the latter name, and give the whole honor of the theorem to its true author, Lagrange." Lagrange's formula for interpolation appears in 1849 in An Introduction to the Differential and Integral Calculus, 2nd ed., by James Thomson. Lagrange's method of approximation occurs in the third edition of An Elementary Treatise on the Theory of Equations (1875) by Isaac Todhunter. LAGRANGIAN (as a noun) occurs in Th. Muir, "Note on the Lagrangian of a special unit determinant," Transactions Royal Soc. South Africa (1929). LAPLACE'S COEFFICIENTS. According to Todhunter (1873), "the name Laplace's coefficients appears to have been first used" by William Whewell (1794-1866) [Chris Linton]. Laplace's coefficients appears in the title Mathematical tracts Part I: On Laplace's coefficients, the figure of the earth, the motion of a rigid body about its center of gravity, and precession and nutation (1840) by Matthew O'Brien. LAPLACE'S EQUATION appears in 1813 in Pantologia. A new cabinet cyclopædia by John Mason Good, Olinthus Gilbert Gregory, and N. Bosworth. [Google print search] The LAPLACE EXPANSION of a determinant is generally traced to Laplace’s memoir in Histoire de l'Académie royale des sciences 1776 (Année 1772, 2e partie) pp. 267-376 (see pp. 294-304). Thomas Muir makes a detailed examination of the argument in The Theory of Determinants in the Historical Order of Development vol. 1, pp. 24-33: he concludes, “there can be no doubt that if any one name is to be attached to the theorem it should be that of Laplace.” [John Aldrich] LAPLACE'S FUNCTIONS appears in English in 1833 in Elementary principles of the theories of electricity, heat and molecular actions by Robert Murphy. [Google print search] The term LAPLACE'S OPERATOR (for the differential operator ^2) was used in 1873 by James Clerk Maxwell in A Treatise on Electricity and Magnetism (p. 29): "...an operator occurring in all parts of Physics, which we may refer to as Laplace's Operator" (OED2). See VECTOR ANALYSIS and the Earliest Uses of Symbols of Calculus page The term LAPLACE TRANSFORM was used by Boole and Poincaré. According to the website of the University of St. Andrews, Boole and Poincaré might otherwise have used the term Petzval transform but they were influenced by a student of Józeph Miksa Petzval (1807-1891) who, after a falling out with his instructor, claimed incorrectly that Petzval had plagiarised Laplace's work. LAPLACIAN (as a noun, for the differential operator ^2) was used in 1935 by Pauling and Wilson in Introd. Quantum Mech. (OED2). See VECTOR ANALYSIS. LATENT VALUE and VECTOR. See EIGENVALUE. The term LATIN SQUARE was named by Euler (as quarré latin) in 1782 in "Recherches sur une Nouvelle Espèces de Quarrés Magique," Verh. uitgegeven door het Zeeuwsch Genootschap d. Wetensch. te Vlissingen, 9, 85-232. Latin square appears in English in 1890 in the title of a paper by Arthur Cayley, "On Latin Squares" in Messenger of Mathematics. Graeco-Latin square appears in H. F. MacNeish "Euler Squares." Ann. Math. 23, (1921-1922), 221-227. The term was introduced into statistics by R. A. Fisher, according to Tankard (p. 112). Fisher used the term in 1925 in Statistical Methods for Research Workers p. 229 (OED2). Graeco-Latin square appears in 1934 in R. A. Fisher and F. Yates, " The 6 x 6 Latin Squares.," Proceedings of the Cambridge Philosophical Society, 30, 492-507. See the entry EULER'S GRAECO-LATIN SQUARES CONJECTURE. LATITUDE and LONGITUDE. Henry of Ghent used the word latitudo in connection with the concept of latitude of forms. Nicole Oresme (1320-1382) used the terms latitude and longitude approximately in the sense of abscissa and ordinate. LATTICE and LATTICE THEORY in algebra. These terms were introduced by Garrett Birkhoff in his “On the combination of subalgebras,” Proceedings of the Cambridge Philosophical Society, 29, (1933), 441–464 and became well-known through his book Lattice Theory (1940). See Enyclopedia of Mathematics where earlier work by Schröder and Dedekind and contemporary work by Ore are also described. LATTICE POINT. Gitterpuncte is found in “Geometrischer Beweis des Fundamentaltheorems fü die quadratischen Reste” by Eisenstein in Journal für die reine and angewandte Mathematik (Crelle), tom 28 (1844) pp 246-248, and also in Eisenstein's Werke I, 164-166: Man stele sich jetzt in der Ebene ein rechtwinkliges Coordinatesystem (x, y) und die ganze Ebene durch Parallelen mit den Axen in den Abständen = 1 von einander in lauter Quadrate von den Dimensionen = 1 getheilt vor. Gitterpuncte sollen alle Eckpuncte von Quadraten heifsen, welche nicht in den beiden Coordinaten – Axen liegen. The term Gitterpuncte may well have been used earlier. The earliest use of lattice point in English given by the OED is from Cayley's translation of Eistenstein, which is titled “Eisenstein's Geometrical Proof of the Fundamental Theorem for Quadratic Residues” and appears in the Quarterly Mathematical Journal, vol. I. (1857), pp. 186-191: “Imagine now in a plane, a rectangular system of coordinates (x, y) and the whole plane divided by lines parallel to the axes at distances = 1 from each other into squares of the dimension = 1. And let the angles which do not lie on the axes of coordinates be called lattice points.” [James A. Landau] The term LATUS RECTUM was used by Gilles Personne de Roberval (1602-1675) in his lectures on Conic Sections. The lectures were printed posthumously under the title Propositum locum geometricum ad aequationem analyticam revocare,... in 1693 [Barnabas Hughes]. LAURENT EXPANSION. Pierre-Alphonse Laurent announced this result in his "Extension du théorème de M. Cauchy relatif à la convergence du développement d'une fonction suivant les puissances ascendantes de la variable," Comptes rendus, 17, (1843) pp. 348-9. The full paper was published in Journal de l’Ecole Polytechnique. 23, (1863), 75-204. (Kline p. 641) LAW OF COSINES is found in 1895 in Plane and spherical trigonometry, surveying and tables by George Albert Wentworth: "Law of Cosines. ... The square of any side of a triangle is equal to the sum of the squares of the other two sides, diminished by twice their product into the cosine of the included angle" [University of Michigan Digital Library]. The term LAW OF INTERTIA OF QUADRATIC FORMS was introduced by James Joseph Sylvester in his “A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by real orthogonal substitutions to the form of a sum of positive and negative squares,” Philosophical Magazine, 4, 138-142. It is sometimes called Sylvester’s law of inertia. See Kline (p. 799) and the entry in Encyclopedia of Mathematics. [John Aldrich] LAW OF LARGE NUMBERS, STRONG LAW, WEAK LAW. La loi de grands nombres appears in 1835 in Siméon-Denis Poisson (1781-1840), "Recherches sur la Probabilité des Jugements, Principalement en Matiére Criminelle," Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, 1, 473-494 also in his "La loi de grands nombres" in ibid. 2, (1836) 377-382. (Porter p. 77). By Poisson’s time there were several theorems that could be covered by this phrase. The first, given by Jacob Bernoulli in Ars Conjectandi (1713), was about "Bernoulli trials" (a 20^th century term). It was often called "Bernoulli’s theorem": see e.g. Todhunter’s A History of the Mathematical Theory of Probability (1865, p. 71). In the course of the 19^th century analogous results were found for other types of random variable. In the 20^th century a new kind of convergence result was obtained, based on almost-sure convergence, not convergence in probability, as in the Bernoulli tradition. The first of these results (on Bernoulli trials) was given by E. Borel in 1909 (see NORMAL NUMBER) and a more general result was given by F. Cantelli in 1917. In 1928 A. Y. Khintchine introduced the term strong law of large numbers to distinguish these results from the "ordinary" Bernoulli-like results: "Sur la loi forte des grands nombres," Comptes Rendus de l'Académie des Sciences, 186 page 286. The obvious term for the Bernoulli results viz., the weak law of large numbers, seems to have come later. A JSTOR search (restricted to journals in English) produced W. Feller’s 1945 "Note on the Law of Large Numbers and "Fair" Games," Annals of Mathematical Statistics, 16, 301-304. LAW OF QUADRATIC RECIPROCITY. Legendre used loi de réciprocite in 1808 in his Essai Sur La Théorie des Nombres; Seconde Édition: “Démonstration du Théorème contenant la loi de réciprocite qui existe entre deux nombres premiers quelconques.” In English, law of quadratic reciprocity is found in H. J. Stephen, “Report on the Theory of Numbers.—Part III,” Report Of The Thirty-First Meeting Of The British Association For The Advancement Of Science; Held At Manchester In September 1861: Page 323 “But it follows from the law of quadratic reciprocity, that one-half of these complete characters are impossible; i. e. that no quadratic form characterized by them can exist.” Page 324n “It is also to be noticed that Gauss does not use the law of quadratic reciprocity to demonstrate the impossibility of one-half of the generic characters; for, as we shall hereafter see, this impossibility is proved in the Disq. Arith. (art. 261) independently of the law of reciprocity, and is then employed to establish that law.” [James A. Landau] LAW OF SINES (Snell's law). The law of sines is found in 1851-54 in Hand-books of natural philosophy and astronomy by Dionysius Lardner [University of Michigan Digital Library]. LAW OF SINES (trigonometry) is found in 1895 in Plane and spherical trigonometry, surveying and tables by George Albert Wentworth: "...the Law of Sines, which may be thus stated: The sides of a triangle are proportional to the sines of the opposite angles [University of Michigan Digital Library]. LAW OF SMALL NUMBERS is a translation of the German phrase coined by L. von Bortkiewicz, and used by him as a title of his book Das Gesetz der kleinen Zahlen (1898). (David (2001)). A JSTOR search found the English phrase in a note, probably by Edgeworth, in the Economic Journal (1904, p. 496) on an Italian publication treating “the relation between statistics and the Calculus of Probabilities with special reference to Prof. Bortschevitch’s ‘law of small numbers.’” LAW OF TANGENTS is found in 1895 in Plane and spherical trigonometry, surveying and tables by George Albert Wentworth: "Hence the Law of Tangents: The difference of two sides of a triangle is to their sum as the tangent of half the difference of the opposite angles is to the tangent of half their sum" [University of Michigan Digital Library]. LAW OF THE ITERATED LOGARITHM is found in English in Philip Hartman and Aurel Wintner, "On the law of the iterated logarithm," Am. J. Math. 63, (1941), 69-176. The German name appears in A. Kolmogorov’s Über das Gesetz des iterierten Logarithmus, Mathematische Annalen, 101, (1929), 126-135. Kolmogorov was extending the original result (for coin tossing) due to A. Khintchine "Über einen Satz der Wahrscheinlichkeitsrechnung," Fundamamenta Mathematicae, 6, (1924), 9-20. Khintchine’s work in turn rested on partial results formulated in terms of number theory. [John Aldrich] The term LEAST ACTION was used by Lagrange (DSB). LEAST COMMON MULTIPLE. Common denominator appears in English in 1594 in Exercises by Blundevil: "Multiply the Denominators the one into the other, and the Product thereof shall bee a common Denominator to both fractions" (OED2). Common divisor was used in 1674 by Samuel Jeake in Arithmetick, published in 1696: "Commensurable, called also Symmetral, is when the given Numbers have a Common Divisor" (OED2). Least common multiple is found in 1823 in J. Mitchell, Dict. Math. & Phys. Sci.: "To find the least common Multiple of several Numbers" (OED2). Least common denominator is found in 1844 in Introduction to The national arithmetic, on the inductive system by Benjamin Greenleaf: "RULE. - Reduce the fractions, if necessary, to the least common denominator. Then find the greatest common divisor of the numerators, which, written over the least common denominator, will give the greatest common divisor required" [University of Michigan Digital Lowest common denominator appears in 1854 in Arithmetic, oral and written, practically applied by means of suggestive questions by Thomas H. Palmer: "Suggestive Questions. - Are all the underlined factors to be found in the denominators of the fractions marked a and b? Should they be omitted, then, in finding the lowest common denominator? What is the product of the factors that are not underlined? (80·3·5.) Has this product every factor contained in all the given denominators? Will it form their common denominator, then? Does it contain no more factors than they do? Will it form, then, their lowest common denominator?" [University of Michigan Digital Library]. Least common dividend appears in 1857 in Mathematical Dictionary and Cyclopedia of Mathematical Science. Lowest common multiple appears in 1873 in Test examples in algebra, especially adapted for use in connection with Olney's School, or University algebra by Edward Olney [University of Michigan Digital LEAST SQUARES. See METHOD OF LEAST SQUARES. LEBESGUE INTEGRAL. In 1899-1901 Henri Lebesgue published five short papers in the Comptes Rendus. These formed the basis of his doctoral dissertation Intégrale, longueur, aire, published in 1902 in the Annali di Matematica. The fifth paper of the series “Sur une généralisation de l’intégral défini,” Comptes rendus 132, (1901) 1025-1028 announced Lebesgue’s generalisation of the Riemann Integral. (From T. Hawkins Lebesgue’s Theory of Integration: Its Origins and Development. 1970) The term Lebesgue integral soon appeared in English, in a paper by William H. Young, “On an extension of the Heine-Borel Theorem,” Messenger of Mathematics 33 (1903-04), 120-132. The date received by the editors is not given, but Ivor Grattan-Guinness [“Mathematical bibliography for W. H. and G. C. Young,” Historia Mathematica 2 (1975), 43-58] places this paper chronologically between papers with received dates of 29 October 1903 and 6 December 1903. The theorem in question has, as far as I know, not hitherto been formulated, though it can be deduced without difficulty from a theorem in a recent memoir by M. Lebesgue, which states that the Lebesgue integral, as we may conveniently call it, of a sum of any two functions (as far as our present knowledge of functions goes) is the sum of their Lebesgue integrals. It has only to be shown that the new notion of the Lebesgue integral coincides in the case of semi-continuous functions with the well-known one of upper or lower integral. (footnote on p. 129) Behind Young’s reference to the Lebesgue integral is a tale of lost priority. For when Young next refers to the integral he indicates the resemblance between Lebesgue's work and his own researches. William H. Young, “On upper and lower integration,” Proceedings of the London Mathematical Society (2) (1905), 52-66. [Received by the editors on January 14, 1904. The following footnote appears on the first page of the paper and is dated April 2, 1904.] This paper was written simultaneously with the preceding memoir [Young's "Open sets and the theory of content"], at a time when the writer was unacquainted with the work of M. Lebesgue. The result of Theorem 2 is in perfect accord with Lebesgue's expression for his integral as the common limit of two difference summations (Annali di Matematica, 1902, p. 253); in fact, it is easily shown that, in the case of an (upper) lower semi-continuous function, the Lebesgue integral coincides with the upper (lower) integral. It may be further remarked that, in the general case, the Lebesgue integral may itself be expressed in precisely my form. In accordance with the alterations made in the preceding memoir (cp. footnote, p. 16), I have made a few verbal alterations in the present paper; I have also elaborated the proof of the final theorem, which, in its original form, was too condensed. The following is from p. 143 of Ivor Grattan-Guinness, “A mathematical union: William Henry and Grace Chisholm Young,” Annals of Science 29(2) (August 1972), 105-186: By 1904 Will had, independently of Lebesgue, constructed by different means an equivalent theory of integration. It was his first really important idea in mathematics and the discovery that he had been anticipated would have cracked many a lesser man. But he took it magnanimously; when he heard of Lebesgue's work he withdrew his major paper and rewrote parts of it to include considerations on what he named for ever as 'the Lebesgue integral'. There were enough technical differences between the two approaches for Young to present his own results in this paper, and sufficient applications of his approach to all branches of analysis to make it significant in its time. In fact, some of the succeeding workers on 'Lebesgue integration' have preferred to follow Young's rather than Lebesgue's approach. [This entry was contributed by Dave L. Renfro.] LEG for a side of a right triangle other than the hypotenuse is found in English in 1659 in Joseph Moxon, Globes (OED2). Leg is used in the sense of one of the congruent sides of an isosceles triangle in 1702 Ralphson's Math. Dict.: "Isosceles Triangle is a Triangle that has two equal Legs" (OED2). LEIBNIZ SERIES. See GREGORY'S SERIES. LEMMA appears in English in the 1570 translation by Sir Henry Billingsley of Euclid's Elements (OED2). [The plural of lemma can be written lemmas or lemmata.] LEMNISCATE. Jacob Bernoulli named this curve the lemniscus in Acta Eruditorum in 1694. He wrote, "...formam refert jacentis notae octonarii [infinity symbol], seu complicitae in nodum fasciae, sive lemnisci" (Smith vol. 2, page 329). LEMOINE POINT. See SYMMEDIAN POINT. LEPTOKURTIC. See KURTOSIS. LEVERAGE in least squares estimation. The earliest JSTOR appearances are from 1978 but the term was apparently already established for David F. Andrews and Daryl Pregibon write that "observations with large effects" are usually called "leverage points". "Finding the Outliers that Matter," Journal of the Royal Statistical Society. Series B, 40, (1978), 85-93. L'HOSPITAL'S RULE. In his "De progressionibus transcendentibus" Euler referred to the method as "a known rule" ("per regulam igitur cognitam quaeramus valorem fractionis") [Stacy Langton]. In 1891 in An Elementary Treatise of the Differential and Integral Calculus by George A. Osborne, the method is not named: "The Differential Calculus furnishes the following method applicable to all In Differential and Integral Calculus (1902) by Virgil Snyder and John Irwin Hutchinson, the procedure is termed "evaluation by differentiation." The same term is used in Elementary Textbook of the Calculus (1912) by the same authors. de l'Hospital's theorem on indeterminate forms is found in approximately 1904 in the E. R. Hedrick translation of volume I of A Course in Mathematical Analysis by Edouard Goursat. The translation carries the date 1904, although a footnote references a work dated 1905 [James A. Landau]. The 1906 edition of A History of Mathematics by Florian Cajori, referring to L'Hopital's 1696 treatise, has: "This contains for the first time the method of finding the limiting value of a fraction whose two terms tend toward zero at the same time." In Differential and Integral Calculus (1908) by Daniel A. Murray, the procedure is shown but is not named. James A. Landau has found in J. W. Mellor, Higher Mathematics for Students of Chemistry and Physics, 4th ed. (1912), the sentence, "This is the so-called rule of l'Hopital." L'Hopital's rule is dated 1944 in MWCD11. The rule is named for Guillaume-Francois-Antoine de l'Hospital (1661-1704), although the rule was discovered by Johann Bernoulli. The rule and its proof appear in a 1694 letter from him to The family later changed the spelling of the name to l'Hôpital. LIAR PARADOX. This paradox exists in several forms. One is attributed to the philosopher Epimenides in the sixth century BC: "All Cretans are liars...One of their own poets has said so." Another is attributed to Eubulides of Miletus a leader of the Megarian school from the fourth century BC. In his Life of Euclides Diogenes Laertius wrote that Eubulides "handed down a great many arguments in dialectic." The Eubulides form is "Is the man a liar who says that he tells lies?" W. & M. Kneale The Development of Logic (1962) pp. 227-8 say that new variants were devised in the Middle Ages and they speculate that the medieval logicians may have rediscovered the paradox from considering St. Paul’s Epistle to Titus "One of themselves, a prophet of their own, hath said, The Cretans are always liars, evil wild beasts, lazy gluttons. This witness is true..." Paul evidently did not see the paradox in this echo of Epimenides. See PARADOX. LIE GROUP (named for Sophus Lie) appears in L. Autonne, "Sur une application des groupes de M. Lie.," C. R. CXII. 570-573 (1891). Lie group appears in English in 1897 in "Sophus Lie's Transformation Groups: A Series of Elementary, Expository Articles" by Edgar Odell Lovett, The American Mathematical Monthly, Vol. 4, No. 10. [ JSTOR search] LIFE TABLE. John Graunt set down a life table in Chapter XI of Observations Made upon the Bills of Mortality (1662) in the course of estimating the number of "fighting men" in London. Edmond Halley presented a more solidly based table in "An Estimate of the Degrees of the Mortality of Mankind, Drawn from Curious Tables of the Births and Funerals at the City of Breslaw; With an Attempt to Ascertain the Price of Annuities upon Lives" Philosophical Transactions (1683-1775), Vol. 17. (1693), 596-610. See Hald (1990, chapters 7 and 9) and Life and Work of Statisticians for other presentations of Halley’s article. The term table of mortality appears in A. De Morgan's Essay on Probabilities and their Applications to Life Contingencies and Insurance Offices (1838). The OED's earliest quotation for life table is from 1865, "Every insurance office bases its transactions upon an instrument which is called a ‘Life Table’" Reader 25 Feb. 213/1. A JSTOR search found the term in William A. Guy "On the Duration of Life Among the Families of the Peerage and Baronetage of the United Kingdom," Journal of the Statistical Society of London, 8, (1845), 69-77. See SURVIVAL FUBCTION. LIKELIHOOD. The term was first used in its modern sense in R. A. Fisher's "On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample", Metron, 1, (1921), 3-32. Formerly, likelihood was a synonym for probability, as it still is in everyday English. In his paper "On the Mathematical Foundations of Theoretical Statistics" (Phil. Trans. Royal Soc. Ser. A. 222, (1922), p. 326). Fisher made clear for the first time the distinction between the mathematical properties of "likelihoods" and "probabilities" (DSB). The solution of the problems of calculating from a sample the parameters of the hypothetical population, which we have put forward in the method of maximum likelihood, consists, then, simply of choosing such values of these parameters as have the maximum likelihood. Formally, therefore, it resembles the calculation of the mode of an inverse frequency distribution. This resemblance is quite superficial: if the scale of measurement of the hypothetical quantity be altered, the mode must change its position, and can be brought to have any value, by an appropriate change of scale; but the optimum, as the position of maximum likelihood may be called, is entirely unchanged by any such transformation. Likelihood also differs from probability in that it is not a differential element, and is incapable of being integrated: it is assigned to a particular point of the range of variation, not to a particular element of it. Likelihood was first used in a Bayesian context by Harold Jeffreys in his "Probability and Scientific Method," Proceedings of the Royal Society A, 146, (1934) p. 10. Jeffreys wrote "the theorem of Inverse Probability" in the form Posterior Probability This entry was contributed by John Aldrich, based on David (2001). See BAYES, MAXIMUM LIKELIHOOD, INVERSE PROBABILITY and POSTERIOR & PRIOR. LIKELIHOOD PRINCIPLE. This expression burst into print in 1962, appearing in "Likelihood Inference and Time Series" by G. A. Barnard, G. M. Jenkins, C. B. Winsten (Journal of the Royal Statistical Society A, 125, 321-372), "On the Foundations of Statistical Inference" by A. Birnbaum (Journal of the American Statistical Association, 57, 269-306), and L. J. Savage et al, (1962) The Foundations of Statistical Inference. It must have been current for some time because the Savage volume records a conference in 1959; the term appears in Savage's contribution so the expression may have been his The principle (without a name) can be traced back to R. A. Fisher's writings of the 1920s though its clearest earlier manifestation is in Barnard's 1949 "Statistical Inference" (Journal of the Royal Statistical Society. Series B, 11, 115-149). On these earlier outings the principle attracted little attention. See Likelihood and Probability in R. A. Fisher’s Statistical Methods for Research The LIKELIHOOD RATIO figured in the test theory of J. Neyman and E. S. Pearson from the beginning, "On the Use of Certain Test Criteria for Purposes of Statistical Inference, Part I" Biometrika, (1928), 20A, 175-240. They usually referred to it as the likelihood although the phrase "likelihood ratio" appears incidentally in their "Problem of k Samples," Bulletin Académie Polonaise des Sciences et Lettres, A, (1931) 460-481. This phrase was more often used by others writing about Neyman and Pearson's work, e.g. Brandner "A Test of the Significance of the Difference of the Correlation Coefficients in Normal Bivariate Samples," Biometrika, 25, (1933), 102-109. The standing of "likelihood ratio" was confirmed by S. S. Wilks's "The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses," Annals of Mathematical Statistics, 9, (1938), 60-620 [John Aldrich, based on David (2001)]. See the entry WALD TEST. The term LIMAÇON was coined in 1650 by Gilles Persone de Roberval (1602-1675) (Encyclopaedia Britannica, article: "Geometry"). It is sometimes called Pascal's limaçon, for Étienne Pascal (1588? -1651), the first person to study it. Boyer (page 395) writes that "on the suggestion of Roberval" the curve is named for Pascal. LIMIT. Gregory of St. Vincent (1584-1667) used terminus to mean the limit of a progression, according to Carl B. Boyer in The History of the Calculus and its Conceptual Development. Isaac Newton wrote justifying limits in the Scholium to Section I of Book I of the Principia (Philosophiae Naturalis Principia Mathematica or The Mathematical Principles of Natural Philosophy) (first edition 1687) Perhaps it may be objected, that there is no ultimate proportion, of evanescent qualities; because the proportion, before the quantities have vanished, is not the ultimate, and when they are vanished, is none. But by the same argument, it may be alledged, that a body arriving at a certain place, and there stopping, has no ultimate velocity: because the velocity, before the body comes to the place, is not its ultimate, velocity; when it has arrived, is none. But the answer is easy; for by the ultimate velocity is meant that with which the body is moved, neither before it arrives at its last place and the motion ceases, nor after, but at the very instant it arrives; that is, that velocity with which the body arrives at its last place, and with which the motion ceases. And in like manner, by the ultimate ratio of evanescent quantities is to be understood the ratio of the quantities not before they vanish, nor afterwards, but with which they vanish. In like manner the first ratio of quantities is that with which they begin to be. And the first or last sum is that with which they begin and cease to be (or to be augmented or diminished). There is a limit which the velocity at the end of the motion may attain, but not exceed. This is the ultimate velocity. And there is the like limit in all quantities and proportions that begin and cease to be. And since such limits are certain and definite, to determine the same is a problem strictly geometrical. But whatever is geometrical we may be allowed to use in determining and demonstrating any other thing that is likewise geometrical. (Translated by Andrew Motte 1729) Katz (p. 471) comments, "A translation of Newton's words into an algebraic statement would give a definition of limit close to, but not identical with, the modern one." In 1821 Augustin-Louis Cauchy defined limit as follows: "If the successive values attributed to the same variables approach indefinitely a fixed value, such that they finally differ from it by as little as one wishes, this latter is called the limit of all the others." Cours d'analyse (Oeuvres II.3), p. 19. (Translation from Katz page 641) Cauchy introduced the modern ε, δ way of arguing. This entry was contributed by John Aldrich. See also Limit and Delta and epsilon on the Earliest Use of Symbols of Calculus page. LIMIT POINT. Cantor used Häufungspunkt (accumulation point) in an 1872 paper “Über die Ausdehnung eines Satzes der Theorie der trigonometrischen Reihen,” which appeared in Mathematische Annalen 5, pp. 123-132 [Roger Cooke]. Limit point is found in English in E. H. Moore “A Simple Proof of the Fundamental Cauchy-Goursat Theorem,” Transactions of the American Mathematical Society, Vol. 1, No. 4. (Oct., 1900), pp. 499-506. Point of accumulation appears in English in in E. W. Chittenden, “On the classification of points of accumulation in the theory of abstract sets,” Bulletin A. M. S. 32 (1926). LINDLEY’S PARADOX. "An example is produced to show that, if H is a simple hypothesis and x the result of an experiment, the following two phenomena can occur simultaneously: (i) a significance test for H reveals that x is significant at, say, the 5% level; (ii) the posterior probability of H given x, is, for quite small prior probabilities of H, as high as 95%." D. V. Lindley "A Statistical Paradox" Biometrika, 44, (1957), pp. 187-192. Lindley notes that "the paradox is not in essentials new, although few statisticians are aware of it." (p. 190) His earliest reference is to the discussion in Jeffreys’s Theory of Probability on which he comments, "Jeffreys is concerned to emphasise the similarity between his tests and those due to Fisher and the discrepancies are not emphasised." Lindley’s Paradox is the common name although some writers refer to the Jeffreys-Lindley Paradox. LINE FUNCTION was the term used for functional by Vito Volterra (1860-1940), according to the DSB. LINE GRAPH is found in the Danville Bee on Sept. 8, 1923: “A line graph shows the number of farms on which each of the crops is grown.” The term LINE INTEGRAL was used in 1873 by James Clerk Maxwell in a Treatise on Electricity and Magnetism, p. 71 in the phrase "Line-Integral of Electric Force, or Electromotive Force along an Art of a Curve" (OED2). Earlier in the book (p. 12) Maxwell explained how "Line-integration [is] appropriate to forces, surface-integration to fluxes." The concept of a line integral is much older: see the entry GREEN’S THEOREM. The alternate term curve integral has been seen in 1965 textbook, but it may be much older. The term LINE OF EQUAL POWER was coined by Steiner. LINEAR ALGEBRA. The DSB seems to imply that the term algebra linearia is used by Rafael Bombelli (1526-1572) in Book IV of his Algebra to refer to the application of geometrical methods to algebra. Linear associative algebra appears in 1870 as the title of a paper, "Linear Associative Algebra" by Benjamin Peirce. The paper was read before the National Academy of Sciences in Washington [James A. Linear algebra occurs in 1875 in the title, "On the uses and transformations of linear algebra" by Benjamin Peirce, published in American Acad. Proc. 2 [James A. Landau]. Pierce meant what today we would call a "finite dimensional algebra over a field," not the theory of vector spaces and linear transformations. [Fernando Q. Gouvea] Today the phase linear algebra is most familiar as the name of a subject. The usage seems to go back about 50 years. The oldest entry found in a COPAC search was from 1950: Hans Schwerdtfeger Introduction to Linear Algebra and the Theory of Matrices. See the entries MATRIX and VECTORS & VECTOR SPACES. LINEAR COMBINATION occurs in "On the Extension of Delaunay's Method in the Lunar Theory to the General Problem of Planetary Motion," G. W. Hill, Transactions of the American Mathematical Society, Vol. 1, No. 2. (Apr., 1900). LINEAR DEPENDENCE appears in the title “The Theory of Linear Dependence” by Maxime Bôcher published in 1900 in the Annals of Mathematics [James A. Landau]. LINEAR DIFFERENTIAL EQUATION appears in J. L. Lagrange, "Recherches sur les suites récurrentes don't les termed varient de plusieurs manières différentes, ou sur l'intégration des équations linéaires aux différences finies et partielles; et sur l'usage de ces équations dans la théorie des hasards," Nouv. Mém. Acad. R. Sci. Berlin 6 (1777) [James A. Landau]. LINEAR EQUATION appears in English the 1816 translation of Lacroix's Differential and Integral Calculus (OED2). LINEAR FUNCTION is found in 1843 in "Chapters in the Analytical Geometry of (n) Dimensions" by Arthur Cayley in the Cambridge Mathematical Journal, vol. IV [University of Michigan Digital Library]. Linear function is found in English in volume I of An Elementary Treatise on Curves, Functions and Forces by Benjamin Peirce. The title page of this work has 1852; the copyright date on the reverse of the title page is 1841 [James A. Landau]. LINEAR INDEPENDENCE is found in 1901 in Linear Groups, with an exposition of the Galois field theory by Leonard Eugene Dickson [James A. Landau]. LINEAR OPERATOR. Linear operation appears in 1837 in Robert Murphy, "First Memoir on the Theory of Analytic Operations," Philosophical Transactions of the Royal Society of London, 127, 179-210. Murphy used "linear operation" in the sense of the modern term "linear operator" [Robert Emmett Bradley]. LINEAR PRODUCT. This term was used by Hermann Grassman in his Ausdehnungslehre (1844). LINEAR TRANSFORMATION appears in 1843 in the title “Exposition of a general theory of linear transformations, Part II” by George Boole in Camb. Math. Jour. t. III. 1843, pp. 1-20. [James A. Landau] LINEARLY DEPENDENT was used in 1893 in "A Doubly Infinite System of Simple Groups" by Eliakim Hastings Moore. The paper was read in 1893 and published in 1896 [James A. Landau]. LINEARLY INDEPENDENT is found in 1847 in "On the Theory of Involution in Geometry" by Arthur Cayley in the Cambridge and Dublin Mathematical Journal [University of Michigan Historical Math LINK FUNCTION. One of the components of a GENERALIZED LINEAR MODEL is "the linking function, θ = f(Y) connecting the parameter of the distribution of z [the dependent variable] with the Y’s of the linear model." Nelder & Wedderburn Journal of the Royal Statistical Society, A, 135, (1972), p. 372. The term link function was introduced in J. A. Nelder’s "Log Linear Models for Contingency Tables: A Generalization of Classical Least Squares," Applied Statistics, 23, (1974), pp. 323-329. (Based on David (1998)) The term LITUUS (Latin for the curved wand used by the Roman pagan priests known as augurs) was chosen by Roger Cotes (1682-1716) for the locus of a point moving such that the area of a circular sector remains constant, and it appears in his Harmonia Mensurarum, published posthumously in Cambridge, 1722 [Julio González Cabillón]. The term LOCAL PROBABILITY is due to Morgan W. Crofton (1826-1915) (Cajori 1919, page 379). The term, which means "probability applied to geometrical magnitude," appears in the title of his 1868 paper, "On the Theory of Local Probability, applied to Straight Lines drawn at random in a plane; the methods used being also extended to the proof of certain new Theorems in the Integral Calculus," Philosophical Transactions of the Royal Society, 158, 181-199. LOCATION and SCALE. The location and scaling of frequency curves is discussed in §9 of R. A. Fisher’s "On the Mathematical Foundations of Theoretical Statistics" (Phil. Trans. R. Soc. 1922, p. 338). In §9 of Two New Properties of Mathematical Likelihood (Proc.R. Soc., A, 1934 p. 303) Fisher changed his terminology to the estimation of location and scaling. The terms location parameter and scale parameter were used by E. J. G. Pitman in "Tests of Hypotheses Concerning Location and Scale Parameters," Biometrika, 31, (1939), 200-215. David (2001) LOCUS is a Latin translation of the Greek word topos. Both words mean "place." According to Pappus, Aristaeus (c. 370 to c. 300 BC) wrote a work called On Solid Loci (Topwn sterewn). Pappus also mentions Euclid in connection with locus problems. Apollonius mentioned the "locus for three and four lines" ("...ton epi treis kai tessaras grammas topon...") in the extant letter opening Book I of the Conica. Apollonius said in the first book that the third book contains propositions (III.54-56) relevant to the 3 and 4 line locus problem (and, since these propositions are new, Apollonius claimed Euclid could not have solved the problem completely--a claim that caused Pappus to call Apollonius a braggard (alazonikos). In Book III itself there is no mention of the locus problem [Michael N. Fried]. Locus appears in the title of a 1636 paper by Fermat, "Ad Locos Planos et Solidos Isagoge" ("Introduction to Plane and Solid Loci"). In English, locus is found in 1727-41 in Chambers Cyclopedia: "A locus is a line, any point of which may equally solve an indeterminate problem. ... All loci of the second degree are conic sections" Locus geometricus is an entry in the 1771 Encyclopaedia Britannica. LOGARITHM. Before he coined the term logarithmus Napier called these numbers numeri artificiales, and the arguments of his logarithmic function were numeri naturales [Heinz Lueneburg]. Logarithmus was coined (in Latin) by John Napier (1550-1617) and appears in 1614 in his Mirifici Logarithmorum Canonis descriptio. According to the OED2, "Napier does not explain his view of the literal meaning of logarithmus. It is commonly taken to mean 'ratio-number', and as thus interpreted it is not inappropriate, though its fitness is not obvious without explanation. Perhaps, however, Napier may have used logos merely in the sense of 'reckoning', 'calculation.'" According to Briggs in Arithmetica logarithmica (1624), Napier used the term because logarithms exhibit numbers which preserve always the same ratio to one another. According to Hacker (1970): It undoubtedly was Napier's observation that logarithms of proportionals are "equidifferent" that led him to coin the name "logarithm," which occurs throughout the Descriptio but only in the title of the Constructio, which clearly was drafted first although published later. The many-meaning Greek word logos is therefore used in the sense of ratio. But there is an amusing play on words to which we might call attention since it does not seem to have been noticed. It is interesting that the Greeks also employed logos to distinguish reckoning, or that is to say mere calculation, from arithmos, which was generally reserved by them to indicate the use of number in the higher context of what today we call the theory of numbers. Napier's "logarithms" have indeed served both purposes. Logarithm appears in English in a letter of March 10, 1615, from Henry Briggs to James Ussher: "Napper, Lord of Markinston, hath set my Head and Hands a Work, with his new and admirable Logarithms. I hope to see him this summer, if it please God, for I never saw a book which pleased me better or made me more wonder." Logarithm appears in English in 1616 in E. Wright's English translation of the Descriptio: "This new course of Logarithmes doth cleane take away all the difficultye that heretofore hath beene in mathematicall calculations. [...] The Logarithmes of proportionall numbers are equally differing." In the Constructio, which was drafted before the Descriptio, the term "artificial number" is used, rather than "logarithm." Napier adopted the term logarithmus before his discovery was announced. Jobst Bürgi called the logarithm Die Rothe Zahl since the logarithms were printed in red and the antilogarithms in black in his Progress Tabulen, published in 1620 but conceived some years earlier (Smith vol. 2, page 523). [Older English-language dictionaries pronounce logarithm with an unvoiced th, as in thick and arithmetic.] See also BRIGGSIAN LOGARITHM, COMMON LOGARITHM, NAPIERIAN LOGARITHM, NATURAL LOGARITHM. LOGARITHMIC CURVE. Huygens proposed the terms hemihyperbola and linea logarithmica sive Neperiana. Christiaan Huygens used logarithmica when he wrote in Latin and logarithmique when he wrote in French. Johann Bernoulli used a phrase which is translated "logarithmic curve" in 1691/92 in Opera omnia (Struik, page 328). Logarithmic curve is found in English in 1715 in The Elements of Astronomy, Physical and Geometrical by David Gregory and Edmond Halley: "But this is the Property of the Logarithmic Curve very well known to Geometricians; therefore the Curve ACX is a Logarithmic Curve, whose Asymptote is the Right line BZ." [Google print search] LOGARITHMIC FUNCTION. Lacroix used fonctions logarithmiques in Traité élémentaire de calcul différentiel et de calcul intégral (1797-1800). Logarithmic function appears in 1831 in the second edition of Elements of the Differential Calculus (1836) by John Radford Young: "Thus, a^x, a log x, sin x, &c., are transcendental functions: the first is an exponential function, the second a logarithmic function, and the third a circular function" [James A. Landau] The term LOGARITHMIC POTENTIAL was coined by Carl Gottfried Neumann (1832-1925) (DSB). The term LOGARITHMIC SPIRAL was introduced by Pierre Varignon (1654-1722) in a paper he presented to the Paris Academy in 1704 and published in 1722 (Cajori 1919, page 156). Another term for this curve is equiangular spiral. Jakob Bernoulli called the curve spira mirabilis (marvelous spiral). LOGIC. The term logikê (knowledge of the functions of logos or reason) was used by the Stoics but it covered many philosophical topics that are not part of the modern subject. According to W. & M. Kneale The Development of Logic (1962) pp. 7 & 23, the word "logic" first appeared in its modern sense in the commentaries of Alexander of Aphrodisias who wrote in the third century AD. Until the late 19^th century the scope of the study was determined by the contents of Aristotle's (384-322 BC) writings on reasoning. These were assembled by his pupils after his death and became collectively known as the Organon or instrument of science. See Robin Smith’s Aristotle’s Logic. In the Middle Ages logic was one of the three sciences composing the ‘trivium’, the former of the two divisions of the seven ‘liberal arts’. The other constituents were rhetoric and grammar. The higher division, the ‘quadrivium’ consisted of arithmetic, geometry, astronomy and music. For the English word logic the OED’s earliest quotation is from 1362, the second from the Prologue to Chaucer’s Canterbury Tales: "A Clerk ther was of Oxenford also, That unto logik hadde longe ygo." (c.1386.) See QUADRIVIUM. Although the "Bibliography of Symbolic Logic" published in the first volume of the Journal of Symbolic Logic (December 1936, pp. 121-216) starts in 1666 with Leibniz, the modern era in logic begins in the 19^th century with the work of Augustus de Morgan (1806-1871) and George Boole (1815-1864). By the end of the century many new terms had been coined, including names for the subject, for the author’s particular take on it and for its various sub-divisions. Some of the names are still in use, although their meaning has often shifted. Some authors kept the new terms distinct, others would use them indifferently, e.g. Bertrand Russell treated "symbolic logic" and "mathematical logic" as interchangeable in his "Mathematical Logic as Based on the Theory of Types," American Journal of Mathematics, 30, (1908), 222-262. Formal logic and symbolic logic were used as book-titles by De Morgan (1847) and J. Venn (1881) respectively. G. Peano used mathematical logic as the name of his new subject. Its concerns were not those of traditional logic, as he explained to Felix Klein in 1894: "the aim of Mathematical logic is to analyse the ideas and reasoning which feature especially in the mathematical sciences." (quoted on p. 243 of Grattan-Guinness (2000). E. Schröder used the phrase algebra of logic in the title of his main work, Vorlesungen über die Algebra der Logik (volume 1, 1890). Logistic (French logistique) was used by Couturat and other speakers at the International Congress of Philosophy in 1904 and was popular for a few decades. The terms deductive logic and inductive logic originated in the 19^th century: W. S. Jevons called one of his books Studies in Deductive Logic (1880) and Venn one of his, The Principles of Empirical or Inductive Logic (1889). Informal logic is a new term, having been in use only since the 1970s; see Leo Groarke’s Informal logic. This entry was contributed by John Aldrich. See MATHEMATICAL LOGIC. A complete list of the set theory and logic terms on this web site is here. For the symbols of logic see Earliest Use of Symbols. LOGICISM is the doctrine that mathematics is in some significant sense reducible to logic. It is associated with the Principia Mathematica (1910-1913) of A. N. Whitehead and Bertrand Russell. According to Grattan-Guinness (2000, pp. 479 & 501), the word Logizismus was introduced by A. A. H. Fraenkel Einleitung in der Mengenlehre (1928) and R. Carnap Abriss der Logistik (1929). A JSTOR search found the English word in H. Reichenbach "Logical Empiricism in Germany and the Present State of its Problems," Journal of Philosophy, 33, (1936), p. 143. This entry was contributed by John Aldrich. See also FORMALISM and INTUITIONISM. The term LOGISTIC CURVE is attributed to Edward Wright (ca. 1558-1615) (Thompson 1992, page 145), although Wright used the term to refer to the logarithmic curve. Pierre Francois Verhulst (1804-1849) introduced the term logistique as applied to the sigmoid curve [Julio González Cabillón]. David (1995) gives the citation P. F. Verhulst (1845), "La Loi d' Accroissement de la Population," Nouveaux Mémoires de l'Académie Royale des Sciences et Belles-Lettres de Bruxelles, 18, 1-59. Presumably the term refers to the "log-like" qualities of the curve. The term logistic regression appears in D. R. Cox "The Regression Analysis of Binary Sequences," Journal of the Royal Statistical Society. Series B (Methodological), 20, (1958), 215-242. LOGIT first appeared in Joseph Berkson’s "Application to the Logistic Function to Bio-Assay," Journal of the American Statistical Association, 39, (1944), p. 361: "Instead of the observations q[i] we deal with their logits l[i] = ln(p[i] / q[i]). [Note] I use this term for ln p/q following Bliss, who called the analogous function which is linear on x for the normal curve ‘probit’." (OED) See PROBIT. LOGNORMAL. Logarithmic-normal was used in 1919 by S. Nydell in "The Mean Errors of the Characteristics in Logarithmic-Normal Distributions," Skandinavisk Aktuarietidskrift, 2, 134-144 (David, 1995). Lognormal was used by J. H. Gaddun in Nature on Oct. 20, 1945: "It is proposed to call the distribution of x 'lognormal' when the distribution of log x is normal" (OED2). The lognormal distribution was apparently first studied when Donald McAlister answered a question put by Francis Galton: to what "law of error" does the geometric mean bear the relationship that the arithmetic mean bears to the normal distribution? See Galton's The Geometric Mean, in Vital and Social Statistics Proceedings of the Royal Society of London, 29, (1879), 365-367, which prefaced McAlister’s paper "The Law of the Geometric Mean." They did not give a name to the new distribution. However, according to E. T. Whittaker & G. Robinson Calculus of Observations (1924, p. 218), Seidel had asked and answered the same question in 1863. See also ARITHMETIC MEAN and GEOMETRIC MEAN. LONG DIVISION is found in 1787 in The Lodon Gentleman’s and Schoolmaster’s Assistant by Thomas Whiting: “Long Division is when the Divisor is more than 12....” [Google print search] LORENZ ATTRACTOR. This object was first described by the meteorologist Edward Norton Lorenz in his paper “Deterministic non-periodic flow,” J. Atmos. Sci., 20 : 2 (1963) pp. 130–141. The term “Lorenz attractor” came into use in the 1970s when his work began to be noticed. See MathWorld and the Encyclopedia of Mathematics. LORENZ CURVE. This diagram was introduced by Max O. Lorenz in his “Methods of Measuring the Concentration of Wealth,” Publications of the American Statistical Association, 9, (Jun., 1905), pp. 209-219. The term Lorenz curve quickly entered circulation—see e.g. W. M. Persons “The Measurement of Concentration of Wealth,” Quarterly Journal of Economics, 24, (Nov., 1909), p. 172. According to M. J. Bowman “A Graphical Analysis of Personal Income Distribution in the United States,” American Economic Review, 35, (1945), p. 617n, “The same idea was introduced almost simultaneously by Gini, Chatelain and Séailles.” The Séailles reference is to his 1910 book La répartition des fortunes en France. LOSS and LOSS FUNCTION in statistical decision theory. In the paper establishing the subject ("Contributions to the Theory of Statistical Estimation and Testing Hypotheses," Annals of Mathematical Statistics, 10, 299-326) Wald referred to "loss" but used "weight function" for the (modern) loss function. He continued to use weight function, for instance in his book Statistical Decision Functions (1950), while others adopted loss function. Arrow, Blackwell & Girshick’s "Bayes and Minimax Solutions of Sequential Decision Problems" (Econometrica, 17, (1949) 213-244) wrote L rather than W for the function and called it the loss function. A paper by Hodges & Lehmann ("Some Problems in Minimax Point Estimation," Annals of Mathematical Statistics, 21, (1950), 182-197) used loss function more freely but retained Wald’s W. This entry was contributed by John Aldrich, based on David (2001) and JSTOR. See DECISION THEORY. The term LOWER SEMICONTINUITY was used by René-Louis Baire (1874-1932), according to Kramer (p. 575), who implies he coined the term. The term appears in Baire’s thesis, “Sur les fonctions de variables réelles,” Annali di Matematica Pura ed Applicata (3) 3 (1899), 1-123, and in his “Sur la théorie des fonctions discontinues, Comptes rendus, 129, (1899) 1010-1013. The phrase LOWEST TERMS appears in about 1675 in Cocker's Arithmetic, written by Edward Cocker (1631-1676): "Reduce a fraction to its lowest terms at the first Work" (OED2). (There is some dispute about whether Cocker in fact was the author of the work.) LOXODROME. Pedro Nunez (Pedro Nonius) (1492-1577) announced his discovery and analysis of the curve in De arte navigandi. He called the curve the rumbus (Catholic Encyclopedia). The term loxodrome is due to Willebrord Snell van Roijen (1581-1626) and was coined in 1624 (Smith and DSB, article: "Nunez Salaciense). LUCAS-LEHMER TEST occurs in the title, "The Lucas-Lehmer test for Mersenne numbers," by S. Kravitz in the Fibonacci Quarterly 8, 1-3 (1970). The test is named for Edouard Lucas and Dick Lehmer. The term Lucas's test was used in 1932 by A. E. Western in "On Lucas's and Pepin's tests for the primeness of Mersenne's numbers," J. London Math. Soc. 7 (1932), and in 1935 by D. H. Lehmer in "On Lucas's test for the primality of Mersenne's numbers," J. London Math. Soc. 10 (1935). The term LUCAS PSEUDOPRIME occurs in the title "Lucas Pseudoprimes" by Robert Baillie and Samuel S. Wagstaff Jr. in Math. Comput. 35, 1391-1417 (1980): "If n is composite, but (1) still holds, then we call n a Lucas pseudoprime with parameters P and Q ..." [Paul Pollack]. LUDOLPHIAN NUMBER. The number 3.14159... was often called the Ludolphische Zahl in Germany, for Ludolph van Ceulen. In English, Ludolphian number is found in 1886 in G. S. Carr, Synopsis Pure & Applied Math (OED2). In English, Ludolph's number is found in 1894 in History of Mathematics by Florian Cajori (OED2). LUNE. Lunula appears in A Geometricall Practise named Pantometria by Thomas Digges (1571): "Ye last figure called a Lunula" (OED2). Lune appears in English in 1704 in Lexicon technicum, or an universal English dictionary of arts and sciences by John Harris (OED2). Front - A - B - C - D - E - F - G - H - I - J - K - L - M - N - O - P - Q - R - S - T - U - V - W - X - Y - Z - Sources
{"url":"http://jeff560.tripod.com/l.html","timestamp":"2014-04-18T22:01:24Z","content_type":null,"content_length":"71610","record_id":"<urn:uuid:c5bca529-c233-41f2-86ea-004add68ce2c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of earned run average noun Baseball. a measure of the effectiveness of a pitcher, obtained by dividing the number of earned runs scored against the pitcher by the number of innings pitched and multiplying the result by nine. A pitcher yielding three earned runs in nine innings has an earned run average of 3.00. Abbreviation: ERA, era
{"url":"http://dictionary.reference.com/browse/earned+run+average","timestamp":"2014-04-17T04:35:21Z","content_type":null,"content_length":"89892","record_id":"<urn:uuid:14bc4b35-2b04-49d7-a6bc-c60c6757c23a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
New online R courses from Statistics.com Online training provider Statistics.com has introduced a couple of new R-related courses which are well worth checking out. These are all self-paced on-line courses, with materials by and interactive feedback from leading R gurus. Current R users looking to take their programming skills to the next level will be particularly interested in the Advanced Programming in R course from Hadley Wickham. Hadley has shared a cracking set of course materials for Advanced Programming in R, so you can see what's covered. (And if you'd like to see Hadley present that course in person, there's still a couple of seats left for his R Development Master Class in San Francisco.) The new training courses from Statistics.com are: Advanced Programming in R - This course will help participants write better code, focused on the mantra of “do not repeat yourself”. They will learn powerful new tools of abstraction, allowing to solve a wider range of problems with fewer lines of code. To get the most out of this course, students should have some experience programming in R already, be familiar with writing functions, and the basic data structures of R (vectors, matrices, arrays, lists and data frames). Participants will find the course particularly useful if they are an experienced R user looking to take the next step, or moving to R from other programming languages and want to quickly get up to speed with R’s unique features. Statistical Analysis of Microarray Data with R - This course will acquaint you with the process of microarray data mining from beginning to end. You will learn how to how to preprocess the data, estimate gene expression patterns, cluster genes to detect interesting gene expression patterns, and classify experiments (subjects) based on gene expression patterns. Illustrations of the statistical issues involved at the various stages of the analysis will use real data sets from DNA microarray experiments. And here's the calendar of their R-related courses for the remainder of the year: Statistics.com: Course Catalog You can follow this conversation by subscribing to the comment feed for this post.
{"url":"http://blog.revolutionanalytics.com/2011/05/new-online-r-courses-from-statisticscom.html","timestamp":"2014-04-20T05:42:01Z","content_type":null,"content_length":"27819","record_id":"<urn:uuid:ae34a6a4-91bd-4321-9a10-f79b7f59fc53>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Word Problems for Nursing School Entrance Exam Study Guide Math Word Problems for Nursing School Entrance Exam Study Guide (page 2) LearningExpress Editors Updated on Aug 12, 2011 The practice quiz for this study guide can be found at: Mathematics for Nursing School Entrance Exam Practice Problems Many of the math problems on tests are word problems. A word problem can include any kind of math, including simple arithmetic, fractions, decimals, percentages, and even algebra and geometry. The hardest part of any word problem is translating English into math. When you read a problem, you can frequently translate it word for word from English statements into mathematical statements. At other times, however, a key word in the word problem only hints at the mathematical operation to be performed. Here are the translation rules: EQUALS keywords: is, are, has English Math Bob is 18 years old. b = 18 There are 7 hats. h = 7 Judi has 5 cats. c = 5 ADDITION keywords: sum, more than, greater than, older than, total, altogether English Math The sum of two numbers is 10. x + y = 10 Karen has $5 more than Sam. k = 5 + s The base is 3 inches greater than the height. b = 3 + h Judi is 2 years older than Tony. j = 2 + t The total of three numbers is 25. a + b + c = 25 How much do Joan and Tom have altogether? j + t = ? SUBTRACTION keywords: difference, fewer than, less than, younger than, remain, left over English Math The difference between two numbers is 17. x – y = 17 Mike has 5 fewer cats than twice the number Jan has. m = 2j – 5 Jay is 2 years younger than Brett. j = b – 2 After Carol ate 3 apples, r apples remained. r = a – 3 MULTIPLICATION keywords: of, product, times, each, at English Math 20% of the samples 0.20 × s Half of the bacteria b The product of two numbers is 12. a × b = 12 DIVISION keyword: per English Math 15 drops per teaspoon 22 miles per gallon DISTANCE FORMULA: DISTANCE = RATE × TIME You know you will need to use the distance formula when you see movement words like: plane, train, boat, car, walk, run, climb, or swim. □ How far did the plane travel in 4 hours if it averaged 300 miles per hour? D = 300 × 4 D = 1,200 miles □ Ben walked 20 miles in 4 hours. What was his average speed? 20 = r × 4 5 miles per hour = r Solving a Word Problem Using the Translation Table Remember the problem at the beginning of this chapter about the jelly beans? Juan ate a. 60 b. 80 c. 90 d. 120 We solved it by working backward. Now let's solve it using our translation rules. Assume Juan started with J jelly beans. If Juan ate J jelly beans. Maria ate a fraction of the remaining jelly beans, which means we must subtract to find out how many are left. Maria ate of the J jelly beans, or J jelly beans. Multiplying out J gives J as the number of jelly beans left. The problem states that there were 10 jelly beans left, meaning that weset J equal to 10: J = 10 Solving this equation for J gives J = 60. Thus, the right answer is choice a (the same answer we got when we worked backward). As you can see, both methods—working backward and translating from English to math—work. You should use whichever method is more comfortable for you. The practice quiz for this study guide can be found at: Mathematics for Nursing School Entrance Exam Practice Problems From Nursing School Entrance Exam. Copyright © 2009 by LearningExpress, LLC. All Rights Reserved. Popular Articles Wondering what others found interesting? Check out our most popular articles.
{"url":"http://www.education.com/reference/article/word-problems3/?page=2","timestamp":"2014-04-19T12:26:27Z","content_type":null,"content_length":"102713","record_id":"<urn:uuid:40ad4b5f-f656-4c38-a41c-a10756e0e511>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: SYSTEM AND METHOD FOR DYNAMIC SPACE MANAGEMENT OF A DISPLAY SPACE Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A method for space management of a workspace provided on a display includes defining a first data structure of full-space rectangles present on the workspace, wherein at least a portion of the full-space rectangles are permitted to overlap. A second data structure of largest empty-space rectangles available on the workspace is also defined to complete the representation of the workspace. The methods include performing an operation on at least one full-space rectangle on the workspace and redefining the first data structure and the second data structure in accordance with the workspace resulting from the operation performed. The operations can include adding a new full-space rectangle, moving an existing full-space rectangle and deleting an existing full full-space rectangle from the workspace. Generally, the workspace is a display device coupled to an electronic device such as a personal computer, personal digital assistant, electronic book viewer and the A method for space management of a workspace comprising: allocating at least one full-space rectangle of the workspace; defining a first data structure for representing at least a portion of the at least one full-space rectangle to be present on the workspace, and permitting representation thereof in an overlapping configuration; defining a second data structure for one or more empty-space rectangles available on the workspace; performing an operation on the workspace involving the at least one full-space rectangle to define an updated workspace; and redefining the first data structure and the second data structure in accordance with the updated workspace. The method of space management according to claim 1, wherein the operation performed on at least one full-space rectangle is selected from the group consisting of adding a new full-space rectangle, deleting an existing full-space rectangle and moving an existing full-space rectangle. The method of space management according to claim 1, wherein the operation performed includes adding a new full-space rectangle to the workspace, and the redefining further comprises: adding an entry representing the new full-space rectangle to the first data structure; removing entries from the second data structure representing largest empty space rectangles which are intersected by the new full space rectangle; and adding entries to the second data structure representing the set of new largest empty-space rectangles resulting from the placement of the new full space rectangle. The method of space management according to claim 1, wherein defining the second data structure comprises defining the second data structure for one or more largest empty-space rectangles available on the workspace. The method of space management according to claim 1, wherein the redefining further comprises removing entries which are intersected by a full space rectangle following performing the operation. The method of space management according to claim 2, wherein the operation comprises the addition of a new full-space rectangle which is manually placed by a user. The method of space management according to claim 2, wherein the operation comprises the addition of a new full-space rectangle which is automatically placed in a final position on the workspace. The method of space management according to claim 6, wherein the operation of automatically placing the full-space rectangle further comprises: querying the second data structure to identify candidate largest empty space rectangles which satisfy at least one user defined placement parameter; selecting one of the candidate largest empty space rectangles; and placing the full-space rectangle within the selected candidate largest empty space rectangle. The method of space management according to claim 7, wherein the placement parameter includes a minimum area for the full-space rectangle being placed. The method of space management according to claim 7, wherein the placement parameter includes a minimum linear dimension for the full-space rectangle being placed. The method of space management according to claim 7, wherein the placement parameter includes an aspect ratio for the full-space rectangle being placed. The method of space management according to claim 7, wherein if a plurality of candidate largest empty-space rectangles are available, the selecting operation is performed in accordance with at least one user defined quality measure. The method of space management according to claim 11, wherein the quality measure is the empty-space rectangle which is closest in position to an initial placement of the full-space rectangle. The method of space management according to claim 11, wherein the quality measure is the empty-space rectangle which is the smallest candidate empty space rectangle. The method of space management according to claim 7, wherein the size of the full-space rectangle to be added is reduced by an amount up to a predetermined scaling factor and wherein the candidate largest empty-space rectangles include those empty-space rectangles which are at least as large as the original size reduced by the scaling factor. The method of space management according to claim 1, wherein the workspace comprises a three dimensional workspace. The method of space management according to claim 15, wherein the workspace comprises a physical workspace. A method of operating a display device in a computer system, the method comprising: providing a display workspace on the display device wherein content to be displayed to a user is defined in at least one full-space rectangle positioned on the workspace and permitting representation thereof in an overlapping configuration; storing in computer readable media a first data structure representing at least a portion of the at least one full-space rectangle present on a workspace of the display device; storing in computer readable media a second data structure for one or more empty-space rectangles available on the workspace, the largest empty space rectangles being defined, at least in part, by the placement of the portion of the at least one full-space rectangle stored in the first data structure; performing a user operation on the workspace involving the at least one full-space rectangle to define an updated workspace; and redefining the first data structure and the second data structure stored in the computer readable media in accordance with the updated workspace. A method for space management of a workspace provided on a display comprising: defining a first data structure for representing at least a portion of at least one full-space rectangle to be present on the workspace, and permitting representation thereof in an overlapping configuration; defining a second data structure for one or more empty-space rectangles available on the workspace; initiating an operation to be performed on the workspace involving the at least one full-space rectangle to be added to the first data structure; and querying the second data structure to determine candidate empty-space rectangles on the workspace to accommodate the operation to be performed; selecting one of the candidate empty-space rectangles based on at least one selection parameter; performing the operation to define an updated workspace; and redefining the first data structure and the second data structure in accordance with the updated workspace. CLAIM FOR PRIORITY TO RELATED APPLICATIONS [0001] This application is a continuation of U.S. patent application Ser. No. 12/124,797 filed May 21, 2008, and claims priority to U.S. patent application Ser. No. 10/258,510 filed Apr. 10, 2003, now U.S. Pat. No. 7,404,147, and International Application Serial No. PCT/US2001/13167 filed Apr. 24, 2001, and claims priority to U.S. Provisional Application Ser. No. 60/199,147 filed Apr. 24, 2000 and U.S. Provisional Application Ser. No. 60/230,958 filed Sep. 7, 2000, the contents of all which are hereby incorporated by reference in their entireties herein. BACKGROUND OF THE INVENTION [0004] 1. Field of the Invention The present invention relates generally to user display interfaces, and more particularly relates to a system and method for dynamic space management of a user display interface which efficiently manages available empty-space during both add and remove operations of full-space rectangles. 2. Background of the Related Art Computer graphics systems which are commonly used today generally provide a representation of the workspace, or display screen, occupied by the various elements of the scene. For example, in the case of a graphical user interface (GUI), such as a window manager for the Microsoft Windows® operating system, various icons and working windows are placed about the display space. In such an environment, it is often desirable to automatically allocate space for a new or modified object while avoiding intersecting or overlaying other objects which have already been allocated on the workspace. This generally either requires adjusting the size of the new object to fit within a selected space or more desirably, finding an available position on the display which maintains the size and aspect ratio of the object to be placed without overlapping previously placed objects. While several systems and methods for simplistic space management of a display have been used previously, such as simple window managers which use automatic tiling or cascading of objects, these systems have shortcomings. One aspect of effective space management is the modeling and use of the empty-space which is available on the workspace. One method of modeling the empty-space, such as on a user display, is described in the article "Free Space Modeling for Placing Rectangles without Overlapping" by Bernard et al, which was published in the Journal of Universal Computer Science, 3(6), pp 703-720, Springer-Verlag, June 1997. Bernard et al. describe a method of computing the free space available on a workspace, representing the free space as a set of empty-space rectangles, and using this representation to determine the placement of a new full-space rectangle on the display space in a non-overlapping manner. The modeling of the free space as a set of largest empty-space rectangles as disclosed by Bernard et al. provides an effective representation of the free space. Bernard et al. also disclose managing the workspace and adding new objects in the context of non-overlapping rectangles. However, Bernard et al. do not address the management of the display when two full-space objects overlap and do not provide a process for efficiently updating the empty-space model upon removal of a full-space rectangle from the display workspace. Accordingly, there remains a need for a dynamic space manager which efficiently models the available free space of a workspace in the presence of overlapping objects and during both add and remove operations affecting the workspace. OBJECTS AND SUMMARY OF THE INVENTION [0009] It is an object of the present invention to provide a method of managing a workspace during the addition and removal of objects from the workspace. It is another object of the invention to provide a method of managing a workspace, such as a display space, using largest empty-space rectangles to represent the free space and efficiently updating the empty-space representation after an object addition or object deletion operation. It is a further object of the present invention to provide a method of managing a workspace, such as a display space such that full-space rectangles can be added to existing empty-space or removed from the workspace in an efficient manner. It is another object of the invention to provide a method of managing a workspace, such as a display space, using largest empty-space rectangles to represent the free space of the workspace where at least some of the full-space objects on the display space overlap. In accordance with a first method for space management of a workspace provided on a display, a first data structure representing at least a portion of the full-space rectangles present on the workspace is defined and maintained. At least a portion of the full-space rectangles on the workspace are permitted to overlap. A second data structure of largest empty-space rectangles available on the workspace is also defined and maintained. The method further includes performing an operation on the workspace involving at least one full-space rectangle and redefining the first data structure and the second data structure in accordance with the workspace resulting from performing the operation. The operation which is performed on the workspace can include adding a new full-space rectangle, deleting an existing full-space rectangle, moving an existing full-space rectangle, and modifying an existing full-space rectangle on the workspace. The addition of a new full-space rectangle can include unrestricted manual placement of the rectangle by a user. The addition of a new full-space rectangle can also include automatic placement of the rectangle in a final position on the workspace. An undoable operation can be implemented by storing a copy of at least a portion of the first and second data structure prior to redefining the first and second data structures, accordingly. In such a case, it is preferred that only the portions of the first and second data structures which are altered by the operation are copied and stored. For example, in an undoable add operation, those empty space rectangles which are removed as a result of the add operation can be saved and those new empty space rectangles which are added to the workspace representation can be marked in the data structure. To undo the add operation, the marked entries in the data structure are removed and the previously removed empty space rectangles are reinstantiated in the second data structure. In the case where the operation includes adding a new full-space rectangle to the workspace, the step of redefining the first and second data structures can further include adding an entry representing the new full-space rectangle to the first data structure; removing entries from the second data structure representing largest empty space rectangles which are intersected by the new full space rectangle; and adding entries to the second data structure representing the set of new largest empty-space rectangles resulting from the placement of the new full space rectangle. The full-space rectangles are generally defined, at least in part, by a parameter of the content to be displayed in a full space rectangle. The parameter is generally user defined and can include an area required for the content, a minimum width, a maximum width, a minimum height, a maximum height, an original size, an aspect ratio and the like. The second data structure can be queried to determine the set of available candidate largest empty-space rectangles which can receive the full-space rectangle in accordance with the user parameter. In the case where there are a number of candidate largest empty-space rectangles which satisfy the user parameter(s), a user defined quality factor can be used to select among the candidate largest empty space rectangles. For example, in the case where a number of largest empty space rectangles are available which have a suitable size and aspect ratio available to receive the full-space rectangle, the user parameter can provide that the empty-space rectangle closest in position to an initial placement of the full-space rectangle is selected to receive the full-space rectangle. Alternatively, the smallest of the available empty-space rectangles with a suitable size and aspect ratio can be selected to receive the full-space rectangle. To add a degree of freedom in the automatic placement of a full-space rectangle, the size of the full-space rectangle to be added can be reduced by an amount up to a predetermined scaling factor. In this case, the available largest empty-space rectangles include those empty-space rectangles which are at least as large as the original size as reduced by the scaling factor. The operation performed on the workspace can also be a deletion operation where a full-space rectangle is removed from the workspace. For a deletion operation, the step of redefining the second data structure can include the steps of identifying the edges of the full-space rectangle to be deleted; selecting a first edge of the full-space rectangle to be deleted; identifying each empty-space rectangle in the second data structure which is adjacent to the selected edge; merging the adjacent empty-space rectangles with empty-space generated by deleting the full-space rectangle; adding the merged empty-space rectangle to the second data structure if the merged empty-space rectangle is a largest empty-space rectangle; dropping the merged empty-space rectangle if it is a subset of a previously identified largest empty-space rectangle; and saving the merged empty-space rectangle for a subsequent merging operation if the merged empty-space rectangle is not added or dropped. The next edge of the removed full space rectangle is selected and the saved merged empty space rectangles are used as input empty space rectangles, such that the combination of empty space rectangles progresses in a recursive fashion. An alternate method in accordance with the invention is applicable to operating a display device in a computer system. The method includes providing a display workspace on the display device wherein content to be displayed to a user is defined in a plurality of full-space rectangles positioned on the workspace. At least a portion of the full-space rectangles are permitted to overlap on the workspace. A first data structure representing at least a portion of the plurality of full-space rectangles present on a workspace of the display device is stored in computer readable media. A second data structure of largest empty-space rectangles available on the workspace is also stored in computer readable media. The largest empty space rectangles are defined by the placement of the portion of the plurality of full-space rectangles stored in the first data structure and the boundaries of the workspace. A user operation is performed on at least one full-space rectangle on the workspace and the first data structure and the second data structure stored in the computer readable media are redefined in accordance with the workspace resulting from the performing step. A further method for space management of a workspace provided on a display includes defining a first data structure for representing at least a portion of full-space rectangles to be present on the workspace. At least a portion of the full-space rectangles are permitted to overlap on the workspace. The method also includes defining a second data structure of largest empty-space rectangles available on the workspace. An operation to be performed on the workspace involving at least one full-space rectangle which is to be added to the first data structure is initiated and the second data structure is queried to determine the candidate largest empty-space rectangles on the workspace which can accommodate the operation to be performed. One of the candidate largest empty-space rectangles is then selected based on at least one selection parameter and the operation is performed. After performing the operation, the first data structure and the second data structure are redefined in accordance with the workspace resulting from the performing step. These and other objects and features of the invention will become apparent from the detailed description of preferred embodiments which is to be read in connection with the appended drawings. BRIEF DESCRIPTION OF THE DRAWINGS [0024] For a more complete understanding of the present invention, and advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings, wherein like reference numerals represent like parts, in which: FIGS. 1A-1E are pictorial diagrams illustrating the representation of the empty-space of a workspace as a set of four largest empty-space rectangles. FIGS. 2A-2F are pictorial diagrams illustrating the effect of adding an overlapping full-space rectangle to the empty-space representation of FIG. 1. FIG. 3 is a flow chart illustrating an overview of the operation of the present method of space management for a user interface. FIG. 4 is a flow chart illustrating the process of adding an additional full-space rectangle to the workspace and redefining the resulting empty-space representation of the workspace. FIGS. 5A-5H are pictorial diagrams illustrating the representation of the empty-space of a workspace after a second, overlapping full-space rectangle is added to the workspace. FIG. 6 is a flow chart illustrating the process of determining whether an empty-space rectangle is a largest empty-space rectangle. FIG. 7 is a pictorial flow diagram illustrating the recursive combination process performed to redefine the empty-space representation of the workspace upon removal of a full-space rectangle from the FIG. 8 is a flow chart illustrating the process of removing a full-space rectangle from the workspace. FIG. 9 is a pictorial diagram illustrating the removal of an overlapping full-space rectangle. FIGS. 10A and 10B are pictorial diagrams illustrating an exemplary application of the present space management methods in connection with an information visualization system. FIGS. 11A and 11B are pictorial diagrams illustrating an exemplary application of the present space management methods in connection with the placement of insertable content within a webpage. FIG. 12 is a pseudo-code representation of an exemplary implementation of the method of adding a full space rectangle to the representation of the workspace. FIG. 13 is a pseudo-code representation of an exemplary implementation of the incremental deletion of a full space rectangle from the representation of the workspace described in connection with FIGS. 7-9. FIGS. 14A, 14B, 14C and 14D are pictorial representations of a computer display in an embodiment of the present invention as a window manager. FIG. 15 is a pseudo-code representation of an exemplary implementation of the method of combining empty space rectangles to form largest empty space rectangles. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS [0040] The present method for managing a workspace, such as display space on a user interface, represents both the full space which is allocated to content being provided on the workspace and the empty-space which is available on the workspace. The full-space representation is a list of full-space rectangles which are placed on the workspace and for which an area of the workspace is allocated. The empty-space of the workspace is generally represented in a data structure which describes a set of largest empty-space rectangles. The largest empty-space rectangles are generally automatically determined based on the placement of the full-space rectangles on the workspace. The workspace is generally an electronic display, such as cathode ray tube (CRT), liquid crystal display (LCD), and the like, which is operatively coupled to a computer system or other electronic device. However, it will be appreciated that the workspace is not limited to real time display units and can also include such things as hard copy print outs and data provided to other processes The workspace can also take the form of a number of such electronic displays which are operated in a cooperative fashion as a single display system. In the present invention in the context of a two-dimensional workspace, full-space rectangles are rectangular regions which represent the rectangular extents of content being displayed on the workspace. Thus, full-space rectangles designate regions of the workspace which are allocated for particular content. Such full space rectangles can generally be permitted to overlap on the workspace, if desired by a user. Generally, while not required, for the sake of simplicity the full-space rectangles are axis-aligned with the workspace. In higher order dimensional workspaces, such as 3D, 4D and the like, the term full-space rectangle means a unit of content which is defined by mutually orthogonal axes, such as cuboids in a 3D spatial workspace. An empty-space rectangle is a rectangular region of a 2D workspace which is not occupied by a full-space rectangle. A largest empty-space rectangle is an empty space rectangle whose height and width are at maximums while not overlapping portion of any full-space rectangle on the workspace. As such, each largest empty-space rectangle is bounded by either one or more edges of a full-space rectangle or a border of the workspace. As with full-space rectangles, the concept of the largest empty space rectangle is extensible into n-dimensions of a workspace. It should be noted that not every object displayed or provided on the workspace needs to be represented in the data structure which defines the set of full space rectangles. For example, if a user wishes to provide content on a display, but does not care if other content is allowed to overlap this content, there does not need to be any alteration of the full space representation or empty space representation of the workspace. FIG. 1A is a pictorial diagram illustrating a 2D workspace 100 with a single full-space rectangle 102 placed therein. FIGS. 1B-1E illustrate the four largest empty-space rectangles 104, 106, 108 and 110, respectively, which result from the placement of full-space rectangle 102 on the workspace 100 and adding the full space rectangle to the full-space representation of the workspace. This set of largest empty-space rectangles represents the available areas for the placement of additional full-space rectangles. In the event a new full-space rectangle were to be placed on the workspace, placement parameters of the full-space rectangle, such as the area, dimensions and/or aspect ratio of the new full-space rectangle, can be compared to the empty-space rectangles 104, 106, 108, 110 to determine if any of these empty-space rectangles are candidates to accept the new full-space rectangle. As illustrated in FIGS. 2A-2F, the present methods also allow for full-space rectangles to be placed in an overlapping manner by a user. For example, FIG. 2A is essentially the same as that illustrated in FIG. 1A, where a single full-space rectangle 102 defines four largest free-space rectangles 104, 106, 108, 110, FIG. 2B represents the workspace after a user has placed a second full-space rectangle 200 onto the workspace 100 in an overlapping relationship with full-space rectangle 102. Referring to FIGS. 2C and 2D, it is apparent that this placement does not intersect with empty-space rectangles 106, 104, respectively. Accordingly, this portion of the empty-space representation does not need to be altered. However, referring to FIGS. 2E and 2F, the new full-space rectangle does intersect with empty-space rectangles 108, 110 and that these empty-space rectangles must be reduced to define a new set of largest empty-space rectangles for the empty-space representation of the workspace. FIG. 3 is a flow chart illustrating an overview of the operation of the present method of space management for a user interface. Starting with a blank workspace 100, such as a computer display, electronic book viewing device, personal digital assistant or the like, a first full-space rectangle 102 is placed at an arbitrary position within the workspace. If the user desires that this full space rectangle is to be added to the representation of the workspace and considered in modifying the empty space representation, the full space rectangle is added to the data structure of full space rectangles which are allocated area on the workspace (step 300). As illustrated in FIGS. 1B-1E, this results in a reduction of the available empty-space in the workspace 100 which is represented by a set of empty-space rectangles (step 305). From the set of empty-space rectangles, the set of largest empty-space rectangles is then determined (step 310). It will be appreciated that various methods of determining the set of largest empty space rectangles can be used. It will also be appreciated that steps 305 and 310 may be combined such that the set of largest empty space rectangles is determined in a single operation. It should be noted that not all content which is presented on the workspace needs to be represented as full-space in the workspace representation. For example, certain content may be displayed as background for other objects which are intended to be placed in an overlapping fashion over the background. Thus, while the background includes content to be displayed, it does not necessarily alter the empty-space representation of the workspace. Once the empty-space has been represented by the set of largest empty-space rectangles, several subsequent operations are possible. Manual placement of an additional full-space rectangle on the workspace by a user is one such possible operation (step 315). In this case, the placement can be unrestricted as to placement on the workspace 100 such that two or more full-space rectangles are permitted to overlap to any degree. After the manual placement of a full-space rectangle is selected, the full-space rectangle is added to the data representation (step 300) and the representation of the empty-space available on the workspace is again determined by repeating steps 305 and 310. In addition to manually adding an additional full-space rectangle, an existing full-space rectangle can be removed from the workspace (step 320). Once a full-space rectangle is removed, the full-space rectangle is removed from the representation of the full-space (step 321) and the representation of the empty-space available on the workspace is again determined by repeating steps 305 and 310. In the case of removal, the operations involved in determining the set of free space rectangles (step 305) and determining the set of largest free-space rectangles (step 310) are generally performed in accordance with FIGS. 7-9, and 13 which are described in further detail below. A third possible operation on the workspace is to place a new full-space rectangle within an available empty-space on the workspace using computer assistance (step 325). If a new full-space rectangle is to be automatically positioned, at least one placement parameter associated with the content is determined (step 330). Numerous parameters can be established by a user to determine the placement of the full-space rectangle. For example, the content may require a certain amount of area on the workspace. The parameter can also include a minimum and/or maximum constraint on the width or height. Further parameters can include the size and aspect ratio of the full-space rectangle. Also, if the user has dropped or dragged the full-space rectangle to an initial approximate position on the workspace, this initial position can also be determined and used as a placement parameter. The parameters described above are merely examples of the types of relevant parameters which a user can apply to the placement of a full space rectangle to the workspace. Following step 330, the empty-space representation is queried to determine which, if any, of the available largest empty-space rectangles can receive a full space rectangle which satisfies the placement parameter(s) which are in effect and, therefore, are suitable candidates to receive the full-space rectangle (step 335). For example, the query may provide which largest empty-space rectangles have a size and aspect ratio which can accommodate the new full-space rectangle. In step 340, if there is one or more candidate largest empty-space rectangles available, one of the candidate largest empty-space rectangle which most closely satisfies a user defined quality measure can be selected from the available candidates. As with the placement parameters, the quality measure for selecting among candidate empty space rectangles is largely determined by the specific application and the user's preference. For example, the quality measure may be such that the empty space rectangle which is closest to the initial position of the new full-space rectangle is selected. As another example, quality factor can be such that the largest empty-space rectangle that most closely matches the area or the size and aspect ratio of the new full-space rectangle may be selected. It will be appreciated that these are merely examples and that any number of such user-preference based quality factors can be applied to the selection of the largest empty-space rectangle from a number of available candidates. If in step 340, there was no suitable candidate empty space rectangle available, the user can be given the option to place the full space rectangle with some degree of overlap with other objects on the workspace (step 342). If the user elects to place with overlap, the process returns to step 300. If the user elects not to place the full space rectangle, the procedure terminates at step 343. If in step 340 a suitable largest empty space rectangle is selected, the full-space rectangle can be sized and/or positioned within the selected largest empty space rectangle (step 347). Again, user preferences can be used in determining the extent to which the full space rectangle is resized and positioned within the confines of the selected largest empty-space rectangle. Examples include maximizing the size without altering the aspect ratio, maximizing the width or the height, justifying the full space rectangle with respect to one or more borders, etc. Once the size and position are determined, the full space rectangle is added to the workspace representation (step 300) and the empty-space representation of the workspace is redetermined (steps 305, 310). The operation of determining the set of empty-space rectangles (step 305) after a full-space rectangle is added to the workspace will be described further in connection with FIG. 4 and FIGS. 5A-5I. Referring to the flow chart of FIG. 4, after a full-space rectangle is positioned on the workspace, the empty-space representation is queried to generate a list of largest empty-space rectangles which are adjacent to or overlap the new full space rectangle (step 400). Each edge of the full-space rectangle is identified (step 402). A first edge of the full-space rectangle is selected and is compared against the largest empty-space rectangles in the list of empty-space rectangles to determine if the selected edge intersects any of these rectangles (o) (step 405). If the selected edge (e) intersects an empty-space rectangle (o) in the list, a determination is made as to whether the edge (e) is collinear with an edge of the empty-space rectangle (step 410). If the edge intersects an empty-space rectangle in the list and is not collinear with an edge of the selected largest empty-space rectangle (o), then the empty-space rectangle must be reduced to create new empty-space rectangles (step 415). The new empty-space rectangles will be bounded by the selected edge (e) and either the boundary of the workspace or the edges of the existing empty-space rectangle. Steps 405 through 415 are repeated for each edge of the full-space rectangle identified in step 400. This can be performed, for example by determining if there are additional edges to be tested (step 420), and if so, selecting the next identified edge (425). If in step 405 it is determined that the edge (e) did not intersect the current empty-space rectangle, then step 420 would be performed to test another edge of the full-space rectangle. Similarly, if in step 410 it is determined that the current edge is collinear with an edge of the empty-space rectangle, than no reduction is required and the process again advances to step 420 to determine if all edges of the full-space rectangle have been tested against the current empty-space rectangle. After each edge of the full-space rectangle has been tested against the first selected largest empty-space rectangle from the empty-space representation, the empty-space representation is evaluated to determine if there are additional empty-space rectangles to be tested (step 430). If so, the next empty-space rectangle is selected from the empty-space representation (step 435) and steps 400 through 425 are repeated as described above. When this process is complete for all largest empty-space rectangles of the empty-space representation, the resulting empty-space rectangles are evaluated and those which are not largest empty-space rectangles are removed from the representation (step 440). The process of FIG. 4 can be visualized with reference to the pictorial diagrams of FIG. 2. Referring to FIGS. 2C and 2D, it can be seen that no edge of full-space rectangle 200 intersects largest empty-space rectangles 106, 104, respectively. Accordingly, each edge of rectangle 200 which was tested against largest empty-space rectangles 106, 104 would fail step 405, with the result that no reduction of these spaces is required. To the contrary, in FIGS. 2E and 2F, three of the edges of rectangle 200 intersect with largest empty-space rectangles 108 and 110. Thus, for each of these rectangles, the process of FIG. 4 will advance through step 415 to reduce the empty-space rectangles. Referring to the pictorial diagrams of FIG. 5A, FIG. 5B, FIG. 5C and FIG. 5D, the effect of the intersection of full-space rectangle 200 with empty-space rectangle 108 is demonstrated in accordance with the process illustrated in FIG. 4. Referring to FIG. 5B, edge 502 intersects with empty-space rectangle 108 (step 405) in a non-collinear manner (step 410). Accordingly, a new empty-space rectangle 510 is created which is bounded by edge 502 and three boundaries of empty-space rectangle 108, which in this case coincide with the boundaries of the workspace 100. Similarly, empty-space rectangle 512 is defined by the intersection of edge 504 with empty-space rectangle 108 and empty-space rectangle 514 is defined by the intersection of edge 508 with empty-space rectangle 108. In the same manner, empty-space rectangles 516, 518 and 520 are defined by the non-collinear intersection of edges 502, 504, 506 with largest empty-space rectangle 110. Empty-space rectangles 510, 512, 514, 516, 518 and 520 are the set of empty-space rectangles generated by the reduction of largest empty-space rectangles 108, 110. However, the largest empty-space rectangles which will make up the representation of the resulting empty-space are a subset of the resulting set of empty-space rectangles. For example, in FIG. 5C, edge 522 of empty-space rectangle 512 is not bounded by either a full-space rectangle or the boundary of the workspace. Accordingly, rectangle 512 is not a largest empty-space rectangle and is dropped from the representation, as indicated by the X through FIG. 5C. Similarly, edge 524 of rectangle 516 is not bounded by either a full-space rectangle or the boundary of the workspace and is also dropped from the final empty-space representation. Thus, after the full-space rectangle 200 is added to the workspace, the resulting representation of the empty-space includes largest empty-space rectangles 104, 106, 510, 514, 518 and 520. The process of adding a full space rectangle to a workspace, as described above in connection with FIGS. 4 and 5, is further represented in FIG. 12, which is a pseudo code listing representing an embodiment of the process. It will be appreciated that this embodiment is merely illustrative and that various programming implementations can be used in any number of programming languages to implement the present invention. The process of determining whether an empty-space rectangle is a member of the set of largest empty-space rectangles is further illustrated in the flow chart of FIG. 6. The process of FIG. 6 is repeated for each empty-space rectangle that is created when a new full-space rectangle is added to the workspace (step 605). The process starts with the selection of a first edge of a first selected empty-space rectangle (step 610). This edge is analyzed to determine if it is collinear with any edge of a full-space rectangle already placed in the workspace (step 615). If not, then the edge is tested to determine whether the edge is collinear with a boundary of the workspace (step 620). If both steps 615 and 620 fail, than the selected empty-space rectangle can be discarded as not being a largest empty-space rectangle (step 625). If additional empty-space rectangles are present, a next empty-space rectangle is selected and the process returns to step 610 for the newly selected If the selected edge is bounded by either a full-space rectangle (step 615) or the boundary of the workspace (step 620), then testing of the empty-space rectangle continues. If all four edges of the empty-space rectangle have been tested (step 630), the current empty-space rectangle is added to the set of largest empty-space rectangles (step 635). If all four edges of the empty-space rectangle have not yet been tested, a next untested edge of the rectangle is selected and the process returns to step 615 (step 640). The process of FIG. 6 adds an empty-space rectangle to the set of largest empty-space rectangles only if all four edges of the rectangle are bounded either by a full-space rectangle or the boundary of the workspace. It will be appreciated that the relative order of testing of these conditions is not critical and that steps 615 and 620 can be interchanged without substantially altering the performance of the method. Another aspect of the present space management method includes generating an empty-space representation of the workspace after a full-space rectangle is deleted from the workspace. This entails removing the full-space rectangle, F, from the list of full-space rectangles in the representation. It also involves identifying those empty-space rectangles that are included within or are adjacent to the boundaries of F. The empty-space rectangles that are presented by the removal of a full-space rectangle are then analyzed and recursively combined until the maximum extents of the combined empty-space rectangles are obtained. Those combined empty-space rectangles are then evaluated to determine which of the combined empty-space rectangles are largest empty-space rectangles which will be stored in the empty-space representation. The pictorial flow diagram of FIG. 7 illustrates an example of the recursive combination of empty-space rectangles which takes place following the deletion of a full-space rectangle. In the individual workspace representation diagrams that form this flow diagram, rectangles 704, 706, 708 and 710 represent full-space rectangles that remain in the workspace. Rectangle 712, delineated by dotted lines, represents a full-space rectangle to be removed from the workspace. The edges of rectangle 712 are analyzed one by one against the largest empty-space rectangles in the empty-space representation to determine where there is adjacent empty-space which can be combined. Operation 700 illustrates the processing relating to empty-space rectangles 714 and 716 which each have an edge that is collinear with the left edge of rectangle 712. The workspace representation 702 graphically illustrates the input for the combination of rectangles 712 and 714. Workspace representation 718 represents the output state for this combination where empty-space rectangle 720 is formed. The workspace representation 722 illustrates the output which results from the combination of rectangles 712 and 716 illustrated in workspace representation 704 to yield empty-space rectangle 724. As noted above, the combination process is a recursive operation. Thus, empty-space rectangles 720, 724 which are the output solutions for operation 700 on the left edge of rectangle 712 are used as the input rectangles for operation 727 with respect to the right edge of rectangle 712. Workspace representations 726, 728 illustrate the intersection of rectangle 720 with the free space rectangles 730, 732, respectively, which abut the right edge of removed full-space rectangle 712. Workspace representation 734 illustrates empty-space rectangle 736 which results from the combination of empty-space rectangles 720, 726. Similarly, workspace 738 illustrates the combination of rectangles 720, 732 to generate empty-space rectangle 740. In a similar fashion, workspace representations 742, 746 illustrate right edge processing of rectangle 724 with empty-space rectangles 744, 748, respectively, which have an edge abutting the right edge of rectangle 712. As there is no intersection or abutment between empty-space rectangle 724 and empty-space rectangle 748, there is no combination operation among these two empty-space rectangles, as illustrated by the solid X through workspace representation 746. Workspace representation 750 illustrates the formation of rectangle 752 from the combination of rectangles 724 and 744. Operation 755 illustrates processing related to the bottom edge of removed full-space rectangle 712. The input rectangles for operation 755 are rectangles 736 and 740 from operation 726 which each have an edge coincident with the bottom edge of rectangle 712. Note that rectangle 752 in workspace 750 of operation 726, does not have any component which intersects with or is coincident with the bottom edge of rectangle 712 and, therefore, is not an input parameter for operation 755. The combination of empty-space rectangles 736 and 756 in workspace representation 754 yield empty-space rectangle 760 depicted in workspace representation 758. The combination of empty-space rectangles 736 and 762 in workspace representation 764 yields empty-space rectangle 766 depicted in workspace representation 768. The combination of empty-space rectangles 740 and 770 in workspace representation 772 yields empty-space rectangle 774 depicted in workspace representation 776. Empty-space rectangle 774 is bounded on the right side by the workspace boundary, on the top edge by full-space rectangle 706, on the left edge by full-space rectangle 710 and on the bottom edge by full-space rectangle 708. Accordingly, as indicated in FIG. 7 by the circle around workspace representation 776, empty-space rectangle 774 is a largest empty-space rectangle which will be added to the empty-space representation and no further processing on this rectangle is required. Workspace representation 780 illustrates the combination of empty-space rectangles 740 and 778 to yield empty-space rectangle 782 of workspace representation 784. However, rectangle 782 is a subset of rectangle 766 illustrated in workspace representation 768 and, therefore, is not a largest empty-space rectangle. Accordingly, rectangle 782 is dropped from subsequent processing, as illustrated by the dotted X through workspace representation 784. Operation 785 illustrates the continued processing of empty-space rectangles which are coincident with the top edge of rectangle 712. The input rectangles for this processing operation include empty-space rectangles 760 and 766 from operation 754 as well as empty-space rectangle 752 resulting from operation 726. Workspace representation 786 illustrates the combination of empty-space rectangles 788 and 760 to yield empty-space rectangle 790 of workspace representation 792. Workspace representation 794 illustrates the combination of empty-space rectangles 760 and 796 to generate empty-space rectangle 798 of workspace representation 800. Workspace representation 802 illustrates the combination of empty-space rectangles 804 and 766 to yield empty-space rectangle 806 of workspace representation 808. As indicated by the circles around workspace representations 792, 800 and 808, empty-space rectangles 790, 798 and 806 are largest empty-space rectangles which will be added to the empty-space representation. Workspace representation 810 illustrates the combination of empty-space rectangles 752 and 812 to yield empty-space rectangle 814 of workspace representation 816. However, empty-space rectangle 814 is fully included in empty-space rectangle 798 shown in workspace representation 800 and is not a largest empty-space rectangle. This is also evident as the bottom edge of rectangle 814 is not bounded by either a full-space rectangle or a boundary of the workspace. Workspace representation 818 illustrates the combination of empty-space rectangles 752 and 820 to yield empty-space rectangle 822 of workspace representation 824. As indicated by the circle around the workspace representation 824, rectangle 822 is a largest full-space rectangle which will be added to the empty-space representation. FIG. 15 is a pseudo code listing illustrating one example of an implementation of the process for combining empty space rectangles, which takes place during the operation of deleting a full space rectangle from the workspace representation. It will be appreciated that this embodiment is merely illustrative and that various programming implementations can be used in any number of programming languages to implement the present invention. FIG. 8 is a flow chart which further describes the process of redetermining the empty-space representation of a workspace upon removal of a full-space rectangle. The four edges of the full-space rectangle to be removed are identified (step 850). As described in connection with FIG. 7, as the rectangles are generally axis aligned with the workspace, the edges can be described as left, right, bottom and top. As the process is applicable to those environments which allow overlapping full-space rectangles, the process also identifies edges of underlying full-space rectangles which are within the extents of the rectangle to be removed (step 855). This is further illustrated in FIG. 9, where full-space rectangle 915 which overlaps full-space rectangles 910 and 920 is to be removed. Upon removal of full-space rectangle 915, only a portion of the area underlying full-space rectangle 915 is empty-space. The edges of full-space rectangles 910, 920 which are within the extents of full-space rectangle 915 will limit the combination of free space rectangles. For example, in processing the left edge of rectangle 915, the combination of empty-space rectangle 925 and the region of full-space rectangle 915 will only extend to the point of intersection with the left edge of full-space rectangle 920, as illustrated by the hatching indicating output rectangle 930. Returning to FIG. 8, after the interior intersecting edges of full-space rectangles are identified, the first edge of the full-space rectangle to be removed from the workspace is selected (step 860). All largest empty-space rectangles in the empty-space representation which are adjacent to the selected edge are identified (step 865). Each of the identified empty-space rectangles are selected in turn and to the extent that the rectangles are adjacent and define a larger rectangular empty-space, the rectangles are merged to generate output empty-space rectangles (step 870). Each merged empty-space rectangle of step 870 is tested to determine if it is a subset of any other empty-space rectangle (step 875). If the answer in step 875 is yes, then the empty-space rectangle of step 870 is dropped from further processing (step 880). Processing continues by testing the set of largest empty-space rectangles adjacent to the current edge to determine if all empty-space rectangles have been tested (step 885). If not, the next empty-space rectangle adjacent to the current edge is selected (step 886) and processing returns to step 870. If in step 885, all empty-space rectangles adjacent to the current edge have been tested, the set of edges of the full-space rectangle to be removed is tested to determine if all edges have been processed (step 887). If not, the next edge is selected (step 888) and processing returns to step 865. If all edges of the full-space rectangular have been evaluated, then processing is complete. Returning to step 875, if the resulting empty-space rectangle is not a subset of another empty-space rectangle, the output empty-space rectangle from step 870 is then tested to determine whether it is a largest empty-space rectangle (step 890). The testing of step 890 can be performed in a manner substantially as described in connection with FIG. 6. If the output rectangle is a largest empty-space rectangle, it is added to the empty-space representation (step 892) and processing continues by determining if more adjacent empty-space rectangles are available for processing (step 885). If in step 890 the current output rectangle is not a largest empty-space rectangle, the output rectangle is retained as an input parameter for subsequent recursive processing operations (step 894). Thus, the output rectangle is added to the set of empty-space rectangles which are evaluated in subsequent iterations of step 865. FIG. 13 is a pseudo code listing illustrating one example of an implementation of the process for deleting a full space rectangle from the workspace representation. It will be appreciated that this embodiment is merely illustrative and that various programming implementations can be used in any number of programming languages to implement the present invention. There are any number of ways of storing and querying the representation of the workspace. Various known data structures may offer benefits regarding storage space efficiency or query efficiency. In one embodiment, the space management method maintains a first data structure for storing data relating the full-space rectangles in the workspace and a second data structure for storing data relating the largest empty-space rectangles in the work space. In the case of a two dimensional workspace, the first and second data structures can take the form of a 2D interval tree. For higher order dimensional workspaces of dimension n, an n-dimensional interval tree can be used. In the operation of adding a full-space rectangle as described above, the representation of the empty-space prior to the add operation is generally overwritten by the new representation. In such a case, the ability to efficiently undo an add or delete operation is lost. An undoable add operation can be implemented by saving a copy of at least a portion of the data structures of the work space representation prior to the add operation. To the extent that computer memory or other digital storage is available, several prior versions of the data structures of the work space representation can be stored, such that a multi-operation undo can also be implemented. Preferably, only the affected portions of the data structures need to be copied and saved. For example, in an undoable add operation, those empty space rectangles which are removed as a result of the add operation can be saved and those new empty space rectangles which are added to the workspace representation can be marked in the data structure. To undo the add operation, the marked entries in the data structure are removed and the previously removed empty space rectangles are reinstantiated in the data In addition to providing an efficient way to restore a previous display state of the workspace, the undoable add operation can be used to animate selected objects more quickly than if the objects were repeatedly added and deleted as discussed above. For example, a full-space rectangle having an object to be animated is first deleted from the workspace. Next an undoable add operation is used to place a modified version of the object in accordance with a frame of the animation. The undoable add operation is then undone, and a newly modified object is added in its place, again using an undoable add operation for the next frame of the animation. The present space management methods are suitable for a wide range of graphics applications, such as window manager programs, graphical data presentation tools, electronic book displays and the like. In the case of a window manager program, such as illustrated in the exemplary embodiments of FIG. 14A, FIG. 14B, FIG. 14C, and FIG. 14D, the current dynamic space management methods enable several features with respect to adding, deleting and moving content on the display. FIG. 14A illustrates a workspace 1400, which in this case is a computer display for a window based operating system. On the workspace are displayed a number of full-space rectangles, including 1402, 1404, 1406, 1408, 1410, 1412 and 1414. In this initial placement, a number of the rectangles 1404, 1406, 1408, 1410, 1412 and 1414 are placed in an overlapping fashion. FIG. 14B illustrates the conventional placement of the rectangles if a user simply moves rectangle 1402 to a new position which overlaps rectangles 1404, 1406, 1408, 1410, 1412 and 1414. FIG. 14C illustrates an example of an overlap avoidance drag operation in accordance with the invention which repositions rectangle 1402 from the position in FIG. 14B to the closest non-overlapping position illustrated in FIG. 14C. In this ease, the rectangles 1404, 1406, 1408, 1410, 1412 and 1414 are not repositioned. FIG. 14D illustrates the workspace resulting from an alternate embodiment of an overlap avoidance drag operation in accordance with the invention. In this case, the system maintains the placement of the dragged rectangle 1402 but rearranges the rectangles 1404, 1406, 1408, 1410, 1412 and 1414 to new positions to avoid overlap. The selection of placement parameters for rectangle 1402, and those rectangles which would be affected by a drag operation of rectangle 1402, are generally user established parameters which are based on user preferences and application specific requirements. For example, a first user may prefer the results of FIG. 14C whereas a second user would prefer the results of FIG. 14D. In either case, the present method of managing a workspace provides a data structure representing the available empty space which can be queried to determine a set of possible positions for a given object of full space. The user parameters are then used to select among the available empty-space rectangles and to establish a final position of the full-space on the workspace. This enables the arrangement of full-space to accommodate a vast range of user preferences. As noted above, the current methods maintain a representation of the full-space and the empty-space of the workspace. When a full-space rectangle is to be moved or resized, it is first deleted from the full-space representation and the empty-space representation is updated. During the move, space management for overlap avoidance can take place continuously throughout the move operation or can be performed at the end of the move operation. In the first case, as the user selects a full-space rectangle and drags the selected rectangle to a new position on the work space, any other full-space rectangles which are in the drag path and would intersect the selected full space rectangle can be dynamically moved out of the way into other empty-space regions. Alternatively, the space management can take place after a user completes the move. In this later case, automatic positioning can involve either moving any full-space rectangles which are overlapped by the new placement of the moved object or by adjusting the final position of the moved object, such as into the closest available position that will satisfy the current placement parameters set by the user, such as, size and aspect ratio of the full-space rectangle. If desired, the system can be allowed to alter the size and or aspect ratio within predefined limits set by a user to provide for placement during a move operation. For example, if a predetermined size scaling factor of 0.8 is used, than the system would be allowed to place a moved rectangle in an empty-space rectangle having a suitable aspect ratio and a size in the range of 0.8 to 1.0 of the size of the original full-space rectangle. Alternatively, if the user established that the position of a full-space rectangle was critical, than the system could be provided with an option of maintaining the position and aspect ratio of the full-space rectangle and scaling the size of the full-space rectangle to fit the available empty-space rectangle at the selected position. Other positioning and sizing parameters can also be set by a user to control the final placement of the full space rectangle on the workspace, such as total area, minimum and/or maximum dimensions, and even relative parameters such as "next to rectangle x," "above rectangle y," and the like, where x and y can represent objects already placed on the workspace. FIGS. 10A and 10B are pictorial representations of an application of the present space management methods in connection with a three dimensional information visualization system. While such a system can be used in any number of specific applications, the application will be described in the context of a medical information system. The pictorial diagram of FIG. 10A depicts a workspace 1000 which represents a patient treatment timeline. Along a time line grid 1005 a number of folder icons 1010, 1015, 1020, 1025, 1030, 1035 are displayed on the workspace which are indications that detailed information regarding the patient, such as signs, symptoms, indications, treatment rendered, medication provided and the like, is available for a particular time period. In order to access the information, one of the folders can be selected to be brought to the foreground and enlarged. However, since the time line features other important patient data in other folders which may need to be accessed simultaneously, it would be disadvantageous for the folder which was brought forward to overlap another folder displayed on the background. As set forth above, it is not required, and in some cases not desired, that all content which is displayed on the workspace participate in the data structures which define full space and empty space on the workspace. In this case, the background timeline grid 1005 is content which is intended to have other objects placed in an overlapping relationship to it. Therefore, the full space rectangle which defines the background timeline grid 1005 can either be displayed without ever adding it to the full-space representation or it can added initially and then deleted from the full space representation without removing the content from the display. The space manager can query the empty space representation, which includes the area of the background timeline grid 1005, for the largest available empty-space rectangle to receive the enlarged folder. The folder can then be dynamically enlarged, while generally maintaining the original aspect ratio, to fit within the selected empty-space rectangle. This is illustrated in FIG. 10B where folder 1035 of FIG. 10A has been selected, brought to the foreground and enlarged as folder 1040. The present space management methods are also well suited to various electronic publishing and online content management applications. FIGS. 11A and 11B illustrate the use of the present space management methods for use in dynamic placement of advertising content within a web page. The workspace 1100 of FIG. 11 illustrates a typical web browser page with various content from an Internet Service Provider (ISP) being displayed. It is well known that advertising can be inserted into such web pages, such as through banner advertisements which are generally placed in a predetermined portion of the content page specifically reserved for such content. Using the present dynamic space management methods, an empty-space representation can be generated for a webpage to identify those areas which are available to receive advertising or other insertable content without specifically reserving a portion of the display for such content. For example, largest empty space rectangles 1102, 1104, 1106, 1108 and 1110 are representative empty-space rectangles which are available to receive insertable content. Referring to FIG. 11B, a block of advertising content 1112 can be inserted as a full space rectangle having a defined size and aspect ratio which can be received by empty space rectangle 1110. The various systems and methods can be implemented as computer software which can be operated by any number of conventional computing platforms. For example, an IBM® compatible personal computer operating with an Intel Pentium III® processor, having 256 MB of RAM and operating the Windows® 2000 operating system has been found suitable for applications such as those illustrated and described in connection with FIGS. 10 and 11. Suitable software can be generated in any conventional programming language or environment, such as Java 1.2. It will be appreciated that such software can be embodied in computer readable media such as various forms of computer memory, hard disk drives, CD-ROM, diskette and the like. The invention has been described in the context of a two dimensional work space represented by a list of full space rectangles and a list of empty space rectangles to represent the workspace. However, the invention is not limited to applications in two dimensions. The methods described are well suited for extension to three or more (n) dimensions. In the 3D case, rather than using planar rectangles in the representation, 3D axis-aligned cuboid bounds can be used as the basis of representation. The 3D full-space cuboids are then processed to generate a list of empty-space cuboids that represent the empty space in 3D. One application of 3D space management is to represent 3D physical space. For example, warehouse management systems could use such a representation to maximize space utilization while still maintaining enough space to physically retrieve inventory. In general, given a 3D layout, the present methods for representing and managing a workspace makes it easy to compute the placement or movement of objects in the empty space. In addition, while the dimensions of the workspace have generally been described as spatial, one or more of the dimensions in the representation can represent nonspatial dimensions. For example, one dimension could be mapped to time. Thus, in a 3D system, the use of two spatial dimensions and one temporal dimension could enable queries for finding an optimal place for storing an object based on its 2D footprint and the duration for which it must be stored. In a exemplary 4D space manager, three dimensions might be devoted to space and one to time. Of course, this can be extended to further dimensions and dimensional parameters, as required. The workspace has generally been considered to be planar for the sake of simplicity in the explanation. However, this is not required. The current methods can also be applied to a workspace that is continuous, or that "wraps around" the left and right edges of the rectangular workspace. Alternatively, the rectangular space manager can be mapped to the surface of a sphere or a cylinder. This enables modeling such objects as the surface of the earth, or the space above the earth, or processing a cylindrical or spherical wrap-around information space for an immersive wearable user interface. Wrapping the bottom and top edges of the workspace also allows the present methods to be extended to model toroidal information surfaces. Although the present invention has been described in connection with several exemplary embodiments, it will be appreciated that various changes and modifications may be suggested to one skilled in the art. It is intended that the present invention encompass such changes and modifications as fall within the scope of the appended claims. Patent applications by Blaine A. Bell, New York, NY US Patent applications by Steven A. Feiner, New York, NY US Patent applications in class Merge or overlay Patent applications in all subclasses Merge or overlay User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120200599","timestamp":"2014-04-18T17:27:28Z","content_type":null,"content_length":"102138","record_id":"<urn:uuid:cc659867-dfbf-4094-897b-b464a4661fb8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple statistical gradient-following algorithms for connectionist reinforcement learning Results 1 - 10 of 271 - Journal of Artificial Intelligence Research , 1996 "... This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem ..." Cited by 1298 (23 self) Add to MetaCart This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. - In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence , 2000 "... We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a model. Our approach is based on the following observation: Any (PO)MDP can be transformed into an "equivalent" POMDP ..." Cited by 207 (7 self) Add to MetaCart We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a model. Our approach is based on the following observation: Any (PO)MDP can be transformed into an "equivalent" POMDP in which all state transitions (given the current state and action) are deterministic. This reduces the general problem of policy search to one in which we need only consider POMDPs with deterministic transitions. We give a natural way of estimating the value of all policies in these transformed POMDPs. Policy search is then simply performed by searching for a policy with high estimated value. We also establish conditions under which our value estimates will be good, recovering theoretical results similar to those of Kearns, Mansour and Ng [7], but with "sample complexity" bounds that have only a polynomial rather than exponential dependence on the horizon time. Our method appl... , 1995 "... Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative model ..." Cited by 194 (22 self) Add to MetaCart Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximizing an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways. - SIAM JOURNAL ON CONTROL AND OPTIMIZATION , 2001 "... In this paper, we propose and analyze a class of actor-critic algorithms. These are two-time-scale algorithms in which the critic uses temporal difference (TD) learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction based on in ..." Cited by 174 (1 self) Add to MetaCart In this paper, we propose and analyze a class of actor-critic algorithms. These are two-time-scale algorithms in which the critic uses temporal difference (TD) learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actorcritic algorithms for Markov decision processes with general state and action spaces. We state and prove two results regarding their convergence. - Journal of Artificial Intelligence Research , 2001 "... Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce � � , a si ..." Cited by 153 (5 self) Add to MetaCart Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce � � , a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes ( � s) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm’s chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter � � (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of � � , and show how the correct choice of the parameter is related to the mixing time of the controlled �. We briefly describe extensions of � � to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by � � can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. 1. , 2000 "... Cooperative games are those in which both agents share the same payoff structure. Valuebased reinforcement-learning algorithms, such as variants of Q-learning, have been applied to learning cooperative games, but they only apply when the game state is completely observable to both agents. Poli ..." Cited by 131 (4 self) Add to MetaCart Cooperative games are those in which both agents share the same payoff structure. Valuebased reinforcement-learning algorithms, such as variants of Q-learning, have been applied to learning cooperative games, but they only apply when the game state is completely observable to both agents. Policy search methods are a reasonable alternative to value-based methods for partially observable environments. In this paper, we provide a gradient-based distributed policysearch method for cooperative games and compare the notion of local optimum to that of Nash equilibrium. We demonstrate the effectiveness of this method experimentally in a small, partially observable simulated soccer domain. 1 INTRODUCTION The interaction of decision makers who share an environment is traditionally studied in game theory and economics. The game theoretic formalism is very general, and analyzes the problem in terms of solution concepts such as Nash equilibrium [12], but usually works under the , 1999 "... We consider the problem of choosing a near-best strategy from a restricted class of strategies in a partially observable Markov decision process (POMDP). We assume we are given the ability to simulate the behavior of the POMDP, and we provide methods for generating simulated experience su cient to a ..." Cited by 111 (10 self) Add to MetaCart We consider the problem of choosing a near-best strategy from a restricted class of strategies in a partially observable Markov decision process (POMDP). We assume we are given the ability to simulate the behavior of the POMDP, and we provide methods for generating simulated experience su cient to accurately approximate the expected return of any strategy in the class. We prove upper bounds on the amount of simulated experience our methods must generate in order to achieve such uniform approximation. These bounds have no dependence on the size or complexity of the underlying POMDP, but depend only on the complexity of the restricted strategy class. The main challenge is in generating trajectories in the POMDP that can be reused, in the sense that they simultaneously provide estimates of the return of many strategies in the class. Our measure of strategy class complexity generalizes the classical notion of VC dimension, and our methods develop connections between problems of current interest in reinforcement learning and well-studied issues in the theory of supervised learning. We also discuss a number of practical planning algorithms for POMDPs that arise from our reusable trajectories. - Autonomous Robot , 2003 "... Abstract. The complexity of the kinematic and dynamic structure of humanoid robots make conventional analytical approaches to control increasingly unsuitable for such systems. Learning techniques offer a possible way to aid controller design if insufficient analytical knowledge is available, and lea ..." Cited by 91 (20 self) Add to MetaCart Abstract. The complexity of the kinematic and dynamic structure of humanoid robots make conventional analytical approaches to control increasingly unsuitable for such systems. Learning techniques offer a possible way to aid controller design if insufficient analytical knowledge is available, and learning approaches seem mandatory when humanoid systems are supposed to become completely autonomous. While recent research in neural networks and statistical learning has focused mostly on learning from finite data sets without stringent constraints on computational efficiency, learning for humanoid robots requires a different setting, characterized by the need for real-time learning performance from an essentially infinite stream of incrementally arriving data. This paper demonstrates how even high-dimensional learning problems of this kind can successfully be dealt with by techniques from nonparametric regression and locally weighted learning. As an example, we describe the application of one of the most advanced of such algorithms, Locally Weighted Projection Regression (LWPR), to the on-line learning of three problems in humanoid motor control: the learning of inverse dynamics models for model-based control, the learning of inverse kinematics of redundant manipulators, and the learning of oculomotor reflexes. All these examples demonstrate fast, i.e., within seconds or minutes, learning convergence with highly accurate final peformance. We conclude that real-time learning for complex motor system like humanoid robots is possible with appropriately tailored algorithms, such that increasingly autonomous robots with massive learning abilities should be achievable in the near future. 1. - In Proceedings of the ICML-2002 The Nineteenth International Conference on Machine Learning , 2002 "... We present several new algorithms for multiagent reinforcement learning. A common feature of these algorithms is a parameterized, structured representation of a policy or value function. This structure is leveraged in an approach we call coordinated reinforcement learning, by which agents coordinate ..." Cited by 84 (6 self) Add to MetaCart We present several new algorithms for multiagent reinforcement learning. A common feature of these algorithms is a parameterized, structured representation of a policy or value function. This structure is leveraged in an approach we call coordinated reinforcement learning, by which agents coordinate both their action selection activities and their parameter updates. Within the limits of our parametric representations, the agents will determine a jointly optimal action without explicitly considering every possible action in their exponentially large joint action space. Our methods differ from many previous reinforcement learning approaches to multiagent coordination in that structured communication and coordination between agents appears at the core of both the learning algorithm and the execution architecture. Our experimental results, comparing our approach to other RL methods, illustrate both the quality of the policies obtained and the additional benefits of coordination. 1. - In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS , 2006 "... Abstract — The aquisition and improvement of motor skills and control policies for robotics from trial and error is of essential importance if robots should ever leave precisely pre-structured environments. However, to date only few existing reinforcement learning methods have been scaled into the d ..." Cited by 79 (19 self) Add to MetaCart Abstract — The aquisition and improvement of motor skills and control policies for robotics from trial and error is of essential importance if robots should ever leave precisely pre-structured environments. However, to date only few existing reinforcement learning methods have been scaled into the domains of highdimensional robots such as manipulator, legged or humanoid robots. Policy gradient methods remain one of the few exceptions and have found a variety of applications. Nevertheless, the application of such methods is not without peril if done in an uninformed manner. In this paper, we give an overview on learning with policy gradient methods for robotics with a strong focus on recent advances in the field. We outline previous applications to robotics and show how the most recently developed methods can significantly improve learning performance. Finally, we evaluate our most promising algorithm in the application of hitting a baseball with an anthropomorphic arm.
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.129.8871","timestamp":"2014-04-20T22:26:53Z","content_type":null,"content_length":"41711","record_id":"<urn:uuid:c8444c77-288f-4b45-9bca-6c80ed974b08>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Existence of a regular subposet which collapses everything except the top cardinal up vote 9 down vote favorite Suppose $\delta$ is an inaccessible cardinal, and $\mathbb{P}$ is the Levy Collapse $\text{Col}(\kappa, \delta)$ which adds a surjection from $\kappa \to \delta$ (for some regular $\kappa < \delta$). It is well-known that there is a poset---namely $\text{Col}(\kappa, < \delta)$---which is absorbed by $\mathbb{P}$, and which has the same collapsing effects as $\mathbb{P}$, except that it doesn't collapse $\delta$. Is this true in general? For simplicity, assume GCH and, if necessary, that $\delta$ is a very large cardinal. Suppose $\mathbb{P} \subset V_\delta$ is a poset which collapses $\delta$. Must there exist a poset $\mathbb{Q}$ with the following properties? 1. $\mathbb{Q}$ is absorbed by $\mathbb{P}$ as a subforcing, i.e. there is a regular embedding from $\mathbb{Q} \to \text{r.o.}(\mathbb{P})$; 2. $\mathbb{Q}$ collapses a tail end of cardinals below $\delta$, but does not collapse $\delta$. Note: The answer is ``yes" in the special case that the closure of $\mathbb{P}$ matches $|\delta|^{V^{\mathbb{P}}}$, using standard absorption theory for Levy collapses. In particular, it's true if $ \mathbb{P}$ makes $\delta$ countable. set-theory lo.logic boolean-algebras forcing Do you have a counterexample when you drop the $\mathbb{P}\subset V_\delta$ restriction? – Joel David Hamkins Jun 19 '13 at 2:24 @Joel: No, I don't. – Sean Cox Jun 19 '13 at 3:21 Small observation: if we perform an additional small forcing $\mathbb{R}$, then we can find such a $\mathbb{Q}$ inside $\mathbb{P}\ast\mathbb{R}$. The reason is that we can let $\mathbb{R}$ collapse $|\delta|^{V^{\mathbb{P}}}$ to $\omega$, which is small relative to $\delta$ in the ground model, and so the composition will collapse $\delta$ to $\omega$, placing it into the case you mention at the end of the question. – Joel David Hamkins Jun 19 '13 at 5:09 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged set-theory lo.logic boolean-algebras forcing or ask your own question.
{"url":"http://mathoverflow.net/questions/134101/existence-of-a-regular-subposet-which-collapses-everything-except-the-top-cardin","timestamp":"2014-04-18T08:13:03Z","content_type":null,"content_length":"51857","record_id":"<urn:uuid:6d8f32fc-4a8b-43cf-adc0-44131ec148b9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Author/Editor=(Dolgachev_Igor) 2000; 624 pp; hardcover List Price: US$62.50 Member Price: US$50 Order Code: MMJSP This volume constitutes a special issue of the Michigan Mathematical Journal dedicated to William Fulton on the occasion of his sixtieth birthday. Attesting to the breadth of his contributions, the volume contains some thirty papers on a wide range of topics centered in algebraic geometry, representation theory, and commutative algebra. This collection will be of interest to researchers and students in these and neighboring fields. Distributed worldwide by the AMS. Graduate students and research mathematicians interested in algebraic geometry, representation theory, commutative algebra, and their applications. • P. Aluffi and C. Faber -- Linear orbits of arbitrary plane curves • A. Beauville -- Determinantal hypersurfaces • A. Bertram -- Some applications of localization to enumerative problems • M. Brion -- Poincaré duality and equivariant (co)homology • H. Clemens and H. Kley -- On an example of Voisin • P. Deligne, M. Goresky, and R. MacPherson -- L'algèbre de cohomologie du complément dans un espace affine, d'une famille finie de sous-espaces affines • J.-P. Demailly, L. Ein, and R. Lazarsfeld -- A subadditivity property of multiplier ideals • P. Diaconis and A. Ram -- Analysis of systematic scan metropolis algorithms using Iwahori-Hecke algebra techniques • I. V. Dolgachev -- Polar cremona transformations • D. Edidin and W. Graham -- Good representations and solvable groups • C. Faber and R. Pandharipande -- Logarithmic series and Hodge integrals in the tautological ring • S. Fomin and M. Shapiro -- Stratified spaces formed by totally positive varieties • D. Franco, S. L. Kleiman, and A. T. Lascu -- Gherardelli linkage and complete intersections • T. Garrity -- Global structures on CR manifolds via Nash blow-ups • A. Givental -- On the WDVV-equation in quantum K-theory • M. Hochster and C. Huneke -- Localization and test exponents for tight closure • Y. Hu and S. Keel -- Mori dream spaces and GIT • T. Józefiak -- A construction of irreducible \(\mathrm{GL}(m)\)-representatives • J. Kollár -- Fundamental groups of rationally connected varieties • A. Kresch -- Gromov-Witten invariants of a class of toric varieties • D. Laksov and A. Thorup -- The algebra of jets • A. Lascoux and P. Pragacz -- Orthogonal divided differences and Schubert polynomials, \(\tilde{P}\)-functions, and vertex operators • A. Losev and Y. Manin -- New modular spaces of pointed curves and pencils of flat connections • M. V. Nori -- The Hirzebruch-Riemann-Roch theorem • D. Perkinson -- Inflections of toric varieties • P. C. Roberts -- Intersection multiplicities and Hilbert polynomials • B. Shapiro, M. Shapiro, A. Vainshtein, and A. Zelevinsky -- Simply-laced Coxeter groups and groups generated by symplectic transvections • K. Smith -- Globally F-regular varieties: Applications to vanishing theorems for quotients of Fano varieties • F. Sottile -- Some real and unreal enumerative geometry for flag manifolds • H. Tamvakis -- Height formulas for homogeneous varieties • B. Totaro -- The topology of smooth divisors and the arithmetic of abelian varieties
{"url":"http://www.ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Dolgachev_Igor&arg9=Igor_Dolgachev","timestamp":"2014-04-19T09:38:13Z","content_type":null,"content_length":"17931","record_id":"<urn:uuid:005d70d7-abc3-47fd-8818-d76f745e2b1f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Regression Equation for Zero inflated negative binomial Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Regression Equation for Zero inflated negative binomial From Partho Sarkar <partho.ss+lists@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Regression Equation for Zero inflated negative binomial Date Tue, 20 Sep 2011 16:15:26 +0530 I notice you are not getting the points, so even though I have no special experience with the ZINB model, let me try to explain a basic point about this kind of regression. Please excuse me if this is too Here your y- variable is the "count" or number of times something of interest happens in a unit of time. You assume that the "probability" of this (i.e., y=y0 for any y0) is given by the "negative binomial" probability distribution, which is defined by a parameter lambda, which in turn is determined by some characteristics of the observations. This is the left hand variable you estimate in your regression: This is why you do not have any regression in the form y=b0+b1X1 etc, and no one would expect you to produce such a regression! For further clarification, I would suggest you look at these sources: Hope this helps 2011/9/20 rachel grant <rachelannegrant@gmail.com>: > Thanks to everyone but I am still very confused. I should explain, I > am not a mathematician, I dont understand mathmatical formulae, I am a > biologist. I am looking for a regression formula for ZINB but I cannot > find anything that resembles: ln (Y) + B0+B1X1 + B2X2. > Yes there are formulae in the resources you are all pointing me to but > none of them look like a regression equation, and they dont have X on > the right hand side. I can't buy expensive books... I have been > searching help file and internet resources for several months. > thanks > Rachel Grant > 2011/9/19 rachel grant <rachelannegrant@gmail.com>: >> Thank you for your help. Maybe I need to explain the problem more clearly >> I have used Zero Inflated negative binomial regression in Stata to >> model some overdispersed, zero inflated count data. I got very nice >> results and also used the postestimation tools to predict Y values. >> This is included in my PhD thesis which I am about to submit. My PhD >> supervisor said as well as presenting the results (i.e cofficients, p >> values etc.) I also have to show "the model". I think by this he means >> the regression equation in the form: >> natural log (Y) = B0+B1X1 + B2X2.............. >> I cannot find out what the equation is for ZINB models and also I >> cannot find out how to make Stata display this model. >> I could just add in the coefficients myself BUT I am not sure of the >> exact formula of the model for ZINB (especially the ZI part) as I >> think it may be more complicated than the simple Poisson >> log e (Y) = β0 + β1Χ1 + β2Χ2 ... >> I have searched everywhere to find a general equation for ZINB with no >> luck and also read all the Stata help files. >> I am very confused about why this seemingly simple thing should prove >> so impossible! I am a beginner with stata, and previously only used >> simple linear regression so thank you for your patience. >> -- >> regards, Rachel Grant >> * >> * For searches and help try: >> * http://www.stata.com/help.cgi?search >> * http://www.stata.com/support/statalist/faq >> * http://www.ats.ucla.edu/stat/stata/ > -- > regards, Rachel > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2011-09/msg00841.html","timestamp":"2014-04-16T13:25:17Z","content_type":null,"content_length":"12443","record_id":"<urn:uuid:53ba8d96-b241-439e-aa6e-971a885e6a4e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD AND APPARATUS FOR CONTROLLING DECODING IN RECEIVER Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP Methods and apparatus are provided for controlling decoding in a receiver. A codeword is received and decoded. It is determined whether the decoding is a decoding success or a decoding failure. A number of unreliable bits of the codeword is determined when the decoding is the decoding failure. Iterative decoding is performed when the number of unreliable bits is less than a first threshold A method for controlling decoding in a receiver, comprising the steps of: receiving and decoding a codeword; determining whether the decoding is a decoding success or a decoding failure; determining a number of unreliable bits of the codeword when the decoding is the decoding failure; and performing iterative decoding when the number of unreliable bits is less than a first threshold value. The method of claim 1, wherein determining whether the decoding is the decoding success or the decoding failure comprises determining whether an equation of H×c=0 is satisfied, where `H` denotes a parity check matrix and `c` denotes the codeword. The method of claim 1, further comprising: determining an average variation of the number of unreliable bits when the number of unreliable bits is greater than the first threshold value; and determining the decoding failure of the codeword based on the average variation of the number of unreliable bits. The method of claim 1, wherein the number of unreliable bits is determined based on an absolute Log Likelihood Ratio (LLR) value and a specific reference value. The method of claim 3, wherein determining the decoding failure of the codeword comprises: increasing a count value when the average variation of the number of unreliable bits is less than or equal to a second threshold value; and determining the decoding failure of the codeword when the count value is equal to a third threshold value. The method of claim 3, wherein the average variation of the number of unreliable bits is determined by: S M ( l ) = C T ( l - M ) - C T ( l ) M ##EQU00002## where S .sup.(l) denotes the average variation of the number of unreliable bits, T denotes a reference LLR value of unreliable bits, C .sup.(l) denotes the number of unreliable bits in an l iteration step, and the S .sup.(l) is the average variation of the number of unreliable bits through M iterations with respect to the l iteration step. The method of claim 5, wherein the third threshold value comprises a maximum count satisfying |S .sup.(l)|≦α, where S .sup.(l) denotes the average variation of the number of unreliable bits and α denotes the second threshold value. The method of claim 1, wherein the codeword is Low Density Parity Check (LDPC)-encoded. A receiver comprising: a decoding unit for decoding a received codeword; and a control unit for determining whether the decoding is a decoding success or decoding failure, determining a number of unreliable bits of the codeword when the decoding is the decoding failure, and performing iterative decoding when the number of unreliable bits is less than a first threshold value. The receiver of claim 9, wherein the control unit comprises a determining unit for determining whether H×c=0 is satisfied, where `H` denotes a parity check matrix and `c` denotes the codeword. The receiver of claim 9, wherein the control unit comprises: a first count unit for determining the number of unreliable bits; an average variation calculating unit for determining an average variation of the number of unreliable bits when the number of unreliable bits is greater than the first threshold value; and a second count unit for determining a decoding failure based on the average variation of the number of unreliable bits. The receiver of claim 9, wherein the number of unreliable bits is determined based on an absolute Log Likelihood Ratio (LLR) value and a specific reference value. The receiver of claim 11, wherein the second count unit: increases a count value when the average variation of the number of unreliable bits is less than or equal to a second threshold value; and determines the decoding failure of the codeword when the count value is equal to a third threshold value. The receiver of claim 11, wherein the average variation of the number of unreliable bits is determined by: S M ( l ) = C T ( l - M ) - C T ( l ) M ##EQU00003## where S .sup.(l) denotes the average variation of the number of unreliable bits, T denotes a reference LLR value of unreliable bits, C .sup.(l) denotes the number of unreliable bits in an l iteration step, and the S .sup.(l) is the average variation of the number of unreliable bits through M iterations with respect to the l iteration step. The receiver of claim 13, wherein the third threshold value is determined as a maximum count satisfying |S .sup.(l)|≦α, where S .sup.(l) denotes the average variation of the number of unreliable bits and α denotes the second threshold value. A method for controlling decoding in a receiver, comprising the steps of: decoding a codeword; determining a number of unreliable bits of the codeword when the decoding of the codeword is a decoding failure, and determining an average variation of the number of unreliable bits; determining a number of times when the average variation of the number of unreliable bits decreases below a first threshold value during iterative decoding; and terminating the iterative decoding when the determined number of times reaches a second threshold value. The method of claim 16, wherein the number of unreliable bits is determined based on an absolute Log Likelihood Ratio (LLR) value. The method of claim 16, wherein the iterative decoding continues when the number of unreliable bits is less than a third threshold value. The method of claim 16, wherein the iterative decoding continues when the determined number of times does not reach the second threshold value. The method of claim 16, wherein the codeword is Low Density Parity Check (LDPC)-encoded. PRIORITY [0001] This application claims priority under 35 U.S.C. §119(a) to an application filed in the Korean Intellectual Property Office on Dec. 24, 2010 and assigned Serial No. 10-2010-0134459, the contents of which are incorporated herein by reference. BACKGROUND OF THE INVENTION [0002] 1. Field of the Invention The present invention relates generally to a receiver of a broadcasting system, and more particularly, to a method and apparatus for controlling decoding in a receiver. 2. Description of the Related Art Powerful error correction encoding techniques that support broadband data and fast packet transmission are integral for next-generation communication and broadcasting systems. Since the introduction of a turbo coding scheme in 1993, there has been an increased interest in high-performance error correction codes that provide an error probability that approaches the Shannon limit. The IMT-2000 system also uses a turbo code. The core technology of a turbo code is an iterative decoding technique, which has evolved into new fields. However, the turbo code is limited in performing at a considerably low error probability, which is required in next-generation communication/broadcasting systems. Thus, there has been an increasing interest in new graph-based encoding schemes. Accordingly, a Low Density Parity Check (LDPC) code has been utilized as an error correction code. The LDPC code performs in a manner that approaches the Shannon limit when a belief propagation-based iterative decoding algorithm is used. In general, the LDPC code determines decoding termination by syndrome-check (H×c=0). Herein, `H` denotes a parity check matrix, and `c` denotes a codeword. The syndrome-check is typically not passed in a region where a Signal-to-Noise Ratio (SNR) is low or an error occurs frequently. Therefore, decoding is performed as many times as the maximum decoding iteration count set in the system. However, this may increase a decoding failure probability and may cause a decoding inefficiency. Specifically, the decoding iterations may cause power consumption and latency. SUMMARY OF THE INVENTION [0009] The present invention has been made to address at least the above problems and/or disadvantages and to provide at least the advantages below. Accordingly, an aspect of the present invention relates to a method and apparatus for controlling decoding in a receiver. Another aspect of the present invention relates to a decoding method and apparatus for controlling a decoding iteration count by determining whether decoding will fail, even when performed in a low-SNR region as many times as the maximum decoding iteration count. According to an aspect of the present invention, a method for controlling decoding in a receiver is provided. A codeword is received and decoded. It is determined with the decoding is a decoding success or a decoding failure. A number of unreliable bits of the codeword is determined when the decoding is the decoding failure. Iterative decoding is performed when the number of unreliable bits is less than a first threshold value. According to another aspect of the present invention, a receiver is provided that includes a decoding unit for decoding a received codeword. The receiver also includes a control unit for determining whether the decoding is a decoding success or a decoding failure. A number of unreliable bits of the codeword is determined when the decoding is the decoding failure. Iterative decoding is performed when the number of unreliable bits is less than a first threshold value. According to a further aspect of the present invention, a method for controlling decoding in a receiver is provided. A codeword is decoded. A number of unreliable bits of the codeword is determined when the decoding is a decoding failure. An average variation of the number of unreliable bits is determined. A number of times when the average variation of the number of unreliable bits decreases below a first threshold value during iterative decoding is determined. The iterative decoding is terminated when the determined number of times reaches a second threshold value. BRIEF DESCRIPTION OF THE DRAWINGS [0014] The above and other aspects, features, and advantages of the present invention will be more apparent from the following detailed description when taken in conjunction with the accompanying drawings in which: FIG. 1 is a graph illustrating the ratio of bits having an absolute Log Likelihood Ratio (LLR) value less than a specific value T to LDPC-encoded bits according to a decoding iteration count, in the event of a decoding failure; FIG. 2 is a graph illustrating the ratio of bits having an absolute LLR value less than a specific value T to LDPC-encoded bits according to a decoding iteration count, in the event of a decoding FIG. 3 is a flow chart illustrating a process for controlling LDPC decoding in a receiver, according to an embodiment of the present invention; and FIG. 4 is a block diagram of an apparatus for controlling LDPC decoding in a receiver, according to an embodiment of the present invention. DETAILED DESCRIPTION OF EMBODIMENTS OF THE PRESENT INVENTION [0019] Embodiments of the present invention are described in detail with reference to the accompanying drawings. The same or similar components may be designated by the same or similar reference numerals although they are illustrated in different drawings. Detailed descriptions of constructions or processes known in the art may be omitted to avoid obscuring the subject matter of the present Embodiments of the present invention provide a method and apparatus for controlling decoding in a receiver. The following description is made in the context of a receiver employing an LDPC code that is a linear block code capable of iterative decoding. Embodiments of the present invention are not limited to these codes. Thus, it should be understood that the present invention may also be applicable to any other receivers employing iterative decoding. FIG. 1 is a graph illustrating the ratio of bits having an absolute LLR value less than a specific value T to LDPC-encoded bits according to a decoding iteration count, in the event of a decoding failure, in the case of an Additive White Gaussian Noise (AWGN) channel. Referring to FIG. 1, an X-axis represents a decoding iteration count, and a Y-axis represents the ratio of bits having an absolute LLR value that is obtainable in a decoding process and that is less than a specific value T, to all the bits constituting an LDPC codeword. Specifically, FIG. 1 illustrates the ratio of bits having an absolute LLR value that is obtainable in a decoding process and that is less than 1 or 2, to LDPC codeword bits. "Decoding Trajectory, Averaged" represents an average value of the ratio of bits having an absolute LLR value less than a specific value T to bits of a plurality of LDPC codewords (e.g., 1000). "Decoding Trajectory, Sample 1" and "Decoding Trajectory, Sample 2" represent a change in the ratio of each of the codewords selected from the plurality of LDPC codewords (e.g., 1000). In general, the ratio of bits having an absolute LLR value less than a specific value T decreases as the decoding iteration count increases. This relationship occurs because the number of unreliable bits in an LDPC codeword decreases as the decoding iteration count increases. However, in the event of a decoding failure, the ratio of bits having an absolute LLR value less than a specific value T does not decrease. Specifically, the ratio of bits having an absolute LLR value less than a specific value T does not decrease because there is a limitation in correcting all of the unreliable bits. This limitation even occurs when decoding is performed in a low-SNR environment a number of times equal to the maximum decoding iteration count. Referring to FIG. 1, the ratio of bits having an absolute LLR value less than a specific value T does not decrease on the average in the event of a decoding failure, as represented by "Decoding Trajectory, Averaged". The simulation result of FIG. 1 reveals that a fluctuation occurs according to an error pattern in the event of a decoding failure. Referring to FIG. 1, a slight fluctuation may occur in the process of iterative decoding, but the ratio of bits having an absolute LLR value less than a specific value T tends to converge stably after completion of sufficient iterative decoding, as represented by "Decoding Trajectory, Sample 1". However, it is also observed that a considerable fluctuation occurs as represented by "Decoding Trajectory, Sample 2". The fluctuation of "Decoding Trajectory, Sample 2" occurs because belief propagation-based iterative decoding is performed for an LDPC codeword. Theoretically, if the length of an LDPC codeword is infinite, the absolution LLR value increases because each of the LDPC-encoded bits receives independent information, capable of increasing belief continuously according to iterative decoding, from the bits, except the LDPC-encoded bits. Practically, because the length of an LDPC codeword is finite, dependent information is received by the interaction between LDPC-encoded bits, and thus, belief does not propagate. If the interaction between erroneous LDPC-encoded bits is not strong, only a small fluctuation occurs as represented by "Decoding Trajectory, Sample 1". If the interaction between erroneous LDPC-encoded bits is strong, a large fluctuation may occur as represented by "Decoding Trajectory, Sample 2". FIG. 2 is a graph illustrating the ratio of bits having an absolute LLR value less than a specific value T to LDPC-encoded bits according to a decoding iteration count, in the event of a decoding success, in the case of an AWGN channel. Referring to FIG. 2, an X-axis represents a decoding iteration count, and a Y-axis represents the ratio of bits having an absolute LLR value that is obtainable in a decoding process and that is less than a specific value T, to all the bits constituting an LDPC codeword. In FIG. 2, "Decoding Trajectory, Averaged", "Decoding Trajectory, Sample 1", and "Decoding Trajectory, Sample 2" have the same meanings as those described above with respect to FIG. 1. In general, the ratio of bits having an absolute LLR value less than a specific value T decreases as the decoding iteration count increases. This is relationship occurs because the number of unreliable bits in an LDPC codeword decreases as the decoding iteration count increases. In the event of a decoding success, the ratio of bits having an absolute LLR value less than a specific value T decreases suddenly and approaches 0. In a high-SNR environment, all of the unreliable bits may be corrected as decoding is performed as many times as the maximum decoding iteration count. Therefore, the ratio of bits having an absolute LLR value less than a specific value T approaches Referring to FIG. 2, a fluctuation occurs according to an error pattern in the event of a decoding success. However, if the ratio is at or below a certain degree even once, a decoding success probability may increase with an increase in the decoding iteration count. As described with reference to FIG. 1, the fluctuation may occur according to an error pattern when belief-propagation decoding is applied to a finite-length LDPC code. As represented by "Decoding Trajectory, Sample 2 (T=1)" in FIG. 2, the ratio of bits having an absolute LLR value less than a specific value T decreases to about 0.03 when iterative decoding is performed about 30 times. Thereafter, the ratio of bits having an absolute LLR value less than a specific value T increases suddenly above 0.08 when iterative decoding is performed about 40 times. However, if the ratio is at or below a certain degree even once, a decoding success probability may increase with an increase in the decoding iteration count. In the case of "Decoding Trajectory, Sample 2 (T=1)" in FIG. 2, decoding succeeds when the decoding iteration count is 133. However, if a given system sets the maximum decoding iteration count to be less than 133, the decoding fails. If a given system sets the maximum decoding iteration count to be greater than 133, the decoding succeeds. Thus, it is possible to determine whether decoding will fail when the ratio reaches a suitable value even once, even if there is a variation in the ratio of bits having an absolute LLR value less than a specific value T. The suitable value may depend on the maximum decoding iteration count of a given system. FIG. 3 illustrates a process for controlling LDPC decoding in a receiver according to an embodiment of the present invention. Referring to FIG. 3, the receiver decodes an LDPC code in step 300. In an embodiment of the present invention, an algorithm for decoding the LDPC code may be one of a message passing algorithm, a sum product algorithm, and a belief propagation algorithm. In step 302, the receiver performs a syndrome check and determines whether an equation of H×c=0 is satisfied. `H` denotes a parity check matrix and `c` denotes a codeword. If the equation of H×c=0 is satisfied in step 302, the receiver determines a decoding success and ends the current LDPC decoding, in step 301. In another embodiment of the present invention, the receiver may start LDPC decoding of the next n information bits. If the equation of H×c=0 is not satisfied in step 302, the receiver proceeds to step 303 to perform iterative decoding. In step 303, the receiver counts the number of unreliable bits in the codeword. Specifically, the receiver detects the number of bits of LLR<T in the n-bit codeword. T denotes a reference value of unreliable LDPC bits. The bit having an absolute LLR value less than T is determined as an unreliable bit. In step 304 it is determined whether the counted number of unreliable bits is less than or equal to λ. If it is determined that the counted number of unreliable bits is greater than λ in step 304, the receiver proceeds to step 306. If it is determined that the counted number of unreliable bits is less than or equal to λ in step 303, the receiver returns to step 300. When the counted number of unreliable bits is less than or equal to λ, a relevant LDPC decoding algorithm of step 300 may be iteratively performed to correct an error in the unreliable bit. Specifically, λ is the minimum value at which decoding will succeed in the codeword. In step 306, the receiver calculates an average variation of the number of unreliable bits. The average variation of the number of unreliable bits may be determined by Equation (1) below. S M ( l ) = C T ( l - M ) - C T ( l ) M ( 1 ) ##EQU00001## .sup.(l) denotes the average variation of the number of the unreliable bits, T denotes a reference LLR value of unreliable bits, the bit having an absolute LLR value less than T is determined as an unreliable bit, C .sup.(l) denotes the number of unreliable bits in the l iteration step, and the S .sup.(l) is an average variation of unreliable bits through M iterations with respect to the l iteration step. In step 308, the receiver determines whether a relation of |S .sup.(l)|≦α is satisfied. If the relation of |S .sup.(l)|≦α is not satisfied in step 308, the receiver returns to step 300 to perform the relevant LDPC decoding algorithm. The receiver returns to step 300 because it is possible to correct an error in the unreliable bit. The value α is the reference value of S .sup.(l) for determining a decoding success/failure. If the relation of |S .sup.(l)|≦α is satisfied in step 308, the receiver increases the count in step 310. In step 312, it is determined whether the value counted in step 310 is equal to P. If the value counted in step 310 is equal to P in step 312, the receiver determines a decoding failure in step 314. Specifically, determines that decoding will fail even when performed up to the maximum decoding iteration count, before performing decoding up to the maximum decoding iteration count. The value P is the maximum count satisfying a specific relation between S .sup.(l) and α (e.g., |S .sup.(l)|≦α). When the specific relation is satisfied P times, the receiver determines a decoding failure, and the methodology terminates. If the value counted in step 310 is not equal to P in step 312, the receiver returns to step 300 to perform iterative decoding. FIG. 4 is a block diagram of an apparatus for controlling LDPC decoding in a receiver, according to an embodiment of the present invention. Referring to FIG. 4, the receiver includes a demodulating unit 400, a decoding unit 402, a determining unit 404, a first count unit 406, an average variation calculating unit 408, and a second count unit 410. FIG. 4 focuses on an LDPC decoding control apparatus of the receiver. However, in another embodiment of the present invention, the receiver may further include other function blocks in addition to an OFDM/OFDMA modulation block. A control unit 403 may include the determining unit 404 and the average variation calculating unit 408, and may further include the first count unit 406 and the second count unit 410. The demodulating unit 400 receives an LDPC code and performs a demodulating operation corresponding to a modulating scheme (e.g., BPSK, QPSK, and 64QAM) of a transmitter. The demodulating unit 400 maps the demodulated signal to LDPC codeword bits and provides the same to the decoding unit 402. The decoding unit 402 decodes the LDPC codeword bits received from the demodulating unit 400 according to an LDPC decoding algorithm. The LDPC decoding algorithm may be one of a message passing algorithm, a sum product algorithm, and a belief propagation algorithm. The determining unit 404 performs a syndrome check on the LDPC codeword bits received from the decoding unit 402 and determines whether an equation of H×c=0 is satisfied, where `H` denotes a parity check matrix and `c` denotes a codeword. The determining unit provides the result of H×c=0 to the first count unit 406. If the equation of H×c=0 is satisfied, it indicates a decoding success and the decoded LDPC codeword bits (i.e., information bits) are outputted. If the equation of H×c=0 is not satisfied, it indicates a decoding failure and the decoding unit 402 perform iterative decoding under the control of the first count unit 406 and the second count unit 410. The first count unit 406 counts the number of unreliable bits in the LDPC codeword. Specifically, the first count unit 406 detects the number of bits of LLR<T in the n-bit codeword, where T denotes the reference LLR value of unreliable LDPC bits. The bit having an absolute LLR value less than T is determined as an unreliable bit. When the counted number of unreliable bits is greater than λ, the first count unit 406 notifies this to the determining unit 404. When the counted number of unreliable bits is less than or equal to λ, the first count unit 406 notifies the decoding unit 402. Specifically, when the counted number of unreliable bits is less than or equal to λ, the first count unit 406 notifies to the decoding unit 402 to iteratively perform an LDPC decoding algorithm to correct an error in the unreliable bit. When the counted number of unreliable bits is greater than λ, the average variation calculating unit 408 calculates an average variation of the number of unreliable bits on the basis of the information received from the determining unit 404. The average variation of the number of unreliable bits may be determined by Equation (1) above. The second count unit 410 determines whether the average variation S .sup.(l) of the number of unreliable bits, received from the average variation calculating unit 408, satisfies a relation of |S .sup.(l)|≦α. When the relation of |S .sup.(l)|≦α is not satisfied, the second count unit 410 notifies the decoding unit 402 to iteratively perform an LDPC decoding algorithm. The value α is the reference value of S .sup.(l) for determining a decoding success/failure. When the relation of |S .sup.(l)|≦α is satisfied, the second count unit 410 increases the count. When the counted value is equal to P, the second count unit 410 determines a decoding failure. The value P is the maximum count satisfying a specific relation between S .sup.(l) and α (e.g., |S .sup.(l)|≦α). When the specific relation is satisfied P times, the receiver determines a decoding failure. When the counted value is not equal to P, the second count unit 410 notifies the decoding unit 402 to iteratively perform an LDPC decoding algorithm. As described above, embodiments of the present invention can reduce unnecessary power consumption and latency by determining whether decoding will fail even when performed in a low-SNR region as many times as the maximum decoding iteration count. While the invention has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in embodiments of the present invention. Patent applications by Hong-Sil Jeong, Seoul KR Patent applications by Hyun-Koo Yang, Seoul KR Patent applications by Se-Ho Myung, Suwon-Si KR Patent applications by SAMSUNG ELECTRONICS CO., LTD. Patent applications in class Forward correction by block code Patent applications in all subclasses Forward correction by block code User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120166905","timestamp":"2014-04-23T23:45:57Z","content_type":null,"content_length":"55907","record_id":"<urn:uuid:69edb961-8082-4b07-9cfa-a2f1442e74b6>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Weymouth Geometry Tutor Find a Weymouth Geometry Tutor ...My schedule is flexible as I am a part time graduate student. I am new to Wyzant but very experienced in tutoring, so if you would like to meet first before a real lesson to see if we are a good fit, I am willing to arrange that.I was a swim teacher for 8 years at Swim facilities and summer camps. I also coached. 19 Subjects: including geometry, Spanish, chemistry, calculus ...I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams. I even earned a perfect score on the Math Subject Test. 36 Subjects: including geometry, chemistry, English, reading ...I also have many current references that are willing to share their experiences. I have been able to achieve success by setting a pace that is appropriate for each individual student. During our sessions and the attentiveness of the student, I also believe in engaging in a certain amount of con... 13 Subjects: including geometry, calculus, GRE, algebra 1 ...Along with US and European history, if you need a tutor to help you with anything in world history, I am your man. I am passionate about history, and whether it is a paper you need to write, information and clarification or just piece of mind about your work, I am ready to attend to your needs in history. If you need a tutor for beginner to intermediate Italian, I am your man. 21 Subjects: including geometry, Spanish, English, ESL/ESOL ...I really enjoy tutoring students. In fact, I love tutoring math so much that I recently completed the MTEL requirements to become a Math Teacher. In the past, I have tutored 5th graders, 8th graders, high school and college students. 31 Subjects: including geometry, Spanish, reading, statistics
{"url":"http://www.purplemath.com/Weymouth_Geometry_tutors.php","timestamp":"2014-04-16T10:55:47Z","content_type":null,"content_length":"24051","record_id":"<urn:uuid:f75771b1-d641-420f-8095-52efa372d70b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Logical Values 4.6 Logical Values Octave has built-in support for logical values, i.e., variables that are either true or false. When comparing two variables, the result will be a logical value whose value depends on whether or not the comparison is true. The basic logical operations are &, |, and !, which correspond to “Logical And”, “Logical Or”, and “Logical Negation”. These operations all follow the usual rules of logic. It is also possible to use logical values as part of standard numerical calculations. In this case true is converted to 1, and false to 0, both represented using double precision floating point numbers. So, the result of true*22 - false/6 is 22. Logical values can also be used to index matrices and cell arrays. When indexing with a logical array the result will be a vector containing the values corresponding to true parts of the logical array. The following example illustrates this. data = [ 1, 2; 3, 4 ]; idx = (data <= 2); ⇒ ans = [ 1; 2 ] Instead of creating the idx array it is possible to replace data(idx) with data( data <= 2 ) in the above code. Logical values can also be constructed by casting numeric objects to logical values, or by using the true or false functions. Convert x to logical type. See also: double, single, char. Return a matrix or N-dimensional array whose elements are all logical 1. If invoked with a single scalar integer argument, return a square matrix of the specified size. If invoked with two or more scalar integer arguments, or a vector of integer values, return an array with given dimensions. See also: false. Return a matrix or N-dimensional array whose elements are all logical 0. If invoked with a single scalar integer argument, return a square matrix of the specified size. If invoked with two or more scalar integer arguments, or a vector of integer values, return an array with given dimensions. See also: true.
{"url":"http://www.gnu.org/software/octave/doc/interpreter/Logical-Values.html","timestamp":"2014-04-17T22:44:56Z","content_type":null,"content_length":"7527","record_id":"<urn:uuid:a2b7496b-151e-44ca-a250-8f68b6f019f0>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum distance in within a quadrilateral 1. The problem statement, all variables and given/known data Let A, B, C, D be the vertices of a convex quadrilateral. Convexity means that for each lines L(ab), L(bc), L(cd), L(da) the quadrilateral lies in one of its half-planes. Find the point P for which the minimum Min(d(P,A)+d(P,B)+d(P,C)+d(P,D)) is realized. 2. Relevant equations Min(d(P,A)+d(P,B)+d(P,C)+d(P,D)) is the equation we're trying to minimize. distance=d(X,Y)=abs(X-Y)=sqrt((X-Y)x(X-Y)) where "x" is the dot product. 3. The attempt at a solution For starters, this is for my Euclidean geometry class, so there's no coordinates or Calculus, I presume. My initial guess is that the point that would minimize those distances would be the intersection of the diagonals but I can't figure out why.
{"url":"http://www.physicsforums.com/showthread.php?p=2114725","timestamp":"2014-04-21T09:52:07Z","content_type":null,"content_length":"23093","record_id":"<urn:uuid:0805c329-8577-4698-8abd-1be339150137>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Definition of ordered set which split into two isomorphics ordered sets up vote 2 down vote favorite $\newcommand{\A}{\mathfrak{A}}\newcommand{\B}{\mathfrak{B}}\newcommand{\C}{\mathfrak{C}}$ Given $\A$ an infinite totally ordered set such that it is defined as a disjoint union: $\A := \B \sqcup \C$ of two sets. These sets $\B,\C$ are both infinite and totally ordered such that each element of $\B$ are strictly smaller than each element of $\C$. Then I impose these three sets are isomorphics: $\ A:=\B\sqcup \C \simeq \B \simeq \C$. My question is: Does that type of sets have any name? That it is a specific class of ordinal or whatever? (Sorry for my bad english) EliX The order is an idempotent with respect to the sum of linear orders. – Ramiro de la Vega Feb 27 '13 at 16:51 add comment 1 Answer active oldest votes I would call these orders self-similar, since the whole order consists of two copies of itself, in the way of many fractals. Examples would include the rational line $\langle\mathbb{Q},\lt\rangle$ as well as the Cantor set and many other fractal-like orders. You asked, is it a specific class of ordinal or whatever? Note that no (nonzero) ordinal has your property, since no ordinal is isomorphic to a proper initial segment of itself. Indeed, one may easily iterate the isomorphism of $A$ with $B$ to produce an infinite descending sequence, so if nonempty, it cannot be well-ordered. It is easy to construct such orders, and all such orders arise by a suitable iteration of the following procedure: up vote 4 We have the current collection of points in $B$ and $C$, with every point in $B$ below every point in $C$, and we have order-preserving maps from $B\sqcup C$ to $B$ and also to $C$. down vote • We add a new point $p$ either to $B$ or $C$. • This new point realizes a certain cut in the order we have so far. • We extend the maps by considering the image and pre-image of that cut, creating further new points if necessary to realize those cuts, closing under this process. For example, one can start with one point in $B$, but this causes one to add a point to $C$ to which to map it, which causes one to create the image of that point in $B$, and so on, back and forth. One can control this process by ensuring during the construction that certain cuts will or will not have a least upper bound. Finally, note that the property can have no consequences as to the purely local nature of the order, since if we have a self-similar order, we can replace each point by a copy of any fixed linear order, and this resulting order will also be self-similar. For example, $\mathbb{Q}$ copies of $L$, for any linear order $L$, is self-similar. But this order is, locally, just like $L$. add comment Not the answer you're looking for? Browse other questions tagged set-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/123098/definition-of-ordered-set-which-split-into-two-isomorphics-ordered-sets","timestamp":"2014-04-17T04:24:27Z","content_type":null,"content_length":"54079","record_id":"<urn:uuid:ed831d98-e2fc-4d8f-9b5d-b6dc13a6e65c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Standard Error of the Estimate David M. Lane Measures of Variability, Introduction to Simple Linear Regression, Partitioning Sums of Squares Learning Objectives 1. Make judgments about the size of the standard error of the estimate from a scatter plot 2. Compute the standard error of the estimate based on errors of prediction 3. Compute the standard error using Pearson's correlation 4. Estimate the standard error of the estimate based on a sample Figure 1 shows two regression examples. You can see that in Graph A, the points are closer to the line than they are in Graph B. Therefore, the predictions in Graph A are more accurate than in Graph Figure 1. Regressions differing in accuracy of prediction. The standard error of the estimate is a measure of the accuracy of predictions. Recall that the regression line is the line that minimizes the sum of squared deviations of prediction (also called the sum of squares error). The standard error of the estimate is closely related to this quantity and is defined below: where σ[est] is the standard error of the estimate, Y is an actual score, Y' is a predicted score, and N is the number of pairs of scores. The numerator is the sum of squared differences between the actual scores and the predicted scores. Note the similarity of the formula for σ[est] to the formula for σ.  It turns out that σest is the standard deviation of the errors of prediction (each Y - Y' is an error of prediction). Assume the data in Table 1 are the data from a population of five X, Y pairs. Table 1. Example data. X Y Y' Y-Y' (Y-Y')^2 1.00 1.00 1.210 -0.210 0.044 2.00 2.00 1.635 0.365 0.133 3.00 1.30 2.060 -0.760 0.578 4.00 3.75 2.485 1.265 1.600 5.00 2.25 2.910 -0.660 0.436 Sum 15.00 10.30 10.30 0.000 2.791 The last column shows that the sum of the squared errors of prediction is 2.791. Therefore, the standard error of the estimate is There is a version of the formula for the standard error in terms of Pearson's correlation: where ρ is the population value of Pearson's correlation and SSY is For the data in Table 1, μ[y] = 2.06, SSY = 4.597 and ρ= 0.6268. Therefore, which is the same value computed previously. Similar formulas are used when the standard error of the estimate is computed from a sample rather than a population. The only difference is that the denominator is N-2 rather than N. The reason N-2 is used rather than N-1 is that two parameters (the slope and the intercept) were estimated in order to estimate the sum of squares. Formulas for a sample comparable to the ones for a population are shown below. Please answer the questions:
{"url":"http://onlinestatbook.com/2/regression/accuracy.html","timestamp":"2014-04-17T18:25:09Z","content_type":null,"content_length":"11324","record_id":"<urn:uuid:4ae739df-e949-4fb3-96d8-9641cf559a5a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
triangle backface or frontface [Archive] - OpenGL Discussion and Help Forums 09-17-2011, 09:07 AM I have a triangle and his normal. The triangle stores the wind order of vertexes for create it A,B,C. Is possible to calculate if the triangle is backface triangle or a frontface triangle respect to a default conterclockwise normal? Is sufficent to do a dot product with this default normal and test if is > 0?
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-175633.html","timestamp":"2014-04-21T09:55:33Z","content_type":null,"content_length":"7324","record_id":"<urn:uuid:2e29628b-c022-42bf-9edd-e0ba30347b36>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Plainsboro Calculus Tutor ...I have a firm grasp of the concepts and applications of different computer programming paradigms (procedural, functional, event-based, object oriented, etc.) and can communicate those concepts to students. My 10+ years in computational science research has given me substantial practical experien... 15 Subjects: including calculus, chemistry, statistics, biology ...This is more of a method of thinking consistent with many science classes. If the student’s algebraic, geometrical, or logical foundation is not up to par, I can help build the student’s mathematical toolbox. I learned the value of an excellent tutor firsthand for SAT Math. 9 Subjects: including calculus, chemistry, physics, biology I'm currently a junior at Rutgers University studying Mathematics. I also work part time at tutoring company. I have experience tutoring students from kindergarten to advanced mathematics at the undergraduate level. 16 Subjects: including calculus, statistics, geometry, algebra 1 ...And it is nearly impossible to learn how to learn from a text-book. The most important factor when measuring how successful a teacher or tutor is when it comes to teaching material and conveying information is experience. An elementary math teacher of 30 years will time and time again outshine ... 57 Subjects: including calculus, reading, chemistry, English ...I have a master's in mathematics and a bachelor's in mathematics and psychology. I have six years tutoring experience and three years of training and mentoring on a professional level. I have worked with students with learning disabilities to some success, though I have no specialized training. 15 Subjects: including calculus, physics, geometry, statistics
{"url":"http://www.purplemath.com/Plainsboro_calculus_tutors.php","timestamp":"2014-04-19T10:15:20Z","content_type":null,"content_length":"24012","record_id":"<urn:uuid:eb449a2a-c99a-4b50-8c79-fd1f0906807e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
2 projects tagged "arithmetic" Voynex calculator is a Web browser application designed to evaluate mathematical and JavaScript expressions. The main difference between it and other calculators is that it is possible to enter the whole expression at once instead of entering it step by step. For example, it can evaluate expressions like, "sin(sqrt(PI/3)+sqrt(PI/5))+cos(PI/2+1)", "x=pow(4,5)+3; 3*x+log(x)", or "0xff07+0x1c04* (0x45+0x55>>1)". The power of the JavaScript language itself is used to evaluate expressions of arbitrary complexity. It offers a wide set of mathematical functions to calculate trigonometrical, logarithmic, and other expressions.
{"url":"http://freecode.com/tags/arithmetic?page=1&with=&without=1175","timestamp":"2014-04-18T16:08:26Z","content_type":null,"content_length":"23844","record_id":"<urn:uuid:286dd79d-31cc-4263-bf2b-bd53d9385ead>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Return to List Selected Papers on Analysis and Differential Equations &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp American Mathematical Society This volume contains translations of papers that originally appeared in the Japanese journal Sugaku. The papers range over a variety of topics, including Translations--Series 2 nonlinear partial differential equations, \(C^*\)-algebras, and Schrödinger operators. 2003; 137 pp; hardcover The volume is suitable for graduate students and research mathematicians interested in analysis and differential equations. Volume: 211 Graduate students and research mathematicians interested in analysis and differential equations. ISBN-10: 0-8218-3508-4 • N. Ikeda -- Van Vleck formula for Wiener integrals and Jacobi fields • R. Kuwabara -- Spectral geometry for Schrödinger operators in a magnetic field ISBN-13: 978-0-8218-3508-1 • K. Matsumoto -- Symbolic dynamics and \(C^*\)-algebras • G. Nakamura -- Inverse problems for elasticity List Price: US$76 • Y. Shibata -- Time-global solutions of nonlinear evolution equations and their stability • K. Tachizawa -- Wavelets and eigenvalues of Schrödinger operators Member Price: US$60.80 • E. Yanagida and S. Yotsutani -- Recent topics on nonlinear partial differential equations: Structure of radial solutions for semilinear elliptic equations Order Code: TRANS2/211
{"url":"http://ams.org/bookstore?fn=20&arg1=trans2series&ikey=TRANS2-211","timestamp":"2014-04-19T16:09:19Z","content_type":null,"content_length":"14908","record_id":"<urn:uuid:6ad7f7e6-bec9-408e-9eb1-981851fb0b97>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Absolute Uncertainty From Relative Uncertainty January 16th 2009, 07:23 AM Finding Absolute Uncertainty From Relative Uncertainty Good morning, I have a problem in front of me that is asking for the absolute uncertainty of an area (A rectangular board is measured to have a width, w, of 2.00 ± 0.02 m and length, l, of 10.80 ± 0.04 m). I have the correct answer for the relative uncertainty and the area, so I'd assume since I have those, I could use them to solve for the absolute uncertainty, but it's coming back saying my answer is wrong, and I have no idea how to solve it. Area = 21.6 m^2 Relative uncertainty = 0.01 January 16th 2009, 09:16 AM Good morning, I have a problem in front of me that is asking for the absolute uncertainty of an area (A rectangular board is measured to have a width, w, of 2.00 ± 0.02 m and length, l, of 10.80 ± 0.04 m). I have the correct answer for the relative uncertainty and the area, so I'd assume since I have those, I could use them to solve for the absolute uncertainty, but it's coming back saying my answer is wrong, and I have no idea how to solve it. Area = 21.6 m^2 Relative uncertainty = 0.01 How do you get that relative uncertainty? What do you think the absolute uncertainty is? Think about when you round and the effects of premature rounding.
{"url":"http://mathhelpforum.com/advanced-statistics/68513-finding-absolute-uncertainty-relative-uncertainty-print.html","timestamp":"2014-04-20T16:17:03Z","content_type":null,"content_length":"5088","record_id":"<urn:uuid:24e38828-0e64-40f6-9d76-f381dd58ba3e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching Plan 3 Explore the Circumcenter of a Triangle Title: Circumcenter Topic: Geometry Grade: 9th Lesson Summary: This lesson plan is to introduce the concepts of circumcenter by using computers with sketchpad software to explore. Students are able to observe and explore possible results (images) through computers by carrying out their ideas in front of screens. IL Learning Standards NCTM Standards Lesson Alignment 1. Understand the concepts of circumcenter of a triangle and other relative knowledge. 2. Be able to use computers with Geometer's Sketchpad to observe possible results and solve geometric problems. 1. Computers and Geometer's Sketchpad software 2. Papers, pencils, and rulers Lesson Plan Summarizing Statement Day 1 - Introduction of basic definition, review of relative concepts, and class discussion Day 2 - Group activity to answer questions by using computers with sketchpad Day 3 - Group discussion, sharing results, and making conclusion Day 1 1. The instructors introduces the basic definition of circumcenter and had better review similar concepts about centroid, incenter, and orthocenter of a triangle. 2. Discuss students' thought and other relative questions about circumcenter. Such as: How many circumcenters are there in a triangle? Is the point of circumcenter always on the inside of a triangle? If not, please describe the possible results and depend on what kind of triangle is. 3 Then, the instructor and students turn toward to play and test computers and discuss how to draw graphs and find their answers by using computers. Day 2 The instructor has 2-3 students form a group team to work through computers to collect data in order to decide the conclusion for questions. The instructor should turn around each group to observe students' learning and offer some help if students have problems on how to operate computers with sketchpad software. 1. Is there only a point of circumcenter in a triangle? Explain your possible reasons. 2. Is the point of circumcenter always on the inside of a triangle? If not. Please describe the possible results and depend on what kind of triangle is. Worksheet#1 and GSP file. 3. What are the different properties among centroid, incenter, orthocenter, and circumcenter? 4. What kind of triangle will result in that centroid, incenter, orthocenter, or circumcenter in the same triangle will overlap? GSP file 5. Which three points among centroid, incenter, orthocenter, and circumcenter will be on a line? ( This line is called Euler line.) Describe your experimental result and explain it. GSP file. 6. In a triangle ABC, suppose that O is the point of circumcenter of triangle ABC. Observe the relation between angle ABC and angle AOC. Make a conclusion and explain it. Worksheet#2. and GSP 7. In a triangle ABC, suppose that O is the point of circumcenter of triangle ABC. Observe the length of OA, OB, and OC. Are they equal? Explain it. Let O be the center, and the length of OA be the radius to draw a circle. Observe the situation of point B and C and explain it. GSP file. ( This circle is called circumscribed circle to the triangle ABC.) Day 3 In this class, students offer their results to discuss and share among groups and make the final conclusion for the questions of Day 2 activity. Finally, if possible, the instructor should demand students to develop their geometric proof for each of the above questions. And, let students know that lots of results from dynamic models do not represent and make a proof. In a triangle ABC, AB= 3 cm, BC= 4 cm, CA= 5 cm. 1) What kind of triangle is it? Why? 2) Suppose that O is the point of circumcenter of triangle ABC, the sum of OA, OB, and OC is = ______. 1) In a acute triangle ABC, suppose that O is the point of circumcenter of triangle ABC, and the angle BAC is 65 degrees, then the angle BOC is ________ degrees. 2) In a triangle DEF, angle DEF is obtuse angle. Suppose O is the point of circumcenter of triangle DEF, and the angle DEF is 130 degrees, then the angle DOF is ________ degrees. In a triangle ABC, let A' be the midpoint of BC, B' be the midpoint of AC, and C' be the midpoint of AB. And let O is the circumcenter of triangle ABC. Please explain O is the orthocenter of triangle A'B'C'. (Hint: perpendicular lines) There is an arc BCD which is a part of a circle. Could you find the center of this circle and draw the another part of this circle ? Explain your method. (Hint: Three points form a triangle and decide a circle.) Summarizing statement: 1. Replace traditional geometric teaching in which geometry is taught by a verbal description to dynatmic drawing. 2. Help teacher to teach and replace traditional teaching which uses blackboards and chalks to draw graphs 3. Computers with sketchpad software not only allow students to manipulate geometric shapes to discover and explore the geometric relationships, but also verify possible results, provide a creative activity for students' ideas, and enhance students' geometric intuition. 4. Facilitate the creation of a rich mathematical learning environment to assist students' geometric proof and establish geometric concepts 1. It can not replace traditional logic geometric proof -lots of examples do not make a proof 2. Students can not get maximal and potential learning benefits from by using computers to learn if the instructor do not offer appropriate learning directions and guide. The instructor also should know what kind of learning environment with computers is most likely to encourage and stimulate students' learning. Relative Article: 1. Szymanski, W. A., (1994). Geometric computerized proofs= drawing package + symbolic compution software. Journal of Computers in Mathematics and Science Teaching, 13, p433-444. 2. Silver, J. A. (1998). Can computers to teach proofs? Mathematics Teacher, 91, 660-663 Any Comment: Yi-wen Chen ychen17@uiuc.edu
{"url":"http://mste.illinois.edu/courses/ci499sp01/students/ychen17/project336/teachplan3.html","timestamp":"2014-04-17T15:26:17Z","content_type":null,"content_length":"9079","record_id":"<urn:uuid:f72be8fb-f9ca-4565-ae3a-cc3ae1654297>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
Operators over real vector spaces are self adjoint right? July 17th 2011, 05:53 AM #1 Junior Member Nov 2010 Operators over real vector spaces are self adjoint right? From Linear Algebra Done Right: "Positive Operators An operator T ∈ L(V ) is called positive if T is self-adjoint and ⟨T v , v ⟩ ≥ 0 for all v ∈ V. Note that if V is a complex vector space, then the condition that T be self-adjoint can be dropped from this definition (by 7.3)." If V is a real vector space then <Tv,v>=<v,Tv> so T is automatically self-adjoint. Doesn't this mean that the condition that T is self-adjoint can be dropped whatever the situation? Re: Operators over real vector spaces are self adjoint right? No responses? I thought this was a really simple question Re: Operators over real vector spaces are self adjoint right? Consider $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ with the usual inner product with $T(x_1,x_2)=(x_2,-x_1)$ then $\langle x,Tx \rangle =0$ for all $x\in \mathbb{R} ^2$ but $T$ is not self-adjoint because $\langle y,Tx \rangle = y_1x_2-x_1y_2 eq y_2x_1-y_1x_2 = \langle Ty,x \rangle$ (the equality holds only when $y=\lambda x$ for some $\lambda$) Re: Operators over real vector spaces are self adjoint right? In general, if $T\in\textrm{End}(E)$ with $(E,<,>)$ finite dimensional euclidean space, $B=\{u_1,\ldots,u_n\}$ is basis of $E$, $G$ is the Gram matrix with respect to $B$ and $A$ is the matrix of $T$ with respect to $B$, is easy to prove that $T$ is self adjoint iff $GA=A^tG$. Choosing for example $B$ orthonormal we have $G=I$ so, any non symmetric matrix represents a non self adjoint Re: Operators over real vector spaces are self adjoint right? Ok I see my mistake: <Tv,w>=<v,T*w> which doesn't necessarily equal <v,Tw> Now I have a different question, how do I show TT* is self adjoint? Re: Operators over real vector spaces are self adjoint right? July 17th 2011, 10:26 AM #2 Junior Member Nov 2010 July 17th 2011, 11:18 AM #3 Super Member Apr 2009 July 17th 2011, 12:30 PM #4 July 18th 2011, 12:50 AM #5 Junior Member Nov 2010 July 18th 2011, 01:33 AM #6
{"url":"http://mathhelpforum.com/advanced-algebra/184692-operators-over-real-vector-spaces-self-adjoint-right.html","timestamp":"2014-04-19T15:08:50Z","content_type":null,"content_length":"49885","record_id":"<urn:uuid:2e8304c7-86db-4a26-bcdd-1b4744583439>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Venn diagram and Probability December 7th 2012, 07:27 PM #1 Dec 2012 Venn diagram and Probability Hi I am quite bad at IB Maths Probablitiy, if you guys could help that would be much appreciated. Events A and B are such that P(A) = 0.3, P(B) = 0.6 and P(A∪B) = 0.7 The values q, r, s and t represent probabilities. (a) Write down the value of t. (b) Show that r = 0.2. (c) Write down the value of q and of s. (d) Write down P(B'). (e) Find P(A|B') Re: Venn diagram and Probability Hi I am quite bad at IB Maths Probablitiy, if you guys could help that would be much appreciated. Events A and B are such that P(A) = 0.3, P(B) = 0.6 and P(A∪B) = 0.7 The values q, r, s and t represent probabilities. (a) Write down the value of t. (b) Show that r = 0.2. (c) Write down the value of q and of s. (d) Write down P(B'). (e) Find P(A|B') You need to understand that this is not a homework service. You are expected to show some efforts. Hints: $q+r+x+t=1$ & $\mathcal{P}(A\cup B)=\mathcal{P}(A)+\mathcal{P}(B)-\mathcal{P}(A\cap B)~.$ December 8th 2012, 03:34 AM #2
{"url":"http://mathhelpforum.com/statistics/209316-venn-diagram-probability.html","timestamp":"2014-04-16T07:54:40Z","content_type":null,"content_length":"34442","record_id":"<urn:uuid:c3fb6c1a-f308-4db7-8e17-06dbb5550943>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Strange behaviour of viscoelastic materials Can anyone explain why viscoelastic materials behave differently at different strain rates? I am particularly interested in the uniaxial compression of a bulk solid to a constant level of strain (the stress relaxation behaviour): <snip> I'm not sure anyone can explain *why* a complex material behaves the way it does; OTOH, there are lots of models that reproduce real materials (over limited ranges of physical parameters). As you mention, a viscoelastic material has both a dissipative (viscous) component and an elastic component- AFAIK, Rivlin and Eriksen, J. Rational Mech. Anal. 4 (1955) first wrote a general and invariant theory of viscoelasticity. It's important to note that constitutive equations cannot be derived from a more basic theory; they are phenomenological in nature and require experiment to determine any As to your second sentence, that appears to be a measurement of creep? Creep requires irreversible thermodynamic considerations and is not viscoelasticity. OTOH, you may have some luck looking into
{"url":"http://www.physicsforums.com/showthread.php?t=587229","timestamp":"2014-04-21T14:57:32Z","content_type":null,"content_length":"25260","record_id":"<urn:uuid:ad245e63-6658-463b-9d44-e08054c86571>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming Praxis - Spectacular Seven Programming Praxis – Spectacular Seven In today’s Programming Praxis exercise our task is to run a simulation of a ballgame to see if the scoring mechanic is fair. The provided Scheme solution clocks in at 25 lines. Let’s see if we can do any better. First, some imports. import Control.Applicative import Control.Monad import Data.List import System.Random After a match, the winner gets a point and the loser is moved to the end of the queue. match :: Int -> [(a, Int)] -> Int -> [(a, Int)] match ps ~(x:y:r) w = (p,s + if ps > 7 then 2 else 1) : r ++ [c] where ((p,s), c) = if w == 0 then (x,y) else (y,x) A game ends when one of the teams has 7 or more points. game :: IO Int game = f 0 (zip [1..8] [0,0..]) . randomRs (0,1) <$> newStdGen where f ps a ~(x:xs) = maybe (f (ps+1) (match ps a x) xs) fst $ find ((>= 7) . snd) a To simulate the game, we play a number of games and calculate the winning percentages of each team. simulate :: Int -> IO [Float] simulate n = (\ws -> map (\x -> 100 * (l x - 1) / l ws) . group . sort $ ws ++ [1..8]) <$> replicateM n game where l = fromIntegral . length All that’s left is to run the simulation. main :: IO () main = mapM_ print =<< simulate 10000 That leaves us with 7 lines, more than a two thirds reduction compared to the Scheme solution. That’ll do nicely. Tags: bonsai, code, game, Haskell, kata, praxis, programming, seven, spectacular Kwezan Says: May 5, 2010 at 1:10 pm | Reply It would be nice to see your “bonsai” solutions to “Modern Elliptic Curve Factorization Part II” and “Integer Factorization” Best regards,
{"url":"http://bonsaicode.wordpress.com/2010/05/04/programming-praxis-spectacular-seven/","timestamp":"2014-04-21T10:35:40Z","content_type":null,"content_length":"53756","record_id":"<urn:uuid:f534030a-3bb9-40c0-93bd-cfd44e537c9d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Images From Math Images Sinusoidal waves This main image is called "Flame." This image shows the graph of $\sin(x \cdot \sin y)=\cos(y \cdot \cos x)\,$. It is the composition of complicated sinusoidal waves. Basic Description Wave Mathematics - Trigonometric Functions : Waves are familiar to us from the ocean, the study of sound, earthquakes, and other natural phenomena. But as any surfer can tell you, ocean waves come in very different sizes, as can all waves. To fully understand waves, we need to understand measurements associated with these waves, such as , and While these measurements help describe waves, they do not help us make predictions about wave behavior. In order to do that, we need to look at waves more abstractly, which we can do using a mathematical formula. It is possible to look at waves mathematically because a wave's shape repeats itself over a consistent interval of time and distance. This behavior mirrors the repetition of the circle. Imagine drawing a circle on a piece of paper. Now imagine drawing that same shape while your friend slowly pulled the piece of paper out from under your pencil - the line you would have drawn traces out the shape of a wave. One rotation around the circle completes one cycle of rising and falling in the wave, as seen in the picture below. Mathematicians use the sine function (Sine) to express the shape of a wave. The mathematical equation representing the simplest wave looks like this: $y=\sin x$ Basic Graph of the Sine Function Figure 1 shows the graph of the function, which is defined as $y=\sin x$. The graph repeats itself as it moves along the x-axis. The cycles of this regular repetition are called the period; we use the notation $T$. This graph repeats every $6.28$ units or $2\pi$ radians. It ranges from -1 to 1; half of this distance is called the amplitude, we use notation $\varphi\,$ to represent it. Because $\varphi\,$ is 0, there is no shift from the origin. The graph is symmetric around the point $(0,0)$. So the graph of $f(x)=\sin x\,$ has a period of $2\pi$ and an amplitude of 1. History of the Sine Wave Trigonometry is a field of mathematics first compiled in the 2nd century BCE by the Greek mathematician Hipparchus. The history of trigonometry and of trigonometric functions follows the general lines of the history of mathematics. All trigonometric functions (sine, cosine, tangent, secant, cosecant, cotangent) can all be simply defined in terms of a single function sine. Sine, as associated with trigonometry, began in early civilization as a very important measuring aid. When the function concept was introduced in about 1700 along with calculus and analytic geometry, sine became a function and has little to do with triangles anymore. The sine function appears unexpectedly throughout analysis, because in essence it captures the idea of a wave, a fundamental concept in physics. A More Mathematical Explanation Note: understanding of this explanation requires: *Geometry, Algebra, Trigonometry and a little Physics Sinusoidal Waves A Sine Wave or Sinusoidal Wave, which describes a smooth repetitive oscillation, is one of the Basic Trigonometric Functions and a periodic function. The basic form of the function is $y(t) = A \cdot \sin(\omega t + \varphi)\,$. Sinusoidal waves are based on trigonomitric functions. General Form of Sine Function The general form of a Sine Function is $y(x,t) = A\cdot \sin(kx + \omega t- \varphi ) + D\,$. • $A\,$, amplitude of vibration, measures the peak of the deviation of the function from the center position. • $k$, wave number, also called the propagation constant, this useful quantity is defined as $2 \pi$ divided by the wavelength, so the SI units are radians per meter, and is also related to the angular frequency: $k = { \omega \over c } = { 2 \pi f \over c } = { 2 \pi \over \lambda }$. □ $c$, wave speed is the speed of propagation. □ $f$, frequency is the number of cycles in a unit of time. The SI unit of frequncy is the hertz (Hz). □ $\lambda$, wavelength measures the distance between any two points at corresponding positions on successive repetitions in the wave, so e.g. from one crest or trough to the next, in SI units of meters. • $x$, spatial dimension, also called position, measures the horizontal position of the wave. • $\omega\,$, angular frequency, measures the frequency of the function appearing in each unit, and is $2 \pi$ times the frequency, in SI units of radians per second. • $\varphi\,$, phase shift, measures the phase shift from the origin. • $D$, non-zero center amplitude, also called DC offset, gives the vertical position of the wave. • $T$, period, an essential concept of the sine wave, measures how long it takes the wave to complete one cycle. $T=\frac{1}{f}=\frac{2 \pi}{\omega}$. The general form gives a sine wave for a single dimension, thus the generalized equation given above gives the height of the wave at a position x at time t along a single line. This could, for example, be considered as the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wave number k are interpreted as vectors, and their product as a dot product. For more complex waves such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed. For more techniques related to the sine function, see also the page "Law of Sines". Useful formulas transformed by a general sine function: □ $y(x,t)=A \sin \omega (t - \frac{x}{c}) = A \sin 2 \pi f (t- \frac{x}{c})$ □ $y(x,t)=A \sin 2 \pi(\frac{t}{T}- \frac{x}{cT})$ □ $y(x,t)=A \sin(\omega t- k x)$ Periodic & Odd The sine function has a number of properties that result from it being periodic and odd. The basic sine function is periodic with a period of $2 \pi$, which implies that $\sin(x) = \sin(x + 2 \pi)$ or more generally, $\sin(x) = \sin(x + 2 \pi k), k \in Z$ $Z$ means integers. Since the period of the function $y=\sin x$ is $2 \pi$, the function $y=\sin(x + 2 \pi)$ is shifted to the left by $2 \pi$. Since the graph of the sine function is periodic and symmetric, the two function's graphs will coincide with each other. The function is odd; therefore, $\sin(-x) = -\sin(x)$ Both functions, $\sin(-x)$ and $-\sin(x)$ are created from the original function $y=\sin x$ by flipping it upside down. Explore The Graph of Sine Function We use a the Sine function $f(x)=2\sin(\frac{1}{2}x+\frac{1}{6}\pi)$ as an example. In the following, we use the red curve to represent this original function. From changing amplitude, perod, and phase shift, we can explore the properties of a general sine function. Change of The Amplitude The light green curve, which is "shorter" than the original curve, represents the function $g_1(x)=\sin(\frac{1}{2}x+\frac{1}{6}\pi)$. Notice how high and how low the graph goes; in mathematics this is called range, but in trigonometric funtions it also called amplitude. By changing the amplitude from 2 to 1, the peak of the graph changes from 2 to 1, and the minimum changes from -2 to -1. What do you think will happen when the sign of $A$ is changed to a negative? The graph will flip over upside down. Change of The Period The blue curve represents the function $g_2(x)=2\sin(x+\frac{1}{6}\pi)\,$, which is "wider" than the original function. From the image, it is obvious that the period of the new function is twice that of the original function. For the red curve, there are two periods in the space where there is one for the blue curve. That means periods occur twice as often for the red curve, or we can say that they are one-half as long. From the general view, as $\omega$ changes from 2 to 1, the period should shrink, but actually the period grows. Since $\omega$ can only measure the period, but not the real period, we use the notation $T$ to represent the period. $T=\frac{2\pi}{\omega}$, this means the smaller $\omega$ is, the bigger the period is. Change of The Phase Shift The black curve, which is almost identical, but just shifted a little bit from the original function, is $g_3(x)=2\sin(\frac{1}{2}x)\,$. Recall that the red curve is $f(x)=2\sin(\frac{1}{2}x+\frac{1}{6}\pi)\,$. Although the phase shift for the red curve is $\frac{1}{6}\pi\,$, the actual shift is $\frac{1}{3}\pi$. The actual shift in the graph is different from $\varphi\,$, because the the period and $\omega\,$ affect the shift. As the original function can be written as $f(x)=2\sin(\frac{1}{2}(x+\frac{1}{3}\ pi))$, the real shift of the original function away from 0 is $\frac{1}{3}\pi$. Notice that, if plus a number for $\varphi\,$, the graph will shift to left; if minus a number, the graph will shift to right. Graphs of Other Trignometric Functions - The Graph of $y=\cos x$ This graph can be created by moving function $y=\sin x$ to the left $\frac{\pi}{2}$ units. - The Graph of $y=\tan x$ - The Graph of $y=\cot x$ - The Graph of $y=\csc x$ - The Graph of $y=\sec x$ Non-sinusoidal Waves Non-sinusoidal waves are waves that are not pure sine waves. They are usually derived from simple math functions. While a pure sine consists of a single frequency, non-sinusoidal waveforms can be described as containing multiple sine waves of different frequencies. These "component" sine waves may, or may not, be multiples of a fundamental or "lowest" frequency. The frequency and amplitude of each component can be found using a mathematical technique known as Fourier analysis. Non-sinusoidal waveforms include square waves, rectangular waves, ramp waves, triangle waves, spiked waves, trapezoidal waves and sawtooth waves... Non-sinusoidal waveforms include square waves, rectangular waves, ramp waves, triangle waves, spiked waves, trapezoidal waves and sawtooth waves. Here we use the three most common waves as examples for non-sinusoindal waves. $\lfloor t \rfloor$ means the floor of $t$, and is the greatest integer smaller or equal to $t$. The sawtooth wave has a shape of saw, and this wave can be represented as the piecewise linear function as $x(t) = t - \lfloor t \rfloor$. Sawtooth wave is found often in time bases for display scanning. It is used as the starting point for subtractive synthesis, as a sawtooth wave of a constant period contains odd and even harmonics that fall off at −6 dB/octave. This wave can be represented as a piece linear function as $x(t)=A(-1)^{\lfloor \frac{2(x-x_0)}{T} \rfloor}$. This waveform is commonly used to represent digital information. A square wave of a constant period contains odd harmonics that fall off at −6 dB/octave. The wave can be represented as a piecewise linear function $x(t)=\frac{2}{a} \left (t-a \left \lfloor\frac{t}{a}+\frac{1}{2} \right \rfloor \right )(-1)^\left \lfloor\frac{t}{a}-\frac{1}{2} \right \ rfloor$. Triangle wave contains odd harmonics that fall off at −12 dB/octave. Fourier Series The mathematician Fourier proved that any continuous function could be produced as an infinite sum of sine and cosine waves. His result has far-reaching implications for the reproduction and synthesis of sound. A pure sine wave can be converted into sound by a loudspeaker and will be perceived to be a steady, pure tone of a single pitch. The sounds from orchestral instruments usually consists of a fundamental and a complement of harmonics, which can be considered to be a superposition of sine waves of a fundamental frequency f and integer multiples of that frequency. The process of decomposing a musical instrument sound or any other periodic function into its constituent sine or cosine waves is called Fourier analysis. You can characterize the sound wave in terms of the amplitudes of the constituent sine waves which make it up. This set of numbers tells you the harmonic content of the sound and is sometimes referred to as the harmonic spectrum of the sound. The harmonic content is the most important determiner of the quality or timbre of a sustained musical note. The decomposition process itself is called a Fourier Transform. The computation and study of Fourier series is known as harmonic analysis and is extremely useful as a way to break up an arbitrary periodic function into a set of simple terms that can be plugged in, solved individually, and then recombined to obtain the solution to the original problem or an approximation to it to whatever accuracy is desired or practical. Both sinusoidal waves and non-sinusoidal waves can be written in Fourier Series format. The computation of the Fourier series is based on the integral identities: $F(\omega) = \mathcal{F}(f)(\omega) = \frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^\infty f(t) e^{-i\omega t}\,dt$. Expanding the integrand by means of Euler's formula results in: $F(\omega)=\frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^\infty f(t)(\cos\,{\omega t} - i\,\sin{ \,\omega t})\,dt$, which may be written as the sum of two integrals: $F(\omega)=\frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^\infty f(t)\cos\,{\omega t} \,dt - \frac{i}{\sqrt{2\pi}} \int\limits_{-\infty}^\infty f(t)\sin\,{\omega t}\,dt$. For more about Euler's formula, see Complex Numbers. Using the method for a generalized Fourier series, the usual Fourier series involving sines and cosines is obtained by taking $f_1(x)=\cos x$ and $f_2(x)=\sin x$. Since these functions form a complete orthogonal system over $[-\pi,\pi]$, the Fourier series of a function $f(x)$ is given by $f(x)=\frac{1}{2} a_0 + \sum_{n=1}^ \infty a_n \cos(nx)+ \sum_{n=1}^ \infty b_n \sin(nx),$ $a_0=\frac{1}{\pi} \int_{-\pi}^\pi f(x) dx$ $a_n=\frac{1}{\pi} \int_{-\pi}^\pi f(x) \cos(nx) dx$ $b_n=\frac{1}{\pi} \int_{-\pi}^\pi f(x) \sin(nx) dx$ For example, all the non-sinuoidal waves can be written in term of Fourier series, since they are periodic functions. 1. The sawtooth wave can be written as $\frac{1}{2}-\frac{1}{\pi} \sum_{n=1}^{\infty} \frac{1}{n} \sin(\frac{n \pi x}{L})$. 2. The triangle wave can be written as $\frac{4}{\pi} \sum_{n=1}^{\infty} \frac{1}{n} \sin(\frac{n \pi x}{L})$, where $n$ could only be odd integers. 3. The square wave can be written as $\frac{8}{\pi^2} \sum_{n=1}^{\infty} \dfrac{(-1)^{\tfrac{n-1}{2}}}{n^2} \sin(\frac{n \pi x}{L})$, where $n$ could only be odd integers. - The approach of Fourier Transformation for sawtooth wave. - The approach of Fourier Transformation for triangle wave. - The approach of Fourier Transformation for square wave. From the images above, we can see the process of the Fourier transform. Plug different $n$ into the fourier series, we can get different graphs. $n$ is bigger, the graph is closer to the true graph. As the images show, if $n=1$, which represented in orange line, the graph is a sine wave; if $n=20$, which represented in blue line, the graph is almost the true wave form. To see more information about Fourier Series, also see page Fourier Transform. See Also Why It's Interesting There are lots of waves in nature, and some are made by human beings, like electronic waves. Waves In Nature - Mechanical Waves Physical waves, or mechanical waves, form through the vibration of a medium, or a string, the Earth's crust, or particles of gases and fluids. Waves have mathematical properties that can be analyzed to understand the motion of the wave. Most physical waves are sinusoidal waves. There are two types of mechanical waves. • A transverse wave is such that the displacements of the medium are perpendicular (transverse) to the direction of travel of the wave along the medium. Vibrating a string in periodic motion, so the waves move along it, is a transverse wave, as are waves in the ocean. • A longitudinal wave is such that the displacements of the medium are back and forth along the same direction as the wave itself. Sound waves, where the air particles are pushed along in the direction of travel, is an example of a longitudinal wave. • Seismic waves, or in general earthquakes, are the waves of energy caused by the sudden breaking of rock within the earth or an explosion. They are the energy that travels through the earth and is recorded on seismographs. Seismic waves have both transverse waves and longitudinal waves. In general, there are two types of seismic waves: body waves and surface waves. Body waves can travel through the earth's inner layers, but surface waves can only move along the surface of the planet like ripples on water. Earthquakes radiate seismic energy as both body and surface waves. □ Body waves travel through the interior of the earth. They arrive before the surface waves emitted by an earthquake. These waves are of a higher frequency than surface waves. ☆ Primary wave or P wave. This is the fastest kind of seismic wave, and, consequently, the first to 'arrive' at a seismic station. The P wave can move through solid rock and fluids, like water or the liquid layers of the earth. It pushes and pulls the rock it moves through just like sound waves push and pull the air. Have you ever heard a big clap of thunder and heard the windows rattle at the same time? The windows rattle because the sound waves were pushing and pulling on the window glass much like P waves push and pull on rock. Sometimes animals can hear the P waves of an earthquake. Dogs, for instance, commonly begin barking hysterically just before an earthquake 'hits' (or more specifically, before the surface waves arrive). Usually people can only feel the bump and rattle of these waves. ☆ Secondary wave or S wave, which is the second wave you feel in an earthquake. An S wave is slower than a P wave and can only move through solid rock, not through any liquid medium. It is this property of S waves that led seismologists to conclude that the Earth's outer core is a liquid. S waves move rock particles up and down, or side-to-side--perpindicular to the direction that the wave is traveling in (the direction of wave propagation). □ Surface waves Travel only through the crust. They are of a lower frequency than body waves, and are easily distinguished on a seismogram as a result. Though they arrive after body waves, it is surface waves that are almost enitrely responsible for the damage and destruction associated with earthquakes. This damage and the strength of the surface waves are reduced in deeper ☆ Love wave, named after A.E.H. Love, a British mathematician who worked out the mathematical model for this kind of wave in 1911. It's the fastest surface wave and moves the ground from side-to-side. Confined to the surface of the crust, Love waves produce entirely horizontal motion. ☆ Rayleigh wave, named for John William Strutt, Lord Rayleigh, who mathematically predicted the existence of this kind of wave in 1885. A Rayleigh wave rolls along the ground just like a wave rolls across a lake or an ocean. Because it rolls, it moves the ground up and down, and side-to-side in the same direction that the wave is moving. Most of the shaking felt from an earthquake is due to the Rayleigh wave, which can be much larger than the other waves. Even though the waves discussed here will refer to travel in a medium, the mathematics introduced can be used to analyze properties of non-mechanical waves. Electromagnetic radiation, for example, is able to travel through empty space, but still has the same mathematical properties as other waves. What Causes A Wave 1. Waves can be viewed as a disturbance in the medium around an equilibrium state, which is generally at rest. The energy of this disturbance is what causes the wave motion. A pool of water is at equilibrium when there are no waves, but as soon as a stone is thrown in it, the equilibrium of the particles is disturbed and the wave motion begins. 2. The disturbance of the wave travels, or propogates, with a definite speed, called the wave speed. 3. Waves transport energy, but not matter. The medium itself doesn't travel; the individual particles undergo back-and-forth or up-and-down motion around the equilibrium position. Waves In Electronics - Sine Wave Generation A function generator is a piece of electronic test equipment or software used to generate electrical waveforms. These waveforms can be either repetitive or single-shot, in which case some kind of triggering source is required (internal or external). Function Generators are used in development, testing and repair of electronic equipment, e.g. as a signal source to test amplifiers, or to introduce an error signal into a control loop. Producing and manipulating the sine wave function is common used in Physic area. Sine wave circuits is a significant design challenge because they requre a constantly controlled linear oscillator. Sine wave circuitry is required in a number of diverse areas including audio testing, calibration equipment, transducer drives, power conditioning and automatic test equipment (ATE). Sine wave generator can produce a sine wave of certain amplitude and frequency without an input signal. From the energy view, it is a circuit, which transforms the direct current to alternating The Components of oscillator • The amplifier • The positive feedback • RC network, LC network, or Quartz notch filter □ RC network, or RC filter, or RC circuit, is A resistor–capacitor network. A first order RC circuit is composed of one resistor and one capacitor and is the simplest type of RC circuit. We use the RC network to produce a low frequency signal that around several hundred kHz. □ LC network, or LC circuit, also called a resonant circuit or tuned circuit, consists of an inductor, and a capacitor. Using for create a high frequency signal, that is around several hundred □ Quartz notch filters, or Quartz crystal filters, are more likely to use to produce a stable high frequency signal over a wide temperature range, since quartz has a very low coefficient of thermal expansion. Quartz filters have much highter quality than RC and LC filters. Teaching Materials There are currently no teaching materials for this page. Add teaching materials. About the Creator of this Image My name is Xah Lee. (李杀) Am Chinese by blood, but lived most of my adult life in California, USA. I do computer programing for a living, since 1995. Education: i attended Foothill college and DeAnza college (California, USA) during ~1991 to 1994. These are 2-years community colleges. I took all math courses they offered. Pretty much highest being different equations and linear algebra. That's pretty much my formal education. Profession: I'm a programer by profession. The companies i worked for are notably Wolfram Research in 1995 for 6 months as a intern, and WebOrder/Netopia during 1999 to 2002. (WebOrder was bought by Netopia in ~1999, and Netopia is bought by Motorola in 2007). My expertise is unix admin, web application development, using unix, Apache, perl. Besides web app dev, my expertise also lies in programing geometry visualization. My credential are mostly my website xahlee.org, developed since 1996 and is on going. Notably, visited by 9 thousands visitors per day, linked from hundreds of math edu institutions and programing websites, and cited in a few math text books and math journals. I taught once geometry visualization programing using Mathematica to grad math students at National Center for Theoretical Sciences, Taiwan, in 2003. Thanks to professor Richard Palais for the invitation. Related Links Additional Resources [1] Lee, Xah. (n.d.). Sine Curve. Retrieved from http://xahlee.org/SpecialPlaneCurves_dir/Sinusoid_dir/sinusoid.html [2] Valdez-Sanchez, Luis. Table of Trigonometric Identities. S.O.S. MATH. Dec 13, 1996. Retrieved from http://www.sosmath.com/trig/Trig5/trig5/trig5.html [3] Jones, Andrew Z. (n.d.). Mathematical Properties of Waves. From About.com Physics. Retrieved from http://physics.about.com/od/mathematics/a/wavemechanics.htm [4] Feller, Karline. (n.d.). The Sine Graph. retrieved from http://jwilson.coe.uga.edu/emt668/EMT668.Folders.F97/Feller/sine/assmt1.html [5] Sine: Properties. From Math.com. Retrieved from http://www.math.com/tables/algebra/functions/sine/properties.htm [6] Welz, Gary Leo. (n.d.). Wave Mathematics - Trigonometric Functions. From Visionlearning. Retrieved from http://www.visionlearning.com/library/module_viewer.php?mid=131 [7] Khamsi, M.A. (n.d.). Fourier Sine and Cosine Series. From S.O.S. MATH. Retrieved from http://www.sosmath.com/fourier/fourier2/fourier2.html [8] Nave, C.R. Fourier Analysis and Synthesis. From HyperPhysics. Retrieved from http://hyperphysics.phy-astr.gsu.edu/hbase/audio/fourier.html [9] Wikipedia. (n.d.). Waveform. Retrieved from http://en.wikipedia.org/wiki/Waveform [10] Wikipedia. (n.d.). Function generator. Retrieved from http://en.wikipedia.org/wiki/Function_generator [11] Wikipedia. (n.d.). History of trigonometry. Retrieved from http://en.wikipedia.org/wiki/History_of_trigonometry [12] Wikipedia. (n.d.). Sine wave. Retrieved from http://en.wikipedia.org/wiki/Sine_wave [13] Weisstein, Eric W. (n.d.). Fourier Series. From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/FourierSeries.html [14] Michigan Technological University. (n.d.). What Is Seismology and What Are Seismic Waves? Retrieved from http://www.geo.mtu.edu/UPSeis/waves.html Future Directions for this Page □ More information about sine wave generation □ Add more things to illustrate the sine curve in other dimensions. □ Add more about non-sinusoidal waves. □ More detail about Fourier Series. If you are able, please consider adding to or editing this page! Have questions about the image or the explanations on this page? Leave a message on the discussion page by clicking the 'discussion' tab at the top of this image page.
{"url":"http://mathforum.org/mathimages/index.php?title=Waves&oldid=28363","timestamp":"2014-04-19T02:06:11Z","content_type":null,"content_length":"81760","record_id":"<urn:uuid:c8182548-3721-4c3a-9754-0e5109e28678>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
explaining Jet Brightness / Power relation ? - Astronomy and Cosmology There is an observed relation, between the brightness & power of relativistic jets, common to Quasars & GRBs: Could the following calculations help explain the same? By partial fractions: So the integration yields: For relativistic jets , and most of the power is radiated early on, at high . So: But the initial electron energy was also proportional to . So, the average power of emissions is (calculated to be) constant at all energy scales: Perhaps some similar sort of scale invariance, whereby the power emitted by decelerating electrons is quasi-constant, could account, for the observed Jet brightness / power relation.
{"url":"http://www.scienceforums.net/topic/71637-explaining-jet-brightness-power-relation/","timestamp":"2014-04-21T02:00:49Z","content_type":null,"content_length":"48990","record_id":"<urn:uuid:15955439-b5b4-4ece-a584-d78bc6475d21>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Bistable multivibrator is an electronic circuit used to implement a variety of simple two-state systems such as . It is characterized by two amplifying devices (transistors, electron tubes or other devices) cross-coupled by resistors and capacitors. The most common form is the astable or oscillating type, which generates a square wave —the high level of in its output is what gives the multivibrator its common name. The mulitvibrator originated as a vacuum tube (valve) circuit described by William Eccles F.W. Jordan in 1919. There are three types of multivibrator circuit: • astable, in which the circuit is not stable in either state—it continuously oscillates from one state to the other. • monostable, in which one of the states is stable, but the other is not—the circuit will flip into the unstable state for a determined period, but will eventually return to the stable state. Such a circuit is useful for creating a timing period of fixed duration in response to some external event. This circuit is also known as a one shot. A common application is in eliminating switch • bistable, in which the circuit will remain in either state indefinitely. The circuit can be flipped from one state to the other by an external event or trigger. Such a circuit is important as the fundamental building block of a register or memory device. This circuit is also known as a flip-flop. In its simplest form the multivibrator circuit consists of two cross-coupled transistors. Using resistor-capacitor networks within the circuit to define the time periods of the unstable states, the various types may be implemented. Multivibrators find applications in a variety of systems where square waves or timed intervals are required. Simple circuits tend to be inaccurate since many factors affect their timing, so they are rarely used where very high precision is required. Before the advent of low-cost integrated circuits, chains of multivibrators found use as frequency dividers. A free-running multivibrator with a frequency of one-half to one-tenth of the reference frequency would accurately lock to the reference frequency. This technique was used in early electronic organs, to keep notes of different octaves accurately in tune. Other applications included early television systems, where the various line and frame frequencies were kept synchronized by pulses included in the video signal. Astable multivibrator circuit This circuit shows a typical simple astable circuit, with an output from the collector of Q1, and an inverted output from the collector of Q2. Suggested values which will yield a frequency of about 0.48Hz: • R1, R4 = 10K • R2, R3 = 150K • C1, C2 = 10μF • Q1, Q2 = BC547 or similar NPN switching transistor Basic mode of operation The circuit keeps one transistor switched on and the other switched off. Suppose that initially, Q1 is switched on and Q2 is switched off. State 1: • Q1 holds the bottom of R1 (and the left side of C1) near ground (0V). • The right side of C1 (and the base of Q2) is being charged by R2 from below ground to 0.6V. • R3 is pulling the base of Q1 up, but its base-emitter diode prevents the voltage from rising above 0.6V. • R4 is charging the right side of C2 up to the power supply voltage (+V). Because R4 is less than R2, C2 charges faster than C1. When the base of Q2 reaches 0.6V, Q2 turns on, and the following positive feedback loop occurs: • Q2 abruptly pulls the right side of C2 down to near 0V. • Because the voltage across a capacitor cannot suddenly change, this causes the left side of C2 to suddenly fall to almost -V, well below 0V. • Q1 switches off due to the sudden disappearance of its base voltage. • R1 and R2 work to pull both ends of C1 toward +V, completing Q2's turn on. The process is stopped by the B-E diode of Q2, which will not let the right side of C1 rise very far. This now takes us to State 2, the mirror image of the initial state, where Q1 is switched off and Q2 is switched on. Then R1 rapidly pulls C1's left side toward +V, while R3 more slowly pulls C2's left side toward +0.6V. When C2's left side reaches 0.6V, the cycle repeats. Multivibrator frequency The period of each half of the multivibrator is given by t = ln(2)RC. The total period of oscillation is given by: T = t[1] + t[2] ln(2)R[2] C[1] + ln(2)R[3] "font-style : italic;">C[2] $f = frac\left\{1\right\}\left\{T\right\}= frac\left\{1\right\}\left\{ln\left(2\right) cdot \left(R_2 C_1 + R_3 C_2\right)\right\}approx frac\left\{1\right\}\left\{0.693 cdot \left(R_2 C_1 + R_3 C_2\ For the special case where • t[1] = t[2] (50% duty cycle) • R[2] = R[3] • C[1] = C[2] $f = frac\left\{1\right\}\left\{T\right\}= frac\left\{1\right\}\left\{ln\left(2\right) cdot 2RC\right\}approx frac\left\{0.721\right\}\left\{RC\right\}$ Initial power-up When the circuit is first powered up, neither transistor will be switched on. However, this means that at this stage they will both have high base voltages and therefore a tendency to switch on, and inevitable slight asymmetries will mean that one of the transistors is first to switch on. This will quickly put the circuit into one of the above states, and oscillation will ensue. In practice, oscillation always occurs for practical values of R and C. However, if the circuit is temporarily held with both bases high, for longer than it takes for both capacitors to charge fully, then the circuit will remain in this stable state, with both bases at 0.6V, both collectors at 0V, and both capacitors charged backwards to -0.6V. This can occur at startup without external intervention, if R and C are both very small. For example, a 10 MHz oscillator of this type will often be unreliable. (Different oscillator designs, such as relaxation oscillators, are required at high frequencies.) Period of oscillation Very roughly, the duration of state 1 (low output) will be related to the time constant R2*C1 as it depends on the charging of C1, and the duration of state 2 (high output) will be related to the time constant R3*C2 as it depends on the charging of C2. Because they do not need to be the same, an asymmetric duty cycle is easily achieved. However, the duration of each state also depends on the initial state of charge of the capacitor in question, and this in turn will depend on the amount of discharge during the previous state, which will also depend on the resistors used during discharge (R1 and R4) and also on the duration of the previous state, etc. The result is that when first powered up, the period will be quite long as the capacitors are initially fully discharged, but the period will quickly shorten and stabilise. The period will also depend on any current drawn from the output and on the supply voltage. Protective components While not fundamental to circuit operation, connected in series with the base or emitter of the transistors are required to prevent the base-emitter junction being driven into breakdown when the supply voltage is in excess of the Veb breakdown voltage, typically around 7 volts for most silicon transistors. In the monostable configuration, only one of the transistors requires protection. Monostable multivibrator circuit When triggered by an input pulse, a monostable multivibrator will switch to its unstable position for a period of time, and then return to its stable state. If repeated application of the input pulse maintains the circuit in the unstable state, it is called a retriggerable monostable. If further trigger pulses do not affect the period, the circuit is a non-retriggerable multivibrator. Bistable multivibrator circuit Suggested values: This circuit is similar to an astable multivibrator, except that there is no charge or discharge time, due to the absence of capacitors. Hence, when the circuit is switched on, if Q1 is on, its collector is at 0 V. As a result, Q2 gets switched off. This results in nearly +V volts being applied to base of Q1, thus keeping it on. Thus, the circuit remains stable in a single state Similarly, Q2 remains on continuously, if it happens to get switched on first. For practical uses, one employs the fact that switching of state can be done via Set and Reset terminals connected to the bases. For example, if when Q2 is on, Set is grounded, this switches Q2 off, and as described above, makes Q1 on. Thus, Set is used to "set" Q1 on, and Reset is used to "reset" it to off state. External links
{"url":"http://www.reference.com/browse/Bistable+multivibrator","timestamp":"2014-04-21T01:52:01Z","content_type":null,"content_length":"89070","record_id":"<urn:uuid:ca2cc72b-b1d4-4905-b343-6d02aed2dede>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Solar Energy Basics of Solar Energy The Sun --> Always there; lots of Energy How many photons (energy) reach the surface of the Earth on Average? The energy balance in the atmosphere is shown here: The main components in this diagram are the following: • Short wavelength (optical wavelengths) radiation from the Sun reaches the top of the atmosphere. • Clouds reflect 17% back into space. If the earth gets more cloudy, as some climate models predict, more radiation will be reflected back and less will reach the surface • 8% is scattered backwards by air molecules: • 6% is actually directly reflected off the surface back into space • So the total reflectivity of the earth is 31%. This is technically known as an Albedo . Note that during Ice Ages, the Albedo of the earth increases as more of its surface is reflective. This, of course, exacerbates the problem. What Happens to the 69% of the incoming radiation that doesn't get reflected back: • 19% gets absorbed directly by dust, ozone and water vapor in the upper atmosphere. This region is called the stratosphere and its heated by this absorbed radiation. Loss of stratospheric ozone is causing the stratosphere to cool with time. • 4% gets absorbed by clouds located in the troposphere. This is the lower part of the earth's atmosphere where weather happens. • The remaining 47% of the sunlight that is incident on top of the earth's atmosphere reaches the surface. This is not a real significant energy loss. Cliff Notes Summary How much energy from the sun reaches the surface of the Earth on Average? Note that we measure energy in units of Watt-hours. A watt is not a unit of energy, it is a measure of power. ENERGY = POWER x TIME 1 Kilowatt Hour = 1KWH = 1000 watts used in one hour = 10 100 watt light bulbs left on for an hour Incident Solar Energy on the ground: • 8 hour summer day, 40 degree latitude So over this 8 hour day one receives: • 8 hours x 600 watts per sq. m = 4800 watt-hours per sq. m which equals 4.8 kilowatt hours per sq. m • This is equivalent to 0.13 gallons of gasoline • For 1000 square feet of horizontal area (typical roof area) this is equivalent to 12 gallons of gas or about 450 KWH But to go from energy received to energy generated requires conversion of solar energy into other forms (heat, electricity) at some reduced level of efficiency. We will talk more about PV cells in detail later. For now the only point to retain is that they are quite low in efficieny! Collection of Solar Energy Amount of captured solar energy depends critically on orientation of collector with respect to the angle of the Sun. • Under optimum conditions, one can achieve fluxes as high as 1000 Watts per sq. meter • In the Winter, for a location at 40 degrees latitude, the sun is lower in the sky and the average flux received is about 300 Watts per sq. meter A typical household Winter energy use is around 3000 KWHs per month or roughly 100 KWH per day. Assume our roof top area is 100 square meters (about 1100 square feet). In the winter on a sunny day at this latitude (40^o) the roof will receive about 6 hours of illumination. So energy generated over this 6 hour period is: 300 watts per square meter x 100 square meters x 6 hours = 180 KWH (per day) But remember the efficiency problem: • 5% efficiency • 10% efficiency • 20% efficiency At best, this represents 1/3 of the typical daily Winter energy usage and it assumes the sun shines on the rooftop for 6 hours that day. With sensible energy conservation and insulation and south facing windows, its possible to lower your daily use of energy by about a factor of 2. In this case, if solar shingles become 20% efficient, then they can provide 50-75 % of your energy needs Another example calculation for Solar Energy which shows that relative inefficiency can be compensated for with collecting area. A site in Eastern Oregon receives 600 watts per square meter of solar radiation in July. Asuume that the solar panels are 10% efficient and that the are illuminated for 8 hours. How many square meters would be required to generate 5000 KWH or electricity
{"url":"http://zebu.uoregon.edu/1998/ph162/l4.html","timestamp":"2014-04-21T12:15:44Z","content_type":null,"content_length":"7050","record_id":"<urn:uuid:f98941a8-d6d2-4006-9b71-410d72bdd7be>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
The Kansas University Rewrite Engine (KURE) The Kansas University Rewrite Engine (KURE) is a Haskell-hosted DSL for strategic programming. We`ve just released the third version of KURE, which adds lenses for navigation and a variant set of combinators to make change detection easier. This post just overviews the basics, and gives a simple example of usage. KURE Basics KURE is based around the following data type: data Translate c m a b = Translate {apply :: c -> a -> m b} translate :: (c -> a -> m b) -> Translate c m a b translate = Translate There`s a lot of type parameters, but the essential idea is that Translate represents a transformation that can be applied to a value of type a in a context c, and produces a value of type b in the monad m. Actually, we require m to be a MonadPlus, as this allows us to encode notions of success and failure, which are integral to strategic programming. Specifically, mzero represents failure and mplus is a “catch” for both mzero and fail. To avoid clutter we`ll omit the class constraints, but just imagine that wherever you see an m there`s a (MonadPlus m => ...) to go with it. We also define a synonym for the special case when the result and argument type coincide: type Rewrite c m a = Translate c m a a Translate itself forms a monad (and an arrow, and a bunch of other structures besides), which provides us with a lot of combinators for free. Two key definitions are composition and bind: (>>>) :: Translate c m a b -> Translate c m b d -> Translate c m a d t1 >>> t2 = translate $ \ c -> apply t1 c >=> apply t2 c (>>=) :: Translate c m a b -> (b -> Translate c m a d) -> Translate c m a d t >>= f = translate $ \ c a -> do b <- apply t c a apply (f b) c a Observe the difference: composition takes the result of the first translation as the argument to the second translation, whereas bind uses the result to determine the second translation, but then applies that second translation to the original argument. Another useful combinator is <+> (from the ArrowPlus class), which acts as a catch for Translate: (<+>) :: Translate c m a b -> Translate c m a b -> Translate c m a b t1 <+> t2 = translate $ \ c a -> apply t1 c a `mplus` apply t2 c a We can now write strategic programming code, such as the classic try combinator: tryR :: Rewrite c m a -> Rewrite c m a tryR r = r <+> idR Where idR is the identity rewrite: idR : Rewrite c m a idR = translate $ \ _ -> return Finally, one combinator new to this version of KURE is sequential composition of rewrites that allows one rewrite to fail: (>+>) :: Rewrite c m a -> Rewrite c m a -> Rewrite c m a Example: Arithmetic Expressions with Fibonacci Now let`s consider an example. Take a data type of arithmetic expressions augmented with a Fibonacci primitive: data Arith = Lit Int | Add Arith Arith | Sub Arith Arith | Fib Arith To keep things simple, we`ll work with an empty context, and use Maybe as our MonadPlus: type RewriteA = Rewrite () Maybe Arith Let`s start with some rewrites that perform basic arithmetic simplification: addLitR :: RewriteA addLitR = do Add (Lit m) (Lit n) <- idR return (Lit (m + n)) subLitR :: RewriteA subLitR = do Sub (Lit m) (Lit n) <- idR return (Lit (m - n)) We`re exploiting the fact that Translate is a monad to use do-notation – something we have found extremely convenient. If the pattern match fails, this will just trigger the fail method of the monad, which we can then catch as desired. Using >+>, we can combine these two rewrites into a single rewrite for arithmetic simplification: arithR :: RewriteA arithR = addLitR >+> subLitR Next a more interesting rewrite, unfolding the definition of Fibonacci: fibLitR :: RewriteA fibLitR = do Fib (Lit n) <- idR case n of 0 -> return (Lit 0) 1 -> return (Lit 1) _ -> return (Add (Fib (Sub (Lit n) (Lit 1))) (Fib (Sub (Lit n) (Lit 2))) Tree Traversals Thus far, we`ve only discussed rewrites that apply to the entire data structure we`re working with. But a key feature of KURE (and strategic programming) is the ability to traverse a structure applying rewrites to specific locations. For example, the anyR combinator applies a rewrite to each immediate child of a node, succeeding if any of those rewrites succeed: anyR :: RewriteA -> RewriteA At first glance this might sound simple, but there are a number of issues. Most notably, what if the children have distinct types from each other and their parent? How should such a combinator be typed? This isn`t an issue in this simple Fibonacci example, as there is only one type (Arith), but in general you could have an AST with multiple mutually recursive non-terminals. KURE solves this by constructing a sum data type of all non-terminals in the AST, and having traversal combinators operate over this data type (using Associated Types to specify the sum type for each non-terminal). This is the most significant feature of KURE, but it`d take too long to explain the details here. You can read about it in either of the following papers: Using the anyR combinator (amongst others), KURE defines various traversal strategies (we just give the specialised types here): anybuR :: RewriteA -> RewriteA anybuR r = anyR (anybuR r) >+> r innermostR :: RewriteA -> RewriteA innermostR r = anybuR (r >>> tryR (innermostR r)) anybuR traverses a tree in a bottom-up manner, applying the rewrite to every node, whereas innermostR performs a fixed-point traversal, continuing until no more rewrites can be successfully applied. For example, we can define an evaluator for Arith using this strategy: evalR :: RewriteA evalR = innermostR (arithR >+> fibLitR) KURE 2.0.0 is now available on Hackage. You can find this Fibonacci example, and several others, bundled with the source package. For a non-trivial example, KURE is being used as the underlying rewrite engine for the HERMIT tool. HERMIT hasn`t been released yet, but you can read about it in this paper: Introducing the HERMIT Equational Reasoning Framework The paper also describes how KURE uses lenses. Tags: HERMIT, KURE, Rewrites, Strategic Programming
{"url":"http://www.ittc.ku.edu/csdlblog/?p=124","timestamp":"2014-04-19T14:55:54Z","content_type":null,"content_length":"14736","record_id":"<urn:uuid:36d3a8c7-323b-44f0-8b4a-231181067fc3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
help with this probability problem December 11th 2008, 11:57 AM #1 Nov 2008 help with this probability problem help with this problem the probability that any one student will fail every course this semester is .003 what is the probability that you will no fail every course? If the probability of failing every course is 0.003, then the probability you don't fail every course is $1 - 0.003 = 0.997$ December 11th 2008, 03:53 PM #2 Dec 2008 Auckland, New Zealand
{"url":"http://mathhelpforum.com/statistics/64537-help-probability-problem.html","timestamp":"2014-04-17T15:56:21Z","content_type":null,"content_length":"31153","record_id":"<urn:uuid:b8104e6a-cbf8-48c7-9ade-61e65477b337>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the reason for this? (studying reasons for seg faults) up vote 0 down vote favorite I am just observing the reasons for segmentation fault. I malloced memory here but still it is giving seg fault. What might be the reasons for this? void main() int **matrix,i,j; matrix = malloc(sizeof(int)); **matrix = 10; printf("%d", **matrix); c segmentation-fault Enable compiler warnings, and you will discover the problem. – Oli Charlesworth Mar 21 '13 at 0:42 Think of the type of matrix and what you are trying to do with **matrix = 10 – Chris Mar 21 '13 at 0:44 main should return int. The rest is secondary. – wildplasser Mar 21 '13 at 0:49 **matrix is a pointer to pointer and you allocate memory for *matrix only. Not for **matrix, so you can not(should not) derefrence it(as you are doing in printf();). – linuxD Mar 21 '13 at 5:24 add comment 3 Answers active oldest votes matrix is a pointer (it happens to be a pointer to a pointer to int), and you've allocated memory (though probably not enough) for it to point to. *matrix is also a pointer (it happens to be a pointer to int), and you haven't allocated memory for it to point to, so *matrix contains garbage (or doesn't exist if your malloc() call didn't allocate enough memory). So dereferencing *matrix has undefined behavior. You were lucky enough for the symptom to be obvious. up vote 2 down vote accepted Why are you allocating sizeof(int) bytes for matrix to point to? And why are you defining void main() rather than the correct int main(void)? (void main() is mostly useful as a way to detect books and tutorials written by authors who don't know the language very well.) Thank you Keith. matrix = malloc(sizeof(int *)); *matrix = malloc(sizeof(int)); Solved my problem. – Shreyas Kale Mar 21 '13 at 0:55 1 You mean Herb Schildt, by any chance ;-? – wildplasser Mar 21 '13 at 0:56 @wildplasser: Yes, for example. – Keith Thompson Mar 21 '13 at 2:30 add comment Because you are not allocating memory properly. matrix is a pointer that points to a integer pointer not just an integer pointer. The correct way to do this would be: (Warning: I type this directly on the browser, do not use it in production) void main() int **matrix,i,j; matrix = (int **) malloc(sizeof(void *)); *matrix = (int *) malloc(sizeof(void *)); **matrix = 10; printf("%d", **matrix); return 0; It is easier to understand what is going on by imagining how the memory would look like in each line ( address | type | value {value_type}): int **matrix,i,j; matrix: 0x00 | &int ** | NULL {int **} i: 0x04 | &int | GARBAGE {int} j: 0x08 | &int | GARBAGE {int} Because virtual memory, it is usual that Operative Systems initialize memory with 0 (or NULL) but as this is not granted it is bad programming to count with that. up vote 1 down vote matrix = (int **) malloc(sizeof(void *)); malloc will return an address of an space on the heap: matrix: 0x00 | &int ** | 0xA0 {int **} i: 0x04 | &int | GARBAGE {int} j: 0x08 | &int | GARBAGE {int} *matrix: 0xA0 | &int * | GARBAGE {int *} *matrix = (int *) malloc(sizeof(void *)); matrix: 0x00 | &int ** | 0xA0 {int **} i: 0x04 | &int | GARBAGE {int} j: 0x08 | &int | GARBAGE {int} *matrix: 0xA0 | &int * | 0xA4 {int *} **matrix:0xA4 | &int | GARBAGE {int} **matrix = 10; matrix: 0x00 | &int ** | 0xA0 {int **} i: 0x04 | &int | GARBAGE {int} j: 0x08 | &int | GARBAGE {int} *matrix: 0xA0 | &int * | 0xA4 {int *} **matrix:0xA4 | &int | 10 {int} add comment stores size for and int, whilist up vote 0 down vote stores size to a pointer to int. You are reserving the size of an int and not the size of an *int. This is what is causing the problems. Also you should allocate the int to a *int pointer if you want also to do some real computations. 1 No, this is not the cause of the problem. – Oli Charlesworth Mar 21 '13 at 0:45 add comment Not the answer you're looking for? Browse other questions tagged c segmentation-fault or ask your own question.
{"url":"http://stackoverflow.com/questions/15537457/what-is-the-reason-for-this-studying-reasons-for-seg-faults/15537980","timestamp":"2014-04-17T22:05:18Z","content_type":null,"content_length":"81596","record_id":"<urn:uuid:8426a3c3-b8d3-420c-b886-bc15a6c14cce>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of prandtl A Prandtl-Meyer expansion fan is a centered expansion process, which turns a supersonic flow around a convex corner. The fan consists of infinite number of Mach waves, diverging from a sharp corner. In case of a smooth corner, these waves can be extended backwards to meet at a point. Each wave in the expansion fan turns the flow gradually (in small steps). It is physically impossible to turn the flow away from itself through a single "shock" wave because it will violate the second law of thermodynamics. Across the expansion fan, the flow accelerates (velocity increases) and the Mach number increases, while the static pressure, temperature and density decrease. Since the process is isentropic, the stagnation properties remain constant across the fan. Flow properties The expansion fan consists of infinite number of expansion waves or Mach lines. The first Mach line is at an angle $mu_1 = arcsin left\left(frac\left\{1\right\}\left\{M_1\right\} right\right)$ with respect to the flow direction and the last Mach line is at an angle $mu_2 = arcsin left\left(frac\left\{1\right\}\left\{M_2\right\} right\right)$ with respect to final flow direction. Since the flow turns in small angles and the changes across each expansion wave are small, the whole process is isentropic. This simplifies the calculations of the flow properties significantly. Since the flow is isentropic, the stagnation properties like stagnation pressure ($p_0$), stagnation temperature ($T_0$) and stagnation density ($rho_0$) remain constant. The final static properties are a function of the final flow Mach number ($M_2$) and can be related to the initial flow conditions as follows, frac{T_2}{T_1} &= & bigg(frac{1+frac{gamma -1}{2}M_1^2}{1+frac{gamma -1}{2}M_2^2} bigg) frac{p_2}{p_1} &= & bigg(frac{1+frac{gamma -1}{2}M_1^2}{1+frac{gamma -1}{2}M_2^2} bigg)^{gamma/(gamma-1)} frac {rho_2}{rho_1} &= &bigg(frac{1+frac{gamma -1}{2}M_1^2}{1+frac{gamma -1}{2}M_2^2} bigg)^{1/(gamma-1)} end{array} The Mach number after the turn ($M_2$) is related to the initial Mach number ($M_1$) and the turn angle ($theta$) by, $theta = nu\left(M_2\right) - nu\left(M_1\right) ,$ where, $nu\left(M\right) ,$ is the Prandtl-Meyer function. This function determines the angle through which a sonic flow (M = 1) must turn to reach a particular Mach number (M). Mathematically, begin\left\{align\right\} nu\left(M\right) & = int frac{sqrt{M^2-1}}{1+frac{gamma -1}{2}M^2}frac{,dM}{M} & = sqrt{frac{gamma + 1}{gamma -1}} cdot arctan sqrt{frac{gamma -1}{gamma +1} (M^2 -1)} - arctan sqrt{M^2 -1} end{align} By convention, $nu\left(1\right) = 0. ,$ Thus, given the initial Mach number ($M_1$), one can calculate $nu\left(M_1\right) ,$ and using the turn angle find $nu\left(M_2\right) ,$. From the value of $nu\left(M_2\right) ,$ one can obtain the final Mach number ($M_2$) and the other flow properties. Maximum turn angle As Mach number varies from 1 to $infty$, $nu ,$ takes values from 0 to $nu_\left\{max\right\} ,$, where $nu_\left\{max\right\} = frac\left\{pi\right\}\left\{2\right\} bigg\left(sqrt\left\{frac\left\{gamma+1\right\}\left\{gamma-1\right\}\right\} -1 bigg\right)$ This places a limit on how much a supersonic flow can turn through, with the maximum turn angle given by, $theta_\left\{max\right\} = nu_\left\{max\right\} - nu\left(M_1\right) ,$ One can also look at it as follows. A flow has to turn so that it can satisfy the boundary conditions. In an ideal flow, there are two kinds of boundary condition that the flow has to satisfy, 1. Velocity boundary condition, which dictates that the component of the flow velocity normal to the wall be zero. It is also known as no-penetration boundary condition. 2. Pressure boundary condition, which states that there cannot be a discontinuity in the static pressure inside the flow (since there are no shocks in the flow). If the flow turns enough so that it becomes parallel to the wall, we do not need to worry about this boundary condition. However, as the flow turns, its static pressure decreases (as described earlier). If there is not enough pressure to start with, the flow won't be able to complete the turn and will not be parallel to the wall. This shows up as the maximum angle through which a flow can turn. Lower to Mach number to start with (i.e. small $M_1$), greater the maximum angle through which the flow can turn. The streamline which separates the final flow direction and the wall is known as a slipstream (shown as the dashed line in the figure). Across this line there is a jump in the temperature, density and tangential component of the velocity (normal component being zero). Beyond the slipstream the flow is stagnant (which automatically satisfies the velocity boundary condition at the wall). In case of real flow, a shear layer is observed instead of a slipstream, because of the additional no-slip boundary condition. See also * * * External links
{"url":"http://www.reference.com/browse/prandtl","timestamp":"2014-04-19T07:49:03Z","content_type":null,"content_length":"80696","record_id":"<urn:uuid:d63a6e45-36d3-4c77-8fa5-b9983c6a4bee>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
How to get a probability space of repeated numbers? March 5th 2013, 03:04 PM How to get a probability space of repeated numbers? If you have a set of numbers D = {x1, x2, x3, ..., xn}, you can determine if there is any repeated numbers in D using an algorithm using a direct access table (hashing). But how do you determine a probability space when deriving the expected running time? For the probability space, are we basically saying what is the probability that a number in D is repeated, but how do you get a probability from this? Can anyone help please? I am thinking, that for each number xi in D, you have to check if it is the same number as all the other ones in D, so there is n-1 other numbers in D. March 5th 2013, 07:26 PM Re: How to get a probability space of repeated numbers? Hey Sneaky. With regards to your question, you have to make an assumption with regard to the probability. Typically we do this in a couple of ways depending on the problem. One way is to make mathematical assumptions and then derive the PDF (probability function) of the distribution. This is done with things like Binomial, Poisson, and other similar distributions. The other way is to look at a distribution based on its fit to some model. We do this by either forcing a distribution to have a certain structure (like Normal, Chi-square, Uniform) or we can use what is called an empirical distribution which is just a fancy way of using the actual data from an actual experiment/process/etc and plotting a nice frequency histogram and normalizing it. If you want to assume pure randomness then use a uniform distribution since it has the highest entropy, it gives the best model of randomness provided that all realizations are independent from the other ones.
{"url":"http://mathhelpforum.com/statistics/214278-how-get-probability-space-repeated-numbers-print.html","timestamp":"2014-04-20T02:58:50Z","content_type":null,"content_length":"5037","record_id":"<urn:uuid:47a7ddae-f4b9-4ced-9363-ea25347031da>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: High School Basic Algebra This page: basic algebra Dr. Math See also the Dr. Math FAQ: cubic and quartic equations order of operations Internet Library: basic algebra T2T FAQ: algebra help About Math basic algebra linear algebra linear equations Complex Numbers Discrete Math Fibonacci Sequence/ Golden Ratio conic sections/ coordinate plane practical geometry Negative Numbers Number Theory Square/Cube Roots Browse High School Basic Algebra Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Solving simple linear equations. Positive/negative integer rules. Mixture problems. Quadratic equations. Absolute value. Completing the square. Direct and indirect variation. Inequalities and negative numbers. Find a polynomial F(x) of degree less than n so that the graph of F passes through all of the points. Your job is to prepare the women's athletes for the Olympic 1,500-meter race. I've got questions concerning graphing. How do you find the asymptotes of f(x) = (2x + 1)/(x - 3)? How do you use the asymptotes to graph the function? How do you graph 2x-y = 10? Please explain how to graph this equation: y = a + b(x) + c(x^2) + ... How would you approach graphing negative 2 times the absolute value of x plus 2? How would I find the intercept for a problem such as 3x-2y = 12? How do you know how to graph a parabola from looking at its equation? I do not understand how to graph piecewise functions. Given f(x)=4 and g(x)=2, which is the Y axis and which is the X axis? f(x)=x-2(3)+4 ; g(x)=x-2(3)+4. Can you give me a simple explanation of how to graph quadratic polynomials like y = x^2 - 8x + 15? When given the slope intercept form, how do you know whether you rise or drop when looking at the slope? How many times do the hour and minute hands cross in a 12-hour period of time? A football player kicked a 41-yard punt. The path of the ball was modeled by y= -0.035x(squared) + 1.4x + 1, where x and y are measured in yards. What was the maximum height of the ball? I'm trying to figure out how to prove that a is less than b implies that a*a is less than b*b is true in R+.... My daughter has to produce an equation to show the number of hidden faces when three rows of cubes are placed together on a flat surface. How many kilometers is it from the base to the top of the mountain? Do you have any helpful hints for algebra 101? Can you give a method, formula, and answer for this equation? x^3 - 3x^2 - 2x + 5 = 0 Three persons A, B, and C, travel from point X to point Y. They leave point X at the same time and follow the same route... A rocket leaves the earth for the sun at a speed of 28,800 mph. At the same time, a photon of light leaves the sun for the earth... A woman usually takes the 5:30 p.m. train, arriving at her station at 6:00, and her husband picks her up and drives her home... I'm wondering if there is a formula or algorithm that will tell me exactly how many pairs of factors a number has? You must spend $100 to buy 100 pets, choosing at least one of each pet. The pets and their prices are: mice @ $0.25 each, cats @ $1.00 each, and dogs @ $15.00 each. How many mice, cats, and dogs must you buy? How can we determine whether or not a given plane curve is a parabola? Under what conditions is a parabola uniquely determined? If everyone in your class gave a Valentine to everyone else in your class, how many valentines would be exchanged? Timothy spent all of his money at five stores. At each store, he spent $1 more than half of the amount he had when entering the store. How much money did he have when he entered the first store? Paul made $44.14 selling 27 items (beer and popcorn). If he made $1.22 selling popcorn and $2.62 selling beer, how many boxes of popcorn were sold? I'm having a hard time learning the linear combinations method and the substitution method in algebra. If we subtract 0 from a number and get the same number, doesn't that make 0 an identity for subtraction? Also, can't a number be its own inverse for subtraction? A complex and an easy solution to the problem. If I type 36/6(25-11*2) into my TI-85, I get an answer of 2. If I include a multiplication symbol to have 36/6*(25-11*2), I get an answer of 18. I thought that when the 6 is written right up against the parenthesis, the multiplication is implied. Why am I getting two different answers? A sample of dimes and quarters totals $18.00. If there are 111 coins in all, how many are there of each coin? Five members of a basketball team are weighed and an average weight is recalculated after each weighing. If the average increases 2 pounds each time, how much heavier is the last player than the How do I deal with an equation with a greater than or less than sign? How do I find the solution set for problems like |X| > X and |X + 2| - X >= 0? What's the definition of absolute value? What is the case method? How does it apply to inequalities with absolute values? Why can't I just write the inequality |x| > a as (-a) > x > a? Page: [<prev] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [next>]
{"url":"http://mathforum.org/library/drmath/sets/high_algebra.html?s_keyid=39259314&f_keyid=39259316&start_at=321&num_to_see=40","timestamp":"2014-04-16T13:15:19Z","content_type":null,"content_length":"25467","record_id":"<urn:uuid:389e1b5a-fed9-4601-9c3b-f7a4edf7998b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Albert Einstein: The Incorrigible Plagiarist Order Home Reviews Articles Distributors Cataloging-in-Publication Data Pop Quiz Albert Einstein: The Incorrigible Plagiarist Contact: info@xtxinc.com Copyright © 2003, 2004, 2005. All Rights Reserved. Table of Contents 1. Hilbert's Proofs Prove Hilbert's Priority 1.1 Introduction 1.2 Corry, Renn and Stachel's Baseless Revisionism 1.3 Historical Background and the Correspondence 1.4 Hilbert's Proofs Prove Hilbert's Priority 1.5 A Question of Character 1.6 A Question of Ability 1.7 Conclusion 2. Gerber's Formula 2.1 Introduction 2.2 How Fast Does Gravity Go? 2.3 Gerber's Formula was Well-Known 2.4 Einstein's Fudge 2.5 Conclusion 3. Soldner's Prediction 3.1 Introduction 3.2 Soldner's Hypothesis and Solution 3.3 Einstein Knew the Newtonian Prediction 3.4 Soldner's Formulation 3.5 Conclusion 4. The Principle of Equivalence, Etc. 4.1 Introduction 4.2 Eotvos' Experimental Fact and Planck's Proposition 4.3 Kinertia's Elevator is Einstein's Happiest Thought 4.4 Dynamism 4.5 Space-Time 4.6 Reference Frames and Covariance 4.7 Conclusion Appendix A: Soldner's Paper on Light Appendix B: Hilbert's Published Paper Appendix C: Hilbert's Printer's Proofs Appendix D: Einstein's Field Equations Paper Appendix E: Gerber's Paper on Mercury Appendix F: Einstein's Paper on Mercury ANTICIPATIONS OF EINSTEIN IN THE GENERAL THEORY OF RELATIVITY In 1997, amid much fanfare, Leo Corry announced to the world that he had uncovered proof that Albert Einstein arrived at the generally covariant field equations of gravitation, before David Hilbert. Leo Corry joined with Juergen Renn and John Stachel and published an article in the journal Science arguing against Hilbert's priority. Their claims were largely based on a set of printer's proofs of David Hilbert's 20 November 1915 Goettingen lecture, which Corry had uncovered. However, in this 1997 article, "Belated Decision in the Hilbert-Einstein Priority Dispute," Corry, Renn and Stachel failed to disclose the fact that these printer's proofs were mutilated, and are missing a critical part. Full disclosure of the facts reveals that even in their mutilated state, these proofs prove that Hilbert had a generally covariant theory of gravitation before Einstein, and that Einstein plagiarized these equations from Hilbert. Jurgen Renn, himself, once admitted, "I had personally come to the conclusion that Einstein plagiarized Hilbert[.] [The] conclusion is almost unavoidable, that Einstein must have copied from Hilbert." [C. Suplee, 'Researchers Definitively Rule Einstein Did Not Plagiarize Relativity Theory', The Washington Post, (14 November 1997), p. A24.] The author of Albert Einstein: The Incorrigible Plagiarist focuses in on the general theory of relativity and discredits the baseless historical revisionism of Leo Corry, Jürgen Renn and John Stachel. The direct comparison of primary source material demonstrates that Albert Einstein did not originate the theory of relativity. Formal mathematical proofs explain how Einstein was forced to fudge his equations in order to derive the results Paul Gerber and Johann Georg von Soldner had published long before him. Einstein did not yet have the benefit of plagiarizing David Hilbert's generally covariant field equations of gravitation and was operating under an erroneous assumption. An extensive history of the principle of equivalence proves that Einstein plagiarized this idea. The book reprints the relevant papers by Einstein, Soldner, Gerber, and Hilbert, as well as the remainder of David Hilbert's mutilated printer's proofs of his article "The Foundations of Physics". While the book presents the mathematical proofs needed to justify its claims, the non-mathematical reader will find it rich in prose and will be able to follow the arguments and the history presented. A FEW OF THE QUOTATIONS FOUND IN THE BOOK: "In a sense, Einstein had 'appropriated' Hilbert's contribution to the gravitational field equations as a march of his own ideas--or so it would seem from the reading of his 1916 Ann. d. Phys. paper on the foundations of general relativity."--Prof. Jagdish Mehra * * * "[Hilbert] would soon [***] pinpoint flaws in Einstein's rather pedestrian way of dealing with the mathematics of his gravitation theory."--Dr. Tilman Sauer * * * ". . .Gerber, who has given the correct formula for the perihelion motion of Mercury before I did."--Albert Einstein * * * "Remarkably, Einstein was not the first to discover the correct form of the law of warpage [***] Recognition for the first discovery must go to Hilbert."--Prof. Kip Thorne * * * "No unprejudiced person can deny that, in the absence of direct and incontrovertible proofs establishing his innocence, Einstein must, in view of the circumstantial evidence previously presented, stand convicted before the world as a plagiarist."--Prof. Arvid Reuterdahl * * * "Thus, with what is known as the special theory, if we consider as paramount factor not the detail work but the guiding thoughts by which this was inspired, then the father of this special relativity theory was undoubtedly Henri Poincare. [***] In the general theory of relativity the basic thought is that of Mach, viz. the replacement in dynamics of the law of gravitation by a law of motion. But in what Einstein built upon this basis the influence of Poincare is again manifest. [***] And in view of all these facts one does not know at which to be most astounded: the magnanimity of Poincare who was always over-anxious that there should be recognition of the labors of those who reaped where he himself had sown, the apathy of his friends after his death, or the peculiar attitude of Einstein and his coterie, exemplified by Born of Goettingen, who refers to Poincare as one of those who 'collaborated' with Einstein in the development of the relativity theory!"--Robert P. Richardson * * * "From these facts the conclusion seems inevitable that Einstein cannot be regarded as a scientist of real note. He is not an honest investigator."--Prof. O. E. Westin Anticipations of Einstein in the General Theory of Relativity is now available at Amazon.com and Barnes and Noble The following journal articles also discredit Leo Corry, Juergen Renn and John Stachel's baseless and radical historical revisionism: Prof. Friedwardt Winterberg's paper discrediting Corry, Renn and Stachel's revisionism: "On 'Belated Decision in the Hilbert-Einstein Priority Dispute', published by L. Corry, J. Renn, and J. Stachel", Zeitschrift fuer Naturforschung A, Volume 59a, Number 10, (October, 2004), pp. 715-719. Abstract for Prof. Friedwardt Winterberg's paper discrediting Corry, Renn and Stachel's revisionism. Table of Contents for Zeitschrift fuer Naturforschung A, Volume 59a. A. A. Logunov, M. A. Mestvirishvili and V. A. Petrov, "How Were the Hilbert-Einstein Equations Discovered?" Uspekhi Fizicheskikh Nauk, Volume 174, Number 6, (June, 2004), pp. 663-678. An English translation of A. A. Logunov, M. A. Mestvirishvili and V. A. Petrov, "How Were the Hilbert-Einstein Equations Discovered?" Uspekhi Fizicheskikh Nauk, Volume 174, Number 6, (June, 2004), pp. 663-678. An alternative English translation was published in the Physics-Uspekhi: A. A. Logunov, M. A. Mestvirishvili and V. A. Petrov, "How Were the Hilbert-Einstein Equations Discovered?" Physics-Uspekhi, Volume 47, Number 6, (June, 2004), pp. 607-621. T. Sauer, "The Relativity of Discovery: Hilbert's First Note on the Foundations of Physics", Archive for History of Exact Sciences, Volume 53, Number 6, (1999), pp. 529-575. Leo Corry, Jürgen Renn and John Stachel's 1997 article in Science, which does not mention the mutilation of Hilbert's proofs: "Belated Decision in the Hilbert-Einstein Priority Dispute", Science, Volume 278, (14 November 1997), pp. 1270-1273. Internet Resources for Mileva Einstein-Marity: Documentary: Einstein's Wife Einstein's Wife on amazon.com M. Maurer, "Weil nicht sein kann, was nicht sein darf... 'DIE ELTERN' ODER 'DER VATER' DER RELATIVITÄTSTHEORIE?", PCnews, Nummer 48, Jahrgang 11, Heft 3, Wien, (Juni, 1996), S. 20-27 "In Albert's Shadow: The Life and Letters of Mileva Maric, Einstein's First Wife" by Milan Popovic "Im Schatten Albert Einsteins" by Desanka Trbuhovic-Gjuric "Einstein's Wife: Work and Marriage in the Lives of Five Great Twentieth-Century Women" by Andrea Gabor Was Einstein's Wife Mileva His Silent Collaborator? Mileva Maric Mileva Maric on Wikipedia Einstein's Plagiarism in the News: Special Theory of Relativity, Jules Henri Poincare, Hendrik Antoon Lorentz, and Albert Einstein: Henri Poincare and Relativity Theory by A. A. Logunov, Former Vice-President of the Russian [Soviet] Academy of Sciences, and currently Director of the Institute for High Energy Physics A. A. Logunov, "Sur la dynamique de l'électron" LA RELATIVITÉ Poincaré et Einstein, Planck, Hilbert: Histoire véridique de la Théorie de la Relativité by Jules Leveugle Jules Leveugle's book on Amazon France Albert Einstein: UN EXTRAORDINAIRE PARADOXE by 1988 Economics Nobel Prize laureate Maurice Allais Relativistic Theory of Gravity (Horizons in World Physics) by A.A. Logunov Einstein et Poincaré by Jean-Paul Auffray on Amazon France. Comment le jeune et ambitieux Einstein s'est approprié la Relativité restreinte de Poincaré by Jean Hladik on Amazon France. "Henri Poincaré : A decisive contribution to Special Relativity. The short story" by Jacques Fric Einstein's Clocks, Poincare's Maps: Empires of Time by Peter Louis Galison "Henri Poincaré: a decisive contribution to Relativity" by Christian Marchal: Word.doc "Henri Poincaré: a decisive contribution to Relativity" by Christian Marchal: HTML General Theory of Relativity, Paul Gerber, David Hilbert, Albert Einstein: F. Winterberg, The Einstein Myth and the Crisis in Modern Physics. I. McCausland, "Anomalies in the History of Relativity", Journal of Scientific Exploration, Volume 13, Number 2, (1999), pp. 271-290. The following journal articles discredit Leo Corry, Juergen Renn and John Stachel's baseless and radical historical revisionism: Prof. Friedwardt Winterberg's paper discrediting Corry, Renn and Stachel's revisionism: "On 'Belated Decision in the Hilbert-Einstein Priority Dispute', published by L. Corry, J. Renn, and J. Stachel", Zeitschrift fuer Naturforschung A, Volume 59a, Number 10, (October, 2004), pp. 715-719. Abstract for Prof. Friedwardt Winterberg's paper discrediting Corry, Renn and Stachel's revisionism. Table of Contents for Zeitschrift fuer Naturforschung A, Volume 59a. A. A. Logunov, M. A. Mestvirishvili and V. A. Petrov, "How Were the Hilbert-Einstein Equations Discovered?" Uspekhi Fizicheskikh Nauk, Volume 174, Number 6, (June, 2004), pp. 663-678. An English translation of A. A. Logunov, M. A. Mestvirishvili and V. A. Petrov, "How Were the Hilbert-Einstein Equations Discovered?" Uspekhi Fizicheskikh Nauk, Volume 174, Number 6, (June, 2004), pp. 663-678. An alternative English translation was published in the Physics-Uspekhi: A. A. Logunov, M. A. Mestvirishvili and V. A. Petrov, "How Were the Hilbert-Einstein Equations Discovered?" Physics-Uspekhi, Volume 47, Number 6, (June, 2004), pp. 607-621. T. Sauer, "The Relativity of Discovery: Hilbert's First Note on the Foundations of Physics", Archive for History of Exact Sciences, Volume 53, Number 6, (1999), pp. 529-575. Leo Corry, Jürgen Renn and John Stachel's 1997 article in Science, which does not mention the mutilation of Hilbert's proofs: "Belated Decision in the Hilbert-Einstein Priority Dispute", Science, Volume 278, (14 November 1997), pp. 1270-1273. Other Important Links: The homepage of Prof. Umberto Bartocci Richard Moody "Albert Einstein: Plagiarist of the Century" Richard Moody "Albert Einstein--Plagiator" (Polish) "Was Einstein a Plagiarist?" "E=mc^2 before Einstein" by Paul Marmet Kazakhstani scientist Karim Khaidarov Kazakhstani scientist Nikolai Noskov Dr. Caroline Thompson
{"url":"http://home.comcast.net/~xtxinc/AEGRBook.htm","timestamp":"2014-04-19T02:33:47Z","content_type":null,"content_length":"31565","record_id":"<urn:uuid:716f1b95-f06b-42ec-aa68-fd21072b64e0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Living Links: Applications of Matrix Operations to Population Studies By Angela B. Shiflet, George W. Shiflet and Jesse A. Hanley Wofford College, Spartanburg, South Carolina This page provides download links for a set of curricular materials designed to teach parallel computational modeling to undergraduate or graduate students in science and other STEM disciplines. The module begins with an introduction to vector and matrix operations and demonstration of the use of these constructs in storing and manipulating population data. The module then takes students through the construction of a population model and serial implementation using Matlab and Mathematica. This model is then re-implemented with C and parallelized using MPI. The module consists of the documents described below. The documents can be downloaded individually, or as a zip archive containing all of the documents: Populations Matrices Module (.doc) : MS Word file describing: • Population dynamics • Matrix operations and their use in dynamic systems modeling • Project ideas • Examples & solutions • References Populations and Matrix Operations Matlab Script : Matlab script demonstrating vector and matrix operations. Populations and Matrix Operations Mathematica Notebook : Mathematica notebook file demonstrating vector and matrix operations Populations and Matrix Operations C code : Zip archive containing C code, header file, input file and results file. The C code implements vector and matrix operations which can also be found in scientific libraries for linear algebra. Populations and Matrix Operations Module Archive : Zip archive containing all of the materials for the Living Links module. Populations Matrices Module (.pdf) : PDF describing: • Population dynamics • Matrix operations and their use in dynamic systems modeling • Project ideas • Examples & solutions • References
{"url":"http://shodor.org/petascale/materials/UPModules/populationMatrices/","timestamp":"2014-04-19T09:23:51Z","content_type":null,"content_length":"12217","record_id":"<urn:uuid:2867cefe-526a-428d-ab43-143513e27a10>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
The Great Riemann Hypothesis In 1637 the great king of amateur mathematicians — Pierre de Fermat — wrote his most famous theorem in the margin of his copy of Arithmetica, which stated that that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two. But an even more interesting was his little note in the margin, which read that the proof was too big to fit into it. What happened later is that this apparently simple problem became known as the most difficult problem of mathematics. Greatest mathematicians tried to solve Fermat’s last theorem for 358 years, including Euler, Hilbert, Kronecker and others. But it was finally solved in 1995 by the great Andrew Wiles. This remarkable story ended more than 350 years of reign of Fermat’s last theorem as the king of the most difficult math problems. But don’t worry, as long as there are mathematicians there will be unsolved problems, and, actually, there is a number of important problems, which belong to the list of the so called Millennium Prize Problems. The list holds 7 problems (1 of which is already solved), the award for which is 1 million dollars. Riemann hypothesis is arguably the most important one, or at least the most famous one. And it is a problem proposed by Bernhard Riemann (1859) about the location of the nontrivial zeros of the Riemann zeta function which states that all non-trivial zeros have real part 1/2. So let’s take a closer look at what the Riemann hypothesis is all about. Here are two videos, the first one is a nice presentation of the history behind it, whereas the second one is a great lecture by Dan Rockmore. Category: Blog Add Comment Register Leave a Reply Cancel reply Featured Post Some great new physics books have been released recently, so, as always, here’s a short overview of 5 top physics books that you should definitely check out. For more new physics books check this link or Amazon’s new releases. No.
{"url":"http://physicsdatabase.com/2012/10/20/the-great-riemann-hypothesis/","timestamp":"2014-04-18T03:53:15Z","content_type":null,"content_length":"55005","record_id":"<urn:uuid:4f7012a2-7b8d-4cc4-b1bc-97430a1d6dc7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: A minicourse on the low Mach number limit Thomas Alazard CNRS & Univ. Paris-Sud 11, France 1. Introduction These lectures are devoted to the study of the so-called low Mach number limit for classical solutions of the compressible Navier-Stokes or Euler equations for non-isentropic fluids. The Mach number, hereafter denoted by , is a fundamental dimensionless number. By definition, it is the ratio of a characteristic velocity in the flow to the sound speed in the fluid. Hence, the target of the mathematical analysis of the low Mach number limit 0 is to justify some usual simplifications that are made when discussing the fluid dynamics of highly subsonic flows (which are very common). For highly subsonic flows, a basic assumption that is usually made is that the com- pression due to pressure variations can be neglected. In particular, provided the sound propagation is adiabatic, it is the same as saying that the flow is incompressible. We can simplify the description of the governing equations by assuming that the fluid velocity is divergence-free. (The fact that the incompressible limit is a special case of the low Mach number limit explains why the limit 0 is a fundamental asymptotic limit in fluid mechanics.) On the other hand, if we include heat transfert in the problem, we cannot ignore the entropy variations. In particular we have to take into account the compression due to the combined effects of large temperature variations and thermal conduction. In
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/690/1963530.html","timestamp":"2014-04-18T22:19:19Z","content_type":null,"content_length":"8550","record_id":"<urn:uuid:105f0572-603d-452b-aaf3-f27a1dd392b5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Java-based optimal Rogue reforging calculator - Page 2 - Rogues A suggestion: Why not implement so you can enter the EP-values and the cap values? A set of textboxes with the suggested values with the option of changing them? That way you can enter the EP-values for your particular talent-spec and gear level. That would also increase understanding as to what values are actually used in the reforging calculations.
{"url":"http://forums.elitistjerks.com/topic/114737-java-based-optimal-rogue-reforging-calculator/page-2","timestamp":"2014-04-19T05:21:31Z","content_type":null,"content_length":"133916","record_id":"<urn:uuid:ed6343e6-d833-456b-a6bc-fb2ca7cb7eec>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Mill Creek, WA Science Tutor Find a Mill Creek, WA Science Tutor ...If there are problems to solve, we would work through some examples together to make sure your thought processes are going in the right direction. My favorite subject is biology. I worked for many years as a molecular biologist, and I used to teach freshman biology at the college level. 6 Subjects: including chemistry, biology, microbiology, prealgebra ...I have installed networks, configured computers and setup security and many other types of software. Though my primary tutoring area is math, I worked as an electrical engineer at Tektronix in Beaverton, Oregon for about a year and worked extensively with circuits as a teenager. I have a good g... 43 Subjects: including electrical engineering, chemistry, ACT Science, physical science ...I focus on emphasizing the underlying logic of the concepts and building on prior knowledge while incorporating new material. Some of my areas of greatest experience are: properties of exponents and roots, writing and graphing linear equations and inequalities, probabilities, interpretation of g... 17 Subjects: including ACT Science, English, chemistry, writing ...And if you have a less conventional writing assignment, I can probably help with that, too! I have experience working with students to write and edit poetry, and have even written several essays in French. I have always enjoyed astronomy and took a college-level astronomy course in which I rece... 35 Subjects: including astronomy, philosophy, physical science, algebra 1 ...I believe that all kids have the potential to learn and solve problems if given the right tools of teachings. I am a very patient person and also had the opportunity of working with students who have an IEP. Please email or call me to work out the rates as they might be less or more than the posted rate, depending on the distance traveled and/or subject, level taught. 24 Subjects: including physics, biology, chemistry, geometry Related Mill Creek, WA Tutors Mill Creek, WA Accounting Tutors Mill Creek, WA ACT Tutors Mill Creek, WA Algebra Tutors Mill Creek, WA Algebra 2 Tutors Mill Creek, WA Calculus Tutors Mill Creek, WA Geometry Tutors Mill Creek, WA Math Tutors Mill Creek, WA Prealgebra Tutors Mill Creek, WA Precalculus Tutors Mill Creek, WA SAT Tutors Mill Creek, WA SAT Math Tutors Mill Creek, WA Science Tutors Mill Creek, WA Statistics Tutors Mill Creek, WA Trigonometry Tutors
{"url":"http://www.purplemath.com/Mill_Creek_WA_Science_tutors.php","timestamp":"2014-04-18T08:37:56Z","content_type":null,"content_length":"24168","record_id":"<urn:uuid:ab0b5360-422d-499a-94b4-d2757c229a13>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum! Particle in a box problem! HELP!!! The longest wavelength of light consists of the lowest energy photons. The lowest energy photon corresponds to the *smallest* change in the electron's energy as a result of it making a transition from a lower level to a higher level. CHANGE is the operative word here. When the electron absorbs a photon, it gains energy and goes from one of the lower energy states of the particle in a box to one of the higher energy such states. Therefore, rather than looking at only one of the particle in a box energies (i.e. only one value of n), you really need to be comparing two different energies, taking the difference between them. What is the smallest such difference? What transition corresponds to the smallest energy change, and therefore would have to have been caused by the lowest energy
{"url":"http://www.physicsforums.com/showthread.php?t=269427","timestamp":"2014-04-18T23:32:32Z","content_type":null,"content_length":"23299","record_id":"<urn:uuid:58bb6188-e586-4d69-8a87-46e27c15dc98>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Valparaiso, IN SAT Math Tutor Find a Valparaiso, IN SAT Math Tutor ...I grew fond of helping people in breaking down the concepts of Mathematics, and I decided to expand my experience by earning my teaching certification to teach at secondary schools. While I have good experiences with teaching classes, I always enjoy helping people one-to-one because each individ... 11 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...I am able to understand how you learn best and work within your framework. Additionally, since I commute to Chicago, I can work with people located along I-94 from Michigan to Chicago. I currently conduct research for an economics professor at the Kellogg School of Management at Northwestern Un... 28 Subjects: including SAT math, reading, English, calculus ...Unfortunately that does not happen often. I have also experienced what it is like to teach young minds. Since have been a middle school wrestling coach I have seen true confusion in young kids, but I have been able to simplify the most abstract ideas, and show step by step how things work. 11 Subjects: including SAT math, geometry, algebra 1, precalculus ...As an Instructional Assistant for over 11 years, working with special education, I helped students with Study Skills as a main component of my job. Working with students with special needs often requires finding many different methods or options for studying in order for them to be successful. I have tutored students with special needs for over 11 years. 24 Subjects: including SAT math, chemistry, calculus, geometry ...My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. I consistently monitor progress and adjust lessons to meet the specific needs of each individual student. Thank you for considering my services. 12 Subjects: including SAT math, calculus, geometry, algebra 1 Related Valparaiso, IN Tutors Valparaiso, IN Accounting Tutors Valparaiso, IN ACT Tutors Valparaiso, IN Algebra Tutors Valparaiso, IN Algebra 2 Tutors Valparaiso, IN Calculus Tutors Valparaiso, IN Geometry Tutors Valparaiso, IN Math Tutors Valparaiso, IN Prealgebra Tutors Valparaiso, IN Precalculus Tutors Valparaiso, IN SAT Tutors Valparaiso, IN SAT Math Tutors Valparaiso, IN Science Tutors Valparaiso, IN Statistics Tutors Valparaiso, IN Trigonometry Tutors
{"url":"http://www.purplemath.com/Valparaiso_IN_SAT_Math_tutors.php","timestamp":"2014-04-21T02:28:54Z","content_type":null,"content_length":"24263","record_id":"<urn:uuid:0c41c291-28d4-41f0-82bd-fb0412489852>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Variety of maths problems February 18th 2013, 03:56 PM #1 Feb 2013 Variety of maths problems Hey guys could you help me with these maths questions, there is a variety of them and it is from a test i did and i must redo all of them but i know very little and would appreciate any help. Re: Variety of maths problems It's hard to believe you are serious. If this is not important enough for you to type in problems in a format we can easily understand, I can't imagine it being important enough to anyone else to try to work out what the problems are and do them for you. Re: Variety of maths problems Ok i will type them out and when done i will post them Last edited by tommybc; February 18th 2013 at 04:53 PM. February 18th 2013, 04:42 PM #2 MHF Contributor Apr 2005 February 18th 2013, 04:44 PM #3 Feb 2013
{"url":"http://mathhelpforum.com/algebra/213364-variety-maths-problems.html","timestamp":"2014-04-17T02:16:09Z","content_type":null,"content_length":"38086","record_id":"<urn:uuid:bf835a1a-be7d-4f45-aa8c-af07c535aab3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Doron Zeilberger Opinion 75: James Joseph Sylvester: the GREATEST Mathematician of ALL TIMES By Doron Zeilberger Written: Oct. 27, 2006. I just finished reading the wonderful biography, by Karen Parshall, of my great hero James Joseph Sylvester. Parshall concludes her masterpiece with quoting, with agreement, MacMahon's verdict that Sylvester, while definitely one of the greatest mathematicians of his time, was even more definitely, not among the greatest mathematicians of all time. First, what's so great about being ``great'', or even ``greatest''? It is more important to be interesting, and James Joseph was surely more interesting than Euler and Gauss combined. Just read Parshall's biography, or better still, browse through his Collected Works. But even if it is not so great being the "greatest", if I had to name a mathematician who, all things considered (constructing a measure that is more concentrated on the things that really count, like vision, originality and foresight) then Sylvester has no rivals. He was way ahead of his time. He was also way ahead of our time, witness Parshall's agreement with the verdict of his contemporaries: `pretty great but no way amongst the greatest', and Karen Parshall ends with a conciliatory note: "In his time and his place, he was both a leader and a pathbreaker." [my emphasis] . Why was he so great? First, he knew that algebra is more important than analysis. He tried to do everything algebraically. He also knew that algebra was just combinatorics in disguise, and his Constructive Theory of Partitions, helped by his brilliant Johns Hopkins students [Fabian Franklin and William Pitt Durfee], is a masterpiece that is still not fully appreciated today. All these so-called analytical theta-function identities proved via one picture! He was also a great algorithmitican, way before the word existed. Towards the end of his life, there was a young Turk named David Hilbert who proved existence, and didn't care much about construction. Hilbert ruled for the next 100 years, and that was one of the reasons Sylvester and his style was looked down upon as "old hat". But the best reason why James Joseph Sylvester was the greatest was his vision and realization that mathematics is not a deductive science but an experimental science. In a public speech before the Mathematical and Physical Section of the British Association, delivered in 1869, he responded, in very strong terms, to the conventional wisdom of Thomas Huxley- who had his own agenda to promote the education of the empirical sciences at the expense of fossilized mathematics- delivered at an after-dinner speech, who claimed that "Mathematics is that study which knows nothing of observation, nothing of induction, nothing of experiment, nothing of causation.'' Sylvester admitted that the way mathematics was taught may give that false impression (unfortunately for us, this is still true to a large extent today). But the way mathematics is discovered is purely experimental, and he went on to present many convincing examples from his own and other mathematicians' work. Sylvester was so fond of his speech that he included it as an appendix to his poetry treatise "Laws of Verse". His approach to poetry was experimental as well, and he must have seen the connection. Sylvester was already right yesterday , way back in 1869. He is even more right today, and tomorrow, his vision will seem so obvious, that once again he would be in danger of not being considered so great, since all he said were platitudes known to everyone. Doron Zeilberger's Opinion's Table of Content
{"url":"http://www.math.rutgers.edu/~zeilberg/Opinion75.html","timestamp":"2014-04-20T18:23:13Z","content_type":null,"content_length":"4636","record_id":"<urn:uuid:7b344c81-c4bb-476a-a7e7-c02844668cc2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Cuisenaire Environment Copyright © University of Cambridge. All rights reserved. 'Cuisenaire Environment' printed from http://nrich.maths.org/ Full Screen Version This text is usually replaced by the Flash movie. Click on 'Rods', to choose a Cuisenaire rod and then drag it onto the squared background. More rods can be added in a similar way and aligned as you wish. A rod can be rotated by $90^\circ$ by clicking any key whilst dragging. The background squares can be altered (for example increasing/decreasing their size) using the 'View' menu. You could use this environment for all sorts of purposes - perhaps to explore addition and subtraction, factors and multiples or ideas about fractions. For more ideas, look at the section, or go to this page where there is a collection of Cuisenaire problems.
{"url":"http://nrich.maths.org/4348/index?nomenu=1","timestamp":"2014-04-16T08:04:35Z","content_type":null,"content_length":"4521","record_id":"<urn:uuid:ea2fc7f7-1226-48c2-9f97-e682daf0b35e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
The interior of a set is open and the boundary and closure of a set is closed September 3rd 2010, 03:31 PM #1 The interior of a set is open and the boundary and closure of a set is closed So I am pretty sure my proof is correct, just want to verify for correctness and rigor. Let $a\in int(S)$. Then there exists $\epsilon > 0$ such that $B(\epsilon , a) \subset S$. Now suppose $x\in B(\epsilon , a)$. Since $B(\epsilon, a)$ is open, there exists $\delta > 0$ such that $B(\delta , x) \subset B(\epsilon , a) \subset S$; in other words, for any $x\in B(\epsilon , a)$ there is a ball centered at $x$ contained solely in $S$, which means $x\in int(S)$. So given $a\ in int(S)$ there exists $r > 0$, namely $r = \epsilon$, such that $B(r,a) \subset int(S)$, so $int(S)$ is open. So now consider $\partial S$. Then $(\partial S)^{c} = int(S) \cup int(S^{c})$ since if $x$ is not a point such that every ball centered at $x$ intersects $S$ and $S^{c}$, then there exists some ball centered at $x$ that is either contained solely in $S$ or $S^{c}$. So we have that $(\partial S)^{c}$ is the union of two open sets, which is open, so $\partial S$ is closed. Similarly, if we consider $cl(S)$, then $(cl(S))^{c} = S^{c}\cap(int(S)\cup int(S^{c})) = int(S^{c})$, which is open, so $cl(S)$ is closed. So I think this is a perfectly valid and rigorous proof, but any feedback would be appreciated (I am somewhat unsure on the closed portions of the proof, especially the claims about what the complements of the boundary of S and the closure of S actually equal). Thanks. I find your proof very hard to follow. That is not saying it is incorrect! I think it is too complicated. It is easy to prove that any open set is simply the union of balls. The interior is just the union of balls in it. The complement of the closure is just the union of balls in it. The complement of the boundary is just the union of balls in it. To follow that last bit, think this way. If $totin \beta(A)$ the there is a ball $\mathcal{B}(t;\delta)$ that is a subset of $\mathcal{I}(A)$ or the complement of $A$ That is it contains only points of $A$ or points not in $A$. Hm, thanks. I guess I made it more complicated if not for the fact that this is my first time learning about open/closed sets in terms of interior points and balls, so when it asks if a set is open I go back to the basic definition that any point in an open set has some ball centered at that point contained solely in the set. (At least that is how the textbook I am using defines it). And since this is a new area of mathematical study for me, I don't know how to actually show that an open set is the union of balls. Actually that is an axiom. The union of any set of open sets is itself an open set. That should have been stated up front. Then by definition, any ball is an open set. So any union of balls is open, by the axiom. Ah, okay. I guess my textbook (Advanced Calculus by Gerald B. Folland) does things differently. Thanks again. The interior of a set M is the largest open set, which is still a subset of M or equivalently the union of all open subsets of M. September 3rd 2010, 03:58 PM #2 September 3rd 2010, 04:06 PM #3 September 3rd 2010, 05:53 PM #4 September 3rd 2010, 06:12 PM #5 September 4th 2010, 08:58 AM #6 Junior Member Aug 2010
{"url":"http://mathhelpforum.com/differential-geometry/155131-interior-set-open-boundary-closure-set-closed.html","timestamp":"2014-04-18T17:49:11Z","content_type":null,"content_length":"53241","record_id":"<urn:uuid:1e84f68e-76c7-4eaa-9d24-260793fb6552>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: The Birth of Model Theory: L¨owenheim's Theorem in the Frame of the Theory of Relatives by Calixto Badesa Princeton, Princeton University Press, 2004 xiii + 240 pages. US $52.50. ISBN 0-691-05853-9 Reviewed by Jeremy Avigad From ancient times to the beginning of the nineteenth century, mathemat- ics was commonly viewed as the general science of quantity, with two main branches: geometry, which deals with continuous quantities, and arithmetic, which deals with quantities that are discrete. Mathematical logic does not fit neatly into this taxonomy. In 1847, George Boole [1] offered an alter- native characterization of the subject in order to make room for this new discipline: mathematics should be understood to include the use of any symbolic calculus "whose laws of combination are known and general, and whose results admit of a consistent interpretation." Depending on the laws chosen, symbols can just as well be used to represent propositions instead of quantities; in that way, one can consider calculations with propositions on par with more familiar arithmetic calculations. Despite Boole's efforts, logic has always held an uncertain place in the mathematical community. This can partially be attributed to its youth;
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/614/1316186.html","timestamp":"2014-04-19T04:37:10Z","content_type":null,"content_length":"8500","record_id":"<urn:uuid:7b3b32a4-7524-4dcf-b9ef-eb11cb7d0361>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - inequality for sequence approaching e Date: Feb 26, 2013 8:16 PM Author: John Reid Subject: inequality for sequence approaching e Let a(n) = (1+1/n)^n, n=1,2,3,... It is well known that a(n)/e < 1, for all n. On the other hand, we found for all n that (1) n*ln(1+1/n) < a(n)/e. As n goes to infinity, it is easy to see that the left side of (1) converges to 1. The resulting sandwich yields the familiar fact that a(n) converges to e. QUESTION: does anyone have a reference for (1) ???
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8425433","timestamp":"2014-04-16T22:11:29Z","content_type":null,"content_length":"1376","record_id":"<urn:uuid:12bffc6d-4fe3-4cd7-989a-74b47c426f7d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
n Mathematics Irvine Unified School District MATHEMATICS CONTENT STANDARDS GRADE 4 By the end of fourth grade, students understand large numbers and addition, subtraction ... Grade 4 Everyday Mathematics Sample Lesson 250 Unit 4 Decimals and Their Uses Teaching the Lesson materials Key Activities Students compare decimals using base-10 blocks. They append zeros to decimals in order ... Answer Key For The California Mathematics Standards Grade 4 Los Angeles County Office of Education: Mathematics National Center to Improve the Tools of Educators Answer Key For The California Mathematics Standards Grade 4 GRADE FOUR ... Grade 4 Mathematics The University of Texas at Austin, Continuing Education K-16 Education Center 1 Grade 4 Mathematics EA/CBE Content Study Guide This Exam for Acceleration/Credit by ... Everyday Mathematics Grade 4 Mathematics Curriculum Guides Milwaukee Public Schools Curriculum Guide- Grade 4 Everyday Mathematics 2008-2009 Curriculum Guide Everyday Mathematics -Grade 4 8/26/08 Developed by the Milwaukee ... Numbers - Grade 4 Math Questions With Answers Multiple choice grade 4 math questions on numbers with answers. Fifth Grade Everyday Mathematics Fifth Grade Everyday Mathematics Ventnor Schools Time Line Essential Questions and Unit Content NJCCC Standards Instructional Objectives Assessment Instructional ... Grade 4 Mathematics:Elementary:Regents Exams:OSA:NYSED Grade 4 Mathematics tests ... Contact University of the State of New York - New York State Education Department National Assessment of Educational Progress (NAEP) Questions and ... National Assessment of Educational Progress (NAEP) Questions and Answers What is NAEP? The National Assessment of Educational Progress (NAEP) is often referred to as the ... Question in my math homework, grade 4, fractions and decimals ... i am in 4th grade and i need help with a math problem. if 3/4 lb is $6.00, how much would 1/4 lb cost? Answers.com - Where can you find 4th grade study link 9.4 everyday ... Math question: Where can you find 4th grade study link 9.4 everyday math online? on pg 201 progress in mathematics workbook grade 4 eBook Downloads progress in mathematics workbook grade 4 free PDF ebook downloads. eBooks and manuals for Business, Education,Finance, Inspirational, Novel, Religion, Social, Sports ... MCAS Spring 2005 Release of Test Items: VI. Mathematics, Grade 4 Grade 4 Mathematics Test * The spring 2005 Grade 4 MCAS Mathematics Test was based on learning standards in the Massachusetts Mathematics Curriculum Framework (2000). Nevada Alternate Assessment Microsoft Word - Nevada Alternate Assessment - Mathematics Alternate Grade Level Indicators.doc Student Progress Monitoring in Mathematics 19 Finding Appropriate Level of Material for Progress Monitoring To find the appropriate CBM level:-Determine the grade-level probe at which you expect the student ... Harcourt Math Grade 4 Answers .doc MSWord Document Download We found several results for Harcourt Math Grade 4 Answers. Download links for Harcourt Math Grade 4 Answers .doc MSWord Document
{"url":"http://www.cawnet.org/docid/progress+in+mathematics+grade+4+answers/","timestamp":"2014-04-21T14:49:51Z","content_type":null,"content_length":"47451","record_id":"<urn:uuid:acc96af8-fcee-412a-b07e-c31b01b158ef>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Strange Bank Account Copyright © University of Cambridge. All rights reserved. 'Strange Bank Account' printed from http://nrich.maths.org/ In Charlie's Bank you are only allowed to deposit £2 at a time and withdraw £3 at a time. Imagine Alison deposits £2, another £2, another £2, another £2 and then finally withdraws £3. She now has an extra £5 in her account. What other amounts of money is it possible for Alison to change her account balance by? Charlie's Bank wants to change its rules whilst ensuring that its customers can still change their account balance by any total. If Alison is only allowed to deposit £3 at a time and withdraw £7 at a time, will she be able change her account balance by any total? Explore some other deposit and withdrawal amounts. If Alison is allowed to deposit £x withdraw £y , what has to be true about x and y to make sure it is possible to change the account balance by any total? You might like to play the game Up, Down, Flying Around, and then take a look at Strange Bank Account (part 2). With thanks to Don Steward, whose ideas formed the basis of this problem.
{"url":"http://nrich.maths.org/9923/index?nomenu=1","timestamp":"2014-04-17T21:23:36Z","content_type":null,"content_length":"4688","record_id":"<urn:uuid:29318b14-26b0-4ca4-a351-c4a6713844cc>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Merge sort (ACL2) From LiteratePrograms Other implementations: ACL2 | C | C++ | dc | Eiffel | Erlang | Haskell | Java | JavaScript | Lisp | OCaml | Oz | Perl | Prolog | Python | Ruby | Scheme This article presents an ACL2 implementation of the merge sort sorting algorithm, including a complete formal proof of correctness. The overall structure of our Lisp file will be: (in-package "ACL2") ; Definitions ; Initial results initial results ; Show that permutation-p is an equivalence relation permutation-p reflexivity permutation-p transitivity permutation-p symmetry permutation-p equivalence ; Main lemmas and final result append congruences merge-ordered is a permutation of append append of split yields the original list Final correctness result [edit] Definitions Our implementation of merge sort has two helpers. The first, merge-ordered, takes two ordered (sorted) lists and efficiently merges them into a single ordered list: (defun merge-ordered (x y) (declare (xargs :measure (+ (acl2-count x) (acl2-count y)))) (cond ((endp x) y) ((endp y) x) ((< (car x) (car y)) (cons (car x) (merge-ordered (cdr x) y))) (t (cons (car y) (merge-ordered x (cdr y)))))) The second, split, is used to break a list into two smaller sublists (roughly half and half). This particular implementation, one of the more convenient to write in a functional style, places the even-index elements in one list and the odd-index elements in the other: (defun split (lst) (if (endp lst) (mv nil nil) (if (endp (cdr lst)) (mv lst nil) (mv-let (list1 list2) (split (cdr (cdr lst))) (mv (cons (car lst) list1) (cons (car (cdr lst)) list2)))))) Next we define the main mergesort function, which splits the list into two sublists, sorts each sublist, then merges them using merge-ordered: (defun mergesort (lst) (if (or (endp lst) (endp (cdr lst))) (mv-let (list1 list2) (split lst) (merge-ordered (mergesort list1) (mergesort list2))))) ACL2 will not admit mergesort because it is not able to prove that it terminates - it doesn't realize that the lists returned from split are smaller than the original list. We throw in a lemma to establish this and then add mergesort: (defthm split-results-shorter (implies (and (consp x) (consp (cdr x))) (and (< (acl2-count (car (split x))) (acl2-count x)) (< (acl2-count (mv-nth 1 (split x))) (acl2-count x))))) Next, we add some helpful functions describing correctness properties of sorting algorithms. The first determines whether a list is ordered, or sorted: (defun ordered-p (lst) (or (endp lst) (endp (cdr lst)) (and (<= (car lst) (car (cdr lst))) (ordered-p (cdr lst))))) It is not enough for a sorting algorithm to produce ordered output however; the output list must also be a reordering, or permutation, of the input list. We add another function to test this, which depends on a helper function for deleting a given element from a list: (defun delete-element (a lst) (if (endp lst) (if (equal a (car lst)) (cdr lst) (cons (car lst) (delete-element a (cdr lst)))))) (defun permutation-p (left right) (or (and (endp left) (endp right)) (and (consp left) (member (car left) right) (permutation-p (cdr left) (delete-element (car left) right))))) [edit] Initial results We get ordered for free; ACL2 proves it from scratch: <<initial results>>= (defthm mergesort-ordered (ordered-p (mergesort x))) It remains to show that mergesort's output is a permutation of its input: (defthm mergesort-permutation-of-input (permutation-p (mergesort x) x)) Attempting to prove this, we get stuck at this simplification checkpoint: Subgoal *1/2'' (IMPLIES (AND (CONSP X) (CONSP (CDR X)) (PERMUTATION-P (MERGESORT (CAR (SPLIT X))) (CAR (SPLIT X))) (PERMUTATION-P (MERGESORT (MV-NTH 1 (SPLIT X))) (MV-NTH 1 (SPLIT X)))) (PERMUTATION-P (MERGE-ORDERED (MERGESORT (CAR (SPLIT X))) (MERGESORT (MV-NTH 1 (SPLIT X)))) Clearly this is true, since the two lists returned from split together form X, and merge-ordered combines them back into one list. The difficult task is to prove that the result of merge-ordered is a permutation of the original list. One way to approach this is to show that the result of merge-ordered is a permutation of the result of appending the two lists, and separately show that appending the two sublists produces a permutation of the original list. The transitivity of permutation-p yields the result. These are still nontrivial to show, though. We start out by proving that permutation-p an equivalence relation, allowing ACL2 to better reason with it. [edit] permutation-p is an equivalence relation [edit] Reflexivity ACL2 has no problem proving reflexivity: <<permutation-p reflexivity>>= (defthm permutation-p-reflexive (permutation-p x x)) [edit] Transitivity For transitivity, we attempt to show that: (defthm permutation-p-transitive (implies (and (permutation-p x y) (permutation-p y z)) (permutation-p x z))) Attempting to prove permutation-p-transitive gives us this checkpoint: Subgoal *1/5' (IMPLIES (AND (CONSP X) (NOT (MEMBER (CAR X) Z)) (MEMBER (CAR X) Y) (PERMUTATION-P (CDR X) (DELETE-ELEMENT (CAR X) Y))) (NOT (PERMUTATION-P Y Z))). Dropping unnecessary conditions and rearranging: (IMPLIES (AND (CONSP X) (MEMBER (CAR X) Y) (PERMUTATION-P Y Z)) (MEMBER (CAR X) Z)) So we need to show that if a is in X and X is a permutation of Y, a is in Y. A good general result for this is: (defthm permutation-p-implies-member-iff (implies (permutation-p x y) (iff (member a x) (member a y)))) But if we try to prove it, we only reach this checkpoint: Subgoal *1/3.2 (IMPLIES (AND (CONSP X) (NOT (EQUAL A (CAR X))) (NOT (MEMBER A (CDR X))) (NOT (MEMBER A (DELETE-ELEMENT (CAR X) Y))) (MEMBER (CAR X) Y) (PERMUTATION-P (CDR X) (DELETE-ELEMENT (CAR X) Y))) (NOT (MEMBER A Y))). Rearranging and dropping unnecessary conditions: (IMPLIES (AND (NOT (EQUAL A (CAR X))) (MEMBER A Y)) (MEMBER A (DELETE-ELEMENT (CAR X) Y))) In other words, deleting an element doesn't affect membership of other elements. We prove a lemma to show this: <<permutation-p transitivity>>= (defthm delete-different-element-preserves-member (implies (not (equal a b)) (iff (member a (delete-element b x)) (member a x)))) We can then prove permutation-p-implies-member-iff: <<permutation-p transitivity>>= We take another crack at permutation-p-transitive and get stuck at: (IMPLIES (AND (CONSP X) (MEMBER (CAR X) Z) (NOT (PERMUTATION-P (DELETE-ELEMENT (CAR X) Y) (DELETE-ELEMENT (CAR X) Z))) (PERMUTATION-P (CDR X) (DELETE-ELEMENT (CAR X) Y)) (PERMUTATION-P Y Z)) (PERMUTATION-P (CDR X) (DELETE-ELEMENT (CAR X) Z))). Rearranging and dropping unneeded hypotheses, we get: (IMPLIES (AND (CONSP X) (MEMBER (CAR X) Z) (PERMUTATION-P Y Z)) (PERMUTATION-P (DELETE-ELEMENT (CAR X) Y) (DELETE-ELEMENT (CAR X) Z))) Since (car x) is in Z, permutation-p-implies-member-iff tells us and ACL2 that it's also in Y. What it doesn't know is that deleting the same element from both lists preserves permutation-p: (defthm delete-same-member-preserves-permutation-p (implies (and (member x a) (member x b) (permutation-p a b)) (permutation-p (delete-element x a) (delete-element x b)))) When we attempt to prove this, we fail at: Subgoal *1/5'' (IMPLIES (AND (CONSP A) (NOT (EQUAL X (CAR A))) (PERMUTATION-P (DELETE-ELEMENT X (CDR A)) (DELETE-ELEMENT X (DELETE-ELEMENT (CAR A) B))) (MEMBER X B) (MEMBER (CAR A) B) (PERMUTATION-P (CDR A) (DELETE-ELEMENT (CAR A) B))) (PERMUTATION-P (DELETE-ELEMENT X (CDR A)) (DELETE-ELEMENT (CAR A) (DELETE-ELEMENT X B)))). Simplifying and rearranging: (IMPLIES (PERMUTATION-P (DELETE-ELEMENT X (CDR A)) (DELETE-ELEMENT X (DELETE-ELEMENT (CAR A) B))) (PERMUTATION-P (DELETE-ELEMENT X (CDR A)) (DELETE-ELEMENT (CAR A) (DELETE-ELEMENT X B)))) Clearly, it doesn't matter in what order we delete two elements from the list. We prove the lemma delete-element-commutes, and the lemmas delete-same-member-preserves-permutation-p and permutation-p-transitive follow: <<permutation-p transitivity>>= (defthm delete-element-commutes (equal (delete-element a (delete-element b c)) (delete-element b (delete-element a c)))) [edit] Symmetry Next, we go for symmetry: (defthm permutation-p-symmetric (implies (permutation-p x y) (permutation-p y x))) Attempting this proof gives: HARD ACL2 ERROR in REWRITE: The call depth limit of 1000 has been exceeded in the ACL2 rewriter. Examining the stack reveals that permutation-p-implies-member-iff is being applied in an endless loop. We disable it: <<permutation-p symmetry>>= (in-theory (disable permutation-p-implies-member-iff)) Trying symmetry again, we get this simplification checkpoint: Subgoal *1/3' (IMPLIES (AND (CONSP X) (MEMBER (CAR X) Y) (PERMUTATION-P (DELETE-ELEMENT (CAR X) Y) (CDR X)) (PERMUTATION-P (CDR X) (DELETE-ELEMENT (CAR X) Y))) (PERMUTATION-P Y X)). (IMPLIES (AND (MEMBER (CAR X) Y) (PERMUTATION-P (DELETE-ELEMENT (CAR X) Y) (CDR X))) (PERMUTATION-P Y X)) We will show that: Y is a permutation of (CONS (CAR X) (DELETE-ELEMENT (CAR X) Y)) is a permutation of (CONS (CAR X) (CDR X)) equals ACL2 can prove the second permutation trivially. Given transitivity, it should be able to prove the result from just a lemma for the first permutation above, but its rewriting heuristics don't work. Instead, we define a more explicit lemma that directly implies the result, adding a hypothesis that is easy to derive from existing hypotheses: <<permutation-p symmetry>>= (defthm cons-delete-permutation-of-original (implies (and (member a y) (permutation-p (cons a (delete-element a y)) x)) (permutation-p y x))) Symmetry now follows, and permutation-p is an equivalence: <<permutation-p symmetry>>= <<permutation-p equivalence>>= (defequiv permutation-p) [edit] Main lemmas and result [edit] append congruences Recall the checkpoint of mergesort-permutation-of-input: Subgoal *1/2'' (IMPLIES (AND (CONSP X) (CONSP (CDR X)) (PERMUTATION-P (MERGESORT (CAR (SPLIT X))) (CAR (SPLIT X))) (PERMUTATION-P (MERGESORT (MV-NTH 1 (SPLIT X))) (MV-NTH 1 (SPLIT X)))) (PERMUTATION-P (MERGE-ORDERED (MERGESORT (CAR (SPLIT X))) (MERGESORT (MV-NTH 1 (SPLIT X)))) Before we show merge-ordered is a permutation of append, first we'll show that either argument in append can have a permutation of that argument substituted without changing the result (with respect to permutations), which will allow us to use the hypotheses above. We start with: <<append congruences>>= (defthm permutation-p-implies-permutation-p-append-2 (implies (permutation-p y y-equiv) (permutation-p (append x y) (append x y-equiv))) :rule-classes (:congruence)) The congruence rule class is a special class for rules of this form (we could have also used defcong). ACL2 proves the above automatically, but if we attempt to show the analogous result for argument 1, we get stuck at this checkpoint: Subgoal *1/3.2 (IMPLIES (AND (CONSP X) (PERMUTATION-P (APPEND (CDR X) Y) (APPEND (DELETE-ELEMENT (CAR X) X-EQUIV) (MEMBER (CAR X) X-EQUIV) (PERMUTATION-P (CDR X) (DELETE-ELEMENT (CAR X) X-EQUIV))) (MEMBER (CAR X) (APPEND X-EQUIV Y))). Eliminating unneeded hypotheses: (IMPLIES (MEMBER (CAR X) X-EQUIV) (MEMBER (CAR X) (APPEND X-EQUIV Y))) This is clearly true. We show a general lemma for this, which allows us to prove the desired congruence: <<append congruences>>= (defthm member-of-append-iff-member-of-operand (iff (member a (append x y)) (or (member a x) (member a y)))) (defthm permutation-p-implies-permutation-p-append-1 (implies (permutation-p x x-equiv) (permutation-p (append x y) (append x-equiv y))) :rule-classes (:congruence)) [edit] merge-ordered is a permutation of append Now we focus on showing that merge-ordered is a permutation of append. <<merge-ordered-permutation-of-append simple>>= (defthm merge-ordered-permutation-of-append (permutation-p (merge-ordered x y) (append x y))) If we examine the proof, we see that it tried to perform induction based on append. This isn't very useful, as merge-ordered is not expanded, and we know more about append than merge-ordered. Instead, we explicitly tell it to induct over merge-ordered: (defthm merge-ordered-permutation-of-append (permutation-p (merge-ordered x y) (append x y)) :hints (("Goal" :induct (merge-ordered x y)))) Now, merge-ordered has been completely eliminated from the conclusion in our simplification checkpoint: Subgoal *1/4'' (IMPLIES (AND (CONSP X) (CONSP Y) (<= (CAR Y) (CAR X)) (PERMUTATION-P (MERGE-ORDERED X (CDR Y)) (APPEND X (CDR Y)))) (PERMUTATION-P (APPEND X (CDR Y)) (DELETE-ELEMENT (CAR Y) (APPEND X Y)))). Observe that (append (cdr y) x) and (delete-element (car y) (append y x)) are the same list. This means that we should be able to show this if we can just show two lemmas: one showing that switching the args to append gives a permutation, and the other a congruence showing that we can permute the second argument to delete-element: <<merge-ordered is a permutation of append>>= (defthm append-commutative-wrt-permutation (permutation-p (append x y) (append y x))) (defthm permutation-p-implies-permutation-p-delete-2 (implies (permutation-p x x-equiv) (permutation-p (delete-element a x) (delete-element a x-equiv))) :rule-classes (:congruence)) No progress is made because ACL2's heuristics won't reduce (DELETE-ELEMENT (CAR Y) (APPEND Y X)) to (APPEND (CDR Y) X). To facilitate this we add a rewrite rule that moves a DELETE-ELEMENT inside an <<merge-ordered is a permutation of append>>= (defthm delete-append-deletes-from-leftmost-containing (equal (delete-element a (append x y)) (if (member a x) (append (delete-element a x) y) (append x (delete-element a y))))) We can now show the desired result: <<merge-ordered is a permutation of append>>= [edit] append of split yields the original list Finally, we must prove that appending the two lists returned by split yields the original input: (defthm append-split-permutation-of-original (permutation-p (append (car (split x)) (mv-nth 1 (split x))) The first simplification checkpoint looks like: Subgoal *1/3'' (IMPLIES (AND (CONSP X) (CONSP (CDR X)) (PERMUTATION-P (APPEND (CAR (SPLIT (CDDR X))) (MV-NTH 1 (SPLIT (CDDR X)))) (CDDR X))) (PERMUTATION-P (APPEND (CAR (SPLIT (CDDR X))) (CONS (CADR X) (MV-NTH 1 (SPLIT (CDDR X))))) (CDR X))). If we could extract the cons out of the append, it would match the hypothesis. We can do this using a simple permutation lemma, and then prove the result: <<append of split yields the original list>>= (defthm append-cons-commute-under-permutation (permutation-p (append x (cons a y)) (cons a (append x y)))) [edit] Final correctness result Now, we have everything we need to prove the result: <<Final correctness result>>= (defthm mergesort-permutation-of-input (permutation-p (mergesort x) x)) (defthm mergesort-correct (and (ordered-p (mergesort x)) (permutation-p (mergesort x) x)) :rule-classes nil) We use :rule-classes nil on the last theorem so that ACL2 doesn't try to create rewrite rules for it and give spurious warnings. hijacker hijacker
{"url":"http://en.literateprograms.org/Merge_sort_(ACL2)","timestamp":"2014-04-16T08:57:59Z","content_type":null,"content_length":"86445","record_id":"<urn:uuid:5fe428d3-2118-4b7d-a811-de07f09af3df>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick Excel question Re: Quick Excel question Quick Excel question Im working on analyzing stats and creating auction values through excel. I ran into a problem while trying to calculate average though. I did hits/ab and created a new column with that function. It gave me an error though and Im assuming it's because for a few of the values I'm dividing by 0. Is there an easy way to fix this, or should I just delete all the players with 0 abs? Webster11 wrote:Im working on analyzing stats and creating auction values through excel. I ran into a problem while trying to calculate average though. I did hits/ab and created a new column with that function. It gave me an error though and Im assuming it's because for a few of the values I'm dividing by 0. Is there an easy way to fix this, or should I just delete all the players with 0 nope, can't bend the rules of algebra, but you can enter this: of course use the cell numbers instead of H and AB TennCare rocks!!!! Rugy needs an icon that means excel genious I'm too lazy to make a sig at the moment Re: Quick Excel question RugbyD wrote: Webster11 wrote:Im working on analyzing stats and creating auction values through excel. I ran into a problem while trying to calculate average though. I did hits/ab and created a new column with that function. It gave me an error though and Im assuming it's because for a few of the values I'm dividing by 0. Is there an easy way to fix this, or should I just delete all the players with 0 abs? nope, can't bend the rules of algebra, but you can enter this: of course use the cell numbers instead of H and AB Rugby's suggestion is dead on, but if I'm calculating BA like that, I would modify his formula just a smidge: That way the output will be consistent in both format and appearance. It also won't screw with your sorts. Again, that's just me. Not trying to step on Rugby's suggestion or anything... Re: Quick Excel question JTWood wrote: RugbyD wrote: Webster11 wrote:Im working on analyzing stats and creating auction values through excel. I ran into a problem while trying to calculate average though. I did hits/ab and created a new column with that function. It gave me an error though and Im assuming it's because for a few of the values I'm dividing by 0. Is there an easy way to fix this, or should I just delete all the players with 0 abs? nope, can't bend the rules of algebra, but you can enter this: of course use the cell numbers instead of H and AB Rugby's suggestion is dead on, but if I'm calculating BA like that, I would modify his formula just a smidge: That way the output will be consistent in both format and appearance. It also won't screw with your sorts. Again, that's just me. Not trying to step on Rugby's suggestion or anything... good call, so long as the cell result isn't being used to compute an average value with other cells. if so it will count the 0 if its there, but a blank won't factor in the calculation. TennCare rocks!!!! Re: Quick Excel question RugbyD wrote: JTWood wrote: RugbyD wrote: Webster11 wrote:Im working on analyzing stats and creating auction values through excel. I ran into a problem while trying to calculate average though. I did hits/ab and created a new column with that function. It gave me an error though and Im assuming it's because for a few of the values I'm dividing by 0. Is there an easy way to fix this, or should I just delete all the players with 0 abs? nope, can't bend the rules of algebra, but you can enter this: of course use the cell numbers instead of H and AB Rugby's suggestion is dead on, but if I'm calculating BA like that, I would modify his formula just a smidge: That way the output will be consistent in both format and appearance. It also won't screw with your sorts. Again, that's just me. Not trying to step on Rugby's suggestion or anything... good call, so long as the cell result isn't being used to compute an average value with other cells. if so it will count the 0 if its there, but a blank won't factor in the calculation. I love you, too, man. this could not be more Greek
{"url":"http://www.fantasybaseballcafe.com/forums/viewtopic.php?t=166057","timestamp":"2014-04-18T04:06:53Z","content_type":null,"content_length":"74184","record_id":"<urn:uuid:a11429fb-1f99-48ba-a477-0d158bde9f5a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
January/February 2011 (Vol. 13, No. 1) pp. 4 1521-9615/11/$31.00 © 2011 IEEE Published by the IEEE Computer Society Article Contents Download Citation Download Content DOWNLOAD PDF were mathematicians (think von Neumann and Turing). The debates over the use of computers in mathematics have been either more fierce or less so, depending on your viewpoint. If you're interested in any of the vast areas related to computational modeling, the central role of computation is obvious: it's the way you get the answer! Another well-studied connection is the one between the mathematical theory of computation and actual computation. Complexity results provide guidance as to what we should try to compute, while real-world computational problems suggest important questions about complexity. This is all well and good, but there's another set of issues lurking in the shadows—namely, questions about the use of computation in teaching and doing mathematics. My last column included some remarks on the doing; the topic is healthy and growing. The real battlefield has been computation's use in teaching mathematics. Almost everyone accepts the idea of using computers to produce useful graphics and, of course, the use of typesetting tools such as TeX to produce slides and class notes is almost universal. But what about using computing actually to do, say, calculus? Should we insist that mastery of calculus include the ability to differentiate things like x ^x ? Or (much as I enjoy such problems), are they something best left to tools such as Maple and Mathematica? And what is calculus anyhow? Archimedes almost discovered integration, and Fermat probably knew the fundamental theorem about the relation between differentiation and integration. So what did Leibniz and Newton invent? The answer is calculus! Here "calculus" means a notation and language along with a set of techniques for reasoning about approximations and their limits. The mechanics of calculus, such as how to use the chain rule and integration by parts, are just that: mechanics. Mathematics recently lost a great contributor: J.J. "Jerry" Uhl, a long-time professor of mathematics at the University of Illinois. In the first part of his career, Uhl earned distinction as a researcher in functional analysis, with a special emphasis on vector-valued measures. Together with Joe Diestel, he published one of the standard sources on this topic. He was a teacher par excellence, remembered by all who sat in his classes. He also mentored a long string of PhD students, many of whom are active in teaching and research at major universities today. In 1978, Tony Peressini, Francis Sullivan, and Jerry Uhl published a textbook on optimization. Working on this seemed to awaken Uhl's interest in computation. Then along came Mathematica, and Uhl got very excited (as only he could) about the possibility of using Mathematica to teach calculus. Sullivan predicted that there would be lots of resistance in mathematics departments, mainly from those whose idea of what should be taught as calculus is some combination of computational methods for finding integrals and subtle arguments from real analysis. The prospect of a pioneering effort that would incite battles really excited Uhl, as controversy often did. He joined forces with Horatio Porta and Bill Davis to create Calculus&Mathematica, an entirely new method of teaching calculus that relies heavily on symbolic computation and active classroom participation by students. In doing so, those three mathematicians made much progress and, of necessity, fought many battles both within their math departments and elsewhere. Change is difficult and resistance to change is a powerful force. It's much too early to tell if Uhl and his collaborators won their wars. But in the end, the new often replaces the old, especially if the new offers insights and proves more productive. Certain commercial products are identified to adequately specify or describe the subject matter here; in no case does that identification imply recommendation or endorsement by NIST. My columns are often informed by conversations with Francis Sullivan. In this case, his contributions were indispensable.
{"url":"http://www.computer.org/csdl/mags/cs/2011/01/mcs2011010004.html","timestamp":"2014-04-23T20:08:29Z","content_type":null,"content_length":"43247","record_id":"<urn:uuid:d493e244-817a-4a2f-8ef6-cccb2d67618e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Using the exact state space of a Markov model to compute approximate stationary measures Results 1 - 10 of 16 - International Journal of Simulation: Systems, Science & Technology , 2004 "... Abstract. This paper presents the advantages in extending Classical Tensor Algebra (CTA), also known as Kronecker Algebra, to allow the definition of functions, i.e., functional dependencies among its operands. Such extended tensor algebra have been called Generalized Tensor Algebra (GTA). Stochasti ..." Cited by 24 (13 self) Add to MetaCart Abstract. This paper presents the advantages in extending Classical Tensor Algebra (CTA), also known as Kronecker Algebra, to allow the definition of functions, i.e., functional dependencies among its operands. Such extended tensor algebra have been called Generalized Tensor Algebra (GTA). Stochastic Automata Networks (SAN) and Superposed Generalized Stochastic Petri Nets (SGSPN) formalisms use such Kronecker representations. The advantages of GTA do not imply in a reduction or augmentation of application scope, since there is a representation equivalence between SAN, which uses GTA, and SGSPN, which uses only CTA. Two modeling examples are presented in order to draw comparisons between the memory needs and CPU time required for the generation and solution using both formalisms, showing the computational advantages in using GTA. 1 , 2003 "... We describe the main features of SmArT, a software package providing a seamless environment for the logic and probabilistic analysis of complex systems. SmArT can combine dierent formalisms in the same modeling study. For the analysis of logical behavior, both explicit and symbolic state-space g ..." Cited by 23 (13 self) Add to MetaCart We describe the main features of SmArT, a software package providing a seamless environment for the logic and probabilistic analysis of complex systems. SmArT can combine dierent formalisms in the same modeling study. For the analysis of logical behavior, both explicit and symbolic state-space generation techniques, as well as symbolic CTL model-checking algorithms, are available. For the study of stochastic and timing behavior, both sparse-storage and Kronecker numerical solution approaches are available when the underlying process is a Markov chain. In addition, , 2004 "... We present techniques for computing the solution of large Markov chain models whose generators can be represented in the form of a generalized tensor algebra, such as Stochastic Automata Networks (SAN). Many large systems include a number of replications of identical components. This paper exploits ..." Cited by 17 (5 self) Add to MetaCart We present techniques for computing the solution of large Markov chain models whose generators can be represented in the form of a generalized tensor algebra, such as Stochastic Automata Networks (SAN). Many large systems include a number of replications of identical components. This paper exploits replication by aggregating similar components. This leads to a state space reduction, based on lumpability. We define SAN with replicas, and we show how such SAN models can be strongly aggregated, taking functional rates into account. A tensor representation of the matrix of the aggregated Markov chain is proposed, allowing to store this chain in a compact manner and to handle larger models with replicas more efficiently. Examples and numerical results are presented to illustrate the reduction in state space and, consequently, the memory and processing time gains. - In Validation of Stochastic Systems , 2004 "... Abstract. This paper describes symbolic techniques for the construction, representation and analysis of large, probabilistic systems. Symbolic approaches derive their efficiency by exploiting high-level structure and regularity in the models to which they are applied, increasing the size of the stat ..." Cited by 15 (2 self) Add to MetaCart Abstract. This paper describes symbolic techniques for the construction, representation and analysis of large, probabilistic systems. Symbolic approaches derive their efficiency by exploiting high-level structure and regularity in the models to which they are applied, increasing the size of the state spaces which can be tackled. In general, this is done by using data structures which provide compact storage but which are still efficient to manipulate, usually based on binary decision diagrams (BDDs) or their extensions. In this paper we focus on BDDs, multi-valued decision diagrams (MDDs), multi-terminal binary decision diagrams (MTBDDs) and matrix diagrams. 1 - In Tools of Aachen 2001 Int. Multiconference on Measurement, Modelling and Evaluation of Computer Communication Systems , 2001 "... al collections of homogeneous objects indexed by set elements. a[3][0.2] aggregates : analogous to the Pascal \record". p:3 A type can be further modied by the following natures, which describe stochastic characteristics: const: (the default) a non-stochastic quantity. ph: a random variable w ..." Cited by 12 (2 self) Add to MetaCart al collections of homogeneous objects indexed by set elements. a[3][0.2] aggregates : analogous to the Pascal \record". p:3 A type can be further modied by the following natures, which describe stochastic characteristics: const: (the default) a non-stochastic quantity. ph: a random variable with discrete or continuous phase-type distribution. rand: a random variable with arbitrary distribution. ctmc, dtmc, spn, . . . : stochastic formalisms dening a stochastic process indexed by time. 1.1 Function declarations Syntactically, objects dened in SMART are functions, possibly recursive, and can be overloaded: real pi := 3.14; /* a parameter-less function */ bool close(real a, real b) := abs(a-b) < 0.00001; /* a two-parameter function */ int pow(int base, int e - Proc. PAPM/PROBMIV 2001, Available as Volume 2165 of LNCS (2001 , 2001 "... We review high-level specification formalisms for Markovian performability models, thereby emphasising the role of structuring concepts as realised par excellence by stochastic process algebras. Symbolic representations based on decision diagrams are presented, and it is shown that they quite id ..." Cited by 6 (2 self) Add to MetaCart We review high-level specification formalisms for Markovian performability models, thereby emphasising the role of structuring concepts as realised par excellence by stochastic process algebras. Symbolic representations based on decision diagrams are presented, and it is shown that they quite ideally support compositional model construction and analysis. - In Proc. DSN , 2002 "... Implicit techniques for representing and generating the reachability set of a high-level model have become quite efficient. However, such techniques are usually restricted to models whose events have equal priority. Models containing events with differing classes of priority or complex priority stru ..." Cited by 6 (2 self) Add to MetaCart Implicit techniques for representing and generating the reachability set of a high-level model have become quite efficient. However, such techniques are usually restricted to models whose events have equal priority. Models containing events with differing classes of priority or complex priority structure, in particular models with immediate events, have thus been required to use explicit reachability set generation techniques. In this paper, we present an efficient implicit technique, based on multi-valued decision diagram representations for sets of states and matrix diagram representations for next-state functions, that can handle models with complex priority structure. If the model contains immediate events, the vanishing states can be eliminated either during generation, by manipulating the matrix diagram, or after generation, by manipulating the multi-valued decision diagram. We apply both techniques to several models and give detailed results. 1. - Proceedings of the 9th International Workshop on Petri Nets and Performance Models , 2001 "... Petri nets and stochastic Petri nets have been widely adopted as one of the best tools to model the logical and timing behavior of discrete-state systems. However, their practical applicability is limited by the state-space explosion problem. We survey some of the techniques that have been used to c ..." Cited by 3 (0 self) Add to MetaCart Petri nets and stochastic Petri nets have been widely adopted as one of the best tools to model the logical and timing behavior of discrete-state systems. However, their practical applicability is limited by the state-space explosion problem. We survey some of the techniques that have been used to cope with large state spaces, starting from early explicit methods, which require data structures of size proportional to the number of states or state-to-state transitions, then moving to implicit methods, which borrow ideas from symbolic model checking (binary decision diagrams) and numerical linear algebra (Kronecker operators) to drastically reduce the computational requirements. Next, we describe the structural decomposition approach which has been the topic of our research in the last few years. This method only requires to specify a partition of the places in the net and, combining decision diagrams and Kronecker operators with the new concepts of event locality and node saturation, achieves fundamental gains in both memory and time efficiency. At the same, the approach is applicable to a wide range of models. We conclude by considering several research directions that could further push the range of solvable models, eventually leading to an even greater industrial acceptance of this simple yet powerful modeling formalism. "... This paper presents a method to represent Finite Capacity Queueing Networks - FCQN - using an alternative formalism called Stochastic Automata Networks - SAN. This method can be performed automatically and the resulting SAN model can be solved by traditional solution methods. The solution of a SAN m ..." Cited by 3 (1 self) Add to MetaCart This paper presents a method to represent Finite Capacity Queueing Networks - FCQN - using an alternative formalism called Stochastic Automata Networks - SAN. This method can be performed automatically and the resulting SAN model can be solved by traditional solution methods. The solution of a SAN model can provide stationary results like throughput, servers utilization, response time and population of each queue. The FCQN models that can be handled by this method include, but are not limited to the following features: dierent classes of clients with or without priority; open and closed queueing systems with blocking due to restricted capacity; open systems with loss of clients due to restricted capacity or priority among classes; and variable routing patterns according to queues local states. The benets of the use of SAN are related to other similar approaches in the conclusion. keywords: performance evaluation, numerical solutions, nite capacity queueing networks, stochastic auto...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=127533","timestamp":"2014-04-16T21:20:47Z","content_type":null,"content_length":"38062","record_id":"<urn:uuid:89ea538c-b9e7-4581-b5fa-5a8bfd0c7515>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
The area of study known as the history of mathematics is primarily an investigation into the origin of new discoveries in mathematics and, to a lesser extent, an investigation into the standard mathematical methods and notation of the past. Before the modern age and the worldwide spread of knowledge, written examples of new mathematical developments have come to light only in a few locales. The most ancient mathematical texts available are Plimpton 322 (Babylonian mathematics ca. 1900 BC), the Moscow Mathematical Papyrus (Egyptian mathematics ca. 1850 BC), the Rhind Mathematical Papyrus (Egyptian mathematics ca. 1650 BC), and the Shulba Sutras (Indian mathematics ca. 800 BC). All of these texts concern the so-called Pythagorean theorem, which seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry. Egyptian and Babylonian mathematics were then further developed in Greek and Hellenistic mathematics, which is generally considered to be one of the most important for greatly expanding both the method and the subject matter of mathematics. The mathematics developed in these ancient civilizations were then further developed and greatly expanded in Islamic mathematics. Many Greek and Arabic texts on mathematics were then translated into Latin in medieval Europe and further developed there. One striking feature of the history of ancient and medieval mathematics is that bursts of mathematical development were often followed by centuries of stagnation. Beginning in Renaissance Italy in the 16th century, new mathematical developments, interacting with new scientific discoveries, were made at an ever increasing pace, and this continues to the present day. For a look at the development of number systems - Click here
{"url":"http://www.suffolkmaths.co.uk/pages/1Historical.htm","timestamp":"2014-04-20T21:45:18Z","content_type":null,"content_length":"21262","record_id":"<urn:uuid:a15c063f-9aef-44a8-a3bc-e2be0b754a68>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Exact algorithm for delay-constrained capacitated minimum spanning tree network Y.J. Lee and M. Atiquzzaman Abstract: The delay-constrained capacitated minimum spanning tree (DC-CMST) problem of finding several broadcast trees from a source node is discussed. While the traditional CMST problem deals with only the traffic capacity constraint served by a port of the source node, and delay-constrained minimum spanning tree (DCMST) considers only the maximum end­end delay constraint, the DC-CMST problem deals with both the mean network delay and traffic capacity constraints. The DC-CMST problem consists of finding a set of minimum cost spanning trees to link end-nodes to a source node satisfying the traffic requirements at end-nodes and the required mean delay of the network. In the DC-CMST problem, the objective function is to mini- mise the total link cost. A dynamic programming-based three-phase algorithm that solves the DC-CMST problem is proposed. In the first phase, the algorithm generates feasible solutions to satisfy the traffic capacity constraint. It finds the CMSTs in the second phase, and allocates the optimal link capacities to satisfy the mean delay constraint in the third phase. Performance evalu- ation shows that the proposed algorithm has good efficiency for any network with less than 30 nodes and light traffic. The proposed algorithm can be applied to any network regardless of its con- figuration, and used for the topological design of local networks and for efficient routing algorithms capable of constructing least cost broadcast trees. 1 Introduction
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/888/1088884.html","timestamp":"2014-04-16T11:04:02Z","content_type":null,"content_length":"8746","record_id":"<urn:uuid:8cf62352-ca76-4d17-a2bf-cca98d706fdf>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Submitted by mrd on Thu, 02/22/2007 - 8:57pm. Dynamic Programming is an algorithm design strategy which can be essentially described as breaking a problem down into subproblems and re-using previously computed results. Sometimes, a problem which appears to take exponential time to solve can be reformulated using a Dynamic Programming approach in which it becomes tractable. The benefit is especially clear when the subproblem solutions overlap considerably. The technique of memoization is a major time-saver in those cases. A common Haskell newbie question is: how do I memoize? At first, it appears to be a very difficult problem, because access to mutable arrays and hashtables is restricted. It is important to realize that lazy evaluation is actually memoization itself and can be leveraged in that way for the purposes of Dynamic Programming. In fact, as a result, the expression of these algorithms can be more natural and lucid in Haskell than in a strict language. Here, I am going to examine the classic ``knapsack problem.'' Given a number of items, their values, and their sizes -- what is the best combination of items that you can fit in a limited-size > module Knapsack where > import Control.Monad > import Data.Array > import Data.List > import Test.QuickCheck I am going to represent items with this data type. Essentially, it is just a tuple of the item itself, its value, and its size. > data Item a = Item { item :: a, > itemValue :: Int, > itemSize :: Int } > deriving (Eq, Show, Ord) Cells will be used both to represent the solution to the knapsack problem, and as individual cells in the matrix for the Dynamic Programming algorithm. It is a pair consisting of: summed values, and the items in the sack. > data Cell a = Cell (Int, [Item a]) > deriving (Eq, Show, Ord) definition is a very neat use of the List monad (which I pulled from the Haskell wiki). You can think of it as saying: for each element in the list, half the possible subsets will include it, and half will not. > powerset :: [a] -> [[a]] > powerset = filterM (const [True, False]) brutepack considers the powerset of the items, cutting out those subsets which are too large in size, and picking the most valuable subset left. As you might figure, this is going to run in O(2^n) thanks to the use of . The definition should be simple to understand and will provide a sound basis for testing the Dynamic Programming alternative. > brutepack :: (Ord a) => Int -> [Item a] -> Cell a > brutepack size items = maximum [ > cellOf subset | > subset <- powerset items, itemsFit subset > ] > where > itemsFit items = sum (map itemSize items) <= size > cellOf items = Cell (sum (map itemValue items), items) The Dynamic Programming algorithm is as follows: Consider a matrix where the rows are indexed by size and the columns by items. The rows range from 1 to the size of the knapsack, and the columns are one-to-one with the items in the list. value = $30 $20 $40 size = 4 3 5 4| v(2,4)<--. | | 5| | | | 6| | | | 7| | | <--| 8| v(2,8) v(3,8) This is a diagram where the knapsack has a maximum size allowance of 8, and we want to stuff some animals in it. Each element in the matrix is going to tell us the best value of items by size. That means the answer to the whole problem is going to be found in v(3,8) which is the bottom-rightmost corner. The value of any one cell in the matrix will be decided by whether it is worthwhile to add that item to the sack or not. In the v(3,8) cell it compares the v(2,8) cell to the left to the v(2,4) cell up above. The v(2,8) cell has no room for the bear, and the v(2,4) cell represents the situation where the bear will fit. So the question, after determining if the bear will fit in the bag at all, is: is value of bear + v(2,4) better than v(2,8)? This definition of v lends itself to a very nice recursive formulation. / v(m-1,n) if (s_n) > n v(m,n) = -| / v(m-1,n) / \ \ max -| otherwise \ v(m-1,n-(s_n)) + v_n where s_n is the size of item n and v_n is its value. A typical implementation of this algorithm might initialize a 2D array to some value representing ``undefined.'' In Haskell, we can initialize the entire array to the correct value directly and recursively because it will not be computed until needed. All that is necessary is to express the data-dependencies, and the order of evaluation will take care of itself. This code takes the algorithm a little further by tracking a field in each cell that contains a list of the items in the sack at that point. > dynapack :: Int -> [Item a] -> Cell a > dynapack size items = valOf noItems size > where > noItems = length items > itemsArr = listArray (1,noItems) items > itemNo n = itemsArr ! n > > table = array ((1,1),(noItems,size)) $ > [ > ((m, n), cell m n) | > m <- [1..noItems], > n <- [1..size] > ] > > valOf m n > | m < 1 || n < 1 = Cell (0, []) > | otherwise = table ! (m,n) > > cell m n = > case itemNo m of > i@(Item _ v s) > | s > n || > vL >= vU + v -> Cell (vL , isL) > | otherwise -> Cell (vU + v, i:isU) > where > Cell (vL, isL) = valOf (m - 1) n > Cell (vU, isU) = valOf (m - 1) (n - s) The matrix is defined in the variable table and valOf is our function v here. This definition very naturally follows from the algorithmic description because there is no problem with self-reference when defining cells in the array. In a strict language, the programmer would have to manually check for the presence of values and fill in the table. It's important to be confident that the algorithm is coded correctly. Let's see if brutepack and dynapack agree on test inputs. I will define my own simple data type to customize the randomly generated test data. > newtype TestItems = TestItems [(Int, Int, Int)] > deriving (Eq, Show, Ord) > nubWithKey k = nubBy (\a b -> k a == k b) > fst3 (a,b,c) = a > tripleToItem (i,v,s) = Item i v s The Arbitrary class expects you to define your customly generated test data in the Gen monad. QuickCheck provides a number of handy combinators, and of course you can use normal monadic functions. sized is a QuickCheck combinator which binds the generator's notion of "size" to a parameter of the supplied function. This notion of "size" is a hint to test-data creators that QuickCheck wants data on the "order of" that size. Of course, what "size" means can be freely interpreted by the author of the function, in this case I am using it for a couple purposes. The basic idea is simply: create a list of randomly generated tuples of length "size", and choose values and item-sizes randomly from (1, "size"). Notice how the randomly generated tuple is replicated with the monadic combinator replicateM. Then, before returning, just make sure that there are no "repeated items" by running nubWithKey fst3 over the generated list. That will cut out any items with the same name as previous items. > instance Arbitrary TestItems where > arbitrary = sized $ \n -> do > items <- replicateM n > $ do > i <- arbitrary > v <- choose (1, n) > s <- choose (1, n) > return $ (i, v, s) > return . TestItems . nubWithKey fst3 $ items With an Arbitrary instance, we can now define a property: it extracts the tuples and creates Items out of them, then tries out both algorithms for equivalence. Note that I am only checking the final values, not the actual items, because there may be more than one solution of the same value. > prop_effectivePacking (TestItems items) = v1 == v2 > where items' = map tripleToItem items > Cell (v1,_) = brutepack 16 items' > Cell (v2,_) = dynapack 16 items' Knapsack> verboseCheck prop_effectivePacking 0: TestItems [(1,2,3),(3,2,2)] 1: TestItems [] 2: TestItems [(1,1,1)] 3: TestItems [(2,1,2),(-1,2,3),(0,2,3)] 4: TestItems [(-2,2,2),(-1,1,1),(3,2,3),(4,2,2)] ... It will progressively check larger "size" samples and you will notice that the brute force algorithm is going to start dragging down performance mightily. On my computer, in the ghci interpreter, brutepack on even just 20 items takes 10 seconds; while dynapack takes almost no time at all. Algorithms making use of Dynamic Programming techniques are often expressed in the literature in an imperative style. I have demonstrated an example of one such algorithm in a functional language without resorting to any imperative features. The result is a natural recursive expression of the algorithm, that has its advantage from the use of lazy evaluation. Submitted by dokondr on Sat, 03/18/2006 - 1:04pm. Haskell newbie: Recursive lambda definitions? Simon Thompson gives the following exercise (10.9) in his "Haskell. The Craft of Functional Programming" book: 10.9 Define a function total total:: (Int -> Int) -> (Int -> Int) so that total f is the function which at value n gives the total f 0 + f 1 + ... + f n I use 'where' clause to describe the resulting function 'tot': total:: (Int -> Int) -> (Int -> Int) total f = tot tot n | n >= 0 = (f n) + (tot (n-1)) | otherwise = 0 test = total f1 4 Q: Is it possible instead of naming and defining a resulting function (such as 'tot' in this example) just use recursive lambda definition? In this example recursion is required to create a function which is a variable sum of another function applications like: f 0 + f 1 + ... + f n Giving function a name ('tot' in this case) makes recursive definition possible. But what about lambda recursion? Can it be defined? Submitted by shapr on Wed, 02/23/2005 - 12:44pm. Jeff Newbern's All About Monads is the best monad tutorial I've seen yet! This tutorial starts with the most basic definition of a monad, and why you might want one. It covers most of the monad instances in the standard libraries, and also includes monad transformers. It wraps up nicely with links to Parsec, category theory, and arrows. You can read it online, or download as a zip file or tarball. If you've been looking for a good monads tutorial, try this one first! Submitted by shapr on Tue, 02/22/2005 - 11:13am. Algorithms: A Functional Programming Approach is one of my top ten favorite computer science books. First, it covers the basics of Haskell and complexity theory. Then for each algorithm it gives first an easy to read implementation, and then a more efficient but harder to read implementation. Each of the transformations from clear to fast versions are discussed, and optimizations are explained. This book was also my first introduction to methodical step-by-step algorithmic optimization systems, in this case the Burstall & Darlington system. I've since used the lessons I learned in this book in my commercial work in Python, SQL, Java, and of course Haskell. The best audience for this book is those who are looking for a second Haskell book, or new to algorithms, or would like to learn how to optimize pure (non-monadic) Haskell code systematically. The sections on top-down design techniques and dynamic programming would be of interest to programmers who are still learning and wish to know more about structuring larger programs. Even with all that content, this softcover book is only 256 pages (coincidentally binary?), allowing for easy reading in any spare moment.
{"url":"http://sequence.complete.org/taxonomy/term/2","timestamp":"2014-04-18T15:40:23Z","content_type":null,"content_length":"30650","record_id":"<urn:uuid:6eb49485-8ef5-4424-b2da-6b3868a921ec>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: The meaning of truth Jeffrey Ketland ketland at ketland.fsnet.co.uk Tue Nov 7 16:56:31 EST 2000 Charles Silver wrote: >On the other hand, if (2) is thought worthy of attention, it >seems to me the only recourse would be to eliminate any reference to >the standard model when invoking the notion of truth for first-order >sentences of PA. This would require a revamping of the usual truth >definition. As a start, let's say that closed formulas of the >language of PA are flat-out true or flat-out false, depending on >whether they're true of the natural numbers or false of them. So, >(the universal closure of) 'S(x) = S(y) -> x = y' would be flat-out >true. So far, no reference to "models". But, if we wanted to spell >this out thoroughly, how would we continue? What would the metatheory >look like (Would there even be a metatheory?)? Does Kanovei, or >anyone else, have an idea how one might fill in the details? Dear Charlie - I think the sort of details you're after have been discussed (see below) and some are even well-known. I seem to remember that Martin Davis mentioned writing about this in his PhD thesis!! We're talking about Formalized Truth Theories. I like these. Even better - they're *common ground* for formalists/nominalists and realists/platonists. They're just formal systems which prove some appropriate (sub-)set of instances of Tarski's disquotation scheme Tr(#A) <--> A. Of course, no consistent formal system (extending Q) contains a predicate B(x) such that it proves all instances of B(#A) <--> A. (If you take Tr(x) as a primitive predicate, and add the scheme Tr(#A) <--> A (for all arithmetic A) to PA, the result is a conservative extension. But that's a really boring truth theory). I think the most interesting formalized truth theories are PA(S) and a weakening which I call PA(S)_0. (See below). Contemporary stuff on such matters, e.g., see [1] Boolos &Jeffrey 1989 ("Computability and Logic"), Chapter 15 (I think), "On Defining Arithmetic Truth". (constructs the truth definition for first-order arithmetic *within* Z_2). [2] Sol Feferman 1991 "Reflecting on Incompleteness" (JSL). (adds a primitive truth predicate Tr(x), plus axioms). [3] Couple of books in 1994, 1996 (in German : ( ), and several papers by Volker Halbach on disquotational Tarskian truth definitions for arithmetic. [4] Last (and probably least), see Ketland 1999 "Deflationism and Tarski's Paradise", Mind 108. The formal system known (to model theorists) as PA(S) (i.e., PA + is discussed in: [6] Richard Kaye 1991: "Models of Peano Arithmetic" (Chapter 15, "Recursive (Roughly, the main result is this (due to Krajewski, et al 1981): take a weaker system than PA(S) but without induction on the new formulas. Call this PA(S)_0. Then: any countable model M of PA can be extended to a recursively saturated expansion (M*, S) which is a model of PA(S)_0. In particular, it follows that PA(S)_0 is a conservative extension of PA. I'm not aware of a proof-theoretic version of this conservation theorem. OK. Here's one way of writing down PA(S) - from memory. The base axioms are PA in the language L of arithmetic. Using the pairing function, you code up sequences (n_1, n_2, ...n_k) as numbers. Then you define a valuation v(t, s) meaning "the value of term t in sequence s". More detail is given in Kaye's book [6]. Then you introduce a *primitive satisfaction predicate* Sat(x, y), with new (i) Sat(("t1 = t2", s) <--> v(t1, s) = v(t2, s) This is a disquotational axiom. Using the properties of v(t, s), you can prove nice disquotational theorems like: (ia) Sat("t = 0", s) <--> v(t, s) = 0 (ib) Sat("t = 1", s) <--> v(t, s) = 1 (ic) Sat("t1 = t2 + t3", s) <--> v(t1, s) = v(t2, s) + v(t3, s)) (id) Sat("t1 = t2 x t3", s) <--> v(t1, s) = v(t2, s) x v(t3, s)) The important truth-theoretic axioms are: (ii) Sat(neg(f), s) <--> ~Sat(f, s) (iii) Sat(conj(f, g), s) <--> Sat(f, s) & Sat(g, s) (iv) Sat("Exi f", s) <--> Es*("s and s* differ at most at ith place" & Sat(f, s*)) [These just say that "truth commutes through the logical operators". A big debate between realists and constructivists is whether *proof* commutes with the logical operators] (v) Tr(x) <--> (Sent_L(x) & for all s, Sat(x, s)) [So this is the definition of disquotational truth: a sentence A is an arithmetic truth iff it's a sentence of L and it's satisfied by all sequences. Basically Tarski's Definition 23 in his 1935/36 paper]. The resulting formal system is called PA(S). It includes full induction on formulas containing Sat(x, y). Some properties of PA(S): (1) Convention T: PA(S) proves all instances of Tr(#A) <-> A, with A (2) Liar sentences: Using the Diagonal Lemma, PA(S) also proves a sentence ~~Tr(#X) <-> X. This X is the "liar sentence". Interestingly, PA(S) proves X as well!! (Because X is constructed by diagonalization on the truth predicate. So, X contains the truth predicate. So, X isn't a sentence of L. This is represented in PA. PA |- ~Sent(#X). It follows that PA(S) |- ~~Tr(#X). Hence, PA(S) |- X.) (3) Conservativeness: Unlike PA(S)_0, PA(S) is a *non-conservative* extension of PA. (4) PA(S) |- forall x (Prov_PA(x) --> Tr(x)) (this requires the induction scheme). (5) A corollary is that if G is a goedel sentence for PA, then PA(S) |- G. (6) Standard model: If you take the standard model |N for PA, and you take S to be the standard satisfaction relation on |N, then (|N, S) is the standard model of PA(S). But - as you required - PA(S) itself doesn't mention models or anything like that - it doesn't mention the "standard model". In a sense, PA(S) contains both PA and a *disquotational* semantic meta-theory for PA, in a single formal system. Of course, the meta-theory for PA(S) itself is not included (but you might want to formalize this e-mail message!). As you required, the formula Tr("for all x, y (S(x) = S(y) -> x = y)") is (trivially) provable in PA(S). Best - Jeff ~~~~~~~~~~~ Jeffrey Ketland ~~~~~~~~~ Dept of Philosophy, University of Nottingham Nottingham NG7 2RD United Kingdom Tel: 0115 951 5843 Home: 0115 922 3978 E-mail: jeffrey.ketland at nottingham.ac.uk Home: ketland at ketland.fsnet.co.uk More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-November/004569.html","timestamp":"2014-04-17T04:22:35Z","content_type":null,"content_length":"9077","record_id":"<urn:uuid:cee4d899-977e-4e59-bebf-e54bbb59b588>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Why is that you fill in a form by filling it out, an alarm goes off by going on, a building burns down as it burns up, you cut a tree up after you cut it down, and when the stars are out you can see them better when the lights are out? "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=39969","timestamp":"2014-04-20T18:38:37Z","content_type":null,"content_length":"17524","record_id":"<urn:uuid:68217e30-f857-4167-96bd-e68c3b22119a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Alan Weiss I write documentation for MATLAB mathematical toolboxes, primarily optimization and PDE. I have also written documentation for statistics, symbolic math, and econometrics. My pre-MathWorks job was with Bell Labs, primarily in mathematical models of data traffic, with a strong interest in parallel computation and in rare events (large deviations).
{"url":"http://www.mathworks.com/matlabcentral/answers/contributors/1033975-alan-weiss/answers","timestamp":"2014-04-18T06:26:40Z","content_type":null,"content_length":"95901","record_id":"<urn:uuid:7e68fc58-7b9f-465c-9825-2eb61fc55d56>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Camden, NJ Precalculus Tutor Find a Camden, NJ Precalculus Tutor ...I have a college degree in mathematics. I have successfully passed the GRE's (to get into graduate school) as well as the Praxis II content knowledge test for mathematics. Therefore, I am qualified to tutor students in SAT Math. 16 Subjects: including precalculus, English, calculus, physics ...For the SAT, each student receives a 95-page spiral-bound book of strategies, notes, and practice problems that I created from scratch after a rigorous analysis of the test. As a Pennsylvania certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Ma... 19 Subjects: including precalculus, calculus, statistics, geometry ...In particular, he was proud to be part of the NASA Space Shuttle program and the development of new-generation jet engines by the General Electric Company. Dr. Peter is always willing to offer flexible scheduling to suit the client's needs. 10 Subjects: including precalculus, calculus, algebra 1, GRE ...I encourage everyone to read and understand the constitution and its amendments. The best way to be an American citizen is to understand how our government works and participate! I have a minor in biochemistry from University of Delaware. 14 Subjects: including precalculus, chemistry, algebra 1, algebra 2 I am graduate student working in engineering and I want to tutor students in SAT Math and Algebra and Calculus. I think I could do a good job. I studied Chemical Engineering for undergrad, and I received a good score on the SAT Math, SAT II Math IIC, GRE Math, and general math classes in school. 8 Subjects: including precalculus, calculus, geometry, algebra 1 Related Camden, NJ Tutors Camden, NJ Accounting Tutors Camden, NJ ACT Tutors Camden, NJ Algebra Tutors Camden, NJ Algebra 2 Tutors Camden, NJ Calculus Tutors Camden, NJ Geometry Tutors Camden, NJ Math Tutors Camden, NJ Prealgebra Tutors Camden, NJ Precalculus Tutors Camden, NJ SAT Tutors Camden, NJ SAT Math Tutors Camden, NJ Science Tutors Camden, NJ Statistics Tutors Camden, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Camden_NJ_precalculus_tutors.php","timestamp":"2014-04-19T02:19:12Z","content_type":null,"content_length":"24067","record_id":"<urn:uuid:4b036292-7d3d-44e5-a5a4-7a2487fb0def>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Evergreen, CO Math Tutor Find an Evergreen, CO Math Tutor ...I graduated High school with a weighted GPA of 4.32 and the International Baccalaureate Diploma for complete the IB program. I now am excelling at university with a GPA of 3.3. My general approach when tutoring is to develop a learning plan that would suit the individual students needs. 8 Subjects: including algebra 1, algebra 2, biology, calculus ...I have taken courses in all aspects of political science. I have done extensive reading in and know theoretical and factual basis for my field. I have great success in my courses by applying the theories and science of politics to coursework. 39 Subjects: including algebra 2, public speaking, elementary (k-6th), elementary math ...I look forward to helping you or your child succeed! In school I took Ordinary Differential Equations during my undergraduate program, and in my Master's program we covered the multiple approaches to solving Partial Differential Equations, which included solving the equations discretely by writi... 20 Subjects: including algebra 1, algebra 2, calculus, chemistry ...There's a lot of garbage taught in pre-algebra classes these days. But the simple fact of the matter is that algebra is NOTHING but arithmetic without the numbers. If you can add, subtract, multiply, and divide numbers, there is very little in grade school algebra that one doesn't already know. 57 Subjects: including precalculus, logic, algebra 1, algebra 2 ...I participated in the Quiz Bowl competition in which our group placed 2nd in our district and 4th in state in the Skills USA program. I would love to help you, your family or friends with whatever area I am able. Making learning fun is a main priority. 22 Subjects: including algebra 1, ACT Math, SAT math, geometry Related Evergreen, CO Tutors Evergreen, CO Accounting Tutors Evergreen, CO ACT Tutors Evergreen, CO Algebra Tutors Evergreen, CO Algebra 2 Tutors Evergreen, CO Calculus Tutors Evergreen, CO Geometry Tutors Evergreen, CO Math Tutors Evergreen, CO Prealgebra Tutors Evergreen, CO Precalculus Tutors Evergreen, CO SAT Tutors Evergreen, CO SAT Math Tutors Evergreen, CO Science Tutors Evergreen, CO Statistics Tutors Evergreen, CO Trigonometry Tutors Nearby Cities With Math Tutor Bow Mar, CO Math Tutors Columbine Valley, CO Math Tutors Conifer Math Tutors Edgewater, CO Math Tutors Erie, CO Math Tutors Frederick, CO Math Tutors Glendale, CO Math Tutors Golden, CO Math Tutors Idledale Math Tutors Indian Hills Math Tutors Kittredge Math Tutors Louisville, CO Math Tutors Morrison, CO Math Tutors Sheridan, CO Math Tutors Superior, CO Math Tutors
{"url":"http://www.purplemath.com/Evergreen_CO_Math_tutors.php","timestamp":"2014-04-17T01:28:11Z","content_type":null,"content_length":"23786","record_id":"<urn:uuid:7879032c-1198-4d02-91d6-bf5b6b5afac1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Elliptic curves On Aug 6, 1:57�pm, mm <nowhere@net> wrote: �a �crit : No. When I was talking of the order of a group based on an EC, I was not talking of an EC over a finite field. So it is over an infinite field. Q? Look up "torsion group". Now, all points except those in the torsion group (max order 12) have INFINITE order. In my 2nd post to E. S�ylemez, I wrote |With a curve E(A,B)/N, N being the product of two "big" different |primes, the order is not easy to compute (but we can build such a curve |with a known order when we know the factorization of N). I thought it made it clear that the computations are done with the curve E(A,B)/N where N is not a prime. E(A,B) mod N where N is composite does not even form an Elliptic Curve.
{"url":"http://www.derkeiler.com/Newsgroups/sci.crypt/2009-08/msg00058.html","timestamp":"2014-04-20T03:13:44Z","content_type":null,"content_length":"11360","record_id":"<urn:uuid:fc7d8473-5310-4301-ad2c-c585e9f7131d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Coursework 3 Coursework 3 (20%) 1. We define binary numbers (Bin) as sequences of Booleans. Load ex3.epi into Epigram and complete the definitions of The binary representations of one and six. Translates binary numbers to (Peano) natural numbers. Translates natural numbers to binary. shows that bin2nat is inverse to nat2bin I didn't ask to show that the inverse of nat2natProp holds, i.e. that bin2nat (nat2bin bs) = bs. Can you see why? How could one adapt the representation of Bin such that this holds? 2. Introduce a representation of binary words of fixed length, i.e. define a type Word together with translations from and to Bin as defined above. Define addition for words (wadd). The deadline for completing this coursework is Tuesday 6/12, 13:00. Once completed email the completed Epigram files to txa@cs.nott.ac.uk. There will be a short viva on your solution during the Tuesday lab after you have submitted it. Marks will be only be given after the viva. Thorsten Altenkirch Last modified: Wed Nov 3 21:39:02 GMT 2004
{"url":"http://www.cs.nott.ac.uk/~txa/g5bcfr/ex3.html","timestamp":"2014-04-21T12:09:32Z","content_type":null,"content_length":"1918","record_id":"<urn:uuid:004faa02-4f31-4def-8d84-2d6f54b63428>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Observations show that the universe is fairly homogeneous and isotropic at scales larger than about 150 h^-1 Mpc where 1 Mpc ^24 cm and h l h^-1 Mpc) dynamics of the universe from the issue of structure formation at smaller scales. The former is modeled by a homogeneous and isotropic distribution of energy density; the latter issue is addressed in terms of gravitational instability which will amplify the small perturbations in the energy density, leading to the formation of structures like galaxies. In such an approach, the expansion of the background universe is described by the metric (We shall use units with with c = 1 throughout, unless otherwise specified): with S[k](k = (1,0,-1). The function a(t) is governed by the equations: The first one relates expansion rate of the universe to the energy density k = 0, ± 1 is a parameter which characterizes the spatial curvature of the universe. The second equation, when coupled with the equation of state p = p(p to the energy density, determines the evolution of energy density a) in terms of the expansion factor of the universe. In particular if p = ww then, a^-3(1+w) and (if we further assume k = 0, which is strongly favoured by observations) the first equation in Eq.(2) gives a t^2/[3(1+w)]. We will also often use the redshift z(t), defined as (1 + z) = a[0] / a(t) where the subscript zero denotes quantities evaluated at the present moment. in a k=0 universe, we can set a[0] = 1 by rescaling the spatial coordinates. It is convenient to measure the energy densities of different components in terms of a critical energy density ([c]) required to make k = 0 at the present epoch. (Of course, since k is a constant, it will remain zero at all epochs if it is zero at any given moment of time.) From Eq.(2), it is clear that [c] = 3H[0]^2 / 8G where H[0] a)[0] - called the Hubble constant - is the rate of expansion of the universe at present. Numerically The variables [i] [i] / [c] will give the fractional contribution of different components of the universe (i denoting baryons, dark matter, radiation, etc.) to the critical density. Observations then lead to the following results: (1) Our universe has 0.98 [tot] [tot] can be determined from the angular anisotropy spectrum of the cosmic microwave background radiation (CMBR; see Section 6) and these observations (combined with the reasonable assumption that h > 0.5) show [1] that we live in a universe with critical density, so that k = 0. (2) Observations of primordial deuterium produced in big bang nucleosynthesis (which took place when the universe was about few minutes in age) as well as the CMBR observations show [2] that the total amount of baryons in the universe contributes about [B] = (0.024 ± 0.0012) h^-2. Given the independent observations [3] which fix h = 0.72 ± 0.07, we conclude that [B] Combined with previous item we conclude that most of the universe is non-baryonic. (3) Host of observations related to large scale structure and dynamics (rotation curves of galaxies, estimate of cluster masses, gravitational lensing, galaxy surveys ..) all suggest [4] that the universe is populated by a non-luminous component of matter (dark matter; DM hereafter) made of weakly interacting massive particles which does cluster at galactic scales. This component contributes about [DM] p[DM] [DM] a^-3 as the universe expands which arises from the evolution of number density of particles: nmc^2 n a^-3. (4) Combining the last observation with the first we conclude that there must be (at least) one more component to the energy density of the universe contributing about 70% of critical density. Early analysis of several observations [5] indicated that this component is unclustered and has negative pressure. This is confirmed dramatically by the supernova observations (see Ref. [6]; for a critical look at the current data, see Ref. [7]). The observations suggest that the missing component has w = p / [DE] dark energy with negative pressure is the cosmological constant which is a term that can be added to Einstein's equations. This term acts like a fluid with an equation of state p[DE] = -[DE]; the second equation in Eq.(2), then gives [DE] = constant as universe expands. (5) The universe also contains radiation contributing an energy density [R] h^2 = 2.56× 10^-5 today most of which is due to photons in the CMBR. The equation of state is p[R] = (1/3) [R]; the second equation in Eq.(2), then gives [R] a^-4. Combining it with the result [R] T^4 for thermal radiation, it follows that T a^-1. Radiation is dynamically irrelevant today but since ([R] / [DM]) a^-1 it would have been the dominant component when the universe was smaller by a factor larger than [DM] / [R] ^4 [DM] h^2. (6) Taking all the above observations together, we conclude that our universe has (approximately) [DE] [DM] [B] [R] ^-5. All known observations are consistent with such an - admittedly weird - composition for the universe. Using [NR] a^-3, [R] a^-4 and [DE] = constant we can write Eq.(2) in a convenient dimensionless form as where H[0] t, a = a[0] q([NR] = [B] + [DM] and This equation has the structure of the first integral for motion of a particle with energy E in a potential V(q). For models with [NR] + [DE] = 1, we can take E = 0 so that (dq / dV(q))^1/2. Based on the observed composition of the universe, we can identify three distinct phases in the evolution of the universe when the temperature is less than about 100 GeV. At high redshifts (small q) the universe is radiation dominated and Img src="../../New_Gifs/qdot.gif" alt="dot{q}"> is independent of the other cosmological parameters. Then Eq.(4) can be easily integrated to give a(t) t^1/2 and the temperature of the universe decreases as T t^-1/2. As the universe expands, a time will come when (t = t[eq], a = a[eq] and z = z[eq], say) the matter energy density will be comparable to radiation energy density. For the parameters described above, (1 + z[eq]) = [NR] / [R] ^4 [DM] h^2. At lower redshifts, matter will dominate over radiation and we will have a t^2/3 until fairly late when the dark energy density will dominate over non relativistic matter. This occurs at a redshift of z[DE] where (1 + z[DE]) = ([DE] / [NR])^1/3. For [DE] [NR] z[DE] T ^14 GeV; we will say more about this in Section 7. (For a textbook description of these and related issues, see e.g. Ref. [8].) Before we conclude this section, we will briefly mention some key aspects of the background cosmology described by a Friedmann model. (a) The metric in Eq.(1) can be rewritten using the expansion parameter a or the redshift z = (a[0] / a)^-1 -1 as the time coordinate in the form This form clearly shows that the only dynamical content of the metric is encoded in the function H(a) = (a). An immediate consequence is that any observation which is capable of determining the geometry of the universe can only provide - at best - information about this function. (b) Since cosmological observations usually use radiation received from distant sources, it is worth reviewing briefly the propagation of radiation in the universe. The radial light rays follow a trajectory given by if the photon is emitted at r[em] at the redshift z and received here today. Two other quantities closely related to r[em](z) are the luminosity distance, d[L], and the angular diameter distance d [A]. If we receive a flux F from a source of luminosity L, then the luminosity distance is defined via the relation F L / 4d[L]^2(z) . If an object of transverse length l subtends a small angle l = d [A]). Simple calculation shows that: (c) As an example of determining the spacetime geometry of the universe from observations, let us consider how one can determine a(t) from the observations of the luminosity distance. It is clear from the first equation in Eq. (8) that where the last form is valid for a k = 0 universe. If we determine the form of d[L](z) from observations - which can be done if we can measure the flux F from a class of sources with known value for luminosity L - then we can use this relation to determine the evolutionary history of the universe and thus the dynamics.
{"url":"http://ned.ipac.caltech.edu/level5/March06/Padmanabhan/Nabhan1.html","timestamp":"2014-04-18T01:20:05Z","content_type":null,"content_length":"20269","record_id":"<urn:uuid:6dd5c858-4108-4655-be17-cef15d420509>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US20010036303 - Method of automatic registration of three-dimensional images [0001] The present invention concerns the field of image processing, particularly of three-dimensional radiological images. [0002] As is known, radiological apparatuses comprise a means of emission of an X-ray beam such as an X-ray tube and a means of reception of the beam, such as a solid state detector or even a scintillator and a video camera, of CCD type, for example. [0003] The means of emission and the means of reception of X-rays are generally supported by a mobile system with one or more axes, making it possible to take pictures at different angles of incidence. The means of reception of the X-ray beam is connected to image processing means making possible the generation of so-called 3DXA three-dimensional images from a series of two-dimensional images picked up by the means of reception, these two-dimensional images being representative of the group of structures crossed by the X-rays. In a 3DXA three-dimensional image, the voxels are isotropic and have a dimension in the order of 300 μm. In angiography applications, the 3DXA images make it possible to see the blood vessels which are injected with contrast medium, but the other tissues can hardly be distinguished. [0004] The nuclear magnetic resonance apparatuses comprise means of sectional imaging, an image being representative of the proportion of water present in the structures observed. From a series of such so-called MR images taken along different cutting planes displaced in translation and/or in rotation, it is known how to reconstruct a so-called 3DMR three-dimensional image. In a 3DMR three-dimensional image, the voxels are anisotropic, that is, capable of having different dimensions along the axes of a three-dimensional mark. The resolution is in the order of one millimeter. In angiography, applications the 3DMR images make it possible to see the blood vessels and other tissues. [0005] It is important to obtain a good match between a 3DXA image and a 3DMR image in order to refine the knowledge of the structures observed, notably, of the blood vessels in their environment. [0006] Such a match can be obtained by external markers, use of which is constraining and creates risks of errors. [0007] The present invention is an improved method of registration. [0008] The present invention also concerns a method of registration of millimetric or submillimetric precision with short calculation time. [0009] The automatic registration method in an embodiment of the invention is intended for three-dimensional images making possible a good isualization of the blood vessels. [0010] A three-dimensional digital angiography image obtained by means of a radiology apparatus and a three-dimensional digital image obtained by means of a nuclear magnetic resonance apparatus are compared. From a point of correspondence between the two three-dimensional images, an estimate is made by processing of the projected two-dimensional images of a rotation capable of registering the two three-dimensional images, then one of the two three-dimensional images is registered in relation to the other, an estimate is made by processing of the projected two-dimensional images of a translation capable of registering the two three-dimensional images, and one of the two three-dimensional images is registered in relation to the other. [0011] The two-dimensional images are obtained by projection of the three-dimensional images. [0012] In one embodiment, an estimate is made again by processing of the projected two-dimensional images of a rotation capable of registering the three-dimensional image registered in relation to the other three-dimensional image, then the registered three-dimensional image is registered in relation to the other or the reverse, an estimate is made by processing of the projected two-dimensional images of a translation capable of registering the three-dimensional image registered in relation to the other three-dimensional image, and the registered three-dimensional image is registered in relation to the other or the reverse. The weaker resolution image will preferably be registered in relation to the stronger resolution image. [0013] Preferably, the point of correspondence between the three-dimensional image is chosen manually or automatically on a blood vessel. [0014] The rotation estimate treatment advantageously comprises stages of: [0015] selection of the voxels of each three-dimensional image, lying between an outer surface and an inner surface, both surfaces embracing the point of correspondence, [0016] radial projection on a spherical surface of voxels of maximum intensity among the voxels selected for each three-dimensional image, [0017] generation of a two-dimensional image for each three-dimensional image, by projection on a plane in a three-dimensional mark centered on the point of correspondence in order to flatten the [0018] calculation of the correlation between the two-dimensional images projected with zero angular displacement, followed by a positive and then negative angular displacement along each axis of the three-dimensional mark, [0019] determination of the angular displacement around the three axes of the mark of the three-dimensional space presenting the maximum correlation between the two-dimensional images. [0020] In one embodiment, the rotation estimate treatment comprises stages of reiteration of the two stages of calculation of the correlation and calculation of the angular displacement for a displacement of a small number of pixels. [0021] In one embodiment, the outer and inner surfaces comprise concentric spherical parts. The center of the spheres can be the point of correspondence. [0022] Each outer or inner surface advantageously comprises a truncated cone-shaped part, the vertex of the cone being the point of correspondence. The intersection of the cone and of a sphere defines a small circle of the sphere which limits the spherical part and the truncated cone by defining its base. [0023] In one embodiment, the directrix of the cone is a circle, for example, placed in a plane perpendicular to a straight line passing through the center of the circle and the point of [0024] The translation estimate treatment advantageously comprises stages of: [0025] selection of voxels of each three-dimensional image, included in a parallelepiped of given dimensions, centered on the point of correspondence, [0026] projection along three axes of a same three-dimensional mark, centered on the point of correspondence, of voxels of maximum intensity among the voxels selected for each three-dimensional image, generating three two-dimensional images for each three-dimensional image, the projection preferably being along parallel lines, [0027] calculation of the correlation between each pair of two-dimensional images projected along the same axis with zero displacement, followed by a positive and then negative displacement, of a given number of pixels along each axis of the plane of each two-dimensional image, [0028] calculation of the average correlation for each displacement, [0029] calculation of the translation displacement between the three-dimensional images corresponding to the displacement presenting the maximum average correlation between the two-dimensional images. The parallelepiped can be a cube, for example, of 16 mm per side. The side of a voxel being different in 3DXA and in MRI, if the cube has 64 voxels in 3DXA, it will encompass fewer voxels in MRI, if it is desired that both cubes be of the same size. [0030] In one embodiment, the translation estimate treatment comprises stages of reiteration of both stages of calculation of the correlation and calculation of the translation displacement at a lower pitch for a lesser displacement. [0031] In other words, a registration of two three-dimensional images is made by means of processing of two-dimensional images resulting from projections of the three-dimensional images. One thus avoids direct processings of three-dimensional images, which would be slow and expensive. The use of external markers can be avoided. After obtaining the necessary two-dimensional registration, the corresponding three-dimensional registrations can be deduced therefrom. [0032] The present invention will be better understood by study of the detailed description of an embodiment taken by way of nonlimitative example and illustrated by the attached drawings, in which: [0033]FIG. 1 is a diagram of stages of a process according to one embodiment of the invention; [0034]FIG. 2 is a diagram of stages of a process according to another embodiment of the invention; [0035]FIG. 3 is a detailed diagram of stage 2 of the previous figures; [0036]FIG. 4 is a detailed diagram of stage 3 of FIGS. 1 and 2; [0037]FIG. 5 is a view in perspective of the spheres used for projection of a first image, according to an embodiment of the invention; [0038]FIG. 6 is a view in perspective of the spheres used for projection of a second image, according to an embodiment of the invention; [0039]FIG. 7 is a schematic view of a type of projection used; and [0040]FIG. 8 is a plane representation of the truncated crown. [0041] Three-dimensional reconstructions of blood vessels called “3DXA” have been used recently from rotational angiography sequences made by rapid rotation of the X-ray tube and of the camera over half a turn and the taking of about fifty DSA images, which are the projections on input of a tomography algorithm producing the 3DXA image on output. For more information on this technique, reference is made to Launay, “Localization and 3D reconstruction from stereotaxic angiograms,” doctoral thesis, National Polytechnic Institute of Lorraine, Nancy, France, 1996. [0042] “DSA image” means here the image of maximum opacification up to row N in the acquired sequence, that is, each pixel of the resultant image takes the smallest value encountered on the N first images of the sequence, or the image of row N in the acquired sequence. Row N of the image is either chosen by the user or fixed in relation to the rate of acquisition. [0043] These reconstructions make possible a very good appreciation of angioarchitecture. Furthermore, those three-dimensional images can be used in real time according to several types of visualization, such as maximum intensity projection, isosurface, volume melting, virtual endoscopy or even reformatted cross-section, and are a further assist to the diagnoses of practitioners. [0044] The invention makes possible a matching of 3DXA images and 3DMR images. [0045] The radiology machine, once calibrated, supplies an initial registration which differs from the registration sought of perfect registration by a rigid transformation (rotation+translation) in three-dimensional space. [0046] As can be seen in FIG. 1, registration of the three-dimensional images begins with a stage 1 of choice of a point of correspondence between a three-dimensional digital image composed of a matrix of voxels and obtained by means of an X-ray apparatus and of a three-dimensional digital image also composed of a matrix of voxels and obtained by means of a nuclear magnetic resonance imaging (MRI) machine. To date, the choice of point of correspondence has been made by an operator. However, automation of that task could be envisaged. [0047] The operator is going to choose a point which seems to be seen with precision on each of the two images. In angiography, the first image of 3DXA type makes the blood vessels stand out. The second image of 3DMR type makes both the blood vessels and the other neighboring tissues stand out. The operator will therefore choose as point of correspondence a point of a blood vessel which is then precisely visible on both images at the same time. It can be estimated that the precision of choice of the point of correspondence is in the order of 1 to 2 mm. The operator may make that choice by displacement of a cursor on one after the other of the three-dimensional images displayed on the screen by means of a mouse, a ball, a keyboard or any other suitable means of controlling a cursor. [0048] That stage 1 being completed, the operator launches the automatic registration proper, which begins with a stage 2 of estimate of a rotation defined by three angles of rotation on three axes of a three-dimensional mark, the origin of which is the point of correspondence chosen in stage 1. At the end of stage 2, three angles noted θ, ρ and φ are then known, making possible an angular registration between the two three-dimensional images. [0049] One then goes on to a stage 3 in which the registration defined by the three angles θ, ρ and φ are applied to one of the two three-dimensional images. Registration can be carried out on the 3DXA image as well as on the 3DMR image. [0050] In stage 4, an estimate is made of a translation capable of registering the three-dimensional images previously registered angularly in relation to one another. The translation is defined by three coordinates X, Y and Z on each of the three axes of a three-dimensional mark, the origin of which is the point of correspondence. The mark is advantageously the same as that used in stage 2 for [0051] In stage 5, one applies the registration in translation defined by the coordinates (X, Y, Z) to one of the two three-dimensional images. Two mutually registered images are thus obtained, which therefore present an improved correspondence between their voxels and make possible a better assessment of the patient's anatomy and, in the case of angiography, of the position of the blood vessels relative to the adjoining tissues. [0052] In the embodiment illustrated in FIG. 2, process stages 6 to 9 have been added to stages 1 to 5 described above. In fact, the estimate of displacement, on rotation as well as on translation, is made by comparison of the correlation calculated from two-dimensional images and emanating from the 3DXA and 3DMR three-dimensional images and from the correlation between the same two-dimensional images but shifted, either on rotation in stage 2 or on translation in stage 4, by a slight displacement, for example, of 4 pixels on translation and by choice of the displacement conferring the maximum correlation, knowing that from the displacement between the two-dimensional images it is possible to calculate a displacement on rotation as well as on translation between the 3DXA and 3DMR three-dimensional images. [0053] It is therefore particularly important to repeat the stages 2 to 5 illustrated in FIG. 1, with a recommended lesser displacement so as to increase the precision of registration. Thus, stages 6 to 9 illustrated in FIG. 2 are identical to stages 2 to 5, with the exception that the rotation in stage 6 and translation in stage 8 are estimated with a greater precision, for example, twice as [0054] Depending on the desired precision of registration, the block of four stages can be reiterated once more with an ever greater precision, until obtaining a subpixelic registration on the two-dimensional images originating from the 3DXA and 3DMR three-dimensional images. [0055] In FIG. 3, the substages carried out in stage 2 are illustrated. Stage 2 begins with a substage 10 of selection of certain voxels of the 3DXA image and 3DMR image included within a volume whose definition is identical for the 3DXA and 3DMR images. This volume is delimited by an outer surface referenced 19 in FIG. 5 and by an inner surface referenced 20. The outer surface 19 encompasses the inner surface 20, both surfaces 19 and 20 being capable of possessing common parts. The point of correspondence 21 lies within the volume and can also lie within surfaces 19 and 20. It is understood here that the volume embraces the points of surfaces 19 and 20 defining it. [0056] Let us suppose that we know a pair of homologous points in both modalities: P[r ]in the 3DXA modality and P[f ]in the 3DMR modality. If these points are known with extreme precision, the translation is entirely determinate (T=P[r]−P[f]) and only the rotation R remains unknown. [0057] Now, any sphere centered on the fixed point is rotation-invariant. Any set of points situated between two spheres centered on the fixed point is likewise rotation-invariant. On the other hand, the position of the points situated close to the center of rotation is not very sensitive to the amplitude of rotation. The idea is therefore to consider only the points situated between a minimum distance R[min ]and a maximum distance R[max ]from the fixed point. The minimum distance defining the inner surface 20 ensures that the points considered will produce significant information for characterizing the rotation. The maximum distance defining the outer surface 19 limits the set of points inside the brain-pan. The set of voxels between surfaces 19 and 20 is called crown. [0058] For example, in FIG. 5 it can be seen that surfaces 19 and 20 comprise a hemispheric part of different radii and a circular plane closing the half-sphere which is centered on the point of correspondence 21. [0059] In a preferred variant to be explained later with reference to FIG. 7, surfaces 19 and 20 comprise a spherical part greater than half of a sphere and a truncated cone-shaped part, the vertex of the cone being merged with the point of correspondence 21, the same as the center of the sphere, the intersection of the cone and sphere defining a small circle of the sphere and the base of the truncated cone. The cone is common for both surfaces 19 and 20. [0060] In fact, the sinuses, situated behind the nose, show a gain of contrast on a gadolinium-injected MRI examination. A large hypersignal zone is therefore present in the MRI volume. This zone constitutes a pole toward which the arteries, also on hypersignal, can be attracted. This zone is eliminated automatically by removing a cone from the crown. The crown is then said to be truncated, but that does not at all challenge the principle of MIP projection. [0061] The truncated cone is extracted in each modality and then projected on the outer surface. Rotation can then be resumed by turning the surface extracted from the MRI around its center and evaluating the superposition with the extracted surface of the 3DXA. This process compares two surfaces, that is, two sets with two dimensions. We can then pass through a representation in the plane in order to accomplish it. [0062] For example, the cone can present an aperture angle of 60°. In other variants, it could be arranged for surfaces 19 and 20 to be defined by a not constant function of angles that form a given point relative to the axes of a three-dimensional mark whose origin is the point of correspondence 21. [0063] In substage 11, a maximum intensity projection is made of the voxels selected on the outer surface 19. In other words, by taking a ray having the point of correspondence 20 as origin and intersecting the outer surface 19, the intensity value taken for the voxel situated at the intersection of the ray and outer surface 19 is the intensity of the voxel, among the voxels of the truncated crown, of greater intensity situated on the ray. [0064] The number of voxels is thus automatically reduced without losing important information on the rotation. The arteries appear in both modalities in hypersignal. The rest of the voxels can be likened to noise: no correspondence exists that could be taken advantage of as to the intensity of the voxels outside the arteries. An extra stage could then be crossed toward simplification of the data by radial maximum intensity projection (MIP) toward the outer surface. [0065] The maximum distance then plays an important role. Giving it a limited value makes it possible to prevent skin areas from intersecting the crown. The skin presenting a hypersignal, that would result in creating a white spot on the surface of the outer sphere after the MIP projection, a spot which would not have its equivalent in 3DXA. A portion of the arteries would be embedded in that spot and, consequently, a part of the pertinent information for registration would be lost. [0066] Likewise, choosing a value of too little minimum distance would give inordinate space for the vessels close to point 21. At the limit, it is sufficient to imagine what would be given by the MIP projection of a crown whose R[min ]was fixed at 1 voxel, if the center point is situated inside an artery: a hypersignal value is found in all directions. The image projected on the outer surface is therefore uniformly white and thus unusable. [0067] In substage 12, the voxels calculated in stage 11 are projected on a plane in a three-dimensional mark, the origin of which is the point of correspondence 20 (see also FIG. 8). [0068] The surface is therefore described by two angles: θ which varies from−π to +π on axis [0y); φ which varies from 0 (alignment with axis [0y) in the negative direction) to φ[max], so that π−φ [max ]is the angle at the vertex of the cone cut off from the surface. [0069] The plane representation can be obtained by using those two variables as polar coordinates in the image plane: θ is the polar angle and φ is the modulus. If we took the entire crown ((φ[max]= 180 degrees), the sinuses would appear as a white band on the periphery. This band is of no use and is potentially disturbing, lying in the same range of intensity as the arteries. Finally, the truncation reduces the size of the crown, while preserving the useful voxels, improving the processing time accordingly. [0070] On a point P=(x, y, z) situated on the outer surface, we can roughly go back to the case of the unit sphere by standardizing P. The cylindrical angles θ and φ are linked to the Cartesian coordinates by the formulas: ${ x = sin ϕ cos θ y = - cos ϕ z = sin ϕsin θ ( 1 )$ [0071] The angle φ being in the interval [0, π], the preceding formulas can be inverted without difficulty: ${ ϕ = arccos - y cos θ = x sin ϕ sin θ = z sin ϕ ( 2 )$ [0072] This formula forces φ to be positive. [0073] Passage into the image plane is then easy. Let us note (u, v) the coordinates in pixels of the point corresponding to P (u−v=0 in the upper left corner of the image). Let us consider that the image is of N×N pixels size; the formula of passage of the sphere to the image is then: ${ u = N 2 ( 1 + ϕ ϕ max cos θ ) v = N 2 ( 1 - ϕ ϕ max sin θ ) ( 3 )$ [0074] In case φ is nil (x=z=0 and y=−1), the cosines and sines of θ are indefinite, but their product by φ=0 in the preceding formula places (u, v) in the center of the image. [0075] Let us now consider a point of the image (u, v). It will have a corresponding point on the sphere if and only if it is within the disk forming the trace: [0076] ∃P on the truncated sphere →← $∃ F on the truncated sphere ⇔ { ρ 2 = ( 2 u N - 1 ) 2 + ( 2 v N - 1 ) 2 } < 1 ( 4 )$ [0077] where ρ is positive. [0078] If this condition is fulfilled, we can extract the angles describing the sphere: ${ ϕ = ρ ϕ max cos θ = 1 ρ ( 2 u N - 1 ) sin θ = 1 ρ ( 2 v N - 1 ) ( 5 )$ [0079] The coordinates of point P of the external sphere corresponding to the pixel (u, v) are easily found. It is to be noted that if ρ is nil, then so is φ as well as its sine. The cosine and the sine of θ are not defined, but that does not at all matter, for formula (1) gives x=z=0. [0080] Let a rotation, given by a 3×3 R matrix, now be applied to the truncated cone. For any point (u, v) satisfying condition (4), a corresponding point P on the sphere can be found by application of the inverse formulas (5) and then (1). That point P is transformed into P′ by rotation R (P′=R(P−P[f])+P[f]). The point P′ also belongs to the sphere, the latter being wholly invariant on rotation. Application of the direct formulas (2) and then (3) gives the location of the pixel (u′, v′) corresponding to point P′. The relation between (u, v) and (u′, v′) is thus determined. All the formulas used were bijective. The effect of a rotation is therefore a bijection of the plane of the image in itself. Thus work proceeds in a space with two dimensions instead of three. The edge effects due to the points entering (P′) and leaving (P) the conical zone can be detected according to condition (4) and be roughly treated. [0081] The projection can also be made by considering that the different voxels are projected, for a given plane, parallel to one another and parallel to a straight line perpendicular to the plane. However, this projection technique limits the choice of surfaces 18 and 19 to a half-sphere or, in general, to a surface delimited by a plane comprising the point of correspondence 20 and parallel to the plane on which it is projected. [0082] A different projection technique may be used, corresponding somehow to developing the surface 18 on the plane on which it is projected. A simple example of such a method is that a voxel of spherical coordinates (ρ, θ, φ) will have for Cartesian coordinates (θ, φ) in the plane on which it is projected. In other words, a Cartesian coordinate of a projected pixel is a linear function of the end of substage 12, one thus obtains a two-dimensional image secured by projection in the three-dimensional mark, for each of the 3DXA and 3DMR three-dimensional images of origin. [0083] This representation seems well suited to the problem: to represent θ and φ no longer as polar coordinates, but as Cartesian coordinates (the axis of the growing θs following the lines of the image and the axis of the growing φs following the columns). The formulas of transformation into the image plane are simple (translations, except for the rotation on axis [0y) which is somewhat more complex. However, a certain continuity is lacking for this type of representation: the pixels situated on the vertical edges of the images would pass from one side to the other under the effect of a rotation following θ. But the most serious problem is that this representation distorts the vascular structures, depending on their orientation. This representation proves to be a solution for accelerating the time of calculation of the algorithm, but the representation that we can by comparison describe as “polar” is prefer ed to the representation we shall then call “Cartesian,” because it hardly distorts the structures and produces images which can easily be interpreted: everything happens as if the observer were situated in the center of the spheres and possessed a view over 2φ [max ]degrees. [0084] In substage 13, the correlation between both two-dimensional images projected with zero angular displacement is calculated; in other words, the correlation between the two-dimensional image coming from the 3DXA image and the two-dimensional image coming from the 3DMR image is calculated. A cross-correlation obtained by multiplying the intensity values of the pixels of both two-dimensional images can be used for that purpose. [0085] A correlation value for the same two-dimensional images is then calculated in the same way, but displaced from one another by a certain angle along one of the axes of the three-dimensional mark. This same calculation is repeated for a negative displacement of the same angle along the same axis. The same calculations are then made for displacements along the two other axes. The seven correlation values obtained for a two-dimensional image are compared and the displacement affording the greatest correlation between both two-dimensional images is taken, one originating from the 3DXA image and the other from the 3DMR image. [0086] Three potentially disturbing problems can be observed for comparison of the images: [0087] the bottom of the 3DMR image is not uniform, with, in particular, wide hypersignal zones due to the large arteries and to the sinuses (lower part of the image) and light spots here and there in the image; [0088] the diameter of the arteries is not exactly the same; problems of resolution in the MRI volume appear; [0089] not all the arteries visible in 3DXA are in the 3DMR image and, above all, vessels (notably veins) are added to the major arteries in the 3DMR image. To enhance the vessels, a simple morphological operator called “Top-Hat” is used. As a reminder, it is simply a question of an opening, followed by a subtraction of the image opened on the original image. The structural element is a disk centered on the pixel of interest and whose radius is going to depend on the size of the arteries we want to retain. Positively, if we want to retain all the arteries whose diameter does not exceed d millimeters, the diameter D of the structural element will be determined by: $D = d / R min ϕ max / N ( 6 )$ [0090] where d/Rmin is the maximum angle at which a length equivalent to the width of the arteries to be retained is seen and φ[max]/N is the angular size of a pixel of the image of plane representation. This operator is applied to both images (3DXA and 3DMR). [0091] Beside the second problem, the morphological operator more strongly reveals the intensity variations present in the MRI: instability of intensity along an artery and weaker intensity for the smaller caliber arteries. We employ a criterion of standardized and centered intercorrelation, this time in relation to the local average of the images (in practice, a square neighborhood of 15 pixels per side is used). [0092] Finally, the last problem is solved by initializing the rotation parameters by a phase of exhaustive search of the maximum low resolution criterion, which makes it possible to avoid the local [0093] Other standard similarity criteria can clearly be envisaged in place of correlation: mutual information or correlation ratio, for example. [0094] In substage 14, from the different displacement values taken in substage 13 to supply the strongest correlation between the two-dimensional images, one calculates the angular displacement between the 3DXA and 3DMR three-dimensional images corresponding to the displacement obtained at the end of substage 13. [0095] Let us now assume that we know the rotation perfectly, but that the pair of homologous points (P[r], P[f]) is known approximately. We still have an estimate of the translation (T=P[r ]−P[f]), but the latter is now approximate and needs to be refined. [0096] There is no need to know the entire volume to determine a translation. A correctly chosen subvolume suffices, the effect of the translation being the same at any point of the volume (in contrast to rotation, the amplitude of transformation does not depend on the zone observed). Let us therefore require those paired points to be in the neighborhood of an arterial structure not presenting any invariance by translation; in other words, the structure must open up in three dimensions: a typical example being a bifurcation in which the three arteries take very different directions. The translation can then be estimated by bringing those point opposite the neighborhoods. [0097] In FIG. 4, substages 15 to 18 of stage 4 are illustrated. [0098] In substage 15, one selects the voxels of each 3DXA, 3DMR three-dimensional image included within a parallelepiped, for example, a cube of given dimensions, centered on the point of correspondence 21. A cube of 16 mm per side can be chosen, which corresponds approximately to the size of 64 voxels in 3DXA. [0099] In substage 16, three projections are made, each along an axis of the same three-dimensional mark. Each face of the cube ma advantageously be perpendicular to an axis of the three-dimensional mark. Each projection is made by retaining, for a given line fo voxels, the intensity value of the voxel presenting the maximum intensity. Thus, if the voxel of Cartesian coordinates x[1], y[1], z[1 ]presents the strongest intensity of the whole parallelepiped selected in stage 15, that same intensity value will be encountered in the two-dimensional images projected with the following [0100] x[1], y[1 ]for the image perpendicular to axis Z and the voxels of which were projected along the axis Z, (x[1], z[1]) in the two-dimensional image obtained by projection along axis Y, and (y [1], z[1]) in the two-dimensional image obtained by projection along axis X. [0101] At the end of substage 16, for each 3DXA and 3DMR three-dimensional image, three 2DXAP and 2DMRP projected two-dimensional images are obtained. [0102] In substage 17, the correlation is calculated between both 2DXAP and 2DMRP images projected along axis X with zero displacement. That same calculation is then made for the same images displaced by a given number of pixels, for example +4, and then the same calculation is made for the same images negatively displaced by the same given number of pixels, for example, −4. The same calculations are made with the 2DXAP and 2DMRP images projected along axis Y, and then those projected along axis Z. [0103] The images of each pair of homologous images (axial, sagittal and coronal) are compared by the criterion of standardized centered correlation. The size of the subcube being reduced, those images all present an almost uniform background, even in MRI. Centering is therefore done in relation to the average pixel situated in the zone common to both images, Standardization makes it possible to reinforce the strength for a small-sized common zone. [0104] A low correlation criterion resulting from the average of three scores is used: a high score in at least two images leads to a high criterion. [0105] Considering the low resolution of the MRI in relation to the 3DXA and, therefore, the high level of interpolation present in the MIP images of the subcube extracted from the MRI, as well as the possibility of artifacts (due, for example, to the flux) in the MRI, that low criterion makes it possible not to be penalized by these inaccuracies. [0106] In substage 18, the average correlation obtained in substage 17 is calculated for each displacement, namely, zero displacement, positive on the axis of the Xs, negative on the axis of the Xs, positive on the axis of the Ys, negative on the axis of the Ys, positive on the axis of the Zs and negative on the axis of the Zs. The displacement presenting the maximum average correlation between the 2DXAP and 2DMRP images is then retained. [0107] One then passes to stage 5 of registration by means of the displacement calculated in Cartesian coordinates in substage 18. Limiting the voxels to those contained in a parallelepiped of given dimensions makes it possible to reduce the necessary calculation times markedly. [0108] Preferably, one and the same three-dimensional mark is used for all the stages and substages. [0109]FIG. 5 represents in perspective an example of a 3DXA type three-dimensional image with a network of arteries 25 which are clearly visible due to the injection of a contrast medium. The voxels included between surfaces 19 and 20 are taken from that image, the spherical part of the outer surface 19 presenting a radius of 4 cm and the spherical part of the inner surface 20 presenting a radius of 2 cm. Those radius values are, of course, indicative. They are adapted to a three-dimensional image of the human brain, the large radius of 4 cm avoiding having in the image the structures outside the brain, notably the skin, and the small radius of 2 cm avoiding taking vessels too close to the center of the spheres and which would become too prominent on projection. In the optimization algorithm chosen, it is assumed that rotation and then translation in turn are fully known. When we estimate rotation, it is therefore assumed that the translation error is slight in relation to the rotation error. It can be estimated that this problem of equilibrium between the two errors is encountered especially on initialization, the final pseudo-exhaustive optimization data being slight enough to ignore this problem subsequently. It is therefore assumed that the worst translation error there might be is encountered on manual initialization of points P[r ]and P[f]. We have estimated it equivalent to the MRI resolution, i.e., 1 mm. The angle at which such an error is seen from the R[min ]distance is 1/R[min ]radians. The initialization of rotation is made with N= 64. The angular size of a pixel of the image of the truncated crown is therefore: 2φ[max]/N. It is then necessary to compare R[min ]to N/2φ[max]=12.22 mm, where R[min]=2 cm. [0110] The two spheres are centered on the point of correspondence 21 determined before introduction of the registration method. [0111]FIG. 6 represents a 3DMR type three-dimensional image on which the point of correspondence 21 and the outer surface 19 and inner surface 20 can also be seen. Thus, the same voxels are retained for registration treatment in the 3DXA image and 3DMR images for just edge effects. [0112] In FIG. 7, the projection which can be used on the rotation estimate stage is schematically illustrated. The voxels included between the outer surface 19 and inner surface 20 are projected first on the outer surface 19. For example, the ray 22 is represented with an angle φ[1 ]in relation to the axis of projection 23. All the voxels of coordinate φ[1 ]are projected on the outer surface 19 by projection of maximum intensity. The strongest intensity value of the voxels of coordinates φ[1 ]is therefore retained. [0113] The projected voxel of coordinate φ[1 ]is therefore situated at the intersection of the ray 22 and outer surface 19 and is projected in rotation on the plane 24 on which a two-dimensional image is formed. The projection makes it possible to develop the spherical part of the outer surface 19. In other words, the cartesian coordinate x[1 ]of the projected voxel corresponding to the voxel of spherical coordinate φ[1 ]will be such that x[1 ]is proportional to φ[1], in contrast to the standard projection where x[1 ]will be proportional to a sinusoidal function of φ[1]. [0114] For translation, the choice of a first proposed displacement of four pixels and then of a reiteration of the stages of estimate of rotation and translation with a proposed displacement of two pixels and then one pixel results from the precision of choice of the point of correspondence in the 3DXA and 3DMR images, which is estimated at between 1 and 2 mm, and of the resolution of the 3DXA image, which is in the order of 0.3 mm. Thus, the finest pixel of a projected two-dimensional image has a size in the order of 0.3 mm. Starting the registration with four pixels, i.e., a displacement of ±1.2 mm, then two pixels, i.e., ±0.6 mm, and then one pixel, i.e., ±0.3 mm, one manages at the maximum to make up for a translation error of 2.1 mm, which is widely sufficient compared to the reasonable error that an operator can commit on choice of the point of correspondence. [0115] In FIG. 8, a preferred type of projection called “polar” is illustrated. The angle φ in spherical coordinates, which lies between −φ[max ]and +φ[max], determines the modulus in polar [0116] The invention provides a method of registration of two three-dimensional images, which is rapid and inexpensive, inasmuch as only two-dimensional images are processed, particularly on correlation calculations. Three-dimensional radiology and magnetic resonance images can thus more easily be used on surgical interventions, which improves the safety and effectiveness of such [0117] Various modifications in structure and/or steps and/or function may be made by one skilled in the art without departing from the scope of the invention.
{"url":"http://www.google.com/patents/US20010036303?ie=ISO-8859-1&dq=7,752,326","timestamp":"2014-04-16T09:45:25Z","content_type":null,"content_length":"113334","record_id":"<urn:uuid:28fea336-efe6-4c12-9c15-cfa39133b8a6>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Department of Mathematical Sciences Math on the Plain by Colonel David ArneyThere are numerous perspectives one can take to view the beauty of West Point. Some of the obvious things to consider as you walk around the West Point grounds (the cadet area and large flat open area called "the plain") are the rich military history of this fortification, the massive Gothic architecture of the fortress and buildings, the unique geology and geography of the river and the rock, the history of the people (cadets and faculty) who have passed through the gates of the Academy, nature's blessing of flora and fawn in the heart of the Hudson Highlands, and especially the great impact this place has had on our nation and the world. One less obvious perspective is to view and taste some of the beauty in West Point's role and contributions in mathematics and mathematics education. You have to look carefully, but there is plenty of beauty in a mathematical tour of West Point. Please join me for a short walking tour of West Point with this mathematics perspective in mind.We start at the library corner where we plainly see a 12-foot tall bronze statue of George S. Patton (USMA 1909), dressed for combat with pistols on his belt, holding binoculars, looking as if he is ready to lead an armored attack across the plains of Europe. However, in this location, he is intently watching the entrance to the West Point Library. If Patton could use his binoculars to peer through the wall of the 4th floor of the library, he would see a bust of one of his World War II colleagues, Omar Bradley (USMA 1915). When it comes to appreciating, understanding, and using mathematics, these two colleagues could not be more different.Bradley was more than just a mathematics educator (he taught mathematics at USMA from 1919 to 1923). He also showed he was a real mathematician by using and developing mathematical ideas in performing his military duties as he rose to the rank of General of the Army (5 stars). It has been reported that Bradley was a superb teacher; so good that he was extended a 4th year in the Mathematics Department to help develop his fellow faculty members as well as teach cadets. Just as impressive were his talents in using mathematics to solve problems. After leading our Allies' efforts in Europe during World War II, he reflected that he often made his operational decisions by thinking of them in terms of constrained optimization problems and utilizing many of today's foundations of operations research. Who knows, in the process of his mathematical thinking, he could have been one of the first to think about the mathematical technique of linear programming.On the other hand, Patton was a weaker mathematics student. He struggled so much in all his subjects that it took him 5 years to complete the 4-year academy curriculum. However, the mere fact that he was given the extra time meant that Academy officials saw something special in Patton that made the investment in this young leader worth pursuing. If Patton had an academic strength, it was in history. He was quoted as saying, "to be a successful soldier, you must know history."It should be noted that both Patton and Bradley studied similar mathematics programs at USMA. The mathematics studies were quite intense and supported the required general engineering curriculum in place at the Academy. Bradley and Patton went to mathematics class 6 days per week for 80-85 minutes per day for the first 2 years for a total of about 612 in-class hours. This could be equivalent in today's semester system to about an amazing 45 credit hours of mathematics. Their courses included algebra, trigonometry, geometry, descriptive geometry, differential and integral calculus, differential equations, and linear perspective. So let's give Patton some credit for his mathematics talents. He survived a rigorous program that surely left him with powerful thinking and problem-solving skills to clearly and logically solve tough quantitative problems faced in his military career. Given the contributions of these two diverse personalities, it was fortunate that the USMA education system found room for both of them to succeed.Today the Academy tries to develop the future Bradleys and Pattons. The formula has not changed much. All cadets take 16 credit hours of core mathematics while engineering and science students take even more. Special cadets with great interest in mathematics, as Bradley had, can major in mathematics and take 15 or more courses in undergraduate mathematics. At West Point mathematics, science, engineering, humanities, and social sciences are for everyone.As we move west on the street, we approach the clock tower on the north end of Pershing Barracks. This building is named for John J. Pershing (USMA 1886) who as General of the Army (5 stars) commanded the American Armies in World War I. This clock tower plays a big role in a story told about Douglas MacArthur (USMA 1903). It seems that MacArthur was a multi-talented cadet. He was so outstanding in academics that he finished first in his class with an all-time high academic average and was the top cadet in each of his mathematics courses. He must have been an outstanding leader and engineer as well. During his first year at the Academy, he led a group of fellow plebes on a spirit mission. A spirit mission is best described as an activity on the boundary of regulatory acceptance that demonstrates teamwork, creativity, and, thereby, the spirit of the cadets involved and instills spirit in the rest of the cadets. Another opinion that is sometimes held by the Academy leadership is that spirit missions often end up being misguided pranks. At West Point, spirit missions are more frequent during the week of the Army-Navy football game. This may be the setting for MacArthur's caper as well. MacArthur and his team sneaked out of the barracks late at night, carried a heavy and bulky revelry cannon across "the plain", and placed the cannon of the top floor of the clock tower, at least 60-feet high. The story continues that it took the Academy's engineers a week to remove the cannon from the clock tower and return it to its proper place on "the plain". MacArthur must have been a mathematics instructor's dream student. He was bright, creative, studious, and motivated. When he missed answering a question correctly, it must have been a bad question. Of course, MacArthur had another side which was revealed in his fiercely competitive nature. It was said that he and classmate Ulysses S. Grant III (USMA 1903), who was 6th in the class and grandson of the former president, competed in all aspects of the cadet life.As we stand on the road in front of Pershing Barracks and look south along Thayer Road, we see Bartlett Hall, Mahan Hall, and Grant Hall. These buildings are named for William Bartlett (USMA 1826), Dennis Mahan (USMA 1824), and Ulysses S. Grant (USMA 1843). While Bartlett was a physicist and Mahan was engineer, both wrote mathematics books. Bartlett's book was entitled Analytic Mechanics, which essentially contained the methods of calculus and differential equations used to solve application problems from mechanical engineering. Mahan wrote a book entitled Descriptive Geometry, as Applied to the Drawing of Fortification and Stereotomy, which covers the geometric bridge between mathematics and engineering. This course is basically a mathematical treatment of engineering drawing. The third honoree mentioned, Grant, never wrote a mathematics book, but it may have been one of his dreams. Grant tried several times to join the USMA Mathematics Department, but was turned down. He had finished 21st in his graduating class of 39, doing best in mathematics where he stated "mathematics was easy for me" as he finished 10th out of 53 cadets studying mathematics. His mathematics classes lasted all morning, 6 days per week, for the first two years, covering algebra, geometry, trigonometry, descriptive geometry, surveying, analytic geometry, and calculus. All of Grant's mathematics books were written by Charles Davies (USMA 1815), who, as department head from 1823-1837, was a prolific writer and national leader in mathematics education from primary school through college. Grant's mathematics professor was Albert Church (USMA 1828), who was also a textbook writer and department head for 41 years, 1837-1878. In response to Grant's requests to return to West Point and teach, Church told him to remain with the field Army because he just did not have the academic credentials to teach mathematics. Even Grant's persistent letter writing campaign for the teaching job did not overcome his mediocre undergraduate mathematics performance. However, it was sufficient to be elected President of the United States.We now enter the cadet area by walking by the north end of Pershing Barracks toward the 1st division of the old cadet barracks. The 1st division is all that remains of the old central barracks. This horseshoe-shaped barracks was built in 18880. Traditionally, 1st division housed the Brigade First Captain (overall cadet in charge). First Captains of some mathematical note are MacArthur, Pershing, John Barnard (USMA 1833), and Hans Pung (USMA 1995). Barnard was an accomplished scientist, prolific author, and fortification engineer, who among other mathematics and scientific endeavors helped establish the National Academy of Science. Pung was a mathematics major and a Marshall Scholarship winner, which provided him a 3-year graduate fellowship to study mathematics at Oxford.Directly behind First Division is a 15-foot tall bronze statue of a French soldier defending Paris in 1815 with his sword held high, holding a flag, standing next to a cannon. This statue was a gift from the Ecole Polytechnique in Paris after World War I and is a replica of a statue at the Ecole. The relationship between Ecole Polytechnique and West Point began a century earlier, when Sylvanus Thayer (USMA 1807), an assistant professor of mathematics, visited European military schools to learn about the French and English engineering curricula, obtain some of their best undergraduate books, and see how they taught their cadets. On the faculty of the famous French Military school were several mathematicians of note. So impressive was this group that it may have been the most illustrious mathematics faculty ever assembled. Among them were Gaspard Monge, Joseph LaGrange, Augustine Cauchy, Theodore Olivier, Marquis de LaPlace, and many other famous French mathematicians.Thayer never saw the faculty of the Ecole in action, since Napoleon's defeat at Waterloo resulted in a temporary closing of the school. Napoleon had helped establish the Ecole and several other military technical schools and recognized the benefit of a rigorous program in mathematics and engineering. Napoleon was an accomplished mathematician and always kept plenty of mathematics talent, such as some of those just mentioned from the faculty of the Ecole, on his military staff. This closing of the Ecole gave Thayer plenty of time to meet with the faculty to discuss the engineering, science, and mathematics curricula and pedagogy. He discovered how to incorporate the Ecole's philosophy that rigorous mathematics was the key foundation to a successful military engineering and science program. He recognized the need for a comprehensive mathematics program to bridge the gap between the United States secondary school mathematics and the study of engineering. There in Paris, the seeds were planted for the new mathematics and science program at USMA and the principles developed for the Thayer method of teaching. While Thayer was in Europe he bought approximately 1000 volumes of the best books he could find for an undergraduate library. Thayer's books are now beautifully bound and displayed in the West Point Room of the library, around the corner from Bradley's bust. Possibly because he was a mathematician and saw the need for a demanding mathematics program, Thayer's original collection and the additional volumes he bought after he became Superintendent of the Academy were rich in mathematical works in terms of quantity and quality. Today the West Point Library holds one of the finest collections in the United States of pre-20th century mathematics textbooks, references and treatises. However, equally impressive is the intensive mathematics program that remains today with roots firmly planted in the rigorous French school guided by the best mathematicians of the 18th and 19th centuries and Thayer's brilliant insight in adapting the program to America.We move north about 200 feet through a sallyport in the new cadet barracks that takes us to the edge of the plain. At the corner of this wing of the barracks stands an impressive statue of Dwight Eisenhower (USMA 1915), Bradley's classmate and Bradley's and Patton's commander during World War II. Given the diverse mathematics talents of Bradley and Patton, it is not surprising that Eisenhower's mathematics record lies between the two. Eisenhower had developed a very analytic mind from in depth study in mathematics in high school and at West Point. MacArthur used Eisenhower as an analyst on the Army staff, working on quantitative issues like resource allocation, mobilization plans, and the impact of technology on air power, mechanization, and the industrial base of the military. Eisenhower had become a successful operations research analyst before such a profession even existed. Another of Eisenhower's great talents was to find the right person to get a specific job accomplished--for example: Bradley to plan operations, Patton to execute them. When faced with finding officers to perform war planning, design the postwar Army composition, run our nation's and Army's scientific intelligence program, and set up and organize the Central Intelligence Agency, he called on USMA mathematics professors William Bessell (USMA 1920) and Charles Nicholas (USMA 1925). Both men used their analytic skills to help General Eisenhower (another 5 star) and then returned to mathematics teaching at West Point. Bessell headed the mathematics department 1947-1959 and served as Dean from 1959-1965. Nicholas succeeded Bessell as department head and is famous for authoring a comprehensive, integrated mathematics program (15(?) volumes), affectionately referred to by cadets as "the green death". Like Grant, Eisenhower's mathematics talents did not earn him a teaching position on the West Point faculty, were sufficient for election to the Office of the President of the United States.We move down the cement apron to the center section of the fortress called Washington Hall. This building holds the cadet mess hall where all 4000 cadets can sit and simultaneously eat their meals. In front of the building stands an impressive 30-feet tall statue of George Washington riding a horse. No; Washington was not a graduate of the Academy, but he was no stranger to West Point. His Revolutionary War headquarters was located here so that he could defend the key terrain linking the northern and southern colonies and prevent the British forces from using the Hudson River by blocking it with a great chain that extended across the river at West Point. After the war, President Washington proposed a military academy for our new nation, conveniently located at West Point where some basic military training had continued after the war and troops were garrisoned. Washington felt the country's fledging Army needed professional, competent leaders educated at a military academy. The chief antagonist of the Washington plan was Thomas Jefferson. Jefferson, much like Napoleon, was an accomplished mathematician who appreciated its value. Jefferson recognized the need for more schools of higher education in America, but was afraid a national military academy would produce and benefit an elite class of Army officers. Jefferson's plan was for a national technical school to help educate the nation's common people, who after graduation would build our country's infrastructure. Neither plan was implemented until Jefferson, as President of the United States, agreed that one school could accomplish both goals--educate both professional officers for the Army and competent engineers for our growing nation. In 1802, Congress passed and Jefferson signed the law establishing the United States Military Academy at West Point. The Academy has been producing graduates to meet those needs envisioned by Washington and Jefferson ever since.As we continue north around the cement apron, we find another statue, nearly symmetric with the building to the Eisenhower statue we just visited. By the way, such symmetry produces beauty in the eye of a mathematician. We see that this monument commemorates Douglas MacArthur. Since we already discussed MacArthur as a cadet, let's now focus on his exploits as an officer. MacArthur influenced the mathematics program at two different times in his career. As Superintendent of the Academy from 1919 to 1922, he instituted a reform of the entire curriculum that integrated courses, utilized technology, and demanded more work by cadets outside class. The statistics of the reform show 10% fewer mathematics classes and class time reduced from 85 to 75 minutes per class. However, the quality of the mathematics program increased under guidance of department head Charles Echols (USMA 1891) and implementation of the new courses by Omar Bradley and his fellow instructors. Key changes in the modernization were emphasis of the use of the slide rule, new textbooks, increased time for and demands on cadet study, and integration of the topics and applications of mathematics and engineering science. Under MacArthur's reform the mathematics program had been modernized while maintaining its original philosophy of rigor and magnitude.MacArthur left the academy and eventually became the Chief of Staff of the Army from 1930 to 1935. During his tenure in that position, the academy faced a challenge to the quality of its academic program from the President of Harvard. MacArthur, Superintendent William Connor (USMA 1897), department head Harris Jones (USMA 1917), and a mathematics team of 10 yearling "mathletes" saw to it that the challenge was met. In May 1933 Army defeated Harvard in a mathematics competition which was widely reported like an athletic event. MacArthur personally awarded medals, gifts, and privileges to the victorious team for successfully defending his alma mater and reformed curriculum.Across from MacArthur, we see a beautiful fenced-in garden. At the corner on the wall of the fence is a spot called Constitution Corner. This messages given here are the role of the Army soldier and the purpose of the Academy. West Point teaches cadets how to "support and defend the constitution" through its propose of "producing leaders of character who serve in the common defense." Providing shade for this corner are two trees, a cherry and a peach tree. These trees are dedicated to the Army football teams that won the 1985 Peach Bowl and 1984 Cherry Bowl. It's not totally by coincidence that many of the names that I have mentioned already were Army football players: Patton, Bradley, Eisenhower, and Pung. Over the years Army football players have achieved success in academics, including mathematics, and military undertakings. In 1995, Eric Oliver, a mathematics major and football player, was awarded both a Rhodes Scholarship and NCAA postgraduate scholarship. Recently, we have had many other graduating cadets study mathematics in graduate program under prestigious fellowships. Hertz Fellowship winners include Richard Staats (USMA 1984, PhD MIT 1995), Andrew Fedorchek (USMA 1988, PhD student at Stanford), Thomas Tracyk (USMA 1991, Georgia Tech student), and Marcia Geiger (USMA 1992). Exchange cadet Anton Pineda (USMA 1990, MS RPI) received a similar national fellowship from his native country, the Philippines, and Ray Eason (USMA 1994) is studying mathematics at Oxford under a Marshall Scholarship. Overall USMA has had Rhodes Fellows, Marshall Fellows, and Hertz Fellows.Overlooking the beautiful gardens is the stately home of the Superintendent, Quarters 100. In deference to the power of mathematics, all buildings on post are assigned a number. Thayer lived here, although there have been same major expansions and renovations since then. The current occupant is General Howard Graves (USMA 1961), who won a Rhodes Scholarship and was an outstanding mathematics student as a cadet. Graves' course of study included about 30 credit hours of mathematics in algebra, trigonometry, analytic geometry, calculus, differential equations, and statistics. He finished at the top of his class of 534 in this mathematics coursework and was awarded the Lee Saber for his efforts. The Lee Saber, named for Robert E. Lee (USMA 1829), has been given to the top cadet in core mathematics since 1930(?). Speaking of Lee, not only was he Superintendent during 1852-1855, but he was also a mathematics instructor as a cadet. When the Academy ran short of faculty in the early 1800, upper-class cadets were used to teach elementary courses. Lee was an excellent mathematics student (4th out of 60) and a natural selection to teach the program he had just completed. Another Superintendent with special talents in mathematics was Richard Delafield (USMA 1818), who served three tours as Superintendent during the period 1838-1861. Delafield was an outstanding geometrer and produced some of the finest work possible in his descriptive geometry class. His drawing portfolio is stored next to Bradley's bust and Thayer's book collection on the 4th floor of the library. Delafield eventually used his talents to design plans for fortifications and other civil engineering projects. Alden Partridge (USMA 1806) was acting Superintendent (1814-1817) and a professor of mathematics (1806-1813). After being replaced as Superintendent by Thayer, Partridge left the Army and established numerous military and science schools, among them Norwich University. Other Superintendents of mathematical note were MacArthur, whom we have discussed, and George Cullum, who will learn about shortly.I must mention a couple cadets who processed great mathematics talent, but were not so successful as cadets and did not graduate from the Academy. Edgar Allan Poe entered the Academy in 1830, but left in 1831. What is interesting for this tour is not his great poetry (he did use some mathematical concepts and names in his poetry), but his tremendous talents as a cryptographer. Poe spent several years designing and breaking codes for a cryptology section of a couple mathematics journals. He was king of the hill when it came to code breaking. While core mathematics at West Point can not take all the credit for his success, Poe studied and did well in the rigorous mathematics program designed by Thayer and Davies and was taught personally by Davies. James Whistler's story is similar. He lasted longer, leaving West Point in his third year of study in 1854 after disciplinary problems and failing Chemistry. Whistler's mathematics program was designed and taught by Church. He used his mathematics, especially descriptive geometry, in his first job as a draftsman for the United States Geodetic and Coastal Survey. An agency filled with the best scientists and mathematicians in the country, many of whom were West Point graduates. I am sure he was a good draftsman and produced beautiful maps, but soon after he moved on to Paris and then England where he hung around with artists like Claude Monet and produced magnificent artwork.Let me cover one other side note. Davies and Church were the mathematics department heads during the formative and glory years of the department and Academy (1823-1878). Both men were prolific authors, together authoring over 40 different mathematics textbooks, and dedicated educators who made tremendous contributions to the Academy and the mathematics education community. It was said that every schoolboy in America knew about West Point because they studied from one of Davies' most popular books, The Common School Arithmetic, which had on its title page "Prepared for the Use of Academies and Common Schools in the United States, and also for the Use of the Young Gentlemen who may be Preparing to enter the Military Academy at West Point." Davies and Church were the first to establish entrance exams in mathematics, first verbal and then written, which were the predecessors of the modern SATs. Davies went on to teach at Trinity College, New York University, and Columbia.The next stop on the tour is real special. At the corner of Jefferson Road and Washington Road stands a statue of Sylvanus Thayer, "Father of the Military Academy." Since we already discussed his visit to Europe, let's learn about his days as a cadet, faculty member, and Superintendent. Thayer had already graduated from Dartmouth as valedictorian of his class when he arrived to become a cadet. It took him just 14? months of study to complete the unstructured program of the new academy and become West Point's 33rd graduate. When he was appointed Superintendent in 1817, he immediately went to work improving the curriculum he had taken and taught as an assistant mathematics professor in 1811. Some of his pedagogical reforms included sectioning cadets with homogenous ability, daily tests, competitive class rank system, interactive classrooms, and great use of the blackboards by cadets to practice and demonstrate their skills. It was said that the "most singular characteristic" of the Thayer system was its emphasis on mathematics using the Ecole Polytechnique as its model. Only a crazy mathematics professor would ever think that USMA could stand for United States Mathematics Academy, but many cadets in the middle of the 19th century, when West Point built its reputation as "the best school in the world", probably spent more time studying mathematics than performing military training. By the way, The Best School in the World is a book by James Morrison about West Point in the pre-civil war period (1833-1866), but the quotation is taken from President Andrew Jackson.During graduation week each year a special ceremony is held at this statue. West Point alumnae gather and march across the plain to assemble in front of Thayer. The oldest living graduate present at the service lays a wreath at the base of the statue while cadets and alumnae salute together and sing the alma mater. Thayer's insights and leadership not only gave West Point a great foundation, but also helped establish the foundation in the United States for mathematics, science, technology, and engineering education. Some refer to him as the "Father of American Technology."If we look to the West, we can see the Dean's house. West Point has had 11 Deans of the Academic Board since the first one was appointed in 1945. Three were former mathematics department heads: Harris Jones (USMA 1917), William Bessell (USMA 1920), and John Dick (USMA 1935).As we walk east along Washington Road, we can view the cannons (they are war trophies) that were the weapons of war in many of our country's battles. The cannons are grouped chronologically on Trophy Point by which war they were used and are arranged around the other memorabilia on the point, like the tall shaft of Battle Monument, Sheridan's statue, and links of the great chain. The big, cumbersome, inaccurate cannons of the Revolutionary War are fun to climb on today, but must have been painful to move or shoot in their days of use. Military engineers improved the cannon over the years, and the Civil War cannon are a bit smaller, but had more range and accuracy. The good work of military engineers, scientists, and mathematicians is in clear evidence as one walks from era to era seeing first hand the advances in weaponry. Possibly the ultimate and sublime in this development were the Paris guns designed, built, and used by the Germans to fire rounds 60 miles from the front lines of World War II into Paris. Guns that could be called America's equivalent of the Germany's "Big Bertha" were the shore batteries that were built to protect our country's shorelines. West Point had its own shore batteries that are still buried under an edge of "the plain" just across from Battle Monument.Certainly, the military engineers were the key technologists in the military battles of the 16th through 19th centuries. In the early years, the abilities of the civil engineers (like Delafield and Kosciuszko) to build impenetrable fortresses was of foremost importance. Later, the mechanical engineers had the key role in building cannons and armaments that were accurate and mobile. In World War I, the scientists made their impact, especially the chemists as the new world of gas warfare began. The physicists impact came with the atomic bomb during World War II. The mathematicians developed the control algorithms that had such a great impact on the "smart weapons" in the "Desert Storm War" in the Persian Gulf. What technology will play the key role in the next war? Many think information science will be crucial. Only time will tell. In any case, West Point scientists and mathematicians will be ready to contribute. In addition, we know that the soldiers, real people performing military duties, will ultimately decide the outcome of the next war, just as they have all preceding ones. That's why the focus of West Point is developing leaders of character.Before we leave Trophy Point, we can glance up the scenic river and scope out links from the great chain, but we can not linger, we need to move on to more mathematics. We do this by walking down Cullum Road to Kosciuszko's monument. High up on the 30-foot monument is a statue of Thaddeus Kosciuszko, the famous Polish-American military engineer who helped Washington design the fortifications of West Point. Kosciuszko overlooks the river at its critical point, where ships have to negotiate a sharp turn to pass up river. Kosciuszko's background of strong geometric mathematics skills and military training at the Warsaw Military Academy gave him the tools to become an expert at designing and building fortifications. That's exactly what Washington had him do for the headquarters at West Point. The fortification was so strong it could never be breached without complete knowledge of its structure. Giving that structure to the enemy British forces was what Benedict Arnold did to earn the label of "traitor". Kosciuszko went on to lead an unsuccessful rebellion against the Russians in his native land of Poland. He also became great friends with Thomas Jefferson, which just might explain why Jefferson finally agreed to the establishment of a military academy. If our academy could produce graduates like Kosciuszko, then both Washington and Jefferson would have their desires for America's first public school of undergraduate education come true.Our last stop is Cullum Hall, which is just a bit further down Cullum Road. This is the one building we will enter, only because it would be a tragedy to miss its contents. Before we enter, look across the street at Doubleday Field, named after Abner Doubleday (USMA 1842) and believed by some to be the inventor of the game of baseball while a cadet in 1839. Although that theory is very controversial, for purposes of this tour we will assume it to be true. In any case, the inventor of baseball had great intuition of geometry, trigonometry, and possibly calculus. Just think, how the game would be adversely affected if some of the distances or angles on a ball diamond were changed. A closer pitcher's mound would result in batters never hitting the ball. A longer distance between bases would make infield hits and stolen bases obsolete. The inventor of baseball was certainly a fine applied mathematician, something Abner Doubleday and his fellow graduates of USMA were educated to be.Now we let's investigate the treasures of Cullum Hall. This building is named after George Cullum (USMA 1833) and thanks to him we know he was the 709th graduate of USMA. Cullum was a prolific writer and one of his largest projects was to catalog the careers of all West Point graduates and to write and assemble their biographies. The Cullum index numbers all graduates and his biographies are a great source of historical information about graduates. Thanks to Cullum we know things like: Joseph Swift was the first graduate in 1802; Thayer was the 33rd graduate; in the first 100 years, 4121 graduated from West Point; there are over 52,000 graduates today and all have taken a rigorous, comprehensive program in mathematics. In his spare time, Cullum wrote a mathematics book entitled Problems of Descriptive Geometry.As we enter the Cullum Hall through the main door, we see hundreds of plaques honoring graduates, mostly for their contributions during war-time and for sacrificing their lives for defense of our country. The main plaque on the right lists the Academy graduates killed in action during the first 100 years of existence (1802-1902). Other plaques display West Point generals who served during various wars, losses of graduates during 20th century wars, superintendents, and professors, including all those mathematics professors we have discussed and many others. The magnificent ballroom on the second floor of Cullum is adorned with numerous plaques and paintings commemorating many of the West Pointers who served with the Union Army during the Civil War.Well, this concludes the tour. We have seen plenty while walking only a half mile or so. I hope you agree that there is plenty of beauty in West Point mathematics. So much that to commemorate all the geometry that was studied here at West Point, you will support my effort to change the name of "the plain" to its mathematically correct name "the plane".
{"url":"http://www.usma.edu/math/SitePages/Math%20on%20the%20Plain.aspx","timestamp":"2014-04-18T13:13:21Z","content_type":null,"content_length":"116123","record_id":"<urn:uuid:82f60f5d-5980-4e2a-b980-fe458333e56d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
DATA WAREHOUSING AND DATA MINING Assignments KOTTAM TUALSI REDDY MEMORIAL COLLEGE OF ENGINEERING, KONDAIR DATA WAREHOUSING AND DATA MINING Unit I 1. Explain Data mining as a step in the process of knowledge discovery. 2. Draw and explain the architecture of typical data mining systems. 3. Differentiate OLTP and OLAP. 4. What is data mining and data warehousing? Give their applications. 5. Briefly discuss the functionalities of data mining. 6. Briefly discuss about Multidimensional data model 7. Multidimensional Schema. 8. Architecture of data mining systems 9. Briefly discuss about data warehouse architecture 10. Classification of data mining systems Unit II 11. Briefly discuss the forms of data processing with neat diagram. 12. Explain about concept hierarchy generation for categorical data. 13. Explain various data reduction techniques. 14. Explain about concept hierarchy generation for numerical attributes. 15. Explain about data Integration and Transformation techniques 16. Briefly explain about discritization and Concept Hierarchy Generation for numerical and Categorical data. 17. Briefly explain about needs for preprocessing data. 18. Explain various data cleaning techniques. Unit II I 19. List and describe data mining primitives for specifying a Data Mining Task. 20. Briefly discuss about Task-relevant data specifications 21. Explain the syntax for Task-relevant data specifications. 22. Describe why concept hierarchies are useful in data mining. 23. Briefly explain about Data Mining Query Language with suitable examples 24. Explain about designing graphical user interfaces based on Data Mining query Language. Unit IV 25. What is concept description and explain about Attribute relevance analysis for data characterization 26. What are the differences between concept description in large databases and OLAP? 27. Differentiate between predictive and descriptive data mining 28. State and explain algorithm for attribute oriented induction. 29. Explain mining class comparisons using example. 30. Explain various formats for presenting derived generalized relations. 31. Explain various mining descriptive statistical measures in large databases Unit V 32. Discuss about mining frequent item sets without candidate generation. 33. What is association rule mining? Discuss about multilevel association rule mining from transactional databases in detail. 34. Write the FP-growth algorithm. Explain. 35. What is Iceberg query? Explain with example.. 36. Discuss about ARCS. 37. Explain mining multidimensional association rules from relational databases and warehouses 38. What is correlation analysis? And explain constraint based association mining Unit VI 39. How scalable is decision tree induction? Explain. 40. Describe working procedure of simple Bayesian classifier. 41. Write backpropagation algorithm and explain. 42. Discuss about nearest neighbor classifiers and case-based reasoning. 43. Can any ideas from association rule mining be applied to classification? Explain. 44. Explain about prediction and Explain Bayesian belief Networks 45. How does tree pruning work? What are some enhancements to basic decision tree induction? 46. what is classification and explain classification by Decision Tree Induction Unit VII 47. What is cluster analysis? What are the various types of data in Cluster Analysis? Explain. 48. Given two objects represented by the tuples (22, 1,42,10) and (20,0,36,8): 1. Compute the Euclidean distance between the two objects 2. Compute the Euclidean distance between the two objects 3. Compute the Euclidean distance between the two objects 49. Explain categorization of Major Clustering Methods 50. what is distance based Outlier? What are the efficient algorithms for mining distance-based algorithm? How are outliers determined in this method? 51. Given following measures for variable age: 18,22,25,42,28,43,33,35,56,28 Standardize the variable by the following i. Compute the mean absolute deviation of age. ii. Compute the Z-Score for the first four measurements. 52. Describe Model based clustering methods. 53. Suppose that the data mining task is to cluster the following eight points ( with (x,y) representing location) into three clusters. A1(2,10), A2(2,5),A3(8,4),B1(5,8),B2(7,5),B3(6,4),C1(1,2),C2(4,9). The distance function is Euclidean distance. Suppose initially we assign A1,B1 and C1 as the center of each cluster, respectively. Use the k-means algorithm to show only 1. the three cluster centers after the first round execution 2. the final three clusters 54. Explain DBSCAN algorithm with suitable example. 55. How does CLIQUE work. 56. Explain Partitioning Methods 57. Explain Density-Based Methods 58. Explain Grid-based Methods 59. Explain Model-Based Clustering Methods Unit VIII 60. Explain the construction of spatial data cube with suitable example. 61. Explain methods are for information retrieval? Explain 62. Describe web usage mining. 63. Explain construction and mining of object cubes 64. What is multimedia database? Explain mining multimedia databases. 65. What is time series database? What is a sequence database? Explain mining time series and sequence data. 66. Define spatial database, multimedia database, time series database, sequential database and text database. 67. Explain Periodicity analysis and Latent semantic indexing 68. Explain mining associations in multimedia data 69. Briefly discuss about Multidimensional Analysis and Descriptive Mining of Complex Data Objects 70. Briefly discuss about Mining Spatial Databases 71. Briefly discuss about Multimedia Databases 72. Briefly discuss about Mining Time-series and Sequence Data Mining 73. Briefly describe about Text Databases 74. Briefly discuss about mining the World Wide Web. 4 Responses Thanks conducive to the adept info. Narmally fair-minded wen up! I consistently do not bourgeoning on these but brooding you did a fervently buddy-buddy depredate and I’m indubitable some people suavity the less anyway. Cool work. Keep it comin’! I really like your writing style, wonderful info, appreciate it for putting up Thank u
{"url":"http://indraneel.blog.com/data-warehousing-and-data-mining-assignments/","timestamp":"2014-04-18T00:51:48Z","content_type":null,"content_length":"42818","record_id":"<urn:uuid:4dade272-add2-450f-ac0c-5290033d3f59>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
D = dummyvar(group) returns a matrix D containing zeros and ones, whose columns are dummy variables for the grouping variable group. Columns of group represent categorical predictor variables, with values indicating categorical levels. Rows of group represent observations across variables. group can be a numeric vector or categorical column vector representing levels within a single variable, a cell array containing one or more grouping variables, or a numeric matrix or cell array of categorical column vectors representing levels within multiple variables. If group is a numeric vector or matrix, values in any column must be positive integers in the range from 1 to the number of levels for the corresponding variable. In this case, dummyvars treats each column as a separate numeric grouping variable. With multiple grouping variables, the sets of dummy variable columns are in the same order as the grouping variables in group. The order of the dummy variable columns in D matches the order of the groups defined by group. When group is a categorical vector, the groups and their order match the output of the getlabels(group) method. When group is a numeric vector, dummyvar assumes that the groups and their order are 1:max(group). In this respect, dummyvars treats a numeric grouping variable differently than grp2idx. If group is n-by-p, D is n-by-S, where S is the sum of the number of levels in each of the columns of group. The number of levels s in any column of group is the maximum positive integer in the column or the number of categorical levels. Levels are considered distinct if they appear in different columns of group, even if they have the same value. Columns of D are, from left to right, dummy variables created from the first column of group, followed by dummy variables created from the second column of group, etc. dummyvar treats NaN values or undefined categorical levels in group as missing data and returns NaN values in D. Dummy variables are used in regression analysis and ANOVA to indicate values of categorical predictors. │ Note: If a column of 1s is introduced in the matrix D, the resulting matrix X = [ones(size(D,1),1) D] will be rank deficient. The matrix D itself will be rank deficient if group has multiple │ │ columns. This is because dummy variables produced from any column of group always sum to a column of 1s. Regression and ANOVA calculations often address this issue by eliminating one dummy │ │ variable (implicitly setting the coefficients for dropped columns to zero) from each group of dummy variables produced by a column of group. │ Suppose you are studying the effects of two machines and three operators on a process. Use group to organize predictor data on machine-operator combinations: machine = [1 1 1 1 2 2 2 2]'; operator = [1 2 3 1 2 3 1 2]'; group = [machine operator] group = Use dummyvar to create dummy variables for a regression or ANOVA calculation: D = dummyvar(group) D = The first two columns of D represent observations of machine 1 and machine 2, respectively; the remaining columns represent observations of the three operators. See Also anova1 | regress
{"url":"http://www.mathworks.com/help/stats/dummyvar.html?nocookie=true","timestamp":"2014-04-20T21:43:35Z","content_type":null,"content_length":"43839","record_id":"<urn:uuid:15b50890-1225-4f2b-9ba7-81eefaf65da6>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: construction of triangle of given perimeter, given point and angle Date: Nov 11, 1997 3:05 PM Author: John Conway Subject: Re: construction of triangle of given perimeter, given point and angle On Tue, 11 Nov 1997, Eileen M. Klimick Schoaff wrote: > A parabola, on the other hand, is determined by any 4 of its points. > John Conway> > Am I missing something here? Don't 3 points determine a parabola if the axis > of symmetry is either vertical or horizontal? Yes they do. But not if it isn't. You did miss something! But if we consider any axes, > then there are an infinite number of parabolas passing through three points. Yes, this is true. But I spoke of 4 points, not 3. > The generic equation is ax^2 + bxy +cy^2 + dx + ey + f = 0. This is the general conic, which is usually an ellipse or hyperbola rather than a hyperbola. If Jon Roberts is > considering parabolas of the form y = ax^2 + bx + c, then knowing 3 points > gives you three equations with three unknowns which can easily be solved -- > unless there is no solution. > In the April 1997 issue of the Mathematics Teacher, a colleague of mine, Dr. > Ellie Johnson, wrote an article "A Look at Parabolas with a Graphing > Calculator". In this article she using the calculator to generate many > solutions to the generic equation. Of course this just shows that given three > points and restricting yourself to a parabola of the form y = ax^2 + bx + c, > you can derive the equation. That does not, of course, construct it. > Does the fourth point determine whether the axis of symmetry is vertical, > horizontal, or rotated? Yes, roughly speaking. In ax^2 + bxy +cy^2 + dx + ey + f = 0, it looks like > you need more than 4 points to determine a, b, c, d, e, f. You do indeed need 5 points to determine the general conic. > Then again, I am only a math education person and do not have a PhD in math so > I am probably far in the dark. > Eileen Schoaff > Buffalo State College I think as a math educator you really SHOULD have known of the existence of conics other than parabolae! John Conway
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=1077403","timestamp":"2014-04-16T14:09:59Z","content_type":null,"content_length":"3425","record_id":"<urn:uuid:784bc16d-ffbd-4f4d-a909-db3a121f19f8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Pauls Online Notes : Algebra/Trig Review - Factoring Factoring [Show All Solutions] Factor each of the following as much as possible. Show Solution We have a difference of squares and remember not to make the following mistake. This just simply isn’t correct. To convince yourself of this go back to Problems 1 and 2 in the Multiplying Polynomials section. Here is the correct answer. Show Solution This is a sum of squares and a sum of squares can’t be factored, except in rare cases, so this is as factored as it will get. As noted there are some rare cases in which a sum of squares can be factored but you will, in all likelihood never run into one of them. Show Solution Factoring this kind of polynomial is often called trial and error. It will factor as where and . So, you find all factors of 3 and all factors of -10 and try them in different combinations until you get one that works. Once you do enough of these you’ll get to the point that you can usually get them correct on the first or second guess. The only way to get good at these is to just do lot’s of problems. Here’s the answer for this one. Show Solution There’s not a lot to this problem. When you run across something that turns out to be a perfect square it’s usually best write it as such. Show Solution In this case don’t forget to always factor out any common factors first before going any further. Show Solution Remember the basic formulas for factoring a sum or difference of cubes. In this case we’ve got
{"url":"http://tutorial.math.lamar.edu/Extras/AlgebraTrigReview/Factoring.aspx","timestamp":"2014-04-16T15:59:04Z","content_type":null,"content_length":"61255","record_id":"<urn:uuid:ed32fdfb-4035-4441-9a69-c6be44363f1b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Help me troubleshoot my answer June 11th 2007, 03:05 AM Help me troubleshoot my answer You are playing a poker game. You have 55 and the board reads 358 giving you 3 of a kind. There are two more cards to come. What's the probability you make a full house or better? Here's my answer. You can only make a fullhouse or 4 of a kind. So on the first of the two cards to come, you can hit 7 out of 47 cards (3,5,or8). If you miss on the first card, you can hit 10 out of 46 cards. It's 10 because you can not only hit a 3, 5, 8 but also make a fullhouse by getting a pair in the last two cards. So, 1 - (prob(missing both cards)) = 1-(40/47 * (36/46) But it should also add up to the probability of hitting on the 1st card + probability of hitting on the second card - the probability of hitting on both 1st and second card. So, 7/46 + 10/46 - 6/ 46. But my two answers don't tie. What am I doing wrong? June 11th 2007, 05:07 AM I think your answer is definitely correct (I got the same answer doing it a different way than you). I think it's your checking that's gone wrong! I don't really understand where you're getting your numbers in the second part. The probabilities are A) Miss first and second (40/47)*(36/46)=1440/2162 B) Miss first, hit second (40/47)*(10/46)=400/2162 C) Hit first (no matter what happens with second) 7/47 = 322/2162 These add up just fine. Edit because I think I was wrong: If you take 10/46+7/47-(7/47*10/46) = 722/2162 you will notice that this is the same as 1-P(A) (what you wanted). June 11th 2007, 07:01 AM Hello, CrazyAsian! You are playing a poker game. You have $\{5,5\}$ and the board has $\{3,5,8\}$ giving you 3-of-a-kind. There are two more cards to come. What's the probability you make a full house or better? You are complicating the problem by ordering the draws. There are: ${47\choose2} = 1081$ possible draws. You must get the fourth 5 . . . There is 1 way. Your fifth card can be any of the remaining 46 cards. . . There are: . $1 \times 46 \:=$46 ways to get 4-of-a-kind. Full House . . . There are 3 ways. [1] Draw another 3 and some other card. There are: ${3\choose1} = 3$ ways to draw another 3 . . and $44$ choices for the fifth card. There are: $3 \times 44 \:=\:132$ to get "5s over 3s". [2] Draw another 8 and some other card. There are: ${3\choose1} = 3$ ways to draw another 8. . . and $44$ choices for the fifth card. There are: $3 \times 44\:=\:132$ ways to get "5's over 8's". [3] Draw a different pair (other than 3s or 8s). There are: $10$ choices of the value of the pair . . and ${4\choose 2} = 6$ ways to get the pair. There are: $10 \times 6\:=\:60$ more ways to get a Full House. Hence, there are: . $132 + 132 + 60 \:=$324 ways to get a Full House. Therefore, there are $46 + 324\:=$370 ways to get a Full House or better. . . The probability is: . $\frac{370}{1081}$
{"url":"http://mathhelpforum.com/statistics/15841-help-me-troubleshoot-my-answer-print.html","timestamp":"2014-04-21T05:13:52Z","content_type":null,"content_length":"9866","record_id":"<urn:uuid:7e44edba-1602-4ae7-8e98-a08b225935a8>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Grayson, GA Science Tutor Find a Grayson, GA Science Tutor ...I have a Master's degree in Business Administration (MBA). I am also a published author. I have worked as a tutor and teacher, but more importantly, I know how to make learning fun and easy. I have a Master's degree in Management Information Systems which relies heavily on math. 29 Subjects: including astronomy, English, reading, writing ...I have never received any unsatisfactory ratings. Also, my students made 80% pass rate for the Biology EOCT. I have been teacher of the year for two schools in Atlanta. 4 Subjects: including biology, anatomy, ecology, botany ...Geometry is the subject where math teachers bring in more abstract concepts and many students are left behind. This is great for tutoring because there are a limited number of equations to learn and everything can be demonstrated by real world objects and drawings. Really understanding geometry will get students off to the right foot for the rest of their high school careers. 17 Subjects: including ACT Science, writing, physics, biology ...I earned my Ph.D. in Physics at Georgia Tech. When I was a graduate student there, some of my graduate jobs included teaching physics labs, conducting physics homework recitations, and physics tutoring for the athletes in behalf of the Georgia Tech Athletic Dept. I found that I really liked tut... 4 Subjects: including physics, calculus, algebra 1, differential equations ...I am certified in Early Childhood Education P-5 as well as Middle Grades Education 4-8. I have ten years of teaching experience in the public school system. I have taught all subjects at the elementary and middle school level. 11 Subjects: including physical science, reading, ESL/ESOL, algebra 1 Related Grayson, GA Tutors Grayson, GA Accounting Tutors Grayson, GA ACT Tutors Grayson, GA Algebra Tutors Grayson, GA Algebra 2 Tutors Grayson, GA Calculus Tutors Grayson, GA Geometry Tutors Grayson, GA Math Tutors Grayson, GA Prealgebra Tutors Grayson, GA Precalculus Tutors Grayson, GA SAT Tutors Grayson, GA SAT Math Tutors Grayson, GA Science Tutors Grayson, GA Statistics Tutors Grayson, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/grayson_ga_science_tutors.php","timestamp":"2014-04-18T19:13:11Z","content_type":null,"content_length":"23743","record_id":"<urn:uuid:355649d9-bfe5-4847-b1b8-5c5c1d8751c8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Type I planetary migration with stochastic fluctuations Meeting Room 2, CMS This talk presents a generalized treatment of Type I planetary migration in the presence of stochastic perturbations. In many planet-forming disks, the Type I migration mechanism, driven by asymmetric torques, acts on a short time scale and compromises planet formation. If the disk also supports MHD instabilities, however, the corresponding turbulent fluctuations produce additional stochastic torques that modify the steady inward migration scenario. This work studies the migration of planetary cores in the presence of stochastic fluctuations using complementary methods, including a Fokker-Planck approach and iterative maps. Stochastic torques have two main effects: [1] Through outward diffusion, a small fraction of the planetary cores can survive in the face of Type I inward migration. [2] For a given starting condition, the result of any particular realization of migration is uncertain, so that results must be described in terms of the distributions of outcomes. In addition to exploring different regimes of parameter space, this talk considers the effects of the outer disk boundary condition, varying initial conditions, and time-dependence of the torque parameters. For disks with finite radii, the fraction of surviving planets decreases exponentially with time. We find the survival fractions and decay rates for a range of disk models, and find the expected distribution of locations for surviving planets. For expected disk properties, the survival fraction lies in the range $0.01 < p_S < 0.1$.
{"url":"http://www.newton.ac.uk/programmes/DDP/seminars/2009081910204.html","timestamp":"2014-04-20T01:19:24Z","content_type":null,"content_length":"5225","record_id":"<urn:uuid:858b9505-caa9-4f9f-87de-ad6598ba834d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
A soccer ball is kicked with the initial speed of 10.1 m/s. After 0.750 s it is at its highest point. Number of results: 37,593 A tennis player standing 9.7 m from the net hits the ball at 3.42° above the horizontal. To clear the net, the ball must rise at least 0.291 m. If the ball just clears the net at the apex of its trajectory, how fast was the ball moving when it left the racquet? Tuesday, February 18, 2014 at 9:57pm by hatend Ted Williams hits a baseball with an initial velocity of 120 miles per hour (176 ft/s) at an angle of è = 35 degrees to the horizontal. The ball is struck 3 feet above home plate. You watch as the ball goes over the outfield wall 420 feet away and lands in the bleachers. After... Saturday, September 17, 2011 at 1:12pm by aaa Ted Williams hits a baseball with an initial velocity of 120 miles per hour (176 ft/s) at an angle of è = 35 degrees to the horizontal. The ball is struck 3 feet above home plate. You watch as the ball goes over the outfield wall 420 feet away and lands in the bleachers. ... Monday, September 19, 2011 at 2:07am by aaa A shell of a gun has a initial speed of 1700 m/s at initial inclination of 55◦ above the horiztontal. Ignore air resistance and other effects. a) what is the maximum height? B) what is the total time of the shell's flight? c) how far away did the shell hit? Friday, October 14, 2011 at 1:31am by Lala ap physics Mass * (34.7 + 4.5) kg*m/s 4.5 m/s is an extremely slow speed for the thrown ball (16.2 mph) . I doubt if it would reach home plate if thrown horizontally from the pitcher's mound. Are you sure the speed was not 24.50 m/s? Saturday, January 7, 2012 at 8:14pm by drwls A solid ball and a hollow ball, each with a mass of 2 kg and radius of 0.1 m, start from rest and roll down a ramp of length 2 m at an incline of 34°. An ice cube of the same mass slides without friction down the same ramp. What is the speed of the solid ball at the bottom of ... Monday, June 18, 2012 at 6:39pm by James Two stones, A and B, are thrown horizontally from the top of a cliff. Stone A has an initial speed of 15 meters per second and stone B has an initial speed of 30 meters per second. Compared to the time it takes stone A to reach the ground, the time it takes stone B to reach ... Monday, November 18, 2013 at 2:12pm by Anonymous Part 2 You are a pirate working for Dread Pirate Roberts. You are in charge of a cannon that exerts a force 20000 N on a cannon ball while the ball is in the barrel of the cannon. The length of the cannon barrel is 2.19 m and the cannon is aimed at a 32◦ angle from the ... Friday, February 25, 2011 at 8:03pm by ally A girl notices a ball moving straight upward just outside her window. The ball is visible for 0.25 seconds as it moves a distance of 1.05m from the bottom to the top of the window. How long does it take before the ball reappears? What is the greatest height of the ball above ... Saturday, September 18, 2010 at 3:45pm by Jon A golf ball and a ping-pong ball are dropped in a vacuum chamber. When they have fallen halfway down, they have the same... A.speed B.potential energy C.kinetic energy D.Rest Energy my thought is that its c, kinetic energy...but i'm not 100% sure Tuesday, January 23, 2007 at 6:32pm by Tammy A billiard ball of mass 0.16 kg has a speed of 1.80 m/s and collides with the side of the billiard table at an angle of 34.6°. For this collision, the coefficient of restitution is 0.841. What is the angle relative to the side (in degrees) at which the ball moves away from the... Thursday, February 9, 2012 at 9:48pm by sara In an elastic collision, both kinetic energy and momentum are conserved. Let m1=mass of red superball=1 m2=mass of blue superball=4 v11=initial velocity of red ball=5 v12=final velocity of red ball v21=initial velocity of blue ball=0 v22=final velocity of blue ball For ... Friday, November 20, 2009 at 2:13pm by MathMate Physics ~Work Kinetic Energy~ 1.)A ball (m 0.5 kg) starts from rest and has 300 J(joule) of work applied to it. a) what is the initial kinetic energy of the ball? b) what is the final kinetic of the ball? c) what is the final velocity of the ball? 2.)A boy is doing chin-up. If the boy does 450 N of work to... Wednesday, January 30, 2013 at 5:50pm by Isis physics help ! a baseball pitcher throws a baseball with a speed of 38m/s. estimate the average acceleration of the ball during the throwing motion,. in throwing the baseball, the pitcher accelerates the ball through a displacement of about 3.5 m from behind the body to hthe point where it ... Sunday, April 15, 2012 at 4:34pm by JACK 33 A rubber ball weighs 95 N a. what is the mass of the ball ? b.what is the acceleration of the ball if an upward force of 69N is applied Sunday, November 29, 2009 at 2:12am by Anonymous Physics Part 1 since it hit head-on, the balls travel in exactly opposite directions, or in the same direction. Aside from that, since the balls are of almost identical mass, do you really expect one of them to take off at 3 times the original speed? to conserve momentum, .17(2.4) = .17v + .... Saturday, November 9, 2013 at 7:27pm by Steve Physics(Please respond) 1) A 1320 kg demolition ball swings at the end of a 34.3 m cable on the arc of a vertical circle. At the lowest point of the swing, the ball is moving at a speed of 3.32 m/s. Determine the tension in the cable. F=ma (34.3)(3.32) = 113.87 Ia this correct? Tuesday, May 29, 2012 at 7:28pm by Hannah A small ball rolls horizontally off the edge of a tabletop of height h. It strikes the floor a distance x horizontally away from the edge of the table. (Use any variable or symbol stated above along with the following as necessary: g.) (a) How long is the ball in the air? t... Monday, February 4, 2013 at 8:12pm by Rachel Physics (quick check) A football is kicked at ground level with a speed of 20.8m/s at an angle of 23.3 degress to the horizontal. How much later, in seconds, does it hit the ground? Tf =(-20.8m/s x Sin 23.3)/-9.80m/s^2 = 0.839s this doesn't sound right to me Thursday, October 1, 2009 at 4:42pm by Mo Phil Dawson is a professional place kicker for the Cleveland Browns. On average, he kicks the ball at a 41 degree angle with an initial speed of 70 feet per second. For future reference, goal posts are 10 feet high in the NFL. a) Write parametric equations to model Dawson's ... Friday, February 28, 2014 at 8:56pm by Alexis Phil Dawson is a professional place kicker for the Cleveland Browns. On average, he kicks the ball at a 41 degree angle and with an initial speed of 70 feet per second. For future reference, goal posts are 10 feet high in the NFL. a) Write parametric equations to model Dawson'... Monday, March 3, 2014 at 6:12pm by Alexis Gravitation help..........thanks A projectile is shot directly away from Earth's surface. Neglect the rotation of Earth. (a) As a multiple of Earth's radius RE, what is the radial distance a projectile reaches if its initial speed is one-fifth of the escape speed from Earth? ____ times RE (b) As a multiple of... Sunday, January 14, 2007 at 9:46pm by Jessy A tennis player standing 11.0 m from the net hits the ball at 3.34° above the horizontal. To clear the net, the ball must rise at least 0.320 m. If the ball just clears the net at the apex of its trajectory, how fast was the ball moving when it left the racquet? Tuesday, February 9, 2010 at 7:32pm by mel A tennis player standing 11.5 m from the net hits the ball at 2.60° above the horizontal. To clear the net, the ball must rise at least 0.252 m. If the ball just clears the net at the apex of its trajectory, how fast was the ball moving when it left the racquet? Sunday, September 19, 2010 at 7:58pm by nate projectile physics im a bit puzzled, please help me gain the sanity required for this confusion... suppose a ball is thrown from the top of a cliff, and i am supposed to get its final velocity; since when the ball reaches it maximum height, the vf which becomes the vi or initial velocity upon ... Thursday, May 5, 2011 at 6:33am by cheerie To enter the main pool at an amusement part, a swimmer uses a water slide which has a vertical height of 2.65 m. Find her speed at the bottom of the slide if she starts with an initial speed of 0.950 Thursday, November 10, 2011 at 7:23pm by Jonathan A ball is thrown vertically upward with a speed of +13.0 m/s. (a) How high does it rise? (b) How long does it take to reach its highest point? (c) How long does the ball take to hit the ground after it reaches its highest point? (d) What is its velocity when it returns to the ... Thursday, September 15, 2011 at 10:47pm by ashley A ball is thrown vertically upward with a speed of +10.0 m/s. (a) How high does it rise? m (b) How long does it take to reach its highest point? s (c) How long does the ball take to hit the ground after it reaches its highest point? s (d) What is its velocity when it returns ... Sunday, January 22, 2012 at 11:25pm by seanso A ball is thrown vertically upward with a speed of +10.0 m/s. (a) How high does it rise? m (b) How long does it take to reach its highest point? s (c) How long does the ball take to hit the ground after it reaches its highest point? s (d) What is its velocity when it returns ... Monday, January 23, 2012 at 12:22am by seanso A ball is thrown vertically upward with a speed of 19.0 m/s. (a) How high does it rise? m (b) How long does it take to reach its highest point? s (c) How long does the ball take to hit the ground after it reaches its highest point? s (d) What is its velocity when it returns to... Thursday, August 30, 2012 at 12:30am by roger A ball is thrown vertically upward with a speed of 15.0 m/s. (a) How high does it rise? (b) How long does it take to reach its highest point? (c) How long does the ball take to hit the ground after it reaches its highest point? (d) What is its velocity when it returns to the ... Friday, September 21, 2012 at 6:44pm by Kelly I'm super confused about how to approach this problem. Please help. A ball of Styrofoam (ρ = 100 kg/m3) is totally submerged in water. The ball has a mass of 300.0 g What is the volume of the ball? If a string holds the ball when it’s in the water, what’s the tension in ... Friday, January 17, 2014 at 1:03pm by Sandy HELP! I'm doing a webassign, and am completley lost on these problems. If you could help with any, that would be great. A crate weighing 9.40 103 N is pulled up a 36° incline by a force parallel to the plane. If the coefficient of kinetic friction between the crate and the ... Wednesday, September 22, 2010 at 5:14am by Tom A ball is thrown at a rate of 25m/s. The ball travels 138 meters. How long does it take the ball from start to end? Monday, February 20, 2012 at 9:26pm by jorge A quarterback makes a pass, throwing the ball at 21 m/s and an angle of 60° above the horizontal. A) What is the maximum height of the ball in flight? B) How long does it take to complete the pass? C) How many yards (yd) does he manage to throw the ball? (1 yard = 0.9144 ... Sunday, September 16, 2012 at 5:33pm by Katharine A quarterback makes a pass, throwing the ball at 21 m/s and an angle of 60° above the horizontal. A) What is the maximum height of the ball in flight? B) How long does it take to complete the pass? C) How many yards (yd) does he manage to throw the ball? (1 yard = 0.9144 ... Sunday, September 16, 2012 at 5:33pm by Katharine A shell fired from the ground with an initial speed of 1.70 x 10^3 m/s at an initial angle of 55 degrees to the horizontal. neglecting air resistance, find a) the shell's horizontal range b)the amount of time the shell is in motion. Saturday, November 6, 2010 at 8:22am by luke Problem 21.10 A proton with an initial speed of 800000 m/s is brought to rest by an electric field. Part B - What was the potential difference that stopped the proton? = Part C - What was the initial kinetic energy of the proton, in electron volts? Wednesday, September 14, 2011 at 12:56pm by Paige A rocket is launched at an angle of 60.0° above the horizontal with an initial speed of 99 m/s. The rocket moves for 3.00 s along its initial line of motion with an acceleration of 32.0 m/s2. At this time, its engines fail and the rocket proceeds to move as a projectile. Friday, September 10, 2010 at 8:09pm by CARY Suppose you throw a 0.054-kg ball with a speed of 11.5 m/s and at an angle of 31.8° above the horizontal from a building 13.1 m high. (a) What will be its kinetic energy when it hits the ground? (b) What will be its speed when it hits the ground? Friday, June 1, 2012 at 10:11pm by James Suppose you throw a 0.058-kg ball with a speed of 11.0 m/s and at an angle of 31.5° above the horizontal from a building 12.3 m high. (a) What will be its kinetic energy when it hits the ground? (b) What will be its speed when it hits the ground? Wednesday, September 26, 2012 at 4:00pm by Ray A 1.5 kg ball strikes a wall with a velocity of 7.9 m/s to the left. The ball bounces off with a velocity of 6.7 m/s to the right. If the ball is in contact with the wall for 0.21 s, what is the constant force exerted on the ball by the wall? Thursday, December 8, 2011 at 7:13pm by Henry A 1.5 kg ball strikes a wall with a velocity of 7.9 m/s to the left. The ball bounces off with a velocity of 6.7 m/s to the right. If the ball is in contact with the wall for 0.21 s, what is the constant force exerted on the ball by the wall? Thursday, December 8, 2011 at 7:13pm by Henry A 2.7 kg ball strikes a wall with a velocity of 7.8 m/s to the left. The ball bounces off with a velocity of 6.6 m/s to the right. If the ball is in contact with the wall for 0.19 s, what is the constant force exerted on the ball by the wall? Thursday, November 8, 2012 at 10:31pm by Brandon A 3.6 kg ball strikes a wall with a velocity of 9.3 m/s. The ball bounces off with a velocity of 7.9 m/s in the opposite direction. If the ball is in contact with the wall for 0.4 seconds, what is the constant force exerted on the ball by the wall? Monday, March 25, 2013 at 3:38pm by Sam A 1.7 kg ball strikes a wall with a velocity of 7.7 m/s to the left. The ball bounces off with a velocity of 6.5 m/s to the right. If the ball is in contact with the wall for 0.17 s, what is the constant force exerted on the ball by the wall? Monday, December 16, 2013 at 10:02pm by Anonymous 1. Calculate the frequency (Hz) and energy (eV) of green photons with wavelength of 555nm. Photon energy = hν, where ν = frequency in Hz and h = 6.626x10-34 joule-sec, the Planck’s constant. Keep in mind that: wavelengthxfrequency = speed. 2. Andy Ruddick can serve ... Sunday, March 1, 2009 at 2:44pm by JeFF Calculate the maximum height using the initial speed, half-time, and angle. The initial speeed is 18m/s, half time is 1.8, and the angle is 75. And how is this written out to solve. Sunday, January 23, 2011 at 11:51am by Anonymous A particle has an initial horizontal velocity of 2.9 m/s and an initial upward velocity of 4.5 m/s. It is then given a horizontal accelera- tion of 1.2 m/s2 and a downward acceleration of 1.2 m/s2. What is its speed after 4.9 s? Answer in units of m/s Thursday, October 25, 2012 at 7:09pm by Sonia Physics w/ Calc A baseball approaches home plate at a speed of 43.0 m/s, moving horizontally just before being hit by a bat. The batter hits a pop-up such that after hitting the bat, the baseball is moving at 54.0 m /s straight up. The ball has a mass of 145 g and is in contact with the bat ... Sunday, October 16, 2011 at 8:08pm by Lauren A 2.7 kg ball strikes a wall with a velocity of 7.6 m/s to the left. The ball bounces off with a velocity of 5.2 m/s to the right. If the ball is in contact with the wall for 0.23 s, what is the constant force exerted on the ball by the wall? Answer in units of N Sunday, January 22, 2012 at 3:55pm by Angel A 3.3 kg ball strikes a wall with a velocity of 8.4 m/s to the left. The ball bounces off with a velocity of 6.8 m/s to the right. If the ball is in contact with the wall for 0.27 s, what is the constant force exerted on the ball by the wall? Answer in units of N Sunday, February 23, 2014 at 9:13pm by michael Downs and Abwender (2002) found neurological deficits in soccer players who are routinely hit on the head with soccer balls compared to swimmers, who are also athletes but who are not regularly hit in the head. Is this an example of an experimental or a non-experimental study... Wednesday, June 22, 2011 at 12:43am by Liz A ball is rolled off a table that is 0.481 m above the floor. The ball is rolling with a velocity if 1.8 m/s as it goes off the edge of the table. At the exact instant the first ball rolls off the table a second ball is dropped from the same height. A.) How long does it take ... Monday, July 22, 2013 at 8:20pm by Jason A billiard ball rolling across a table to the right at 2.2 m/s makes a head-on elastic collision with an identical ball. The mass of a billiard ball is 32 g. If the second ball is initially at rest, what is the velocity of the first ball after the collision? If the second ball... Tuesday, February 22, 2011 at 1:28pm by micha A billiard ball rolling across a table to the right at 2.2 m/s makes a head-on elastic collision with an identical ball. The mass of a billiard ball is 36 g. If the second ball is initially at rest, what is the velocity of the first ball after the collision? If the second ball... Wednesday, October 12, 2011 at 1:07am by sidney child developement A child throws a ball with speed and accuracy when he throws a ball to his teacher but his throws are slower and less accurate when plays with a group of friends. This difference is explained by A. classical stage theory B. probility theory C. locomotion-x theory D, Piaget,s ... Saturday, April 28, 2012 at 4:57pm by jane The defects of ball mill The Ball Mill faults: (1) work efficiency is low, the power consumption unit. (2) the ball mill barrel rotational speed is low (about 15 ~ 27 r/min), if you use ordinary motor that. The average all need equipped with expensive reducer. (3) grinding medium in the impact and ... Thursday, August 30, 2012 at 9:36pm by jiandan The initial vertical component of the velocity is: Vyo = (35.5m/s)(sin49.9) = _____ m/s The vertical distance can be described by: y = (Vyo)t + (1/2)(-9.8m/s^2)t^2 At the time the ball was caught: 0.877m = (Vyo)t + (1/2)(-9.8m/s^2)t^2 Substitute the value for Vyo and rearrange... Friday, September 12, 2008 at 8:19pm by GK A 0.26 kg rock is thrown vertically upward from the top of a cliff that is 27 m high. When it hits the ground at the base of the cliff the rock has a speed of 24 m/s. (a) Assuming that air resistance can be ignored, find the initial speed of the rock. (b) Find the greatest ... Monday, March 29, 2010 at 8:03pm by Carden Hillary kicks a ball so that it is in the air for 3.2 s. The ball lands down field 56 m from its starting point. a. Determine the x and y velocities of the ball. Sunday, October 10, 2010 at 10:47am by katelyn A ball rolls down a hill with a constant acceleration of 2.0 m/s2. If the ball starts from rest, (a) what is its velocity and the end of 4.0 s? (b) How far did the ball move? Monday, September 26, 2011 at 10:53pm by Me A ball rolls down a hill with a constant acceleration of 2.0 m/s2. If the ball starts from rest, (a) what is its velocity and the end of 4.0 s? (b) How far did the ball move? Monday, September 26, 2011 at 10:56pm by Me A baseball player throws a ball. While the 700.0-g ball is in the pitchers hand, there os a force of 125N in it. What is the accerleration of the ball? Wednesday, December 14, 2011 at 2:27pm by Amber If the mass of ball A is 5kg and ball B is 75kg, how high should ball B be dropped from so that when B collides with A they stick together and travel to point X? Thursday, January 19, 2012 at 11:46am by Andy A baseball pitcher throws a ball with a maximum speed of 150. km/h. If the ball is thrown horizontally, how far does it fall vertically by the time it reaches the catcher's glove 20. m away? Give your answer numerically in meters using decimal notation (omit the units). Use ... Saturday, September 19, 2009 at 6:42pm by Michael Problem 21.10 A proton with an initial speed of 800000 m/s2 is brought to rest by an electric field. Part B - What was the potential difference that stopped the proton? Part C - What was the initial kinetic energy of the proton, in electron volts? Friday, September 9, 2011 at 5:50pm by Paige If a ball is thrown straight up into the air with an initial velocity of 95 ft/s, its height in feet after t seconds is given by f(t)=95t−16t^2 Find the average velocity for the time period beginning when t=1 and lasting (i) 0.5 seconds (ii) 0.1 seconds (iii) 0.01 ... Friday, May 31, 2013 at 12:53am by Abby algebra 2 homework check Ruth is scheduling a soccer tournament in which 64 teams will participate. After each round of soccer games, half the teams advance to the next round of the tournament. After all the rounds are played, how many total games are in the tournament? Is it 48? Saturday, May 19, 2012 at 11:27pm by Anonymous algebra 2 homework check Ruth is scheduling a soccer tournament in which 64 teams will participate. After each round of soccer games, half the teams advance to the next round of the tournament. After all the rounds are played, how many total games are in the tournament? Is it 48? Saturday, May 19, 2012 at 11:27pm by Anonymous A football is thrown from the edge of a cliff from a height of 22 m at a velocity of 18 m/s [39degrees above the horizontal]. A player at the bottom of the cliff is 12 m away from the base of the cliff and runs at a maximum speed of 6.0 m/s to catch the ball. Is it possible ... Sunday, February 23, 2014 at 8:39pm by Louis A small ball with a mass of 30.0 g and a charge of −0.200 μC is suspended from the ceiling by a string. The ball hangs at a distance of 5.00 cm above an insulating floor. If a second small ball with a mass of 50.0 g and a charge of 0.400 μC is rolled directly ... Monday, September 17, 2012 at 1:39pm by alia A small ball with a mass of 30.0 g and a charge of −0.200 μC is suspended from the ceiling by a string. The ball hangs at a distance of 5.00 cm above an insulating floor. If a second small ball with a mass of 50.0 g and a charge of 0.400 μC is rolled directly ... Monday, September 17, 2012 at 2:32pm by alia Science Physics A rubber ball of mass m1= 10kg is moving to the right with speed v1= 10m/s. it collides elastically with another ball of mass m2= 50kg, which is sitting at rest. m2 is larger than m1, what are the speeds of the balls, v1 and v2 after the collision? Thursday, February 3, 2011 at 9:29pm by Alan three dimensional identical balls 1,2 and 3 are placed on a straight line at a separation of 10m between balls . initially they are at rest. ball 1 is given a velocity of 10m/s towards ball 2. collision between ball 1 and 2 is inelastic with e=0.5 . but collision between ball ... Sunday, January 16, 2011 at 7:49am by sreeram three dimensional identical balls 1,2 and 3 are placed on a straight line at a separation of 10m between balls . initially they are at rest. ball 1 is given a velocity of 10m/s towards ball 2. collision between ball 1 and 2 is inelastic with e=0.5 . but collision between ball ... Sunday, January 16, 2011 at 8:43pm by mackson I just want to be sure if my answers make any sense! A kid throws a ball in the air and catches it 5 seconds later a) What is the initial velocity of the ball? No air friction. b) What is the maximum height which the ball has reached above its departure point? c) If this kid ... Friday, November 18, 2011 at 2:43pm by Tommy A 0.50 kg ball that is tied to the end of a 1.4 m light cord is revolved in a horizontal plane with the cord making a 30° angle with the vertical. (b) If the ball is revolved so that its speed is 4.0 m/s, what angle does the cord make with the vertical? Monday, December 6, 2010 at 9:21pm by kaitlyn A 0.50 kg ball that is tied to the end of a 1.4 m light cord is revolved in a horizontal plane with the cord making a 30° angle with the vertical. (b) If the ball is revolved so that its speed is 4.0 m/s, what angle does the cord make with the vertical? Monday, December 6, 2010 at 9:23pm by kaitlyn The defects of ball mill The Ball Mill faults: (1) work efficiency is low, the power consumption unit. (2) the ball mill barrel rotational speed is low (about 15 ~ 27 r/min), if you use ordinary motor that. The average all need equipped with expensive reducer. (3) grinding medium in the impact and ... Thursday, June 13, 2013 at 4:55am by selina When you drop a bowling ball & a tennis ball they hit the floor at the same time but when they hit the floor they have the same what? A. speed B. force C. momentum D. all of the above. I think it is D but I can't find anywhere what they have the same. Thanks Friday, September 7, 2012 at 9:33pm by SUE 1. Our team practiced soccer every day. 2. Our team practiced the soccer every day. (Which one is OK? Are both grammatical? Do we have to eliminate 'the'?) Thursday, December 9, 2010 at 9:01pm by rfvv Assume: 1. no air resistance 2. initial speed (Vi) = "average speed" = 6000 mph Change in vertical altitude after time t =Vi*sin(θ)t - (1/2)gt² Vi = initial velocity = 6000 mph = 8800 ft/s t=time since launch = 120 s. θ=launch angle above horizontal = 26.5&deg... Saturday, June 23, 2012 at 10:30pm by MathMate Physics 141 speed of rock = 15.5 , mass = .32 speed of wagon = v , mass = 92.7 initial momentum = 93 * .52 = 48.4 final momentum = initial momentum so 48.4 = .32*15.5 + 92.7*v solve for v if backward 48.4 = -.32*15.5 + 92.7*v Tuesday, March 11, 2014 at 9:51pm by Damon a special rubber ball is dropped from the top of a wall that is 64 feet high. each time the ball bounces it rises half as high as the distance it fell. The ball is caught when it bounces 1foot high. how many times did the ball bounce? Monday, February 20, 2012 at 12:33pm by Jewell A thing rod of length L and negligible mass, that can pivot about one end to rotate in a vertical circle. A heavy ball of mass 5.00kg is attached to the other end. The rod is pulled aside through an angle 30.0 degrees and released. What is the speed of the ball at the lowest ... Thursday, December 9, 2010 at 3:01pm by Andrea when babe ruth hit a homer over the 7.5-m-high right field fence 95 m from home plate, roughly what wa the minimum speed of the ball when it left the bat? Assume the ball wa hit 1.0 m above the ground and it path initially made a 38 degree angle with the ground. Thursday, January 5, 2012 at 4:50pm by taylor Physics Circular Motion A 0.40-kg ball, attached to the end of a horizontal cord, is rotated in a circle of radius 2.0 m on a frictionless horizontal surface. If the cord will break when the tension in it exceeds 75 N, what is the maximum speed the ball can have? May someone please walk me through ... Saturday, June 16, 2012 at 5:13pm by Mike Joseph A shell is fired from the ground with an initial speed of 1.72 103 m/s at an initial angle of 40° to the horizontal. a) Neglecting air resistance find the horizontal range b) Find the amount of time the shell is in motion Wednesday, November 14, 2012 at 8:03pm by Jacob Jim has a difficult golf shot to make. His ball is 100m from the hole. He wants the ball to land 5m in front of the hole, so it can role to the hole. A 20m tree is between his ball and the hole, 40m from the hole and 60m from Jim's ball. With the base of the tree as the origin... Sunday, October 18, 2009 at 5:47pm by Samantha the coefficient of restitution between the ball and the floor is 0.60 If the ball is drop from rest at the height of 6.6 m from the floor. Find a) what is the maximum height will the ball attain after the first bounce. b) how much kinetic energy is lost duirng the impact if ... Monday, February 21, 2011 at 7:25am by robert Sam hits a golf ball with a five-iron a distance of 120 m horizontally. A tree 45 m high and 35 m in front of Sam is directly in the path of the ball. Will the ball clear the tree if the ball makes a parabolic curve and has maximum height of 80 m? Tuesday, October 11, 2011 at 7:56am by veera Physics - Mechanics and Ocsillatory Motion The force shown in the figure acts on a 1.7-kg object whose initial speed is 0.44 m/s and initial position is x = 0.27 m. the figure can be found at: [[webassign dot net forwardslash walker forwardslash 07-16 dot gif]] (a) Find the speed of the object when it is at the ... Saturday, October 30, 2010 at 6:42pm by Sara Me and my friend are playing on a Ferris wheel which has Radius of 20 Meter and had an constant angular velocity of 0.2 rad/s . Then, When I'm at the very top of the Ferris wheel i'm going to drop a tennis ball( dropping without initial velocity). How far around the Ferris ... Saturday, June 15, 2013 at 1:44pm by Dave A 1.4-kg copper block is given an initial speed of 2.0 m/s on a rough horizontal surface. Because of friction, the block finally comes to rest. (a) If the block absorbs 85% of its initial kinetic energy as internal energy, calculate its increase in temperature. Wednesday, April 11, 2012 at 9:56pm by Anonymous A ball is rolling on a track with an intial velocity of 2.0m/s, E. If the acceleration is 1.5 m/s2, write a sentence that describes the ball's motion, i.e, what does the ball do each second? Thursday, September 30, 2010 at 5:56pm by Jane A golf ball is hit with a velocity of 24.5 m/s at 35.0 degrees above the horizontal. Find a) the range of the ball, and b) the maximum height of the ball Monday, October 10, 2011 at 9:09pm by Joy If the green ball bounces upward from the ground with a speed of 20 m/s (immediately after it hit the ground), calculate the momentum of the ball as it goes upward immediately after hitting the Tuesday, September 28, 2010 at 6:07pm by becky A neutral ball is suspended by a string. A positively charged insulating rod is placed near the ball, which is observed to be attracted to the rod. Why is this? 1. The ball becomes negatively charged by induction. 2. The string is not a perfect conductor. 3. The number of ... Tuesday, July 23, 2013 at 2:00pm by Lynn A preseason game Tennessee Titans punter A.J. Trapasso hit the giant HD TV screen suspended above the field at the new Cowboys stadium. At the time of the punt A.J. was 30 yards away from the edge of the tv screen (the tv screen starts at the 40 yard line, and AJ was at the 10... Saturday, January 23, 2010 at 12:00pm by Anonymous Pages: <<Prev | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=A+soccer+ball+is+kicked+with+the+initial+speed+of+10.1+m%2Fs.+After+0.750+s+it+is+at+its+highest+point.&page=28","timestamp":"2014-04-19T20:31:52Z","content_type":null,"content_length":"46989","record_id":"<urn:uuid:d42ab471-f523-4ac7-a9f0-f68de0d0b3b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Sherborn Math Tutor Find a Sherborn Math Tutor ...I am a licensed, certified teacher for the state of Massachusetts. I am also the advisor for the high school math club and the advisor of the National Honor Society at a local high school.I have taught: SAT Prep, Pre-Caluculus, Trigonometry, Algebra 2 honors, Algebra 2 standard course, Geometry ... 12 Subjects: including precalculus, algebra 1, algebra 2, geometry ...Most troubles with introductory calculus are traceable to an inadequate mastery of algebra and trigonometry. As noted above, trigonometry is usually encountered as a part of a pre-calculus course. In my view, much of the traditional material associated with trigonometry should be replaced by an... 7 Subjects: including algebra 1, algebra 2, calculus, trigonometry ...I can help you be successful in your SAT test preparation, Math, English, and Accounting subjects, or in your general study skills. I am fun and friendly, and I especially enjoy working with young people. I take your future success very seriously. 30 Subjects: including algebra 1, ACT Math, SAT math, prealgebra ...I've worked as a classroom teacher, reading specialist, literacy coach, and educational consultant for over 15 years. I have taught and mentored students at the elementary, undergraduate, and graduate levels as well as been a member of admissions committees at top rated universities.I am able to... 19 Subjects: including SPSS, reading, Spanish, writing I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment... 14 Subjects: including statistics, discrete math, SPSS, probability
{"url":"http://www.purplemath.com/Sherborn_Math_tutors.php","timestamp":"2014-04-18T18:53:12Z","content_type":null,"content_length":"23621","record_id":"<urn:uuid:4bff696d-a7a1-42c5-85cd-0f41cb4c2a42>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Saddle point From Encyclopedia of Mathematics A point on a smooth surface such that the surface near the point lies on different sides of the tangent plane. If a point on a twice continuously-differentiable surface is a saddle point, then the Gaussian curvature of the surface at the point is non-positive. A saddle point is a generalization of a hyperbolic point. A surface all of whose points are saddle points is a saddle surface. A saddle point of a differentiable function Saddle point in game theory. A saddle of a differential equation on [a1] M.W. Hirsch, S. Smale, "Differential equations, dynamical systems, and linear algebra" , Acad. Press (1974) pp. 190ff MR0486784 Zbl 0309.34001 [a2] D.R.J. Chillingworth, "Differential topology with a view to applications" , Pitman (1976) pp. 150ff MR0646088 Zbl 0336.58001 How to Cite This Entry: Saddle point. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Saddle_point&oldid=28263 This article was adapted from an original article by D.D. Sokolov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"url":"http://www.encyclopediaofmath.org/index.php/Saddle_point","timestamp":"2014-04-19T14:29:57Z","content_type":null,"content_length":"18967","record_id":"<urn:uuid:b2a7f673-2ca9-45ab-b15c-b8c584a07b56>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
81Rxx Groups and algebras in quantum theory • 81R05 Finite-dimensional groups and algebras motivated by physics and their representations [See also 20C35, 22E70] • 81R10 Infinite-dimensional groups and algebras motivated by physics, including Virasoro, Kac-Moody, W-algebras and other current algebras and their representations [See also 17B65, 17B67, 22E65, 22E67, 22E70] • 81R12 Relations with integrable systems [See also 17Bxx, 37J35] • 81R15 Operator algebra methods [See also 46Lxx, 81T05] (4) • 81R20 Covariant wave equations • 81R25 Spinor and twistor methods [See also 32L25] • 81R30 Coherent states [See also 22E45]; squeezed states [See also 81V80] • 81R40 Symmetry breaking • 81R50 Quantum groups and related algebraic methods [See also 16T20, 17B37] • 81R60 Noncommutative geometry • 81R99 None of the above, but in this section
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/collection/id/13103","timestamp":"2014-04-17T12:58:47Z","content_type":null,"content_length":"11902","record_id":"<urn:uuid:729b9776-e8cf-4f19-ba9d-9e3a07d2eb0b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
density of hydrogen peroxide Best Results From Wikipedia Yahoo Answers Youtube From Wikipedia Hydrogen peroxide Hydrogen peroxide ( H 2 O 2) is an oxidizer commonly used as a bleach. It is a clear liquid, slightly more viscous than water, that appears colorless in dilute solution. It is used as a disinfectant, antiseptic, oxidizer, and in rocket ry as a propellant. The oxidizing capacity of hydrogen From Yahoo Answers Question:How do I find the Molarity of the solution?? Answers:30% H2O2 70% H2O Assume 1 liter of solution. If the density of the solution is 1.11g/ml, then the 1 liter of solution would have a mass of (1.11g/ml)(1000ml/L)= 1110 g 30% of 1110 g = .30 x 1110 g = 333g Molecular weight of H2O2 is 2 x 1.0 + 2 x 16.0 = 34.0 333g(1 mole/34.0g) = 9.79 moles Since you assume 1 liter, you have 9.79moles/liter Round to two significant figures. Molarity is Question:A solution of hydrogen peroxide is 30.0% H2O2 by weight and has a density of 1.11 g cm-3. The molarity (M) of the solution is? Ok, I know that M(H2O2) = 34 and M(H20) = 18 Now 30.0% H2O2 by weigh means that 30% of the total mass is H202 Can we now assume that we have 1L of the solution an calculate the mass of the solution so: mass = density x volume mass = 1.11 x 1 = 1.11 whick would be translated in 1110g So 30% of 1110 is 333g n(H2O2)= 333 / 34 = 9.794 mol So the molarity of the solution is 9.794M Is that correct? Answers:Good job. Question:10 physical properties and 10 chemical properties of hydrogen peroxide Answers:Physical: almost colorless, less volatile than water, denser than water, more viscous than water, miscible in water. mp: -.41 deg C bp: 150.2 deg C density: 1.6434 g/cm3 (solid at -4.5 C) 1.4425 at 25C Viscosity: 1.245 centipoise (20C) vapor pressure (@ 25C) 1.9mmHg dielectric constant: (25C) 70.7 Electric conductivity (25C) 5.1E-8 ohm^-1 cm^-1 standard heat of formation -187.6 kJ/mol standard gibbs free energy of formation: -118.0 kJ/mol Chemical: spontaneously disproportionates decomposition strongly catalyzed by metal surfaces (Platinum, Silver) can act as oxidizing or reducing agent (in both acidic and basic solutions) evolves O2 when a reducing agent can undergo proton acid/base reactions to form peroxonium salts, hydroperoxides, and peroxides somewhat stronger acid than water (pKa=11.65) much weaker base than water (by a a 10^6 factor) used in the production of epoxides, propylene oxide, and caprolactones, hydroquinone, and many pharmaceuticals and food products environmental applications include pollution treatment by oxidizing cyanides and sulfides, and restoring aerobic conditions to sewage waters. replaces chlorine in industrial bleach because H2O and O2 decomp. products That should be a start. Question:What I know: vol of hydrogen peroxide sample is 8.72ml mass of hydrogen peroxide sample is 8.72g barometric pressure in lab is 764.29 mm HG temp in lab is 21 celcius vol of oxygen gas collected is 99.4 mL 1) How many moles of oxygen were collected n= .0821 x 284 divided by 1atm x 99.4 mL n=.23 moles that doesnt seem right :( 2) using the balanced equation (there is twice as much h202 needed as o2 produced). What is the molarity of the hydrogen peroxide solution? It would be double my answer in #1, but i dont think that answer is right 3) what is the mass percent hydrogen peroxide if the density is 1.00g/ml 4) calculate the volume of oxygen gas that would be produced upon the decomposition of 10ml of 30%(m/m) aqueous hydrogen peroxide solution. Assume the gas is collected at 1atm and 25 degrees celcius and density of hydrogen peroxide is 1.11g/mL. Answers:I1) PV = nRT n = PV/RT. I think you got it upside down. Convert P to atm, V to L, if you are using R in L atm mol-1 K-1. ALWAYS write down the units and you won't go wrong. 2) Take number of moles in (1). Double it cos q tells you to. then use molarity x volume (L) = number of moles. That should get you started. From Youtube Hydrogen Peroxide :Quite on the contrary, 35% Food-grade Hydrogen Peroxide (H2O2) is safe to handle. Read more on H2O2 at merahza.wordpress.com Hydrogen Peroxide :A quick video before water polo practice. I really rushed it and basically clicked "save" right at the moment that I had to leave for practice. Sorry for the little technical mistakes here and there. Enjoy! Amy Fan Comment, rate, subscribe! PS Cold pools suck. :[
{"url":"http://www.edurite.com/kbase/density-of-hydrogen-peroxide","timestamp":"2014-04-18T23:18:12Z","content_type":null,"content_length":"70533","record_id":"<urn:uuid:f3b7a5f2-98db-40ad-9c95-91df1661c2a0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
Power radiated I think Kinnersley's photon rocket is the special case. As the paper Peter linked to shows, the absence of GW is due to: - all nonstatic character of the scenario (motion) being due to the null fluid + conservation. - the null fluid only has anisotropy of monopole and dipole character Mfb's example suffers from non-conservation of momentum, so it is not a physically plausible example. However, I have seen, in the literature, that two non-rotating, uncharged black holes radially falling toward each other produce gravitational radiation. This seems the closest analog of the Newtonian two mutually attracting point particles. If I find a link for this, I'll post it, but I think it is pretty well known radially attracting BH radiate power. [edit: Here is a link for radiation from radial infall of particle into a BH:
{"url":"http://www.physicsforums.com/showthread.php?p=4184103","timestamp":"2014-04-16T07:47:46Z","content_type":null,"content_length":"71639","record_id":"<urn:uuid:fb452744-9faf-408b-890f-b3021547c44a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Control Systems/Feedback Loops A feedback loop is a common and powerful tool when designing a control system. Feedback loops take the system output into consideration, which enables the system to adjust its performance to meet a desired output response. When talking about control systems it is important to keep in mind that engineers typically are given existing systems such as actuators, sensors, motors, and other devices with set parameters, and are asked to adjust the performance of those systems. In many cases, it may not be possible to open the system (the "plant") and adjust it from the inside: modifications need to be made external to the system to force the system response to act as desired. This is performed by adding controllers, compensators, and feedback structures to the system. Basic Feedback StructureEdit This is a basic feedback structure. Here, we are using the output value of the system to help us prepare the next output value. In this way, we can create systems that correct errors. Here we see a feedback loop with a value of one. We call this a unity feedback. Here is a list of some relevant vocabulary, that will be used in the following sections: The term "Plant" is a carry-over term from chemical engineering to refer to the main system process. The plant is the preexisting system that does not (without the aid of a controller or a compensator) meet the given specifications. Plants are usually given "as is", and are not changeable. In the picture above, the plant is denoted with a P. A controller, or a "compensator" is an additional system that is added to the plant to control the operation of the plant. The system can have multiple compensators, and they can appear anywhere in the system: Before the pick-off node, after the summer, before or after the plant, and in the feedback loop. In the picture above, our compensator is denoted with a C. Some texts, or texts in other disciplines may refer to a "summer" as an adder. A summer is a symbol on a system diagram, (denoted above with parenthesis) that conceptually adds two or more input signals, and produces a single sum output signal. Pick-off node A pickoff node is simply a fancy term for a split in a wire. Forward Path The forward path in the feedback loop is the path after the summer, that travels through the plant and towards the system output. Reverse Path The reverse path is the path after the pick-off node, that loops back to the beginning of the system. This is also known as the "feedback path". Unity feedback When the multiplicative value of the feedback path is 1. Negative vs Positive FeedbackEdit It turns out that negative feedback is almost always the most useful type of feedback. When we subtract the value of the output from the value of the input (our desired value), we get a value called the error signal. The error signal shows us how far off our output is from our desired input. Positive feedback has the property that signals tend to reinforce themselves, and grow larger. In a positive feedback system, noise from the system is added back to the input, and that in turn produces more noise. As an example of a positive feedback system, consider an audio amplification system with a speaker and a microphone. Placing the microphone near the speaker creates a positive feedback loop, and the result is a sound that grows louder and louder. Because the majority of noise in an electrical system is high-frequency, the sound output of the system becomes high-pitched. Example: State-Space EquationEdit In the previous chapter, we showed you this picture: Now, we will derive the I/O relationship into the state-space equations. If we examine the inner-most feedback loop, we can see that the forward path has an integrator system, $\frac{1}{s}$, and the feedback loop has the matrix value A. If we take the transfer function only of this loop, we get: $T_{inner}(s) = \frac{\frac{1}{s}}{1 - \frac{1}{s}A} = \frac{1}{s - A}$ Pre-multiplying by the factor B, and post-multiplying by C, we get the transfer function of the entire lower-half of the loop: $T_{lower}(s) = B\left(\frac{1}{s - A}\right)C$ We can see that the upper path (D) and the lower-path T[lower] are added together to produce the final result: $T_{total}(s) = B\left(\frac{1}{s - A}\right)C + D$ Now, for an alternate method, we can assume that x' is the value of the inner-feedback loop, right before the integrator. This makes sense, since the integral of x' should be x (which we see from the diagram that it is. Solving for x', with an input of u, we get: $x' = Ax + Bu$ This is because the value coming from the feedback branch is equal to the value x times the feedback loop matrix A, and the value coming from the left of the sumer is the input u times the matrix B. If we keep things in terms of x and u, we can see that the system output is the sum of u times the feed-forward value D, and the value of x times the value C: $y = Cx + Du$ These last two equations are precisely the state-space equations of our system. Feedback Loop Transfer FunctionEdit We can solve for the output of the system by using a series of equations: $E(s) = X(s) - Y(s)$ $Y(s) = G(s)E(s)$ and when we solve for Y(s) we get: [Feedback Transfer Function] $Y(s) = X(s) \frac{Gp(s)}{1 + Gp(s)}$ The reader is encouraged to use the above equations to derive the result by themselves. The function E(s) is known as the error signal. The error signal is the difference between the system output (Y(s)), and the system input (X(s)). Notice that the error signal is now the direct input to the system G(s). X(s) is now called the reference input. The purpose of the negative feedback loop is to make the system output equal to the system input, by identifying large differences between X(s) and Y(s) and correcting for them. Example: ElevatorEdit Here is a simple example of reference inputs and feedback systems: There is an elevator in a certain building with 5 floors. Pressing button "1" will take you to the first floor, and pressing button "5" will take you to the fifth floor, etc. For reasons of simplicity, only one button can be pressed at a time. Pressing a particular button is the reference input of the system. Pressing "1" gives the system a reference input of 1, pressing "2" gives the system a reference input of 2, etc. The elevator system then, tries to make the output (the physical floor location of the elevator) match the reference input (the button pressed in the elevator). The error signal, e(t), represents the difference between the reference input x(t), and the physical location of the elevator at time t, y(t). Let's say that the elevator is on the first floor, and the button "5" is pressed at time t[0]. The reference input then becomes a step function: $x(t) = 5u(t - t_0)$ Where we are measuring in units of "floors". At time t[0], the error signal is: $e(t_0) = x(t_0) - y(t_0) = 5 - 1 = 4$ Which means that the elevator needs to travel upwards 4 more floors. At time t[1], when the elevator is at the second floor, the error signal is: $e(t_1) = x(t_1) - y(t_1) = 5 - 2 = 3$ Which means the elevator has 3 more floors to go. Finally, at time t[4], when the elevator reaches the top, the error signal is: $e(t_4) = x(t_4) - y(t_4) = 5 - 5 = 0$ And when the error signal is zero, the elevator stops moving. In essence, we can define three cases: • e(t) is positive: In this case, the elevator goes up one floor, and checks again. • e(t) is zero: The elevator stops. • e(t) is negative: The elevator goes down one floor, and checks again. State-Space Feedback LoopsEdit In the state-space representation, the plant is typically defined by the state-space equations: $x'(t) = Ax(t) + Bu(t)$ $y(t) = Cx(t) + Du(t)$ The plant is considered to be pre-existing, and the matrices A, B, C, and D are considered to be internal to the plant (and therefore unchangeable). Also, in a typical system, the state variables are either fictional (in the sense of dummy-variables), or are not measurable. For these reasons, we need to add external components, such as a gain element, or a feedback element to the plant to enhance Consider the addition of a gain matrix K installed at the input of the plant, and a negative feedback element F that is multiplied by the system output y, and is added to the input signal of the plant. There are two cases: 1. The feedback element F is subtracted from the input before multiplication of the K gain matrix. 2. The feedback element F is subtracted from the input after multiplication of the K gain matrix. In case 1, the feedback element F is added to the input before the multiplicative gain is applied to the input. If v is the input to the entire system, then we can define u as: $u(t) = Fv(t) - FKy(t)$ In case 2, the feeback element F is subtracted from the input after the multiplicative gain is applied to the input. If v is the input to the entire system, then we can define u as: $u(t) = Fv(t) - Ky(t)$ Open Loop vs Closed LoopEdit Let's say that we have the generalized system shown above. The top part, Gp(s) represents all the systems and all the controllers on the forward path. The bottom part, Gb(s) represents all the feedback processing elements of the system. The letter "K" in the beginning of the system is called the Gain. We will talk about the gain more in later chapters. We can define the Closed-Loop Transfer Function as follows: [Closed-Loop Transfer Function] $H_{cl}(s) = \frac{KGp(s)}{1 + Gp(s)Gb(s)}$ If we "open" the loop, and break the feedback node, we can define the Open-Loop Transfer Function, as: [Open-Loop Transfer Function] $H_{ol}(s) = KGp(s)$ We can redefine the closed-loop transfer function in terms of this open-loop transfer function: $H_{cl}(s) = \frac{H_{ol}(s)}{1 +Gp(s)Gb(s)}$ These results are important, and they will be used without further explanation or derivation throughout the rest of the book. □ Changed the original open loop gain from Gp(s)Gb(s) to kGp(s). Rational: open loop gain eliminates the feedback loop, which means Gb(s) should no longer exist leaving the proportional controller, in this case K, and the plant, Gp(s). Placement of a ControllerEdit There are a number of different places where we could place an additional controller. Each location has certain benefits and problems, and hopefully we will get a chance to talk about all of them. Second-Order SystemsEdit Damping RatioEdit The damping ratio is defined by way of the sign zeta. The damping ratio gives us an idea about the nature of the transient response detailing the amount of overshoot & oscillation that the system will undergo. This is completely regardless time scaling. If zeta is: zero, the system is undamped; zeta < 1, the system is underdamped; zeta = 1, the system is critically damped; zeta > 1, the system is overdamped; Zeta is used in conjunction with the natural frequency to determine system properties. To find the zeta value you must first find the natural response! sadas Natural FrequencyEdit System SensitivityEdit Last modified on 10 September 2013, at 21:13
{"url":"http://en.m.wikibooks.org/wiki/Control_Systems/Feedback_Loops","timestamp":"2014-04-18T11:01:32Z","content_type":null,"content_length":"34968","record_id":"<urn:uuid:2589c426-d626-4370-8474-1b0733e7e09d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Gilcrest Statistics Tutor Find a Gilcrest Statistics Tutor ...At that point, I went back to school and became a Registered Nurse. After working in healthcare for a while, I have decided to split my time between helping the body and helping the mind. I love math and science and was at the top of my class in many courses. 13 Subjects: including statistics, geometry, algebra 1, algebra 2 ...I am certified in many subjects. I have experience working with a wide span of ages, from elementary and high school to college. I can also work with people who struggle with English, either as a second language or linguistically challenged, such as is found with conditions such as Aspergers. 30 Subjects: including statistics, reading, Spanish, calculus ...I can start tutoring after 12/15/13. I also would prefer to use Skype for the tutoring sessions but I am more than willing to travel if it works better for you or your child. Through my teaching experiences so far, I have worked with large groups, small groups, and one-on-one. 18 Subjects: including statistics, reading, algebra 2, geometry ...I also critiqued the speeches of other speakers. I have helped several students from the University of Colorado at Boulder and Colorado State University in econometrics. This class is a combination of statistics and calculus. 26 Subjects: including statistics, calculus, geometry, GRE ...I use my knowledge of biology every day in my job and feel that I am quite knowledgeable in the subjects of general biology, cell biology, plant biology, and other related subjects. I have extensive experience with statistics used in psychology and environmental data analysis. I am familiar wit... 18 Subjects: including statistics, biology, anatomy, Microsoft Excel
{"url":"http://www.purplemath.com/Gilcrest_Statistics_tutors.php","timestamp":"2014-04-21T07:48:41Z","content_type":null,"content_length":"23806","record_id":"<urn:uuid:5668a6f6-7e29-4d5f-a75f-2b16cc010290>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: ASCII Text x Andreas Ermedahl, Friedhelm Stappert, Jakob Engblom, "Clustered Worst-Case Execution-Time Calculation," IEEE Transactions on Computers, vol. 54, no. 9, pp. 1104-1122, September, 2005. BibTex x @article{ 10.1109/TC.2005.139, author = {Andreas Ermedahl and Friedhelm Stappert and Jakob Engblom}, title = {Clustered Worst-Case Execution-Time Calculation}, journal ={IEEE Transactions on Computers}, volume = {54}, number = {9}, issn = {0018-9340}, year = {2005}, pages = {1104-1122}, doi = {http://doi.ieeecomputersociety.org/10.1109/TC.2005.139}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Computers TI - Clustered Worst-Case Execution-Time Calculation IS - 9 SN - 0018-9340 EPD - 1104-1122 A1 - Andreas Ermedahl, A1 - Friedhelm Stappert, A1 - Jakob Engblom, PY - 2005 KW - Index Terms- WCET analysis KW - WCET calculation KW - hard real-time KW - embedded systems. VL - 54 JA - IEEE Transactions on Computers ER - Knowing the Worst-Case Execution Time (WCET) of a program is necessary when designing and verifying real-time systems. A correct WCET analysis method must take into account the possible program flow, such as loop iterations and function calls, as well as the timing effects of different hardware features, such as caches and pipelines. A critical part of WCET analysis is the calculation, which combines flow information and hardware timing information in order to calculate a program WCET estimate. The type of flow information which a calculation method can take into account highly determines the WCET estimate precision obtainable. Traditionally, we have had a choice between precise methods that perform global calculations with a risk of high computational complexity and local methods that are fast but cannot take into account all types of flow information. This paper presents an innovative hybrid method to handle complex flows with low computational complexity, but still generate safe and tight WCET estimates. The method uses flow information to find the smallest parts of a program that have to be handled as a unit to ensure precision. These units are used to calculate a program WCET estimate in a demand-driven bottom-up manner. The calculation method to use for a unit is not fixed, but could depend on the included flow and program characteristics. [1] “ASTEC Homepage,” 2005, http:/www.astec.uu.se. [2] “Vinnova Homepage,” 2005, http:/www.vinnova.se. [3] N. Audsley, A. Burns, R. Davis, K. Tindell, and A. Wellings, “Fixed Priority Pre-Emptive Scheduling: An Historical Perspective,” Real-Time Systems, vol. 8, nos. 2/3, pp. 129-154, 1995. [4] L. Casparsson, A. Rajnak, K. Tindell, and P. Malmberg, “VolcanoA Revolution in On-Board Communications,” Volvo Technology Report, vol. 1, pp. 9-19, 1998. [5] J. Ganssle, “Really Real-Time Systems,” Proc. Embedded Systems Conf. (ESC SF) 2001, Apr. 2001. [6] R. Kirner and P. Puschner, “Transformation of Path Information for WCET Analysis during Compilation,” Proc. 13th Euromicro Conf. Real-Time Systems, (ECRTS '01), June 2001. [7] J. Engblom and A. Ermedahl, “Modeling Complex Flows for Worst-Case Execution Time Analysis,” Proc. 21st IEEE Real-Time Systems Symp. (RTSS '00), Nov. 2000. [8] C. Ferdinand, F. Martin, and R. Wilhelm, “Applying Compiler Techniques to Cache Behavior Prediction,” Proc. ACM SIGPLAN Workshop Languages, Compilers, and Tools for Real-Time Systems (LCT-RTS '97), 1997. [9] Y.-T.S. Li and S. Malik, “Performance Analysis of Embedded Software Using Implicit Path Enumeration,” Proc. 32nd Design Automation Conf., pp. 456-461, 1995. [10] J. Gustafsson, B. Lisper, C. Sandberg, and N. Bermudo, “A Tool for Automatic Flow Analysis of C-Programs for WCET Calculation,” Proc. Eighth IEEE Int'l Workshop Object-Oriented Real-Time Dependable Systems (WORDS '03), Jan. 2003. [11] C. Healy, R. Arnold, F. Müller, D. Whalley, and M. Harmon, “Bounding Pipeline and Instruction Cache Performance,” IEEE Trans. Computers, vol. 48, no. 1, Jan. 1999. [12] N. Holsti, T. Långbacka, and S. Saarinen, “Worst-Case Execution-Time Analysis for Digital Signal Processors,” Proc. EUSIPCO 2000 Conf. (X European Signal Processing Conf.), Sept. 2000. [13] T. Lundqvist and P. Stenström, “An Integrated Path and Timing Analysis Method based on Cycle-Level Symbolic Execution,” J. Real-Time Systems, May 2000. [14] F. Stappert and P. Altenbernd, “Complete Worst-Case Execution Time Analysis of Straight-Line Hard Real-Time Programs,” J. Systems Architecture, vol. 46, no. 4, pp. 339-355, 2000. [15] R. Heckmann, M. Langenbach, S. Thesing, and R. Wilhelm, “The Influence of Processor Architecture on the Design and the Results of WCET Tools,” IEEE Proc. Real-Time Systems Conf., 2003. [16] S.-S. Lim, Y.H. Bae, C.T. Jang, B.-D. Rhee, S.L. Min, C.Y. Park, H. Shin, K. Park, and C.S. Ki, “An Accurate Worst-Case Timing Analysis for RISC Processors,” IEEE Trans. Software Eng., vol. 21, no. 7, pp. 593-604, July 1995. [17] S.-K. Kim, S.L. Min, and R. Ha, “Efficient Worst Case Timing Analysis of Data Caching,” Proc. Second IEEE Real-Time Technology and Applications Symp. (RTAS '96), pp. 230-240, 1996. [18] R. White, F. Müller, C. Healy, D. Whalley, and M. Harmon, “Timing Analysis for Data Caches and Set-Associative Caches,” Proc. Third IEEE Real-Time Technology and Applications Symp. (RTAS '97), pp. 192-202, June 1997. [19] A. Colin and I. Puaut, “Worst Case Execution Time Analysis for a Processor with Branch Prediction,” J. Real-Time Systems, vol. 18, nos. 2/3, pp. 249-274, May 2000. [20] T. Mitra and A. Roychoudhury, “Effects of Branch Prediction on Worst Case Execution Time of Programs,” Technical Report 11-01, Nat'l Univ. of Singapore (NUS), Nov. 2001. [21] J. Engblom, “Processor Pipelines and Static Worst-Case Execution Time Analysis,” PhD dissertation, Dept. of Information Technology, Uppsala Univ., Uppsala, Sweden, Apr. 2002. [22] J. Engblom and A. Ermedahl, “Pipeline Timing Analysis Using a Trace-Driven Simulator,” Proc. Sixth Int'l Conf. Real-Time Computing Systems and Applications (RTCSA '99), Dec. 1999. [23] C. Ferdinand, R. Heckmann, M. Langenbach, F. Martin, M. Schmidt, H. Theiling, S. Thesing, and R. Wilhelm, “Reliable and Precise WCET Determination for a Real-Life Processor,” Proc. First Int'l Workshop Embedded Systems (EMSOFT2000), Oct. 2001. [24] S.-S. Lim, J.H. Han, J. Kim, and S.L. Min, “A Worst Case Timing Analysis Technique for Multiple-Issue Machines,” Proc. 19th IEEE Real-Time Systems Symp. (RTSS '98), Dec. 1998. [25] J. Schneider and C. Ferdinand, “Pipeline Behaviour Prediction for Superscalar Processors by Abstract Interpretation,” Proc. SIGPLAN Workshop Languages, Compilers and Tools for Embedded Systems (LCTES '99), May 1999. [26] S. Petters and G. Färber, “Making Worst-Case Execution Time Analysis for Hard Real-Time Tasks on State of the Art Processors Feasible,” Proc. Sixth Int'l Conf. Real-Time Computing Systems and Applications (RTCSA '99), Dec. 1999. [27] A. Colin and G. Bernat, “Scope-Tree: A Program Representation for Symbolic Worst-Case Execution Time Analysis,” Proc. 14th Euromicro Conf. Real-Time Systems (ECRTS '02), pp. 50-59, 2002. [28] F. Stappert, A. Ermedahl, and J. Engblom, “Efficient Longest Executable Path Search for Programs with Complex Flows and Pipeline Effects,” Proc. Fourth Int'l Conf. Compilers, Architecture, and Synthesis for Embedded Systems (CASES '01), Nov. 2001. [29] P. Puschner and A. Schedl, “Computing Maximum Task Execution Times with Linear Programming Techniques,” technical report, Technische Universität Wien, Institut für Technische Informatik, Apr. [30] A. Ermedahl, “A Modular Tool Architecture for Worst-Case Execution Time Analysis,” PhD dissertation, Dept. of Information Technology, Uppsala Univ. Uppsala, Sweden, June 2003. [31] F. Stappert, “From Low-Level to Model-Based and Constructive Worst-Case Execution Time Analysis,” PhD dissertation, Faculty of Computer Science, Electrical Eng., and Math., Univ. of Paderborn, 2004, C-LAB Publication, vol. 17, Shaker Verlag. [32] P. Atanassov, R. Kirner, and P. Puschner, “Using Real Hardware to Create an Accurate Timing Model for Execution-Time Analysis,” Proc. IEEE Real-Time Embedded Systems Workshop, Dec. 2001. [33] NEC Corp, V850E/MS1 32/16-bit Single Chip Microcontroller: Architecture, third ed., Jan. 1999, document no. U12197EJ3V0UM00. [34] ARM 9TDMI Technical Reference Manual, third ed., ARM Ltd., Mar. 2000, document no. DDI 0180A. [35] T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein, Introduction to Algorithms, second ed. MIT Press, 2002. [36] C. Healy, M. Sjödin, V. Rustagi, and D. Whalley, “Bounding Loop Iterations for Timing Analysis,” Proc. Fourth IEEE Real-Time Technology and Applications Symp. (RTAS '98), June 1998. [37] M. Berkelaar, lp_solve: (Mixed Integer) Linear Programming Problem Solver, 2004, ftp://ftp.es.ele.tue.nl/publp_solve. [38] “SICStus Prolog User's Manual,” Swedish Inst. of Computer Science, 1995. [39] “ILOG CPLEX Homepage,” 2004, http://www.ilog.com/productscplex/. Index Terms: Index Terms- WCET analysis, WCET calculation, hard real-time, embedded systems. Andreas Ermedahl, Friedhelm Stappert, Jakob Engblom, "Clustered Worst-Case Execution-Time Calculation," IEEE Transactions on Computers, vol. 54, no. 9, pp. 1104-1122, Sept. 2005, doi:10.1109/ Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tc/2005/09/t1104-abs.html","timestamp":"2014-04-24T09:46:23Z","content_type":null,"content_length":"60131","record_id":"<urn:uuid:e5c9a2f0-3606-4e58-9e70-bcfc4d5791b8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
The Unreasonable Ineffectiveness of Mathematics in Economics Velupillai, K. Vela (2004) The Unreasonable Ineffectiveness of Mathematics in Economics. UNSPECIFIED. (Unpublished) Download (592Kb) | Preview In this paper I attempt to show that mathematical economics is unreasonably ineffective. Unreasonable, because the mathematical assumptions are economically unwarranted; ineffective because the mathematical formalizations imply non-constructive and uncomputable structures. A reasonable and effective mathematization of economics entails Diophantine formalisms. These come with natural undecidabilities and uncomputabilites. In the face of this, I conjecture that an economics for the future will be freer to explore experimental methodologies underpinned by alternative mathematical structures. The whole discussion is framed within the context of the celebrated Wignerian theme: The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Item Type: Departmental Technical Report Department or Research center: Economics Subjects: H Social Sciences > HB Economic Theory > HB135 Mathematical economics. Quantitative methods Uncontrolled Keywords: General Equilibrium Theory, Computable General Equilibrium, Computable Economics, Constructive Mathematics, Mathematical Economics Additional Information: An earlier version of this paper was presented at the CJE Economics for the Future conference held at Cambridge University in September, 2003. Report Number: 6 Repository staff approval on: 07 Oct 2004 Actions (login required)
{"url":"http://eprints.biblio.unitn.it/685/","timestamp":"2014-04-18T02:59:39Z","content_type":null,"content_length":"17150","record_id":"<urn:uuid:a9072c71-0f1e-49d0-b5c5-6b8957173740>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
A proof from the mouth of a babe One of the high points of this last academic year has been volunteering in Katya's 2nd/3rd grade English and Math classes. I was suitably rewarded for this just a few weeks ago with a delicious treat of an elegant mathematical argument from a 2nd grader (Katya's in 3rd). Here's the puzzle and its solution (names of individuals have been changed). It all started with the teacher, Judith, posing a seemingly simple problem to the kids. She was teaching fractions that Thursday. In the latter half of Math class, the kids were each given rectangular cards, and asked to imagine they were cookies (or pizzas). Since classes at Ohlone are always mixed between two grades, the problems are always posed in a range of difficulties. With this one, the kids had to divide up their cookies into 2, 4, 8 and 3 equal parts. The more different ways you could slice it up, the more points you get. The puzzle I want to talk about concerns the division of the rectangle into 4 equal parts. Most kids did it the conventional way, by which I mean folding the rectangle in half, first vertically, then horizontally, and cutting along the creases. Edgar (name changed), however, adopted a different tack. He folded the rectangle along its two diagonals. The cool thing about this is that it's not immediately obvious that this way of cutting the rectangle produces 4 equal cuts, especially not to 2nd or 3rd graders. See the figure below, for instance: Can we tell immediately that Slice A is the same size as Slice B? Edgar claimed they were, but wasn't able to explain it - he just felt it in his gut. Judith, awesome teacher that she is, immediately recognized the opportunity to enrich and picked up on this issue, calling the attention of the entire class to the problem. She wrote it up on the white-board and named it "Edgar's challenge". Anyone who was done with the rest of the problems was welcome to give it a shot. Can they show that the four slices were the same size? It turns out that a great many of the kids were done with the "rest of the problems" really quickly, and were intrigued by Edgar's challenge. They all tried to prove it, and mostly with what I might call the straightforward and more "boring" proof. It uses the formula for the area of a triangle, which is equal to half its base times its height. The base of triangle A is equal to the width of the rectangle (if you rotate A by 90 degrees). Its height, clearly is half the length of the rectangle. Thus it's area must be one quarter the product of the rectangle's height and width. By a similar argument, you can show that the area of triangle B is also exactly that quantity. I was checking to see if there were any other interesting arguments and questions, when Mary ran up to me and asked if I could check her answer to Edgar's challenge. What she had was different from everyone else, and so she couldn't tell if it was right or not. So I went, and was suitably impressed. Mary took the 4 triangles and assembled two rhombuses (or is it rhombi?) from them. As you can see from the above pictures, the first rhombus was made with triangles A and C, and the second with triangles B and D. She laid one over the other and said they're the same size. Since each of these is made of two "equal" triangles, she said, the 4 triangles also have to be of the same size each. No formulas, no equations, just a plain and simple, albeit elegant geometric proof, stated with child-like innocence. Pretty awesome, considering that that's how one comes up with the formula for a triangle's area in the first place! Somewhere along the continuum, one loses sight of the beginnings until teachers like Judith and schools like Ohlone teach us the value of the past! That weekend, I remember recounting this marvelous experience to Chris Nguyen over a breakfast catchup. And he followed up with an equally interesting tidbit his daughter had just discovered a week ago at a play structure - What single formula gives the area of a rectangle, trapezoid and a triangle? The answer, not so surprisingly (or surprisingly to some) is quite simple - It's the product of the height and the average length of the two parallel sides. There you go - two nice little rewards in return for volunteering in a child's class, which is rewarding in itself. Not too shabby.
{"url":"http://www.pandamatak.com/people/anand/blog/2011/05/a_proof_from_the_mouths_of_bab.html","timestamp":"2014-04-19T11:05:50Z","content_type":null,"content_length":"20857","record_id":"<urn:uuid:c5147176-ae56-4a44-8ecf-675dda7bf378>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Relation to susceptibility Next: Regime of applicability of Up: Review of the linear Previous: Heating rate example non-interacting Relation to susceptibility Here, for the sake of intuition, I briefly make contact with the traditional LRT notation arising in condensed matter physics [127,122,160,99,65]. I consider first the `response' (time-dependent expectation value) of a general measurable whose operator is 2.40) we can write where the first term is the constant equilibrium value i.e. it has memory. However, because of translational invariance in time, the response must be local in frequency. So we can relate the Fourier transform of response to that of driving by 127]; in particular 2.40) gives where the latter form is averaged over 65]. We now choose 127], since, using the driving (2.41) we have where at the last stage the oscillatory terms average to zero. This heating rate expression results entirely from the definition of We should also note that in the condensed matter physics context dynamic form factor 160]. In the case where 122,99,8]. We have performed such a calculation for a chaotic mesoscopic dot [15]. Next: Regime of applicability of Up: Review of the linear Previous: Heating rate example non-interacting Alex Barnett 2001-10-03
{"url":"http://www.math.dartmouth.edu/~ahb/thesis_html/node25.html","timestamp":"2014-04-20T15:52:21Z","content_type":null,"content_length":"11990","record_id":"<urn:uuid:0ac76880-590a-46eb-a0dd-ee0bb6b06d00>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
ransfer Tutorial 2 (with Solutions) Heat Transfer Tutorial 2 (with Solutions) 1. It is required to heat a process stream flowing at 5kg/s with heat capacity 2 kJ/kg/K from 60^oC to 100^oC. What heat duty will this represent? Q = G x Cp x DT = 5 x 2 x (100-60) = 400kW Note the units because Cp is in kJ/kg/K Two alternative heating fluids are available, both at 150^oC : • steam with latent heat 2100 kJ/kg • a hot gas with heat capacity 1.5 kJ/kg/K whose temperature must be kept above 100^oC For each case determine the flowrate of heating medium required. At what temperature would you expect the condensed steam to leave the heat exchanger? For steam Q = G x L L = 2100 kJ/kg 400 = G x 2100 so G = 0.19 kg/s Since all the energy comes from latent heat not cooling, the condensed steam should leave at the same temperature as it entered, i.e. 150C. In practice it will be slightly cooled. For gas Q = G x Cp x DT using the maximum amount of temperature change available 400 = G x 1.5 x (150-100) G = 400 / (1.5 x 50) = 5.33 kg/s 2. A small countercurrent heat exchanger operates with the following stream temperatures: cold stream in 20^oC ; cold stream out 100^oC hot stream in 120^oC ; hot stream out 70^oC The unit has total area for heat transfer of 1 m&sup2and overall heat transfer coefficient of 500W/m&sup2/K. What is the rate of energy transfer ? For countercurrent operation: `cold' end driving force will be (70-20) = 50 deg C `hot' end will be (120-100) = 20 deg C Log mean is (50-20) / ln (50/20) = 30 / 0.916 = 32.7 deg C Q = U A (Theta) = 1 x 500 x 32.7 = 16,370 W 3. Estimate the film coefficients for flow of an organic liquid of density 800 kg/m³, viscosity 0.0008 kg/m/s and thermal conductivity 0.2 W/m/K which flows at a mean velocity of 1 m/s: • inside a tube of 40mm diameter and • outside the same tube. (Assume i.d. ~ o.d.) Re = u d ro / mu = 1 x 0.04 x 800 / 0.0008 = 40,000 To power 0.8 = 4804 To power 0.6 = 577 Nu (inside) = 0.046 x 4804 = 221 Nu (outside) = 0.24 x 577 = 138.5 Nu = h d / k so h = Nu k / d = Nu x 0.2/0.04 = 5 Nu hi = 5 x 221 = 1105 W/m2/K ho = 5 x 138.5 = 692 W/m2/K Estimate the overall heat transfer coefficient if heat is to be transfered between these two fluid streams. It will be reasonable to neglect wall resistance, so 1/U = 1/1105 + 1/692 = 0.00235 So U = 425.5 W/m2/K 4. It is required to preheat the feed stream to a chemical reactor from 20^oC to 80^oC . The stream flowrate is 20 kg/s of material having a specific heat capacity of 4 kJ/kg/K. From these figure we can calculate the required heat duty: Q = G Cp DT = 20 x 4 x (80-20) = 4800 kW Two streams of hot fluid are available which could be used to exchange heat with the above feed stream. Hot stream (1) is pressurised water at 110^oC, available at a flowrate of 30 kg/s and has a heat capacity 4.2kJ/kg/K. Hot stream (2) is a light oil at 220^oC available at a flowrate of 10 kg/s and has a heat capacity of 3 kJ/kg/K. We can work out the outlet temperature for each case. Again Q = G Cp DT = 4800 4800 = 30 x 4.2 x DT so DT ~ 38 deg C Water will come out at (110-38) = 72C 4800 = 10 x 3 x DT so DT = 160 deg C Oil will come out at (220-160) = 80C The expected overall heat transfer coefficients are 500W/m&sup2/K in an exchanger with water, and 200W/m&sup2/K with oil. Which heating fluid would you recommend to be used, on the basis of the above information? Briefly describe any other factors which might be relevant to the choice between these two alternatives. The main factor is going to be the the size of the exchanger, so work out heat transfer areas. Driving forces are: Cold end (72-20) = 52 Hot end (150-80) = 70 Log mean = 60.55 Q = U A theta Since duty is in kW make U = 0.5 kW/m2/K 4800 = 0.5 x A x 61 So A = 157m2 Cold end 80-20 = 60 Hot end 220-80 = 140 Log mean = 94.42 U = 0.2 kw/m2/K 4800 = 0.2 x A x 94.4 A = 254m2 Both are big units but this is nearly twice the size of the water exchanger. 5. The following experimental measurements were taken for the operation of a heat exchanger. Hot side: Cold side: flowrate 3.6 kg/s 2.9 kg/s inlet temperature 200^oC 100^oC outlet temperature 150^oC 170^oC fluid specific heat capacity 3 kJ/kg/K 2.5 kJ/kJ/K Dimensions : 200 tubes, each 25mm diameter and 2 m long. Expected overall heat transfer coefficient 480W/m&sup2/K. Fully countercurrent operation. Discuss critically. We are not asked to design a new heat excahnger, but to evaluate the performance of an existing unit. (i) We have enough information to evaluate the heat dutiues on both sides of the unit, which should in theory be the same... Q lost on hot side = 3.6 x 3 x (200-150) = 540 kW Q gained on cold side = 2.9 x 2.5 x (170-100) = 507.5 kW The cold side gains less heat than the hot side loses. This makes sense since there will inevitably be losses to the ouside. These amount to 32.5kW or 6% of the total heat supplied. They might be reduced by better insulation of the unit. (ii) Since we have the area, temperature driving force and heat duty we can calculate the actual heat transfer coefficient and compare it with the `expected' one. This is the most sensible comparison to make as this is the least certain item in designing any heat exchanger. The two driving forces are (200-170) = 30 deg C and (150-100) = 50 deg C Log mean is 39.15 deg C Each tube has an area of 0.025 x pi x 2 = 0.157 m2 There are 200 so total area = 31.4 m2 U = Q / (A Theta) = 507.5 / 31.4 / 39.15 = 0.414 kW/m2/K This doesn't compare too badly with the design figure of 0.48 kW/m2/K. However, it is low which is actually unexpected, since when people are designing with uncertain figures they tend to err on the safe side. It is a reasonable bet that the designer who estimated 480 w/m2/K actually believed that the real figure was significantly higher! Unless this was wrong, the exchanger is underpreforming and should probably be cleaned. 6. A stream of 10 kg/s of organic material with heat capacity 2 kJ/kg/K is to be heated from 80^oC to 120^oC . It is proposed to use steam at 3.5 bara as the heating medium. The steam condenses at 140^oC and has latent heat of 2732 kJ/kg. a) Determine the heat duty of the unit. b) What flowrate of steam will be required? c) Determine the Log mean temperature driving force d) If the expected overall heat transfer coefficient is 400W/m&sup2/K determine the area required. e) Estimate the annual cost of heating the stream, given an 8000 hour working year, steam costing £3 per GJ and an annualised capital cost of £500 per m&sup2 of heat exchanger.
{"url":"http://ecosse.org/jack/cheng1h/heattransfer/sol2.html","timestamp":"2014-04-20T15:51:20Z","content_type":null,"content_length":"7756","record_id":"<urn:uuid:8b6b7bbf-5859-4cee-8d9a-a9b63cdf44cc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Falling object July 11th 2007, 08:43 PM Falling object At time=0, a diver jumps from a diving board that is 32 feet above the water. the hieght of the diver is given by the equation, h(t)=-16t (squared) +16t+32, where h is measured in feet and t is measured in seconds. A)when does the diver hit the water? B)what is the domain of the equation for height? C)what is the range of the equation for the hieght? D)what is the maximum height reached by the diver? July 11th 2007, 09:00 PM This has nothing to do with geometry [quote=lilmama-14;60314]At time=0, a diver jumps from a diving board that is 32 feet above the water. the hieght of the diver is given by the equation, h(t)=-16t (squared) +16t+32, where h is measured in feet and t is measured in seconds. A)when does the diver hit the water? he hits the water when his height above the water is zero. solve for h(t) = 0 that is, $-16t^2 + 16t + 32 = 0$ B)what is the domain of the equation for height? the domain of a function is the set of all inputs (in this case, t-values) for which the function is defined. technically, the domain of this function is all real t, since it is a polynomial. but generally we don't accept negative values for time, so i'd say dom(h) = $[0, \infty)$ C)what is the range of the equation for the hieght? the range is the set of all outputs (in this case, h-values) for which the function is defined. the function given is a parabola with a maximum value. find this maximum value. the range will be everything from that value down to -infinity D)what is the maximum height reached by the diver? i'm tempted to just blurt out 32 , since the diver was at 32 feet and is diving downwards(but that is likely to be wrong, since it's possible his height increased before it started to decrease), but let's be methodical about this. the maximum height occurs at the vertex. to find the t that gives the vertex, we use the vertex formula: $t = \frac {-b}{2a}$, where the original equation is of the form: $h = at^2 + bt + c$ after finding the t-value in this way, plug it into h to solve for the max height July 12th 2007, 04:43 AM I believe Jhevon gave an incorrect answer for B) and also gave an incorrect answer for C) due to the same flaw. The equation $h(t) = -16t^2 + 16t + 32$ is valid only for the diver's height for the time period that the diver is in the air. So it is true only from t = 0 s to t = 2s (when the diver hits the water.) So the domain of the function is [0, 2]. As to the range of the function, again the function is only defined for when the diver is in the air. So the range will include only 0 ft (the surface of the water) to whatever the maximum height July 12th 2007, 08:22 AM I believe Jhevon gave an incorrect answer for B) and also gave an incorrect answer for C) due to the same flaw. The equation $h(t) = -16t^2 + 16t + 32$ is valid only for the diver's height for the time period that the diver is in the air. So it is true only from t = 0 s to t = 2s (when the diver hits the water.) So the domain of the function is [0, 2]. As to the range of the function, again the function is only defined for when the diver is in the air. So the range will include only 0 ft (the surface of the water) to whatever the maximum height yes, i did err. i mentioned that the domain and range were values for "which the function was defined." but obviously infinity would not come into play, or the guy would be diving forever! The function is only defined for the period the diver is in the air, duh! ...why did i write otherwise:confused: Thanks Dan! I'd give you two thanks for your post if i could! July 12th 2007, 10:07 AM yes, i did err. i mentioned that the domain and range were values for "which the function was defined." but obviously infinity would not come into play, or the guy would be diving forever! The function is only defined for the period the diver is in the air, duh! ...why did i write otherwise:confused: Thanks Dan! I'd give you two thanks for your post if i could! No worries. It happens to the best of us. Well, not me, but the best of everyone else. (I'm soooo humble. :p ) July 12th 2007, 11:45 AM July 12th 2007, 11:52 AM
{"url":"http://mathhelpforum.com/math-topics/16784-falling-object-print.html","timestamp":"2014-04-18T15:53:50Z","content_type":null,"content_length":"12662","record_id":"<urn:uuid:38e7d0a2-d863-45a4-bbea-2ebfad798b9b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> High-dimensional EFA (optimize run time) Jaime Derringer posted on Friday, August 06, 2010 - 9:52 am I'm running a series of EFAs on 60-80 items, extracting up to 10 factors. We are using the ML estimator, and the data are missing completely at random (by design) and weights are applied. I am currently running this using M+ v6 on a 64-bit Windows 7 quad-core machine - the first EFA has now been running for 6 days. We are planning to purchase a server on which to run these analyses, and were wondering what kind of machine parameters would optimize run times, or how much M+ is using machine features like, e.g., on-board ram vs virtual ram on the hard drive. Linda K. Muthen posted on Friday, August 06, 2010 - 9:59 am Are the factor indicators categorical or continuous? Jaime Derringer posted on Friday, August 06, 2010 - 10:08 am Categorical (4 options) Linda K. Muthen posted on Friday, August 06, 2010 - 10:24 am With maximum likelihood and categorical factor indicators each factor is one dimension of integration. We do not recommend models with more than four dimensions of integration. I suggest using WLSMV which is less computationally demanding in this case. I would think that you have some idea of how many factors are represented by the set of items. If it is four, for example, perhaps extracting from three to five or two to six would be sufficient. Jaime Derringer posted on Friday, August 06, 2010 - 11:30 am Unfortunately, we need to use ML to handle the missingness in our data (which is substantial, so we lose more than 1/2 of our subjects otherwise). We are constructing a measure, and for one of the sections (with 80 items), the proposed number of factors is 10, so we also need to run up to the 10-factor solution. Is there a hardware solution to optimize M+'s running of a categorical ML large EFA analysis, without changing the analysis itself? Bengt O. Muthen posted on Friday, August 06, 2010 - 12:23 pm Note that WLSMV does not use listwise deletion. Staying with ML, both integ=3 and integ=montecarlo can present numerical precision problems with that many factors. A more practical full-information approach would be to do Bayesian multiple imputation followed by WLSMV. The approach is studied in Section 3.1 of Asparouhov, T. & Muthén, B. (2010). Multiple imputation with Mplus. Technical Report. which on our web site under Papers, Bayesian Analysis. The UG ex 11.5 shows how do to the multiple imputation step. Jaime Derringer posted on Saturday, August 07, 2010 - 6:45 am If WLSMV is not using listwise deletion, how exactly is it handling missing data? And how does that work if its based on polychorics and the associated asymptotic weight matrix? Bengt O. Muthen posted on Saturday, August 07, 2010 - 8:29 am Pairwise present. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=8&page=5768","timestamp":"2014-04-20T09:17:28Z","content_type":null,"content_length":"25577","record_id":"<urn:uuid:8f92c1cb-0a66-4d5c-96d3-2c90ef41c639>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Ring of Witt Vectors and Tensor product of Fields up vote 1 down vote favorite Let $p > 2$ be a prime, and let $\textbf{F}_{p} = \textbf{Z}/p\textbf{Z}$. Let $k_{1}$ be a finite field over $\textbf{F}_{p}$, and let $k$ be a perfect field of characteristic $p$. Then we have ring isomorphism $k_{1} \otimes_{\textbf{F}_{p}} k \cong \oplus_{i=1}^{n} l_{i}$ where $l_{i}$ are finite extensions of $k$. Question: How do we prove that $W(k_{1}) \otimes_{\textbf{Z}_{p}} W(k) \cong \oplus_{i=1}^{n} W(l_{i})$, where $W(k)$ denote the ring of Witt vectors of $k$? Any suggestions or comments would be greatly appreciated. ac.commutative-algebra ra.rings-and-algebras witt-vectors 2 It is not true in general that when $k$ is finite you will always have $k\subset k_1$ or $k_1\subset k$. – Kevin Ventullo Jul 12 '13 at 20:04 @KevinVentullo: Oops, sorry. I was being stupid. But the above result is still true in that case. I edited my question. Thanks anyway! – david Jul 12 '13 at 20:32 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged ac.commutative-algebra ra.rings-and-algebras witt-vectors or ask your own question.
{"url":"http://mathoverflow.net/questions/136548/ring-of-witt-vectors-and-tensor-product-of-fields","timestamp":"2014-04-19T09:24:24Z","content_type":null,"content_length":"48630","record_id":"<urn:uuid:8d573593-4221-4de2-b639-16c47b2aaff1>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Flintridge, CA Math Tutor Find a Flintridge, CA Math Tutor ...I tutored throughout high school (algebra, calculus, statistics, chemistry, physics, Spanish, and Latin) and tutored advanced math classes during college. Above all other things, I love to learn how other people learn and to teach people new things in ways so that they will find the material int... 28 Subjects: including prealgebra, logic, speech, Python ...In addition, I have plenty of experience writing and editing papers in different disciplines, so MLA and APA formats are very familiar. Throughout high school and college, I have taken several courses in philosophy, including honors classes through Biola's Torrey Academy. I am well acquainted w... 69 Subjects: including geometry, reading, prealgebra, SAT math ...My favorite subject to tutor and teach has always been Math, with Algebra holding a special place in my heart. I have an exceptional command of the subject and I have a good time when teaching. I use rhymes, songs, manipulatives, analogies, movement, and my enthusiasm to help students understand the material. 16 Subjects: including algebra 1, ACT Math, grammar, prealgebra ...Before moving to Los Angeles, I taught for years at a high-end tutoring firm in New York City, where I successfully helped students gain admission to the country's most competitive colleges and secondary schools. In addition to helping students prepare for standardized tests (SAT, ACT, ISEE, SSA... 42 Subjects: including trigonometry, precalculus, algebra 1, SAT math ...Listening), and by Act (ex. demonstration). Helping adults with standardized testing that can help them acquire further professional education, licensing and credentials is something I can deeply empathize with, having recently passed for myself with flying colors (92 out of the perfect score of... 18 Subjects: including algebra 1, prealgebra, reading, writing
{"url":"http://www.purplemath.com/flintridge_ca_math_tutors.php","timestamp":"2014-04-20T01:48:13Z","content_type":null,"content_length":"24004","record_id":"<urn:uuid:efe1865f-223b-41cc-ae23-ed66495fe3e1>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Can't seem to understand Well Ordering Principle May 30th 2012, 11:23 AM #1 Junior Member Can't seem to understand Well Ordering Principle Re: Can't seem to understand Well Ordering Principle Yes, they say the opposite. That's why they give a contradiction. The "hypothesis" that the set S does not include all positive integers must be false. Re: Can't seem to understand Well Ordering Principle I agree. To remind, the well-ordering property for positive integers says that every nonempty subset of positive integers has the least element. Note that the proof would go through with a weaker For every nonempty subset S of positive integers we have 1 ∈ S or there exists a number n such that n ∉ S and n + 1 ∈ S (*) It is clear that the well-ordering property implies (*). Now, (*) is in fact the contrapositive of induction. Indeed, considering the complement S' of S in (*) we have ∀S'. S' ≠ Z^+ ⇒ 1 ∉ S' ∨ ∃n. n ∈ S' ∧ n + 1 ∉ S'. The contrapositive of this is ∀S'. 1 ∈ S' ∧ (∀n. n ∈ S' ⇒ n + 1 ∈ S') ⇒ S' = Z^+, which is the induction principle. Thus, the outline of the proof is the same as the proof by contradiction of the contrapositive ¬B ⇒ ¬A from A ⇒ B: assume ¬B and suppose A; then B, a Similarly, the contrapositive of the whole well-ordering property is strong induction. Re: Can't seem to understand Well Ordering Principle lol, using the well-ordering principle to justify induction is like becoming an american citizen in order to get a permanent visa. and what i mean by this is: the principle of induction is equivalent to the well-ordering principle: given one, you can prove the other. such "proofs" amuse me, it's like defining existence to be a state of being. you haven't really accomplished anything. the principle of induction is something we would like to BELIEVE is true. this belief is founded on the notion that the successor map is injective. i'll admit it seems plausible, but it's only so because we say it's so. at some point, we should be honest and admit: "well, if the natural numbers DID exist, we'd like them to behave like THIS". look, i get it, we've built the "inductive" character of the natural numbers into our set theory. it's axiomatic. the desire for natural numbers to be included in the scope of ZF set theory is the raison d'etre of the axiom of infinity. @ the original poster: i sympathize with your confusion. all Mr. Rosen has done is to shift the proof of the principle of induction onto another principle: the well-ordering principle. and the well-ordering principle is typically proved....by induction! but it's still worth-while to understand the "form" of the proof. one way we often prove something is true, is to show that there are no counter-examples. and often this is paired with a proof by contradiction, like so: Pigs never fly Proof: If pigs fly, there must be some pig that flies. But as proved in Theorem A.B(1a), if any pig flies, then everything is true. But some things are not true, contradiction. a less tongue-in-cheek example of what i mean can be found here: May 30th 2012, 12:14 PM #2 MHF Contributor May 30th 2012, 02:01 PM #3 MHF Contributor May 31st 2012, 07:08 AM #4 MHF Contributor
{"url":"http://mathhelpforum.com/number-theory/199457-can-t-seem-understand-well-ordering-principle.html","timestamp":"2014-04-17T13:22:09Z","content_type":null,"content_length":"43542","record_id":"<urn:uuid:b7de4eb5-9e83-4489-9031-39f1b7fbcd91>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
I have a deep knowledge of physics and mathematics, having a Doctorate in physics from the University of Oxford, and having studied, researched and taught theoretical and experimental physics and mathematics at all levels for over 30 years. Equally importantly, I have a proven ability to explain the subtle ideas of physics and mathematics to students of all abilities, not only the brightest, but also those who are struggling to keep up. I have considerable experience of individual teaching. I have provided personal tuition at Common Entrance, GCSE, AS, A level and IB in physics and mathematics, to students of a wide range of abilities. Comments from satisfied students testify to my ability to make clear those subject areas which they find most difficult. I have taught physics and mathematics at all levels in Reading University Physics Department. This was a nationally recognized Centre of Excellence in Teaching and Learning in Physics, the only such centre in the UK and one of only a handful of Physics Departments to achieve a perfect score in the most recent Quality Assurance Agency assessment of national teaching standards in universities. In recognition of my teaching ability I was presented with an award for 'Outstanding contributions to Teaching & Learning' in the University. I was involved in establishing a pioneering pre-sessional course in our department, designed for A-level students who had not achieved the necessary grades in physics and mathematics for university entrance, to enable them to progress to our degree courses in physics. I taught on this course for over fifteen years, tutoring a wide range of students, some of whom continued on our degree courses to achieve first class honours. I taught first year courses in 'Mathematics for Scientists', introducing students to key mathematical ideas such as differential and integral calculus. At a more advanced level, I taught specialized courses for second, third and fourth year undergraduate students, and for graduate students at MSc and PhD level. My research interests are in optical physics and laser physics, particularly the interaction of ultra-high energy, ultra-short laser pulses with matter. A list of my publications and international conference presentations is available on request. I helped to design, build, test and maintain the Reading University Ultrafast Laser Laboratory. Students in my care are offered: Feedback through regular homework exercises Feedback through half-termly progress reports Past-paper based tuition aimed at improving performance in particular examinations Syllabus analysis to ensure that students concentrate on key areas Analysis of Examiners' Reports to ensure that students are aware of exactly what impresses examiners in particular examinations Advice on examination technique so that students know how to give the best account of themselves in examination conditions Help with preparation for Oxford and Cambridge interviews Tags: Wallingford Maths tutor, Wallingford Physics tutor, Wallingford Secondary Maths tutor, Wallingford GCSE Maths tutor, Wallingford A-Level Maths tutor, Wallingford University Maths tutor, Wallingford Secondary Physics tutor, Wallingford GCSE Physics tutor, Wallingford A-Level Physics tutor, Wallingford University Physics tutor
{"url":"http://www.firsttutors.com/uk/tutor/s-v.maths.physics","timestamp":"2014-04-16T21:52:13Z","content_type":null,"content_length":"19881","record_id":"<urn:uuid:d905853a-81ff-4b02-86fe-27ffaaf32820>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Astoria, NY Math Tutor Find an Astoria, NY Math Tutor I've had a strong love for the English language, and now I want to use my strengths to help others. I graduated from the University of Delaware with a BA in English. I minored in journalism and have worked as a blogger and professional copywriter since. 17 Subjects: including SAT math, English, reading, writing ...I am certified in teaching Mathematics 7-12 in New York. I have taught Precalculus in a classroom for 5 years. I have also tutored Precalculus online for two tutoring companies. 9 Subjects: including algebra 2, calculus, geometry, physics ...I have been a user of spreadsheet programs like Excel for 30 years. I can help you with similarity and congruency of triangles and other shapes, the Pythagorean Theorem, drawing circles and conics, and carrying out transformations. I can help you learn the basics of Word or more advanced tools like MailMerge. 36 Subjects: including precalculus, algebra 1, algebra 2, GRE ...Whenever working with students, I give the first session free so that students can get accustomed to my teaching style. I also prefer tutoring online especially with the technology and resources available. However, I am also available in the NYC/NJ/PA area. 9 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...Both in my personal life and professional life, I have made use of Sql in numerous ways, and through various programs and databases. I can train individuals to automate and enhance what they do using Sql. I have a Bachelor's degree in Finance, have studied Financial Math and Engineering, and have worked in the field for years. 15 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel Related Astoria, NY Tutors Astoria, NY Accounting Tutors Astoria, NY ACT Tutors Astoria, NY Algebra Tutors Astoria, NY Algebra 2 Tutors Astoria, NY Calculus Tutors Astoria, NY Geometry Tutors Astoria, NY Math Tutors Astoria, NY Prealgebra Tutors Astoria, NY Precalculus Tutors Astoria, NY SAT Tutors Astoria, NY SAT Math Tutors Astoria, NY Science Tutors Astoria, NY Statistics Tutors Astoria, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/Astoria_NY_Math_tutors.php","timestamp":"2014-04-20T07:08:04Z","content_type":null,"content_length":"23637","record_id":"<urn:uuid:6552c0c2-d412-4bbd-b5be-0cd7409d48fb>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00188-ip-10-147-4-33.ec2.internal.warc.gz"}