This is a new chapter for me and I have the key right here, so finding the solution isn't hard (I just need to flip to the page with the key), but I want to understand how I can find the solution myself. This is the question: "Enter the lines' equations in the form y = kx + m". Check the attachments for pictures of my current task. Can anyone tell me how to find this out? A simple calculation won't do; I need to understand each step you take to find the answer! This post has been edited by FrozenSnake: 28 March 2010 - 12:57 PM
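Since the attachments aren't available here, a minimal sketch of the general method: a line y = kx + m is determined by two points, so read two points off each graph, compute the slope k as rise over run, then solve for the intercept m. The points below are made up purely for illustration (plain Python):

def line_through(p1, p2):
    # Return (k, m) for the line y = kx + m through two points.
    (x1, y1), (x2, y2) = p1, p2
    k = (y2 - y1) / (x2 - x1)  # slope: change in y over change in x
    m = y1 - k * x1            # intercept: solve y1 = k*x1 + m for m
    return k, m

# e.g. a line through (1, 3) and (3, 7): k = 2, m = 1, so y = 2x + 1
print(line_through((1, 3), (3, 7)))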
{"url":"http://www.dreamincode.net/forums/topic/164768-some-math-assistance-needed-again/page__p__972814","timestamp":"2014-04-20T21:21:15Z","content_type":null,"content_length":"150170","record_id":"<urn:uuid:d95f254f-0933-4418-ad62-6aa106839087>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Double Galaxies - Igor Karachentsev

3.4. Separations of Double Galaxies and Selection Effects

In contrast to the radial velocities, the angular separation between galaxies enters directly into the selection of pairs for the catalogue and is strongly affected by the isolation criteria. As a result, the distribution of double galaxies according to apparent linear separation shows strong selection effects. Figure 6 presents the distribution of 585 catalogue pairs in the projected linear separation between components, X. The maximum in this distribution occurs at small separations; a small fraction of pairs have X > 100 kpc, and their distribution is shown in the inset in figure 6. The distribution of the number of model pairs in projected linear separation between components, shown in figure 7, agrees in most regards with the catalogue distribution. The maximum number of M-pairs occurs at the same separation (figure 7). The mean separation between members of optical pairs (72.0 kpc) and pseudo-pairs (39.7 kpc) is significantly larger than for the physical double systems (16.2 kpc), and the region X > 100 kpc is dominated by false M-pairs. It is clear that the appearance of the distribution in figure 7, and the extent of its resemblance to the distribution of real (catalogue) pairs in figure 6, will depend on the choice of parameters for the modelling, given in table 3. The mean separation between modelled double galaxies and their relative numbers per unit volume may be adjusted so as to satisfy both distributions. Note that, under any such variation of these parameters, the relative number of false pairs among wide (X > 100 kpc) pairs remains dominant.

In order to quantify the effect of our selection criteria on the distribution of the component separation in pairs, we examined those physical pairs in the model which do not satisfy the isolation criteria (Karachentsev 1981c). The results appear in figure 8. The selectivity function Q(X) indicates the probability that a pair of galaxies with projected linear separation X will be rejected by the basic criteria (2.6) to (2.8). The curve in the figure shows the selectivity function approximated by

$Q(X) = 1 - e^{-X/k_1}$,   (3.5)

where $k_1$ = 45 kpc. As previously expected, the isolation criteria introduce strong selection, particularly against wide pairs. For example, at X = 100 kpc only one-tenth of double galaxies satisfy the criteria. We may use this function (3.5) to recover the true distribution of double galaxies by projected linear separation. Just as in the preceding section, we will suppose that pairs with $f > 100 f_\odot$ are not physical. The distribution of the remaining 487 pairs in X is shown in the histogram in figure 9. This distribution shows a much less prominent tail than the distribution for all K-pairs in figure 6. The mean value of the projected mutual separation for the 487 double galaxies is 38.0 kpc, with a standard deviation of 39.0 kpc. We will derive an analytical expression for the distribution $n^*(X)$ in figure 9. Amongst simple formulae, the histogram is satisfied by a gamma distribution of the form

$n^*(X) \propto X^{1/2} e^{-X/k_2}$,   (3.6)

with parameter $k_2$ = 22.1 kpc. The expression (3.6), normalized to a sample size $N^*$ = 487, is shown as the continuous curve in figure 9. This distribution gives a mean separation between galaxies in pairs of $\langle X \rangle$ = 33.2 kpc.
Knowing the selectivity function Q(X), it is possible to derive the actual distribution of the projected separation:

$n(X) = n^*(X) / [1 - Q(X)]$.   (3.7)

Incorporating (3.5) and (3.6), we obtain for the undistorted distribution function the same sort of gamma function but with a different scale:

$n(X) \propto X^{1/2} e^{-X/k_3}$,   (3.8)

where $k_3 = k_1 k_2 / (k_1 - k_2)$ = 43.4 kpc. The maximum of this distribution is located at $k_3/2$ = 21.7 kpc, while the mean occurs at $3k_3/2$ = 65.1 kpc.

The observational data on the distribution of double galaxies according to projected separation are presented in figure 10 on a logarithmic scale. The filled points indicate the catalogue fractions from the histogram in figure 9, corrected according to (3.7) for observational selection. The vertical bars on these points indicate standard deviations. The open points indicate the same quantities for the entire sample of 585 pairs, including false, non-isolated systems. The analytical expression for these observational data, (3.8), is shown as the continuous curve. As we can see, the distribution of double galaxies per unit volume according to the projected separation of the components is satisfactorily described by (3.8) over the interval from 1 to 100 kpc. For X > 100 kpc the observed fraction n(X) begins to increase, but the statistical error also increases because of the small fraction of the sample in the tail of the distribution. From this it follows that, in the region X > 120 kpc, the selectivity function (see figure 8) has to be extrapolated. The uncertainty of the factors needed for this extrapolation, and the difficulty of removing false pairs among the wide systems, render estimates of n(X) for X > 120 kpc highly unreliable. From the results of the modelling we showed (see table 4) that false pairs constitute about 43% of the sample. Applying our criterion $f > 100 f_\odot$, the number of false pairs remaining in the catalogue is markedly small (98/585, about 17%). Even so, the tail of the distribution in figure 9 begins to show the effects of members of groups and clusters (most markedly among wide pairs).

Examining the distribution of bright galaxies to magnitude 14 with measured radial velocities, Davis and Peebles (1983) showed that the two-point correlation function of galaxies has the quantitative form $W(X) \sim X^{-0.77}$ over the interval [0.03 - 10] Mpc. This implies an expression $n(X) \sim X^{0.23}$, which is shown as the dashed line in figure 10. The actual observations, shown as points, show a tendency in the region X > 100 kpc to lie along this predicted line. From this we conclude that the strange behaviour of the distribution n(X) for X > 100 kpc does not have its origin in dynamically isolated pairs but rather in the contribution of members of groups and clusters of galaxies. Ignoring this fact may lead to very large mistakes in estimates of the orbital masses, as shown by Turner (1976b).

Attempts to establish the form of the distribution of double galaxies with respect to separation have been few and have yielded contradictory results. Holmberg (1954) proposed that the surface distribution of separations has the form $n(R) \sim 1 - (R/R_m)^3$, where the maximal scale for double systems $R_m$ is 307 kpc for H = 75 km/s/Mpc. Turner (1976b) and Peterson (1979b), for rather small samples of pairs, agreed on the form of a distribution $n(X) \sim X^{-c}$ for the interval [20 - 200] kpc, with the parameter c = 0.5 to 0.6. White and Valdes (1980), reexamining the data of Turner and Peterson, obtained a value for the parameter c of 0.3.
As is apparent from figure 10, the data from the observations allow a quantitative description over the limited interval [30 - 100] kpc. The form obtained for n(X) has two special properties: it does not have a characteristic scale, and it does not change its form upon transformation from projected mutual separation X to spatial separation R. However, an extrapolation of this measured distribution to R ~ 1 Mpc, such as was done by Turner (1976b), does not appear to be justified, and forces characteristics onto the ensemble of double galaxies which make it difficult to satisfy the data from the very closest double systems.
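The correction (3.7) is easy to verify numerically. Below is a minimal sketch, assuming the forms reconstructed above for (3.5), (3.6) and (3.8); it checks that dividing the selected distribution by 1 - Q(X) reproduces a gamma law with scale k_3 = k_1 k_2 / (k_1 - k_2):

import numpy as np

k1, k2 = 45.0, 22.1                    # kpc, from (3.5) and (3.6)
k3 = k1 * k2 / (k1 - k2)               # = 43.4 kpc, as in (3.8)

X = np.linspace(1.0, 100.0, 400)       # projected separation, kpc
Q = 1.0 - np.exp(-X / k1)              # probability a pair is rejected
n_sel = np.sqrt(X) * np.exp(-X / k2)   # selected distribution, shape of (3.6)
n_true = n_sel / (1.0 - Q)             # selection correction, eq. (3.7)

# (3.7) reduces to the same gamma shape with scale k3, i.e. eq. (3.8),
# because 1/k2 - 1/k1 = 1/k3:
assert np.allclose(n_true, np.sqrt(X) * np.exp(-X / k3))
print(k3, 1.5 * k3)                    # scale 43.4 kpc, mean 65.1 kpc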
{"url":"http://ned.ipac.caltech.edu/level5/Sept02/Keel/Keel3_4.html","timestamp":"2014-04-19T12:05:42Z","content_type":null,"content_length":"13172","record_id":"<urn:uuid:f4c0d460-74a6-4dd6-82f4-59069530afd3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
From Wikibooks, open books for an open world An ellipse is the collection of points for which the sum of the distances to two fixed points, called foci (singular focus), is constant (and equal to 2a). The foci lie on the major axis, which has a length of 2a. The minor axis has length 2b, and is shorter. The "roundness" or "longness" of an ellipse can be measured by eccentricity. If c is the distance from the center to a focus (so that c^2 = a^2 - b^2), then e = c / a. The latus rectum is a chord parallel to the minor axis that passes through a focus. Its length is 2b^2 / a (the semi-latus rectum is b^2 / a). "Long" ellipses are generally written as $\frac{(x-h)^{2}}{a^{2}} + \frac{(y-k)^{2}}{b^{2}} = 1$ where (h,k) is the center, while "tall" ellipses are written as $\frac{(y-k)^{2}}{a^{2}} + \frac{(x-h)^{2}}{b^{2}} = 1$
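As a quick illustration of these quantities, here is a minimal sketch (plain Python; the values are chosen arbitrarily):

import math

def ellipse_properties(a, b):
    # Focal distance, eccentricity and latus rectum for semi-axes a >= b > 0.
    c = math.sqrt(a**2 - b**2)     # center-to-focus distance
    e = c / a                      # eccentricity, 0 (circle) <= e < 1
    latus_rectum = 2 * b**2 / a    # focal chord parallel to the minor axis
    return c, e, latus_rectum

print(ellipse_properties(5, 3))    # (4.0, 0.8, 3.6)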
{"url":"http://en.wikibooks.org/wiki/Algebra/Ellipse","timestamp":"2014-04-19T04:31:10Z","content_type":null,"content_length":"24766","record_id":"<urn:uuid:4965c7d3-1b8c-4c2a-a75e-9ad0ce5249dd>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
ALEX Lesson Plans

Subject: Mathematics (6 - 7)
Title: How Far Can You Leap?
Description: This lesson will allow students to become familiar with the concept of unit rate. Through an open investigation students will develop methods to find unit rate with a table, equivalent ratios, or an equation. This is a lesson to be used as part of a unit with "Painter Problems" and "How Big Should It Be?" This is a College- and Career-Ready Standards showcase lesson plan.

Subject: Education and Training (7 - 8), or Mathematics (6), or Science (8), or Technology Education (6 - 8)
Title: It IS Easy Being "Green": Part II of III Creating Your Own Cleaning Supplies
Description: In this second part of a three-part lesson, students will collaborate in ability groups to complete a Jigsaw activity on all-purpose cleaners. Next, students will research, create, and evaluate homemade cleaning supplies and determine the effectiveness compared to conventional (store-bought) cleaners. Students will present their findings for both the jigsaw and their product in a flipchart on the Interactive WhiteBoard.

Subject: Mathematics (6)
Title: Thirsty for Ratios
Description: In this lesson the students will learn what a ratio is and how it can be used in comparison. In this lesson, students will also determine how to combine a sports drink in powder form and water to make enough for a whole football team. Students will be encouraged to use different strategies such as ratios and proportions. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.

Thinkfinity Lesson Plans

Subject: Mathematics
Title: Ratios
Description: This reproducible transparency, from an Illuminations lesson, features equations for finding the ratios for surface area and volume when comparing a model to a full-size object.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics
Title: The Ratio of Circumference to Diameter
Description: In this lesson, one of a multi-part unit from Illuminations, students measure the circumference and diameter of circular objects. They calculate the ratio of circumference to diameter for each object in an attempt to identify the value of pi and the circumference formula.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics
Title: Purple Prisms
Description: In this lesson, one of a multi-part unit from Illuminations, students investigate rectangular prisms using an online, interactive applet. They manipulate the scale factor that links two three-dimensional rectangular prisms to learn about edge lengths and surface area relationships.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics
Title: Paper Pool: Analyzing Numeric and Geometric Patterns
Description: This page provides an overview of a four-lesson Illuminations unit plan titled "Analyzing Numeric and Geometric Patterns of Paper Pool." The interactive paper pool game in this unit plan provides an opportunity for students to further develop their understanding of ratio, proportion, and least common multiple. This investigation includes student resources for the Paper Pool project, preparation notes, answers, and a holistic-by-category scoring rubric with guidelines for how it can be used to assess the project. Samples of two students' work and comments from a teacher accompany the suggested rubric. This resource references the Illuminations lessons titled "Paper Pool Game," "Explore More Tables," "Look for Patterns," and "Go the Distance."
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics
Title: Constant Dimensions
Description: In this Illuminations lesson, students measure the length and width of a rectangle using both standard and non-standard units of measure. In addition to providing measurement practice, this lesson allows students to discover that the ratio of length to width of a rectangle is constant, in spite of the units. For many middle school students, this discovery is surprising.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics
Title: Linking Length, Perimeter, Area, and Volume
Description: In this four-lesson unit, from Illuminations, students explore ratio, proportion, scale factor and similarity using perimeter, area, volume and surface area of various rectangular shapes. Students use an online interactive applet to explore how the perimeters, areas, and side lengths of similar rectangles are related.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Language Arts, Mathematics
Title: Can It Be?
Description: In this lesson, one of a multi-part unit from Illuminations, students participate in activities in which they focus on connections between mathematics and children's literature. They listen to the story The Phantom Tollbooth, by Norton Juster, and then explore and interpret the concept of averages.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics, Science
Title: Inclined Plane
Description: In this multiple-day activity, from Illuminations, students time balls rolling down inclines of varying lengths and heights. They then try to make inferences about the relationships among the variables involved.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics
Title: Getting into the Electoral College
Description: In this unit of 3 lessons from Illuminations, students are engaged in activities involving percentages, ratios, and area, with a focus throughout on building problem-solving and reasoning skills. They are designed to be used individually to fit within curriculum being covered at the time of an election. Additionally, the lesson extensions include many ideas for interdisciplinary activities and some possible school-wide activities.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Language Arts, Mathematics
Title: How Much is a Million?
Description: This lesson focuses students on the concept of 1,000,000. It allows students to see first-hand the sheer size of a million while at the same time providing students with an introduction to sampling and its use in mathematics. Students will use grains of rice and a balance to figure out the approximate volume and mass of 1,000,000 grains of rice.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics
Title: A Swath of Red
Description: In this lesson, one of a multi-part unit from Illuminations, students estimate the area of the country that voted for the Republican candidate and the area that voted for the Democratic candidate in the 2000 presidential election using a grid overlay. Students then compare the areas to the electoral and popular vote election results. Ratios of electoral votes to area are used to make generalizations about the population distribution of the United States.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8

Subject: Mathematics, Social Studies
Title: First Class First? Using Data to Explore the Tragedy of the Titanic
Description: In this Science NetLinks lesson, students analyze and interpret data related to the crew and passengers of the Titanic. They draw conclusions to better understand the people who were lost or saved as a result of the disaster, and whether or not social status affected the outcome.
Thinkfinity Partner: Science NetLinks. Grade Span: 9,10,11,12

Subject: Mathematics
Title: Capture - Recapture
Description: In this lesson, students experience an application of proportion that scientists actually use to solve real-life problems. Students learn how to estimate the size of a total population by taking samples and using proportions. The ratio of tagged items to the number of items in a sample is the same as the ratio of tagged items to the total population.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8,9,10,11,12

Subject: Mathematics
Title: Bean Counting
Description: By using sampling from a large collection of beans, students get a sense of equivalent fractions, which leads to a better understanding of proportions. Equivalent fractions are used to develop an understanding of proportions. The number-sense of recognizing equivalent fractions is useful when students study slope and proportions.
Thinkfinity Partner: Illuminations. Grade Span: 6,7,8,9,10,11,12

Thinkfinity Podcasts

Subject: Cross-Disciplinary - Informal Education, Health - Nutrition, Mathematics - Applied Mathematics, Mathematics - Arithmetic, Mathematics - Measurement, Science - Biological and Life Sciences, Science - Biology, Science - Chemistry, Science - General Science, Adult & Family Literacy - Lifeskills, Informal Education - Health/Wellness/Nutrition/Cooking
Title: How Can Math Help You Cook?
Description: You might think the best part of making brownies is enjoying the end result, but gathering and mixing the ingredients can be just as fun. Grab a spoon and let's find out what ice cream can teach us about math. Yummy!
Thinkfinity Partner: Wonderopolis. Grade Span: K,PreK,1,2,3,4,5

Web Resources: Learning Activities

Thinking Blocks Ratios: Thinking Blocks is an interactive video that models word problems that involve ratios and provides practice problems.

Thinkfinity Learning Activities

Subject: Cross-Disciplinary - Informal Education, Health - Nutrition, Mathematics - Applied Mathematics, Mathematics - Arithmetic, Mathematics - Measurement, Science - Biological and Life Sciences, Science - Biology, Science - Chemistry, Science - General Science, Adult & Family Literacy - Lifeskills, Informal Education - Health/Wellness/Nutrition/Cooking
Title: How Can Math Help You Cook?
Description: You might think the best part of making brownies is enjoying the end result, but gathering and mixing the ingredients can be just as fun. Grab a spoon and let's find out what ice cream can teach us about math. Yummy!
Thinkfinity Partner: Wonderopolis. Grade Span: K,PreK,1,2,3,4,5

Subject: Mathematics
Title: Calculation Nation
Description: Become a citizen of Calculation Nation! Play online math strategy games to learn about fractions, factors, multiples, symmetry and more, as well as practice important skills like basic multiplication and calculating area! Calculation Nation uses the power of the Web to let students challenge themselves and opponents from anywhere in the world. The element of competition adds an extra layer of excitement.
Thinkfinity Partner: Illuminations. Grade Span: 3,4,5,6,7,8,9
{"url":"http://alex.state.al.us/all.php?std_id=53832","timestamp":"2014-04-20T03:15:07Z","content_type":null,"content_length":"127790","record_id":"<urn:uuid:194c356c-0059-4ab7-a728-aa4bbfd953ca>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Probabilistic approaches to relevance feedback

We can use (pseudo-)relevance feedback, perhaps in an iterative process of estimation, to get a more accurate estimate of the term probabilities $p_t$ (incidence of term $t$ in relevant documents) and $u_t$ (incidence in nonrelevant documents):

1. Guess initial estimates of $p_t$ and $u_t$.
2. Use the current estimates of $p_t$ and $u_t$ to determine a best guess at the set of relevant documents, and present these results to the user.
3. We interact with the user to refine the model of the relevant set: the user judges some subset $V$ of the results, partitioning it into a judged-relevant set $VR$ and a judged-nonrelevant set $VNR$.
4. We reestimate $p_t$ and $u_t$ from the judged documents, e.g. $p_t = |VR_t| / |VR|$, where $VR_t$ is the subset of $VR$ containing term $t$ (smoothed by adding $\frac{1}{2}$ to the count and 1 to the total). However, the set of documents judged by the user ($V$) is usually very small, so the resulting statistical estimate is quite unreliable. It is often better to combine the new information with the original guess in a process of Bayesian updating. In this case we have:

$p_t^{(k+1)} = \frac{|VR_t| + \kappa p_t^{(k)}}{|VR| + \kappa}$

where $p_t^{(k)}$ is the $k$th estimate of $p_t$ and $\kappa$ is the weight given to the prior. Choosing a good value of $\kappa$ requires a bit more probability theory than we have presented here (we need to use a beta distribution prior, conjugate to the Bernoulli random variable).
5. Repeat the above process from step 2, generating a succession of approximations to the relevant set and hence to $p_t$, until the user is satisfied.

It is also straightforward to derive a pseudo-relevance feedback version of this algorithm, where we simply pretend that the top-ranked documents are relevant:

1. Assume initial estimates for $p_t$ and $u_t$ as above.
2. Determine a guess for the size of the relevant document set. If unsure, a conservative (too small) guess is likely to be best. This motivates use of a fixed-size set $V$ of the highest-ranked documents.
3. Improve our guesses for $p_t$ and $u_t$, now based on $V$. Letting $V_t$ be the subset of $V$ containing term $t$ and using add-$\frac{1}{2}$ smoothing, we get:

$p_t = \frac{|V_t| + \frac{1}{2}}{|V| + 1}$

and if we assume that documents that are not retrieved are nonrelevant, then we can update our $u_t$ estimates as:

$u_t = \frac{\mathrm{df}_t - |V_t| + \frac{1}{2}}{N - |V| + 1}$

4. Go to step 2 until the ranking of the returned results converges.

Once we have a real estimate, then from Equation 73, Equation 76, and Equation 80 we have a concrete ranking formula. But things aren't quite the same as for tf-idf: we see that we are now adding the two log-scaled components rather than multiplying them.

Exercises.
• Work through the derivation of Equation 74 from the equations preceding it.
• What are the differences between standard vector space tf-idf weighting and the BIM probabilistic retrieval model (in the case where no document relevance information is available)?
• Let …
• Describe the differences between vector space relevance feedback and probabilistic relevance feedback.

© 2008 Cambridge University Press
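To make the pseudo-relevance feedback loop concrete, here is a minimal sketch under the usual BIM conventions (documents represented as term sets, df_t and N taken from the collection). The function and variable names are illustrative, not from the book:

import math

def prf_weights(docs, df, N, top_ids, query_terms):
    # One pseudo-relevance feedback round: pretend the top-ranked set V is
    # relevant and re-estimate p_t and u_t with add-1/2 smoothing.
    V = [docs[d] for d in top_ids]
    weights = {}
    for t in query_terms:
        Vt = sum(1 for d in V if t in d)                    # |V_t|
        p = (Vt + 0.5) / (len(V) + 1)                       # p_t estimate
        u = (df[t] - Vt + 0.5) / (N - len(V) + 1)           # u_t estimate
        weights[t] = math.log(p * (1 - u) / (u * (1 - p)))  # log odds ratio c_t
    return weights  # a document's RSV = sum of c_t over its matching terms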
{"url":"http://nlp.stanford.edu/IR-book/html/htmledition/probabilistic-approaches-to-relevance-feedback-1.html","timestamp":"2014-04-20T13:25:28Z","content_type":null,"content_length":"19518","record_id":"<urn:uuid:03c225ab-8ef9-4522-8456-dfeac7c51b1e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
£129.99 in us dollars You asked: £129.99 in us dollars
{"url":"http://www.evi.com/q/%C2%A3129.99_in_us_dollars","timestamp":"2014-04-17T22:32:29Z","content_type":null,"content_length":"56439","record_id":"<urn:uuid:6134f1f9-3b96-419a-8e5d-b57f3c75ae3d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2009 [00527] [Date Index] [Thread Index] [Author Index]

Re: Problem with DSolve
• To: mathgroup at smc.vnet.net
• Subject: [mg96498] Re: Problem with DSolve
• From: Jean-Marc Gulliet <jeanmarc.gulliet at gmail.com>
• Date: Sat, 14 Feb 2009 04:14:44 -0500 (EST)
• Organization: The Open University, Milton Keynes, UK
• References: <gn3bnf$pt9$1@smc.vnet.net>

In article <gn3bnf$pt9$1 at smc.vnet.net>, Tony <aezajac at optonline.net> wrote:

> can anyone help, what is wrong?
> On version 7 I enter
> DSolve[{y'[x]==.02*y[x]-y[x]^2,y[0]==a},y[x],x]
> and get
> During evaluation of In[58]:= Solve::ifun: Inverse functions are being
> used by Solve, so some solutions may not be found; use Reduce for
> complete solution information. >>
> During evaluation of In[58]:= Solve::ifun: Inverse functions are being
> used by Solve, so some solutions may not be found; use Reduce for
> complete solution information. >>
> During evaluation of In[58]:= DSolve::bvnul: For some branches of the
> general solution, the given boundary conditions lead to an empty
> solution. >>

As a rule of thumb, one should always use exact numbers whenever possible when looking for *symbolic* solutions to an equation. Also, it is good practice to request the solution as a pure function, for it is easier afterwards to check the validity of the solution. Having done that, Mathematica 7.0 (and 6.0.3 alike) only returns a particular solution:

In[1]:= DSolve[{y'[x] == 2/100*y[x] - y[x]^2, y[0] == a}, y, x]
        y'[x] == 2/100*y[x] - y[x]^2 /. %[[1]]
        {$Version, $ReleaseNumber}

During evaluation of In[1]:= Solve::ifun: Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information. >>

Out[1]= {{y -> Function[{x}, (a E^(x/50))/(1 - 50 a + 50 a E^(x/50))]}}
Out[2]= True
Out[3]= {"7.0 for Microsoft Windows (32-bit) (November 10, 2008)", 0}
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Feb/msg00527.html","timestamp":"2014-04-16T07:45:04Z","content_type":null,"content_length":"26888","record_id":"<urn:uuid:753b1047-4f18-40a8-86e8-97ebc2b16370>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Math in the Media

April 2003

This doughnut universe. The top half of the front page of the New York Times Science section for March 11, 2003 was given over to illustrations for "Universe as Doughnut: New Data, New Debate", a substantial article by Dennis Overbye. The new data come from NASA's Wilkinson Microwave Anisotropy Probe, whose first results were released last month. The Probe is designed for precise measurement of the background microwave radiation permeating the universe; the anisotropy it examines is, as Overbye explains, the manifestation of sound waves expressing microscopic fluctuations ("lumps in the cosmic gravy") during the first instant of time. Overbye quotes Max Tegmark, a cosmologist at Penn: "There's a hint in the data that if you traveled far and fast in the direction of the constellation Virgo, you'd return to Earth from the opposite direction." The hint referred to by Tegmark comes from the spectrum of those waves: "If the universe were a guitar string, it would be missing its deepest notes, the ones with the longest wavelength, perhaps because it is not big enough to sustain them." The somewhat speculative conclusion is that the universe is finite, at least in certain directions. One section of the article is devoted to the topological implications of finiteness; it mentions William Thurston and Jeffrey Weeks as mathematicians who "have speculated about universes composed of various polyhedrons glued together in various ways." The article includes directions for making a torus out of a flat sheet of material. Don't try this at home.

Knots in the Washington Post. The March 9, 2003 Washington Post ran a review of a math book: Alexei Sossinsky's "Knots: Mathematics with a Twist" (Harvard University Press). The reviewer, John Derbyshire, describes "Knots" as "an account of mathematical knot theory, aimed at a nonspecialist reader" and, as pop-math books go, "at the high end of the range of difficulty for readers who are not mathematicians." But he adds: "Once you have grasped three or four basic ideas, and got into the knotty way of thinking, it is easy to expand your understanding." Derbyshire runs through the basic examples of knot theory (the unknot, the trefoil), explains knot equivalence, and introduces the idea of an invariant: "some characteristic mathematical object that is left unchanged by manipulations of the slide-but-don't-cut type." "I think the Jones polynomial ... will be the pons asinorum of the book for non-mathematicians. It is worth persevering with, though, for after 10 pages a very beautiful result is obtained ..." Sossinsky's "Knots" had been reviewed, by Andrzej Stasiak, in the January 30, 2003 Nature. He also singled out the calculation of the Jones polynomial: "This experience alone, if you're willing to put in the effort, makes the book worth reading."

Anthrax: the math. Results from a simulation using a mathematical model of an airborne anthrax attack (on a city the size of New York) were described in the March 18, 2003 Chronicle of Higher Education. The article, "Death Toll in Airborne Anthrax Attack Could Exceed 100,000, Mathematical Model Finds" by Lila Guterman, cites a study just published in the Proceedings of the National Academy of Sciences, by Lawrence Wein (Stanford), Edward Kaplan (Yale) and David Craft, a graduate student at MIT.
Wein is an expert in queuing theory: the thrust of the article is that if people have to wait in line for vaccination, in the case of a massive attack, many will die. The simulation showed, on the other hand, that "By simply eliminating the lines for antibiotics, the numbers of deaths can be nearly halved." Guterman quotes Wein: "Everything has to be measured in hours, not days ... We have to be very, very aggressive." Better yet, according to Wein, would be to distribute appropriate antibiotics beforehand to the entire population. In a statement quoted on the Stanford website he recommends: "Give it to the people now so that they can just turn on CNN and wait for Secretary Ridge to tell the people in their region to take their Cipro now." -Tony Phillips Stony Brook
{"url":"http://cust-serv@ams.org/news/math-in-the-media/mmarc-04-2003-media","timestamp":"2014-04-21T10:24:00Z","content_type":null,"content_length":"14364","record_id":"<urn:uuid:436a9f92-e243-489a-a03c-8c14a6379ffc>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Improper Integration

December 3rd 2009, 09:04 PM, #1:
The integral from 1 to infinity [sorry, I don't know how to put it in LaTeX] $\int_1^\infty \frac{3\,dx}{3x-x^2}$
From what my teacher showed me, you can use $\ln$, or partial fractions, to solve this, but I couldn't understand either method. How do you do it?

December 3rd 2009, 09:54 PM, #2 (Grand Panjandrum, joined Nov 2005):
Resolve into partial fractions: $\frac{3}{3x-x^2} = \frac{3}{x(3-x)} = \frac{A}{x} + \frac{B}{3-x}$, and a quick calculation shows that $A = B = 1$. This resolves the integrand into the sum of two terms, both of which are easily integrable (in terms of $\ln$).

December 3rd 2009, 10:02 PM, #3:
From there you solve for A and B and then take the integrals of those fractions... I'm on the right track, right?

December 3rd 2009, 10:44 PM, #4:
Yes.

December 3rd 2009, 10:45 PM, #5 (Senior Member, joined Nov 2009):
In the second step CB solved for A, B and found A = B = 1. Now you can integrate both fractions separately, which is easy.
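For reference, the antiderivative the replies point toward, written out (standard partial-fractions calculus, not quoted from the thread):

$\int \frac{3\,dx}{3x - x^2} = \int \left( \frac{1}{x} + \frac{1}{3 - x} \right) dx = \ln|x| - \ln|3 - x| + C = \ln\left|\frac{x}{3-x}\right| + C$

Note that the integrand is singular at x = 3, which lies inside [1, ∞), so the improper integral must also be split there; since $\ln|x/(3-x)| \to +\infty$ as $x \to 3^-$, the piece from 1 to 3 already diverges, and hence so does the whole integral.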
{"url":"http://mathhelpforum.com/calculus/118403-improper-integration.html","timestamp":"2014-04-18T12:24:52Z","content_type":null,"content_length":"44803","record_id":"<urn:uuid:49b45ce2-c521-47e9-8b75-8f38ddeaeb51>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Various Articles

With Robert Gilmore
Chaotic data generated by a three-dimensional dynamical system can be embedded into R^3 in a number of inequivalent ways. However, when lifted into R^5 they all become equivalent, indicating that they all belong to a single universality class sharing a common chaos-generating mechanism. We present a complete invariant determining this universality class and distinguishing attractors generated by distinct mechanisms. This invariant is easily computable from an appropriately "dressed" return map of any particular three-dimensional embedding.

With Robert Gilmore
Embeddings are diffeomorphisms between some dynamical phase space and a reconstructed image. Different embeddings may or may not be equivalent under isotopy. We regard embeddings as representations of the dynamical phase space. We determine the topological labels required to distinguish inequivalent representations of three-dimensional dissipative dynamical systems when the embeddings are into R^k, k=3,4,5,.... Three representation labels are required for embeddings into R^3, and only one is required in R^4. In R^5 there is a single "universal" representation.

With Robert Gilmore
Baker-Campbell-Hausdorff formulas are exceedingly useful for disentangling operators so that they may be more easily evaluated on particular states. We present such a disentangling theorem for general bilinear and linear combinations of multiple boson creation and annihilation operators. This work generalizes a classical result of Schwinger.

With Robert Gilmore
Takens has shown that a dynamical system may be reconstructed from scalar data taken along some trajectory of the system. A reconstruction is considered successful if it produces a system diffeomorphic to the original. However, if the original dynamical system is symmetric, it is natural to search for reconstructions that preserve this symmetry. These generally do not exist. We demonstrate that a differential reconstruction of a nonlinear dynamical system preserves at most a twofold symmetry.

With Robert Gilmore
Ideally an embedding of an N-dimensional dynamical system is N-dimensional. Ideally, an embedding of a dynamical system with symmetry is symmetric. Ideally, the symmetry of the embedding is the same as the symmetry of the original system. This ideal often cannot be achieved. Differential embeddings of the Lorenz system, which possesses a twofold rotation symmetry, are not ideal. While the differential embedding technique happens to yield an embedding of the Lorenz attractor in three dimensions, it does not yield an embedding of the entire flow. An embedding of the flow requires at least four dimensions. The four-dimensional embedding produces a flow restricted to a twisted three-dimensional manifold in R^4. This inversion-symmetric three-manifold cannot be projected into any three-dimensional Euclidean subspace without singularities.

With Ryan Michaluk and Robert Gilmore
An algorithm inspired by genome sequencing is proposed which "reconstructs" a single long trajectory of a dynamical system from many short trajectories. This procedure is useful in situations when many data sets are available but each is insufficiently long to apply a meaningful analysis directly. The algorithm is applied to the Rössler and Lorenz dynamical systems as well as to experimental data taken from the Belousov-Zhabotinskii chemical reaction. Topological information was reliably extracted from each system, and geometrical and dynamical measures were computed.

With Robert Gilmore
Embeddings are diffeomorphisms between some unseen physical attractor and a reconstructed image. Different embeddings may or may not be equivalent under isotopy. We regard embeddings as representations of the attractor, review the labels required to distinguish inequivalent representations for an important class of dynamical systems, and discuss the systematic ways inequivalent embeddings become equivalent as the embedding dimension increases until there is finally only one "universal" embedding in a suitable dimension.

The interaction of a magnetic dipole with a point charge leads to an apparent paradox when analyzed using the 3-vector formulation of the Lorentz force. Specifically, the dipole is subject to a torque in some frames and not in others. We show that when analyzed according to the covariant 4-vector formulation the paradox disappears. The torque that arises in certain frames is connected to the time-space components of the torque in the rest frame, giving rise to "hidden" momentum.

The observed accelerated cosmic expansion is problematic in that it seems to require an otherwise unobserved dark energy for its origin. A possible alternative explanation has been recently given, which attempts to account for this expansion in terms of a hypothesized matter-antimatter repulsion. This repulsion or anti-gravity is derived by applying the CPT theorem to general relativity. We show that this proposal cannot work for two reasons: 1) it incorrectly predicts the behavior of photons and 2) the CPT transformation itself is not consistently applied.

The linking integral is an invariant of the link-type of two manifolds immersed in a Euclidean space. It is shown that the ordinary Gauss integral in three dimensions may be simplified to a winding number integral in two dimensions. This result is then generalized to show that in certain circumstances the linking integral between arbitrary manifolds may be similarly reduced to a lower-dimensional integral.

The recently proposed Cooperstock-Tieu galaxy model claims to explain the flat rotation curves without dark matter. The purpose of this note is to show that this model is internally inconsistent and thus cannot be considered a valid solution. Moreover, by making the solution consistent the ability to explain the flat rotation curves is lost.

Mach's principle states that the local inertial properties of matter are determined by the global matter distribution in the universe. In 1958 Cocconi and Salpeter suggested that due to the quadrupolar asymmetry of matter in the local galaxy about the earth, inertia on earth would be slightly anisotropic, leading to unequal level splittings of nuclei in a magnetic field [1,2]. Hughes, et al., Drever, and more recently Prestage, et al. found no such quadrupole splitting [3-5]. However, recent cosmological observations show an anisotropy in the Cosmic Microwave Background, indicating anisotropy of the matter at much greater distances. Since the inertial interaction acts as a power law of order unity, the effect of this matter would far outweigh the relatively local contribution from the galaxy [1,6]. Thus, the present article extends the work of Cocconi and Salpeter to higher multipoles, leading to unequal level splittings that should be measurable by magnetic resonance experiments on nuclei of appropriate spin.
Note: PhD Oral qualifying report.

Of all the mysteries of quantum mechanics, the existence of half-integer spin is perhaps the hardest to accept. In this talk I want to take a look at why spin exists. It turns out that spin owes its existence to some rather deep and counterintuitive properties of three-dimensional space. However, these properties have implications in classical physics, not just quantum mechanics. I will describe these properties, some of their classical manifestations, and how they give rise to spin. As a bonus, we will see why in two dimensions there exist particles with arbitrary spin, so-called anyons.
Supplementary videos:
Note: Beamer presentation. Report version is in the works.

Demonstration of the group-theoretical origin of Maxwell's equations. The equations are constraints on a classical field to suppress non-physical degrees of freedom which are not present in the fundamental quantum description of the field. This is mostly complete, but there are some details which are glossed over that I hope to fix in the future.

The twin paradox is analyzed in situations where no acceleration is necessary for the twins to reunite. Sorry if the format makes it difficult to understand - I should add a complete article version at some point.
Note: Beamer presentation.

The Master Analytic Representation for the root space A1 is constructed. This gives all of the unitary irreducible representations of the two real forms of this root space, su(2) and su(1,1). This procedure is carried out in a generalization of Schwinger's presentation for angular momentum.

When considering maps in several complex variables one may want to consider whether the maps are immersive, submersive, or locally diffeomorphic. These same questions are easily formulated in terms of functions of real variables using the Jacobian determinant. This article uses the natural correspondence between complex and real maps to extend the real result to the complex case, expressing this result entirely in terms of the complex functions (the complex Jacobian). To do this we employ a result of Sylvester on the determinant of block matrices.

The Flux Rule for calculating the EMF due to a changing magnetic flux is critically examined. First, the rule is derived from Maxwell's equations in a way that unifies the two contributions to the flux change. Then it is shown that so-called "failures of the flux rule" are not problems with the actual rule, but rather lie in trying to improperly deduce a stronger local result from the weaker global result that the rule actually provides.

This method requires a little bit of complex analysis, but otherwise is at the level of elementary calculus and requires no special machinery. The Doppler shift equations are derived exploiting invariance principles in the more general cases, keeping the hard work to a minimum.

Problems & Solutions
Problems and solutions. Note: All problems in chapters 1-7. A few problems from chapters 8-10.
Problems and solutions. Note: Only chapter 2.
Problems and solutions. Note: Only chapters 2 and 3 are finished.
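For reference on the linking-integral abstract above: the three-dimensional Gauss integral being simplified there is the standard one (general knowledge, not quoted from the paper),

$\mathrm{Lk}(\gamma_1, \gamma_2) = \frac{1}{4\pi} \oint_{\gamma_1} \oint_{\gamma_2} \frac{(\mathbf{r}_1 - \mathbf{r}_2) \cdot (d\mathbf{r}_1 \times d\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|^3},$

which takes integer values on disjoint closed curves and is invariant under deformations that avoid crossings.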
{"url":"http://www.haverford.edu/physics-astro/dcross/academics/","timestamp":"2014-04-16T21:54:02Z","content_type":null,"content_length":"16641","record_id":"<urn:uuid:ec23f4a7-a4ab-4a05-a1da-4d851ad04e18>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Carnival of Mathematics

The Carnival of Mathematics is a monthly blogging round-up hosted by a different blog each month. The Aperiodical will be taking responsibility for organising a host each month, and links to the monthly posts will be added here. To volunteer to host a forthcoming Carnival (see below for months needing a host), please contact Katie.

The Carnival of Mathematics accepts any mathematics-related blog posts: explanations of serious mathematics, puzzles, writing about mathematics education, mathematical anecdotes, refutations of bad mathematics, applications, reviews, etc. Sufficiently mathematized portions of other disciplines are also acceptable. An FAQ can be found at the bottom of this page.

Current: Carnival of Mathematics 109 by Tony at Tony's Maths Blog.
Next: Carnival of Mathematics 110 will be hosted by Colin at Flying Colours Maths. The closing date for submissions will be 5th May. Click here to submit an item to Carnival 110

Future Carnival organisers: We're always looking for more Carnival organisers. If you're interested, email Katie.

Carnival 110 – May 2014: Colin at Flying Colours Maths
Carnival 111 – June 2014: Peter at Mathblogging.org
Carnival 112 – July 2014: Robin at Theorem of the Day
Carnival 113 – August 2014: Mike at Walking Randomly
Carnival 114 – September 2014: Murray at SquareCirclez
Carnival 115 – October 2014: William at MathTuition88

Previous Carnival of Mathematics posts:

Carnival 108 – March 2014: John at Math Hombre
Carnival 107 – February 2014: Frederick at White Group Mathematics
Carnival 106 – January 2014: Katie at Blackboard Bold
Carnival 105 – December 2013: Oluwasanya at Mathemazier
Carnival 104 – November 2013: Shecky R at Math-Frolic
Carnival 103 – October 2013: Evelyn at Roots of Unity
Carnival 102 – September 2013: Michelle at My Summation
Carnival 101 – August 2013: Aperiodical team, at The Aperiodical
Carnival 100 – July 2013: Richard Elwes at Simple City
Carnival 99 – June 2013: Sol at Wild About Math
Carnival 98 – May 2013: Andrew at andrewt.net/blog
Carnival 97 – April 2013: Colin at Flying Colours Maths
Carnival 96 – March 2013: Sue at Math Mama Writes
Carnival 95 – February 2013: Jorge at Maths Fact
Carnival 94 – January 2013: Paul at The Aperiodical
Carnival 93 – December 2012: Tosin at X in Vogue
Carnival 92 – November 2012: Frederick at White Group Mathematics
Carnival 91 – October 2012: Owen at Matheminutes
Carnival 90 – September 2012: Mike at Walking Randomly
Carnival 89 – August 2012: Katie at The Aperiodical
Carnival 88 – July 2012: at cp's mathem-o-blog
Carnival 87 – June 2012: John at Random Walks
Carnival 86 – May 2012: Brent at The Math Less Travelled
Carnival 85 – April 2012: Peter at Travels in a Mathematical World
Carnival 84 – December 2011: Guillermo at Mathematics and Multimedia
Carnival 83 – November 2011: Karyn at Teach Beside Me
Carnival 82 – October 2011: at the Vedic Maths Forum Blog
Carnival 81 – September 2011: Sol at Wild About Math
Carnival 80 – August 2011: Mike at Walking Randomly
Carnival 79 – July 2011: John at The Endeavour
Carnival 78 – June 2011: at JimWilder.com
Carnival 77 – May 2011: Fëanor at Jost A Mon
Carnival 76 – April 2011: Mike at Walking Randomly
Carnival 75 – March 2011: Daniel Colquitt (link broken)
Carnival 74 – February 2011: Mike at Walking Randomly
Carnival 73 – January 2011: Mike at Walking Randomly
Carnival 72 – December 2010: Batman at Three Sixty
Carnival 71 – November 2010: at Theorem of the Day
Carnival 70 – October 2010: Daniel Colquitt (link broken; reposted 31/03/2012 at Mathematical Musings Tumblr)
Carnival 69 – September 2010: Jonathan at JD2718
Carnival 68 – August 2010: at Plus Magazine
Carnival 67 – July 2010: Peter at Travels in a Mathematical World
Carnival 66 – June 2010: Sol at Wild About Math
Carnival 65 – May 2010: Edmund at Maxwell's Demon
Carnival 64 – April 2010: Mike at Walking Randomly
Carnival 63 – March 2010: Dan at Math Recreation
Carnival 62 – February 2010: John at The Endeavour
Carnival 61 – January 2010: Mike at Walking Randomly
Carnival 60 – December 2009: Nick at Sum Idiot
Carnival 59 – November 2009: Jason at The Number Warrior
Carnival 58 – September 2009: Mike at Walking Randomly
Carnival 57 – September 2009: Batman at Three Sixty
Carnival 56 – August 2009: Rod Carvalho
Carnival 55 – July 2009: at Sowerby Maths (link broken)
Carnival 54 – July 2009: Todd and Vishal at Topological Musings
Carnival 53 – June 2009: Brent at The Math Less Travelled
Carnival 52 – May 2009: Jason at The Number Warrior
Carnival 51 – April 2009: Murray at SquareCircleZ
Carnival 50 – February 2009: John at The Endeavour
Carnival 49 – February 2009: Batman at Three Sixty
Carnival 48 – January 2009: yanzhang at Concrete Nonsense
Carnival 47 – January 2009: Jonathan at JD2718
Carnival 46 – December 2008: Mike at Walking Randomly
Carnival 45 – December 2008: at TCM Technology Blog (private link)
Carnival 44 – November 2008: Edmund at Maxwell's Demon
Carnival 43 – November 2008: Jason at The Number Warrior
Carnival 42 – October 2008: John at The Endeavour
Carnival 41 – October 2008: Batman at Three Sixty
Carnival 40 – September 2008: Barry at Staring at Empty Pages
Carnival 39 – August 2008: A at It's the Thought That Counts
Carnival 38 – August 2008: at CatSynth
Carnival 37 – July 2008: Ian at Logic Nest
Carnival 36 – July 2008: Charles at Rigorous Trivialities
Carnival 35 – June 2008: at CatSynth
Carnival 34 – May 2008: Batman at Three Sixty
Carnival 33 – May 2008: Mike at Walking Randomly
Carnival 32 – May 2008: at TCM Technology Blog (private link)
Carnival 31 – April 2008: Jeffrey at Recursivity
Carnival 30 – April 2008: Jason at The Number Warrior
Carnival 29 – March 2008: Jordan at Quomodocumque
Carnival 28 – March 2008: Tyler and Foxy at Tyler and Foxy's Scientific and Mathematical Adventure Land
Carnival 27 – February 2008: Jonathan at JD2718
Carnival 26 – February 2008: Batman at Three Sixty
Carnival 25 – January 2008: Mike at Walking Randomly
Carnival 24 – January 2008: Walt at Ars Mathematica
Carnival 23 – December 2007: Brent at The Math Less Travelled
Carnival 22 – December 2007: Sol at Wild About Math
Carnival 21 – December 2007: Ben at Secret Blogging Seminar
Carnival 20 – November 2007: Murray at SquareCircleZ
Carnival 19 – October 2007: Mark at Good Math, Bad Math
Carnival 18 – October 2007: Jonathan at JD2718
Carnival 17 – September 2007: Dave at MathNotations
Carnival 16 – September 2007: Kurt at Learning Computation
Carnival 15 – August 2007: John at JohnKemeny.com
Carnival 14 – August 2007: Vlorbik at MathEdZineBlog
Carnival 13 – July 2007: Polymath at Polymathematics
Carnival 12 – July 2007: at the Vedic Maths Forum Blog
Carnival 11 – June 2007: Pi Guy at Grey Matters
Carnival 10 – June 2007: Dave at MathNotations
Carnival 9 – June 2007: Jonathan at JD2718
Carnival 8 – May 2007: Suresh at Geomblog
Carnival 7 – May 2007: Arunn at Nonoscience
Carnival 6 – April 2007: Graeme at Modulo Errors
Carnival 5 – April 2007: Charles at Science and Reason
Carnival 4 – March 2007: Jason at EvolutionBlog
Carnival 3 – March 2007: Michi at Michi's Blog
Carnival 2 – February 2007: Mark at Good Math, Bad Math (Archived page)
Carnival 1 – February 2007: Alon at Abstract Nonsense

Carnival of Mathematics FAQ:

When are the carnivals published?
The Carnival of Mathematics gets published during the first week of the month. Math Teachers at Play, another maths carnival organised by Denise at LetsPlayMath, gets published during the third week.

I would like to host a carnival at my blog. What should I do?
Check the list of upcoming Carnival hosts above, and then contact Katie to let her know you're interested.

Who does the admin for the Carnival?
The Carnival of Mathematics is currently being organised by the Aperiodical. The Aperiodical will be taking responsibility for organising a host each month, and links to the monthly posts will be added here.

I've found a cool maths article that someone else has written. Can I submit it to the carnival?
Yes, but in an ideal world it will be a recent post and should have never been submitted to one of the carnivals before. The best way to be sure of this, if the post is not your own, is to send in only something published since the last edition of the carnival.

Who decides what gets included in a carnival and what doesn't?
The carnival hosts. The carnival is just a guest on the host's blog, and so what each host writes is entirely up to him or her. In general, most carnival hosts will include almost everything that is submitted to them and a bit more besides. However, if they choose NOT to link to something, then so be it.

Will the Carnival of Mathematics take articles on basic maths?
Yes — everything from kindergarten to cutting edge research is fair game for the Carnival of Math. Basic mathematics can be submitted to either the Carnival of Mathematics, or Math Teachers At Play (as described above), but advanced maths should generally be submitted only to the Carnival of Math.

Do you accept articles from subjects such as computer science or physics?
As long as there is a reasonable amount of maths content, then yes.

Is there anything else I need to do, besides submitting my article?
No, there is nothing else you have to do.
When the carnival is published, however, you may want to post a link to it on your blog. The carnival host will appreciate your support, and your readers will enjoy a chance to browse what other math bloggers have written.

Does the carnival have a twitter feed?
Yes. It can be found at @CarnivalOfMath.

35 Responses to "Carnival of Mathematics"

Out of the norm: Here's the missing carnival 7, via the Internet archive: http://web.archive.org/web/20110716094553/http://www.nonoscience.info/2007/05/04/carnival-of-mathematics-edition-7/

Katie Steckles

Peter Rowlett: I had a conversation with Daniel Colquitt on Twitter about #70 and #75. He didn't realise he had deleted the posts but had just meant to deactivate the blog account. He's now found the posts and will rehost them.

Peter Rowlett: Ah, as soon as I posted that he's reposted #70 but #75 appears to be gone for good (and not at archive.org or a Google cache - where else might it be?).

Rod Carvalho: The 56th installment was actually hosted on my blog, not on Sue's. In any case, thanks for compiling this list.
• Peter Rowlett: Rod, thanks for letting us know. I logged in to sort that out but either Christian or Katie has already made the change. The error was certainly mine. Any chance you want to host the Carnival again? We're looking for hosts from March 2013 onwards.

Sue VanHattum: There's nothing here about how to submit. I'll send to Katie?
• Christian Perfect: There's a big huge link at the top of the page saying "Click here to submit an item to Carnival 94". Here it is again: Click here to submit an item to Carnival 94

Sue VanHattum: Ok, I'm officially embarrassed. (I wonder if I'm blind to blue text when I'm scanning?)

Hi, although #99 is published it still says "Click here to submit an item to Carnival 99" up above. I clicked there and submitted something for #100. Will #100 find it, or has it gone down into a #99 space where no-one is looking?
• Christian Perfect: Whoops! Yes, it's in the queue for #100. Thanks for alerting us to the error.

Trackback: Travels in a Mathematical World
Pingback: Carnival of Mathematics 98 | andrewt.net
{"url":"http://aperiodical.com/carnival-of-mathematics/","timestamp":"2014-04-19T14:29:23Z","content_type":null,"content_length":"81172","record_id":"<urn:uuid:16dedf46-c81f-4038-9f45-00375d8fae33>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by saud on Sunday, October 30, 2011 at 6:48pm.

Use the Linear Approximation to estimate ∆f = f(3.5) − f(3) for f(x) = 2/(1 + x^2).
Δf ≈
Estimate the actual change. (Use decimal notation. Give your answer to five decimal places.)
Compute the error and the percentage error in the Linear Approximation. (Use decimal notation. Give your answer to five decimal places.)
Error =

• Calculus - bobpursley, Sunday, October 30, 2011 at 7:23pm
Linear approximation means getting the slope. At x = 3.25, df/dx = -4*3.25/(1+3.25^2)^2 = -.0972, so df = -.0972*Δx, where Δx = 3.5 - 3 = .5. Error: (.0488 - .050)/.050, about 2 percent.

• Calculus - saud, Sunday, October 30, 2011 at 8:13pm
it says the answer is incorrect
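For reference, the standard Linear Approximation (which is most likely what the grader expects) takes the derivative at the left endpoint x = 3 rather than at the midpoint. A quick check in plain Python, rounding to the requested five decimal places:

f = lambda x: 2 / (1 + x**2)
fp = lambda x: -4 * x / (1 + x**2) ** 2   # f'(x)

approx = fp(3) * 0.5          # linearization: f'(3) * Δx = -0.12 * 0.5 = -0.06
actual = f(3.5) - f(3)        # actual change = 2/13.25 - 0.2
error = abs(approx - actual)
pct = 100 * error / abs(actual)

print(round(actual, 5))   # -0.04906
print(round(error, 5))    #  0.01094
print(round(pct, 5))      #  22.30769 (percent)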
{"url":"http://www.jiskha.com/display.cgi?id=1320014891","timestamp":"2014-04-20T02:38:04Z","content_type":null,"content_length":"9044","record_id":"<urn:uuid:c738cd7b-6253-4a70-a282-5c4fbb6e2fb3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] Connectivity across a grid above a variable surface Waichler, Scott R Scott.Waichler at pnl.gov Wed Jan 4 01:18:05 CET 2006 I'm looking for ideas or packages with relevant algorithms for calculating the connectivity across a grid, where connectivity is defined as the minimum amount of cross-sectional area along a continuous path. The upper boundary of the cross-sectional area is a fixed elevation, and the lower boundary is a gridded surface of variable elevation. My variable elevation surface represents the top of an impermeable geologic layer. I would like to represent the degree to which a fluid could flow from one end of my grid to another, above the surface and below the fixed level. I don't need to derive information about path lengths and hydraulic gradient, but if I could, that would be a plus. A groundwater flow model would provide the exact answer, but I'm looking for something more approximate and faster. My grids are such that there are many "dead-end" flow paths, where the bottom boundary rises to meet the top boundary and the cross-sectional area available for flow pinches out. In plan view, fluid can enter all along one boundary and leave all along the opposite boundary, but flow connectivity across the grid varies between bottom boundary scenarios. Scott Waichler Pacific Northwest National Laboratory scott.waichler _at_ pnl.gov More information about the R-help mailing list
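One standard way to formalise the quantity asked about (this is a suggestion, not from the original post) is a maximum-bottleneck path: over all left-to-right paths, maximise the minimum per-cell cross-sectional area. A minimal sketch, assuming a rectangular grid, SciPy available, and hypothetical names throughout:

import numpy as np
from scipy import ndimage

def bottleneck_connectivity(bottom, top, dx):
    # Largest area A such that cells whose cross-sectional area is >= A
    # still connect the left edge of the grid to the right edge.
    depth = np.clip(top - bottom, 0.0, None)   # vertical opening above surface
    area = depth * dx                          # per-cell cross-sectional area
    lo, hi = 0.0, float(area.max())
    for _ in range(60):                        # bisection on the bottleneck value
        mid = 0.5 * (lo + hi)
        labels, _ = ndimage.label(area >= mid) # connected open regions
        touching = (set(labels[:, 0]) & set(labels[:, -1])) - {0}
        lo, hi = (mid, hi) if touching else (lo, mid)
    return lo

A full groundwater model would still be needed for path lengths and gradients, but this kind of threshold-and-flood-fill computation is fast and captures the "pinch-out" behaviour described above.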
{"url":"https://stat.ethz.ch/pipermail/r-help/2006-January/085328.html","timestamp":"2014-04-18T23:17:27Z","content_type":null,"content_length":"3757","record_id":"<urn:uuid:ee536941-1e9b-48e9-8dc1-ef8bcf05d3c1>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Currency format
From Ripple Wiki
(Redirected from Currency Format)

There are three types of currencies on Ripple: Ripple's native currency, also known as Ripples or XRP, fiat currencies, and custom currencies. The latter two are used to denominate IOUs on the Ripple network.

Native currency is handled by the absence of a currency indicator. If there is ever a case where a currency ID is needed, native currency will use all zero bits. Custom currencies will use the 160-bit hash of their currency definition node as their ID. (The details have not been worked out yet.)

National currencies will be handled by a naming convention that specifies the three-letter currency code, a version (in case the currency is fundamentally changed), and a scaling factor. Currencies that differ only by a scaling factor can automatically be converted as transactions are processed. (So whole dollars and millionths of a penny can both be traded and inter-converted automatically.)

The 160-bit identifier for national currencies will be assembled as follows:

Name      Bits  Description
Zero      96    Indicates a national currency
ISO code  24    The three-letter code for this currency in upper case ASCII
Version   16    This will be zero for all currencies at first. It can be bumped if a currency is replaced/reissued
Reserved  24    Should be zero for now, but non-zero values should not be rejected

See also: Custom Currencies

A custom currency's 160-bit identifier is the hash of the currency definition block, which is also its storage index. The block contains:

• Domain of issuer.
• Issuer's account.
• Auditor's account (if any).
• Client display information.
• Hash of policy document.

The Ripple code uses a specialized internal and wire format to represent amounts of currency. Because the server code does not rely on a central authority, it can have no idea what ranges are sensible with currencies. In addition, it's handy to be able to use the same code both to handle currencies that have inflated to the point that you need billions of units to buy a loaf of bread and to handle currencies that have become extremely valuable and so extremely small amounts need to be tracked.

The design goals are:

• Accuracy - 15 decimal digits
• Wide representation range - 10^80 units of currency to 10^-80 units
• Deterministic - no vague rounding by unspecified rules
• Fast - integer math only

Internal Format

The format used internally is an integer exponent and a 64-bit integer mantissa. The number of units of currency represented is the mantissa multiplied by ten raised to the power of the exponent. The legal exponent range is -96 to +80. With the exception of 0 (which is represented as a mantissa of zero), the legal mantissa range is 10^15 to 10^16 - 1.

Wire Format

The wire format is a single unsigned 64-bit integer. It is formed by adding 124 to the exponent and placing that in the upper 8 bits. The mantissa is placed in the lower 56 bits. This ensures that comparisons on numbers in wire format work.

Display Format

The display/Json format is exact in all cases. If the range is sensible for decimal display (which it almost always will be), the display is in standard currency format with no extra digits. Typical examples of this format are: 1, 2.25, and .001432. If the display is out of range for sensible display in this form, the exact internal representation is displayed.
The display consists of the mantissa, followed by the letter e, followed by the exponent. It is expected that any client would have a table of currency formats and would know to display US dollars with two decimal digits unless the user desired more precision.

Core operations are: canonicalize, +, -, * and /.

The canonicalize operation adjusts a value to comply with the mantissa range rules. This is needed when accepting values in another format and when fixing up values after other operations. It must start with a value whose mantissa is in the range of zero to 2^64 - 1.

The + operation adds two currency values. After testing both values for zero and handling those special cases, the value with the smaller exponent is shifted until they have the same exponents. The mantissas are added in a 64-bit register (so overflow is impossible). The final result is then canonicalized.

The - operation subtracts one currency value from another. If either or both numbers are zero, that special case is handled appropriately. If the number subtracted has a larger exponent, an error is generated. Then the number subtracted is shifted until the exponents are equal. Then the subtraction is performed and the result is canonicalized.

The * operation multiplies one currency value by another. Internal +5/10 LSB followed by integer truncation rounding is used. After appropriate special cases for zero, the two mantissas are multiplied using 128-bit integer arithmetic and then divided by 10^18. The result must then lie in the range of roughly 10^16 to 10^18. A new amount is formed with a mantissa of the result of the division and an exponent equal to the sum of the two original numbers' exponents plus 16. This result is then canonicalized. To round, each mantissa is multiplied by ten prior to multiplication. After multiplication, 50 is added to the result and the result is divided by 100.

The / operation divides one currency value by another. After appropriate special cases for zero, the mantissa of the numerator is multiplied by 10^16 using 128-bit integer arithmetic. The product is then divided by the denominator. The result must then lie in the range of roughly 10^15 to 10^17. A new amount is formed with a mantissa of the result of the division and an exponent of the numerator's exponent minus the denominator's exponent minus 16. This amount is then canonicalized.

Internal currency values are shown as "mantissa e exponent". Input values are shown as "mantissa , exponent". Display values are shown normally.

One dollar can be expressed non-canonically as 1,0 (one dollar) or 100,-2 (100 pennies). Either way, it canonicalizes to 1000000000000000e-15 and displays as "1". One penny can be input as 1,-2. It canonicalizes to 1000000000000000e-17 and displays as ".01".

Some divisions are shown below. The number to the right of the equals sign is the actual answer from the current code in display format, which is also the exact internal value. (The internal value is always an integer. The decimal point is added for display.) For the first three, exact values (calculated with an arbitrary precision calculator, not this code) are shown for comparison.

4034,0 / 9081,0 = 0.4442242043827772 (exact: .4442242043827772271...)
9081,0 / 4034,0 = 2.251115518096182 (exact: 2.251115518096182449...)
9082,0 / 4034,0 = 2.251363411006445 (exact: 2.251363411006445215...)
11,0 / 1,70 = 1100000000000000e-84
1,70 / 11,0 = 9090909090909090e53
11,0 / 1,-70 = 1100000000000000e56
1,-70 / 11,0 = 9090909090909090e-87
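The prose above pins down the rules but not working code. The following is a minimal Python sketch of the same ideas, and it is an illustration under stated assumptions, not the production implementation: Python's arbitrary-precision integers stand in for the 128-bit intermediate arithmetic, amounts are plain (mantissa, exponent) tuples, multiply rounds digit by digit inside canonicalize (which can differ from the single +5/10-LSB rounding described above in edge cases), and divide truncates, which is what the examples above display.

```python
MIN_M, MAX_M = 10**15, 10**16 - 1   # legal mantissa range (zero is special-cased)
MIN_E, MAX_E = -96, 80              # legal exponent range

def canonicalize(m, e):
    """Shift (mantissa, exponent) until the mantissa lies in [10^15, 10^16)."""
    if m == 0:
        return 0, 0
    while m > MAX_M:                # too many digits: drop one, rounding half up
        m = (m + 5) // 10
        e += 1
    while m < MIN_M:                # too few digits: shift one in
        m *= 10
        e -= 1
    if not MIN_E <= e <= MAX_E:
        raise OverflowError("amount outside representable range")
    return m, e

def to_wire(m, e):
    """One unsigned 64-bit integer: exponent + 124 in the top 8 bits, mantissa
    in the low 56 bits, so plain integer comparison orders amounts correctly."""
    return 0 if m == 0 else ((e + 124) << 56) | m

def multiply(a, b):
    (m1, e1), (m2, e2) = a, b
    return canonicalize(m1 * m2, e1 + e2)    # exact product, then re-scale

def divide(a, b):
    (m1, e1), (m2, e2) = a, b
    if m2 == 0:
        raise ZeroDivisionError("division by zero amount")
    if m1 == 0:
        return 0, 0
    num, e = m1, e1 - e2
    while num < m2 * MIN_M:                  # scale until the quotient has 16 digits
        num *= 10
        e -= 1
    return canonicalize(num // m2, e)        # truncating division, as the examples show

print(canonicalize(100, -2))  # (1000000000000000, -15): one dollar, as above
print(canonicalize(1, -2))    # (1000000000000000, -17): one penny, as above
print(divide(canonicalize(4034, 0), canonicalize(9081, 0)))
# (4442242043827772, -16) -> displayed 0.4442242043827772, matching the table
print(divide(canonicalize(1, 70), canonicalize(11, 0)))
# (9090909090909090, 53)  -> displayed 9090909090909090e53, matching the table
```

The two division checks reproduce the displayed values above, and to_wire keeps the exponent in the high bits, which is why plain integer comparison agrees with numeric order, as noted in the wire-format description.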
This shows the offer logic. An offer is made. A few takes are placed against it. One shows an example of calculating how much you have to offer to get a desired output. Then a test is made offering slightly less. Then a full take (for more than the offer) is shown. (What the offeror gets is always equal to what the taker pays.)

Offer Logic:

Offer: For 17.3 sprockets, I will give 2340 widgets
Someone offers 1 sprockets
Offeror gets 1 sprockets and they get 135.2601156069364 widgets
Offer: For 16.3 sprockets, I will give 2204.739884393064 widgets
Jack needs 100 widgets. He should give 0.7393162393162391 sprockets
Someone offers 0.7393162393162391 sprockets
Offeror gets 0.7393162393162391 sprockets and they get 100 widgets
Offer: For 15.56068376068377 sprockets, I will give 2104.739884393064 widgets
Someone offers 0.739316239316239 sprockets
Offeror gets 0.739316239316239 sprockets and they get 99.99999999999987 widgets
Offer: For 14.82136752136754 sprockets, I will give 2004.739884393065 widgets
Someone offers 5000 sprockets
Offeror gets 14.82136752136754 sprockets and they get 2004.739884393065 widgets
Offer: For 0 sprockets, I will give 0 widgets
Offer is fully taken

Native Currency Scaling

Native currency is scaled for display as transaction fees are destroyed.

Starting conditions: Total Coins: 15000000000, Your Coins: 100000
You spend 100 newcoins. Your Coins: 99900
10000 newcoins are eaten by transaction fees. Your Coins: 99900.0666000444
You spend 250 newcoins. Your Coins: 99650.0666000444
150000 newcoins are eaten by transaction fees. Your Coins: 99651.0631106755

Creating your currencies

Ripple will support three types of currencies:

• XRP
  - Ripple's native currency.
• ISO 4217 (3-letter currencies)
  - The commonly used codes such as USD, EUR, JPY, etc.
  - Not currently enforced.
• Custom currencies
  - Client currently supports arbitrary three-letter currency codes and strings.
    (If not entered as three capital letters, the client automatically assigns a three-capital-letter currency code.)
  - Forthcoming: currency resolution, intrinsic rates, or term length.

Ripple has no particular support for any of the 3-letter currencies. Ripple requires its users to agree on the meaning of these codes. In particular, the person trusting or accepting a balance of a particular currency from an issuer must agree to the issuer's meaning. As a result, any 3-letter code can be used to represent currencies on the Ripple network as long as the involved parties agree. However, it is illegal to use XRP as a 3-letter currency code in the 160-bit encoding. In the 160-bit encoding, XRP must be specified as zero. For convenience, syntactically "XRP" is often allowed as an alternative to specifying zero.

Walk-through of creating and paying with a made-up currency

Suppose Alice commonly issues a currency ALC, and Bob wants to hold ALC.

1. Bob trusts Alice's Ripple account for, say, 100 ALC.
   - In the client, Bob goes to the trust tab and types in ALC for the currency when setting trust.
2. Alice may now make a payment to Bob for 20 ALC.
   - Alice manually types in ALC when sending the payment.
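Returning to the offer transcript above: the partial fills shown there are plain proportional scaling, with each take receiving widgets in the ratio offered-widgets / asked-sprockets. A hedged illustration with floats rather than the canonical amount type sketched earlier (function and parameter names are mine, not from the source):

```python
def take(offer_gets, offer_pays, taker_funds):
    """Partially fill 'for offer_gets sprockets I will give offer_pays widgets'."""
    filled = min(taker_funds, offer_gets)        # cannot take more than is offered
    paid_out = offer_pays * filled / offer_gets  # proportional share of the payout
    return filled, paid_out, offer_gets - filled, offer_pays - paid_out

# First take from the transcript: 1 sprocket against "17.3 for 2340".
print(take(17.3, 2340.0, 1.0))
# -> taker pays 1.0 and receives ~135.2601 widgets; the offer is reduced
#    to ~16.3 sprockets for ~2204.7399 widgets, as the transcript shows.
```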
{"url":"https://ripple.com/wiki/Currency_Format","timestamp":"2014-04-18T19:10:49Z","content_type":null,"content_length":"27323","record_id":"<urn:uuid:9b98515a-dfa0-4f20-b9a7-ef4ed27d7afe>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Healthcare (THC)
THC » Topics » Adjusted EBITDA

This excerpt taken from the THC 8-K filed Nov 3, 2009.

Adjusted EBITDA

Adjusted EBITDA, a non-GAAP term defined below, was $240 million, or a margin of 10.6 percent of net operating revenues, in the third quarter of 2009. This represents an increase of $80 million, or 50 percent, from Adjusted EBITDA of $160 million in the third quarter of 2008, and a margin increase of 310 basis points as compared to an Adjusted EBITDA margin of 7.5 percent in the third quarter of 2008. Same-hospital Adjusted EBITDA was $236 million in the third quarter of 2009, an increase of $74 million, or 45.7 percent, from the $162 million in the third quarter of 2008. The same-hospital Adjusted EBITDA margin increased by 290 basis points to 10.5 percent in the third quarter of 2009 compared to 7.6 percent in the third quarter of 2008. Same-hospital financial data excludes the results from one of the Company's hospitals as discussed below.

Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) attributable to common shareholders of Tenet Healthcare Corporation before: (1) the cumulative effect of changes in accounting principle, net of tax; (2) net income attributable to noncontrolling interests; (3) preferred stock dividends; (4) income (loss) from discontinued operations, net of tax; (5) income tax (expense) benefit; (6) net gains (losses) on sales of investments; (7) investment earnings (loss); (8) gain (loss) from early extinguishment of debt; (9) interest expense; (10) litigation and investigation (costs) benefit, net of insurance recoveries; (11) hurricane insurance recoveries, net of costs; (12) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries; (13) amortization; and (14) depreciation. A reconciliation of Adjusted EBITDA to net income (loss) attributable to Tenet Healthcare Corporation common shareholders is provided in Table #1 at the end of this release.

This excerpt taken from the THC 8-K filed Jul 28, 2009.

Adjusted EBITDA

Adjusted EBITDA, a non-GAAP term defined below, is expected to be approximately $246 million, or a margin of 11.0 percent of net operating revenues, in the second quarter of 2009. This represents an increase of $83 million, or 50.9 percent, from adjusted EBITDA of $163 million in the second quarter of 2008, and a margin increase of 330 basis points as compared to an adjusted EBITDA margin of 7.7 percent in the second quarter of 2008. Same-hospital adjusted EBITDA is expected to be approximately $241 million in the second quarter of 2009, an increase of $72 million, or 42.6 percent, from the $169 million in the second quarter of 2008. Same-hospital adjusted EBITDA margin increased by 290 basis points to 10.9 percent in the second quarter of 2009 as compared to the same-hospital adjusted EBITDA margin of 8.0 percent in the second quarter of 2008. Same-hospital financial data excludes the results from one of our hospitals as discussed below.
Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) attributable to shareholders before: (1) the cumulative effect of changes in accounting principle, net of tax; (2) net income attributable to noncontrolling interests; (3) income (loss) from discontinued operations, net of tax; (4) income tax (expense) benefit; (5) net gains (losses) on sales of investments; (6) investment earnings (loss); (7) gain (loss) from early extinguishment of debt; (8) interest expense; (9) litigation and investigation (costs) benefit, net of insurance recoveries; (10) hurricane insurance recoveries, net of costs; (11) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries; (12) amortization; and (13) depreciation. A reconciliation of adjusted EBITDA to net income (loss) attributable to Tenet Healthcare Corporation shareholders is provided in Table #1 at the end of this release.

This excerpt taken from the THC 8-K filed May 5, 2009.

Adjusted EBITDA

Adjusted EBITDA, a non-GAAP term defined below, was $276 million, or a margin of 12.1 percent of net operating revenues, in the first quarter of 2009. This represents an increase of $61 million, or 28.4 percent, from adjusted EBITDA of $215 million in the first quarter of 2008, and a margin increase of 220 basis points as compared to an adjusted EBITDA margin of 9.9 percent in the first quarter of 2008. Same-hospital adjusted EBITDA was $273 million in the first quarter of 2009, an increase of $56 million, or 25.8 percent, from the $217 million in the first quarter of 2008. Same-hospital adjusted EBITDA margin increased by 210 basis points to 12.1 percent in the first quarter of 2009 as compared to the same-hospital adjusted EBITDA margin of 10.0 percent in the first quarter of 2008. Same-hospital financial data excludes the results from one of our hospitals as discussed below.

Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) attributable to shareholders before: (1) the cumulative effect of changes in accounting principle, net of tax; (2) net income attributable to noncontrolling interests; (3) income (loss) from discontinued operations, net of tax; (4) income tax (expense) benefit; (5) net gains (losses) on sales of investments; (6) investment earnings; (7) gain from early extinguishment of debt; (8) interest expense; (9) litigation and investigation (costs) benefit, net of insurance recoveries; (10) hurricane insurance recoveries, net of costs; (11) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries; (12) amortization; and (13) depreciation. A reconciliation of adjusted EBITDA to net income (loss) attributable to Tenet Healthcare Corporation shareholders is provided in Table #1 at the end of this release.

This excerpt taken from the THC 8-K filed Feb 24, 2009.

Adjusted EBITDA

Adjusted EBITDA, a non-GAAP term defined below, was $199 million, or a margin of 9.1 percent of net operating revenues, in the fourth quarter of 2008. This represents an increase of $43 million, or 27.6 percent, from adjusted EBITDA of $156 million in the fourth quarter of 2007, and a margin increase of 160 basis points as compared to an adjusted EBITDA margin of 7.5 percent in the fourth quarter of 2007. Same-hospital adjusted EBITDA was $201 million in the fourth quarter of 2008, an increase of $43 million, or 27.2 percent, from the $158 million in the fourth quarter of 2007.
Same-hospital adjusted EBITDA margin increased by 170 basis points to 9.3 percent in the fourth quarter of 2008 as compared to the same-hospital adjusted EBITDA margin of 7.6 percent in the fourth quarter of 2007. For 2008, same-hospital adjusted EBITDA was $752 million, an increase of $93 million, or 14.1 percent, as compared to $659 million for 2007.

Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) before: (1) the cumulative effect of changes in accounting principle, net of tax; (2) income (loss) from discontinued operations, net of tax; (3) income tax (expense) benefit; (4) net gains (losses) on sales of investments; (5) minority interests; (6) investment earnings; (7) interest expense; (8) litigation and investigation (costs) benefit, net of insurance recoveries; (9) hurricane insurance recoveries, net of costs; (10) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries; (11) amortization; and (12) depreciation. A reconciliation of net income (loss) to adjusted EBITDA is provided in Table #1 at the end of this release.

This excerpt taken from the THC 8-K filed Jan 22, 2009.

Adjusted EBITDA

Adjusted EBITDA, a non-GAAP term, is defined by the Company as net income (loss) before: (1) the cumulative effect of changes in accounting principle, net of tax; (2) income (loss) from discontinued operations, net of tax; (3) income tax expense (benefit); (4) net gain (loss) on sales of investments; (5) minority interests; (6) investment earnings; (7) interest expense; (8) litigation and investigation (costs) benefit; (9) hurricane insurance recoveries, net of costs; (10) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries; (11) amortization; and (12) depreciation.

This excerpt taken from the THC 8-K filed Nov 4, 2008.

Adjusted EBITDA

Adjusted EBITDA, a non-GAAP term defined below, was $156 million, or a margin of 7.2 percent of net operating revenues, in the third quarter of 2008. This represents a decrease of $8 million, or 4.9 percent, from adjusted EBITDA of $164 million in the third quarter of 2007, and a margin decline of 80 basis points as compared to an adjusted EBITDA margin of 8.0 percent in the third quarter of 2007. Adjusted EBITDA was $533 million for the first nine months of 2008 as compared to $501 million for the first nine months of 2007, an increase of $32 million, or 6.4 percent. Same-hospital adjusted EBITDA was $160 million in the third quarter of 2008, a decrease of $4 million, or 2.4 percent, from $164 million in the third quarter of 2007. Same-hospital adjusted EBITDA margin decreased by 60 basis points to 7.5 percent in the third quarter of 2008 as compared to the same-hospital adjusted EBITDA margin of 8.1 percent in the third quarter of 2007. For the first nine months of 2008, same-hospital adjusted EBITDA was $551 million, an increase of $50 million, or 10.0 percent, as compared to $501 million for the first nine months of 2007.
Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) before: (1) the cumulative effect of changes in accounting principle, net of tax; (2) income (loss) from discontinued operations, net of tax; (3) income tax (expense) benefit; (4) net gain (loss) on sales of investments; (5) minority interests; (6) investment earnings; (7) interest expense; (8) litigation and investigation (costs) benefit; (9) hurricane insurance recoveries, net of costs; (10) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries; (11) amortization; and (12) depreciation. A reconciliation of net income (loss) to adjusted EBITDA is provided in Table #1 at the end of this release.

This excerpt taken from the THC 8-K filed Aug 5, 2008.

Adjusted EBITDA

Adjusted EBITDA, defined below, was $163 million, or a margin of 7.6 percent of net operating revenues, in the second quarter of 2008. This represents an increase of $7 million, or 4.5 percent, from $156 million in the second quarter of 2007, and a margin decline of 20 basis points as compared to an adjusted EBITDA margin of 7.8 percent in the second quarter of 2007. Adjusted EBITDA was $379 million for the first six months of 2008 as compared to $337 million for the first six months of 2007, an increase of $42 million, or 12.5 percent. Same-hospital adjusted EBITDA, defined below, was $171 million in the second quarter of 2008, an increase of $15 million, or 9.6 percent, from the $156 million in the second quarter of 2007. Same-hospital adjusted EBITDA margin increased by 30 basis points to 8.1 percent in the second quarter of 2008 as compared to a same-hospital adjusted EBITDA margin of 7.8 percent in the second quarter of 2007.

The two leased hospitals that remain in continuing operations but whose leases will not be renewed reported breakeven adjusted EBITDA in both the second quarters of 2008 and 2007. The results from these two hospitals have been excluded from the calculation of adjusted EBITDA as well as same-hospital adjusted EBITDA. These two hospitals are our Irvine Regional Hospital and Medical Center and Community Hospital of Los Gatos. The leases on these hospitals expire in February and May 2009, respectively. The results from these two hospitals will be excluded from the calculation of adjusted EBITDA in future quarters as well.

Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) before: (1) the cumulative effect of changes in accounting principle, net of tax; (2) income (loss) from discontinued operations, net of tax; (3) income (loss) from leased hospitals whose leases will not be renewed; (4) income tax (expense) benefit; (5) net gains (losses) on sales of investments; (6) minority interests; (7) investment earnings; (8) interest expense; (9) litigation and investigation (costs) benefit; (10) hurricane insurance recoveries, net of costs; (11) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries; (12) amortization; and (13) depreciation. A reconciliation of net income (loss) to adjusted EBITDA is provided in Table #1 at the end of this release.

This excerpt taken from the THC 8-K filed May 6, 2008.

Adjusted EBITDA

Adjusted EBITDA in the first quarter of 2008 was $234 million producing a margin (as a percentage of net operating revenues) of 9.9 percent, an increase of $40 million, or 20.6 percent, from adjusted EBITDA of $194 million in the first quarter of 2007.
The adjusted EBITDA margin was 8.7 percent in the first quarter of 2007. Same-hospital adjusted EBITDA was $239 million in the first quarter of 2008, an increase of 23.2 percent from $194 million in the first quarter of 2007.

Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) before (1) the cumulative effect of change in accounting principle, net of tax, (2) income (loss) from discontinued operations, net of tax, (3) income tax (expense) benefit, (4) net gains on sale of investments, (5) minority interests, (6) investment earnings, (7) interest expense, (8) litigation and investigation costs, (9) hurricane insurance recoveries, net of costs, (10) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries, (11) amortization, and (12) depreciation. A reconciliation of net income (loss) to Adjusted EBITDA is provided in Table #1 at the end of this release.

This excerpt taken from the THC 8-K filed Feb 26, 2008.

Adjusted EBITDA

Adjusted EBITDA in the fourth quarter of 2007 was $166 million producing a margin (as a percentage of net operating revenues) of 7.4 percent, an increase of $13 million, or 8.5 percent, from adjusted EBITDA of $153 million in the fourth quarter of 2006. The adjusted EBITDA margin was 7.2 percent in the fourth quarter of 2006. Same-hospital adjusted EBITDA was $168 million in the fourth quarter of 2007, an increase of 9.8 percent from $153 million in the fourth quarter of 2006.

Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) before (1) the cumulative effect of change in accounting principle, net of tax, (2) income (loss) from discontinued operations, net of tax, (3) income tax (expense) benefit, (4) net gains on sale of investments, (5) minority interests, (6) investment earnings, (7) interest expense, (8) litigation and investigation costs, (9) hurricane insurance recoveries, net of costs, (10) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries, (11) amortization, and (12) depreciation. A reconciliation of net income (loss) to Adjusted EBITDA is provided in Table #1 at the end of this release.

This excerpt taken from the THC 8-K filed Nov 6, 2007.

Adjusted EBITDA

Adjusted EBITDA in the third quarter of 2007 was $177 million producing a margin of 8.0 percent, an increase of $63 million, or 55 percent, from adjusted EBITDA of $114 million in the third quarter of 2006, and an increase of 250 basis points from the adjusted EBITDA margin of 5.5 percent in the third quarter of 2006.

Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) before (1) the cumulative effect of change in accounting principle, net of tax, (2) income (loss) from discontinued operations, net of tax, (3) income tax (expense) benefit, (4) net gains on sale of investments, (5) minority interests, (6) investment earnings, (7) interest expense, (8) litigation (costs) benefit, (9) hurricane insurance recoveries, net of costs, (10) impairment of long-lived assets and goodwill and restructuring charges, net of insurance recoveries, (11) amortization, and (12) depreciation. A reconciliation of net income (loss) to adjusted EBITDA is provided at the end of this release.

This excerpt taken from the THC 8-K filed Aug 7, 2007.
Adjusted EBITDA

Adjusted EBITDA in the second quarter of 2007 was $156 million producing a margin of 7.0 percent, a decrease of $53 million, or 25 percent, from adjusted EBITDA of $209 million in the second quarter of 2006, and a decrease of 250 basis points from the adjusted EBITDA margin of 9.5 percent in the second quarter of 2006. Adjusted EBITDA was $345 million for continuing operations for the six months ended June 30, 2007, as compared to $426 million for the six months ended June 30, 2006. Excluding the $7 million and $13 million adjusted EBITDA losses generated by our two Dallas hospitals whose leases expire on August 31, 2007, from the second quarter of 2007 and the first six months of 2007, respectively, adjusted EBITDA would have been $163 million for the second quarter of 2007 and $358 million for the first six months of 2007.
Adjusted EBITDA is a non-GAAP term defined by the Company as net (loss) income before (1) interest expense, (2) taxes, (3) depreciation, (4) amortization, (5) impairment of long-lived assets and goodwill and restructuring charges net of insurance recoveries, (6) hurricane insurance recoveries net of costs, (7) costs of litigation and investigations, (8) investment earnings, (9) minority interests, (10) (loss) income from discontinued operations, (11) the cumulative effect of change in accounting principle, net of tax, and (12) net gains on the sales of investments. A reconciliation of net (loss) income to adjusted EBITDA is provided at the end of this release.

This excerpt taken from the THC 8-K filed May 8, 2007.

Adjusted EBITDA

Adjusted EBITDA in the first quarter of 2007 was $189 million producing a margin of 8.3 percent, a decrease of $28 million, or 12.9 percent, from adjusted EBITDA of $217 million in the first quarter of 2006, and a decrease of 150 basis points from the adjusted EBITDA margin of 9.8 percent in the first quarter of 2006. The decline in Adjusted EBITDA is, among other factors, attributable to the net impact of lower favorable cost report adjustments and lower volumes.

Adjusted EBITDA is a non-GAAP term defined by the Company as net (loss) income before (1) interest expense, (2) taxes, (3) depreciation, (4) amortization, (5) impairment of long-lived assets and goodwill and restructuring charges net of insurance recoveries, (6) hurricane insurance recoveries net of costs, (7) costs of litigation and investigations, (8) investment earnings, (9) minority interests, (10) (loss) income from discontinued operations, net of tax, (11) the cumulative effect of change in accounting principle, net of tax, and (12) net gains on the sales of investments. A reconciliation of net (loss) income to adjusted EBITDA is provided at the end of this release.

This excerpt taken from the THC 8-K filed Feb 27, 2007.

Adjusted EBITDA

Adjusted EBITDA in the fourth quarter of 2006 was $153 million producing a margin of 7.0 percent, an increase of $30 million, or 24.4 percent, from adjusted EBITDA of $123 million in the fourth quarter of 2005, and an increase of 120 basis points from the adjusted EBITDA margin of 5.8 percent in the fourth quarter of 2005.

Adjusted EBITDA is a non-GAAP term defined by the Company as net income (loss) before (1) interest expense, (2) taxes, (3) depreciation, (4) amortization, (5) impairment of long-lived assets and goodwill and restructuring charges net of insurance recoveries, (6) hurricane insurance recoveries net of costs, (7) costs of litigation and investigations, (8) investment earnings, (9) minority interests, (10) discontinued operations, (11) the cumulative effect of change in accounting principle, net of tax, (12) loss from the early extinguishment of debt, and (13) net gains on the sales of investments. A reconciliation of net loss to adjusted EBITDA is provided at the end of this release.
{"url":"http://www.wikinvest.com/stock/Tenet_Healthcare_(THC)/Adjusted%20Ebitda","timestamp":"2014-04-19T08:00:01Z","content_type":null,"content_length":"69320","record_id":"<urn:uuid:7163e015-539c-461b-892a-257534527807>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Help! Quick graphing question, need to find a point on a quadrilateral. (self.cheatatmathhomework)

My maths methods test is tomorrow and it has a variation of this question: I'm stuck on d). I know AD is y = 3x + 2 and that point D is (0, 2). I've looked at the answers and it says point C is (2, -2) but I have no idea how they got to that, other than just assuming it is directly below point A. Any help at all is greatly appreciated!

[–] riboch

C lies on the perpendicular bisector of AB, so it has the same slope as AD and passes through the midpoint of AB. Figure out the equation of this perpendicular bisector and figure out where it intersects the BC line (3y = 4x - 14). Unless you know linear algebra, you start with those two simultaneous equations: solve for y in one of the equations, substitute into the other, then solve for x. Plug x back into the equations and get y.

[–] tf2hipster

OK, let's solve these one at a time:

Equation of AD:
• Slope of AD is at a right angle to slope of AB, meaning it's -1/(slope AB).
• Slope of AB is (6-8)/(8-2) = -1/3, so slope of AD is 3.
• Equation for AD is y = 3x + c.
• Since the line goes through (2, 8), 8 = 3*2 + c, c = 2.
• So the equation for AD is y = 3x + 2.

Coordinates of D: obviously (0, 2).

Equation of the perpendicular bisector of AB:
• The perpendicular bisector is (obviously) perpendicular to AB, so it has the same slope as AD, meaning...
• The equation will be y = 3x + c.
• The mid-way point on AB is (5, 7) (just take the average of the x coordinates and the average of the y coordinates), so 7 = 3*5 + c, c = -8.
• So the equation is y = 3x - 8.

Coordinates of C: let's label this point (p, q).
• We know from BC that 3q = 4p - 14.
• We know from the last step that q = 3p - 8. Multiply that equation by -3, so -3q = -9p + 24.
• Add the two equations: 0 = -5p + 10, or 5p = 10, p = 2.
• q = 3*2 - 8, q = -2. The point is (2, -2).

Area of ADC:
• Since AC is a straight vertical line (A is (2, 8), C is (2, -2)), this is pretty easy. Treat that long leg as the base of the triangle. The height will be 2, and the length of the base is 10, so the area is ADC = base*height/2 = 10.

Area of ABCD:
• This one can be harder unless you see the trick. You already know the area of the triangle ADC; all you need is the area of the triangle ABC, and you add those together to get the total area.
• Just like above, treat AC as the base; the "height" is 6. ABC = base*height/2 = 30.
• So the total area is 40.
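As a quick numeric cross-check of the answers above (an illustrative script, not part of the thread), the shoelace formula reproduces both areas once C = (2, -2) is known:

```python
A, B, D = (2.0, 8.0), (8.0, 6.0), (0.0, 2.0)

# C: intersection of y = 3x - 8 (perpendicular bisector of AB) and 3y = 4x - 14:
# 3(3x - 8) = 4x - 14  ->  5x = 10  ->  x = 2, y = -2.
C = (2.0, -2.0)

def area(*pts):
    """Shoelace formula for a simple polygon given in traversal order."""
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2.0

print(area(A, D, C))      # 10.0, matching triangle ADC above
print(area(A, B, C, D))   # 40.0, matching quadrilateral ABCD above
```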
{"url":"http://www.reddit.com/r/cheatatmathhomework/comments/18y5v9/help_quick_graphing_question_need_to_find_a_point/","timestamp":"2014-04-18T03:17:08Z","content_type":null,"content_length":"53657","record_id":"<urn:uuid:9f4e2da4-8b0d-4997-aaeb-47ced9daaccc>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> FIML vs MI

Laura Pierce posted on Thursday, July 19, 2007 - 5:58 pm
I have complex survey data (N = 678), with some extensive missingness (67% at the worst). I am wondering what is best to use - FIML or Multiple Imputation? Do you have any suggestions? Thank you.

Linda K. Muthen posted on Friday, July 20, 2007 - 11:35 am
The two methods are asymptotically equivalent. In both cases, with that much missing data, you will be relying too much on model assumption rather than the data. I don't think one approach is better than the other in this situation.

Laura Pierce posted on Wednesday, July 25, 2007 - 7:28 am
Thank you Linda!

TD posted on Wednesday, October 03, 2007 - 12:04 pm
Hello Drs. Muthen - Do either of you have any suggestions for sources comparing FIML and Multiple Imputation?

Linda K. Muthen posted on Wednesday, October 03, 2007 - 2:55 pm
See papers by John Graham in Psych Methods in recent years.

yang posted on Friday, March 04, 2011 - 2:10 pm
Dear Drs. Muthen,
In CFA, when Estimator = WLSMV, Parameterization = THETA, and Listwise = OFF, what is the default method that Mplus (version 5) uses to deal with missing data, FIML or MI? Specifically, is it possible to have WLSMV and FIML simultaneously in a single model, given that the former is a least-squares procedure that requires pairwise deletion while the latter is a maximum likelihood procedure that uses all possible information? Please clarify, thanks.
Thank you very much.

Linda K. Muthen posted on Friday, March 04, 2011 - 3:21 pm
With WLSMV, the method used when the model has no covariates is pairwise present. One can use ML with categorical outcomes and obtain FIML. The problem is each factor requires one dimension of integration when the factor indicators are categorical. Or one can impute data and use WLSMV with the imputed data sets.

yang posted on Monday, March 07, 2011 - 8:57 am
Hello Linda,
Thanks a lot for your prompt help. Yes, actually I am running a MIMIC model, i.e., CFA with covariates. There are quite a few factors that have indicators (outcome variables) as a combination of continuous variables and categorical ones, and that's why I chose the WLSMV estimator. Also, about 15% of participants had missing values on at least one of the indicators. The MIMIC model was run on both the complete data only (about 85% of participants) and the whole sample in order to assess whether missing data is an issue. Then I am facing this question: under the situation as specified above and in my previous post, what method is Mplus (version 5) using to address missing values, Multiple Imputation or FIML? If I understand your response correctly, it should be MI, but I would rather ask for your kind confirmation. Thanks.

Linda K. Muthen posted on Monday, March 07, 2011 - 4:03 pm
With WLSMV, Mplus uses neither multiple imputation nor FIML. Without covariates, it uses pairwise present. With covariates, missingness is allowed to be a function of the observed covariates but not the observed outcomes. What I suggest above is to use multiple imputation in Mplus to generate imputed data sets and then analyze these imputed data sets using WLSMV.

Sarah Ryan posted on Wednesday, March 09, 2011 - 1:02 pm
I am running a mediation model using a large federal data set - my N = 8,000 - using data from both parents and students. On student-level data, missingness is below 7% for all variables, but on parents it ranges between 10% and 20%.
In trying to decide between using MI and FIML, I've read Asparouhov & Muthen (2010), who write: "When the data set contains a large number of variables but we are interested in modeling a small subset of variables, it is useful to impute the data from the entire set and then use the imputed data sets with the subset of variables. This way we don't have to worry about excluding an important missing data predictor" (p. 23). If I understand correctly, this reasoning would apply in my research (there are several hundred student and parent variables in the data set). Here's my dilemma, however. My understanding from Graham's work is that FIML and MI are only asymptotically equivalent when ALL of the same variables are used with both. Further, Graham recommends approximately 100 imputed data sets in order to achieve such equivalence. MI with so many variables for this many data sets would be extremely intensive computationally - but to include only a subset of predictors seems to miss the point being made by A & M. Any direction/advice would be appreciated...

Bengt O. Muthen posted on Wednesday, March 09, 2011 - 6:13 pm
I would use the variables in your model plus any key variables beyond that which you think might be related to missingness. Most variables are not.

Sarah Ryan posted on Thursday, March 10, 2011 - 1:33 pm
Okay, thank you. Also, I should clarify that I will be using FIML in Mplus to estimate the full model - at this point, I'm just trying to figure out how best to approach missingness. I've also considered using auxiliary variables to assist in dealing with missingness under FIML. If the auxiliary variables were the same variables I would use as missing data predictors in MI, then the two approaches should be equivalent - is that correct? This is for my dissertation work, so this is new territory for me. Thanks for your response.

Bengt O. Muthen posted on Thursday, March 10, 2011 - 1:56 pm
They would be similar in large samples, yes. You may want to take a look at Craig Enders' 2010 missing data book (Guilford) if you haven't already.

Sarah Ryan posted on Thursday, April 21, 2011 - 2:23 pm
Hello again, following up here on the post above. I did go ahead and read Enders' book, which helped my understanding of MI immensely. I remain undecided. I have a rather complex model - several covariates (missingness on some, one reason making MI attractive), two latent exogenous constructs, several manifest exogenous measures, a latent mediator, and a continuous outcome. I will also use multiple-group invariance testing. This is complex survey data and I'm using data from both parents and students. Student missingness ranges from 0% to 11%, but parent missingness ranges from 0% to 28% (16%-20% missingness on most parent-level variables). I know I would need to run MI for the full sample (which includes all racial/ethnic groups) as well as for each of the two subgroups of comparison (a race-level comparison). I also gather from reading these boards that it would be best to fix parameter estimates for one of the MI sets (for the full sample, and each subgroup) at the pooled imputation average AND THEN run the analysis model using FIML. I'm wondering if, given the high levels of parent missingness and some missingness on covariates, using MI would produce more accurate analysis-model parameter estimates.
Do you have a stronger (convince me, please!) argument for why staying within the FIML framework (which would be simpler) would likely still produce just as accurate estimates, even given the missingness?

Craig Enders posted on Thursday, April 21, 2011 - 4:42 pm
In my mind, the answer to your question largely depends on the scaling of your variables, particularly the covariates. My post is a bit long, so I have to split it into separate chunks. First, suppose that all of your variables are continuously scaled, or approximately so. In this case, I think that the choice of FIML vs. MI is personal preference. Theoretically, there is no reason to expect noticeable performance differences (FIML might have slightly smaller SEs because MI uses a more complex saturated model to deal with missing data, but this difference is usually negligible). There are a couple of nuances here. With FIML, you will need to take care of the missing data on the covariates (you seem to be aware of this issue). Suppose you have two covariates, X1 and X2. You would do this as follows:

X1 X2;
X1 with X2;

As I understand it, explicitly listing X1 and X2 effectively makes the covariates single indicators of a latent construct -- a programming trick that converts the Xs to Ys while still maintaining the exogenous status of the variables in the model. The missing data on your outcomes would be automatically handled by FIML.

Craig Enders posted on Thursday, April 21, 2011 - 4:43 pm
Same situation -- continuous variables. Turning to MI, the imputation process is a bit simpler than you describe. You would impute the data separately for your ethnic groups -- there is NO need to then impute the data for the whole sample. Separate-group imputation automatically fills in the missing values in a way that preserves mean differences among the ethnic groups and all group-by-variable interactions in the data. Said differently, you would have filled in the data using the most general model possible, and the set of imputations that you get from this procedure would be appropriate for all your analyses (the imputation routine would need to include all covariates, outcomes, etc.). I'm not sure what you are referring to when you talk about fixing the estimates at the pooled average, then running the model using FIML. I *think* you might be describing the method for pooling likelihood ratio tests, but that would not be necessary -- Mplus reports the pooled chi-square. On my website (appliedmissingdata.com) I have an example of separate-group imputation in Mplus 6.

Craig Enders posted on Thursday, April 21, 2011 - 4:45 pm
Next, suppose that one of your covariates is binary. I think that the situation becomes murkier here. Take that programming trick that lists the variances of the X variables and their covariances. Again, as I understand it, this would make the covariate a single indicator of a latent construct. However, it isn't so clear to me that this is appropriate with a binary covariate, because you would be assuming normality for the covariate, and employing a linear factor model to convert the X to a pseudo-Y might cause problems. The first problem is more conceptual; the programming trick does not produce a model that is a one-to-one translation of the complete-data analysis. Whether this introduces bias, I don't know. Second, I could imagine that Mplus would issue a warning about the standard errors because the mean and the variance of the binary variable are linearly dependent when you use the linear factor model to handle the missing data.
I'm not sure that either of these problems is substantial ... MI would provide a useful alternative. Mplus allows you to specify variables as categorical or continuous in the imputation model. In the case of an incomplete categorical covariate, imputation would use a logistic regression to fill in plausible values. The MI procedure would be identical to what I describe above (impute separately for each ethnic group). You would simply use the (c) option to denote categorical variables.

Craig Enders posted on Thursday, April 21, 2011 - 4:48 pm
Continuing on ... Finally, if your outcomes/indicators are not continuous -- say, Likert items -- then the choice isn't all that clear to me. FIML would assume multivariate normality, as you know. The missing data handling probably won't behave any differently than complete-data ML. With MI, you have two options: (a) use linear regression to impute the incomplete variables, or (b) use logistic regression to impute. The former assumes normality and would produce fractional imputed values (not a problem beyond aesthetics; you would not want to round these). The latter does not assume normality and would produce discrete imputations. I know of no studies that have compared these two imputation approaches, but I suspect that the logistic imputation might lead to larger SEs because the multinomial model would have more parameters than a linear model. I would probably still go with FIML and MLR standard errors, but the choice isn't so clear cut. The other thing to consider is your model testing procedures. It sounds like you plan to perform likelihood ratio tests. FIML is probably going to be easier to deal with here. Mplus computes pooled LR tests, but I'm not sure if there is a way to automate the computation of these tests when you are comparing fit between models from two separate analysis runs (e.g., your invariance tests). If you have to compute the LR tests by brute force, it would be time consuming.

Brondeel Ruben posted on Friday, April 22, 2011 - 12:11 am
Thanks Craig. I believe this post was interesting and useful for many of us.

Sarah Ryan posted on Friday, April 22, 2011 - 9:58 am
Dr. Enders, I am more appreciative than you could know for your response above and the time you took to offer some insight. Let me add some thoughts on a few of your points. Separate-group imputation: the full sample includes four racial/ethnic groups, but I will test for model invariance between only two of those groups. Is it still the case, then, that I would not need to impute for the full sample? Fixing estimates at the pooled average: in the discussion board on "MODEL INDIRECT and MISSING," Linda Muthen offers this advice if one is trying to examine indirect effects with imputed data (from what I understand, this is advised because otherwise Mplus will not provide indirect effects with imputed data): "With multiple imputation, you can fix all parameters at the average value given in the imputation run and then run the analysis with MODEL INDIRECT using one of the imputation data sets and no IMPUTATION. It does not matter which data set because nothing is estimated." (See next post for the rest of my message.)

Sarah Ryan posted on Friday, April 22, 2011 - 10:20 am
Variable scaling: none of the covariates with missingness are binary, but some are categorical (ordinal, with 3 to 7 levels). This is true of many of the model variables (ordinal, although those with 5+ levels could perhaps be treated as continuous).
What I understand from a response I got from L. Muthen on the choice of estimator is that I can use MLR with binary/categorical variables as long as there are four or fewer factors in the model (because numerical integration is required to estimate the logistic regressions, which becomes unwieldy with more than four factors). So, with any categorical covariates, ML would use logistic regression to estimate the single-indicator latent construct (the "programming trick"). Model testing: yes, I do plan to use LR tests. It sounds like the model invariance testing would be pretty intensive if I went with MI. Finally, from your response, I'm inferring that the percentage of missingness on any given variable is not as critical as the factors you discussed when deciding between FIML and MI for an analysis model similar to mine. If that is the case, then, weighing all of this, FIML is looking like the way to go here. Either way, reading your 2010 book has left me feeling much more confident in my knowledge about MI and how I would go about it - if not in the dissertation, then in the future.

Craig Enders posted on Friday, April 22, 2011 - 11:57 am
I might have misunderstood your original description when you were referring to imputing the entire sample. I thought you were referring to a situation where you (a) impute without regard to ethnicity, then (b) re-impute separately by ethnicity. This is what you would want to avoid, because you could not compare analysis models that differ with respect to the imputation model. In terms of separate-group imputation, I would still impute separately within each of your four groups, even though you only plan to compare two groups in your invariance analyses (assuming a sufficient N within each group). This would allow each group to have its own mean vector and covariance matrix. There is nothing lost by imputing the data using a richer set of variables/associations than what you have in the subsequent analyses. Doing it this way would give you imputations that could serve for all of your subsequent analyses.

Craig Enders posted on Friday, April 22, 2011 - 12:03 pm
Continued ... The mediation effect. To get around the lack of MODEL INDIRECT with imputed data, why don't you instead use the MODEL CONSTRAINT command, as follows:

m on x (a);
y on m (b);
y on x (cprime);
ab = a*b;

Provided that your a and b paths are linear regressions, this would give you what you want. I'm not sure I completely understand the constraint part that you were describing, but you would want to estimate the mediated effect, then average the m estimates. The MODEL CONSTRAINT option would give you that.

Craig Enders posted on Friday, April 22, 2011 - 12:22 pm
Finally, the percentage of missing data would have no bearing on your decision. All things being equal, MI and ML are asymptotically equivalent. MI uses Monte Carlo simulation to average across a distribution of plausible replacement values for the missing data, whereas ML essentially uses calculus to do the same thing analytically. Although the procedures look very different, they are in fact doing the same thing. MI uses a saturated model (typically, although not necessarily) to handle the missing data, whereas ML uses a more parsimonious model that doesn't spend all the degrees of freedom (at least in your example). This might produce tiny differences in SEs, but there is no other reason to expect ML and MI to differ. The model testing part certainly favors ML. The pooled LR test in MI is a bit of an unknown because simulation studies have not thoroughly assessed its performance. It's probably safe to say that most of what is known about the LR test in complete-data ML also applies to the missing-data case.
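To make the pooling step above concrete, here is a minimal Python sketch of Rubin's rules for combining an indirect effect ab = a*b estimated in each of several imputed data sets. The estimate and standard-error arrays are hypothetical inputs; this illustrates only the combining formulas - Mplus performs this pooling internally when it analyzes multiple imputed data sets.

import numpy as np

# Hypothetical per-imputation estimates of the indirect effect ab = a*b
# and their standard errors, one entry per imputed data set.
ab_hat = np.array([0.112, 0.098, 0.121, 0.105, 0.117])
ab_se = np.array([0.031, 0.029, 0.033, 0.030, 0.032])
m = len(ab_hat)

pooled = ab_hat.mean()                       # pooled point estimate
within = (ab_se ** 2).mean()                 # average within-imputation variance
between = ab_hat.var(ddof=1)                 # between-imputation variance
total_var = within + (1 + 1 / m) * between   # Rubin's total variance
pooled_se = np.sqrt(total_var)
print(f"pooled ab = {pooled:.4f}, SE = {pooled_se:.4f}")

Averaging the point estimates alone would understate the uncertainty; the (1 + 1/m) * between term is what carries the extra variability introduced by imputation.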
Sarah Ryan posted on Friday, April 22, 2011 - 2:52 pm
Dr. Enders, no, you did understand about group imputation... I didn't, at first! Now it makes sense. I also looked at the examples and code you give in the 2011 article in the SEM journal on separate-group imputation, which made it even clearer. Thanks also for the MODEL CONSTRAINT advice. That's quite helpful. It seems like the research on how/when FIML and MI differ in approaching missing data is still emerging. I'm also learning that one has to weigh many different factors about the analysis model and data when deciding between the two. In trying to wrap my head around this, I've done a lot of reading. Some authors have strong convictions about always using one (or the other) to deal with missingness. My sense now is that it's not that simple, and that the researcher needs to make that decision based on the investigation at hand. Thank you, again.

Jared K Harpole posted on Monday, March 19, 2012 - 7:32 pm
What method does Mplus use to impute categorical data when we specify: Impute = var1-var5(c); What is this doing exactly? Do you have a reference for what it is doing? Is it imputing variables from a multinomial distribution?

Bengt O. Muthen posted on Monday, March 19, 2012 - 9:49 pm
This is described in the paper Asparouhov, T. & Muthén, B. (2010). Multiple imputation with Mplus. Technical Report. Version 2, which you can find on our website under Papers, Bayesian Analysis.

Elina Dale posted on Wednesday, October 09, 2013 - 5:59 pm
Dear Dr. Muthen, just to make sure I understood it correctly: the current version of Mplus (v7) does not use FIML when the estimator is WLSMV, i.e. when factor indicators are categorical variables? So, does it do listwise deletion? In the following model:

USEVARIABLE = y1-y20;
CATEGORICAL = y1-y20;
CLUSTER = clus;
MISSING = all (-9999);
ANALYSIS: TYPE = COMPLEX EFA 1 5;

I have missing values on my factor indicators (y1-y20). Since these are categorical variables, the default estimator is WLSMV. So what happens to observations with missing y's? Thank you,

Linda K. Muthen posted on Thursday, October 10, 2013 - 6:27 am
WLSMV uses pairwise present when the model does not have covariates.

Elina Dale posted on Thursday, October 10, 2013 - 8:21 am
Sorry, but what does that mean in practice? Suppose I have 4 factor indicators and one of them (y1) has missing values. Does pairwise present look at missing values between y1 and y2, y1 and y3, etc.? If y1 is missing some observations but y2 is not, does it impute y1? Thank you!

Linda K. Muthen posted on Thursday, October 10, 2013 - 8:29 am
This has nothing to do with imputing anything. It means that the correlations in the matrix that is analyzed are computed using the number of people who do not have missing data on the variables involved.

Elina Dale posted on Thursday, October 10, 2013 - 4:53 pm
So, if values are not imputed, does it mean that if person A has a missing value for y1, which is one of the four observed factor indicators, his responses to y2-y4 will also be deleted? Thus, if my original sample had 200 people, and 10 of them had missing values on just 1 out of the 4 items of the factor, my CFA model will have only 190 observations?
This sounds like listwise deletion, and I thought Mplus was better at handling missing values in item scales or factor indicators. Thank you!

Linda K. Muthen posted on Friday, October 11, 2013 - 6:22 am
It is not listwise deletion. Missing data is looked at for pairs of variables -- pairwise present. So if 50 people have non-missing values for y1 and y2, 50 observations are used to compute the correlation between y1 and y2. If 70 people have non-missing values for y1 and y3, 70 observations will be used to compute the correlation between y1 and y3. Etc. For FIML, use the maximum likelihood estimators.

Elina Dale posted on Friday, October 11, 2013 - 11:37 am
Thank you so much!!! Now I understand it. What about when we have covariates, but our items are still categorical variables, so we still use the WLSMV estimator? In the Mplus Guide, it says: "For censored and categorical outcomes using weighted least squares estimation, missingness is allowed to be a function of the observed covariates but not the observed outcomes." So, does it mean that when we have covariates, for example sex and age, Mplus imputes the missing values for scale items based on these two variables? I am not sure I understand what "is allowed to be a function" means. For example, we have 100 observations. In a scale for anxiety with four items (y1-y4), y1 is missing 10 values, but there are no missing values for sex and age, which are used in the model to predict anxiety as measured by y1-y4. Does it mean the n for the model will be 100? I apologize for these questions and I really appreciate your guidance!

Linda K. Muthen posted on Saturday, October 12, 2013 - 9:45 am
It means what you say conceptually, but values are not explicitly imputed. Yes, the sample size is 100.
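As a concrete illustration of pairwise present, here is a small Python sketch on a toy data set (not Mplus output). Each correlation in the analyzed matrix can rest on a different number of observations; pandas' DataFrame.corr drops missing values pairwise by default, which mirrors the behavior described above.

import numpy as np
import pandas as pd

# Toy data: four indicators, with missingness on y1 only.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=["y1", "y2", "y3", "y4"])
df.loc[df.sample(10, random_state=0).index, "y1"] = np.nan

# Pairwise-present sample sizes: each correlation uses all cases complete
# on that particular pair, not just the 190 fully complete cases.
notna = df.notna().astype(int)
print(notna.T.dot(notna))   # 190 for pairs involving y1, 200 otherwise

# Correlations computed pairwise: corr(y2, y3) still uses all 200 rows.
print(df.corr())

Listwise deletion, by contrast, would use the same 190 fully complete cases for every entry of the matrix.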
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=22&page=2445","timestamp":"2014-04-17T16:36:04Z","content_type":null,"content_length":"74203","record_id":"<urn:uuid:546cffa8-9060-4479-87ba-26673d410905>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Two Balls Are Dropped From The Same Height At The ... | Chegg.com

A. The second ball will reach the ground first because it has an initial velocity.
B. The first ball will reach the ground first because it does not travel as far.
C. The two balls reach the ground at the same time.
D. Both balls are in projectile motion.
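The problem stem is truncated in the source, so the intended setup is not fully recoverable; the page URL suggests the first ball is released with zero velocity and the second is thrown with some initial velocity. Assuming the throw is straight down with speed v0 > 0 from the same height h (the reading option A points to), a short Python check of the constant-acceleration fall times - g, h, and v0 below are hypothetical values:

import math

g = 9.81   # m/s^2, gravitational acceleration
h = 20.0   # m, drop height (hypothetical)
v0 = 5.0   # m/s, downward initial speed of the thrown ball (hypothetical)

def fall_time(h, v0):
    # Solve h = v0*t + 0.5*g*t^2 for t >= 0:
    # t = (-v0 + sqrt(v0^2 + 2*g*h)) / g
    return (-v0 + math.sqrt(v0**2 + 2 * g * h)) / g

print(f"dropped ball:     {fall_time(h, 0.0):.3f} s")
print(f"thrown-down ball: {fall_time(h, v0):.3f} s")

For any v0 > 0 the thrown ball lands first, which matches option A under this downward-throw reading.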
{"url":"http://www.chegg.com/homework-help/questions-and-answers/two-balls-dropped-height-time-first-ball-released-zero-velocity-second-ball-thrown-velocit-q1249130","timestamp":"2014-04-17T15:49:35Z","content_type":null,"content_length":"20849","record_id":"<urn:uuid:e769c7b5-b518-4232-9a26-53a993f1907c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Ult. ModPack UBRV61+(44+ Complete Seasons-1970-Pres)12/1/13

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Ballaboi wrote:
HAWK23 wrote: Good news - Melomanu has finished Version 1 of the 2010 Roster - I will release shortly!
Good stuff Melo
Thanks bro.. keep in mind it's just V1 and we can always go back and add more to whatever needs to be added.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
RatedRKO16 wrote: I will definitely be down to donate once it is all done. You guys have done amazing work. I'm reallllyyy hoping someone edits the draft classes so I can start off in the 79/80 season and just keep going in Assoc. mode!! That is my dream!! I hope you guys can make it happen. I'm loving this. Only reason I haven't moved on to 2k13!
Glad you're loving it Rated. I haven't looked at the 2K13 pages - or "the Dark Side" as I like to call them. Always good if you can make a donation to Hawk23 - he is just a poor, over-worked and under-paid teacher who needs alcohol and pizza to keep him sane while he does his updates.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
I will definitely be down to donate once it is all done. You guys have done amazing work. I'm reallllyyy hoping someone edits the draft classes so I can start off in the 79/80 season and just keep going in Assoc. mode!! That is my dream!! I hope you guys can make it happen. I'm loving this. Only reason I haven't moved on to 2k13!

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hum, into the 40's and 50's, even the 60's, I got the same issue: guys that played only one game in the whole season. But what to do??? I made them anyway... So, just confirming, we have the following regular seasons remaining:

2002 (NicktheQuick)
2003 (Boss)
2004 ???
2008 ???
2009 ???
2010 (Melomanu - done/needing lil tweaks)

Only 3 seasons away from every regular season since 1979-1980 till now. Quite an accomplishment! A full ABA Mod, the 60's/70's Mods, Expansion Teams throughout time, several Draft Classes and of course the All-Time Legends Mod. Only the UBR Family has the strength to do it. Nice and sweet, huh?

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Work is progressing well on the Ultimate 80's mod - 300 players on roster and around 650 free agents to use to alter rosters how you like them. I have been making sure all ages, draft positions, historic stats etc. are accurate and hope to have it ready for next weekend. After that I will add some more players across all existing 80's mods and fix up draft teams and draft positions across the 80's, as this wasn't a priority when I was building them, but it would be nice for us to have them totally accurate. I think the Ultimate 80's will be the way to go for those of you who want to enjoy the best players of the 80's all together in one season rather than have to play a number of different season mods to experience Tiny Archibald and Reggie Miller, for instance. Great job to those working on the CFs. Now back to creating those pesky players who played a total of 1 game in the NBA.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
jyap18 wrote: Hey! Why is the Phoenix Suns court all black? Only the players I can see. The floor is black and the crowd..
You're not installing all 4 parts...

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hey! Why is the Phoenix Suns court all black? Only the players I can see. The floor is black and the crowd..

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Alright - I've made a bunch of edits... I'll probably upload the new version (V40) tonight some time!

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
BASED ON JUAN33's MAGIC - credits to him for the model - and BIRD_33's face texture - credits to him for the CF. I will be bringing rookie Magic Johnson to life for Red and Bird, as I will be editing the headshape to bring young Magic to the 1980 mod, so stay tuned; once Bird gets it from me he will release it for you guys, either himself or by giving it to Hawk to integrate into the UBR.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
HAWK23 wrote: Good news - Melomanu has finished Version 1 of the 2010 Roster - I will release shortly!
Good stuff Melo

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hey Hawk, are there any recent updates on the other 2000s season mods?

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
mojio wrote: Hello HAWK23. Do you have plans to add these players that I have? Year-specific cyberfaces.
Nice CF pack.. Only problem is I saw over 200 pictures but the pack only has half of the faces (99 to be exact). For example I saw a version of Shaq with the sideburns from 1995 that I loved and would like to have, but he is not in the pack.

Sorry melomanu20. Some of these cyberfaces are already contained in the UBR v39 img folder - the 1994 roster has Shaq with the sideburns (png9948). If you request it, I will upload it. But I can only convert the texture. I'm sorry for using translation into English. . .

moj - nice CF pack.... I've noticed that many of the CFs in that pack are already included...
would you mind sending me JUST the ones that have not been used?

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Updated: 2/10/2013. Future additions/modifications to V40:

Multiple Mods / Ultimate Base Roster: (SILENT UPDATE TO PART 4)
2012 Mod: *None at this time
2011 Mod: *None at this time
2010 Mod: *This roster will need to have a lineups/rotations/roster revamp-check *Add any missing players / add accurate free agents (if any are missing)
2007 Mod: *None at this time
2006 Mod: *These players need to be added/created from scratch: BULLS: Luke Schenscher, Eddie Basden; CELTICS: Dwayne Jones; GRIZZLIES: Anthony Roberson; JAZZ: Andre Owens; KNICKS: Qyntel Woods; MAGIC: Terence Morris; LAKERS: Devin Green
2005 Mod: (SILENT UPDATE TO PART 4) *Edit lineups/rotation minutes to reflect better accuracy
2002 Mod: *This roster will need to have a lineups/rotations/roster revamp-check *Add any missing players / add accurate free agents (if any are missing)
2001 Mod: (SILENT UPDATE TO PART 4) *This roster will need to have a lineups/rotations/roster revamp-check (I don't anticipate much tweaking needed - Nick does a great job) *Add any missing players / add accurate free agents (if any are missing) *Tweak player attributes to better reflect the 2000-2001 Season
2000 Mod: *Check over all rosters/lineups/rotations to ensure accuracy and add any missing players *Tweak player attributes to better reflect the 1999-2000 Season
1999 Mod: *Add accurate Free Agents to the FA Pool *Check over all rosters/lineups/rotations to ensure accuracy and add any missing players
1998 Mod: *Check over all rosters/lineups/rotations to ensure accuracy and add any missing players
1997 Mod: *Tweak player attributes to better reflect the 1996-1997 Season (15 teams copied over from UBR)
1996 Mod: *Tweak player attributes to better reflect the 1995-1996 Season (3 teams copied over from UBR)
1995 Mod:
1994 Mod: (SILENT UPDATE TO PART 4)
1993 Mod: *None at this time
1992 Mod: *Tweak player attributes to better reflect the 1991-1992 Season
1991 Mod: *Tweak player attributes to better reflect the 1990-1991 Season (I've done 10 teams so far)
1990 Mod: *None at this time
1989 Mod: *None at this time
1988 Mod: *This roster will need to have a lineups/rotations/roster revamp-check (I don't anticipate much tweaking needed since Redd tweaked it) *Add any missing players / add accurate free agents (if any are missing) - Redd does a great job, so any changes should be minimal
1987 Mod: *Tweak player attributes to better reflect the 1986-1987 Season
1986 Mod:
1985 Mod: *None at this time
1984 Mod: (SILENT UPDATE TO PART 4)
1983 Mod: *This roster will need to have a lineups/rotations/roster revamp-check (I don't anticipate much tweaking needed) *Add any missing players / add accurate free agents (if any are missing) - Redd does a great job with his stuff, so any changes should be minimal
1982 Mod: *This roster will need to have a lineups/rotations/roster revamp-check (I don't anticipate much tweaking needed) *Add any missing players / add accurate free agents (if any are missing) - Redd does a great job with his stuff, so any changes should be minimal
1981 Mod: *This roster will need to have a lineups/rotations/roster revamp-check (I don't anticipate much tweaking needed) *Add any missing players / add accurate free agents (if any are missing) - Redd does a great job with his stuff, so any changes should be minimal
1980 Mod: (SILENT UPDATE TO PART 4) *This roster will need to have a lineups/rotations/roster revamp-check (I don't anticipate much tweaking needed) *Add any missing players / add accurate free agents (if any are missing) - Redd does a great job with his stuff, so any changes should be minimal
All-Time Legends Roster: *None at this time
60s & 70s Ultimate Basketball Mods: *None at this time
ABA Mod: *None at this time
Expansion Teams:
Draft Classes: (SILENT UPDATE TO PART 4)

Website: [http://www.UltimateNBA2K.com]
*Make a new page for the 2012 Mod
*Make a new page for the 2010 Mod
*Make a new page for the 2005 Mod
*Make a new page for the 2002 Mod
*Make a new page for the 2001 Mod
*Make a new page for the 1983 Mod
*Make a new page for the 1982 Mod
*Make a new page for the 1981 Mod
*Make a new page for the 1980 Mod
*Make a new page for the All-Time Legends Mod
*Make a new page for the 60s Ultimate Basketball Mod
*Make a new page for the 70s Ultimate Basketball Mod
*Make a new page for the 80s Ultimate Basketball Mod

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Good news - Melomanu has finished Version 1 of the 2010 Roster - I will release shortly!
Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hey, I have one more question: is it possible to insert a picture of a created player? If yes, then how do I do it?

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
LaPhonso Ellis is 44 years old in the 1997 roster and 45 years old in the 1998 roster, while he was 26 and 27 years old in those seasons. Also, he is 24 in the 1999 roster, and he was 28 in the 1999 season. Edit: I went and checked LaPhonso Ellis's age in all retro rosters since he was drafted (1993 roster) until the 1997 roster, and in all of them his age is wrong: in his rookie season he starts as a 40-year-old, in his sophomore season he is 41, and so on.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
HAWK23 wrote: The fix for the Pacers' arena is simple... I thought I had already made it... I would download part 4 and make sure you've added the newest version of the season mod with the issue... I can fix any other arena issues VERY simply.... didn't know this was still a problem.
Ahhh yeah. I apologize!! I was using a saved roster from a previous version and didn't even think about it!! Ugh. Is there a way for me to fix it without making all my trades/coach profiles/rotations again?? Or would I have to start over? Thanks man.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hawk, Tim Mac at the Raptors in the All-Time roster has white legs - can you check those new CFs and this one? I know it sucks to get "recalls", but it's just for your notes. Boss

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
The fix for the Pacers' arena is simple... I thought I had already made it... I would download part 4 and make sure you've added the newest version of the season mod with the issue... I can fix any other arena issues VERY simply.... didn't know this was still a problem.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
broken wrote: RatedRKO16, Hawk already told you that he knows about that issue with Indiana, which will be fixed, so just wait.
I'm pretty sure he said he didn't know if he'd be able to fix it... I was offering an alternative if he couldn't. Also, adding the Jazz issue. I know he's a busy guy. My bad.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
RatedRKO16, Hawk already told you that he knows about that issue with Indiana, which will be fixed, so just wait.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hey Hawk - I know you said you probably couldn't fix the Pacers' arena, so I'm guessing you can't for the Jazz either for the 1990 mod. I was wondering if you could just replace them with a past version of the arena or a future version? I know we wouldn't get the nostalgia effect, but any court is better than a half-made court... Thanks man.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hey Hawk, the rotation problem with the minutes can be solved if you let the CPU rebuild all the rotations, as long as you don't rebuild the All-Time Franchise teams. Then all the historic teams' rotations can be edited; I do it every time I download a new version of the UBR. Also, the color on James Edwards's legs (95-96 Bulls) is off. Hawk, can you also please add Darrell Walker to the 92-93 Bulls in the UBR and move Ed Nealy to the team's reserves? Nealy only played in 11 regular-season games that year, while Walker was on the playoff roster.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
melomanu20 wrote: mojio wrote: Hello HAWK23. Do you have plans to add these players that I have? Year-specific cyberfaces.
Nice CF pack.. Only problem is I saw over 200 pictures but the pack only has half of the faces (99 to be exact).
For example I saw a version of Shaq with the sideburns from 1995 that I loved and would like to have, but he is not in the pack.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
I love how people are getting excited about 80's CFs. I've had my head down and tail in the air working on my 80's Ultimate Roster - 23 teams of classic players from the 80's, and every other player who played in the 80's as a free agent to be traded into teams if you don't like the rosters I have set up - that's the plan anyway, and all with accurate historic stats. Hoping to finish in a few weeks.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Too bad I am not good with headshapes.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hi there. After having finally succeeded in downloading the mod again (thanks HAWK!!), I have really appreciated the new graphic improvements, especially the new retro CFs… Now I'm trying to point out these mistakes, this time using a different approach: since the players are essentially copied over the seasons, the ones with something wrong keep their glitches all over their career. I.e., if a player has a wrong leg color in a specific roster, it's highly likely that he will have the same issue in any previous/following season (in which he played). So my humble suggestion is:

1. Correct the players listed below for the 1980 mod (the very first season).
2. Correct the same players in the following mods.

This method is quite time consuming, but only in the very beginning: after completing the first step, it's guaranteed that in the 1981 mod very few players will be wrong, since the other ones would have already been corrected, and so on… so essentially only the rookies/new players/players with multiple CFs will eventually have something wrong. And at the end the reviewed rosters will be FLAWLESS… Hope these considerations will be useful… Anyway, here I go with the 1980 mod.

COMPLETELY WRONG PLAYERS: Leon Douglas (he was black, not white!)

WRONG LEGCOLOR: Marques Johnson, Dwight Jones, Ollie Mack, Scott May, Del Beshore, Mike Mitchell, Jeff Judkins, Don Chaney, Sidney Wicks, Brian Taylor, Nick Weatherspoon, Sam Pellom, James Hardy, Tom Boswell, Bill Robinzine, Len Elmore, Mike Glenn, Sly Williams, Butch Lee, Maurice Lucas, Eddie Jordan, David Thompson, Bo Ellis, Bobby Wilkerson, Billy Knight, Joe Hassett, Greg Kelser, Eric Money, Earl Evans, Roy Hamilton, Alonzo Bradley, Larry Kenon, John Shumate, James Silas, Mike Evans, Gar Heard, Calvin Natt, Jim Brewer, Darnell Hillman, Wayne Cooper, Kevin Grevey, Larry Boston, Gus Bailey

WRONG PORTRAIT: Coby Dietrick, Bob Abernathy, Phil Ford, Joel Kramer (he doesn't have any portrait and he looks right - he was a white player - but the preview shows a generic black guy)

WRONG AGE: Tree Rollins (24, not 26), Walter Davis (25, not 27), Dennis Johnson (25, not 28)

WRONG NAME: Ernie GRUNFELD (not GRUNFIELD)

NOTES: There is a CF with unrealistically deep black arms; this CF is used for Freeman Williams, Billy McKinney and Gary Garland. My suggestion is to just change the CF for these players… Glen Godrezick has a lot of tattoos he shouldn't have… Jim Cleamons has a strange glitch on the neck.. looks pretty ugly.

Then, just to complete the review, I list what's missing on jerseys and logos. Some of them already exist (also from the ABA mod...).

http://www.sportslogos.net/logos/view/n ... me_Uniform
http://www.sportslogos.net/logos/view/0 ... ad_Uniform
http://www.sportslogos.net/logos/view/e ... me_Uniform
http://www.sportslogos.net/logos/view/1 ... ad_Uniform
New York
http://www.sportslogos.net/logos/view/n ... me_Uniform
http://www.sportslogos.net/logos/view/d ... ad_Uniform
http://www.sportslogos.net/logos/view/g ... me_Uniform
http://www.sportslogos.net/logos/view/b ... ad_Uniform
http://www.sportslogos.net/logos/view/f ... me_Uniform
http://www.sportslogos.net/logos/view/9 ... ad_Uniform
http://www.sportslogos.net/logos/view/b ... me_Uniform
http://www.sportslogos.net/logos/view/r ... ad_Uniform
http://www.sportslogos.net/logos/view/g ... me_Uniform
http://www.sportslogos.net/logos/view/h ... ad_Uniform
Portland (away only)
http://www.sportslogos.net/logos/view/s ... ad_Uniform
San Antonio (away only)
http://www.sportslogos.net/logos/view/q ... ad_Uniform
http://www.sportslogos.net/logos/view/f ... imary_Logo
http://www.sportslogos.net/logos/view/5 ... imary_Logo

Bottom line, the courts. It's impossible to say with accuracy which courts are missing or inaccurate for each specific season. But it's pretty clear that some should be created from scratch, because they are simply missing…

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Here are some new CFs: C. Dietrick, J. Mengelt, L. Walton, O. Johnson, O. Mack, R. Sobers, S. May. fabri79, I will fix all messed-up or wrong CFs in the 80s mod.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Hi Fabri - Thanks for the heads up on these. I am planning a major revamp of all the 80's, so your suggestions will be a great starting point.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
I've already made a ton of skin color tweaks that will be released today... Also... I received the 2002 roster from NickTheQuick... this will be added along with the 2010 roster... ALSO.... Melomanu is going to begin work on the 2009 Roster. That means we're left with:

2003 - I have some of this done
2004 - No one yet
2008 - No one yet
2009 - Melomanu

We're close!

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
HAWK23 wrote: I've already made a ton of skin color tweaks that will be released today... [...] We're close!
Big Boss Hawk'ndRoll, if you need some help with the remaining teams, I can give a hand... I do not know how to put them inside a Regular Season (you know), but I can make them. I agree with Red's theory - you're gonna need just a little more alcohol and pizza to do it all. The UBR Family is the BEST!!!!
Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Fabri - I've fixed the logos, ages, and the Grunfeld issue.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
HAWK23 wrote: I've already made a ton of skin color tweaks that will be released today... [...] We're close!
This calls for a celebration..

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Fabri - I've fixed the logos, ages, wrong portraits, and the Grunfeld issue.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
NIB - I could use your help if you would be willing to create a few players from scratch... not whole teams... but players who haven't been created yet... for example:

Luke Schenscher
Eddie Basden
Dwayne Jones
Anthony Roberson
Andre Owens
Terence Morris
Devin Green
Toby Bailey
Khalid El-Amin

Just to name a few.... if you could save those in a .ROS file I could def use them!

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Ok, I'll do it. Could send it to you during the week... Btw, if you want more guys later, just tell me.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
I'll release the rookie Magic Johnson with edited headshape (courtesy of Suirad) later today, guys - just gotta finish up editing his CF.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Bird_33 wrote: I'll release the rookie Magic Johnson with edited headshape (courtesy of Suirad) later today guys just gotta finish up editing his cf
Good stuff Bird

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Bird_33 wrote: I'll release the rookie Magic Johnson with edited headshape (courtesy of Suirad) later today guys just gotta finish up editing his cf
Why Magic Johnson? He looks nice - Larry Bird is the man who needs a younger CF.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
HAWK23 wrote: NIB - I could use your help if you would be willing to create a few players from scratch... [...] if you could save those in a .ROS file I could def use them!
If it's the Dwayne Jones I think you're talking about - former Sun.. Cavalier.. etc. - then I already made him in the 2010 roster. Also, the great Retroman already converted both Terence Morris & Anthony Roberson, and they are available for download on his page. I know he has already given you permission to use his faces, Hawk. And as for Toby Bailey.. I'm sure rayhoops wouldn't mind letting us use the one he made in his work-in-progress version of College Hoops.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
HAWK23 wrote: 2008 - No one yet
Let me get back in the game and get a crack at this. Also, if you need some help creating additional players, let me know. You know I'm good at that.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
Nice! The C's year. Don't worry ballboi.. you got 2009 & 2010.

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
shawnkemp wrote: why magic johnson, he look nice larry bird is the man who need younger cf
Cos Magic Johnson looked completely different as a rookie to his NBA 2K CF...
Pretty self explanatory...

Re: Ult. ModPack UBRV39+(27 Complete Seasons+ABA Proj)-1/27/
melomanu20 wrote: Nice! The C's year. Don't worry ballboi.. you got 2009 & 2010.
Oh definitely, Melo. That 08 Celtics team, in my opinion, is one of the top 15 teams of all time! I will never take anything away from the Celtics - I'm too much of a basketball fan for that. I just can't cheer for them
{"url":"http://forums.nba-live.com/viewtopic.php?f=143&p=1595836","timestamp":"2014-04-16T19:02:11Z","content_type":null,"content_length":"179645","record_id":"<urn:uuid:370c7997-8a74-42c4-bf14-3c5a6ceee524>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Outreg for descrptive stats [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: RE: Outreg for descrptive stats From "Nick Cox" <n.j.cox@durham.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject st: RE: Outreg for descrptive stats Date Fri, 25 Jul 2003 17:22:14 +0100 This whole business -- let's call it the reporting problem, namely, how best to produce, collate and report a lot of related results -- is of course of great concern to many users. It is much of what most of us _do_ with Stata, perhaps most of what some of us do, and all of what a few of us do. One short answer to Ronnie's question is to check out Roger Newson's paper on his set of programs. That paper will be forthcoming in Stata Journal 3(3) or Stata Journal 3(4), but a pre-publication copy will no doubt be available somewhere soon, if not Another short answer, and it is said with my tongue only a little in cheek, is Yes. The program is called Stata. Let's put in context that John Gallup's very substantial -outreg- project was focused on a class of related commands and made possible by one basic principle, the generic similarity of those commands, itself the result of a tight design in Stata. What Ronnie wants to do here is produce a lot of results from a lot of commands and select some of the results and present them in a neat way. A program to do that would have to be very general and have to have lots of handles to cope with even the simplest possibilities which might arise. But that program exists. It is Stata itself. Let me put it another way. We can fantasise about the commands we would we like to see, and one might be called -report-. You would specify to -report- what commands you want to run on what variables and what categories and which results you want to keep and how you want to present them, and no doubt other details, like you want -if- and -in- and -by- and weights and what you do with missing values. What the syntax you want to type look like? (I am not saying write the program. I am saying _imagine_ the help file.) Horrendous. That's my guess. It would be like trying to lump a very large fraction of Stata into one command. Here is another approach. There is one basic bag of tricks which gets you a long way: cycle over variables with -foreach- cycle over categories with -foreach- or -forvalues- -quietly- issue <whatever> command pick up the saved results you want put them somewhere safe process them as a new (part of the) dataset. Here is a fairly simple example. I rely on the fact that the number of observations in the auto data set is more than the number of variables. Something like that is usually true. If not, there are other ways to do this. I cycle over the variables in the auto dataset, and do t tests, -by(foreign)-. 0. Initialise I need places to put results. I am going to collect t statistics and P values. If I initialise variables by a -generate-, then I can -replace- within a loop. gen varname = "" gen t = . gen P_value = . local i = 1 1. Cycle ds, has(type numeric) qui foreach v of var `r(varlist)' { capture ttest `v', by(foreign) if _rc { replace varname = "`v'" in `i' replace t = r(t) in `i' replace P_value = r(p) in `i' local ++i 2. What I have got? list varname t P if t < . Note: there are two loops here, executed in unison. As soon as I have a result to put in an observation, I bump up the observation number. There is more about this in "Problems with lists", Stata Journal 3(2), 2003. Note: this is not Stata programming. 
Nowhere did I write the word -program-. Admittedly, you need a few tricks
from the programmer's repertoire to do this well.

> -----Original Message-----
> From: Ronnie Babigumira
> Sent: 25 July 2003 16:23
> To: statalist@hsphsun2.harvard.edu
> Subject: st: Outreg for descriptive stats
>
> Hi (Stata 8.1, Win2k),
>
> A quick one: is there an equivalent of -outreg- for simple descriptive
> statistics? I am running a number of tests for differences in means
> across categories for a whole lot of variables. In addition, I am doing
> a number of cross tabulations. After this I cut and paste these into a
> table in a report; however, as Stata users are constantly on the look
> out for efficiency, I am thinking someone must have written an ado to
> automate this process. Does such a thing exist?
{"url":"http://www.stata.com/statalist/archive/2003-07/msg00623.html","timestamp":"2014-04-16T08:18:32Z","content_type":null,"content_length":"9922","record_id":"<urn:uuid:c27584df-dba8-4278-ab42-141e92665f52>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
High School Mathematics Extensions/Further Modular Arithmetic/Problem Set

From Wikibooks, open books for an open world

1. Suppose in mod m arithmetic we know x ≠ y and $y^2 \equiv x^2 \pmod{m}$. Find at least 2 divisors of m.

2. Derive the formula for the Carmichael function, λ(m) = the smallest positive number such that $a^{\lambda(m)} \equiv 1 \pmod{m}$ for every a coprime to m.

3. Let p be a prime such that $p = 2^s + 1$ for some positive integer s. Show that if g is not a square mod p, i.e. there is no h such that $h^2 \equiv g \pmod{p}$, then g is a generator mod p; that is, $g^q \not\equiv 1 \pmod{p}$ for all $0 < q < p - 1$.
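A possible line of attack on problem 1: m divides y^2 - x^2 = (y - x)(y + x), so gcd(m, y - x) and gcd(m, y + x) are divisors of m, and they are nontrivial when x is congruent to neither y nor -y. A small numerical illustration (the instance m = 91, x = 3, y = 10 is made up for this sketch):

    from math import gcd

    # 10^2 = 100 ≡ 9 = 3^2 (mod 91), yet 10 is congruent to neither 3 nor -3.
    m, x, y = 91, 3, 10
    assert (y*y - x*x) % m == 0 and (y - x) % m != 0

    print(gcd(m, y - x), gcd(m, y + x))   # -> 7 13, two divisors of 91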
{"url":"http://en.wikibooks.org/wiki/High_School_Mathematics_Extensions/Further_Modular_Arithmetic/Problem_Set","timestamp":"2014-04-18T08:09:22Z","content_type":null,"content_length":"31941","record_id":"<urn:uuid:96070940-895a-4df5-9e38-a54ba8de5abe>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: High School Logic

Browse High School Logic. Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Venn diagrams.

- If a point in set X is finite, then X has a first point and a last point. Prove by induction if true, and give a counterexample if false.
- Prove or disprove: existential x P(x) and existential x Q(x) is logically equivalent to existential x (P(x) and Q(x)).
- Don't fallacious arguments (as in the argumentum ad ignorantiam) represent illogic?
- What is the difference between A |- B and A -> B? They seem to mean the same thing -- if A is true then you know that B is also true.
- I am having a hard time understanding why two false statements in a conditional sentence make it true.
- Is p -> q totally equivalent to ~q -> ~p in practice?
- I understand the subset explanation of why the conditional logic statement 'If false then true/false' is always considered true. But what is the logic behind it?
- Is it possible for more than one answer to exist when proving things?
- What is a group? Can you give an example of an identity?
- I do not understand the laws of inference, simplification, disjunctive inference, and disjunctive addition.
- A logician vacationing in the South Seas finds herself on an island inhabited by the two proverbial tribes of liars and truth-tellers.
- The logic statement A -> B is considered true if A is false and B is true. How can a false imply a true? What's the thinking behind that statement, and can you give a good example of how it works?
- Can 5 tails-up coins become all heads-up by turning them over 2 at a time? If the total number of coins and turned coins changes, what patterns emerge? Doctor Tom introduces mathematical invariants and monovariants to confirm a student has understood that these questions seek proofs -- and that she has started down the right path of proving the possibility (or impossibility) of certain states.
- Read the clues given, and match everything up.
- What is mathematical induction? Can you give an example of the ideas of math induction?
- Proof by induction does not prove anything, because in the inductive step, one makes the assumption that P(k) is true...
- Assumptions, rules, contradictions, and a derivation.
- How is math related to logic and intuition?
- Sally, Ron, Jim, and Meghan are President, VP, Treasurer, and Captain of the cheerleading squad, but not necessarily in that order. Who is what?
- A number divisible by 2 is divisible by 4. Find a hypothesis, a conclusion, and a converse statement, and determine whether the converse statement is true.
- Is there a mathematical symbol for the term 'if and only if'?
- What do the common math symbols (backward E, upside-down A, etc.) mean?
- If a logic statement says, 'James is taking fencing or algebra,' does that mean he is taking one class or the other, or could he be taking both of them?
- At least how many balance scale weighings of ten coins do you need to determine the two fakes? By applying combinatorics and keeping track of lower bounds, Doctor Jacques provides a methodical approach.
- A man born in 1806 is x years old in the year x squared. Solve for x.
- Are there in fact four options?
- Aren't there three choice points, not just two? There are three cups, one of which is covering a coin. I know the whereabouts of the coin, but you don't. You pick a cup, and I take one of the remaining cups, one which DOESN'T contain a coin. Both you and I know the cup I pick doesn't contain a coin. You then have the option to swap your cup with the third, remaining cup, or keep your first choice. What is the probability of the coin being in the cup if you keep your first choice, or if you decide to swap them?
- What does it mean to say that a condition is necessary, sufficient, or necessary and sufficient? I'm working on a question in modular math that asks me to identify whether given conditions are "necessary", "sufficient", or "necessary and sufficient". I'm not sure what those terms mean.
- What is the negation of "at least two"? Is it "none" or "at most two"? Doctor Peterson responds by analyzing one case at a time as well as by representing the proposition as an inequality.
- What is negation? What is a statement? How do you negate a statement?
- What is the negation of "In every village, there is a person who knows everybody else in that village"?
- How can I prove that any two infinite subsets of the natural numbers can be put in a 1-1 correspondence?
- What is an open sentence?
- Can you help me understand the order of quantifiers?
- I recently read a book about infinity which set forth several arguments for why there are different sizes or orders of infinity. None of them seem convincing to me...
- What is a paradox? What is the difference between paradox and fallacy in mathematics?
- A teacher announces that a test will be given next week on one of the five weekdays. Why won't the test ever be given?
- Does the "necessity" condition correspond to "only if" and "sufficient" correspond to "if," or is it the other way around?
- Who was dancing with whom?
{"url":"http://mathforum.org/library/drmath/sets/high_logic.html?start_at=121&num_to_see=40&s_keyid=39897416&f_keyid=39897417","timestamp":"2014-04-21T02:32:10Z","content_type":null,"content_length":"23897","record_id":"<urn:uuid:4f404ba9-4bb1-4b2e-ac09-8857485f9d94>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding a Solution using an Integrating Factor

Post #1 (January 12th 2010, 09:56 PM):

Err, I meant integrating factor, not constant. My book is terrible and is
confusing the heck out of me.

The equation is y' - 2y = 3e^t, and I'm supposed to find a general solution
and draw a direction field, as well as describe how solutions behave for
large t.

For the solution I've taken the integrating factor m to be e^(-2t) and the
equation to be my' - 2my = m*3e^t. That yields

    y'e^(-2t) - 2ye^(-2t) = e^(-2t) * 3e^t,  or  y'e^(-2t) - 2ye^(-2t) = 3e^(-t).

I'm stumped at this point. Can someone help me out?

Post #2 (January 13th 2010, 03:20 AM), commenting inline on the above:

Mr F says: The whole point of getting the integrating factor is to write the
left-hand side of this equation as $\frac{d}{dt}\left[y e^{-2t}\right]$.
You should Google "integrating factor".

Post #3 (January 13th 2010, 04:25 AM):

Multiply $y' - 2y = 3e^t$ by $e^{-2t}$; then
$\frac{d}{dt}\left(e^{-2t}y\right) = 3e^{-t}$. You know what to do now?

Post #4 (January 13th 2010, 04:41 AM):

In general, $\mu(x)$ is an "integrating factor" for the differential
equation $\frac{dy}{dx} + p(x,y) = q(x)$ if multiplying by it makes the left
side of the equation an "exact differential". That is, if
$\mu(x)\frac{dy}{dx} + \mu(x)\,p(x,y) = \frac{d\left(\mu(x)\,y\right)}{dx}$.
By the product rule, of course,
$\frac{d\left(\mu(x)\,y\right)}{dx} = \mu(x)\frac{dy}{dx} + \frac{d\mu}{dx}\,y$,
so that means we must have $\frac{d\mu}{dx}\,y = \mu\,p(x,y)$, a
differential equation for $\mu$. Theoretically, every differential equation
has an integrating factor, but for general p(x,y) that equation can be as
hard to solve as the original equation. However, for linear equations, where
p(x,y) is just p(x) times y, the differential equation for $\mu(x)$,
$\frac{d\mu}{dx} = \mu(x)\,p(x)$, is "separable" and easy to solve.
"Separate" as $\frac{d\mu}{\mu} = p(x)\,dx$ and integrate:
$\ln(\mu(x)) = \int p(x)\,dx$, so $\mu(x) = e^{\int p(x)\,dx}$.

For this particular equation, $\frac{dy}{dt} - 2y = 3e^t$, p(t) = -2, so
$\int p(t)\,dt = -2t$ (since you only need an integral, you can ignore the
constant of integration) and the integrating factor is, just as you say,
$e^{-2t}$. Multiplying the equation by that gives
$e^{-2t}\frac{dy}{dt} - 2e^{-2t}y = 3e^t e^{-2t} = 3e^{-t}$.
That's exactly where you got to. Now, the whole point of an "integrating
factor" is that
$\frac{d\left(e^{-2t}y\right)}{dt} = e^{-2t}\frac{dy}{dt} - 2e^{-2t}y$!
You can differentiate using the product rule to check, but that is, as I
say, the whole point of the "integrating factor". Now, your differential
equation is $\frac{d\left(e^{-2t}y\right)}{dt} = 3e^{-t}$. Just integrate.
The integral on the left, by the "fundamental theorem of calculus", is just
$e^{-2t}y$, of course.
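Finishing the integration gives $e^{-2t}y = -3e^{-t} + C$, i.e.
$y = Ce^{2t} - 3e^{t}$; for large t the $Ce^{2t}$ term dominates, so
solutions diverge unless C = 0. A quick symbolic check (a sketch using
SymPy, offered only as verification):

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    # Solve y' - 2y = 3*exp(t) and confirm the closed form above.
    sol = sp.dsolve(sp.Eq(y(t).diff(t) - 2*y(t), 3*sp.exp(t)), y(t))
    print(sol)   # equivalent to y(t) = C1*exp(2*t) - 3*exp(t)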
{"url":"http://mathhelpforum.com/differential-equations/123525-finding-solution-using-integrating-constant.html","timestamp":"2014-04-18T13:20:43Z","content_type":null,"content_length":"48487","record_id":"<urn:uuid:0eac0fef-d454-4cf7-8f5b-88242675ee8a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate income elasticity of demand and advertising elasticity

Suppose you have the following hypothetical demand (sales) function:

    Qx = -4Px + 2Py + 0.20I + 0.04A

where
    Px = $200 (price of good X)
    Py = $230 (price of good Y)
    I  = $1,500 (disposable per capita income)
    A  = $12,000 (advertising expenditures)

1. Calculate the income elasticity of demand for product X when I = $1,500. How could we classify product X? Is product X a cyclical or noncyclical good? Is product X a luxury good or a necessity? Explain why. Suppose the economy is in a recession and per capita disposable income is expected to decrease by 5%. What percentage effect on sales would you expect to take place?

2. Given that advertising expenditures are equal to $12,000, calculate the advertising elasticity.
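For a linear demand function the point elasticities follow directly from the
partial derivatives: e_I = (dQ/dI)(I/Q) and e_A = (dQ/dA)(A/Q). A short
sketch of the computation (one way to work the problem as stated, not the
posted solution):

    Px, Py, I, A = 200, 230, 1500, 12000
    Q = -4*Px + 2*Py + 0.20*I + 0.04*A     # = -800 + 460 + 300 + 480 = 440

    e_income = 0.20 * I / Q                # (dQ/dI)*(I/Q) ~ 0.68
    e_advert = 0.04 * A / Q                # (dQ/dA)*(A/Q) ~ 1.09
    print(Q, round(e_income, 3), round(e_advert, 3))

Under the usual textbook classification, an income elasticity between 0 and
1 makes X a normal, noncyclical good (a necessity rather than a luxury), and
a 5% fall in income would change sales by roughly 0.68 * (-5%) = -3.4%.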
{"url":"https://brainmass.com/business/management-accounting/421868","timestamp":"2014-04-20T15:52:39Z","content_type":null,"content_length":"28062","record_id":"<urn:uuid:c665f836-2d41-4d89-bf5a-0536a6d9c10b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Alex is six years older than his sister, Emma. The sum of their ages is 32. How old is Alex?

Alex is ____ years old. (Be sure to only type Alex's age in the answer blank.)
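One way to set it up: let E be Emma's age, so Alex's age is A = E + 6. Then A + E = 32 gives 2E + 6 = 32, so E = 13 and Alex is 19.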
{"url":"http://openstudy.com/updates/4ff86425e4b058f8b762df37","timestamp":"2014-04-18T11:08:13Z","content_type":null,"content_length":"37090","record_id":"<urn:uuid:ae606a80-e5ac-4fb3-a62d-cea2c2e1fe4a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Mass of the Earth - Please Help

1. The problem statement, all variables and given/known data

I'm trying to figure out the mass of the Earth.

2. Relevant equations

I know what it actually is, 5.97*10^24 kg, but it's deriving it myself that's got me stumped. I know the F = G*m1*m2/d^2 formula (or r^2, depending what you're taught).

3. The attempt at a solution

I have the mass of my object, which is 0.012 kg. I know my d: that's 6.3781*10^3 squared (which is 4.06801586*10^7), and I know G (6.67*10^-11), but I'm just confused about the F. In my book we worked it as 931 N = ..., but I think that was just 9.8 N (gravitational acceleration) multiplied by my teacher's weight, as an example. If I do 0.012*9.8 I get 0.1176 N, so does that mean my equation should look like:

    0.1176 N = (6.67*10^-11) * 0.012 * Me / d^2     (Me = mass of the Earth)

Then I rearrange the formula to get Me. But the answer I keep getting, no matter how many different ways I try, even when I get the right leading digits, is about a factor of 1,000,000 off, i.e. I get 5.97*10^18. WHY oh why is this happening? I have to show working for my assignment. Now I'm lost even when I follow my own book, because we used different masses, i.e. 60 kg instead of 12 GRAMS (the 12 grams is the weight of my sinker). Please, please, please, any help is really appreciated!
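The factor-of-a-million discrepancy is a units issue: 6.3781*10^3 is the
Earth's radius in kilometres, so squaring it gives km^2, while G is in
m^3 kg^-1 s^-2. Using r = 6.3781*10^6 m instead (and noting that F/m = g, so
the sinker's mass cancels entirely) gives M = g*r^2/G. A quick check:

    G = 6.67e-11     # m^3 kg^-1 s^-2
    g = 9.8          # m/s^2: 0.1176 N / 0.012 kg, the sinker's weight per mass
    r = 6.3781e6     # Earth's radius in metres, not kilometres

    print(g * r**2 / G)   # ~5.98e24 kg; with r in km you get ~5.98e18,
                          # exactly the factor-of-10^6 error described above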
{"url":"http://www.physicsforums.com/showthread.php?t=238109","timestamp":"2014-04-21T09:55:51Z","content_type":null,"content_length":"46315","record_id":"<urn:uuid:4aaece18-0e03-472b-92fe-b6a05506f562>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Moneyball – How NFL Teams Make Multimillion Dollar Decisions, Why Are They Wrong and How To Do It Right

Last week I was invited to give a talk at the Big Data Innovation Summit in Boston. My colleague, Patrick Philips, and I gave a talk about how LinkedIn uses crowdsourcing to improve its machine learning algorithms and data products. Since this post is not about that talk, I will just mention that we got great feedback and you can watch it here.

Parallel to the data innovation summit, there was another conference held in Boston, the Sports Analytics Innovation Summit. Since Patrick and I are big sports fans, since our work involves doing a lot of analysis, and since we had free passes (both conferences were organized by the same group), we decided to drop by. We weren't wrong. It was incredibly interesting to see the difference between our world – data-product-oriented consumer internet companies – and the world of sports analytics. While the state of the art for the internet companies is analyzing terabytes of data using distributed big data frameworks and applying machine learning algorithms such as deep learning neural networks, the state of the art for sports analytics, to put it mildly … is different.

One of the most informative sessions at the conferences was given by the data analysts of the National Football League. In this session, the people in charge of analytics for the NFL explained that since data gathering and cleaning is a task that needs to be performed by all NFL teams, the NFL has built a new platform for teams to consume this data.

Platform is a pretty big word. One might imagine this platform as something similar to Google Analytics, where a team coach could log in and watch fancy graphs and charts about his team's performance. It's not exactly like that. It's more like an Excel spreadsheet that holds data very similar to what you might encounter at Yahoo! Sports or ESPN. Actually, it's not more like an Excel spreadsheet, but it is exactly an Excel spreadsheet that is emailed to the teams in the league every week. The spreadsheet contains a lot of tabs and a lot of canned reports about which teams played, what the performance of the players was, which players were on the field for any given play, and links to the videos of the plays. Pretty straightforward stuff.

But there was also something a little bit different there. Something that immediately caught my eye, because it was a data product, and one that is, in concept, very similar to many of the products we develop at LinkedIn. The product was named "Similar Running Backs", which sounds very similar to the "Similar Profiles", "Similar Companies" and "Similar Schools" LinkedIn data products. The way the NFL analysts explained the idea behind Similar Running Backs is that every year the teams need to renegotiate contracts with their players. To make the negotiations (which sometimes involve contracts of more than ten million dollars a year) efficient, it is very helpful for the teams to understand how similar players are compensated. So the league created this tool for the teams as part of their new platform, and the first version of this tool compares running backs.

Here is how it works – you select a player from a drop-down list and then select two numbers which represent the similarity range of the players you are looking for. The smaller the range, the fewer players will fit the criteria, and vice versa.
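A sketch of what such a range filter might look like in code (the pandas layout and the numbers are made up for illustration; the actual tool is an Excel-based report):

    import pandas as pd

    rb = pd.DataFrame(
        {"att_per_game": [20.1, 19.8, 7.0],
         "yds_per_game": [98.0, 96.5, 30.0],
         "yds_per_att":  [4.90, 4.87, 4.30],
         "td_per_game":  [0.55, 0.54, 0.10]},
        index=["Player A", "Player B", "Player C"])

    def similar_players(stats, player, lo=0.95, hi=1.05):
        ratio = stats / stats.loc[player]     # each stat relative to the player
        within = (ratio >= lo) & (ratio <= hi)
        return stats[within.all(axis=1)].drop(index=player)

    print(similar_players(rb, "Player A"))    # -> Player B only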
For example: the values 95% and 105% will return the players whose regular season stats are all between 95% and 105% of the corresponding statistics of the selected player. Now let's look at real data and see how the algorithm works. Note: for this analysis I only looked at players who played at least 10 games in the 2012 season and who had at least 10 rushing attempts.

Let's see what data we have. The NFL used the following stats to assess running back similarity: number of games played, rushing attempts made, total rushing yards, rushing yards per game, rushing yards per attempt and rushing touchdowns.

Issue #1: Since the job of the running back is to carry the ball through the defensive line, which is basically a set of about seven 300 lbs. guys, running backs tend to get injured a lot. This causes them to miss a lot of games and makes it hard to compare players who played all games in a season to players who played only part of them.

Solution #1: Normalize the data by the number of games played. That is, instead of counting total rushing attempts and total rushing touchdowns, use rushing attempts per game and rushing touchdowns per game.

Now that we have the stats right, let's try to find all the players within a +/-5% range of each other on every statistic.

Issue #2: Not all stats are alike. For example, in 2012 the top rushing yards player was Adrian Peterson with 131 yards per game, while the lowest player, Jorvorskie Lane, rushed for only 0.8 yards per game, about 160 times less. This means that the gaps between players' rushing yards per game can be very significant. In comparison, in the rushing yards per attempt category, the top player, Cedric Peerman, rushed for 7.2 yards per attempt, while the lowest player rushed for 1 yard per attempt, which is only 7.2 times lower. Since the spreads of these two metrics are so different, a 5% difference does not represent the same "similarity" on both: being within 5% on rushing yards per game is very similar, while for rushing yards per attempt it's not.

Solution #2: Normalize the data to have the same units of distance. What we want to do here is transform all of our measurements to have the same range. One way to do so is to use the standard deviation. Standard deviation is a measure of how wide the range is: think of a bell curve; the wider the bell curve, the higher the standard deviation. We want the bell curves for all of our stats to look similar to each other. To accomplish this, we can normalize our data by the standard deviation. (This post is too short to explain why this concept works; feel free to read more in this article.)

Now that we have the right stats and they are all comparable, we can start looking at which players are similar to each other. Remember, since we have only four stats to work with, the most similar players will have all of their stats within 95%-105% of each other; less similar players will match on only 3, 2, and so on. It appears there are no two players in the league who are very similar on all four stats, but there are some who share three. Here is a visualization of these similarities:

We can see from this graph that there are three pairs of players that are very similar to each other in their stats, and a cluster of six players who are also similar to one another.

Issue #3: Darius Reynaud is similar to D.J. Ware, who is similar to Le'Ron McClain, who is similar to Jason Snelling, yet the first and the last are not very similar to each other.
While both their output wasn’t high, Jason Snelling rushed twice more per game and per attempt than Darius Reynaud. Issue #4 This similarity metric is too coarse. It’s all or nothing, either the players are within 5% from each other in most stats or they don’t. Even if we reduce the number of stats players have to be similar at to two as can be seen in this graph. Issue #5 This similarity product has too many levers. We need to provide it with the range of what it means that two players are similar to each other and we also need to provide how many stats should be Solution to #3, #4 and #5 We can provide a visualization that: 1. Displays all the players 2. Uses a continuous similarity metric where closer means more similar instead of the binary similar or not similar we used before 3. Doesn’t need any levers In order to achieve that, we will just cluster all players into groups and then display all the players on a chart where similar players will be close to each other and dissimilar people are far. Now we can see all the players in a single graph separated into five groups. The red group (number 1) are the superstars, guys like Adrian Peterson, Marshawn Lynch and Arian Foster. These are the guys with the most rushing attempts, the highest yardage per game and the guys who by far scored the most touchdowns. The group closest to it, in magenta (number 5), are the second tier guys. These guys are very productive running backs, just not as productive as the guys in the first group. But while these two groups are interesting, pretty much every football fan could break these players into these three buckets. What is more interesting is who are the other three groups. The second magenta group (number 3) is our least productive players. These players rushed only for 4 yards on average in a game with each attempt advancing them slightly more than 2 yards. The blue group (number 4) is made of players that while rushing 5 times the yards per game and twice the yardage per attempt, managed to score about the same numbers of touchdowns, 0.07 a game. The green group is made of players who are very similar to the blue group, only twice more effective in scoring touchdowns. While this analysis does not provide a myriad of insights that are not already known to subject matter experts it does provide a nice and robust framework to understand player similarities with a single look. Also, while it’s very easy to compare players according to only their rushing abilities, things become more complicated when we add more dimensions to look at like fumbles and catches. Which running backs finished last season most similar to each other in terms of all this stats combine? This is a much harder question to answer for which I will let you guess in the comments, but the answer could be easily displayed once again as point on a flat surface. Like always, you should follow me on Twitter Follow @bigdatasc
{"url":"http://blog.vitalygordon.com/2013/09/24/moneyball-how-nfl-teams-make-multimillion-dollar-decisions-why-are-they-wrong-and-how-to-make-them-right/","timestamp":"2014-04-17T09:35:00Z","content_type":null,"content_length":"62171","record_id":"<urn:uuid:ff31b452-2f2c-49e0-898f-b77c0e1f058c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
A maxent reading list

Cultural Interest

S. Guiasu and A. Shenitzer. The principle of maximum entropy. The Mathematical Intelligencer, 7(1), 1985. (An overview paper.)

E. Jaynes. Notes on present status and future prospects. In W.T. Grandy and L.H. Schick, editors, Maximum Entropy and Bayesian Methods, pages 1-13. Kluwer, 1990. (Depending on your viewpoint, Jaynes deserves credit for either inventing maxent or, at the very least, formalizing it, in 1957.)

Feature induction

S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393, April 1997. (Introduces an iterative algorithm for constructing an exponential model from "informative" features selected automatically from a large candidate set.)

Iterative scaling

D. Brown. A note on approximations to discrete probability distributions. Information and Control, 2:386-392, 1959.

I. Csiszár. I-divergence geometry of probability distributions and minimization problems. The Annals of Probability, 3(1):146-158, 1975.

I. Csiszár and G. Tusnády. Information geometry and alternating minimization procedures. Statistics & Decisions, Supplemental Issue 1, pages 205-237, 1984.

I. Csiszár. A geometric interpretation of Darroch and Ratcliff's generalized iterative scaling. The Annals of Statistics, 17(3):1409-1413, 1989.

J. Darroch and D. Ratcliff. Generalized iterative scaling for log-linear models. Ann. Math. Statistics, 43:1470-1480, 1972.

The [Della Pietra, Della Pietra, Lafferty] reference above also formally introduces the improved iterative scaling algorithm, a procedure for computing maximum-likelihood estimates of the parameters in a maxent distribution.

Applications

The proceedings of the yearly conference Maximum Entropy and Bayesian Methods have been published by Kluwer for at least the last ten years and always contain interesting applications of maxent to areas as diverse as portfolio optimization, signal processing, nuclear physics, and, of all things, the "two envelope" paradox.

A. Berger, S. Della Pietra, and V. Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71, 1996. (Covers selected applications in machine translation, including word-sense disambiguation and word reordering.)

R. Rosenfeld. A maximum entropy approach to adaptive statistical language modelling. Computers, Speech and Language, 1996. (Uses exponential models to construct a conditional model of language which improves upon the standard "trigram" model.)

A. Ratnaparkhi. A maximum entropy part of speech tagger. Proceedings of the Conference on Empirical Methods in Natural Language Processing, May 1996, University of Pennsylvania. (Adwait has applied maxent to several problems in natural language processing; see his web page for a more complete list.)

Adam Berger
Wed Dec 17 23:49:11 EST 1997
{"url":"http://www.cs.cmu.edu/afs/cs/user/aberger/www/html/mebib.html","timestamp":"2014-04-19T20:38:09Z","content_type":null,"content_length":"4373","record_id":"<urn:uuid:aadf8d0b-f087-4cdb-837e-60b2d339762c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
General Mathematical Identities for Analytic Functions: Summation

[Each entry below described a formula displayed as an image in the original page; the images are not reproduced here.]

Finite summation

- This formula is the definition of the finite sum.
- This formula shows how a finite sum can be split into two finite sums.
- This formula shows that a constant factor in a summand can be taken out of the sum.
- This formula reflects the linearity of finite sums.
- This formula represents the concept that the sum of the logarithms equals the logarithm of the product, which is correct under the given restriction.
- This general formula is correct without any restrictions.
- This formula is called the Dirichlet formula for a Fourier series.
- In this formula, the sum is divided into the sums of the even and odd terms.
- In this formula, the sum of a_k is divided into three sums, with the terms a_{3k}, a_{3k+1}, and a_{3k+2}.
- In this formula, the sum of a_k is divided into four sums, with the terms a_{4k}, a_{4k+1}, a_{4k+2}, and a_{4k+3}.
- This formula describes the multiplication rule for finite sums.
- This formula is called Lagrange's identity.

Infinite summation (series)

- This formula reflects the definition of convergent infinite sums (series); the accompanying conditions state when the sum converges absolutely, when it converges only conditionally (with an example), and when it diverges.
- This formula shows one way to separate an arbitrary finite sum from an infinite sum.
- This formula shows that a constant factor in the summands can be taken out of the sum.
- This formula reflects the linearity of summation.
- This formula reflects the statement that the sum of the logs is equal to the log of the product, which is correct under the shown restrictions.
- This formula is correct if all sums are convergent.
- Parseval's lemma reflects completeness in the trigonometric system.
- In this formula, the sum is split into the sums of even and odd terms.
- In this formula, the sum of a_k is split into three sums, with the terms a_{3k}, a_{3k+1}, and a_{3k+2}.
- In this formula, the sum of a_k is split into four sums, with the terms a_{4k}, a_{4k+1}, a_{4k+2}, and a_{4k+3}.
- In this formula, the sum of a_k is split into m sums, with the terms a_{mk}, a_{mk+1}, …, a_{mk+m-1} (given in two variants).
- This formula describes the multiplication rule for a series.
- This formula is called Lagrange's identity.

Double finite summation

- This formula reflects the commutativity property of finite double sums over a rectangle.
- This formula shows how to rewrite the double sum through a single sum.
- This formula shows summation over a triangle in a different order (given for several orderings).
- This formula reflects summation over a trapezium (quadrangle) in a different order (given for several orderings).

Double infinite summation

- This formula reflects the commutative property of infinite double sums over a quadrant; it holds under restrictions that provide absolute convergence of the double series.
- This formula shows how to rewrite the double sum through a single sum.
- This formula shows how to change the order in a double sum.
- This formula reflects summation over the infinite triangle in a different order (given for several orderings).
- This formula reflects summation over the infinite trapezium (quadrangle) in a different order (given for several orderings).

Triple infinite summation

- This formula shows how to change the order of summation in a triple sum.

Multidimensional infinite summation

- This formula shows how to change the order of summation in multiple sums (given in two variants).
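For reference, two of the named finite-sum identities above have standard explicit forms (reconstructed from the standard literature, since the displayed formulas are missing from this text):

Linearity of finite sums:

$$\sum_{k=1}^{n}\left(\alpha\,a_k + \beta\,b_k\right) = \alpha\sum_{k=1}^{n}a_k + \beta\sum_{k=1}^{n}b_k$$

Lagrange's identity:

$$\left(\sum_{k=1}^{n}a_k b_k\right)^{2} = \left(\sum_{k=1}^{n}a_k^{2}\right)\left(\sum_{k=1}^{n}b_k^{2}\right) - \sum_{1\le k<j\le n}\left(a_k b_j - a_j b_k\right)^{2}$$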
{"url":"http://functions.wolfram.com/GeneralIdentities/12/","timestamp":"2014-04-18T21:11:45Z","content_type":null,"content_length":"61497","record_id":"<urn:uuid:0abf4285-de37-45ec-a4da-85b2f4af172f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Garfield, NJ Math Tutor

Find a Garfield, NJ Math Tutor

...Everyone is teachable one on one when motivated. As a tutor it is very rewarding to be sitting next to a student at the exact moment they "GET IT" and then go on to do well on their test. I work from a baseline of diagnostic test results and design instruction that will address deficiencies.
16 Subjects: including precalculus, business, ACT Math, algebra 1

...Physics, including 12 years of teaching AP Physics B. My years of teaching were very enjoyable and worthwhile. I always had good results with my students.
7 Subjects: including algebra 1, algebra 2, geometry, SAT math

...I have worked for the "America Learns Program" and I have also worked for my university as a tutor. Additionally, I have experience tutoring students with special needs in classroom settings for the Newark Public School System, and I have served as a tutor for the Newark Public Library, in the ci...
25 Subjects: including prealgebra, GED, discrete math, SAT math

...I was named all-county in the 400 meter, long jump and high jump my freshman year. I was also named MVP that same year. I've played basketball since I was 14.
16 Subjects: including geometry, elementary math, precalculus, statistics

...A bit more information on my experience: I graduated from Manhattan College with a degree in High School Math Education. My first job soon after graduation was at a public high school in England, UK. There I taught grades 7-11 for 2 years.
15 Subjects: including calculus, geometry, precalculus, Spanish
{"url":"http://www.purplemath.com/Garfield_NJ_Math_tutors.php","timestamp":"2014-04-20T16:33:37Z","content_type":null,"content_length":"23526","record_id":"<urn:uuid:358f6495-753b-4753-910e-3830e980ca5b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Fwd: Comparing marginal effects of two subsamples

From: Maarten Buis <maartenlbuis@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Fwd: Comparing marginal effects of two subsamples
Date: Fri, 21 Oct 2011 09:35:09 +0200

On Thu, Oct 20, 2011 at 10:16 PM, Jianhong Chen wrote:
> I am conducting a two-way interaction with a negative binomial model.
> The reviewers asked us to do marginal effects because of the non-linear
> model. So, I split the sample according to the mean level of the
> moderator.

I would not use marginal effects in this case. The exponentiated
coefficients are incidence rate ratios, and are as easy to interpret as
marginal effects but without any of the disadvantages related to marginal
effects. Consider the example below.

The dependent variable (art) is the number of articles published in the
last three years of the PhD (I believe these are biologists, but I am not
certain). Women in an average-status school (z_phd = 0) produce
(1 - .80)*100% = 20% fewer articles than men. This effect of being a woman
increases, i.e. becomes less negative, when the school has higher status.
For every standard deviation increase in the status of the school, the
effect of being a woman increases by a factor of 1.14, i.e.
(1.14 - 1)*100% = 14%. In other words, the effect of being a woman in a
school with 1 standard deviation more status than average is
1.1440*.7996 = .91, which means that women in such a school produce only
(1 - .91)*100% = 9% fewer articles than men.

*--------------------- begin example ---------------------
use http://www.stata-press.com/data/lf2/couart2, clear
gen byte baseline = 1
sum phd if !missing(art,fem,ment,kid5,mar)
gen z_phd = (phd - r(mean))/r(sd)
nbreg art i.fem##c.z_phd c.ment##c.ment kid5 mar baseline, irr nocons
*---------------------- end example ----------------------
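To see how the two incidence rate ratios quoted above combine numerically, a
quick check (a sketch; 0.7996 and 1.1440 are the figures cited in the text,
taken here to be the fitted IRRs for fem and for the fem-by-z_phd
interaction):

    irr_fem, irr_int = 0.7996, 1.1440

    for z in (-1, 0, 1):                           # school status, in std. dev.
        print(z, round(irr_fem * irr_int**z, 3))   # -> 0.699, 0.8, 0.915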
{"url":"http://www.stata.com/statalist/archive/2011-10/msg00938.html","timestamp":"2014-04-20T06:41:04Z","content_type":null,"content_length":"10113","record_id":"<urn:uuid:4f060fe2-cdce-4699-b2d0-11feef96eca2>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Harmonic horoscopes are based on principles of resonance, like overtones, which are present in every horoscope. The whole zodiac (360°) is taken as the basic tone, representing the number one (1). By using a higher vibration, we can 'cause' the circle to oscillate more quickly, so to speak, and investigate which planets work together in this particular pattern. For example, the fourth harmonic would involve all those planets which share square aspects (90° - division of the circle by four). In the harmonic chart, these planets form conjunctions. The number corresponding to each 'vibration' influences the interpretation. At the risk over oversimplifying, one can say that the study of harmonics greatly extends and differentiates the theory of aspects. Numerical Symbolism and Aspects The aspects imply favorable, unfavorable or ambivalent relationships between the planets. Although the interpretation depends greatly on the nature of the planets involved, this view largely derives from the traditional symbolism of the numbers 1,2,3 and 4: When the circle is divided by the number one, the result is again 360° or 0° - the distance which defines a conjunction. Division by 2 gives Minor Aspects and Harmonics Considering the above, we might well wonder about the division of the circle by other numbers, for example 5, 6, 7, 8, 9 and 10. This is as far as the minor aspects go, giving us the quintile, sextile, septile, octile, novile and decile aspects. However, these aspects and their multiples can only be arrived at by calculation. Also, the interpretation of the minor aspects is by no means as clear as that of the major aspects. Using the technique of harmonics, one can concentrate on a certain division of the circle, instead of searching for minor aspects. The division of the circle is a matter of choice, e.g. by 5, by 57 or by 228, and one should be aware of the symbolic meaning of the number one chooses. Using the chosen number, for example 36, one can calculate a sort of auxiliary horoscope, which can be used for further analysis. Applied Harmonics Using the 36th harmonic chart, one can immediately see which planets form 10°-angles (360 : 36 = 10), since all of these planets will stand in conjunction. One could for example take the number 36 as indicative of an ability of fine problem solving: 36=(2x2)x(3x3), as a tension between opposites and the striving for solution multiplied many times. In that case, the 36th harmonic would tell us about someone's approach to problem-solving and of the difficulties to be reckoned with. Within the 36th harmonic one can use major aspects, one can compare the harmonic and the natal charts, one can relate transits to the harmonic positions to events, etc. Finally, the 36th harmonic can be related to the 36th year of life - since it is the 36th time one has completed the journey around the sun, the person will be sensitive to this frequency. When working with harmonics, one of the difficulties one meets with lies in determining the particular significance of each harmonic, in other words, in finding out what the symbolic value of the numerical factor is. In practice, one usually reduces the numbers in question to multiples of those numbers whose interpretation is assured ' as in the above-mentioned example. Another means of arriving at the meaning of the higher values is to resort to cross-sums and other mathematical and numerological practices. In theory, one could calculate an endless number of harmonic horoscopes for each natal chart. 
One is left with the choice of resorting to a vague numerological mysticism or working toward a systematic evaluation of the subject through comparative study. John M. Addey, Harmonics in Astrology, Fowler & Co. The most comprehensive work on this subject, with detailed instructions regarding the calculation, interpretation and application of harmonics. Michael Harding u. Charles Harvey, Working with Astrology, Arkana 1990, An introduction to harmonics which is easy to read, with lots of practical examples. A great practical guide. David Hamblin, Harmonic Charts, Aquarian Press. John Addey thought it was important too, as he wrote the foreword.
{"url":"http://www.astro.com/astrology/in_harmon_e.htm?nhor=1&cid=pzxfileM2yk5h-u1361686498","timestamp":"2014-04-21T05:38:01Z","content_type":null,"content_length":"47652","record_id":"<urn:uuid:cd4132f4-b01a-4ac9-99ba-dabf28a64301>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
PDE: a chain oscillation (March 16th 2010)

2. A flexible chain of length l is hanging from one end, x = 0, but oscillates horizontally. Let the x axis point downward and the u axis point to the right. Assume that the force of gravity at each point of the chain equals the weight of the part of the chain below the point and is directed tangentially along the chain. Assume that the oscillations are small. Find the PDE satisfied by the chain.
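For orientation, one standard derivation runs as follows (a sketch; ρ denotes the chain's linear mass density, an assumption not stated in the problem). The tension at depth x must support the weight of the chain hanging below that point, and Newton's second law for small horizontal displacements u(x, t) then gives

$$T(x) = \rho\, g\,(l - x), \qquad \rho\, u_{tt} = \frac{\partial}{\partial x}\bigl(T(x)\, u_x\bigr) \quad\Longrightarrow\quad u_{tt} = g\,\frac{\partial}{\partial x}\bigl[(l - x)\, u_x\bigr].$$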
{"url":"http://mathhelpforum.com/differential-equations/134047-pde-chain-oscillation.html","timestamp":"2014-04-19T20:19:32Z","content_type":null,"content_length":"28697","record_id":"<urn:uuid:4dc90e0c-cbf8-43d3-bd6b-8a8c31c3da2f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Suppose 3s represents an even integer. What polynomial represents the product of 3s, the even integer that comes just before 3s, and the even integer that comes just after 3s?

Answer 1: it may be 27s^3 - 12s

Answer 2: 3s is even; the even integer before is 3s - 2 and the even integer after is 3s + 2; the product is (3s - 2)(3s)(3s + 2) = (3s)(9s^2 - 4).
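The two answers agree once the second is expanded: (3s)(9s^2 - 4) = 27s^3 - 12s. A one-line symbolic check (a sketch using SymPy):

    import sympy as sp

    s = sp.symbols('s')
    print(sp.expand((3*s - 2) * (3*s) * (3*s + 2)))   # -> 27*s**3 - 12*s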
{"url":"http://openstudy.com/updates/51477c68e4b04cdfc583441e","timestamp":"2014-04-17T15:48:34Z","content_type":null,"content_length":"32417","record_id":"<urn:uuid:6896d181-c1cb-4a32-a18d-93b47e8f745a>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
More than three decades of exciting games

A couple of weeks ago we offered several ways to try to quantify the excitement of baseball games using Win Expectancy and Leverage Index. At the end of this article, we'll present the ultimate list of the greatest baseball games since 1974. That will end any debate on the subject. (You know I'm not serious, don't you?)

A quick summary of part one

We've identified several variables, each measuring an unknown portion of one (or more) of three latent dimensions: equilibrium, rally and late game importance. In most cases there is some overlapping in what the variables are measuring; for example, here is how a couple of proposed variables were introduced in part one:

    To get a sense of the importance of the last phases of a game, we can look at when the moment with the highest Leverage Index occurs. We can indicate the moment as a percentage of game played.

    Running a correlation between the Leverage Index and the percentage of game played, we can gauge the increasing tension of tight games, as opposed to the fading interest of a lopsided contest.

The two are obviously highly correlated, since a game with a growing Leverage Index has by definition the highest Leverage Index occurring late in the game. The correlation between the two variables is .85, meaning that 72 percent (.85 squared) of the variation of one variable can be explained by the variation of the other one. Thus the loss of information if just one of the two is used would be minimal; however, it would be ideal to be able to keep the non-overlapping part of the information while removing the "double counting." More on this in a few paragraphs.

A few more ways to skin this cat

Commenting on part one, Paul suggested counting the number of times the Win Expectancy crosses the 50 percent line "as a way to try to find back-and-forth type games." This 1982 epic duel between the Dodgers and the Cubs has the highest count with 29; the Tigers-Twins tiebreaker of 2009, which gave Paul the idea, has 16 crossings.

Image courtesy of FanGraphs.

I also came up with another couple of candidate variables. One looks at the moment when the winning team cashed the game. I decided to identify that moment as the one when the Win Expectancy for the winning team goes over 90 percent for good. For example, in this interleague game between the White Sox and the Reds, Cincinnati's chances of winning the game went from 75.8 percent to 93.1 percent when Ramon Hernandez lined out to shortstop and Jerry Hairston was doubled off second base; that was the 66th play of the game, and the Win Expectancy for the Reds never fell below the 90 percent line for the rest of the game (which lasted 75 plays). In such a case, the moment when the winning team cashed the game is marked at 88 percent.

The other variable looks at the moment when the losing team has its highest Win Expectancy. The combination of this with the actual maximum Win Expectancy value for the loser (as seen in part one) should give better information about the rally and its timing. Here, in the first game of a doubleheader between the Tigers and the Indians (1980), Detroit got its highest Win Expectancy (98.3 percent) when it had the game all but sealed (up by two, bases empty, two outs) in the bottom of the ninth, when 96 percent of the regulation game had been played. Gary Gray had other plans that day: He homered two batters later to tie the game and drove in the winning run in the 13th.
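All of the measures just described, along with Paul's crossing count, are easy to compute from a game's play-by-play Win Expectancy series. A sketch (the input format, a plain list of WE values, is a made-up convention for illustration):

    def crossings_50(we):
        # Times the Win Expectancy series crosses the 50 percent line.
        sides = [w > 0.5 for w in we if w != 0.5]
        return sum(a != b for a, b in zip(sides, sides[1:]))

    def moment_cashed(we, threshold=0.90):
        # Play at which the winner's WE goes over the threshold for good,
        # as a percentage of the game played (we = winner's WE by play).
        last_below = max((i for i, x in enumerate(we) if x < threshold),
                         default=-1)
        return 100 * (last_below + 2) / len(we)

    def loser_peak(we_loser):
        # Losing team's highest WE and when it occurred (percent of game).
        peak = max(we_loser)
        return peak, 100 * (we_loser.index(peak) + 1) / len(we_loser)

    demo = [0.50, 0.62, 0.45, 0.58, 0.758, 0.931, 0.95, 0.97]  # made up
    print(crossings_50(demo), moment_cashed(demo))             # -> 2 75.0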
Combining and not double counting

Factor analysis is an advanced statistical technique which can be very summarily described as doing the following: combine a high number of more or less correlated variables into a smaller number of uncorrelated ones; in the process of going from several dimensions to a bunch (usually two or three) of them, aim at losing as little information as possible; finally, the resulting variables, computed as combinations of the original ones (they are called factors), should identify latent traits of the observed phenomenon.

It looks like the right tool for the problem at hand. The ideal result for the exciting games problem would be being able to reduce the 12 variables we have used into three factors identifying equilibrium, rally and late game importance. Luckily, that's sort of what happens when performing a factor analysis on the complete data set of the regular season games played since 1974. Three factors explain 76 percent of the variability captured by the 12 original variables. (Warning: it's not a given that the original variables explain the whole phenomenon of game excitement; on the contrary, on a subjective field like this one, we can be sure that's not the case.)

Looking at the correlation between each factor and each original variable helps to understand what the factors measure. (Correlation values between -.3 and .3 are not reported, so rows with fewer than three entries have one or more factors omitted.)

    variable                  loadings (Factor1 / Factor2 / Factor3)
    crescendo                  0.86   0.32
    90th pctl LI               0.65   0.47   0.43
    mean WE swing              0.66   0.50   0.50
    time decisive play         0.58   0.44   0.37
    time game cashed           0.79   0.36
    highest LI                 0.66   0.48
    time highest LI            0.83
    highest loser WE           0.72
    top play WPA               0.31   0.78
    time highest WE loser      0.35   0.64   0.37
    distance 50-50 WE         -0.57  -0.70
    crosses 50-50 WE           0.37   0.65

Equilibrium, rally and late game importance—once more, with feeling

The first factor is highly correlated with the game crescendo (the correlation between time of the game and Leverage Index), the moment when the highest Leverage Index occurs (time highest LI), and the moment when the winning team put its hands on the game for good (time game cashed). It appears this factor defines the importance of the final part of the game. The game with the highest score on this factor is the Mets-Cardinals marathon of last April, with all the important plays occurring past the ninth inning. Coincidentally, the second game ranked by factor one was played in 1974 by the same two teams. For an example of a non-infinite game scoring high on this measurement, look at this Rangers-White Sox game from 1988. All the red bars (indicating Leverage Index of five or higher) are at the end of the contest.

Image courtesy of FanGraphs.

The second factor is correlated with the Win Probability Added by the biggest play of the game (top play WPA), the highest Win Expectancy reached by the losing team (highest loser WE) and the moment when the losing team reached its highest Win Expectancy (time highest WE loser). Summing up, this factor captures the rally component of games. The game scoring highest on this factor is a Padres-Dodgers affair from 1977: the home team actually rallied from just a one-run deficit, but that happened with two outs in the bottom of the 10th. The Expos were the leading actors in the match ranked second by the rally factor, at the expense of the Padres: they came back, again in the bottom of the 10th, from a 7-4 deficit.
Finally, the third factor is related to the closeness to the 50-50 Win Expectancy line (distance 50-50 WE) and to the measure proposed in the comments by Paul, i.e., the number of times the Win Expectancy crosses the same 50-50 line (crosses 50-50 WE). Thus, factor number three is the one identifying equilibrium. At the top of the ranking by factor three we have a 13-inning affair between the Twins and the White Sox, back in 1982. The game was never in the hands of either team, the only big break being a two-out wild pitch by Jeff Little which brought Carlton Fisk home with the go-ahead run in the eighth (quickly answered by Randy Johnson's homer in the top of the ninth).

It's time to rank every regular season game played since 1974 according to the factor analysis; to crown the most entertaining game ever, the factors should be further combined into a single index. That could be done in several ways, each one having its own merits and faults. One could be giving out subjective weights to each factor: I could lean toward ranking equilibrium higher, while someone else would prefer rally games. To produce the rankings that follow, I simply summed the ranking of a game according to each factor and sorted the games by the sum obtained. (A computational sketch of the whole pipeline, from factor analysis to the combined ranking, is given at the end of the post.)

So here are the top 10 games since 1974:

1. Brewers @ White Sox – May 8, 1984.
2. D'Backs @ Giants – May 29, 2001.
3. Padres @ Expos – May 21, 1977.
4. Mariners @ Angels – April 13, 1982.
5. Cubs @ Phillies – Sept. 29, 1980.
6. Brewers @ Expos – April 24, 2002.
7. Orioles @ Red Sox – Oct. 3, 1976.
8. Padres @ Dodgers – Sept. 13, 1982.
9. Reds @ Braves – July 18, 2007.
10. Expos @ Astros – July 7, 1985.

And here are the best since 2006, all available thanks to the MLB.tv archives (no links to results, in case you are planning to watch them):

1. Reds @ Braves – July 18, 2007.
2. Rockies @ Padres – April 17, 2008.
3. Reds @ Padres – May 25, 2008.
4. Dodgers @ Padres – April 29, 2007.
5. Athletics @ Blue Jays – April 10, 2008.
6. Pirates @ Cubs – May 8, 2008.
7. Phillies @ Nationals – Sept. 27, 2006.
8. Dodgers @ Cardinals – July 29, 2009.
9. Cardinals @ Red Sox – June 22, 2008.
10. Red Sox @ White Sox – July 9, 2006.

What's ahead

There are a couple of natural directions this series can take. One is taking into account the importance of the game in the context of the season, to rank games that are not only exciting but also meaningful; to do so, the Championship Leverage Index tool could be borrowed from Sky Andrecheck. Unfortunately, running the code to assign each game its Championship Leverage Index requires a couple of hours per season, so I won't go after it in the immediate future. What you'll likely get in the coming weeks is the ranking of postseason games and series; thus we'll come full circle in revisiting Dennis Boznango's articles from 2005 (part 1 – part 2).

References & Resources

Keep an eye on THT Live during the next few days. A gift for you is coming soon!
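The pipeline sketch referenced above (a reconstruction under assumptions: the article doesn't name its software, so scikit-learn's FactorAnalysis and pandas rank-summing stand in, and the data are random placeholders for the 12 variables):

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    games = pd.DataFrame(rng.random((500, 12)))   # one row per game

    z = (games - games.mean()) / games.std()
    fa = FactorAnalysis(n_components=3, random_state=0)
    scores = pd.DataFrame(fa.fit_transform(z),
                          columns=["late_game", "rally", "equilibrium"])

    # Loadings analogue of the table above: rows = variables, cols = factors.
    loadings = pd.DataFrame(fa.components_.T, columns=scores.columns)

    # Combine: sum each game's rank on the three factors; low sum = best.
    rank_sum = scores.rank(ascending=False).sum(axis=1)
    print(rank_sum.nsmallest(10))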
{"url":"http://www.hardballtimes.com/more-than-three-decades-of-exciting-games/","timestamp":"2014-04-20T05:56:11Z","content_type":null,"content_length":"55462","record_id":"<urn:uuid:58b3f40b-dd15-4214-930c-a57833dd932b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring Omega - A. Dekel et al.

4.4. Cluster Abundance and Correlations

If clusters can be modeled (e.g., using an improved version of the Press-Schechter formalism) as "objects" above a mass threshold in a density fluctuation field that was initially Gaussian, then the cluster mass function can be used to constrain σ_8 Ω_m^0.6 [49]. The correlation amplitude of these clusters can be compared with their abundance to give a direct measure of σ_8. Together, these results yield Ω_m and σ_8 separately [50].

Pro: The two parameters are determined from observational data that are relatively easy to obtain.

Con: The amplitude of cluster correlations still carries a large uncertainty.

Current Results: σ_8 Ω_m^0.6 is constrained [49] from cluster abundances (compare to Section 4.1), but measures of the cluster autocorrelation strength are still too uncertain to give a useful second constraint [50].
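Schematically, the two measurements combine as follows (a sketch with symbolic values A and B; as noted above, the second measurement is still too uncertain in practice):

$$\sigma_8\,\Omega_m^{0.6} = A \ (\text{abundance}), \qquad \sigma_8 = B \ (\text{correlations}) \;\Longrightarrow\; \Omega_m = \left(\frac{A}{B}\right)^{1/0.6}, \quad \sigma_8 = B .$$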
{"url":"http://ned.ipac.caltech.edu/level5/Dekel3/Dek4_4.html","timestamp":"2014-04-18T23:19:05Z","content_type":null,"content_length":"2851","record_id":"<urn:uuid:3775f4e3-9b51-45f3-9797-4b0c407da019>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: Energy sources in the next 20 years
From: Al Koop <koopa@gvsu.edu>
Date: Wed Jan 14 2004 - 11:19:39 EST

According to my calculations, your assumption is wrong. I calculate (and await verification) that 10 sq miles is about 26 million square meters. The sunlight is 1370 W/m^2, but that is only with the sun directly overhead. One can expect that half of the time it is dark, and one will get on average somewhere between 25% and 35% of the total insolation (I haven't actually calculated this, but it isn't hard to do). That means each square meter of solar panel floating on the ocean will receive a daily average of about 479 watts. But cells are only 20% efficient, so that reduces each square meter to about 96 watts. Rounding, that is about 2,500 megawatts of average power from the 10 square miles. California uses a baseline of around 10,000 megawatts; peak demand is 36,000 megawatts. Unless I have missed something (which is always a possibility), I doubt that 10 sq miles will supply California.

Since this report was considering tidal or wave power, I am thinking that the 10 sq miles is more like a 1,000-mile-long ribbon just offshore that is 52.8 ft wide. Can tidal or wave power equipment be positioned in many rows back to back?
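For the record, the arithmetic above checks out; a small script reproducing it, using the email's own figures of 35% average insolation and 20% cell efficiency:

-- Rough check of the back-of-envelope solar estimate.
sqMiles, m2PerSqMile, insolationPeak, dutyFactor, cellEff :: Double
sqMiles        = 10
m2PerSqMile    = 1609.34 ^ 2   -- square meters in one square mile (~2.59e6)
insolationPeak = 1370          -- W/m^2, sun directly overhead
dutyFactor     = 0.35          -- day/night and angle losses (upper estimate)
cellEff        = 0.20          -- photovoltaic efficiency

avgPowerMW :: Double
avgPowerMW = sqMiles * m2PerSqMile * insolationPeak * dutyFactor * cellEff / 1e6
-- ghci> avgPowerMW  ==> roughly 2484, i.e. about 2,500 MW of average output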
{"url":"http://www2.asa3.org/archive/asa/200401/0157.html","timestamp":"2014-04-17T12:39:55Z","content_type":null,"content_length":"6189","record_id":"<urn:uuid:fdf4205f-d2ee-4b8b-8bb1-94049113e5fe>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Drawing cards out of deck

September 13th 2009, 12:51 PM

So, I am having trouble with this problem. Emily draws a card from a standard 52-card deck. Let A, B, C be the events:
A: Emily draws a club.
B: Emily draws a red card.
C: Emily draws a picture card (Jack, Queen, or King).
What is the probability that at least two of the three events occur?

I have tried, but I don't think I was very successful. I have done a similar problem, I think, where I had to find the probability that it wouldn't happen and then just subtract it from 1. I tried that here but don't think I did it correctly. Any help is very appreciated!

September 13th 2009, 01:06 PM

Notation: $P(ABC)$ stands for the probability that all three events occur. Of course $P(ABC)=0$, because a club is not red. $P(AC)=\frac{3}{52}$, because there are three club face cards.
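To finish the thread's reasoning: a club cannot be red, so A and B are mutually exclusive and all three events cannot occur together; "at least two" therefore reduces to $P(AC) + P(BC) = \frac{3}{52} + \frac{6}{52} = \frac{9}{52}$. A brute-force check over the deck confirms this (a sketch; the suit and rank encodings are made up for illustration):

-- Brute-force check: P(at least two of A, B, C) over a 52-card deck.
suits = ["clubs", "diamonds", "hearts", "spades"]
ranks = ["A","2","3","4","5","6","7","8","9","10","J","Q","K"]
deck  = [(r, s) | s <- suits, r <- ranks]

isClub (_, s) = s == "clubs"
isRed  (_, s) = s == "diamonds" || s == "hearts"
isPic  (r, _) = r `elem` ["J","Q","K"]

atLeastTwo c = length (filter id [isClub c, isRed c, isPic c]) >= 2

main = print (length (filter atLeastTwo deck), length deck)
-- prints (9,52), i.e. probability 9/52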
{"url":"http://mathhelpforum.com/statistics/102093-drawing-cards-out-deck.html","timestamp":"2014-04-18T21:20:09Z","content_type":null,"content_length":"34466","record_id":"<urn:uuid:adc91077-841f-40f8-98ae-4f4c2b6e8ae6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Functorial characterization of morphisms of schemes

This question is akin in spirit to this one: Functorial characterization of open subschemes? In the above MO question, a "functorial" characterization is given for closed immersions and open immersions. I am wondering if there are similar characterizations for concepts such as universally closed, separated, proper, projective, etc.? Or maybe there are characterizations in a different flavor? In particular, I am wondering, if we work with functors (i.e., generalizations of schemes), are there notions like separated/proper/etc. morphisms between functors? Any reference or comments would be appreciated!

Comments:

The valuative criteria for properness and separatedness are expressed in the language of the functor of points. For a discussion of how to describe other properties in the functor-of-points language, see e.g. a discussion thread on the Secret Blogging Seminar. (It shouldn't be hard to find.) – Emerton Feb 16 '10 at 17:12

Also note that any property of morphisms such as affine or quasi-compact which is described by applying an absolute property to preimages of affine opens can be expressed well in functorial language as long as the corresponding absolute property can, since preimages of opens are examples of fibre products, which are most easily described via the functor of points. (See e.g. this question: mathoverflow.net/questions/15291) – Emerton Feb 16 '10 at 17:17

sbseminar.wordpress.com/2009/08/06/… – Harry Gindi Feb 16 '10 at 19:47

If $f: X\rightarrow Y$ is quasi-separated, then separatedness and properness of $f$ are equivalent to formally $\mathfrak{M}_{v}$-unramified and etale respectively. Check out EGA Ch. II, 7.2.3 and 7.2.8. – Shizhuo Zhang Feb 16 '10 at 20:29

I just want to note that currently I'm working on these questions. The approaches mentioned by Shizhuo Zhang are not appropriate when we are just given abstract functors between abelian categories. Besides, they are not handy at all. Also note that first the question has to be solved which functors come from morphisms, etc. I will post my results when they are in a readable and systematic form. – Martin Brandenburg Nov 30 '10 at 16:41

Answer 1:

Check out the paper of Kontsevich-Rosenberg, Noncommutative spaces and Noncommutative Grassmannian and related constructions. You will get what you want, i.e., the definitions of properness and separatedness of presheaves (as functors, taken as "spaces") and of morphisms between presheaves (natural transformations). Notice that these definitions are a general treatment of algebraic geometry from the functorial point of view; they have nothing to do with noncommutativity as such.

Definition of separated morphisms and separated presheaves

Let $X$ and $Y$ be presheaves of sets on a category $A$ (in particular, $CRings^{op}$).
We call a morphism $X\rightarrow Y$ separated if the canonical morphism $X\rightarrow X\times_{Y}X$ is a closed immersion.

We say a presheaf $X$ on $A$ is separated if the diagonal morphism $X\rightarrow X\times X$ is a closed immersion.

Definition of strict monomorphisms and closed immersions

For a morphism $f: Y\rightarrow X$ of a category $A$, denote by $\Lambda_{f}$ the class of all pairs of morphisms $u_{1},u_{2}: X\Rightarrow V$ equalized by $f$ (i.e., with $u_{1}f=u_{2}f$). Then $f$ is called a strict monomorphism if any morphism $g: Z\rightarrow X$ such that $\Lambda_{f}\subseteq \Lambda_{g}$ has a unique decomposition $g=f\cdot g'$.

Now we come to the definition of closed immersion: let $F,G$ be presheaves of sets on $A$. A morphism $F\rightarrow G$ is a closed immersion if it is representable by a strict monomorphism.

Examples: Let $A$ be the category $CAff/k$ of commutative affine schemes over $Spec(k)$; then strict monomorphisms are exactly closed immersions (in the classical sense) of affine schemes. Let $X,Y$ be arbitrary schemes, identified with the corresponding sheaves of sets on the category $CAff/k$. Then a morphism $X\rightarrow Y$ is a closed immersion iff it is a closed immersion in the classical sense (Hartshorne or EGA).

The definition of proper morphisms just follows the classical one: universally closed and separated. You can also find the definition of universally closed morphisms in functorial flavor in the paper I mentioned.

Comments:
Thank you! Can you give some overview or summary of sorts? – natura Feb 16 '10 at 17:46
OK, I will post here later. – Shizhuo Zhang Feb 16 '10 at 18:21

Answer 2:

Another point of view, if you follow the page you quote (Functorial characterization of open subschemes): there are also corresponding notions for separatedness, properness, and so on. Let me elaborate a bit.

What you do is identify a commutative scheme $X$ with $Qcoh_{X}$ (the Gabriel-Rosenberg reconstruction theorem). Let $f_{*}=F: Qcoh_{X}\rightarrow Qcoh_{Y}$ (assume $X,Y$ are quasi-compact and quasi-separated).

Affineness: $F$ is affine if $f_{*}$ is conservative (faithful in the abelian case), has a left adjoint functor $f^{*}$, and has a right adjoint functor $f^{!}$.

Closed immersions: Let $C_{X}=Qcoh_{X}$ and $C_{U}=Qcoh_{U}$ (suppose they are abelian categories). Then $u_{*}: C_{U}\rightarrow C_{X}$ is a closed immersion if $u_{*}$ is a categorical equivalence between $C_{U}$ and a full topologizing subcategory $C_{V}$ of $C_{X}$ (a topologizing subcategory is a full subcategory which is closed under finite direct sums and subquotients taken in $C_{X}$).

Thickenings: We call a closed immersion $U\rightarrow T$ a thickening if the smallest saturated multiplicative system in $Hom\,C_{T}$ containing $u_{*}(Hom\,C_{U})$ coincides with $Hom\,C_{T}$.

Formally smooth, formally unramified, formally etale: I will talk about these notions later. They are defined via thickenings.

Separatedness and properness: Once you have the definition of closed immersion given above, the definition of separatedness comes for free (it follows the same pattern as EGA). Properness is similar; I will formulate it later.

The reason to identify a space with the category of quasi-coherent sheaves on it is mainly noncommutative algebraic geometry. What I wrote here is the trivial case of this consideration, because we can drop the categorical language in the commutative case. The functor point of view and the categorical point of view are not equivalent in general.

Comment: Can you elaborate on the point about formally smooth morphisms? See also mathoverflow.net/questions/45338/….
– Martin Brandenburg Nov 30 '10 at 16:45
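For reference, here is the valuative-criteria formulation alluded to in the first comment, stated in functor-of-points language; this is the standard statement, with hypotheses as in EGA II, 7.2.3 and 7.2.8:

For a morphism $f: X \to Y$ of schemes, quasi-separated (and, for the properness criterion, moreover of finite type), and for every valuation ring $R$ with fraction field $K$, consider the canonical map
$$X(R) \longrightarrow X(K) \times_{Y(K)} Y(R), \qquad X(R) = \mathrm{Hom}(\mathrm{Spec}\,R,\,X).$$
Then $f$ is separated iff this map is injective for every such $R$, and $f$ is proper iff it is bijective for every such $R$.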
{"url":"http://mathoverflow.net/questions/15474/functorial-characterization-of-morphisms-of-schemes?sort=newest","timestamp":"2014-04-20T06:02:41Z","content_type":null,"content_length":"68749","record_id":"<urn:uuid:f5e41977-7142-4d93-b163-104eaa5814d2>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
A differential semantics for jointree algorithms

Results 1 - 10 of 13

- Journal of the ACM, 2000
"... We present a new approach to inference in Bayesian networks which is based on representing the network using a polynomial and then retrieving answers to probabilistic queries by evaluating and differentiating the polynomial. The network polynomial itself is exponential in size, but we show how it can be computed efficiently using an arithmetic circuit that can be evaluated and differentiated in time and space linear in the circuit size. The proposed framework for inference subsumes one of the most influential methods for inference in Bayesian networks, known as the tree-clustering or jointree method, which provides a deeper understanding of this classical method and lifts its desirable characteristics to a much more general setting. We discuss some theoretical and practical implications of this subsumption. ..."
Cited by 112 (18 self)

- International Journal of Approximate Reasoning, 2004
"... We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available Primula tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing successful compilation and efficient inference on relational Bayesian networks, whose Primula-generated propositional instances have thousands of variables, and whose jointrees have clusters with hundreds of variables. ..."
Cited by 54 (11 self)

- Journal of Artificial Intelligence Research, 2004
"... MAP is the problem of finding a most probable instantiation of a set of variables given evidence. MAP has always been perceived to be significantly harder than the related problems of computing the probability of a variable instantiation (Pr), or the problem of computing the most probable explanation (MPE). This paper investigates the complexity of MAP in Bayesian networks. Specifically, we show that MAP is complete for NP^PP and provide further negative complexity results for algorithms based on variable elimination. We also show that MAP remains hard even when MPE and Pr become easy. For example, we show that MAP is NP-complete when the networks are restricted to polytrees, and even then can not be effectively approximated. Given the difficulty of computing MAP exactly, and the difficulty of approximating MAP while providing useful guarantees on the resulting approximation, we investigate best effort approximations. We introduce a generic MAP approximation framework. We provide two instantiations of the framework; one for networks which are amenable to exact inference (Pr), and one for networks for which even exact inference is too hard. This allows MAP approximation on networks that are too complex to even exactly solve the easier problems, Pr and MPE. Experimental results indicate that using these approximation algorithms provides much better solutions than standard techniques, and provides accurate MAP estimates in many cases. ..."
Cited by 33 (3 self)

- Artificial Intelligence
"... A recent and effective approach to probabilistic inference calls for reducing the problem to one of weighted model counting (WMC) on a propositional knowledge base. Specifically, the approach calls for encoding the probabilistic model, typically a Bayesian network, as a propositional knowledge base in conjunctive normal form (CNF) with weights associated to each model according to the network parameters. Given this CNF, computing the probability of some evidence becomes a matter of summing the weights of all CNF models consistent with the evidence. A number of variations on this approach have appeared in the literature recently, that vary across three orthogonal dimensions. The first dimension concerns the specific encoding used to convert a Bayesian network into a CNF. The second dimension relates to whether weighted model counting is performed using a search algorithm on the CNF, or by compiling the CNF into a structure that renders WMC a polytime operation in the size of the compiled structure. The third dimension deals with the specific properties of network parameters (local structure) which are captured in the CNF encoding. In this paper, we discuss recent work in this area across the above three dimensions, and demonstrate empirically its practical importance in significantly expanding the reach of exact probabilistic inference. We restrict our discussion to exact inference and model counting, even though other proposals have been extended for approximate inference and approximate model counting. ..."
Cited by 22 (0 self)

- International Journal of Intelligent Systems, 2004
"... We extend the differential approach to inference in Bayesian networks (BNs) (Darwiche, 2000) to handle specific problems that arise in the context of dynamic Bayesian networks (DBNs). We first summarize Darwiche's approach for BNs, which involves the representation of a BN in terms of a multivariate polynomial. We then show how procedures for the computation of corresponding polynomials for DBNs can be derived. These procedures permit not only an exact roll-up of old time slices but also a constant-space evaluation of DBNs. The method is applicable to both forward and backward propagation, and it does not presuppose that each time slice of the DBN has the same structure. It is compatible with approximative methods for roll-up and evaluation of DBNs. Finally, we discuss further ways of improving efficiency, referring as an example to a mobile system in which the computation is distributed over a normal workstation and a resource-limited mobile device. ..."
Cited by 7 (3 self)

- In AISTATS, 2010
"... It is well-known that exact inference in tree-structured graphical models can be accomplished efficiently by message-passing operations following a simple protocol making use of the distributive law [Aji and McEliece, 2000, Kschischang et al., 2001], and that exact inference in arbitrary graphical models can be solved by the Junction-Tree Algorithm; its efficiency is determined by the size ..."
Cited by 4 (3 self)

- "... Bayesian networks (BNs) are used to represent and efficiently compute with multi-variate probability distributions in a wide range of disciplines. One of the main approaches to perform computation in BNs is clique tree clustering and propagation. In this approach, BN computation consists of propagation in a clique tree compiled from a Bayesian network. There is a lack of understanding of how clique tree computation time, and BN computation time more generally, depends on variations in BN size and structure. On the one hand, complexity results tell us that many interesting BN queries are NP-hard or worse to answer, and it is not hard to find application BNs where the clique tree approach in practice cannot be used. On the other hand, it is well-known that tree-structured BNs can be used to answer probabilistic queries in polynomial time. In this article, we develop an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, or (ii) the expected number of moral edges in their moral graphs. Our approach is based on combining analytical and experimental results. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for each set. For the special case of bipartite BNs, we consequently have two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, we systematically increase the degree of the root nodes in bipartite Bayesian networks, and find that root clique growth is well-approximated by Gompertz growth curves. It is believed that this research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms, and presents an aid for analytical trade-off studies of clique tree clustering using growth curves. ..."
Cited by 2 (2 self)

- "... Decision circuits perform efficient evaluation of influence diagrams, building on the advances in arithmetic circuits for belief network inference [Darwiche, 2003; Bhattacharjya and Shachter, 2007]. We show how even more compact decision circuits can be constructed for dynamic programming in influence diagrams with separable value functions and conditionally independent subproblems. Once a decision circuit has been constructed based on the diagram's "global" graphical structure, it can be compiled to exploit "local" structure for efficient evaluation and sensitivity analysis. ..."
Cited by 1 (1 self)

- "... In this paper we present a differential semantics of Lazy AR Propagation (LARP) in discrete Bayesian networks. We describe how both single and multi dimensional partial derivatives of the evidence may easily be calculated from a junction tree in LARP equilibrium. We show that the simplicity of the calculations stems from the nature of LARP. Based on the differential semantics we describe how variable propagation in the LARP architecture may give access to additional partial derivatives. The cautious LARP (cLARP) scheme is derived to produce a flexible cLARP equilibrium that offers additional opportunities for calculating single and multi dimensional partial derivatives of the evidence and subsets of the evidence from a single propagation. The results of an empirical evaluation illustrate how the access to a largely increased number of partial derivatives comes at a low computational cost. ..."
Cited by 1 (1 self)

- "... We propose an approach for approximating the partition function which is based on two steps: (1) computing the partition function of a simplified model which is obtained by deleting model edges, and (2) rectifying the result by applying an edge-by-edge correction. The approach leads to an intuitive framework in which one can trade-off the quality of an approximation with the complexity of computing it. It also includes the Bethe free energy approximation as a degenerate case. We develop the approach theoretically in this paper and provide a number of empirical results that reveal its practical utility. ..."
If AB/7 > 1/14 and A = B, which of the following must be gre Author Message If AB/7 > 1/14 and A = B, which of the following must be gre [#permalink] 19 Feb 2014, 01:10 25% (low) Question Stats: (02:11) correct 20% (01:05) based on 29 sessions Conquistador22 If AB/7 > 1/14 and A = B, which of the following must be greater than 1? Intern A. A+B Joined: 17 Jan 2012 B. 1-A Posts: 29 C. 2A^2 Location: India D. A^2 - 1/2 Concentration: General E. A Management, International Business [Reveal] GMAT 1: 650 Q48 V31 Spoiler: WE: Information Technology OA is C but It seems A also satisfies this condition... I calculated it as below; Followers: 0 AB/7 > 1/14 multiplying both sides by 14 2AB > 1 as A=B 2A^2 >1 and 2 B^2 > 1 Option (C) is true as per this logic While analyzing other options this is what I got... as A=B, B >1/sqrt(2) so A+B > 1/sqrt(2) + 1/sqrt(2) A+B > 2/sqrt(2) A+B > sqrt(2) This is also going to be greater than 1 Spoiler: OA Last edited by on 19 Feb 2014, 02:52, edited 1 time in total. Renamed the topic and edited the question. Re: Inequalities MGMAT [#permalink] 19 Feb 2014, 02:51 Expert's post Conquistador22 wrote: This problem is from MGMAT Algebra Strategy Guide: If AB/7 > 1/14 and A=B While of the following must be greater than 1 ? a) A+B b) 1-A c) 2A^2 d) A^2-1/2 e) A OA is C but It seems A also satisfies this condition... I calculated it as below; AB/7 > 1/14 multiplying both sides by 14 2AB > 1 as A=B 2A^2 >1 and 2 B^2 > 1 Option (C) is true as per this logic While analyzing other options this is what I got... as A=B, B >1/sqrt(2) so A+B > 1/sqrt(2) + 1/sqrt(2) A+B > 2/sqrt(2) A+B > sqrt(2) This is also going to be greater than 1 If AB/7 > 1/14 and A = B, which of the following must be greater than 1? A. A+B B. 1-A C. 2A^2 D. A^2 - 1/2 E. A First of all notice that the question asks which of the following MUST be true, not COULD be true. \frac{AB}{7} > \frac{1}{14} Bunuel 2AB>1 Math Expert . Since A = B, then Joined: 02 Sep 2009 2A^2>1 Posts: 17278 . Followers: 2862 Answer: C. As for option A: notice that from it follows that A, and therefore B, since A = B, can be negative numbers, for example, -1, and in this case A + B = -2 < 1. The problem with your solution is that means that A>\frac{1}{\sqrt{2}} OR A<-\frac{1}{\sqrt{2}} , not only Hope it's clear. P.S. Please read and follow: Pay attention to rules 3, 5, and 7. Also, please hide the OA under the spoiler. Thank you. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 
700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS What are GMAT Club Tests? 25 extra-hard Quant Tests Re: Inequalities MGMAT [#permalink] 19 Feb 2014, 04:01 Conquistador22 Thanks Bunuel, Your explanation is awesome as usual. Intern But I am a bit confused about below part. Joined: 17 Jan 2012 Bunuel wrote: Posts: 29 The problem with your solution is that A^2>\frac{1}{2} means that A>\frac{1}{\sqrt{2}} OR A<-\frac{1}{\sqrt{2}}, not only A>\frac{1}{\sqrt{2}}. Location: India As per my understanding Concentration: General Management, International A^2>\frac{1}{2} means that GMAT 1: 650 Q48 V31 A>+\frac{1}{\sqrt{2}} OR A>-\frac{1}{\sqrt{2}} WE: Information Technology (Telecommunications) . Could you please elaborate why Followers: 0 A<-\frac{1}{\sqrt{2}} Re: Inequalities MGMAT [#permalink] 19 Feb 2014, 04:06 Expert's post Conquistador22 wrote: Thanks Bunuel, Your explanation is awesome as usual. But I am a bit confused about below part. Bunuel wrote: The problem with your solution is that A^2>\frac{1}{2} means that A>\frac{1}{\sqrt{2}} OR A<-\frac{1}{\sqrt{2}}, not only A>\frac{1}{\sqrt{2}}. As per my understanding means that A>+\frac{1}{\sqrt{2}} OR A>-\frac{1}{\sqrt{2}} . Could you please elaborate why Let me ask you a question: does ? What does Math Expert Joined: 02 Sep 2009 Posts: 17278 even mean? Followers: 2862 Go through the links below to brush up fundamentals on inequalities: Hope this helps. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS What are GMAT Club Tests? 25 extra-hard Quant Tests Re: Inequalities MGMAT [#permalink] 19 Feb 2014, 06:56 Bunuel wrote: Let me ask you a question: does x^2>4 mean x>2 or x>-2? What does x>2 or x>-2 even mean? x^2>4 --> |x|>2 --> x>2 or x<-2. Joined: 17 Jan 2012 Posts: 29 This conclusion is based on below logic. Location: India x^2=25 Concentration: General has TWO solutions, +5 and -5 so I thought Management, International Business x^2>4 GMAT 1: 650 Q48 V31 would also have two solutions +2 & -2, this is how I got WE: Information Technology x>2 Followers: 0 Could you please let me know why this is not valid ? I am asking very basic question but I know only after getting my basics clear, I will be able to get good score.... 
Thanks for bearing with me and my stupid questions Re: Inequalities MGMAT [#permalink] 19 Feb 2014, 07:03 Expert's post Conquistador22 wrote: Bunuel wrote: Let me ask you a question: does x^2>4 mean x>2 or x>-2? What does x>2 or x>-2 even mean? x^2>4 --> |x|>2 --> x>2 or x<-2. This conclusion is based on below logic. has TWO solutions, +5 and -5 so I thought would also have two solutions +2 & -2, this is how I got Could you please let me know why this is not valid ? I am asking very basic question but I know only after getting my basics clear, I will be able to get good score.... Thanks for bearing with me and my stupid questions Guess you did not follow the links I proposed... Again, what does x is Math Expert Joined: 02 Sep 2009 than 2 or x is Posts: 17278 Followers: 2862 than -2 mean? What are the possible values of x in this case? For example, can x be 1, since it's more than -2? indeed has two ranges, which are . Please follow the links in my previous post for more. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS What are GMAT Club Tests? 25 extra-hard Quant Tests gmatclubot Re: Inequalities MGMAT [#permalink] 19 Feb 2014, 07:03 Similar topics Author Replies Last post If b is greater than 1, which of the following must be above720 8 20 Aug 2007, 21:03 3 If abc = b^3 , which of the following must be true? nikhilsrl 8 20 Feb 2011, 07:19 if |a|>|b|, which of the following must be true? a) ab MBAhereIcome 4 30 Aug 2011, 02:32 1 If a > b and if c > d , then which of the following must be carcass 4 19 Mar 2012, 04:39 11 If |x|>3, which of the following must be true? corvinis 23 10 Sep 2012, 01:56
{"url":"http://gmatclub.com/forum/if-ab-7-1-14-and-a-b-which-of-the-following-must-be-gre-167675.html?sort_by_oldest=true","timestamp":"2014-04-16T05:09:17Z","content_type":null,"content_length":"210591","record_id":"<urn:uuid:4b7d785e-7a4c-4a78-b5e7-798bdb3f643d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Universitatis Iagellonicae Acta Mathematica Universitatis Iagellonicae Acta Mathematica is a continuation of Zeszyty Naukowe Uniwersytetu Jagiellońskiego. Prace Matematyczne. Volume numbers are continued. Volume 24 is the first to appear under the new title. Universitatis Iagellonicae Acta Mathematica contains significant research articles in both pure and applied mathematics. Papers intended for publication must be well written and of interest to a substantial number of mathematicians. Manuscripts should be sent to: Universitatis Iagellonicae Acta Mathematica Institute of Mathematics Jagiellonian University ul. Łojasiewicza 6 30-348 Kraków
{"url":"http://www.emis.de/journals/UIAM/index.html","timestamp":"2014-04-16T10:19:31Z","content_type":null,"content_length":"4451","record_id":"<urn:uuid:7a75b7bb-1ae6-4a60-a873-ade6bf52ed6e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: What is the order of the side lengths of triangle ABC from largest to smallest? c, a, b a, b, c a, c, b c, b, a Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f8b83fee4b09e61bffcb0c0","timestamp":"2014-04-18T19:21:19Z","content_type":null,"content_length":"112549","record_id":"<urn:uuid:5a3d2e2e-af6c-48f9-abc2-74575a8a7728>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Temple, GA Algebra Tutor Find a Temple, GA Algebra Tutor ...I look forward to working with your child and successfully strengthening their academic skills.I am a certified teacher in Georgia prek-6 in all subjects. I have taught in elementary school for 10 years. My BS degree is in elementary education. 47 Subjects: including algebra 1, algebra 2, reading, chemistry ...Starting my educational career in New York (working the private and public education sectors in both general and special education) and moving to Georgia teaching high and middle school math and social studies, and preparing for administrative roles and responsibilities has made me more equipped ... 22 Subjects: including algebra 1, reading, Microsoft Excel, elementary math My name is James and I am currently working to receive my education certificate from The University of West Georgia. Although my degree is in Music, I have also been well educated in physical science, biology, human anatomy and physiology, literature, history, and mathematics. While at Auburn I pu... 26 Subjects: including algebra 1, English, reading, elementary (k-6th) ...While a summer tutor at Bayshore Christian Ministries, I had a class of third and fourth grade students where I tutored each student in math, reading, science, art, and writing in accordance to the statewide education standards. Lastly, I have acquired 10+ credit hours in Education Studies cours... 26 Subjects: including algebra 1, reading, Spanish, writing ...You will learn techniques to produce user-friendly and efficient workbooks. My pace is determined by you. I am a good listener whose main goal is for you to get what you want from the lessons. 27 Subjects: including algebra 1, algebra 2, English, physics Related Temple, GA Tutors Temple, GA Accounting Tutors Temple, GA ACT Tutors Temple, GA Algebra Tutors Temple, GA Algebra 2 Tutors Temple, GA Calculus Tutors Temple, GA Geometry Tutors Temple, GA Math Tutors Temple, GA Prealgebra Tutors Temple, GA Precalculus Tutors Temple, GA SAT Tutors Temple, GA SAT Math Tutors Temple, GA Science Tutors Temple, GA Statistics Tutors Temple, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/temple_ga_algebra_tutors.php","timestamp":"2014-04-21T07:38:11Z","content_type":null,"content_length":"23768","record_id":"<urn:uuid:402f7929-8680-415e-b94c-274fe473a5eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Instruction for ESL Literacy Students Online Resources: Digests December 1997 Reforming Mathematics Instruction for ESL Literacy Students Keith Buchanan and Mary Helman, Fairfax County Public Schools Download a PDF of this digest. English as a second language (ESL) students who have had limited or interrupted schooling in their first language--whom we refer to as literacy students--can be overwhelmed by new experiences in ESL and content courses. They must learn in a linguistically and culturally unfamiliar environment, construct understanding without the background knowledge that their classmates employ to make assumptions, and process new information. All too often, these circumstances lead to frustration for both literacy students and their teachers. Literacy students must have access to math content from the beginning of their formal education. This calls for modifications in the curricula and in the delivery of instruction. By integrating math and language teaching, innovative courses can provide experiences that bridge gaps in literacy students' math knowledge, expand their communicative competence in English, and ultimately prepare them for success in future math coursework. Correlating Mathematics with Language Skills Building In response to the call for the reform of mathematics education in the United States, the National Council of Teachers of Mathematics (NCTM) established a Commission on Standards for School Mathematics in 1986. This led to publication of Curriculum and Evaluation Standards for School Mathematics (NCTM, 1989), which included 54 standards among four divisions: Grades K-4, 5-8, 9-12, and evaluation. The NCTM standards established five goals for mathematical literacy: (1) that students learn to value mathematics; (2) that they become confident in their ability to do mathematics; (3) that they become mathematical problem solvers; (4) that they learn to communicate mathematically; (5) and that they learn to reason mathematically (NCTM, 1989). The NCTM position statement on language minority students (1994) further clarifies that, "Cultural background and language must not be a barrier to full participation in mathematics programs preparing students for a full range of careers. All students, regardless of their language or cultural background, must study a core curriculum in mathematics based on the NCTM standards." The goals articulated in the NCTM standards have special implications for math teachers who are working with literacy students. While these students have had many experiences outside of school, most of these experiences have not prepared them for success in formal classroom settings. Math teachers can make math meaningful for literacy students by designing instructional activities that build upon students' real life experiences. Lessons that provide challenging problem-solving activities at which students can succeed help to build their reasoning and problem-solving skills, as well as their confidence. For students to learn to communicate mathematically, they need opportunities to hear math language and to speak and write mathematically. NCTM Standards and Effective Instructional Strategies for Literacy Students In 1991, NCTM produced a companion document to the curriculum standards. Professional Standards for Teaching Mathematics, which provides guidelines for teachers to design an environment in which all students will develop mathematical literacy (NCTM, 1991). The guidelines require significant changes in classrooms for literacy students. 
Five of these changes are described here. 1. Select mathematics tasks that engage students' interests and intellect. Although the math concepts for literacy students may be at a basic level, the interests and intellectual abilities of these students are not. Selecting tasks that can bridge these discrepancies in ability levels is a challenge for math teachers. For example, in a lesson on calculating percentages, younger students might calculate the percentage of tax on a bicycle, while older students may use their pay stubs to calculate percentages of various categories of withholding. 2. Orchestrate classroom discourse in ways that promote the investigation and growth of mathematical ideas. Orchestrating discourse for literacy level ESL students requires the teacher to attend to teaching English in the content area, which includes both the language specific to math and additional English language skills. For example, when teaching that an obtuse angle is greater than 90 degrees, the teacher will not only have to teach the vocabulary word obtuse but may also have to teach the use of the -er suffix to show comparison in the word greater. 3. Use, and help students use, technology and other tools to pursue mathematical investigations. Many literacy students are unfamiliar with the basic tools associated with mathematics such as rulers, protractors, calculators, and computers, and need opportunities to make optimum use of these tools. When working on estimation of lengths, for example, students can use both standard and metric measuring tools to find things that measure approximately one centimeter, one decimeter, one meter, one inch, one foot, or one yard. They can then use these items to estimate the length of other objects in the classroom, check their estimates with the actual tools, and use calculators to find the percentage of error in their estimations. 4. Seek, and help students seek, connections to previous and developing knowledge. To make connections with students' prior experience, teachers must become familiar with the backgrounds of their students. Working in collaboration with other content and ESL teachers will help the math teacher provide connections with the knowledge students are developing in other classes. When students are studying data analysis and graph making, for example, the math teacher can collaborate with science or social studies teachers to build connections with work in those classes. 5. Guide individual, small-group, and whole-class work. Literacy math students benefit from a variety of instructional settings in the classroom. The teacher must guide students through individual, small-group, and whole-class activities. The introduction of a new set of vocabulary or manipulatives to the whole class, for example, can build listening and responding skills. Small-group work allows students to use language to talk about the math tasks at hand while they solve nonroutine problems. Individual work settings ensure that all students process lessons at their own rate of learning. Designing Appropriate Curricula In order to revise math curricula for literacy students, schools must address such as these: ● Who are our literacy students, and why are they unsuccessful in our present math courses? ● What is the most efficient way for students with limited time in school to learn what their classmates already know? ● How should math teachers incorporate language into daily lessons? ● Why is it appropriate to separate literacy students from other math students for a time? 
● How should literacy students' understanding of math be assessed? Responses to these questions should be used to guide curriculum development by educators from both math and ESL/bilingual backgrounds who are knowledgeable about both the school district's math objectives and the needs of second language learners from various age groups. Math instructors judge the relative importance of existing instructional objectives and, along with ESL/bilingual personnel, develop specific teaching strategies. Clustering Objectives Literacy math classes aim to teach a number of years of conventional math classes in a condensed period of time. In many cases, it is appropriate to cluster similar learning objectives across grade level boundaries. These clusters of objectives make the most efficient use of students' time in the literacy math class and also recognize that, often, older students do not require as much time to master objectives normally taught in earlier grades. In addition to saving class time, the clustering of objectives reduces the artificiality of structuring lessons where, for example, students only solve problems that involve numbers less than 100 and do not require regrouping. Clustering objectives also offers opportunities to integrate a variety of math strands into one lesson. In a geometry unit, for example, a group of students may estimate the cost of carpeting the classroom. The objectives for the lesson would read, "Identify the space inside a plane shape as its area. Find the area of simple poly-gons." In order to carry out the activity, students also demonstrate their understanding of these objectives: measure lengths of objects using customary units; multiply whole numbers, regrouping as necessary; multiply whole numbers by decimal numbers. These math skills are being used by students in a real life setting to solve a problem while mastering another objective. The teacher can assess mastery of the previously taught content and reteach where necessary while continuing to move through the curriculum. Three Important Variables The essential math objectives identified by local school jurisdictions should remain unchanged for literacy math students. In literacy math curricula, however, the objectives are clustered and condensed, modifying the scope and sequence. Next, specialized teaching strategies are developed. All the strategies take into account students' ages, English proficiency, and developmental levels. Students' Ages Innovative strategies need to be developed for 17 year-old students with beginning English skills, as well as for fourth graders whose first school experience is in an American setting. Older students benefit particularly from math curricula that take into account their previous life experiences, such as problems involving money or their new school environment. For example, high school students who are studying ordinal numbers could be given practice identifying the periods of their school schedules or explaining the order of their lockers in the hallways. The fourth grade math literacy student faces a smaller developmental gap with peers, yet may still need a period of specialized instruction. The texts and materials that native English speakers use to learn about ordinal numbers may not interest a student whose previous learning experience has never originated in books. Instruction with concrete experiences, especially incorporating math manipulatives, are effective bridges to formal math class education for literacy math students of all ages. 
Students' English Proficiency In a lesson on ordinal numbers, beginning proficiency students could complete an oral activity combining their understanding of colors with identification of the order of colored objects demonstrated by the teacher on an overhead projector. More advanced students could describe the exercise in writing. In general, less proficient learners depend more on the teacher or other students to model expected work and class behavior. A literacy math classroom will have a different look because it is enriched with extra attention to language. Charts with important vocabulary and language structures fill the walls, along with writing by the teacher and students. Students' Developmental Levels Multiple learning strategies are necessary to reach both those students who show understanding of objectives after just a few activities and those who may need continued reinforcement. Literacy math teachers report that they are constantly revising curricular objectives to break them into smaller, simpler pieces, and revising directions to incorporate previously studied vocabulary and activities. Many teachers also modify their overall teaching plan by spiraling out of an objective before it has been mastered by many in their classes, then returning to it after a period of time spent working in another area. For example, after a week spent on a unit on mental math and estimation, the teacher could redirect the class with individualized lessons on operations, incorporating the estimation skills students learned in order to predict their answers. When they return to the estimation unit, the practical value of the lesson will be clear. Assessing Literacy Math Students' Progress Just as mathematics content and instruction change to meet the needs of literacy students, teachers need to find different ways to assess literacy students' progress in mathematics. The point from which this growth is measured varies greatly from one literacy student to another but is usually far below the math and English levels of their ESL and native English speaking peers. Reliance on paper-and-pencil tests is often inappropriate because decoding the language of a test may hinder students rather than allow them to demonstrate what they understand. The use of a wide variety of assessment methods will provide a more complete picture of each literacy student's progress, patterns of development, or areas of need. Instead of focusing on what students do not know, it is important to focus on ways students can show what they do know. That information can be used to guide instruction. While grades from tests and quizzes have a legitimate place in assessment, they comprise only one part of the total picture of a student's math knowledge. National Council of Teachers of Mathematics. (1989). Curriculum and evaluation standards for school mathematics. Reston, VA: Author. National Council of Teachers of Mathematics. (1991). Professional standards for teaching mathematics. Reston, VA: Author. National Council of Teachers of Mathematics. (1994). News Bulletin. Reston, VA: Author. This Digest is drawn from Reforming Mathematics Instruction for ESL Literacy Students (1993), a National Clearinghouse for Bilingual Education (NCBE) Program Information Guide. In addition to a more in-depth discussion of the information highlighted here, the guide provides sample lessons for teaching mathematics to ESL literacy students. The guide is available on the NCBE home page. 
This report was prepared with funding from the Office of Educational Research and Improvement, U.S. Dept. of Education, under contract no. RR93002010. The opinions expressed do not necessarily reflect the positions or policies of OERI or ED.
{"url":"http://www.cal.org/resources/digest/buchan01.html","timestamp":"2014-04-18T05:42:04Z","content_type":null,"content_length":"26136","record_id":"<urn:uuid:19cb1e7e-6b73-47b8-aa0c-ce1252462682>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Continuity and Differentiation... June 21st 2008, 08:58 AM Continuity and Differentiation... I need help in figuring out the continuity of these weird looking function.. Please help! Q1) Find the maximum and minimum value of the function: f(x) = 4x^3 - x^2 - 4x + 2 on [ -1,1] and on [0,1]? Q2) Let f: [0, infinity), -> R be defined by f(x) = x sin (1/x) when x>0 and f(x) = 0 when x = 0. Figure out if 'f' is continuous and differentiable at x = 0. Q3) Define f: [-2,2] -> R by f(x) = |x^3 - 1|. f: [-2,2] -> R by f(x) = |x^3| -1. Determine the points where f is differentiable and find the derivative at those points? June 21st 2008, 09:39 AM Q2) Let f: [0, infinity), -> R be defined by f(x) = x sin (1/x) when x>0 and f(x) = 0 when x = 0. Figure out if 'f' is continuous and differentiable at x = 0. Well, to be continuous the limit must exist. $\lim_{x\to 0^{+}}xsin(\frac{1}{x})$ $-1 \leq sin(1/x) \leq 1$ so $-x \leq xsin(1/x) \leq x$ But $-x \;\ and \;\ x\rightarrow{0} \;\ as \;\ x\rightarrow{0^{+}}$ This, $\lim_{x\to 0^{+}}xsin(1/x) = 0$ Let's describe the interval where it is continuous. If we have: $f(x) = \begin{Bmatrix}xsin(1/x), \;\ xeq 0\\0, \;\ x=0\end{Bmatrix}$ By the Squeezing Theorem: $-|x|\leq xsin(1/x) \leq |x|$ And we can conclude that $\lim_{x\to 0}f(x)=0$ So f is continuous on the entire real line. June 21st 2008, 09:53 AM I need help in figuring out the continuity of these weird looking function.. Please help! Q1) Find the maximum and minimum value of the function: f(x) = 4x^3 - x^2 - 4x + 2 on [ -1,1] and on [0,1]? Q2) Let f: [0, infinity), -> R be defined by f(x) = x sin (1/x) when x>0 and f(x) = 0 when x = 0. Figure out if 'f' is continuous and differentiable at x = 0. Q3) Define f: [-2,2] -> R by f(x) = |x^3 - 1|. f: [-2,2] -> R by f(x) = |x^3| -1. Determine the points where f is differentiable and find the derivative at those points? For the limit, it also might be more apparent to let So as $x\to{0}\Rightarrow\varphi\to\infty$ So we have $\lim_{\varphi\to\infty}\frac{\sin(\varphi)}{\varph i}$ Now it might be a little more obvious June 21st 2008, 11:01 AM We can use the limit defintion to see if it is differentiable at x=0 $\lim_{x \to 0}\frac{f(x)-f(0)}{x-0}$ $\lim_{x \to 0}\frac{x\sin\left( \frac{1}{x}\right)-0}{x-0}=\lim_{x \to 0}\sin\left( \frac{1}{x}\right)$ This limit does dont exits so the function is not differentiable at x=0 June 21st 2008, 02:15 PM Thanks everyone for the solution. Could anyone please help me understanding the the third problem, the absolute one? June 21st 2008, 02:28 PM I am sorry, but I beg to differ about the sine limit. It appears to be continuous at 0. The oscillations are damped by the factor x. The limit is 0 rather approaching from the left or right. sin(1/x) is not continuous, though. You can see that from the oscillation on the graph. June 21st 2008, 02:29 PM Here is the graph of sin(1/x). It is not continuous at x=0 June 21st 2008, 03:03 PM
{"url":"http://mathhelpforum.com/calculus/42101-continuity-differentiation-print.html","timestamp":"2014-04-20T05:54:36Z","content_type":null,"content_length":"12839","record_id":"<urn:uuid:3a4d79d3-df62-4586-ad22-f311c368ff7b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
The Meaning of Differentials
December 6th 2005, 12:50 PM

The Meaning of Differentials

One thing which makes me very angry is that textbooks treat quantities such as "dy" and "dx" as numbers, yet they are not. When they come to solve a differential equation the authors simply say divide both sides by dx, or something like that; it lacks mathematical rigor. Furthermore, they give "dy" and "dx" separate meanings, but the problem is that they only make sense as dy/dx. And the only reason why we use these symbols is because this Leibniz notation sometimes is easier to use.

I was reading an article on www.wikipedia.com about this topic and they said that the symbols "dy" and "dx" in fact do have separate meanings! They explained that it uses new types of concepts known as "infinitesimal numbers"; these numbers are not real. With these numbers we can construct a rigorous meaning for the differential. This type of construction forms "non-standard analysis" because in a sense it is still analysis but without the theory of limits.

Do you agree that textbooks do not give a rigorous definition for a differential? It is perhaps that I am not using an advanced textbook; they probably address this issue. Please help.

December 6th 2005, 08:45 PM

You could think of it as a shorthand for a more convoluted argument using finite differences, some regularity conditions and limiting processes.
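For reference (standard material, not from the thread): besides infinitesimals, there is a limit-based way to give $dy$ and $dx$ separate rigorous meanings, by reading the differential of $f$ at $x$ as a linear map
$$df_x(h) = f'(x)\,h,$$
so that with $y = f(x)$ and $dx_x(h) = h$ (the differential of the identity function), the statement
$$dy = f'(x)\,dx$$
is an equality of linear maps, and the quotient $dy/dx = f'(x)$ is then literal rather than mere notation.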
{"url":"http://mathhelpforum.com/calculus/1416-meaning-differencials-print.html","timestamp":"2014-04-18T18:51:51Z","content_type":null,"content_length":"4670","record_id":"<urn:uuid:15ffddd4-7617-4808-ba08-363fa48fddf1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
The Constrained-Type-Class Problem

In Haskell, there are some data types that you want to make an instance of a standard type class, but are unable to do so because of class constraints on the desired class methods. The classic example is that the Set type (from Data.Set) cannot be made an instance of Monad because of an Ord constraint on its desired binding operation:

returnSet :: a -> Set a
returnSet = singleton

bindSet :: Ord b => Set a -> (a -> Set b) -> Set b
bindSet sa k = unions (map k (toList sa))

However, despite being the classic example, in some ways it's not a very good example, because the constraint appears only on the second type parameter of bindSet, not on the first type parameter, nor on returnSet. Another example of the problem also arises in the context of embedded domain-specific languages. When constructing a deep embedding of a computation that will later be compiled, it is often necessary to restrict the involved types to those that can be reified to the target language. For example:

data EDSL :: * -> * where
  Value  :: Reifiable a => a -> EDSL a
  Return :: Reifiable a => a -> EDSL a
  Bind   :: (Reifiable a, Reifiable b) => EDSL a -> (a -> EDSL b) -> EDSL b

While we can construct a computation using Return and Bind, we cannot declare a Monad instance using those constructors because of the Reifiable class constraint.

(Note: if you want to try out the code in this post, you'll need the following:

{-# LANGUAGE GADTs, MultiParamTypeClasses, KindSignatures, ConstraintKinds, TypeFamilies, RankNTypes, InstanceSigs, ScopedTypeVariables #-}
import GHC.Exts (Constraint)
import Data.Set hiding (map)
)

Restricted Type Classes

There have been numerous solutions proposed to address this problem. John Hughes suggested extending Haskell with Restricted Data Types: data types with attached class constraints. In the same paper, Hughes also suggested defining Restricted Type Classes: type classes that take a constraint as a parameter and impose it on all polymorphic type variables in the class methods. This latter approach was simulated several times (by Oleg Kiselyov and Ganesh Sittampalam, amongst others), before the constraint-kinds extension made it possible to encode it directly:

class RMonad (c :: * -> Constraint) (m :: * -> *) where
  return :: c a => a -> m a
  (>>=)  :: (c a, c b) => m a -> (a -> m b) -> m b

It is then straightforward to define instances that require class constraints:

instance RMonad Reifiable EDSL where
  return = Return
  (>>=)  = Bind

However, restricted type classes are new type classes: using them doesn't allow compatibility with existing type classes. If restricted type classes were already used everywhere instead of the original type classes then there would be no problem, but this is not the case. A variant of restricted type classes (suggested by Orchard and Schrijvers) is to use an associated type function with a default instance:

class Monad (m :: * -> *) where
  type Con m (a :: *) :: Constraint
  type Con m a = ()
  return :: Con m a => a -> m a
  (>>=)  :: (Con m a, Con m b) => m a -> (a -> m b) -> m b

instance Monad EDSL where
  type Con EDSL a = Reifiable a
  return = Return
  (>>=)  = Bind

An attraction of this approach is that this type class could replace the existing Monad class in the standard libraries, without breaking any existing code. EDIT: Edward Kmett points out that this claim is not true (see comment below). Any code that is polymorphic in an arbitrary monad m would be broken, as the unknown constraint Con m will need to be satisfied.
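For concreteness, here is one way the opening Set example could plausibly be written against the associated-constraint class just defined (this instance is my addition, not part of the original post, and it targets the local Monad class above, not the Prelude's):

instance Monad Set where
  type Con Set a = Ord a
  return = singleton   -- Data.Set.singleton needs no constraint, so Con Set a goes unused here
  (>>=)  = bindSet     -- bindSet's Ord b requirement is discharged by Con Set b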
Normality can be Constraining

If we don't want to modify the type class, then the alternative is to modify the data type. Specifically, we need to modify it in such a way that we can declare the type-class instance we want, but such that the operations of that type class will correspond to the operations we desired on the original data type. For monads, one way to do this is to use continuations, as demonstrated by Persson et al. An alternative (and, in our opinion, more intuitive) way to achieve the same effect is to construct a deep embedding of the computation, and restructure it into a normal form. The normal form we use is the same one used by Unimo and Operational, and consists of a sequence of right-nested >>=s terminating with a return:

t1 >>= \x1 -> (t2 >>= \x2 -> (... (tn >>= \xn -> return e)))

The first argument to each >>= is a value of the original data type, which we will call primitive operations (a.k.a. "non-proper morphisms", "effect basis", or "instruction sets"). The key feature of the normal form is that every type either appears as a type parameter on a primitive operation, or appears as the top-level type parameter of the computation. Consequently, if we enforce that all primitives have constrained type parameters, then only the top-level type parameter can remain unconstrained (which is easy to deal with, as we will show later). We can represent this using the following deep embedding:

data NM :: (* -> Constraint) -> (* -> *) -> * -> * where
  Return :: a -> NM c t a
  Bind   :: c x => t x -> (x -> NM c t a) -> NM c t a

The t parameter is the type of the primitive operations (e.g. Set), and c is the class constraint (e.g. Ord). We can define a Monad instance for this deep embedding, which applies the monad laws to restructure the computation into the normal form during construction (just like the Operational package):

instance Monad (NM c t) where
  return :: a -> NM c t a
  return = Return

  (>>=) :: NM c t a -> (a -> NM c t b) -> NM c t b
  (Return a)  >>= k = k a                           -- left identity
  (Bind ta h) >>= k = Bind ta (\ a -> h a >>= k)    -- associativity

Primitive operations can be lifted into the NM type by applying the remaining monad law:

liftNM :: c a => t a -> NM c t a
liftNM ta = Bind ta Return   -- right identity

Notice that only primitive operations with constrained type parameters can be lifted, thereby preventing any unconstrained types infiltrating the computation. Once a computation has been constructed, it can then be interpreted in whatever way is desired. In many cases (e.g. the Set monad), we want to interpret it as the same type as the primitive operations. This can be achieved by the following lowering function, which takes interpretations for return and >>= as arguments:

lowerNM :: forall a c t. (a -> t a) -> (forall x. c x => t x -> (x -> t a) -> t a) -> NM c t a -> t a
lowerNM ret bind = lowerNM'
  where
    lowerNM' :: NM c t a -> t a
    lowerNM' (Return a)  = ret a
    lowerNM' (Bind tx k) = bind tx (lowerNM' . k)

Because the top-level type parameter of the computation is visible, we can (crucially) also constrain that type. For example, we can lower a monadic Set computation as follows:

lowerSet :: Ord a => NM Ord Set a -> Set a
lowerSet = lowerNM singleton bindSet

This approach is essentially how the AsMonad transformer from the RMonad library is implemented. The idea of defining a deep embedding of a normal form that only contains constrained types is not specific to monads, but can be applied to any type class with a normal form such that all types appear as parameters on primitive operations, or as a top-level type parameter.
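A small usage sketch (my addition, not from the post): with the definitions above, Set computations can be written in do-notation and lowered at the end. On recent GHCs you would also need Functor and Applicative instances for NM c t, which the post omits:

example :: Set Int
example = lowerSet $ do
  x <- liftNM (fromList [1, 2, 3])   -- liftNM needs Ord Int, i.e. Con is satisfied
  y <- liftNM (fromList [x, x + 1])
  return (x * y)
-- evaluates to fromList [1,2,4,6,9,12]

Note that the do-block never inspects the Set directly; it builds a normal-form NM Ord Set Int value, and only lowerSet interprets it with singleton and bindSet.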
We’ve just written a paper about this, which is available online along with accompanying code. The code for our principal solution is also available on Hackage.

“An attraction of this approach is that this type class could replace the existing Monad class in the standard libraries, without breaking any existing code.”

This is unfortunately just not true. Any code that relies on polymorphic recursion is irredeemably broken by this change. It doesn't break simple code, but it makes harder cases all but impossible to handle. In the presence of polymorphic recursion, using constrained monads by default means you have to limit yourself to a particular monad — or you have to provide explicit witnesses using something like the constraints package that can be used to derive the type for the constraint for the polymorphically recursive case from the previous recursion level, but that is quite horrific and it's particular to each case. =(

EDIT: Ah, I see. Yes, my claim was bogus. Any code that works with an arbitrary Monad m would break because the constraints may not, in general, be empty. Example:

newtype Kleisli m a b = Kleisli (a -> m b)

instance RMonad m => Category (Kleisli m) where
  id :: {- Con m a => -} Kleisli m a a
  id = Kleisli return

  (.) :: {- (Con m b, Con m c) => -} Kleisli m b c -> Kleisli m a b -> Kleisli m a c
  (Kleisli f) . (Kleisli g) = Kleisli (\ a -> g a >>= f)

You can fix Category by upgrading it to take a constraint of its own, but this doesn't extend to anything that recurses polymorphically: e.g. consider how to write the moral equivalent of a Traversable instance for

data Grow a = Grow (Complete (a,a)) | Stop a

in the presence of this kind of constraint. Worse, you can't even do normal Traversable instances. =(

Hi Neil, I think this is related: Did you mean:

data Grow a = Complete (Grow (a,a)) | Stop a

Because if so, then I only half agree. Assuming a "restricted traversable" class:

class RTraversable (t :: * -> *) where
  type ConT t (a :: *) :: Constraint
  type ConT t a = ()
  traverseR :: (ConT t a, ConT t b, Applicative f) => (a -> f b) -> t a -> f (t b)

Then we can define a "normal" instance that imposes no constraints, and the polymorphic recursion does not pose a problem:

instance RTraversable Grow where
  traverseR :: (ConT Grow a, ConT Grow b, Applicative f) => (a -> f b) -> Grow a -> f (Grow b)
  traverseR g (Stop a)      = Stop <$> g a
  traverseR g (Complete aa) = Complete <$> traverseR (\ (a1,a2) -> (,) <$> g a1 <*> g a2) aa

Or is this not what you meant? I agree that an instance that actually imposes a constraint would not work due to the polymorphic recursion, e.g.

class C a where ...

instance RTraversable Grow where
  type ConT Grow a = C a
  traverseR = -- as above

“Could not deduce (C (b, b)) arising from a use of `traverseR' from the context (ConT Grow a, ConT Grow b, Applicative f)“

@James: I hadn't seen that paper; thanks for the link.

neil: Yes I meant the Complete (Grow (a,a)) case. That's what I get trying to type an example without preview. =) But, yes, in general the problem with RTraversable is it can't handle the polymorphically recursive case, and it can only handle cases that are non-polymorphically recursive by leaking the arguments for every intermediate state that gets threaded through the applicative. This means that now the Traversable instance is even less canonical, because the constraint set can be written several different ways, e.g.
by building up a tuple of the args, a level at a time, and finally mapping them into the form they need to be in to (almost) avoid ever putting functions in, etc., but that gets hairy fast. RTraversable is pretty much useless in practice as idiomatic Haskell today, because as the API for Applicative currently works you need to round-trip almost everything with functions.

traverse f (Foo a b) = Foo <$> traverse f a <*> f b

incurs constraints that you are almost assuredly not going to be able to meet. Extending Applicative with liftA2 .. liftAn would partially alleviate the problem, but working in that system doesn't appeal to me at all. =)
{"url":"http://www.ittc.ku.edu/csdlblog/?p=134","timestamp":"2014-04-20T23:46:56Z","content_type":null,"content_length":"26769","record_id":"<urn:uuid:09aae768-28b5-4727-8575-9b6eafd7a5e7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
IFNA Function - New logical function in Excel 2013

Finally MS Excel 2013 is available in the market. Along with wonderful features like Flash Fill and the Quick Analysis Tool, a lot of new features are enabled in Excel 2013. Apart from the new features, Excel 2013 introduced new functions as well. Here we are going to discuss the new logical functions in Excel 2013: IFNA and XOR. In this post we will concentrate on the function IFNA, which is only available from Excel version 2013 and above.

IFNA Function:

Syntax: =IFNA(value, value_if_na)

IFNA returns the value you specify if the formula returns the #N/A error value; otherwise it returns the result of the formula. The name actually suggests the same description. In simple words, if the result is #N/A then the result is value_if_na; if not, then the result is value. We can get a better understanding through an example. Let's say I am using the VLOOKUP function to look up the marks of students from a master marks list.

In the above example, for the third student Excel returned the #N/A error, since Krishna is not in the master list. If we then apply the AVERAGE function to the resulting marks, Excel also returns #N/A, since one or more of the inputs are errors.

So, the IFNA formula is used to replace the #N/A error with any desired value. Look here, the same example with the IFNA function:

If you observe above, all the results are the same as in the previous example except for the third student. Here, the IFNA function replaces the #N/A error with the value 0. The AVERAGE function also returns the correct value, as there is no error in the range now.

In the next post, I will explain how the XOR function works.

(IFNA Formula in excel 2013, how IFNA formula works in excel 2013, what is IFNA formula, new IFNA formula in excel 2013)
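To make the screenshot example concrete in formula form (the cell references here are illustrative assumptions, since the original images are not reproduced): with student names in column D and the master marks list in A2:B6, the wrapped lookup might look like

=IFNA(VLOOKUP(D2, $A$2:$B$6, 2, FALSE), 0)

so a missing name yields 0 instead of #N/A, and a plain =AVERAGE(E2:E4) over the wrapped results then computes normally instead of propagating the error.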
{"url":"http://lostinexcel.blogspot.com/2013/01/IFNA-Function-New-logical-function-in-Excel-2013.html","timestamp":"2014-04-21T14:41:46Z","content_type":null,"content_length":"78464","record_id":"<urn:uuid:e6d69001-9d56-4709-a34e-f4030d413603>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Proceedings of Symposia in Applied Mathematics

1992; 125 pp; softcover
Volume: 46
Reprint/Revision History: third printing 1997
ISBN-10: 0-8218-5501-8
ISBN-13: 978-0-8218-5501-0
List Price: US$34
Member Price: US$27.20
Order Code: PSAPM/46

This book is based on the AMS Short Course, The Unreasonable Effectiveness of Number Theory, held in Orono, Maine, in August 1991. This Short Course provided some views into the great breadth of applications of number theory outside cryptology and highlighted the power and applicability of number-theoretic ideas. Because number theory is one of the most accessible areas of mathematics, this book will appeal to a general mathematical audience as well as to researchers in other areas of science and engineering who wish to learn how number theory is being applied outside of mathematics. All of the chapters are written by leading specialists in number theory and provide an excellent introduction to various applications.

General mathematical audience as well as researchers in other areas of science and engineering who wish to learn how number theory is being applied outside of mathematics.

"The overall standard here is high ... it is more than useful to have a small book indicating so many applications in a subject long considered to be the most basic branch of mathematics." -- Bulletin of the London Mathematical Society

• M. R. Schroeder -- The unreasonable effectiveness of number theory in physics, communication, and music
• G. E. Andrews -- The reasonable and unreasonable effectiveness of number theory in statistical mechanics
• J. C. Lagarias -- Number theory and dynamical systems
• G. Marsaglia -- The mathematics of random number generators
• V. Pless -- Cyclotomy and cyclic codes
• M. D. McIlroy -- Number theory in computer graphics
{"url":"http://ams.org/bookstore?fn=20&arg1=psapmseries&ikey=PSAPM-46","timestamp":"2014-04-19T08:32:47Z","content_type":null,"content_length":"15609","record_id":"<urn:uuid:ea71b7f0-1ff2-4a84-b40b-046458be0c16>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Explore Teaching Examples Current Search Limits Results 11 - 20 of 20 matches Igneous Rocks Model part of Interactive Lecture Demonstrations:Examples While working in groups to facilitate peer tutoring, students use samples of four igneous rocks (gabbro, basalt, granite, and rhyolite) to observe differences in texture, color and grain size and make inferences ... Magma Viscosity Demos part of Interactive Lectures:Examples This is an interactive lecture where students answer questions about demonstrations shown in several movie files. They learn to connect what they have learned about molecules, phases of matter, silicate crystal ... Using Popcorn to Simulate Radioactive Decay part of Quantitative Skills:Activity Collection Popping popcorn in your class is an excellent way to illustrate both the spontaneity and irreversible change associated with radioactive decay. It helps students to understand the unpredictability of M&M Model for Radioactive Decay part of Quantitative Skills:Activity Collection A tasty in-class demonstration of radioactive decay using two colors of M&M's. Illustrates the quantitative concepts of probability and exponential decay. This activity is appropriate for small classes (<40 students). Exploring Radiometric Dating with Dice part of Quantitative Skills:Activity Collection An activity in which students use dice to explore radioactive decay and dating and make simple calculations. Crystallization from Melt Demonstration part of Interactive Lecture Demonstrations:Examples This demonstration uses melted phenyl salicylate to show how crystals nucleate and grow as the temperature of the liquid melt decreases. - Water Contamination Demonstration part of Interactive Lecture Demonstrations:Examples Summary: Misplaced Matter and Water Pollution The drinking water pollution demonstration provides a very simple but dramatic way to get students to think about water contamination and drinking water standards, ... Adhesion, Cohesion, and Surface Tension Demonstration part of Interactive Lecture Demonstrations:Examples This short (<5-10 minutes) pair of demonstrations uses glass slides with a very thin film of water to demonstrate the cohesive and adhesive forces of water molecules, and a needle floating on water to ... Presenting the Geologic Timescale part of Interactive Lecture Demonstrations:Examples This project has students model the geologic timescale using distance as a metaphor for time. Students give presentions spaced at distances which represent how far apart in time the events occurred. Pressure Melting of Ice: While-U-Wait part of Interactive Lecture Demonstrations:Examples In this demonstration, students get to witness pressure melting and regelation first-hand. A weight is suspended via a thin wire over an ice cube. Over the course of the course of the demonstration, the wire ...
{"url":"http://serc.carleton.edu/introgeo/browse_examples.html?q1=sercvocabs__57%3A10&results_start=11","timestamp":"2014-04-19T04:49:49Z","content_type":null,"content_length":"24701","record_id":"<urn:uuid:bc67a6af-a279-4cd7-aa62-d04d68156631>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
This module functions identically to Data.Generics.Uniplate.Data, but instead of using the standard Uniplate / Biplate classes defined in Data.Generics.Uniplate.Operations it uses a local copy. Only use this module if you are using both Data and Direct instances in the same project and they are conflicting.

The Classes

class Uniplate on where

The standard Uniplate class; all operations require this.

uniplate :: on -> (Str on, Str on -> on)

The underlying method in the class. Taking a value, the function should return all the immediate children of the same type, and a function to replace them. Given uniplate x = (cs, gen):

cs should be a Str on, constructed of Zero, One and Two, containing all x's direct children of the same type as x.
gen should take a Str on with exactly the same structure as cs, and generate a new element with the children replaced.

Example instance:

instance Uniplate Expr where
  uniplate (Val i  ) = (Zero               , \Zero -> Val i )
  uniplate (Neg a  ) = (One a              , \(One a) -> Neg a )
  uniplate (Add a b) = (Two (One a) (One b), \(Two (One a) (One b)) -> Add a b)

descend :: (on -> on) -> on -> on

Perform a transformation on all the immediate children, then combine them back. This operation allows additional information to be passed downwards, and can be used to provide a top-down transformation.

descendM :: Monad m => (on -> m on) -> on -> m on

class Uniplate to => Biplate from to where

Children are defined as the top-most items of type to starting at the root.

biplate :: from -> (Str to, Str to -> from)

Return all the top-most children of type to within from. If from == to then this function should return the root as the single child.

descendBi :: (to -> to) -> from -> from
descendBiM :: Monad m => (to -> m to) -> from -> m from

(Data a, Data b, Uniplate b) => Biplate a b

Single Type Operations

universe :: Uniplate on => on -> [on]

Get all the children of a node, including itself and all children.

universe (Add (Val 1) (Neg (Val 2))) = [Add (Val 1) (Neg (Val 2)), Val 1, Neg (Val 2), Val 2]

This method is often combined with a list comprehension, for example:

vals x = [i | Val i <- universe x]

transform :: Uniplate on => (on -> on) -> on -> on

Transform every element in the tree, in a bottom-up manner. For example, replacing negative literals with literals:

negLits = transform f
  where f (Neg (Lit i)) = Lit (negate i)
        f x             = x

rewrite :: Uniplate on => (on -> Maybe on) -> on -> on

Rewrite by applying a rule everywhere you can. Ensures that the rule cannot be applied anywhere in the result:

propRewrite r x = all (isNothing . r) (universe (rewrite r x))

Usually transform is more appropriate, but rewrite can give better compositionality. Given two single transformations f and g, you can construct f `mplus` g which performs both rewrites until a fixed point.

contexts :: Uniplate on => on -> [(on, on -> on)]

Return all the contexts and holes.

propUniverse x = universe x == map fst (contexts x)
propId x = all (== x) [b a | (a,b) <- contexts x]

holes :: Uniplate on => on -> [(on, on -> on)]

The one-depth version of contexts.

propChildren x = children x == map fst (holes x)
propId x = all (== x) [b a | (a,b) <- holes x]

para :: Uniplate on => (on -> [r] -> r) -> on -> r

Perform a fold-like computation on each value, technically a paramorphism.

Multiple Type Operations

childrenBi :: Biplate from to => from -> [to]

Return the children of a type. If to == from then it returns the original element (in contrast to children).
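A quick usage sketch (my addition; it assumes the Expr type and the Uniplate Expr instance shown in the example above, with Str/Zero/One/Two from this package):

data Expr = Val Int | Neg Expr | Add Expr Expr deriving Show

-- bottom-up rewrite of every literal, reusing the transform operation
doubleVals :: Expr -> Expr
doubleVals = transform f
  where f (Val i) = Val (2 * i)   -- only this case changes anything
        f x       = x

-- universe (Add (Val 1) (Neg (Val 2)))  collects every sub-expression
-- doubleVals (Add (Val 1) (Neg (Val 2))) == Add (Val 2) (Neg (Val 4))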
{"url":"http://hackage.haskell.org/package/uniplate-1.3/docs/Data-Generics-Uniplate-DataOnly.html","timestamp":"2014-04-17T10:55:13Z","content_type":null,"content_length":"21414","record_id":"<urn:uuid:ec3ed210-91f2-4b0f-9e0f-827529e02514>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
The Black Vault Message Forums

PI and the case for Zero Point Energy.

I'm sure everyone knows that if you draw a circle whose diameter is 1, its circumference is PI. If we apply this to sub-atomic particles... Take the diameter all the way down to the Planck Length, the smallest distance in the universe (although some interpretations say 2 times the Planck Length). Now you have a circle that is the smallest possible size. If you now apply that to a fundamental particle as defined by superstring theory, you have a string that is rotating and is at absolute minimum energy. In rotation each point on the circle can only be reached once, since you cannot move a transcendental distance, nor can you move a whole multiple of a transcendental distance, which would also be transcendental. So how long before all the locations on the circle, which can be reached only once, are exhausted? And what happens next? Once all the locations have been exhausted it is physically impossible for the system to remain at the current energy level. And the circle cannot get any smaller. So the circle must get larger, but to do that it must GAIN energy. But where does the energy come from? Answer: It must borrow it from the universe. Therefore this is a poor man's proof for the concept of zero point energy. When you are at minimum energy you can only stay there so long, and then you MUST gain energy, or cease to exist. Could this play into what happens at the center of a black hole? Can we make use of this borrowing of energy from the universe? Or is this just a load of bunk?

Sounds to me like a load of bunk.

"George Bush says he speaks to god every day, and christians love him for it. If George Bush said he spoke to god through his hair dryer, they would think he was mad. I fail to see how the addition of a hair dryer makes it any more absurd."

Zero Point Energy isn't bunk, but harnessing it requires exotic matter with properties we have yet to discover.

So in other words it is basically science fiction.

"George Bush says he speaks to god every day, and christians love him for it. If George Bush said he spoke to god through his hair dryer, they would think he was mad. I fail to see how the addition of a hair dryer makes it any more absurd."

Until further... technological developments... of some sort... if we ever get there...

Truth doesn't control you, you control it...

If it's even possible. But currently it's not, so it's science fiction.

"George Bush says he speaks to god every day, and christians love him for it. If George Bush said he spoke to god through his hair dryer, they would think he was mad. I fail to see how the addition of a hair dryer makes it any more absurd."

In 1968 Captain Kirk's communicator was science fiction. Today, almost everybody has one.

In the future, we will all be Gods!

Although it doesn't matter, since all of us today would be dead 100000000000million years ago...

Truth doesn't control you, you control it...

It's not exactly science fiction, because lab tests confirm its existence; it's just untappable for the time being. It's kind of like neutrinos: we knew they existed, but confirming it took decades, and we have no way of utilizing them either. But my description was merely a proof of concept, not how to make use of the idea. That could happen at a much higher level than the Planck-size world. This theory actually plays into another theory I have about mitigating apparent mass using magnetic fields.
The problem is generating a magnetic field big enough without needing a generator the size of an aircraft carrier. If you can borrow energy from the universe then you need a much smaller power source. Remember that gravity only works on you because it sees you. And it only sees you because of your apparent mass, relative to other masses (like the earth). If you decrease your apparent mass then gravity will see less of you. Superstring theory gives us a framework of ideas that could lead to our understanding of how mass couples you to the Higgs field (which creates what we call gravity). Remove that coupling, remove the gravitational force. It's so cool. If you've seen the science shows on TV where they show the fabric of space and how gravity distorts it: they always make the mistake of looking at it in too few dimensions, which makes it look like a cone. That is incorrect. It is not a cone. It's a shape which we don't have a name for yet. To try and visualize it: think of a hot pot-belly stove in the middle of your summer camp bunkhouse. The closer you get, the hotter you get. The further away, the cooler you get. Replace temperature with the density of space (the Higgs field). Close to a mass the Higgs field is dense. Far away, the density is lower. Your mass is nothing but the coupling of the strings in the sub-atomic particles of your body to the Higgs field. And this coupling is what distorts the fabric of space. The Higgs field itself may be made of strings which vibrate at certain frequencies. The strings in your body vibrate at a complementary frequency (actually it's more than just frequency - phase shift, amplitude, dimension, all play into it). When we learn how to manipulate the strings we can play any tune we want. For example, we can travel without using time. We can travel faster than light without violating relativity.
{"url":"http://www.theblackvault.com/phpBB3/post7811.html","timestamp":"2014-04-17T04:11:32Z","content_type":null,"content_length":"63946","record_id":"<urn:uuid:06db9cfd-ca78-4b01-b93b-bf1f09d1de78>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Two kinds of equivalence: conjugate vs. isomorphic objects

Conjugate vertices in a graph^1 or conjugate elements of a group^2 are equivalent (indistinguishable, essentially the same) in one specific structural sense. Isomorphic objects in a category are equivalent in another specific structural sense. In both cases we don't look inside the objects but declare them equivalent from the outside. In both cases equivalence has to do with isomorphism: with structure-preserving maps between the structure the conjugate elements live in and itself (automorphisms), resp. with iso arrows between the elements themselves.

Define two objects $A,B$ in a category to be conjugate when there is an isomorphism endofunctor $F$ with $F(A) = B$.

Question 1: Is it true that any two isomorphic objects are conjugate (since there is an isomorphism endofunctor that permutes them)?

The reverse is most certainly false: There are categories with conjugate objects that are not isomorphic. E.g. the graphs in the category of graphs over two fixed vertices (with graph homomorphisms as morphisms) are two such objects (#9 and #6 in the diagram below). Note that there is no morphism at all between these two graphs.

Question 2: Might it be the case that whenever two objects are conjugate-but-not-isomorphic there is no morphism between them? Or is this true only in special categories and/or special cases?

Question 3: How "normal" is it that a category contains conjugate-but-not-isomorphic objects?

Most of all I'd like to know how to think about this bewildering pair of equivalences in general terms.

Here is the complete category of graphs over two fixed vertices and an arrow whenever there is a graph homomorphism. Compositions and identities are omitted. The numbers are derived from the adjacency matrices: 0 = 00|00, 1 = 10|00, ..., 15 = 11|11.

^1 $x,y$ are conjugate iff there is a $g \in \text{Aut}(G)$ with $g(x) = y$.
^2 $x,y$ are conjugate iff there is a $g \in G$ with $gx = yg$.

ct.category-theory graph-theory

There is a simpler example of conjugate objects which are not isomorphic: take any two different objects in a discrete category. – Guillaume Brunerie Nov 2 '11 at 12:40
Also, there are (two) morphisms between your right graph and your left graph. – Guillaume Brunerie Nov 2 '11 at 12:43
Oh, sorry, if you are in the category of graphs over two fixed vertices, there are indeed no morphisms, I misread this part. – Guillaume Brunerie Nov 2 '11 at 12:47
The two notions are not unconnected -- A and B are conjugate if and only if they are isomorphic in the category of pointed categories (i.e. categories equipped with a distinguished object, with morphisms functors that strictly preserve these). – Finn Lawler Nov 2 '11 at 13:04
@Hans: the former -- [A is conjugate to B in the category X] iff [(A,X) is isomorphic to (B,X) in */Cat]. – Finn Lawler Nov 2 '11 at 15:49
The first one emphasizes "categories as universes" point of view, where we are interested in properties of objects in a category. On the other hand the notion of conjugacy seems to concern symmetries of the category itself, which emphasizes "categories as structures" point of add comment Question 2 is false for some finite categories. There are two objects $A$ and $B$ that both have an idempotent endomorphism to themselves. There is a morphism from $A$ to $B$ and one from $B$ to $A$, their composition is the idempotent endomorphism. (The best realization I have for this is two finite simple groups, neither being a subgroup of the other, without their automorphisms or with the same automorphism group.) up vote 3 However, Question 2 is true for finite categories where all endomorphisms are invertible. down vote Suppose there is an arrow $A\to B$ and an isomorphism $F(A)=B$. $F$ must have some finite order $n$. Consider the arrows $F(A\to B),F(A\to B), F^2(A\to B),...,F^{n-1}(A\to B)$. These arrows form a cycle, therefore they all have inverses. add comment Not the answer you're looking for? Browse other questions tagged ct.category-theory graph-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/79825/two-kinds-of-equivalence-conjugate-vs-isomorphic-objects?sort=newest","timestamp":"2014-04-20T13:56:05Z","content_type":null,"content_length":"61628","record_id":"<urn:uuid:8e81e260-99b8-4017-9230-94edbb9ae5a4>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
investigate how output from photocell depends on distance from infrared point source

To measure the current of the photocell, can I just make a circuit with a photocell and an ammeter? This seems to be a pretty simple circuit to me and I think maybe it should be more complicated. Also, just out of curiosity, how does an op-amp convert current to voltage? I can't find it anywhere.

Yes, as stated in that other thread, with a reasonably sensitive current setting on your DVM, you will be able to measure about a decade of photocurrent variation, starting very close to the source. But to get much of a plot of photocurrent over several decades, you will want to make a simple current-to-voltage converter circuit using a CMOS opamp. The linearity of the photodiode is also improved by placing a reverse bias across it of several volts, which you can do if you take the anode of the photodiode to V- instead of ground. On your last question, I just googled "current to voltage converter photodiode" and got lots of good hits. Here's one of the first ones: and as I just mentioned, consider taking the anode of the photodiode to V- instead of Ground (the first figure in that link shows it connected to Ground).
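For the op-amp question, the standard answer (general electronics background, not from the thread itself) is the transimpedance amplifier: the photodiode feeds the op-amp's inverting input, which feedback holds at virtual ground, so essentially all the photocurrent is forced through the feedback resistor R_f and the output is

V_out = -I_photo × R_f

Choosing R_f sets the current-to-voltage gain (e.g. 1 µA through 1 MΩ gives -1 V), and a small capacitor across R_f is commonly added for stability.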
{"url":"http://www.physicsforums.com/showthread.php?t=119122","timestamp":"2014-04-21T04:40:29Z","content_type":null,"content_length":"66932","record_id":"<urn:uuid:b14cdc50-11fc-4ef3-aa57-c58c2deac336>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Pathfinding - Java-Gaming.org Well, Martian Madness has morphed into a sort of team based RTS and I've just started considering path finding. I have an free form map in which I can place obstacles.. any one have any useful links/ info on how to perform path finding between two points.. And yes.. I'm just off to trawl google
{"url":"http://www.java-gaming.org/index.php?topic=2965.msg28381","timestamp":"2014-04-16T18:34:13Z","content_type":null,"content_length":"145726","record_id":"<urn:uuid:bdf1d7df-303d-4a38-b32f-00b42f8e0b5d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Universal Turing machine on Antonio E. Porreca
What did Alan Turing have in mind when he conceived his universal computing machine? One could speculate that his train of thought was like this:
1. I can simulate any of my machines (there's experimental evidence, and of course I defined them to work like people doing maths on a piece of paper).
2. I've already formulated a computability thesis that says: if I can do it, then a computing machine also can.
3. But then, there must be a universal computing machine! Let's fill in the details…
Or maybe he came to that realisation in some completely different way. Possibly, directly from Gödel's work? Who knows, maybe there even exists something written by Turing himself on this subject.
{"url":"http://aeporreca.org/tag/universal-turing-machine/","timestamp":"2014-04-18T22:05:31Z","content_type":null,"content_length":"20980","record_id":"<urn:uuid:c9119013-26b8-4621-a76e-928809751752>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
WebDiarios de Motocicleta

I am told the least this blog could do is to talk about my own results :) So here goes. Next week at STOC, I am presenting "Changing Base without Losing Space," a joint paper with Mikkel Thorup and Yevgeniy Dodis. The paper and slides can be found . The paper contains two results achieved by the same technique. Today, I will talk about the simpler one: online prefix-free codes.

The problem is to encode a vector of bits of variable length in a prefix-free way; that is, the decoder should be able to tell when the code ends. (Note on terminology: In information theory, this is called "universal coding"; prefix-free coding is about coding letters from a fixed alphabet, e.g. the Huffman code is prefix-free.) Let N be the (variable) length of the bit vector. Here are some classic solutions (known as Elias codes):

1. A code of 2N bits: after each data bit, append one bit that is 0 for end-of-file (EOF) or 1 if more data is coming;
2. A code of N+2lg N bits: at the beginning of the message, write N by code 1; then write the bit vector.
3. A code of N+lg N+2lglg N bits: at the beginning, write N by code 2; then write the bit vector.
4. Recursing, one obtains the optimal size of N+lg N+lglg N+...+O(lg*N)

Prefix-freeness turns out to be very useful in (practical) cryptography. For instance, if I want a signature of a file, I could use some standard cypher that works on small blocks (say, AES on 128 bits), and chain it. However, this is only secure if the input is prefix-free, or otherwise we are vulnerable to extension attacks. This creates the need for online prefix-free codes: I want to encode a stream of data (in real time, with little buffering), whose length is unknown in advance. In this setting, the simplest solution using 2N bits still works, but the others don't, since they need to write N at the beginning. In fact, one can "rebalance" the 2N solution into an online code of size N+O(√N): append a bit after each block of size √N, wasting a partially-filled block at the end. Many people (ourselves included) believed this to be optimal for quite some time... However, our paper presents an online code with ideal parameters: the size is N+lg N+O(lglg N), the memory is only O(lg N), and the encoding is real time (constant time per symbol). Since the solution is simple and practical, there is even reason to hope that it will become canonical in future standards!

So, how do we do it? I will describe the simplest version, which assumes the input comes in blocks of b bits and that b≫2lg N (quite reasonable for b=128 as in AES). Each block is a symbol from an alphabet of size B=2^b. We can augment this alphabet with an EOF symbol; in principle, this should not cost much, since lg(B+1)≈lg B for large B. More precisely, N symbols from an alphabet of B+1 have entropy N·lg(B+1) = N·b+O(N/B) bits, so there's negligible loss if B≥N. The problem, though, is to "change base without losing space": how can we change from base B+1 (not a power of two) into bits in real time? A picture is worth 1000 words: We can think of two continuously running processes that regroup two symbols into two symbols of different alphabets:

• Split: Two input symbols in alphabet B+1 are changed into two symbols in alphabets B-3i and B+3(i+1), for i=0,1,2,... This works as long as (B-3i)(B+3i+3) ≥ (B+1)^2, which is always the case for N^2 ≤ B/4 (hence the assumption b≫2lg N).
• Merge: Two input symbols in alphabets B-3i and B+3i are regrouped into two symbols in alphabet B, which can be written out in binary (b bits each). This is always possible, since (B-3i)(B+3i) = B^2 - 9i^2 ≤ B^2.

Encoding and decoding are fast and simple: they amount to a few multiplications and divisions for each block. And that's it!

5 comments:

This is very neat. I believe that if you exploit that there can be no two EOFs in a row, you can replace the constant 3 by 2, and lose nothing in the splitting step.
Blog request: At some point you gave a talk about a combinatorial view of FFT. Any chance you could blog about that?

Rasmus, I like your idea of doing B and B+2 using no consecutive EOFs, but it will only work for the first double block. For the second, we will need (B-2)(B+4) = B^2 + 2B - 8 >= (B+1)^2 - 1 = B^2 + 2B, which is false. Ironically, replacing 3 by 4 is actually better :).

Elias codes correspond to algorithms for the unbounded search problem, your code 2 corresponding to doubling search [Bentley and Yao, 1976], and any adaptive search algorithm yields a (compressed) prefix-free code. Did you ponder what search algorithm corresponds to this new code, and what the cryptographic property of the code corresponds to for the search problem? Thinking about it quickly, the search algorithm looks like a cache-friendly version of doubling search, in a cache-aware model, but I did not dig more into it, since you probably already explored this avenue?

Jeremy: I didn't think about this. What would you hope to achieve? Doubling search is optimal, at least within constant factors...

Mihai: Doubling search is optimal in the number of comparisons, but its access pattern is very cache-unfriendly (you end up comparing one element in each block on long runs). The obvious theoretical answer is to replace it with some finger search in a B-tree, but somehow, nobody I know does this in practice. When I tried to optimize (experimentally) the running time (as opposed to the number of comparisons performed) of doubling search (in the context of intersection algorithms for sorted arrays), some cache-oblivious techniques did yield some improvements, but only for some of the biggest (intersection) instances. Had I considered cache-aware improvements, I think I would have used some blocking looking a bit like your larger alphabet.
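To make the two regrouping steps concrete, here is a minimal sketch (my own illustration, not code from the paper; the names are invented, and it performs only single split/merge steps, not the streaming pipeline):

-- one Split step: digits x,y in base (B+1) become a digit in base (B-3i)
-- and a digit in base (B+3(i+1)); valid while (B-3i)(B+3i+3) >= (B+1)^2,
-- since then the combined value x*(B+1)+y < (B+1)^2 fits the mixed radix
splitStep :: Integer -> Integer -> (Integer, Integer) -> (Integer, Integer)
splitStep b i (x, y) = (x * (b + 1) + y) `divMod` (b + 3*i + 3)

-- one Merge step: digits u,v in bases (B-3i) and (B+3i) become two base-B
-- digits, i.e. 2b output bits; valid since (B-3i)(B+3i) <= B^2
mergeStep :: Integer -> Integer -> (Integer, Integer) -> (Integer, Integer)
mergeStep b i (u, v) = (u * (b + 3*i) + v) `divMod` b

Each step just re-encodes the combined value in a different mixed radix via one multiplication and one division, which is where the "few multiplications and divisions per block" come from.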
{"url":"http://infoweekly.blogspot.com/2010/06/prefix-free-codes.html?showComment=1275520605360","timestamp":"2014-04-16T07:13:16Z","content_type":null,"content_length":"60885","record_id":"<urn:uuid:6f4866a3-37e5-417d-a615-880b30b7b146>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of modular lifts with prescribed parameters

Let $\bar{\rho} : Gal(\bar{\mathbb{Q}}/ \mathbb{Q} ) \rightarrow GL_2(\bar{\mathbb{F}}_p)$ be an odd, irreducible Galois representation mod $p$ which is unramified outside $S$, where $S$ is a finite set of primes which contains $p$. Fix an integer $k \geq 2$ and a local Galois representation $\rho _p ' : Gal(\bar{\mathbb{Q}} _p/ \mathbb{Q} _p ) \rightarrow GL_2(\bar{\mathbb{Q}} _p)$.

Question: Is there a way to compute precisely a number (which is finite) of modular lifts $\rho : Gal(\bar{\mathbb{Q}}/ \mathbb{Q} ) \rightarrow GL_2(\bar{\mathbb{Q}}_p)$ of $\bar{\rho}$, such that

1) $\rho _{|Gal(\bar{\mathbb{Q}} _p/ \mathbb{Q} _p )} \simeq \rho _p '$
2) $\rho$ comes from a modular form of weight $k$.
3) $\rho$ is unramified outside $S$.

I'd like to understand it even in the simplest (?) case, when $S =$ {$p$}, $k=2$, so that $\bar{\rho}$ actually comes from a reduction of some level 1 form (after Serre's conjecture). Is it true that in this case, a modular lift $\rho$ will be unique (if it exists)?

galois-representations nt.number-theory modular-forms

As Kevin said, I'm not sure if you can get a formula in any concrete sense, but there is a way to tell if the modular form that gives rise to $\bar\rho$ is the unique form (of a specific level). Say that $\bar\rho$ takes values in $GL_2(k)$ and that $f \in S_k(N,\mathcal{O})$ gives rise to $\bar\rho$ (so we have that $k$ is the residue field of $\mathcal{O}$). Then you can show (see for instance Section 4.1 of Darmon, Diamond, Taylor - Fermat's Last Theorem) that the number of forms $g$ which are congruent to $f$ mod $p$ (i.e. $f$ and $g$ have the same residual representation) is equal to the rank (as an $\mathcal{O}$-algebra) of a certain completed Hecke ring, $\mathbb{T}$. In particular, if this rank is 1, then $f$ is the unique form which gives rise to $\bar\rho$. Thanks to the general $R=T$ theorems, one can study the rank of $\mathbb{T}$ by studying the deformation theory of the representation $\bar\rho$. From this point of view, the rank is something you can almost get your hands on (and by "almost", I mean, "isn't completely hopeless"). That's because it is a standard fact that $R$ is a quotient of a power series ring over $\mathcal{O}$ in $d$ variables, where $d$ is the dimension of a specific subgroup $H\subset H^1(G_S, ad^0\bar\rho)$. This subgroup $H$ is a so-called "Selmer group" since it is the kernel of a global-to-local map on cohomology, defined by specifying local conditions at all primes in $S$. There are lots of choices of local conditions, and each choice will lead to a different Selmer group. You can read "Deforming Galois representations and the conjectures of Serre and Fontaine-Mazur" by Ravi Ramakrishna for local conditions that deal with "modular" deformations (the same conditions appear in "On icosahedral Artin representations, II" by Richard Taylor). Now, since you have a local-at-$p$ representation in mind, you might have to alter these local conditions a bit (but not too much).

The point, though, is that if you can show that $H=0$, then we have that $R$ is a quotient of a power series ring over $\mathcal{O}$ in 0 variables. Since completed Hecke rings are known to be finite flat complete intersections, we conclude that $R = \mathbb{T} = \mathcal{O}$ when $H=0$.
As mentioned above, the $\mathcal{O}$-rank of $\mathbb{T}$ corresponds to the number of modular forms giving rise to $\bar\rho$. Thus, we get to the punchline: $f$ is the unique modular form (of weight $k$ and level $N$) giving rise to $\bar\rho$ if and only if $H=0$. The natural question to ask now is, "When is $H=0$?" This, of course, is the hard part. If you are willing to allow finitely many additional primes into your ramification set, then using the methods of Ramakrishna (Op. cit.) you can force $H=0$. If you want a modular form of optimal Serre conductor, then I don't know if anybody knows how to do this. Studying whether or not $H =0$ as you vary the set $S$ is the subject of a paper of mine with Ramakrishna (whose contents are basically my thesis), "New Parts of Hecke Rings". You can look at that for ideas on how to determine (in specific settings) if $H=0$.

Thanks a lot! That's more or less what I need in my case. – Przemyslaw Chojecki Feb 9 '12 at 10:55
– Przemyslaw Chojecki Feb 8 '12 at 0:04 add comment Not the answer you're looking for? Browse other questions tagged galois-representations nt.number-theory modular-forms or ask your own question.
{"url":"http://mathoverflow.net/questions/87803/number-of-modular-lifts-with-prescribed-parameters/87844","timestamp":"2014-04-19T22:12:38Z","content_type":null,"content_length":"62589","record_id":"<urn:uuid:3ee1fa26-6026-42a3-8753-32c84ba35d58>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Neil O'Connell Neil O'Connell's home page Mathematics Institute University of Warwick Coventry CV4 7AL Phone: +44-24-7652 8337 Fax: +44-24-7652 4182 E-mail: n.m.o-connell at warwick.ac.uk Office: C2.19 Preprints (on arXiv). List of publications. Slides for talks Path-transformations in probability and representation theory, Bielefeld, December 2007. Directed polymers and the quantum Toda lattice, MSRI, December 2010 (with some minor corrections). Tropical combinatorics and Whittaker functions, Bielefeld, August 2011. Exactly solvable random polymers and their continuum scaling limits , Saclay, May 2012. Geometric RSK correspondence and Whittaker functions, Leeds, November 2012. Random matrices and related stochastic processes, Durham, November 2012. Combinatorial aspects of random polymers, Bristol, May 17, 2013. From Pitman's 2M-X theorem to random polymers and integrable systems, Doob Lecture, 36th Conference on Stochastic Processes and their Applications, Boulder, July 2013. Geometric RSK and Whittaker functions, Bielefeld, December 2013. Stochastic analysis seminar Probability at Warwick Stochastic integrable systems reading seminar 2011/2012 EPSRC Warwick Symposium in Probability PARTE: Probability and representation theory in
{"url":"http://homepages.warwick.ac.uk/~masgas/","timestamp":"2014-04-20T23:33:16Z","content_type":null,"content_length":"3353","record_id":"<urn:uuid:d0264bbb-0e44-40d3-a033-bd87a1d2d34c>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the median of the data set? 23 19 13 11 17 19
A. 12 B. 17 C. 18 D. 19

The median is the middle value of the data set once it is arranged in order.
1) First rearrange them in order from smallest to largest: 11, 13, 17, 19, 19, 23.
2) Identify the middle value of the data set. We can see it's 17 and 19 — we have two in this case, so we have to average them: (17 + 19) / 2 = 18.
18 is our median value.

or u could just count them off

Thank you :)
{"url":"http://openstudy.com/updates/50ff0e5de4b0426c636811c6","timestamp":"2014-04-19T17:26:41Z","content_type":null,"content_length":"34976","record_id":"<urn:uuid:296b8176-bed7-4a45-95a6-175261641714>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Problems 3.26

3.26 Space-time Poisson process

Consider a highway that starts at x = 0 and extends infinitely eastward toward increasing values of x. Automobile accidents and breakdowns occur along the highway in a Poisson manner in time and space at a rate y per hour per mile. Any accident or breakdown that occurs remains at the location of occurrence until serviced. At time t = 0, when there are no unserviced accidents or breakdowns on the highway, a helicopter starts from x = 0 flying eastward above the highway at a constant speed s. As a service unit, the helicopter will land at the site of any accident or breakdown that it flies over. Moreover, given that at time t the helicopter is located at x = st, the helicopter can be dispatched (by radio) to service any accident or breakdown that occurs behind it (i.e., at values of x < st). We assume that any such dispatch occurs immediately after the accident or breakdown occurs. We are interested in the time the helicopter first becomes busy, either by landing at an accident/breakdown site or by being dispatched to an accident/breakdown behind its current position; in the latter case, the instant of dispatch (not the time of arrival at the scene) is the time of interest.

a. Let T = time that the helicopter first becomes busy.

b. Let F_T(t) = P{T <= t}. Show that F_T(t) = 1 - e^{-yst^2}. Hint: Condition on the event that the first accident/breakdown occurs in the time interval (t, t + dt).

c. Let f_T(t) = dF_T(t)/dt. Show that f_T(t) = 2yst e^{-yst^2}.

d. Suppose that

L[1] = time of first accident/breakdown that the helicopter flies over, assuming that it is no longer dispatched by radio (i.e., all incidents are helicopter-discovered incidents)
L[2] = time of first accident/breakdown that the helicopter is dispatched to, assuming that it never services accidents/breakdowns that it flies over

Then, for instance, T = Min[L[1], L[2]]. Show that L[1] and L[2] are identically distributed Rayleigh random variables, each with parameter y. Finally, argue that L[1] and L[2] are independent, thereby concluding that the minimum of two independent Rayleigh random variables, each with parameter y, is itself a Rayleigh random variable with parameter y.
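A sketch of one standard way to see parts (b)-(d) (my reconstruction, guided by the problem's own part (d); the garbled formulas in (b) and (c) above were filled in to match it):
$$P\{L_2 > t\} = \exp\Big(-y \int_0^t s\tau \, d\tau\Big) = e^{-yst^2/2}, \qquad
P\{L_1 > t\} = \exp\Big(-y \int_0^{st} \frac{x}{s} \, dx\Big) = e^{-yst^2/2},$$
since $L_2 > t$ means no incident in the space-time triangle $\{(x,\tau): \tau \le t,\ x \le s\tau\}$ (dispatches), and $L_1 > t$ means none in $\{(x,\tau): x \le st,\ \tau \le x/s\}$ (flyovers). The two triangles are disjoint, so by the independence of a Poisson process over disjoint regions, $L_1$ and $L_2$ are independent, and
$$P\{T > t\} = P\{L_1 > t\}\,P\{L_2 > t\} = e^{-yst^2}, \qquad f_T(t) = \frac{d}{dt}\big(1 - e^{-yst^2}\big) = 2yst\,e^{-yst^2}.$$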
{"url":"http://web.mit.edu/urban_or_book/www/book/chapter3/problems3/3.26.html","timestamp":"2014-04-19T09:44:26Z","content_type":null,"content_length":"6253","record_id":"<urn:uuid:b6ff5d26-08df-44df-8bd3-5627786f0bec>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Taylor or Laurent series

May 4th 2010, 12:56 PM #1
Got a bit stuck with the series expansion of the following. I am given a function f(z) = 1/(2z^2+(6z-4i)z-12i). How do I work out the series expansion of f in the region $|z_1|<|z|<|z_2|$? I am also unsure as to how you would tell whether it is a Taylor or a Laurent series. Your help would be much appreciated. Thank you

May 4th 2010, 01:48 PM #2 (MHF Contributor)
1) Are you certain you have written it correctly? It seems there may be an extra 'z' in the middle, there.
2) Solve for singularities. With that extra 'z' gone, I get z = -3 and z = 2i. Is that what you get?
3) Having established singularities, you get to decide what region to work with. You have one selected for you. That is good. Using the central region, one of your singularities is not really a singularity. Well, it still is, we just don't care for the moment.
4) The partial fraction decomposition will then help you on your way. I get the delightfully straightforward:
$\frac{1}{2}\cdot\frac{1}{3+2i}\cdot\left(\frac{1}{z-2i}-\frac{1}{z+3}\right)$
Now what?

May 4th 2010, 11:54 PM #3
Sorry, my mistake, it should read: f(z) = 1/(2z^2+(6-4i)z-12i). I don't understand how you put it into partial fractions; mine's not working out!
For your solution: $\frac{1}{2}\cdot\frac{1}{3+2i}\cdot\left(\frac{1}{z-2i}-\frac{1}{z+3}\right)$
How do you know whether it fits the Taylor or Laurent series? And then how do you put it into either? Thank you!

May 5th 2010, 04:57 AM #4 (MHF Contributor)
You must factor the denominator and follow the tried-and-true methods for partial fractions. I also pulled out some constants, so it would look simpler. Just try it again and be more careful.
Your assignment is to find the series expansion in the region between the singularities. Thus, only the singularity with lesser magnitude will need Laurent. The singularities are at z = 2i and z = -3, with $|2i| = 2$ and $|-3| = 3$. There, you can now order the washer-shaped region and carry on.
Last edited by TKHunny; May 6th 2010 at 06:07 PM.

May 6th 2010, 06:07 PM #6 (MHF Contributor)
Let's see what you managed.
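A numerical sanity check of the decomposition and of the two-sided expansion in the annulus 2 < |z| < 3, as a sketch only; the truncation order and the test point are arbitrary choices:

```python
# Check f(z) = 1/(2z^2 + (6-4i)z - 12i) = C*(1/(z-2i) - 1/(z+3)), C = 1/(2(3+2i)),
# and its Laurent series in the annulus 2 < |z| < 3:
#   1/(z-2i) = sum_{m>=1} (2i)^(m-1) / z^m       (valid for |z| > 2)
#  -1/(z+3) = -sum_{n>=0} (-1)^n z^n / 3^(n+1)   (valid for |z| < 3)
C = 1 / (2 * (3 + 2j))
z = 2.5 * 1j**0.5                      # test point with |z| = 2.5, inside the annulus

f = 1 / (2*z**2 + (6 - 4j)*z - 12j)
pf = C * (1/(z - 2j) - 1/(z + 3))      # partial-fraction form

N = 60                                 # truncation order (arbitrary)
principal = sum(C * (2j)**(m - 1) / z**m for m in range(1, N))
analytic = sum(-C * (-1)**n * z**n / 3**(n + 1) for n in range(N))
series = principal + analytic

print(abs(f - pf))       # ~1e-17: decomposition matches
print(abs(f - series))   # ~1e-5: truncated Laurent series converges to f here
```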
{"url":"http://mathhelpforum.com/advanced-math-topics/143047-taylor-laurent-series.html","timestamp":"2014-04-19T06:56:09Z","content_type":null,"content_length":"44974","record_id":"<urn:uuid:4c3aec34-e424-4a9a-9b93-0ce20bbe90e3>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
What does it mean when a question is closed in the middle of a solution being developed in the Mathematics subject?

Best Response:
It is not closed after 5 minutes, no. Likely the person had another question, and once they saw someone was helping them, posted the other question so that they could get help on that one as well. If they stop interacting, I would check if they've posted a new question and report them for clearly seeking answers instead of help.

Reply:
@robtobey, you should post your own question, then close it just to find out how it works from experience.
{"url":"http://openstudy.com/updates/4ff88ea8e4b058f8b7631121","timestamp":"2014-04-21T02:08:37Z","content_type":null,"content_length":"30419","record_id":"<urn:uuid:f165a039-eeca-401b-8020-01b8387662eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Hausdorff measure question

Say we have some compact metrisable topological space $X$ with a measure $\mu$ defined on the Borel sets of $X$. Then is there some way to determine whether $\mu$ is the Hausdorff measure associated to some metric $d$ compatible with the topology of $X$? And if so, is there some process to recover a metric from the measure?

I'd imagine that there would have to be some conditions placed on the space $X$, e.g. that it's connected, and it might even be necessary to assume that it's some nice space such as a manifold, with a "gauge" metric $d_{0}$ relative to whose Hausdorff measure $\mu$ is absolutely continuous, but I'd like to ask the question in the greatest generality possible, in the hope that there is an answer out there.

Tags: mg.metric-geometry, dg.differential-geometry

Comments:
I guess you want to say "recover a metric", but not "the metric"; clearly there are many metrics which give the same Hausdorff measure... – Anton Petrunin, Jan 31 '10

I just edited the question to account for your comment, but it leads to another naive question; I can see how many different metrics on a disconnected set (e.g. a discrete set) would lead to the same Hausdorff measure, but is there a simple example in the case of connected sets? – Gordon Craig, Feb 1 '10

Sure. Just change the standard metric on the real line to $\operatorname{arctan}|x-y|$. All Hausdorff measures associated with power functions will stay exactly the same. – fedja, Feb 1 '10
{"url":"http://mathoverflow.net/questions/13538/hausdorff-measure-question","timestamp":"2014-04-20T08:44:48Z","content_type":null,"content_length":"50060","record_id":"<urn:uuid:c1af808c-7bff-40fe-b808-d35a5495f8d0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Is encryption really crackable? | ZDNet

Summary: Is encryption really crackable or not? What about the danger that zombie networks pose if they're ever unleashed on an encryption stream? Can Moore's law ultimately break encryption? This article will dispel some myths about encryption and security.

When I sent out this alert about banks not using SSL to prove their identity to their users, quite a bit of feedback was excessively cynical on encryption technology and cryptography in general, along the lines of "it's useless anyways". While there are times when a little cynicism is healthy, this isn't one of them, and it seems all too common for some in the IT industry to say things like "encryption is easily broken". Spreading misinformation about the weakness of encryption is harmful because the biggest problem with cryptography is that it isn't used correctly or isn't used at all. Spreading the myth that encryption is useless will only get people to say "why bother if it's already broken" and make people less secure.

The problem is compounded by the fact that much of the misinformation out there actually sounds somewhat believable and many people just don't know what to believe. So to settle this once and for all, let's look at the facts.

One of the things that make these myths plausible is the fact that "128-bit" WEP encryption used in 802.11 Wireless LANs is so pathetically weak. The inside scoop is that WEP was designed during the late 90s, a time when USA export laws were extremely tight. Fearing 802.11 devices would be banned by US export laws, good encryption algorithms were deliberately passed up by the 802.11 group in favor of a weaker one. The WEP algorithm was fundamentally flawed, and the 802.11 standards body knew full well that it wasn't a strong encryption algorithm when they selected it. However, WEP's glaring weaknesses are not characteristic of properly implemented symmetric encryption algorithms used in SSL or VPN implementations. To give you an idea of how good something like DES is, DES is 30 years old and no one has found any weakness or shortcut for cracking it yet, though it can be brute forced. Brute force techniques are considered impractical because modern encryption algorithms are 128 to 256 bits long.

Further propelling the myth that encryption is worthless is that I often hear people saying that they heard that a 512-bit RSA key was broken. The truth of the matter is that 512-bit (and recently even 660-bit) RSA keys have been broken by the University of Bonn in Germany, but that has absolutely nothing to do with the type of encryption that's used for ordinary bulk encryption. Furthermore, RSA's inventors were well aware of the fact that it takes a much larger key to be secure, which is why typical implementations are at a minimum 768 bits and can easily go up to 2048 bits and beyond. To give you an idea what it takes to break an RSA 1620-bit key, you would need a computer with 120 terabytes of memory before you can even think about attempting it, and the memory requirement virtually rules out massively distributed cracking methods. Some may ask why use RSA keys when RSA is many orders of magnitude slower and requires so many more bits to be secure; the reason is that RSA encryption has the special property of being able to do secure key exchanges in plain sight of an adversary who is trying to break in but still remain safe.
For this reason, RSA keys are strictly used for the initial phases of a secure communication session, for the purpose of authentication (where one entity proves who they are) and for secure key exchanges (used for bulk symmetric encryption). Once the initial transaction is complete, the key that was exchanged during the initial RSA phase can be used for SSL or VPN bulk encryption with algorithms like RC5, 3DES, or AES. The last big factor in encryption myths and bit-size inflation is salesmen and marketers, because bigger numbers always sound nicer. I've had salesmen come into my office and try to tell me that RSA or AES encryption was worthless and that I should be using their product, which uses some kind of 1000-bit wonder-crypto solution. All it takes is one company to try and outdo its competitors and pitch its products using 4096-bit RSA, and the next company will come along and pitch 16384-bit RSA keys in its product. Many IT consultants will shy away from quoting smaller bit sizes because they're afraid to be outdone by their competitors.

Ah, but what about the dreaded massively distributed brute force method for attacking something like 128-bit RC5 encryption? There are massive zombie farms of infected computers throughout the world, and some may have gotten as big as 1 million infected computers. What if that entire army was unleashed upon the commonly used 128-bit RC5 encryption? Surprisingly, the answer is not much. For the sake of argument, let's say we unleash 4.3 billion computers for the purpose of distributed cracking. This means that it would be 4.3 billion, or 2 to the 32, times faster than a single computer. This means we could simply take the 2 to the 128 combinations for 128-bit encryption and divide by 2 to the 32, which leaves 2 to the 96 combinations. With 96 bits left, it's still 4.3 billion times stronger than 64-bit encryption. 64-bit encryption happens to be the world record for the biggest RC5 key cracked, set in 2002, which took nearly 5 years to achieve for a massive distributed attack.

Now that we know that distributed attacks will only shave off a few bits, what about Moore's law, which historically meant that computers roughly doubled in speed every 18 months? That means in 48 years we can shave another 32 bits off the encryption armor, which means 5 trillion future computers might get lucky in 5 years to find the key for RC5 128-bit encryption. But with 256-bit AES encryption, that moves the date out another 192 years before computers are predicted to be fast enough to even attempt a massively distributed attack. To give you an idea how big 256 bits is, it's roughly equal to the number of atoms in the universe!

Once some of these basic facts on encryption become clear, "is encryption crackable" isn't the right question, because the real question is "when can it be cracked and will it matter then". This is just like bank safes, which are rated by the time it takes an attacker to crack them open and never sold as "uncrackable". Encryption strength and the number of bits used are selected based on how many decades the data needs to be kept safe. For a secure e-commerce transaction, the data being transmitted is moot after a few decades, which is why 128-bit encryption is perfectly suitable, since it's considered unbreakable for the next few decades. For top secret classified data that needs to remain secret for the next 100 years, the Government uses NIST-certified 256-bit AES encryption.
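The back-of-the-envelope arithmetic above is easy to reproduce; a sketch, with the keys-per-second figure as an illustrative assumption rather than a benchmark:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def brute_force_years(key_bits, machines, keys_per_sec_per_machine):
    """Worst-case time to exhaust a key space by brute force."""
    tries_per_sec = machines * keys_per_sec_per_machine
    return 2**key_bits / tries_per_sec / SECONDS_PER_YEAR

# 4.3 billion machines (~2^32), each testing a billion keys per second (assumed):
for bits in (64, 128, 256):
    print(bits, f"{brute_force_years(bits, 2**32, 1e9):.3e} years")

# Moore's-law framing: each doubling of speed "shaves" one bit off the key,
# so 32 doublings at 18 months each is 48 years for 32 bits:
print(32 * 1.5, "years to shave 32 bits")
```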
So the next time someone tells you that encryption is crackable, ask him if he'll be around on this earth to see it demonstrated.

81 comments

• Perhaps it's newbies confusing encryption with hashing
With the recently discovered weaknesses in hashing algorithms, like [url=http://en.wikipedia.org/wiki/Md5]md5[/url], which make it possible to generate collisions with less than brute force effort, perhaps all of these people who go on about how "encryption is crackable" are confusing hashing with encryption.

□ Most of those comments came on the TechRepublic forums
The truth is even SHA-1 is under attack, and some Chinese scientists have found some shortcuts to accelerate the search for hash collisions. That makes authentication and digital signatures weaker, to the point that they might be candidates for a world record attempt to find a hash collision. However, SHA-1 isn't broken and it doesn't affect symmetric encryption. Most of the "encryption is crackable" comments came on my TechRepublic talkback. And yes, you have to cut-paste it into the browser without the space in the middle of the URL. This talkback engine keeps inserting spaces in the long URLs no matter what I do.

☆ Okay, you're right. It appears some people are just ignorant about encryption in general. Oh, and don't be a lazy bum George! Use the url tag to post links, like this, only use square brackets.

○ HTML Tags
If you would be so kind, give the readers a 'tutorial' on tag usage in ZDNet. An example of each would be appreciated. Thanks in advance.

■ They are all listed except for url...
...above the box you type your post in: {b}{/b} {i}{/i} {u}{/u} {pre}{/pre} {url=}{/url}
Pre: [pre]Pre? Not sure what that means, but when I post this I guess I'll find out[/pre]
URL: [url=http://www.zdnet.com]Zdnet.com[/url]

■ (nt) I guess "pre" is like a "quote" feature.

■ Preformatted text
[url=http://www.w3schools.com/tags/tag_pre.asp][b]Pre[/b]formatted text[/url]
Thanks ToadLife!

□ "... no matter what I do"
Poor G. - maybe you should be writing for ZD, Ou doesn't know a clue, the basics anywayz.

• hiding in plain sight
Given the history of code making and code breaking, I suggest that ANY communication that actually contains information unknown to the intended recipient is ultimately crackable. Sooner rather than later, given the proclivity of the crackers to capitalize on how information must be rendered and organized before it is useful to anyone, and the practical time limits on how long the intended recipient has to perform information recovery before it [the information] becomes irrelevant. White noise and nonsense are ultimately uncrackable because there is no information content. Quite possibly there is some advanced mathematical proof for this concept, but don't ask me what it is.

□ Unbreakable Encryption
There are in fact two methods widely accepted as unbreakable by the uninformed interceptor (of course, if the interceptor has acquired the keys by other means you are still open). First is the one-time-pad cypher, where the key is as long as the message and never repeats. But this requires delivery of the huge cypher keys by some other secure means, so it is best reserved for short messages such as "Fire your missiles at 22:00 Zulu". The other is the Quantum Cypher, which relies on quantum mechanics and is able to deliver BOTH the key and the message securely.
Too complex to explain here, but it requires direct transmission of photons so isn't much use on the Internet :-). But it is VERY secure. Nope, you STILL don't get it.... Talking about moore's law atc. is missing the ponit entirely. You can't say "if we had this many computers running at this many terahertz for this long", then extrapolating the numbers is junk science. 128 bits won't ever be brute forced simply because there isn't enough electricity available on earth to power those "future computers". Even if a single key test only needs 0.00000001w of power to perform it...well you can do the math. Short answer....it's not going to happen - and it's [b]NOT[/b] a case of fiddling with megahertz and number of CPU cores on a spreadsheet. Nope, to break encryption you need a flaw in the algorithm itself, not brute force. This is what happened to MD5 - somebody found a shortcut. The biggest problem of all is in the passwords people use. Most people will type in an ordinary english word when asked for a password and that reduces the search to about 20 bits - laughably easy. Even six totally random letters is only abotu 36 bits - half an hour's work on a modern desktop PC. • Let's play a Game You claim "128 bits won't ever be brute forced..." A 128 bit key would be 16 digits (decimal). We are working a project to disprove assertions such as this and we are currently capable to 28 digits decimal (224 bit). The last number we tested was part of the Clay Mathematics Institute prime number challenge. The number 1020030004000050000060000007 was factored by us in 2.5 minutes. (I'm rounding, but you get the point.) If you'd like to see your theory proved or disproved. Please provide a number that is 28 characters or less (digital) long. I will post back the factorization. You response may be to say, "see your solution is good to only 1/10,000 of a 32 bit key." It might shock you to know that our solution is restricted to one desktop. □ Update The Clay Mathmatics Institute had posted a series of challenge numbers. I mentioned such in my previous post. At that time, we had some programatic hurdles to overcome and were limited to 28 bits. We now have a solution that works to 10,000 decimal digits, runs on a pc and works rather quickly. There is a second CMI number that was published in the same challenge. The number: 51920810450204744019132202403246112884629925425640897326550851544998255968235697331455544257 Breaks down into the following factors: In binary bits the number is 305 bits...significantly more than 128 bits. It took us about 15 cpu days to factor this number. So assuming that the data you are trying to protect won't be valuable in 15 days...then 128 bit encryption is adequate. On the other hand we were using a couple of mid-range desktops and we overshot 128 bits by a factor of 138% so it is reasonable to assume that the technology as exists today, for a typical hacker is capable of breaking 128 bit encryption in about 1 week. Couple this with the fact that the most secure encryption that is available through web browsers is 128 bit and my fear is that online ecommerce is in significant danger. ☆ Is that a joke ? I am surprised to see such an incompetent quote from a member of "The Clay Mathmatics Institute". Either you do not know what you are talking about, or you do it on purpose. That's because of people like you that others are misinformed about cryptography. Taking into consideration that you are searching the factoring of a big number, the goal is to break RSA. 
Today, the minimal recommended number length for RSA is 1024 bits. A simple look at the Facebook certificate proves that they are using 1024-bit RSA. E-commerce sites use the 2048-bit version. You are the mathematical person. I let you do the calculations and tell me what is the result of (2^2048 / 2^128) * 7 days = 2^1920 * 7 days = the time YOU will need in order to break the RSA key exchange. Even now, 7 years after your post, you would still be running your computers trying to break the same key.

Two Questions
1. Re the zombies, presumably one should also factor in that whoever controls them will not have their full capacity available. I imagine that making them work flat out would, in many cases at least, lead to the fact that they had been taken over being discovered.
2. I take the point about being secure for long enough. But to what extent might quantum computing change things dramatically? Is it reasonable to base projections solely on Moore's law? Anyone like to guess if or when quantum computing will become available outside government, high-end academia and very large companies?

• Answers
1. Yes, you're right, it can never be full capacity, and no one will ever control 4.3 billion computers. My point was that even if we assumed we had full capacity of all 4.3 billion computers, it would only shave off 32 bits of the encryption strength, which isn't that significant in the context of a 128- or 256-bit encryption key.
2. Quantum computing, if it ever comes about, will bring about the need for quantum encryption. Encryption will always have an advantage over cracking because it's a ratio thing. The computational power required to encrypt something will always be many orders of magnitude cheaper than the amount of computational power needed to brute-force decrypt it. The only exception to this rule is if there is a weakness in the encryption algorithm that significantly lowers that ratio.

□ In addition
In addition, it hasn't been proven yet that all types of encryption can be cracked with quantum computing. Only some of them have been shown to be weak against quantum computing. Whether or not this can be generalized remains an open question.

☆ Thanks
Thanks for adding that.

□ There's a bit more to it
You've glossed over the use of quantum computers to decrypt data that has been secured using current methods. For the sake of discussion, let's assume quantum computers will be commonly used 30 years from now. Data encrypted using a 128- or 256-bit encryption key that needs to be kept secure for 50 years could possibly be decrypted using quantum computer(s) at some point in the future. As for item 2, quantum computers and quantum cryptography are not directly related. Quantum computing changes the way computations occur at a very fundamental level. Modern computers use a binary system, with each bit having a value of either exactly zero or exactly one. Quantum bits (qubits) could be viewed as having values of 0, 1, or a blend of 0 and 1. Visit http://www.cs.caltech.edu/~westside/quantum-intro.html for a more complete description. However, quantum cryptography, based on Werner Heisenberg's Uncertainty Principle, is not dependent on quantum computers. Heisenberg stated that any attempt to measure a sufficiently small (quantum) system would affect the system, making the measurement inaccurate. Data can be encrypted using quantum cryptography that, theoretically, could never be decrypted regardless of the type of computer used. The attempt to decrypt it without the key would alter (destroy) the data.
Let's say you want to transmit a message securely using quantum cryptography. First you need to convert the message to a binary stream that can be transmitted as a series of photons. Then you would apply a series of filters to individual photons in the stream to encrypt the message. The same filters would have to be applied (in the same order) to decrypt the message. Anything else would alter the message. There is no need for a quantum computer to provide quantum cryptography. However, quantum computers could (theoretically) make a brute-force crack of modern encryption methods possible at some point in the future.

☆ Thanks
Thanks for your insightful post.
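The filter idea in the comment above can be made concrete with a toy BB84-style simulation. This is purely schematic: random 0/1 "bases" stand in for polarization filters, and nothing here models real photon hardware:

```python
import random

random.seed(0)
N = 2000

# Sender picks random bits and random filter bases (0 = rectilinear, 1 = diagonal).
bits = [random.randint(0, 1) for _ in range(N)]
bases = [random.randint(0, 1) for _ in range(N)]

def measure(bit, send_basis, recv_basis):
    # Matching filters: the bit comes through intact.
    # Mismatched filters: the outcome is 50/50 random, which is exactly
    # the disturbance that reveals an eavesdropper.
    return bit if send_basis == recv_basis else random.randint(0, 1)

# Case 1: no eavesdropper. The receiver guesses a basis for each photon.
recv_bases = [random.randint(0, 1) for _ in range(N)]
received = [measure(b, sb, rb) for b, sb, rb in zip(bits, bases, recv_bases)]
sifted = [(b, r) for b, sb, r, rb in zip(bits, bases, received, recv_bases) if sb == rb]
err = sum(b != r for b, r in sifted) / len(sifted)
print(f"no eavesdropper: {len(sifted)} sifted bits, error rate {err:.3f}")  # 0.000

# Case 2: an eavesdropper measures each photon in a random basis and re-sends it.
eve_bases = [random.randint(0, 1) for _ in range(N)]
eve_bits = [measure(b, sb, eb) for b, sb, eb in zip(bits, bases, eve_bases)]
tapped = [measure(eb, ebs, rb) for eb, ebs, rb in zip(eve_bits, eve_bases, recv_bases)]
sifted2 = [(b, r) for b, sb, r, rb in zip(bits, bases, tapped, recv_bases) if sb == rb]
err2 = sum(b != r for b, r in sifted2) / len(sifted2)
print(f"with eavesdropper: error rate {err2:.3f}")  # ~0.25, so tampering shows up
```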
{"url":"http://www.zdnet.com/blog/ou/is-encryption-really-crackable/204","timestamp":"2014-04-20T16:37:49Z","content_type":null,"content_length":"102299","record_id":"<urn:uuid:8c78ffe6-5275-4cbf-adde-d180d7498204>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Gutin, Gregory - Department of Computer Science, Royal Holloway, University of London
• On the complexity of hamiltonian path and cycle problems in certain classes of digraphs
• Cycles and Paths in Semicomplete Multipartite Digraphs, Theorems and Algorithms: A Survey
• Out-branchings with Extremal Number of Leaves Jørgen Bang-Jensen
• Transformations of Generalized ATSP into ATSP D. Ben-Arieh
• A polynomial algorithm for the Hamiltonian cycle problem in semicomplete multipartite digraphs
• Minimum Cost Homomorphism Dichotomy for Oriented Cycles
• Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP
• Evaluation of the Contract-or-Patch Heuristic for the Asymmetric TSP
• Note on Upper Bounds for TSP Domination Number Gregory Gutin, Angela Koller and Anders Yeo
• Hamilton Cycles in Digraphs of Unitary Matrices S. Severini
• Polynomial algorithms for finding paths and cycles in quasi-transitive digraphs
• Generalizations of tournaments: A survey Jørgen Bang-Jensen
• Properly coloured Hamiltonian paths in edge-coloured complete graphs
• An approximate algorithm for combinatorial optimization problems with two parameters
• When n-cycles in n-partite tournaments are longest cycles Gregory Gutin
• On the number of quasi-kernels in digraphs Gregory Gutin
• 5.2 Independence and Cliques Gregory Gutin, Royal Holloway, University of London
• Exponential neighbourhood local search for the traveling salesman problem
• Extracting Pure Network Submatrices in Linear Programs Using Signed Graphs
• Polynomial approximation algorithms for the TSP and the QAP with a factorial domination number
• TSP tour domination and Hamilton cycle decompositions of regular digraphs
• The Greedy Algorithm for the Symmetric TSP Gregory Gutin
• Fixed-Parameter Complexity of Minimum Profile Gregory Gutin
• Tolerance-based greedy algorithms for the traveling salesman problem
• Note on edge-colored graphs and digraphs without properly colored cycles
• Orientations of digraphs almost preserving diameter Gregory Gutin
• Convex sets in acyclic digraphs Paul Balister, Stefanie Gerke and Gregory Gutin
• An algorithm for finding input-output constrained convex sets in an acyclic digraph
• Note on alternating directed cycles Gregory Gutin
• Alternating cycles and trails in 2-edge-coloured complete multigraphs
• Kings in semicomplete multipartite digraphs Gregory Gutin
• Quasihamiltonicity: a series of necessary conditions for a digraph to be hamiltonian
• Upper Bounds on ATSP Neighborhood Size Gregory Gutin and Anders Yeo
• Assignment Problem based algorithms are impractical for the Generalized TSP
• Characterization of edge-colored complete graphs with properly colored Hamilton paths
• Almost minimum diameter orientations of semicomplete multipartite and extended digraphs
• Weakly Hamiltonian-connected Ordinary multipartite Tournaments
• Construction heuristics for the asymmetric TSP Fred Glover
• Solution of a conjecture of Volkmann on the number of vertices in longest paths and cycles of strong semicomplete
• Sufficient conditions for a digraph to be Hamiltonian
• Minimum Cost Homomorphisms to Locally Semicomplete and Quasi-Transitive Digraphs
• On the number of connected convex subgraphs of a connected acyclic digraph
• Level of Repair Analysis and Minimum Cost Homomorphisms of Graphs
• Alternating cycles and paths in edge-coloured multigraphs: a survey
• Algorithms for Generating Convex Sets in Acyclic Digraphs P. Balister
• Extracting Pure Network Submatrices in Linear Programs Using Signed Graphs. II
• A note on the cardinality of certain classes of unlabeled multipartite tournaments
• Multipartite tournaments with small number of cycles Gregory Gutin and Arash Rafiey
• Spanning directed trees with many leaves Fedor V. Fomin
• On-line bin packing with two item sizes Gregory Gutin, Tommy Jensen and Anders Yeo
• Small diameter neighbourhood graphs for the traveling salesman problem: at most four moves from tour to tour
• Remarks on hamiltonian digraphs Gregory Gutin
• Longest paths in strong spanning oriented subgraphs of strong semicomplete multipartite digraphs
• Domination Analysis of Combinatorial Optimization Algorithms and Problems
• 4.5 Traveling Salesman and Related Problems Gregory Gutin, Royal Holloway, University of London
• Domination Analysis of Combinatorial Optimization Problems Gregory Gutin
• Minimum Leaf Out-branching and Related Problems Gregory Gutin
• On n-partite tournaments with unique n-cycle Gregory Gutin, Arash Rafiey and Anders Yeo
• Some Parameterized Problems on Digraphs Gregory Gutin
• The Linear Arrangement Problem Parameterized Above Guaranteed Value
• Kernels in planar digraphs Gregory Gutin
• On Complexity of Minimum Leaf Out-Branching Peter Dankelmann
• Introduction to the Minimum Cost Homomorphism Problem for Directed and Undirected Graphs
• Minimum Cost Homomorphisms to Semicomplete Multipartite Digraphs
• A Dichotomy for Minimum Cost Graph Homomorphisms
• Minimum Cost and List Homomorphisms to Semicomplete Gregory Gutin
• Domination analysis for minimum multiprocessor scheduling Gregory Gutin
• When the greedy algorithm fails Jørgen Bang-Jensen
• Seismic Vessel Problem Gregory Gutin
• A Heuristic for the Resource-Constrained Traveling Salesman Problem
• Mediated Digraphs and Quantum Nonlocality S. Severini
• Ranking the vertices of a complete multipartite paired comparison digraph
• Batched Bin Packing Gregory Gutin, Tommy Jensen and Anders Yeo
• Paths, Trees and Cycles in Tournaments Jørgen Bang-Jensen
• Hamiltonian cycles avoiding prescribed arcs in tournaments
• Paths and cycles in extended and decomposable Jørgen Bang-Jensen
• A sufficient condition for a semicomplete multipartite digraph to be Hamiltonian
• Finding cheapest cycles in vertex-weighted quasi-transitive and extended semicomplete
• Vertex heaviest paths and cycles in quasi-transitive Jørgen Bang-Jensen
• Hamiltonian paths and cycles in hypertournaments Gregory Gutin
• Antimatroids Gregory Gutin
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/20/168.html","timestamp":"2014-04-17T22:22:45Z","content_type":null,"content_length":"20790","record_id":"<urn:uuid:fe6a2c8c-e8cb-4a6b-88b6-98a3c44a33e6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve the system, using substitution. Write the solution as an ordered pair.
x = 2y - 2 and 3x - y = 14
(6, 4)
(4, 6)
(1, 0)
(0, 1)
no solution
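No solution was posted on this one, so here is a quick check by substitution; a sketch assuming sympy is available:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Substitute x = 2y - 2 into 3x - y = 14 and solve for y, then back-substitute.
eq = sp.Eq(3*(2*y - 2) - y, 14)     # 6y - 6 - y = 14  ->  5y = 20  ->  y = 4
y_val = sp.solve(eq, y)[0]
x_val = 2*y_val - 2                 # x = 2*4 - 2 = 6
print((x_val, y_val))               # (6, 4), so the ordered pair is (6, 4)
```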
{"url":"http://openstudy.com/updates/50d747e9e4b0d6c1d542126e","timestamp":"2014-04-18T16:42:11Z","content_type":null,"content_length":"55956","record_id":"<urn:uuid:ad0946eb-c36a-4447-ad97-2d5955e875a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Montessori Mathematics

This is the introduction I wrote for my Mathematics album for a Montessori preschool diploma course in 2010. It is not properly referenced, although I have included a bibliography, so be warned if you want to refer to any of it for a more formal application.

Introduction to Mathematics

Mathematical and number concepts are an essential part of everyday life. Even before starting school, young children are exposed to numbers and mathematical concepts daily (e.g. sorting, counting, estimating quantity, measuring), through television, through books, when shopping, and through many different kinds of infant toys, as well as in our daily activities. For example, setting the table or putting on shoes develops the idea of one-to-one correspondence, because there must be a place for each person, or a shoe for each foot. Packing away clothes or shopping is an exercise in classification and sorting, and helping with baking gives the young child experience in measurement and estimation, as well as the experience of working with volumes or mass in different types of media (e.g. flour and oil). Counting is often introduced directly in educational television programs, books for young children or number rhymes and games.

The study of mathematics, however, is more than simply the acquisition of mathematical skills. Pure mathematics is the study of patterns, structure and relationships. It is the ultimate abstraction, where the rules governing interactions are the object of study. The "things" on which the rules operate become immaterial. Maria Montessori said that it is the mind's power of abstraction that, with imagination, goes beyond the simple perception of things, so that the two powers "play a mutual part in the construction of the mind's content." (M Montessori, The Absorbent Mind)

Maria Montessori referred to the part of the mind that deals with order and abstraction and is precisely and logically constructed as "the mathematical mind". This part of the mind is vitally important, as it allows the child to order and thus understand his world. Because of the ordered and logical nature of mathematics, the study of mathematics provides intellectual training for precise and rigorous thought, as well as helping to develop the mathematical mind.

Before the child can construct number concepts in his mind, other concepts must be well internalised and understood. Three important principles that are essential for mathematical understanding are:

Conservation: This implies understanding that two equal things remain equal even if their appearance is altered, or their spatial arrangement changed, as long as nothing is added or taken away. This principle applies to number, length, liquid, mass or substance, weight, area and volume. The understanding of different conservation principles is achieved at different times in a child's life, with number conservation, essential for the understanding of early mathematics, developing first, and the understanding of volume conservation usually developing last.
The understanding of conservation and reversibility is linked – a child who cannot imagine a situation being reversed cannot understand conservation, and a child who does not understand conservation will not be able to relate a changed situation back to the original one. One to One Correspondence: This describes the process of pairing each member of one set to each member of another set and is an essential part of understanding number. Children need to understand that each number word must be said for one item only, and that each item must be paired with a number word. Children develop an understanding of this principle through repeated practise. They need lots of opportunities to touch or move things while counting them (e.g. plates at mealtimes, boys or girls at school) in order to prevent them from merely reciting the number words in a meaningless way. Understanding one-to-one correspondence allows the child to consider the relationships of “more than”, “less than” and “as many as” without needing to count the objects under consideration. (These are also known as ordinal relations. Ordinality refers to the relationship or order of numbers to each other, including positional relationships like 1^st, 2^nd etc.) Other important fundamental mathematical concepts include: Seriation and Transitivity: These concepts refer to grading (by size, colour etc), and to generalizing the idea of grading objects. Cardinality: The cardinal numbers are the ordinary numbers (1,2,3,4,5), and the principle of cardinality is the understanding that the last number used is the total number of counted items i.e. when counting the 5 number rod, you say all five number words, and the last word you say is five, which is then the total number of lengths in the rod, and the ‘name’ of that rod. Indirect preparation for Mathematics The activities and materials of the Practical Life area, as well as the child’s training and development through the Sensorial Materials, give the child a concrete introduction to fundamental mathematical concepts as well as to the logical reasoning underlying them. In this way, as well as through his exposure in everyday life, the child is indirectly prepared for learning the mathematical concepts before these are presented to him through the mathematics equipment and activities. Practical life preparation for Mathematics: The exercises of Practical Life help the development of concentration and a sense of competence. These are essential for mathematics work as the child needs to have both the motivation and the ability to concentrate for long periods in order to complete the Maths activities. The presentation of the Practical life activities in an ordered and logical sequence e.g. the precise sequence of steps involved in polishing brass or shoe cleaning, encourages the development of mathematical thinking. All practical life materials are used in a set sequence, and the child must be precise and thorough in checking at the end of an activity to ensure that it is complete for the next child to use. The following table gives examples of how certain activities prepare the child for Mathematics. 
│ Mathematical preparation     │ Practical Life Activities                                              │
│ Conservation, Reversibility  │ • Pouring liquids/solids between different or the same size containers │
│                              │ • Spooning between different size containers                           │
│ One to one correspondence    │ • Matching/pairing lids to jars or boxes                               │
│                              │ • Setting a table                                                      │
│                              │ • Planting seeds, one per pot                                          │
│ Volume, mass, measurement    │ Cooking, weighing out ingredients                                      │
│ Pattern, symmetry            │ Paper cutting                                                          │
│ Geometry, fractions (½, ¼)   │ Napkin folding                                                         │

In addition to the standard classroom activities, the child's experience will be broadened by any activity that involves matching or pairing, such as games like dominoes or snap. Games such as these and others can encourage the child to think logically.

Sensorial Materials as a preparation for Mathematics: The Sensorial materials in the Montessori classroom provide the child with concrete sensory impressions of the basic mathematical concepts he will encounter later. This satisfies the basic principle contained in the ancient expression: "There is nothing in the intellect which was not first in some way in the senses." (quoted in The Secret of Childhood). The sensorial material can be understood as "a system of materialized abstractions, or of basic mathematics." (M Montessori, The Absorbent Mind). The exercises help the child to form a logical mathematical framework for future use, and as such are a prerequisite for deep, positive mathematical understanding.

The sensorial materials direct the child's attention to differences or similarities and to sequence, giving practise in classification and seriation. The child learns about different properties such as colour, shape, texture, sound, size, temperature and weight. Indirect preparation for the decimal system is given by the dimension materials. These are all in sets of 10, using units of 1-10 in several different dimensions.

Through comparing and classifying objects, as well as matching them together, the child is able to have concrete experience in discriminating between sizes, in sequencing, grading and making comparisons. Children get visual and muscular impressions of plane (flat) and 3-dimensional shapes with the geometric cabinet and the geometric solids respectively. Together with the Binomial and Trinomial cubes, these allow the child to have an early sensorial experience of materials that have a foundation in geometry and algebra. Children's mathematical vocabulary is enriched as they learn the correct shape names, as well as the descriptive terms of measurement like narrow, long, short, wide and their comparatives (longer, bigger than) and superlatives (longest, biggest).

Maria Montessori said "As cement is to brick so is the sensorial apparatus to mathematics." This describes the vital importance of the sensory impressions in fixing the mathematical concepts in the child's mind. Without these impressions, the mathematical concepts encountered are not firmly established, and the more advanced concepts, which require deep understanding of the basic ideas, will not be successfully understood.

How is Mathematics learned in the Montessori classroom?

Maria Montessori's first intellectual love was mathematics. As a young teenager, she studied mathematics at an all-boy technical school, in preparation for studying engineering. In order to facilitate the learning of mathematical and number concepts, she designed a system of precise, logically ordered, didactic equipment that is used in the preschool mathematics area.
Together with the sensorial equipment, these beautiful materials form one of the more striking physical features of the Montessori preschool classroom. The didactic material and the associated activities provide the child with a concrete and dynamic impression of the mathematical concepts presented. As with all the Montessori activities, movement is an integral part of learning. The child uses his hands and body while working with the materials. He combines, shares, counts and compares. He learns by doing, and by self-discovery. All materials contain a built-in control of error that guides the child towards doing the activity correctly, and allows the child to work independently of adult guidance, leading to true self-discovery and self-education. Through the use and manipulation of these concrete materials, the young child absorbs a visual and muscular impression of quantity and order. He discovers rules and patterns as he gradually progresses from concrete objects to the symbolic representation of number. The mathematical exercises are presented in a logical sequence and systematic progress is made from the concrete to the abstract. The sequence of the presentations is respected and the directress does not allow herself or the child to be rushed through the exercises. The child takes one small step at a time, building on his prior knowledge. To proceed from one concept to the next, he needs to have fully established, understood and internalised the first. This allows him to develop a clear, positive understanding of numbers. Concepts are always introduced in a concrete form, followed by the abstract symbolism. No abstractions are presented until the child has obtained a concrete impression of the concept that is being represented. He never learns a name that doesn’t have a meaning. e.g. the number cards are only introduced after the child has worked with the number rods. The early maths exercises and equipment (e.g. number rods, numerals, spindle box, cards and counters and number games) help the child to grasp the concept of quantity and the symbols 1 to 10. Once the units are understood and mastered, the child is introduced to the whole decimal system through the golden bead equipment. As the child progresses through the equipment, he works simultaneously with very large numbers while practising and honing his skills with addition, subtraction, multiplication and division of small numbers. By about 7 or 8 years old, the child will be able to work out operations in his head with true understanding and confidence. In his biography of Maria Montessori, E.M. Standing speaks about Mathematics as a process of abstraction, and describes this process as follows: “This process of abstraction is by its very nature an individual one, no one can do it for another, however much he may wish to do so. Abstraction is an inner illumination, and if the light does not come from within, then it does not come at all. All we can do, is to help the child by presenting them with external concrete materials. In these materials, the abstract idea of mathematical operation which we wish to teach is, as it were, latent. The child works with them for a good while, and as he does so- his mind rises eventually to a higher level.” As explained by E.M. Standing, the process of abstraction cannot be forced, or rushed. It depends on the child’s environmental experience, as well as on the inner development of the child’s mind and mental abilities. 
The directress can only offer opportunities for environmental experience with the concrete materials. Each child then comes to his own understanding of the mathematical concepts in his own time. Maria Montessori has been criticised for introducing Mathematics too early. Yet the material is only presented to a child who is observed to be ready for it. The correct and systematic introduction of Mathematical principles and concepts during the child’s absorbent mind period allows these to be naturally and effortlessly understood and internalised, avoiding the commonly found lifelong repugnance which many feel for this beautiful, elegant and fundamental subject. The Secret of Childhood, Maria Montessori The Absorbent Mind, Maria Montessori Maria Montessori: Her Life and Work, EM Standing accessed online at http://www.archive.org/details/mariamontessorih000209mbp “A Child’s World Infancy Through Adolescence”, Papalia, Olds, Feldman (10th Ed)
{"url":"http://montessorimusing.wordpress.com/introduction-to-montessori-mathematics/","timestamp":"2014-04-18T03:03:45Z","content_type":null,"content_length":"55172","record_id":"<urn:uuid:46832ab7-37ac-4aeb-8f74-9325aca97da1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Lafayette Hill Precalculus Tutors

Hi everyone, I am an experienced Pennsylvania certified mathematics teacher. My greatest skill is the ability to take complex concepts and break them into manageable and understandable parts. I have a degree in mathematics and a master's in education, so I have the technical and instructional skills to help any student.
15 Subjects: including precalculus, calculus, geometry, algebra 1

...Even students who do well in their high school math classes (even students in AP Calculus and Statistics) sometimes do poorly on the ACT. The test is quirky, and many students are puzzled by the types of questions the ACT asks. I teach my students more than just the math skills they need to answer the questions.
62 Subjects: including precalculus, reading, English, calculus

I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University.
9 Subjects: including precalculus, geometry, algebra 1, GRE

...I graduated from Drexel University last year, majoring in Mechanical Engineering with a minor in Business Administration. I am currently employed with a company as a design engineer but want to fill my free time with something productive and at the same time earn a second income to pay off my hea...
8 Subjects: including precalculus, algebra 1, algebra 2, trigonometry

...Solve simple to complex fraction problems. 3. Solve problems involving decimals, percents, and ratios. 4. Solve problems involving exponents. 5.
27 Subjects: including precalculus, calculus, statistics, geometry
{"url":"http://www.algebrahelp.com/Lafayette_Hill_precalculus_tutors.jsp","timestamp":"2014-04-18T20:53:36Z","content_type":null,"content_length":"25397","record_id":"<urn:uuid:98715ca1-50fe-4220-ab50-601ef7916f56>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with high school algebra problem (some probability)

October 12th 2009, 02:40 PM
Help with high school algebra problem (some probability)

Keekerik is an imaginary land where the people have an interesting three-stage ritual for couples who want to get married. Wandalina and Gerik are in that situation, so they go to the home of Queen Katalana to perform this ritual. Permission for them to marry as soon as they wish depends on the outcome of the ritual. The queen greets them and reaches into a box and pulls out six identical strings for the ritual. The queen hands the strings to Wandalina, who holds them firmly in her fist. One end of each string is sticking out above and below her fist. The queen steps to the side and Gerik is called forward. He ties two of the ends together above Wandalina's fist. Then he ties two other ends above her fist together. Finally, he ties the last two ends above her fist together. The six ends below Wandalina's fist are still hanging untied. Now Queen Katalana comes forward again. Although she was watching Gerik, she has no idea which string end below Wandalina's fist belongs to which end above. The queen does the final step and randomly picks two of the ends below and ties them together, then two more, and finally the last two. So Wandalina now has six strings in her fist with three knots above and three knots below.

Whether Wandalina and Gerik will be allowed to marry right away depends on what happens when Wandalina opens her fist. If the six strings form one large loop, then they will. Otherwise, they will be required to wait and repeat the ritual in six months.

A. When Wandalina opens her fist and looks at the strings, what combinations of different size loops might there be?
B. What is the probability that the strings will form one big loop? In other words, what are the chances that Wandalina and Gerik will be able to marry right away?
C. What is the probability for each of the other possible combinations?

I'm no math expert (that's why I'm asking you), but I guess this entire problem is mostly about probability. How should I go about answering these questions? I'm totally stuck. (Thinking)

October 12th 2009, 08:47 PM
Whether Wandalina and Gerik will be allowed to marry right away depends on what happens when Wandalina opens her first. If the six strings form one large loop, then they will. Otherwise, they will be required to wait and repeat the ritual in six months. A. When Wandalina opens her first and looks at the strings, what combination of different size loops might there be? There are 225 possible outcomes, but only 4 configurations: . . Three loops of two strings each. . . Two loops, one with two strings, one with four strings. . . Two loops, both with three strings. . . One big loop, using all six strings. B. What is the probability that the strings will form one big loop? Number the strings from 1 to 6. Gerik can tie the upper ends in 15 different ways. . . $\begin{array}{c}(12, 34, 56) \\ (12,35,46) \\ (12,36,45) \end{array} \quad\begin{array}{c} (13,24,56) \\ (13,25,46) \\ (13,26,45) \end{array}$ . . $\begin{array}{c}(14,23,56) \\ (14,25,36) \ \ (14,26,35) \end{array}\quad \begin{array}{c}(15,23,46) \\ (15,24,36) \\ (15,26,34) \end{array}$ . . $\begin{array}{c}(16,23,45) \\ (16,24,35) \\ (16,25,34) \end{array}$ The Queen has the same 15 choices for tying the lower ends. Hence, there are: . $15 \times 15 \:=\:225$ possible outcomes. For any one of Gerik's choices, the Queen has 8 choices for forming one big loop. For example, if Gerik chooses this combination $(12,34,56)$ . . the Queen can form one big loop with one of these eight choices: . . . . $\begin{array}{ccc}(13,25,46) &&(15,23,46) \\<br /> (13,26,45) && (16,24,36) \\<br /> (14,25,36) && (16,24,35) \\<br /> (14,26,35) && (16,23,45) \end{array}$ I believe the probability of one big loop is $\frac{8}{15}$ And that is all the work I'm willing to do for this problem . . . October 13th 2009, 12:46 PM Thanks for your suggestions! I'm still pretty lost in this one, but I'm trying to figure it out. If anyone else wants to chime in and give me assistance, it's greatly appreciated.
{"url":"http://mathhelpforum.com/statistics/107607-help-high-school-algebra-problem-some-probability-print.html","timestamp":"2014-04-20T10:36:30Z","content_type":null,"content_length":"11861","record_id":"<urn:uuid:759def4c-3684-4c6d-9fa3-1a099faac33a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Surface area

Surface area is how much exposed area an object has. It is expressed in square units. If an object has flat faces, its surface area can be calculated by adding together the areas of its faces. Even objects with smooth surfaces, such as spheres, have a well-defined surface area.

In chemistry

Surface area is important in chemical kinetics. Increasing the surface area of a substance generally increases the rate of a chemical reaction. For example, iron in a fine powder will combust, while in solid blocks it is stable enough to use in structures. For different applications a minimal or maximal surface area may be desired.

In biology

The surface area-to-volume ratio of a cell imposes upper limits on size, as the volume increases much faster than does the surface area, thus limiting the rate at which substances diffuse from the interior across the cell membrane to interstitial spaces or to other cells. If you consider the math, you'll see the relation between SA and V much more intuitively: $V = \frac{4}{3} \pi r^3; \quad SA = 4 \pi r^2 = \pi d^2$, where r is the radius of the cell. Do the math and the resulting ratio becomes $\frac{3}{r}$. If a cell has a radius of 1 μm, the SA:V ratio is 3. Increase the cell's radius to 10 μm and the SA:V ratio becomes 0.3. With a cell radius of 100 μm, the SA:V ratio is 0.03. Using this simple example, we can see how the surface area-to-volume ratio falls off steeply with increasing size. What limitations does this place on a living cell? For small cells, the SA:V ratio allows for relatively good exchange of nutrients and wastes. For larger cells and organisms, the SA:V ratio forces the cell or organism to find more efficient ways to exchange nutrients and waste products, e.g. specific conduits that carry blood, hormones, lymph, etc. from deep regions to the surface of an organism.
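A quick numeric illustration of the ratios quoted above, as a minimal sketch:

```python
import math

def sa_to_v_ratio(r):
    sa = 4 * math.pi * r**2          # sphere surface area
    v = (4 / 3) * math.pi * r**3     # sphere volume
    return sa / v                    # algebraically simplifies to 3 / r

for r in (1, 10, 100):               # cell radii in micrometres
    print(r, sa_to_v_ratio(r))       # 3.0, 0.3, 0.03
```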
{"url":"http://math.wikia.com/wiki/Surface_area","timestamp":"2014-04-20T03:11:46Z","content_type":null,"content_length":"57839","record_id":"<urn:uuid:d045f2f8-8d27-46ff-ad1f-f3c003120293>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
The Big Bang versus the 'Big Bounce'
Credit: Thinkstock
Two fundamental concepts in physics, both of which explain the nature of the Universe in many ways, have been difficult to reconcile with each other. European researchers developed a mathematical approach to do so that has the potential to explain what came before the Big Bang.
According to Einstein's (classical) theory of general relativity, space is a continuum. Regions of space can be subdivided into smaller and smaller volumes without end. The fundamental idea of quantum mechanics is that physical quantities exist in discrete packets (quanta) rather than in a continuum. Further, these quanta and the physical phenomena related to them exist on an extremely small scale (Planck scale). So far, the theories of quantum mechanics have failed to quantise gravity. Loop quantum gravity (LQG) is an attempt to do so. It represents space as a net of quantised intersecting loops of excited gravitational fields called spin networks. This network viewed over time is called spin foam.
Not only does LQG provide a precise mathematical picture of space and time, it enables mathematical solutions to long-standing problems related to black holes and the Big Bang. Amazingly, LQG predicts that the Big Bang was actually a 'Big Bounce', not a singularity but a continuum, where the collapse of a previous universe spawned the creation of ours.
European researchers initiated the Effective field theory for loop quantum gravity (EFTFORLQG) project to further develop this exciting candidate theory reconciling classical and quantum descriptions of the Universe. Scientists focused on the background-independent structure of LQG, which requires that the mathematics defining the system of spacetime be independent of any coordinate system or reference frame. They applied both semi-classical approximations (Wentzel-Kramers-Brillouin approximations, WKBs) and effective field theory (a sort of approximate gravitational field theory) techniques to analyse a classical geometry of space, study the dynamics of semi-classical states of spin foam and apply the mathematical formulations to astrophysical phenomena such as black holes.
Results produced by the EFTFORLQG project team exceeded expectations. Scientists truly contributed to establishing LQG as a major contender for describing the quantum picture of space and time compatible with general relativity, with exciting implications for unravelling some of the major mysteries of the Universe.
2.1 / 5 (7) Jul 06, 2012
How does the assumption of a space-time spin foam imply the Big Bounce cosmology? I don't see any logic in it...
1.5 / 5 (10) Jul 06, 2012
Two fundamental concepts in physics, both of which explain the nature of the Universe in many ways, have been difficult to reconcile with each other. European researchers developed a mathematical approach to do so that has the potential to explain what came before the Big Bang.
By the way, it is interesting to note that nowadays we still do not understand the basic foundations of both theories, such as why and how an electron could act as both a wave and a particle (in quantum mechanics), or how and why space-time could be curved (in general relativity). Considering these philosophical ideas (such as the one below) in parallel with the mathematical approach may give a way to reconcile both theories.
4 / 5 (4) Jul 06, 2012
How does the assumption of a space-time spin foam imply the Big Bounce cosmology? I don't see any logic in it...
Because the theory predicts that on extremely fine space scales, gravity is repulsive. Think of a foam that is completely filled with matter. You cannot add more. So when space-time gets extremely compressed, the mathematics predicts that space-time would "bounce".
3.7 / 5 (3) Jul 06, 2012
@vacuum-mechanics - wave/particle duality may be difficult to understand and visualize, but it certainly isn't difficult to formulate mathematically. I suspect you are confusing the extreme difficulty of human visualization with a lack of understanding. In terms of space-time curvature, that is both easy to visualize and easy to understand. Meanwhile recent discoveries in how to combine the two theories (quantum mechanics and general relativity) are showing that the so-called intractable infinities caused by combining these theories are almost certainly a product of the math used, rather than being real.
5 / 5 (2) Jul 06, 2012
the theory predicts that on extremely fine space scales, gravity is repulsive. Think of a foam that is completely filled with matter. You cannot add more. So when space-time gets extremely compressed, the mathematics predicts that space-time would "bounce".
The obvious (perhaps too obvious) question is, how is the universe going to achieve such states of compression (which sounds like the Big Crunch idea) when we see the opposite happening as Dark Energy is accelerating the expansion of space-time, leading perhaps to the Big Rip? Also, how many times could this cycle repeat before the inevitable loss of energy after each iteration would halt the cycle? I think LQG has a long way to go, even compared with string theory. For example, it cannot even reproduce the predictions made by the current standard model.
5 / 5 (1) Jul 06, 2012
the theory predicts that on extremely fine space scales, gravity is repulsive. Think of a foam that is completely filled with matter. You cannot add more. So when space-time gets extremely compressed, the mathematics predicts that space-time would "bounce".
The obvious (perhaps too obvious) question is, how is the universe going to achieve such states of compression (which sounds like the Big Crunch idea) when we see the opposite happening as Dark Energy is accelerating the expansion of space-time, leading perhaps to the Big Rip? Also, how many times could this cycle repeat before the inevitable loss of energy after each iteration would halt the cycle? I think LQG has a long way to go, even compared with string theory. For example, it cannot even reproduce the predictions made by the current standard model.
It almost certainly can't. And I agree with you about LQG.
3.4 / 5 (5) Jul 07, 2012
PHYSORG is DisOrg! No links, no authors, just this: 'Effective field theory for loop quantum gravity' & CORDIS? WTF is CORDIS? A quick scan of the arxiv under gr-qc reveals no such paper. Suspect this is just filler. When will they stop doing this & allow their readers to follow up on the original research??!!
4.5 / 5 (8) Jul 07, 2012
Also, how many times could this cycle repeat before the inevitable loss of energy after each iteration would halt the cycle?
I'm not sure there would be an inevitable loss of energy. Why would energy conservation be violated?
The bounce is an interesting consequence of LQG, but it doesn't really solve the origin problem (I'm very careful not to write 'first cause', here).
The obvious (perhaps too obvious) question is, how is the universe going to achieve such states of compression
We really don't know if the universe has already gone through all the phase changes it is going to go through. There's nothing that says that ALL phase changes have to happen early on or at higher temperatures than the current CMB. There may be a phase change in the future that leads to eventual collapse.
5 / 5 (4) Jul 07, 2012
..WTF is CORDIS? A quick scan of the arxiv under gr-qc reveals no such paper.
It's the Community Research and Development Information Service of the European Union. LQG phenomenologists are concentrated in Europe and Canada compared to string theory fans. In the US there is only one university (Penn State) that has a research group doing non-string QG (I mean more than one faculty member).
1 / 5 (1) Jul 07, 2012
Still, WHERE is the arxiv citation in this article?? THAT fellow is Martin Bojowald, but there are others, not cited on the CORDIS webpage. WTF, are they ashamed of work done under their auspices?
5 / 5 (1) Jul 07, 2012
Not all published papers are in arxiv. It would be nice if they were, but many researchers still go straight to the pay-to-read journals.
1.1 / 5 (7) Jul 08, 2012
Creation of the universe is a continuum of Big Bounces. Creation continues at the periphery, accelerating until an infinite mass front is attained, yielding the gravitational factor that accelerates universal expansion, and because infinite mass is equivalent to infinite density, and infinite density represents infinite potential, yielding the quantum fluctuation that sparks creation once again, we have a series of infinite mass creation fronts - creation cycles of the Great Creation Wave, but the universe never collapses in on itself, it just continues to create outwards, from the nothing that is infinite density. Perhaps the next infinite mass front is what we are interpreting as dark matter - dark because nothing is mathematically represented as infinite density, and infinite density is infinite potential. Those creation cycles are Big Bounces. Crazy? How less so than LQG, or DE?
5 / 5 (8) Jul 09, 2012
Crazy? How less so than LQG, or DE?
Because DE and LQC have solid math behind them that matches with other observed phenomena. Your tripe does not. If you were to go through any math that would fit with your 'theory' then you'd see that it doesn't fit anything we currently observe. Understand this: unproven theories are not the same as made-up theories. Serious unproven theories (like string theory, brane theory, supersymmetry, or LQG) have the math that fits with past observations. It is only when you make predictions from that math that you can find discrepancies between them. And it is exactly at that point where future experiments will separate the good from the bad.
Made-up theories that some internet poster pulled out of his rear are useless, as they neither fit past observation nor make testable predictions. They're just fairy tales.
1 / 5 (5) Jul 10, 2012
Fairy tales are fictional stories. However, some science fiction writers are on point and present ideas unthought of by the regular scientific community, e.g. H.G. Wells and the death-ray-spitting Martians; hence latter-day lasers. In a book, 'Absolute relativity theory of everything', the author presents us with the idea that the universe is made up of 4 spatial dimensions.
The primary being the expansion of space (this ignores time as a by-product of the Euclidean 3D & GFT spacetime). The opening velocity of this dimension determines whether atoms associate and/or dissociate. Currently the constant 300kms enables an atomic Hold On, which means atoms can continue to exist. Should this velocity change, however, it will cause the entire dissociation of atoms back to energy - and hence initiate a further expansion cycle and a new big bang. The universe is nothing more than an ongoing series of pulses. We just happen to be existing in one of them at this time - lasting some 14 billion years.
1.8 / 5 (6) Jul 10, 2012
Another science fantasist would be Galileo. He predicted that the moon caused the ebb and flow of tides as the water followed the moon around the Earth. He was condemned for heresy and was due to have his head cut off. Fortunately, an influential person in the Vatican managed to commute this sentence to a life imprisonment in his house. Another example would be the man who said molecules exist. He was made fun of and he committed suicide. Another example would be Faraday - he was only a lab technician - and he left us a substantial legacy. Euclid envisaged we exist in 3D and wrote a 'science fiction' called The Elements! Still in print today. Science is understanding nature, not just writing pages of math formulae. Einstein said himself imagination is far more important than intelligence. Intelligent people can do the donkey work and calculate how many threads you need to put onto a bolt to make it fit the hole. If one completely understands nature then you must be a god without a sum.
1 / 5 (5) Jul 10, 2012
We need people with fresh ideas and notions, which are born of their unique sensitivity to nature. As indeed Einstein was. He imagined things which no one else did. Otherwise we are in danger of constantly digging the same bit of ground in the hopeless search of finding a gold nugget. And our cognition is limited by formal doctrine. The Vatican being a perfect example, which suppressed new ideas which Galileo nearly lost his head for. Looking at the little picture of sub-atomic construction of course is useful, but this does not provide a means to understand the big picture - this pursuit may in fact be taking us backwards not forwards as we become more and more possessed with the tiny structure. For example, does any data exist which determines if all sunspots are simultaneous? Has anyone thought of that? If they are indeed simultaneous then there is information between them. Which must be occurring at the rate of 300kms. This would prove the existence of the Primary dimension.
5 / 5 (4) Jul 10, 2012
H.G. Wells and the death-ray-spitting Martians; hence latter-day lasers.
I think you have a VASTLY oversimplified idea of what science is and how it works and how little scientists take pointers from fictional literature. And yes: some people were 'made fun of' for their quirky theories which turned out to be true. But to turn that around and say that every quirky theory therefore has to be true is crazy in the extreme. For every theory that pans out there are literally thousands that don't - and as long as a quirky theory makes no predictions different from the current best model it's useless.
Science is understanding nature, not just writing pages of math formulae
Science is what works. Note the "what works" part. If your science doesn't work and holds no predictive value it's wankery.
4.2 / 5 (5) Jul 10, 2012
We need people with fresh ideas and notions,
We have those.
They're called scientists. Being a scientist is as much about being innovative as it is about being able to learn what has already been discovered and understand complex ideas. But neither works without the other - and all these "arm-chair-theorists" ever do is jot down a brainfart and then think they have discovered the grand unified theory. But when you look through history: those that REALLY came up with the groundbreaking changes were never arm-chair theorists. It was solid scientists, with a deep understanding of what was already learned, working hard and diligently and putting it together in novel ways (e.g., relativity can be deduced from very simple principles with just two additional assumptions: c is constant, and the principle of equivalence).
5 / 5 (1) Jul 11, 2012
From what I just read, we are getting smaller in the big picture than we were before.
2.6 / 5 (5) Jul 12, 2012
But when you look through history: those that REALLY came up with the groundbreaking changes were never arm-chair theorists. It was solid scientists
While I agree with you about arm-chair theorists, it seems to me that those "solid scientists" who come up with truly groundbreaking changes are often ridiculed for their work until later vindicated. From what I have observed, most "solid scientists" seem to lack imagination, work exclusively within the box, and the few that have imagination have an uphill fight to get their work accepted.
not rated yet Jul 12, 2012
The uphill fight is what makes science work, it's harsh scrutiny and it's against anybody who can prove you wrong. For something to be considered remotely real it must satisfy every experiment that anybody in any scientific field can do. The great discoveries in science have shown us that we are all studying the same thing.
5 / 5 (2) Jul 13, 2012
From what I have observed, most "solid scientists" seem to lack imagination,
Then you should probably try to meet one in real life. EVERY article here on physorg has (at least) one innovative idea behind it that, to date, no one on the planet has had. If you don't call that innovative, then what do you? And EVERY article here is the work of a group of 'solid scientists'. If you work exclusively within the box then you're an engineer. (Not knocking engineers. They also do a LOT more for humanity than the armchair theorists.) Do revolutionary findings get questioned by other scientists? Yes. Do they sometimes get fought? Yes. Do they ever get ridiculed by other scientists? No. Science is hard. Not every scientist understands what other scientists do immediately. I can't even read some papers in my own specialty straight away because sometimes I'm unfamiliar with the particular subset of math used. It's not surprising that revolutionary ideas take time to take root.
5 / 5 (1) Jul 13, 2012
This description of a Big Bounce reminds me of the premise of the scifi novel Tau Zero by Poul Anderson back in 1970.
5 / 5 (1) Jul 23, 2012
This is a public service from a curious non-scientist who has gained rudimentary acquaintance with the basic themes/concepts/theories of physics such as string theory, quantum mechanics, special and general relativity, Higgs bosons, the standard model, etc. Buy, borrow or steal Brian Greene's 'The Elegant Universe' and 'The Fabric of the Cosmos', where all of these concepts plus the theories that use them are described and explained in lucid-to-the-max prose that is a pleasure to read. He really writes for the rank beginner but without being condescending or trivial.
{"url":"http://phys.org/news/2012-07-big_1.html","timestamp":"2014-04-17T01:37:36Z","content_type":null,"content_length":"101812","record_id":"<urn:uuid:6ae0dfd2-9b78-4cbc-9bfd-3f48e8de3ccc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Tribute to Claude E. Shannon

Claude Elwood Shannon
Reprinted with permission of Alcatel-Lucent USA Inc.
April 30, 1916 --- February 24, 2001

Claude Elwood Shannon was born in Petoskey, Michigan, on April 30, 1916. He graduated with a B.S. in mathematics and electrical engineering from the University of Michigan in 1936. Shannon earned both a master's degree and a doctorate in 1940 as a student of Vannevar Bush at the Massachusetts Institute of Technology. At that time Vannevar Bush was vice president of M.I.T. and dean of the engineering school, and actively conducting research on his invention the differential analyzer, the first reliable analog computer that solved differential equations.

Shannon's electrical engineering master's thesis "A Symbolic Analysis of Relay and Switching Circuits" has been described as one of the most important master's theses ever written. In it Shannon applied George Boole's binary (true-false) logic algebra found in Russell and Whitehead's Principia Mathematica to the problem of electronic (on-off) switching circuits. At the time, Boolean arithmetic was little known or used outside the field of mathematical logic. Because of Shannon's work, Boolean arithmetic is the basis for the design and operation of every computer in the world. For his Ph.D. dissertation in mathematics, Shannon applied mathematics to genetics. This work was influenced by Norbert Wiener, one of his coworkers with Vannevar Bush at MIT. Wiener studied how the nervous system and machines perform the functions of communication and control, and as acknowledged by Shannon, Wiener made some early contributions to the field of information theory.

In 1941, Shannon joined Bell Telephone Laboratories in New Jersey as a research mathematician. While working on the problem of efficiently transmitting communications, he formulated a theory quantifying information. "The Mathematical Theory of Communication" (Bell System Technical Journal, 1948) extended the concept of entropy (a measure of uncertainty) by demonstrating that decreases in uncertainty correspond to the information content in a message. This paper began the field of Information Theory. In footnotes of the 1949 book reprinting the 1948 paper, Shannon and Warren Weaver acknowledge that their work was based on the treatment of "information" by John von Neumann in 1932, Leo Szilard in 1929, and R.V.L. Hartley in 1928, and ultimately on Gibbs's 1902 work in statistical mechanics and Boltzmann's 1894 work in thermodynamics.

The most important concept of Shannon's theory is the "entropy function". It is expressed in the discrete form by the equation

$$ H = - K \sum_i p_i \log p_i $$

This function represents the lower limit on the expected number of symbols required to code for the outcome of an event regardless of the method of coding, and is thus the unique measure of the quantity of information. It is the amount of information that would be required to reduce the uncertainty about an event with a set of probable outcomes to a certainty. As derived by Shannon it is the only measure of information that simultaneously meets the three conditions of being continuous over the probability, of monotonically increasing with the number of equiprobable outcomes, and of being the weighted sum of the same function defined on different partitions of the probable outcomes.
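As a small illustration (not part of the original tribute), the discrete entropy function is straightforward to compute; the sketch below uses base-2 logarithms and absorbs the constant K into the choice of base, so H is measured in bits:

```python
import math

def entropy(probs, base=2.0):
    """Shannon entropy H = -sum(p_i * log(p_i)); terms with p = 0 contribute nothing."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))      # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))      # ~0.469 bits: a biased coin is less uncertain
print(entropy([0.25] * 4))      # 2.0 bits: four equiprobable outcomes
```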
In the discrete and continuous forms, the uncertainty corresponds to the entropy of statistical mechanics and to the entropy of the second law of thermodynamics, and it is the foundation of information theory. Shannon's work has found application in computer science, in communication engineering, in biological information systems including nucleic acid and protein coding, and hormonal and metabolic signaling, in linguistics, phonetics, cognitive psychology, and cryptography. In his work Shannon used this measure of information to show how many extra bits would be needed to efficiently correct for errors when the message was transmitted on a noisy channel. This work was critical for the development of digital encoding and modern electronic communications. Shannon's papers also contain the first use of the word "bit" as shorthand for "the binary digit."

At Bell Labs Shannon was known for his eclectic interests, and for his dexterity both at constructing devices and at juggling and riding his unicycle down the halls. He remained affiliated with the Labs until 1972. He returned to MIT as a visiting professor in 1956, as a permanent member of the faculty in 1958, and as a professor emeritus in 1978. His work on chess-playing machines and an electronic mouse that could "learn" to run a maze helped create the field of artificial intelligence.

Dr. Shannon died on Saturday, February 24, 2001, in Medford, Mass. He had been afflicted with Alzheimer's disease for several years. Dr. Marvin Minsky of M.I.T. said that despite the effects of the illness "the image of that great stream of ideas still persists in everyone his mind ever touched."

For a bibliography, see the Bio-Info FAQ Shannon Bibliography.

Written by John S. Garavelli, Thomas D. Schneider, and John L. Spouge.

• Remembering Claude Shannon, a project at the George Mason University Digital History. Tom Schneider's remembrances.
• Claude Shannon Obituary: Claude Shannon (1916-2001). (Nature 410, 12 April 2001, 768). The article unfortunately has some errors: 1. However surprising it may be, the channel capacity theorem allows an arbitrarily small probability of error up to and including the channel capacity.
• COMMUNICATION: Retrospective: Claude E. Shannon (1916-2001). Solomon W. Golomb, Science Apr 20 2001, 292: 455.
• Claude Shannon: Reluctant Father of the Digital Age by M. Mitchell Waldrop (MIT Technology Review, volume 104, no 6, July/August 2001, 64-71). This article unfortunately also has an error: it repeatedly claims that Shannon's method allows data transmission "without error" or that it is possible to have "perfect transmission". The theorem allows that the error rate may be made as small as desired but it cannot be zero.
• Shannon Statue
• A PERSONAL TRIBUTE TO CLAUDE SHANNON by Arthur Lewbel. Includes a link to a movie of Shannon juggling.

Schneider Lab origin: 2001 Feb 28 updated: 2013 Aug 04
{"url":"http://schneider.ncifcrf.gov/shannontribute.html","timestamp":"2014-04-20T09:50:14Z","content_type":null,"content_length":"12215","record_id":"<urn:uuid:472d7cfd-1420-42bd-ac8c-8fac37dd0311>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
15th ISeminar 2011/12 Operator Semigroups for Numerical Analysis
Talk:Solutions 9

How to contribute?
Here you will have the possibility to discuss the solutions of exercises in Lecture 9.
• To make comments you have to log in.
• You can add a new comment by clicking on the plus (+) sign on the top of the page.
• Type the title of your comment into the "Subject/headline" field.
• Add your contribution to the main field.
• You can write mathematical expressions in a convenient way, i.e., by using LaTeX commands between <math> and </math> (for instance, <math> \mathrm{e}^{tA}u_0 </math> gives e^{tA} u_0).
• Please always "sign" your comment by writing ~~~~ (i.e., four times tilde) at the end of your contribution. This will be automatically converted to your user name with time and date.
• You can preview or save your comment by clicking on the buttons "Show preview" or "Save page", respectively.
Although all participants are allowed to edit the whole page, we kindly ask you not to do so and to refrain from clicking on the "Edit" button. Please do follow the above steps.
Remember that for safety reasons, the server is off-line every day between 4:00 a.m. and 4:20 a.m. in local time (CEST).

Problem 4
Dear Isem Team, I think there is a typo in Problem 4: It should state that SAS^{-1}A is a sectorial operator (which is true), not STS^{-1}T. Nevertheless, I was wondering if you meant SAS^{-1} is a sectorial operator. Thanks! OrifIbrogimov 17:46, 13 December 2011 (CET)
Dear Orif, thanks! We meant of course "SAS^{-1} is a sectorial operator". IsemTeam 17:43, 14 December 2011 (CET)

L'viv Team
Dear L'viv Team, your `solution' of Problem 6 works only if the semigroups commute. This is not assumed in the problem, as it is posed! JurgenVoigt 22:08, 10 January 2012 (CET)
{"url":"https://isem-mathematik.uibk.ac.at/isemwiki/index.php/Talk:Solutions_9","timestamp":"2014-04-18T08:19:09Z","content_type":null,"content_length":"13388","record_id":"<urn:uuid:b98d5d58-1077-4762-9705-bb2b40bcaf7f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Smart Financial Planning (Wealth & Insurance)

The reliable National Savings Certificate (NSC) looks like it may have lost popularity with countless competing investment options available such as equities, mutual funds, unit-linked insurance and fixed maturity plans. However, there is no ignoring the instrument's respectable returns, which are not only assured, but also tax-exempt (under section 80C) and government-guaranteed. Compared with the NSC, the Public Provident Fund (PPF) has traditionally been more popular on account of its 8 per cent tax-free interest. However, the PPF has a maximum investment limit of Rs 70,000 per annum (i.e., the maximum amount one can invest in PPF every year is capped at Rs 70,000).

Investment limit
NSCs do not have a limit on how much you can invest. What's more, interest earned on NSC investments up to Rs 1 lakh is tax free. You read that correctly. NSCs offer you the possibility of earning up to Rs 1 lakh without paying any tax whatsoever. This is because NSC is the only small saving scheme wherein not only the initial deposit, but also the interest for the first five years, out of its term of six years, is eligible for a deduction under section 80C.

Interest and returns
NSC offers 8 per cent interest compounded half-yearly. Due to this compounding, the effective interest rate per annum works out to 8.16 per cent. It is a cumulative scheme with a term of six years, meaning that though the interest accrues every year, it is paid to the investor together with the initial capital invested at the end of six years. For example, Rs 10,000 invested in NSC today will grow to Rs 16,010 at the end of six years, compounded annually at an effective interest rate of 8.16 per cent.

Let's talk about the tax treatment of the interest paid out. Unlike PPF, where the full amount of interest is tax free, NSC interest is taxable. However, as it is a cumulative scheme (i.e., interest is not paid to the investor but instead accumulates in the account), each year's interest for the first 5 years is automatically re-invested in the NSC. Since it is deemed re-invested, it qualifies for a fresh deduction under Sec 80C, thereby making it tax free. Only the final year's interest, when the NSC matures, does not receive a tax deduction as it does not get reinvested, but is paid back to the investor along with the interest of the earlier years and the capital amount.

Illustration (all values indicating interest earned have been rounded off for simplicity):
Assume that you invested Rs 1,00,000 in an NSC on April 1, 2010. Interest on this investment for each year is shown in the following schedule:

April 1, 2010: Initial investment = Rs 1,00,000.

March 31, 2011: Interest for the first year = Rs 8,160
Explanation: Rs 1,00,000 multiplied by 8.16 and then divided by 100.

March 31, 2012: Interest for the second year = Rs 8,830
Explanation: For the second year your principal will be Rs 1,00,000 + Rs 8,160 = Rs 1,08,160. This is because the interest of Rs 8,160 earned in the first year is added to your initial investment of Rs 1,00,000 and then interest (at 8.16 per cent) is calculated on Rs 1,08,160.

March 31, 2013: Interest for the third year = Rs 9,550
Explanation: For the third year your principal will be Rs 1,08,160 + Rs 8,830 = Rs 1,16,990. This is because the interest of Rs 8,830 earned in the second year is added to your corpus of Rs 1,08,160 and then interest (at 8.16 per cent) is calculated on Rs 1,16,990.
March 31, 2014: Interest for the fourth year = Rs 10,330
Explanation: For the fourth year your principal will be Rs 1,16,990 + Rs 9,550 = Rs 1,26,540. This is because the interest of Rs 9,550 earned in the third year is added to your corpus of Rs 1,16,990 and then interest (at 8.16 per cent) is calculated on Rs 1,26,540.

March 31, 2015: Interest for the fifth year = Rs 11,170
Explanation: For the fifth year your principal will be Rs 1,26,540 + Rs 10,330 = Rs 1,36,870. This is because the interest of Rs 10,330 earned in the fourth year is added to your corpus of Rs 1,26,540 and then interest (at 8.16 per cent) is calculated on Rs 1,36,870.

March 31, 2016: Interest for the sixth year = Rs 12,070
Explanation: For the sixth year your principal will be Rs 1,36,870 + Rs 11,170 = Rs 1,48,040. This is because the interest of Rs 11,170 earned in the fifth year is added to your corpus of Rs 1,36,870 and then interest (at 8.16 per cent) is calculated on Rs 1,48,040.

Total interest earned in six years = Rs 60,110 (Rs 8,160 + Rs 8,830 + Rs 9,550 + Rs 10,330 + Rs 11,170 + Rs 12,070).
Total value of the investment at the end of the sixth year = Rs 1,60,110 (Rs 1,00,000 + Rs 60,110); of this, only the final year's interest of Rs 12,070 is taxable.

What you must ensure while filing your tax return
To benefit from this feature of re-invested interest and its deduction, it is important to declare the accrued interest on NSC on a yearly basis in your tax return. In the above example, for financial year 2010-11 (the current financial year), you will include the interest amount of Rs 8,160 in your tax return under the head 'Income from other sources'. Under deductions, you will claim Rs 8,160 under Section 80C as re-invested NSC interest. Both cancel each other out, making the interest in effect tax free.

From the above discussion, it is shown that both NSC and PPF interest is tax free. However, the difference is that PPF interest is tax free per se, whereas the NSC interest becomes tax free on account of the deemed reinvestment under Section 80C. Remember that Section 80C has a maximum limit of Rs 1 lakh. Your NSC interest would only qualify for the deduction provided you have funds left in Section 80C. Provident fund contributions, insurance premiums, housing loan principal repayments, tuition fees, PPF, tax-saving mutual funds and bank deposits -- not to mention any fresh investment in NSC -- are also covered under the same Rs 1 lakh limit. So, if you want to invest and take advantage of the tax-saving feature of NSC interest, remember to make the adjustment so far as the other tax-saving investments are concerned.

Where and how to buy?
National Savings Certificates (NSC) are issued by the Department of Posts, Government of India, and are available at most post offices in the country in denominations of Rs 100, Rs 500, Rs 1,000, Rs 5,000 and Rs 10,000. NSCs can also be transferred from one person to another by paying a small fee. They can also be transferred from one post office to another.
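To double-check the year-by-year arithmetic in the illustration above, here is a short Python sketch (added for illustration only; the blog rounds each year's interest, so the unrounded totals below differ from its figures by a few rupees):

```python
def nsc_schedule(principal, annual_rate=0.0816, years=6):
    """Accumulate NSC interest year by year at the effective annual rate."""
    balance = principal
    for year in range(1, years + 1):
        interest = balance * annual_rate
        balance += interest
        print(f"Year {year}: interest = Rs {interest:,.0f}, balance = Rs {balance:,.0f}")
    return balance

nsc_schedule(100_000)   # final balance ~ Rs 160,103 vs. the blog's rounded Rs 1,60,110
```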
{"url":"http://smartfinanceplan.blogspot.com/2011/03/understanding-benefits-of-investing-in.html","timestamp":"2014-04-20T08:26:40Z","content_type":null,"content_length":"91373","record_id":"<urn:uuid:45e3dd08-5990-4ebf-997f-f930cf6daf7f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
In Mexico, more marriages ending in divorce, and sooner
May 18, 2012
By David Smith

R user Diego Valle analyzed the rate of divorce in Mexican marriages since 1993 (the earliest date for which data are available) and found that not only have more marriages ended in divorce over time, but marriages that do end are ending sooner:

This chart is a bit complicated, but it bears close inspection. Each line you see is a cohort of all of the marriages in a given year: 1993, 1994, all the way up to 2009. The vertical height of each line is proportional to the total number of divorces in each subsequent year within each cohort (expressed as a fraction of all marriages in the cohort year). Cleverly, the cohort lines are all arranged not by calendar time, but by years since marriage: the leftmost point represents divorces in the first year (relatively few), then divorces in the second year, and so on. More residents of Mexico married in 1993 saw their 10th wedding anniversary than those married in 1998. Overall, the trend is clear: more weddings that take place now will end than those from previous years, and they're likely to end sooner as well.

Although there's not much historical data for recent marriages, the steady progression of divorce rates over time allows Diego to create a forecast (using a linear mixed-effects model in the R language) of the outcomes of recent marriages. He predicts, for example, that 11% of marriages registered in 2007 will have ended in divorce by 2022. By contrast, though, that's about the same rate as US marriages from the fifties.

If you want to do a similar analysis, Diego provides R code in his post linked below, and on his GitHub.

Diego Valle-Jones: Proportion of marriages ending in divorce
{"url":"http://www.r-bloggers.com/in-mexico-more-marriages-ending-in-divorce-and-sooner/","timestamp":"2014-04-21T02:12:21Z","content_type":null,"content_length":"37057","record_id":"<urn:uuid:f61b1dda-fcf6-40f2-8d2c-065146c7a25b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Teacher

Technology and Reasoning in Algebra and Geometry. Daniel B. Hirschhorn and Denisse R. Thompson. Explorations to foster reasoning in mathematics. The geometry portion utilizes dynamic software. (89, 1996) 138-142.

Folded Paper, Dynamic Geometry, and Proof: A Three-Tier Approach to the Conics. Daniel P. Scher. Folding conics and constructing Sketchpad models. (89, 1996) 188-193.

Theorems in Motion: Using Dynamic Geometry to Gain Fresh Insights. Daniel P. Scher. Construction of a constant-perimeter rectangle; a constant-area rectangle. (89, 1996) 330-332.

Using Interactive-Geometry Software for Right-Angle Trigonometry. Charles Vonder Embse and Arne Englebretsen. Directions for the exploration utilizing The Geometer's Sketchpad, Cabri Geometry II, and TI-92 Geometry. (89, 1996) 602-605.

Geometry and Proof. Michael T. Battista and Douglas H. Clements. Connecting Research to Teaching. Discussion of research and instructional possibilities. Includes comments on computer programs and classroom. (88, 1995) 48-54.

From Drawing to Construction with The Geometer's Sketchpad. William F. Finzer and Dan S. Bennett. Understanding the difference between a drawing and a construction. (88, 1995) 428-431.

Conjectures in Geometry and The Geometer's Sketchpad. Claudia Giamati. Exploration as a foundation on which to base proof. (88, 1995) 456-458.

Network Neighbors. William F. Finzer. An experiment in network collaboration using The Geometer's Sketchpad. (88, 1995) 475-477.

Technology in Perspective. Albert A. Cuoco, E. Paul Goldenberg, and Jane Mark. Technology Tips. Constructions and investigations with dynamic geometry. (87, 1994) 450-452.

Teaching Relationships between Area and Perimeter with The Geometer's Sketchpad. Michael E. Stone. For all n-gons with the same perimeter, what shape will have the greatest area? Sketchpad investigations of the problem. (87, 1994) 590-594.

Dynamic Geometry Environments: What's the Point? Celia Hoyles and Richard Noss. Technology Tips. Constructions in Cabri Geometry. (87, 1994) 716-717.

Mathematical Iteration through Computer Programming. Mary Kay Prichard. Some of the problems involved are geometry related. Cutting figures, diagonals of a polygon, figurate numbers. (86, 1993) 150-156.

The Geometry Proof Tutor: An "Intelligent" Computer-based Tutor in the Classroom. Richard Wertheimer. A description of classroom experiences with the GPTutor. (83, 1990) 308-317.

Students' Microcomputer-aided Exploration in Geometry. Daniel Chazan. Using the Geometric Supposers. (83, 1990) 628-635.

Let the Computer Draw the Tessellations That You Design. Jimmy C. Woods. Gives BASIC routines to save time in the drawing of tessellations. (81, 1988) 138-141.

Using Logo Pseudoprimitives for Geometric Investigations. Michael T. Battista and Douglas H. Clements. A set of Logo procedures to allow the investigation of traditional geometric topics. (81, 1988) 166-174.

Estimating Pi by Microcomputer. Richard J. Donahoe. Four BASIC programs using different techniques. (81, 1988) 203-206.

Integrating Spreadsheets into the Mathematics Classroom. Janet L. McDonald. Some of the spreadsheets presented involve geometric investigations. (81, 1988) 615-622.

Periodic Pictures. Ray S. Nowak. Activities involving graphical symmetries produced by periodic decimals. BASIC program provided. (80, 1987) 126-137.

Lessons Learned While Approximating Pi. James E. Beamer. Approximations of pi. BASIC, FORTRAN, and TI55-II programs provided. (80, 1987) 154-159.

Turtle Graphics and Mathematical Induction. Frederick S. Klotz. Revising the FD command in Logo. Links to inductive proofs. (80, 1987) 636-639, 654.

Reflection Patterns for Patchwork Quilts. Duane DeTemple. Forming patchwork quilt patterns by reflecting a single square back and forth between inner and outer rectangles. Investigating the periodic patterns formed. BASIC program included. (79, 1986) 138-143.

Logo and the Closed-Path Theorem. Alton T. Olson. Investigation of some plane geometry theorems utilizing Logo and the Closed-Path Theorem. Logo procedure included. (79, 1986) 250-255.

The Geometric Supposer: Promoting Thinking and Learning. Michal Yerushalmy and Richard A. Houde. A description of classroom use of the Supposer. (79, 1986) 418-422.

Logo in the Mathematics Curriculum. Tom Addicks. Using Logo to produce bar graphs and pie charts. (79, 1986) 424-428.

Where Is the Ball Going? Examination of ball paths on a pool table. BASIC routine included. (79, 1986) 456-460.

Circles and Star Polygons. Clark Kimberling. BASIC programs for producing the shapes. (78, 1985) 46-51.

Investigating Shapes, Formulas, and Properties With LOGO. Daniel S. Yates. LOGO activities leading to results on areas and triangle geometry. (78, 1985) 355-360. (See correction p. 472.)

Measuring the Areas of Golf Greens and Other Irregular Regions. W. Gary Martin and Joao Ponto. Divide the region into triangles having a common vertex at an interior point. BASIC program provided. (78, 1985) 385-389.

A Piagetian Approach to Transformation Geometry via Microworlds. Patrick W. Thompson. The use of a computerized microworld called Motions to allow students to work with transformation geometry. (78, 1985) 465-471.

Microworlds: Options for Learning and Teaching Geometry. Joseph F. Aieta. Using Logo in order to study relations in families of figures. Logo procedures provided. (78, 1985) 473-480.

High Resolution Plots of Trigonometric Functions. Marvin E. Stick and Michael J. Stick. Some of the plots were part of a "mathematics in art" project in a high school geometry class. BASIC routines provided. (78, 1985) 632-636.

A Square Share: Problem Solving with Squares. Some geometry and work with Logo. (77, 1984) 414-420.

Shipboard Weather Observation. Richard J. Palmaccio. Vector geometry applied to determining wind velocity from a moving ship. BASIC programs provided. (76, 1983) 165-169.

Geometric Transformations On A Microcomputer. Thomas W. Shilgalis. Microcomputer programs for use in demonstrating motions and (75, 1982) 16-19.

Formal Axiomatic Systems and Computer Generated Theorems. Michael T. Battista. The use of a microcomputer in the development of an abstract system. (75, 1982) 215-220.

Visualization, Estimation, Computation. Evan M. Maletsky. Activities for investigating the manner in which the dimensions of a cone change as its shape changes. BASIC program provided. (75, 1982) 759-764.

Using The Computer To Help Prove Theorems. Louise Hay. Using a computer in an attempt to generate possible counterexamples can be an aid toward finding a proof for the theorem. (74, 1981) 132-138.

Computer Classification Of Triangles and Quadrilaterals - A Challenging Application. J. Richard Dennis. Computer application, uses coordinates of vertices. (71, 1978) 452-458.

An Investigation Of Integral 60 degree and 120 degree Triangles. Richard C. Muller. Law of cosines investigation. Computer related. (70, 1977) 315-318.

email: 00hjludwig@bsu.edu
home page: http://www.cs.bsu.edu/~hjludwig/
{"url":"http://mathforum.org/mathed/mtbib/geom.and.computers.html","timestamp":"2014-04-24T17:12:22Z","content_type":null,"content_length":"10561","record_id":"<urn:uuid:7f4cd56d-08fe-4ee3-a8ae-219829873a74>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: I) plot the probability (instead of the dummy) on the residuals // II) cluster test

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From: Maarten Buis <maartenlbuis@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: I) plot the probability (instead of the dummy) on the residuals // II) cluster test
Date: Mon, 12 Dec 2011 13:30:52 +0100

On Sat, Dec 10, 2011 at 11:26 AM, Luca Fumarco wrote:
> I) I have a probit regression, where the dep. var. is a dummy.
> I want to graphically see whether my model is affected by heteroskedasticity. Is there a way to plot the probability (instead of the dummy) on the residuals instead?

The problem is that the residual you care about is the difference between the predicted and the latent (i.e., unobserved) variable. To make things worse, the latent variable is identified by the assumption of homoskedasticity (or, alternatively, by a functional form for the heteroskedasticity in -oglm-, see -ssc d oglm- and <http://www.nd.edu/~rwilliam/oglm/>). So there is no way to empirically check that assumption.

Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2011-12/msg00472.html","timestamp":"2014-04-19T22:35:08Z","content_type":null,"content_length":"8659","record_id":"<urn:uuid:0b738654-5779-4791-847e-3e6bc1f59c16>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Hanning window of MATLAB
As I understand it, the book (Notes on Digital Signal Processing by C. Britton Rorabaugh) is wrong. If I replace the N by (N-1) in the denominator of the formula, which then reads 0.5 + 0.5*cos(2*pi*n/(N-1)), the results coincide exactly with those of MATLAB. Thank you for the link.
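For anyone who wants to reproduce the comparison, here is a quick check with NumPy (not from the original post; np.hanning follows the same (N-1)-denominator convention, and the poster's cosine-plus form is the same window written with the index centred at zero):

```python
import numpy as np

N = 8
n = np.arange(N)

w_nm1 = 0.5 - 0.5 * np.cos(2 * np.pi * n / (N - 1))   # (N-1) in the denominator
w_n   = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)         # N in the denominator

print(np.allclose(w_nm1, np.hanning(N)))   # True: np.hanning uses N-1
print(np.allclose(w_n,   np.hanning(N)))   # False: the N version is a different (periodic) window
```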
{"url":"http://www.physicsforums.com/showpost.php?p=3780142&postcount=3","timestamp":"2014-04-21T09:55:48Z","content_type":null,"content_length":"7269","record_id":"<urn:uuid:491af3f9-8bd1-4345-9247-761945c0b881>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - Masters in math: Job options

Quote by HiILikeMath (Post 3798164)
Trust me I'm glad to be prepared to know that kind of thing. I was told on another forum that with about 18 credits in the right math subjects I would be able to apply to most grad schools (and yes I only have Calc 1 under my belt). Is that off? I think a minor is 15 credits and I was just gonna take one or two more classes in addition to it. That would be an additional semester for me to complete my undergrad, but if that's what it took to get me into grad school and I'm set on that then it's no problem to me. But since you say from your experience only those tailored to the finance side have good prospects, I may have to reconsider if I continue to see this is the case.

"It is desirable that the applicant's undergraduate background include courses in calculus, linear and abstract algebra, differential equations, and real and complex analysis."

"... a student should have mastered material ... including: three semesters of calculus, one or two semesters of differential equations, one semester courses in modern algebra, linear algebra, geometry or topology, advanced calculus of one and several variables. In addition, a student should have completed at least three additional mathematics courses and at least two courses in related fields such as statistics, computer science, or the physical sciences."

"Undergraduate coursework equivalent to a major in mathematics from an accredited university. This should include a one-year course in either analysis or abstract algebra."

These three quotes were from a top 5, top 30, and otherwise "unranked" school's master's in math programs ... so I'd imagine your 15-credit minor might look like:

- calc 1
- calc 2
- calc 3
- intro ODEs
- linear algebra

which wouldn't cut it for any of those three tiers of programs. You'd probably have to add:
{"url":"http://www.physicsforums.com/printthread.php?t=583702","timestamp":"2014-04-21T09:54:31Z","content_type":null,"content_length":"18607","record_id":"<urn:uuid:d77eb6a2-da7e-43ca-984e-bbfded663f53>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] How to fit a surface from a list of measured 3D points ?
josef.pktd@gmai...
Wed Apr 1 12:51:30 CDT 2009

On Wed, Apr 1, 2009 at 1:40 PM, LB <berthe.loic@gmail.com> wrote:
>> Hmm, good point. Can you rotate the data points in the 3D space so
>> that the new z values do become a proper function in two dimensions?
> It may be possible, with some manual transformation of the data
> points, but I would prefer a more generic approach if possible.
>> If not, then you'll have to:
>> a) fit a surface to all of the data in 3D (something done a lot by
>> computer graphics and robotics people, who get point clouds as return
>> data from LIDAR scanners and similar, and then try to fit the points
>> to 3D surfaces for visualization / navigation)
>> b) Find locally-smooth patches and fit surfaces to these individually
>> (the manifold-learning folks do this, e.g. "Hessian LLE"). Say you're
>> interested in curvature around a given data point (x, y, z)... you
>> could take the points within some neighborhood and then either fit
>> them to a simple 3d surface (like some kind of paraboloid), or figure
>> out (with e.g. PCA) the best projection of those data points to a
>> plane, and then fit a surface to f(x, y) -> z for the transformed data.
>> or perhaps even c) just calculate what you need from the data points
>> directly. If you just need very local curvature data, you could
>> probably calculate that from a point and its nearest neighbors. (This
>> is really just a degenerate case of b...)
>> Lots of tools for these tasks are in scipy, but nothing off-the-shelf
>> that I know of.
> The method c) seems the simplest at first sight but I see two issues
> for this local approach:
> - the measured data are noisy. Using the nearest neighbor could give
> a noisy result too, especially when looking at a radius of curvature
> - I don't see how to use this approach to plot the variation of
> radius of curvature along the surface. It can give me an array of
> radius of curvature, but as my data are not regularly spaced, it won't
> be easy to handle.
> The method b) seems very fuzzy to me: I don't have any knowledge in
> manifold-learning and I would have the second issue of the method c)
> too.
> The method a) is what I had initially in mind, but I didn't see how to
> do this in scipy :-(
> I believed at first that I could make a sort of parametric bispline
> fit with the functions available in scipy.interpolate, but I didn't
> succeed in.
> Do you have any example or hint for doing this kind of treatment in
> scipy?
> LB

If you have noisy data, then a kernel ridge regression or fitting a
gaussian process might be more appropriate than just interpolation.
I posted a simple example code for it a while ago, and pymvpa has a
more complete implementation. Around 900 points should still be ok,
since it builds the square distance matrix (900,900).
I don't know anything about your curvature measure, but it should be
worth a try.

More information about the SciPy-user mailing list
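Following up on Josef's kernel ridge suggestion, here is a minimal self-contained sketch in Python/NumPy (added for illustration, not from the thread; it assumes the points have already been rotated or projected so that z is approximately a function of (x, y), and the kernel width and ridge penalty are arbitrary choices):

```python
import numpy as np

def kernel_ridge_fit(X, z, length_scale=1.0, lam=1e-3):
    """Fit f(x, y) -> z with a Gaussian kernel; returns a predictor function."""
    # Pairwise squared distances between training points (n x n).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * length_scale ** 2))
    # Ridge term lam keeps the solve well-conditioned and smooths the noise.
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), z)

    def predict(Xq):
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2 * length_scale ** 2)) @ alpha

    return predict

# Noisy sample points on a smooth surface.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
z = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(300)

predict = kernel_ridge_fit(X, z, length_scale=0.5, lam=1e-2)
print(predict(np.array([[0.2, -0.3]])))   # smoothed estimate of the surface there
```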
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-April/020560.html","timestamp":"2014-04-18T00:23:54Z","content_type":null,"content_length":"6363","record_id":"<urn:uuid:8c456f3d-7c8c-4e94-b595-abe8febd95ba>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Dupont, WA Statistics Tutor
Find a Dupont, WA Statistics Tutor

With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I cannot promise a quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including statistics, calculus, geometry, GRE

...Even if someone isn't interested in the subject, I believe there are always ways to enjoy learning. In writing and grammar, I think that a writer can be taught to do well. We are here to learn and no matter what the subject is, there is always room for a well-written paper. Chemistry is one of the hardest subjects to learn and understand.
25 Subjects: including statistics, chemistry, geometry, English

...I can read and speak Korean as a native Korean. When I was a UW student, after I took differential equations, I tutored this subject to college students. Also, as an Electrical Engineering student, I studied Laplace transforms and Fourier series.
20 Subjects: including statistics, calculus, physics, algebra 2

...I enjoy working one on one with students, whether helping them with homework or preparing for an exam. I am willing to create practice tests for students to ensure their success. I have a love for mathematics, but I understand that not everyone shares my passion.
19 Subjects: including statistics, reading, English, calculus

...I recently was a volunteer tutor at the Kent and Covington libraries where I tutored children K-12th grade in many subjects. I also volunteered with the Pullman, WA YMCA after school tutoring for over a year while earning my degree at WSU. During this time I also volunteered with the YMCA Speci...
25 Subjects: including statistics, chemistry, physics, geometry
{"url":"http://www.purplemath.com/Dupont_WA_Statistics_tutors.php","timestamp":"2014-04-19T23:48:53Z","content_type":null,"content_length":"23898","record_id":"<urn:uuid:40f36c12-589c-4c48-959e-ec3f18fa05b9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
how to do the chapter 1 project for algebra 2 prentice hall

goddy (Saturday 30th of Dec, 11:51):
Friends, I am in need of aid on graphing lines, factoring expressions, proportions and GCF. Since I am a newbie to Intermediate Algebra, I really want to learn the bedrock of Algebra 1 completely. Can anyone recommend the best place from where I can start learning the basics? I have the final next week.

IlbendF (Sunday 31st of Dec, 19:28):
Well, I cannot do your assignment for you as that would mean plagiarism. However, I can give you a suggestion. Try using Algebrator. You can find detailed and well explained solutions to all your problems in how to do the chapter 1 project for algebra 2 prentice hall.

Gog (Sunday 31st of Dec, 21:54):
Algebrator is a nice thing. I have used it a lot. I tried solving the problems myself, at least once before using the software. If I couldn't solve the question then I used the software to give me the solution. I then used to compare both the answers and correct my mistakes.

medaxonan (Monday 01st of Jan, 07:09):
Wow! That sounds alright. So where did you buy the program?

DoniilT (Tuesday 02nd of Jan, 13:31):
I remember having often faced problems with interval notation, geometry and unlike denominators. A truly great piece of math program is Algebrator software. By simply typing in a problem from the workbook, a step by step solution would appear by a click on Solve. I have used it through many algebra classes: College Algebra, Algebra 1 and College Algebra. I greatly recommend the program.

caxee (Wednesday 03rd of Jan, 09:49):
Sure, why not! You can grab a copy of the program from http://www.solve-variable.com/solving-quadratic-equations-4.html. You are bound to get addicted to it. Best of luck.
{"url":"http://www.solve-variable.com/solve-variable/angle-complements/how-to-do-the-chapter-1.html","timestamp":"2014-04-19T07:14:20Z","content_type":null,"content_length":"25092","record_id":"<urn:uuid:a67ccc1e-4b0c-4611-a2e8-7773d99e13ce>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Intersecting circles November 12th 2012, 03:39 PM Intersecting circles Given an arc PQ with curvature 1/9, and three identical circles, each of radius 3, centered at B, G, and A respectively. The circumference of each circle passes through the centers of the others. Find the area of the shaded region. November 12th 2012, 08:06 PM Re: Intersecting circles Hey blehbleh. Can you show us what you have tried? November 13th 2012, 02:01 AM Re: Intersecting circles I actually got the result already. I was wondering if there is an alternative solution. I wanted to see if this can be solved using polar integration by letting G be the origin. The area of the black region can be found with polar integration by getting angle CGE, which can be derived from the cosine law. I was wondering if it is also possible to find the blue area by getting angle DGF? I couldn't seem to get that one by the cosine rule. Also, is there an alternative solution for the red area other than 2(sector GBH - triangle GBH)?
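The last expression in the thread is the standard circular-segment formula, and it is worth making explicit. As a hedged sketch (assuming two circles of radius r whose centers are a distance r apart, which is the configuration described, since each circle passes through the other's center), the lens-shaped overlap has central angle $\theta = \frac{2\pi}{3}$ at each center because $\cos(\theta/2) = \frac{r/2}{r} = \frac{1}{2}$. Then $2(\text{sector} - \text{triangle}) = 2\left(\tfrac{1}{2}r^2\theta - \tfrac{1}{2}r^2\sin\theta\right) = r^2(\theta - \sin\theta)$, which for $r = 3$ gives $9\left(\tfrac{2\pi}{3} - \tfrac{\sqrt{3}}{2}\right) = 6\pi - \tfrac{9\sqrt{3}}{2}$. Whether this matches the red region exactly depends on details of the figure, which is not reproduced here.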
{"url":"http://mathhelpforum.com/geometry/207366-intersecting-circles-print.html","timestamp":"2014-04-24T17:35:43Z","content_type":null,"content_length":"4581","record_id":"<urn:uuid:f2ffd3bf-565b-4e8d-b7bb-7e18e14d83f1>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
XPPAUT1.8 - The differential equations tool. Available at www.pitt.edu/bardware, 1998. Cited by 37 (4 self): We propose a biophysical mechanism for the high interspike interval variability observed in cortical spike trains. The key lies in the nonlinear dynamics of cortical spike generation, which are consistent with type I membranes where saddle-node dynamics underlie excitability (Rinzel & Ermentrout, 1989). We present a canonical model for type I membranes, the θ-neuron. The θ-neuron is a phase model whose dynamics reflect salient features of type I membranes. This model generates spike trains with coefficient of variation (CV) above 0.6 when brought to firing by noisy inputs. This happens because the timing of spikes for a type I excitable cell is exquisitely sensitive to the amplitude of the suprathreshold stimulus pulses. A noisy input current, giving random amplitude "kicks" to the cell, evokes highly irregular firing across a wide range of firing rates; an intrinsically oscillating cell gives regular spike trains. We corroborate the results with simulations of the Morris-Lecar (M-L) neural model with random synaptic inputs: type I M-L yields high CVs. When this model is modified to have type II dynamics (periodicity arises via a Hopf bifurcation), however, it gives regular spike trains (CV below 0.3). Our results suggest that the high CV values such as those observed in cortical spike trains are an intrinsic characteristic of type I membranes driven to firing by "random" inputs. In contrast, neural oscillators or neurons exhibiting type II excitability should produce regular spike trains.
- SIAM J. Math. Anal., 2002. Cited by 24 (3 self): In most applications of delay differential equations in population dynamics, the need to incorporate time delays is often the result of the existence of some stage structure. Since the through-stage survival rate is often a function of time delays, it is easy to conceive that these models may involve some delay-dependent parameters. The presence of such parameters often greatly complicates the task of an analytical study of such models. The main objective of this paper is to provide practical guidelines that combine graphical information with analytical work to effectively study the local stability of some models involving delay-dependent parameters. Specifically, we shall show that the stability of a given steady state is simply determined by the graphs of some functions of τ which can be expressed explicitly and thus can be easily depicted by Maple and other popular software. In fact, for most application problems, we need only look at one such function and locate its zeros. This function often has only two zeros, providing thresholds for stability switches. The common scenario is that as time delay increases, stability changes from stable to unstable to stable, implying that a large delay can be stabilizing. This scenario often contradicts the one provided by similar models with only delay-independent parameters. Key words: delay differential equations, stability switch, characteristic equations, stage structure, population models. AMS subject classifications: 34K18, 34K20, 92D25. PII: S0036141000376086.
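The θ-neuron described in the first abstract is simple enough to simulate directly. What follows is a hedged sketch, not code from the cited papers: an Euler-Maruyama integration of dθ/dt = (1 - cos θ) + (1 + cos θ)(b + noise), with a spike recorded each time the phase crosses π. The parameter values b and sigma are illustrative assumptions chosen to put the cell just below threshold, the regime in which the abstract reports high interspike-interval variability.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Box-Muller standard normal deviate. */
static double gauss(void) {
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void) {
    const double dt = 1e-4;      /* time step */
    const double T = 500.0;      /* total simulated time */
    const double b = -0.05;      /* baseline drive, just subthreshold (assumed) */
    const double sigma = 0.5;    /* noise strength (assumed) */
    double theta = -PI;          /* rest state of the phase model */
    double t_last = -1.0, sum = 0.0, sum2 = 0.0;
    long nisi = 0;

    for (double t = 0.0; t < T; t += dt) {
        /* Euler-Maruyama: deterministic part times dt, noise times sqrt(dt). */
        double noise = sigma * sqrt(dt) * gauss();
        theta += dt * ((1.0 - cos(theta)) + (1.0 + cos(theta)) * b)
               + (1.0 + cos(theta)) * noise;
        if (theta > PI) {        /* spike: wrap the phase and log the ISI */
            theta -= 2.0 * PI;
            if (t_last >= 0.0) {
                double isi = t - t_last;
                sum += isi; sum2 += isi * isi; nisi++;
            }
            t_last = t;
        }
    }
    if (nisi > 1) {
        double mean = sum / nisi;
        double var = sum2 / nisi - mean * mean;
        printf("ISIs: %ld  mean: %.4f  CV: %.3f\n", nisi, mean, sqrt(var) / mean);
    }
    return 0;
}

With b slightly negative the printed CV typically lands well above 0.6, while making b positive (an intrinsically oscillating cell) drives it down, which matches the qualitative claim of the abstract.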
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=8405059","timestamp":"2014-04-18T22:19:02Z","content_type":null,"content_length":"16898","record_id":"<urn:uuid:a0b52a92-f305-4715-b85c-933dd8687108>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
There are deer and peawingspans in a zoo. By counting heads they are 80. The number of their legs is 200. How many peawingspans are there? A. 20 B. 30 C. 50 D. 60
How many legs does a peawingspan have?
Can I put what I think it is, and then you correct me? What is a peawingspan anyway?
It's a bird.
Oh, so it has 2 legs then. Okay, sure.
Yes, then?
OK, so suppose there are 40 peawingspans and 40 deer. Deer have 4 legs, so 40 times 4 = 160; then the peawingspans have 2 legs and there are 40 of them. But that doesn't come out to 200, so where did I go wrong?
Just use a system of linear equations, with x = deer and y = peawingspans: x + y = 80 and 4x + 2y = 200. So 4x + 4y = 320; subtracting 4x + 2y = 200 gives 2y = 120, so y = 60.
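The elimination in the last reply generalizes to any heads-and-legs count. Here is a minimal hedged sketch in C, assuming H heads and L legs shared between four-legged and two-legged animals (the values below are the ones from the question):

#include <stdio.h>

int main(void) {
    int H = 80, L = 200;
    /* Eliminate x: 4(x + y) - (4x + 2y) = 2y, so y = (4H - L) / 2. */
    int y = (4 * H - L) / 2;   /* two-legged animals */
    int x = H - y;             /* four-legged animals */
    printf("deer: %d, birds: %d\n", x, y);   /* deer: 20, birds: 60 */
    return 0;
}

The key step is the identity in the comment: pretending every head has four legs overcounts by exactly two legs per bird, so 4H - L = 120 isolates 2y and gives y = 60 birds and x = 20 deer.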
{"url":"http://openstudy.com/updates/51798fd1e4b0c3f42e50c6de","timestamp":"2014-04-16T23:07:32Z","content_type":null,"content_length":"46988","record_id":"<urn:uuid:08496fa3-01bb-41f4-88f9-9c49576a204b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
One way of checking the consistency of our system of axioms is to construct ``models'' for which all the axioms are verified. Of course, these verifications again use results from some other area of mathematics, and the axioms of that area would also have to be verified to be consistent, and so on. This is the idea behind the impossibility of verifying consistency. Leaving philosophical studies behind, let us examine ``three dimensional projective geometry over a (skew-)field''. Exercise 4 Let K be a skew-field (i.e. K has addition, subtraction, multiplication and division, but multiplication does not necessarily commute). Points, lines and planes of P^3(K) are given by (left) linear subspaces of K^4 of rank 1, 2 and 3 respectively. The incidence relations are just the inclusions of subspaces. Show that this gives a system that satisfies the above Incidence axioms and the projective axiom of parallels. In fact, this even leads to another system which satisfies the usual axiom of parallels. Exercise 5 Let A^3(K) be the collection of all points, lines and planes in P^3(K) that are not contained in a fixed plane; show that this system satisfies the Incidence axioms and the usual axiom of parallels. The notion of between-ness can also be brought in with some more algebra. Definition 1 A positivity on K is a subset P of K so that: 1. P + P is contained in P and P . P is contained in P. 2. K = P ∪ {0} ∪ (-P) and this is a disjoint union. This conforms to the concept of positive numbers. Using this we can define the cone generated by a collection of vectors in K^4 as the collection of all non-negative linear combinations of these vectors. Exercise 6 Fix a three dimensional linear subspace V of K^4 (in other words a plane in P^3(K)) and a vector v not in V. There is a unique linear functional f on K^4 which has kernel V and takes the value 1 on v. We say a vector w is positive if f(w) lies in P. Every linear subspace in K^4 which does not lie in V is then determined by its positive half. Exercise 7 We say that a point A of A^3(K) lies between points B and C if the positive half of the linear subspace in K^4 corresponding to A is a positive linear combination of the positive halves of the linear subspaces corresponding to B and C respectively. Check that the axioms of order are satisfied on A^3(K) with this notion of between-ness. We have thus constructed a geometry satisfying all our axioms by making use of some algebra. Other geometries satisfying these axioms can also be constructed. Definition 2 A collection R of points in A^3(K) is said to be convex if, given that A and B are points in R and C in A^3(K) is between A and B, then C is also in R. Definition 3 A convex collection R of points is said to be open if for any point A in R and B in A^3(K), there is a point C lying between A and B in A^3(K) so that C is also in R. Exercise 8 Let R be an open convex collection of points in A^3(K). We denote by [R] the geometry for which the points are the points of R, and the lines and planes of [R] are the lines and planes of A^3(K) which meet R. The relations of incidence and order are inherited from A^3(K). Check that this geometry satisfies the axioms of incidence and order. A very important result (a sketch of proof is outlined in the next section) is that every geometry satisfying the axioms of incidence and order is of the type [R] for an open convex set R in A^3(K) for a suitable ordered field K. Hence, and this is important to note, the fact that arithmetic/algebraic problems arise in geometry does not immediately have anything to do with measurement! In particular, the relation between distance and coordinates can be much more complicated than that which will emerge from the Pythagoras theorem. Kapil H. Paranjape 2001-01-20
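As a concrete illustration of Definition 1 and Exercise 7 (a hedged example with one particular choice, K = R and P the usual positive reals; other positivities exist on other fields): $P + P \subseteq P$, $P \cdot P \subseteq P$ and $\mathbb{R} = P \cup \{0\} \cup (-P)$ hold for $P = \{ t \in \mathbb{R} : t > 0 \}$. Normalizing representatives so that $f = 1$ on them, take $B = (1,0,0,1)$ and $C = (0,1,0,1)$ in $K^4$. Any $A = \lambda B + \mu C$ with $\lambda, \mu \in P$, for instance $A = (\tfrac{1}{2}, \tfrac{1}{2}, 0, 1)$ with $\lambda = \mu = \tfrac{1}{2}$, has its positive half spanned by a positive combination of those of $B$ and $C$, so $A$ lies between $B$ and $C$ in the sense of Exercise 7, exactly as the segment picture suggests.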
{"url":"http://www.imsc.res.in/~kapil/geometry/euclid/node5.html","timestamp":"2014-04-16T20:05:47Z","content_type":null,"content_length":"8510","record_id":"<urn:uuid:f7b279c7-5a13-4337-8046-cc2fb9813fe5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
conf interval short question 1. The 3 ways to test a hypothesis are by finding the critical value, finding a p-value, or finding a confidence interval. Is that what is meant by the question? 2. $1-\alpha$ 3. Construct an interval and you will see it is the middle. 4. When $\sigma$ is unknown. 5. I think you need more information to determine the null and alternative hypotheses here.
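For item 4, the usual statement (a standard fact, added here for completeness) is that when $\sigma$ is unknown one replaces the normal quantile with a t quantile, giving the $(1-\alpha)$ confidence interval $\bar{x} \pm t_{\alpha/2,\,n-1}\,\frac{s}{\sqrt{n}}$. A level-$\alpha$ two-sided test then rejects a hypothesized value exactly when that value falls outside this interval, which is the connection between answers 1 and 2.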
{"url":"http://mathhelpforum.com/statistics/151842-conf-interval-short-question.html","timestamp":"2014-04-21T02:22:02Z","content_type":null,"content_length":"41161","record_id":"<urn:uuid:80126193-021e-47e4-ac58-9c94f5aa7fef>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
need some help with pi() func!!!!!! i need to build a basic program that prints the number pi.... output: 3.0000000000 what is wrong??

# include <stdio.h>

double long exp1(double base, unsigned long exp)
{
    double sum = 1;
    unsigned long i;
    for (i = 0; i < exp; i++)
        sum = sum * base;
    return sum;
}

double long pi(int k)
{
    double long sum = 0;
    if (k == 0)
        return sum;
    sum = ((1 / (exp1(16, k))) * ((4 / ((8 * k) + 1)) - (2 / ((8 * k) + 4)) - (1 / ((8 * k) + 5)) - (1 / ((8 * k) + 6)))) + pi(k - 1);
    return sum;
}

int main()
{
    int k = 0;
    printf("please enter how many k of pi do you like\n");
    scanf("%d", &k);
    if (k < 0)
        while (k < 0)
        {
            printf("please enter positive number\n");
            scanf("%d", &k);
        }
    printf("the number is %.10lf\n", pi(k));
    return 0;
}
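For a reader tracing the bug, a hedged sketch of a corrected version follows. The fixes assumed here (the thread shows no accepted answer): the parenthesized terms such as 4/((8*k)+1) are all-integer expressions, so they truncate to 0 for k >= 1 and must be made floating point; the k == 0 base case returns 0 and therefore silently drops the largest term of the Bailey-Borwein-Plouffe series (about 3.1333); and the printed format should match the chosen type. Function names like bbp_term are illustrative, not from the original post.

#include <stdio.h>
#include <math.h>

/* One term of the BBP series: 16^(-k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)). */
double bbp_term(int k) {
    return (1.0 / pow(16.0, k)) *
           (4.0 / (8 * k + 1) - 2.0 / (8 * k + 4)
          - 1.0 / (8 * k + 5) - 1.0 / (8 * k + 6));
}

double pi_approx(int kmax) {
    double sum = 0.0;
    for (int k = 0; k <= kmax; k++)   /* include k = 0: it contributes ~3.1333 */
        sum += bbp_term(k);
    return sum;
}

int main(void) {
    int k;
    printf("please enter how many k of pi you would like\n");
    if (scanf("%d", &k) != 1 || k < 0) {
        printf("please enter a positive number\n");
        return 1;
    }
    printf("the number is %.10f\n", pi_approx(k));
    return 0;
}

Summing k = 0 through 5 already agrees with pi to roughly 8 decimal places, since each successive term shrinks by about a factor of 16.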
{"url":"http://cboard.cprogramming.com/cplusplus-programming/144959-need-some-help-pi-func.html","timestamp":"2014-04-24T15:21:20Z","content_type":null,"content_length":"57355","record_id":"<urn:uuid:d9499a8a-b6cc-4515-9dbf-7edc08d95e8f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Pearson's correlation - quick questions August 6th 2008, 02:26 PM Pearson's correlation - quick questions I have this question related to finding Pearson's correlation coefficient. I already know how to find the coefficient, but then the question asks to find r^2. Do I just find the square of r? And then it says: what does this number mean? What is r^2 supposed to represent? Thanks in advance. August 6th 2008, 03:11 PM You just square r. Pearson's correlation tells you whether the regression model is a good one. Since $r^2 = 1 - \frac{SSE}{SST}$, it follows that $r^2 \leq 1$. If Pearson's correlation is close to 1, that means the model is a good one. If it is not close to 1, the model is not a good fit and you should use a different regression model.
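A hedged sketch of the computation itself, for anyone who wants to check a hand calculation; the five data points are made-up illustrations, not from the thread:

#include <stdio.h>
#include <math.h>

/* Pearson's r via the computational formula
   r = (n*Sxy - Sx*Sy) / sqrt((n*Sxx - Sx^2)(n*Syy - Sy^2)). */
double pearson_r(const double *x, const double *y, int n) {
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    double cov = n * sxy - sx * sy;
    double vx  = n * sxx - sx * sx;
    double vy  = n * syy - sy * sy;
    return cov / sqrt(vx * vy);
}

int main(void) {
    double x[] = {1, 2, 3, 4, 5};
    double y[] = {2.1, 3.9, 6.2, 8.0, 9.8};
    double r = pearson_r(x, y, 5);
    printf("r = %.4f, r^2 = %.4f\n", r, r * r);
    return 0;
}

Here r^2 reports the fraction of the variation in y explained by the linear fit, which is the interpretation the question is after.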
{"url":"http://mathhelpforum.com/advanced-statistics/45446-pearsons-correlation-quick-questions-print.html","timestamp":"2014-04-18T20:03:30Z","content_type":null,"content_length":"4315","record_id":"<urn:uuid:adfe148a-e48f-4027-9674-e129f7c329ab>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Contemporary Mathematics 2001; 321 pp; softcover Volume: 271 ISBN-10: 0-8218-2621-2 ISBN-13: 978-0-8218-2621-8 List Price: US$91 Member Price: US$72.80 Order Code: CONM/271 This volume presents the proceedings from the AMS-IMS-SIAM Summer Research Conference on Homotopy Methods in Algebraic Topology held at the University of Colorado (Boulder). The conference coincided with the sixtieth birthday of J. Peter May. An article is included reflecting his wide-ranging and influential contributions to the subject area. Other articles in the book discuss the ordinary, elliptic and real-oriented Adams spectral sequences, mapping class groups, configuration spaces, extended powers, operads, the telescope conjecture, \(p\)-compact groups, algebraic K theory, stable and unstable splittings, the calculus of functors, the \(E_{\infty}\) tensor product, and equivariant cohomology theories. The book offers a compendious source on modern aspects of homotopy theoretic methods in many algebraic settings. Graduate students and research mathematicians interested in algebraic topology. • A. Baker -- On the Adams E\(_2\)-term for elliptic cohomology • C.-F. Bödigheimer, F. R. Cohen, and M. D. Peim -- Mapping class groups and function spaces • R. R. Bruner -- Extended powers of manifolds and the Adams spectral sequence • W. G. Dwyer and C. W. Wilkerson -- Centers and Coxeter elements • B. Gray -- On the homotopy type of the loops on a 2-cell complex • J. P. C. Greenlees -- Rational \(SO(3)\)-equivariant cohomology theories • L. Hesselholt and I. Madsen -- On the \(K\)-theory of nilpotent endomorphisms • P. Hu -- The \(Ext^0\)-term of the real-oriented Adams-Novikov spectral sequence • K. Ishiguro -- Toral groups and classifying spaces of \(p\)-compact groups • N. J. Kuhn -- Stable splittings and the diagonal • R. McCarthy -- Dual calculus for functors to spectra • M. Mahowald, D. Ravenel, and P. Shick -- The triple loop space approach to the telescope conjecture • M. A. Mandell -- Flatness for the \(E_\infty\) tensor product • I. Moerdijk -- On the Connes-Kreimer construction of Hopf algebras
{"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-271","timestamp":"2014-04-21T10:29:21Z","content_type":null,"content_length":"15936","record_id":"<urn:uuid:ab6bc5a8-35b4-4118-89d3-d8faac30835c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof Nets for Multiplicative and Additive Linear Logic. Results 1 - 10 of 14.
- 2000. Cited by 139 (16 self): Abstract: This is a history of relevant and substructural logics, written for the Handbook of the History and Philosophy of Logic, edited by Dov Gabbay and John Woods.
- 1993. Cited by 41 (3 self): This paper is an overview of existing applications of Linear Logic (LL) to issues of computation. After a substantial introduction to LL, it discusses the implications of LL for functional programming, logic programming, concurrent and object-oriented programming, and some other applications of LL, like semantics of negation in LP, non-monotonic issues in AI planning, etc. Although the overview covers pretty much the state of the art in this area, by necessity many of the works are only mentioned and referenced, but not discussed in any considerable detail. The paper does not presuppose any previous exposure to LL, and is addressed more to computer scientists (probably with a theoretical inclination) than to logicians. The paper contains over 140 references, of which some 80 are about applications of LL.
- 1993. Cited by 40 (4 self): We introduce a new category of finite, fair games and winning strategies, and use it to provide a semantics for the multiplicative fragment of Linear Logic (MLL) in which formulae are interpreted as games, and proofs as winning strategies. This interpretation provides a categorical model of MLL which satisfies the property that every (history-free, uniformly) winning strategy is the denotation of a unique cut-free proof net. Abramsky and Jagadeesan first proved a result of this kind, and they refer to this property as full completeness. Our result differs from theirs in one important aspect: the mix-rule, which is not part of Girard's Linear Logic, is invalidated in our model. We achieve this sharper characterization by considering fair games. A finite, fair game is specified by the following data: the moves which Player can play, the moves which Opponent can play, and a collection of finite sequences of maximal (or terminal) positions of the game which are deemed to be fair.
- Theoretical Computer Science, 1994. Cited by 26 (12 self): We present a proof-theoretic foundation for automated deduction in linear logic. At first, we systematically study the permutability properties of the inference rules in this logical framework and exploit these to introduce an appropriate notion of forward and backward movement of an inference in a proof. Then we discuss the naturally arising question of redundancy reduction and investigate the possibilities of proof normalization, which depend on the proof search strategy and the fragment we consider. Thus, we can define the concept of normal proof that might be the basis of work on automatic proof construction and the design of logic programming languages based on linear logic.
- Lecture at International Centre for Mathematical Sciences, Workshop on Proof Theory and Algorithms, 2003. Cited by 25 (2 self): It is well-known that weakening and contraction cause naïve categorical models of the classical sequent calculus to collapse to Boolean lattices. Starting from a convenient formulation of the well-known categorical semantics of linear classical sequent proofs, we give models of weakening and contraction that do not collapse. Cut-reduction is interpreted by a partial order between morphisms. Our models make no commitment to any translation of classical logic into intuitionistic logic and distinguish non-deterministic choices of cut-elimination. We show soundness and completeness via initial models built from proof nets, and describe models built from sets and relations.
- In Symposium on Logical Foundations of Computer Science, 1994. Cited by 20 (11 self): In this paper, we investigate automated proof construction in classical linear logic (CLL) by giving logical foundations for the design of proof search strategies. We propose common theoretical foundations for top-down, bottom-up and mixed proof search procedures with a systematic formalization of strategy construction using the notions of immediate or chaining composition or decomposition, deduced from permutability properties and inference movements in a proof. Thus, we have logical bases for the design of proof strategies in CLL fragments, and we can then propose sketches for their design.
- Theoretical Computer Science, 1999. Cited by 12 (2 self): Linear logic (LL) is the logical foundation of some type-theoretic languages and also of environments for specification and theorem proving. In this paper, we analyse the relationships between the proof net notion of LL and the connection notion used for efficient proof search in different logics. Aiming at using proof nets as a tool for automated deduction in linear logic, we define a connection-based characterization of provability in Multiplicative Linear Logic (MLL). We show that an algorithm for proof net construction can be seen as a proof-search connection method. This central result is illustrated with a specific algorithm that is able to construct, for a provable MLL sequent, a set of connections, a proof net and a sequent proof. From these results, which we expect to extend to other LL fragments, we analyse what happens with the additive connectives of LL by tackling the additive fragment in a similar way.
- 1992. Cited by 11 (8 self): In this paper, we consider the multiplicative fragment of linear logic (MLL) from an automated deduction point of view. Before using this new logic for logic programming or for programming with proofs, a better comprehension of the proof construction process in this framework is necessary. We propose a new algorithm to construct automatically a proof net for a given sequent in MLL, with proofs of its termination, correctness and completeness. It can be seen as an implementation-oriented way to consider automated deduction in linear logic.
- 1996. Cited by 6 (1 self): We give a class of proof nets for Intuitionistic Linear Logic with the connectives ⊸ and !, prove a correctness criterion for them, and show that a games semantics can be directly derived from these nets, along with a full completeness theorem. It is well-known that games semantics is intimately connected to linear logic, but there is an important example of games semantics where the connection is far from clear, namely Hyland and Ong's [9] for the simply typed lambda-calculus and PCF. Although in this semantics the construction of the function space (intuitionistic implication) depends quite explicitly on the standard decomposition X ⇒ Y = !X ⊸ Y, it is not clear at all how one would be able to describe the semantics of these two linear operators independently. In particular, if one naively follows the spirit of the constructions given in that paper, it seems one would get that the natural morphism !X → !!X (comultiplication in the comonad) is an isomorphism. It follows from the theory of...
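Since several of these abstracts revolve around proof search in MLL, a one-line worked example may help fix ideas. This is a standard textbook derivation, not taken from any of the cited papers: in one-sided MLL, where $(A \otimes B)^\perp = A^\perp ⅋ B^\perp$, the sequent $\vdash A^\perp ⅋ B^\perp,\ A \otimes B$ is proved from two axioms by one tensor and one par rule: from $\vdash A^\perp, A$ and $\vdash B^\perp, B$ the $(\otimes)$ rule gives $\vdash A^\perp, B^\perp, A \otimes B$, and the $(⅋)$ rule then gives $\vdash A^\perp ⅋ B^\perp, A \otimes B$. The corresponding proof net has exactly the two axiom links $A^\perp$-$A$ and $B^\perp$-$B$; the connection-based methods described above search for precisely such a linking and then check a correctness criterion on it.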
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=706671","timestamp":"2014-04-21T13:17:52Z","content_type":null,"content_length":"36107","record_id":"<urn:uuid:01ed4fad-e5c5-409c-9b72-53e277eedfdb>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
PowerPoint Presentations Logarithms - University of Washington PPT Presentation Summary : Logarithms Tutorial to explain the nature of logarithms and their use in our courses. What is a Logarithm? The common or base-10 logarithm of a number is the power to ... Source : http://faculty.washington.edu/jackels/tutorials/Logs/Logs.PPT Logarithms - QRC Home Page PPT Presentation Summary : Using logarithms to solve for t in exponential growth problems * * Logarithms Use exponents as an alternative way to represent numbers * Logarithms Use exponents as ... Source : http://qrc.depaul.edu/pcallahan/PowerPoint/Logarithms.ppt Properties of Logarithms - LeTourneau University PPT Presentation Summary : Properties of Logarithms Lesson 5.5 Basic Properties of Logarithms Note box on page 408 of text Most used properties Using the Log Function for Solutions Consider ... Source : http://www.letu.edu/people/stevearmstrong/Math1203/Lesson%205.5.ppt Presentation Summary : Logarithms Tutorial Understanding the Log Function Where Did Logs Come From? The invention of logs in the early 1600s fueled the scientific revolution. Source : http://teachers.henrico.k12.va.us/math/HCPSAlgebra2/Documents/10-1/Logs.ppt Presentation Summary : Logarithms Logarithms Logarithms to various bases: red is to base e, green is to base 10, and purple is to base 1.7. Each tick on the axes is one unit. Source : http://revsworld.com/AP_SUPA_Chem/AP%20stuff/Math/Logarithms.ppt 8.4 – Properties of Logarithms PPT Presentation Summary : 8.4 – Properties of Logarithms Properties of Logarithms There are four basic properties of logarithms that we will be working with. For every case, the base of the ... Source : http://pleasanton.k12.ca.us/avhsweb/kiyoi/Powerpoints/IntAlgebra/8.4%20Properties%20of%20Logarithms.ppt Logarithms and Logarithmic Functions - University of Mississippi PPT Presentation Summary : Logarithms and Logarithmic Functions Coach Baughman November 20, 2003 Algebra II STAI 3 Objectives The students will identify a logarithmic function. Source : http://home.olemiss.edu/~cgbaughm/folio/ppt/logarithms.ppt Presentation Summary : 8.6 Natural Logarithms Natural Logs and “e” Start by graphing y=ex The function y=ex has an inverse called the Natural Logarithmic Function. Source : http://www.pleasanton.k12.ca.us/avhsweb/kiyoi/Powerpoints/IntAlgebra/8.6%20Natural%20Logarithms.ppt Properties of Logarithms - University of West Georgia PPT Presentation Summary : Properties of Logarithms The Product Rule Let b, M, and N be positive real numbers with b 1. logb (MN) = logb M + logb N The logarithm of a product is the sum of ... Source : http://www.westga.edu/~srivera/ca-fall05/3.3.ppt Introduction To Logarithms PPT Presentation Summary : Introduction To Logarithms Example 4 Solution: Now take it out of the logarithmic form and write it in exponential form. First, we write the problem with a variable. Source : http://teachers.henrico.k12.va.us/math/HCPSAlgebra2/Documents/10-2/2006_10_2.ppt Properties of Logarithms - Solon PPT Presentation Summary : Properties of Logarithms Today you will use the properties of logarithms to simplify expressions. Properties of Logarithms Product Property Quotient Property Powering ... Source : http://www.solonschools.org/accounts/ECarnes/61200974204_CorrectedPropertiesofLogarithms.ppt Common and Natural Logarithms - MNPS PPT Presentation Summary : Natural Logarithms THE NUMBER e… The mathematical constant e is the unique real number such that the value of the derivative (the slope of the tangent line) of the ...
Source : http://www.mnps.org/AssetFactory.aspx?did=66284 Presentation Summary : A History of Logarithms MAT 320 Instructor: Dr Sunil Chebolu By Maria Paduret How LOGARITHMS appeared? People didn't know how to multiply or divide big numbers. Source : http://math.illinoisstate.edu/schebol/teaching/320-10-files/Maria.ppt Exponential + Logarithmic Functions PPT Presentation Summary : Exponential & Logarithmic Functions Dr. Carol A. Marinas Table of Contents Exponential Functions Logarithmic Functions Converting between Exponents and Logarithms ... Source : http://mcs-cmarinas.barry.edu/net/ppt/MAT%20108/explog.ppt Logarithms - University of Pennsylvania PPT Presentation Summary : Logarithms Strings of bits There is only one possible zero-length sequence of bits There are two possible “sequences” of a single bit: 0, 1 There are four ... Source : http://www.cis.upenn.edu/~matuszek/cit594-2013/Lectures/logarithms.ppt 5.4 Common and Natural Logarithmic Functions - Teacher Notes PPT Presentation Summary : Natural Logarithms The functions f(x)=ex and g(x)=ln x are inverse functions. ln v = u if and only if eu = v Notice that the base is “understood” to be e. Source : http://teachernotes.paramus.k12.nj.us/garafalo/5.4%20Common%20and%20Natural%20Log%20Funtions.ppt Solving Logarithms - QRC Home Page PPT Presentation Summary : Solving Logarithms Solving for time (using logarithms) To solve for time, you can get an approximation by using Excel. To solve an exponential equation algebraically ... Source : http://qrc.depaul.edu/mworkman/AU08/Solving%20Logarithms.ppt Presentation Summary : Rules for Logarithms Rules for Logarithms These are the basic rules for logarithms: When taking the log of a product, you are allowed to take the sum of the logs of ... Source : http://www.camdenschools.org/webpages/fdimezzo/files/3%20-%20rules%20for%20logarithms.ppt Common and Natural Logarithms - TeachEngineering PPT Presentation Summary : Common and Natural Logarithms Common Logarithms A common logarithm has a base of 10. If there is no base given explicitly, it is common. You can easily find common ... Source : http://www.teachengineering.org/collection/van_/lessons/van_bmd_less3/common_and_natural_logarithms_w_examples.ppt Unit V: Logarithms Solving Exponential and Logarithmic Equations PPT Presentation Summary : Title: Unit V: Logarithms Solving Exponential and Logarithmic Equations Author: Administrator Last modified by: Julie Merrill Created Date: 11/8/2005 8:29:14 PM Source : http://iws.collin.edu/jmerrill/1314/1314%20ppt_files/4.4%20Exp-Logs-Solving.ppt 5.5 Properties and Laws of Logarithms - Teacher Notes PPT Presentation Summary : Title: 5.5 Properties and Laws of Logarithms Author: tgarofalo Last modified by: tgarofalo Created Date: 5/26/2009 2:41:14 PM Document presentation format Source : http://teachernotes.paramus.k12.nj.us/garafalo/5.5%20Properties%20and%20Laws%20of%20Logarithms.ppt Logarithmic Functions - University of West Georgia PPT Presentation Summary : Inverse Properties of Logarithms Properties of Common Logarithms Examples of Logarithmic Properties Properties of Natural Logarithms Examples of Natural Logarithmic ... Source : http://www.westga.edu/%7Esrivera/ca-fall05/3.2.ppt
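Several of the summaries above quote the product and power rules; here is a hedged numeric check in C (the base and arguments are arbitrary choices, not values from any of the slide decks):

#include <stdio.h>
#include <math.h>

int main(void) {
    double M = 8.0, N = 32.0, b = 2.0;
    /* Change of base: log_b(x) = ln(x) / ln(b). */
    double logb_M = log(M) / log(b);
    double logb_N = log(N) / log(b);
    double logb_MN = log(M * N) / log(b);
    printf("log_b(MN)          = %.6f\n", logb_MN);            /* 8 */
    printf("log_b M + log_b N  = %.6f\n", logb_M + logb_N);    /* 8 */
    printf("log_b(M^3)         = %.6f\n", log(pow(M, 3)) / log(b));
    printf("3 * log_b M        = %.6f\n", 3 * logb_M);
    return 0;
}

The change-of-base quotient log(M)/log(b) is also how presentations like the common/natural logarithm decks above move between base 10, base e, and any other base.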
{"url":"http://www.xpowerpoint.com/ppt/logarithms.html","timestamp":"2014-04-18T03:05:19Z","content_type":null,"content_length":"21570","record_id":"<urn:uuid:a2c9fbbd-557d-415d-9210-94d53d34da70>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
4.1: Multiplication of Decimals and Whole Numbers. Created by: CK-12. Practice: Multiplication of Decimals and Whole Numbers. Have you ever been to a science museum? Have you ever had to figure out the admission cost for a group of students? Multiplication is definitely involved if you have ever tackled such a problem. Mrs. Andersen is planning a field trip to the Science Museum for her sixth grade class. She wants to spend the entire day at the museum and plans to take all twenty-two students with her. She looks up some information on the internet and finds that a regular price ticket is $12.95 and a student ticket is $10.95. However, when Mrs. Andersen checks out the group rates, she finds that the students can go for $8.95 per ticket at the group student rate. Because she is a teacher, Mrs. Andersen gets to go for free. One chaperone receives free admission also. Mrs. Andersen has a total of three chaperones attending the field trip. The other two chaperones will need to pay the regular ticket price. The class has a budget to pay for the chaperones. Mrs. Andersen assigns Kyle the job of being Field Trip Manager. She hands him her figures and asks him to make up the permission slip. Kyle is glad to do it. When collection day comes, Kyle collects all of the money for the trip. Kyle has an idea how much he should collect; what should his estimate be? Given the student price, how much money does Kyle need to collect if all 22 students attend the field trip? What is the total cost for all of the students and for the two chaperones? While Kyle is adding up the money, you have the opportunity to figure out the answers to these two questions. You will need to use information about multiplying decimals and whole numbers. Pay close attention during this Concept, and see if your answers match Kyle's by the end of the Concept. In this Concept you will be learning about how to multiply decimals and whole numbers together. Let's think about what it means to multiply. Multiplication is a shortcut for repeated addition. We think about multiplication and we think about groups of numbers. $4 \times 3 = 12$. Here we are saying that we have four groups of three that we are counting, or we have three groups of four. It doesn't matter which way we say it, because we still end up with twelve. When we multiply decimals and whole numbers, we need to think of it as groups too. 2(.25) = _____ Here we are multiplying two times twenty-five hundredths. Remember that when we see a number outside of the parentheses, the operation is multiplication. We can think of this as two groups of twenty-five hundredths. Let's look at what a picture of this would look like. Our answer is .50. This is one way to multiply decimals and whole numbers; however, we can't always use a drawing. It just isn't practical. How can we multiply decimals and whole numbers without using a drawing? We can multiply a decimal and a whole number just like we would two whole numbers. First, we ignore the decimal point and just multiply. Then, we put the decimal point in the product by counting the correct number of places. 4(1.25) = _____ Let's start by multiplying just like we would if these were two whole numbers. We take the four and multiply it by each digit in the top number. $\begin{array}{r} 125 \\ \underline{\times \quad 4} \\ 500 \end{array}$ But wait! Our work isn't finished yet. We need to add the decimal point into the product. There were two decimal places in our original problem. There should be two decimal places in our product.
$5.00$ (we count in two places from right to left in our product). This is our final answer. Here are a few for you to try. Multiply them just as you would whole numbers and then put in the decimal point. Example A Solution: 13.56 Example B Solution: 11.7 Example C Solution: 24.92 Now back to Kyle and the trip to the science museum! Now, let's think about the estimate. About how much money should Kyle collect? The first step in working this out is to write an equation: 22 students at $8.95 per ticket = 22(8.95). Kyle wants an estimate, so we can round 8.95 to 9. Now let's multiply: 22(9) = $198.00. Now that Kyle has an estimate, he can actually work on collecting the money and counting it. Once he has collected and counted all the money, we will be able to see if his original estimate was reasonable or not. One week before the trip, Kyle collects $8.95 from 22 students. He multiplies his results: 22(8.95) = $196.90. Kyle can see that his original estimate was reasonable. He is excited; the estimation worked! Next, Kyle figures out the cost of the chaperones. There are two chaperones who each pay the regular price, which is $12.95: 2(12.95) = 25.90. Finally, Kyle adds up the total: 196.90 + 25.90 = $222.80. He gives his arithmetic and money to Mrs. Andersen. She is very pleased. The students are off to the Science Museum! Here are the vocabulary words in this Concept. Multiplication: a shortcut for addition; means working with groups of numbers. Product: the answer from a multiplication problem. Estimate: an approximate answer, often found through rounding. Guided Practice Here is one for you to try on your own. Nine friends decided to go to a movie on Friday night. They each paid $8.50 for admission. How much money did they spend in all? To solve this problem, we can write a multiplication problem: 9(8.50). Our answer is $76.50. Video Review Here are videos for review. Khan Academy Multiplying Decimals 2 Multiplying Decimals by Whole Numbers Directions: Multiply to find a product. 1. 5(1.24) = _____ 2. 6(7.81) = _____ 3. 7(9.3) = _____ 4. 8(1.45) = _____ 5. 9(12.34) = _____ 6. 2(3.56) = _____ 7. 6(7.12) = _____ 8. 3(4.2) = _____ 9. 5(2.4) = _____ 10. 6(3.521) = _____ 11. 2(3.222) = _____ 12. 3(4.223) = _____ 13. 4(12.34) = _____ 14. 5(12.45) = _____ 15. 3(143.12) = _____ 16. 4(13.672) = _____ 17. 2(19.901) = _____ 18. 3(67.321) = _____
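The lesson's method, ignore the decimal point, multiply as whole numbers, then count the decimal places back in, translates directly into integer arithmetic. A hedged sketch in C, using the lesson's own 4(1.25) example:

#include <stdio.h>

int main(void) {
    int whole = 4;
    int digits = 125;   /* 1.25 with the decimal point removed */
    int places = 2;     /* number of decimal places in 1.25 */
    int product = whole * digits;            /* 500, just like the column work */
    int scale = 1;
    for (int i = 0; i < places; i++) scale *= 10;
    /* Reinsert the point by splitting into integer and fractional parts. */
    printf("%d.%02d\n", product / scale, product % scale);   /* prints 5.00 */
    return 0;
}

This mirrors the rule in the text exactly: two decimal places in the factors means two decimal places counted off in the product.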
{"url":"http://www.ck12.org/book/CK-12-Concept-Middle-School-Math---Grade-6/r2/section/4.1/","timestamp":"2014-04-18T14:42:39Z","content_type":null,"content_length":"124581","record_id":"<urn:uuid:c2115308-ef5b-4b03-a5ef-b23e11792dc5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Are You Allowed To Review Your Dosage Calculations - pg.2 Are You Allowed To Review Your Dosage Calculations - page 2 by Itspossible I was just wondering if other schools allow their nursing students to review their dosage... Read More 1. Mar 7, '13: Thank you all so much! Yes, I have two dosage calculation textbooks: Dosage Calculations by Pickar and Calculate with Confidence by Morris. I had my last dosage calc today; I needed 10 to pass and I made 9, but for this last one she provided us with an answer sheet so that we can keep the questions. I wish she had started this idea earlier, but it's too late now. Talking to anybody other than her does not do any good, because this problem has been going on for a very long time, and I remember last semester she stood in front of the class and said that nobody can change her. So it's up to me to decide whether to look for another school or to retake it next semester. If anybody knows of any school, please let me know. Once again, a big thank you to you all. 2. Mar 7, '13: Esme12, I know this website, but thanks for the help. 3. Dosage calculations are an essential part of being any kind of nurse, not just a pediatric nurse. One simply cannot safely practice as a nurse unless they are proficient in dosage calculations. An error as simple as moving a decimal point can prove fatal. Frankly, I'm surprised that pediatrics was the first time you encountered dosage calculations, and that they were not introduced prior to your final year of nursing school. In my program, dosage calculations were introduced in our first semester, and each subsequent class/clinical began with a dosage calculation test; students were required to get 100% in order to continue in the program. I highly recommend that you practice lots of these dosage calculations and consider a tutor and a remedial math course before continuing with nursing school. Best of luck to you. 4. Mar 9, '13 by Ashley: This is not my first time being introduced to dosage calc. Just like at every other school, dosage calc was introduced to us in our first semester also, and yes, you have to pass dosage each semester with 100% in order to get to the next level. I have been passing it with that 100%, and that's why I'm in my 4th semester right now. Anyways, thanks. 5. Mar 9, '13: Don't let dosage calcs beat you. You just need practice so that you learn to recognize what information you are given, and how you use it in your calculation. We had this book (info's at the end of this post) and it has good detailed examples of the various methods of calculating dosages. There is a chapter for peds calculations, and elsewhere it shows dimensional analysis. There is a copy going for $10 + $5 shipping (roughly $15) on eBay, and I gave the number below. But dosage calculations are like any other kind of math: a system. Once you understand the system, it works pretty much the same way all the time. The way you get good at them is to read the book, then re-read it, and do many practice problems. Get very familiar with the various units of the English and SI systems, and how to convert back and forth. Honestly, it is cut-and-dried. In peds you use weight or you use body surface area. And all the time, you have to check your answer to make sure that it makes sense: Is the dosage level SAFE, does it meet the 5 (or 6) rights, is the volume you are injecting okay for that site, etc.
There are old posts on here by a nurse called Dayton. She passed away a few years ago, but she went through loads of calculations and pharma problems here, helping students. You might search for her posts; I think they are still on the board. Pharmacology: A Nursing Process Approach by Joyce LeFever Kee, Evelyn R...., 6th ed., eBay item number 181097236031, ISBN-10 1416046631, Format: Trade Paper, ISBN-13 9781416046639, Publication Year 2008. (There are one or more newer editions, but that one is cheap due to reaching the 5-year time limit, so it's "outdated" as a source; the dosage calculations never change, though. Same methods in 2008 as now. LOL) 6. Mar 10, '13 by Streamline2010: I really appreciate all the info! 7. Mar 11, '13 by Esme12, Asst. Admin, quoting Streamline2010 ("Don't let dosage calcs beat you... There are old posts on here by a nurse called Dayton... You might search for her posts. I think they are still on the board."): That contributor was DAYTONITE. She passed away in 2010; she is dearly missed. All of her contributions are here. 8. Do you have an academic counselor? If so, can they get hold of your tests and go over them with you? At this point we don't even know that you have done anything wrong! You need to see the test and check the answers yourself.
{"url":"http://allnurses.com/nursing-student-assistance/you-allow-review-819713-page2.html","timestamp":"2014-04-18T11:32:02Z","content_type":null,"content_length":"41615","record_id":"<urn:uuid:23f5af47-3af0-40e2-ad83-72735701a519>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Flossmoor Math Tutor Find a Flossmoor Math Tutor ...I have achieved a very strong math and science background from my collegiate studies. One of my greatest strengths is walking into an unknown environment and fostering a productive and eager classroom. To accomplish this, I deliver lessons in an exciting fashion, thereby capturing students' attention and imagination, and sparking the desire to learn. 12 Subjects: including algebra 1, prealgebra, trigonometry, English ...I have worked with them in the classroom, individually, and in groups. Over the years I have become familiar with most types of disabilities. I have worked with teenagers with ADD/ADHD for the past 10 years. 24 Subjects: including calculus, trigonometry, differential equations, linear algebra ...I am certified and have experience teaching general and special education students in all subjects for grades K-6. These subjects include Reading, English, Math, Science and Social Studies. I have been teaching these grade levels, as well as 7th and 8th grade, for the past 3 years. 13 Subjects: including algebra 1, prealgebra, English, reading ...Over the past nine years, I've committed to volunteering and working closely with youth. This commitment is not just for enjoyment, but a dedication to making a change. This dedication has earned me many honors, awards and memories. 4 Subjects: including algebra 2, probability, algebra 1, prealgebra ...I look forward to hearing from you and seeing how I can help you grow in your knowledge of a subject area! I have been an ESL teacher for students in Grades K-8 in a public school for the past 5 years. Before that, I was a Bilingual 1st grade teacher in a public school for 2 years. I am certified to teach students from Grades K-9 in the State of Illinois. 33 Subjects: including prealgebra, chemistry, reading, algebra 1
{"url":"http://www.purplemath.com/flossmoor_il_math_tutors.php","timestamp":"2014-04-18T22:04:16Z","content_type":null,"content_length":"23815","record_id":"<urn:uuid:f3fc7c34-c07c-4a2c-b64e-2309f24d1e90>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Splitting Lemma for 2-Connected Graphs. ISRN Discrete Mathematics, Volume 2012 (2012), Article ID 850538, 7 pages. Research Article. Y. M. Borse, Department of Mathematics, University of Pune, Pune 411007, India. Received 1 October 2012; Accepted 17 October 2012. Academic Editors: S. Bozapalidis, E. Kranakis, and W. Wang. Copyright © 2012 Y. M. Borse. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Abstract. Using a splitting operation and a splitting lemma for connected graphs, Fleischner characterized connected Eulerian graphs. In this paper, we obtain a splitting lemma for 2-connected graphs and characterize 2-connected Eulerian graphs. As a consequence, we characterize connected graphic Eulerian matroids. 1. Introduction. Fleischner [1] introduced a splitting operation to characterize Eulerian graphs as follows. Let $G$ be a connected graph and $v \in V(G)$ with $d(v) \geq 3$. If $x = vv_1$ and $y = vv_2$ are two edges incident with $v$, then splitting away the pair $\{x, y\}$ of edges from the vertex $v$ results in a new graph $G_{x,y}$ obtained from $G$ by deleting the edges $x$ and $y$, and adding a new vertex $v_{x,y}$ adjacent to $v_1$ and $v_2$ (see Figure 1). The following splitting lemma established by Fleischner [1] has been widely recognized as a useful tool in graph theory. Splitting Lemma (see [1, page III-29]). Let $G$ be a connected bridgeless graph. Suppose $v \in V(G)$ with $d(v) \geq 3$ such that $e_1$, $e_2$, and $e_3$ are edges incident with $v$. Form the graphs $G_1$ and $G_2$ by splitting away the pairs $\{e_1, e_2\}$ and $\{e_1, e_3\}$, respectively, and assume $e_2$ and $e_3$ belong to different blocks if $v$ is a cut vertex of $G$. Then either $G_1$ or $G_2$ is connected and bridgeless. This lemma is used to obtain the following characterization of Eulerian graphs. Theorem 1.2 (see [1, page V-6]). A graph $G$ has an Eulerian trail if and only if $G$ can be transformed into a cycle through repeated applications of the splitting procedure on vertices of a degree exceeding 2. Moreover, the number of Eulerian trails of $G$ equals the number of different labeled cycles into which $G$ can be transformed this way. Thus a connected graph $G$ is Eulerian if and only if there exists a sequence of connected graphs $G = G_0, G_1, \ldots, G_k$ such that $G_k$ is a cycle and $G_{i+1}$ is obtained from $G_i$ by applying the splitting operation once. The splitting operation may not preserve 2-connectedness of the graph. Consider the graph $G$ of Figure 2. It is 2-connected, but the graph $G_{x,y}$ is not 2-connected for any two adjacent edges $x$ and $y$. We obtain the splitting lemma for 2-connected graphs as follows. Theorem 1.3. Let $G$ be a 2-connected graph and let $v$ be a vertex of $G$ with $d(v) \geq 4$. Then either $G_{x,y}$ is 2-connected for some pair $\{x, y\}$ of edges incident with $v$, or, for any pair $\{x, y\}$ of edges incident with $v$, there is another pair $\{u, w\}$ of adjacent edges of $G_{x,y}$ such that $(G_{x,y})_{u,w}$ is 2-connected. The next theorem is a consequence of the above result. Theorem 1.4. Let $G$ be a 2-connected graph. Then $G$ is Eulerian if and only if there exists a sequence of 2-connected graphs $G = G_0, G_1, \ldots, G_k$ such that $G_k$ is a cycle and $G_{i+1}$ is obtained from $G_i$ by applying the splitting operation once or twice for $i = 0, 1, \ldots, k-1$. A matroid is Eulerian if its ground set can be partitioned into disjoint circuits, and it is connected if any pair of its elements is contained in a circuit. It is clear that an Eulerian matroid may not be connected. A matroid is graphic if it is isomorphic to the cycle matroid of a graph. For matroid concepts and terminology, we refer to Oxley [2]. Raghunathan et al. [3] generalized the splitting operation of graphs to binary matroids and characterized Eulerian matroids in terms of this operation.
We characterize connected Eulerian graphic matroids. In Section 2, we prove Theorems 1.3 and 1.4. The matroid extension is considered in Section 3. 2. Eulerian 2-Connected Graphs. A block of a connected graph $G$ is a pendant block if it contains exactly one cut vertex of $G$. For an edge $x$, we denote the set of end vertices of $x$ by $V(x)$. For a vertex $v$ of $G$, let $E(v)$ denote the set of edges of $G$ which are incident with $v$; that is, $E(v) = \{ x \in E(G) : v \in V(x) \}$. Raghunathan et al. [3] characterized the circuits of the graph $G_{x,y}$ in terms of circuits of $G$ as follows. Lemma 2.1 (see [3]). Let $G$ be a graph and let $\{x, y\}$ be a pair of adjacent edges of $G$. Then a subset $C$ of edges of the graph $G_{x,y}$ is a circuit in $G_{x,y}$ if and only if $C$ satisfies one of the following conditions: (i) $C$ is a circuit in $G$ containing both $x$ and $y$; (ii) $C$ is a circuit in $G$ containing neither $x$ nor $y$; (iii) $C = C_1 \cup C_2$, where $C_1$ and $C_2$ are edge-disjoint circuits of $G$ with $x \in C_1$ and $y \in C_2$, and $C$ does not contain a circuit satisfying either (i) or (ii) above. Lemma 2.2. Let $G$ be a 2-connected graph and let $v$ be a vertex of $G$ with $d(v) \geq 4$ and $x, y \in E(v)$ such that the graph $G_{x,y}$ is not 2-connected. Then $G_{x,y}$ is connected and has exactly two pendant blocks. Further, one pendant block contains $x$, and the other pendant block contains $y$. Proof. The proof is straightforward (see Figure 3). Lemma 2.3. Let $G$ be a 2-connected graph and let $v$ be a vertex of $G$ with $d(v) \geq 4$ such that $G_{x,y}$ is not 2-connected for all $x, y \in E(v)$. Then, for a given pair $x, y \in E(v)$, the graph $G_{x,y}$ is connected and has one cut vertex and two blocks. Proof. Let $\{x, y\}$ be a pair of edges incident with $v$. By Lemma 2.2, $G_{x,y}$ is connected and has exactly two pendant blocks, say $B_1$ and $B_2$. We may assume that $B_1$ contains $x$ and $B_2$ contains $y$. As $d(v) \geq 4$, we can choose two edges $u$, $w$ from $E(v) - \{x, y\}$. Choose two paths in $G$, each of which, together with a pair of the edges at $v$, corresponds to a cycle in $G$. By Lemma 2.1, these cycles are preserved in the graph $G_{x,y}$. Therefore each of them is contained in a block of $G_{x,y}$. By Lemma 2.2, $G_{x,y}$ has two pendant blocks, one containing the edge $x$ and the other containing the edge $y$, and the two cycles belong to these two different pendant blocks of $G_{x,y}$. Hence $B_1$ and $B_2$ share at most one vertex of $G_{x,y}$. However, $B_1$ and $B_2$ share all cut vertices of $G_{x,y}$. This implies that $G_{x,y}$ has exactly one cut vertex. Therefore, by Lemma 2.2, $G_{x,y}$ is connected and has exactly two blocks. Lemma 2.4. Let $G$ and $v$ be as stated in Lemma 2.3. Then there exists a vertex $z$ of $G$ such that $z$ is the cut vertex of $G_{x,y}$ for all $x, y \in E(v)$. Proof. Let $x, y \in E(v)$. By Lemma 2.3, $G_{x,y}$ is connected and has one cut vertex, say $z$. Let $u, w \in E(v)$ be another pair. Then, by Lemma 2.3, $G_{u,w}$ is also connected and has two blocks and one cut vertex. It suffices to prove that $z$ is the cut vertex of $G_{u,w}$. If the two pairs coincide, then there is nothing to prove. Suppose they differ; we may then assume that $u \notin \{x, y\}$. By Lemma 2.3, $G_{u,w}$ is connected and has two blocks, say $B_1$ and $B_2$. By Lemma 2.2, we may assume that $B_1$ contains $u$ and $B_2$ contains $w$. Choose a further edge of $G$ incident with $v$, and choose paths in $G$ joining the relevant end vertices while avoiding $v$. Each such path contains all cut vertices of $G_{x,y}$; therefore $z$ is a common vertex of these paths. Further, each path, completed through $v$, corresponds to a cycle in $G$. By Lemma 2.1, these cycles are preserved in the graph $G_{u,w}$ and hence are contained in blocks of $G_{u,w}$; therefore each of these blocks contains $z$. Thus $z$ is a common vertex of $B_1$ and $B_2$. This implies that $z$ is a cut vertex of $G_{u,w}$. By Lemma 2.3, $z$ is the only cut vertex of $G_{u,w}$. Finally, choose paths in $G$ avoiding $v$ so that each contains the cut vertex and, further, corresponds to a cycle in $G$. By Lemma 2.1, each corresponds to a cycle of the graph $G_{u,w}$ and hence is contained in a block of $G_{u,w}$, and these cycles are contained in different blocks of $G_{u,w}$. By Lemma 2.2, one block of $G_{u,w}$ contains the edge $u$, and the other block contains the remaining edges of $G$ that are incident with $v$. As $z$ is a common vertex of the two blocks, it is a cut vertex of $G_{u,w}$.
Thus $z$ is the cut vertex of $G_{u,w}$. Lemma 2.5. Let $G$ be a 2-connected graph and let $v$ be a vertex of $G$ with the set of neighbours $N(v) = \{v_1, v_2, \ldots, v_n\}$, where $n \geq 4$. Suppose $G_{x,y}$ is not 2-connected for all $x, y \in E(v)$. Then there exists a vertex $z$ in $G$ such that $z \in V(P_1) \cap V(P_2)$ for any $v_1$-$v_2$ path $P_1$ and $v_3$-$v_4$ path $P_2$ in $G - v$. Proof. By Lemma 2.4, there exists a vertex $z$ in $G$ such that it is the cut vertex of $G_{x,y}$ for all $x, y \in E(v)$. Let $P_1$ be a $v_1$-$v_2$ path and $P_2$ be a $v_3$-$v_4$ path in $G - v$. We prove that $z$ lies on both paths. If $P_1$ or $P_2$ is a trivial graph, then there is nothing to prove. Assume that both are nontrivial. Without loss of generality, we may assume that $x = vv_1$ and $y = vv_2$, and let $u = vv_3$ and $w = vv_4$. Then $G_{u,w}$ is connected and has two blocks, say $B_1$ and $B_2$ (see Figure 4). By Lemma 2.2, we may assume that $B_1$ contains $u$ and $B_2$ contains $w$. Since $z$ is the cut vertex of $G_{u,w}$, the paths are contained in $G - v$ together with its blocks. Construct auxiliary paths, one containing the edge $u$ but avoiding $w$, and one containing $w$ but avoiding $u$, and combine each of them with one of $P_1$ and $P_2$. Then each combination corresponds to a cycle in $G$; further, one of them contains $x$ and $u$, and the other contains $y$ and $w$. Therefore, by Lemma 2.1, they correspond to cycles in $G_{u,w}$. By Lemmas 2.2 and 2.3, $G_{u,w}$ has exactly two blocks, one of which contains the first cycle while the other contains the second. Hence the two blocks can share at most one vertex. This implies that $P_1$ and $P_2$ can share at most one vertex. Thus $V(P_1) \cap V(P_2) = \{z\}$. Proof of Theorem 1.3. Let $G$ be a 2-connected graph and let $v$ be a vertex of $G$ with $d(v) \geq 4$. Suppose $G_{x,y}$ is not 2-connected for every pair $\{x, y\}$ of edges incident with $v$. Let $N(v)$ be the set of neighbours of $v$, and let $x$ and $y$ be any two edges of $G$ incident with $v$. We may assume that $x = vv_1$ and $y = vv_2$. By Lemma 2.5, there exists a vertex $z$ in $G$ such that $V(P_1) \cap V(P_2) = \{z\}$ for any $P_1$ and $P_2$, where $P_1$ is a $v_1$-$v_2$ path and $P_2$ is a $v_3$-$v_4$ path in $G - v$ (see Figure 5). It is easy to see that $z \neq v$. If $z$ coincides with an end vertex of one of these paths, then that path is the trivial graph containing only the vertex $z$, and we set the corresponding edge at $v$ as one of the pair. In the other cases, $P_1$ and $P_2$ are nontrivial graphs, and hence we can take $u$ and $w$ to be edges of $G_{x,y}$ adjacent at $z$. It is easy to see that $(G_{x,y})_{u,w}$ is 2-connected. Now, we prove Theorem 1.4. Proof of Theorem 1.4. Let $G$ be an Eulerian 2-connected graph. Suppose $G$ is not a cycle. Then $G$ has a vertex $v$ of degree at least 4. By Theorem 1.3, we get a pair $\{x, y\}$ of edges incident with $v$ such that either $G_{x,y}$ is 2-connected, or $(G_{x,y})_{u,w}$ is 2-connected for some pair $\{u, w\}$ of edges of $G_{x,y}$ having a common vertex other than $v$. Denote this new 2-connected graph by $H$. If $H = G_{x,y}$, then $d_H(v) = d_G(v) - 2$ and $d_H(s) = d_G(s)$ for any other vertex $s$. If $H = (G_{x,y})_{u,w}$, then $d_H(v) = d_G(v) - 2$, $d_H(t) = d_G(t) - 2$, and $d_H(s) = d_G(s)$ for any other vertex $s$, where $t$ is the common vertex of $u$ and $w$ other than $v$. Further, the new vertices of $H$ that are created in the splitting procedure have degree two. Obviously, $H$ is Eulerian. If $H$ is not a cycle, then we obtain a 2-connected Eulerian graph from $H$ by applying the splitting operation once (or twice), which results in reducing the degree of a vertex (or of two vertices) of $H$ by 2. By repeating the same procedure, through a sequence of single or double splitting operations performed in such a way that at each step the resulting graph is still 2-connected, one finally arrives at a cycle, which corresponds to an Eulerian trail of $G$. The converse is obvious. 3. Eulerian 2-Connected Matroids. In this section, we extend Theorem 1.4 to connected Eulerian matroids. A matroid is Eulerian if its ground set can be partitioned into disjoint circuits, and it is connected if any pair of its elements is contained in a circuit. A matroid is graphic if it is isomorphic to the cycle matroid of a graph. Raghunathan et al. [3] generalized the splitting operation of graphs to binary matroids and characterized Eulerian matroids in terms of this operation. In this section, we characterize connected Eulerian graphic matroids. Definition 3.1 (see [3]). Let $M$ be a binary matroid with a matrix representation $A$ over GF(2), and suppose $x, y \in E(M)$. Let $A_{x,y}$ be the matrix obtained from $A$ by adjoining the row that is zero everywhere except for the entries of 1 in the columns labeled by $x$ and $y$. The splitting matroid $M_{x,y}$ is defined to be the vector matroid of the matrix $A_{x,y}$. The transition from $M$ to $M_{x,y}$ is called a splitting operation. The splitting operation for binary matroids is also studied in [3-6].
We need the following three results.

Lemma 3.2 (see [3]). If M(G) denotes the circuit matroid of a graph G, then (M(G))_{x,y} = M(G_{x,y}) for a pair x, y of adjacent edges in a graph G.

Lemma 3.3 (see [3]). Let M be a binary matroid and x, y in E(M). Then M is Eulerian if and only if M_{x,y} is Eulerian.

Theorem 3.4 (see [2, page 127]). Let G be a loopless graph without isolated vertices. If G has at least three vertices, then M(G) is a connected matroid if and only if G is a 2-connected graph.

We obtain the following characterization of connected Eulerian graphic matroids.

Theorem 3.5. Let M be a connected graphic matroid. Then M is Eulerian if and only if it can be transformed into a circuit through a sequence of connected graphic matroids M = M_0, M_1, ..., M_k such that M_i is obtained from M_{i-1} by applying the splitting operation once or twice, for i = 1, 2, ..., k.

Proof. Let M be a connected graphic matroid. Then M is isomorphic to the cycle matroid M(G) of some graph G. In view of Theorem 3.4, we may assume that G is 2-connected. Suppose M is Eulerian. Then the graph G is Eulerian. By Theorem 1.4, there is a sequence of 2-connected graphs G = G_0, G_1, ..., G_k such that G_k is a cycle, and G_i is obtained from G_{i-1} by applying the splitting operation once or twice for i = 1, 2, ..., k. Let M_i = M(G_i) for i = 0, 1, ..., k. By Theorem 3.4, each M_i is connected. It follows from Lemma 3.2 that if G_i is obtained from G_{i-1} by applying the splitting operation once or twice, then M_i is obtained from M_{i-1} by applying the splitting operation once or twice, respectively. Further, by Lemma 3.3, M_i is Eulerian for i = 1, 2, ..., k.

Conversely, suppose there exists a sequence of connected graphic matroids M = M_0, M_1, ..., M_k with M_k a circuit, where M_i is obtained from M_{i-1} by applying the splitting operation once or twice for i = 1, 2, ..., k. Since M_k is Eulerian, by Lemma 3.3, M_{k-1} is Eulerian. By repeated applications of Lemma 3.3, we see that M_i is Eulerian for each i. Thus M is Eulerian.

The author would like to thank the referees for their valuable suggestions. This work was supported by the University of Pune under the BCUD project scheme.

References
1. H. Fleischner, Eulerian Graphs and Related Topics, vol. 1, part 1, North-Holland, Amsterdam, The Netherlands, 1990.
2. J. G. Oxley, Matroid Theory, Oxford University Press, Oxford, UK, 1992.
3. T. T. Raghunathan, M. M. Shikare, and B. N. Waphare, "Splitting in a binary matroid," Discrete Mathematics, vol. 184, no. 1-3, pp. 267-271, 1998.
4. Y. M. Borse and S. B. Dhotre, "On connectivity of splitting matroids," Southeast Asian Bulletin of Mathematics, vol. 36, no. 1, pp. 17-21, 2012.
5. A. D. Mills, "On the cocircuits of a splitting matroid," Ars Combinatoria, vol. 89, pp. 243-253, 2008.
6. M. M. Shikare and G. Azadi, "Determination of the bases of a splitting matroid," European Journal of Combinatorics, vol. 24, no. 1, pp. 45-52, 2003.
{"url":"http://www.hindawi.com/journals/isrn.discrete.mathematics/2012/850538/","timestamp":"2014-04-19T05:58:52Z","content_type":null,"content_length":"323827","record_id":"<urn:uuid:ab5f6463-4cd3-4235-bfc9-1baec0a1d20e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Scattering and Sequestering of Blow-Up Moduli in Local String Models
ArXiv (2011)
We study the scattering and sequestering of blow-up fields - either local to or distant from a visible matter sector - through a CFT computation of the dependence of physical Yukawa couplings on the blow-up moduli. For a visible sector of D3-branes on orbifold singularities we compute the disk correlator < \tau_s^{(1)} \tau_s^{(2)} ... \tau_s^{(n)} \psi \psi \phi > between orbifold blow-up moduli and matter Yukawa couplings. For n = 1 we determine the full quantum and classical correlator. This result has the correct factorisation onto lower 3-point functions and also passes numerous other consistency checks. For n > 1 we show that the structure of picture-changing applied to the twist operators establishes the sequestering of distant blow-up moduli at disk level to all orders in \alpha'. We explain how these results are relevant to suppressing soft terms to scales parametrically below the gravitino mass. By giving vevs to the blow-up fields we can move into the smooth limit and thereby derive CFT results for the smooth Swiss-cheese Calabi-Yaus that appear in the Large Volume Scenario.
{"url":"http://www2.physics.ox.ac.uk/research/particle-theory/publications/205555","timestamp":"2014-04-16T18:58:02Z","content_type":null,"content_length":"12228","record_id":"<urn:uuid:af4efded-c0bb-4058-a229-f0e87d2b2399>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
101 Things Every Six Sigma Black Belt Should Know
by Thomas Pyzdek, Copyright © 2003

1. In general, a Six Sigma Black Belt should be quantitatively oriented.
2. With minimal guidance, the Six Sigma Black Belt should be able to use data to convert broad generalizations into actionable goals.
3. The Six Sigma Black Belt should be able to make the business case for attempting to accomplish these goals.
4. The Six Sigma Black Belt should be able to develop detailed plans for achieving these goals.
5. The Six Sigma Black Belt should be able to measure progress towards the goals in terms meaningful to customers and leaders.
6. The Six Sigma Black Belt should know how to establish control systems for maintaining the gains achieved through Six Sigma.
7. The Six Sigma Black Belt should understand and be able to communicate the rationale for continuous improvement, even after initial goals have been accomplished.
8. The Six Sigma Black Belt should be familiar with research that quantifies the benefits firms have obtained from Six Sigma.
9. The Six Sigma Black Belt should know or be able to find the PPM rates associated with different sigma levels (e.g., Six Sigma = 3.4 PPM).
10. The Six Sigma Black Belt should know the approximate relative cost of poor quality associated with various sigma levels (e.g., three sigma firms report 25% COPQ).
11. The Six Sigma Black Belt should understand the roles of the various people involved in change (senior leader, champion, mentor, change agent, technical leader, team leader, facilitator).
12. The Six Sigma Black Belt should be able to design, test, and analyze customer surveys.
13. The Six Sigma Black Belt should know how to quantitatively analyze data from employee and customer surveys. This includes evaluating survey reliability and validity as well as the differences between surveys.
14. Given two or more sets of survey data, the Six Sigma Black Belt should be able to determine if there are statistically significant differences between them.
15. The Six Sigma Black Belt should be able to quantify the value of customer retention.
16. Given a partly completed QFD matrix, the Six Sigma Black Belt should be able to complete it.
17. The Six Sigma Black Belt should be able to compute the value of money held or invested over time, including present value and future value of a fixed sum.
18. The Six Sigma Black Belt should be able to compute present value and future value for various compounding periods.
19. The Six Sigma Black Belt should be able to compute the breakeven point for a project.
20. The Six Sigma Black Belt should be able to compute the net present value of cash flow streams, and to use the results to choose among competing projects.
21. The Six Sigma Black Belt should be able to compute the internal rate of return for cash flow streams and to use the results to choose among competing projects (see the NPV/IRR sketch after this list).
22. The Six Sigma Black Belt should know the COPQ rationale for Six Sigma (i.e., the Six Sigma Black Belt should be able to explain what to do if COPQ analysis indicates that the optimum for a given process is less than Six Sigma).
23. The Six Sigma Black Belt should know the basic COPQ categories and be able to allocate a list of costs to the correct category.
24. Given a table of COPQ data over time, the Six Sigma Black Belt should be able to perform a statistical analysis of the trend.
25. Given a table of COPQ data over time, the Six Sigma Black Belt should be able to perform a statistical analysis of the distribution of costs among the various categories.
26. Given a list of tasks for a project, their times to complete, and their precedence relationships, the Six Sigma Black Belt should be able to compute the time to completion for the project, the earliest completion times, the latest completion times and the slack times. The Six Sigma Black Belt should also be able to identify which tasks are on the critical path (see the CPM sketch after this list).
27. Given cost and time data for project tasks, the Six Sigma Black Belt should be able to compute the cost of normal and crash schedules and the minimum total cost schedule.
28. The Six Sigma Black Belt should be familiar with the basic principles of benchmarking.
29. The Six Sigma Black Belt should be familiar with the limitations of benchmarking.
30. Given an organization chart and a listing of team members, process owners, and sponsors, the Six Sigma Black Belt should be able to identify projects with a low probability of success.
31. The Six Sigma Black Belt should be able to identify measurement scales of various metrics (nominal, ordinal, etc.).
32. Given a metric on a particular scale, the Six Sigma Black Belt should be able to determine if a particular statistical method should be used for analysis.
33. Given a properly collected set of data, the Six Sigma Black Belt should be able to perform a complete measurement system analysis, including the calculation of bias, repeatability, reproducibility, stability, discrimination (resolution) and linearity.
34. Given the measurement system metrics, the Six Sigma Black Belt should know whether or not a given measurement system should be used on a given part or process.
35. The Six Sigma Black Belt should know the difference between computing sigma from a data set whose production sequence is known and from a data set whose production sequence is not known.
36. Given the results of an AIAG Gage R&R study, the Six Sigma Black Belt should be able to answer a variety of questions about the measurement system.
37. Given a narrative description of "as-is" and "should-be" processes, the Six Sigma Black Belt should be able to prepare process maps.
38. Given a table of raw data, the Six Sigma Black Belt should be able to prepare a frequency tally sheet of the data, and to use the tally sheet data to construct a histogram.
39. The Six Sigma Black Belt should be able to compute the mean and standard deviation from a grouped frequency distribution.
40. Given a list of problems, the Six Sigma Black Belt should be able to construct a Pareto Diagram of the problem frequencies.
41. Given a list which describes problems by department, the Six Sigma Black Belt should be able to construct a cross tabulation and use the information to perform a Chi-square analysis.
42. Given a table of x and y data pairs, the Six Sigma Black Belt should be able to determine if the relationship is linear or non-linear.
43. The Six Sigma Black Belt should know how to use non-linearities to make products or processes more robust.
44. The Six Sigma Black Belt should be able to construct and interpret a run chart when given a table of data in time-ordered sequence. This includes calculating run length, number of runs and quantitative trend evaluation.
45. When told the data are from an exponential or Erlang distribution, the Six Sigma Black Belt should know that the run chart is preferred over the standard X control chart.
46. Given a set of raw data, the Six Sigma Black Belt should be able to identify and compute two statistical measures each for central tendency, dispersion, and shape.
47. Given a set of raw data, the Six Sigma Black Belt should be able to construct a histogram.
48. Given a stem & leaf plot, the Six Sigma Black Belt should be able to reproduce a sample of numbers to the accuracy allowed by the plot.
49. Given a box plot with numbers on the key box points, the Six Sigma Black Belt should be able to identify the 25th and 75th percentile and the median.
50. The Six Sigma Black Belt should know when to apply enumerative statistical methods, and when not to.
51. The Six Sigma Black Belt should know when to apply analytic statistical methods, and when not to.
52. The Six Sigma Black Belt should demonstrate a grasp of basic probability concepts, such as the probability of mutually exclusive events, of dependent and independent events, of events that can occur simultaneously, etc.
53. The Six Sigma Black Belt should know factorials, permutations and combinations, and how to use these in commonly used probability distributions.
54. The Six Sigma Black Belt should be able to compute expected values for continuous and discrete random variables.
55. The Six Sigma Black Belt should be able to compute univariate statistics for samples.
56. The Six Sigma Black Belt should be able to compute confidence intervals for various statistics.
57. The Six Sigma Black Belt should be able to read values from a cumulative frequency ogive.
58. The Six Sigma Black Belt should be familiar with the commonly used probability distributions, including: hypergeometric, binomial, Poisson, normal, exponential, Chi-square, Student's t, and F.
59. Given a set of data, the Six Sigma Black Belt should be able to correctly identify which distribution should be used to perform a given analysis, and to use the distribution to perform the analysis.
60. The Six Sigma Black Belt should know that different techniques are required for analysis depending on whether a given measure (e.g., the mean) is assumed known or estimated from a sample. The Six Sigma Black Belt should choose and properly use the correct technique when provided with data and sufficient information about the data.
61. Given a set of subgrouped data, the Six Sigma Black Belt should be able to select and prepare the correct control charts and to determine if a given process is in a state of statistical control.
62. The above should be demonstrated for data representing all of the most common control charts.
63. The Six Sigma Black Belt should understand the assumptions that underlie ANOVA, and be able to select and apply a transformation to the data.
64. The Six Sigma Black Belt should be able to identify which cause on a list of possible causes will most likely explain a non-random pattern in the regression residuals.
65. If shown control chart patterns, the Six Sigma Black Belt should be able to match the control chart with the correct situation (e.g., an outlier pattern vs. a gradual trend matched to a tool breaking vs. a machine gradually warming up).
66. The Six Sigma Black Belt should understand the mechanics of PRE-Control.
67. The Six Sigma Black Belt should be able to correctly apply EWMA charts to a process with serial correlation in the data.
68. Given a stable set of subgrouped data, the Six Sigma Black Belt should be able to perform a complete Process Capability Analysis. This includes computing and interpreting capability indices, estimating the % failures, control limit calculations, etc. (see the capability sketch after this list).
69. The Six Sigma Black Belt should demonstrate an awareness of the assumptions that underlie the use of capability indices.
70. Given the results of a replicated 2² full-factorial experiment, the Six Sigma Black Belt should be able to complete the entire ANOVA table (see the 2² factorial sketch after this list).
71. The Six Sigma Black Belt should understand the basic principles of planning a statistically designed experiment. This can be demonstrated by critiquing various experimental plans with or without flaws.
72. Given a "clean" experimental plan, the Six Sigma Black Belt should be able to find the correct number of replicates to obtain a desired power.
73. The Six Sigma Black Belt should know the difference between the various types of experimental models (fixed-effects, random-effects, mixed).
74. The Six Sigma Black Belt should understand the concepts of randomization and blocking.
75. Given a set of data, the Six Sigma Black Belt should be able to perform a Latin Square analysis and interpret the results.
76. Ditto for one way ANOVA, two way ANOVA (with and without replicates), full and fractional factorials, and response surface designs.
77. Given an appropriate experimental result, the Six Sigma Black Belt should be able to compute the direction of steepest ascent.
78. Given a set of variables each at two levels, the Six Sigma Black Belt can determine the correct experimental layout for a screening experiment using a saturated design.
79. Given data for such an experiment, the Six Sigma Black Belt can identify which main effects are significant and state the effect of these factors.
80. Given two or more sets of responses to categorical items (e.g., customer survey responses categorized as poor, fair, good, excellent), the Six Sigma Black Belt will be able to perform a Chi-Square test to determine if the samples are significantly different.
81. The Six Sigma Black Belt will understand the idea of confounding and be able to identify which two factor interactions are confounded with the significant main effects.
82. The Six Sigma Black Belt will be able to state the direction of steepest ascent from experimental data.
83. The Six Sigma Black Belt will understand fold over designs and be able to identify the fold over design that will clear a given alias.
84. The Six Sigma Black Belt will know how to augment a factorial design to create a composite or central composite design.
85. The Six Sigma Black Belt will be able to evaluate the diagnostics for an experiment.
86. The Six Sigma Black Belt will be able to identify the need for a transformation in y and to apply the correct transformation.
87. Given a response surface equation in quadratic form, the Six Sigma Black Belt will be able to compute the stationary point.
88. Given data (not graphics), the Six Sigma Black Belt will be able to determine if the stationary point is a maximum, minimum or saddle point.
89. The Six Sigma Black Belt will be able to use a quadratic loss function to compute the cost of a given process.
90. The Six Sigma Black Belt will be able to conduct simple and multiple linear regression.
91. The Six Sigma Black Belt will be able to identify patterns in residuals from an improper regression model and to apply the correct remedy.
92. The Six Sigma Black Belt will understand the difference between regression and correlation studies.
93. The Six Sigma Black Belt will be able to perform Chi-square analysis of contingency tables.
94. The Six Sigma Black Belt will be able to compute basic reliability statistics (MTBF, availability, etc.).
95. Given the failure rates for given subsystems, the Six Sigma Black Belt will be able to use reliability apportionment to set MTBF goals.
96. The Six Sigma Black Belt will be able to compute the reliability of series, parallel, and series-parallel system configurations (see the reliability sketch after this list).
97. The Six Sigma Black Belt will demonstrate the ability to create and read an FMEA analysis.
98. The Six Sigma Black Belt will demonstrate the ability to create and read a fault tree.
99. Given distributions of strength and stress, the Six Sigma Black Belt will be able to compute the probability of failure.
100. The Six Sigma Black Belt will be able to apply statistical tolerancing to set tolerances for simple assemblies (see the tolerancing sketch after this list). The Six Sigma Black Belt will know how to compare statistical tolerances to so-called "worst case" tolerancing.
101. The Six Sigma Black Belt will be aware of the limits of the Six Sigma approach.

Reproduced with kind permission of Six Sigma Training.
Source: Pyzdek, T. (2003), 101 Things Every Six Sigma Black Belt Should Know, Six Sigma Training.
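A few of the quantitative items above lend themselves to short worked sketches. For items 17-21, here is a minimal Python sketch of present value, NPV, and IRR; the code and the cash flows are our own, not Pyzdek's, and flows are assumed to occur at the ends of equal periods.

```python
def present_value(fv, rate, periods):
    """PV of a single future sum at a per-period discount rate (item 17)."""
    return fv / (1 + rate) ** periods

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the time-0 flow (usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection; assumes NPV changes sign once on
    [lo, hi], which holds for a conventional investment (one sign change in
    the cash flows)."""
    f_lo = npv(lo, cash_flows)
    for _ in range(200):
        mid = (lo + hi) / 2
        f_mid = npv(mid, cash_flows)
        if abs(f_mid) < tol:
            break
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid      # root lies in the other half
        else:
            hi = mid
    return mid

flows = [-10_000, 3_000, 4_000, 5_000, 3_000]   # hypothetical project
print(round(npv(0.10, flows), 2))               # rank projects by NPV at 10%
print(round(irr(flows), 4))                     # compare IRR to the hurdle rate
```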
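For item 26, a minimal critical-path (CPM) sketch; the task network below is hypothetical and assumed acyclic.

```python
def cpm(tasks):
    """tasks maps task -> (duration, list of predecessors).
    Returns project duration, earliest/latest starts, slacks, critical path."""
    order, seen = [], set()
    def visit(t):                       # topological order via DFS
        if t in seen:
            return
        seen.add(t)
        for p in tasks[t][1]:
            visit(p)
        order.append(t)
    for t in tasks:
        visit(t)
    early = {}                          # earliest start (forward pass)
    for t in order:
        early[t] = max((early[p] + tasks[p][0] for p in tasks[t][1]), default=0)
    finish = max(early[t] + tasks[t][0] for t in tasks)   # project duration
    late = {t: finish - tasks[t][0] for t in tasks}       # latest start (backward pass)
    for t in reversed(order):
        succs = [s for s in tasks if t in tasks[s][1]]
        if succs:
            late[t] = min(late[s] for s in succs) - tasks[t][0]
    slack = {t: late[t] - early[t] for t in tasks}
    critical = [t for t in order if slack[t] == 0]
    return finish, early, late, slack, critical

tasks = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}
print(cpm(tasks))   # duration 8; critical path A-C-D; B has slack 2
```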
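For item 68, a sketch of capability indices and the estimated fraction nonconforming, assuming a stable, approximately normal process whose mu and sigma have been estimated from an in-control control chart; the numbers are hypothetical.

```python
from statistics import NormalDist

def capability(mu, sigma, lsl, usl):
    cp = (usl - lsl) / (6 * sigma)                  # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # penalizes off-center processes
    nd = NormalDist(mu, sigma)
    p_out = nd.cdf(lsl) + (1 - nd.cdf(usl))         # estimated fraction outside spec
    return cp, cpk, p_out

cp, cpk, p_out = capability(mu=10.02, sigma=0.05, lsl=9.85, usl=10.15)
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}  est. PPM={p_out * 1e6:.0f}")
```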
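For item 70, effects and sums of squares for a replicated 2² full factorial; the data are hypothetical and the factor levels are in coded -1/+1 units.

```python
runs = {(-1, -1): [28, 25], (1, -1): [36, 32],
        (-1,  1): [18, 19], (1,  1): [31, 30]}     # replicate responses per cell

n = len(next(iter(runs.values())))                 # replicates per cell
totals = {k: sum(v) for k, v in runs.items()}

def contrast(sign):                                # sign((a, b)) -> +1 or -1
    return sum(sign(k) * t for k, t in totals.items())

effects = {
    "A":  contrast(lambda k: k[0]),
    "B":  contrast(lambda k: k[1]),
    "AB": contrast(lambda k: k[0] * k[1]),
}
for name, c in effects.items():
    effect = c / (2 * n)                           # average effect, 2^(k-1) * n runs per level
    ss = c ** 2 / (4 * n)                          # sum of squares, 1 df each
    print(f"{name}: effect={effect:.2f}  SS={ss:.2f}")
# SS_error follows by subtraction from SS_total, with 4*(n-1) error df.
```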
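For items 94 and 96, basic series/parallel reliability arithmetic for independent components; the reliabilities below are hypothetical, each taken at the mission time of interest.

```python
from math import prod

def series(rs):            # all components must work
    return prod(rs)

def parallel(rs):          # at least one must work (active redundancy)
    return 1 - prod(1 - r for r in rs)

# Series-parallel example: two redundant pairs in series.
pair1 = parallel([0.90, 0.90])            # 0.99
pair2 = parallel([0.85, 0.95])            # 0.9925
print(round(series([pair1, pair2]), 4))   # 0.9826
```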
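For item 100, statistical (root-sum-of-squares) tolerancing versus worst-case stacking for a linear stack of independent dimensions, assuming each tolerance represents the same number of standard deviations (e.g., +/-3 sigma); the tolerances are hypothetical.

```python
from math import sqrt

tols = [0.010, 0.006, 0.008, 0.004]        # +/- tolerances of the stacked parts

worst_case = sum(tols)                     # tolerances add directly
rss = sqrt(sum(t * t for t in tols))       # variances add, so tolerances combine by RSS

print(f"worst case: +/-{worst_case:.4f}")  # +/-0.0280
print(f"RSS:        +/-{rss:.4f}")         # +/-0.0147, the statistical tolerance
```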
{"url":"http://wjmc.blogspot.com/2012/04/101-things-every-six-sigma-black-belt.html","timestamp":"2014-04-18T19:02:01Z","content_type":null,"content_length":"139593","record_id":"<urn:uuid:e4ee26ce-4281-4caf-89b8-f49e2d207c0a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00053-ip-10-147-4-33.ec2.internal.warc.gz"}