Make Heap Best Case

I don't understand the proof of the best case for the Make_Heap algorithm. The algorithm is:

  for i = ⌊n/2⌋ down to 1: Sift_Down(i)

while the procedure Sift_Down is:

  repeat
    if 2j ≤ n and M[2j] > M[k] then k = 2j;
    if 2j < n and M[2j+1] > M[k] then k = 2j+1;
    exchange M[k] and M[j];
  until k = j;

The proof for the best case goes like this:

  t(n) ≤ 2·2^0 + 2·2^1 + ... + 2·2^(k-2)

So why does the sum not go up to 2^(k-1), given that the Make_Heap procedure iterates all the way to the root? And why is every term multiplied by 2? Thank You
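The Sift_Down loop in the question can be put in runnable form. This is a hedged reconstruction of the pseudocode above, not the questioner's exact code: the 1-based heap indices of the pseudocode are mapped onto a 0-based Python list, and the names `m` and `n` follow the question's `M` and `n`.

```python
def sift_down(m, j, n):
    """Sift the element at 1-based index j down a max-heap of size n,
    following the repeat-until structure in the question."""
    while True:
        k = j
        # left child 2j, right child 2j+1 (1-based), stored at m[2j-1], m[2j]
        if 2 * j <= n and m[2 * j - 1] > m[k - 1]:
            k = 2 * j
        if 2 * j + 1 <= n and m[2 * j] > m[k - 1]:
            k = 2 * j + 1
        if k == j:          # "until k = j"
            break
        m[k - 1], m[j - 1] = m[j - 1], m[k - 1]
        j = k

def make_heap(m):
    """Bottom-up heap construction: i = floor(n/2) down to 1."""
    n = len(m)
    for i in range(n // 2, 0, -1):
        sift_down(m, i, n)
    return m
```

After `make_heap`, every parent is at least as large as its children, which is the invariant the best-case count in the proof is summing over.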
{"url":"http://www.physicsforums.com/showthread.php?t=236736","timestamp":"2014-04-19T09:34:06Z","content_type":null,"content_length":"20306","record_id":"<urn:uuid:7128850e-5b8e-4281-9a88-10545d2e522b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple (Or so I thought..) 2D math

Ok, basically I'm making a small game that communicates over the network. The network code works fine, but this is my first time programming in OpenGL. I've got it set up where 4 rotates left, 6 rotates right, 8 is up, and 5 & 2 are down. I can't seem to get the coordinate updates to work. I can rotate just fine, but when I press 8 to go forward it goes in some random (or so it seems) direction. What is an easy way to solve this?

Re: Simple (Or so I thought..) 2D math

When you rotate, you're also rotating your axes. Pressing 8 after a rotation will cause a forward motion with respect to your new axes, which is why you're not seeing the kind of motion you're expecting. Try using quaternions for your rotations.

Re: Simple (Or so I thought..) 2D math

There is absolutely no need for quaternions in this situation. Use sin and cos to decompose the angle and magnitude into x and y coordinates. So, something like that.

Re: Simple (Or so I thought..) 2D math

Ok, so the axes rotate with the same increment we use to rotate our object? Right now we have it rotating + or - 10.0f. The way we had it set up is basically like this to change x & y:

  float nXPos, nYPos = 0; // nMovement is 3 pixels
  // this is for hitting '8'
  if (fHeading >= 0.0f && fHeading <= 90.0f) {
      nXPos = nXPos + (nMovement * cos(fHeading));
      nYPos = nYPos + (nMovement * sin(fHeading));
  }

Then from there we followed the rules of trig for different angles. So if my axes also rotate, how would I adjust for that in my math?

Re: Simple (Or so I thought..) 2D math

You have it set up perfectly; your mistake is that sin and cos take arguments in radians and you are giving them in degrees. Just multiply by pi/180.

Re: Simple (Or so I thought..) 2D math

Also make sure the camera and projection are plugged in correctly; sometimes it's easy to mix up the rotation with the controls because of one misplaced ",".
Just a precaution if it's not the math.
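The fix suggested in the thread (convert degrees to radians before calling sin/cos) can be sketched like this; the function and parameter names are mine, not from the original code:

```python
import math

def move_forward(x, y, heading_deg, distance):
    """Advance (x, y) by `distance` along `heading_deg` (in degrees).

    math.sin / math.cos expect radians, so convert first -- the bug
    discussed in the thread. One formula covers every quadrant, so no
    per-angle case analysis (the if-chains on fHeading) is needed.
    """
    heading = math.radians(heading_deg)  # i.e. multiply by pi/180
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading))
```

For example, a heading of 90 degrees with a step of 3 moves purely along +y, and a heading of 0 degrees moves purely along +x.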
{"url":"http://www.opengl.org/discussion_boards/showthread.php/159422-Simple(Or-so-I-thought-)-2D-math","timestamp":"2014-04-16T04:21:28Z","content_type":null,"content_length":"51929","record_id":"<urn:uuid:1d7355cd-bf41-44ce-bf00-fdded8b1778d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Cartesian equation of a plane perpendicular to another plane with certain intercepts

My teacher said to forget about learning how to sketch, so that's out. Is the normal to the plane (1,0,0) then? So (0,4,0) and (0,0,-2) are both on the plane. How would I stick those points on a plane with a Cartesian equation of x=0, then? I need to get a D-value, which I can get from points, I know, but when the equation is 1x + 0y + 0z + D = 0, would the intercepts kind of not really matter? The D-value would be 0, so the equation would be x=0, right? Or am I wrong about the normal again? Or am I supposed to be looking for the direction vector formed by the two intercepts and use that as my normal?
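For what it's worth, the guess in the post checks out when the intercepts are substituted into the plane equation. This working is mine, not part of the original thread:

```latex
\text{Normal } (1,0,0):\quad 1x + 0y + 0z + D = 0 \\
(0,4,0):\quad 1\cdot 0 + 0\cdot 4 + 0\cdot 0 + D = 0 \;\Rightarrow\; D = 0 \\
(0,0,-2):\quad 1\cdot 0 + 0\cdot 0 + 0\cdot(-2) + D = 0 \;\Rightarrow\; D = 0 \\
\text{so both points force } D = 0 \text{ and the plane is } x = 0.
```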
{"url":"http://www.physicsforums.com/showpost.php?p=2635713&postcount=3","timestamp":"2014-04-18T23:19:43Z","content_type":null,"content_length":"7714","record_id":"<urn:uuid:4906e9e2-ab1a-40b2-a941-65e325cde7ca>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Setting up Proportions

Date: 03/11/2010 at 22:18:41
From: allie
Subject: Advice on word problems

I've been having a hard time answering word problems. Say there's 10 notebooks and you bought 3 for $1.89. What is the cost of 10 notebooks?

On a test, I got the answer wrong. I'll show you what I did wrong. I THINK it went like this:

  10 over $1.89 = n (the variable) over $1.89

I find setting up difficult. I don't understand anything about setting up word problems. I've always had problems with this. I'm not sure why. If you can just give me advice on how to set up word problems, or tips, or even a very good website, that will be much appreciated. I have another test Very Soon. I'm in the 7th grade in smart math.

Thank you

Date: 03/11/2010 at 23:29:04
From: Doctor Peterson
Subject: Re: Advice on word problems

Hi, Allie.

I think what you most need at the moment is some advice about working with proportions, because that is where you are going wrong. Perhaps out of this I'll have some more general comments to make that would apply to other kinds of word problems.

This kind of proportion word problem depends on setting up the proportion properly. The key is that a proper proportion looks like a table of data, where each row is consistent and each column is consistent.

In this case, you have costs, and you have numbers of books; and you have two cost/number pairs. A table of the data might look like this:

             what you   what the
             bought     question asks
             --------   -------------
   number:      3            10
   cost($):    1.89           ?

Each column represents one case given in the problem: what you actually bought, or what you are asking about. Each row represents one kind of quantity: the number of books bought, or the cost of the books.

The proportion has to work the same way: the numerators have to be corresponding things, and the denominators have to be corresponding things, and each ratio has to go together.
So the proportion can be written directly from the table:

     3      10      (number)
   ---- = ----
   1.89     x       (cost)

Do you see the connection to my table? The first ratio is two numbers that go together, and so is the second. In each case, the numerator is the number and the denominator is the cost.

Now, this isn't the only way to write the proportion; it's just the way my table turned out. Possibly, a teacher would prefer that you make it so each ratio compares two numbers of the same kind (cost or number), and the numerators correspond to one another, and the denominators correspond to one another. It might look like this:

   (cost)   (number)
    1.89       3      (what you bought)
    ---- =   ----
     x        10      (what the question asks)

So here we are saying that the ratio of costs is equal to the ratio of numbers; and each ratio has the same order (bought : question). Either of these (and several others) will work just as well; in fact, if you solve them by cross-products, you'll have the same equation to solve:

   3x = 1.89 * 10

Now let's look at what you did, which I think you said was this:

    10       n
   ---- =  ----
   1.89    1.89

If that's what you meant, it's pretty clearly wrong because the 3 doesn't show up anywhere, and 1.89 is in there twice. But let's suppose you wrote this:

    10      n
   ---- = ----
   1.89     3

You can check it for consistency by looking at each "row" and each "column." Let's just replace each number with what it means:

   number in question     cost in question
   ------------------  =  ----------------
   cost as bought         number bought

Do you see what's wrong? The left has number over cost, but the right has cost over number. The numerators go together and the denominators go together, at least! If you make this check on your work and find it's wrong, just rearrange the numbers so they do line up right, and then check it again. Once it's set up right, you're ready to solve the proportion.

Does that help? Try a few example problems using this way of thinking, and show me your work on a few more if you'd like me to check them.
I see one big principle that applies to any problem solving, especially of an algebraic type: You have to begin by looking for relationships among the quantities in a problem, and make sure that what you write algebraically represents those relationships accurately. Making a table is just one way to help yourself focus on that idea. Finally, here is a link to our FAQ on Word Problems, which may give you some other useful ideas: Word problems - Doctor Peterson, The Math Forum
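Doctor Peterson's cross-product step can be checked mechanically. A small sketch (the function name is mine) using exact fractions, for the proportion 3/1.89 = 10/x:

```python
from fractions import Fraction

def solve_proportion(a, b, c):
    """Solve a/b = c/x for x by cross-multiplication: a*x = b*c."""
    return Fraction(b) * Fraction(c) / Fraction(a)

# 3 notebooks cost $1.89; the cost x of 10 notebooks:
cost = solve_proportion(3, Fraction(189, 100), 10)  # -> 63/10, i.e. $6.30
```

Exact fractions avoid any rounding while rearranging, which matches the consistency-check habit the answer recommends.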
{"url":"http://mathforum.org/library/drmath/view/75071.html","timestamp":"2014-04-16T20:01:59Z","content_type":null,"content_length":"9857","record_id":"<urn:uuid:d2567566-656b-42e3-aab3-8abd334f4822>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
Institute for Mathematics and its Applications (IMA) - Mathematical Modeling of Insulin Action and in Vivo Estimates of Insulin Sensitivity by Michael J. Quon

Michael J. Quon, M.D., Ph.D.
Senior Investigator
Hypertension-Endocrine Branch
NHLBI, NIH
Bethesda, MD 20892-1754

Diabetes, obesity, and hypertension are major inter-related public health problems that are all characterized, in part, by insulin resistance (decreased sensitivity or responsiveness to metabolic actions of insulin). Therefore, it is of great interest to develop tools to quantify insulin sensitivity in vivo. The "gold standard" hyperinsulinemic euglycemic glucose clamp method is labor intensive and not well suited to large studies. A well-accepted alternative for estimating insulin sensitivity in vivo is to analyze insulin and glucose data from a frequently sampled intravenous glucose tolerance test (FSIVGTT) using Bergman's minimal model of glucose metabolism. This is less cumbersome than the glucose clamp but still requires at least 3 hours to complete. Estimates of insulin sensitivity (SIMM) derived from minimal model analysis correlate well with measurements of insulin sensitivity using the glucose clamp technique (SIClamp). In addition, SIMM has predictive power with respect to the development of diabetes. Nevertheless, we have previously shown that minimal model analysis systematically underestimates the effect of glucose on glucose disposal and therefore overestimates SIMM (Quon et al., Diabetes 43:890-896, 1994). Furthermore, we have recently shown that this error is due to an oversimplified single-compartment representation of glucose kinetics and is dependent on the dynamics of insulin secretion (Cobelli et al., Am J Physiol 38:E1031-E1036, 1998). Therefore, we have developed an alternative Quantitative Insulin-sensitivity Check Index (QUICKI).
After analyzing data from both glucose clamp and FSIVGTT studies, we discovered that physiological steady-state values (i.e., fasting insulin (I[0]) and fasting glucose (G[0])) contain important information related to insulin sensitivity and thus defined QUICKI as 1/[log (I[0]) + log (G[0])]. Correlations of QUICKI with SIClamp were as good, or better, than correlations of SIMM with SIClamp. We conclude that QUICKI is a simple, accurate, and reliable insulin sensitivity index obtained from a single fasting blood sample that may be useful for clinical research and epidemiological studies related to diabetes and other insulin resistant states.
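The QUICKI formula from the abstract is straightforward to compute. A minimal sketch: the abstract writes "log" without a base, and base 10 is assumed here; the conventional clinical units (microU/mL for fasting insulin, mg/dL for fasting glucose) are also an assumption, not stated above.

```python
import math

def quicki(fasting_insulin, fasting_glucose):
    """QUICKI = 1 / (log(I0) + log(G0)); base-10 logs assumed."""
    return 1.0 / (math.log10(fasting_insulin) + math.log10(fasting_glucose))
```

The point of the index is that it needs only the two steady-state values from a single fasting blood sample, no FSIVGTT or clamp data.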
{"url":"http://www.ima.umn.edu/biology/wkshp_abstracts/quon1.html","timestamp":"2014-04-18T15:40:39Z","content_type":null,"content_length":"16531","record_id":"<urn:uuid:10e622d5-57d8-42fb-8b8b-d52d2f836563>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Definition of decimal in English:

Syllabification: dec·i·mal

• 1 Relating to or denoting a system of numbers and arithmetic based on the number ten, tenth parts, and powers of ten: decimal arithmetic
  □ The advent of decimal arithmetic reduced the need for such tests.
  □ The number system which was used to express this numerical information was based on the decimal system and was both additive and multiplicative in nature.
  □ The discovery of the binomial theorem for integer exponents by al-Karaji was a major factor in the development of numerical analysis based on the decimal system.

• 1.1 Relating to or denoting a system of currency, weights and measures, or other units in which the smaller units are related to the principal units as powers of ten: decimal coinage
  □ The uniformity of administrative structures was reflected, later, in the imposition of a national, decimal system of weights, measures, and currency.
  □ Elaborated between 1790 and 1799, the decimal metric system of weights and measures was zealously promoted under Napoleon.
  □ Even if your business cannot join the general changeover to decimal money for a while after Decimal Day - February 15 - you still need to prepare now for the introduction of decimal currency.

(also decimal fraction)

• 1 A fraction whose denominator is a power of ten and whose numerator is expressed by figures placed to the right of a decimal point.
  □ We can use arithmetics with different bases, fractions, decimals, logarithms, powers, or simply words.
  □ How do you convert decimals and percentages to fractions?
  □ Basic means a level that really does not exceed primary school: fractions, decimals, percentages and ratios.

• 1.1 The system of decimal numerical notation.
  □ The same must well have been true of changing our monetary system to decimal.
  □ In addition to binary and decimal, computers can also speak in octal and hex.
  □ Though the entries of the table are in decimal, notation along the rows and columns is in hexadecimal.

decimally
  □ Metric units are related decimally.
  □ He also suggested a standard linear measurement, which he called the mille, based on the length of the arc of one degree of longitude on the Earth's surface and divided decimally.
  □ The infantry were organized decimally in units of a thousand, with a company and section structure under designated officers.

early 17th century: from modern Latin decimalis (adjective), from Latin decimus 'tenth'.
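The entry's examples mention converting decimals and percentages to fractions, and decimal versus binary/octal/hex notation; a quick illustrative sketch:

```python
from fractions import Fraction

# A terminating decimal is a fraction whose denominator is a power of ten:
f = Fraction("0.375")        # 375/1000, reduced automatically to 3/8
# A percentage is a fraction with denominator 100:
p = Fraction(45, 100)        # 45%, reduced to 9/20
# The same number written in decimal, binary, octal, and hexadecimal:
n = 255
forms = (str(n), bin(n), oct(n), hex(n))
```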
{"url":"http://www.oxforddictionaries.com/definition/american_english/decimal","timestamp":"2014-04-21T10:00:46Z","content_type":null,"content_length":"121410","record_id":"<urn:uuid:5690f25f-1183-4b4f-af73-2bd9dee8e4bb>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
On significance and model validation

22 January, 2013 at 12:53 | Posted in Statistics & Econometrics | 9 Comments

Let us suppose that we as educational reformers have a hypothesis that implementing a voucher system would raise the mean test results by 100 points (null hypothesis). Instead, when sampling, it turns out it only raises them by 75 points, with a standard error (telling us how much the mean varies from one sample to another) of 20.

In its standard form, a significance test is not the kind of "severe test" that we are looking for in our search for being able to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being that there is a strong tendency to accept the null hypothesis since it can't be rejected at the standard 5% significance level. In their standard form, significance tests bias against new hypotheses by making it hard to disconfirm the null hypothesis. And as shown over and over again when it is applied, people have a tendency to read "not disconfirmed" as "probably confirmed."

But looking at our example, standard scientific methodology tells us that since there is only an 11% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more "reasonable" to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

And, most importantly, of course we should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-value of 0.11 means next to nothing if the model is wrong. As David Freedman writes in Statistical Models and Causal Inference:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree.
For them, fitting models to data, computing standard errors, and performing significance tests is "informative," even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.

9 Comments »

1. I can't imagine wanting to calculate the odds a null is false by (1-p)/p. Very strange, and certainly not warranted by frequentists or Bayesians so far as I know. Putting that entirely to one side, severity requires considering magnitudes indicated. Here, you might look at the lower bound: mu > 75 - 2SD or the like. But, like Fisher, we would never consider a single result, and like Freedman, we would always test assumptions. — Comment by Mayo, 23 January, 2013

□ I appreciate you commenting on my article, Deborah, so let me just make some short remarks:

1) It's pretty clear from the setting of my example that we're in a policy situation, and so people do have to make decisions and act on them [being "forever undecided" or waiting for "the long run" or a "hypothetical parallel universe" won't do (I do think "Student" had a point here ...)]; in that context you may very well have to act on your odds ratios.

2) When you write "WE" I have no problem, as long as everyone is aware of it, including people like you and Aris and other "error statistical philosophy" statisticians. Fine with me, because compared to how statistical significance testing is practised out there in the social sciences, you have a much more ambitious "program" and tougher demands on "severity" etc. But, nota bene, that is not what the other "WE" in the social sciences do. Not yet at least ...

3) I DO think Deirdre and Stephen (not to talk of the dozens of earlier critics) have a couple of really good points, but I also think (like e.g. Thomas Mayer) that they overreach [and too often write in a style - "rhetorical"?
- I don't particularly appreciate in an academic context (blogposts, it goes without saying, are something completely different ...)]. My own position is leaning more to the less exciting but "sober" view of Olle Häggström (whom I cite in the article). It's a question of balance, but I think we still see too much of a one-eyed focus on traditional simple-minded significance testing in the social sciences. — Comment by Lars P Syll, 23 January, 2013

☆ Lars: you haven't even computed the p-value correctly, if I'm understanding you. If the null is 100 and the observed value being used is 75, the numerator would be 75 - 100 (observed - expected), so the p-value is over .5. — Comment by Mayo, 23 January, 2013

○ Deborah, I don't follow you, to be honest. Since I did flag that normality conditions are assumed to apply, the one-tailed p-values of (100-75)/20 and (75-100)/20 are the same, and equal to 0.10565. — Comment by Lars P Syll, 23 January, 2013

■ No, if this is a one-tailed test looking for positive discrepancies from mu = 100, then the p-value is .89. — Comment by Mayo, 23 January, 2013

★ Corresponding to the observed value of the test statistic, the p-value is the LOWEST level of significance at which the null hypothesis can be rejected. So, please Deborah, what's the problem? — Comment by Lars P Syll, 23 January, 2013

2. Lars; you write that "the test has shown that it is likely – the odds are 0.89/0.11 or 8-to-1 – that the hypothesis is false". Writing about the odds of a hypothesis being false is explicitly a Bayesian concept, and you haven't given a prior... so something is not right here. (I appreciate that it may just be a semantic slip.) If you want to (loosely) connect p-values to quantities that appear in Bayesian measures of support for the null hypothesis, check out J Berger and T Sellke 1987, or G Casella and R Berger 1987. There are also other Bayesian interpretations of p-values, but they have little to do with measuring support for the null.
Finally, I think you may be overstating the reliance on assumptions here. Independence of observations is important – this is typically guaranteed by the study design. And the test is just comparing means in two populations, so "linearity" is trivially true. In fact, provided we allow for non-constant variance (i.e. the unequal-variance t-test) and have a moderate sample size, standard analyses are very robust. Normal sampling distributions are really not required; the Central Limit Theorem is what really does the work. — Comment by fred, 25 January, 2013

3. Thanks for your comments, Fred. Normality assumptions were used because I didn't want to have a discussion about probability distributions (but of course, I agree on that; they are really an overkill in this context). Odds and odds ratios per se are not Bayesian (although Bayesians typically use them, and together with priors they have a special epistemological status for Bayesians). In a policy situation (vouchers or not) it's more like at the horse races than in a seminar room: decisions you make have a clear cost/benefit dimension, and you can't stay undecided forever. In a typical scientific context I would be more reluctant to think in those terms. I read Casella/Berger a couple of years ago when writing an article on Bayesianism vs. abduction/IBE. I would recommend Box/Tiao on Bayesian Inference (heavy reading though). — Comment by Lars P Syll, 25 January, 2013

□ Glad you agree about Normality statements. Box and Tiao is indeed a useful book. The original post (about 8-1 odds of a hypothesis being false) still concerns me; your analysis is not Bayesian, so you are not justified in making such a statement. Perhaps you instead meant to write about the odds of observing data at least this extreme if the null was true? This is fine, but it's not what you wrote; it's important to avoid confusion in this area. — Comment by fred, 25 January, 2013
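The disputed number in the comments can be reproduced directly. Under the post's normality assumption (null mean 100, observed 75, standard error 20), the lower-tail p-value is about 0.106, i.e. roughly the 11% quoted in the post; the upper-tail value is its complement, the 0.89 in Mayo's comment.

```python
import math

def normal_cdf(z):
    """Standard normal CDF, written via the complementary error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

null_mean, observed, se = 100.0, 75.0, 20.0
z = (observed - null_mean) / se        # -1.25
p_lower = normal_cdf(z)                # P(Z <= -1.25), about 0.1056
p_upper = 1.0 - p_lower                # about 0.89
```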
{"url":"http://larspsyll.wordpress.com/2013/01/22/on-significance-and-model-validation/","timestamp":"2014-04-17T21:22:37Z","content_type":null,"content_length":"69082","record_id":"<urn:uuid:e91c10c3-3fcf-48b7-94a0-31de2d865eee>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Correlation dimension

Next: Takens-Theiler estimator Up: Dimensions and entropies Previous: Dimensions and entropies

Roughly speaking, the idea behind certain quantifiers of dimensions is that the weight p(ε) of a typical ε-ball covering part of the invariant set scales with its radius like p(ε) ∝ ε^D, where the value of D depends also on the precise way one defines the weight. Using the square of the probability, one arrives at the correlation dimension D₂, which is estimated from the correlation sum [73]:

  C(ε) = 2 / ((N−w)(N−w−1)) · Σ_{j=1..N} Σ_{k<j−w} Θ(ε − |s_j − s_k|),

where the s_j are m-dimensional delay vectors and the Theiler window w will be discussed below. On sufficiently small length scales, and when the embedding dimension m exceeds the box-dimension of the attractor [74], C(ε) ∝ ε^{D₂}. Since one does not know the box-dimension a priori, one checks for convergence of the estimated values of D₂ in m.

The literature on the correct and spurious estimation of the correlation dimension is huge and this is certainly not the place to repeat all the arguments. The relevant caveats and misconceptions are reviewed for example in Refs. [75, 11, 76, 2]. The most prominent precaution is to exclude temporally correlated points from the pair counting by the so-called Theiler window w [75]. In order to become a consistent estimator of the correlation integral (from which the dimension is derived), the correlation sum should cover a random sample of points drawn independently according to the invariant measure on the attractor. Successive elements of a time series are not usually independent. In particular, for highly sampled flow data subsequent delay vectors are highly correlated. Theiler suggested to remove this spurious effect by simply ignoring all pairs of points in the correlation sum above whose time indices differ by less than w, where w should be chosen generously. With O(N²) pairs available, the loss of O(N) pairs is not dramatic as long as w ≪ N. At the very least, pairs with j=k have to be excluded [77], since otherwise there is a strong bias towards too small dimension estimates. Choosing w as the first zero of the auto-correlation function, or even as the decay time of the autocorrelation function, is often not large enough, since these reflect only overall linear correlations [75, 76]. The space-time-separation plot (Sec.
) provides a good means of determining a sufficient value for w, as discussed for example in [41, 2]. In some cases, notably processes with inverse power law spectra, inspection requires w to be of the order of the length of the time series. This indicates that the data does not sample an invariant attractor sufficiently, and the estimation of invariants like D₂ is then questionable.

Parameters in the routines d2 and c2naive are as usual the embedding parameters m and the delay.

Fast implementations of the correlation sum have been proposed by several authors. At small length scales, the computation of pairs can be done in O(N log N) or even O(N) time rather than O(N²) without losing any of the precious pairs, see Ref. [20]. However, for intermediate size data sets we also need the correlation sum at intermediate length scales, where neighbor searching becomes expensive. Many authors have tried to limit the use of computational resources by restricting one of the sums in the correlation sum to a fraction of the available points. By this practice, however, one loses valuable statistics at the small length scales, where points are so scarce anyway that all pairs are needed for stable results. In [62], both approaches were combined for the first time by using fast neighbor search at the small length scales. The routine d2 goes one step further and selects the range for the sums individually for each length scale to be processed. This turns out to give a major improvement in speed. The user can specify a desired number of pairs which seems large enough for a stable estimation of C(ε).

Next: Takens-Theiler estimator Up: Dimensions and entropies Previous: Dimensions and entropies

Thomas Schreiber Wed Jan 6 15:38:27 CET 1999
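The Theiler-window exclusion described above can be illustrated with a naive O(N²) sketch. This is plain Python with my own variable names, not the optimized d2 routine:

```python
def correlation_sum(x, m, delay, eps, w):
    """Naive correlation sum C(eps) over m-dimensional delay vectors,
    counting only pairs whose time indices differ by more than the
    Theiler window w; distances in the max norm."""
    n = len(x) - (m - 1) * delay
    vecs = [tuple(x[i + d * delay] for d in range(m)) for i in range(n)]
    pairs = count = 0
    for j in range(n):
        for k in range(j + w + 1, n):   # excludes pairs with |j - k| <= w
            pairs += 1
            if max(abs(a - b) for a, b in zip(vecs[j], vecs[k])) < eps:
                count += 1
    return count / pairs
```

On a linearly increasing series, only temporally adjacent delay vectors are close, so even w = 1 removes every close pair, which is exactly the spurious-neighbor effect the window is meant to suppress.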
{"url":"http://www.mpipks-dresden.mpg.de/~tisean/TISEAN_2.1/docs/chaospaper/node30.html","timestamp":"2014-04-20T18:24:36Z","content_type":null,"content_length":"9918","record_id":"<urn:uuid:885a87db-3b07-4627-9f90-ad034f5d75c6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
How to topologize X(R) when R is a topological ring?

Given a topological ring R, under what conditions and in what way can one induce a topology on the R-points of a scheme X? For example, if X is P^n or A^n, one has a natural topology on the R-points. If G is a group scheme/A and R is an A-algebra (still a topological ring), will the induced topology on G(R) (as above) automatically make G(R) into a topological group? For number theorists: if G is an algebraic group/Q, we can consider the adelic points G(A_K) for any number field K. Is the induced topology on G(A_K) that of a restricted direct product? Under what conditions will G(A_K) be locally compact or satisfy other nice properties?

Tags: ag.algebraic-geometry, nt.number-theory, algebraic-groups

Answer (accepted). Brian Conrad has some notes on this on his website ("Some notes on topologizing the adelic points of schemes, unifying the viewpoints of Grothendieck and Weil"). The short version is that if X is affine, you can topologize X(R) in a natural, functorial way (specifically, the weakest topology such that the functions X(R) --> R induced by elements of the structure sheaf are continuous). If X isn't affine, you have to be more careful, because the units of R might not be open in R and might not form a topological group with respect to the subspace topology. But those are the only problems, and if your ring doesn't have those problems, you can glue the naturally topologized affines and everything is functorial. Happily for number theorists, the adeles are fairly nice, and for a finite-type separated K-scheme X, X(A_K) can be naturally topologized, and it is locally compact and Hausdorff.

Comment (Kevin H. Lin, Oct 11 '09): Does this all agree with, or can this be made to agree with, the "classical" topology on C-points?

Reply (BCnrd): Yes, it is all consistent with the case when R is a topological field. This stuff is all addressed in the notes which Rebecca mentions.
But it is probably more instructive to figure it out for yourself. (BCnrd, Feb 23 '10)

Answer. For adelic points of X (or G), one can first topologize X(Q_p) so that it becomes a p-adic analytic variety, and for almost all p one can define an open subset X(Z_p). Then take X(A) to be the restricted product.

Answer. See also Andrei Jorza's thesis http://www.its.caltech.edu/~ajorza/notes/bsd.pdf, pp. 16 ff.
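The "weakest topology" in the accepted answer can be spelled out for the affine case; this formalization is my own gloss, not a quotation from Conrad's notes:

```latex
% For affine X = \operatorname{Spec} A, a point x \in X(R) is a ring map
% x^{\#} \colon A \to R, and each a \in A gives an evaluation map
%   \mathrm{ev}_a \colon X(R) \to R, \qquad x \mapsto x^{\#}(a).
% Give X(R) the initial (weakest) topology making every \mathrm{ev}_a
% continuous. When A is of finite type, a choice of generators
% a_1, \dots, a_n identifies this with the subspace topology on
% X(R) \subseteq R^n via x \mapsto (\mathrm{ev}_{a_1}(x), \dots, \mathrm{ev}_{a_n}(x)).
```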
{"url":"http://mathoverflow.net/questions/214/how-to-topologize-xr-when-r-is-a-topological-ring","timestamp":"2014-04-21T02:43:23Z","content_type":null,"content_length":"59481","record_id":"<urn:uuid:b1b0e7d9-dc13-41ab-8039-a33189bf33c2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
The Golden Mean

Let's start with an introduction of a technique that has been well known for many centuries now: the "Golden Mean" (sometimes called "Golden Section") is a geometric formula of the ancient Greeks. A composition following this rule is thought to be "harmonious". The principal idea behind it is to provide geometric lines which can be traversed when viewing a composition. The Golden Mean was a major guideline for many artists and painters, so it is certainly worth keeping in mind for modern-day photographers as well.

Theory - Part I

Let's begin with some words about the theory. The formula starts with a perfect square (marked blue in illustration A). Now we divide the base of the square into two equal parts. We take point x as the middle of a circle with a radius of the distance between point x and y. Thereafter we expand the base of the square till it hits the circle at point z. Now the square can be transformed to a rectangle with a proportion ratio of roughly 5:8. The ratio of A to C is the same as the one from A to B. Luckily the 5:8 ratio fits pretty closely to the ratio of the 35mm format (24x36mm = 5:7.5).

Theory - Part II

So now we have something which is thought to be a "perfect" rectangle. What's next? We draw a line from the upper left to the lower right edge of the rectangle (see illustration B) and another line from the upper right directed towards point y' (taken from illustration A) till it hits the first cross line. Obviously this divides the rectangle into three different sections.

In principle we're finished with the "Golden Mean" now. Just try to find objects/parts in your scene that fit roughly into these three sections and ... you have a "harmonious" composition. You can vary the formula by flipping and/or mirroring the schematic rectangle from illustration B.
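The construction in Part I can be checked numerically. A sketch with my own variable names: the square is taken to have side 1, and y is assumed to be a top corner of the square, as in the usual golden-section construction (the article defines it only via illustration A).

```python
import math

# x is the midpoint of the base, y a top corner of the square, so the
# radius used in the construction is |xy|:
radius = math.sqrt(0.5 ** 2 + 1.0 ** 2)
# Extending the base from x by this radius gives the long side of the
# rectangle, while the short side stays 1:
long_side = 0.5 + radius            # equals (1 + sqrt(5)) / 2
phi = (1 + math.sqrt(5)) / 2        # the golden ratio, ~1.618
# 8/5 = 1.6 is the article's convenient rational approximation.
```

So the "5:8" rectangle is a close rational stand-in for the exact 1:φ proportion, which is why it lines up so well with the 24x36mm frame.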
Spring Reading: 2011 edition

by Edward Z. Yang

Books are expensive, but by the power of higher education (also expensive, but differently so), vast oceans of books are available to an enterprising compsci. Here's my reading list for the spring break lending period (many of which were recommended on #haskell):

• Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi. Wonderfully iconoclastic book, and probably one of the easier reads on the list.
• Types and Programming Languages by Benjamin Pierce. I've been working on this one for a while; this break I'm focusing on the proof strategies for preservation, progress and safety, and also using it to complement a self-study course summed up by the next book.
• Lectures on the Curry-Howard Isomorphism by M.H. Sørensen and P. Urzyczyn. Very good; I've skimmed the first three chapters and I'm working on the exercises in chapter 2. I've been prone to making silly mis-assertions about the Curry-Howard Isomorphism (or is it?), so I'm looking forward to more firmly grounding my understanding of this correspondence. The sections on intuitionistic logic have already been very enlightening.
• Type Theory and Functional Programming by Simon Thompson. Haven't looked at it yet, but it fits into the general course of the previous two books.
• Purely Functional Data Structures by Chris Okasaki. Also one I've been working on a while. Working on compressing all the information mentally.
• Basic Category Theory for Computer Scientists by Benjamin Pierce. I've got two items on category theory; I got this one on a whim. Haven't looked at it yet.
• Pearls of Functional Algorithm Design by Richard Bird. Something like a collection of puzzles. I think I will enjoy reading through them and working out the subtleties. I probably won't get to the information compression stage this time around.
• Category Theory by Steve Awodey. I was working on the exercises in this textbook, and think I might get past the first chapter.

3 Responses to "Spring Reading: 2011 edition"

1. Some remarks on Curry-Howard and category theory. If you haven't noticed yet, there is some of the material towards a categorical understanding of the Curry-Howard correspondence in Awodey's book. Probably the best way to get a feel for Curry-Howard (probably should add Lambek as well) is to get into a routine of asking yourself what a proof for a given statement would look like. Another book I've enjoyed in that regard is Crole's "Categories for Types," which in my opinion is written very well and is one of the most readable books dealing with category theory I've come across so far. An underrated book that has helped me and others a lot is "Abstract and Concrete Categories." I like that much better than the book by Awodey (or Mac Lane for that matter). If you are interested in categorical logic, you will likely need other books though.

2. Thanks for the pointers. I wasn't planning on studying categorical logic but I suppose it's a sort of logical combination of the two topics. :-)

3. I've just started reading Asperti and Longo's "Categories, Types and Structures" which is available free here: I've got Pierce's "Types and Programming Languages" and Okasaki's "Purely Functional Data Structures" but haven't yet…you know…actually…emm…opened them yet! [blush] Keep up the superb blog!
Magical Triangle Theorem

Today, we had an exercise in hype and entertainment, and it didn't even feel like work. First period, I taught this:

The prescribed curriculum has me teach this way:

• Angles 1 and 2 are supplementary angles, so Angle 2 = 40°
• Angles 2 and 10 are alternate interior angles, so Angle 10 = 40°
• Angles 8, 9, and 10 make a straight angle, so Angle 8 = 60°
• Angles 8 and 11 are alternate interior angles, so Angle 11 = 60°

Instead, we did this: Pass out a ton of triangles, all different shapes. Students cut them out and label the angles. Colored paper helps. Shading the angle also helps students who have a hard time identifying the angles.

First, tear off angle A and align its vertex onto the vertex of the straight angle, then the angle side on the side of the straight angle. Do the same thing with angle B. Students now have a little gap, as you can see here:

Now, the next part is very important, so I'll explain it step by step:

1.) Clap your hands loudly and jump on a desk. Sweep your hand over the class and declare, "Magic has arrived!" in a triumphant tone.
2.) You likely have the students' attention now.
3.) In the same majestic voice, announce, "At this point… every page in the class has a different triangle* with angles labeled differently. All of us have a gap between the two angles. With my magic powers… I predict… (roll your Rs; it really sets the mood) that your one remaining angle will fit perfectly between the other two… go!"
4.) Students fit the third angle between the first two, then exclaim with wonder and throw roses at your feet. Third period gave a standing ovation and asked how long I was in town. One girl is bringing her parents to the matinee tomorrow. Spoiler: It's the Triangle Sum Theorem.
5.) Explain that they can perform the same trick at home, and you'll give away your secret right now: The sum of all the angles in any triangle is always 180°, just like the straight line upon which they are perched.

See? Wasn't that better than this?
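For the skeptical, the torn-corner trick checks out numerically too. A small Python sketch (just an illustration, not part of the lesson) builds random triangles and confirms that the three interior angles always total 180°:

```python
import math
import random

def interior_angles(a, b, c):
    """Interior angles (in degrees) of the triangle with vertices a, b, c."""
    def angle_at(p, q, r):
        # Angle at vertex p, between the rays p->q and p->r.
        v = (q[0] - p[0], q[1] - p[1])
        w = (r[0] - p[0], r[1] - p[1])
        cos_t = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
        cos_t = max(-1.0, min(1.0, cos_t))  # guard against rounding drift
        return math.degrees(math.acos(cos_t))
    return angle_at(a, b, c), angle_at(b, a, c), angle_at(c, a, b)

random.seed(7)
for _ in range(5):
    tri = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    print(round(sum(interior_angles(*tri)), 6))   # 180.0 every time
```

Every triangle, no matter its shape, sums to the same straight angle the students build on their desks.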
To be fair, we actually tackled the above problem after the magic show, but–and you can quote me on this–it's way easier to hold students' focus when there is magic involved.

On that note, the book I've been promoting for several months finally arrived today from Amazon. It's also notable that CUE's keynote speaker for this year teaches it the other way.

~Matt "Criss Angel" Vaudrey

*There were only 12 different triangles, but I didn't tell them how to label the angles, so the odds are one in 144 that two students had the same situation.

5 responses to "Magical Triangle Theorem"

1. Thank you for posting this! I love tlap but as a first-year teacher have struggled to make activities as engaging as I'd like them to be. The curriculum gives students too many prompts, which isn't doing them any favors. I really love the things you do in class and you have really inspired me as an educator as well as introduced me to other amazing educators. So thanks for having such an amazing blog!

2. Clever and simple in its approach, powerful in its delivery, what a great activity to get them interested and to move forward. My kids would always know how to do this when we walked through it, but failed to apply it on their own. They were capable of doing it when prompted, but couldn't apply it on their own, and I bet this may be an approach that would really connect with them.

3. I love doing that activity, so basic yet very convincing. I've never been quite that dramatic though. Maybe I'll give it a go next year.

Got something to say? Let's hear it.

This entry was posted in Actual Math and tagged geometry, triangle, triangle sum theorem.
Ferdinand Georg Frobenius

Georg Frobenius briefly attended the University of Göttingen, but he studied there for only one semester before returning to the city of his birth, Berlin. At the University of Berlin he attended lectures by Kronecker, Kummer and Weierstrass. He continued to study there for his doctorate, attending the seminars of Kummer and Weierstrass, and he received his doctorate in 1870, supervised by Weierstrass. He taught at the secondary school level for several years. Then, in 1874, he was appointed to the University of Berlin as an extraordinary professor of mathematics, and in 1875 he became an ordinary professor at the Eidgenössische Polytechnikum.

For seventeen years, between 1875 and 1892, Frobenius worked in Zürich. Weierstrass and Fuchs list 15 topics on which Frobenius made major contributions during these years: the development of analytic functions in series; the algebraic solution of equations whose coefficients are rational functions of one variable; the theory of linear differential equations; Pfaff's problem; linear forms with integer coefficients; linear substitutions and bilinear forms; adjoint linear differential operators; the theory of elliptic and Jacobi functions; the relations among the 28 double tangents to a plane curve of degree 4; Sylow's theorem; double cosets arising from two finite groups; Jacobi's covariants; Jacobi functions in three variables; the theory of biquadratic forms; and the theory of surfaces with a differential parameter.

In his work in group theory, Frobenius combined results from the theory of algebraic equations, geometry, and number theory, which led him to the study of abstract groups. He published a paper in 1879 which looks at permutable elements in groups; this paper also gives a proof of the structure theorem for finitely generated abelian groups. In 1884 he published his next paper on finite groups, in which he proved Sylow's theorems for abstract groups. The proof which Frobenius gave is the one still used today in most undergraduate courses.

Frobenius then filled the chair at the University of Berlin that became vacant when Kronecker died. He did great work there, but he did not get along with his colleagues. He was known for giving fast-paced, varied, and deep lectures that were nevertheless not terribly stimulating. He continued his investigation of conjugacy classes in groups, which would prove important in his later work on characters.

It was in the year 1896, however, when Frobenius was professor at Berlin, that his really important work on groups began to appear. In that year he published five papers on group theory, and one of them, on group characters, is of fundamental importance. Over the years 1897-1899 Frobenius published two papers on group representations, one on induced characters, and one on tensor products of characters. In 1898 he introduced the notion of induced representations and the Frobenius Reciprocity Theorem. It was a burst of activity which set up the foundations of the whole machinery of representation theory.

In 1896, Frobenius gave the irreducible characters for the alternating groups A₄ and A₅, the symmetric groups S₄ and S₅, and the group PSL(2,7). He completely determined the characters of the symmetric groups in 1900 and of the alternating groups in 1901, publishing definitive papers on each. He continued his applications of character theory in papers of 1900 and 1901 which studied the structure of Frobenius groups.

Frobenius had a number of doctoral students who made important contributions to mathematics. These included Edmund Landau, who was awarded his doctorate in 1899, Issai Schur (1901), and Robert Remak (1910). Frobenius collaborated with Schur on the representation theory and character theory of groups.

Among the topics which Frobenius studied towards the end of his career were positive and non-negative matrices. He introduced the concept of irreducibility for matrices, and the papers he wrote containing this theory around 1910 remain today the fundamental results in the discipline. The fact that so many of Frobenius's papers read like present-day textbooks on the topics he studied is a clear indication of the importance of his work.
Adrian E. Raftery: Publications

Raftery, A.E., Alkema, L. and Gerland, P. (in press). Bayesian Population Projections for the United Nations. Statistical Science, to appear.
Sharrow, D.J., Clark, S.J. and Raftery, A.E. (in press). Modeling Age-Specific Mortality for Countries with Generalized HIV Epidemics. PLoS One, to appear.
Young, W.C., Raftery, A.E. and Yeung, K.Y. (in press). Fast Bayesian Inference for Gene Regulatory Networks Using ScanBMA. BMC Systems Biology, to appear.
Celeux, G., Martin-Magniette, M.-L., Maugis-Rabusseau, C. and Raftery, A.E. (in press). Comparing Model Selection and Regularization Approaches to Variable Selection in Model-Based Clustering. Journal de la Société Française de Statistique, to appear.
Bao, L., Raftery, A.E. and Reddy, A. (in press). Estimating the Sizes of Populations at Risk of HIV Infection in Bangladesh Using a Bayesian Hierarchical Model. Statistics and Its Interface, to appear.
Raftery, A.E., Lalic, N. and Gerland, P. (2014). Joint Probabilistic Projection of Female and Male Life Expectancy. Demographic Research, 30:795-822.
Fosdick, B.K. and Raftery, A.E. (2014). Regional Probabilistic Fertility Forecasting by Modeling Between-Country Correlations. Demographic Research, 30:1011-1034.
Lenkoski, A., Eicher, T.S. and Raftery, A.E. (2014). Two-Stage Bayesian Model Averaging in Endogenous Variable Models. Econometric Reviews, to appear. Earlier version.
Wheldon, M., Raftery, A.E., Clark, S.J. and Gerland, P. (2013). Estimating Demographic Parameters with Uncertainty from Fragmentary Data. Journal of the American Statistical Association, 108:96-110.
Raftery, A.E., Chunn, J.L., Gerland, P. and Ševčíková , H. (2013). Bayesian Probabilistic Projections of Life Expectancy for All Countries. Demography, 50:777-801. Sloughter, J.M., Gneiting, T. and Raftery, A.E. (2013) Probabilistic Wind Vector Forecasting using Ensembles and Bayesian Model Averaging. Monthly Weather Review, 141:2107-2119. Raftery, A.E., Li. N., Ševčíková , H., Gerland, P. and Heilig, G.K. (2012). Bayesian probabilistic population projections for all countries. Proceedings of the National Academy of Sciences Raftery, A.E., Niu, X., Hoff, P.D. and Yeung, K.Y. (2012). Fast Inference for the Latent Space Network Model Using a Case-Control Approximate Likelihood. Journal of Computational and Graphical Statistics, 21:909-919. Fosdick, B.K. and Raftery, A.E. (2012). Estimating the Correlation in Bivariate Normal Data with Known Variances and Small Sample Sizes. The American Statistician, 66:34-41. Lo, K., Raftery, A.E., Dombek, K., Zhu, J., Schadt, E.E., Bumgarner, R.E. and Yeung, K.Y. (2012). Integrating External Biological Knowledge in the Construction of Regulatory Networks from Time-series Expression Data. BMC Systems Biology 6: article 101. Bao, L., Salomon, J.A., Brown, T., Raftery, A.E. and Hogan, D. (2012). Modeling HIV/AIDS epidemics: revised approach in the UNAIDS Estimation and Projection Package 2011. Sexually Transmitted Infections 88:i3-i10. McCormick, T.M., Raftery, A.E., Madigan, D. and Burd, R.S. (2012). Dynamic Logistic Regression and Dynamic Model Averaging for Binary Classification. Biometrics 68:23-30. Alkema, L., Raftery, A.E., Gerland, P., Clark, S.J. and Pelletier, F. (2012). Estimating the Total Fertility Rate from Multiple Imperfect Data Sources and Assessing its Uncertainty. Demographic Research 26:331-362. Yeung, K.Y., Gooley, T.A., Zhang, A., Raftery, A.E., Radich, J.P. and Oehler, V.G. (2012). Predicting relapse prior to transplantation in chronic myeloid leukemia by integrating expert knowledge and expression data. 
Bioinformatics 28:823-830. Alkema, L., Raftery, A.E., Gerland, P., Clark, S.J., Pelletier, F., Buettner, T. and Helig, G. (2011). Probabilistic Projections of the Total Fertility Rate for All Countries. Demography 48:815-839. Ševčíková , H., Alkema, L. and Raftery, A.E. (2011). bayesTFR: An R Package for Probabilistic Projections of the Total Fertility Rate. Journal of Statistical Software 43:1-29. Kleiber, W., Raftery, A.E. and Gneiting, T. (2011). Geostatistical model averaging for locally calibrated probabilistic quantitative precipitation forecasting. Journal of the American Statistical Association 106:1291-1303. Kleiber, W., Raftery, A.E., Baars, J., Gneiting, T., Mass, C.F. and Grimit, E.P. (2011). Locally Calibrated Probabilistic Temperature Forecasting Using Geostatistical Model Averaging and Local Bayesian Model Averaging. Monthly Weather Review 139:2630-2649. Chmielecki, R.M. and A.E. Raftery (2011). Probabilistic Visibility Forecasting Using Bayesian Model Averaging. Monthly Weather Review 139:1626--1636. Yeung, K.Y., Dombek, K.M., Lo, K., Mittler, J.E., Zhu, J., Schadt, E.E., Bumgarner, R.E. and Raftery, A.E. (2011). Construction of regulatory networks using expression time-series data of a genotyped population. Proceedings of the National Academy of Sciences 108:19436-19441. Fraley, C., Raftery, A.E., Gneiting, T., Sloughter, M. and Berrocal, V.J. (2011). Probabilistic Weather Forecasting in R. R Journal, 3:55-63. Ševčíková , H., Raftery, A.E., and Waddell, P.A. (2011). Assessing Uncertainty About the Benefits of Transportation Infrastructure Projects Using Bayesian Melding: Application to Seattle's Alaskan Way Viaduct. Transportation Research Part A - Methodological 45:540-553. Raftery, A.E. and L. Bao. (2010). Estimating and Projecting Trends in HIV/AIDS Generalized Epidemics Using Incremental Mixture Importance Sampling. Biometrics 66:1162-1173. Raftery, A.E., Karny, M., and Ettler, P. (2010). 
Online Prediction Under Model Uncertainty Via Dynamic Model Averaging: Application to a Cold Rolling Mill. Technometrics 52:52-66. Berrocal, V.J., Raftery, A.E. and Gneiting, T. (2010). Probabilistic Weather Forecasting for Winter Road Maintenance. Journal of the American Statistical Association 105:522-537. Brown, T., L. Bao, A.E. Raftery, J.A. Salomon, R.F. Baggaley, J. Stover, and P. Gerland (2010). Modelling HIV epidemics in the antiretroviral era: the UNAIDS Estimation and Projection package 2009. Sexually Transmitted Infections 86:i3--i10. Bao, L. and A.E. Raftery (2010). A stochastic infection rate model for estimating and projecting national HIV prevalence rates. Sexually Transmitted Infections 86:i93--i99. Steele, R.J. and Raftery, A.E. (2010). Performance of Bayesian Model Selection Criteria for Gaussian Mixture Models. In Frontiers of Statistical Decision Making and Bayesian Analysis (edited by M.-H. Chen et al), pages 113-130, New York: Springer. Earlier version. Eicher, T., Papageorgiou, C. and Raftery, A.E. (2010). Determining Growth Determinants: Default Priors and Predictive Performance in Bayesian Model Averaging. Journal of Applied Econometrics Baudry, J.-P., Raftery, A.E., Celeux, G., Lo, K. and Gottardo, R. (2010). Combining Mixture Components for Clustering. Journal of Computational and Graphical Statistics 19:332-353. Bao, L., Gneiting, T., Grimit, E.P., Guttorp, P. and Raftery, A.E. (2010). Bias correction and Bayesian Model Averaging for ensemble forecasts of surface wind direction. Monthly Weather Review Sloughter, J.M., Gneiting, T. and Raftery, A.E. (2010). Probabilistic Wind Speed Forecasting using Ensembles and Bayesian Model Averaging. Journal of the American Statistical Association 105:25-35. Murphy, T.B., Dean. N. and Raftery, A.E. (2010). Variable Selection and Updating In Model-Based Discriminant Analysis for High Dimensional Data with Food Authenticity Applications. Annals of Applied Statistics 4:396-421. Dean, N. 
and Raftery, A.E. (2010). Latent Class Analysis Variable Selection. Annals of the Institute of Statistical Mathematics 62:11-35. Fraley, C., Raftery, A.E. and Gneiting, T. (2010). Calibrating Multi-Model Forecast Ensembles with Exchangeable and Missing Members using Bayesian Model Averaging. Monthly Weather Review 138:190-202. Steele, R.J., Wang, N. and Raftery, A.E. (2010). Inference from multiple imputation for missing data using mixtures of normals. Statistical Methodology 7:351-365. Gottardo, R. and Raftery, A.E. (2009). Bayesian Robust Variable and Transformation Selection: A Unified Approach. Canadian Journal of Statistics, 37:1-20. Gottardo, R. and Raftery, A.E. (2009). Markov chain Monte Carlo with mixtures of singular distributions. Journal of Computational and Graphical Statistics 17:949-975. Krivitsky, P., Handcock, M.S., Raftery, A.E. and Hoff, P. (2009). Representing Degree Distributions, Clustering, and Homophily in Social Networks With Latent Cluster Random Ects Models. Social Networks 31:204-213. Mass, C.F., Joslyn, S., Pyle, J., Tewson, P., Gneiting, T., Raftery, A.E., Baars, J., Sloughter, J.M., Jones, D. and Fraley, C. (2009). PROBCAST: A Web-Based Portal to Mesoscale Probabilistic Forecasts. Bulletin of the American Meteorological Society 90:1009-1014. Oehler, V.G., Yeung, K.Y., Choi, Y.E., Bumgarner, R.E., Raftery, A.E. and Radich, J.P. (2009). The derivation of diagnostic markers of chronic myeloid leukemia progression from microarray data. Blood Annest, A., Bumgarner, R.E., Raftery, A.E. and Yeung, K.Y. (2009). Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data. BMC Bioinformatics 10, article 72. Berrocal, V.J., Raftery, A.E. and Gneiting, T. (2008). Probabilistic quantitative precipitation field forecasting using a two-stage spatial model. Annals of Applied Statistics 2: 1170-1193. Alkema, L., Raftery, A.E. and Brown, T. (2008). 
Bayesian melding for estimating uncertainty in national HIV prevalence estimates. Sexually Transmitted Infections 84:i11-i16. Brown, T., Salomon, J.A., Alkema, L., Raftery, A.E. and Gouws, E. (2008). Progress and challenges in modelling country-level HIV/AIDS epidemics: The UNAIDS Estimation and Projection Package 2007. Sexually Transmitted Infections 84:i5-i10. Chu, V.T., Gottardo, R., Raftery, A.E., Bumgarner, R.E. and Yeung, K.Y. (2008). MeV+R: using MeV as a graphical user interface for Bioconductor applications in microarray analysis. Genome Biology 7: article R118. Handcock, M.S., Raftery, A.E. and Tantrum, J. (2007). Model-based clustering for social networks (with Discussion). Journal of the Royal Statistical Society, Series A, 170, 301-354. Gneiting, T., Balabdaoui, F. and Raftery, A.E. (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society, Series B, 69, 243-268. Gneiting, T. and Raftery, A.E. (2007). Strictly Proper Scoring Rules, Prediction, and Estimation. Journal of the American Statistical Association, 102, 359-378. Alkema, L., Raftery, A.E. and Clark, S.J. (2007). Probabilistic projections of HIV prevalence using Bayesian melding. Annals of Applied Statistics, 1, 229-248. Oh, M.-S. and Raftery, A.E. (2007). Model-based Clustering with Dissimilarities: A Bayesian Approach. Journal of Computational and Graphical Statistics, 16, 559-585. Berrocal, V., Raftery, A.E. and Gneiting, T. (2007). Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts. Monthly Weather Review, 135, 1386-1402. Wilson, L.J., Beauregard, S., Raftery, A.E. and Verret, R. (2007). Calibrated Surface Temperature Forecasts from the Canadian Ensemble Prediction y stem Using Bayesian Model Averaging (with Discussion). Monthly Weather Review, 135, 1364-1385. Discussion pages 4226-4236. Sloughter, J.M., Raftery, A.E. and Gneiting, T. (2007). 
Probabilistic Quantitative Precipitation Forecasting Using Bayesian Model Averaging. Monthly Weather Review, 135, 3209-3220. Fraley, C. and Raftery, A.E. (2007). Bayesian Regularization for Normal Mixture Estimation and Model-Based Clustering. Journal of Classification, 24, 155-181. Raftery, A.E., Newton, M.A., Satagopan, J.M. and Krivitsky, P. (2007). Estimating the Integrated Likelihood via Posterior Simulation Using the Harmonic Mean Identity (with Discussion). In Bayesian Statistics 8 (edited by J.M. Bernardo et al.), pp. 1-45, Oxford University Press. Sevcikova, H., Raftery, A.E. and Waddell, P. (2007). Assessing Uncertainty in Urban Simulations Using Bayesian Melding. Transportation Research B, 41, 652-669. Fraley C. and Raftery A.E. (2007). Model-based methods of classification: Using the mclust software in chemometrics. Journal of Statistical Software, 18, paper i06. Raftery, A.E. and Dean, N. (2006). Variable Selection for Model-Based Clustering. Journal of the American Statistical Assocation, 101, 168-178. Gottardo, R., Raftery, A.E., Yeung, K.Y. and Bumgarner, R.E. (2006). Robust Estimation of cDNA Microarray Intensities with Replicates. Journal of the American Statistical Association, 101, 30-40. Gottardo, R., Raftery, A.E., Yeung, K.Y. and Bumgarner, R.E. (2006). Bayesian Robust Inference for Differential Gene Expression in cDNA Microarrays with Multiple Samples. Biometrics, 62, 10-18. Steele, R., Raftery, A.E. and Emond, M. (2006). Computing Normalizing Constants for Finite Mixture Models via Incremental Mixture Importance Sampling (IMIS). Journal of Computational and Graphical Statistics, 15, 712-734. Forbes, F., Peyrard, N., Fraley, C., Georgian-Smith, D., Goldhaber, D.M., and Raftery, A.E. (2006). Model-Based Region-of-Interest Selection in Dynamic Breast MRI. Journal of Computer Assisted Tomography, 30, 675-687. Tewson, P. and Raftery, A.E. (2006). Real-Time Calibrated Probabilistic Forecasting Website. 
Bulletin of the American Meteorological Society, 7, 880-882. Fraley, C. and Raftery, A.E. (2006). Some applications of model-based clustering in chemistry. R News, 6, no. 3, 17-23. Fraley, C. and Raftery, A.E. (2006). Model-based microarray image analysis. R News, 6, no. 5, 60-63. Czado, C.C. and Raftery, A.E. (2006). ``Choosing the Link function and Accounting for Link Uncertainty in Generalized Linear Models using Bayes Factors.'' Statistical Papers, 47, 419-442. Earlier technical report. Gneiting, T. and Raftery, A.E. (2005). Weather forecasting with ensemble methods. Science, 310, 248-249. Fraley, C., Raftery, A.E. and Wehrens, R. (2005). Incremental Model-Based Clustering for Large Datasets with Small Clusters. Journal of Computational and Graphical Statistics, 14, 529-546. Raftery, A.E., Painter, I. and Volinsky, C.T. (2005). BMA: An R package for Bayesian Model Averaging. R News, volume 5, number 2, 2-8. Murtagh, F., Raftery, A.E., and J.L. Starck (2005). Bayesian inference for multiband image segmentation via model-based cluster trees. Image and Vision Computing, 23, 587-596. Dean, N. and Raftery, A.E. (2005). ``Normal uniform mixture differential gene expression detection for cDNA microarrays.'' BMC Bioinformatics, 6, 173. (doi:10.1186/1471-2105-6-173). Li, Q., Fraley, C., Bumgarner, R.E., Yeung, K.Y. and Raftery, A.E. (2005). ``Donuts, Scratches and Blanks: Robust Model-Based Segmentation of Microarray Images.'' Bioinformatics, 21(12), 2875-2882 Yeung, K.Y., Bumgarner, R.E. and Raftery, A.E. (2005). `` Bayesian Model Averaging: Development of an improved multi-class, gene selection and classification tool for microarray data.'' Bioinformatics, 21(10), 2394-2402 (doi:10.1093/bioinformatics/bti319). Raftery, A.E., Gneiting, T., Balabdaoui, F. and Polakowski, M. (2005). Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174. Gneiting, T., Raftery, A.E., Westveld, A. and Goldman, T. (2005). 
Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation. Monthly Weather Review, 133, Fuentes, M. and Raftery, A.E. (2005). Model evaluation and spatial interpolation by Bayesian combination of observations with outputs from numerical models. Biometrics, 66, 36--45. Walsh, D.C.I. and Raftery, A.E. (2005). Classification of mixtures of spatial point processes via partial Bayes factors. Journal of Computational and Graphical Statistics, 14, 139-154. Gel, Y., Raftery, A.E. and Gneiting, T. (2004). Calibrated probabilistic mesoscale weather field forecasting: The Geostatistical Output Perturbation (GOP) method (with Discussion). Journal of the American Statistical Association, 99, 575-590. Earlier technical report version with color figures. Wehrens, R., Buydens, L.M.C., Fraley, C. and Raftery, A.E. (2004). Model-Based Clustering for Image Segmentation and Large Datasets Via Sampling. Journal of Classification, 21, 231-253. Fraley, C. and Raftery, A.E. (2003). Enhanced model-based clustering, density estimation and discriminant analysis software: MCLUST. Journal of Classification, 20, 263-286. Bates, S., Raftery, A.E. and Cullen, A.C. (2003). Bayesian Uncertainty Assessment in Deterministic Models for Environmental Risk Assessment. Environmetrics, 14, 355-371. Raftery, A.E. and Zheng, Y. (2003). Discussion: Performance of Bayesian Model Averaging. Journal of the American Statistical Association, 98, 931-938. Hoff, P., Raftery, A.E. and Handcock, M.S. (2002). Latent Space Approaches to Social Network Analysis. Journal of the American Statistical Association, 97, 1090-1098. Wang, N. and Raftery, A.E. (2002). Nearest Neighbor Variance Estimation (NNVE): Robust Covariance Estimation via Nearest Neighbor Cleaning (with Discussion). Journal of the American Statistical Association, 97, 994-1019. Stanford, D.C. and Raftery, A.E. (2002). 
Approximate Bayes factors for image segmentation: The Pseudolikelihood Information Criterion (PLIC). IEEE Transactions on Pattern Analysis and Machine Intelligence, 24, 1517-1520. Byers, S.D. and Raftery, A.E. (2002). Bayesian Estimation and Segmentation of Spatial Point Processes using Voronoi Tilings. In Spatial Cluster Modelling (A.G. Lawson and D. G.T. Denison, eds.), London: Chapman and Hall/CRC Press. Earlier technical report version. (Postscript). Fraley, C. and Raftery, A.E. (2002). Model-Based Clustering, Discriminant Analysis, and Density Estimation. Journal of the American Statistical Association, 97, 611-631. Berchtold, A. and Raftery, A.E. (2002). The Mixture Transition Distribution (MTD) model for high-order Markov chains and non-Gaussian time series. Statistical Science, 17, 328-356. Walsh, D.C.I and Raftery, A.E. (2002). Detecting mines in minefields with linear characteristics. Technometrics, 44, 34-44. Walsh, D.C.I. and Raftery, A.E. (2002). Accurate and Efficient Curve Detection in Images: The Importance Sampling Hough Transform. Pattern Recognition, 35, 1421-1431. Hoeting, J.A., Raftery, A.E. and Madigan, D. (2002). Bayesian variable and transformation selection in linear regression. Journal of Computational and Graphical Statistics, 11, 485-507. Raftery, A.E. (2001). Statistics in Sociology, 1950--2000: A Selective Review. Sociological Methodology, 31, 1-45. Yeung K.Y., Fraley C., Murua A, Raftery, A.E. and Ruzzo, W.L. (2001). Model-based clustering and data transformations for gene expression data. Bioinformatics, 17, 977-987. Oh, M.-S. and Raftery, A.E. (2001). Bayesian Multidimensional Scaling and Choice of Dimension. Journal of the American Statistical Association, 96, 1031-1044. Viallefont, V., Raftery, A.E. and Richardson, S. (2001). Variable selection and Bayesian model averaging in epidemiological case-control studies. Statistics in Medicine, 20, 3215-3230. Merli, G. and Raftery, A.E. (2000). Are births underreported in rural China? 
Manipulation of statistical records in response to China's population policies. Demography, 37, 109--126. Raftery, A.E. (2000). Statistics in Sociology, 1950--2000: A Vignette. Journal of the American Statistical Association, 95, 654-661. Poole, D.J. and Raftery, A.E. (2000). Inference for deterministic simulation models: The Bayesian melding approach. Journal of the American Statistical Association, 95, 1244-1255. Earlier, more complete technical report version (ps). Stanford, D.C. and Raftery, A.E. (2000). Principal curve clustering with noise. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 601-609. Volinsky, C.T. and Raftery, A.E. (2000). Bayesian information criterion for censored survival models. Biometrics, 56, 256--262. Biblarz, T.J. and Raftery, A.E. (1999). Family structure, educational attainment and socioeconomic success: Rethinking the ``Pathology of Matriarchy''. American Journal of Sociology, 105, 321-365. Hoeting, J.A., Madigan, D., Raftery, A.E. and Volinsky, C.T. (1999). Bayesian model averaging: A tutorial (with Discussion). Statistical Science, 14, 382--401. [Corrected version.] Correction: vol. 15, pp. 193-195. The corrected version is available at http://www.stat.washington.edu/www/research/online/hoeting1999.pdf. If cited, the corrected version should also be referenced, as here. Raftery, A.E. (1999). Bayes factors and BIC - Comment on "A critique of the Bayesian information criterion for model selection". Sociological Methods and Research, 27, 411-427. Fraley, C. and Raftery, A.E. (1999). MCLUST: Software for Model-Based Cluster Analysis. Journal of Classification, 16, 297-306. Lewis, S.M. and Raftery, A.E. (1999). Comparing explanations of fertility decline using event history models and unobserved heterogeneity. Sociological Methods and Research, 28, 35-60. Campbell, J.G., Fraley, C., Stanford, D., Murtagh, F. and Raftery, A.E. (1999). Model-based methods for textile fault detection.
International Journal of Imaging Science and Technology, 10, 339-346. Forbes, F. and Raftery, A.E. (1999). Bayesian morphology: Fast unsupervised Bayesian image analysis. Journal of the American Statistical Association, 94, 555-568. Poole, D., Givens, G.H. and Raftery, A.E. (1999). A proposed stock assessment method and its application to bowhead whales, Balaena mysticetus. Fishery Bulletin, 97, 144-152. Earlier technical report Mukherjee, S., Feigelson, E.D., Babu, G.J., Murtagh, F., Fraley, C. and Raftery, A.E. (1998). Three types of gamma ray bursts. Astrophysical Journal, 508, 314-327. Fraley, C. and Raftery, A.E. (1998). How many clusters? Which clustering methods? Answers via model-based cluster analysis. Computer Journal, 41, 578-588. Byers, S.D. and Raftery, A.E. (1998). Nearest neighbor clutter removal for estimating features in spatial point processes. Journal of the American Statistical Association, 93, 577-584. Raftery, A.E. and Zeh, J.E. (1998). Estimating bowhead whale, Balaena mysticetus, population size and rate of increase from the 1993 census. Journal of the American Statistical Association, 93, Dasgupta, A. and Raftery, A.E. (1998). Detecting features in spatial point processes with clutter via model-based clustering. Journal of the American Statistical Association, 93, 294-302. Biblarz, T., Raftery, A.E. and Bucur, A. (1997). Family structure and social mobility. Social Forces, 75, 1319-1339. Campbell, J.G., Fraley, C., Murtagh, F. and Raftery, A.E. (1997). Linear flaw detection in woven textiles using model-based clustering. Pattern Recognition Letters, 18, 1539-1548. Petrone, S. and Raftery, A.E. (1997). A note on the Dirichlet process prior in Bayesian nonparametric inference with partial exchangeability. Statistics and Probability Letters, 36, 69-83. Volinsky, C.T., Madigan, D., Raftery, A.E. and Kronmal, R.A. (1997). Bayesian model averaging in proportional hazard models: Assessing stroke risk. 
Journal of the Royal Statistical Society, series C---Applied Statistics, 46, 433-448. DiCiccio, T.J., Kass, R.E., Raftery, A.E. and Wasserman, L. (1997). Computing Bayes Factors by Combining Simulation and Asymptotic Approximations. Journal of the American Statistical Association, 92, Lewis, S.M. and Raftery, A.E. (1997). Estimating Bayes factors via posterior simulation with the Laplace-Metropolis estimator. Journal of the American Statistical Association, 92, 648-655. Bensmail, H., Celeux, G., Raftery, A.E. and Robert, C. (1997). Inference in model-based cluster analysis. Statistics and Computing, 7, 1-10. Raftery, A.E., Madigan, D. and Hoeting, J.A. (1997). Bayesian model averaging for regression models. Journal of the American Statistical Association, 92, 179-191. Madigan, D., Raftery, A.E., Volinsky, C.T., and Hoeting, J.A. (1996). Bayesian model averaging. In Integrating Multiple Learned Models, (IMLM-96), P. Chan, S. Stolfo, and D. Wolpert (Eds.), pp. Hoeting, J.A., Raftery, A.E. and Madigan, D. (1996). A method for simultaneous variable selection and outlier identification in linear regression. Computational Statistics and Data Analysis, 22, Le, N.D., Martin, R.D. and Raftery, A.E. (1996). Modeling outliers, bursts and flat stretches in time series using mixture transition distribution (MTD) models. Journal of the American Statistical Association, 91, 1504-1515. Givens, G.H., Zeh, J.E. and Raftery, A.E. (1996). Implementing the current management regime for aboriginal subsistence whaling to establish a catch limit for the Bering--Chukchi--Beaufort Seas stock of bowhead whales. Report of the International Whaling Commission, 46, 493--501. Raftery, A.E. (1996). Approximate Bayes factors and accounting for model uncertainty in generalized linear models. Biometrika, 83, 251-266. Givens, G.H. and Raftery, A.E. (1996). Local adaptive importance sampling for multivariate densities with strong nonlinear relationships.
Journal of the American Statistical Association, 91, 132-141. Le, N.D., Raftery, A.E. and Martin, R.D. (1996). Robust order selection in autoregressive models using robust Bayes factors. Journal of the American Statistical Association, 91, 123-131. Kahn, M.J. and Raftery, A.E. (1996). Discharge rates of Medicare stroke patients to skilled nursing facilities: Bayesian logistic regression with unobserved heterogeneity. Journal of the American Statistical Association, 91, 29-41. Raftery, A.E., Lewis, S.M., Aghajanian, A. and Kahn, M.J. (1996). Event history analysis of World Fertility Survey data. Mathematical Population Studies, 6, 129-153. Earlier technical report version Raftery, A.E. and Richardson, S. (1996). Model selection for generalized linear models via GLIB, with application to epidemiology. In Bayesian Biostatistics (D.A. Berry and D.K. Stangl, eds.), New York: Marcel Dekker, pp. 321--354. Earlier version (ps). Raftery, A.E. and Lewis, S.M. (1996). Implementing MCMC. In Markov Chain Monte Carlo in Practice (W.R. Gilks, D.J. Spiegelhalter and S. Richardson, eds.), London: Chapman and Hall, pp. 115-130. Earlier version (ps). Raftery, A.E. (1996). Hypothesis testing and model selection. In Markov Chain Monte Carlo in Practice (W.R. Gilks, D.J. Spiegelhalter and S. Richardson, eds.), London: Chapman and Hall, pp. 163--188. Earlier version (ps). Madigan, D., Gavrin, J. and Raftery, A.E. (1995). Enhancing the predictive performance of Bayesian graphical models. Communications in Statistics - Theory and Methods, 24, 2271-2292. Earlier technical report version (ps): Technical Report no. 270, Department of Statistics, University of Washington, February 1994. Givens, G.H., Zeh, J.E. and Raftery, A.E. (1995). Assessment of the Bering-Chukchi-Beaufort Seas stock of bowhead whales using the BALEEN II model in a Bayesian synthesis framework. Report of the International Whaling Commission, 45, 345-364. Givens, G.H., Raftery, A.E. and Zeh, J.E. (1995).
Response to comments by Butterworth and Punt in SC/46/AS2 on the Bayesian synthesis approach. Report of the International Whaling Commission, 45, Raftery, A.E., Lewis, S.M. and Aghajanian, A. (1995). Demand or ideation? Evidence from the Iranian marital fertility decline. Demography, 32, 159-182. Raftery, A.E., Madigan, D. and Volinsky, C.T. (1995). Accounting for model uncertainty in survival analysis improves predictive performance (with Discussion). In Bayesian Statistics 5 (J.M. Bernardo, J.O. Berger, A.P. Dawid and A.F.M. Smith, eds.), Oxford University Press, pp. 323-349. Earlier version (ps). Raftery, A.E. (1995). Bayesian model selection in social research (with Discussion). Sociological Methodology, 25, 111-196. Discussion: Avoiding model selection in Bayesian social research, by A. Gelman and D. B. Rubin. Discussion: Better rules for better decisions, by R. M. Hauser. Rejoinder: Model selection is unavoidable in social research, by A. E. Raftery. Kass, R.E. and Raftery, A.E. (1995). Bayes factors. Journal of the American Statistical Association, 90, 773-795. Raftery, A.E., Givens, G.H. and Zeh, J.E. (1995). Inference from a deterministic population dynamics model for bowhead whales (with Discussion). Journal of the American Statistical Association, 90, 402-430. Rejoinder. [The 1995 JASA-Applications and Case Studies Invited Paper.] Raftery, A.E. (1994). Change point and change curve modeling in stochastic processes and spatial statistics. Journal of Applied Statistical Science, 1, 403-424. Earlier technical report version. Madigan, D.M. and Raftery, A.E. (1994). Model selection and accounting for model uncertainty in graphical models using Occam's Window. Journal of the American Statistical Association, 89, 1335-1346. Givens, G.H., Raftery, A.E. and Zeh, J.E. (1994). A reweighting approach for sensitivity analysis within the Bayesian synthesis framework for population assessment modeling. Report of the International Whaling Commission, 44, 377-384. 
Taplin, R.H. and Raftery, A.E. (1994). Analysis of agricultural field trials in the presence of outliers and fertility jumps. Biometrics, 50, 764-781. Raftery, A.E. and Tavare, S. (1994). Estimation and modelling repeated patterns in high-order Markov chains with the mixture transition distribution (MTD) model. Journal of the Royal Statistical Society, series C - Applied Statistics, 43, 179-200. Newton, M.A. and Raftery, A.E. (1994). Approximate Bayesian inference by the weighted likelihood bootstrap (with Discussion). Journal of the Royal Statistical Society, series B, 56, 3-48. Biblarz, T.J. and Raftery, A.E. (1993). The effects of family disruption on social mobility. American Sociological Review, 58, 97-109. Madigan, D., Raftery, A.E., York, J.C., Bradshaw, J.M., and Almond, R.G. (1993). Strategies for graphical model selection. Proceedings of the 4th International Workshop on Artificial Intelligence and Statistics, pp. 361-366. Earlier version (ps). Givens, G.H., Raftery, A.E. and Zeh, J.E. (1993). Benefits of a Bayesian approach for synthesizing multiple sources of evidence and uncertainty linked by a deterministic model. Report of the International Whaling Commission, 43, 495-500. Raftery, A.E. and Schweder, T. (1993). Inference about the ratio of two parameters, with application to whale censusing. The American Statistician, 47, 259-264. Raftery, A.E. (1993). Bayesian model selection in structural equation models. In Testing Structural Equation Models (K.A. Bollen and J.S. Long, eds.), Beverly Hills: Sage, pp. 163-180. Earlier Raftery, A.E. and Hout, M. (1993). Maximally maintained inequality: Expansion, reform and opportunity in Irish education, 1921-1975. Sociology of Education, 66, 41-62. Grunwald, G.K., Guttorp, P. and Raftery, A.E. (1993). Prediction rules for exponential family state-space models. Journal of the Royal Statistical Society, series B, 55, 937-943. Banfield, J.D. and Raftery, A.E. (1993). Model-based Gaussian and non-Gaussian clustering.
Biometrics, 49, 803-821. Raftery, A.E. and Zeh, J.E. (1993). Estimation of Bowhead Whale, Balaena mysticetus, population size (with Discussion). In Bayesian Statistics in Science and Technology: Case Studies (C. Gatsonis et al., eds.), New York: Springer-Verlag, pp. 163-240. Grunwald, G.K., Raftery, A.E. and Guttorp, P. (1993). Time series of continuous proportions. Journal of the Royal Statistical Society, series B, 55, 103-116. Hout, M., Raftery, A.E. and Bell, E.O. (1993). Making the grade: Educational stratification in the United States, 1925-1989. In Persistent Inequality: Changing Educational Attainment in Thirteen Countries (Y. Shavit and H.-P. Blossfeld, eds.), Boulder: Westview Press, pp. 25-50. Raftery, A.E. and Lewis, S.M. (1992). How many iterations in the Gibbs sampler? In Bayesian Statistics 4 (J.M. Bernardo et al., editors), Oxford University Press, pp. 763-773. Earlier version (ps). Banfield, J.D. and Raftery, A.E. (1992). Ice floe identification in satellite images using mathematical morphology and clustering about principal curves. Journal of the American Statistical Association, 87, 7-16. Dwyer, T.P. and Raftery, A.E. (1991). Industrial accidents are produced by the social relations of work: A sociological theory of industrial accidents. Applied Ergonomics, 22, 167-178. Zeh, J.E., George, J.C., Raftery, A.E. and Carroll, G.M. (1990). Rate of increase, 1978-1988, in the Bering Sea stock of bowhead whales, Balaena mysticetus, estimated from ice-based census data. Marine Mammal Science, 7, 105-122. Zeh, J.E., Raftery, A.E. and Yang, Q. (1990). Assessment of tracking algorithm performance and its effect on population estimates using bowhead whales, Balaena mysticetus, identified visually and acoustically in 1986 off Point Barrow, Alaska. Report of the International Whaling Commission, 40, 411-421. Raftery, A.E., Zeh, J.E., Yang, Q. and Styer, P.E. (1990).
Bayes empirical Bayes interval estimation of bowhead whale, Balaena mysticetus, population size based upon the 1986 combined visual and acoustic census off Point Barrow, Alaska. Report of the International Whaling Commission, 40, 393-409. Stephen, E., Raftery, A.E. and Dowding, P. (1990). Forecasting spore concentrations: A time series approach. International Journal of Biometeorology, 34, 87-89. Raftery, A.E. and Thompson, E.A. (1990). What is the probability of a serious nuclear reactor accident? Journal of Statistical Computation and Simulation, 36, 31-34. Raftery, A.E. (1989). Are ozone exceedance rates decreasing? Statistical Science, 4, 378-381. O'Cinneide, C.A. and Raftery, A.E. (1989). A continuous multivariate exponential distribution that is multivariate phase type. Statistics and Probability Letters, 7, 323-325. Haslett, J. and Raftery, A.E. (1989). Space-time modelling with long-memory dependence: Assessing Ireland's wind power resource (with Discussion). Journal of the Royal Statistical Society, series C - Applied Statistics, 38, 1-50. Zeh, J.E., Turet, P., Gentleman, R. and Raftery, A.E. (1988). Population size estimation for the bowhead whale, Balaena mysticetus, based on 1985 and 1986 visual and acoustic data. Report of the International Whaling Commission, 38, 349-364. Raftery, A.E., Turet, P. and Zeh, J.E. (1988). A parametric empirical Bayes approach to interval estimation of bowhead whales, Balaena mysticetus, population size. Report of the International Whaling Commission, 38, 377-388. Raftery, A.E. and Thompson, E.A. (1988). How many nuclear reactor accidents? Journal of Statistical Computation and Simulation, 29, 347-350. Raftery, A.E. (1988). Inference and prediction for the binomial N parameter: A hierarchical Bayes approach. Biometrika, 75, 223-228. Raftery, A.E. (1988). Analysis of a simple debugging model. Journal of the Royal Statistical Society, series C - Applied Statistics, 37, 12-22. Raftery, A.E. (1987). 
Inference and prediction for a general order statistic model with unknown population size. Journal of the American Statistical Association, 82, 1163-1168. Martin, R.D. and Raftery, A.E. (1987). Robustness, computation, and non-Euclidean models. Journal of the American Statistical Association, 82, 1044-1050. Akman, V.E. and Raftery, A.E. (1986). Bayes factors for non-homogeneous Poisson processes with vague prior information. Journal of the Royal Statistical Society, series B, 48, 322-329. Raftery, A.E. (1986). A note on Bayes factors for log-linear contingency table models with vague prior information. Journal of the Royal Statistical Society, series B, 48, 249-250. Akman, V.E. and Raftery, A.E. (1986). Asymptotic inference for a change-point Poisson process. Annals of Statistics, 14, 1583-1590. Raftery, A.E. and Akman, V.E. (1986). Bayesian analysis of a Poisson process with a change-point. Biometrika, 73, 85-89. Raftery, A.E. (1986). Choosing models for cross-classifications. American Sociological Review, 51, 145-146. Raftery, A.E. (1985). A model for high-order Markov chains. Journal of the Royal Statistical Society, series B, 47, 528-539. Raftery, A.E. (1985). Some properties of a new continuous bivariate exponential distribution. Statistics and Decisions, Supplement Issue No. 2, 53-58. Raftery, A.E. (1985). Invited review: Time series analysis. European Journal of Operational Research, 20, 127-137. Raftery, A.E. and Hout, M. (1985). Does Irish education approach the meritocratic ideal? A logistic analysis. Economic and Social Review, 16, 115-140. Raftery, A.E. (1985). Social mobility measures for cross-national comparisons. Quality and Quantity, 19, 167-182. Murtagh, F. and Raftery, A.E. (1984). Fitting straight lines to point patterns. Pattern Recognition, 17, 479-483. Raftery, A.E. (1984). A continuous multivariate exponential distribution. Communications in Statistics, A13, 947-965. Raftery, A.E. (1983). Comment on ``Gaps and glissandos . . .''. 
American Sociological Review, 48, 581-583. Raftery, A.E., Haslett, J. and McColl, E. (1982). Wind power: a space-time process? In Time series analysis: theory and practice 2 (O.D. Anderson, ed.), North-Holland, pp. 191-202. Raftery, A.E. (1982). Generalised non-normal time series models. In Time series analysis: theory and practice 1 (O.D. Anderson, ed.), North-Holland, pp. 621-640. Fuchs, C., Broniatowski, M. and Raftery, A.E. (1981). Etude de la division cellulaire dans le meristeme plan de la feuille du Tropaeolum peregrinum L. I. La distribution des mitoses dans une zone reduite de panenchyme pallisadique releve-t-elle du hasard? Comptes rendus de l'Academie des Sciences de Paris, serie III, 292, 347-352. Fuchs, C., Broniatowski, M. and Raftery, A.E. (1981). Etude de la division cellulaire dans le meristeme plan de la feuille de Tropaeolum peregrinum L. II. Structures presentees par la distribution des mitoses. Comptes rendus de l'Academie des Sciences de Paris, serie III, 292, 385-387. Raftery, A.E. (1980). Estimation efficace pour un processus autoregressif exponentiel a densite discontinue. Publications de l'Institut de Statistique des Universites de Paris, 25, 64-91. Raftery, A.E., Shier, P. and Obilade, T. (1980). Domestic space heating and solar energy in Ireland. International Journal of Energy Research, 4, 31-39. Raftery, A.E. (1979). Un probleme de ficelle. Comptes rendus de l'Academie des Sciences de Paris, serie A, 289, 703-705. These papers are being made available here to facilitate the timely dissemination of scholarly work; copyright and all related rights are retained by the copyright holders. Updated April 14, 2014. Copyright 2005-2014 by Adrian E. Raftery; all rights reserved.
{"url":"http://www.stat.washington.edu/raftery/Research/publications.html","timestamp":"2014-04-18T16:14:55Z","content_type":null,"content_length":"52926","record_id":"<urn:uuid:f21d11fb-4705-42da-86a7-7253e9a88e52>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
How to prove trace(A.A*) is positive

First of all, the diagonal entries of AA* are real. You can't really compare two complex numbers like that, as there is no order on C. Now, what does the (i,j)-th entry of AA* look like? What about the (i,i)-th entry? (Side note: tr(AA*) isn't always positive - it can be zero. So a better thing would be to say that it's nonnegative.)
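Following the hint: the $(i,i)$-th entry of $AA^*$ is $\sum_j a_{ij}\overline{a_{ij}} = \sum_j |a_{ij}|^2$, so $\mathrm{tr}(AA^*) = \sum_{i,j} |a_{ij}|^2$, which is real and nonnegative (zero exactly when $A = 0$). A quick numerical sanity check with NumPy, added here for illustration (the matrix size and seed are arbitrary choices, not from the thread):

```python
# Numerical sanity check (not a proof): for a complex matrix A,
# the (i,i) entry of A A* is sum_j |a_ij|^2, so
# tr(A A*) = sum_{i,j} |a_ij|^2 -- real and nonnegative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

tr = np.trace(A @ A.conj().T)
print(abs(tr.imag) < 1e-12)                          # True: the trace is real
print(tr.real >= 0.0)                                # True: and nonnegative
print(np.isclose(tr.real, np.sum(np.abs(A) ** 2)))   # True: equals sum of |a_ij|^2
```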
{"url":"http://www.physicsforums.com/showthread.php?p=1189939","timestamp":"2014-04-23T20:26:22Z","content_type":null,"content_length":"30745","record_id":"<urn:uuid:2dc97b8c-e863-4b54-a4fb-c99c60d23bde>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
Solution: To defrost ice accumulated on the outer surface of an automobile windshield, warm air is blown over the inner surface of the windshield. Consider an automobile windshield with thickness of 5 mm and thermal conductivity of 1.4 W/mK. The outside ambient temperature is -10 C and the convection heat transfer coefficient is 200 W/m^2 K, while the ambient temperature inside the automobile is 25 C. Determine the value of the convection heat transfer coefficient for the warm air blowing over the inner surface of the windshield necessary to cause the accumulated ice to begin melting.
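The arithmetic can be sketched as a steady one-dimensional thermal circuit (a worked check added here, not part of the posted problem; the melting criterion is that the outer surface sits at 0 °C):

```python
# Steady 1-D thermal circuit for the windshield (SI units).
k, L = 1.4, 0.005            # W/m-K and m: windshield conductivity, thickness
h_o = 200.0                  # W/m^2-K: outside convection coefficient
T_in, T_out = 25.0, -10.0    # degC: cabin air, outside ambient
T_surface = 0.0              # degC: outer surface at the melting point of ice

# Flux leaving the outer surface when it sits at 0 degC:
q = h_o * (T_surface - T_out)        # = 2000 W/m^2

# The same flux must cross inner convection + conduction:
#   q = (T_in - T_surface) / (1/h_i + L/k)  ->  solve for h_i
R_total = (T_in - T_surface) / q     # total inner-side resistance, m^2-K/W
h_i = 1.0 / (R_total - L / k)
print(round(h_i, 1))                 # 112.0
```

With these numbers the required inner-side coefficient comes out to about 112 W/m²·K: the outer convection fixes the flux at 2000 W/m², and subtracting the conduction resistance L/k from the total inner-side resistance pins down 1/h_i.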
{"url":"http://highalphabet.com/heatandmassprb5/","timestamp":"2014-04-20T03:09:39Z","content_type":null,"content_length":"6373","record_id":"<urn:uuid:bab2b9b8-577e-49b9-9501-0b8687ddc51b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Stationary distribution of a countable state Markov chain

We assume the Markov chain to be countable state space, time-homogeneous. Does it necessarily have a stationary distribution? I found a paper on arXiv.org (http://arxiv.org/abs/math/0610707) that proves that every continuous transformation from the standard infinite-dimensional simplex (the convex hull of the standard bases of $\mathbb{R}^\infty$) to itself has a fixed point. So I guess it necessarily has one, but when I look for such a theorem in books, I cannot find one. Thank you for your help!

How about the Markov chain on $\mathbb Z$ defined by $\mathbb P(X_{n+1}=k+1|X_n=k)=1$? – Anthony Quas Apr 20 '13 at 9:22

The arxiv paper you cite takes the closure of the simplex, which means you take the convex hull of the basis vectors and $\vec 0$, so these are not probability measures. The fixed point of a transformation of probability measures may end up being in the closure instead, such as $\vec 0$. – Douglas Zare Apr 20 '13 at 9:31

Martin Hairer has some good lecture notes on this sort of thing: hairer.org/notes/Markov.pdf – user32372 Apr 20 '13 at 12:14

By the way, I think the shift mentioned by Anthony Quas is better if it is on $\mathbb{N}$. On $\mathbb Z$, adding one preserves uniform measures which don't add up to $1$. On $\mathbb N$, the shift only preserves $\vec 0$. – Douglas Zare Apr 20 '13 at 19:31
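Anthony Quas's shift chain, taken on $\mathbb{N}$ as Douglas Zare suggests, makes the negative answer explicit; a short derivation:

```latex
% Shift chain on the positive integers: from state k, move to k+1 with probability 1.
Let $P(k,\,k+1)=1$ for all $k\in\mathbb{N}$. A stationary distribution
$\pi$ would have to satisfy $\pi P=\pi$, i.e.
\[
  \pi_{k+1} \;=\; \sum_{j\in\mathbb{N}} \pi_j\,P(j,\,k+1) \;=\; \pi_k
  \qquad\text{for all } k\ge 1,
\]
while $\pi_1=\sum_j \pi_j\,P(j,1)=0$, since no state maps into $1$.
Hence $\pi_k=0$ for every $k$, so $\sum_k \pi_k=0\neq 1$ and no
stationary distribution exists. (On $\mathbb{Z}$ the same equations force
$\pi$ to be constant, which likewise cannot sum to $1$.)
```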
{"url":"http://mathoverflow.net/questions/128159/stationary-distribution-of-a-countable-state-markov-chain","timestamp":"2014-04-21T02:59:29Z","content_type":null,"content_length":"49924","record_id":"<urn:uuid:86bbff8d-543b-494b-acc1-4ca7f942cbf7>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
algebraic number

algebraic number, any number for which there exists a polynomial equation with integer coefficients such that the given number is a solution. Algebraic numbers include all of the natural numbers, all rational numbers, some irrational numbers, and complex numbers of the form pi + q, where p and q are rational, and i is the square root of −1. For example, i is a root of the polynomial x^2 + 1 = 0. Numbers, such as that symbolized by the Greek letter π, that are not algebraic are called transcendental numbers. The mathematician Georg Cantor proved that, in a sense that can be made precise, there are many more transcendental numbers than there are algebraic numbers, even though there are infinitely many of the latter.
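A quick numerical illustration of the definition (the `poly_eval` helper and the √2 example are additions for illustration, not part of the article):

```python
# Check that a number is (numerically) a root of an integer-coefficient polynomial.
def poly_eval(coeffs, x):
    """Evaluate a polynomial with coefficients [a_n, ..., a_1, a_0] at x (Horner's rule)."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# i is a root of x^2 + 1 = 0, so i is algebraic.
print(abs(poly_eval([1, 0, 1], 1j)))          # 0.0
# sqrt(2) is a root of x^2 - 2 = 0: algebraic despite being irrational.
print(abs(poly_eval([1, 0, -2], 2 ** 0.5)))   # ~0 (up to floating-point error)
```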
{"url":"http://www.britannica.com/print/topic/14948","timestamp":"2014-04-16T23:12:06Z","content_type":null,"content_length":"7584","record_id":"<urn:uuid:ec369cf8-d5bd-469a-98c4-c1dd1272153f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
U-PHIL: Stephen Senn (2): Andrew Gelman

I agree with Senn's comments on the impossibility of the de Finetti subjective Bayesian approach. As I wrote in 2008, if you could really construct a subjective prior you believe in, why not just look at the data and write down your subjective posterior? The immense practical difficulties with any serious system of inference render it absurd to think that it would be possible to just write down a probability distribution to represent uncertainty. I wish, however, that Senn would recognize "my" Bayesian approach (which is also that of John Carlin, Hal Stern, Don Rubin, and, I believe, others). De Finetti is no longer around, but we are! I have to admit that my own Bayesian views and practices have changed. In particular, I resonate with Senn's point that conventional flat priors miss a lot and that Bayesian inference can work better when real prior information is used. Here I'm not talking about a subjective prior that is meant to express a personal belief but rather a distribution that represents a summary of prior scientific knowledge. Such an expression can only be approximate (as, indeed, assumptions such as logistic regressions, additive treatment effects, and all the rest, are only approximations too), and I agree with Senn that it would be rash to let philosophical foundations be a justification for using Bayesian methods. Rather, my work on the philosophy of statistics is intended to demonstrate how Bayesian inference can fit into a falsificationist philosophy that I am comfortable with on general grounds.

4 thoughts on "U-PHIL: Stephen Senn (2): Andrew Gelman"

Which among I. J. Good's 46,656 varieties of Bayesian are you?

I think that this business of pointing to zillions of varieties is just a cop-out that allows some people to say, whatever I do, it's Bayesian deep down (BADD), or it has Bayesian grounding. There has to be something that counts as not-X for the claim of holding X to have any merit. It's not that it matters which account is getting credit, whatever that means. It's that failing to be very clear on the underlying foundations creates an obstacle to doing things better, or to even understanding what the criteria should be for using a given method for a certain problem, or so I think. Of course, I'm no kind of Bayesian, but rather an error statistical philosopher.
{"url":"http://errorstatistics.com/2012/01/23/u-phil-stephen-senn-2-andrew-gelman/","timestamp":"2014-04-19T17:56:23Z","content_type":null,"content_length":"63663","record_id":"<urn:uuid:cf56c360-f781-452f-821e-91b4dc141e46>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
biggest : Java Glossary

To find the largest of two ints use Math.max, e.g.

    // finding the bigger of two ints
    int bigger = Math.max( a, b );
    // also works with long, float and double, e.g.
    double biggerDouble = Math.max( aDouble, bDouble );

To find the largest of three ints use Math.max, e.g.

    // finding the bigger of three ints
    int biggest = Math.max( Math.max( a, b ), c );
    // also works with long, float and double, e.g.
    double biggestDouble = Math.max( Math.max( aDouble, bDouble ), cDouble );

To find the biggest of a set, you presume the first element is the biggest, and then look for an even bigger element.
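The set case described in the last sentence might be sketched like this (the class, method and array names are made up for illustration; they are not from the glossary):

```java
// Finding the biggest element of a set: presume the first element is
// the biggest, then look for an even bigger one.
public class FindBiggest
    {
    public static int biggest( int[] values )
        {
        int biggestSoFar = values[ 0 ];
        for ( int i = 1; i < values.length; i++ )
            {
            biggestSoFar = Math.max( biggestSoFar, values[ i ] );
            }
        return biggestSoFar;
        }

    public static void main( String[] args )
        {
        System.out.println( biggest( new int[] { 3, 1, 4, 1, 5, 9, 2, 6 } ) ); // prints 9
        }
    }
```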
{"url":"http://www.mindprod.com/jgloss/biggest.html","timestamp":"2014-04-19T05:05:23Z","content_type":null,"content_length":"12654","record_id":"<urn:uuid:797345d7-2293-4c84-9270-d67e91978d1b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
analytic continuation of an integral

May 3rd 2012, 07:29 AM

Greetings. We have the following integral, where $E_{s}(x)$ is the Mittag-Leffler function; the integral is well defined for $\Re(s)>1$. I was wondering if we can apply Riemann's trick and replace this integral with a contour integral to obtain a meromorphic integral - one that is analytic almost everywhere in the complex plane - namely, consider the contour integral:

$K(s)=-s\oint_{\gamma}\frac{E_{s}((-x)^{s})-1}{xe^{x}(e^{x}-1)}dx$

where the contour starts and ends at +∞ and circles the origin once. Using this contour along with the Mellin-Barnes integral rep. of the Mittag-Leffler function, can we start working out the analytic continuation of the original integral?
{"url":"http://mathhelpforum.com/calculus/198296-analytic-continuation-integral.html","timestamp":"2014-04-16T13:31:33Z","content_type":null,"content_length":"30230","record_id":"<urn:uuid:ff22bd16-fb05-4606-81a5-6b64e3c3b906>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
simplify a rational fraction

I need to simplify the following fraction: $\frac{2}{x^2-4}+\frac{1}{2x-x^2}$. The answer should be $\frac{1}{x(x+2)}$. How do I proceed?

Use the fact that $x^2 - 4 = (x-2)(x+2)$ and that $2x - x^2 = x(2-x) = -x(x-2)$. That makes your fraction $\frac{2}{(x-2)(x+2)} - \frac{1}{x(x-2)}$. The "common denominator" is $(x-2)(x+2)(x)$. Multiply numerator and denominator of the first fraction by $x$ and that of the second fraction by $(x+2)$.
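The simplification can be double-checked symbolically (a SymPy verification added here, not part of the thread):

```python
# SymPy check: the original sum of fractions equals the claimed answer.
import sympy as sp

x = sp.symbols('x')
expr = 2 / (x**2 - 4) + 1 / (2*x - x**2)

# The difference with the claimed answer cancels to zero:
assert sp.cancel(expr - 1 / (x * (x + 2))) == 0
print(sp.cancel(expr))   # an equivalent form of 1/(x*(x+2))
```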
Hi guys! I've got this very difficult exercise for my uni. Could someone please help? Thank you in advance. http://img2.immage.de/200270494rszexercise2.jpg

Thanks for looking into my problem, Prove It. I had inserted a picture of the exercise but it seems that you could not see it. I have now uploaded it as an attachment. Thanks again.

Why not substitute D and S into the equation for $p''$? This will give you a second order constant coefficient ODE.

Thanks, Prove It, for your response. I will substitute into p''; however, I don't get p' this way. Could you please let me know how I will obtain the differential equation that the first question asks for?

You should end up with

$p''(t) = a[d_0 + d_1p(t) - (s_0 + s_1p(t))]$
$= ad_0 + ad_1p(t) - as_0 - as_1p(t)$
$= (ad_1 - as_1)p(t) + ad_0 - as_0$.

So $p''(t) + (as_1 - ad_1)p(t) = ad_0 - as_0$.

Solve the homogeneous characteristic equation:

$m^2 + as_1 - ad_1 = 0$
$m^2 = ad_1 - as_1$
$m^2 = a(d_1 - s_1)$.

Now since $d_1 < 0$ and $s_1 > 0$, this means $d_1 - s_1 < 0$. Also, since $a > 0$, this means $a(d_1 - s_1) < 0$. So you have $m^2$ equaling a negative number. What does this tell you about $m$? Can you solve the DE now?

My dearest Prove It, can you please send me a PM with some contact information (email)? I cannot send you a PM because I am a new member and don't have 15 posts. I need to discuss something in private. Thanks
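For completeness, here is a sketch of where the hint leads (not part of the original thread): since $m^2 < 0$, the roots $m$ are purely imaginary, so the price oscillates about its equilibrium value.

```latex
m^2 = a(d_1 - s_1) < 0
\;\Rightarrow\; m = \pm i\,\omega, \qquad \omega = \sqrt{a(s_1 - d_1)},
\qquad
p(t) = C_1\cos(\omega t) + C_2\sin(\omega t) + \frac{d_0 - s_0}{s_1 - d_1},
```

where the constant term is the particular solution obtained by setting $p'' = 0$ in $(a s_1 - a d_1)\,p = a d_0 - a s_0$.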
Rosemead Algebra Tutor Find a Rosemead Algebra Tutor ...I am currently focusing on research, and planning to attend a graduate program for a PhD in art history. For the past three summers, I have worked as the lead teaching assistant for a math course for incoming freshmen at Caltech. I'm also a tutor there, and I've been working with students from elementary school to undergrad and community college (including PCC, ELAC, Mt. 51 Subjects: including algebra 2, algebra 1, reading, chemistry ...I currently own all things Macintosh (MAC for short). I love Apple products, and can help you master them too! They are very user friendly, but if you find it difficult to switch over from your PC, I can help you. I have a lot of patience when it comes to educating people at anything they need ... 29 Subjects: including algebra 1, English, reading, writing ...My goal is to tutor any individual to the point where I am no longer needed and for the student to fully understand and adapt to the topic or academic field. I will adapt to the student's passion as well. Usually students need help in the areas that are lacking, or more difficult to understand. 11 Subjects: including algebra 1, algebra 2, chemistry, physics ...I have an MBA Postgraduate degree with emphasis in Accounting and Finance. In addition, I had owned and managed retail jewelry business for the period of seven years. I have been exposed to all aspects of retail business such as customer service, sales, marketing, employee training and book keeping. 21 Subjects: including algebra 1, algebra 2, chemistry, physics I am a classroom instructor and a highly qualified Scientist/Mathematician with a MS in Public Health. I was educated in some of the most rigorous academic programs offered by some of the highest accredited private and state Universities in the state of California. Thus, I am highly qualified to t... 4 Subjects: including algebra 1, Italian, prealgebra, Greek
Physics Help-Kinematic Equations February 6th 2008, 08:21 AM #1 Feb 2008 Physics Help-Kinematic Equations Here is the question.... A car and a motorcycle start from rest at the same time on a straight track, but the motorcycle is 25.0 m behind the car. The car accelerates at a uniform rate of 3.70 m/s^{2} and the motorcycle at a uniform rate of 4.40 m/s^{2}. (a) How much time elapses before the motorcycle overtakes the car? (b) How far will each have traveled during that time? (c) How far ahead of the car will the motorcycle be 2.00 s later? (Both vehicles are still accelerating.) I do not even know how to begin on this one. I know for the first part I need to find the final time, but I do not know how to set it up to do that. I am stressing major about this and in desperate need of some help! Thanks in advance! Hint 1: If an object starts to accelerate (from $V_0 = 0$) at a constant rate a, it'll take x distance in t time. (The units must be equivalent. For example x=meters, t=seconds, a=m/s^2) $x = \frac{1}{2}at^2$ Hint 2: The motorcycle is 25 meters behind the car. So can we say $x_{car} + 25 = x_{motorcycle}$ ? OK.... Still not getting it unfortunately. I have literally sat and stared at this Q for hours. Seems to be flying right past me. I've done well up until now, but I just cannot seem to get this. In t seconds, the car will travel $x_{car} = \frac{1}{2}(3.70)t^2$ and the motorcycle will travel $x_{motorcycle} = \frac{1}{2}(4.4)t^2$. Now imagine the scene. The motorcycle is 25 m behind the car and they start to accelerate. By the time the motorcycle catches up the car, the motorcycle has gone 25 meters more than the car. So $x_{motorcycle} = x_{car} + 25$. Put them together, $\frac{1}{2}(4.4)t^2 = \frac{1}{2}(3.70)t^2 + 25$ Solving this will give you the time $t$. 
Let's take Wingless's hints even further:

Since the motorcycle is 25 m behind the car, we can say that when the car and motorcycle meet, the motorcycle will have traveled 25 m more than the car. That makes its initial position -25 m, since it was that far behind the car's position (0). Now, we look at the kinematic equation for displacement:

$x_f = x_0 + v_0t + \frac{1}{2}at^2$

Now we solve it for the car and the motorcycle:

$x_f = 0 + 0t + \frac{1}{2}(3.7)t^2$
$x_f = \frac{1}{2}(3.7)t^2$

$x_f = -25 + 0t + \frac{1}{2}(4.4)t^2$
$x_f = \frac{1}{2}(4.4)t^2 - 25$

Now we see that if you meet up with someone, your final position will be exactly the same, so we now have an equivalence between the motorcycle and the car.

$\frac{1}{2}(3.7)t^2 = \frac{1}{2}(4.4)t^2 - 25$

Put like terms on the same side and make the 25 positive:

$25 = \frac{1}{2}(4.4)t^2 - \frac{1}{2}(3.7)t^2$

Now we multiply each acceleration by $\frac{1}{2}$:

$25 = 2.2t^2 - 1.85t^2$

Combine like terms:

$25 = 0.35t^2$
$71.43 = t^2$

Find t:

$8.45 = t$

Therefore it takes 8.45 seconds for the motorcycle to catch up to the car. Try b and c using these.

Aryth's solution is more general. You can solve questions that include a starting velocity or displacement by using his equations.

Thanks so much!! For b I got 157 m for the motorcycle and 132 m for the car. For c I got that the motorcycle will be 38 m in front of the car.... but I'm not sure that's right.

For b, you take the time at which they met, and plug it in to the final position equations for each vehicle that we derived in part a:

$x_{car} = \frac{1}{2}(3.7)(8.45)^2 = 132.1m$

$x_{motorcycle} = \frac{1}{2}(4.4)(8.45)^2 - 25 = 132.1m^{**}$

**(Notice the two final positions agree, as they must at the meeting point. The distance the motorcycle actually traveled is $\frac{1}{2}(4.4)(8.45)^2 = 157.1m$, which is 25 m more than the car's 132.1 m; that extra 25 m is exactly how it closed the gap.)

For c, be careful: two seconds after the meeting, both vehicles still carry the velocities they built up before it, so you cannot restart $\frac{1}{2}at^2$ from the meeting point. The easiest approach is to keep measuring time from the very start and evaluate the same position equations at $t = 8.45 + 2 = 10.45$ s:

$x_{car} = \frac{1}{2}(3.7)(10.45)^2 = 202.0m$
$x_{motorcycle} = \frac{1}{2}(4.4)(10.45)^2 - 25 = 215.2m$

$x_{ahead} = 215.2 - 202.0 = 13.2m$

There you are. Keep the reference points straight: positions are measured from the car's start, so the motorcycle carries its -25 m offset throughout, while its total distance traveled (157.1 m at the meeting) needs no offset.
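The numbers above can be verified in a few lines of Python (a sketch of the thread's arithmetic; the variable names are mine):

```python
import math

a_car, a_moto, gap = 3.7, 4.4, 25.0         # m/s^2, m/s^2, m

# (a) Meeting time: 0.5*a_moto*t^2 - gap = 0.5*a_car*t^2
t = math.sqrt(gap / (0.5 * (a_moto - a_car)))
print(round(t, 2))                           # -> 8.45

# (b) Position measured from the car's start; distance from each own start
x_car  = 0.5 * a_car  * t**2                 # car position = meeting point
d_moto = 0.5 * a_moto * t**2                 # motorcycle's distance traveled
print(round(x_car, 1), round(d_moto, 1))     # -> 132.1 157.1

# (c) Keep measuring time from the start; evaluate 2 s after the meeting
t2 = t + 2.0
lead = (0.5 * a_moto * t2**2 - gap) - 0.5 * a_car * t2**2
print(round(lead, 1))                        # -> 13.2
```

Note that the lead grows like $0.35t^2 - 25$, so it is small right after the pass and increases quadratically afterwards.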
(integer-decode-float x)

Function: Return three values:

1) an integer representation of the significand.
2) the exponent for the power of 2 that the significand must be multiplied by to get the actual value. This differs from the DECODE-FLOAT exponent by FLOAT-DIGITS, since the significand has been scaled to have all its digits before the radix point.
3) -1 or 1 (i.e. the sign of the argument).
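For a 64-bit double (53 significand bits), the same decomposition can be reproduced with `math.frexp`; the following is an illustrative Python analogue of the Common Lisp function, not the function itself, and it glosses over zeros, infinities and NaNs:

```python
import math

def integer_decode_float(x):
    """Return (significand, exponent, sign) with
    significand * 2**exponent * sign == x, for a 53-bit double."""
    m, e = math.frexp(abs(x))   # x = m * 2**e with 0.5 <= m < 1
    sig = int(m * 2**53)        # scale all digits before the radix point
    sign = -1 if math.copysign(1.0, x) < 0 else 1
    return sig, e - 53, sign    # exponent differs from frexp's by FLOAT-DIGITS

print(integer_decode_float(1.0))  # -> (4503599627370496, -52, 1)
```

Here $4503599627370496 = 2^{52}$, and indeed $2^{52} \cdot 2^{-52} = 1.0$, matching the three values described above.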
9,223,372,036,854,775,808:1 (not even close) I originally put this in the Browns thread but thought it should be here. Never underestimate the power of a monkey with a calculator. This is not even close to right. They make a lot of ridiculous assumptions in coming to this number. If I had some time to do the leg work, I could give you a much better probability. I recreated this number in about 5 seconds by tracing their faulty assumptions. For the math geeks out there, what they did was considered every event as a "Bernoulli Process" with a probability outcome of 50:50 generating a binomial distribution. Essentially, the calculation becomes (1/2)*(1/2)*(1/2).... 63 times (in order to eliminate 63 teams there must be 63 games in the tournament and 63 outcomes). This generates a % probability, now take the inverse of that to get the "9,223,372,036,854,775,808:1." The real flaw is that this method assumes that a 16 seed beating a 1 seed has probability of .50 (50:50 chance). In the 20 some years since expanding the tournament this has never happened. That is 80 trials with 0 successes, that implies (statistically) that the probability of a 16 winning the game is 0 and removes 4 powers of 2 from the answer (divide by 16). Here is another ridiculous aspect of this "model," it assumes (as a consequence of its probability assumptions) that every team has an even probability of winning the tournament. So basically a 16 seed has the same probability of winning the tournament as a 1 (1 in 64 or 1.5625%). If I offered you any of the 1 seeds this year at 64:1, would you take it? Hell yeah you would. How about if I offered you a 16 seed (or for that matter an 8 seed at 64:1)? Now you can also essentially remove the 2/15 matchup because the 2 nearly always wins that, and the same with the 3/14 and the 4/13. These upsets happen, but not anywhere near 50% (I would guess 2/15 happens about 3-5% and the others happen about 10%) of the time. 
Now all of a sudden instead of 2^63 we actually have ~ 2^47. These changes bring the probability down from 9,223,372,036,854,775,808 to 1 all the way to 140,737,488,355,328 to 1. Now this 140 trillion is still a big number, but there are more simplifications that will bring it down further. Outside of the first round, the odds start getting more difficult to calculate as this begins to expand into a Markovian process, but I think you can agree that 1's do not lose to 8/9's 50% of the time, nor do 2's lose to 7/10's 50% of the time, and so on. Statistics like this are exceptionally deceiving and really piss me off because they seem accurate, but they are not. Not even close. With some research, you could reduce the probability much further, but it is not worth it for me to do it.... I don't actually get paid to report truthful and accurate facts by a major network (maybe I should).
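The powers of two quoted above are easy to check (a quick sketch of the poster's arithmetic, not a serious bracket model):

```python
# Naive model: 63 coin-flip games, each a 50:50 Bernoulli trial
naive = 2 ** 63
print(f"{naive:,}")        # -> 9,223,372,036,854,775,808

# Treat the 1/16, 2/15, 3/14 and 4/13 first-round games as near-certain:
# 4 regions x 4 games = 16 coin flips removed from the exponent
adjusted = 2 ** (63 - 16)
print(f"{adjusted:,}")     # -> 140,737,488,355,328  (~141 trillion)
```

So removing those sixteen "sure" games alone shrinks the naive odds by a factor of 65,536.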
Dulzura Algebra 1 Tutor Find a Dulzura Algebra 1 Tutor ...I have general knowledge in English, communication, history, political science/government, and religious studies. I currently working as a tutor at Cuyamaca College STEM Center. My approach to tutoring is: learn by doing, utilizing a range of examples, explanations, outside the box thinking, and a variety of textbooks. 18 Subjects: including algebra 1, chemistry, calculus, physics ...Though math was easy for me at the time, I realized that my passion was history. I graduated with a bachelor's in History with a minor in Asian Languages. Actively using the tools given to you is pertinent to the learning process and I encourage my students to do this. 8 Subjects: including algebra 1, algebra 2, grammar, world history ...I am a current 3 year college student with a Biology major. I have taken multiple science and math based classes during my tenure and enjoy what I am learning. In all my semesters of college I have been on the vice president's list of academics for have an overall grade of 3.5 or above. 17 Subjects: including algebra 1, chemistry, statistics, biology ...My participation in SDSUs Research Experience for Undergraduates and Teachers in 2007 gave me a particular interest in presenting mathematics in a way that inspires students to pursue post-secondary study in math, science, and engineering. I have taught Honors Pre-Calculus with Trigonometry and ... 6 Subjects: including algebra 1, GED, algebra 2, trigonometry I really enjoy helping students of all ages to reach their potential and truly understand the material he or she is struggling with. I have experience teaching high school and undergrad students Biology, Math and Spanish. I am very patient, and always try to use different teaching techniques in order to obtain the best results for students. 8 Subjects: including algebra 1, Spanish, chemistry, physiology
Distributed Tree Search and Its Application to Alpha-Beta Pruning
Chris Ferguson, Richard E. Korf

We propose a parallel tree search algorithm based on the idea of tree-decomposition, in which different processors search different parts of the tree. This generic algorithm effectively searches irregular trees using an arbitrary number of processors without shared memory or centralized control. The algorithm is independent of the particular type of tree search, such as single-agent or two-player game, and independent of any particular processor allocation strategy. Uniprocessor depth-first and breadth-first search are special cases of this generic algorithm. The algorithm has been implemented for alpha-beta search in the game of Othello on a 32-node Hypercube multiprocessor. The number of node evaluations grows approximately linearly with the number of processors P, resulting in an overall speedup for alpha-beta with random node ordering of P^0.75. Furthermore, we present a novel processor allocation strategy, called Bound-and-Branch, for parallel alpha-beta search that achieves linear speedup in the case of perfect node ordering. Using this strategy, an actual speedup of 12 is obtained with 32 processors.

This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy.
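For reference, the serial alpha-beta recurrence that the paper parallelizes can be sketched as follows (a generic illustration, not the authors' Othello implementation; the list-based tree and the `kids`/`leaf` interface are made up for the example):

```python
import math

def alpha_beta(node, depth, alpha, beta, children, evaluate, maximizing=True):
    """Plain serial alpha-beta on a game tree.

    children(node) returns the successor positions (empty at a leaf);
    evaluate(node) scores a leaf from the maximizing player's point of view.
    """
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for child in succ:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                          children, evaluate, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: the minimizer avoids this branch
                break
        return value
    value = math.inf
    for child in succ:
        value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                      children, evaluate, True))
        beta = min(beta, value)
        if alpha >= beta:       # alpha cutoff: the maximizer avoids this branch
            break
    return value

# Toy tree: nested lists are internal nodes, ints are leaf scores.
kids = lambda n: n if isinstance(n, list) else []
leaf = lambda n: n
print(alpha_beta([[3, 5], [6, 9]], 2, -math.inf, math.inf, kids, leaf))  # -> 6
```

The cutoffs are exactly what makes parallelization hard: how much work a subtree needs depends on bounds discovered elsewhere, which is what the paper's Bound-and-Branch allocation strategy addresses.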
Decision Making Beyond Arrow’s ’Impossibility Theorem’, With the Analysis of Effects of Collusion and

Cited by 2 (0 self)

Abstract—Most modern physical theories are formulated in terms of differential equations. As a result, if we know exactly the current state of the world, then this state uniquely determines all the future events – including our own future behavior. This determination seems to contradict the intuitive notion of a free will, according to which we are free to make decisions – decisions which cannot be determined based on the past locations and velocities of the elementary particles. In quantum physics, the situation is somewhat better in the sense that we cannot determine the exact behavior, but we can still determine the quantum state, and thus, we can determine the probabilities of different behaviors – which is still inconsistent with our intuition. This inconsistency does not mean, of course, that we can practically predict our future behavior; however, in view of many physicists and philosophers, even the theoretical inconsistency is somewhat troubling. Some of these researchers feel that it is desirable to modify physical equations in such a way that such a counter-intuitive determination would no longer be possible. In this paper, we analyze the foundations for such possible theories, and show that on the level of simple mechanics, the formalization of a free will requires triple interactions – while traditional physics is based on pairwise interactions between the particles. I. FREE WILL: A NATURAL IDEA Intuitively, most of us believe that we are able to make conscientious decisions, i.e., that we have free will. If we walk to a corner, then we can turn right or cross the street. The commonsense belief is that it is not possible to predict beforehand what exactly a person will do.

Cited by 1 (1 self)

Abstract. In his logical papers, Leo Esakia studied corresponding ordered topological spaces and order-preserving mappings. Similar spaces and mappings appear in many other application areas such as the analysis of causality in space-time. It is known that under reasonable conditions, both the topology and the original order relation ⪯ can be uniquely reconstructed if we know the “interior” ≺ of the order relation. It is also known that in some cases, we can uniquely reconstruct ≺ (and hence, topology) from ⪯. In this paper, we show that, in general, under reasonable conditions, the open order ≺ (and hence, the corresponding topology) can be uniquely determined from its closure ⪯.

Cited by 1 (1 self)

Abstract. We show that in many application areas including soft constraints reasonable requirements of scale-invariance lead to polynomial (tensor-based) formulas for combining degrees (of certainty, of preference, etc.). Partial orders naturally appear in many application areas. One of the main objectives of science and engineering is to help people select decisions which are the most beneficial to them. To make these decisions,
– we must know people’s preferences,
– we must have the information about different events – possible consequences of different decisions, and
– since information is never absolutely accurate and precise, we must also have information about the degree of certainty.
All these types of information naturally lead to partial orders:
– For preferences, a < b means that b is preferable to a. This relation is used in decision theory; see, e.g., [1].

2010

In many real-life applications, we have an ordered set: a set of all space-time events, a set of all alternatives, a set of all degrees of confidence. In practice, we usually only have partial information about an element x of this set. This partial information includes positive knowledge: that a ≤ x or x ≤ a for some known a, and negative knowledge: that a ̸≤ x or x ̸≤ a for the known a. In the case of a total order, the set of all elements satisfying this partial information is an interval. We show that in the general case of a partial order, the corresponding analogue of an interval is a convex set. We also show that in general, to describe partial knowledge, it is sufficient to have only negative information about x but it is not sufficient to have only positive information.

Abstract—We show that in many application areas including soft constraints reasonable requirements of scale-invariance lead to polynomial formulas for combining degrees (of certainty, of preference, etc.).

The study of Artificial Neural Networks started with the analysis of linear neurons. It was then discovered that networks consisting only of linear neurons cannot describe non-linear phenomena. As a result, most currently used neural networks consist of non-linear neurons. In this paper, we show that in many cases, linear neurons can still be successfully applied. This idea is illustrated by two examples: the PageRank algorithm underlying the successful Google search engine and the analysis of family happiness. 1 Linear Neural Networks: A Brief Reminder Neural networks. A general neural network consists of several neurons exchanging signals. At each moment of time, for each neuron, we need finitely many numerical parameters to describe the current state of this neuron and the signals generated by this neuron. The state of the neuron at the next moment of time and the signals generated by the neuron at the next moment of time are

For a society to function efficiently, it is desirable that all members of this society care not only about themselves, but also about the society as a whole, i.e., about all the other individuals from the society. In practice, most people are only capable of caring about a few other individuals. We analyze this problem from the viewpoint of decision theory and show that even with such imperfect individuals, it is possible to make sure that everyone’s decisions are affected by the society as a whole: namely, it is sufficient to make sure that people have emotional attachment to those few individuals who are capable of caring about the society as a whole. As a side effect, our result provides a possible explanation of why the Biblical commandment to love your God encourages ethical behavior.

Many decisions are made by voting. At first glance, the more people participate in the voting process, the more democratic – and hence, better – the decision. In this spirit, to encourage everyone’s participation, several countries make voting mandatory. But does mandatory voting really make decisions better for the society? In this paper, we show that from the viewpoint of decision making theory, it is better to allow undecided voters not to participate in the voting process. We also show that the voting process would be even better – for the society as a whole – if we allow partial votes. This provides a solid justification for a semi-heuristic “fuzzy voting” scheme advocated by Bart Kosko. Need for democratic decision making. Often, a social entity faces a problem, and there are several alternative ways to solve this problem. For example, to build a new baseball stadium, the city can either use the existing funds or issue a bond – and hope that the future profits from this stadium will pay off
User daveh

member for 2 years, 10 months · seen Jan 7 '13 at 14:37 · profile views 83

Jan
4 · comment on "Order of elements": This is the famous "r-s-t" problem assigned to first year graduate students at the University of Chicago out of Alperin-Bell, which intentionally contains many highly nontrivial problems mixed among the easy ones; older grad students are sworn to secrecy as to which are which. We all spent hours trying to solve it inside the symmetric group using multiple different special cases. Only one or two students in history were known to get it correct, and it is easier to do inside PSL. It saddens me to see the answer posted here.
20 · awarded Nice Answer
Sep
20 · comment on "Non-isomorphic finite simple groups": The $A_8 \cong GL(4,2)$ example is well-known at the University of Chicago where it's an exercise in Alperin-Bell, the idea being students shouldn't know if an exercise is hard or easy in advance. That and the famous "rst" problem have tormented many a 1st year grad student.
19 · answered "Order of automorphism of Projective special linear group"
19 · awarded Critic
Jun
14 · comment on "searching for text for studying representation theory": I taught a 1-semester class for seniors/1st year grad students out of this book and they all really enjoyed it. It should be very suitable for self study as well.
Jun
12 · comment on "Heuristic argument that finite simple groups _ought_ to be 'classifiable'?": Can someone explain the mysterious last sentence in this answer?
30 · answered "When (if ever) disclose your identity as a reviewer?"
24 · awarded Supporter
24 · answered "indecomposable modules of the symmetric group"
20 · awarded Editor
Feb
20 · revised "Restrictions of Modules and Dimensions" (deleted 4 characters in body)
20 · answered "Restrictions of Modules and Dimensions"
29 · awarded Necromancer
15 · awarded Teacher
Jun
answered "What are some examples of colorful language in serious mathematics papers?"
Need help with Elimination (Linear Equation)
September 27th 2008, 11:12 AM #1 Junior Member Sep 2008

So, I'm having a very difficult time understanding elimination and would appreciate anyone's help on that matter (most of my class feels the same, so I'm not alone in this, for once :P). The equations that I'm stumped on are:

2x + 4y = 7
4x - 3y = 3

I understand the graphing and substitution methods pretty well, but not this method. A step-by-step walkthrough would help a lot with this. Any help?

Last edited by largebabies; September 27th 2008 at 11:33 AM.

The point is, you want to eliminate one of the variables so that you can solve for the other. To do this, you must make the coefficients of the variable you want to eliminate equal to each other (except maybe by a factor of a minus sign). Now, let's say we want to eliminate x. We want the coefficients of x in both equations to be equal, in which case we can subtract one equation from the other and get rid of the x. (If we make one coefficient the negative of the other, then we simply add both equations.) So let's do that: eliminate the x. What would you do to get the coefficients of x the same in both equations?

Oops, I made a mistake writing the equation out. It's 2x + 4y = 7, not 2x - 4y = 7. Does that change anything?
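For the corrected system, the elimination the reply describes works out as follows (multiply the first equation by 2 so both x coefficients are 4, then subtract):

```latex
\begin{aligned}
2(2x + 4y) &= 2(7) &&\Rightarrow& 4x + 8y &= 14\\
(4x + 8y) - (4x - 3y) &= 14 - 3 &&\Rightarrow& 11y &= 11 \;\Rightarrow\; y = 1\\
2x + 4(1) &= 7 &&\Rightarrow& x &= \tfrac{3}{2}
\end{aligned}
```

Checking in the second equation: $4\left(\tfrac{3}{2}\right) - 3(1) = 6 - 3 = 3$, as required.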
Brazilian Journal of Oceanography Print version ISSN 1679-8759 Braz. j. oceanogr. vol.60 no.3 São Paulo July/Sept. 2012 Remote wind stress influence on mean sea level in a subtropical coastal region Mabel Calim Costa^I,^*; Marcos Eduardo Cordeiro Bernardes^II ^IInstituto Nacional de Pesquisas Espaciais (Rodovia Presidente Dutra, Km 39, Cachoeira Paulista, SP, Brasil) ^IIUniversidade Federal de Itajubá - Instituto de Recursos Naturais (Avenida BPS, Itajubá MG, Brasil) The purpose of this study was to assess the relative influence of remote wind stress on mean sea level (MSL) variations in the coastal region of Cananeia (Sao Paulo State, Southern Brazil) during the period from 1/1/1955 to 12/31/1993. An optimized low-pass Thompson filter for the study area, and spectral analysis (cross spectrum, coherence and phase lag) of the relationship between the MSL and both parallel (T//) and perpendicular (T⊥) wind stress components were applied. These were extracted from four grid points of the NCEP/NCAR global model. The predominance of annual oscillations as those of greatest coherence and energy, of periods of approximately 341 days (frequency of 0.00293 cpd) and 410 days (frequency of 0.00244 cpd), respectively, were observed. Offshore NCEP/NCAR grid points were those with the highest coherence and energy throughout the study in relation to the observed MSL. This may be linked to the restriction of the NCEP/NCAR model as regards the inland limit. It is also concluded that remote wind stress may play an important role in several MSL time scales, including the annual ones. Based on criteria such as coherence and energy peaks, the wind stress component of greatest effect on MSL was the parallel one. Descriptors: Thompson filter, Spectral analysis, NCEP/NCAR, Cananeia, Brazil.
Given the difficulty encountered in describing phenomena occurring at the ocean-atmosphere interface, as well as in determining the correlation between meteorological and oceanographic data, the variability of mean sea level (MSL), within the context of the global warming and climate change scenarios, has motivated several studies on the part of the scientific community. The belief in a static sea level, or in oceans fluctuating around a stationary average level, lasted during the last three decades of the twentieth century (NEVES, 2005).
At the present time, the idea that the MSL presents fluctuations on various time scales has become well established (FRANCO et al., 2007). The investigation of MSL variability has been the focus of several studies around the world due to various factors: its effects along coastal areas, changes in coastal morphodynamic patterns, adjustment of the shoreline profile and the modification of saline intrusion into coastal aquifers (NEVES, 2005). Pugh (1996) reviewed the main concepts and methodologies for MSL and tidal analyses. In Brazil, Mesquita (2002, 2009) studied MSL variability in the light of the main tide gauges along the country's coast. Neves (2005) and Neves and Muehe (2008) undertook the analysis of different methodologies for estimating MSL and studied its variability along the Brazilian coastline in the context of climate change. Harari and Camargo (1994, 1995) and Harari et al. (2004) studied the variation of MSL particularly at Santos and Cananeia, both on the Sao Paulo State coast. Cananeia was also the object of a NOAA (1997) study, which presented long term cycles in the region between 1955 and 2005. Camargo and Harari (1994) and Camargo et al. (1998, 1999) studied the variability of the MSL in those areas by means of numerical models. Improvements in MSL prediction are closely related to advances in numerical filtering methods, since MSL can be defined as the observed sea level if inertial, gravitational and 'high' frequency disturbances (such as diurnal, semidiurnal, terdiurnal astronomic tidal constituents, etc.) are removed. Table 1 gives an example of the filtering process used to obtain MSL on the southeastern and southern coast of Brazil by Uaissone (2004 apud Oliveira, 2007) and Oliveira (2009). Duchon (1979) described a Fourier method for filtering time series by adding a 'sigma' factor as a means to minimize Gibbs'^1 phenomenon.
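Duchon's sigma-factor idea can be sketched as follows. This is a generic illustration of Lanczos low-pass weights, not the exact coefficients used in any of the cited studies; the function name, the normalization to unit sum, and the parameter conventions are my own choices:

```python
import numpy as np

def lanczos_lowpass_weights(n, fc):
    """Symmetric Lanczos low-pass weights in the spirit of Duchon (1979).

    n  : half-width of the window (2*n + 1 weights in total)
    fc : cut-off frequency in cycles per sample
    The 'sigma' factor tapers the truncated sinc so as to damp the
    ripples produced by Gibbs' phenomenon.
    """
    k = np.arange(-n, n + 1)
    ks = np.where(k == 0, 1, k)          # placeholder to avoid 0/0; k = 0 is fixed below
    w = np.sin(2.0 * np.pi * fc * k) / (np.pi * ks)
    sigma = np.sin(np.pi * k / n) / (np.pi * ks / n)
    w = np.where(k == 0, 2.0 * fc, w * sigma)
    return w / w.sum()                    # normalize so the weights sum to one
```

The normalization guarantees a DC gain of exactly 1, i.e. the mean of the series is preserved by the filter.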
The Lanczos filter, as it is called, is the method most commonly used in oceanography to predict MSL, mainly due to the simplicity of its implementation. Thompson (1983) proposed a filter that allows the user to choose the cut-off frequencies. Despite some complexity in its implementation, Thompson's filter seems more versatile, since it provides an optimized treatment of MSL series. Several Brazilian authors have applied the low-pass filter proposed by Thompson (1983): i) compared data from the Rio de Janeiro State Tide Gauge Network with meteorological data extracted from the global model of NCEP/NCAR; ii) Oliveira et al. (2007) studied the effect of storm surges in Paranagua Bay (PR) and their effects on tidal records from Cananeia (SP); iii) compared Lanczos and Thompson filters' performances in the analysis of MSL variability in the coastal region of Cananeia, and iv) Oliveira et al. (2009) analyzed the response of the coastal sea level to atmospheric phenomena through the use of wind and atmospheric pressure data based on the NCEP/NCAR model and tide gauge data from Cananeia (SP). The Thompson low-pass filter was chosen for this study mainly because of its better performance compared with the Lanczos filter (COSTA; BERNARDES, 2010) and because of its versatility in allowing the user to determine the frequencies to be attenuated in the filtering process. The purpose of this study is to evaluate the influence of remote wind stress on MSL variations along the coastal region of Cananeia, in southern São Paulo State, southeastern Brazil (Fig. 1). Sea level observations were based on tide gauge series sampled at the Oceanographic Institute of Sao Paulo University's base in Cananeia (SP), as the Institute ensures the quality of maintenance and altimetric reference over the years, with hourly measurements from 1955 to 1993.
Meteorological data were extracted from the Reanalysis Project of NCEP/NCAR (http://www.ncep.noaa.gov), from which the zonal (Tx) and meridional (Ty) components of wind stress were estimated at 10 m above sea level with grid spacing equal to 1.875º (lat) by 1.905º (long), available at the synoptic times of 00:00, 06:00, 12:00 and 18:00 GMT. Four grid points near the study area were chosen for the space-time evaluation of tide gauge and meteorological series (Fig. 2). These data were acquired by direct access to the files in formats such as html, netcdf and others; in this study, the second option (netcdf) was chosen, the data being manipulated through an Excel® macro function. The properties of the reanalysis records are described in detail in Kalnay et al. (1996). In addition to the time series analysis against MSL data, wind model estimates were evaluated at four grid points (the closest and most relevant to the region) chosen from the NCEP/NCAR global model, and these were arranged as in Figure 2: points 1 and 2 closer to the coast, and points 3 and 4 further offshore. In order to identify the main frequencies with the highest possible coherence and energy, the correlation between the filtered oceanographic and meteorological time series was assessed through spectral analysis. The analysis took the estimation of the cross-spectrum, coherence and lag into consideration. It was carried out by calculating the power spectral density and the cross-spectral analysis between the MSL and each of the wind stress components, in the Matlab® environment. The evaluation of the influence of remote wind stress components on the variability of the MSL was based on the frequency domain. After the numerical filtering, the power spectral density was estimated and then coherence and lag were calculated.
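The cross-spectral quantities mentioned above (power spectral density, coherence, phase lag) can be sketched with standard Welch-type estimators. The original work used Matlab®; the SciPy-based function below is my own stand-in, and the segment length is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.signal import coherence, csd

def cross_spectral_summary(x, y, fs, nperseg=256):
    """Frequencies, magnitude-squared coherence and phase lag (degrees)
    between two series sampled at fs (e.g. fs = 4 samples/day for
    6-hourly data, so frequencies come out in cycles per day)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    _, pxy = csd(x, y, fs=fs, nperseg=nperseg)
    phase_deg = np.angle(pxy, deg=True)
    return f, cxy, phase_deg
```

With fs expressed in samples per day, the returned frequency axis is directly comparable with the cpd values quoted throughout the paper.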
Numerical filtering

The low-pass filter was used to obtain the MSL in order to suppress the astronomical and inertial tidal components while preserving the 'low' frequency signal (periods longer than three days) - a methodology similar to that used by Oliveira et al. (2007). Thompson's (1983) low-pass filter allows its optimization by the user, who defines the main parameters of calculation, especially by imposing pre-selected cut-off frequencies. The harmonic components were extracted from Mesquita (1997), who analyzed the records of sea level in the coastal regions of southeastern Brazil through the application of the harmonic method developed by Franco and Rock (1971). The selected components were Q1, O1, P1, K1, N2, M2, S2 and M3, which correspond to approximately 90% of the tidal energy on the Cananeia coast (PICARELLI et al., 2002). All the constants were determined at confidence intervals of 95% (MESQUITA, 1997), as can be seen in Table 2. The filter equation is described by the convolution of weights over the time series in the frequency domain (i.e. a Fourier Transform), as seen in detail in Thompson (1983). The main parameters for optimization are: the number of weights, the number of frequencies to be attenuated, the lower cut-off frequency (Ω1) and the upper cut-off frequency (Ω2), which define the band to be filtered out. It should be noted that the filtering process is not perfect (EMERY; THOMSON, 2001). In other words, there is no exact, perfect answer for all the frequencies that should hold intact nor a null answer for those intended to be attenuated. The ability of a filter to resolve sequential events is inversely proportional to the bandwidth, i.e. the narrower the bandwidth, the longer the time series needed to resolve individual events (EMERY; THOMSON, 2001).

Spectral Analysis

The main purpose of time series analysis methods is to define the variability of the data in terms of dominant periodic functions, or pattern recognition.
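The "convolution of weights over the time series" described under Numerical filtering can be sketched as follows. Thompson's weights themselves come from a constrained least-squares problem that is not reproduced here; this sketch only shows how a symmetric weight vector is applied, and why n points are lost at each end of the record (the "data loss" mentioned in the text). The function name is my own:

```python
import numpy as np

def apply_symmetric_filter(series, weights):
    """Apply a symmetric low-pass filter given as a full weight vector
    w[-n..n] (length 2*n + 1). 'valid' convolution drops n points at
    each end of the record."""
    series = np.asarray(series, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.convolve(series, weights, mode="valid")
```

For example, a 5-point running mean (weights summing to one) leaves a constant series untouched and strongly attenuates a Nyquist-frequency oscillation, which is exactly the low-pass behavior sought.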
In order to apply a spectral analysis, the first care to be taken is to remove, a priori, the trend and the mean of the time series, so as to avoid distortions in the 'low' frequency components of the spectrum. At the same time, the time series to be analyzed must ideally be long enough to contain many cycles of the lowest frequency of interest. Another concern is aliasing, i.e. the contamination of the spectrum by energy at frequencies higher than the sampling can resolve, which folds back onto lower frequencies; it can be mitigated by filtering the data so as to separate out only the fluctuations of interest, reducing the amount of energy at adjacent frequencies that may distort the energy values of the signal (EMERY; THOMSON, 2001). These authors also state that the sampling of the signal interferes in obtaining spectral estimates; for instance, an hourly series collected on a single day cannot fully describe the behavior of a daily cycle, just as monthly series over one year are not sufficient to describe an annual cycle. Cross-spectral analysis is used to measure the degree of relationship between two stochastic processes. In this case, the covariance function between time series, described in detail in Franco (1982), Kumaresan (1993) and Emery and Thomson (2001), was applied. The evaluation of the energetic influence of the remote wind stress on the MSL required the same sampling frequency in both databases. Thus, the sea level records, originally collected at hourly intervals, were converted to six-hour samples by calculating the average value within this range. The wind stress was decomposed into its zonal (Tx) and meridional (Ty) components. Then, in order to align them with the Cananeia continental shelf, these components were rotated at 45º relative to geographic north, and both the shelf-parallel and shelf-perpendicular wind stress components were estimated. The entire 39-year sample of MSL data was used. The evaluation of the energy content was adapted from the work of Pawlowicz et al. (2002).
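The two preprocessing steps just described — the 45º rotation into shelf-parallel/perpendicular components and the hourly-to-6-hourly averaging — can be sketched as below. The paper does not state the sign convention of the rotation, so the one used here (a common choice, with the angle measured from geographic north) is an assumption, as are the function names:

```python
import numpy as np

def rotate_to_shelf(tx, ty, angle_deg=45.0):
    """Rotate zonal (tx) and meridional (ty) wind stress into
    shelf-parallel (T//) and shelf-perpendicular (T⊥) components.
    The 45-degree default and the sign convention are assumptions."""
    a = np.deg2rad(angle_deg)
    t_par = tx * np.cos(a) + ty * np.sin(a)      # parallel component
    t_perp = -tx * np.sin(a) + ty * np.cos(a)    # perpendicular component
    return t_par, t_perp

def to_six_hourly(hourly):
    """Average an hourly series into 6-hour blocks
    (the record length must be a multiple of 6)."""
    hourly = np.asarray(hourly, dtype=float)
    return hourly.reshape(-1, 6).mean(axis=1)
```

A stress vector pointing exactly along the assumed shelf direction then projects entirely onto T//, with a null perpendicular component.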
This method was chosen due to: i) its faster computational time and ii) its more accurate approach in resolving frequencies that are close together. The cross-spectrum methodology (spectrum, coherence and lag) was adapted. In this study, only the maximum coherence peaks in the spectrum were considered. For some cases, the secondary peaks were added to the analysis if their coherence values were above 70%. The choice of this value as a cut-off for the coherence analysis was based on Menezes (2007), who established a value of 65%.

Numerical filtering

The performance assessment of the low-pass filter proposed by Thompson (1983) considered the following conditions: i) with no preset frequencies to be attenuated (Fig. 2A) and ii) with the imposition of the major tidal harmonic components to be attenuated (Fig. 2B). These tests were analyzed using the mean squared deviation (MSD) values as a function of the frequency band (the region between the cut-off frequencies) and the respective number of weights used in the filtering process. For the situation in which no frequencies are imposed for attenuation, a clear improvement in the adherence of the response function to the idealized filter proposed by Thompson (1983) was observed (Fig. 2A). This reflects the close relationship between the number of weights and the number of pre-defined frequencies (ωj), since reducing the number of imposed frequencies allowed a decrease in the number of weights, which consequently reduced the data loss associated with it. However, the main interest of this analysis lies in ensuring that the energy attenuation takes place - ideally the complete suppression of the main local harmonic components. This is why the test which considers the imposition of the 16 local main tidal components was selected, as it ideally considers a null response of the main tidal harmonic components for Cananeia, giving the filter an optimized character for the study region.
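The peak-selection rule described above — maximum coherence peaks, plus secondary peaks whose coherence exceeds the 70% cut-off — can be sketched as follows. The function name and the simple three-point definition of a local maximum are my own choices, not the paper's procedure:

```python
import numpy as np

def significant_peaks(power, coh, coh_min=0.70):
    """Indices of local power-spectrum maxima whose coherence with the
    MSL exceeds coh_min (the 70% cut-off adopted in the text,
    after Menezes, 2007)."""
    power = np.asarray(power, dtype=float)
    coh = np.asarray(coh, dtype=float)
    interior = (power[1:-1] > power[:-2]) & (power[1:-1] > power[2:])
    is_peak = np.r_[False, interior, False]     # endpoints never count
    return np.flatnonzero(is_peak & (coh >= coh_min))
```

Peaks that are energetic but fall below the coherence threshold are discarded, matching the paper's point that a strong spectral peak alone does not imply influence on the MSL.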
From the analysis of the MSD values, the minimum error among all the tests was found under the following conditions: Ω1 of 5º/hour and 144 weights (289 weights convoluted altogether), with an MSD of 6.2 × 10^-5 (Table 3). These values agree with the mathematical requirement that the sum of Ω1 and Ω2 may not be a multiple of 180º/h, and reflect the best adherence of the response function to the filter idealized by Thompson (1983). However, there was another set of values which complied with the aforementioned requirements, with an even smaller number of weights (e.g. 120 weights) and greater proximity between the cut-off frequencies. The goal is to get as close as possible to the ideal filter, i.e. a square wave, which would result in a very small distance between Ω1 and Ω2, such that they would merge into a single frequency. Successive approximations to the Fourier series, and hence to the transfer function, are not convergent near discontinuities; this is explained by a truncation error of the Fourier series when trying to approximate a real function by sines and cosines (Fig. 3). The second set of optimum values (Test 2) combines the lowest MSD with a feasible number of weights and the shortest distance between the cut-off frequencies, with no significant disturbance generated by Gibbs' phenomenon. Since no filter is perfect, as noise from the stop-band cannot be completely removed and certain frequencies in the pass-band will be distorted, it is often necessary to rescale the output series so that the total variance in the pass-band spectral estimates equals the total variance of the input data for that frequency range (EMERY; THOMSON, 2001). In this study, the rescaling process was carried out using a Hanning window. In Figure 3, it can be seen that Test 1 allowed the passage of oscillations of 2.7 days (0.9816) and resulted in the transfer of power at periods of 3.2 days (1.005), 5 days (1.001), 5.5 days (1.012), 6.7 days (1.004) and 7.5 days (1.002).
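The admissibility condition quoted from Thompson (1983) — the sum of the cut-off frequencies must not be a multiple of 180º per sampling unit — is easy to encode. The function name and numerical tolerance are my own; the two cut-offs must be expressed in the same units (degrees per hour for hourly data, degrees per 6 h for the 6-hourly case):

```python
def cutoffs_valid(omega1, omega2, step=180.0):
    """True when omega1 + omega2 is NOT a multiple of `step` degrees
    per sampling unit (Thompson's admissibility condition)."""
    s = (omega1 + omega2) % step
    return not (abs(s) < 1e-9 or abs(s - step) < 1e-9)
```

For instance, the pair chosen for the 6-hourly data, Ω1 = 36.6º/6 h and Ω2 = 79.2º/6 h, passes the check, while a pair summing to exactly 180º would not.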
In the second test, oscillations above 2.3 days (0.992) were retained by the filter and there was a power transfer at periods of 2.6 days (1.012), 4.6 days (1.013), 5 days (1.014) and 6 days (1.007). In both tests there was a power transfer to the 'high' (less than 3 days) and to the 'low' (3 days and longer) frequencies, allowing the passage of oscillations between the semidiurnal and terdiurnal bands. According to Thompson (1983), some power leakage to higher frequencies is to be expected, since the filter that best suppresses tides is not necessarily good at suppressing a particular inertial frequency. In other words, the best filter is the one that transmits less power (THOMPSON, 1983). Emery and Thomson (2001) suggest that the frequency response functions should have reasonably sharp transitions between adjacent stop and pass-bands, especially if the data do not have wide 'spectral gaps' between the dominant frequencies of the two bands, as occurs in this study. The same technique was used to filter the wind stress components obtained from the Reanalysis project (NCEP/NCAR). In this case, however, the filter had to be adapted to a 6-hour sampling. The filtering of the meteorological data should retain the same characteristics as the filter optimized for Cananeia (Test 2), with the same bandwidth and data loss as those of the sea level data. Thus the chosen parameters, corresponding to the number of weights and the cut-off frequencies, are respectively: 21 weights (41 weights convoluted altogether), Ω1 = 36.6º/6 h and Ω2 = 79.2º/6 h.

Power spectral density of MSL

The annual and interannual oscillations represent most of the energy peaks found in the sea level data series. The MSL's maximum power peak lies in an annual oscillation (0.00269 cpd, period of 372.4 days). Secondary peaks of significant energy (energy exceeding the white-noise threshold) were found at higher frequency bands, with oscillations between 3 and 50 days.
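The frequency-period pairs quoted throughout the results can be sanity-checked with the trivial conversion between cycles per day and days (this check is mine, not part of the original analysis; small mismatches reflect the rounding used in the paper):

```python
def period_days(freq_cpd):
    """Period in days corresponding to a frequency in cycles per day."""
    return 1.0 / freq_cpd
```

Applied to the quoted values, 0.00269 cpd ≈ 372 days, 0.00549 cpd ≈ 182 days and 0.00201 cpd ≈ 498 days, all within a day of the figures in the text.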
The most energetic oscillation among these higher frequencies was the fluctuation with a 50-day period, as can be seen in Figure 4.

Power spectral density of the parallel wind stress component (T//)

The spectra obtained for the parallel wind stress component (T//) indicated that NCEP/NCAR point 1 always had the lowest energy level among all the grid points considered. This may be explained by the non-physical limitations of the NCEP/NCAR model, such as grid resolution, morphological discretization and coastal proximity, which may result in a loss of reliability of the modeled data in these regions. Of the maximum peaks of the spectra analyzed, most occurred at an annual oscillation with a frequency of 0.00269 cpd (period of 371.7 days). Another annual fluctuation was captured at a frequency of 0.00256 cpd (period of 390.6 days) at point 2 (Fig. 5A). An interannual oscillation was detected at point 1, with a period of 497.5 days (0.00201 cpd). The most energetic oscillation at point 4 was on a seasonal scale, with a frequency of 0.00549 cpd (period of 182.2 days), which exceeded the energy captured by the annual oscillation at point 3. However, the coherence values of this seasonal oscillation were less than 50%, contrasting with the high levels of coherence found for the annual oscillation (above 70%).

Cross-correlation between MSL and the parallel wind stress component (T//)

The selection of oscillations with both the highest correlation and energy was carried out by means of the lag between the time series, shown both in the frequency domain (given in degrees) and in the time domain (given in days). Lags with a negative sign were interpreted as indicative of a delay in the action of the wind stress components on the MSL, or as derived from the formation of shelf waves that are advanced in time compared with the evolution of the weather systems (UAISSONE, 2004 apud OLIVEIRA, 2007). The opposite happens when the lags are positive.
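The conversion between the two ways of expressing the lag — degrees in the frequency domain versus days in the time domain — is a one-liner. The function name is my own, and the sign convention follows whatever convention the cross-spectrum estimator uses:

```python
def lag_days(phase_deg, freq_cpd):
    """Convert a cross-spectral phase lag in degrees, at a frequency
    given in cycles per day, into a time lag in days."""
    return (phase_deg / 360.0) / freq_cpd
```

At the dominant annual frequency of 0.00269 cpd, a 90-degree phase corresponds to roughly a quarter of a year (~93 days), while a zero phase means the forcing and the MSL response are simultaneous, as reported in the text.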
The analysis of maximum spectral density points to the same frequency of 0.00269 cpd in the MSL spectra, in the parallel wind stress component spectra and in the cross-spectral analysis. The MSL's highest energy peak occurred at this frequency, compatible with the T// component and the cross-spectral analysis, both at the same period. Points 2 and 4 are those that had, respectively, the highest energy values of T// and of MSL versus T//, as can be seen in Figure 5. Regarding the lag, there is a predominance of phases equal to zero, when the atmospheric forcing and the MSL response are in phase.

Power spectral density of the perpendicular wind stress component (T⊥)

In general, the power spectral density of the perpendicular wind stress component had higher values than those estimated for the parallel component at all the points sampled. There is a clear predominance of annual oscillations in the energy domain, at the frequencies of 0.00275 cpd and 0.00269 cpd, respectively. These frequencies could sometimes be as much as ten times more energetic than their counterparts in the parallel component analysis (Fig. 6). These oscillations also sometimes showed higher coherence with the MSL, with values above 65%.

Cross-correlation between MSL and the perpendicular wind stress component (T⊥)

The analysis of the perpendicular wind stress component energy and its correlation with the MSL suggested that the maximum values were concentrated at the annual frequency (371.7 days), detected at point 4, as can be seen in Figure 6. This annual fluctuation (0.00269 cpd) was the dominant peak at all the grid points, as was also observed in the MSL spectrum. This oscillation also had high levels of coherence, all above 65% (the maximum being found at point 1, with a value of 75.2%).
As for the parallel wind stress component, the MSL and the perpendicular wind stress component were also in phase.

Numerical filtering

In both tests involving the wind stress components, nearly the same fraction (≈ 26%) of the variance of the original tide gauge series was retained. Thus, it can be stated that the high-frequency oscillations - especially the local tidal harmonic components - are the cause of approximately 75% of the total variance of the series. There was a similar influence of high-frequency phenomena on the tide gauge variance in the coastal regions of Rio de Janeiro and at Paranagua, where the proportion was even higher than 88%, as can be seen in Table 1. In order to ensure that the filters work on the same frequency band for both sea level and wind stress components, the cut-off windows used in this study were chosen based on the analysis of the influence of atmospheric disturbance (wind stress) on the variation of the MSL. It was assumed that weather phenomena with periods longer than 3 days influenced the MSL disturbance over that same period. However, as the atmosphere and the ocean are fluids of different characteristics, which also respond to the influence of similar frequencies, there can be interactions at different periods in response to the same disturbance.

Spectral Analysis

The correlations between the MSL and the wind stress components for the entire period of 39 years showed that the annual oscillations presented the most energetic peaks, mainly in the T⊥ analysis. However, the influence of T// is a more likely explanation for the variation of sea level, since this component was energetically significant and the T⊥ component had lower coherence than T//. Thus, an oscillation may not be effective in influencing the sea level variation if it only shows strong power with the MSL (a significant peak compared to the spectral range) without having enough coherence.
Regarding the phase signals, the oscillations with higher coherence values had null phases, which can be interpreted as indicative of a simultaneous action of the wind forcing and the MSL. Throughout the 39-year period, the influence of the energy peaks of the annual oscillations tended to decline over the years, and thus their contribution to the variation of the MSL in the coastal region of Cananeia was reduced through time. On the other hand, secondary peaks of energy (periods below 50 days) presented increasing trends over the period under analysis. The analysis of Thompson's low-pass filters resulted in the selection of Test 2, which brought together the basic assumptions of the optimal filter, i.e.: i) the lowest MSD; ii) a sum of cut-off frequencies different from multiples of 180º/h; iii) the shortest distance between Ω1 and Ω2; and iv) no significant disturbance by Gibbs' phenomenon. The spectra were analyzed within a restricted frequency band of periods longer than 3 days, i.e. below 0.33 cpd. However, there was a mathematical leakage of energy outside the chosen spectral range. In other words, part of the 'high' frequencies (periods of less than 3 days) was not ideally attenuated by the low-pass Thompson filter. Despite that, these inconsistencies were below 10% of the average PSD peaks found in the entire spectrum. This was then considered noise and thus incorporated into the study, since the filtering was considered successful, although not perfect. In general, the spectra presented power peaks over the years analyzed, which suggested the prevalence of phenomena caused by the influence of wind stress, mainly associated with annual oscillations (especially the oscillation of 0.00269 cpd). These also combined energy peaks with the highest coherence values found in the overall analysis, as they were responsible for the largest MSL variance in the coastal region of Cananeia. Oscillations of shorter period with coherence values above 70% also stood out.
However, these oscillations were not associated with a significant energy level: higher coherence frequencies were not necessarily associated with maximum peaks of energy. The sample points obtained from the grid of the NCEP/NCAR global model presented different responses to the disturbance during the same period, indicating that the disposition of these points in some way affected the detection of similar spectral bands. Non-physical limitations of the NCEP/NCAR model, such as grid resolution and morphological discretization, may have introduced some restrictions into the estimated results. Point 1 always presented the lowest energy level in relation to the other sampling points, possibly due to its proximity to the continent. Grid points closer to the coast may have been biased by the global model resolution, which probably did not distinguish the influence of the continent, resulting in a loss of reliability of the modeled data. The opposite occurred at point 4 (the grid point furthest offshore in this study), which showed more robust results. The offshore points, i.e. 2 and 4, were those with the highest coherence and energy throughout the study. The discrepancies in the time lags among the sampled points were another result possibly affected by the aforementioned model limitations. Lags with a zero value were interpreted as indicative of a simultaneous action of the wind forcing on the MSL. It is concluded that the wind stress components can affect the MSL variation not only through oscillations with periods of the order of days, associated with the passage of cold fronts (periods of 3 to 10 days) or on storm-surge scales (i.e. 3 to 30 days), as described by Uaissone (2004 apud Oliveira, 2007), Castro et al. (2006) and Pugh (1996), but also through annual and interannual oscillations.
The parallel component of the wind stress seems to be more effective in the region of Cananeia than the perpendicular one, mainly due to the association of high coherence and energy peaks. The inclusion of sea-level atmospheric pressure is recommended as a parameter to improve the understanding of the effects of frontal systems on MSL variations and on the wind stress components, as well as for the future projection of the main oscillations found in the coastal region of Cananeia (SP) under climate change scenarios.

ACKNOWLEDGEMENTS

The authors thank Prof. Claudio Neves (COPPE-UFRJ) for his invaluable assistance in preparing this study and CAPES for the financial support granted to the first author.

REFERENCES

CAMARGO, R.; HARARI, J. Modelagem numérica de ressacas na plataforma sudeste do Brasil a partir de cartas sinóticas de pressão atmosférica na superfície. Bol. Inst. Oceanogr., v. 42, n. 1/2, p. 19-34, 1994.
CAMARGO, R.; HARARI, J.; CARUZZO, A. Numerical modeling of tidal circulation in coastal areas of the southern Brazil. Afro-Am. Gloss News, v. 3, n. 1, 1998.
CAMARGO, R.; HARARI, J.; CARUZZO, A. Basic statistics of storm surges over the south-western Atlantic. Afro-Am. Gloss News, ed. 3, n. 2, 1999.
CASTRO, B. M.; LORENZZETTI, J. A.; SILVEIRA, I. C. A.; MIRANDA, L. B. Estrutura termohalina e circulação na região entre Cabo de São Tomé (RJ) e o Chuí (RS). In: ROSSI-WONGTSCHOWSKI, C. L. B.; MADUREIRA, L. S. P. (Org.). O ambiente oceanográfico da Plataforma Continental e do Talude na região sudeste-sul do Brasil. São Paulo: EDUSP, 2006. 472 p.
COSTA, M. C.; BERNARDES, M. E. C. Estimativa do Nível Médio do Mar (NMM) em Cananéia (SP) a partir da otimização do Filtro de Thompson. In: CONGRESSO BRASILEIRO DE OCEANOGRAFIA, 4., 2010, Rio Grande, RS. Anais... Trabalho n. 1019, p. 01943-01945. CD Rom.
DUCHON, C. E. Lanczos filtering in one and two dimensions. J. Appl. Meteorol., v. 18, p. 1016-1022, 1979.
EMERY, W. J.; THOMSON, R. E. Data analysis methods in physical oceanography. Amsterdam: Elsevier Science, 2001. 636 p.
FRANCO, A. S. Análise espectral contínua e discreta. São Paulo: Instituto de Pesquisas Tecnológicas do Estado de São Paulo, 1982. 194 p.
FRANCO, A. S.; ROCK, N. J. The fast Fourier transform and its application to tidal oscillation. Bol. Inst. Oceanogr., 1971.
HARARI, J.; CAMARGO, R. Simulação da propagação das nove principais componentes de maré na plataforma sudeste brasileira através de modelo numérico hidrodinâmico. Bol. Inst. Oceanogr., v. 42, n. 1, p. 35-54, 1994.
HARARI, J.; CAMARGO, R. Tides and mean sea level variabilities in Santos (SP), 1944 to 1989. Relatório Interno Inst. Oceanogr., n. 36, p. 1-15, 1995.
HARARI, J.; FRANÇA, C. A. S.; CAMARGO, R. Variabilidade de longo termo de componentes de marés e do nível médio do mar na costa brasileira. Afro-Am. Gloss News, ed. 8, n. 1, 2004.
KALNAY, E.; KANAMITSU, M.; KISTLER, R.; COLLINS, W.; DEAVEN, D.; GANDIN, L.; IREDELL, M.; SAHA, S.; WHITE, G.; WOOLLEN, J.; ZHU, Y.; LEETMAA, A.; REYNOLDS, R.; CHELLIAH, M.; EBISUZAKI, W.; HIGGINS, W.; JANOWIAK, J.; MO, K. C.; ROPELEWSKI, C.; WANG, J.; JENNE, R.; JOSEPH, D. The NCEP/NCAR 40-year reanalysis project. Bull. Am. Meteor. Soc., Mar. 1996.
KUMARESAN, R. Spectral analysis. In: MITRA, S. K.; KAISER, J. F. (Org.). Handbook for digital signal processing. New York: John Wiley & Sons, 1993. p. 1143-1237.
MESQUITA, A. R. Marés, circulação e nível do mar na costa sudeste do Brasil. Documento preparado à Fundespa (Fundação de Estudo e Pesquisas Aquáticas), 1997. Disponível em: <www.fundespa.com.br>. Access: 01 Sep. 2009.
MESQUITA, A. R. Sea level variations along the Brazilian coast: a short review. In: BRAZILIAN SYMPOSIUM ON SANDY BEACHES, 2000, Itajaí, SC. Anais... Itajaí, 2000.
MESQUITA, A. R. Hourly, daily, seasonal and long-term sea levels along the Brazilian coast. Afro-Am. Gloss News, jan. 2002.
MESQUITA, A. R. O gigante em movimento. Scientific American Brasil, v. 1, p. 17-23, 2009. (Especial Oceanos)
MUNK, W. Twentieth century sea level: an enigma. P. Natl. Acad. Sci. USA, v. 99, n. 10, p. 6550-6555, 2002.
NEVES, C. F. O nível do mar: uma realidade física ou um critério de engenharia? Vetor, v. 15, n. 2, p. 19-33, 2005.
NEVES, C. F.; MUEHE, D. Vulnerabilidade, impactos e adaptação a mudanças do clima: a zona costeira. In: Mudança do clima no Brasil: vulnerabilidade, impactos e adaptação. Brasília, DF: Centro de Gestão e Estudos Estratégicos, n. 27, 2008. p. 217-297. (Série Parcerias Estratégicas).
NOAA. Mean sea level trend 874-051 Cananeia, Brazil. Disponível em: <http://tidesandcurrents.noaa.gov/sltrends/sltrends_global_station.shtml?stnid=874-051>. Access: 17 Aug. 2012.
OLIVEIRA, M. M. F.; EBECKEN, N. F. F.; SANTOS, I. A.; NEVES, C. F.; CALOBA, L. P.; OLIVEIRA, J. L. F. Modelagem da maré meteorológica utilizando redes neurais artificiais: uma aplicação para a Baía de Paranaguá - PR, parte 2: dados meteorológicos de Reanálise do NCEP/NCAR. Rev. Bras. Meteor., v. 22, n. 1, p. 53-62, 2007.
OLIVEIRA, M. M. F.; EBECKEN, N. F. F.; OLIVEIRA, J. L. F.; SANTOS, I. A. Neural network model to predict a storm surge. J. Appl. Meteor. Clim., v. 48, p. 143-155, 2009.
PAWLOWICZ, R.; BEARDSLEY, B.; LENTZ, S. Classical tidal harmonic analysis including error estimates in MATLAB using T_TIDE. Comput. Geosci., v. 28, p. 929-937, 2002.
PICARELLI, S. S.; HARARI, J.; CAMARGO, R. Modelling the tidal circulation in Cananeia-Iguape estuary and adjacent coastal area (São Paulo, Brazil). Afro-Am. Gloss News, ed. 6, 2002.
PUGH, D. T. Tides, surges and mean sea-level. Swindon: John Wiley & Sons, 1996. 486 p.
THOMPSON, R. O. R. Y. Low-pass filters to suppress inertial and tidal frequencies. J. Phys. Oceanogr., v. 13, p. 1077-1083, 1983.
(Manuscript received 28 July 2011; revised 14 July 2012; accepted 24 July 2012)

* Corresponding author: mabelcalim@gmail.com

1 The Gibbs phenomenon refers to distortions close to a discontinuity, caused by truncation of the Fourier series.
Riemannian Geometry

I come from a background of having done undergraduate and graduate courses in General Relativity and an elementary course in Riemannian geometry. Jurgen Jost's book does give somewhat of an argument for the statements below, but I would like to know if there is a reference where the following two things are proven explicitly:

1. That the sectional curvature of a 2-dimensional subspace of a tangent space at a point on the Riemannian manifold is independent of the choice of basis. That is, the definition of the sectional curvature depends only on the choice of the 2-dimensional subspace.

2. That the sectional curvature determines the Riemannian curvature fully.

Secondly, can one give me a reference where I can see how sectional curvature is computed in practice? To a first-timer in this subject it is not obvious how one does a calculation on "all" 2-dimensional subspaces of a high-dimensional space, especially when people talk of manifolds with "constant sectional curvature". How are they realized? I would like to see some explicit examples to understand this point.

Further, some studies about homogeneous spaces (needed to understand some issues in Quantum Field Theory) got me to the following 4 very non-trivial ideas in Riemannian manifolds, which I am stating in my own way here:

1. That the isometry group of a Riemannian manifold is always a Lie group.

2. That the isotropy subgroup of any point on a Riemannian manifold, under the smooth transitive action of its own isometry group on itself, is a compact subgroup. (The context being what is called a "Riemannian homogeneous space".)

3. {This point was earlier framed in a way which made the bi-implication false, as pointed out by some people.} The formulation should be as follows. A Riemannian homogeneous space is a Riemannian manifold on which the isometry group acts transitively. Now the theorem is that such a space is compact IFF its isometry group is compact.
That is the statement whose intuition I am looking for. Apologies for the confusion caused.

4. {This question too was not framed properly. Basically I could not figure out how to write the nabla for the connection! It should be as Jose has pointed out.} A Riemannian manifold is locally symmetric if and only if the Riemann curvature tensor is parallel with respect to the Levi-Civita connection.

Can one give me the intuition behind these, or give me specific references where these are proven in explicit detail?

riemannian-geometry dg.differential-geometry

@Anirbit: you might consider editing the question to give a quick summary of which of your 4 questions are proved/disproved in the several answers below. – Scott Morrison♦ Dec 21 '09 at 15:34

Thanks Scott for the suggestion. I have made updates to the question based on the responses and added comments to the responses as I progress in my understanding of what is going on. – Anirbit Dec 23 '09 at 8:08

8 Answers

To get a better feel of the Riemann curvature tensor and sectional curvature:

1. Work through one of the definitions of the Riemann curvature tensor and sectional curvature with a $2$-dimensional sphere of radius $r$.

2. Define the hyperbolic plane as the space-like "unit sphere" of $3$-dimensional Minkowski space, defined using an inner product with signature $(-,+,+)$. Work out the sectional and Riemann curvature of that space.

3. Repeat #1 and #2 for the $n$-dimensional sphere and hyperbolic space, as well as flat space.

Sectional curvature determines Riemann curvature: That the sectional curvature uniquely determines the Riemann curvature is a consequence of the following:

1. The Riemann curvature tensor is a quadratic form on the vector space $\Lambda^2T_xM$.

2. The sectional curvature function corresponds to evaluating the Riemann curvature tensor (as a quadratic form) on decomposable elements of $\Lambda^2T_xM$.

3.
There is a basis of $\Lambda^2T_xM$ consisting only of decomposable elements.

Added in response to Anirbit's comment: Perhaps you shouldn't try to compute the curvature too soon. First, make sure you understand the Riemannian metric of the unit sphere and hyperbolic space inside out. There are many ways to do this, but the most concrete way I know is to use stereographic projection of the sphere onto a hyperplane orthogonal to the last co-ordinate axis. Either the hyperplane through the origin or the one through the south pole works fine. This gives you a very nice set of co-ordinates on the whole sphere minus one point. Work out the Riemannian metric and the Christoffel symbols. Also, work out formulas for an orthonormal frame of vector fields and the corresponding dual frame of 1-forms. Figure out the covariant derivatives of these vector fields and the corresponding dual connection 1-forms.

After you do this, do everything again with hyperbolic space, which is the hypersurface $-x_0^2 + x_1^2 + \cdots + x_n^2 = -1$ with $x_0 > 0$ in Minkowski space, with the Riemannian metric induced by the flat Minkowski metric. You can do stereographic projection just like for the sphere, but onto the unit $n$-disk given by $x_1^2 + \cdots + x_n^2 < 1$ and $x_0 = 0$, where the formula for the hyperbolic metric looks just like the spherical metric in stereographic co-ordinates but with a sign change in appropriate places. This is the standard conformal model of hyperbolic space.

After you understand this inside out, you can use these pictures to figure out why the $n$-sphere and its metric is given by $O(n+1)/O(n)$ and hyperbolic space by $O(n,1)/O(n)$, and why the metrics you've computed above correspond to the natural invariant metric on these homogeneous spaces. You can then check that the formulas for invariant metrics on homogeneous spaces give you the same answers as above.
Use references only for the general formulas for the metric, connection (including Christoffel symbols), and curvature. I recommend that you try to work out these examples by hand yourself instead of trying to follow someone else's calculations. If possible, however, do it with another student who is also trying to learn this at the same time. If, however, you want to peek at a reference for hints, I recommend the book by Gallot, Hulin, and Lafontaine. I suspect that the book by Thurston is good too (I studied his notes when I was a student). For invariant Riemannian metrics on a homogeneous space, I recommend the book by Cheeger and Ebin (available cheap from AMS! When I was a student, I had to pay a hundred dollars for this little book, but it was well worth it). But mostly, when I was learning this stuff, I did and redid the same calculations many times on my own. I was never able to learn much more than a bare outline of the ideas from either books or lectures. Just try to get a rough idea of what's going on from the books, but do the details yourself.

Thanks for your kind reply. I have earlier computed Riemann and Ricci and scalar curvatures of 4-manifolds in the sense of common space-times. Can you tell me a reference where I can see the computation of sectional curvature of a manifold of dimension > 2 (that's where things are not so clear!) – Anirbit Dec 21 '09 at 18:40

Here is one way to think about your first question which at least might provide a more geometric picture about what is going on. I want to think about the curvature $R(X,Y)$ as parallel transport around the infinitesimal parallelogram $X \wedge Y$. If I drag a vector $Z$ around the parallelogram $X \wedge Y$, the result is $R(X,Y)Z$.
Since the connection is metric, the map $Z \mapsto R(X,Y)Z$ is actually an infinitesimal rotation; this is the observation that $$\langle R(X,Y)Z, W\rangle = -\langle Z, R(X,Y)W\rangle.$$

Now I want to define a new operator $S$ which acts bilinearly on pairs of 2-vectors. This will be $$S(X\wedge Y, Z \wedge W) = \langle R(X,Y)Z,W\rangle.$$ Geometrically, $S$ reports how much the infinitesimal 2-plane $Z \wedge W$ rotates as it is dragged around the 2-plane $X\wedge Y$. To see that this is well-defined, we need only check $S(-,Z\wedge W) = -S(-, W \wedge Z)$. But this follows precisely because of the previous equation for $R$.

From here on, I'm going to use the metric to think of $S$ as $$S(X \wedge Y) = \sum_{2\text{-planes } Z\wedge W} \langle R(X,Y) Z,W \rangle ~Z\wedge W,$$ where the sum runs over some basis of 2-planes in $\bigwedge^2 T_pM$. The somewhat mysterious "pair swap" symmetry $\langle R(X,Y) Z, W\rangle = \langle R(Z,W)X,Y\rangle$ can now be interpreted as saying that the operator $S$ is symmetric. In particular, this means that we can take the spectral decomposition of $S$ to get a basis of orthogonal unit-area eigenplanes $X_i \wedge Y_i$, $$S(X_i \wedge Y_i) = \lambda_i \cdot X_i \wedge Y_i.$$ The eigenvalues $\lambda_i$ are your sectional curvatures for this basis; any other sectional curvatures can be easily computed from these. Note that knowledge of $S$ is now clearly sufficient to reconstruct the curvature tensor, since $$\langle R(X,Y)Z, W\rangle = \langle S(X\wedge Y), Z \wedge W \rangle,$$ so in fact the sectional curvature tensor $S$ determines the usual curvature tensor $R$.

do Carmo's "Riemannian Geometry"

2) is very easy (assuming your manifold is connected; if not, it's false): you have an induced action of the isometries which fix the point $x$ on the tangent space $T_xX$. This action preserves the metric, which is a positive definite inner product on this vector space.
This isometry subgroup is thus a closed subgroup of $SO(T_xX,g)$, which is a compact group. This actually also proves that it's a Lie group, since any closed subgroup of a Lie group is itself Lie.

This also makes it easy to prove the true direction of 3), since the isometry group acting on $x$ gives a submersion with compact image and compact fibers, showing the group is compact.

The "only if" direction in 3 is incorrect: take standard $\mathbb R^2$ and introduce several bumps to make the isometry group trivial. For the "if" direction, see the Kobayashi–Nomizu "Foundations of Differential Geometry", vol. I, around Theorem 4.6, for references. There and in vol. II you also find answers to both of your questions 1-2, I think.

To see how sectional curvature is computed you need to go through a lot of examples.

The "if" direction in 4 is incorrect: there are manifolds of constant scalar curvature that are not locally symmetric.

Just to add some things to Igor Belegradek's post:

"1. That the isometry group of a Riemannian manifold is always a Lie group."

This is the famous Myers–Steenrod theorem, proven in 1939 (Myers, S. B. and N. E. Steenrod: The group of isometries of a Riemannian manifold. The Annals of Mathematics, Vol. 40, No. 2, April 1939, pp. 400-416.) It is in fact highly non-trivial, and I think you need that the manifold is connected.

Your point "3. That the isometry group of a Riemannian manifold is compact IFF the Riemannian manifold is compact." is, as Igor pointed out, false; the only thing which is right is:

3. If the (connected) Riemannian manifold is compact, then the isometry group is compact.

This is also a part of the Myers–Steenrod theorem, and can be found in the reference above.
The "idea" of the proof is the following (let $(M,g)$ be a Riemannian manifold):

• Show that $(G=Iso(M,g), CO, op)$ is a locally compact topological transformation group. Here $CO$ is the compact-open topology, and $op: G \times M \rightarrow M$ the group action. Moreover, $(M,g)$ compact implies $(G, CO, op)$ compact.

• Show that any tangential subgroup $H$ of $Diff(M)$ inherits a differentiable structure $[b]$ such that $(H,[b],op)$ ($op$ being the natural operation on $M$) is a Lie transformation group which is first-countable. The underlying topology $\tau$ is finer than the $CO$-topology. (If $(M,g)$ has countably many connected components, $G$ is a tangential subgroup of $Diff(M)$.)

• Show that the topology $\tau$ cannot be strictly finer than the $CO$-topology. (Needs frame bundles, etc.)

Two answers (by Deane Yang and Matt Noonan) have addressed the question about sectional curvatures determining the full curvature tensor, but they seem incomplete to me. The proofs I know use the additional symmetry $R(X,Y)Z+R(Y,Z)X+R(Z,X)Y=0$.

Of course there's a basis for $\Lambda^2 T_xM$ consisting of decomposable two-vectors, as Deane says, but knowing a quadratic form on a basis doesn't determine it – you need to know the bilinear form on pairs of basis elements. And in Matt's argument, why should the eigenvectors of $S$ be decomposable two-vectors?

I have two favourite books on differential geometry where you can find answers to your questions:

1. do Carmo's Riemannian Geometry (as suggested by David Lehavi)

2. Besse's Einstein manifolds

Let me just point out that your 4th point is not quite correct.
The statement is that a Riemannian manifold is locally symmetric if and only if the Riemann curvature tensor (and not just the scalar curvature) is parallel with respect to the Levi-Civita connection; i.e., $\nabla R = 0$. This presupposes that by "locally symmetric" you understand that the geodesic symmetry (i.e., changing the sign of the parameter of the geodesic) is an isometry at every point; otherwise it is a definition of locally symmetric.

Edit (in response to Anirbit's comment): This is indeed a result of Élie Cartan, and in fact, as far as I understand the history, Cartan started his research on symmetric spaces by studying the question of which Riemannian manifolds have parallel curvature. He then classified the irreducibles and found the well-known relationship to the classification of simple Lie algebras. I'm not sure when the characterisation in terms of the geodesic symmetry was introduced.

The proof is not complicated. It is basically that the curvature tensor is invariant under the map which interchanges opposite points along a geodesic. In other words, if you fix a point $p$ in your manifold and look at a geodesic $\gamma$ through $p$ in the direction $X$, then if you follow the geodesic a 'time' $s$ you get to some point $p(s)$. But there is also a geodesic through the same point with direction $-X$, and if you follow that geodesic for a time $s$ you end up at a point $p(-s)$. The map which sends $p(s)$ to $p(-s)$ for any (small, say) $s$ leaves the curvature invariant. The covariant derivative of the curvature along $X$ at $p$ can be understood as the difference between the curvature parallel transported to $p(s)$ and that transported to $p(-s)$, divided by $s$, in the limit as $s\to 0$; but even before you take the limit, the difference vanishes.

Since you said your background is in relativity, I wonder whether you are not also interested in the case of locally symmetric spaces in Lorentzian (or other indefinite) signature.
In general signature this is still an open problem, but for Lorentzian signature it was solved by Cahen and Wallach in this paper.

Thanks Jose for pointing that out. So can you give me the intuition behind this? This theorem seems to be from Cartan. – Anirbit Dec 23 '09 at 8:07
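As a concrete instance of the computation recommended in the first answer, here is a small sketch (my own addition, not from any of the answers) that uses sympy to check that the round 2-sphere of radius $r$ has constant sectional curvature $1/r^2$:

```python
import sympy as sp

# Round metric on the 2-sphere of radius r, coordinates (theta, phi).
r, th, ph = sp.symbols('r theta phi', positive=True)
x = [th, ph]
g = sp.Matrix([[r**2, 0], [0, (r * sp.sin(th))**2]])
ginv = g.inv()
n = len(x)

# Christoffel symbols Gamma^k_{ij} of the Levi-Civita connection
Gamma = [[[sp.simplify(sp.Rational(1, 2)
                       * sum(ginv[k, l] * (sp.diff(g[l, i], x[j])
                                           + sp.diff(g[l, j], x[i])
                                           - sp.diff(g[i, j], x[l]))
                             for l in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

# Riemann tensor R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb}
#                          + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
def riem(a, b, c, d):
    expr = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][d][b]
                - Gamma[a][d][e] * Gamma[e][c][b] for e in range(n))
    return sp.simplify(expr)

# Lower the first index and take the sectional curvature of span(d_theta, d_phi)
R_0101 = sp.simplify(sum(g[0, a] * riem(a, 1, 0, 1) for a in range(n)))
K = sp.simplify(R_0101 / (g[0, 0] * g[1, 1] - g[0, 1]**2))
# K simplifies to 1/r**2: constant sectional curvature, independent of the point
```

The same machinery, with the metric matrix swapped out, handles the hyperbolic-plane and flat-space cases suggested above.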
Mchenry, IL Math Tutor Find a Mchenry, IL Math Tutor ...In my experiences, I have learned how to gauge what a student needs to succeed and I always strive to provide him or her with just that. I am comfortable teaching students of all ages and subjects of all levels. I graduated with a BS in Biology (with minor in chemistry) and, since then, have ta... 26 Subjects: including trigonometry, ACT Math, discrete math, ASVAB ...I am confident in my ability to deliver my knowledge to my students. I have my master's in science in the genomics area which plays an important role in genetics. Besides my master's, I studied genetics and molecular biology in my bachelor's too which generated my interest in this field. 27 Subjects: including algebra 2, chemistry, elementary (k-6th), Microsoft Excel ...If you would like help with Linear Algebra, I would be delighted to be of service. I took logic and earned an A. Since then, I have continued to study additional fallacies and have applied it in advanced philosophy (e.g. 57 Subjects: including trigonometry, differential equations, linear algebra, SAT math ...I am able to tutor all Math subjects from Pre-Algebra to Calculus and Basic Microsoft Excel. I am available in the Roselle Area after 7pm on weekdays and weekends as needed. I am very passionate about mathematics and connecting it with students learning. 10 Subjects: including statistics, algebra 1, algebra 2, calculus Former classroom teacher and current at home mom with ten years tutoring experience. I have successfully tutored students in Pre-Algebra, Algebra I & II, Geometry, College Algebra, and Biology. Students are more confident, parents are happier, and all are pleased with the report card results. 
13 Subjects: including trigonometry, algebra 1, algebra 2, biology
Eddy-Current Simulation in Prisms, Plates, and Shells with the Program EDDYNET

STP722

Turner, L. R., Physicist, Argonne National Laboratory, Argonne, Ill.
Lari, R. J., Physicist, Argonne National Laboratory, Argonne, Ill.
Sandy, G. L., Physicist, New College, University of South Florida, Sarasota, Fla.

Pages: 11
Published: Jan 1981

The program EDDYNET solves eddy-current problems by means of an integral-equation approach. The conducting material is represented by a network of current-carrying line elements. Consequently, Maxwell's field equations can be replaced by Kirchhoff's circuit rules. The loop equations for voltages, supplemented by the node equations for the currents, comprise a set of linear equations that can be solved repeatedly to give the time development of the eddy currents. Currents, magnetic fields, and power are calculated at each step. A TRIM-like mesh generator and internal indexing of lines, nodes, and loops permit solutions with complex geometries incorporating many elements. Results can appear in the form of movies representing currents, field penetration, and power distribution. Calculations can now be performed for conducting, curved shells acted upon by an applied magnetic field. Changes in the flux through each mesh loop are determined from the normal component of field (both the applied field and the field from the current in each line element). The matrix representing the flux through each loop due to each line can be inverted and the system stepped through time to provide a time history of the currents. Another matrix facilitates calculating the net field over a specified rectangular grid at each time step. Incorporating appropriate symmetry conditions reduces the size of the problem. Results are presented for field shielding by a thin-walled toroidal shell and for eddy-current effects on a notched tube in a sinusoidal field.
Keywords: eddy currents, computer simulation, nondestructive evaluation, calculation, theory, transient magnetic field

Paper ID: STP27577S
Committee/Subcommittee: E07.07
DOI: 10.1520/STP27577S
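The time-stepping scheme the abstract outlines — flux through each loop expressed as a matrix acting on the loop currents, inverted and advanced step by step — can be illustrated with a toy resistive-inductive loop network. This is only an analogue for illustration; the matrices and drive below are made up and are not EDDYNET's actual formulation:

```python
import numpy as np

M = np.array([[1.0, 0.2], [0.2, 1.0]])   # loop inductance matrix, henries (made up)
R = np.diag([0.5, 0.5])                  # loop resistance matrix, ohms (made up)
dt, t_end = 0.01, 20.0
A = M + dt * R                           # backward-Euler system matrix

I = np.zeros(2)                          # loop currents, amperes
peak = 0.0
for step in range(int(t_end / dt)):
    t = step * dt
    # applied flux ramps linearly for 1 s (dPhi/dt = 1 Wb/s per loop), then stops
    dphi = np.array([1.0, 1.0]) if t < 1.0 else np.zeros(2)
    # circuit law M dI/dt + R I = -dPhi_ext/dt, discretised implicitly:
    # (M + dt R) I_new = M I - dt dPhi_ext/dt
    I = np.linalg.solve(A, M @ I - dt * dphi)
    peak = max(peak, float(np.abs(I).max()))

# eddy currents build up while the applied flux changes, then decay toward zero
```

The implicit step keeps the recursion stable for stiff networks, which mirrors why repeated solves of one factored matrix suffice for a whole time history.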
The Putnam-Searle-Chalmers Theorem

Basic idea of an implementation

If a computation is implemented, there must be a mapping from states of the underlying system to formal states of the computation, and the states must have the correct behavior (transition rule) as they change over time. For example, for an electronic digital computer, we can map a high-voltage state of a transistor lead to a "1" and a low-voltage state to a "0". By doing the same for various other circuit elements, we can obtain an ordered string of 1's and 0's. This string will change over time as the voltages change. If, due to the laws of physics and the way the circuit is connected, this string must always change in accordance with the transition rules for computation C, then the system implements C.

We must allow as much flexibility as possible in the choice of mapping, because we are trying to understand the behavior of the system without any reference to things outside the system. Human convenience in recognizing the mapping is not a consideration. The obvious way to try to do this is simply to allow any mapping that is mathematically possible. This leads to what I will call the naive implementation criterion, because while it may sound good at first it is not a viable option for a satisfactory criterion. Chalmers' paper Does a Rock Implement Every Finite-State Automaton? explained this in detail.

The following is my version of what I'll call the "Putnam-Searle-Chalmers (PSC) Theorem", which shows that unrestricted mappings are not a viable option:

Suppose that a system consists of two parts, S and T, each of which has a numerical value, and that the dynamics of our system are as follows:

S(t+1) = S(t)
T(t+1) = T(t) + 1

These dynamics are fairly trivial; S is a dial that maintains a constant setting, while T is a clock. We will check to see if this system implements a computation, C, which has the transition rule

X(t+1) = F(X(t))

where t is a time index.
Here X need not be a single number; it might be a string of bits, for example. F could be a complicated function, such as F(X) = the Xth prime number (expressed in base 2, where X is an integer expressed in base 2). Now make a mapping M going from (S,T) to X with the following properties:

X = M(S,T)
M(S,T+1) = F(M(S,T))

We do need to make sure that any possible starting value of X is allowed by the mapping, which we can always do if our system has enough possible dial values. Now, according to the mapping, X will change as a function of time based on the dynamics of the system as follows:

X(t+1) = M(S(t+1),T(t+1)) = M(S(t),T(t)+1) = F(M(S(t),T(t))) = F(X(t))

Therefore, the system would implement the computation if this mapping is allowed. But the system's dynamics are trivial while the computation can be a very complicated function. Obviously, this is not acceptable; the computation does not characterize the behavior of the system at all. This is what I call a "false implementation". All of the complicated dynamics of the computation have been put into the mapping.

It is therefore necessary to put restrictions on what mappings are allowed. One possibility is to require that each part of a string that defines a formal computational state (e.g. each bit in a bit string) takes its value based on a different part of the underlying system. That is basically what Chalmers proposed to overcome the problem. While it somewhat counter-intuitively rules out distinctions based on the value of a single number (since a single number can still be mapped to any other single number), it goes a long way towards ruling out false implementations, while still allowing important standard examples of implementations that we want to retain, such as mapping switch positions to bit values (assuming classical physics). However, it is not quite right.
There are some systems for which it still allows false implementations, and there are other cases where it rules out what seem to be legitimate implementations - and these become clearly important when quantum mechanics is brought into the picture. In classical mechanics, different particles are different parts of the system; in quantum mechanics, different particles are different directions in the shared configuration space on which the wavefunction evolves.
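The clock-and-dial construction is easy to make concrete. In this illustrative sketch (the particular choice of F, a Collatz step, is mine, not the author's), the system's dynamics are trivial, yet the mapped states obey X(t+1) = F(X(t)) because the mapping itself does all the iterating:

```python
def F(x):
    # a "complicated" transition rule standing in for the computation C:
    # here, one step of the Collatz map
    return x // 2 if x % 2 == 0 else 3 * x + 1

def M(S, T):
    # the mapping from physical state (dial S, clock T) to formal state X:
    # iterate F a total of T times, starting from the dial value S
    x = S
    for _ in range(T):
        x = F(x)
    return x

# trivial system dynamics: the dial stays fixed, the clock ticks
S, T = 7, 0
history = []
for _ in range(6):
    history.append(M(S, T))
    T += 1

print(history)  # [7, 22, 11, 34, 17, 52] -- each entry is F of the previous
```

All of the computational structure lives in M, which is exactly the point of the theorem.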
Household Net Worth: The "Real" Story
By Doug Short
March 11, 2013 (Quarterly Update)

Note from dshort: With last week's release of the Federal Reserve's Z.1 Financial Accounts of the United States for Q4 2013, I have updated this commentary to incorporate the latest data.

Let's take a long-term view of household net worth from the latest Z.1 release. A quick glance at the complete data series shows a distinct bubble in net worth that peaked in Q4 2007 with a trough in Q1 2009, the same quarter the stock market bottomed. The latest Fed balance sheet shows a total net worth that is 45.2% above the 2009 trough, at a new all-time high 17.8% above the 2007 peak. The nominal Q4 net worth is up 3.8% from the previous quarter and up 13.8% year over year.

But there are problems with this analysis. Over the six decades of this data series, total net worth has grown about 7699%. A linear vertical scale on the chart above is misleading because it fails to provide an accurate visual illustration of growth over time. It also gives an exaggerated dimension to the bubble that began in 2002. But there is another, more serious problem, one that has to do with the data itself rather than the method of display. Over the same time frame that net worth grew seven-thousand-plus percent, the value of the 1950 dollar shrank to about nine cents. The Federal Reserve gives us the nominal value of total net worth, which is significantly skewed by money illusion. Here is a log-scale chart adjusted for inflation using the Consumer Price Index.

Here is the same chart with an exponential regression through the data. The regression helps us see the twin wealth bubbles peaking in Q1 2000 and Q1 2007, the Tech and Real Estate bubbles. The trough in real household net worth was in Q1 2009. From that quarter to the latest data point, net worth initially trended at about the same growth rate as the overall regression but has improved over the last five quarters.
We are currently 0.6% above the regression.

Net Worth Per Capita

The next chart gives us a more intuitive sense of real net worth. Here I've divided the inflation-adjusted series above by the Bureau of Commerce's mid-month population estimates, which have been recorded since January 1959. I say "more intuitive" because the per-capita adjustment brings the latest data point from the multi-trillion stratosphere to $25,418 -- an amount we can relate to on a personal level. At the end of 2013, we're $587 below the real peak in Q1 2007.

Note: I've referred to this data series as "household" net worth. But, as I show in the chart titles, it also includes the net worth of nonprofit organizations. The ratio of the two isn't clearly defined in the Fed data, and it obviously varies by asset and liability component. I've seen estimates that the nonprofit component is around six percent of the total net worth. One easy (and rather illuminating) point of comparison in the Z.1 data is the relative share of real estate at market value (B.100 lines 3, 4, and 5). In the latest report, nonprofit organizations account for 6.3% of combined household and nonprofit real estate (unchanged from last quarter). That percentage in the quarterly data has ranged from a high of 9.2% in Q4 1974 to a low of 4.5% in Q3.
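The two adjustments described above — deflating the nominal series by the CPI and then dividing by population — amount to a couple of lines of arithmetic. The figures below are illustrative placeholders, not the actual Z.1 data:

```python
# Deflate nominal net worth to latest-period dollars, then divide by population.
nominal = [44_000, 56_000, 66_000, 80_500]   # $ billions, nominal (made up)
cpi     = [172.2, 190.7, 214.5, 233.0]       # CPI index values (made up)
pop_mil = [282.2, 295.5, 307.0, 317.3]       # population in millions (made up)

base = cpi[-1]                               # express in latest-period dollars
real = [n * base / c for n, c in zip(nominal, cpi)]
per_capita = [r * 1e9 / (p * 1e6) for r, p in zip(real, pop_mil)]
```

Because the latest quarter deflates to itself, `real[-1]` equals the nominal value; earlier quarters are inflated up to current dollars before the per-capita division.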
Dynamic Analysis of Shear Flows over a Porous Medium in a Cylindrical Tube

Advances in Mechanical Engineering, Volume 2013 (2013), Article ID 168215, 8 pages

Research Article

Department of Mechanical Engineering, Indian Institute of Technology, Banaras Hindu University, Varanasi 221005, India

Received 9 February 2013; Revised 29 June 2013; Accepted 8 July 2013

Academic Editor: Jaw-Ren Lin

Copyright © 2013 J. P. Dwivedi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Dynamic responses of a viscous fluid flow introduced under a time-dependent pressure gradient in a rigid cylindrical tube with a deformable porous surface layer have been investigated. The coupling effect of the fluid movement and the deformation of the porous medium in Laplace transform space has been studied. Governing equations are simplified for the solid displacement and the fluid velocity in the porous layer. Using Durbin's algorithm, analytic solutions are obtained in the transformed domain, and time-dependent variables are considered. Interaction between the solid and the fluid phases in the porous layer and its effects on fluid flow in the tube are investigated under steady and unsteady flow conditions when the solid phase is either rigid or deformable. Significant effects of the porous surface layer on the flow in the tube have been observed.

1. Introduction

Richardson and Power [1] studied the deformation of a porous material with coupled fluid movements. Barry et al. [2] derived the analytic solutions for a shear fluid over a thin deformable porous layer on the walls of a two-dimensional channel, considering the porosity and permeability of the porous layer as constants; therefore, the coupled equations are linear. Barry et al.
[3] obtained a closed-form solution for the deformation of a porous medium due to a source in a poroelastic layer. The solution indicates the amount of swelling of the medium and the subsequent deformation of the free surface as a function of the location of the point source and the boundary conditions. At present, numerical simulation of viscous flow in porous media is more common: Pozrikidis [4], Wrobel [5], and Dwivedi et al. [6, 7] used the boundary element method to solve the governing partial differential equations. There is, however, still a lack of closed-form analytic solutions for shear flow over a deformable porous medium. In this paper, closed-form solutions for a viscous fluid over a deformable porous layer in a cylinder are obtained in Laplace transform space. The geometry and the coupled equations for the deformation and fluid velocity within the porous layer are shown in Figure 1. In the present work the porous medium is isotropic and axially symmetric, and the deformation of the solid is small. Assuming constant permeability of the porous medium, linear elasticity theory is applied to the problem. By Durbin's inversion method, the displacement of the solid phase and the velocity of the fluid are obtained in the time domain, and analytical solutions for three different situations of the porous layer (steady state deformation, rigid porous layer, and deformable porous layer) are obtained for a step or a sinusoidal pressure gradient in an infinite tube.

2. Governing Equations

In a cylindrical coordinate system as shown in Figure 1, the governing equations for the velocities of the fluid phase, with the convective terms neglected, are given by The equation of mass conservation of the fluid becomes and the equation for the solid phase is where the volume expansion is given by In our case, an infinitely long tube of radius with a rigid wall, which contains a porous layer of thickness and a fully developed flow as shown in Figure 1, is considered.
The fluid is initially at rest and is subjected to a time-dependent pressure differential at the ends of the tube. Because the tube is infinite, the pressure gradient is taken to be the same at every cross-section; that is, , where is a nondimensional function of time, and the field variables do not depend on the axial position. By the symmetry of the geometry, there are no velocities along the radial and circumferential directions; that is, everywhere in the tube. When the flow and the solid start impulsively from rest and without moving boundaries, that is, with and volume expansion , (1), (2), and (4) are automatically satisfied. From (3) we have Also, (5) and (6) are automatically satisfied, and (7) for the solid phase becomes Taking the Laplace transform of (9) and (10), we have where . For convenience in the following analysis, nondimensional variables are defined as where denotes the unit of time, together with the nondimensional parameters arising from the governing equations For convenience, the tilde (~) is dropped in the following analysis, as all variables in the Laplace transform space are nondimensional hereafter. Substituting the expressions from (14) and (15) into (12) and (13), we obtain the momentum equations (16) and (17), respectively. Consider By taking , , , and and using (14) and (15) in (16), the governing equation of the pure fluid velocity in the pure fluid phase outside the porous medium is obtained in the following form:
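For orientation, the classical axial momentum balance for a Newtonian fluid in an axisymmetric tube, to which an equation of this type is analogous, can be written (in generic symbols, not necessarily the paper's notation) as

$$\rho\,\frac{\partial u_z}{\partial t} \;=\; -\frac{\partial p}{\partial z} \;+\; \mu\,\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\,\frac{\partial u_z}{\partial r}\right),$$

where $u_z(r,t)$ is the axial velocity, $p(z,t)$ the pressure, $\rho$ the fluid density, and $\mu$ the dynamic viscosity; in the steady limit this equation yields the parabolic (Poiseuille) profile noted in Section 6.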
Therefore, the governing equations can be represented, by letting in (16), (17), and (18), as In the case of steady state deformation, at the interface between the porous layer and the pure fluid, the nondimensional boundary conditions and assumptions are as follows: Therefore, the general solution of (21) can be obtained as When , the maximum value of reduces to . The general solutions of (19) and (20) can be obtained as respectively, where and are the modified Bessel functions of the first and second kinds of order . Considering the boundary conditions (i) and (ii) and solving (23), (24), and (25), we have

4. Rigid Porous Layer

For a rigid porous layer, the displacement of the solid phase is zero and the velocity of the fluid is time dependent. The governing equations can be represented in Laplace space, from (16), (17), and (18), taking , and for convenience, and , The general solution of (27) is Also, the general solution of (28) is Considering the boundary conditions and solving (27), (28), (29), and (31), we have Thus

5. Deformable Porous Layer

For the deformable porous medium, the displacement of the solid phase can be rewritten in terms of the velocity in the porous layer, from (16), (17), and (18), as where . Substituting (34) into (16), (17), and (18) yields where , .
The general solution for the velocity in the porous medium can be expressed as where , , , and are unknown coefficients and and are two distinct roots of the following quadratic equation From (34), the displacement of the solid phase can be written as and the velocity of the pure fluid in the tube is given by Considering the boundary conditions (i)–(v) and solving (34)–(39), we have where Finally, all other coefficients are obtained by The maximum velocity in the pure fluid occurs at the centre of the tube and may be expressed, from (39), as With the help of (36), the general solution for the velocity of the fluid phase, when the tube is occupied completely by the porous medium, is given by and, for the displacement of the solid phase, it is where the constants and can be determined directly from the fixed boundary conditions at the rigid wall. Following Barry et al. [2], for both steady and unsteady flows, we can prove that at the interface the velocity has the behavior where is a constant and is the normal to the interface.

6. Results and Discussions

The results for steady state flows are shown in Figures 2, 3, and 4. The normalized velocity at the centre of the tube in (26) is plotted in Figures 2 and 3. Figure 2 shows the variation of the velocity with the porous layer thickness , when , and Figure 3 shows the variation of the velocity with the volume fraction , when . These results indicate that the velocity in the fluid decreases as the thickness increases at constant porosity of the layer, and the maximum velocity increases when the porosity increases at constant thickness. The solutions for the displacement () of the solid phase and the velocities of the fluids () in (23), (24), and (25) are plotted in Figure 4 for porosities and 0.9, respectively, where the parameters are selected as , , and .
In the case of steady state flow, the velocity profile in the pure fluid is parabolic plus a uniform flow; the fluid flux profile in the porous medium and the displacement of the solid phase are almost linear. Increasing the porosity of the two-phase medium increases the velocities of the fluid in both the pure fluid region and the porous medium, as there is less solid to impede the flow. The displacement of the solid also decreases, since there is less drag on the solid component. The solutions for the maximum velocity in the pure fluid in the time domain are shown in Figure 5, when the parameters , , and are applied to the system under the pressure gradient , where is the Heaviside (step) function. It is apparent that the maximum velocity decreases when the thickness of the porous medium increases. As expected, the velocity converges rapidly to the steady state solution in (26), that is, immediately after the normalised time . Figure 6 demonstrates how the flow develops from a suddenly applied acceleration to the final steady state when the chosen parameter values are , , and . The effects of the porosity and the rigid wall of the tube on the velocity profile in the pure fluid are displayed for the volume fractions and 0.9. For the deformable porous medium, the maximum velocity in the pure fluid is shown in Figure 7 when the parameters are chosen as , , , and . The effect of the porous layer thickness becomes significant in this case. We notice that, for a large porous layer thickness (), the velocity at the centre of the tube oscillates about the steady state flow solution. It is believed that this oscillation of the fluid is caused by the vibration of the solid phase around its equilibrium position under the dynamic pressure gradient. To illustrate this influence, the variation of the displacement of the solid phase at the interface against the real time is plotted in Figure 8.
Furthermore, the dynamic response of the maximum velocity of the pure fluid subjected to a sinusoidal pressure gradient is shown in Figure 9.

7. Conclusion

General solutions for the displacement of the solid phase and the velocities of both fluids, in the porous layer and in the pure fluid space, are obtained. The connection (jump) conditions at the interface between the porous medium and the pure fluid, discussed previously for steady viscous flow, are introduced here for unsteady viscous flow. It is assumed that for unsteady flow the volume-averaged velocity in the tangential direction is continuous across the porous interface and that the stress is distributed in proportion to the volume fractions at the interface. The interaction between the solid and fluid phases in the porous medium and its effect on the velocity in the pure fluid are investigated in detail for three cases with different solid phases: (i) steady state deformation; (ii) rigid porous layer; (iii) deformable porous layer. Durbin's Laplace transform inversion algorithm is used to obtain a high-accuracy solution in the real time domain. Examples are given for Heaviside and sinusoidal pressure gradients applied to the system. The derived analytical solutions can be used to test some interesting practical problems. These analytical solutions are derived for axially symmetric conditions.

Nomenclature
: Fluid viscosity
: Flow permeability of the porous material
: Drag coefficient
: Cylindrical polar coordinates
, , : Velocity components of the fluid phase along the radial, circumferential, and longitudinal directions, respectively
: Displacement components of the solid phase along the radial, circumferential, and longitudinal directions, respectively
: Time
: Excess pore water pressure
: Lamé constants of the solid phase
: Apparent viscosity in the porous medium
: Volume fraction of the solid phase
: Volume fraction of the fluid phase
: Density of the fluid
: Density of the soil grain
: Nondimensional viscosity
The authors are grateful to both referees for their valuable suggestions and comments for the improvement of this paper.

References

1. J. Richardson and H. Power, "A boundary element analysis of creeping flow past two porous bodies of arbitrary shape," Engineering Analysis with Boundary Elements, vol. 17, no. 3, pp. 193–204, 1996.
2. S. I. Barry, K. H. Parker, and G. K. Aldis, "Fluid flow over a thin deformable porous layer," Zeitschrift für Angewandte Mathematik und Physik, vol. 42, no. 5, pp. 633–648, 1991.
3. S. I. Barry, G. N. Mercer, and C. Zoppou, "Deformation and fluid flow due to a source in a poro-elastic layer," Applied Mathematical Modelling, vol. 21, no. 11, pp. 681–689, 1997.
4. C. Pozrikidis, Boundary Integral and Singularity Method for Linearized Viscous Flow, Cambridge University Press, Cambridge, UK, 1992.
5. L. C. Wrobel, The Boundary Element Method, Vol. 1: Applications in Thermo-Fluids and Acoustics, John Wiley & Sons, Hoboken, NJ, USA, 2002.
6. J. P. Dwivedi, V. P. Singh, and R. K. Lal, "Stress, displacement and pore pressure of partially sealed circular tunnel surrounded by viscoelastic," Advances in Theoretical and Applied Mechanics, vol. 4, no. 1–4, pp. 189–198, 2011.
7. J. P. Dwivedi, V. P. Singh, and R. K. Lal, "Dynamic response of partially sealed circular tunnel in viscoelastic soil condition," Bulletin of Applied Mechanics, vol. 7, no. 26, pp. 37–45, 2011.
Cryptology ePrint Archive: Report 2013/198

On Evaluating Circuits with Inputs Encrypted by Different Fully Homomorphic Encryption Schemes

Zhizhou Li and Ten H. Lai

Abstract: We consider the problem of evaluating circuits whose inputs are encrypted with possibly different encryption schemes. Let $\mathcal{C}$ be any circuit with input $x_1, \dots, x_t \in \{0,1\}$, and let $\mathcal{E}_i$, $1 \le i \le t$, be (possibly) different fully homomorphic encryption schemes, whose encryption algorithms are $\Enc_i$. Suppose $x_i$ is encrypted with $\mathcal{E}_i$ under a public key $pk_i$, say $c_i \leftarrow \Enc_i({pk_i}, x_i)$. Is there any algorithm $\Evaluate$ such that $\Evaluate(\mathcal{C}, \langle \mathcal{E}_1, pk_1, c_1\rangle, \dots, \langle \mathcal{E}_t, pk_t, c_t\rangle)$ returns a ciphertext $c$ that, once decrypted, equals $\mathcal{C}(x_1, \dots, x_t)$? We propose a solution to this seemingly impossible problem with the number of different schemes and/or keys limited to a small value. Our result also provides a partial solution to the open problem of converting any FHE scheme to a multikey FHE scheme.

Category / Keywords: foundations / Fully Homomorphic Encryption, Multi-Scheme FHE, Trivial Encryptions, Ciphertext Trees, Multiparty Computations.

Publication Info: under review in an IACR conference.
Date: received 6 Apr 2013
Contact author: lizh at cse ohio-state edu, lai at cse ohio-state edu
Available format(s): PDF | BibTeX Citation
Version: 20130409:050704 (All versions of this report)
TriangleIterator< V >

template<class V> class TriangleIterator< V >

Allows iteration over triangles in QVectors of vertices. How the vertices are iterated depends on the specified primitive type. The triangles iterated over will match the triangles that would be drawn if the vertices were drawn with the specified primitive type. Used by the StelSphereGeometry classes.

Definition at line 37 of file TriangleIterator.hpp.

template<class V> TriangleIterator< V >::TriangleIterator(const QVector< V > & vertices, const PrimitiveType primitiveType) [inline]

Construct a TriangleIterator iterating over the specified vertex array.

vertices: Vertices to iterate over.
primitiveType: Primitive type determining how the vertices form triangles. Must be PrimitiveType_Triangles, PrimitiveType_TriangleStrip or PrimitiveType_TriangleFan.

The vertex count can be 0, but it can't be 1 or 2. If primitiveType is PrimitiveType_Triangles, the vertex count must be divisible by 3.

Definition at line 49 of file TriangleIterator.hpp.
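As a rough illustration (my own sketch in Python, not code from Stellarium, with hypothetical names), the way each supported primitive type groups a vertex list into triangles follows the usual OpenGL conventions:

```python
# Sketch of how vertex indices form triangles for each primitive type.
# Illustration only, not Stellarium's TriangleIterator implementation.

def triangle_indices(primitive_type, n):
    """Return the index triples of the triangles formed by n vertices."""
    if primitive_type == "Triangles":
        # Independent triangles: (0,1,2), (3,4,5), ...
        return [(i, i + 1, i + 2) for i in range(0, n - 2, 3)]
    if primitive_type == "TriangleStrip":
        # Each new vertex forms a triangle with the previous two.
        return [(i, i + 1, i + 2) for i in range(n - 2)]
    if primitive_type == "TriangleFan":
        # Every triangle shares the first vertex.
        return [(0, i, i + 1) for i in range(1, n - 1)]
    raise ValueError("unsupported primitive type")

print(triangle_indices("Triangles", 6))      # [(0, 1, 2), (3, 4, 5)]
print(triangle_indices("TriangleStrip", 5))  # [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
print(triangle_indices("TriangleFan", 5))    # [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
```

Note that real triangle strips also alternate the winding order of successive triangles; this sketch only shows which vertices belong to each triangle.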
find the probabilities
November 28th 2009, 09:43 AM #1

Alice and Bob each choose at random a number between zero and one. We assume a uniform probability law under which the probability of an event is proportional to its area. Consider the following events:

A: The magnitude of the sum of the two numbers is greater than 2/3
B: At least one of the numbers is greater than 2/3
C: The two numbers are equal
D: Alice's number is greater than 2/3

Find the following probabilities:
a) Pr(A)
b) Pr(B)
c) Pr(A∩B)
d) Pr(C)
e) Pr(A∩D)

ok for each part i guess i draw up a venn diagram? but i'm not entirely sure because the question is worded differently from the questions i've had before. Could anyone help me understand the line "We assume a uniform probability law under which the probability of an event is proportional to its area." Not looking for a cheat, just a way to start it.

I will help you with part b. It will be a model for the rest. In the unit square below, a point in the area shaded yellow will have at least one of its coordinates greater than $\frac{2}{3}$. Therefore the probability of event $B$ is just that area divided by the total area. Then you model the other events.

thanks for your help, its much appreciated. Before i move on, would i be right in saying Pr(B) = 5/9? for some reason i just cant remember how to do Pr(sum of 2 numbers > a number). I'm thinking it should be Pr(alice∩Bob) > 5/9 but this cant be right because i don't know the values for alice or bob :/ do i need to use binomial?

Graph the line $a+b>\frac{2}{3}$. Shade the correct area inside the square.

I am trying to solve the same problem. I found some answers but I do not know if I am correct.
a) 17/18
b) 5/9
c) 5/36
d) 1/2
e) 1/12
Some of the answers are strange. Can anyone tell me if the answers are correct and, if not, which answers are incorrect so I can think about them again, please?
Did you read my first reply to this question? You will see that your part (a) is not correct. As for part (c), the answer will surprise you. Think in terms of area.

I'm just not getting this. Is part a 7/9? Also, could (part b) = (part c)?

Why do you ask? Yes, it is correct. Trust yourself.

But I am confused. Did you change (C) & (D)?

In the OP $D=A\cap B$ so $P(D)=P(B)$. $C$: the two numbers are equal.
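The geometric answers discussed above (Pr(A) = 7/9 and Pr(B) = 5/9) can be sanity-checked numerically. Below is a quick Monte Carlo sketch of my own (not from the thread); it also checks the observation behind part (c), namely that B is a subset of A, so Pr(A∩B) = Pr(B):

```python
import random

# Monte Carlo check of the unit-square probabilities discussed above.
# A: a + b > 2/3 (area 1 - (1/2)(2/3)^2 = 7/9)
# B: at least one of a, b exceeds 2/3 (area 1 - (2/3)^2 = 5/9)
# If max(a, b) > 2/3 then a + b > 2/3 automatically, so B ⊆ A and P(A∩B) = P(B).
random.seed(1)
trials = 200_000
in_a = in_b = in_ab = 0
for _ in range(trials):
    a, b = random.random(), random.random()
    if a + b > 2 / 3:
        in_a += 1
    if a > 2 / 3 or b > 2 / 3:
        in_b += 1
    if a + b > 2 / 3 and (a > 2 / 3 or b > 2 / 3):
        in_ab += 1

print(in_a / trials)  # close to 7/9 ≈ 0.778
print(in_b / trials)  # close to 5/9 ≈ 0.556
print(in_ab == in_b)  # True: every point in B is also in A
```

Pr(C) corresponds to the line a = b, which has zero area, so under the area rule it is 0; the Monte Carlo analogue (exact equality of two random floats) essentially never occurs.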
Almost Done with Derivation of the Uncertainty Principle, but stuck!
May 16th 2011, 06:14 PM #1

I would tremendously appreciate it if anyone could please shed some light on a step in the derivation of the uncertainty principle, in which I have gotten stuck. In the link provided: HOW DID THEY GO FROM Eq. (4.93) to (4.94) [Page #51] ... I can't seem to find how this happened? Thanks.

I think I have some idea of how this was obtained. Just illuminate me on something. Say I have B<A>, where B and A are operators and <A> refers to the mean value of A. If I take <B<A>>, can this equal <B><A>? Also, can <<A><B>> = <A><B>? Thank you. I know I am being very vague with this question, so forgive me if I committed any obvious mathematical flaw.

You are correct. $\langle B\langle A\rangle\rangle=\int_{-\infty}^{\infty}\psi^{*}B\langle A\rangle\psi\,dx=\langle A\rangle\int_{-\infty}^{\infty}\psi^{*}B\psi\,dx=\langle A\rangle\,\langle B\rangle.$ So what you have is that, since $\langle A\rangle$ is just a number, it comes out of the integral. In Equation 4.93, on the RHS, if you expand out the product in between the wave functions, you get $AB-\langle A\rangle B-A\langle B\rangle+\langle A\rangle\,\langle B\rangle.$ When you take the expectation of that whole expression, the right three terms end up being like terms. So you just get $-\langle A\rangle\,\langle B\rangle.$ That is, the positive term cancels out exactly one of the negative terms. Make sense? Thanks for posting a QM question, by the way. In my opinion, we don't get nearly enough of those around here!
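The key step, that $\langle A\rangle$ is just a number and therefore factors out of the expectation integral, can be illustrated numerically. Here is a small sketch of my own (not from the thread or the linked notes), using a normalized Gaussian for $|\psi|^2$ and the position operator $x$, so that the constant c plays the role of the already-computed number $\langle A\rangle$:

```python
import math

# Check numerically that a constant factors out of an expectation integral:
# <c * x> == c * <x>, where <f> = ∫ |psi(x)|^2 f(x) dx.
x0, sigma = 1.0, 0.5  # Gaussian |psi|^2 centred at x0, so <x> = x0

def psi_sq(x):
    # normalized Gaussian probability density |psi(x)|^2
    return math.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def expectation(f, lo=-10.0, hi=10.0, n=20_000):
    # midpoint-rule approximation of ∫ |psi|^2 f(x) dx
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += psi_sq(x) * f(x) * dx
    return total

c = 3.7  # stands in for the number <A>
mean_x = expectation(lambda x: x)
mean_cx = expectation(lambda x: c * x)
print(abs(mean_x - x0) < 1e-6)           # True: <x> equals x0
print(abs(mean_cx - c * mean_x) < 1e-9)  # True: <c x> = c <x>
```

This factoring is exactly what collapses the expectation of the expanded product in the reply above into like terms.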
Physics_Momentum Notes Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support. Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with the best way to expand their education. Below is a small sample set of documents: Bentley - PS - 252 ut consulting others. • Plagiarism is a serious offense and cases will be referred to the Committee on Academic Honesty for review. • Any websites used in the preparation of assignments must be fully documented as a source. • Any direct quotes Life Chiropractic College West - CHIROPRACT - CLET 3826 have beenplaying piano for the last couple years off and on. No recent injuries.Never had this before.Resting my wrist seems to help.If I put ice on it and I picked up a wrist brace at CVS.If asked: The brace seems to help a bit but I don’t we Mount Airy Christian Academy - WORLD H - hist Major period of ancient history. Forerunners of Ancient Greece They call their county Hellas Foundation of Greek Minoan civilization, On the island of Crete King Minos rose power at 2000 BC Capital city: Knossos Most impressive, art, sculpture, University of British Columbia - COMM - 391 OLAP, not OLTPsupport decision making, not transaction processingBaby careproducts80Beautyproducts50Snacks &amp;Beverage20Canada USUKLOCATIONCOMM 391, Sauder School, UBC69Slice-and-Dice Techniquesin Data WarehousesDrill dow Drexel - CHEM - 244 HNO3-4283 1.512963-1.4MethanolCH4O-9764.7 0.79183215.5Methyl mnitrobenzoateC8H7NO 78-80CAUTION!! If skin comes incontact, wash with ethanolsoluble in waterMechanism4279 N/AQuestionThere are factors that influ King Fahd University of Petroleum & Minerals - CHEM - 102 Ch.12 Recitation1. The rate of a reaction is given by k [A] [B]. Thereactants are gases. 
If the volume of the container inwhich the reaction occurs is decreased to one-fourth(1/4) of its original value how will the reaction ratechange?2. The rate co Chamblee Charter High School - SCIENCE - Physics MOTION IN ONE DIMENSIONDISPLACEMENT, VELOCITY, AND ACCELERATIONGPSFRAME OF REFERENCEFrame of Reference: a coordinate system for specifying the precise location ofobjects in space.Motion takes place over time and depends upon the frame of reference. King Fahd University of Petroleum & Minerals - CHEM - 102 Chapter 13 Recitation1. Solid ammonium carbamate (NH2COONH4) placed in an evacuatedcontainer at 30 C decomposes according to the reaction:2NH3 (g) + CO2 (g) ; Kp = 2.9 x 10 5NH2COONH4 (s)The total pressure of the gases in equilibrium with the solida Chamblee Charter High School - SCIENCE - Physics SCIENTIFIC NOTATIONScientific notation is a way to write numbers using exponents. Usually it is asimplified way of writing very large or very small numbers . A positive exponentindicates a number larger than one and a negative exponent indicates a numb King Fahd University of Petroleum & Minerals - CHEM - 102 Chapter 14 quizName: . Number: Section: .1. A solution in which [H+] = 1.0 x 10 6 has a pH of _ andis _.2. What is the pH of a 0.00030 M HNO3 solution?3. What is the pH of a 0.00600 M NaOH solutiom? 4. 2.0-L o f a hydrochloric acid (HCl) solution of Chamblee Charter High School - SCIENCE - Physics SOUND WAVESSOUND WAVES ARE PRESSURE WAVESPRODUCTION OF SOUND WAVESAll sound waves are longitudinal and are produced by vibrating objects.Sound waves must have a medium to travel through.Unlike waves along a spring, sound waves in air spread out in al King Fahd University of Petroleum & Minerals - CHEM - 102 KING FAHD UNIVERSITY OF PETROLEUM AND MINERALSCHEMISTRY DEPARTMENTCHEM-102-072-FIRST MAJORTEST CODE NUMBER000STUDENT NUMBER: _NAME :_SECTION NUMBER: _INSTRUCTIONS1. Write your student number, name, and section number on the EXAM COVER page.2. 
W Chamblee Charter High School - SCIENCE - Physics TEMPERATURE AND INTERNAL ENERGYTEMPERATURETemperature: a measure of the average kinetic energy of the particles in asubstanceAdding or removing energy usually changes temperatureThe sensation of hot and cold can be misleadingThe temperature is propo King Fahd University of Petroleum & Minerals - CHEM - 102 KING FAHD UNIVERSITY OF PETROLEUM AND MINERALSCHEMISTRY DEPARTMENTCHEM-102-072-SECOND MAJORTEST CODE NUMBER000STUDENT NUMBER: _NAME :_SECTION NUMBER: _INSTRUCTIONS1. Write your student number, name, and section number on the EXAM COVER page.2. King Fahd University of Petroleum & Minerals - GEOL - 446 Department of Earth Sciences, KFUPMAcademic Year 2008/2009 (1st. Semester)Environmental Geology (GEOL - 446)SYLLABUSCOURSE NAME: Environmental Geology (GEOL 446)INSTRUCTOR: Dr. Adly Kh. Al-SaafinOffice No.: 4/104-7Office Tel. No.: 3184E-mail: adly King Fahd University of Petroleum & Minerals - GEOL - 446 Earth Science DepartmentEnvironmental Geology (GEOL446)HOMEWORK # 1Student Name: _, I. D. #: _Major: _, Date: Sep. 16, 20071. Define the following terms:(10 points)i) Geology, Environment, and Environmental Geologyii) population growth, and doubli King Fahd University of Petroleum & Minerals - GEOL - 446 Environmental Geology (Geol-446)Date: Sep. 26, 2007HW # 2Earth System1.Define the following:i. Open and closed system (give examples)ii. hydrologic cycleiii. rock cycleiv. Albedov. Regolith2.The Earth is a good approximation of a closed system King Fahd University of Petroleum & Minerals - GEOL - 446 Earth Science DepartmentEnvironmental Geology (GEOL446)HOMEWORK # 3Student Name: _, I. D. #: _Major: _, Date: Oct. 2nd., 20071. Briefly discuss the differences between the following environmentalgeological terms:(20 points)i) Natural Hazard, Natur King Fahd University of Petroleum & Minerals - GEOL - 446 ENVIRONMENTAL GEOLOGYGEOL 446HOMEWORK # 4Student Name: _, Date: Oct. 22, 2007I. D. 
#: _, Major: _1.What are the two main causes of global warming ?2.Define the following environmental terms:i. eventii. Hazardiii. riskiv. Active Volcanov. Dorm King Fahd University of Petroleum & Minerals - GEOL - 446 EarthquakesReview questionsNov. 4th., 2007Try to Read Chapter 7 Earthquake from Keller Textbook and answerthe following questions. Be ready to discuss some of your answers in thenext class.1. What is the difference between the focus and the epicente King Fahd University of Petroleum & Minerals - GEOL - 446 ENVITONMENTAL GEOLOGYGEOL 446HOMEWORK # 6Name: _Date: Nov. 11, 2007ID #: _Major: _1. Define the following :i. Mass wastingii. Landslideiii. Subsidenceiv. Safety factorv. Driving forcevi. Resisting forcevii. Slumpviii.Cohesion(8 points)2. Grant MacEwan University - BCSC - 260 Transition Words and PhrasesConnections that tell readers the writer is changing directionsVital hints or outright statements that he writer is shifting gears; when the transition iswrong or misleading, readers may be confusedTransition words must app King Fahd University of Petroleum & Minerals - GEOL - 446 Environmental GeologyDate: Sep. 30, 2007Quiz # 1 (Earth System)Try to answer only the following 5 questions1.The Earth is a good approximation of a closed system; but in what ways doesthe Earth system not fit the definition of a closed system ?2.T Grant MacEwan University - BCSC - 260 Authenticity pg. 120 She talks about how people in your prose need to feel authentic and real.Verisimilitude, the details that matter. Pertinent to fiction as well as non-fiction. Promotionalwriting, people or situations seem real. People who feel real King Fahd University of Petroleum & Minerals - GEOL - 446 ENVIRONMENTAL GEOLOGYGEOL 446QUIZ # 2Student Name: _, Date: Oct. 25, 2007I. D. 
December 25th 2011, 06:25 AM #1
Let V be an n-dimensional complex vector space, where $n\geq 1$, and let $x:V\rightarrow V$ be a linear map. Show that $x$ has an eigenvector $v \in V$. I can't solve this question. How can I?

Re: eigenvector
The fact that the space is complex is important, since you know that the characteristic polynomial has a root.

Re: eigenvector
Yes, I know that the roots of the characteristic polynomial are eigenvalues, but how do I show that $x$ has an eigenvector $v\in V$?

Re: eigenvector
Fix a basis $B=\{v_1,\ldots,v_n\}$ of $V$. If $\lambda$ is a root of the characteristic polynomial and $A=[T]_B$, then the system $(A-\lambda I)x=0$ has at least one non-zero solution $(\alpha_1,\ldots,\alpha_n)\in\mathbb{C}^n$. Choose $v=\alpha_1v_1+\ldots+\alpha_nv_n$.

Re: eigenvector
If you already know that the roots of the characteristic polynomial are eigenvalues, then you are done. The characteristic polynomial does have a complex root and therefore $x$ does have an eigenvalue. By definition, an eigenvalue of a linear map $x$ is a scalar $\lambda$ such that there exists a vector $v$ in the space $V$ such that $xv=\lambda v.$ This is also the definition of the eigenvector $v.$ There is a nice proof of this without the use of determinants/characteristic polynomials. It's here, page 3.

Re: eigenvector
Thanks, but on page 3, Theorem 2.1 states "Every linear operator on a finite-dimensional complex vector space has an eigenvalue", while my question asks for an eigenvector.
Re: eigenvector
Thanks, fernandorevilla. You said the system $(A-\lambda I)x=0$ has at least one non-zero solution $(\alpha_1,\ldots,\alpha_n)\in\mathbb{C}^n$. Why?

Re: eigenvector
Excuse me, in $A=[T]_B$, what is $T$?

Re: eigenvector
I made a small but significant mistake in my earlier post. The vector $v$ is supposed to be non-zero. The existence of an eigenvalue implies the existence of an eigenvector. With the definitions from my earlier post it's immediate. What does it mean that $x$ has an eigenvalue? This:

1) $\left (\exists \lambda\in\mathbb{C}\right)\left( \exists v\in V\setminus\{0\}\right ) xv=\lambda v.$

What does it mean that $x$ has an eigenvector? This:

2) $\left( \exists v\in V\setminus\{0\}\right )\left (\exists \lambda\in\mathbb{C}\right) xv=\lambda v.$

These formulas are equivalent. The order of the existential quantifiers doesn't matter. However, Sheldon Axler uses a different definition of an eigenvalue. For him, that $x$ has an eigenvalue means this:

3) There exists $\lambda\in \mathbb{C}$ such that $x-\lambda I$ is not injective.

We want to see that 3) implies 2). What does it mean that $x-\lambda I$ is not injective? It means that we have two vectors $v_1,v_2\in V$ such that $v_1 \neq v_2$ and $(x-\lambda I)v_1=(x-\lambda I)v_2,$ the last formula being equivalent to $xv_1 -\lambda v_1 = xv_2 -\lambda v_2,$ and further equivalent to $x(v_1-v_2)=\lambda (v_1-v_2).$ Since $v_1 \neq v_2,$ the vector $v_1-v_2$ is non-zero. Therefore, we have found a non-zero vector $v$ such that $xv=\lambda v,$ just as we wanted.

Re: eigenvector
Thanks for your help, ymar.
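fernandorevilla's recipe (pick a root $\lambda$ of the characteristic polynomial, then solve $(A-\lambda I)v=0$) can be checked concretely in the 2x2 case. The sketch below is illustrative only and not part of the thread; the helper name is made up:

```python
import cmath

def eigenpair_2x2(a, b, c, d):
    """Return (lam, v) with A v = lam v for A = [[a, b], [c, d]] over C.

    lam is a root of the characteristic polynomial
    lam^2 - (a + d) lam + (ad - bc), which always exists over C;
    v = (b, lam - a) solves (A - lam I) v = 0 when b != 0.
    """
    tr, det = a + d, a * d - b * c
    lam = (tr + cmath.sqrt(tr * tr - 4 * det)) / 2
    if b != 0:
        v = (b, lam - a)
    elif c != 0:
        v = (lam - d, c)
    else:
        # A is diagonal: a standard basis vector is an eigenvector.
        v = (1, 0) if lam == a else (0, 1)
    return lam, v

# Rotation by 90 degrees has no real eigenvector, but over C it does:
lam, (x, y) = eigenpair_2x2(0, -1, 1, 0)
# Check A v = lam v componentwise.
assert cmath.isclose(0 * x + (-1) * y, lam * x)
assert cmath.isclose(1 * x + 0 * y, lam * y)
```

Here the eigenvalue comes out as $\lambda = i$, matching the fact that the rotation matrix has characteristic polynomial $\lambda^2 + 1$.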
Morningstar Mutual Fund Risk Measures: Alpha, Beta, and R-squared

By Sun

A large portion of our taxable investments is held in 10 mutual funds (check out my 2010 Year in Review for a list of the mutual funds we currently own and our asset allocation at the end of 2010). Though it has been a long time since I made any change to our mutual fund investments, I still keep an eye on new funds, funds that were closed but have now reopened, etc. Not that I really need any overhaul of our holdings, but I like to see what is out there. When I check out a mutual fund, the tool I use most often is Morningstar because, compared with other mutual fund research websites, Morningstar offers more tools to help me understand a fund's fundamentals as well as its past performance. In addition to the fund returns, which usually play a big role in my fund selection process (even though I know past returns don't guarantee future performance), I also pay attention to the fund's risk. And Morningstar has a set of measures that let me better understand a fund's risk characteristics before making a decision.

What Is Risk

Before getting into the details of the Morningstar risk measures, let's first take a look at the general definition of risk in investing. According to Investopedia, risk is:

The chance that an investment's actual return will be different than expected. This includes the possibility of losing some or all of the original investment. It is usually measured by calculating the standard deviation of the historical returns or average returns of a specific investment.

A fundamental idea in finance is the relationship between risk and return. The greater the amount of risk that an investor is willing to take on, the greater the potential return. The reason for this is that investors need to be compensated for taking on additional risk.
For each fund, Morningstar offers two sets of data, Volatility Measurements and Modern Portfolio Theory Statistics, to help investors get a sense of the risk of owning a particular fund. For Volatility Measurements, you can use the following data to gauge a fund's volatility compared to the broad market:

• Mean
• Standard Deviation
• Sharpe Ratio
• Bear Market Decile Rank

And for Modern Portfolio Theory Statistics, you will see these data provided by Morningstar:

• Beta
• R-squared
• Alpha

In the following, I will explain mean, standard deviation, beta, R-squared, and alpha, and how to use them to assess the risk involved in investing in a mutual fund.

Mean, Standard Deviation, Beta, R-squared, and Alpha

Simply speaking, the mean is the mathematical average of a set of data. If, for example, stock XYZ's annual returns in the past three years are 10%, 5% and 15%, respectively, then the arithmetic mean of the stock's return is 10%, the average of 10%, 5% and 15%. Once the mean is known, we can calculate stock XYZ's standard deviation, which measures the dispersion of the stock's annual returns (i.e., 10%, 5% and 15%) from the mean expected return (10%). Therefore, the further an equity's annual returns are from the mean, the higher the standard deviation. In finance, standard deviation is used to gauge an equity's volatility, whether the equity is a stock or a mutual fund.

Since the recession more than three years ago, the majority of stocks have followed the movement of the general market and turned lower; the only difference among stocks is the extent of the downturn as compared to the benchmark. The tendency of a stock to move along with the general market is captured by beta, also known as systematic risk (or market risk), which measures how an individual stock or fund reacts to general market fluctuations. By definition, a benchmark (or index) has a beta of 1.00, and the beta of an equity is relative to this value.
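The mean, standard deviation, and beta just described can be computed directly from return series. A minimal Python sketch using population formulas; the XYZ returns are the article's example, while the market series and helper names are mine, for illustration only:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    """Population standard deviation: dispersion of returns around the mean."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def beta(stock, market):
    """Systematic risk: covariance(stock, market) / variance(market)."""
    ms, mm = mean(stock), mean(market)
    cov = sum((s - ms) * (k - mm) for s, k in zip(stock, market)) / len(stock)
    var = sum((k - mm) * (k - mm) for k in market) / len(market)
    return cov / var

xyz = [10.0, 5.0, 15.0]          # annual returns (%) from the XYZ example
print(mean(xyz))                 # 10.0
# A hypothetical benchmark series; the market's beta against itself is 1.0.
market = [8.0, 2.0, 12.0]
print(beta(market, market))      # 1.0
```

Note that Morningstar computes these statistics from monthly returns over a trailing window, so the numbers on its site will not match a back-of-the-envelope calculation from annual returns.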
If the movement of a stock or fund can be completely explained by the movements of the general market, then the stock or fund will have an R-squared of 100. According to Morningstar, R-squared, represented by a percentage ranging from 0 to 100, characterizes an equity's movement against a benchmark. An R-squared that equals 100 means all of the equity's movements are in line with the benchmark.

With the Greek letter beta, investors can get a sense of how sensitive an equity is in relation to the broad market. If investors decide to take on higher risk by investing in a volatile equity that carries a larger beta, then in theory they should be rewarded with a higher-than-average return. The difference between the realized return and the expected return is measured by another Greek letter, alpha. A positive alpha indicates that the equity exceeded expectations against the respective benchmark.

How Risk Measures Work

Now that we know what the risk measurements are, let's see how we can use them to assess the risk/reward of an investment. To illustrate, I use two funds that I own, Dodge & Cox Stock Fund (DODGX) and CGM Focus Fund (CGMFX), to show how they measure up against each other in each category. Using the S&P 500 index as the benchmark, the performance and risk data of the two funds are shown in the following table (obtained from Morningstar.com, trailing 3-year data through January 31, 2011):

│ Fund  │ 2008 Return │ 2009 Return │ 2010 Return │ Mean  │ STD   │ R-squared │ Beta │ Alpha │
│ DODGX │ -43.31      │ 31.27       │ 13.49       │ 0.06  │ 26.41 │ 97.13     │ 1.19 │ -1.95 │
│ CGMFX │ -48.18      │ 10.42       │ 16.94       │ -0.45 │ 33.32 │ 62.15     │ 1.20 │ -8.10 │

• Mean: The mean represents the annualized average monthly return, so a higher mean indicates a higher return the fund has delivered. In this case, DODGX performed a little better over the past three years, with a mean of 0.06.
• Standard deviation (STD): In this case, both funds have quite a high STD compared to the S&P 500, which has a mean of 0.20 and an STD of 21.91. The higher STD of CGMFX indicates that it is more volatile than DODGX.

• R-squared: Recall that R-squared measures a fund's movement against the benchmark, with a value close to 100 meaning the fund follows the benchmark very closely. R-squared can also help investors assess the usefulness of a fund's beta or alpha statistics: a higher R-squared means the fund's beta is more trustworthy. In this case, CGMFX's R-squared of 62.15 says that only 62.15% of its movements can be explained by the fluctuations of the S&P 500 index, so the S&P 500 may not be a good benchmark for CGMFX. On the other hand, DODGX's R-squared of 97.13 indicates the fund is well represented by the S&P 500 and its beta value can be trusted.

• Beta: Now that we know the S&P 500 may not be a good benchmark for CGMFX, its beta value, though slightly higher than that of DODGX, is not particularly helpful in assessing the fund's risk in comparison to the benchmark. Generally, beta measures a fund's risk associated with the market, and a low beta only means that the fund's market-related risk is low. Both DODGX and CGMFX, which have almost identical betas, tend to swing 20% more than the benchmark in the same direction.

• Alpha: With an R-squared value that we can trust, beta can be used to predict the fund's expected return, and alpha is the yardstick for the difference between a fund's actual return and that prediction. A large, positive alpha then means a fund has performed better than its beta would predict. For DODGX, its alpha of -1.95 means the fund has underperformed the benchmark (the S&P 500 index) by 1.95%, better than CGMFX's alpha of -8.10.

When evaluating an investment (a mutual fund in particular), there are many obvious factors we should consider: returns, risks, expenses, turnover ratio, etc.
Among them, the risk factor, when used properly, can help us gauge what we can expect from an investment, though past performance does not necessarily indicate future results.

3 Responses to “Morningstar Mutual Fund Risk Measures: Alpha, Beta, and R-squared”

1. I have a little thought… If we calculate the standard deviation of a mutual fund by NAV, we have the risk of that price distribution. But we are trying to evaluate the risk of a portfolio. If we calculate the risk of the portfolio, taking the correlations into account, we get a different rate of volatility. Which one is the real risk? Are we simplifying the calculation by assuming the NAV price already reflects the correlation among stocks? Thanks for your help!
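On the commenter's point: fund-level NAV volatility and portfolio volatility with correlations really do differ. A small illustrative sketch; the weights and correlation below are invented, not from the article:

```python
import math

def portfolio_stdev(w1, s1, w2, s2, rho):
    """Two-asset portfolio volatility; rho is the return correlation."""
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# Illustrative numbers: the two funds' standard deviations, 60/40 weights.
s1, s2 = 26.41, 33.32
weighted_avg = 0.6 * s1 + 0.4 * s2
full_corr = portfolio_stdev(0.6, s1, 0.4, s2, rho=1.0)
partial = portfolio_stdev(0.6, s1, 0.4, s2, rho=0.6)
# With rho = 1 the portfolio stdev equals the weighted average of the
# funds' stdevs; any rho < 1 makes it strictly smaller (diversification).
```

So the "real" risk of holding both funds is the portfolio number, which is lower than the weighted average of the individual fund volatilities whenever the funds are not perfectly correlated.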
Finding Acceptable Solutions in the Pareto-Optimal Range using Multiobjective Genetic Algorithms

P. J. Bentley^1 and J. P. Wakefield^2

^1Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK. Tel. 0171 391 1329 P.Bentley@cs.ucl.ac.uk (corresponding author)
^2Division of Computing and Control Systems, School of Engineering, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, UK. Tel. 01484 472107 J.P.Wakefield@hud.ac.uk

Keywords: multiobjective optimization, Pareto-optimal distributions, acceptable solutions, genetic algorithm

This paper investigates the problem of using a genetic algorithm to converge on a small, user-defined subset of acceptable solutions to multiobjective problems, in the Pareto-optimal (P-O) range. The paper initially explores exactly why separate objectives can cause problems in a genetic algorithm (GA). A technique to guide the GA to converge on the subset of acceptable solutions is then introduced. The paper then describes the application of six multiobjective techniques (three established methods and three new, or less commonly used methods) to four test functions. The previously unpublished distribution of solutions produced in the P-O range(s) by each method is described. The distribution of solutions and the ability of each method to guide the GA to converge on a small, user-defined subset of P-O solutions is then assessed, with the conclusion that two of the new multiobjective ranking methods are most useful.

The genetic algorithm (GA) has been growing in popularity over the last few years as more and more researchers discover the benefits of its adaptive search. Many papers now exist, describing a multitude of different types of genetic algorithm, theoretical and practical analyses of GAs and huge numbers of applications for GAs [7,8]. A substantial proportion of these applications involve the evolution of solutions to problems with more than one criterion.
More specifically, such problems consist of several separate objectives, with the required solution being one where some or all of these objectives are satisfied to a greater or lesser degree. Perhaps surprisingly then, despite the large numbers of these multiobjective optimization applications being tackled using GAs, only a small proportion of the literature explores exactly how they should be treated with GAs. With single objective problems, the genetic algorithm stores a single fitness value for every solution in the current population of solutions. This value denotes how well its corresponding solution satisfies the objective of the problem. By allocating the fitter members of the population a higher chance of producing more offspring than the less fit members, the GA can create the next generation of (hopefully better) solutions. However, with multiobjective problems, every solution has a number of fitness values, one for each objective. This presents a problem in judging the overall fitness of the solutions. For example, one solution could have excellent fitness values for some objectives and poor values for other objectives, whilst another solution could have average fitness values for all of the objectives. The question arises: which of the two solutions is the fittest? This is a major problem, for if there is no clear way to compare the quality of different solutions, then there can be no clear way for the GA to allocate more offspring to the fitter solutions. The approach most users of GAs favour to the problem of ranking such populations, is to weight and sum the separate fitness values in order to produce just a single fitness value for every solution, thus allowing the GA to determine which solutions are fittest as usual. However, as noted by Goldberg: "...there are times when several criteria are present simultaneously and it is not possible (or wise) to combine these into a single number." [7]. 
For example, the separate objectives may be difficult or impossible to manually weight because of unknowns in the problem. Additionally, weighting and summing could have a detrimental effect upon the evolution of acceptable solutions by the GA (just a single incorrect weight can cause convergence to an unacceptable solution). Moreover, some argue that to combine separate fitnesses in this way is akin to comparing completely different criteria; the question of whether a good apple is better than a good orange is meaningless. The concept of Pareto-optimality helps to overcome this problem of comparing solutions with multiple fitness values. A solution is Pareto-optimal (i.e., Pareto-minimal, in the Pareto-optimal range, or on the Pareto front) if it is not dominated by any other solutions. As stated by Goldberg [7]: │ Definition 1. │ │ │ │ A vector x is partially less than y, or x <p y when: │ │ │ │ (x <p y) <=> (All[i])(x[i] <= y[i]) /\ (Exists[i])(x[i] < y[i]) │ │ │ │ x dominates y iff x <p y. │ However, it is quite common for a large number of solutions to a problem to be Pareto-optimal (and thus be given equal fitness scores). This may be beneficial should multiple solutions be required, but it can cause problems if a smaller number of solutions (or even just one) is desired. Indeed, for many problems, the set of solutions deemed acceptable by a user will be a small sub-set of the set of Pareto-optimal solutions to the problems [4]. Manually choosing an acceptable solution can be a laborious task, which would be avoided if the GA could be directed by a ranking method to converge only on acceptable solutions. For this work, an acceptable solution (or champion solution) is defined: │ Definition 2. │ │ │ │ A solution is an acceptable solution if it is Pareto-optimal │ │ │ │ and it is considered to be acceptable by a human. 
│ Consequently, this paper will investigate the problem of using a genetic algorithm to converge on a small, user-defined subset of acceptable solutions to multiobjective problems, in the Pareto-optimal (P-O) range. The paper will initially focus on the difficulties posed by multiobjective problems to genetic algorithms. A technique to guide the GA to converge on the smaller subset of acceptable solutions will then be introduced. In the light of this, six different ranking methods will be described: three commonly used methods ('sum of weighted objectives', 'non-dominated sorting', and 'weighted maximum ranking' - based on Schaffer's VEGA [11]), and three new, or less commonly used methods ('weighted average ranking', 'sum of weighted ratios', and 'sum of weighted global ratios'). This paper will then describe the application of these six multiobjective techniques to four established test functions, and will examine the previously unexplored distribution of solutions produced in the P-O range(s) by each method. The distribution of P-O solutions and the ability of each method to guide the GA to converge on a small, user-defined subset P-O solutions will then be assessed. Existing literature seems to approach this ranking problem using methods that can be classified in one of three ways: the aggregating approaches, the non-Pareto approaches and the Pareto approaches. Many examples of aggregation approaches exist, from simple 'weighting and summing' [7,15] to the 'multiple attribute utility analysis' (MAUA) of Horn and Nafpliotis [9]. Of the non-Pareto approaches, perhaps the most well-known is Schaffer's VEGA [11,12], who (as identified by Fonseca [3]) does not directly make use of the actual definition of Pareto-optimality. Many other non-Pareto methods have been proposed (e.g. by Linkens [5], Ryan [10] and Sun [14]). Finally the Pareto-based methods, proposed first by Goldberg [7] have been explored by researchers such as Horn [9] and Srinivas [13]. 
In addition, many researchers are now introducing 'species formation' and 'niche induction' in an attempt to allow the uniform sampling of the Pareto set (e.g. Goldberg [7] and Horn [9]). For a comprehensive review, see the paper by Fonseca and Fleming [3]. Upon consideration, it seems that the problems caused by multiple objectives within the evolutionary search process of the GA have more to do with mathematics than evolution.Throughout the evolution by the GA, every separate objective (fitness) function in a multiobjective problem will return values within a particular range. Although this range may be infinite in theory, in practice the range of values will be finite. This 'effective range' of every objective function is determined not only by the function itself, but also by the domain of input values that are produced by the GA during evolution. These values are the parameters to be evolved by the GA and their exact values are normally determined initially by random, and subsequently by evolution. The values are usually limited still further by the coding used, for example 16 bit sign-magnitude binary notation per gene only permits values from -32768 to 32768. Hence, the effective range of a function can be defined: │ Definition 3. │ │ │ │ The effective range of f(x) is the range from min(f(x)) to max(f(x)) │ │ │ │ for all values of x that are actually generated by the GA, and for no other values of x. │ Although occasionally the effective range of all of the objective functions will be the same, in most more complex multiobjective tasks, every separate objective function will have a different effective range (i.e., the function ranges are noncommensurable [12]). This means that a bad value for one could be a reasonable or even good value for another, see fig. 1. 
If the results from these two objective functions were simply added to produce a single fitness value for the GA, the function with the larger range would dominate evolution (a poor input value for the objective with the larger range makes the overall value much worse than a poor value for the objective with the smaller range).

Figure 1. Different effective ranges for different objective functions (to be minimized).

Thus, the only way to ensure that all objectives in a multiobjective problem are treated equally by the GA is to ensure that all the effective ranges of the objective functions are the same (i.e., to make all the objective functions commensurable), or alternatively, to ensure that no objective is directly compared to another. In other words, either the effective ranges must be converted to make them equal and a range-dependent ranking method used, or a range-independent ranking method must be used.

Typically, range-dependent methods (e.g., 'sum of weighted objectives', 'distance functions', and 'min-max formulation') require knowledge of the problem being searched to allow the searching algorithm to find useful solutions [13]. Range-independent methods require no such knowledge, for being independent of the effective range of each objective function makes them independent of the nature of the objectives and the overall problem itself. Hence, a ranking method should not just be independent of individual applications (i.e., problem-independent), as stated by Srinivas [13]; it should be independent of the effective ranges of the objectives in individual applications (i.e., range-independent). Multiobjective ranking methods that are range-dependent or range-independent can be defined:

Definition 4. Given the objective functions of a problem, f[1..n](x), and a set of solution vectors to the problem, {s[1], s[2], ..., s[m]}: a multiobjective ranking method is range-dependent if the fitness ranking of {s[1], s[2], ..., s[m]} defined by the method changes when the effective ranges of f[1..n](x) change. A multiobjective ranking method is range-independent if the fitness ranking of {s[1], s[2], ..., s[m]} defined by the method does not change when the effective ranges of f[1..n](x) change.

Because range-independent ranking methods are independent of the problem, they require no weights to fine-tune in order to allow them to rank solutions appropriately into order of overall fitness for a GA. This is a significant advantage over range-dependent methods [1], allowing the same multiobjective GA to be used, unchanged, for a number of different multiobjective problems. Consequently, it would seem that range-independent ranking methods are the most appropriate type of ranking method to use in a general-purpose multiobjective GA.

In addition to being range-independent, there is another significant, and usually overlooked, property that a good ranking method should have: the ability to increase the 'importance' of some objectives with respect to others in the ranking of solutions, to allow search to be directed to converge on acceptable solutions. Importance can be defined:

Definition 5. Importance is a simple way to give a ranking method additional problem-specific information, in order to direct a GA to converge on acceptable solutions within a smaller subset of the Pareto-optimal range, by favouring those solutions closer to the optima of functions with increased importance, in proportion to this increased importance.

It has been known for some time that the quality of solutions to complex search problems can be improved by increasing the importance of a particular part or objective of the problem [2,6]. This is often achieved either by introducing objectives to the search algorithm one at a time (or in distinct 'stages') with the most important first, or by simply weighting the most important objectives more heavily.
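The dominance effect of an objective with a larger range, and its correction by weighting, can be illustrated with a small sketch. The objective functions, candidate values and weights below are hypothetical, invented purely for exposition.

```python
# Hypothetical two-objective minimization with noncommensurable ranges:
def g1(x):
    return x ** 2                      # optimum at x = 0, small range

def g2(x):
    return 1000.0 * (x - 2.0) ** 2     # optimum at x = 2, range ~1000x larger

def weighted_sum(x, w1=1.0, w2=1.0):
    return w1 * g1(x) + w2 * g2(x)

candidates = [0.0, 1.0, 2.0]

# Unweighted, g2 dominates: the 'best' solution is just g2's optimum.
best_raw = min(candidates, key=lambda x: weighted_sum(x))

# Weighting g1 by the range ratio restores a genuine compromise;
# increasing w1 further would then express extra importance for g1.
best_weighted = min(candidates, key=lambda x: weighted_sum(x, w1=1000.0))

print(best_raw, best_weighted)   # 2.0 1.0
```

This shows the dual nature of the weights: the same weight both equalises the effective ranges and, beyond that, encodes increased importance.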
Indeed, experience shows that many users of GAs and the 'sum of weighted objectives' ranking method are inadvertently increasing the importance of certain objectives without being aware of it, as they fine-tune their weights to improve evolution. In other words, the dual nature of these weights (i.e., the fact that each weight can not only equalise the effective ranges of objectives, but also define increased importance for objectives) is often overlooked.

Intentionally determining which objectives are more important in a problem can be a matter of debate, but to improve evolution time, it seems that often the best results are gained by making the most difficult-to-satisfy objectives the most important. However, some problems demand that certain objectives have differing levels of importance just to allow evolution of an acceptable solution. (For example, the optimization of an electronic device has the design criteria: cost, speed, size and power consumption. For some devices, a low cost is overwhelmingly important; for others, a high speed is of greatest importance.) Consequently, it is clear that importance is an essential tool to help the evolution of acceptable solutions. What is perhaps less clear is how the concept of importance should be implemented within multiobjective ranking methods.

One way to allow the definition of importance within aggregation-based ranking methods is to take advantage of the fact that these methods usually guide the GA to converge upon a single 'best compromise' solution. For the purposes of this paper, the best compromise solution is defined:

Definition 6. A best compromise solution is the solution with the sum of (weighted) objective fitnesses minimized.

By weighting appropriate objectives with importance values, this best compromise solution can be made the same as (or at least moved into the vicinity of) the required solution, allowing the GA to converge directly to an acceptable solution.
Thus, producing a single best compromise solution is not always a disadvantage. Nevertheless, the more favoured ranking methods do not employ aggregation (and typically are range-independent). They are usually used with some form of niching and speciation method to allow the GA to generate not one, but a range of non-dominated P-O solutions. (Niching can also help the quality of solutions by preventing excessive competition between distant solutions [7].) The user is then required to select the preferred solution from this range of different solutions.

However, particularly for problems with many objectives, only a small proportion of P-O solutions may be acceptable solutions. This means that even when hundreds of different solutions are generated by the GA, there can be no guarantee that an acceptable solution will be among them. Moreover, for such large problems, it is not always feasible to allow the user to pick the preferred solution from a truly representative range of P-O solutions: the number to be considered may be too large. Thus, the ranking method needs further information, to guide the algorithm to converge more closely to acceptable solutions within the range of P-O solutions. This information is 'importance': by specifying which objectives must be satisfied more than others, the GA can converge more closely to acceptable solutions, not just P-O solutions.

Unfortunately, there is no easy way to increase the importance of one objective in relation to another without the two objectives being directly compared to each other. In other words, whilst it is simple to specify increased importance with a range-dependent aggregation method such as 'sum of weighted objectives' (just increase the weights), with a range-independent method such as 'non-dominated sorting', specifying importance is more complex.
(Fonseca forces a kind of importance with his 'preference articulation' method [4], but this requires detailed knowledge of the ranges of the functions themselves, and is not a continuous guide to evolution.) Thus, alternative methods of ranking multiobjective solutions are required that are ideally range-independent and allow the easy specification of importance, to enable the GA to converge on the subset of acceptable solutions.

5 Multiobjective Ranking Methods

There follow descriptions of six different ranking methods. The first three are the most commonly used methods: the range-dependent 'weighted sum' (aggregation) method, the range-independent Pareto non-dominated sorting, and a range-independent method based on Schaffer's VEGA [11,12]. The last three are new range-independent methods, developed in an attempt to allow importance to be specified with such methods. The techniques used within these methods are not new, but they have as yet rarely been used to rank multiobjective populations within a genetic algorithm.

Method 1: Sum of Weighted Objectives (SWO)

This is perhaps the most commonly used method because of its simplicity. All separate objectives are weighted to make the effective ranges equivalent (and to specify importance) and then summed to form a single overall fitness value for every solution. These values are then used by the GA to allocate the fittest solutions a greater chance of having more offspring. (Because of the similarity in nature and performance between this method and many of the other 'classical' methods [13], only this classical method will be explored.)

Method 2: Non-Dominated Sorting (NDS)

Described by Goldberg [7], this range-independent method and variants of it are commonly used. The fitnesses of the separate objectives are treated independently and never combined, with only the values for the same objective in different solutions being directly compared.
Solutions are ranked into 'non-dominated' order, with the fittest being the solutions dominated the least by others (i.e., having the fewest solutions partially less than themselves). These fittest solutions can then be allocated a greater probability of having more offspring by the GA.

Method 3: Weighted Maximum Ranking (WMR)

This ranking method is based on Schaffer's VEGA [11,12]. WMR forms lists of fitness values of each solution for each objective. The fittest n solutions from each list are then extracted, and random pairs are selected for reproduction. Importance levels can be set by weighting appropriate fitness values for solutions. Note that the additional heuristic used by Schaffer to encourage 'middling' values [11] was not implemented in WMR.

Method 4: Weighted Average Ranking (WAR)

This is the first of the alternative ranking methods proposed. The separate fitnesses of every solution are extracted into a list of fitness values for each objective. These lists are then individually sorted into order of fitness, resulting in a set of different ranking positions for every solution for each objective. The average rank of each solution is then identified, with this value allowing the solutions to be sorted into order of best average rank. Thus, the higher an average rank a solution has, the greater its chance of producing more offspring. Since all objective fitnesses are treated separately, this method is range-independent. This technique allows the specification of importance by the weighting of average ranking values for each solution.

Method 5: Sum of Weighted Ratios (SWR)

This is the second of the ranking methods proposed for GAs, and is basically an extension to SWO (method 1). The fitness values for every objective are converted into ratios, using the best and worst solution in the current population for that objective every generation.
More specifically:

    fitness_ratio[i] = (fitness_value[i] - min(fitness_value)) / (max(fitness_value) - min(fitness_value))

This removes the range-dependence of the solutions, and they can be weighted (for the setting of importance) and summed to provide a single fitness value for each solution, as with the first method.

Method 6: Sum of Weighted Global Ratios (SWGR)

This method is the third of the proposed ranking methods for GAs, and is a variation of SWR (method 5). Instead of the separate fitnesses for each objective in every solution being converted to a ratio using the current population best and worst values, the globally best and worst values are used. Again, the importance of individual objectives can be set by weighting the appropriate values.

6 Application of the Ranking Methods

6.1 Test Functions

To explore and compare the distributions of solutions generated by the six ranking methods, they were applied in turn to four different test functions: F[1] to F[4]. The first three are identical to those used by Schaffer [11,12], whilst F[4] is identical to Fonseca's f[1] [4]. Each function was chosen to represent a different class of function (i.e., each has different numbers of P-O ranges and/or best compromise solutions). All functions are to be minimized, see Fig. 2.

Figure 2. The four test functions used to compare the ranking methods (P-O ranges shown by grey shaded regions and best compromise solutions marked with dotted lines).

All six methods were used with a basic genetic algorithm using binary coding, a population size of 50, and running for 100 generations. Probability of crossover was 1.0, probability of mutation was 0.01. Although this GA used elitist selection techniques, with all of the ranking methods described in this paper it is possible to use alternatives.
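As an illustration of Methods 5 and 6, the ratio conversion and weighted-ratio ranking can be sketched in Python. This is an invented minimal implementation for exposition, not the code used in the experiments; passing globally tracked minima and maxima as lo/hi instead of the population values turns the SWR sketch into SWGR.

```python
def fitness_ratios(values, lo=None, hi=None):
    """Convert one objective's fitness values to ratios in [0, 1].
    With lo/hi left as None this is SWR (population min/max each
    generation); passing globally tracked lo/hi instead gives SWGR."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    span = hi - lo
    if span == 0:
        return [0.0 for _ in values]
    return [(v - lo) / span for v in values]

def swr_rank(objective_values, weights):
    """objective_values: one list of fitnesses per objective (minimization).
    Returns solution indices, best (lowest weighted ratio sum) first."""
    ratio_lists = [fitness_ratios(vals) for vals in objective_values]
    n = len(objective_values[0])
    totals = [sum(w * r[i] for w, r in zip(weights, ratio_lists))
              for i in range(n)]
    return sorted(range(n), key=lambda i: totals[i])

# Two noncommensurable objectives, three solutions: the compromise
# solution (index 1) ranks first despite the unequal raw ranges.
f1_vals = [0.0, 1.0, 4.0]          # small effective range
f2_vals = [4000.0, 1000.0, 0.0]    # large effective range
print(swr_rank([f1_vals, f2_vals], weights=[1.0, 1.0]))   # [1, 0, 2]
```

Because each objective is normalized to [0, 1] before summing, the weights here express importance alone, rather than having to compensate for the raw ranges as well.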
The distributions produced by methods 1-6 for each function were calculated by running the GA between 1,000 and 10,000 times (1,000 runs for F[1], 2,000 runs for F[2] and F[3], and 10,000 runs for F[4]). It was assumed that the distribution of solutions produced by a series of runs of this algorithm would not differ significantly from the distribution of solutions obtained by an algorithm with niching or other speciation techniques.

6.2 Evolved Results: F[1]

The first experiment performed with each method was simply to allow the GA to minimize F[1]. This function was used to validate that each method would rank solutions to single-objective problems correctly (as was done for VEGA by Schaffer [12]). As expected, every method allowed the GA to converge on, or very near to, the optimal solution of (0,0,0), every time. (The distributions of solutions for this function are all at a single point and hence are not shown.)

6.3 Evolved Results: F[2]

The next experiment involved minimising F[2]. To give some idea of the quality and distribution of solutions, 2,000 test runs were performed for each method. All methods allowed the GA to produce P-O solutions every time; however, as fig. 3 shows, the distributions of these solutions on the Pareto front for this function are very different for each method.

SWO and SWGR both produced solutions very close to, or exactly at, the best compromise value of 1.0. SWR also favoured this value, but with a larger 'spread', with the numbers of solutions produced falling almost logarithmically the further from the best compromise value they were. NDS showed a fairly even distribution throughout the P-O range, and WMR favoured solutions at either function optimum, with nothing in between. WAR gave the most unexpected and fascinating distribution, with solutions close to each optimum and close to the best compromise value being favoured, and all other P-O values being less commonly produced, see fig. 3.

Figure 3.
Distributions of solutions within the Pareto-optimal range for function F[2].

Additionally for F[2], the average solution of each method was calculated to give an indication of how balanced these distributions were. In other words, no matter what value(s) of P-O solution were favoured, the mean value for F[2] should be the centre value of 1.0. Table 1 (F[2] test 1) shows that all methods produced mean solutions close to 1.0.

Table 1. Average solutions for each ranking method in F[2] tests 1-3.

                 Best Compromise   SWO       NDS       WMR       WAR       SWR       SWGR
    F[2] test 1  1.0               1.00922   0.93999   0.97595   1.10226   1.21556   0.98763
    F[2] test 2  1.0               2.01459   0.85992   0.99532   1.17007   1.22672   0.98825
    F[2] test 3  1.333             1.37837   N/A       1.45757   2.01466   1.66141   1.310

Two further tests were performed using F[2]. For the second test, f[21] was temporarily changed to:

    f[21] = x^2 / 1000

to investigate the range-independence (or lack of it) of each method. As Table 1 (F[2] test 2) shows, after 2,000 test runs for each method, SWO (method 1) clearly demonstrates its range-dependence by converging, on average, to the optimum of f[22] instead of near to 1.0. All other methods show their range-independence by continuing to give mean solution values close to 1.0.

Finally, for the third test with F[2], the importance of f[22] was doubled for every method capable of supporting importance (the two objectives being otherwise unchanged from the first test). By increasing the importance, the best compromise solution (i.e., the minimum of weighted and summed objectives) is changed from 1.0 to 1.333. Only three methods (SWO, SWR and SWGR) successfully produced values close to this new desired value (see Table 1, F[2] test 3). NDS does not support importance, and WMR just doubled the frequency of optimal solutions to f[22] (giving a deceptive mean solution), without actually producing any values between the two function optima.
Finally, and quite unexpectedly, WAR simply converged every time to the optimum of f[22]. Upon investigation, it emerged that WAR does not permit the specification of gradual importance values. It was expected that increasing the weighting of the ranking value for more important objectives would introduce some level of additional importance for these objectives. Interestingly though, in practice it does not appear to be possible to gradually increase 'importance' values: either all objectives are treated equally, or the objective with the increased weight dominates all other objectives completely. Somewhat counter-intuitively, it seems that no matter how large or small an increase is made to a weight, it will make that objective dominate all others.

6.4 Evolved Results: F[3]

Experiments were then performed using F[3] with each method in turn. The function F[3] is significant since it has two disjoint P-O ranges. Nevertheless, the distributions of solutions for this function were surprisingly consistent with those for F[2], see fig. 4. As before, SWO and SWGR almost always converged to solutions near to the best compromise value of 4.5 (for F[3]). Again, SWR favoured the best compromise solution with a slightly larger 'spread', but this time some solutions close to the optimum of f[31] were also produced. NDS gave a fairly even distribution of solutions within the two P-O ranges, and WMR again only generated solutions at the optima of the two objectives, with none in between. Finally, WAR showed its highly unusual distribution once more, by favouring solutions close to the optima of both objectives (including both minima of the multimodal objective f[31]), and the best compromise solution to a lesser degree.

Figure 4. Distributions of solutions within the P-O ranges (shown by grey shaded regions) for function F[3].

6.5 Evolved Results: F[4]

Finally, experiments were performed using F[4] with each method in turn.
Again, consistent distributions of solutions were obtained, see fig. 5. It should be noted that F[4] is a significant type of function because solutions between the optima of the two objectives are worse than at one optimum or the other. This results in two equal best compromise solutions, one at each optimum. Hence, although SWO and SWGR this time showed two peaks of distribution, these lie on the best compromise solutions, just as before. Once again, SWR favoured the best compromise solutions with a slightly larger 'spread'. As before, WMR favoured the two optima of the functions with nothing in between. NDS again produced a distribution of solutions covering the entire P-O range, but for this function an unexpected and unwelcome bias towards the middle of the range was evident (where most solutions are very poor). Finally, WAR showed its typically unusual distribution, again favouring values close to the optima of the objectives (and the best compromise solutions, as they are the same for F[4]), with other Pareto-optimal values being favoured less.

Figure 5. Distributions of solutions within the Pareto-optimal range for function F[4].

6.6 Assessing the Distributions

It should be stressed that all six of the ranking methods allow a GA to produce almost nothing but Pareto-optimal solutions. It is clear, however, that the distribution of these solutions within the Pareto-optimal range is a highly significant factor in determining whether an acceptable solution will be produced. As the results of the tests show, each ranking method consistently seems to favour certain types of P-O solution, based upon three factors: the Pareto range(s), the separate optimum or optima for each objective, and the best compromise solution(s) of the function. These patterns of distribution remain consistent even with more unusual functions with multiple Pareto ranges (F[3]) and multiple best compromise solutions (F[4]). Upon consideration, these distributions are explicable.
The three aggregation-based ranking methods (SWO, SWR and SWGR) must inevitably favour the best compromise solution(s) to a problem, by definition. (The best compromise solution is the solution with the sum of weighted objectives minimized, so any ranking method that sums objectives in any way should have a convergence related to the best compromise value.) NDS gives all non-dominated solutions equal rank, so a fairly even distribution throughout the P-O range is to be expected. WMR bases the fitness of a solution on the maximum rank the solution has for any single objective, so this predictably results in the generation of solutions only at the optimum of one objective, with nothing in between (a high rank equates to a good value for that objective).

Finally, even the unexpected distributions of WAR are explicable. WAR bases the fitness of a solution on the average rank over every objective. This means that a solution with a very high rank for one objective and a low rank for another will be judged equally fit compared to a solution with 'middling' ranks for two objectives. In other words, solutions close to the optima of objective functions will be favoured, as will solutions close to the best compromise solution(s).

The results show that NDS, WMR and the new WAR method all give potentially useful distributions of solutions for applications where multiple solutions are required, with predictable but immovable biases. In contrast, SWO forces the GA to converge on a single solution as close to the best compromise solution as possible, and does allow this bias of P-O solutions to be altered by the user. Unfortunately, because it is range-dependent, its weights must be laboriously set by trial and error in order to define the location of the best compromise solution(s) in the Pareto range.
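The dominance comparison and rank-by-domination-count that underlie NDS (Method 2) can be sketched as follows. This is an illustrative Python sketch, not the implementation used in the experiments.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def nds_rank(solutions):
    """Rank by domination count: fittest = dominated by fewest others."""
    counts = [sum(dominates(other, s) for other in solutions)
              for s in solutions]
    return sorted(range(len(solutions)), key=lambda i: counts[i])

# (f1, f2) fitness pairs; the first three are mutually non-dominated
# (and so get equal rank), while the last is dominated by (1.0, 1.0).
sols = [(0.0, 4.0), (1.0, 1.0), (4.0, 0.0), (2.0, 2.0)]
print(nds_rank(sols))   # [0, 1, 2, 3]
```

Since the three non-dominated solutions receive identical domination counts, the method gives no basis for preferring one over another, which is consistent with the fairly even distribution across the P-O range observed for NDS above.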
However, the methods SWR and SWGR both generate solutions in the vicinity of the best compromise solution(s), and being range-independent, they allow the location of this bias to be easily defined by specifying relative importance values for the objectives. In other words, these two methods allow the location of a subset of acceptable solutions in the P-O range to be defined by specifying which objectives of the problem are more important. The size of the subset depends on which method is used. Hence, for problems where a range of acceptable solutions is desired, biased in favour of those objectives with increased importance, SWR is a suitable choice. For problems where a smaller range, or even a single acceptable solution, is desired, SWGR is a suitable choice.

7 Conclusions

This paper investigated the problem of using a genetic algorithm to converge on a small, user-defined subset of acceptable solutions to multiobjective problems, in the Pareto-optimal range. Multiobjective fitness functions cause problems within GAs because the separate objectives have unequal effective ranges (i.e., they are noncommensurable). If the multiobjective ranking method is not range-independent, then one or more objectives in the problem can dominate the others, resulting in evolution to poor solutions.

The concept of importance introduced in this paper allows the GA to converge on a smaller subset of acceptable P-O solutions. Giving certain objectives in a problem greater importance allows ranking methods to generate not just non-dominated solutions, but smaller subsets of acceptable non-dominated solutions at user-defined locations in the Pareto front.

The significance of range-independence and importance in multiobjective ranking methods was shown by the distributions of solutions generated by six methods, applied to four established test functions. The only range-dependent method, SWO, was found to be incapable of coping with objectives with incompatible effective ranges.
The three range-independent methods NDS, WMR and WAR all produced consistent, and sometimes unusual, distributions of P-O solutions, making them potentially useful for some problems. However, only the two new range-independent methods that supported importance, SWR and SWGR, had useful distributions and allowed the bias of their distributions to be easily altered. Indeed, because of these results, SWGR was chosen to be used within a generic evolutionary design system, which has since been used to tackle a wide range of different solid object design problems (involving the minimization of numerous different multiobjective functions) with great success [1].

References

[1] Bentley, P. J., 1996, Generic Evolutionary Design of Solid Objects using a Genetic Algorithm. Ph.D. Thesis, University of Huddersfield, Huddersfield, UK.
[2] Dowsland, K. A., 1995, Simulated Annealing Solutions for Multi-Objective Scheduling and Timetabling. Applied Decision Technologies (ADT '95), London, 205-219.
[3] Fonseca, C. M. & Fleming, P. J., 1995a, An Overview of Evolutionary Algorithms in Multiobjective Optimization. Evolutionary Computation, 3:1, 1-16.
[4] Fonseca, C. M. & Fleming, P. J., 1995b, Multiobjective Genetic Algorithms Made Easy: Selection, Sharing and Mating Restriction. Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA 95), Sheffield, 45-52.
[5] Linkens, D. A. & Nyongesa, H. O., 1993, A Distributed Genetic Algorithm for Multivariable Fuzzy Control. IEE Colloquium on Genetic Algorithms for Control Systems Engineering, Digest No. 199/130, 9/1-9/3.
[6] Marett, R. & Wright, M., 1995, The Value of Distorting Subcosts When Using Neighbourhood Search Techniques for Multi-objective Combinatorial Problems. Applied Decision Technologies, London.
[7] Goldberg, D. E., 1989, Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley.
[8] Holland, J. H., 1992, Genetic Algorithms. Scientific American, 66-72.
[9] Horn, J.
& Nafpliotis, N., 1993, Multiobjective Optimisation Using the Niched Pareto Genetic Algorithm. Illinois Genetic Algorithms Laboratory (IlliGAL), report no. 93005.
[10] Ryan, C., 1994, Pygmies and Civil Servants. Advances in Genetic Programming, MIT Press.
[11] Schaffer, J. D., 1984, Some Experiments in Machine Learning Using Vector Evaluated Genetic Algorithms. PhD dissertation, Vanderbilt University, Nashville, USA.
[12] Schaffer, J. D., 1985, Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. Genetic Algorithms and Their Applications: Proceedings of the First International Conference on Genetic Algorithms, 93-100.
[13] Srinivas, N. & Deb, K., 1995, Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evolutionary Computation, 2:3, 221-248.
[14] Sun, Y. & Wang, Z., 1992, Interactive Algorithm of Large Scale Multiobjective 0-1 Linear Programming. Sixth IFAC/IFORS/IMACS Symposium on Large Scale Systems, Theory and Applications, 83-86.
[15] Syswerda, G. & Palmucci, J., 1991, The Application of Genetic Algorithms to Resource Scheduling. Genetic Algorithms: Proceedings of the Fourth International Conference, Morgan Kaufmann, 502-508.
Math Forum - Problems Library - Middle School, Mathematical Patterns

Middle School: Mathematical Patterns

In grades 6-8, students continue to explore patterns as a means of developing algebraic thinking and logic skills. The study of patterns and relationships in the middle grades focuses on patterns that relate to linear functions, which arise when there is a constant rate of change. Students use tables, graphs, words, and symbolic expressions to solve problems involving functions and patterns of change, and learn to look for formulas to express the patterns they discover. In addition, a goal at this level is to develop a facility with using patterns and functions to represent, model, and analyze phenomena in the real world.

The problems listed below require students to investigate patterns in order to find an answer or reach a solution. They may address the NCTM Grades 6-8 Algebra Standard, Problem Solving Standard, or Connections Standard. To find relevant sites on the Web, browse and search Patterns/Relationships in our Internet Mathematics Library; to find middle-school sites, go to the bottom of the page, set the searcher for middle school (6-8), and press the Search button.

Access to these problems requires a Membership.
Fantastic Furniture builds sofas. It costs $2,725 to build 9 sofas and $3,850 to build 14 sofas. Which equation represents the cost, C(x), as a linear function of the number of sofas, x?

C(x) = 225x - 700
C(x) = 225x + 700
C(x) = 700x - 225
C(x) = 700x + 225
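One way to check the answer choices: recover the slope and intercept of the linear cost function from the two given points. A quick sketch (the variable names are arbitrary):

```python
# The two data points determine the linear cost function C(x) = m*x + b:
x1, c1 = 9, 2725     # 9 sofas cost $2,725
x2, c2 = 14, 3850    # 14 sofas cost $3,850

m = (c2 - c1) / (x2 - x1)   # cost per extra sofa
b = c1 - m * x1             # fixed cost

print(m, b)   # 225.0 700.0
```

So the cost function is C(x) = 225x + 700, matching the second choice.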
Shenandoah, TX SAT Math Tutor

Find a Shenandoah, TX SAT Math Tutor

...There is nothing you learn early that will be contradicted by later lessons. My approach in working with you on algebra 1 and algebra 2 is first to assess your familiarity and comfort with basic concepts, and explain and clarify the ones you need some improvement on; and then to work on the spec...
20 Subjects: including SAT math, writing, algebra 1, algebra 2

...Ability to blend discrete sound units into words. Ability to segment sounds in words. Ability to understand that sometimes two or more letters represent a sound.
15 Subjects: including SAT math, reading, English, grammar

Hello students and parents! I have 5 years of experience tutoring both high school and college students in math courses ranging from Pre-Algebra to Calculus I, II. My tutoring method starts with first finding out how much the student understands about the course material or specific question.
14 Subjects: including SAT math, geometry, algebra 1, algebra 2

...I also work with students to understand their personal learning style and how they learn best. I have worked as a teacher for 11 years and I have worked with students doing both TAKS prep and remediation. I am familiar with the information that students need to know to pass the test and if they have failed the test I analyze their scores and focus on what they need to learn in order to pass.
31 Subjects: including SAT math, reading, Spanish, English

...I have been teaching 5th and 6th grade students in math as well as some reading for the past 5 years. I recently finished my M.Ed with a minor in reading which focused heavily on vocabulary development. I have tutored students as young as 2nd grade in math, reading and science.
11 Subjects: including SAT math, reading, writing, algebra 1
{"url":"http://www.purplemath.com/Shenandoah_TX_SAT_Math_tutors.php","timestamp":"2014-04-21T00:14:46Z","content_type":null,"content_length":"24135","record_id":"<urn:uuid:e6dba39e-7a52-4c17-acb9-22cea28416c2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Einstein equation

What are called Einstein's equations are the equations of motion of gravity: the Euler-Lagrange equations induced by the Einstein-Hilbert action. They say that the Einstein tensor $G$ of the metric/the field of gravity equals the energy-momentum tensor $T$ of the remaining force- and matter-fields:

$G = T \,.$

Existence and uniqueness

Given a choice of Cauchy surface $\Sigma$, the initial value problem for Einstein's differential equations of motion is determined by a choice of Riemannian metric on $\Sigma$ and a second fundamental form along $\Sigma$. With this data a solution to the equation exists and is unique (Klainerman-Nicolo 03).

A general discussion is for instance in section 11 of

A discussion of the vacuum Einstein equations (only gravity, no other fields) in terms of synthetic differential geometry is in

PDE theory

Genuine PDE theory for Einstein's equations goes back to local existence results by Yvonne Choquet-Bruhat in the 1950s. Global existence in the presence of a Cauchy surface was then shown in

For further developments see

See also

The Cauchy Problem in Classical Supergravity

Revised on December 18, 2013 07:52:13 by Urs Schreiber
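Spelled out in index notation — a standard form added here for concreteness (the entry writes it abstractly, with units absorbed, as $G = T$):

```latex
% Einstein field equations with cosmological constant, conventional units.
% The left-hand side (minus the \Lambda-term) is the Einstein tensor
% G_{\mu\nu} := R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu}.
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu}
  = \frac{8\pi G}{c^{4}} \, T_{\mu\nu}
```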
Binary Representation of Real Numbers

February 7th 2011, 12:05 PM

Hey everyone, I have a question that asks me to show that the set of real numbers has no binary representation. I understand that the real numbers cannot be put into one-to-one correspondence with the natural numbers. I also understand that the real numbers are uncountably infinite while the natural numbers are countably infinite. I am a bit lost on how to use this information to prove this. Can anyone help me make the connection?

February 7th 2011, 11:47 PM

What do you mean by a binary representation of a set (of real numbers)?
Sampling a matching distribution for bootstrapping

Sometimes you just don’t trust statistical tests, and you want an ‘empirical’ p-value to tell you whether your data differ from the null distribution. In those cases it is common to use bootstrapping. This usually takes the following form:

1. Randomly generate fake data
2. Compare it to your actual data
3. Repeat steps 1-2 1000 times.

If the real data is ____er than the fake data 998 times out of 1000, then it’s significantly ____ at the p = 2/1000 = .002 significance threshold. The _____ could be any property you’re interested in testing.

A good recent example is Nicolae 2011, who wanted to know whether SNPs that are GWAS hits are more likely to be eQTLs - in other words, whether trait-associated genetic variants are more likely to control gene expression. More likely than what, you ask? More likely than the average SNP that gets genotyped on the SNP chips used for genome-wide association studies. To test his hypothesis, Nicolae generated 1000 random sets of SNPs from the master set of all SNPs included on two popular SNP chips used in GWAS. In Nicolae’s case, the actual GWAS hits indeed contained more known eQTLs than almost any random set of SNPs did (though curiously, the empirical p value is never actually stated).

Here are histograms I created based on CEU allele frequencies from HapMap Phase 3 for GWAS catalog SNPs (as of Jan 16, 2013) vs. all SNPs included in the HapMap SNP chips:

Pretty different distributions, huh?
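As a quick aside, the arithmetic in that recipe is worth making concrete. Here is a minimal Python sketch of my own (not from Nicolae’s paper) of how the empirical p-value falls out of steps 1-3; the statistic and the null draws are entirely made up:

```python
import random

def empirical_p(observed_stat, null_stats):
    # fraction of fake-data statistics at least as extreme as the real one
    exceed = sum(1 for s in null_stats if s >= observed_stat)
    return exceed / len(null_stats)

random.seed(0)
null_stats = [random.gauss(0, 1) for _ in range(1000)]  # steps 1 and 3: 1000 fake datasets
observed = 2.9                                          # pretend this came from the real data
p = empirical_p(observed, null_stats)                   # step 2: compare
```

If only 2 of the 1000 null draws beat the observed statistic, this returns 2/1000 = .002, the same arithmetic as above. Now, back to those mismatched distributions.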
That’s a problem for the analysis because if GWAS SNPs have higher MAF, and eQTLs also have higher MAF, then you’ll find enrichment of eQTLs in GWAS whether or not it’s the case that many trait-associated SNPs control traits by controlling gene expression.

To handle this problem, Nicolae drew random sets of SNPs matching on MAF. Specifically, he binned all of the SNPs by increments of 0.05, so there are the 0.00 – 0.05 MAF SNPs, the 0.05 – 0.10 MAF SNPs, etc. Then he drew the same number from each bin in the random sets as were present in the true GWAS set.

I discovered that this binning approach is delightfully easy to implement in PostgreSQL using subqueries to group things into bins and row_number() over (partition by... to get just the number of SNPs you want in each bin. This code to create one permutation runs in just 6 seconds:

select sub.snpid
from (
    select snpid, floor(p.ceu_freq*20)/20 floorbin,
           row_number() over (partition by floor(p.ceu_freq*20)/20 order by random()) rn
    from platform_snps p
) sub,
(
    select floor(p.ceu_freq*20)/20 floorbin, count(*) n
    from platform_snps p, gwas_match gm
    where p.snpid=gm.snpid
    group by 1
    order by 1
) gfc -- gfc = "gwas floorbin counts"
where sub.floorbin = gfc.floorbin -- match on floorbin
and rn <= gfc.n -- limit to number of GWAS hits in that floorbin

Note that every time you run this query it will draw with replacement, so some SNPs will appear in more than one permutation. And indeed, you have no choice but to draw with replacement, because there aren’t enough matching control SNPs to let you run 1000 iterations without replacement.
You can easily check that using this query:

select control.floorbin, control.n::numeric / gfc.n::numeric max_sets_without_replacement
from (
    select floor(p.ceu_freq*20)/20 floorbin, count(*) n
    from platform_snps p
    group by 1
    order by 1
) control,
(
    select floor(p.ceu_freq*20)/20 floorbin, count(*) n
    from platform_snps p, gwas_match gm
    where p.snpid=gm.snpid
    group by 1
    order by 1
) gfc -- gfc = "gwas floorbin counts"
where control.floorbin = gfc.floorbin -- match on floorbin
group by 1,2
order by 1,2

For the GWAS catalog (version downloaded January 16, 2013) vs. the platform SNPs from HapMap Phase 3, you could only run at most 136 permutations without replacement before you’d run out of unique SNPs in at least one bin.

Anyway, this binning approach is quick and elegant but it’s a coarse tool. If the underlying MAF distributions are quite different (which they indeed are), then their distribution even within each bin will be different enough that the overall MAF distribution for GWAS SNPs and random SNP sets may prove slightly (but significantly) different. You could get around that problem by going to smaller bin sizes, but when your bins get too small, your whole permutation procedure becomes a house of cards – there is too little variety in which SNPs are selected between the different permutations, and so what appears to be the result of 1000 permutations actually depends heavily on a few sets of SNPs that are represented over and over.

One alternative to binning is to select a particular control SNP for each and every GWAS SNP. In my case, I am interested in matching on LD block size rather than minor allele frequency, but the principle is the same. Instead of counting the number of SNPs with LD block size between 10kb and 20kb and demanding an equal number of control SNPs in that bin, I can demand that each individual GWAS SNP be matched by an individual control SNP whose LD block size is within ±2kb of it.
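Both pieces of logic here — the per-bin matched draw and the how-many-sets-before-a-bin-runs-dry check — are easy to mirror outside SQL. A rough Python sketch of my own, on made-up MAF data (the real counts come from the queries above):

```python
import math
import random
from collections import Counter, defaultdict

def floorbin(maf, width=0.05):
    # same idea as floor(ceu_freq*20)/20 in the SQL
    return math.floor(maf / width) * width

def matched_draw(gwas_mafs, control_snps):
    # control_snps: list of (snpid, maf) pairs; returns one random control
    # set with the same per-bin counts as the GWAS set
    need = Counter(floorbin(m) for m in gwas_mafs)
    pool = defaultdict(list)
    for snpid, maf in control_snps:
        pool[floorbin(maf)].append(snpid)
    drawn = []
    for b, n in need.items():
        drawn.extend(random.sample(pool[b], n))  # n controls from bin b
    return drawn

def max_sets_without_replacement(gwas_mafs, control_snps):
    # the bin with the fewest controls per GWAS SNP is the bottleneck
    need = Counter(floorbin(m) for m in gwas_mafs)
    have = Counter(floorbin(maf) for _, maf in control_snps)
    return min(have[b] // n for b, n in need.items())

random.seed(42)
gwas = [0.12, 0.13, 0.31, 0.44]                              # made-up GWAS MAFs
controls = [(i, random.random() * 0.5) for i in range(500)]  # made-up control pool
matched = matched_draw(gwas, controls)
limit = max_sets_without_replacement(gwas, controls)
```

Each call to matched_draw samples the full pool afresh, so repeated calls draw with replacement across permutations — just like rerunning the SQL — while limit plays the role of the smallest per-bin ratio reported by the check query (136 in my data).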
I couldn’t think of a good way to do this directly in PostgreSQL. Joining every GWAS SNP to every control SNP within ±2kb in block size takes 20 minutes and results in 630 million rows – that’s a lot of effort just to then take one random control SNP for each GWAS SNP. To do this properly you don’t need a full join, you just need to start joining and then give up as soon as you find a single match. PostgreSQL doesn’t provide a way to do that, so I fell back on R. Here’s what I came up with:

library(RPostgreSQL) # first load package RPostgreSQL
drv = dbDriver("PostgreSQL")
con = dbConnect(drv, dbname="enrep", user="postgres", password="postgres", port=5433)

# get list of GWAS SNPs
readsql = "select ld.snpid, ld.blocksize
from ldblocks ld
where exists (select null from gwas_match gm where gm.snpid = ld.snpid)
and ld.blocksize > 1
order by 1"
rs = dbSendQuery(con,readsql)
gsnps = fetch(rs,n=-1)
ngsnps = dim(gsnps)[1] # number of GWAS SNPs to match

# get list of available control SNPs
readsql = "select ld.snpid, ld.blocksize
from ldblocks ld
where ld.blocksize > 1
order by 1"
rs = dbSendQuery(con,readsql)
csnps = fetch(rs,n=-1)
ncsnps = dim(csnps)[1] # number of control SNPs available for matching

# set up a table to hold permutation results
writesql = "drop table if exists permutations;
create table permutations (
rowid serial primary key,
perm integer,
gsnpid integer,
csnpid integer
);"
rs = dbSendQuery(con,writesql)

set.seed(.1234) # set random seed so results are reproducible
nperm = 2 # for testing - in practice this would be 1000
for (perm in 1:nperm) {
    csnps_random = csnps[sample(ncsnps,ncsnps,replace=FALSE),] # draw the control SNPs into a random order
    for (i in 1:ngsnps) { # loop through GWAS SNPs to find a match for each
        for (j in 1:ncsnps) { # for each GWAS SNP, loop through control SNPs looking for a match
            if (abs(csnps_random$blocksize[j] - gsnps$blocksize[i]) < 2000) { # match if blocksize is within 2kb
                # then use this SNP in this permutation
                writesql = paste("insert into permutations (perm,gsnpid,csnpid) values (",perm,",",gsnps$snpid[i],",",csnps_random$snpid[j],");",sep="")
                rs = dbSendQuery(con,writesql)
                # and remove it so we don't use it again
                csnps_random = csnps_random[-j,]
                break # move on to the next GWAS SNP once a match is found
            }
        }
    }
}

Between the nested loop and the ridiculous number of round-trips to the database, this takes about 5 minutes per permutation (compared to 6 seconds for the binning approach). If I eliminate the repeated insert queries and just save the permutation results to a local vector in the R workspace (to later write to a text file that can be copied into PostgreSQL), then I can cut that down to about 2.5 minutes per permutation. While this doesn’t quite rise to the level of incomputable (you could still generate 1000 permutations in a few days of CPU time), it would certainly change the way I do things. With 6 seconds per permutation it has proven very convenient to run everything locally and to just re-create the permuted sets every time I change my mind about the permutation procedure or SNP inclusion criteria. At 2.5 minutes per permutation I’d have to figure out how to parallelize this while keeping the results distinct yet reproducible (setting different seeds for each core?), and the high start-up cost to change anything will make the project less flexible. So I’m still pondering whether there is a way to do this sort of individual matching more efficiently.
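For reference, the matching logic itself — stripped of the database round-trips — is tiny. Here is a Python paraphrase of the loops above (my sketch, with fake block sizes): shuffle the controls once per permutation, take the first one within ±2kb of each GWAS SNP, and remove it so it can’t be reused:

```python
import random

def greedy_match(gwas_sizes, control_snps, tol=2000):
    # control_snps: list of (snpid, blocksize) pairs; returns one matched
    # snpid per GWAS block size, or None if no control lies within +/- tol
    pool = control_snps[:]
    random.shuffle(pool)  # random order, like sample() in the R version
    matches = []
    for g in gwas_sizes:
        for j, (snpid, size) in enumerate(pool):
            if abs(size - g) < tol:  # first acceptable control wins
                matches.append(snpid)
                del pool[j]          # no reuse within this permutation
                break
        else:
            matches.append(None)     # ran out of matching controls
    return matches

random.seed(1)
gwas_sizes = [12000, 48000, 30500]                                 # made-up
controls = [(i, random.randint(5000, 60000)) for i in range(200)]  # made-up
matched = greedy_match(gwas_sizes, controls)
```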
But if anyone has a better idea, by all means, leave a comment to let me know.

update 2013-01-29: True to form, @a wrote me with a genius algorithm suggestion:

maybe you want to start by sorting csnps by blocksize. Then for each gsnp, do a binary search to find the lower and upper bounds in the csnps array (at blocksize +/-2k). Then choose randomly between those two bounds. Should be plenty fast after you finish the initial sort which you only do once.

Sorting the csnps table is trivially fast and can be done in the SQL query to grab the data in the first place. Once it’s sorted, the binary search that @a refers to can be achieved in ~1 second using R’s findInterval function and adding 0.5 to the ranges to make them include ties on both sides:

bottoms = findInterval(gsnps$blocksize-2000.5,csnps$blocksize)+1
tops = findInterval(gsnps$blocksize+2000.5,csnps$blocksize)
library(RPostgreSQL) # first load package RPostgreSQL
drv = dbDriver("PostgreSQL")
con = dbConnect(drv, dbname="enrep", user="postgres", password="postgres", port=5433)

# get list of GWAS SNPs
readsql = "select ld.snpid, ld.blocksize
from ldblocks ld
where exists (select null from gwas_match gm where gm.snpid = ld.snpid)
and ld.blocksize > 1
order by 1;"
rs = dbSendQuery(con,readsql)
gsnps = fetch(rs,n=-1)
ngsnps = dim(gsnps)[1] # number of GWAS SNPs to match

# get list of available control SNPs, sorted by blocksize
readsql = "select ld.snpid, ld.blocksize
from ldblocks ld
where ld.blocksize > 1
order by 2 asc; -- order by blocksize"
rs = dbSendQuery(con,readsql)
csnps = fetch(rs,n=-1)
ncsnps = dim(csnps)[1] # number of control SNPs available for matching

# for each gsnp, find lowest index in sorted csnps array that has blocksize within -2kb
# because all block sizes are integers, adding 0.5 to ranges makes ranges inclusive of all ties
bottoms = findInterval(gsnps$blocksize-2000.5,csnps$blocksize)+1
# for each gsnp, find highest index in sorted csnps array that has blocksize within +2kb
tops = findInterval(gsnps$blocksize+2000.5,csnps$blocksize)

set.seed(.1234) # set random seed so results are reproducible
nperm = 1001 # for testing - in practice this would be 1000
totalrows = ngsnps*nperm # total rows in output table

# set up columns for final match table matching gwas snps to their permutation matches
permno = vector(mode="integer",length=totalrows)
gwassnpid = vector(mode="integer",length=totalrows)
permsnpid = vector(mode="integer",length=totalrows)

# first 'permutation' is just the GWAS SNPs themselves - this is convenient for my analysis
for (i in 1:ngsnps) {
    permno[i] = 1
    gwassnpid[i] = gsnps$snpid[i]
    permsnpid[i] = gsnps$snpid[i]
}

# latter permutations are randomly drawn from the control SNPs
for (perm in 2:nperm) { # start from permutation #2 since GWAS SNPs are perm #1
    for (i in 1:ngsnps) { # go through every GWAS SNP index
        rowno = (perm-1)*ngsnps+i # calculate master row index for final table columns
        randomindex = round(runif(n=1,min=bottoms[i]-0.5,max=tops[i]+0.5)) # randomly choose an index within bounds
        permno[rowno] = perm # permutation number
        gwassnpid[rowno] = gsnps$snpid[i] # snpid of GWAS SNP being matched
        permsnpid[rowno] = csnps$snpid[randomindex] # snpid of permuted SNP
    }
}

# write results to disk
finaltable = data.frame(permno,gwassnpid,permsnpid)
# tab-delimited, no header, to match PostgreSQL copy defaults
write.table(finaltable,'c:/sci/019enrep/data/rperms-2013-01-29.txt',sep="\t",row.names=FALSE,col.names=FALSE)

# call PostgreSQL to read the results back from disk.
sql = "drop table if exists rpermutations;
create table rpermutations (
    perm integer,
    gwassnpid integer,
    permsnpid integer
);
copy rpermutations from 'c:/sci/019enrep/data/rperms-2013-01-29.txt';"
rs = dbSendQuery(con,sql)
{"url":"http://www.cureffi.org/2013/01/29/sampling-a-matching-distribution-for-bootstrapping/","timestamp":"2014-04-24T00:45:02Z","content_type":null,"content_length":"211665","record_id":"<urn:uuid:c491c2c0-78ed-455b-92ed-24005e76fd9e>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6711455 - Method for custom fitting of apparel

This invention relates to custom manufacturing of apparel and more particularly to a method of calculating garment dimensions and production specifications based on information captured from or about the individual for whom the garment is to be made. More specifically, this invention relates to the use of a publicly-available anthropometric database for the statistical derivation of the parameters of a mathematical model of the relationship between reported and unreported human body dimensions. This invention also relates to the calculation of the dimensions of a garment based, in part, upon the human body dimensions calculated by application of the aforementioned mathematical model.

Matching apparel consumers with garments that have all the desired properties, features, and fit is one of the biggest problems that apparel retailers face. The vast majority of apparel retailers struggle with managing the tradeoff between carrying a larger assortment of products and paying the high costs of carrying large amounts of inventory. A company choosing to offer a large assortment of products, product features or variations, and sizes quickly finds the costs of inventory, inventory handling costs, and infrastructure (e.g., distribution centers) become prohibitively large as the number of stock keeping units (SKUs) increases. On the other hand, a company with a more limited assortment will find that consumers either can't find the product or size they desire, or choose a product that often they are not satisfied with, and end up returning the garment. The combined cost associated with inventory and merchandise returns represents a significant portion of the overall costs for apparel retailers, especially those who sell through direct channels such as the Internet, TV, or mail.
The lost revenue opportunity for apparel retailers of all types, including store based retailers, associated with not having the correct size or product in stock can easily make the difference between a struggling and successful company. Those consumers who find an apparel product in their size are often times settling for the best available option, rather than getting a garment that fits them properly. A survey cited in U.S. Pat. No. 5,548,519, issued to Sung K. Park on Aug. 20, 1996, for an apparatus and method for custom apparel manufacturing, found that the percentage of the population that is correctly fitted by an available standard-sized article of clothing without any alteration is only two percent. There are two fundamentally different approaches to helping apparel consumers find garments that best meet their needs. The first involves gathering or capturing information about a consumer and using that information to recommend particular brands, products, and sizes that are likely to fit or match a consumer's tastes. The benefit of this approach is that it theoretically increases the probability that a consumer will find the best available standard product. The drawback is that this approach doesn't solve the assortment-inventory tradeoff described above, nor does it resolve the issue of failure to achieve proper fit without further garment alteration. The second approach involves custom making of apparel garments for consumers after preference and sizing information has been captured. The apparatus and method disclosed in U.S. Pat. No. 5,548,519 is an example of this approach. This approach involves having consumers try on several products of predetermined dimensions until the consumer approves the fit and purchases the garment. At that point, the information captured during the try-on session is reported to a manufacturing system that begins the process of making the garment. Another approach, described in U.S. Pat. No. 
5,956,525, issued to Jacob Minsky on Sep. 21, 1999, for a method of measuring body measurements for custom apparel manufacturing, involves the use of multiple cameras in a specially designed room, capturing height and width data about a consumer. These data are then used to manufacture the clothing. These approaches do provide the manufacturing system with information that is useful in producing a custom garment, and will likely result in a better fitting garment than the standard sizes. Since the garments are made after the consumer order has been completed, there is less of a need for retailers to carry large amounts of finished-goods inventory. The downside of these approaches is that they require substantial involvement and time from the consumer. The majority of consumers find that shopping for apparel is not a particularly desirable activity, but rather a necessary evil. Any product that requires more involvement and more time from consumers will find limited potential in today's environment where an increasingly large number of household or personal needs can be met from a computer, a laptop, a PDA, or even a cell phone. It is an object of the present invention to provide a system and method for capturing information about a person and using that information to determine exact specifications for an apparel product and instructions for the production of a custom apparel product. The information can be communicated remotely over the phone, using the Internet, interactive television, via mail, or through any other communication device that is used for electronic commerce such as web-enabled phones or personal digital assistants (PDAs). This information can also be communicated directly to a retailer's agent, a kiosk, or any other information capture tool in a store environment. 
A consumer is asked a series of questions about themselves (or the person for whom they are purchasing the item), their preferences, desired features, and other product choices regarding the item that is being considered. It is an object of the invention to select such questions in such a way that consumers neither have to be measured by a tailor or other person, nor measure themselves, in order to complete the ordering process. It is an object of the invention to make use of the information that is captured from or on behalf of the person for whom the item is intended to serve as inputs to a set of model formulas that calculates other pieces of information needed for developing product specifications and production instructions for the manufacturing of a custom apparel product, but not provided directly by the consumer. It is an object of the present invention to apply methods of statistical analysis to a publicly available database of human anthropometric measurements as a means of determining the numerical coefficients of the model formulas used to calculate unprovided anthropometric measurements from provided anthropometric measurements. It is also within the scope of the present invention to supplement the anthropometric measurements in the publicly available database with measurements of additional individuals. It is an object of the present invention to provide a method of shopping for products that can be customized based on an individual person's body shape, lifestyle attributes, and product preferences which allows customers to quickly, easily and conveniently order custom apparel. 
Another object of the present invention is to provide a system and method of determining necessary product specifications such as garment dimensions based upon both consumer-provided and model-derived human body measurements that provides retailers and manufacturers of these products with all the necessary dimensions and other specifications required to produce a custom apparel product. Yet another object of the present invention is to provide a method for adjusting calculated garment dimensions on the basis of consumer-selected garment fit preferences. A further object of the present invention is to provide a method of shopping for products that can be customized based on an individual person's body shape and product preferences as a marketing and sales tool for retailers and manufacturers to provide custom apparel for consumers. These and other features of the present invention are described in more detail in the following detailed description. The scope of the invention, however, is limited only by the claims appended hereto.

The present invention is a method for custom fitting an article to a human being having the steps of defining a first set of human body dimensions to be reported by the human being, defining a second set of human body dimensions to be inferred from said first set of human body dimensions, providing a first mathematical model relating said second set of human body dimensions to said first set of human body dimensions, wherein said mathematical model has been generated by statistical analysis of a human anthropometric database, obtaining a first set of values of said first set of body dimensions by report of the human being, computing a second set of values of said second set of human body dimensions from said first set of values of said first set of human body dimensions by using said first mathematical model, defining a set of article dimensions, providing a second mathematical model relating said article dimensions to said first set of human body
dimensions and said second set of human body dimensions, computing a third set of values of said set of article dimensions from said first set of values of said first set of human body dimensions and said second set of values of said second set of human body dimensions by using said second mathematical model. There are numerous ways an apparel retailer can capture necessary information from a consumer interested in purchasing apparel, both remotely and in-store. Remotely, the interested consumer can access a retailer's web site through a computer, a PDA, a web enabled phone, interactive television, or any other electronic medium used to access the Internet. Also remotely, the interested consumer can call a retailer's customer service or ordering center, or they could send a fax or use any form of mail. In a store environment, the interested consumer could either provide the information directly to an employee of the retailer, or use any self-service device in the store such as a kiosk, Internet terminal or customer service telephone. In a preferred embodiment, the potential consumer would log on to the retailer's web site. This web site may have a combination of standard and custom products, or may offer exclusively custom made products. The potential consumer would choose the portion of the virtual store that offers custom made products, and then select the product category in which they are interested (a pair of pants, a pair of jeans, a sweater, a skirt, a dress, a shirt, a blouse, a vest, a jacket, a coat, a pair of knickers, a pair of leggings, a jersey, a pair of shorts, a leotard, a pair of underwear, a hat, a cap, and a swimming or bathing suit). Once they have selected the product category, then they begin to make choices about the product they desire. In the case of pants, they would choose the fabric, the color, the style, whether they want cuffs, pleats, and the type of fly (zipper or button). 
These are some of the feature and style choices that could be available. Once the potential consumer has made all of the feature and style choices for the product, they would provide the information needed for sizing. The information that is collected for sizing will be information that most apparel consumers know about themselves, and that can be used to either (1) directly determine desired measurements for the design of the garment pattern, or (2) estimate, either alone or in conjunction with other pieces of information, other necessary measurements for the design of the garment pattern. Consumers may also be asked to make assessments of themselves and their body shape, as well as to take simple measurements of certain of their body dimensions. Once the information is collected from the potential consumer, a series of formulas (also referred to as a “fitting model”) are used to determine the exact garment dimensions for that consumer. These formulas are developed through a detailed understanding of the human body, how the dimensions of the body relate to one another, and how those body dimensions interact to establish the required garment dimensions used as inputs for the pattern-making and garment manufacturing processes. In the preferred embodiment, the fitting model can be subdivided into two conceptually distinct parts. The first part of the model contains formulas that relate various dimensions of the human body to one another, and are used to infer body dimensions that are not reported by the consumer from those that are reported by the consumer. In the most preferred embodiment, this first part of the model is derived by statistical analysis of the publicly available U.S. Army 1988 anthropometric survey, although in other embodiments the data in the U.S. Army database may be supplemented by body measurements of other individuals. 
The second part of the model calculates from the reported and inferred body dimensions the necessary input values to the garment manufacturing process—i.e., the dimensions of the garment used to determine exactly how to cut and sew the fabric to make the garment. In the most preferred embodiment, this second part of the model is derived in part from the experience of a skilled clothing designer and/or tailor. Although not an essential part of the present invention, we note that the output of the second part of the fitting model—the calculated garment dimensions—would be used as inputs to a pattern maker (either human or automated), which would then use techniques well known to those of ordinary skill in the pattern-making arts to generate exact fabric cutting templates and sewing instructions on the basis of the calculated garment dimensions and intended style of the garment. In order to develop the relationships between dimensions of the body which are needed for correctly sizing a garment, data may be used from both publicly available anthropometrical studies and/or private sources of data, including measuring numerous individuals and recording the information. Once the initial relationships have been defined, these can be refined and improved over time as more data become available and as feedback from consumers and test subjects is collected. In the most preferred embodiment, the U.S. Army 1988 anthropometric survey is used to derive the coefficients of a linear model that relates the values of certain body measurements that are not reported by the consumer to those that are reported by the consumer. In this most preferred embodiment, the body measurements reported by the consumer are “reported waist”, “reported inseam”, “weight”, “height”, and “shoe size”. These are referred to as the independent variables of the model. In this most preferred embodiment, the body measurements to be inferred from these reported measurements are “seat” and “outseam”. 
These are referred to as the dependent variables of the model. It is to be understood that other sets of body measurements than those of the most preferred embodiment can be used as the independent and dependent variables.

In the most preferred embodiment, the following steps are used to derive the linear equations of the model that relate the dependent variables to the independent variables. Principal components multiple linear regression analysis is the well-known statistical method used to derive the parameters of any given linear model that relates a dependent variable to a particular subset of the independent variables. As part of the process of identifying a suitable model, it is determined which of the independent variables have predictive value in inferring the dependent variable, and the coefficients of those predictive variables are also determined.

First, a relatively large number of potential models, using a variety of subsets of independent variables, and that have been derived by multiple linear regression, are tested for their predictive value using the well-known statistical technique of prediction squared error, which allows the winnowing out of the least predictive models. Second, the more accurate, but more time-consuming and laborious, method of cross-validation is applied to the remaining models to identify the single model that has the greatest predictive power.

Cross-validation takes advantage of the large number of individuals in the U.S. Army database by using only half of the individuals in the database (the "regression" half, which can be randomly chosen) as the input to the multiple linear regression for computing the model coefficients. Then, the values of the independent variables of each individual in the other ("test") half of the database are used as inputs to each potential model to calculate a predicted value for each dependent variable.
The difference for each individual in the “test” half between the predicted value and the actual value of each dependent variable is then squared and summed across all of the individuals in the test half. This sum of squared errors generated through cross-validation provides an accurate measure of the relative predictive power of each of the potential models, and avoids the inaccuracies introduced when one validates a model on the same set of individuals used as inputs to the regression that was used to generate the model. The potential model that exhibits the lowest sum of squared errors in the cross-validation is thus chosen for ultimate use in predicting the unreported body measurements of consumers from their reported body measurements. Once the unreported body measurements have been inferred from the reported body measurements on the basis of the anthropometric model, the actual garment dimensions are calculated. This is the second part of the overall fitting model. This second part of the model may be generated on the basis of the experiences of the garment designer with garment design and patternmaking, and takes into account a number of factors. These factors include adjustments to the body measurements to allow for “ease” in the garment. Ease refers to the fact that if a garment were constructed that had the exact same dimensions—waist, seat, inseam, etc.—as the body dimensions of the wearer, the garment would be “skin tight”, uncomfortable, and correctly perceived as ill-fitting. In order to compensate for this, it is well known in the art to add an amount of ease to the body dimensions when calculating the garment dimensions. In addition, the second part of the model takes into account the stated preferences of the consumer with regard to the shape and/or fit of the garment. 
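The split-half cross-validation procedure just described can be sketched in code. The following is an illustrative Python stand-in only: it fits a single-predictor ordinary-least-squares model rather than the principal-components multiple regression the patent describes, and all names are assumptions:

```python
import random

def fit_ols(xs, ys):
    # least-squares fit of y = a + b*x; returns (intercept a, slope b)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def cross_validated_sse(individuals, seed=0):
    """Randomly split individuals in half, fit the model on the 'regression'
    half, and return the sum of squared prediction errors on the 'test' half."""
    rng = random.Random(seed)
    shuffled = individuals[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    train, test = shuffled[:half], shuffled[half:]
    a, b = fit_ols([x for x, _ in train], [y for _, y in train])
    return sum((y - (a + b * x)) ** 2 for x, y in test)

# A model whose form matches the data shows near-zero cross-validated error;
# candidate models are compared by this error and the smallest wins.
data = [(x, 2 * x + 1) for x in range(20)]
print(cross_validated_sse(data))  # close to 0.0
```

The key point the patent makes survives the simplification: the error is always measured on individuals that were not used to compute the coefficients.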
Thus, the customer may report whether he or she desires a "close fit" or a "loose fit", and might also report whether the desired shape of the garment is to be "tapered" or "straight". These preferences are used to further adjust the garment dimensions in the appropriate way. Also, the second part of the model may be used to compensate for systematic errors in the body dimensions that consumers report. Not surprisingly, most consumers will under-report their weight and waist size, while over-reporting their height. In part, the under-reporting of waist size results from the fact that many manufacturers of off-the-shelf pants use what is known as "vanity sizing". Off the shelf pants that are labeled as having, e.g., a 34 inch waist, may have an actual waist size of 35 to 36 inches. The under-reporting of weight and over-reporting of height stem from the well-known societal standards of physical attractiveness wherein "tall and slim" is most desirable. Regardless of the origins of any of these reporting errors, adjustments may be made to the calculated garment dimensions to help compensate.

The second part of the model may also be used to take account of the interrelationships between various of the garment dimensions. In other words, depending on the particular value of one garment dimension, another garment dimension may need to be adjusted to keep the overall fit of the garment as required for the body dimensions of the wearer. One example of this is the relationship between "rise" (the vertical distance between the crotch and waist of a pair of pants) and inseam. As the rise increases, the inseam must correspondingly decrease, or else the distance of the cuff of the pant leg from the floor will become too short, i.e. the pants will fit "too long".

EXAMPLE 1

An example of the formulas that can be used to determine garment specifications for men's pants is described in detail below. Where indicated, these formulas were derived from the U.S. Army anthropometric database using a method as outlined above. This example is not meant to be limiting to the full scope of the invention, as many other formulas are consistent with the invention.

Algorithm #1: Inferring Male "Seat" and "Outseam" From "Reported Waist", "Reported Inseam", "Weight", "Height", and "Shoe Size"

[Unless otherwise specified, all measurements are stated in units of inches and pounds.]

(1) Body Mass Index (BMI) is calculated from Height and Weight as a matter of definition that is well known in the anthropometric arts:

BMI = (Weight / Height^2) * 100

(2) Conicity is calculated from Height, Weight, and Reported Waist as a matter of definition that is well known in the anthropometric arts:

Conicity = (Reported Waist * 0.0254) / (0.109 * sqrt((Weight / 2.2) / (Height * 0.0254)))

(3) Chest is calculated from Weight and Height using a standard formula well-known in the garment tailoring arts that embodies a numerical relationship between chest, weight, and height.

(4) Foot Length is calculated from Shoe Size (American male sizing system) using a standard formula well-known in the shoe industry:

Foot Length = 7.29 + (Shoe Size * 0.338)

(5) Seat is calculated from Height, Weight, Chest, BMI, Conicity, and Foot Length, using a linear model derived from the U.S. Army anthropometric database of male body measurements using the method described above:

Seat = −2.85 + 0.36*Height + 0.015*Weight − 0.19*Chest + 5.01*BMI + 3.58*Conicity − 0.055*Foot Length

The coefficients of this linear model can also be expressed in terms of the various confidence intervals within which the coefficients lie, as enumerated in the table below.
Term       Estimate   Std Error  Lower 99%  Upper 99%  Lower 95%  Upper 95%
Intercept  −2.8526    3.6686     −12.3125   6.6072     −10.0478   4.3426
Ht″        0.3563     0.0534     0.2186     0.4940     0.2516     0.4611
Wt lbs     0.0155     0.0106     −0.0117    0.0427     −0.0052    0.0362
Chest″     −0.1923    0.0150     −0.2309    −0.1536    −0.2217    −0.1629
BMI        5.0103     0.5116     3.6912     6.3294     4.0070     6.0136
Conicity   3.5781     0.3349     2.7145     4.4417     2.9213     4.2350
Foot       −0.0550    0.0174     −0.0998    −0.0101    −0.0891    −0.0208

Term       Lower 90%  Upper 90%  Lower 80%  Upper 80%  Lower 50%  Upper 50%
Intercept  −8.8901    3.1848     −6.6560    0.9507     −3.3137    −2.3916
Ht″        0.2684     0.4442     0.3009     0.4117     0.3496     0.3630
Wt lbs     −0.0019    0.0329     0.0046     0.0264     0.0142     0.0168
Chest″     −0.2170    −0.1676    −0.2078    −0.1767    −0.1942    −0.1904
BMI        4.1684     5.8522     4.4799     5.5407     4.9460     5.0746
Conicity   3.0269     4.1293     3.2309     3.9253     3.5360     3.6202
Foot       −0.0836    −0.0263    −0.0730    −0.0369    −0.0572    −0.0528

For example, the likelihood is 99% that the truly most predictive coefficient of the Weight term lies between −0.012 and 0.043, while the likelihood is 80% that the truly most predictive coefficient of the Weight term lies between 0.005 and 0.026. Seat models whose coefficients lie within any of the enumerated confidence intervals are consistent with the present invention.

(6) Outseam is calculated from Height, Chest, BMI, Conicity, and Foot Length, using a linear model derived from the U.S. Army anthropometric database of male body measurements using the method described above:

Outseam = −0.63 + 0.64*Height + 0.048*Chest − 0.45*BMI − 3.64*Conicity + 0.14*Foot Length

The coefficients of this linear model can also be expressed in terms of the various confidence intervals within which the coefficients lie, as enumerated in the table below.
Term         Estimate   Std Error  Lower 99%  Upper 99%  Lower 95%  Upper 95%
Intercept    −0.6284    0.6094     −2.1998    0.9430     −1.8236    0.5668
Ht″          0.6395     0.0113     0.6105     0.6686     0.6174     0.6616
Chest″       0.0480     0.0152     0.0089     0.0871     0.0183     0.0778
BMI          −0.4465    0.0924     −0.6848    −0.2083    −0.6277    −0.2654
Conicity     −3.6434    0.3389     −4.5172    −2.7696    −4.3080    −2.9788
Foot Length  0.1428     0.0176     0.0974     0.1882     0.1083     0.1773

Term         Lower 90%  Upper 90%  Lower 80%  Upper 80%  Lower 50%  Upper 50%
Intercept    −1.6313    0.3745     −1.2602    0.0034     −0.7050    −0.5518
Ht″          0.6210     0.6581     0.6278     0.6512     0.6381     0.6409
Chest″       0.0231     0.0730     0.0323     0.0638     0.0461     0.0499
BMI          −0.5986    −0.2945    −0.5423    −0.3508    −0.4582    −0.4349
Conicity     −4.2011    −3.0857    −3.9947    −3.2921    −3.6860    −3.6008
Foot Length  0.1138     0.1717     0.1245     0.1610     0.1406     0.1450

For example, the likelihood is 99% that the truly most predictive coefficient of the Conicity term lies between −4.5172 and −2.7696, while the likelihood is 80% that the truly most predictive coefficient of the Conicity term lies between −3.9947 and −3.2921. Outseam models whose coefficients lie within any of the enumerated confidence intervals are consistent with the present invention.

Algorithm #2: Calculating Garment Dimensions From Body Dimensions Calculated By Algorithm #1 and Stated Consumer Preference

The consumer reports whether he would prefer the fit of the pants to provide a "Little Room", a "Close Fit", or be "Loose Fitting".

[The "ROUND" operator applies ordinary nearest integer rounding. The "ROUNDUPEIGHTH" operator rounds up to the nearest eighth of an inch.]
(1) Garment Waist is calculated from Reported Waist:
When Reported Waist < 36, Garment Waist = Reported Waist + 1;
Otherwise, Garment Waist = Reported Waist + 1.5

(2) Seat-Waist Differential is calculated from Seat and Garment Waist:
Differential = ROUND(Seat + 4.5) − Garment Waist

(3) Garment Seat is calculated from Differential, Garment Waist, Seat, and consumer fit preference:
(a) If Fit Preference is "Little Room":
When Differential < 5, Garment Seat = ROUND(Garment Waist + 5);
When Differential > 11, Garment Seat = ROUND(Garment Waist + 11);
Otherwise, Garment Seat = ROUND(Seat + 4.5)
[The Garment Seat computed for a Fit Preference of "Little Room" is defined as "Little Room Garment Seat".]
(b) If Fit Preference is "Close Fit":
When Differential < 6, Garment Seat = ROUND(Garment Waist + 5);
Otherwise, Garment Seat = Little Room Garment Seat − 1
(c) If Fit Preference is "Loose Fitting":
When Differential > 9, Garment Seat = ROUND(Garment Waist + 11);
Otherwise, Garment Seat = Little Room Garment Seat + 2

(4) Seat Shape is defined by the input of the consumer, who chooses either FLAT, PROMINENT, or AVERAGE.

(5) Rise is defined by the input of the consumer, who chooses either SHORT, LONG, or AVERAGE.

(6) Garment Inseam is calculated from Reported Inseam and Rise:
When Rise = SHORT, Garment Inseam = Reported Inseam;
When Rise = LONG, Garment Inseam = Reported Inseam − 1;
Otherwise, Garment Inseam = Reported Inseam − 0.5

(7) Leg Bottom Opening Circumference (Bottom Opening) is calculated from consumer fit preference and Foot Length:
When Fit Preference = Close Fit, Bottom Opening = 3.14 * Foot Length * 0.50;
When Fit Preference = Little Room, Bottom Opening = 3.14 * Foot Length * 0.54;
When Fit Preference = Loose Fitting, Bottom Opening = 3.14 * Foot Length * 0.57

The garment dimensions calculated and derived using algorithm #2, as just described, may be used as inputs to either a human or automated pattern maker, thus enabling the ultimate cutting and sewing necessary to produce the desired custom fitted garment.
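Read as pseudocode, the two algorithms of Example 1 can be transcribed into a short sketch. This is illustrative only: the patent's Chest formula is not reproduced in the text above, so chest is taken as an input here, and all function and variable names are my own:

```python
import math

def nearest_int(x):
    # the patent's ROUND operator: ordinary nearest-integer rounding (not banker's rounding)
    return math.floor(x + 0.5)

def infer_body_dims(height, weight, reported_waist, shoe_size, chest):
    """Algorithm #1: infer Seat and Outseam (inches) from reported values.
    chest is an input because its formula is omitted from the text above."""
    bmi = weight / height ** 2 * 100
    conicity = (reported_waist * 0.0254) / (
        0.109 * math.sqrt((weight / 2.2) / (height * 0.0254)))
    foot_length = 7.29 + shoe_size * 0.338
    seat = (-2.85 + 0.36 * height + 0.015 * weight - 0.19 * chest
            + 5.01 * bmi + 3.58 * conicity - 0.055 * foot_length)
    outseam = (-0.63 + 0.64 * height + 0.048 * chest - 0.45 * bmi
               - 3.64 * conicity + 0.14 * foot_length)
    return seat, outseam, foot_length

def garment_waist(reported_waist):
    # Algorithm #2, step (1)
    return reported_waist + (1 if reported_waist < 36 else 1.5)

def garment_seat(seat, gw, fit):
    # Algorithm #2, steps (2)-(3)
    diff = nearest_int(seat + 4.5) - gw
    if diff < 5:
        little_room = nearest_int(gw + 5)
    elif diff > 11:
        little_room = nearest_int(gw + 11)
    else:
        little_room = nearest_int(seat + 4.5)
    if fit == "Little Room":
        return little_room
    if fit == "Close Fit":
        return nearest_int(gw + 5) if diff < 6 else little_room - 1
    return nearest_int(gw + 11) if diff > 9 else little_room + 2  # "Loose Fitting"

def bottom_opening(fit, foot_length):
    # Algorithm #2, step (7)
    factor = {"Close Fit": 0.50, "Little Room": 0.54, "Loose Fitting": 0.57}[fit]
    return 3.14 * foot_length * factor
```

For a hypothetical wearer (70 in tall, 180 lb, reported waist 34 in, shoe size 10, chest taken as 40 in), this yields a seat of roughly 39.4 in and, for a "Little Room" fit, a garment seat of 44 in.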
The formulas described in algorithms #1 and #2 in Example 1 for custom men's pants do not limit the broadest scope of the present invention, and are meant to provide an exemplary embodiment of the invention. The present invention may be used to provide custom fitted garments either for men or for women, and may be used to provide not only pants but shirts, jackets, skirts, vests, and any other article of apparel. Indeed, the present invention in its broadest scope should be considered applicable to the custom design of any manufactured article that is most desirable when “fit” to the body dimensions of the human being for whom the article is intended. This would include, but is not limited to, chairs, automobile seats, airplane pilot seats, sporting goods of various types, and other articles. It is also to be understood that it is within the scope of the present invention to make use of feedback from the consumer concerning the results of the custom fitting method to modify either or both of the fitting algorithms to result in improved fit of future garments. This may occur in two ways. First, reports from numerous customers about the fit of the custom garments designed using the present invention may be aggregated and subject to statistical analysis in order to generate corrections to the values of the coefficients of the general mathematical model used to relate unreported to reported body dimensions, or to generate corrections to the algorithm used to calculate garment dimensions from body dimensions. Second, reports from a particular customer concerning the fit of his or her custom garment may be used to generate a set of corrections to the body and/or garment dimensions for that customer so as to improve the fit of the next garment ordered. Of course, this procedure can be performed iteratively, each time the customer reports on the fit of the last garment ordered and orders a new garment.
Algebra review help

January 14th 2009, 06:15 PM #1
My teacher assigned review questions for our algebra semester final. I tried really hard to get the answers correct. She said that I missed seven problems. I asked her for the numbers and she gave me the numbers. I still had the work sheet in class so I wrote down all the problems on a piece of paper. Can you guys help me with these questions? I don't know what I did wrong on them and my teacher still has my work.

January 14th 2009, 06:32 PM #2 MHF Contributor
I'm having trouble following your post. Is that 5 or 6 problems? Pick one, show me how you think it's done and where you get stuck and I'll help.

January 14th 2009, 06:40 PM #3
I'm not sure what to do on this problem and I'm pretty sure this is where I got stuck on it. Each line separates a question. Thank you if you can help me! :]

January 14th 2009, 06:47 PM #4 MHF Contributor
Ok, you are doing everything right so far. Now what you need to do is something called completing the square. You need to find the constant that will allow you to factor both the x expression and the y expression into the form you need. Look at $x^2-6x$. The way you find the constant is you make sure that the only coefficient for the x^2 term is 1, which it is. If not, you need to divide the coefficient out. Now take the number in front of your x term, divide by two, and square it. For this problem it is 9. So the trick is now to add 9 and subtract 9 at the same time so we change nothing about the equation. So $x^2-6x+9-9 = (x-3)^2-9$. Do you see how that works? Now move the -9 to the right hand side and do the same thing for your y terms.
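The completing-the-square step can be sanity-checked numerically. This little script is an addition of mine, not part of the original thread:

```python
# Check that x^2 - 6x == (x - 3)^2 - 9 for several sample values,
# i.e. that adding and subtracting (6/2)^2 = 9 changes nothing.
def lhs(x):
    return x * x - 6 * x

def rhs(x):
    return (x - 3) ** 2 - 9

for x in [-10, -1, 0, 0.5, 3, 7, 100]:
    assert lhs(x) == rhs(x)
```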
Small letters after tags

Re: Small letters after tags
Okay, I did not know I was supposed to read it.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Small letters after tags
The text in that picture I posted. The one with the ugly people.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment

Re: Small letters after tags
Yes, I see that. But it is not clear whether you should read the text or look at the pictures or possibly both.

Re: Small letters after tags
I think you can just read the text here.

Re: Small letters after tags
You want me to read all the text in post #29?

Re: Small letters after tags
No. I want you to read all the text in the picture in the post #23.
Re: Small letters after tags
Derpina? Asymptotes? Most puzzling.

Re: Small letters after tags
Derpina is a name used in comics like that one.

Re: Small letters after tags
Hmmmm. Why is she being difficult? She is really funny looking.

Re: Small letters after tags
Why do you think she is difficult? The faces used there are called memes, and they should be funny.

Re: Small letters after tags
Mr potato head is obviously interested in a gruesome girl. She on the other hand begins to blather about infinity. Mr potato head is now sad.
Re: Small letters after tags
The point is that she told him he is just a friend to her and that is all he will ever be. She just said it using math.

Re: Small letters after tags
Hmmmm. Such stereotypes are in direct conflict with modern concepts. To break away from them the author of that meme as you call it must free his mind from them. Mr potato head should be telling gargoyle girl that he just wants to be friends.

Re: Small letters after tags
The guy wants to be more than just friends. He wants her as his girlfriend.

Re: Small letters after tags
Hmmmm. It is culturally obsolete to always assume that the guy wants more but the girl wants less. Mr Meme is caught up in this paradigm. To expand his consciousness he must rise above it. He must get in touch with his feminine side, the child within. The guy should be saying he just wants to be friends, then it would be funnier.

Re: Small letters after tags
You think so?
Re: Small letters after tags
You think so? Hmmmm! You do make a convincing argument. Okay, it is funny.

Re: Small letters after tags
Thank you. Oh, I need to tell gAr that you think that he is not capable of seeing a closed form of a sum in terms of the Hurwitz Zeta function.

Re: Small letters after tags
Hmmm. gAr is from India. I had a friend from India whom I used to play chess with. Eventually we were on the same team representing our company. He was a very serious guy so I respected that and did not joke with him much. Since he was the only Indian guy I knew well when I met ganesh and gAr I used him as my template for them as well. So I rarely joke with them but here I was. I do not believe there is much of anything that gAr can not do. He is in my opinion along with Jane the best mathematician on this forum. You may tell him what I said as long as you include all the rest.

Re: Small letters after tags
Well didn't someone take it too seriously.
I know that at the time you probably thought that the problem was found on a site of a mathematician wannabe.

Re: Small letters after tags
Not exactly. I am familiar with what you work on. It was not hard to figure the author of the sum. But gAr does not mind using a package. Sage could have spit that out. Mathematician wannabe? The two people I mentioned are not mathematicians. The best numerical analyst was a chemist and not a mathematician.

Re: Small letters after tags
It is funny. But back to Chrome. The browser will have a default font size which you can change. Methinks your tags are resetting it, so it shows later text in a new font size. Don't know how you control it though.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Small letters after tags
Thanks for the response. And, yes, it is funny!

Re: Small letters after tags
I think that kind of comics is called Rage Comics and the people involved are called Troll Faces. Source: http://knowyourmeme.com/memes/rage-comics It is very nice to do research on them. What are Asymptotes?
'And fun? If maths is fun, then getting a tooth extraction is fun.
A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda

Re: Small letters after tags
hi Agnishom
When trying to sketch a graph, sometimes it will have asymptotes. It means a line that the graph tends towards without reaching or crossing.
e.g. y = tan x: the vertical lines at 90 degrees and 270 degrees are asymptotes.
y = x + 1/x: when x is small the graph tends towards the y axis. When x is large it tends towards the line y = x.
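The second example above can also be checked numerically; the snippet below (mine, not from the thread) confirms that the gap between y = x + 1/x and the line y = x shrinks like 1/x as x grows:

```python
# For y = x + 1/x, the vertical distance to the line y = x is 1/x,
# which shrinks toward 0 as x grows: y = x is an oblique asymptote.
def f(x):
    return x + 1.0 / x

for x in [10.0, 100.0, 1000.0]:
    gap = f(x) - x
    assert 0 < gap < 2.0 / x  # gap is roughly 1/x, shrinking toward 0
```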
Ok, a simple question... Are we to assume that the acceleration from 0 to 60 mph is constant, and that the acceleration from 60 mph to 100 mph is constant (but different)? Anyway, you can solve this problem with one formula: [tex]V_f^2 = V_0^2 + 2ax[/tex] [tex]a = \frac{V_f - V_0}{t}[/tex] [tex]x = \frac{V_f^2 - V_0^2}{2\frac{V_f - V_0}{t}} = \frac{t}{2}(V_f + V_0)[/tex] Part 1: When the cars hit 60 mph, car A will be at 57.6 meters and car B will be at 64.4 meters. Part 2: Car A's acceleration from 60 to 100 takes 7.3 seconds, whereas car B's takes only 6.8 seconds. The distance car A will pass between 60 mph and 100 mph is 261 meters, whereas car B will only pass 243.2 meters. The total distance of car A is 318.6 meters, and the total distance of car B is 307.6. So car A will be in front of car B by 11 meters when they both hit 100 mph. Of course, I'm not sure I'm right and I'm probably not. :)
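For what it's worth, the derived formula x = (t/2)(V_0 + V_f) is easy to check in code. The helper below is mine, and the sample numbers are made up, since the thread's actual 0-60 times are not quoted here:

```python
# Distance covered under constant acceleration from v0 to vf in time t:
# a = (vf - v0) / t  and  vf^2 = v0^2 + 2 a x  give  x = (t / 2) (v0 + vf).
def distance(v0, vf, t):
    return 0.5 * t * (v0 + vf)

# Sanity check against the two-formula derivation (illustrative numbers:
# 0 to roughly 60 mph, i.e. about 26.8 m/s, in 4.3 s):
v0, vf, t = 0.0, 26.8, 4.3
a = (vf - v0) / t
x = distance(v0, vf, t)
assert abs(vf**2 - (v0**2 + 2 * a * x)) < 1e-9
```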
Patent US20030215088 - Key agreement protocol based on network dynamics [0001] This Application is a continuation in part of application Ser. No. 10/245,502, filed on Sep. 18, 2002, the entire contents of which are hereby incorporated by reference. [0002] 1. Field of the Invention [0003] The present invention relates to cryptographic systems. More particularly, the invention generates, by public discussion, a cryptographic key that is unconditionally secure. Prior to this invention, cryptographic keys generated by public discussion, such as Diffie-Hellman, satisfied the weak condition of computational security but were not unconditionally secure. [0004] 2. Discussion of the Related Art [0005] An Achilles heel of classical cryptographic systems is that secret communication can only take place after a key is communicated in secret over a totally secure communication channel. Lomonaco [5,6] describes the matter as the “Catch 22” of cryptography, as follows: [0006] “Catch 22. Before Alice and Bob can communicate in secret, they must first communicate in secret.” [0007] Lomonaco goes on to describe further difficulties involving the public key cryptographic systems that are currently in use. For a discussion on several other disadvantages of the Public Key Infrastructure (PKI) see U.S. General Accounting Office Report [8] and Schneier [13]. [0008] Let x be a common key that has been created for Alice and Bob. That is, x is a binary vector of length n. Then x can be used as a one-time pad as follows. Let m be a message that Alice wishes to transmit to Bob: m is some binary vector also of length n. Alice encodes m as m⊕x where ⊕ denotes bitwise addition, i.e., exclusive OR. Thus m⊕x, not m, is broadcast over the public channel. Bob then decodes in exactly the same way. Thus Bob decodes the message (m⊕x)⊕x, which is m, because of the properties of bitwise addition. 
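The one-time-pad arithmetic in [0008] can be demonstrated in a few lines. The sketch below is illustrative and not part of the patent specification:

```python
import os

# One-time pad: encode m as m XOR x, decode by XORing with x again,
# since (m XOR x) XOR x == m by the properties of bitwise addition.
def xor_bytes(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

m = b"attack at dawn"
x = os.urandom(len(m))        # the shared secret key (pad)
ciphertext = xor_bytes(m, x)  # what is broadcast over the public channel
assert xor_bytes(ciphertext, x) == m  # the receiver recovers m
```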
[0009] Alternatively, the key x can be used in a standard symmetric key cryptosystem such as that of Rijndael [12] or Data Encryption Standard (DES) [13]. The idea now is to encode m as f[x](m) where f[x] denotes the Rijndael permutation with the parameter x. Then, to get the message, Bob decodes by g[x][f[x](m)]=m where g[x] is the inverse of f[x]. [0010] To date, practical protocols for constructing such a common key x use for their security unproven mathematical assumptions concerning the complexity of various mathematical problems such as the factoring problem, the discrete log problem, and the Diffie-Hellman problem. Another serious difficulty concerning present systems involves the very long keys that are needed for even minimal security. In his monograph R. A. Mollin [17] points out that for elliptic curve cryptography an absolute minimum of 300 bits should be used for even the most modest security requirements and 500 bits for more sensitive communication. Further, key lengths of 2048 bits are recommended for RSA in the same reference. [0011] In [19] chapter 5, Julian Brown gives an example of a financial encryption system depending on 512-bit RSA keys, namely the CREST system introduced in 1997 by the Bank of England. He quotes the noted cryptographer A. Lenstra concerning such codes as follows: “Keys of 512 bits might even be within the reach of cypherpunks. In principle they could crack such numbers overnight”. [0012] Randomness in Arrival Times of Network Communications [0013] Computer networks are very complex systems formed by the superposition of several protocol layers [14]. FIG. 1 shows the layers in a typical network. The following analysis of how the layers work together serves to explain the randomness in networks. [0014] The lowest layer connects two computers, i.e., creates a channel between them, by some physical means and is called the Physical Layer.
[0015] The second layer removes random physical errors (called “noise”) from the channel to create an error-free communications path from one point to another. This layer, i.e., the Data Link Layer, is primarily responsible for dealing with transmission errors generated as electrical impulses (representing bits) are sent over a physical connection. Error detection techniques [15] are used to identify the transmission errors in many protocols. Once an error is detected the protocol requests a resend. Random errors in the Data Link Layer can be observed by noting timing delays. [0016] The Medium Access Layer deals with allocating and scheduling all communications over a single channel. In a networked environment, including the Internet, many computers communicate over a single channel. Bursty packet traffic is a well-known characteristic and is due to the uncontrollable behavior of many individual computers communicating over a single channel [16], leading to random fluctuations in transmission times. [0017] The Network Layer deals with routing information to create a true or virtual connection between two computers. The routing is dependent on the variety of routing algorithms and the load placed on each router. These two factors make the transmission times fluctuate randomly. [0018] The Transport Layer interfaces with the final Application Layer to provide an end-to-end, reliable, connection-oriented byte stream from sender to receiver. To do so, the Transport Layer provides connection establishment and connection management. The times associated with Transport Layer activities depend on all devices in the network and the algorithms being used. Thus, fluctuations in transmission times in the Transport Layer also occur, contributing to timing delays. [0019] However, not only the network influences timing fluctuations. The transmitting and receiving computers have internal delays resulting from servicing network packets.
Thus, even the act of observing the timings will also introduce random fluctuations. (See Appendix B for an analysis of the effects of perturbations on arrival timing.) [0020] Another approach to obtaining independently generated but correlated raw random keys is to employ a probabilistic array commonly known to the communicating parties and an agreed-upon generation method. [0021] The present invention provides an efficient, practical system and method for a key agreement protocol based on network dynamics or a probabilistic generation method that has the strongest possible security, namely, unconditional security, and that does not require any additional hardware. Previous work in this area is either theoretical [11] or practically infeasible due to the requirement for additional channels based on expensive and complicated hardware such as satellites, radio transmitter arrays and accompanying additional computer hardware to communicate with these devices [7]. All previous cryptographic keys only satisfy the weaker criterion of computational security. [0022] In one embodiment, the present invention introduces relative time sequences based on round-trip timings of packets between two communicating parties. These packets form the basic building blocks for creating an efficient and unconditionally secure key agreement protocol that can be used as a replacement for current symmetric and asymmetric key cryptosystems. In another embodiment, the present invention introduces correlated raw randomly generated keys that have been independently generated by two communicating parties based on a probabilistic array (or vector). The present invention is an unconditionally secure cryptographic system and method based on ideas that can be used in the domain of quantum encryption [1, 5 and 20 Chapter 6].
Moreover, the present invention for the first time provides a cryptographic protocol that exploits fundamental results (and their interconnectedness) in the fields of information theory, error-correction codes, block design and classical statistics. The system and method of the present invention is computationally faster, simpler and more secure than existing cryptosystems. In addition, due to the unconditional security provided by the present invention, the system and method of the present invention are invulnerable to all attacks from super-computers and even quantum computers. This is in sharp contrast to all previous protocols. [0023] The present invention provides a protocol that uses either two characteristics of network transit time (namely, its randomness, and the fact that, despite this, the average timing measured by two communicating parties will converge over a large number of repetitions) or a probabilistic array and an adjusting raw key generation method. The result is that two correlated random variables are obtained: one by measuring the relative time a packet takes to complete a round trip with respect to a first party, Alice or A, and a round trip with respect to a second party, Bob or B; and the other by starting with a known probabilistic array and applying an agreed-upon adjusting procedure to arrive at a correlated generated raw random key. [0024] In a first preferred embodiment, A and B engage in rallying packets back and forth and calculate round-trip times individually. The packets may be used for any additional purpose since the contents of the packets are irrelevant. Only the round-trip times are of interest. FIG. 2 shows one round of a relative round-trip time generator of the present invention. FIG. 2 diagrammatically describes the process. [0025] In a second preferred embodiment, A and B employ a pre-determined string P to independently generate raw random keys. Appendix C describes the process.
[0026] PHASE 1—Alice and Bob Employ the System and Method of the Present Invention to Construct a Raw Random Key. [0027] For example, Alice and Bob exchange packets over a network, record round-trip times, and each form a bit string by concatenating a pre-arranged number of low order bits of successive packet round-trip times. Once sufficient bits are concatenated, the process is stopped and both Alice and Bob apply a pre-determined permutation to their respective concatenated bit strings to form permuted remnant raw keys K[A] and K[B], respectively, of equal length. [0028] Or, in another example, Alice and Bob employ a pre-determined probabilistic string P to independently generate correlated random raw strings K[A] and K[B] using a process such as the one described in Appendix C. [0029] PHASE 2—Alice and Bob Employ These Remnant Raw Keys to Create a Reconciled Key: [0030] Alice and Bob systematically partition their respective permuted remnant raw keys, K[A] and K[B], into sub-blocks, compute, exchange and compare parities for each sub-block, and, discarding the low order bit of the sub-block, re-concatenate the modified sub-blocks in their original order. In the case of blocks with mismatched parities the partition process is iterated until mismatched bits are located and deleted. [0031] PHASE 3—Alice and Bob Create an Unconditionally Secure Pad or Key From Their Common Reconciled Key: [0032] Privacy amplification to eliminate any partial information that an eavesdropper, Eve, might have is applied by both Alice and Bob using a pre-determined proprietary hash function [4] to produce a final unconditionally secure key of a pre-determined length from the reconciled key. [0037] In a preferred embodiment, the key agreement scheme of the present invention comprises three phases. The first phase is construction of a permuted remnant bit string.
Two methods are described. [0038] The first method is based on physical characteristics of the network, wherein, for example and not limitation, the two communicating parties, Alice and Bob, rally packets back and forth recording round-trip times. [0039] The second method is probabilistic, wherein, for example and not limitation, the two communicating parties, Alice and Bob, both know a probabilistic string P of real numbers and generate keys based on this string; see Appendix C. [0040] Some of the bits may still be different after the initial bit string construction, so Alice and Bob then participate in a second phase called Information Reconciliation. The second phase results in Alice and Bob holding exactly the same key. However, Eve may have partial knowledge of the reconciled strings, in the form of Shannon bits. Therefore, a third and final phase called Privacy Amplification is performed to eliminate any partial information collected by Eve. [0041] PHASE I—Alice and Bob rally packets back and forth to generate a bit string from truncated round-trip timings. This string is then systematically permuted. The procedure is as follows: [0042] (i) Alice sends Bob a network packet and logs the time t[A0]. [0043] (ii) Bob records the time of reception as t[B0] and responds immediately to Alice with another network packet. [0044] (iii) Alice records the time of reception as t[A1], and responds immediately with a network packet. [0045] (iv) Bob records the time of reception as t[B1] and responds immediately to Alice with another network packet. [0046] (v) Alice and Bob respectively calculate Δt[A] = t[A1] − t[A0] [0047] and Δt[B] = t[B1] − t[B0]. [0048] Depending on the quality of the network connection, only some bits of Δt[A] and Δt[B] are kept. The higher order bits are dropped. Typical experimental data and criteria for the truncation can be found in [18].
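Steps (i)-(v) can be simulated on one machine. Real values of Δt[A] and Δt[B] come from the network; the sketch below (an illustration of mine, with invented names and a faked latency model) just shows the truncation to low-order bits described in [0048]:

```python
import random

# Simulate PHASE I: keep the low-order bits of round-trip time deltas.
# Real timings would come from timestamped packets; here we fake them.
def low_bits(value, k):
    return value & ((1 << k) - 1)  # keep the k low-order bits

random.seed(1)
bits = []
for _ in range(32):
    # Fake a round-trip delta in microseconds: base latency plus jitter.
    delta_us = 20_000 + random.randrange(0, 4096)
    bits.append(low_bits(delta_us, 3))  # keep 3 low-order bits per sample

raw_key = "".join(f"{b:03b}" for b in bits)  # concatenated bit string
assert len(raw_key) == 96
```

Alice and Bob would each run this against their own timestamps, giving two highly correlated but not identical raw keys, which is exactly what the later reconciliation phase is designed to repair.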
[0049] By taking a suitable probability distribution it can be shown that the average of Δt[A] equals the average of Δt[B]. [0050] (vi) Repeat steps (i) through (v) in order to create enough bits that are then concatenated as a string of bits of a pre-determined length. [0051] (i)-(vi) Alternatively, Alice and Bob each know a random probabilistic array P. They independently proceed as described in Appendix C to generate correlated raw random keys K[A] and K[B]. [0052] PHASE II—Once sufficient bits are created, the process is stopped. Alice and Bob must now use the relative time series to create an unconditionally secure pad or key. One skilled in the art can deduce, from a study of various papers in the list of references, that there are many ways to proceed. The present invention uses an approach which, very loosely speaking, is initially related to that of Bennett et al. [1]. However, in [3, 4 and 10], several changes and improvements have been indicated. These changes, based on fundamental results in algebraic coding theory, information theory, block design and classical statistics, together achieve the following results: [0053] (a) an a-priori bound on key-lengths; [0054] (b) a method for estimating the initial and subsequent bit correlations and key-lengths; [0055] (c) a precise procedure on how to proceed optimally at each stage; [0056] (d) a formal proof that K[A] converges to K[B]; [0057] (e) a stopping rule; [0058] (f) a verification procedure for equality; and [0059] (g) a new systematic hash function for Privacy Amplification. [0060] After PHASE I, Alice and Bob have their respective binary arrays K[A] and K[B] and both perform the following steps of PHASE II: [0061] (vii) Shuffle and partition. Alice and Bob apply a permutation to K[A] and K[B]. They then partition the remnant raw keys into sub-blocks of length l=4.
[0062] (viii) Parity exchange and bisective search with l=4: Parities are computed and exchanged for each sub-block of length 4 by Alice and Bob. Simultaneously they discard the bottom bit of each sub-block so that no new information is revealed to Eve. If the parities agree, Alice and Bob retain the three top bits of each sub-block. If the parities disagree, Alice and Bob perform a bisective search discarding the bottom element in each sub-block exactly as described in [1] and [5] (see also [4]). The procedure in steps (vii) and (viii) is denoted by KAP[4]. [0063] (ix) Estimate Correlation: From the length of the new key, we can calculate the expected initial bit correlation x[0] between K[A] and K[B] [4]. Using x[0] we can calculate the present expected correlation x=φ[4](x[0]). [0064] (x) Shuffle, parity exchange, bisective search with the optimal l: To the remnant keys K[A], K[B] we apply a permutation f in order to separate adjacent bits. As a non-restrictive example, one such f can be implemented by shuffling the bit order from (1, 2, 3, . . . , n) into the order (1, p+1, 2p+1, . . . , q[1]p+1, 2, p+2, 2p+2, . . . , q[2]p+2, . . . , p−1, 2p−1, 3p−1, . . . , q[p−1]p+p−1, p, 2p, 3p, . . . , q[p]p+p), where q[i]=(n−i)/p. [0065] Given the present correlation x we choose the optimal value for l=l(x) by using the tables in [4]. Similar to (viii), (ix) for the case l=4, we carry out the procedure KAP[l]. From x, or from the new common length of the remnant keys, we calculate the expected present correlation after KAP[l] has been applied. We repeat (x) until the stopping condition (xi) holds.
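A single-machine sketch of the parity-compare-and-bisect step (viii) follows. It is a simplification of mine, not the patent's exact KAP[l] procedure: both keys live in one process, and only the last bit of each agreeing block is discarded, rather than the full bit-discard bookkeeping of [1]:

```python
def parity(bits):
    return sum(bits) % 2

def bisect_fix(a, b):
    """Given blocks with mismatched parities, locate one mismatched bit by
    parity bisection and discard it; return the surviving bits of each."""
    if len(a) == 1:            # this is the mismatched bit itself: discard it
        return [], []
    mid = len(a) // 2
    if parity(a[:mid]) != parity(b[:mid]):
        ka, kb = bisect_fix(a[:mid], b[:mid])
        return ka + a[mid:], kb + b[mid:]
    ka, kb = bisect_fix(a[mid:], b[mid:])
    return a[:mid] + ka, b[:mid] + kb

def reconcile_round(ka, kb, l=4):
    """Compare parities block by block; keep agreeing blocks minus their
    last bit, and bisect disagreeing blocks to delete one error."""
    out_a, out_b = [], []
    for i in range(0, len(ka) - l + 1, l):  # trailing partial block dropped
        block_a, block_b = ka[i:i + l], kb[i:i + l]
        if parity(block_a) == parity(block_b):
            out_a += block_a[:-1]  # drop the bit spent on the revealed parity
            out_b += block_b[:-1]
        else:
            fa, fb = bisect_fix(block_a, block_b)
            out_a += fa
            out_b += fb
    return out_a, out_b
```

As in the patent, a round only removes errors with odd multiplicity per block, which is why the shuffle-and-repeat loop of step (x) is needed before the stopping condition is met.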
Construct a binary matrix M=m[ij], (1≦i≦t+1, 1≦j≦2^t) as follows: [0068] a. The entries m[ij], (1≦ij≦t) are the entries of the t×t identity matrix I[tXt]. [0069] b. The (t+1)^th row of M is the all-ones vector, that is m[t+1j]=1 (1≦j≦2^t). [0070] c. Denote the top t entries in the j^th column by the binary vector v[j ](1≦j≦2^t). Thus, vj={m[ij]|1≦i≦t}. Then we impose the condition that the vectors v[j ]are all distinct. Thus, the set {v[j]} equals the set of all 2^t distinct binary vectors of length t. [0071] d. Denote the rows of M by R[1], R[2], . . . , R[t+1]. Let x, y denote the remnant keys K[A], K[B ]written as row vectors of length n. Let x, y denote the vectors that result when a row of zeros of length 2^t-n is adjoined, on the right of x, y respectively. Thus x=(x,000 . . . 0), y=(y,000 . . . 0). [0072] e. Our verification criterion is to check that x. R[i]=y. R[i], (1≦i≦t+1). [0073] If the verification criterion is not satisfied we remove the first t+1 bits from K[A], K[B ]and repeat steps (x), (xi) and check again if the verification criterion is satisfied. Eventually, it will be satisfied. [0074] At this stage Alice and Bob have confirmed that they now share the same key. Once confirmed, the final remnant raw key as transformed by Phase 2 is modified by removing the first t+1 bits from K[A]=K[B]. Our new key is re-named the “reconciled key” and phase 3, Privacy amplification is performed. [0075] PHASE III—At this stage Alice and Bob now have a common reconciled key. In certain cases it is possible that the key is only partially secret to eavesdropper, Eve, in the sense that Eve may have some information on the reconciled key in the form of Shannon bits. Alice and Bob now begin the process of PrivacyAmplification that is the extraction of a final secret key from a partially secret one (see [1] and [2]). 
A well-known result of Bennett, Brassard and Robert (see [2]) shows that Eve's average information about the final secret key is less than 2^{−s}/ln 2 Shannon bits, as explained below (see also Shannon [9]). [0076] (xiii) Privacy Amplification—Let the upper bound on Eve's number of Shannon bits be k, and let s > 0 be a security parameter that Alice and Bob may adjust as desired. Alice and Bob now apply a hash function described in "Method For The Construction Of Hash Functions Based On Sylvester Matrices, Balanced Incomplete Block Designs And Error-Correcting Codes", co-pending Irish Patent Application (the entire contents of which is hereby included by reference as if fully set forth herein [3]), which produces a final secret key of length n−k−s from the reconciled key of length n. [0077] The system and method of the present invention provide an unconditionally secure key agreement scheme based on network dynamics, as follows. In PHASE I, Alice and Bob permute the bits of what remains of their respective raw keys, which keys incorporate delay occasioned by network noise. In PHASE II, the key from PHASE I undergoes the treatment of Lomonaco [5]. That is, in PHASE II Alice and Bob partition the remnant raw key into blocks of length l. An upper bound on the length of the final key has been estimated, and the sequence of values of l that yield key lengths arbitrarily close to this upper bound has also been estimated [4]. In PHASE II, for each of these blocks, Alice and Bob publicly compare overall parity checks, making sure each time to discard the last bit of the compared block. Each time an overall parity check does not agree, Alice and Bob initiate a binary search for the error, i.e., bisecting the mismatched block into two sub-blocks, publicly comparing the parities for each of these sub-blocks, while discarding the bottom bit of each sub-block. They continue their bisective search on the sub-block for which their parities are not in agreement.
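The binary (bisective) search just described can be sketched in a few lines. This is an illustrative Python reconstruction, not the patent's code; for clarity it only locates the position of the single error in a block, and omits the discarding of bottom bits that limits the information revealed to Eve:

```python
def parity(bits):
    return sum(bits) % 2

def locate_error(a, b):
    """Binary search for the single position where blocks a and b differ,
    using only parity comparisons -- the information Alice and Bob
    would exchange publicly."""
    lo, hi = 0, len(a)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Compare parities of the left halves; a mismatch means the
        # error is on the left, otherwise it is on the right.
        if parity(a[lo:mid]) != parity(b[lo:mid]):
            hi = mid
        else:
            lo = mid
    return lo

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = list(a)
b[5] ^= 1                    # introduce one error
print(locate_error(a, b))    # -> 5
```

Each round of the search halves the suspect sub-block, so the erroneous bit is found with about log2(l) public parity comparisons.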
This bisective search continues until the erroneous bit is located and deleted. They then proceed to the next l-block. [0078] PHASE I is then repeated, i.e., a suitable permutation is chosen and applied to obtain the permuted remnant raw key. PHASE II is then repeated, i.e., the remnant raw key is partitioned into blocks of length l, parities are compared, etc. Precise expressions for the expected bit correlation (see below) following each step have been obtained in [4], where it is also shown that this correlation converges to 1. Moreover, in [4] the expected number of steps to convergence as well as the expected length of the reconciled key are tabulated. [0079] The probability that corresponding bits agree in the arrays K_A, K_B is known as the bit correlation probability or, simply, as the bit correlation. It can be shown (see [4]) that each round can be used to increase the bit correlation. For example, if we start with a bit correlation of 0.7, then after one round with l=3 the bit correlation increases to about 0.77, and then to 0.87. For l=2 the corresponding numbers are 0.84 and 0.97. Estimates are also available for the key lengths after a round of the protocol of the present invention, for various values of l [4]. [0080] The final secret key can now be used as a one-time pad to create perfect secrecy, or as a key for a symmetric key cryptosystem such as Rijndael [12] or Triple DES [18]. [0081] A simplified version of the algorithm for the values l=2 and 3 is described in Appendix A. [0082] The system and method of the present invention provide secure transmission over wireless and wired media and networks as set forth below:
[0083] a. wireless
[0084] 1. radio transmission
[0085] 2. radio frequency
[0086] 3. satellite
[0087] 4. microwave
[0088] 5. infrared
[0089] 6. acoustic
[0090] 7. electro-magnetic spectrum
[0091] 8. spread spectrum
[0092] 9. laser
[0093] b. wired
[0094] 1. optical
[0095] 2. fiber optics
[0096] 3. electrical
[0097] 4. Ethernet
[0098] 5. quantum communication
[0099] c. networks
[0100] 1. intranet
[0101] 2. Internet
[0102] 3. extranet
[0103] 4. Public Switched Telephone Network (PSTN)
[0104] 5. Local Area Network (LAN)
[0105] 6. Wireless Local Area Network (WLAN)
[0106] 7. Wireless Fidelity (WIFI)
[0107] 8. Wireless Local Area Network (WLAN)
[0108] 9. IEEE 802.11, 802.11a, 802.11b
[0109] 10. Personal Area Network (PAN)
[0110] 11. Bluetooth
[0111] 12. Code Division Multiple Access (CDMA)
[0112] 13. Global System for Mobile (GSM) Communication
[0113] 14. 3rd Generation Mobile Network (3G)
[0114] 15. Asynchronous Transfer Mode (ATM)
[0115] 16. Digital Subscriber Line (DSL)
[0116] 17. Frame Relay
[0117] It will be understood by those skilled in the art that the above-described embodiments are but examples, from which it is possible to deviate without departing from the scope of the invention as defined in the appended claims.
Reference and Bibliography
[0118] The following references are hereby incorporated by reference as if fully set forth herein.
[0119] [1] Charles Bennett, François Bessette, Gilles Brassard, Louis Salvail, and John Smolin, Experimental quantum cryptography, EUROCRYPT '90 (Aarhus, Denmark), 1990, pp. 253-265.
[0120] [2] Charles H. Bennett, Gilles Brassard, and Jean-Marc Robert, Privacy Amplification by Public Discussion, SIAM J. on Computing 17, no. 2 (1988), pp. 210-229.
[0121] [3] Aiden Bruen and David Wehlau, Method for the Construction of Hash Functions Based on Sylvester Matrices, Balanced Incomplete Block Designs, and Error-Correcting Codes, co-pending Irish Patent Application.
[0122] [4] Aiden Bruen and David Wehlau, A Note On Bit-Reconciliation Algorithms, Non-Elephant Encryption Systems Technical Note 01.xx NE2, 2001.
[0123] [5] Samuel J. Lomonaco, A quick glance at quantum cryptography, Cryptologia 23 (1999), no. 1, pp. 1-41.
[0124] [6] ______, A Rosetta Stone for Quantum Mechanics With An Introduction to Quantum Computation, quant-ph/0007045 (2000).
[0125] [7] Ueli M. Maurer, Secret Key Agreement By Public Discussion From Common Information, IEEE Transactions on Information Theory 39, no. 3 (1993), pp. 733-742.
[0126] [8] United States General Accounting Office, Advances and Remaining Challenges to Adoption of Public Key Infrastructure Technology, GAO 01-227 Report, February 2001. Report to the Chairman, Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, Committee on Government Reform, House of Representatives.
[0127] [9] Claude E. Shannon, Communication Theory of Secrecy Systems, Bell System Technical Journal 28 (1949), pp. 656-715.
[0128] [10] David Wehlau, Report for Non-Elephant Encryption, Non-Elephant Encryption Technical Note 01.08.2001.
[0129] [11] A. D. Wyner, The Wire-Tap Channel, Bell System Technical Journal 54, no. 8 (1975), pp. 1355-1387.
[0130] [12] Joan Daemen and Vincent Rijmen, The Rijndael Block Cipher, June 1998, http://csrc.nist.gov/encryption/aes/rijndael/rijndael.pdf
[0131] [13] Bruce Schneier, Applied Cryptography, 2nd Edition, John Wiley & Sons, New York, 1996, Chapter 12.
[0132] [14] Andrew Tanenbaum, Computer Networks, Prentice Hall, 1996.
[15] Claude E. Shannon, A Mathematical Theory of Communication, Bell System Technical Journal 27 (1948), pp. 379-423 and 623-656.
[0133] [16] Will E. Leland, Murad S. Taqqu, Walter Willinger, and Daniel V. Wilson, On the Self-Similar Nature of Ethernet Traffic, Proc. SIGCOMM (San Francisco, Calif.; Deepinder P. Sidhu, Ed.), 1993, pp. 183-193.
[0134] [17] R. A. Mollin, An Introduction to Cryptography, Chapman & Hall/CRC, 2000, Chapter 6.
[0135] [18] Douglas R. Stinson, Cryptography: Theory and Practice, CRC Press, 1995.
[0136] [19] Julian R. Brown, The Quest for the Quantum Computer, Simon & Schuster, New York, 2001.
[0137] [20] Xiaomin Bao, Probabilistic Adjusting Raw Key Generation Method, Report for Non-Elephant Encryption, Non-Elephant Encryption Technical Note 02.nm, Jul. 26, 2002.
[0033] FIG. 1 illustrates a typical multi-layer computer network protocol.
[0034] FIG. 2 illustrates one rallying round between two communicating parties for generating a permuted remnant bit string by each party.
[0035] FIG. 3 illustrates mean arrival time as a function of channel noise (noise parameter).
[0036] FIG. 4 illustrates adjusting bits using the present invention to increase the correlation between the raw keys of the communicating parties while decreasing the correlation between the raw keys of the communicating parties and a possible eavesdropper.
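The bit-correlation figures quoted in [0079] (for l = 2, a correlation of 0.7 rising to about 0.84 after one round and 0.97 after two) are consistent with a simple per-round update: a length-2 block passes the parity test when it contains 0 or 2 errors, so the kept top bit agrees with conditional probability x²/(x² + (1 − x)²). This update is our own reconstruction of the l = 2 case under an independent-errors assumption, not a formula taken from [4]:

```python
def kap2_correlation(x):
    """Expected bit correlation after one l = 2 parity-exchange round,
    assuming independent bit errors (our modelling assumption)."""
    return x**2 / (x**2 + (1 - x)**2)

x = 0.7
for round_no in (1, 2):
    x = kap2_correlation(x)
    print(round_no, round(x, 2))   # prints 1 0.84, then 2 0.97
```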
Proof of linear dependence of coplanar vectors

March 12th 2007, 07:46 AM #1
There is a theorem in my math book whose proof I don't understand. The theorem is: vectors x, y, z are linearly dependent if they are coplanar and two of them are collinear.
1) Vectors x, y, z are in plane alpha.
2) Vectors x and y are collinear.
3) There is a real number k != 0 such that y = kx.
4) From 3) it follows that y - kx + 0z = 0, which means that they are linearly dependent.
I don't understand step 4). Where did the 0z come from? Can someone explain step 4) to me?

March 12th 2007, 07:53 AM #2
Recall what it means to be linearly dependent: a subset S of a vector space V is called linearly dependent if there exist a finite number of distinct vectors v1, v2, ..., vn in S and scalars a1, a2, ..., an, not all zero, such that a1v1 + a2v2 + ... + anvn = 0, where 0 denotes the zero vector, not the number zero.
Here your distinct vectors are x, y, z, playing the role of v1, v2, ..., vn. Since you are considering only 3 vectors, you can prove they are linearly dependent by showing there are constants, not all zero (that's why the definition says "not all zero"), such that multiplying each vector by its constant and summing gives the zero vector. This proof simply chose those three constants to be 1, -k, and 0:
(1)y + (-k)x + (0)z = 0.
You get all the other steps, right?

March 13th 2007, 08:37 AM #3
Yes, now I understand it all. To prove linear dependence of the coplanar vectors x, y, z, we must show there are constants k1, k2, k3, not all zero, such that k1x + k2y + k3z = 0. Since kx - y = 0, we can take k1 = k and k2 = -1, and choosing k3 = 0 is no problem because the other constants are not both zero, so linear dependence is proven. Thanks for the help!
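A quick numeric sanity check of the argument (the particular vectors below are made up for illustration, not from the thread): with y = kx, the combination 1·y + (−k)·x + 0·z really is the zero vector.

```python
k = 3.0
x = (1.0, 2.0, 0.0)
y = tuple(k * xi for xi in x)        # y is collinear with x: y = k*x
z = (4.0, -1.0, 0.0)                 # any third coplanar vector

# The nontrivial combination from step 4): coefficients 1, -k, 0
combo = tuple(1 * yi + (-k) * xi + 0 * zi for xi, yi, zi in zip(x, y, z))
print(combo)   # (0.0, 0.0, 0.0) -- the zero vector, so x, y, z are dependent
```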
Bouncing Ball

Discussion of the Bouncing Ball Gravity Engine.

Restatement of the problem. A ball bounces up and down between floor and ceiling. Both floor and ceiling are rigid and infinitely massive. The bounces are assumed elastic, that is, the ball's velocity after impact is the same as before impact, but with reversed direction. Now imagine that the gravitational constant g is slowly but steadily decreasing. The ball is released at rest from the ceiling. The ball attains a certain speed when it reaches the floor, and bounces back. But since g is now smaller, it still has a small velocity when it hits the ceiling. Clearly this means that on completion of this ceiling-to-floor-to-ceiling cycle it has gained kinetic energy, which we could tap with a slightly inelastic ceiling tile which would steal just that small amount of energy, bringing the ball to rest on impact. The gravitational force, though slightly smaller than before, would cause the ball to fall to the floor and bounce back to the ceiling, where we again steal the excess energy gained in this cycle, and so on until gravity disappears, or forever, whichever comes first. Why isn't this practical?

Simple answer: The only energy we could possibly extract from this system would be the kinetic energy the ball attains during its first fall to the floor, slightly less than mgh.

More detailed analysis: We will analyze this by appealing only to kinematics laws and Newton's laws of dynamics, without explicit use of conservation of energy or the laws of thermodynamics. This analysis may be more detailed than some may find necessary, but it illustrates the thought processes one goes through when trying to figure out this puzzle. The assumption that the ball's velocity changes only direction on impact with the massive floor or ceiling is a result of the conservation of momentum law, which follows directly from Newton's laws and the assumption that the floor and ceiling are infinitely massive.
The derivation is a bit subtle, but we'll assume that the reader will accept its conclusion, having seen behavior of nearly this sort when very elastic balls bounce from solid and massive concrete floors. In the present case, where the gravitational constant is changing, we use the fact that the duration of a perfectly elastic impact is infinitesimal, so the gravitational constant doesn't change significantly during so short an interval. The kinematics law we'll need is the relation for the speed of a body under constant acceleration:

v_f^2 = v_i^2 + 2gx

The body moves from point i to point f. v_i is the speed at the initial point i, while v_f is the speed at the final (later) point f. g is the acceleration during this time, and x is the distance moved. This equation assumes that g is constant during this interval.

The deliberate deception in our claim was the same as that made in the Schadewald Gravity Engine. Here's the misleading statement: "The ball attains a certain speed when it reaches the floor, and bounces back. But since g is now smaller, it still has a small velocity when it hits the ceiling. Clearly this means that on completion of this ceiling-to-floor-to-ceiling cycle it has gained kinetic energy, which we could tap..."

Let's say the distance from floor to ceiling is h. Suppose first that g has a constant value. The ball falls the distance h, reaching the floor with a speed given by

v^2 = 2gh

But when g is decreasing, its average value during the fall is smaller, say g_1, and therefore the ball's speed is slightly smaller when it reaches the floor than it was in the constant-acceleration case. If g were to remain constant at the value g_1 (the average value it had during the fall), the ball would then have only sufficient rebound speed to send it back to a height h. But g has decreased and is still decreasing, so its average value is smaller during the rise, allowing the ball to rise higher by a small distance x.
This means that when it reaches height h the ball still has a small velocity. If we place a slightly inelastic pad at the ceiling, it can tap off a small amount of energy, slowing the ball to a stop. Then the ball begins its second cycle with zero speed. However, the speed at the floor during the second cycle is less than it was in the first cycle, and so on, steadily decreasing during successive cycles. The period (the length of time for each cycle) steadily increases. Eventually, when g reaches zero, the ball's speed at the top and everywhere else is zero also. But how much energy have we tapped? No more than m g_1 h, the kinetic energy it had when it first reached bottom. This will not do anything to solve our energy crisis. This is a satisfying and plausible result. When g reaches zero, there's no accelerating force, so the ball must have constant velocity then. It's possible that the ball might have a small residual speed when g reaches zero. This would be a consideration if g abruptly went to zero, or if it decreased so rapidly that it reached zero during the first few cycles. In that case the ball would be left bouncing between floor and ceiling with a constant nonzero speed, but a speed less than it attained during the first half cycle. So there's no energy to tap here; energy conservation is working properly. This is the reason Bob Schadewald, in explaining his version of this deception, said: "The weight may pick up speed at the top, but never at the bottom...". This speed increase will occur if g reached zero during a cycle. The same is true of our bouncing ball engine. If, for example, the speed suddenly dropped to zero at the end of the first half cycle when the ball reached the floor, it would rebound and hit the ceiling with the same velocity it had attained while falling. This points out the difficulty we'd have analyzing the gravity shield engine.
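The bookkeeping above can be checked numerically. The toy simulation below is our own sketch (the time scale, step size, and linear decay of g are all illustrative choices): the ball bounces elastically at the floor, an inelastic pad absorbs the residual kinetic energy each time the ball arrives at the ceiling, and the total energy tapped stays below mgh computed with the initial g — there is no free lunch.

```python
def run(h=1.0, m=1.0, g0=9.8, T=100.0, dt=1e-3):
    """Ball between floor (y = 0) and ceiling (y = h) while g decays
    linearly from g0 to 0 over time T. Returns the total energy
    absorbed by the inelastic ceiling pad."""
    y, v, t = h, 0.0, 0.0        # released at rest from the ceiling
    tapped = 0.0
    while t < T:
        g = g0 * max(0.0, 1.0 - t / T)
        v -= g * dt              # semi-implicit Euler step
        y += v * dt
        if y <= 0.0:             # elastic bounce at the floor
            y, v = 0.0, -v
        if y >= h and v > 0.0:   # inelastic pad taps the leftover KE
            tapped += 0.5 * m * v * v
            y, v = h, 0.0
        t += dt
    return tapped

energy = run()
print(energy, "J tapped in total; compare m*g0*h =", 9.8, "J")
```

With these illustrative parameters the total comes out well under m·g0·h, in line with the argument in the text.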
Discontinuous changes of acceleration represent infinite values of what physicists call the jerk, the rate of change of acceleration. Analyzing such discontinuous changes mathematically requires special care and a good knowledge of calculus. It's all too easy to blunder and reach absurd conclusions. [I'm speaking from long experience reaching absurd conclusions.] None of these gravity machines would work any better if g were increasing. This analysis is directly relevant to the Schadewald Gravity Engine. The deception is the same, and the actual behavior of the SGE is the same with respect to top and bottom velocities.

Related Puzzles:
1. How would steadily reducing gravity affect a simple pendulum? Amplitude? Period? Velocity at bottom of swing? Height of swing?
2. How would steadily reducing gravity affect the motion of a mass suspended from a Hookean spring (obeying Hooke's law) and bouncing up and down?

Answer to puzzle 1: The period increases. The speed at the bottom decreases from cycle to cycle. The height at the extremes of the swing remains the same. The kinetic energy of the pendulum steadily decreases.
In how many ways can 6 people line up for play tickets? Number of results: 82,763 We would line them up one person at a time. We have 8 choices for the first person. That makes 8 choices. For the second person, we are left with 7 choices, together with the first, we have 8*7=56 ways. For the third person, we are left with 6 people from whom to choose, so ... Wednesday, December 23, 2009 at 5:27pm by MathMate Can someone tell me if I have the correct answers? I am really confused. Three boys and three girls line up to go in the front door. In how many ways can they line up? My answer: 720 How many ways can they line up if the first one in line is a girl and then alternates by ... Saturday, August 23, 2008 at 3:41pm by Tim Tell me how to figure out these probabilities In how many different orders can 9 people stand in line? In how many ways can 4 people be seated in a row of 12 chairs? Thank you For the first problem, we have nine people and want to arrange them in a line. For the first position... Thursday, September 21, 2006 at 10:42pm by Haley In how many ways can 7 people line up for plane tickets? Saturday, March 26, 2011 at 5:24pm by Valerie In how many ways can 7 people line up for play tickets Saturday, March 26, 2011 at 5:24pm by Valerie how many ways can you line up 11 people for a picture? Tuesday, March 12, 2013 at 11:52pm by Regina How many ways can 6 people line up for a race. This is is a combination, but I do not know how to solve it. Help is appreciated. Sunday, March 22, 2009 at 3:02pm by Gaby Math Word Problem How many ways can 8 people line up for play tickets? Saturday, December 5, 2009 at 6:00pm by Ashley How many ways can we select a committee of 2 people from a staff of 5? The first member of the committee can be chosen from 5 people, and the second can be chosen from 4. So the number of possible choices is 5*4=5!/(5-2)! However, the 2 people could have been chosen in the ... 
Sunday, March 13, 2011 at 1:33pm by MathMate The Downtown Theater has 1 ticket window. In how many ways can 2 people line up to buy tickets? Thursday, April 5, 2012 at 2:02pm by Cheril The Downtown Theater has 1 ticket window. In how many ways can 3 people line up to buy tickets? Monday, April 9, 2012 at 5:34pm by Cheril The question asks how many ways 2 people can line up. One will be in front, the other behind. Thursday, April 5, 2012 at 2:02pm by Ms. Sue In how many ways can 7 people be lined up to see a movie if Ben must be at the start of the line? is the answer 720? Thursday, January 28, 2010 at 4:36pm by sss In how many ways can 7 people be lined up to see a movie if Ben must be at the start of the line? is the answer 720? Thursday, January 28, 2010 at 7:30pm by sss The Downtown Theater has 1 ticket window. In how many ways can 3 people line up to buy tickets? _____ Tuesday, June 1, 2010 at 10:20pm by peaches In how many ways can 7 basketball players of different heights line up in a single row so that no player is standing between 2 people taller than she is? Wednesday, January 27, 2010 at 7:00pm by joe howmany ways can 8 people line up for play tickets? Thank you Wednesday, December 23, 2009 at 5:27pm by Kay Kay Finite Mathematics Suppose 12 people arrive at a bank at the same time. In how many ways can they line up to wait for the next available teller? Monday, February 7, 2011 at 9:02pm by Anonymous I fell asleep trying to figure this one out ... if you could help I would appreciate it ... Here is the problem .. If Jon, Mac, and Heather are taking a group photo, how many different ways can the photographer line them up? .. okay, I have that one .. = 6 ... but now i have ... Wednesday, December 19, 2007 at 7:45am by Dylan How many ways can 5 people be arranged in a straight line if there are 12 people to choose from Saturday, March 26, 2011 at 5:47pm by Valerie HELP ME pLEASE. DUE TOMORROW D: there are 4! 
ways or 24 ways for them to line up Sunday, March 1, 2009 at 11:42pm by Reiny In November 1994, the first live concert on the Interent by a major rock n roll band was broadcast. Most fans stand in lines for hours to get tickets for concerts. Suppose you are in line for tickets. There are 200 more people ahead of you than behind you in line. the whole ... Tuesday, October 2, 2007 at 10:21pm by Ashley V Math Word Problem In how many ways can 5 people be chosen and arranged in a straight line, if there are 11 people from whom to choose? Saturday, December 5, 2009 at 6:14pm by Ashley Okay, for this one, I know it's either one of two answers. In how many different ways can 7 different people line up for a picture? 49, or 823543. Friday, January 1, 2010 at 12:58pm by Anna math for the middle school and elementary teacher Which one of the following situations a permutation? a) the number of ways 10 people can be lined up in a line for basketball tickets. b) The number of possible four member bowling teams from a family of six siblings. c) The number of ways to choose a committee of three from ... Tuesday, February 28, 2012 at 8:58pm by liz #1 looks ok #2. sounds like you're guessing To prove it, check when n=6 (4/3)^6 = 5.6 Bzzzt. (However, it is true for n>6) #3. That is correct. The kth coefficient in (x+y)^n = C(n,k) #4. You are correct in think that 20 is way too low. There are 2 choices for every answer... Friday, June 7, 2013 at 3:11pm by Steve Matt, Bob, Amy, and John are waiting in line at a fast food restaurant. How many different ways can they line up to place their order Tuesday, October 20, 2009 at 7:07pm by marie Look for ways to express given values. If there are two things you are trying to find out, try to express one of them in terms of the other. Solve the equation of one variable, then use the relationship to find the other variable. Let x = number in back of you. Let y = number ... 
Tuesday, October 2, 2007 at 10:21pm by Quidditch There are about 7 billion people on Earth. Suppose that they all lined up and held hands, each person taking about 2 yards of space. a. How long a line would the people form? b. The circumference of the Earth is about 25,000 miles at the equator. How many times would the line ... Sunday, November 17, 2013 at 3:39pm by Brandon You are forming 3 teams from 25 people. Team A has 8 Team B has 3 Team C has 14 How many ways can team A be selected? How many ways can team B selected from the remaining people? How many ways can team C be selected from the remaining people? How many ways can all teams be ... Tuesday, December 14, 2010 at 11:10pm by Mark No, you can pick the chairperson in 6 ways, for each of these you can pick the secretary from the remaining 5 people, and finally the treasurer from the remaining 4 people, so the number of ways to pick your 3 positions is 6x5x4 = 120 ways. Tuesday, October 27, 2009 at 10:57pm by Reiny What is the number of ways 9 people from a group of 13 could be arranged in a line. Thursday, May 17, 2012 at 9:23pm by Matt I suppose you mean how many ways could they line up. Think of who goes first: 5 choices, who goes next: 4 choices, ... who goes last: 1 choice. So by the multiplication principle, there are 5.4.3...1 Thursday, February 16, 2012 at 2:25pm by MathMate How many ways can 6 cars line up is it 6 X 6 is it 6 squared? Wednesday, May 12, 2010 at 1:55pm by Cooper for the second: let's consider one of the ways: numbers show up: 1,2,3,4,5,6 the prob of that happening = (1/6)*(1/6)..(1/6) six times = (1/6)^6 but it did not have to come up in that order, as a matter of fact there are 6! ways for the 1,2,3,4,5,6 to come up or 720 ways so ... Tuesday, March 24, 2009 at 8:18pm by Reiny Science - 7th grade Is radiation damaging to the health of humans? If so, in what way/ways? 
My answer is: Yes, radiation though helpful in treating certain diseases is damaging to the health of human beings in many ways one of which being that it mutates our DNA and is unhealthy. It can also make... Thursday, January 16, 2014 at 9:01pm by Sara How many different ways can a teacher line up 5 students for lunch? Sunday, December 20, 2009 at 11:50pm by Michelle MATH Prob. How many different ways can a teacher line up 5 students for lunch? Thursday, August 20, 2009 at 10:09pm by Twg Math 4 Teresa has 4 flowers, 4 designs. How many ways can she display/mix them up. Thank you ....but 4x4 seems too simple. If I line up abcd, bcda,cdab, etc...... (Naming each design abcd) ??? is it possibley more? Tuesday, September 22, 2009 at 6:31pm by Noah center dot x=0 line up 1 to the left of y x=-2 line up to 2.5 to the left of y x=-3 line up to 4.5 to the right of y x=2 line up to 2.5 to the right of y x=3 line up to 4.5 Does this help? Tuesday, September 18, 2012 at 11:04am by lee heeeeeeelp math Assembling the head, there are 4! ways. Same for each of the three other parts. Total number of ways = 4!4!4!4! But we can arrange the 4 people in 4! different orders, so total number of sets of 4 people = (4!)^4/(4!) =(4!)³ Thursday, May 30, 2013 at 10:25am by MathMate In how many ways can Kwan line up her carvings of a duck, a gull, and a pelican on a shelf? Thursday, August 25, 2011 at 8:21pm by Mia Once we have chosen our 7 people, which would be C(18,7) or 31824 ways, each of those groups can be arranged in the van in 7! ways But I don't think you asked for the last part, so just 31824 ways. Sunday, March 24, 2013 at 3:34pm by Reiny The line for the Dunking Machine was twice as long as the Cake Walk line. The line for the Cake Walk was one-third the length of the line for the Hoop Shoot. If there were 12 people in line for the Hoop Shoot, how many people were in the line for the Dunking Machine? 
Wednesday, April 27, 2011 at 9:25pm by Rosa Data managment math 7: The basket ball team has total 14 players : 3-1st year player,5-2nd year player and 6-3rd year player (a) in how many ways can the coach choose a starting line up(5 players) with at least one 1 st player. (b) in how many ways can he set up a starting lineup with two 2nd ... Monday, January 28, 2008 at 12:13pm by Samir a)in how many different ways can 8 exam papers be arranged in a line so that the best and worst papers are not together? b)in how many ways can 9 balls be divided equally among 3 students? c)There are 10 true-false questions. By how many ways can they be answered? d)lim(x ... Wednesday, August 5, 2009 at 12:24pm by Aayush daily word promblems The line for the cake walk was one-third the length of the line for the hoop shoot if there were 12 people in line for the hoop shoot how many people were in for the dunking machine? Wednesday, September 5, 2012 at 1:07am by daisy How many different ways can Kathy, Jessie,and Daniella line up, single file,at the drinking fountain? Thanks Tuesday, December 15, 2009 at 6:12pm by Ada Math Word Problem First of all, the 5 people can be chosen from the 11 in C(11,5) or 462 ways so there are 462 groups of 5, but each of these can be arranged in 5! ways so total number of ways for your situation is 462x5! = 55440 Saturday, December 5, 2009 at 6:14pm by Reiny b) You can distribute the balls as follows. You put the 9 balls in some arbitrary order and then let student 1 take the first 3, student 2 the next 3, and student 3 the last 3. Then all possible ways to distribute the balls can be realized this way. There are 9! possible ways ... Wednesday, August 5, 2009 at 12:24pm by Count Iblis Think of it as filling three spots with the three different people. You can place one of 3 different people in the first spot. 
Now look at the second spot, since one of the people has been placed, that would leave one of the remaining two people to be placed in that spot So ... Wednesday, December 19, 2007 at 7:45am by Reiny U.S. History Rachel Carson's book Silent Spring woke people up to the ways we were poisoning the earth. Friday, April 22, 2011 at 12:41pm by Ms. Sue Ok, so my math teacher didn't tell us ways to do this faster, and it's late enough as it is so I don't want to be up all night doing homework. D; *Note: A permutation is where the order matters. A combination is where it doesn't. Find the number of permutations. 1.Ways to ... Thursday, February 14, 2008 at 12:33am by Who cares? Do members of your community look like you? In what ways do they look the same or different? o How do leaders within your community treat people who are like you? How do they treat people who are different? o How do other members of your community treat people who are like you... Thursday, June 26, 2008 at 10:58am by nik o Do members of your community look like you? In what ways do they look the same or different? o How do leaders within your community treat people who are like you? How do they treat people who are different? o How do other members of your community treat people who are like ... Thursday, December 13, 2007 at 12:28pm by Tasha o Do members of your community look like you? In what ways do they look the same or different? o How do leaders within your community treat people who are like you? How do they treat people who are different? o How do other members of your community treat people who are like ... Thursday, December 13, 2007 at 12:56pm by Tasha The line for the dunking machine was twice as long as the cake walk line. The line for the cake walk was one third the length for the hoop shot. If there were 12 people in line for the hoop shot, how many people were in line for the dunking machine? 
Tuesday, September 9, 2008 at 7:30pm by skate At the head of the line, there are 7 choices of people. The next one has six, then five, then four.... Since these are independent choices, the total number of ways are multiplied together to give 7*6*5*...1 =7! Saturday, March 26, 2011 at 5:24pm by MathMate i had an essay prompt, write about what Machiavelli would think if he encountered mencius' novel, mencius. I was looking for ideas to get started. Ways I thought he would disagree with Mencius is the importance of profit (Mencius- for the people, Machiavelli- for the prince ... Saturday, August 16, 2008 at 10:36pm by samantha Math [Probability] I'm in a Grade 12 Data Management Course. The Current unit is Factorials and Permutations. I'm having a trouble with this question. The answer is suppose to be 120 but I dont know how. How many different ways can 6 people be seated at a round table? Explain your reasoning. If ... Sunday, September 10, 2006 at 3:24pm by Jerry the number of ways in which 18 students in Mr. Garr grade 3 class can line up for a photo if the the Armstrong triplets cannot all be together is n x16! what is n? Monday, July 26, 2010 at 12:06am by dave above the x axis a dot on the -2 line up to 2.5, and another dot on -3 line up to 4.5, thats on left side, then right side a dot on 2 line up 2.5, and another on line 3 up to 4.5. Not sure how to figure this is someone is willing to help. possible answers; domain: {–3, –2, 0, ... Tuesday, September 18, 2012 at 11:04am by lee Define C(n,r)=n!/(r!(n-r)!)=n choose r Sample space: C(13,5)=1287 ways to choose 0 boy =Choose 5 girls out of 7 and 0 boy out of 6 =C(7,5)*C(6,0)=21 ways to choose 1 boy: =C(7,4)*C(6,1)=210 ways to choose 2 boys: =C(7,3)*C(6,2)=525 ways to choose 3 boys: =C(7,2)*C(6,3)=420 ... Thursday, May 24, 2012 at 11:43pm by MathMate Diverse learners are people who learn in different ways. You may want to look up diverse in a dictionary. Sunday, July 27, 2008 at 2:45pm by Ms. 
Sue Euclid's Geometry , geometric constructions I don't know at what of math this is from. There are several ways to do this, I will show you two ways: 1. pick one line segment to be the base of your triangle, I picked the line from (-1,5) to (-2,-3) slope = (-3-5)/(-2+1) = -8/-1 = 8 equation: y-5 = 8(x+1) 8x + 8 = y-5 8x... Monday, February 18, 2013 at 9:00am by Reiny How many ways can 3 students be chosen from a class of 20 to represent their class at a banquet? (1 point) 6,840 3,420 1,140 2,280 15. You and 3 friends go to a concert. In how many different ways can you sit in the assigned seats? (1 point) a.6 ways b.12 ways c.24 ways d.10 ... Wednesday, April 17, 2013 at 8:17am by Anonymous pre calc There are 3! =6 ways to order the three separate topics. For each of these ways, there are 4! ways to arrange the algebra books, 2! ways for the geometry books, and 3! ways for the pre-cal books. So, in all, there are 3! 4! 2! 3! = 6*24*2*6 = 1728 ways. Note that if the ... Sunday, September 25, 2011 at 5:39pm by Steve I have a winning strategy, but perhaps there are others. Draw a line of symmetry to split the game into two identical games. Whatever the next player does, you would do the mirror image. Therefore you will end-up splitting the last rectangle. Based on this, there are only two ... Tuesday, June 4, 2013 at 1:30am by MathMate Two dice are tossed. What is the probability that the sum of both dice is a prime number? This is what I have so far Presuming two fair, 6-sided dice, each numbered 1 through 6: You can achieve one of the following results on any given roll: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12... Tuesday, July 2, 2013 at 4:31pm by Meg 6 │ 2 -9 0 6 -5 ...-------------- ......12 18 108684 ....2 3 18 114 679 It is so hard to line things up properly, in the last two rows, the 3 is under the 12, 18's line up, 108 and 104 line up. 
the answer is the last number, namely 679 Saturday, October 11, 2008 at 8:47pm by Reiny algebra 2 With 10 students you have 10 choices for the first student to go, then 9 choices for the second student to go. That would seem to be 90 ways to choose the people to go on the trip. But the order doesn't matter. You don't care if Alice is chosen, then Bob, or vice versa. So ... Thursday, April 23, 2009 at 12:44pm by xxx I need some ways to rember the people of Titanic i already got some like some people let other people get on the boat and stayed on the Titanic and some people also didnt care about there lives and tried to recuse other people while the ship was sinking. thanks i need this by ... Tuesday, November 27, 2007 at 5:10pm by Anonymous The owner of a luxury motor yacht that sails among the 4000 Greek islands charges $630 per person per day if exactly 20 people sign up for the cruise. However, if more than 20 people (up to the maximum capacity of 90) sign up for the cruise, then each fare is reduced by $5 per... Friday, September 21, 2012 at 10:03pm by Anastasia The owner of a luxury motor yacht that sails among the 4000 Greek islands charges $630 per person per day if exactly 20 people sign up for the cruise. However, if more than 20 people (up to the maximum capacity of 90) sign up for the cruise, then each fare is reduced by $4 per... Friday, September 13, 2013 at 10:38pm by Anonymous How many ways can a committee of 4 people be selected from a group of 10 people? Sunday, May 1, 2011 at 11:29pm by ALAINI math 12 Seven year-old Sarah has nine crayons; three blue, one red, two green and three yellow. a) In how many ways can she line up the crayons on her desk? b) She is to pick up some of the crayons. How many different choices of some crayons could she make? Saturday, November 27, 2010 at 10:03pm by kitkat A Peculiar Question In what ways are people more fearful than people in past generations? 
Saturday, January 11, 2014 at 7:35pm by Victoria Help with Analogies and suggest me other ways to write them please. Thanks 1)Smart people keep quiet about what they know, but stupid people advertise their ignorance.Changed into:Good drivers are hindered to show their driving skills, while reckless drivers show off their ... Tuesday, October 5, 2010 at 5:56pm by Angelina SOME RAILWAYS OF SOUTH KOREA: Gaya Line Bukjeonju Line Yeocheon Line Gyeongbu Line Gyeongbu Line (KTX) Gyeongui Line Seoul Gyowoi Line Gyeongin Line Gyeongwon Line Gyeongchun Line Janghang Line Chungbuk Line Honam Line Honam Line (KTX) Jeolla Line Wednesday, March 26, 2008 at 1:09pm by harry Math Data Management In how many ways can the 12 members of a volleyball team line up, if the captain and assistsant captain must remain together? Thank you in advance. Monday, September 21, 2009 at 11:21pm by Sophie Need help with Analogies and suggest me other ways to write them please. Thank you for the help. 1)Smart people keep quiet about what they know, but stupid people advertise their ignorance.Changed into:Good drivers are hindered to show their driving skills, while reckless ... Monday, October 4, 2010 at 4:24pm by Angelina cecile tosses 5 coins one after another a. how many different outcomes are possible b. draw a tree to illustrate the different possibilities c. in how many ways will the first coin turn up heads and the last coin turn up tails d. in how many ways will the second and thrd and ... Thursday, August 28, 2008 at 7:52pm by steven james The owner of a luxury motor yacht that sails among the 4000 Greek islands charges $430 per person per day if exactly 20 people sign up for the cruise. However, if more than 20 people (up to the maximum capacity of 90) sign up for the cruise, then each fare is reduced by $4 per... 
Wednesday, February 13, 2013 at 1:28pm by joy The owner of a luxury motor yacht that sails among the 4000 Greek islands charges $430 per person per day if exactly 20 people sign up for the cruise. However, if more than 20 people (up to the maximum capacity of 90) sign up for the cruise, then each fare is reduced by $4 per... Sunday, February 24, 2013 at 11:54pm by Shannon The owner of a luxury motor yacht that sails among the 4000 Greek islands charges $430 per person per day if exactly 20 people sign up for the cruise. However, if more than 20 people (up to the maximum capacity of 90) sign up for the cruise, then each fare is reduced by $4 per... Tuesday, February 26, 2013 at 6:31pm by Ashton To handle this type of question, my must be familiar with the C(n,r) notation. The person can select the books in C(10,4) or 210 ways the CD's in C(20,2) or 190 ways and the DVD's in C(5,1) or 5 ways. So the number of ways to make the selection is 210*190*5 ways or 199500 ways. Saturday, February 12, 2011 at 6:37am by Reiny How many ways can seven basketball players of different heights line up in a single row so no player is standing between two players taller then herself? Tuesday, January 26, 2010 at 5:08pm by robin How many ways can seven basketball players of different heights line up in a single row so no player is standing between two players taller then herself? Sunday, January 31, 2010 at 9:39pm by robin The owner of a luxury motor yacht that sails among the 4000 Greek islands charges $430 per person per day if exactly 20 people sign up for the cruise. However, if more than 20 people (up to the maximum capacity of 90) sign up for the cruise, then each fare is reduced by $4 per... Tuesday, February 26, 2013 at 9:48pm by Selena (Please Help) Two dice are tossed. What is the probability that the sum of both dice is a prime number? 
this is the answer i have ; Presuming two fair, 6-sided dice, each numbered 1 through 6: You can achieve one of the following results on any given roll: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12... Monday, July 1, 2013 at 2:29pm by Megan a driver comes to pick up members of your family (20 members) for a family reunion. the van holds 7 people not including the driver. assuming none of the babies will be riding (2 babies in the group of 20) how many different ways can 7 people be chosen to ride in the van? Sunday, March 24, 2013 at 5:22pm by john That's a great activity. It requires people to use their imaginations, both in connecting the bee with the flower in different ways and in drawing or erasing the black line. Here is one re-wording suggestion: in #3 -- "Then the bee will land on the flower." Thursday, March 12, 2009 at 4:09pm by Writeacher All three constructions are pyramids. Study pictures of them carefully and find the ways they are alike and ways they are different. For instance, the Ziggurat was built on a platform and then it went up to the top like stair steps. The Pyramids of Giza rose smoothly from the ... Tuesday, October 13, 2009 at 11:11pm by Ms. Sue 8th math How many ways can seven basketball players of different heights line up in a single row so no player is standing between two players taller then herself? Thursday, January 28, 2010 at 12:21pm by Robin In how many ways can a group of 10 people be divided into: a. two groups consisting of 7 and 3 people? b. three groups consisting of 4, 3, and 2 people? Friday, February 22, 2013 at 8:29am by Therese cake walk line = 1/3 of hoop shot = 1/3(12) or 4 people dunking line was twice as long as cake walk = 2(4) or 8 people Tuesday, September 9, 2008 at 7:30pm by Reiny There are nine customers waiting for their local SuperDuperMegaMart store to open on a big shopping day. In how many ways can the nine customers form a single line at the door? That is, how many different lines of nine people are possible? 
Monday, February 20, 2012 at 7:45pm by Karen You can roll two dice at a time, a white one and a red one, and there are 36 different ways for the "up faces" to land. How many of ways will give a sum of 7 on the two up faces? Tuesday, April 6, 2010 at 8:43pm by Mia math if line segment AC=120ft and the ratio of line AB:BC =3/5 - what is the length of line AB ? I set up the problem up as X:120 & 3:5 3x120= 360 and divided 5 into 360 = 72ft but the answer in the book is 45ft help! Wednesday, October 15, 2008 at 10:16pm by Rose Creative Writing What are some plus and minuses of growing up males, and what are some plus and minuses of growng up females? the topic is: who has an easier time growing up; males or females My opinion: I don't think the difference is tremendous, but in most ways boys have an easier time ... Tuesday, December 8, 2009 at 6:36am by y912f
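Several of the counts worked out in the answers above (7! line-ups, C(11,5)·5! = 55440, C(10,4)·C(20,2)·C(5,1) = 199500, and the prime-dice-sum question) are easy to sanity-check in a few lines of Python:

```python
import math
from fractions import Fraction

# Re-check the counts quoted in the answers above.
line_ups_7 = math.factorial(7)                # 7 people in a line: 7! ways
choose_5_of_13 = math.comb(13, 5)             # sample space: 5 players from 13
committee = math.comb(11, 5) * math.factorial(5)   # choose 5 of 11, then arrange them
selection = math.comb(10, 4) * math.comb(20, 2) * math.comb(5, 1)  # books, CDs, DVDs

# Two dice: probability that the sum is prime (possible prime sums: 2, 3, 5, 7, 11).
prime_sums = {2, 3, 5, 7, 11}
favourable = sum(1 for a in range(1, 7) for b in range(1, 7) if a + b in prime_sums)
p_prime = Fraction(favourable, 36)

print(line_ups_7, choose_5_of_13, committee, selection, p_prime)
# -> 5040 1287 55440 199500 5/12
```

The dice answer (5/12) completes the two truncated "sum is a prime number" posts above.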
{"url":"http://www.jiskha.com/search/index.cgi?query=In+how+many+ways+can+6+people+line+up+for+play+tickets%3F","timestamp":"2014-04-18T16:40:20Z","content_type":null,"content_length":"40784","record_id":"<urn:uuid:b467b1d6-77e8-4585-888b-dfa7ebef20b1>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Give exact & approx. solns (to 3 dec. places) for (x+8)^2=1 think wrote: Give the exact and approximate solutions to three decimal places This is probably most simply solved by taking square roots. For instance: (x + 8)^2 = 1, so x + 8 = ±√1 = ±1, giving x = −8 ± 1. The above would be the "exact" solution. The "approximate" solution would be found by plugging each of: x = −8 + 1 and x = −8 − 1 ...into your calculator, and rounding the outputs to the required three decimal places. Note: In this particular case, the "exact" solution won't involve square roots. For the "approximate" solutions, you'll have to tack on three (entirely unnecessary) zeroes after the decimal point.
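The square-root method described above can be mirrored in a couple of lines (the numbers just restate the algebra):

```python
import math

# Solve (x + 8)**2 = 1 by taking square roots:
# x + 8 = +/- sqrt(1), so x = -8 +/- 1.
root = math.sqrt(1)
solutions = [-8 + root, -8 - root]          # exact: x = -7 and x = -9
approx = [round(x, 3) for x in solutions]   # "to three decimal places"

print(solutions, approx)  # -> [-7.0, -9.0] [-7.0, -9.0]
```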
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=8&t=280","timestamp":"2014-04-21T02:19:12Z","content_type":null,"content_length":"19461","record_id":"<urn:uuid:709a0422-a3e7-4ee5-9c1f-f611904b1038>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler problems/51 to 60 From HaskellWiki Find the smallest prime which, by changing the same part of the number, can form eight different primes. Find the smallest positive integer, x, such that 2x, 3x, 4x, 5x, and 6x, contain the same digits in some order. How many values of C(n,r), for 1 ≤ n ≤ 100, exceed one-million? How many hands did player one win in the game of poker? How many Lychrel numbers are there below ten-thousand? Considering natural numbers of the form, ab, finding the maximum digital sum. Investigate the expansion of the continued fraction for the square root of two. Investigate the number of primes that lie on the diagonals of the spiral grid. Using a brute force attack, can you decrypt the cipher using XOR encryption? Find a set of five primes for which any two primes concatenate to produce another prime.
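For a taste of what solving these looks like (in Python here, rather than the wiki's Haskell), the digit-permutation problem above — find the smallest positive integer x such that 2x, 3x, 4x, 5x and 6x contain the same digits — yields to plain brute force:

```python
from itertools import count

# Problem 52: smallest x whose multiples 2x..6x are digit permutations of x.
def same_digits(a, b):
    return sorted(str(a)) == sorted(str(b))

def euler52():
    for x in count(1):
        if all(same_digits(x, k * x) for k in range(2, 7)):
            return x

print(euler52())  # -> 142857
```

The answer, 142857, is the familiar repeating block of 1/7, whose multiples cycle its digits.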
{"url":"http://www.haskell.org/haskellwiki/index.php?title=Euler_problems/51_to_60&oldid=12304","timestamp":"2014-04-18T06:37:10Z","content_type":null,"content_length":"22713","record_id":"<urn:uuid:3e42d090-d825-4f69-81e6-7478dbb47544>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Homotopy theories of diagrams J.F. Jardine Suppose that S is a space. There is an injective and a projective model structure for the resulting category of spaces with S-action, and both are easily derived. These model structures are special cases of model structures for presheaf-valued diagrams $X$ defined on a fixed presheaf of categories E which is enriched in simplicial sets. Varying the parameter category object E (or parameter space S) along with the diagrams X up to weak equivalence requires model structures for E-diagrams having weak equivalences defined by homotopy colimits, and a generalization of Thomason's model structure for small categories to a model structure for presheaves of simplicial categories. Keywords: model structures, presheaves of categories, diagrams 2010 MSC: Primary 18F20; Secondary 18G30, 55U35 Theory and Applications of Categories, Vol. 28, 2013, No. 11, pp 269-303. Published 2013-05-16. TAC Home
{"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/TAC/volumes/28/11/28-11abs.html","timestamp":"2014-04-20T06:23:09Z","content_type":null,"content_length":"2294","record_id":"<urn:uuid:d3a9f4a6-96e4-4d77-a720-40711a4f6bc9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: simple program to compare spearman coeffs using bootstrap - plea Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: RE: simple program to compare spearman coeffs using bootstrap - please help! From "Garth Rauscher" <garthr@uic.edu> To <statalist@hsphsun2.harvard.edu> Subject st: RE: simple program to compare spearman coeffs using bootstrap - please help! Date Thu, 29 Apr 2010 07:24:44 -0500 Andrew -try this program drop spearmandiff program define spearmandiff, rclass capture drop def1 def2 diff spearman lastmam lastcbe return scalar def1=r(rho) spearman lastmam lastroutine return scalar def2=r(rho) return scalar diff =return(def1)-return(def2) use dataset_name, clear bootstrap def1=r(def1) def2=r(def2) diff=r(diff), reps(100) : spearmandiff estat bootstrap , bc Garth Rauscher Associate Professor of Epidemiology Division of Epid/Bios (M/C 923) UIC School of Public Health 1603 West Taylor Street Chicago, IL 60612 ph: (312)413-4317 fx: (312)996-0064 em: garthr@uic.edu Date: Thu, 29 Apr 2010 02:12:02 -0400 From: "Chong, Qi Lin Andrew" <qchong@middlebury.edu> Subject: st: RE: simple program to compare spearman coeffs using bootstrap - please help! Hi all, sorry to bother again. This is a rudimentary program for what I'm trying to do. I have put the data in the form eff1a eff1b eff2a eff2b where these correspond to the same observation (in this case they are four efficiency scores attached to a firm). I just want to run a spearman coeff on eff1a eff1b and eff2a eff2b separately, and then take the difference. Can anyone help with the basic problems in this program? 
capture program drop spearmandiff program define spearmandiff, rclass spearman eff1a eff1b return scalar def1=r(rho) spearman eff2a eff2b return scalar def2=r(rho) gen diff = def1-def2 save diff bootstrap diff=r(diff), reps(1000) : spearmandiff The program is wrong but it illustrates that I just want to draw a sample, do a spearman test on the first 2 and the last 2 data pairs, and then compare the differences. The final result for bootstrap should give me a mean difference and a SD. I basically just want to calculate the statistical signficiance of the difference between the two spearman coefficients. 1) I am not sure how to save the different values of diff that I get from drawing the different bootstrap samples, and 2) how to make sure they end up in the final output for the multiple bootstrap sampling. Will this give me the confidence interval I require to evaluate its statistical significance? Thanks for any help you can render! From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] On Behalf Of Roger Newson Sent: Wednesday, April 28, 2010 8:28 AM To: statalist@hsphsun2.harvard.edu Subject: Re: st: comparing differences in Kendall's tau or Spearman's coefficient using somersd and/or bootstrap Yes, this sounds like what you ought to be doing. Correlations between X1 and Y1 can only be compared with correlations between X2 and Y2 using firms with all 4 variables. I hope this helps. Best wishes Roger B Newson BSc MSc DPhil Lecturer in Medical Statistics Respiratory Epidemiology and Public Health Group National Heart and Lung Institute Imperial College London Royal Brompton Campus Room 33, Emmanuel Kaye Building 1B Manresa Road London SW3 6LR Tel: +44 (0)20 7352 8121 ext 3381 Fax: +44 (0)20 7351 8322 Email: r.newson@imperial.ac.uk Web page: http://www.imperial.ac.uk/nhli/r.newson/ Departmental Web page: Opinions expressed are those of the author, not of the institution. 
On 28/04/2010 05:37, Chong, Qi Lin Andrew wrote: > Hi all, > thanks a lot to Dr. Newson for responding to my question. To him and all others, I am unsure how I should proceed with the bootstrap command. As Dr. Newson explained accurately, I am "trying to measure the tau-a correlation between "Efficiency definition 1" (ED1) in Year A and ED1 in Year B, and then measure the tau-a correlation between "Efficiency Definition 2" (ED2) in Year A and ED2 in Year B, and then calculate a confidence interval for the difference between the 2 taus." > If I were to reorder the data set so that each firm has "ED1 YearA" "ED1 YearB" "ED2 YearA" "ED2 YearB" lined up next to each other, etc, ed1a ed1b ed2a ed2b, how should I go about estimating this interval? > If it is easy to give me the exact commands, I would very much appreciate it, but if not any advice would be much appreciated. I am trying to calculate the statistical significance of their difference, so should I be drawing bootstrap samples of firms with ed1a ed1b ed2a ed2b all together? And then calculating one difference between ed1a ed1b& ed2a ed2b for each > Would this involve some kind of programming using bootstrap alone and w/o somersd? Also, does it matter that Spearman is in terms of RANKS, so if I happen to draw the same sample twice from say the 6000 observations I have, then there would be a tie in the bootstrap drawn sample (6000 obs from original 6000 but with replacement)? Would this tie be a problem? > Many thanks for your help! > Andrew * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
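For readers without Stata, the resampling scheme the thread settles on can be sketched in plain Python. The data and names below are made up (not the posters' firm dataset); only the structure follows the thread: resample firms with replacement, keep all four scores of a firm together (as Newson advises), recompute both Spearman coefficients, and take percentiles of the difference. Average ranks also address Andrew's tie question — repeated draws simply become ties, which rank-averaging handles:

```python
import random

def ranks(xs):
    # Average ranks (1-based), so ties created by resampling are well-defined.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def pearson(a, b):
    # Assumes non-degenerate inputs (some variance in each vector).
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks.
    return pearson(ranks(x), ranks(y))

def bootstrap_diff(x1, y1, x2, y2, reps=1000, seed=12345):
    # Percentile 95% CI for spearman(x1, y1) - spearman(x2, y2),
    # resampling whole rows (firms) with replacement.
    rng = random.Random(seed)
    n = len(x1)
    diffs = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        d = (spearman([x1[i] for i in idx], [y1[i] for i in idx])
             - spearman([x2[i] for i in idx], [y2[i] for i in idx]))
        diffs.append(d)
    diffs.sort()
    return diffs[int(0.025 * reps)], diffs[int(0.975 * reps)]
```

If the resulting interval excludes zero, the difference between the two rank correlations is significant at roughly the 5% level, which is what the thread is after.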
{"url":"http://www.stata.com/statalist/archive/2010-04/msg01704.html","timestamp":"2014-04-17T00:53:42Z","content_type":null,"content_length":"12378","record_id":"<urn:uuid:cd606977-205a-41f2-a679-8a549c5f4d4e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Centre of a Group and Conjugacy Classes November 11th 2011, 04:46 PM #1 Centre of a Group and Conjugacy Classes Dummit and Foote Section 4.3 Groups Acting on Themselves by Conjugation - The Class Equation - Exercise 5 reads: ================================================== ===== If the centre of G is of index n, prove that every conjugacy class has at most n elements ================================================== ===== I am having trouble getting started on this problem. Can anyone help? Re: Centre of a Group and Conjugacy Classes do you know the class equation? EDIT: maybe that's not so helpful, here. ok, so we have n cosets of Z(G). each one of these corresponds to an inner automorphism of G, that is, we have n distinct possible ways (at most) to conjugate any element of G. Re: Centre of a Group and Conjugacy Classes I have just read about the Class Equation this morning. I am assuming that you are indicating that the proof relies on the Class Equation (thanks for the guidance!) I will re-read Dummit and Foote on the Class Equation and apply to the caser when Z(G) has index n Re: Centre of a Group and Conjugacy Classes I think I can see how conjugation is (or leads to) an automorphism of a group - but having difficulty seeing how a coset of Z(G) corresponds to an inner automorphism Must go an read next section of Dummit and Foote - ie section 4.4 Automorphisms By the way, are you aware of a way to prove this without recourse to an argument re automorphisms? Re: Centre of a Group and Conjugacy Classes ok, let's just write Z, for Z(G), just for notational purposes. suppose that for all g in G, xgx^-1 = ygy^-1. then y^-1xg = gy^-1x, that is, y^-1x is in Z, so Zx = Zy. on the other hand, if Zy = Zx, then y = zx for some z in Z, so ygy^-1 = (zx)g(zx)^-1 = z(xgx^-1)z^-1. but z commutes with all of G, so it commutes with xgx^-1, so ygy^-1 = z(xgx^-1)z^-1 = (xgx^-1)zz^-1 = xgx^-1. that is, all elements of Zx give rise to the same conjugate of g. 
since [G:Z] = n, we can have at most n conjugates of g, one for each coset Zx (it might be that we have considerably fewer, as there is nothing to stop ygy^-1 equalling xgx^-1 even when Zx is not Zy). Re: Centre of a Group and Conjugacy Classes Thanks for that help Now working through this. Re: Centre of a Group and Conjugacy Classes you'll understand Deveno's argument better if you use maps: let $a \in G$ and suppose that $A$ is the conjugacy class of $a$. let $Z$ be the center of $G$. define the map $\phi : G/Z \longrightarrow A$ by $\phi(gZ)=gag^{-1}.$ this map is well-defined because if $g_1Z=g_2Z$, then $g_1^{-1}g_2 \in Z$ and so $g_1^{-1}g_2a=ag_1^{-1}g_2,$ which gives you $g_1ag_1^{-1}=g_2ag_2^{-1}.$ now, obviously $\phi$ is onto and thus $n=|G/Z| \geq |A|$. November 11th 2011, 04:47 PM #2 MHF Contributor Mar 2011 November 11th 2011, 04:56 PM #3 November 11th 2011, 05:05 PM #4 November 11th 2011, 05:16 PM #5 MHF Contributor Mar 2011 November 11th 2011, 05:19 PM #6 November 11th 2011, 06:03 PM #7 MHF Contributor May 2008
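A quick computational sanity check of the statement (an illustration, not the requested proof), using the dihedral group of order 8 written as permutations of the square's vertices:

```python
# In the dihedral group of order 8 the centre {e, r^2} has index 4,
# and every conjugacy class should have at most 4 elements.

def compose(p, q):
    # Permutations as tuples: (p*q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def generate(gens):
    # Close a set of permutations under products with the generators.
    e = tuple(range(len(gens[0])))
    G, frontier = {e}, [e]
    while frontier:
        g = frontier.pop()
        for h in gens:
            for x in (compose(g, h), compose(h, g)):
                if x not in G:
                    G.add(x)
                    frontier.append(x)
    return G

r = (1, 2, 3, 0)          # rotate the square's vertices
s = (0, 3, 2, 1)          # reflect across a diagonal
G = generate([r, s])      # dihedral group, |G| = 8
Z = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
n = len(G) // len(Z)      # index of the centre
classes = {frozenset(compose(compose(g, a), inverse(g)) for g in G) for a in G}

print(len(G), len(Z), n, sorted(len(c) for c in classes))
# -> 8 2 4 [1, 1, 2, 2, 2]
```

Every class size is at most n = 4, matching the exercise, and NonCommAlg's map gZ ↦ gag⁻¹ is exactly what the `classes` line computes.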
{"url":"http://mathhelpforum.com/advanced-algebra/191683-centre-group-conjugacy-classes.html","timestamp":"2014-04-16T07:34:25Z","content_type":null,"content_length":"50961","record_id":"<urn:uuid:2fa77c8a-cf2e-459c-a180-9ed64b6a8bde>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplifying E^x equation March 25th 2010, 08:33 AM #1 Mar 2010 Simplifying E^x equation Can someone show me how to simplify an exponential function E^x when there is a variable in the exponent? I would like to get X out of the exponent and solve for it. $y =(4X+3)\frac{1}{1+e^{3X-2}}$ or more complicated $y =(4X+3)\frac{1}{1+e^{3X^3+2x^2+4X-2}}$ I was able to get some help on finding the derivative in the Calculus section but have been unable to figure out how to simplify this equation. Also, how do I calculate the square of an e^x equation: $y = {(1+e^{3X-2})}^2$ Hello atljogger Can someone show me how to simplify an exponential function E^x when there is a variable in the exponent? I would like to get X out of the exponent and solve for it. $y =(4X+3)\frac{1}{1+e^{3X-2}}$ or more complicated $y =(4X+3)\frac{1}{1+e^{3X^3+2x^2+4X-2}}$ I was able to get some help on finding the derivative in the Calculus section but have been unable to figure out how to simplify this equation. Also, how do I calculate the square of an e^x equation: $y = {(1+e^{3X-2})}^2$ The bad news is that you'll only be able to solve equations like these by numerical methods - there won't be an analytical solution. So you'll have to settle for approximate answers. The good news is that squaring the final expression you've written down is pretty easy. Just expand in the usual way, using $(a+b)^2 = a^2+2ab+b^2$. So: ${(1+e^{3X-2})}^2 = 1^2 + 2 \cdot 1 \cdot e^{3X-2}+(e^{3X-2})^2$ $= 1 + 2e^{3X-2}+e^{6X-4}$ Thanks Grandad. Is there a way to solve for X since it's in both the formula and the exponent? For example, I took a derivative and want to solve for X by setting these 2 equations equal: $1+e^{3X-2}=(4X+3)\cdot(3e^{3X-2})$ You would have to use the "Lambert W function" which is defined as the inverse function to $f(x)= xe^x$. Thanks.
I reduced a variation of this equation to this: $\ln(x) + .75x = 9.5$ Can the Lambert Function be used to solve for X - the two terms with X are being added, not multiplied? Sure. Rewrite that as $\ln(x)= 9.5- .75x$ and take the exponential of both sides: $x= e^{9.5- .75x}= e^{9.5}e^{-.75x}$. Multiply both sides by $e^{.75x}$ to get $xe^{.75x}= e^{9.5}$ which is the same as $.75xe^{.75x}= .75e^{9.5}$. Let y= .75x and your equation becomes $ye^y= .75e^{9.5}$. Now you can use the Lambert W function- $y= W(.75e^{9.5})$ Then, of course, $x= \frac{y}{.75}= \frac{W(.75e^{9.5})}{.75}$. March 25th 2010, 08:57 AM #2 March 25th 2010, 10:55 AM #3 Mar 2010 March 26th 2010, 04:19 AM #4 MHF Contributor Apr 2005 March 26th 2010, 10:57 AM #5 Mar 2010 March 26th 2010, 12:59 PM #6 MHF Contributor Apr 2005
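A numerical cross-check of this Lambert-W derivation, with a plain bisection standing in for W (a library routine such as scipy.special.lambertw would do the same job):

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # Assumes f changes sign on [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Solve y*e^y = 0.75*e^9.5, i.e. y = W(0.75*e^9.5), then x = y/0.75.
c = 0.75 * math.exp(9.5)
y = bisect(lambda t: t * math.exp(t) - c, 0, 20)
x = y / 0.75

print(round(x, 6))  # x is about 9.645 and satisfies ln(x) + 0.75*x = 9.5
```

Plugging the result back into the original equation confirms the chain of rewrites above.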
{"url":"http://mathhelpforum.com/algebra/135606-simplifying-e-x-equation.html","timestamp":"2014-04-18T14:19:39Z","content_type":null,"content_length":"50865","record_id":"<urn:uuid:72f50be0-8eef-413e-935f-622445d2010a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Newest &#39;crystals arithmetic-geometry&#39; Questions First some notations. Let $p$ be a prime, $k$ a perfect field of characteristic $p$, $W=W(k)$ the ring of Witt vectors over $k$, $\sigma : W \rightarrow W$ the Frobenius, $R$ a commutative ... I am trying to learn a little bit about crystalline cohomology (I am interested in applications to ordinariness). Whenever I try to read anything about it, I quickly encounter divided power ...
{"url":"http://mathoverflow.net/questions/tagged/crystals+arithmetic-geometry","timestamp":"2014-04-21T02:58:24Z","content_type":null,"content_length":"34147","record_id":"<urn:uuid:9c1c8922-7053-4648-8095-e19c9b5a6bc9>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Panitia Matematik Measuring Metrically with Maggie Wow, I just flew in from planet Micron. It was a long flight, but well worth it to get to spend time with you! My name is Maggie in your language (but you couldn't pronounce my real name!) When I first arrived I couldn't understand how you measure things, but my friend Tom taught me all about measurement, and I am going to share with you everything he taught me. The first thing Tom told me was that you can measure things using two different systems: Metric and US Standard. Today is my day to learn Metric ! Tom says that if I understand 10, 100, and 1000 then I will have a very easy time learning the metric system. I wish I had ten fingers! Since it was such a long flight, the first thing I could use is something cold to drink. But I want to know how much to ask for! So I can get a drink that is not too big or too small. Tom says I only need to know about: A milliliter (that is "milli" and "liter" put together) is a very small amount of liquid. Here is a milliliter of milk in a teaspoon. It doesn't even fill the teaspoon! Tom says if you collect about 20 drops of water, you will have 1 milliliter: 20 drops of water And that a teaspoon can hold about five milliliters: 1 full teaspoon of liquid Milliliters are often written as ml (for short), so "100 ml" means "100 milliliters". But a milliliter is definitely not enough for someone who is thirsty! So Tom told me about liters. A liter is just a bunch of milliliters put all together. In fact, 1000 milliliters makes up 1 liter. 1 liter = 1,000 milliliters This jug has exactly 1 liter of water in it. Liters are often written as L (for short), so "3 L" means "3 Liters". Milk, soda and other drinks are often sold in liters. Tom says to look on the labels, so the next time you are at the store take a minute and check out how many liters (or milliliters) are in each container! 
Now I know that a milliliter is very small, and a liter is like a jug in size, I think I will ask for half a liter of juice! So this is all you need to know: 1 Liter = 1,000 Milliliters Mass (Weight) Next I wanted to eat some chocolate ... so I should learn about mass. You often call it "weight", but it is only because of the gravity on your planet that items have weight! Tom tells me that to understand mass, I should know these three terms: Grams are the smallest, Tonnes are the biggest. Let’s take a few minutes and explore how heavy each of these are. A paperclip weighs about 1 gram. Hold one small paperclip in your hand. Does that weigh a lot? No! A gram is very light. That is why you often see things measured in hundreds of grams. Grams are often written as g (for short), so "300 g" means "300 grams". Tom tells me a loaf of bread weighs about 700 g Once you have 1,000 grams, you have 1 1 kilogram = 1,000 grams A dictionary has a mass of about one kilogram. Kilograms are great for measuring things that can be lifted by people (sometimes very strong people are needed of course!). Kilograms are often written as kg (that is a "k" for "kilo" and a "g" for "gram), so "10 kg" means "10 kilograms". When you weigh yourself on a scale, you would use kilograms. Tom weighs about 40 kg. How much do you weigh? But when it comes to things that are very heavy, we need to use the tonne. Once you have 1000 kilograms, you will have 1 tonne. 1 tonne = 1,000 kilograms Tonnes (also called Metric Tons) are used to measure things that are very heavy. Things like cars, trucks and large cargo boxes are weighed using the tonne. This car has a mass of about 2 tonnes. Tonnes are often written as t (for short), so "5 t" means "5 tonnes". Final thoughts about masst: 1 kilogram = 1,000 grams 1 tonne = 1,000 kilograms Measuring how long things are, how tall they are, or how far apart they might be are all examples of length measurements. 
Tom says I should know about:

• Millimeters
• Centimeters
• Meters
• Kilometers

The smallest units of length are called millimeters. A millimeter is about the thickness of a plastic ID card (or credit card). Or about the thickness of 10 sheets of paper on top of each other. This is a very small measurement!

When you have something that is 10 millimeters, it can be said that it is 1 centimeter.

1 centimeter = 10 millimeters

A fingernail is about one centimeter wide. You might use centimeters to measure how tall you are, or how wide a table is, but you would not use them to measure the length of a football field. In order to do that, you would switch to meters. A meter is equal to 100 centimeters.

1 meter = 100 centimeters

The length of this guitar is about 1 meter. Meters might be used to measure the length of a house, or the size of a playground.

When you need to get from one place to another, you will need to measure that distance using kilometers. A kilometer is equal to 1,000 meters. The distance from one city to another or how far a plane travels would be measured using kilometers.

Final thoughts about measuring length:

1 centimeter = 10 millimeters
1 meter = 100 centimeters
1 kilometer = 1,000 meters

Temperature

I was feeling a bit hot, so I asked Tom how to measure temperature. So he showed me a thermometer. But I saw 2 sets of numbers! Tom explained that a thermometer measures in degrees (°) of either Celsius or Fahrenheit. "Why two scales?", I asked. Tom said that some people like one scale and some like the other, and that I should learn both! He then gave me an example: when water freezes the thermometer shows:

• 0 degrees Celsius on the left side,
• but on the right side it shows 32 degrees Fahrenheit.

So there can be two numbers for the same thing! He gave me more examples.

• A hot sunny day might have a temperature of 30 degrees Celsius but would be 86 degrees in Fahrenheit.
• Water boils at 100 degrees Celsius or 212 degrees Fahrenheit.
• And you can bake cookies in your oven at a temperature of 180 degrees Celsius, but that would be 356 degrees Fahrenheit.

I decided to get my own thermometer, so I would learn about all this. I hope you enjoyed learning all about metric measurement. Now I must return home. Keep measuring until I see you again!!!!!!!!!

Decimals

A Decimal Number (based on the number 10) contains a Decimal Point.

Place Value

To understand decimal numbers you must first know about Place Value. When we write numbers, the position (or "place") of each digit is important. In the number 327:

• the "7" is in the Units position, meaning just 7 (or 7 "1"s),
• the "2" is in the Tens position, meaning 2 tens (or twenty),
• and the "3" is in the Hundreds position, meaning 3 hundreds.

"Three Hundred Twenty Seven"

As we move left, each position is 10 times bigger! From Units, to Tens, to Hundreds ... and ... As we move right, each position is 10 times smaller. From Hundreds, to Tens, to Units. But what if we continue past Units? What is 10 times smaller than Units? Tenths (1/10ths) are! But we must first write a decimal point, so we know exactly where the Units position is:

"three hundred twenty seven and four tenths"

And that is a Decimal Number!

Decimal Point

The decimal point is the most important part of a Decimal Number. It is exactly to the right of the Units position. Without it, we would be lost ... and not know what each position meant. Now we can continue with smaller and smaller values, from tenths, to hundredths, and so on, like in this example:

Large and Small

So, our Decimal System lets us write numbers as large or as small as we want, using the decimal point. Numbers can be placed to the left or right of a decimal point, to indicate values greater than one or less than one. The number to the left of the decimal point is a whole number (17 for example). As we move further left, every number place gets 10 times bigger. The first digit on the right means tenths (1/10).
As we move further right, every number place gets 10 times smaller (one tenth as big).

Definition of Decimal

The word "Decimal" comes from the Latin decima (a tenth part). We sometimes say "decimal" when we mean anything to do with our numbering system, but a "Decimal Number" usually means there is a Decimal Point.

Ways to think about Decimal Numbers ...

... as a Whole Number Plus Tenths, Hundredths, etc

You could think of a decimal number as a whole number plus tenths, hundredths, etc:

Example 1: What is 2.3 ?
• On the left side is "2", that is the whole number part.
• The 3 is in the "tenths" position, meaning "3 tenths", or 3/10
• So, 2.3 is "2 and 3 tenths"

Example 2: What is 13.76 ?
• On the left side is "13", that is the whole number part.
• There are two digits on the right side, the 7 is in the "tenths" position, and the 6 is in the "hundredths" position
• So, 13.76 is "13 and 7 tenths and 6 hundredths"

... as a Decimal Fraction

Or, you could think of a decimal number as a Decimal Fraction. A Decimal Fraction is a fraction where the denominator (the bottom number) is a number such as 10, 100, 1000, etc (in other words a power of ten). So "2.3" would look like this: 23/10. And "13.76" would look like this: 1376/100.

... as a Whole Number and Decimal Fraction

Or, you could think of a decimal number as a Whole Number plus a Decimal Fraction. So "2.3" would look like this: 2 and 3/10. And "13.76" would look like this: 13 and 76/100.

Those are all good ways to think of decimal numbers.

Polyhedra

A polyhedron is a solid with flat faces (from Greek poly- meaning "many" and -edron meaning "face"). Each flat surface (or "face") is a polygon. So, to be a polyhedron there should be no curved surfaces.
Examples of Polyhedra: Triangular Prism, Cube, Dodecahedron

Common Polyhedra: Platonic Solids

Counting Faces, Vertices and Edges

If you count the number of faces (the flat surfaces), vertices (corner points), and edges of a polyhedron, you can discover an interesting thing: the number of faces plus the number of vertices minus the number of edges equals 2. This can be written neatly as a little equation:

F + V - E = 2

It is known as the "Polyhedral Formula", and is very useful to make sure you have counted correctly! Let's try some examples:

This cube has:
• 6 Faces
• 8 Vertices (corner points)
• 12 Edges
F + V - E = 6 + 8 - 12 = 2

This prism has:
• 5 Faces
• 6 Vertices (corner points)
• 9 Edges
F + V - E = 5 + 6 - 9 = 2

Volume of a Cuboid

A cuboid is a 3 dimensional shape. Therefore to work out the volume we need to know 3 measurements. Look at this shape. There are 3 different measurements: Height, Width, Length. The volume is found using the formula:

Volume = Height × Width × Length

Which is usually shortened to: V = h × w × l. Or more simply: V = hwl

In Any Order

It doesn't really matter which one is length, width or height, so long as you multiply all three together.

Example: What is the volume?
The volume is: 4 × 5 × 10 = 200 units^3
It also works out the same like this: 10 × 5 × 4 = 200 units^3

--> The link above is really fun. It adds new knowledge and learning.

How to Learn

Your life will be a lot easier when you can simply remember the multiplication tables. So ... train your memory! First, use the table above to start putting the answers into your memory. Then use the Math Trainer - Multiplication to train your memory, it is specially designed to help you memorize the tables. Use it a few times a day for about 5 minutes each, and you will learn your tables. Try it now, and then come back and read some more ... So, the two main ways for you to learn the multiplication table are:

1.) Reading over the table
2.)
Exercising using the Math Trainer

But here are some special "tips" to help you even more:

Tip 1: Order Does Not Matter

When you multiply two numbers, it does not matter which is first or second, the answer is always the same.

Example: 3×5=15, and 5×3=15
Another Example: 2×9=18, and 9×2=18

In fact, it is like half of the table is a mirror image of the other! So, don't memorise both "3×5" and "5×3", just memorise that "a 3 and a 5 make 15" when multiplied. This is very important! It nearly cuts the whole job in half. In your mind you should think of 3 and 5 "together" making 15. So you should be thinking something like this:

Tip 2: Learn the Tables in "Chunks"

It is too hard to put the whole table into your memory at once. So, learn it in "chunks" ...

A --> Start by learning the 5 times table.
B --> Then learn up to 9 times 5.
C --> Is the same as B, except the questions are the other way around. Learn it too.
D --> Lastly learn the "6×6 to 9×9" chunk

Then bring it all together by practicing the whole "10 Times Table", and you have learnt your 10 Times Table! (We look at the 12x table below)

Some Patterns

There are some patterns which can help you remember:

2× is just doubling the number. The same as adding the number to itself. 2×2=4, 2×3=6, 2×4=8, etc. So the pattern is 2, 4, 6, 8, 10, 12, 14, 16, 18, 20. (And once you remember those, you also know 3×2, 4×2, 5×2, etc., right?)

5× has a pattern: 5, 10, 15, 20, etc. It always ends in either a 0 or a 5.

10× is maybe the easiest of them all ... just put a zero after it: 10×2=20, 10×3=30, 10×4=40, etc.

9× has a pattern, too: 9, 18, 27, 36, 45, 54, 63, 72, 81, 90. Now, notice how the "units" place goes down: 9, 8, 7, 6, ...? And at the same time, the "tens" place goes up: 1, 2, 3, ...? You can use this pattern to prompt your memory this way: the tens place will be 1 less than what you are multiplying by!

Example: 9×7 ...
go 1 less than 7, so the tens place is 6, and then remember 63.

The extraordinary number 8

Everything in human life, whether good or bad, is determined by one's own effort. Numbers have nothing to do with determining luck, or otherwise. But things were rather different for President Barrios of Guatemala: the number 8 became a tragic coincidence for him. He was assassinated at 8.00 pm on 8 February 1898, at address No. 8, on 8th Street. Quite an extraordinary coincidence!
Scientific and numeric research software environment

What is PsiLAB

PsiLAB has been developed for scientific research and data analysis. It is freely distributed in source code format under the Gnu Public License, version 2. PsiLAB is written mainly in the functional language O'CaML, developed at the INRIA research laboratories. It is mainly made of three parts:

1. an interpreter (of course, O'CaML itself),
2. libraries written in O'CaML,
3. external libraries written in Fortran and C.

Main features of PsiLAB are:

• All O'CaML functions and data types are supported
• support for different data types: float, int, complex
• extensive matrix package
• 2D and 3D plot package with graphical or postscript output
• various generic and special mathematical functions
• linear algebra package (solving of linear equation systems and linear least square problems)
• Linear Regression
• non linear least square fit routines
• Fast Fourier Transformations
• some image processing functions
• online help system, easily extensible by user functions
• easy to extend for people knowing basics about the O'CaML C extension facilities

PsiLAB uses the following external libraries, mainly written in Fortran:

• LAPACK: Linear algebra and linear least square problems
• MINPACK: Non linear least square fits
• PLPLOT: 2D and 3D plot library with several output drivers (X11, PS, Xfig, ...)
• FFTW: Fastest Fourier Transform in the West (and the East?)
• AMOS: Several special functions: Bessel functions and more ...
• SLATEC (partially implemented): More special functions (Gamma function, ...)
• CamlImages (partially implemented): Support for various image formats

PsiLAB is not only written in O'CaML, it is CaML. That means: if you are familiar with this programming language, you can write PsiLAB programs.
And you can do all the things with PsiLAB that you can do with the generic O'CaML development system:

• using modules for access to data base servers
• creating new development environments
• writing lexers and parsers (perhaps with mathematical background)
• more sophisticated image processing
• http servers (with direct access to your computation results?)
• and many more ...

The CaML interpreter system, which is in reality a pure compiler concept, was chosen because of the high computation speed of this system and its high portability. You have the advantages of an interpreter-like language (from the user's point of view), but with performance comparable to C/C++ programs. All functions will be translated by the CaML compiler into a system- and machine-independent Byte Code. This Byte Code will then be executed on a virtual machine. Currently, you have a terminal-driven environment with online help. Plots are printed to an additional X11 window or to a postscript file.

Source code

There are currently two versions available in source code form:

• the old version 1.0: psilab-1.0-**
• the new version 2.0: psilab-2.0-**

both available from the download directory, here.

There are currently two documents available, both available from the download directory, here.

Other WWW resources

Development - About the author - Contact

PsiLAB is currently developed and maintained only by BSSLAB. You can find more information about the author here. You can find more information about scientific software based on OCaML and made by BSSLAB at this location.
[Numpy-discussion] BOF notes: Fernando's proposal: NumPy ndarray with named axes

Neil Crighton neilcrighton@gmail...
Sun Jul 11 13:09:50 CDT 2010

Robert Kern <robert.kern <at> gmail.com> writes:

> On Sun, Jul 11, 2010 at 11:36, Rob Speer <rspeer <at> mit.edu> wrote:
> >> But the utility of named indices is not so clear
> >> to me. As I understand it, these new arrays will still only be
> >> able to have a single type of data (one of float, str, int and so
> >> on). This seems to be pretty limiting.
> Having ticks on *every* axis is the primary feature there.

I see, thanks. So for Rob's example slide you could use a record array:

rec = np.rec.fromrecords(data, names='name,305,6,234')

(Here data is a list of tuples, each tuple giving the movie name + its data.) In this case it's easy to index by field name (rec['305']), but it's a bit trickier to choose the row using the movie name:

ind = dict((n,i) for i,n in enumerate(rec.name))
rec[ind['Wrong Trousers, The (1993)']]

So datarrays would make this easier.

More information about the NumPy-Discussion mailing list
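A minimal runnable version of the snippet above (the movie titles and rating values are made up purely for illustration; the numeric column names mirror the post):

```python
import numpy as np

# Hypothetical ratings data: (movie name, rating by user '305',
# rating by user '6', rating by user '234').
data = [("Toy Story (1995)", 5, 4, 3),
        ("Wrong Trousers, The (1993)", 4, 5, 5)]
rec = np.rec.fromrecords(data, names="name,305,6,234")

# Columns are easy to select by field name:
user_305 = rec["305"]

# Rows need a manual name -> index lookup, as the post notes:
ind = dict((n, i) for i, n in enumerate(rec.name))
wallace = rec[ind["Wrong Trousers, The (1993)"]]
```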
Inglewood Algebra 2 Tutor

Hi I'm Ross. I recently graduated from Duke University, majoring in Neuroscience and minoring in Economics and Psychology. I was on the Dean's List for my grades and I am looking forward to starting vet school this fall at one of the top schools in the country.
27 Subjects: including algebra 2, chemistry, Spanish, physics

...The curriculum lives on within NAI's mathematics department to this day. My tutoring approach is simple - create successful study habits that enhance a student's ability to be a critical thinker and eliminate procrastination. By staying true to this goal, it reinforces the importance of genuinel...
12 Subjects: including algebra 2, reading, writing, geometry

...I am really dedicated and patient, as I myself was really bad at mathematics. Soccer is a sport played on a 120-by-100-meter field with two 7.32-meter-wide goals. It opposes two teams of 11 players each, ten on the field and one in goal. There are 3 referees in charge of ruling the game. I play soccer at CSUN and also in the super metro League in Los Angeles with the L.A.
6 Subjects: including algebra 2, French, geometry, algebra 1

Hello, my name is David Angeles and I am currently attending California State University, Northridge to pursue a major in Applied Mathematics. I want to be a math professor one day and help out many students the way my teachers have helped me throughout the years. I have been tutoring for this website for almost one year and have had the pleasure of meeting all types of people.
10 Subjects: including algebra 2, calculus, geometry, algebra 1

...I came from Indonesia where I worked for more than 5 years as a science teacher at the high school level. In Los Angeles I have worked for two years in a special education school, where I have learned how to teach students with an IEP (Individualized Education Program). In this school my su...
8 Subjects: including algebra 2, calculus, geometry, algebra 1
Lesson: Circumference and Area

Introducing the Concept

Your students know how to find the area of common figures like rectangles, parallelograms, and triangles. Now they will extend their knowledge to finding the area of a circle. Spend some time helping them understand the number pi as the ratio of the circumference of a circle to its diameter. This will help them feel more comfortable with the formula for the circumference of a circle. It will also help them relate pi to the area of a circle.

Materials: 5 circular objects of different sizes, such as jar lids, for every two students; string, rulers, and blank paper for all students

Preparation: Distribute a set of jar lids to student pairs. Also distribute string, rulers, and blank paper to each student. Have students create a table on their paper similar to the one described and illustrated below.

Prerequisite Skills: Students should be able to use a ruler to measure distances.

Draw a picture of a circle on the board or overhead projector and review the definitions for circle, diameter, and radius. Introduce the concept of circumference as the perimeter of a circle.
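For teachers who want a quick numerical companion to the activity, here is a small sketch (the lid diameter is a made-up example value, not from the lesson):

```python
import math

# A "measured" lid of diameter 8.5 cm. In the classroom activity the
# circumference comes from wrapping string around the lid; here we
# compute it so the ratio can be checked exactly.
diameter = 8.5
circumference = math.pi * diameter

# The ratio of circumference to diameter is pi, for any circle:
ratio = circumference / diameter

# And the area of the same circle, using A = pi * r^2:
area = math.pi * (diameter / 2) ** 2
```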
Recent Contributions to Cryptographic Hash Functions

Designing Hash Functions

The standard approach to building a hash function is first to construct a compression function that operates on input strings of a fixed length, and then to use the cascade construction to extend the compression function to strings of arbitrary length [5, 6]. Compression functions are usually built out of block ciphers. Recall that a block cipher is a pair of functions, D and E, for decrypting and encrypting, respectively, that operate on strings of a particular length, called the "block size". If the block size is n bits, then the encryption of an n-bit string s is E(s), and its decryption is D(s). Every string can be encrypted or decrypted, and E(D(s)) = D(E(s)) = s, meaning E (and D) is a permutation of the set of all n-bit strings. Cryptographers say a block cipher is secure if both s → E(s) and s → D(s) are indistinguishable from a randomly selected permutation. To meet the indistinguishability property, block ciphers are keyed, so a block cipher represents a family of permutations. A particular block cipher instance is selected by choosing a key K. The resulting encryption and decryption instances are denoted E[K] and D[K]. That is, for each choice of a key K, s → E[K](s) (and s → D[K](s)) behaves like a different randomly selected permutation.

Compression Functions

The compression functions for all the hash functions commonly used today are built in the following way:

1. Select a block cipher scheme (E, D).
2. Define a compression function c(iv, s) = E_s(iv) ⊕ iv.

Here s denotes a message of exactly n bits, and iv denotes an initialization vector. This recipe for c says to use s as the encryption key and iv as the data to be encrypted, and then to XOR iv with the encrypted result E_s(iv). The mapping (iv, s) → E_s(iv) ⊕ iv is called the Davies-Meyer construction [7] for E.
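The Davies-Meyer recipe can be sketched in a few lines. The "block cipher" below is a toy invertible keyed permutation made up purely for illustration; a real compression function would use a secure cipher.

```python
MASK = (1 << 64) - 1  # work on 64-bit blocks

def toy_encrypt(key: int, block: int) -> int:
    # NOT a secure cipher -- just an invertible keyed mixing function
    # standing in for E so the construction can be demonstrated.
    x = (block ^ key) & MASK
    for r in range(4):
        x = ((x << 13) | (x >> 51)) & MASK   # rotate left by 13
        x = (x + key + r) & MASK
    return x

def davies_meyer(iv: int, s: int) -> int:
    # c(iv, s) = E_s(iv) XOR iv: the message block s is used as the
    # *key*, and the chaining value iv is the data being encrypted.
    return toy_encrypt(s, iv) ^ iv
```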
It is easy to show that a block cipher used in Davies-Meyer mode is collision-resistant, pre-image resistant, and 2nd pre-image resistant. c is called a compression function because it compresses <iv, s> into a new string iv' of exactly s's length. Other compression function constructions also exist: both Vortex and Skein use the Matyas-Meyer-Oseas [8] construction, c(iv, s) = E_iv(s) ⊕ s, which is identical to Davies-Meyer, except it reverses the roles of iv and s.

The Cascade Construction

The cascade construction builds a hash function h from a compression function c with block size n as follows. Every hash function based on a block cipher must define a padding scheme, because compression functions only operate on strings s of length exactly n bits. Most padding schemes pad s with a single 1 bit followed by as many 0 bits as are necessary to bring the length to a multiple of n. The length of the unpadded string s is then encoded as an n-bit integer and appended to defend against extension attacks. Once padded, partition s into b = |s|/n blocks, each consisting of n bits (|s| denotes s's length in bits): s[1] s[2] … s[b] ← s. Finally, beginning with a hash-function-specific initialization vector iv[0], serially compute iv[i] = c(iv[i-1], s[i]) for each block s[i]; the final value iv[b] is the digest. The cascade construction extends the collision-resistance, pre-image-resistance, and 2nd-pre-image-resistance properties from a compression function to a function operating on strings of arbitrary length [9]. The cascade construction is sometimes called the Merkle-Damgard construction after its inventors. It is intrinsically serial: each compression step depends on the previous chaining value, which is what lets it detect problems such as two blocks being exchanged.

Hashing Today

We just summarized the state of the art during the early part of this decade, prior to two significant publications. The first was by Antoine Joux, who introduced the multi-collision attack.
The second was by Xiaoyun Wang, in which she described an attack, based on differential cryptanalysis, against all of the hash algorithms broadly used today.

Suppose a hash function is built out of a compression function by using the cascade construction. Also suppose that someone has broken the collision resistance of the hash function; that is, they have discovered two distinct strings s ≠ s' so that h(s) = h(s'). Joux observed that it is easy to find many more collisions for little additional cost [1]. The source of the problem is that the cascade construction maintains too little state as it progresses from one invocation of the compression function to the next. Joux's result says that by itself the cascade construction is too weak to serve as an adequate building block for constructing hash functions.

Wang's attack [2], based on differential cryptanalysis, has a different flavor. Differential cryptanalysis is a technique to analyze block ciphers. Essentially, differential cryptanalysis follows a bit slice through the block cipher being analyzed, to characterize how it gets diffused. The goal of differential cryptanalysis is to identify bits leading to unusually high or low levels of diffusion. When such bits are identified, they can be used to recover bits of the encryption key. This can dramatically shrink the size of the key space, making brute force search realistic. As an example, differential cryptanalysis reduced the cost of key recovery attacks against the DES cipher from 2^56 encryptions to about 2^41.

Wang showed that a differential attack could produce collisions in message digests, thereby breaking the collision-resistance of the hash function producing them. Wang first demonstrated her attack against MD4, MD5, RIPE-MD, and SHA-0. This was viewed as a stunning result, but then she demonstrated that a collision can be produced in SHA-1 at a cost of about 2^61 operations.
This caused upheaval in the cryptographic community, raising the question as to whether we even understand what a hash function is. The cryptographic community has vigorously debated hash design principles in the intervening years. The only clear consensus emerging from this debate is that we need a worldwide, focused project whose goal is to create a new generation of hash functions that defend against the new attacks. A lesson previously learned by the community is that contests have great efficacy in galvanizing technical consensus building. In 2007, NIST initiated an international competition to create a new hash standard [10]. Candidate submissions were due on October 31, 2008. 55 algorithms were entered. In February of this year, NIST whittled down the list of candidates to 40, and from this it plans to select 10 to 15 first-round candidates by August 2009. NIST plans to select a set of finalist algorithms in 2010, then to announce the winner(s) in 2011. NIST is widely influential in the creation of cryptographic standards worldwide, so it is a good sponsor for the competition. One of NIST's most important contributions to cryptography standards has been the creation of requirements for algorithms submitted to the competition. The competition requires that candidate algorithms provide the collision-resistance, pre-image-resistance, and 2nd-pre-image-resistance properties -- and be free of any known intellectual property. Algorithms must support output block sizes of 128, 160, 224, 256, 384, and 512 bits. The rules encourage support for features outside the core properties, especially for parallelization. Submissions must be accompanied by a security rationale, to help establish confidence in the algorithms.
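To make the cascade construction described earlier concrete, here is a sketch over 8-byte blocks. The compression function is built from SHA-256 purely as a convenient stand-in keyed mixing step; this illustrates the *structure* of the cascade (padding, then serial chaining), not a secure hash design of its own.

```python
import hashlib
import struct

BLOCK = 8  # bytes per message block in this toy example

def compress(iv: bytes, block: bytes) -> bytes:
    # Stand-in compression function: (iv, block) -> new 8-byte iv.
    return hashlib.sha256(iv + block).digest()[:BLOCK]

def pad(msg: bytes) -> bytes:
    # Append a single 1 bit, then 0 bits up to a block boundary, then
    # the original bit length as a 64-bit integer (one final block).
    bit_len = 8 * len(msg)
    msg += b"\x80"
    msg += b"\x00" * (-len(msg) % BLOCK)
    return msg + struct.pack(">Q", bit_len)

def cascade_hash(msg: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    # Serially fold the compression function over the padded blocks;
    # the final chaining value is the digest.
    padded = pad(msg)
    for i in range(0, len(padded), BLOCK):
        iv = compress(iv, padded[i:i + BLOCK])
    return iv
```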
Integral closure

y = p + q√2 is a general element of ℚ(√2), where p and q are rational. First you will need to show that if y is a solution to a polynomial over ℤ, then it is a solution to a polynomial over ℤ of degree 2. Then find necessary and sufficient conditions on y such that y is the solution of such a polynomial (use the fact that p and q are rational).

This works for quadratic and some cubic extensions. For bigger extensions, this gets really nasty quite quickly, so you're better off using some more theory (differents, discriminants, etc.). In fact it's quite easy to see by simple calculations like this what the ring of integers in ℚ(√d) is, but not so simple to find the integral closure in ℚ(17^(1/3)) (the example I gave above).
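As a sketch of the degree-2 step suggested above (my own working, not part of the original thread): for $y = p + q\sqrt{2}$ with $q \neq 0$,

```latex
(y - p)^2 = 2q^2
\quad\Longrightarrow\quad
y^2 - 2py + (p^2 - 2q^2) = 0,
```

so $y$ is integral over $\mathbb{Z}$ exactly when both coefficients $2p$ and $p^2 - 2q^2$ are integers; working out which rational $p, q$ satisfy this recovers the ring of integers $\mathbb{Z}[\sqrt{2}]$.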
Proving lines are perpendicular

October 13th 2010, 08:17 PM #1
Please tell me how to solve this problem. I would really appreciate it.
P1: 2y - x = 2
P2: y + 2x = 4
How would you prove that they are perpendicular?

October 13th 2010, 08:31 PM #2
Hint: When you multiply the gradients of 2 perpendicular lines, it comes to -1.
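A quick numeric check of the hint (my own working, not part of the original thread): rewrite each line in slope-intercept form and multiply the gradients.

```python
# P1: 2y - x = 2  =>  y = x/2 + 1, so the gradient is 1/2
# P2: y + 2x = 4  =>  y = -2x + 4, so the gradient is -2
m1 = 1 / 2
m2 = -2.0
product = m1 * m2  # -1.0, so the lines are perpendicular
```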
Convert cubic feet to cu inches - Conversion of Measurement Units

›› Convert cubic foot to cubic inch

›› More information from the unit converter

How many cubic feet in 1 cu inches? The answer is 0.000578703703704. We assume you are converting between cubic foot and cubic inch. You can view more details on each measurement unit: cubic feet or cu inches. The SI derived unit for volume is the cubic meter. 1 cubic meter is equal to 35.3146665722 cubic feet, or 61023.7438368 cu inches. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between cubic feet and cubic inches.

›› Definition: Cubic foot

The cubic foot (symbols ft³, cu. ft.) is a nonmetric unit of volume, used in U.S. customary units and Imperial units. It is defined as the volume of a cube with edges one foot in length.

›› Definition: Cubic inch

A cubic inch is the volume of a cube which is one inch long on each edge. It is equal to 16.387064 cm³.
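The conversion factor comes straight from 1 foot = 12 inches, so one cubic foot is 12^3 = 1728 cubic inches. A small sketch:

```python
CU_IN_PER_CU_FT = 12 ** 3  # 1 ft = 12 in, so 1 ft^3 = 1728 in^3

def cubic_feet_to_cubic_inches(cu_ft: float) -> float:
    return cu_ft * CU_IN_PER_CU_FT

def cubic_inches_to_cubic_feet(cu_in: float) -> float:
    return cu_in / CU_IN_PER_CU_FT
```

For example, `cubic_inches_to_cubic_feet(1)` gives 1/1728 ≈ 0.000578703703704, matching the figure above.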
sigma function Consider the factors of n to be and the sum of be x. Consider the factors of n+1 to be and the sum of be x-1. There must be exactly one for which another have the relation . Therefore, all the rest factors are common. But, since gcd(n, n+1) =1, therefore, no other common factor other than 1 can exist. Thus, there are only three factors. Now, let us imagine that the factors of the first number are 1, a and n and the second number are 1, a-1, n+1 But, whenever there are just three factors of a number, they are perfect squares. But, two positve perfect squares cannot be consecutive. So no such number exists 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda
Corinth, TX Geometry Tutor

Find a Corinth, TX Geometry Tutor

...He has even more years of experience as an ESL instructor for various companies and universities. He provides tutoring in subjects ranging from elementary math up to calculus and statistics, earth science to chemistry, grammar to essay writing, and also physics and Spanish. He is currently completing the last few classes necessary for a degree in Biochemistry.
37 Subjects: including geometry, Spanish, reading, chemistry

I am an experienced tutor and instructor in undergraduate physics. I tutored at the University of Texas at Dallas, where I was also a Teaching Assistant. I taught courses at Richland College and Collin County Community College.
8 Subjects: including geometry, calculus, physics, algebra 1

...I love seeing that moment when a concept finally clicks in a student's mind! To me, teaching is a very rewarding experience, especially when I can get one-on-one time with the students. I strive to give my students the resources they need to discover things themselves, rather than just showing them the solutions.
19 Subjects: including geometry, reading, English, trigonometry

...I hold a Masters Degree in Education with an emphasis on instruction in math and science for grades 4th through 8th. I have taken courses in pre-algebra, algebra I and II, Matrix Algebra, Trigonometry, pre-calculus, Calculus I and II, Geometry and Analytical Geometry, and Differential Equations. I was a tutor in college for students that needed help in math.
11 Subjects: including geometry, algebra 1, algebra 2, precalculus

...I know a lot about the TI-84 Plus and Geometer's Sketchpad. I tutor during evenings. I can help you pass the Algebra I STAAR exam.
38 Subjects: including geometry, English, reading, calculus
{"url":"http://www.purplemath.com/corinth_tx_geometry_tutors.php","timestamp":"2014-04-16T13:10:31Z","content_type":null,"content_length":"23999","record_id":"<urn:uuid:81d84c57-9ca7-404d-ad1d-e8d5178350cd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Greenbelt Calculus Tutor Find a Greenbelt Calculus Tutor ...This usually gets them focused and we can continue. Having taught many years, I have many work sheets and other extra outside material to share. I have also taught an SAT prep course for 6 21 Subjects: including calculus, statistics, geometry, algebra 1 ...If you’re interested in working with me, feel free to send me an email and inquire about my availability. Currently, I do all of my tutoring at local libraries within 5 miles of Rockville.I took a semester of Discrete Math in College and earned an A in the class. Discrete math accompanies a degree in computer science. 9 Subjects: including calculus, physics, geometry, algebra 1 ...As a camp counselor, I interacted with the children in Spanish and gave English lessons as well. As an undergraduate at Duke University, I took organic chemistry and received a high A in the class. I have an in depth understanding the material and am more than capable of explaining the concepts and mechanisms. 17 Subjects: including calculus, Spanish, writing, physics ...In addition, I have always challenged myself by taking Advanced Placement courses (AP) throughout high school. Many of those include AP Calculus and AP Physics C as I have mentioned as well as AP English 11, AP Chemistry, AP Literature with Composition, and AP Biology. I have learned a lot sinc... 71 Subjects: including calculus, chemistry, English, physics ...I have worked with the Learning Assistance Service at the University of Maryland, learning how to help different kinds of students by catering to a number of needs. I have experience tutoring both high school students and college students for over 3 years. Geometry is a subject that can confuse many but with the right methodology can be very simple. 20 Subjects: including calculus, geometry, statistics, algebra 1
{"url":"http://www.purplemath.com/greenbelt_calculus_tutors.php","timestamp":"2014-04-21T13:06:02Z","content_type":null,"content_length":"24056","record_id":"<urn:uuid:45581b7c-6b42-4655-b77c-fcef01d1f207>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Examples in mirror symmetry that can be understood.

It seems to me that a typical science often has simple and important examples whose formulation can be understood (or at least there are some outcomes that can be understood). So if we consider mirror symmetry as a science, what are some examples there that can be understood? I would like to explain this question a bit. If we consider the article "Meet homological mirror symmetry" http://arxiv.org/abs/0801.2014 it turns out that in order to understand something we need to know a huge amount of material, including $A_{\infty}$ algebras, Floer cohomology, etc. Here, on the contrary, is an example that "can be understood" (for my taste): According to Arnold, the first instance of symplectic geometry was the "last geometric theorem of Poincare". This is the following statement: Let $F: A\to A$ be any area-preserving self-map of a cylinder $A$ that rotates the boundaries of $A$ in opposite directions. Then the map has at least two fixed points. (This was proven by Birkhoff; see http://en.wikipedia.org/wiki/George_David_Birkhoff.) So, I would like to ask if there are some phenomena related to mirror symmetry that can be formulated in simple words.

Added. I would like to thank everyone for the given answers! I decided to give a bit of bounty for this question, to encourage people to share phenomena related to mirror symmetry that can be simply formulated (or at least look exciting). Since there are a lot of people in this area I am sure there must be more examples.

mirror-symmetry math-philosophy sg.symplectic-geometry ag.algebraic-geometry

Another interesting explanation of Mirror symmetry in certain cases related to combinatorics is in terms of typical shapes for certain classes of partitions. You start with a class of partitions related to some variety, consider the typical shape and this gives you the dual variety.
– Gil Kalai Mar 10 '11 at 22:31
Gil, I would love to see such examples that illustrate what you say! – aglearner Mar 10 '11 at 22:58
I heard about this typical-shape-of-partition approach to mirror symmetry in a lecture by Okounkov. I don't know the precise papers (some probably with Pandharipande, and Nekrasov). Maybe one can start by reading Okounkov's paper on the use of random partitions arxiv.org/PS_cache/math-ph/pdf/0309/0309015v1.pdf and then maybe look at Okounkov-Reshetikhin-Vafa arxiv.org/PS_cache/hep-th/pdf/0309/0309208v2.pdf But explicit mention of mirror symmetry there is sparse. – Gil Kalai Mar 11 '11 at 7:22
I think that research in mirror symmetry goes in the opposite direction to what happened with the Poincare-Birkhoff discovery. In that case a simple statement led to a beautiful rich theory. In mirror symmetry a very complicated statement (such as the counting formula for curves on the quintic), which no-one understood at first, led to a theory which is slowly becoming clearer and enriched with simpler examples. – Diego Matessi Mar 11 '11 at 13:31
Section 2.3 of the paper by O-R-W linked above is called "Mirror symmetry and the limit shape". – Gil Kalai Mar 12 '11 at 11:33

8 Answers

Here is my biased view of a simple example: the two-torus. Everything I know about homological mirror symmetry stems from this example. Because the example is one-dimensional, a symplectic form is just an area form, Lagrangians are simply curves, and the holomorphic maps which are part of the Fukaya category are simply topological disks. (By uniformization of Riemann surfaces, there is one holomorphic map for each topological disk satisfying the appropriate boundary conditions.) Even better, you can go to the universal cover, which is $R^2,$ and just draw Lagrangians as straight lines with rational slope. The holomorphic disks which determine compositions in the category are simply triangles.
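The straight-line picture can be made quantitative. On the torus, two straight Lagrangians with primitive integer direction vectors meet in a number of points equal to the absolute value of a 2x2 determinant (a standard fact about curves on $T^2$, added here for illustration, not stated in the answer); the slope-$n$ line meets $\{y=0\}$ in exactly $n$ points, matching the $n$-dimensional space of degree-$n$ theta functions on the mirror elliptic curve. A minimal sketch:

```python
# Intersection number on T^2 of two straight Lagrangians with primitive
# integer direction vectors v and w: |det(v, w)|.
def torus_intersection_number(v, w):
    return abs(v[0] * w[1] - v[1] * w[0])

# {y = 0} has direction (1, 0); {y = n x} has direction (1, n).
# The n intersection points mirror the n-dimensional space of
# holomorphic sections (theta functions) of a degree-n line bundle.
counts = [torus_intersection_number((1, 0), (1, n)) for n in range(1, 6)]
print(counts)  # [1, 2, 3, 4, 5]
```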
On the mirror side, we're talking about a complex two-torus, or elliptic curve. A typical object would be a line bundle on the elliptic curve, such as the theta line bundle, whose sections are theta functions, once we lift them up to the complex plane. The two-torus is circle-fibered over a base circle, and the elliptic curve is circle-fibered by the dual circle (i.e., $U(1)$ local systems on the original circle). This is called T-duality, and it explains how to construct the mirror equivalence going from Lagrangians to line bundles, or vice versa. For example, the Lagrangian $\{y=0\}$ represents a family of trivial $U(1)$ local systems, corresponding to the trivial holomorphic line bundle whose sections are just holomorphic functions. The Lagrangian $\{y=nx\}$ corresponds to a line bundle of degree $n$. After making these definitions, one checks that compositions match up.

Thank you! Nice to hear that the elliptic curve example is so important for HMS. Do you think there is a reasonable math reference for this; could you advise something? – aglearner Mar 14 '11 at 16:58
Well, I can offer you my own article with Polishchuk: arxiv.org/abs/math/9801119 Below, AByer mentions that HMS implies that we get an isomorphism of our category for each loop in moduli space. These isomorphisms ("autoequivalences") alone can lead to the mirror map, as demonstrated by a calculation for this example in arxiv.org/abs/math/0506359. The T-duality aspect can be pushed in many other directions and examples, too. (Sorry for the lazy self-promotion! There are, of course, many other articles by other authors, some of which are included in the answers below.) – Eric Zaslow Mar 14 '11 at 17:31
I decided to accept this answer, since it looks like this answer goes in the direction of
what I wanted. Namely it says that there is a very important example: elliptic curves, and moreover it gives references to articles that one might try to understand (I just had a quick look at them but will try to do this more seriously). On the other hand, Eric, if you want to add anything to your answer (like references, or whatsoever), you are more than welcome. – aglearner Mar 16 '11 at 9:07
The paper with Polishchuk works through the example in detail, as opposed to just the sketch above. You seem to be still somewhat dissatisfied. Why don't you say precisely which aspect of mirror symmetry you are looking to uncover in your example? (Three references for torus fibrations are Arinkin-Polishchuk, Leung-Yau-Z, and Mark Gross's "Topological Mirror Symmetry.") – Eric Zaslow Mar 16 '11 at 12:27
This is a nice answer, and I am happy with it, and will try to study the references that you proposed. I just wanted to hear a bit more... For example, since you are a physicist, I was curious which side of the mirror in this example is "closer" to you -- Fukaya, or derived categories. Or maybe this example is pure math (and does not require any physics intuition)...? But, again, I am happy with the answer (just will need time to read the articles). – aglearner Mar 16 '11 at 22:17

Mirror symmetry gives some remarkable connections between certain varieties. The first step in this connection is that certain homology groups have the same rank. An explicit case of mirror symmetry duals is the case coming from toric varieties. In this case, the dual objects come from duality of polytopes. So duality of polytopes -- associating the octahedron to the cube and the icosahedron to the dodecahedron -- is related to mirror symmetry. Perhaps the very first facts about polytopes which demonstrate unexpected equalities for certain homologies can be described as follows: For 2-dimensional polytopes this is the following numerical fact: A polygon has the same number of edges as its dual.
(Well, this is not so unexpected.) For a 4-dimensional polytope P it is the following numerical fact. Start with a 4-polytope with n vertices and e edges. Triangulate every 2-face by non-crossing diagonals. Let $e^+$ be the number of edges including the added diagonals. Consider the quantity $$\gamma (P) = e^+ - 4n . $$ It is true that for every dual pair of 4-polytopes $P$ and $P^*$, $$\gamma (P^*)=\gamma(P).$$ This is more surprising. For example, let P be the 4-dimensional cross polytope and Q be the 4-dimensional cube. P has 8 vertices and 24 edges, and all the 2-faces are triangles, so $\gamma (P)=24~-~4\cdot 8~=~-8$. The 4-cube Q has 16 vertices and 32 edges, and it also has 24 2-faces which are squares, so $e^+(Q)=56$ and $\gamma (Q)=56-64 = -8$. Voila!

This reflects some properties of toric varieties (unexpected equalities between Hodge numbers) which express (sort of the 0-th step of) mirror symmetry. Related papers: V. Batyrev and L. Borisov, Mirror duality and string-theoretic Hodge numbers; V. Batyrev and B. Nill, Combinatorial aspects of mirror symmetry. Here is a lecture by B. Nill.

Another manifestation of mirror symmetry of a combinatorial nature, which can be formulated in simple words, is in terms of the typical shape of various classes of partitions. I mentioned it in a remark above, and let me quote a description taken from my adventure book. A partition is just a way to write a number as a sum of other numbers, like 9=4+2+1+1+1. Partitions have attracted mathematicians for centuries. Among others, the famous Indian mathematician Ramanujan was well known for his identities regarding partitions. And now enters another idea, bearing the names of Ulam, Vershik, Kerov, Shepp and others who studied partitions as stochastic objects. In particular, it was discovered that "most" partitions, say of a number n, come in a "typical shape".
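Stepping back for a moment, the $-8 = -8$ computation for the dual pair above is easy to script. A minimal sketch (it uses the standard fact, not stated in the text, that the 4-dimensional cross-polytope has 32 triangular 2-faces; a triangulated $k$-gon contributes $k-3$ diagonals):

```python
# gamma(P) = e_plus - 4n, where e_plus counts edges after triangulating
# every 2-face (a k-gonal face contributes k - 3 diagonals).
def gamma(n_vertices, n_edges, two_face_sizes):
    e_plus = n_edges + sum(k - 3 for k in two_face_sizes)
    return e_plus - 4 * n_vertices

# 4-dimensional cross-polytope: 8 vertices, 24 edges, 32 triangles.
g_cross = gamma(8, 24, [3] * 32)
# 4-cube: 16 vertices, 32 edges, 24 squares (so e_plus = 56).
g_cube = gamma(16, 32, [4] * 24)
print(g_cross, g_cube)  # -8 -8
```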
The emergent picture drawn by Okounkov and his coauthors goes very roughly like this: an "algebraic variety" (a manifold of some sort) that takes part in a certain string theory is related to a class of partitions, and when we consider the typical shape of a partition in the class this gives us another algebraic variety, and - lo and behold - the typical shape IS the mirror image of the original one. The mirror relations translate to asymptotic results on the number of partitions, somewhat in the spirit of the famous asymptotic formulas of the mathematicians Hardy and Ramanujan for $p(n)$, the total number of partitions of the number n. As mentioned in the comments, I am not sure about good references to this connection between mirror symmetry and limit shapes of classes of partitions. The 2003 paper Quantum Calabi-Yau and Classical Crystals by Andrei Okounkov, Nikolai Reshetikhin, and Cumrun Vafa describes this connection in Section 2.3, called "mirror symmetry and the limit shape".

Here is the simplest example that I can think of... The ordinary cohomology ring of $\mathbb{CP}^n$ is given by $\mathbb{C}[a]/(a^{n+1})$. The structure of this ring can be thought of as describing the intersection theory of subvarieties / submanifolds / linear subspaces of $\mathbb{CP}^n$. For example, the relation $a^3 \cdot a^3 = 0$ in the cohomology ring of $\mathbb{CP}^5$ reflects the fact that the intersection of two generic dimension 2 subspaces of $\mathbb{CP}^5$ is empty. Now the quantum cohomology ring of $\mathbb{CP}^n$ is $\mathbb{C}[a]/(a^{n+1} - q)$, where we can think of $q$ as being a nonzero constant, or a formal parameter if you like. The quantum cohomology ring is a deformation (in a suitable sense) of the ordinary cohomology ring. The structure of the deformed ring now encodes "enumerative geometry" information.
For example, it is a fact that given generic linear subspaces $A,B,C$ of $\mathbb{CP}^n$ of total dimension $n-1$, there is a unique degree 1 map $\mathbb{CP}^1 \to \mathbb{CP}^n$ sending the points $0,1,\infty$ to $A,B,C$ respectively. Writing $q$ as $1 \cdot q^1$, the coefficient $1$ corresponds to the uniqueness of the map, and the exponent $1$ corresponds to the degree of the map. I like to think of this as a generalization of the fact that there is a unique line passing through any two distinct points in the plane, which has been known since at least Euclid... :-)

But so far I haven't said anything about "mirror symmetry"... Mirror symmetry says that the story I've described above is echoed by certain properties of the function $W = x_1 + \cdots + x_n + \frac{q}{x_1\cdots x_n}$ on $(\mathbb{C}^\ast)^n$. For example, the Jacobian ring of $W$, which is by definition the ring $\mathbb{C}[x_i^{\pm 1}]/(\partial_i W)$, is isomorphic to $\mathbb{C}[a]/(a^{n+1} - q)$.

EDIT: The relation between $\mathbb{CP}^n$ and $W$ goes much deeper. For another elementary(-ish) mirror symmetry statement, there is Seidel's (I think?) proof that the derived category of $\mathbb{CP}^n$ is equivalent to the Fukaya-Seidel category of $W$. In this case these categories can be described fairly easily, without too much fancy language, via the "Beilinson quiver", which on the derived category side corresponds to the line bundles $\mathcal{O}, \mathcal{O}(1), \cdots , \mathcal{O}(n)$ and the fact that there is an $(n+1)$-dimensional set of morphisms from $\mathcal{O}(i)$ to $\mathcal{O}(i+1)$. For example, consider the morphisms from $\mathcal{O}$ to $\mathcal{O}(1)$; these are just the sections of $\mathcal{O}(1)$, which are the homogeneous degree 1 polynomials in $n+1$ variables.
On the other side, one can see the Beilinson quiver via the "vanishing cycles" $L_0, L_1, \dots, L_n$ of $W$, and the $n+1$-many morphisms above correspond to the $n+1$ intersection points between $L_i$ and $L_{i+1}$. For more on this, see the notes from Bohan Fang's talk here and this paper of Seidel. This kind of correspondence between vector bundles and cycles, and between morphisms of vector bundles and intersection points of cycles, is a first approximation of homological mirror symmetry, or "categorical" mirror symmetry. For a better approximation, the statement is that compositions of morphisms of vector bundles correspond to "compositions" of intersection points, where these "compositions" are defined via $J$-holomorphic discs. But for the elliptic curve / symplectic torus, things are still pretty simple, and one can avoid saying the word "$J$-holomorphic disc" if one wishes. In this situation, the correspondence between compositions reduces to a correspondence between some classical facts about theta functions on elliptic curves and some very elementary observations about lines and triangles on a torus.

And finally, here is the most trivial example of mirror symmetry. Let $X$ be a point $\operatorname{Spec} \mathbb{C}$. Then the mirror of $X$, call it $Y$, is also a point. Notice that the point is a Lagrangian submanifold of $Y$. Notice that the intersection of the point with the point is the point. On the other hand, take $\mathbb{C}$ as a $\mathbb{C}$-module. Then there is a 1-dimensional set of $\mathbb{C}$-module morphisms from $\mathbb{C}$ to $\mathbb{C}$.
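For small $n$, the relation $a^{n+1}=q$ encoded by $W$ can be checked numerically. The sketch below (my own illustration, not from the answer) takes $n=2$, $q=1$ and verifies that the three points $x_1=x_2=z$ with $z^3=q$ are critical points of $W=x_1+x_2+q/(x_1x_2)$, and that their elementary symmetric functions reproduce the minimal polynomial $a^3-q$:

```python
# n = 2, q = 1: verify that x1 = x2 = z with z^3 = q gives critical
# points of W = x1 + x2 + q/(x1*x2), mirroring a^3 = q in QH*(CP^2).
import cmath

q = 1.0
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # z^3 = 1

def grad_W(z1, z2):
    # dW/dx1 = 1 - q/(x1^2 x2),  dW/dx2 = 1 - q/(x1 x2^2)
    return (1 - q / (z1**2 * z2), 1 - q / (z1 * z2**2))

for z in roots:
    g1, g2 = grad_W(z, z)
    assert abs(g1) < 1e-12 and abs(g2) < 1e-12

# Elementary symmetric functions of the roots give a^3 - q as the
# minimal polynomial: e1 = 0, e2 = 0, e3 = q.
e1 = sum(roots)
e2 = roots[0]*roots[1] + roots[0]*roots[2] + roots[1]*roots[2]
e3 = roots[0]*roots[1]*roots[2]
print(abs(e1) < 1e-9, abs(e2) < 1e-9, abs(e3 - q) < 1e-9)
```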
If $X$ is a an elliptic curve, then $\mathrm{SL}_2(\mathbb Z)$ acts as a group of symplectic diffeomorphisms on the mirror $\hat X = \mathbb{R}^2/\mathbb{Z}^2$. There is a corresponding 11 down action of (a central extension of) $\mathrm{SL}_2(\mathbb Z)$ generated by Fourier-Mukai transform induced by the Poincare line bundle on $X \times X$, and by tensoring by a line bundle vote of degree one (and shifts). 2. A Dehn twist corresponds to the ''spherical twist'' $\mathrm{ST}_E$ at an spherical object $E$ (see arXiv:math.AG/0001043). 3. More examples have been studied by Horja, see arXiv:0103.5231. This is a nice point. I would find even more exiting if it were possible to go in the opposite direction: we start with an auto-equivalence of $D^b(X)$ and get a symplectomorphism of the "mirror". I wonder if this direction ever appeared in the literature? Also, do I understand correctly, that we don't really know what is the symplectiomorphism group of symplectic manifolds of dimesnion $4$ an higher? We don't even know what is the group of connected components of the symplectomorphism group in any single example? – aglearner Mar 6 '11 at 11:15 @aglearner: There doesn't seem to be any a priori reason for an arbitrary derived autoequivalence of Fukaya to come from a symplectomorphism. – S. Carnahan♦ Mar 6 '11 at 14:58 @Scott. Indeed, they don't all come from symplectomorphisms. For instance, the group of real line bundles acts on the spin-structures that decorate the Lagrangian submanifolds. 1 @aglearner. Some 4-dimensional symplectomorphism groups are well understood. In higher dimensions, a main difficulty is that there are possibly non-trivial things which Floer cohomology doesn't distinguish from the identity: does $\pi_0 \mathrm{Diff}(S^6)=\mathbb{Z}/28$ inject into $\pi_0 \mathrm{Symp}(T^\ast S^6)$?. HMS doesn't help with this. – Tim Perutz Mar 6 '11 at add comment A toy model for mirror symmetry is the following. 
Consider a real manifold (not necessarily compact) $B$ with an atlas of affine coordinates, i.e. such that the change of coordinate maps are of the type $x \mapsto Mx + b$ where $M$ and $b$ are constant. Then the tangent bundle $TB$ has natural complex coordinates given by $z = x + iy$, where $y$ are coordinates on the fibre. If further one assumes that $\det M = 1$ then $TB$ also has a nowhere vanishing holomorphic $n$-form. On the other hand $T^*B$ has its usual symplectic structure. So $TB$ and $T^*B$ can be thought of as being mirror. One can also twist the complex structure on $TB$ with a B-field. One can go a bit further too: in fact suppose $\gamma$ in $B$ is an affine curve (i.e. a straight line in affine coordinates). Then one can lift $\gamma$ to a Lagrangian submanifold $L_{\gamma}$ of $T^*B$ by adding the annihilator of $\gamma'(t)$ in the fibre at $\gamma(t)$. On the other hand the same curve lifts to a complex object in $TB$ by adding the line generated by $\gamma'$ inside the fibre $T_{\gamma} B$. If the affine structure is also integral (i.e. $M$ and $b$ have integral coefficients), then one can also partially compactify by taking a lattice $\Lambda \subset TB$ and its dual $\Lambda'$ and then forming torus bundles $X = TB / \Lambda$ and $X^* = T^*B / \Lambda'$. This picture is too simple to work in the compact case, but it is expected that actual mirror symmetry is a perturbation of this. What I just described is the SYZ approach to mirror symmetry.

I like this example. Consider a surface $V$ in $(\mathbb{C}^{\ast})^2$ given by some Laurent polynomial $p(z)$; then the hypersurface $X$ defined by $xy = p(z)$ is Calabi-Yau. Its mirror $\check X$ can be constructed by taking the Newton polygon $\Delta$ of $p$ and then considering the toric variety defined by the cone over $\{ 1 \} \times \Delta$ in $\mathbb{R}^3$. $\check X$ is a resolution of this toric variety obtained from some subdivision of $\Delta$.
Now, the surface $V$ (the one we started from) has a "tropical amoeba". This can be thought of as a graph in $\mathbb{R}^2$ which is the limit (in some sense) of the image of $V$ under the standard torus fibration $(\mathbb{C}^{\ast})^2 \rightarrow \mathbb{R}^2$. The interesting thing is that this graph gives a subdivision of $\mathbb{R}^2$ which is dual to the subdivision of $\Delta$ (this is related to Gil Kalai's answer). Moreover this graph is also the locus of singular fibres of a Lagrangian torus fibration defined on $X$. Such a Lagrangian fibration induces on the base an affine structure as I said previously. A construction such as the one I described above can be used to construct many Lagrangian $S^3$'s in $X$ over the bounded regions defined by the graph. The mirrors of these objects are the divisors of $\check X$ corresponding to interior integral points of $\Delta$, or better, line bundles supported on such divisors. There are some interesting correspondences between intersection points of these spheres and cohomology of the line bundles, even without getting into $A_{\infty}$ constructions.

I like this example. Consider a surface... This generalises the idea that $T\mathbb R^n$ is naturally complex, while $T^*\mathbb R^n$ is naturally symplectic. But what would be the first non-trivial statement that one could try to understand? – aglearner Mar 10 '11 at 23:14
Diego, thanks for adding the example! It is nice. – aglearner Mar 12 '11 at 18:16
In general, do we have a natural map from a manifold to its mirror? For example, in the $TB$ and $T^{*}B$ case, it seems we need a Legendre transform to do this, but that requires extra data. Also, in the K3 case, do we have such a map between manifolds? – Jay Nov 15 '12 at 17:30

I'm not sure if this is what you're looking for, but the paper "Mirror symmetry and Elliptic curves" by R. Dijkgraaf might provide a good example. The example in that paper concerns the mirror of an elliptic curve.
They have two moduli parameters, their complex moduli and their Kahler moduli. Mirror symmetry in this case simply states that the mirror of some $E_{\tau, \omega}$ is $E_{\omega,\tau}$, i.e. you simply switch the two moduli parameters.

You need the machinery of triangulated categories and homological algebra to understand mirror symmetry as it stands today, homological mirror symmetry. But one can get an idea of mirror symmetry without delving into these concepts. I am talking about the classical picture of mirror symmetry as noticed by physicists, i.e. mirror symmetry as an isomorphism between the complex and Kahler moduli spaces of Calabi-Yau 3-folds. (Beware: this was the first definition of mirror symmetry, and even here I am overlooking the subtleties involving the large complex structure limit.) E.g. the elliptic curve (Dijkgraaf), the quintic (Greene, Plesser and Candelas et al). This may give an idea of mirror symmetry, but to understand this picture properly one needs an understanding of the geometry of Calabi-Yau manifolds, variations of mixed Hodge structures, quantum cohomology and GW invariants.

Also there is a modern picture of mirror symmetry called the SYZ conjecture, which is more geometric and doesn't involve homological algebra and triangulated categories. But again you need knowledge of the geometry of special Lagrangian submanifolds of CY manifolds.

Since you are asking for examples, you might want to take a look at the lecture notes of Mark Gross published in Calabi-Yau Manifolds and Related Geometries. In the chapter "Mirror Symmetry in Practice" the case of the quintic is worked out in some detail. Although one does not need to know what an $A_\infty$ algebra is, one has instead to be familiar with variations of Hodge structure and some symplectic geometry.
However, as far as I know, homological mirror symmetry is actually weaker than what is believed to be true, so it probably does not hurt to see what one can show in specific examples. You might also want to look at this book; it is written for an audience of physicists and mathematicians, but probably does not represent the most recent view on mirror symmetry.

Gernot, thanks for taking time to give these references. In fact I know about the existence of these books surely, and have opened them several times. But you see, this statement -- the last geometric theorem of Poincare -- can be explained to any calculus student, and this statement led to important developments in math (Arnold's conjectures); it is simple and deep. So, my hope is that people who studied the books that you mention, understand them to some extent, and work professionally in the area could be able to say something at least relatively similar (if this is possible at all) -- I can't :( – aglearner Mar 9 '11 at 20:32
{"url":"http://mathoverflow.net/questions/57520/examples-in-mirror-symmetry-that-can-be-understood/57526","timestamp":"2014-04-18T00:36:01Z","content_type":null,"content_length":"114457","record_id":"<urn:uuid:f8c0bdcf-f809-4aba-b6cc-d03022dd2616>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
algebraic manipulation

January 24th 2010, 12:58 PM #1
I need to find the relationship between A and B, where $A = \frac{L_2 - L_1}{L_1(t_2 - t_1)}$ and $B = \frac{V_2 - V_1}{V_1(t_2 - t_1)}$. I'm assuming I need to make $V_1 = L_1 W_1 H_1$ and $V_2 = L_2 W_2 H_2$. So basically, how do I get $\frac{L_2 - L_1}{L_1(t_2 - t_1)}$ out of $\frac{L_2 W_2 H_2 - L_1 W_1 H_1}{L_1 W_1 H_1 (t_2 - t_1)}$? I've only gotten so far: $W_1 H_1 B = \frac{L_2 W_2 H_2 - L_1 W_1 H_1}{L_1 (t_2 - t_1)}$ and I can't figure out how to extract $L_2 - L_1$ from the numerator on the right side of the equation. Is this even possible? Am I going about this problem incorrectly?

January 24th 2010, 01:43 PM #2
Your initial question is to get a relation between A and B, right? If so, just look at their formulas. They both have $(t_2-t_1)$ in it. So, for example, write that $t_2-t_1=\frac{V_2-V_1}{V_1 B}$ and substitute it in A. If it's not what you're looking for, can you be more precise about what you want? I can't see what you want to do with the stuff you've written down in the rest of your message.

January 24th 2010, 02:13 PM #3 (MHF Contributor, Dec 2009)
Or, if we divide A by B, we can invert B and multiply: $\frac{A}{B}=\frac{L_2-L_1}{L_1(t_2-t_1)}\ \frac{V_1(t_2-t_1)}{V_2-V_1}=\frac{V_1(L_2-L_1)}{(V_2-V_1)L_1}$

January 24th 2010, 02:22 PM #4
Thank you. I suppose I was just overthinking the problem. Got it now.
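The relation $\frac{A}{B}=\frac{V_1(L_2-L_1)}{(V_2-V_1)L_1}$ derived in the thread can be sanity-checked numerically; a quick sketch with arbitrary made-up sample values:

```python
# Numeric check of A/B = V1*(L2 - L1) / ((V2 - V1)*L1).
# The sample values are arbitrary, chosen only to avoid zero denominators.
L1, L2, V1, V2, t1, t2 = 2.0, 5.0, 3.0, 7.0, 1.0, 4.0

A = (L2 - L1) / (L1 * (t2 - t1))
B = (V2 - V1) / (V1 * (t2 - t1))

lhs = A / B
rhs = V1 * (L2 - L1) / ((V2 - V1) * L1)
print(abs(lhs - rhs) < 1e-12)  # True
```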
{"url":"http://mathhelpforum.com/algebra/125227-algebraic-manipulation.html","timestamp":"2014-04-18T05:57:15Z","content_type":null,"content_length":"41481","record_id":"<urn:uuid:e4cd8850-b50c-4d98-9bb2-014f04eb92cb>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Inglewood Algebra 2 Tutor Hi I'm Ross. I recently graduated from Duke University, majoring in Neuroscience and minoring in Economics and Psychology. I was on the Dean's List for my grades and I am looking forward to starting vet school this fall at one of the top schools in the country. 27 Subjects: including algebra 2, chemistry, Spanish, physics ...The curriculum lives on within NAI's mathematic department to this day. My tutoring approach is simple - create successful study habits that enhance a student's ability to be a critical thinker and eliminate procrastination. By staying true to this goal, it reinforces the importance of genuinel... 12 Subjects: including algebra 2, reading, writing, geometry ...I am really dedicated and patient as I was myself really bad at mathematics.Soccer is a sport played on a 120 meters over 100 field with two 7.32 meter-wide goals. It opposes two teams with 11 players each, ten on the field and 1 in the goal. There are 3 referees in charge of ruling the game I play soccer at CSUN and also the super metro League in Los Angeles with the L.A. 6 Subjects: including algebra 2, French, geometry, algebra 1 Hello, my name is David Angeles and I am currently attending California State University, Northridge to pursue a Major in Applied Mathematics. I want to be a math professor one day and help out many students the way my teachers have helped me throughout the years. I have been tutoring for this website for almost one year and had the pleasure of meeting all types of people. 10 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I came from Indonesia where I have worked for more than 5 years as a science teacher at the high school level. In Los Angeles I have worked for two years in a special education school, where I have learned how to teach students with an IEP (Individualized Education Program). In this school my su... 8 Subjects: including algebra 2, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/inglewood_algebra_2_tutors.php","timestamp":"2014-04-20T11:25:15Z","content_type":null,"content_length":"24213","record_id":"<urn:uuid:2aea70f2-6f9d-4d7e-80bc-7c2467676326>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Catuscia Palamidessi The reduced product of abstract domains is a well-known operation for domain composition in abstract interpretation. In this paper, we study its inverse operation, introducing a notion of domain complementation in abstract interpretation. Complementation provides a systematic way to design new abstract domains, and it allows domains to be systematically decomposed. Such an operation also makes it possible to simplify domain verification problems, and it yields space-saving representations for complex domains. We show that the complement exists in most cases, and we apply complementation to three well-known abstract domains, notably to Cousot and Cousot's interval domain for integer variable analysis, to Cousot and Cousot's comportment domain for analysis of functional languages and to the complex domain Sharing for aliasing analysis of logic languages. A preliminary version of this paper, with the same title, appeared in the proceedings of SAS 95. The foundations of this work are provided by the results of the paper Weak Relative Pseudo-Complements of Closure Operators. See also Confluence in Concurrent Constraint Programming and Compositional Analysis for Concurrent Constraint Programming.
Strongly Diagnosable Systems under the Comparison Diagnosis Model
Sun-Yuan Hsieh, Yu-Shu Chen
IEEE Transactions on Computers, December 2008 (vol. 57, no. 12), pp. 1720-1725
A system is t-diagnosable if all faulty nodes can be identified without replacement when the number of faults does not exceed t, where t is some positive integer. Furthermore, a system is strongly t-diagnosable if it is t-diagnosable and can achieve (t+1)-diagnosability except for the case where a node's neighbors are all faulty. In this paper, we propose some conditions for verifying whether a class of interconnection networks, called Matching Composition Networks (MCNs), are strongly diagnosable under the comparison diagnosis model.
Index Terms: Diagnostics, Topology, Graph Theory, Network problems
Sun-Yuan Hsieh, Yu-Shu Chen, "Strongly Diagnosable Systems under the Comparison Diagnosis Model," IEEE Transactions on Computers, vol. 57, no. 12, pp. 1720-1725, Dec. 2008, doi:10.1109/TC.2008.104
Performance map of a cluster detection test using extended power Conventional power studies possess limited ability to assess the performance of cluster detection tests. In particular, they cannot evaluate the accuracy of the cluster location, which is essential in such assessments. Furthermore, they usually estimate power for one or a few particular alternative hypotheses and thus cannot assess performance over an entire region. Takahashi and Tango developed the concept of extended power, which indicates both the rate of null hypothesis rejection and the accuracy of the cluster location. We propose a systematic assessment method, here using extended power, to produce a map showing the performance of cluster detection tests over an entire region. To explore the behavior of a cluster detection test on identical cluster types at any possible location, we successively applied four different spatial and epidemiological parameters. These parameters determined four cluster collections, each covering the entire study region. We simulated 1,000 datasets for each cluster and analyzed them with Kulldorff’s spatial scan statistic. From the area under the extended power curve, we constructed a map for each parameter set showing the performance of the test across the entire region. Consistent with previous studies, the performance of the spatial scan statistic increased with the baseline incidence of disease, the size of the at-risk population and the strength of the cluster (i.e., the relative risk). Performance was heterogeneous, however, even for very similar clusters (i.e., similar with respect to the aforementioned factors), suggesting the influence of other factors. The area under the extended power curve is a single measure of performance and, although needing further exploration, it is suitable for conducting a systematic spatial evaluation of performance. The performance map we propose enables epidemiologists to assess cluster detection tests across an entire study region.
Cluster detection test; Performance map; Extended power; Simulation study
Spatial clusters can be detected using a wide range of statistical tests [1,2], many of which are available in free software packages such as R [3,4]. Epidemiologists use local methods to detect clusters without a priori knowledge of their location, and to determine their significance. Because these cluster detection tests (CDTs) must reveal both the presence and location of clusters, performance studies have been constrained by the limitations of conventional estimation techniques. For example, a CDT may have maximum power for rejecting the null hypothesis (cluster absence), yet be incapable of accurately locating the simulated cluster. CDT performance is also a function of epidemiological and geographical context [1,5-11]. Furthermore, because epidemiological (e.g., incidence and relative risk) and geographical (e.g., spatial unit size and shape) factors tend to be intrinsically linked, their separate or joint effects are difficult to evaluate. When evaluating the behavior of these CDTs in a particular region, limited knowledge can consequently be gleaned by simulating one or a few clusters in that region, and even less knowledge can be accrued from studies of other regions. Takahashi and Tango have proposed the concept of extended power (EP) [12,13] as a more accurate measure of CDT performance. This measure assesses both the probability that the null hypothesis is rejected and the accuracy of the cluster location. As such, it overcomes the inadequacy of conventional power measures. However, EP cannot eliminate the need to define what is meant by “an accurate” or “sufficiently accurate” location.
The level of spatial accuracy depends upon context; for instance, an epidemiologist will require higher spatial accuracy for an ad hoc study than for a survey system. Takahashi and Tango therefore introduced a quantitative indicator of spatial accuracy, and summarized CDT performance using an EP curve in conjunction with this spatial accuracy indicator. In this work, we propose a method that integrates the area under the EP curve (AUC[EP]) in order to produce maps that provide a global overview of CDT performance over an entire study region. Clustering model To explore CDT behavior on same-class clusters in all possible locations, we set common spatial and epidemiological characteristics for four cluster collections covering the entire study region. The study region was the Auvergne region (France), divided into n=221 spatial units (SUs) equivalent to U.S. ZIP codes. The exhaustive collection of approximately circular clusters with four SUs was identified within the study region. To achieve this outcome, the 221 SUs were successively associated with their three nearest neighbors as defined by Euclidean distances between the SU centroids. To obtain four cluster collections, we applied four combinations of two baseline risks (incidences) and two relative risks to the same at-risk population, whose size was estimated by mean annual number of live births. For a realistic analysis, we used data archived in CEMC (birth defects registry for the Auvergne region) and INSEE (National Institute of Statistics and Economic Studies) databases. We collected two categories of data from 1999 to 2006: all birth defects and cardiovascular birth defects. Both datasets were sorted by SU. The number of live births was approximated by the number of birth declarations in the at-risk population. Global annual incidences of all birth defects (I[all]) and cardiovascular birth defects (I[CV]) were estimated as 2.26% and 0.48% of births, respectively.
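The cluster collection described above (each SU grouped with its three nearest neighbors, by Euclidean distance between centroids) can be sketched as follows. This is an illustrative Python version with made-up coordinates, not the authors' R code; the function name and toy data are ours.

```python
import numpy as np

def cluster_collection(centroids, k=3):
    """For each spatial unit (SU), form a cluster of the SU plus its
    k nearest neighbors, by Euclidean distance between centroids."""
    centroids = np.asarray(centroids, dtype=float)
    # Pairwise squared Euclidean distances between SU centroids.
    d2 = ((centroids[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    clusters = []
    for i in range(len(centroids)):
        order = np.argsort(d2[i])          # SU i itself comes first (distance 0)
        clusters.append(sorted(order[:k + 1].tolist()))
    return clusters

# Toy example: 5 SU centroids on a line; the cluster for SU 0 is {0, 1, 2, 3}.
coords = [(0, 0), (1, 0), (2, 0), (3, 0), (10, 0)]
print(cluster_collection(coords))
```

Applied to the 221 real SU centroids, this yields one (possibly overlapping) four-SU cluster per SU, i.e., the exhaustive collection covering the region.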
In the analysis, we constructed risk combinations of these two incidences at relative risks of 3 and 6. For each cluster within the four categories (221×4), we generated 1,000 datasets, i.e., a total of 884,000 datasets. Each dataset consisted of 221 rows and 5 columns. The rows contained SU coordinates (longitude and latitude), observed number of cases, size of the at-risk population (i.e., the number of live births) and expected number of cases in the specified SU. This last quantity was the product of the global incidence (I[all] or I[CV]) and the at-risk population size in the SU. The observed case numbers were assumed to be independent Poisson variables such that N[i] ~ Poisson(π[i]), with π[i] = θ^δ[i] × ϵ[i], where N[i] is the observed number of cases, ϵ[i] denotes the expected number of cases in the ith SU under the null hypothesis of risk homogeneity (H[0]) and π[i] the expected number of cases in the ith SU under the alternative hypothesis of one simulated cluster (H[1]). θ is the relative risk, and δ[i] = 1 if the ith SU is within the simulated cluster, and 0 otherwise. Measure of performance The extended power was proposed by Takahashi and Tango as an improved measure of CDT performance. For a particular cluster, global performance is the weighted cumulative sum of the contribution of each detected cluster in all submitted datasets. Here, we summarize the construction of the performance indicator. For a more detailed description, the reader is referred to Takahashi and Tango [12,13]. Within a simulated cluster of s SUs, if the null hypothesis is rejected, the size l of a detected cluster and its s* SUs (where s* denotes a subset of s) are recorded. A maximum cluster size L is imposed, such that if l>L, the detected cluster is discarded. This limit prevents very large, meaningless clusters from contributing to CDT global performance. In this work, L was set to 30 SUs. All eligible detected clusters (EDCs), i.e. with l ≤ L, are counted and sorted by l and s*.
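A minimal sketch of this simulation step in Python (the study itself used R): expected counts come from incidence times births, the relative risk θ is applied inside the simulated cluster, and observed counts are drawn as independent Poisson variables. The function name and the toy numbers below are illustrative, not taken from the paper's data.

```python
import numpy as np

def simulate_dataset(births, cluster, incidence, theta, rng):
    """Draw one dataset: N_i ~ Poisson(theta**delta_i * eps_i), where
    eps_i = incidence * births_i and delta_i = 1 inside the cluster."""
    births = np.asarray(births, dtype=float)
    expected = incidence * births                  # eps_i, expected counts under H0
    delta = np.zeros(len(births))
    delta[list(cluster)] = 1.0                     # SUs inside the simulated cluster
    mean = theta ** delta * expected               # pi_i, Poisson mean under H1
    return rng.poisson(mean), expected

rng = np.random.default_rng(0)
births = [120, 300, 80, 500]                       # toy at-risk population per SU
cases, expected = simulate_dataset(births, cluster={1}, incidence=0.0226, theta=6, rng=rng)
```

Repeating the draw 1,000 times per cluster, as in the paper, just means calling `simulate_dataset` in a loop with the same parameters.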
For each combined value of l and s*, the proportion of corresponding detected clusters (P[(l,s*)]) in all submitted datasets is assigned a weight W[(l,s*)]. This weight is also a function of the detection accuracy (i.e., the correct location of the simulated cluster). Thus, Takahashi and Tango define W[(l,s*,w+,w−)] as W[(l,s*,w+,w−)] = max{0, 1 − w^−(s − s*) − w^+(l − s*)}, where w^− and w^+ are penalties for false negative and false positive SUs, respectively. The penalties w^− and w^+ are determined according to the following constraints. For w^−, detected clusters that generate no false negative must fully contribute to global performance, and those that induce s false negatives must be discarded. These constraints are satisfied when w^− = 1/s. For w^+, detected clusters that generate no false positive must fully contribute to global performance, and those that induce at least l[0] false positives must be discarded. These constraints are satisfied when w^+ = 1/l[0]. So that l[0] is not assigned arbitrarily, Takahashi and Tango specify the ratio q = w^+/w^− = s/l[0]. To favor sensitivity over specificity (as is usually preferred), w^− is greater than or equal to w^+; thus l[0]≥s because 1/s≥1/l[0]. For example, when q = 1, w^+ = w^− = 1/s, so false positives are penalized as heavily as false negatives; smaller values of q relax the penalty on false positives. For each value of q, the extended power is the cumulative sum of W[(l,s*,q)]×P[(l,s*)], where l runs from 1 to L and s* runs from 0 to s. CDT global performance in detecting a particular cluster is then represented by the extended power curve with q running from 0 to 1. At any point on this curve, the extended power is, by construction, between 0 and 1. Furthermore, we note that the extended power is a monotonically decreasing function of q. Consequently, the area under the extended power curve (AUC[EP]), defined by AUC[EP] = ∫[0,1] EP(q) dq, is between 0 and 1, with 0 signifying an inoperative CDT (s* always null) and 1 a perfect CDT (H[0] always rejected, with all detected clusters exactly overlaying the simulated cluster). As suggested by Takahashi and Tango [13], we used the area under the extended power curve as the measure of CDT performance.
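As a concrete reading of these definitions, here is a small Python sketch (illustrative, not the authors' code): taking w^− = 1/s and w^+ = q/s, the weight of a detected cluster of size l containing s* true SUs is 1 − w^−(s − s*) − w^+(l − s*), floored at zero; the extended power at q sums weight × proportion over the observed (l, s*) configurations, and AUC[EP] integrates that curve over q in [0, 1].

```python
def weight(l, s_star, s, q):
    """Takahashi-Tango weight with w_minus = 1/s and w_plus = q/s:
    full credit with no location errors, zero once s false negatives
    or l0 = s/q false positives are reached."""
    w_minus, w_plus = 1.0 / s, q / s
    return max(0.0, 1.0 - w_minus * (s - s_star) - w_plus * (l - s_star))

def extended_power(q, s, props):
    """props maps (l, s_star) -> proportion of simulated datasets in which
    an eligible cluster of size l with s_star true SUs was detected."""
    return sum(weight(l, s_star, s, q) * p for (l, s_star), p in props.items())

def auc_ep(s, props, steps=1000):
    # EP(q) is piecewise linear and decreasing in q; a fine trapezoidal
    # rule over [0, 1] approximates the area under the curve well.
    qs = [i / steps for i in range(steps + 1)]
    ep = [extended_power(q, s, props) for q in qs]
    return sum((ep[i] + ep[i + 1]) / 2 for i in range(steps)) / steps

# A perfect test always detects exactly the simulated cluster (l = s_star = s):
perfect = {(4, 4): 1.0}
print(auc_ep(4, perfect))  # 1.0
```

Note that at q = 0 the weight reduces to s*/s, and a weight of 1 requires l = s* = s, which matches the paper's discussion of the intercept and of the perfect test.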
Performance mapping Global performance was visualized over the entire region using maps representing the measured AUC[EP] for each collection of clusters. The AUC[EP] is a measure of a cluster and is thus associated with four SUs. In order to obtain a global overview on a single map, we assigned the AUC[EP] value of each cluster to its central SU. Thus, a single AUC[EP] measure was assigned to each SU of the map. As we defined four cluster collections for four risk combinations (incidence and relative risks), we produced four performance maps. Kulldorff’s Spatial scan statistic In this study, we selected Kulldorff’s spatial scan statistic [14,15], a well-known and widely used CDT whose performance has been studied by many authors [1,6,10,16]. The spatial scan statistic detects the most likely cluster based on locally observed statistics of likelihood ratio tests. The scan statistic considers all possible zones z defined by two parameters: a center that is successively placed on the centroid of each SU, and a radius varying between 0 and a predefined maximum. Because the true geography is delineated by administrative tracts, each zone z, defined by all SUs whose centroids lie within the circle, is irregularly shaped. Let N[z] and n[z] be the size of the at-risk population and the number of cases counted in zone z (over the entire region, these quantities are the total population size N and the total number of cases n, respectively). The probabilities that an at-risk case lies inside or outside zone z are respectively defined by p[z]=n[z]/N[z] and q[z]=(n−n[z])/(N−N[z]). Given the null hypothesis H[0]: p[z]=q[z] versus the alternative H[1]: p[z]>q[z] and assuming a Poisson distribution of cases, Kulldorff defined the likelihood ratio statistic as proportional to (n[z]/(λN[z]))^n[z] × ((n−n[z])/(λ(N−N[z])))^(n−n[z]) × I(n[z]>λN[z]), where λ is the global incidence, and the indicator function I equals 1 when the number of observed cases in zone z exceeds the expected number λN[z] under H[0], and 0 otherwise.
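The scan step can be sketched in Python as follows; this is a simplified illustration of the Poisson likelihood ratio underlying Kulldorff's statistic, not the SaTScan software, and the function names and toy data are ours. Each candidate zone compares its observed cases n[z] with the expectation λN[z], and the zone maximizing the (log) likelihood ratio is the most likely cluster; significance would then come from Monte Carlo replication under H[0].

```python
import math

def log_lr(n_z, N_z, n, N):
    """Log likelihood ratio of the Poisson scan statistic for a zone with
    n_z cases out of N_z at-risk people, versus n cases out of N overall."""
    lam = n / N                      # global incidence
    e_in, e_out = lam * N_z, lam * (N - N_z)
    if n_z <= e_in:                  # indicator: only excess-risk zones score
        return 0.0
    llr = n_z * math.log(n_z / e_in)
    if n - n_z > 0:
        llr += (n - n_z) * math.log((n - n_z) / e_out)
    return llr

def most_likely_zone(zones, cases, pop):
    """Return the candidate zone (a set of SU indices) with the highest LLR."""
    n, N = sum(cases), sum(pop)
    def score(z):
        return log_lr(sum(cases[i] for i in z), sum(pop[i] for i in z), n, N)
    return max(zones, key=score)

# Toy example: SU 1 carries an obvious excess of cases.
cases = [2, 30, 3, 2]
pop = [1000, 1000, 1000, 1000]
zones = [{0}, {1}, {2}, {3}, {0, 1}, {1, 2}]
print(most_likely_zone(zones, cases, pop))  # {1}
```

A Monte Carlo p-value would then be obtained by redrawing the cases under H[0] many times, rescanning each replicate, and ranking the observed maximum LLR among the simulated maxima.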
The circle yielding the highest likelihood ratio is identified as the most likely cluster. The p-value is obtained by Monte Carlo inference. Data simulation and analysis (see Data and Script in the Additional files 1 and 2) were performed in R 2.14.0 [3,17-19] using AUVERGRID [20]. Additional file 1. Script: This file is an R script (script.r) containing a complete procedure to define the collection of clusters, simulate the datasets, perform the test and plot the corresponding performance map. Format: ZIP Size: 3KB Additional file 2. Data: This is a zip file (Data.zip) containing the population data in an R format (Pop.rda) and a folder with the shapefiles for the Auvergne region. Format: ZIP Size: 57KB The Auvergne region is characterized by low and medium mountains situated around a central plain. The at-risk population (see Methods) was heterogeneously distributed throughout sparsely populated areas (mainly borderland and mountainous) and highly populated urban areas. Figure 1 shows the size of the at-risk population in each cluster, which was assigned to its central SU. Figure 1. Size of the at-risk population for each cluster in the Auvergne region, as defined by mean number of live births per year between 1999 and 2006 (source: INSEE). Q1: ≤ 102; Q2: > 102 and ≤ 175; Q3: > 175 and ≤ 293; Q4: > 293. Figure 2 demonstrates how CDT performance improved with increasing risk level. Clearly, the CDT could not detect clusters within regions with low numbers of births. For these clusters, performance only marginally improved, even at the highest risk combination (Figure 3). Figure 2. AUC[EP] of Kulldorff’s spatial scan. AUC[EP] was measured for four combinations of two relative risks (RR) and two annual incidences of birth defects: low RR=3 and high RR=6; low incidence=0.48% births and high incidence=2.26% births. Figure 3.
AUC[EP] of Kulldorff’s spatial scan based on the size of the at-risk population for four combinations of two relative risks (RR) and two annual incidences of birth defects: low RR=3 and high RR=6; low incidence=0.48% births and high incidence=2.26% births. CDT performance increased monotonically with the at-risk population size (Figure 3). We noted a stronger heterogeneity of CDT performance for the clusters with the largest populations, especially at intermediate risk levels (Figure 3); by this, we mean that clusters with nearly the same population size led to slightly different test performance behaviors. For example, Figure 4 shows test performance in detecting three clusters centered on SUs “43770” (red cluster in the figure), “03700” (blue cluster) and “03420” (green cluster), which had population sizes of 544, 558 and 545 births (mean number over 8 years), respectively. At the lowest risk level, the red cluster was the only one even marginally detected, whereas under other configurations, the blue cluster was best detected. The green cluster exhibited the worst detection performance, particularly at intermediate risk levels. We note that the green cluster was the only borderland cluster. Figure 4. AUC[EP] of Kulldorff’s spatial scan and locations of three simulated clusters for four combinations of two relative risks (RR) and two annual incidences of birth defects: low RR=3 and high RR=6; low incidence=0.48% births and high incidence=2.26% births. Some summary statistics of the AUC[EP] distributions are displayed in Table 1. Figure 5 shows two different extended power curves (and thus two different CDT behaviors) that have nearly equal AUC[EP]. One of these clusters was centered on SU “03160”, the other on SU “63112”. Table 1. AUC[EP] distribution for each risk combination and category of at-risk population size Figure 5. Extended power curves for two simulated clusters.
Line 03160: cluster centered on the SU with zip code 03160 (northwest Auvergne); line 63112: cluster centered on the SU with zip code 63112 (central Auvergne). Both clusters were simulated with a relative risk of 6 and a baseline incidence of birth defects set to 2.26%. Generation of one performance map from 221,000 datasets required about 5 days of computational time using the AUVERGRID grid. Takahashi and Tango [13] have suggested using the AUC[EP] to compare performance between CDTs. We used this summary indicator, suitable for compiling maps, to describe CDT performance. It thus fulfills our primary goal of realizing a systematic performance assessment of a CDT over an entire study area, rather than over only a few clusters. This mapping method, although using Takahashi and Tango’s extended power, is not dependent on this concept. Our method can use any other indicator that meets the requirements of being a scalar (i.e., a single measure of performance) indicating both the spatial accuracy of the detection and the capacity of cluster detection tests to reject the null hypothesis. Interpretation of the AUC[EP] requires further exploration, however. Although a higher AUC[EP] clearly signifies stronger CDT performance, quite different behaviors can yield the same AUC[EP]. As shown in Figure 5, different curves can possess very similar AUC[EP] values. This figure shows the extended power curves “03160” and “63112”, whose AUC[EP] values are nearly equal (0.931 and 0.932, respectively), but which reflect different CDT behaviors. The procedures used to construct these curves are described in detail within separate spreadsheets (see EP curve in Additional file 3). Additional file 3. EP curve: This file is an Excel spreadsheet (EP curve.xls) containing two worksheets. Sheets “03160” and “63112” describe step-by-step construction of EP curves for clusters centered on SU “03160” and SU “63112”, respectively.
In both constructions, the relative risk is set to 6 and the baseline incidence of birth defects is assumed to be 2.26%. To toggle between the corresponding procedures for calculating EP, the user need only alter the value of q in cell D41. Format: XLS Size: 91KB The curve “63112” is nearly horizontal, indicating that the EDCs (H[0] rejected, and cluster size l<maximum cluster size L) located the simulated cluster with high accuracy. As q increases, less tolerance is given to false positives until, eventually, only EDCs with at least one true positive and fewer than s false positives can contribute to the extended power. A near-zero slope thus indicates that the same detected clusters, all of which contain fewer than s false positives, contribute to the extended power, regardless of q. The intercept of curve “63112” is 0.939, meaning that eligible clusters (l<L), all of which contribute to the extended power (i.e., all clusters contain at least one true positive), were detected in 93.9% of the tests (H[0] rejected). To summarize curve “63112”, the simulated cluster was not always detected (no H[0] rejection or EDC without true positive); however, provided that an EDC identified at least one true positive, the location was accurate (i.e., fewer than s false positives existed in the cluster). In contrast, the curve “03160” yields the same AUC[EP], but is negatively sloped with an intercept of 0.951. Thus, the associated CDT produced more EDCs containing at least one true positive. The negative slope indicates that a higher proportion of these EDCs generated at least s false positives. To summarize curve “03160”, the test rejected H[0] more often and/or produced more EDCs, but located the simulated cluster with less accuracy (i.e., this analysis produced more than s false positives). One particular curve has an intercept equal to 1 (at q=0) and a zero slope.
An intercept equal to 1 implies that the CDT always rejects H[0] and that no false negatives exist in the EDCs. All detected clusters entirely overlap the simulated cluster, as in all other cases the weighting function W[(l, s*, q=0)] is less than one. In addition, the zero slope indicates the perfect test that always exactly locates the simulated cluster. A perfect test always rejects H[0], and detected clusters always satisfy l=s*=s (i.e., generate no false positive or negative). The AUC[EP] of a perfect test equals one, because in all other cases W[(l, s*, q)] is less than one. The intercept of an extended power curve can be regarded as a “quantitative” feature of CDT performance (all EDCs generating true positives contribute to the extended power), whereas the slope may be thought of as a “qualitative” feature of CDT performance, assessing location accuracy. The parameter q can, in fact, be regarded as a continuous indicator reflecting to what extent a detected cluster must accurately locate the simulated cluster to contribute to the performance measure. As shown in Figure 5, however, if an entire curve is condensed into a single measure (such as the AUC), some information is lost, because CDTs with different behaviors (i.e., curves with different shapes) can yield the same performance value. Consequently, the impact of CDT behavior on the extended power curve must be thoroughly explored, and behaviors relevant to a particular research question or application need to be defined. Through such exploration, the extent to which the AUC[EP] is a relevant performance measure, and the purposes for which it is most suited, can be determined. The EP has the advantage of requiring only one arbitrarily set parameter. In this work, the parameter L, which determines the maximum allowed size for EDCs, has been set to 30 SUs. Takahashi and Tango [12] initially proposed to set the limit L to one fourth or one third of region size (in numbers of SUs).
The authors stated that it was not unreasonable to assume that an actual cluster size would be less than such a limit. Such arguments are often open to dispute but in any case, it is an arbitrary decision. In our view, it would be more correct to set L according to the size s of the simulated cluster because, in the simulation, it is the “real” cluster. By construction, the consequences of this arbitrary setting are limited to the lowest values of q. Indeed, low values of q mean that EDCs with false positives are less penalized, and thus large clusters are allowed to contribute to EP. In our case (L=30), only values of extended power for q≤0.15 could be underestimated, and only if we consider that detected clusters more than 7.5 times larger than the simulated cluster (4 SUs) are still meaningful. Finally, compared with L set to 30, computing AUC[EP] with L equal to 221 (i.e. without an arbitrary limit) yields a difference in AUC[EP] always less than 10^-5 in this work. In producing our performance map, we chose to assign the AUC[EP] value of a single cluster of four SUs to a single SU. Because two clusters centered on neighboring SUs likely contain common SUs, and the AUC[EP] evaluates the detection of the entire cluster, visualizing performance on a single map can only be done in two ways. On the one hand, the AUC[EP] of a cluster can be assigned to each of its SUs, or on the other hand, it can be assigned to a single, albeit arbitrarily chosen, SU. In the first solution, as each SU has a high probability of being associated with more than one cluster, it is then necessary to compute a summary statistic, such as the mean, to produce a single map. In our view, it seems more comprehensible to arbitrarily assign the performance measure for the whole cluster to a single SU. As we simulated more or less circular clusters, the central SU of the cluster was naturally chosen for this assignment.
When simulating different cluster shapes, this choice will clearly be less obvious. We nevertheless recommend assigning the performance measure to the SU where the centroid of the cluster is located. Authors who have studied CDT behavior mentioned its dependence on epidemiological and geographical factors [1,5-11]. Consistent with previously published results, the performance of Kulldorff’s spatial scan, and more generally, all local CDTs, improves in study regions of small SUs, large populations, high incidence of the studied phenomenon and for clusters with strong relative risk. Furthermore, as shown in Figure 4 and Table 1, the variation in AUC[EP] among very similar simulated clusters (identical length, shape, population size and risk association) suggests that other factors influence CDT performance. To our knowledge, no other simulation study has been performed to both assess and visualize CDT performance over an entire region. Until now, authors have always considered a limited set of simulated clusters with particular epidemiological or geographical characteristics of interest. Consider the typical example of population size effect. To assess this effect, clusters are generally simulated in only a few arbitrarily chosen locations where a CDT behavior is assumed to be representative of its behavior in any other “similar” location. Usually, clusters in rural areas are compared with clusters in urban areas. Such studies are not sufficient to assess this factor, which, as we have shown (Figure 3), has a strong relationship with CDT performance. Furthermore, population size cannot explain in itself all the variability in CDT performance. However, some authors [21] have assessed performance on many randomly located clusters, which is a way to take into account the effect of spatial location without assessing it.
It enabled them to assess the effect of factors such as relative risk or spatial resolution without the potential confounding effect of the spatial location. Still, this approach, while accounting for this effect, cannot quantify it. Our systematic evaluation allows us to assess exactly when heterogeneity is most important, and thus within what population size range we can expect any other potential factor to have a maximum effect. In this work, we used predefined values for incidence and clustering characteristics (relative risk, shape, size and number) to generate performance maps. Epidemiologists should use reasonable values if a priori knowledge is available for some factors. However, the proper effect of any factor on CDT performance can be studied with this systematic evaluation, provided it uses a suitable measure such as the AUC[EP]. Given that CDT performance depends on geographical and epidemiological context, the performance of these methods should be explored prior to monitoring a particular phenomenon in a given region. This work enables epidemiologists to study global CDT performance over an entire region. Furthermore, from a research viewpoint, our method seems beneficial for unraveling the proper effect of many factors, particularly geographical ones, on CDT performance.

Abbreviations

AUCEP: Area under the curve of extended power; CDT: Cluster detection test; EDC: Eligible detected cluster; EP: Extended power; H0: Null hypothesis; H1: Alternative hypothesis; Iall: Incidence of all birth defects; Icv: Incidence of cardiovascular birth defects; RR: Relative risk.

Authors’ contributions

AG and LO conceived the design, performed the study and drafted the manuscript. AG was responsible for statistical programming and data analysis. JD, JG, IP, XL and JYB contributed to manuscript revision. All authors read and approved the final manuscript.

Acknowledgements

The authors are very grateful to Dr. Francannet who granted access to the CEMC database.
We thank Paul De Vlieger who provided access and technical support for AuverGrid on behalf of the particle physics laboratory, Blaise Pascal University.

References

1. Kulldorff M, Tango T, Park PJ: Power comparisons for disease clustering tests. Comput Stat Data Anal 2003, 42:665-684.
2. Sankoh OA, Becher H: Disease cluster methods in epidemiology and application to data on childhood mortality in rural Burkina Faso.
3. Robertson C, Nelson TA: Review of software for space-time disease surveillance. Int J Health Geogr 2010, 9:16.
4. Aamodt G, Samuelsen SO, Skrondal A: A simulation study of three methods for detecting disease clusters. Int J Health Geogr 2006, 5:15.
5. Ozonoff A, Jeffery C, Manjourides J, White LF, Pagano M: Effect of spatial resolution on cluster detection: a simulation study. Int J Health Geogr 2007, 6:52.
6. Jeffery C, Ozonoff A, White LF, Nuño M, Pagano M: Power to detect spatial disturbances under different levels of geographic aggregation. J Am Med Inform Assoc (JAMIA) 2009, 16:847-854.
7. Olson KL, Grannis SJ, Mandl KD: Privacy protection versus cluster detection in spatial epidemiology. Am J Public Health 2006, 96:2002-2008.
8. Puett R, Lawson A, Clark A, Aldrich T, Porter D, Feigley C, Hebert J: Scale and shape issues in focused cluster power for count data. Int J Health Geogr 2005, 4:8.
9. Goujon-Bellec S, Demoury C, Guyot-Goubin A, Hémon D, Clavel J: Detection of clusters of a rare disease over a large territory: performance of cluster detection methods. Int J Health Geogr 2011, 10:53.
10. Jacquez GM: Cluster morphology analysis. Spat Spatio-Temporal Epidemiol 2009, 1:19-29.
11. Tango T, Takahashi K: A flexibly shaped spatial scan statistic for detecting clusters. Int J Health Geogr 2005, 4:11.
12. Takahashi K, Tango T: An extended power of cluster detection tests. Stat Med 2006, 25:841-852.
13. Kulldorff M: A spatial scan statistic. Commun Stat Theor M 1997, 26:1481-1496.
14. Kulldorff M, Nagarwalla N: Spatial disease clusters: detection and inference. Stat Med 1995, 14:799-810.
15. Ribeiro SHR, Costa MA: Optimal selection of the spatial scan parameters for cluster detection: a simulation study. Spat Spatio-Temporal Epidemiol 2012, 3:107-120.
16. Cici C, Kim AY, Ross M, Wakefield J, Venkatraman ES: SpatialEpi: Performs various spatial epidemiological analyses. R package version 1.1. 2013. http://CRAN.R-project.org/package=SpatialEpi
17. R Core Team: R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2012. http://www.R-project.org/
18. Keitt TH, Bivand R, Pebesma E, Rowlingson B: rgdal: Bindings for the Geospatial Data Abstraction Library. 2012. http://CRAN.R-project.org/package=rgdal
19. AuverGrid. http://www.auvergrid.fr/
The 1999 Nobel Prize in Physics

Porter Johnson; 12 October 1999

This is an extension of some comments I made recently to the High School Physics Teachers in the SMILE program. The 1999 Nobel Prize in Physics was awarded to Tini Veltman and Gerard 't Hooft for proving that non-abelian gauge theories are renormalizable, using dimensional regularization. In these very intricate fundamental theories, "bare" or "undressed" particles [such as electrons and photons] are the fundamental entities of the theories as formulated, but one can observe only the "fully clothed" particles [real electrons are always enshrouded in photon clouds, and real photons are dressed in clouds of virtual electron-positron pairs]. Therefore, very elaborate procedures must be employed to obtain consistent results for observable quantities. The "dimensional regularization" concept involves formulating the theory and calculating quantities in "d" space-time dimensions, and carefully extirpating the unphysical parts in the limit d -> 4. Why such an arcane approach? So that you don't spoil the gauge symmetries of theories by arbitrary procedures, such as introducing fictitious wrong-signature particles, putting in a space-time or 4-momentum cut-off, etc. Formulating the gauge theory on a lattice is also a sensible procedure, since one can impose and maintain a lattice version of the gauge symmetry; the only problem is to determine when one is seeing the continuum limit of the lattice theory, and not simply artefacts of the lattice. The modern viewpoint is that renormalization is required in a quantum field theory for reasons that are physical, and one must do it whether it produces or involves a finite or infinite quantity. One may get a divergence whenever one calculates an unphysical quantity [that is, not measurable or preparable], but the divergences must cancel out of observable quantities. The issue is especially sensitive when massless particles are present. Clear??
If not, consider the following venerable observation:

You boil it in sawdust; you salt it in glue;
You condense it with locusts and tape;
Still keeping one principal object in view --
To preserve its symmetrical shape.

-- Fit the Fifth, The Hunting of the Snark, Lewis Carroll

Alfred Nobel [Swedish dynamite inventor and manufacturer] set up the physics prize in his will for a "device or discovery made during the preceding year", but it seems to have become more of a "lifetime achievement academy award". Incidentally, one of Tini's students [a German friend of mine with the improbable but authentic name of Alfred Hill] was killed in Pan Am Flight 103 over Lockerbie, Scotland, several years ago.
Difficult variable to solve for. includes hyperbolic cos. March 19th 2013, 03:51 PM Difficult variable to solve for. includes hyperbolic cos. I have the equation y = 1/k * ln(cosh(SQRT(g*k)*t)) My goal is to solve for k. This is for aerospace engineering and "k" includes a variable that I need. I have rearranged two ways already: 1) k = (acosh^2(e^yk))/(g*t^2) 2) k = ln(cosh(SQRT(g*k)*t))/y I have NO IDEA as I am only a freshman. How do I get k to a single side? I would greatly appreciate help with this. Best regards, March 19th 2013, 04:53 PM Prove It Re: Difficult variable to solve for. includes hyperbolic cos. You will not be able to get an explicit expression for k. March 19th 2013, 04:59 PM Re: Difficult variable to solve for. includes hyperbolic cos. How can I remotely solve for it? I have MathCad available for use; however, we are expected to do this and I don't know where to start. March 19th 2013, 05:08 PM Prove It Re: Difficult variable to solve for. includes hyperbolic cos. That depends, do you have values of y and t to put in? March 19th 2013, 05:32 PM Re: Difficult variable to solve for. includes hyperbolic cos. yes I do. I have a table of data from an accelerometer. March 19th 2013, 08:07 PM Prove It Re: Difficult variable to solve for. includes hyperbolic cos. You could try to use something like Newton's method...
Sun City West Algebra 2 Tutor ...This class will present how to solve for x and factor which is vital to math. Precalculus goes in depth with algebra included with some geometry or even physics. This is the first class in math that really starts applying to real life situations. 21 Subjects: including algebra 2, chemistry, calculus, physics ...With a bachelor's degree in engineering, I took numerous math and science courses in college, and subsequently acquired a mastery of mathematics. I began taking algebra at age 12, and finished my academic career taking Calculus IV and Differential Equations in college. As I mentioned in my prof... 20 Subjects: including algebra 2, English, writing, calculus ...I have tutored students for the AIMs test, occupational math and college level math. I hold a teaching certificate for K-8 elementary. I have 4 years experience teaching in k-8. 24 Subjects: including algebra 2, English, reading, writing ...In both the group and private setting, I've seen great improvement in swimming skills upon my instruction. I've been involved with volleyball since a young age, playing competitively and recreationally for many years. I've had training in refereeing competitive volleyball matches and even assist in teaching younger people volleyball fundamentals. 13 Subjects: including algebra 2, reading, Spanish, chemistry ...I studied Spanish in order to teach the English better. In 2009 I became a member of Mensa. So even I know that I'm quite capable of learning. 9 Subjects: including algebra 2, geometry, accounting, algebra 1
[SOLVED] For what values of x will F be undefined? April 9th 2010, 10:22 AM [SOLVED] For what values of x will F be undefined? Given the domain $0 \leq x \leq 2\pi$ in $F(x) = x \cdot tan(x) - \frac{sin(x)}{x}$, is it possible to for $tan(x)$ to cause $F$ to be undefined for $x = \frac{\pi}{2} \pm 2\pi K$, where $K$ is a natural number? Given the domain restriction, it would seem that ONLY $x = \{\frac{\pi}{2}, \frac{3\pi}{2}\}$ in $tan(x)$ would cause $F$ to be undefined. Divide by zero is trivially obvious and is not in question. The solution key says that $x$ in $tan(x)$ will cause $F$ to be undefined for all $x = \frac{\pi}{2} \pm 2\pi K$. $\frac{\pi}{2} + 2\pi 1$ is $\frac{5\pi}{2}$, which is greater than the $2\pi$ limit of $x$, and $\frac{\pi}{2} - 2\pi 1$ is $- \frac{3\pi}{2}$, which is less than the $0$ limit of $x$. Their solution would also exclude $x = \frac{3\pi}{2}$. Furthermore, the only value of $K$ that will keep $x$ within the domain restriction is $0$. So, why do they bring " $\pm 2\pi K$" into consideration at all? April 9th 2010, 10:58 AM tan(x) is not defined for $x = \frac{\pi}{2} \pm k\pi$ April 9th 2010, 01:04 PM Domain Restrictions? $x = \frac{\pi}{2} + K\pi$ works when $K$ = 1. That's $\frac{3\pi}{2}$. However, $x = \frac{\pi}{2} + 2\pi$ is $\frac{5\pi}{2}$AND $x = \frac{\pi}{2} - 1\pi$ is $- \frac{\pi}{2}$, both of which are outside of the domain restriction $0 \leq x \leq 2\pi$ . Is there something I'm not understanding about how domain restrictions work? (Nerd) April 9th 2010, 03:44 PM When you restrict the domain of a function, you only care about the range on the interval of restriction. So if pi/2 and 3pi/2 are the only answers on the interval, those would be your answers.
Portability: GHC only
Maintainer: Benedikt Schmidt <beschmi@gmail.com>
Safe Haskell: None

Guarded formulas

data Guarded s c v   Source

Constructors:

  GAto (Atom (VTerm c (BVar v)))
  GDisj (Disj (Guarded s c v))
  GConj (Conj (Guarded s c v))
  GGuarded Quantifier [s] [Atom (VTerm c (BVar v))] (Guarded s c v)
      Denotes ALL xs. as => gf or Ex xs. as & gf, depending on the Quantifier. We assume that all bound variables xs occur in the atoms as.

Instances:

  Apply LNGuarded
  Foldable (Guarded s c)
  (Eq s, Eq c, Eq v) => Eq (Guarded s c v)
  (Ord s, Ord c, Ord v) => Ord (Guarded s c v)
  (Show s, Show c, Show v) => Show (Guarded s c v)
  (Binary s, Binary c, Binary v) => Binary (Guarded s c v)
  (NFData s, NFData c, NFData v) => NFData (Guarded s c v)
  Ord c => HasFrees (Guarded (String, LSort) c LVar)

data GAtom t   Source

Atoms that are allowed as guards.

Constructors:

  GEqE (t, t)
  GAction (t, Fact t)

Instances:

  Eq t => Eq (GAtom t)
  Ord t => Ord (GAtom t)
  Show t => Show (GAtom t)

Smart constructors

ginduct :: Ord c => LGuarded c -> Either String (LGuarded c, LGuarded c)   Source

Try to prove the formula by applying induction over the trace. Returns Left errMsg if this is not possible. Returns a tuple of formulas: one formalizing the proof obligation of the base case and one formalizing the proof obligation of the step case.

formulaToGuarded_ :: LNFormula -> LNGuarded   Source

formulaToGuarded fm returns a guarded formula gf that is equivalent to fm under the assumption that this is possible. If not, then error is called.

   :: (LNAtom -> Maybe Bool)   Partial assignment for truth value of atoms.
   -> LNGuarded                Original formula
   -> Maybe LNGuarded          Simplified formula, provided some simplification was performed.

Simplify a Guarded formula by replacing atoms with their truth value, if it can be determined.
mapGuardedAtoms :: (Integer -> Atom (VTerm c (BVar v)) -> Atom (VTerm d (BVar w))) -> Guarded s c v -> Guarded s d w   Source

Map a guarded formula with scope info. The Integer argument denotes the number of quantifiers that have been encountered so far.

isSafetyFormula :: HasFrees (Guarded s c v) => Guarded s c v -> Bool   Source

Check whether the guarded formula is closed and does not contain an existential quantifier. This under-approximates the question whether the formula is a safety formula. A safety formula phi has the property that a trace violating it can never be extended to a trace satisfying it.

Conversions to non-bound representations

bvarToLVar :: Ord c => Atom (VTerm c (BVar LVar)) -> Atom (VTerm c LVar)   Source

Assuming that there are no more bound variables left in an atom of a formula, convert it to an atom with free variables only.

openGuarded :: (Ord c, MonadFresh m) => LGuarded c -> m (Maybe (Quantifier, [LVar], [Atom (VTerm c LVar)], LGuarded c))   Source

openGuarded gf returns Just (qua,vs,ats,gf') if gf is a guarded clause and Nothing otherwise. In the first case, qua is the quantifier, vs is a list of fresh variables, ats is the antecedent, and gf' is the succedent. In both antecedent and succedent, the bound variables are replaced by vs.

substBound :: Ord c => [(Integer, LVar)] -> LGuarded c -> LGuarded c   Source

substBound s gf substitutes each occurrence of a bound variable i in dom(s) with the corresponding free variable s(i)=x in all atoms in gf.

substBoundAtom :: Ord c => [(Integer, LVar)] -> Atom (VTerm c (BVar LVar)) -> Atom (VTerm c (BVar LVar))   Source

substBoundAtom s a substitutes each occurrence of a bound variable i in dom(s) with the corresponding free variable x=s(i) in the atom a.

substFree :: Ord c => [(LVar, Integer)] -> LGuarded c -> LGuarded c   Source

substFree s gf substitutes each occurrence of a free variable v in dom(s) with the corresponding bound variable i=s(v) in all atoms in gf.
substFreeAtom :: Ord c => [(LVar, Integer)] -> Atom (VTerm c (BVar LVar)) -> Atom (VTerm c (BVar LVar))   Source

substFreeAtom s a substitutes each occurrence of a free variable v in dom(s) with the bound variable i=s(v) in the atom a.

   :: HighlightDocument d
   => LNGuarded   Guarded formula.
   -> d           Pretty-printed formula.
The natural spectrogram, Re: Gaussian vs uniform noise audibility • To: AUDITORY@xxxxxxxxxxxxxxx • Subject: The natural spectrogram, Re: Gaussian vs uniform noise audibility • From: Eckard Blumschein <Eckard.Blumschein@xxxxxxxxxxxxxxxxxxxxxxxxxx> • Date: Tue, 27 Jan 2004 19:05:30 +0100 • Comments: To: Julius Smith <jos@CCRMA.STANFORD.EDU> • Delivery-date: Tue Jan 27 13:32:24 2004 • Reply-to: Eckard Blumschein <Eckard.Blumschein@xxxxxxxxxxxxxxxxxxxxxxxxxx> • Sender: AUDITORY Research in Auditory Perception <AUDITORY@xxxxxxxxxxxxxxx> At 09:13 27.01.2004 -0800, you wrote: >Yes, a "sliding cosine transform" can be used in place of the usual >"hopping short-time Fourier transform", and in that case, phase information >is contained in the time variation of the sliding transform >coefficients. I didn't realize you were doing something like that, I claim, you are doing the same, at least twice unconsciously in your inner ears. I would however argue that neither magnitude-phase representation nor time-frequency representation omits information, while the usual spectrogram is a faulty design that strips off phase. In other words, phase information is merely a fictitious component that belongs to an inappropriate model of the inner ear. I do not see any justification for attributing it to the actual real-valued analysis. so my >argument was based on different assumptions. Even the short-time Fourier >transform hopping by half its window length each frame can be stripped of >all phase information and still be used as the basis of a convincing sound >synthesis, at least for smoothly changing sounds. Yes, this is what the usual spectrogram does. Short-time means acceptable with respect to temporal resolution, while too short to resolve low frequency. Do you not believe that the natural spectrogram overcomes such discrepancy, too?
It is distinguished by: "no arbitrary window and no There are many variants of designing the windows and also many designs of wavelets, but there is only one physiological function of the inner ear and only one corresponding natural spectrogram. >At 03:08 AM 1/26/2004, Eckard Blumschein wrote: >>At 12:06 23.01.2004 -0800, Julius Smith wrote: >> >At 11:16 AM 1/23/2004, Eckard Blumschein wrote: >> >>First of all, forget the wrong idea that the cochlea performs a complex >> >>Fourier transform. >> > >> >This implies phase is discarded. >>No! Do not consider me a moron. You and largely the rest of the world grew >>up with the erroneous belief that there is no equivalent alternative to >>complex spectral analysis. Complex calculus is indeed tremendously useful. >>No matter whether one prefers magnitude and phase or real and imaginary >>part, one always has to consider both constituents except for the case one >>of them equals zero. Given, a function of time like 2A cos(omega t) does >>not have any imaginary part at all. Entrance into the complex plane is paid for by >>mandatory arbitrary omission of A exp(- i omega t) or A exp(i omega t). >>Neither the magnitude A nor the phase omega t can be discarded. >>At that point, you will object: Aren't anti-symmetrical functions, i.e. >>functions of time with odd symmetry like sine, also needed in frequency analysis? >>No again, on condition that causality has been taken into account. In brief: >>Future signals cannot be analyzed yet. Even sin(omega t) can be continued >>as its mirror into fictive future time like an even function. Of course, >>this wouldn't hold for its derivative or antiderivative. However, our topic >>is just frequency analysis within the cochlea. >> >However, phase information does exist as >> >the phase of the basilar membrane vibration,... >>I don't take this fallacy amiss. It has to do with the missing natural >>justification for fixing any reference point on the time scale. Our ears >>are not synchronized with anything.
When Descartes introduced Cartesian >>coordinates, he imagined a spatially infinite world. Time is >>correspondingly believed to also expand from minus infinity to plus >>infinity. However, elapsed time definitely ends at the 'NOW', being the only >>clever choice for a natural time scale. Take successive snapshots of a >>sinusoid, each at the NOW. Try the same with any cochlear pattern. By chance, >>you might observe sin or cos. In other words, so-called linear phase is >>arbitrary, as is time. I don't deny that delay or the corresponding phase difference >>is reasonable with respect to a second signal or a different reference. >>Without such reference, a sinusoidal function cannot be identified as >>sin, cos or something complex in between, and the reference is lacking in >>nature. The only natural reference is the NOW, which is steadily on the >>move. This causes the trouble of permanently lagging window position in >>case of arbitrarily centered complex Fourier transform. >> >Since basilar membrane filtering is generally >> >modeled as linear, any corresponding short-time-Fourier-transform would >> >have to be complex to model basilar membrane filtering. Subsequent >> >half-wave rectification does not eliminate all phase information, >>An old specialist of power electronics like me cannot retrace how you >>imagine rectification of a complex-valued function of time. >>My wife is a teacher for adults. Perhaps she would more heedfully >>anticipate what you and many others are feeling rather than thinking. I >>will try and elucidate how engineers handle a similar case: Consider an >>ideal sinusoidal voltage as a real input into a circuit that may also >>contain a first (small) resistor and a reactance in series. Parallel to the >>first resistor there are a diode and a much larger second impedance in >>series. The voltage across the first resistor is a complex quantity with >>respect to the source but pretty independent of the diode. However,
However, >>piecewise linear calculation requires to refer to the current through the >>diod as a real one. In case of hearing, phase of the stimulus does not >>matter since it anyway relates to an arbitrary reference. >>As a rule, recognized experts like you tend to be cautious against >>radically uncommon views. Therefore I would like to ask you: Look at >>pattern of BM motion (e.g. T. Ren's) or of firing in the auditory nerve. >>They do not resemble magnitude, nothing to say about phase. As far as I can >>judge, they resemble the pattern of the natural (real-valued) spectrogram. >>More in detail: Magnitude cannot account for the different patterns with >>rarefaction vs. condensation clicks while positve and negative amplitudes >>of the natural spectrogram clearly differ from each other. >>In all, I didn't find any tenable argument in favor of complex cochlear >>function. On the other hand, Fourier cosine transform, the natural >>spectrogram and joint autocorrelation already resolved a lot of so far >>poorly understood questions. >>Incidentally, I recall a textbook denying any difference between time >>domain and frequency domain. I do not fully share this opinion. In >>particular, I consider it necessary to clearly distinguish between real >>world and fictitious complex domain.
Comparison Function - GNU libavl 2.0.2

The C language provides the void * generic pointer for dealing with data of unknown type. We will use this type to allow our tables to contain a wide range of data types. This flexibility does keep the table from working directly with its data. Instead, the table's user must provide means to operate on data items. This section describes the user-provided functions for comparing items, and the next section describes two other kinds of user-provided functions.

There is more than one kind of generic algorithm for searching. We can search by comparison of keys, by digital properties of the keys, or by computing a function of the keys. In this book, we are only interested in the first possibility, so we need a way to compare data items. This is done with a user-provided function compatible with tbl_comparison_func, declared as follows:

2. <Table function types 2> =

/* Function types. */
typedef int tbl_comparison_func (const void *tbl_a, const void *tbl_b,
                                 void *tbl_param);

See also 4. This code is included in

A comparison function takes two pointers to data items, here called a and b, and compares their keys. It returns a negative value if a < b, zero if a == b, or a positive value if a > b. It takes a third parameter, here called param, which is user-provided.

A comparison function must work more or less like an arithmetic comparison within the domain of the data. This could be alphabetical ordering for strings, a set of nested sort orders (e.g., sort first by last name, with duplicates by first name), or any other comparison function that behaves in a "natural" way. A comparison function in the exact class of those acceptable is called a strict weak ordering, for which the exact rules are explained in Exercise 5.

Here's a function that can be used as a comparison function for the case that the void * pointers point to single ints:

3. <Comparison function for ints 3> =

/* Comparison function for pointers to ints.
   param is not used. */
int
compare_ints (const void *pa, const void *pb, void *param)
{
  const int *a = pa;
  const int *b = pb;

  if (*a < *b)
    return -1;
  else if (*a > *b)
    return +1;

  return 0;
}

This code is included in 134.

Here's another comparison function for data items that point to ordinary C strings:

/* Comparison function for strings.
   param is not used. */
int
compare_strings (const void *pa, const void *pb, void *param)
{
  return strcmp (pa, pb);
}

See also: [FSF 1999], node "Defining the Comparison Function"; [ISO 1998], section 25.3, "Sorting and related operations"; [SGI 1993], section "Strict Weak Ordering".

1. In C, integers may be cast to pointers, including void *, and vice versa. Explain why it is not a good idea to use an integer cast to void * as a data item. When would such a technique be acceptable? [answer]

2. When would the following be an acceptable alternate definition for compare_ints()? [answer]

int
compare_ints (const void *pa, const void *pb, void *param)
{
  return *((int *) pa) - *((int *) pb);
}

3. Could strcmp(), suitably cast, be used in place of compare_strings()? [answer]

4. Write a comparison function for data items that, in any particular table, are character arrays of fixed length. Among different tables, the length may differ, so the third parameter to the function points to a size_t specifying the length for a given table. [answer]

*5. For a comparison function f() to be a strict weak ordering, the following must hold for all possible data items a, b, and c:

• Irreflexivity: For every a, f(a, a) == 0.
• Antisymmetry: If f(a, b) > 0, then f(b, a) < 0.
• Transitivity: If f(a, b) > 0 and f(b, c) > 0, then f(a, c) > 0.
• Transitivity of equivalence: If f(a, b) == 0 and f(b, c) == 0, then f(a, c) == 0.

Consider the following questions that explore the definition of a strict weak ordering.

a. Explain how compare_ints() above satisfies each point of the definition.
b. Can the standard C library function strcmp() be used for a strict weak ordering?
c. Propose an irreflexive, antisymmetric, transitive function that lacks transitivity of equivalence.

*6. libavl uses a ternary comparison function that returns a negative value for <, zero for ==, positive for >. Other libraries use binary comparison functions that return nonzero for < or zero for >=. Consider these questions about the differences:

a. Write a C expression, in terms of a binary comparison function f() and two items a and b, that is nonzero if and only if a == b as defined by f(). Write a similar expression for a > b.
b. Write a binary comparison function "wrapper" for a libavl comparison function.
c. Rewrite bst_find() based on a binary comparison function. (You can use the wrapper from above to simulate a binary comparison function.)
Why do research in Pure Mathematics?

Many non-mathematicians have some sense of the importance of Applied Mathematics (the adjective "Applied" being rather suggestive!) and Statistics, but they may view Pure Mathematics as something with little use. Consequently, we feel it is important to say a little about Pure Mathematics and the reasons for doing research in it. More precisely, we address the following basic questions:

• What is Mathematics?
• Why do people do research in Mathematics?
• What are the benefits to society of research in Pure Mathematics?

What is Mathematics?

It is not possible to give an answer to this question that will satisfy all mathematicians, since so many fields in mathematics are so different from each other. Indeed, we are tempted to throw our hands up in the air and echo the words of Justice Potter Stewart by simply saying, "I know it when I see it." But if we wish to say something more informative than this, we could say that Mathematics is the logical and abstract study of pattern.

Why do people do research in Mathematics?

One of the main reasons that mathematicians do research is because they appreciate the beauty of the particular types of abstract patterns involved in their own research, and enjoy discovering non-obvious aspects of these complex patterns. Human beings are naturally quite good at recognizing all sorts of patterns, and through all our senses we seem to have an in-built fascination with pattern: we like to cover the inside of our houses with nicely patterned wallpaper or paintwork, and the outside with nicely patterned brickwork or other finishes; we like the complex patterns of music (whether that music is the latest popular music or something classical); and scientists in all fields get a thrill when a formerly unpredictable phenomenon is seen to be governed by some pattern.
Evolutionary biologists would likely say that there are good evolutionary reasons for this: the weather, the seasons, habits of predators, and many other things exhibit patterns, and over the course of human evolution, it has been advantageous for us to be good at recognizing these patterns. The fascination that mathematicians have for their research is thus arguably a mere extension of the fascination that all people have with pattern.

When humans examine a complex pattern, whether by admiring the decorations of the Alhambra in Granada, or by listening to a fugue by Bach, they occasionally notice patterns that they had not noticed before, a phenomenon that they generally find delightful. This is precisely the reaction that a mathematician feels when he/she makes a breakthrough in understanding some complex abstract patterns. Indeed, Mathematics is analogous to an extra sense that many people lack. When our brains process and make sense out of the complicated array of words and symbols on a page of Mathematics, we begin to understand the patterns and appreciate their beauty. That beauty has no more to do with the complicated sequence of symbols on the page than the beauty of the Alhambra has to do with the complex sequence of photons that hit our eyes when we look at the Alhambra's decorations. In both cases, the brain must do a substantial amount of processing to understand the complexity of the input data, and appreciate the patterns. Trying to explain the beauty of Mathematics to someone with little mathematical training is as difficult as trying to explain the beauty of the Alhambra to a blind person.
Indeed, some people may get as much enjoyment out of solving a crossword puzzle or getting a high score in a video game as a mathematician does in discovering something new in Mathematics, but people typically do not make careers out of those other activities. The fact is that Mathematics research is extremely useful!

What are the benefits to society of research in Pure Mathematics?

When we are teaching a child to count, we normally count a variety of different things with them: toys, Lego bricks, cartoon animals, and so on. The child eventually understands the abstract principle behind these concrete instances of number, and can then go on to count all the important things that need to be counted later in life. By understanding the abstract principles, they are well prepared not just for putting numbers to use in their everyday activities, but also when they need to count something new (such as scoops of baby formula when they themselves have a child). If someone said to us, "Why are you teaching your child to count toys? Since they will likely be a bank teller like yourself, wouldn't it be better to teach them how to count money?," most of us would consider such a comment ridiculous: we know at least subconsciously that number is an abstract concept, and that the counting of toys and cartoon animals is merely a means to the key goal of understanding number in the abstract. When we understand the abstract concept well, we can then put it to use in all sorts of areas.

Understanding a piece of abstract mathematical theory is like understanding numbers in the abstract. We usually gain the understanding by looking at special cases. And, although an abstract mathematical theory, just like the abstract concept of number, is divorced from the real world, it very often has the potential to be useful in a variety of different areas: its abstraction is a strength because it maximizes its potential usefulness.
One example of the power of abstraction is provided by Laplace's equation, one of the most studied and best understood (non-trivial) partial differential equations in mathematics. A variety of phenomena in astronomy, electromagnetism, and fluid flow are governed by this equation, as is the steady state heat distribution in an object. By understanding the abstract mathematical equation, we simultaneously gain an understanding of all these phenomena.

CT scanners are one of the greatest advances in modern medical technology. These scanners form a three-dimensional image from a collection of two-dimensional images taken from different angles. The same principle is employed in reflection seismology to create three-dimensional images of the earth's subsurface, and in certain types of electron microscopy. In all cases, the underlying mathematics involves the inverse Radon transform, or related transforms. When Johann Radon and others investigated such transforms (from 1917 onwards), they were designed for applications within Pure Mathematics (specifically, harmonic analysis and related areas), but their importance in the "Real World" was not recognised.

There are many other examples of mathematical research that for a long time seemed to have little relevance to the real world, but which eventually became of great importance. The pursuit of a proof of Euclid's fifth postulate appeared increasingly quixotic as many people tried and failed over two millennia. When Farkas Bolyai, one of the mathematicians who tried and failed to resolve the issue, discovered that his son János had also begun to work on this problem, he wrote to János:

    For God's sake, I beseech you, give it up. Fear it no less than sensual passions because it too may take all your time and deprive you of your health, peace of mind, and happiness in life.
But János persisted, and he and other 19th Century mathematicians made important breakthroughs that eventually led to modern manifold theory, which was central to Einstein's development of General Relativity. After Einstein's mathematician friend, Marcel Grossmann, explained manifold theory to him, Einstein wrote:

    ... in all my life I have not laboured nearly so hard, and I have become imbued with great respect for mathematics, the subtler part of which I had in my simple-mindedness regarded as pure luxury until now.

Another example is provided by Number Theory, long considered the most inapplicable of all areas of mathematics. It was widely felt that the only reason to do research in Number Theory was to discover its beauty. This all changed with modern encryption theory, which is an essential part of e-commerce and relies heavily on the properties of prime numbers and other aspects of Number Theory.

For further related reading, we refer the reader to two wonderful philosophical articles about the usefulness of Mathematics in the Physical Sciences and Engineering. The first one, entitled The Unreasonable Effectiveness of Mathematics in the Natural Sciences, was published by the physicist Eugene Wigner in 1960. It argues that the way in which the mathematical structure of a physical theory often points the way to further advances in that theory, and even to empirical predictions, is not a coincidence but must reflect some larger and deeper truth about both Mathematics and Physics. A follow-up article with the same title, which was written by the Mathematician/Computer Scientist Richard Hamming in 1980, makes several attempts at answering Wigner's questions but, in the end, Hamming admits that all of his points combined are still insufficient to explain why Mathematics is so useful and so often leads to further advances in other fields.

We should add that the division between Pure and Applied Mathematics is a rather false one.
Any reasonable attempt to list the important areas of modern Pure Mathematics research would include many topics of great importance in applications. Conversely, an applied mathematician working, for instance, on efficient implementations of a numerical solution technique is often led naturally to study related abstract mathematical problems (for instance, the properties of certain special types of matrices).

Finally, outside of its direct applicability to the world around us, mathematical research helps us to improve and refresh the quality of what we teach, and certainly the world needs a large number of graduates with a wide variety of mathematical skills to fill the wide variety of positions that require some Mathematics or the ability to analyze problems logically.
Discrete Mathematics - Mathematics For Computer Science
by Ram Sagar Mourya, Software Developer, on Oct 15, 2009

This is a document about discrete mathematics, covering all the topics.
Date: May 2006
Creator: Ghenciu, Eugen Andrei
Description: In this dissertation we study graph directed Markov systems (GDMS) and limit sets associated with these systems. Given a GDMS S, by the Hausdorff dimension spectrum of S we mean the set of all positive real numbers which are the Hausdorff dimension of the limit set generated by a subsystem of S. We say that S has full Hausdorff dimension spectrum (full HD spectrum) if the dimension spectrum is the interval [0, h], where h is the Hausdorff dimension of the limit set of S. We give necessary conditions for a finitely primitive conformal GDMS to have full HD spectrum. A GDMS is said to be regular if the Hausdorff dimension of its limit set is also the zero of the topological pressure function. We show that every number in the Hausdorff dimension spectrum is the Hausdorff dimension of a regular subsystem. In the particular case of a conformal iterated function system we show that the Hausdorff dimension spectrum is compact. We introduce several new systems: the nearest integer GDMS, the Gauss-like continued fraction system, and the Renyi-like continued fraction system. We prove that these systems have full HD spectrum. A special attention is given to the backward continued fraction ...
Contributing Partner: UNT Libraries

Date: May 2004
Creator: Muller, Kimberly O.
Description: In this paper, exhaustivity, continuity, and strong additivity are studied in the setting of topological Riesz spaces. Of particular interest is the link between strong additivity and exhaustive elements of Dedekind σ-complete Banach lattices. There is a strong connection between the Diestel-Faires Theorem and the Meyer-Nieberg Lemma in this setting. Also, embedding properties of Banach lattices are linked to the notion of strong additivity.
The Meyer-Nieberg Lemma is extended to the setting of topological Riesz spaces, and uniform absolute continuity and uniformly exhaustive elements are studied in this setting. Counterexamples are provided to show that the Vitali-Hahn-Saks Theorem and the Brooks-Jewett Theorem cannot be extended to submeasures or to the setting of Banach lattices.
Contributing Partner: UNT Libraries

Date: May 2006
Creator: Alhaddad, Shemsi I.
Description: The Iwahori-Hecke algebras of Coxeter groups play a central role in the study of representations of semisimple Lie-type groups. An important tool is the combinatorial approach to representations of Iwahori-Hecke algebras introduced by Kazhdan and Lusztig in 1979. In this dissertation, I discuss a generalization of the Iwahori-Hecke algebra of the symmetric group that is instead based on the complex reflection group G(r,1,n). Using the analogues of Kazhdan and Lusztig's R-polynomials, I show that this algebra determines a partial order on G(r,1,n) that generalizes the Chevalley-Bruhat order on the symmetric group. I also consider possible analogues of Kazhdan-Lusztig polynomials.
Contributing Partner: UNT Libraries

Date: December 2004
Creator: Ghenciu, Petre Ion
Description: In this dissertation we study the Hamiltonicity and the uniform-Hamiltonicity of subset graphs, subspace graphs, and their associated bipartite graphs. In the 1995 paper "The Subset-Subspace Analogy," Kung states the subspace version of a conjecture. The study of this problem led to a more general class of graphs. Inspired by Clark and Ismail's work in the 1996 paper "Binomial and Q-Binomial Coefficient Inequalities Related to the Hamiltonicity of the Kneser Graphs and their Q-Analogues," we defined subset graphs, subspace graphs, and their associated bipartite graphs. The main emphasis of this dissertation is to describe those graphs and study their Hamiltonicity.
The results on subset graphs are presented in Chapter 3, on subset bipartite graphs in Chapter 4, and on subspace graphs and subspace bipartite graphs in Chapter 5. We conclude the dissertation by suggesting some generalizations of our results concerning the pancyclicity of the graphs.
Contributing Partner: UNT Libraries

Date: August 2006
Creator: Howard, Tamani M.
Description: In this paper we use the Sobolev steepest descent method introduced by John W. Neuberger to solve the hyperbolic Monge-Ampère equation. First, we use the discrete Sobolev steepest descent method to find numerical solutions; we use several initial guesses, and explore the effect of some imposed boundary conditions on the solutions. Next, we prove convergence of the continuous Sobolev steepest descent to show local existence of solutions to the hyperbolic Monge-Ampère equation. Finally, we prove some results on the Sobolev gradients that mainly arise from general nonlinear differential equations.
Contributing Partner: UNT Libraries

Date: December 2009
Creator: Bajracharya, Neeraj
Description: Given a real N by N matrix A, write p(A) for the maximum angle by which A rotates any unit vector. Suppose that A and B are positive definite symmetric (PDS) N by N matrices. Then their Jordan product {A, B} := AB + BA is also symmetric, but not necessarily positive definite. If p(A) + p(B) is obtuse, then there exists a special orthogonal matrix S such that {A, SBS^(-1)} is indefinite. Of course, if A and B commute, then {A, B} is positive definite. Our work grows from the following question: if A and B are commuting positive definite symmetric matrices such that p(A) + p(B) is obtuse, what is the minimal p(S) such that {A, SBS^(-1)} is indefinite? In this dissertation we will describe the level curves of the angle function mapping a unit vector x to the angle between x and Ax for a 3 by 3 PDS matrix A, and discuss their interaction with those of a second such matrix.
Contributing Partner: UNT Libraries

Date: December 2012
Creator: Herath, Dushanthi N.
Description: Receiver operating characteristic (ROC) analysis is one of the most widely used methods in evaluating the accuracy of a classification method. It is used in many areas of decision making such as radiology, cardiology, machine learning, as well as many other areas of medical sciences. The dissertation proposes a novel nonparametric estimation method of the ROC surface for the three-class classification problem via Bernstein polynomials. The proposed ROC surface estimator is shown to be uniformly consistent for estimating the true ROC surface. In addition, it is shown that the map from which the proposed estimator is constructed is Hadamard differentiable. The proposed ROC surface estimator is also demonstrated to lead to the explicit expression for the estimated volume under the ROC surface. Moreover, the exact mean squared error of the volume estimator is derived and some related results for the mean integrated squared error are also obtained. To assess the performance and accuracy of the proposed ROC and volume estimators, Monte-Carlo simulations are conducted. Finally, the method is applied to the analysis of two real data sets.
Contributing Partner: UNT Libraries

Date: August 2001
Creator: Huettenmueller, Rhonda
Description: Let (Ω, Σ, µ) be a finite measure space and X a Banach space with continuous dual X*. A scalarly measurable function f: Ω→X is Dunford integrable if for each x* ∈ X*, x*f ∈ L1(µ). Define the operator Tf: X* → L1(µ) by Tf(x*) = x*f. Then f is Pettis integrable if and only if this operator is weak*-to-weak continuous. This paper begins with an overview of this function. Work by Robert Huff and Gunnar Stefansson on the operator Tf motivates much of this paper. Conditions that make Tf weak*-to-weak continuous are generalized to weak*-to-weak continuous operators on dual spaces.
For instance, if Tf is weakly compact and if there exists a separable subspace D ⊆ X such that for each x* ∈ X*, x*f = x*fχD µ-a.e., then f is Pettis integrable. This notion is generalized to bounded operators T: X* → Y. To say that T is determined by D means that if x*|D = 0, then T(x*) = 0. Determining subspaces are used to help prove certain facts about operators on dual spaces. Attention is given to finding determining subspaces for a given T: X* → Y. The kernel of T and the adjoint T* of T are used ...
Contributing Partner: UNT Libraries

Date: December 2001
Creator: Lindsay, Larry J.
Description: The term quantization refers to the process of estimating a given probability by a discrete probability supported on a finite set. The quantization dimension Dr of a probability is related to the asymptotic rate at which the expected distance (raised to the rth power) to the support of the quantized version of the probability goes to zero as the size of the support is allowed to go to infinity. This assumes that the quantized versions are in some sense "optimal" in that the expected distances have been minimized. In this dissertation we give a short history of quantization as well as some basic facts. We develop a generalized framework for the quantization dimension which extends the current theory to include a wider range of probability measures. This framework uses the theory of thermodynamic formalism and the multifractal spectrum. It is shown that at least in certain cases the quantization dimension function D(r)=Dr is a transform of the temperature function b(q), which is already known to be the Legendre transform of the multifractal spectrum f(a). Hence, these ideas are all closely related and it would be expected that progress in one area could lead to new results in another. It would ...
Contributing Partner: UNT Libraries

Date: May 2004
Creator: Ghenciu, Ioana
Description: In this dissertation we study the structure of spaces of operators, especially the space of all compact operators between two Banach spaces X and Y. Work by Kalton, Emmanuele, Bator and Lewis on the space of compact and weakly compact operators motivates much of this paper. Let L(X,Y) be the Banach space of all bounded linear operators between Banach spaces X and Y, K(X,Y) be the space of all compact operators, and W(X,Y) be the space of all weakly compact operators. We study problems related to the complementability of different operator ideals (the Banach space of all compact, weakly compact, completely continuous, resp. unconditionally converging) operators in the space of all bounded linear operators. The structure of Dunford-Pettis sets, strong Dunford-Pettis sets, and certain spaces of operators is studied in the context of the injective and projective tensor products of Banach spaces. Bibasic sequences are used to study relative norm compactness of strong Dunford-Pettis sets. Next, we use Dunford-Pettis sets to give sufficient conditions for K(X,Y) to contain c0.
Contributing Partner: UNT Libraries
Footings and Boussinesq Stress Contour Chart - Transportation

I'm going through Lindeburg's Civil PE Sample Examination. Question #12 of the morning session group has me stumped. The beginning of the solution solves for the footing width based on the column load and foundation bearing pressure: width = (N/p)^(1/2). I cannot seem to find this formula anywhere in my references. Here's the question with the possible answers:

    A square foundation supports a column load of 800 kN. The soil beneath the footing is generally homogeneous. If the foundation bearing pressure from this load is reduced from 400 kPa to 100 kPa (the column load remaining constant), the change in stress at a depth of 3 m below the foundation center will be most nearly:

    A. a decrease in stress of 20 kPa
    B. a decrease in stress of 10 kPa
    C. an increase in stress of 10 kPa
    D. an increase in stress of 20 kPa

One of my old concrete design books references ACI-15.2.2 as a possible reference for sizing footings. Any help in locating a document that explains that formula would be appreciated. Thanks.
Robin Milner

Winner of the ACM TuringAward in 1991: For three distinct and complete achievements:

1) LCF, the mechanization of DanaScott's Logic of Computable Functions, probably the first theoretically based yet practical tool for machine assisted proof construction (see AutomatedTheoremProving);
2) ML [MlLanguage], the first language to include PolymorphicTypeInference together with a type-safe exception-handling mechanism;
3) CCS [CalculusOfCommunicatingSystems?], a general theory of concurrency.

In addition, he formulated and strongly advanced full abstraction, the study of the relationship between operational and denotational semantics.

One half of .

Did research on like the .

Author of the books "Communicating and Mobile Systems: The Pi-Calculus" ISBN 0521658691 and "Communication and Concurrency" ISBN 0131150073

Also see

is not to be confused with RobertMilne who co-authored the book "A Theory of Programming Language Semantics" with

CategoryPerson CategoryAuthor

of this page (last edited October 22, 2011) or FindPage with title or text search