content
stringlengths
86
994k
meta
stringlengths
288
619
Palo Alto Science Tutor Find a Palo Alto Science Tutor ...I'm flexible in my teaching style, and will work with parents, schools and students to determine the format of tutoring most likely to bring them success. In all cases, however, I emphasise risk taking, self-sufficency and critical thinking when approaching problems. My aim is to help students to become independent learners who eventually won't need my help. 11 Subjects: including physics, chemistry, calculus, statistics ...I also am talented at breaking down difficult material and explaining it in a way easy to understand, tailored to the level the student is at. As I've always said, "If you can't explain it to an intelligent 12 year old, then you don't really understand it" I explain to my students because I u... 24 Subjects: including organic chemistry, ACT Science, anatomy, philosophy ...I often share the story of my humble beginnings in rural India with students when I’d have to walk several miles a day to go to the nearest school and how, that early education opened my eyes to the world in ways I could never have imagined, The commitment to learn led me to be the first in my vi... 1 Subject: physics ...I noticed that the top learners and earners ask this question: "Why do I do this? If it's done, how would I feel and how would my life be if I consistently achieve these goals?" Relate this to tutoring, I love playing this vision building exercise with my tutee: "If you have an A in class, would... 20 Subjects: including sociology, English, writing, reading ...I also taught physics in an Oakland high school for seven years. Thus, I have unusually extensive experience teaching physics. I have a BS in chemistry from UC Berkeley. 9 Subjects: including physics, algebra 2, chemistry, geometry
{"url":"http://www.purplemath.com/Palo_Alto_Science_tutors.php","timestamp":"2014-04-20T13:31:23Z","content_type":null,"content_length":"23689","record_id":"<urn:uuid:e47def4c-5c5f-4fe4-9751-ff43ccbd668c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Term for an "almost regular" sequence up vote 2 down vote favorite Let $R$ be a ring (commutative, with unit), $M$ an $R$-module, and $x_1, \dotsc, x_n \in R$. Consider the following two conditions: 1. For all $i$, the homomorphism $$\frac{M}{(x_1, \dotsc, x_{i-1})M} \stackrel{x_i}{\longrightarrow} \frac{M}{(x_1, \dotsc, x_{i-1})M}$$ is injective. 2. $$\frac{M}{(x_1, \dotsc, x_n) M} \neq 0.$$ Taken together, these conditions give the definition for the $x_i$ to form an $M$-regular sequence. However, it is sometimes useful to consider Condition 1 by itself. For instance, this comes up in Eisenbud, Commutative Algebra, Exercise 6.7 (page 174 in my copy). Eisenbud sort of, but not really, calls such a sequence an "(almost) regular sequence." Is there a standard term (or, for that matter, any reasonable term with a not-too-obscure reference) for a sequence of elements of $R$ (possibly contained in some fixed ideal of $R$, especially if $R$ is local) that satisfy Condition 1, but not necessarily Condition 2? (Other related notions of "not-quite-regular sequence" would also be of interest.) terminology reference-request ac.commutative-algebra I will just note that the term "almost regular has been taken": projecteuclid.org/… – Hailong Dao Jul 31 '11 at 22:57 add comment 1 Answer active oldest votes A reasonable term with a not-too-obscure reference: that is called a weak $M$-sequence by Bruns and Herzog in Cohen-Macaulay Rings (p. 3 in the edition I have). up vote 6 down vote This is certainly a valid reference (+1). The trouble is, based on google searches, that "weak M-sequence" is more commonly used to refer to a sequence in which condition (i) is replaced by the condition that everything annihilating $x_i$ annihilates your entire ideal (e.g., your maximal ideal, if $R$ is local). – Charles Staats Jul 31 '11 at 2:43 add comment Not the answer you're looking for? Browse other questions tagged terminology reference-request ac.commutative-algebra or ask your own question.
{"url":"http://mathoverflow.net/questions/71690/term-for-an-almost-regular-sequence","timestamp":"2014-04-18T20:47:12Z","content_type":null,"content_length":"52772","record_id":"<urn:uuid:37c64dc7-f031-41c1-ba85-cd633d4e8d5b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
POLL: How much re-balancing bonus do you expect? The basic Boglehead is to pick a target asset allocation and then stick with it. But over time, portfolio asset weights will drift from the target. At some point, re-balance back to target. This could be done periodically, like every year, month or day, or by threshold bands, like +/-5% or 10%. For example, if the target portfolio had two funds, Total Stock Market (VTI) and Total Bond Market (BND), and the target was equal weighting, 50/50. Over time, this may drift to 60/40. If you had 10% rebalancing bands, you might rebalance back to 50/50, or maybe partway to 50/50. There's a million different re-balancing schema. Some refer to a "re-balancing bonus", or re-balancing return, which would be the return from selling the winners and buying the losers. How much extra return do you expect to get from re-balancing? Note that the poll answers allow for both positive (rebalancing bonus) and negative (rebalancing penalty) responses. If you like, feel free to describe your re-balancing method and explain why you answered the way you did. Last edited by grayfox on Thu Dec 20, 2012 11:41 am, edited 1 time in total. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: How much re-balancing bonus do you expect? I went with 0 to negative 1, but it could be much worse (more negative) than that. If I really never rebalanced back from equities, I imagine my equity-heavier-and-heavier portfolio will more likely than not do much better than my balanced stock/bond one. As for me: I rebalance with new money in fits and starts as it comes in, and may or may not if I need to sell things to do it, since a lot of my port is taxable. In theory, I have bands. In practice, I have tax consequences. I haven't hit a problem yet that couldn't be solved with new money added, but may at some point, so we'll see. "In the absence of clarity, diversification is the only logical strategy" -= Larry Swedroe Re: How much re-balancing bonus do you expect? Equities, in the long-run, have outperformed bonds, and the expectations are that they will continue to do so. Hence, on average, you are rebalancing more often from equities to bonds which should diminish returns. However, if one looks at it from a set allocation point (say 50-50 target), it's not the "rebalancing bonus" that is the driver, IMHO. It's more about sticking with your portfolio need of risk in your written investment plan. Re: How much re-balancing bonus do you expect? I voted 'zero to negative one' because most of my rebalancing over the past 25 years has been from stocks to bonds. Since stocks tend to perform better than bonds over the long run, I probably would have had a higher return, but riskier portfolio, if I left the money in stocks. I rebalance to manage risk. However, when rebalancing between U.S. and foreign stocks, I would expect a 'zero to positive one' percent gain due to the 'buy low, sell high' tendency. Re: How much re-balancing bonus do you expect? I expect a bonus of between 1% and 2%. I do not rebalance based on the calendar nor do I use 5% to 10%. Instead, I use the RBD method which seems to work extremely well. It's all about short-term opportunistic rebalancing due to a short-term change in one's asset allocation, uh, I mean opportunistic rebalancing, uh I mean rebalancing, uh I mean market timing. Re: How much re-balancing bonus do you expect? I voted Approximately 0% as the expected re-balancing return. Now, I have no doubt that if there are two portfolios, one rebalanced and one not, that they will end up with different results. I don't believe the rebalancing return ex-post will be zero. But I think the rebalancing return, which I would characterize as a return from a trading strategy, can be positive or negative, and it depends on luck and timing. I found a paper by Vanguard, Portfolio Rebalancing in Theory and Practice that looked into re-balancing. They conclude that rebalancing subtracts from returns in trending markets and adds to returns in mean-reverting markets. If you are in a trending market, the rebalancing strategy that rebalances less often will have higher rebalancing return. For example,10% bands instead of 5%. But if you wait too long before rebalancing and the trend reverses, then you missed out. It all seems to be in the timing. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: How much re-balancing bonus do you expect? livesoft wrote:I expect a bonus of between 1% and 2%. I do not rebalance based on the calendar nor do I use 5% to 10%. Instead, I use the RBD method which seems to work extremely well. It doesn't look like there are any votes for 1 to 2% Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: How much re-balancing bonus do you expect? ^ I didn't say I voted, did I? It's all about short-term opportunistic rebalancing due to a short-term change in one's asset allocation, uh, I mean opportunistic rebalancing, uh I mean rebalancing, uh I mean market timing. Re: How much re-balancing bonus do you expect? One of the reasons I created this poll was that I saw this old thread Opportunistic Rebalancing: A New Paradigm from 2008 had been revived Now if you study Modern Finance in university, one of the results is that return of a portfolio is equal to the weighted sum of the returns of the components. Rp = w1*R1 + w1*R2 + w3*R3 + ... wn*Rn As far as I have seen up to this point, balancing return is not part of the modern finance theory. They don't add in rebalancing bonus to Rp. When you solve for the efficient frontier (EF), the highest returning portfolio is 100% of the highest returning asset. The lowest returning portfolio is 100% of the lowest returning asset. there are no portfolios on the efficient frontier that have higher return than the highest return asset. That's why I would call the rebalancing return profit & loss from trading. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: How much re-balancing bonus do you expect? grayfox wrote:One of the reasons I created this poll was that I saw this old thread Opportunistic Rebalancing: A New Paradigm from 2008 had been revived Now if you study Modern Finance in university, one of the results is that return of a portfolio is equal to the weighted sum of the returns of the components. Rp = w1*R1 + w1*R2 + w3*R3 + ... wn*Rn As far as I have seen up to this point, balancing return is not part of the modern finance theory. They don't add in rebalancing bonus to Rp. When you solve for the efficient frontier (EF), the highest returning portfolio is 100% of the highest returning asset. The lowest returning portfolio is 100% of the lowest returning asset. there are no portfolios on the efficient frontier that have higher return than the highest return asset. That's why I would call the rebalancing return profit & loss from trading. That's a really interesting result. I almost wonder if the next poll should be something like: do you expect the bond component of your portfolio to increase returns? I'd never presumed it might in my case, but figured it could actually do that in the case of someone with 80-90% stocks, say, and long-term bonds - i.e. I thought perhaps, without ever bothering to run numbers, that some portfolios might indeed beat their top-asset return if they had something volatile enough to facilitate rebalancing at extreme points. "In the absence of clarity, diversification is the only logical strategy" -= Larry Swedroe Re: POLL: How much re-balancing bonus do you expect? I voted approximately zero. I have doubts about rebalancing back to a DIY AA. Is there any reason to believe an initially arbitrary AA is efficient or will be efficient over the long run? Re: POLL: How much re-balancing bonus do you expect? I have not taken any university level finance classes. But based on my reading of Boglehead books, I think I disagree with what you posited. The return of a portfolio cannot simply be the weighted average return of the components. The compounded return of the portfolio is always less than the weighted average annual return of the components due to portfolio volatility. In fact, the more you can dampen portfolio volatility through investing in multiple asset classes, the closer the compounded return will be to the average annual return of the components. The positive effect of dampened portfolio volatility on annualized (compounded) returns is what I would view as the rebalancing bonus. The potential benefit of adding an asset class to a portfolio depends on more than its return. It depends on expected return, correlation to other portfolio components, volatility. Curious to hear your thoughts. Re: POLL: How much re-balancing bonus do you expect? At this point I will not be making any rebalancing decisions now. My IPS allows me to make rebalancing decisions up to 1 quarter past the end of the year (Mar 31) if rebalancing is not indicated at the end of the year. I will be in postponement mode from Jan-Mar31, just like last year. Part-Owner of Texas | | “The CMH-the Cost Matters Hypothesis -is all that is needed to explain why indexing must and will work… Yes, it is that simple.” John C. Bogle Re: POLL: How much re-balancing bonus do you expect? I agree with GrayFox. Re: POLL: How much re-balancing bonus do you expect? The ideal is to build a portfolio which will deliver, say, 8% per year with zero volatility. Then your planning is straightforward. In the real world the portfolio SD is going to cause the portfolio return to fall below 8%, so we try to minimize deviations from 8% in order to keep our plan on track. But there's no way a portfolio made up of a 6% class and an 8% class is going to have higher portfolio return than a portfolio made up of only the 8% class. I'm looking backward here. That said, it is possible the 6/8 combo has better risk-adjusted return but it will never equal the 8% performance of the single class portfolio. Re: POLL: How much re-balancing bonus do you expect? Doing a full backward-looking mean-variance optimization the only inputs are the returns and the weights for each investment. The interaction among the classes/investments is derived from the In a roundabout way you can see why backward-looking analyses have significant limitations. That's not to say crystal balls don't have limitations, but... The portfolio return is the sum of the weighted returns of the individual classes. The returns of the individual classes are determined at the end of the period, after each class SD has been accounted for (rolled up into the class return)...and consequently rolled up into the portfolio return. IOW, for return purposes, the SD has already been accounted for and is thus irrelevant...that's why all the interactive stuff doesn't matter when calculating portfolio return. I think everything you stated after your denial of how portfolio return is determined is basically true, if you step back and look at everything after the fact you'll see what I mean. Are you still with me? Oh well I guess I have to realize I can dump on presentations and interpretations in uncomfortable ways, always to the benefit of the readers. Good luck to all, and if that means me learning a lesson or two so be it. Re: POLL: How much re-balancing bonus do you expect? Random Walker wrote: The compounded return of the portfolio is always less than the weighted average annual return of the components due to portfolio volatility. Not true. Suppose I have two investments that have an expected return of 8% but one has twice the volatility of the other. I open two accounts, one with each investment, and wait long enough, say, 30 or 40 years. Both accounts will be up 8%; that's what expected return means. In the meantime, one account will have moved up and down twice as much as the other, but the end result will be the same. Now let's suppose that the investments both have really high volatility but they happen to have a 1.0 negative correlation with each other so that when one zigs the other zags. At the end of the long time period, both investments will be up 8%, no more no less, but if I plot the total of the two investments for each year, it will be a nice smooth straight line instead of moving up and down on either side of the 8% line. Volatility has no effect on the expected value of investments over a long term; Grayfox is absolutely correct. On the other hand, reducing volatility means that any year's returns will be closer to the long term expected value but the expected value itself won't change. Re: POLL: How much re-balancing bonus do you expect? I have run the numbers on the effect of rebalancing my accounts from 2002 through mid 2012. It can be a bit tedious accounting for dividends, but I am quite confident of the numbers. The rebalancing benefit averaged about 0.73% of the portfolio value per year based on round trip transactions. My asset allocation was 50/50 and I maintained fairly tight rebalancing bands. It was not much fun rebalancing into the decline of 2008/2009, but the "bonus" eventually realized made up for some of the discomfort. Volatility can be your friend, but only if you rebalance. Volatility is my friend Re: POLL: How much re-balancing bonus do you expect? I wonder what the bonus would be for someone who rebalanced on the best day for rebalancing of the year each year. Would that not set an upper bound for a rebalancing bonus for the once-a-year rebalancer? Would that not be a number to strive for? I suppose it might depend on stock:bond ratio as well. It's all about short-term opportunistic rebalancing due to a short-term change in one's asset allocation, uh, I mean opportunistic rebalancing, uh I mean rebalancing, uh I mean market timing. Re: POLL: How much re-balancing bonus do you expect? Dale_G wrote:I have run the numbers on the effect of rebalancing my accounts from 2002 through mid 2012. It can be a bit tedious accounting for dividends, but I am quite confident of the numbers. The rebalancing benefit averaged about 0.73% of the portfolio value per year based on round trip transactions. My asset allocation was 50/50 and I maintained fairly tight rebalancing bands. Rebalancing does better when you have two volatile assets with similar returns, as we had with stocks and bonds over the past ten years. If stocks do substantially better than bonds, rebalancing is likely to lower returns. Re: POLL: How much re-balancing bonus do you expect? Random Walker wrote:GrayFox, I have not taken any university level finance classes. But based on my reading of Boglehead books, I think I disagree with what you posited. The return of a portfolio cannot simply be the weighted average return of the components. The compounded return of the portfolio is always less than the weighted average annual return of the components due to portfolio volatility. In fact, the more you can dampen portfolio volatility through investing in multiple asset classes, the closer the compounded return will be to the average annual return of the components. The positive effect of dampened portfolio volatility on annualized (compounded) returns is what I would view as the rebalancing bonus. The potential benefit of adding an asset class to a portfolio depends on more than its return. It depends on expected return, correlation to other portfolio components, volatility. Curious to hear your thoughts. What you are talking about is that the arithmetic mean is less than the geometric mean when there is volatility. That is true even for one asset. For example, if you have twelve monthly returns, take the arithmetic mean, and get 5%. The geometric mean will be less than 5%, unless every month was exactly 5%, i.e. no volatility. Any variation will give lower geometric return. And, yes, the theory is to minimize volatility at some return. Calculate the minimum volatility across the range of returns, and that is the efficient frontier. The surprising thing is that you can get portfolio with lower volatility than the least volatile asset. So 10/90 stocks/bonds might be lower volatility than 0/100, due to lack of correlation. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: POLL: How much re-balancing bonus do you expect? Dale_G wrote:I have run the numbers on the effect of rebalancing my accounts from 2002 through mid 2012. It can be a bit tedious accounting for dividends, but I am quite confident of the numbers. The rebalancing benefit averaged about 0.73% of the portfolio value per year based on round trip transactions. My asset allocation was 50/50 and I maintained fairly tight rebalancing bands. It was not much fun rebalancing into the decline of 2008/2009, but the "bonus" eventually realized made up for some of the discomfort. Volatility can be your friend, but only if you rebalance. I calculated the return 1) without re-balancing and 2) with daily re-balancing for a Harry Browne portfolio for 2011. The portfolio with daily re-balancing had 0.89% higher return. But I only looked at one year, nor did I look at other re-balancing rules. The Vanguard paper found that in trending markets you get a negative re-balancing return and in mean reverting markets you get a positive rebalancing return. If you went from 2007 to 2010, there was some mighty mean reversion. As I said earlier, I have no doubt that, ex-post, you can measure a positive rebalancing return if you time it well. But that has nothing to do with portfolios. You can get it with one security. Suppose IBM returned 10% in 2012. Well if it fluctuated in price, I might have got 12% from that stock just by timing my transactions. I could have sold when it rose, and bought back more shares when it fell. Or even without ever selling, if I'm added new contributions, just buying on dips. I just don't know if you can say in advance if you will see a bonus or a penalty from re-balacning. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: How much re-balancing bonus do you expect? Random Musings wrote:Equities, in the long-run, have outperformed bonds, and the expectations are that they will continue to do so. Hence, on average, you are rebalancing more often from equities to bonds which should diminish returns. However, if one looks at it from a set allocation point (say 50-50 target), it's not the "rebalancing bonus" that is the driver, IMHO. It's more about sticking with your portfolio need of risk in your written investment plan. Diminish returns as compared to what? It is true that the un-rebalanced portfolio could do better than the rebalanced one, but only at the expense of increased risk. A rebalanced portfolio can still beat the arithmetic mean return of the un-rebalanced portfolio. Here is a quote from an old EF article. "Only when long term return differences among asssets exceed 5 percent do nonrebalanced portfolios provide superior returns, and then only at the cost of increased risk" I recommend it: Re: POLL: How much re-balancing bonus do you expect? I'm not really buying what I'm reading here. I certainly could be wrong. But it seems to me that the return of a REBALANCED portfolio cannot simply be the weighted average of the individual components. One needs to look at dollar weighted returns, not simply time weighted. That is the whole potential benefit of rebalancing: buying relatively low and selling relatively high at the asset class level. If I am wrong, please continue to convince me. I want to understand this stuff. So far though I'm sticking with compounded return less than average annual. And the goal of multi-asset class investing with rebalancing is to get the annualized return closer to the average annual return. But need to look at dollar weighted, not just time weighted. Thanks. Re: POLL: How much re-balancing bonus do you expect? Oh, my! Rebalancing into a trending market. Rebalancing into a reverting market. When will we ever run out of lipstick for the pig of market timing? Over the last decade, suppose you invested the same amount monthly into a 50/50 mix of stocks and bonds. 1. You invested and did not rebalance. 2. You rebalanced monthly to keep your allocation at 50/50. Option 2, continuous rebalancing, is superior to the tune of about 0.3% per year. In my view, you may never rebalance, and perhaps see your portfolio AA drift to higher returns (and higher risks). Or, you may continually rebalance and see a lower risk and probably better returns. Anything in between is market timing. Déjà Vu is not a prediction Re: POLL: How much re-balancing bonus do you expect? Rebalancing, for me, is a risk management strategy, not an optimization strategy. If I don't rebalance my risk exposure gets off. So the more relevant question for me, not necessarily everyone, is how much lower is the probability of a catastrophic loss if I rebalance? I see that benefit as very large. I realize risk and return are two blades of the same scissors, but when I rebalance I am thinking risk more than return. Seems at a minimum you'd want to look at the return bonus and the risk benefit, not just one. In any event the counterfactual is a bit bizarre. Having chosen an AA why wouldn't you rebalance to maintain it? Re: POLL: How much re-balancing bonus do you expect? Random Walker wrote:I'm not really buying what I'm reading here. I certainly could be wrong. But it seems to me that the return of a REBALANCED portfolio cannot simply be the weighted average of the individual components. That's exactly what I said. Rebalanced portfolio, by buying and selling, will end up with a different return than simply holding a portfolio. It could be higher or lower. In other words, the rebalancing return can be positive or negative. But it has nothing to do with modern portfolio theory. It is explained by old fashioned buying low and selling high. If stocks go lower after you sell stocks to rebalance, you will have higher return than holding. <- rebalancing bonus If stocks continues higher after you sell to rebalance, you will have lower return than holding. <- rebalancing penalty Here is a link to Modern portfolio theory. You will not see anything about a rebalancing bonus on that page. Below is the formula for expected return from that page, the weighted sum of the Last edited by grayfox on Mon Dec 24, 2012 11:45 am, edited 1 time in total. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: POLL: How much re-balancing bonus do you expect? Aptenodytes wrote:Rebalancing, for me, is a risk management strategy, not an optimization strategy. If I don't rebalance my risk exposure gets off. So the more relevant question for me, not necessarily everyone, is how much lower is the probability of a catastrophic loss if I rebalance? I see that benefit as very large. I realize risk and return are two blades of the same scissors, but when I rebalance I am thinking risk more than return. Seems at a minimum you'd want to look at the return bonus and the risk benefit, not just one. In any event the counterfactual is a bit bizarre. Having chosen an AA why wouldn't you rebalance to maintain it? That's the bigger picture. If I have a target portfolio that is 30/70, then it make sense to maintain that by rebalancing. For whatever reasons, I chose 30/70, and it wouldn't make sense to have 20/80, 50/00, 0/100 or anything else. Ideally, that would mean rebalancing as often as practical. With mutual funds that might be every two months because of frequent trading rules. But it seems that some are advising, "No, delay rebalancing for a couple of years. Then you will get the maximum re-balancing bonus." I can cite books that recommend that. I've also read advice to choose 50/50 asset allocation because it will maximize the re-balancing bonus. So the poll is to see how many here believe that they can get a re-balancing bonus by timing their re-balancing moves. From the poll, it looks like over 40% of respondents believe that they will get a rebalancing bonus by well-timed re-balancing. BTW, the poll is set up so that you can change your vote, if anyone is having second thoughts about the re-balancing bonus. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: POLL: How much re-balancing bonus do you expect? So the poll is to see how many here believe that they can get a re-balancing bonus by timing their re-balancing moves. If they're that smart, why don't they unbalance? Déjà Vu is not a prediction Re: POLL: How much re-balancing bonus do you expect? Gray fox, Thanks for the comments. I'm sure I'm going to learn something from this. But I'm still disagreeing. The equation you post is the expected return ex ante of a portfolio comprised of individual components. It does not account for potential affects of internal rebalancing. I agree that if you start at time point 1 and finish at time point 2, then the portfolio return of a portfolio with no additions/subtractions and no rebalancing will be the weighted average of the components. But I believe portfolio volatility will have a huge effect on compounded annualized returns and that volatility will be dampened by multi asset class investing. So I guess what I'm saying is the following. Multi asset class investing will dampen portfolio volatility and bring compounded return closer to average annual return. Rebalancing will maintain the risk profile and can have positive or negative effect on returns depending on the course of market, momentum, reversion to mean. Ok now decipher this and straighten me out. Thanks. Re: POLL: How much re-balancing bonus do you expect? Random Walker wrote:Gray fox, Thanks for the comments. I'm sure I'm going to learn something from this. But I'm still disagreeing. The equation you post is the expected return ex ante of a portfolio comprised of individual components. It does not account for potential affects of internal rebalancing. I agree that if you start at time point 1 and finish at time point 2, then the portfolio return of a portfolio with no additions/subtractions and no rebalancing will be the weighted average of the components. But I believe portfolio volatility will have a huge effect on compounded annualized returns and that volatility will be dampened by multi asset class investing. So I guess what I'm saying is the following. Multi asset class investing will dampen portfolio volatility and bring compounded return closer to average annual return. Rebalancing will maintain the risk profile and can have positive or negative effect on returns depending on the course of market, momentum, reversion to mean. Ok now decipher this and straighten me out. Thanks. I don't disagree with anything you just wrote. So I'm not sure what you disagreeing about. Are you disagreeing that we agree? Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: POLL: How much re-balancing bonus do you expect? umfundi wrote: So the poll is to see how many here believe that they can get a re-balancing bonus by timing their re-balancing moves. If they're that smart, why don't they unbalance? They do. On RBDs. They get a better return than Wellington, but have more bonds in their portfolios. It's all about short-term opportunistic rebalancing due to a short-term change in one's asset allocation, uh, I mean opportunistic rebalancing, uh I mean rebalancing, uh I mean market timing. Re: POLL: How much re-balancing bonus do you expect? Look at the Vanguard paper cited above. Look at Table 3, the one that contains results for a no reblance condition. Rebalancing does win in average return, by an astonishing, massive, .014%. (I hope that had good numerical accuracy control in their simulation.) Reblancing does somewhat better when you look at volatility. All of the rebalancing conditions have around 10% volatility as versus 12% for a pure buy and hold strategy. My conclusion: rebalance for less jumping around, not for a better return. If I can get the same return with less volatility why not go for it? If I want better returns, as usual, I'll have to take on more risk. Re: POLL: How much re-balancing bonus do you expect? In theory, re-balancing could results in a total higher return than any return from the sub asset class. I still remember an interesting question:"how do you make money from a random walking stock market, which has zero expected return" So if you have 2 asset class: 1. cash, stable value, no interest, expected return is zero 2. a stock, random walking, expected return is zero if you keep the cash and stock to be 50/50 (or any other fixed ratio) and keep rebalancing every time the stock price changes. ignoring any transaction fees, you are guaranteed to make money whenever the stock price come back to it's original price. you can use Excel to simulate and prove it. the weighted return thing only applies to portfolios without any transactions. Imagine I can catch all up and downs, I can have a return much higher than the reported annualized return of the specific fund. Rebalancing tells you when to catch the up and downs, so theoretically, can results higher returns than the weighted average of all the sub asset class. Re: POLL: How much re-balancing bonus do you expect? I expect upwards of about 1%. But I don't consider it a bonus; just a return from outlandishly out of balance portfolio assets returning to mean post rebalancing. So far it seems to be working. -- Re: POLL: How much re-balancing bonus do you expect? "Rebalancing equals noise." John Bogle Re: POLL: How much re-balancing bonus do you expect? Matigas wrote:"Rebalancing equals noise." John Bogle Bogle never said that. He might have said not to worry about small drifts in your equity ratio and that that formulaic rebalancing with precision is not necessary, but so far as I can find he he never made that exact Déjà Vu is not a prediction Re: POLL: How much re-balancing bonus do you expect? This paper from 2008 that was posted on another thread has the best information I've seen on rebalancing Opportunistic Rebalancing A couple of points: 1. Observe that some periods had negative rebal-return and some periods had positive rebal-return. Periods with trending markets had negative rebal-return; periods with reversals, had positive rebal-return. Rebalancing can add to return or subtract from return, depending on the market conditions, i.e. trending or mean-reverting. 2. The amount of rebal-return varied with the rebalancing algorithm. It must also vary with the portfolio asset allocation, as well. The paper only looked at one asset allocation, 60/40 stock/bonds. 3. The often-recommended annual rebalance with 0% bands does not look like the best method. The paper shows 20% bands with look-interval of 1 day to 2 weeks was best. Last edited by grayfox on Wed Dec 26, 2012 11:39 pm, edited 1 time in total. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: POLL: How much re-balancing bonus do you expect? grayfox wrote:... 3. The often-recommended annual rebalance with 0% bands does not look like the best method. The paper shows 20% bands with look-interval of 1 day to 2 weeks was best. ... ... of the limited methods that the paper looked at. It's all about short-term opportunistic rebalancing due to a short-term change in one's asset allocation, uh, I mean opportunistic rebalancing, uh I mean rebalancing, uh I mean market timing. Re: POLL: How much re-balancing bonus do you expect? [removed by author - didn't read OPs link so my comment wasn't helpful] Last edited by CaliJim on Wed Dec 26, 2012 11:57 pm, edited 2 times in total. Re: POLL: How much re-balancing bonus do you expect? Yes, I believe you have a rebalancing algorithm that is triggered by a large one-day fall in the market. I would categorize that as event driven re-balancing. Not covered by the paper. I guess the theory is that the market over-reacts and is likely to rebound within a few days, so it is a behavioral theory. But it definitely appears that a short look-interval works best. Check your portfolio once a week to see of it needs re-balancing. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: POLL: How much re-balancing bonus do you expect? I think the answer is that bonehead (simply periodic) rebalancing is worth 0.2 - 0.5% a year in a diversified portfolio. There is a vocal minority that thinks RBD market timing is rebalancing, and they can do much better than a fraction of a percent. There is another minority that thinks rebalancing will be negative, because they are prepared to tolerate drift in their AA to higher risk. My own conclusion is that it is worth for me to pay a professional a low fee to pay attention to details like this, and others. He's a passive manager, but I believe his attention outweighs my Déjà Vu is not a prediction Re: POLL: How much re-balancing bonus do you expect? grayfox wrote:This paper from 2008 that was posted on another thread has the best information I've seen on rebalancing Opportunistic Rebalancing A couple of points: 1. Observe that some periods had negative rebal-return and some periods had positive rebal-return. Periods with trending markets had negative rebal-return; periods with reversals, had positive rebal-return. Rebalancing can add to return or subtract from return, depending on the market conditions, i.e. trending or mean-reverting. 2. The amount of rebal-return varied with the rebalancing algorithm. It must also vary with the portfolio asset allocation, as well. The paper only looked at one asset allocation, 60/40 stock/ 3. The often-recommended annual rebalance with 0% bands does not look like the best method. The paper shows 20% bands with look-interval of 1 day to 2 weeks was best. That's a very interesting paper and it makes sense. But I couldn't tell if it is just a backward-looking algorithm derived from the data that has a ring of reason dressed as a halo that makes it look I wonder if there's an easy way to set up an automatic alert that tells you when a rebalance should be triggered based on hitting a 20% band? Re: POLL: How much re-balancing bonus do you expect? Here is what I am thinking: Without Re-balancing You want to invest a sum of money for some period. You chose some collection assets for your portfolio. Each asset has an expected return, R_i, and volatility, sigma_i^2. One way or another you select the weights, w_i of each asset. Then you make the investment. The expected portfolio return R_p is the weighted sum of the asset returns. The expected volatility of the portfolio, sigma_p^2, is a little more complicated to calculate, because it has to take into account not only the volatility of each asset, but all the covariances between pairs of assets. At the end of the period, each asset will have some return and volatility. If you held the assets through the whole period, the portfolio return and volatility could be calculated using the same equation above, the weighted sum of the asset returns. With Re-balancing Now what happens with rebalancing is that you don't hold through the whole period. You sell some and buy some. You are changing the portfolio partway though the period. The return and volatility of the re-balanced portfolio will be different from the buy and hold. Obviously the difference will depend on the re-balancing algorithm. Let's call the difference between the buy-and-hold portfolio and a re-balanced portfolio, the rebalancing return, DELTA. R_r = Rp + DELTA DELTA is random variable. Depending on your portfolio and rebalancing algorithm, DELTA is drawn from probability distribution with some parameters like mean and variance. As I said above, this mean and variance will depend on the rebalancing algorithm. DELTA can be positive or negative. We've seen that it tends to be negative in trending markets and positive in mean reverting markets. Re-stating the poll question: For some portfolio and re-balancing algorithm, what is the mean value of DELTA, mean(DELTA)? Is it positive, zero or negative? Another question is: How big is the variance of DELTA, var(DELTA) ? Since the return of the rebalanced portfolio R_rp = Rp + DELTA, then variance of the rebalanced portfolio equals the var(R_p) + var(DELTA), if R_p and DELTA are independent. This result is from basic statistics. See Variance of Differences of Random Variables In other words, re-balancing increases the variance of the portfolio by the variance of DELTA. Тише едешь, дальше будешь. (Quieter you-go, further you-will-be.) Re: POLL: How much re-balancing bonus do you expect? Gray fox, Now you're seeing what I was trying to say. Additions and subtractions to the portfolio via rebalancing make it a lot more complicated than just a weighted average return issue. Rebalancing helps in reversion to the mean times and hurts when momentum dominates. Lots of variables
{"url":"http://www.bogleheads.org/forum/viewtopic.php?f=10&t=107255&newpost=1567454","timestamp":"2014-04-21T04:41:53Z","content_type":null,"content_length":"122901","record_id":"<urn:uuid:17abc68c-2210-486f-8d49-ab841296685f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
NKS 2006 Conference & Minicourse: Franklin Squares Franklin Squares A Chapter in the Scientific Studies of Magical Squares Peter Loly University of Manitoba The enumeration of eighth-order Frankin squares by Schindel, Rempel, and Loly will be published in the Proceedings of the Royal Society A, in July or August 2006. While classical magic squares with the entries 1::n2 must have the magic sum for each row, column, and the main diagonals, there are some interesting relatives for which these restrictions are increased or relaxed. These include serial squares with sequential filling of rows that are always pandiagonal (having all parallel diagonals to the main ones on tiling with the same magic sum, also called broken diagonals), pandiagonal logic squares derived from Karnaugh maps [Loly and Steeds, Int. J. Math. Ed. Sci. Tech. 36(4), 2005, 375–388, with an application to Chinese patterns, Loly, The Oracle - The Journal of Yijing Studies, 2(12), January 2002, 2–13)], and Franklin squares that are not required to have any diagonal properties, but have equal half row and column sums and two-by-two quartets, as well as magical bent diagonals. We modifed Walter Trump’s backtracking strategy for other magic square enumerations from GB32 to C++ to perform the Franklin count [a datafile of the 1,105,920 distinct squares is available], and also have a simplifed demonstration of counting the 880 fourth-order magic squares using Mathematica [a draft notebook]. Our early explorations of magic squares considered as square matrices used M athematica to study their eigenproperties. We have also studied the moment of inertia and multipole moments of magic squares and cubes (treating the numerical entries as masses or charges), finding some elegant theorems [Rogers and Loly, Am. J. Phys., 72(6), 786–9, June 2004, and European J. Phys. 26 (2005) 809–813], and have shown how to easily compound smaller squares into very high-order ones, e.g. 12,544 (= 28 x 72) th order [Chan and Loly, Math. Today, 38(4), 113–118, August 2002]. Brée and Ollerenshaw have a patent on using relatives of Franklin squares for cryptography, while a group at Siemens in Munich using pandiagonal logic squares has another pending. Other possible applications include dither matrices for image processing and providing tests for developing CSP (constraint satisfaction problem) solvers for difficult problems. This presentation will be based on a spectacular 3’-by-4’ poster of the Franklin work. [presentation materials <1> <2>]
{"url":"http://www.wolframscience.com/conference/2006/presentations/loly.html","timestamp":"2014-04-21T13:31:10Z","content_type":null,"content_length":"9624","record_id":"<urn:uuid:a7d74c94-6fa2-4a3c-9a55-e2361ac47e07>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - symmetry, groups and gauge theories in the standard model 1)how do these particle structures realte to the mathematics of group theory? (note that my knowledge of group theory is very limited) The physics of color is not understandable if all one knows is that there are 3 colors. One must really understand something about SU(3). SU(3) is the group of 3 x 3 unitary matrices with determinant 1. This is the symmetry group of the strong force. What this means is that, as far as the strong force is concerned, the state of a particle is given by a vector in some vector space on which elements of SU(3) act as linear (in fact unitary) operators. We say the particle "transforms under some representation of SU(3)". For example, since elements of SU(3) are 3 x 3 matrices (and these matrices can be constructed using the given generators in your formula), they can act on column vectors by matrix multiplication. This gives a 3-dimensional representation of SU(3). The quarks are represented by this 3*1-matrix. The antiquarks can be represented by row vectors because we can multiply a 3*3-matrix with a row vector on the LEFT side of the matrix. The gluons are represented by the socalled adjoint representation which consits out of traceless 3*3matrices. It can be seen that a row of such a matrix represents one quark colour and a colom of such a matrix represents a anti-colour. each gluon is therefore constructed out of a colour-anticolour combination. Given that there are 3 such colours and anticolours, you would expect 9 gluons. However there are only eight . Can you see why ??? ps : you know that the colours are red green and blue and it is the postulate of QCD that the sum of these three represents colour-neutrality !!! This is the main law that needs to be respected : in interactions : the sum of all involved colours must be WHITE 2) Has the symmetry breaking in gauge theories have anything to do with the broken symmetries (eg the masses of the particles in the multplets differ, the symmetries are not complete) in particle First of all, mass itself breaks symmetry because it mixes the two types of chirality which are fundamentally different in nature. So every elementary particle is massless. Read more on this in my journal. I also suggest the text i wrote on the Higgsfield, which is the system that accounts for generated mass after symmetry breaking in QFT. Just look it up in my journal ps : the text on elementary particles is on page 3 or 4 here you can find a nice course on Lie Groups IN DUTCH
{"url":"http://www.physicsforums.com/showpost.php?p=476873&postcount=2","timestamp":"2014-04-19T15:15:29Z","content_type":null,"content_length":"10800","record_id":"<urn:uuid:fa66e96e-6df6-4536-9b87-7a8c934c2db5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
I haven't the slightest idea how to solve this I need to solve for this but I don't know how (i've probably forgot) cos4x=sin4x $\frac{sin(4x)}{cos(4x)} =1$ $tan(4x) =1$ $4x =arctan(1) = \frac{\pi}{4} + n\pi$ where $n \in \mathbb Z$ $x = \frac{\pi}{16} + \frac{n\pi}{4}$ Perhaps I should broaden the question: I have to find the area between the two curves $f(x)=sin(4x) and g(x)=cos(4x)$ for $<br />$ $x = \frac{\pi}{2}$ ≤ x ≤ 0 Now, from what I understand, I need to find where the graphs intersect in order to determine which function need to become absolute. Is this step necessary at all?
{"url":"http://mathhelpforum.com/calculus/69286-i-haven-t-slightest-idea-how-solve.html","timestamp":"2014-04-20T10:59:18Z","content_type":null,"content_length":"35226","record_id":"<urn:uuid:c5c94415-0a8e-4618-a558-c2b1ff0a6471>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Flying Machines of the 21st Century? February 6, 2012 First of three responses by Aram Harrow Dave Bacon began the blog The Quantum Pontiff in September 2003. Thus he was among the earliest voices promoting the theory of quantum computation, and explaining it brilliantly in ways non-experts can understand. He now works at Google in the Seattle area, while his blog is staffed by “A College of Quantum Cardinals”: Charlie Bennett, Steve Flammia, and our second debate participant, Aram Today Aram begins a three-part rebuttal to Gil Kalai’s post with conjectures about entangled noise as an impediment to building quantum computers. He has chosen Bacon as “patron saint” for this first part. In Bacon’s own three-part catechism on quantum error correction, he scribed that the cardinal question is: Why is classical computation possible? To quote Bacon in an earlier post: “When we dig down into the bow[e]ls of our computers we find that in their most basic form, these computers are made up of parts which are noisy or have uncertainties arising from quantum theory.” Thus if Gil’s concern about entangled errors is actual, why are they not defeating these parts? Cristopher Moore was the first of several to say that a better analogy than perpetual motion is heavier-than-air flying machines, which were also thought by some to be infeasible on first principles. This all goes to say that if you take a laptop computer running Red Hat on a jet with quantum-level navigation components, nihil obstat—no trenchant problem needed to be overcome. This part discusses differences between quantum and classical error correction. One key difference is that qubits can suffer “de-phasing” noise which effectively turns them into classical bits. Bacon answered Robert Alicki, the “patron” for Gil’s post, on this point before, but last year noted his own “de-phasing” by leaving quantum computing research to work in software engineering. He still occasionally posts as “Pontifex Praetorium” on his old blog, but more often can be found at his new blog, Pontiff++. Let us give the floor again to our debaters. First Response by Aram Harrow There are many reasons why quantum computers may never be built. We may fail to bring the raw error rate below the required threshold while controlling and interacting large numbers of qubits. We may find it technically possible to do this, but too costly to justify the economic value of the algorithms we are able to implement. We might even find an efficient classical simulation of quantum mechanics, or efficient classical algorithms for problems such as factoring. The one thing I am confident of is that we are unlikely to find any obstacle in principle to building a quantum computer. My defense of quantum computing, unlike Gil’s, is not based so much on original thinking, but rather the consensus picture from the field of quantum information. Other quantum bloggers, such as Scott Aaronson and Dave Bacon, have argued similar points (and see also the excellent comment discussion on this series’ opening post), but I am not explicitly speaking for anyone else. My main contribution is in specifically responding to Gil’s conjectures. For background, I like Daniel Gottesman’s explications of QECC (quantum error correcting codes) and FTQC (fault-tolerant quantum computing). Gil has advanced conjectures to argue that FTQC will fail in real systems—because errors either follow the computation, becoming maliciously correlated with the error-correction, or because they correlate for other intrinsic reasons. By all means, see his original post, his paper, or his more technical paper.) My response is in three parts—only the first part appears in this post: 1. The classical fault-tolerance test. 2. Nature is subtle, but not malicious. 3. Throwing (imaginary) money at the problem. The Classical Fault-Tolerance Test If you want to prove that 3-SAT requires exponential time, then you need an argument that somehow doesn’t apply to 2-SAT or XOR-SAT. If you want to prove that the permanent requires super-polynomial circuits, you need an argument that doesn’t apply to the determinant. And if you want to disprove fault-tolerant quantum computing, you need an argument that doesn’t also refute fault-tolerant classical computing. At first glance, this seems absurd. Classical computing seems so much easier! The conventional wisdom on classical fault-tolerance is that von Neumann and others invented the theory back when classical computers used relays, vacuum tubes and other unreliable components, but now the bit error rates are so low that we don’t need to use their awkward circuits for iterated majority voting. In fact, every bi-stable flip-flop in a classical computer is implementing a ${1\rightarrow n}$ repetition code, where ${n}$ is comparable to the number of electrons flowing through a single wire. Viewed in this lens, a classical computer is already implementing all of the principles of fault-tolerant computing: it keeps data encoded at all times, never fully decoding (which would correspond to storing a bit in a single electron), and performs gates directly on encoded data. It is also constantly error-correcting, by using the non-linearity of the transistors to perform majority voting de facto, and doing so at a rate significantly faster than the GHz clock speed (in other words, classical computers spend most of their “time” correcting errors.) Fault-tolerant quantum computing follows all of these principles as well, with one big exception: there is no quantum analogue of the repetition code. (Why not? The natural candidate of ${|\psi\ rangle \mapsto |\psi\rangle\otimes |\psi\rangle\otimes |\psi\rangle}$ not only suffers the flaws inherent to analog computing, but also requires nonlinear encoding and decoding, violating a fundamental principle of quantum mechanics.) Instead, quantum errors come in two types: • X, or bit-flip, errors that exchange ${|0\rangle}$ and ${|1\rangle}$; and • Z, or phase, errors that send ${|1\rangle}$ to ${-|1\rangle}$, while leaving ${|0\rangle}$ unchanged. Classical computers of course have only X errors. In classical computers repetition codes are used everywhere, and the codes self-correct much more quickly than the gate time for logical bits. To improve reliability in a classical computer, it’s sufficient just to make the wires and insulators fatter, or the voltages higher. In quantum computers the lack of a repetition code is a big deal. It not only complicates achieving reliability, but puts more onus on active error correction. We need to enlarge the error-correcting code, which is not just a homogeneous collection of qubits, but a highly structured object that needs active error correction. How a QECC Copes The natural quantum version of a repetition code is the linear map sending ${a|0\rangle + b|1\rangle \mapsto a|000\rangle + b|111\rangle}$. This would protect against X errors (error rate ${p}$ would become ${3p^2+p^3}$) while increasing the vulnerability to Z errors (error rate ${p}$ would become ${3p+p^3}$). A dual version would protect against Z errors while increasing vulnerability to X errors. Only by combining these do we obtain codes such as the 9-qubit Shor code that can correct any single-qubit error, thus transforming error rate ${p}$ on the physical qubits to error rate ${O(p ^2)}$ on the encoded qubit. By encoding each of these 9 qubits in 9 more, we get an 81-qubit code whose error rate is ${O(p^4)}$. And if ${p}$ a small enough constant (like 0.3), then by iterating, we can get error rate ${\ epsilon}$ with poly(log(${1/\epsilon}$)) qubits, albeit with some ugly constants. If our decoding is also subject to error, as in FTQC, then we need ${p}$ to be smaller, usually ${10^{-6}}$ to ${10^ {-2}}$, depending on the model. But the same threshold phenomenon occurs, assuming independent errors. (An aside about passive error-correction. We don’t know how to do it in fewer than 4 dimensions. See arXiv:1103.1885 for an argument that passive self-correction in 3-d cannot use a scale-invariant code (such as something repetition-like); on the other hand, at least one 3-d system is known with some moderate self-correcting properties. Known no-go theorems in 2-d are stronger, and in 1-d even passive classical memory is ruled out, although examples such as Toom’s rule almost qualify.) On the other hand, the lack of a repetition code, and the associated simple forms of passive error correction, are the only two things I can think of that separate quantum and classical fault-tolerance. And they do not appear to be fundamental barriers, but rather explain the enormous difference in the practical difficulty of constructing quantum and classical computers. In other words, the differences between classical and quantum computing mean a large difference in practical difficulty, but could not lead to one being possible and the other fundamentally impossible. Where is the Objection Strictly Quantum? The first reason I’m skeptical about Gil’s conjectures is that they don’t appear to make use of these two differences between classical and quantum computing. Rather Gil questions the independence assumption of errors. But if highly correlated errors routinely struck computers, then they would also be a problem for classical computers. Quantum mechanics describes everything, including classical computers. If quantum computers suffer massively correlated errors with non-negligible probability, then so must classical computers, be they Pentiums or abacuses (or DNA, or human memory). If electrons hop from wire to wire not one-by-one, but a trillion at a time, then those correlated bit-flip errors would defeat a classical repetition code, just like they’d defeat various quantum coding schemes. To distinguish these cases, you would have to have that Z errors (the ones that only quantum computers care about) are highly correlated, while X errors are not. It is important to separate the issue of correlation from the issue about single-qubit error rate. If Z errors occur at rate ${p_Z}$ and X errors occur at rate ${p_X}$, then the threshold theorem relies on the probability of ${k}$ simultaneous Z (resp. X) errors decaying like ${\tilde{p}_Z^k}$ (resp. ${\tilde p_X^k}$). It’s not crucial for fault-tolerance that ${\tilde p_X=p_X}$ or ${\tilde p_Z=p_Z}$, only that ${\tilde p_X, \tilde p_Z}$ are both small. In experiments, we observe that ${p_X}$ is often much smaller than ${p_Z}$, and that ${p_X \approx \tilde p_X, p_Z\approx \tilde p_Z}$ (up to our abilities to measure this). The former fact means that superficially it seems harder to protect quantum data, since the first thing we notice is that nasty high ${p_Z}$ rate. But ${p_Z}$ can with effort be lowered, and what ultimately matters is that ${\tilde p_X,\tilde p_Z}$ are sufficiently small. To reconcile Gil’s conjectures with classical computing as well as experiment, we would need that ${p_X, p_Z,\tilde p_X}$ are all small, but ${\tilde p_Z}$ is inevitably large. This is like theories of aether, in which dynamics have a preferred direction. How can Gil make this work? One possibility is a dramatic rewrite of the rules of quantum mechanics, which is not his intent. Another is to argue that in any physical system, however X and Z are defined (because their definitions are as arbitrary as the definitions of the x,y,z coordinate axes), at least one of X or Z will have ${\tilde p \gg p}$. Basically, Nature would have to maliciously stick her fingers into every computation in unpredictable ways that foil every attempt at quantum computing, but have not yet been detected by exquisitely precise tests of quantum mechanics. I find this unlikely. Short Reply by Gil Kalai The classical error-correction test is indeed at the heart of matters, and understanding the enormous difference in constructing classical and quantum memory and computing is indeed a fundamental question. I am glad that this is also Aram’s view. Aram’s objections to my conjectures are based on intuitive grounds and not on formal grounds, as my conjectures have no consequences for classical error correction. In a crucial passage above, Aram wrote: “The lack of a repetition code, and the associated simple forms of passive error correction, are the only two things I can think of that separate quantum and classical fault-tolerance. And they do not appear to be fundamental barriers… [they] explain the enormous difference in the practical difficulty of constructing quantum and classical computers\dots, but could not lead to one being possible and the other fundamentally impossible.” Aram is correct that the repetition code is important. But my conjectures are required for drawing the picture of why it is important. What is it that distinguishes the repetition code from others, more sophisticated and more efficient error-correcting codes? Why is it that in nature and in classical computers, repetition codes are used everywhere? The answer that I propose is that when you consider the repetition code as a quantum code that corrects X-errors, there is no difference between the noise given by the standard noise models and the noise described by Conjecture 1. Therefore, the repetition code allows classical memory from noisy quantum systems even under Conjecture 1. Conjecture 1 does not involve any preferred direction or symmetry breaking. The conjecture excludes the possibility of quantum error correction codes but allows the possibility of classical error correction via the repetition code. Incidentally, the lack of a quantum analog of the repetition code and the unique properties of the majority function were the jumping-off points in my first paper on quantum computers in 2005. Open Problems Does maintaining a physical difference between the classical model with repetition/copy and the quantum model entail positing an artificial scale limit on the applicability of quantum mechanics Can one give a physical theory compatible with such inherently-correlated errors? Critical systems experience fluctuations with equal probability at all length scales. Could one of these produce the right correlations in Z errors, while having exponentially small probability of experiencing highly correlated X errors? Update: We note that John Preskill contributed a comment in the first post, importantly with a link to a 6-page note addressing the noise correlation problem in mathematical detail. 1. February 6, 2012 10:07 pm Aram, to a nonexpert like me, a natural model for quantum noise seems to be that at every step every qubit is measured with some probability p. Does such a model make sense? Do error correcting codes help for it? □ February 6, 2012 10:47 pm Yes, this model makes sense, and it describes a large amount of physical noise. Applying a Z gate with probability 50% is equivalent to measuring a qubit, so I talked about Z errors in my post, but that’s equivalent to the way you described it. Physicists often talk about T1 and T2 noise (although these are far from the only kinds). T1 is the rate at which qubits relax to thermal equilibrium, and T2 is the rate at which they are effectively measured by the environment. Normally T2 is much faster than T1. T1, T2 or any other kind of noise are all handled by FTQC, as long as their strength is below a universal constant (0.01 to 0.000001) and errors are independent. ☆ February 7, 2012 2:33 am To be a bit more precise, Aram, FTQC is possible as long as the errors aren’t too strongly correlated. See Preskill’s note from the previous thread here, and references therein. (I know you know this, but given the venue, it pays to reiterate that we don’t really need independence of the errors.) 2. February 7, 2012 12:37 am Whose open problems are these? Gil’s, Aram’s, or yours? “Can one give a physical theory compatible with such inherently-correlated errors? Critical systems experience fluctuations with equal probability at all length scales. Could one of these produce the right correlations in Z errors, while having exponentially small probability of experiencing highly correlated X errors?” Isn’t it Gil’s conjecture that *every* physical system must have correlated errors? Are you saying that we don’t know of *any* system that works this way? Is it really plausible that there are strong correlations in the random errors on Sol and on Alpha Centauri? □ February 7, 2012 12:43 am The first one is mine, trying to address the issue I see raised by Aram’s references to “abacuses”. The second, which you quote, is Aram’s as a challenge to Gil to show one physical system that meets (approximately) his postulates. 3. February 7, 2012 5:37 am There are two things I find troubling about Gil’s response. The first is that the treatment of the evolution of a classical computation is treated classically, rather than quantum mechanically. This may seem totally reasonable, but in fact it glosses over an important issue. To transition between two computational basis states, there must be some continuous evolution of the system which transitions it between the two states, which necessarily is not diagonal in the computational basis, and hence does not commute with dephasing noise. Importantly this means that correlated Z noise leads to correlated X errors in the resultant state. Using a classical treatment with infinitely quick gates glosses over this issue. The second is the implicit assumption that we don’t know what the noise should look like, and hence are free to conjecture whatever we please about it so long as it allows for classical computation. This, however, is not the case. The fact of the matter is that we know in excruciating detail the physics of how low energy particles interact with one another, and can use this to say a lot about the type of noise that a given quantum computer is subject to. The 2-local restriction on Hamiltonians found in nature significantly reduces the scope for correlated noise. □ February 7, 2012 11:36 am Aram touches on exactly your second point at the end, where he mentions the “exquisitely precise tests of quantum mechanics” (the g-2 experiments.) Like you, I am also a bit baffled as to why we should extrapolate from known physics (as I think Gil intends) to the conclusion that noise is always correlated in such a way as to prevent FTQC. The locality of physical interactions is something Aram will get to in more detail in an upcoming post. □ February 7, 2012 12:24 pm Joe Fitzsimons writes: There are two things I find troubling about Gil’s response … Here Joe utters yet another Great Truth on a GLL topic that already has encompassed many such truths … because these same two aspects are precisely what is exhilarating about Gil’s response. Specifically, Gil’s conjectures — and more broadly, innumerable lessons-learned in recent decades from QM / QC / QIT researcn — illuminate for us two Exhilarating Great Truths: Exhilarating Great Truth I: The STEM community has not yet achieved a firm grasp of the physical principles even of classical computation (per the survey below). Exhilarating Great Truth II: The STEM community has not yet achieved a firm grasp of the algorithmic principles even of classical computation. In particular, with regard to EGT-II, much has been conjectured yet little has been proved regarding the limits to simulating quantum physics with classical resources; yet meanwhile empirical quantum simulation capabilities are accelerating at a wonderfully exhilarating Moore’s Law pace, with no foreseeable end to this acceleration yet in view. Thus EGT-II amounts to a growing and intrinsically creative resource of broad applicability and tremendous power, coming to us in an era in which the world stands in urgent need of such creative resources. There’s nothing troubling about that. :) □ February 9, 2012 2:35 pm Dear Joe, These are two excellent points. I would like you to clarify your first point. In my short response I tried to explain (briefly) how repetition codes + conjecture 1 suggest an explanation for why classical memory is possible. I can elaborate but first I would like to better understnd the point in your first paragraph. Can you elaborate? “The fact of the matter is that we know in excruciating detail the physics of how low energy particles interact with one another, and can use this to say a lot about the type of noise that a given quantum computer is subject to. The 2-local restriction on Hamiltonians found in nature significantly reduces the scope for correlated noise.” Excellent point! I will try to relate to it along with Aram’s open question. The 2-local hamiltonian restriction indeed significantly reduce the scope for correlated noise, but remember that this applies to cases where the states along the evolution also satisfit 2-local restriction. In any case we will come back to this point in detail. ☆ February 13, 2012 4:39 am Hi Gil, Sorry for the delay in responding. The point I was trying to make in the first paragraph was that even a classical computer is at some fundamental level operating quantum mechanically, and that in passing between computational basis states the system must pass through non-classical states, albeit for very short times. This is simply because the operation causing the transition is not instantaneous and the associated (possibly time dependent) Hamiltonian must not commute with the computational basis. Because of this, it is not clear that the type of noise you conjecture should not affect classical computation. It seems to me that it is necessary to treat even the classical systems quantum mechanically to get an accurate picture of what is and isn’t possible. ☆ February 13, 2012 3:13 pm Dear Joe, thanks, I agree with you, and I tried to relate to this point in my last remark. It seems like an important issue and it would be nice to explore it more. 4. February 7, 2012 8:05 am There is one aspect of Gil’s “conjecture 1″ that is not fully clarified. He writes: “The rationale behind Conjecture 1 is that when you implement the decoding from a single qubit to qubits , a noise in the input amounts to having a mixture with undesired code words.” I can take this to mean that Gil conjectures that any physical device which takes a single qubit input in some particular form, say as the state of a spin, encodes it into some larger number of qubits, then waits some length of time, and finally decodes back into the original form, will necessarily have some non-zero probability of an error, no matter how many qubits are used to encode it. If this is what he means, then _no_ researcher in quantum information would disagree with him, because the first steps of the encoding process, where the information is as yet stored in only a few qubits and so is not well-protected, will have some non-zero error rate depending upon physical details of the system. The important claim made by the theory of fault tolerance is that for sufficiently good gates one can encode the single qubit and decode it an arbitrary time later (with the number of qubits used in the code depending upon the time you choose to wait) with the error probability being bounded above by some constant independent of the length of time that you wait. Gil’s statement of his conjecture “In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some , independently of the number of qubits used for encoding” actually seems to me to fit with the interpretation above. □ February 10, 2012 4:56 am Hi Matt, no, QFT allows to create encoded protected states where the probability for error in the encoded qubit is as small as we wish. (In fact, exponentially small in the number n of qubits uesd in the encoding). My conjecture is that you will always get a substantial mixture with indesired codewords. ☆ February 11, 2012 6:21 pm Hi Gil, I think you misread my point (or maybe I misread your conjecture…my point was precisely to clarify your conjecture). QFT (“quantum fault tolerance” for those who might otherwise read that as “quantum field theory”) allows one indeed to make the probability of error in the encoded qubit as small as desired, as you say. However, if what you are asking (which is what I thought your original conjecture asked about) is whether one can take a given state encoded in just a single qubit (or any O(1) qubits), then encode it into a large number of qubits, and then decode it back into a single qubit, then the answer is that this cannot be made as small as desired, because when the state is still encoded in just a small number of qubits it is not protected by any fault tolerant machinery. This would also be true classically: if the output of a classical circuit is just a single bit, then the probability that you get an error is going to be at least as big as the probability that the very last gate in the circuit makes an error. The protection only holds when the state is encoded in many qubits. This is why I mentioned on a long time limit: FT’s (classical or quantum FT) claim really is that one can keep the probability of error for such a circuit bounded by some small number no matter how deep the circuit is. ☆ February 11, 2012 6:32 pm A similar mistake is made in this paper: ☆ February 11, 2012 11:28 pm Hi Matt and Aram, My conjecture is about encoding a single qubit with n qubits. Quantum fault tolerance allows one indeed to make the probability of error in the encoded qubit as small as desired, and my conjecture asserts that in every realistic implementation of quantum error correction the error will be significant. I dont know what led to your interpretation, and if you or Aram are still uncertain about what Conjecture 1 is. In any case, let’s try to make sure that the conjecture is as clear as An example: Suppose you encode 2 qubits with a toric code which involves n qubits. (Or 6 qubits with Kitaev’s 4D codes). Standard noise models tells you that as the number of qubits used in the encoding grows the encoded 2 states approach a single 2-qubit state (or 6-qubit). My conjecture asserts that we will get a “cloud” of encoded states and that no matter how many qubits will be used the probability of error in the encoded qubit will remain substantial. 5. February 7, 2012 8:45 am Doesn’t Dave Bacon work at UW, not Google? □ February 7, 2012 11:28 am Dave Bacon left academia and works at Google now. ☆ February 18, 2012 5:09 pm Confirmed :) Also I don’t remember ever seeing that picture of me, but it’s definitely me because I’ve got my Berkeley license neckless on. Enjoying the debate from afar, in between cranking out Java code. ☆ February 18, 2012 5:14 pm Dave, do you ever have days in which all of your code simultaneously develops bugs, thereby foiling your attempts at error correction? ☆ February 18, 2012 5:35 pm Happened to me in C++ :-). With code from an earlier iteration of our CSE250: Data Structures in C++ course, that is. Dave, the pic is from the qubit06 conference. Thanks (to all) for the interest! ☆ February 18, 2012 6:15 pm It is notable that all-or-nothing error correction is fundamentally characteristic of everyday classical informatic technologies like cell-phones, blu-ray discs, magnetic disc drives, and satellite communication channels do — either they curate their data with near-perfect fidelity, or they destroy it with near-irreversible finality. All-or-nothing error correction similarly is ubiquitous in the biological world. For precisely so long as an organism’s internal error-correcting mechanisms are working effectively (at the genomic level and at the neural level), we call it that organism “alive” … otherwise, not. Moreover, this error-corrective distinction between “living” and “nonliving” becomes indistinct for organisms that lack self-repair capability, and thus are dependent upon the repair capabilities of their hosts (e.g., retroviruses). And conversely, it is thought that mammalian genomes are no bigger than they are, precisely because their error correction mechanisms do not scale to genomes larger than $\sim 10^9\ \text{bp}$ with sufficient fidelity to avert fatal genetic drift. Two salient points of these observations are: (1) error-correction capability is ubiquitous in classical information-processing systems, and (2) coupled error dynamics and all-or-nothing system failure modes are classically common. What lessons-learned for QM/QC/QIT can be derived from this classical ubiquity? Perhaps one broad lesson is that we can advantageously focus our appreciation upon the teachings of QM/QC/ QIT regarding noise. These reflections render it natural to enquire generally: What new mathematical and/or physical insights regarding noise has the 21st century gained from QM/QC/QIT? ☆ February 27, 2012 12:48 am The first apearance of Conjectures 3 and 4 (as they are called now) was on Dave’s “Quantum pointif” almost 6 years ago. John Sidles was the first to comment on them and we had a nice discussion. For my last post there I even prepared a special internet page to represent me and my approach; ( http:// www.ma.huji.ac.il/~kalai/gpont.html )I wondered if anybody ever noticed it. 6. February 7, 2012 10:45 am This is a fun discussion, and so I would like to contribute some fun references. Wholly for fun, and yet with regard to the common-sense idea that even “exquisitely precise” tests of a (classical or quantum) theory do not imply that theory is true, GLL readers are referred to the wonderful web page that describes Prof. E. T. Hall’s Littlemore Clock: the highest-precision (classical) pendulum clock ever build … accurate to about 50 ms/year. Does the existence of the “exquisitely precise” Littlemore clock prove that Newtonian mechanics and Euclidean geometry both are strictly true? :) More seriously, a class of dynamical models of computation for which the classical/quantum boundary is particularly well-defined was laid out in John von Neumann’s sole patent “Non-linear capacitance or inductance switching, amplifying and memory organs” (US2815488, 1957). There is an extensive literature on von Neumann’s computing devices, which were commercially marketed as parametrons; among the seminal articles that discuss them are two by Rolf Landauer (“Dissipation and noise immunity in computation and communication”, Nature 1988, and the follow-up article “Dissipation and noise immunity in computation, measurement, and communication”, Journal of Statistical Physics, 1989), which in turn was followed-up by Seth Lloyd’s “Any nonlinear gate, with linear gates, suffices for computation” (Physics Letters A, 1992). These works mainly argue for a Harrow-esque point-of-view, in which classical computational processes are specified that are dissipation-free yet robust with respect to noise, thus demonstrating a key similarity of classical computational processes to quantum computational processes. It is natural to wonder: What new insights have we gained since 1992 that might alter this conclusion in a more Kalai-esque direction? Without going into detail, a striking limitation of Seth Lloyd’s analysis (and perhaps of von Neumann’s and Landauer’s analyses too) is that the specified nonlinear classical dynamics in general are nonsymplectic, in the sense that nonlinearities are specified that compress state-space volumes. While such dynamical state-space compression commonly is observed locally (in both classical and and quantum systems), we are entirely confident that such compression can never happen globally; this being the substance of the Second Law. Therefore — in service of Kalai-esque unease with regard to the fundamental feasibility and/or practicability — it might be worthwhile casting the classical and quantum arguments of von Neumann, Landauer, and Lloyd into explicitly symplectic form, upon a variety of dynamical state-space geometries, so as to be entirely confident that the fundamental constraints imposed by the Second Law have been fully accounted, even in the large-dimension state-spaces of QC, within which both order and disorder camouflage themselves so effectively. □ February 7, 2012 11:26 am Poking around a bit more, we discover a remarkable instance of inventive synchronicity: John von Neumann’s patent filing (of April 28, 1954) that described parametron-based computers was simultaneous (approximately) with the independent invention of the same computational method, also in 1954, by Japan’s Eiichi Goto, who was a graduate student at the time. Moreover, it turns out that another physicist whose work I greatly respect, Stony Brook’s Konstantin K. Likharev, has written extensively on low-dissipation parametron-type computation implemented via Josephson junctions. For everyone else whose research my GLL survey (above) unjustly overlooked, I hereby pledge to pay for a pizza-and-beer the next time we meet. :) 7. February 7, 2012 2:27 pm Dear all, I participate these days in a conference at the US Virgin Islands on analysis of Boolean functions, organized by the Simons foundation. A lot of exciting talks and discussions. One (nonmathematical) thing I heard for the first time: Speaker (Luca) asks: “Do I have five minutes or ten minutes?” Avi :”Whatever comes first”. (At the end, Luca took the ten and the five and it was worth it.) So beside the Boolean functions research and discussions and the recreation activity (and some deadline you don’t want to hear about) I will not have much time to actively participate in the discussion before sunday. There are some excellent points addressed to me in the earlier post and some excellent issues to discuss from Aram’s post and comments here. Of course, I would also like to go over John Preskill’s writeup and to relate to some of John’s questions. Let me briefly explain two principles which I sat to myself in trying to pursue an explanation within QM for non-feasibility of quantum fault tolerance. The two principles are: 1. The explanation should not assume symmetry breaking between X-errors and Z errors. 2. No geometry will be involved. The rationale for these two principles is simple. Both the basic model of quantum computers and the model of noisy quantum computers with standard noise satisfy these principles. Moreover, I felt that geometry should not play a role – the explanation should not depend on the geometry of our quantum computer (or the world), and that a different behavior between X-errors and Z-errors cannot be assumed as part of the answer, although it will be interesting to understand the origin of such asymmetry. Indeed all the conjectures that are stated in my papers and in the first post are blind to the difference between bit-flip errors and phase-errors, and do not rely on geometry. Please do look at the conjectures and see it for yourself. I also want to mention that I do not assume or conjecture that the errors are always correlated. I conjecture that there is a systematic relation between the state of the computer and the errors and all the conjectures I presented in the first post are based on such systematic relation. In particular I conjecture that entangled qubits come with positively correlated errors. I suppose that we will discuss next time if this relation between noise and signal violates QM’s linearity, and how plausible it is. While the conjectures are not in conflict with QM linearity they do not represent a walk in the park either, and as I said they require drastically different noise modeling compared to the standard ones . In any case, the question Aram raised here about what is the different between classical error correction and quantum error correction is extremely important regardless of your beliefs, and I am nost interested to hear your thoughts about this difference. □ February 7, 2012 4:11 pm “I conjecture that there is a systematic relation between the state of the computer and the errors and all the conjectures I presented in the first post are based on such systematic relation. In particular I conjecture that entangled qubits come with positively correlated errors.” Can you give any example where this happens without violating linearity? It seems by definition that if your Hamiltonian depends on the state, then you have a nonlinear model that contradicts quantum mechanics. I don’t see where you are going with this. You are trying to overturn major, well-established physical theories, quantum mechanics and special relativity, but without any experimental evidence to back you up or proposing any new theories to replace them. The main motivation seems to be the belief that factoring should be hard. It is depressing to say it, but possibly computer scientists should leave quantum computing to the physicists. Aaronson’s idea that computational complexity conjectures should be granted the status of physical laws has terrible consequences. ☆ February 7, 2012 4:27 pm Anonymous, since we are going to discuss linearity next time lets leave it to next time. (I dont see your point regarding special relativity.) Meanwhile you can share us with your perception of why quantum fault tolerance is it so much harder than classical fault tolerance. ☆ February 7, 2012 6:15 pm “It is depressing to say it, but possibly computer scientists should leave quantum computing to the physicists.” Good idea. Will you tell Peter Shor, Michael Ben-Or and Dorit Aharonov this, or shall I do that? ☆ February 7, 2012 9:53 pm The “converse” of Wim’s comment is that there are also well-known physicists (e.g., Gerard ‘t Hooft) who happily ignore known facts about QM in the course of arguing that quantum computing must be impossible. But I guess some people will seize on any excuse to bash CS… ☆ February 7, 2012 11:09 pm Scott – It wasn’t Wim. It was the ever-prolific Anonymous. ☆ February 8, 2012 6:40 am Sorry, the first sentence of my comment was amplifying what Wim said, while the second was referring to the ever-prolific Anonymous. 8. February 8, 2012 8:06 am Hey, why is my awesomely insightful comment still awaiting moderation? Here it is again for all who didn’t get to see this. There is one aspect of Gil’s “conjecture 1″ that is not fully clarified. He writes: “The rationale behind Conjecture 1 is that when you implement the decoding from a single qubit to qubits , a noise in the input amounts to having a mixture with undesired code words.” I can take this to mean that Gil conjectures that any physical device which takes a single qubit input in some particular form, say as the state of a spin, encodes it into some larger number of qubits, then waits some length of time, and finally decodes back into the original form, will necessarily have some non-zero probability of an error, no matter how many qubits are used to encode it. If this is what he means, then _no_ researcher in quantum information would disagree with him, because the first steps of the encoding process, where the information is as yet stored in only a few qubits and so is not well-protected, will have some non-zero error rate depending upon physical details of the system. The important claim made by the theory of fault tolerance is that for sufficiently good gates one can encode the single qubit and decode it an arbitrary time later (with the number of qubits used in the code depending upon the time you choose to wait) with the error probability being bounded above by some constant independent of the length of time that you wait. Gil’s statement of his conjecture “In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some , independently of the number of qubits used for encoding” actually seems to me to fit with the interpretation above. □ February 11, 2012 8:50 pm Hi Matt—it was a several-hours delay while I was traveling. As you’ve probably noticed, moderation is only on the first comment; after that you’re in unless you include a lot of spam-filter-catchable stuff. 9. February 8, 2012 9:57 am Hi Aram, Do you think you could explain a bit about the reasons why people haven’t been able to build quantum computers with more than a handful of qubits yet? □ February 8, 2012 11:47 am Hi Boaz, I know you addressed the comment to Aram, but it’s an interesting question, and I hope you don’t mind me trying to give an answer. Despite what you will often here in popular accounts (which generally claim decoherence is the limiting factor), there is no one single reason for the current limitations. Different quantum computing implementations are limited by different factors. In some cases noise is indeed the limiting factors, but others are limited by factors. For example, optical implementations are partially limited by the fact that it is hard to interact two photons and so one has to resort to either weak non-linearities (i.e. weak interaction between photons) or the use of measurements to entangle qubits, both of which incur serious overheads, and it is also technically hard to produce single photons on demand. Liquid state NMR is generally limited by cooling. Each of the qubits is very very nearly completely random. One relies on the very large number of spins in the sample, combined with the tiny polarization of the spins to create a pseudo-pure state (considering only the tiny difference in the populations with different spins). Unfortunately, this difference is an exponentially small fraction of the ensemble, and so eventually you reach a point where the ensemble simply isn’t large enough for this fraction to have a detectable signal (and eventually you run out of molecules entirely). For this reason, liquid state NMR is not presently considered scalable. Simple ion traps have difficulty controlling and cooling ions as the number of ions in the trap increase. For that reason in recent years there has been a lot of work on segmented traps, and also on coupling two or more traps optically, but these techniques are not yet as well developed as more simple trap configurations. This is only scratching the surface as there are many different implementations, and each has its own limiting factors. The above limitations are by no means definitive, but I list them to give you a feel for the relevant issues in different settings. ☆ February 8, 2012 12:57 pm That’s a great answer. And one of the other answers is that people (with some notable exceptions) haven’t seen a reason to build quantum computers with more than a few bits until they’ve got the devices with a small number of qubits running at a high fidelity. If you still aren’t at threshhold yet with 2 qubits then the priority is to improve your experiment there in a more controlled setting before dealing with any other effects introduced by having other qubits nearby. □ February 8, 2012 12:05 pm Boaz, that’s a good question. (Just noticed Joe answered too; hopefully our replies are somewhat complementary.) The consensus is that for a QC (and without radically new architecture ideas) you need something called the DiVincenzo criteria: 1. identification of well-defined qubits; 2. reliable state preparation; 3. low decoherence; 4. accurate quantum gate operations and 5. strong quantum measurements. Often these goals tradeoff with each other. e.g. if you run more wires into the system to perform (4), then these wires will carry noise in them which makes (3) harder. More generally, you want systems to be highly protected from the environment, and yet accessible to the control pulses that you apply. It’s a tough combination. NMR on liquids achieves all of these using technology from the 1970s (and in fact old NMR experiments could be reinterpreted as doing controlled-NOTs and SWAPs), but this doesn’t scale because to get more qubits you need to make the molecule bigger, and as you do, the molecules tumble more slowly, and the nuclei become less well protected. In other systems, you have to develop new technologies for things people have never tried before, and that’s hard. In a few experiments, like this one: http://arxiv.org/abs/1009.6126 people have tried to get “many” qubits, which as Joe points out presents some technical challenges that are somewhat orthogonal to the goal of getting good 1 and 2-qubit operations. So often you see people focusing only on bringing the noise rate down, and not yet trying to get more qubits. If you look at the noise rate, here there has been steady progress. Check out this awesome plot: The recent results are getting to just about the rate that the FT threshold theorem requires. ☆ February 8, 2012 4:15 pm Thank you so much Joe and Aram – these are very informative answers! (thanks also for the interesting links). From these answers I understand that right now the effort of constructing quantum computers does not seem “stuck” at any concrete bottleneck but rather has been making significant progress against several difficult challenges, I guess that if the next decade will pass without significant additional progress, then it may make sense to start asking if there are fundamental obstacles to this enterprise, but right now there is no reason to assume this will be the case. Is this a correct interpretation of your answers? ☆ February 8, 2012 6:17 pm Boaz: Yeah, pretty much. A slightly complicating factor is that there’s a large field of quantum information processing, of which quantum computing and cryptography are just two of the applications. e.g. there’s also precision measurement, and wacky stuff like ultralong-baseline telescopes. So the range of possibilities is not just one bit. But broadly speaking, we’re making fast progress now, and I think we should only start giving up after we stop making progress for several years. 10. February 8, 2012 9:49 pm Let me mention some issues that were raised in our discussion. The starred ones are those we plan to discuss, or described as open problems, the double starred ones are those that came up in pur private discussion with Aram, Ken and Dick. Others came up in the discussion here and few are my contribution. (I dont suggest to discuss all of them.) To make things clearer I wrote “Gil’s conjectures” instead “my conjectures.” Sorry if I missed something important. 1*) Classical Vs Quantum 1 (Aram’s first post) Is it possible that quantum error correction fails while classical error correction prevail? 2*) Classical vs quantum 2 (Aram’s first post) What accounts for the huge practical difference between classical error correction and quantum error correction? 3*) Correlated noise (Aram’s second point) are the conjectures about correlated noise reasonable 4*) Entanglement and error-correlation (Aram’s second post) Can there be dependence between the state of the computer and the noise? Does such dependence violate the linearity of quantum 5*) hypothetical QC to the rescue (Aram’s third point) Aram’s imaginary quantum computers that shows that Gil’s conjectures cannot be true 6*) Intermediate models (Aram’s third post.) Computation based on cluster states, anyons, adiabatic computation and more. 7*) Noisy nonabelyons and clusters (Gil’s open problem post 1) Does noise described by Conjecture 1 allow universal quantum computing when we consider noisy cluster states and noisy nonabelian 8*) Physics examples (Aram’s open problem, his response part 1) Are there any examples from physics supporting Gil’s conjectures? 9) Precise predictions (Aram) can Gil’s conjectures lead to any precise predictions? 10*) Classical/quantum threshold (Ken’s open question, Aram’s post 1 my understanding): Is the conjectures/discussion relevant to the threshold transition between classical and quantum behavior. 11) QM → QC? Does quantum computer skepticism necessarily mean quantum mechanics skepticism? (Chris) Does conjecture C or any conjectured limitation on quantum states contradict QM? (also Chris) 12) What if ¬QEC? Are there any spectacular application to physics of failure of quantum error correction? In what ways would the physical world without quantum error correction and universal quantum computers (or before them) be different from a world with quantum error correction. ((Scott) Is the QC skeptical research just “negative?”) 13) Computational complexity. If quantum error correction fails what is the computational class needed to describe our physical universe? What are the computational complexity implications of Gil’s conjectures (Boaz and others) 14**) environment. How to understand the notion of “environment” in controlled quantum evolutions. 15) Lost in translation? (Related to John Preskill and Robert Alicki comments) Relation between the CS models and Hamiltonian models for quantum computers and noisy quantum computers. 16) Don’t we know better? (Joe, John P.) Arn’t Gil’s conjecture already in conflict which what we know about noise? And also with John Preskill’s Hamiltonian description of noise (in his recent 17) Simulating nature. If universal quantum computers are impossible, does this mean that classical computers can simulate realistic quantum physics? 18**) Quantum field theory computations. If quantum computers are impossible what does it say about computations in QM that requires exponential resources? (These computations were Feynman’s original motivation for QCs.) 19) Universe. Is the universe a giant computer? of what kind? 20) Correlation in classical noisy systems. (*; perhaps) Correlated errors in classical systems. Do the conjectures say or suggest anything about them? 21) Analogies. Analogies between the quantum computer endeavor and: heavier-than-air-flight, building classical computers, creating experimentally Eisnstein-Bose states, controlled fusion, perpetual motion machines, solving equations of degree five, flight to Mars… 22) Intuition transfer. To what extent does our intuition on the behavior of classical computers extend to quantum computers. (Chris, Boaz) 23) Geometry. Is the geometry of the quantum computer relevant to the discussion? (Note that in the model of a quantum computer, in the model of a noisy quantum computer, and in my conjectures geometry does not enter.) 24) Natural high entanglement. (**) Are highly entangled states of the kind we don’t see even in quantum computers represented in the universe? 25) Rate of errors. (Boaz) Why not to simply conjecture that the rate of noise will be high? What can we say about the rate of noise? □ February 28, 2012 11:56 am Let me also add another item to the list. 26) Thermodynamics. The study of noise and decoherence which is at the heart of the issue is connected to thermodynamics. Robert Alicki expressed his belief that impossibility of fault-tolerant quantum computations should follow from the existing, perhaps refined, laws of thermodynamics. John Sidles mentioned the connection with Onsager regression hypothesis. ☆ February 28, 2012 4:19 pm Traditional thermodynamics applies to equilibrium systems, which are not sufficient for studying error correction (systems in equilibrium cannot even store classical data. See http:// www.de.ufpe.br/~toom/others-articles/engmat/BEN-GRI.pdf for more on this.) Further not all systems are in thermodynamic equilibrium (one need only look around you to convince yourself that we have not yet reached a heat death universe.) So this means that one needs to study thermodynamics of out of equilibrium systems. In that case there are metastable as well as perfectly stable classical memories (2D Ising an example of the first and Gacs CA as an example of the second.) For storage of quantum information, it certainly seems like the 4D toric code is an example of the first, and spatially local QECC is an example of the second. The thermodynamics and out of equilibrium dynamics of these systems seems well studied, for specific noise models. If you’re going to take a crack at breaking quantum fault tolerance, I think pinning your hope on a more refined thermodynamics is a bit of a longshot and really it would have to be tied to the specific physics of the system. So it seems to me that your more traditional argument Gil is more likely to succeed then some magical development in thermodynamics (I’m not downplaying Alicki’s approach: I just don’t think his approach is likely to be a more refined thermodyanmics, except in the sense that the dyanmics of systems with different noise models behave differently. This would then make the refinement thermodynamics not universal, and then the question is experimental: what is our actual noise model.) ☆ February 28, 2012 4:47 pm Dear Dave, thanks for the comment. I didn’t add any new argument. I tried to gather the issues that were raised by me and by other people in the discussion into a list, the list contained 25 items, and I forgot thermodynamics, so I added it now as the 26th item. (I remember several discussions about thermodynamics and QC on the Quantum Pointif.) I do have some comments on your comment but I will make them later in the most recent thread. ☆ February 28, 2012 5:27 pm I would like to extend Dave’s post with two recommendations and one meditation: Recommendation #1: If you are in downtown Seattle, and enter the commercial building visible on Google Earth at 47.607754° N, 122.338396° W (100 m west of Benroya Hall), and ascend to the 17th floor ( … not one floor more … not one floor less), then you will arrive at a beautiful, uncrowded, open-to-the public zen garden, with sweeping views of the Puget Sound and the Olympic Mountains … and also a cafeteria and high-speed internet connection. Best … Seattle … math-doing … environment … ever! For which guidance, GLL’s appreciation and thanks go to you, daughter Lane! :) Recommendation #2: For guidance in simulating quantum systems with due respect for thermodynamical “goodness”, a concise-yet-clear reference is R. K. P. Zia, Edward Redish, and Susan McKay’s “Making sense of the Legendre transform” (arXiv:0806.1147 and Am J Phys, 2009). This article’s mathematically natural and duality-centric exposition of the foundations of thermodynamics is recommended to anyone who struggled to come to terms with thermodynamic potentials (students in particular). Meditation: To the extent that there are any fundamental difficulties associated to thermodynamic descriptions of FTQC along the lines of the Zia/Redish/McKay formalism, they plausibly are associated with the well-known practical difficulties that associated to “vacua”, by which is meant dynamical baths $\mathcal{B}$ that: (1) are coupled to qubits, in a thermodynamic limit in which (2) $\dim \mathcal{B} \to \infty$, and simultaneously (3) $\text{temperature} \mathcal{B} \to 0\pm$, while (4) qubit relaxation rates remain constant Vacua enter idiosyncratically in FTQC (as they do not in ordinary thermodynamics) via the couplings that are associated to unit-efficiency photon emission (i.e., $\text{temperature} \ mathcal{B} \to 0{-}$) and detection (i.e., $\text{temperature} \mathcal{B} \to 0{+}$) … all Lindbladian descriptions of qubit measurement being equivalent to these. It may or may not be the case that FTQC’s peculiar vacua pose fundamental issues in physics or mathematics, but for sure, the practical engineering of these vacua poses considerable practical challenges. Perhaps in concentrating on the latter class of practical challenges, we may gain insight into the former class of fundamental questions. □ March 10, 2012 12:36 pm Recent posts and comments threads added a few more items to the list. 27) Numerics. (Suggested by John Sidles). Are there numerical experiments relevant to the debate, the different suggested noise models, and Gil’s conjectures. 28) Quantum-like evolutions on non-Hilbert spaces (Suggested by John Sidles). Models of quantum evolutions where the Hilbert space is replaced by other geometric manifolds. This idea comes in three forms: a) Regular strength: A computational tool strictly within QM b) Maximum-strength: An alternative to QM formulation which is consistent with QM c) Prescription-strength: An alternative to QM with nonlinear Schrödinger evolution that can falsify QM. 29*) Noise?; probability?. (Aram’s second thought-experiment). Is noise a fundamental physics notion or a technical nuisance. A related foundational question: What is the physical interpretation of probability (classic and quantum)? 30*) Engineering and theory (Ken’s open problem for Aram 3rd post). What is the dividing line between an engineering obstacle that cannot be overcome and a theoretical impossibility 31*) Censorship. Can one draw a line between quantum states that can be created without (before) universal quantum computers via quantum fault tolerance, and with (after) them. 32*) Universal James Bond car analogy (Gil’s thought experiment; Aram’s post 2) . Is it the case that to create different quantum evolutions and states requires fundamentally different □ March 19, 2012 1:47 pm Recent posts and comments added a few more items to the list 33) Specific architectures for quantum computers (Aram, others). Can one present specific noise models which show why Gil’s conjectures apply to specific implementations of quantum computers, and to familiar ideas regarding noise for them, like noise from control wires. 34) Hamiltonian models (John Preskill, Aram). Can one present a Hamiltonian model (or related modelling) of quantum computers that supports Gil’s conjecture? 35) Locality (Joe, Aram). How does non-local models like those of Gil’s conjectures can be reconciled with known physics? 36) Quantum fault-tolerance in known physics. Is quantum fault-tolerance manifested anywhere in known physical reality? □ May 20, 2012 4:28 am Recent posts and comments added a few more items to the list 37) Cooling. (Alicki-Preskill discussion and others). Can you always cool individual qubits? Are there types of quantum states that cannot be cooled at all? 38) MRI and more (John Sidles). To what extent can ideas from fault-tolerant quantum computing apply to improvements in MRI imaging sensitivity and resolution MRI? to electronic microscopy. 39) Deep quantum evolutions, iron (Conjecture C post). Does nature support deep (essentially pure) quantum evolutions? (Namely, evolutions which are not of small bounded depth.) Is the presence of iron in the Earth a demonstration of deep quantum evolutions? of natural quantum fault-tolerance? of computationally superior quantum computation? 40) Very few qubits. How do quantum computers with 2, 3, 4 and 5 qubits behave? □ January 14, 2013 3:16 pm The last post of the debate and a few later discussions added (or reminded me about) a few more items to the list: 41. Chaos and computation. What is the relation of chaotic classical dynamics with computation and computational complexity? What is the quantum analog of chaotic classical dynamics, and what are the connections with open quantum systems, and with quantum computation. 42. Smoothed Lindblad evolutions. Can realistic noise and realistic noisy quantum evolutions be described by smoothed Lindblad evolutions? Does such a description cause quantum fault-tolerance to fail? Does it suffice to reduce the computational power all the way to BPP? 43. Noise and measures of noncommutativity. Can a general lower bound for the magnitude of decoherence in a time interval be based on a noncommutativity measure for the algebra of operators generated by the dynamics in that interval? 44. Symplectic geometry, quantization and quantum noise. What is the relevance of quantizations of Newtonian mechanics, expressed by symplectic geometry, to quantum noise, and what are the relations of quantum aspects of symplectic geometry with the behavior of open quantum systems and noisy quantum computation. Find relations of Gil’s conjectures with the “unsharpness principle,” and the notions of inherent and systematic noise from symplectic geometry. (See this post for information and links.) 45. Systematic dependence between the evolution and the noise; symmetries. Is it the case (As Gil’s conjectures posit) that noisy quantum systems are subject to noise which systematically depends on the quantum evolution of the system; that this dependence reflects the dependence of the noise on the quantum device, and the dependence of the quantum device on the quantum evolution it performs? In particular, is it the case that noise of a quantum system also largely inherits symmetries of the system? □ January 14, 2013 4:09 pm 46. Pullback-smoothed Lindblad evolutions (seeks a natural and concrete unification of 23. Geometry, and 28. Quantum-like evolutions on non-Hilbert spaces , and 31. Censorship , and 42. Smoothed Lindblad evolutions, and 44. Symplectic geometry, quantization and quantum noise) For smoothing concretely defined as a Stratonovich description of Lindblad noise/measurement increments (per Carlton Caves’ on-line internal report Completely positive maps, positive maps, and the Lindblad form) that has been naturally pulled-back onto a lower-dimension state-space (and thereby effectively smoothed), upon what state-space geometries (if any), and for what restricted class of noise/measurement processes (if any), is the informatic causality of the immersing Hilbert space rigorously respected by pullback-smoothing onto lower-dimension state-spaces? □ May 29, 2013 4:53 am Some post-debate discussions added (or reminded me about) a couple more items to the list. 47. Square-root fluctuation. Can we expect Gaussian fluctuation for the number of errors for a noisy system of n interacting particles? Specifically, can we expect that the standard deviation for the number of errors behaves like $\sqrt n$. What is the situation for digital memory? (In this case, where interactions are weak and unintended, we can consider both transient errors for the entire memory, and also the microscopic situation for a single bit.) 48. Uncertainty. What are the connections (if any) of the issue of quantum noise and quantum fault-tolerance with various manifestation of the uncertainty principle in quantum physics. □ March 27, 2014 7:57 am Post-debate discussions added (or reminded me about) a couple more items to the list: 49) The firewall paradox. The black hole firewall paradox is a recent meeting place for quantum information, quantum computation and physics of black holes. Can a principle of “no quantum fault-tolerance” shed light on the “firewall paradox?” 50) Classical evolutions without fault-tolerance and computation. Describe classical evolutions that do not enable/hide robust information, fault-tolerance and computation. 11. February 10, 2012 5:14 am Aram’s open problem asked if even a single example can be exhibited were my conjectures hold. Joe Fitzsimons ‘s related remark was that we know what the noise is, so we cannot reinvent it. Here is a proposed case to think about Conjecture 1. We need though to replace “qubits” by a much larger Hilbert space representing the state of a single atom. Consider Bose-Einstein condensation for (say) a huge number of atoms. We can thing about it as a quantum code where the same state of a single atom is encoded in a huge number of atoms. We can think if Conjecture 1 holds in this case. I suppose that in reality we cannot achieve pure Bose-Einstein state but rather some noisy positive entropy nearby states. Joe, others, do we know what the noise is in this case? Will it be like independent noise acting on each atom separately? Or perhaps it will be closer to what Conjecture 1 predicts: A mixture of different B-E pure states? □ February 10, 2012 9:07 am Gil, that is not a sensible quantum code to use. It reduces the effect of bit flip errors, just like a classical repetition code, but greatly increases the effect of phase errors. As for what you get in thermal equilibrium and as shown in experiments, this is a very well-studied problem. You do get a mixture of different B-E pure states (plus some local excitations on top of that), but for reasons that have nothing to do with conjecture 1 and everything to do with this being a bad quantum code. Noise, even completely uncorrelated noise, will dephase the state in a given basis. The earliest studied example of this that I know (dating back about 50 years ago!) was Anderson’s original discussion of what happens in a quantum antiferromagnet, such as an antiferromagnet on a cubic lattice with nearest neighbor interactions. One can prove that the ground state has spin 0, but experimentally we observe instead a breaking of spin rotation symmetry, so that the observed state is not spin 0. This results from two facts. First, there are many states which are almost exactly degenerate with the ground state (splitting between them is of order 1/(number of atoms in the sample)). Second, in this system, the space spanned by these almost degenerate states does not act as a good quantum code and so local noise quickly decoheres in a certain basis, leading to the observation of spontaneous symmetry breaking. The remarkable thing about topological protection is that there are quantum systems where the ground state subspace does act as a good quantum code (the first “fact” holds but the second “fact” does not) and so this argument does not apply; however, I think it doesn’t make sense to appeal to this B-E or antiferromagnetic system as a claim for correlated noise because conversely it shows how uncorrelated noise can lead to effects that some people think are due to correlated noise. ☆ February 10, 2012 5:21 pm This is very interessting, many thanks, Matt ☆ February 12, 2012 12:00 am Hi Matt, few little comments. The B-E example was meant to demonstrate conjecture 1 and NOT the idea of correlated noise. As I point out in my short reply the repetition code has the marvelous property that the effect of the noise described by Conjecture 1 is the same as the effect of standard noise. According to your interesting description of the B-E example (which was new to me) Conjecture 1, which should apply to bad codes and good codes alike is correct in this case. (Of course I did not mean to offer new predictions on B-E states.) I realize that you think that this has a completely different reason, has “nothing to do” with Conjecture 1, and that you suggest topological protection as a case where Conjecture 1 will fail. That’s interesting. Maybe this can be tested already by Abelian anyons which were created experimentally. When you think about them as codes I expect that the states that were created are pretty uncontrolled mixtures of different codewords. What do you expect? ☆ February 12, 2012 1:53 pm Dear Matt, on second thought, based on your comment, maybe under Conjecture 1, correlated errors occurs for every linear quantum code which encode a single qubit with n qubits (including the repetition code regarded as a quantum code}, assuming that indeed all qubits are used in the encoding. Perhaps what is special with the repetition code is that the correlation occurs only in one direction which allows stable “classical” direction. Of course, Conjecture 1 does not specify the quantum operation representing the noise. You can consider arbitrary unitary evolution leading to the encoding and apply some standard noise of very low rate before applying the evolution. In any case, the (mathematical) question if you always get correlated errors is interesting. ☆ February 12, 2012 3:11 pm Hi Gil, regarding use of Abelian anyons, curently the experiments aren’t far enough along to test such dephasing questions. http://arxiv.org/pdf/1112.3400.pdf is a recent paper with some interesting results. They measure effects consistent with abelian anyon statistics, and observe jumps in phase consistent with changes in the number of anyons in some island. However, in this experiment, topological protection doesn’t imply much because the anyon number does couple to local operators. In order to have a good code using abelian anyons, you need to have large holes in the system; with large holes, it becomes impossible to detect whether or not a hole contains an anyon without braiding another anyon around it, so you can form topologically protected superpositions of states. (To really make this work, you actually need 2 holes, and then form a superposition of different anyon number in the two different holes (such as 0 anyons in hole 1 and 1 anyon in hole 2, superposed with 1 anyon in hole 1 and 0 anyons in hole 2) because you can’t superpose states with different anyon number.) The topological protection in this case is supposed to be exponentially good in the diameter of the holes and the spacing between holes (you need both distances to be large); however, in the experiment of this arxiv paper, the anyons are just pointlike particles rather than trapped in large holes and so there isn’t supposed to be any protection against dephasing between states with different anyon number in different places (because the “diameter of the hole” is O(1) ). With non-abelian anyons, it should be possible to have topological protection with only pointlike defects; however, in the paper I mentioned, they argue that coupling between these defects and the edge mode has destroyed the non-abelian features…to make it work you’d need a large enough separation between the defects and the edge. In contrast, the paper http://arxiv.org/abs/0911.0345 claims behavior consistent with non-abelian statistics, but even if that paper is correct it is still some distance away from the point of creating protected superpositions of states which you would like to see. Alas, in these experiments there are effects of impurities, small system sizes, charging effects, etc… which make things messy. Hopefully soon it will be more clear. Perhaps Majorana fermions systems will provide a cleaner test soon. 12. February 10, 2012 3:03 pm Following-up on a (very enjoyable!) chat yesterday evening with Aram (Harrow) and Steve (Flammia), there is now on MathOverflow a newly-posted question titled Harris’ “Algebraic Geometry”, quantum dynamics on varieties, and a $100,000 reward, that seeks to specify concrete math-and-physics foundations for meeting Scott Aaronson’s wonderful challenge: “There’s a detailed picture of [what] the world could be like such that QC is possible, but no serious competing picture of what the world could be like such that it isn’t. That situation could change at any time, and we would welcome it as the scientific adventure of our lives.” Starting next week — or as soon as it becomes administratively feasible — a substantial MathOverflow reputation bonus will be attached to this MOFL roadmap (motto: “MOFL reputations are beyond price”). Happy STEM treasure-hunting, everyone! :) □ February 14, 2012 8:32 am In regard to the above QM/QC/QIT roadmap — which might be titled Aaronson’s Search in tribute to Borges’ celebrated story Averroes’s Search — please let me commend in Joseph Landsberg’s new textbook Tensors: Geometry and Applications his short essay of Section 0.3, titled “Clash of Cultures“. By drawing upon various references in the algebraic geometry literature (that were elicited by the above MOF question) I am becoming reasonably hopeful of posting to MOL, in the next week or two, a concrete conjecture in algebraic geometry that will: (1) help to help reconcile Landsberg’s clash of cultures, and that (2) will concretely further Aaronson’s search, and that (3) will mathematically illuminate Gil Kalai’s conjectures here on GLL. This particular roadmap is suggested by a mathematical aphorism of David Hestenes and Garret Sobczyk: “Geometry without algebra is dumb! Algebra without geometry is blind!” 13. February 10, 2012 3:21 pm Gerhard Paseman posted: I recommend making the title be less crass and have more class by omitting the dollar amount. Gerhard’s suggestion was excellent, and so here is a link to the MOFL question thus amended: Harris’ “Algebraic Geometry”, quantum dynamics on varieties, and a more-than-monetary reward. Thanks, Gerhard! :) 14. February 13, 2012 1:38 pm Those are my principles. (G. Marx) Let me draw again my proposed picture for Aram’s important classical fault-tolerance test. Before doing that let me remark that my proposed picture can be supported even under the weak interpretations of my conjectures, so it can coexist with the possibility that quantum computers can be built. We start with the (rather informal) strong principle of noise which says that you cannot find an (essentially) noiseless quantum evolution as a subsystem of a realistic noisy quantum evolution. I find this principle appealing although admittedly it will require pretty wild (fully within QM) Hamiltonian models (probably wilder than Preskill’s model and Alicki’s models). But this is something we will talk about later. Next we move to Conjecture 1: For any encoded qubit we have a substantial mixture of the intended state with a “cloud” of unintended states. (I hope that the discussion with Matt helped to make it clear what the conjecture says.) Now we look at the repetition code as a quantum code. For this special code, our non standard noise (mixture of undesired code words) has the same effect in terms of bit-flip errors as the standard noise and therefore it allows classical error correction. The symmetry breaking and the special “direction” emerges from the repetition code, not from the conjecture. This may explain not only why we do not see quantum codes in nature but also why we do not see other, more sophisticated, classical error correction codes in nature. The discussion with Matt suggests that perhaps for every quantum code that genuinely use n qubits to encode a single qubit, Conjecture 1 implies that there will be an effect of error-synchronization. But the repetition code (and perhaps only the repetition code) allows to extract noiseless classical bits. This is an interesting mathematical question. Note that this proposed explanation is geometry-free so it will apply also in four dimensions. Kitaev’s remarkable self-correcting 4D memory will only lead to a cloud of codewords – not a single □ February 13, 2012 3:23 pm Gil Kalai asserts: This may explain not only why we do not see quantum codes in nature but also why we do not see other, more sophisticated, classical error correction codes in nature. Please let me recommend the review by Sancar et al. “Molecular mechanisms of mammalian DNA repair and the DNA damage checkpoints” (2004, 1,315 citations). The point being, mammalian cells have evolved to incorporate dozens of independent, highly sophisticated error-correction mechanisms, acting upon-and-beyond the “obvous:” informatic redundancies that are associated to (1) chromosomal pairing on top of (2) DNA strand pairing on top of (2) the triplet nucleotide code. Heck, Nature exploits for error-correction purposes even the topological information that is encoded in the DNA winding number. “What has been will be again, what has been done will be done again; there is nothing new under the sun.” (Ecclesiastes 1:9, NIV). ☆ February 13, 2012 8:30 pm That’s interesting! □ February 13, 2012 4:00 pm Gil, would you question whether a physical implementation of a classical LDPC code would be able to perform in practice close to what I might predict assuming uncorrelated noise? i.e., I have some physical realization of a communication channel, I make some measurements of its noise properties and succeed in obtaining a rough model of the noise. Then, I choose an LDPC code (or rather, family of such codes) such that this noise is less than the error threshhold for that family. In fact, as a careful engineer, I perhaps allow some safety margin between my measured noise and the threshold for the code. Would you expect that I would be able to see exponential suppression of the error rate in practice or not? ☆ February 13, 2012 8:14 pm Sure, in such a case you will indeed see exponantial suppression of errors. The issue I referred to is why we do not see such codes in nature, in situations where encoding processes themselves are noisy. Once we have noiseless bits and we can perform noiseless operations on them (and this is possible under my conjectures) we can prepare noiseless compilcated error-correcting codes which will be useful against independence noise. (Of course when I say “noiseless” I mean “essentially noiseless” with amount of noise that we can keep arbitrary low.) ☆ February 15, 2012 12:58 pm Gil, why is it that you don’t think that noise correlation will affect a “complicated” classical code like the LDPC code? One reasonable answer of course is that LDPC has a linear distance, so even highly correlated noise won’t destroy the information if sufficiently few bits are flipped. But if the key is the distance of the code, note that there are quantum codes also with linear distance. ☆ February 15, 2012 11:21 pm Matt, If you believe Conjecture 1 (or even its weakest interpretation) then you get the following picture: a) there is now way to use quantum error correction to get protected quantum evolutions (or at least we did not construct or witness such a way yet), b) You can get via repetition code a protected classic evolution c) on top of that you can protect it further by classic quantum error correction if you wish. Anyway, what is your explanation to the large gap we witness in reality between classical and quantum error correction? ☆ February 16, 2012 8:53 am The large gap occurs for three reasons. First, the theoretical threshhold for quantum fault tolerance is higher than the threshhold for classical fault tolerance. Both are “numbers of order unity”, but the quantum number seems a lot closer to 1, although theoretical advances have dropped this number a lot. Second, the quantum experiments are much harder to do, and so the gates are worse, although getting closer to threshold. I think this isn’t surprsing: good classical gates require only technology that we’ve had for a long time and can even be built using gears and steam engines if need be. You wouldn’t get very good miniaturization that way, but at least you could get to clearly demonstrate some kind of classical computation, with good enough gates that fault tolerance isn’t required. Finally, the simplest classical code, the repetition code, is naturally implemented by many systems and is in fact why we see classical reality emerge out of quantum mechanics, so fault tolerant is hardware is easier classically. Conversely, fault tolerant quantum hardware, such as a topological state, occur in fewer systems. As a condensed matter physicist, one intuition is that systems just “like to order” classically, and that ordering gets in the way of having a topological phase. tl/dr: several qualitative facts make it a lot harder quantum mechanically. Suppose we were debating whether or not it is possible to build a space craft that travels at 10 percent of the speed of light. This debate would have three parts: 1)is it possible under the fundamental laws of physics? 2)is there too much space dust to build such a craft, or not enough fuel available to us to propel it? (i.e., things that aren’t fundamental objections, but are perhaps insurmountable features of the place where we live—the analogue of a quantum theorist who believes that there is a threshhold but that gates at that accuracy can’t be built practically) 3)is this something that our society will ever do? (is there enough money and will to do this?) Each of those have an analogue in the case of building a quantum computer. My feeling is that a quantum computer is like the problem of, say, building a spacecraft to go to Pluto (hard but possible), while your fear is that it is fundamentally impossible (you object on question 1). The least interesting case is that it is impossible for reasons analogous to question 2. ☆ February 16, 2012 8:55 am btw, to clarify: when I say “good classical gates” can be built with old technology, I mean good in the sense of high fidelity. Smaller/faster gates are definitely a technological feat! (and fidelity drops as size drops) ☆ February 16, 2012 9:04 am at the risk of talking too much, I guess the history of classical fault tolerance is interesting too. Vacuum tube machines had people running around trying to fix burnt-out tubes, and Feynman has some nice Manhattan project stories of starting small computational jobs to patch up part of a larger job when an error occured, so fault tolerance at the time had a human component. And people using hand cranked calculators tell me that they had to be careful not to turn the crank too quickly. So, I guess it is not that wide a window in technological time where we have really high fidelity classical gates. ☆ February 16, 2012 9:40 am Matt, your post and mine on the same subject overlapped in time and substantially agreed in content. Even today substantially all large-scale classical memories and communication channels are error-corrected. As classical technologies increasingly press against quantum limits to size, speed, and (especially) power-efficiency, it is not surprising that classical error-correction techniques are becoming steadily more sophisticated, deeply nested, and ubiquitous. When we apply modern dynamical simulation methods to predict error rates in error-corrected quantum computer memories, we discover ample reason to conceive that Gil Kalai’s conjectures may be correct. ☆ February 16, 2012 2:41 pm Indeed John, I used the word “window” for a reason. As classical gates are getting made smaller and larger computations are being done, some form of classical error correction becomes more important. ☆ February 16, 2012 3:38 pm Yes Matt, your “window” point is very well taken … it was your post that led me to appreciate that pretty much all of today’s (classical) computer memory technologies and (classical) communication channels are either error-corrected, or else they are (necessarily!) power-hungry. Soberingly, in human beings, inborn DNA repair-deficiency disorders often are associated with a grave medical prognosis. Thus we should none of us overlook the ubiquity, necessity, and physics-level subtlety of classical error-correction dynamics. □ February 14, 2012 7:03 am Well, John and Matt you are indeed correct: when I wrote “The discussion with Matt suggests that perhaps for every quantum code that genuinely use n qubits to encode a single qubit, Conjecture 1 implies that there will be an effect of error-synchronization. But the repetition code (and perhaps only the repetition code) allows to extract noiseless classical bits. This is an interesting mathematical question.” the “perhaps only the repetition code” is too good to be true. We can simply start with a repetition code and then apply an additional, as complicated as we wish, classical error correcting code. (Both your examples look essentially like this.) Anyway, conjecture 1 allows classical error correction but not quantum error correction. The repetition code indeed seems to be especially suited for noise obeying Conjecture 1 but it is not clear to me how exactly to formalize it. The interesting possibility that came up in the discussion that Conjecture 1 always implies error-synchronization is new to me and we can further think about it. As always, counterexamples are most welcome. ☆ February 14, 2012 8:04 am Gil, I have a lot of respect for Conjecture 1. For me, this conjecture can be focused equally plausibly upon lower bounds to noise levels associated to synchronized error processes, or alternatively upon upper bounds to measures of spatial localization and isolation, and this was the point of my earlier post regarding the emergence of macroscopic spatial parameters (in transport equations) from microscopic parameters (in qubit Hamiltonians). In the hardware design of QCs, a reliable rule-of-thumb is that as detectors approach near-perfect efficiency and near-zero noise, spatially correlated qubit couplings (and thus spatially correlated noise) become stronger-and-stronger, via cavity QED effects. It is natural to ask, does this engineering rule-of-thumb regarding correlated noise reflect a fundamental law of physics regarding the limits to spatial localization? Thus, one way to read your Conjecture I (as it seems to me) is as a suggestion that we should examine more closely the nuts-and-bolts processes by which the set of macroscopic transport equations that we collectively regard as comprising “three-dimensional space and one-dimensional time”, emerge naturally from the microscopic quantum structure of qubit couplings. Needless to say, a full century of arduous wrestling with the notion of “quantum field theory” — both theoretically and experimentally — tells us that these investigations into the quantum dynamics of spatial localization will not be easy. For me, a great virtue of Kalai Conjecture I in particular, and modern advances in QM/QC/QIT in general, are that they encourage us to think about these space-time problems in new ways, with new mathematical tools. 15. February 16, 2012 6:11 pm Thanks, John I can especially identify with the sentiments of your last two paragraphs (without claiming to understand the technical jargon). □ February 16, 2012 10:06 pm Gil, please let me see that I earnestly regret the use of technical jargon. The real path by which our group’s QM/QC/QIT ideas arise is by the canonical random walk: (1) contemplate any practical problem that pushes against quantum limits to sensitivity, speed, power, size (etc.), (2) analyze the associated dynamics by any computational strategy that works, then (3) gaze slack-jawed at the resulting computational miracles and say to oneself mathematicians surely have a name for this. ☆ February 17, 2012 7:57 am Whoops, the above link is paywalled (which is deplorable) and not much fun (which is worse). Much more fun are the lectures of last summer’s Topics in Tensors: A Summer School by Shmuel Friedland (Coimbra, 2011). It is striking that certain key themes of the Kalai/Harrow GLL/QM/QC/QIT debate were sounded too at these Coimbra Lectures … as reflected in the fun-house mirrors of modern algebraic For example, to appose Dick’s quote of Einstein’s great truth “God is subtle, but He is not malicious” we have Shmuel Friedland’s dual aphorism “Matrices were created by God and tensors by the Devil.” Similarly, to appose the Aaronson Quantum Prize of $100,000 generously offered for (essentially) a viably non-Hilbert state-space for quantum dynamics, we have the Allman Salmon Prize, generously offered by the geometer/biologist Elizabeth Allman for (essentially) a better understanding of certain algebraic state-spaces that, by the synchronicity so usual in math-and-science, are reasonable candidates too for the Aaronson Quantum Prize. Moreover, because Prof. Allman teaches at the University of Alaska, her tender of a tasty wild salmon is a prize well worth winning! And finally, a virtue shared by this GLL debate and by the Coimbra Lectures is this: challenges are construed broadly rather than narrowly, such that fundamental problems in representation theory are regarded as encompassing quantum dynamics and phylogenetic trees and sparse decompositions and image compression … such that each discipline and each application enjoyably illuminates the others. 16. August 7, 2012 2:47 pm The last post in my debate with Aram will be come rather soon. We had an interesting and useful off-line discussion on some of the issues that I raised in my response to Aram’s posts. Here is something that came up: When you have independent noise on qubits then the number of errors decay exponentially above the rate level. This is a crucial property for quantum fault tolerance proofs. Another property of independent noise is that the number of qubit errors is highly concentrated. This property seems to me unrealistic, e.g., in the context of the repetition code and Aram’s microscopic description on digital computers. The exponential decay property whixh enables FTQC is witnessed also in the more general models of Aharonov-Kitaev-Preskill and Preskill. I expect (or suspect) that for these models the high concentration of the number of errors also hold. (But this requires a proof). In contrast, for the noise described by Conjecture 1, applied to the repetition code the number of errors will be “smeared” around the rate. This observation seems to me as giving some support to Conjecture 1. □ August 7, 2012 6:31 pm Why should this concentration of errors be unrealistic? In classical computers, the distribution of errors is fairly well understood, and is broken into 1/f noise, unwanted capacitive/ inductive couplings, Johnson noise, manufacturing faults, and so on. These are all what I would call realistic, and generally lead to the observed infinitesimal failure rates of the repetition-encoded bits that transistors use. What is the realistic noise source that we have not yet noticed that will derail quantum computers? ☆ August 7, 2012 7:33 pm Aram, please let me say that the commitment, and effort, and the imagination that you and Gil both bring your debate is greatly appreciated, by me and (I’m sure) many folks. Aram asks “What is the realistic noise source that we have not yet noticed that will derail quantum computers?” As usual, the history of science does not supply a specific answer, but it does provide a reasonably reliable generic answer: “The best way to discover this mechanism is to attempt to build practical quantum computers.” E.g, our theoretical appreciation of plasma instabilities has been hugely enhanced by numerous humility-inducing attempts to build practical fusion reactors. As the Soviet theoretician Lev Artsimovich famously prophesied: “Fusion will be there when society needs it.” Now the Soviet Union is gone, and society urgently needs fusion power … and it is not here. Ouch! A little closer to home, please allow me to commend the historical review in Orbach and Stapleton’s article “Electron Spin-Lattice Relaxation” which is a chapter in Electron Paramagnetic Resonance (1972, S. Geschwind, ed.), for its humility-inducing account of the struggle to account for spin-lattice relaxation mechanisms … a multi-decade struggle in which Nature’s ingenuity in creating spin relaxation mechanisms consistently dominated the ability of theorists to foresee those mechanisms. As concrete evidence that this struggle continues even to the present era, we see on the arxiv server the very interesting preprint by Lara Faoro, Lev Ioffe, and Alexei Kitaev titled “ Dissipationless dynamics of randomly coupled spins at high temperatures” (arXiv:1112.3855v1, 2011). Perhaps good question to ask, therefore, is this one: “How many realistic noise sources that we have not yet noticed will derail quantum computers?” As an (entirely sincere) tribute to the strength and creativity of Alexei Kitaev’s work, that is informed by the above history, I have posted on CalTech’s new Quantum Frontiers weblog a brief essay to the effect that the best we can hope for is the Kalai-esque yet utopian answer “Plenty!” ☆ August 9, 2012 1:37 pm Aram, the observed failing rate of digital memory is supported both by the independent error-model and its variations, and also by Conjecture 1. The difference is that under the independent model the fraction of faulty qubits is highly concentrated around the expectation. This concentration behaves like one over the square root of the number of microscopic spins -and this is what I regard unrealistic. Under Conjecture 1 the fraction of errors is not concentrated. ☆ August 10, 2012 12:52 am What makes you say that O(sqrt(n)) fluctuations are not realistic? Here is one simple example: A capacitor with capacitance C charged to a constant voltage will have O(C) electrons more on one plate than one the other. But at constant temperature, the fluctuations in this number will be O(sqrt(C)). To keep citing wikipedia for everything, this article claims that the shape of the noise is Gaussian: I promise I didn’t just edit those two articles to say that. :) Now it’s your turn: what’s a realistic source of noise that would give rise to Omega(n) fluctuations with exp(-o(n)) probability? 17. August 8, 2012 8:24 am Dear Aram and John, The issue of my comment was not so much to identify new sources of noise (which is quite interesting) but to address the question of correct modelling of the noise we are already familiar with in the context of more general noisy quantum evolutions. Especially in the context of quantum codes, and the familiar repetition code. Aram proposed to regard the memory of a digital computer as representing the repetition code which represent ’0′ by |00000…0> and ’1′ by |111…1>. The zeros can correspond to microscopic up-spins and the ones to down spins. The noise causes some spins to flip. When we consider the model of independent errors with probability p then the fraction of flips will be highly concentrated at p. We do not need this high concentration for the purpose of classical error correction (and even not for quantum error correction); it is enough that the probability that a large fraction of spins will be flipped is negligible. But we get this concentration nevertheless. Realistically, I would expect that the fraction of flipped spins is not concentrated as the number of spins n grows but rather is smeared around a certain level while maintaining the crucial property that the probability of many flips which change the value of the majority function is negligible. I suspect that the sharp concentration witnessed for independent noise also holds for the Aharonov-Kitaev-Preskill model and the recent Preskill Hamiltonian model in the cases where it was shown that FTQC is possible. (A new version of Preskill’s paper was recently arxived.) One nice feature of Preskill’s recent model is that FTQC is possible if a certain norm associated to the noise is bounded. (Preskill proved that this is the case if some correlations are decaying in certain geometric models.) Can we expect that the norm described by Preskill will indeed be bounded in realistic situations? As I said, I expect that this model like a model of completely independent noise will predict that the fraction of flipped spins is highly concentrated. Conjecture 1 suggests that the noisy repetition code will lead to states which are the encoding of a certain “neighborhoods” of |0> and |1> in the Bloch sphere and this is consistent with the fraction of spin-flips being “smeared” and not concentrated on a single value. Conjecture 1 will still allow us to have robust classical bits from the repetition code but will not allow quantum error correction. (Of course, one can also describe models of noise based on independent noise where the number of errors is not concentrated but events of having many errors are negligible.) Three further remarks. 1) The 1/f noise that Aram mentioned were also mentioned to me by Nadav Katz and it will be interesting to look how and if 1/f noise is relevant to our discussion. 2) Robert Alicki made an interesting remark regarding the boundedness of the norm used by Preskill. He considered more and more refined description of the same familiar quantum evolution. The coarser description can be regarded as a noisy version of the finer description or as a noisy description of the limit evolution. Alicki expected that the norms associated to the noise compared to larger and larger refinement is unbounded. 3) My last paper on the topic describes also a somewhat different way to think classically about the repetition code which explains why in certain cases where the analog of Conjecture 1 holds classical memory and computation are still possible. □ August 8, 2012 8:26 am Let me add relevant links to my comments above: Preskill’s arxived article, Alicki’s comment which followed this comment by Preskill (My description is based on my understanding of some explanations by Robert); 1/f noise (scholarpedia article). □ August 9, 2012 9:56 am Gil and Aram, with regard to the quantum origins of noise in general, and more specifically with regard to “1/f” noise, and most specifically with regard to “1/f” noise originating in spin dynamics, please let me commend the long arc of literature (6+ decades!) that extends from Hendrik Casimir’s “On Onsager’s principle of microscopic reversibility” (Reviews of Modern Physics, 1945) — a work that I highly commend — through Azriel Genack and Alfred Redfield’s “Theory of nuclear spin diffusion in a spatially varying magnetic field” (Physical Review B, 1975) to Lara Faoro, Lev Ioffe, and Alexei Kitaev’s recent preprint “Dissipationless dynamics of randomly coupled spins at high temperatures” (arXiv:1112.3855v1, 2011). This long arc of literature is written entirely in the “classical” dynamical and thermodynamical language of Dirac and Onsager, that today is reflected in epynomic textbooks by (for example) Slichter, Landau and Lifshitz, Nielsen and Chuang, and Kittel (among dozens of excellent works). And yet, during the 1950s, and continuing to the present day, a second dynamical language was conceived, emphasizing naturality and geometry, that is associated to names like Élie Cartan, Samuel Eilenberg, Saunders Mac Lane, and Vladimir Arnold. Simultaneously, a third dynamical language was conceived, emphasizing the unraveling of open systems into ensembles of dynamical trajectories, that is associated to names like Göran Lindblad and Howard Carmichael. And nowadays, what seems to be a fourth dynamical language is being conceived, emphasizing entropy-area properties of dynamical state-spaces, and their intimate relation to the existence of efficient computational simulation algorithms … we don’t yet know whose names will be associated with this new dynamical language! It is straightforward and instructive exercise to set down the natural equations that govern these four points-of-view — a single page suffices — and yet will be a very long time (decades at a minimum) before we grasp all of their implications … and still longer before any textbook is written that syntheses these four perspectives. Therefore (for me) the most valuable outcome of the Kalai-Harrow debate will not be any narrow determination of “who’s right” — which is an unlikely outcome in any event — but rather an already-gained appreciation that each of the above four languages contributes substantially to our 21st century’s integrated appreciation of quantum dynamics, and therefore, a concrete near-term opportunity is to integrate, synthesize, and apply the dynamical insights that we already have achieved. Therefore, with regard to building practical quantum computers (or determining finally that they are infeasible), we need not hurry, because we have *plenty* of useful work to do, and good ideas to pursue, right now. Which is GOOD, eh? :) □ August 9, 2012 6:20 pm Here is an ad hoc suggestion for a class of noise models that are (1) maximally “toxic” (perhaps) to quantum computation in the sense of Kalai, and yet are (2) realistic (maybe) in the sense of Faoro, Ioffe, and Kitaev’s arXiv:1112.3855v1, and futhermore are (3) theoretically tractable (maybe) in the sense of Aaronson and Arkhipov arXiv:1011.3245v1. We’ll start with an error-corrected ion-trap quantum computer, and for noise we’ll shine a (weakly coupled) laser diode on *one* of the ion qubits, that is itself an element of *one* of the computational basis qubits. So far, no problem for error-correction. Now direct the diode output into *one* of the 50 input fibers of a 50×50 Aaronson-Arkhipov (passive) optical coupler, and shine each of the 50 output fibers onto a *different* ion. Hmmm … now *this* noise is problematic for any error-correction method (know to me), for the physical reason that we can (in principle) collect all 50 of the outgoing photons in *another* Aaronson-Arkhipov, thus creating a 50-in/50-out photon interferometer that continuously monitors high-order quantum correlations among the ion quibits, and thus acts to quench those self-same correlations. Ouch! To make the above noise model more physical, we first dispense with the output interferometer (which is irrelevant by ambiguity of operator-sum representation), and then we dispense with the input photons (substituting unpaired electrons in the walls of the optical cavity holding the ions), and finally we dispense even with the input Aaronson-Arkhipov interferometer, substituting optical modes and/or conduction-band modes of the ion trap. Here it point is that models like the above render it non-obvious (to me) that nasty high-order quantum noise mechanisms do not ubiquitously lurk in innocuous-seeming laboratory apparatus. In this regard there is no substitute for going into the laboratory and attempting difficult experiments. In this, everyone agrees! ☆ August 10, 2012 1:00 am This is a very nice idea, combining experimental falsifiability with theoretical tractability. I need to think more about its realism (especially with “50″ replaced by a number that grows with n), but let’s leave this aside for the moment. Let me mention, though, that “problematic for any error-correction method” is a tricky thing to claim. Yes, “problematic” in that we cannot trivially say that QECC will work simply by appealing to the minimum-distance property. However, it is not enough for the noise to have a non-negligible probability of being high weight. It’s crucial that the noise operators also are close to logical operators for the code, otherwise they may still be correctable. This is especially true if we get to measure and characterize the noise before choosing our error-correction scheme. For example, we _do_ observe collective dephasing, and this is one of the easiest types of noise to correct for, simply by using a decoherence-free-subspace (cf. Dave Bacon’s dissertation). ☆ August 10, 2012 8:47 am Aram, with regard to computation-challenging quantum noise-and-transport models, please don’t look (to me) for any final conclusions very soon. In my own reading of the history of science, quantum noise-and-transport problems are remarkable for: (1) the distinguished cadre of physicists who have worked upon these problems, and (2) the stately pace of progress in the past century, and (3) the immense journey still before us, before we arrive at a reasonably comprehensive and natural understanding. So the good news is, there are very many, very interesting, very important open problems in quantum noise-and-transport. The sobering news is, few or none of these open problems are easy. ☆ August 10, 2012 9:04 am Oh, and perhaps I’d better mention, that physics disciplines like cavity QED in general — and single-photon emission as a canonical example — for me are “transport” disciplines, in the sense that these features are present: • the coherent dynamics is Hamiltonian, • the decoherent dynamics is Lindbladian, • the transport dynamics is “Onsagerian”, and • the informatic dynamics is “Shannonian” An objection might be, that these elements are present in *most* physics articles, and (explicitly or implicitly) these elements are present in *all* quantum computing articles. To which the answer is: yes. ☆ August 12, 2012 8:55 pm And finally, the above class of qubit noise models plausibly (and perhaps naturally) violates the $k$-qubit noise scaling that is postulated in Preskill’s recent “Sufficient condition on noise correlations for scalable quantum computing” (arXiv:1207.6131v1). Most likely, it will require a considerable effort, and a lot of ingenuity, to experimentally verify and theoretically validate our understanding of these noise-related quantum dynamical processes. ☆ August 13, 2012 6:53 am And as a coda to the above piece-wise meditation, here is a delightful passage from the introductory chapter to David Deutsch’s book Fabric of Reality (a book that is strictly In the future, all explanations will be understood against the backdrop of universality, and every new idea will automatically tend to illuminate not just a particular subject, but, to varying degrees, all subjects. By Deutsch’s universality principle we can infer that a mathematical/physical inquiry that casts a sobering light upon scalable quantum computing, almost surely will provide inspiring illumination to other research programs and technologies. We need only look, and we will see. Good! :) 18. August 12, 2012 11:33 am Aram: “Now it’s your turn: what’s a realistic source of noise that would give rise to Ω(n) fluctuations with exp(-o(n)) probability?” It is good that we identified, Aram, a simple clear-cut technical place of disagreement. A simple model with Ω(n) fluctuations The simplest description of a realistic noise with Ω(n) fluctuations is for a bit to be faulty with probability p where p is not a constant but rather determined based on some probability distribution on [0,1]. I would regard the description where p is a constant unrealistic. For memories of digital computers it is realistic to assume that p is supported on an interval close to zero (like [0.05, 0.1]). This allows noiseless bits and classical computation. ( Ω(n) fluctuations may still allow quantum fault-tolerance. If the distribution of p is (entirely) supported on an interval sufficiently near to zero, or decay fast enough, then quantum fault tolerance is possible.) Preskill’s model Noise models where p is concentrated and the fluctuation is $O(\sqrt n)$ are, in my opinion, unrealistic. (Of course, they can still be useful.) This (I think) applies also to the new general model of Preskill. This would mean that Preskill’s model leaves out a major ingredient of realistic noise. This ingredient which is abstracted out is needed to tell if FTQC is possible or not. This also means that the norm described by Preskill is unbounded in realistic situations. Why I regard O(√n) fluctuations unrealistic for memories of digital computers A few sentences for why I regard, for memories of digital computers, highly concentrated p as unrealistic. A single bit in digital computers can be modeled by a 2-D Ising model. Here in cold “temperatures” T there are two stationary states: in one a fraction of p spins are up and a fraction of 1-p spins are down and it is the other way around in the other. The value p of the fraction of up-spins depends on T and indeed if T is fixed then so is p and we have suqre-root n fluctuations. Realistically, I would expect that T is not fixed but fluctuates as well. Putting more effort into the engineering you can make T fluctuates less and less but this will only affect the constants and the overall fluctuations of the number of spin ups will be linear and not square-root. □ August 12, 2012 12:27 pm Gill, Two questions 1. You say ” A single bit in digital computers can be modeled by a 2-D Ising model” Why would a zero-field Ising model be a good model for the bits in a computer memory? 2. “The value p of the fraction of up-spins depends on T and indeed if T is fixed then so is p and we have square-root n fluctuations.” Why does this lead to square root fluctuations? For the Ising model in the low temperature phase Martin-Löf has proven that the magnetisation converges to a gaussian, and so have very nice concentration properties See http://proxy.ub.umu.se:2103/content/n65471h167w25j58/?MUD=MP ☆ August 12, 2012 12:51 pm Klas, thanks for the comment! But can you please fix the link for those of us not at your university? ☆ August 12, 2012 12:54 pm Aram, sorry about that, I used the proxy since I logged in from home. Here is the correct link ☆ August 12, 2012 12:59 pm Thanks! One more editorial comment: Springer is apparently offering to sell the article for $39.95, while Project Euclid offers CMP articles from 1965-1997 for free. I mention this only in the vain hope that everyone will be inspired to always put all of their articles on arxiv.org. ☆ August 12, 2012 2:57 pm Klas, Regarding the second question, dont we say the same thing? note that n is the number of spins and that when T is fixed when we condition on the state represeting ‘one’, we have a very nice concentration which gives that the standard deviation of the number of up spins is sqrt n. More precisely, conditioned on majority of spins being up the distribution of spins is essentially binomial with probability for a spin=up is p. (p depends on T) Regarding the first question: The 2-D Ising model is regarded as a “role model” for a two-state system with spontaneous (or active) error correction. So I think it is common to think of digital computer memory as modeled by it. (In some sense Kitaev’s 4D quantum model is an analogue of the Ising 2-D model as the toric code is an analog of the 1D Ising model.) ☆ August 13, 2012 2:56 am Gil, yes I think we are saying the same thing about the standard Ising model. What I was really wondering is how you get the linear fluctuations from that? Do you intend to have an individual, somehow random, T-value for each spin? In that case I don’t see right away why you would see large fluctuations, rather than something close to the Ising model at a different, but fixed, temperature. Here I’m assuming that you do not allow local temperature variations which are large enough to mover parts of the system into the high temperature region. ☆ August 13, 2012 8:07 am Dear Klas, It is reasonable to assume that T (or the average value of T) will flactuate as a function of time. (It can also change for individual spins..) ☆ August 13, 2012 12:16 pm Gil, I am not questioning that. I just wonder how large you must make those fluctuations in order to get as large fluctuation in the magnetization as you propose? When one is not close to the phase transition the magnetization M, and so the effective value of p, is not very sensitive to changes in T. For low enough T the change in magnetization would be sub-linear in the change in T, so for small expected T it looks like the fluctuations in M should be small even with a fluctuating T. However closer to the phase transition this could change. So maybe there is a phase transition in terms of the expected value of T here? 19. August 14, 2012 4:40 pm Hi Klas, I prefer to think directly on the fluctuation of the fraction p of up spins (and not in terms of the parameter T). Suppose that your computer bit is represented by 10^8 spins and a ’0 state means roughly a fraction of 0.01 spins up and a ’1′ state means roughly 0,99 spins up. Square root fluctuations mean that these values will be extremely stable so the 0.01 will be 0.01+- 0.00001 or so. Such a strong stability is not needed for computation (which is essentially based on the majority function) which will work perfectly if p is constrained to an interval of length proportional to 0.01. It also looks counter-intuitive to me that such a strong stability is maintained rather than p, while being kept in a small interval, fluctuates and oscillates conditioned on some physical parameter of the system. But Aram thinks differently. Probably some people understand much better the physics of digital memory to enlighten us. 1. Updates, Boolean Functions Conference, and a Surprising Application to Polytope Theory | Combinatorics and more 2. RT.COM -> Это происходит, на каком языке? -> WHAT R U –PEOPLES DOING?זה קורה ב איזה שפה?זה קורה באיזה שפה?זה קורה באיזה שפה?זה קורה באיזה שפה– BIBI ?Dit geb Recent Comments The More Variables,… on Fast Matrix Products and Other… The More Variables,… on Progress On The Jacobian … The More Variables,… on Crypto Aspects of The Jacobian… The More Variables,… on An Amazing Paper The More Variables,… on Mathematical Embarrassments The More Variables,… on On Mathematical Diseases The More Variables,… on Who Gets The Credit—Not… John Sidles on Multiple-Credit Tests KWRegan on Multiple-Credit Tests John Sidles on Multiple-Credit Tests John Sidles on Multiple-Credit Tests Leonid Gurvits on Counting Is Sometimes Eas… Cristopher Moore on Multiple-Credit Tests Multiple-Credit Test… on Wait Wait… Don’t F… Amanda on Counting Is Sometimes Eas…
{"url":"http://rjlipton.wordpress.com/2012/02/06/flying-machines-of-the-21st-century/?like=1&_wpnonce=93ffd21953","timestamp":"2014-04-17T06:49:04Z","content_type":null,"content_length":"289670","record_id":"<urn:uuid:4050d4c2-3884-407c-a02b-ade6248fca75>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching resources Academic Earth believes everyone deserves access to a world-class education, which is why we continue to offer a comprehensive collection of free online college courses from the world's top universities. And now, we take learning outside the classroom with our original series of thought-provoking videos, designed to spark your intellectual curiosity and start a conversation. Watch, learn, share, debate. After all, only through questioning the world around us, can we come to better understand it. ONLINE COLLABORATION PROJECTS We have been having another great year of collaboration projects! The collaboration projects are learning activities that provide collaboration between two or more classrooms. The participating classrooms will be from various locations around the world, will use the Internet to interact with one another and will be working on a similar topic for a specific length of time. Using the Internet students and teachers will share their activities, findings and reflections. In addition to student collaboration, teachers will also be provided with the necessary tools to collaborate with one another. EVLM Central Portal NGfL CYMRU GCaD - Nightly DEWIS Test Yourself questions in a range of topic such as complex numbers, matrices, partial differentiation, percentages, ratios, vectors and transposition of formulae available from mathcentre. Find out more..... Welsh language Fact & Formulae leaflets are available. Find out more..... mathcentre has evolved to become a well-used and valued online drop-in centre for mathematics resources. Find out more about who uses mathcentre and which are the most popular resources. Free & inexpensive math curriculum materials: workbooks, ebooks, downloads, videos, tutorials, and more. - Nightly Save yourself money and a trip to the store! Print graph paper free from your computer. This site is perfect for science and math homework, craft projects and other graph paper needs. All graph paper files are optimized PDF documents requiring Adobe Reader for viewing. Take advantage of your printing flexibility; print on transparency film for sharp graph paper overheads, or waterproof paper for field data-collecting. Cartesian graph paper is the most popular form of graph paper in use. Explore math Use interactive apps to explore math and get a better understanding of what it all means. Go ahead - play and learn! There's over 300 easy to understand math lessons. What users say... This is an excellent site for review of forgotten skills necessary to understand higher level math. MathSphere Ltd P.O. Box 7533WeymouthDT4 4FPtel: 01273 782 786 fax: 01273 785 550 MathSphere Free Resources for children aged 5 to 11 Problem Solving This teaching resource, compatible with iPad, Android and IWB is another puzzle for encouraging children to work on their problem solving skills. It is a variation on the traditional 15 Puzzle. Slide the tiles around until they run in alphabetical order from the top left.. Skip to main content TES the largest network of teachers in the world Jobs Secondary Maths resources collections WELCOME!. . . . . . . . .. . . www.tsm-resources.com ... to TSM RESOURCES - home to Douglas Butler's collection of resources, mainly relevant to the teaching of Mathematics. The iCT Training Centre, based in Oundle, near Peterborough (UK), offers "TSM" training workshop for teachers in the UK and abroad. The TSM philosophy is to combine a fluency with MS Office (Word and Excel) with the effective use of dynamic software and web resources, at the same time keeping up with the best pedagogical use of the new mobile technologies. Oundle is also the home of Autograph, the UK's most popular software for teaching mathematics. TSM Resources Skoool.co.uk A fantastic free resource to support maths and science at Key Stages 3 and 4. There are wonderful interactive activities and study notes. Pupils can download the resources for offline use. Interactive whiteboard resources too. Mathematics - Topmarks Search Design Your Own Games Pre-Made Games Matching Game Directions - In this game you can match up words. You have two columns to work in . Type in your words in the first column and the matching words in the second column. You should have at least 8 pairs of words. When your students ask the inevitable question, "When would I ever use this?" answer it with a micro-documentary from the largest STEM video library of its kind --- The Futures Channel. Teachers tell us >> “I like to motivate the students to learn math by showing applications for math and science in real life, and your videos are a valuable resource. You do a great job. Thank you..” F.A. Powerpoint presentations for math - Free Math PowerPoint presentations and Math teacher resources for K-12 Keystage KS and post 16 A level lessonplans, and more. Use and alter these powerpoint presentations freely or any power point template used in this presentations site for other teachers. If you have any powerpoints then please consider submitting them for other teachers to download too. It's all about sharing and helping others. Need a free powerpoint viewer.
{"url":"http://www.pearltrees.com/lwhotton/teaching-resources/id4297561","timestamp":"2014-04-20T09:28:19Z","content_type":null,"content_length":"26622","record_id":"<urn:uuid:4ee45122-a6e3-432c-9e23-da3c6988719c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Helppp!!! I Tried Working The Problem Out, I Want ... | Chegg.com helppp!!! i tried working the problem out, i want to compareanswers, if possible can u show each step, thank you A toy is moving along a straight and level track in the +xdirection. It starts at t = 0 and x = 0 with an initial velocityVο = 5.00 m/s in the +x direction. At X = X[F = 50.0mthere is a brick wall. For the first 25.0m, the car moves atconstant velocity, Vο.] The toy car starts slowing downhalf way to the wall (at X = X[F]/2 = 25.0m) with aconstant acceleration of a = -0.440 m/s^2 and ends uphitting the wall. In all cases find the answer using symbols beforeplugging in the numerical values. a) How much time, T[1] does it take to move the first 25.0m? b) How much time, T[2] doest it take to move the second25.0 m? c) What is the final velocity, V[f], when the car reachesX = X[f] = 50.0m and hits the wall ?
{"url":"http://www.chegg.com/homework-help/questions-and-answers/helppp-tried-working-problem-want-compareanswers-possible-u-show-step-thank-toy-moving-alo-q212289","timestamp":"2014-04-16T12:13:02Z","content_type":null,"content_length":"19582","record_id":"<urn:uuid:76bfc94b-4268-4d6e-ae8e-b6fd74c54877>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Hey, guys, I was wondering if you could help me with some statistics and physics. http://www.trin.cam.ac.uk/show.php?dowid=475 Questions 2, 4, 5, 6, 9 and 10. Thank you :) • one year ago • one year ago Best Response You've already chosen the best response. for number two, make a "tree" i guess, it might help. sorry, im still learning statistics there is a formula for how to find the probablity of B given A, but I can't seem to get it on here. :( Best Response You've already chosen the best response. Thanks nonetheless... :) Best Response You've already chosen the best response. @mandja for #2 what is the probability for a false positive? Are we to assume it is zero? Best Response You've already chosen the best response. Assuming that the test is 95% accurate for the positive and for 100% accurate for the negative, this is solvable: Let \(C\) denote having the condition and positive/negative signs denote the results. Given is this: \[P(C)=0.001\]\[P(+|C) = 0.95\]\[P(\lnot C)=1-P(C) = 0.999\]\[P(+|\lnot C)=P(\lnot C\land+)/P(\lnot C)=0.05\]Then,\[P(C|+)=\frac{P(+\land C)}{P(+)}\]Since \(P(+) = P(+|\ lnot C)P(\lnot C) + P(+|C)P(C)\), we can just plug in the values to get: 1.8% Best Response You've already chosen the best response. Thanks. For the 4-th question I'm thinking of binomial distribution? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ae3a22e4b0e906b4a58f6d","timestamp":"2014-04-16T19:35:02Z","content_type":null,"content_length":"37744","record_id":"<urn:uuid:9ce79e55-cc70-4a79-8b64-7a95d2e48a7e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Northlake, TX Trigonometry Tutor Find a Northlake, TX Trigonometry Tutor ...He is currently completing the last few classes necessary for a degree in Biochemistry. He lives, along with his wife and 3-year-old daughter, near Dallas, TX. He has traveled quite a bit, both in the United States and abroad. 37 Subjects: including trigonometry, Spanish, chemistry, English ...I have over 40 years of experience with public speaking. I took courses in the subject in high school, college and graduate school. For 22 years I served as a pastor in various churches from different denominations, which required that I prepare at least one sermon on a weekly basis. 82 Subjects: including trigonometry, chemistry, reading, English ...During my last semester of college, I tutored middle school students in TAKS test preparation. The subjects for that were reading, history, and math. I am a hands-on and visual learner myself, and often use methods geared towards those learning styles. 27 Subjects: including trigonometry, chemistry, reading, physics ...The biggest problem I've noticed with students who come to me for Physics is a lack of firm grasp on its very basic and fundamental concepts. Physics is not like Calculus: you can't just get by with knowing a few techniques of integration and differentiation; you really have to be well grounded ... 41 Subjects: including trigonometry, chemistry, French, calculus ...The advanced classes I took also applied Linear Algebra vector analysis and matrix theory: Geometry of Robots, Numerical Methods, and Aerodynamics. I also have experience tutoring students in advanced math classes. I graduated with honors in Mechanical and Aerospace Engineering. 21 Subjects: including trigonometry, chemistry, English, accounting Related Northlake, TX Tutors Northlake, TX Accounting Tutors Northlake, TX ACT Tutors Northlake, TX Algebra Tutors Northlake, TX Algebra 2 Tutors Northlake, TX Calculus Tutors Northlake, TX Geometry Tutors Northlake, TX Math Tutors Northlake, TX Prealgebra Tutors Northlake, TX Precalculus Tutors Northlake, TX SAT Tutors Northlake, TX SAT Math Tutors Northlake, TX Science Tutors Northlake, TX Statistics Tutors Northlake, TX Trigonometry Tutors Nearby Cities With trigonometry Tutor Argyle, TX trigonometry Tutors Bartonville, TX trigonometry Tutors Colleyville trigonometry Tutors Copper Canyon, TX trigonometry Tutors Corinth, TX trigonometry Tutors Corral City, TX trigonometry Tutors Denton, TX trigonometry Tutors Highland Village, TX trigonometry Tutors Justin trigonometry Tutors Oak Point, TX trigonometry Tutors Roanoke, TX trigonometry Tutors Saginaw, TX trigonometry Tutors Shady Shores, TX trigonometry Tutors Southlake trigonometry Tutors University Park, TX trigonometry Tutors
{"url":"http://www.purplemath.com/Northlake_TX_trigonometry_tutors.php","timestamp":"2014-04-21T07:45:10Z","content_type":null,"content_length":"24251","record_id":"<urn:uuid:21390c0f-bae7-4af6-aaec-0b4f10512fdf>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Palo Alto Science Tutor Find a Palo Alto Science Tutor ...I'm flexible in my teaching style, and will work with parents, schools and students to determine the format of tutoring most likely to bring them success. In all cases, however, I emphasise risk taking, self-sufficency and critical thinking when approaching problems. My aim is to help students to become independent learners who eventually won't need my help. 11 Subjects: including physics, chemistry, calculus, statistics ...I also am talented at breaking down difficult material and explaining it in a way easy to understand, tailored to the level the student is at. As I've always said, "If you can't explain it to an intelligent 12 year old, then you don't really understand it" I explain to my students because I u... 24 Subjects: including organic chemistry, ACT Science, anatomy, philosophy ...I often share the story of my humble beginnings in rural India with students when I’d have to walk several miles a day to go to the nearest school and how, that early education opened my eyes to the world in ways I could never have imagined, The commitment to learn led me to be the first in my vi... 1 Subject: physics ...I noticed that the top learners and earners ask this question: "Why do I do this? If it's done, how would I feel and how would my life be if I consistently achieve these goals?" Relate this to tutoring, I love playing this vision building exercise with my tutee: "If you have an A in class, would... 20 Subjects: including sociology, English, writing, reading ...I also taught physics in an Oakland high school for seven years. Thus, I have unusually extensive experience teaching physics. I have a BS in chemistry from UC Berkeley. 9 Subjects: including physics, algebra 2, chemistry, geometry
{"url":"http://www.purplemath.com/Palo_Alto_Science_tutors.php","timestamp":"2014-04-20T13:31:23Z","content_type":null,"content_length":"23689","record_id":"<urn:uuid:e47def4c-5c5f-4fe4-9751-ff43ccbd668c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Beware Of Line-Current Unbalance On VFDs Severe line-current unbalance can cause VFD nuisance tripping and 3rd harmonic currents, which are unusual for phase-to-phase nonlinear loads. Are you having problems with a "temperamental" variable frequency drive (VFD)? Is it nuisance-tripping, showing all the signs of circuit overload, even though measurements show otherwise? The culprit may be unbalanced phase currents. What happens is the overload protection trips when the VFD line currents become very unbalanced because there's excessive current in one or two phases. This happens even though the average current of the three phases is well below the VFD's current rating. According to a paper from the Electric Power Research Institute's (EPRI's) Power Electronics Application Center (PEAC) (Application No. 1 dated September 1995), line-voltage unbalances usually cause these line-current unbalances. In fact, current unbalance can be 20 times as high as the voltage unbalance. Unevenly distributed single-phase loads can cause the latter across a 3-phase power system. In other words, instead of balancing phase-to-neutral loads evenly across all three phases, more are distributed onto one or two phases. The same goes for phase-to-phase single-phase loads. Instead of balancing the loads on an A-B, B-C, A-C basis, they are distributed unevenly on two pair of phases. Voltage unbalance can also result when using some types of transformer connections such as the "open-delta" connection, where two transformers are used to make a three-phase system. There are a couple of symptoms to look for The first symptom in most cases is nuisance tripping of upstream breakers due to unbalance current. Many new digital motor control protection relays use a current unbalance trip, and in some cases, the trip limit can be set as low as 5%. The line-current unbalance also increases current harmonic distortion, which can overload building wiring and transformers. You may see excessive 3rd harmonics (normally associated with phase-to-neutral nonlinear loads) as one of the resulting consequences of line-current imbalance on a VFD (a phase-to-phase nonlinear load). Another may be low power factor. Of course, the "robustness" of the unbalanced power system serving the drive affects the amount of unbalance. According to PEAC, if the system is very strong and has a large transformer and high available fault current, the unbalance will be greater than with a weaker system. Table 1 on page 16 lists line-to-line voltages (first three columns), calculated voltage unbalance (fourth column), line currents (next three columns), and calculated current unba- lance (last column) for a typical 5-hp VFD for voltage unbalance conditions. Table 2 on page 16 shows the current waveforms for the same unbalanced conditions noted in Table 1. As you can see, the current of one or two phases changes, as the current unbalance increases, from a double-pulse waveform, which is characteristic of a VFD, to a single-pulse waveform. The 3rd harmonic component also increases as the current unbalance increases, which is unusual for phase-to-phase non-linear load. So, how do you go about reducing this problem? First, you measure the VFD's line current during normal operation, just to verify if, in fact, there's an overcurrent in any phase. If the measured value on any phase is not greater than the VFD rated line current and the reason for tripping is due to an upstream current unbalance relay, then increase the trip setting of the relay to a maximum of 30%. One of the benefits of VFDs is that the line side voltage unbalance or the current unbalance does not reflect on the motor side; therefore, you can relax the unbalance trip setting without any concern for damage to the motor connected to the VFD. Second, if there is a significantly higher phase current (higher than expected for the VFD load), then calculate the amount of voltage unbalance at the VFD power panel. (According to PEAC, greater than 2A per hp load or greater than 15% unbalance is beyond normal expectations for VFD.) Third, if the voltage is unbalanced by more than 2%, then try rebalancing the line voltages by redistributing as evenly as possible all single-phase loads across all three phases, or relocating unbalanced loads to different power panels, or correcting all overloads within the building. Let's try a sample problem Let's say you measure VsubAB at 448V, VsubBC at 465V, and VsubAC at 450V. Next, you do some simple arithmetic calculations and insert the results into the following equation: %Unbalance = ((maximum deviation from average) divided by (average of all 3 phase-to-phase voltages)) times 100. Obviously, the average voltage is the sum of all three voltages divided by 3, which, in this case, is 454V. The maximum deviation is 465V minus 454V, or 11V. Plugging these values into the above equation yields 2.4%. Based on this, you need to do some single-phase load relocation work. What happens if load relocation doesn't work? You may have to install an AC line reactor at the problem VFD. It will reduce line currents and harmonic distortion as well. You should size the reactor to carry the VFD's full load current, and it should have an impedance rating anywhere from 2% to 6%. (These ratings describe the expected rms voltage drop across the reactor at rated current.) Choosing the best rating between 2% and 6% can be a problem. If you choose a higher rating, the higher voltage drop will give you the greatest amount of current balance but you'll sacrifice the responsiveness of your VFD. PEAC suggests starting with a 3% impedance. There are other benefits to ridding your system of line-current unbalance. According to PEAC, balancing 3-phase voltages can reduce system losses and prevent costly shutdowns. Besides reducing current unbalance and harmonic distortion, adding reactors also can increase power factor when unbalanced 3-phase voltages persist. And, they help protect VFDs from motor-starting and capacitor-switching transients. Suggested Reading Practical Guide to Quality Power for Sensitive Equipment, Second Edition. Order #6670; Electronic Drives, Order #6113. To order, call 1-800-543-7771.
{"url":"http://ecmweb.com/print/content/beware-line-current-unbalance-vfds?page=12","timestamp":"2014-04-19T05:32:18Z","content_type":null,"content_length":"20871","record_id":"<urn:uuid:647941be-98df-4f6b-bc4f-c08d33f5a6d7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Newest &#39;riemann-zeta-function&#39; Questions - Page 2 The Riemann zeta function is the function of one complex variable $s$ defined by the series $\zeta(s) = \sum_{n \geq 1} \frac{1}{n^s}$ when $\operatorname{Re}(s)>1$. It admits a meromorphic continuation to $\mathbb{C}$ with only a simple pole at $1$. This function satisfies a functional equation ... learn more… | top users | synonyms
{"url":"http://mathoverflow.net/questions/tagged/riemann-zeta-function?page=2&sort=newest&pagesize=15","timestamp":"2014-04-17T08:00:22Z","content_type":null,"content_length":"180021","record_id":"<urn:uuid:79f2b3c8-9f44-46e5-90f0-0489d11b81ec>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple time series plot using R : Part 2 October 4, 2011 By "We think therefore we R" I would like to share my experience of plotting different time series in the same plot for comparison. As an assignment I had to plot the time series of Infant mortality rate(IMR) along with the SOX emission(sulphur emission) for the past 5 decades in the same graph and compare how the intensities have been varying in the past 5 decades.Well to start with there is a problem of how to get these plots in the same graph as one is a mortality rate and other is an emission rate, having different units of measurements! What we essentially want to do is to see how the intensity of these problem has been changing, whether the intensity has increased/decreased. From a policy makers standpoint, whether it requires immediate attention or not. So what we can do instead is divide all the IMR values by the maximum IMR value that we have for the past 5 decade and store them as "IMR.std". Similarly divide all the SOX values by the maximum SOX value and store it in "SOX.Std". What we have achieved is a parsimonious way of representing the 2 variables that have values between "0 and 1".(Achieving the desired Now that we have "IMR.Std." and "SOX.Std." both with values between "0-1" I can plot them in the same graph. Recalling from the previous post: # Make sure the working directory is set to where the file is, in this case "Environment.csv": a <- read.csv("Environment.csv") # Plotting the "IMR.Std" on a graph plot(a$Year, a$IMR.Std., type="l", xlab= "Years", ylab= "Intensity of problem(normalized between 0-1)", col="green" , lwd=2) # Adding the plot for "SOX.Std."# lines(...) command basically adds the argument variable to the existing plot. lines(a$Year, a$SOX.std., type="l", col="red", lwd=2) Ideally this should have done the job. Giving me the IMR.Std(in green) and SOX.Std.(in red) in the same plot. But this dint happen for the reason that the data for SOX was available only after 1975 and also the data was not available for alternate years. Well I thought that R would treat it trivially and just plot the "non-NA" values of SOX.std. that were there but as it happens that this was not such a trivial thing for R. It demands a lot more rigor(just like a mathematical proof), to execute a command, not taking anything for granted. Hence to get the desired result I had to specify that it considers only the "non-NA" values for SOX.Std. The code for SOX.std had to be altered a bit: a <- read.csv("Environment.csv") plot(a$Year, a$IMR.Std., type="l", xlab= "Years", ylab= "Intensity of problem(normalized between 0-1)", col="green" , lwd=2) # All I need to make sure now is that I direct R to refer to only the "non-NA" values in the SOX.Std. variable.lines( a$Year[ !is.na(a$SOX.std.) ], a$SOX.std.[ !is.na(a$SOX.std.) ], type="l", col= "red", lwd=2) This is how the plot finally looks like. It was Utkarsh's generosity, who gave me the codes, that saved me a lot of time in solving this small issue, I wish to pass this on as it might save someone else's. daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/simple-time-series-plot-using-r-part-2/","timestamp":"2014-04-18T13:21:12Z","content_type":null,"content_length":"41477","record_id":"<urn:uuid:a70dcd5e-51cf-4e93-aca9-205e37bff965>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
BitBlt diagonal roll over effect October 6th, 2012, 03:57 PM BitBlt diagonal roll over effect I have a little big problem about how can I move a picture on the diagonal (roll over effect). I attached the example of roll over effect from right to left. So how is done diagonally effect? Any ideas are welcome. October 11th, 2012, 01:27 PM Re: BitBlt diagonal roll over effect If I understand correctly, you want to move an image in a diagonal direction, yes? This means changing both X and Y coordinates. So, if you were to increase both X and Y by 1 pixel, the image would appear to move diagonally.
{"url":"http://forums.codeguru.com/printthread.php?t=528235&pp=15&page=1","timestamp":"2014-04-16T14:57:56Z","content_type":null,"content_length":"5068","record_id":"<urn:uuid:82fc137d-b6b8-4a0d-a086-29ee7d2102bc>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] definition of N without quantifying over infinite sets friedman@math.ohio-state.edu friedman at math.ohio-state.edu Tue Aug 10 09:57:13 EDT 2004 Forster wrote: > The significance of FFF - Friedman's Finite > Form of Kruskl's theorem is of course that it is a fct about N probable > only by reasoning about infinite sets. > When explaining this to my students I of course have to anticipate that > the inductive definition of N involves quantifying over ininite sets - FFF has much stronger properties than indicated. FFF is provable only by reasoning about uncountably many infinite sets - specifically what is normally called impredicativity. In particular, the use of infinite sets for FFF is not removable in the same way that simply regarding N as a predicate removes use of infinite sets. In fact, the use of infinite sets for FFF is not even removeable using predicates defined by natural number recursion (along with quantification over natural numbers). The use of predicates defined by natural number recursion (along with quantification over natural numbers) is enough to prove, e.g., the Paris/Harrington Ramsey theorem. For state of the art FFFs see my paper in the Feferfest volume dedicated to Solomon Feferman. Harvey Friedman More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-August/008395.html","timestamp":"2014-04-17T16:19:44Z","content_type":null,"content_length":"3898","record_id":"<urn:uuid:f4566097-9515-44de-a0f2-8524df81886d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: After being rearranged and simplified, which of the following equations could be solved using the quadratic formula? Check all that apply. A. 2x2 - 3x + 10 = 2x + 21 B. 2x2 - 6x - 7 = 2x2 C. 5x2 + 2x - 4 = 2x2 D. 5x3 - 3x + 10 = 2x2 Best Response You've already chosen the best response. A. and C. B. is linear and D. is a cubic Best Response You've already chosen the best response. can u explane plz...i want tolean this thing...quadratic formula..@robtobey Best Response You've already chosen the best response. I'll explain, bro. Best Response You've already chosen the best response. The quadratic formula is awesome. Basically the idea is that any polynomial where the highest exponent for x is 2, you can just plug the different coefficients into the formula, and it'll tell you what values of x make the polynomial 0. Now, the only trick is that if you have an equation like that, you have to put it in the form Ax^2 + Bx + C = 0. A, B, and C can be any constant numbers. As long as they aren't variables, you're good. Best Response You've already chosen the best response. thanks a lot@smoothmath Best Response You've already chosen the best response. Now, one thing... those coefficients are allowed to be 0. Best Response You've already chosen the best response. Well. As long as A isn't 0. Because in the quadratic, then you would divide by 0. But B or C can be. Just be aware of that. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4e3c2e3c0b8bfc76a3f65baa","timestamp":"2014-04-16T08:06:36Z","content_type":null,"content_length":"42468","record_id":"<urn:uuid:ac9aa5db-d7f7-41b2-9f4a-bb099ac68bcc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Confused on finding a function with multiple variables (Pizzabox) October 22nd 2011, 04:02 PM #1 Oct 2011 Confused on finding a function with multiple variables (Pizzabox) (still confused) Well the problem is this 1) Find a polynomial function p(x) that computes the volume of the box in terms of x. What is the degree of p? 2) Find a polynomial function q(x) that computes the exposed surface area of the closed box in terms of x. What is the degree of q? What are the explicit dimensions if the exposed surface of the area is 600 square inches? I was having trouble with this, because there is 'x' in 2 sides. At first I was thinking 50 + 20, but I don't know where that came from. I was thinking that there was 3 'x' in one side, and 3 'x' in the other half So it was like 2x^3? But I'm confused if that makes any sense at all. May anyone help me start this off and explain the steps? I understand the steps now and where the 600 came from Thanks for all your help Last edited by Chaim; October 22nd 2011 at 07:52 PM. Re: Confused on finding a function with multiple variables (Pizzabox) folded box length = $20-2x$ folded box width = $\frac{50-2x}{2} = 25-x$ folded box height = $x$ Re: Confused on finding a function with multiple variables (Pizzabox) Oh.... I see now So to find the volume, would be multiplying the length, width, and height together right? (Since it's a rectangle?) So (20-2x) * (25 - x) * (x), I think 500 - 20x - 50x + 2x^2 Which, in order would be 2x^2 - 70x + 500 Though I am confused with to 'find the degree of 'p' (70 +- sqrt 4900 - 40000)/2 Though that is wrong cause I can't get the square root of a negative number I am confused on what is the 'degree'? I thought it was like 90 degree, as in right angle, but that makes no sense, and I'm assuming it's not that type of degree. Re: Confused on finding a function with multiple variables (Pizzabox) V = (20-2x)(25-x)(x) you forgot to multiply by the last x, the height the degree of a polynomial is the highest power of x in the polynomial Re: Confused on finding a function with multiple variables (Pizzabox) Re: Confused on finding a function with multiple variables (Pizzabox) So the second part is the surface area of a rectangle which is what I am on right now Re: Confused on finding a function with multiple variables (Pizzabox) So I'm on the second part right now to find the surface area of the rectangle which is 2ab + 2bc + 2ac right? a = 20-2x b = 25-x c = x 2(20-2x)(25-x) + 2(25-x)(x) + 2(20-2x)(2x) =2(500-20x-50x+2x^2) + 2(25x+x^2) + 2(40x-4x^2) =2(500-70x+2x^2) + 2(25x+x^2) + 2(40x-4x^2) Though I think I messed up, it's suppose to be -50x instead of -60x, but I don't know how did I do it wrong And how would you do "What are the explicit dimensions if the exposed surface of the area is 600 square inches?" Also, how did you get (50-2x)/2 I understand the 50-2x, but all over the 2, how did you get that? Re: Confused on finding a function with multiple variables (Pizzabox) So I'm on the second part right now to find the surface area of the rectangle which is 2ab + 2bc + 2ac right? a = 20-2x b = 25-x c = x 2(20-2x)(25-x) + 2(25-x)(x) + 2(20-2x)(2x) =2(500-20x-50x+2x^2) + 2(25x+x^2) + 2(40x-4x^2) =2(500-70x+2x^2) + 2(25x+x^2) + 2(40x-4x^2) Though I think I messed up, it's suppose to be -50x instead of -60x, but I don't know how did I do it wrong And how would you do "What are the explicit dimensions if the exposed surface of the area is 600 square inches?" Also, how did you get (50-2x)/2 I understand the 50-2x, but all over the 2, how did you get that? This is the part I am still confused with, help please? Re: Confused on finding a function with multiple variables (Pizzabox) That middle term should be 25x- x^2. =2(500-70x+2x^2) + 2(25x+x^2) + 2(40x-4x^2) Though I think I messed up, it's suppose to be -50x instead of -60x, but I don't know how did I do it wrong No, neither -50x nor -60 x. (-140+ 50+ 80)x= -10x. And with that "-x^2" in the middle term, you have (4- 2- 8)x^2= -6x^2. it should be 1000- 10x- 6x^2. [quote] And how would you do "What are the explicit dimensions if the exposed surface of the area is 600 square inches?" Solve 1000- 10x- 6x^2= 600 and use that value of x to find the three lengths. Also, how did you get (50-2x)/2 I understand the 50-2x, but all over the 2, how did you get that? The long side of the original rectangle was 50 and there are two sections of length x taken out so that leaves 50-2x. And both top and bottom are cut from that: each has length (50-2x)/2= 25- x. October 22nd 2011, 04:31 PM #2 October 22nd 2011, 04:51 PM #3 Oct 2011 October 22nd 2011, 05:00 PM #4 October 22nd 2011, 05:07 PM #5 Oct 2011 October 22nd 2011, 05:17 PM #6 Oct 2011 October 22nd 2011, 05:22 PM #7 Oct 2011 October 22nd 2011, 06:42 PM #8 Oct 2011 October 23rd 2011, 08:18 AM #9 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/pre-calculus/191058-confused-finding-function-multiple-variables-pizzabox.html","timestamp":"2014-04-21T15:42:09Z","content_type":null,"content_length":"63220","record_id":"<urn:uuid:3ccb9383-590e-46d7-8ebf-c049827df45a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Lombard Trigonometry Tutor ...I am also a proficient user of Microsoft Excel and am able to tutor all of its functionalities.I am a proficient user of Excel and skilled in both facets; Advanced Spreadsheet Formulas and VBA (Visual Basic for Applications) programming. I provide Excel tutoring for students who seek to learn fo... 18 Subjects: including trigonometry, geometry, algebra 2, study skills ...Some of my favorite moments as a teacher have been when I was able to work with a student one on one and see their excitement when they finally understood material they had been struggling with. There is nothing more rewarding as a teacher than instilling a confidence in a student which they nev... 12 Subjects: including trigonometry, calculus, algebra 2, geometry ...I have taught physiology along with anatomy in a career college in New York. I have taken (and got the highest grade) medical school physiology. I am a medical doctor, who practiced for 23 years before leaving the practice of medicine in 2008. 17 Subjects: including trigonometry, chemistry, reading, biology ...Whether it is math abilities, general reasoning, or test taking abilities that need improvements, I can help you progress substantially. I work with systems of linear equations and matrices almost every day. My PhD in physics and long experience as a researcher in theoretical physics make me well qualified for teaching linear algebra. 23 Subjects: including trigonometry, calculus, physics, statistics ...And, I taught C++ computer programming courses at a major university for 8 years. I am a retired computer systems professional with an undergraduate degree in Mathematics and a Masters in Computer Science. I have also completed certification testing for an Illinois State teaching certificate in Mathematics for grades 6 – 12. 14 Subjects: including trigonometry, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Lombard_Trigonometry_tutors.php","timestamp":"2014-04-19T14:47:41Z","content_type":null,"content_length":"24183","record_id":"<urn:uuid:a90e09f9-5db4-4241-9e26-b53738cc7865>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Faculty of Arts & Science 2014-2015 Calendar Calendar Home | Course Timetables Computer Science University Professor Emeritus S. Cook, SM, PhD, FRS, FRSC Professors Emeriti R. Baecker, MSc, PhD D. Corneil, MA, PhD E. Hehner, MSc, PhD R. Holt, PhD C. Gotlieb, MA, PhD, DMath, DEng, FRSC R. Mathon, MSc, PhD (University of Toronto Mississauga) J. Mylopoulos, MSc, PhD, FRSC D. Wortman, MSc, PhD Senior Lecturer Emeritus J. Clarke, MSc, PhD University Professors A. Borodin, MSc, PhD, FRSC G. Hinton, PhD, FRS, FRSC Professor and Chair of the Department S. Dickinson, MSc, PhD Professor and Vice Chair of the Department M. Chechik, MSc, PhD Professor and Associate Chair (Graduate Studies) A. Jepson, PhD Senior Lecturer and Associate Chair (Undergraduate Studies) P. Gries, MSc T. Abdelrahman, MSc, PhD F. Bacchus, MSc, PhD R. Balakrishnan, MSc, PhD C. Boutilier, M Sc, PhD M. Chechik, MSc, PhD S. Dickinson, MSc, PhD S. Easterbrook, PhD F. Ellen, M Math, PhD W. Enright, MSc, PhD (University of Toronto Scarborough) E. Fiume, MSc, PhD D. Fleet, MSc, PhD (University of Toronto Scarborough) V. Hadzilacos, PhD (University of Toronto Scarborough) G. Hirst, MSc, PhD (University of Toronto Scarborough) K. Jackson, MSc, PhD A. Jepson, PhD N. Koudas, MSc, PhD (University of Toronto Scarborough) K. Kutulakos, MSc, PhD H. Levesque, MSc, PhD, FRSC P. Marbach, MSc, PhD S. McIlraith, MMath, PhD R. Miller, MSc, PhD, FRSC M. Molloy, MMath, PhD (University of Toronto Scarborough) R. Neal, BSc, PhD G. Penn, MSc, PhD T. Pitassi, MSc, PhD C. Rackoff, PhD (University of Toronto Mississauga) K. Singh, MSc, PhD S. Stevenson, MSc, PhD S. Toueg, MA, PhD R. Zemel, MSc, PhD Associate Professors A. Bonner, MSc, PhD (University of Toronto Mississauga) M. Brudno, MSc, PhD C. Christara, MSc, PhD J. Danahy, MScUrb & DesPl E. de Lara, MSc, PhD A. Demke-Brown, MSc, PhD Y. Ganjali, MSc, PhD B. Schroeder, MSc, PhD (University of Toronto Scarborough) K. Truong, PhD Assistant Professors A. Farzan, PhD S. Fidler, PhD R. Johnson, MSc, PhD (University of Toronto Scarborough) R. Salakhutdinov, PhD R. Urtasun, PhD V. Vaikuntanathan, SM, PhD (University of Toronto Mississauga) D. Wigdor, MSc, PhD (University of Toronto Mississauga) Senior Lecturers G. Baumgartner, MSc J. Campbell, MMath M. Craig, MSc S. Engels, MMath T. Fairgrieve, MSc, PhD P. Gries, MEng D. Heap, MSc D. Horton, MSc F. Pitt, MSc, PhD K. Reid, MSc Cross Appointed C. Amza, PhD P. Andritsos, PhD G. Bader, PhD C. Beck, PhD M. Chignell, M Sc, Ph D M. Consens, PhD M. Fox, PhD B. Frey, PhD A. Goel, PhD M. Gruninger, PhD A. Jacobsen, MSc, PhD I. Jurisica, MSc, PhD P. Kim, PhD B. Li, MSc, PhD D. Lie, PhD J. Liebeherr, PhD K. Lyons, MSc, PhD E. Mendelsohn, M Sc, Ph D (Professor Emeritus) (University of Toronto Scarborough) A. Mihailidis, PhD Q. Morris, PhD A. Moses, PhD F. Roth, PhD G. Steffan, MSc, PhD M. Stumm, MSc (Math), PhD A. Urquhart, MA, PhD (Professor Emeritus) A. Veneris, MSc, PhD E. Yu, MSc, PhD Z. Zhang, PhD Adjunct and Status Only D. Aruliah, PhD J. Birnholtz, PhD A. Borgida, PhD M. Braverman, PhD B. Buxton, MSc C.Forlines, PhD J. Glasgow, PhD S. Goldberg, PhD A. Goldenberg, PhD A. Hertzmann, PhD A. Kreinin, MSc, PhD G. Lakemeyer, PhD A. LaMarca, MS, PhD C. Landreth, MS Y. Lesperance, MSc, PhD R. Lilien, PhD K. Moffat, PhD C. Munteanu, PhD D. Penny, PhD K. Pu, PhD D. Reilly, PhD F. Rudzicz, PhD P. Salvini, PhD T. Savor, M Sc, Ph D B. Selic, Magister Ing F. Shein, PhD, PEng C. Sminchisescu, MS, PhD J. Stam, PhD B.Taati, PhD T. Topalouglou, PhD J. Tsotsos, PhD What is Computer Science? Despite the name, Computer Science is not really a science of computers at all. Computers are quite remarkable electronic devices, but even more remarkable is what they can be made to do: simulate the flow of air over a wing, manage communication over the Internet, control the actions of a robot, synthesize realistic images, play grandmaster-level chess, and on and on. Indeed the application of computers in activities like these has affected most areas of modern life. What these tasks have in common has little to do with the physics or electronics of computers; what matters is that they can be formulated as some sort of computation. This is the real subject matter of Computer Science: computation, and what can or cannot be done computationally. In trying to make sense of what we can get a computer to do, a wide variety of topics come up. There are, however, two recurring themes. The first is the issue of scale: how big a system can we specify without getting lost in the design, or how big a task can a computer handle within reasonable bounds of time, memory and accuracy. A large part of Computer Science deals with these questions in one form or another. In the area of programming languages and methodology, for example, we look for notations for describing computations, and programming methodologies that facilitate the production of manageable and efficient software. In the theory of computation area, we study resource requirements in time and memory of many basic computational tasks. The second theme concerns the scope of computation. Computers were originally conceived as purely numerical calculators, but today, we tend to view them much more broadly. Part of Computer Science is concerned with understanding just how far computational ideas can be applied. In the area of artificial intelligence, for example, we ask how much of the intelligent behaviour of people can be expressed in computational terms. In the area of human/computer interaction, we ask what sorts of normal day-to-day activities of people might be supported and augmented using computers. Some Computer Science courses are offered in the evening, to allow part-time students to pursue our programs. Introductory courses and some higher-level courses are offered in the summer. The Professional Experience Year Program (PEY) offers students the opportunity to gain valuable work experience in industry, over a twelve to sixteen-month period. It is available to eligible, full-time students pursuing their first degree. Students may also take advantage of the International Exchange Program offered by CIE. Please refer to Student Services & Resources chapter of this Associate Chair - Undergraduate Studies: Senior Lecturer Paul Gries Student Counsellors, Undergraduate Office: Bahen Building, 40 St. George Street, Rooms 4252/4254/4256, M5S 2E4 (416-978-6360, email: ug@cs.utoronto.ca). Web site: www.cs.toronto.edu Computer Science Programs Tuition fees for students enrolled in Computer Science Specialist and Major programs are higher than for other Arts and Science programs. For more information visit www.fees.utoronto.ca . Computer Science Specialist (Science program) This is a limited enrolment program that can only accommodate a limited number of students. Eligibility will be based on a student’s marks in the required courses. The precise mark thresholds outlined below are an estimate of what will be required in the coming POSt admission cycle. Achieving those marks does not necessarily guarantee admission to the POSt in any given year. Applying immediately after first year: • An average mark of at least 67% in CSC148H1 and CSC165H1/CSC240H1 with a minimum mark of 60% in each • Completion of 4 FCEs Applying after second or third year: Note that students admitted to the program after second or third year will be required to pay retroactive program fees. (12.0 full course equivalents[FCEs], including at least 1.5 FCEs at the 400-level) First year (2.5 FCEs): 1. (CSC108H1, CSC148H1)/CSC150H1, CSC165H1/CSC240H1; (MAT135H1, MAT136H1)/MAT137Y1/MAT157Y1 Second year (3.5 FCEs): 2. CSC207H1, CSC209H1, CSC236H1/CSC240H1, CSC258H1, CSC263H1/CSC265H1; MAT221H1/MAT223H1/MAT240H1; STA247H1/STA255H1/STA257H1 1. Students with a strong background in an object-oriented language such as Python, Java or C++ may omit CSC108H1 and proceed directly with CSC148H1. [There is no need to replace the missing half-credit; however, please base your course choice on what you are ready to take, not on “saving” a half-credit]. 2. CSC240H1 is an accelerated and enriched version of CSC165H1 plus CSC236H1, intended for students with a strong mathematical background, or who develop an interest after taking CSC165H1. If you take CSC240H1 without CSC165H1, there is no need to replace the missing half-credit; but please see Note 1. 3. Consult the Undergraduate Office for advice about choosing among CSC108H1 and CSC148H1, and between CSC165H1 and CSC240H1. Later years (6.0 FCEs): 3. CSC369H1, CSC373H1/CSC375H1 4. 5 FCEs from the following, of which at most 2.0 FCEs may be from MAT or STA courses, and at least 1.5 FCEs must be 400-level CSC, BCB, or ECE courses: CSC: any 300-/400-level; BCB410H1, BCB420H1, BCB430Y1; ECE385H1, ECE489H1; MAT224H1, MAT235Y1/MAT237Y1/MAT257Y1, any 300-/400-level MAT course except MAT329H1, MAT390H1, MAT391H1; STA248H1/STA261H1, any 300-/400-level STA course No more than more than 1.0 FCE from CSC490H1, CSC491H1, CSC494H1, CSC495H1, BCB430Y1 may be used to fulfill program requirements The choices in 4 must satisfy the requirement for an integrative, inquiry-based activity by including one of the following half-courses: CSC301H1, CSC318H1, CSC404H1, CSC411H1, CSC418H1, CSC420H1, CSC428H1, CSC454H1, CSC485H1, CSC490H1, CSC491H1, CSC494H1, CSC495H1. This requirement may also be met by participating in the PEY (Professional Experience Year) program. Preparing for graduate study in Computer Science Strong students should consider the option of further study in graduate school (where the degrees offered are typically M.Sc. and Ph.D.). If you find yourself frequently receiving marks in the B+ range or better, you should consult with faculty members to learn more about graduate school and whether it would be a good option for you. You will want to ask for advice on your particular interests — and you will find faculty members are happy to talk to you — but there are also some course choices that should be considered by all students thinking of graduate study in Computer The focuses can help you further refine your areas of interest, but you should not take courses exclusively in one area. You will benefit by having taken an advanced course requiring considerable software development and a theoretical course. It will be especially beneficial to have done a project course (CSC494H1/CSC495H1), a capstone course (CSC490H1/CSC491H1), and/or a summer research project. It is good if this individual work is in the area where you eventually decide you'd like to do your own research, but that is not essential; what you need most is some experience doing work on your own, under the mentorship of an experienced researcher. Choosing courses This program offers considerable freedom to choose courses at the 300-/400-level, and you are free to make those choices on your own. We are eager to offer guidance, however, and both our Undergraduate Office and individual faculty members are a rich source of advice. Computer Science Specialist: Focuses You have the option of completing one or more of the focuses defined below. Focuses are sets of courses that direct you toward expertise in particular areas of Computer Science, such as game design, theory of computation, human-computer interaction, and many more. These focuses are meant to help your choice, not to constrain it, and each focus has at least one faculty member who would be happy to discuss it with you. More information about each of the focuses can be found on our web site at http://web.cs.toronto.edu/program/ugrad/programs.htm Each focus has a set of required courses that must be completed to satisfy the focus. Most focuses also have an additional list of related courses that students in the focus may find interesting. In some cases these are courses offered by different departments or faculties. Note that you must petition to take Engineering courses or graduate-level courses. In many cases, the courses required of the focus will also satisfy Specialist program requirements. Focuses that require courses in addition to the Specialist requirements have a note in the descriptions below. To enrol in one or more focuses, students must first be enrolled in the Computer Science Specialist program. Enrolment instructions can be found on the Arts & Science Current Students subject POSt enrolment web site. Focuses can be chosen on ROSI after admission to the program, which begins in July. Focus in Scientific Computing (3.5 FCEs) Scientific computing studies the world around us. Known and unknown quantities are related through certain rules, e.g. physical laws, formulating mathematical problems. These problems are solved by numerical methods implemented as algorithms and run on computers. The numerical methods are analyzed and their performance (e.g. accuracy, efficiency) studied. Problems, such as choosing the optimal shape for an airplane (to achieve, for example, minimal fuel consumption), finding the fair price for derivative products of the market, or regulating the amount of radiation in medical scans, can be modeled by mathematical expressions, and solved by numerical techniques. Students wishing to study scientific computing should have a strong background in mathematics, in particular calculus of several variables, linear algebra and statistics, be fluent in programming, and have a good understanding of data structures and algorithm design. Required Courses: 1. MAT235Y1/MAT237Y1/MAT257Y1, 2. 1.5 FCE from the following: CSC336H1, CSC436H1, CSC446H1, 456H1 3. 1 FCE from the following: CSC320H1/418H1, CSC321H1/411H1, CSC343H1, CSC384H1, CSC358H1/CSC458H1 Suggested Related Courses: MAT224H1/MAT240H1, MAT244H1, MAT334H1/MAT354H1, MAT337H1/MAT357H1 It is also recommended that students in this focus consider taking a half-course or two from the basic sciences (such as physics, chemistry, biology), as these sciences provide the sources of many problems solved by numerical techniques. Focus in Artificial Intelligence (3.5 FCEs) Artificial Intelligence (AI) is aimed at understanding and replicating the computational processes underlying intelligent behaviour. These behaviours include the perception of one's environment, learning how that environment is structured, communicating with other agents, and reasoning to guide one's actions. This focus is designed to provide students with an introduction to some of the key scientific and technical ideas that have been developed in AI. There are four different sub-areas of AI represented in our department: Computer Vision, Computational Linguistics (CL), Machine Learning (ML), and Knowledge Representation and Reasoning (KR). These areas cover a wide variety of ideas and techniques. Students wanting to achieve this focus are required to take courses from at least two of these sub-areas. Required Courses: 1. 1 FCE from the following: MAT235Y1/237Y1/257Y1, APM236H1/MIE262H1/STA248/261H1, CSC336H1, CSC310H1, CSC330H1, CSC438H1, CSC448H1, CSC463H1 2. 2.5 FCEs from the following covering at least two of the four areas a) CSC401H1, CSC485H1 b) CSC320H1, CSC420H1 c) CSC321H1, CSC411H1, CSC412H1 d) CSC384H1, CSC486H1 Suggested Related Courses: CSC200Y1, CSC324H1, COG250Y1, PSY270H1, PHL232H1, PHL342H1, STA414H1 Focus in Computational Linguistics & Natural Language Processing (4.0 FCEs) How can we build and analyze systems for enabling users to communicate with computers using human language (also called natural language), and for automatically processing the vast amounts of data on the web available in the form of text? The focus covers appropriate material on natural language interfaces, as well as tools such as document summarization, intelligent search over the web, and so on. Students considering this focus are encouraged to consider a second Major in Linguistics. [Note 0.5 FCEs in LIN are in addition to the 12.0 FCEs required to complete the Specialist program] Required Courses 1. CSC318H1 2. CSC401H1, CSC485H1 3. LIN200H1 4. 1.5 FCE from the following: CSC309H1, CSC321H1, CSC330H1, CSC411H1, CSC428H1, CSC486H 5. 0.5 FCE from the following: PSY100H1, COG250Y1 Suggested Related Courses: Other relevant Computer Science courses, depending on the student's interests, include other courses in artificial intelligence such as CSC384H1 or CSC420H1. Linguistics, Psychology, and Cognitive Science are all directly relevant to this focus, and we recommend that interested students take additional courses from any or all of them. Focus in Computer Vision (3.5 FCEs) Computer vision is the science and technology of machines that can see. As a science, the goal of computer vision is to understand the computational processes required for a machine to come to an understanding of the content of a set of images. The data here may be a single snapshot, a video sequence, or a set of images from different viewpoints or provided by medical scanners. The computer vision focus introduces students to the study of vision from a computational point of view. That is, we attempt to clearly define computational problems for various steps of the overall process, and then show how these problems can be tackled with appropriate algorithms. Students who wish to pursue computer vision should have an understanding of linear algebra and calculus of several variables. Moreover, they should be solid programmers and have a good understanding of data structures and algorithm design. These basic tools are required in order to first pose computational vision problems, and then develop and test algorithms for their solution. Required Courses: 1. MAT235Y1/MAT237Y1/MAT257Y1, CSC320H1, CSC336H1, CSC411H1, CSC420H1 2. 0.5 FCE from the following: CSC418H1, CSC412H1, CSC2503H (Note: students must petition to take this course.) Suggested Related Courses: The following are examples of topics and courses that fit naturally with a study of computational vision. The list is meant to be illustrative of the range of cognate topics, but is not necessarily complete. The ordering is alphabetical and not indicative of importance. Note: there are prerequisites for many of these courses that we do not list here. APM462H1, COG250Y1, CSC384H, CSC485H1, CSC486H1, ECE216H1, PHL232H1, PHY385H1, PSL440Y1, PSY270H1, PSY280H1, STA257H1/STA261H1 Focus in Computer Systems (3.5 FCEs) Software systems are complex and interesting. Poorly done systems can be incredibly expensive: they can cost society billions of dollars, and sometimes make the difference between life and death. Rapid changes in technology and applications means that the underlying systems must continually adapt. This focus takes you under the covers of software systems, laying bare the layers and introducing you to concurrency issues, scalability, multiprocessor systems, distributed computing, and more. Required Courses: 1. CSC324H1, CSC343H1, CSC443H1, CSC469H1, CSC488H1 2. 1 FCE from the following: CSC372H1/ECE385H1, CSC358H1, CSC458H1 Suggested Related Courses: 1. CSC301H1, CSC309H1, CSC410H1, ECE489H1 2. Relevant courses offered at UTM: CSC347H5, CSC423H5, CSC427H5 3. Relevant courses offered by Engineering: ECE454H1, ECE568H1 Focus in Game Design (3.5 FCEs): Video game design combines several disciplines within computer science, including software engineering, graphics, artificial intelligence and human-computer interaction. It also incorporates elements of economics, psychology, music and creative writing, requiring video game researchers to have a diverse, multidisciplinary set of skills. Students who wish to pursue video game design should have an understanding of linear algebra (for computer graphics modeling), computer hardware and operating systems (for console architecture), data structures, and algorithm design. Students will gain a general knowledge of the more advanced topics listed in the courses below. Required courses: 1. CSC300H1, CSC301H1, CSC318H1, CSC324H1, CSC384H1, CSC418H1, CSC404H1 Suggested Related Courses: 1. CSC358H1, CSC458H1, CSC428H1, 2. MUS300H1, INI222H1, INI465H1, ENG235H1 3. ECO326H1, MGT2056H Focus in Human-Computer Interaction (6.5 FCEs) Human-Computer Interaction (HCI) is the scientific study of the use of computers by people and the design discipline that informs the creation of systems and software that are useful, usable, and enjoyable for the people who use them. HCI students have exciting opportunities for research and graduate school; HCI professionals often have jobs with titles such as user interface architect, user interface specialist, interaction designer, or usability engineer. [Note 3.5 FCEs in SOC & PSY are in addition to the 12.0 FCEs required to complete the Specialist program] Required Courses: 1. CSC300H1, CSC301H1, CSC318H1, CSC428H1 2. SOC101Y1, SOC200H1, SOC202H1, SOC302H1 [To enrol in restricted SOC courses, please contact the CS Undergraduate Office in the July preceeding the academic year in which you plan to take the 3. 1 FCE from the following: CSC309H1, CSC320H1, CSC321H1, CSC343H1, CSC384H1, CSC401H1, CSC404H1, CSC418H1, CSC485H1, CSC490H1/491H1 4. PSY100H1, PSY270H1/PSY280H1 Suggested Related Courses: 1. CSC454H1, CSC290H1 2. At least one half-course in Human Factors or Ergonomics offered by the Department of Mechanical and Industrial Engineering, such as MIE240H, MIE343H, MIE344H, MIE448H, and MIE449H. Human factors is a sister discipline to human-computer interaction that approaches problems in slightly different ways. 3. WDW260H1 Focus in Theory of Computation (4.5 FCEs + 2.0 FCEs from required Specialist courses) Why is it easy to sort a list of numbers, but hard to break Internet encryption schemes? Is finding a solution to a problem harder than checking that a solution is correct? Can we find good approximate solutions, even when the exact solutions seem out of reach? Theory of Computation studies the inherent complexity of fundamental algorithmic problems. On one hand, we develop ground-breaking efficient data structures and algorithms. On the other, we have yet to develop good algorithms for many problems despite decades of effort, and for these problems we strive to prove no time- or space-efficient algorithms will ever solve them. While the field has seen some successful impossibility results, there are still many problems -- such that those underlying modern cryptography and security -- for which we do not know either efficient algorithms or strong lower bounds! This focus takes a rigorous, mathematical approach to computational problem-solving: students will gain a deep understanding of algorithm paradigms and measures of problem complexity, and develop the skills necessary to convey abstract ideas with precision and clarity. Many of our students go on to graduate studies and sophisticated algorithmic work in industry. This focus has natural ties with many branches of mathematics and is the foundation of many computer science fields. Consequently, our students often apply their theoretical knowledge to other fields of interest. We strongly encourage taking the enriched theory courses (CSC240H1, CSC265H1, CSC375H1) as well as specialist/major versions of the MAT requirements for our focus. [Depending on courses selected for points 4 & 5, students may need to complete 0.5-1.0 FCEs in addition to the 12.0 FCEs required to complete the Specialist program.] Required Courses: 1. MAT137Y1/MAT157Y1/MAT237Y1 Note: if MAT237Y1 is used it cannot be counted in the 2 FCE list below. 2. CSC463H1 3. CSC336H1/CSC350H1 4. 1.5 FCEs from the following: CSC310H1, CSC438H1, CSC448H1, MAT443H1, MAT332H1, MAT344H1, At UTM: CSC322H5/MAT302H5, CSC422H5; CSC494H1/CSC495H1 project supervised by a faculty member from the Theory group, or a relevant introductory graduate course in Theory. (Note that students must petition to take a graduate course.) 5. 2 FCEs from the following: APM236H1/MIE262H1, MIE263H1, APM421H1, APM461H1, MAT224H1/MAT247H1, MAT237Y1/MAT257Y1, MAT244H1/MAT267H1, MAT301H1/MAT347Y1, MAT315H1, MAT327H1, MAT334H1/MAT354H1, MAT337H1/MAT357H1, Any 400-level MAT course (except MAT443H1), STA248H1/STA261H1, STA347H1 Recommended Courses: 1. Students are strongly encouraged to take the enriched theory courses: CSC240H1, CSC265H1, and CSC375H1, rather than their regular counterparts: CSC165H1/CSC236H1, CSC263H1, and CSC373H1, Suggested Related Courses: 1. BCB410H1 2. CSC320H1/CSC418H1/CSC420H1, CSC321H1/CSC384H1/CSC411H1/CSC485H1, CSC343H1/CSC443H1, CSC351H1/CSC456H1, CSC358H1/CSC458H1, CSC412H1/CSC465H1/CSC486H1, CSC488H1 Focus in Web and Internet Technologies (3.5 FCEs) The Web and Internet Technologies focus introduces students to the systems and algorithms that power today's large-scale web and Internet applications such as search engines, social networking applications, web data mining applications, and content distribution networks. The focus covers both the algorithm foundations of Web and Internet Technologies, as well as the implementation and system architecture. Students who wish to pursue the Web and Internet Technologies focus should have a solid understanding of statistics, should be good programmers and have a good understanding of data structures and algorithm design. To get practical experience, students pursuing the web and Internet technologies focus are encouraged to do either a term project or a summer USRA carrying out a project in web and internet Required courses: 1. STA248H1, CSC309H1, CSC343H1, CSC358H1, CSC458H1, CSC411H1 2. 0.5 FCEs from the following: CSC310H1, CSC443H1, CSC469H1 Suggested Related Courses: 1. Courses offered at UTM: CSC347H5, CSC423H5, CSC427H5 2. ECE568H1 Computer Science Major (Science program) This is a limited enrolment program that can only accommodate a limited number of students. Eligibility will be based on a student’s marks in the required courses. The precise mark thresholds outlined below are an estimate of what will be required in the coming POSt admission cycle. Achieving those marks does not necessarily guarantee admission to the POSt in any given year. Applying immediately after first year: • An average mark of at least 67% in CSC148H1 and CSC165H1/CSC240H1 with a minimum mark of 60% in each • Completion of 4 FCEs Applying after second or third year: Note that students admitted to the program after second or third year will be required to pay retroactive program fees. (8.0 full course equivalents [FCEs], including at least 0.5 FCEs at the 400-level) First year (2.5 FCEs): 1. (CSC108H1, CSC148H1)/CSC150H1, CSC165H1/CSC240H1; (MAT135H1, MAT136H1)/MAT137Y1/MAT157Y1 Second year (2.5 FCEs): 2. CSC207H1, CSC236H1/CSC240H1, CSC258H1, CSC263H1/CSC265H1; STA247H1/STA255H1/STA257H1 1. Students with a strong background in an object-oriented language such as Python, Java or C++ may omit CSC108H1 and proceed directly with CSC148H1. [There is no need to replace the missing half-credit; however, please base your course choice on what you are ready to take, not on “saving” a half-credit]. 2. CSC240H1 is an accelerated and enriched version of CSC165H1 plus CSC236H1, intended for students with a strong mathematical background, or who develop an interest after taking CSC165H1. If you take CSC240H without CSC165H1, there is no need to replace the missing half-credit; but please see Note 1. 3. Consult the Undergraduate Office for advice about choosing among CSC108H1 and CSC148H1, and between CSC165H1 and CSC240H1. Later years (3.0 FCEs). 3. 3.0 FCEs from the following, with at least 0.5 FCE from 400-level CSC/BCB course; at least 1.0 additional FCE from 300-/400-level CSC/BCB/ECE courses; at least 0.5 additional FCE from 300-/ 400-level courses: CSC: any 200-/300-/400-level; BCB410H1, BCB420H1, BCB430Y1; ECE385H1, ECE489H1; MAT221H1/MAT223H1/MAT240H1, MAT235Y1/MAT237Y1/MAT257Y1, any 300-/400-level MAT course except MAT329H1, MAT390H1, MAT391H1; No more than more than 1.0 FCE from CSC490H1, CSC491H1, CSC494H1, CSC495H1, BCB430Y1 may be used to fulfill program requirements. The choices in 3 must satisfy the requirement for an integrative, inquiry-based activity by including one of the following half-courses: CSC301H1, CSC318H1, CSC404H1, CSC411H1, CSC418H1, CSC420H1, CSC428H1, CSC454H1, CSC485H1, CSC490H1, CSC491H1, CSC494H1, CSC495H1. This requirement may also be met by participating in the PEY (Professional Experience Year) program. Advice on choosing courses towards a Major in Computer Science A Major program in any discipline may form part (but not the whole) of your degree requirements. The Major program in Computer Science is designed to include a solid grounding in the essentials of Computer Science, followed by options that let you explore one or a few topics more deeply. You will also realize what areas you have not studied, and be ready to explore them if your interests change after completing the Major. To give you freedom to choose your path through Computer Science, we have designed the Major to include a minimal set of required courses. There are some courses that we think you ought to consider carefully as you make those choices. CSC373H1 is fundamental to many more advanced Computer Science topics, where designing appropriate algorithms is central. CSC209H1 is a prerequisite to effective work in many application areas. We have designed “packages” of related courses that are intended to accompany the Specialist program in Computer Science, and you may find them helpful in completing your Major too. Please see our web site at http://web.cs.toronto.edu/program/ugrad.htm A significant role of the Major is to allow you to integrate your studies in Computer Science and another discipline. For example, many Computer Science students are also interested in statistics, economics, physics or mathematics. In those cases, it makes sense to enrol in a Major in one discipline and either a Major or a Specialist in the other. If your interests are evenly balanced, the obvious choice is to do two Majors, and that is what we assume here. If you are doing a double Major (two Majors in related disciplines), you might want to consult your college registrar’s office for advice on satisfying the degree requirements with overlapping Majors. A number of sample combinations are listed below for your reference. This is not a complete list: many other combinations are possible. A Major program is generally not enough to prepare you for graduate study in Computer Science, though a complete Specialist is not required. Please consult the advice about graduate study included with the description of the Specialist program in Computer Science. CSC and Mathematics The theoretical foundations of Computer Science are essentially a branch of mathematics, and numerical analysis (the area of CS that studies efficient, reliable and accurate algorithms for the numerical solution of continuous mathematical problems) is also a topic in applied mathematics. If you are interested in both Computer Science and Mathematics, a double major is a good choice. In this double major, you should choose all the theoretical courses in the first three years: CSC165H1, CSC236H1, CSC263H1, CSC373H1 and CSC463H1. If the "enriched" versions are available as alternatives, you should prefer them: CSC240H1 in place of CSC165H1 and CSC236H1, and CSC265H1 and CSC375H1 in place of CSC263H1 and CSC373H1 respectively. If you come to realize that your interests are mathematical after taking some of the non-enriched courses, it's not too late; you should ask us for advice. You should also take at least one of CSC438H1, CSC448H1 and CSC465H1. You should also make sure you take courses in numerical analysis -- CSC336H1 and CSC436H1, and possibly CSC446H1. In the Major in Mathematics, you should prefer courses that are also in the Specialist program in Mathematics: MAT157Y1, MAT240H1, MAT247H1 and so on. Ask the advisors in the Department of Mathematics which courses they would recommend if you're planning a career in mathematics. Don't be afraid to admit your interest in CS. CSC and Bioinformatics/Computational Biology Bioinformatics is a field that came into existence only in the 1990s but has become an extremely fruitful interaction between biological scientists and computer scientists. Deciphering the genome requires not just extremely clever biology but extremely clever computer science, drawing from the study of algorithms and data structures and from data mining. To study bioinformatics, you should enrol in the Specialist program in Bioinformatics and Computational Biology sponsored by the Department of Biochemistry, and also in the Major in Computer Science. Your Computer Science Major should include a selection of courses something like this: BCB410H1, BCB420H1 Some of CSC310H1, CSC324H1, CSC412H1, CSC456H1, CSC463H1 You should seek advice from both the Department of Biochemistry and the Department of Computer Science on how to distribute your courses across the two programs. CSC and Statistics Here your Computer Science course choices should be somewhat similar to those for Computer Science and Mathematics: take the theoretical Computer Science courses up to the 300-level, and prefer the higher-level MAT and STA courses. For example, take STA257H1 and STA261H1 rather than STA247H1 and STA248H1. Within Computer Science, take courses in numerical analysis (CSC336H1 and CSC436H1). Choose also from among information theory (CSC310H1), machine learning (CSC321H1 and CSC411H1), and natural language processing (CSC401H1). CSC and Economics There is considerable opportunity for mutually supporting interests in Computer Science and economics in the area of economic modelling, econometrics and numerical analysis. In Computer Science, you might choose courses such as CSC343H1 (databases), CSC358H1 (networks) and CSC369H1 (operating systems) to acquire the technical background for working with large systems and data sets, and CSC336H1 and CSC436H1 (numerical analysis) to understand the difficulties of large numerical models. If you are interested in financial modelling, you will also want to take CSC446H1 to learn how to handle partial differential equations; to do that, you would want to have taken the necessary mathematical courses. Applying ideas from economics to Computer Science is a little harder, but certainly economic principles apply to databases (CSC443H1) and networks (CSC458H1). CSC358H1 discusses how to model the processes involved in computer networks and in other customer-server systems. CSC454H1 (Business of Software) would also benefit from your background in economics. CSC and Linguistics If you are interested in both Computer Science and Linguistics, you should consider doing a Major in both. Your Major in Computer Science should focus on computational linguistics (CL), the sub-area of AI concerned with human languages (“natural languages”); researchers in this area are interested in developing programs that can “understand” and generate natural language. You should take our Computational Linguistics courses, CSC401H1 and CSC485H1. (They can be taken in either order.) As preparation, you should also take CSC324H1 (programming languages). Other courses you might find valuable are CSC384H1 (AI), CSC343H1 (databases), and the theoretical courses CSC373H1/CSC375H1 and CSC463H1. CSC and Physics If you want to study Computer Science and physics, then as a physicist, you will be interested in how natural processes and human design can take us from the materials and laws of nature to useful computational machinery, and you will want to study CSC258H1 (computer organization -- the way solid-state devices can be combined to build a machine that repeatedly executes instructions) and CSC369H1 (operating systems -- the large software systems that organize the programs people write and run to present the appearance of a well-run self-policing machine). As a computer scientist, you will wonder how accurately you can compute the results of calculations needed in simulating or predicting physical processes. CSC336H1 and CSC436H1 introduce you to numerical analysis, and CSC446H1 applies it to partial differential equations, used to model many physical systems. Both a computer scientist and a physicist will wonder how to write effective programs. CSC263H1 and CSC373H1 teach you to choose appropriate data structures and algorithms, and CSC463H1 helps you to understand whether a problem is computable, and if so, whether the computation takes a reasonable amount of time. In fourth year, you may choose CSC418H1, which depends on and also simulates the behaviour of light and mechanical systems. CSC456H1 deals with high-performance computing of the kind used in scientific computing. CSC420H1 might also be a good choice, though some preparation in artificial intelligence would be helpful. Computer Science Minor (Science program) This is a limited enrolment program that can only accommodate a limited number of students. Eligibility will be based on a student’s marks in the required courses. The precise mark thresholds outlined below are an estimate of what will be required in the coming POSt admission cycle. Achieving those marks does not necessarily guarantee admission to the POSt in any given year. Applying immediately after first year: Applying after second or third year: (4.0 full course equivalents[FCEs]) (CSC108H1, CSC148H1)/CSC150H1, CSC165H1/CSC240H1, CSC207H1, CSC236H1/CSC240H1 1. Students with a strong background in Java or C++ may omit CSC108H1 and proceed directly with CSC148H1. 2. CSC240H1 is an accelerated and enriched version of CSC165H1 plus CSC236H1, intended for students with a strong mathematical background, or who develop an interest after taking CSC165H1. 3. Consult the Undergraduate Office for advice about choosing among CSC108H1 and CSC148H1, and between CSC165H1 and CSC240H1. (Total of above: 2.5 FCEs. If you take fewer than 2.5 FCEs, you must take more than 1.5 FCEs from the next list, so that the total is 4.0 FCEs.) 1.5 credits from the following list, of which at least 1 credit must be at the 300-/400-level: CSC: any 200-/300-/400-level 1. Computer Science Minors are limited to three 300-/400-level CSC/ECE half-courses Computer Science Courses Enrolment notes 1. The University of Toronto Mississauga and University of Toronto Scarborough Computer Science Minor subject POSt is not recognized as a restricted Computer Science subject POSt for St. George course enrolments. 2. No late registration is permitted in any Computer Science course after the first two weeks of classes. Enrolment in most Computer Science courses above 100-level MAY BE restricted. Consult the Calendar or the Arts & Science Registration Instructions and Timetable for details. Prerequisites and exclusions Prerequisites and exclusions for all courses are strictly enforced. Prerequisite waivers can be granted by instructors if the student demonstrates that s/he has background covering the material of the prerequisite course(s). Please refer to the Arts & Science Registration Instructions and Timetable for prerequisite waiver deadlines. Dropping down from enriched to regular courses Students may go to their college to drop down from enriched courses to regular courses. The courses are as follows: from CSC148H1 to CSC108H1, from CSC240H1 to CSC165H1 (or to CSC236H1 if you have already passed CSC165H1), from CSC265H1 to CSC263H1, and from CSC375H1 to CSC373H1. Drop down deadlines: 20149, Fall session: October 3, 2014 20151, Winter session January 30, 2015 Students with transfer credits If you have transfer credits in Computer Science, or a similar subject, for courses done at another university or college, contact our Undergraduate Office (BA4252/4254) for advice on choosing courses. Also ask for advice even if you don’t have transfer credits yet but are considering degree study at the University of Toronto. Without advice, you risk poor course choice or other adverse First Year Seminars The 199Y1 and 199H1 seminars are designed to provide the opportunity to work closely with an instructor in a class of no more than twenty-four students. These interactive seminars are intended to stimulate the students’ curiosity and provide an opportunity to get to know a member of the professorial staff in a seminar environment during the first year of study. Details can be found at CSC104H1 Computational Thinking[24L/12T] Humans have solved problems for millennia on computing devices by representing data as diverse numbers, text, images, sound and genomes, and then transforming the data. A gentle introduction to designing programs (recipes) for systematically solving problems that crop up in diverse domains such as science, literature and graphics. Social and intellectual issues raised by computing. Algorithms, hardware, software, operating systems, the limits of computation. Note: you may not take this course concurrently with any Computer Science course, but you may take CSC108H1/CSC148H1 after CSC104H1. Exclusion: Any Computer Science course Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) Choosing first year courses To help you select the programming course that is right for you, see www.cs.toronto.edu/dcs, Choose Programs & Courses > Undergraduate Courses > Choosing Your First Year Courses. CSC108H1 Introduction to Computer Programming[36L/12T/12P] Structure of computers; the computing environment. Programming in a language such as Python. Program structure: elementary data types, statements, control flow, functions, classes, objects, methods, fields. Lists; searching, sorting and complexity. Practical (P) sections consist of supervised work in the computing laboratory. These sections are offered when facilities are available, and attendance is required. NOTE: You may not take this course after or concurrently with CSC148H1, but you may take CSC148H1 after CSC108H1. Exclusion: CSC120H1, CSC148H1, CSC150H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC120H1 Computer Science for the Sciences[24L/12P] An introduction to computer science for students in other sciences, with an emphasis on gaining practical skills. Introduction to programming with examples and exercises appropriate to the sciences; web programming; software tools. Topics from: database design, considerations in numerical calculation, using UNIX/LINUX systems. At the end of this course you will be able to develop computer tools for scientific applications, such as the structuring and analysis of experimental data. With some additional preparation, you will also be ready to go on to CSC148H1. Practical (P) sections consist of supervised work in the computer laboratory. No programming experience is necessary. Exclusion: Any CSC course Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC148H1 Introduction to Computer Science[24L/12T/12P] Abstract data types and data structures for implementing them. Linked data structures. Encapsulation and information-hiding. Object-oriented programming. Specifications. Analyzing the efficiency of programs. Recursion. This course assumes programming experience in a language such as Python, C++, or Java, as provided by CSC108H1. Students who already have this background may consult the Computer Science Undergraduate Office for advice about skipping CSC108H1. Practical (P) sections consist of supervised work in the computing laboratory. These sections are offered when facilities are available, and attendance is required. NOTE: Students may go to their college to drop down from CSC148H1 to CSC108H1. See above for the drop down deadline. Prerequisite: CSC108H1 Exclusion: CSC150H1; you may not take this course after taking more than two CSC courses at the 200-level or higher Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC165H1 Mathematical Expression and Reasoning for Computer Science[36L/24T] Introduction to abstraction and rigour. Informal introduction to logical notation and reasoning. Understanding, using and developing precise expressions of mathematical ideas, including definitions and theorems. Structuring proofs to improve presentation and comprehension. General problem-solving techniques. Running time analysis of iterative programs. Formal definition of Big-Oh. Diagonalization, the Halting Problem, and some reductions. Unified approaches to programming and theoretical problems. Corequisite: CSC108H1/CSC148H1 Exclusion: CSC236H1, CSC240H1; MAT102H5 (University of Toronto Mississauga) Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) 200-level courses CSC200Y1 Economic and Social Networks: Models and Applications[48L/24T] The course will provide an informal, but rigourous treatment of a variety of topics, introducing students to the relevant background in graph theory, social network formation, incentives and game theory, and providing exposure to the relevant mathematical and computational tools required to analyze relevant phenomenon. Topics may include: structural analysis of social networks, matching markets, trading networks, web search, information cascades, prediction markets, and online advertising, among others. Distribution Requirement Status: This is a Science course Breadth Requirement: Society and its Institutions (3) + The Physical and Mathematical Universes (5) CSC207H1 Software Design[24L/12T] An introduction to software design and development concepts, methods, and tools using a statically-typed object-oriented programming language such as Java. Topics from: version control, unit testing, refactoring, object-oriented design and development, design patterns, advanced IDE usage, regular expressions, and reflection. Representation of floating-point numbers and introduction to numerical Prerequisite: CSC148H1/CSC150H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC209H1 Software Tools and Systems Programming[24L/12T] Software techniques in a Unix-style environment, using scripting languages and a machine-oriented programming language (typically C). What goes on in the operating system when programs are executed. Core topics: creating and using software tools, pipes and filters, file processing, shell programming, processes, system calls, signals, basic network programming. Prerequisite: CSC207H1/enrolment in Bioinformatics and Computational Biology (BCB) subject POSt Exclusion: CSC372H1, CSC369H1, CSC469H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC236H1 Introduction to the Theory of Computation[24L/12T] The application of logic and proof techniques to Computer Science. Mathematical induction; correctness proofs for iterative and recursive algorithms; recurrence equations and their solutions (including the Master Theorem); introduction to automata and formal languages. This course assumes university-level experience with proof techniques and algorithmic complexity as provided by CSC165H1 . Very strong students who already have this experience (e.g. successful completion of MAT157Y1) may consult the undergraduate office about proceeeding directly into CSC236H1. Prerequisite: CSC148H1/CSC150H1, CSC165H1 Exclusion: CSC240H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC240H1 Enriched Introduction to the Theory of Computation[24L/12T] The rigorous application of logic and proof techniques to Computer Science. Propositional and predicate logic; mathematical induction and other basic proof techniques; correctness proofs for iterative and recursive algorithms; recurrence equations and their solutions (including the Master Theorem); introduction to automata and formal languages. This course covers the same topics as CSC236H1, together with selected material from CSC165H1, but at a faster pace, in greater depth and with more rigour, and with more challenging assignments. Greater emphasis will be placed on proofs and theoretical analysis. Certain topics briefly mentioned in CSC165H1 or CSC236H1 may be covered in more detail in this course, and some additional topics may also be covered. NOTE: Students may go to their college to drop down from CSC240H1 to CSC165H1 (or to CSC236H1 if they have already passed CSC165H1). See above for the drop down deadline. Corequisite: CSC148H1/CSC150H1 Exclusion: CSC236H1 Recommended Preparation: first term of MAT137Y1/MAT157Y1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC258H1 Computer Organization[24L/12T/13P] Computer structures, machine languages, instruction execution, addressing techniques, and digital representation of data. Computer system organization, memory storage devices, and microprogramming. Block diagram circuit realizations of memory, control and arithmetic functions. There are a number of laboratory periods in which students conduct experiments with digital logic circuits. Prerequisite: CSC148H1/CSC150H1, CSC165H1/CSC240H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC263H1 Data Structures and Analysis[24L/12T] Algorithm analysis: worst-case, average-case, and amortized complexity. Expected worst-case complexity, randomized quicksort and selection. Standard abstract data types, such as graphs, dictionaries, priority queues, and disjoint sets. A variety of data structures for implementing these abstract data types, such as balanced search trees, hashing, heaps, and disjoint forests. Design and comparison of data structures. Introduction to lower bounds. Prerequisite: CSC207H1, CSC236H1/CSC240H1; STA247H1/STA255H1/STA257H1 Exclusion: CSC265H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC265H1 Enriched Data Structures and Analysis[24L/12T] This course covers the same topics as CSC263H1, but at a faster pace, in greater depth and with more rigour, and with more challenging assignments. Greater emphasis will be placed on proofs, theoretical analysis, and creative problem-solving. Certain topics briefly mentioned in CSC263H1 may be covered in more detail in this course, and some additional topics may also be covered. Students without the exact course prerequisites but with a strong mathematical background are encouraged to consult the Department about the possibility of taking this course. NOTE: Students may go to their college to drop down from CSC265H1 to CSC263H1. See above for the drop down deadline. Prerequisite: CSC240H1 or an A- in CSC236H1 Corequisite: STA247H1/STA255H1/STA257H1 Exclusion: CSC263H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) 300-level courses If you are not in our Major or Specialist program, you are limited to three 300-/400-level CSC/ECE half-courses. CSC300H1 Computers and Society[24L/12T] Privacy and Freedom of Information; recent Canadian legislation and reports. Computers and work; employment levels, quality of working life. Electronic fund transfer systems; transborder data flows. Computers and bureaucratization. Computers in the home; public awareness about computers. Robotics. Professionalism and the ethics of computers. The course is designed not only for science students, but also those in social sciences or humanities. Prerequisite: Any half-course on computing Distribution Requirement Status: This is a Science course Breadth Requirement: Society and its Institutions (3) CSC301H1 Introduction to Software Engineering[24L/12T] An introduction to agile development methods appropriate for medium-sized teams and rapidly-moving projects. Basic software development infrastructure; requirements elicitation and tracking; estimation and prioritization; teamwork skills; basic UML; design patterns and refactoring; security, discussion of ethical issues, and professional responsibility. Prerequisite: CSC209H1, CSC263H1/CSC265H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC302H1 Engineering Large Software Systems[24L/12T] An introduction to the theory and practice of large-scale software system design, development, and deployment. Project management; advanced UML; reverse engineering; requirements inspection; verification and validation; software architecture; performance modeling and analysis. Prerequisite: CSC301H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC309H1 Programming on the Web[24L/12T] An introduction to software development on the web. Concepts underlying the development of programs that operate on the web; survey of technological alternatives; greater depth on some technologies. Operational concepts of the internet and the web, static client content, dynamic client content, dynamically served content, n-tiered architectures, web development processes, and security on the web. Assignments involve increasingly more complex web-based programs. Guest lecturers from leading e-commerce firms will describe the architecture and operation of their web sites. Prerequisite: CSC209H1, CSC343H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC310H1 Information Theory[24L/12T] Measuring information. The source coding theorem. Data compression using ad hoc methods and dictionary-based methods. Probabilistic source models, and their use via Huffman and arithmetic coding. Noisy channels and the channel coding theorem. Error correcting codes, and their decoding by algebraic and probabilistic methods. Prerequisite: 60% or higher in CSC148H1/CSC150H1/CSC260H1; STA247H1/STA255H1/STA257H1/STA107H1; (MAT135H1, MAT136H1)/ MAT135Y1/MAT137Y1/MAT157Y1, MAT221H1/MAT223H1/MAT240H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC318H1 The Design of Interactive Computational Media[24L/12T] User-centred design of interactive systems; methodologies, principles, and metaphors; task analysis. Interdisciplinary design; the role of graphic design, industrial design, and the behavioural sciences. Interactive hardware and software; concepts from computer graphics. Typography, layout, colour, sound, video, gesture, and usability enhancements. Classes of interactive graphical media; direct manipulation systems, extensible systems, rapid prototyping tools. Students work on projects in interdisciplinary teams. Enrolment limited, but non-computer scientists welcome. Prerequisite: Any CSC half-course Recommended Preparation: CSC300H1 provides useful background for work in CSC318H1, so if you plan to take CSC300H1 then you should do it before CSC318H1 Distribution Requirement Status: This is a Science course Breadth Requirement: None CSC320H1 Introduction to Visual Computing[24L/12P] Image synthesis and image analysis aimed at students with an interest in computer graphics, computer vision or the visual arts. Focus on three major topics: (1) visual computing principles - computational and mathematical methods for creating, capturing, analyzing and manipulating digital photographs (image acquisition, basic image processing, image warping, anti-aliasing); (2) digital special effects - applying these principles to create special effects found in movies and commercials; (3) visual programming - using C/C++ and OpenGL to create graphical user interfaces for synthesizing and manipulating photographs. The course requires the ability to use differential calculus in several variables and linear algebra. Prerequisite: CSC209H1/(CSC207H1, proficiency in C or C++); (MAT135H1, MAT136H1)/MAT135Y1/MAT137Y1/MAT157Y1, MAT221H1/MAT223H1/MAT240H1 Distribution Requirement Status: This is a Science course Breadth Requirement: None CSC321H1 Introduction to Neural Networks and Machine Learning[24L/12P] The first half of the course is about supervised learning for regression and classification problems and will include the perceptron learning procedure, backpropagation, and methods for ensuring good generalisation to new data. The second half of the course is about unsupervised learning methods that discover hidden causes and will include K-means, the EM algorithm, Boltzmann machines, and deep belief nets. Prerequisite: (MAT135H1, MAT136H1)/MAT135Y1/MAT137Y1/MAT157Y1, MAT221H1/MAT223H1/MAT240H1; STA247H1/STA255H1/STA257H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC324H1 Principles of Programming Languages[24L/12T] Programming principles common in modern languages; details of commonly used paradigms. The structure and meaning of code. Scope, control flow, datatypes and parameter passing. Two non-procedural, non-object-oriented programming paradigms: functional programming (illustrated by languages such as Lisp/Scheme, ML or Haskell) and logic programming (typically illustrated in Prolog). Prerequisite: CSC263H1/CSC265H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC336H1 Numerical Methods[24L/12T] The study of computational methods for solving problems in linear algebra, non-linear equations, and approximation. The aim is to give students a basic understanding of both floating-point arithmetic and the implementation of algorithms used to solve numerical problems, as well as a familiarity with current numerical computing environments. Prerequisite: CSC148H1/CSC150H1; MAT133Y1(70%)/(MAT135H1, MAT136H1)/MAT135Y1/MAT137Y1/MAT157Y1, MAT221H1/MAT223H1/MAT240H1 Exclusion: CSC350H1, CSC351H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC343H1 Introduction to Databases[24L/12T] Introduction to database management systems. The relational data model. Relational algebra. Querying and updating databases: the query language SQL. Application programming with SQL. Integrity constraints, normal forms, and database design. Elements of database system technology: query processing, transaction management. Prerequisite: CSC165H1/CSC240H1/(MAT135H1, MAT136H1) /MAT135Y1/MAT137Y1/MAT157Y1; CSC207H1 Prerequisite for Engineering students only: ECE345/CSC190/CSC192 Exclusion: CSC434H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC358H1 Principles of Computer Networks[24L/12T] Introduction to computer networks with an emphasis on fundamental principles. Basic understanding of computer networks and network protocols. Topics include network hardware and software, routing, addressing, congestion control, reliable data transfer, performance analysis, local area networks, and TCP/IP. Prerequisite: CSC209H1, CSC258H1, CSC263H1/CSC265H1, STA247H1/STA255H1/STA257H1/ECO227Y1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC369H1 Operating Systems[24L/12T] Principles of operating systems. The operating system as a control program and as a resource allocator. The concept of a process and concurrency problems: synchronization, mutual exclusion, deadlock. Additional topics include memory management, file systems, process scheduling, threads, and protection. Prerequisite: CSC209H1, CSC258H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC372H1 Microprocessor Software[24L/12T/36P] Development of embedded software for control and monitoring. Techniques for efficient running of multiple real-time, critical multiple processes and for device control. Methods of working on small systems, such as microcontroller-based systems. Projects use microprocessors to control equipment with feedback from sensors. Design, implementation and testing of software using a language such as C. Ordinarily offered in years alternating with ECE385H1. Prerequisite: CSC209H1; CSC258H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC373H1 Algorithm Design, Analysis & Complexity[36L/12T] Standard algorithm design techniques: divide-and-conquer, greedy strategies, dynamic programming, linear programming, randomization, network flows, approximation algorithms. Brief introduction to NP-completeness: polynomial time reductions, examples of various NP-complete problems, self-reducibility. Students will be expected to show good design principles and adequate skills at reasoning about the correctness and complexity of algorithms. Prerequisite: CSC263H1/CSC265H1 Exclusion: CSC375H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC384H1 Introduction to Artificial Intelligence[24L/12T] Theories and algorithms that capture (or approximate) some of the core elements of computational intelligence. Topics include: search; logical representations and reasoning, classical automated planning, representing and reasoning with uncertainty, learning, decision making (planning) under uncertainty. Assignments provide practical experience, both theory and programming, of the core Prerequisite: CSC263H1/CSC265H1, STA247H1/STA255H1/STA257H1 Recommended Preparation: CSC324H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) ECE385H1 Microprocessor Systems[24L/36P] A hardware-oriented course dealing with microprocessor and embedded systems. Microprocessor structures, memory and cache structures, input/output techniques, peripheral device control, hardware system and programming considerations. Laboratory experiments provide "hands-on" experience. Ordinarily offered in years alternating with CSC372H1. Prerequisite: CSC258H1; CSC209H1/proficiency in C Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) 400-level courses If you are not in our Major or Specialist program, you are limited to three 300-/400-level CSC/ECE half-courses. CSC401H1 Natural Language Computing[24L/12T] Introduction to techniques involving natural language and speech in applications such as information retrieval, extraction, and filtering; intelligent Web searching; spelling and grammar checking; speech recognition and synthesis; and multi-lingual systems including machine translation. N-grams, POS-tagging, semantic distance metrics, indexing, on-line lexicons and thesauri, markup languages, collections of on-line documents, corpus analysis. PERL and other software. Prerequisite: CSC207H1/CSC209H1; STA247H1/STA255H1/STA257H1 Recommended Preparation: MAT221H1/MAT223H1/MAT240H1 is strongly recommended Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC404H1 Introduction to Video Game Design[24L/12T] Concepts and techniques for the design and development of electronic games. History, social issues and story elements. The business of game development and game promotion. Software engineering, artificial intelligence and graphics elements. Level and model design. Audio elements. Practical assignments leading to team implementation of a complete game. Prerequisite: CSC301H1/CSC318H1/CSC384H1/CSC418H1 Distribution Requirement Status: This is a Science course Breadth Requirement: Creative and Cultural Representations (1) CSC410H1 Software Testing and Verification[24L/12T] Concepts and state of the art techniques in quality assessment for software engineering; quality attributes; formal specifications and their analysis; testing, verification and validation. Prerequisite: CSC207H1, CSC236H1/CSC240H1 Recommended Preparation: CSC330H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC411H1 Machine Learning and Data Mining[24L/12T] An introduction to methods for automated learning of relationships on the basis of empirical data. Classification and regression using nearest neighbour methods, decision trees, linear models, and neural networks. Clustering algorithms. Problems of overfitting and of assessing accuracy. Problems with handling large databases. Prerequisite: CSC263H1/CSC265H1, MAT(135H1,136H1)/MAT137Y1/MAT137Y1/MAT157Y1, STA247H1/STA255H1/STA257H1, STA248H1/STA250H1/STA261H1 Recommended Preparation: CSC336H1/CSC350H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC412H1 Probabilistic Learning and Reasoning[24L/12T] An introduction to probability as a means of representing and reasoning with uncertain knowledge. Qualitative and quantitative specification of probability distributions using probabilistic graphical models. Algorithms for inference and probabilistic reasoning with graphical models. Statistical approaches and algorithms for learning probability models from empirical data. Applications of these models in artificial intelligence and machine learning. Prerequisite: CSC411H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC418H1 Computer Graphics[24L/12T] Identification and characterization of the objects manipulated in computer graphics, the operations possible on these objects, efficient algorithms to perform these operations, and interfaces to transform one type of object to another. Display devices, display data structures and procedures, graphical input, object modelling, transformations, illumination models, primary and secondary light effects; graphics packages and systems. Students, individually or in teams, implement graphical algorithms or entire graphics systems. Prerequisite: CSC336H1/CSC350H1/CSC351H1/CSC363H1/CSC365H1/CSC373H1/CSC375H1/CSC463H1, (MAT135H1, MAT136H1)/MAT135Y1/MAT137Y1/MAT157Y1, CSC209H1/proficiency in C or C++ Prerequisite for Engineering students only: ECE345H1 or ECE352H1 Recommended Preparation: MAT237Y1, MAT244H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC420H1 Introduction to Image Understanding[24L/12P] Introduction to basic concepts in computer vision. Extraction of image features at multiple scales. Robust estimation of model parameters. Multiview geometry and reconstruction. Image motion estimation and tracking. Object recognition. Topics in scene understanding as time permits. Prerequisite: CSC260H1/CSC263H1/CSC265H1,(MAT135H1, MAT136H1)/MAT135Y1/MAT137Y1/MAT157Y1, MAT221H1/MAT223H1/MAT240H1 Recommended Preparation: CSC320H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC428H1 Human-Computer Interaction[24L/12T] Understanding human behaviour as it applies to user interfaces: work activity analysis, observational techniques, questionnaire administration and unobtrusive measures. Operating parameters of the human cognitive system, task analysis and cognitive modelling techniques and their application to designing interfaces. Interface representations and prototyping tools. Cognitive walkthroughs, usability studies and verbal protocol analysis. Case studies of specific user interfaces. Prerequisite: CSC318H1; STA247H1/STA255H1/STA257H1,(STA248H1/STA250H1/STA261H1)/(PSY201H1, PSY202H1)/(SOC202H1, SOC300H1); CSC209H1/proficiency C++ or Java Recommended Preparation: A course in PSY; CSC209H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC436H1 Numerical Algorithms[24L/12T] Numerical algorithms for the algebraic eigenvalue problem, approximation, integration, and the solution of ordinary differential equations. Emphasis is on the convergence, stability and efficiency properties of the algorithms. Prerequisite: CSC336H1/CSC350H1 Exclusion: CSC351H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC438H1 Computability and Logic[24L/12T] Computable functions, Church's thesis, unsolvable problems, recursively enumerable sets. Predicate calculus, including the completeness, compactness, and Lowenheim-Skolem theorems. Formal theories and the Gödel Incompleteness Theorem. Ordinarily offered in years alternating with CSC448H1. Prerequisite: (CSC363H1/CSC463H1)/CSC365H1/CSC373H1/CSC375H1/MAT247H1 Exclusion: MAT309H1; PHL344H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC443H1 Database System Technology[24L/12T] Implementation of database management systems. Storage management, indexing, query processing, concurrency control, transaction management. Database systems on parallel and distributed architectures. Modern database applications: data mining, data warehousing, OLAP, data on the web. Object-oriented and object-relational databases. Prerequisite: CSC343H1, CSC369H1, CSC373H1/CSC375H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC446H1 Computational Methods for Partial Differential Equations[24L/12T] Finite difference methods for hyperbolic and parabolic equations; consistency, convergence, and stability. Finite element methods for 2-point boundary value problems and elliptic equations. Special problems of interest. Ordinarily offered in years alternating with CSC456H1. Prerequisite: CSC351H1/(CSC336H1 (75%))/equivalent mathematical background; MAT237Y1/MAT257Y1; APM346H1/APM351Y1/(MAT244H1/MAT267H1 and exposure to PDEs) Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC448H1 Formal Languages and Automata[24L/12T] Regular, deterministic, context free, context sensitive, and recursively enumerable languages via generative grammars and corresponding automata (finite state machines, push down machines, and Turing machines). Topics include complexity bounds for recognition, language decision problems and operations on languages. Ordinarily offered in years alternating with CSC438H1. Prerequisite: CSC236H1/CSC240H1, CSC363H1/CSC365H1/CSC463H1/MAT247H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC454H1 The Business of Software[24L/12T] The software and internet industries; principles of operation for successful software enterprises; innovation and entrepreneurship; software business definition and planning; business models, market and product planning; product development, marketing, sales, and and support; financial management and financing of high-technology ventures; management, leadership, and partnerships. Students will all write business plans in teams. Prerequisite: Five CSC half-courses at the 200-level or higher Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC456H1 High-Performance Scientific Computing[24L/12T] Computationally-intensive applications in science and engineering are implemented on the fastest computers available, today composed of many processors operating in parallel. Parallel computer architectures; implementation of numerical algorithms on parallel architectures. Topics from: performance evaluation; scientific visualization; numerical methods; applications from science and engineering. For students in computer science, applied mathematics, science, engineering. Ordinarily offered in years alternating with CSC446H1. Prerequisite: CSC350H1/(CSC336H1 (75%))/equivalent mathematical background; CSC209H1/proficiency in C, C++ or Fortran Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC458H1 Computer Networking Systems[24L/12T] Computer networks with an emphasis on systems programming of real networks and applications. An overview of networking basics; layering, packet switching fundamentals, socket programming, protocols, congestion control, routing, network security, wireless networks, multimedia, web 2.0, and online social networks. Prerequisite: CSC209H1, CSC258H1, CSC263H1/CSC265H1, STA247H1/STA255H1/STA257H1/ECO227Y1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC463H1 Computational Complexity and Computability[24L/12P] Introduction to the theory of computability: Turing machines and other models of computation, Church’s thesis, computable and noncomputable functions, recursive and recursively enumerable sets, many-one reductions. Introduction to complexity theory: P, NP, polynomial time reducibility, NP-completeness, self-reducibility, space complexity (L, NL, PSPACE and completeness for those classes), hierarchy theorems and provably intractable problems. Prerequisite: CSC236H1/CSC240H1 Exclusion: CSC363H1, CSC365H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC469H1 Operating Systems Design and Implementation[24L/12T] An in-depth exploration of the major components of operating systems with an emphasis on the techniques, algorithms, and structures used to implement these components in modern systems. Project-based study of process management, scheduling, memory management, file systems, and networking is used to build insight into the intricacies of a large concurrent system. Prerequisite: CSC369H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC485H1 Computational Linguistics[24L/12T] Computational linguistics and the understanding of language by computer. Possible topics include: augmented context-free grammars; chart parsing, statistical parsing; semantics and semantic interpretation; ambiguity resolution techniques; discourse structure and reference resolution. Emphasis on statistical learning methods for lexical, syntactic and semantic knowledge. Prerequisite: STA247H1/STA255H1/STA257H1 or familiarity with basic probability theory; CSC209H1 or proficiency in C++, Java, or Python Recommended Preparation: CSC324H1/CSC330H1/CSC384H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC486H1 Knowledge Representation and Reasoning[24L/12T] Representing knowledge symbolically in a form suitable for automated reasoning, and associated reasoning methods. Topics from: first-order logic, entailment, the resolution method, Horn clauses, procedural representations, production systems, description logics, inheritance networks, defaults and probabilities, tractable reasoning, abductive explanation, the representation of action, Prerequisite: CSC384H1, CSC363H1/CSC365H1/CSC373H1/CSC375H1/CSC463H1 Recommended Preparation: CSC330H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC488H1 Compilers and Interpreters[24L/12T] Compiler organization, compiler writing tools, use of regular expressions, finite automata and context-free grammars, scanning and parsing, runtime organization, semantic analysis, implementing the runtime model, storage allocation, code generation. Prerequisite: CSC258H1, CSC324H1, CSC263H1/CSC265H1 Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) ECE489H1 Compilers II[24L/36P] Theoretical and practical aspects of building modern optimizing compilers. Topics: intermediate representations, basic blocks and flow graphs, data flow analysis, partial evaluation and redundancy elimination, loop optimizations, register allocation, instruction scheduling, interprocedural analysis, and memory hierarchy optimizations. Students implement significant optimizations within the framework of a modern research compiler. (This course is a cross-listing of ECE540H1, Faculty of Applied Science and Engineering.) Prerequisite: CSC236H1/CSC240H1 Recommended Preparation: ECE385H1, proficiency in C Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC490H1 Capstone Design Project[48L] This half-course gives students experience solving a substantial problem that may span several areas of Computer Science. Students will define the scope of the problem, develop a solution plan, produce a working implementation, and present their work using written, oral, and (if suitable) video reports. Class time will focus on the project, but may include some lectures. The class will be small and highly interactive. Project themes change each year. Contact the Computer Science Undergraduate Office for information about this year’s topic themes, required preparation and course enrolment procedures. Not eligible for CR/NCR option. Prerequisite: Permission of the instructor Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC491H1 Capstone Design Project[48L] This half-course gives students experience solving a substantial problem that may span several areas of Computer Science. Students will define the scope of the problem, develop a solution plan, produce a working implementation, and present their work using written, oral, and (if suitable) video reports. Class time will focus on the project, but may include some lectures. The class will be small and highly interactive. Project themes change each year. Contact the Computer Science Undergraduate Office for information about this year’s topic themes, required preparation and course enrolment procedures. Not eligible for CR/NCR option. Prerequisite: Permission of the instructor Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC494H1 Computer Science Project[TBA] This half-course involves a significant project in any area of Computer Science. The project may be undertaken individually or in small groups. The course is offered by arrangement with a Computer Science faculty member. Not eligible for CR/NCR option. Prerequisite: Three 300-/400-level CSC half-courses, and permission of the Associate Chair, Undergraduate Studies. Contact the Computer Science Undergraduate Office for information about course enrolment procedures. Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5) CSC495H1 Computer Science Project[TBA] This half-course involves a significant project in any area of Computer Science. The project may be undertaken individually or in small groups. The course is offered by arrangement with a Computer Science faculty member. Not eligible for CR/NCR option. Prerequisite: Three 300-/400-level CSC half-courses, and permission of the Associate Chair, Undergraduate Studies. Contact the Computer Science Undergraduate Office for information about course enrolment procedures. Distribution Requirement Status: This is a Science course Breadth Requirement: The Physical and Mathematical Universes (5)
{"url":"http://www.artsandscience.utoronto.ca/ofr/calendar/crs_csc.htm","timestamp":"2014-04-20T18:29:28Z","content_type":null,"content_length":"149599","record_id":"<urn:uuid:2b1eea6f-9369-4027-b265-2d00211f0541>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
factorization ? April 25th 2006, 10:04 PM #1 Apr 2006 factorization ? Can someone please show me how to find the lcm useing prime factorization ? i am so confused about this here is a sample problem 18 and 24 Can someone please show me how to find the lcm useing prime factorization ? i am so confused about this here is a sample problem 18 and 24 1. Transform both numbers into a product of prime factors: 18 = 2 * 3 * 3 24 = 2 * 2 * 2 * 3 2. The lcm consists of all prime factors so that each number is completely in the lcm: $lcm = \underbrace{2 * 3 * 3}_{\csub{thats\ for\ 18}}* \underbrace{2*2}_{\csub{addition\ to\ get\ 24}}=72$ Can someone please show me how to find the lcm useing prime factorization ? i am so confused about this here is a sample problem 18 and 24 it's me again. I've attached a diagram to show you how you can find the lcm for 2 and more numbers by using prime factorization. Maybe you need some time to understand the "mechanic" which is used. But if you've understand it, it's a very easy way to do a very unpleasent calculation. Good luck. Can someone please show me how to find the lcm useing prime factorization ? i am so confused about this here is a sample problem 18 and 24 The easiest method I know if it done through the Euclidean Algorithm. You basically work with the greatest common divisor. There is a theorem that: $\gcd(a,b)\cdot \mbox{lcm} (a,b)=ab$ If you understand what I said, good. If you like me to explain how to use this to find both the greatest common divisor and lowest common multiple please ask. April 26th 2006, 05:10 AM #2 April 26th 2006, 10:08 AM #3 April 26th 2006, 01:21 PM #4 Global Moderator Nov 2005 New York City
{"url":"http://mathhelpforum.com/algebra/2698-factorization.html","timestamp":"2014-04-18T21:42:31Z","content_type":null,"content_length":"41721","record_id":"<urn:uuid:7020ea1e-55d2-40a0-ab84-42fd329b1dac>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
The Slippery Slope and Formulas slope If the data points do not all lie on a line, but are close to a line, you can draw a "_?_" vertical If the product of K4 and K5 are -1/2 are these lines perbendicular? three A " ? " is described by an equation of the from y = kx lineoffit As x increases, y increases this is called a ________ correlation. parallel An undefined slope is represented by a ________ line. no Lines where the product of the slope is -1 are _______. slope Y = 1/2x - 21, What is the value of slope? onehalf The "?" of a line is a number determined by any two points on the line. directvariation Lines that have the same slope are ________. undefined A "?" is a graph in which two sets of data are plotted as ordered pairs in a coordinate plane. yes An equation generated using the coordinates of a known point and athe slpe of the line is " ? " yintercept The calculator uses a statistical method to find the line that most closely approximates the data. The line is called the "_?_" yes Are the lines perpendicular? m1=1/4, m2=-4 negative b is representative of "what value" is our equations positive If line K1 has a slope of 2/3 and K2 has a slope of 2/3, are the lines parallel? horizontal An equation of the form y=mx b is in "?" form. bestfitline What is the slope intercept form? run True or False, the change in x is over the change in y. scatterplot Find the slope of a line that passes through (1,3) and (-2, -6) pointslope Y= 4/5x 10, What is the y-intercept? False A line with a slope of 0 is a ________ line perpendicular As x decreases, y decreases this is called a ________ correlation. y=mx b Slope is the ratio of the change in y (rise) to the change in x (?) slopeintercept A slope with 0 on the bottom i.e 5/0 is what? ten M is representative of "what value" is our equations?
{"url":"http://www.armoredpenguin.com/wordmatch/Data/best/math/slippery.slope.01.html","timestamp":"2014-04-19T12:15:54Z","content_type":null,"content_length":"16665","record_id":"<urn:uuid:1c0dd580-ab1d-4124-8c4d-744bb7fc4402>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Binary Angle Measurement As a programmer, I have to keep things easy to compute and calculate. However, if you’ve taken trigonometry, you’ll find out how extremely difficult it can be to accurately calculate angles on older systems. Many programmers have taken and exploited shortcuts in order to allow for fast calculations at the expense of accuracy. One of those shortcuts is binary angle measurement. For those who don’t know, trigonometry requires floating point values (numbers with decimal values, as opposed to integers, whole numbers), which were unavailable back then, and if they were, they were often too hard to implement with the resulting code being too slow. The answer to this was binary angle measurement, or BAM. To summarize, the most significant bit is 180 degrees, the next bit is half of that, the next bit is half of the last bit, etc. This is a more visual example: Bit 7: 180 Bit 6: 90 Bit 5: 45 Bit 4: 22.5 This allowed for storage of angles in convenient little bytes or words, instead of large floating point numbers. This method was used for computing angles in popular games such as Doom and Duke Nukem 3D, so it was pretty useful.
{"url":"http://rob1840.wordpress.com/2010/05/17/binary-angle-measurement/","timestamp":"2014-04-18T08:54:51Z","content_type":null,"content_length":"52961","record_id":"<urn:uuid:6b9ae3c1-1d81-430a-9395-d6c82be79622>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: MATHEMATICS OF COMPUTATION Volume 66, Number 219, July 1997, Pages 957­984 S 0025-5718(97)00826-0 PRECONDITIONING IN H (div) AND APPLICATIONS DOUGLAS N. ARNOLD, RICHARD S. FALK, AND R. WINTHER Dedicated to Professor Ivo Babuska on the occasion of his seventieth birthday. Abstract. We consider the solution of the system of linear algebraic equa- tions which arises from the finite element discretization of boundary value problems associated to the differential operator I- grad div. The natural setting for such problems is in the Hilbert space H (div) and the variational formulation is based on the inner product in H (div). We show how to con- struct preconditioners for these equations using both domain decomposition and multigrid techniques. These preconditioners are shown to be spectrally equivalent to the inverse of the operator. As a consequence, they may be used to precondition iterative methods so that any given error reduction may be achieved in a finite number of iterations, with the number independent of the mesh discretization. We describe applications of these results to the efficient solution of mixed and least squares finite element approximations of elliptic boundary value problems. 1. Introduction
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/567/3084497.html","timestamp":"2014-04-18T05:53:34Z","content_type":null,"content_length":"8308","record_id":"<urn:uuid:6a0e0740-8ffc-4f6f-b8f1-4d7147ee7d1f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
gcse chemistry Calculations 2. How to calculate relative formula mass relative molecular mass calculating RFM/RMM of a compound formula igcse KS4 science A level GCE AS A2 O Level practice questions exercises 2. How to calculate relative formula mass or relative molecular mass RFM/RMM or M[r] How do I calculate relative molecular mass? RMM How to calculate relative formula mass? RFM Is there any difference between RMM and RFM? Does it matter whether the compound is ionic or covalent? If all the individual atomic masses of all the atoms in a formula are added together you have calculated the relative formula mass (for ionic compounds e.g. NaCl = 58.5) or molecular mass (for covalent elements e.g. N[2] = 28 or compounds e.g. C[6]H[12]O[6] = 180). To be honest, the term relative formula mass can be used with any compound whether it be ionic or covalent - it just seems not quite correct to talk about the molecular mass of an ionic compound when it doesn't consist of molecules! The shorthand M[r] can be used for the formula of any element or compound and to repeat, 'it doesn't matter whether a compound is ionic or covalent'. M[r] = Relative formula mass = relative molecular mass = the sum of all the atomic masses for all the atoms in a given formula Whereas relative atomic mass (section 1. Relative Atomic Mass) only applies to a single atom but anything with at least two atoms requires the term relative formula mass or relative molecular mass. The most common error is to use atomic/proton numbers instead of atomic masses, unfortunately, except for hydrogen, they are different! Examples of relative formula/molecular mass calculations: How to calculate relative molecular mass = How to calculate relative formula mass Recap: Molecular/formula mass = total of all the atomic masses of all the atoms in the molecule/compound. • Molecular/formula mass calculation Example 2.1 □ The diatomic molecules of the elements hydrogen H[2] and chlorine Cl[2] □ relative atomic masses, Ar: H = 1, Cl = 35.5 □ Formula masses, RMM or M[r], are H[2] = 2 x 1 = 2, Cl[2] = 2 x 35.5 = 71 respectively. • Molecular/formula mass calculation Example 2.2 □ The element phosphorus consists of P[4] molecules. □ RMM or M[r] of phosphorus = 4 x its atomic mass = 4 x 31 = 124 • Molecular/formula mass calculation Example 2.3: The compound water H[2]O □ relative atomic masses are H=1 and O=16 □ RMM or M[r] = (1x2) + 16 = 18 (molecular mass of water) • Molecular/formula mass calculation Example 2.4 □ The compound sulphuric acid H[2]SO[4] □ relative atomic masses are H=1, S=32 and O=16 □ RMM or M[r] = (1x2) + 32 + (4x16) = 98 (molecular mass of sulphuric acid) • Molecular/formula mass calculation Example 2.5 □ The compound calcium hydroxide Ca(OH)[2] (ionic) □ relative atomic masses are Ca=40, H=1 and O=16 □ RMM or M[r] = 40 + 2 x (16+1) = 74 • Molecular/formula mass calculation Example 2.6 □ The ionic compound aluminium oxide (Al^3+)[2](O^2-)[3] or just plain Al[2]O[3], but it makes no difference to the calculation of relative formula mass or relative molecular mass. □ relative atomic masses are Al = 27 and O = 16 □ so the formula mass RFM or M[r] = (2 x 27) + (2 x 16) = 102 • Molecular/formula mass calculation Example 2.7 □ Calcium phosphate is also ionic but a more tricky formula to work out! □ (Ca^2+)[3](PO[4]^3-)[2] or Ca[3](PO[4])[3], but it makes no difference to the calculation of relative formula mass or relative molecular mass. □ atomic masses: Ca = 40, P = 31, O =16 □ RFM or M[r] = (3 x 40) + 3 x {31 + (4 x 16)} = (120) + (3 x 95) = 405 • Molecular/formula mass calculation Example 2.7 □ Glucose C[6]H[12]O[6] □ atomic masses: C = 12, O= 16, H = 1 □ Molecular mass of glucose M[r](C[6]H[12]O[6]) = (6 x 12) + (12 x 1) + (6 x 16) = 180 Self-assessment Quizzes [rfm] type in answer or multiple choice OTHER CALCULATION PAGES Revision KS4 Science Additional Science Triple Award Science Separate Sciences Courses aid to textbook revision GCSE/IGCSE/O level Chemistry Information Study Notes for revising for AQA GCSE Science, Edexcel GCSE Science/IGCSE Chemistry & OCR 21st Century Science, OCR Gateway Science WJEC gcse science chemistry CCEA/CEA gcse science chemistry O Level Chemistry (revise courses equal to US grade 8, grade 9 grade 10) A level Revision notes for GCE Advanced Subsidiary Level AS Advanced Level A2 IB Revise AQA GCE Chemistry OCR GCE Chemistry Edexcel GCE Chemistry Salters Chemistry CIE Chemistry, WJEC GCE AS A2 Chemistry, CCEA/CEA GCE AS A2 Chemistry revising courses for pre-university students (equal to US grade 11 and grade 12 and AP Honours/honors level for revising science chemistry courses revision guides Website content copyright © Dr Phil Brown 2000-2013 All rights reserved on revision notes, images, puzzles, quizzes, worksheets, x-words etc. * Copying of website material is not permitted * Alphabetical Index for Science Pages Content A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
{"url":"http://www.docbrown.info/page04/4_73calcs02rfm.htm","timestamp":"2014-04-18T10:36:32Z","content_type":null,"content_length":"30921","record_id":"<urn:uuid:123dacb5-0acd-4d41-a174-5c6f88e3a01c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
August 30th 2008, 04:51 AM in the game of poker, what is probability that a five-card hand will contain(a) a straight(five cards in unbroken numerical sequence) (b) four of a kind (c) a full house(three cards of one value and two cards of another value) thanks you so much. I could not figure out. August 30th 2008, 06:27 AM Hm...I shall attempt this question...not sure if my answer is right though! >.< (a) Sorry I don't understand what exactly is a straight cos I don't play poker...is J,Q,K,A,2 still considered a straight? If not, then my working is definitely incorrect. And the suits don't matter right? $Probability = 1 \times \displaystyle{\frac{4}{51}} \times \displaystyle{\frac{4}{50}} \times \displaystyle{\frac{4}{49}}$ (b) $Probability = 1 \times \displaystyle{\frac{12}{51}} \times \displaystyle{\frac{11}{50}} \times \displaystyle{\frac{10}{49}}$ August 30th 2008, 01:33 PM See the following page: Poker probability - Wikipedia, the free encyclopedia August 31st 2008, 09:48 AM THANKS SO MUCH FOR YOUR INFORMATION, IT MEANS SOMETHING TO ME!
{"url":"http://mathhelpforum.com/statistics/47148-combination-print.html","timestamp":"2014-04-16T19:50:45Z","content_type":null,"content_length":"6060","record_id":"<urn:uuid:def93651-e03a-4ed2-857a-62f44d58cdd8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply Ok here is the "redo" for #7 - #10: 7. When the rope goes around the barn, what is the new radius? How much of a circle can it make without hitting the barn or overlapping area you've already found? What is that area? Answer: The new radius is 30. It can make up to ¼. So the area would be: 1/4 x 30^2 x PI 1/4 x 900 x PI 225 (PI) = 706.858 8. When the rope goes around the barn the other way, what is the new radius? How much of a circle can it make without hitting the barn or overlapping area you've already found? What is that area? Answer: The radius is 30. It can make up to ¼. So the area would be: 1/4 x 30^2 x PI 1/4 x 900 x PI 225 (PI) = 706.858 9. The areas you found in 7 and 8 overlap each other. How much do they overlap? What *approximate* shape do they make? What is that area? Answer: I would say that the area is 75. It looks like it almost makes a square shape. 10. What is the total grazing area the goat can reach? Answer: To get this answer, I added up the ansers of #6, #7 and #8 then subtracted the answer from #9: 1875(PI) + 225(PI) + 225(PI) – 75 2325(PI) – 75 7304.202 – 75 7229.202 is the final answer.
{"url":"http://www.mathisfunforum.com/post.php?tid=19992&qid=283957","timestamp":"2014-04-17T06:43:30Z","content_type":null,"content_length":"23364","record_id":"<urn:uuid:359c2ae6-fccc-4076-8a6d-c8af5ea9ab2e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2001 [00242] [Date Index] [Thread Index] [Author Index] Re: Re: Re: Solve[] for equations? • To: mathgroup at smc.vnet.net • Subject: [mg31989] Re: [mg31961] Re: [mg31946] Re: [mg31928] Solve[] for equations? • From: Andrzej Kozlowski <andrzej at tuins.ac.jp> • Date: Sat, 15 Dec 2001 01:30:07 -0500 (EST) • Sender: owner-wri-mathgroup at wolfram.com I must admit it feels rather silly being reduced to sending successive messages saying "you are absolutely right" in reply to two mutually contradictory postings. I am afraid that I (and Fred Simons before me) have fallen into the common trap of thinking only of real solutions, which of course behave in a completely different way (except for linear equations). Daniel Lichtblau's comments are based on Elimination Theory using Groebner basis (for a simple account see Ideal, Varieties and ALgorithms, by Cox, Little and O'Shea) and of course apply only to complex solutions. If the system of equations had no solutions than then the Groebner basis for the corresponding ideal which eliminates the variables in turn (such a basis always exists and which is found by Eliminate) would have to contain a "unit", (a complex number in case of polynomials with numerical coefficients or a symbolic expression not involving the "variables" in the symbolic case) . The fact in the case considered here Eliminate did not return such an expression means (assuming that it Eliminate is working correctly) that the equations have a solution and that it must be the one that is found by applying Eliminate and Solve. Why numerical tests may indicate otherwise is a bit of a mystery, but when things are this complicated one can't really be sure of anything. Nota bene, in the complex case the the principle that "n equations in n variables" generically have a solution holds in the same way as the corresponding principle for linear equations in the real (or complex) case. This is can also be proved using elimination theory. Andrzej Kozlowski Toyama International University On Saturday, December 15, 2001, at 03:47 AM, Daniel Lichtblau wrote: > Andrzej Kozlowski wrote: >> You are of course completely right. To use this approach one would need >> to be able to prove that the system does have solutions. If it does >> than >> they will be found in this way. But indeed they may very well not exist >> at all. >> On Friday, December 14, 2001, at 12:14 AM, Fred Simons wrote: >>> Only a few remarks: >>> From Andrzej Kozlowsky's message: >>>> The concept may be simple but the practice is not quite so. You can >>>> see >>>> that as follows. >>>> Here are your equations. >>>> eq1 = Rac == R1(R2 + R3 + R4)/(R1 + R2 + R3 + R4) >>>> eq2 = Rad == R2(R1 + R3 + R4)/(R1 + R2 + R3 + R4) >>>> eq3 = Rbc == R3(R1 + R2 + R4)/(R1 + R2 + R3 + R4) >>>> eq4 = Rbd == R4(R1 + R2 + R3)/(R1 + R2 + R3 + R4) >>>> we ask Mathematica to eliminate all the variables except one (say >>>> R4). >>>> eq5 = Eliminate[{eq1, eq2, eq3, eq4}, {R1, R2, R3}]; >>>> You have to wait a bit for this to work (Mathematica 4.1). >>>> If you want to see the 4th degree equation in R4 that you get you can >>>> evaluate: >>>> eq5 /. Equal[x_, y_] :> Collect[x - y, R4] == 0 >>>> What you see is a fourth degree equation with symbolic coefficients >>>> which is far from simple. Mathematica can actually "solve" it with: >>>> Solve[eq5, R4] >>>> You have to wait quite a while and then you will see something >>>> phenomenally complicated and in my opinion essentially useless (and >>>> in >>>> addition there is basically no way of checking its correctness). >>> The result is simpler when we force Solve not to solve cubics and >>> quartics: >>> SetOptions[Roots, Cubics -> False, Quartics -> False]; >>> Solve[eq5, R4] >>> But still the result is rather useless. >>>> If you like that you can use the same approach to get R1,R2 and R3 or >>>> much better, you can just use the symmetry of your equations to find >>>> out the other answers (to get R3 just replace Rbd by Rbc and vice >>>> versa). >>> It is not so simple. Indeed, in the above way we can find the four >>> values >>> for R4. Similarly, or by cyclic permutation, we can find the four >>> values for >>> R1, for R2 and for R3 in symbolic form. But it is not clear how this >>> four >>> times four values for each of the unknowns have to be combined for >>> finding a >>> solution for the set of equations. Some testing with numerical values >>> shows >>> that it is unlikely that a solution of the set of equations indeed can >>> be >>> expressed as a combination of the results found in this way. Maybe >>> that >>> explains that the Solve command in Mathematica is unable to solve the >>> equations within 14 hours, while in the above way it finds formula for >>> each >>> of the unknowns within 2 minutes. >>> Fred Simons >>> Eindhoven University of Technology > In reference to whether there might be no solutions, we can say that > this does not happen. Indeed, one knows there are solutions unless there > are polynomials in the basis that do not involve any of the main > variables (as this would violate genericity). But in performing the > Eliminate as per Andrzej' post, you find that there is only one > polynomial that does not involve the first three variables, and it > contains the fourth, hence there is no problem with genericity. Offhand > I am not sure why Solve says otherwise. In truth I've not had the > patience to wait for Solve to complete for this problem. > Here is a faster way to get elimination results (except it is > problematic in version 4 of Mathematica). > ee = {Rac - R1*(R2+R3+R4)/(R1+R2+R3+R4), > Rad - R2*(R1+R3+R4)/(R1+R2+R3+R4), > Rbc - R3*(R1+R2+R4)/(R1+R2+R3+R4), > Rbd - R4*(R1+R2+R3)/(R1+R2+R3+R4)}; > Timing[gb = GroebnerBasis[ee, R4, {R1,R2,R3}, > MonomialOrder->EliminationOrder];] > It will work correctly in our development version but seems to have > trouble in Mathematica 4.1. I suspect the issue is in handling the > denominators. But if you do not mind clearing denominators, the code > below will give a single polynomial in R4 and the parameter variables. > ff = Numerator[Together[ee]]; > Timing[gb2 = GroebnerBasis[ff, R4, {R1,R2,R3}, > MonomialOrder->EliminationOrder];] > Substituting in specific "random" values for the parameters and solving > the resulting system indicates that there are 4 solutions. > Daniel
{"url":"http://forums.wolfram.com/mathgroup/archive/2001/Dec/msg00242.html","timestamp":"2014-04-17T01:05:16Z","content_type":null,"content_length":"41347","record_id":"<urn:uuid:b011582d-42c3-4cf6-af08-66968f896803>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Links (at Lots of Levels) I have 12 tabs open in Firefox right now, all things I want to remember to follow up. Maybe if I put a few links here, I can close some of those tabs... • I think I posted before about this. Gwen Dewar writes: This preschool math game was designed by researchers who wanted to know if a board game could help kids develop their number sense (Ramani and Siegler 2008). The premise? That a game featuring sequentially-numbered spaces would help preschoolers learn about the number line and about the relative magnitude of numbers. The game was very effective. After only 4 game sessions totaling less than 80 minutes, kids made substantial, lasting improvements in the areas of mathematical knowledge mentioned above. She describes how you can make the same game yourself. Instead of making a spinner (as she suggests), you could modify a die to have 3 ones and 3 twos on it. I found this older article when I was reading her current article on good educational toys. It hadn't occurred to me how cool digital cameras might be for kids. High School. • The New York Times has an intriguing article about 8 high school students who were allowed to form their own mini-school within the school, which they called the Independent Project. • Keith Nabb wrote an article I like, but it's hidden in a password protected site. I'm asking if I can post it here. Meanwhile, check out these animations he has for his Algebra, Trig, and Calc • A student of mine in Beginning Algebra is struggling with negative numbers. I liked this article, and plan to send her a link to it. • In my Intermediate Algebra class, we'll be starting roots tomorrow. This article is at a higher level than most of them will want, but I think I can share a bit of this issue with my students. How do we pick which square root is the principal root? • Research on teaching (versus pseudoteaching) and learning. • In the 26th comment on Dan Meyer's WCYDWT: Storytelling post, Kathy Sierra wrote: Why they don’t teach screenwriting techniques to teachers is beyond me. We used to make all the authors in our tech book series read the screenwriting book Save the Cat, by Blake Snyder, and build storyboards for each topic using that simplified framework. It’s not an answer to bad teaching, but it’s a way of structuring a lesson that feels more like a hero’s journey for the learner... [I want that book.] Math In Use. • How many representatives should each European Union member country get? Mathematicians studied this question. One of the criteria was that the final 'formula' be easy for everyone to understand. They settled on something pretty simple, but there are lots of little twists. (And one big hurdle: Some countries would lose representatives. Can the other countries get those countries to agree to this?) I have a story to tell about helping a friend design another formula, but that will have to wait until I have more time. 3 comments: 1. Hi Sue, just noticed your comment about my comment from Dan's blog. This morning I sent him a link to a post about writing novels, because something in it reminded me of pseudocomtext... It might be a bit of a stretch, but it seems like many of the mistakes we make in teaching have a similar feeling to mistakes in developing a character in a novel/screenplay, for example: "But you can’t show them all at once. Let’s say you have a protagonist who is smart, well off without being filthy rich, generous towards his friends and family, but with very little tolerance for idiots and people who make poor choices. As characters go, that’s a pretty well-rounded description. There’s a lot there for you to work with in the course of your novel. But you can’t show us all this stuff at once. It’s just too much. I suppose you could create some kind of bizarre, tortured scene in which all of these come into play, but I doubt it would feel natural. You have to spread it out over several scenes, letting each scene touch on one or maybe two personality features, until we have the whole picture. Further, let these scenes be natural to the story, ones that arise in clear relationship to the plot, so they don’t stick out like sore thumbs. The last thing you want is readers thinking to themselves “Ah, this seemingly irrelevant scene must exist in order to show the guy’s generosity.” I couldn't help but think of how we try to show all of a topic/feature's attributes, even if we must create a "seemingly irrelevant scene" in order to do so. Only in teaching, sometimes the word "seemingly" does not apply. (hence the pseudo context reminder) Anyway, I am happy to discover your blog! 2. Dear Mrs. Van Hattum, I agree with your thoughts about the math book and author. I should really get that because sometimes I have trouble figuring hard problems out. I think it was great that you could work the problem out the way you did. It was very creative. I really like the tessellation example you put on this blog. I like art, and that is really neat and colorful. The odd shapes is the part that really amazed me. How it's put together is really cool. I'm glad I could share my thoughts with you. Have a great day. 3. @Kathy, your comment got stuck in the spam filter. I just now found it. I'd like to think about your idea in relation to teaching... Comments with links unrelated to the topic at hand will not be accepted. (I'm moderating comments because some spammers made it past the word verification.)
{"url":"http://mathmamawrites.blogspot.com/2011/03/links-at-lots-of-levels.html","timestamp":"2014-04-20T00:40:34Z","content_type":null,"content_length":"110073","record_id":"<urn:uuid:1c9345c3-f358-4e64-934b-0b1aa271bc29>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
R.: When do datatypes commute Results 1 - 10 of 14 , 2000 "... A polytypic value is one that is defined by induction on the structure of types. In Haskell the type structure is described by the so-called kind system, which distinguishes between manifest types like the type of integers and functions on types like the list type constructor. Previous approaches to ..." Cited by 107 (20 self) Add to MetaCart A polytypic value is one that is defined by induction on the structure of types. In Haskell the type structure is described by the so-called kind system, which distinguishes between manifest types like the type of integers and functions on types like the list type constructor. Previous approaches to polytypic programming were restricted in that they only allowed to parameterize values by types of one fixed kind. In this paper we show how to define values that are indexed by types of arbitrary kinds. It appears that these polytypic values possess types that are indexed by kinds. We present several examples that demonstrate that the additional exibility is useful in practice. One paradigmatic example is the mapping function, which describes the functorial action on arrows. A single polytypic definition yields mapping functions for datatypes of arbitrary kinds including first- and higher-order functors. Polytypic values enjoy polytypic properties. Using kind-indexed logical relations we prove... - 3rd International Summer School on Advanced Functional Programming , 1999 "... ..." - Science of Computer Programming , 2000 "... . A downwards accumulation is a higher-order operation that distributes information downwards through a data structure, from the root towards the leaves. The concept was originally introduced in an ad hoc way for just a couple of kinds of tree. We generalize the concept to an arbitrary regular d ..." Cited by 19 (3 self) Add to MetaCart . A downwards accumulation is a higher-order operation that distributes information downwards through a data structure, from the root towards the leaves. The concept was originally introduced in an ad hoc way for just a couple of kinds of tree. We generalize the concept to an arbitrary regular datatype; the resulting denition is co-inductive. 1 Introduction The notion of scans or accumulations on lists is well known, and has proved very fruitful for expressing and calculating with programs involving lists [4]. Gibbons [7, 8] generalizes the notion of accumulation to various kinds of tree; that generalization too has proved fruitful, underlying the derivations of a number of tree algorithms, such as the parallel prex algorithm for prex sums [15, 8], Reingold and Tilford's algorithm for drawing trees tidily [21, 9], and algorithms for query evaluation in structured text [16, 23]. There are two varieties of accumulation on lists: leftwards and rightwards. Leftwards accumulation ... - Informal Proceedings Workshop on Generic Programming, WGP'98, Marstrand , 1998 "... This paper describes structural polymorphism, a new form of type polymorphism appropriate to functional languages featuring user-defined algebraic data types (e.g., Standard ML, Haskell and Miranda 1 ). The approach extends the familiar notion of parametric polymorphism by allowing the definition of ..." Cited by 6 (0 self) Add to MetaCart This paper describes structural polymorphism, a new form of type polymorphism appropriate to functional languages featuring user-defined algebraic data types (e.g., Standard ML, Haskell and Miranda 1 ). The approach extends the familiar notion of parametric polymorphism by allowing the definition of functions which are generic with respect to data structures as well as to individual types. For example, structural polymorphism accommodates generalizations of the usual length and map functions which may be applied not only to lists, but also to trees, binary trees or similar algebraic structures. Under traditional polymorphic type systems, these functions may be defined for arbitrary component types, but must be (laboriously) re-defined for every distinct data structure. In this sense, our approach also extends the spirit of parametric polymorphism, in that it provides the programmer relief from the burden of unnecessary repetitive effort. The mechanism we will use to realize this form of polymorphism is inspired by a feature familiar to functional programmers, namely the pattern abstraction. Pattern abstractions generalize the usual lambda abstraction (x.e) in that they are comprised of multiple pattern/expression clauses, rather than just a single bound-variable/expression pair. By analogy with pattern abstractions, we generalize polymorphic type abstractions (Òå.e) to type-pattern abstractions, which are comprised of multiple type-pattern/expression pairs. The types given to type-pattern abstractions are universally quantified, just as for traditional type abstractions, but the universal quantifiers are now justified by a recursive analysis of the forms of all possible type instantiations, rather than by parametric independence with respect to a type variable. (x:+.e) ... - Workshop on Fixed Points in Computer Science , 1999 "... The study of inductive and coinductive types (like finite lists and streams, respectively) is usually conducted within the framework of category theory, which to all intents and purposes is a theory of sets and functions between sets. Allegory theory, an extension of category theory due to Freyd, is ..." Cited by 6 (3 self) Add to MetaCart The study of inductive and coinductive types (like finite lists and streams, respectively) is usually conducted within the framework of category theory, which to all intents and purposes is a theory of sets and functions between sets. Allegory theory, an extension of category theory due to Freyd, is better suited to modelling relations between sets as opposed to functions between sets. The question thus arises of how to extend the standard categorical results on the existence of final objects in categories (for example, coalgebras and products) to their existence in allegories. The motivation is to streamline current work on generic programming, in which the use of a relational theory rather than a functional theory has proved to be desirable. In this paper, we define the notion of a relational final dialgebra and prove, for an important class of dialgebras, that a relational final dialgebra exists in an allegory if and only if a final dialgebra exists in the underlying category of map... "... Datatype-generic programs are programs that are parametrized by a datatype or type functor: whereas polymorphic programs abstract from the ‘integers ’ in ‘lists of integers’, datatype-generic programs abstract from the ‘lists of’. There are two main styles of datatype-generic programming: the Algebr ..." Cited by 5 (3 self) Add to MetaCart Datatype-generic programs are programs that are parametrized by a datatype or type functor: whereas polymorphic programs abstract from the ‘integers ’ in ‘lists of integers’, datatype-generic programs abstract from the ‘lists of’. There are two main styles of datatype-generic programming: the Algebra of Programming approach, characterized by structured recursion operators arising from initial algebras and final coalgebras, and the Generic Haskell approach, characterized by case analysis over the structure of a datatype. We show that the former enjoys a kind of higherorder naturality, relating the behaviours of generic functions at different types; in contrast, the latter is ad hoc, with no coherence required or provided between the various clauses of a definition. Moreover, the naturality properties arise ‘for free’, simply from the parametrized types of the generic functions: we present a higherorder parametricity theorem for datatype-generic operators. Categories and Subject Descriptors D.3.3 [Programming languages]: Language constructs and features—Polymorphism, patterns, control structures, recursion; F.3.3 [Logics and meanings of programs]: Studies of program constructs—Program and recursion schemes, type structure; F.3.2 [Logics and meanings of programs]: Semantics of programming languages—Algebraic approaches to semantics; D.3.2 [Programming languages]: Language classifications—Functional languages. - In Workshop on Generic Programming (WGP'98), Marstrand , 1998 "... This paper describes the polytypic functions in PolyLib, motivates their presence in the library, and gives a rationale for their design. Thus we hope to share our experience with other researchers in the field. We will assume the reader has some familiarity with the field of polytypic programming. ..." Cited by 4 (0 self) Add to MetaCart This paper describes the polytypic functions in PolyLib, motivates their presence in the library, and gives a rationale for their design. Thus we hope to share our experience with other researchers in the field. We will assume the reader has some familiarity with the field of polytypic programming. Of course, a library is an important part of a programming language. Languages like Java, Delphi, Perl and Haskell are popular partly because of their useful and extensive libraries. For a polytypic programming language it is even more important to have a clear and well-designed library: writing polytypic programs is difficult, and we do not expect many programmers to write polytypic programs. On the other hand, many programmers use polytypic programs such as parser generators, equality functions, etc. This is a first attempt to describe the library of PolyP; we expect that both the form and content of this description will change over time. One of the goals of this paper is to obtain feedback on the library design from other researchers working within the field. At the moment the library only contains the basic , 1999 "... This paper demonstrates the potential for combining the polytypic and monadic programming styles, by introducing a new kind of combinator, called a traversal. The natural setting for dening traversals is the class of shapely data types. This result reinforces the view that shapely data types form a ..." Cited by 4 (0 self) Add to MetaCart This paper demonstrates the potential for combining the polytypic and monadic programming styles, by introducing a new kind of combinator, called a traversal. The natural setting for dening traversals is the class of shapely data types. This result reinforces the view that shapely data types form a natural domain for polytypism: they include most of the data types of interest, while to exceed them would sacrice a very smooth interaction between polytypic and monadic programming. Keywords: functional/monadic/polytypic programming, shape theory. 1 Introduction Monadic programming has proved itself extremely useful as a means of encapsulating state and other computational eects in a functional programming setting (see e.g. [12,14]). Recently, interactions between monads and data structures have been studied as a further way for structuring programs. Initially focusing on lists, the studies have been extended to the class of regular datatypes (see e.g. [4,11,1]), with the aim to embo... - In: ECOOP’12 (2012 "... Abstract. This paper presents a new solution to the expression problem (EP) that works in OO languages with simple generics (including Java or C#). A key novelty of this solution is that advanced typing features, including F-bounded quantification, wildcards and variance annotations, are not needed. ..." Cited by 4 (2 self) Add to MetaCart Abstract. This paper presents a new solution to the expression problem (EP) that works in OO languages with simple generics (including Java or C#). A key novelty of this solution is that advanced typing features, including F-bounded quantification, wildcards and variance annotations, are not needed. The solution is based on object algebras, which are an abstraction closely related to algebraic datatypes and Church encodings. Object algebras also have much in common with the traditional forms of the Visitor pattern, but without many of its drawbacks: they are extensible, remove the need for accept methods, and do not compromise encapsulation. We show applications of object algebras that go beyond toy examples usually presented in solutions for the expression problem. In the paper we develop an increasingly more complex set of features for a mini-imperative language, and we discuss a real-world application of object algebras in an implementation of remote batches. We believe that object algebras bring extensibility to the masses: object algebras work in mainstream OO languages, and they significantly reduce the conceptual overhead by using only features that are used by everyday programmers. 1 , 2001 "... Nested datatypes are a generalisation of the class of regular datatypes, which includes familiar datatypes like trees and lists. They typically represent constraints on the values of regular datatypes and are therefore used to minimise the scope for programmer error. ..." Cited by 4 (0 self) Add to MetaCart Nested datatypes are a generalisation of the class of regular datatypes, which includes familiar datatypes like trees and lists. They typically represent constraints on the values of regular datatypes and are therefore used to minimise the scope for programmer error.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1603717","timestamp":"2014-04-23T09:54:33Z","content_type":null,"content_length":"39524","record_id":"<urn:uuid:674ed351-75bf-46fe-b597-8d2bd9c744b8>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Quincunx If you do not mind, may I use the animation and your pictures in my mathematics project? Thanks for the information. Do you know of any other open source language which is suitable for making this animation? 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=292041","timestamp":"2014-04-19T02:14:02Z","content_type":null,"content_length":"17105","record_id":"<urn:uuid:e536f76e-e790-4605-a56b-8657ab20bbc8>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Pressure Washer Dilution Calculator Mixing Ratio and Concentration Calculator Skip ahead to the Pressure Washer Dilution Calculator. This seemingly simple subject is still the cause of confusion, and it was for me too. After having to figure this out two springs in a row I decided to write it down and share it. Consider this problem: One part pressure washer detergent should be mixed with 20 parts water for proper dilution (1:20). The pressure washer uses a fixed mixing ratio of 1:7 detergent to water. How should the detergent be pre-diluted so that the pressure washer delivers a spray with 1:20 mixing ratio? It is easy to think that it is a simple algebra problem, using the equation 1/20 = (1/7) × MR. Not so! This would have worked had we used concentration instead of mixing ratio, however. • Mixing Ratio, MR = solute / solvent • Concentration, C = solute / (solvent + solute) Mixing ratio is usually given as the inverse, solvent/solute, but either one works as long as we know which one we're using. Concentration is often expressed in percent by multiplying C by 100%. Pressure Washer Dilution Theory, Using Mixing Ratio The way to think about the problem in the introduction is to think about the total quantities of detergent and water used. Let's say we want to prepare a p gallon jug of pre-mix detergent and water. On the left hand side we have the desired final mixing ratio. On the right hand side we have the quantities of detergent and water in units of gallon, where d is the quantity of detergent in the p gallon jug. The (p-d) is the quantity of water in the jug, and 7p is the number of gallons added by the pressure washer while dispensing the content of the jug. If we set p=1 gallon, we solve for d and find that d=0.381 gal = 48.8 oz. Theory, Using Concentration We convert the mixing ratios to concentrations and we have... ...where C is the concentration of detergent in the pre-mix jug. Solving for C we find that C=0.381 (or 38.1%). Multiply 1 gallon by 0.381 and we find that we need 0.381 gal detergent = 48.8 oz. This type of problem is easier to solve using concentration rather than mixing ratios, but since directions provided with consumer products use mixing ratios (in the U.S. anyway) we will use them as This calculator computes the amount of detergent needed in the pre-mix jug, using the given mixing ratios. Detergent Water Final Mixing Ratio: : Tip: Set to 1:0 for a general mixing calculation. Pressure Washer Mixing Ratio: : Desired Volume of Pre-mix: gal^† Pre-mix Concentration: % Detergent: gal^†= fl. oz Water: gal= fl. oz The water volume needs typically not be measured since we can just top up the jug with water. The calculator also works for mixing gasoline and oil for two-stroke engines, for example. Set the pressure washer mixing ratio to 1:0. Notice that this calculator works differently than the typical gasoline pre-mix calculator on the web in that you set the final volume of pre-mix rather than the volume of gasoline (or water) used. ^†The unit here is intended to be gallon, but it can be any unit including liter. Naturally, the fl. oz result only applies if the unit is gallon.
{"url":"http://jansson.us/mixingratio-concentration.html","timestamp":"2014-04-21T04:42:24Z","content_type":null,"content_length":"8978","record_id":"<urn:uuid:db4399ad-6166-471f-81fd-f74e11be1cf1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Artima Developer Spotlight Forum - Elliotte Rusty Harold on New Additions to java.lang.Math Stating that, The most efficient code is the code you never write. Don't do for yourself what experts have already done, Elliotte Rusty Harold reviews new mathematical functions in java.lang.Math in Java's new math. The latest two version of the JDK each added 10 new methods to java.lang.Math. One reason those methods are important learn about is because they are almost certain to provide more efficient, and more accurate, implementations than what most developers would write themselves for certain calculations. In additions, many methods in java.lang.Math take advantage of native code that, in turn, exploits hardware acceleration on modern CPUs. In the article, Harold notes many instances where a naive approach could lead to incorrect calculations, starting with such basic notions as the size of a number: The Platonic ideal of the number is infinitely precise, while the Java representation is limited to a fixed number of bits. This is important when you deal with very large and very small numbers. For example, the number 2,000,000,001 (two billion and one) can be represented exactly as an int, but not as a float. The closest you can get in a float is 2.0E9 — that is, two billion. Proper calculations of sine and other functions that are both accurate and fast require very careful algorithms designed to avoid accidentally turning small errors into large ones. Often these algorithms are embedded in hardware for even faster performance. For example, almost every X86 chip shipped in the last 10 years has hardware implementations of sine and cosine that the X86 VM can just call, rather than calculating them far more slowly based on more primitive operations. HotSpot takes advantage of these instructions to speed up trigonometry operations dramatically. In the rest of his article, Harold reviews some of the most useful new java.lang.Math methods, such as Math.hypot that calculates the Pythagorean equation: Java 5 added a Math.hypot function to perform exactly this calculation, and it's a good example of why a library is helpful. The naive approach would look something like this: public static double hypot(double x, double y){ return Math.sqrt (x*x + y*y); The actual code [as implemented in java.lang.Math] is somewhat more complex... The first thing you'll note is that this is written in native C code for maximum performance. The second thing you should note is that it is going to great lengths to try to minimize any possible errors in this calculation. In fact, different algorithms are being chosen depending on the relative sizes of x and y. The latest version of java.lang.Math also corrects naming issues with the log method: Logs base 10 tend to appear in engineering applications. Logs base e (natural logarithms) appear in the calculation of compound interest, and numerous scientific and mathematical applications. Logs base 2 tend to show up in algorithm analysis. The Math class has had a natural logarithm function since Java 1.0. That is, given an argument x, the natural logarithm returns the power to which e must be raised to give the value x. Sadly, the Java language's (and C's and Fortran's and Basic's) natural logarithm function is misnamed as log(). In every math textbook I've ever read, log is a base-10 logarithm, while ln is a base e logarithm and lg is a base-2 logarithm. It's too late to fix this now, but Java 5 did add a log10() function that takes the logarithm base 10 instead of base e... Math.log10() has the usual caveats of logarithm functions: taking the log of 0 or any negative number returns NaN. Harold also explains the Math.cbrt(), the hyperbolic trigonometric functions Math.cosh(), Math.sinh(), and Math.tanh(), as well as Math.signum. What do you think of the latest math-related functions in Java?
{"url":"http://www.artima.com/forums/flat.jsp?forum=270&thread=241967","timestamp":"2014-04-19T12:33:48Z","content_type":null,"content_length":"42469","record_id":"<urn:uuid:8e3b4d90-448b-4c1b-9120-8b34ed1bfdb9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
LaunchBar’s built-in Calculator allows you to quickly perform numeric calculations. Just type , type your calculation and press Return. LaunchBar’s built-in Calculator Opening the Calculator To open the Calculator, do one of the following: • Press the Calculator keyboard shortcut, which can be configured in the Calculator pane LaunchBar preferences. • Choose Select > Calculator (or press Command-=). • Type an abbreviation to select the Calculator item (e.g. CALC or '='), then press Space. • Type your calculation right away, e.g. just type 13 + 5 and press Return. • Paste a calculation or just a series of numbers onto LaunchBar (Command-V). Entering leading digits (either by typing them or via 'paste') automatically switches to Calculator. This behavior can be turned off in LaunchBar preferences > General > Switch to Calculator when typing digits. Operation Example Basic arithmetical operations 44 - 16 * (12.3 + 4.8 / 3) Trigonometric operations sin(pi / 2) Inverse trigonometric operations atan(1) Square Root sqrt(2) Raise to power pow(27 | 1/3) Raise to integral power 2^8 Exponential function e^x exp(1) Logarithm (base 10) log(1000) Natural logarithm ln(2.71828182846) Binary logarithm ld(1024) Greatest common divisor gcd(527 | 697) Least common multiple lcm(91 | 143) Multiple arguments If a function has more than one argument (such as pow or gcd), the arguments have to be separated with a vertical line character | to prevent ambiguities with decimal and thousand separators. To enter this separator more conveniently, you can also press either Tab or \. Smart Calculator Input To speed up typing your expressions (especially on notebook keyboards), LaunchBar provides some smart conversions during input or prior to calculating the result: • The plus sign can be omitted between consecutive numbers. For example, 13 9 8.3 will be converted to 13+9+8.3. • In some cases the multiplication operator can be omitted as well. For example, you can type 17(3+5) or 2pi to get 17*(3+5) or 2*pi. • Square brackets can be entered as an alternative to parentheses. • A lowercase x can be used for multiplications as an alternative to '*' • The Equals sign '=' can be typed to enter '+'. • The key left from the "1" key can be used to enter a decimal separator. Smart Brackets Calculator automatically inserts closing brackets if appropriate. When you type an opening bracket, the corresponding closing bracket will inserted automatically. To put some part of an already entered expression in brackets, select that part and type a bracket. Entering an opening bracket will put the insertion point before the bracketed expression, entering a closing bracket will put it behind. Smart Brackets can be turned off in the General pane of LaunchBar preferences. Function Shortcuts Uppercase letters can be used to enter function names more quickly. For example, type S45 to get sin(45), or type Q2 to get sqrt(2). Input Result S sin(x) C cos(x) T tan(x) AS asin(x) AC acos(x) AT atan(x) Q sqrt(x) L ln(x) D ld(x) G log(x) E exp(x) P pow(x|y) X pow(10|x) R 1/x Shift-2 x² Shift-3 x³ You can select an expression prior to typing the shortcut to use the expression as the function’s argument. For example, when you’ve selected the number 43, typing S results in sin(45). Automatic Decimal Separator Detection Calculator automatically detects the used notation for decimal- and thousand-separators (period vs. comma). So you may enter either 12.3 + 1,550 or 12,3 + 1.550 and you’ll get the expected results in both cases. In ambiguous cases (e.g. 12.000 + 3,123) LaunchBar considers the number format as specified in System Preferences > Language & Text > Formats. Calculating Sums To calculate the sum of a series of numbers, you can omit the plus signs. It’s sufficient to separate the numbers with a space character, or if you enter them via Copy & Paste they just have to be in separate lines (so you can paste e.g. a column of numbers). Instant Calculate With Instant Calculate you can quickly perform calculations via Instant Send. If the sent text appears to be a valid calculator expression, the results are displayed automatically. For example, if you are working on a document that contains a series of numbers, you can select these numbers, send them to LaunchBar via Instant Send, and LaunchBar will instantly show the sum of these numbers. Non-contiguous Selections Many OS X applications support non-contiguous text selections, allowing you to select multiple, individual pieces of text. You first select one piece of text, then hold down the Command key and select some other text elsewhere in the document. For example, the following TextEdit selection was created by double clicking the first, third and fourth number while holding the Command key down: Now copy and paste this text selection from TextEdit to LaunchBar, and you will get the following result: Or if you are using Instant Calculate as described above, you’ll get the sum of these numbers with just a single keystroke: Subsequent Calculations When the result of a calculation is displayed in large type, you can press Space to modify the current expression, or press Tab, +, -, *, / or A to perform subsequent calculations based on the current result. The letter “a” can be used as a placeholder for the most recent result, which is especially useful if you want to use this result in different places of your new expression. For example, if your last calculation delivered 7.24 as the result you can enter: (a + 3) / a which will then evaluate to: (7.24 + 3) / 7.24 Invoking Calculator from external applications Calculations can be sent from external applications to LaunchBar via AppleScript or URL commands. The result of the expression is then displayed in large type. Calculation requests can be sent to LaunchBar via AppleScript using the perform action command. For example: tell application perform action "Calculator" with string "(1+sqrt(5))/2" end tell URL commands Calculation requests can also be sent to LaunchBar using the x-launchbar:calculate URL command. You can optionally specify input and output formats to customize the display of the result. Special characters must be properly URL encoded using UTF-8 percent escapes. Search Templates The x-launchbar:calculate URL command can also be used to compute the result of predefined formulas via Search Templates. Convert Fahrenheit to Celsius Convert Celsius to Fahrenheit
{"url":"http://www.obdev.at/resources/launchbar/help/Calculator.html","timestamp":"2014-04-20T13:18:51Z","content_type":null,"content_length":"13482","record_id":"<urn:uuid:bcbfff43-5dc5-4823-ade1-d80df0443f39>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
[Help-glpk] patch to upgrade glpk 4.5 to 4.6 [Top][All Lists] [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] [Help-glpk] patch to upgrade glpk 4.5 to 4.6 From: Andrew Makhorin Subject: [Help-glpk] patch to upgrade glpk 4.5 to 4.6 Date: Mon, 2 Aug 2004 16:48:53 +0400 N.B. THIS IS NOT AN OFFICIAL RELEASE OF GLPK 4.6. Here is a patch to upgrade glpk 4.5 to 4.6 (please see the attachment). Its MD5 check-sum must be: 757c51aaf23fb2e32feed95a0a2922f1 *glpk.diff.gz To upgrade the package do the following: 1. Download glpk-4.5.tar.gz from GNU ftp site or some its mirror. 2. Unzip and untar glpk-4.5.tar.gz in a working subdirectory. 3. Unzip the patch attached and place it in the same working directory, i.e. the subdirectory 'glpk-4.5' and the file 'glpk.diff' must be in the same subdirectory. 4. Run the command (only once!): patch -p0 < glpk.diff There will be some warnings about patching glpkmex files. Never mind on them. 5. Rename the subdirectory 'glpk-4.5' to 'glpk-4.6'. 6. Configure and compile/install the package as usual. What's new in glpk 4.6: Two new statements of the GNU MathProg language were implemented: solve and printf. The solve statement is optional and can be used only once in the model description. It has the following syntax: Having been executed the solve statement makes all model variables to be similar model parameters, i.e. below the solve statement any variable can be referenced in the same way as a parameter. Note that variable, constraint, and objective statements can be used only above the solve statement while set, parameter, display, and printf statements can be used above as well as below the solve statement. The printf statement is intended to produce resulting reports. It has the following syntax: printf format-string, expr, expr, ..., expr; printf { domain } : format-string, expr, expr, ..., expr; where format-string is a symbolic literal or expression which specifies a format control string in the same way as in the C language; expr is a numeric, symbolic, or logical expression (if printf is used below the solve statement, the expression may refer to model variables). Both statements solve and printf are supported by the solver glpsol. The output may be redirected with '-y' or '--display' option. Below here is a brief example which illustrates how to use the solve and printf statements. Any comments and suggestions are welcome. Andrew Makhorin # This problem finds a least cost shipping schedule that meets # requirements at markets and supplies at factories. # References: # Dantzig G B, "Linear Programming and Extensions." # Princeton University Press, Princeton, New Jersey, 1963, # Chapter 3-3. set I; /* canning plants */ set J; /* markets */ param a{i in I}; /* capacity of plant i in cases */ param b{j in J}; /* demand at market j in cases */ param d{i in I, j in J}; /* distance in thousands of miles */ param f; /* freight in dollars per case per thousand miles */ param c{i in I, j in J} := f * d[i,j] / 1000; /* transport cost in thousands of dollars per case */ var x{i in I, j in J} >= 0; /* shipment quantities in cases */ minimize cost: sum{i in I, j in J} c[i,j] * x[i,j]; /* total transportation costs in thousands of dollars */ s.t. supply{i in I}: sum{j in J} x[i,j] <= a[i]; /* observe supply limit at plant i */ s.t. demand{j in J}: sum{i in I} x[i,j] >= b[j]; /* satisfy demand at market j */ printf ""; printf "From To Cost Shipping Total cost"; printf "---------- ---------- ---------- ---------- ----------"; printf {i in I, j in J: x[i,j] != 0}: "%-10s %-10s %10.3f %10d %10.3f", i, j, c[i,j], x[i,j], c[i,j] * x[i,j]; printf "------------------------------------------------------"; printf " %10.3f", sum{i in I, j in J} c[i,j] * x[i,j]; printf ""; set I := Seattle San-Diego; set J := New-York Chicago Topeka; param a := Seattle 350 San-Diego 600; param b := New-York 325 Chicago 300 Topeka 275; param d : New-York Chicago Topeka := Seattle 2.5 1.7 1.8 San-Diego 2.5 1.8 1.4 ; param f := 90; $ ./glpsol transp.mod Reading model section from transp.mod... Reading data section from transp.mod... 76 lines were read Generating cost... Generating supply... Generating demand... Model has been successfully generated lpx_simplex: original LP has 6 rows, 6 columns, 18 non-zeros lpx_simplex: presolved LP has 5 rows, 6 columns, 12 non-zeros lpx_adv_basis: size of triangular part = 5 0: objval = 0.000000000e+00 infeas = 1.000000000e+00 (0) 4: objval = 1.563750000e+02 infeas = 0.000000000e+00 (0) * 4: objval = 1.563750000e+02 infeas = 0.000000000e+00 (0) * 5: objval = 1.536750000e+02 infeas = 0.000000000e+00 (0) Time used: 0.0 secs Memory used: 0.2M (174146 bytes) From To Cost Shipping Total cost ---------- ---------- ---------- ---------- ---------- Seattle Chicago 0.153 300 45.900 San-Diego New-York 0.225 325 73.125 San-Diego Topeka 0.126 275 34.650 Model has been successfully processed Description: GNU Zip compressed data [Prev in Thread] Current Thread [Next in Thread] • [Help-glpk] patch to upgrade glpk 4.5 to 4.6, Andrew Makhorin <=
{"url":"http://lists.gnu.org/archive/html/help-glpk/2004-08/msg00007.html","timestamp":"2014-04-16T20:04:02Z","content_type":null,"content_length":"10128","record_id":"<urn:uuid:024fe53a-6562-486a-99c6-40a47a8129fd>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Attractive Tablecloths Copyright © University of Cambridge. All rights reserved. 'Attractive Tablecloths' printed from http://nrich.maths.org/ Charlie has been designing tablecloths for each weekday. He likes to use as many colours as he possibly can but insists that his tablecloths have some symmetry. The $5$ by $5$ tablecloths below each satisfy a different symmetry rule. Monday's $5$ by $5$ tablecloth has just $1$ line of symmetry. this interactivity to design tablecloths of other sizes with just $1$ line of symmetry. Can you determine a way of working out how many colours would be needed for an n by n tablecloth (where n is odd)? Tuesday's $5$ by $5$ tablecloth has rotational symmetry of order $4$, and no lines of symmetry. this interactivity to design tablecloths of other sizes with rotational symmetry of order $4$, and no lines of symmetry. Can you determine a way of working out how many colours would be needed for an n by n tablecloth (where n is odd)? Wednesday's $5$ by $5$ tablecloth has $2$ lines of symmetry (horizontal and vertical), and rotational symmetry of order $2$. this interactivity to design tablecloths of other sizes with $2$ lines of symmetry, and rotational symmetry of order $2$. Can you determine a way of working out how many colours would be needed for an n by n tablecloth (where n is odd)? Thursday's $5$ by $5$ tablecloth has $2$ (diagonal) lines of symmetry and rotational symmetry of order $2$. this interactivity to design tablecloths of other sizes with $2$ (diagonal) lines of symmetry and rotational symmetry of order $2$. Can you determine a way of working out how many colours would be needed for an n by n tablecloth (where n is odd)? Friday's $5$ by $5$ tablecloth has $4$ lines of symmetry and rotational symmetry of order $4$. this interactivity to design tablecloths of other sizes with $4$ lines of symmetry and rotational symmetry of order $4$. Can you determine a way of working out how many colours would be needed for an n by n tablecloth (where n is odd)? At weekends Charlie likes to use tablecloths with an even number of squares. Investigate the number of colours that are needed for different types of symmetric $n$ by $n$ tablecloths where $n$ is even. You may wish to investigate using this interactivity.
{"url":"http://nrich.maths.org/900/index?nomenu=1","timestamp":"2014-04-18T14:05:41Z","content_type":null,"content_length":"7861","record_id":"<urn:uuid:1ff8d3f9-b5af-4763-acad-b89b7f0a283a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Direct Simulation of Initial Value Problems for the Motion of Solid Bodies in a Newtonian Fluid Part 1. Sedimentation J. Feng, H. H. Hu and D. D. Joseph J. Fluid Mech. 261, 95-134 (1994) Abstract This paper reports the result of direct simulations of fluid-particle motions in two dimensions. We solve the initial value problem for the sedimentation of circular and elliptical particles in a vertical channel. The fluid motion is computed from the Navier-Stokes equations for moderate Reynolds numbers in the hundreds. The particles are moved according to the equations of motion of a rigid body under the action of gravity and hydrodynamic forces arising from the motion of the fluid. The solutions are as exact as our finite element calculations will allow. As the Reynolds number is increased to 600, a circular particle can be said to experience five different regimes of motion: steady motion with and without overshoot and weak, strong and irregular oscillations. An elliptic particle always turns its long axis perpendicular to the fall, and drifts to the center-line of the channel during sedimentation. Steady drift, damped oscillation and periodic oscillation of the particle are observed for different ranges of the Reynolds number. For two particles which interact while settling, a steady staggered structure, a periodic wake-action regime and an active drafting-kissing-tumbling scenario are realized at increasing Reynolds numbers. The non-linear effects of particle-fluid, particle-wall and inter-particle interactions are analyzed, and the mechanisms controlling the simulated flows are shown to be lubrication, turning couples on long bodies, steady and unsteady wakes and wake interactions. The results are compared to experimental and theoretical results previously published.
{"url":"http://www.math.ubc.ca/~jfeng/Publications/Abstracts/94_JFM1.htm","timestamp":"2014-04-17T06:53:13Z","content_type":null,"content_length":"2357","record_id":"<urn:uuid:95dca9e9-22d3-49e7-a38f-78b7dd90670f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
CS70 - Lecture 6 - Jan 31, 2011 - 10 Evans We are increasing sections sizes, waitlist down to 2. Section 105 (T 3-4) still has 4 slots. Our next goal is to use induction to analyze algorithms. In other classes you learned about recursion. Here our point is that recursion and induction are two sides of the same coin: recursion is how an algorithm works and induction is how we analyze it to either prove it is correct or figure out how long it takes to execute. EG: Fibonacci numbers: Let F(0)=0, F(1)=1 and F(n)=F(n-1)+F(n-2). So F(0,1,2,…) = 0,1,1,2,3,5,8,13,21,... Theorem: Let x_+ = (1+sqrt(5))/2 ~ 1.6 and x_- = (1-sqrt(5))/2 ~ -.6 Then F(n) = ( x_+^n - x_-^n )/sqrt(5) ~ 1.6^n / sqrt(5) grows exponentially fast Proof by induction: P(n) = " F(n) = ( x_+^n - x_-^n )/sqrt(5)" Bases case(s): Check P(0) and P(1) are true, i.e. that the formula yields F(0) = 0 and F(1) = 1 as desired Induction step: show that P(n-2) and P(n-1) -> P(n): P(n-2) and P(n-1) -> F(n-2)+F(n-1) = ( x_+^(n-2) - x_-^(n-2) )/sqrt(5) + (x_+^(n-1) - x_-^(n-1))/sqrt(5) = … = ( x_+^n - x_-^n)/sqrt(5) = F(n) -> P(n) Consider following 2 algorithms for computing F(n): func F1(n) if n=0 return 0 elseif n = 1 return 1 x = 0, y = 1, for i= 2 to n tmp = y, y = x+y, x=tmp (Note: bug in class notes; what does the function there compute instead?) func F2(n) if n=0 return 0 else if n= 1 return 1 else return F2(n-1) + F2(n-2) Which algorithm is faster? More simply: how many additions does each one perform? Let A1(n) = #additions_in_F1(n) = ? Let A2(n) = #additions_in_F2(n) = 1 + A2(n-1) + A2(n-2) So A2(0,1,2,…) = 0, 0, 1, 2, 4, 7, 12, 20,... What is the relationship between A2(n) and F(n)? Looks like A2(n) = F(n+1)-1. Proof by induction: A2(n) = 1 + A2(n-1) + A2(n-2) = 1 + (F(n)-1) + (F(n-1)-1) … by induction = F(n) + F(n-1) -1 = F(n+1) -1 … as desired! Looks like F1(n) is *much* faster than F2(n) Ex: Assume your compute takes 1 nanosecond to add two numbers. How long does it take to evaluate F1(129)? About 128 nanoseconds How long does it take to evaluate F2(129)? About F(130) nanoseconds ~ 10^27 nanoseconds ~ 34 billion years How old is the universe? Now suppose you had one computer for each atom in the universe, about 10^80 of them, all running in parallel to help you run program F2. How long would it take to compute F2(512)? About F(513) / 10^80 nanoseconds ~ 10^107/1e^80 nanoseconds ~ 23 billion years How would you ever guess the formula for F(n)? Exactly: guess! Try F(n) = x^n and see what x has to be for this to be true: F(n) = x^n = F(n-1) + F(n-2) = x^(n-1) + x^(n-2) or x^n = x^(n-1) + x^(n-2) or x^2 = x + 1 or x = (1+sqrt(5))/2 = x_+ or x = (1-sqrt(5))/2 = x_- So both F(n) = x_+^n and F(n) = x_-^n satisfy F(n) = F(n-1)+F(n-2). But neither satisfies F(0)=0 and F(1) = 1: what to do? Note that for any constants r and s, F(n) = r*x_+^n + s*x_-^n also satisfies F(n) = F(n-1) + F(n-2), so we can pick the 2 constants r and s to satisfy the 2 constraints F(0)=0 and F(1) = 1 (or F(0)=7 and F(1) = -pi, whatever we like). The same "guessing" procedure works for similar recurrences, like G(n) = 2*G(n-1) - 7*G(n-2), G(0) = 2, G(1) = -3 Try plugging in G(n) = x^n, solve a quadratic for two values of x, etc. What do you think happens with H(n) = 3*H(n-1) + 2*H(n-2) - H(n-3)? Such "linear recurrences" occur commonly, eg in analyzing signal processing. Next example of using induction to analyze and algorithm, this time for sorting. One of the fastest algorithms is called quicksort, and it works like this. function quicksort(n, A) … input is array A of n numbers … output is array of these n numbers sorted in increasing order if (n=0 or n=1) pick a random number 1 <= i <= n reorder the entries of A so that the initial entries are all <= A(i) (say there are m of them) the next entry = A(i) the remaining entries are > A(i) return S = [quicksort(m,A),A(i),quicksort(n-m-1,A(m+1:n))] The function quicksort is recursive, and we will prove it correctly sorts by using induction on the length of the array being sorted P(n) = "quicksort correctly sorts an input array of length n" Base cases: P(0) and P(1) work because the algorithm doesn't have to do anything Induction step: We assume P(0) and P(1) and … and P(n) and prove P(n+1): after reordering array A of length n+1, it is partitioned into 3 subsets: (1) entries <= A(i), except A(i) itself (2) A(i) (3) entries > A(i) Obviously, if we correctly sort subsets (1) and (3), the whole array will be sorted. Quicksort correctly sorts these subsets because their lengths are at most n, since they don't contain A(i).
{"url":"http://www.cs.berkeley.edu/~demmel/cs70_Spr11/Lectures/CS70_Lecture06_Jan31.html","timestamp":"2014-04-16T08:05:16Z","content_type":null,"content_length":"11789","record_id":"<urn:uuid:dc08b221-19f3-45c6-960a-777ffac58194>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Isocontour - File Exchange - MATLAB Central Rate this file 67 Downloads (last 30 days) File Size: 3.15 KB File ID: #30525 24 Feb 2011 (Updated 10 Mar 2011) Find ISO-contour geometry in a 2D image using marching-squares, and sort the contour objects Please login to add a comment or rating.
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/30525-isocontour","timestamp":"2014-04-21T01:02:24Z","content_type":null,"content_length":"29194","record_id":"<urn:uuid:eae5d79a-a25b-434f-9989-1c59b2c034f6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Difference equation with non-linear term Hi all-- I can't figure out how to approach the following difference equation: where a, b are constants, e_t is a known function and f(x_t-1) is a convex, u-shaped function that goes through the origin. (Sorry Tex would not want to work) To begin with, I considered f linear and solved the equation. Exactly one of the roots of the corresponding homogenous equation lies within the unit circle, so I set the free coefficients in the general solution to zero to obtain a bounded solution and derive the particular solution. Does anyone know a way to treat a nonlinear function f?
{"url":"http://www.physicsforums.com/showthread.php?p=2175878","timestamp":"2014-04-16T16:01:40Z","content_type":null,"content_length":"20041","record_id":"<urn:uuid:3235fd75-6cff-4908-af4d-a53b1931edfd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Differentiation- Quotient Rule Calculate the gradient of the tangent to the curve $y=\frac{x+2}{\sqrt{3x+1}}$ at $x=1$ If your question is to calculate the derivative of the function here is step by step $y=\frac{x+2}{\sqrt{3x+1}} \rightarrow y'=\frac{(x+2)'(\sqrt{3x+1}) - (\sqrt{3x+1})'(x+2)}{3x+1}$ The derivate of $\sqrt{3x+1}$ is calculated using the chain rule $f(x)=\sqrt{3x+1} = (3x+1)^{\frac{1}{2}} \rightarrow f'(x) = \frac{1}{2}\cdot \frac{1}{\sqrt{3x+1}} \cdot 3$ can now conclude Since you will need to use the chain rule on the square root anyway, you might find it easier to do this as $f(x)= (x+2)(3x+ 1)^{-1/2}$ and use the product rule rather than the quotient rule. Thanks to both of you with your reply, I will try it out!! Hello! Just a little update on my workings... Quote: Calculate the gradient of the tangent to the curve $y=\frac{x+2}{\sqrt{3x+1}}$ at $x=1$ $Let U=x+2$ and $V=\sqrt{3x+1}$ $\frac{dU}{dx}=1$ $\frac {dV}{dx}=\frac{1}{2}(3x+1)^{-\frac{1}{2}}(3)$ $=\frac{3}{2}(3x+1)^{-\frac{1}{2}}$ $\frac{dy}{dx}=\frac{\frac{3x+1-(\frac{3}{2}(x+2))}{\sqrt{3x+1}}}{3x+1}$ $=\frac{(3x+1)^2-(\frac{3}{2}(x+2)(3x+1))}{\ sqrt{3x+1}}$ $=\frac{3x+1(3x+1-((\frac{3}{2})(x+2)))}{\sqrt{3x+1}}$ I can't simplify further. Am I on the right track?
{"url":"http://mathhelpforum.com/pre-calculus/128165-differentiation-quotient-rule-print.html","timestamp":"2014-04-18T20:45:33Z","content_type":null,"content_length":"11051","record_id":"<urn:uuid:61fdfaae-ab41-4ce5-90d2-1964c4737a07>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Why I Hate FOIL Let's use our imagination a bit. Picture yourself in math class (Algebra I to be exact), minding your own business, having fun playing with the axioms (aka rules) of algebra, and then one day your teacher drops this bomb on you: "Expand (x+3)(x-1)" And you might be thinking, "woah now, where did come from?" It makes sense that this would shock you. You were just getting used to the idea of expanding 3(x-1), and you probably would have been fine with x+3(x-1), but (x+3)(x-1) is a foreign idea all Well, before you have much time to think about it on your own and discover anything interesting, your teacher will probably tell you that even though you don't know how to solve it now, there is a "super helpful", magical technique that will help you… For those of you lucky enough never to have heard of FOIL, I will explain. FOIL stands for First Outside Inside Last and is a common mnemonic device used to confuse children about a fairly easy If you remember the distributive property a(b+c) = ab+ac, then it might seem odd that all of a sudden we put two grouping next to each other and now we are doing something "new". But is FOIL really new? The answer is of course no, what we are actually doing is just a short cut for the distributive property. And if you were aloud to try and solve it before being told what to do, you might actually have figured that out. For example, if we have (a+b)(c+d), we could distribute (a+b) as if it was a whole quantity. So (a+b)(c+d) = c(a+b) + d(a+b) and then we distribute again and get ac + bc + ad + To me this seems much simpler than having to learn a mnemonic device, and remember how to draw our "rainbow lines" and remember where to put a plus and a minus, and so son and so forth. We merely follow a simple rules we already know. Another useful reason not to teach FOIL is because in only works for expressions similar to (a+b)(c+d). But what about expressions that look like (a+b)(c+d+e), or even (a-b+c)(d+e)(f-g-h+i+j)(k-l+p)? You can't use FOIL for these, but of course, you use the distributive property. So please, if you are a math teacher, the next time you have a chance to teach FOIL… don't. Spare your students the confusion and teach them what is really going on. FOIL might be quicker, but math isn't about the destination, it's about the journey.
{"url":"http://www.wyzant.com/resources/blogs/243067/why_i_hate_foil","timestamp":"2014-04-18T09:47:28Z","content_type":null,"content_length":"41323","record_id":"<urn:uuid:964c8b35-fad4-4534-aad8-6deeb151e032>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding Discrete Cosine Transformation up vote 2 down vote favorite I'm currently working on some software and a key component is 2D DCT. But my question is more general, as I'm trying to understand the DCT in general, let's say from engineers point of view. For start, I know that there are 8 types of DCT, and that many authors use different notation, sometimes even different parameterization, but that's doesn't matter as I'm not going to implement DCT, I only want to understand it. I will stick to formula, scavenged from http://www.cs.cf.ac.uk/Dave/Multimedia/node231.html. DCT is defined as following: $$ F(u) = \left ( \frac{2}{N} \right )^\frac{1}{2} \sum_{i=0}^{N-1} \Lambda (i)cos\left [ \frac{\pi\cdot u}{2N}(2i+1) \right ]f(i) $$ $N$ is count of samples. $i$ is index of particular sample and $f(i)$ it's value. $\Lambda$- well I'm not sure, but it's only a weight coefficient, so it does not affect the principle of the DCT. What I'm struggling to understand are values $u$ and $F(u)$. I know that DCT transform data to frequency domain, but I have not found the meaning of this values. My guess is that $u$ is particular frequency and $F(u)$ is amount of this frequency in data, e.g. for signal with 8kHz frequency (for example whistle), the DCT would return $0$ for all values of $u$ and some great value for $u=8000$. (this is an ideal case, I know this is overcast example). I've also deducted, that maximum frequency in DCT result will be limited by number of samples, e.g. for sound sampled at 44100kHz there won't be any coefficient for frequency higher than 44100kHz, due to Nyquist criterium. So are my conclusions right, or completely off track? Thanks in advance. signal-analysis fourier-analysis Your question would probably fit one of the other sites mentioned in the FAQ. – Douglas Zare Oct 19 '12 at 12:53 add comment 2 Answers active oldest votes You're very close. $u$ corresponds to frequency and $|F(u)|$ is frequency content in the signal. Let me explain relation between variable $u$ and the frquency it corresponds to:- A signal is being sampled at time period of $T_{p}$, then maximum frequency that it can successfully represent is $1/2T_{p}$. Here $f=1/T_{p}$ is sampling rate and that in case of audio signal is 44.1 KHz (so that it can represent 22KHz signal which is close to hearing limit of human ears). up vote 1 Now, what all frequency it can represent depends on $N$ i.e. number of samples that you take. down vote Frequency in this case will take discrete values from $0,f/N,2f/N...(N-1)f/N$ and these frequency will correspond to $u=0,1,2,..N-1$. Frequency beyond that will alias back to one those frequencies. And so more number of samples you take you can represent more number of frequency. add comment This question would better suit being asked on the dsp stack I believe. up vote 0 down vote add comment Not the answer you're looking for? Browse other questions tagged signal-analysis fourier-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/110083/understanding-discrete-cosine-transformation/110124","timestamp":"2014-04-20T08:59:13Z","content_type":null,"content_length":"54897","record_id":"<urn:uuid:1e04617c-2b04-46bd-b79b-703519b10e29>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Abella is an interactive theorem prover developed by Andrew Gacek while a PhD student at the University of Minnesota and a postdoc within the Parsifal project. This system include declarative support for λ-tree syntax (an approach to syntax with bindings), the two-level approach to reasoning about logic specifications, the ∇-quantification, and nominal abstractions. Many examples in the meta-theory of the lambda-calculus and π-calculus have been developed. Tac is an interactive and automatic theorem prover for an intuitionistic logic extended with fixed points and generic (∇) quantification. Tac is based on recent research work by David Baelde and others. It has been implemented by David Baelde and Alexandre Viel (INRIA & LIX/Ecole Polytechnique) and Zach Snow (University of Minnesota - Minneapolis). See the Tac page in the Slimmer GForge Bedwyr is a model checker that allows for reasoning directly on syntactic expressions (even those contining bindings). The earlier Level 0/1 prover was rewritten into OCaml by David Baelde and Axelle Zeigler and then greatly extended by David Baelde and Andrew Gacek during the summer of 2006. The system is being developed on the INRIA Gforge open source platform. See the Bedwyr page in the Slimmer GForge project. Alwen Tiu has a prototype implementation of a logic system that can reason on “levels 0 and 1”. The system integrates finite success and finite failure as proof search for a single logic. Teyjus is an implementation of λProlog by Gopalan Nadathur. While Teyjus has no formal connections with the Parsifal project, it is a commonly used prototyping tool for this team.
{"url":"http://www.lix.polytechnique.fr/parsifal/dokuwiki/doku.php?id=software","timestamp":"2014-04-16T13:04:03Z","content_type":null,"content_length":"17821","record_id":"<urn:uuid:f3b40432-6ca1-41a2-b920-b250dd9f19d3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Recreations in the Theory of Numbers 2nd edition by Beiler | 9780486210964 | Chegg.com Details about this item Recreations in the Theory of Numbers: Number theory, the Queen of Mathematics, is an almost purely theoretical science. Yet it can be the source of endlessly intriguing puzzle problems, as this remarkable book demonstrates. This is the first book to deal exclusively with the recreational aspects of the subject and it is certain to be a delightful surprise to all devotees of the mathematical puzzle, from the rawest beginner to the most practiced expert. Almost every aspect of the theory of numbers that could conceivably be of interest to the layman is dealt with, all from the recreational point of view. Readers will become acquainted with divisors, perfect numbers, the ingenious invention of congruences by Gauss, scales of notation, endless decimals, Pythagorean triangles (there is a list of the first 100 with consecutive legs; the 100th has a leg of 77 digits), oddities about squares, methods of factoring, mysteries of prime numbers, Gauss's Golden Theorem, polygonal and pyramidal numbers, the Pell Equation, the unsolved Last Theorem of Fermat, and many other aspects of number theory, simply by learning how to work with them in solving hundreds of mathematical puzzle problems. The text is extremely clear and easy to follow, and it bears convincing evidence of the author's deep sense of humor and his outstanding ability to lure the reader through even the most difficult trails by skillfully revealing their fascination. The problems distributed throughout the book are explained in the final chapter and there is also a supplementary chapter containing 100 problems and their solutions, many original. There are over 100 tables. The appeal of these stimulating puzzles lies in their ready comprehensibility and the fact that only high school math is needed to master the fundamental theory presented by the author. This theory is itself interesting and of use to the more serious math student, but it may be omitted by lay readers without diminishing the book's challenge or detracting from the pleasure-giving nuggets it Back to top
{"url":"http://www.chegg.com/textbooks/recreations-in-the-theory-of-numbers-2nd-edition-9780486210964-0486210960","timestamp":"2014-04-25T04:31:04Z","content_type":null,"content_length":"21914","record_id":"<urn:uuid:9eb86a1e-e188-44c1-88c8-1c8c2c931295>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Factorial gamma function November 4th 2008, 09:33 PM #1 Jun 2008 Factorial gamma function This is for high school but I suppose it is not studied in most high schools so I'll post it here. I don't know where to begin. Prove that $\sum _{n=1}^N \frac{(n+m)!}{n!}=-\frac{\Gamma (1+m) \Gamma (1+N)+m \Gamma (1+m) \Gamma (1+N)-\Gamma (2+m+N)}{(1+m) \Gamma (1+N)}$ Any help will be appreciated This is for high school but I suppose it is not studied in most high schools so I'll post it here. I don't know where to begin. Prove that $\sum _{n=1}^N \frac{(n+m)!}{n!}=-\frac{\Gamma (1+m) \Gamma (1+N)+m \Gamma (1+m) \Gamma (1+N)-\Gamma (2+m+N)}{(1+m) \Gamma (1+N)}$ Any help will be appreciated From looking at it, I think the best way would be to prove this by mathematical induction: First, show that this is true for N=1: $\frac{(1+m)!}{1!}=-\frac{m! \Gamma (2)+m\cdot m! \Gamma (2)-(m+2)!}{(1+m) \Gamma (2)}$ But $\Gamma(2)=1$, Thus $(1+m)!=-\frac{m! +(m+1)! -(m+2)!}{(1+m)}$ Now, $-\frac{m! +(m+1)! -(m+2)!}{(1+m)}=\frac{-1-m-1+m^2+3m+2}{m+1}m!=\frac{m^2+2m+1}{m+1}m!$$=\frac{(m+1)^2}{m+1}m!=(m+1)m!=(m+1)!$ Since we've shown this holds for N=1, Now we assume it holds for N=K The hard [and really messy part] is to show it holds for N=k+1. This means that $\sum _{n=1}^{k+1} \frac{(n+m)!}{n!}=-\frac{\Gamma (1+m) \Gamma (k+2)+m \Gamma (1+m) \Gamma (k+2)-\Gamma (k+3+m)}{(1+m) \Gamma (k+2)}$ Note that $\sum_{n=1}^{k+1}\frac{(n+m)!}{n!}=\sum_{n=1}^k\fra c{(n+m)!}{n!}+\frac{(k+1+m)!}{(k+1)!}$ So $\sum_{n=1}^k\frac{(n+m)!}{n!}$$+\frac{(k+1+m)!}{(k+1)!}=-\frac{\Gamma (1+m) \Gamma (k+2)+m \Gamma (1+m) \Gamma (k+2)-\Gamma (k+3+m)}{(1+m) \Gamma (k+2)}$ Knowing that $\Gamma(u)=(u-1)!$, we see that $\sum_{n=1}^k\frac{(n+m)!}{n!}$$+\frac{(k+1+m)!}{(k+1)!}=-\frac{m!(k+1)!+m!m(k+1)!-(k+2+m)!}{(1+m)(k+1)!}$ We can further simplify $-\frac{m!(k+1)!+m!m(k+1)!-(k+2+m)!}{(1+m)(k+1)!}=-\frac{(1+m)m!(k+1)!-(k+2+m)!}{(1+m)(k+1)!}$$=-\frac{(m+1)!(k+1)!-(k+2+m)!}{(1+m)(k+1)!}$ Since $\sum _{n=1}^k \frac{(n+m)!}{n!}=-\frac{\Gamma (1+m) \Gamma (1+k)+m \Gamma (1+m) \Gamma (1+k)-\Gamma (2+m+k)}{(1+m) \Gamma (1+k)}$$=-\frac{m!k!+m!k!m-(k+1+m)!}{(1+m) k!}$ Thus, we need to show that $-\frac{m!k!+m!k!m-(k+1+m)!}{(1+m) k!}+\frac{(k+1+m)!}{(k+1)!}$$=\color{red}-\frac{(m+1)!(k+1)!-(k+2+m)!}{(1+m)(k+1)!}$ Let's get the left side to look like the right side: $-\frac{m!k!+m!k!m-(k+1+m)!}{(1+m) k!}+\frac{(k+1+m)!}{(k+1)!}$$=-\frac{(1+m)m!k!-(k+1+m)!}{(1+m)(1+k) k!}(k+1)+\frac{(1+m)(k+1+m)!}{(1+m)(k+1)k!}$ $=\frac{-(m+1)!(k+1)!+(k+1)(k+1+m)!+(m+1)(k+1+m)!}{(m+1)(k+ 1)!}$$=\frac{-(m+1)!(k+1)!+(k+2+m)(k+1+m)!}{(m+1)(k+1)!}$ This completes the inductive step. Does this make sense? Hopefully you can follow what I did... P.S. I wonder if anyone knows of a shorter way!?!?!?! From looking at it, I think the best way would be to prove this by mathematical induction: First, show that this is true for N=1: $\frac{(1+m)!}{1!}=-\frac{m! \Gamma (2)+m\cdot m! \Gamma (2)-(m+2)!}{(1+m) \Gamma (2)}$ But $\Gamma(2)=1$, Thus $(1+m)!=-\frac{m! +(m+1)! -(m+2)!}{(1+m)}$ Now, $-\frac{m! +(m+1)! -(m+2)!}{(1+m)}=\frac{-1-m-1+m^2+3m+2}{m+1}m!=\frac{m^2+2m+1}{m+1}m!$$=\frac{(m+1)^2}{m+1}m!=(m+1)m!=(m+1)!$ Since we've shown this holds for N=1, Now we assume it holds for N=K The hard [and really messy part] is to show it holds for N=k+1. This means that $\sum _{n=1}^{k+1} \frac{(n+m)!}{n!}=-\frac{\Gamma (1+m) \Gamma (k+2)+m \Gamma (1+m) \Gamma (k+2)-\Gamma (k+3+m)}{(1+m) \Gamma (k+2)}$ Note that $\sum_{n=1}^{k+1}\frac{(n+m)!}{n!}=\sum_{n=1}^k\fra c{(n+m)!}{n!}+\frac{(k+1+m)!}{(k+1)!}$ So $\sum_{n=1}^k\frac{(n+m)!}{n!}$$+\frac{(k+1+m)!}{(k+1)!}=-\frac{\Gamma (1+m) \Gamma (k+2)+m \Gamma (1+m) \Gamma (k+2)-\Gamma (k+3+m)}{(1+m) \Gamma (k+2)}$ Knowing that $\Gamma(u)=(u-1)!$, we see that $\sum_{n=1}^k\frac{(n+m)!}{n!}$$+\frac{(k+1+m)!}{(k+1)!}=-\frac{m!(k+1)!+m!m(k+1)!-(k+2+m)!}{(1+m)(k+1)!}$ We can further simplify $-\frac{m!(k+1)!+m!m(k+1)!-(k+2+m)!}{(1+m)(k+1)!}=-\frac{(1+m)m!(k+1)!-(k+2+m)!}{(1+m)(k+1)!}$$=-\frac{(m+1)!(k+1)!-(k+2+m)!}{(1+m)(k+1)!}$ Since $\sum _{n=1}^k \frac{(n+m)!}{n!}=-\frac{\Gamma (1+m) \Gamma (1+k)+m \Gamma (1+m) \Gamma (1+k)-\Gamma (2+m+k)}{(1+m) \Gamma (1+k)}$$=-\frac{m!k!+m!k!m-(k+1+m)!}{(1+m) k!}$ Thus, we need to show that $-\frac{m!k!+m!k!m-(k+1+m)!}{(1+m) k!}+\frac{(k+1+m)!}{(k+1)!}$$=\color{red}-\frac{(m+1)!(k+1)!-(k+2+m)!}{(1+m)(k+1)!}$ Let's get the left side to look like the right side: $-\frac{m!k!+m!k!m-(k+1+m)!}{(1+m) k!}+\frac{(k+1+m)!}{(k+1)!}$$=-\frac{(1+m)m!k!-(k+1+m)!}{(1+m)(1+k) k!}(k+1)+\frac{(1+m)(k+1+m)!}{(1+m)(k+1)k!}$ $=\frac{-(m+1)!(k+1)!+(k+1)(k+1+m)!+(m+1)(k+1+m)!}{(m+1)(k+ 1)!}$$=\frac{-(m+1)!(k+1)!+(k+2+m)(k+1+m)!}{(m+1)(k+1)!}$ This completes the inductive step. Does this make sense? Hopefully you can follow what I did... P.S. I wonder if anyone knows of a shorter way!?!?!?! Thank you for the answer. I've solved it using induction myself. But what if the question is "express $\sum _{n=1}^N \frac{(n+m)!}{n!}$ in terms of N and m using the basic identity: $\binom{k-1}{\ell}=\binom{k}{\ell}-\binom{k-1}{\ell -1},$ we will have: $\sum_{n=1}^N \binom{n+m}{n}=\sum_{n=1}^N \left[\binom{n+m+1}{n}-\binom{n+m}{n-1} \right]. \ \ \ \ \ \ \ \ \ (1)$ now the sum in the right hand side of (1) is a nice telescoping sum. thus (1) gives us: $\sum_{n=1}^N \binom{n+m}{n}= \binom{N+m+1}{N} - 1. \ \ \ \ \ \ \ (2)$ finally multiplying both sides of (2) by $m!$ gives us: $\sum_{n=1}^N \frac{(n+m)!}{n!}=\left[\binom{N+m+1}{N}-1 \right]m!,$ which completes the proof because the nice looking right hand side of my identity is equal to the weird looking right hand side of your identity! $\Box$ November 4th 2008, 10:38 PM #2 November 4th 2008, 11:06 PM #3 Jun 2008 November 5th 2008, 01:10 AM #4 MHF Contributor May 2008
{"url":"http://mathhelpforum.com/calculus/57683-factorial-gamma-function.html","timestamp":"2014-04-18T19:01:43Z","content_type":null,"content_length":"59500","record_id":"<urn:uuid:23180125-d76b-4920-82ba-83fac6eb82fd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2000 [00011] [Date Index] [Thread Index] [Author Index] Re: solve f(x)=0, where f:Rn+1 -> Rn • To: mathgroup at smc.vnet.net • Subject: [mg26212] Re: [mg26189] solve f(x)=0, where f:Rn+1 -> Rn • From: "Carl K. Woll" <carlw at u.washington.edu> • Date: Sat, 2 Dec 2000 02:10:39 -0500 (EST) • References: <200012010302.WAA11186@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com A while back I put forth a function which I believe does exactly what you want. The function is called ImplicitSolve, and it has the usage message: ImplicitSolve[eqns, {x->x0,y->y0,...}, {x, xmin, xmax}, opts] finds a solution to the implicit equations eqns for the functions y, ... with the independent variable x in the range xmin to xmax. The root {x0,y0,...} should satisfy the equations, or should provide a good starting point for finding a solution when using FindRoot. Currently, the only available option is AccuracyGoal, but a better ImplicitSolve would include the possibility of supplying options for both the FindRoot and NDSolve function calls. I will give the function definition at the end of this message (modified slightly from my previous post). For your circle example, you would use ImplicitSolve in the following way: ImplicitSolve[{x^2 + y^2 == 1}, {x -> 0, y -> 1}, {x, -1, 1}, AccuracyGoal -> 10] and Mathematica would return an interpolating function for y as a function of x, with an error of about 10^-8. Carl Woll Physics Dept U of Washington Here is the definition of ImplicitSolve: (* options *) (* root *) (* check root *) (* get interpolating function *) ImplicitSolve::usage="ImplicitSolve[eqns, {x->x0,y->y0,...}, {x, xmin, xmax}, opts] finds a solution to the implicit equations eqns for the functions y, ... with the independent variable x in the range xmin to xmax. The root {x0,y0,...} should satisfy the equations, or should provide a good starting point for finding a solution when using FindRoot. Currently, the only available option is AccuracyGoal, but a better ImplicitSolve would include the possibility of supplying options for both the FindRoot and NDSolve function calls."; ImplicitSolve::badroot="Supplied root is missing value for `1`"; ImplicitSolve::incomplete="Supplied root is incomplete"; ImplicitSolve::inaccurate="Supplied root is inaccurate, using FindRoot to improve accuracy"; ----- Original Message ----- From: <Pavel.Pokorny at vscht.cz> To: mathgroup at smc.vnet.net Subject: [mg26212] [mg26189] solve f(x)=0, where f:Rn+1 -> Rn > Dear Mathematica friends > Is there a way in Mathematica 4.0 to solve (numerically) the problem > f(x) = 0 > where f: R^{n+1} -> R^n, > i.e. f has n+1 real arguments and n real results ? > The solution is (under certain conditions on f) > a curve in (n+1)-dim space. > Example > x^2 + y^2 - 1 = 0 > is a unit circle. > This problem is called "continuation" in nonlinear system analysis > see > Seydel: Tutorial on Continuation > Int.J.Bif.Chaos, Vol.1 No.1 (1991) pp 3-11. > -- > Pavel Pokorny > Math Dept, Prague Institute of Chemical Technology > http://staff.vscht.cz/mat/Pavel.Pokorny
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/Dec/msg00011.html","timestamp":"2014-04-17T04:18:03Z","content_type":null,"content_length":"38272","record_id":"<urn:uuid:a0083881-61b1-473f-8567-caa53642115c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Free GED Math Practice Tests Our completely free GED Math practice tests are the perfect way to brush up your skills. Take one of our many GED Math practice tests for a run-through of commonly asked questions. You will receive incredibly detailed scoring results at the end of your GED Math practice test to help you identify your strengths and weaknesses. Pick one of our GED Math practice tests now and begin! Practice Tests by Concept Practice Quizzes
{"url":"http://www.varsitytutors.com/ged_math-practice-tests","timestamp":"2014-04-20T16:08:25Z","content_type":null,"content_length":"157383","record_id":"<urn:uuid:f010034e-a477-440c-84d2-e2a8ffc2f6a2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Alternative methods to analyse the impact of HIV mutations on virological response to antiviral therapy Principal component analysis (PCA) and partial least square (PLS) regression may be useful to summarize the HIV genotypic information. Without pre-selection each mutation presented in at least one patient is considered with a different weight. We compared these two strategies with the construction of a usual genotypic score. We used data from the ANRS-CO3 Aquitaine Cohort Zephir sub-study. We used a subset of 87 patients with a complete baseline genotype and plasma HIV-1 RNA available at baseline and at week 12. PCA and PLS components were determined with all mutations that had prevalences >0. For the genotypic score, mutations were selected in two steps: 1) p-value < 0.01 in univariable analysis and prevalences between 10% and 90% and 2) backwards selection procedure based on the Cochran-Armitage Test. The predictive performances were compared by means of the cross-validated area under the receiver operating curve (AUC). Virological failure was observed in 46 (53%) patients at week 12. Principal components and PLS components showed a good performance for the prediction of virological response in HIV infected patients. The cross-validated AUCs for the PCA, PLS and genotypic score were 0.880, 0.868 and 0.863, respectively. The strength of the effect of each mutation could be considered through PCA and PLS components. In contrast, each selected mutation contributes with the same weight for the calculation of the genotypic score. Furthermore, PCA and PLS regression helped to describe mutation clusters (e.g. 10, 46, 90). In this dataset, PCA and PLS showed a good performance but their predictive ability was not clinically superior to that of the genotypic score. The development of HIV resistance mutations is one of the major problems for optimizing treatment of HIV-infected patients. Therefore, resistance testing before starting highly active antiretroviral therapy (HAART) or before switching to a new antiretroviral component is widely recommended [1-4] and now routinely implemented in industrialised countries. Resistance is due to mutations in the viral genome, e.g. mutations in the reverse transcriptase (RT), protease or integrase genes that cause resistance to nucleoside RT inhibitors (NRTIs) and non-nucleoside RT Inhibitors (NNRTIs), protease inhibitors (PIs), or integrase inhibitors, respectively. Genotypic and phenotypic resistance testing are the two commonly used tests. The impact of genotypic mutations on virological response in patients treated with a particular drug regimen are based on in vitro informations or on the virological response reported in patients who switched to that particular regimen. Before the initiation of an optimized treatment, a genotype of the main (major) patients' virus populations (only virus species present at >20–30% are detected and therefore analysed) is assessed. Statistical analyses aim at finding the baseline genotypic mutations associated with virological response in order to predict whether a patient who will switch to a similar regimen is resistant or not. Noteworthy, data are mostly analysed for the main drug of a given regimen only, i.e. NNRTI and/or PI. However, traditional statistical analyses of the association between genotypic mutations and virological response are hampered by i) the high number of potential mutations, ii) the correlations between mutations and iii) the low number of patients usually available for this type of study. Specifically, the analysis of the effect of high number of mutations measured in a limited number of patients may lead to over-fitting issues. Hence, inflated variances result in non-significant associations. In order to circumvent these problems and to simplify the interpretation, genotypic mutations are summarised in a so-called genotypic score. This score is the sum of observed resistance mutations at baseline for the given drug in a given patient. The mutations composing the score are selected by different strategies [5,6]. The drawbacks of this analysis are that a preselection of mutations is required and that every mutation has the same weighting. Alternative strategies such as principal component analysis (PCA) and partial least square (PLS) regression have been suggested for the sake of size reduction of correlated predictors [5,7-9] and may present advantages to improve the description of associations between mutations. The two techniques do not lead to a selection of mutations but to a different weighting of each mutation presented in the dataset. We aimed at comparing these two strategies with the usual construction of a genotypic score using data from an existing study evaluating the impact of protease mutations on the virological response in patients switching to a fosamprenavir/ritonavir-based HAART [10]. The Zephir study was designed to investigate the impact of baseline protease genotypic mutations in HIV-1 infected PI-experienced patients on virological response. All patients had baseline HIV-1 RNA levels >1.7 log[10 ]copies/mL and switched to a ritonavir-boosted fosamprenavir-based HAART [10]. Patients included were followed at the Bordeaux University hospital and at four other public hospitals in Aquitaine, south western France, all participating to the ANRS CO3 Aquitaine Cohort. We used a subset of 87 patients with a complete baseline genotype and plasma HIV-1 RNA available at baseline and at week 12. Virological failure was defined as a HIV-1 RNA ≥400 copies/mL and <1 log[10 ]copies/mL decrease of HIV-1 RNA between baseline and week 12 (virological success: HIV-1 RNA <400 copies/mL or ≥1 log[10 ]copies/mL reduction). A mutation was defined as a difference between the amino acid sequence of the studied virus and the wild type (HXB2) virus. In total, we created 69 dummy variables (69 mutations among the 99 possible protease mutations were encountered at least once). Statistical analysis Construction of a genotypic score The genotypic score was created in two steps. The first step considered mutations with prevalences ≥10% and ≤90% [5] to assess their association with virological failure. Mutations associated with a p-value ≥ 0.01 (univariable logistic regression) were selected. Second, the backwards procedure selected the combination with the strongest association with virological response [6]. These m selected mutations were used to calculate the first genotypic score for each patient. For instance, a first set contains the six mutations V32, I47, I50, V77, I84 and L90. The score is defined as S = I[V32 ]+ I[I47 ]+ I[I50 ]+ I[V77 ]+ I[I84 ]+ I[L90 ](S varying from 0 to 6). During the backwards selection procedure every mutation was removed one by one and all combinations of (m-1) mutations were investigated. The Cochran-Armitage test for linear trends in proportions was used to compare the probability of virological failure in patients having none to (m-1) mutations [11]. The combination providing the lowest p-value was kept and the procedure was repeated with all combinations of (m-2) mutations. The procedure stopped when removal of a mutation did not result in a lower p-value. We performed 200 bootstrap samples from the original data set to analyze the variability in mutations' selection. We assumed that variability in the selection of mutations due to the restricted sample size might essentially play a role in the first selection step. Therefore, a bootstrap analysis was performed only to the first selection criteria. In each sample the prevalence of each mutation was calculated. A univariable logistic regression was performed to determine the association of each mutation with virologic failure in each sample. Then we calculated the frequencies of selection of each mutation in the 200 bootstrap samples under the conditions mentioned above (prevalence between 10% and 90% and a p-value < 0.01 in univariable analysis). Principal component analysis (PCA) Each principal component is a linear combination of the original variables, with coefficients equal to the eigenvectors of the correlation or covariance matrix [7,9]. Principal components analysis determines components which are representing the variability of the mutations. The association between the principal components and the response variable was tested with the Wald test statistics of the estimated regression coefficient related to the principal components. We only tested principal components with an eigenvalue >2 reflecting that ≥3% of the variability of the mutations was explained. Any principal component was kept when it was related to the virological response using a logistic regression according to the Wald test. Partial least square (PLS) regression PLS regression is a technique widely used for dealing with numerous correlated explanatory variables [8,12]. PLS regression aims also at identifying components explaining as much as possible the variance of the predictor variables. These components are simultaneously correlated with the response variable. Over-fitting issues were controlled with a leave-one-out cross-validation during the construction process. The number of factors chosen is usually the one that minimizes the predicted residual sum of squares (PRESS) [13]. The probability of virological failure at week 12 was studied using a logistic regression model adjusted for either the genotypic score or the principal components or the PLS components as explanatory variables. The performance of each strategy was compared using the cross-validated AUC [7,8]. We used 5-fold cross-validation. We split the dataset in five equal parts. That way we selected five times a dataset with 1/5 of the patients as 'validation set' and the remaining 4/5 of the patients served as 'test set'. In the test set, we determined i) the genotypic score ii) the principal components and iii) the PLS components. The selected mutations were then used to calculate the genotypic score for the patients included in the validation set. The weights for each mutation derived by PCA and PLS were applied to calculate the score of the principal component and the PLS component respectively for the patients of the validation set. For each validation set the AUC under the ROC curve was calculated by means of a logistic regression for the three different methods. Thus, we obtained for each method 5 AUCs and the cross-validated AUC was calculated as the mean of these 5 AUCs. This approach allows to avoid over-fitting because the performance of the methods is tested in a subset of patients that were not used to determine the genotypic score and the weights of mutations in the PCA and PLS components. Statistical analyses were performed using SAS^® version 9.1 software (SAS Institute, Inc., Cary, NC). We used the procedures PROC PRINCOMP for principal component analysis and PROC PLS for partial least square regression. Principal components and PLS components were determined considering all mutations being present in at least one patient. Study population characteristics have been reported before [10]. We used a subset of 87 patients with a complete baseline genotype and plasma HIV-1 RNA available at baseline and at week 12. Virological failure was observed in 46 (53%) patients at week 12. Mutations at codon 63 had the highest prevalence in this population 80% followed by mutations at codons 10 (58%), 71 (51%), 46 (47%), 54 (47%), 37 (47%), 35 (41%), 82 (40%) and 90 (40%). Mutations at codons 11, 12, 13, 14, 15, 19, 20, 32, 33, 34, 36, 41, 43, 47, 55, 57, 60, 61, 62, 64, 69, 72, 73, 77, 84, 89 and 93 had prevalences between 10% and 40%. Mutations at codons 10, 46, 54, 82 and 90 showed the highest association with virological failure in univariable analysis (p < 10^-5). All patients with virological failure presented a mutation at codon 84. Genotypic score Among mutations occurring in more than 10% and less than 90% of the patients, 27, 18 and 11 mutations were selected according to p-value thresholds of < 0.25, < 0.05 and < 0.01, respectively. The backward selection procedure using the Cochrane Armitage trend test was started with the 11 mutations (10, 33, 36, 46, 54, 62, 71, 73, 82, 84, 90) selected with the most restrictive criteria (p < 0.01) to avoid computational issues. The stability of this selection step was checked on 200 bootstrap samples. Seven (10:100%, 46: 100%, 54: 100%, 71: 95.5%, 82: 97%, 84: 100%, 90: 96%) of the 11 mutations were selected in over 90% of the samples. The other four mutations were selected between 50% and 90% (33: 88%, 36: 68%, 62: 50%, 73: 68.5%). Mutations not included in the IAS list [14] were in general not selected in the bootstrap samples (exceptions: 19: 36.5%, 37: 19% and 41: 19%). This additional bootstrap analysis confirmed that mutations known to be associated with virological failure were chosen for further steps. Mutations (also known as polymorphisms) that also occur occasionally in untreated patients, thus generally without any relation to antiretroviral treatment, were chosen in less than 3% of the bootstrap samples. During the backward selection procedure the following six mutations 10, 36, 46, 62, 84, and 90 were selected for the calculation of a genotypic score. The genotypic score calculated with these six mutations was significantly associated with virological failure (OR = 4.1 for a difference of one mutation, CI[95% ][2.4; 7.0]; p < 10^-4; cross-validated OR = 4.9). Principal component analysis The first and second principal components explained 11% and 6% of mutations variability. Principal components accounted for a small variability overall. Therefore, their interpretation was difficult. The correlation of the mutations amongst them and to the principal components allowed identifying some clusters as for example mutations 10, 46 and 90 or mutations 32 and 47 already known to be associated together (figure 1). Figure 2 represents the relative weight of each mutation in the dataset to calculate the first principal component. The relative weight of each mutation to calculate the PCA 'score' ranged between 0% (e.g. mutation at codon 22) and 4.3% (e.g. mutations at codons 10 and 54). The sum of the relative weights of mutations represented in the IAS list was 70%, meaning that mutations of the IAS list contributed the most to calculate the first principal component. The mutations at the following six positions 10, 33, 46, 54, 82 and 90 contributed mostly to the first component (figure 2). Among others, mutations at positions 77, 88 and 30 contributed with a negative scoring coefficient to the first component, meaning that the presence of such mutation would decrease the value of the score. Medians of the first and the second principal component were -0.10 (IQR: -0.5–0.84) and 0 (IQR: -0.53–0.40), respectively. The first principal component was significantly associated with virological failure with an OR of 11.9 (CI[95% ][4.8; 29.7], p < 10^-4) for a difference of one unit whereas the second was not OR = 1.1 (CI[95% ]0.7; 1.7, p = 0.62). Figure 1. Mutations on the first and second principal components. All mutations having prevalences different from 0 are depicted. The wild type amino acid is cited before the codon of the mutation. Interpretation: PC1: First principal component (representing 11% of the variability), PC2: Second principal component (representing 6% of the variability). Mutations are represented by the component when they are close to the corresponding axis. When two mutations are far from the center, then, if they are: i) Close to each other, they are significantly positively correlated; ii) If they are in a rectangular position, they are not correlated; iii) If they are on the opposite side of the center, then they are negatively correlated. When the mutations are close to the center, it means that some information is carried on other axes. Figure 2. Relative weights of each mutation to calculate the 'score' of the first principal component. Black line: separation of mutations represented in the IAS list [14] and polymorphisms. Partial least Square One PLS component was chosen according to the PRESS criterion. This component explained 11% of the variability of the mutations and 60% of the variability of the response variable. The median of the first PLS component was -0.17 (IQR: -2.69–2.64). This PLS component was significantly associated with virological failure OR = 2.6 (CI[95%]1.8; 3.9 p < 10^-4). Figure 3 represents the relative weight of each mutation in the dataset to calculate the first PLS component. Mutations at positions 10, 46, 54, 82, 84, and 90 had the highest contribution to the calculation of the first component (figure 3). Negative weight for the calculation of the first PLS component was amongst others given by mutations 77, 30 and 48. Mutation at codon 69 contributed with the smallest relative weight (0.03%) and mutation at codon 10 with the highest (4.7%). The contribution of mutations included into the IAS list was 69% (i.e. the sum of relative weights). Thus, mutations already known to be associated with virological failure were given more weight than polymorphisms (mutations that also occur occasionally generally without association to antiretroviral treatment). Figure 3. Relative weights of each mutation to calculate the 'score' of the first PLS component. Black line: separation of mutations represented in the IAS list [14] and polymorphisms. We compared the results of the PCA and PLS with the results obtained using the classical strategy to build a genotypic score. Mutations 10, 46 and 90 were found among the six mutations contributing with the highest weight for the calculation of the first PC, the first PLS component and were selected for the genotypic score. Major mutations 54 and 82, which were found among the mutations with the highest association to virological failure in univariable analysis, were also found among the six mutations contributing with the highest weight for the calculation of the first PC and the first PLS component. In contrast, these two mutations were eliminated from the score during the backward selection procedure (figure 4). Therefore, one first advantage of methods based on PCA and PLS is that they helped in reducing the number of predictors without neglecting mutations that could play a significant role. Figure 4. Codons of mutations taken into consideration by the presented methods to predict virological failure(Codons at which polymorphisms occur are not depicted). The IAS mutation list shows all codons which have been described to be related with resistance to any of the protease inhibitors. Black boxes: Codons where major mutations occur. We compared the performance of these three methods with the area under the ROC curve. The cross-validated AUCs for the PCA, PLS and genotypic score were 0.880, 0.868 and 0.863, respectively. The model with the first principal component slightly outperformed the model with one PLS component. The predictive quality of the genotypic score was slightly lower than the two AUCs obtained for PCA and PLS but still showed a very good performance. To compare the methods in an illustrative way we used a patient presenting the following 21 protease gene mutations at baseline: mutations at positions 33, 54, 82, 90 defined as major, mutations at positions 10, 13, 20, 35, 36 43, 53, 60, 63, 64, 74 defined as minor and mutations at positions 14, 15, 19, 37, 67, 98 defined as polymorphisms. Virological failure was observed for this patient. The genotypic score was S = I[10]+I[36]+I[90 ]= 3 and the probability of virological failure was 77% using this score. The main difference between the genotypic score and the principal component value or the PLS component value is that with the latter methods we can take in consideration the fact that the patient has 21 protease gene mutations and give them different weights. For instance, the relative weights for mutations 10, 36, 90 were 4.4%, 2.2%, 4.1% and 4.7%, 2.4%, 4.4% for the PCA and PLS 'score', respectively (figure 2 and 3). The predicted probability of virological failure was 94% and 96% using the PC "score" and the PLS "score", respectively. We investigated PCA and PLS regression to analyse associations between baseline protease mutations and virological failure. PCA and PLS are easily applicable because they are implemented in standard statistical analyses programs such as SAS (SAS Institute, Inc., Cary, NC). We compared these two techniques with the construction of a genotypic score because they allow considering each mutation with a different weight. The objective of PCA is to find a set of new "latent variables" in form of a linear transformation of the original predictors. The properties of these latent variables are that they are uncorrelated and that they account for as much of the variance of the predictor variables as possible. PCA has been recently used to determine clusters of mutations in patients that were treated with at least one PI [15] and to predict the phenotypic fold change from genotypic information [16]. PLS regression reduces also a set of predictor variables to a set of uncorrelated "latent variables", the so-called PLS components. The main difference between the two techniques is that PLS also considers the strength of each mutation effect on the virological response to construct the components. Hence, these two methods can help solving the issues of the high number of predictors and their different effects. They may also help in describing the relationship between mutations by detecting potential groups of mutations. PLS was mentioned to be a useful analysing strategy for genotypic mutation data [5] but neither applications nor comparisons had been published yet. In this study population, these two methods were able to identify some mutations that were expected to contribute with higher weights to virologic failure (e.g. mutations at codons 10, 82 and 90 which contribute to resistance to at least 7 of the 8 currently used PIs [5]). Furthermore, known clusters of mutations could be described. Recent papers including co-variation analysis [15,17-19] found some correlated pairs and clusters which are associated with a specific treatment. Two of them used PCA to visualise correlations of mutations. We identified some clusters of mutations, e.g. mutations at codons 10, 46, and 90 and at codons 33, 46, 54 and 82, which were also found to be correlated with each other. Mutations 32 and 47 had the highest correlation coefficient (r = 0.78) in this population and are known to be key mutations for amprenavir [20] and lopinavir [14]. The cluster of mutations at positions 10, 46, 90 [19] and a high correlation between 32 and 47 were also determined by Wu et al and Kagan et al [19,21]. The mutations 10, 33, 46, 54, 71, 82, 84 and 90 are separated from all other mutations by the PCA and are contributing with the highest weight to calculate this component. The cluster 10, 46, 54, 71, 90 was recently described [17] to appear under lopinavir treatment and these mutations are also related to amprenavir-resistance [22]. We found that PCA had indeed detected this latter cluster in our patient's population previously treated by lopinavir or amprenavir (25% and 32% of the patients, respectively). Furthermore, the fact that the principal component was related to virological response highlights that PCA can detect mutation clusters on the way to lopinavir and fosamprenavir resistance although principal component analysis did not consider the virologic response for the construction of the component. As mentioned above, PLS searches latent variables but takes into account the response variable. Consequently one might expect differences for the distribution of the weights given by the mutations. Actually, the mutations found to contribute the highest weight on the PLS component are almost the same. Among the six mutations contributing with the highest weight, mutations at codons 10, 46, 54 82 and 90 were found for the principal component and the PLS component. Mutation at codon 33 was found on the principal component, while mutation 84 was found on the PLS component. In addition, the mutations which contributed with a higher weight for the calculation of the first principal and first PLS components are those which showed the highest association with virological response in univariable analysis. In conclusion, the weightings of the mutations found were consistent across these alternative strategies. A possible explanation is that the patients were mainly pre-treated with two PIs known to induce similar mutation patterns than fosamprenavir. In other cases, PLS might outperform PCA when a drug induces completely different mutations since the virological response is considered during the construction of the component. The above presented example (patient presenting 21 protease gene mutations) highlights the advantage of taking into account all mutations and giving them different weights by either PCA or PLS. This results in a better prediction of virological failure. After cross-validation the first principal component and the first PLS component only slightly outperformed the genotypic score in the prediction ability. However, it has to be stated that the cross-validated AUCs showed no clinical relevant difference. In this study population this might partly be explained by the fact that there was an explicit subset of mutations strongly associated with virological failure. This was also substantiated by the bootstrap analyses in which four of the six mutations remaining in the final genotypic score had been selected in over 95% of the bootstrap samples. This clear separation between mutations associated with virological failure from those which are not, could have facilitated the detection of a predictive subset using the classical strategy to construct a genotypic score. One of the reasons to apply PCA and PLS analyses to these kind of data was that these approaches do not need a pre-selection of variables (i.e. mutations) as they are summarized in predictors. Hence, all mutations can be considered even when they are present in a small proportion of patients. Among others, the attempt to study these approaches was to study whether considering all mutations has an advantage and if mutations known to be associated with virologic failure are given higher weights. However, the slightly better performance of the alternative approaches may be simply linked with the use of a larger amount of information. This was the minimum expected gain of these approaches compared to the usual one. Therefore, it would be very helpful to study the performance of PCA and PLS in other, potentially bigger, trials considering other antiretroviral regimen/patients. PCA and PLS regression were helpful in describing the association between mutations and to detect mutation clusters. PCA and PLS showed a good performance but their predictive ability was not clinically superior to that of the genotypic score. Aquitaine Cohort composition Scientific Committee: J. Beylot, M. Dupon, M. Longy-Boursier, J.L. Pellegrin, J.M. Ragnaud and R. Salamon (Chair). Scientific Coordination: M. Bruyand, G. Chêne, F. Dabis (Coordinator), S. Lawson-Ayayi, C. Lewden, R. Thiébaut. Medical Coordination: N. Bernard, M. Dupon, D. Lacoste, D. Malvy JF. Moreau, P. Mercié, P. Morlat, D. Neau, JL. Pellegrin, and JM. Ragnaud. Data Management and Statistical Analysis: E. Balestre, L. Dequae-Merchadou, V. Lavignolle-Aurillac. Technical Team: MJ. Blaizeau, M. Decoin, S. Delveaux, D. Dutoit, C. Hanappier, L. Houinou, S. Labarrère, G. Palmer, D. Touchard, and B. Uwamaliya. Participating Hospital Departments (participating physicians): Bordeaux University Hospitals: J. Beylot (N. Bernard, M. Bonarek, F. Bonnet, D. Lacoste, P. Morlat, and R. Vatan), P. Couzigou, H. Fleury (ME. Lafon, B. Masquelier, and I. Pellegrin), M. Dupon (H. Dutronc, F. Bocquentin, and S. Lafarie), J. L. Pellegrin (O. Caubet, E. Lazaro C. Nouts, and J. F. Viallard), M. Longy-Boursier (D. Malvy, P. Mercié, T. Pistonne and C. Receveur), J. F. Moreau (P. Blanco), J. M. Ragnaud (C. Cazorla, D. Chambon, C. De La Taille, D. Neau, and A. Ochoa); Dax Hospital: P. Loste (L. Caunègre); Bayonne Hospital: F. Bonnal (S. Farbos, and M. C. Gemain); Libourne Hospital: J. Ceccaldi (S. Tchamgoué); Mont-de-Marsan Hospital: S. de Witte. ANRS: Agence Nationale de Recherche sur le SIDA; AUC: Area under the receiver operating characteristics curve; CI: Confidence interval; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IAS: International AIDS society; IQR: Interquartiles range; NNRTI: Non-nucleoside reverse transcriptase inhibitor; NRTI: Nucleoside reverse transcriptase inhibitor; OR: Odds ratio; PC: Principal component; PCA: Principal component analysis; PLS: Partial least square; PRESS: Predicted residual sum of squares; RT: Reverse transcriptase. Authors' contributions LW carried out the statistical analysis and drafted the manuscript. RT and DC participated in the statistical analysis and helped to draft the manuscript. IP, DB, DN, DL, JLP, GC and FD performed the clinical trial and helped to draft the manuscript. All authors read and approved the final manuscript. 1. Gazzard B, Bernard AJ, Boffito M, Churchill D, Edwards S, Fisher N, Geretti AM, Johnson M, Leen C, Peters B, et al.: British HIV Association (BHIVA) guidelines for the treatment of HIV-infected adults with antiretroviral therapy (2006). HIV Med 2006, 7:487-503. PubMed Abstract | Publisher Full Text 2. Hammer SM, Saag MS, Schechter M, Montaner JS, Schooley RT, Jacobsen DM, Thompson MA, Carpenter CC, Fischl MA, Gazzard BG, et al.: Treatment for adult HIV infection: 2006 recommendations of the International AIDS Society-USA panel. Jama 2006, 296:827-843. PubMed Abstract | Publisher Full Text 3. Hirsch MS, Brun-Vezinet F, Clotet B, Conway B, Kuritzkes DR, D'Aquila RT, Demeter LM, Hammer SM, Johnson VA, Loveday C, et al.: Antiretroviral drug resistance testing in adults infected with human immunodeficiency virus type 1: 2003 recommendations of an International AIDS Society-USA Panel. Clin Infect Dis 2003, 37:113-128. PubMed Abstract | Publisher Full Text 4. Report 2006 under the direction of Patrick Yeni: Prise en charge médicale des personnes infectées par le VIH, recommandations du groupe d'experts. [http://www.sante.gouv.fr/htm/actu/yeni_sida/ rapport_experts_2006.pdf] webcite République française, Médecines-Sciences, Flammarion; 2006. (accessed 29 october 2008) 5. Brun-Vezinet F, Costagliola D, Khaled MA, Calvez V, Clavel F, Clotet B, Haubrich R, Kempf D, King M, Kuritzkes D, et al.: Clinically validated genotype analysis: guiding principles and statistical concerns. Antivir Ther 2004, 9:465-478. PubMed Abstract 6. Flandre P, Marcelin AG, Pavie J, Shmidely N, Wirden M, Lada O, Bernard MC, Molina JM, Calvez V: Comparison of tests and procedures to build clinically relevant genotypic scores: application to the Jaguar study. Antivir Ther 2005, 10:479-487. PubMed Abstract 7. Aguilera A, Escabias M, Valderrama M: Using principal components for estimating logistic regression with high-dimensional multicollinear data. Comput Stat Data Anal 2006, 50:1905-1924. Publisher Full Text 8. Bastien P, Esposito Vinzi V, Tenenhaus M: PLS generalised linear regression. Comput Stat Data Anal 2005, 48:17-46. Publisher Full Text 9. Massy W: Principal Components Regression in Exploratory Statistical Research. Journal of the American Statistical Association 1965, 60:234-256. Publisher Full Text 10. Pellegrin I, Breilh D, Coureau G, Boucher S, Neau D, Merel P, Lacoste D, Fleury H, Saux MC, Pellegrin JL, et al.: Interpretation of genotype and pharmacokinetics for resistance to fosamprenavir-ritonavir-based regimens in antiretroviral-experienced patients. Antimicrob Agents Chemother 2007, 51:1473-1480. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 11. Marcelin A-G, Masquelier B, Descamps D, Izopet J, Charpentier C, Alloui C, Bouvier-Alias M, Signori-Schmuck A, Montes B, Chaix M-L, et al.: Tipranavir-Ritonavir Genotypic Resistance Score in Protease Inhibitor-Experienced Patients. Antimicrob Agents Chemother 2008, 52:3237-3243. PubMed Abstract | Publisher Full Text 12. Tenenhaus M, Esposito Vinzi V: PLS regression, PLS path modeling and generalized Procrustean analysis: a combined approach for multiblock analysis. Journal of Chemometrics 2005, 19:145-153. Publisher Full Text 13. SAS, Institute, Inc: The PLS Procedure. [http://support.sas.com/onlinedoc/913/docMainpage.jsp] webcite SAS Online Doc SAS Online Doc. 9.1.3: SAS Institute Inc., Cary, NC, USA; 2007., 913 accessed 29 october 2008 14. Johnson VA, Brun-Vezinet F, Clotet B, Gunthard HF, Kuritzkes DR, Pillay D, Schapiro JM, Richman DD: Update of the Drug Resistance Mutations in HIV-1: 2007. Top HIV Med 2007, 15:119-125. PubMed Abstract | Publisher Full Text 15. Rhee SY, Liu TF, Holmes SP, Shafer RW: HIV-1 subtype B protease and reverse transcriptase amino acid covariation. PLoS Comput Biol 2007, 3:e87. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 16. Rabinowitz M, Myers L, Banjevic M, Chan A, Sweetkind-Singer J, Haberer J, McCann K, Wolkowicz R: Accurate prediction of HIV-1 drug response from the reverse transcriptase and protease amino acid sequences using sparse models created by convex optimization. Bioinformatics 2006, 22:541-549. PubMed Abstract | Publisher Full Text 17. Garriga C, Perez-Elias MJ, Delgado R, Ruiz L, Najera R, Pumarola T, Alonso-Socas Mdel M, Garcia-Bujalance S, Menendez-Arias L: Mutational patterns and correlated amino acid substitutions in the HIV-1 protease after virological failure to nelfinavir- and lopinavir/ritonavir-based treatments. J Med Virol 2007, 79:1617-1628. PubMed Abstract | Publisher Full Text 18. Hoffman NG, Schiffer CA, Swanstrom R: Covariation of amino acid positions in HIV-1 protease. Virology 2003, 314:536-548. PubMed Abstract | Publisher Full Text 19. Wu TD, Schiffer CA, Gonzales MJ, Taylor J, Kantor R, Chou S, Israelski D, Zolopa AR, Fessel WJ, Shafer RW: Mutation patterns and structural correlates in human immunodeficiency virus type 1 protease following different protease inhibitor treatments. J Virol 2003, 77:4836-4847. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 20. Maguire M, Shortino D, Klein A, Harris W, Manohitharajah V, Tisdale M, Elston R, Yeo J, Randall S, Xu F, et al.: Emergence of resistance to protease inhibitor amprenavir in human immunodeficiency virus type 1-infected patients: selection of four alternative viral protease genotypes and influence of viral susceptibility to coadministered reverse transcriptase nucleoside inhibitors. Antimicrob Agents Chemother 2002, 46:731-738. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 21. Kagan RM, Cheung PK, Huard TK, Lewinski MA: Increasing prevalence of HIV-1 protease inhibitor-associated mutations correlates with long-term non-suppressive protease inhibitor treatment. Antiviral Res 2006, 71:42-52. PubMed Abstract | Publisher Full Text 22. Johnson VA, Brun-Vezinet F, Clotet B, Conway B, D'Aquila RT, Demeter LM, Kuritzkes DR, Pillay D, Schapiro JM, Telenti A, Richman DD: Update of the drug resistance mutations in HIV-1: 2004. Top HIV Med 2004, 12:119-124. PubMed Abstract | Publisher Full Text Pre-publication history The pre-publication history for this paper can be accessed here: Sign up to receive new article alerts from BMC Medical Research Methodology
{"url":"http://www.biomedcentral.com/1471-2288/8/68","timestamp":"2014-04-20T14:18:48Z","content_type":null,"content_length":"120061","record_id":"<urn:uuid:92acb4de-3a76-443a-a75d-36844e51df4c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring The Yard In order to figure out how much sod you will need to buy you will need to measure the yard where you want to install the sod. To do this will you need to know how to figure out the area of basic shapes. If your yard is not a basic shape, like most, then break your yard up into basic shapes. Once your yard is broken up into basic shapes then you add the areas of the different sections to figure out your total area of your yard. You will also want to add about another 5% to your measurements for the unexpected. We made things a little easier by offering a way to figure out the areas with just your measurements. On the right you can figure the areas out for three basic shapes. If you would like to do it on your own then you can just look at the examples and the formulas for help. We even have a nice printer friendly version that you can print out and use. To figure out the square feet of a square, multiply the width by the length. Ex. 5 x 5 = 25 sq. ft. To figure out the square feet of a triangle, multiply the width by the length and then divide by 2. Ex. 5 x 10 = 50 50/2 = 25 sq. ft. To figure out the area of a circle with the radius, multiply the radius times itself and then multiply that by 3.14. Ex. 10 x 10 = 100 100 x 3.14 = 314 sq. ft. To figure out the area of a circle with the diameter. First divide the diameter by 2 and that will give you the radius. Then follow the directions from the example above.
{"url":"http://www.mckellipsod.com/area.php","timestamp":"2014-04-21T04:48:43Z","content_type":null,"content_length":"9576","record_id":"<urn:uuid:6283d76a-f8d4-43bb-990f-f76449acb5dc>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Transposition problem August 14th 2011, 11:47 AM Transposition problem Hello everyone. I have just started a new college course which involves an analyticalmethods module. We have just started transposition and have been given thefollowing: Transpose to find v I have racked by brain for a bit and came up with.. Is this right? not sure if I have it right, can anyone help out? Thanks in advance. August 14th 2011, 12:27 PM Re: Transposition problem It would be more clear if you used brackets. Is it: $F=\frac{u\cdot v}{u+v}$ or $F=\frac{u\cdot v}{u}+v$ August 14th 2011, 12:33 PM Re: Transposition problem I assume you mean F= uv/(u+v), not F= (uv/u)+ v since the latter would be just 2v. Transpose to find v I have racked by brain for a bit and came up with.. Is this right? not sure if I have it right, can anyone help out? Thanks in advance. No, that is not correct. If you multiply both sides by u+ v, you get F(u+ v)= uF+ vF= uv. Then uF= uv- vF= v(u- F). Can you finish it? August 14th 2011, 12:41 PM Re: Transposition problem August 14th 2011, 12:43 PM Re: Transposition problem No problem. HallsofIvy has given you a hint. Can you continue? ... August 14th 2011, 12:55 PM Re: Transposition problem Thanks guys, just trying to work through it now... Bit confused with the two sets of equal signs tho i.e F(u+ v)= uF+ vF= uv Unless I am reading it wrong... August 14th 2011, 01:00 PM Re: Transposition problem If we start with: Step 1: Multiply both sides with $u+v$ (like HallsofIvy said): Step 2: Work the brackets out: $Fu+Fv=uv \Leftrightarrow Fu=uv-Fv \Leftrightarrow Fu=v(u-F)\Leftrightarrow v=...$ Is this clear? ... August 14th 2011, 01:15 PM Re: Transposition problem Yes it is, thank you... Is it: v=Fu/(u-F) ? August 14th 2011, 01:35 PM Re: Transposition problem August 14th 2011, 01:48 PM Re: Transposition problem Thank you for your help (Siron/HallsofIvy), much appreciated. August 14th 2011, 01:49 PM Re: Transposition problem You're welcome!
{"url":"http://mathhelpforum.com/algebra/186133-transposition-problem-print.html","timestamp":"2014-04-21T10:04:41Z","content_type":null,"content_length":"13260","record_id":"<urn:uuid:4569d7e4-f73d-410d-967e-0715e61da822>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Farmers Branch, TX Trigonometry Tutor Find a Farmers Branch, TX Trigonometry Tutor ...My students have consistently improved their SAT scores by 200 points overall and raised their class grade by one letter grade. I try to make math fun for my students and work at a pace that the student can maintain. I will meet your student at you home or a nearby location of your choice. 15 Subjects: including trigonometry, chemistry, calculus, geometry ...I specialize in tutoring chemistry and physics for high school and first year college (Introduction to chemistry, General Chemistry 1 & 2, Organic chemistry 1 & 2, Physics 1 & 2) students. I also tutor high school math. I help students with their understanding of new and complicated concepts. 19 Subjects: including trigonometry, chemistry, physics, geometry ...I have enjoyed chemistry since I was in elementary school and received a chemistry set for my birthday. When I was finally able to take chemistry in high school, I took to it like a duck to water. I excelled in A. 82 Subjects: including trigonometry, chemistry, reading, English ...I am currently pursuing my PhD in Physics at the University of Texas at Dallas. I have over ten years experience in teaching and tutoring students in various topics. I enjoy teaching and I am passionate about it. 25 Subjects: including trigonometry, chemistry, physics, calculus ...I graduated high school with high honors. In college I completed 40 hours of mathematics to obtain a mathematics degree. I also completed 18 hours of graduate mathematics courses in pursuit of a Master degree in Mathematics Education. 17 Subjects: including trigonometry, calculus, statistics, geometry Related Farmers Branch, TX Tutors Farmers Branch, TX Accounting Tutors Farmers Branch, TX ACT Tutors Farmers Branch, TX Algebra Tutors Farmers Branch, TX Algebra 2 Tutors Farmers Branch, TX Calculus Tutors Farmers Branch, TX Geometry Tutors Farmers Branch, TX Math Tutors Farmers Branch, TX Prealgebra Tutors Farmers Branch, TX Precalculus Tutors Farmers Branch, TX SAT Tutors Farmers Branch, TX SAT Math Tutors Farmers Branch, TX Science Tutors Farmers Branch, TX Statistics Tutors Farmers Branch, TX Trigonometry Tutors Nearby Cities With trigonometry Tutor Addison, TX trigonometry Tutors Balch Springs, TX trigonometry Tutors Bedford, TX trigonometry Tutors Carrollton, TX trigonometry Tutors Coppell trigonometry Tutors Euless trigonometry Tutors Flower Mound trigonometry Tutors Grapevine, TX trigonometry Tutors Highland Park, TX trigonometry Tutors Hurst, TX trigonometry Tutors Irving, TX trigonometry Tutors Parker, TX trigonometry Tutors Richardson trigonometry Tutors The Colony trigonometry Tutors University Park, TX trigonometry Tutors
{"url":"http://www.purplemath.com/Farmers_Branch_TX_trigonometry_tutors.php","timestamp":"2014-04-16T05:04:36Z","content_type":null,"content_length":"24334","record_id":"<urn:uuid:22ed99a2-b95d-4587-9293-18b067ba8632>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
When does a polynomial have all pure imaginary roots? up vote 3 down vote favorite Let $P(x)=x^{n}+a_{1}x^{n-1}+\cdots+a_{n-1}x+a_{n}$, where $a_1, a_2, \dots, a_n$ are intergers. Question 1. When does the polynomial $P(x)$ have its zeros all being pure imaginary or zero(here 0 is a root of the given polynomial)? Question 2. Does there exist a characterization that $P(x)$ have its zeros all being pure imaginary or zero? Please point me to some references if this has already been studied. Thanks for your time! 3 are your coefficients $a_1,\dots,a_n$ assumed to be real? – Pietro Majer Feb 27 '11 at 9:26 mathoverflow.net/questions/20946/… – J.C. Ottem Feb 27 '11 at 9:53 Here's a hint for the degree 2 case: $P(x) = (x-ia)(x-ib)$. – Franz Lemmermeyer Feb 27 '11 at 10:33 @Pietro Majer, thanks! The coefficients a_i,(i=1,\dots,n) are all intergers – Shunyi Liu Feb 27 '11 at 12:39 @J.C. Ottem, thanks! – Shunyi Liu Feb 27 '11 at 13:37 show 2 more comments closed as too localized by Franz Lemmermeyer, Qiaochu Yuan, Andres Caicedo, Gjergji Zaimi, Dmitri Pavlov Feb 27 '11 at 19:23 This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center.If this question can be reworded to fit the rules in the help center , please edit the question. 4 Answers active oldest votes A necessary and sufficient condition is that $P(x)$ is a power of $x$ times a product of terms $x^2+c$ with $c$ real and positive. Hence $P(x)$ is $x^{n-2m}Q(x^2)$ where $m\ge0$ and $Q (t)$ is a unitary polynomial in $t$ of degree $m$ with integer nonnegative coefficients. Finally, a necessary condition is that $a_{k}=0$ for every odd $k$ and one should further require the underlying polynomial $Q$ to be a product of $t+c_i$. EDIT It appears that the problem of locating the zeroes of a polynomial resurfaces regularly on MO, see for example this question or that one or that one. The answers to these provide the following facts. First, there are Newton's inequalities. For any real numbers $(t_i)_{1\le i\le m}$, their elementary symmetric means $S_k$ are such that, for every $k$, $S_k^2\ge S_{k-1}S_{k+1}$. Here $S_k=\sigma_k/{m\choose k}$ where $\sigma_k$ denotes the $k$th elementary function in the real numbers $(t_i)_{1\le i\le m}$, see here. So, write $Q$ as $Q(t)=t^m+b_1t^{m-1}+\cdots+b_m$ (and recall that $b_k=a_{2k}$). Any $Q$ under consideration is such that $b_k$ must be nonnegative for every $k$ and Newton's inequalities indicate that supplementary necessary condition are $S_k^2\ge S_{k-1}S_{k+1}$ for every $k$, where the $S_k$ are based on the $\sigma_k=b_k$. (At first sight, one should up vote 8 choose $\sigma_k=(-1)^kb_k$ but the $(-1)^k$ disappear.) Hence, a necessary condition is that, for every $k$, $$ k(m-k)b_k^2\ge (k+1)(m-k+1)b_{k-1}b_{k+1}. $$ Second, a complete down vote characterization of the polynomials $Q$ with only real negative roots is based on the notion of Hermite forms, see this answer. Recall that the Hermite form of a polynomial $Q$ of degree $m$ is a symmetric matrix, usually denoted by $H_1(Q)$, of size $m\times m$ with entries $(h_{ij}(Q))$, defined by $$ h_{ij}(Q) =s_{i+j-2}(Q), $$ where, for every $k$, $s_k(Q)$ is the sum of the $k$th powers of the roots of $Q$ (and $s_0(Q)=m$), see these lecture notes. Recall that the $s_k(Q)$ are well known functions of the elementary symmetric functions $\sigma_k=(-1)^kb_k$, see here. Recall also that the signature of a symmetric matrix is equal to the number of its positive eigenvalues minus the number of its negative eigenvalues. Then the signature of $H_1(Q)$ is equal to the number of real roots of $Q$. In the context of this question, one already knows that $Q$ has no positive real root because $b_k$ is nonnegative for every $k$ and one wants $Q$ to have $m$ real roots. Hence the signature of $H_1(Q)$ must be $m$, that is, $H_1(Q)$ must be positive definite. Finally, necessary and sufficient conditions are that the odd numbered $a_k$ are zero and that the even numbered $a_k$ are nonnegative and define a polynomial $Q$ such that the symmetric matrix $H_1(Q)$ is positive definite. @Didier Piau, thanks for your reply! Unfortunately I am mainly interested in what conditions of the coefficients $a_{i}$ are satisfied to leave $P(x)$ have its roots pure imaginary or o. – Shunyi Liu Feb 27 '11 at 12:58 @Shunyi Liu: Are you asking for a condition on the coefficients of $Q$ (the $Q$ in my post) equivalent to the fact that all the roots of $Q$ are real negative? – Did Feb 27 '11 at Why is invertibility of $H_1(Q)$ equivalent to saying the signature is $m$? It looks like the condition to check was that $H_1(Q)$ is positive definite. – Douglas Zare Feb 27 '11 at @Douglas Let $A$ be symmetric of size $m\times m$. Then: $A$ is positive definite iff every eigenvalue of $A$ is positive iff the number of positive eigenvalues of $A$ is $m$ iff the signature of $A$ is $m$. Or am I missing something? – Did Feb 28 '11 at 7:42 @Didier Piau, thank you very much for your kind help. It is easy to see that the final conditons you obtained are necessary. But how to show that these conditions are also sufficient? Besides, whether $Q$ has only real negative roots is equivelent to $\det(H_{1}(Q))\ne0$? – Shunyi Liu Feb 28 '11 at 10:17 show 2 more comments The general study of connections between the coefficients of a polynomial, the locations of its roots, the roots of its derivative, et cetera, is called the Geometry of Zeros. There are books, I believe, with this title. Google turned up this survey. up vote 3 down As with most things, your exact question is likely not in print anywhere (I like Didier's answer), but there is an active and beautiful field with powerful techniques that are worth vote study. @Kevin O'Bryant, thanks for the interesting reference! – Shunyi Liu Feb 28 '11 at 9:57 add comment This should be a comment but I don't have enough reputation to leave one. First of all, $0$ is a root of $P(x)$ if and only if the independent term $a_n$ is null. Then, you can characterize the coefficients in terms of elementary symmetric polynomials and their degree: $a_i = (-1)^{n-i} \sigma_n$, where $\sigma_n = \sum_{1 \le j_1 < j_2 < \cdots < j_n \le n} r_{j_1}r_{j_2}\ up vote dotsm r_{j_n}$, and $r_1, \dotsc , r_n$ are the roots of the polynomial. From this you can see that a necessary condition for $r_1, \dotsc , r_n$ to be imaginary is that $a_k$ is of the form 2 down $c_ki$ if $k \equiv n \pmod 4$, $c_k$ if $k \equiv n-1 \pmod 4$, $-c_ki$ if $k \equiv n-2 \pmod 4$, and $-c_k$ if $k \equiv n-3 \pmod 4$, where $c_k$ is a real number for all $k \in \mathbb vote {N}$. @Abel, thanks very much for your reply! I think it is easy to find some necessary conditions for $r_{1},\dots, r_{n}$ to be imaginary. For example, $a_{k}=0$ for all odd numbers k. However, it is hard to find a necessary and sufficient condition for $r_{1},\dots, r_{n}$ to be imaginary according the relations between the coefficients of the polynomial. – Shunyi Liu Feb 27 '11 at 13:24 add comment There is a method due to Sturm which allows the determination of the number of roots of a polynomial in any real interval, in particular you may apply this to find the number of negative up vote 2 roots. The following Wikipedia article may serve as an initial reference: http://en.wikipedia.org/wiki/Sturm%27s_theorem down vote @Michael Renardy, thanks for your kind reply. – Shunyi Liu Feb 28 '11 at 9:58 add comment Not the answer you're looking for? Browse other questions tagged polynomials or ask your own question.
{"url":"http://mathoverflow.net/questions/56802/when-does-a-polynomial-have-all-pure-imaginary-roots/56835","timestamp":"2014-04-17T04:35:36Z","content_type":null,"content_length":"73490","record_id":"<urn:uuid:9f5fa630-0ed3-4355-9c57-f1e7f1975871>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Axelsson, Roland (2010): Verification of Non-Regular Program Properties. Dissertation, LMU München: Faculty of Mathematics, Computer Science and Statistics Metadaten exportieren Autor recherchieren in Most temporal logics which have been introduced and studied in the past decades can be embedded into the modal mu-calculus. This is the case for e.g. PDL, CTL, CTL*, ECTL, LTL, etc. and entails that these logics cannot express non-regular program properties. In recent years, some novel approaches towards an increase in expressive power have been made: Fixpoint Logic with Chop enriches the mu-calculus with a sequential composition operator and thereby allows to characterise context-free processes. The Modal Iteration Calculus uses inflationary fixpoints to exceed the expressive power of the mu-calculus. Higher-Order Fixpoint Logic (HFL) incorporates a simply typed lambda-calculus into a setting with extremal fixpoint operators and even exceeds the expressive power of Fixpoint Logic with Chop. But also PDL has been equipped with context-free programs instead of regular ones. In terms of expressivity there is a natural demand for richer frameworks since program property specifications are simply not limited to the regular sphere. Expressivity however usually comes at the price of an increased computational complexity of logic-related decision problems. For instance are the satisfiability problems for the above mentioned logics undecidable. We investigate in this work the model checking problem of three different logics which are capable of expressing non-regular program properties and aim at identifying fragments with feasible model checking complexity. Firstly, we develop a generic method for determining the complexity of model checking PDL over arbitrary classes of programs and show that the border to undecidability runs between PDL over indexed languages and PDL over context-sensitive languages. It is however still in PTIME for PDL over linear indexed languages and in EXPTIME for PDL over indexed languages. We present concrete algorithms which allow implementations of model checkers for these two fragments. We then introduce an extension of CTL in which the UNTIL- and RELEASE- operators are adorned with formal languages. These are interpreted over labeled paths and restrict the moments on such a path at which the operators are satisfied. The UNTIL-operator is for instance satisfied if some path prefix forms a word in the language it is adorned with (besides the usual requirement that until that moment some property has to hold and at that very moment some other property must hold). Again, we determine the computational complexities of the model checking problems for varying classes of allowed languages in either operator. It turns out that either enabling context-sensitive languages in the UNTIL or context-free languages in the RELEASE- operator renders the model checking problem undecidable while it is EXPTIME-complete for indexed languages in the UNTIL and visibly pushdown languages in the RELEASE- operator. PTIME-completeness is a result of allowing linear indexed languages in the UNTIL and deterministic context-free languages in the RELEASE. We do also give concrete model checking algorithms for several interesting fragments of these logics. Finally, we turn our attention to the model checking problem of HFL which we have already studied in previous works. On finite state models it is k-EXPTIME-complete for HFL(k), the fragment of HFL obtained by restricting functions in the lambda-calculus to order k. Novel in this work is however the generalisation (from the first-order case to the case for functions of arbitrary order) of an idea to improve the best and average case behaviour of a model checking algorithm by using partial functions during the fixpoint iteration guided by the neededness of arguments. This is possible, because the semantics of a closed HFL formula is not a total function but the value of a function at some argument. Again, we give a concrete algorithm for such an improved model checker and argue that despite the very high model checking complexity this improvement is very useful in practice and gives feasible results for HFL with lower order fuctions, backed up by a statistical analysis of the number of needed arguments on a concrete example. Furthermore, we show how HFL can be used as a tool for the development of algorithms. Its high expressivity allows to encode a wide variety of problems as instances of model checking already in the first-order fragment. The rather unintuitive -- yet very succinct -- problem encoding together with an analysis of the behaviour of the above sketched optimisation may give deep insights into the problem. We demonstrate this on the example of the universality problem for nondeterministic finite automata, where a slight variation of the optimised model checking algorithm yields one of the best known methods so far which was only discovered recently. We do also investigate typical model-theoretic properties for each of these logics and compare them with respect to expressive power. Item Type: Thesis (Dissertation, LMU Munich) Keywords: model-checking, program verification, non-regular logic, pdl, ctl, hfl Subjects: 600 Natural sciences and mathematics > 510 Mathematics 600 Natural sciences and mathematics Faculties: Faculty of Mathematics, Computer Science and Statistics Language: English Date Accepted: 25. June 2010 1. Referee: Lange, Martin Persistent Identifier (URN): urn:nbn:de:bvb:19-116775 MD5 Checksum of the PDF-file: 7500acce9b432b6443f1f02f1565f7cb Signature of the printed copy: 0001/UMC 18811 ID Code: 11677 Deposited On: 31. Aug 2010 12:55 Last Modified: 16. Oct 2012 08:39
{"url":"http://edoc.ub.uni-muenchen.de/11677/","timestamp":"2014-04-20T08:23:04Z","content_type":null,"content_length":"34242","record_id":"<urn:uuid:79eb6851-6a94-4b9b-b692-68c72fc2cdca>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
what are you doing for pi day? - ProTeacher Community now I feel guilty because I wasn't planning on doing anything since 3.14 is on Saturday this year. 1. My students had to earn slices (slivers, really) of pie by passing off all their circle formulas at centers. As they finished they colored in a 15 degree "slice of pie." We had already listed the types of pies they would bring and decided on a color coding system, i.e. green for key lime, orange for pumpkin, etc. This also made it easier to serve the pie on PI day. They had to compute the area for the amount of pi they earned in order to get a plate and a fork. They loved it!! Plus, they really remembered the formula for end of year testing in May! 2. We made PI necklaces and held on the spot challenges to see who could repeat the most digits correctly. Geometry Value Of Pi (π) Count the number of letters in each word: Pi (π) to 7 decimal places: (Word lengths are digits) May I have a large container of coffee? Pi (π) to 10 decimal places: (Word lengths are digits) May I have a large container of coffee ready for today? Pi (π) to 12 decimal places: (Word lengths are digits) See, I have a rhyme assisting my feeble brain, its tasks oft-times resisting. Pi (π) to 30 decimal places: (Word lengths are digits) Now I, even I, would celebrate In rhymes unapt, the great Immortal Syracusan, rivaled nevermore, Who in his wondrous lore, Passed on before, Left men his guidance How to circles mensurate. and another one also to 30 decimal places (written by Michael Shapiro) Now I will a rhyme construct By chosen words the young instruct. Cunningly devised endeavor, Con it and remember ever. Widths of circle here you see. Sketched out in strange obscurity. 3.1415926535897932384626433832 795............ 3. We did the EdHelper units on Sir Cumference and the Knights of the Round Table and Sir Cumference and the Dragon of Pi. EdHelper is a subscription service, but there are tons of PI Day stuff 4. Students brought in cans and jar of varying sizes and measured them with string. See page 3 in this link: 5. Memorized Mnemonics for Circle Formulas. Find the Area and Circumference of a Circle. Tweedle-dee-dum and Tweedle-dee-dee, Around the circle is pi times d, But if the area is declared, Think of the formula π "r" squared. ***Around the circle" is the circumference. Circumference = π × d (diameter). Area = π × r(radius) squared. Area Of A Circle Apple pie are square: A = π × r2 Apple pie are round: A = π × r × r Circumference Of A Circle Cherry pie delicious!: C = π × d I'd love to hear what you are doing, as well.
{"url":"http://www.proteacher.net/discussions/showthread.php?t=144828","timestamp":"2014-04-19T01:59:56Z","content_type":null,"content_length":"82626","record_id":"<urn:uuid:29a30258-941a-498b-ad46-fdf00a9fdd0b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Prime Numbers and Computer Methods for Factorization, Birkhäuser Results 1 - 10 of 22 - Australian Computer Science Communications , 1986 "... Lenstra’s integer factorization algorithm is asymptotically one of the fastest known algorithms, and is also ideally suited for parallel computation. We suggest a way in which the algorithm can be speeded up by the addition of a second phase. Under some plausible assumptions, the speedup is of order ..." Cited by 47 (13 self) Add to MetaCart Lenstra’s integer factorization algorithm is asymptotically one of the fastest known algorithms, and is also ideally suited for parallel computation. We suggest a way in which the algorithm can be speeded up by the addition of a second phase. Under some plausible assumptions, the speedup is of order log(p), where p is the factor which is found. In practice the speedup is significant. We mention some refinements which give greater speedup, an alternative way of implementing a second phase, and the connection with Pollard’s “p − 1” factorization algorithm. 1 , 2000 "... The generation of prime numbers underlies the use of most public-key schemes, essentially as a major primitive needed for the creation of key pairs or as a computation stage appearing during various cryptographic setups. Surprisingly, despite decades of intense mathematical studies on primality test ..." Cited by 12 (4 self) Add to MetaCart The generation of prime numbers underlies the use of most public-key schemes, essentially as a major primitive needed for the creation of key pairs or as a computation stage appearing during various cryptographic setups. Surprisingly, despite decades of intense mathematical studies on primality testing and an observed progressive intensification of cryptographic usages, prime number generation algorithms remain scarcely investigated and most real-life implementations are of rather poor performance. Common generators typically output a n-bit prime in heuristic average complexity O(n^4) or O (n^4/log n) and these figures, according to experience, seem impossible to improve significantly: this paper rather shows a simple way to substantially reduce the value of hidden constants to provide much more efficient prime generation algorithms. We apply our... - Publ. Inst. Math. (N.S , 1995 "... Dedicated to the memory of Prof.-Duro Kurepa ..." - Master's thesis, ECE Dept., Worcester Polytechnic Institute , 1996 "... The recent developments in the study of elliptic curve public-key algorithms have shown that they could play a major factor in the design of cryptosystems of the future. This thesis describes efficient algorithms for two important aspects of such systems. The first part describes a structured approa ..." Cited by 5 (0 self) Add to MetaCart The recent developments in the study of elliptic curve public-key algorithms have shown that they could play a major factor in the design of cryptosystems of the future. This thesis describes efficient algorithms for two important aspects of such systems. The first part describes a structured approach for finding cryptographically secure curves. A comprehensive lists of elliptic curves over subfields GF (2 n ), n = 8; 9; : : : 18, was generated, which are cryptographically secure over GF ((2 n ) m ), n \Delta m = 150; : : : ; 200. The second part describes efficient algorithms for fast software implementations of elliptic curve computations which can be used in a variety of public-key protocols. These algorithms, which perform group operations over nonsupersingular elliptic curves, are optimized through the use of composite Galois fields of the form GF ((2 n ) m ). An elliptic curve key-exchange protocol over the composite field GF ((2 16 ) 11 ) was implemented using - Math. Comp , 1996 "... Abstract. If P is a prime and 2P+1 is also prime, then P is a Sophie Germain prime. In this article several new Sophie Germain primes are reported, which are the largest known at this time. The search method and the expected search times are discussed. 1. ..." Cited by 4 (1 self) Add to MetaCart Abstract. If P is a prime and 2P+1 is also prime, then P is a Sophie Germain prime. In this article several new Sophie Germain primes are reported, which are the largest known at this time. The search method and the expected search times are discussed. 1. , 2007 "... We present a detailed analysis of SQUFOF, Daniel Shanks’ Square Form Factorization algorithm. We give the average time and space requirements for SQUFOF. We analyze the effect of multipliers, either used for a single factorization or when racing the algorithm in parallel. ..." Cited by 4 (0 self) Add to MetaCart We present a detailed analysis of SQUFOF, Daniel Shanks’ Square Form Factorization algorithm. We give the average time and space requirements for SQUFOF. We analyze the effect of multipliers, either used for a single factorization or when racing the algorithm in parallel. - Proceedings of CHES 2006, LNCS 4249 , 2006 "... Abstract. The generation of prime numbers underlies the use of most public-key cryptosystems, essentially as a primitive needed for the creation of RSA key pairs. Surprisingly enough, despite decades of intense mathematical studies on primality testing and an observed progressive intensification of ..." Cited by 3 (1 self) Add to MetaCart Abstract. The generation of prime numbers underlies the use of most public-key cryptosystems, essentially as a primitive needed for the creation of RSA key pairs. Surprisingly enough, despite decades of intense mathematical studies on primality testing and an observed progressive intensification of cryptography, prime number generation algorithms remain scarcely investigated and most real-life implementations are of dramatically poor performance. We show simple techniques that substantially improve all algorithms previously suggested or extend their capabilities. We derive fast implementations on appropriately equipped portable devices like smart-cards embedding a cryptographic coprocessor. This allows onboard generation of RSA keys featuring a very attractive (average) processing time. Our motivation here is to help transferring this task from terminals where this operation usually took place so far, to portable devices themselves in near future for more confidence, security, and compliance with networkscaled distributed protocols such as electronic cash or mobile commerce. "... À Henri Cohen pour son soixantième anniversaire. Let Sn denote the symmetric group with n letters, and g(n) the maximal order of an element of Sn. If the standard factorization of M into primes is M = q α1 1 qα2 2... q αk k, we define ℓ(M) to be qα1 1 + qα2 2 +... + qα k k; one century ago, E. Landa ..." Cited by 2 (0 self) Add to MetaCart À Henri Cohen pour son soixantième anniversaire. Let Sn denote the symmetric group with n letters, and g(n) the maximal order of an element of Sn. If the standard factorization of M into primes is M = q α1 1 qα2 2... q αk k, we define ℓ(M) to be qα1 1 + qα2 2 +... + qα k k; one century ago, E. Landau proved that g(n) = maxℓ(M)≤n M and that, when n goes to infinity, log g(n) ∼ p nlog(n). There exists a basic algorithm to compute g(n) for 1 ≤ n ≤ N; its running time is O N 3/2 / √ ” log N and the needed memory is O(N); it allows computing g(n) up to, say, one million. We describe an algorithm to calculate g(n) for n up to 10 15. The main idea is to use the so-called ℓ-superchampion numbers. Similar numbers, the superior highly composite numbers, were introduced by S. Ramanujan to study large values of the divisor function τ(n) = P d | n 1. Key words: arithmetical function, symmetric group, maximal order, highly "... Cryptology has advanced tremendously since 1976; this chapter provides a brief overview of the current state-of-the-art in the field. Several major themes predominate in the development. One such theme is the careful elaboration of the definition of security for a cryptosystem. A second theme has be ..." Cited by 1 (0 self) Add to MetaCart Cryptology has advanced tremendously since 1976; this chapter provides a brief overview of the current state-of-the-art in the field. Several major themes predominate in the development. One such theme is the careful elaboration of the definition of security for a cryptosystem. A second theme has been the search for provably secure cryptosystems, based on plausible assumptions about the difficulty of specific number-theoretic problems or on the existence of certain kinds of functions (such as one-way functions). A third theme is the invention of many novel and surprising cryptographic capabilities, such as public-key cryptography, digital signatures, secret-sharing, oblivious transfers, and zero-knowledge proofs. These themes have been developed and interwoven so that today theorems of breathtaking generality and power assert the existence of cryptographic techniques capable of solving almost any imaginable cryptographic problem. "... This work reports on a graduate students' project on parallel computing in cryptoanalysis. Major hardware- and softwaretypes have been used to implement basic cryptoanalytic algorithms. 1 Introduction In this work we report experiences made within a graduate students' project performed at the depar ..." Add to MetaCart This work reports on a graduate students' project on parallel computing in cryptoanalysis. Major hardware- and softwaretypes have been used to implement basic cryptoanalytic algorithms. 1 Introduction In this work we report experiences made within a graduate students' project performed at the department of Computer Science and System Analysis (Univ. Salzburg). The topic of the project was "Parallel Computing in Cryptoanalysis". The security of most of the public key cryptosystems known today relies on computationally infeasible problems in computational number theory (e.g. RSA -- factoring of large integers, ElGamal -- calculating discrete logarithms in a finite field; for more examples see [10]). The goal of this project was to exploit to power of parallel and distributed computing in order to perform the necessary computations to break such cryptosystems in reasonable time. Since the projects' underlying course was not theory-focused we had to choose simple algorithms to be parallel...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=550955","timestamp":"2014-04-17T23:07:04Z","content_type":null,"content_length":"36297","record_id":"<urn:uuid:9f12729f-9ec8-4b17-9c70-3857fd5cfa68>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
A Multiplicative Noise Removal Approach Based on Partial Differential Equation Model Mathematical Problems in Engineering Volume 2012 (2012), Article ID 242043, 14 pages Research Article A Multiplicative Noise Removal Approach Based on Partial Differential Equation Model College of Mathematics and Computational Science, Shenzhen University, Shenzhen 518060, China Received 26 February 2012; Accepted 23 March 2012 Academic Editor: Ming Li Copyright © 2012 Bo Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Multiplicative noise, also known as speckle noise, is signal dependent and difficult to remove. Based on a fourth-order PDE model, this paper proposes a novel approach to remove the multiplicative noise on images. In practice, Fourier transform and logarithm strategy are utilized on the noisy image to convert the convolutional noise into additive noise, so that the noise can be removed by using the traditional additive noise removal algorithm in frequency domain. For noise removal, a new fourth-order PDE model is developed, which avoids the blocky effects produced by second-order PDE model and attains better edge-preserve ability. The performance of the proposed method has been evaluated on the images with both additive and multiplicative noise. Compared with some traditional methods, experimental results show that the proposed method obtains superior performance on different PSNR values and visual quality. 1. Introduction Image denoising plays an important role in the areas of image processing. A real recorded image may be distorted by many expected or unexpected random factors, of which random noise is an unavoidable one [1, 2]. The objective of image denoising or filtering is to recover the true image from the noisy one. One of the challenges during the denoising process is to preserve and enhance the important features. For images, edge is one of the most universal and crucial features. Denoising via linear filters normally does not give satisfactory performance since both noises and edges contain high frequencies. Therefore, some nonlinear filters [3–18] have been proposed. Median filter [1] is one of the classical examples. Wavelet-based image filters [19–22] are developing quickly. PDE-based nonlinear diffusion filters [23–26] also make a hit on image denoising. One of PDE-based methods is the famous total variation model (TVM) [27–35]. TVM has been improved in theory and algorithm continuously. Recently, Kim [23] proposed a model called (ABO)-model by hybridizing a nonconvex variant of the TVM, the motion by mean curvature (MMC) [29], and Perona-Malik model [4] to deal with the mixture of the impulse and Gaussian noises reliably. In [23], they apply the essentially nondissipative difference (ENoD) schemes [5, 6] for the MMC component to eliminate the impulse noise with a minimum (ideally no) introduction of dissipation. Many denoising methods are also employed in medical image processing [36–39]. Due to the coherent nature of some complicated image acquisition processes, such as ultrasound imaging, synthetic aperture radar (SAR) and sonar (SAS), and laser imaging, the standard additive noise model, so prevalent in image processing, is inadequate. Instead, multiplicative noise models, that is, in which the noise field is multiplied by (not added to) the original image, provide an accurate description of coherent imaging systems [40–42]. Multiplicative noise is naturally dependent on the image data. Various adaptive filters [43, 44] for multiplicative noise removal have been proposed. Experiments have shown that filtering methods work well when the multiplicative noise is weak. In this paper, a new fourth-order PDE model is introduced by improving the original fourth-order PDE model [24] in order to get high fidelity of the denoised images. To solve the model efficiently and reliably, we suggest a simple and symmetrical difference schemes. Median filter is exploited to alleviate the speckle effects in the processed image. At the same time, a new multiplicative noise removal algorithm based on fourth-order PDE model is proposed for the restoration of noisy image. To apply the proposed model for removal of multiplicative noise, the Fourier transform is used to change convolution into product; meanwhile, the logarithmic transformation is used to convert multiplicative noise into additive one. Experimental results show that the proposed method gets nice result in restoring images, especially in edge preservation and enhancement. The rest of this paper is organized as follows. In Section 2 we investigate a general model of multiplicative noise. Total variation model and its discretization are introduced in Section 3. In order to avoid the blocky effects of second-order PDE model and preserve edges, a new fourth-order PDE denoising model is proposed in Section 4. Section 5 is devoted to a study of multiplicative noise removal method, and an algorithm based on fourth-order PDE model is developed. Numerical results are presented in Section 6. We summarize our conclusions in Section 7. 2. Multiplicative Noise Model Noise removal or reduction is very important in image processing community. The objection of image denoising or filtering is to recover the true image from a noisy one. There are different noise types in real world. Multiplicative noise is common beside additive noise. Quality of images may degenerate while images’ obtaining, transferring, and storage. The movement of objects, the defects of the imaging system, the noise of the inherent record equipment, and external disturbance also cause the image noise. Under the assumption that imaging system is linear translation invariance system, we can use the following degradation model to describe the multiplicative noise images: where is the ideal image, is the noised image, denotes the additive noise with mean 0 and variance , * denotes convolution operation, denotes the point spread function (PSF), andGaussian function can be considered as one of the classical PSF: Therefore, the synthesized images with multiplicative noise in this paper are generated for ideal images convolution with 2D Gaussian kernels and then noised with additive Gaussian white noise. An example is shown in Figure 1. in (2.1) is chosen as (2.2), namely Gaussian function templates. 3 × 3 Gaussian function template is employed here. 3. Total Variation Model In order to recover the true image as much as possible and/or to find a new image in which the information of interest such as object boundary in the image is more obvious and/or more easily extracted, we will discuss PDE-based image denoising in this section. Second-order PDE models have been studied as a useful tool for image denoising. The classical model of them is total variation model (TVM) [27], and we will introduce it. TVM was first proposed by Rudin et al. [27]. It is now one of the most successful tools in image restoration. TVM has a simple fixed filter structure. In terms of the mathematical foundation, unlike most statistical filters, TVM is based on functional analysis and geometry. The additive noise removal problem is converted to energy function minimization problem as below: where denotes image domain, and is Lagrange multiplier. The selection of the parameter is very important for the smoothing result. The corresponding Euler-Lagrange equation is and the steepest descent marching gives To avoid singularities in flat regions or at local extreme, in (3.2) is regularized to for a small positive parameter . Chan et al. [30] deduce discrete iterative equation of TV model as follows: where denotes the pixel value at node in the noisy image, denotes iteration times, denotes the image pixel value after iterations, denotes field of node (see Figure 2). The filter coefficients and are given by Here, for any node . In conclusion, TV denoising algorithm steps can be summarized as follows:(1)to assign parameter and ;(2)compute the local variation by (3.7);(3)compute respectively and (3.6);(4)compute the filter coefficients and by (3.5);(5)calculate iterative equation (3.4). For TV filtering process, the computational cost can be reduced by the algorithm. TVM not only can remove noise but also can keep the image edge information. Some experimental results are shown in Figure 3. TVM is better than the traditional denoising methods not only in PSNR values but also in visual quality. 4. A New Fourth-Order PDE Denoising Model In order to avoid the blocky effects (seen in Figure 3(f) and Figure 4(f)) widely seen in images processed by anisotropic diffusion while preserve edges, You and Kaveh [24] proposed a fourth-order PDE for noise removal. Motivated by [24] and TVM, we proposed a novel model in [25, 26]. The new approach combines the advantages of the famous TVM and original fourth-order PDE model. It can avoid the blocky effects and get high fidelity (improve the quality of the processed image), which is important for image filter application (see Figure 4). Consider the energy function as follows: where is the image domain and is a parameter similar as in TVM. is the noisy image. denotes Laplacian operator and we require is an increasing function and bigger than zero. Therefore, the minimization of the functional is equivalent to smoothing the image as measured by . The corresponding Euler-Lagrange equation is where is the signed distance function, so (4.2) can be written as If we define , which is therefore, the Euler equation may be solved through the following gradient descent procedure: So we can discretize and iterate to solve the equation. To solve the model in (4.5) efficiently and reliably, we propose a simple symmetric difference algorithm based on four neighboring systems (seen in Figure 2). We calculate Laplacian of the image intensity function as where is space grid size. Given a time step , (4.5) can be discretized as Similar as [24], we define where is a parameter. So the symmetric fourth-order PDE denoising algorithm is as follows. Step 1. Initialization: select the constants , , , and choose an initial function (image) . Step 2. Compute and using (4.6). Step 3. Compute using (4.8). Step 4. Update using (4.7). Step 5. Repeat Steps 2 and 4 until convergence. Figure 4 shows the results for a medical image with Gaussian white noise of mean 0 and variance 0.01. Median filter is applied to alleviating the speckle effects in the processed image. We can see from Figure 4 that the new fourth-order PDE method obtains the biggest PSNR values in all filter method and can avoid the block effect in Figure 4(f). At the same time, the last result of the new method (Figure 4(j), dB) is better than the original fourth-order PDE method (Figure 4(b), dB) not only in PSNR values but also in visual quality. 5. Multiplicative Noise Removal Algorithm Based on Fourth-Order PDE Model Objective of most traditional algorithms is to deal with additive noise, but the result is not ideal for the big multiplicative noise. This paper proposes a new multiplicative noise removal algorithm and combines the denoising algorithm with image frequency domain. The whole process is as follows. Firstly, remove the additive noise in model (2.1) by denoising algorithm, then the model is simplified as Secondly, convolution in (5.1) changes to product according to fast Fourier transform (FFT): where , , and denote FFT of ,, and , respectively. Thirdly, (5.2) can be rewritten by logarithmic transformation (LN) as follows: Fourthly, in (5.3) can be regarded as additive noise in image frequency domain, and we can remove it by some additive denoising algorithms, such as TVM and fourth-order PDE model. Therefore, Fifthly, by exponential transform (EXP), (5.4) is rewritten as Sixthly, by inverse fast Fourier transform for (5.5), we can get where in (5.6) is considered as the denoised image got by our algorithm. There are two denoising processes in the multiplicative noise removal framework, that is, the first step and the fourth step, if we select the denoising methods all as TVM removal framework of all as TVM, structure of multiplicative noise removal algorithm can be seen below. In Figure 5, multiplicative noise removal algorithm considers the natural image noise as two parts, convolution changes to product by Fourier transform and product changes to summation by logarithm, then noise can be removal according to total variation model and the image is rebuilt. 6. Experimental Results We use MATLAB 7.10 (R2010a) as the tool to carry out all algorithms a PC equipped with an Intel Core i3-2330M CPU at 2.20GHz and 4G RAM memory and Windows 7 operating system. Denoising performance is evaluated using the PSNR (peak signal-to-noise ratio) in dB which defined by where denotes the restored image with respect to the original image , , and and are the wide and high of image. The effectiveness of the new multiplicative noise removal algorithm is based on the total variation model (MNRATV) shown in Table 1, Figures 6 and 7. The sizes of the noisy “Lena” and “vegetables” images is all . The numerical results are listed in Table 1 and compared in Figure 6. Visual quality is shown in Figure 7. Experimental results show that the new method is available. It is better than the traditional denoising algorithm not only in PSNR values but also in visual quality. We can see from Table 1 and Figure 6 that the PSNR values of the restored images by MNRATV are higher than restored images by all the other methods. It is little bigger than those by TVM when the noise level is lower. The results of MNRATV and TVM are shown in Figure 8. There are two denoising methods in the first step and fourth step of the multiplicative noise removal framework; we can call it denoising method 1 and denoising method 2. If they are all chosen as TVM, then the whole framework in Figure 5 is called MNRA1 method, which is called MNRATV before. TVM and fourth-order PDE (FPDE) model which we introduced in Section 4 constitutes four methods. Details are shown in Table 2. Different methods are employed to remove the noisy Lena image with different variance. PSNR values are shown in Table 3. Seen from Table 3, TVM or FPDE directly is not good because of the multiplicative noise type. Median filter cannot be exploited as denoising method 2 since complex number is generated by the Fourier transform. Denoising method 1 is selected as FPDE in MNRA2, and it gets better results. 7. Conclusion PDE models have been widely applied in image processing community especially in image denoising. However, traditional PDE-based methods have some drawbacks unless the governing equations are both incorporating appropriate parameters and discretized by suitable numerical schemes. In this paper, a new fourth-order PDE model is introduced by improving the original fourth-order one [24] in order to avoid the blocky effect. To solve the model efficiently and reliably, we suggest a symmetrically difference schemes. Median filtering is exploited to alleviate the speckle effects in the processed image in succession. Accordingly, a new multiplicative noise removal algorithm based on the proposed fourth-order PDE model is presented. To remove the multiplicative noise, the convolution is changed into a product by applying the Fourier transform. Furthermore, the multiplicative noise is converted into the additive one by using a logarithmic transformation. Then the noise can be removed by applying the proposed PDE model. Experimental results have shown the effectiveness of the proposal. This paper is partially supported by NSFC (61070087, 61105130), the Natural Science Foundation of Guangdong Province (S2011040000433, S2011040004017), the Science & Technology Planning Project of Shenzhen City (JC200903130300A, JYC200903250175A), and the Opening Project of Guangdong Province Key Laboratory of Computational Science of Sun Yat-Sen University (201106002) and Teaching Reform and Research Project for Young Teachers of Shenzhen University (JG2010118). The authors would like to thank the Key Laboratory of Medical Image Processing in Southern Medical University for providing original medical images. 1. Q. Y. Ruan and Y. Z. Ruan, Digital Image Processing, Publishing House of Electronics Industry, Beijing, China, 2nd edition, 2004. 2. Q. H. Chang and T. Yang, “A lattice Boltzmann method for image denoising,” IEEE Transactions on Image Processing, vol. 18, no. 12, pp. 2797–2802, 2009. View at Publisher · View at Google Scholar 3. C. Chaux, L. Duval, A. Benazza-Benyahia, and J.-C. Pesquet, “A nonlinear Stein-based estimator for multichannel image denoising,” IEEE Transactions on Signal Processing, vol. 56, no. 8, part 2, pp. 3855–3870, 2008. View at Publisher · View at Google Scholar 4. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990. 5. S. Osher and J. A. Sethian, “Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations,” Journal of Computational Physics, vol. 79, no. 1, pp. 12–49, 1988. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 6. S. Osher and C. W. Shu, “High-order essentially nonoscillatory schemes for Hamilton-Jacobi equations,” SIAM Journal on Numerical Analysis, vol. 28, no. 4, pp. 907–922, 1991. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 7. B. Chen, Y. Li, and J.-L. Cai, “Noisy image segmentation based on nonlinear diffusion equation model,” Applied Mathematical Modelling, vol. 36, no. 3, pp. 1197–1208, 2012. 8. Z. Liao, S. Hu, and W. Chen, “Determining neighborhoods of image pixels automatically for adaptive image denoising using nonlinear time series analysis,” Mathematical Problems in Engineering, vol. 2010, Article ID 914564, 14 pages, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 9. S. Hu, Z. Liao, D. Sun, and W. Chen, “A numerical method for preserving curve edges in nonlinear anisotropic smoothing,” Mathematical Problems in Engineering, vol. 2011, Article ID 186507, 14 pages, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 10. M. Li, “Fractal time series—a tutorial review,” Mathematical Problems in Engineering, vol. 2010, Article ID 157264, 26 pages, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 11. Z. Shang, L. Zhang, S. Ma, B. Fang, and T. Zhang, “Incomplete time series prediction using max-margin classification of data with absent features,” Mathematical Problems in Engineering, vol. 2010, 14 pages, 2010. 12. M. Freiberger, H. Egger, and H. Scharfetter, “Nonlinear inversion schemes for fluorescence optical tomography,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 11, pp. 2723–2729, 2010. 13. Z. Liao, S. Hu, D. Sun, and W. Chen, “Enclosed Laplacian operator of nonlinear anisotropic diffusion to preserve singularities and delete isolated points in image smoothing,” Mathematical Problems in Engineering, vol. 2011, Article ID 749456, 15 pages, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 14. M. Li and W. Zhao, “Visiting power laws in cyber-physical networking systems,” Mathematical Problems in Engineering, vol. 2012, Article ID 302786, 13 pages, 2012. View at Publisher · View at Google Scholar 15. M. Li, C. Cattani, and S. Chen, “Viewing sea level by a one-dimensional random function with long memory,” Mathematical Problems in Engineering, vol. 2011, Article ID 654284, 13 pages, 2011. View at Publisher · View at Google Scholar 16. F. Zhang, Y. M. Yoo, L. M. Koh, and Y. Kim, “Nonlinear diffusion in laplacian pyramid domain for ultrasonic speckle reduction,” IEEE Transactions on Medical Imaging, vol. 26, no. 2, pp. 200–211, 17. M. Ceccarelli, V. De Simone, and A. Murli, “Well-posed anisotropic diffusion for image denoising,” IEE Proceedings: Vision, Image and Signal Processing, vol. 149, no. 4, pp. 244–252, 2002. 18. M. Li, M. Scalia, and C. Toma, “Nonlinear time series: computations and applications,” Mathematical Problems in Engineering, vol. 2010, Article ID 101523, 5 pages, 2010. 19. L. X. Shen, M. Papadakis, I. A. Kakadiaris, et al., “Image denoising using a tight frame,” IEEE Transaction on Image Processing, vol. 15, no. 5, pp. 1254–1263, 2006. 20. B. Chen and W.-S. Chen, “Noisy image segmentation based on wavelet transform and active contour model,” Applicable Analysis, vol. 90, no. 8, pp. 1243–1255, 2011. View at Publisher · View at Google Scholar 21. T. Celik and K. Ma, “multitemporal image change detection using undecimated discrete wavelet transform and active contours,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 2, pp. 706–716, 2011. 22. T. Celik and K. Ma, “Unsupervised change detection for satellite images using dual-tree complex wavelet transform,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 3, pp. 1199–1210, 2010. 23. S. Kim, “PDE-based image restoration: a hybrid model and color image denoising,” IEEE Transaction on Image Processing, vol. 15, no. 5, pp. 1163–1170, 2006. 24. Y.-L. You and M. Kaveh, “Fourth-order partial differential equations for noise removal,” IEEE Transactions on Image Processing, vol. 9, no. 10, pp. 1723–1730, 2000. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 25. B. Chen, W.-S. Chen, L.-W. Zhang, et al., “Image restoration based on fourth-order PDE model,” in Proceedings of the IEEE International Conference on Natural Computation, vol. 5, pp. 549–554, Jinan, China, 2008. 26. B. Chen, P.-C. Yuen, J.-H. Lai, and W.-S. Chen, “Image segmentation and selective smoothing based on variational framework,” Journal of Signal Processing Systems, vol. 54, pp. 145–158, 2009. 27. L. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removel algorithms,” Physica D, vol. 60, pp. 259–268, 1992. 28. S. D. Babacan, R. Molina, and A. Katsaggelos, “Variational bayesian blind deconvolution using a total variation prior,” IEEE Transactions on Image Processing, vol. 18, no. 1, pp. 12–26, 2009. 29. L. Rudin and S. Osher, “Total variation based image restoration with free local constraints,” in Proceedings of the 1st IEEE International Conference on Image Processing, vol. 1, pp. 31–35, 1994. 30. T. F. Chan, S. Osher, and J. H. Shen, “The digital TV filter and Nonlinear Denoising,” IEEE Transaction on Image Processing, vol. 10, no. 2, pp. 231–241, 2001. 31. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Transactions on Image Processing, vol. 18, no. 11, pp. 2419–2434, 2009. View at Publisher · View at Google Scholar 32. C. Drapaca, “A nonlinear total variation-based denoising method with two regularization parameters,” IEEE Transactions on Biomedical Engineering, vol. 56, no. 3, pp. 582–586, 2009. 33. M. Kumar and S. Dass, “A total variation-based algorithm for pixel-level image fusion,” IEEE Transactions on Image Processing, vol. 18, no. 9, pp. 2137–2143, 2009. View at Publisher · View at Google Scholar 34. P. Rodríguez and B. Wohlberg, “Efficient minimization method for a generalized total variation functional,” IEEE Transactions on Image Processing, vol. 18, no. 2, pp. 322–332, 2009. View at Publisher · View at Google Scholar 35. T. Zeng and M. Ng, “On the total variation dictionary model,” IEEE Transactions on Image Processing, vol. 19, no. 3, pp. 821–825, 2010. View at Publisher · View at Google Scholar 36. B. C. Vemuri, M. Liu, S. Amari, et al., “Total bregman divergence and its applications to DTI analysis,” IEEE Transactions on Medical Imaging, vol. 30, no. 2, pp. 475–483, 2011. 37. S. Ramani, P. Thevenaz, and M. Unser, “Regularized interpolation for noisy images,” IEEE Transactions on Medical Imaging, vol. 29, no. 2, pp. 543–558, 2010. 38. J. Yao, J. Chen, and C. Chow, “Breast tumor analysis in dynamic contrast enhanced MRI using texture features and wavelet transform,” IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 1, pp. 94–100, 2009. 39. M. Zibetti and A. R. De Pierro, “A new distortion model for strong inhomogeneity problems in Echo-Planar MRI,” IEEE Transactions on Medical Imaging, vol. 28, no. 11, pp. 1736–1753, 2009. 40. J. M. Bioucas-Dias and M. A. T. Figueiredo, “Multiplicative noise removal using variable splitting and constrained optimization,” IEEE Transactions on Image Processing, vol. 19, no. 7, pp. 1720–1730, 2010. View at Publisher · View at Google Scholar 41. L. Rudin, P. Lions, and S. Osher, “Multiplicative denoising and deblurring: theory and algorithms,” in Geometric Level Set Methods in Imaging, Vision, and Graphics, S. Osher and N. Paragios, Eds., pp. 103–120, Springer, New York, NY, USA, 2003. 42. J. Goodman, “Some fundamental properties of speckle,” Journal of the Optical Society of America, vol. 66, pp. 1145–1150, 1976. 43. Y. Yu and S. T. Acton, “Speckle reducing anisotropic diffusion,” IEEE Transactions on Image Processing, vol. 11, no. 11, pp. 1260–1270, 2002. View at Publisher · View at Google Scholar 44. K. Krissian, C.-F. Westin, R. Kikinis, and K. G. Vosburgh, “Oriented speckle reducing anisotropic diffusion,” IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1412–1424, 2007. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/mpe/2012/242043/","timestamp":"2014-04-16T13:41:54Z","content_type":null,"content_length":"188074","record_id":"<urn:uuid:7a3dc542-fc41-45ea-9631-3edd9b8f26b0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
btree balancing 11-09-2001 #1 Registered User Join Date Nov 2001 btree balancing I came to the realization that unbalanced btrees are not much more efficient than linked lists. So I've been mulling over a balancing algorithm for them and I was looking for suggestions. I will use the following terminology: node - entry in the tree, has 2 children, left child and right child leaf - a node that has two null pointers for children root - first node in list distance - number of nodes in the path from one node to another, minus one sub tree - a tree extending from any node to it's decendents (that node is the root in the sub tree) full tree - a tree in which all nodes are either leaves or have 2 non-null children and the distance from the root to any leaf is the same Which child is greater than, less than is irrelevant. Here is what I have so far: 1. take the root and one of it's children and feed this back into the tree, using the root's other child as the new root. problems: this will only balance the tree if the number of entries in the old root's left child's sub tree were close to the number of entries in the old root's right child's sub tree (wow, good luck understanding that) 2. find the mean value of the entries in the tree and make this a root in a temporary tree. Take the entries from the first tree, and put them in the temporary tree, then copy the temp tree back to the original problems: this will only approximately balance the tree if there are no large gaps between the values of the nodes in the tree, otherwise one side will be top heavy geez, I haven't come up with much, and none of it's very good. Neither of these even come close to giving me a full tree, which would be ideal. The other problem I've come to is deciding when to balance. Now if I had a way to balance that would give me a full tree I imagine I could balance every time I created old_distance^2 nodes, where old_distance is the maximum distance after the last balance. Well I'm confusing myself now and I should probably put some more thought into this, but any help would be greatly appreciated. Thanks 11-09-2001 #2
{"url":"http://cboard.cprogramming.com/cplusplus-programming/4908-btree-balancing.html","timestamp":"2014-04-17T08:37:53Z","content_type":null,"content_length":"43371","record_id":"<urn:uuid:9e98c9f5-a124-4b36-9255-1cd40813b44c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem of the Month (September 2002) Consider the cellular automaton on the square lattice with the following rules: i. Each cell's neighborhood is the 8 horizontally, vertically, and diagonally adjacent cells. ii. We start at time 1 with a finite collection of cells in state 1, with all other cells in state 0. iii. At time n, if a cell is in state 0, and the sum of the states of its neighborhood add to n, that cell switches to state n. iv. If at time n, no cell switches to state n, the growth stops. Since cells stay in state 0 or change to a higher state only once, this is a growth model, and we can illustrate the growth with a picture of the states of the cells after they no longer change. For example, here are two such pictures, each starting from 3 connected cells in state 1: The largest state ever reached by any cell is called the lifetime of the original pattern. The patterns above both have lifetime 6. Can you find some small patterns with some large lifetimes? What is the largest lifetime of any pattern starting with at most n cells in state 1? What if we require the n starting cells to be connected? Can you prove that arbitrarily large lifetimes occur? What if we require the starting pattern to be connected? Can you find a pattern with infinite lifetime? Is finding the lifetime of a pattern an NP-complete problem? Boris Bukh defined L(n) to be the longest lifetime of a pattern starting from n cells in state 1. He also defined l(n) to be the longest lifetime of a pattern starting from n connected cells in state 1. Through a long argument, he proves the amazing upper bounds L(n) = O(n log^2n) and l(n) = O(n log n log log n). He does not know whether l(n) is unbounded. Berend Jan van der Zwaag and Joseph DeVincentis found an infinite collection of patterns which generate arbitrarily long lifetimes. This shows that L(n) ≥ 3n-3: Philippe Fondanaiche found a similar construction. Boris Bukh and John Hoffman found a sequence of patterns which show L(n) ≥ 3n-2: Then Berend Jan van der Zwaag improved this pattern to show L(n) ≥ (7n-5)/2: Brendan Owen found the small configurations with the longest lifetimes. The connected configurations were the result of a complete search, but the disconnected configurations might be improved upon. The Configurations Starting from n Cells with the Largest Lifetimes Clinton Weaver, Claudio Baiocchi, Philippe Fondanaiche, and Joseph DeVincentis also sent some solutions, not all of which were optimal. Brendan Owen also found the small configurations with the longest lifetimes if the rules were changed to fewer neighbors or a different latice: Configurations with the Largest Lifetimes with Different Rules If you can extend any of these results, please e-mail me. Click here to go back to Math Magic. Last updated 9/12/02.
{"url":"http://www2.stetson.edu/~efriedma/mathmagic/0902.html","timestamp":"2014-04-19T01:48:56Z","content_type":null,"content_length":"9808","record_id":"<urn:uuid:16409bfb-90e7-4e03-b2ed-67b8b97a2d93>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability and Random Processes Question Papers Anna University Probability and Random Processes Question Papers Anna University MA2261 PRP 4th Sem ECE Tags : Anna University, ECE, prp question papers anna university. Probability and Random Processes Question Papers Anna University Probability and Random Processes question papers Anna University is searched by Engineering students.We have given Probability and Random Processes Previous question papers and Model question papers.Probability and Random Processes is a important subject for MA2261 4th sem ECE department.Anna University question papers PRP Previous year question papers is given for student reference. PRP Question Papers 4th Sem ECE Department : ELECTRONICS COMMUNICATION AND ENGINEERING University : Anna University Chennai/Tirunelveli/Coimbatore/Trichy/Madurai Subject :Probability and Random Processes(PRP) Subject Code :MA2261 Semester : 4th sem probability and random process anna university question papers NOV DEC 2011 prp question papers anna university November December 2010 probability and random process previous year question papers anna university april may 2010
{"url":"http://www.kinindia.com/university/probability-and-random-processes-question-papers-anna-university-ma2261-prp-4th-sem-ece/","timestamp":"2014-04-19T02:20:59Z","content_type":null,"content_length":"36473","record_id":"<urn:uuid:5ddd0e8c-4778-4498-9359-ef957b9d04e6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
aluminum weight formula [WiredBox.Net - Office Newsgroups] Geek News Im a complete n00b to excel and I really need some help.... I need a formula to tell me how much certain amounts of sheet aluminum weigh. I know a square foot of 1/8" aluminum weighs 1.8lbs. Basically I want to type in the amount of square feet and get the weight. This may help to, a 60" X 144" sheet of 1/8" aluminum weighs 104.76lbs. Thanks in advance, Anthony -- sHooTer296 there is something wrong with your numbers or my brain. you say that a square foot of alumimun weighs 1.8 lbs and a sheet of 5ft(60in) by 12ft(144in) weights 104.76. when i do the math, i keep getting 108 lb. the math is simple. in cell A1 enter the standard weight 1.8 lb sq ft (.0125 lb sq in) in cell B1 enter the length 5 ft (60 in) in cell C1 enter the width 12 ft (144 in) in cell D1 enter formula = sum(A1*B1*C1) which equals 108 lb (in sq in or sq ft) did i miss something? "sHooTer296" wrote: [Quoted Text] > Im a complete n00b to excel and I really need some help.... I need a > formula to tell me how much certain amounts of sheet aluminum weigh. I > know a square foot of 1/8" aluminum weighs 1.8lbs. Basically I want to > type in the amount of square feet and get the weight. This may help to, > a 60" X 144" sheet of 1/8" aluminum weighs 104.76lbs. > Thanks in advance, > Anthony > -- > sHooTer296 Hi Anthony Set this up in a worksheet A1:- 60 A2:- 144 A3:- 0.125 B1:- =CONVERT(A1,"in","m") B2:- =CONVERT(A2,"in","m") B2:- =CONVERT(A3,"in","m") B5:- =B1*B2*B3 B6:- =B5*2.7*1000 B7:- =CONVERT(B6,"kg","lbm") I've done this by converting to metric for the calculation then converting back to Imperial for the result. (metric is so much easier for density The 2.7 figure in B6 is the density of solid aluminium (2.700 grams per centimetre). That figure is correct for pure aluminium, if there are alloys in the product then you would need to obtain the specific gravity for that product. The underlying equation here is density = mass/volume or in this instance mass = volume*density. Your answer in B7 is 105.35lbs(47.78kg in B6) Input 12 into A1 and A2 and you will see that a square foot of 1/8 aluminium Is 1.755787lbs and not 1.8lbs hence the discrepency that FSt1 found. KIS - why convert one way and back. * Comments in brackets E1 =3D 1.8 ( weight of 1 sq foot) A2 =3D 12 B2 =3D 12 C2 =3D A2 * B2 ( convert to sq inches ) D2 =3D C2 / 144 ( convert to sq feet ) E2 =3D $e$1 * D2 ( convert to weight ) Now select A2:E2 and drag down a few rows Now you can enter various sheets sizes in inches in A3:B3, A4:B4 etc and you have a visual table of various sheet size /weight combinations f= or = Now as others have noticed if you put in 60 * 144 for size, you get 108l= So assuming the weight for the 60" * 144" is accurate at 104.76, we can = actually recalculate the Weight/lb figure So now in E1, enter =3D 104.76/60 (total sq foot of knwon weight sample= Now all the calculations are corrected to use the new weight/lb figure. And by having the table all previous calculations are now re-calculated.= On Thu, 20 Jul 2006 11:38:40 +0100, MartinW <mtmw[ at ]hotmail.invalid> wrote= [Quoted Text] > Hi Anthony > Set this up in a worksheet > A1:- 60 > A2:- 144 > A3:- 0.125 > B1:- =3DCONVERT(A1,"in","m") > B2:- =3DCONVERT(A2,"in","m") > B2:- =3DCONVERT(A3,"in","m") > B5:- =3DB1*B2*B3 > B6:- =3DB5*2.7*1000 > B7:- =3DCONVERT(B6,"kg","lbm") > I've done this by converting to metric for the calculation then = > converting > back to Imperial for the result. (metric is so much easier for density= > calculations) > The 2.7 figure in B6 is the density of solid aluminium (2.700 grams pe= > cubic > centimetre). That figure is correct for pure aluminium, if there are = > alloys > in the > product then you would need to obtain the specific gravity for that = > product. > The underlying equation here is density =3D mass/volume or in > this instance mass =3D volume*density. > Your answer in B7 is 105.35lbs(47.78kg in B6) > Input 12 into A1 and A2 and you will see that a square foot of 1/8 = > aluminium > Is 1.755787lbs and not 1.8lbs hence the discrepency that FSt1 found. > HTH > Martin -- = Steve (3) Hi Steve, You're working in square measurements and rounded values for those square measurements which will always give you false results in a density calculation. You have to work on cubic calculations. 1.8lbs is incorrect as my previous post shows. 104.76lbs is incorrect as my previous post shows. <KIS - why convert one way and back> Because the metric system makes these calculations extremely easy and also makes them very flexible so you can use the one setup for any material type whether it be aluminium, steel, lead, gold whatever. Agreed, but we don't know the environment for original request. It's a bit scientific to throw in the exact density of aluminium. I just took it that there were sheets of aluminium, and they wanted to = know that weight various sizes were. I think taking the 104.76 as an on site weight is more realistic. On Thu, 20 Jul 2006 15:09:08 +0100, MartinW <mtmw[ at ]hotmail.invalid> wrote= [Quoted Text] > Hi Steve, > You're working in square measurements and rounded > values for those square measurements which will always > give you false results in a density calculation. > You have to work on cubic calculations. > 1.8lbs is incorrect as my previous post shows. > 104.76lbs is incorrect as my previous post shows. > <KIS - why convert one way and back> > Because the metric system makes these calculations > extremely easy and also makes them very flexible > so you can use the one setup for any material > type whether it be aluminium, steel, lead, gold whatever. > Regards > Martin -- = Steve (3) Hi Steve, <It's a bit scientific to throw in the exact density of aluminium.> Yeah, I know it sounds pretty anal, but you do have to be scientific about these calculations. I work in a materials testing laboratory and the difference between a density of 2.700 and 2.713 can translate into very big dollars on large projects. Thanks for the quick and very helpful posts guys, They are much appreciated! -- sHooTer296 It is 5052H32 aluminum mostly. We make marine fuel tanks and such and I need to figure out how much each weighs(empty). Just wanted to say thanks again for all the helpful posts. -- sHooTer296
{"url":"http://www.wiredbox.net/Forum/Thread270511_aluminum_weight_formula.aspx","timestamp":"2014-04-18T16:06:58Z","content_type":null,"content_length":"19175","record_id":"<urn:uuid:b693b0d8-ecb4-4535-bcfd-81ad0f368463>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Go Tell the Public Teacher's Guide 1. The teacher will remind the students of the papers they read shortly after the video. 2. The teacher will ask the students what they thought about the papers and discuss this. The teacher should help the students point out that they were somewhat difficult to read. 3. The teacher will then inform the students they will be writing a paper for the public that summarizes their studies for this unit. 4. The student will organize and display their collected information from all lessons, produce graphics or use the ones they have already developed, analyze data to support or refute their class hypothesis, and draw inferences for the presence of oysters in the Chesapeake Bay. 5. The teacher should have the students do a five-minute exit card about all their experiences during this unit. This allows students the opportunity to share their feelings and thoughts about the lessons presented. It may also assist the teacher in deciding if there is any mass confusion on an objective that was presented.
{"url":"http://www.esep.umces.edu/modules/lf_teachers_guide/index.php?adminaction=view&table_id=87","timestamp":"2014-04-18T15:41:01Z","content_type":null,"content_length":"7592","record_id":"<urn:uuid:2b02d422-3abc-460f-9923-ffdce041a982>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Grades K - 2 Hens and Sheep Problem #25 Deadline to submit the solution: January 31st, 2001 In the Aunt Emma’s yard there are several hens and one sheep, so altogether there are 10 legs. Q1: How many hens are there? The Solutions Name: Vivek 10-4=6 (there is only one sheep. sheep has only 4 legs) Total number of legs of hens is 6. Hen have 2 legs. 2x3 = 6 ( so the are 3 hens) School: Frost Age: 6 Grade: 1 About me: I like to read and do math puzzles. I like to play gameboy. State: USA Name: Anita There were 10 legs. Sheep has 4 legs. Take away 4 from 10 is 6 legs. Count how many pairs you have. There are 3 pairs so 3 hens. School: Laurel Mountain Age: 6 Grade: 1 State: USA Name: Anita Solution: There are 10 legs. So take away 4 from 10 because there is 1 sheep so thats why you take away 4 from 10. Then you pair the rest in 2 groups then you count the pairs. Thats how I found out the anweser was 3. School: Laurel Mountain Age: 6 Grade: 1 About me: I'm very good at chess State: USA Name: Emily since there are l0 legs, the sheep has four and that leaves 3 hens with 2 legs each for a total of 6 School: Breezy point Age: six Grade: k About me: I like the computer. I am a good reader, speller and artist. State: United States Name: Morgan School: homeschool Age: 8 Grade: 3 About me: i love to dance jazz tap ballet State: usa Name: Melissa School: chbs Age: 6 Grade: 1 About me - I like: dad flys a airplane State: florida Name: Kishan School: lav Age: 8 Grade: 3 About me: i like to solve problems. I love math but i hate to write. State: america
{"url":"http://www.dositey.com/problems/k2/problem25.htm","timestamp":"2014-04-17T16:41:01Z","content_type":null,"content_length":"14684","record_id":"<urn:uuid:398074f4-e23b-4f09-adf8-c6c136abe64f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Michael Jerison Michael Jerison This is information that was supplied by Michael Jerison in registering through RePEc. If you are Michael Jerison , you may change this information at the RePEc Author Service . Or if you are not registered and would like to be listed as well, register at the RePEc Author Service . When you register or update your RePEc registration, you may identify the papers and articles you have authored. Personal Details First Name: Michael Middle Name: Last Name: Jerison RePEc Short-ID: pje96 Postal Address: Department of Economics State University of New York-Albany (SUNY) Location: Albany, New York (United States) Homepage: http://www.albany.edu/econ/ Phone: (518) 442-4735 Fax: (518) 442-4736 Postal: Department of Economics, BA-110, Albany, NY 12222 Handle: RePEc:edi:dealbus (more details at EDIRC) Working papers 1. Michael Jerison & John K.-H. Quah, 2006. "Law of Demand," Discussion Papers 06-07, University at Albany, SUNY, Department of Economics. 2. David Jerison & Michael Jerison, 2001. "Real Income Growth And Revealed Preference Inconsistency," Economics Working Papers we012902, Universidad Carlos III, Departamento de Economía. 3. Michael Jerison, 2001. "Demand Dispersion, Metonymy and Ideal Panel Data," Discussion Papers 01-11, University at Albany, SUNY, Department of Economics. 4. Michael Jerison & David Jerison, 1999. "Measuring Consumer Inconsistency: Real Income, Revealed Preference and the Slutsky Matrix," Discussion Papers 99-01, University at Albany, SUNY, Department of Economics. 5. Michael Jerison, 1998. "Dispersed Excess Demands, the Weak Axiom and Uniqueness of Equilibrium," Discussion Papers 98-03, University at Albany, SUNY, Department of Economics. 6. Michael Jerison, 1997. "Nonrepresentative Representative Consumers," Discussion Papers 97-01, University at Albany, SUNY, Department of Economics. 7. Igor V. Evstigneev & Werner Hildenbrand & Michael Jerison, 1995. "Metonymy and Cross Section Demand," Discussion Paper Serie A 469, University of Bonn, Germany. 8. Michael Jerison, 1993. "Qualitatively Identical Comparative Statics for Firms of Consumers," Discussion Papers 93-04, University at Albany, SUNY, Department of Economics. 9. Michael Jerison, 1993. "Russell on Gorman's Engel Curves: A Correction," Discussion Papers 93-05, University at Albany, SUNY, Department of Economics. 10. Michael Jerison, 1992. "Optimal Income Distribution Rules and the Nonrepresentative Representative Consumer," Discussion Papers 92-08, University at Albany, SUNY, Department of Economics. 11. Michael Jerison & David Jerison, 1991. "Approximately Rational Consumer Demand," Discussion Papers 92-02, University at Albany, SUNY, Department of Economics. 12. Hardle, W. & Jerison, M., 1990. "Cross section Engel curves over time," CORE Discussion Papers 1990016, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE). □ HÄRDLE, Wolfgang & JERISON, Michael, . "Cross section Engel curves over time," CORE Discussion Papers RP -991, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE). □ Wolfgang HÄRDLE & Michael JERISON, 1991. "Cross section Engel Curves over Time," Discussion Papers (REL - Recherches Economiques de Louvain) 1991045, Université catholique de Louvain, Institut de Recherches Economiques et Sociales (IRES). □ Haerdle,Wolfgang Jerison,Michael, 1988. "Cross section Engel curves over time," Discussion Paper Serie A 160, University of Bonn, Germany. 13. Wolfgang Härdle & Werner Hildenbrand & Michael Jerison, 1989. "Empirical Evidence on the Law of Demand," Discussion Paper Serie A 264a, University of Bonn, Germany. □ Haerdle,W. Hildenbrand,W. Jerison,M., 1988. "Empirical evidence on the law of demand," Discussion Paper Serie A 193, University of Bonn, Germany. □ HARDLE, Wolfgang & HILDENBRAND, Werner & JERISON, Michael, . "Empirical evidence on the law of demand," CORE Discussion Papers RP -968, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE). 14. Roger Guesnerie & Michael Jerison, 1989. "Taxation as a Social Choice Problem, The Scope of the Laffer Argument," Discussion Paper Serie A 245, University of Bonn, Germany. 15. David Jerison & Michael Jerison, 1989. "Approximately Rational Consumer Demand and Ville Cycles," Discussion Paper Serie A 246, University of Bonn, Germany. 16. Hildenbrand,Werner & Jerison,Michael, 1988. "The Demand theory of the Weak axioms of Revealed Preference," Discussion Paper Serie A 163, University of Bonn, Germany. 17. Jerison,David Jerison,Michael, 1988. "Approximate Slutzky conditions," Discussion Paper Serie A 166, University of Bonn, Germany. 1. Jerison, Michael, 1999. "Dispersed excess demands, the weak axiom and uniqueness of equilibrium," Journal of Mathematical Economics, Elsevier, vol. 31(1), pages 15-48, February. 2. Evstigneev, I. V. & Hildenbrand, W. & Jerison, M., 1997. "Metonymy and cross-section demand," Journal of Mathematical Economics, Elsevier, vol. 28(4), pages 397-414, November. □ EVSTIGNEEV, Igor V. & HILDENBRAND, Werner & JERISON, Michael, 1996. "Metonymy and Cross Section Demand," CORE Discussion Papers 1996046, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE). □ Igor V. Evstigneev & Werner Hildenbrand & Michael Jerison, 1995. "Metonymy and Cross Section Demand," Discussion Paper Serie A 469, University of Bonn, Germany. 3. David Jerison & Michael Jerison, 1996. "A discrete characterization of Slutsky symmetry (*)," Economic Theory, Springer, vol. 8(2), pages 229-237. 4. Jerison, Michael, 1994. "Optimal Income Distribution Rules and Representative Consumers," Review of Economic Studies, Wiley Blackwell, vol. 61(4), pages 739-71, October. 5. Jerison, David & Jerison, Michael, 1993. "Approximately Rational Consumer Demand," Economic Theory, Springer, vol. 3(2), pages 217-41, April. 6. Jerison, Michael, 1993. "Russell on Gorman's Engel curves : A correction," Economics Letters, Elsevier, vol. 43(2), pages 171-175. □ Michael Jerison, 1993. "Russell on Gorman's Engel Curves: A Correction," Discussion Papers 93-05, University at Albany, SUNY, Department of Economics. □ Jerison,Michael, 1993. "Russel on Gorman`s Engel curves: A correction," Discussion Paper Serie A 412, University of Bonn, Germany. 7. Jerison, David & Jerison, Michael, 1992. "Approximately rational consumer demand and ville cycles," Journal of Economic Theory, Elsevier, vol. 56(1), pages 100-120, February. 8. Guesnerie, Roger & Jerison, Michael, 1991. "Taxation as a social choice problem : The scope of the Laffer argument," Journal of Public Economics, Elsevier, vol. 44(1), pages 37-63, February. □ Guesnerie, R. & Jerison, M., 1990. "Taxation as a Social Choice Problem, the Scope of the Laffer Argument," DELTA Working Papers 90-06, DELTA (Ecole normale supérieure). □ Roger Guesnerie & Michael Jerison, 1989. "Taxation as a Social Choice Problem, The Scope of the Laffer Argument," Discussion Paper Serie A 245, University of Bonn, Germany. 9. Hardle, Wolfgang & Hildenbrand, Werner & Jerison, Michael, 1991. "Empirical Evidence on the Law of Demand," Econometrica, Econometric Society, vol. 59(6), pages 1525-49, November. □ Wolfgang Härdle & Werner Hildenbrand & Michael Jerison, 1989. "Empirical Evidence on the Law of Demand," Discussion Paper Serie A 264a, University of Bonn, Germany. □ HARDLE, Wolfgang & HILDENBRAND, Werner & JERISON, Michael, . "Empirical evidence on the law of demand," CORE Discussion Papers RP -968, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE). □ Haerdle,W. Hildenbrand,W. Jerison,M., 1988. "Empirical evidence on the law of demand," Discussion Paper Serie A 193, University of Bonn, Germany. 10. Hildenbrand, Werner & Jerison, Michael, 1989. "The demand theory of the weak axioms of revealed preference," Economics Letters, Elsevier, vol. 29(3), pages 209-213. 11. Jerison, David & Jerison, Michael, 1984. "Demand aggregation and integrability of the HOGLEX demand function," Economics Letters, Elsevier, vol. 15(3-4), pages 357-362. 12. Jerison, Michael, 1984. "Aggregation and pairwise aggregation of demand when the distribution of income is fixed," Journal of Economic Theory, Elsevier, vol. 33(1), pages 1-31, June. Most cited item Most downloaded item (past 12 months) For general information on how to correct material on RePEc, see these instructions To update listings or check citations waiting for approval, Michael Jerison should log into the RePEc Author Service To make corrections to the bibliographic information of a particular item, find the technical contact on the abstract page of that item. There, details are also given on how to add or correct references and citations. To link different versions of the same work, where versions have a different title, use this form. Note that if the versions have a very similar title and are in the author's profile, the links will usually be created automatically. Please note that most corrections can take a couple of weeks to filter through the various RePEc services.
{"url":"http://ideas.repec.org/e/pje96.html","timestamp":"2014-04-16T16:02:29Z","content_type":null,"content_length":"36824","record_id":"<urn:uuid:6c6554a3-91c2-4773-9501-1289bb50c594>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
The Monad class defines the basic operations over a monad, a concept from a branch of mathematics known as category theory. From the perspective of a Haskell programmer, however, it is best to think of a monad as an abstract datatype of actions. Haskell's do expressions provide a convenient syntax for writing monadic expressions. Minimal complete definition: >>= and return. Instances of Monad should satisfy the following laws: > return a >>= k == k a > m >>= return == m > m >>= (\x -> k x >>= h) == (m >>= k) >>= h Instances of both Monad and Functor should additionally satisfy the law: > fmap f xs == xs >>= return . f The instances of Monad for lists, Data.Maybe.Maybe and System.IO.IO defined in the Prelude satisfy these laws. Monadic Graphs Monadic Graph Algorithms Internal stuff that most people shouldn't have to use. This module mostly deals with the internals of the CGIT monad transformer. Monads having fixed points with a 'knot-tying' semantics. Instances of MonadFix should satisfy the following laws: * purity mfix (return . h) = return (fix h) * left shrinking (or tightening) mfix (\ x -> a >>= \y -> f x y) = a >>= \y -> mfix (\x -> f x y) * sliding mfix (Control.Monad.liftM h . f) = Control.Monad.liftM h (mfix (f . h)), for strict h. * nesting mfix (\x -> mfix (\y -> f x y)) = mfix (\x -> f x x) This class is used in the translation of the recursive do notation supported by GHC and Hugs. Monads that also support choice and failure. Provides a monad-transformer version of the Control.Exception.catch function. For this, it defines the MonadCatchIO class, a subset of MonadIO. It defines proper instances for most monad transformers in the mtl library. Version 0.3.0.5 Functions like alloca are provided, except not restricted to IO. Version 0.1 Provides functions to throw and catch exceptions. Unlike the functions from Control.Exception, which work in IO, these work in any stack of monad transformers (from the transformers package) with IO as the base monad. You can extend this functionality to other monads, by creating an instance of the MonadCatchIO class. Warning: this package is deprecated. Use the exceptions package instead, if possible. Version 0.3.1.0 Functions like alloca are provided, except not restricted to IO. Version 0.1 The class of CGI monads. Most CGI actions can be run in any monad which is an instance of this class, which means that you can use your own monad transformers to add extra functionality. The strategy of combining computations that can throw exceptions by bypassing bound functions from the point an exception is thrown to the point that it is handled. Is parameterized over the type of error information and the monad type constructor. It is common to use Either String as the monad type constructor for an error monad in which error descriptions take the form of strings. In that case and many other common cases the resulting monad is already defined as an instance of the MonadError class. You can also define your own error type and/or use a monad type constructor other than Either String or Either IOError. In these cases you will have to explicitly define instances of the Error and/or MonadError classes. Monads in which IO computations may be embedded. Any monad built by applying a sequence of monad transformers to the IO monad will be an instance of this class. Instances should satisfy the following laws, which state that liftIO is a transformer of monads: * . return = * (m >>= f) = liftIO m >>= > (liftIO . Monads in which IO computations may be embedded. Any monad built by applying a sequence of monad transformers to the IO monad will be an instance of this class. Instances should satisfy the following laws, which state that liftIO is a transformer of monads: * . return = * (m >>= f) = liftIO m >>= > (liftIO . A fast-paced 2-D scrolling vector graphics clone of the arcade game Gradius; it is dedicated to the 20th anniversary of Gradius series. Version 0.99 Support for computations which consume random values. Version 0.1.12 Show more results
{"url":"http://www.haskell.org/hoogle/?hoogle=Monad","timestamp":"2014-04-16T11:26:41Z","content_type":null,"content_length":"22041","record_id":"<urn:uuid:4816051b-c3d1-4dba-945e-f32d95d4d617>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Ordered Pairs Question Hi, I've started solving this problem, but I don't know how to finish it off. Find all ordered pairs of integers (x,y) such that: $x^2 + 2x + 18 = y^2$ so far i have completed the square to get the following: $y^2 - (x+1)^2 = 17$
{"url":"http://mathhelpforum.com/number-theory/156865-ordered-pairs-question.html","timestamp":"2014-04-18T16:09:19Z","content_type":null,"content_length":"38708","record_id":"<urn:uuid:f81cfb7f-2361-4b10-9702-5e286e3f2fe9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
How Irrational! In this lesson, students use geoboards to construct non-traditional, "tilted" squares whose side lengths are irrational numbers. This lesson addresses standards in both Number Sense and Measurement. Begin by having students evaluate expressions involving square roots. Review the notation for square root and the geometric meaning of square root, which is the length of a side of a square with a given area. Construct these triangles on a virtual geoboard. Provide students with the formula for the area of a triangle (A = ½bh) and ask them to determine the areas of the displayed triangles. [12.5 units^2 and 4.5 units^2.] Have a student summarize the previous day's lesson, In Search of Perfect Squares. Tell students that there are more squares that fit on a geoboard than those found during the previous lesson. Today's lesson will focus on finding some atypical squares using geoboards. Project the How Irrational! overhead. Discuss the questions on the overhead. Ask, "How can we determine the area of the square?" [One possible method is to construct a square that surrounds this square; determine the area of the four corner triangles, and subtract the area of the large, outer square minus the area of the four triangles.] Students may need a review on finding the area of a triangle. Also, introduce the term irrational number. An irrational number can be understood as the side length of a square whose area is not a perfect square. For example, 5 is not a perfect square. Therefore, the square root of 5 is an irrational number. In other words, there is no rational number that, when squared, is equal to 5. Distribute the How Irrational activity sheet. Have students work in groups of four to find more squares with non-horizontal, non-vertical sides using geoboards. Although the students have their own geoboards, group interaction will promote communication and provide peer support for this challenging mathematical concept. How Irrational Activity Sheet Show students this method for constructing a non-traditional square whose side has a slope of 3/2 on a geoboard. Place a rubber band on a geoboard peg. Stretch the rubber band "north" 3 units and "east" 2 units and hook the rubber band. Then rotate the geoboard 90° clockwise; and repeat 3 times. For groups that have difficulty, consider providing pictures of atypical squares drawn on dot paper. Students could use the pictures to help them construct squares on their geoboards. Circulate with a geoboard to check for understanding and provide additional support. As students complete the activity sheet, assign each group a specific type of square, categorized by the slope of its side, to present at the end of class. Each group should create a poster and include: • a drawing of their assigned square • the area of the square, including their reasoning • the side length using square root notation • the side length rounded to the nearest whole number, including their reasoning As a class summary, groups of students should present their posters to the whole class. As each group presents their poster, ask questions to emphasize key concepts, such as, "How do you know your figure is a square?", "How did you measure the area of your square?" and "Explain how you estimated the side length of your square to the nearest whole number." As a conclusion to the lesson, present the final square. (See Family of Squares Whose Side has Slope of 4/3 below.) This is the first example we have seen of a square with non-vertical sides that is perfectly square both geometrically and numerically. That is, the side length of this atypical square is a whole number! Twenty-Seven Geoboard Squares, Organized by Type A typical 11 × 11 geoboard (shown below) has 11 pegs in both the horizontal and vertical directions. On this type of geoboard, 27 different squares can be formed. These squares are categorized below, based on the slope of their sides. On a 10 x 10 geoboard, there are 10 squares whose side lengths are whole numbers. The areas of these squares are called perfect squares, because their side lengths are whole number measures. The areas of the 10 squares, in ascending order, are as follows: Family of Squares Where the Side has a Slope of 1 These squares are grouped together because the slope of their segments is 1. The grey square's side has a slope of 1, because the ratio of rise to run is 1/1. │ Area │ 2 │ 8 │ 18 │ 32 │ 50 │ │ Side length │ √2 │ √8 │ √18 │ √32 │ √50 │ │ Whole number estimate │ 1 │ 3 │ 4 │ 6 │ 7 │ Family of Squares Where the Side has a Slope of 2 These squares are grouped together because the slope of their segments is 2. The gray square's side has a slope of 2, because the ratio of rise to run is 2/1. │ Area │ 5 │ 20 │ 45 │ │ Side length │ √5 │ √20 │ √45 │ │ Whole number estimate │ 2 │ 4 │ 7 │ Family of Squares Where the Side has a Slope of 3 These squares are grouped together because the slope of their segments is 3. The grey square's side has a slope of 3, because the ratio of rise to run is 3/1. │ Area │ 10 │ 40 │ │ Side length │ √10 │ √40 │ │ Whole number estimate │ 3 │ 6 │ Family of Squares Where the Side has a Slope of 4 These squares are grouped together because the slope of their segments is 4. The grey square's side has a slope of 4, because the ratio of rise to run is 4/1. │ Area │ 17 │ 68 │ │ Side length │ √17 │ √68 │ │ Whole number estimate │ 4 │ 8 │ Family of Squares Where the Side has a Slope of 3/2 These squares are grouped together because the slope of their segments is 3. The grey square's side has a slope of 3/2, because the ratio of rise to run is 3/2. │ Area │ 13 │ 52 │ │ Side length │ √13 │ √52 │ │ Whole number estimate │ 4 │ 7 │ Family of Squares Where the Side has a Slope of 5/2 The grey square's side has a slope of 5/2, because the ratio of rise to run is 5/2. │ Area │ 29 │ │ Side length │ √29 │ │ Whole number estimate │ 5 │ Family of Squares Where the Side has a Slope of 5/3 The grey square's side has a slope of 5/3, because the ratio of rise to run is 5/3. │ Area │ 34 │ │ Side length │ √34 │ │ Whole number estimate │ 6 │ Family of Squares Where the Side has a Slope of 5/4 The grey square's side has a slope of 5/4, because the ratio of rise to run is 5/4. │ Area │ 41 │ │ Side length │ √41 │ │ Whole number estimate │ 6 │ Family of Squares Where the Side has a Slope of 4/3 [Reserve for Teacher Presentation] The grey square's side has a slope of 4/3, because the ratio of rise to run is 4/3. │ Area │ 25 │ │ Side length │ √25 │ │ Whole number measure (exact) │ 5 │ • 10 × 10 geoboards with elastics • Internet access to a virtual geoboard or overhead geoboard • Dot Paper • Chart or Poster Paper 1. Journal Entry: Have students compare and contrast the "perfect squares" they explored in the previous lesson with squares having irrational side lengths. 2. Have students explain why the area of a square is a whole number, the side length is not necessarily a whole number. [It is possible for a square to have an area of 2 square units, but the side length is not a whole number. The reason is that no whole number, multiplied by itself, equals 2. In other words, the square root of 2 is an irrational number.] 3. Use a "One-Question Quiz" as a formative assessment and ask students to estimate the square root of 28 to the nearest whole number. 4. As an entry in their notebooks, students define the term "irrational number" and include an example based on what they learned in class today. 1. Students interested in the history of mathematics could research the origins of irrational numbers and research why the discovery of irrational numbers disturbed the ancient Greeks. 2. Students who are interested in technology can use an online tool called "Square Coordinates" to design a game. The game should include the constructions of non-traditional squares along with coordinate geometry. 3. Investigate right triangle lengths without introducing the Pythagorean theorem, but instead, considering the hypotenuse as one side of a "non-traditional" square. Questions for Students 1. Is there more than one way to determine the area of the square? [Break the square into parts.] 2. Once you know the area of a square, how can you find the side length? [Take the square root of the area.] 3. Explain how you estimated square roots to the nearest whole number. [Answers will vary. Encourage answers such as: the square root of 15 rounds to 4, since 15 is between 9 (3^2) and 16 (4^2), but closer to 16. Therefore, the square root of 15 rounds to 4.] Teacher Reflection • What parts of the lesson do you feel went well? Why? • Did students become confused at any point in the lesson? When and why did this occur? • Who was disengaged during the class discussion time? How could you involve more students in the future? • Did you notice students making shapes on their geoboards that were not squares? What part of the "launch" or set-up phase could have been done differently? In this lesson, students use geoboards to explore the relationships between the area of a square and its side length. They also gain a numeric and geometric understanding of squaring a number and envision what the square root of a number "looks like." Learning Objectives Students will: • Estimate square roots to the nearest whole number without a calculator • Relate squaring a number and finding square root as inverse operations • Define irrational number and give examples Common Core State Standards – Practice • CCSS.Math.Practice.MP1 Make sense of problems and persevere in solving them. • CCSS.Math.Practice.MP4 Model with mathematics. • CCSS.Math.Practice.MP5 Use appropriate tools strategically. • CCSS.Math.Practice.MP7 Look for and make use of structure.
{"url":"http://illuminations.nctm.org/Lesson.aspx?id=3096","timestamp":"2014-04-16T18:56:45Z","content_type":null,"content_length":"95735","record_id":"<urn:uuid:08a42fc4-fce1-4901-8aad-f2e91f05585b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Classroom and school factors affecting mathematics achievement: a comparative study of Australia and the United States using TIMSS. Recent work on differences in mathematics achievement has highlighted the importance of classroom, teacher and school factors. The present study used data from the Third International Mathematics and Science Study (TIMSS TIMSS Trends in International Mathematics and Science Study TIMSS Third International Math and Science Study ) to look at student, classroom and school factors influencing mathematics achievement in Australia Australia (ôstrāl`yə), smallest continent, between the Indian and Pacific oceans. With the island state of Tasmania to the south, the continent makes up the Commonwealth of Australia, a federal parliamentary state (2005 est. pop. and the United States United States, officially United States of America, republic (2005 est. pop. 295,734,000), 3,539,227 sq mi (9,166,598 sq km), North America. The United States is the world's third largest country in population and the fourth largest country in area. (US). It found that classroom differences account for about one-third of the variation in student achievement in the US and over one-quarter in Australia. Most of the classroom variation in both countries was due to compositional and organisational factors, very little of it due to differences between teachers. This has important implications for policy regarding the improvement of mathematics achievement. It suggests that school systems may gain little by targeting teachers only, and need to give consideration to the role of pupil pupil: see eye. grouping practices and the effects of tracking and streaming on classroom learning environments. There is widespread interest in improving the levels of mathematics achievement in schools. Apart from the economic benefits that it is argued this would bring, by better preparing young people for the numeracy numeracy Mathematical literacy Neurology The ability to understand mathematical concepts, perform calculations and interpret and use statistical information. Cf Acalculia. demands of modern workplaces and raising the overall skill levels of the workforce, there are also social benefits tied to improving access for larger numbers of young people to post-school education and training opportunities and laying stronger foundations to skills for lifelong learning Lifelong learning is the concept that "It's never too soon or too late for learning", a philosophy that has taken root in a whole host of different organisations. Lifelong learning is attitudinal; that one can and should be open to new ideas, decisions, skills or behaviors. . The interest in raising levels of achievement has led to a focus on identifying the range of factors that shape achievement as well as understanding how these factors operate to limit or enhance the achievement of different groups of students. The impact on different groups of students is important because social differences in mathematics performance persist, despite inequalities This page lists Wikipedia articles about named mathematical inequalities. Pure mathematics • Abel's inequality • Barrow's inequality • Berger's inequality for Einstein manifolds • Bernoulli's inequality • Bernstein's inequality (mathematical analysis) in some other areas of school having declined. A study of trends in mathematics achievement over the three decades to 1996, in Australia, shows that substantial social class differences persist (Afrassa & Keeves, 1999). Similar results have been reported in the US for the same period, with differences related to social groups (measured by parental education) remaining strong (National Center for Education Standards, 2000). The evidence is a reminder that at a time when there are weakening weak·en tr. & intr.v. weak·ened, weak·en·ing, weak·ens To make or become weak or weaker. weak n. social trends on some broad indicators of educational participation, such as school retention rates, social differences in student progress and academic outcomes continue. This paper examines student, classroom and school factors influencing mathematics achievement in Australia and the US. To do this, it uses data from the Third International Mathematics and Science Study (TIMSS). A recent paper using these data has shown that, in Australia, although student background variables influence differences in achievement in mathematics, classroom and school variables also contribute substantially (Lamb & Fullarton Fullarton can refer to: People • Iain Fullarton, rugby union footballer • Jackie Fullarton, football commentator • James Fullarton, artist • Fullarton, South Australia • Fullaron, Trinidad and Tobago • Fullarton, Ontario , 2000). How much does this result hold in the US? Are the factors influencing mathematics achievement the same in both contexts? What can the relationships between teachers, classrooms, schools and student achievement in both countries inform us about policies or reforms to improve levels of mathematics achievement for all young people? School: and classroom effectiveness The early literature on school effectiveness placed an emphasis on the ability and social backgrounds of students as factors that shape academic performance, and suggested that schools had little direct effect on student achievement. Coleman Cole·man , Cy Originally Seymour Kauffman. Born 1929. American composer and theatrical producer whose best known Broadway productions include Sweet Charity (1966) and The Will Rogers Follies (1991). et al. (1966), for example, in a major study of US schools seemed to cast doubt on the possibility of improving school achievement through reforms to schools. They found that differences in school achievement reflected variations in family background, and the family backgrounds of student peers, and concluded that 'schools bring little influence to bear on a child's achievement that is independent of his background and general social context' (p. 325). A later analysis of the same dataset See data set. by Jencks and his colleagues reached the same conclusion: `our research suggests ... that the character of a school's output depends largely on a single input, namely the characteristics of the entering children. Everything else--the school budget, its policies, the characteristics of the teachers--is either secondary or completely irrelevant' (Jencks et al., 1972, p. 256). Criticisms of this early work suggested that the modelling procedures employed did not take account of the hierarchical A structure made up of different levels like a company organization chart. The higher levels have control or precedence over the lower levels. Hierarchical structures are a one-to-many relationship; each item having one or more items below it. nature of the data, and was not able to separate out accurately school, student and classroom factors (e.g. Raudenbush & Willms, 1991). More recent school effectiveness research has used multi-level modelling techniques to account for the clustering Using two or more computer systems that work together. It generally refers to multiple servers that are linked together in order to handle variable workloads or to provide continued operation in the event one fails. Each computer may be a multiprocessor system itself. effects of different types of data. The results of such studies show, according to according to 1. As stated or indicated by; on the authority of: according to historians. 2. In keeping with: according to instructions. 3. the meta-analysis meta-analysis /meta-anal·y·sis/ (met?ah-ah-nal´i-sis) a systematic method that takes data from a number of independent studies and integrates them using statistical analysis. of school effectiveness research undertaken by Bosker and Witziers (1996), that school effects account for approximately ap·prox·i·mate 1. Almost exact or correct: the approximate time of the accident. 2. 8 to 10 per cent of the variation in student achievement, and that the effects are greater for mathematics than for language. A number of studies have shown that there are substantial variations between schools (Lamb, 1997; Mortimore et al., 1988; Nuttall Nuttall may refer to: • Amy Nuttall (b. 1982), British actress • Anthony Nuttall (1937 - 2007), English literary critic • Blackman-Nuttall window • Carrie Nuttall, photographer • Charles Nuttall (1872-1934), Australian artist • Enos Nuttall (1842 - 1916), Clergyman. et al., 1989; Smith & Tomlinson Tomlinson is a surname, and may refer to: • Charles Tomlinson, British poet and translator • Charles Tomlinson (scientist) • Claire Tomlinson, presenter for Sky Sports. , 1989). Several studies have concluded that classrooms as well as schools are important and that teacher and classroom variables account for more variance The discrepancy between what a party to a lawsuit alleges will be proved in pleadings and what the party actually proves at trial. In Zoning law, an official permit to use property in a manner that departs from the way in which other property in the same locality than school variables (Scheerens, 1993; Scheerens, Vermeulen Vermeulen may refer to: • Chris Vermeulen: Australian motorcycle racer (1982 to present) • Elvis Vermeulen: French rugby union player (1998 to present) • Mark Vermeulen: Zimbabwean cricketer (1979 to present) • Matthijs Vermeulen: Dutch composer (1888 to 1967) , & Pelgrum, 1989). Schmidt et al. (1999) in their comparison of achievement across countries using TIMSS data reported that classroom-level differences accounted for a substantial amount of variation in several countries including Australia and the US. Are these differences due more to teachers, to classroom organisation, to pupil management practices or other factors? Recent work on classroom and school effects has suggested that teacher effects account for a large part of variation in mathematics achievement. In the United Kingdom, a recent study of 80 schools and 170 teachers measured achievement growth over the period of an academic year, when using start-of-year and end-of-year achievement data (Hay Mcber, 2000). Using multi-level modelling techniques, the study modelled the impact teachers had on achievement growth. The report on the work claimed that over 30 per cent of the variance in pupil progress was due to teachers. It concluded that teacher quality and teacher effectiveness, rather than other classroom, school and student factors, are large influences on pupil progress. Several Australian Australian pertaining to or originating in Australia. Australian bat lyssavirus disease see Australian bat lyssavirus disease. Australian cattle dog a medium-sized, compact working dog used for control of cattle. studies have also pointed to teachers having a major effect on student achievement. In a three-year longitudinal study longitudinal a chronological study in epidemiology which attempts to establish a relationship between an antecedent cause and a subsequent effect. See also cohort study. of educational effectiveness, known as the Victorian Victorian one reflecting an unshaken confidence in piety and temperance, as during Queen Victoria’s reign. [Am. and Br. Usage: Misc.] See : Prudery Quality Schools Project, Hill and his colleagues (Hill, 1994; Hill et al., 1996; Rowe & Hill, 1994) examined student, class/teacher and school differences in mathematics and English 1. English - (Obsolete) The source code for a program, which may be in any language, as opposed to the linkable or executable binary produced from it by a compiler. The idea behind the term is that to a real hacker, a program written in his favourite programming language is achievement. Using multi-level modelling procedures to study the interrelationships between different factors at each level--student, classroom and school--the authors found in the first phase of the study that, at the primary level, 46 per cent of the variation in mathematics was due to differences between classrooms, whereas at secondary level the rate was almost 39 per cent. Further analyses showed that between-class differences were also important in examining student growth in mathematics achievement, and that differences in achievement progress located at the classroom level ranged from 45 to 57 per cent (Hill et al., 1996; Hill & Rowe, 1998). In explaining the large classroom-level differences in student achievement in mathematics, Hill and his colleagues highlighted the role of teacher quality and teacher effectiveness. They contended that, although not fully confirmed, they had `evidence of substantial differences between teachers and between schools on teacher attitudes to their work and in particular their morale' and this supported the view that `it is primarily through the quality of teaching that effective schools make a difference' (Hill, 1994). In further work that examined the impact of teacher professional development on achievement, they again argued that differences between teachers helped explain much of the variation in mathematics achievement (Hill et al., 1996). However alternative explanations for the large classroom-level differences were also advanced by Hill and his team. They pointed to the possibility that classroom-level pupil management practices such as streaming and setting could account for the class effects. This was not pursued by the authors who stated that, in all of the schools they surveyed, the classes were of mixed ability (Hill, 1994; Rowe & Hill, 1994). Another possibility was an under-adjustment for initial differences, that is, they did not control adequately for prior achievement differences. A further explanation considered was the possibility of inconsistency in·con·sis·ten·cy n. pl. in·con·sis·ten·cies 1. The state or quality of being inconsistent. 2. Something inconsistent: many inconsistencies in your proposal. in teacher ratings used in the measure of student achievement in mathematics. This possibility was also deemed by Hill and his colleagues as unlikely to have had a major bearing, though its influence was not ruled out. However the authors did not use, or argue for the use of, more objective, independently assessed mathematics tests. Other studies have shown that contextual, variables such as student body composition and organisational policies play an important role in mathematics achievement. Teacher background attributes such as gender, number of years in teaching and educational qualifications have been shown to be important factors in student achievement (Larkin Lar·kin , Philip 1922-1985. British poet noted for his witty distrust of the modern world and self-deprecating humor, as in The Whitsun Weddings (1964). He was also a well-known jazz critic. & Keeves, 1984; Anderson Anderson, river, Canada Anderson, river, c.465 mi (750 km) long, rising in several lakes in N central Northwest Territories, Canada. It meanders north and west before receiving the Carnwath River and flowing north to Liverpool Bay, an arm of the Arctic , Ryan Ryan may refer to: Places • Division of Ryan, an electoral district in the Australian House of Representatives, in Queensland • Ryan, Iowa • Ryan, Oklahoma • Ryan Township, Pennsylvania • Ryan, New South Wales Film and television , & Shapiro Sha·pir·o , Karl Jay 1913-2000. American poet and critic known for his early poems concerning World War II and his later works in free verse. 1989), as have a variety of school effects such as school size (Lee & Smith, 1997) and mean student social composition. These studies suggest that classrooms and schools matter, as well as student background. A range of studies has examined different effects; however few have been able to use the range of contextual variables available in TIMSS. This paper uses the TIMSS data to investigate the interrelationships among different factors at the student, classroom and school levels in both the US and Australia. A key issue is to investigate whether teacher quality and classroom effectiveness account for classroom-level variation in mathematics achievement or whether there are other factors that are of more importance. To do this, we examine patterns of Grade 8 student achievement by partitioning To divide a resource or application into smaller pieces. See partition, application partitioning and PDQ. variance and using multi-level modelling procedures to estimate the amount of variance that can be explained at the student, classroom and school levels. By introducing different classroom and teacher variables, we test the extent to which factors linked to teachers and those linked to classroom organisation and practice influence achievement. If differences in mathematics achievement are heavily influenced by variations in the quality of teachers and teacher effectiveness, as the work of Hill and his colleagues suggests, then there are major policy implications for schools and school systems in terms of changing the provision and quality of teacher training, taking more care in teacher selection practices, re-shaping and investing more heavily in teacher professional development, and reforming the way in which schools deploy teachers and monitor their effectiveness. Alternatively, if other features of classrooms and schools explain more of the variation, then schools and school systems may not obtain the expected benefit in increased nmthematics achievement by targeting teachers only. Data and methods TIMSS was sponsored by the International Association for the Evaluation of Educational Achievement (IEA IEA International Energy Agency IEA International Environmental Agreements IEA International Association for the Evaluation of Educational Achievement IEA Institute of Economic Affairs IEA Inferred from Electronic Annotation IEA International Ergonomics Association ) and was conducted in 1996 (Lokan, Ford, & Greenwood Greenwood. 1 City (1990 pop. 26,265), Johnson co., central Ind.; settled 1822, inc. as a city 1960. A residential suburb of Indianapolis, Greenwood is in a retail shopping area. Manufactures include motor vehicle parts and metal products. , 1996). It set out to measure, across 45 countries, mathematics and science achievement among students at different ages and grades. In total, over half a million students from more than 30 000 classes in approximately 15 000 schools provided data. Not only were comprehensive mathematics and science tests developed for the study, there were questionnaires developed for students, their teachers and their school principals. Prior to the development of the tests, an extensive analysis of textbooks and curriculum documents was carried out. Mathematics and science curriculum developers from each country also completed questionnaires about the placement of and emphasis on a wide range of mathematics and science topics in their country's curricula. Together the data provide a unique opportunity to examine an extensive range of contextual variables that influence mathematics and science achievement. TIMSS investigated mathematics achievement at three stages of schooling with the following target populations: * Population 1: adjacent grade levels containing the largest proportion of nine-year-old students at the time of testing; * Population 2: adjacent grade levels containing the largest proportion of thirteen-year-old students at the time of testing; and * Population 3: the final year of schooling. This study uses data from the US and Australian samples of Population 2 students. For Population 2, the original TIMSS design specified spec·i·fy tr.v. spec·i·fied, spec·i·fy·ing, spec·i·fies 1. To state explicitly or in detail: specified the amount needed. 2. To include in a specification. 3. a minimum of 150 randomly selected schools per population per country, with two classes randomly selected to participate from each of the adjacent grade levels within each selected school. However, due to the cost of collecting such data, most countries were unable to achieve this position, and the US and Australia were two of only three countries which selected and tested more than one class per grade level per school. The importance of the sampling design used in the US and Australia is that it enables differences between schools to be separated from differences between classes within schools. In this way, we are able to analyse an·a·lyse v. Chiefly British Variant of analyze. analyse or US -lyze [-lysing, -lysed] or -lyzing, school and classroom differences. For the purposes of comparison, the analysis in the current paper is restricted to Grade 8 students and classes. The final sample numbers are presented in Table 1. The main aim of this analysis of the TIMSS data was to compare for the US and for Australia the relationships between student achievement in mathematics and factors at the student, classroom and school levels. Table 2 provides details of the variables that were used in the analysis. Student background variables. The sex of each student was recorded, as well as the number of people living in the student's household. A variable representing socioeconomic status socioeconomic n the position of an individual on a socio-economic scale that measures such factors as education, income, type of occupation, place of residence, and in some populations, ethnicity and religion. (SES) was computed as a weighted composite composite, alternate common name for Asteraceae or Compositae, the aster family. composite - aggregate comprising the mother's and father's level of education, the number of books in the home and the number of possessions in the home. Language background was measured as the frequency with which English was spoken at home. Family formation was based on whether or not the student lived with one parent or both. Student mediating variables A composite variable was derived de·rive v. de·rived, de·riv·ing, de·rives 1. To obtain or receive from a source. 2. to represent the student's enjoyment The exercise of a right; the possession and fruition of a right or privilege. Comfort, consolation, contentment, ease, happiness, pleasure, and satisfaction. Such includes the beneficial use, interest, and purpose to which property may be put, and implies right to profits and income of mathematics. This variable consisted of positive responses to five attitude prompts: `I usually do well in mathematics', `I like mathematics', `I enjoy learning mathematics', `Mathematics is boring', and `Mathematics is an easy subject'. A further variable was computed to represent student's perceptions of the importance of mathematics. This variable was comprised of responses to the items: `Mathematics is important to everyone's life', `I would like a job involving mathematics', `I need to do well in mathematics to get the job I want', `I need to do well in mathematics to please my parent(s)', `I need to do well in mathematics to get into the university/post-school course I prefer', and `I need to do well in mathematics to please myself'. An additional variable was created representing the amount of time spent on mathematics homework. This was based on a scale from 0 to more than 4 hours per night. Classroom variables A range of classroom variables was collected or derived for this analysis. The stream, track or set of the class was derived if setting was a practice used in the school to organise v. t. 1. Same as organize. Verb 1. organise - bring order and organization to; "Can you help me organize my files?" coordinate, organize structure - give a structure to; "I need to structure my days" mathematics classes. Mean SES was derived at the class level. A variable was derived if the classrooms within schools in the data set had the same teacher. The background attributes of teachers--gender, number of years teaching and educational qualifications--were also controlled for. Estimates of the amount of homework teachers set for classes, the extent of their reliance on a prescribed pre·scribe v. pre·scribed, pre·scrib·ing, pre·scribes 1. To set down as a rule or guide; enjoin. See Synonyms at dictate. 2. To order the use of (a medicine or other treatment). textbook textbook Informatics A treatise on a particular subject. See Bible. , and the amount of time they spent teaching mathematics were also derived. School level variables Mean SES was derived for each school to provide a control for the social composition of the school. In addition, a measure of the school size was used, ranging from schools of less than 250 students through to schools of more than 1250 students. Average class sizes, time dedicated to mathematics teaching across a school year, and school climate measured by the levels of absenteeism ab·sen·tee·ism 1. Habitual failure to appear, especially for work or other regular duty. 2. The rate of occurrence of habitual absence from work or duty. and behavioural Adj. 1. behavioural - of or relating to behavior; "behavioral sciences" behavioral disturbances were also included. Explicit school policies relating to relating to relate prep → concernant relating to relate prep → bezüglich +gen, mit Bezug auf +acc the selection of pupils (open admission from the surrounding sur·round tr.v. sur·round·ed, sur·round·ing, sur·rounds 1. To extend on all sides of simultaneously; encircle. 2. To enclose or confine on all sides so as to bar escape or outside communication. n. area, academic selection of pupils) were also variables included in the analysis. This study looks at the effects of classrooms, teachers and schools after controlling for student-level factors. An appropriate procedure for doing this is hierarchical linear modelling or HLM HLM Habitation à Loyer Modéré (France) HLM Houston Lake Mining, Inc (Val Caron, ON, Canada) HLM Heart-Lung Machine HLM Hierarchical Linear Modelling HLM Holland, Michigan (Bryk & Raudenbush, 1992). This procedure allows modelling of outcomes at several levels (e.g. student level, classroom level, school level), partitioning separately the variance at each level while controlling for the variance across levels. In the present study, the interest is on variability within and between classrooms and schools. Two sets of analyses were undertaken to measure the levels of variation, one for the US and one for Australia. The first set modelled mathematics achievement of Grade 8 students in the United States. In the analyses, several models were tested each adding successively suc·ces·sive 1. Following in uninterrupted order; consecutive: on three successive days. 2. a new group or layer of variables. The first involved fitting a variance-components model to estimate the amount of variance due to the effects of students (level 1), within classrooms (level 2), within schools (level 3) by running the models without any explanatory ex·plan·a·to·ry Serving or intended to explain: an explanatory paragraph. ex·plan variables. The second model introduced a group of student background variables comprising sex, socioeconomic status (SES), family size, birthplace birth·place The place where someone is born or where something originates. the place where someone was born or where something originated Noun 1. of parents, language background, and family formation (single parent or intact family). The third model added a set of mediating variables to the student background variables. The mediating variables included attitudes towards mathematics, views on the importance of mathematics, and time spent on mathematics homework. The fourth model contained a set of classroom composition variables relating to mean SES, stream or track, and whether the classes in Grade 8 had the same teacher or not. The next model added a set of teacher variables including the sex of the teacher, qualifications, years of teaching experience, the amount of homework the teacher sets, the amount of time they spend teaching mathematics, and the amount of time in class they teach using a set textbook. The final model added several school-level factors including the mean SES of the school, school size, average class size, student selection policy (academically selective, open admission), time dedicated to mathematics teaching, and school climate measured by student absenteeism and level of behavioural disturbances. By examining changes in the size of the variance components estimates after the addition of each group of variables, it was possible to measure the contribution of student, teacher, classroom and school-level factors to mathematics achievement. In this way, it was possible to estimate the extent to which factors linked to teachers rather than classroom composition and organisation shape differences in mathematics achievement and to what extent student-level and school-level factors influence achievement. The second set of analyses was based on data for Australia. The same sequence of models was applied. Student, classroom and school variance in mathematics achievement Table 3 presents the results of the HLM analyses for the US and Table 4 presents the results for Australia. The variance components estimates are presented in the second column. The third column presents the percentages of variance (intraclass correlations In statistics, the intraclass correlation (or the intraclass correlation coefficient^[1]) is a measure of correlation, consistency or conformity for a data set when it has multiple groups. ) in mathematics achievement located at each of the levels--student, classroom and school. The final column contains the percentages of variance explained at each level after controlling for the different groups of variables. As a first step, a fully unconditional HEIR, UNCONDITIONAL. A term used in the civil law, adopted by the Civil Code of Louisiana. Unconditional heirs are those who inherit without any reservation, or without making an inventory, whether their acceptance be express or tacit. Civ. Code of Lo. art. 878. UNCONDITIONAL. (null A character that is all 0 bits. Also written as "NUL," it is the first character in the ASCII and EBCDIC data codes. In hex, it displays and prints as 00; in decimal, it may appear as a single zero in a chart of codes, but displays and prints as a blank space. ) model was tested. This model, the equivalent of a one-way one-way 1. Moving or permitting movement in one direction only: a one-way street. 2. Providing for travel in one direction only: a one-way ticket. ANOVA anova see analysis of variance. ANOVA Analysis of variance, see there with random effects Random effects can refer to: • Random effects estimator • Random effect model , estimates variances in the outcome variable at the student, classroom and school levels. The results suggest for both the US and Australia considerable variation in mathematics achievement at the classroom and school levels. Over one-half (54.1 per cent) of the estimated variation in mathematics achievement in the US occurs at the student level. However differences between classrooms also account for a substantial amount of variance--33.8 per cent. Differences between schools accounted for the remaining 12.1 per cent of variance. This suggests a moderate though significant level of variation between schools. The results for Australia show a smaller level of variance at the classroom (27.9 per cent) and school (10.4 per cent)levels, though the results suggest that differences between classrooms and between schools are an important source of variation in mathematics achievement. The next step in the analysis involved adding the student background predictors (SES, gender, language background, family size, single parent family, birthplace of parents) to the model of mathematics achievement. This allowed differences between classrooms and schools to be adjusted for differences at the individual level. The results presented in column 4 show that differences in the background characteristics of students in the US accounted for 4.7 per cent of the estimated variance at the student level, 15.0 per cent of the variance between classrooms, and 19.5 per cent of the variance at the school level. The Australian results show a higher level of explained variance--7.4, 16.4 and 54.0 per cent, respectively. It suggests that student background factors explain more of the between-school variance in Australia than in the US. Adding the student mediating variables (time spent on homework, attitudes towards mathematics, and views on the importance of maths) in the next step substantially increased the percentages of explained variance Explained variance is part of the variance of any residual that can be attributed to a specific condition (cause). The other part of variance is unexplained variance. The higher the explained variance relative to the total variance, the stronger the statistical measure used. at the student level. When achievement is adjusted for the student background and mediating variables, the amount of variance explained at the student level increased to 12.0 per cent in the US and 19.3 per cent in Australia. At the classroom, level, the amount of variance explained increased only modestly to 15.7 per cent in the US and 27.6 per cent in Australia. The results suggest that, although the mediating variables are important to explaining student level variance, they do not add much to the understanding of classroom and school level variance. The next step involved the inclusion of the classroom composition, variables--mean SES, high stream or track classroom, low stream or track classroom, non-streamed or tracked classroom, same teacher across classrooms. This further increases the percentage of variance explained at the classroom level. The between-classroom variance explained jumped from 15.7 per cent to 64.6 per cent in the US, and from 27.6 to 74.3 per cent in Australia. It suggests that classroom organisation and composition factors are important in explaining classroom differences in student achievement. Teacher effects would appear to be quite small, at least based on the changes that occur after adding in the available teacher variables--years of teaching experience, sex of the teacher, qualifications, time spent teaching mathematics, textbook-based teaching methods, and amount of homework set. This group of variables increased the explained variance at the classroom level by only about 3 per cent in both the US and Australia. The school level variables also added little to the explained variances. The school level variables add more to the explained variance in the US than they do in Australia. The combined effects of the mean SES of the school, school size, average class size, admissions policy, and features of school climate explain roughly 13 per cent of variance between schools in the US (13 per cent) and about 6 per cent in Australia. Student, classroom and school factors shaping mathematics achievement Table 5 presents the results from the HLM analyses for the United States and Table 6 the results for Australia. At the first level of analysis, shown in the first column of Table 5, it can be seen that all of the variables, other than family size, have a significant effect on achievement in mathematics for students in the US. As has been found in previous studies, gender has a significant negative effect on mathematics achievement. That is, Grade 8 girls' achievement levels are still not equal to that of boys. Also, as has been found in previous studies, students from a higher SES background and those from two-parent rather than single-parent families single-parent family Social medicine A family unit with a mother or father and unmarried children. See Father 'factor.', Latchkey children, Quality time, Supermom. Cf Extended family, Nuclear family, Two parent advantage. tend to have higher achievement levels in mathematics. Language background is also important. Students from families that more often speak a language other than English at home tend to have lower levels of achievement than those where English is the main language. For Australia, although Grade 8 girls tend not to do as well as boys in mathematics, the differences are not significant. Similarly there are not significant differences linked to family size or family formation. The most influential variables for Australian students are SES and language background. Students from higher SES origins achieve significantly higher than those from lower SES backgrounds. Students from families that more often speak a language other than English at home do significantly worse in mathematics than those where English is the main language. The mediating variables--attitudes towards mathematics, perceived per·ceive tr.v. per·ceived, per·ceiv·ing, per·ceives 1. To become aware of directly through any of the senses, especially sight or hearing. 2. To achieve understanding of; apprehend. importance of mathematics, time spent on mathematics homework--have strong independent effects, at least in Australia (see column 3). They are influential predictors of mathematics achievement. But they not only have independent effects, they also transmit To send data over a communications line. See transfer. or relay relay, electromechanical switch operated by a flow of electricity in one circuit and controlling the flow of electricity in another circuit. A relay consists basically of an electromagnet with a soft iron bar, called an armature, held close to it. some of the effects of the different student background variables. This is evident from the drop in the sizes of the estimates for SES and family formation when the mediating variables are included in the model. The results for the mediating variables are weaker for students in the US. The estimates for time spent on mathematics homework and for attitudes towards mathematics are smaller than for Australian students. The estimate for perceived importance of mathematics is positive, though not significant. It suggests that the perceived importance of mathematics is a greater influence on mathematics achievement in Australia than in the US. This is supported by the differential increase in explained variance reported at the base of the tables. The figures show that, whereas the mediating variables increase the level of explained variance in Grade 8 mathematics achievement by approximately 14 per cent in Australia, they increase the level by only 3 per cent in the US. In summary, the differences between males and females are greater in the US than in Australia. In the US, gender differences, SES and family formation have both a direct effect on achievement and a transmitted effect through their influence on attitudes to mathematics and amount of time spent on homework. These findings reinforce re·in·force 1. To give more force or effectiveness to something; strengthen. 2. To reward an individual, especially an experimental subject, with a reinforcer subsequent to a desired response or performance. 3. previous studies showing that student background has an effect, both directly and indirectly, on student achievement in mathematics. In Australia, SES and language background are important predictors of mathematics achievement, working independently as well as through their influence on attitudes towards mathematics, perceived importance of mathematics and time spent on homework. The results presented in the previous section show that, as well as student level factors, classrooms and schools also matter. The next stages of the modelling investigate the effects of classroom variables on achievement. Tables 5 and 6 show that for the US and for Australia tracking or streaming has a large impact on mathematics achievement. There is a strong positive effect for classes in the top band in schools with streaming or tracking policies. In the US, classes in the top track or stream gain 28 points on average over classes which are in the middle track or band. The advantage in Australia is larger at 38 points. Students in the US in the lowest track or band have significantly lower results than students in the middle track or band. Tracking or streaming clearly benefits those students in the higher band classes, but leads to significantly poorer achievement in lower band classes. The achievement in classes in the lower bands or streams is moderately, though significantly, lower than classes that are not streamed or set in Australia. In the US, however, the result for non-tracked or streamed classes is not much better than that for the bottom track or stream. There are differences in the number of classes that are tracked or streamed between the countries. In Australia, 48 per cent of classes were not streamed or tracked, compared with only about 20 per cent in the US. Classroom social composition (mean SES) has strong independent effects on student achievement in mathematics, and this applies both in the US and Australia. In both countries, there are achievement advantages to being located in classrooms largely composed of students from higher SES backgrounds. The results show that the higher the mean SES composition of classes, the higher the achievement. In the US, approximately 30 per cent of the sampled classes were taught by the same teacher in each school. In Australia, the rate was about 10 per cent. The results suggest that having the same teacher does not have any effect on the results for Australia or the US. This does not support the recent research on teacher effects which has suggested that it is teacher effects rather than other classroom factors that are the major influences on mathematics achievement. If this was the case, we might have expected smaller classroom differences where classes have the same teacher. The classroom composition and organisation variables added substantially to the levels of explained variance in both countries. Addition of the pupil grouping variables and classroom composition factors increased the total variance explained from 13 to 34.7 per cent in the US, and from 24.8 to 39.7 per cent in Australia. The next step in the analysis was to add the teacher attribute (1) In relational database management, a field within a record. (2) In object technology, a single element of data. See instance attribute and static attribute. variables to the achievement models. Sex of the teacher and educational qualifications had no significant effect on student achievement. Teacher experience, as measured by years of teaching, had a small but significant positive effect in the US, suggesting that the more experienced teachers achieved better results. This did not apply in Australia. In both countries, the results suggest that classes where teachers set more homework were associated with higher levels of achievement. In Australia, there was also a positive significant impact in classrooms where the amount of time teachers spent using a prescribed textbook was greater. The results suggest that, in classes where teachers use more traditional textbook-based methods, the results are better. This did not apply in the US where the effect was negative and significant, which suggests that the results were better where teachers used alternative methods. The teacher effect variables in both countries added only marginally mar·gin·al 1. Of, relating to, located at, or constituting a margin, a border, or an edge: the marginal strip of beach; a marginal issue that had no bearing on the election results. 2. to the levels of variance in mathematics achievement. The addition of the school level factors--mean SES, school size, average class size, admissions policy, and length of time given to mathematics instruction, and school climate--also adds only a small amount to explaining total levels of variance in both countries. However these variables do contribute more to explaining school level variance in the US than in Australia. In the US, school level SES has a positive impact on mathematics achievement, which suggests that students in schools with a higher mean SES do better in mathematics than students in schools with lower levels of SES, other things equal. Social composition of the school influences mathematics achievement. What can we learn from the TIMSS data about differences in mathematics achievement? One thing we learn is that differences between classes and schools matter in both the US and Australia. Early studies examining patterns of student achievement in mathematics had concluded that schools have little impact above and beyond student intake intake /in·take/ (in-tak´) the substances, or the quantities thereof, taken in and utilized by the body. intake, n the substance or quantities thereof taken in and used by the body. factors. The results from TIMSS show, consistent with current research on school effectiveness, that not only do schools make a difference, but classrooms as well. There are strong classroom effects and modest school effects on mathematics achievement. These effects are linked to particular classroom and school level The pooling of pupil resources that are associated with the grouping of students--reflected by mean SES and stream or track--heavily influence mathematics achievement. In both the US and Australia, achievement is highest in those classes and schools with higher concentrations of students from middle-class middle class The socioeconomic class between the working class and the upper class. mid families and students in the highest track or stream. Therefore the effects of residential segregation segregation: see apartheid; integration. more broadly and school level pupil management policies more locally (policies such as setting or tracking) shape the contexts within which differences in mathematics learning and achievement develop. The findings support the view that such context setting factors are important influences. School level pupil management practices such as setting or streaming contribute to the classroom effects by shaping classroom composition. Within this context, the effects of teachers are quite modest, in contrast to the claims of other research. This is supported in the current research by the non-significant results in both countries linked to having the same teacher across different classrooms. Having the same teacher did not reduce, significantly, differences between classrooms, suggesting that composition factors and pupil grouping practices are far more influential. Policies regarding pupil management are critical. Schools which formally group students according to mathematics achievement or ability promote differences in mathematics achievement. The benefits of this practice are large for students who enter higher band or track classes. They receive substantial gains in achievement. The cost is for those students in the lower band or stream classes. They have significantly lower level of achievement compared with their top-streamed peers in the US and also their unstreamed peers in Australia. In Australia, in terms of mathematics achievement, it is better for students to be in a school that does not stream or track mathematics classrooms than in a bottom stream or track in a school where streaming or tracking is policy. It suggests that the different learning environments created through selective pupil grouping may work to inhibit inhibit /in·hib·it/ (in-hib´it) to retard, arrest, or restrain. 1. To hold back; restrain. 2. student progress in the bottom streams and accelerate it for those in the top streams. These findings do not support the view of recent research, which argues that the differences in quality of teachers and teacher effectiveness account for much of the classroom variation in mathematics achievement. Rather they support an alternative explanation, that the types of pupil grouping practices that schools employ shape the classroom learning environments in ways that affect student progress and student achievement, and these kinds of differences more significantly influence classroom effects. By this, it is not suggested that the quality of teachers does not matter or that all teachers have the same effectiveness. Teachers do matter. In the US, more experienced teachers promote higher levels of achievement. The approach they take to homework, measured by the amount of time they set for homework, has a modest but significant effect on achievement, after controlling for other factors. Those more often using less traditional textbook approaches also promote higher levels of achievement. By contrast, in Australia, teachers using more traditional approaches appeared to enhance achievement. Although these teacher effects have an impact, what the TIMSS results suggest is that the organisational and compositional features of classrooms have a more marked impact on mathematics achievement. curriculum policy international studies mathematics achievement school effectiveness socioeconomic influences teacher effectiveness Table 1 Sample sizes United States Australia Students 7087 6916 Classrooms 348 309 Schools 183 158 Table 2 Student, classroom and school variables Variable Description Student level Student background variables Sex Student's gender Language background Level of skill in language of test Family size Number of people living in student's Socioeconomic status A composite variable representing family wealth, parents' education and number of books in the home Birthplace of parents Both parents born outside the United States or Australia Single parent family Student lives with one parent Student mediating variables Time spent on homework Self-reported assessment of length of time spent doing mathematics home Attitudes towards mathematics A composite variable measuring attitudes to mathematics. Perceived importance of A composite variable reflecting the mathematics perceived importance of mathematics to the student. Classroom level Classroom composition variables Mean SES Average SES for the class Grouping practice High band Highest band or track class Middle band Middle band or track class Low band Lowest band or track class No band Setting, streaming or tracking is not used Same teacher or not Same teacher for other class(es) participating in the survey Classroom teacher variables Sex Teacher's gender Educational qualifications Teacher's qualifications Years teaching Number of years teaching Teaching practices Homework set Estimate of amount of homework the teacher sets % time teaching in maths Estimate of time spent teaching Amount of time using text- Estimate of amount of teaching time book focused on prescribed textbook School level Mean SES Average SES for the school School size Number of students enrolled Class size Average class size in maths Time on maths Time dedicated to maths teaching across a school year Pupil intake policy Academically selective Intake of students is based on academic selection Open admission Intake is not based on academic selection and is mainly based on those who live in the local area Other Selection of intake is based on non- academic criteria School climate Behavioural disturbances Percentage of students who misbehave in class Absenteeism Percentage of students who are absent without an excuse Table 3 Variance in Grade 8 mathematics achievement explained by three-level HLM models: United States, population 2, TMSS Variance between Variance within classrooms (level 1 variance) 4685.8 54.1 After controlling for: Student background variables 4466.3 Student mediating variables 4124.1 Variance between classrooms (level 2 variance) 2924.5 33.8 After controlling for: Student background variables 2485.8 Student mediating variables 2465.0 Classroom composition variables 1035.1 Classroom teacher variables 891.7 Variance between schools (level 3 variance) 1043.1 12.1 After controlling for: Student background variables 840.1 Student mediating variables 935.4 Classroom composition variables 495.1 Classroom teacher variables 559.7 School-level variables 420.5 at each Variance within classrooms (level 1 variance) After controlling for: Student background variables 4.7 Student mediating variables 12.0 Variance between classrooms (level 2 variance) After controlling for: Student background variables 15.0 Student mediating variables 15.7 Classroom composition variables 64.6 Classroom teacher variables 69.5 Variance between schools (level 3 variance) After controlling for: Student background variables 19.5 Student mediating variables 10.4 Classroom composition variables 52.5 Classroom teacher variables 46.3 School-level variables 59.7 Table 4 Variance in Grade 8 mathematics achievement explained by three-level HLM models: Australia, population 2, TIMSS Variance % Variance w/thin classrooms (level 1 variance) 5415.6 61.7 After controlling for: Student background variables 5014.2 Student mediating variables 4370.6 Variance between classrooms (level 2 variance) 2446.6 27.9 After controlling for: Student background variables 2045.7 Student mediating variables 1771.4 Classroom composition variables 627.8 Classroom teacher variables 541.7 Variance between schools (level 3 variance) 908.3 10.4 After controlling for: Student background variables 417.4 Student mediating variables 451.6 Classroom composition variables 289.0 Classroom teacher variables 258.3 School-level variables 200.9 explained at each level Voriance w/thin classrooms (level 1 variance) After controlling for: Student background variables 7.4 Student mediating variables 19.3 Variance between classrooms (level 2 variance) After controlling for: Student background variables 16.4 Student mediating variables 27.6 Classroom composition variables 74.3 Classroom teacher variables 77.9 Variance between schools (level 3 variance) After controlling for: Student background variables 54.0 Student mediating variables 50.3 Classroom composition variables 68.2 Classroom teacher variables 71.6 School-level variables 77.9 Table 5 HLM estimates of Grade 8 mathematics achievement: United States, population 2, TIMSS Level 1 Level 1 model model Student Student background mediating variables variables Intercept 488.3 *** 488.6 *** Student-level variables Background variables Female -10.7 *** -9.2 *** SES 11.1 *** 9.9 *** Language -11.2 *** -11.3 *** Parents not born in United States 6.4 ** 4.8 * Family size -1.0 * -1.2 * Single parent family -4.3 ** -3.1 * Mediating variables Time spent doing homework -3.7 *** Positive attitudes towards maths 7.0 *** Perceived importance of maths 0.4 Classroom level variables Classroom composition Mean SES Top stream or track Bottom stream or track No streaming or tracking Same teacher Teacher attributes Sex of the teacher Educational qualifications Years in teaching Amount of homework set % time teaching maths Amount of time using textbook School level variables School size Average class size Academically selective Open admission Time dedicated to maths teaching Behavioural disturbances Total variance explained Level 1 (61.T) 10.0 13.0 Level 2 (27.9) Level 3 (10.4) Level 2 Level 2 model model Classroom Classroom composition teacher variables variables Intercept 489.5 *** 489.4 *** Student-level variables Background variables Female -9.2 *** -9.1 *** SES 7.8 *** 7.7 *** Language -10.9 *** -10.7 *** Parents not born in United States 5.7 ** 5.5 * Family size -0.8 -0.8 Single parent family -2.9 * -3.0 * Mediating variables Time spent doing homework -4.3 *** -4.4 *** Positive attitudes towards maths 7.0 *** 7.0 *** Perceived importance of maths 0.4 0.4 Classroom level variables Classroom composition Mean SES 23.4 *** 22.7 *** Top stream or track 28.2 *** 27.7 *** Bottom stream or track -20.6 *** -22.4 *** No streaming or tracking -16.8 ** -16.7 ** Same teacher 5.5 4.4 Teacher attributes Sex of the teacher 4.3 Educational qualifications -2.6 Years in teaching 0.6 ** Amount of homework set 2.3 *** % time teaching maths 0.0 Amount of time using textbook -2.3 * School level variables School size Average class size Academically selective Open admission Time dedicated to maths teaching Behavioural disturbances Total variance explained Level 1 (61.T) Level 2 (27.9) 34.7 35.6 Level 3 (10.4) Level 3 Intercept 489.4 *** Student-level variables Background variables Female -9.1 *** SES 7.8 *** Language -10.4 *** Parents not born in United States 6.2 * Family size -0.8 Single parent family -2.9 * Mediating variables Time spent doing homework -4.4 *** Positive attitudes towards maths 6.9 *** Perceived importance of maths 0.4 Classroom level variables Classroom composition Mean SES 29.5 *** Top stream or track 29.2 *** Bottom stream or track -22.7 ** No streaming or tracking -18.5 ** Same teacher 4.6 Teacher attributes Sex of the teacher 4.3 Educational qualifications -2.5 Years in teaching 0.6 ** Amount of homework set 2.7 *** % time teaching maths 0.0 Amount of time using textbook -3.7 * School level variables SES 10.2 *** School size 0.0 Average class size -0.9 Academically selective -2.6 Open admission 11.4 Time dedicated to maths teaching 0.0 Behavioural disturbances -0.3 Absenteeism -0.7 Total variance explained Level 1 (61.T) Level 2 (27.9) Level 3 (10.4) 37.2 * Significant at the .10 level; ** Significant at the .05 level; *** Significant at the .01 level Table 6 HLM estimates of Grade 8 mathematics achievement: Australia, population 2, TIMSS Level 1 Level 1 model model Student Student background mediating variables variables Intercept 516.6 *** 516.0 *** Student level variables Background variables Female -2.1 1.4 SES 8.7 *** 7.5 *** Language -14.9 *** -16.7 Parents not born in Australia 2.0 0.7 Family size -1.2 -1.0 Single parent family -1.1 -0.4 Mediating variables Time spent doing homework -10.3 *** Positive attitudes towards maths 11.3 *** Perceived importance of maths 2.4 *** Classroom level variables Classroom composition Mean SES Top stream Low stream No stream Same teacher Teacher attributes Sex of the teacher Educational qualifications Years in teaching Amount of homework set Time teaching maths Amount of time using textbook School level variables School size Average class size Academically selective Open admission Time dedicated to maths teaching Behavioural disturbances Total variance explained Level 1 (61.7) 14.7 24.8 Level 2 (27.9) Level 3 (10.4) Level 2 Level 2 Level 3 model model model Classroom Classroom composition teacher School variables variables variables Intercept 516.4 *** 516.4 *** 516.5 *** Student level variables Background variables Female 0.9 0.9 1.0 SES 6.6 *** 6.6 *** 6.6 *** Language -16.3 *** -16.3 *** -16.0 *** Parents not born in Australia 1.2 0.9 1.2 Family size -0.9 -0.8 -0.8 Single parent family -0.8 -0.8 -0.8 Mediating variables Time spent doing homework -11.7 *** -12.0 *** -11.9 *** Positive attitudes towards maths 11.2 *** 11.2 *** 11.2 *** Perceived importance of maths 2.4 *** 2.4 *** 2.4 *** Classroom level variables Classroom composition Mean SES 24.6 *** 21.4 *** 22.5 *** Top stream 38.6 *** 35.6 *** 34.6 *** Low stream -45.4 *** -41.1 *** -37.3 *** No stream 0.2 0.9 0.8 Same teacher -1.5 -1.1 -0.2 Teacher attributes Sex of the teacher -0.0 -0.0 Educational qualifications 0.4 0.5 Years in teaching 0.3 0.3 Amount of homework set 3.7 *** 3.8 *** Time teaching maths 0.0 0.0 Amount of time using textbook 3.9 *** 4.1 *** School level variables SES 1.2 School size 0.0 Average class size -0.4 Academically selective 3.8 Open admission -0.8 Time dedicated to maths teaching 0.0 Behavioural disturbances -0.5 Absenteeism -0.1 Total variance explained Level 1 (61.7) Level 2 (27.9) 39.7 41.0 Level 3 (10.4) 41.7 * Significant at the .10 level; ** Significant at the .05 level; *** Significant at the .01 level An earlier version of this paper was presented at the annual meeting of the American Educational Research Association The American Educational Research Association, or AERA, was founded in 1916 as a professional organization representing educational researchers in the United States and around the world. , Seattle Seattle (sēăt`əl), city (1990 pop. 516,259), seat of King co., W Wash., built on seven hills, between Elliott Bay of Puget Sound and Lake Washington; inc. 1869. , April 10-14, 2001. Affrassa, T.H. & Keeves, J.P. (1999), Student-level factors that influence mathematics achievement of Australian students: A path analysis with comparisons over time. Paper presented at the Annual Conference of AARE Aare (är`ə) or Aar (är), longest river entirely in Switzerland, 183 mi (295 km) long, rising in the Bernese Alps and fed by several glaciers. , December December: see month. 1999, Melbourne Melbourne, city, Australia Melbourne, city (1991 pop. 2,761,995), capital of Victoria, SE Australia, on Port Phillip Bay at the mouth of the Yarra River. Melbourne, Australia's second largest city, is a rail and air hub and financial and commercial center. . Anderson, L. W., Ryan, D. W., & Shapiro, B. J. (1989). The IEA classroom environment study. Oxford: Pergamon Pergamon or Pergamum (Greek: Πέργαμος, modern day Bergama in Turkey, Press. Bosker, R. J. & Witziers, B. (1996). The magnitude magnitude, in astronomy, measure of the brightness of a star or other celestial object. The stars cataloged by Ptolemy (2d cent. A.D.), all visible with the unaided eye, were ranked on a brightness scale such that the brightest stars were of 1st magnitude and the of school effects or does it really matter which school a student attends? Paper presented at the Annual Meeting of the American Educational Research Association, New York New York, state, United States New York, Middle Atlantic state of the United States. It is bordered by Vermont, Massachusetts, Connecticut, and the Atlantic Ocean (E), New Jersey and Pennsylvania (S), Lakes Erie and Ontario and the Canadian province of . Bryk, A.S. & Raudenbush, S.W. (1992). Hierarchical linear models: Applications and data analysis methods. Newbury Newbury, town (1991 pop. 31,488), West Berkshire, S central England. In a farming region, Newbury trades in wool, malt, and farm products. Paper, furniture, and metal products are also made. In the Middle Ages the town was an important textile manufacturing center. Park, CA: Coleman, J. S., Campbell Campbell, city, United States Campbell, city (1990 pop. 36,048), Santa Clara co., W Calif., in the fertile Santa Clara valley; founded 1885, inc. 1952. , E. Q., Hobson Hobson may refer to: People with the surname Hobson: • Hobson (surname) In places: • Hobson, County Durham, a village in England • Hobson, Montana, United States See also , C. J., Partland, J., Mood, A. M., Weinfeld, F. D., & York York, former name of Toronto, Canada York, Ont.: see Toronto, Ont., Canada.York, city, England York, city (1991 pop. 123,126) and district, North Yorkshire, N England, at the confluence of the Ouse and Foss rivers. , R. L. (1966). Equality equality Generally, an ideal of uniformity in treatment or status by those in a position to affect either. Acknowledgment of the right to equality often must be coerced from the advantaged by the disadvantaged. Equality of opportunity was the founding creed of U.S. of educational opportunity. Washington Washington, town, England Washington, town (1991 pop. 48,856), Sunderland metropolitan district, NE England. Washington was designated one of the new towns in 1964 to alleviate overpopulation in the Tyneside-Wearside area. , DC: US Government Printing Office. Hay Mcber. (2000). Research into teacher effectiveness: A model of teacher effectiveness. Report commissioned by the Department for Education and Employment. London London, city, Canada London, city (1991 pop. 303,165), SE Ont., Canada, on the Thames River. The site was chosen in 1792 by Governor Simcoe to be the capital of Upper Canada, but York was made capital instead. London was settled in 1826. : Department for Education and Employment. Hill, P.W. (1994). The contribution teachers make to school effectiveness. In P.W. Hill, P. Holmes-Smith, K. Rowe, & V.J. Russell Russell, English noble family. It first appeared prominently in the reign of Henry VIII when John Russell, 1st earl of Bedford, 1486?–1555, rose to military and diplomatic importance. . (Eds.), Selected reports and papers on findings from the first phase of the Victorian Quality Schools Project. Melbourne: University of Melbourne • AsiaWeek is now discontinued. In 2006, Times Higher Education Supplement ranked the University of Melbourne 22nd in the world. Because of the drop in ranking, University of Melbourne is currently behind four Asian universities - Beijing University, , Centre for Applied Educational Research. Hill, P.W. & Rowe, K.J. (1996). Multilevel mul·ti·lev·el Having several levels: a multilevel parking garage. Adj. 1. multilevel - of a building having more than one level modelling in school effectiveness research. School Effectiveness and School Improvement, 7, 1-34. Hill, P.W. & Rowe, K.J. (1998). Modelling student progress in studies of educational effectiveness. School Effectiveness and School Improvement, 9(3), 310-333. Hill, P.W., Rowe, K.J., Holmes-Smith, P., & Russell, V.J. (1996). The Victorian Quality Schools Project: A study of school and teacher effectiveness (Report, Vol. 1). Melbourne: University of Melbourne, Centre for Applied Educational Jencks, C., Smith, M., Acland Acland is an English surname. The Aclands of Devon (often Dyke Acland: see Acland Baronets, Dyke Acland Baronets) were an influential family. • Alexander Fuller-Acland-Hood, 1st Baron St Audries • Alexander Acland Hood , H., Bane BANE. This word was formerly used to signify a malefactor. Bract. 1. 2, t. 8, c. 1. , M., Cohen cohen or kohen (Hebrew: “priest”) Jewish priest descended from Zadok (a descendant of Aaron), priest at the First Temple of Jerusalem. The biblical priesthood was hereditary and male. , D., Gintis, H., Heyns, B., & Michelson Mi·chel·son , Albert Abraham 1852-1931. German-born American physicist who with Edward Morley disproved the existence of ether, the hypothetical medium of electromagnetic waves. He won a 1907 Nobel Prize in physics. Noun 1. , S. (1972). Inequality inequality, in mathematics, statement that a mathematical expression is less than or greater than some other expression; an inequality is not as specific as an equation, but it does contain information about the expressions involved. : A reassessment Reassessment The process of re-determining the value of property or land for tax purposes. Property is usually reassessed on an annual basis. You may request a "reassessment" if you disagree with your assessment. of the effect of family and schooling in America America [for Amerigo Vespucci], the lands of the Western Hemisphere—North America, Central (or Middle) America, and South America. The world map published in 1507 by Martin Waldseemüller is the first known cartographic use of the name. . New York: Basic Books. Lamb, S.P. (1997). Access to level of mathematics study in high school: Social area and school differences. In People in mathematics education. Conference Proceedings of the twentieth annual meeting of the Mathematics Education Research Group of Australasia Australasia (ôstrəlā`zhə, –shə), islands of the South Pacific, including Australia, New Zealand, New Guinea, and adjacent islands. The term is sometimes used to include all of Oceania. (pp. 286-293). Aotearoa Aotearoa (pronounced: [aoˌteaˈroa]) is the most widely known and accepted Māori name for New Zealand. , NZ: MERGA MERGA Mathematics Education Research Group of Australasia . Lamb, S. & Fullarton, S. (2000). Classroom and teacher effects in mathematics achievement: Results from TIMMS TIMMS Trends in International Mathematics and Science Study (formerly known as the Third International Mathematics and Science Study) TIMMS TMDE (Test, Measurement, and Diagnostic Equipment) Integrated Maintenance Management System . In Mathematics education beyond 2000. Conference proceedings of the twenty-third annual meeting of the Mathematics Education Research Group of Australasia. Fremantle Fremantle (frē`măn'təl, frĭm`əntəl), city (1996 pop. 24,276), Western Australia, SW Australia, a suburb of Perth, on the Indian Ocean at the mouth of the Swan River. : MERGA. Larkin, A. I. & Keeves, J. P. (1984). The class size question: A study at different levels of analysis. Hawthorn hawthorn, any species of the genus Crataegus of the family Rosaceae (rose family), shrubs and trees widely distributed in north temperate climates and especially common in E North America. , Vic.: Australian Council for Educational Research The Australian Council for Educational Research (ACER) is a non-governmental educational research organisation based in Camberwell, Victoria and with offices in Sydney, Brisbane, Perth, Dubai and India. . Lee, V. E. & Smith, J. B. (1997). High school size: Which works best and for whom? Education Evaluation and Policy Analysis, 19(3), 205-228. Lokan, J., Ford, P., & Greenwood, L. (1996). Mathematics & science on the line: Australian Junior Secondary Students' Performance in the Third International Mathematics and Science Study (TIMSS Australia Monograph, No. 1). Melbourne: ACER. Mortimore, P., Sammons, P., Stoll Stoll is a surname, and may refer to: • Cal Stoll, American football coach • Caspar Stoll, entomologist • Clifford Stoll, American astronomer • David Stoll, American anthropologist • Günther Stoll, German television actor , L., Lewis, D., & Ecob, R. (1988). School matters: The junior years. Somerset Somerset, cities, United States 1 City (1990 pop. 10,733), seat of Pulaski co., S Ky., in a farm, coal, and limestone area of the Cumberland foothills; inc. 1810. : Open Books. National Center for Education Standards. (2000). NAEP NAEP National Assessment of Educational Progress NAEP National Association of Environmental Professionals NAEP National Association of Educational Progress NAEP National Agricultural Extension Policy NAEP Native American Employment Program 1996: Trends in academic progress. Washington, DC: US Department of Education. Nuttall, D., Goldstein Gold·stein , Joseph Leonard Born 1940. American biochemist. He shared a 1985 Nobel Prize for discoveries related to cholesterol metabolism. , H., Prosser Prosser may refer to: Places • Prosser, Washington • Prosser, Nebraska • Prosser Bay, Tasmania, Australia • Prosser River, Tasmania, Australia , R., & Rasbash, J. (1989). Differential school effectiveness. International Journal of Educational Research, 13(7), 769-776. Raudenbush, S. W. & Willms, J. D. (Eds.). (1991). Schools, classrooms and pupils: International studies of schooling from a multilevel perspective. New York: Academic Press. Rowe, K.J. & Hill, P.W. (1994). Multilevel modelling in school effectiveness research: How many levels? In P.W. Hill, P. Holmes-Smith, K. Rowe, & V.J. Russell (Eds.), Selected reports and papers on findings from the first phase of the Victorian Quality Schools Project. Melbourne: University of Melbourne, Centre for Applied Educational Research. Scheerens, J. (1993). Basic school effectiveness research: Items for a research agenda. School Effectiveness and School Improvement, 4(1), 17-36. Scheerens, J., Vermeulen, C. J. A. J., & Pelgrum, W. J. (1989). Generalizability of instructional and school effectiveness indicators across nations. International Journal of Educational Research, 13(7), 789-799. Schmidt, W. H., McKnight, C. C., Cogan Cogan is a suburb of Penarth in the Vale of Glamorgan, South Wales. It has one of four of the vale's Leisure Centre's. The Cogan railway line serves Barry, Rhoose and Bridgend and Cardiff. , L. S., Jakwerth, P. M., & Houang, R. T. (1999). Facing the consequences: Using TIMSS for a closer look at US mathematics and science education. Dordrecht Dordrecht (dôr`drĕkht) or Dort (dôrt), city (1994 pop. 113,394), South Holland prov., SW Netherlands, at the point where the Lower Merwede divides to form the Noord and Oude Maas (Old Meuse) rivers. : Kluwer. Smith, D. & Tomlinson, S. (1989). The school effect. London: Policy Studies Institute. Dr Stephen Stephen, 1097?–1154, king of England (1135–54). The son of Stephen, count of Blois and Chartres, and Adela, daughter of William I of England, he was brought up by his uncle, Henry I of England, who presented him with estates in England and France and Lamb is a Senior Research Fellow in the Department of Education Policy and Management at the University of Melbourne, Parkville, Victoria Parkville is an inner city suburb north of Melbourne, Victoria, bordered by North Melbourne to the south-west, Carlton and Carlton North to the south and east, Brunswick to the north, and Flemington to the west. It includes the postcodes 3052 and 3010 (University). 3010. Dr Sue Fullarton is a Senior Research Fellow in the Policy Research Division, Australian Council for Educational Research, Private Bag 55, Camberwell, Victoria For other uses of the name Camberwell, see Camberwell (disambiguation). Camberwell is a suburb of Melbourne, Australia, in the local municipality of the City of Boroondara. 3124. Reader Opinion
{"url":"http://www.thefreelibrary.com/Classroom+and+school+factors+affecting+mathematics+achievement%3A+a...-a093920784","timestamp":"2014-04-18T18:46:12Z","content_type":null,"content_length":"123297","record_id":"<urn:uuid:d0da660a-a4fc-4cb7-94a4-2d3c94fcb008>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Methuen ACT Tutor Find a Methuen ACT Tutor ...Sometimes, it is difficult for students to understand the relevance and importance of science in their lives and struggle with the material. As your tutor, I will help you to understand the intricacies of the subject and visualize the many connections that exist between the subtopics. I will a... 23 Subjects: including ACT Math, chemistry, writing, biology ...I used the skills and knowledge I learned in my subsequent high school and college math and science courses, as well as in my 30 year career as an electrical engineer in industry. During my first few years teaching math at Lowell High School, MA, I taught Algebra I and used its content and skill... 9 Subjects: including ACT Math, calculus, geometry, algebra 1 ...I was a CS minor and took 11 courses on the topic. I have been a software developer since graduation, working first at PayPal, then a start up in Boston, and, most recently, in my own consulting company, where I build and design applications for clients. I have owned and used Macintosh computers for the past 10 years or so. 19 Subjects: including ACT Math, Spanish, English, geometry ...I am a second year graduate student at MIT, and bilingual in French and English. I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. 16 Subjects: including ACT Math, French, elementary math, algebra 1 ...I have taught and/or tutored mathematics from basic addition and subtraction through calculus. I have also helped students prepare for the GED, SAT and MCAS tests in mathematics. All this experience has taught me that all students can learn math and that I have (and continue to develop) excelle... 14 Subjects: including ACT Math, calculus, C, linear algebra
{"url":"http://www.purplemath.com/methuen_act_tutors.php","timestamp":"2014-04-19T20:21:24Z","content_type":null,"content_length":"23509","record_id":"<urn:uuid:44f16bd9-6a38-4283-bc83-62469c9fefc0>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Plot of a Geometric Sequence and Its Partial Sums This graph plots terms of a geometric sequence as well as the partial sums of the related geometric series. You can show that both the sequence and the sums converge if and only if . THINGS TO TRY • Gamepad Controls "Plot of a Geometric Sequence and Its Partial Sums" from the Wolfram Demonstrations Project Contributed by: Aaron Dunigan AtLee
{"url":"http://demonstrations.wolfram.com/PlotOfAGeometricSequenceAndItsPartialSums/","timestamp":"2014-04-21T02:38:39Z","content_type":null,"content_length":"43427","record_id":"<urn:uuid:325f30ae-cdcd-4186-89a8-107ed31625c5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Horizontal tangent February 11th 2010, 01:54 PM #1 Oct 2009 Horizontal tangent At which point is the tangent to 2x/(x + y) = y horizontal? I found the derivative of y, which is 2y/((x+y)^2 + 2x). So if the tangent is horizontal then 2y/((1+2x)(x+y)^2)=0, right? The only thing I can come up with is the point (0,0). Is that right? Since (0,0) is NOT in the Domain, that seems like an odd answer. Further, since your only other choices are y = 0 and x NOT 0 (zero), when there is no such point on the curve, I think you're out of luck. No solution. You may find this result unsatisfactory. I advice you to look over your work again. If you graph the curve, you can see that there is no horizontal tangent line at (0,0). In fact, the point (0,0) is not even part of the domain. (If you plug (0,0) into the original equation, you get y=2x/0 which is not defined and can therefore not have a tangent, let alone a horizontal one.) Hope I helped. If you want any more help, feel free to ask. Your post wasn't there when I posted. Sorry if it was repetitive. But I agree, there is no solution to the problem. If you want me to prove it, just ask. February 11th 2010, 02:33 PM #2 MHF Contributor Aug 2007 February 11th 2010, 02:39 PM #3 February 11th 2010, 02:41 PM #4
{"url":"http://mathhelpforum.com/calculus/128395-horizontal-tangent.html","timestamp":"2014-04-19T03:28:28Z","content_type":null,"content_length":"38961","record_id":"<urn:uuid:7e1d1955-991f-4b72-8a8f-668f91ae064a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
IntroductionExtraction of the Central Axis of a TunnelDetermination of the Central Axis of a Tunnel Based on 2D ProjectionEstimation of the Boundary PointsFitting of the Bounding LinesFitting of the Central AxisGlobal Adjustment of the Central Axis Using Segment-Wise FittingCross Section Extraction Based on Quadric Parametric Surface FittingAdjustment of the Pseudo Cross-Sectional PlaneContinuous Estimation of the Cross-Sectional PointQuadric Parametric Surface ModelFitting Process Based on the Improved BaySAC AlgorithmExperimental SectionCentral Axis FittingFitting of the Central Axis Based on the 2D ProjectionGlobal Extraction of the Central Axis Using Segment-Wise FittingCross Section Extraction Based on Quadric Parametric Surface FittingComputational EfficiencyFitting AccuracyContinuous Extraction of the Cross SectionsConclusionsAuthor ContributionsConflicts of InterestReferencesFigures and Tables The tunnel point clouds are projected onto the XOY plane, from which we extract the boundary points of both sides of the tunnel. An algorithm for boundary point extraction is proposed using a moving window. Figure 1 shows a circular window with a predefined radius that is centered at the point of interest P. All points within the window are considered the neighboring points of point P. The polar angles of the neighboring points are computed relative to point P (e.g., α[1]). We then calculate the differences between consecutive polar angles. If point P is a boundary point, the difference Δα[i + 1,i] between boundary points P[i] and P[i + 1] is much larger than the difference Δα[i,i − 1] between boundary point P[i] and interior point P[i − 1]. Therefore, once the difference is greater than a predefined threshold, point P is labeled as a boundary point. The bounding lines of a tunnel usually contain segments of straight lines, curves and transition curves, which are parameterized as follows: Straight line model: X = aY + b Transition curve model: X = cY 3 + dY 2 + eY + f Curve model: X = gY 2 + hY + k where a and b are the parameters of a straight line; c, d, e, and f are the parameters of a transition curve; and g, h, and k are the parameters of a curve. The bounding line fitting process includes the estimation of multiple models. To ensure the robustness of the fitting, the RANSAC algorithm [21] is used to estimate the parameters of the three models. Instead of using as much data as possible to obtain an initial solution and attempting to eliminate the invalid data points, RANSAC uses as small an initial data set as is feasible and enlarges this set using consistent data when possible. The RANSAC paradigm contains three unspecified parameters: (1) the error tolerance, which is used to determine whether a point is compatible with the model; (2) the number of subsets to attempt; and (3) the threshold t, which is the number of compatible points and is used to define that the correct model has been found. The determination of these three parameters is discussed in the introduction to RANSAC [21]. A statistical testing algorithm is proposed to automatically detect the initial models from the extracted boundary points so the proper model is selected to fit each segment of the bounding line. The statistical testing is implemented using straight-line, transition-curve and curve models. The statistical testing process is implemented using a histogram, which illustrates the distribution of the discrete hypothesis model parameter sets that are computed during different iterations. The degree of convergence of a candidate parameter set is used as a criterion of the statistical testing. This criterion describes how the other sets converge to it and is calculated by dividing the number of converging sets by the total number of parameter sets. We construct vectors with two, four and three dimensions for each set of model parameters. The Euclidian distances between different vectors are computed to describe the deviation between the candidate parameter set and the other hypothesis model parameter sets. If the distance between a hypothesis set and the candidate set is smaller than the predefined threshold, the hypothesis set is considered the converging set of the candidate set, and the degree of convergence of the candidate set increases. In this method, the histogram of the candidate parameter sets is updated during each iteration using the newly calculated hypothesis parameter set. When the degree of convergence of a candidate parameter set reaches a predefined threshold, the candidate parameter set is detected as an initial model to fit the bounding line segment. If the degree of convergence fails to reach the threshold after a predefined number of iterations, we believe that there is no such model. To visualize the statistical results, we illustrate them as a histogram (Figure 2). The horizontal axis denotes the mean value of the model parameters, and the vertical axis represents the degree of convergence of each cell. A high degree of convergence for a parameter reflects a high probability of finding the initial model. After the initial model is detected, RANSAC is used to robustly estimate the optimized model parameters. Two, four, and three points are used to estimate the model parameters to fit a straight line, a transition curve and a curve, respectively. The criterion used to identify outliers is based on the deviations of the tested points from the fitted model. The inlier bounding points of a certain model are classified as a segment that is used in the following global optimization. The final optimal parameters are computed by the least-squares adjustment using the obtained inlier points. After fitting the bounding lines, the boundary points are evenly resampled. To extract the central axis of the tunnel, the normal vector V[l] of the left bounding curve at boundary point P[l] is determined (Figure 3). A straight line orthogonal to the normal vector reaches the right bounding curve from P[l] and generates point P[l]′. Theoretically, the radial line from point P[l]′ that is orthogonal to V[l]′ reaches the left bounding curve at point P[l], so the extracted central-axis point is the midpoint of the line P[l] P[l]′. However, because the bounding curves are subject to errors that are generated from the fitting processes, the radial orthogonal to V[l]′ produces point P[l]″ instead of point P[l]. Figure 3 shows that M[l]′ and M[l]″ are the midpoints of P[l] P[l]′ and P[l]′ P[l]″, respectively. The extracted central-axis point is determined as M[l], which is the average of points M[l]′ and M[l]″. The same process is implemented from boundary point P[r] on the right bounding curve to extract the point on the central axis as point M[r]. Based on the extracted central-axis points, the presented strategy to fit a bounding line is used to generate the central axis. Because the extraction of the central axis is implemented on the XOY plane, the height of the central axis is determined as the average height of the tunnel points. Because the extraction of the segments of the bounding lines and the central axis on the XOY plane using the three models may suffer from noise in the tunnel points, there may be deviations in the overlapping parts of adjacent fitted models (Figure 4). Therefore, we propose a global extraction algorithm to minimize the deviations. To maintain consistency between adjacent fitted models, the divided segments overlap each other somewhat, and a global least-squares adjustment is developed to implement the multiple model fitting of all of the segments together by minimizing the deviations in the overlapping parts of adjacent fitted models. Using Equations (1)–(3), the constraints are derived between a straight line, a transition curve and a curve, respectively, and are added to the adjustment model. For example, Equation (4) parameterizes the constraint between a straight line and a transition curve: a i Y + b i − ( c j Y 3 + d j Y 2 + e j Y + f j ) = 0where a[i] and b[i] are the line parameters of segment i, c[j], d[j], e[j], f[j] are the transition curve parameters of segment j, and Y is the Y coordinate of a point in the overlap region between segments i and j. Equation (4) describes the constraint that the X coordinates computed by any Y coordinate in the overlap region between segments i and j using the model parameters of segments i and j are theoretically equal. The coefficient matrix of the observation and constraint equations of the global least-squares adjustment is derived in Equation (5): B = [ B l B tc B c C ]where B l = [ B l 1 0 0 0 0 B l 2 0 0 0 0 ⋯ 0 0 0 0 B l n 1 ] , B c = [ B c 1 0 0 0 0 B c 2 0 0 0 0 ⋯ 0 0 0 0 B c n 2 ] , B tc = [ B tc 1 0 0 0 0 B tc 2 0 0 0 0 ⋯ 0 0 0 0 B tc n 3 ] , B l i = [ y l i , 1 1 y i i , 2 1 ⋮ 1 y l i , m 1 ] , B c i = [ y c i , 1 2 y c i , 1 1 y c i , 2 2 y c i , 2 1 ⋮ ⋮ ⋮ y c i , m 2 y c i , m 1 ] and B tc i = [ y tc i , 1 3 y tc i , 1 2 y tc i , 1 1 y tc i , 2 3 y tc i , 2 2 y tc i , 2 1 ⋮ ⋮ ⋮ ⋮ y tc i , m 3 y tc i , m 2 y tc i , m 1 ] ,which are derived from Equations (1)–(3) for segment i, m denotes the number of the points on segment i, and n = n1 + n2 + n3; C = [ C 11 C 12 0 0 0 0 0 0 C 22 C 23 0 ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ 0 0 0 0 0 0 0 0 C ( n − 1 ) ( n − 1 ) C ( n − 1 ) n ]where C ij = [ y i 1 , j ( j + 1 ) 2 y i 1 , j ( j + 1 ) 1 y i 2 , j ( j + 1 ) 2 y i 2 , j ( j + 1 ) 1 ⋮ ⋮ ⋮ y ik , j ( j + 1 ) 2 y ik , j ( j + 1 ) 1 ] , or C ij = [ y i 1 , j ( j + 1 ) 3 y i 1 , j ( j + 1 ) 2 y i 1 , j ( j + 1 ) 1 y i 2 , j ( j + 1 ) 3 y i 2 , j ( j + 1 ) 2 y i 2 , j ( j + 1 ) 1 ⋮ ⋮ ⋮ ⋮ y ik , j ( j + 1 ) 3 y ik , j ( j + 1 ) 2 y ik , j ( j + 1 ) 1 ] , or C ij = [ y i 1 , j ( j + 1 ) 1 y i 2 , j ( j + 1 ) 1 ⋮ ⋮ y ik , j ( j + 1 ) 1 ] , and C i ( j + 1 ) = − C ijwhich are derived from Equation (5) for the overlap region between segment j and segment j + 1. The form of C[ij] depends on the models of the two overlapping segments. In the proposed global least-squares adjustment system, as a constraint equation, Equation (4) is weighted with a large value (e.g., 10) instead of 1 as in an observation equation. Based on the coefficient matrix B, we calculate the optimized parameters of the bounding line segments by following the least-squares strategy. After the bounding lines are fitted, the method presented in Section 2.1 is implemented to extract the central-axis points, which we use to generate the globally optimized central axis using the proposed global least-squares adjustment system.
{"url":"http://www.mdpi.com/2072-4292/6/1/857/xml","timestamp":"2014-04-21T16:16:00Z","content_type":null,"content_length":"108104","record_id":"<urn:uuid:db9b810f-63bf-4328-9932-2fd0d5d4ee90>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Modelling Time Series Count Data: An Autoregressive Conditional Poisson Model Heinen, Andreas (2003): Modelling Time Series Count Data: An Autoregressive Conditional Poisson Model. Download (455Kb) | Preview This paper introduces and evaluates new models for time series count data. The Autoregressive Conditional Poisson model (ACP) makes it possible to deal with issues of discreteness, overdispersion (variance greater than the mean) and serial correlation. A fully parametric approach is taken and a marginal distribution for the counts is specified, where conditional on past observations the mean is autoregressive. This enables to attain improved inference on coefficients of exogenous regressors relative to static Poisson regression, which is the main concern of the existing literature, while modelling the serial correlation in a flexible way. A variety of models, based on the double Poisson distribution of Efron (1986) is introduced, which in a first step introduce an additional dispersion parameter and in a second step make this dispersion parameter time-varying. All models are estimated using maximum likelihood which makes the usual tests available. In this framework autocorrelation can be tested with a straightforward likelihood ratio test, whose simplicity is in sharp contrast with test procedures in the latent variable time series count model of Zeger (1988). The models are applied to the time series of monthly polio cases in the U.S between 1970 and 1983 as well as to the daily number of price change durations of :75$ on the IBM stock. A .75$ price change duration is defined as the time it takes the stock price to move by at least .75$. The variable of interest is the daily number of such durations, which is a measure of intradaily volatility, since the more volatile the stock price is within a day, the larger the counts will be. The ACP models provide good density forecasts of this measure of volatility. Item Type: MPRA Paper Original Modelling Time Series Count Data: An Autoregressive Conditional Poisson Model Language: English Keywords: Forecast; volatility; transactions data G - Financial Economics > G1 - General Financial Markets Subjects: C - Mathematical and Quantitative Methods > C5 - Econometric Modeling > C53 - Forecasting and Prediction Methods; Simulation Methods C - Mathematical and Quantitative Methods > C2 - Single Equation Models; Single Variables > C25 - Discrete Regression and Qualitative Choice Models; Discrete Regressors; Proportions Item ID: 8113 Depositing Heinen Date 07. Apr 2008 00:28 Last 11. Feb 2013 21:15 Bollerslev, Tim, 1986, Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics 52, 5{59. , Robert F. Engle, and Daniel B. Nelson, 1994, Arch models, in Robert F. Engle, and Daniel L. McFadden, ed.: Handbook of Econometrics, Volume 4 (Elsevier Science: Amsterdam, North-Holland). BrÄannÄas, Kurt, and Per Johansson, 1994, Time series count data regression, Communica- tions in Statistics: Theory and Methods 23, 2907{2925. Cameron, A. Colin, and Pravin K. Trivedi, 1998, Regression Analysis of Count Data (Cam- bridge University Press: Cambridge). Cameron, Colin A., and Pravin K. Trivedi, 1996, Count data models for ¯nancial data, in Maddala G.S., and C.R. Rao, ed.: Handbook of Statistics, Volume 14, Statistical Methods in Finance (Elsevier Science: Amsterdam, North-Holland). Campbell, M.J., 1994, Time series regression for counts: an investigation into the relation- ship between sudden infant death syndrome and environmental temperature, Journal of the Royal Statistical Society A 157, 191{208. Chang, Tiao J., M.L. Kavvas, and J.W. Delleur, 1984, Daily precipitation modeling by discrete autoregressive moving average processes, Water Resources Research 20, 565{ 580. Davis, Richard A., William Dunsmuir, and YinWang, 2000, On autocorrelation in a poisson regression model, Biometrika 87, 491{505. Diebold, Francis X., Todd A. Gunther, and Anthony S. Tay, 1998, Evaluating density fore- casts with applications to ¯nancial risk management, International Economic Review 39, 863{883. Efron, Bradley, 1986, Double exponential families and their use in generalized linear regres- sion, Journal of the American Statistical Association 81, 709{721. Engle, Robert, 1982, Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. in°ation, Econometrica 50, 987{1008. Engle, Robert F., References: and Je®rey Russell, 1998, Autoregressive conditional duration: A new model for irregularly spaced transaction data, Econometrica 66, 1127,1162. Fahrmeir, Ludwig, and Gerhard Tutz, 1994, Multivariate Statistical Modeling Based on Generalized Linear Models (Springer-Verlag: New York). Gurmu, Shiferaw, and Pravin K. Trivedi, 1993, Variable augmentation speci¯cation tests in the exponential family, Econometric Theory 9, 94{113. Harvey, A.C., and C. Fernandes, 1989, Time series models for count or qualitative obser- vations, Journal of Business and Economic Statistics 7, 407{417. Johansson, Per, 1996, Speed limitation and motorway casualties: a time series count data regression approach, Accident Analysis and Prevention 28, 73{87. Jorgensen, Bent, Soren Lundbye-Christensen, Peter Xue-Kun Song, and Li Sun, 1999, A state space model for multivariate longitudinal count data, Biometrika 96, 169{181. Lee, Charles M., and Mark J. Ready, 1991, Inferring trade direction from intraday data, Journal of Finance 66, 733{746. MacDonald, Iain L., and Walter Zucchini, 1997, Hidden Markov and Other Models for Discrete-valued Time Series (Chapman and Hall: London). McKenzie, Ed., 1985, Some simple models for discrete variate time series, Water Resources Bulletin 21, 645{650. Rydberg, Tina H., and Neil Shephard, 1998, A modeling framework for the prices and times of trades on the nyse, To appear in Nonlinear and nonstationary signal processing edited by W.J. Fitzgerald, R.L. Smith, A.T. Walden and P.C. Young. Cambridge University Press, 2000. , 1999a, Dynamics of trade-by-trade movements decomposition and models,Working paper, Nu±eld College, Oxford. , 1999b, Modelling trade-by-trade price movements of multiple assets using multi- variate compound poisson processes, Working paper, Nu±eld College, Oxford. Zeger, Scott L., 1988, A regression model for time series of counts, Biometrika 75, 621{629. , and Bahjat Qaqish, 1988, Markov regression models for time series: A quasi- likelihood approach, Biometrics 44, 1019 URI: http://mpra.ub.uni-muenchen.de/id/eprint/8113
{"url":"http://mpra.ub.uni-muenchen.de/8113/","timestamp":"2014-04-18T23:17:37Z","content_type":null,"content_length":"28526","record_id":"<urn:uuid:07a273d0-e03a-450e-8cfe-ed927c78e2f7>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic Formula Calculator Welcome to the online quadratic formula calculator program. It uses JavaScript to calculate the root of a quadratic equation in the form ax^2 + bx + c. Enter the values of a, b, and c, and the values of x will automatically be calculated using the quadratic formula, x = (-b ± √(b^2 - 4ac)) ÷ 2a. Note: this calculator will also solve quadratic equations with complex roots, so if you don't know what those are, just ignore the answers with the letter "i" in them. High school teachers may have told you that taking the square root of a negative number is impossible, but they are either wrong or lying.
{"url":"http://quadratic-formula-calculator.com/","timestamp":"2014-04-18T03:33:18Z","content_type":null,"content_length":"5997","record_id":"<urn:uuid:cc4b9737-5407-4cf3-bea9-a1d4ac30f4f9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Cardioid Valentine's Card (1) Cardioid curve can be generated using two points on the same circle such that: - both points start at the same position; - both points are moving on the circle in the same direction; - one of the points is moving twice as fast as the other one. 1. Right click the slider and select "Animation On" tool. 2. Allow the cardioid to be fully generated. 3. Turn off the "Animation On" tool by right clicking on the slider. 4. Right click on each of the objects that you want to hide and turn off the "Show Object" tool. 5. Your Valentine's Card is ready to print! [1] Alfinio Flores, A Rhytmic Approach to Geometry, Mathematics Teaching the Middle School, Vol. 7, No. 7, (MARCH 2002), pp. 378-383. Violeta Vasilevska, Created with GeoGebra
{"url":"http://www.uvu.edu/csh/mathematics/mgr/schedule/Cardioid_1.html","timestamp":"2014-04-19T22:06:15Z","content_type":null,"content_length":"7456","record_id":"<urn:uuid:a298469a-cff6-4bef-a65c-d5c289d08166>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
little engineering/little calc March 7th 2009, 04:54 AM #1 Mar 2009 little engineering/little calc In dealing with metal surfaces that undergo oxidation, the thickness of the oxide film has been found to increase exponentially with time, such that d=dmax (1-e^(-kt)), t>2000 where d=thickness of oxide film and t=time. This expression is the result of integrating an ordinary differential equation. I'm trying to derive the Ordinary differential equation and the associated initial condition needed that produces d=dmax (1-e^(-kt)) Any help for how to approach this is much appreciated! In dealing with metal surfaces that undergo oxidation, the thickness of the oxide film has been found to increase exponentially with time, such that d=dmax (1-e^(-kt)), t>2000 where d=thickness of oxide film and t=time. This expression is the result of integrating an ordinary differential equation. I'm trying to derive the Ordinary differential equation and the associated initial condition needed that produces d=dmax (1-e^(-kt)) Any help for how to approach this is much appreciated! $d(t)=d_{max} (1-e^{-kt})$ Then differentiating: $d'(t)=d_{max} k e^{-kt}$ and $d(0)=0$. March 7th 2009, 11:17 PM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/advanced-applied-math/77322-little-engineering-little-calc.html","timestamp":"2014-04-17T01:00:56Z","content_type":null,"content_length":"34245","record_id":"<urn:uuid:a7e07fce-518e-4de9-8a79-dc77dd394602>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Frederick, CO Prealgebra Tutor Find a Frederick, CO Prealgebra Tutor ...I am organized, prompt, and hate wasting someone's time and money. I also embrace technology to make learning more effective and efficient. Send me a note and we'll get started!I use the flipped model here, where I will assign a short youtube video on the topic we are working on. 41 Subjects: including prealgebra, reading, Spanish, English ...I have tutored Math and Statistics, professionally and privately, for 15 years. I am proficient in all levels of math from Algebra and Geometry through Calculus, Differential Equations, and Linear Algebra. I can also teach Intro Statistics and Logic. 11 Subjects: including prealgebra, calculus, geometry, statistics ...Additionally to the above experience I have had as a teacher and tutor I also have published learning manuals and internal confidential papers during my professional career as both an engineer and as a scientist. I was a key liaison knowledge conduit for the research labs. My mission was to tr... 36 Subjects: including prealgebra, chemistry, physics, calculus Hi, I'm Leslie! I am a mathematics and writing tutor with three years of experience teaching and tutoring. I spent two years teaching mathematics and physics at a secondary school in rural 27 Subjects: including prealgebra, reading, writing, geometry My original training was at Colorado School of Mines with a Major (Bachelor's and Master's degree) in Chemical Engineering and Minor in Biological Engineering Life Sciences. While attending college, I ran an "academic excellence workshop" for Colorado School of Mines students (by recommendation of ... 7 Subjects: including prealgebra, chemistry, algebra 1, algebra 2 Related Frederick, CO Tutors Frederick, CO Accounting Tutors Frederick, CO ACT Tutors Frederick, CO Algebra Tutors Frederick, CO Algebra 2 Tutors Frederick, CO Calculus Tutors Frederick, CO Geometry Tutors Frederick, CO Math Tutors Frederick, CO Prealgebra Tutors Frederick, CO Precalculus Tutors Frederick, CO SAT Tutors Frederick, CO SAT Math Tutors Frederick, CO Science Tutors Frederick, CO Statistics Tutors Frederick, CO Trigonometry Tutors Nearby Cities With prealgebra Tutor Brighton, CO prealgebra Tutors Dacono prealgebra Tutors East Lake, CO prealgebra Tutors Eastlake, CO prealgebra Tutors Erie, CO prealgebra Tutors Evergreen, CO prealgebra Tutors Firestone prealgebra Tutors Fort Lupton prealgebra Tutors Johnstown, CO prealgebra Tutors Lafayette, CO prealgebra Tutors Longmont prealgebra Tutors Louisville, CO prealgebra Tutors Mead, CO prealgebra Tutors Platteville, CO prealgebra Tutors Superior, CO prealgebra Tutors
{"url":"http://www.purplemath.com/Frederick_CO_Prealgebra_tutors.php","timestamp":"2014-04-20T04:33:06Z","content_type":null,"content_length":"24036","record_id":"<urn:uuid:74f34aa7-994c-4d7d-8caf-1ee416a8baa9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Lafayette, CA Science Tutor Find a Lafayette, CA Science Tutor ...I get results.I am proficient in all major facets of algebra 1, including plotting points and graphing linear equations; solving systems of equations; factoring polynomials; exponent rules; order of operations and so on. I have taught students at all levels of competency and comfort with these concepts. I earned a B.A. in Philosophy (Cum Laude) from the University of California-Santa Cruz. 29 Subjects: including ACT Science, philosophy, reading, English ...I'm currently a computer science PhD student at Cal, studying computer vision, and regularly coding in MATLAB for my research. I'm familiar with Simulink, many of the toolboxes (e.g., math, stats, optimization, signal processing, image processing, computer vision, etc.), and have coded numerous GUIs from scratch. I have a lot of experience helping others debug their code. 27 Subjects: including physics, chemistry, biology, physical science ...I have helped a number of youngsters -- from early grades through middle school -- improve their handwriting. Use of model-tracing sheets and a molded "grip" on the end of the pencil can be helpful devices for many, but not all, students. It's exciting to help young writers learn cursive, as th... 49 Subjects: including philosophy, sociology, psychology, GRE ...I believe that the best way to learn and develop new skills is through practice with positive re-enforcement. As a student is introduced to a new concept, I focus on making sure the student has fully comprehended each step before moving on to the next topic. Practice is key! 53 Subjects: including physical science, nursing, biology, anthropology ...I took private lessons when I was in middle and high school. I have been in honor band and marching band all throughout middle and high school. I even participated in the pit band and extra band groups throughout high school. 34 Subjects: including ACT Science, psychology, physics, calculus Related Lafayette, CA Tutors Lafayette, CA Accounting Tutors Lafayette, CA ACT Tutors Lafayette, CA Algebra Tutors Lafayette, CA Algebra 2 Tutors Lafayette, CA Calculus Tutors Lafayette, CA Geometry Tutors Lafayette, CA Math Tutors Lafayette, CA Prealgebra Tutors Lafayette, CA Precalculus Tutors Lafayette, CA SAT Tutors Lafayette, CA SAT Math Tutors Lafayette, CA Science Tutors Lafayette, CA Statistics Tutors Lafayette, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Lafayette_CA_Science_tutors.php","timestamp":"2014-04-18T15:48:43Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:ae468179-d9f9-4983-85f0-6a2a604ef479>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Appel on 4CT proof Carl G. Jockusch jockusch at math.uiuc.edu Thu Dec 10 11:41:36 EST 1998 I am pleased to forward (with his permission) the following message from Ken Appel, who has read some of the recent fom discussion on the proof of the four-color theorem. Carl Jockusch >From kia at oregano.unh.edu Thu Dec 10 09:16:22 1998 From: Kenneth Appel <kia at oregano.unh.edu> Subject: fom One of the things that I find lacking from most discussions of the proof of the Four Color Theorem is what I might call the "inevitability" of the argument. I think that many proofs in mathematics find easier acceptance because of the intuitive certainty on the point of most mathematicians that they are true. Thus, I would like to describe the proof from the point of view of what might legitimately dismissed as "semi-religious" reasoning but what really, to my mind, motivates the belief that there is a proof in Erdos' "God's Book". The proofs, ours and the more recent ones, depend on the following two Thesis 1. There are many acceptable classes of "reducible configurations" on which such proofs can be based (for historical reasons only Kempe's C and D reducible configurations that essentially date back directly to Birkhoff's work have been used), and these configurations appear to be relatively dense among those that satisfy Heesch's criteria and that we call "geographically good". Thesis 2. Looking at the intuitive electrical model, due in its most sophisticated form to Haken, in a large dual triangulation there must be many localities of positive charge and in many of them there will be reducible configurations, many of which will be very unpleasant to actually show reducible. These theses are really what gives one confidence that if there are errors in the presented arguments these errors are just errors of presentation and not errors that lead to the invalidity of the underlying understanding of the problem. It is totally maddening that none of us seem to understand reducibility well enough to prove good general theorems about useful enough classes of reducible configurations and thus computers must be used to show each individual configuration reducible. It is totally frustrating that it is becoming intuitively clear that almost any reasonable use of the discharging procedures will work and that the collection of reasonable unavoidable sets is huge. With this as background, it is almost as frustrating to depend on specific verifications of unavoidable sets of reducible configurations as it would be to insist on finding ten spots in the Dutch dikes to use pressure gauges to show that the Netherlands would be under water if there were no structure of dikes. I know of no other area in mathematics that the proof of a theorem has had to be made by such artificial means and the true intuition of why the theorem is true has been so poorly communicated. As a member of ASL for 42 years I am totally embarrassed to make such a contribution to the discussion. I hope that I am not drummed out as a result. Ken Appel More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-December/002476.html","timestamp":"2014-04-18T18:16:40Z","content_type":null,"content_length":"5490","record_id":"<urn:uuid:e340ebc3-8b6f-4218-aabe-f853a0d8712e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Re: distance data missing values [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: RE: Re: distance data missing values From "Nick Cox" <n.j.cox@durham.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject st: RE: Re: distance data missing values Date Fri, 17 Oct 2008 12:29:13 +0100 In that spirit: I take that the only issue is filling in gaps for pairs of trading partners, i.e. the distance between A and B is necessarily the same as the distance between A and B. Then I don't see any need for a file fandango: gen first = cond(Exporter < Importer, Exporter, Importer) gen second = cond(Exporter < Importer, Importer, Exporter) egen pairs = group(first second) bysort pairs (distance) : replace distance = distance[1] if Martin Weiss This question is quite close to: Might want to try your luck there... Ermal Hitaj > I have trade distance data in a long form. I have missing values as > follows. > Exporter Importer Distance > A B 1000km > B A . > I need to substitute the missing values with the distance value from > case where country A is the exporter and B is the importer. > Obviosly, there's a lot of countries (and a lot of products). * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-10/msg00923.html","timestamp":"2014-04-20T18:27:57Z","content_type":null,"content_length":"7049","record_id":"<urn:uuid:515c7ab5-6cd3-4159-bd56-26ec5682a9a7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Moschovakis, Yiannis N. - Department of Mathematics, University of California at Los Angeles • IS THE EUCLIDEAN ALGORITHM OPTIMAL AMONG ITS PEERS? LOU VAN DEN DRIES AND YIANNIS N. MOSCHOVAKIS • sense and denotation as algorithm and value 249 [30] , Sense in Frege, in Selected Essays [29], pp. 55--64. • What Is an Algorithm? Yiannis N. Moschovakis # • HORNER'S RULE IS OPTIMAL FOR POLYNOMIAL NULLITY YIANNIS N. MOSCHOVAKIS • A gametheoretic, concurrent and fair model of the typed calculus, with full recursion • The Journal of Symbolic Logic Volume 63, Number 2, June 1998 • Kleene's Amazing Second Recursion Theorem Extended Abstract • Two aspects of situated meaning Eleni Kalyvianaki and Yiannis N. Moschovakis • Recursion and Complexity Yiannis N. Moschovakis • DETAILED PROOF OF THEOREM 4.1 IN SENSE AND DENOTATION AS ALGORITHM AND VALUE • On founding the theory of algorithms Yiannis N. Moschovakis • Classical descriptive set theory as a refinement of effective descriptive set theory • The Bulletin of Symbolic Logic Volume 16, Number 2, June 2010 • Powerdomains, Powerstructures and Fairness Yiannis N. Moschovakis1 and Glen T. Whitney2? • Y I ANN I S N . MOSCHOVAK I S THE LOGIC OF FUNCTIONAL RECURSION • A mathematical modeling of pure, recursive algorithms • Arithmetic Complexity LOU VAN DEN DRIES • The Logic of Recursive Equations Author(s): A. J. C. Hurkens, Monica McArthur, Yiannis N. Moschovakis, Lawrence S. Moss, • On founding the theory of algorithms Yiannis N. Moschovakis • The Bulletin of Symbolic Logic Volume 10, Number 3, Sept. 2004 • A game-theoretic, concurrent and fair model of the typed -calculus, with full recursion • What Is an Algorithm? Yiannis N. Moschovakis
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/05/677.html","timestamp":"2014-04-21T02:06:44Z","content_type":null,"content_length":"10722","record_id":"<urn:uuid:8f496804-7848-4d12-96ff-b2102bfefd0e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenMx - Advanced Structural Equation Modeling The regression example (2.1.1 Simple Regression) states y = β[0] + β [1] ∗ x + ϵ Then in the text below the parameters are called " β0 , β1 , σ2, ϵ , and the mean and variance of x". And then in the diagram we see σ^2[x] ,σ^2[y], β[y], β[yz] and μ[x]. Much easier to follow with one set of conventions rather than three. And where mapping of conventions is necessary, it should be explicit (i.e., "and the mean and variance of x (μ[x] and σ^2[x] respectively) )" Thu, 08/13/2009 - 21:06 Nice flow. I like it. I Nice flow. I like it. I wonder how we should treat these kinds of edits. It might be nice if we started a set of documentation as wiki pages so we could be working in parallel on them. Fri, 08/21/2009 - 12:46 Can that be made to happen? Can that be made to happen? Would be a very good idea at this early stage, I think Thu, 08/13/2009 - 13:11 PS: I would add a helper PS: I would add a helper equation there along the lines of "In regression we are finding the line that best relates our dependent variable (Y) to the independent variable (X), so we will typically want to know both the slope (how much Y increases for each increment in X) and possibly the intercept as well (how much Y we have even when X is 0). This is traditionally expressed as Y = b*X + C In the conventions of statisticians, the intercept is termed βo, the slope β1. In SEM, we also model the error (epsilon: ϵ) inherent in our measurement of x, giving us: y = β0 + β 1 ∗ x + ϵ In R you may be used to running this as myModel = lm(y~x, data= myRegData); Here we will implement this in OpenMx...
{"url":"http://openmx.psyc.virginia.edu/thread/69","timestamp":"2014-04-19T04:28:46Z","content_type":null,"content_length":"30368","record_id":"<urn:uuid:ae968ef6-83ad-4baa-b864-b4b81aebf80f>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
\documentstyle{amsart} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{defn}[thm]{Definition} \newtheorem{exmp}[thm]{Example} \ newtheorem{cor}[thm]{Corollary} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \newtheorem{claim}{Claim} \begin{document} {\noindent\small {Electronic Journal of Differential Equations, Vol. 1995(1995), No. 13, pp. 1--17.}\newline ISSN: 1072-6691, URL: http://ejde.math.swt.edu (147.26.103.110)\newline telnet (login: ejde), ftp, and gopher: ejde.math.swt.edu or ejde.math.unt.edu} \thanks {\copyright 1995 Southwest Texas State University and University of North Texas.} \vspace{1.5cm} \title[\hfilneg EJDE--1995/13\hfil Dichotomy]{DICHOTOMY AND $H^{\infty}$ FUNCTIONAL CALCULI} \author [R. DeLaubenfels \& Y. Latushkin \hfil EJDE--1995/13\hfilneg] {R.~DeLaubenfels \& Y. Latushkin} \address{R. DeLaubenfels \newline Scientia Research Institute\newline P.O. Box 988\newline Athens, OH 45701} \email{72260.2403@@compuserve.com} \address{Y. Latushkin\newline Department of Mathematics\newline University of Missouri\newline Columbia MO 65211} %\email{??} \date{} \thanks{Submitted August 1, 1995. Published September 21, 1995.} \thanks{Second author supported in part by NSF grant DMS-9400518.} \subjclass{47D05, 47A60} \keywords{Abstract Cauchy problem, operator semigroups, \ newline\indent exponential dichotomy, functional calculi} \begin{abstract} Dichotomy for the abstract Cauchy problem with any densely defined closed operator on a Banach space is studied. We give conditions under which an operator with an $H^\infty$ functional calculus has dichotomy. For the operators with imaginary axis contained in the resolvent set and with polynomial growth of the resolvent along the axis we prove the existence of dichotomy on subspaces and superspaces. Applications to the dichotomy of operators on $L_p$-spaces are given. The principle of linearized instability for nonlinear equations is proved. \end{abstract} \maketitle % \newcommand{\re}{\operatorname{Re}} \newcommand{\im}{{\mbox{ Im }}} \newcommand{\gs}{g(s,u(s))} \newcommand{\ImP}{{\mbox{ Im }P}} \newcommand{\ImQ}{{\mbox{ Im }(I-P)}} % \section{INTRODUCTION} In the present paper we use methods from \cite{dL2,dL3} to study dichotomies for solutions to the abstract Cauchy problem $$\label {(ACP)} \frac{d}{dt}u(t, x) = A u(t, x) \quad u(0, x) = x \in X, \quad t \geq 0$$ with a closed densely defined operator $A$ on a Banach space $X$. By a {\it solution} of \eqref{(ACP)} we will mean a classical solution, that is, $t \mapsto u(t, x) \in C([0, \infty), [\cal D(A)]) \cap C^1([0, \infty), X)$. Dichotomy means the existence of a bounded projection, $P$, such that the solutions that start in $\im(P)$ decay to zero and the solutions that start in $\im(I-P)$ are unbounded. Dichotomy and, in particular, exponential dichotomy is one of the main tools in the study of linear differential equations in Banach spaces, linearized instability for nonlinear equations, existence of invariant and center manifolds, etc. Due to the importance of the subject the literature on dichotomy is vast; besides the classical books \cite{DK,Hale,Henry,MS}, we mention here more recent papers \cite{BGK,Chow,LatMSR,SS} and \cite{SellBibl}, where one can find further references. Assume, for a moment, that \eqref{(ACP)} is well-posed; that is, $A$ generates a strongly continuous semigroup $\{e^{tA}\}_{t\ge 0}$ on $X$. The semigroup is called hyperbolic if $\sigma(e^{tA})\cap \Bbb T$ is empty, for $t\neq 0$, where we write $\sigma(\cdot)$ for the spectrum and $\Bbb T$ for the unit circle. Suppose we know that $A$ generates a hyperbolic semigroup. Then \eqref{(ACP)} has dichotomy (and even uniform exponential dichotomy--- see definitions below), and $P$ is the Riesz projection for $e^{tA}$, $t>0$ that corresponds to the part of $\sigma(e^{tA})$ in the unit disk. Also, by the spectral inclusion theorem $$\sigma(e^{tA})\setminus\{0\}\supseteq \exp {t\sigma(A)}, \quad t\neq 0$$ (see, e.g., \cite[p. 45]{P}), one has $$\label{0sp} \sigma(A)\cap i\Bbb R=\ emptyset,$$ and, moreover, $$\label{emptysp} \sigma(A)\cap \{z\in\Bbb C: |\re z|\leq \epsilon\} =\emptyset \mbox{ for some } \epsilon >0.$$ However, it is more important to know under which additional condition on $A$ either \eqref{0sp} or \eqref{emptysp} imply dichotomy. If the spectral mapping theorem \[ \sigma(e^{tA})\setminus\{0\}=\exp{t\sigma(A)},\quad t\neq 0\] holds for the semigroup $\{e^{tA}\}$, then \eqref{0sp} implies the hyperbolicity of the semigroup. This is the case, for example, when $A$ generates an analytic semigroup; see \cite{N}. We note that the spectral mapping theorem holds, in fact, only provided some condition on the growth of the resolvent $R(z,A)=(z-A)^{-1}$ is fulfilled. If, for instance, $X$ is a Hilbert space then, by the Gearhart-Herbst spectral mapping theorem (see \cite{N}), condition \eqref{0sp} implies the hyperbolicity of the semigroup $\{e^{tA}\}$ provided $\|R(z,A)\|$ is bounded along $i\Bbb R$. For any Banach space by a spectral mapping theorem from \cite{LMS} this implication is true also provided a certain condition on the boundedness of the resolvent holds. Another way to obtain $P$ under conditions \eqref{0sp} or \eqref{emptysp} is to integrate $R(z,A)$ along $i\Bbb R$. If $A$ is a bounded operator with \eqref{0sp}, then the Riesz-Dunford functional calculus for $A$ gives the dichotomy projector $P$. If $A$ is unbounded this way does not work without additional conditions on the decay of $\|R(z,A)\|$ along $i\Bbb R$. The necessary and sufficient conditions for a semigroup with \eqref{emptysp} to be hyperbolic are given in \cite{KVL}. These conditions include, in particular, the integrability of $R(z,A)$ along $i\Bbb R$ in Ces\`aro sense. The present paper has two goals. First, we would like to consider dichotomy for {\it non}-well-posed abstract Cauchy problems \eqref{(ACP)}. That is, we do {\it not} assume that $A$ generates a strongly continuous semigroup. Second, we study dichotomy under very mild conditions on $R(z,A)$, $z\in i\Bbb R$. We require only a polynomial growth of the resolvent. Our main technical tool is to use an $H^\infty$ functional calculus for $A$ to obtain the dichotomy projection $P$. In the first part of the paper, similarly to the stability theory for semigroups, cf. \cite{N}, we define strong and uniform dichotomy for $A$ in \eqref{(ACP)}. We show that $A$ has uniform dichotomy provided both $A|_{\im P}$ and $-A|_{\im(I-P)}$ generate uniformly stable semigroups. The operators that satisfy these assumptions are called the bigenerators and were studied in \cite{BGK}. We show that $A$ has strong dichotomy provided these semigroups are strongly stable and $\sigma(A)\cap i\Bbb R$ is finite. Next, we assume that $A$ has an $H^\infty(\Omega)$ functional calculus and prove that $A$ has strong (resp. uniform) dichotomy provided $\overline\Omega$ is disjoint from $i\Bbb R$ (resp. from a vertical strip, containing $i\Bbb R$). This corresponds to conditions \eqref{0sp} and \eqref{emptysp}, respectively. We apply these results for two classes of operators $A$ on $L^p$-spaces having $H^\infty$ calculi: when $iA$ generates a bounded group \ cite{HP} and when $A$ is an elliptic differential operator \cite{[AHS]}. In the second part of the paper we assume that \eqref{0sp} holds and $\|R(z,A)\|$ has no more than polynomial growth along $i\ Bbb R$. Under these mild assumptions $A$, generally, does not have the dichotomy on the entire space $X$. We are able to prove, however, the existence of the Banach spaces $Z$ and $W$ such that $Z \ hookrightarrow X \hookrightarrow W$ and the restriction and extension of $A$ on $Z$ and $W$, respectively, have strong dichotomy. To comment on the last result, let us assume, for a moment, that $A$ generates a continuous semigroup, condition \eqref{0sp} holds and $\|R(z,A)\|$ is bounded for $z\in i\Bbb R$. If $X$ is a Hilbert space, the Gearhart-Herbst spectral mapping theorem implies that the semigroup $\{e^{tA}\}$ is hyperbolic. This means that $A$ has uniform dichotomy on the entire space $X$. If $X$ is a Banach space then, generally, $\{e^{tA}\}$ is not hyperbolic, and, by our result, $A$ has strong dichotomy only on a subspace $Z \hookrightarrow X$. In the last section of the paper we consider a semilinear equation with a linear part that satisfies the condition of polynomial growth of the resolvent. Using the result on dichotomy on subspaces, we prove the ``principle of linearized instability'' for the equation. This generalizes some results from \cite{Henry}. We use the following notation: $\sigma(A)$, $\rho(A)$, $R(z,A)$, $\cal D(A)$ - the spectrum, resolvent set, resolvent, domain of an operator $A$, $\cal L(X)$ - the set of bounded linear operators on a Banach space $X$. \section{ DICHOTOMY AND SEMIGROUPS } \noindent In the theory of stable strongly continuous semigroups (see \cite[p. 99]{N}) the following terminology is used. A strongly continuous semigroup $\{T(t)\}_{t \geq 0}$ is called {\it stable} if \[\lim_{t \to \infty} T(t)x = 0 \mbox{ for all } x \in X. \] The semigroup is called {\it uniformly exponentially stable} if there exists positive $\epsilon$ so that \[ \lim_{t \to \infty} \|e^{\epsilon t}T(t)\| = 0. \] Similarly, we define dichotomy for a densely defined closed operator $A$ in \eqref{(ACP)} as follows: \begin{defn}\ label{1.2} We will say that an operator $A$ {\it has strong dichotomy} if there exists a bounded projection, $P$, such that $PA \subseteq AP$, $A|_{\im P}$ generates a stable strongly continuous semigroup, and all nontrivial solutions of \eqref{(ACP)} such that $x \in \im(I-P)$ are unbounded. We will say that an operator $A$ {\it has uniform exponential dichotomy} if the semigroup generated by $A|_{\im P}$ is uniformly exponentially stable and there exists positive $\epsilon$ such that $$\label{diver} \underline{\lim}_{t \to \infty} \| e^{-\epsilon t}u(t,x) \| > 0$$ for every solution $u$ of \eqref{(ACP)} with $x \in \im(I - P)$. \end{defn} The following proposition shows that \eqref{(ACP)} has uniform exponential dichotomy provided $A$ is, in the terminology of \cite{BGK}, a bigenerator. \begin{prop}\label{1.3} Suppose there exists a bounded projection $P$ such that $PA \subseteq AP$ and both $A|_{\im P}$ and $-A|_{\im(I - P)}$ generate strongly continuous uniformly exponentially stable semigroups. Then $A$ has uniform exponential dichotomy. \end{prop} \begin{pf} Suppose $u$ is a nontrivial solution of \eqref{(ACP)}, with $x \in \im(I - P)$. We must show that $u$ satisfies \eqref{diver}. Let $G \equiv A|_{\im(I - P)}$. Since $t \mapsto (I - P)u(t, x)$ is a solution of \eqref{(ACP)}, it follows by the uniqueness of the solutions of \eqref{(ACP)} that $u(t, x) \in \im(I - P)$, for all $t \geq 0$. Thus we may define $$ w(t) \equiv e^{-tG}u(t, x) \, \, (t \geq 0). $$ Since $\frac{d}{dt}w(t) = 0$, for all $t \geq 0$, it follows that $w(t) = w(0) = x$, for all $t \geq 0$. Thus $$ \|x\| \leq \|e^{-tG}\| \|u(t, x)\|, \, \, \forall t \geq 0, $$ so that $$ \|u(t, x)\| \geq \|e^{-tG}\|^{-1}\|x\|, \, \, \forall t \geq 0, $$ as desired. \end{pf} In order to characterize strong dichotomy in terms of strong stability of the semigroups generated by $A|_{\im P}$ and $-A|_{\im(I - P)}$, we need to introduce the Hille-Yosida space (see \cite{KLC,K,dLK}, or \ cite[Chapter V]{dL3}). \begin{defn}\label{1.4} Suppose $A$ is a closed operator, such that the only solution of \eqref{(ACP)}, with $x = 0$, is trivial. The {\it Hille-Yosida space,} $Z(A)$, for $A$, is defined to be the set of all $x$ for which a bounded uniformly continuous mild solution of \eqref{(ACP)} exists. \end{defn} \noindent We define a norm on $Z(A)$ by \[ \|x\|_{Z(A)} \equiv \sup_{t \ geq 0} \|u(t, x)\|. \] In the following lemma the Hille-Yosida spaces for $A$ and $-A$ were used to find a maximal subspace on which $A$ generates a bounded group (see \cite{K} and \cite[Chapter V] {dL3} for the proof). \begin{lem}\label{1.5} Suppose that $A$ is as in Definition~\ref{1.4} and define $Z \equiv Z(A) \cap Z(-A)$. Then the following holds: \begin{itemize} \item[(1)] $Z$ is the maximal continuously embedded Banach subspace of $X$ such that $A|_Z$ generates a bounded strongly continuous group; \item[(2)]$\sigma(A|_{Z}) \subseteq \sigma(A)$. \end{itemize} \end{lem} It is clear that $Z$, from Lemma~\ref{1.5}, is the set of all bounded, uniformly continuous mild solutions of the reversible abstract Cauchy problem $$\frac{d}{dt}u(t, x) = Au(t, x),\quad u(0, x) = x, \ quad t\in\Bbb R. \label{(1.6)}$$ Under natural conditions on $\sigma(A)$, this abstract Cauchy problem cannot have solutions bounded on the entire line: \begin{lem}\label{1.7} Suppose that $A$ is as in Definition~\ref{1.4}, $\sigma_p(A) \cap i\Bbb R$ is empty, and \[\sigma(A) \cap i\Bbb R \quad\mbox{ is countable.} \] Then all nontrivial solutions of \eqref{(1.6)} are unbounded. \end{lem} \begin {pf} Suppose $u$ is a bounded solution of \eqref{(1.6)}. Fix $\lambda \in \rho(A)$. Then $$ (\lambda - A)^{-1}u(0) \in Z \equiv Z(A) \cap Z(-A), $$ since $t \mapsto (\lambda - A)^{-1}u(t)$ has a bounded derivative, hence is uniformly continuous. By Lemma~\ref{1.5}(2), $\sigma(A|_Z) \cap i\Bbb R$ is countable. But since $A|_Z$ generates a bounded strongly continuous group, $\sigma(A|_Z) \ subseteq i\Bbb R$. Thus $\sigma(A|_Z)$ is a countable subset of $i\Bbb R$. If $\sigma(A|_Z)$ is nonempty, then it follows that it must contain an isolated point. This isolated point is an imaginary eigenvalue for $A|_Z$ (see \cite[Chapter 8]{Da}), hence for $A$. Since $\sigma_p(A) \cap i\Bbb R$ is empty, this would be a contradiction. Thus $\sigma(A|_Z)$ is empty, which implies that $Z$ is trivial (see \cite[Chapter 8]{Da}). Thus $(\lambda - A)^{-1}u(0) = 0$, so that $u$ is trivial. \end{pf} When $\sigma(A) \cap i\Bbb R$ is empty, Lemma 2.5 may be found in \cite{dLV} and \cite{Huang}. The following proposition is the analogue of Proposition~\ref{1.3} for the case of strong dichotomy. \begin{prop}\label{1.8} Suppose there exists a bounded projection $P$ such that $PA \subseteq AP$, both $A|_{\im P}$ and $-A|_{\im(I - P)}$ generate strongly continuous stable semigroups, and $$\label{finite} \sigma(A) \cap i\Bbb R \mbox{ is countable. }$$ Then $A$ has strong dichotomy. \end{prop} \begin{pf} Note first that $\sigma_p(A) \cap i\Bbb R$ is empty. Indeed, if $Ax = i\lambda x$ for some real $\lambda$, then $APx = i\lambda Px$, so that, since $A|_{\im P}$ generates a stable strongly continuous semigroup, $Px = 0$; similarly, $(I - P)x = 0$. Suppose $u$ is a bounded solution of \eqref{(ACP)}, with $x \in \im(I - P)$. We must show that $u$ is trivial. Clearly $u$ extends to a bounded solution of (1.6), by defining \[ u(t) \equiv e^{tA|_{\im(I - P)}}u(0) \, \, (t \leq 0). \] By Lemma~\ref{1.7}, $u$ is trivial. \end{pf} The following example shows that hypothesis \eqref {finite} in Proposition~\ref{1.8} is necessary. That is, it is not sufficient, for $A$ to have strong dichotomy, to have both $A|_{\im P}$ and $-A|_{\im(I - P)}$ generate strongly continuous stable semigroups. \begin{exmp} Take $X \equiv L^p(\Bbb R, g(s) \, ds)$, $1 \leq p < \infty$, where $g$ is a nondecreasing positive function on $\Bbb R$, and take $A$ to be $-\frac{d}{ds}$. That is, $$ \|f\ |^p \equiv \int_{\Bbb R} |f(s)|^p g(s) \, ds, $$ and $-A$ is the generator of the strongly continuous contracting semigroup of left-translations \[ e^{-tA}f(s) \equiv f(s + t), \quad s \in \Bbb R, \, t \geq 0,\, f \in X.\] It is not hard to see that, for $f$ bounded and of compact support, $$ \lim_{t \to \infty} \|e^{-tA}f\|^p = g(-\infty) \int_{\Bbb R} |f(s)|^p \, ds. $$ Thus, if we choose $g$ such that $g(-\infty) = 0$, then $-A$ generates a stable strongly continuous semigroup. Except for condition \eqref{finite}, we have the hypotheses in Proposition~\ref{1.8}, with $P \equiv 0$. Strong dichotomy is thus equivalent to \eqref{(ACP)} having no nontrivial bounded solutions. If we assume that $g$ is exponentially bounded, then translation becomes a strongly continuous group, $$ e^{tA}f (s) \equiv f(s - t), \quad s, t \in \Bbb R, \, f \in X. $$ It is again clear that $$ \lim_{t \to \infty} \|e^{tA}f\|^p = g(\infty) \int_{\Bbb R} |f(s)|^p \, ds $$ for any $f \in X$. Thus, if $g$ is bounded, we do not have strong dichotomy; in fact, \eqref{(ACP)} has a bounded solution for all initial data in the domain of $A$. \end{exmp} An analogue of this example, for incomplete second-order Cauchy problems, is in \cite[Example 2.15]{dL1}. See \cite[Section II]{dL1} for the relationship between different versions of such Cauchy problems and stable or bounded strongly continuous semigroups. In the language of \cite[Definition 2.7]{dL1}, the operator $-A|_{\im(I - P)}$, from Proposition~\ref{1.8}, generates a bounded, nowhere-reversible strongly continuous semigroup. \section { DICHOTOMY AND $H^\infty$ FUNCTIONAL CALCULI} \noindent In this section we will study dichotomy for \eqref{(ACP)} for operators that have an $H^\infty$ functional calculus. Examples of operators with this property and applications of our dichotomy results are given in the next section. \begin{defn}\label{2.1} If $\Omega$ is an open subset of the complex plane, not equal to the entire plane, we will say that an operator $A$ {\it has an $H^\infty(\Omega)$ functional calculus} if $\sigma(A) \subseteq \overline{\Omega}$ and there exists a continuous algebra homomorphism, $f \mapsto f(A)$, from $H^{\infty}(\Omega)$ into $\cal L(X)$, such that $f_0(A) = I$ and $g_{\lambda}(A) = (\lambda - A)^{-1}$, for all $\lambda \notin \overline{\Omega}$, where $f_0(z) \equiv 1, g_{\lambda}(z) \equiv (\lambda - z)^{-1}$. \end{defn} The main tool in the proof of the next proposition is the ABLV-Theorem (Arendt-Batty-Lyubich-V\~u; see \cite{LV} and \cite {AB}), that gives the best available condition for a strongly continuous semigroup to be stable. \begin{prop}\label{2.2} Suppose $\Omega$ is an open set contained in the left half-plane, such that $\overline{\Omega} \cap i\Bbb R$ is countable, $\sigma_p(A) \cap i\Bbb R$ is empty and $A$ is densely defined and has an $H^{\infty}(\Omega)$ functional calculus. Then $A$ generates a stable strongly continuous semigroup, if either \ begin{itemize} \item[(1)] $X$ is reflexive, or \item[(2)] $\overline{\Omega} \cap i\Bbb R$ is empty. \end{itemize} \noindent If $\Omega \subseteq \{z \in \bold C \, | \, \re (z) < -\epsilon \}$, for some positive $\epsilon$, then the semigroup is uniformly exponentially stable. \end{prop} \begin{pf} Since $A$ has an $H^{\infty} (\Omega)$ functional calculus, and $\Omega$ is contained in the left half-plane, a short calculation shows that $\{\|\lambda^n(\lambda - A)^{-n}\| \, | \, \lambda > 0, n \in \bold N \}$ is bounded. By the Hille-Yosida theorem, since $\cal D(A)$ is dense, $A$ generates a bounded strongly continuous semigroup. Since $\sigma(A) \cap i\Bbb R$ is contained in $\overline{\Omega} \cap i\Bbb R,$ the ABVL-Theorem (\cite{LV} and \cite {AB}) guarantees that either (1) or (2) above implies that the semigroup generated by $A$ is stable. If there exists positive $\epsilon$ such that $\Omega \subseteq \{z \in \bold C \, | \, \re (z) < - \epsilon \}$, then, exactly as argued at the beginning of the proof, $(A + \epsilon)$ generates a bounded strongly continuous semigroup, so that the semigroup generated by $A$ is uniformly exponentially stable. \end{pf} To obtain dichotomy, we need to apply this result for both ``stable'' and ``unstable'' parts of $A$ as follows. \begin{cor}\label{2.3} Suppose $\sigma_p(A) \cap i\Bbb R$ is empty and $A$ is densely defined and has an $H^{\infty}(\Omega)$ functional calculus, where $\Omega$ is an open subset of the complex plane such that $\overline{\Omega} \cap i\Bbb R$ is countable. Then there exists a bounded projection $P$ such that $PA \subseteq AP$ and $A|_{\im P}$ and $-A|_{\im(I-P)}$ generate stable strongly continuous semigroups, if either \begin{itemize} \item[(1)] $X$ is reflexive, or \item[(2)] $\overline{\ Omega} \cap i\Bbb R$ is empty. \end{itemize} \noindent If there exists positive $\epsilon$ such that $\Omega \cap \{z \in \bold C \, | \, |\re (z)| < \epsilon \}$ is empty, then ``stable'' may be replaced by ``uniformly exponentially stable.'' \end{cor} \begin{pf} Let $$ \Omega_1 \equiv \Omega \cap \{z \in \bold C \, | \, Re(z) < 0 \}, \, \, \Omega_2 \equiv \Omega \cap \{z \in \bold C \, | \, Re(z) > 0 \}. $$ Let $P \equiv 1_{\Omega_1}(A)$ for the characteristic function $1_{\Omega_1}(\cdot)$ of $\Omega_1$. Then $I - P = 1_{\Omega_2}(A)$, thus we may apply Proposition~\ref{2.2} to both $A |_{\im P}$ and $-A|_{\im(I-P)}$. If there exists positive $\epsilon$ such that $\Omega \cap \{z \in \bold C \, | \, |\re (z)| < \epsilon \}$ is empty, then replace $Re(z) < 0$ with $Re(z) < -\ epsilon$ and $Re(z) > 0$ with $Re(z) > \epsilon$, and again use Proposition 3.2. \end{pf} We are ready to prove the main result of this section. For $0 < \theta \leq \pi$ let $S_{\theta} \equiv \{re^ {i\phi} \, | \, r > 0, | \phi | < \theta \}$ denote a sector of angle $\theta$. \begin{thm}\label{2.5} Suppose $\Omega$ is an open subset of the complex plane such that $\overline{\Omega} \cap i\Bbb R$ is countable, $\sigma_p(A) \cap i\Bbb R$ is empty and $A$ is densely defined and has an $H^{\infty}(\Omega)$ functional calculus. Then $A$ has strong dichotomy, if either \begin{itemize} \item [(1)] $X$ is reflexive, or \item[(2)] $\overline{\Omega} \cap i\Bbb R$ is empty. \end{itemize} \noindent If, in addition to (2), either \begin{itemize} \item[(3)] there exists $\epsilon > 0$ such that $\Omega$ is disjoint from $\{z \in \Bbb C \, | \, |\re (z)| < \epsilon \}$, or \item[(4)] $0 \in \rho(A)$ and $\Omega$ is contained in a cone $(S_{\theta} \cup -S_{\theta})$, for some $\theta < \frac{\pi}{2}$, \end{itemize} then $A$ has uniform exponential dichotomy. \end{thm} \begin{pf} The assertion about strong dichotomy follows from Corollary~\ref{2.3} and Proposition~\ref{1.8}, since $ \sigma(A)$ is contained in $\overline{\Omega}$. Under hypothesis (3), uniform exponential dichotomy follows from Corollary~\ref{2.3} and Proposition~\ref{1.3}. Under hypothesis (4), it is straightforward to show, analogously to the proof of Proposition~\ref{2.2}, that, for $P$ as in Corollary~\ref{2.3}, both $A|_{\im P}$ and $-A|_{\im(I - P)}$ generate bounded holomorphic strongly continuous semigroups. Since $0 \in \rho(A)$, so that $0 \in \rho(A|_{\im P})$ and $\rho(-A|_{\im(I - P)})$, these semigroups are both uniformly exponentially stable (see \cite[Theorem 4.4.3]{P}). Thus we may again apply Proposition~\ref{1.3}. \end{pf} \begin{rem} Let us stress, that under hypothesis (3) both $A|_{\im P}$ and $-A|_{\im(I-P)}$ generate uniformly stable strongly continuous semigroups. We will use this fact in the last section. \end{rem} \section{EXPONENTIAL DICHOTOMY ON $L^P$ SPACES} \noindent In this section we will apply Theorem~\ref{2.5} for two classes of operators on $L^p$-spaces having an $H^\infty$ functional calculus. \noindent {\bf 1. Bounded groups.} We cite the following result from \cite{HP}. Let $X = L^p(\Omega, \mu)$, for $1 < p < \infty$, $(\Omega, \ mu)$ be a measure space. \begin{lem}\label{3.1} If $iA$ generates a bounded strongly continuous group, $A$ is injective and $0 < \theta < \frac{\pi}{2}$, then $A$ has an $H^{\infty}(S_{\theta} \cup -S_{\theta})$ functional calculus. \end{lem} Theorem~\ref{2.5} now implies the following. \begin{cor}\label{3.2} If $iA$ generates a bounded strongly continuous group and $A$ is injective, then $A$ has strong dichotomy. If, in addition, $0 \in \rho(A)$, then $A$ has uniform exponential dichotomy. \end{cor} \noindent {\bf 2. Differential operators.} Our next goal is to combine Theorem~\ref{2.5} and results from \cite{[AHS]} to study the dichotomy of elliptic differential operators acting on vector valued $L^p$-functions over $\Bbb R^n$ with sufficiently large zero order term and certain regularity conditions on the coefficients. To formulate the results from \cite{[AHS]} we will need some notations. Let $$\cal A=\sum_{|\alpha|\leq m}a_{\alpha}D^{\alpha}$$ be a linear differential operator of order $m$ on $X=L^p(\Bbb R^n,\Bbb R^k)$, $1 0$ and $\theta_0\in[0,\pi/2]$. We will say that $\cal A$ is {\it uniformly $(M,\theta_0)$-elliptic} if $\max_{|\alpha|=m}\|a_\alpha\|_\infty\le M$ and for its principal symbol \[\cal A_\pi(x,\xi)\ equiv\sum_{|\alpha|=m}a_\alpha(x)\xi^\alpha,\quad (x,\xi)\in\Bbb R^n\times\Bbb R^n\] the following conditions hold: \[\sigma(\cal A_\pi(x,\xi))\subset S_{\theta_0}\setminus\{0\},\quad \| [\cal A_\pi (x,\xi)]^{-1} \|\le M, \quad x\in\Bbb R^n,\, \|\xi\|=1.\] To formulate the regularity conditions on the coefficients, for fixed $p\in (1,\infty)$ and $m\in\Bbb N$ choose any $q_\alpha$ such that \[q_ \alpha=p \mbox{ if } |\alpha| n/(m-|\alpha|) \mbox{ if } m-n/p\le |\alpha|\le m.\] Let $\omega:\Bbb R\to\Bbb R$ be a modulus of continuity that satisfies the condition \[\int\limits_0^1\frac{\omega^ {1/3}(t)}{t}\,dt<\infty.\] Let $BUC(\Bbb R^n,\cal L(\Bbb R^k);\omega)$ denote the set of bounded uniformly continuous functions with the finite norm \[\|a\|_{C(\omega)}\equiv\|a\|_\infty+\sup_{x\neq y} \frac{|a(x)-a(y)|}{\omega(|x-y|)}.\] Also, let \begin{align*} L^q_{\mbox{ unif }} & (\Bbb R^n,\cal L(\Bbb R^k)) \equiv \\ & \left\{ a\in L^1_{\mbox{ loc }} (\Bbb R^n,\cal L(\Bbb R^k)): \|a\|_{q,{\ mbox{ unif }}} \equiv \sup_{x\in\Bbb Z^n}\|a(\cdot - x)\|_{L^q((-1,1)^n, \cal L(\Bbb R^k))}<\infty\right\}. \end{align*} We impose the following regularity conditions on the coefficients: \label {regcond1} & a_\alpha\in BUC(\Bbb R^n,\cal L(\Bbb R^k);\omega) \mbox{ if } |\alpha|=m, \nonumber \\ & a_\alpha\in L^q_{\mbox{unif}}(\Bbb R^n,\cal L(\Bbb R^k)) \mbox{ if } |\alpha|\le m-1, and $$\ label{regcond2} \max_{|\alpha|\leq m-1}\|a_\alpha\|_{q_\alpha,{\mbox{ unif }}}+ \max_{|\alpha|=m}\|a_\alpha\|_{C(\omega)}\le M.$$ The following result was proved in \cite{[AHS]}. \begin{lem}\label {ahs} There exists a constant $\mu>0$ such that for each $(M,\theta_0)$-elliptic operator $\cal A$ on $\Bbb R^n$, satisfying \eqref{regcond1}--\eqref{regcond2}, the operator $\mu+\cal A$ has $H^\ infty(S_\theta\setminus\{0\})$ functional calculus for $0\leq\theta_0<\theta<\pi/2$. \end{lem} Theorem~\ref{2.5} now gives the following fact. \begin{cor}\label{ahscor} Assume $\cal A$ and $\mu$ are as in Lemma~\ref{ahs}. If $\mu+\cal A$ is injective, then $\mu+\cal A$ has strong dichotomy. If $\mu+\cal A$ is invertible, then $\mu+\cal A$ has uniform exponential dichotomy. \end{cor} \section {EXPONENTIAL DICHOTOMY ON SUBSPACES AND SUPERSPACES} \noindent In this section we will assume that $\sigma(A)\cap i\Bbb R=\emptyset$ and the resolvent of $A$ grows no faster than a polynomial along $i\Bbb R$. Under these conditions $A$, generally, does not have dichotomy on $X$. However, we will identify Banach spaces $Z$ and $W$ such that $Z \hookrightarrow X \hookrightarrow W$ and the restriction and extension of $A$ on $Z$ and $W$, respectively, have dichotomy. Our main tool is the existence of an $H^\infty$ functional calculus on $Z$ and $W$. \begin{lem}\label{4.1} Suppose $\ Omega$ is an open subset of the complex plane whose complement contains a half-line and whose boundary is a positively oriented countable system of piecewise smooth, mutually nonintersecting (possibly unbounded) arcs, $\sigma(A) \subseteq \Omega$, $A$ is densely defined and $\|(w - A)^{-1}\|$ is $O((1 + |w|)^N)$, for $w \notin \Omega$. Then there exist Banach spaces $Z$, $W$, and an operator $B$, on $W$, such that $$ [ \cal D(A^{N+2}) ] \hookrightarrow Z = [ \cal D(B^{N+2}) ] \hookrightarrow X \hookrightarrow W, $$ $A|_Z$ and $B$ are densely defined and have $H^{\infty}(\Omega)$ functional calculi and $A = B|_X$. \end{lem} \begin{pf} The existence of $Z$ is proven in \cite[Theorem 7.1]{dL2}, except that the density of $\cal D(A|_Z)$ is not addressed. This density follows by observing that, since $\cal D(A)$ is dense, it follows that $\cal D(A^{N+3})$ is dense in $[ \cal D(A^{N+2}) ]$, hence is dense in $Z$; it is clear that $\cal D(A^{N+3})$ is contained in $\cal D(A| _Z)$. Define $W$ to be the completion of $Z$ with respect to the norm $$ \|x\|_W \equiv \|A^{-(N+2)}x\|_Z. $$ We construct a functional calculus as follows. For any $f \in H^{\infty}(\Omega)$, $x \in W$, define $$ (\Lambda f)x \equiv \lim_{n \to \infty} f(A|_Z)x_n, $$ where the limit is taken in $W$, and $\{x_n\}$ is any sequence in $Z$ converging to $x$ in $W$. Note that the existence and uniqueness of $\lim_{n \to \infty} f(A|_Z)x_n$ follows from the boundedness of $f(A|_Z)$ and the fact that $A^{-(N+2)}$ commutes with $f(A|_Z)$. It is clear that $f \mapsto \Lambda f$ is a continuous algebra homomorphism from $H^{\infty}(\Omega)$ into $\cal L(W)$. Let us show that this homomorphism is as in Definition~\ref{2.1}. For $\lambda \notin \overline{\Omega}$, we claim that $\Lambda g_{\ lambda}$ is injective. To see this, suppose $x \in W$ and $\Lambda g_{\lambda}x = 0$. Choose $\{x_n\} \subset Z$ such that $x_n \to x$ in $W$. Then $y_n \equiv g_{\lambda}(A|_Z)x_n \to 0$ in $W$. Since $g_{\lambda}(A|_Z) = (\lambda - A|_Z)^{-1}$, this means that \[ (\lambda - A|_Z)A^{-(N+2)}y_n = A^{-(N+2)}x_n \to A^{-(N+2)}x\quad \mbox{ and } A^{-(N+2)}y_n \to 0,\] both in $Z$. Since $A|_Z$ is closed, this implies that $A^{-(N+2)}x$, hence $x$, must equal $0$, proving the claim. Since $f \mapsto \Lambda f$ is an algebra homomorphism, $\{ \Lambda g_{\lambda} \, | \, \lambda \notin \ overline{\Omega} \}$ is a pseudoresolvent family. Thus $\{ \Lambda g_{\lambda} \, | \, \lambda \notin \overline{\Omega} \}$ is a pseudoresolvent family of injective operators. This means that there exists an operator $B$, on $W$, such that $(\lambda - B)^{-1} = \Lambda g_{\lambda}$, for all $\lambda$ not in $\overline{\Omega}$. It is clear that $f \mapsto \Lambda f$ is now an $H^{\infty}(\ Omega)$ functional calculus for $B$. There exists a constant $M$ such that $$ \|A^{-(N+2)}y\|_Z \leq M\|A^{-(N+2)}y\|_{[\cal D(A^{(N+2)})]} \equiv M \|y\|, $$ for all $y \in X$. This implies that $$ \|x\|_W \equiv \|A^{-(N+2)}x\|_Z \leq M \|x\|, $$ for all $x \in Z$; that is, $ X \hookrightarrow W. $ To show that $A = B|_X$, it is sufficient to show that $A^{-1} = B^{-1}|_X$. Suppose $x \in X$. Then $x \in W$, so there exists $\{x_n\}\subseteq Z$ such that $x_n \to x$ and $A^{-1}x_n \to B^{-1}x$, both in $W$. This means that $A^{-(N+2)}x_n \to A^{-(N+2)}x$ and \[A^{-(N+2)}A^{-1}x_n \to A^{- (N+2)}B^{-1}x\quad\mbox{ in } Z,\] hence in $X$. Thus $A^{-(N+2)}A^{-1}x = A^{-(N+2)}B^{-1}x$, so that $A^{-1}x = B^{-1}x$, as desired. Since $\cal D(A|_Z)$ is dense in $Z$, it is dense in $W$; since $\cal D(A|_Z)$ is contained in $\cal D(B)$, it follows that $B$ is densely defined. All that remains is to show that $[ \cal D(B^{N+2})] = Z$. For $x \in \cal D((A|_Z)^{N+2})$, $$ \|x\|_Z = \|x\|_{[\ cal D(B^{N+2)})]}, $$ thus, since $\cal D((A|_Z)^{N+2})$ is dense in $Z$, it follows that $$ [ \cal D(B^{N+2}) ] = Z. $$ \end{pf} The following lemma shows that the polynomial growth of the resolvent along $i\Bbb R$ automatically implies the same growth outside some $\Omega$ as in Lemma~\ref{4.1} \begin{lem}\label{4.2} Suppose $i\Bbb R \subseteq \rho(A)$ and $\|(iy - A)^{-1}\|$ is $O(1 + |y|^N)$, for $y$ real. Then there exists $\Omega$, as in Lemma 5.1, such that $\overline{\Omega} \cap i\Bbb R$ is empty, $\sigma(A) \subseteq \Omega$ and $\|(z - A)^{-1}\|$ is $O(1 + |z|^N)$, for $z$ outside $\Omega$. \end{lem} \begin{pf} This follows from a power series expansion of the resolvent. There exists a constant $M$ so that $$ \|(iy - A)^{-1}\| \leq M(1 + |y|^N), \forall y \in \Bbb R. $$ For $\ frac{1}{\epsilon} > 2M(1 + |y|^N)$ one has $(\epsilon + iy) \in \rho(A)$ with $$ (\epsilon + iy - A)^{-1} = \sum_{k=0}^{\infty} (-\epsilon)^k(iy - A)^{-(k+1)}, $$ so that \begin{align*} \|(\ epsilon+iy-A)^{-1}\| & \leq \sum_{k=0}^\infty(\epsilon)^k(M(1+|y|^N))^{k+1} \\ & = \dfrac{M(1+|y|^N)}{1-\epsilon M(1+|y|^N)} \leq 2M(1+|y|^N), \end{align*} as required. \end{pf} \begin{rem}\label {bddstr} The proof of Lemma~\ref{4.2} also shows that the resolvent of $A$ is bounded in a vertical strip around $i\Bbb R$ provided $i\Bbb R\subset \rho(A)$ and $\|(iy-A)^{-1}\|$, $y\in\Bbb R$, is bounded. \end{rem} The following theorem is an immediate consequence of Theorem~\ref{2.5} and Lemmas~\ref{4.1} and \ref{4.2}. \begin{thm}\label{4.3} Suppose $A$ is densely defined, $i\Bbb R\subseteq\ rho(A)$ and $\|(iy - A)^{-1}\|$ is $O(1 + |y|^N)$, for $y$ real. Then \begin{itemize} \item[(1)] there exists a Banach space $Z$ such that $$ [ \cal D(A^{N+2}) ] \hookrightarrow Z \hookrightarrow X $$ and $A|_Z$ has strong dichotomy, and \item[(2)] there exists a Banach space $W$ and an operator $B$, on $W$, such that $$ [ \cal D(B^{N+2}) ] \hookrightarrow X \hookrightarrow W, $$ $A = B|_X$, and $B$ has strong dichotomy. \end{itemize} \end{thm} A similar result for uniform exponential dichotomy also follows from Theorem~\ref{2.5} and Lemma~\ref{4.2}. \begin{thm}\label{4.4} Suppose $A$ is densely defined, there exists positive $\epsilon$ such that $\{z\in\Bbb C: |\re z|<\epsilon\}\subseteq \rho(A)$, and $\|(z-A)^{-1}\|$ is $O(1+|z|^N)$, for $|\re z|<\epsilon$. Then \begin{itemize} \ item[(1)] there exists a Banach space $Z$ such that $$ [ \cal D(A^{N+2}) ] \hookrightarrow Z \hookrightarrow X $$ and $A|_Z$ has uniform exponential dichotomy, and \item[(2)] there exists a Banach space $W$ and an operator $B$, on $W$, such that $$ [ \cal D(B^{N+2}) ] \hookrightarrow X \hookrightarrow W, $$ $A = B|_X$, and $B$ has uniform exponential dichotomy. \end{itemize} \end{thm} Since $\ |(iy-A)^{-1}\|=O(1/|y|)$ provided $iA$ generates a bounded strongly continuous group, the following result holds. \begin{cor}\label{4.5} If $iA$ generates a bounded strongly continuous group and $0 \ in \rho(A)$, then \begin{itemize} \item[(1)] there exists a Banach space $Z$ such that $$ [ \cal D(A) ] \hookrightarrow Z \hookrightarrow X $$ and $A|_Z$ has uniform exponential dichotomy, and \item [(2)] there exists a Banach space $W$ and an operator $B$, on $W$, such that $$ [ \cal D(B) ] \hookrightarrow X \hookrightarrow W, $$ $A = B|_X$, and $B$ has uniform exponential dichotomy. \end {itemize} \end{cor} \begin{exmp} To illustrate the effect of ``dichotomy on subspaces'' in Theorems~\ref{4.3}--\ref{4.4}, let us consider the operator \[A\equiv i\dfrac{d}{dx}\quad \mbox{ on } \quad X\equiv\{f\in L^p[0,1]: \int_0^1f(x)\,dx=0\}, \quad 1 \leq p < \infty.\] For $1 < p < \infty$ the operator $A$ has the uniform dichotomy with the bounded projector \[ P: f\sim \sum_{k\neq 0}a_ke^ {ikx} \mapsto \sum_{k>0}a_ke^{ikx}. \] For $p=1$ this projector is unbounded, and $A$ does not have dichotomy on the entire space $X$. Note that $iA$ generates a bounded strongly continuous group and $\|R(iy,A)\|=O(1/|y|)$. Theorem~\ref{4.3} gives a dense subspace $Z$ in $X$ such that $A|_Z$ has strong dichotomy. \end{exmp} \section {NONLINEAR ABSTRACT CAUCHY PROBLEM} \noindent In this section we assume that $A$ generates a strongly continuous semigroup on $X$. Let $g$ be a nonlinear function, \[ g: \Bbb R\times U\to \cal D(A^{N+2})\quad\mbox{ for an open set }\quad U\subset \cal D(A^{N+2}), \quad 0\in U,\] such that $g(t,0)=0$. Assume that $g$ is H\"{o}lder: \begin{eqnarray}\label{H} &&\|g(t,x_1)-g(t,x_2)\|_{\cal D(A^{N+2})}\leq k(r) \|x_1-x_2\|_X,\\ &&\mbox{ for } x_i\in U, \, \|x_i\| _X\leq r, \, i=1,2, \mbox{ and } k(r)\to 0 \mbox{ as } r\to 0.\nonumber \end{eqnarray} For $t_0\in\Bbb R$ consider the following semilinear abstract Cauchy problem: \begin{eqnarray}\label{Cp} \dfrac {d}{dt}u(t,x)=Au(t,x)+g(t,u(t,x)),\quad u(t_0,x)=x\in X. \end{eqnarray} We say, that $u(\cdot,x)$ is a mild solution of \eqref{Cp} on $(t_0,\tau)$, if it satisfies the integral equation $$\label {inteq} u(t,x)=e^{A(t-t_0)}x+\int\limits_{t_0}^t e^{A(t-s)}g(s,u(s,x))\, ds, \quad t\in (t_0,\tau).$$ For $A$ as in Theorem~\ref{4.4} we will prove the following ``principle of linearized instability'' (see \cite[Th.~5.1.3]{Henry}, \cite{Kato,Sh} and references therein for similar results on sectorial operators $A$). Recall (see Remark~\ref{bddstr}) that, for instance, the condition $ \|(iy-A)^{-1}\|=O(1)$, $y\in\Bbb R$ implies the hypothesis of Theorem~\ref{4.4} with $N=0$. \begin{thm} Suppose there exists positive $\epsilon$ such that $\{z\in\Bbb C: |\re z|<\epsilon\}\subseteq \ rho(A)$, and $\|(z-A)^{-1}\|$ is $O(1+|z|^N)$, for $|\re z|<\epsilon$. Assume $\sigma(A)\cap \{z: \re z>0\}\neq \emptyset$. Then the zero solution of \eqref{Cp} is unstable in $X$. That is, for some positive $\epsilon$ and a sequence $x_n\in X$ such that $\|x_n\|_X\to 0$ there exist solutions $u_*(\cdot)=u_*(\cdot,x_n)$ of \eqref{inteq} with $x=x_n$ so that $\|u_*(t_n,x_n)\|_X\ge\epsilon$ for some $t_n\ge t_0$. \end{thm} \begin{pf} By Theorem~\ref{4.4} there exists a Banach space $Z$ with $\cal D(A^{N+2}) \hookrightarrow Z \hookrightarrow X$ such that, for constants $M,M_1>0$, $$\label {imb} \|x\|_X\le M \|x\|_Z,\, x\in Z,\quad\mbox{ and }\quad \|x\|_Z\le M_1\|x\|_{\cal D(A^{N+2})},\,x\in\cal D(A^{N+2}),$$ and $A|_Z$ generates a hyperbolic strongly continuous semigroup on $Z$. This means that for a projection $P$ bounded on $Z$ and some $\beta>0$ and $C>1$ one has $$\label{est} \|e^{tA_-}x\|_Z\le Ce^{-t\beta}\|x\|_Z,\quad \|e^{-tA_+}x\|_Z\le Ce^{-t\beta}\|x\|_Z, \quad t>0.$$ Here $A_-$ and $A_+$ denote the restrictions of $A$ on $\Im(P)$ and $\Im(Q)$, respectively, where $Q \equiv I-P$. For $C$ and $\beta$ from \eqref{est} choose $\tilde{x}_0\in \ImQ$, $\tilde{x}_0\in Z$, so that $$\label{1} 0<\|\tilde{x}_0\|_Z \leq \dfrac{1}{2CM},$$ and $r$ small enough, so that $$\label{main0} \dfrac{2}{\beta}CMM_1k(r)(\|P\|_{\cal L(Z)}+\|I-P\|_{\cal L(Z)}) \leq\dfrac{\|\tilde {x}_0\|_X}{8}.$$ By \eqref{1} and \eqref{imb} with $C>1$ one has $$\label{main1} \dfrac{2}{\beta}CMM_1k(r)(\|P\|_{\cal L(Z)}+\|I-P\|_{\cal L(Z)}) \leq\dfrac{1}{16C}\le\dfrac{1}{2}.$$ Denote $x_0=r\ tilde{x}_0$. Then \eqref{1} gives: $$\label{2} \|x_0\|_Z=\|r\tilde{x}_0\|_Z\le\dfrac{r}{2CM}.$$ Fix $\tau\ge t_0$. Denote $\cal C=C((-\infty,\tau],Z)$ the space of $Z$-valued continuous functions with $\sup$-norm. Consider a subset $\cal B=\cal B_{\tau,x_0}$ of $\cal C$, defined as follows: $$\label{defB} \cal B=\{ u\in\cal C: \|u(t)\|_Z\le \dfrac{r}{M} \cdot e^{\frac{\beta}{2}(t-\tau)},\quad t\le\tau,\quad (I-P)u(\tau)=x_0\}.$$ Define a nonlinear operator $T=T_{\tau,x_0}$ in $\cal C$ as follows: \begin{eqnarray*} (Tu)(t)\equiv e^{-A_+(\tau-t)} x_0 & - & \int\limits_t^\tau e^{-A_+(s-t)} (I-P) \gs \, ds\\ & + & \int\limits_{-\infty}^t e^{A_-(t-s)}P \gs \, ds,\quad t\le\tau. \end{eqnarray*} \begin{claim} \label{cl1} $T$ preserves $\cal B$. \end{claim} \begin{pf} By \eqref{est} and \ eqref{2} one has: $$\label{11} \| e^{-A_+(\tau-t)} x_0\|_Z \leq Ce^{-\beta(\tau-t)}\|x_0\|_Z\leq \dfrac12\dfrac{r}{M}e^{\frac{\beta}{2}(t-\tau)}.$$ Fix $u\in\cal B$. Then \eqref{est} gives: $$\label {21} \left\|\int\limits_{-\infty}^t e^{A_-(t-s)} P g(s,u(s)) ds\right\|_Z \le C\|P\|_{\cal L(Z)} \int\limits_{-\infty}^t e^{-\beta(t-s)}\|\gs\|_Z ds.$$ Since $u\in\cal B$, one has from \eqref{imb}: \ [\|u(s)\|_X\le M\|u(s)\|_Z\le re^{\frac{\beta}{2}(s-\tau)}\le r,\quad s\le \tau.\] Then \eqref{H} can be applied in \eqref{21}, and we use \eqref{imb} to continue the estimate in \eqref{21}: $$\label {22} \le C\|P\|_{\cal L(Z)} M_1k(r) r\int\limits_{-\infty}^t e^{\frac{\beta}{2}(s-\tau)}e^{-\beta(t-s)}\, ds= \dfrac{2}{3\beta}CMM_1k(r)\|P\|\cdot\dfrac{r}{M} e^{\frac{\beta}{2}(t-\tau)}.$$ Similarly, $$\label{23} \left\| \int\limits_t^\tau e^{-A_+(s-t)}(I-P) \gs \, ds\right\|\leq \dfrac{2}{\beta}CMM_1k(r)\|I-P\|\cdot\dfrac{r}{M} e^{ \frac{\beta}{2} (t-\tau)}.$$ Adding \eqref{11},\eqref {22} and \eqref{23}, and taking into account the inequality \eqref{main1}, we get the desired estimate, as in \eqref{defB}. \end{pf} \begin{claim}\label{cl2} $T$ is a strict contraction on $\cal B$. \end{claim} \begin{pf} Indeed, similarly to Claim~\ref{cl1}, for $u_1,u_2\in\cal B$ one has : \begin{eqnarray*} &&\|Tu_1-Tu_2\|_{\cal C}\le\\ \max_t \{ &&\left\| \int\limits_{t}^\tau e^{-A_+(s-t)} (I-P) [g(s,u_1(s))-g(s,u_2(s))]\, ds \right\|_Z +\\ &&\left\| \int\limits_{-\infty}^t e^{A_-(t-s)}P [g(s,u_1(s))-g(s,u_2(s))]\, ds \right\|_Z \}\\ &&\leq \dfrac{1}{\beta} C M M_1 k(r) (\|P\|+\|I-P\|) \|u_1-u_2\|_{\cal C}\le \dfrac14 \|u_1-u_2\|_{\cal C} \end{eqnarray*} by \eqref{est} and \eqref{main1}. \end{pf} Therefore, the equation $u=Tu$ has a unique solution $u_*(\cdot)\equiv u_{\tau,x_0}(\ cdot)$ in $\cal B$. \begin{claim}\label{cl3} $u_*$ is a solution of \eqref{inteq} with $x\equiv u_{\tau,x_0}(t_0)$. \end{claim} \begin{pf}Indeed, we project $u_*=Tu_*$ on $\ImP$ to obtain: \begin {eqnarray}\label{Pcomp} &&Pu_*(t)= \int\limits_{-\infty}^t e^{A_-(t-s)}P g(s,u_*(s))\, ds=\\ &&e^{A_-(t-t_0)}\left[ \int\limits_{-\infty}^{t_0} e^{A_-(t_0-s)}P g(s,u_*(s))\, ds\right] +\int\limits_ {t_0}^t e^{A_-(t-s)}P g(s,u_*(s))\, ds.\nonumber \end{eqnarray} Similarly, \begin{eqnarray}\label{Qcomp} (I-P)u_*(t) & = & e^{-A_+(\tau-t)}x_0- \int\limits_{t}^\tau e^{-A_+(s-t)}(I-P) g(s,u_*(s))\, ds\\ & = & e^{A_+(t-t_0)}\left[ e^{A_+(t_0-\tau)}x_0- \int\limits_{t_0}^\tau e^{A_+(t-s)}(I-P) g(s,u_*(s))\, ds \right]\nonumber\\ & + & \int\limits_{t_0}^t e^{A_+(t-s)}(I-P) g(s,u_*(s))\, ds.\ nonumber \end{eqnarray} Since \begin{eqnarray*} x=u_{\tau,x_0}(t_0) & = & e^{A_+(t_0-\tau)}x_0- \int\limits_{t_0}^\tau e^{A_+(t-s)}(I-P) g(s,u_*(s))\, ds\\ & + & \int\limits_{-\infty}^{t_0} e^{A_- (t_0-s)}P g(s,u_*(s))\, ds, \end{eqnarray*} we see that $u_*$ satisfies \eqref{inteq} just by adding \eqref{Pcomp} and \eqref{Qcomp}. \end{pf} To finish the proof of the theorem, let $\epsilon=\ frac78\|x_0\|_X$. For $n\in\Bbb N$ and $\tau=t_0+n$ construct $u_*(\cdot)\equiv u_{t_0+n,x_0}(\cdot)$ as above and denote $x_n=u_{t_0+n,x_0}(t_0)$. Since $u_*\in\cal B_{t_0+n,x_0}$, one has \[\|x_n\| _X\le M\|x_n\|_Z=M\|u_{t_0+n,x_0}(t_0)\|_Z \le re^{-\frac{\beta}{2}n}\to 0 \mbox{ as } n\to\infty.\] By Claim~\ref{cl3}, $u_*=u_*(\cdot,\,x_n)$ is a mild solution for \eqref{Cp} with $x=x_n$. It remains to show that, for $t_n\equiv\tau$, $$\label{final} \|u_*(t_n,x_n)\|_X= \|u_{\tau,x_0}(\tau)\|_X\ge \epsilon=\dfrac78\|x_0\|_X.$$ Indeed, as in Claim~\ref{cl1}, one has: \begin{eqnarray*}\ label{last} \|u_*(\tau)-x_0\|_Z& = &\left\| \int\limits_{-\infty}^{\tau} e^{A_-(t_0-s)}P g(s,u_*(s))\, ds \right\|_Z \mbox{ (using \eqref{main0}) } \\ & \le & \dfrac{\|\tilde{x}_0\|_X}{8} \cdot\dfrac {r}{M}=\dfrac{\|x_0\|_X}{8M}. \nonumber \end{eqnarray*} Now the estimate \[\|x_0\|_X -\|u_*(\tau)\|_X\le \|u_*(\tau)-x_0\|_X\le M \|u_*(\tau)-x_0\|_Z\le\dfrac{\|x_0\|_X}{8}\] gives \eqref{final}. \ end{pf} \begin{thebibliography}{00} \bibitem {[AHS]} H. Amann, M. Hieber and G. Simonett, {\em Bounded $H_\infty$-calculus for elliptic operators,} Diff. Int. Eqns. {\bf 7} (1994) 613--653. \bibitem {AB} W. Arendt and C.J.K. Batty, {\em Tauberian theorems and stability of one-parameter semigroups,} Trans. Amer. Math. Soc. {\bf306} (1988), 837--852. \bibitem{BGK} H.~Bart, I.~Gohberg, and M.~A.~Kaashoek, {\em Wiener-Hopf factorization, inverse Fourier transform and exponentially dichotomous operators,} J.~Funct.~Anal. {\bf 68} (1986)i, 1--42. \bibitem{BV} C.J.K. Batty and V\~u Qu\^oc Ph\'ong, {\em Stability of individual elements under one-\-para\-meter semigroups,} Trans. Amer. Math. Soc. {\bf322} (1990) 805--818. \bibitem{Chow} S.-N.~Chow and K.~Lu, {\em Invariant manifolds for flows in Banach spaces,} J.~Diff. Eqns. {\bf 74} (1988), 285--317. \bibitem{DK} J.~Daleckij and M.~Krein, ``Stability of Differential Equations in Banach Space,'' AMS, Providence, RI, 1974. \bibitem {Da} E.B. Davies, ``One-Parameter Semigroups,'' Academic Press, London, 1980. \bibitem{dL1} R. deLaubenfels, {\em Incomplete iterated Cauchy problems,} J. Math. Anal. and Appl. {\bf 168} (1992), 552--579. \bibitem{dL2} R. deLaubenfels, {\em Unbounded holomorphic functional calculus and abstract Cauchy problems for operators with polynomially bounded resolvent,} J. Func. Anal. {\bf 114} (1993), 348--394. \bibitem{dL3} R. deLaubenfels, ``Existence Families, Functional Calculi and Evolution Equations,'' Lecture Notes in Math. {\bf 1570}, Springer, Berlin, 1994. \bibitem{dLK} R. deLaubenfels and S. Kantorovitz, {\em Laplace and Laplace-Stieltjes spaces,} J. Func. Anal {\bf 116} (1993), 1--61. \bibitem{dLV} R. deLaubenfels and V\~u Qu\^oc Ph\'ong, {\em Stability and almost periodicity of solutions of ill-posed abstract Cauchy problems,} Proc. Amer. Math. Soc., to appear. \bibitem{Du} X. T. Duong, {\em $H^{\infty}$-functional calculus for second order elliptic partial differential operators on $L^p$-spaces,} In: I. Doust, B. Jefferies, C. Li, and A. McIntosh, eds., Operators in Analysis, Sydney, Australian National University, 1989, 91--102. \bibitem{G} J.A. Goldstein, ``Semigroups of Linear Operators and Applications,'' Oxford Univ. Press, Oxford, 1985. \bibitem{Hale} J.~Hale and S.~M.~Verduyn Lunel, ``Introduction to Functional Differential Equations,'' Appl. Math. Sci. {\bf 99}, Springer-Verlag, New York, 1993. \bibitem{Henry} D.~Henry, ``Geometric Theory of Nonlinear Parabolic Equations,'' Lecture Notes in Math. {\bf 840}, Springer-Verlag, New York, 1981. \bibitem {HP} M. Hieber and J. Pr\"uss, {\em $H^{\infty}$-calculus for generators of bounded $C_0$-groups and positive contraction semigroups on $L^p$-spaces,} preprint. \bibitem{Huang} S. Huang, {\em Characterizing spectra of closed operators through existence of slowly growing solutions of their Cauchy problems,} Studia. Math., to appear. \bibitem{KVL} M.~A.~Kaashoek and S.~M.~Verduyn Lunel, {\em An integrability condition on the resolvent for hyperbolicity of the semigroup,} J. Diff. Eqns. {\bf 112} (1994), 374--406. \bibitem{K} S. Kantorovitz, {\ em The Hille-Yosida space of an arbitrary operator,} J. Math. Anal. and Appl. {\bf 136} (1988), 107--111. \bibitem{Kato} N.~Kato, {\em A principle of linearized stability for nonlinear evolution equations}, Trans. AMS, to appear. \bibitem{KLC} S.G. Krein, G.I. Laptev and G.A. Cretkova, {\em On Hadamard correctness of the Cauchy problem for the equation of evolution,} Soviet Math. Dokl. {\bf 11} (1970), 763--766. \bibitem{LMS} Y.~Latushkin and S.~Montgomery-Smith, {\em Evolutionary semigroups and Lyapunov theorems in Banach spaces,} J.~Funct.~Anal. {\bf 127} (1995), 173--197. \bibitem {LatMSR} Y.~Latushkin, S.~Montgomery-Smith, T.~Randolph, {\em Evolutionary semigroups and dichotomy of linear skew-product flows on locally compact spaces with Banach fibers,} J. Diff. Eqns., to appear. \bibitem{Sh} A.~Lunardi, ``Analytic Semigroups and Optimal Regularity in Parabolic Problems,'' Birkh\"{a}user, Basel, 1995. \bibitem{LV} Yu.I. Lyubich and V\~u Qu\^oc Ph\'ong, {\em Asymptotic stability of linear differential equations on Banach spaces,} Studia Math. {\bf 88} (1988), 37--42. \bibitem{MS} J.~Massera and J.~Schaffer, ``Linear Differential Equations and Function Spaces,'' Academic Press, NY, 1966. \bibitem{N} R. Nagel et al., ``One-Parameter Semigroups of Positive Operators,'' Lecture Notes in Math. {\bf 1184}, Springer, Berlin, 1986. \bibitem{P} A. Pazy, ``Semigroups of Linear Operators and Applications to Partial Differential Equations,'' Springer, New York, 1983. \bibitem{SS} R.~Sacker and G.~Sell, {\em Dichotomies for linear evolutionary equations in Banach spaces,} J. Diff. Eqns. {\bf 113} (1994), 17--67. \bibitem{SellBibl} G.~Sell, References on Dynamical Systems, IMA Preprint Series no. 1300, 1995. \bibitem{V2} Vu Quoc Phong, {\em On the spectrum, complete trajectories and asymptotic stability of linear semi-dynamical systems,} J. Diff. Eqns. {\bf 115} (1993), 30--45. \end{thebibliography} \end{document}
{"url":"http://ejde.math.txstate.edu/Volumes/1995/13/deLaubenfels-tex","timestamp":"2014-04-16T07:32:35Z","content_type":null,"content_length":"49231","record_id":"<urn:uuid:ae9ce86d-2b38-4b48-996d-b5ff114c2ee0>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Stamford, CT Science Tutor Find a Stamford, CT Science Tutor ...I I assist the student with building vocabulary, roots, prefixes and suffixes. I help the student to understand nouns, verbs, adverbs and adjectives (their similarities and differences), and when and how to use them. I help the student to build Reading Comprehension skills and Writing. 47 Subjects: including chemistry, ACT Science, physical science, SAT math ...While copying is a powerful tool for learning, one must integrate the lessons of history and contemporary practice. My students emerge from their technical studies with abilities and a vision that is uniquely theirs. With a Bachelor's degree in Painting, a Masters of Fine Art degree in Historic... 13 Subjects: including psychology, anatomy, dyslexia, special needs ...I recently elected early retirement, so my availability for tutoring is fairly flexible. I have extensive experience in teaching the following courses: Introductory/AP Biology, General Biology, Histology, Molecular Cell Biology, Genetics, Biology of Cancer, Neuroscience, DNA Technology, Bioethic... 3 Subjects: including biology, genetics, nutrition ...I can also help in translating from/to English or Italian and I'm highly skilled in writing with intensive publications. I will work with your schedule and meet you at a location convenient to you. I am qualified to tutor anthropology. 3 Subjects: including archaeology, anthropology, Italian ...As an undergraduate student I went to tutors myself and can definitely understand the frustration and difficulty involved when trying to pick up foreign concepts. Since textbooks can sometimes seem cryptic in nature, I try to relay information through previously learned concepts and ideas. When... 6 Subjects: including biology, physiology, physical science, ecology Related Stamford, CT Tutors Stamford, CT Accounting Tutors Stamford, CT ACT Tutors Stamford, CT Algebra Tutors Stamford, CT Algebra 2 Tutors Stamford, CT Calculus Tutors Stamford, CT Geometry Tutors Stamford, CT Math Tutors Stamford, CT Prealgebra Tutors Stamford, CT Precalculus Tutors Stamford, CT SAT Tutors Stamford, CT SAT Math Tutors Stamford, CT Science Tutors Stamford, CT Statistics Tutors Stamford, CT Trigonometry Tutors Nearby Cities With Science Tutor Astoria, NY Science Tutors Bridgeport, CT Science Tutors Bronx Science Tutors Cos Cob Science Tutors Darien, CT Science Tutors Flushing, NY Science Tutors Glenbrook, CT Science Tutors Greenwich, CT Science Tutors New Rochelle Science Tutors Norwalk, CT Science Tutors Old Greenwich Science Tutors Ridgeway, CT Science Tutors Riverside, CT Science Tutors White Plains, NY Science Tutors Yonkers Science Tutors
{"url":"http://www.purplemath.com/Stamford_CT_Science_tutors.php","timestamp":"2014-04-16T10:27:48Z","content_type":null,"content_length":"23810","record_id":"<urn:uuid:4c322dcb-1539-4aba-88e6-65fd28335d61>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Variance matrix differences May 15, 2013 By Pat Torturing portfolios to give different volatilities between a factor model and Ledoit-Wolf shrinkage. There have been posts on: Two of the several ways to produce an estimate of the variance matrix of asset returns is a statistical factor model and Ledoit-Wolf shrinkage. Can we learn anything about what is happening when the two estimates give different answers for a portfolio? We can manufacture such portfolios by generating random portfolios that satisfy some constraints including that the portfolio variances have some minimum difference. Daily returns during 2012 were used for 442 large cap US stocks. The variance estimates were based on these returns. The portfolios were generated as of the end of 2012. All of the portfolios that were created had the constraints: • long-only • exactly 10 assets in the portfolio • the minimum weight of assets in the portfolio is 0.5% Many of the portfolios had additional constraints on variance. All sets of random portfolios consist of 1000 portfolios. Figure 1 shows the distribution of the volatility for portfolios that obey only the constraints given above. Figure 1: Distribution of volatility of random portfolios with 10 assets, are long-only and meet the minimum weight constraint. The volatility range of 14% to 15% was selected as an additional constraint. Figure 2 shows the distribution of the difference in estimated portfolio variances for portfolios generated with the basic constraints plus the volatility range. Figure 2: Distribution of variance differences for portfolios with Ledoit-Wolf volatility between 14% and 15%. We see that in this case the factor model is likely to give a smaller estimate of volatility than Ledoit-Wolf. Which one is more likely to have a larger value depends on the particular constraints. The picture when the factor model is used to estimate volatility is very similar to Figure 2 — that is not always the case with other constraints. The 10 and -10 on the x-axis is a difference of about 0.8% to 0.9% in volatility for the volatility range we are in. That is the minimum difference that is imposed for the portfolios that have divergent volatility estimates. Figure 3 shows the fraction of random portfolios with 14% to 15% Ledoit-Wolf volatility that each asset appears in versus the asset volatility. Figure 3: Fraction of portfolios in which each asset appears versus the asset volatility for portfolios with 14% to 15% Ledoit-Wolf volatility. We see that the higher volatility assets are less likely to appear. This is because the restriction to a volatility of 14% to 15% is below average. The selection effect is quite mild though. Figures 4 and 5 compare the volatility estimates for the portfolios that have a difference of variances imposed. Figure 4: Ledoit-Wolf volatility versus factor model volatility for portfolios with larger Ledoit-Wolf volatility. Figure 5: Ledoit-Wolf volatility versus factor model volatility for portfolios with larger factor model volatility. In Figure 4 the full range of 14% to 15% volatility is represented with some of the portfolios having a substantially larger difference than the limit imposed. There is a tendency for portfolios to have a larger volatility (but that is true without the difference constraint). Figure 5 hints at the difficulty of imposing the difference in this direction. Only the upper range of volatility is represented, almost all the portfolios are near 15% Ledoit-Wolf volatility, and the difference is never much larger than the minimum imposed. Figures 6 and 7 show the occurrence of individual stocks in the portfolios with the variance differences imposed. Figure 6: Fraction of portfolios in which each asset appears versus the asset volatility for portfolios with larger Ledoit-Wolf volatility. Figure 7: Fraction of portfolios in which each asset appears versus the asset volatility for portfolios with larger factor model volatility. There is quite strong selection for particular assets — especially for the larger factor model case. Below we look at the five most popular assets as shown in Figures 6 and 7 in terms of the diferences in the variance matrix estimates. Big Ledoit-Wolf popular assets Ledoit-Wolf variance (for percent returns): DF BBY NFLX BIG FDO DF 10.20 1.73 0.21 0.85 0.52 BBY 1.73 11.37 2.58 1.47 0.57 NFLX 0.21 2.58 18.52 3.31 1.28 BIG 0.85 1.47 3.31 8.92 1.21 FDO 0.52 0.57 1.28 1.21 2.51 Factor model minus Ledoit-Wolf variance (for percent returns): DF BBY NFLX BIG FDO DF -0.04 -1.15 0.25 -0.74 -0.24 BBY -1.15 -0.04 -1.09 -0.75 -0.21 NFLX 0.25 -1.09 -0.07 -2.12 -0.75 BIG -0.74 -0.75 -2.12 -0.04 -0.80 FDO -0.24 -0.21 -0.75 -0.80 -0.01 Ledoit-Wolf correlation: DF BBY NFLX BIG FDO DF 1.00 0.16 0.01 0.09 0.10 BBY 0.16 1.00 0.18 0.15 0.11 NFLX 0.01 0.18 1.00 0.26 0.19 BIG 0.09 0.15 0.26 1.00 0.26 FDO 0.10 0.11 0.19 0.26 1.00 Factor model minus Ledoit-Wolf correlation: DF BBY NFLX BIG FDO DF 0.00 -0.11 0.02 -0.08 -0.05 BBY -0.11 0.00 -0.07 -0.07 -0.04 NFLX 0.02 -0.07 0.00 -0.16 -0.11 BIG -0.08 -0.07 -0.16 0.00 -0.17 FDO -0.05 -0.04 -0.11 -0.17 0.00 Big factor model popular assets Ledoit-Wolf variance (for percent returns): CA MOLX EMN AON ADI CA 2.05 0.82 0.54 0.30 0.72 MOLX 0.82 1.99 1.51 0.76 1.10 EMN 0.54 1.51 4.05 1.07 1.30 AON 0.30 0.76 1.07 1.08 0.66 ADI 0.72 1.10 1.30 0.66 1.50 Factor model minus Ledoit-Wolf variance (for percent returns): CA MOLX EMN AON ADI CA -0.01 0.11 0.47 0.20 0.10 MOLX 0.11 0.11 0.21 0.07 0.15 EMN 0.47 0.21 -0.02 -0.03 0.13 AON 0.20 0.07 -0.03 0.00 0.02 ADI 0.10 0.15 0.13 0.02 0.09 Ledoit-Wolf correlation: CA MOLX EMN AON ADI CA 1.00 0.41 0.19 0.20 0.41 MOLX 0.41 1.00 0.53 0.52 0.64 EMN 0.19 0.53 1.00 0.51 0.53 AON 0.20 0.52 0.51 1.00 0.52 ADI 0.41 0.64 0.53 0.52 1.00 Factor model minus Ledoit-Wolf correlation: CA MOLX EMN AON ADI CA 0.00 0.04 0.16 0.14 0.05 MOLX 0.04 0.00 0.06 0.03 0.05 EMN 0.16 0.06 0.00 -0.01 0.04 AON 0.14 0.03 -0.01 0.00 0.00 ADI 0.05 0.05 0.04 0.00 0.00 Anyone see any significance in the assets that are selected for in the portfolios with big differences? The differences in correlation estimates seem to be driving the differences in portfolio volatility estimates. Given that there are almost 100,000 correlations, the correlation differences that have the largest impact don’t seem all that big. Portfolios with 10 assets are more likely to have large discrepancies than larger portfolios. Hence it seems unlikely that volatility estimates for real portfolios will differ much between the two Both of us say there are laws to obey But frankly I don’t like your tone from “Different Sides” by Leonard Cohen Appendix R Computations and plots were done in R. variance estimation The two variance estimates use functions from the BurStFin package. fm12 <- factor.model.stat(initret12) lw12b <- var.shrink.eqcor(initret12, tol=1e-5) As “Correlations and positive-definiteness” points out, the default value of the tol argument in the Ledoit-Wolf estimate is over zealous about making sure that there are no portfolios estimated to have very small variance. (The default will be changed when the package is updated.) generate wild portfolios The generation and manipulation of the random portfolios depends on the Portfolio Probe package. 1000 portfolios with just the initial three constraints are generated with: rp10wild <- random.portfolio(1000, priceEnd12, port.size=c(10,10), gross=1e6, long.only=TRUE, There is an additional constraint in the command that the gross value needs to be close to $1 million. While a specification of the amount of money in the portfolios is mandatory, it has no effect on our results (unless the amount is tiny). The min.weight.thresh argument is new in Portfolio Probe version 1.06. get volatility The volatility estimate for each of the portfolios is produced with: vol.rp10wild <- sqrt(unlist(randport.eval(rp10wild, additional.args=list(variance=lw12b))) * 252) * 100 This first produces a vector of the Ledoit-Wolf estimates of the variance of each portfolio, and then transforms that into volatility. generate volatility restricted portfolios We want to restrict volatility to the range of 14% to 15%. However, the software thinks in terms of variance rather than volatility. So we need to transform to the variance scale that we have. We also need the bounds for the variance constraint to be a two-column matrix: vcmat1 <- matrix(c(.14, .15)^2/252, 1, 2) Now we use this matrix: rp10vtest <- random.portfolio(1000, priceEnd12, port.size=c(10,10), gross=1e6, long.only=TRUE, min.weight.thresh=.005, variance=lw12b, see variance differences Now we can see what the difference is between the two variance estimates for each of these portfolios: vardif.rp10vtest <- unlist(randport.eval(rp10vtest, We change what it thnks the problem is by putting in a different value for the variance matrix and removing the variance constraint. You may be wondering if the difference of variance matrices really gives us the portfolio variance differences. The portfolio variance is the double sum over assets of the (i,j) position in the variance matrix times the i-th weight times the j-th weight. So it is linear in the variance matrix values. count assets in portfolios The command to count how many times each asset appears in a portfolio is: acount.rp10vtest <- table(sapply(rp10vtest, names)) generate variance difference portfolios Now we know what variance differences are feasible. We want to impose two different variance constraints: • volatility is 14% to 15% (according to one of the variance matrices) • the difference of variances is at least 1e-5 To do this we need to provide two variance matrices in the form of a three-dimensional array, where each slice of the third dimension is a variance matrix. We also need a constraint on each of the variances. So we need a 2 by 2 matrix: columns are minimum and maximum allowed, rows are for the different variances. vcmat2 <- rbind(vcmat1, c(1e5, Inf)) The random portfolios are generated with: rp10biglw <- random.portfolio(1000, priceEnd12, port.size=c(10,10), gross=1e6, long.only=TRUE, variance=threeDarr(fm12, lw12b-fm12), The threeDarr function is in the BurStFin package and it stacks matrices into a three-dimensional array. difficult constraints Figure 2 shows that the variance difference where Ledoit-Wolf is bigger (what the command above is getting) is rare, but not exceedingly rare. But we might think that the difference where the factor model is bigger could be impossible. As Figure 5 shows, it isn’t impossible but barely. When generating random portfolios, you want it to give up trying if it can’t find any within a reasonable amount of time — you don’t want to have to throw your computer away every time you ask it to do something impossible. The default settings suggested that the factor-model-bigger problem was impossible. Changing the settings so it worked harder on each try, and was less frustrated with failure allowed the generation to go ahead. The time to generate random portfolios heavily depends on the constraints. To get 1000 portfolios that satisfied the Ledoit-Wolf bigger constraint took 5 seconds. To get the factor model bigger constraint it took just under 23 hours. selected variances and correlations To get the most selected names: acnam.biglw <- rev(names(tail(sort(acount.rp10biglw),5))) Variance matrix selection was: round(lw12b[acnam.biglw, acnam.biglw] * 1e4, 2) Correlation matrix selection was: round(cov2cor(lw12b[acnam.biglw, acnam.biglw]), 2) daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/variance-matrix-differences/","timestamp":"2014-04-20T21:08:14Z","content_type":null,"content_length":"50678","record_id":"<urn:uuid:321f6c60-9b15-4a9e-b352-9686b39b4122>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/why.math/answered/1","timestamp":"2014-04-17T10:06:10Z","content_type":null,"content_length":"86319","record_id":"<urn:uuid:7c017d30-f8a2-4927-bf7e-c7a7bd187585>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Vacuum conductance calculation, tube closed by a disc. Hi ! I'm trying to calculate a conductance for free molecular flow of a tube which is closed by a disc. The distance between the tube and the disc will be approx 10 micro meters while inner and outer radius of the tube is around r1~10cm ; r2~20cm. Here are some pictures: The conductance of the tube is given in most textbooks on this subject, but I don't know how to find a conductance of a volume enclosed by the disc and the wall thickness (r2-r1) of the tube. My first idea was to approximate the enclosed volume by rectangular tubes and then integrate the relation for rectangular tube conductance from 0 to 2*Pi*r2, that is: where c is the rec. tube height, b ist the tube thickness and L=r2-r1 is the length of the tube. But I'm not sure whether this is good.
{"url":"http://www.physicsforums.com/showthread.php?t=687459","timestamp":"2014-04-20T18:23:08Z","content_type":null,"content_length":"20453","record_id":"<urn:uuid:7dfcb925-78e4-440f-9604-b24265a8161d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Derwood Statistics Tutor Find a Derwood Statistics Tutor ...I have experience tutoring in many subjects, but my specialty is test prep for college and medical school. I took the MCAT in March of 2013, scoring in the 95th percentile, and I took both the SAT and ACT with scores at or above the 95th percentile. I can help with subject mastery and test taking tips and strategies. 39 Subjects: including statistics, reading, chemistry, Spanish ...I also have over 20 years of research experience in the social sciences, most recently in the fields of early education and health care. Whether you need help planning your research project, cleaning and managing your data, entering your data, analyzing your data, writing up your results, or lea... 6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word ...I have actively used Geometry in my work at the NASA/Goddard Space Flight Center in the mathematical modeling of the Earth's Land/Ocean/Atmosphere System. I have more than 30 years experience at the NASA/Goddard Space Flight Center in studying and modeling the physics of the Earth's Atmosphere/L... 39 Subjects: including statistics, chemistry, physics, writing ...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way to learn.Have studied and scored high marks in econometric... 14 Subjects: including statistics, calculus, geometry, algebra 1 ...I have taken several Praxis Tests and have done very well on all of them. My scores highly qualify me to teach all of the math and science curricula at the middle school and high school levels. My scores are as follows: Praxis 1: 550/570 MS Science: 198/200 MS Math: 195/200 Chemistry: 177/200 ... 31 Subjects: including statistics, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Derwood_Statistics_tutors.php","timestamp":"2014-04-17T01:40:56Z","content_type":null,"content_length":"24063","record_id":"<urn:uuid:23f390c2-3508-47b4-b067-eef90938d054>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
real numbers I need help with this question, Does any open interval in R have a maximum? Explain your answer. Thomas If it does, is it open? What say you? There must be a defintion of "Open Interval" sitting about somewhere. Why not have a good, close look at it? i know it doesnt have any end points because you can always get a little bit more larger for example 0.1 then 0.11 but i dont know to explain correctly That may or may not be correct. It depends on which endpoint 0.1 is. However, this is the essential point: between any two real numbers there is a third number. If $x\in (a,b)$ then $a<x<b$. Therefore $\left( {\exists y \in (x,b)} \right)\left[ {x < y < b} \right]$. So can $(a,b)$ have a maximal element? Indirect proof: Suppose the open interval, (a, b), does have a maximum, M. Since M is in (a, b), a< M< b. Let $N= \frac{M+b}{2}$. Prove: 1) N< b. 2) a< M< N so N is in (a, b) 3) M< N, contradicting the hypothesis. Do you understand the difference between a "maximum" and a "supremum" (least upper bound)? That is crucial here. Last edited by Plato; July 31st 2011 at 05:42 AM. Reason: LaTeX fix
{"url":"http://mathhelpforum.com/differential-geometry/185346-real-numbers.html","timestamp":"2014-04-17T22:03:59Z","content_type":null,"content_length":"48580","record_id":"<urn:uuid:36b3acc7-3895-4147-a092-16f9f3320e21>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the equation for the slope of a line and what's the standard form equation? Analytic geometry, or analytical geometry, has two different meanings in mathematics. The modern and advanced meaning refers to the geometry of analytic varieties. This article focuses on the classical and elementary meaning. In classical mathematics, analytic geometry, also known as coordinate geometry, or Cartesian geometry, is the study of geometry using a coordinate system and the principles of algebra and analysis. This contrasts with the synthetic approach of Euclidean geometry, which treats certain geometric notions as primitive, and uses deductive reasoning based on axioms and theorems to derive truth. Analytic geometry is widely used in physics and engineering, and is the foundation of most modern fields of geometry, including algebraic, differential, discrete, and computational Elementary algebra encompasses some of the basic concepts of algebra, one of the main branches of mathematics. It is typically taught to secondary school students and builds on their understanding of arithmetic. Whereas arithmetic deals with specified numbers, algebra introduces quantities without fixed values, known as variables. This use of variables entails a use of algebraic notation and an understanding of the general rules of the operators introduced in arithmetic. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Most quantitative results in science and mathematics are expressed as algebraic equations. A linear equation is an algebraic equation in which each term is either a constant or the product of a constant and (the first power of) a single variable. Linear equations can have one or more variables. Linear equations occur with great regularity in most subareas of mathematics and especially in applied mathematics. While they arise quite naturally when modeling many phenomena, they are particularly useful since many non-linear equations may be reduced to linear equations by assuming that quantities of interest vary to only a small extent from some "background" state. Linear equations do not include exponents. Geometry (Ancient Greek: γεωμετρία; geo- "earth", -metron "measurement") is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a body of practical knowledge concerning lengths, areas, and volumes, with elements of a formal mathematical science emerging in the West as early as Thales (6th Century BC). By the 3rd century BC geometry was put into an axiomatic form by Euclid, whose treatment—Euclidean geometry—set a standard for many centuries to follow. Archimedes developed ingenious techniques for calculating areas and volumes, in many ways anticipating modern integral calculus. The field of astronomy, especially mapping the positions of the stars and planets on the celestial sphere and describing the relationship between movements of celestial bodies, served as an important source of geometric problems during the next one and a half millennia. Both geometry and astronomy were considered in the classical world to be part of the Quadrivium, a subset of the seven liberal arts considered essential for a free citizen to master. The introduction of coordinates by René Descartes and the concurrent developments of algebra marked a new stage for geometry, since geometric figures, such as plane curves, could now be represented analytically, i.e., with functions and equations. This played a key role in the emergence of infinitesimal calculus in the 17th century. Furthermore, the theory of perspective showed that there is more to geometry than just the metric properties of figures: perspective is the origin of projective geometry. The subject of geometry was further enriched by the study of intrinsic structure of geometric objects that originated with Euler and Gauss and led to the creation of topology and differential geometry. A line drawing algorithm is a graphical algorithm for approximating a line segment on discrete graphical media. On discrete media, such as pixel-based displays and printers, line drawing requires such an approximation (in nontrivial cases). On continuous media, by contrast, no algorithm is necessary to draw a line. For example, oscilloscopes use natural phenomena to draw lines and curves. In geometry, line coordinates are used to specify the position of a line just as point coordinates (or simply coordinates) are used to specify the position of a point. There are several possible ways to specify the position of a line in the plane. A simple way is by the pair (m, b) where the equation of the line is y =mx + b. Here m is the slope and b is the -interceptx. This system specifies coordinates for all lines that are not vertical. However, it is more common and simpler algebraically to use coordinates (l, m) where the equation of the line is lx + my + 1 = 0. This system specifies coordinates for all lines except those that pass through the origin. The geometrical interpretations of l and m are the negative reciprocals of the x and -intercepty respectively. Related Websites:
{"url":"http://answerparty.com/question/answer/what-is-the-equation-for-the-slope-of-a-line-and-what-s-the-standard-form-equation","timestamp":"2014-04-20T21:42:24Z","content_type":null,"content_length":"33315","record_id":"<urn:uuid:df3bebb1-bc54-4d4d-8b02-6e86da2b2001>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
The go-to guy | EE Times The go-to guy A bunch of us engineers were sitting together for lunch in a company cubicle when we were interrupted by someone whom the company had hired as a resident mathematician. If any of us ran into something that required mathematics beyond our personal skill sets, this fellow was our go-to guy. He announced to all of us: "I am better than you." After we recovered from our collective astonishment, one of us asked what he was talking about. He replied: "How would you get the first derivative of the arctangent function?" I held up a mathematics textbook. "I don't need that," he said. "I know it off the top of my head and you don't. That's why I'm better than you." For the sake of keeping this text fit for family consumption, I won't go any further into the ensuing commentary except to say that it was quite colorful, but wouldn't you know it, I actually found something later on to ask this mathematician about. I had an eleven pole filter that had been designed into a digital multimeter. I wanted to know if the roots of the eleventh order polynomial of that filter's transfer function could be found; could they be factored out. The answer I got from the mathematician was "no," but he couldn't tell me why that was the case. In fact, it was the case. The mathematician was right, but I only learned why later from a biographical article in Scientific American about the French mathematician, Évariste Galois (October 25, 1811 – May 31, 1832) who, if I got this right, had sought to find a generalized method of factoring a polynomial of any order and proved that there was no such general method for polynomials of greater than fifth order. Galois' work was the beginning of what is today called "group theory." Because our resident mathematician was who he was, because of the offensive attitude he displayed, I didn't really believe him. He had lost his credibility with me and as I later saw, with the others There was a life lesson in that. (John Dunn is an electronics consultant at Ambertec, P.E., P.C. in Merrick, N.Y., a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE) and is a member and former Chairman of The IEEE Consultants Network of Long Island (LICN)).
{"url":"http://www.eetimes.com/author.asp?section_id=36&doc_id=1284872","timestamp":"2014-04-18T00:19:01Z","content_type":null,"content_length":"161493","record_id":"<urn:uuid:d197a910-c8dd-496b-994a-6dd5b62acc0a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Rounding And Precision In Excel Rounding Errors In Excel Rounding Errors In Microsoft� Excel97� You can download this article as either a Word97 or RTF file. This articles applies to all versions of Microsoft Excel For Windows. This article was written by Chip Pearson, 27-Oct-1998. � Copyright, 1998, 1999, Charles H. Pearson Article Summary: This article describes the reasons why you may experience arithmetic errors in Microsoft Excel97. Article Contents: • Actual And Displayed Values • Floating Point Numbers • Worksheet Functions For Rounding • IEEE Floating Point Standard This article assumes that you are familiar with the following: • The Excel Application • Visual Basic For Applications (VBA) programming concepts • The Binary systems of numbers There may be times that the value that you see on an Excel worksheet does not equal the value that you believe that it should be. There are generally two possible causes of this problem. The first is that the numbers are not displayed to their full values. The second is a computer design issue. Neither of the two are "bugs" or problems with the design of Microsoft Excel or Windows. Excel stores numbers differently that you may have them formatted display on the worksheet. Under normal circumstances, Excel stores numeric values as "Double Precision Floating Point" numbers, or "Doubles" for short. These are 8-byte variables that can store numbers accurate to approximately 15 decimal places. You may have only two decimal places displayed on the worksheet, but the underlying value has the full 15 decimal places. The second problem arises from the fact that a computer, any computer, cannot store most fractional numbers with total accuracy. Computers, in general, use the IEEE (Institute Of Electrical And Electronic Engineers) standard for floating point numbers. This standard provides a way to store fractional numbers in the limited space of an 8-byte number. Of course, for most numbers, some approximation must be made. This article describes and explains the causes for errors that are due to either of the causes described above: the displayed formatted number and the internal errors associated with floating point Actual And Displayed Values Under normal circumstances, Excel always stores and manipulates numbers as 8-byte "Double Precision Floating Point" numbers, or "Doubles". Excel's internal storage of the number is not affected by the way that you may choose to format a number for display. For example, if a cell contains the formula =1/3, Excel always treats this value as 0.3333 , regardless of how many decimal places you choose to display on the worksheet. Even if you choose to display the value as simple "0.3", Excel still retains the complete number as the value of the cell. This can cause situations in which it may appear that Excel is making an error in calculation, when it is really not. For example, suppose we have the formula =1/3 in each of the three cells A1:A3. Formatting these cell for one decimal point would show "0.3" in each cell. Adding these three cells together with the SUM function will give the result 1.0. But 0.3 + 0.3 + 0.3 equals 0.9 not 1.0, right? The result would appear to be incorrect. Of course, it is not. Regardless of how you have the cells formatted for display, Excel uses the underlying value when doing calculations. In the example, you are not really adding 0.3 + 0.3 + 0.3, but rather 0.333333333333333 + 0.333333333333333 + 0.333333333333333 whose sum is (almost) 1.0. Excel does offer an option called "Precision As Displayed", which you can enable from Calculate tab on the Options dialog (Tools menu). Enabling this option forces Excel to use the displayed values in its calculations, rather than the underlying numbers. With this option enabled, the example above would indeed SUM to 0.9. You must be very careful when using this option, however. Once enabled, all precision is lost, and cannot be regained. All cells are calculated based on the displayed value. This option applies to the entire workbook, not to a specific cell or range of cells. Floating Point Numbers Excel, like nearly every other computer program, uses the IEEE Standard for Double Precision Floating Point numbers. This standard is described in detail, at the bit level, in a later section of this article. We can generalize it, though, to describe how Excel stores fractional numbers. Just as computers store integers as binary numbers, they store fractional numbers as binary fractions. Computers store an integer (whole number) value as ( x*1 + x*2 + x*4 + x*8 + x*16 etc) where x is the state of the bit. If the bit is on, x=1. If the bit of off, x=0. In this notation, any integer can be stored exactly. For example, the number 13 is stored in binary as 1101 which indicates, reading from left to right, 1*8 + 1*4 + 0*2 + 1*1 = 13. Fractional numbers are stored in a similar manner. In the binary system, fractional numbers are stored as the sum of a series of fractions: ( x*1/2 + x*1/4 + x*1/8 + x*1/16 etc) where x is the state of the bit. If the bit is on, x=1. If the bit of off, x=0. Unlike integers, however, not every fractional value can be stored exactly accurately. For example, it is impossible to store the number ^1/[10] = 0.1 in binary form. A close approximation is (0*1/2 + 0*1/4 + 0*1/8 + 1*1/16 + 1*1/32 etc). Computers carry this operation to the equivalent of 15 decimal places. Even with this accuracy, many numbers are represented as an approximation of their "true" or "analytic" value. For example, it is impossible to accurately describe the number ^1/[10] in 8-byte (or any length) binary notation. Floating point numbers can come extremely close to representing that number, but there will always be some very small error. It is important to note that these errors and limitations on fractional numbers are not really errors at all. Nor are they "bugs" in the programs. These are well-known and well-documented limitations of the floating point arithmetic systems in almost every software package and hardware device. Worksheet Functions For Rounding Excel provides you with several functions to handle rounding. These functions are listed below. • INT • MROUND • ROUND • ROUNDDOWN • ROUNDUP • TRUNC NOTE: The MROUND function is part of the Analysis ToolPak Add-In for Excel. You must have this package installed in order to use these functions. To install the ATP, go to the Tools menu, select Add-Ins, and place a check next to the Analysis ToolPak item. See the on-line help for more information about these functions. IEEE Floating Point Standard The section describes the internal format of 64-bit double precision floating point variables. The layout of a double is as follows: │Bit 63│62 52 │51 0 │ │ Sign │Exponent │Mantissa │ A number n is expressed in floating point format at n = (-1)^s * m * 2^e s is the value of the sign bit, m is the mantissa, and e is the exponent. The mantissa m is "normalized," which means that it is always scaled such that it is greater than or equal to 1, and less than 2. Therefore, the ones bit (2^0) is always set, and is not present in the actual number. This is called an implied bit. Since the mantissa is 52 bits, plus the implied ones bit, the precision of the number is stored to 53 bits, or 2^53 = 900,719,925,474,100, approximately 15 digits of precision. The exponent e is "biased". The number stored in the exponent bits is the actual exponent plus 1023, which ensures that it will always be positive. The "unbiased" value of the exponent, after subtracting the 1023 bias, can be between 1022 and +1023. (The cases of all exponent bits equal to 0 or equal to 1 are reserved. When the exponent bits are all zero, the exponent is treated as being fixed at 1022 and the mantissa is assumed to be between 0 and 1. This is called an "unnormalized" number, and is how the value 0 is stored in a double. This allows extremely small numbers to be stored, but with less precision. When all exponent bits are 1, this indicates that an error has occurred and for representing positive or negative infinities.) Therefore, the value 2^e can be between 2^(-1022) and 2^1023, or approximately 2.2*10^(-308) and 8.9*(10^307). Since the mantissa has a maximum value of just less than 2, (actually 2 1/(2^52) ), the maximum value of the floating point number is about 1.8*(10^308). Example: The number 10.4 can be expressed as a double precision floating point number as follows. As a binary fraction, 10.4 = 1010.011001100110011 . The number 10 is represented in binary as 1010, and the number 0.4 is represented as .011011011 . Of course, this is only a very close approximation of 0.4, since it cannot be stored exactly. There is no sum of (1/2 + 1/4 + 1/8 ) that is exactly equal to 0.4. To normalize the number, the "binary point" is shifted three places to the left, and the result multiplied by 2^3: 1.010011001100110011 We can see then that 10.4 = (-1)^0 * (1.010011001100110011 ) * 2^3 Therefore, the sign is 0, the exponent is 3, and the mantissa is 1.010011001100110011 . Since the mantissa is always greater than or equal to 1, only the portion to the right of the binary point is stored: 010011001100110011 . Since the exponent is 3, and 3 + 1023 = 1026, the exponent is stored in the variable as 10000000010. The number 10.4 is stored as In Hex notation, this would be The first 0 is the sign bit. The bold portion is the exponent, and the rest is the mantissa. Single precision floating point numbers, or "singles" are similar to doubles, except that they occupy 32 bits, rather than 64 bits, and have an 8 bit exponent rather than an 11 bit exponent. The bias of the exponent is 127 rather than 1023.
{"url":"http://www.cpearson.com/excel/rounding.htm","timestamp":"2014-04-16T16:00:34Z","content_type":null,"content_length":"16066","record_id":"<urn:uuid:98f126e6-2699-45d0-a175-ce349f8077fd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Derwood Statistics Tutor Find a Derwood Statistics Tutor ...I have experience tutoring in many subjects, but my specialty is test prep for college and medical school. I took the MCAT in March of 2013, scoring in the 95th percentile, and I took both the SAT and ACT with scores at or above the 95th percentile. I can help with subject mastery and test taking tips and strategies. 39 Subjects: including statistics, reading, chemistry, Spanish ...I also have over 20 years of research experience in the social sciences, most recently in the fields of early education and health care. Whether you need help planning your research project, cleaning and managing your data, entering your data, analyzing your data, writing up your results, or lea... 6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word ...I have actively used Geometry in my work at the NASA/Goddard Space Flight Center in the mathematical modeling of the Earth's Land/Ocean/Atmosphere System. I have more than 30 years experience at the NASA/Goddard Space Flight Center in studying and modeling the physics of the Earth's Atmosphere/L... 39 Subjects: including statistics, chemistry, physics, writing ...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way to learn.Have studied and scored high marks in econometric... 14 Subjects: including statistics, calculus, geometry, algebra 1 ...I have taken several Praxis Tests and have done very well on all of them. My scores highly qualify me to teach all of the math and science curricula at the middle school and high school levels. My scores are as follows: Praxis 1: 550/570 MS Science: 198/200 MS Math: 195/200 Chemistry: 177/200 ... 31 Subjects: including statistics, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Derwood_Statistics_tutors.php","timestamp":"2014-04-17T01:40:56Z","content_type":null,"content_length":"24063","record_id":"<urn:uuid:23f390c2-3508-47b4-b067-eef90938d054>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Kansas Geological Survey, Open-file Report 1999-57 Improved Definition of Hydraulic Conductivity Structure Using Multilevel Nonlinear Slug Tests C. D. McElwee, University of Kansas, and G. M. Zemansky, Compass Environmental, Inc. Prepared for Presentation at Fall AGU Meeting, San Francisco, CA Dec. 15, 1999 KGS Open-file Report 1999-57 The major control on the transport and fate of a pollutant as it moves through an aquifer is the spatial distribution of hydraulic conductivity. Although stochastic theories or fractal representations can represent the hydraulic conductivity in a generic sense, it is becoming increasingly apparent that site-specific features (such as high conductivity zones) need to be quantified in order to reliably predict contaminant movement and design a remediation plan for a given site. A field site in the Kansas River alluvium (coarse sand and gravel overlain by silt and clay) exhibits very high conductivities and nonlinear behavior for slug tests in the sand and gravel region. We know from extensive drilling, sampling, and a tracer test that the hydraulic conductivity varies a great deal spatially. The slug tests are performed in wells that are fully screened throughout the sand and gravel interval using a multilevel packer system with a piston for slug test initiation, allowing accurate determination of the initial head and starting time for the slug test. A general nonlinear model based on the Navier-Stokes equation, nonlinear frictional loss, non-Darcian flow, acceleration effects, radius changes in the wellbore, and a Hvorslev model for the aquifer has been developed (C.D. McElwee and M.A. Zenner, Water Resources Research, pp. 55-66, Jan. 1998). The nonlinear model has three parameters: β which is related to radius changes in the water column, A which is related to the nonlinear head losses, and K the hydraulic conductivity. We find that the model is quite robust in its estimates of K over varying conditions and allows a wide range of slug test data to be analyzed with a greater accuracy than traditional linear methods. One well has been extensively studied to determine the potential of nonlinear slug tests to accurately delineate the hydraulic conductivity distribution there. Results from a bromide tracer test indicate that there are two zones of considerably higher hydraulic conductivity in the vicinity of this well. The first series of multilevel slug tests was performed using a 2 foot slugged interval (17 locations) and was analyzed with the nonlinear model. The analysis results did indeed show the presence of two zones of higher hydraulic conductivity. A second series of multilevel slug tests was performed using a 1 foot slugged interval (34 locations) to see if more detail in the hydraulic conductivity distribution could be seen. Finally, a third series was used to test the region around one peak, using a 0.5 foot interval. After analysis with the nonlinear slug-test model, we do see more detail in the results for the hydraulic conductivity distribution. In some regions of low hydraulic conductivity, the results of the earlier survey are confirmed on average while some additional small structure is revealed. The largest difference in the two series occurs in delineating the two zones of higher hydraulic conductivity. The 1 foot and 0.5 foot interval surveys suggests that the gradation into zones of higher conductivity is sharper than seen previously and that the maximum conductivity observed in the high conductivity zones is larger than previously measured. The results at this point are very positive for better definition of the hydraulic conductivity distribution using multilevel slug tests analyzed with the nonlinear slug-test model. The major control on the transport and fate of a pollutant as it moves through an aquifer is the spatial distribution of hydraulic conductivity. Although stochastic theories or fractal representations can represent the hydraulic conductivity in a generic sense, it is becoming increasingly apparent that site-specific features (such as high conductivity zones) need to be quantified in order to reliably predict contaminant movement and design a remediation plan for a given site. A field site in the Kansas River alluvium (coarse sand and gravel overlain by silt and clay) exhibits very high conductivities and nonlinear behavior for slug tests in the sand and gravel region. We know from extensive drilling, sampling, and a tracer test that the hydraulic conductivity varies a great deal spatially. The slug tests are performed in wells that are fully screened throughout the sand and gravel interval using a multilevel packer system with a piston for slug test initiation, allowing accurate determination of the initial head and starting time for the slug test. • We have developed a Geohydrological Experiment and Monitoring Site (GEMS) • Located in Kansas River Alluvium • Coarse sand and gravel overlain by silt and clay • Highly permeable • Slug tests only last a few seconds Location map for the Geohydrologic Experimental and Monitoring Site (GEMS). Well Nests at GEMS A typical well nest is shown in the figure below. Typically there is a fully screened well and several wells with short screens completed at various depths. In some nests we may have a well completed into the bedrock. Typical Slug Test Arrangement • The figure below shows a typical slug test arrangement • h(t) is the head in the well at any time above the static value • Z[o] is the length of water below the static level to the top of the screen • b is the length of the screen Typical Wellbore for Slug Tests • Radius change due to packer • Radius change due to casing diameter change Recording of Slug-Test Data • The slug tests are over very quickly. • It is necessary to use a high quality data logger with high accuracy and fast sample rate. • Our data logger has 16 bit accuracy • We used a 20 Hz sample rate. • A fully screened GEMS Injection Well designed for a bromide tracer test was used. • Casing radius is 5 inches in diameter. • Packers were used above and below the slugged interval. • Riser pipe is 2 inches in diameter. • Slugged intervals of 2 foot, 1 foot, and 0.5 foot length were used. • The following photos show the packer and piston assemblies. Slug Test Initiation The slug tests are initiated either with pump rods attached to the piston or by a cable releasing a spring loaded piston. The following photo shows the cable arrangement for releasing the piston. When the handle is lifted, the piston is pulled to the open position by a spring under tension, thus starting the slug test. Typically each vertical location is tested with a series of 4 slug tests with varying initial heads by adding 4, 2, 1, and 2 liters of water. The repeat 2 liter tests are for quality control and to verify repeatability. Slug Test Response The 4 slug tests at each location are analyzed as a suite to test for nonlinear behavior and to give better noise suppression. It can be seen from the following plots (for the 2 liter test) of slug-test responses with depth, for the 2 foot slugged interval, that the nature of the slug tests vary from overdamped to oscillatory. It appears that there are two zones of higher conductivity where oscillations occur. The Model A general nonlinear model based on the Navier-Stokes equation, nonlinear frictional loss, non-Darcian flow, acceleration effects, radius changes in the wellbore, and a Hvorslev model for the aquifer has been developed. The nonlinear model has three parameters: β which is related to radius changes in the water column, A which is related to the nonlinear head losses, and K the hydraulic conductivity. We find that the model is quite robust in its estimates of K over varying conditions and allows a wide range of slug test data to be analyzed with a greater accuracy than traditional linear methods. The GEMS Injection Well has been extensively studied to determine the potential of slug testing to accurately delineate the hydraulic conductivity distribution. Results from a bromide tracer test indicate that there are two zones of considerably higher hydraulic conductivity in the vicinity of this well. The first series (4 tests at each location) of multilevel slug tests was performed using a 2 foot slugged interval (17 locations) and each suite of 4 tests was analyzed simultaneously with the nonlinear model. The analysis results (shown below) do indicate the presence of two zones of higher hydraulic conductivity. A second series (4 tests at each location) of multilevel slug tests was performed using a 1 foot slugged interval (34 locations) to see if more detail in the hydraulic conductivity distribution could be seen. Finally, a third series was used to test the region around one peak, using a 0.5 foot interval. The results of all these tests and analyses are shown below, along with the screen average calculated from a suite of slug tests over the entire screen length. Decreasing the slugged interval does show more detail in the results for the hydraulic conductivity distribution. In some regions of low hydraulic conductivity, the results of the various surveys approximately agree, while some additional small structure is revealed by the smaller slugged interval tests. The largest difference in the surveys occurs in delineating the two zones of higher hydraulic conductivity. The smaller interval surveys indicate that the gradation into zones of higher conductivity is sharper and more complex than seen previously. The maximum conductivity observed in the high conductivity zones is larger for the smaller interval tests. The results at this point are very positive for better definition of the hydraulic conductivity distribution using multilevel slug tests analyzed with the nonlinear slug-test model. McElwee, C.D., and Zenner, M., 1998, A nonlinear model for analysis of slug-test data: Water Resources Research., v. 34, no. 1, pp. 55-66. McElwee, C.D., 1998, Multilevel nonlinear slug tests to characterize high conductivity aquifers: KGS Open-File Report no. 98-62, 45 pp. Kansas Geological Survey, Geohydrology Placed online Nov. 14, 2007, original report dated Dec. 1999 Comments to webadmin@kgs.ku.edu The URL for this page is http://www.kgs.ku.edu/Hydro/Publications/1999/OFR99_57/index.html
{"url":"http://www.kgs.ku.edu/Hydro/Publications/1999/OFR99_57/index.html","timestamp":"2014-04-20T13:53:41Z","content_type":null,"content_length":"14320","record_id":"<urn:uuid:1a089d37-f45c-461d-8dc2-c1db52537150>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
What is a Pyramid Scheme? Pyramid schemes are illegal scams in which large numbers of people at the bottom of the pyramid pay money to a few people at the top. Each new participant pays for the chance to advance to the top and profit from payments of others who might join later. For example, to join, you might have to pay anywhere from a small investment to thousands of dollars. In this example, $1,000 buys a position in one of the boxes on the bottom level. $500 of your money goes to the person in the box directly above you, and the other $500 goes to the person at the top of the pyramid, the promoter. If all the boxes on the chart fill up with participants, the promoter will collect $16,000, and you and the others on the bottom level will each be $1,000 poorer. When the promoter has been paid off, his box is removed and the second level becomes the top or payoff level. Only then do the two people on the second level begin to profit. To pay off these two, 32 empty boxes are added at the bottom, and the search for new participants continues. Each time a level rises to the top, a new level must be added to the bottom, each one twice as large as the one before. If enough new participants join, you and the other 15 players in your level may make it to the top. However, in order for you to collect your payoffs, 512 people would have to be recruited, half of them losing $1,000 each. Of course, the pyramid may collapse long before you reach the top. In order for everyone in a pyramid scheme to profit, there would have to be a never-ending supply of new participants. In reality, however, the supply of participants is limited, and each new level of participants has less chance of recruiting others and a greater chance of losing money.
{"url":"http://www.wfdsa.org/about_dir_sell/index.cfm?fa=schemes2","timestamp":"2014-04-18T06:08:07Z","content_type":null,"content_length":"9821","record_id":"<urn:uuid:05b29c4f-4782-43ef-8b74-319d2afd589d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: how import a variable from script file to function file? Replies: 3 Last Post: May 15, 2013 12:31 PM Messages: [ Previous | Next ] ghasem Re: how import a variable from script file to function file? Posted: May 15, 2013 8:38 AM Posts: 72 Registered: 4/13/13 is there anyone who help me? How import a "i" index from script to function? I need to import "i" index from script to function,where function include a two equation two variable. for example my function is as following: function output = func_bedune_dielec(inputs) realPartOfInput = inputs(1); imagPartOfInput = inputs(2); % k1 , k2 , are known.and w is a vector that length(w)=100. % namely : w =linspace(1,5,100); kz = complex(realPartOfInput, imagPartOfInput); er = 1-w(i)^2; k2 = k1*sqrt(er); g1 = er*sqrt(kz^2-k3^2).*besseli(1,sqrt(-kz^2-k2^2)).*besselk(0,sqrt(-kz^2-k3^2))+... sqrt(-kz^2-k2^2).*besseli(0,sqrt(-kz^2-k2^2)).*besselk(1,sqrt(-kz^2-k3^2)); output = [ real(g1); imag(g1)]; in fact,when function in FSOLVE is called,I want to use from w(i) in my function,so that "i" is for loop index of my script. how import "i" index value from script to function,when function is called? Date Subject Author 5/15/13 how import a variable from script file to function file? ghasem 5/15/13 Re: how import a variable from script file to function file? ghasem 5/15/13 Re: how import a variable from script file to function file? Steven Lord 5/15/13 Re: how import a variable from script file to function file? ghasem
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2572288&messageID=9123809","timestamp":"2014-04-21T10:45:34Z","content_type":null,"content_length":"20429","record_id":"<urn:uuid:432d1a10-f8e0-484c-9eeb-5482d038e827>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
HKUST Institutional Repository: Item 1783.1/1658 HKUST Institutional Repository > Mathematics > MATH Doctoral Theses > Please use this identifier to cite or link to this item: http://hdl.handle.net/1783.1/1658 Title: Further results on factorization theory of meromorphic functions Authors: Ng, Tuen-Wai Issue Date: 1998 In this thesis, we shall prove some results which, in turn, will allow us to solve some factorization problems in a systematic way. Also, we shall utilize new methods from theory of complex analytic sets and local holomorphic dynamlics to solve some factorization problems. In Chapter 3, by using an extended version of Steinmetz's theorem, we prove that certain class of meromorphic functions is pseudo-prime. Hence, we can prove that under certain conditions, R(z)H(z) is pseudo-prime, where R(z) is a non-constant rational function and H(z) is a finite order periodic function. In Chapter 4, we try to find out all possible factorizations of p(z)H(z) when H is an exponential type periodic function and p is a non-constant polynomial. This Abstract: confirms a conjecture of G.D. Song and C.C. Yang. In Chapter 5, we shall use results from theory of complex analytic sets to prove certain criteria on the existence of a non-linear entire common right factor of two entire functions. Applying these criteria, we can then prove that if f is an entire function which is pseudo-prime and not of the form H(Q(z)), where H is a periodic entire function and Q is a polynomial, then R(f(z)) is also pseudo-prime for any non-constant rational function R. This result essentially solves a problem of G.D. Song and is a fundamental property of pseudo-prime function. We also give other applications of these criteria to unique factorization problems. In Chapter 6, we consider the unique factorization problems of f o p and p o f where f is a prime transcendental entire function and p is prime polynomal. We shall use methods from local holomorphic dynamics to solve these problems. Thesis (Ph.D.)--Hong Kong University of Science and Technology, 1998 Description: ix, 99 leaves ; 30 cm HKUST Call Number: Thesis MATH 1998 Ng URI: http://hdl.handle.net/1783.1/1658 Appears in MATH Doctoral Theses Files in This Item: File Description Size Format th_redirect.html 0Kb HTML View/Open All items in this Repository are protected by copyright, with all rights reserved.
{"url":"http://repository.ust.hk/dspace/handle/1783.1/1658","timestamp":"2014-04-18T18:52:38Z","content_type":null,"content_length":"18800","record_id":"<urn:uuid:7f285db9-3913-4e10-b27b-371e56564519>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Collision Theory Collision Theory The collision number or frequency which is the number of collisions per unit ... Kinetic molecular theory can be used to develop a molecular interpretation of ... – PowerPoint PPT presentation Number of Views:531 Avg rating:3.0/5.0 Slides: 9 Added by: Anonymous more less Transcript and Presenter's Notes
{"url":"http://www.powershow.com/view/51cd1-YTM4Z/Collision_Theory_powerpoint_ppt_presentation","timestamp":"2014-04-18T00:26:25Z","content_type":null,"content_length":"99026","record_id":"<urn:uuid:b571e38e-146c-4520-81d6-8ffc8f070721>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Inclined Planes and Acceleration "tasked with a non-assessable project " The teachers should be arrested for abusing the language! "relationship between the angle of the inclined plane and the distnace travelled along the flat surface " As pointed out above, you will have to make some assumptions as to friction. No friction and the ball will roll forever, regardless of angle. It is also true, as others have said, that the original height is the crucial point, not angle. However, since the problem specifically asked about "relationship between the angle and..." it probably would be better to assume the ball starts at a specific distance up the inclined plane. In that case, with d that distance and &theta; the angle, the height is d sin(&theta;). The acceleration down the inclined plane would be g sin(&theta;).
{"url":"http://www.physicsforums.com/showthread.php?t=15193","timestamp":"2014-04-17T15:36:37Z","content_type":null,"content_length":"41231","record_id":"<urn:uuid:5d558912-addb-4c05-aba0-81899897e4be>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00179-ip-10-147-4-33.ec2.internal.warc.gz"}