Wow. Sometimes things sound too intuitive, too rational, too simplistic and too good to be true, especially in baseball. But in this case, simplicity is beautiful.
Yes, but K/BF is almost as good and the data is a lot easier to find. Why didn't you include K/BB in your survey?
So let me see if I understand this: if you have a pitcher who is capable of K's but prefers to pitch to contact, the K/P stat will incorporate their skill better than the K/9. K/9 rewards pitchers
who always try to strike batters out (and build up high pitch counts), while K/P rewards pitchers who strike batters out when the opportunity presents itself (lower pitch counts), because a
first-pitch groundout isn't as much of a drag on the denominator with a single pitch as with a single out. Is that right?
Seems to me that you're probably picking up vestiges of K/BB. Pitchers who don't walk many batters will have higher K/P ratios.
The really key comparisons are ERA and R/G. The correlations will naturally be higher with FIP and DIPS because those stats are focused on strikeout and walk rates.
Another test might be to compare K/P and BB/P against K/BFP and BB/BFP to see which pair of variables better correlates with ERA and RA.
I have found that K/27BF often comes close to K/100P. Most pitchers would have K/100P less than K/27BF, but Loaiza last year had K/100P greater than K/27BF, implying he was quite efficient at getting strikeouts and put him in the same company with Santana, Pedro, and Carpenter. On the other hand, young strikeout pitchers like Prior, Peavy and Beckett are all on the other side, which makes me wonder whether efficiency can be learned as pitchers age.
whoops. I meant to say "Most pitchers would have K/100P less than K/27BF, because power pitchers use more pitches. But sometimes, pitchers can have K/100P greater than K/27BF, like Loaiza last year."
The signs for Greater Than and Less Than cut off part of my post, damn you html!
What is the comparison between strikeout per pitch and strike per pitch? Are we moving towards a simpler percent stat here?
One other thing:
"For whatever reason, many people are slow to accept new ideas. No matter how much proof one provides, there will always be naysayers who don't want to embrace the truth. But that is OK with me. You
see, I'm not a member of the Flat Earth Society."
This is a complete crock. You posted the hypothesis on your blog one day ago, February 20th. You had damn well better expect to spend a significant amount of time defending it. Right or wrong, you're
going to have to shoot down every single person who questions it. Look at what McCracken had to go through to get DIPS accepted, then what real scientists have to do to defend 150-year old theories
with mountains of evidence behind them.
You've got to expect to have to take some time defending this, and if you respect your own work, you'll be a little less flip about it while you're still trying to get others to rely on it for their own analysis.
geez powers, two scoops of crabby in your coffee this morning or what?
It's a great thing if he's found a way to advance the predictive value of pitcher performance, it's a terrible thing if he thinks saying it's so is enough to make him right. There are people around
with some decent questions about this that might lead to a refinement of it or other components of DIPS, and one regression analysis isn't going to inoculate him from answering those questions if he
wants people to take it seriously.
For example, the groundball questions (aka the Greg Maddux Argument) over at Baseball Musings might lead one to wonder if GB/FB pitchers' value might be better estimated by Outs Per Pitch than
Strikeouts Per Pitch. We all accept at this point that strikeouts are the best, but what if we could find more accurate predictors for other kinds of pitchers?
Dealing with science all day, I often find myself poking at people not to feed ignorant trolls, but that's a matter of limiting yourself to "one link responses" until said troll demonstrates he'll do
his own homework. There's a difference between limiting yourself to one measured response and brushing off all questions with "take this on faith or you're a Flat Earther." After all, he's not
posting each equation and the exact source data he used so others can repeat the test, as a real scientist would have to, and he didn't have anyone verify his work before he took it public. It's
natural for people to question a one-day-in-the-wild idea.
I've got a question. I'd like Rich to address the issue of "strikeout as process" versus "strikeout as result" that Pinto and XeiFrank brought up. In other words, I also question if this stat is
telling us something that K/BF isn't already telling us. Might it be a problem that K/P assumes every pitch is equal?
I understand your idea as far as the Maddux example, but Outs per pitch doesn't work for me, as it can't take into consideration the defensive prowess behind the pitcher and doesn't account for things like a long out in Pittsburgh, a home run in Cincinnati, and countless other factors. He clearly stated he was going for "the single greatest Defense Independent Pitching Stat out there."
Studes makes the important point here. This is just a poor man's K/BB ratio, with perhaps a pinch of BABIP thrown in. If you have two pitchers who both have a K/BF of .20, but one has a lower K/P
ratio, that will usually mean he allows more BBs and/or hits on BIP (and thus faces more batters). If you want a measure of dominance, K/BB is clearly better; if you want the best metric for
strikeout proficiency, K/BF is the right choice. It's not clear what one would use K/P for, if anything.
So J. Santana with 5.29 K/BB is a worse pitcher than Carlos Silva with 7.89 K/BB? Give me a break.
If you want a measure of dominance, K/BB is clearly better
Is that so? Let's take a look at the correlations among pitchers who pitched 162 or more innings last year...
       ERA     R/G     ERC     FIP     DIPS
K/P   -0.534  -0.557  -0.656  -0.717  -0.755
K/BB  -0.434  -0.450  -0.530  -0.548  -0.565
Here are the results for all pitchers with 38 or more innings (which, for these purposes, is essentially all of 'em)...
       ERA     R/G     ERC     FIP     DIPS
K/P   -0.496  -0.506  -0.540  -0.661  -0.705
K/BB  -0.454  -0.459  -0.559  -0.626  -0.635
The facts win out over unsubstantiated opinion again.
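(For anyone who wants to reproduce this kind of check, a minimal sketch follows; the file and column names are hypothetical, and any season-level pitcher table with pitch counts would do.)

import pandas as pd

df = pd.read_csv("pitchers_2005.csv")        # hypothetical file: one row per pitcher
df = df[df["IP"] >= 162]                     # the qualifier used above

kp  = df["SO"] / df["Pitches"]               # strikeouts per pitch
kbb = df["SO"] / df["BB"]                    # strikeout-to-walk ratio

for run_stat in ["ERA", "RG", "ERC", "FIP", "DIPS"]:
    print(run_stat,
          round(kp.corr(df[run_stat]), 3),   # K/P column of the table
          round(kbb.corr(df[run_stat]), 3))  # K/BB column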
Re Ken's question (in comment #3 above), my answer would be "not exactly."
K/P is an indicator. It doesn't reward or say anything about pitching to contact. If anything, pitchers with low K/P are more apt to be those considered to pitch to contact than pitchers with high K/P.
However, as it relates to K/P vis-a-vis K/IP, pitchers who rank higher in K/P than K/IP will almost always be those who walk fewer batters and, therefore, throw fewer pitches than pitchers who rank
higher in K/IP than K/P. Therefore, the pitchers in the former group are more efficient than those in the latter.
Re comments by Robert and Studes re K/BB, K/P is a better measure of run prevention than K/BB. Check the correlation matrices two comments above for the details.
Another test might be to compare K/P and BB/P against K/BFP and BB/BFP to see which pair of variables better correlates with ERA and RA.
Negative outcomes, such as BB, don't work particularly well with respect to number of pitches. But, in any event, here is the correlation matrix for walks per pitch (or BB/P) among the latter sample
size (which includes 360 pitchers with 38 or more IP):
       ERA    R/G    ERC    FIP    DIPS
BB/P  .232   .236   .374   .358   .336
What is the comparison between strikeout per pitch and strike per pitch? Are we moving towards a simpler percent stat here?
I have the number of pitches per pitcher but not the number of balls and strikes. However, that information is available so I will see if I can come up with it. Thank you for the idea.
Rich, I know your shorts are in a bundle about people who don't like your idea, but you haven't shown anything yet that refutes my point.
My point is that K/P has a higher correlation with ERA simply because it's K/BFP with a dollop of BB/BFP thrown in. It's not a better measure of "strikeout dominance", but it is a measure that
correlates better with ERA because it's capturing something other than strikeout dominance.
As Guy says, it's probably also capturing BABIP luck. Pitchers who get more outs on batted balls will pitch fewer pitches, regardless of their strikeout dominance.
Of course, I could be wrong, but your additional analyses haven't shown that yet.
Rich, you make a valid point on K/BB ratio. I was thinking about distinguishing among high-K pitchers, which is the main point of K/P. If two pitchers have the same K rate, the one who gives up fewer
BB will be, on average, more effective. But some low-K/low-BB pitchers post good K/BB ratios, mixing apples with oranges.
However, Studes' point remains valid: the only reason K/P correlates better than K/BF is because it's not only an indicator of strikeout proficiency -- you're including a little bit of BB and H-BIP
info as well. Any measure that included Ks and BBs in their right proportions (essentially FIP w/o the HRs) would show an even better correlation. I'd guess that something as simple as (K-BB)/BF
would do the trick.
I still think K/P suffers from being neither fish nor fowl -- it's not a true measure of strikeout proficiency, and if you want a broader measure of pitcher effectiveness, there are many better (more
comprehensive) metrics available.
It isn't a bad idea at all, really. Though Dave S. over at the baseball musings comment you linked had a composite formula that seemed to provide a better correlation.
Let's see if it holds up in different situations. I don't fully understand why correlation with ERA is critical, since ERA is based on judgement calls (hit or error?), so at this point I'd be looking
at correlation with Runs Allowed and also (for a laugh) at the correlation between this year's K/P and next year's RA, to see if it has predictive value. Or I'd plug it into the existing predictive
models like PECOTA and see if the projections are improved or harmed.
"This is just a poor man's K/BB ratio, with perhaps a pinch of BABIP thrown in. If you have two pitchers who both have a K/BF of .20, but one has a lower K/P ratio, that will usually mean he allows
more BBs and/or hits on BIP (and thus faces more batters). If you want a measure of dominance, K/BB is clearly better; if you want the best metric for strikeout proficiency, K/BF is the right choice.
It's not clear what one would use K/P for, if anything."
If that is indeed what it is, then it could be very useful in determining long-term efficiency, with an easier calculation than adding together several other indicia. If it works out as a way to tell
whether someone has a long-term consistently low BABIP and high K/BB ratio, you've just managed to combine two of the most useful statistics in determining a pitcher's long-term prospects, not to
mention that it also incorporates outs per pitch on a more primeval level. If someone does well with this statistic for two years, or makes a jump in this statistic, then it bodes pretty well for the
future and may have a higher correlation for future success than the other statistics because (a) by incorporating BABIP, you catch the rare 10% of major league pitchers who have the ability to force
a lower BABIP than normally would be expected based on luck, (b) by incorporating K/BB, you get a measure of overall dominance versus efficiency and (c) by incorporating pitches per out, you get an
idea of who may be able to have a higher 'workload' without having as much risk of long-term injury. Sure, to get a better idea of the overall picture, you'll still want to look at the component
ratios, but to the casual fan or fantasy baseball player (or Jim Bowden and certain other GMs who can't stand to look at numbers for more than 20 seconds at a time), it could be a very useful
indicator regarding who to dig deeper on.
I suspect that this stat would be even more useful at the minor league level (for figuring out pitchers who could be useful major league players without the tools to immediately jump out at you,
simply because it probably means that they're getting into all the right habits), but unfortunately, you'd need MiLB to put out better numbers before one could make that determination.
"This is just a poor man's K/BB ratio, with perhaps a pinch of BABIP thrown in."
I would call it a "rich man's" version - not because the information is necessarily better but because it costs more (pitch counts) to come up with the metric.
First of all, thank you everyone for the feedback. I will try to answer most, if not all, of the questions, as well as to respond to as many comments as humanly possible. However, I will need your
patience. Baseball Analysts is a hobby and not my occupation so my time is limited.
I would disagree, though, with the apparent conclusion that striking out batters on three pitches represents the ideal for pitching effectiveness - if that is what Rich is saying.
No, that was not my conclusion. The one statement that Pinto focused on is unfortunately being misconstrued as the basis for my argument. I apologize if I misled anyone and would be happy to retract that
paragraph if it means reducing or eliminating the confusion inherent in it.
The point of K/P is as follows (excerpted from my original article):
"We have known for some time that strikeouts are the out of choice. The more Ks, the better. We also know that the fewer pitches, the better. Combining high strikeout and low pitch totals is a recipe
for success. The best way to measure such effectiveness is via K/100 pitches...I believe this stat just might be the best way to measure pitcher dominance, if not overall performance."
The bottom line is that if you believe in the power of strikeouts, you should believe in the power of K/100P as an indicator of pitching success. K/100P does a better job of identifying run
prevention than K/9 or K/BF.
I would call it a "rich man's" version - not because the information is necessarily better but because it costs more (pitch counts) to come up with the metric.
All of the information I used for my study is both readily and publicly available at no cost. I just happened to access it at ESPN.com in the stats section on the baseball portion of the site.
There are some high fallutin' stats out there that cost money to access but not this one. One of the beauties of this stat is that it is for the common man and not the so-called rich.
That's true at the major league level for recent seasons. But I can't test the value of K/100 for pitchers playing in the majors twenty years ago. I also can't test it out for minor leaguers playing
one year ago.
... and not to sound like a "flat earth" guy, but if we are really looking for the "single greatest Defensive Independent Stat out there", why not use DIPS or something more simple like (SO -
(1.5*BB))/BF ?
This information is widely available, would reflect strikeout "dominance", and would do a more complete job of capturing control/efficiency than the K/100 does. (SO - (1.5*BB))/BF would also do a
better job of predicting runs allowed.
I don't think this is "the single greatest Defense Independent Pitching Stat out there." While it is apparently the best at measuring strikeout proficiency, as was the aim of your study, I don't
believe it would hold up against FIP and other DIPS of the same sort when correlated with runs allowed or runs allowed/game.
So, my real question is, are you implying that K/P is the best DIPS, or the best indicator of pitching dominance? My thoughts are a little jumbled on this--gotta pull out a notebook and good ol' pencil and paper.
Despite these uncertainties, nice study and interesting findings...
I looked at (K-BB)/9, and as I suspected it correlates much better with R/G than does K/BB. It's the differential between Ks and BBs that matters, not the ratio. And (K-BB)/9 correlates with R/G better than K/BF or K/P as well, at least in 2005. (K-BB)/BF is even more powerful -- very high correlations.
(K-BB)/9 (or (K-BB)/BF) is simple, and a better measure of pitcher effectiveness than either K/BB or K/P.
I was wondering why, as Rich pointed out, K/pitch correlates better with ERA than does K/BB, so I broke it down.
K/BB can be written as (K/IP) times (IP/BB).
K/pitch can be written as (K/IP) times (BFP/pitch) times (IP/BFP).
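(Both factorizations check out: the IP's cancel in the first product, and the IP's and BFP's cancel in the second, leaving K/BB and K/pitch respectively.)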
The 1st terms are the same. The second, (BFP/pitch), is pretty neutral--good pitchers and bad pitchers, as a group, are about average. The 3rd term, (IP/BFP), is essentially a form of OBA allowed.
So the reason K/pitch has a better correlation with ERA is that the hidden information it includes (OBA allowed) is important. It has nothing to do with being a better measure of "strikeout dominance."
Also, K/pitch is not really "defense-independent", since the fielding certainly has an influence on OBA allowed.
david smyth wrote:
So the reason K/pitch has a better correlation with ERA is that the hidden information it includes (OBA allowed) is important. It has nothing to do with being a better measure of "strikeout dominance."
Wouldn't the solution be to run a multiple regression including the other factors that correlate with K/P? I think this is what studes was suggesting earlier when he wrote "Another test might be to
compare K/P and BB/P against K/BFP and BB/BFP to see which pair of variables better correlates with ERA and RA"? (studes -- sorry if I'm attributing a statistically incorrect idea to you.)
Here is an effort to respond to a number of readers who have presented overlapping comments and questions.
First of all, using # of Pitches in the denominator has "hidden" information in it but using # of Batters Faced doesn't?
Yes, pitchers who don't walk many batters will have higher K/P ratios. I'm not disputing that. In fact, that is one of the beauties of this stat. However, the same thing can be said about K/BF. That
is, pitchers who don't walk many batters will also have higher K/BF ratios.
As it relates to K-BB, yes, that correlates better with run measures than K/BB. It also has a stronger fit than K/P. I realized that going into my study. But I wasn't looking for a derivative stat.
The numerator is obtained by altering the number of strikeouts by subtracting walks.
Strikeouts divided by total pitches uses two variables only. The numerator isn't being altered in any way. It is a pure stat or what I termed a "single" stat in my article.
One can always improve a correlation by placing more variables into the equation and multiplying or dividing this by that and/or adding or subtracting this to that.
What I believe I have found is the highest correlation to run measures using two variables only. In hindsight, I wish I had been more explicit in stating that point in my articles.
I will have more on this subject in the future as I continue to believe that K/P is a simple and extremely valuable metric. To say that Johan Santana led the majors last year by striking out 7.14
batters per 100 pitches is a lot more comprehensible (and, therefore, useful) to me than stating that he led the majors in (SO - (1.5*BB))/BF with .187 or that he led the majors in (K-BB)/BF with 5.72.
Thanks for the comments, Rich. I don't consider myself to know as much about stats as many of the other posters here, so take the following for what it's worth, and hopefully some others will chime in.
If we are just looking at correlations, then I think what you showed is complete. But if we want to infer something about the correlations, mainly, what effect K/P has on ERA (i.e., examine causation), I do not think what you did is sufficient. And I also think that while showing some sort of relationship via correlations is useful, especially when we have the intuition that the two variables might have some sort of causal relationship (i.e., we aren't correlating ERA with attendance), I think it's good to try and go further, and see to what extent we can use K/P to predict ERA/etc.
That said, I think there is an inherent problem in using regressions of y (ERA/etc.) on x (K/P) where there is some other factor z (e.g., K/BB), that influences y, and correlates with x. The OLS
model makes the following assumptions, among others:
(1) y[i] = a + b*x[i] + e[i]
where e[i] are "other factors"
(2) corr(x[i],e[i])=0 --> other factors are uncorrelated with the regressors
Now, if k/bb is among the "other factors" and K/BB correlates with x (K/P), then assumption (2) is violated. This leads to the conclusion that OLS estimate for b is no longer consistent. That means
that there is an omitted variable bias, that is, if our sample size -> infinity, our value for b will not approach the "true" value, but will be off by a factor that includes the covariance of K/P
and K/BB (covariance basically can be thought of as measuring the correlation). The larger the covariance of K/P and K/BB, the larger the omitted variable bias.
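(A toy simulation makes the omitted-variable effect described above easy to see; the numbers are made up and have nothing to do with baseball data.)

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                      # stand-in for K/P
z = 0.8 * x + rng.normal(size=n)            # omitted factor correlated with x (think K/BB)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)  # outcome; the true coefficient on x is 1.0

b_hat = np.cov(x, y)[0, 1] / np.var(x)      # OLS slope of y on x alone
print(b_hat)                                # comes out near 2.6, not 1.0:
                                            # bias = 2.0 * cov(x, z) / var(x) = 1.6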
Perhaps I'm just anxious to apply some of the statistics I've learned and am doing so incorrectly. Again, I'm less experienced than many here.
Can you run your correlations on this measure: K/(K+BB)?
The problem with "ratios", like K/BB and GB/FB, is that they are not symetrical when you take their inverse. If the league average GB/FB was 1.0, then 2 GB and 1 FB or 2 FB and 1 GB should be
equidistant from the league mean, and they are not when using ratios, but they are when using rates.
I cringe everytime anyone does a regression on ratios. They are bound to break because of its inherent nonlinearity.
Clearly a regression of K/BB to ERA or BB/K to ERA should give you the exact same result (or you'd want the exact same result), and we're not going to get it. K/(K+BB) or BB/(K+BB) *will* give you
the exact same correlation to ERA.
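To put numbers on the asymmetry: GB/FB marks of 2.0 and 0.5 (two grounders per fly, and the reverse) sit 1.0 and 0.5 away from a 1.0 league mean, while the corresponding rates GB/(GB+FB) = .667 and .333 each sit exactly .167 from .500.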
That said, this has been a fun look, though as others have said, making pronouncements is a little early. Then again, without pronouncements, where's the fun?
Great idea, Tango. It meets Rich's criteria of using only two variables, but may well have an even higher correlation with ERA or RA. Should be close, anyway.
Rich, thanks for clarifying what you were trying to do. I was thrown off when you called K/P the "best measure of strikeout dominance" (or words to that effect). I agree that, from that perspective,
K/P is more useful than K/BFP.
"What I believe I have found is the highest correlation to run measures using two variables only."
I'm pretty sure you'll find that K/HR has a better correlation than K/P. So if the award is for "Best 2-variable Stat without Use of Subtraction," I'll nominate K/HR. But I'll confess I'm not clear
on the rationale for this particular Oscar category.
"To say that Johan Santana led the majors last year by striking out 7.14 batters per 100 pitches is a lot more comprehensible...[than] stating that he led the majors in...(K-BB)/BF with 5.72."
I'd agree that K/P is simple to describe, and simplicity can be a genuine virtue. But the problem is that most fans, told this stat is a good measure of pitching prowess, would conclude it shows that
the fewer pitches a pitcher uses the better. And indeed, you say this in making your case. But it isn't true: there is no correlation at all between P/BF and R/G. K/P really 'says' that pitchers should try not to allow BBs and hits (true), but seems to say it's important to be "efficient" in the number of pitches made (false). So yes, K/P has a certain clarity of message, but one that would
lead many to incorrect conclusions.
Also, I personally find (K-BB)/9 to be quite intuitive and easy to explain. It's just "strikeouts minus walks per game" -- how many batters you erase by yourself minus those you give a free pass. If
fans can understand K/BB ratio, and they do, this is at least as easy to understand. And it's more predictive than K/BB, K/BF, or K/P.
But the real test for a stat is whether the larger community finds it helpful and starts to use it. Maybe K/P will catch on. Time will tell.
In any case, as a new visitor here I just want to add that -- my knocks against K/P notwithstanding -- you've got a great site here. Congrats on reaching your first anniversary....
That makes a lot of sense, Tom. Thanks for teaching me something new :-)
That formula doesn't have any correlations whatsoever with runs allowed...
ERA R/G ERC FIP DIPS
K/(K+BB) -0.038 -0.042 -0.039 -0.024 -0.009
K/(K-BB) feels like it would make more sense to me. Plus, they both would need to be turned into a rate stat to compare to the above run measures.
That said, this has been a fun look, though as others have said, making pronouncements is a little early. Then again, without pronouncements, where's the fun?
Yeah, I've got the writer devil on one shoulder and the analyst devil on the other competing with each other all the time. Analysis without the writing can be boring. So leave it to me to be a bit of both.
Rich, you probably have a sample size issue. I ran several regressions, with the results you can find here:
Rich, you probably have a sample size issue.
I think my sample size is fine. I'm using 360 pitchers from last year. The difference in your results and mine isn't related to sample size; it's due to the fact that the equations are different.
You used a different formula [BB/(BB+SO)] in your regressions than the one [K/(K+BB)] you asked me to run.
In any event, if BB/(BB+SO) to ERA is .62 and SO/P is .62 "when P is estimated as BFP*3.3+SO*1.5+BB*2.2 (Actual pitches may result in a better correlation)," I don't understand how BB/(BB+SO) and BB/IP can have more influence than SO/P, which you say "has almost no influence."
"You used a different formula [BB/(BB+SO)] in your regressions than the one [K/(K+BB)] you asked me run."
Actually, they are directly related.
BB/(BB+SO) + SO/(SO+BB) = 1
So, regardless of which of the two terms you use, you will get identical correlations (just different coefficients).
The sample size doesn't simply relate to the number of samples, but to how relevant each sample is. In my case, every single pitcher had at least 1000 BFP. In your case, maybe 1 or 2 did.
Three posts later in that thread, I retracted my statement about running the regression with all the variables, because they are clearly not independent. The rest of the post stands however, and all
these "two or three variable" metrics have a .6x correlation. FIP stands tall at .84.
It should be retold that I used an estimate for pitches, and it is possible, though highly unlikely, that using actual pitch counts would have yielded a better correlation. The reason I say "highly
unlikely" is that this simple pitch count estimator has a high .9x correlation to actual pitch counts. It doesn't seem reasonable to think that using actual pitch counts will change the correlation
that much, if at all.
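(As a quick sanity check on that estimator: a pitcher with 800 BFP, 150 SO and 60 BB works out to 800(3.3) + 150(1.5) + 60(2.2) = 2997 pitches, roughly 3.7 per batter faced, which is in the plausible range.)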
Medford, MA Precalculus Tutor
Find a Medford, MA Precalculus Tutor
I am available and eager to tutor anyone seeking additional assistance in the fields of physics (or mathematics), either at the high school or college level! I have been teaching physics as an
adjunct faculty at several universities for the last few years and very much look forward to the opportuni...
9 Subjects: including precalculus, calculus, physics, geometry
...My schedule is flexible as I am a part time graduate student. I am new to Wyzant but very experienced in tutoring, so if you would like to meet first before a real lesson to see if we are a good fit, I am willing to arrange that. I was a swim teacher for 8 years at swim facilities and summer camps. I also coached.
19 Subjects: including precalculus, Spanish, chemistry, calculus
...I also have experience in Java as I have been teaching it this past year to high school students. I also have experience in Pascal from my high school years. I have an undergraduate degree
from Harvard University in Computer Science.
19 Subjects: including precalculus, physics, algebra 2, algebra 1
...I have tutored middle and high school students for 14 years overwhelmed by or uninterested in homework and/or studying for and taking tests and provided them with methods, strategies, and
skills to address issues of time management, organization, studying habits, note taking, effective reading an...
34 Subjects: including precalculus, reading, calculus, English
...I aim to find a student's individual way of learning to get the most out of each tutoring experience and find the best way to delve into difficult topics. The first scheduled session is on the
house. This is to ensure that the student is comfortable with me and that his or her parents believe I am the right person to help.
14 Subjects: including precalculus, chemistry, calculus, geometry
Correcting Station Pressure to Sea-Level Pressure - Meteorology 101
So I was trying to correct the measured pressure from my weather station at my house to sea-level pressure using the equation P = P0*e^(-ρgy/P0), where P0 is atmospheric pressure (1.013 x 10^5 N/m^2), ρ (rho) is the density of air (1.29 kg/m^3), and y is the elevation of the station (in my case, about 680 feet or 207 meters). Anyway, in going through the calculation, I came up with that I should have to add 26 mb to my station pressure to obtain a corrected value. However, based on SPC mesoanalysis, it would seem that I only need to add exactly half that amount, 13 mb. The only thing I could come up with is that my weather station pressure reading is too high. I'm pretty sure the math was done correctly because I was not off by magnitudes of 10. If anyone has any ideas, let me know.
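(For anyone checking the arithmetic above, here is the calculation as a quick sketch; it does reproduce the ~26 mb figure, so the discrepancy with SPC is not an arithmetic slip.)

import math

P0  = 1.013e5   # sea-level pressure, N/m^2
rho = 1.29      # air density, kg/m^3
g   = 9.81      # gravitational acceleration, m/s^2
y   = 207.0     # station elevation, m

P_station = P0 * math.exp(-rho * g * y / P0)
print((P0 - P_station) / 100.0)   # correction in mb (1 mb = 100 N/m^2): about 25.9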
Posts about GLSL on machines don't care
Archive for the 'GLSL' Category
Half-hour project, from GPU Gems 3.
It’s not a very realistic effect, since the actual lighting on the cubes comes from the front, whereas the big sun thing is behind them. Also, sometimes rays from further-back cubes actually
sometimes appear on top of the frontmost cubes, which spoils the illusion a little. There are ways around this, but they require rendering the same geometry multiple times, which isn’t really
possible to do efficiently in QC.
It’s just occurred to me that; of course the lightrays shouldn’t be picking up colour from the the foreground objects. I guess it will be necessary to render more than once, afterall. I’d need a pass
where the foreground objects are black, then I render the light rays from the sun, then additively blend in the lit foreground objects on top.
What I’ve got at the moment is essentially just a zoom blur.
This is the GLSL vertex shader code for 3D noise. Fragment shader is the same as for the 2D variant.
3D Perlin-Noise in the vertex shader, based originally on
vBomb.fx HLSL vertex noise shader, from the NVIDIA Shader Library.
Original Perlin function substituted for Stefan Gustavson's
texture-lookup-based Perlin implementation.
Quartz Composer setup
toneburst 2009
// 3D Perlin Noise //
3D Perlin-Noise from example by Stefan Gustavson, found at
uniform sampler2D permTexture; // Permutation texture
const float permTexUnit = 1.0/256.0; // Perm texture texel-size
const float permTexUnitHalf = 0.5/256.0; // Half perm texture texel-size
float fade(in float t) {
return t*t*t*(t*(t*6.0-15.0)+10.0);
}
float pnoise3D(in vec3 p)
{
vec3 pi = permTexUnit*floor(p)+permTexUnitHalf; // Integer part, scaled so +1 moves permTexUnit texel
// and offset 1/2 texel to sample texel centers
vec3 pf = fract(p); // Fractional part for interpolation
// Noise contributions from (x=0, y=0), z=0 and z=1
float perm00 = texture2D(permTexture, pi.xy).a ;
vec3 grad000 = texture2D(permTexture, vec2(perm00, pi.z)).rgb * 4.0 - 1.0;
float n000 = dot(grad000, pf);
vec3 grad001 = texture2D(permTexture, vec2(perm00, pi.z + permTexUnit)).rgb * 4.0 - 1.0;
float n001 = dot(grad001, pf - vec3(0.0, 0.0, 1.0));
// Noise contributions from (x=0, y=1), z=0 and z=1
float perm01 = texture2D(permTexture, pi.xy + vec2(0.0, permTexUnit)).a ;
vec3 grad010 = texture2D(permTexture, vec2(perm01, pi.z)).rgb * 4.0 - 1.0;
float n010 = dot(grad010, pf - vec3(0.0, 1.0, 0.0));
vec3 grad011 = texture2D(permTexture, vec2(perm01, pi.z + permTexUnit)).rgb * 4.0 - 1.0;
float n011 = dot(grad011, pf - vec3(0.0, 1.0, 1.0));
// Noise contributions from (x=1, y=0), z=0 and z=1
float perm10 = texture2D(permTexture, pi.xy + vec2(permTexUnit, 0.0)).a ;
vec3 grad100 = texture2D(permTexture, vec2(perm10, pi.z)).rgb * 4.0 - 1.0;
float n100 = dot(grad100, pf - vec3(1.0, 0.0, 0.0));
vec3 grad101 = texture2D(permTexture, vec2(perm10, pi.z + permTexUnit)).rgb * 4.0 - 1.0;
float n101 = dot(grad101, pf - vec3(1.0, 0.0, 1.0));
// Noise contributions from (x=1, y=1), z=0 and z=1
float perm11 = texture2D(permTexture, pi.xy + vec2(permTexUnit, permTexUnit)).a ;
vec3 grad110 = texture2D(permTexture, vec2(perm11, pi.z)).rgb * 4.0 - 1.0;
float n110 = dot(grad110, pf - vec3(1.0, 1.0, 0.0));
vec3 grad111 = texture2D(permTexture, vec2(perm11, pi.z + permTexUnit)).rgb * 4.0 - 1.0;
float n111 = dot(grad111, pf - vec3(1.0, 1.0, 1.0));
// Blend contributions along x
vec4 n_x = mix(vec4(n000, n001, n010, n011),
vec4(n100, n101, n110, n111), fade(pf.x));
// Blend contributions along y
vec2 n_xy = mix(n_x.xy, n_x.zw, fade(pf.y));
// Blend contributions along z
float n_xyz = mix(n_xy.x, n_xy.y, fade(pf.z));
// We're done, return the final noise value.
return n_xyz;
}
// Sphere Function //
const float PI = 3.14159265;
const float TWOPI = 6.28318531;
uniform float BaseRadius;
vec4 sphere(in float u, in float v) {
u *= PI;
v *= TWOPI;
vec4 pSphere;
pSphere.x = BaseRadius * cos(v) * sin(u);
pSphere.y = BaseRadius * sin(v) * sin(u);
pSphere.z = BaseRadius * cos(u);
pSphere.w = 1.0;
return pSphere;
}
// Apply 3D Perlin Noise //
uniform vec3 NoiseScale; // Noise scale, 0.01 > 8
uniform float Sharpness; // Displacement 'sharpness', 0.1 > 5
uniform float Displacement; // Displcement amount, 0 > 2
uniform float Speed; // Displacement rate, 0.01 > 1
uniform float Timer; // Feed incrementing value, infinite
vec4 perlinSphere(in float u, in float v) {
vec4 sPoint = sphere(u, v);
// The rest of this function is mainly from vBomb shader from NVIDIA Shader Library
vec4 noisePos = vec4(NoiseScale.xyz,1.0) * (sPoint + (Speed * Timer));
float noise = (pnoise3D(noisePos.xyz) + 1.0) * 0.5;
float ni = pow(abs(noise),Sharpness) - 0.25;
vec4 nn = vec4(normalize(sPoint.xyz),0.0);
return (sPoint - (nn * (ni-0.5) * Displacement));
}
// Calculate Position, Normal //
const float grid = 0.01; // Grid offset for normal-estimation
varying vec3 norm; // Normal
vec4 posNorm(in float u, in float v) {
// Vertex position
vec4 vPosition = perlinSphere(u, v);
// Estimate normal by 'neighbour' technique
// with thanks to tonfilm
vec3 tangent = (perlinSphere(u + grid, v) - vPosition).xyz;
vec3 bitangent = (perlinSphere(u, v + grid) - vPosition).xyz;
norm = gl_NormalMatrix * normalize(cross(tangent, bitangent));
// Return vertex position
return vPosition;
}
// Phong Directional VS //
// -- Lighting varyings (to Fragment Shader)
varying vec3 lightDir0, halfVector0;
varying vec4 diffuse0, ambient;
void phongDir_VS() {
// Extract values from gl light parameters
// and set varyings for Fragment Shader
lightDir0 = normalize(vec3(gl_LightSource[0].position));
halfVector0 = normalize(gl_LightSource[0].halfVector.xyz);
diffuse0 = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;
ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
ambient += gl_LightModel.ambient * gl_FrontMaterial.ambient;
}
// Main Loop //
uniform vec2 PreScale, PreTranslate; // Mesh pre-transform
void main()
{
vec2 uv = gl_Vertex.xy;
// Offset XY mesh coords to 0 > 1 range
uv += 0.5;
// Pre-scale and transform mesh
uv *= PreScale;
uv += PreTranslate;
// Calculate new vertex position and normal
vec4 spherePos = posNorm(uv[0], uv[1]);
// Calculate lighting varyings to be passed to fragment shader
phongDir_VS();
// Transform new vertex position by modelview and projection matrices
gl_Position = gl_ModelViewProjectionMatrix * spherePos;
// Forward current texture coordinates after applying texture matrix
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
Crashes on my laptop, works like a dream on my desktop machine.
More info to come.
The red ones above are from a different version of the effect, that does work on my ageing MacBook Pro (though it’s still slow- 12-14fps at 640 x 360). Note the highlights aren’t as smooth, because I
had to drop the resolution of the base mesh to improve the framerates a little on the laptop.
And here’s a clip of the 2D version:
and the 3D one:
and with an environment-map shader (not so successful)
Finally got it to work!
So, here’s Borg and Blob surfaces rendered in a GPU-accelerated GLSL raycaster shader:
As you can see, there’s a certain graininess to the render. This I actually quite like. It can be smoothed-out by decreasing the step-length that each ray is incremented by, but this slows down
render times exponentially.
Obviously, there’s no lighting calculation being done, and I’m not working out normals at all. I quite like the ghostly quality you get from this simple opacity-accumulation-style render though.
Thanks once again to Peter Trier for sharing his method, and I’d also like to thank Viktor N. Latypov for his encouragement on this project.
Here’s a clip of the shader at work on a Blob surface. I’ve added some colour to the rendering by mixing-in the ray XYZ-position as RGB colour, to add a bit of interest to the volume.
..and the Borg surface:
And this time, modulating the ray’s start position with live video input:
First attempt at implementing Peter Trier’s simple raycasting setup as an isosurface renderer. It’s definitely not working as it should, but it’s looking interesting, anyway.
Here it’s rendering a Borg isosurface, from Paul Bourke’s site.
From Ogre3D.
/* Vertex Shader */
varying vec3 P;
varying vec3 N;
varying vec3 I;
void main()
{
//Transform vertex by modelview and projection matrices
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
// Position in clip space
P = vec3(gl_ModelViewMatrix * gl_Vertex);
// Normal transform (transposed model-view inverse)
N = gl_NormalMatrix * gl_Normal;
// Incident vector
I = P;
// Forward current color and texture coordinates after applying texture matrix
gl_FrontColor = gl_Color;
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
/* Fragment Shader */
varying vec3 P;
varying vec3 N;
varying vec3 I;
uniform float EdgeFalloff;
void main()
{
float opacity = dot(normalize(N), normalize(-I));
opacity = abs(opacity);
opacity = 1.0 - pow(opacity, EdgeFalloff);
gl_FragColor = opacity * gl_Color;
}
I keep coming back to this one (and so do lots of other people, by the looks of it).
It’s a GLSL conversion of the old NVIDIA HLSL vBomb shader, that applies Perlin Noise to vertex positions in much the same way as the Quartz Composer Vertex Noise example.
The idea here though was that I’d try to make a version that would be more likely to be hardware-accelerated. Since the GLSL noise() functions aren’t actually implemented on all graphics hardware.
The fact that they seem to generally be pretty slow, at least on my ATI X1600, suggests to me that this is the case with my card, and GLSL noise() is actually forcing software-render-fallback. The
vBomb conversion certainly seems pretty fast, though!
Here some examples that don’t take things any further than the original vBomb, in terms of the output. I’m planning to add lighting too, though. Again, absolutely nothing original about that- in
fact, Desaxismundi’s beautifully-lit vBomb vvvv shaders (witness here, and, particularly here) were one of my original inspirations for getting into 3D graphics, and particularly GLSL shaders in the
first place. I owe Desaxismundi, and the vvvv community generally a huge debt of gratitude for introducing me to this wonderful (if confusing) world.
I’ve got involved in a few other things lately, and haven’t had so much time to work in fun stuff in QC, sadly. I’ll be getting back to it as soon as I can though.
In the meantime, here’s a nice video produced by George Toledo. It’s a rendering of the IsoSurface shader, and very nice it is too. | {"url":"https://machinesdontcare.wordpress.com/category/glsl/page/2/","timestamp":"2014-04-21T10:29:25Z","content_type":null,"content_length":"155437","record_id":"<urn:uuid:d1e48391-0047-4227-a1f6-b2a841d12493>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
C++ Programming Help Needed Please!!!
I am trying to write a program that displays all the bits of the binary representation of the type float.
The program needs to display info like this Example. (User enters the number 6).
This program displays a binary representation of real numbers.
Enter a real number: 6
The number's representation is 32 bits long:
sign: 0
exponent: 10000001
mantissa: 10000000000000000000000
To display the bits I have to use the for loop.
There is a skeleton program which has to be used and only edited within int main:
* skeleton program file float.cpp
* This program calculates and displays a binary
* representation of real numbers.
* Assume all input is of the correct format.
* Input: a real number.
* Output: The bit pattern of the number stored in the computer's
* memory as float
* Processing:
* .......
* Author:
#include <iostream> // for cin and cout
#include <iomanip> // for setw()
using namespace std;
// prototype
int TestBit( unsigned bit, float number );
// start of the program
int main()
{
return 0;
} // of main()
* TestBit: tests bits of a float number
* receive: a bit where 0 is the rightmost bit
* and a number to test
* return: 1 if the bit is set on, 0 therwise
* preconditions: bit is in the range 0..31
int TestBit( unsigned bit, float number )
{
if ( bit < 8*sizeof(number) )
if ( ((( unsigned int) 1) << bit) & (*((unsigned int*)(&number))) )
return 1;
return 0;
} // TestBit
I am completely lost. I do not understand how to write a for loop into it or return the particular bit of the float number. I have spent days trying to work it out. If anyone could help by giving
me an example of how it works or showing me how to do it I would really appreciate it.
I have done all of the formatting and the prompt for real number etc (i think), I am just stuck on the 0's and 1's parts. It has just completely gone over my head.
Thank you......
try Google on "binary representation of real numbers".
May you get lucky!
( unsigned int) 1
could be just 1u
you may want to start by writing a loop that goes from 0 to 8*sizeof(number)
and calls your TestBit function
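To make that concrete, here is one way the finished main() could look (just a sketch of the idea, not necessarily the layout your instructor wants; it drops straight into the skeleton above, and the 31 / 30-23 / 22-0 split is the IEEE 754 single-precision layout):

int main()
{
    float realNumber;

    cout << "This program displays a binary representation of real numbers." << endl;
    cout << "Enter a real number: ";
    cin >> realNumber;

    cout << "The number's representation is "
         << 8*sizeof(realNumber) << " bits long:" << endl;

    // bit 31 is the sign bit
    cout << "sign:     " << TestBit( 31, realNumber ) << endl;

    // bits 30 down to 23 are the exponent (printed high bit first)
    cout << "exponent: ";
    for ( int bit = 30; bit >= 23; bit-- )
        cout << TestBit( bit, realNumber );
    cout << endl;

    // bits 22 down to 0 are the mantissa
    cout << "mantissa: ";
    for ( int bit = 22; bit >= 0; bit-- )
        cout << TestBit( bit, realNumber );
    cout << endl;

    return 0;
} // of main()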
I tried that and 'c++ help', 'for loop', 'for loop c++', 'binary representation c++', 'binary representation for loop', and a lot more (in Google and Yahoo searches). I cannot find anything simple enough for me to understand. Any other ideas?
As far as I know I can only add to the int main on the skeleton program. Not really sure. The main problem is that I do not understand how to write a for loop into it or return the particular bit of the float number. I have spent days trying to work it out. I have done all of the formatting and the prompt for real number etc, I am just stuck on the 0's and 1's parts. It has just completely gone over my head.
have you read this:
I have, but I cannot understand how to write it. The website shows how it is set up, but what is the statement supposed to be? And what does x equal? Is it supposed to be the realNumber? What is supposed to be <= because it is only 0's and 1's? And the last part of it, the ++: if we add to it each time don't we get more than 0's and 1's? I am not very smart at this as you can see. I just cannot make sense of it.
[FOM] Classical/Constructive Arithmetic
Harvey Friedman friedman at math.ohio-state.edu
Sat Mar 18 04:18:05 EST 2006
I would like to start two threads, one on classical and constructive
arithmetic and discrete math, the other on classical and constructive real
The aim is to concentrate on the following questions.
1. Examples where the known proofs are nonconstructive, and nobody knows if
there are constructive proofs.
2. Examples where the known proofs are nonconstructive, and we know that all
proofs are nonconstructive.
3. Examples where the known proof is nonconstructive, and where one can give
a constructive proof, but all known constructive proofs are grotesque (e.g.,
extremely long, or extremely unpleasant, etc.).
4. Examples where the known proof is nonconstructive, and where one can give
a constructive proof, but it is known that all constructive proofs are
grotesque (e.g., extremely long, or extremely unpleasant, etc.).
5. Conservative extension and non conservative extension results between PA
and HA (Heyting arithmetic = Peano Arithemtic with intuitionistic logic),
and variants thereof.
Here are some things I know.
I stated this:
THEOREM A. Every polynomial P:Z^n into Z^m assumes a value closest to the origin.
THEOREM A'. Every polynomial P:Z^n into Z^m with integral coefficients
assumes a value closest to the origin.
I.e., there is a value which is at least as close to the origin, in the
Euclidean distance, than any other value.
The above are provable in PA, but not in HA, even for m = 1. In fact, it is
provably equivalent to single quantifier PA over intuitionistic EFA (even if
we fix m = 1). (We can use any reasonable norm, such as Euclidean or sup
Note that version A is a bit stronger in that it does cover some polynomials
whose coefficients are not all integers - e.g., n(n+1)/2.
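To see concretely where excluded middle enters in Theorem A': classically, the set of squared norms {|P(x)|^2 : x in Z^n} is a nonempty set of natural numbers and therefore has a least element; but to exhibit a point realizing it, one must decide, for each k, whether the value k is ever attained, which is an unbounded search. (This sketch only locates the obstruction; it is not a proof of the unprovability claims above.)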
Apparently there are quite a number of famous AEA theorems of mathematics
which people would like to prove constructively, but can't. Nobody knows if
they can be proved constructively. Here there is a real criterion:
*if (forall x)(there exists y)(forall z)(R(x,y,z)) can be proved
constructively (i.e., in, e.g., HA), then there is a recursive function f
such that (forall x)(forall z)(R(x,fx,z)) is true*.
So the mathematicians don't have to know hardly anything about HA or
constructivity in general, or even buy into any related philosophy, in order
to get interested and clearly formulate the problem: find a recursive
function f. And also for that, they don't even have to know what recursive
means! They only need to recognize that something is recursive if it is.
Of course, mathematicians are looking for something weaker and stronger
than merely a recursive f such that (forall x)(forall z)(R(x,fx,z)). They
want a "reasonable" function f such that
(forall x)(therexists y < fx)(forall z)(R(x,fx,z)).
This is likely to, but *might* not be enough to give a constructive proof of
(forall x)(there exists y)(forall z)(R(x,y,z)).
Let us summarize this situation by formal results.
THEOREM 1. Let P be an AEA sentence, (forall x)(there exists y)(forall z)(R(x,y,z)). Then P has a proof in HA if and only if there is a
z)(R(x,y,z,)). Then P has a proof in HA if and only if there is a
presentation of a partial recursive function f such that PA proves (forall
x)(forall z)(R(x,fx,z)).
THEOREM 2. There exists a sentence "(forall x)(R(x)) or (forall x)(S(x))"
which is provable in PA but not in HA.
The above theorem presumably applies to various fragments of PA/HA. In
particular, it applies to EFA = exponential function arithmetic.
Let me remind readers of the following, originally due to Godel.
THEOREM 3. Every AE sentence provable in PA is provable in HA. There is no
significant blowup involved.
Theorem 1 can be extended in the obvious way to prenex sentences.
PROGRAM: Give an appropriate necessary and sufficient criterion for a
sentence to be provable in HA in terms of provability in PA. For striking
progress, one should be doing this for various special syntactic classes, as
I have done here with the known Theorems 1 and 3.
This program should be gone through systematically. Note that syntactic
classes here should not be merely prenex classes. That would be already
clear from Theorem 1 and its obvious extension to prenex sentences.
Also worthy of discussion is the status of:
there exists a positive integer n such that e^(e+n) is irrational.
there exists a positive integer n such that e^(pi + n) is irrational.
for all irrationals x there exists a positive integer n such that x^(x+n) is
For all irrationals x,y there exists a positive integer n such that x^(y+n)
is irrational.
Some people should push this discussion forward, and then I will return to it.
Harvey Friedman
bit rusty on partial derivatives
August 18th 2010, 10:11 AM
I have a function f(u,v) of two variables
If I set $w = u\sin \theta + v \cos \theta$, how do I show then that
$\frac{ d f}{dw} = \sin \theta \frac{ \partial f}{ \partial u} + \cos \theta \frac{ \partial f}{ \partial v}$ ?
My book states this but I was wondering what rule was used to get this
Thanks very much
August 18th 2010, 10:18 AM
Well, the general rule would be
$\displaystyle{\frac{df}{dw}=\frac{\partial f}{\partial u}\,\frac{\partial u}{\partial w}+\frac{\partial f}{\partial v}\,\frac{\partial v}{\partial w}}.$
To me, off-hand, I'm a bit puzzled why the trig functions aren't in the denominators. Are you sure this is the correct expression?
August 18th 2010, 10:26 AM
Yeah, it's about a parabolic cylinder, whose bottom runs along the v direction. Theta is introduced as an angle between v and w to show that we can see what happens in all directions for all theta, except when theta = 0, i.e. what happens in the v-direction. So perhaps we are considering theta to be constant here. I don't know. Thanks for reminding me of the chain rule.
August 18th 2010, 11:13 AM
I would definitely say that $\theta$ is constant here. But what I can't get over is where the trig functions are in the expression you're trying to prove. You've got
$\displaystyle{\frac{\partial w}{\partial u}=\sin(\theta)}$, so I would expect $\displaystyle{\frac{\partial u}{\partial w}=\frac{1}{\sin(\theta)}.}$
A similar calculation would go for the other. I can't explain why this is not the case. Maybe there's something simple I'm missing. Maybe Danny could weigh in?
August 18th 2010, 11:30 AM
I will write out a section: ''
f has a maximum or a minimum (depending upon the sign) in the u-direction, but we do not yet know what happens in the v-direction. The surface z = f(x,y) is, to second order, a parabolic cylinder
In fact we know what happens in every direction except the v-direction. For let $w = u \sin \theta + v\cos \theta$. Then at the origin
$\frac{d f}{d w} = \sin \theta \frac{ \partial f}{\partial u} + \cos \theta \frac{ \partial f}{\partial v} = 0$
$\frac{ d^2 f}{d w^2} = \sin^2 \theta \frac{\partial^2 f}{\partial u^2} + 2\sin \theta \cos \theta \frac{\partial^2 f}{\partial u \partial v} + \cos^2 \theta \frac{\partial^2 f}{\partial v^2} = \
sin^2 \theta \frac{\partial^2 f}{\partial u^2}$
Hence f has the same sort of behaviour in the w-direction as in the u-direction, provided only that theta is not 0. IF theta is 0, i.e. in the v-direction, the Taylor series for f reduces to....
In case this is relevant this is an examination of what happens when the Hessian is 0 of a two variable function and not all the 2nd partial derivatives are zero
August 20th 2010, 08:15 AM
I have a function f(u,v) of two variables
If I set $w = u\sin \theta + v \cos \theta$, how do I show then that
$\frac{ d f}{dw} = \sin \theta \frac{ \partial f}{ \partial u} + \cos \theta \frac{ \partial f}{ \partial v}$ ?
My book states this but I was wondering what rule was used to get this
Thanks very much
I think what you're trying to do (please correct me if I'm wrong) is to establish the directional derivative.
If we start at the point, say $(a,b)$ and move in the direction of say ${\bf w} = < \cos \theta, \sin \theta>$ then
$D_{\bf w}f = \displaystyle \lim_{h \to 0} \dfrac{f(a + h \cos \theta, b + h \sin \theta) - f(a,b)}{h}$.
If we define $g(h) = f(a + h \cos \theta, b + h \sin \theta)$ then
$D_{\bf w}f = \displaystyle \lim_{h \to 0} \dfrac{g(h) - g(0)}{h}$
which, by definition is $g'(0)$. Using the chain rule for functions of more than one variable
$g'(h) = f_x(a + h \cos \theta, b + h \sin \theta)\cos \theta + f_y(a + h \cos \theta, b + h \sin \theta) \sin \theta$ so
$D_{\bf w}f = g'(0) = \cos \theta f_x(a , b) + \sin \theta f_y(a,b )$
noting that I've used $x'\text{s}$ and $y'\text{s}$ instead of $u'\text{s}$ and $v'\text{s}$.
I would, however, like to know your reference.
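One way to square this with the original convention (a quick check, not a new derivation): the book's $w = u\sin \theta + v \cos \theta$ corresponds to the unit direction ${\bf w} = <\sin \theta, \cos \theta>$ in the $(u,v)$-plane, so the same argument gives

$\frac{df}{dw} = D_{\bf w}f = \sin \theta \frac{\partial f}{\partial u} + \cos \theta \frac{\partial f}{\partial v},$

with $w$ read as arc length along ${\bf w}$. That is also why the trig functions sit in the numerators rather than the denominators: $u$ and $v$ vary together along ${\bf w}$, rather than being solved for independently in terms of $w$.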
August 20th 2010, 08:22 AM
The book is Saunders, An Introduction to Catastrophe Theory.
Thanks for the help, I think it is the directional derivative. You can also work it out by making a w' variable which is orthogonal to the w axis, then rearranging and doing partial derivatives.
Annapolis Junction Statistics Tutor
Find an Annapolis Junction Statistics Tutor
...In Mathematics, my tutoring experience includes: - Arithmetic - Pre-algebra - Algebra I & II - Plane & Analytic Geometry - Trigonometry - Probability & Statistics - Number Theory - Calculus -
Differential Equations -- Ordinary and Partial - Real & Complex Analysis - Numerical Analysis In the Sc...
39 Subjects: including statistics, English, ACT English, Java
...I have former colleagues who continue to call me to ask me questions about functions in Word and Excel when they can't get help from their IT department. I am primarily self-taught, and love to
share software tips with others. I have been using STATA for nearly 10 years, and teaching students, as well as colleagues, in its use.
6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word
...These courses involved solving differential equations related to applications in physics and electrical engineering. As an undergraduate student in Electrical Engineering and Physics and as a
graduate student, I took courses in mathematical methods for physics and engineering. These courses inc...
16 Subjects: including statistics, physics, calculus, geometry
...I am blessed with vast knowledge in Maths and Chemistry (my BS degree). I have over 20 years of teaching experience in Kenya and the USA. Through the years of hard work, I have acquired prowess
in the following subject areas: Maths - Algebra I and II, College Math, Geometry and Probability, Chemis...
15 Subjects: including statistics, chemistry, geometry, algebra 1
...My significant body of research in mathematics education, which includes grants from the National Science Foundation and Department of Education, makes me an expert on student learning and
thinking. I focus, in particular, on students' problem-solving skills, which includes forming a clear under...
17 Subjects: including statistics, calculus, physics, geometry | {"url":"http://www.purplemath.com/annapolis_junction_statistics_tutors.php","timestamp":"2014-04-19T06:58:29Z","content_type":null,"content_length":"24561","record_id":"<urn:uuid:2b344ed9-1c97-485b-93ca-da89742f40ae>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thiemann’s quantization of the Nambu-Goto action
Posted by urs
Last year there was a symposium called Strings meet Loops at the AEI in Potsdam at which researchers in the fields of String Theory and Loop Quantum Gravity were supposed to learn about each other’s
approaches. In his introductory remarks H. Nicolai (being a string theorist) urged the LQG theorists to try to better understand how their quantization approach compares to known results.
Since the worldsheet theory of the (super)string is nothing but (super)gravity in 1+1 dimensions coupled to other fields it would be an ideal laboratory to compare the results of LQG in this setting
to the usual lore, which in particular features the central extension of the Virasoro algebra as well as consistency conditions on the number of target space dimensions.
How does this model fit into the framework of canonical and loop quantum gravity?
Nicolai asked.
A search on the arXiv showed that so far only one paper had appeared which did address aspects of this simple and yet somewhat decisive question:
Artem Starodubtsev, String theory in a vertex operator representation: a simple model for testing loop quantum gravity.
Starodubtsev concluded:
The suggested [LQG-like] version of the Hamiltonian constraint leaves us with a theory which is considerably different from ordinary string theory. There are several indications that string
theory in its usual form can probably not be recovered from the model obtained. […] the first version of Hamiltonian constraint is anomaly-free and the same is true of the diffeomorphism constraint.
When, after the symposium, I mentioned this reference to A. Ashtekar, a leading figure in LQG, he told me that he meanwhile was aware of this result and was planning to analyze the problem in more detail.
Apparently this has borne fruit by now, since yesterday a paper by Th. Thiemann appeared on the arXiv:
Th. Thiemann, The LQG-String: Loop Quantum Gravity Quantization of String Theory I. Flat Target Space
which gives a detailed analysis of an LQG inspired canonical quantization of the 1+1 dimensional Nambu-Goto action for flat target space. The approach is a little different from that by Starodubtsev,
but the results are similar in their unorthodoxy: Thiemann finds
- no sign of a critical dimension
- no ghost states
- no anomaly, no central charge
- no tachyon (and, indeed, not the rest of the usual string spectrum).
The claim is that all this is possible due to a quantization ambiguity that has not been noticed or not been investigated before: Instead of using the usual Fock/CFT representation and imposing the
constraints as operator equations, Thiemann uses families of abstract representations of the operator algebra obtained by the GNS construction and solves the quantum constraints by a method called
group averaging, or its more sophisticated cousin, the so-called Direct Integral Method.
Since these are the same methods used in LQG for quantizing the gravitational field in 3+1 dimensions it is somewhat interesting to see how vastly different the results obtained this way are from the
standard lore. One might hence take this as a sign that the LQG approach to quantization is odd. But in some circles this is interpreted in just the opposite way, dreaming of the possibility that the
new quantization method might improve on the standard approach to quantization in string theory. Indeed Thiemann himself speculates in his conclusions that his quantization prescription might
- solve the cosmological constant problem
- clarify tachyon condensation [?]
- solve the vacuum degeneracy puzzle
- help finding a working phenomenological model
- help proving perturbative finiteness beyond two loops.
To my mind these are surprisingly bold speculations.
I would much rather like to understand conceptually the nature of the apparent quantization ambiguity (if it really is one) that is the basis for all this. Do we really have this much freedom in
quantizing the NG action? Why then do several different quantization schemes (BRST, path integral, lightcone quantization) all yield the standard result which strongly disagrees with the one obtained
by Thiemann? What is the crucial assumption in Thiemann’s quantization that makes it different from the ordinary one?
I believe that these questions are what originally motivated H. Nicolai to initiate this investigation and their answer should teach us something.
In the remainder of this entry I shall try to look at some of the technical details of Thiemann’s paper, trying to understand what exactly it is that is going on.
We all know from Edward Nelson that
First quantization is a mystery.
But it should be possible to understand how precisely it is mysterious and how it is not.
[Note added later on:]
After an intensive discussion and some false attempts to explain what is going on in Thomas Thiemann’s paper, he finally chimed in himself and we could clarify the issue at the technical level. The
crucial point is the following:
Thomas Thiemann does not perform a canonical quantization of the Virasoro constraints, if we understand by canonical quantization that a theory with classical first-class constraints $C_I$ is quantized by demanding
(1) $\langle \text{phys} \mid \hat{C}_I \mid \text{phys} \rangle = 0\,.$
What Thomas Thiemann instead does (by his own account) is the following:
1) Find a representation $\hat{U}_\phi$ of the classical symmetry group elements $\phi$ on some Hilbert space. (Here the $\hat{U}_\phi$ need not have anything to do with the quantized $\hat{C}_I$, and in the case of the 'LQG-string' they don't have anything to do with them.)
2) Demand that physical states are invariant under the action of the $\hat{U}_\phi$.
It is clear that this method explicitly translates the classical symmetry group to the 'quantum' theory and hence cannot, by its very construction, ever find any anomalies and related quantum effects.
An interesting aspect of this is that exactly the same method is used with respect to the spatial diffeomorphism constraints in Loop Quantum Gravity (while the Hamiltonian constraint is quantized
more in the usual way). It must therefore be emphasized that LQG is not canonical quantization in the sense that the classical first-class constraints are not promoted to hold as expectation value
equations in the quantum theory.
For me, this is the crucial insight of this discussion, and it shows that Hermann Nicolai’s question did address precisely the right problem. In the toy example laboratory of the Nambu-Goto string it
is much easier for non-experts (like me) to follow the details and implications of what is being done, than in full fledged LQG. And it turns out, to my surprise, that what is being done is a
speculative proposal for an alternative to standard quantum theory. This is not only my interpretation, but Thomas Thiemann himself says that the procedure, sketched above, for dealing with the
constraints, should be compared to experiment to see if nature favors it over standard Dirac/Gupta-Bleuler quantization.
I am open-minded and can accept this in principle, but this has not been obvious to me at all before. It means that, in the strict sense of the word 'canonical', LQG is not canonical at all but rather similar in spirit to other proposed modifications of quantum theory, like for instance those proposed to explain away the black hole information loss problem by modifying Schroedinger's equation.
I have tried to discuss some of these insights here.
So let me try to recapitulate the key idea in Thiemann’s quantization of the Nambu-Goto action, as far as I understand it.
Let $\pi_\mu$ be the canonical momentum conjugate to the embedding variable $X^\mu$. The usual left- and right-moving bosonic fields are (pointwise)
(1) $Y^\mu_\pm := \eta^{\mu\nu}\pi_\nu \pm X'^{\mu}\,.$
Smearing them over an interval $I$ of the circle and contracting with some real $k_\mu$ yields
(2) $Y^k_\pm(I) := \int_I d\sigma\, k_\mu\, Y^\mu_\pm\,.$
These are the fields that we want to represent as operators on some Hilbert space, with commutation relations given by
(3) $[\hat{Y}^\mu_\pm(\sigma),\, \hat{Y}^\nu_\pm(\sigma')] = 2i\,\eta^{\mu\nu}\,\delta'(\sigma,\sigma')\,.$
From these one obtains bounded operators by exponentiation,
(4) $\hat{W}^k_\pm(I) := \exp\!\big(i\,\hat{Y}^k_\pm(I)\big)\,.$
The point is that for these bounded operators the GNS construction applies, which tells us how to represent any unital *-algebra by bounded operators on some Hilbert space $\mathcal{H}$, which will be called the kinematical Hilbert space (up to some details).
Now, the crucial difference to the usual Dirac quantization, where the constraints $C_I$ are imposed as
(5) $\hat{C}_I \mid\psi\rangle = 0\,,$
seems to be that the technique of group averaging instead imposes the exponentiation of this, namely
(6) $\exp\!\big(\hat{C}_I\big)\mid\psi\rangle = \mid\psi\rangle$
(in the weak sense discussed between eqs. (5.4) and (5.5) of Thiemann’s paper). Naively this might appear to be the same thing, but it is not at all!
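To get a feeling for what group averaging does, here is a minimal finite-dimensional sketch (my own toy with an invented spectrum, nothing from Thiemann's paper): averaging $\exp(i\theta\hat{C})$ over the group projects onto the kernel of $\hat{C}$, and a c-number shift of the spectrum, of the kind an anomaly would produce, leaves no invariant states at all.

```python
# Group averaging for a single constraint C with discrete spectrum:
# P = (average over theta of) exp(i*theta*C) projects onto the C = 0 eigenspace.
import numpy as np

C = np.diag([0.0, 1.0, -2.0])    # toy constraint with integer spectrum
thetas = np.linspace(0, 4*np.pi, 4000, endpoint=False)
avg = lambda M: sum(np.diag(np.exp(1j*t*np.diag(M))) for t in thetas)/len(thetas)

print(np.round(avg(C).real, 3))  # diag(1, 0, 0): only the C = 0 state survives

# Shift the spectrum by 1/2 (a caricature of an anomalous c-number shift):
print(np.round(np.abs(avg(C + 0.5*np.eye(3))), 3))   # ~0: nothing is invariant
```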
As an example, consider the commutator of one of the Virasoro constraints $\hat{V}_\pm(\xi)$ with $\hat{W}^k_\pm(I)$. There is an operator ordering issue, and dealing with that yields the usual result that the conformal dimension of these $\hat{W}^k_\pm$ depends on $k$. But now instead look at the exponentiated expression
(7) $e^{\hat{V}_\pm}\, \hat{W}^k_\pm(I)\, e^{-\hat{V}_\pm} = \exp\!\big(i\, e^{\hat{V}_\pm}\, \hat{Y}^k_\pm(I)\, e^{-\hat{V}_\pm}\big) = \exp\!\big(i\, \hat{Y}^k_\pm(\varphi_\pm(I))\big) = \hat{W}^k_\pm(\varphi_\pm(I))\,,$
where $\varphi_\pm$ here denotes the group element of $\mathrm{Diff}(S^1)$ associated with $V_\pm = V_\pm(\xi)$ ($\xi$ is some smearing function).
The exponentiation in a sense removes all operator ordering ambiguities, since the conjugation operation (the similarity transform) $e^{\hat{V}_\pm}\,\cdot\,e^{-\hat{V}_\pm}$ acts on every $\hat{Y}^k_\pm$ separately, and there is no operator ordering issue in the commutator $[\hat{V}_\pm, \hat{W}^k_\pm(I)]$.
Without this operator ordering issue there is no anomaly, hence no critical dimension, no tachyon, etc.
I therefore believe that the quantum ambiguity between the two sides of
(8) $\hat{C}_I\mid\psi\rangle = 0 \;\leftrightarrow\; \exp\!\big(\hat{C}_I\big)\mid\psi\rangle = \mid\psi\rangle$
is what is at the heart of the difference between Thiemann’s quantization and the usual OCQ/BRST quantization.
Am I wrong?
Even if this is about right, there is something related which I don't quite understand yet. Somehow the center-of-mass degree of freedom of the string is missing from Thiemann's original Hilbert space. In section 6.4 he re-incorporates it by using a D-parameter family of copies of his original Hilbert space, which hence clearly was just that of the string oscillations. What I am puzzled about is that the 0-mode of the momentum operator does not seem to be the same thing as the $\pi_\mu(p_\nu)$ above equation (6.36). It seems to me that the two should be identified, somehow, and that
then the question whether there is a tachyon or not should be addressed by actually constructing group-averaged and hence physical states.
Posted at January 27, 2004 3:34 PM UTC
Re: Thiemann’s quantization of the Nambu-Goto action
Here is a copy of Luboš’ answer to a related post of mine on sci.physics.research:
On 27 Jan 2004, Urs Schreiber wrote:
> I was trying to figure out what exactly it is in Th. Thiemanns
> quantization hep-th/0401172 of what he calls the ‘LQG-string’ that
> makes it so different from the usual quantization. I now believe that
> the crucial issue is how to impose the constraints.
Exactly. If physics is done properly, the (Virasoro) constraints are not
arbitrary constraints that are added by hand. They are really Einstein’s
equations, derived as the equations of motion from the action if it is
varied with respect to the metric - in this case the worldsheet metric.
The term R_{ab}-R.g_{ab}/2 vanishes identically in two dimensions, and
T_{ab}=0 is the only term in the equation that imposes the constraint. The
constraints are really Einstein’s equations, once again.
Moreover, because the (correct) theory is conformal, the trace
T_{ab}g^{ab} vanishes identically, too, and therefore the three
components of the symmetric tensor T_{ab} actually reduce to two
components, and those two components impose the so-called Virasoro
constraints (which are easiest to be parameterized in the conformal gauge
where the metric is the standard flat metric rescaled by a
spacetime-dependent factor). For closed strings, there are independent
holomorphic and independent antiholomorphic generators - and they become
left-moving and right-moving observables on the Minkowski worldsheet
after we Wick-rotate.
Thomas Thiemann does not appreciate the logic behind all these things, and
he wants to work directly with the (obsolete) Nambu-Goto action to avoid
conformal field theory that he finds too difficult. Of course, the
Nambu-Goto action has no worldsheet metric, and therefore one is not
allowed to impose any further constraints. They simply don’t follow and
can’t follow from anything such as the equations of motion.
Thiemann does not give up, and imposes “the two” constraints by hand. It
is obvious from his paper that he thinks that one can add any constraints
he likes. Of course, there are no “the two” constraints. If he has no
worldsheet metric, the stress energy tensor has three components, and
there is no way to reduce them to two. Regardless of the effort one makes,
two tensor constraints in a general covariant nonconformal theory can
never transform properly as a tensor - because a symmetric tensor simply
has three components - and therefore his constraints won’t close upon an
algebra. His equations are manifestly general non-covariant, in contrast
with his claims.
Equivalently, because he obtained these constraints by artificially
imposing them, they won’t behave as conserved currents. (In a general
covariant theory without the worldsheet metric, we can’t even say what
it means for a current to be conserved, because the conservation law
nabla_a T^{ab} = 0 requires a metric to define the covariant derivative.) If
they don’t behave as conserved currents, they don’t commute with the
Hamiltonian, and imposing these constraints at t=0 will violate them at
nonzero “t” anyway (the constraint is not conserved).
If one summarizes the situation, these constraints simply contradict the
equations of motion. It is not surprising. We are only allowed to derive
*one* equation of motion for each degree of freedom i.e. each component of
X, and this equation was derived from the action. Any further constraint
is inconsistent with such equations unless we add new degrees of freedom.
I hope that this point is absolutely clear. The equations of motion don’t
allow any new arbitrarily added constraints unless it is possible to
derive them from extra terms in the action (that can contain Lagrange
multipliers). The Lagrange multipliers for the Virasoro constraints *are*
the components of worldsheet metric, and omitting one component of g_{ab}
makes his theory explicitly non-covariant (even if Thiemann tries to
obscure the situation by using the letters C,D for the two components of
the metric in eqn. (3.1)).
The conformal symmetry is absolutely paramount in the process of solving
the theory and identifying the Virasoro algebra - isolating the two
generators T_{zz} and T_{zBAR zBAR} per point from the general symmetric
tensor. Conformal/Virasoro transformations are those that fix the
conformal gauge - i.e. the requirement that the metric is given by the
unit matrix up to an overall rescaling. Conformal theories give us T_{z
zBAR} (the trace) equal to zero, and this is necessary to decouple T_{zz}
and T_{z zBAR}. In two dimensions, the conformal transformations -
equivalently the maps preserving the angles - are the holomorphic maps
(with possible poles), and the holomorphic automorphisms of a closed
string’s worldsheet are generated by two sets of the Virasoro generators.
This material - why it is necessary to go from the Nambu-Goto action to
the Polyakov action and to conformal field theory in order to solve the
relativistic string and quantize it - is a basic material of chapter 1 or
chapter 2 of all elementary books about string theory and conformal field
theory. I think that a careful student should first try to understand this
basic stuff, before he or she decides to write “bombastic” papers boldly
claiming the discovery of new string theories and invalidity of all the
constraints (such as the critical dimension) that we have ever found.
In fact, I think that a careful student should first try to go through the
whole textbook first, before he publishes a paper on a related topic.
Thomas Thiemann is extremely far from being able to understand the chapter
3 about the BRST quantization, for example.
Thiemann’s theory has very little to do with string theory, and very
little to do with real physics, and unlike string theory, it is
inconsistent and misled. String theory is a very robust and unique theory
and there is no way to “deform it” from its stringiness, certainly not in
these naive ways.
> This may seem like essentially the same thing, but the crucial issue is
> apparently that the latter form allows to deal quite differently with
> operator ordering, which completely changes the quantization. In particular,
> it seems to allow Thiemann, in this case, to have no operator re-ordering at
> all, which is the basis for him not finding an anomaly, hence no tachyon and
> no critical dimension.
A problem is that you don’t know what you’re averaging over because his
“group” is not a real symmetry of the dynamics.
By the way, if you want to define physical spectrum by a
Gupta-Bleuler-like method, you must have a rule for a state itself that
decides whether the state is physical or not. In Gupta-Bleuler old
quantization of the string, “L_0 - a” and “L_m” for m>0 are required
to annihilate the physical states. This implies that the matrix element of
any L_n is zero (or “a” for n=0) because the negative ones annihilate the bra.
It is important that we could have defined the physical spectrum using a
condition that involves the single state only. If you decided to define
the physical spectrum by saying that all matrix elements of an operator
(or many operators) between the physical states must vanish, you might
obtain many solutions of this self-contained condition. For example, you
could switch the roles of L_7 and L_{-7}. However all consistent solutions
would give you an equivalent Hilbert space to the standard one.
The modern BRST quantization allows us to impose the conditions in a
stronger way. All these subtle things - such as the b,c system carrying
the central charge c=-26 - are extremely important for a correct
treatment of the strings, and they can be derived unambiguously.
> If this is true and Group averaging on the one hand and Gupta-Bleuler
> quantization on the other hand are two inequivalent consistent quantizations
> for the same constrained classical system I would like to understand if they
> are related in any sense.
No, they are not. What is called here the “group averaging” is a naive
classical operation that does not allow one any sort of quantization. You
can simply look at his statements - such as the one below eqn. (5.2) -
that in his treatment, the “anomaly” (central charge) in the commutation
relations (of the Virasoro algebra, for example) vanishes, are never
justified by anything. They are only justified by their simple intuition
that things should be simple. This incorrect result is then spread
everywhere, much like many other incorrect results. It is as wrong as
simply saying that we have constructed a different representation of
quantum mechanics where the operators “x” and “p” commute with one another.
The central charge - the c-number that appears on the right hand side of
the Virasoro algebra - is absolutely real and uniquely determined by the
type of field theory that we study (and the theory must be conformal,
otherwise it is not possible to talk about the Virasoro algebra). It can
be calculated in many ways and any treatment that claims that the Virasoro
generators constructed out of X don’t carry any central charge is simply wrong.
There is absolutely no ambiguity in quantization of the perturbative
string. Knowing the background is equivalent to knowing the full theory,
its spectrum, and its interactions. There is no doubt that Thiemann’s
paper - one with the big claims about the “ambiguities” of the
quantization of the string - is plain wrong, and exhibits not one but plenty
of elementary misunderstandings by the author about the role of
constraints, symmetries, anomalies, and commutators in physics.
Let me summarize a small part of his fundamental errors again. He believes
many very incorrect ideas, for example that
* artificially chosen constraints can be freely imposed on your Hilbert
space, without ruining the theory and contradicting the equations of motion
* two constraints in 2 dimensions can transform as a general symmetric
tensor, and having a tensor with a wrong number of components does not
spoil the general covariance
* he also thinks that the Virasoro generators have nothing to do with the
conformal symmetry and they have the same form in any 2D theory
* in other words, he believes that you can isolate the Virasoro generators
without going to a conformal gauge
* classical Poisson brackets and classical reasoning is enough to
determine the commutators in the corresponding quantum theory
* anomalies in symmetries, carried by various degrees of freedom,
can be ignored or hand-waved away
* there is an ambiguity in defining a representation of the algebra of
creation and annihilation operators
* the calculation of the conformal anomaly does not have to be treated carefully
* the tools of the so-called axiomatic quantum field theory are useful
in treating two-dimensional field theories related to
perturbative string theory
* if a set of formulae looks well enough to him, it must be OK and the
consistent stringy interactions and everything else must follow
Once again, all these things are wrong, much like nearly all of his
conclusions (and completely all “new” conclusions).
Thiemann himself admits that this is the same type of “methods” that they
have also applied to four-dimensional gravity. Well, probably. My research
of the papers on loop quantum gravity confirms it with a high degree of
reliability. Every time one can calculate something that gives them an
interesting but inconvenient result, they claim that in fact we don’t need
to calculate it, and it might be ambiguous, and so on. No, this is not
what we can call science. In science, including string theory, we have
pretty well-defined rules how to calculate some class of observables, and
all things calculated according to these rules must be treated seriously.
If a single thing disagrees, the theory must be rejected.
The inevitability of conformal symmetry for a controlled quantization of
the relativistic string - and for isolation (in fact, the definition) of
the Virasoro generators - is real. The theorems of CFT about its being
uniquely determined by certain data are also real. The conformal anomalies
of certain fields are also real. The two-loop divergent diagrams in
ordinary GR are also real. We know how to compute and prove all these
things, and propagating fog and mist can only obscure these
well-established facts from those who don’t want to see the truth.
I guess that this paper will demonstrate to most theoretical physicists -
even those who have not been interested in these “alternative” fields -
how bad the situation in the loop quantum gravity community has become.
There are hundreds of people who understand the quantization of a free
string very well, and they can judge whether Thiemann’s paper is
reasonable or not and whether funding of this “new kind of science”
should continue.
All the best
Posted by: Urs Schreiber on January 28, 2004 3:01 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
Hi Luboš,
thanks for your answer!
I see your general point, but would like to look at some of the issues you raised in more detail.
You say that the Nambu-Goto action is ‘obsolete’. But of course the NG action is classically equivalent to the Polyakov action and I think that in the critical number of dimensions the equivalence
extends to the quantum theory. Furthermore, the Nambu-Goto action for the string is essentially the Dirac-Born-Infeld action (up to the worldsheet gauge field) of the D-string.
As far as I can see, the constraints that Thiemann arrives at in equation (2.4) of his paper follow from standard canonical reasoning. One finds that the canonical momenta $\pi_\mu$ of the
Nambu-Goto action as well as of the DBI action classically satisfy two identities which can be identified as constraints. At the classical level these constraints are precisely the (classical)
Virasoro constraints that one also obtains by varying the worldsheet metric in the Polyakov action. Since the two actions are classically equivalent this is no surprise.
My point is that there should be a priori nothing wrong with looking at the Nambu-Goto action when studying the string. Indeed this is frequently done for instance when F-strings and D-strings are
considered at the same time, as for instance in
Y. Igarashi, K. Itoh, K. Kamimura, R. Kuriki, Canonical equivalence between super D-string and type IIB superstring.
In equations (2.3) and (2.4) of this paper the authors in particular give the same two bosonic constraints of the Nambu-Goto action that Thiemann arrives at. Their action also involves superfields
and the worldsheet gauge field, but this does not affect the general result that the Virasoro constraints follow from a canonical analysis of the Nambu-Goto action. I have spelled out the derivation
(for the bosonic DBI action) in a recent entry. (By setting the worldsheet gauge field and the $C$ fields to zero this derivation directly restricts to that for the ordinary Nambu-Goto action).
My point is that it is maybe not fair to say that Thiemann artificially or freely chooses the constraints - at least not at the classical level. The constraints that he uses are, classically, the
Virasoro constraints of the closed bosonic string.
My suspicion is rather that Thiemann deviates from standard reasoning when he defines what he wants to understand under quantizing the Virasoro constraints. Would you agree with this?
Let’s ignore the way on which we arrived at the classical Virasoro constraints (by starting from one of various classically equivalent actions) and concentrate on the question what it means to
quantize them.
The standard procedure is to make Gupta-Bleuler quantization and use either creation/annihilation operator normal ordering or CFT techniques to make sense of the quantum representation of the
classical Virasoro generators. This leads in the usual way to the anomaly, the shift $a$ in $(L_0 - a)$, and so on.
Thiemann claims (based on a large literature on the quantization of constrained systems that is also the basis for loop quantum gravity) that there is an at least superficially different technique that can also be regarded as a quantization of the Virasoro constraints. In the simple case at hand this amounts to imposing the constraints in the way mentioned right below equation (5.4), which essentially says that
(1) $\langle\psi\mid \exp(\mathrm{constraints})\mid\psi'\rangle = \langle\psi\mid\psi'\rangle\,,$
where the Hilbert space and the representation of the operators is not necessarily the usual Fock representation.
This is not equivalent to and not even implied by saying that
(2) $\langle\psi\mid \mathrm{constraints}\mid\psi'\rangle = 0\,.$
Of course when I write this I am ignoring issues of what we really mean by writing $\exp(\text{some operator})$, i.e. whether this is supposed to be normal ordered or regulated or what. I am trusting that this is taken care of by Thiemann's rigorous construction of Hilbert spaces and operators on them, but I guess that Luboš disagrees with this. :-)
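A finite-dimensional toy (my own, with made-up operators, purely to isolate the logical point) shows that the exponentiated condition (1) can indeed hold for every state while the infinitesimal condition (2) fails: take a constraint with spectrum in $2\pi i\,\mathbb{Z}$, so that its exponential is the identity.

```python
# exp(C)|psi> = |psi> for all |psi>, yet <psi|C|psi> != 0 in general:
# the exponentiated and the infinitesimal conditions are inequivalent.
import numpy as np
from scipy.linalg import expm

C = 2j*np.pi*np.diag([0.0, 1.0, 2.0])   # spectrum in 2*pi*i*Z, so exp(C) = 1
psi = np.ones(3)/np.sqrt(3)

print(np.allclose(expm(C) @ psi, psi))  # True
print(psi.conj() @ C @ psi)             # 2*pi*i, not zero
```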
Posted by: Urs Schreiber on January 28, 2004 4:14 PM | Permalink | Reply to this
Thomas Thiemann does not appreciate the logic behind all these things, and he wants to work directly with the (obsolete) Nambu-Goto action to avoid conformal field theory that he finds too
difficult. Of course, the Nambu-Goto action has no worldsheet metric, and therefore one is not allowed to impose any further constraints. They simply don’t follow and can’t follow from anything
such as the equations of motion.
As I teach my students in the first days of my String Theory class, the Virasoro constraints follow straightforwardly from a canonical treatment of the Nambu-Goto string.
That is hardly the issue.
Posted by: Jacques Distler on January 29, 2004 2:58 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
Here is another reply by Luboš:
Dear Urs,
Concerning your comments that you can get rid of all ordering constants by
exponentiating something, I hope that you don’t really believe it because
this would be a complete misunderstanding of the singularities in quantum
field theory. The exponentials of something always store the same
information as “something”, and if one of them has some ordering constant
contribution, you see it in the other as well.
For example, X(z) X(0) have logarithmic OPEs. This implies that
exp(i.K.X(z)) has a power law OPE with exp(-i.K.X(z)). It’s totally
nonsensical at quantum level to imagine that exp(-i.K.X(z)) is an inverse
operator to exp(i.K.X(z)). Do you understand why? This is a very
important point.
While for the Virasoro group without the central charge you would be able
to write the explicit “exponentiated” elements of the reparameterization
group and - because they have a clear geometric interpretation - you could
invert them without anomalies, it is simply not true for the Virasoro
operators generating the reparameterization of X’s. Because of the term
c/z^4 in the OPE of two stress energy tensors, you must know very well
that exp(-V) can’t be treated as the inverse of exp(+V). You can only
imagine that exp(V) is an honest element of a group if the OPEs of V with
itself - and all other “V“‘s that you want to use - only have the 1/z
term, corresponding to the commutator. Recall that
O1(z) O2(0) ~ [O1,O2] (z) / z
the coefficient of 1/z is schematically the commutator of the two
operators. If you integrate a stress energy tensor etc., it is also OK to
have the 1/z^2 term in the OPEs of the stress energy tensor because it
reflects the worldsheet dimension of the stress energy tensor and tells
you how should you integrate it to get scalars etc.
But the OPE of the stress energy tensor (of the X^mu CFT) with itself
contains an extra 1/z^4 term. This is just a fact that you can calculate
in many ways, and this simply means that exp(V) where V is a Virasoro
generator, or some integrated combination of the stress energy tensor,
does not behave as an honest element of some group, and exp(-V) is not in
any naive sense inverse to exp(V) because these two *operators* have singular OPEs with one another.
Note that his naive operation, involving the (wrong) application of the
exp(C.D.C^{-1}) = C exp(D) C^{-1}
which is OK for matrices, is incorrect in our “usual” representation of
CFT, because of singularities between C and C itself. You can’t imagine
that C^{-1} is inverse to C - there are just no meaningful operators on
the Hilbert space that would look like C=exp(V) and were inverse to one
another. Because C^{-1}.C is not really one, you can’t derive the formula
you derived either, unless c=0. Note that it even requires you, for
C=exp(V), to consider exp(exp(V)…). These are heavily singular
operators, and all these confusions simply come from his/their wrong
intuition that you can work with the operators in CFT as with ordinary
classical numbers. They don’t understand where the normal ordering terms
come from, they don’t understand singularities of operators in quantum
field theories, they don’t understand the difference between classical and
quantum field theory.
It’s just totally pathetic, and every student in theoretical physics
should be able to identify all these errors.
All the best
Posted by: Urs Schreiber on January 28, 2004 4:17 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
Hi again, Luboš!
Yes, I understand everything that you say here. I know that $:\exp(-V):$ is not the inverse of $:\exp(V):$ in CFT, and I do understand where the $1/z^4$ terms come from. When you go back to my original entry you'll see that I address precisely this phenomenon by mentioning that things like $:\exp(k\cdot X):$ have conformal dimension depending on $k$ in CFT, which is another aspect of this phenomenon.
But, yes, I was taking for granted that Thiemann is using a representation of his operators that allows him to ignore all normal ordering issues and work with them as with matrices, and hence not as in CFT. He is referring to lots of mathematical theorems, using the GNS construction etc. (which I obviously haven't checked myself; I am trusting that he applies them correctly), and even though he does not say so explicitly I deduced from his paper, in particular from the third paragraph on p. 20, that he does use
(1) $\exp(C\, D\, C^{-1}) = C \exp(D)\, C^{-1}\,.$
I do understand that this does not make sense in CFT (or even any other quantum field theory in the usual sense) but I also believe that a large number of mathematically versed people in the LQG camp
do think that this can be given good meaning by using all these mathematical constructions that Thiemann alludes to. Unfortunately I am not an expert on this stuff.
I think the key ingredient is the GNS construction, which tells you that a unital *-algebra can be represented faithfully, i.e. without normal ordering issues, just like matrices on some Hilbert space.
That's the content of the relation in the 9th line from below on p. 15:
(2) $\pi_\omega(a)\,\pi_\omega(b) = \pi_\omega(ab)\,.$
On the right hand side is the classical multiplication of the algebra, on the left hand side we have operator multiplication. Whenever this is true we do have
(3) $\big(\exp(\pi_\omega(a))\big)^{-1} = \exp\!\big(-\pi_\omega(a)\big)\,.$
There is some fine print to this construction which I am maybe not fully aware of. In particular, things need to be bounded for this to make sense. That's why Thiemann uses the operators $\hat{W} = \exp(i\hat{Y})$ instead of the $\hat{Y}$ themselves, because these would be unbounded.
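For honest matrices the identities at stake here hold automatically, which is presumably the intuition the GNS representation is meant to preserve; a quick numpy check (my own illustration, with random matrices):

```python
# exp(-A) inverts exp(A), and conjugation commutes with the exponential --
# exactly the relations that fail in CFT because of singular OPEs.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = 0.5*rng.normal(size=(4, 4))
D = 0.5*rng.normal(size=(4, 4))
Cmat = expm(0.5*rng.normal(size=(4, 4)))   # an invertible "group element"
Cinv = np.linalg.inv(Cmat)

print(np.allclose(expm(A) @ expm(-A), np.eye(4)))                 # True
print(np.allclose(expm(Cmat @ D @ Cinv), Cmat @ expm(D) @ Cinv))  # True
```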
Posted by: Urs Schreiber on January 28, 2004 4:41 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
For the general discussion of Thiemann’s paper I think it is important to realize that much of the usual lore of QFT is not supposed to apply. In particular, there is, as far as I understand, nothing
like a double Wick contraction in the commutator of two Virasoro generators.
Let me spell this out in detail:
Assume that we have operators $Y(\sigma)$, $\sigma \in S^1$, which have the commutator
(1) $[Y(\sigma), Y(\sigma')] = -\delta'(\sigma,\sigma')\,,$
as in equation (6.4) of Thiemann's paper. Next assume that one can make sense of products of these operators $Y(\sigma)Y(\sigma)$ at equal points, without introducing any notion of normal ordering. This can either be thought of as pertaining to the classical Poisson algebra or, according to Thiemann et al. (if I understand correctly), as pertaining to a special representation on a special Hilbert space $\mathcal{H}_\omega$ obtained by the GNS construction. Anyway, assume that the following expression makes sense:
(2) $L_\xi := \frac{1}{2}\int_{S^1} d\sigma\, \xi(\sigma)\, Y(\sigma)\, Y(\sigma)\,.$
The point is not to worry, for the moment, how this object is supposed to act on some state, but merely to regard its algebraic relations.
These all follow from
(3) $[L_\xi, Y(\sigma)] = \big(\xi(\sigma)\, Y(\sigma)\big)'\,.$
This is nothing but what one also gets by using classical Poisson brackets, too.
For convenience, let me introduce some notation: for a general field $A(\sigma)$, let $w(A)$ be the classical conformal weight of $A(\sigma)$ iff
(4) $[L_\xi, A(\sigma)] = \xi(\sigma)\, A'(\sigma) + w(A)\, \xi'(\sigma)\, A(\sigma)\,.$
It is easy to check that
(5) $w(A(\sigma)\,B(\sigma)) = w(A(\sigma)) + w(B(\sigma))\,,$
so that
(6) $w(Y(\sigma)) = 1\,,$
(7) $w(Y(\sigma)\,Y(\sigma)) = 2\,,$
and so on.
Now denote, for any field $A(\sigma)$ and any complex-valued function $\xi$ on $S^1$, the $\xi$-mode of $A$ by $A_\xi$, i.e.
(8) $A_\xi := \int d\sigma\, \xi(\sigma)\, A(\sigma)\,.$
Using again the naive quantum mechanical commutation relations (or Poisson brackets), one finds the following transformation of such modes:
(9) $[L_{\xi_1},\, A_{\xi_2}] = A_{(w-1)\,\xi_1'\xi_2 \,-\, \xi_1\xi_2'}\,.$
This implies in particular that
(10) $[L_{\xi_1}, L_{\xi_2}] = L_{\xi_1'\xi_2 - \xi_1\xi_2'}\,.$
This is of course nothing but the usual relation known from classical Poisson brackets of the classical Virasoro constraints, as reviewed for instance by Thiemann in his equation (3.3). There is no anomaly, because one assumed there was no need to consider normal ordering as in $:L_\xi:$ or the like, and all operator products are assumed to behave like classical products. But the important point seems to be that Thiemann claims that section 6.2 of his paper gives us a way to make sense of the above algebraic expressions as relations between operators that are well defined on some Hilbert space $\mathcal{H}_\omega$. This is how he gets a representation of the conformal group on his Hilbert space without having a conformal anomaly.
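On Fourier modes $\xi_m = e^{im\sigma}$ the bracket appearing in (10) reduces to the centerless Witt algebra, which a two-line sympy computation confirms (my own sketch):

```python
# The bracket xi1' xi2 - xi1 xi2' on modes e^{i m sigma} has structure
# constants i(m - n): [L_m, L_n] ~ (m - n) L_{m+n}, with no central term.
import sympy as sp

sigma = sp.symbols('sigma', real=True)
m, n = sp.symbols('m n', integer=True)
xi1, xi2 = sp.exp(sp.I*m*sigma), sp.exp(sp.I*n*sigma)
bracket = sp.diff(xi1, sigma)*xi2 - xi1*sp.diff(xi2, sigma)
print(sp.simplify(bracket / sp.exp(sp.I*(m + n)*sigma)))   # I*(m - n)
```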
Posted by: Urs Schreiber on January 28, 2004 6:48 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
I do not understand Thiemann’s paper at all so I’m going to ask totally naive questions. It seems clear that if Thiemann’s construction is consistent, this new theory is nothing like a 1+1
dimensional field theory, so it might be that my questions will not even make sense in this framework.
First, how does he calculate the spectrum? He says in some place that the graviton state is gauge-dependent, which just boggles my mind. What are the observables?
Secondly, can he write down the operator corresponding to X, and see what its commutation relations are?
Let me say I also share Lubos’ view about such grandiose claims. It doesn’t improve my confidence in the paper when he blithely disregards all the previous literature about quantizing the string.
Posted by: Arvind on January 28, 2004 11:17 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
Just a very quick and brief comment for the moment: Thiemann claims to be able to construct an operator representation $\hat{W}$ of the classical observables $W(\xi) = \exp\!\left(\int d\sigma\, \xi(\sigma)\left(i\frac{\delta}{\delta X(\sigma)} \pm X'(\sigma)\right)\right)$ that essentially behaves just as the classical $W$. This way the Pohlmeyer charges, uncontroversial classical invariants of the string, become quantum 'charges' for him.
I believe that if instead of Pohlmeyer charges we use classical DDF states (in the hopefully obvious sense) this construction would even give something similar to the usual string spectrum (up to the
offset $a$ and the existence of null states).
We should (at least I should) try to understand if and why the claim about $\stackrel{̂}{W}$ can be correct. This is where the mystery lies, I believe.
Posted by: Urs Schreiber on January 29, 2004 2:01 AM | Permalink | Reply to this
There’s no place like home
Why don’t I just close my eyes, click my heels and wish away all anomalies?
What are the rules here?
It is well known that it is impossible to preserve all of the relations of the classical Poisson-bracket algebra as operator relations in the quantum theory.
What principle allows Thiemann to decide which relations will be carried over into the quantum theory?
Where does he discuss which relations fail to carry over?
Posted by: Jacques Distler on January 29, 2004 3:20 AM | Permalink | Reply to this
Re: There’s no place like home
Maybe the issue is separability of the Hilbert space.
Is the Hilbert space Thiemann constructs in his paper separable? Unless I am missing something, it is apparently not. This might explain why things work very differently on this Hilbert space than on the ordinary separable one.
Compare the situation in what is called 'loop quantum cosmology'. There, after the dust has settled, what is done is essentially an ordinary quantization of the Wheeler-DeWitt equation, but on a non-separable Hilbert space where, if $a$ is the scale factor of the universe, all states of the form $\exp(ipa)$ for real $p$ are orthonormal, simply by postulating a non-standard scalar product $\langle\cdot\mid\cdot\rangle$ with respect to which
(1) $\langle e^{ip_1 a} \mid e^{ip_2 a}\rangle := \begin{cases} 1 & \text{if } p_1 = p_2 \\ 0 & \text{otherwise}\,. \end{cases}$
This is what makes the operator $\partial/\partial a$ technically have a discrete spectrum (where eigenstates are normalizable) even though its eigenvalues have a continuous range. (I am doing this from memory; probably I am mixing up some details. Maybe the roles of $\hat{a}$ and $\partial/\partial a$ in the above have to be exchanged.) This is the basis on which loop quantum cosmology obtains a discrete evolution of the scale factor, somehow (unfortunately I didn't understand how precisely this follows when hearing a talk about it once).
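For concreteness, here is a minimal sketch of such a non-standard ('polymer') inner product in code (my own notation): states are finite combinations of labels $e^{ipa}$, and the inner product is a Kronecker delta in the real label $p$, so uncountably many mutually orthonormal states fit into the space.

```python
# <e^{i p1 a} | e^{i p2 a}> = delta_{p1, p2} exactly (Kronecker, not Dirac),
# for arbitrary real labels p1, p2.
def inner(psi, phi):
    """psi, phi: dicts mapping a real label p to a complex coefficient."""
    return sum(c.conjugate()*phi.get(p, 0) for p, c in psi.items())

psi = {0.5: 1.0}
phi = {0.5 + 1e-9: 1.0}    # an arbitrarily close label, yet exactly orthogonal
print(inner(psi, psi))     # 1.0
print(inner(psi, phi))     # 0 -- no countable orthonormal basis spans such a space
```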
So quantization on a non-separable Hilbert space leads to radically different quantum theories. Maybe that's what happens in Thiemann's paper on the quantization of the string?
Posted by: Urs Schreiber on January 29, 2004 7:17 AM | Permalink | Reply to this
Re: There’s no place like home
Hi all,
I can’t follow any of this in detail at the moment, but my impression from talking to some of Ashtekar’s student is that non-separable Hilbert space is generic in their quatization. So, Urs may be on
the correct reasoning here. Now , then the issue is how to recover the ordinary classical world from it. I know Josh Willis did some work (and still working on it) on the issue.
Demian Cho
Posted by: Demian on January 29, 2004 2:16 PM | Permalink | Reply to this
Re: There’s no place like home
Hi Demian,
I have just finished posting a lengthy message to s.p.r. explaining why I think Thiemann's Hilbert space is indeed non-separable - when your comment comes in! :-)
Good, so my memory was correct that LQG usually deals with non-separable Hilbert spaces. I didn't fully realize this until I heard a talk by Bojowald at 'Strings meet Loops', where he mentioned that this is a crucial issue in his 'loop quantum cosmology'.
You write:
Now, then, the issue is how to recover the ordinary classical world from it. I know Josh Willis did some work on the issue (and is still working on it).
Hm. Maybe I am confused, but right now it seems that there is way too much classicality in Thiemann’s paper and that we’d rather like to understand how to recover the ordinary quantum world in his
approach! :-)
For instance, he emphasizes that his choice of inner product = choice of $\omega$ is just meant to be a simple example (although he also mentions that other examples may be hard to come by). Would
other $\omega$ maybe yield separable Hilbert spaces?
As far as I understand this could be possible, but it does not appear to be likely. Maybe Josh Willis could comment on this point?
Posted by: Urs Schreiber on January 29, 2004 2:45 PM | Permalink | Reply to this
Re: There’s no place like home
On p. 115 of
T. Thiemann, Introduction to Modern Canonical Quantum General Relativity
it says indeed
We remark that the spin-network basis is not countable because the set of graphs in $\sigma$ is not countable, whence ${ℋ}^{0}$ is not separable. We will see that this is even the case after
moding out by spatial diffeomorphisms although one can argue that after moding out by diffeomorphisms the remaining space is an orthogonal, uncountably infinite sum of superselected, mutually
isomorphic, separable Hilbert spaces.
Posted by: Urs Schreiber on January 29, 2004 3:01 PM | Permalink | Reply to this
Re: There’s no place like home
Too many forums! :)
I just posted some of my thoughts at the Physics Forum
(I had trouble getting the direct link to work here)
Which is the preferred place to discuss this: there, here, spr? :)
The point is, Urs, I think our work may be relevant to "fixing" the problem of Hilbert spaces in LQG.
You and I haven’t talked much about topology in our approach, but of course, any topology we have will be non-Hausdorff. However, it does have a property that I sometimes think of as being “weakly
Hausdorff”. I hope I am not clashing with standard terminology there. Points in our space are not separable in general, but they are weakly separable. What I mean by that is that two points are
separable if they are not contained in the same D-diamond. This gives a kind of “blurriness” down at the level of individual cells, but carries the usual notion of separability as long as you back
away from the cells a bit.
Very exciting stuff :)
Posted by: Eric on January 29, 2004 3:15 PM | Permalink | Reply to this
Re: There’s no place like home
Hi Eric!
Here, there, everywhere (imagine the respective Beatles tune :-)
I would vote for discussion here at the Coffee Table. You can be sure that I read this, while I will not regularly check the Physics Forum, in general.
Regarding your point on separability: is the notion of separability of a Hilbert space really related to separability of points in the sense of Hausdorff/non-Hausdorff? I think with respect to Hilbert spaces, separability simply means 'has a countable basis'. Is this related to a Hausdorff property, somehow?
Posted by: Urs Schreiber on January 29, 2004 3:31 PM | Permalink | Reply to this
Re: There’s no place like home
Regarding your point on separability: is the notion of separability of a Hilbert space really related to separability of points in the sense of Hausdorff/non-Hausdorff? I think with respect to Hilbert spaces, separability simply means 'has a countable basis'. Is this related to a Hausdorff property, somehow?
Hi Urs,
Don’t forget that my math sucks :) I don’t know of any theorem that relates the two ideas, but it feels right. What you wrote (somewhere between here, there, and everywhere :)) made me think that the
two illnesses were related. For example, you said
the W(I) are not sensitive to ‘neighbouring’ W(J): The Hilbert space is by construction so large that W({x}) and W({x+epsilon}) can sit right next to each other without noticing each other.
I am probably totally off and should just shut up :) At least I can say I’m having fun :)
The real point is that their inner product seems to be sick, and I think that our work (or maybe Harrison's) could be a step in the direction of trying to fix it. Maybe not.
Posted by: Eric on January 29, 2004 3:45 PM | Permalink | Reply to this
Re: There’s no place like home
What’s a separable Hilbert space, and why does it matter?
Posted by: Arvind on January 29, 2004 4:45 PM | Permalink | Reply to this
Re: There’s no place like home
A separable Hilbert space is one with a countable basis.
I don’t think it matters a whit.
Posted by: Jacques Distler on January 29, 2004 5:14 PM | Permalink | Reply to this
Re: There’s no place like home
Hm, don’t you think that the reason that Thiemann can work with his operators essentially as if dealing with a classical Poisson algebra is due to the peculiar nature of the Hilbert space that he
Posted by: Urs Schreiber on January 29, 2004 5:19 PM | Permalink | Reply to this
Re: There’s no place like home
No, I don’t believe there’s any quantization scheme that takes the full Poisson-bracket algebra of the classical theory and carries it over — unaltered — into the operator algebra of the quantum
Depending on the quantization scheme, you may be able to carry over some subalgebra (the prototypical example, being the CCRs).
Posted by: Jacques Distler on January 29, 2004 7:34 PM | Permalink | Reply to this
Re: There’s no place like home
I don’t believe either that the full Poisson algebra carries over. (There is a theorem showing that this cannot work in general.) That’s why I added the qualifier ‘essentially’. My point is, which I
have been discussing with Luboš here, that in Thiemann’s quantization many more properties of the classical algebra carry over than just the CCR. That, of course, not the entire classical algebra is
reproduced is the content of section 6.6 of Thiemann’s paper, where he discusses the quantum deformations of the classical invariant algebra of the Pohlmeyer charges.
But what is crucial for Thiemann's removal of the anomaly is that things like
(1) $\big(\exp\!\big(\pi^\mu + i X'^{\mu}\big)\big)^{-1} = \exp\!\big(-(\pi^\mu + i X'^{\mu})\big)$
do hold true in his quantization, as opposed to the analogous normal-ordered relations in CFT. You can see this explicitly in his equation (6.7) and implicitly in the absolutely crucial relation
(2) $\alpha\big(W(Y_\pm)\big) = W\big(\alpha(Y_\pm)\big)$
in the third paragraph on p. 20. (Here $\alpha$ is the action of the exponentiated Virasoro generators.)
As far as I can see Thiemann’s quantization is technically correct (no mathematical errors). So there must be some physical assumption which makes him part company with the usual lore.
I think that it is crucial that he allows himself to work on non-separable Hilbert spaces. His construction of a Hilbert space by applying the GNS theorem to the Weyl algebra of $W$ operators is what
allows the above-mentioned non-standard quantum relations, but it also leads to non-separability of the Hilbert space.
Of the kinematical Hilbert space that is. It is not too surprising that the physical Hilbert space is separable again, because it is obviously much ‘smaller’ in general. But, when comparing his
quantization with the OCQ or BRST quantization (instead of the LCQ, where only the physical Hilbert space appears because the constraints are solved before quantization) we have to look at the
kinematical Hilbert space, because the Hilbert space on which the CFT operators are represented in the usual approach is also kinematical (contains non-physical states). The physical Hilbert space in
the usual approach is that generated by the DDF operators acting on physical massless/tachyonic states.
Of course non-separable Hilbert spaces do appear in practice from time to time, but then we are always dealing with uncountably many superselection sectors, each of which is separable. Thiemann's non-separable Hilbert space (and, by the way, I have just received email from him confirming that the kinematical Hilbert space in his paper is non-separable) does however not separate into superselection sectors each of which would carry a representation of the constraints.
I think there are two alternatives:
1) Either there is a technical, mathematical error in Thiemann’s paper and hence his conclusions are wrong. If you believe that this is the case, that his quantization in particular is flawed, then
please point out where you think the mistake lies.
2) Or the math is correct (which I am pretty convinced it is). In this case we need to talk about whether the assumptions that are made before the crank of the formalism is turned are viable. I am suggesting that the assumption of a non-separable kinematical Hilbert space may be a physically non-viable assumption.
Posted by: Urs Schreiber on January 30, 2004 12:16 PM | Permalink | Reply to this
Re: There’s no place like home
OK, so you (he) claim(s) that there is a quantization in which the commutation relations of $X'(\sigma)$, $\Pi(\sigma)$, $T_{++}(\sigma)$ and $T_{--}(\sigma)$ are carried over from the classical Poisson-bracket algebra, unaltered (i.e., the commutators of the $T$'s do not pick up a central term)?
Certainly, that's not true if the $T$'s lie in the universal enveloping algebra generated by $X'(\sigma)$, $\Pi(\sigma)$, as is conventionally the case.
Posted by: Jacques Distler on January 30, 2004 3:17 PM | Permalink | Reply to this
Re: There’s no place like home
So what is wrong with this?
Posted by: Urs Schreiber on January 30, 2004 3:26 PM | Permalink | Reply to this
Re: There’s no place like home
You mean aside from the fact that none of the symbols are well-defined?
Look, this is elementary stuff.
We can expand everything in Fourier modes. If $T_{++}$ is in the universal enveloping algebra of the Fourier modes of $X'$ and $\Pi$, then its Fourier modes (conventionally called $L_n$) are some expressions quadratic in those modes.
Since the Fourier modes of $X'$ and $\Pi$ (the "oscillators") don't commute, you need to specify an ordering. I don't care what ordering you choose, but I insist that you choose one.
Now compute the commutator of two $L_n$'s. Again, you will obtain something which is at most quadratic in oscillators (there will, in general, also be a piece $0^{\mathrm{th}}$-order in oscillators). And it must be re-ordered to agree with your original definition of the $L_n$'s.
Carrying out this computation, you obtain the central term in the Virasoro algebra, and I believe that it is a theorem that the result is independent of what ordering you chose for the $L_n$'s.
Note that I never mentioned what Hilbert space I hope to represent these operators on. So I don’t see where its separability (or lack thereof) enters into the considerations.
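This computation is mechanical enough to check on a machine. The following is a minimal sketch (my own, for a single free boson, so $c=1$, at zero momentum; the state representation and all names are ad hoc): build the normal-ordered $L_n$ out of oscillators with $[\alpha_m,\alpha_n] = m\,\delta_{m+n,0}$ and read off $\langle 0\mid [L_n, L_{-n}]\mid 0\rangle$, which should come out to $\frac{1}{12}n(n^2-1)$.

```python
# Vacuum expectation value of [L_n, L_{-n}] for one free boson (c = 1, p = 0).
# States are dicts {occupations: amplitude}; occupations is a sorted tuple of
# (mode, count) pairs over the creation modes alpha_{-m}, m > 0.
from collections import defaultdict

def apply_alpha(m, state):
    """Act with the oscillator alpha_m; [alpha_m, alpha_{-m}] = m."""
    out = defaultdict(complex)
    for occ, amp in state.items():
        d = dict(occ)
        if m < 0:                          # creation
            d[-m] = d.get(-m, 0) + 1
            out[tuple(sorted(d.items()))] += amp
        elif m > 0 and d.get(m, 0) > 0:    # annihilation
            factor = m * d[m]              # m from the commutator, d[m] from Leibniz
            d[m] -= 1
            if d[m] == 0:
                del d[m]
            out[tuple(sorted(d.items()))] += amp * factor
        # m == 0: alpha_0 ~ momentum, taken to annihilate our p = 0 vacuum
    return dict(out)

def apply_L(n, state, cutoff=40):
    """Normal-ordered L_n = (1/2) sum_k :alpha_{n-k} alpha_k:, cut off at |k| <= cutoff."""
    out = defaultdict(complex)
    for k in range(-cutoff, cutoff + 1):
        a, b = sorted((n - k, k))          # creation operator to the left
        for occ, amp in apply_alpha(a, apply_alpha(b, state)).items():
            out[occ] += 0.5 * amp
    return dict(out)

vac = {(): 1.0}
for n in range(2, 7):
    comm = apply_L(n, apply_L(-n, vac)).get((), 0) \
         - apply_L(-n, apply_L(n, vac)).get((), 0)
    print(n, comm.real, n*(n**2 - 1)/12)   # the two columns agree
```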
Posted by: Jacques Distler on January 30, 2004 5:02 PM | Permalink | Reply to this
Re: There’s no place like home
You mean aside from the fact that none of the symbols are well-defined?
I don’t see why as an algebra these symbols should not be well defined. All the caveats that I included pertained only to the representation of these things as operators.
[…] you need to specify an ordering. I don’t care what ordering you choose, but I insist that you choose one.
Ok, let me choose the ordering the way it drops out from the Fourier decomposition without reordering:
(1)$L_m = \frac{1}{2}\sum_{k=-\infty}^{\infty} \alpha_{m-k}\alpha_k\,.$
I could open Green, Schwarz & Witten on p. 73, where they derive the classical algebra of this object and check that in going from their (2.1.83) to (2.1.84) there is no re-ordering involved. But let
me write it out here in a different way:
Using the commutator
(2)$[L_m, \alpha_k] = -k\,\alpha_{k+m}$
one gets
(3)$[L_m, L_n] = \frac{1}{2}\sum_k [L_m, \alpha_{n-k}\alpha_k]$
(4)$= \frac{1}{2}\sum_k \big([L_m, \alpha_{n-k}]\alpha_k + \alpha_{n-k}[L_m, \alpha_k]\big)$
(5)$= \frac{1}{2}\sum_k \big((k-n)\alpha_{n+m-k}\alpha_k - k\,\alpha_{n-k}\alpha_{m+k}\big)$
(6)$= \frac{1}{2}\sum_k \big((k-n)\alpha_{n+m-k}\alpha_k + (m-k)\alpha_{n+m-k}\alpha_k\big)$
(7)$= (m-n)\,\frac{1}{2}\sum_k \alpha_{n+m-k}\alpha_k$
There is no reordering involved in this.
I believe that it is a theorem that the result is independent of what ordering you chose for the $L_n$'s.
Do you have a reference to this theorem?
Posted by: Urs Schreiber on January 30, 2004 6:40 PM | Permalink | Reply to this
Re: There’s no place like home
Good God! If you’re going to be that sloppy manipulating divergent quantities, we had better quit discussing this now.
Cut off those infinite sums (i.e., rather than $\sum_{k=-\infty}^{\infty}$, consider $\sum_{k=-N}^{N}$) and try again.
The only $L_n$ with an ordering ambiguity is $L_0$, so it suffices to define an ordering for it. To compute the central term, it suffices to compute the commutator $[L_n, L_{-n}]$.
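[Editor's note: a minimal sketch, not from the thread, of the bookkeeping behind this exercise. Assuming normal-ordered $L_n$ for a single free boson with $\alpha_0 = 0$, the c-number in $\langle 0|[L_m, L_{-m}]|0\rangle$ comes entirely from double contractions and reduces to the finite sum $\frac{1}{2}\sum_{k=1}^{m-1} k(m-k)$, which can be checked symbolically:]

```python
# Hypothetical check (sympy assumed): the double-contraction sum equals
# (m^3 - m)/12, the central term of the Virasoro algebra at c = 1.
import sympy as sp

m, k = sp.symbols('m k', positive=True, integer=True)

central = sp.summation(sp.Rational(1, 2) * k * (m - k), (k, 1, m - 1))
print(sp.simplify(central - (m**3 - m) / 12))  # prints 0
```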
Posted by: Jacques Distler on January 30, 2004 7:15 PM | Permalink | Reply to this
Re: There’s no place like home
Dear Jacques -
you write:
Good God! If you’re going to be that sloppy manipulating divergent quantities, we had better quit discussing this now.
I hope I am not annoying you. I very much appreciate that you take the time to discuss these things with me.
You seem to be very convinced that Thiemann (and I, for that matter) are confused about a very elementary point. As for myself I don't see my mistake yet, but it is of course quite possible that I am subject to misapprehensions. Certainly you don't have infinite time to waste on this - but please be assured that your contributions are very valuable to me and probably to others who are interested in Thomas Thiemann's work.
I believe that if you, or other string theorists, can point out technical mistakes in Thiemann's paper, this will have considerable effect on the Loop Quantum Gravity people in general.
Thiemann’s quantization can be regarded as a testing ground, a laboratory, for the techniques used in LQG. The LQG camp is well known for its high esteem of mathematical rigour and it would be very
important to them to be made aware of a technical mistake. Physical viability of their approach is another matter, but I do expect that they care about the consistent definition of the objects that
they are dealing with.
Thomas Thiemann, as you know, is one of the more prominent people working on LQG, and he has a record of papers with a rather high technical level. I have heard string theorists criticise his papers as being games of math instead of physics. But in any case the claim is that this math is well done. So I bet that he and many others in the LQG field would highly appreciate it if string theorists could spot technical mathematical mistakes in their work.
Because of this I would kindly ask you not to give up on me and my attempts to answer your charges. I may not be the most suitable person for that task and am indeed hoping that somebody more knowledgeable will chime in to help me out. I have contacted Thomas Thiemann by email and he says that next week he'll be back from a conference and willing to discuss his paper. Surely he'll be a better advocate of his work than I am.
That said, let me try to answer your latest comments. Unfortunately, as you will see below, I will still not be able to completely understand your criticism. Please bear with me. Thanks!
You write:
Cut off those infinite sums […] and try again.
Ok. At least I can reassure you that I do understand that cutting off these sums does modify the algebraic relation $[L_n, L_m] = (n-m)L_{n+m}$.
But, alas, I don't see what the cutting off of these sums has to do with the question whether there is a non-commutative algebra whose commutators reproduce the Poisson brackets of the oscillators $a_n$ and the generators $L_n$.
You are saying that I should be more careful with manipulating divergent terms. This puzzles me a little. All I wrote down are infinite sums of products of elements of an infinite-dimensional non-commutative algebra generated by elements $a_n$. Until I talk about representing these as operators on some space there are no numbers which could diverge, I think.
Of course I do understand that if I acted with the $L_0$ generator, with the ordering as given in my previous comment, on a Fock vacuum state which is annihilated by the $a_n$ for positive $n$, the result would be ill defined, because it would formally contain an infinite real number multiplying the Fock vacuum.
But this leads precisely to the idea that I tried to discuss before: The claim by Thiemann is essentially that the noncommutative algebra that I indicated in my previous comment, with the ordering as given there (which is equivalent to the definitions in that other comment), can be represented on a non-separable Hilbert space in such a way that the objects that I manipulated in my last comment have a perfectly well defined action on this Hilbert space, without any divergencies.
This is the crucial claim. It is about operator representations of the abstract non-commutative algebra of the $a_n$ and the $L_n$ (in the ordering indicated before), or equivalently of the smeared $Y(\sigma)$ and $Y(\sigma)Y(\sigma)$, I believe. The claim is essentially that there is an operator representation where the $L_n$ (in the ordering that I have given, or equivalently the $\int_{S^1} d\sigma\,\xi(\sigma)\,Y(\sigma)Y(\sigma)$) are well defined and act without producing divergencies. This applies to the $L_n$ with the sums going from $-\infty$ to $\infty$, because this is what one gets when Fourier-decomposing the $Y(\sigma)$ in $L_n \propto \int d\sigma\, e^{in\sigma}\, Y(\sigma)Y(\sigma)$.
But there is some fine print. Maybe that’s what is at the heart of the matter:
Actually Thiemann does not explicitly construct a representation of the $a_n$ on a Hilbert space such that the above is true. What he does construct is a representation of the exponentiated oscillators $\exp(i a_n)$. That's because these give bounded operators, which is what he needs to apply the GNS construction, as far as I understand.
I am not sure that I fully understand what this implies for the representation of the $a_n$ themselves. At the beginning of section 6.5 it says that
Since the Pohlmeyer Charges $Z_\pm$ involve polynomials of the $Y_\pm$ [$\sim a_n$] rather than polynomials of the $W_\pm$ [$\sim \exp(i a_n)$] it seems that our representation does not support the Quantum Pohlmeyer charges.
I think that he then goes on to show how to resolve this apparent problem. But I am not sure that I fully understand his solution. It seems that he is claiming that, by dealing carefully with the various terms, the Pohlmeyer charges, and hence the $a_n$, are represented on his Hilbert space. Right above his equation (6.41) it says
we write the regulated invariants as polynomials in the $W_\pm(s)$ and then remove the regulator and see whether the result is well-defined and meaningful.
This sounds like problems could be hidden here. Even more so since he does not address the question whether what holds true for the Pohlmeyer charges in this context also holds true for the $\exp(iL_n)$. Maybe this is where the problem lies? I would like to understand this better. If this is a problem then it is at least not a trivial, elementary and obvious problem, is it?
Posted by: Urs Schreiber on January 30, 2004 8:57 PM | Permalink | Reply to this
Re: There’s no place like home
Ok. At least I can reassure you that I do understand that cutting off these sums does modify the algebraic relation $[L_n, L_m] = (n-m)L_{n+m}$.
I am saying more than that. I am telling you that you can derive the value of the central charge by doing this cutoff calculation, and then taking $N\to \infty$.
Do it! It’s a worthwhile exercise.
Actually Thiemann does not explicitly construct a representation of the $a_n$ on a Hilbert space such that the above is true. What he does construct is a representation of the exponentiated oscillators $e^{ia_n}$. That's because these give bounded operators, which is what he needs to apply the GNS construction, as far as I understand.
I thought the claim was that he had found a quantization in which the Virasoro algebra is unextended in the quantum theory. If we are not able to represent the oscillators $a_n$ on the string Hilbert space, then I return to my previous position of having no idea what he is talking about.
I'm curious, though, what he means by the spacetime “graviton” in a theory where he does not know how to represent the $a_n$ on the string Hilbert space.
Posted by: Jacques Distler on January 30, 2004 9:46 PM | Permalink | Reply to this
Re: There’s no place like home
Do it! It’s a worthwhile exercise.
Ok, I’ll do it tomorrow. My girlfriend just called and said I should come home (it’s almost midnight already). :-)
I would still like to understand why the computation without the cutoff should be inconsistent (as long as I don't claim to apply these non-normally-ordered objects to a Fock vacuum).
If we are not able to represent the oscillators $a_n$ on the string Hilbert space, then I return to my previous position of having no idea what he is talking about.
Yeah. I did not clearly realize that this might be a problem before I wrote my previous comment. Actually what Thiemann needs is that the $\exp(iL_n)$ are represented on his Hilbert space, because he demands states to be invariant under the action of $\exp(iL_n)$ for all $n$. I've sent him an email asking him about it. Let's see. Alternatively I could try to figure it out myself - but that might take longer. :-)
Posted by: Urs Schreiber on January 30, 2004 10:31 PM | Permalink | Reply to this
Re: There’s no place like home
Hi Jacques -
this morning I did the calculation that you told me to do. Sorry for having been dense. I now see that when I introduce a regulator and remove it after calculating the commutator I find an anomaly
even if I don’t normal order anything.
Thanks for healing me from this confusion. Before I had been under the impression that the anomaly is entirely due to the normal ordering.
I need to recheck my calculation, though, because the first attempt gave me a prefactor $1/8$ in front of the $m^3$ term. Could it be that this factor depends on the chosen ordering?
Ok, so now I finally get your point about Kansas, etc. ;-). I admit that the problem in Thiemann’s paper is not related to the non-separability of his Hilbert space. Thanks for your help!
Posted by: Urs Schreiber on February 2, 2004 4:18 PM | Permalink | Reply to this
And your little dog too …
I need to recheck my calculation, though, because the first attempt gave me a prefactor 1/8 in front of the m^3 term. Could it be that this factor depends on the chosen ordering?
Nope. That coefficient should be universal. The choice of ordering affects only the non-universal term (the one linear in $m$).
But, computational details aside, I think you can now see the point of my comment above.
If the $L_n$'s are quadratic expressions in the oscillators, then you get a central extension. If you think you don't get one, you've made a mistake.
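[Editor's note: the ordering (in)dependence can be made explicit. At cutoff $N$, any two orderings of $L_0$ differ by a c-number $c_0(N)$, and since $[L_m, L_{-m}] = 2m L_0 + A(m)$, redefining $L_0 \to L_0 + c_0$ shifts the anomaly $A(m)$ by $-2m c_0$, i.e. only its part linear in $m$; the coefficient of $m^3$ is untouched.]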
I will leave it to others to draw whatever conclusions seem warranted about the rest of Thiemann’s paper.
Posted by: Jacques Distler on February 2, 2004 5:21 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
Hopefully Thomas will join this discussion himself. But for now let me say that he tells me that
1) he is aware of the fact (which took me a while to appreciate) that one cannot get a quantization of the $L_m$ without anomaly, no matter which ordering is chosen
2) this does not affect his approach because he defines the action of the operators representing the exponentiated constraints by
(1)$U_\pm(\varphi)\,\pi_\omega\big(W_\pm(s)\big)\,\Omega_\omega = \pi_\omega\Big(\alpha^\pm_\varphi\big(W_\pm(s)\big)\Big)\,\Omega_\omega\,.$
Here $U_\pm(\varphi)$ (in Thomas's paper this is a $\varphi$) is the operator which represents the exponentiated Virasoro element $\exp\big(\sum_n c^n L_n^\pm\big)$, which in turn implements the diffeomorphism $\varphi$ on one half ($+$ or $-$) of the algebra.
$\pi_\omega(W_\pm(s))$ is the operator version of $W_\pm(s)$, which is essentially an exponentiated oscillator.
$\Omega_\omega$ is sort of a vacuum in the GNS Hilbert space (all states are obtained by acting on $\Omega_\omega$ with the $\pi_\omega(W_\pm(s))$).
Thomas says that the oscillators themselves are not represented on his Hilbert space, only the exponentiated oscillators are. Also the Virasoro generators $L_m$ are not represented on the Hilbert space, only the exponentiated $U_\pm(\varphi)$ are, he says.
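[Editor's note: a reminder of the GNS construction assumed throughout: given a $*$-algebra $A$ and a positive linear functional (“state”) $\omega$ on it, the Hilbert space $H_\omega$ is the completion of $A$ modulo the null space of the inner product $\langle a, b\rangle := \omega(a^* b)$, the representation is $\pi_\omega(a)[b] := [ab]$, and the cyclic vector is $\Omega_\omega := [1]$, so that $\omega(a) = \langle\Omega_\omega, \pi_\omega(a)\Omega_\omega\rangle$.]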
My question would be:
Is it ok to just define the action of the $U_\pm(\varphi)$ as above, without writing them out in terms of the canonical coordinates-momenta/oscillators?
If this were ok, couldn't I just do the same in the usual Fock quantization of the string by simply declaring that
(2)$\exp\Big(\sum_n c^n \hat{L}_n\Big)\,\hat{\alpha}_{-m}\,|0\rangle = \widehat{\Big(\exp\big(\{\textstyle\sum_n c^n L_n,\,\cdot\,\}\big)\,\alpha_{-m}\Big)}\,|0\rangle$
(where now $\alpha_{-m}$ is a worldsheet oscillator, hats $\hat{\cdot}$ distinguish Poisson algebra elements from operators, and $\{\cdot,\cdot\}$ is the Poisson bracket)?
Posted by: Urs Schreiber on February 2, 2004 7:28 PM | Permalink | Reply to this
Damn those pesky oscillators!
Is it ok to just define the action of the $U_\pm(\varphi)$ as above, without writing them out in terms of the canonical coordinates-momenta/oscillators?
You can define whatever the heck you want. Above, I made a very important stipulation that the $L_n$'s should be in the universal enveloping algebra generated by the oscillators. If I relax that assumption, I can get the central charge of the Virasoro algebra to be anything I want (including 0) by adding to the $L_n$'s a piece that commutes with the oscillators, and contributes (negatively) to the total central charge.
In the critical bosonic string, that’s what happens when you add in the ghost contribution to the Virasoro generators.
If Thiemann has only the exponentiated oscillators, rather than the oscillators themselves, then he’d better build the Virasoro generators (or their exponentiated versions) out of those, instead.
Otherwise, one has said nothing.
More prosaically, as I said above, if the oscillators are not represented on the String Hilbert Space, I wonder how the heck the graviton (or any other tensor field) is.
Posted by: Jacques Distler on February 2, 2004 8:03 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
Thomas Thiemann has asked me to forward the following email message to the Coffee Table discussion.
In reply to an email by myself he answers (the quoted text is from my original mail):
[begin forwarded text]
let me try to rephrase the objections that have been raised in terms of the following question:
By the logic of your paper, what keeps you from constructing analogues of the operators $U_\pm(\varphi)$ on the standard Fock space of string states?
nothing, however, I am worried that they won't act unitarily because they mix the standard annihilation and creation operators. in other words, the operators $U_\pm$ are defined in the standard Fock rep. of the string but I am not sure whether they define a unitary rep. of the reparameterization group. That's a good question, see below.
There certainly exist these operators (defined by their very action on the states) which represent the conformal group without anomaly on this Fock space. But they are not expressible in terms of the $L_n$. So in which sense could one claim that these $U_\pm(\varphi)$ are obtained from Dirac's quantization scheme, if they are not expressible in terms of the quantized first class constraints?
Good question. Here is the simple answer: Suppose you have a classical phase space and a constraint function $C$ on it which generates infinitesimal gauge transformations as
(1)$\delta_t F = t\{C, F\}$
where $F$ is any function on phase space and $\{\cdot,\cdot\}$ is the Poisson bracket. One can exponentiate this infinitesimal action to the Hamiltonian flow of $C$, given by the automorphisms
(2)$\alpha_t(F) = \exp\big(t\{C,\,\cdot\,\}\big)\cdot F$
The Dirac observables of the system are those functions which are gauge invariant, that is,
(3)$\alpha_t(F) = F \quad \forall t$
In a quantization you want to find a representation $\pi$ of the functions $F$ as operators $\pi(F)$ on a Hilbert space $H$ with a cyclic “vacuum” $\Omega = |0\rangle$. We now define a representation of the one parameter group of automorphisms by
(4)$U(t)\,\pi(F)\,\Omega := \pi\big(\alpha_t(F)\big)\,\Omega$
It follows that $\Omega$ is invariant since $\pi(1) = 1$; it is a physical state. More generally, physical states are defined by the condition
(5)$U(t)\,|\mathrm{phys}\rangle = |\mathrm{phys}\rangle \quad \forall t.$
There is a beautiful interplay between physical states and Dirac observables, because the latter obviously map $\Omega$ to physical states.
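[Editor's note: a toy illustration of this scheme, not from the forwarded email: take phase space $\mathbb{R}^2$ with coordinates $(q,p)$ and the single constraint $C = p$. The flow $\alpha_t$ simply translates $q$, and the Dirac observables are the functions of $p$ alone. On $H = L^2(\mathbb{R})$ the $U(t)$ are the translation operators; the only vectors invariant under all of them are the constant functions, which are not normalizable, so already in this simple case the physical states live outside $H$ and have to be constructed by group averaging or in a rigged Hilbert space.]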
Your question now boils down to asking whether one can define a self-adjoint operator by
(6)$i\pi(C) := \Big[\frac{d}{dt}U(t)\Big]_{t=0}$
First of all this works at best only if the $U(t)$ are unitary operators, as otherwise $\pi(C)$ cannot be self-adjoint. If $\pi(C)$ is not self-adjoint you have violated a basic quantization principle, namely that real valued functions $C$ should be represented as self-adjoint operators. The necessary and sufficient criterion for $U(t)$ to be unitary is to check whether the functional
(7)$\omega(F) := \langle 0|\,\pi(F)\,|0\rangle$
is $\alpha_t$-invariant, that is
(8)$\omega\circ\alpha_t = \omega.$
If that is the case, Stone's theorem of functional analysis says that $\pi(C)$ exists if and only if the one parameter unitary group $t\mapsto U(t)$ is weakly continuous, that is
(9)$\lim_{t\to 0}\langle\psi, U(t)\psi'\rangle = \langle\psi, \psi'\rangle \quad \forall\,\psi,\psi'\in H$
In my representation this condition is violated. However, this is unimportant, because obviously $U(t)$ is a bona fide quantization of $\alpha_t$, and secondly one can use the $U(t)$ in order to define physical states; the $\pi(C)$ are not needed for that.
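[Editor's note: a standard example of this phenomenon, for readers who have not seen it: on the non-separable Hilbert space $\ell^2(\mathbb{R})$ with orthonormal basis $\{e_x\}_{x\in\mathbb{R}}$, define $U(t)\,e_x := e_{x+t}$. Every $U(t)$ is unitary, but $\langle e_x, U(t)e_x\rangle$ equals $1$ at $t=0$ and $0$ otherwise, so $t\mapsto U(t)$ is not weakly continuous and Stone's theorem yields no self-adjoint generator. This is the mechanism at work in the representations discussed here.]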
Notice that all of this is standard knowledge in constraint quantization, it is not my invention. The beauty of the construction is that you get everything for free once you have a positive linear
functional $\omega$. The functional of standard string theory is positive only in $D=26$ and $a=1$, so you need the tachyon there. It is a good exercise to check whether that functional is invariant
under the Virasoro group or only under the algebra. I have not done that calculation yet.
A related question comes to mind: In the LQG quantization of 1+3d gravity, is the representation of the constraints there similar to those in the ‘LQG-string’?
yes and no. the spatial diffeomorphism group is represented very much in analogy to the diffeomorphism group of the circle for the lqg string. The Hamiltonian constraint is represented in the form $\pi(C)$.
Hope that helps. Best,
PS: Maybe you can upload this to the coffee table, I somehow can’t make this work.
[end forwarded text]
Posted by: Urs Schreiber on February 3, 2004 5:06 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
Hi Thomas -
you write:
$U(t)$ is a bona fide quantization of $\alpha_t$ and secondly one can use the $U(t)$ in order to define physical states, the $\pi(C)$ are not needed for that
Apparently this is the crucial point that is controversial.
I’d think that Dirac quantization forces us to impose
(1)$\langle\mathrm{phys}|\,\pi(C)\,|\mathrm{phys}\rangle = 0\,.$
Among other reasons, this is what you get from path-integral approaches.
With respect to the LQG-constraints of 1+3d gravity you write
the spatial diffeomorphism group is represented very much in analogy to the diffeomorphism group of the circle for the lqg string. The Hamiltonian constraint is represented in the form $\pi(C)$.
What is the rationale behind this difference? Is it that for the spatial diffeos you don't have the weak continuity condition that you were referring to, while for the temporal diffeos you do?
Many thanks for all your answers!
Posted by: Urs Schreiber on February 3, 2004 5:21 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
Dear Urs,
I understand that the question whether constraint quantization can be done with the group or the algebra is controversial. The uneasy feeling may come from your experience with Fock spaces, of which perturbative path integral quantization is just another version. In those representations one usually deals with the algebra; however, notice that one can work as well with the group. So you question my procedure by using an example where both approaches work. I would say that there is no evidence for concern.
For instance in LQG we have a similar phenomenon with respect to the spatial diffeomorphism group. We can only quantize the group, not its algebra. Yet the solution space consists of states which are supported on generalized knot classes, which sounds completely right. There are other examples where the group treatment, also known as group averaging or refined algebraic quantization, produces precisely the correct answer. See for instance [23] and references therein.
Coming to your second question, the reason for the unsymmetrical treatment of Hamiltonian and spatial diffeomorphism constraint in Loop Quantum Gravity is that the constraint algebra of GR is much more complicated than for the string. The temporal diffeomorphisms are treated differently than the spatial ones because the Hamiltonian constraint, also known as Wheeler-DeWitt constraint, in contrast to the spatial diffeomorphism constraint, is not a quadratic expression in the basic variables.
One can show that in the LQG representation that we use, also called Ashtekar-Isham-Lewandowski representation, we indeed get a unitary rep. of the spatial diffeo group which, like for the particular rep. of the LQG string that I studied, is not weakly continuous. On the other hand, the Hamiltonian constraint can be defined as a self-adjoint operator. So we use a mixture of both procedures to quantize.
Hope that helps,
Posted by: Thomas Thiemann on February 3, 2004 7:22 PM | Permalink | Reply to this
Out, out, damned anomaly!
We now define a representation of the one parameter group of automorphisms by $U(t)\,\pi(F)\,\Omega := \pi(\alpha_t(F))\,\Omega$
In other words, there is no such thing as an anomalous symmetry? The gauge group of the classical theory is by definition promoted to a gauge symmetry of the quantum theory.
Umh, sorry, but things don’t work that way in quantum field theory.
Posted by: Jacques Distler on February 4, 2004 4:00 AM | Permalink | Reply to this
Rather than re-inventing the wheel…
I’d recommend the classic paper,
Alvarez-Gaumé and Nelson, “Hamiltonian Interpretation Of Anomalies,” Commun. Math. Phys. 99 (1985) 103.
for how to understand anomalies from this point of view.
Posted by: Jacques Distler on February 4, 2004 4:21 AM | Permalink | Reply to this
Re: Out, out, damned anomaly!
So may we conclude that the single most problematic technical step in LQG is that the spatial diffeo constraints are not imposed as operator constraints but that instead the classical spatial diffeo
group is, as Jacques says,
by definition promoted to a gauge symmetry of the quantum theory?
Posted by: Urs Schreiber on February 4, 2004 4:11 PM | Permalink | Reply to this
I was making no comment about LQG, merely about the matter at hand — the quantization of the Nambu-Goto string.
I would hope that the LQG formalism would be “smart” enough to know that it is supposed to crash and burn if you attempt to use it to quantize a theory with gravitational anomalies.
But this example (in which the un-centrally-extended $\mathrm{Diff}(S^1)$ is simply postulated to hold in the quantum theory) does not make one sanguine.
In this particular case, the central extension does not prevent one from carrying out the quantization (one merely has to split the constraints, in the conventional fashion). But a formalism that doesn't even notice the existence of the central extension is too naive to be of any use, either here or, presumably, elsewhere.
Posted by: Jacques Distler on February 4, 2004 5:07 PM | Permalink | Reply to this
Re: LQG
Ok, so let’s assume we’d take the classical ADM constraints of 1+3d gravity, perform canonical quantization, introduce a cutoff the way you have taught me to do, compute the quantum commutators and
remove the regulator. Would we find an anomaly?
Posted by: Urs Schreiber on February 4, 2004 5:29 PM | Permalink | Reply to this
Re: LQG
Such a procedure has not a prayer of working because you will find yourself unable to remove the regulator.
Surely that’s well-known to anyone who’s thought seriously about the subject.
Posted by: Jacques Distler on February 4, 2004 5:40 PM | Permalink | Reply to this
Re: LQG
I see. So what do you mean when you say:
I would hope that the LQG formalism would be ‘smart’ enough to know that it is supposed to crash and burn if you attempt to use it to quantize a theory with gravitational anomalies.
If we demand that canonical quantization has to have $\langle\mathrm{phys}|\,\pi(C)\,|\mathrm{phys}\rangle = 0$ and if, according to your last comment, we cannot even in principle make sense of this, then we have to conclude that ‘canonical quantum gravity’ is an oxymoron. So what is it that you hope the LQG formalism to know?
Posted by: Urs Schreiber on February 4, 2004 5:57 PM | Permalink | Reply to this
Just Say No!
I’m really not interested in getting into a discussion of LQG.
If we want to discuss the quantization of the Nambu-Goto string, where everything can be made rigorous and well-defined, then I will be happy to lend any insight I might have.
If this discussion of Thiemann’s paper leads you to some new understanding about what the LQG people are attempting to do, that’s great.
But I, personally, would like to stay on a subject where I know what I’m talking about.
Posted by: Jacques Distler on February 4, 2004 7:44 PM | Permalink | Reply to this
Re: Just Say No!
it seems you guys are getting into a religious discussion about whether it is allowed to quantize the constraints the way I did.
I am a disbeliever of any religion and I have therefore nothing useful to add to this discussion. However, maybe you find it useful to remember that the way I treated the constraints is EXACTLY the same as one quantizes the Poincaré group of ordinary QFT.
Posted by: Thomas Thiemann on February 5, 2004 12:34 PM | Permalink | Reply to this
Faith-based calculation
it seems you guys are getting into a religious discussion about whether it is allowed to quantize the constraints the way I did.
The existence or nonexistence of anomalies is not a matter of religious faith.
In the absence of anomalies, any old slapdash, illegitimate set of manipulations will obtain the “right answer.” In the presence of anomalies (as Urs discovered above), one needs to be more careful.
From your description, your method can be used to promote any gauge symmetry of the classical theory to a gauge symmetry of the quantum theory. The symmetry in the quantum theory can never be spoiled
by anomalies.
I don’t believe that. And our difference of opinion is not merely a religious one.
Posted by: Jacques Distler on February 5, 2004 1:50 PM | Permalink | Reply to this
Alternative quantization procedure?
Hi Thomas -
you wrote:
it seems you guys are getting into a religious discussion about whether it is allowed to quantize the constraints the way I did.
Let me emphasize that I, for one, am just sitting here trying to understand what is going on.
I am already quite fond of the fact that we apparently managed to pinpoint the very spot at which your approach parts company with the standard quantization of the string. As you can see by looking
through the discussion here, I first thought the difference lies somewhere else.
I believe that I was a little bit misled by your discussion of group averaging. From equations (5.2) and (6.25) of your paper I got the impression that you had found a way to quantize the Virasoro constraints without getting an anomaly. The absence of anomalies is of course the necessary condition for group averaging to be applicable (which is pretty obvious but also confirmed for instance by Giulini and Marolf in their gr-qc/9902045).
I am glad that we could clarify that no group averaging in the sense of using exponentiated constraints is used in your ‘LQG-string’ and that in fact the constraints are not representable on your
Hilbert space.
If I understand you correctly, then you are proposing (and using in LQG) a procedure for quantizing a classical constrained theory which can be summarized as
‘Find an operator representation of the classical symmetry group (which need not be constructible from the quantized first class constraints) and define physical quantum states to be those that are
invariant under the action of these operators.’
I am absolutely no expert on LQG, but I have heard talks by yourself and other people working in LQG and have also looked at parts of your introductory papers on LQG. Unfortunately this, apparently
crucial, fact has so far escaped my attention. I appreciate that we could isolate it as the key difference between your treatment of the string and the standard quantization.
Let me assure you that I have no religious prejudices about what quantization is supposed to be. I am just trying to understand physics. As I wrote in the introduction to this discussion here, there
is a well known saying due to E. Nelson, I believe, who said that “first quantization is a mystery” (of course his point was mainly to emphasize that second quantization is not). As you know, this
means that it is an empirical fact that the world is quantum and that we sometimes have to guess the correct quantum rules from knowledge of the classical limit that we observe.
There are however some prescriptions for how to obtain a quantum theory from an action functional, among them path integral quantization and Dirac quantization of first class constraints. For simple enough systems, at least, like the Nambu-Goto string, these can be shown to lead to the same result.
Now you are saying that nature might be described by a quantum theory which follows neither from the path integral nor from Dirac/Gupta-Bleuler quantization but is radically different from these.
Since so far this alternative approach has been used only to tackle non-perturbative quantum gravity, where nobody has much of a clue what the result is supposed to look like, it seems that the
radical departure of the LQG quantization procedure from standard quantization procedures has not been widely noticed.
Do you agree with this assessment?
There are several proposals for alternative quantization procedures out there. Some people try to modify Schroedinger's equation, others invent things like ‘Event Enhanced Quantum Mechanics’ or the like. Modifications of the quantum principle have been proposed to explain away the information loss problem in black holes, for instance. So in a sense, when it comes to quantum gravity, one might take the viewpoint that everything we know about the world far above the Planck scale is subject to scrutiny again.
Still, all modifications of the standard procedure of quantization should reduce to the well known physics in an appropriate limit, far from the Planck scale. You mention that your quantization of the constraints is precisely that of the Poincaré group in QFT. But the quantized Poincaré generators are not anomalous, so no difference is to be expected here.
But do you have an argument why and how your alternative quantization could reproduce the standard quantization in some limit?
Posted by: Urs Schreiber on February 5, 2004 5:04 PM | Permalink | Reply to this
Re: Alternative quantization procedure?
Dear Urs,
I appreciate very much your interest, thanks a lot.
I agree that weakly discontinuous representations of gauge symmetries are quite unfamiliar in standard QFT. I strongly believe that this is related to the fact that in ordinary QFT we have a background metric to build on, while in LQG or for the string, which is a worldsheet-metric-independent 2d QFT, there is no such structure available. My intuition comes from the fact that without a background metric it is meaningless to say whether two objects are close or far apart. Close or far with respect to which metric, if there is none?
This leads to very discontinuous behaviour. For instance for the LQG string two states are orthogonal if the intervals on which they are supported are not identical, no matter how “little” they differ in one coordinate system, because in another they may be drastically different. Notice, however, that I did not show that this happens for all reps.; there might be more continuous ones.
It is true that the Poincaré group is represented w/o anomaly in standard QFT, as required by the axioms, but maybe one can view the recently discussed deformations of the Poincaré group as an “anomalous” realization.
In any case, look at sections 6.6, 6.7, 6.8, where I show that one can construct coherent states for the LQG string such that the $W_\pm$ and hence the $Z_\pm$ have expectation values as close to the classical ones as you want, so that semiclassically you get the correct limit.
Hope that helps,
Hope that helps,
Posted by: Thomas Thiemann on February 5, 2004 11:25 PM | Permalink | Reply to this
The Virasoro anomaly from regulated generators
Since I have been asked by others about it, and generally for the record, I'd like to present the calculation, recommended by Jacques Distler as a worthwhile exercise and of some importance for the present discussion, of the Virasoro anomaly by means of regulated generators. In retrospect all this is pretty obvious and I am sufficiently ashamed to have been confused about it, but that's how it is.
Everybody knows the various derivations of the anomaly as done in GSW and Polchinski. Here I am going to discuss what could be called the canonical, functional perspective, because this is a perspective that might confuse one into missing the anomaly - as I have unfortunately demonstrated.
So let there be a canonical coordinate field $X(\sigma)$ on the circle, with canonical momentum $\pi(\sigma) = -i\frac{\delta}{\delta X(\sigma)}$, such that
(1)$[X(\sigma), \pi(\sigma')] = i\delta(\sigma,\sigma')\,.$
From these the ‘chiral’ field
(2)$Y(\sigma) := \frac{1}{\sqrt{2}}\Big(i\frac{\delta}{\delta X(\sigma)} + X'(\sigma)\Big)$
is constructed, which has the commutator
(3)$[Y(\sigma), Y(\sigma')] = -i\delta'(\sigma,\sigma')\,.$
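[Editor's note: a one-line check of (3) from (1): with $\pi = -i\frac{\delta}{\delta X}$ one has $Y = \frac{1}{\sqrt{2}}(X' - \pi)$, so $[Y(\sigma), Y(\sigma')] = \frac{1}{2}\big({-[X'(\sigma), \pi(\sigma')]} - [\pi(\sigma), X'(\sigma')]\big) = \frac{1}{2}\big({-i\delta'(\sigma-\sigma')} - i\delta'(\sigma-\sigma')\big) = -i\delta'(\sigma,\sigma')$, using $[X'(\sigma), \pi(\sigma')] = \partial_\sigma\, i\delta(\sigma-\sigma') = i\delta'(\sigma-\sigma')$ and the antisymmetry of $\delta'$.]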
The task is to make sense of the commutator algebra of squared $Y$.
Naively one might write “$\big[\frac{1}{2}Y(\sigma)Y(\sigma), \frac{1}{2}Y(\sigma')Y(\sigma')\big] = -i\delta'(\sigma,\sigma')\big(Y(\sigma)Y(\sigma') + \frac{i}{2}\delta'(\sigma,\sigma')\big)$”, where the second term on the right is due to reordering, is hence classically absent, and is essentially the quantum anomaly - except for the fact that all this is not well defined, since it involves products of distributions.
To deal with these, the fields have to be smeared appropriately or, equivalently, their sums over modes have to be truncated. There are many possible smearings and truncations. After some experimenting, the most convenient one I found is obtained by using
(4)$Y^{(N)}(\sigma) := \frac{1}{\sqrt{2}}\Big(i\frac{\delta}{\delta X(\sigma)} + \int d\sigma'\, f(\sigma-\sigma')\, X'(\sigma')\Big)\,,$
where $f(\sigma-\sigma')$ is an approximation to $\delta(\sigma-\sigma')$:
(5)$f(\sigma-\sigma') := \frac{1}{2\pi}\sum_{k=-N}^{N} e^{ik(\sigma-\sigma')}\,.$
This turns the $Y$-$Y$ commutator into
(6)$[Y^{(N)}(\sigma), Y^{(N)}(\sigma')] = -i f'(\sigma-\sigma')$
and the regulated modes of the Virasoro generators $L_m^{(N)}$ are simply
(7)$L_m^{(N)} = \int d\sigma\, e^{-im\sigma}\,\frac{1}{2}\, Y^{(N)}(\sigma)\, Y^{(N)}(\sigma)\,.$
The commutators in question are now computed formally, just as for the ill-defined expression mentioned above:
(8)$\big[L_m^{(N)}, L_{-m}^{(N)}\big] = \int d\sigma\, d\kappa\,\Big({-i}\, e^{-im(\sigma-\kappa)}\, f'(\sigma-\kappa)\, Y^{(N)}(\sigma)\, Y^{(N)}(\kappa) + \frac{1}{2}\, e^{-im(\sigma-\kappa)}\, f'(\sigma-\kappa)\, f'(\sigma-\kappa)\Big)\,.$
The second term in the integral, which is again due to the reordering, should essentially give the sought-after anomaly. Indeed, it can easily be evaluated explicitly, which yields
(9)$\frac{1}{2}\int d\sigma\, d\kappa\, e^{-im(\sigma-\kappa)}\, f'(\sigma-\kappa)\, f'(\sigma-\kappa) = \frac{1}{12}\big(m^3 - m\big) - m\,\frac{N(N+1)}{2} + F(N)\,,$
where $F(N)$ is a polynomial in $N$ which is independent of $m$. The first term is the standard anomaly.
The second is a shift that can be re-absorbed into the definition of $L_0^{(N)}$:
(11)$\tilde{L}_m^{(N)} := L_m^{(N)} - \delta_{m,0}\,\frac{1}{2}\,\frac{N(N+1)}{2}$
so that the non-normal-ordered $\tilde{L}_0^{(N)}$ has finite expectation value in the Fock vacuum.
The term $F(N)$ would diverge when the regulator is removed (when $N$ is sent to infinity). It should somehow cancel. To see that, we need to look at the first term of the above $\big[L_m^{(N)}, L_{-m}^{(N)}\big]$. From the Jacobi identity it follows that the anomaly can contain only $m^3$ and $m^1$ terms. Therefore it makes sense to look at terms containing different powers of $m$:
(12)$-i\int d\sigma\, d\kappa\, e^{-im(\sigma-\kappa)}\, f'(\sigma-\kappa)\, Y^{(N)}(\sigma)\, Y^{(N)}(\kappa) = -i\int d\sigma\, d\kappa\,\big(1 - im(\sigma-\kappa) + \cdots\big)\, f'(\sigma-\kappa)\, Y^{(N)}(\sigma)\, Y^{(N)}(\kappa)\,.$
For even powers of $m$ the coefficient in front of the $Y$'s is an odd function of $\sigma-\kappa$. This means that for even powers of $m$ we may replace $Y^{(N)}(\sigma)Y^{(N)}(\kappa)$ in the integrand with $\frac{1}{2}\big[Y^{(N)}(\sigma), Y^{(N)}(\kappa)\big] = -\frac{i}{2} f'(\sigma-\kappa)$.
For $m^0$ this yields
(13)$-\frac{1}{2}\int d\sigma\, d\kappa\, f'(\sigma-\kappa)\, f'(\sigma-\kappa) = -F(N)\,,$
which indeed precisely cancels the previously found, potentially diverging term $F(N)$. From the Jacobi identity it now follows that all other even powers of $m$, which could give c-numbers, will disappear. Since we are implicitly using symmetric ordering (no (normal-)reordering in the Fourier decomposition), the odd powers of $m$ don't give rise to further c-number terms, either.
This means that the regulator can now be removed, which finally yields
(14)$\lim_{N\to\infty}\big[\tilde{L}_m^{(N)}, \tilde{L}_{-m}^{(N)}\big] = 2m\,\tilde{L}_0^{(\infty)} + \frac{1}{12}\big(m^3 - m\big)\,.$
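[Editor's note: a minimal sympy sketch, not part of the original comment, checking the claimed identity (9). In modes, $f'$ has Fourier coefficients $ik$ for $|k|\le N$, so the double integral reduces to the finite sum $-\frac{1}{2}\sum_k k(m-k)$ over those $k$ with both $k$ and $m-k$ in $[-N, N]$; the closed form $F(N) = N(N+1)(2N+1)/6$ is read off from the $m=0$ case.]

```python
# Hypothetical verification of equation (9), assuming sympy.
import sympy as sp

def cnumber(m, N):
    # mode sum for (1/2) * Int dsigma dkappa e^{-im(sigma-kappa)} f'(sigma-kappa)^2:
    # the pair of modes (k, m-k) survives only if both lie inside the cutoff.
    return -sp.Rational(1, 2) * sum(
        k * (m - k) for k in range(-N, N + 1) if -N <= m - k <= N)

N = 6
F = sp.Rational(N * (N + 1) * (2 * N + 1), 6)
for m in range(0, 2 * N + 1):
    predicted = sp.Rational(m**3 - m, 12) - sp.Rational(m * N * (N + 1), 2) + F
    assert cnumber(m, N) == predicted
print("equation (9) verified for N =", N)
```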
Posted by: Urs Schreiber on February 5, 2004 7:16 PM | Permalink | Reply to this
Re: The Virasoro anomaly from regulated generators
Dear Urs,
thanks for the calculation.
I guess this is to show that the Virasoro anomaly is unavoidable and unique. I insist that this conclusion is wrong, since it is representation dependent.
Namely, to represent the momentum as $\pi(x) = i\delta/\delta X(x)$ makes an assumption about the representation: Since $\pi$ must be self-adjoint, your assumption is that the Hilbert space is $L_2([dX])$. Leaving aside the fact that the Lebesgue measure $[dX]$ does not exist in infinite dimensions, suppose that I choose a different rep. $L_2(d\mu)$ where $\mu$ is a different measure. Unless the two measures are mutually absolutely continuous, the two reps. are not unitarily equivalent and the value of the anomaly will change.
Maybe there is an additional argument that one must use, but the above calculation is not sufficient to show uniqueness of the anomaly.
Maybe I am missing something. Looking forward to your answer,
Posted by: Thomas Thiemann on February 5, 2004 11:40 PM | Permalink | Reply to this
Wishful thinking
Urs never used the expression $i\frac{\delta}{\delta X(\sigma)}$. He was just being unnecessarily fancy. What he did use was that $\pi(\sigma)$ obeys the canonical commutation relations
(1)$[X(\sigma), \pi(\sigma')] = i\delta(\sigma, \sigma')$
From that, he constructed properly-regularized versions of the Virasoro generators, and computed their commutators.
Never was the Hilbert space mentioned, nor were any Hermiticity assumptions made.
This is really important to understand.
You can experiment with other smearing functions all you want. You will never succeed in making the anomaly go away.
More generally, do you still contend that all anomalies (including this one) can be defined away by royal fiat?
Posted by: Jacques Distler on February 6, 2004 1:13 AM | Permalink | Reply to this
Re: Wishful thinking
Dear Jacques,
thanks for clarifying this. If really only the CCR's are used, then I believe the result, and this is probably the no-go theorem that we were talking about.
However, I was really never doubting this. What I am saying is that it is not necessary to define the $L_m$; you can live with the finite transformations in order to construct the physical Hilbert space. The finite transformations can be quantized w/o anomalies, basically because one can quantize w/o going back to the $L_m$. In other words, I am quantizing the group rather than the Lie algebra. There are standard techniques in QFT for doing that, and this is what I have done.
You may have an intuition against such a procedure, but please notice that this is just how one quantizes the Poincaré group in QFT. I would like to know what your intuition is, so that I can better understand where the discussion is going.
Posted by: Thomas Thiemann on February 6, 2004 1:23 PM | Permalink | Reply to this
Here, there, … everywhere
I can assure you that the group, too, receives a central extension (look up the phrase, “Schwarzian derivative”).
However, the computation is bound to be more involved, because defining the properly-regulated group elements is more subtle.
In any case, the result is a foregone conclusion, because if you take group elements infinitesimally-close to the identity, you will just reproduce the Lie-algebra computation.
Earlier, you claimed refuge in the thought that the Lie-algebra elements that would be obtained would not be self-adjoint. But, as you can see, no assumptions about self-adjointness were made in this computation.
The Poincaré group (and, indeed, its Lie algebra too) does not receive any corrections, because there are no ordering ambiguities in defining the generators, so the computation can proceed “naively”.
Not so, for instance, if you go to light-cone gauge. In light-cone gauge, the Lorentz generators $M^{i-}$ (here, I label the coordinates $X^\pm = \frac{1}{\sqrt{2}}(X^0 \pm X^1)$ and $X^i$, $i = 2, \ldots, d-1$) have ordering ambiguities and need to be smeared. When you compute the commutator $[M^{i-}, M^{j-}]$, you get an additional term, showing that for general $d$, the Poincaré algebra does not survive quantization in light-cone gauge.
The computation is very similar to the one Urs just did for you. I commend it highly as a very illuminating exercise to do.
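[Editor's note: for the record, the outcome of that exercise as recalled from the standard texts (e.g. GSW, section 2.3); quoted here, not re-derived: in light-cone gauge one finds $[M^{i-}, M^{j-}] = -\frac{1}{(p^+)^2}\sum_{m=1}^{\infty}\Delta_m\big(\alpha_{-m}^i\alpha_m^j - \alpha_{-m}^j\alpha_m^i\big)$ with $\Delta_m = m\,\frac{26-D}{12} + \frac{1}{m}\Big(\frac{D-26}{12} + 2(1-a)\Big)$, which vanishes precisely for $D = 26$ and $a = 1$.]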
Posted by: Jacques Distler on February 6, 2004 2:06 PM | Permalink | Reply to this
Rules of the game
I must say, I am not 100% convinced by Urs' argument. What I think one needs to show would be the following: Start with the classical Poisson algebra generated by the $a_n$'s, that is, all functions (polynomials or some power series or something like that) of the $a_n$. This is the classical algebra $A$. Let's pretend that the $L_n$ live in this algebra (there might be a problem with the infinite sums. I am willing to ignore that, but if one introduces some regulator one has to show that the result does not depend on that choice).
Now we quantize $A$, that is we find a linear map
(1)$q: A \to \mathrm{Op}(H)$
where $\mathrm{Op}(H)$ are operators on a Hilbert space (not necessarily bounded) with the property that for $f, g \in A$ we have
(2)$q(\{f,g\}) = i\hbar\,[q(f), q(g)] + O(\hbar^2).$
We further require the representation to be irreducible, that is, it shouldn't have any invariant subspaces (this should say that we are quantizing only the $a_n$'s and nothing else). And then I would like to see that this implies the central extension.
Note that I don't want to assume a highest weight representation! One example of the above is that $q$ is normal ordering, but the claim would be that the central charge is independent of the choice of $q$. Can you show that while being careful with the $q$'s and the terms of higher order in $\hbar$?
Posted by: Robert on February 10, 2004 6:03 PM | Permalink | Reply to this
Re: Rules of the game
Hi Robert,
assume I introduce a regulator and work with finite sums only, the way I have sketched above. Fixing Weyl ordering, I get
(1)$\big[\tilde{L}_m^{(N)}, \tilde{L}_{-m}^{(N)}\big] = 2m\,\tilde{L}_0^{(N)} + \frac{1}{12}\big(m^3 - m\big) + R(m,N)\,,$ with a remainder $R(m,N)$ that disappears as $N\to\infty$.
But since this is a result for finite sums, I can without any problems (I think) reorder the $a_n$ in this expression. This is for free in the $L_m$ (since we can assume that $m \ne 0$) and gives a term proportional to $m$ for $L_0$. The $m^3$-term is in any case unaffected.
Posted by: Urs Schreiber on February 10, 2004 6:46 PM | Permalink | Reply to this
Re: Rules of the game
I must say, this does not satisfy me. Maybe I am not aware of some basic theorem of quantization, but how do I know that all ambiguities in the quantization of the U(1) algebra come from ordering ambiguities?
Unfortunately, in my previous post the hbar's came out as question marks (at least in my netscape, which keeps warning me that it needs more fonts), but in general the rule “Poisson brackets go to commutators” is only true up to higher order terms in hbar!
Let me give an illustrative, however not convincing, example: Let's quantize the usual Heisenberg algebra of $x$'s and $p$'s (this of course is unique by Stone-von Neumann, but I would like to show the higher order corrections). For definiteness, let us quantize by “normal ordering”, that is, by moving all $p$'s to the right. Then compute for example $\{x^2 p^4, x^4 p^2\} = -12 x^5 p^5$. However, in the quantized theory, compute $[x^2 p^4, x^4 p^2] = x^2 p^4 x^4 p^2 - x^4 p^2 x^2 p^4$ and then restore the ordering on the RHS. To do this you have to do more than one reordering, but the term you would get from the Poisson bracket captures only the term coming from one reordering; the others are higher order in hbar.
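[Editor's note: a hypothetical sympy illustration of this point, with $x$ realized as multiplication and $p$ as $-i\hbar\, d/dx$ acting on monomials $x^n$; the helper names and the test monomial are choices of this sketch, not Robert's:]

```python
# Compare [x^2 p^4, x^4 p^2] with i*hbar times the normal-ordered quantization
# of the Poisson bracket {x^2 p^4, x^4 p^2} = -12 x^5 p^5.
import sympy as sp

x, hbar = sp.symbols('x hbar')
n = sp.symbols('n', positive=True, integer=True)

def op(a, b, f):
    # the normal-ordered operator x^a p^b acting on f(x), with p = -i*hbar*d/dx
    return x**a * (-sp.I * hbar)**b * sp.diff(f, x, b)

f = x**n
commutator = op(2, 4, op(4, 2, f)) - op(4, 2, op(2, 4, f))
leading = sp.I * hbar * (-12) * op(5, 5, f)   # i*hbar * {f, g}, p's to the right

# The mismatch is nonzero: it collects the multiple reorderings, which in
# normal-ordered x^a p^b form carry extra explicit powers of hbar.
print(sp.factor(sp.simplify((commutator - leading) / x**n)))
```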
The upshot is: You cannot assume you know the commutators of your quantum operators (and you use them to compute the central charge above); you only know them up to higher order terms. What we usually do is assume that there are some “elementary” ones, like $x$ and $p$ or $a_n$, and postulate that they don't have higher order corrections, but how do we justify this? We could use other coordinates on the Poisson manifold and then require those not to have higher order corrections.
In the Heisenberg case we know that the quantization is unique up to unitary equivalence. But what about the $U(1)$ current algebra? That looks like many copies of Heisenberg algebras, but there might be functional analysis issues in the infinite tensor product.
Posted by: Robert on February 11, 2004 9:43 AM | Permalink | Reply to this
Re: Rules of the game
After this concrete question I also have a more philosophical one: What exactly do we mean by “Quantum theory Q is a quantization of a classical theory C”? They should have the same symmetries and the same field content (representation data of the symmetries). But that surely isn't enough. In the Lagrangian setting this just determines the gauge groups and the fields, but not the action in any way. It might be that this specifies the action (maybe after getting rid of irrelevant operators), but for example there could be truly marginal ones that we cannot get rid of. But as we all know, the moduli space they generate usually is very different between the quantum theory and the classical theory (in cases where we believe we know how to quantize, like on the lattice, or where we can use enough susy to render some classical reasoning valid in the quantum theory). Just doing an expansion in hbar does not seem like a good idea, especially in strongly coupled theories.
We would believe that the classical theory at least specifies a perturbative theory, and if there is a regime where the coupling is small (UV for QCD) we could require the quantum theory to converge to the perturbation theory, but such a regime is not always available.
So what do we require of somebody that claims he has quantized the string? $U(1)$ current algebra + Vir? Any dynamics?
Posted by: Robert on February 11, 2004 9:59 AM | Permalink | Reply to this
Re: Rules of the game
So what do we require of somebody that claims he has quantized the string?
I think this is precisely the question around which the discussion about Thiemann’s approach revolves. More concretely, in this particular case of a constrained system the question is:
What do we mean by the quantization of a theory with classical (1st class) constraints $C_I = 0$?
There once was a little discussion of this point over at s.p.r.
There Aaron pointed out that the form of the quantum constraints follows from the path integral. I am not sure how we would handle that in the case of the Nambu-Goto action, but I think that if a proposed ‘quantization’ is not derivable by means of path integral techniques, we will hesitate to accept the procedure.
Looking back at this old thread, I was reminded of very relevant comments by Marc Henneaux from pp. 156 of his string theory lecture notes, where he writes
Third question: Conversely, should one not try to use a different representation of the string operators so as to avoid the central charge? Again, it might very well be possible to construct such
a representation and, if so, it is very likely that the resulting quantum theory would be very different from the one explained here. It could be that this yet-to-be-constructed theory would
possess an intrinsic interest of its own […]. Moreover, because that theory would not be based on the use of oscillator variables, it might be more easily extendable to higher-dimensional
objects, such as the membrane. However, to the author’s knowledge, this subject has not been investigated.
He even mentions the relation of this question to the canonical quantization of gravity:
Second question: Is it conceivable that one should somehow weaken the Wheeler-De Witt equations of quantum gravity, as it would be necessary if a (c- or q-number) “central charge” appears in the
constraint “algebra”? Yes it is, but no work along these lines has been done.
Posted by: Urs Schreiber on February 11, 2004 4:11 PM | Permalink | Reply to this
Re: Rules of the game
What we usually do is we assume that there are some ‘elementary’ ones, like $x$ and $p$ or the $a_n$, and postulate that they don’t have higher-order corrections, but how do we justify this?
Ok, now I get your point. Apparently we agree that my little calculation above demonstrates that the anomaly is independent of ordering if the commutator of the $a_n$ is taken to be
(1) $[a_m, a_n] = m\,\delta_{n,-m}\,.$
But your point is that this is already an unnecessarily restrictive assumption, since we might as well have
(2) $[a_m, a_n] = m\,\delta_{n,-m} + \mathcal{O}(\hbar)\,.$
In the Heisenberg case we know that the quantization is unique up to unitary equivalence. But what about the $U(1)$ current algebra? That looks like many copies of the Heisenberg algebra, but there might be functional-analysis issues in the infinite tensor product.
I see. Well, being lazy, I would tend to answer this by again pointing to the fact that in my above calculation only finite sums appear, hence also only finitely many copies of the Heisenberg algebra, and so every possibility is unitarily equivalent to the one I have been discussing.
Probably that is not the answer that you want to see. If you have a solution to this problem which uses the way of reasoning that you are getting at, please let me know.
Posted by: Urs Schreiber on February 11, 2004 11:21 AM | Permalink | Reply to this
Re: Rules of the game
To address Robert’s question, note that the canonical commutation relations for $X'(\sigma)$ and $\pi(\sigma)$ are just a copy of the Heisenberg algebra for each Fourier mode. One can, I think legitimately, take them to hold exactly (with no $O(\hbar^2)$ term).
The anomaly that you compute in the commutator of two Virasoro generators is then a term which is indeed $O(\hbar^2)$. The $O(\hbar)$ term in the commutator is the same as in the classical theory. The leading ($\propto m^3$) piece of the anomaly is $O(\hbar^2)$. But, given the canonical commutation relations, it is completely universal.
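For reference, the centrally extended algebra in question is the standard Virasoro form, $[L_m, L_n] = (m-n)\,L_{m+n} + \frac{c}{12}\,m\,(m^2-1)\,\delta_{m+n,0}$, with $c = D$ for $D$ free worldsheet bosons; the $m^3$ term is the piece that no choice of operator ordering or shift of $L_0$ can remove.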
Posted by: Jacques Distler on February 11, 2004 1:57 PM | Permalink | Reply to this
Choice of ordering
Just to make clear a point that Urs did not dwell on: the different choices of ordering of $L_0$ (Weyl-ordering, as used here, versus normal-ordering, say) differ by an $N$-dependent constant. Thus the shift in $L_0$ that Urs talks about above will differ for different choices of ordering of $L_0$. What is completely universal is the coefficient of $m^3$ in the above anomaly.
Posted by: Jacques Distler on February 6, 2004 12:52 AM | Permalink | Reply to this
Re: The Virasoro anomaly from regulated generators
I wrote:
Therefore it makes sense to look at terms containing different powers of $m$:
Unfortunately this is wrong and makes the entire argument wrong.
Naively one might think that it is enough to count powers of $m$. But there are subtle effects. Since $m$ is an integer, we cannot distinguish $m$ from $m\cos(2\pi m)$, for instance.
I realized that this is a problem when trying to run through the same regulator calculation for the superstring.
Now I am totally at a loss how to properly evaluate
(1) $\lim_{N\to\infty} \int_0^{2\pi} d\sigma\, d\kappa\; e^{-im(\sigma-\kappa)}\, f'(\sigma-\kappa)\, Y(\sigma)\, Y(\kappa)\,.$
Posted by: Urs Schreiber on March 4, 2004 7:28 PM | Permalink | PGP Sig | Reply to this
Anomaly Monopoly
Since this topic is all but beaten to death now, I thought it might be fun to apply Thomas’s methods to a theory people actually care about (nobody gives a crap about the bosonic string).
Classical Yang-Mills theory is dilatation-invariant.
Exercise 1: Construct the generator, $D$, of dilatations in classical Yang-Mills.
The action of dilatations (just like the action of Poincaré) doesn’t quite commute with the Hamiltonian. Rather, the Poisson-bracket of $D$ with the Hamiltonian is proportional to $H$. Equivalently,
the action of a 1-parameter group of dilatations is to rescale the Hamiltonian, $H\to \lambda H$.
Exercise 2: Now, apply Thomas’s procedure to construct the action of this 1-parameter group of dilatations in the quantum theory (“just like Poincaré”, as Thomas would say).
Excellent! Since $H$ rescales under the action of the dilatation group, we have proven that the spectrum of $H$ in the quantum theory is continuous near zero.
In other words, the quantum theory “has no mass-gap.”
Bzzzt! Do not pass GO, do not collect $1 million!
Posted by: Jacques Distler on February 6, 2004 2:59 PM | Permalink | Reply to this
Re: Anomaly Monopoly
I assume that we are now talking about YM on a flat background. If $T_{\mu\nu}$ is the (symmetric) energy-momentum tensor of the Yang-Mills action, the dilatation current is
(2) $D_\mu = T_{\mu\nu}\, x^\nu$
whose divergence is the trace of $T_{\mu\nu}$:
(3) $\partial_\mu D^\mu = T^\mu{}_\mu\,.$
Classically this trace vanishes, but quantum-mechanically scale transformations are associated with the running of the coupling constant, $g \to g + \beta(g)$, so that in the quantum theory
(4) $\partial_\mu D^\mu = \beta(g)\,\frac{2}{g^3}\,F^2\,.$
This is the trace anomaly and should be related to what Jacques is getting at.
My question to Thomas is: It has been argued that, because LQG has the exact EH action at the Planck scale, it cannot possibly have this as an effective action at low energies, due to renormalization. Could it be that this disagreement between what LQG proponents expect to happen and what some other people expect is also due to the fact that there are no quantum anomalies present in the LQG approach?
Posted by: Urs Schreiber on February 6, 2004 7:06 PM | Permalink | Reply to this
Re: Anomaly Monopoly
This is the trace anomaly and should be related to what Jacques is getting at.
Of course they are related. If Thomas can quantize this particular 2D theory without picking up the Virasoro anomaly, he can presumably quantize 4D Yang-Mills without picking up the trace anomaly.
But, to make things easy for Thomas, you really should have written out the stress tensor in canonical form, where $B^k = \epsilon^{ijk}\left(\partial_i A_j + \frac{i}{2}[A_i, A_j]\right)$ is the chromomagnetic field, and $E_i$ is the chromoelectric field*, canonically conjugate to $A_i$. The generator of the dilatation symmetry is, as you said,
(2) $D = \int d^3x\, T_{0\mu}\, x^\mu = tH + \int d^3x\, T_{0i}\, x^i$
and satisfies
(3) $\frac{dD}{dt} = \{D, H\} + \frac{\partial D}{\partial t} = 0\,.$
Promoting $U(\lambda) = e^{-\lambda\{D,\cdot\}}$ to a symmetry of quantum Yang-Mills, one easily finds an example of Dirac’s dictum:
All theorists should first apply their theories to themselves.
* I should point out, just for completeness, that there is no quantum-mechanical obstruction to imposing the Gauss-law constraint $\langle\text{phys}|\, D^i E_i\, |\text{phys}'\rangle = 0$.
Posted by: Jacques Distler on February 6, 2004 9:57 PM | Permalink | Reply to this
Re: Anomaly Monopoly
You can have an anomaly-free rep. of the symmetry and still get drastic quantum corrections at the Planck scale. In LQG, geometrical operators corresponding to the length, area and volume of a curve, surface and region respectively have discrete spectrum, which is a drastic departure from the smooth classical structure. At scales way above the Planck scale things look smooth again, semiclassically.
In all books about renormalization you will find the statement that violating a continuous gauge symmetry (e.g. the local gauge symmetry of YM theory) leads to an inconsistent theory. As the gauge symmetry of the string is continuous as well, it fits the pattern that one should try to represent it without anomaly. One may violate rigid symmetries, such as the chiral symmetry corresponding to the ABJ anomaly. Now the string is curious in the sense that in the usual rep. we do have an anomaly and yet get a consistent theory, at least when adding supersymmetry.
Finally, although I do not know of any rep. for non-Abelian YM theory by itself which supports the Hamiltonian, the coupled YM-EH action can be quantized in the LQG representation and supports the Hamiltonian. That Hamiltonian IS NOT dilatation invariant.
Posted by: Thomas Thiemann on February 7, 2004 3:58 PM | Permalink | Reply to this
Splitting the constraints
Now the string is curious in the sense that in the usual rep. we do have an anomaly and yet get a consistent theory at least when adding supersymmetry.
No, supersymmetry has nothing to do with this issue. Even though the algebra of constraints receives a central extension, it is possible to ensure that the matrix elements of the constraints vanish
between physical states, by imposing the constraints weakly:
(1) $L_n\,|\text{phys}\rangle = 0,\quad n > 0\,;\qquad \langle\text{phys}|\, L_n = 0,\quad n < 0\,;\qquad (L_0 - a)\,|\text{phys}\rangle = \langle\text{phys}|\,(L_0 - a) = 0\,.$
This is hardly an unusual situation. Indeed, in “most” quantum field-theoretic instances of constrained quantization, one can only impose the constraints weakly,
(2) $\langle\text{phys}|\,\pi(C)\,|\text{phys}'\rangle = 0\,.$
It almost never happens that one can impose the stronger version, $\pi(C)\,|\text{phys}\rangle = 0$.
(Which is why your procedure of attempting to impose the exponentiated constraints strongly almost never works in field theory.)
Posted by: Jacques Distler on February 7, 2004 5:42 PM | Permalink | Reply to this
Re: Anomaly Monopoly
Unfortunately you are not correct, because nobody has succeeded in finding a representation in which the Hamiltonian or stress-energy tensor is well-defined for non-Abelian Yang-Mills theory. There is no conclusion.
In any case, whether or not you should have an exact or projective rep. of a symmetry depends on the physical system under study and hence must be decided ultimately by experiment. In case of the string, everything is allowed until we have quantum gravity experiments.
Bzzzzt! Do not even throw the dice.
Posted by: Thomas Thiemann on February 7, 2004 3:41 PM | Permalink | Reply to this
“Easy” problems are not worth doing
Unfortunately you are not correct, because nobody has succeeded in finding a representation in which the Hamiltonian or stress-energy tensor is well-defined for non-Abelian Yang-Mills theory.
There is no technical issue that arises in the quantization of Yang-Mills theory that does not arise in much more difficult form in the quantization of gravity.
Yang Mills theory is an infinitely easier problem.
Indeed, the cutoff theory can be perfectly rigorously defined, say by Lattice Gauge Theory. There is no analytical proof that the Lattice Gauge Theorists can take the cutoff away, and get to the
continuum limit. But there is powerful numerical evidence that they can. Indeed, they can, nowadays, compute the spectrum of glueball masses (verifying, numerically, the existence of a mass gap) to
within a few percent.
In any case, whether or not you should have an exact or projective rep. of a symmetry depends on the physical system under study and hence must be decided ultimately by experiment. In case of the
string everything is allowed until we have quantum gravity experiments.
No, no, and no! It is mathematically inconsistent to assume that the Virasoro constraints do not pick up a central extension in the bosonic string (as Urs proved to you above). It is mathematically
inconsistent to assume that dilatation symmetry is unbroken in quantum Yang-Mills.
And “Anything goes, until we have experimental evidence proving that our mathematically-inconsistent manipulations are ruled out by Mother Nature.” is not a recipe for doing good science. (OK, that
last one was a religious statement, which you are welcome to disagree with.)
Posted by: Jacques Distler on February 7, 2004 6:04 PM | Permalink | Reply to this
Baby & Bathwater
So now, the party line is that Thiemann’s quantization is some clever new method of quantization, completely unrelated to canonical quantization, that no one has thought of before.
This is not only my interpretation, but Thomas Thiemann himself says that the procedure, sketched above, for dealing with the constraints, should be compared to experiment to see if nature favors
it over standard Dirac/Gupta-Bleuler quantization.
It is well-known that if one is willing to abandon locality, one has great latitude to “cancel” the anomalies which arise in local QFT. A charitable interpretation of Thiemann’s procedure is that it corresponds precisely to such a nonlocal modification of local field theory.
There are reasons to reject nonlocal modification of the worldsheet theory of the bosonic string — to do with getting consistent string interaction, a problem on which Thiemann is clueless, as he
has, at best, made a failed attempt to construct the free bosonic string.
However, it is quite clear why Thiemann does not wish to apply his methods to Quantum Field Theories people care about, like Yang-Mills Theory. There, we know quite clearly whose side Mother Nature
has come down on.
Posted by: Jacques Distler on February 12, 2004 4:29 PM | Permalink | Reply to this
Re: Baby & Bathwater
Demian Cho has kindly pointed me to a paper that seems to be (indeed, claims to be) relevant to the issues that we have been discussing here:
A. Ashtekar, S. Fairhurst and J. Willis, Quantum gravity, shadow states and quantum mechanics, 2002
I have so far only read the first dozen pages or so, unfortunately. In the introduction it says that the purpose of the paper is to discuss the peculiarities of LQG-inspired quantization in the framework of a very simple toy example: the single nonrelativistic particle in one dimension.
Unfortunately this is not a theory with constraints, so I am worried that maybe a key aspect of our previous discussion is not dealt with in this paper. Nevertheless, many technical aspects clearly
are as in Thomas Thiemann’s paper.
With only ten pages read I cannot yet comment on more than the following point:
On p. 8 the well-known fact is emphasized that there are inequivalent representations of the Weyl-Heisenberg algebra, some of which are weakly continuous and give the usual quantum theory, while others are not and hence give the LQG-like quantum theory (called the polymer representation in that paper). Just for reference, let me note that in the weakly continuous case the Weyl-Heisenberg algebra is that generated by the exponential operators $U(a) = \exp(ia\hat x)$ and $V(a) = \exp(ia\hat p)$.
Now one point of the LQG approach is that there are representations of this algebra which are not weakly continuous (which means that matrix elements of the operators are not continuous in the parameter $a$). This implies that, for instance, $V(a)$ exists as an operator, but the exponent $\hat p$ does not.
Recall that this was a crucial aspect of Thomas Thiemann’s quantization of the string: The worldsheet oscillators and Virasoro constraints themselves were not represented on the Hilbert space, only
their classical exponentiations were. This was the very reason why no anomaly appeared, since the classically exponentiated constraints were promoted to operators directly. This was also the point of
the main criticism of Thomas Thiemann’s quantization, so let’s see what the above paper has to say about this point.
To see this clearly, however, I’ll have to read the rest of the paper, which in my case will have to wait until tomorrow. Let me only note that in the introduction an interesting hint is given:
There it says that the non-weakly-continuous representation does indeed describe physics fundamentally very different from the ordinary one. But it also says that the non-standard representation (the ‘classical exponentiation’) can reproduce the usual results pertaining to the usual Schroedinger quantization in some limit (or so), using a concept called ‘shadow states’.
Let me quote from the first paragraph on p.4:
At the mathematical level, the two descriptions [ordinary quantization and non-weakly-continuous representation] are quite distinct and, indeed, appear to be disparate. Yet, we will show that states in the standard Schroedinger Hilbert space define elements of the analog of $\mathrm{Cyl}^*$. As in quantum geometry, the polymer particle $\mathrm{Cyl}^*$ does not admit a natural inner product. Nonetheless, as indicated in [1], we can extract the relevant physics from elements of $\mathrm{Cyl}^*$ by examining their shadows, which belong to the polymer particle Hilbert space $\mathcal{H}_{\mathrm{Poly}}$. This physics is indistinguishable from that contained in Schroedinger quantum mechanics in its domain of applicability.
This really makes me curious, because it sounds like there might be a way to obtain the usual quantization of the string from Thomas Thiemann’s quantization, maybe in some limit or something.
If anyone (Demian Cho, Joshua Willis, Thomas Thiemann, for instance) can explain this, I would be very grateful.
Posted by: Urs Schreiber on February 16, 2004 9:16 PM | Permalink | Reply to this
Re: Baby & Bathwater
This morning I continued reading A. Ashtekar, S. Fairhurst and J. Willis, Quantum gravity, shadow states and quantum mechanics, 2002.
According to Demian Cho’s comments here, this paper should contain the key peculiarities of LQG-like quantization in a tractable toy example.
The basic idea is to see what happens when the correspondence principle of elementary quantum mechanics is violated. Taking the risk of boring everybody, let me recall that this principle says, in its most naive form, that classical canonical coordinates and momenta are promoted to self-adjoint operators on some Hilbert space in the quantum theory, with CCR commutator $[\hat x, \hat p] = i\hbar$.
One observes that this commutation relation may be exponentiated by defining $\hat U(a) = \exp(-ia\hat p/\hbar)$ and $\hat V(a) = \exp(-ia\hat x/\hbar)$, which gives the Weyl form of the CCR:
$\hat U(a)\,\hat V(b) = e^{iab/\hbar}\,\hat V(b)\,\hat U(a)\,.$
One central idea of the LQG-like quantization is to modify the correspondence principle to the effect that instead of demanding $\hat x$ and $\hat p$ to be operators on some Hilbert space satisfying the CCR, one demands $\hat U(a)$ and $\hat V(a)$ to be operators on some Hilbert space and that the Weyl form of the CCR holds.
The crucial point is that there are representations of the Weyl algebra, namely those which are not weakly continuous (expectation values of $\hat V(a)$ and $\hat U(a)$ are not continuous in $a$), which are not unitarily equivalent to the one obtained by exponentiating operators $\hat x$ and $\hat p$. LQG-like quantization wants to work with these ‘exotic’ representations of the Weyl algebra; that’s the program.
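For concreteness: by Stone’s theorem a weakly continuous one-parameter unitary group has a self-adjoint generator, recovered as $\hat p\,\psi = i\hbar\,\lim_{a\to 0}\frac{(\hat U(a)-1)\,\psi}{a}$, and it is exactly this limit that fails to exist in the non-weakly-continuous representations. That is the precise sense in which $\hat U(a)$ can exist while $\hat p$ does not.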
To my mind this program has the following problem, which, in different guises, has been discussed here a lot already: The problem is that classical equations of motion, classical constraints, the Schroedinger equation, etc., are usually expressed in terms of $x$ and $p$. But now not both of these are available as operators $\hat x$ and $\hat p$. So what is the quantization prescription then? Are we to do the exponentiation classically, in the Poisson algebra, and then promote the result to an operator? By which rules do we specify the commutation relations of the resulting operators? Again by the classical theory?
I was hoping to find an answer to this important question in the above-mentioned paper. It is page 14 of that paper where an aspect of this question is discussed. There, the task is to find a Weyl-algebra analog of the definition of coherent states
(2) $\hat a\,|\psi_\zeta\rangle = \zeta\,|\psi_\zeta\rangle\,,$
where $\hat a$ is the lowering operator of ordinary 1d nonrelativistic QM. In this form this equation is not available in LQG-like QM, because there $\hat p$, which enters the definition of $\hat a$, is not defined. Therefore one has to find an exponentiated version of this equation.
Now comes the interesting point: The exponentiated version of the above equation in this paper is modeled after the respective exponentiated equation in the usual Schroedinger quantization, using the Baker-Campbell-Hausdorff formula for the ordinary CCR algebra operators! (Please see page 14 of the paper for details.)
If this does not sound surprising, recall how a similar step was done in Thomas Thiemann’s paper: There the question was how to represent the Virasoro constraints in exponentiated form, since the constraints themselves were not represented as operators in Thomas Thiemann’s LQG-like quantization of the string. Following the above paper by Ashtekar, Fairhurst and Willis one might have expected this to be modeled after the usual quantum theory, which, as Jacques Distler has emphasized, would still see the anomaly, of course. Instead, what Thomas Thiemann does in his paper is to use the classical Poisson algebra of the exponentiated Virasoro constraints and represent this in terms of operators on some Hilbert space.
It seems to me that by modifying the usual correspondence principle (which incidentally also means giving up the path integral) one arrives at a proposal for a new form of quantization which is not uniquely well-defined. Of course a similar statement is true for the standard form of Schroedinger-like quantization, where ordering ambiguities in expressions like $xp$ have to be dealt with. But the ambiguity in the LQG-like quantization seems to be much more severe.
If in Thomas Thiemann’s paper one were to follow the prescription indicated on page 14 of the Ashtekar, Fairhurst & Willis paper, one would find the anomaly. If one instead uses the classical algebra, one misses it.
How are we supposed to deal with the ambiguities that arise as soon as the usual correspondence principle based on the CCR is replaced by one based on the Weyl-form of the CCR?
One way is indicated by Ashtekar, Fairhurst & Willis in their simple QM example: If one takes care that the usual relations are correctly translated to the new formalism (as is done on their page 14), then one finds essentially the same results as in the usual quantum theory. In this case, however, one might wonder what the LQG-like formalism buys us.
The other way is to model the Weyl-CCR operator relations after the classical algebra, as done by Thomas Thiemann for the LQG string and in general in LQG for the spatial diffeomorphism constraints of gravity. This approach, however, has very little in common with what one usually calls ‘quantization’. And it is also doubtful, I think, that an argument as in Ashtekar, Fairhurst & Willis can recover the usual theory in this case.
Posted by: Urs Schreiber on February 17, 2004 11:32 AM | Permalink | Reply to this
Re: Baby & Bathwater
Following the above paper by Ashtekar, Fairhurst and Willis one might have expected this to be modeled after the usual quantum theory, which, as Jacques Distler has emphasized, would still see the anomaly, of course. Instead, what Thomas Thiemann does in his paper is to use the classical Poisson algebra of the exponentiated Virasoro constraints and represent this in terms of operators on some Hilbert space.
Is that even consistent with “quantizing” the Weyl form of the CCR’s? Are the (exponentiated) Virasoro constraints even expressible in terms of the $\hat U(a)$’s and $\hat V(a)$’s?
Another, unrelated, question is just how non-weakly-continuous a representation one can abide. In the presence of several $x$’s and $p$’s, one should generalize the above equation to
(1) $\hat U(\vec a)\,\hat V(\vec b) = e^{i\vec a\cdot\vec b/\hbar}\,\hat V(\vec b)\,\hat U(\vec a)$
and demand that there exist a rotation operator $\hat W(R)$ such that
(2) $\hat W(R)\,\hat U(\vec a)\,\hat W(R)^{-1} = \hat U(R\vec a)\,,\qquad \hat W(R)\,\hat V(\vec a)\,\hat W(R)^{-1} = \hat V(R\vec a)\,.$
The Poincaré group is generated by $\hat W$ and $\hat U$. Surely, for there to be any sense to this, we must demand that the Poincaré group is represented weakly-continuously.
Is it possible to get a non-weakly-continuous representation of the Weyl relation above, while still having Poincaré represented weakly-continuously?
Posted by: Jacques Distler on February 17, 2004 3:40 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
K.-H. Rehren, who has worked on Pohlmeyer invariants, was so kind as to answer my mail. I have posted the reply here.
Among many other things he says:
Hence implementation of the constraints with c=0 is possible.
Posted by: Urs Schreiber on February 17, 2004 2:38 PM | Permalink | Reply to this
Re: Thiemann’s quantization of the Nambu-Goto action
The discussion seems to have moved to sci.physics.research.
Posted by: Urs Schreiber on March 12, 2004 4:54 PM | Permalink | PGP Sig | Reply to this
number of handshakes in a party
March 29th 2010, 08:33 AM
number of handshakes in a party
A collection of $n$ couples attend a party at which a number
of people shake hands. Suppose that no pair shake hands more than once,
and no one shakes hands with her partner. At the end of the evening, the
host asks each of the $2n-1$ other people how many hands they shook. She
receives $2n - 1$ different answers. How many hands did the host shake?
How many did her partner shake?
March 30th 2010, 09:53 PM
A collection of $n$ couples attend a party at which a number
of people shake hands. Suppose that no pair shake hands more than once,
and no one shakes hands with her partner. At the end of the evening, the
host asks each of the $2n-1$ other people how many hands they shook. She
receives $2n - 1$ different answers. How many hands did the host shake?
How many did her partner shake?
Without even trying too hard, the answer is: use the pigeonhole principle! -- edit: sorry, this should say injectivity.
The answer is n-1. The logic used to get it is rather tedious. Here is a quick overview.
There are 2n people. First, we use injectivity. The 2n-1 people other than the hostess reported 2n-1 different numbers of handshakes. The maximum number of handshakes a person can make is 2n-1-1 = 2n-2 (everyone except themselves and their own partner), so each reported number is between 0 and 2n-2. Since there are exactly 2n-1 values between 0 and 2n-2, each person other than the hostess must have a unique number of handshakes in that range. Label each person (other than the hostess) by their number of handshakes.
Consider person 2n-2. He shook 2n-2 hands, i.e. everyone's except his partner's; and since person 0 shook no hands at all, person 0 must be his partner, so he shook hands with everyone except 0, including the hostess. Then consider person 2n-3: this person cannot have shaken hands with 0 or with 1 (person 1 has already reached his maximum allowed handshakes, via person 2n-2), so he must have shaken hands with everyone except those two. Continue this process until you reach person n: this person must have shaken hands with everyone except people 0, ..., n-2. Now consider person n-1. This person has already reached their exact quota of handshakes, and the same is true of all people from n-1 down to 0. Finally, consider the hostess. She cannot shake hands with any additional people, for everyone has reached their quota. Thus she shook hands with exactly the people from 2n-2 down to n, that is 2n-2 - n + 1 = n-1 people.
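A quick brute-force sanity check in Python (my own addition, not part of the argument above): enumerate every admissible handshake graph for small n and record what the hostess could have shaken.

from itertools import combinations

def hostess_counts(n):
    # People 0..2n-1; partners are (2i, 2i+1); the hostess is person 0.
    # Admissible pairs: any two people who are not partners.
    pairs = [(a, b) for a, b in combinations(range(2 * n), 2) if a // 2 != b // 2]
    results = set()
    for mask in range(1 << len(pairs)):        # each subset of pairs is one party
        deg = [0] * (2 * n)
        for i, (a, b) in enumerate(pairs):
            if mask >> i & 1:
                deg[a] += 1
                deg[b] += 1
        if len(set(deg[1:])) == 2 * n - 1:     # the 2n-1 answers are all distinct
            results.add(deg[0])                # record the hostess's own count
    return results

print(hostess_counts(2), hostess_counts(3))    # {1} {2}, i.e. n-1 in both cases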
March 30th 2010, 11:58 PM
I added the answer to my post above. I should note that, by necessity, her partner is person n-1. Thus both she and her partner shook n-1 hands.
Kearny, NJ SAT Math Tutor
Find a Kearny, NJ SAT Math Tutor
I graduated from Algerian University with a Master's degree in Materials Science. I hold a Master of Arts in Teaching (M.A.T.) degree from New Jersey City University. I am a certified teacher in
14 Subjects: including SAT math, chemistry, calculus, French
...I can also help you prepare for GRE, GMAT, MCAT, DAT, and PRAXIS tests. For talented 7th and 8th graders I also offer Bergen Academies Entrance Exam Prep. I'm a very dynamic and engaging
instructor who can quickly assess your needs and determine the best way to help you achieve your potential.
83 Subjects: including SAT math, chemistry, calculus, physics
...Helping a student achieve a dream is a very rewarding experience. If you have a dream, and if you think I can be of any help, please do not hesitate to contact me. I look forward to working
with you to reach your GPA and test score goals!
21 Subjects: including SAT math, English, calculus, geometry
Do you have a student who is behind in class? Is help needed with elementary, middle or high school math (including algebra I, algebra II and geometry)? Is test prep needed for tests such as
standardized grade-level advancement tests, Regents, SATs or the GED? Do you have a student whose progress...
30 Subjects: including SAT math, English, reading, writing
...I was a language teacher in my native country teaching English as a second language for native students and Spanish as a second language for foreign students. I am currently finishing my
second major in engineering science. I have helped many students as a private tutor in both mathematics and Spanish.
9 Subjects: including SAT math, Spanish, calculus, geometry
EPW maths assigment
March 31st 2008, 06:06 AM #1
Mar 2008
EPW maths assigment
I have this EPW assignment and I don't understand it.
A rat is put into a square maze that has a feeding station at one corner. The rat can move along corridors to the feeding area. The corridors are represented by the horizontal and vertical lines.
The rat may only travel upward and to the right toward the feeding station. How many different routes can the rat travel to get to the feeding station from corner A?
My teacher says there is some sort of formula to get the answer but I looked through my book and found nothing.
that's the maze.
my friend came up with this, is it correct?
Thank you very much.
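In case it helps anyone who finds this later: assuming the maze is a plain $k \times k$ grid with no blocked corridors, the formula the teacher is after is the lattice-path count. Every route is some arrangement of $k$ moves right and $k$ moves up, so the number of routes is $\binom{2k}{k} = \frac{(2k)!}{k!\,k!}$; a $4 \times 4$ grid, for instance, gives $\binom{8}{4} = 70$. Equivalently, label each intersection with the sum of the counts immediately to its left and below it, which just builds Pascal's triangle across the maze.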
Narberth Prealgebra Tutor
...I'm a Princeton alum who majored in Classics with 4 years of experience in college level Latin. I've tutored students in Latin for a year and a half. In the beginner/intermediate level of
study, I like to get students thinking analytically about the language by having them recognize words that are derived from Latin.
10 Subjects: including prealgebra, algebra 1, vocabulary, grammar
...In addition to being a certified English teacher, I am also a member of the Phi Beta Kappa Honor Society. I believe in teaching students the skills and study strategies to learn and succeed on
their own. I also understand that each student learns in his or her own way, so I tailor my tutoring methods to each individual student.
12 Subjects: including prealgebra, reading, English, writing
...For the past 4 years I have taught 9th grade Algebra, preparing students for Pennsylvania Keystone exams in Algebra. Other subjects I've taught include SAT Math, Statistics, Trigonometry, and Algebra 2. I am very familiar with Keystone Exams because of my student teaching experiences in New York a...
9 Subjects: including prealgebra, geometry, algebra 2, algebra 1
...I have been teaching for 14 years. I sometimes use the ABRSM method of music theory and have taught through the 3rd of its 8 level books. Thank you.
51 Subjects: including prealgebra, English, chemistry, reading
...I have experience teaching in face-to-face as well as cyber environments, and would be able to tutor in both settings. I have worked with students from Kindergarten through 12th grade, in
general education and a variety of special education settings.I am a certified Special Education Teacher at ...
15 Subjects: including prealgebra, reading, writing, algebra 1 | {"url":"http://www.purplemath.com/Narberth_Prealgebra_tutors.php","timestamp":"2014-04-20T04:32:43Z","content_type":null,"content_length":"24004","record_id":"<urn:uuid:7f3d6714-818b-415d-899d-f79dff4cd32c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
23. Name three faces of the cube above that intersect at point G. (1 point)
a. DHGC, CBFG, GFEH
b. ABFE, ADHE, ABCD
c. DCBA, GEBF, GFEH
d. GDAF, CBFG, GFEH
26. Find the surface area of the cylinder to the nearest tenth of a square unit. Use π = 3.14.
a. 206 cm²
b. 100.5 cm²
c. 623 cm²
d. 412 cm²
[cube and cylinder figures (mc0262.jpg, nar0021.jpg) not shown]
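No answer survived in this thread, so for what it's worth: on 23, a face can only intersect at point G if G is one of its four labeled vertices, and with the usual ABCD-EFGH cube labeling, choice (a) is the only option where all three listed faces (DHGC, CBFG, GFEH) contain G. On 26, the radius and height were in the lost figure, so only the method can be stated: SA = 2πr² + 2πrh (two circular ends plus the unrolled side), rounded to the nearest tenth.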
Avondale Estates SAT Math Tutor
Are you struggling in Chemistry? Do you not understand the difference between mitosis and meiosis? Do you think that the square root of 69 is 8?
15 Subjects: including SAT math, chemistry, geometry, biology
I hold a bachelor's degree in Secondary Education and a master's degree in Education. I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High
School level in both private and public schools.
10 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I am a mentor at my high school and involved in many honor societies as well as volunteered to teach at a homework club in an elementary school. If my students do not understand the way I am
teaching I will adjust my teachings to be more suitable for my students. I am flexible with my schedule and I am always punctual.
14 Subjects: including SAT math, chemistry, geometry, biology
...I specialize in tutoring students for standardized tests including the ACT/SAT/SSAT/high school graduation test. I also enjoy assisting students with general needs such as raising grades and performance in the academic setting. Willing to tutor in person (home environment, library, coffee shop, etc.) or online.
14 Subjects: including SAT math, calculus, geometry, biology
...I have taught middle school math for 3 years and taught high school math for 6 years. I also taught in the college environment for over 10 years and I am currently teaching Math. I have tutored
middle and high school math for 20+ years.
20 Subjects: including SAT math, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/avondale_estates_sat_math_tutors.php","timestamp":"2014-04-16T10:47:42Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:73f53d05-d5aa-46da-b1c9-b1423738b6b0>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Giveaway Data Sufficiency Statements
Because of this very confined structure, there are actually cases where the structure of question and statements can give you information regardless of the specifics of the problem. There are at
least four instances where a specific form of the statement(s) will allow you to eliminate several responses without evaluating the full content of the problem.
1) A value statement for a yes/no question
If a statement provides a value for the sole variable in the question, it is definitely sufficient to answer any yes or no question.
For example:
Does the integer x have more than two positive factors?
1) x = 104,381
There is no need to spend time considering the factors of 104,381 or trying to use divisibility rules to see if x is divisible by 3 or 9. If I know the value of the number, I can answer any yes or
no question about that number. Thus, statement 1 is sufficient (meaning the only possible answers are A or D). One caveat to this rule is that it only applies if the yes/no question is about a
single variable (e.g., if the question were "Does xy have more than two positive factors?", just knowing the value of x may not be sufficient).
2) The two statements provide the same information.
If the two statements for data sufficiency questions provide the exact same information, the answer is either D or E.
For example
Company X has a total of 400 employees. (additional information in question). What percentage of Company X employees received a raise?
1) 80 of Company X’s employees are managers.
2) 320 of Company X’s employees are not managers.
Given that we know from the question that Company X has a total of 400 employees, it is easy to see that providing the number of managers allows us to calculate the number of non-managers and vice
versa. The statements have told us exactly the same information. Repeating the same thing twice (even if you do so at a louder volume) does not actually provide any new information. Either this
information allows us to answer the question (in which case the answer is D, either statement is sufficient), or the information is not useful (in which case the answer is E, not enough
information). Make sure to assess that the statements do actually provide the same information and you are not assuming information from one statement when considering the other one.
3) Statements that give only relative numbers when the question asks for a magnitude
If only relative numbers are provided in the question, a statement that provides only relative numbers will not be sufficient to answer questions of magnitude.
Company Y’s costs were 75% of its revenues in 2011. What were Company Y’s profits in 2012?
1) Revenues increased by 1/3 and costs increased by 1/4 for Company Y in 2012 relative to 2011.
In the problem above, the question asks about an actual number: company profits. In the question and the statements, we are only provided with relative information (i.e., percentages, fractions, and ratios). You can never answer a question of magnitude based only on relative information. In the context of this question, when we only know percentage costs and fractional increases year over year, we have no sense of how large this company is. Is it a hot dog stand on the corner or a multinational corporation? Thus, we have no way to answer this question, or any question of magnitude, with statement 1, and should eliminate answers A and D.
Be careful, because a statement with relative information could be helpful for this question if magnitudes are provided in the question. Consider statement 1 if the question instead read: Company Y's revenues were 100,000 and its costs were 75% of its revenues in 2011. What were Company Y's profits in 2012?
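With that revised version, statement 1 does become sufficient, and the arithmetic is quick (my own worked numbers): 2011 costs = 75% of 100,000 = 75,000, so 2011 profit = 25,000. For 2012, revenues = 100,000 × 4/3 ≈ 133,333 and costs = 75,000 × 5/4 = 93,750, so profit ≈ 39,583.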
4) Statements that give only angle measures for a geometry problem that asks about size (side length, perimeter, area).
If only angle measures are provided in the question, a statement that provides only angle measures will not be sufficient to answer a geometry question about size (e.g., side length or area).
Line k is parallel to line l. What is the perimeter of quadrilateral wxyz?
(Diagram that includes only information about angles)
1) Angle xyz measures 60 degrees.
This rule is essentially the geometry equivalent of rule 3. Angle measures don't tell me anything about the size of a shape. Even given all three angle measures of a triangle, that triangle could be microscopic or the face of one of the great pyramids. You cannot answer any question relating to the size of a shape (side length, perimeter, area, etc.) given only angle measures. Thus, in this question, I would immediately eliminate answers A and D because statement 1 is not sufficient.
The same caveat applies as in rule 3. If, for example, the diagram included some information about the size of the shape, such as one or more side lengths, the angle information provided in the
statement could be sufficient to answer the question.
In a broader sense, understanding the meaning of these four statement types is about recognizing the repetition and patterns present on the GMAT. As your recognition grows, you should feel that you are not always starting from ground zero when you see a new GMAT problem; rather, you can apply some of the logic or lessons from prior problems you have done. As you continue doing GMAT practice problems, consider for yourself which other common question and statement types should inform your potential answers.
PID Theory Explained
1. Control System
The basic idea behind a PID controller is to read a sensor, then compute the desired actuator output by calculating proportional, integral, and derivative responses and summing those three components to compute the output. Before we start to define the parameters of a PID controller, we shall look at what a closed loop system is and some of the terminology associated with it.
Closed Loop System
In a typical control system, the process variable is the system parameter that needs to be controlled, such as temperature (ºC), pressure (psi), or flow rate (liters/minute). A sensor is used to
measure the process variable and provide feedback to the control system. The set point is the desired or command value for the process variable, such as 100 degrees Celsius in the case of a
temperature control system. At any given moment, the difference between the process variable and the set point is used by the control system algorithm (compensator) to determine the desired actuator
output to drive the system (plant). For instance, if the measured temperature process variable is 100 ºC and the desired temperature set point is 120 ºC, then the actuator output specified by the
control algorithm might be to drive a heater. Driving an actuator to turn on a heater causes the system to become warmer, and results in an increase in the temperature process variable. This is
called a closed loop control system, because the process of reading sensors to provide constant feedback and calculating the desired actuator output is repeated continuously and at a fixed loop rate
as illustrated in figure 1.
In many cases, the actuator output is not the only signal that has an effect on the system. For instance, in a temperature chamber there might be a source of cool air that sometimes blows into the chamber and disturbs the temperature. Such a term is referred to as a disturbance. We usually try to design the control system to minimize the effect of disturbances on the process variable.
Figure 1: Block diagram of a typical closed loop system.
Definition of Terminology
The control design process begins by defining the performance requirements. Control system performance is often measured by applying a step function as the set point command variable, and then
measuring the response of the process variable. Commonly, the response is quantified by measuring defined waveform characteristics. Rise Time is the amount of time the system takes to go from 10% to
90% of the steady-state, or final, value. Percent Overshoot is the amount that the process variable overshoots the final value, expressed as a percentage of the final value. Settling time is the time
required for the process variable to settle to within a certain percentage (commonly 5%) of the final value. Steady-State Error is the final difference between the process variable and set point.
Note that the exact definition of these quantities will vary in industry and academia.
Figure 2: Response of a typical PID closed loop system.
After using one or all of these quantities to define the performance requirements for a control system, it is useful to define the worst case conditions in which the control system will be expected
to meet these design requirements. Oftentimes, there is a disturbance in the system that affects the process variable or the measurement of the process variable. It is important to design a control
system that performs satisfactorily during worst case conditions. The measure of how well the control system is able to overcome the effects of disturbances is referred to as the disturbance
rejection of the control system.
In some cases, the response of the system to a given control output may change over time or in relation to some variable. A nonlinear system is a system in which the control parameters that produce a
desired response at one operating point might not produce a satisfactory response at another operating point. For instance, a chamber partially filled with fluid will exhibit a much faster response
to heater output when nearly empty than it will when nearly full of fluid. The measure of how well the control system will tolerate disturbances and nonlinearities is referred to as the robustness of
the control system.
Some systems exhibit an undesirable behavior called deadtime. Deadtime is a delay between when a process variable changes, and when that change can be observed. For instance, if a temperature sensor
is placed far away from a cold water fluid inlet valve, it will not measure a change in temperature immediately if the valve is opened or closed. Deadtime can also be caused by a system or output
actuator that is slow to respond to the control command, for instance, a valve that is slow to open or close. A common source of deadtime in chemical plants is the delay caused by the flow of fluid
through pipes.
Loop cycle is also an important parameter of a closed loop system. The interval of time between calls to a control algorithm is the loop cycle time. Systems that change quickly or have complex
behavior require faster control loop rates.
Figure 3: Response of a closed loop system with deadtime.
Once the performance requirements have been specified, it is time to examine the system and select an appropriate control scheme. In the vast majority of applications, a PID control will provide the required results.
2. PID Theory
Proportional Response
The proportional component depends only on the difference between the set point and the process variable. This difference is referred to as the error term. The proportional gain (K_c) determines the ratio of output response to the error signal. For instance, if the error term has a magnitude of 10, a proportional gain of 5 would produce a proportional response of 50. In general, increasing the proportional gain will increase the speed of the control system response. However, if the proportional gain is too large, the process variable will begin to oscillate. If K_c is increased further, the oscillations will become larger and the system will become unstable and may even oscillate out of control.
Figure 4: Block diagram of a basic PID control algorithm.
Integral Response
The integral component sums the error term over time. The result is that even a small error term will cause the integral component to increase slowly. The integral response will continually increase
over time unless the error is zero, so the effect is to drive the Steady-State error to zero. Steady-State error is the final difference between the process variable and set point. A phenomenon
called integral windup results when integral action saturates a controller without the controller driving the error signal toward zero.
Derivative Response
The derivative component causes the output to decrease if the process variable is increasing rapidly. The derivative response is proportional to the rate of change of the process variable. Increasing the derivative time (T_d) parameter will cause the control system to react more strongly to changes in the error term and will increase the speed of the overall control system response. Most practical control systems use a very small derivative time (T_d), because the derivative response is highly sensitive to noise in the process variable signal. If the sensor feedback signal is noisy or if the control loop rate is too slow, the derivative response can make the control system unstable.
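The white paper realizes the algorithm as LabVIEW VIs, but the algorithm itself is small. The following is a minimal discrete-time sketch in Python, offered purely as an illustration (the academic form out = K_c(e + (1/T_i)∫e dt - T_d·dPV/dt) is assumed, along with a fixed loop period dt; the plant model, gains and limits are invented, and this is not NI's implementation):

class PID:
    def __init__(self, kc, ti, td, dt, out_min=0.0, out_max=100.0):
        self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
        self.out_min, self.out_max = out_min, out_max   # actuator limits
        self.integral = 0.0      # running sum of error * dt
        self.prev_pv = None      # previous process-variable sample

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt
        # Differentiate the process variable rather than the error, so a
        # set-point step does not produce a derivative "kick".
        d_pv = 0.0 if self.prev_pv is None else (pv - self.prev_pv) / self.dt
        self.prev_pv = pv
        out = self.kc * (error + self.integral / self.ti - self.td * d_pv)
        if not (self.out_min <= out <= self.out_max):
            # Naive anti-windup: while the actuator is saturated, undo the
            # last integration step, then clamp the output.
            self.integral -= error * self.dt
            out = min(max(out, self.out_min), self.out_max)
        return out

# Toy closed loop: drive a crude first-order "plant" toward a set point of 50.
pid, pv = PID(kc=2.0, ti=5.0, td=0.1, dt=0.1), 20.0
for _ in range(600):
    u = pid.update(50.0, pv)
    pv += (0.05 * u - 0.02 * (pv - 20.0)) * 0.1   # stand-in plant model
print(round(pv, 1))   # should have settled close to 50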
3. Tuning
The process of setting the optimal gains for P, I and D to get an ideal response from a control system is called tuning. There are several methods of tuning, of which the "guess and check" method and the Ziegler-Nichols method will be discussed here.
The gains of a PID controller can be obtained by a trial-and-error method. Once an engineer understands the significance of each gain parameter, this method becomes relatively easy. In this method, the I and D terms are set to zero first and the proportional gain is increased until the output of the loop oscillates. As one increases the proportional gain, the system becomes faster, but care must be taken not to make the system unstable. Once P has been set to obtain a desired fast response, the integral term is increased to stop the oscillations. The integral term reduces the steady state error, but increases overshoot. Some amount of overshoot is always necessary for a fast system so that it can respond to changes immediately. The integral term is tweaked to achieve a minimal steady state error. Once the P and I have been set to get the desired fast control system with minimal steady state error, the derivative term is increased until the loop is acceptably quick to its set point. Increasing the derivative term decreases overshoot and yields higher gain with stability, but causes the system to be highly sensitive to noise. Oftentimes, engineers need to trade off one characteristic of a control system for another to better meet their requirements.
The Ziegler-Nichols method is another popular method of tuning a PID controller. It is very similar to the trial-and-error method, wherein I and D are set to zero and P is increased until the loop starts to oscillate. Once oscillation starts, the critical gain K_c and the period of oscillation P_c are noted. The P, I and D gains are then set as per the table below (also rendered as a small code helper right after it).
│Control │ P │ Ti │ Td │
│ P │0.5Kc │ - │ - │
│ PI │0.45Kc│Pc/1.2│ - │
│ PID │0.60Kc│0.5Pc │Pc/8│
Table 1. Ziegler-Nichols tuning, using the oscillation method.
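The same table expressed as a tiny helper (again an illustrative sketch, not NI library code), mapping the measured critical gain Kc and critical period Pc to controller settings:

def ziegler_nichols(kc, pc, kind="PID"):
    # Classic Ziegler-Nichols rules, transcribed from Table 1.
    if kind == "P":
        return {"P": 0.5 * kc}
    if kind == "PI":
        return {"P": 0.45 * kc, "Ti": pc / 1.2}
    if kind == "PID":
        return {"P": 0.60 * kc, "Ti": 0.5 * pc, "Td": pc / 8}
    raise ValueError(kind)

print(ziegler_nichols(10.0, 2.0))   # {'P': 6.0, 'Ti': 1.0, 'Td': 0.25}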
4. NI LabVIEW and PID
The LabVIEW PID toolset features a wide array of VIs that greatly help in the design of a PID-based control system. Control output range limiting, integrator anti-windup and bumpless controller output
for PID gain changes are some of the salient features of the PID VI. The PID Advanced VI includes all the features of the PID VI along with non-linear integral action, two degree of freedom control
and error-squared control.
Fig 5: VIs from the PID controls palette of LabVIEW
The PID palette also features some advanced VIs, like the PID Autotuning VI and the PID Gain Schedule VI. Once an educated guess about the values of P, I and D has been made, the PID Autotuning VI helps refine those parameters to obtain a better response from the control system.
Fig 6: Advanced VIs from the PID controls palette of LabVIEW
The reliability of the control system is greatly improved by using the LabVIEW Real-Time module running on a real-time target. National Instruments provides the new M Series data acquisition boards, which offer higher accuracy and better performance than the average control system.
Fig 7: A typical LabVIEW VI showing PID control with a plug-in NI data acquisition device
The tight integration of these M Series boards with LabVIEW minimizes the development time involved and greatly increases the productivity of any engineer. Figure 7 shows a typical VI in LabVIEW implementing PID control using the NI-DAQmx API of M Series devices.
5. Summary
The PID control algorithm is a robust and simple algorithm that is widely used in industry. The algorithm has sufficient flexibility to yield excellent results in a wide variety of applications, which has been one of the main reasons for its continued use over the years. NI LabVIEW and NI plug-in data acquisition devices offer the accuracy and performance needed to build an excellent PID control system.
6. References
1. Graham C. Goodwin, Stefan F. Graebe, and Mario E. Salgado, "Classical PID Control," in Control System Design, Prentice Hall PTR.
2. John W. Webb and Ronald A. Reis, "PID Control of Continuous Processes," in Programmable Logic Controllers, Fourth Edition, Prentice Hall PTR.
Here's the question you clicked on:
4x+17>and 3x-19>-13 solve
| {"url":"http://openstudy.com/updates/4dd8ab17d95c8b0b215c63c4","timestamp":"2014-04-21T02:31:32Z","content_type":null,"content_length":"39381","record_id":"<urn:uuid:eb0db5b2-b10b-4334-944d-7e5bbb1fae36>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
help with the following sequence
\[a_{n}=e^{2n/(n+2)}\]
I took ln of both sides (after putting in the limit) and then used l'Hôpital's rule, but my book solved it in a completely different way and came to a different answer. So I'm kind of lost on how to solve it.
what does "solve" mean in this context?
you have \[a_n=e^{\frac{2n}{n+2}}\] what are you trying to find?
if you are looking for \[\lim_{n\to \infty}e^{\frac{2n}{n+2}}\] then since \[\lim_{n\to \infty}\frac{2n}{n+2}=2\] the answer is \(e^2\)
Sorry I was gone for a while but yea it was what I was looking for^^
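For what it's worth, a computer algebra system confirms the limit; this is an illustrative check, not part of the original thread:

import sympy as sp

n = sp.symbols('n', positive=True)
print(sp.limit(sp.exp(2*n/(n + 2)), n, sp.oo))  # exp(2)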
| {"url":"http://openstudy.com/updates/51774b0be4b0955c26de2ef7","timestamp":"2014-04-17T21:36:32Z","content_type":null,"content_length":"42199","record_id":"<urn:uuid:2c92209b-d552-413b-a238-928b6baaaa41>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverted Level Payment Calculator (Amount)
This inverted level payment calculator is an integrated saving and borrowing calculator that focuses on what really matters to most people: how much they'll pay per month and how much money they'll
have for college. Think of it as a layaway plan for college that starts several years before matriculation and continues for several years after graduation. In other words, the same monthly payment
is used to build savings before college and to repay the debt after college. It yields a complete financial plan for college.
This calculator computes the total amount of money available for college costs based on a single flat monthly payment for both saving and borrowing, the number of years of savings before
matriculation, the number of years in repayment on the loans, the interest rate on savings and the interest rate on debt.
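The page does not publish the formulas, but one plausible formulation is: the flat monthly payment accumulates as a savings annuity before matriculation, and the same payment then amortizes a loan taken at matriculation. The sketch below is a hypothetical reconstruction; the function name, parameters and formulas are my assumptions, not FinAid's:

def college_funds(payment, years_saving, years_repaying, save_rate, loan_rate):
    # Monthly rates and period counts (rates given as annual fractions).
    i_s, i_b = save_rate / 12, loan_rate / 12
    n_s, n_b = years_saving * 12, years_repaying * 12
    # Future value of the pre-college savings annuity.
    savings = payment * ((1 + i_s) ** n_s - 1) / i_s
    # Loan principal that the same monthly payment can repay after college.
    loan = payment * (1 - (1 + i_b) ** -n_b) / i_b
    return savings + loan

print(round(college_funds(200, 5, 10, 0.04, 0.07), 2))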
| {"url":"http://www.finaid.org/calculators/invertedlevelpayments.phtml","timestamp":"2014-04-19T09:31:07Z","content_type":null,"content_length":"15770","record_id":"<urn:uuid:17f3dc16-13d2-43b6-9f88-524d193ec2e6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
How Do Psychologists Analyze and Interpret Research Data?
Statistics is oftentimes used in Psychology because research data are usually gathered quantitatively. There are two types of Statistics - descriptive and inferential. The classification depends upon
the function of statistical principles in analyzing and interpreting research data.
Descriptive Statistics
Descriptive Statistics is used to summarize the characteristics of sets of data by describing their overall tendency or variability.
Measures of Central Tendency:
Mean is the statistical term for average. One can obtain the mean by adding all the scores and dividing the sum by the number of scores. Suppose one class section gets the scores 30, 29, 29, 25, 20,
18, 26, 3 for the first quiz in Psychology. The mean in this example is the sum of the scores divided by 8 (because there are 8 scores overall), which is 22.5. Because the mean takes into account all
the scores, it is strongly affected by extreme scores. In the above example, the student who scored 3 pulled down the class average, and with it the class's standing relative to other sections. Had that student studied harder and scored 20, the average performance of the class would have risen to 24.6. The same thing could happen in a section with poor scores and 1 very
high score. In a section where 7 students get a score of 5, and 1 student gets a score of 60, the average performance of the class could shoot up to 11.9, well above the performance of most students.
Thus, the problem with using the mean as a statistical tool to measure the overall tendency of a set of scores is that it is easily distorted by extremely low and high scores.
Median is the middlemost score in a set of scores. Picture a class of kindergarten students falling in line according to their height. The middle student is the median in height. Thus, in order to
find out the median in a set of scores, it is important to arrange the scores from highest to lowest, or vice versa, and locating the middlemost score. For example, in a score set of {7, 7, 5, 4, 1},
the median is 5. When a data set can be equally divided into two groups, such that no middlemost score can be found, the average of the two middlemost scores is obtained. Thus, in a score set of {27,
25, 20, 19}, the median is the sum of 25 and 20 divided by 2, resulting to 22.5. Notice that unlike the mean, only the arrangement of the scores matters, not the value. Because of this, the median is
unaffected by extreme scores, and is useful when such extreme scores are present in a set of data.
Mode is the most frequently occurring score in a set of scores. Imagine an elementary school in Africa: a typical classroom may well be composed primarily of Black children, making that the modal complexion. Similarly, in a score set of {5, 4, 4, 3, 3, 3, 2, 1, 1}, the mode is undoubtedly 3. Just like the median, the mode is unaffected by extreme scores.
Measures of Variability:
Range is the distance between the highest and the lowest score. It gives a very basic and general idea of the variation of a score set. For example, person A can jump as high as 3 yards, while person
B can jump as high as 4 yards. The range of person B's high jump is higher compared to person A. Thus, if person A's jump score set is {0, 3, 2, 1.5, 2, 2.5} and person B's score set is {0, 4, 2, 2,
2, 1}, person A's range of high jump is 3, while person B's range of high jump is 4, even if person A's mean score is just the same as person B's, which is 1.83. The formula for getting the range is
H-L, where H refers to the highest score, and L refers to the lowest score.
Standard deviation measures how far scores typically lie from the mean. The formula given here, (Σ|X−M|)/N, where Σ|X−M| is the sum of the absolute values of the differences of all the scores (X) from the mean (M) and N is the number of scores, is strictly the mean absolute deviation; the conventional standard deviation instead averages the squared differences and takes the square root, √(Σ(X−M)²/N). For example, in a score set of {10, 8, 8, 7, 5, 3, 3, 2, 2, 2}, the mean is 5.0. The sum of the absolute differences of all the scores from the mean is |10−5|+|8−5|+|8−5|+|7−5|+|5−5|+|3−5|+|3−5|+|2−5|+|2−5|+|2−5| = 5+3+3+2+0+2+2+3+3+3 = 26. Because there are 10 scores, the mean absolute deviation is 26/10 = 2.6: on average, the scores lie 2.6 points from the mean. (For the same data, the conventional standard deviation is √(82/10) ≈ 2.86.)
Note: The concern over extreme scores is applicable to all measures of variability and the mean.
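All of these measures are one-liners in most languages. Here is an illustrative Python check of the worked examples above (values taken from the text):

from statistics import mean, median, mode, pstdev

quiz = [30, 29, 29, 25, 20, 18, 26, 3]
print(mean(quiz))                         # 22.5

scores = [10, 8, 8, 7, 5, 3, 3, 2, 2, 2]
m = mean(scores)                          # 5.0
mad = sum(abs(x - m) for x in scores) / len(scores)
print(mad)                                # 2.6  (mean absolute deviation)
print(round(pstdev(scores), 2))           # 2.86 (conventional standard deviation)

print(median([27, 25, 20, 19]))           # 22.5
print(mode([5, 4, 4, 3, 3, 3, 2, 1, 1]))  # 3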
Inferential Statistics
Inferential Statistics is commonly used in experimental research to compare results from the experimental group and the control group. The difference between the two groups is considered statistically significant only at a 0.01-0.05 significance level, meaning that a difference at least this large would arise by chance alone only 1-5% of the time.
An important aspect of reporting a significant difference between two groups is indicating the separate tendency and variability of each group (as discussed under Descriptive Statistics). For example, Benbow and Stanley (1983) found that the SAT scores of males and females differ significantly from each other. However, the difference is too small to be considered important. In
addition, the overlap between the scores is so extensive that very few males outperform the highest-scoring females. It is therefore wrong to conclude from this research that all males do better than
females in SAT scores.
| {"url":"http://general-psychology.weebly.com/how-do-psychologists-analyze-and-interpret-research-data.html","timestamp":"2014-04-21T02:00:35Z","content_type":null,"content_length":"21522","record_id":"<urn:uuid:9dd1d540-d395-475b-b2f8-f760d1a817f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
A First Look at Taylor Series
Most ``smooth'' functions f(x) can be expanded in the form of a Taylor series expansion about a point x_0:

f(x) = f(x_0) + f'(x_0)(x - x_0) + f''(x_0)(x - x_0)²/2! + f'''(x_0)(x - x_0)³/3! + ...

This can be written more compactly as

f(x) = Σ_{n=0}^∞ f^(n)(x_0) (x - x_0)^n / n!

where f^(n)(x_0) denotes the nth derivative of f evaluated at x = x_0. Clearly, since many derivatives are involved, a Taylor series expansion is only possible when the function is so smooth that it can be differentiated again and again. Fortunately for us, all audio signals are in that category, because every audio signal is bandlimited to below half the sampling rate and is therefore a sum of sinusoids, and any sum of sinusoids is infinitely differentiable.
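As an illustrative numeric check (my addition, not from the book; e^x is chosen arbitrarily), the partial sums of a Taylor series converge quickly for a smooth function:

import math

def taylor_exp(x, terms):
    # Partial sum of the Taylor series of e**x about x_0 = 0.
    return sum(x**k / math.factorial(k) for k in range(terms))

for terms in (2, 4, 8, 16):
    print(terms, taylor_exp(1.0, terms))  # approaches math.e = 2.71828...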
About the Author: Julius Orion Smith III
Julius Smith's background is in electrical engineering (BS Rice 1975, PhD Stanford 1983). He is presently Professor of Music and Associate Professor (by courtesy) of Electrical Engineering at
Stanford's Center for Computer Research in Music and Acoustics (CCRMA)
, teaching courses and pursuing research related to signal processing applied to music and audio systems. | {"url":"http://www.dsprelated.com/dspbooks/mdft/First_Look_Taylor_Series.html","timestamp":"2014-04-16T22:00:23Z","content_type":null,"content_length":"63196","record_id":"<urn:uuid:876e6278-9f66-4098-82ab-88586d1417a5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
We all have an intuitive sense of what "simplicity" means. In science, the word is often used as a term of praise. We expect that simple explanations are more natural, sounder, and more reliable than
complicated ones. We abhor epicycles, or long lists of exceptions and special cases.
But can we take a crucial step further, to refine our intuitions about simplicity into precise, scientific concepts? Is there a simple core to "simplicity"? Is simplicity something we can quantify,
and measure?
When I think about big philosophical questions, which I probably do more than is good for me, one of my favorite techniques is to try to frame the question in terms that could make sense to a
computer. Usually it's a method of destruction: It forces you to be clear, and once you dissipate the fog you discover that very little of your big philosophical question remains. Here, however, in
coming to grips with the nature of simplicity, the technique proved creative, for it led me straight toward a (simple) profound idea in the mathematical theory of information, the idea of description
length. (The idea goes by several different names in the scientific literature, including algorithmic entropy and Kolmogorov-Chaitin complexity. Naturally, I chose the simplest one.)
Description length is actually a measure of complexity, but for our purposes that's just as good, since we can define simplicity as the opposite—or, numerically, the negative—of complexity. To ask a
computer how complex something is, we have to present that "something" in a form the computer can deal with, that is as a data file, i.e. a string of 0s and 1s. That's hardly a crippling constraint:
We know that data files can represent movies, for example, so we can ask about the simplicity of anything we can present in a movie; since our movie might be a movie recording scientific
observations or experiments, we can ask about the simplicity of a scientific explanation.
Interesting data files might be very big, of course. But big files need not be genuinely complex; for example, a file containing trillions of 0s and nothing else isn't genuinely complex. The idea of
description length is, simply, that a file is only as complicated as its simplest description. Or, to put it in terms a computer could relate to: A file is as complicated as the shortest program that
can produce it from scratch. This defines a precise, widely applicable, numerical measure of simplicity.
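True description length is uncomputable, but compressed file size gives a crude, practical stand-in, and it captures the point about the file of trillions of 0s. A minimal sketch (my illustration, not the author's):

import os
import zlib

def approx_description_length(data: bytes) -> int:
    # Compressed size in bytes: an upper bound on the shortest description.
    return len(zlib.compress(data, 9))

simple = b"\x00" * 1_000_000   # a million zero bytes: big, but not complex
noise = os.urandom(1_000_000)  # random bytes: essentially incompressible

print(approx_description_length(simple))  # on the order of 1 KB
print(approx_description_length(noise))   # close to 1 MB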
An impressive virtue of this notion of simplicity is that it illumines and connects up other attractive, successful ideas. Consider, for example, the method of theoretical physics. In theoretical
physics, we try to summarize the results of a vast number of observations and experiments in terms of a few powerful laws. We strive, in other words, to produce the shortest possible program that
outputs the world. In that precise sense, it's a quest for simplicity.
It's appropriate to add that symmetry, a central feature of the physicist's laws, is a powerful simplicity enabler. For example, if we work with laws that have symmetry under space and time
translation—in other words, laws that apply uniformly, everywhere and everywhen—then we don't need to spell out new laws for distant parts of the universe or for different historical epochs, and we
can keep our world-program short.
Simplicity leads to depth: For a short program to unfold into rich consequences, it must support long chains of logic and calculation, which are the essence of depth.
Simplicity leads to elegance: The shortest programs will contain nothing gratuitous. Every bit will play a role, for otherwise we could expunge it, and make the program shorter. And the different
parts will have to function together smoothly, in order to make a lot from a little. Few processes are more elegant, I think, than the construction, following the program of DNA, of a baby from a
fertilized egg.
Simplicity leads to beauty: For it leads, as we've seen, to symmetry, which is an aspect of beauty. As, for that matter, are depth and elegance.
Thus simplicity, properly understood, explains what it is that makes a good explanation deep, elegant, and beautiful. | {"url":"http://edge.org/print/response-detail/10324","timestamp":"2014-04-17T09:52:37Z","content_type":null,"content_length":"17576","record_id":"<urn:uuid:724bf5dd-668e-4d23-b406-998e15fd0d18>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00271-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
Why is the change in potential energy equal to the negative of the work done? Please prove this formula: \[\Delta U = -W\]
Potential Energy can be negative or positive because it is all relative. calling U = mgh (for approximations close to the surface) you could make U=0 the floor or U=0 the ceiling, it wouldn't
matter, since the change would be the same anyway. The change in energy can be positive or negative. Kinetic energy can never be negative, since energy is a scalar quantity, but the change in
Kinetic Energy can be positive or negative. Work can be positive or negative, depending on the direction of the force and displacement. However, Work is equal to the CHANGE in energy, not the
actual quantity of energy itself.
Do you know the work-energy theorem (change in kinetic energy = total work done)? According to conservation of energy, \[K_1+U_1=K_2+U_2\] \[\Delta U=U_2-U_1=-(K_2-K_1)\] By the work-energy theorem, \[W=K_2-K_1\] Thus, \[\Delta U=-W\] Clear?
\[K+U=E=constant\] As you see, the sum of Kinetic and Potential energy is constant. So, if Kinetic energy increases(work is positive), Potential energy must go down to maintain the constant sum
E. So change in potential energy is negative. Similar explanation for the converse. If the kinetic energy decreases, work done is negative and so potential energy goes up to maintain the constant
sum. So, change in potential energy is positive. Ok?
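A concrete case may help (my addition, for illustration): lift a mass m from height h_1 to h_2 > h_1 near the Earth's surface. Gravity does work \[W_{grav}=\int_{h_1}^{h_2}(-mg)\,dh=-mg(h_2-h_1)\] while \[\Delta U=mgh_2-mgh_1=-W_{grav}\] so the signs come out exactly as described in the replies above.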
| {"url":"http://openstudy.com/updates/4f7f11e5e4b0bfe8930b874c","timestamp":"2014-04-17T01:34:31Z","content_type":null,"content_length":"33585","record_id":"<urn:uuid:7ad30e7a-7ce7-4e7a-bac7-152d54c2d5fc>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof with closed sets....
Suppose f:[a,b]--> R and g:[a,b]-->R. Let T={x:f(x)=g(x)}
Prove that T is closed.
So we can set h(x) = f(x) - g(x); then T is the set where h(x) = 0, so the complement of T is the set where h(x) ≠ 0, and if we can show that this complement is open, then its complement, T, is closed. I
don't know how to show it is open though. We need to show there is some neighborhood, but I don't know what to do. Thanks.
If f and g are continuous, then the proof is trivial, since for any continuous function h, $h^{-1}(\text{a closed set})$ is a closed set.
And here, you have $h^{-1}(\{0\})$...
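For completeness, here is the direct neighborhood argument the original poster was after (assuming, as above, that f and g are continuous and h = f - g): if $x_0\notin T$ then $h(x_0)\neq 0$, and by continuity there is a $\delta>0$ such that $|h(x)-h(x_0)|<|h(x_0)|$ whenever $|x-x_0|<\delta$. On that neighborhood (intersected with $[a,b]$) we have $h(x)\neq 0$, so the complement of $T$ is open in $[a,b]$ and $T$ is closed.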
| {"url":"http://mathhelpforum.com/differential-geometry/110318-proof-closed-sets.html","timestamp":"2014-04-19T02:45:47Z","content_type":null,"content_length":"32967","record_id":"<urn:uuid:402fef94-5d0a-4fc0-8a53-20891f1b894b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Candy Hunt (AlgebraLAB: Hands-On Activities)
Objective:
To be able to write and solve a system of inequalities and equations with a creative twist. This will also be an introduction to linear programming.
Materials Needed:
□ Graph paper (at least two sheets)
□ Ruler
□ Candy (or something to hypothetically hide on the graph)
□ Clean sheet of paper
Background Information:
Slope-intercept form of a linear equation is y = mx + b.
Length of Activity:
You should allow at least two class periods working in groups of two.
1. Divide the class into groups of two.
2. Give each student their candy or object to hypothetically hide on their graph.
3. Each group will “hide” their candy somewhere on the coordinate plane.
a. Determine the grid point where you have hidden your candy.
b. Write a system of three inequalities on a clean half sheet of paper where the solution will be a small area surrounding the hidden candy.
c. Write a system of two equations on the other clean half sheet of paper where the solution is the exact location of the candy (see the worked example after step 10).
4. Groups 1 and 2, 3 and 4, 5 and 6, etc. will exchange their system of three inequalities.
5. When groups receive the system of three inequalities, give them sufficient time to graph the system on their own sheet of graph paper.
6. Once the time limit has expired or groups have successfully graphed the system, groups will receive the system of two equations from the same group they received the system of inequalities.
7. Groups will try to successfully graph the system of equations to determine the exact location of the hidden candy.
8. If Group 2 finds Group 1’s candy, then Group 2 gets Group 1’s candy.
9. If Group 2 does NOT find Group 1’s candy, then Group 2 does NOT get Group 1’s candy.
10. If Group 2 finds that Group 1 has made an error setting up their system of three inequalities or system of two equations, Group 2 gets Group 1’s candy. | {"url":"http://www.algebralab.org/activities/activity.aspx?file=Science_CandyHunt.xml","timestamp":"2014-04-16T19:35:35Z","content_type":null,"content_length":"11945","record_id":"<urn:uuid:f88e2922-367f-40ea-88b0-724f10ddbea1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
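A worked example for steps 3b and 3c (coordinates invented for illustration): suppose a group hides its candy at the grid point (3, 2). One suitable system of three inequalities is
y > 1, y < x, y < -x + 6,
whose solution region is the small triangle with vertices (1, 1), (5, 1) and (3, 3) surrounding the candy. A matching system of two equations is
y = x - 1 and y = -x + 5,
whose graphs intersect exactly at (3, 2).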
GUI application for pipe flow of a power-law fluid
The velocity and shear stress versus radial position are obtained for the laminar flow of a power-law fluid in a pipe. Pipe radius and applied pressure gradients can be set by the user. If you choose
a power-law exponent, n, equal to 1 then a Newtonian fluid is recovered. Dilatant and pseudo-plastic fluids are obtained for n>1 and n<1, respectively. A non-Newtonian fluid has a viscosity that
changes with the applied shear force.
For a Newtonian fluid (such as water), the viscosity is independent of how fast you are stirring it, but for a non-Newtonian fluid the viscosity is dependent. It gets easier or harder to stir faster
for different types of non-Newtonian fluids. Different constitutive equations, giving rise to various models of non-Newtonian fluids, have been proposed in order to express the viscosity as a
function of the strain rate. In power-law fluids, n is the power-law exponent and kappa is the power-law consistency index. Dilatant fluids correspond to the case where the exponent in the power-law
constitutive equation is positive while pseudo-plastic fluids are obtained when n<1. We see that viscosity decreases with strain rate for n<1, which is the case for pseudo-plastic fluids, also called
shear-thinning fluids. On the other hand, dilatant fluids are shear-thickening. If n=1, one recovers the Newtonian fluid behavior.
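The textbook laminar solution behind such a tool fits in a few lines. The sketch below is an illustrative NumPy version (not the submitted MATLAB code), assuming the constitutive law τ = κ(−du/dr)^n with no-slip at the wall, which gives u(r) = (ΔP/(2κL))^{1/n} · n/(n+1) · (R^{(n+1)/n} − r^{(n+1)/n}):

import numpy as np

def velocity_profile(r, R, dp_per_length, kappa, n):
    # Laminar velocity of a power-law fluid in a pipe of radius R,
    # driven by a (positive) pressure drop per unit length dp_per_length.
    e = (n + 1.0) / n
    return (dp_per_length / (2.0 * kappa)) ** (1.0 / n) \
        * n / (n + 1.0) * (R**e - r**e)

r = np.linspace(0.0, 0.01, 5)   # radial positions, R = 1 cm
for n in (0.5, 1.0, 1.5):       # pseudo-plastic, Newtonian, dilatant
    u = velocity_profile(r, 0.01, 1e4, 0.05, n)
    print(n, u[0])              # centerline velocity for each n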
The user of the GUI application should be aware that good data for the power law exponent and consistency index must be used in order for the result to be correct.
| {"url":"http://www.mathworks.com/matlabcentral/fileexchange/17910-gui-application-for-pipe-flow-of-a-power-law-fluid","timestamp":"2014-04-18T13:42:45Z","content_type":null,"content_length":"29698","record_id":"<urn:uuid:b9bc3170-f95e-454b-932e-68281f17b96d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Another coin problem
Re: Another coin problem
The sun sign is the position of the sun at birth. The rising sign is calculated from the hour. At least as far as I know.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Another coin problem
Okay maybe its the rising sign then..,
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Another coin problem
The 24 hours are divided into 2 hour periods and your rising sign is computed from there.
Re: Another coin problem
Speaking of, my prof. recently asked us to search up a proof for the Linearity of Expectation function which I just found.
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
Re: Another coin problem
I would like to see that.
Re: Another coin problem
The linearity of expectation is surprising because there is no need for independence.
Re: Another coin problem
Had been searching in textbooks for 6 days-then opened google.
Re: Another coin problem
2 problems:
The 52 cards are spread out in a line what is the expected number of adjacent cards of the same suit.
The 52 cards are spread out in a line what is the probability that adjacent cards are of the same suit.
One is hard and one is easy because of linearity.
Re: Another coin problem
Is the answer of the first 0.25*0.25*51??
Re: Another coin problem
I am not getting that.
Re: Another coin problem
Why? Whats wrong?
Re: Another coin problem
The probability that the second card is the same suit given the first is 12 / 51.
Re: Another coin problem
I would say the answer to the first one is 12.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Another coin problem
That is what I am getting too. Of course it is a bad answer!
Re: Another coin problem
Actually, shouldn't it be 11?
Re: Another coin problem
Why do you think so?
Re: Another coin problem
No, sorry. I forgot there were 13 cards in a suit.
Why is the answer of 12 wrong then?
Re: Another coin problem
I did not say it is wrong. It is correct but it is still a bad answer. Sufficient for the people on forums but not sufficient for real work!
Re: Another coin problem
Why not?
Re: Another coin problem
Ever seen the way these guys work? Ever checked any of their answers? Conjectures that they never even try a single example against...
Numerical work demands 2 solutions for every problem! Combinatorics and probability should be treated with the same respect.
Re: Another coin problem
Uh-huh. Ok, then we can also use a simulation to confirm the answer.
Re: Another coin problem
Already did that, got 12.0004. Now I can hand the answer into my boss and get paid the big bucks.
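For the record, the simulation mentioned above takes only a few lines; this is my sketch, not the poster's code:

import random

def mean_adjacent_same_suit(trials=100_000):
    deck = [suit for suit in range(4) for _ in range(13)]  # 4 suits x 13 cards
    total = 0
    for _ in range(trials):
        random.shuffle(deck)
        total += sum(deck[i] == deck[i + 1] for i in range(51))
    return total / trials

print(mean_adjacent_same_suit())  # ~12.0, matching 51 * (12/51) = 12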
Re: Another coin problem
You work in a casino?
Re: Another coin problem
Several of them in the past.
Re: Another coin problem
(12/51)*51 = 12
Did you state that the cards were all from the same deck?
What is the answer to the other problem?
| {"url":"http://mathisfunforum.com/viewtopic.php?id=19142&p=3","timestamp":"2014-04-18T03:42:15Z","content_type":null,"content_length":"39929","record_id":"<urn:uuid:6fe995a8-4447-465e-a4a5-9b6383ba7182>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite group with identity subgroup.
If you have a finite group, say G with no subgroups apart from {1G } and G. How would you go about showing that G is cyclic and that the number of elements in G is either 1 or prime?
I know that the order of this group is 1, but how would you show that this makes it cyclic? By Lagrange's theorem?
Do you agree that, by definition, a group $G$ is cyclic iff there is some $a$ in $G$ such that $<a>=G$.
I know that the order of this group is 1
Since the order of a finite group is its cardinality, the one and only group of order 1 is the trivial group $G=\{1_G\}.$ But there are non trivial groups with no subgroup other than $\{1\}$ and
themselves (i.e. with no non-trivial subgroup).
So, consider a finite group $G \neq \{1_G\}$ with no non-trivial subgroup, let's say $G$ has order $n\geq 2$. You may know that if a prime $p$ divides $n,$ then $G$ has an element of order $p$ (Cauchy's theorem).
Assume $n$ is not a prime, and try to obtain a contradiction by proving $G$ has a non-trivial subgroup.
So far you'll have that a finite group with no non-trivial subgroup has order 1 or a prime. If it is 1, do you agree that $G=\{1_G\}=<1_G>$ therefore $G$ is cyclic, because its only element is a
generator. If it is not 1, take an element of order strictly greater than $1$ (justify its existence) and consider the subgroup it spans. Because of the hypothesis, since it is different from $\
{1_G\},$ it must be the whole group, hence $G$ is cyclic.
Note that we did not to use that order (when different from 1) is prime; that being said a similar argument shows that any group with order a prime has no non-trivial subgroup (and is cyclic).
Conclusion: A group has no non-trivial subgroup iff its order is 1 or a prime.
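Filling in the contradiction left as an exercise above (a standard argument using the quoted fact, i.e. Cauchy's theorem): suppose $n$ is composite and pick a prime $p$ dividing $n$ with $p<n$. There is an element $a\in G$ of order $p$, so $\langle a\rangle$ is a subgroup of order $p$ with $1<p<n$; it is neither $\{1_G\}$ nor $G$, contradicting the hypothesis. Hence $n$ must be prime.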
| {"url":"http://mathhelpforum.com/advanced-algebra/129443-finite-group-identity-subgroup.html","timestamp":"2014-04-16T21:13:44Z","content_type":null,"content_length":"36440","record_id":"<urn:uuid:82d31dde-9514-49df-8bd9-d26efeecbd1d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Computer and Human Vision
Exercise Set 2
Deblurring using Natural Image Priors
Submission: individual or in pairs
Due date: Thursday, December 31st, 2009
In this assignment we will investigate the solution of inverse problems using natural
image priors. In particular we will focus on the deblurring problem and explore solutions
using Gaussian and sparse priors.
Brief overview: In class we discussed the problem of deblurring an m × n image y. We
assume that the blurred image y is generated by
y = x ⊗ k + n
where ⊗ denotes convolution, k is a blur kernel and n ∼ N(0, η²) is zero-mean Gaussian noise.
Introducing a prior on natural images
p(x) ∝ exp( −(1/σ²) Σ_{i,j} [ |g^x_ij(x)|^α + |g^y_ij(x)|^α ] )
where σ² is a parameter of the prior and g^x_ij(x) and g^y_ij(x) are the responses at coordinate (i, j) of the image to the filters (−1, 1) and (−1, 1)^T respectively. We then formulated the problem of deblurring using a prior on the distribution of the edges in the image as finding a minimizer of
L(x) = (1/(2η²)) ||y − Ax||² + (1/(2σ²)) Σ_{i,j} [ |g^x_ij(x)|^α + |g^y_ij(x)|^α ]
where A is the mn × mn blur matrix corresponding to k, y is the column-stacked blurred image and η² is the variance of the noise in the blurred image. Setting α = 2 we get the Gaussian prior and using α = 0.8 we get the sparse prior.
The "iteratively reweighted least squares" (IRLS) algorithm is an iterative procedure to find a minimizer of a function of the form
E(x) = Σ_i ρ_i(|b_i x − z_i|)
Given an estimate of a minimizer x^t, define an iteration by
x^{t+1} = arg min_x Σ_i q_i^t(|b_i x − z_i|)
where q_i^t(c) is a quadratic (scalar) function which coincides with ρ_i at c_t = b_i x^t − z_i and which is strictly greater than ρ_i at all other points. From this we can derive three conditions on q_i^t:
1. q_i^t(c_t) = ρ_i(c_t)
2. (q_i^t)'(c_t) = ρ_i'(c_t)
3. q_i^t is symmetric around the y-axis (thus q_i^t(c) = ac² + d for some constants a, d).
These conditions determine q_i^t. Since all we care about is finding the minimizer of q_i^t we can neglect the additive term d and use
q_i^t(c) = ( ρ_i'(c_t) / (2c_t) ) c²
This is a Matlab programming assignment.
• It is very important that you thoroughly document your code. In particular each
function should begin with a short description of what it does and what are the input
and output.
• Remember that Matlab is efficient when the code is vectorized, so try to avoid using
loops. Specifically linear systems should be formulated in matrix notation and solved
without using loops (looping over IRLS iterations is fine).
• You will most probably want to use the following Matlab functions: help, imread,
imwrite, imshow, figure, conv2, reshape, spdiags...
• In this exercise you will have to use Matlab’s sparse matrix representation. If you are
not familiar with sparse matrices please consult the Matlab premier (you can find a
link on the “links” page of the course website). The important thing to note here is
that if you don’t keep all your matrices sparse you will run out of memory.
• You can use the function getConvMat.m which is posted on the website to generate
the convolution matrices.
• Save each matlab function in a separate file.
1. Write a function
function x_star = solveGaussianPrior(A,y,eta,sigma)
which computes x_star the minimizer of
L(x) = (1/(2η²)) ||y − Ax||² + (1/(2σ²)) Σ_{i,j} [ |g^x_ij(x)|² + |g^y_ij(x)|² ]
where A is an mn × mn matrix (which could correspond to deblurring, super-resolution or
any linear operator), y is an m × n image and eta and sigma are scalars as defined above.
The output is of size m × n. The optimization should be done in the spatial domain, solving
a linear system of equations using Matlab’s backslash.
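For reference, setting the gradient of L(x) to zero for the α = 2 case gives the sparse linear system (A^T A/η² + (G_x^T G_x + G_y^T G_y)/σ²) x = A^T y/η², where G_x and G_y are the convolution matrices of the two derivative filters. The sketch below is an illustrative NumPy/SciPy port (the assignment itself asks for MATLAB), with my own names:

import scipy.sparse.linalg as spla

def solve_gaussian_prior(A, y, eta, sigma, Gx, Gy):
    # A, Gx, Gy: sparse mn x mn matrices; y: the m x n blurred image.
    lhs = A.T @ A / eta**2 + (Gx.T @ Gx + Gy.T @ Gy) / sigma**2
    rhs = A.T @ y.ravel() / eta**2
    return spla.spsolve(lhs.tocsc(), rhs).reshape(y.shape)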
2. Write a function
function x_hat = solveSparsePrior(A,y,eta,sigma,x0,n_iteration)
which computes x_hat an (approximate) minimizer of
L(x) = (1/(2η²)) ||y − Ax||² + (1/(2σ²)) Σ_{i,j} [ |g^x_ij(x)|^{0.8} + |g^y_ij(x)|^{0.8} ]
by running n_iteration iterations of “iteratively reweighted least squares” with an initial
guess of x0 (also of size m × n). Thus the iteration should be
x^{t+1} = arg min_x (1/(2η²)) Σ_{i,j} q^{1t}(|y_ij − a_ij x|) + (1/(2σ²)) Σ_{i,j} [ q^{2t}(|g^x_ij(x)|) + q^{2t}(|g^y_ij(x)|) ]
where q^{1t}(c) = c², q^{2t}(c) = |c_t|^{−1.2} c² and a_ij is the ij'th row of A. Or equivalently
x^{t+1} = arg min_x (1/(2η²)) ||y − Ax||² + (1/(2σ²)) Σ_{i,j} [ w^x_ij |g^x_ij(x)|² + w^y_ij |g^y_ij(x)|² ]
where w^x_ij = |g^x_ij(x^t)|^{−1.2} and w^y_ij = |g^y_ij(x^t)|^{−1.2}. To avoid division by zero we modify the weights to
w^x_ij = max(|g^x_ij(x^t)|, ε)^{−1.2}
w^y_ij = max(|g^y_ij(x^t)|, ε)^{−1.2}
In this exercise we use ε = 1e−5, and for the initial estimate x0 we use the solution of the Gaussian prior minimization. Technical note: assume that the input to the function is σ² = 2·obsvar (which is the correct estimate for the Gaussian case). Use this value
to compute x0 . Then to compensate for the inaccuracy of this estimation for the non-
Gaussian case multiply σ by 5 and proceed with the IRLS iterations. In all the questions
in this exercise we will perform 3 iterations of IRLS for the sparse deblurring.
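Put together, one IRLS iteration is just a reweighted version of the Gaussian solve from question 1. Again this is an illustrative NumPy/SciPy sketch rather than the required MATLAB code (the exponent −1.2 comes from α − 2 = 0.8 − 2):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_sparse_prior(A, y, eta, sigma, x0, n_iterations, Gx, Gy, eps=1e-5):
    x = x0.ravel()
    for _ in range(n_iterations):
        # Reweight from the current estimate, clamping tiny responses.
        wx = np.maximum(np.abs(Gx @ x), eps) ** -1.2
        wy = np.maximum(np.abs(Gy @ x), eps) ** -1.2
        lhs = (A.T @ A / eta**2
               + (Gx.T @ sp.diags(wx) @ Gx + Gy.T @ sp.diags(wy) @ Gy) / sigma**2)
        x = spla.spsolve(lhs.tocsc(), A.T @ y.ravel() / eta**2)
    return x.reshape(y.shape)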
3. Download the file ex2_q3.mat; this file contains three blur kernels ker_1, ker_2 and ker_3. It also contains 4 versions of the same image: im_1, the original image, and y1, y2, y3, the image im_1 blurred with the corresponding kernels plus noise. Use the clean image to estimate σ². A good estimate is 2·obsvar, where obsvar is the mean of the squared filter responses (g^x_ij(x))² and (g^y_ij(x))². You can also use the clean image to estimate the noise η².
(a) Estimate the noise for each one of the noisy images and the prior parameter σ 2 . Include
the estimated values in your report along with a brief explanation of how you obtained
the noise estimators.
(b) Deblur the blurry images using a Gaussian image prior and then a sparse image prior.
For each blur kernel j = 1, 2, 3 generate two images im_1_deblur_ker_j_Gauss and
im_1_deblur_ker_j_sparse where j is and index which should be substituted with
the value it assumes. Also generate a (3 × 3) image array of the following form.
Row 1 consists of the three blurred images. Row 2 shows the results of the Gaussian
deblurring and row 3 the results of sparse deblurring.
Save this image array as a .png image called ex2_q3_b.png.
(c) Investigate the effect of under and over estimating the noise parameter on debluring.
Use y2 to generate a (1 × 3) image array of the following form. The left most image
shows an example of deblurring when the noise is underestimated (multiply η by 0.2).
The central image is deblurring with the correct noise estimation and the rightmost
should show a result of deblurring with overestimation (multiply η by 10).
Save this image array as a .png image called ex2_q3_c.png. Also include a brief
discussion which explains your results and why you think they turn out as they do.
(d) Investigate the effect of using different blur kernels on debluring. Generate a (2 × 2)
image array of the following form. The left most image shows deblurring of y1 with
ker_1. The next image shows the results of deblurring y1 with ker_3. The next row
is deblurring of y3 with ker_1 and y3 with ker_3
Save this image array as a .png image called ex2_q3_d.png. Also include a brief
discussion which explains your results and why you think they turn out as they do.
4. Download the file ex2_q4.mat, this file contains a blur kernel and a blurry-noisy
image called ker and y respectively. Deblur using a Gaussian prior and a sparse prior. Use
η = 0.01 and σ = 0.09. Save the results as im_2_deblur_sparse and im_2_deblur_Gauss
Also save a (1×3) image array (blurry, Gaussian, sparse) as a .png image called ex2_q4.png.
In this exercise we perform the deblurring in the spatial domain, thus the computations
are pretty demanding. In particular you will need more memory than is usually available
on a home PC. Thus in this question you will have to perform the deblurring in overlapping
blocks which should be stitched together to form the complete image. There should be
an overlap so as to avoid boundary problems within the image (of course you will still get
boundary effects on the boundary of the stitched image). Another thing to note is that it
is important to give the function solveSparsePrior an initial guess which comes from the
final stitched image of the Gaussian deblurring part.
Submission instructions: Submit one .zip archive called
CVFall09_ex2_Name1_FamilyName1_Name2_FamilyName2.zip by sending an email to
daniel.glasner@weizmann.ac.il with the subject “CVfall2009 ex2”. This zip archive
should contain
1. A discussion of the results in pdf, ps or rtf format. The first page should include the
names of the submitters and your ID numbers.
2. A .mat file called ex2.mat which includes the deblurred images from question 3
(im_1_deblur_ker_j_Gauss and im_1_deblur_ker_j_sparse for j = 1, 2, 3) and
from question 4 (im_2_deblur_sparse and im_2_deblur_Gauss).
3. all the .png images namely: ex2_q3_b.png ex2_q3_c.png ex2_q3_d.png ex2_q4.png.
4. Two .m files implementing the functions from questions 1 and 2. and the scripts /
functions which you used to answer questions 3 and 4.
5. Please submit a hard copy of your code and report to the “computer vision” mailbox. | {"url":"http://www.docstoc.com/docs/22192619/Introduction-to-Computer-and-Human-Vision-Exercise-Set-2","timestamp":"2014-04-18T01:21:55Z","content_type":null,"content_length":"67129","record_id":"<urn:uuid:b46db933-21f4-46a1-be1b-804b71cca2b1>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
Last Diff Equation for today
I am asked to find the general solution, using the method of separating the variables of the following DE
dy/dx = (2x cos x)/y, where y > 0
Using the method of separating the variables I get:
∫ y dy = ∫ 2x cos x dx
which basically comes to (after a little working)
y²/2 = 2(cos x + x sin x)
but where do I go from here to get an integration constant?
2) If y = 2 when x = 0, find y in terms of x
Could someone help me on this one
3) Explain why your answer may not be used for x=pi. Comment in relation to the solution curve through (0,2).
$\frac{1}{2}y^2=2(\cos x+x\sin x)+C$
But $(x,y)=(0,2)$
Which gives,
$\frac{1}{2}y^2=2\cos x+2x\sin x$
$y=2\sqrt{ \cos x+x\sin x}$
You cannot use $x=\pi$ because you get a negative in the radical, which brings non-real numbers.
What about the relation to the solution curve through (0,2)?
PH found the general solution, then substituted the initial condition into the general solution to obtain his final solution.
If the general solution is y = 2√(cos x + x sin x), and substituting the initial condition y = 2 and x = 0, we get
2 = 2√(cos 0 + 0·sin 0)
Without checking in detail, my reading of PH's posting is that the general solution is:
$\frac{1}{2}y^2=2(\cos x+x\sin x)+C$,
into which he substitutes $y=2$ when $x=0$, to get
$\frac{1}{2}y^2=2\cos x+2x\sin x$
$y=2\sqrt{\cos x+x\sin x}$
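A quick symbolic check of the final answer (an illustrative sketch, not part of the original thread):

import sympy as sp

x = sp.symbols('x')
y = 2 * sp.sqrt(sp.cos(x) + x * sp.sin(x))
# The claimed solution should satisfy dy/dx = 2*x*cos(x)/y and y(0) = 2:
print(sp.simplify(sp.diff(y, x) - 2 * x * sp.cos(x) / y))  # 0
print(y.subs(x, 0))                                        # 2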
| {"url":"http://mathhelpforum.com/calculus/2749-last-diff-equation-today.html","timestamp":"2014-04-16T05:31:23Z","content_type":null,"content_length":"50283","record_id":"<urn:uuid:260784b0-b9b9-419b-971f-9ff5a057bcab>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Sum of the residuals
My question may sound trivial but here it goes:
When doing a sample regression by the ordinary least squares method, does the sum (nonsquared! ) of the residuals have to be equal to zero??
Here's what I know. If we have numerous "y observations" per x, one important assumption is that the residuals conditional on a given X follow an identical distribution usually with mean 0 (which
also suggests that the sum of the residuals is 0)
Σ_j e_ij = 0, where j is the iterating index and where
e_ij = (Y_j − Y(estimated)) for a given X_i
However when we do a sample regression we usually have one Y observation per X. With the ordinary least squares method we try to :
min Σ (e_i)^2
Does this however mean that the sum of the residuals will be equal to 0?
Σ e_i = 0 ??
Yes, the sum of the residuals will be zero (provided the model includes an intercept). You can see this below in the context of simple regression. Note that I am omitting the subscript i.
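The derivation the reply points to did not survive on this page; the standard argument, for a model fitted with an intercept, comes from the first normal equation: minimizing Σ_i (y_i − β_0 − β_1 x_i)² and differentiating with respect to β_0 gives −2 Σ_i (y_i − b_0 − b_1 x_i) = 0 at the optimum, i.e. Σ_i e_i = 0. (Without an intercept term this need not hold.)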
Re: Sum of the residuals
Ah that's great thank you
| {"url":"http://www.talkstats.com/showthread.php/16048-Sum-of-the-residuals","timestamp":"2014-04-21T16:01:03Z","content_type":null,"content_length":"54816","record_id":"<urn:uuid:e40c2867-67d0-45ea-82e0-82b3463aefaa>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Avondale Estates SAT Math Tutor
Are you struggling in Chemistry? Do you not understand the difference between mitosis and meiosis? Do you think that the square root of 69 is 8?
15 Subjects: including SAT math, chemistry, geometry, biology
I hold a bachelor's degree in Secondary Education and a master's degree in Education. I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High
School level in both private and public schools.
10 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I am a mentor at my high school, am involved in many honor societies, and have volunteered to teach at a homework club in an elementary school. If my students do not understand the way I am teaching, I will adjust my approach to suit them better. I am flexible with my schedule and I am always punctual.
14 Subjects: including SAT math, chemistry, geometry, biology
...I specialize in tutoring students for standardized test including the ACT/SAT/SSAT/high school graduation test. I also enjoy assisting students with general needs such as raising grades and
performance in the academic setting. Willing to tutor in person (Home environment, Library, Coffee Shopt, etc.) or online.
14 Subjects: including SAT math, calculus, geometry, biology
...I have taught middle school math for 3 years and taught high school math for 6 years. I also taught in the college environment for over 10 years and I am currently teaching Math. I have tutored
middle and high school math for 20+ years.
20 Subjects: including SAT math, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/avondale_estates_sat_math_tutors.php","timestamp":"2014-04-16T10:47:42Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:73f53d05-d5aa-46da-b1c9-b1423738b6b0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
South African Journal of Education
Print version ISSN 0256-0100
S. Afr. j. educ. vol.28 no.1 Pretoria Feb. 2008
Teacher learning about probabilistic reasoning in relation to teaching it in an Advanced Certificate in Education (ACE) programme
Faaiz Gierdien
Faaiz Gierdien is Lecturer in the Department of Curriculum Studies at the University of Stellenbosch. His interests span many domains in mathematics education, such as teacher learning (including
pre-service), teacher knowledge, and the connection between curriculum policy and teachers' practice. faaiz@sun.ac.za
I report on what teachers in an Advanced Certificate in Education (ACE) in-service programme learned about probabilistic reasoning in relation to teaching it. I worked 'on the inside' using my
practice as a site for studying teaching and learning. The teachers were from three different towns in the Northern Cape province and had limited teaching contact time, as is the nature of ACE
programmes. Findings revealed a complicated picture, where some teachers were prepared to consider influences of their intuitive probabilistic reasoning on formal probabilistic reasoning when it came
to teaching. It was, however, the 'genuineness' of teacher learning which was the issue that the findings have to address. Therefore a speculative, hopeful strategy for affecting teacher learning in
mathematics teacher education practice is to sustain disequilibrium between dichotomies such as formal and intuitive probabilistic reasoning, which has analogies in content and pedagogy, and subject
matter and method.
The research question, "What do teachers learn about probabilistic reasoning in relation to teaching it in an Advanced Certificate in Education (ACE) programme?", is the focus of this article. It has
several interrelated and overlapping levels. It is concerned with the importance of studying teacher learning in a teacher education programme, connecting probabilistic reasoning and probabilistic
reasoning teaching, and finally, studying teacher learning in relation to children's probabilistic reasoning. To address these different levels there is initially a review of literature on teacher
education and mathematics teacher education.
Teacher learning is a key research area in teacher education practice. In the current climate of curriculum policy in South Africa all teachers find themselves in situations where they learn the
policy rhetoric associated with 'outcomes-based education'. For example, they talk about assessment standards, learning outcomes and continuous assessment, to name but a few. South African curriculum
policy is quite ambitious about the 'kind of teacher that is envisaged' (Department of Education (DoE), 2003). For instance, teachers are to mediate, interpret and design learning programmes and
materials. These are examples of policy images of teachers (Jansen, 2001). How do teachers learn to teach in ways that are aligned with such ambitious policy rhetoric? More importantly, what is the
role of teacher educators in fostering teacher learning in such a policy environment? In policy debates in the United States, Ball and Cohen (1999) call for 'interweaving' (Ball & Bass, 2000) of
content and pedagogy in teaching and learning to teach. They contend that teacher educators should not only be interested in what teachers must know, but also how they must be able to use knowledge
(Ball & Bass, 2000) as they learn to teach. In their writing 'learning' and 'teaching' are deliberately put together because of the notion of 'interweaving.' They argue that knowing how to teach
entails more than simply applying prior understandings. Teacher educators must therefore take seriously the notion of teacher learning when it comes to aligning curriculum policy and classroom
practice. The fact of the matter is that curriculum reform in South Africa has no relation with pedagogical reform (Jansen, 2001). For teacher educators therefore the critical issue remains: how are
teachers going to learn to teach in ways that reflect curriculum policy?
As a way forward Ball and Cohen (1999) propose 'closing the gap' in teacher education with a focus on developing and using knowledge 'in practice'. This proposal means closing the gaps between
subject matter and method, and between content and pedagogy. This view builds on Dewey (1904/1964), who articulates the tension between subject matter and method, and points out a sophisticated and
subtle relationship between the two. Separating the two in teacher education programmes reduces teaching practice to the use of clearly stated recipes. In another sense it means there is a need for
teachers to 'learn in and from practice' (Ball & Cohen, 1999:10). What they mean by 'in practice' is not to be understood in a narrow, physical sense, e.g. in a school or in a university setting.
Wilson and Berne (1999) recommend that teacher learning be activated, rather than bound and delivered in the form of recipes and models. They regard creating and sustaining disequilibrium as a
requirement for teacher learning. On a similar point Lord (1994) theorises that a 'critical collegiality' will help teachers learn by increasing their comfort with high levels of ambiguity and
uncertainty, which will be regular features of teaching for understanding.
In mathematics teacher education practice we need to understand better what it means to teach both mathematics and teaching in the same programme (Adler, Ball, Krainer, Lin & Novotna, 2005). How do
teachers learn both mathematics, probability as in this study, and teaching in teacher education? This 'gap' is an instantiation of Dewey's (1904/1964) content and pedagogy and subject matter and
method dichotomies. In the Teacher Development Experiment, Simon (2000) worked on teachers' mathematical development and their pedagogical development. An enduring problem in mathematics teacher
education is to build both mathematics and teaching identities in teachers (Adler et al., 2005). This is a specific response to Jansen's (2001) policy images of teachers, which is more general. A
teacher could have a very clear understanding of probability, but that would not necessarily mean that he or she would be able to apply it during teaching. In a Deweyan sense the teacher would have to understand the subject matter of probability in relation to method. This point is taken up next in the case of probability in the mathematics curriculum.
Probabilistic reasoning research
Research reveals that the subject matter of formal, mathematical probability has its 'psychical roots' (Dewey, 1904/1964:162) in intuitive or subjective probability. For instance, Konold (1989)
points out that probabilistic reasoning is fraught with misconceptions or strong prior conceptions that are at odds with formal conceptions of probability. This has implications for the 'method' of
teaching and learning of probabilistic reasoning. For the purposes of this article, 'probabilistic reasoning' refers to those instances in the teaching and learning of probability concepts or notions
where explanation and reasoning are required. In the teaching of probability notions there should be a consideration of the nature and influence of 'subjective probabilities in the development of
formal probability concepts', in particular cases where inferences from the former come into conflict with those based on the latter (Hawkins & Kapadia, 1984:350). Teachers must learn to become aware
of, and extend, the two probabilities when it comes to teaching children. In fact Hawkins and Kapadia (1984) emphasise that the counter-intuitive nature of even simple probabilities needs to be borne
in mind when teaching probability to children. In this study many of the teachers wanted to know how they might teach probability to their learners. Furthermore, Hawkins and Kapadia (1984) emphasise
the importance of developing a better understanding of growth and communication in probabilistic notions. They see subjective probability as an expression of personal belief or perception and also as
a precursor to formal probability. An example of ignoring the psychical roots of probabilistic reasoning is where the scientist's formal mathematical probability is transposed as the subject matter
into the teaching situation, bound and delivered (Wilson & Berne, 1999). This means ignoring intuitive or subjective probability, i.e. separating subject matter from method. The result is producing
skill in action independent of any engagement of thought (Dewey, 1916/1966:178). It implies having the skill of computing formal probabilities without understanding why and how particular probability
formulas come about. Teachers who wish to learn to develop their awareness and that of children or learners in this regard in and through their teaching constantly struggle against situations where
formal knowledge comes into conflict with students' intuitive knowledge (Lampert, 1985; 1990; 2001).
Hawkins and Kapadia (1984) recognize no 'harsh dividing line' between the two, a move consonant with Dewey's (1904/1964) call for studying subject matter in ways that take it back to its 'psychical roots'. Similarly, Fischbein and Gazit (1984) argue for a teaching programme that aims at developing and improving probabilistic intuitions for probability concepts along with formal mathematical
probability concepts. They suggest providing learners with frequent opportunities to experience stochastic situations actively, even emotionally. Their argument is consonant with Lord's (1994) call
for enabling teachers to deal with high levels of ambiguity and uncertainty, in this case, probabilistic reasoning. Hawkins and Kapadia (1984:358-359) also refer to 'misconceptions' and give the
famous historical example of the possibility of obtaining a head and a tail when tossing two coins. They observe that a number of mathematicians have assigned a probability of 1/3 as they have
erroneously assumed an equally likely sample space of three possibilities (two heads, two tails, or a head and a tail). Bennie (1998) refers to this example under 'distinguishing outcomes' and found
that teaching that involved systematic listing and classroom discussion was useful in trying to counter this misconception.
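Systematic listing makes the error visible at once: with order recorded, the equally likely outcomes for two fair coins are HH, HT, TH and TT, so P(one head and one tail) = 2/4 = 1/2. The answer 1/3 results from collapsing the ordered outcomes HT and TH into the single unordered outcome 'a head and a tail'.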
In South Africa little work has been done in terms of gathering information from teachers at the in-service level when it comes to stochastics, i.e. probability and statistics. Laridon (1995) studied
intuitive probability concepts in South African adolescents, while Kazima (2000) and Kazima and Adler (2006) studied students' perceptions of fairness in probability games. Quite some time ago
Shaughnessy (1992) pointed out the need to unravel teachers' probability concepts as an area in which little or no research exists and for which data could be of assistance to those making decisions
regarding the professional development needs of teachers.
On the subjects in the study and the teaching context
The subjects in the study were intermediate and senior phase teachers who were registered for an ACE which was administered by a higher education institution in the Western Cape. There were 50
teachers in total enrolled in a mathematics education module in the ACE programme, scattered over three teaching venues. They came from rural and urban areas in the Northern Cape province of South
Africa. They all had a Grade 12 or matriculation certificate and three years of teacher training college education. Their experience in the classroom ranged from being novices to mid-career teachers.
They had never done any tertiary-level study in mathematics in general, nor had they done any tertiary-level courses on data handling, specifically courses in statistics and probability. They may be
described as 'generalists' with a professional training mainly in pedagogy. They formed part of the majority of teachers in the South African education system, amounting to about 77%, who have a
three-year post-school level or a Relative Education Qualification Value (REQV) of 13 (a Diploma in Education), which obviously impacts on school mathematics reform. The policymaking community is
well aware of this phenomenon and has called for adequate planning to ensure that recruitment drives and programme design take into account the actual needs of the school sector, in terms of scarce
subject areas (Mathematics, Physical Science and Technology) and the capacity of teachers to implement outcomes-based approaches to teaching, learning and assessment. For example, the current
qualification framework has raised the minimum qualification requirement for all new teachers from a three-year post-school level (REQV 13) to a four-year professional degree level (REQV 14) (DoE,
2005). Through ACE programmes the Department of Education of the South African government makes funds available to 'upgrade' and 'reskill' a selected number of teachers who do not have a university qualification, so that they can learn to teach the different 'learning outcomes' (LOs) in the mathematics component of the Revised National Curriculum Statement (RNCS) (DoE, 2003). For this reason it is
important to study how these teachers with an REQV of 13 learn to reflect on their learning about probabilistic reasoning, albeit on a small scale as is the case in this study.
Teachers enrolled in the ACE programme took modules in all of the five LOs in the mathematics component of the RNCS as well as other elective modules. According to the RNCS, the LOs are
numbers, operations and relationships;
patterns, functions and algebra;
space and shape;
measurement; and
data handling.
I taught a module called Mathematics for Teaching on 'Learning Outcome 5' (LO5) on data handling (DoE, 2003) to the teachers. This was the first and only module that I taught them. It was also the
last module in the mathematics sequence of the ACE programme according to bureaucratic arrangements. On completion of the module the teachers had no further contact with me. It was one of several
modules that they had to take to earn an ACE. The module had a total of four contact sessions of three hours each, followed by a final examination.
The module was offered in three towns, P, Q and R, in the Northern Cape province of South Africa. I was part of a team of lecturers who taught the other modules in the ACE programme and who travelled to these towns in rented cars. During a period of one week, starting on a Monday and ending on a Friday, I taught the module in the three different towns P, Q and R. This meant that I travelled between these
towns during that particular week. During the same week teachers in the different towns had lectures ranging from the Mathematics for teaching module on a particular day(s) to other modules offered
by the other lecturers in the team. The contact sessions for the module occurred once a month over a three-month period, giving a total contact time of 18 hours for each town. Table 1 shows in which
towns the module was offered and on which days of the week and the total contact time for the complete module.
I had not taught previous Mathematics for teaching modules on the other LOs to the same cohort of teachers.
On method and data
In this study 'method' has two meanings. One is related to my teaching of the module on data handling, and the other to the way I went about researching my teaching and presenting my findings. In
terms of the latter I 'worked on the inside' using my own teaching of the module as a site to study teaching and learning (Ball, 2000). Such a research genre requires 'distance' (Adler et al., 2005),
i.e. critical perspective in terms of reporting and analysing findings. Later on in the article this perspective will be used in a discussion in relation to the findings. In teaching the module an
overall method I used was to incorporate statistical reasoning and probabilistic reasoning with an explicit focus on 'for teaching' in line with the title of the module and literature on the
importance of 'interweaving content and pedagogy' in teacher education programmes (Ball & Cohen, 1999; Ball & Bass, 2000; Adler et al., 2005). Furthermore, I attempted to follow, with major
modifications, the guidelines expounded by Simon (2000), such as addressing questions about 'genuineness' and 'legitimacy' when it comes to presenting teachers' self-reports as data or findings.
To elaborate further on the method I used in my teaching, a brief exposition of the policy rhetoric on data handling, probability in particular, follows. According to the RNCS, the 'learning outcome'
for data handling states that:
[the] learner will be able to collect, summarise, display and critically analyse data in order to draw conclusions and make predictions, and to interpret and determine chance variation (DoE, 2003).
This policy statement can be connected to literature on the teaching and learning of probabilistic reasoning at the school level. Units from the Connected Mathematics Project (Lappan, Fey,
Fitzgerald, Friel & Phillips, 1997), such as Data around us, What are my chances and How likely is it?, turned out to be useful in terms of giving meaning to this policy statement. These units are from a
middle grades curriculum project in the United States and had to be changed to match local conditions. One of my explicit goals for this module was to direct the teachers' attention to ambiguity and
complexity in stochastics in line with the literature reviewed earlier on. I had hoped to find out what they learned about probabilistic reasoning during the module and how they reflected on their
own learning about it with respect to their future teaching. The reason for the latter was my research interest in teacher learning with respect to curriculum policy in general and probabilistic
reasoning in particular.
The data in this study include the following:
My reflective notes on the mathematics I taught and my teaching;
The subjects' educational and biographical backgrounds from the higher education institution that administered the ACE programme;
Data from the Department of Education on teachers with a post-school level REQV 13, reported earlier on (DoE, 2005);
Teachers' responses to the questionnaire that I administered during the last contact session (see Appendix).
Teachers' discussions and debates with me and with their peers in the ACE programme should be seen as happening 'in practice,' meaning that they were capable of 'learning from practice'. This line of
reasoning is consonant with the literature on teacher learning reviewed earlier on. For example, during the middle month of the three-month teaching contact session, I designed a set of tasks that
highlighted the differences between subjective or intuitive probability, and formal or mathematical probability. The theoretical intent of this design was to put theories of teacher learning 'in
harm's way', i.e. to test theories of teacher learning with respect to the counter-intuitiveness of probabilistic reasoning. These tasks represent instances where I sought alignment with policy
rhetoric about learners having to 'critically analyse, make predictions, and to interpret and determine chance variation' (DoE, 2003). In the context of my teaching the 'learners' should be viewed as
the teachers taking the module.
In one particular set of tasks the teachers grappled with ways in which intuitive probability interacts and collides with formal probability, leading to Pascal's Triangle as a central object. Pascal's
Triangle has significant mathematics encoded in it that unifies many of the different 'learning outcomes' in the RNCS. Examples of questions that the tasks included are:
What is the probability of getting one head / one tail when tossing:
One coin?
Two coins?
Three coins?
Four coins and so on.
Explain your reasoning in each case.
The prompts have two purposes, namely, mathematical development and pedagogical development with respect to probabilistic reasoning. Another example is:
What is the probability that there will be one boy in a family of ...
One child?
Two children?
Three children?
Four children?
The formal probabilities arising in the coin investigations can be summarised as in Figure 1, which leads to Pascal's Triangle (Figure 2).
In the investigation on the coins a significant incident occurred which gave rise to the idea of researching teacher learning of probabilistic reasoning in relation to teaching it. In this incident
all the teachers in the three different teaching venues assumed that the probability of getting a head and a tail when tossing two fair coins is 1/3. This incident captured a misconception in
probabilistic reasoning (Hawkins & Kapadia, 1984). A fair coin is one for which the formal probabilities of getting heads and tails when tossing the coin are the same. It was then that I developed the idea
of a questionnaire as a means to generate data on teacher learning. The prompts in the questionnaire attempted to focus the teachers' attention on the misconception which occurred during my teaching
of the coins investigation elaborated in the above. It specifically required teachers to spell out their reasoning and how that reasoning might be taken into account should they teach a similar
investigation where intuitive and formal probability interact. It should be noted that not all the teachers in the different towns completed the questionnaire.
The written responses of the teachers TA, TB, TC and TD are presented as findings because they reflect an interesting variation on what teachers who were enrolled in the ACE module on data handling
learned. One of the teachers wrote the following:
P(head and tail) = 1/3. Why do you think this is so?
Nobody really knew how to reason about this.
This excerpt is indicative of the extent of the engagement and disequilibrium the teachers experienced when I taught probability.
Variation in teacher learning with respect to the first prompt ranged from figuring out how to distinguish between different outcomes (heads and tails) to finding a way from intuitive probability to formal or mathematical probability by interpreting the numbers in Pascal's Triangle. The first prompt focused on promoting teachers' mathematical development with respect to probabilistic
reasoning. It required the teachers to give reasons why the misconception occurred:
In the case of two coins there was a misconception about the probability of getting a head and a tail. Everyone in the class agreed that this P(head and tail) = 1/3. Why do you think this was so?
Below is what the teachers wrote as a response:
TA Because we did not take the position of the coin into consideration. HT/TH is the same but position plays a role. Hence the confusion.
TB I was not aware that the order of the coins must also be considered.
TC Intuitive thinking without actual activities.
TD It now seems stupid with the understanding of Pascal's Triangle.
TA and TB had learned that the psychical roots of their probabilistic reasoning based on intuitive notions were in conflict with mathematical probability. They gave reasons for the misconception in
terms of formal or mathematical probability, namely, the position or order of the outcomes, heads/tails and tails/heads. Their responses showed their mathematical development in probabilistic
reasoning. They had learned to distinguish between the outcomes heads/tails and tails/heads in the case of tossing two coins. TA regarded the tension between intuitive and mathematical probability
when tossing two coins as 'confusion.' TB became 'aware that the order of the coins must also be considered' if mathematical probability were to be considered. TA and TB realised the mathematical
significance of distinguishing between heads/ tails and tails/heads. Evidence for the psychical roots of probabilistic reasoning is captured in the words 'confusion' and 'aware that the order of the
coins must also be considered'. During their investigation of finding the probabilities when tossing several coins, they experienced the sophisticated and subtle relationship between intuitive and
formal probability. Here the subject matter of probabilistic reasoning became interwoven with their method. In their method they had an opportunity to notice the difference between heads/tails and
tails/heads. This they were not aware of during the investigation itself.
Teacher learning was activated to a point that reveals disequilibrium as can be seen in TC's and TD's responses. During the coin-tossing investigation I resisted transposing the scientist's formal or
mathematical probability as the subject matter into the teaching situation, bound and delivered. Not all the teachers readily accepted the distinction between heads/tails and tails/heads. They only
agreed after seeing how the structure in the positions of the heads and tails leads to Pascal's Triangle. Teachers' 'intuitive thinking' as indicated by TC prevailed at the beginning when they
settled for P(head and tail) = 1/3. I encouraged all the teachers in each of the teaching venues in the different towns to discuss and to confer on whether the positions of the heads and tails mattered through 'actual activities', as TC wrote. TC's response highlighted the psychical roots of the subject matter of probability, i.e. the influence of intuitive probabilities ('intuitive thinking') in
the development of mathematical probabilities. During the investigation the teachers tossed different numbers of coins and recorded the outcomes. These can be considered as 'actual activities'
although not sufficient for them to be convinced that P(head and tail) = 1/2 according to mathematical probability. There was thus disequilibrium between 'intuitive thinking' and 'actual activities'. In the case of tossing two coins the teachers would have had to run a simulation, via information technology, using the law of large numbers to be convinced that P(head and tail) = 1/2 when
tossing two coins. No information technology was used during the teaching of the ACE module. What is evident from the responses of TA, TB and TC was that the teachers had opportunities to experience
a stochastic situation actively and even emotionally. Disequilibrium in the form of 'confusion' as reported by TA was captured in TD's response, where she or he wrote 'It now seems stupid with the
understanding of Pascal's Triangle'. This is an emotive response, which Fischbein and Gazit (1984) suggest as a means to improve intuitive probability along with mathematical probability. TD's response appears to indicate a resolution of the disequilibrium: 'It now seems stupid with the understanding of Pascal's Triangle' (emphasis added). Pascal's Triangle is a mathematical object that
can be interpreted as a mathematical summary of the formal probability outcomes when tossing one coin, two coins, three coins, and so on. A particular understanding of the numbers in Pascal's
Triangle also connects it to algebra, probability and combinations. TD's response shows that he or she has an understanding of the row 1 2 1 in Pascal's Triangle:
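A plausible reconstruction of the display is the first rows of the triangle,
1
1 1
1 2 1
where the middle entry of the row 1 2 1 counts the two ordered outcomes HT and TH out of four equally likely outcomes, so that P(head and tail) = 2/4 = 1/2.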
It can be inferred that TD had acquired such an understanding.
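The simulation via information technology mentioned above need not be elaborate. A hypothetical sketch in R (the variable names and sample size are illustrative; nothing of the kind was used in the module) is:
# Simulate n tosses of two fair coins
set.seed(1)
n <- 10000
coin1 <- sample(c("H", "T"), n, replace = TRUE)
coin2 <- sample(c("H", "T"), n, replace = TRUE)
# The event 'one head and one tail' occurs exactly when the coins differ (HT or TH)
mixed <- coin1 != coin2
# By the law of large numbers the relative frequency approaches 1/2, not 1/3
mean(mixed)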
After the coin-tossing investigation all the teachers in the different towns were able to distinguish between different outcomes, but not without some uneasiness and disequilibrium prevailing. From
my notes I recall posing the question: what is the probability that there will be one boy in a family of four children? Many teachers answered 1/4, while others said 1/16. For the event 'exactly one boy' the row 1 4 6 4 1 of Pascal's Triangle gives 4/16 = 1/4, whereas 1/16 is the probability of any single birth order, such as a boy followed by three girls; such distinctions are counter-intuitive for those, whether they are adults or children, who do not know the subtlety and sophistication in probabilistic reasoning. For this question the teachers had brief exchanges during which some teachers pointed out how intuitive and mathematical probability 'come together' and how they can be 'confusing'.
The second prompt,
How do you think this misconception will affect your teaching of the 2 coins probability?
aimed at finding out whether the teachers would consider the psychical roots of intuitive probabilistic reasoning in the case of the 2 coins when it came to teaching. Below is a list of teacher
responses to this prompt:
TA It only broadened the way I think things were, but now I have to take into account the role of the positioning of the coins.
TB The learners will first have to figure out on their own, before I will lead them to the correct way of probability.
TC This might result in giving wrong information to learners, because you did not test the validity of your information.
TD The misconception was straightened out and a mutual understanding was reached and thus no misconception after thorough examples and explanations from the lecturer.
For the first prompt TA noted that he or she would take into consideration the position of the coins. For the second prompt TA wrote that the misconception had 'broadened' the way he or she thought '
things were' and learned to discern the 'role of the positioning of the coins' and may take it into account when it comes to teaching. This could mean that he or she would enable children to
experience the subtle and sophisticated relationship between intuitive and mathematical probability. It is hard to say, because there were no follow-up interviews with any of the teachers in the
different towns. TB's response, however, was more explicit about a teaching strategy, i.e. 'learners will first have to figure out on their own'. It seemed as if he or she thought of mathematical
probability as 'the correct way of probability'. Alternatively, 'the correct way of probability' could mean that this teacher would have his or her learners 'figure out on their own' and thus
experience the psychical roots of intuitive probabilistic reasoning and its influence on mathematical probability. TB appeared to be receptive to an organised and deliberate investigation (Lord,
1994) of probabilistic reasoning when it comes to teaching it in the case of tossing 2 coins. This could also show a willingness to avoid a rush towards mathematical probability. On the other hand,
what TB wrote may simply be due to the effect of discussion and debate that occurred during my teaching of probabilistic reasoning. In relation to teaching, TC saw the misconception as 'giving wrong
information to learners'. 'Wrong information' could point to the belief that mathematical probability is the 'right information' despite the fact that all the teachers experienced the influence of
subjective or intuitive probabilistic reasoning in the development of mathematical probability, i.e. in their experience there was no harsh dividing line between the two. 'Validity of information'
could mean that the teachers tell their learners that there are two possible answers, P(head and tail) = 1/2 and P(head and tail) = 1/3, and that they must decide which is correct and support
their answers with reasons. Testing 'the validity of your information' would therefore be an interesting way to explore probability.
During my teaching, mathematical probability only became evident after experimentation and discussion. Teachers then checked the validity of each other's information and some agreed while others
disagreed. My teaching was not simply a case of giving information about 'correct' or mathematical probability. I designed instruction so that teachers would engage and encounter a tension between
intuitive and mathematical probability. TC would be correct in terms of mathematical probability, meaning that the misconception should be avoided if mathematical probability were to be the sole
objective of teaching. 'Test [ing] the validity of your information' was what happened during my teaching when the teachers debated whether the order or position of the coins mattered. For the first
prompt TC wrote that the misconception came about because of 'intuitive thinking without actual activities'. It could be that he or she continued to see mathematical probability as a result of the 'actual
activities'. TD saw the misconception 'straightened out' through 'thorough examples and explanations from the lecturer'. His/her use of 'mutual understanding' could be an indication that an
understanding came about where the influence of intuitive probability on mathematical probability could not be ignored when one teaches an investigation of the 2 coins probability for the first time
with no awareness of the 'correct' probability.
It was hard to infer what TC and TD would actually do when their learners said the probability of getting a head and a tail when tossing two fair coins is 1/3, according to intuitive probability.
Furthermore, there are also no data on their actual teaching of the 2 coins probability. TC's learning seemed to be pulled in the direction of mathematical probability. Phrases such as 'wrong
information' and 'validity of information' are evidence for this claim. Also, it seemed especially unlikely that TD would see a role for intuitive probability in the development of mathematical
probability concepts, especially in instances where inferences from the former are in conflict with those of the latter. Phrases such as 'The misconception was straightened out' and 'thorough
examples and explanations from the lecturer' support this claim, but cannot be viewed as conclusive evidence. If TC and TD were to teach in ways more or less similar to their own experiences in
probabilistic reasoning, then it would be very likely that they would encounter this misconception, and would then have to address it in their classes. The third prompt aimed at getting to such a
classroom situation.
The third prompt was more explicit in terms of asking teachers what they would do in their teaching because of the use of the word 'address':
What would you do to address this misconception when you one day teach this 2 coins probability problem?
TA A practical demonstration would be ideal for the learners to see the position of the coins.
TB I would let the learners 'play' around to find out the probabilities and let them write the findings. Further, I would let them 'mark' the coins as coin 1 and coin 2.
TC I will give then practical exercises to do; they will have to engage in tossing coins practically and record the findings, make observations and come to conclusions.
TD Let the learners come up with a response; if they're incorrect, confess that I did the same mistake and show them how Pascal's Triangle can help them with the coin problem.
If a 'practical demonstration' amounted to showing children in a straightforward way that the position of heads/tails is different from tails/heads, then it means that formal probability will be
reached very quickly in terms of teaching it. TA would thus not be prepared to educate children or learners about the tension between intuitive and formal probability. On the other hand, if it meant
providing learners with opportunities to 'play,' 'make observations' and 'come to conclusions' or 'come up with a response' and thus experience the ambiguity and uncertainty in the misconception,
then it would imply explicitly educating learners about the tension between intuitive and formal probability. '[M]ark the coins as coin 1 and coin 2' was not what I did during my teaching. This was
a suggestion that some teachers came up with as a way to resolve the tension between intuitive and formal probability in the case of tossing two or more coins. In fact, TA, TB, TC and TD's collective
responses to the third prompt could be viewed as evidence for ways they might counsel children. In one way or another they want to counsel children on the tension between intuitive or informal and
formal probabilistic reasoning. Guidelines that I, together with the teachers, came up with were 'play,' 'make observations,' 'to come to conclusions' and 'let the learners come up with a response'.
These are consistent with developing informal conceptions of probability. They are also pedagogically and psychologically responsive to ways of fostering children's conceptions of probability and
take into account difficulties that some children might encounter in probabilistic reasoning.
TD's 'confess that I did the same mistake' regarding the misconception showed evidence of a certain 'comfort level [with] ambiguity and uncertainty' (Lord, 1994) and an admission of the psychical
roots of intuitive probability in relation to formal probability. This claim is especially evident in the use of the word 'mistake'. This is the same teacher who wrote: 'it now seems stupid with the
understanding of Pascal's Triangle'. What was still not clear from TD's writing in the third response was whether he/she would organize probabilistic reasoning teaching in the case of the coins so
that Pascal's Triangle comes at the end of several investigations or whether it comes out of thin air as a way to cope with the tension between intuitive and formal probabilistic reasoning.
Understanding the meanings of the numbers in Pascal's Triangle would certainly help in coping with the ambiguity and uncertainty in the misconception, leading to the correct answer of 1/2 according
to formal probability in the case of tossing two coins.
The fourth prompt was further aimed at structuring teacher learning on the interplay between intuitive and formal probability regarding the outcomes of tossing two fair coins:
What was difficult or unclear about the 2 coins probability question?
TA Position of the coins.
TB BLANK
TC There was nothing unclear; it's just that we did not really think deeply on what was really asked.
TD Logical thinking was needed and after a hard day's work, logical thinking goes down the drain.
If the teachers in all the classes were told explicitly to consider the position of the coins, they would probably have been able to distinguish heads/tails as different from tails/heads. This was not what I did at the beginning when I introduced probabilistic reasoning when tossing different numbers of coins. The importance of the positions of the coins came as a result of discussion and
debate, in line with the idea of designing, experimenting and studying teacher learning with respect to tension between intuitive or subjective and formal probabilistic reasoning. TA clearly
indicated that he/she was now aware of the position of the coins and saw this as a difficulty that caused 'confusion' (see TA's response to the first prompt). He/she was more aware of a method of
resolving this difficulty by focusing on the position of the coins through 'a practical demonstration' (see TA's response to the third prompt). TB did not have a written response, which I only
realised afterwards. TC's articulation 'we did not really think deeply' seemed to show an awareness that he/she had become sensitized to what was 'really asked'. Did TC show signs of a habit of
thought where he or she might 'think deeply' on what was really asked? It is difficult to say. We can become 'unclear' even in the case of simple probabilities because of their counter-intuitive and
ambiguous natures. TD's reference to 'logical thinking' could imply an awareness that was analytical and amenable to distinguishing between outcomes such as heads/tails and tails/heads. This could mean discerning intuitive aspects of probabilistic reasoning from formal ones. On the other hand, using 'logical thinking' could mean getting straight to formal probability. The question was whether TC's and TD's reports were merely expressions of their desire to cope with, or to ignore, the misconception they as part of the class encountered during my teaching. What was evident, however, from all
the teachers' responses to the prompts was that some teachers learned that there is a subtle and sophisticated relationship between intuitive and formal probability because of their experiences
during my teaching. This is consonant with Dewey's notion of the 'psychical roots' of subject matter and method, content and pedagogy.
What we have in these limited excerpts is evidence of teacher learning in probabilistic reasoning in relation to teaching it. They are instances of 'closing the gap' between intuitive and formal
probability as reflected in the teachers' and children's probabilistic reasoning as reviewed in the literature.
Having made these arguments about teacher learning, however, a couple of caveats are in order. First, should the findings reported be taken seriously? After all they are based on a small-scale
qualitative study. If we consider Shaughnessy's (1992) call for the need to unravel teachers' probability concepts, then a small-scale study like this one is a good place to start. The findings
provide us with knowledge of effects of putting teacher learning theories with respect to probabilistic reasoning and its teaching 'in harm's way'. The findings drew on the empirical and on
literature related to how teachers learn in general and how they learn some of the counter-intuitiveness associated with probabilistic reasoning in relation to teaching it in particular. In teaching
the module ACE Mathematics for teaching I designed ways to promote the development of the teachers as a means to study their development. A credible explanation for the teacher learning reported in
this study can therefore be attributed to the focused investigation on the tension between intuitive and formal probability. Most of the teachers in the different towns P, Q and R admitted that
they had never taught probabilistic reasoning in ways that they experienced during the module. For example, when asked to comment, one of them wrote the following:
(f) Comment on any part of what you learned in DH.
To be honest, I've never taught Data Handling/Probability for more than 2 days. I, firstly, got bored, but now I'm looking forward to it.
Also, a careful read of the written responses shows how the four teachers became aware of the tension between intuitive and formal probabilistic reasoning and what they hypothetically might want to
do to address this tension or misconception when it comes to teaching it to children.
A second reason for taking the findings seriously is that they provide us with opportunities to understand better the particular cases of ACE programmes in which mathematics education modules are offered. It seems natural that an interest in particularisation (small-scale studies such as this one) precedes generalisation, i.e. large-scale studies of teacher learning of probabilistic
reasoning in relation to teaching it. The findings are a good starting point for working with perhaps the same teachers, in particular because they can compare their situation with what they wrote
then. The findings can give principals, education bureaucrats and policy makers an authentic view of the limitations of ACE programmes and what is possible within them. Moreover, the findings can be
used to show policy makers how complex teachers' learning about probabilistic reasoning really is.
How does one defend the fact that the findings are about four teachers out of a total of fifty teachers who registered for the Mathematics for Teaching module in the ACE programme? This concern is
also connected to qualitative small-scale studies. Not all the teachers completed and handed in the questionnaire. It must be borne in mind that my teaching happened in 'real time' as far as the ACE
programme was concerned. Sustained contact time with the teachers during and beyond the ACE programme was not possible. Some of the teachers taught and lived in outlying rural towns. There was no
large-scale, external funding to support the research reported.
A third reason why the findings should be taken seriously is because they are about a teacher population, albeit very small, that is a subset of the vast majority of teachers at the REQV 13 level in
South Africa. To date we know very little about how this teacher population understands and learns probabilistic reasoning as expounded in the current curriculum policy of the South African
Department of Education. ACE programmes are specifically targeted at such teachers as a means to familiarise them with the policy statements in RNCS. In terms of mathematical and pedagogical
development the findings of the four teachers give an idea of what REQV 13 level teachers might wish to do beyond the 'upgrading' and 'reskilling' of the ACE programme. Moreover, at a practical level
these findings speak directly to the types of problems that teachers might have to address in the course of their work when they wish to introduce probabilistic reasoning to children. If the teachers
in the study were to go a route similar to their experience during the teaching experiment, they could arrive at what is called Pascal's Triangle. An insightful understanding of Pascal's Triangle
will show that it can serve to unify several of the separately stated so-called learning outcomes in the mathematics curriculum. They are likely to revisit their own experiences in probabilistic reasoning encountered in the ACE programme, which the literature connects to children's probabilistic reasoning. The latter was a definite concern many of the teachers raised during the contact sessions.
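For instance, the row 1 3 3 1 of Pascal's Triangle simultaneously gives the coefficients of (a + b)^3 (patterns, functions and algebra), the numbers of ways of choosing k items from three, and the outcome counts for tossing three coins, so that P(exactly one head in three tosses) = 3/8 (data handling).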
More importantly, it is necessary to adopt a sceptical stance, i.e. 'distance' towards the findings because they are self-reported data. In other words, there is a need to regard the findings as
'speculative.' How should one understand 'speculative' in the case of these findings? In the teacher learning self-report data, notions of legitimacy and genuineness come into play. The latter are
taken from Simon's (2000) research on the development of mathematics teachers. It is legitimate for the teachers to respond to the prompts in the questionnaire, i.e. the teachers are the appropriate
persons to report on their learning. However, it should be noted that in the context of my pedagogy in the module Mathematics for Teaching the teachers were likely to develop conceptions of the
idealised participant. I pointed out to the teachers how the subject matter of formal probability has its psychical roots in intuitive probability. Most of the teachers therefore became aware of
their intuitive probabilistic reasoning in the case of tossing different numbers of coins and how that differed from formal or mathematical probability. I made them aware of the possibility that
children might experience the same tension between intuitive and formal probability in the case of the tossed coins and other probability concepts in general. The teachers' various written statements
should therefore not be seen as evidence of understanding. As particular findings these statements have to be treated with a critical perspective because I was 'working on the inside', studying my own teaching (Ball, 2000).
Specifically, the genuineness of the self-reported data in the findings should be questioned. For example, in a final comment written at the end of the questionnaire, one of the teachers wrote the following:
Thank you I understand this for the first time in my life !!
This looks like the teacher recorded his or her insight with respect to the complexity of probabilistic reasoning in relation to teaching it. It has to be treated with scepticism, however, and cannot
be viewed as a deep personal commitment to understanding the complexity of probabilistic reasoning. In research where one 'works on the inside' there is always the seduction of simple-minded
enthusiasm. Is this teacher saying that he or she is now convinced of the ambiguity or counter-intuitive nature, the psychical roots, of even simple probabilities when it comes to teaching? It is
hard to say definitively. Data that reflect teachers' involvement in teaching and/or learning situations involving probability concepts from which inferences can be made are needed. The design of ACE
programmes does not enable the occurrence of such situations.
Noting the limitations of ACE programmes in general and their noble goals of 'upgrading' and 'reskilling' teachers with an REQV 13, these findings do illuminate our understanding of possibilities
about how to align teacher learning with respect to probabilistic reasoning in relation to teaching it. If we were to move beyond speculation, then sustaining disequilibrium between dichotomies, such
as subject matter and method and intuitive and formal probability, appears to be a viable option in terms of studying teachers' 'learning to teach' probabilistic reasoning.
I express my appreciation to the anonymous reviewers for their helpful comments. Also, I thank the teachers with whom I worked.
References
Adler J, Ball DL, Krainer K, Lin F-L & Novotna J 2005. Reflections on an emerging field: Researching mathematics teacher education. Educational Studies in Mathematics, 60:359-381.
Ball DL 2000. Working on the inside: Using one's own practice as a site for studying teaching and learning. In: Kelly A & Lesh R (eds). Handbook of research design in mathematics and science education. Mahwah, NJ: Lawrence Erlbaum Associates.
Ball DL & Bass H 2000. Interweaving content and pedagogy in teaching and learning to teach: Knowing and using mathematics. In: Boaler J (ed.). Multiple perspectives in mathematics teaching and learning. Westport, CT: Ablex Publishing.
Ball DL & Cohen DK 1999. Developing practice, developing practitioners: Towards a practice-based theory of professional education. In: Darling-Hammond L & Sykes G (eds). Teaching as the learning profession: Handbook of policy and practice. San Francisco: Jossey Bass.
Bennie K 1998. The 'slippery' concept of probability: reflections on possible teaching approaches. Paper presented at the 4th Annual Congress of the Association for Mathematics Education in South Africa (AMESA), Pietersburg.
Department of Education (DoE) 2005. Teachers for the future: Meeting the teacher shortages to achieve education for all. Pretoria: Department of Education.
Department of Education (DoE) 2003. Revised national curriculum statement: Grades R-9 (Schools): Mathematics. Pretoria: Government Printer.
Dewey J 1916/1966. Democracy and education. New York: Free Press.
Dewey J 1964. The relation of theory to practice in education. In: Archambault R (ed.). John Dewey on education. Chicago: University of Chicago Press (Original work published in 1904).
Fischbein E & Gazit A 1984. Does the teaching of probability improve probabilistic intuitions? Educational Studies in Mathematics, 15:1-24.
Hawkins AS & Kapadia R 1984. Children's conceptions of probability: a psychological and pedagogical review. Educational Studies in Mathematics, 15:349-377.
Jansen JD 2001. Image-ining teachers: Policy images and teacher identity in South African classrooms. South African Journal of Education, 21:242-245.
Kazima M 2000. Students' perceptions of fairness in probability games. African Journal of Research in SMT Education, 10:25-35.
Kazima M & Adler J 2006. Mathematical knowledge for teaching: adding to the description through a study of probability in practice. Pythagoras, 63:46-59.
Konold C 1989. Informal conceptions of probability. Cognition and Instruction, 6:59-98.
Lampert M 2001. Teaching problems and the problems of teaching. New Haven: Yale University Press.
Lampert M 1990. When the problem is not the question and the solution is not the answer: mathematical knowing and teaching. American Educational Research Journal, 27:29-63.
Lampert M 1985. How do teachers manage to teach? Perspectives on problems in practice. Harvard Educational Review, 55:178-194.
Lampert M & Ball DL 1998. Teaching, mathematics and multimedia: Investigations of real practice. New York: Teachers College Press.
Lappan G, Fey JT, Fitzgerald WM, Friel SN & Phillips ED 1995. Connected mathematics project. White Plains, NY: Dale Seymour Publications.
Laridon P 1995. Intuitive probability concepts in South African adolescents. Pythagoras, 37:25-29.
Lord B 1994. Teachers' professional development: Critical colleagueship and the role of professional communities. In: Cobb N (ed.). The future of education: Perspectives on national standards in education. New York: College Entrance Examinations Board.
Shaughnessy JM 1992. Research in probability and statistics: Reflections and directions. In: Grouws DA (ed.). Handbook of research on mathematics teaching and learning. New York: NCTM & Macmillan.
Simon MA 2000. Research on the development of mathematics teachers: The teacher development experiment. In: Kelly A & Lesh R (eds). Handbook of research design in mathematics and science education. Mahwah, NJ: Lawrence Erlbaum Associates.
Wilson SM & Berne J 1999. Teacher learning and the acquisition of professional knowledge: An examination of research on contemporary professional development. In: Iran-Nejad A & Pearson PD (eds). Review of Research in Education 24. Washington, DC: American Educational Research Association.
Appendix
I am interested in finding out your understanding of the kind of teaching I am doing in Data Handling (DH) in Learning Outcome 5. I am therefore asking your permission to respond to my questions regarding my teaching of DH.
Do you agree to participate in this research project?
Please circle: Yes No
You do not have to fill in your name anywhere.
Please take a few minutes of your time to respond to the following questions.
I am interested to find out
What has helped your learning of probability?
What has hindered your learning of probability?
How many years have you been teaching?
What grades do you currently teach?
How many kilometres do you drive to attend this class?
In the case of 2 coins there was a misconception about the probability of getting a head and a tail. Everyone in the class said that this P(head and tail)
= 1/3. Why do you think this is so?
How do you think this misconception will affect your teaching of the 2 coin probability?
What would you do to address this misconception when you one day teach this 2 coin probability problem?
What was difficult or unclear about the 2 coin probability question?
About the 3 coin probability question?
What ideas come in the way of understanding probability questions?
Filtering a list with the Filter higher-order function
Last week markbulling over at Drunks & Lampposts posted a method of using sapply to filter a list by a predicate. Today the @RLangTip tip of the day was to use sapply similarly. This makes me wonder if R’s very useful higher-order functions aren’t as well known as they should be. In this case, the Filter higher-order function would be the tool to use. Filter works more or less like the *apply family of functions, but it performs the subsetting (the filtering) of a list based on a predicate in a single step.
As an example, let’s say we have a list of 1000 vectors, each of length 2 with \(x_1,\,x_2 \in [0,\,1]\), and we want to select only those vectors whose elements sum to a value
greater than 1. With Filter, this is all we have to do:
# Build a list of 1000 length-2 vectors with elements drawn from U(0, 1)
mylist <- lapply(1:1000, function(i) c(runif(1), runif(1)))
# Keep only the vectors whose elements sum to more than 1
method.1 <- Filter(function(x) sum(x) > 1, mylist)
Which is at least a bit more transparent than the sapply alternative:
# Logical indexing achieves the same result in two steps
method.2 <- mylist[sapply(mylist, function(x) sum(x) > 1)]
In some very quick tests, I found no performance difference between the two approaches.
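If you want to check the timing yourself, base R’s system.time is enough for a rough comparison (a hypothetical benchmark; mylist is the list built above):
system.time(for (i in 1:1000) Filter(function(x) sum(x) > 1, mylist))
system.time(for (i in 1:1000) mylist[sapply(mylist, function(x) sum(x) > 1)])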
There are other useful higher-order functions. If you are interested, check out ?Filter.
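For instance, Map, Reduce and Position are documented on the same help page as Filter; a quick sketch of what they do:
Map(`+`, 1:3, 4:6)                       # list(5, 7, 9): combine arguments elementwise
Reduce(`+`, 1:5)                         # 15: fold a binary function over a vector
Position(function(x) x > 3, c(1, 5, 2))  # 2: index of first element passing the predicate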
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 108026, 13 pages
Research Article
Nonlinear Hydroelastic Waves beneath a Floating Ice Sheet in a Fluid of Finite Depth
^1School of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
^2Shanghai Institute of Applied Mathematics and Mechanics, Shanghai University, Shanghai 200072, China
^3Research Center for Complex Systems and Network Sciences, Department of Mathematics, Southeast University, Nanjing 210096, China
Received 21 May 2013; Revised 29 August 2013; Accepted 29 August 2013
Academic Editor: Rasajit Bera
Copyright © 2013 Ping Wang and Zunshui Cheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
The nonlinear hydroelastic waves propagating beneath an infinite ice sheet floating on an inviscid fluid of finite depth are investigated analytically. The approximate series solutions for the
velocity potential and the wave surface elevation are derived, respectively, by an analytic approximation technique named homotopy analysis method (HAM) and are presented for the second-order
components. Also, a homotopy squared residual technique is employed to guarantee the convergence of the series solutions. The present formulas, unlike perturbation solutions, are highly accurate and uniformly valid without requiring the nonlinear partial differential equations (PDEs) to contain small parameters. The effects of water depth, ice sheet thickness, and Young's modulus are expressed analytically in detail. We find that, in different water depths, the hydroelastic waves traveling beneath the thickest ice sheet always carry the largest wave energy, while with increasing sheet thickness the wave elevation tends to be smoothed at the crest and sharpened at the trough. A larger Young's modulus of the sheet causes analogous effects. The results obtained show that the thickness and Young's modulus of the floating ice sheet both greatly affect the wave energy and wave profile in different water depths.
1. Introduction
In recent decades, the ice cover in the polar regions has attracted more and more attention in ocean engineering and polar engineering, in view of its practical importance and theoretical interest. The motivations for the research include damage to offshore structures caused by floating ice sheets and transportation systems in cold regions, where the ice cover serves as roads and aircraft runways and air-cushioned vehicles are used to break the ice. One of the important problems in this field is the accurate measurement of the characteristics of waves traveling beneath a floating ice sheet. Such a wave may have been generated in the ice cover itself by the wind, or it may have originated from a moving load on the ice sheet. Considerable work has been done since the first theoretical model of wave propagation in sea ice was proposed by Greenhill [1] in 1887. A comprehensive summary of mathematical methods and modeling for the problem can be found in review articles such as Squire et al. [2, 3]. In addition to ice sheets, this work applies to very large floating structures (VLFSs) such as
floating airports, mobile offshore bases, offshore port facilities, offshore storage and waste disposal provisions, energy islands including some wave power configurations, and ultralarge ships,
where there is an extensive complementary literature [4–6].
Most theoretical works on the problem remain within the scope of linear theory, based on the assumption that the generated wave amplitudes are very small in comparison with the wavelengths. Such models are therefore not appropriate for describing the waves of arbitrary amplitude considered here. From hydrodynamics and elasticity we construct the nonlinear partial differential equations (PDEs) (1)–(5) to describe nonlinear hydroelastic waves of arbitrary amplitude traveling through water covered by an ice sheet in finite water depth. Unfortunately, it is very difficult to solve the coupled nonlinear PDEs analytically. Further, most works in the literature on the nonlinear theory of the interaction between sea waves and ice sheets are necessarily set in the context of weakly nonlinear analysis, owing to the limitations of present mathematical tools; the main analytical approach to such complex nonlinear PDEs still follows the well-known perturbation technique. For example, Forbes [7]
derived nonlinear PDEs to describe two-dimensional periodic waves beneath an elastic sheet floating on the surface of an infinitely deep fluid. The periodic solutions are sought using the Fourier
series and perturbation expansions for the Fourier coefficients. And it is found that the solutions have certain features in common with capillary-gravity waves. Following the framework in [7],
Forbes continued his study of finite-amplitude surface waves beneath a floating elastic sheet in infinitely deep water [8], and improved his previous perturbation approach by developing
the Fourier coefficients as expansions in the wave height. Waves of extremely large amplitude are found to exist, and results are presented for waves belonging to several different nonlinear solution
branches. Recently, Vanden-Broeck and Părău [9] further extended the results of Forbes for periodic waves to waves of arbitrary amplitude. It is noted that perturbation and asymptotic approximations of nonlinear PDEs often break down as the nonlinearity becomes strong. Thus, in Vanden-Broeck's study, weakly nonlinear solutions for small-amplitude waves are derived by the perturbation approach, while fully nonlinear solutions for large-amplitude waves have to be computed numerically by means of a series truncation method.
Furthermore, perturbation and asymptotic techniques generally depend strongly on small/large parameters, while our nonlinear PDEs contain no such parameters. Thus the perturbation
techniques are not applicable to the nonlinear problem under consideration. In this paper, we apply a new analytic approximation method known as the homotopy analysis method (HAM) to effectively
solve the nonlinear PDEs presented here. Based on the concept of homotopy in algebraic topology, the HAM was proposed by Liao [10] in 1992. Unlike the perturbation method, the HAM is entirely
independent of any small/large parameters. Moreover, it provides us with extremely large freedom to choose base functions and initial approximations (16) and (17) of solutions and auxiliary linear
operators (21)–(23), subject only to some basic rules [11, 12]. More importantly, in contrast to previous analytic techniques, the HAM provides a convenient way to control and adjust the convergence of the approximate series solutions by introducing an auxiliary convergence-control parameter. The method has been systematically described by Liao [11, 12]. Recently the HAM has been successfully
applied to the study of a number of classical nonlinear differential equations including nonlinear equations arising in fluid mechanics [13–18], heat transfer [19, 20], solitons and integrable models
[21–24], and finance [25, 26]. These studies show the validity and generality of the HAM for some highly nonlinear PDEs with multiple solutions, singularities, and unknown boundaries.
The objective of the present work is to analytically study the nonlinear hydroelastic waves under an ice sheet lying over an incompressible inviscid fluid of finite uniform depth by means of the HAM.
According to the potential theory in hydrodynamics and elasticity, the nonlinear partial differential equations (PDEs) (1)–(5) are composed of the Laplace equation taken as the governing equation for
inviscid flows, the kinematic and dynamic boundary conditions on the unknown ice sheet-water interface with a zero draft, a simple linear model for the thin sheet that includes the effects of
flexural rigidity and vertical inertia, and a bottom boundary condition. The convergent homotopy-series solutions for the velocity potential and the wave surface elevation are formally derived by
applying the HAM together with the minimization of the squared residual. It should be mentioned that we study in detail the effects of the water depth and of two important physical parameters, namely Young's modulus and the thickness of the ice sheet, on the wave energy and the wave elevation. Discussion and conclusions are given in Sections 4 and 5, respectively. All of the results
obtained will help enrich our understanding of nonlinear hydroelastic waves propagating under a floating ice sheet on a fluid of finite depth.
2. Mathematical Description
The problem under consideration is a train of nonlinear hydroelastic waves propagating beneath a two-dimensional infinite elastic plate floating with zero draft on a fluid of finite depth. A Cartesian coordinate system is used in which the vertical axis points upward and the undisturbed surface is taken as the horizontal reference level. Following Greenhill [1], we assume that this problem is capable of modeling ocean waves in the presence of sea ice when the fluid is inviscid and incompressible, the flow is irrotational, and the ice sheet is mathematically idealized as a thin elastic plate. Then the governing equations for the velocity potential can be written, with the wave surface elevation as the second unknown. The bottom boundary condition reads
The motion of the fluid and the plate is coupled through the dynamic free-surface condition. We also assume that any fluid particle once on the plate-water interface remains on it. The kinematic and dynamic boundary conditions on the unknown surface are then modeled, respectively, in terms of the water-plate interface pressure, the fluid density, and the gravitational acceleration, for a thin homogeneous elastic plate with uniform mass density and constant thickness.
Since we are considering long waves here, the linear Kirchhoff (Euler-Bernoulli) beam theory is applied to the floating elastic plate, in which the coefficient is the flexural rigidity of the plate, expressed in terms of the effective Young's modulus and Poisson's ratio of the plate. We substitute (5) into (4) to derive a new form of the dynamic boundary condition.
Here, we consider a train of nonlinear waves traveling beneath an elastic plate with constant wave number and constant angular frequency of the incident wave. For the general case it should be emphasized that, by applying the traveling-wave method directly, the temporal differentiation for progressive waves is transferred into a spatial one, which is very different from the mathematical model obtained by simply eliminating the time-dependent terms from the kinematic and dynamic boundary conditions on the unknown free surface [7–9]. Namely, we introduce an independent variable transformation in which the angular frequency and the wave number are given; thus we can express the potential function and the traveling wave profile in the moving coordinate.
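In standard notation this transformation presumably reads

    \xi = kx - \omega t, \qquad \phi = \phi(\xi, z), \qquad \eta = \eta(\xi),

so that \partial/\partial t = -\omega\, d/d\xi and \partial/\partial x = k\, d/d\xi for traveling waves, with k the wave number and \omega the angular frequency.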
Then the governing equation and the bottom boundary condition for the velocity potential are transformed accordingly. With the transformation (7), conditions (3) and (6) are rewritten in the moving coordinate, and we partially combine (10) and (11) to obtain the boundary conditions. Now the corresponding unknown potential function and the wave surface elevation are governed by (8), (9), (11), and (13).
3. Analytic Approach Based on the Homotopy Analysis Method
3.1. Solution Expression and Initial Approximation
Using the homotopy analysis method, we should first of all start from a set of base functions and a solution expression, which are very important for approximating the unknown solutions of the nonlinear boundary problem under consideration. Mathematically, it seems impossible to guess the expression forms of the unknown potential function and the vertical wave displacement. Fortunately, considering the physical background of our problem, we can obtain proper solution expressions. From the physical viewpoint, our problem consists of a train of progressive waves caused by a load moving on an infinite elastic plate that acts as an ice sheet floating on a fluid of finite depth. As is well known, in the case of pure water waves, the progressive wave elevation can be expressed by a set of trigonometric base functions with unknown coefficients, as in (14). In the case of a plate-covered surface, since we assume that there is no gap between the bottom surface of the thin elastic plate and the top surface of the fluid layer (zero draft), the vertical displacement of the thin plate is still periodic in the horizontal direction. Therefore, the plate deflection can be expressed in the form (14) too.
Besides, according to the linear wave theory, we can find the solutions of the Laplace equation (8) by the method of separation of variables. To acquire those solutions, we use the kinematic and dynamic boundary conditions of the free surface and the boundary condition in finite water depth, and we take the solution derived in this way as the solution expression of the potential function, built from a set of base functions with unknown coefficients. Note that the potential function defined by (15) automatically satisfies the governing equation (8) and the bottom boundary condition (9). The above expressions (14) and (15) are called the solution expressions of the elevation and the potential, respectively, and they play important roles in the homotopy analysis method.
According to the solution expression (15) and the boundary condition (9), we construct the initial approximation of the potential function, which contains one unknown amplitude coefficient. We choose zero as the initial approximation of the wave profile to simplify the subsequent solution procedure [18, 20]. It should be emphasized that the higher-order terms can carry the corrections to the analytic series solutions arising from the nonlinearity inherent in (11) and (13), even though this initial guess is zero.
3.2. Continuous Variation
The HAM is based on a kind of continuous mapping of an initial approximation to the exact solution through a series of deformation equations. For simplicity, based on the nonlinear boundary conditions (11) and (13), we define two nonlinear operators in which the embedding parameter of the HAM appears.
Here, it should be emphasized that, as mentioned by Liao and Cheung and by Tao et al. [14, 15], the HAM provides us with extremely large freedom to choose the auxiliary linear operators and the initial guess. Note that the linear terms of both unknowns are contained in (18). If we kept all linear terms, the subsequent iterative procedure would become very complex. Fortunately, within the HAM, we can completely disregard the linear terms in (13) and choose a proper auxiliary linear operator by means of the solution expression (15), which is obtained from the physical considerations above.
In particular, if the angular frequency is given, we can choose an approximation based on the linear wave theory to simplify the subsequent resolution of the nonlinear PDEs, and so simplify the auxiliary linear operator in (21). Note that, due to the weakly nonlinear effects, the actual frequency often differs slightly from the linear dispersion relation. In Section 4, the frequency is chosen so that perturbation theory is valid and the corresponding results are highly accurate, which allows us to compare our results with those obtained by the perturbation method.
Based on the linear terms of the wave profile function in the nonlinear operator, for simplicity, we may choose another auxiliary linear operator.
We introduce a nonzero convergence-control parameter. It is noted that the auxiliary parameters of the HAM carry no physical meaning. Instead of the nonlinear PDEs (8), (9), (11), and (13), we reconstruct the so-called zeroth-order deformation equations. Then, from (27) and (28), the two mapping functions vary continuously from their initial approximations to the exact solutions of the original problem as the embedding parameter goes from 0 to 1, and their Taylor series with respect to the embedding parameter are expanded about zero.
Assuming the convergence-control parameter is so properly chosen that the series in (29) and (30) converge at unity, we then have the so-called homotopy-series solutions.
At each order of approximation, we have the corresponding partial sums of the homotopy series. As shown in the following subsection, the unknown terms are governed by the linear PDEs (34)–(36).
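In generic HAM notation (a sketch; the paper's operators and symbols may differ), the zeroth-order deformation equations take the form

    (1-q)\,\mathcal{L}_\phi\big[\Phi(\xi,z;q) - \phi_0\big] = q\,c_0\,\mathcal{N}_\phi[\Phi, \hat{\eta}],
    (1-q)\,\mathcal{L}_\eta\big[\hat{\eta}(\xi;q) - \eta_0\big] = q\,c_0\,\mathcal{N}_\eta[\Phi, \hat{\eta}],

where q \in [0,1] is the embedding parameter and c_0 the convergence-control parameter; at q = 0 the mappings reduce to the initial guesses \phi_0 and \eta_0, while at q = 1 they satisfy the original nonlinear boundary conditions.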
3.3. High-Order Deformation Equations
High-order deformation equations for the unknown terms can be derived directly from the zeroth-order deformation equations. Firstly, substituting the homotopy-Maclaurin series (29) and (30) into the governing equation (25) and the boundary condition in finite water depth (26) and then equating like powers of the embedding parameter, we obtain the corresponding linear relations.
Note that quantities on the unknown surface may be expressed in terms of a Taylor expansion about the undisturbed level instead. The detailed derivation of this expansion at the unknown surface is given in the Appendices. Upon substitution of the appropriate series expansions into the boundary conditions (27) and (28), we obtain two linear boundary conditions.
The detailed derivation of the above equations and the expressions for their coefficients are given in Appendix A. It should be noted that (27) and (28) hold on the unknown boundary, while (35) and (36) hold on the undisturbed level.
Furthermore, the original nonlinear PDEs (1)–(5) are transformed into an infinite number of linear, decoupled high-order deformation equations (34)–(36). Namely, given the lower-order terms, the next-order terms can be obtained easily by applying the inverses of the auxiliary linear operators to the right-hand sides of (35) and (36), respectively, with a computer algebra system such as Mathematica. The resulting expressions are presented up to the second order in the coming subsection.
3.4. First-Order and Second-Order Approximations
Substituting the initial approximations (16) and (17) into (36), we can obtain the first-order term by using the inverse of the linear operator in (36).
But the amplitude coefficient in the initial approximation (16) is still unknown. So we introduce an additional equation relating the solutions to the wave height, in which one index is an even integer, another is an odd integer, and the right-hand side is the wave height to first order based on the HAM. The relation (39) between the wave height and the vertical displacement determines this coefficient.
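A plausible form of the wave-height relation (39), in which the even and odd integers select a crest and a trough of the profile, is

    \eta(2m\pi) - \eta\big((2n+1)\pi\big) = H_w,

where H_w is the prescribed wave height; this fixes the free amplitude coefficient in the initial approximation (16). (This is a hedged reconstruction, not the paper's exact statement.)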
Further, in a manner analogous to the first-order approximation, by using the inverse of the linear operator in (35), it is easy to obtain the next solution, especially by means of symbolic computation software such as Mathematica.
We find that the general solution contains one unknown coefficient, which can be determined by eliminating the "secular" term. We note that all subsequent functions are obtained recursively. Utilizing the linear equations (35) and (36) to continue beyond the first-order approximations, we obtain expressions involving further unknown coefficients; the detailed expressions of these coefficients are given in Appendix B.
In order to obtain higher-order terms, we need only continue this procedure. In principle, we can acquire solutions of arbitrary order for our physical model. It is also worthwhile to mention that these solutions retain the model parameters and the convergence-control parameter.
3.5. Optimal Convergence-Control Parameter
If we fix all model parameters in our approximate series solutions, there still remains an unknown convergence-control parameter, which is used to guarantee the convergence of the approximate solutions. According to Liao [12], it is the convergence-control parameter that essentially distinguishes the HAM from all other analytic methods. The optimal value of this parameter is determined by the minimum of the total squared residual of our nonlinear problem, defined as the sum of the two squared residual errors of the boundary conditions (27) and (28), each averaged over a set of discrete points.
Theorem 2.1 given by Liao in [12] guarantees the validity of (42). So we obtain the optimal convergence-control parameter from the minimum of the total squared residual.
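For illustration only — the paper's residual expressions are not reproduced here — the optimal-parameter selection can be sketched in Python, where residual_bc1 and residual_bc2 are hypothetical callables returning the residuals of boundary conditions (27) and (28) at a given order of approximation:

    import numpy as np
    from scipy.optimize import minimize_scalar

    N = 20                                    # number of discrete sample points (assumed)
    xi = 2.0 * np.pi * np.arange(N + 1) / N   # equally spaced points over one wavelength

    def total_squared_residual(h, order, residual_bc1, residual_bc2):
        """E(h) = E1(h) + E2(h): squared residuals of (27) and (28), averaged."""
        r1 = residual_bc1(xi, h, order)
        r2 = residual_bc2(xi, h, order)
        return np.mean(r1 ** 2) + np.mean(r2 ** 2)

    def optimal_h(order, residual_bc1, residual_bc2, bounds=(-2.0, -1e-3)):
        """Locate the convergence-control parameter minimizing E(h)."""
        res = minimize_scalar(
            lambda h: total_squared_residual(h, order, residual_bc1, residual_bc2),
            bounds=bounds, method="bounded")
        return res.x, res.fun

The scan over a bracket of negative h values mirrors the procedure behind Figure 1, where the minimizer of E(h) at each order is taken as the optimal parameter.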
4. Results and Analysis
In order to show the convergence of the analytical series solutions to our problem by means of the HAM, we consider a fixed representative set of physical parameter values and take these data hereinafter for computation unless otherwise stated. The total residual square error at several orders of approximation versus the convergence-control parameter is shown in Figure 1. It is found that at every order the error attains a smallest value, which corresponds to the optimal convergence-control parameter. With this optimal parameter fixed, the total residual square error decreases quickly as the order increases, as shown in Table 1; it becomes very small by the 15th order of approximation, which indicates the convergence of our series solutions. In this way, we ensure that all our solutions are highly accurate.
Also, we compare our HAM solutions for waves propagating beneath an elastic plate floating on a fluid of finite depth with the results obtained by perturbation techniques, as shown in Figure 2. It should be noted that the perturbation-series solution is derived by substituting the series expansions (4.5) and (4.6) of [9] into the nonlinear PDEs (8)–(12); equating powers of the small parameter leads to a succession of linear PDEs, which can then be solved by separation of variables. In Figure 2, it is seen that our homotopy-series approximation of the surface elevation agrees well with the perturbation-series approximation, and only slight deviations occur at the trough of the wave profile, which further indicates the validity of our present theory of nonlinear hydroelastic waves beneath a floating ice sheet.
We now define a quantity which measures how much energy there is in the wave propagating beneath an infinite elastic plate. Let P.E. be the mean potential-energy density per unit length along the horizontal axis [27]. In terms of the wave surface elevation function, the energy density can be written as follows.
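A standard definition of the mean potential-energy density per unit horizontal length, consistent with the usage in Roberts [27] (the paper's exact normalization may differ), is

    \mathrm{P.E.} = \frac{\rho g}{2\lambda} \int_0^{\lambda} \eta^2(x)\, dx,

where \lambda = 2\pi/k is the wavelength.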
In contrast to the research objectives in [7–9], in this paper we first consider in detail the effect of water depth on nonlinear hydroelastic waves beneath a floating elastic plate. The energy of hydroelastic waves for different Young's moduli of the plate and different plate thicknesses in various water depths is shown in Figures 3 and 4 and Tables 2 and 3, respectively. We find that, when the water depth exceeds about 2, the hydroelastic waves traveling beneath the thickest plate always contain the largest wave energy. With an increasing Young's modulus of the plate, the wave energy also becomes larger.
The effect of Young's modulus of the plate on the wave elevation under a floating elastic plate is studied next. Figures 5 and 6 show the differences in the wave profile for several values of Young's modulus. According to Figures 5 and 6, the nonlinear hydroelastic response of the waves becomes flatter at the crest and steeper at the trough for larger values of Young's modulus. Finally, we consider the impact of the plate thickness by increasing it over a range of values. In Figures 7 and 8, we show several of the resulting displacements; the results are very similar to the effects due to different Young's moduli of the plate.
5. Conclusions
In this paper, the nonlinear hydroelastic waves propagating beneath a two-dimensional infinite elastic plate floating on a fluid of finite depth are investigated analytically by the HAM.
Mathematically, for a train of nonlinear hydroelastic waves traveling at a constant velocity in a fluid of finite or infinite depth, the PDEs in [7–9] were obtained by simply eliminating the time-dependent terms from the kinematic and dynamic boundary conditions on the unknown free surface in the frame of reference moving with the wave. Here, for a general case, it should be noted that we construct the PDEs by directly applying the traveling-wave method to transfer the temporal differentiation into a spatial one in a fixed Cartesian coordinate system. Furthermore, the convergent homotopy-series solutions for the PDEs are derived by the HAM with the optimal convergence-control parameter.
Physically, we study the effect of the water depth on the nonlinear hydroelastic waves under an elastic plate in detail. It is found that, in different water depths, the wave energy density (P.E.)
tends to become larger with an increasing thickness of the sheet. The same conclusion is reached in various water depths for different values of Young's modulus of the plate. Additionally, the influences of Young's modulus and of the thickness of the plate on the wave elevation are investigated. As Young's modulus of the plate increases, the wave elevation becomes lower, and
the increasing thickness of the plate flattens the crest and sharpens the trough of the wave profile. The results obtained here demonstrate that Young’s modulus and the thickness of the sheet have
important effects on the energy and the profile of nonlinear hydroelastic waves under an ice sheet floating on a fluid of finite depth.
A. The Detailed Derivation of (35) and (36) and the Expressions for the Coefficients
Let For any , we have a Maclaurin series as follows: For , it follows from (A.1) and (A.2) that where Thus we have, for , where
Substituting the series expansions (A.1) and (A.5) into the boundary conditions (27) and (28) and then equating like powers of the embedding parameter, we obtain the two linear boundary conditions (35) and (36), respectively, together with explicit expressions for their coefficients.
B. Expressions of the Coefficients
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper. The authors independently used the symbolic computation software Mathematica to obtain the approximate analytical solutions of the PDEs considered here.
Acknowledgments
This work was supported in part by the China Postdoctoral Science Foundation funded Project 20100481088 and in part by the Natural Science Foundation of Shandong Province of China under Grant ZR2010FL016. The authors would like to thank the reviewer for his constructive comments.
1. A. G. Greenhill, "Wave motion in hydrodynamics," American Journal of Mathematics, vol. 9, no. 1, pp. 62–96, 1886.
2. V. A. Squire, J. P. Dugan, P. Wadhams, P. J. Rottier, and A. K. Liu, "Of ocean waves and sea ice," Annual Review of Fluid Mechanics, vol. 27, pp. 115–168, 1995.
3. V. A. Squire, "Of ocean waves and sea-ice revisited," Cold Regions Science and Technology, vol. 49, no. 2, pp. 110–133, 2007.
4. V. A. Squire, "Synergies between VLFS hydroelasticity and sea ice research," International Journal of Offshore and Polar Engineering, vol. 18, no. 4, pp. 241–253, 2008.
5. T. Kakinuma, K. Yamashita, and K. Nakayama, "Surface and internal waves due to a moving load on a very large floating structure," Journal of Applied Mathematics, vol. 2012, Article ID 830530, 14 pages, 2012.
6. F. Xu and D. Q. Lu, "Wave scattering by a thin elastic plate floating on a two-layer fluid," International Journal of Engineering Science, vol. 48, no. 9, pp. 809–819, 2010.
7. L. K. Forbes, "Surface waves of large amplitude beneath an elastic sheet. Part 1. High-order series solution," Journal of Fluid Mechanics, vol. 169, pp. 409–428, 1986.
8. L. K. Forbes, "Surface waves of large amplitude beneath an elastic sheet. Part 2. Galerkin solution," Journal of Fluid Mechanics, vol. 188, pp. 491–508, 1988.
9. J.-M. Vanden-Broeck and E. I. Părău, "Two-dimensional generalized solitary waves and periodic waves under an ice sheet," Philosophical Transactions of the Royal Society A, vol. 369, no. 1947, pp. 2957–2972, 2011.
10. S.-J. Liao, The proposed homotopy analysis technique for the solution of nonlinear problems [Ph.D. dissertation], Shanghai Jiao Tong University, 1992.
11. S.-J. Liao, Beyond Perturbation: Introduction to the Homotopy Analysis Method, Modern Mechanics and Mathematics, Chapman & Hall/CRC Press, 1st edition, 2003.
12. S.-J. Liao, Homotopy Analysis Method in Nonlinear Differential Equations, Springer & Higher Education Press, Heidelberg, Germany, 2012.
13. S.-J. Liao, "On the analytic solution of magnetohydrodynamic flows of non-Newtonian fluids over a stretching sheet," Journal of Fluid Mechanics, vol. 488, pp. 189–212, 2003.
14. S.-J. Liao and K. F. Cheung, "Homotopy analysis of nonlinear progressive waves in deep water," Journal of Engineering Mathematics, vol. 45, no. 2, pp. 105–116, 2003.
15. L. Tao, H. Song, and S. Chakrabarti, "Nonlinear progressive waves in water of finite depth—an analytic approximation," Coastal Engineering, vol. 54, no. 11, pp. 825–834, 2007.
16. S.-J. Liao, "On the homotopy multiple-variable method and its applications in the interactions of nonlinear gravity waves," Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 3, pp. 1274–1303, 2011.
17. D. Xu, Z. Lin, S.-J. Liao, and M. Stiassnie, "On the steady-state fully resonant progressive waves in water of finite depth," Journal of Fluid Mechanics, vol. 710, pp. 379–418, 2012.
18. J. Cheng and S. Q. Dai, "A uniformly valid series solution to the unsteady stagnation-point flow towards an impulsively stretching surface," Science China, vol. 53, no. 3, pp. 521–526, 2010.
19. S. Abbasbandy, "The application of homotopy analysis method to nonlinear equations arising in heat transfer," Physics Letters A, vol. 360, no. 1, pp. 109–113, 2006.
20. S.-J. Liao and A. Campo, "Analytic solutions of the temperature distribution in Blasius viscous flow problems," Journal of Fluid Mechanics, vol. 453, pp. 411–425, 2002.
21. W. Wu and S.-J. Liao, "Solving solitary waves with discontinuity by means of the homotopy analysis method," Chaos, Solitons and Fractals, vol. 26, no. 1, pp. 177–185, 2005.
22. E. Sweet and R. A. van Gorder, "Analytical solutions to a generalized Drinfel'd-Sokolov equation related to DSSH and KdV," Applied Mathematics and Computation, vol. 216, pp. 2783–2791, 2010.
23. R. A. van Gorder, "Analytical method for the construction of solutions to the Föppl-von Kármán equations governing deflections of a thin flat plate," International Journal of Non-Linear Mechanics, vol. 47, pp. 1–6, 2012.
24. S.-J. Liao, "An optimal homotopy-analysis approach for strongly nonlinear differential equations," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 8, pp. 2003–2016, 2010.
25. L. Zou, Z. Zong, Z. Wang, and L. He, "Solving the discrete KdV equation with homotopy analysis method," Physics Letters A, vol. 370, no. 3-4, pp. 287–294, 2007.
26. J. Cheng, S.-P. Zhu, and S.-J. Liao, "An explicit series approximation to the optimal exercise boundary of American put options," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 5, pp. 1148–1158, 2010.
27. A. J. Roberts, "Highly nonlinear short-crested water waves," Journal of Fluid Mechanics, vol. 135, pp. 301–321, 1983. | {"url":"http://www.hindawi.com/journals/aaa/2013/108026/","timestamp":"2014-04-20T04:48:08Z","content_type":null,"content_length":"766556","record_id":"<urn:uuid:f60f13e8-225b-42c3-be43-f881dd7be603>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
From GeoMod
A VPython model of a marble that can be fired across a rotating turntable. The trail of the marble's path across the turntable is traced behind it to show the Coriolis Effect. An application of the
model in a large lecture class is described in the paper by Urbano and Houghton (2006).
• There are links to excellent descriptions of what exactly the Coriolis effect is (e.g., Teunissen, 2007) and some of its history in the Coriolis links section below.
Download software
Like a number of 2-D Javascript applications available on the web, this model allows the user to fire a ball across a rotating disk, and marks its position relative to the disk and the general
co-ordinate system. In this VPython model the user can also control the velocity of the ball, the angular velocity of the disk and the friction between the ball and the disk. This model is
particularly useful in illustrating the effect of coriolis on atmospheric motion where these three parameters interact. A target can be placed on the disk to offer an objective of shooting the ball
and to illustrate angular velocity. The model also permits the user to view the scene from the perspective of the ball, which has proven to be an extremely popular feature for all ages. This model
and its application in a large lecture is described in Urbano and Houghton (2006).
• NOTE: This model only accounts for the conservation-of-angular-momentum component of the Coriolis effect and not for conservation of linear momentum, which results in only half of the Coriolis force (see Persson, 1998, for a good explanation).
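For readers who want to experiment without the GeoMod download, the frictionless case can be sketched in a few lines of Python/NumPy (this is an illustrative toy, not the GeoMod source; all parameter values are assumptions):

    import numpy as np

    omega = 0.5                      # turntable angular velocity, rad/s (assumed)
    v = np.array([1.0, 0.0])         # marble launch velocity, m/s (assumed)
    x0 = np.array([-1.0, 0.0])       # launch position at the disk edge (assumed)

    t = np.linspace(0.0, 2.0, 201)
    inertial = x0 + np.outer(t, v)   # straight-line path in the inertial frame

    # Rotate each inertial position by -omega*t to express it in the
    # turntable's (rotating) coordinates; the result is the curved,
    # Coriolis-deflected trail traced on the disk.
    c, s = np.cos(-omega * t), np.sin(-omega * t)
    trail = np.column_stack([c * inertial[:, 0] - s * inertial[:, 1],
                             s * inertial[:, 0] + c * inertial[:, 1]])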
User's Guide: Model Controls
1. Fire the marble by clicking on the marble atop the cannon.
2. Drag the cannon by its barrel to any position in the scene
3. Retrieve the marble by clicking on the box (loader) of the cannon. (This leaves the marble trail on the turntable.)
Sliders
1. Velocity: Sets the speed of the marble.
2. Rotation: Sets the angular velocity and direction of rotation of the turntable.
3. Friction: Sets the degree of friction between the marble and the turntable.
Balls and buttons
1. Target: Rotates with the turntable when it is dragged onto the turntable. (Great for explaining angular velocity)
2. Reset All: Retrieves the marble to the turntable and clears all trails off the turntable.
3. Unmarked in the upper right: When this button is clicked it turns red and you see the scene from the marble's point of view. (Kids love this.)
• Urbano, L., and Houghton, J., 2006. An Interactive Computer Model for Coriolis Demonstrations, Journal of Geoscience Education, v. 54, no. 1, p. 54-60. (preprint)
□ describes the application of the model in a large lecture class
1. Friction here only acts to drag the marble with the rotating turntable, and does not slow the marble speed (except if the rotation is moving the marble backward on the turntable).
2. This model only accounts for the conservation-of-angular-momentum component of the Coriolis effect, which gives only half of the effective Coriolis force (see Persson, 1998).
Coriolis Links
There are a number of other sites that have excellent explanations of the Coriolis effect, whether you need a simple description or a more in-depth analysis of the math. I have linked a subset of these below.
Explanations of the Coriolis effect
• Articles/webpages by Cleon Teunissen that give a good in-depth explanation of the Coriolis effect (with great animations and diagrams).
□ Physlets showing
□ The Eötvös effect, which like the Coriolis effect is a result of relative motions on the rotating Earth.
□ For the best in-depth explanation of the Coriolis effect, view the series of interconnected articles:
History and explanations of the coriolis effect | {"url":"http://earthsciweb.org/GeoMod/index.php?title=Coriolis","timestamp":"2014-04-20T01:22:09Z","content_type":null,"content_length":"26830","record_id":"<urn:uuid:073074eb-91fc-4d75-ac2b-ca2f0d3dd005>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Triangle Theorems
Triangles are closed figures studied in plane geometry. The theorems for triangle congruence and similarity are used in proving and solving many geometric properties and problems. There are quite a number of theorems, from the triangle sum theorem through the theorems on concurrency. Some of these theorems are given here with brief explanations.
Triangle Sum Theorem
The sum of the measures of the angles of a triangle is 180°.
This is a common property of all triangles. This property ensures that any triangle has at least two acute angles.
Exterior Angle Theorem
The measure of an exterior angle of a triangle is equal to the sum of the two interior opposite angles.
An exterior angle of a triangle forms a linear pair with its adjacent interior angle.
The diagram given here shows the exterior angle measures E[1], E[2] and E[3] of the angles A, B and C of the triangle.
The exterior angle theorem states that
E[1] = m∠B + m∠C
E[2] = m∠C + m∠A
E[3] = m∠A + m∠B
Side Comparison
What is the relation between the angle measure and side length? How do the sides compare in terms of their lengths?
In a triangle the larger angle has the longer side opposite to it.
Triangle inequality theorem: The sum of the lengths of any two sides of a triangle is greater than the length of the third side.
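As a quick numerical illustration (not part of the original lesson), the theorem gives a direct validity test for three side lengths:

    def is_triangle(a, b, c):
        # Each pairwise sum must exceed the remaining side.
        return a + b > c and b + c > a and c + a > b

    assert is_triangle(3, 4, 5)       # a valid (right) triangle
    assert not is_triangle(1, 2, 3)   # degenerate: 1 + 2 is not greater than 3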
Triangle Congruence Theorems
There are a number of congruence theorems for triangles, based on the criteria considered. All these theorems are commonly used in geometric problems.
1. SSS Congruence Postulate or SSS Postulate (Side-Side-Side)
If the three sides of a triangle are congruent to the three sides of another triangle, then the two triangles are congruent.
In the adjoining diagram, the sides of the triangle ABC are shown to be congruent to the sides of triangle PQR.
By SSS postulate for triangle congruency,
$\bigtriangleup$ ABC $\cong$ $\bigtriangleup$ PQR
The congruence statement should be made maintaining the correct order of the vertices. It is wrong to write $\bigtriangleup$ ABC $\cong$ $\bigtriangleup$ QPR or $\bigtriangleup$ ABC $\cong$ $\bigtriangleup$ RPQ.
The position of vertex in the naming of the triangle should match the position of the corresponding vertex in the name of the other triangle.
2. SAS Congruence Postulate or SAS Postulate (Side Angle Side)
If two sides and the included angle of a triangle are congruent to the two sides and included angle of another triangle, then the two triangles are congruent.
In the adjoining diagram, the two sides and the angle included by them of triangle DEF are congruent to the two sides and the angle included between them of triangle STU.
By SAS postulate for triangle congruency, $\bigtriangleup$ DEF $\cong$ $\bigtriangleup$ STU
The equivalent congruent statements are $\bigtriangleup$ EFD $\cong$ $\bigtriangleup$ TUV and $\bigtriangleup$ FDE $\cong$ $\bigtriangleup$ UST.
In all these statements the order of corresponding vertices is maintained on either side.
3. ASA Congruence Postulate or ASA Postulate (Angle Side Angle)
If two angles and the included side of a triangle are congruent to the corresponding two angles and included side of another triangle, then the two triangles are congruent.
In the adjoining diagram the angles Q and R and the included side QR of triangle PQR are shown to be congruent to the angles V and W and the included side VW of triangle UVW.
By ASA postulate for triangle congruency
$\bigtriangleup$ PQR $\cong$ $\bigtriangleup$ UVW
The congruency also exists even if the side is not included, which is stated by AAS postulate.
4. AAS Congruence Postulate or AAS Postulate ( Angle Angle Side)
If two angles and a non included side of one triangle are congruent to two angles and a non included side of another triangle, then the two triangles are congruent.
In the adjoining diagram angles B and C and the non included side CA of triangle ABC are congruent to the two angles Q and R and the non included side RP of triangle PQR.
By AAS postulate for triangle congruency
$\bigtriangleup$ ABC $\cong$ $\bigtriangleup$ PQR
This theorem is a corollary of ASA theorem. If two angles of a triangle are congruent, by triangle sum property, the third angles are also congruent. That would make angles A and P congruent here.
5. HL Theorem (Hypotenuse Leg)
If the hypotenuse and a leg of a right triangle are congruent to the corresponding parts of another right triangle, then the two triangles are congruent.
In the diagram given Hypotenuse AB and leg CA of right triangle ABC are congruent to hypotenuse LM and leg NL of the second right triangle LMN.
By HL postulate for triangle congruency
$\bigtriangleup$ ABC $\cong$ $\bigtriangleup$ LMN
The fact that the two right angles in the triangles are congruent is implied by the inclusion of the hypotenuse in the name of the postulate. A general SSA correspondence does not imply triangle congruence. HL is in effect the version of the SSS postulate for right triangles: if the hypotenuse and a leg are congruent to the corresponding parts of another right triangle, then by the Pythagorean theorem the remaining legs are also congruent.
Triangle Similarity Theorems
Two triangles are said to be similar if the three angles of one triangle are congruent to the three angles of the other. The sides in this case will not be congruent but proportional. Triangle congruence is the special case of similarity in which this ratio of proportionality equals 1. The theorems stating the types of similarity are given below.
AAA Similarity
In the diagram it is shown the angles A and B of triangle ABC are congruent to the angles P and Q of triangle PQR. By triangle sum property the third angles C and R are also congruent. Hence by the
definition of similar triangles,
$\bigtriangleup$ ABC ||| $\bigtriangleup$ PQR
SSS Similarity
The sides of triangle LMN are of half the lengths of the sides of triangle PQR.
The equality of the ratios of the sides determines the similarity.
By SSS similarity
$\bigtriangleup$ PQR ||| $\bigtriangleup$ LMN
SAS similarity
In this case the lengths of two sides of the triangle STU are shown proportional to the lengths of two sides of triangle XYZ and the corresponding included angles S and X are congruent.
By SAS similarity
$\bigtriangleup$ STU ||| $\bigtriangleup$ XYZ
Conversely, if two triangles are similar, the lengths of their corresponding sides are proportional.
$\Delta PQR |||\Delta LMN\rightarrow \frac{\overline{PQ}}{\overline{LM}}=\frac{\overline{QR}}{\overline{MN}}=\frac{\overline{RP}}{\overline{NL}}$
Mid Point Theorem
This theorem is commonly known as the mid segment theorem. This theorem is proved using similarity of the triangles formed and taking the proportions of the corresponding sides.
The segment joining the midpoints of two sides of a triangle is parallel to the third side, and is half the length of the third side.
In triangle ABC, D and E are the mid points of the sides AC and BC. The mid segment theorem states that DE is parallel to the third side AB and,
DE = $\frac{1}{2}$ AB
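A short numerical check of this theorem in Python, using an arbitrary example triangle (the coordinates are illustrative):

    import numpy as np

    A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
    D, E = (A + C) / 2, (B + C) / 2   # midpoints of sides AC and BC

    DE, AB = E - D, B - A
    assert np.isclose(np.linalg.norm(DE), np.linalg.norm(AB) / 2)  # half the length
    assert np.isclose(DE[0] * AB[1] - DE[1] * AB[0], 0.0)          # parallel (zero determinant)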
Concurrency Theorems
The four points of concurrency related to a triangle are the circumcenter, the orthocenter, the incenter and the centroid.
1. Perpendicular Bisector Theorem
The perpendicular bisectors of the three sides of a triangle intersect at a point equidistant from the vertices.
The perpendicular bisectors of the triangle ABC intersect at O. Point O is equidistant from the three vertices A, B and C. The circle drawn with O as center passing through the vertices is called the
circumcircle of the triangle and the point O is called the circumcenter.
The circumcenter is the point of concurrence of the perpendicular bisectors of the triangle.
2. Angle Bisector Theorem
The angle bisectors of a triangle intersect at a point that is equidistant from the sides of the triangle.
The angle bisectors of triangle ABC intersect at point I. Since this point is equidistant from the sides of the triangle, a circle can be drawn with I as center touching the three sides of the
triangle. The circle is called the incircle and I the incenter of the triangle.
Incenter is the point of concurrence of the angle bisectors of a triangle.
3. Altitudes Theorem
The three altitudes of a triangle are concurrent.
In the given diagram for triangle ABC, the altitudes AD, BE and CF of triangle ABC intersect at the point O, which is called the orthocenter of the triangle.
The orthocenter of the triangle is the point of concurrence of the altitudes of a triangle.
4. Centroid Theorem
The medians of a triangle intersect at a point which is called the centroid or center of gravity of the triangle.
The medians AD, BE and CF of triangle ABC intersect at G, the centroid of the triangle.
The centroid is a point of trisection of each median.
The centroid divides each median of the triangle in the ratio 2:1 | {"url":"http://math.tutornext.com/geometry/triangle-theorems.html","timestamp":"2014-04-19T11:56:31Z","content_type":null,"content_length":"30216","record_id":"<urn:uuid:40ef46c7-7376-4052-968b-92ad872667cc>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hydrodynamic reductions of multi-dimensional dispersionless PDEs: the test for integrability
Seminar Room 1, Newton Institute
A (d+1)-dimensional dispersionless PDE is said to be integrable if it possesses infinitely many n-component hydrodynamic reductions parametrized by (d-1)n arbitrary functions of one variable. Among
the most important examples one should primarily mention the three-dimensional dKP and the Boyer-Finley equations, as well as the four-dimensional heavenly equation descriptive of self-dual
Ricci-flat metrics. It was observed that integrability in the sense of hydrodynamic reductions is equivalent to the existence of a scalar pseudopotential playing the role of a dispersionless Lax
pair. Lax pairs of this type constitute a basis of the dispersionless d-bar and twistor approaches to multi-dimensional equations. | {"url":"http://www.newton.ac.uk/programmes/GMR/seminars/2005112216001.html","timestamp":"2014-04-18T18:14:23Z","content_type":null,"content_length":"5022","record_id":"<urn:uuid:b7d0681c-6634-4e63-bcb7-1b965f519204>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Life Insurance Contract Simulations
ERCIM News No.38 - July 1999
by Mireille Bossy
A common feature of life insurance contracts is the early exit option, which allows the policy holder to end the contract at any time before its maturity (with a penalty). Because of this option, the usual methodologies fail to compute the value and the sensitivity of the debt of the Insurance Company towards its customers. Moreover, it is now commonly admitted that an early exit option is a source of risk in a volatile interest-rate environment. The OMEGA research team at INRIA Sophia Antipolis studies risk management strategies for life insurance contracts which guarantee a minimal rate of return augmented by a participation in the financial benefits of the Company.
A preliminary work of OMEGA consisted in studying the dependency of the Insurance Company's debt value towards a given customer on various parameters, such as the policy holder's early-exit criterion and the financial parameters of the Company's investment portfolio. Statistics of the value of the debt are obtained by means of a Monte Carlo method and simulations of the random evolution of the Company's financial portfolio, the interest rates, and the behaviour of a customer.
More precisely, the debt at the exit time t of a contract with initial value 1 is modeled by D(t) = p(t)[exp(r t) + max(0, A(t) - exp(r t))]. Here, r is the minimal rate of return guaranteed by the contract, and exp(r t) stands for the guaranteed minimal value of the contract at time t. A(t) is the value of the assets of the Company invested in a financial portfolio. A simplified model is A(t) = a S(t) + b Z(t), where S(t) (respectively Z(t)) is the value of the stocks (respectively of the bonds) held by the Company; a and b denote the proportions of the investments in stocks and in bonds, respectively. Finally, the function p(t) describes the penalty applied to the policy holder in the case of an anticipated exit from the contract. Two kinds of exit criteria are studied: the historical customer chooses his exit time by computing mean rates of return on the basis of the past of the contract; the anticipative customer applies a more complex rule which takes the conditional expected returns of the contract into account. In both cases, a latency parameter is introduced to represent the customer's rationality with respect to his exit criterion. (The simulation of a large number of independent paths of the processes S and Z permits computing the different values of assets and liabilities in terms of the market parameters, a, b, and the strategy followed by the policy holder.)
In our first simulations, the asset of the Company was extremely simplified: S(t) is the market price of a unique share (described by the Black and Scholes paradigm) and Z(t) is the market price
of a unique zero-coupon bond (derived from the Vasicek model). Even in this framework, the computational cost is high, and we take advantage of the Monte Carlo procedure to propose a software package (named LICS) which attempts to demonstrate the advantage of parallel computing in this field. This software was developed within the FINANCE activity of the ProHPC TTN of HPCN.
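For illustration, a hedged Monte Carlo sketch of this simplified model in Python follows; it is not the LICS code, the Vasicek bond is replaced by a deterministic account at a flat rate for brevity, and every parameter value is an assumption:

    import numpy as np

    rng = np.random.default_rng(0)
    T, steps, n_paths = 8.0, 96, 100_000     # horizon (years), time grid, paths
    dt = T / steps
    r_g = 0.03                               # guaranteed rate r (assumed)
    mu, sigma = 0.06, 0.20                   # stock drift / volatility (assumed)
    r_mkt = 0.04                             # flat rate replacing Vasicek (assumed)
    a, b = 0.4, 0.6                          # portfolio weights: stocks / bonds

    t = dt * np.arange(1, steps + 1)
    z = rng.standard_normal((n_paths, steps))
    # Black-Scholes stock paths with S(0) = 1.
    S = np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    Z = np.exp(r_mkt * t)                    # deterministic bond account, Z(0) = 1
    A = a * S + b * Z                        # asset value per unit of initial premium

    guarantee = np.exp(r_g * t)
    p = 1.0 - 0.02 * (T - t) / T             # toy surrender-penalty profile p(t)
    D = p * (guarantee + np.maximum(0.0, A - guarantee))

    # Debt at maturity if the policy holder never exercises the exit option:
    print("mean D(T):", D[:, -1].mean(), "+/-", D[:, -1].std() / np.sqrt(n_paths))

Evaluating D along each path at the customer's chosen exit time, rather than at maturity, is what makes the full problem computationally heavy.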
The computational cost corresponding to more realistic models can become huge. Starting in March 1999, the AMAZONE project is part of the G.I.E. Dyade (BULL/INRIA). Its aim is to implement LICS on the NEC SX-4/16 Vector/Parallel Supercomputer. This version will include a large diversification of the financial portfolio (around a thousand lines) and an aggregation of a large number of contracts mixing customers' behaviors.
In parallel to this, the OMEGA team studies the problem of optimal portfolio allocation in the context of simplified models for life insurance contracts. For more information, see:
Please contact:
Mireille Bossy - INRIA
Tel: +33 4 92 38 79 82
E-mail: Mireille.Bossy@sophia.inria.fr
return to the ERCIM News 38 contents page | {"url":"http://www.ercim.eu/publication/Ercim_News/enw38/bossy.html","timestamp":"2014-04-18T05:33:33Z","content_type":null,"content_length":"5424","record_id":"<urn:uuid:9af72482-bc24-479f-ad30-dbc3998a3fd7>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
Single Variable Calculus Vol. 2 : Early Transcendentals 7th Edition | 9780538498708 | eCampus.com
Single Variable Calculus Vol. 2 : Early Transcendentals
by Stewart, James
List Price: $159.00
Questions About This Book?
Why should I rent this book?
Renting is easy, fast, and cheap! Renting from eCampus.com can save you hundreds of dollars compared to the cost of new or used books each semester. At the end of the semester, simply ship the book
back to us with a free UPS shipping label! No need to worry about selling it back.
How do rental returns work?
Returning books is as easy as possible. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from
our website at any time. Then, just return the book to your UPS driver or any staffed UPS location. You can even use the same box we shipped it in!
What version or edition is this?
This is the 7th edition with a publication date of 11/23/2010.
What is included with this book?
• The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
• The Used copy of this book is not guaranteed to include any supplemental materials. Typically, only the book itself is included.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
Success in your calculus course starts here! James Stewart's CALCULUS: EARLY TRANSCENDENTALS texts are world-wide best-sellers for a reason: they are clear, accurate, and filled with relevant,
real-world examples. With SINGLE VARIABLE CALCULUS: EARLY TRANSCENDENTALS, Seventh Edition, Stewart conveys not only the utility of calculus to help you develop technical competence, but also gives
you an appreciation for the intrinsic beauty of the subject. His patient examples and built-in learning aids will help you build your mathematical confidence and achieve your goals in the course!
Table of Contents
Diagnostic Tests
Areas and Distances
The Definite Integral
Discovery Project: Area Functions
The Fundamental Theorem of Calculus
Indefinite Integrals and the Net Change Theorem
Writing Project: Newton, Leibniz, and the Invention of Calculus
The Substitution Rule
Problems Plus
Applications Of Integration
Areas between Curves
Volumes by Cylindrical Shells
Average Value of a Function
Applied Project: Where to Sit at the Movies
Techniques Of Integration
Integration by Parts
Trigonometric Integrals
Trigonometric Substitution
Integration of Rational Functions by Partial Fractions
Strategy for Integration
Integration Using Tables and Computer Algebra Systems
Discovery Project: Patterns in Integrals
Approximate Integration
Improper Integrals
Problems Plus
Further Applications Of Integration
Arc Length
Discovery Project: Arc Length Contest
Area of a Surface of Revolution
Discovery Project: Rotating on a Slant
Applications to Physics and Engineering
Discovery Project: Complementary Coffee Cups
Applications to Economics and Biology
Problems Plus
Differential Equations
Modeling with Differential Equations
Direction Fields and Euler's Method
Separable Equations
Applied Project: Which is Faster, Going Up or Coming Down?
Models for Population Growth
Applied Project: Calculus and Baseball
Linear Equations
Predator-Prey Systems
Problems Plus
Parametric Equations And Polar Coordinates
Curves Defined by Parametric Equations
Laboratory Project: Families of Hypocycloids
Calculus with Parametric Curves
Laboratory Project: Bezier Curves
Polar Coordinates
Areas and Lengths in Polar Coordinates
Conic Sections
Conic Sections in Polar Coordinates
Problems Plus
Infinite Sequences And Series
Laboratory Project: Logistic Sequences
The Integral Test and Estimates of Sums
The Comparison Tests
Alternating Series
Absolute Convergence and the Ratio and Root Tests
Strategy for Testing Series
Power Series
Representations of Functions as Power Series
Taylor and Maclaurin Series
Laboratory Project: An Elusive Limit
Writing Project: How Newton Discovered the Binomial Series
Applications of Taylor Polynomials
Applied Project: Radiation from the Stars
Problems Plus
Numbers, Inequalities, and Absolute Values
Coordinate Geometry and Lines
Graphs of Second-Degree Equations
Sigma Notation
Proofs of Theorems
The Logarithm Defined as an Integral
Complex Numbers
Answers to Odd-Numbered Exercises
Table of Contents provided by Publisher. All Rights Reserved. | {"url":"http://www.ecampus.com/single-variable-calculus-vol-2-early/bk/9780538498708","timestamp":"2014-04-18T01:06:20Z","content_type":null,"content_length":"64059","record_id":"<urn:uuid:236ee5cb-d505-4b2f-aa3f-de6fad4138a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Count and die
The history of mathematics
THE romantic figure at the heart of Mario Livio's fascinating book about mathematical equations and symmetry is a brilliant Frenchman, Evariste Galois, who died in 1832 at the age of only 20.
Galois's work was the culmination of an ancient quest to solve ever more complicated mathematical equations. Simple linear equations (such as 2+x=5) were routinely solved in ancient Babylon.
Quadratic equations (with an x^2) are a bit harder, but the formula for solving them was discovered by medieval Arabic mathematicians.
Cubic equations are a different matter. Mr Livio describes the feverish search for a formula in Renaissance Italy, with mathematicians clashing in public equation-solving contests. Both the cubic and
the quartic equations were finally solved in the 1540s by Gerolamo Cardano and Ludovico Ferrari, although not without a spectacularly brutal scientific feud complete with accusations of plagiarism
and bad faith.
Clearly, the quintic was next, but progress was remarkably slow. Only in the 19th century did the answer become clear, when Niels Henrik Abel, a Norwegian mathematician, showed that the quintic is
insoluble. While many individual quintic equations can be solved, there is no simple formula that deals with them all.
By developing a general theory of equations, Galois provided the definitive solution to the problem. Earlier mathematicians had concentrated on one type of equation at a time. Instead of looking for
individual solutions of an equation, Galois turned his attention to the symmetries in its solutions. As Mr Livio explains, Galois's achievement was to show that the structure of these symmetries
determines whether the equation has a simple solution. This explained at a stroke why the quintic is insoluble—there are too many symmetries—and created a new way of understanding mathematical
The idea of symmetry is one of the book's recurring themes. Mr Livio describes many types of symmetry, including the sixfold rotational symmetry of snowflakes (which so fascinated the astronomer
Johannes Kepler that he wrote an entire treatise on “The six-cornered snowflake”). He also writes passionately about the role of symmetry in human perception and the arts, and the fundamental
importance of symmetry in the laws of physics.
The centrepiece of the book is Galois's death. The night before he died, the young mathematician frantically scribbled an account of his unpublished discoveries, at one point writing the words “I
have no time” in the margin. The following morning, fatally wounded by a pistol shot in a duel, he uttered his last words to his brother: “Don't cry, I need all my courage to die at 20.” | {"url":"http://www.economist.com/node/4316105/print","timestamp":"2014-04-19T00:56:38Z","content_type":null,"content_length":"60345","record_id":"<urn:uuid:126b65c6-6e01-4875-8b4b-d794a0f2a921>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutoring
Your algebra tutor should be able to go over each and every concept of algebra without jumping ahead. Algebra tutoring, like most other forms of mathematics tutoring, is about patience,
understanding, and mentorship. It is only with the right tutor-student relationship that a student can excel in algebra, thus fostering a passion for math leading up to more advanced subjects like
trigonometry and calculus. Something as complex as solving for variables and expressing an equation in the form of a graph cannot be brushed over. An algebra tutor who is passionate
about math can spend as much time on a very specific part of a concept without getting frustrated or giving up. It is this kind of tutor that your student needs. And Fortune Tutoring's algebra tutors
fit this model.
Our algebra tutors are well versed in all facets of math, but choose to specialize in algebra for very personal reasons. Our tutors come to us from very prestigious Universities and impeccable
academic backgrounds. Thus, you can be sure that your algebra tutoring experience from Fortune Tutoring will be the best possible academic advantage for your student.
Below are some of the concepts that our algebra tutors specialize in:
Pre Algebra
• Algebraic Expressions
• Formulas and Equations
• Fractions and Decimals
• Metric System
• Scientific Notation
Algebra I
• Linear Equations
• Polynomials and Factoring
• Rational Equations
• Functions and Graphing
• Quadratic Equations
Algebra II
• Factoring and Solving Equations
• Exponential and Logarithmic Functions
• Roots and Irrational Numbers
• System of Equations and Inequalities
• Imaginary and Complex Numbers | {"url":"http://www.fortunetutoring.com/algebra-tutoring.html","timestamp":"2014-04-17T00:48:08Z","content_type":null,"content_length":"24592","record_id":"<urn:uuid:5ee5fba4-6623-4120-97cb-0ed04076e517>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00129-ip-10-147-4-33.ec2.internal.warc.gz"} |
The content of these web pages will not be updated anymore.
Two fact sheets present the HP2C initiative and its projects.
Large scale Density Functional Electronic Structure Calculations in a Systematic Wavelet Basis Set
Solving the electronic Schroedinger equation is the basis for the solution of the majority of problems in chemistry, solid state physics, materials science, nano sciences and molecular biology. Even
though density functional calculations are nowadays standard for small systems they are at present too slow for large systems with more than some thousand atoms.
The BigDFT electronic structure code is a recently developed DFT electronic structure code which uses Daubechies wavelets as a basis set. Wavelets combine the advantages of Gaussian basis sets and
plane wave basis sets. Wavelets are adaptive and localized in real space as are Gaussians and at the same time they form like plane waves a systematic basis set. A systematic basis set is a basis set
that allows all quantities to be calculated with arbitrarily high accuracy for a sufficiently large basis set.
The BigDFT code is at present a mixed MPI/OpenMP code. Since one MPI process treats one or several Kohn-Sham orbitals, the number of MPI processes cannot exceed the number of orbitals. In order to scale this code to some $10^5$ or $10^6$ cores, the following possibilities exist. In the present MPI/OpenMP parallelization, one MPI process is executed on several cores of a node. Since the number of cores per node will increase strongly, the code will run faster once more cores become available. Given the modest speedups of our (and other) codes within OpenMP, it is however questionable whether OpenMP is a promising approach. A large part of the version of the code for periodic boundary conditions has already been ported to GPUs. On a GPU multicore architecture the speedup is considerable (between 20 and 30 at present) in double precision.
When the computing speed of a single node increases by a large amount through the use of GPUs, the relative cost of the communication part will increase strongly in our code unless the bandwidth of the communication network increases by as much as the single-node speed. This will presumably not happen. In order to reduce the time to solution for constant system sizes, one of the most important tasks in this project will therefore be to design and implement new communication algorithms which will accelerate the communication part.
Scaling to a very large number of cores can also be achieved if the system size, i.e. the number of electrons, is increased. For more than some 1000 atoms a linear scaling approach is recommended. In this case the electronic orbitals are no longer extended over the whole system but are localized in smaller subvolumes. Such a localization will lead to new communication patterns which are less global than in traditional cubically scaling algorithms.
A second, and relatively easy, method to scale to a very large number of processors is to combine an electronic structure calculation with a specific application, which will lead to an additional level
of parallelization. We will combine a parallel global optimization algorithm with the BigDFT program.
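To make the orbital-based decomposition described above concrete, here is a toy sketch in Python (not BigDFT code; the round-robin assignment is our own simplification) of why the number of usable MPI processes is capped by the number of Kohn-Sham orbitals in this scheme:

```python
# Toy sketch: distribute Kohn-Sham orbitals over MPI ranks.
# Illustrates why this decomposition caps the usable MPI processes
# at the number of orbitals; it is not actual BigDFT code.

def distribute_orbitals(n_orbitals, n_ranks):
    """Assign each orbital index to a rank, as evenly as possible."""
    if n_ranks > n_orbitals:
        raise ValueError("more MPI ranks than orbitals: extra ranks would sit idle")
    assignment = {rank: [] for rank in range(n_ranks)}
    for orbital in range(n_orbitals):
        assignment[orbital % n_ranks].append(orbital)
    return assignment

# Example: 10 orbitals on 4 ranks -> rank loads of 3, 3, 2, 2 orbitals.
print(distribute_orbitals(10, 4))
```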
Principal Investigator
• Prof. Stefan Goedecker, University of Basel
| {"url":"http://www.hp2c.ch/projects/bigdft/","timestamp":"2014-04-20T18:22:57Z","content_type":null,"content_length":"13434","record_id":"<urn:uuid:fafc3204-1f8a-47ad-9f52-a6d9620c95e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Elimination and substitution
February 24th 2009, 02:18 PM #1
Junior Member
Sep 2008
Elimination and substitution
I have this problem: y=1x and y=.5x+4
I got (2,4) and was wondering if you guys can show me the steps for both elimination and substitution. Also how would I graph this? just put one dot on the graph?
How did you get $(2,4)?$ If you try checking your work by substituting, you get
$x=2\Rightarrow y=2$
for the first, and
$x=2\Rightarrow y=5$
for the second. Your point is not on either of the curves, so it certainly could not be in the solution set.
Well, I did 1x=.5x+4
I subtracted .5x from both sides making 1x and .5x
now its .5x=4
divided .5x into 4 and got 2
so x=2?
Can you show me step by step how you're doing it and how you're substituting?
Oh, sorry about that.
If you don't mind can you show me the elimination and substitution equation you used for this. Thanks!
Certainly. We have
$\left\{\begin{array}{rcl} y&=&x\\ y&=&\frac12x+4 \end{array}\right.$
Substituting $x=y$ into the second equation produces
$y=\frac12y+4,$
and solving for $y$ gives
$\frac12y=4\Rightarrow y=8.$
Back-substituting, we get $x=8.$
Let's first rearrange the equations a little,
$\left\{\begin{array}{rcl} y-x&=&0\\ 2y-x&=&8 \end{array}\right.$
Subtract the first equation from the second,
$\left\{\begin{array}{rcl} y-x&=&0\\ y&=&8 \end{array}\right.$
and subtract the second equation from the first:
$\left\{\begin{array}{rcl} -x&=&-8\\ y&=&8 \end{array}\right.\Rightarrow\left\{\begin{array}{rcl} x&=&8\\ y&=&8 \end{array}\right.$
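As a quick machine check of the solution above, a minimal sketch with Python's sympy (the variable names are ours):

```python
# Minimal check of the system y = x and y = 0.5x + 4 with sympy.
import sympy as sp

x, y = sp.symbols('x y')
solution = sp.solve([sp.Eq(y, x), sp.Eq(y, sp.Rational(1, 2) * x + 4)], [x, y])
print(solution)  # {x: 8, y: 8}
```

As for graphing: each equation is a line, and the solution (8, 8) is the single point where the two lines cross, so yes, the solution set shows up as one dot on the graph.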
| {"url":"http://mathhelpforum.com/algebra/75572-elimination-substitution.html","timestamp":"2014-04-18T07:08:21Z","content_type":null,"content_length":"48398","record_id":"<urn:uuid:0282f4b0-7151-4c73-86b5-5b0438bfbaa8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Questions
Posted by steve on Tuesday, May 24, 2011 at 11:09pm.
Any answers are greatly appreciated.
a) In the game of “In-Between” two cards are drawn from a standard deck of 52 cards. Then a third card is drawn. To win, the value of the third card must be between the first two cards. Determine the
probability of winning, given that the first two cards drawn are a 5 and a 9.
A: 8/25
B. 4/13
C. 3/50
D. 6/25
b) A married couple decides to have a family. They would like to have 5 children and of these 5 children the mother wishes to have exactly 3 girls. If both sexes are equally probable, what is the
probability of exactly 3 girls?
A. 5/8
B. ½
C. 5/16
D. 5/32
c) In 7 spins of a spinner the arrow landed on the blue sector at least 3 times. Which of the following is the complement of this event?
A. landing on 0 or 1 or 2 blue sectors
B. landing on 7 blue sectors
C. landing on 1 or 2 or 3 blue sectors
D. landing on more than 3 blue sectors
d) Statistics indicate that 45% of smokers get emphysema while 38% get lung cancer. 12% get both emphysema and lung cancer. What is the probability that a randomly selected smoker does not have
either of these diseases?
A. 15%
B. 39%
C. 61%
D. 85%
e) A survey at a high school determines that 69% of teenagers listen to rock music and 33% listen to classical music. 16% listen to neither type of music. What is the probability that a randomly
selected student listens to both classical and rock music?
A. 2%
B. 18%
C. 84%
D. 86%
f) A single card is drawn from a standard deck of playing cards. Determine the probability that the card is a diamond or face card.
A. ¼
B. 25/52
C. 3/52
D. 11/26
g) A box contains 7 green marbles and 5 blue marbles. Find the probability of drawing 3 green marbles if the marbles are not replaced after each draw.
A: 7/44
B. 35/288
C. 343/1728
D. 7/12
h) A bag contains 3 white balls and 4 black balls. A second bag has 3 black balls and 6 red balls. If one ball is drawn from each bag, what is the probability that 1 white ball and 1 red ball will be drawn?
A. 2/7
B. 5/21
C. 2/3
D. 23/21
i) In a standard deck of 52 playing cards, 3 cards are selected without replacement. The probability that all 3 cards are 7’s is:
A. 1/5525
B. 1/2197
C. 3/17576
D. 3/125
j) A high school soccer team has 2 goalies, 6 midfielders and 12 forwards. If 3 team members are selected at random what is the probability that all 3 will be forwards?
A. 243/250
B. 1/57
C. 27/125
D. 11/57
k) Team A defeats Team B 60% of the time. During the basketball season Team A and Team B play each other 3 times. What is the probability that Team A will win exactly 2 of the 3 games they play?
A. 0.432
B. 0.360
C. 0.216
D. 0.288
l) The cross-country running team consists of 6 boys and 8 girls. For the next meet the coach must select two co-captains. What is the probability these two co-captains are boys?
A. 3.330x10^-4
B. 48/91
C. 15/91
D. 4/13
m) In a first year University Mathematics class there are 130 females and 110 males. 35 females and 25 males own a cell phone. If a student is randomly selected from the class and that student owns a
cell phone, determine the probability the student is a girl.
A. 5/48
B. 19/48
C. 7/48
D. 7/12
n) The student government is holding elections for the school Prime Minister. There are three students running for Prime Minister. Student A is twice as likely to win as student B and student B is
twice as likely to win as student C. Determine the probability that student A wins the election.
A. ¾
B. 2/3
C. 4/7
D. 4/5
o) Two cards are dealt without replacement from a well-shuffled deck of 52 cards. Determine the probability the second card is a face card?
A. 11/221
B. 40/221
C. 11/51
D. 3/13
p) A multiple-choice test has 14 questions. Each question has 4 choices, only one of which is correct. If a student answers each question by guessing randomly, what is the probability that the student gets at least 9 questions correct?
A. 0.0018
B. 0.00034
C. 0.0022
D. 0.9982
q) A diagnostic test was developed to detect a disease. The test is 98% accurate, which means the outcome of the results will be correct 98% of the time. It is known that 10% of the population has
the disease. Determine the probability that a randomly selected person tests negative.
A. 0.116
B. 0.020
C. 0.980
D. 0.884
• Probability Questions - drwls, Wednesday, May 25, 2011 at 4:00am
Harrumph. You are asking us to take a 17 question test for you. You have shown no effort at all. That is not what we do here.
In (o), NOT KNOWING what the first card drawn is, the probability of the second card being a face card is 12/52 = 3/13. It could be any card in the original 52 card deck, with equal probability.
Tell us what you think the other answers are and someone will comment on your logic.
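For anyone checking their work later, here is a short sketch in Python that does a few of the parts by direct counting; treat it as a worked example rather than an answer key.

```python
# Direct-count checks for a few of the parts above.
from fractions import Fraction
from math import comb

# (a) Third card strictly between 5 and 9: ranks 6, 7, 8 give 3*4 = 12
#     winning cards among the 50 unseen cards.
p_a = Fraction(3 * 4, 50)
print(p_a)  # 6/25

# (b) Exactly 3 girls in 5 children, each sex equally likely.
p_b = Fraction(comb(5, 3), 2**5)
print(p_b)  # 5/16

# (o) Second card is a face card, first card unknown: by symmetry the
#     second card is uniform over the whole deck, so 12/52.
p_o = Fraction(12, 52)
print(p_o)  # 3/13
```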
| {"url":"http://www.jiskha.com/display.cgi?id=1306292961","timestamp":"2014-04-23T07:06:26Z","content_type":null,"content_length":"13387","record_id":"<urn:uuid:e1d3556f-a73e-48b6-bad3-438de0f85f15>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
The Hyperelliptic Mapping Class Group Of Klein Surfaces
Gamboa Mutuberria, José Manuel and Bujalance, E. and Costa, A.F. (2001) The Hyperelliptic Mapping Class Group Of Klein Surfaces. Proceedings Of The Edinburgh Mathematical Society, 44 (2). pp.
351-363. ISSN 0013-0915
Official URL: http://journals.cambridge.org/abstract_S0013091599000322
In this paper we study the algebraic structure of the hyperelliptic mapping class group of Klein surfaces, which is closely related to the mapping class group of punctured discs.
This group plays an important role in the study of the moduli space of hyperelliptic real algebraic curves.
Our main result provides a presentation by generators and relations for the hyperelliptic mapping class group of surfaces of prescribed topological type.
Item Type: Article
Uncontrolled Keywords: Mapping Class Group; Klein Surfaces
Subjects: Sciences > Mathematics > Algebra
ID Code: 15279
Deposited On: 21 May 2012 10:36
Last Modified: 06 Feb 2014 10:20
| {"url":"http://eprints.ucm.es/15279/","timestamp":"2014-04-20T13:47:07Z","content_type":null,"content_length":"28955","record_id":"<urn:uuid:fb6f0068-c837-4ae3-931a-a29b0199457d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
A matrix approach to the analytic-numerical solution of mixed partial differential systems.
(English) Zbl 0839.65105
The paper deals with systems of linear partial differential equations of the form (1), where $u=(u_1,\dots,u_m)$ is an unknown vector function and $A$, $B$ are constant $m\times m$ complex matrices. It is supposed that every eigenvalue of $\frac{1}{2}(A+A^H)$ is positive ($A^H$ denotes the conjugate transpose of $A$). Assuming initial and boundary conditions (2) $u(x,0)=F(x)$, $u(0,t)=u(p,t)=0$, the authors seek the solution of (1), (2) in the form of a Fourier series with respect to $x$, whose coefficients depend on $t$. An approximate solution is obtained by truncating this series and by suitable approximation of its coefficients. An estimate of the error is proved.
65M70 Spectral, collocation and related methods (IVP of PDE)
35K15 Second order parabolic equations, initial value problems | {"url":"http://zbmath.org/?q=an:0839.65105","timestamp":"2014-04-19T07:02:00Z","content_type":null,"content_length":"23553","record_id":"<urn:uuid:89721938-90f5-49dd-8b4e-2eb571b55a84>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: [asa] Science proves there's no need for
From: D. F. Siemens, Jr. <dfsiemensjr@juno.com> Date: Fri Oct 10 2008 - 16:23:43 EDT
On Fri, 10 Oct 2008 11:29:29 -0600 (MDT) gordon brown
<Gordon.Brown@Colorado.EDU> writes:
> >>
> > I see a practical problem with this claim, for it means that all
> the
> > specific terms in the axioms are meaningless. As a consequence,
> any set
> > of consistent axioms, that is, empty terms with empty relations,
> would be
> > investigated. However, it seems that only a limited set of terms
> and
> > relations are worked with. Thus a plane is a two-dimensional
> structure,
> > with the earlier assumption that it is Euclidean restricted. What
> I can
> > draw on a sheet of paper fits Euclid's or Playfair's parallel
> axiom, with
> > the unexampled assumption that it be infinite. But it is equally
> possible
> > to deal with the surface of the earth as a Riemannian plane. Also,
> the
> > mathematical functions are essentially the same whether we deal
> with real
> > numbers, modular numbers or infinities, though there are
> different
> > consequences. So I hold that there are, despite claims to avoid
> > explanations, tacit assumptions about the underlying meanings.
> > Dave (ASA)
> The preferred modern approach to geometry is to use some
> approximation to
> David Hilbert's axioms. The undefined terms are point, line, lie on,
> between, and congruent. These are the basis for defining all other
> geometric terms. Of course, the axioms also use nongeometric words
> such as
> if and then as well as terms from set theory and arithmetic, both of
> which
> have their own axioms. No matter what one thinks these undefined
> terms
> should mean, only those of their properties which are given by the
> axioms
> can be used in proofs. If one removes Hilbert's Parallel Postulate
> (Playfair's Postulate), both Euclidean and hyperbolic geometry
> satisfy the
> remaining postulates. Restore it and you have Euclidean geometry. If
> you
> replace it by the Hyperbolic Parallel Postulate, you have hyperbolic
> geometry.
> For a nongeometric example of different interpretations of terms
> producing
> valid models, we can have subsets of a given set together with
> unions,
> intersections, and complements, or we can have propositions with
> and, or,
> & negation, or we can have divisors of some given square-free
> integer n
> with least common multiples, greatest common divisors, and division
> into
> n.
> Gordon Brown (ASA member)
So the current attitude is to deal with a purely formal system with terms
whose sole meaning is their abstract interrelationship. However, their
activity betrays something deeper, for I have not found them playing with
any random set of symbols in arbitrary combinations. Hilbert's axiom set
sprang from the desire to have proofs that do not depend on anything but
the logical relationships, whereas Euclid's version required deriving
some evidence from the diagrams. A similar restriction holds, I think,
with the modification of Peano's postulates so that they specifically
produce the sequence of integers, which was the original intent, rather
than multiple sequences.
Since mathematical calculi, like logical calculi, consist of tautologies,
any substitution instance that "begins" true will maintain truth. In the
case of logical calculi, a requirement is that a term and its complement
must cover the entire universe of discourse. This gives a problem with
terms which have no sharp line of distinction. While 'bald' is the usual
term cited, the problem applies broadly. We usually don't worry about
'red' and 'not-red', but there is also 'reddish' and 'partly red' among
empirical applications. It's easier to posit terms with no empirical
exemplification than to sweat applied mathematics.
Dave (ASA)
Received on Fri Oct 10 16:28:56 2008 | {"url":"http://www2.asa3.org/archive/asa/200810/0150.html","timestamp":"2014-04-19T07:36:05Z","content_type":null,"content_length":"12376","record_id":"<urn:uuid:dc1a4a0c-3b05-4119-aec4-f2d52d555ed7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
Journal article
KONEČNÝ Filip, IOSIF Radu and BOZGA Marius. Deciding Conditional Termination. Lecture Notes in Computer Science. 2012, vol. 2012, no. 7214, pp. 252-266. ISSN 0302-9743.
Publication language: english
Original title: Deciding Conditional Termination
Title (cs): Rozhodování podmíněné konečnosti
Pages: 252-266
Place: DE
Year: 2012
Journal: Lecture Notes in Computer Science, Vol. 2012, No. 7214, DE
ISSN: 0302-9743
termination problem, conditional termination problem, difference bounds relations, octagonal relations, finite monoid affine relations
This paper addresses the problem of conditional termination, which is that of defining the set of initial configurations from which a given program terminates. First, we define the dual set of initial configurations, those from which a non-terminating execution exists, as the greatest fixpoint of the pre-image of the transition relation. This definition enables a representation of this set whenever the closed form of the loop's relation is definable in a logic that has quantifier elimination; this entails the decidability of the termination problem for such loops. Second, we present effective ways to compute the weakest precondition for non-termination for difference-bounds and octagonal (non-deterministic) relations, avoiding complex quantifier eliminations. We also investigate the existence of linear ranking functions for such loops. Finally, we study the class of linear affine relations and give a method of under-approximating the termination precondition for a non-trivial subclass of affine relations. We have performed preliminary experiments on transition systems modeling real-life systems, and have obtained encouraging results.
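To make the greatest-fixpoint characterization concrete, here is a toy sketch in Python over a small finite state space. The loop is made up for illustration; the paper itself works symbolically with difference-bounds, octagonal, and affine relations rather than by enumeration.

```python
# Toy illustration: the set of states from which a non-terminating run
# exists is the greatest fixpoint of X -> guard ∩ pre(X).
# Loop (made up for illustration): while 0 < x < 100: x = x + y

states = [(x, y) for x in range(-5, 106) for y in range(-3, 4)]

def guard(s):
    return 0 < s[0] < 100

def step(s):
    x, y = s
    return (x + y, y)

# Start from all guard states and repeatedly remove states whose
# successor has already been removed; the fixpoint is the set of
# initial states with a non-terminating execution.
nonterm = {s for s in states if guard(s)}
while True:
    keep = {s for s in nonterm if step(s) in nonterm}
    if keep == nonterm:
        break
    nonterm = keep

# Exactly the states with y == 0 and 0 < x < 100 survive.
assert nonterm == {(x, 0) for x in range(1, 100)}
print(sorted(nonterm)[:5], "...")
```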
author = {Filip Konečný and Radu Iosif and Marius Bozga},
title = {Deciding Conditional Termination},
pages = {252--266},
journal = {Lecture Notes in Computer Science},
volume = {2012},
number = {7214},
year = {2012},
ISSN = {0302-9743},
language = {english},
url = {http://www.fit.vutbr.cz/research/view_pub.php.en?id=9986} | {"url":"http://www.fit.vutbr.cz/research/view_pub.php.en?id=9986","timestamp":"2014-04-18T10:36:03Z","content_type":null,"content_length":"7836","record_id":"<urn:uuid:2e92a331-4a33-4fdd-9ead-7b4e47bb7bff>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
Carleman estimate for Zaremba boundary condition
Seminar Room 1, Newton Institute
The Zaremba boundary condition is a mixed boundary condition of the following type: on one part of the boundary we impose the Dirichlet boundary condition, and on the other part we impose the Neumann boundary condition. For such a problem we prove a logarithm-type estimate for the decay of the energy of a solution of a damped wave equation. In the talk we shall explain the plan of the proof. The main part is to prove a Carleman estimate in a neighborhood of the boundary where the type of boundary condition changes.
| {"url":"http://www.newton.ac.uk/programmes/INV/seminars/2011080510001.html","timestamp":"2014-04-19T01:57:05Z","content_type":null,"content_length":"6288","record_id":"<urn:uuid:c217cc4a-7bf7-47ea-8610-a5047ac74a01>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: "Representative sampling?"
Replies: 10 Last Post: Nov 4, 2000 2:57 PM
Re: "Representative sampling?"
Posted: Nov 4, 2000 2:57 PM
In article <8u1i7r$l7a$1@nnrp1.deja.com>, <robertd@athenesoft.com> wrote:
>However, there is at least one class of interesting problems for which
>deterministic sampling can yield better results than either random
>or stratified-random sampling. This is the class of problems of
>estimating an integral over some space, and for these problems sampling
>sequences can be constructed (so-called "low-discrepancy sequences")
>which yield results with less variance than strictly random sampling.
This comparison is a bit of the apples and oranges kind. The variance
for the random sampling is with respect to a random choice of sample
points, with the function being integrated held fixed. The variance
for the deterministic method is for random choice of function, with
the points held fixed. The expected performance of the deterministic
method for your problem will depend on whether the distribution over
functions assumed in deriving this result (a particular sort of
Gaussian process) is close to what your actual prior distribution over
functions is.
Radford M. Neal radford@cs.utoronto.ca
Dept. of Statistics and Dept. of Computer Science radford@utstat.utoronto.ca
University of Toronto http://www.cs.utoronto.ca/~radford
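A quick numerical illustration of the contrast being discussed, sketched in Python; the integrand, sample size, and base-2 van der Corput sequence are arbitrary choices for the demonstration.

```python
# Plain Monte Carlo vs. a base-2 van der Corput (low-discrepancy)
# sequence for estimating the integral of f(x) = x^2 on [0, 1],
# whose true value is 1/3.
import random

def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput sequence (radical inverse)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

f = lambda x: x * x
N = 1024

mc  = sum(f(random.random()) for _ in range(N)) / N
qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N

print(abs(mc - 1/3), abs(qmc - 1/3))  # the QMC error is typically far smaller
```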
| {"url":"http://mathforum.org/kb/message.jspa?messageID=1534267","timestamp":"2014-04-20T08:49:16Z","content_type":null,"content_length":"29192","record_id":"<urn:uuid:e152e8b0-3edd-4a41-9b69-eecef39fc4b8>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Heat Transfer through Evaporators
The heat convection equation will provide a good calculation of the heat transfer, but it requires knowledge of the LMTD (log mean temperature difference), meaning you need to know the hot fluid inlet temperature, hot fluid outlet temperature, cold fluid inlet temperature, and cold fluid outlet temperature. You also need to be able to calculate U, the overall heat transfer coefficient, which requires the use of empirical correlations (for condensers anyway, not sure about evaporators). It can be difficult to accurately calculate all these terms.
If you are simply looking for a good approximation of the heat transfer, an energy rate equation (enthalpy equation) should suffice. In the case of an evaporator, the working fluid (refrigerant) acts as if it has an infinite specific heat capacity and can be assumed to be isothermal. When adding energy to this fluid, the enthalpy of vaporization is the major contributor, and sensible heat can probably be neglected, provided the refrigerant is close to saturated at inlet and outlet (isn't too subcooled or superheated). If this is the case, the energy equation simplifies to the rate of heat added to the fluid, Q = ṁ·(h_out − h_in).
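A minimal sketch of that balance in Python; every number below is a made-up placeholder, and real values would come from property tables for the actual refrigerant.

```python
# Rough evaporator duty from an enthalpy balance, Q = mdot * (h_out - h_in).
# All numbers are placeholders, not data for any particular fluid.
mdot  = 0.05    # refrigerant mass flow rate, kg/s   (assumed)
h_in  = 110.0   # inlet specific enthalpy, kJ/kg     (assumed)
h_out = 255.0   # outlet specific enthalpy, kJ/kg    (assumed, near saturated vapor)

Q = mdot * (h_out - h_in)   # kW, since (kJ/kg) * (kg/s) = kJ/s
print(f"Evaporator duty: {Q:.2f} kW")  # 7.25 kW with these placeholder values
```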
All you need to know is the flow rate of the refrigerant and the change in enthalpy from inlet to outlet (get this in thermo tables for whatever fluid you are looking at). | {"url":"http://www.physicsforums.com/showthread.php?t=707105","timestamp":"2014-04-18T21:29:05Z","content_type":null,"content_length":"24135","record_id":"<urn:uuid:8dfb045e-e341-48a2-adfa-caceb6978601>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
The “modern physics” course has a lab where students measure the speed of sound. The apparatus consists of an air-filled tube with a sound generator at one end and a microphone that can be set at any
specified position within the tube. Using an oscilloscope, the transit time between the sound generator and microphone can be measured precisely. Knowing the position p and transit time t allows the
speed of sound v to be calculated, based on the simple model p = v·t.
Here are some data recorded by a student group calling themselves “CDT”.
position transit time
(m) (millisec)
0.2 0.6839
0.4 1.252
0.6 1.852
0.8 2.458
1.0 3.097
1.2 3.619
1.4 4.181
Part 1.
Enter these data into a spreadsheet in the standard case-variable format. Then fit an appropriate model. Note that the relationship p = vt between position, velocity, and time translates into a
statistical model of the form p ~ t - 1 where the velocity will be the coefficient on the t term.
What are the units of the model coefficient corresponding to velocity, given the form of the data in the table above?
A meters per second
B miles per hour
C millimeters per second
D meters per millisecond
E millimeters per millisecond
F No units. It’s a pure number.
G No way to know from the information provided.
Compare the velocity you find from your model fit to the accepted velocity of sound (at room temperature, at sea level, in dry air): 343 m/s. There should be a reasonable match. If not, check whether
your data were entered properly and whether you specified your model correctly.
Part 2.
The students who recorded the data wrote down the transit time to 4 digits of precision, but recorded the position to only 1 or 2 digits, although they might simply have left off the trailing zeros
that would indicate a higher precision.
Use the data to find out how precise the position measurement is. To do this, make two assumptions that are very reasonable in this case:
1. The velocity model is highly accurate, that is, sound travels at a constant velocity through the tube.
2. The transit time measurements are correct. This assumption reflects current technology. Time measurements can be made very precisely, even with inexpensive equipment.
Given these assumptions, you should be able to calculate the position from the transit time and velocity. If the measured position differs from this model value — as reflected by the residuals — then
the measured position is imprecise. So, a reasonable way to infer the precision of the position is by the typical size of residuals.
How big is a typical residual? One appropriate way to measure this is with the standard deviation of the residuals. Give a numerical value for this.
0.001 0.006 0.010 0.017 0.084 0.128
Part 3.
The students’ lab report doesn’t indicate how they know for certain that the sound generator is at position zero. One way to figure this out is to measure the generator’s position from the data
themselves. Denoting the actual position of the sound generator as p[0], the equation relating position and transit time is p = p[0] + v·t.
This suggests fitting a model of the form p ~ 1 + t, where the coefficient on 1 will be p[0] and the coefficient on t will be v.
Fit this model to the data.
What is the estimated value of p[0]?
-0.032 -0.012 0.000 0.012 0.032
Notice that adding new terms to the model reduces the standard deviation of the residuals. What is the new value of the standard deviation of the residuals?
0.001 0.006 0.010 0.017 0.084 0.128
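If you want to check both fits outside your statistics package, here is a minimal numpy sketch using the data above; the coefficients come out in meters per millisecond, hence the factor of 1000 to convert to m/s.

```python
# Fit both models from the exercise to the recorded data with numpy.
import numpy as np

t = np.array([0.6839, 1.252, 1.852, 2.458, 3.097, 3.619, 4.181])  # ms
p = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])                 # m

# Model p ~ t - 1 (no intercept): v is the single coefficient.
v_no_int = np.linalg.lstsq(t[:, None], p, rcond=None)[0][0]

# Model p ~ 1 + t: the intercept estimates p[0], the slope estimates v.
X = np.column_stack([np.ones_like(t), t])
p0, v = np.linalg.lstsq(X, p, rcond=None)[0]

print(v_no_int * 1000, "m/s (no intercept)")
print(v * 1000, "m/s, p0 =", p0, "m (with intercept)")
```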
Compare the estimated speed of sound found from the model p ~ t to the established value: 343 m/s. Notice that the estimate is better than the one from the model p ~ t - 1 that didn't take into
account the position of the sound generator. | {"url":"http://www.macalester.edu/~kaplan/ISM/Exercises-HTML/6.5.html?access=not-defined&docname=6.5","timestamp":"2014-04-19T12:03:38Z","content_type":null,"content_length":"16922","record_id":"<urn:uuid:7c793ec6-4b79-4caa-86ef-ce4bba888c3c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area Calculator
Here is a handy little tool you can use to find the area of plane shapes.
Choose the shape, then enter the values.
The height h is measured at right angles to the base b.
More Complicated Shapes
For more complicated shapes you could try the Area of Polygon by Drawing Tool.
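If you would rather compute polygon areas yourself, here is a small Python sketch of the shoelace formula, a standard way tools like that can compute the area of a polygon from its vertices.

```python
# Shoelace formula: area of a simple polygon from its vertex coordinates.
def polygon_area(vertices):
    """vertices: list of (x, y) pairs in order around the polygon."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Example: a triangle with base 6 and height 4 -> area 12.
print(polygon_area([(0, 0), (6, 0), (0, 4)]))
```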
| {"url":"http://www.mathsisfun.com/area-calculation-tool.html","timestamp":"2014-04-19T14:32:10Z","content_type":null,"content_length":"5689","record_id":"<urn:uuid:c8263323-cd01-490c-8503-666ec90d3a6b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
mixed partial derivatives
February 5th 2009, 04:47 AM #1
Junior Member
Oct 2008
mixed partial derivatives
can anyone help please...
f = exp(-xy^2)
find $\frac{\partial^2 f}{\partial x \partial y}$
and $\frac{\partial^2 f}{\partial y \partial x}$
are they both equal justify your answer
Last edited by sonia1; February 5th 2009 at 06:33 AM. Reason: wrong function written
For $\frac{\partial^2 f}{\partial x \partial y}$, differentiate with respect to x first, then with respect to y. Remember, when you differentiate with respect to x, you treat all other variables as constants. And when you differentiate with respect to y, you treat all other variables as constants.
$\frac{\partial f}{\partial x} = \frac{\partial}{\partial x} \exp(xy^2)$
By the chain rule:
$\frac{\partial f}{\partial x} = \exp(xy^2) \times \frac{\partial}{\partial x} (xy^2)$
$\frac{\partial f}{\partial x} = \exp(xy^2) \times y^2$
$\frac{\partial^2 f}{\partial x \partial y} = \frac{\partial}{\partial y} (\frac{\partial f}{\partial x}) = \frac{\partial }{\partial y} ( \exp(xy^2) \times y^2)$
By the product rule:
$\frac{\partial^2 f}{\partial x \partial y} = \exp(xy^2) \frac{\partial}{\partial y} (y^2) + y^2 \times \frac{\partial }{\partial y} \exp(xy^2)$
Now carry out the differentiation, remembering to use the chain rule on the 2nd term.
$\frac{\partial^2 f}{\partial x \partial y} =\exp(xy^2) \times 2y + y^2 \exp(xy^2) \times \frac{\partial }{\partial y} (xy^2)$
$\frac{\partial^2 f}{\partial x \partial y} =\exp(xy^2) \times 2y + y^2 \exp(xy^2) \times 2xy$
$\frac{\partial^2 f}{\partial x \partial y} =\exp(xy^2) \times 2y +2xy^3 \exp(xy^2)$
Now do the same again, only change the order of differentiation. Wrt y first, then wrt x. See if you get the same result.
In general $\frac{\partial^2 f}{\partial x \partial y} = \frac{\partial^2 f}{\partial y \partial x}$ if the function is sufficiently smooth.
Hello, sonia1!
Watch out for "products" . . .
$f(x,y) \:=\:e^{xy^2}$
Find: . $\frac{\partial^2 f}{\partial x \partial y}\,\text{ and }\,\frac{\partial^2 f}{\partial y \partial x}$
Are they equal? . . . . yes
$\frac{\partial f}{\partial y} \;=\;e^{xy^2}\!\cdot\!2xy \;=\;2xy\!\cdot\!e^{xy^2}$
$\frac{\partial^2 f}{\partial x\partial y} \;=\;2xy\!\cdot\!e^{xy^2}\!\cdot\!y^2 + 2y\!\cdot\! e^{xy^2} \;=\;2ye^{xy^2}(xy^2+1)$
$\frac{\partial f}{\partial x} \;=\;e^{xy^2}\!\cdot\!y^2 \;=\;y^2\!\cdot\!e^{xy^2}$
$\frac{\partial^2f}{\partial y\partial x} \;=\;y^2\!\cdot\!e^{xy^2}\!\cdot\!2xy + 2y\!\cdot\!e^{xy^2} \;=\;2ye^{xy^2}(xy^2+1)$
Quote: (the first reply above, quoted in full)
What do you mean by "it is equal if the function is sufficiently smooth"?
Quote: (the second reply above, quoted in full)
sorry i wrote the function wrong.
Also, what do u get when f = cos(y/x)?
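On the "sufficiently smooth" question: the standard statement is Schwarz's theorem, which guarantees the two mixed partials agree wherever the second partial derivatives exist and are continuous. A quick symbolic check for both functions in this thread, sketched with Python's sympy:

```python
# Check equality of the mixed partials for both functions in the thread.
import sympy as sp

x, y = sp.symbols('x y')
for f in (sp.exp(x * y**2), sp.cos(y / x)):
    f_xy = sp.diff(f, x, y)   # differentiate w.r.t. x, then y
    f_yx = sp.diff(f, y, x)   # differentiate w.r.t. y, then x
    print(f, sp.simplify(f_xy - f_yx))  # the difference simplifies to 0
```

For f = cos(y/x) the two orders agree everywhere the function is defined, i.e. away from x = 0.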
hey thankx a lot guyz I understand partial derivatives now
| {"url":"http://mathhelpforum.com/calculus/71939-mixed-partial-derivatives.html","timestamp":"2014-04-20T10:49:11Z","content_type":null,"content_length":"56766","record_id":"<urn:uuid:424154b3-4921-46cd-a59f-db402d05545d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Submitted to:
Agricultural Systems
Publication Type:
Literature Review
Publication Acceptance Date:
January 10, 2004
Publication Date:
February 24, 2004
Ma, L., Ahuja, L.R. 2004. Book review: Mathematical Modeling for System Analysis in Agricultural Research by Karel D. Vohnout. Agricultural Systems. Vol. 81, pp. 273-274.
Technical Abstract: The author assumes that mathematical modeling in agricultural systems is essentially an empirical process, with only a few feasible theoretical considerations. Free choice of mathematical models of agricultural systems is assumed. Thus the book deals with empirical modeling of agricultural system components based on statistical regression relationships. Less consideration is given to entire systems. The author neglects the fact that, over the past 30 years, models of agricultural systems have increasingly incorporated theoretical cause-and-effect relationships in soil-water-plant-atmosphere processes, even though some empirical relations are still used when knowledge gaps are encountered. For example, the movement of water through the soil-plant-atmosphere continuum is theoretically based, as are the transformation and transport of carbon and nitrogen. On the other hand, the empirical modeling approaches described in the book can be useful in developing and testing theoretical hypotheses about different components of a system, as well as about component interactions. The book may be useful for graduate-level teaching of the empirical statistical approaches to modeling. However, the level of mathematical knowledge required of students will have to be much higher than the current level of graduate students in
agricultural sciences in general. The book is more suitable for graduate students in agricultural statistics and engineering. | {"url":"http://www.ars.usda.gov/research/publications/publications.htm?SEQ_NO_115=167996","timestamp":"2014-04-17T14:08:11Z","content_type":null,"content_length":"22745","record_id":"<urn:uuid:60c29016-5511-48c6-b63f-4ce555f9cf48>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: RE: Cumulative Distribution Function
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: RE: Cumulative Distribution Function
From Alison Wong <aliwong@ucalgary.ca>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: Cumulative Distribution Function
Date Fri, 13 Sep 2002 11:33:50 -0600
Here is more information about my problem. In a broad sense, I am trying to calculate the inverse Mills ratio (IMR) to be used to calculate some form of generalized residual. I read this method in
Francis Vella's article:
"A Simple Estimator for Simultaneous Models with Censored Regressors", International Economic Review, Volume 34, Issue 2 (May, 1993), 441-457.
In his article he suggested that if you have a censored endogenous variable in a simultaneous equation system, then you could use a two-step procedure (similar to the one Heckman presents) to get consistent estimates. The first step is to regress the reduced form of the censored endogenous variable using tobit. From that you should get estimates of coefficients (beta) and standard errors (sigma); then you could use these estimates to calculate the generalized residuals. The problem is that his generalized residual equation has the cumulative distribution function (cdf) and probability density function (pdf) in it, presented in a form similar to the inverse Mills ratio. This is the part that I can't figure out how to calculate, because I can't find anything in Stata that would calculate the inverse Mills ratio directly. Of course, it calculates the IMR inside the two-step heckman command, but I don't want to use the heckman command, so is there any other way to obtain the IMR? I hope I have provided enough information this time. If anyone is interested in this paper, please let me know and I can send you a copy. I would really appreciate it if anyone who has read this article could tell me how to calculate the generalized residuals using Stata.
Thank you!
FEIVESON, ALAN H. (AL) (JSC-SD) (NASA) wrote:
Alison -
I'm afraid your question is not specific enough. What model are you running
to get estimates of your coefficients? What random variable do you want to
get the cdf and/or pdf for? What distribution does it have under the assumed
model? Do you want to evaluate this cdf/pdf as a function prescribed by your
model over a range of values, or do you want to compute an empirical cdf or
pdf from data?
Al Feiveson
-----Original Message-----
From: Alison Wong [mailto:aliwong@ucalgary.ca]
Sent: Thursday, September 12, 2002 12:01 PM
To: statalist@hsphsun2.harvard.edu
Subject: st: Cumulative Distribution Function
I have a problem of getting the cumulative distribution function (cdf) and probability density function (pdf). In particular I need to get the cdf and pdf evaluated at the estimated coefficients
(beta) and standard errors (sigma). I realize there are the "cumul" & "kdensity" commands in Stata, but those only allow a single variable, and I have more than one variable.
Does anyone have a better idea of how to get cdf or pdf?
Alison Wong
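For later readers: the quantity in question is the inverse Mills ratio, λ(z) = φ(z)/Φ(z), the standard normal density divided by its CDF, which any package with those two functions can produce. A minimal sketch in Python/scipy, offered as an illustration rather than as the Stata-specific answer the thread asked for:

```python
# Inverse Mills ratio: lambda(z) = phi(z) / Phi(z), i.e. the standard
# normal density over the standard normal CDF.
from scipy.stats import norm

def inverse_mills(z):
    return norm.pdf(z) / norm.cdf(z)

# e.g. evaluated at z = x*beta / sigma from a first-stage tobit/probit fit
print(inverse_mills(0.0))   # phi(0)/Phi(0) = 0.3989.../0.5 ≈ 0.7979
```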
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2002-09/msg00225.html","timestamp":"2014-04-20T14:32:30Z","content_type":null,"content_length":"9113","record_id":"<urn:uuid:b43da702-40da-4a1a-adbe-5a3e58bbf16d>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistical Computing Seminar
Introduction to Multilevel Modeling Using HLM
This seminar covers the basics of two-level hierarchical linear models using HLM 6.04. The single data set used for this seminar is created from the two SPSS data sets hsb1.sav and hsb2.sav, which come with the HLM software in the folder \HLM6\Examples\Chapter2.
• Input Data and Creating the "MDM" file
□ from a level-1 and a level-2 SPSS file
□ from a single SPSS file
□ from a level-1 and a level-2 SAS data file
• Exploratory Data Analysis
□ summary statistics
□ data-based graphs
• Model Building
□ unconditional means model
□ regression with means-as-outcomes
□ random-coefficient model
□ intercepts and slopes-as-outcomes model
• Hypothesis Testing, Model Fit
□ Multivariate hypothesis tests on fixed effects
□ Multivariate Tests of variance-covariance components specification
□ Model-based graphs
• Other Issues
□ Modeling Heterogeneity of Level-1 Variances
□ Models Without a Level-1 Intercept
□ Constraints on Fixed Effects
Starting HLM and Getting Data into HLM
The data file used for this presentation is a subsample from the 1982 High School and Beyond Survey and is used extensively in Hierarchical Linear Models by Raudenbush and Bryk. It consists of 7185
students nested in 160 schools.
Let's list all the variables used in this presentation.
□ id: school id, the linking variable to define the 2-level structure
□ mathach: student-level math achievement score, continuous outcome variable
□ student-level: female, and ses, the socio-economic status at the student level
□ school-level: schtype, the school type (0 = public and 1 = private), and meanses (ses aggregated to the school level)
HLM 6 uses an "MDM" file (Multivariate Data Matrix) for hierarchical linear models. An MDM file is a binary file and is constructed based on an MDM template file. A template file is an ASCII file
containing information on the location and the structure of the data files. Once the MDM file is created, HLM does not need the original data files anymore for the subsequent analyses. This enables
HLM to perform very efficient calculations for the models.
It is worth mentioning that HLM does not have any data management capability. That is to say that most of the variables in a model have to be created outside HLM, in other statistical packages, such
as in SPSS. For example, if you have a categorical variable at level-1 and you want to include it and possibly some interaction terms with other level-1 variables in the model, then you have to create all the dummy variables and all the interaction terms before entering your data into HLM. In short, HLM assumes that you have cleaned your data files, have done all the exploratory statistical analysis, and are ready to do your multilevel analysis.
1. Creating MDM from a level-1 and a level-2 data files in SPSS format
HLM website has many examples including some detailed ones with screen shots on how to create an MDM file using SPSS input file.
• Two data sets are usually required for a two-level model. A level-1 data file and a level-2 data file. The two files are linked by a common level-2 id variable.
• Level-1 cases must be grouped together by their level-2 id. A usual strategy is to sort both the level-1 data file and the level-2 data file by the level-2 id variable and save them before
entering them into HLM.
• The ID variable can be either numeric or character.
• All other variables in the data file must be numeric.
2. Creating MDM from a single SPSS data file
One improvement that HLM 6 offers is that HLM 6.x allows the use of a single data file containing both the level-1 and level-2 variables. The single data set should be sorted by the level-2 id
variable and the steps are basically the same as the steps for using level-1 and level-2 data files, except the same data file is used twice, once for level-1 and once for level-2. HLM will figure
out that it has to aggregate the single data file to get the level-2 variables. If the single file is huge, it might be more efficient to use the two-file approach.
For level-1, we choose the student-level variables (minority, female, ses, and mathach); for level-2, we choose the school-level variables (sector and meanses).
The last steps consist of a couple of clicks: Make MDM => Check Stats => Done.
3. Creating MDM from a level-1 and a level-2 data files in SAS format
Let's say that we have the HS&B file in SAS sas7bdat format, hsb1.sas7bdat and hsb2.sas7bdat. We can follow a similar routine to import the data files. HLM uses DBMSCOPY to import data files of
different formats. For example, to import files in .sas7bdat format, the first thing to do is to set the type of data to other non-ASCII data via the File then Preferences pull-down menu.
Following similar steps as described in the example of importing SPSS files, and by choosing the right data file type when we "Browse" for the files, we get to the corresponding variable-selection window.
The rest of the routine is fairly straightforward and we will demonstrate during the seminar and skip the minute details here.
4. What files have been created?
Let's now go back to the approach of using a single SPSS input file and find out what files have been created and how to use them in the future. The files created during the process of creating the MDM file are the MDM file itself (test.mdm), the template file (test.mdmt), and a descriptive statistics (.STS) file.
The MDM file test.mdm can be opened directly in HLM for analyses. What needs pointing out is the template file. The template file test.mdmt is an ASCII file, and here is what it contains:
*begin l1vars
*end l1vars
*begin l2vars
*end l2vars
If we just want to add a few new variables from the original data file, we can open this template file from within HLM or edit the template file directly.
The .STS file contains the descriptive statistics and is useful in checking if the data file used in creating the MDM file is what we think it is.
LEVEL-1 DESCRIPTIVE STATISTICS
VARIABLE NAME N MEAN SD MINIMUM MAXIMUM
MINORITY 7185 0.27 0.45 0.00 1.00
FEMALE 7185 0.53 0.50 0.00 1.00
SES 7185 0.00 0.78 -3.76 2.69
MATHACH 7185 12.75 6.88 -2.83 24.99
LEVEL-2 DESCRIPTIVE STATISTICS
VARIABLE NAME N MEAN SD MINIMUM MAXIMUM
SECTOR 160 0.44 0.50 0.00 1.00
MEANSES 160 -0.00 0.41 -1.19 0.83
Exploratory Data Analysis
HLM offers some really nice data-based graphs. It is always a good idea to plot our data before constructing our models.
1. Box-whisker plot
2. Scatter plot
Model Building
Model 1: Unconditional Means Model
This model is referred to as a one-way random-effects ANOVA and is the simplest possible random-effects linear model. The motivation for this model is the question of how much schools vary in their mean mathematics achievement. In terms of equations, we have the following, where $r_{ij} \sim N(0, \sigma^2)$ and $u_{0j} \sim N(0, \tau^2)$:
$MATHACH_{ij} = \beta_{0j} + r_{ij}$
$\beta_{0j} = \gamma_{00} + u_{0j}$
The data source for this run = C:\Data\test.mdm
The command file for this run = whlmtemp.hlm
Output file name = C:\Data\hlm2.txt
The maximum number of level-1 units = 7185
The maximum number of level-2 units = 160
The maximum number of iterations = 100
Method of estimation: restricted maximum likelihood
Weighting Specification
Weighting? Name Normalized?
Level 1 no
Level 2 no
Precision no
The outcome variable is MATHACH
The model specified for the fixed effects was:
Level-1 Level-2
Coefficients Predictors
---------------------- ---------------
INTRCPT1, B0 INTRCPT2, G00
The model specified for the covariance components was:
Sigma squared (constant across level-2 units)
Tau dimensions
Summary of the model specified (in equation format)
Level-1 Model
Y = B0 + R
Level-2 Model
B0 = G00 + U0
Iterations stopped due to small change in likelihood function
******* ITERATION 4 *******
Sigma_squared = 39.14831
INTRCPT1,B0 8.61431
Tau (as correlations)
INTRCPT1,B0 1.000
Random level-1 coefficient Reliability estimate
INTRCPT1, B0 0.901
The value of the likelihood function at iteration 4 = -2.355840E+004
The outcome variable is MATHACH
Final estimation of fixed effects:
Standard Approx.
Fixed Effect Coefficient Error T-ratio d.f. P-value
For INTRCPT1, B0
INTRCPT2, G00 12.636972 0.244412 51.704 159 0.000
The outcome variable is MATHACH
Final estimation of fixed effects
(with robust standard errors)
Standard Approx.
Fixed Effect Coefficient Error T-ratio d.f. P-value
For INTRCPT1, B0
INTRCPT2, G00 12.636972 0.243628 51.870 159 0.000
Final estimation of variance components:
Random Effect Standard Variance df Chi-square P-value
Deviation Component
INTRCPT1, U0 2.93501 8.61431 159 1660.23259 0.000
level-1, R 6.25686 39.14831
Statistics for current covariance components model
Deviance = 47116.793477
Number of estimated parameters = 2
1. The model we fit was
$MATHACH_{ij} = \beta_{0j} + r_{ij}$
$\beta_{0j} = \gamma_{00} + u_{0j}$
Filling in the parameter estimates we get
$MATHACH_{ij} = \beta_{0j} + r_{ij}$
$\beta_{0j} = 12.64 + u_{0j}$
$V(r_{ij}) = 39.15$
$V(u_{0j}) = 8.61$
2. If we describe our model in terms of a single equation, we substitute the level-2 equation back into the level-1 equation. Here is how it looks as a single equation, as shown in the HLM "mixed" window: $MATHACH_{ij} = \gamma_{00} + u_{0j} + r_{ij}$.
3. The estimated between variance, $\tau^2$, corresponds to the term INTRCPT1 in the output of Final estimation of variance components, and the estimated within variance, $\sigma^2$, corresponds to the term level-1 in the same output section.
4. Based on the covariance estimates, we can compute the intra-class correlation: 8.61431/(8.61431 + 39.14831) = .18. This tells us the portion of the total variance that occurs between schools.
5. To measure the magnitude of the variation among schools in their mean achievement levels, we can calculate the plausible values range for these means, based on the between variance we obtained
from the model: 12.64 ± 1.96*(8.61)^1/2 = (6.89, 18.39).
6. The reliability of the random effect of the level-1 intercept is the average reliability across the level-2 units. It measures the overall reliability of the OLS estimates of the intercepts.
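A small Python sketch reproducing the arithmetic in notes 4 and 5 from the variance components above:

```python
# Intraclass correlation and plausible-value range from the Model 1
# variance components (rounded as in the notes above).
import math

tau2, sigma2, g00 = 8.61, 39.15, 12.64   # between, within, grand mean

icc = tau2 / (tau2 + sigma2)
half_width = 1.96 * math.sqrt(tau2)

print(round(icc, 2))                                           # 0.18
print(round(g00 - half_width, 2), round(g00 + half_width, 2))  # 6.89 18.39
```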
Model 2: Including Effects of School Level (level 2) Predictors -- predicting mathach from meanses
This model is referred to as regression with means-as-outcomes by Raudenbush and Bryk. It is motivated by the question of whether schools with high MEANSES also have high math achievement. In other words, we want to understand why schools differ in mathematics achievement. In terms of regression equations, we have the following:
MATHACH_{ij} = β_{0j} + r_{ij}
β_{0j} = γ_{00} + γ_{01}(MEANSES) + u_{0j}
Final estimation of fixed effects:
Fixed Effect                 Coefficient   Standard Error   T-ratio   Approx. d.f.   P-value
For INTRCPT1, B0
INTRCPT2, G00 12.649436 0.149280 84.736 158 0.000
MEANSES, G01 5.863538 0.361457 16.222 158 0.000
The outcome variable is MATHACH
Final estimation of fixed effects
(with robust standard errors)
Fixed Effect                 Coefficient   Standard Error   T-ratio   Approx. d.f.   P-value
For INTRCPT1, B0
INTRCPT2, G00 12.649436 0.148377 85.252 158 0.000
MEANSES, G01 5.863538 0.320211 18.311 158 0.000
Final estimation of variance components:
Random Effect           Standard Deviation   Variance Component   df   Chi-square   P-value
INTRCPT1, U0 1.62441 2.63870 158 633.51744 0.000
level-1, R 6.25756 39.15708
Statistics for current covariance components model
Deviance = 46959.446959
Number of estimated parameters = 2
1. The model we fit was
MATHACH_{ij} = β_{0j} + r_{ij}
β_{0j} = γ_{00} + γ_{01}(MEANSES) + u_{0j}
Filling in the parameter estimates, we get
MATHACH_{ij} = β_{0j} + r_{ij}
β_{0j} = 12.65 + 5.86(MEANSES) + u_{0j}
V(r_{ij}) = 39.16
V(u_{0j}) = 2.64
2. In a single equation our model will be written as: MATHACH_{ij} = γ_{00} + γ_{01}(MEANSES) + u_{0j} + r_{ij}.
3. The coefficient for the constant is the predicted math achievement when all predictors are 0; so when a school has a mean SES of 0, its students' math achievement is predicted to be 12.65.
4. A range of plausible values for school means, for schools with MEANSES of zero, is 12.65 ± 1.96*(2.64)^{1/2} = (9.47, 15.83).
5. The variance component representing variation between schools decreases greatly (from 8.61 to 2.64). This means that the level-2 variable MEANSES explains a large portion of the school-to-school variation in mean math achievement. More precisely, the proportion of variance explained by MEANSES is (8.61 - 2.64)/8.61 = .69; that is, about 69% of the explainable variation in school mean math achievement can be accounted for by MEANSES.
6. Do school achievement means still vary significantly once MEANSES is controlled? The Final estimation of variance components output gives the test that the INTRCPT1 variance component is zero, with a chi-square of 633.52 on 158 degrees of freedom. This is highly significant, so we conclude that after controlling for MEANSES, significant variation among school mean math achievement still remains to be explained. (The sketch below shows one way to fit this model outside HLM.)
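For readers who want to reproduce this model outside HLM, here is a minimal sketch using Python's statsmodels. The file name hsb.csv and the column names (mathach, meanses, school) are assumptions about how the High School & Beyond data were exported, not anything produced by HLM itself:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsb.csv")      # hypothetical export of the HSB data
model = smf.mixedlm("mathach ~ meanses", df, groups=df["school"])
result = model.fit(reml=True)    # REML, matching the HLM default
print(result.summary())          # fixed effects should be near 12.65 and 5.86
```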
Model 3: Including Effects of Student-Level Predictors -- predicting mathach from student-level ses
This model is referred to as a random-coefficients model by Raudenbush and Bryk. Imagine that we run a regression of MATHACH on SES in each school; that is, we run 160 separate regressions.
1. What would be the average of the 160 regression equations (both intercept and slope)?
2. How much do the regression equations vary from school to school?
3. What is the correlation between the intercepts and slopes?
These are some of the questions that motivate the following model.
MATHACH_{ij} = β_{0j} + β_{1j}(SES) + r_{ij}
β_{0j} = γ_{00} + u_{0j}
β_{1j} = γ_{10} + u_{1j}
Sigma_squared = 36.82835
Tau
INTRCPT1,B0 4.82978 -0.15399
SES,B1 -0.15399 0.41828
Tau (as correlations)
INTRCPT1,B0 1.000 -0.108
SES,B1 -0.108 1.000
Random level-1 coefficient Reliability estimate
INTRCPT1, B0 0.797
SES, B1 0.179
The value of the likelihood function at iteration 21 = -2.331928E+004
The outcome variable is MATHACH
Final estimation of fixed effects:
Fixed Effect                 Coefficient   Standard Error   T-ratio   Approx. d.f.   P-value
For INTRCPT1, B0
INTRCPT2, G00 12.664935 0.189874 66.702 159 0.000
For SES slope, B1
INTRCPT2, G10 2.393878 0.118278 20.240 159 0.000
The outcome variable is MATHACH
Final estimation of fixed effects
(with robust standard errors)
Fixed Effect                 Coefficient   Standard Error   T-ratio   Approx. d.f.   P-value
For INTRCPT1, B0
INTRCPT2, G00 12.664935 0.189251 66.921 159 0.000
For SES slope, B1
INTRCPT2, G10 2.393878 0.117697 20.339 159 0.000
Final estimation of variance components:
Random Effect           Standard Deviation   Variance Component   df   Chi-square   P-value
INTRCPT1, U0 2.19768 4.82978 159 905.26472 0.000
SES slope, U1 0.64675 0.41828 159 216.21178 0.002
level-1, R 6.06864 36.82835
Statistics for current covariance components model
Deviance = 46638.560929
Number of estimated parameters = 4
1. The model we fit was
MATHACH_{ij} = β_{0j} + β_{1j}(SES) + r_{ij}
β_{0j} = γ_{00} + u_{0j}
β_{1j} = γ_{10} + u_{1j}
Filling in the parameter estimates, we get
MATHACH_{ij} = β_{0j} + β_{1j}(SES) + r_{ij}
β_{0j} = 12.66 + u_{0j}
β_{1j} = 2.39 + u_{1j}
V(r_{ij}) = 36.82
V(u_{0j}) = 4.83
V(u_{1j}) = .42
2. In a single equation our model will be written as:
MATHACH_{ij} = γ_{00} + u_{0j} + (γ_{10} + u_{1j})(SES) + r_{ij}
            = γ_{00} + γ_{10}(SES) + u_{0j} + u_{1j}(SES) + r_{ij}
3. The estimate for the variance of the slope for SES is 0.42, with a p-value of .002. Because the test is significant, we cannot accept the hypothesis that there is no difference in the slopes of SES among schools.
4. The 95% plausible value range for the school means when SES is zero is 12.66 ± 1.96*(4.83)^{1/2} = (8.35, 16.97).
5. The 95% plausible value range for the SES-achievement slope is 2.39 ± 1.96*(.42)^{1/2} = (1.12, 3.66).
6. Notice that the residual variance is now 36.82, compared with the residual variance of 39.15 in the one-way random-effects ANOVA model. We can compute the proportion of variance explained at level 1 as (39.15 - 36.82)/39.15 = .060. This means that using student-level SES as a predictor of math achievement reduced the within-school variance by about 6%. (A statsmodels sketch of this model follows.)
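A sketch of the random-coefficients model in the same statsmodels framework as before; the re_formula argument adds a random slope for SES. Again, the file and column names are assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsb.csv")                  # hypothetical export of the HSB data
model = smf.mixedlm("mathach ~ ses", df, groups=df["school"],
                    re_formula="~ses")       # random intercept and random SES slope
result = model.fit(reml=True)
print(result.summary())                      # fixed part roughly 12.66 and 2.39
```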
Model 4: Including Both Level-1 and Level-2 Predictors -- predicting mathach from meanses, schtype, group-centered ses, and the cross-level interactions of meanses and schtype with group-centered ses
This model is referred to as an intercepts- and slopes-as-outcomes model by Raudenbush and Bryk. We have examined the variability of the regression equations across schools. Now we are ready to build our final model based on our theory and our preliminary analyses.
MATHACH_{ij} = β_{0j} + β_{1j}(SES - MEANSES) + r_{ij}
β_{0j} = γ_{00} + γ_{01}(SCHTYPE) + γ_{02}(MEANSES) + u_{0j}
β_{1j} = γ_{10} + γ_{11}(SCHTYPE) + γ_{12}(MEANSES) + u_{1j}
Sigma_squared = 36.70313
Tau
INTRCPT1,B0 2.37996 0.19058
SES,B1 0.19058 0.14892
Tau (as correlations)
INTRCPT1,B0 1.000 0.320
SES,B1 0.320 1.000
Random level-1 coefficient Reliability estimate
INTRCPT1, B0 0.733
SES, B1 0.073
The value of the likelihood function at iteration 61 = -2.325094E+004
The outcome variable is MATHACH
Final estimation of fixed effects:
Fixed Effect                 Coefficient   Standard Error   T-ratio   Approx. d.f.   P-value
For INTRCPT1, B0
INTRCPT2, G00 12.096006 0.198734 60.865 157 0.000
SCHTYPE, G01 1.226384 0.306272 4.004 157 0.000
MEANSES, G02 5.333056 0.369161 14.446 157 0.000
For SES slope, B1
INTRCPT2, G10 2.937981 0.157135 18.697 157 0.000
SCHTYPE, G11 -1.640954 0.242905 -6.756 157 0.000
MEANSES, G12 1.034427 0.302566 3.419 157 0.001
The outcome variable is MATHACH
Final estimation of fixed effects
(with robust standard errors)
Fixed Effect                 Coefficient   Standard Error   T-ratio   Approx. d.f.   P-value
For INTRCPT1, B0
INTRCPT2, G00 12.096006 0.173699 69.638 157 0.000
SCHTYPE, G01 1.226384 0.308484 3.976 157 0.000
MEANSES, G02 5.333056 0.334600 15.939 157 0.000
For SES slope, B1
INTRCPT2, G10 2.937981 0.147620 19.902 157 0.000
SCHTYPE, G11 -1.640954 0.237401 -6.912 157 0.000
MEANSES, G12 1.034427 0.332785 3.108 157 0.003
Final estimation of variance components:
Random Effect           Standard Deviation   Variance Component   df   Chi-square   P-value
INTRCPT1, U0 1.54271 2.37996 157 605.29503 0.000
SES slope, U1 0.38590 0.14892 157 162.30867 0.369
level-1, R 6.05831 36.70313
Statistics for current covariance components model
Deviance = 46501.875643
Number of estimated parameters = 4
1. The model we fit was
MATHACH_{ij} = β_{0j} + β_{1j}(SES - MEANSES) + r_{ij}
β_{0j} = γ_{00} + γ_{01}(SCHTYPE) + γ_{02}(MEANSES) + u_{0j}
β_{1j} = γ_{10} + γ_{11}(SCHTYPE) + γ_{12}(MEANSES) + u_{1j}
Filling in the parameter estimates, we get
MATHACH_{ij} = β_{0j} + β_{1j}(SES - MEANSES) + r_{ij}
β_{0j} = 12.10 + 1.22(SCHTYPE) + 5.33(MEANSES) + u_{0j}
β_{1j} = 2.94 - 1.64(SCHTYPE) + 1.03(MEANSES) + u_{1j}
V(r_{ij}) = 36.70
V(u_{0j}) = 2.37
V(u_{1j}) = .15
2. In a single equation our model will be written as:
MATHACH_{ij} = γ_{00} + γ_{01}(SCHTYPE) + γ_{02}(MEANSES) + u_{0j}
             + (γ_{10} + γ_{11}(SCHTYPE) + γ_{12}(MEANSES) + u_{1j})(SES - MEANSES) + r_{ij}
           = γ_{00} + γ_{01}(SCHTYPE) + γ_{02}(MEANSES)
             + γ_{10}(SES - MEANSES) + γ_{11}*SCHTYPE*(SES - MEANSES) + γ_{12}*MEANSES*(SES - MEANSES)
             + u_{0j} + u_{1j}(SES - MEANSES) + r_{ij}
3. The estimated variance of the SES slope is .15, with a p-value of .369. That means the hypothesis that there is no remaining variation among the slopes of group-centered SES cannot be rejected. We may want to use a simpler model in which the slope of SES varies non-randomly with the level-2 variables MEANSES and SCHTYPE. We will show later how to compare the two models.
4. The correlation between the level-1 intercept and the slope for SES is .32, from the earlier part of the output. (The sketch below shows one way to specify this model outside HLM.)
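One way to approximate the intercepts- and slopes-as-outcomes model in statsmodels is to group-mean-center SES ourselves and interact it with the level-2 predictors. This is a sketch under the same assumed data layout, not HLM's own syntax:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsb.csv")      # hypothetical export of the HSB data
df["ses_cwc"] = df["ses"] - df.groupby("school")["ses"].transform("mean")

model = smf.mixedlm("mathach ~ schtype + meanses + ses_cwc"
                    " + ses_cwc:schtype + ses_cwc:meanses",
                    df, groups=df["school"], re_formula="~ses_cwc")
result = model.fit(reml=True)
print(result.summary())
```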
Hypothesis Testing, Model Fit and Diagnostics
1. Multivariate Hypothesis Tests on Fixed Effects
We will test the effect of schtype on the intercept and on the slope of ses simultaneously. This will be a test of two degrees of freedom.
Click on the box labeled "1" and then fill out the boxes below to indicate that we wish to test jointly that γ_{01} = 0 and γ_{11} = 0.
Results of General Linear Hypothesis Testing
Coefficients Contrast
For INTRCPT1, B0
INTRCPT2, G00 12.096006 0.000 0.000
SCHTYPE, G01 1.226384 1.000 0.000
MEANSES, G02 5.333056 0.000 0.000
For SES slope, B1
INTRCPT2, G10 2.937981 0.000 0.000
SCHTYPE, G11 -1.640954 0.000 2.000
MEANSES, G12 1.034427 0.000 0.000
Chi-square statistic = 60.596880
Degrees of freedom = 2
P-value = 0.000000
2. Multivariate Tests of Variance-Covariance Components Specification
From Model 4, we saw that the variance of the slope of group-centered SES is not very large and is not statistically significant. This suggests that we may not want to model group-centered SES as a random effect. A simpler model lets the slope of SES vary non-randomly with the level-2 variables SCHTYPE and MEANSES. We may want to compare these two models to decide whether the simpler model is about as good as the previous one.
• REML (restricted maximum likelihood) vs. FML (full maximum likelihood)
□ REML and FML usually produce similar results for the level-1 residual variance (σ^2), but there can be noticeable differences for the variance-covariance matrix of the random effects.
□ REML is the default estimation method in HLM.
□ If the number of level-2 units is large, the difference will be small.
□ If the number of level-2 units is small, FML variance estimates will be smaller than REML estimates, leading to artificially short confidence intervals and liberal significance tests.
• Nested Models
□ If the fixed effects are the same and one model merely has fewer random effects, then either REML or FML is fine for likelihood ratio tests.
□ If one model has fewer fixed effects (and possibly fewer random effects), then use FML to compare the models with likelihood ratio tests.
To compare two models, we obtain the deviance (which is just -2 times the log-likelihood) for the first model and enter it in the Hypothesis Testing dialog before running the second model.
Final estimation of variance components:
Random Effect           Standard Deviation   Variance Component   df   Chi-square   P-value
INTRCPT1, U0 1.54118 2.37524 157 604.29895 0.000
level-1, R 6.06351 36.76611
Statistics for current covariance components model
Deviance = 46502.952743
Number of estimated parameters = 2
Statistics for current covariance components model
Deviance = 46501.875643
Number of estimated parameters = 4
Variance-Covariance components test
Chi-square statistic = 1.07710
Number of degrees of freedom = 2
P-value = >.500
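The comparison above is easy to reproduce by hand: the statistic is the difference in deviances, referred to a chi-square distribution with the difference in parameter counts as its degrees of freedom.

```python
from scipy.stats import chi2

dev_simple, k_simple = 46502.952743, 2   # SES slope varies non-randomly
dev_full, k_full = 46501.875643, 4       # SES slope treated as random
stat = dev_simple - dev_full
p = chi2.sf(stat, df=k_full - k_simple)
print(f"chi-square = {stat:.5f}, p = {p:.3f}")   # ~1.077, p > .500
```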
3. Model-based Graphs
HLM 6 offers many model-based graphs. (The graphs shown in the original seminar, omitted here, were the level-1 equation graphs and the level-2 EB/OLS coefficient confidence intervals.)
Other Issues
1. Modeling Heterogeneity of Level-1 Variances
Sometimes the level-1 variance might be heterogeneous. For example, we may expect female students and male students to have different variances, so we want to model the level-1 variance as a function of the variable FEMALE.
From the pull-down menu, choose Other Settings => Estimation Settings => Heterogeneous sigma^2. We then have a choice of which variable(s) to use to model the heterogeneity; here we picked the variable FEMALE.
(macro iteration 4)
Var(R) = Sigma_squared and
log(Sigma_squared) = alpha0 + alpha1(FEMALE)
Model for level-1 variance
Parameter            Coefficient   Standard Error   Z-ratio   P-value
INTRCPT1 ,alpha0 3.66570 0.024718 148.301 0.000
FEMALE ,alpha1 -0.12106 0.033936 -3.567 0.001
Summary of Model Fit
Model                              Number of Parameters   Deviance
1. Homogeneous sigma_squared       10                     46494.59261
2. Heterogeneous sigma_squared     11                     46482.09334
Model Comparison Chi-square df P-value
Model 1 vs Model 2 12.49926 1 0.001
2. Models Without a Level-1 Intercept
Sometimes we may want to exclude the intercept from our model. For example, we may have a level-1 categorical variable and want to include all of its categories in the model; to avoid an over-parameterized model, we then have to exclude the intercept. Here we create another binary variable for male (= 1 - female). As mentioned before, since HLM does not have any data-management facility, we have to create this variable outside HLM. We chose SPSS for this task and modified the template file created earlier to create a new MDM file.
The outcome variable is MATHACH
Final estimation of fixed effects
(with robust standard errors)
Fixed Effect                 Coefficient   Standard Error   T-ratio   Approx. d.f.   P-value
For FEMALE slope, B1
INTRCPT2, G10 10.684432 0.298122 35.839 158 0.000
SCHTYPE, G11 2.932540 0.446512 6.568 158 0.000
For MALE slope, B2
INTRCPT2, G20 12.174859 0.322616 37.738 158 0.000
SCHTYPE, G21 2.597771 0.487027 5.334 158 0.000
Final estimation of variance components:
Random Effect           Standard Deviation   Variance Component   df   Chi-square   P-value
FEMALE slope, U1 2.41260 5.82064 121 481.99916 0.000
MALE slope, U2 2.64370 6.98917 121 483.25462 0.000
level-1, R 6.22438 38.74285
3. Constraints on Fixed Effects
Let's say that we believe the effect of schtype is the same for both females and males. We then need to impose the constraint γ_{11} = γ_{21}.
The outcome variable is MATHACH
Final estimation of fixed effects
(with robust standard errors)
Fixed Effect                 Coefficient   Standard Error   T-ratio   Approx. d.f.   P-value
For FEMALE slope, B1
INTRCPT2, G10 10.723664 0.295717 36.263 158 0.000
SCHTYPE, G11 * 2.804823 0.417646 6.716 158 0.000
For MALE slope, B2
INTRCPT2, G20 12.103608 0.313462 38.613 159 0.000
The "*" gammas have been constrained. See the table on the header page.
Final estimation of variance components:
Random Effect           Standard Deviation   Variance Component   df   Chi-square   P-value
FEMALE slope, U1 2.40847 5.80071 121 484.11557 0.000
MALE slope, U2 2.63048 6.91943 121 483.35444 0.000
level-1, R 6.22449 38.74426
HLM has some very nice features for multilevel data analysis, including
• a very intuitive interface for specifying models in a multi-equation format;
• easy creation of cross-level interactions;
• many data-based and model-based graphs;
• latent variable regression;
• support for multiply imputed data;
• support for sampling weights.
Optimization: What's a Steiner Tree?
Any of you who have worked with VPLS or NG-MVPNs are likely already familiar with using Point-to-Multipoint (P2MP) LSPs to get traffic from a single ingress PE to multiple egress PEs.
The reason that P2MP LSPs are desired in these cases is that it can reduce unnecessary replication by doing so only where absolutely required, for example where a given P2MP LSP must diverge in order
to reach two different PEs.
However, typically the sub-LSPs which are part of a given P2MP LSP traverse the shortest-path from ingress to egress based on whatever user defined constraints have been configured. While this is
fine for many applications, additional optimizations might be required such that additional bandwidth savings can be realized.
We will take a look at something called a Steiner-Tree which can help the network operator to realize these additional savings, when warranted, reducing the overall bandwidth used in the network and
fundamentally changing the way in which paths are computed.
Let's start by taking a look at a simple example in which RSVP is used to signal a particular P2MP LSP, but no constraints are defined. All the links in this network have a metric of 10. In this
case, the sub-LSPs will simply traverse along the shortest path in the network, as can be seen in the diagram below.
Here we see a P2MP LSP where PE1 is the ingress PE and PE2, PE3, and PE4 are all egress nodes. Since no constraints have been defined, the calculated ERO for each sub-LSP will follow the shortest path: one sub-LSP takes the PE1-P1-P2-PE2 path, another takes the PE1-P1-P3-PE3 path, and the third takes the PE1-P1-P4-PE4 path. In this case, each sub-LSP has a total end-to-end cost of 30.
Under many circumstances this type of tree would be perfectly acceptable, especially when the end-goal is the minimize end-to-end latency, however there are other cases where we may want to introduce
additional hops in an effort to reduce overall bandwidth utilization. This is where the concept of a minimum-cost tree, otherwise known as a Steiner Tree, comes into play.
This may seem counter-intuitive at first; after all, doesn't a shortest-path tree attempt to minimize costs? The answer is yes, but it usually only does so by looking at costs in terms of end-to-end
metrics or hops through a network. Once you understand the mechanics of the Steiner Tree algorithm, and how it attempts to minimize the total number of interconnects, it starts to make more sense.
According to Wikipedia, "the Steiner tree problem, or the minimum Steiner tree problem, named after Jakob Steiner, is a problem in combinatorial optimization, which may be formulated in a number of
settings, with the common part being that it is required to find the shortest interconnect for a given set of objects".
That's a pretty fancy way of saying it's attempting to optimize the path to be the shortest path possible while at the same time reducing the total number of interconnects between all devices to only
those that are absolutely required.
Steiner Tree optimizations are very useful where an ingress PE must send large amounts of data to multiple PEs and it is preferable to ensure that overall bandwidth utilization is reduced, perhaps
because of usage-based billing scenarios which require that overall circuit utilization be reduced as much as possible in order to save money.
Let's take a look at an example, once again using the same network as before, but this time performing a Steiner Tree optimization whereby cost is measured in terms of overall bandwidth utilization.
In this case we still see that we have the requirement to build the P2MP LSP from PE1 to PE2, PE3, and PE4.
However, this time we are going to compute an ERO such that replication will only take place where absolutely necessary in order to reduce the total number of interconnects and hence overall
bandwidth utilization.
After performing a Steiner Tree path computation, we determine that PE3 is a more logical choice to perform the replication to PE2 and PE4, even though this increases the overall end-to-end metric cost to 40. The reason is that we have now eliminated the bandwidth utilization on the P1-P2, P2-PE2, P1-P4, and P4-PE4 links. In effect, we've gone from utilizing bandwidth across seven links to only five. If the P2MP LSP were servicing a 100 Mbps video stream, we would have just reduced overall bandwidth utilization on the network as a whole by 200 Mbps.
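For readers who want to experiment, this trade-off is easy to reproduce with networkx's Steiner-tree approximation. The topology below is my reading of the example diagram, with unit weights standing in for the equal link metrics; the approximation is not guaranteed optimal in general, though it does find the five-link tree here:

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_edges_from([("PE1", "P1"), ("P1", "P2"), ("P1", "P3"), ("P1", "P4"),
                  ("P2", "PE2"), ("P3", "PE3"), ("P4", "PE4"),
                  ("PE3", "PE2"), ("PE3", "PE4")], weight=1)

terminals = ["PE1", "PE2", "PE3", "PE4"]   # ingress plus the three egress PEs
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges()))                   # 5 links, versus 7 on the shortest-path tree
```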
One of the interesting side-effects of this approach is that we now see that PE3 is not only an egress node, but it is now also a transit node as well (for the sub-LSPs terminating at PE2 and PE4).
Due to this, we'll also see that in these types of scenarios the Penultimate Hop Popping (PHP) behavior is different on P3, in that we don't want it popping the outer label before sending frames to PE3, since PE3 may need to accommodate labeled packets heading to PE2 or PE4. We will cover some of this in a subsequent article on the signaling mechanisms inherent in P2MP LSPs and some of the changes to the behavior of MPLS forwarding state.
Path computation for P2MP LSPs can be complex, especially when the goal is create Steiner Trees. The reason for this added complexity when computing Steiner Trees is that sub-LSP placement has a
direct correlation with other sub-LSPs, which is contrary to what happens when shortest-path trees are calculated where each sub-LSP may be signaled along their own unique path without regard to the
placement of other sub-LSPs.
As with traditional LSPs, similar methods of determining the paths through the network and hence the ERO can be used, i.e. manual, offline computation.
The easiest approach would be to use constructs like Link Coloring (Affinity Groups for you Cisco wonks) to influence path selection, for example, by coloring the PE1-P1, P1-P3, P3-PE3, PE3-PE2, and
PE3-PE4 links with an included color, or coloring the remaining links with a different color and excluding that color from the LSP configuration.
However, this approach is merely a trick. We are feeding elements into the CSPF algorithm such that the shortest path which is calculated essentially mimics that of a Steiner Tree. In other words,
it's not a true Steiner Tree calculation because the goal was not to reduce the total number of interconnects, but rather to only utilize links of an included color.
Furthermore, such an approach doesn't easily accommodate failure scenarios in which PE3 may go down, because even though Fast Reroute or Link/Node Protection may be desired, if the remaining links do
not have the included colors they may be unable to compute an ERO for signaling.
Workarounds to this approach are to configure your Fast Reroute Detours or your Link/Node Protection Bypass LSPs to have more relaxed constraints, such that any potential path might be used.
However, more commonly what you'll see is that some type of additional computations might be performed using traditional offline approaches (using modeling tools such as those provided by vendors
such as WANDL, OPNET, or Cariden) which factors both steady-state as well as failure scenarios to assist the operator in determining optimal placement of all elements.
An interesting side-note is that there are some pretty significant developments underway whereby online computation can be performed in such a way as to optimize all P2MP LSPs network-wide, using
something known as Path Computation Elements (PCEs).
These are essentially any entity which is capable of performing path computation for any set of paths throughout a network by applying various constraints. It is something that looks to be especially
useful in large carrier networks consisting of many LSPs, and especially so in the case of Steiner Tree P2MP LSPs where the sub-LSP placement is highly dependent on others. See the charter of the PCE
Working Group in the IETF for more information on this and other related developments.
As a side note, it should be fairly evident that in order to perform path optimizations on anything other than shortest-path trees (i.e. Steiner Trees or any other type of tree based on user-defined
constraints), RSVP signaling must be used in order to signal a path along the computed ERO. LDP certainly can be used to build P2MP LSPs (aka mLDP), however much like traditional LSPs built via LDP,
the path follows the traditional IGP path.
Stay tuned as we will cover more exciting articles on P2MP LSPs and some of the other underpinnings behind many of the next generation MPLS services being commonly deployed.
Cross-posted from Shortest Path First
Need help getting started...
Hey guys I hope you can help me out with these two.
I think the first one is separable but I can't see how to isolate the variables. It's not exact nor can it be made exact by an integrating factor, at least I don't think it can be.
The second one I need to get in terms of y/x (dy/dx = f(y/x)) to use the z substitution method. That root xy has got me stuck.
Any help is much appreciated! Thanks!
Your link to the questions is not clear.
Recall $\sqrt{xy} =\sqrt{x}\sqrt{y}$
If you have [IMG] code on, it will be.
But here is the direct link.
I got the second one now, thank you! I'm still unsure about the first one.
The right hand side of your first one can be written $\frac{x (y + 3) - (y + 3)}{x (y - 2) + 4 (y - 2)}$, it should now be clear how to continue.
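(A quick symbolic check of that factorization, for anyone who wants to verify it before separating:)

```python
import sympy as sp

x, y = sp.symbols("x y")
rhs = (x*(y + 3) - (y + 3)) / (x*(y - 2) + 4*(y - 2))
split = ((x - 1)/(x + 4)) * ((y + 3)/(y - 2))
print(sp.simplify(rhs - split))   # 0, so the variables separate
```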
Solving a system of 3 nonlinear equations
It is not a quadratic equation. And it is not a "nice" solution.
I have determined that z^3-cz^2+bz-a = 0. So, if we can find the roots of the cubic function, then we have z as a function of a, b, and c. Then, it should be straightforward to find x and y in terms
of a, b, and c.
But I forget how to find the roots of a cubic function.
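(The closed form is Cardano's formula, but numerically this is one line. A sketch with placeholder coefficients; a, b, c should be whatever your system produced:)

```python
import numpy as np

a, b, c = 6.0, 11.0, 6.0              # placeholder values for illustration
roots = np.roots([1.0, -c, b, -a])    # coefficients of z^3 - c*z^2 + b*z - a
print(roots)                          # [3. 2. 1.] for this choice of a, b, c
```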
Explanation for "saddle" pattern
Submitted by Kathy Maffei on Thu, 2006-03-23 14:30
If you'll recall, we had a discussion yesterday in class about the "saddle" pattern seen in testing a range of input values (x & y each from 0 to 1) for the xor problem. There was some question as to
why the saddle always ran like "/" rather than "\" Basically, the center range of x & y insisted on returning high values, even when Doug added training data for (0.5,0.5) to return 0. My intuition
was that it had something to do with the calculations involved in adjusting the network's weights during back-propagation. Math isn't my forte (unfortunately, for a comp sci major!), but I'm pretty
sure I've confirmed that the backprop algorithm is biased for answers of 1 over 0. Let me know if there's a hole in my logic, here. I've written out a few examples and
posted them online
in case anyone would like to see them. Basically, for each of 4 examples I compared two cases of error that were the same distance from the goal (desiredOutput) but in opposite directions. Basic
logic (at least mine!) would suggest that regardless of which direction (positive or negative) you are from the goal, you would want to adjust the same amount (negative or positive) for comparable
distances away. But, in all but one case, the weightAdjustment was very different for a negative error than for a positive error of the same absolute value (distance from the desiredOutput). The only
case where the weightAdjustments were the same absolute value was Example 2, where the actualOutput was 0.5 for each, and the goals were 1 and 0 - same distance (pos & neg) and same actualOutput.
Why? Because for some reason the actualOutput is factored into the final weightAdjustment. This is what causes the bias. I'm sure Doug will be able to explain why the algorithm is configured this way
- there must be a good reason. And like I said, maybe there's a hole in my logic. Any thoughts?
Submitted by DougBlank on Thu, 2006-03-23 20:07 Permalink
Thanks for attempting to tackle this problem. Your logic is fine, but I see a problem with your analysis. In order to compute the adjustment of a weight between a unit m and a unit i, you need to
know the activation at both nodes. The activation at unit m is needed to compute the error, and the activation at unit i is needed to compute the change in weight. In the weight update:
weightUpdate[m][i] = (EPSILON * delta[m] * actualOutput[i]) + (MOMENTUM * weightUpdate[m][i])
the actualOutput[i] is the activation from the previous layer. So, you won't be able to do your analysis without the activation from the previous layer. But you can test values in the range 0 - 1 to
see what the weight change would be.
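(Here is a quick numeric version of that suggestion, assuming the usual sigmoid delta rule delta_m = error * out_m * (1 - out_m) and ignoring the momentum term; EPSILON is an arbitrary learning rate.)

```python
EPSILON = 0.5

def weight_update(error, out_m, out_i):
    delta_m = error * out_m * (1.0 - out_m)   # sigmoid-derivative factor
    return EPSILON * delta_m * out_i          # scaled by the presynaptic activation

for out_i in (0.0, 0.25, 0.5, 1.0):           # activations from the previous layer
    print(out_i, weight_update(0.3, 0.5, out_i))
```

The update scales linearly with the presynaptic activation, so identical errors can produce very different weight adjustments once the hidden activations differ.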
I'll take a look too, and see if I see anything suspicious. If you have any more ideas, please let us know.
Submitted by Kathy Maffei on Mon, 2006-03-27 12:51 Permalink
I knew it was too obvious to be true...
Tags: bound states
02 Jul 2011 | Teaching Materials | Contributor(s): Dragica Vasileska
This write up describes basic concepts of closed systems and bound states calculations. Emphasis is placed on bound states calculation for infinite potential well, finite potential well, triangular …
20 Jul 2010 | Teaching Materials | Contributor(s): Dragica Vasileska, Gerhard Klimeck
The objective of this exercise is to teach the students the theory behind bound states in a quantum well.
16 Jun 2010 | Teaching Materials | Contributor(s): Gerhard Klimeck, Parijat Sengupta, Dragica Vasileska
Exercise Background Quantum-mechanical systems (structures, devices) can be separated into open systems and closed systems. Open systems are characterized with propagating or current carrying …
01 Jun 2010 | Teaching Materials | Contributor(s): Dragica Vasileska
bound states, open systems, transfer matrix approach, gate leakage calculation in Schottky gates
05 Jul 2008 | Tools | Contributor(s): Dragica Vasileska, Gerhard Klimeck, Xufeng Wang
Calculates bound states for square, parabolic, triangular and V-shaped potential energy profile
07 Jul 2008 | Series | Contributor(s): Dragica Vasileska, Gerhard Klimeck
In physics, especially quantum mechanics, the Schrödinger equation is an equation that describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as …
06 Jul 2008 | Teaching Materials | Contributor(s): Dragica Vasileska, Gerhard Klimeck
The problems in this exercise use the Bound States Calculation Lab to calculate bound states in an infinite square well, finite square well and triangular potential. Students also have to compare …
07 Jul 2008 | Teaching Materials | Contributor(s): Dragica Vasileska
Newton, MA Algebra 1 Tutor
...Once you understand it, science and math is fun and easy. My style is not one who instructs. Instead, I lead and guide.
12 Subjects: including algebra 1, chemistry, physics, calculus
I am a retired university math lecturer looking for students who need an experienced tutor. Relying on more than 30 years of experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience in teaching and ment...
14 Subjects: including algebra 1, calculus, statistics, geometry
...I know what the test is like and can help teach study strategies that can prepare one for the types of questions they like to ask while also developing general reading skills. I have taken the
ACT and received an English score of 35. I know what the ACT is looking for in the English section and can help with study strategies to succeed in this portion of the exam.
20 Subjects: including algebra 1, reading, elementary math, ACT Science
...I am also an engineering and business professional with BS and MS degrees. I tutor Algebra, Geometry, Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry, Calculus, and Physics.
Seasonally I work with students on SAT preparation, which I love and excel at.
15 Subjects: including algebra 1, physics, calculus, geometry
I have been actively tutoring students in Chemistry and math for 10 years. In this time, I have worked with high school and college students. I have a unique talent in my abilities to translate
difficult material into words and scenarios that students of all levels can understand.
10 Subjects: including algebra 1, chemistry, algebra 2, organic chemistry
Meaning of PDFs in the context of statistics
Cole A.
If someone looks at a building and says that its height in feet is described by N(100, 50), and another claims that its height is described by Unif(0, 200), what are they saying exactly? The
building's height is an absolute, fixed, unchanging number. What meaning does a PDF possibly have in this context?
Unless you are studying Bayesian statistics, you don't find such statements in a statistics text.
In "frequentist" statistics, the kind normally studied in introductory courses, you would not find the height of one building described by a probability distribution. You might find the height of a
randomly selected building from a population of buildings described by a distribution. You might find the measured height of a single building described by a probability distribution if the
measurement has a random error.
If you are studying Bayesian statistics, you might use a probability distribution for the height of one building. The distribution can be regarded as stating a "belief" about the height, or you can pretend that when the building was built, its height was selected at random from a population of possible heights that might have occurred.
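(A small simulation of the frequentist reading above; the true height and the error scale are made-up numbers for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
H = 100.0                                    # the building's fixed, true height (feet)
measured = H + rng.normal(0.0, 5.0, 10_000)  # assumed N(0, 5) measurement error
print(measured.mean(), measured.std())       # ~100 and ~5: the PDF describes the
                                             # measurements, not the building itself
```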
Observables in quantum gravity
Moshe Rozali has kindly initiated a blog discussion about this important topic. Let us join, too.
The goal of every quantum-mechanical theory is to predict the probabilities that particular physical quantities - "observables" - will take one value or another value after some evolution of the
system, assuming certain initial conditions.
For example, we are using Schrödinger's equation to predict the probabilities that a particle appears at a certain point of the screen in a double-slit experiment. Alternatively, we are predicting
the probability that the (decay products of a) Higgs boson and/or superpartners will appear inside a pixel of the LHC detector, and so forth.
Mathematics of quantum mechanics makes it inevitable that observables have to be identified with linear operators on the Hilbert space of allowed states. The allowed values of observables are the
eigenvalues of the corresponding operators and the probabilities are squared absolute values of the appropriate complex amplitudes.
In the case of mechanics, the "fundamental" observables are usually x,p - position and momentum - but one may construct more complicated ones such as the angular momentum J (the generator of
rotations) or the Hamiltonian (the generator of time evolution). Each particle typically carries its own x, p, and J, its own terms contributing to H, and so forth.
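(As a toy numerical illustration of these rules, here is a two-level example; the choice of a spin-x measurement on a z-up state is mine, not part of the discussion being summarized.)

```python
import numpy as np

Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])   # observable: spin along x, in units of hbar
psi = np.array([1.0, 0.0])                      # state prepared spin-up along z

vals, vecs = np.linalg.eigh(Sx)                 # allowed outcomes = eigenvalues
probs = np.abs(vecs.conj().T @ psi) ** 2        # probabilities = |amplitudes|^2
for v, p in zip(vals, probs):
    print(f"outcome {v:+.1f}: probability {p:.2f}")   # 50/50 split
```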
In quantum field theory, the natural observables "look" different. While quantum field theory still implies the existence of particles with their momenta and (usually not quite localized) positions,
every quantum field theory is a quantized version of a field theory, after all.
So the natural observables are fields such as phi(x,y,z,t) or E(x,y,z,t) - the latter is the electric field at a given point of spacetime. (Let's neglect additional dimensions of space.) More
precisely, these objects are "operator-distributions" rather than operators and you should integrate them over some regions (with test functions) to get genuine operators whose commutators are
functions rather than distributions.
But that's not a conceptually difficult technicality: every physicist who knows how to manipulate distributions (such as the delta-function) can deal with the operator distributions, too. She can continue to call them "operators".
The operators phi(x,y,z,t) tell us something about the state of the system - the field phi - at a given point. So this operator has nothing to do with points in space that are separated: in fact, the
(graded) commutator of phi(x,y,z,t) with an operator at a point separated by a space-like interval must vanish. That's why we call the operator "local". Special relativity implies that spatially
separated regions can't influence each other, so it is not surprising that such local operators exist.
Quantum field theory is naturally rewritten in terms of these local operators. The Hamiltonian is typically an integral over space. Other operators that can be measured may also be constructed as
integrals of functions of the local operators (and their derivatives) over space. Even if we add Yang-Mills gauge invariance, it is still possible to construct local operators associated with a point
in space that are gauge-invariant and whose eigenvalues are thus fully measurable.
(Gauge-non-invariant operators depend on the gauge i.e. on a convention: they cannot be directly measured. For example, the magnetic field strength in electromagnetism is gauge-invariant but the
vector potential itself is not.)
Adding gravity
What happens if we add the metric tensor and gravity? Well, something does. In general relativity, we must also add the diffeomorphism group into the full package of gauge symmetries. Only
gauge-invariant operators are independent of conventions. Only gauge-invariant operators can be directly measured.
Some laymen at Moshe's blog are deeply confused about the very basic questions here. Special relativity is not invariant under diffeomorphisms - unless we express the special-relativistic theory with
additional, completely redundant, unphysical degrees of freedom (a metric tensor whose curvature must vanish and which is therefore "non-dynamical" in this case).
Without this useless extension, special relativity only allows the invariance under Poincaré transformations and puts inertial frames (but not other frames) on equal footing. States are not required
to be annihilated by any generators of transformations of spacetime. The fact that the commenter named "iphigenia" is not able to see that the metric tensor is non-dynamical in special relativity and
it (or diffeomorphisms) cannot therefore cause any dynamical problems in special relativity (unlike GR) doesn't mean that people with 60 IQ points above "iphigenia" are also unable to throw the
unphysical degree of freedom out.
On the other hand, general relativity is invariant under diffeomorphisms. Both the metric tensor at each point and the diffeomorphism symmetry are necessary in every description of general relativity
that keeps the important symmetries manifest. Because the spacetime is generically curved, there exists no natural subset of "inertial frames" and all coordinate systems are equally good to express
the equations of general relativity.
This diffeomorphism symmetry turns out to be an important problem in general relativity where it cannot be thrown away. Before we added gravity, the quadruple of numbers (x,y,z,t) in phi(x,y,z,t)
described a very physical point in spacetime. In a different reference frame, you would associate the point with a different quadruple of coordinates. But once you pick your coordinates, there is a
one-to-one map between the quadruples of coordinates and the "objective" points in spacetime: this map is independent of the state of the system.
The previous sentences fail to hold once you add gravity. Why? Because the numbers (x,y,z,t) are just coordinates that can be reparameterized in an arbitrary way, without changing the physics. So by
saying what (x,y,z,t) are, you don't really identify any "objective" point in space. So you don't know which operator phi(x,y,z,t) you can possibly mean. The freedom to reparameterize the coordinate
is large enough for (x,y,z,t) to mean anything you want or anything you don't want.
You might say that we faced the same problem in special relativity, too. There were also different acceptable coordinate systems over there. However, what's important is that we could agree upon a
few conventions about our coordinates and then all quadruples (x,y,z,t) meant something specific. It's not the case in general relativity because you would need to reveal an infinite amount of
information to determine what point is associated with any (x,y,z,t).
This infinite difference is a reason why the reparameterizations in special relativity are "global symmetries" while those in general relativity are "local symmetries". States can't be required to be
invariant under global symmetries - because the energy etc. would have to be zero all the time - but they should be required to be invariant under local symmetries - e.g. because you wouldn't know
how they evolve with time (the local, gauge transformations can always depend on time).
Moreover, the choices of coordinates in general relativity cannot be "canonical". In special relativity, the spacetime is flat so if you determine the coordinates of a few points, you may just assume
that the coordinates are extrapolated linearly across the spacetime that is linear, too.
However, the spacetime is curved in general relativity: for general states (and their gravitational fields), the coordinates have to be "inherently non-linear". Moreover, the precise curvature of the
spacetime does depend on the state of the physical system. For example, if an atom is found at one place, it has a different gravitational field - different profile of spacetime curvature - than if
it is localized at another point. Dead and alive cats have different gravitational fields, too.
So the fact that you can't choose any "canonical" coordinates - and therefore "objective" gauge-invariant observables similar to phi(x,y,z,t) - depends on two complications:
1. the physical observables must be gauge-invariant i.e. independent of diffeomorphisms; that means that they can't depend on arbitrary coordinates
2. there are no "simple" non-arbitrary coordinates because the spacetime is curved according to the matter inside.
At Moshe Rozali's blog, various people have asked which of these circumstances actually explains that you can't define any simple gauge-invariant local observables in quantum gravity: in reality, it
is only the union of both of them that makes things hard.
Now, I feel the urge to say that you could imagine that you can define some "objective" coordinates in a curved spacetime of general relativity, too. For example, choose a reference point P that is
very far from all matter (somewhere near infinity). And then you can parameterize points in spacetime by specifying
1. in which direction Omega (angular coordinates) from the point P the point is sitting (a small region around P is flat so Omega behaves just like in non-gravitational physics)
2. what is the proper distance or proper time (the length of the shortest geodesic) from P to the point you want to describe
However, this particular convention is difficult to realize because geodesics in spacetime are complicated, especially for generic mass distributions. Moreover, the coordinates won't be unique in
general: recall that in the case of gravitational lensing, two different light rays can get from a galaxy to your eye (along two different geodesics). Moreover, the definition assumes that the metric
tensor is a good-behaving, nearly "classical" tensor that exists everywhere in space. In reality, the metric tensor is wildly fluctuating, especially at very short distances. If you wanted to follow
the geodesics accurately, it would be much more messy than the classical ideas indicate.
For these reasons, attempts to define privileged coordinates in spacetime based on geodesics, proper distances, and extrapolations are not very well-defined, reliable, convergent, or convenient.
Incidentally, string theory gives us a better way to define privileged coordinates, the light cone gauge. In the light cone gauge, all fields or string fields are naturally interpreted as functions
of a "light-cone time", x^+. The remaining coordinates, x^- and the transverse x^i coordinates, could a priori be redefined as well. But the Hamiltonian in the light cone gauge - the generator of
translations in x^+ - automatically gives you a preferred value of all these coordinates.
The light-cone gauge coordinates are working well for the superselection sector of the flat space - all states that converge to a flat spacetime at infinity. You may talk about the evolution in the
"bulk" or the evolution after "finite time". But let us assume that the reader doesn't like any gauge-fixed description of the physics because it obscures the "natural symmetries" and it could become
problematic if the density of matter is high (and the fields are strong). Let's imagine that the reader wants the democracy between all the coordinates (and the Lorentz symmetry) to remain manifest.
Scattering and holography
Well, if this is her dream, the situation changes dramatically. In quantum field theory without gravity, we had the operators phi(x,y,z,t) and we could have computed their correlators - expectation
values of their products in the vacuum state - for arbitrary values of (x,y,z,t) for each operator. By Fourier transform, these became the Green's functions of the external momenta and the external
momenta (p_x,p_y,p_z,p_t) could have been any off-shell momenta.
In gravity, we are forced to talk about the on-shell amplitudes only - those that are relevant for scattering. Why is it so? Because the general off-shell correlators are hard to define: the very nature of the operators phi(x,y,z,t) etc. is ill-defined. However, things simplify if you study scattering.
The initial and final states in the scattering process correspond to safely, spatially separated particles such as gravitons. Because they are so separated, the gravitational field around them is
very weak and, in fact, universal. So it actually makes a perfect sense to define the initial or final state with a graviton that has a certain momentum (determined up to the accuracy of 1/X where X
is an arbitrarily huge distance that separates the gravitons): the spacetime at infinity - where the incoming particles arrive or the outgoing particles leave - behaves just like in special
relativity: you may forget about diffeomorphism and curvature over there. You can rightfully assume that there exist coordinates in which the space at infinity is flat. In these coordinates, things
are as clear as in special relativity.
The graviton might still have a gravitational field around it, even when it is at infinity, but you don't need to know any details about it to define the external states. It is enough to say what the
on-shell momenta are. By locality, which is a feature of the theory, you may also argue that there must exist multi-particle states in which the individual particles have independent momenta and are
still safely separated. The further the particles are, the better approximation yours is.
For this reason, every meaningful quantum theory of gravity must be able to calculate the scattering amplitudes of gravitons (and perhaps other particles that are present) arbitrarily accurately if
they scatter from infinity, assuming that the theory admits spaces where particles can flee to infinity.
The case of AdS/CFT is very clear in this respect. Let's talk about an AdS5/CFT4 pair, to be very specific. There is also a compact five-manifold (a sphere etc.) but let us neglect these extra
dimensions because they're compact: all fields may be expanded into Kaluza-Klein spherical harmonics, anyway.
So the quantum gravitational theory must be able to compute the scattering of gravitons whose 5-momenta are on-shell i.e. light-like. With this constraint, there are only 4 independent parameters for
each such momentum. It turns out that by the AdS/CFT dictionary, the scattering amplitudes are fully encoded in correlators of (off-shell) local operators (for gravitons, it's the stress-energy
tensors) on the boundary.
Note that each such operator on the boundary depends on 4 coordinates, e.g. (x,y,z,t), which is exactly the right number to parameterize on-shell momenta of gravitons in 5 dimensions. The gravitational theory has five large dimensions but only four of them are accessible for the computation of exact correlators: that's why one dimension is effectively lost. In other words, it is a manifestation of holography.
On the other hand, the boundary theory allows you to access all Green's functions so it is not holographic. It is the very presence of gravity - and diffeomorphisms and/or black holes - that makes
certain theories holographic. Recall that holography implies that the number of degrees of freedom (entropy) can't exceed the surface area in Planck units. But the Planck area is proportional to
Newton's constant, "G", so if you turn off gravity, it goes to zero and the inequality becomes vacuous: "the entropy should be less than infinity".
Holography manifests itself in many other ways: the black hole is the final stage of a gravitational collapse of any localized system. Its entropy is only proportional to the area of its event
horizon but by the second law of thermodynamics, it can't be lower than the entropy of any system that led to the birth of the black hole. It follows that all localized states in the given volume
must have entropy lower than the area: the area in Planck units tells you how many degrees of freedom you need to describe everything in the region. Whatever happens there is encoded in a "hologram" on the surface.
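(To get a feel for the numbers in the area law, here is a back-of-the-envelope evaluation for a solar-mass black hole; the constants are rounded SI values.)

```python
import math

G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M = 1.989e30                        # one solar mass, kg

r_s = 2 * G * M / c**2              # Schwarzschild radius, ~3 km
area = 4 * math.pi * r_s**2         # horizon area
l_P2 = G * hbar / c**3              # Planck length squared
print(f"S/kB ~ {area / (4 * l_P2):.1e}")   # ~1e77, vastly more than the Sun's ~1e58
```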
Holography: does it depend on string theory?
Holography emerges at many places of string theory. When you try to compute the amplitudes for strings, you can see that the results are only meaningful - conformally invariant on the worldsheet - if
the external particles are on-shell (the dimension of the vertex operators must be (1,1) to keep them marginal and integrable over the worldsheet). The AdS/CFT is another manifestation of the
holographic nature of string theory: whatever happens in the bulk (with gravity) may be described by a non-gravitational theory on the boundary.
So is it OK to say that holography only holds in string theory? I think it would be an incorrect conclusion. String theory is a very "specific", well-defined theory of quantum gravity where similar
aspects appear to be crisp and clear. However, I think it is obvious that many arguments supporting holography have nothing to do with "strings" per se. Holography is a property of any consistent
theory of quantum gravity (which is probably a term equivalent to "string theory" anyway, but even if it is not, the first part of this sentence should be valid).
So which observables do we have?
As we have suggested, in the flat Minkowski space or the AdS space, the scattering amplitudes at infinity are the observables to be studied. And they can be computed by stringy methods - from the
worldsheet correlators or the boundary correlators. Does it mean that string theory (or any theory of quantum gravity) can't say anything about finite regions of the bulk?
Well, it depends on the accuracy you want. If you want completely well-defined results and you are not ready to accept a huge set of conventions that define your privileged coordinates, such as the
light-cone gauge or worse, it is only the scattering "from infinity" that is completely well-defined. However, if particles scatter from distances comparable to 10^{-18} meters, like those on the
LHC, you should realize that this distance, while small for humans, is still gigantic in comparison with the string scale or the Planck scale.
It means that all collisions at the LHC may be viewed as on-shell collisions from infinity. Yes, there is also physics that can't be reduced to collisions, such as the analysis of the spectrum of
hadrons. But for this physics, four-dimensional gravity is pretty irrelevant, up to a very tiny error. So the on-shell limitations of quantum gravity won't cripple your ability to "practically"
calculate any realistic situation.
The higher energy scale you choose and the closer you approach the Planck scale, the more important the scattering experiments become for your observational tests. Near the Planck scale, the
curvature of space and the fluctuations of the spatial geometry may become important but the scattering will become the only doable method to probe this physical regime. Even the tiniest microscopic
black holes must be studied by scattering. (And the big ones are described well by classical general relativity, with the rest of the matter living on this curved classical background.)
I should emphasize that there are many more ways to determine non-scattering physics out of the full theory. For example, you may always derive a low-energy approximation of your theory and treat it
classically or semiclassically. This can tell you a lot of things about "local" physics that doesn't obviously reduce to scattering. But in some sense, all of this physics is encoded in the
scattering amplitudes.
De Sitter space
Another problem is that not all spaces have a region at infinity where it is easy to define the identity of particles with a certain momentum. For example, the energy density of our space seems to be
dominated by the cosmological constant. Assuming that the cosmological constant is the right explanation of the observed "dark energy", this emptiness will get even worse and our Universe will be
increasingly similar to de Sitter space. The spatial slices of this space look like spheres and your particles can never escape "quite" to infinity even though 13.7 billion light years is pretty far
relative to the Planck length.
Nevertheless, if you academically insist that your theory should calculate some quantities that are absolutely exact and, in principle, testable with an arbitrary accuracy, de Sitter space will show
that you are far too immodest. A certain degree of uncertainty - such as the random thermal radiation coming from the cosmological horizon - seems to be an innate feature of de Sitter space. This
uncertainty (and the thermal radiation) is too weak to matter for any conceivable experiment we can do today or in any foreseeable future but it seems to be there.
Once again, I must admit that the comments above are not a proof of a no-go theorem. There can exist a very specific Hilbert space with very well-defined observables whose evolution makes complete
sense in de Sitter space. But because it seems clear that we can never measure things absolutely accurately in a de Sitter space anyway, it is questionable whether we really want and need a theory
that can predict things absolutely accurately. Maybe we don't. If we don't want it, it still remains puzzling what it means to have a full theory that inherently predicts all probabilities only with a limited accuracy.
Preon degrees of freedom in the bulk
Some people could propose that the metric tensor is composed out of other objects or fields that are defined in the same "bulk". It can be a composite of gauge fields, superconducting stuff, preons,
or any other buzzword of the same type. Well, I doubt it is the case. But more importantly, I don't think that any of these assumptions would solve the basic problem with the gravitational degrees of
freedom, as explained at the beginning. When you try to define the observables like g_{mn}(x,y,z,t) in quantum gravity, the main problem is not the g_{mn} part but the (x,y,z,t) part.
Also, I find any idea that tries to present the metric tensor as a privileged degree of freedom to be misguided and obsolete. The metric tensor is clearly just a low-energy effective degree of
freedom arising from a theory that contains and must contain much more stuff. In perturbative string theory, closed strings can carry infinitely many "Hagedorn" excitations that are in principle
equally important as the graviton mode. The only way to make "graviton" look special is to go to long distances.
Beyond the perturbative series, there are always many black hole microstates. A pole (or branch cut) in a scattering amplitude corresponding to an intermediate black hole is as important as a pole
arising from an intermediate graviton. At some level, when you think out of the box of low-energy approximations, black hole microstates must be equally important as excited string modes or the
graviton mode itself. Well, black holes look like "composites" of gravitons (and their fields) but I feel that this proposition only holds - as a form of tautology - if you assume that the
gravitational field is fundamental while the "black hole microstate fields" are not.
String field theory
What about some kind of string field theory? String field theory is a way to rewrite string theory that is as similar to an ordinary quantum field theory as you can get. However, it must contain
infinitely many elementary fields, corresponding to all excited states of the string, and a correspondingly enhanced gauge symmetry principle. Some people like to say that string field theory is as
"background-independent" (in the stringy sense) as general relativity in the first place.
There are problems with this assertion. First, when we talk about the background, we should allow all fields - including the scalar fields - to take any values they can take, without making the
description much less appropriate. The dilaton is perhaps the most important scalar field in string theory that is special in the perturbative series. But string field theory is so heavily based on
"strings" that it is only good for a weak coupling (the dilaton goes to minus infinity). It even seems likely that string field theory doesn't tell us anything new about nonperturbative physics of
string theory than other perturbative approaches. For example, attempts to write type IIA string theory as string field theory don't seem to imply that the strong-coupling limit has the
11-dimensional Lorentz invariance.
That's bad and the "special role" of the weak string coupling certainly shows that string field theory is not "quite" background-independent. Even if you solved the difficult technical problems of
closed string field theory, the background independence wouldn't quite hold.
Matrix theory or matrix string theory are radically opposite in this sense: they are equally well-defined for any value of the string coupling, including the infinite value (in type IIA, you get back
to M-theory captured by the original BFSS model). In this sense, it is "background-independent" because it deals equally well with any value of the coupling. On the other hand, you need a different
model for each value and only models for superselection sectors that are asymptotically flat (or some pp-waves) are known: that cripples the "normal", geometric part of the background independence.
But both AdS/CFT and Matrix theory show that space and positions in space are not fundamental concepts. They're emergent. Systems that don't look anything like AdS5 or the 11-dimensional Minkowski
space M11 end up behaving exactly as AdS5 or M11. That's really amazing. The people outside string theory who like to talk about emergent space or background independence or any of these nice words
usually end up with one of these two outcomes:
• they have a model that already has an underlying space where objects are "localized": whenever it is so, they can't really say anything about the question "what are the diffeomorphism,
gauge-invariant operators"
• they have a model in which no space exists at the beginning but they cannot show that a smooth space emerges, either: well, it's probably because it doesn't emerge, in which case they have thrown
the baby out with the bath water
Which of these two problems occurs sometimes depends on what you are still ready to call "space". At any rate, if you end up with one of the results above, your theory is useless for the conceptual
questions of quantum gravity. It is only interesting to describe a theory of emergent space if the space is (apparently) not there at the beginning but if it is (demonstrably) there at the end. ;-)
When you look at the successful descriptions of quantum gravity - and all of those that are known as of today were written by string theorists - they roughly fall into two groups:
• descriptions that make unitarity manifest and allow you to calculate exact amplitudes in principle for any values of moduli etc.
• descriptions that make the geometrical interpretation (and Lorentz symmetries) manifest but that suffer from various limitations (e.g. assumptions of weak coupling or low energies).
There seems to be a trade-off going on here. The more manifest the geometrical interpretation of your description is, the more limited the description is in its ability to deal with situations with
high energy, strong coupling, high density, or rapid time dependence. That shouldn't be surprising because a geometrical interpretation of a spacetime is often known to emerge. But it often emerges
at low energies and (for compact dimensions) in decompactification corners of the moduli space only.
You shouldn't be shocked that if you try to study more extreme situations, simple, geometrically manifest descriptions disappear. Well, they really disappear: it is not a bug of your description but
rather a true fact about Nature. The "local" degrees of freedom associated with a large manifold only occur if you can show that the size of the manifold is large in string/Planck/brane units and if
all physical quantities are changing much more slowly than by 100% per Planck length.
If you want to find a description - a choice of degrees of freedom - that is equally good in every situation of quantum gravity, it must be an amazing meta-geometric description that secretly knows
about all the dualities, about all the ways degrees of freedom can emerge from the interactions of others, and about how some of them can become light in various limits.
Frankly speaking, I am convinced that such a description, if it exists, cannot be based on any "predetermined set of degrees of freedom" that just happen to exhibit a rich variety of different
behaviors. I think that the very composition and organization of the degrees of freedom that you should start with must be a solution to a set of consistency criteria. I am convinced that people
should spend much more time trying to understand which consistency criteria actually imply that things must go in one way and not another. At some moment, they will have to get back to the
bootstrap program.
And yes, I am convinced that the best, most universal description of the degrees of freedom in quantum gravity will have no space - and probably no time - to start with. But because space - and
especially time - is so essential to design any system of physics that qualitatively resembles what we know, we will have to learn some rules that allow us to define what we mean by space and
especially time in the pre-geometric structure.
There is no guarantee that such a structure exists or will be found in five years or fifty years. But people simply have to keep on trying because such a setup would become an extremely robust pillar
of any research of quantum gravity in particular and theoretical physics in general.
And that's the memo.
Jeff Haack
University of Texas at Austin
RTG Instructor
Mailing Address
The University of Texas at Austin
Department of Mathematics
2515 Speedway Stop C1200
Austin, TX 78712
RLM 10.132
ACE 3.342
Office phone: 232-7761 (ACE)
haack (at sign) math (point) utexas (point) edu
Research Interests
• Numerical Analysis
• Kinetic Equations
• Fluid Dynamics
• Parallel Computing
• Multiscale Methods
US State Maps using map_data()
December 11, 2012
By is.R()
Today’s short post will show how to make a simple map using map_data().
Let’s assume you have data in a CSV file that may look like this:
Notice the lower case state names; they will make merging the data much easier. The variable of interest we’re going to plot is the relative incarceration rates by race (whites and blacks) across
each of the fifty states (we’ll remove DC once we load the data). Using the map_data(“state”) command, we can load a data.frame called “all_states”, shown below:
Merging that data with the data frame we have as a CSV produces:
We can then plot each state and shade it by our variable of interest:
Full code is below:
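The original post embedded the full code as a gist, which is not reproduced in this capture. The following is a minimal reconstruction sketch of the workflow described above; the file name and column
names (incarceration.csv, state, black.incarceration.rate) are my assumptions, not the post's actual identifiers.

library(ggplot2)
library(maps)   # supplies the polygons behind map_data()

# hypothetical file and column names -- adjust to your own data
incarceration <- read.csv("incarceration.csv")   # lower-case state names

all_states <- map_data("state")                  # one row per polygon vertex

merged <- merge(all_states, incarceration,
                by.x = "region", by.y = "state")
merged <- merged[order(merged$order), ]          # restore vertex drawing order

ggplot(merged, aes(x = long, y = lat, group = group,
                   fill = black.incarceration.rate)) +
  geom_polygon(colour = "white")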
Some New Results in Inverse Reconstruction
Alfred Carasso
Applied and Computational Mathematics Division, NIST
Tuesday, January 31, 2012 15:00-16:00,
Building 101, Lecture Room C
Tuesday, January 31, 2012 13:00-14:00,
Room 1107
This talk will discuss two distinct topics involving nonstandard parabolic problems. The first topic deals with the problem of reconstructing the past from imprecise knowledge of the present, which
arises in numerous contexts. Currently, identifying sources of ground water pollution, and deblurring astronomical galaxy images, are two important applications generating considerable interest in
the numerical computation of parabolic equations backward in time. However, while backward uniqueness typically prevails in parabolic equations, the precise data needed for the existence of a
particular backward solution is seldom available. Recently, an iterative procedure originating in the field of Spectroscopy has been successfully applied to solve nonlinear parabolic equations
backward in time. This has led to the discovery of previously unsuspected 1D examples of well-behaved, physically plausible, but COMPLETELY FALSE reconstructions of the initial data at time t=0,
given approximate values for the solution at time t=1. More striking examples of false reconstructions are likely in 2D. These examples indicate that highly detailed prior information about the true
solution is a necessary ingredient in many backward reconstruction problems.
The second topic represents important collaborative work with Andras Vladar, who leads NIST's Scanning Electron Microscope Metrology Project. Helium ion microscopes (HIM) are capable of acquiring
images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these
images while preserving delicate surface information. This talk will present a powerful SLOW MOTION denoising technique, based on solving linear fractional diffusion equations forward in time. The
method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of
the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to
eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. This tool is applied
to rank order the above three distinct denoising approaches, in terms of their texture preserving properties.
Speaker Bio: Alfred S. Carasso received the Ph.D degree in mathematics at the University of Wisconsin in 1968. He was a professor of mathematics at the University of New Mexico, and a visiting staff
member at the Los Alamos National Laboratory, prior to joining NIST in 1982. His major research interests lie in the theoretical and computational analysis of ill-posed continuation problems in
partial differential equations, together with their application in inverse heat transfer, system identification, and image reconstruction and computer vision. He pioneered the use of time-reversed
fractional and logarithmic diffusion equations in blind deconvolution of wide classes of images. He is the author of seminal theoretical papers, is a patentee in the field of image analysis, and is
an active speaker at national and international conferences in applied mathematics.
Presentation Slides: PDF
Contact: B. Cloteaux
Note: Visitors from outside NIST must contact Robin Bickel; (301) 975-3668; at least 24 hours in advance.
Effect of high frequency current on resistance
I found something quite interesting about resistance in my book. It says that at high frequency (MHz scale), resistance R decreases due to the "inductance L" and "capacitance C" characteristics of the
resistor. So the resistor can be modeled as (Ro in series with L) // C (Ro denotes the resistance in the model, to distinguish it from R, which is the real effective resistance). The model can explain
mathematically why R decreases when frequency f increases. However, what I'm concerned about is the nature of the phenomenon. In any geometry of the resistor, there should be a magnetic field, and that
accounts for L. But if the resistor is just a straight wire, not a coil, how should we explain the existence of C? Besides, is there any paper or text analyzing this effect theoretically?
Thank you very much.
On Negations and Algebras in Fuzzy Set Theory
Franceso Esteva
EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-87-330
March 1986
Dual automorphisms, involutions and intuitionistic negations on the lattice F(X) of Fuzzy Sets taking values on a complete distributive lattice L are characterized. All Symmetric, De Morgan and
Intuitionistic Algebras on F(X) and their isomorphisms are described. Finally, negation functions and their conjugated classes are characterized.
BibTeX citation:
@techreport{Esteva:CSD-87-330,
    Author = {Esteva, Franceso},
    Title = {On Negations and Algebras in Fuzzy Set Theory},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {1986},
    Month = {Mar},
    URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1986/5999.html},
    Number = {UCB/CSD-87-330},
    Abstract = {Dual automorphisms, involutions and intuitionistic negations on the lattice F(X) of Fuzzy Sets taking values on a complete distributive lattice L are characterized. All Symmetric, De Morgan and Intuitionistic Algebras on F(X) and their isomorphisms are described. Finally, negation functions and their conjugated classes are characterized.}
}
EndNote citation:
%0 Report
%A Esteva, Franceso
%T On Negations and Algebras in Fuzzy Set Theory
%I EECS Department, University of California, Berkeley
%D 1986
%@ UCB/CSD-87-330
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1986/5999.html
%F Esteva:CSD-87-330
Dirac's Monopole Trick
P. A. M. Dirac's monopoles mimic radial charges, with the addition of a singular string attached. This note presents such fields, using the lecture notes of Professor Jose Figueroa as a starting
point. I suspect that the electric field, and Faraday's electric field lines, may have a Dirac basis, as opposed to the radial Coulomb basis.
Dirac Radial Field from Curled Vector Potential
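As a reminder of the construction (a sketch in Gaussian-style units, with the Dirac string along the negative z axis; the linked note's conventions may differ):
\[ \vec A = \frac{g\,(1-\cos\theta)}{r\sin\theta}\,\hat\phi, \qquad \vec B = \nabla\times\vec A = \frac{g\,\hat r}{r^2} \quad (\theta\neq\pi), \]
so the curl reproduces a radial monopole field everywhere except on the singular string itself.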
Electromagnetic Duality in SI Units
As noted by Heaviside, electric and magnetic fields can be transformed into each other by an abstract rotation of 90 degrees, leaving the Maxwell equations unchanged. Larmor extended this to a
continuous rotation. However, this requires the presence of magnetic monopoles. This note presents their work, using SI units.
Electromagnetic Duality in SI Units
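For reference, one common SI form of the Heaviside-Larmor rotation (sign conventions vary between texts):
\[ \vec E' = \vec E\cos\xi + c\vec B\sin\xi, \qquad c\vec B' = -\vec E\sin\xi + c\vec B\cos\xi, \]
with the same rotation applied to the source pairs (q_e, q_m/c) and (J_e, J_m/c). Setting xi = 90 degrees swaps the electric and magnetic quantities, recovering Heaviside's discrete duality.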
Spherical Arcs Illustrated using Quaternion Division
Quaternion division is commonly illustrated using spherical arcs on a sphere. This note turns things around, to show how to draw spherical arcs using quaternion division. We start with standard
quaternion definitions, then show the math and standard c code for drawing great arcs on spheres. While illustrated using three-space, a very similar extension applies to four-space.
Spherical Arcs Using Quaternion Division
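A minimal sketch of the idea in C (my own condensation, not the note's listing): the quotient q = p2/p1 of two equal-length pure quaternions is cos(theta) + n sin(theta), and its fractional powers,
applied by left multiplication, sweep the great arc from p1 to p2.

#include <stdio.h>
#include <math.h>
/* compile: gcc arc.c -lm -o arc */

typedef struct { double t, x, y, z; } Quat;

Quat qmul(Quat a, Quat b)             /* Hamilton product */
{
    Quat p;
    p.t = a.t*b.t - a.x*b.x - a.y*b.y - a.z*b.z;
    p.x = a.t*b.x + a.x*b.t + a.y*b.z - a.z*b.y;
    p.y = a.t*b.y + a.y*b.t + a.z*b.x - a.x*b.z;
    p.z = a.t*b.z + a.z*b.t + a.x*b.y - a.y*b.x;
    return p;
}

int main(void)
{
    /* two unit vectors on the sphere, written as pure quaternions */
    Quat p1 = {0, 1, 0, 0};
    Quat p2 = {0, 0, 1/sqrt(2.0), 1/sqrt(2.0)};
    /* q = p2 / p1 = p2 * p1^{-1}; for a unit pure quaternion, p1^{-1} = -p1 */
    Quat p1inv = {0, -p1.x, -p1.y, -p1.z};
    Quat q = qmul(p2, p1inv);
    double theta = acos(q.t);         /* arc angle between p1 and p2 */
    double nx = q.x/sin(theta), ny = q.y/sin(theta), nz = q.z/sin(theta);
    int i;

    for (i = 0; i <= 16; i++) {       /* 17 points along the great arc */
        double a = theta*i/16.0;
        Quat qt = {cos(a), nx*sin(a), ny*sin(a), nz*sin(a)};  /* q^t */
        Quat p = qmul(qt, p1);        /* left multiplication onto p1 */
        printf("%f %f %f\n", p.x, p.y, p.z);
    }
    return 0;
}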
Dual Orthogonal Rotations in Four-Space
The quaternion product produces coupled rotations in orthogonal planes in four-space, being a rotation in a (time, vector) plane as well as a rotation in the normal three-space plane. This note
presents the geometric interpretation for unit quaternion multiplication for both pre and post multiplication, as well as for conjugated pre and post products. The note ends with a simple explanation
for the 'sandwich' quaternion formula for space rotations.
Dual Rotation and Quaternions
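The standard statement, for a unit quaternion q = cos(theta/2) + n sin(theta/2) and a pure vector v:
\[ v' = q\,v\,\bar q. \]
One-sided multiplication by q rotates by theta/2 both in the (time, n) plane and in the spatial plane normal to n; conjugating the post-factor cancels the (time, n) half and doubles the spatial half,
which is the simple explanation for the full rotation angle theta that the note arrives at.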
Hyperbolic Product Preserving Transforms
I want to explore scenarios where two items vary, preserving their product. The first scenario is electromagnetism, varying epsilon and mu while keeping the product epsilon*mu constant. The simplest
way, of course, is to just use a factor and the inverse. This note, however, shows a nice way to carry out this operation using an angular parameter.
The Hyperbolic Transform
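A minimal statement of the transform (my paraphrase of the idea):
\[ \epsilon' = \epsilon\,e^{\theta} = \epsilon(\cosh\theta + \sinh\theta), \qquad \mu' = \mu\,e^{-\theta} = \mu(\cosh\theta - \sinh\theta), \]
so epsilon' mu' = epsilon mu for every theta: the wave speed 1/sqrt(epsilon mu) is preserved while the impedance sqrt(mu/epsilon) scales by e^{-theta}.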
Exercises with Magnetic Monopoles
Magnetic monopoles are a natural extension to electrodynamics. In this set of notes, I look at the angular momentum of the fields associated with a pure electric charge, and a pure magnetic monopole.
Using a purely classical approach, I obtain the standard result that a pure monopole and a pure electric charge will have a component of angular momentum independent of separation. When integrating
the angular momentum using cylindrical coordinates, I gain an insight not seen with delta function or spherical integration approaches. I find that half the spin of this system is concentrated in the
planes between the two charges. As the separation between charges approaches zero, this spin concentration becomes immense, being a planar delta function. Next, I look at duality of electric and
magnetic charges under the Maxwell equation. I find that the maximum spin for an electron/monopole mix is too small to account for the known electron spin. This conclusion does not rule out inherent
spin due to duality in the electron, but rather states there must be other sources for spin, such as dual monopole creating a dipole, or magnetic field due to circulation of electronic charge.
Magnetic Monopoles
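For reference, the separation-independent result, writing the monopole field as B = (mu_0/4 pi) g rhat/r^2 (conventions for g vary):
\[ \vec L_{field} = \frac{\mu_0}{4\pi}\,q\,g\,\hat d, \]
with dhat along the line joining the two charges; demanding |L| = hbar/2 reproduces Dirac's quantization condition.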
The Parson Ring Model of the Electron
In 1915, Alfred Lauck Parson published "A Magneton Theory of the Structure of the Atom" in Smithsonian Miscellaneous Collection, Pub 2371. In his paper, he proposed a spinning ring model of the
electron, where the magnetic force associated with the current ring provided the binding forces of chemical bonds. This model received attention from Arthur H. Compton, Clinton Davisson, Lars O.
Grondahl, David L. Webster, and H. Stanley Allen, but fell out of favor due to the rise of quantum mechanics. This model has been periodically rediscovered, most recently by Suichi Iida, W. Bostick,
David Bergman, J. Paul Wesley, and Philip Kanarev. This paper provides a closed form solution for the electric and magnetic potentials of a spinning charged ring, demonstrates *why* the ring must
rotate at the speed of light, and discusses the strengths and weaknesses of the Parson model.
The Parson Ring Model of the Electron
The Voltage Potential and Electric Field of a Charged Ring
This note provides a step by step derivation of an elliptic integral closed form solution for the voltage potential and electric field associated with a charged filament. At the end of the note
is a listing of a quick numerical verification of closed form and discrete sum models.
Charged Ring Potential and Field.
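A standard form of the closed-form result, for total charge q on a ring of radius a (the linked note's symbols may differ):
\[ V(\rho,z) = \frac{q}{2\pi^2\epsilon_0}\,\frac{K(k)}{\sqrt{(a+\rho)^2+z^2}}, \qquad k^2 = \frac{4a\rho}{(a+\rho)^2+z^2}, \]
with E = -grad V following from the derivative identities for K(k) and E(k).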
Homopolar Generator Design Exercises
This note illustrates the design of disk and drum style homopolar generators using the FEMM open source finite element solver of David Meeker. Along the way, we show the utility of the vector A
potential for modelling and explaining homopolar generator characteristics.
Homopolar Generator Design Exercises
Pseudo-Random Number Generators for PLCs
Programmable Logic Controllers occasionally need random number generators for software testing or for adaptive response. The Linear Congruential Generator (LCG) is easy to implement, and works well.
Here are implementations and discussions of LCGs for both Siemens S7-200 family PLCs and the Allen Bradley MicroLogix family PLCs.
LCGs for PLCs
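A minimal C sketch of the generator discussed above, using the common Numerical Recipes constants (the PLC versions in the linked note use the same recurrence in ladder/STL form; on a PLC the
"mod 2^32" happens automatically as double-word rollover):

#include <stdio.h>
#include <stdint.h>

static uint32_t seed = 12345;

uint32_t lcg(void)
{
    seed = 1664525u * seed + 1013904223u;   /* x <- (a*x + c) mod 2^32 */
    return seed;
}

int main(void)
{
    int i;
    for (i = 0; i < 5; i++)
        printf("%u\n", (unsigned)(lcg() >> 16));  /* high bits are the most random */
    return 0;
}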
Vector Formulas for Curvature and Torsion in Three-space
I've previously extended the Frenet-Serret formulas to use quaternions in four-space. However, it seems I've never documented the same extension to vectors in three-space. Here then, are some very
useful formulas extending curvature and torsion as vectors in three-space, as opposed to simple signed numbers as found in the standard Frenet-Serret formulas.
Vector Curvature and Torsion In 3D
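For reference, the standard parameter-free formulas (the vector packaging in the linked note may differ in convention): for a curve r(t),
\[ \kappa = \frac{|\vec r\,'\times\vec r\,''|}{|\vec r\,'|^3}, \qquad \tau = \frac{(\vec r\,'\times\vec r\,'')\cdot\vec r\,'''}{|\vec r\,'\times\vec r\,''|^2}, \]
and the natural vector extension takes the curvature vector as dT/ds, whose magnitude is kappa and whose direction points toward the center of the osculating circle.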
Parametric Formulas for Villarceau Circles
At every point on a torus, four perfect circles intersect. Two of these circles are the toroidal and poloidal circles commonly used for coordinates on a torus. The other two circles are the
Villarceau circles, created by slicing the torus at an angle bitangent to the interior opening of the torus. These circles can be used as an alternative coordinate system for the torus. These circles
are also of technological interest for high frequency, resonant, air core transformers.
Parametric Formulas for Villarceau Circles
Small C Program Creating Wireframe Villarceau Circles
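One convenient parametrization, easily checked by substitution, for a torus of major radius R and minor radius r:
\[ x = \sqrt{R^2-r^2}\,\cos t, \qquad y = r + R\sin t, \qquad z = r\cos t. \]
Then sqrt(x^2+y^2) = R + r sin t, so (sqrt(x^2+y^2) - R)^2 + z^2 = r^2 and the circle lies on the torus; the mirror circle comes from r -> -r, and rotating either circle about the z axis sweeps out
the whole Villarceau family.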
Hunting the Elusive Pigtailed Electrocrab!
An illusionist going by the handle lifehack2012 has posted a number of youtube videos showing small motors, lights and DVMs apparently powered by a permanent magnet and coil combinations. Assuming a
trickster at work, we can duplicate his effects by concealed batteries in the DVM for the DC voltage measurement (where the coil acts as a closing switch), a concealed battery in the DC motor (using a
short rotor, or using only one of two arc magnets, with the coil acting as a closing switch again), and a lighting demonstration using an open coil with hairline wires to run the light. Naturally,
an armchair approach denouncing these claims as tricks is not as much fun as actually building a non-working replication, and then denouncing these claims as tricks.
Hunting the Elusive Pigtailed Electrocrab!
Step by Step From A to B Field
Elliptic integrals are very useful for magnetic field calculations in cylindrical symmetry. Here is a step by step example of calculating B fields from A potentials using K(k) and E(k). Along the
way, we see some amazing cancellations in the calculations.
A to B with Elliptic Integrals
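The skeleton of the calculation, for the curious: in cylindrical symmetry only A_phi is nonzero, so
\[ B_\rho = -\frac{\partial A_\phi}{\partial z}, \qquad B_z = \frac{1}{\rho}\frac{\partial(\rho A_\phi)}{\partial\rho}, \]
and the differentiation runs through the standard identities
\[ \frac{dK}{dk} = \frac{E}{k(1-k^2)} - \frac{K}{k}, \qquad \frac{dE}{dk} = \frac{E-K}{k}, \]
which is where the cancellations mentioned above happen.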
Discrete Groups via Multiplication Tables
I've always wanted to add a chapter to the textbooks using a multiplication table approach to group theory. Nathan Carter has provided such a tool with his Group Explorer program. In these notes, I
am using GAP and Group Explorer to illustrate the smallest groups, with a heavy emphasis on multiplication tables, which were the tool used by Galois, Cayley and others when developing the theory of groups.
Discrete Groups via Multiplication Tables
Trinary Logic
What could be more fun than explicitly listing all sixteen dual input binary logic functions? Why, listing all 19683 dual input trinary logic functions!
In this note, I list the 27 single input trinary gates, which include three inverters, two rotators, three static levels and a whole slew of information losers. After that, I run through 19683
different dual input trinary gates, identifying the associative, commutative, and both associative and commutative gates. The 63 'both' gates are broken into three families of just three members, for
static level, equality and rotation gates, and then 9 families of six members covering functions such as single trit multiplication and modulo three addition.
Free new nomenclature:
• trinary - three level logic akin to binary.
• trit - the analog to a bit
• onefer - gates with a single output level
• twofer - gates with two of three output levels
• threefer - gates which express all three output levels
Enumerating Trinary Logic
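A brute-force check of these counts fits in a few lines of C (a sketch of my own, not the note's program): encode each gate as the base-3 digits of an index in [0, 19683) and test the two properties
directly. The commutative count should come out to 3^6 = 729, and the "both" count to the 63 gates described above.

#include <stdio.h>

/* Digit number 3*a+b of n (base 3) is the gate output f(a,b). */
static int f(long n, int a, int b)
{
    int pos = 3*a + b;
    while (pos--) n /= 3;
    return (int)(n % 3);
}

int main(void)
{
    long n;
    int commutative = 0, associative = 0, both = 0;
    for (n = 0; n < 19683; n++) {
        int c = 1, s = 1, a, b, d;
        for (a = 0; a < 3 && (c || s); a++)
            for (b = 0; b < 3; b++) {
                if (f(n,a,b) != f(n,b,a)) c = 0;           /* commutativity */
                for (d = 0; d < 3; d++)                    /* associativity */
                    if (f(n, f(n,a,b), d) != f(n, a, f(n,b,d))) s = 0;
            }
        commutative += c;
        associative += s;
        both += (c && s);
    }
    printf("commutative %d  associative %d  both %d\n",
           commutative, associative, both);
    return 0;
}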
Reverse Engineering Willis Linsay's Stepper
Stephen Baxter and Terry Pratchett have provided a science-fiction gem with "The Long Earth". Unfortunately, the Stepper assembly diagram by Richard Shailer has numerous errors and omissions, making
replication a frustrating experience. Likewise, an uncredited stepper photo found on "www.io9.com" has errors and irrelevancies. The following document attempts to improve the state of the
documentation for DIY stepping technology.
Willis Linsay's Stepper
Interior Partitioning The Tetrahedron
We can generate embedded polyhedra by a simple technique. Between any two vertices which make an edge, we create a new vertex for the embedded polyhedra in the middle of that edge. Joining those new
vertices generates the new solid. For the case of the cube, this process immediately results in the fully truncated (rectified) cube, the cuboctahedron. Curiously enough, we can also generate the
cuboctahedron by two stages of dividing the tetrahedron. The end of the first stage results in an octahedron. Dividing the octahedron results in a cuboctahedron. While the tetrahedron and octahedron
are stiff structures, the cube and cuboctahedron are not. Internally bracing the cuboctahedron with a twelve point star does achieve a stiff structure with cartesian orientation and stackability.
Picture and commentary
Reflections on Trusting Trust
One of the most influential essays I've read is "Reflections on Trusting Trust", written by Ken Thompson, published in Communications of the ACM, August 1984, Volume 27, Number 8. This is a Turing
Lecture by one of the cofounders of the C programming language, and a significant developer of Unix. Many PDFs of this cult classic are on the web. However, the scanned images increase the ambiguity
of the program listings provided. Here is a complete version of his program listing
Figure 1
illustrating a self replicating C program.
To compile,
"gcc replicator.c -o replicator".
To run, "./replicator > moose.c".
To verify equality, "md5sum replicator.c moose.c".
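For readers without the paper at hand, here is a minimal self-reproducing C program in the same spirit (an illustration of my own, not Thompson's Figure 1; 10 and 34 are the ASCII codes for newline
and double quote):

#include <stdio.h>
char*s="#include <stdio.h>%cchar*s=%c%s%c;%cint main(){printf(s,10,34,s,34,10,10);return 0;}%c";
int main(){printf(s,10,34,s,34,10,10);return 0;}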
I very much appreciate Ken's admission and explanation of the backdoor he placed into the early Unix systems, as well as his cautions about embedded backdoors in compilers, microcode, and hardware.
Text To Speech for SpeakJet, VoiceBox Shield and Arduino
The SpeakJet is an artificial speech IC found on the VoiceBox Shield for the Arduino, available at SparkFun.com. This chip really is a phoneme generator, and needs the programmer to translate speech
into phonemes, and then into numerical codes to send to the chip via a serial link before speech actually occurs. In a minimal configuration, this chip requires a serial link with CTS at 9600 baud
(one data wire, one CTS from CPU, as well as ground), and provides recognizable speech. The default demo "All your base are belong to us" was pleasant and easily understood. However, the effort to
create new phrases was rather steep. SpeakJet provides a free application - PhraseALator - to convert text to their numerical codes. However, words not in their dictionary are ignored, and left to
the user to provide. I really like my robots to speak, yet I am very lazy. So, what to do?
The happy answer is to examine open source text to speech systems, looking for existing text to speech which I can modify. One such answer is the Festival Light (flite) software from Carnegie-Mellon.
After compiling their software (used in many Asterisk PBX systems, as an aside), I use the t2p application to convert text to phonemes. I then convert their phonemes to SpeakJet codes using a sed
script (beware - this is a Linux command line) - Flite2Speakjet.sed - provided in the previous link. Now, the codes provided don't modify pitch, and are not comprehensive, yet they get me much
further toward recognizable speech than the official solution provided.
Sample sequence of commands
t2p "Now is the time for all good men to come to the aid of their country" > moose.txt
sed -f Flite2Speakjet.sed moose.txt > rawcodes.txt
I then use a text editor to place the codes into the Arduino program, and then begin modifying by adding pitch and pauses until I am satisfied.
Know Limits - No Limits
I was recently asked by Balu Kumar to provide a short speech prior to the First Lego League Robotics Competition for elementary and middle schools hosted by Westwood High School (our local high
school). I very much respect Dean Kamen, who founded FIRST, and National Instruments, who provided a simplified version of LabView for the competitors. However, in general, I tend to reject any
restrictions on possible solutions for problems, such as use of Lego kits as the only mechanical basis, rather than custom designed components. While we have to know our limits, I prefer no limits.
This became the theme of this particular address. Pictures are courtesy of Hubble and NASA.
Force Balanced Transmission Lines
Eric Dollard posed a transmission line puzzle.
************* Original Posting ***********************
I have a D.C. transmission line, the conductors are 2 inches in
diameter, spacing is 18 feet.
How many ounces of force are developed upon a 600 foot span of
this line, for the following;
1. 1000 ampere line current.
2. 1000 KV line potential?
Here is my answer.
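One way to run the numbers (a back-of-envelope sketch of my own; the linked answer is the authoritative one). The magnetic part is the familiar force between parallel currents, and the electric part
treats the pair as a two-wire line:

#include <stdio.h>
#include <math.h>
/* compile: gcc span.c -lm -o span
   Hypothetical worked numbers; see the linked answer for the intended one. */
int main(void)
{
    double mu0 = 4e-7*M_PI, eps0 = 8.854e-12;
    double d = 18*0.3048;      /* conductor spacing, m           */
    double a = 1.0*0.0254;     /* conductor radius, m (2 in dia) */
    double L = 600*0.3048;     /* span length, m                 */
    double I = 1000.0, V = 1.0e6;

    /* 1. magnetic force between the two parallel 1000 A currents */
    double Fm = mu0*I*I/(2*M_PI*d) * L;

    /* 2. electric force: charge per meter of a two-wire line at 1000 kV,
          then force = lambda * (field of the other conductor)     */
    double lam = M_PI*eps0*V/acosh(d/(2*a));
    double Fe  = lam*lam/(2*M_PI*eps0*d) * L;

    printf("magnetic: %6.2f N = %5.1f ozf\n", Fm, 16*Fm/4.448);
    printf("electric: %6.2f N = %5.1f ozf\n", Fe, 16*Fe/4.448);
    return 0;
}

With these assumptions, the magnetic case lands near two dozen ounces of force over the span, and the 1000 kV electric case a few times larger - but again, compare against the linked answer.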
Tachyonic Neutrinos and Excellent Science
I am very impressed with CERN's professionalism in verifying and publicizing the tachyonic nature of neutrinos. I have done nothing in this area, but I am so pleased with their work, that I want to
publicly thank these many people.
Like any major event, there have been prophets who correctly called reality. In this case, George Sudarshan, Chodos, Hauser, Kostelecky, Ehrlich and Eue Jin Jeong are some names to look for.
Chodos, I believe, makes the correct assertion that the left hand only helicity of neutrinos, and the right hand only helicity of anti-neutrinos, guaranteed luminal or tachyonic speeds, and that the
presence of neutrino flavor oscillation locked out luminal only speeds leaving strictly superluminal speeds for neutrinos. Because his argument is so neat, I've spent some time to understand his
points, and I'm hoping to be able to communicate his arguments.
Neutrinos have an inherent spin, and consequently can be thought of as following a spiral path as they propagate. A good mental picture is to think of the tips of a propellor on an airplane. As the
plane flies, the tips of the propellor trace a spiral path. Helicity is measure of the torsion of a spiraling curve. If we are stationary, watching a plane advance toward us, counter-clockwise
rotation of the propellor and the closing radial distance traces out a right hand thread, in the sense of screw. This is positive torsion, positive helicity. Now, assume we change our speed from
stationary to faster than the airplane. The airplane is now separating from us, as we leave the airplane behind. From our point of view, the trajectory of the propellor tips has changed from a
counter-clockwise motion approaching to a counter-clockwise motion receding. The apparent pitch of the spiral, from our point of view, has become negative, and is described as a left handed screw
with negative torsion (from our point of view).
We can directly measure high energy neutrinos, and we indirectly infer high and low energy neutrinos when looking at particle decays. The experimental fact is that we see only left-hand neutrinos,
and only right-hand anti-neutrinos. If neutrinos travelled at subluminal speeds, a change of reference velocity would guarantee a mix of left and right handed neutrinos. The lower in energy, the
closer to 50/50 the randomized mix should be. Because we see *none*, we know neutrinos had to be luminal or beyond.
Now for the fun stuff here. Ordinarily, when we work with mass, we are dealing with stable particles. (Think electrons, protons, etc.) Mass for these particles is a real number, corresponding to an
inverse spatial distance in natural units. Unstable particles, on the other hand (think muons, neutrons, etc.), get an imaginary component of mass proportional to the inverse particle lifetime.
Particles with mass, even imaginary mass, cannot propagate at light speed. Consequently, when the solar neutrino paradox of 1/3 neutrino flux came up and physicists proposed neutrino flavor
oscillations, this implied neutrino mass, and that, in turn, ruled out luminal speeds. (Neutrino flavor oscillations have been experimentally verified using reactor generated neutrinos, emitted as
electron neutrinos, with time correlated detection of electron and muon neutrinos at remote detectors: the Japanese KamLAND experiment and the MINOS experiment in Minnesota.)
Chodos argument from 1985 is thus: Neutrinos can't be subluminal, can't be luminal, must be superluminal.
First verification was supernova 1987A, where neutrinos were detected prior to optical spotting. (Found in retrospect.) Arguments about the delayed photons propagating from the supernova core
prevented this observation from being conclusive, but certainly provided indication. Consequently, experiments which generated time resolved, spatially resolved neutrino beams began to look for time
of flight measurements. Fermilab MINOS measured superluminal speeds, but the uncertainty in the measurements were less than six standard deviations, and consequently was not deemed definitive. CERN,
in turn, has followed up, and reduced measurement uncertainties to the six sigma standard.
Implications for future supernova detection: The supernova events have a large neutrino pulse at fairly constant energy during the collapsing phase transformation (flash), followed by rapidly cooling
neutrinos from the hot neutron core. As high energy neutrinos travel slower than low energy neutrinos, (think of proximity to light speed being the high energy condition), we will see the time
reversed rising sizzle, then flash for supernova events. Being specific, if we see an increasing neutrino flux coming from Betelgeuse or Eta Carinae, we would then later see the high energy neutrino
flash followed by the optical event.
This would be a very interesting verification, to say the least.
Some references:
The neutrino as a tachyon, Chodos, Alan, Hauser, Avi I., Kostelecky, V. Alan, Phys. Lett. B150 (1985) 431.
Neutrino mass^2 inferred from the cosmic ray spectrum and tritium beta decay, Ehrlich, Robert, Phys. Lett. B493 (2000) 229-232, arXiv:hep-ph/0009040.
Eue Jin Jeong: arXiv:hep-ph/9704311 v4, 1997
Virtual Particles and Four-Space Trajectories
In four-space, trajectories with constant curvature, torsion and lift are trapped on a hypersphere of fixed radius. To moving observers, such as ourselves, as we move along the time axis, we will see
a transient disturbance as the trapped particle passes our time plane. For particles with specific ratios of curvature:torsion:lift, we will see a pair of particles form, separate, re-approach and
disappear. For most cases, however, we will see a scatter of a large number of particle pairs, which last no longer than the transit time of our time motion over the diameter of the trapped particle
hypersphere. The mathematics of the constant curvature trajectories is given in Curves of Constant Curvatures in Four Dimensional Spacetime. To show these curves in four dimensions, the program RK4.c
is provided. This program compiles with any C compiler, and produces two files. Curves.xyzt is a four dimensional coordinate set intended to be used with hyper.c, as described below. Curves.xyt is a
projection into three-space, suitable for use with the truss.c and flyby.c programs, also referenced below.
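RK4.c itself is linked above; for readers who want the skeleton, a classical fourth-order Runge-Kutta step in C looks like the following (the derivative function here is a placeholder circular
motion, not the constant-curvature system of the note):

#include <stdio.h>

#define N 4   /* state dimension, e.g. (t,x,y,z) */

/* User-supplied derivative; this one is only a placeholder. */
void deriv(double s, const double y[N], double dy[N])
{
    dy[0] = 1.0;      /* hypothetical: unit speed along the time axis */
    dy[1] = -y[2];    /* hypothetical: rotation in the x-y plane      */
    dy[2] =  y[1];
    dy[3] = 0.0;
}

void rk4_step(double s, double h, double y[N])
{
    double k1[N], k2[N], k3[N], k4[N], tmp[N];
    int i;
    deriv(s, y, k1);
    for (i = 0; i < N; i++) tmp[i] = y[i] + 0.5*h*k1[i];
    deriv(s + 0.5*h, tmp, k2);
    for (i = 0; i < N; i++) tmp[i] = y[i] + 0.5*h*k2[i];
    deriv(s + 0.5*h, tmp, k3);
    for (i = 0; i < N; i++) tmp[i] = y[i] + h*k3[i];
    deriv(s + h, tmp, k4);
    for (i = 0; i < N; i++)
        y[i] += h*(k1[i] + 2.0*k2[i] + 2.0*k3[i] + k4[i])/6.0;
}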
Interesting Puzzle
Graeme Base is the author of Animalia, The Eleventh Hour - A Curious Mystery and Worst Band in the Universe. These books, officially for children, are especially appreciated by adults who enjoy
clever illustrations or puzzles. Unlike the Eleventh Hour, which had a red envelope at the back which revealed the puzzle solution, the real world does not usually publish answer keys. While the
dysfunctional world of "Worst Band in the Universe" had a happy ending (as well as a nice CD!), the real world does not favor independent idealists.
Frenet-Serret Formulas In Quaternion Format
The three dimensional Frenet-Serret formulas describe a trajectory using pathlength (deviation from a point) as a parameter, and specifying curvature (deviation from a line) and torsion (deviation
from a plane) as a function of pathlength. Two curves with the same curvature and torsion histories are congruent, despite origin, translation or rotations. It is tempting to describe physics by
having laws specify curvature and torsion. In practice, however, the time history is essential, as different time histories and forces can result in similar spatial curves. Consequently, if one wants
to describe physics by curvature and torsion, one must move to four dimensional spacetime, and accept another curvature, which I call lift, measuring deviation from a volume. It turns out that the
quaternion divisional algebra is a natural fit for the Frenet-Serret equations extended to four-space. It also turns out that the left handed space form has a pleasing simplicity. Even more fun, it
is easy to extend curvatures from scalar to vectors, with an interesting alignment occurring in fourspace between curvature and lift. Quaternion Curvatures presents the left handed, four dimensional
Frenet-Serret equations, initially scalar form, later vector curvature form. In addition to the Frenet forms, formulas for curvatures paramaterized by an arbitrary parameter, such as proper time, or
angular position, are provided.
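For reference, the scalar three-space chain and its four-space extension (the note's quaternion packaging and vector-curvature generalization build on these):
\[ \hat T' = \kappa\hat N, \qquad \hat N' = -\kappa\hat T + \tau\hat B, \qquad \hat B' = -\tau\hat N, \]
and in four-space a third curvature - the lift nu - extends the chain with a fourth frame vector:
\[ \hat B' = -\tau\hat N + \nu\hat C, \qquad \hat C' = -\nu\hat B, \]
primes denoting derivatives with respect to pathlength.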
Superluminal Weber Force Laws Simulations
The Weber force law is solved for orbits using the technique shown in Clemente and Assis, Int-J-Theor-Phys-V30-p537-545(1991), as well as Assis - Weber's Electrodynamics 1994. Having verified
agreement with low speed orbits with significant angular momentum, I then go to the extremes of tachyonic behavior for a zero angular momentum pair of particles. The solution has repeating behaviors
criticized by Helmholtz, but which I view as fascinating. Specifically, we see two particles collide at 1.414c, pass transparently through each other at speed, accelerate to infinite speed in a
finite distance, change direction at infinite speed (which corresponds to zero kinetic energy for tachyons), fall through zero radius again at 1.414c, then expand outward as classical particles,
slowing to zero speed, then repeating the fall toward zero again. This exercise does not describe reality as we know it, as Weber's law does not use delayed potential, nor was mass scaled
relativistically. However, this exercise has provided insight into tachyonic behavior, with a critical speed not of c, but 1.414c.
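For reference, the Weber force in the notation of Assis (SI units):
\[ F = \frac{q_1 q_2}{4\pi\epsilon_0 r^2}\left(1 - \frac{\dot r^2}{2c^2} + \frac{r\ddot r}{c^2}\right), \]
directed along the line joining the charges. The velocity-dependent factor changes sign at rdot = sqrt(2) c, which is where the 1.414c critical speed above comes from.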
Equivalent Magnetic Force Laws
Equivalent force laws have been a source of conflict among electrical students since the time of Ampere, Maxwell and Heaviside. Periodically, a new batch of enthusiasts discover alternative force
laws, but aren't aware that these force laws cannot be distinguished using macroscopic closed current sources for magnetic fields. To distinguish between various proposed force laws requires electron
scale modelling, which is a topic for my next posting. The pdf above provides MKS representation of five different force laws, shows equivalent macroscopic observables, yet different differential
force elements. Source code for the open sourced MagneticForces.c is also provided.
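For concreteness, the two most commonly compared differential elements (in the MKS form used by Assis; the other laws in the pdf are variations on these):
\[ d^2\vec F_{Grassmann} = \frac{\mu_0 I_1 I_2}{4\pi}\,\frac{d\vec l_2\times(d\vec l_1\times\hat r)}{r^2}, \]
\[ d^2\vec F_{Ampere} = -\frac{\mu_0 I_1 I_2}{4\pi}\,\frac{\hat r}{r^2}\Big[2\,(d\vec l_1\cdot d\vec l_2) - 3\,(\hat r\cdot d\vec l_1)(\hat r\cdot d\vec l_2)\Big]. \]
Integrated around closed circuits these give identical total forces, which is exactly why macroscopic experiments cannot separate them.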
Flyby - An Immersive, Interactive, 3D Wireframe Plotter
Flyby was initially written as a Macintosh application in the 1990s, to allow translation through a data set, as well as rotations. For stiff problems, where multiple resonances exist on widely
different timescales, it is useful to be able to zoom in to see small oscillations which are superimposed upon slow changes. Flyby served me well when studying trajectories. Flyby.c is an open
source, Linux/X-windows port in C with minimal dependencies.
(Compile with gcc, command gcc flyby.c -o flyby -lX11 -lXdmcp -lXau -lm )
Useful Formulas for Elliptic Integrals
Elliptic Integrals are usually tabulated in canonical form, rather than in the form found when solving problems. Useful Elliptic Integral Formulas Sheet is the set of formulas I find useful when
working with circular current loops.
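For orientation, the canonical forms being referred to are
\[ K(k) = \int_0^{\pi/2}\frac{d\theta}{\sqrt{1-k^2\sin^2\theta}}, \qquad E(k) = \int_0^{\pi/2}\sqrt{1-k^2\sin^2\theta}\,d\theta; \]
beware that many tables and libraries take the parameter m = k^2 rather than the modulus k.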
Magnetic Field Calculation Utilities
Closed form solutions for a circular current loop's magnetic fields have been around since before Maxwell. However, the use of elliptic integrals seems to intimidate potential users. Closed Loop
Formulas for A and B with Code is a simple example of closed form solutions for both the vector potential "A" and the magnetic field "B", with verifying source code for the integrals and simulation
in simple C.
Thank-you to Terry Sewards for pointing out a missing rho in an earlier copy of this paper.
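One common form of the result (equivalent to the expression in the pdf up to algebraic rearrangement), for a loop of radius a carrying current I, in cylindrical coordinates:
\[ A_\phi = \frac{\mu_0 I}{\pi k}\sqrt{\frac{a}{\rho}}\left[\left(1-\frac{k^2}{2}\right)K(k) - E(k)\right], \qquad k^2 = \frac{4a\rho}{(a+\rho)^2+z^2}, \]
with B then following from the curl as in the A-to-B note above.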
Plotting Utilities
I need simple wire frame plotters in three and four dimensions when I'm working with trajectory simulations of fundamental particles. The archive includes a statically linked X11 port and source file
for the *hyper* 4D plotter and the *truss* 3D wireframe plotter, as well as sample input files. These programs are descendants of the NASA Truss-3D program from the seventies, and Ammeraal's excellent
guide to computer graphics from that same time frame.
This is a good point to advertise http://www.StaticRamLinux.com . I very much appreciate small, simple programs and operating systems.
Plasma Striations
Tesla coils are a popular demonstration. We often use fluorescent bulbs and neons as interactive loads for the Tesla coils, and these fluorescents often show a banding pattern of light and dark. Here
are a few videos demonstrating these striations, and some thoughts about their cause.
Plasma Striations
Single Layer Air Core Solenoids
Spring semester 2011, the students in my AC circuits class built a variety of air core solenoids to be used as Tesla coils to light neons and fluorescent tubes. This is a nice example of Radio Shack
hobby level technology.
PVC Form Single Layer Solenoids
Permutation Sequences
This note, and the referenced programs Lexical.c and stdperm.c, illustrate some well known methods of enumerating all possible permutations associated with a set.
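The workhorse of the lexical method is the classic "next permutation" step (a condensed sketch of my own; Lexical.c is the full treatment): find the rightmost ascent, swap in the next larger tail
element, and reverse the tail.

#include <stdio.h>

/* Advance a[] (length n) to the next permutation in lexical order.
   Returns 0 when a[] was already the last (descending) permutation. */
int next_perm(int a[], int n)
{
    int i = n - 2, j = n - 1, t;
    while (i >= 0 && a[i] >= a[i+1]) i--;        /* rightmost ascent */
    if (i < 0) return 0;
    while (a[j] <= a[i]) j--;                    /* rightmost element > a[i] */
    t = a[i]; a[i] = a[j]; a[j] = t;
    for (j = n - 1, i = i + 1; i < j; i++, j--) { /* reverse the tail */
        t = a[i]; a[i] = a[j]; a[j] = t;
    }
    return 1;
}

int main(void)
{
    int a[3] = {1, 2, 3};
    do {
        printf("%d %d %d\n", a[0], a[1], a[2]);  /* 123 132 213 231 312 321 */
    } while (next_perm(a, 3));
    return 0;
}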
Mutual Inductance via Elliptic Integrals in MKS Units
Coaxial circular windings can have mutual inductance and field calculations simplified by use of complete elliptic integrals or AGM functions. Here is a step by step classical derivation, followed by
a C numerical double integration. Both work well, but the classical form calculates much faster. This note has been corrected on March 6, 2011 to correct a missing 4 pi divisor as pointed out by
Clifford Curry. Thanks!
Classical Calculation for Mutual Inductance of Two Coaxial Loops in MKS Units
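For reference, Maxwell's closed form for two coaxial loops of radii a and A with axial separation d:
\[ M = \mu_0\sqrt{aA}\left[\left(\frac{2}{k}-k\right)K(k) - \frac{2}{k}E(k)\right], \qquad k^2 = \frac{4aA}{(a+A)^2+d^2}. \]
The numerical double integral in the note is a direct check of this expression.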
"Magnetic Hematite"
A novelty sold at local fairs is 'Snake Eggs', made in China, billed as magnetized hematite. The first observation is that the residual field in these magnets is quite strong, seeming comparable to
neodymium/boron magnets. The second observation is that the material is a tough, impact tolerant ceramic. The third is that this material is a good insulator. This is clearly an interesting material
for permanent magnet motors. So, it is time to investigate.
My tasks here are to
1. Identify the products, sources and factories for this product.
□ This is the original product of interest.
Magnetic Singing Rattle Snake Egg Magnets (YHMT-001)
Zhejiang Dongyang Changle Toys Co., Ltd.
DongYang Double Swallow Magnetic Stone Ltd.
No. 168 Xingsheng west road, Dongyang, Zhejiang, China Zip/Postal : 322118
Contact: Ms. Chenjie
Telephone : 86-579-6551-138 Fax : 86-579-6551-138
□ This is a different company which sells magnetic clasps and beads, using the 'magnetic hematite' tag. I don't (yet) have their product.
Shanmei Arts & Gifts Factory
No.2, Building 25, Hejie Village, Houzhai Street,Yiwu City, Jinhua, Zhejiang, China
http://www.joyjw.com http://www.joyjw.cn
□ Out of Germany, we have ChenYang Magnetics.
ChenYang-Technologies GmbH & Co. KG
Markt Schwabener Str. 8
85464 Finsing
Products CY-SE18x60 and CY-SE16x45 (Apparently 2005 time frame)
This site clearly calls these products polished ceramic magnets.
□ BearHaversack.com sells 'magnetic hematite' by the pound. I bought a five pound bag of various geometrical shapes (not the Snake Eggs shape). These appear to be the same base material, but
have parallel planes for easy magnetization.
2. Ascertain the actual material used. Hematite is not naturally magnetic.
□ From http://www.webmineral.com/data/Hematite.shtml
Hematite - Fe2 O3 (Fe 3+)
Molecular weight 159.69 gm
No residual magnetic field
Reddish brown streak
So, first, easy tests are the magnetic and streak tests.
Compare with hematite ore samples from BearHaversack. The ore is non-magnetic. The ore streak test against white unglazed ceramic (bottom of coffee cup) is reddish brown for mineral hematite,
but blackish grey for "magnetic hematite". These are clearly different materials. Checking the internet for other investigations, we find . . .
□ From http://www.mindat.org/min-1856.html
NOTE - the 'hematite' used in jewelery, and often sold as magnetized items, is nothing of the sort and is an artificially created material, see Magnetic Hematite.
□ From http://www.mindat.org/min-35948.html
An artificially created magnetic material (this contains NO natural Hematite) widely sold as 'magnetic hematite' or simply 'hematite'. Please note that the name 'hematite' is quite
misleading, as this is NOT a natural stone.
Investigation of one item offered for sale as 'magnetic hematite' showed it was composed of a ceramic barium-strontium ferrite magnet: (Ba,Sr)Fe12O19 that has the magnetoplumbite structure.
The average grain size of the ceramic is 5-10 microns, and the porosity is 10-15%. In addition, the magnetic field strength of this material is much larger than that of any magnetite
specimen. Other items were identified as magnetoplumbite-type SrFe12O19.
3. I measured the density of twelve of the BearHaversack magnets by using an A&D Model HJ-150 scale to measure weight, and a small beaker to measure volume. To isolate magnetic effects, I used an
eight inch tall piece of styrofoam packing to keep the magnets away from the ferrous material in the base of the scale. To verify accurate measurement of the mass of the magnets, rather than mass
and magnetic attraction to ferrous materials in the base and supports, the magnets were doubled over to form a quadrupole, rather than dipole, and the magnets were re-measured in multiple
directions and orientations while observing the same measured mass. The total mass was 180.5g, the volume (measured by water displacement in a beaker), was 38 mL. The density is 4.75 g/mL. (As
opposed to 5.6 for hematite.) CY-Magnetics calls out a density of 4.8 to 4.9 for their Hard Ceramic Ferrite materials.
4. CY-Magnetics' ceramic magnets have a Curie temperature of about 450 C. Red hot is about 520 C. Heating in an oven did not get hot enough to demagnetize the sample. Heating with a propane torch
resulted in brittle failure due to too high a rate of heating. Instrumented kiln heating looking for magnetic drop is probably required to measure an accurate Curie temperature.
5. The surface residual magnetic field was measured using a Sypris/FWBell FH-520 (177101) Hall probe. This probe has a nominal sensitivity of 100mV/T when run at an excitation of 25 mA. I did not do
a Helmholtz calibration on this particular probe. The particular probe has a 3.5 mV offset, and was run at 22.9 mA, for an estimated sensitivity of 91.3 mV/T. Measurements were made with the
probe white side up and then in the same place with the probe white side down, the numbers subtracted and divided by two to reduce the offset voltage. The BearHaversack magnets had a residual
field of 0.17T to 0.26T, highest near edges, lower in middle of faces. By contrast, my neodymium magnets measured 0.52T. These numbers are consistent with a Y10 ceramic material for the
BearHaversack material, and a half-magnetized N27 neodymium material.
Conclusion - This really great magnetic material is not hematite. It happens to be a really great magnetic material, probably barium-strontium ferrite ceramic.
Future work - I want to make samples of magnetite via the Massart method both for ferrofluid fun and for characterization of magnetite's magnetic properties.
Arduino Based Gauss Level Three Axis Magnetic Sensor
I've posted a small program to read the Honeywell HMC5843 triple axis sensor using the I2C interface of the Arduino 2009 board. This sweet sensor is used as a solid state compass in cell phones, and
is not really suited for high field (motors and magnets) work. It is, however, ideal for low fields.
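A minimal Arduino sketch of the idea (my own condensation, not the posted program; the register map - device address 0x1E, mode register 0x02, data starting at 0x03, MSB first - is from my reading of
the HMC5843 datasheet and should be verified against your copy):

#include <Wire.h>

void setup() {
  Serial.begin(9600);
  Wire.begin();
  Wire.beginTransmission(0x1E);   // HMC5843 I2C address
  Wire.write(0x02);               // mode register
  Wire.write(0x00);               // continuous measurement mode
  Wire.endTransmission();
}

void loop() {
  int16_t v[3];
  Wire.beginTransmission(0x1E);
  Wire.write(0x03);               // point at the first data register
  Wire.endTransmission();
  Wire.requestFrom(0x1E, 6);      // X, Y, Z as MSB/LSB pairs
  for (int i = 0; i < 3; i++) {
    v[i]  = Wire.read() << 8;
    v[i] |= Wire.read();
  }
  Serial.print(v[0]); Serial.print(' ');
  Serial.print(v[1]); Serial.print(' ');
  Serial.println(v[2]);
  delay(100);
}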
Acquire and Test Capacitors from EEStor
This is a small, Cedar Park based company which claims a process for making barium titanate capacitors with an energy density comparable to batteries. I want them to do well, but what I really
want are samples.
My business plan for this company is to set up manufacturing lines in the former Motorola Ed Bluestein facilities, as well as the former Dell Topfer facility in North Austin. I would use Celestica
and Flextronics (formerly Solectron) for power module assembly. I would also hire ACC electronics students as interns, and graduates as full employees. (This, of course, is just me talking my book.)
Project 42
The Answer is 42.
Complex numbers, quaternions and octonions are division algebras, where multiplication and magnitudes can be defined in such a way that the magnitude of a product is also the product of the
magnitudes of the two multiplying terms. This type of product then allows us to define division. My fascination has been the fact that there are so many different formulas in these two, four, and
eight
dimensional spaces that also satisfy the norm relationship above. Initially, I used brute force to enumerate working algebras, finding 2^3 solutions for the complex family (comps), 2^8 solutions for
the quaternion family (quads), and 2^19 solutions for the octonion family (octs). These individual solutions are merely choices for the polarity (sign) of the product terms in the defining basis
multiplication table.
The encoding of the basis vectors as binary numbers, and the product base being given by XOR is worth illustrating numerically, as some interesting interpretations can be made about dimensionality
and multiplication. For traditional quaternions, we have numbers and three spatial dimensions. Borrowing notation from spacetime, I'll call t=00 as the numbers (scalars), i=01 as one space axis, j=10
as another space axis, and k=11 as a third. The multiplication table is
Right Hand Quaternion Unit Vector Multiplication Table
Prefactor Postfactor | Binary format
| |
| 1 i j k | t*i = 00^01 = 01 = i
| \----------------------- | t*j = 00^10 = 10 = j
1 | 1 i j k | t*k = 00^11 = 11 = k
| |
i | i -1 k -j | i*j = 01^10 = 11 = k
     |                               |    i*k = 01^11 = 10 = j
j | j -k -1 i |
| | j*k = 10^11 = 01 = i
k | k j -i -1 |
When we extend to traditional octonions, we have
Left Hand Octonion Unit Vector Multiplication Table
Prefactor Postfactor
| 1 i j k E I J K
| \-----------------------------------------------
1 | 1 i j k E I J K t = 000 Scalar
i | i -1 -k j I -E +K -J i = 001 Vector
j | j k -1 -i J -K -E I j = 010 Vector
k | k -j i -1 K J -I -E k = 011 Area
E | E -I -J -K -1 i j k E = 100 Scalar
I | I E K -J -i -1 k -j I = 101 Area
J | J -K E I -j -k -1 i J = 110 Area
K | K J -I E -k j -i -1 K = 111 Volume
Product base formed by XOR of two factors. Example i*K => 001^111 = 110 = J
Polarity (sign) determined separately.
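As a tiny illustration of this bookkeeping (my own code, not from the original notes), the basis index of a product really is just the XOR of the factor indices, with the sign looked up separately:

    #include <stdio.h>

    /* Octonion basis names indexed by the 3-bit encodings used above. */
    static const char *name[8] = { "1", "i", "j", "k", "E", "I", "J", "K" };

    int main(void)
    {
        unsigned a = 1;        /* i = 001 */
        unsigned b = 7;        /* K = 111 */
        unsigned p = a ^ b;    /* 001 ^ 111 = 110, the index of J */
        printf("%s * %s -> %s (sign determined separately)\n",
               name[a], name[b], name[p]);
        return 0;
    }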
The interesting interpretation of the above, seen in Clifford algebra and geometric algebra, is that the quaternion table above is *not* a four dimensional structure, but rather a two dimensional
structure, where the multiplication terms involving i and j give rise to an areal term k. In a similar fashion, complex numbers are really dealing with a one dimensional space, and octonions with a
three dimensional space. Getting a real space-time (four true dimensions) will require sedenions.
Knowing that the basis multiplication table can be encoded as binary numbers XORed together, and seeing the power of two number of solutions, I decided that I should examine the solutions as a
digital logic problem. Given that the basis logic was XOR based, I was pleased to find Reed-Muller XOR implementations of the sign or polarity logic found above.
Having found digital logic solutions for normed algebras in two, four and eight dimensions, my next target was 16 dimensions. Sedenions are known not to be normed. While I can find numerical special
cases where two integer sixteen-vectors and a sixteen-vector product satisfy the norm relations (based upon any integer being the sum of four squares), there is no general formula. Doing the sixteen
bit digital exercise, despite knowing success was unlikely, led to an interesting result. In my approach, I used one bit to determine an active-high/active-low default state for a term. I then used higher
order bits to determine participation of free variables in the sign of the term in the multiplication table. The interesting result is that while I had conflicting definitions for the active-high/active-
low default bit states, I had a consistent set of definitions for how the default state would be modified by 42 free variables.
This result has me very excited. I've always wanted to find a simple explanation for quantum superposition. My hope is to find a simple, mathematical analog to the ring oscillator or logic paradox,
where a feedback path with an odd number of inversions, coupled with a propagation delay through the logic, creates an inherently oscillating system. A multiplication table which is inherently
oscillatory gives rise to an oscillatory metric, which in turn justifies much of our experience with quantum multivalued weirdness. Philosophically, an inherently oscillating metric structure of
space is a good model for Planck scale quantum foam, and in the bigger picture, justification for 'free will', or non-predestination, on the quantum scale.
So, what do I know? I know that the base definition is inconsistent, and that there are flaws (inherent conflicts) in my model for the multiplication table logic. However, I also know that once a
suitable basis is defined, I can give 2^42 new variations on a successful basis.
My current task is to re-examine fundamentals of division algebras. I am re-evaluating the works by Hamilton, Cayley, Kirkman, Clifford, and other great mathematicians from the 1840-1890s, as well
as Sylvester (1867), Hadamard (1893) and Walsh (1923) in more recent times. My most recent influence is the geometric algebra interpretation of Clifford algebras by David Hestenes.
My current working assumptions are
• Complex numbers are associated with one dimensional spaces. There are three degrees of binary freedom associated with complex numbers, giving 8 (2^3) different comps.
• Quaternions are associated with two dimensional spaces. There are eight degrees of binary freedom, giving rise to 256 (2^8) different quads.
• Octonions are associated with three dimensional spaces. There are nineteen degrees of binary freedom, giving rise to 2^19 different octs.
• Each of the previous spaces has no ambiguity in the calculation of its structure constants.
• Sedenions are associated with four dimensional spaces. Seds and higher spaces fail to form traditional division algebras. The failure to form static division algebras at sedenions and beyond is
well known (Frobenius and Kirkman). One work around is the non-distributive approach of Albrecht Pfister, kindly demonstrated in code by Warren Smith. Another approach, that of J. R. Young
(1848), is to note that a product of 16 squares can be written as a sum of 32 squares (using octonions), which can be mapped into complex components. My new goal, is to use the complex definition
of Young, and identify 2^42 variations.
The general antisymmetric symbol is such an excellent tool.
The Antisymmetric Symbol
The Fibonacci numbers are deeply related to the Golden Ratio. Here is a simple proof of the closed form equation for the n'th Fibonacci number. This formula can be extended to negative integers, as
well as treated as a continuous function of n. This continuous function has many similarities to the force law of Roger Boscovich.
Fibonacci Numbers
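For reference, the closed form in question is Binet's formula (my own summary of the standard result, stated here under the usual conventions): with phi = (1 + sqrt(5))/2 and psi = (1 - sqrt(5))/2 = -1/phi,

    F(n) = ( phi^n - psi^n ) / sqrt(5)

Taking n negative gives F(-n) = (-1)^(n+1) F(n), and one common real-valued continuous extension replaces psi^n by cos(pi n)/phi^n, giving F(x) = ( phi^x - cos(pi x) phi^(-x) ) / sqrt(5).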
Every trajectory in space can be described by the curvature and torsion as a function of pathlength. Pathlength differentially measures deviation or distance from a point. Curvature differentially
measures deviation from a line. Torsion differentially measures deviation from a plane. The circle is a curve of constant curvature, while the helix has constant curvature and torsion. Extending to
four dimensions, we now have another curvature, "lift", which differentially measures lift out of a volume. The curve of constant curvature, torsion and lift is a trajectory on the surface of a
hypersphere, consisting of circulations at two different linear frequencies in orthogonal plane sets.
The Three Curvatures in Fourspace
Quaternions, Four Dimensional Spacetime, Frenet-Serret Equations with Vector Curvatures
Quaternion Toolbox
These notes show the derivation of the node coordinates for a tetrahelix, then look at whether tetrahedrons can form mathematically closed hoops.
Tetrahedral Coordinate Calculations
Tetrahelices are chiral structures made from tetrahedrons. I learned about these structures playing with my children's GeoMags.
Extending Classical Mechanics to Allow Acceleration and Jerk in Dynamical Potentials
Euler's Equations Extended
Angular Velocity and Angular Momentum from Different Points of View
Angular Momentum
Quantum Mechanics and Fourier Transforms
Quantum Mechanics
Tesla Coils as Transmission Lines
Telegrapher's Equations for Tesla Coil Transmission Lines
Transmission Lines, Reflection, and Terminations
Lab Notes on Coax Cable Reflections
My Retro-Linux project is at http://www.StaticRamLinux.com/ .
Due to spam, no e-mail address is listed. However, first name dot last name at domain is a fair bet. | {"url":"http://www.kurtnalty.com/","timestamp":"2014-04-21T02:01:56Z","content_type":null,"content_length":"52379","record_id":"<urn:uuid:1d996714-b4a5-44c5-8432-a84d82397117>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Motion matrices in homogeneous coordinates (need help)
Replies: 0
Posted: Jul 10, 1996 12:10 PM
I have to rotate some images. The usual rotation matrix in homogeneous
coordinates is
cos(angle) -sin(angle) 0
sin(angle) cos(angle) 0
0 0 1
In this case, the rotation center is (0,0) which is in the top left corner of
the image. I'd like the rotation center to be (a, b) and I tried to combine :
a translation (-a, -b), the rotation and a translation (a, b). This gives:
cos(angle) -sin(angle) a( cos(angle)-1 ) - bsin(angle)
sin(angle) cos(angle) b( cos(angle)-1 ) - asin(angle)
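For comparison, composing the three transforms directly, T(a,b) * R(angle) * T(-a,-b), gives the following translation entries (my own check, not part of the original post, so verify independently; note the signs differ from the matrix above):
cos(angle) -sin(angle) a( 1-cos(angle) ) + bsin(angle)
sin(angle) cos(angle) b( 1-cos(angle) ) - asin(angle)
0 0 1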
This doesn't seem to work! Could you help me? | {"url":"http://mathforum.org/kb/thread.jspa?threadID=13613","timestamp":"2014-04-17T15:45:42Z","content_type":null,"content_length":"14072","record_id":"<urn:uuid:275b19d4-f3dd-4ba1-98e6-229c3e4889ce>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
South Colby Science Tutor
Find a South Colby Science Tutor
...I have also had extensive classes with respect to the following subjects (all passed with B's or A's at an undergraduate and/or graduate institution): Genetics & Gene Regulation, Molecular &
Cell Biology, Developmental Biology, Microbiology, Virology, Genetics, Plant Physiology, Animal Behavior &...
25 Subjects: including genetics, chemistry, ACT Science, microbiology
...As a teacher, I identify what isn't yet understood and I work with patience and diligence until the student really truly understands the important concepts and techniques. I don't waste time. I
focus on the task at hand because I love to see it when understanding begins.I've been using algebra in my life and my studies for many years.
18 Subjects: including physical science, algebra 2, biology, chemistry
...I passed the military intelligence electronics specialist course quite quickly. I received a 3.7 or higher in my electrical engineering / electronic devices courses at Washington State
University. I have used electrical engineering principles in a variety of theoretical and practical applications: fluid flow, transport phenomenon, and electric devices.
62 Subjects: including ACT Science, biochemistry, physics, Spanish
...Technically put, it is the science of life which concentrates on the structure, function, distribution, adaptation, interactions, origins and evolution of living organisms, a grouping which
encapsulates both plants and animals. The principles of biology include the structure and function of the ...
8 Subjects: including chemistry, physics, biochemistry, zoology
...I look forward to sharing my love of science with you!I received my BS from the University of Nevada Reno with a major in biochemistry and a minor in biophysical organic chemistry. Although I
have only taken one biology course (as a prerequisite for my BS degree) and have no experience teaching ...
7 Subjects: including chemistry, flute, biology, organic chemistry | {"url":"http://www.purplemath.com/south_colby_science_tutors.php","timestamp":"2014-04-18T08:27:14Z","content_type":null,"content_length":"24057","record_id":"<urn:uuid:44796e13-5f13-4aba-b65d-ce231e2f6050>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00116-ip-10-147-4-33.ec2.internal.warc.gz"} |
Logic, Algebra and Truth Degrees
The conference "Logic, Algebra and Truth Degrees" will be held on 8-11 September 2008 in the College Santa Chiara, Siena.
Logic, Algebra and Truth Degrees is the first official meeting of the recently founded EUSFLAT Working Group on Mathematical Fuzzy Logic.
Mathematical Fuzzy Logic is a subdiscipline of Mathematical Logic which studies the notion of comparative truth. The assumption that "truth comes in degrees" has proved very useful in many areas of
Mathematics, Computer Science and Philosophy, both theoretical and applied.
The main goal of this meeting is to foster collaboration between researchers in the area of Mathematical Fuzzy Logic, and to promote communication and cooperation with members of neighbouring fields.
The featured topics include, but are not limited to, the following:
• Proof systems for fuzzy logics: Hilbert, Gentzen, natural deduction,tableaux, resolution, computational complexity, etc.
• Algebraic semantics: residuated lattices, MTL-algebras, BL-algebras, MV-algebras, Abstract Algebraic Logic, functional representation, etc.
• Game-theory: Giles games, Rényi-Ulam games, evaluation games, etc.
• First-order fuzzy logics: axiomatizations, arithmetical hierarchy, model theory, etc.
• Higher-order fuzzy logical systems: type theories, Fuzzy Class Theory, and formal fuzzy mathematics.
• Extended fuzzy logical systems: adding modalities or truth constants, "dynamification", evaluated syntax, etc.
• Philosophical issues: connections with vagueness and uncertainty.
• Applied fuzzy logical calculi: foundations of logical programming, logic-based reasoning about similarity, description logics, etc.
The conference scientific programme will include several invited lectures, contributed talks, and a round table for discussing the general directions of the area. Researchers whose interests fit in the general aims of the conference are encouraged to participate.
The meeting will be supported by the "Gruppo Nazionale per le Strutture Algebriche e Geometriche e loro Applicazioni" (GNSAGA) and by the "Istituto Nazionale di Alta Matematica" (INDAM) | {"url":"http://www.mathfuzzlog.org/latd2008/","timestamp":"2014-04-20T10:46:23Z","content_type":null,"content_length":"5753","record_id":"<urn:uuid:f0bf660b-8960-4cba-9886-31e06c599029>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hazen-Williams vs. Darcy (pipe sizing)
If we have a pipe system like in the figure below, where a pump exists in the system in order to deliver 200 gpm of water from tank "A", and we need to deliver 150 gpm from "C" and 50 gpm from "D":
usually we use the Hazen-Williams pipe flow chart in order to size each pipe according to its flow. From the chart we get the head loss per 100 feet of each pipe, and finally we calculate the head of
the pump according to the path that has the bigger head loss.
My question is: in the problem below, if we apply the Bernoulli equation between point "A" (the top of the tank) and point "C", and between point "A" and point "D", ignoring the dynamic energy effect
because it is small, we get that the head losses along these two paths are equal. But if we use the Hazen-Williams equation, putting in each path's flow rate, we get different head losses. Is there a
contradiction?
A-C Bernoulli equation: 0 pressure + 0 velocity = head loss through the path - head of pump + height between the top of the tank and point C,
so the head loss through the path ABC is equal to the head of the pump plus the height between the top of the tank and point C.
Now, from the A-D Bernoulli equation we will get the same head loss. But using the Hazen-Williams equation, each path has a different head loss. Is there a contradiction or not?
The Hazen-Williams is an empirical formula and doesn't have a theoretical basis. It is only meant to be used for flow in a single pipe, for a specific range of pipe diameters. It cannot be used to analyze a parallel pipe network.
EDIT: I should probably clarify that H-W can be used for estimating the head loss instead of Darcy during a network analysis, but you cannot just simply compare the head losses for two single pipes.
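To put numbers on the discrepancy the original poster is asking about, here is a quick sketch (my own code, not from the thread; it uses the common SI form of the Hazen-Williams formula, and the pipe length, diameter, and roughness coefficient C are made-up values for illustration):

    #include <stdio.h>
    #include <math.h>

    /* Hazen-Williams head loss (SI form):
       h_f = 10.67 * L * Q^1.852 / (C^1.852 * D^4.87)
       with L and D in meters and Q in cubic meters per second. */
    double hw_loss(double L, double Q, double C, double D)
    {
        return 10.67 * L * pow(Q, 1.852) / (pow(C, 1.852) * pow(D, 4.87));
    }

    int main(void)
    {
        double L = 30.0, C = 130.0, D = 0.05;   /* hypothetical pipe */
        double q_BC = 150.0 * 6.309e-5;         /* 150 gpm in m^3/s */
        double q_BD =  50.0 * 6.309e-5;         /*  50 gpm in m^3/s */
        printf("h(B->C) = %.2f m, h(B->D) = %.2f m\n",
               hw_loss(L, q_BC, C, D), hw_loss(L, q_BD, C, D));
        return 0;
    }

As expected, the two single-pipe losses differ when the flows differ; the network constraint (same energy state at the outlets) is what the formula alone does not capture.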
The head loss in each branch will be the same since the fluid ends up at the same energy state at the outlet node (atmosphere in this case). | {"url":"http://www.physicsforums.com/showthread.php?t=398187","timestamp":"2014-04-20T11:25:58Z","content_type":null,"content_length":"46554","record_id":"<urn:uuid:820053c9-a628-4dbb-8c37-8c38c58eab2b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matrix initialization in Library mode
Mikael Johansson on Tue, 02 Mar 2004 13:08:12 +0100
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
Matrix initialization in Library mode
• To: pari-users@lists.cr.yp.to
• Subject: Matrix initialization in Library mode
• From: Mikael Johansson <mik@math.su.se>
• Date: Tue, 2 Mar 2004 12:59:49 +0100 (CET)
• Delivery-date: Tue, 02 Mar 2004 13:08:13 +0100
• Mailing-list: contact pari-users-help@list.cr.yp.to; run by ezmlm
For a homology calculation application I am building, I need to calculate
matrix ranks over any characteristic. I decided that the Pari library
seemed likely to help me do just this, and so I started my coding and construction.
Now I've reached the point where I need to start involving the Pari
library, and promptly run into slight problems. I need to initialize a
(probably rather sparse) matrix with entries in {-1, 0, 1} for use with
either the keri or ker_mod_p library calls. The users guide gives in
section 4.3 an example on how to create a matrix by using the cgetg
command recursively - first for the root, and then for the columns of the
matrix; but on the other hand, the entries in section 4.5 specifying the
handling of t_MAT and t_COL state that these are introduced for specific
GP use and recommend that one uses standard malloced C matrices when
programming in library mode.
Is there a way to build the matrix in malloced memory and then introduce
it to Pari for use of the rank calculation functions? Or should I ignore
the comment in section 4.5? The code example with cgetg allocated 4 t_COLs
of size 5 as the initialization for a 4x3 matrix - why 5?
I obviously miss certain parts of the programming philosophy, and await
enlightenment hopefully.
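For what it's worth, a minimal sketch of building a t_MAT directly with cgetg (my own code against the standard libpari entry points, so treat it as a sketch rather than gospel). The '5' in the manual's example is because cgetg's length argument counts the codeword slot at index 0, so a column holding 4 entries needs length 4+1 = 5:

    #include <pari/pari.h>

    int main(void)
    {
        long m = 4, n = 3, i, j;
        pari_init(8000000, 500000);

        GEN M = cgetg(n + 1, t_MAT);         /* n columns + 1 codeword slot */
        for (j = 1; j <= n; j++) {
            GEN col = cgetg(m + 1, t_COL);   /* m rows + 1 codeword slot    */
            for (i = 1; i <= m; i++)
                gel(col, i) = stoi((i + j) % 3 - 1);  /* entries in {-1,0,1} */
            gel(M, j) = col;
        }
        pari_printf("rank = %ld\n", rank(M));
        pari_close();
        return 0;
    }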
Mikael Johansson | To see the world in a grain of sand
mik@math.su.se | And heaven in a wild flower
| To hold infinity in the palm of your hand
| And eternity for an hour | {"url":"http://pari.math.u-bordeaux.fr/archives/pari-users-0403/msg00000.html","timestamp":"2014-04-20T08:19:07Z","content_type":null,"content_length":"4776","record_id":"<urn:uuid:df9f65b8-1430-417e-b7e6-5c3e7d443e22>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the Proof of a Theorem of Pálfy
Pálfy proved that a group $G$ is a CI-group if and only if $\vert G\vert = n$ where either $\gcd(n,\varphi(n)) = 1$ or $n = 4$, where $\varphi$ is Euler's phi function. We simplify the proof of "if $\gcd(n,\varphi(n)) = 1$ and $G$ is a group of order $n$, then $G$ is a CI-group".
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v13i1n16/0","timestamp":"2014-04-18T15:38:29Z","content_type":null,"content_length":"13860","record_id":"<urn:uuid:8b2b9440-61b2-40cd-8f25-7947d05a7f05>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sausalito Calculus Tutor
Find a Sausalito Calculus Tutor
...I have been active with the Boy Scouts for 20 years, and counseled scouts in several merit badge categories, including Personal Fitness, Computing, and Citizenship. I spent three semesters
working with Junior Achievement in the Palo Alto and Mountain View School Districts.I have experience in de...
39 Subjects: including calculus, English, chemistry, reading
...Precalculus is the gate to calculus and provides a comprehensive view of the behavior of polynomials, rational functions, and trigonometric functions. I tutor high school, college, and
university students on an almost daily basis in Precalculus. I emphasize the understanding because it will not...
41 Subjects: including calculus, geometry, statistics, algebra 1
...Teaching math and physics is exciting for me because I am passionate about these subjects and enjoy sharing that passion with my students. I find that many students shy away from the core
concepts in math and physics, preferring instead to learn only the specific problems they are assigned. This can result in the student becoming confused when confronted with a new problem.
25 Subjects: including calculus, physics, algebra 1, statistics
...I can't wait to work together! :) I tutored a Cal undergrad in introductory Statistics last Spring. This undergrad had dropped the course in the Fall due to a failing grade after the first
midterm. After regularly working with me during the Spring semester, my undergrad tutee ended up with an A in the Statistics class.
27 Subjects: including calculus, chemistry, physics, geometry
...Basic logical reasoning and methods of proof (deductive, indirect, and possibly mathematical induction) are also great parts of the class. Since geometry works on sets of points that can be
easily understood visually, and thus many geometric theorems are immediately visually obvious, it is in a ...
14 Subjects: including calculus, physics, statistics, geometry | {"url":"http://www.purplemath.com/sausalito_calculus_tutors.php","timestamp":"2014-04-16T19:11:06Z","content_type":null,"content_length":"24120","record_id":"<urn:uuid:15ad2a87-003b-4a9e-8b00-e178940ecbbb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
#include#define SIZE 10 Int Main() { Int A[SIZE] ... | Chegg.com
    #include <stdio.h>
    #define SIZE 10

    int main()
    {
        int a[SIZE] = { 2, 6, 4, 8, 10, 12, 89, 68, 45, 37 };
        int i, pass, hold;

        printf("Data items in original order\n");
        for (i = 0; i <= SIZE - 1; i++)
            printf("%4d", a[i]);

        for (pass = 1; pass <= SIZE - 1; pass++)
            for (i = 0; i <= SIZE - 2; i++)
                if (a[i] > a[i + 1]) {
                    hold = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = hold;
                }

        printf("\nData items in ascending order\n");
        for (i = 0; i <= SIZE - 1; i++)
            printf("%4d", a[i]);
        printf("\n");
        return 0;
    }

Output:
Data items in original order
2 6 4 8 10 12 89 68 45 37
Data items in ascending order
2 4 6 8 10 12 37 45 68 89

Note: Compares a[0] to a[1], then a[1] to a[2], then a[2] to a[3], and so on until it completes the pass by comparing a[8] to a[9]. Note that although there are 10 elements, only 9 comparisons are performed per pass.
IMPORTANT: Large values move down the array many positions on a single pass; small values may move up only one position per pass. On the first pass the largest value is guaranteed to sink to the bottom element of the array, a[9]. On the second pass, the second largest value is guaranteed to sink to a[8]. On the ninth pass the ninth largest value sinks to a[1], so the smallest value is in a[0].
Problem - from Deitel and Deitel, Chapter 6, page 239, No. 6.11: The bubble sort presented is inefficient for large arrays. Make the following simple modifications to improve the performance of the bubble sort. First modify the program so the data values are entered by a user, then do the following:
a. After the first pass, the largest number is guaranteed to be in the highest-numbered element of the array; after the second pass, the two highest numbers are "in place", and so on. Instead of making nine comparisons on every pass, modify the bubble sort to make eight comparisons on the second pass, seven on the third pass, and so on.
b. The data in the array may already be in the proper order or near-proper order, so why make nine passes if fewer will suffice? Modify the sort to check at the end of each pass if any swaps have been made. If none has been made, then the data must already be in the proper order, so the program should terminate. If swaps have been made, then at least one more pass is needed.
c. Modify the program to print out how many passes it took to sort the list.
Computer Science | {"url":"http://www.chegg.com/homework-help/questions-and-answers/include-define-size-10-int-main-int-size-2-6-4-8-10-12-89-68-45-37-int-pass-hold-printf-da-q3353450","timestamp":"2014-04-25T05:22:30Z","content_type":null,"content_length":"23827","record_id":"<urn:uuid:b01391cd-5609-49c7-970b-71446e65e089>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Building Better Problem Solvers: One Step at a Time
Aubrey has a shelf full of books.
• Exactly 1/3 of the books on the shelf are mysteries.
• Aubrey has read 10 of the mysteries on the shelf.
• The number of mysteries Aubrey has read is greater than 1/5
of the number of mysteries on the shelf, and less than 1/4
of the number of mysteries on the shelf.
Which could be the number of books on the shelf?
a. 120
b. 142
c. 147
d. 150
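(A quick worked check of the options, my own note rather than part of the original post: with N books there are N/3 mysteries, and the fraction clues require

    (1/5)(N/3) < 10 < (1/4)(N/3), which means 120 < N < 150.

Only 142 and 147 fall in that range, and only 147 is divisible by 3; with 49 mysteries, 49/5 = 9.8 < 10 < 12.25 = 49/4, so the answer is c.)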
Most people read a problem like this and their initial reaction is, "WHAT?" Teachers read this problem and the first thing they think is, "How the heck am I going to break this down so my students
can solve it?"
Solving open-ended, higher order math problems is messy business for a lot of reasons. First of all, these kinds of problems really highlight the range of abilities in a classroom. You present this
type of problem to a class and some kids have the answer before you've even finished reading the problem, and other kids will stare at the paper for as long as you leave it in front of them because
they haven't the foggiest idea where to even start. Then, there's the fact that by the very nature of their design, these problems are not cut and dry. There may be only one correct solution, but
there can be as many strategies and methods used to get to that answer as there are students in your class.
As teachers, we know it is our responsibility to scaffold instruction for students and gradually release responsibility for learning to them, with autonomy being the ultimate goal. This is easier
said than done in the best of circumstances, but can seem impossible when you have 20-30 students with a wide range of cognitive abilities and different learning styles who have all been given a
problem that is intended to stretch their understanding and push them to notice subtle patterns and relationships.
It's no surprise that teachers get intimidated by higher order, open-ended math word problems. The problems are HARD, and they're so unpredictable. I've always struggled with finding the best way to
scaffold open-ended problems for my own students. For most of them, solving higher-order math problems is a battle, but I am bound and determined to arm them with as many weapons as possible so they
can be victorious.
I am pleased to announce that I have finally found a problem-solving template that is working in Room 202. At least, it gives all my students a common starting point and a reliable framework for
dismantling these complex problems into smaller components that they can tackle incrementally. We have been working with the template all year, and I have seen some measurable growth in most of the
children's problem solving skills. My revised version of the template looks like this:
We solve problems like the "Aubrey" problem on Fridays in Room 202. The problems we work on are aligned to whatever eligible content we are covering in math that week. Initially, the lessons were
entirely teacher-led and featured a lot of me "thinking aloud" at the SMARTBoard. At this point, we only work on Step 1 together as a class. After we have read and scrutinized the problem carefully,
my students now work through Steps 2-5 independently. There are still several students who are not able to move past Step 2 on their own. I provide very targeted, explicit 1:1 instruction for the
students who still need it, as I circulate during problem solving time.
I have also developed a rubric for measuring my students' implementation of this problem solving platform. The rubric is tailored to the steps on the template and it looks like this:
My goal is to practice these problem solving strategies with my students frequently enough that they become automatic for them. (I do see the kids underlining the question and circling key
information in other classwork problems, so I know there has been some transfer.) Ultimately, I want my kids to feel confidence rather than intimidation when they read an open-ended math problem. I
want them to intuitively apply the strategies we have practiced together so they can systematically get to the point of what the problem is asking, make a plan for how to answer that question, and be
able to explain why they did what they did. Sounds simple enough, but we all know it's NOT! It's actually about as complicated as it gets when it comes to math instruction, and often seems utterly
impossible, but I refuse to throw in the towel. It's when the work is the hardest, that our students need us the most, and this is really a life skill the kids need.
Problem solving is a fixture in life, and it is my goal as an educator to prepare my students for LIFE. Problems pop up every day. Sometimes they are small and sometimes they are large. You run into
problems every day, from flat tires to a failing product line at work. Sometimes solving a problem is a matter of life and death, and other times it is merely a matter of keeping your sanity.
Regardless of why we need to use problem solving, we can not deny that we do need it. There is also no denying that the best problem solvers become the most successful and productive citizens, and
that's ultimately what I want for the kids I teach.
If you want your students to be good problem solvers, too, you can get my problem solving template and rubric along with 15 problems (and answers) aligned to the fifth grade Common Core Math
Standards in my TpT store. Click
if you'd like them for your classroom. | {"url":"http://new-in-room-202.blogspot.com/2014/01/building-better-problem-solvers-one.html","timestamp":"2014-04-21T04:32:14Z","content_type":null,"content_length":"84515","record_id":"<urn:uuid:cdb63be0-9e9a-4373-aa7d-6505b8c09277>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Characterizing the NP-PSPACE Gap in the Satisfiability Problem for Modal Logic
Joseph Halpern, Leandro Rego
There has been a great deal of work on characterizing the complexity of the satisfiability and validity problem for modal logics. In particular, Ladner showed that the satisfiability problem for all
logics between K and S4 is PSPACE-hard, while for S5, it is NP-complete. We show that it is the negative introspection axiom that causes the gap: if we add this axiom to any modal logic between K and S4,
then the satisfiability problem becomes NP-complete. Indeed, the satisfiability problem is NP-complete for any modal logic that includes the negative introspection axiom.
Subjects: 9.2 Computational Complexity
Submitted: Oct 15, 2006
| {"url":"http://www.aaai.org/Library/IJCAI/2007/ijcai07-371.php","timestamp":"2014-04-20T21:10:03Z","content_type":null,"content_length":"2603","record_id":"<urn:uuid:868fa546-4326-4217-8f8b-a89a54542d29>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
A complex version of the baer-krull theorems
Puente Muñoz, Maria Jesus de la (2000) A complex version of the baer-krull theorems. Communications in Algebra, 28 (8). pp. 3727-3737. ISSN 0092-7872
Official URL: http://www.informaworld.com/smpp/title~content=t713597239
The Baer-Krull theorems deal with the relationship between orderings of a valued field compatible with the valuation, and orderings of the residue class field. For these theorems it is necessary that
the valuation ring should be convex with respect to the ordering.
For a real field R and an extension K of R, the author defines SpecC(K/R) in terms of equivalence classes of embeddings of K over R into an algebraic closure C of K. This is done in such a way that
when K is also real, the points of SpecC(K/R) correspond to the orderings of K
over R. Given a point of SpecC(K/R), the author extends the definition of convexity to subsets of K (again this is the usual definition when K is real).
Now let R be real, K be an extension of R, and B be a valuation ring in K. Let K' be the residue class field of B, and suppose that R is a real subfield of K'. The author studies relations between
SpecC(K/R) and SpecC(K'/R). In particular, it is shown that there is a lifting of each element of SpecC(K'/R) to an element of SpecC(K/R), compatible with the valuation, and such that the lifting has the
generalised convexity property.
While a more elementary treatment of this result is possible if R = Q, for general R the proof involves model theory in a nontrivial way.
Item Type: Article
Uncontrolled Keywords: Real spectrum; Complex spectrum; Involution; Residually real; Valuation ring; Canonical place
Subjects: Sciences > Mathematics > Algebra
ID Code: 12795
Deposited On: 01 Jun 2011 11:33
Last Modified: 01 Jun 2011 11:33
| {"url":"http://eprints.ucm.es/12795/","timestamp":"2014-04-18T15:48:27Z","content_type":null,"content_length":"26432","record_id":"<urn:uuid:87ee9933-4161-4301-b18c-d92f0883faa5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
y = –x + 2 y = 3x – 1 (1 point)
when x =0 what does y=?
@umer247 Sorry bud you can't use a computer during tests.
1 idk
y = –(0) + 2 what is y?
add them together
@TuringTest @amistre64
what i have to do? do i find out what is what is x and y?
The thing is that you need two point to graph. (x,y) So pick two points (1,y) (0,y) You plug in x=1 and find y You plug in x=0 and find y Plot them and draw a line between them
please answer i have short time
Why are you taking a test?
its a assement and i am behind
lesson 1 unit 6
@kelly 226
ill help with spanish if you get me an a
hey did you do lesson1 unit 6
Um no. I'm really behind in algraba :( | {"url":"http://openstudy.com/updates/50ad33d2e4b0e906b4a559f6","timestamp":"2014-04-17T03:56:15Z","content_type":null,"content_length":"88874","record_id":"<urn:uuid:8bdfb389-7d5a-4db3-8976-082e96dd2ccb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Fort Mcdowell Statistics Tutor
Find a Fort Mcdowell Statistics Tutor
For the past five years, I have been teaching statistics or accounting at two community colleges. In addition, I am the manager (and tutor) of a tutoring center at a community college that only
tutors students enrolled in statistics or accounting. I have spent thousands of hours tutoring students one-on-one, in statistics and accounting, since I started tutoring in 2008.
2 Subjects: including statistics, accounting
...Each student's tutoring should be specially designed to suit what they need. For example, I will work to isolate on your student's difficulty in a subject, so that tutoring and hours are not
spent where they are not needed. I hold a State of AZ Department of Public Safety Level One Fingerprint Clearance Card.
27 Subjects: including statistics, chemistry, reading, calculus
Hi! My name is Kay. I offer individual or group tutoring for nursing, writing, math and various statistics courses - health care, psychology, research, bio, and business.
10 Subjects: including statistics, nursing, public speaking, study skills
...Students in my test prep classes are pleased with their results and often refer friends and family members to my sessions. I have taught probability and statistics for over 12 years. I have
tutored students one-on-one in the areas of probability and statistics.
15 Subjects: including statistics, calculus, geometry, algebra 1
...I earned A's in both courses and passed the comprehensive exam required to obtain my graduate degree - M.S. in Mathematics. I worked with students with behavioral problems including ADHD and
tutored them while working with therapeutic foster children. I also tutored for the Math Literacy program for grade school students.
27 Subjects: including statistics, chemistry, calculus, algebra 1 | {"url":"http://www.purplemath.com/fort_mcdowell_statistics_tutors.php","timestamp":"2014-04-19T23:59:05Z","content_type":null,"content_length":"24145","record_id":"<urn:uuid:8c18ac94-9cbc-4d23-b708-5d0c946ac536>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westvern, CA Algebra 1 Tutor
Find a Westvern, CA Algebra 1 Tutor
...I also tutored a number of pupils in the 6-12 years range in English and Spanish. The biggest strengths that I would bring to the post are good organizational skills to plan activities and
excellent communications skills, including listening to the needs of the students. I would like to be your...
15 Subjects: including algebra 1, Spanish, geometry, GED
...To ensure that students have a quite and comfortable study space in which to work with me, I have a tutoring office in View Park-Windsor Hills, where I teach private lessons and group classes.
I am happy to travel to another location if that is more convenient for the student. My strength as a ...
12 Subjects: including algebra 1, reading, writing, geometry
...Furthermore, my educational background and work experience lends me to assist with college applications, admission essays and interviews. Additionally, I speak in front of hundreds to
thousands of college students every summer. If you need help with public speaking then I can offer my expertise.
25 Subjects: including algebra 1, reading, English, ASVAB
...I have also enjoyed teaching English to middle-school, college, and adult students seeking professional advancement from many parts of the world privately, in language institute settings, and
abroad. German: Having attended German universities for three years, taught in German public schools fo...
15 Subjects: including algebra 1, reading, English, German
My name is Jacquelyn and I am an experienced tutor who cares deeply about the education of my clients. I bring years of in-classroom experience to every tutoring session. I received my Bachelors
Degree at Georgetown University in English Literature.
28 Subjects: including algebra 1, English, reading, writing
Related Westvern, CA Tutors
Westvern, CA Accounting Tutors
Westvern, CA ACT Tutors
Westvern, CA Algebra Tutors
Westvern, CA Algebra 2 Tutors
Westvern, CA Calculus Tutors
Westvern, CA Geometry Tutors
Westvern, CA Math Tutors
Westvern, CA Prealgebra Tutors
Westvern, CA Precalculus Tutors
Westvern, CA SAT Tutors
Westvern, CA SAT Math Tutors
Westvern, CA Science Tutors
Westvern, CA Statistics Tutors
Westvern, CA Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Broadway Manchester, CA algebra 1 Tutors
Cimarron, CA algebra 1 Tutors
Dockweiler, CA algebra 1 Tutors
Dowtown Carrier Annex, CA algebra 1 Tutors
Foy, CA algebra 1 Tutors
Green, CA algebra 1 Tutors
La Tijera, CA algebra 1 Tutors
Lafayette Square, LA algebra 1 Tutors
Miracle Mile, CA algebra 1 Tutors
Pico Heights, CA algebra 1 Tutors
Preuss, CA algebra 1 Tutors
Rimpau, CA algebra 1 Tutors
View Park, CA algebra 1 Tutors
Wagner, CA algebra 1 Tutors
Windsor Hills, CA algebra 1 Tutors | {"url":"http://www.purplemath.com/Westvern_CA_algebra_1_tutors.php","timestamp":"2014-04-19T04:52:06Z","content_type":null,"content_length":"24289","record_id":"<urn:uuid:da9313db-f79e-4e69-81e6-8e4d7314a664>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Capitol Heights Trigonometry Tutor
Find a Capitol Heights Trigonometry Tutor
...I began tutoring more officially while earning my bachelor's degree (in Classics, with significant additional coursework in music theory and history), when I helped fellow undergraduates
through the university's bureau of study counsel as a paid peer tutor in Latin and calculus. Since graduating...
18 Subjects: including trigonometry, writing, calculus, geometry
...I've read multiple strategy books and am currently ranked #418/1364 on the itsyourturn.com chess ladder. I have been a Christian my entire life, and I've been studying the Bible since I could
read. I have read commentaries, and have memorized over 100 verses.
27 Subjects: including trigonometry, calculus, physics, geometry
...With almost 1500 hours of experience tutoring math subjects on WyzAnt, which includes elementary math as well as middle school, high school, and college level math courses, I am well qualified
to tutor the math portion of the exam. I have successfully tutored several students recently for ISEE, ...
33 Subjects: including trigonometry, reading, GRE, geometry
...My lessons are fun and creative but most importantly, I try to show how Mathematics is relevant. I believe in teaching the content as well as instilling confidence so students can become
successful, life-long learners. I am a regional instructor for Texas Instruments and have expertise in the technology used in your student's classroom.
14 Subjects: including trigonometry, geometry, SAT math, algebra 2
...I strive to establish an intellectual connection with the student, regardless of age or background. Each student is an individual who learns most effectively at his/her own pace. I am very
sensitive to finding that pace for each student and working at the pace.
10 Subjects: including trigonometry, Spanish, calculus, geometry | {"url":"http://www.purplemath.com/capitol_heights_md_trigonometry_tutors.php","timestamp":"2014-04-20T16:05:20Z","content_type":null,"content_length":"24597","record_id":"<urn:uuid:fc9c40dd-c491-4e96-8fe1-070effe472b0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
Wheel Industries is considering a three-year expansion project. The project requires an initial investment of $1.5 million. The project will use straight-line depreciation method. The project has no
salvage value. It is estimated that the project will generate additional revenues of $1.2 million per year and has annual costs of $600,000. The tax rate is 35 percent. Calculate the cash flows for
the project. If the discount rate were 6 percent, calculate the NPV of the project, then offer your thoughts as to whether this is an economically acceptable project to undertake. Clinton Co. has jus
After tax cash flow per year = (1,200,000 - 600,000) * (1-0.35) = 390,000 Value of the project = (390,000) / 0.06 = $6,500,000 NPV = 6,500,000 - 1,500,000 = $5,000,000
The npv would be 390/1.06 + 390/1.06^2 + 390/1.06^3
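For comparison, here is a worked version that includes the straight-line depreciation tax shield, which both replies above leave out (my own calculation under the usual textbook treatment, so check it against your course's conventions). Depreciation is 1,500,000 / 3 = 500,000 per year, so the annual operating cash flow is

    OCF = (1,200,000 - 600,000 - 500,000)(1 - 0.35) + 500,000 = 565,000

    NPV = -1,500,000 + 565,000 (1/1.06 + 1/1.06^2 + 1/1.06^3)
        ≈ -1,500,000 + 565,000 (2.6730) ≈ +10,250

With the tax shield counted, the project is just barely positive-NPV at a 6 percent discount rate.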
| {"url":"http://openstudy.com/updates/50a17fece4b0e22d17ef1388","timestamp":"2014-04-21T12:27:34Z","content_type":null,"content_length":"30566","record_id":"<urn:uuid:a7e29f2f-6ae4-41a5-9409-febf45ba7bda>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Pearland Algebra Tutor
Find a Pearland Algebra Tutor
...My success as a tutor is very simple. I am not just concerned with you passing the test but I want you to master the subject. If you are in need of Math and Science Tutoring, please contact me.
22 Subjects: including algebra 1, algebra 2, chemistry, geometry
...I have graded, reviewed and tutored for the test for over 10 years. Over 100 students were able to do really well on it under my guidance and I am very excited to provide support for anyone who
needs help in it. I have been teaching how to prepare students for TAKS for over 10 years.
13 Subjects: including algebra 1, algebra 2, geometry, Russian
...While math is my first love, I've also excelled at writing and written communication in general. As a professionally successful tutor with a number of certifications and a client base ranging
from elementary school to college years, I work well with students of all ages. I am a tutor first and foremost but am working towards a degree in Mathematics.
14 Subjects: including algebra 2, algebra 1, reading, calculus
...In a more professional atmosphere, I have helped students achieve their academic goals through tutoring on a high school and college level. That being said, there is a certain thrill that I
have when I see someone comprehend a subject that was previously foreign to them. I have seen the look regarding subjects from an elementary level to a collegiate level.
38 Subjects: including algebra 2, algebra 1, reading, chemistry
My enthusiasm for tutoring stems from a lifelong journey in education. As a National Achievement Finalist, magna cum laude graduate, master's degree recipient and PhD candidate, I am fully aware
of the value of hard work and education. Throughout my education, I have tutored many students and served as a teaching assistant for multiple classes.
18 Subjects: including algebra 1, algebra 2, chemistry, reading | {"url":"http://www.purplemath.com/Pearland_Algebra_tutors.php","timestamp":"2014-04-16T04:59:41Z","content_type":null,"content_length":"23834","record_id":"<urn:uuid:db4cf14c-742d-4a04-b8f0-50d1741b5d77>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the sin of Pi/4?
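For reference, the direct answer to the title question: sin(pi/4) = sqrt(2)/2 ≈ 0.7071, since pi/4 radians is 45 degrees, where sine and cosine are equal.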
• What is the difference between an Episcopalian church and a Catholic church ?
For instance, Catholics believe that the immaculate conception refers to Mary, that her mother conceived without sex so that Mary would be free from original sin. We don't believe ... family and
I laughed, the more pi$$ed off she became!
• The 7 deadly sins of IT management
"It's all because of greed." IT sin No. 4: Slothful approaches to IT IT professionals work hard -- that's a given. But all too often, they're unwilling to step outside their comfort zones or go
the extra mile. Despite their hard work, IT managers often ...
• The 7 Sins Of Replatforming
4. No matter how much better your new site is than your old one, there’s going to be someone who doesn’t like it Before this redesign, our web site had been pretty much the same for around two
years. Our customers were used to knowing where things ...
• Sin is sin regardless of whether you know it
Everything is uncovered and laid bare before the eyes of him to whom we must give account" (Hebrews 4:13). I want you to understand, however, why God is displeased when we sin. First, God knows
that whenever we sin, we are hurting ourselves (as well as ...
• Is This Reporter’s Crazy Basketball Shot Real?
This gives a vertical change in height of 4 feet. (1.2 meters) The time of flight for the ball ... In y-equation, it has the sin of θ but I have cosine. Look at this fake triangle. See, this
triangle has the same value for cosine of θ. | {"url":"http://answerparty.com/question/answer/what-is-the-sin-of-pi-4","timestamp":"2014-04-20T05:45:27Z","content_type":null,"content_length":"18398","record_id":"<urn:uuid:19e02be3-16f0-4cf4-b80f-315ae7cbb40a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00174-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Metric System
Why to use the metric system
1. Every other country but America uses it.
2. Most people who use the British Standard Engineering system don't know many of its units: rods, hogsheads, barrels, slugs, etc.
3. Scientists use it exclusively.
I could go on. The Metric or SI system of units has a couple other advantages.
1. Converting from unit to unit requires only powers of ten (10s,100s,1000s) to convert.
2. It is based on standards that are found in nature readily.
The basis of the system
Did you know?
The Earth is very nearly 20 million meters from pole to pole along its surface, and 40 million meters around following this path at sea level. The meter was originally defined to make this exactly so.
If you take a box that is 1/10th of a meter on each side and fill it to the brim with water, you will be looking at a Liter of water.
If you hold this same box and neglect the mass of the box you are holding a kilogram of water.
If you build a pendulum 1 meter long and let it swing, each one-way swing takes very nearly one second (the full back-and-forth period is about two seconds).
These were some of the original standards used by the French Système International d'Unités (SI). These standards all have their own problems: warps in Earth's crust, microfluctuations in gravity,
the temperature and pressure dependence of water's density, and so on. These problems have been fixed by redefining the standards in terms of light, but the examples above came first.
Thanks to soft drink manufacturers you have held 2 Liters and 2 kilograms in your hand. Perhaps you've even held a 1 Liter soda at some point. A meter is a little more than 3 feet. The distance of a
doorknob to the ground.
The prefixes
The Metric System is governed by a set of prefixes that are the same for every base unit (giga- for 10^9, mega- for 10^6, kilo- for 10^3, milli- for 10^-3, micro- for 10^-6, nano- for 10^-9, and so on). There are 6,000,000,000 people on Earth; that would be 6 gigapeople, or 6 Gpeople.
Thanks to memory storage devices for computers most people know that a Gigabyte of memory is more than a Megabyte. The others you may not have used, yet.
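Because the prefixes are pure powers of ten, a conversion is a single multiplication. A small sketch (my own code, not part of the original tutorial):

    #include <stdio.h>
    #include <math.h>

    /* Convert a value between two SI prefixes given their powers of ten,
       e.g. kilo = 3, milli = -3, giga = 9. */
    double convert(double value, int from_exp, int to_exp)
    {
        return value * pow(10.0, from_exp - to_exp);
    }

    int main(void)
    {
        /* 2.5 kilometers expressed in millimeters: 2.5 * 10^(3-(-3)) */
        printf("%.0f mm\n", convert(2.5, 3, -3));   /* prints 2500000 */
        return 0;
    }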
The internet can do it for you
Many online unit-conversion calculators will do these conversions for you.
Check your work with one if you want to. | {"url":"http://www.sophia.org/tutorials/the-metric-system--2","timestamp":"2014-04-19T11:57:30Z","content_type":null,"content_length":"49296","record_id":"<urn:uuid:5057aed7-de67-4983-a469-987d9df8b72b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating a Z-factor to assess the quality of a screening assay.
Frequently Asked Questions
FAQ# 1153 Last Modified 27-September-2010
When developing (or assessing) an assay to test the effectiveness of various drugs, you want to quantify how well the assay works. One way to do this is via the signal to noise ratio, but this
doesn't really capture what you want to know. Zhang and colleagues developed a method to quantify the quality of an assay (1).
The figure above (Figure 4 from Zhang, with a few extra labels) defines the separation band. The horizontal axis shows the value determined by the assay. The vertical axis shows how commonly each
value occurs. The graph shows data for both negative and positive controls. The idea is simple:
• Virtually all the background values will be less than a threshold defined as the mean of the background values plus three times the standard deviation of those values. If the values come from a
Gaussian distribution, you expect 99.86% of the values to be less than that threshold and 0.14% of the values to be greater than that threshold.
• Similarly, you expect virtually all of the true "hits" to have values greater than a threshold set by the mean of the positive controls minus three times the standard deviation of those values.
• The separation band is the difference between those two thresholds.
• The dynamic range of the assay is defined as the difference between the means of the negative and positive controls.
• Comparing the lengths of the separation band and dynamic range tells you about how well the assay works.
Zhang and colleagues (1) defined Z as the result of the following calculations.
1. Compute the threshold value for negative controls as the mean signal of the negative controls plus three times their standard deviation.
2. Compute the threshold value for positive controls as the mean signal of the positive controls minus three times their standard deviation.
3. Compute the difference between the two thresholds and call it the 'separation band' of the assay, S. If the threshold computed in step 1 is less than the one computed in step 2, then this
difference is positive. Otherwise, this difference will have a negative value.
4. Compute the absolute value of the difference between the two means and call it the 'dynamic range' of the assay, R.
5. Compute the Z as S/R
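A small sketch of those five steps in code (my own, assuming you already have the four summary statistics; the numbers in main are hypothetical):

    #include <stdio.h>
    #include <math.h>

    /* Z-factor per Zhang et al.: separation band S over dynamic range R. */
    double z_factor(double mean_neg, double sd_neg, double mean_pos, double sd_pos)
    {
        double t_neg = mean_neg + 3.0 * sd_neg;   /* step 1: negative-control threshold */
        double t_pos = mean_pos - 3.0 * sd_pos;   /* step 2: positive-control threshold */
        double S = t_pos - t_neg;                 /* step 3: separation band */
        double R = fabs(mean_pos - mean_neg);     /* step 4: dynamic range */
        return S / R;                             /* step 5 */
    }

    int main(void)
    {
        /* hypothetical summary statistics for illustration */
        printf("Z = %.2f\n", z_factor(100.0, 10.0, 500.0, 20.0)); /* (440-130)/400 = 0.78 */
        return 0;
    }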
To interpret the Z-factor, use these guidelines (direct from Zhang's paper).
• A Z-factor of 1 is ideal. An assay can never have a Z-factor of exactly 1.00000. This value is approached when you have a huge dynamic range with tiny standard deviations. In this situation, the separation
band is almost as long as the dynamic range. Z-factors can never be greater than 1.0.
• A Z-factor between 0.5 and 1.0 is an excellent assay.
• A Z-factor between 0 and 0.5 is marginal.
• A Z-factor less than 0 means that the signal from the positive and negative controls could overlap, making the assay not very useful for
screening purposes.
from the mean. All statistics books use z in this context, which has nothing to do with the Z-factor used to assess a screening assay.
Also note that the Z factor is computed based on a fairly arbitrary equation. Why compute the thresholds as three times the respective standard deviations rather than use a factor of two or four or
any other factor? It's arbitrary. But experience has shown that this Z factor is a useful way to describe an assay, and it has become standard.
No GraphPad program computes the Z-factor. But you can use the Row or Column statistics analyses in Prism to compute the means and SDs. Once you have those values, computing the Z-factor is easy.
An alternative way to quantify the difference between means compared to the SD of the two groups is to compute R^2. This is not commonly done as part of t test calculations, but is part of the output
of GraphPad Prism.
(1) Zhang JH, Chung TD, Oldenburg KR, A Simple Statistical Parameter for Use in Evaluation and Validation of High Throughput Screening Assays. J Biomol Screen. 1999;4(2):67-73.
[Major rewrite Sept. 2010 to include figure and better explain the calculations.]
Keywords: high-throughput screening HTS | {"url":"http://www.graphpad.com/support/faqid/1153/","timestamp":"2014-04-21T05:57:50Z","content_type":null,"content_length":"16866","record_id":"<urn:uuid:f9bd1324-e22f-4d69-b772-2ab64a8da6b6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rearranging Series
Yes, that's exactly what we wanted!
Can you do this in general now? If you calculate the first k terms, a lot of terms will cancel out. So which terms remain?
If you don't see it immediately, try giving k some other values. Try k=20 and k=30. Then you will be ready to handle the general situation...
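For illustration only (the specific series from this thread isn't reproduced here), a standard telescoping sum shows the cancellation pattern:

\sum_{n=1}^{k} \left( \frac{1}{n} - \frac{1}{n+1} \right)
  = \left( 1 - \frac{1}{2} \right) + \left( \frac{1}{2} - \frac{1}{3} \right) + \cdots + \left( \frac{1}{k} - \frac{1}{k+1} \right)
  = 1 - \frac{1}{k+1}.

Each intermediate term cancels with its neighbor, so only the first and last terms survive; letting k grow large then gives the limit of the partial sums.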
Re: st: margins: (not estimatable)
From: D-Ta <altruist81@gmx.de>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: margins: (not estimatable)
Date: Mon, 18 Jul 2011 10:25:51 +0200
Wonderful, thanks for your support!!
On 13.07.2011 21:09, Jeff Pitblado, StataCorp LP wrote:
Darjusch <altruist81@gmx.de> is using -margins- after a -logit- model that
contains the interaction of a single factor variable with the linear,
quadratic, and cubic terms of a continuous variable and is getting the
'(not estimable)' label in the -margins- output:
I have read a similar thread
(http://www.stata.com/statalist/archive/2011-06/msg00407.html), but the
answer wouldn't solve my problem. I run a logit model where I am
interested in the marginal effect of fail (dummy) on dropout at the
value of mc_1styr_c==0 (the model is based on a regression discontinuity
research design).
Here is what I do:
. logit dropout3_en i.fail##(c.mc_1st##c.mc_1st##c.mc_1st) if sex==0,
vce(cluster mc_1st)
(output omitted)
. margins ,dydx(fail) at(mc_1st==0)
Conditional marginal effects Number of obs =
Model VCE : Robust
Expression : Pr(dropout3_enrollment), predict()
dy/dx w.r.t. : 1.fail
at : mc_1styr_c~d = 0
| Delta-method
| dy/dx Std. Err. z P>|z| [95% Conf.
1.fail | (not estimable)
Note: dy/dx for factor levels is the discrete change from the base level.
The command: margins ,dydx(fail) at(mc_1st==0) should give me the effect
and the significance level of interest. If I enlarge the sample (let's
say, not conditioning on sex==0) it works.
Could someone explain to me the core of the problem and how to solve it?
The check for 'Estimable functions' performed by -margins- is detailed in the
methods and formulas section of -[R] margins-.
The basic idea here is that -logit- saves off a hidden matrix, let's call it
H, that -margins- uses to check for estimable functions. This H matrix should
only contain values -1, 0, and 1. The check is as follows:
z*b is estimable if z = z*H
where b is the coefficient vector, and z a vector of fixed/hypothetical values
of the independent variables in the fitted model. -margins- compares z and z*H
via relative differences with a numeric tolerance of 1e-5, controllable via the
-estimtolerance()- option.
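The check itself is simple linear algebra, so the idea can be mirrored outside Stata. A minimal NumPy sketch (the function name, how H is obtained, and the exact scaling of the relative differences are assumptions here; Stata's precise formula is in the methods and formulas section of -[R] margins-):

import numpy as np

def is_estimable(z, H, tol=1e-5):
    # z*b is declared estimable if z equals z*H, compared elementwise
    # by relative differences with numeric tolerance tol (default 1e-5).
    z = np.asarray(z, dtype=float)
    zH = z @ np.asarray(H, dtype=float)
    scale = np.maximum(np.abs(z), 1.0)  # guard against dividing by zero
    return bool(np.all(np.abs(z - zH) / scale <= tol))

When H contains only the values -1, 0, and 1 this test behaves as intended; the numerical instability described next is what pushes H away from those values in the first place.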
We suspect that the cubic polynomial in Darjusch's fitted -logit- model is
causing the H matrix to have values other than -1, 0, and 1. This can happen
when calculations are numerically unstable due to huge scale differences
between the independent variables, or otherwise through the propagation of
rounding errors in finite-precision calculations.
Darjusch can verify whether there are problems with the H matrix by typing the
following Stata commands after the -logit- model fit:
. matrix H = get(H)
. matrix list H
If the output has values other than -1, 0, and 1, Darjusch can use the
-noestimcheck- option with -margins- to prevent the estimability check. We
feel confident in giving Darjusch this advice in this case since there is only
one factor variable in the model specification, and the polynomial terms of
the continuous variable should not, in theory (assuming infinite-precision
calculations), result in non-estimable functions.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
RPM resource bogofilter
Bogofilter is a Bayesian spam filter. In its normal mode of operation, it takes an email message or other text on standard input, does a statistical check against lists of "good" and "bad" words, and
returns a status code indicating whether or not the message is spam. Bogofilter is designed with fast algorithms (including the Berkeley DB system), coded directly in C, and tuned for speed, so it can be
used for production by sites that process a lot of mail.
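As a toy illustration of the statistical idea only, here is a simplified naive-Bayes-style score in Python. This is not bogofilter's actual algorithm (bogofilter uses Robinson-style combining over Berkeley DB word lists); the token counts below are hypothetical:

import math

def spam_score(tokens, spam_counts, ham_counts, n_spam, n_ham):
    # Combine per-token "spamminess" in log space, with add-one smoothing
    # so tokens unseen in training do not zero out the score.
    log_odds = 0.0
    for t in tokens:
        p_spam = (spam_counts.get(t, 0) + 1) / (n_spam + 2)
        p_ham = (ham_counts.get(t, 0) + 1) / (n_ham + 2)
        log_odds += math.log(p_spam / p_ham)
    return 1.0 / (1.0 + math.exp(-log_odds))  # squash to a 0..1 score

spam_counts = {"viagra": 40, "offer": 25}   # hypothetical training counts
ham_counts = {"meeting": 30, "offer": 5}
print(spam_score(["viagra", "offer"], spam_counts, ham_counts, 100, 100))

Bogofilter itself reports the verdict through its exit status rather than a printed score, which is what makes it easy to drop into a mail-delivery pipeline.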
Package Summary Distribution Download
bogofilter-1.2.4-3.8.ppc64le.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE Factory for ppc bogofilter-1.2.4-3.8.ppc64le.rpm
bogofilter-1.2.4-3.6.i586.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE Factory for i586 bogofilter-1.2.4-3.6.i586.rpm
bogofilter-1.2.4-3.6.x86_64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE Factory for x86_64 bogofilter-1.2.4-3.6.x86_64.rpm
bogofilter-1.2.4-3.3.aarch64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE Factory for aarch64 bogofilter-1.2.4-3.3.aarch64.rpm
bogofilter-1.2.4-3.1.armv7hl.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE Factory for armv7hl bogofilter-1.2.4-3.1.armv7hl.rpm
bogofilter-1.2.4-3.1.ppc.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE Factory for ppc bogofilter-1.2.4-3.1.ppc.rpm
bogofilter-1.2.4-3.1.ppc64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE Factory for ppc bogofilter-1.2.4-3.1.ppc64.rpm
bogofilter-1.2.4-2.1.3.aarch64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 13.1 for aarch64 bogofilter-1.2.4-2.1.3.aarch64.rpm
bogofilter-1.2.4-2.1.3.ppc.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 13.1 for ppc bogofilter-1.2.4-2.1.3.ppc.rpm
bogofilter-1.2.4-2.1.3.ppc64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 13.1 for ppc bogofilter-1.2.4-2.1.3.ppc64.rpm
bogofilter-1.2.4-2.1.2.armv6hl.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 13.1 for armv6hl bogofilter-1.2.4-2.1.2.armv6hl.rpm
bogofilter-1.2.4-2.1.2.i586.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 13.1 for i586 bogofilter-1.2.4-2.1.2.i586.rpm
bogofilter-1.2.4-2.1.2.x86_64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 13.1 for x86_64 bogofilter-1.2.4-2.1.2.x86_64.rpm
bogofilter-1.2.4-2.1.1.armv7hl.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 13.1 for armv7hl bogofilter-1.2.4-2.1.1.armv7hl.rpm
bogofilter-1.2.4-2.mga4.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mageia Cauldron for i586 bogofilter-1.2.4-2.mga4.i586.rpm
bogofilter-1.2.4-2.mga4.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 4 for i586 bogofilter-1.2.4-2.mga4.i586.rpm
bogofilter-1.2.4-2.mga4.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mageia Cauldron for x86_64 bogofilter-1.2.4-2.mga4.x86_64.rpm
bogofilter-1.2.4-2.mga4.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 4 for x86_64 bogofilter-1.2.4-2.mga4.x86_64.rpm
bogofilter-1.2.4-1.1.armv6hl.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE Factory for armv6hl bogofilter-1.2.4-1.1.armv6hl.rpm
bogofilter-1.2.3-17.4.1.i586.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.2 updates for i586 bogofilter-1.2.3-17.4.1.i586.rpm
bogofilter-1.2.3-17.4.1.x86_64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.2 updates for x86_64 bogofilter-1.2.3-17.4.1.x86_64.rpm
bogofilter-1.2.3-5.fc20.armv7hl.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Rawhide for armhfp bogofilter-1.2.3-5.fc20.armv7hl.rpm
bogofilter-1.2.3-5.fc20.armv7hl.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 20 for armhfp bogofilter-1.2.3-5.fc20.armv7hl.rpm
bogofilter-1.2.3-5.fc20.i686.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 20 for i386 bogofilter-1.2.3-5.fc20.i686.rpm
bogofilter-1.2.3-5.fc20.i686.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Rawhide for i386 bogofilter-1.2.3-5.fc20.i686.rpm
bogofilter-1.2.3-5.fc20.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Rawhide for ppc bogofilter-1.2.3-5.fc20.ppc.rpm
bogofilter-1.2.3-5.fc20.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 20 for ppc bogofilter-1.2.3-5.fc20.ppc.rpm
bogofilter-1.2.3-5.fc20.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Rawhide for ppc64 bogofilter-1.2.3-5.fc20.ppc64.rpm
bogofilter-1.2.3-5.fc20.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 20 for ppc64 bogofilter-1.2.3-5.fc20.ppc64.rpm
bogofilter-1.2.3-5.fc20.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Rawhide for s390 bogofilter-1.2.3-5.fc20.s390.rpm
bogofilter-1.2.3-5.fc20.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 20 for s390 bogofilter-1.2.3-5.fc20.s390.rpm
bogofilter-1.2.3-5.fc20.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 20 for s390x bogofilter-1.2.3-5.fc20.s390x.rpm
bogofilter-1.2.3-5.fc20.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Rawhide for s390x bogofilter-1.2.3-5.fc20.s390x.rpm
bogofilter-1.2.3-5.fc20.src.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Secondary Rawhide Sources bogofilter-1.2.3-5.fc20.src.rpm
bogofilter-1.2.3-5.fc20.src.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Rawhide Sources bogofilter-1.2.3-5.fc20.src.rpm
bogofilter-1.2.3-5.fc20.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 20 for x86_64 bogofilter-1.2.3-5.fc20.x86_64.rpm
bogofilter-1.2.3-5.fc20.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Rawhide for x86_64 bogofilter-1.2.3-5.fc20.x86_64.rpm
bogofilter-1.2.3-3.1.1.armv7hl.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.3 for armv7hl bogofilter-1.2.3-3.1.1.armv7hl.rpm
bogofilter-1.2.3-3.1.1.i586.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.3 for i586 bogofilter-1.2.3-3.1.1.i586.rpm
bogofilter-1.2.3-3.1.1.ppc.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.3 for ppc bogofilter-1.2.3-3.1.1.ppc.rpm
bogofilter-1.2.3-3.1.1.ppc64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.3 for ppc bogofilter-1.2.3-3.1.1.ppc64.rpm
bogofilter-1.2.3-3.1.1.x86_64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.3 for x86_64 bogofilter-1.2.3-3.1.1.x86_64.rpm
bogofilter-1.2.3-3.fc19.armv7hl.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 19 for armhfp bogofilter-1.2.3-3.fc19.armv7hl.rpm
bogofilter-1.2.3-3.fc19.i686.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 19 for i386 bogofilter-1.2.3-3.fc19.i686.rpm
bogofilter-1.2.3-3.fc19.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 19 for ppc bogofilter-1.2.3-3.fc19.ppc.rpm
bogofilter-1.2.3-3.fc19.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 19 for ppc64 bogofilter-1.2.3-3.fc19.ppc64.rpm
bogofilter-1.2.3-3.fc19.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 19 for s390 bogofilter-1.2.3-3.fc19.s390.rpm
bogofilter-1.2.3-3.fc19.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 19 for s390x bogofilter-1.2.3-3.fc19.s390x.rpm
bogofilter-1.2.3-3.fc19.src.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Secondary Rawhide Sources bogofilter-1.2.3-3.fc19.src.rpm
bogofilter-1.2.3-3.fc19.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 19 for x86_64 bogofilter-1.2.3-3.fc19.x86_64.rpm
bogofilter-1.2.3-3.fc18.armv5tel.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for arm bogofilter-1.2.3-3.fc18.armv5tel.rpm
bogofilter-1.2.3-3.fc18.armv7hl.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for armhfp bogofilter-1.2.3-3.fc18.armv7hl.rpm
bogofilter-1.2.3-3.fc18.i686.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for i386 bogofilter-1.2.3-3.fc18.i686.rpm
bogofilter-1.2.3-3.fc18.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for ppc bogofilter-1.2.3-3.fc18.ppc.rpm
bogofilter-1.2.3-3.fc18.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for ppc64 bogofilter-1.2.3-3.fc18.ppc64.rpm
bogofilter-1.2.3-3.fc18.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for s390 bogofilter-1.2.3-3.fc18.s390.rpm
bogofilter-1.2.3-3.fc18.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for s390x bogofilter-1.2.3-3.fc18.s390x.rpm
bogofilter-1.2.3-3.fc18.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for x86_64 bogofilter-1.2.3-3.fc18.x86_64.rpm
bogofilter-1.2.3-2.mga3.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 3 for i586 bogofilter-1.2.3-2.mga3.i586.rpm
bogofilter-1.2.3-2.mga3.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 3 for x86_64 bogofilter-1.2.3-2.mga3.x86_64.rpm
bogofilter-1.2.3-1.fu2013.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.2.3-1.fu2013.src.rpm
bogofilter-1.2.3-1.fc18.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for ppc bogofilter-1.2.3-1.fc18.ppc.rpm
bogofilter-1.2.3-1.fc18.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 updates for ppc64 bogofilter-1.2.3-1.fc18.ppc64.rpm
bogofilter-1.2.3-1.fc17.armv5tel.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 updates for arm bogofilter-1.2.3-1.fc17.armv5tel.rpm
bogofilter-1.2.3-1.fc17.armv7hl.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 updates for armhfp bogofilter-1.2.3-1.fc17.armv7hl.rpm
bogofilter-1.2.3-1.fc17.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 updates for ppc bogofilter-1.2.3-1.fc17.ppc.rpm
bogofilter-1.2.3-1.fc17.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 updates for ppc64 bogofilter-1.2.3-1.fc17.ppc.rpm
bogofilter-1.2.3-1.fc17.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 updates for ppc64 bogofilter-1.2.3-1.fc17.ppc64.rpm
bogofilter-1.2.3-1.fc17.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 updates for s390 bogofilter-1.2.3-1.fc17.s390.rpm
bogofilter-1.2.3-1.fc17.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 updates for s390x bogofilter-1.2.3-1.fc17.s390x.rpm
bogofilter-1.2.3-1.fc16.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 16 updates for s390 bogofilter-1.2.3-1.fc16.s390.rpm
bogofilter-1.2.3-1.fc16.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 16 updates for s390x bogofilter-1.2.3-1.fc16.s390x.rpm
bogofilter-1.2.3-1.el6.i686.html Fast anti-spam filtering by Bayesian statistical analysis Extras Packages for Enterprise Linux 6 for i386 bogofilter-1.2.3-1.el6.i686.rpm
bogofilter-1.2.3-1.el6.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Extras Packages for Enterprise Linux 6 for ppc64 bogofilter-1.2.3-1.el6.ppc64.rpm
bogofilter-1.2.3-1.el6.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Extras Packages for Enterprise Linux 6 for x86_64 bogofilter-1.2.3-1.el6.x86_64.rpm
bogofilter-1.2.3-1.el5.i386.html Fast anti-spam filtering by Bayesian statistical analysis Extras Packages for Enterprise Linux 5 for i386 bogofilter-1.2.3-1.el5.i386.rpm
bogofilter-1.2.3-1.el5.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Extras Packages for Enterprise Linux 5 for ppc bogofilter-1.2.3-1.el5.ppc.rpm
bogofilter-1.2.3-1.el5.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Extras Packages for Enterprise Linux 5 for x86_64 bogofilter-1.2.3-1.el5.x86_64.rpm
bogofilter-1.2.2-5.fc18.armv5tel.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 for arm bogofilter-1.2.2-5.fc18.armv5tel.rpm
bogofilter-1.2.2-5.fc18.armv7hl.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 for armhfp bogofilter-1.2.2-5.fc18.armv7hl.rpm
bogofilter-1.2.2-5.fc18.i686.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 for i386 bogofilter-1.2.2-5.fc18.i686.rpm
bogofilter-1.2.2-5.fc18.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 for ppc bogofilter-1.2.2-5.fc18.ppc.rpm
bogofilter-1.2.2-5.fc18.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 for ppc64 bogofilter-1.2.2-5.fc18.ppc64.rpm
bogofilter-1.2.2-5.fc18.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 for s390 bogofilter-1.2.2-5.fc18.s390.rpm
bogofilter-1.2.2-5.fc18.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 for s390x bogofilter-1.2.2-5.fc18.s390x.rpm
bogofilter-1.2.2-5.fc18.src.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Secondary Rawhide Sources bogofilter-1.2.2-5.fc18.src.rpm
bogofilter-1.2.2-5.fc18.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 18 for x86_64 bogofilter-1.2.2-5.fc18.x86_64.rpm
bogofilter-1.2.2-4.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva devel cooker for i586 bogofilter-1.2.2-4.i586.rpm
bogofilter-1.2.2-4.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva devel cooker for x86_64 bogofilter-1.2.2-4.x86_64.rpm
bogofilter-1.2.2-3.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2011 for i586 bogofilter-1.2.2-3.i586.rpm
bogofilter-1.2.2-3.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2011 for x86_64 bogofilter-1.2.2-3.x86_64.rpm
bogofilter-1.2.2-3.fc17.armv5tel.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 for arm bogofilter-1.2.2-3.fc17.armv5tel.rpm
bogofilter-1.2.2-3.fc17.armv7hl.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 for armhfp bogofilter-1.2.2-3.fc17.armv7hl.rpm
bogofilter-1.2.2-3.fc17.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 for ppc bogofilter-1.2.2-3.fc17.ppc.rpm
bogofilter-1.2.2-3.fc17.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 for ppc64 bogofilter-1.2.2-3.fc17.ppc64.rpm
bogofilter-1.2.2-3.fc17.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 for s390 bogofilter-1.2.2-3.fc17.s390.rpm
bogofilter-1.2.2-3.fc17.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 17 for s390x bogofilter-1.2.2-3.fc17.s390x.rpm
bogofilter-1.2.2-3.fc17.src.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Secondary Rawhide Sources bogofilter-1.2.2-3.fc17.src.rpm
bogofilter-1.2.2-2.2.mga2.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 2 for i586 bogofilter-1.2.2-2.2.mga2.i586.rpm
bogofilter-1.2.2-2.2.mga2.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 2 for x86_64 bogofilter-1.2.2-2.2.mga2.x86_64.rpm
bogofilter-1.2.2-2mgc30.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.2.2-2mgc30.src.rpm
bogofilter-1.2.2-2mgc30.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.2.2-2mgc30.x86_64.rpm
bogofilter-1.2.2-2.fc15.armv5tel.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 15 for arm bogofilter-1.2.2-2.fc15.armv5tel.rpm
bogofilter-1.2.2-2.fc15.armv7hl.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 15 for armhfp bogofilter-1.2.2-2.fc15.armv7hl.rpm
bogofilter-1.2.2-2.fc15.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 15 for ppc bogofilter-1.2.2-2.fc15.ppc.rpm
bogofilter-1.2.2-2.fc15.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 16 for ppc bogofilter-1.2.2-2.fc15.ppc.rpm
bogofilter-1.2.2-2.fc15.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 16 for ppc64 bogofilter-1.2.2-2.fc15.ppc64.rpm
bogofilter-1.2.2-2.fc15.ppc64.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 15 for ppc64 bogofilter-1.2.2-2.fc15.ppc64.rpm
bogofilter-1.2.2-2.fc15.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 15 for s390 bogofilter-1.2.2-2.fc15.s390.rpm
bogofilter-1.2.2-2.fc15.s390.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 16 for s390 bogofilter-1.2.2-2.fc15.s390.rpm
bogofilter-1.2.2-2.fc15.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 15 for s390x bogofilter-1.2.2-2.fc15.s390x.rpm
bogofilter-1.2.2-2.fc15.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 16 for s390x bogofilter-1.2.2-2.fc15.s390x.rpm
bogofilter-1.2.2-2.fc15.src.html Fast anti-spam filtering by Bayesian statistical analysis Fedora Secondary Rawhide Sources bogofilter-1.2.2-2.fc15.src.rpm
bogofilter-1.2.2-2.mga1.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 1 for i586 bogofilter-1.2.2-2.mga1.i586.rpm
bogofilter-1.2.2-2.mga1.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 2 for i586 bogofilter-1.2.2-2.mga1.i586.rpm
bogofilter-1.2.2-2.mga1.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 2 for x86_64 bogofilter-1.2.2-2.mga1.x86_64.rpm
bogofilter-1.2.2-2.mga1.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mageia 1 for x86_64 bogofilter-1.2.2-2.mga1.x86_64.rpm
bogofilter-1.2.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.2.2-1.src.rpm
bogofilter-1.2.2-1.fc14.armv5tel.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 14 for arm bogofilter-1.2.2-1.fc14.armv5tel.rpm
bogofilter-1.2.2-1.fu14.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.2.2-1.fu14.src.rpm
bogofilter-1.2.2-1.fc13.armv5tel.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 13 updates for arm bogofilter-1.2.2-1.fc13.armv5tel.rpm
bogofilter-1.2.1-17.1.3.ppc.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.2 for ppc bogofilter-1.2.1-17.1.3.ppc.rpm
bogofilter-1.2.1-17.1.3.ppc64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.2 for ppc bogofilter-1.2.1-17.1.3.ppc64.rpm
bogofilter-1.2.1-17.1.2.i586.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.2 for i586 bogofilter-1.2.1-17.1.2.i586.rpm
bogofilter-1.2.1-17.1.2.x86_64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.2 for x86_64 bogofilter-1.2.1-17.1.2.x86_64.rpm
bogofilter-1.2.1-13.1.2.i586.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.1 for i586 bogofilter-1.2.1-13.1.2.i586.rpm
bogofilter-1.2.1-13.1.2.x86_64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 12.1 for x86_64 bogofilter-1.2.1-13.1.2.x86_64.rpm
bogofilter-1.2.1-9.3.i586.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 11.4 for i586 bogofilter-1.2.1-9.3.i586.rpm
bogofilter-1.2.1-9.3.x86_64.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 11.4 for x86_64 bogofilter-1.2.1-9.3.x86_64.rpm
bogofilter-1.2.1-2mdv2010.1.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2010.1 for i586 bogofilter-1.2.1-2mdv2010.1.i586.rpm
bogofilter-1.2.1-2mdv2010.1.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2010.1 for x86_64 bogofilter-1.2.1-2mdv2010.1.x86_64.rpm
bogofilter-1.2.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.2.1-1.src.rpm
bogofilter-1.2.1-1mdv2010.0.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2010.0 for i586 bogofilter-1.2.1-1mdv2010.0.i586.rpm
bogofilter-1.2.1-1mdv2010.0.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2010.0 for x86_64 bogofilter-1.2.1-1mdv2010.0.x86_64.rpm
bogofilter-1.2.1-1.el5.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 i386 bogofilter-1.2.1-1.el5.rf.i386.rpm
bogofilter-1.2.1-1.el5.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 x86_64 bogofilter-1.2.1-1.el5.rf.x86_64.rpm
bogofilter-1.2.1-1.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.2.1-1.el4.rf.i386.rpm
bogofilter-1.2.1-1.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.2.1-1.el4.rf.x86_64.rpm
bogofilter-1.2.1-1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.2.1-1.el3.rf.i386.rpm
bogofilter-1.2.1-1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.2.1-1.el3.rf.x86_64.rpm
bogofilter-1.2.0-2.fc12.armv5tel.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 13 for arm bogofilter-1.2.0-2.fc12.armv5tel.rpm
bogofilter-1.2.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.2.0-1.src.rpm
bogofilter-1.2.0-1mdv2009.1.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2009.1 for i586 bogofilter-1.2.0-1mdv2009.1.i586.rpm
bogofilter-1.2.0-1mdv2009.1.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2009.1 for x86_64 bogofilter-1.2.0-1mdv2009.1.x86_64.rpm
bogofilter-1.2.0-1.el5.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 i386 bogofilter-1.2.0-1.el5.rf.i386.rpm
bogofilter-1.2.0-1.el5.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 x86_64 bogofilter-1.2.0-1.el5.rf.x86_64.rpm
bogofilter-1.2.0-1.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.2.0-1.el4.rf.i386.rpm
bogofilter-1.2.0-1.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.2.0-1.el4.rf.x86_64.rpm
bogofilter-1.2.0-1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.2.0-1.el3.rf.i386.rpm
bogofilter-1.2.0-1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.2.0-1.el3.rf.x86_64.rpm
bogofilter-1.1.7-3.fc11.s390x.html Fast anti-spam filtering by Bayesian statistical analysis Fedora 11 for s390x bogofilter-1.1.7-3.fc11.s390x.rpm
bogofilter-1.1.7-2mdv2009.0.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2009.0 for i586 bogofilter-1.1.7-2mdv2009.0.i586.rpm
bogofilter-1.1.7-2mdv2009.0.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2009.0 for x86_64 bogofilter-1.1.7-2mdv2009.0.x86_64.rpm
bogofilter-1.1.7-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.1.7-1.src.rpm
bogofilter-1.1.7-1.el5.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 i386 bogofilter-1.1.7-1.el5.rf.i386.rpm
bogofilter-1.1.7-1.el5.rf.ppc.html Fast anti-spam filtering by Bayesian statistical analysis DAG Fabian packages for Red Hat Linux el5 ppc bogofilter-1.1.7-1.el5.rf.ppc.rpm
bogofilter-1.1.7-1.el5.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 x86_64 bogofilter-1.1.7-1.el5.rf.x86_64.rpm
bogofilter-1.1.7-1.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.1.7-1.el4.rf.i386.rpm
bogofilter-1.1.7-1.el4.rf.ppc.html Fast anti-spam filtering by Bayesian statistical analysis DAG Fabian packages for Red Hat Linux el4 ppc bogofilter-1.1.7-1.el4.rf.ppc.rpm
bogofilter-1.1.7-1.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.1.7-1.el4.rf.x86_64.rpm
bogofilter-1.1.7-1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.1.7-1.el3.rf.i386.rpm
bogofilter-1.1.7-1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.1.7-1.el3.rf.x86_64.rpm
bogofilter-1.1.6-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.1.6-1.src.rpm
bogofilter-1.1.6-1.el5.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 i386 bogofilter-1.1.6-1.el5.rf.i386.rpm
bogofilter-1.1.6-1.el5.rf.ppc.html Fast anti-spam filtering by Bayesian statistical analysis DAG Fabian packages for Red Hat Linux el5 ppc bogofilter-1.1.6-1.el5.rf.ppc.rpm
bogofilter-1.1.6-1.el5.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 x86_64 bogofilter-1.1.6-1.el5.rf.x86_64.rpm
bogofilter-1.1.6-1.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.1.6-1.el4.rf.i386.rpm
bogofilter-1.1.6-1.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.1.6-1.el4.rf.x86_64.rpm
bogofilter-1.1.6-1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.1.6-1.el3.rf.i386.rpm
bogofilter-1.1.6-1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.1.6-1.el3.rf.x86_64.rpm
bogofilter-1.1.5-2mdv2008.0.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2008.0 for i586 bogofilter-1.1.5-2mdv2008.0.i586.rpm
bogofilter-1.1.5-2mdv2008.0.ppc.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva devel cooker for ppc bogofilter-1.1.5-2mdv2008.0.ppc.rpm
bogofilter-1.1.5-2mdv2008.0.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2008.0 for x86_64 bogofilter-1.1.5-2mdv2008.0.x86_64.rpm
bogofilter-1.1.5-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.1.5-1.src.rpm
bogofilter-1.1.5-1mdv2007.1.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2007.1 for i586 bogofilter-1.1.5-1mdv2007.1.i586.rpm
bogofilter-1.1.5-1mdv2007.1.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2007.1 for x86_64 bogofilter-1.1.5-1mdv2007.1.x86_64.rpm
bogofilter-1.1.5-1.el5.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 i386 bogofilter-1.1.5-1.el5.rf.i386.rpm
bogofilter-1.1.5-1.el5.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 x86_64 bogofilter-1.1.5-1.el5.rf.x86_64.rpm
bogofilter-1.1.5-1.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.1.5-1.el4.rf.i386.rpm
bogofilter-1.1.5-1.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.1.5-1.el4.rf.x86_64.rpm
bogofilter-1.1.5-1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.1.5-1.el3.rf.i386.rpm
bogofilter-1.1.5-1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.1.5-1.el3.rf.x86_64.rpm
bogofilter-1.1.4-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.1.4-1.src.rpm
bogofilter-1.1.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.1.3-1.src.rpm
bogofilter-1.1.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.1.2-1.src.rpm
bogofilter-1.1.1-174.13.ppc.html Fast Anti-Spam Filtering by Bayesian Statistical Analysis OpenSuSE 11.1 for ppc bogofilter-1.1.1-174.13.ppc.rpm
bogofilter-1.1.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.1.1-1.src.rpm
bogofilter-1.1.1-1mdv2007.0.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2007.0 for i586 bogofilter-1.1.1-1mdv2007.0.i586.rpm
bogofilter-1.1.1-1mdv2007.0.sparc.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva devel cooker for sparc bogofilter-1.1.1-1mdv2007.0.sparc.rpm
bogofilter-1.1.1-1mdv2007.0.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2007.0 for x86_64 bogofilter-1.1.1-1mdv2007.0.x86_64.rpm
bogofilter-1.1.1-1.el5.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 i386 bogofilter-1.1.1-1.el5.rf.i386.rpm
bogofilter-1.1.1-1.el5.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el5 x86_64 bogofilter-1.1.1-1.el5.rf.x86_64.rpm
bogofilter-1.1.1-1.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.1.1-1.el4.rf.i386.rpm
bogofilter-1.1.1-1.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.1.1-1.el4.rf.x86_64.rpm
bogofilter-1.1.1-1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.1.1-1.el3.rf.i386.rpm
bogofilter-1.1.1-1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.1.1-1.el3.rf.x86_64.rpm
bogofilter-1.1.1-1.el2.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el2.1 i386 bogofilter-1.1.1-1.el2.rf.i386.rpm
bogofilter-1.1.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.1.0-1.src.rpm
bogofilter-1.0.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.0.3-1.src.rpm
bogofilter-1.0.3-1.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.0.3-1.el4.rf.i386.rpm
bogofilter-1.0.3-1.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.0.3-1.el4.rf.x86_64.rpm
bogofilter-1.0.3-1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.0.3-1.el3.rf.i386.rpm
bogofilter-1.0.3-1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.0.3-1.el3.rf.x86_64.rpm
bogofilter-1.0.3-1.el2.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el2.1 i386 bogofilter-1.0.3-1.el2.rf.i386.rpm
bogofilter-1.0.2-6.0.el5.i386.html Fast Bayesian Spam Filter. ATrpms Stable packages for sl5 i386 bogofilter-1.0.2-6.0.el5.i386.rpm
bogofilter-1.0.2-6.0.el5.i386.html Fast Bayesian Spam Filter. ATrpms Stable packages for el5 i386 bogofilter-1.0.2-6.0.el5.i386.rpm
bogofilter-1.0.2-6.0.el5.x86_64.html Fast Bayesian Spam Filter. ATrpms Stable packages for sl5 x86_64 bogofilter-1.0.2-6.0.el5.x86_64.rpm
bogofilter-1.0.2-6.0.el5.x86_64.html Fast Bayesian Spam Filter. ATrpms Stable packages for el5 x86_64 bogofilter-1.0.2-6.0.el5.x86_64.rpm
bogofilter-1.0.2-6.el6.i686.html Fast Bayesian Spam Filter. ATrpms Stable packages for sl6 i386 bogofilter-1.0.2-6.el6.i686.rpm
bogofilter-1.0.2-6.el6.i686.html Fast Bayesian Spam Filter. ATrpms Stable packages for el6 i386 bogofilter-1.0.2-6.el6.i686.rpm
bogofilter-1.0.2-6.el6.x86_64.html Fast Bayesian Spam Filter. ATrpms Stable packages for sl6 x86_64 bogofilter-1.0.2-6.el6.x86_64.rpm
bogofilter-1.0.2-6.el6.x86_64.html Fast Bayesian Spam Filter. ATrpms Stable packages for el6 x86_64 bogofilter-1.0.2-6.el6.x86_64.rpm
bogofilter-1.0.2-6.el4.at.i386.html Fast Bayesian Spam Filter. ATrpms Stable packages for sl4 i386 bogofilter-1.0.2-6.el4.at.i386.rpm
bogofilter-1.0.2-6.el4.at.i386.html Fast Bayesian Spam Filter. ATrpms Stable packages for el4 i386 bogofilter-1.0.2-6.el4.at.i386.rpm
bogofilter-1.0.2-6.el4.at.x86_64.html Fast Bayesian Spam Filter. ATrpms Stable packages for el4 x86_64 bogofilter-1.0.2-6.el4.at.x86_64.rpm
bogofilter-1.0.2-6.el4.at.x86_64.html Fast Bayesian Spam Filter. ATrpms Stable packages for sl4 x86_64 bogofilter-1.0.2-6.el4.at.x86_64.rpm
bogofilter-1.0.2-6.el3.at.i386.html Fast Bayesian Spam Filter. ATrpms Stable packages for sl3 i386 bogofilter-1.0.2-6.el3.at.i386.rpm
bogofilter-1.0.2-6.el3.at.i386.html Fast Bayesian Spam Filter. ATrpms Stable packages for el3 i386 bogofilter-1.0.2-6.el3.at.i386.rpm
bogofilter-1.0.2-6.el3.at.x86_64.html Fast Bayesian Spam Filter. ATrpms Stable packages for el3 x86_64 bogofilter-1.0.2-6.el3.at.x86_64.rpm
bogofilter-1.0.2-6.el3.at.x86_64.html Fast Bayesian Spam Filter. ATrpms Stable packages for sl3 x86_64 bogofilter-1.0.2-6.el3.at.x86_64.rpm
bogofilter-1.0.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.0.2-1.src.rpm
bogofilter-1.0.1-1.2.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.0.1-1.2.el4.rf.i386.rpm
bogofilter-1.0.1-1.2.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.0.1-1.2.el4.rf.x86_64.rpm
bogofilter-1.0.1-1.1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.0.1-1.1.el3.rf.i386.rpm
bogofilter-1.0.1-1.1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.0.1-1.1.el3.rf.x86_64.rpm
bogofilter-1.0.1-1.0.el2.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el2.1 i386 bogofilter-1.0.1-1.0.el2.rf.i386.rpm
bogofilter-1.0.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.0.1-1.src.rpm
bogofilter-1.0.0-1.2.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-1.0.0-1.2.el4.rf.i386.rpm
bogofilter-1.0.0-1.2.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-1.0.0-1.2.el4.rf.x86_64.rpm
bogofilter-1.0.0-1.1.el3.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-1.0.0-1.1.el3.rf.i386.rpm
bogofilter-1.0.0-1.1.el3.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-1.0.0-1.1.el3.rf.x86_64.rpm
bogofilter-1.0.0-1.0.el2.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el2.1 i386 bogofilter-1.0.0-1.0.el2.rf.i386.rpm
bogofilter-1.0.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.0.0-1.i586.rpm
bogofilter-1.0.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-1.0.0-1.src.rpm
bogofilter-0.96.6-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.6-1.i586.rpm
bogofilter-0.96.6-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.6-1.src.rpm
bogofilter-0.96.5-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.5-1.i586.rpm
bogofilter-0.96.5-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.5-1.src.rpm
bogofilter-0.96.4-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.4-1.i586.rpm
bogofilter-0.96.4-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.4-1.src.rpm
bogofilter-0.96.3-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.3-1.i586.rpm
bogofilter-0.96.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.3-1.src.rpm
bogofilter-0.96.2-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.2-1.i586.rpm
bogofilter-0.96.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.2-1.src.rpm
bogofilter-0.96.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.1-1.i586.rpm
bogofilter-0.96.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.1-1.src.rpm
bogofilter-0.96.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.0-1.i586.rpm
bogofilter-0.96.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.96.0-1.src.rpm
bogofilter-0.95.2-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.95.2-1.i586.rpm
bogofilter-0.95.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.95.2-1.src.rpm
bogofilter-0.95.2-1mdk.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2006.0 for i586 bogofilter-0.95.2-1mdk.i586.rpm
bogofilter-0.95.2-1mdk.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva devel 2006.0 for i586 bogofilter-0.95.2-1mdk.i586.rpm
bogofilter-0.95.2-1mdk.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 2006.0 for x86_64 bogofilter-0.95.2-1mdk.x86_64.rpm
bogofilter-0.95.2-1mdk.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva devel 2006.0 for x86_64 bogofilter-0.95.2-1mdk.x86_64.rpm
bogofilter-0.95.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.95.1-1.i586.rpm
bogofilter-0.95.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.95.1-1.src.rpm
bogofilter-0.95.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.95.0-1.i586.rpm
bogofilter-0.95.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.95.0-1.src.rpm
bogofilter-0.94.14-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.14-1.i586.rpm
bogofilter-0.94.14-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.14-1.src.rpm
bogofilter-0.94.13-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.13-1.i586.rpm
bogofilter-0.94.13-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.13-1.src.rpm
bogofilter-0.94.12-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.12-1.i586.rpm
bogofilter-0.94.12-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.12-1.src.rpm
bogofilter-0.94.11-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.11-1.i586.rpm
bogofilter-0.94.11-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.11-1.src.rpm
bogofilter-0.94.10-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.10-1.i586.rpm
bogofilter-0.94.10-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.10-1.src.rpm
bogofilter-0.94.9-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.9-1.i586.rpm
bogofilter-0.94.9-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.9-1.src.rpm
bogofilter-0.94.8-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.8-1.i586.rpm
bogofilter-0.94.8-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.8-1.src.rpm
bogofilter-0.94.7-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.7-1.i586.rpm
bogofilter-0.94.7-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.7-1.src.rpm
bogofilter-0.94.6-2.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.6-2.i586.rpm
bogofilter-0.94.6-2.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.6-2.src.rpm
bogofilter-0.94.6-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.6-1.i586.rpm
bogofilter-0.94.6-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.6-1.src.rpm
bogofilter-0.94.5-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.5-1.i586.rpm
bogofilter-0.94.5-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.5-1.src.rpm
bogofilter-0.94.4-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.4-1.i586.rpm
bogofilter-0.94.4-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.4-1.src.rpm
bogofilter-0.94.4-1mdk.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 10.2 for i586 bogofilter-0.94.4-1mdk.i586.rpm
bogofilter-0.94.3-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.3-1.i586.rpm
bogofilter-0.94.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.3-1.src.rpm
bogofilter-0.94.3-1mdk.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 10.2 for x86_64 bogofilter-0.94.3-1mdk.x86_64.rpm
bogofilter-0.94.2-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.2-1.i586.rpm
bogofilter-0.94.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.2-1.src.rpm
bogofilter-0.94.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.1-1.i586.rpm
bogofilter-0.94.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.1-1.src.rpm
bogofilter-0.94.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.0-1.i586.rpm
bogofilter-0.94.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.94.0-1.src.rpm
bogofilter-0.93.5-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.5-1.i586.rpm
bogofilter-0.93.5-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.5-1.src.rpm
bogofilter-0.93.4-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.4-1.i586.rpm
bogofilter-0.93.4-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.4-1.src.rpm
bogofilter-0.93.3.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.3.1-1.i586.rpm
bogofilter-0.93.3.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.3.1-1.src.rpm
bogofilter-0.93.3-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.3-1.i586.rpm
bogofilter-0.93.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.3-1.src.rpm
bogofilter-0.93.2-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.2-1.i586.rpm
bogofilter-0.93.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.2-1.src.rpm
bogofilter-0.93.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.1-1.i586.rpm
bogofilter-0.93.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.1-1.src.rpm
bogofilter-0.93.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.0-1.i586.rpm
bogofilter-0.93.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.93.0-1.src.rpm
bogofilter-0.92.8-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.8-1.i586.rpm
bogofilter-0.92.8-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.8-1.src.rpm
bogofilter-0.92.7-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.7-1.i586.rpm
bogofilter-0.92.7-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.7-1.src.rpm
bogofilter-0.92.6-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.6-1.i586.rpm
bogofilter-0.92.6-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.6-1.src.rpm
bogofilter-0.92.6-1mdk.i586.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 10.1 for i586 bogofilter-0.92.6-1mdk.i586.rpm
bogofilter-0.92.6-1mdk.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis Mandriva 10.1 for x86_64 bogofilter-0.92.6-1mdk.x86_64.rpm
bogofilter-0.92.5-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.5-1.i586.rpm
bogofilter-0.92.5-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.5-1.src.rpm
bogofilter-0.92.4-1.2.el4.rf.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 i386 bogofilter-0.92.4-1.2.el4.rf.i386.rpm
bogofilter-0.92.4-1.2.el4.rf.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el4 x86_64 bogofilter-0.92.4-1.2.el4.rf.x86_64.rpm
bogofilter-0.92.4-1.1.el3.dag.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 i386 bogofilter-0.92.4-1.1.el3.dag.i386.rpm
bogofilter-0.92.4-1.1.el3.dag.x86_64.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el3 x86_64 bogofilter-0.92.4-1.1.el3.dag.x86_64.rpm
bogofilter-0.92.4-1.0.el2.dag.i386.html Fast anti-spam filtering by Bayesian statistical analysis DAG packages for Red Hat Linux el2.1 i386 bogofilter-0.92.4-1.0.el2.dag.i386.rpm
bogofilter-0.92.4-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.4-1.i586.rpm
bogofilter-0.92.4-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.4-1.src.rpm
bogofilter-0.92.3-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.3-1.i586.rpm
bogofilter-0.92.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.3-1.src.rpm
bogofilter-0.92.2-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.2-1.i586.rpm
bogofilter-0.92.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.1-1.i586.rpm
bogofilter-0.92.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.1-1.src.rpm
bogofilter-0.92.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.0-1.i586.rpm
bogofilter-0.92.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.92.0-1.src.rpm
bogofilter-0.91.4-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.4-1.i586.rpm
bogofilter-0.91.4-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.4-1.src.rpm
bogofilter-0.91.3-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.3-1.i586.rpm
bogofilter-0.91.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.3-1.src.rpm
bogofilter-0.91.2-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.2-1.i586.rpm
bogofilter-0.91.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.2-1.src.rpm
bogofilter-0.91.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.1-1.i586.rpm
bogofilter-0.91.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.1-1.src.rpm
bogofilter-0.91.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.0-1.i586.rpm
bogofilter-0.91.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.91.0-1.src.rpm
bogofilter-0.90.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.90.0-1.i586.rpm
bogofilter-0.90.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.90.0-1.src.rpm
bogofilter-0.17.5-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.5-1.i586.rpm
bogofilter-0.17.5-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.5-1.src.rpm
bogofilter-0.17.4-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.4-1.i586.rpm
bogofilter-0.17.3-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.3-1.i586.rpm
bogofilter-0.17.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.3-1.src.rpm
bogofilter-0.17.2-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.2-1.i586.rpm
bogofilter-0.17.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.2-1.src.rpm
bogofilter-0.17.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.1-1.i586.rpm
bogofilter-0.17.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.1-1.src.rpm
bogofilter-0.17.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.0-1.i586.rpm
bogofilter-0.17.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.17.0-1.src.rpm
bogofilter-0.16.4-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.4-1.i586.rpm
bogofilter-0.16.4-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.4-1.src.rpm
bogofilter-0.16.3-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.3-1.i586.rpm
bogofilter-0.16.3-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.3-1.src.rpm
bogofilter-0.16.2-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.2-1.i586.rpm
bogofilter-0.16.2-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.2-1.src.rpm
bogofilter-0.16.1-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.1-1.i586.rpm
bogofilter-0.16.1-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.1-1.src.rpm
bogofilter-0.16.0-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.0-1.i586.rpm
bogofilter-0.16.0-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.16.0-1.src.rpm
bogofilter-0.15.13.2-2.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.13.2-2.i586.rpm
bogofilter-0.15.13.1-2.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.13.1-2.i586.rpm
bogofilter-0.15.13-2.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.13-2.i586.rpm
bogofilter-0.15.13-2.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.13-2.src.rpm
bogofilter-0.15.13-1.i586.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.13-1.i586.rpm
bogofilter-0.15.13-1.src.html Fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.13-1.src.rpm
bogofilter-0.15.12-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.12-1.i586.rpm
bogofilter-0.15.12-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.12-1.src.rpm
bogofilter-0.15.11-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.11-1.i586.rpm
bogofilter-0.15.11-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.11-1.src.rpm
bogofilter-0.15.10-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.10-1.i586.rpm
bogofilter-0.15.10-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.10-1.src.rpm
bogofilter-0.15.9-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.9-1.i586.rpm
bogofilter-0.15.9-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.9-1.src.rpm
bogofilter-0.15.8-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.8-1.i586.rpm
bogofilter-0.15.8-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.8-1.src.rpm
bogofilter-0.15.7-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.7-1.i586.rpm
bogofilter-0.15.7-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.7-1.src.rpm
bogofilter-0.15.6-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.6-1.i586.rpm
bogofilter-0.15.6-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.6-1.src.rpm
bogofilter-0.15.5.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.5.2-1.i586.rpm
bogofilter-0.15.5.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.5.2-1.src.rpm
bogofilter-0.15.5.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.5.1-1.i586.rpm
bogofilter-0.15.5.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.5.1-1.src.rpm
bogofilter-0.15.5-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.5-1.i586.rpm
bogofilter-0.15.5-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.5-1.src.rpm
bogofilter-0.15.4-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.4-1.i586.rpm
bogofilter-0.15.4-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.4-1.src.rpm
bogofilter-0.15.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.3-1.i586.rpm
bogofilter-0.15.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.3-1.src.rpm
bogofilter-0.15.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.2-1.i586.rpm
bogofilter-0.15.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.2-1.src.rpm
bogofilter-0.15.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.1-1.i586.rpm
bogofilter-0.15.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.1-1.src.rpm
bogofilter-0.15.0-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.0-1.i586.rpm
bogofilter-0.15.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.15.0-1.src.rpm
bogofilter-0.14.5.4-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5.4-1.i586.rpm
bogofilter-0.14.5.4-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5.4-1.src.rpm
bogofilter-0.14.5.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5.3-1.i586.rpm
bogofilter-0.14.5.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5.3-1.src.rpm
bogofilter-0.14.5.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5.2-1.i586.rpm
bogofilter-0.14.5.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5.2-1.src.rpm
bogofilter-0.14.5.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5.1-1.i586.rpm
bogofilter-0.14.5.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5.1-1.src.rpm
bogofilter-0.14.5-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5-1.i586.rpm
bogofilter-0.14.5-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.5-1.src.rpm
bogofilter-0.14.4-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.4-1.i586.rpm
bogofilter-0.14.4-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.4-1.src.rpm
bogofilter-0.14.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.3-1.i586.rpm
bogofilter-0.14.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.3-1.src.rpm
bogofilter-0.14.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.2-1.i586.rpm
bogofilter-0.14.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.2-1.src.rpm
bogofilter-0.14.1.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.1.1-1.i586.rpm
bogofilter-0.14.1.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.1.1-1.src.rpm
bogofilter-0.14.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.1-1.i586.rpm
bogofilter-0.14.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.1-1.src.rpm
bogofilter-0.14.0.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.0.1-1.i586.rpm
bogofilter-0.14.0.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.0.1-1.src.rpm
bogofilter-0.14.0-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.0-1.i586.rpm
bogofilter-0.14.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.14.0-1.src.rpm
bogofilter-0.13.7.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.7.3-1.i586.rpm
bogofilter-0.13.7.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.7.3-1.src.rpm
bogofilter-0.13.7.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.7.2-1.i586.rpm
bogofilter-0.13.7.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.7.2-1.src.rpm
bogofilter-0.13.7.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.7.1-1.i586.rpm
bogofilter-0.13.7.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.7.1-1.src.rpm
bogofilter-0.13.7-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.7-1.i586.rpm
bogofilter-0.13.7-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.7-1.src.rpm
bogofilter-0.13.6.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.6.3-1.i586.rpm
bogofilter-0.13.6.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.6.3-1.src.rpm
bogofilter-0.13.6.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.6.2-1.i586.rpm
bogofilter-0.13.6.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.6.2-1.src.rpm
bogofilter-0.13.6.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.6.1-1.i586.rpm
bogofilter-0.13.6.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.6.1-1.src.rpm
bogofilter-0.13.6-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.6-1.i586.rpm
bogofilter-0.13.6-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.6-1.src.rpm
bogofilter-0.13.5-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.5-1.i586.rpm
bogofilter-0.13.5-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.5-1.src.rpm
bogofilter-0.13.4.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.4.1-1.i586.rpm
bogofilter-0.13.4.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.4.1-1.src.rpm
bogofilter-0.13.4-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.4-1.i586.rpm
bogofilter-0.13.4-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.4-1.src.rpm
bogofilter-0.13.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.3-1.i586.rpm
bogofilter-0.13.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.3-1.src.rpm
bogofilter-0.13.2.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.2.1-1.i586.rpm
bogofilter-0.13.2.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.2.1-1.src.rpm
bogofilter-0.13.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.2-1.i586.rpm
bogofilter-0.13.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.2-1.src.rpm
bogofilter-0.13.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.1-1.i586.rpm
bogofilter-0.13.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.1-1.src.rpm
bogofilter-0.13.0-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.0-1.i586.rpm
bogofilter-0.13.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.13.0-1.src.rpm
bogofilter-0.12.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.12.3-1.i586.rpm
bogofilter-0.12.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.12.3-1.src.rpm
bogofilter-0.12.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.12.2-1.i586.rpm
bogofilter-0.12.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.12.2-1.src.rpm
bogofilter-0.12.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.12.1-1.i586.rpm
bogofilter-0.12.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.12.1-1.src.rpm
bogofilter-0.12.0-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.12.0-1.i586.rpm
bogofilter-0.12.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.12.0-1.src.rpm
bogofilter-0.11.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.2-1.i586.rpm
bogofilter-0.11.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.2-1.src.rpm
bogofilter-0.11.1.9-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.9-1.i586.rpm
bogofilter-0.11.1.9-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.9-1.src.rpm
bogofilter-0.11.1.8-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.8-1.i586.rpm
bogofilter-0.11.1.8-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.8-1.src.rpm
bogofilter-0.11.1.7-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.7-1.i586.rpm
bogofilter-0.11.1.7-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.7-1.src.rpm
bogofilter-0.11.1.6-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.6-1.i586.rpm
bogofilter-0.11.1.6-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.6-1.src.rpm
bogofilter-0.11.1.5-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.5-1.i586.rpm
bogofilter-0.11.1.5-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.5-1.src.rpm
bogofilter-0.11.1.4-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.4-1.i586.rpm
bogofilter-0.11.1.4-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.4-1.src.rpm
bogofilter-0.11.1.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.3-1.i586.rpm
bogofilter-0.11.1.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.3-1.src.rpm
bogofilter-0.11.1.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.2-1.i586.rpm
bogofilter-0.11.1.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.2-1.src.rpm
bogofilter-0.11.1.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.1-1.i586.rpm
bogofilter-0.11.1.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1.1-1.src.rpm
bogofilter-0.11.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1-1.i586.rpm
bogofilter-0.11.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.1-1.src.rpm
bogofilter-0.11.0-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.0-1.i586.rpm
bogofilter-0.11.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.11.0-1.src.rpm
bogofilter-0.10.3.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.3.1-1.i586.rpm
bogofilter-0.10.3.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.3.1-1.src.rpm
bogofilter-0.10.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.3-1.i586.rpm
bogofilter-0.10.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.3-1.src.rpm
bogofilter-0.10.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.2-1.i586.rpm
bogofilter-0.10.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.2-1.src.rpm
bogofilter-0.10.1.5-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.5-1.i586.rpm
bogofilter-0.10.1.5-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.5-1.src.rpm
bogofilter-0.10.1.4-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.4-1.i586.rpm
bogofilter-0.10.1.4-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.4-1.src.rpm
bogofilter-0.10.1.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.3-1.i586.rpm
bogofilter-0.10.1.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.3-1.src.rpm
bogofilter-0.10.1.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.2-1.i586.rpm
bogofilter-0.10.1.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.2-1.src.rpm
bogofilter-0.10.1.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.1-1.i586.rpm
bogofilter-0.10.1.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1.1-1.src.rpm
bogofilter-0.10.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1-1.i586.rpm
bogofilter-0.10.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.1-1.src.rpm
bogofilter-0.10.0-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.0-1.i586.rpm
bogofilter-0.10.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.10.0-1.src.rpm
bogofilter-0.9.1.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.1.2-1.i586.rpm
bogofilter-0.9.1.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.1.2-1.src.rpm
bogofilter-0.9.1.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.1.1-1.i586.rpm
bogofilter-0.9.1.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.1.1-1.src.rpm
bogofilter-0.9.1-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.1-1.i586.rpm
bogofilter-0.9.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.1-1.src.rpm
bogofilter-0.9.0.5-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.5-1.i586.rpm
bogofilter-0.9.0.5-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.5-1.src.rpm
bogofilter-0.9.0.4-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.4-1.i586.rpm
bogofilter-0.9.0.4-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.4-1.src.rpm
bogofilter-0.9.0.3-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.3-1.i586.rpm
bogofilter-0.9.0.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.3-1.src.rpm
bogofilter-0.9.0.2-1.i586.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.2-1.i586.rpm
bogofilter-0.9.0.2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.2-1.src.rpm
bogofilter-0.9.0.1-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.1-1.i386.rpm
bogofilter-0.9.0.1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0.1-1.src.rpm
bogofilter-0.9.0-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0-1.i386.rpm
bogofilter-0.9.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.9.0-1.src.rpm
bogofilter-0.8.0-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.8.0-1.i386.rpm
bogofilter-0.8.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.8.0-1.src.rpm
bogofilter-0.8.0.rc3-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.8.0.rc3-1.i386.rpm
bogofilter-0.8.0.rc3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.8.0.rc3-1.src.rpm
bogofilter-0.8.0.rc2-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.8.0.rc2-1.i386.rpm
bogofilter-0.8.0.rc2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.8.0.rc2-1.src.rpm
bogofilter-0.8.0.rc1-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.8.0.rc1-1.i386.rpm
bogofilter-0.8.0.rc1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.8.0.rc1-1.src.rpm
bogofilter-0.7.6.0-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.6.0-1.i386.rpm
bogofilter-0.7.6.0-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.6.0-1.src.rpm
bogofilter-0.7.6.rc2-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.6.rc2-1.i386.rpm
bogofilter-0.7.6.rc2-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.6.rc2-1.src.rpm
bogofilter-0.7.6.rc1-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.6.rc1-1.i386.rpm
bogofilter-0.7.6.rc1-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.6.rc1-1.src.rpm
bogofilter-0.7.5-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.5-1.i386.rpm
bogofilter-0.7.5-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.5-1.src.rpm
bogofilter-0.7.4-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.4-1.i386.rpm
bogofilter-0.7.4-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.4-1.src.rpm
bogofilter-0.7.3-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.3-1.i386.rpm
bogofilter-0.7.3-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7.3-1.src.rpm
bogofilter-0.7-1.i386.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7-1.i386.rpm
bogofilter-0.7-1.src.html fast anti-spam filtering by Bayesian statistical analysis SourceForge bogofilter-0.7-1.src.rpm | {"url":"http://www.rpmfind.net/linux/rpm2html/search.php?query=bogofilter","timestamp":"2014-04-17T10:10:29Z","content_type":null,"content_length":"236705","record_id":"<urn:uuid:5e183c53-3f46-42db-b121-54da5bbc3528>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
can anyone help me with these questions? what is the correct answer?
July 23rd 2010, 10:52 PM
can anyone help me with these questions? what is the correct answer?
Is it a, b, c, or d??
1) Solve log(x-3) + log(x-2) = log(2x+24)
b) x = -2
c) x = -2 and x = 9
d) x = -9
2) If (x-3) is a factor of f(x) = 2x^2 + ax - 3, the value of a is:
July 23rd 2010, 11:01 PM
mr fantastic
1) Using a well-known log rule, the equation simplifies to log[(x-3)(x-2)] = log(2x+24) => (x-3)(x-2) = 2x + 24. Either solve this equation or test each option. If you want to solve the equation, please show all your work and say where you get stuck if you need help.
2) If (x - 3) is a factor then f(3) = 0. Solve for a.
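For anyone who wants the algebra carried through, here is one way the hinted steps play out (a sketch filling in the work left to the poster, not quoted from the thread):

1) \log(x-3) + \log(x-2) = \log\big[(x-3)(x-2)\big] = \log(2x+24)
   \Rightarrow x^2 - 5x + 6 = 2x + 24
   \Rightarrow x^2 - 7x - 18 = 0
   \Rightarrow (x-9)(x+2) = 0,
   so x = 9 or x = -2; since x = -2 makes \log(x-3) and \log(x-2) undefined, only x = 9 is admissible.

2) f(3) = 2(3)^2 + 3a - 3 = 15 + 3a = 0 \Rightarrow a = -5.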
July 23rd 2010, 11:37 PM
Actually, I solved both the questions but I don't know if my answers are right...
My answers:
Are they right??
July 24th 2010, 02:59 PM
mr fantastic | {"url":"http://mathhelpforum.com/pre-calculus/151838-can-anyone-help-me-these-questions-what-correct-answer-print.html","timestamp":"2014-04-18T18:35:34Z","content_type":null,"content_length":"7926","record_id":"<urn:uuid:01a93e06-daf4-42b8-8c2e-a5f7e6f6f325>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Long Weekend on Friday, July 31, 2009 at 6:45pm.
The Ali Baba Co is the only supplier of a particular type of Oriental carpet. The estimated demand for its carpets is Q = 112,000 - 500P + 5M, where Q = number of carpets, P = price of carpets (dollars per unit), and M = consumers' income per capita. The estimated average variable cost function for Ali Baba's carpets is AVC = 2000 - 0.012Q + 0.000002Q^2.
Consumers' income per capita is expected to be $20,000 and total fixed cost is $100,000.
#1. How many carpets should the firm produce in order to maximize profit?
#2. What is the profit maximizing price of carpets?
#3. What is the maximum amount of profit that the firm can earn selling carpets?
#4. Answer parts a through c if consumers' income per capita is expected to be $30,000 instead.
• Economics - economyst, Friday, July 31, 2009 at 10:58pm
This is a straight monopoly problem; set MC=MR and solve for Q, and then determine P. The tricky part is that the total cost function is a cubic.
Since M is given, rewrite the demand equation as Q = 212,000 - 500P. Next, rewrite so P is a function of Q: P = 424 - 0.002Q
Now then TR = P*Q = 424Q - 0.002Q^2
MR is the first derivative, so MR = 424 - 0.004Q
TVC = AVC*Q = 2000Q - 0.012Q^2 + 0.000002Q^3
MC is the first derivative
MC = 2000 - 0.024Q + 0.000006Q^2
My quadratic equation is 0 = 1576 - 0.02Q + 0.000006Q^2.
Which, by my calculations, does not have a real solution, which means no maximum. Please, please check my math. I am confident in my methodology, less so in my arithmetic.
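A quick discriminant check (a sketch run on economyst's own coefficients, which still carry the 2000 intercept from the AVC as originally posted) bears this out:

b^2 - 4ac = (-0.02)^2 - 4(0.000006)(1576) = 0.0004 - 0.037824 = -0.037424 < 0,

so 0.000006Q^2 - 0.02Q + 1576 = 0 has no real roots and MR never meets MC, which is consistent with the suspicion that something in the cost function is off.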
• Economics - LongWeekend, Saturday, August 1, 2009 at 3:53pm
Economyst, could you elaborate on #1 and #4? I'm stuck!
• Economics - economyst, Monday, August 3, 2009 at 9:45am
for #1: Always, always, always maximize where MC = MR. (Since you have a quadratic for an MC equation, you may have two points where MC = MR. One will represent where profits are maximized, the other where profits are minimized. In calculus, check second-order conditions. Or, for simplicity, test both answers and pick the one with the highest profit.) So, I set MR = MC as 424 - 0.004Q = 2000 - 0.024Q + 0.000006Q^2. Rearranging terms, I got my quadratic equation; the next step is to solve for Q.
for #4: It is the exact same procedure as #1-#3, except with a different demand function. Here Q = 262,000 - 500P.
I hope this helps.
• Managerial Economics - Jorge, Friday, December 11, 2009 at 10:06pm
This cannot be worked out because there is an extra 0 on the first term in AVC = 2000 - 0.012Q + 0.000002Q^2. It should really be AVC = 200 - 0.012Q + 0.000002Q^2.
• Economics - Jorge, Friday, December 11, 2009 at 11:25pm
The estimated demand equation is Q = 112,000 - 500P + 5(20,000) = 212,000 - 500P.
The inverse demand function is P = 424 - 0.002Q
The marginal revenue is MR = 424 - 0.004Q
The average variable cost is AVC = 200 - 0.012Q + 0.000002Q^2
The marginal cost is SMC = 200 - 0.024Q + 0.000006Q^2
a. How many carpets should the firm produce in order to maximize profit?
To maximize profit, marginal revenue is set equal to marginal cost.
MR = SMC: 424 - 0.004Q = 200 - 0.024Q + 0.000006Q^2, which rearranges to 224 + 0.020Q - 0.000006Q^2 = 0
Solving for Q gives Q = -4,666.67 and Q = 8,000. Since there can be no negative output, the firm should produce 8,000 carpets.
b. What is the profit-maximizing price of carpets?
The profit-maximizing price comes from P = 424 - 0.002Q; when Q = 8,000, the price is $408.
c. What is the maximum amount of profit that the firm can earn selling carpets?
The maximum amount of profit the firm can earn selling carpets is:
(P * Q) - [(AVC * Q) + TFC] = (408 * 8,000) - [(232 * 8,000) + 100,000] = $1,308,000
d. Answer parts a through c if consumers' income per capita is expected to be $30,000 instead.
The estimated demand equation is Q = 112,000 - 500P + 5(30,000) = 262,000 - 500P.
The inverse demand function is P = 524 - 0.002Q
The marginal revenue is MR = 524 - 0.004Q
The average variable cost is AVC = 200 - 0.012Q + 0.000002Q^2
The marginal cost is SMC = 200 - 0.024Q + 0.000006Q^2
To maximize profit, marginal revenue is set equal to marginal cost.
MR = SMC: 524 - 0.004Q = 200 - 0.024Q + 0.000006Q^2, which rearranges to 324 + 0.020Q - 0.000006Q^2 = 0
Solving for Q gives Q = -5,868.44 and Q = 9,201.77. Since there can be no negative output, the firm should produce about 9,202 carpets.
The profit-maximizing price comes from P = 524 - 0.002Q; when Q = 9,202, the price is about $505.60.
The maximum amount of profit the firm can earn selling carpets is:
(P * Q) - [(AVC * Q) + TFC] = (505.60 * 9,202) - [(258.93 * 9,202) + 100,000], which is approximately $2,170,000.
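As a check on part a, the first-order condition can be solved directly with the quadratic formula (a sketch; the coefficients below come from the corrected AVC intercept of 200 discussed above):

0.000006Q^2 - 0.020Q - 224 = 0
\Rightarrow Q = \frac{0.020 \pm \sqrt{(0.020)^2 + 4(0.000006)(224)}}{2(0.000006)}
             = \frac{0.020 \pm \sqrt{0.005776}}{0.000012}
             = \frac{0.020 \pm 0.076}{0.000012},

giving Q = 8,000 or Q = -4,666.67. The second-order condition picks out the positive root as the maximum: d^2(profit)/dQ^2 = 0.020 - 0.000012Q = -0.076 < 0 at Q = 8,000.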
• Economics - Ramesh, Tuesday, February 23, 2010 at 11:59pm
Demand Equation: Q = 612,000 - 500P
| {"url":"http://www.jiskha.com/display.cgi?id=1249080343","timestamp":"2014-04-20T16:24:41Z","content_type":null,"content_length":"13306","record_id":"<urn:uuid:01683dcd-438d-4548-bcdf-b13455d40956>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ellsworth J.
Hello! Thank you for visiting my profile page! My name is Ellsworth and I would LOVE to help you overcome any difficulties you may be facing with math or related subjects.
First of all, who better to learn math from than … an actual math teacher? That’s right! I taught high school math for the last eight (8) years. Here’s a quick read of my qualifications:
•Bachelor’s degree in Electrical Engineering (from MIT!)
•Master’s degree in Computer Science
•Eight years of high school classroom math teaching experience in TWO states (Texas and California)
•Currently certified to teach math in Texas
•Extensive prior tutoring (math) experience both as a school teacher and with tutoring firms
•Prior college teaching experience in various math courses
WHAT CAN YOU HELP ME WITH?
Probably just about any math area you’re likely to face in high school or college. This includes, but is not limited to:
•Algebra 1*
•Math Models*
•Algebra 2*
•Calculus (intro)
•Developmental Math
•College Algebra
(* = “I’ve taught this class at least once, in an actual school classroom, with real live students!”)
Also, I can help you prepare for tests:
•SAT (especially math!)
•ACT (especially math!)
•GRE (especially… well, you get the idea…)
I also like to do what I call “math literacy”, which is basically everyday math that most schools have now abandoned because it’s not a college-prep course. I think this is a terrible mistake! I’m
talking about general math skills that you encounter in daily life, like scaling up a recipe to make multiple batches of cookies, calculating how much that sale discount takes off the price of a
sweater, or how to (fairly) split up the cost of a meal with friends (including tax!).
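For instance (with invented numbers): a 25%-off sale on a $40 sweater, and an even three-way split of a $60 dinner with 8% sales tax, work out as

0.75 \times \$40 = \$30 \quad \text{(sale price)}
\frac{\$60 \times 1.08}{3} = \frac{\$64.80}{3} = \$21.60 \text{ per person.}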
If you need something not on this list, just ask. I’ll give you some information and/or point you in the right direction.
SO, HOW DO YOU WORK WITH STUDENTS TO KNOW WHAT THEY NEED?
Well, there are at least three approaches, and I use one or more depending on the situation:
1) Interview the student and family to get a sense of strengths and areas for improvement
2) Perform a general assessment (that’s teacher-speak for “give a test”) to spot trouble areas. I made my own Algebra skills test just for this purpose!
3) Ask them if there is some area they are specifically having trouble with (e.g. word problems, solving quadratic equations, reducing rational expressions…)
GREAT. SO THEN WHAT?
Based on what we find, we develop customized tutoring plans. I recommend several possible courses of tutoring, the same way a doctor writes you a prescription when you’re sick. The difference here,
though, is I can give you a range of options to consider, depending on the results you want to achieve and the time frame in which you want them.
Using your preferences and requirements, we can fine-tune a plan to fit your budget and schedule.
I have developed an approach that I call “I do / we do / you do”:
•I Do: I explain and demonstrate the concept by solving sample problems
•We Do: We do a couple more problems, but I put more of the load on your shoulders
•You Do: Still more problems which I let you work completely on your own while I watch
In this way, you are assured that you can do the problems because… well, you just did them! If you run into trouble we can repeat the above process, or, alternatively, step back and work on a
supporting concept.
HOW DO I KNOW WE’LL ALL GET ALONG? I DON’T WANNA RISK MONEY TO FIND OUT…
Well, you can certainly “try before you buy”! The first session is free, where we discuss your needs and maybe do a little light problem-solving to see how things go. That way everyone can assess
their comfort level and act accordingly.
SOUNDS WONDERFUL! WHAT DO I DO NOW?
You can send me an e-mail with specific questions that you may have. Or you can just contact the folks at Wyzant to help you set something up. Either way, we can start the process of getting you the
help you need.
Math is hard for people for many different reasons. Perhaps it’s because it wasn’t taught well the first time. Maybe you just weren’t ready to receive the information, or connections weren’t made
between one math topic and another.
Whatever the cause, it’s 100% fixable, but with a catch: you have to BE WILLING TO DO THE WORK to become proficient. No one who has seriously applied themselves has ever failed my math class, and,
conversely, the single leading cause of failure is “not doing the work”.
So if you’re looking to improve your math proficiency, come with the right attitude. I LOVE TEACHING MATH, and nothing gives me more satisfaction than helping a deserving student succeed where they
struggled before, demonstrating first-hand that doing well in math IS possible if you put in the effort.
I’m ready to start our math adventure. Are you?
Ellsworth's subjects | {"url":"http://www.wyzant.com/Tutors/TX/Katy/8018116/?g=3FI","timestamp":"2014-04-18T14:10:51Z","content_type":null,"content_length":"96916","record_id":"<urn:uuid:298a77e8-ad38-4bbd-808c-b31d9c16af27>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |