Radar Imaging - Week 1 The lectures will begin with an overview of the idea of radar imaging, including some beautiful images to provide motivation. The radar system architecture is discussed, along with a mathematical description of what each part of the system does to the transmitted or received signal. The processing of the received signal already involves interesting mathematical ideas to pull the signal out of the noise. Next there will be a discussion of how the electromagnetic waves propagate after they leave the antenna. The propagation of electromagnetic waves is governed by Maxwell's equations; the audience will learn why a scalar wave equation is appropriate for most analysis of radar scattering. The analysis of radar signals will first be carried out with a simple one-dimensional wave propagation model. The audience will learn from this that radar can measure both target range and target velocity, but that an uncertainty principle prevents range and velocity from being measured simultaneously with arbitrary accuracy. The theory based on the one-dimensional model is sufficient to understand some of the basic ideas behind high-resolution imaging, and understanding this point of view is important for communicating with radar engineers. Next, a fully three-dimensional scalar wave propagation model will be introduced, and the basics of scattering theory will be discussed. The three-dimensional model leads to a formula for the radar signal that includes the transmitted waveform, the reflectivity of the target, and geometrical spreading factors. This model will be used to explain ISAR (Inverse Synthetic Aperture Radar) imaging, which is generally used for airborne targets. The audience will learn that ISAR imaging commonly reduces to a multidimensional Fourier transform. A movie will be shown of ISAR images. For targets on the ground, it is important to analyze the radiation from the antenna.
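The idea of pulling the signal out of the noise already admits a compact illustration: pulse compression by matched filtering, in which correlating the received signal with the transmitted pulse concentrates the echo energy at the target delay. A minimal sketch follows; the chirp parameters, delay, and noise level are hypothetical, chosen only for illustration.

```python
import math
import random

def chirp(n, f0, f1):
    # Linear-FM pulse sampled at unit rate: instantaneous frequency
    # sweeps from f0 to f1 (cycles/sample) over n samples.
    return [math.cos(2 * math.pi * (f0 * i + (f1 - f0) * i * i / (2.0 * n)))
            for i in range(n)]

def matched_filter(rx, tx):
    # Cross-correlate the received signal with the transmitted pulse;
    # the peak of the output marks the round-trip delay (hence the range).
    m = len(rx) - len(tx) + 1
    return [sum(rx[k + i] * tx[i] for i in range(len(tx))) for k in range(m)]

random.seed(0)
tx = chirp(64, 0.05, 0.25)
true_delay = 150
rx = [0.3 * random.gauss(0.0, 1.0) for _ in range(400)]  # noise floor
for i, s in enumerate(tx):                               # echo added to the noise
    rx[true_delay + i] += s
out = matched_filter(rx, tx)
est_delay = max(range(len(out)), key=lambda k: out[k])
```

The correlation output peaks at (or within a sample of) the true delay even though the raw received trace is noisy; this is the same processing gain the lectures allude to.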
The discussion of antennas will begin with a slide show of the many different forms of antennas. For the analysis of radiation from antennas, the full vector Maxwell's equations will be used; in particular, the vector potential formulation will be introduced and used to derive a formula relating the current density on the antenna to the far-field radiation pattern. Examples will be included to show how typical antenna beam patterns arise. With three-dimensional wave propagation and antenna beam patterns now understood, the lectures will address spotlight-mode SAR. The audience will discover the similarity between ISAR and spotlight-mode SAR. Next, strip-map mode SAR will be addressed. Here the imaging process is more complicated and depends on the fact that the map from target reflectivity to radar signal is a FIO (Fourier Integral Operator). The audience will learn how to construct a parametrix (approximate inverse) for the FIO; applying this parametrix to the radar signal results in a strip-map mode SAR image. Properties of this image follow from the properties of the FIO; the necessary notions from microlocal analysis will be introduced. The lectures will end with a survey of the state of the art and of some of the areas of active research.
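The link between an antenna's current distribution and its far-field beam can be previewed with the simplest case: the array factor of a uniform linear array (a discretized line current), which is just a sum of phasors. The element count and spacing below are illustrative assumptions, not tied to any antenna in the lectures.

```python
import math

def array_factor(n, d_over_lambda, theta):
    # Magnitude of the far-field phasor sum for n isotropic elements with
    # spacing d (in wavelengths); theta is measured from broadside.
    psi = 2 * math.pi * d_over_lambda * math.sin(theta)
    re = sum(math.cos(k * psi) for k in range(n))
    im = sum(math.sin(k * psi) for k in range(n))
    return math.hypot(re, im)

# 8 elements at half-wavelength spacing: peak of n at broadside,
# first null where sin(theta) = 1/(n * d/lambda) = 0.25
peak = array_factor(8, 0.5, 0.0)
null = array_factor(8, 0.5, math.asin(0.25))
```

The main lobe, nulls, and sidelobes of typical beam patterns all fall out of this phasor sum; the full vector-potential treatment in the lectures generalizes it to continuous current densities.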
{"url":"http://www.siam.org/students/g2s3/2011/radar.html","timestamp":"2014-04-19T17:03:10Z","content_type":null,"content_length":"7734","record_id":"<urn:uuid:a495b905-930a-42b3-a6a2-2572cb2a68fc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus: Graphs of Polynomial Functions Video | MindBites

About this Lesson
• Type: Video Tutorial
• Length: 10:14
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 110 MB
• Posted: 06/26/2009

This lesson is part of the following series:
Calculus (279 lessons, $198.00)
Calculus: Final Exam Test Prep and Review (45 lessons, $64.35)
Calculus: Curve Sketching (20 lessons, $25.74)
Calculus: Graphing Using the Derivative (4 lessons, $5.94)

Taught by Professor Edward Burger, this lesson comes from a comprehensive Calculus course. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/calculus. The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hopital's Rule, functions and their inverses, improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus, and a variety of other AP Calculus, College Calculus and Calculus II topics.

Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College. He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of America. In 2006, Reader's Digest named him in the "100 Best of America". Prof.
Burger is the author of over 50 articles, videos, and books, including the trade book "Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas" and the textbook "The Heart of Mathematics: An Invitation to Effective Thinking". He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals, including the "Journal of Number Theory" and "American Mathematical Monthly". His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the theory of continued fractions. Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.

About this Author

Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Curve Sketching Graphing Using the Derivative Graphs of Polynomial Functions So now, let's put these ideas together and actually do a few examples where you actually see finding the critical points, increasing, decreasing regions and concavity to actually sketch very accurate pictures of more elaborate functions. So, the first one I thought we would look at is the following - f of x equals x to the fourth minus ten x cubed plus five. And the meta question here is to sketch a very accurate drawing of this. But within that meta question, here's what we have to do. First, we have to locate all of the critical points. Second, we have to find out, using the critical points, the regions where the function is increasing and where the function is decreasing. That, by the first derivative test, will actually enable us to figure out which of those critical points are maxes, which are mins and which are neither. Then, once we're armed with that, then we take a look at the second derivative, which will then give us information about concavity. We can find the points of inflection, where the curvature changes, and then draw a little sign chart and see where we're concave up, where we're concave down and then we can sketch a picture. For each of these problems, we're going to do many, many steps. And in fact, I want to give you an opportunity to try these before you actually watch them with me. So to begin with, why don't you try to locate all the critical points for this function f of x? Give it a shot. All right. Well, let's see how we made out. The first thing I need to do is take the derivative, because remember the critical points are those places where the derivative either equals zero or the derivative is undefined but the function actually exists there. So we take the derivative - we see four x cubed minus thirty x squared. And what I'd like to do is first set equal to zero. So if we set equal to zero, well, what do we have?
Well, you'll notice that I can actually factor out the common factor of two x squared. Well, actually let's do that. Factoring things is always a great idea in these problems, by the way. So here I'm left with a two x minus fifteen and I'm setting that derivative equal to zero. I want to find out where those slopes potentially are horizontal, where those tangents are zero. Okay, now if I have a product of two things that yield zero, I know that either this is zero or that's zero. Well, if this term is zero that automatically means that x has to equal zero. So there's a critical point right there. And the other possibility is that this equals zero. And if I solve this, I would see that x would have to be fifteen over two or seven point five. So, therefore, I see I have two critical points. What about that other variety, that other flavor of critical points, namely, where the derivative is undefined? Well, the derivative actually is just some cubic polynomial; it's always defined. If you give me any value of x, I can always cube it. I can always square. I can always define. So, in fact, these are the only two critical points. So these are the critical points. Okay, let's make a little sign chart now to see if any of these are max, mins or neither. So I make a little sign chart for the first derivative. I wonder what that would look like. Well, what I like to do is I like to actually label these points here. So I put in zero and I put in seven point five. Why don't you right now try to fill in the sign chart with the appropriate signs? And then determine where the function is increasing, where it's decreasing and then using that, tell me if either of these are max, mins or neither. Give it a shot. Okay. Well, let's see how we made out here. What I just do is I just pick a point, one representative point in each of these regions and see what the function is doing. So here, for example, you might want to pick, say negative one, that's to the left of zero. 
And when you plug in negative one in here, what do we see? Well, negative one cubed is negative one. So I see minus four, minus thirty, that's very negative. So, in fact, this region here is going to be a negative region. Here the derivative is zero. Now, what happens here? Well, between zero and seven point five, I could pick something like say one. Plug in one in here and I see four minus thirty, well, that's still negative. So I have a negative region here. Here the derivative equals zero, we already saw that. And what about here? Well, here you could pick something really large; you could pick something like say ten. Ten cubed is a one thousand, so this is four thousand minus only three thousand, so that's positive. So, in fact, this is the sign chart. Using the sign chart for the derivative, what do we see? Well, we see that the function must be falling here because the derivative is negative. It has negative slopes. So the function's falling and the function's also falling here. Here, however, the function is on the rise. So this sign chart actually provides the following information - I now see that the function is decreasing up to zero and then decreasing past zero up to seven point five but then it increases. So what happens to zero? A max or a min? Well, it's neither, because I am decreasing and then I'm decreasing more. That's not a peak and it's not a valley. What about here? Well, that plainly is a valley. We come down and then we go up. So, in fact, that's a min. So this is a min, and this is neither. Well, that tells us all about increasing, decreasing of the function and we also got to create this one minimum, no max, and zero is neither. Okay, now let's take a look at the curvature. So, what do we do for curvature? Well, we want to look at the second derivative. And so we take a look at the second derivative and what do we get? Well, the second derivative is the derivative of the first derivative, so that's twelve x squared minus sixty x. 
So, the first thing I'll do is actually figure out when that second derivative equals zero. Those are finding possible points of inflection, places where the concavity might switch from concave up, concave down, concave down to concave up. So, let's set that equal to zero, just like before. You'll notice that this is very parallel to what we were doing before. So, in fact, let me right now have you try to find the points of inflection for this. Give it a shot right now. Okay, well if you set this equal to zero, I see I can factor out a common factor of twelve x. And when I do that I see twelve x times x minus five. And if I set that equal to zero, I see either x is zero - that sounds familiar, doesn't it? We already saw that x equals zero produces a place where the derivative is zero. But now we see the second derivative is also zero there. And the other possibility is that x equals five. So these are two candidates for points of inflection. To see if they really are, what I do is I sort of parallel this theory. I make a little sign chart now for the second derivative. And I just mark down these points, so I'll mark down zero and five and I want to fill in with the sign of the second derivative. Why don't you try that right now and determine a couple of things - figure out where this curve is concave up, where the curve is concave down and see if either of these points are points of inflections. Give it a shot. Well, let's see, if I pick a point to the left of zero, say one, and go back to the second derivative, if I plug in one for x, I see negative. So there's a negative region here. Here I see the second derivative equals zero. Between zero and five, I could pick one. X equals one. Plug that in here. If you put in x equals one; you'd still see negative region. And then what happens here after five? Well, after five what I see is if I pick something like ten, if I put in ten here, what would I see? Well, if I put in ten, this would give me six hundred. 
This would give me one hundred, so this would give me times twelve would be twelve hundred, so this actually becomes positive. Now let's recap and make sure this is all okay. So, what if I plugged in a negative one? If I plugged in a negative one, actually look what happens if you plug in a negative one. This actually is a plus sixty and this is still positive. So, in fact, these should really be positive here. So you put in a negative one, negative one actually makes this positive. So, be careful. Well, I see positive, negative, positive, it's zero here. So I see both of these points are points of inflection - so these are points of inflections. Great! And what about the concavity? Well, since this is positive in this region, I see this is concave up so I sort of put a little happy face here. Here negative region, concave down and here I see positive region, concave up. So that's all the data and using that we can actually figure out what the graph of this looks like. One thing we might want to have are the values of the functions at all these points. So you, in fact, you would know to just plug into the original function, which gives you height to determine what the value of the function is in each of these points. So, for example, f of zero - now I go back to the original function to get the height, where the dot is. And I see immediately that's five. The next point I'll need is f of seven point five. And if you plug in seven point five here and work that out, I think you'll see minus one thousand forty-nine point six eight and it goes on for a while. And then if you plug in this five, which is a point of inflection, and evaluate I think you'll see minus six hundred twenty. Now, using all this information and these two charts see if you can sketch an accurate picture of the graph. Try it right now. Okay, well, let's see if we can do this together. We have all the information we need right over there and so let's take a look at what we have. 
Okay, well we know that at zero, the function is zero. Sorry, at zero we know that the function is five - so let me put in a point right here zero five. At seven point five, right around here, we know the function is minus one thousand forty nine, so it's way down. So I'm not going to be able to draw this perfectly to scale, I'm going to put the point right down here. So this is minus one thousand forty-nine and this is near seven point five. And then at five, which is the other point that we're going to need, I'll put the five right here - where are we? We're at minus six hundred and something, so I'll put a point right in here. And what do I know? I know that the function is decreasing and the function decreases all the way to here and then it increases. That's what that chart says, that first chart tells me. So I'm going to get some sort of vee effect here. That's the minimum, as we already saw. Okay, but now what about the curvature? Well, what we saw was that it's concave up, so it's going to be sitting up like this and then concave down and then concave up. And so we put all that information together, decreasing, increasing, concave up, then concave down, then concave up. It would look something like this. The graph would look something like this. I have a pretty version of that so you can really see all of the detail. There's a really pretty version of this and you can see how the function is concave up and then it levels off and then it's concave down for a while and then it becomes concave up again and only now does it start to increase. The derivative now is positive and so you get a sketch like this. Take a look at this. Give this a try yourself and I'll be back with another example in a second. See you there.
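The whole analysis in this lesson is easy to check numerically; here is a short sketch (not part of the lecture) that encodes f, its derivatives, the sign charts, and the heights used for the graph:

```python
def f(x):   return x**4 - 10*x**3 + 5
def fp(x):  return 4*x**3 - 30*x**2      # first derivative: 2x^2 (2x - 15)
def fpp(x): return 12*x**2 - 60*x        # second derivative: 12x (x - 5)

# critical points at x = 0 and x = 7.5; inflection candidates at x = 0 and x = 5
crit_ok = (fp(0) == 0 and fp(7.5) == 0)
infl_ok = (fpp(0) == 0 and fpp(5) == 0)

# sign charts: f' is negative, negative, positive on the three regions;
# f'' is positive, negative, positive
fp_signs = [fp(-1) < 0, fp(1) < 0, fp(10) > 0]     # min at 7.5, neither at 0
fpp_signs = [fpp(-1) > 0, fpp(1) < 0, fpp(10) > 0]  # up, down, up

# heights used for the sketch: f(0) = 5, f(5) = -620, f(7.5) = -1049.6875
heights = (f(0), f(5), f(7.5))
```

Evaluating a representative point in each region, as the lecture does by hand, reproduces the two sign charts exactly.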
{"url":"http://www.mindbites.com/lesson/3471-calculus-graphs-of-polynomial-functions","timestamp":"2014-04-19T12:51:59Z","content_type":null,"content_length":"65054","record_id":"<urn:uuid:e23e46c5-3835-4ca6-846f-27604ce33878>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: indicator variable and interaction term different signs but both significant

From: David Hoaglin <dchoaglin@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: indicator variable and interaction term different signs but both significant
Date: Sun, 7 Apr 2013 21:51:08 -0400

Thanks for the thoughtful discussion. I'm glad to elaborate.

The short answer, oversimplifying somewhat (but not a lot), is that the "common phrasing" is incorrect, because it does not reflect the way multiple regression works. For reference, not tied to the present example, one version of the common interpretation (which appears in far too many books) is that a coefficient in a multiple regression tells us about the change in y corresponding to an increase of 1 unit in that predictor when the other predictors are held constant. In less categorical language, I usually say that, as a general interpretation, it is oversimplified and often incorrect. Thus, my "preferred interpretation" is superior simply because it accurately reflects the way multiple regression works (more below).

When you say, "According to the model ...," the phrase "when ... the values of other variables are the same for both" is not actually "according to the model." The distinction may be clearer if you consider the partial-regression plot (also called the "added-variable plot") for a chosen predictor. The vertical coordinate is the residual from the regression of y on the other predictors, and the horizontal coordinate is the residual from the regression of the chosen predictor on the other predictors.
The slope of the regression line through the origin of the partial-regression plot equals the coefficient of the chosen predictor in the multiple regression (in which the predictors are the chosen predictor and the other predictors). This result is straightforward mathematics, and it motivates the interpretation that the coefficient of the chosen predictor tells how the dependent variable changes per unit change in that predictor after adjusting for simultaneous linear change in the other predictors in the data at hand. The adjustment consists of freeing y (and the chosen predictor) of regression on the other predictors. The process of fitting a multiple regression model does not hold those other predictors constant.

Cook and Weisberg (1982, Section 2.3.2) give a proof. I haven't tried to locate the earliest proof, but Yule (1907, Section 9) has an elegant proof. Mosteller and Tukey (1977) have a chapter entitled "Woes of Regression Coefficients" and a proof (in Section 14K). The development of regression in the introductory textbook by De Veaux et al. (2012) includes the correct general interpretation.

My point about not extrapolating beyond the data is not moot, because I was focusing mainly on size, leverage, litigation, private_D, and

Multiple regression is often more complex than it appears. To gain a proper understanding, however, one has to come to grips with the complexity. The "held constant" interpretation of regression coefficients introduces avoidable confusion and impedes proper understanding.

I hope this discussion helps.

David Hoaglin

Cook RD, Weisberg S (1982). Residuals and Influence in Regression. Chapman and Hall.
De Veaux RD, Velleman PF, Bock DE (2012). Stats: Data and Models, 3rd ed. Addison-Wesley.
Mosteller F, Tukey JW (1977). Data Analysis and Regression. Addison-Wesley.
Yule GU (1907). On the theory of correlation for any number of variables, treated by a new system of notation. Proceedings of the Royal Society of London.
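Hoaglin's partial-regression description can be verified numerically: with two correlated predictors, the multiple-regression coefficient of the chosen predictor equals the slope (through the origin) of the residual-on-residual regression, the Frisch-Waugh-Lovell result. A minimal sketch with simulated data (the coefficients 2.0 and -1.5 and the noise level are arbitrary):

```python
import random

random.seed(0)
n = 200
x2 = [random.gauss(0, 1) for _ in range(n)]
x1 = [0.6 * v + random.gauss(0, 1) for v in x2]            # correlated predictors
y = [2.0 * a - 1.5 * b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

def center(v):
    m = sum(v) / len(v)
    return [u - m for u in v]

x1c, x2c, yc = center(x1), center(x2), center(y)           # absorbs the intercept

# multiple regression of y on x1 and x2: 2x2 normal equations on centered data
s11 = sum(a * a for a in x1c)
s22 = sum(b * b for b in x2c)
s12 = sum(a * b for a, b in zip(x1c, x2c))
s1y = sum(a * c for a, c in zip(x1c, yc))
s2y = sum(b * c for b, c in zip(x2c, yc))
det = s11 * s22 - s12 * s12
b1 = (s22 * s1y - s12 * s2y) / det                         # coefficient of x1

# added-variable route: free y and x1 of regression on x2,
# then regress residual on residual through the origin
g_y, g_1 = s2y / s22, s12 / s22
ry = [c - g_y * b for c, b in zip(yc, x2c)]
r1 = [a - g_1 * b for a, b in zip(x1c, x2c)]
slope = sum(p * q for p, q in zip(ry, r1)) / sum(q * q for q in r1)
```

The two routes agree to machine precision, which is exactly the sense in which the adjustment "frees" y and the chosen predictor of the other predictors rather than holding anything constant.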
Series A, Containing Papers of a Mathematical and Physical Character. 79:182-193. On Sun, Apr 7, 2013 at 4:58 PM, Richard Williams <richardwilliams.ndu@gmail.com> wrote: > Thanks David, but I admit I am still confused. According to the model, it is > the case that "The coefficient for OC_D is the predicted difference between > an overconfident manager and a regular manager when MV = 0 and the values of > other variables are the same for both." If MV = 0 is an uninteresting or > impossible value, that is pretty much a worthless thing to know, but it is > still a correct statement. > Part of what I like about my phrasing (which appears to be a more or less > common phrasing) is that I believe it helps make clear (perhaps along with > some graphs) why you generally shouldn't make a big deal of the coefficient > for the dummy variable, in this case OC_D. It is simply the predicted > difference between the two groups at a specific point, MV = 0, a point that > may not even be possible in practice. Lines go off to infinity in both > directions, and if the lines are non-parallel (as when there are > interactions) there will be an infinite number of possible differences > between the two lines, most of which will be totally uninteresting. I used > to have students making statements like "once you control for female * > income, the effect of female switches from positive to negative" and they > tried to come up with profound theoretical explanations for that. > I agree with you about being careful about extrapolating beyond the range of > the data, but if MV = 0 isn't even theoretically possible it is kind of a > moot point. Testing the statistical significance of any predicted values you > compute should also give you some protection. 
> The main thing, though, is that I am confused by your preferred wording: > "The appropriate general interpretation of an estimated coefficient is that > it tells how the dependent variable changes per unit change in that > predictor after adjusting for simultaneous linear change in the other > predictors in the data at hand." Why exactly is that a superior wording? I'm > not even totally sure what that means. Are you just trying to warn against > extrapolating beyond the observed range of the data? If so I think there is > probably a more straightforward way of phrasing it. And, I don't think it is > clear what "simultaneous linear change in the other predictors" is supposed > to mean. Nor do I think the wording makes it clear what substantive > interpretation you should give to the coefficient for OC_D. > I think we are in agreement on most points, i.e. we both think there is > little point on making a big deal of when MV = 0 when that may not be > interesting or even possible -- but I don't understand why you think your > preferred wording is better and other wordings are incorrect. But I'd be > interested in hearing you elaborate. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2013-04/msg00288.html","timestamp":"2014-04-16T16:14:28Z","content_type":null,"content_length":"16603","record_id":"<urn:uuid:7a341612-4e45-408e-8b63-ae1692d96eca>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] curve_fit step-size and optimal parameters

josef.pktd@gmai...
Wed Jun 10 11:12:51 CDT 2009

On Wed, Jun 10, 2009 at 3:58 AM, Sebastian Walter<sebastian.walter@gmail.com> wrote:
> If you try to fit the frequency with the least-squares distance the
> problem is not only nonlinearity
> but rather the fact that the objective function has many local minimizers.
> At least that's what I have observed in a toy example once.
> Has anyone experience with what to do in that case? (Maybe use the L1 norm instead?)

I would look at estimation in the frequency domain, which I know next to nothing about.

But for your example, I managed to get the estimate either by adding a bit of noise to your y, or by estimating the constant separately. When I removed the constant, the indeterminacy (?) in the parameter estimate went away. Also, if there is a small trend then the estimation worked.

The other way I would try in cases like this would be to use a penalization term (as in Tychonov or Ridge) in the objective function, but I didn't try out how well this would work in your case.

> On Mon, Jun 8, 2009 at 10:19 PM, Robert Kern<robert.kern@gmail.com> wrote:
>> 2009/6/8 Stéfan van der Walt <stefan@sun.ac.za>:
>>> 2009/6/8 Robert Kern <robert.kern@gmail.com>:
>>>> On Mon, Jun 8, 2009 at 14:59, ElMickerino<elmickerino@hotmail.com> wrote:
>>>>> My question is, how can I get curve_fit to use a very small step-size for
>>>>> the phase, or put in strict limits, and to therefore get a robust fit. I
>>>>> don't want to tune the phase by hand for each of my 60+ datasets.
>>>> You really can't. I recommend the A*sin(w*t)+B*cos(w*t)
>>>> parameterization rather than the A*sin(w*t+phi) one.
>>> Could you expand? I can't immediately see why the second
>>> parametrisation is bad.
>> The cyclic nature of phi. It complicates things precisely as the OP describes.
>>> Can't a person do this fit using non-linear
>>> least-squares?
>>> Ah, that's probably why you use the other
>>> parametrisation, so that you don't have to use non-linear least
>>> squares?
>> If you aren't also fitting the frequency, then yes. If you are fitting
>> for the frequency, too, the problem is still non-linear.
>> --
-------------- next part --------------
# author: michael a schmidt
# purpose: fit sinusoid with constant offset
from numpy import *
import numpy as np   # some calls below use the np. prefix
from scipy.optimize import *

def f(x, a, b, c, d):
    # full model: constant offset plus sinusoid
    return a + b*sin(c*x + d)

def f2(x, b, c, d):
    # sinusoid only; the constant is estimated separately
    return b*sin(c*x + d)

def fp(y, x, a, b, c, d):
    # penalized objective function (Tychonov/ridge style), not used
    return np.sum((y - b*sin(c*x + d))**2) + 1e-4*(a**2 + b**2 + c**2 + d**2)

def do_it():
    x, y = load_data('data_file.txt')
    y = 1e6*y + 0.05*np.random.normal(size=y.shape)  # rescale, add a bit of noise
    f_probe = 5.380
    f_pump = 5.408          # Hz
    f_beat = abs(f_probe - f_pump)
    w_beat = 2.*pi*f_beat
    V0 = 0.5*(max(y) + min(y))
    V1 = max(y) - V0
    d0 = 0.0                # initial guess of zero phase
    p0 = [V0, V1, w_beat, d0]
    popt, pcov = curve_fit(f, x, y, p0=p0)
    print "found: V1 = %f +/- %f" % (popt[1], sqrt(pcov[1][1]))
    return popt, x, y, p0, pcov

def do_it2():
    x, y = load_data('data_file.txt')
    y = 1e6*y
    f_probe = 5.380
    f_pump = 5.408          # Hz
    f_beat = abs(f_probe - f_pump)
    w_beat = 2.*pi*f_beat
    V0 = 0.5*(max(y) + min(y))
    V1 = max(y) - V0
    d0 = 0.0                # initial guess of zero phase
    p0 = [V0, V1, w_beat, d0]
    # fit the demeaned data without the constant ...
    popt, pcov = curve_fit(f2, x, y - y.mean(), p0=[V1, w_beat, d0])
    # ... then estimate the constant separately
    aest = (y - f2(x, *popt)).mean()
    popt2 = np.hstack((aest, popt))
    print "found: V1 = %f +/- %f" % (popt[1], sqrt(pcov[1][1]))
    return popt2, x, y, p0, pcov

def load_data(in_file):
    input = open(in_file, 'r')
    lines = input.readlines()
    x = array([0.]*len(lines))
    y = array([0.]*len(lines))
    for i, line in enumerate(lines):
        x[i] = float(line.split()[0])
        y[i] = float(line.split()[1])
    return (x, y)

popt, x, y, p0, pcov = do_it()
import matplotlib.pyplot as plt
yh = f(x, *popt)
pstd = np.sqrt(np.diag(pcov))
print pcov/np.outer(pstd, pstd)    # correlation matrix of the estimates

popt, x, y, p0, pcov = do_it2()
yh = f2(x, *popt[1:]) + popt[0]
pstd = np.sqrt(np.diag(pcov))
print pcov/np.outer(pstd, pstd)
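As a footnote to Robert Kern's advice: when the frequency w is known, the A*sin(w*t) + B*cos(w*t) parameterization makes the problem linear in (A, B), so no iterative optimizer is needed at all. A minimal sketch in pure Python (illustrative frequency and data, no SciPy required):

```python
import math

def fit_sin_cos(t, y, w):
    # y ~ A*sin(w t) + B*cos(w t) is linear in (A, B) for known w:
    # solve the 2x2 normal equations directly.
    s = [math.sin(w * tk) for tk in t]
    c = [math.cos(w * tk) for tk in t]
    ss = sum(a * a for a in s)
    cc = sum(b * b for b in c)
    sc = sum(a * b for a, b in zip(s, c))
    sy = sum(a * v for a, v in zip(s, y))
    cy = sum(b * v for b, v in zip(c, y))
    det = ss * cc - sc * sc
    return (cc * sy - sc * cy) / det, (ss * cy - sc * sy) / det

w = 2.0 * math.pi * 0.028                       # beat frequency, assumed known
t = [0.05 * k for k in range(500)]
y = [1.3 * math.sin(w * tk) - 0.4 * math.cos(w * tk) for tk in t]
A, B = fit_sin_cos(t, y, w)
amp, phase = math.hypot(A, B), math.atan2(B, A)  # back to b*sin(c*x + d) form
```

The amplitude and phase of the original parameterization are recovered at the end, without any of the cyclic-phase trouble the OP ran into.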
{"url":"http://mail.scipy.org/pipermail/scipy-user/2009-June/021439.html","timestamp":"2014-04-20T07:16:59Z","content_type":null,"content_length":"7877","record_id":"<urn:uuid:c369da7b-277b-40e4-bc09-17ac7ec67561>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Gemini Observatory: Spectroscopic Throughput

All throughput measurements are based on observations of spectrophotometric standards, collected under photometric conditions. Throughput is defined as

    T(λ) = N_d(λ) / N_t(λ)

where N_d(λ) is the number of photons detected with wavelength λ and N_t(λ) is the number of photons of the same wavelength hitting the telescope's primary mirror. The latter is given by

    N_t(λ) = N_*(λ) 10^(-0.4 μ A_λ)

where N_*(λ) is the star's photon flux at wavelength λ above Earth's atmosphere, in number of photons/cm^2/s, μ is the airmass, and A_λ is a mean monochromatic extinction coefficient. The numbers available below are average values that do not account for throughput variations as a function of grating angle.

Instrument | Gratings | Latest Throughput | All measurements
-----------|----------|-------------------|-----------------
NIFS       | ZJHK     | 20140306          | ZJHK
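The definitions above translate directly into code; a small sketch (function names and the numerical values are illustrative, not Gemini software):

```python
def photons_at_telescope(n_star, airmass, ext_coeff):
    # N_t(lambda) = N_*(lambda) * 10**(-0.4 * mu * A_lambda)
    # n_star: photon flux above the atmosphere; airmass: mu;
    # ext_coeff: mean monochromatic extinction A_lambda (mag/airmass)
    return n_star * 10 ** (-0.4 * airmass * ext_coeff)

def throughput(n_detected, n_star, airmass, ext_coeff):
    # T(lambda) = N_d(lambda) / N_t(lambda): fraction of photons
    # hitting the primary mirror that end up detected
    return n_detected / photons_at_telescope(n_star, airmass, ext_coeff)

# e.g. 1000 photons/s above the atmosphere, airmass 1.2, A = 0.11 mag/airmass
nt = photons_at_telescope(1000.0, 1.2, 0.11)
```

The extinction correction is the usual magnitude-to-flux conversion, so a larger airmass or extinction coefficient lowers N_t and raises the inferred throughput for the same detected count.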
{"url":"http://www.gemini.edu/sciops/instruments/ipm/data-products/NIFS/nifs-spec-throughput.html","timestamp":"2014-04-19T22:10:39Z","content_type":null,"content_length":"13515","record_id":"<urn:uuid:b1b4d6ba-c484-4227-b03b-3d94574742b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics 8054 (Geyer, Spring 2011) How To How do I temporarily install an R package I am working on? You don't need to, if it has been checked, then it is already installed. R CMD check foo R --vanilla library(foo, lib.loc = "foo.Rcheck") and you are in business. How do I install an R package if I don't have administrator privileges? You install it somewhere where you do have write privileges (on linux, in your home directory). Say you want to install in (under linux) in ~/library. R CMD INSTALL --library=~/library foo_0.1-2.tar.gz will install the library from the unix command line and install.packages("foo", lib = "~/library") will install a library on CRAN from the R command line. Then to use the library put a file .Renviron in your home directory that contains the single line R_LIBS = ~/library Then R will always know to always also look there for installed packages (in addition to the library subdirectory of the RHOME directory). Where is the R API documentation? Documentation? We don't need need no stinkin' documentation. Real programmers don't do documentation (see also the legend of Mel the real programmer). There ain't no documentation, you just have to RTFS. As bad as this situation is, it is infinitely worse for closed source proprietary software. That you just can't extend. R you can. It is very well behaved with respect to extensions. The R API consists of the C functions in the headers you can include when doing R CMD INSTALL. What are these? On our linux boxen do cd `R RHOME` find include -name '*.h' This will list all the header files that can be included. Any C function that doesn't have a prototype in one of these headers is not part of the R API. Don't try to call it from C. Any C function that does have a prototype in one of these headers is part of the R API. These include files constitute a contract between the R core team and application programmers that they will try very hard to not break any of these functions. 
If you use them, they should continue to work in the future. Source code So suppose you have found a function you want to call, for definiteness say find include -name '*.h' | xargs grep pbeta shows you the prototype, but what do the arguments do? For that we have to RTFS and for that you have to get the R source and unpack it. Say wget http://www.biometrics.mtu.edu/CRAN/src/base/R-2/R-2.10.1.tar.gz tar zxf R-2.10.1.tar.gz But here in the stat department we already have a copy. cd /APPS/src/R-2.10.1 find src -name pbeta.c This may not work because the file name does not have to reflect the function(s) defined in the file. You may have to do find src -name '*.c' | xargs grep pbeta This, of course, turns up not only the file pbeta.c where the function is defined but every place in the R codebase where this function is called. How do I turn on Compiler Flags in my R Package? The short answer is Makevars. For example (the one done in class), to check the Fortran in your package, create a file name Makevars in the src directory of your package that contains the single line PKG_FFLAGS = -fbounds-check -Wall -Wextra For extra flags for C code use PKG_CFLAGS. For extra flags for C++ code use PKG_CXXFLAGS. Warning: You cannot use these flags in the form of the package you ship to CRAN. They only allow “standard” compiler flags (whatever that means). But you can use this for debugging. What is the Difference Between the | and || Operators? See the on-line help obtained by doing, for example help("||"). & and && indicate logical AND and | and || indicate logical OR. The shorter form performs elementwise comparisons in much the same way as arithmetic operators. The longer form evaluates left to right examining only the first element of each vector. Evaluation proceeds only until the result is determined. The longer form is appropriate for programming control-flow and typically preferred in if In short, you want | when dealing with vectors. 
You may want || when dealing with if clauses. How do I Vectorize Logical And and Or? The all (on-line help) and any (on-line help) functions do this. How to find LaTeX operator names? One way is to look in Lamport (2nd ed) or the LaTeX Companion (2nd ed). Another way is to use the Detexify^2, the LaTeX symbol classifier (Sai Okabayashi told me about this one). How to do Null and Alternative Hypotheses in LaTeX? H_0 & : h_1(t) = h_0(t), \qquad \text{for all $t \in[0,\tau]$} H_a & : h_1(t) \ne h_0(t), \qquad \text{for some $t \in[0,\tau]$} How do I do Math in LaTeX that has Goofy Word-Like Variables? $D \times (1 - \text{EPS})$ $\text{D} \times (1 - \text{EPS})$ How to do Clean Examples in R? How about twostage(T, DELTA, GROUP) How to Browse the C API? The unix functions man and man -k find out all kinds of stuff about unix. Functions in the C standard library are in sections 2 and 3. They will say in the “CONFORMING TO” section C89 if they are in the C standard adopted in 1989 and C99 if they are in the C standard adopted in 1999. man -k tangent lists all the functions that have in their description. And man atanh shows the manual page for the specific function How to Make BEAMER Work like Powerpoint? I'm not sure I understood the question, and I hate BEAMER. It faithfully copies most of the vices of Powerpoint (there aren't any virtues). It allows you to encumber your slides with a lot of fritterware that gives the audience some eye candy to look at instead listening to what you say. Real valuable! I assumed the question was about doing things like this for which I used the good old-fashioned slides document class. Here is the LaTeX. It uses • the color package to set the background and text colors, • the graphicx package to include the JPG figure downloaded from the web, and • the fancybox package to absolutely position the text so it goes over the figure. An alternative to the fancybox package is the textpos package. 
I have not tried the latter, but I assume it works just as well. Note: This file must be processed using pdflatex because the latex command only handles PostScript (not JPG) includes. How to use Valgrind when Not Checking a Package? R --debugger=valgrind runs R under Other flags can be combined with this, for example R CMD BATCH --vanilla --debugger=valgrind foo.R How to Differentiate Sums in R? An example: log likelihood for the Cauchy location model logl <- expression(- log(1 + (x - theta)^2)) scor <- D(logl, "theta") hess <- D(scor, "theta") foo <- function(theta) list(value = sum(eval(logl)), gradient = sum(eval(scor)), hessian = matrix(sum(eval(hess)))) ### make up data x <- rcauchy(30) ### do it trust(foo, median(x), 0.5, 2, minimize = FALSE) The trick is to use the sum rule yourself and take the derivative inside the sum. How to Use Mathematica? How to fit LM or GLM to large datasets? R contributed package on-line help ) may be of use. How to use the R formula mini-language? See the function dr in the library dr by Sandy Weisberg. How to make new operators in LaTeX? In the preamble (the technical term for between \documentclass and \begin{document}) put Then in your document $\var_\theta(X)$ does the right thing, something like var[θ](X). How to change the interline space in a LaTeX table? one thing & $x$ \\ and another & $\frac{\int_0^\infty g(x) f(x) \, d x}{\int_0^\infty f(x) \, d x}$ \\ and yet another & $\frac{\frac{a + b}{c + d}}{\frac{e + f}{g + h}}$ \\ The secret is \renewcommand{\arraystretch}{1.75} which multiplies the interline spacing by a factor of 1.75. Because it is done inside an environment, it only applies to this one table. How to fix the way LaTeX spaces math formulas In general you don't. TeX knows more about typesetting than you do. The LaTeX eqnarray environment is so brain-damaged that it is impossible to use it to produce non-ugly mathematics. 
AMS LaTeX provides six different replacements to do a variety of jobs, all of which must be done and done poorly by eqnarray when you use plain LaTeX. All six of the AMS LaTeX replacements work beautifully. There are two places where TeX doesn't know enough about math to know where to put in space and you have to do that. Before the differential in integrals \int_{- infty}^\infty f(x) \, d x The \, makes a “thin space”. For all TeX knows, this is just f(x) times d times x. It doesn't know calculus. So you have to tell it about the extra space before a differential. There are two kinds of set builder notation { a, b, c } and { x ∈ A : x < 3 }. The latter but not the former should have thin space inside the curly brackets. I write the latter in LaTeX as \set{x \in A : x < 3} where I have defined the macro in the preamble of the document (between \documentclass and \begin{document}).
{"url":"http://www.stat.umn.edu/geyer/8054/howto/","timestamp":"2014-04-21T03:01:02Z","content_type":null,"content_length":"15035","record_id":"<urn:uuid:a34e694b-e267-4860-9def-36113a0f2a29>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
what is the dependence vector Eric Fisher <joefoxreal@gmail.com> Thu, 4 Jun 2009 06:21:41 +0000 (UTC) From comp.compilers | List of all articles for this month | From: Eric Fisher <joefoxreal@gmail.com> Newsgroups: comp.compilers Date: Thu, 4 Jun 2009 06:21:41 +0000 (UTC) Organization: A poorly-installed InterNetNews site Posted-Date: 04 Jun 2009 15:47:35 EDT I'm reading the paper "A Data Locality Optimizing Algorithm", by Michael E. Wolf and Monica S. Lam. In section 2, it describes the dependence vector as, a generalization of distance and direction vectors. A dependence vector in an n-nested loop is denoted by a vector each component di is a possibly infinite range of integers, represented by [di_min, di_max], where di_min b Z b* {-b}, di_max b Z b* {b} and di_min b $ di_max Here, a n-nested loop corresponds to a finite convex polyhedron of iteration space Zn (n power). a) What does di mean? Does it mean the dependence of loop i? I think the dependence should refer to two statements. A single dependence vector therefore represents a set of distance vectors, called its distance vector set: N5(d)={(e1,...,en) | ei b Z and di_min b $ ei b $ di_max} b) Here, what does 'N5(d)' mean? What does 'N5' mean? The dependence vector d is also a distance vector if each of its components is a degenerate range consisting of a singleton value, that is, di_min=di_max. c) What does this sentence above mean? Too more questions about dependence representations. Eric Fisher Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/09-06-023","timestamp":"2014-04-20T21:04:45Z","content_type":null,"content_length":"5183","record_id":"<urn:uuid:a0542c3f-58bd-40ce-9daf-afffb03113cb>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
4. DETERMINATION OF H[0] The range of both previous and current published values for the expansion rate, or Hubble constant, H[0] (see Figure 1a), attest to the difficulty of measuring this parameter accurately. Fortunately, the past 15 years has seen a series of substantive improvements leading toward the measurement of a more accurate value of H[0]. Indeed, it is quite likely that the 1-H[0] is now approaching 10%, a significant advance over the factor-of-two uncertainty that lingered for decades. Briefly, the significant progress can be mainly attributed to the replacement of photographic cameras (used in this context from the 1920's to the 1980's) by solid-state detectors, as well as to both the development of several completely new, and the refinement of existing, methods for measuring extragalactic distances and H[0] (e.g., Livio, Donahue & Panagia 1997; Freedman 1997b). Currently there are many empirical routes to the determination of H[0]; these fall into the following completely independent and very broad categories: 1) the gravitational lens time delay method, 2) the Sunyaev-Zel'dovich method for clusters, and 3) the extragalactic distance scale. In the latter category, there are several independent methods for measuring distances on the largest scales (including supernovae), but most of these methods share common, empirical calibrations at their base. In the future, another independent determination of H[0], from measurements of anisotropies in the cosmic microwave background, may also be feasible, if the physical basis for the anisotropies can be well-established. Each of the above methods carries its own susceptibility to systematic errors, but the methods as listed here, have completely independent systematics. If history in this field has taught us nothing else, it offers the following important message: systematic errors have dominated, and continue to dominate, the measurement of H[0]. 
It is therefore vital to measure H[0] using a variety of methods, and to test for the systematics that are affecting each of the different kinds of techniques. Not all of these methods have yet been tested to the same degree. Important progress is being made on all fronts; however, some methods are still limited by sample size and small-number statistics. For example, method 1), the gravitational time delay method, has only two well-studied lens systems to date: 0957+561 and PG 1115. The great advantage of both methods 1) and 2), however, is that they measure H[0] at very large distances, independent of the need for any local calibration.
{"url":"http://ned.ipac.caltech.edu/level5/Freedman2/Freed4.html","timestamp":"2014-04-18T15:45:55Z","content_type":null,"content_length":"4095","record_id":"<urn:uuid:0255b51a-a8d2-4439-be33-6e786af14746>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
A Biophysical Model of the Mitochondrial Respiratory System and Oxidative Phosphorylation • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information PLoS Comput Biol. Sep 2005; 1(4): e36. A Biophysical Model of the Mitochondrial Respiratory System and Oxidative Phosphorylation Philip Bourne, Editor^ This article has been cited by other articles in PMC. A computational model for the mitochondrial respiratory chain that appropriately balances mass, charge, and free energy transduction is introduced and analyzed based on a previously published set of data measured on isolated cardiac mitochondria. The basic components included in the model are the reactions at complexes I, III, and IV of the electron transport system, ATP synthesis at F[1]F[0] ATPase, substrate transporters including adenine nucleotide translocase and the phosphate–hydrogen co-transporter, and cation fluxes across the inner membrane including fluxes through the K^+/H^+ antiporter and passive H^+ and K^+ permeation. Estimation of 16 adjustable parameter values is based on fitting model simulations to nine independent data curves. The identified model is further validated by comparison to additional datasets measured from mitochondria isolated from rat heart and liver and observed at low oxygen concentration. To obtain reasonable fits to the available data, it is necessary to incorporate inorganic-phosphate-dependent activation of the dehydrogenase activity and the electron transport system. Specifically, it is shown that a model incorporating phosphate-dependent activation of complex III is able to reasonably reproduce the observed data. The resulting validated and verified model provides a foundation for building larger and more complex systems models and investigating complex physiological and pathophysiological interactions in cardiac energetics. 
Cells are able to perform tasks that consume energy (such as producing mechanical force in muscle contraction) by using chemical energy delivered in the form of a chemical compound called adenosine triphosphate, or ATP. Two Nobel Prizes were awarded (in 1978 to Peter D. Mitchell and in 1997 to Paul D. Boyer and John E. Walker) for the determination of how ATP is synthesized from the components adenosine diphosphate (ADP) and inorganic phosphate in a subcellular body called the mitochondrion. The operating theory, called the chemiosmotic theory, describes how a driving force called the proton motive force, which arises from the sum of contributions from the electrical potential and the hydrogen ion concentration difference across the mitochondrial inner membrane, is developed by reactions catalyzed by certain enzymes and consumed in generating ATP. Yet, to date, no computer model has successfully described the development and consumption of both the chemical and electrical components of the proton motive force in a thermodynamically balanced simulation. Beard introduces such a model, which is extensively validated based on previously published sets of data obtained on isolated mitochondria. The model is used to test hypotheses about how intracellular respiration is regulated; this model could serve as a foundation for investigating the control of mitochondrial function and for developing larger integrated simulations of cellular metabolism. As the key cellular organelle responsible for transducing free energy from primary substrates into the ATP potential that drives the majority of energy-consuming processes in a cell, the mitochondrion plays a central role in the majority of eukaryotic intracellular events. Therefore, the development of a quantitative mechanistic understanding of cellular function must rely on a reasonable quantitative description of mitochondrial function. 
Additionally, development of computational models of physiological systems that span multiple scales, from intracellular biochemistry to whole-organ function, requires a self-consistent integrated description of the biophysical processes observed at the molecular, cellular, tissue, and whole-organ levels of resolution. Recognizing the need for a computational model of mitochondrial energetics and the need that such a model be available for integration with other physiological systems models, a computational model of the biophysics of the respiratory system and oxidative phosphorylation was developed to meet the following requirements: (1) the model must be consistent with the available experimental data, and (2) the model must be constrained by the relevant physics/biophysics. The utility of the first requirement is self-evident. The second requirement is that models must obey applicable physical laws (e.g., conservation of mass, charge, and energy; the Second Law of Thermodynamics). The alternative to the second requirement is to use empirically derived relationships that are often useful in developing data-driven models based on specific datasets. This approach is not used here because the resulting models often fail when combined together. Physics-based models, on the other hand—i.e., models built on principles including the laws of mechanics and thermodynamics, in which assumptions and approximations are made explicit—operate with a common currency of mass, charge, energy, and momentum [1–3]. Such models naturally integrate across disparate scales. Previous models of oxidative phosphorylation fail to meet either one or both of the above requirements. For example, the widely used model of Korzeniewski and Zoladz [4–6] invokes an empirical linear relationship between the difference in pH across the mitochondrial inner membrane (matrix pH minus cytosol pH) and the magnitude of inner membrane potential. 
While the Korzeniewski model has been validated and verified based on a number of studies and is widely applied [7–9], the central empirical relationship between electrostatic potential and pH difference is not expected to apply under all conditions. For example, in the extensive study of isolated cardiac mitochondria published by Bose et al. [10], this relationship is not obeyed. In fact, not only is the linear relationship violated, it is observed in the study of Bose et al. [10] that the matrix pH is nearly constant and can even drop as the magnitude of membrane potential increases. For example, the matrix can become more acidic as the magnitude of the membrane potential increases. This phenomenon cannot be explained by a model that collapses the proton concentration gradient, the membrane potential, and the proton motive force into a single state variable. The model developed by Magnus–Keizer [11] was integrated by Dudycha and Jafri and colleagues into a detailed model of mitochondrial metabolism, including the reactions of the TCA cycle [12,13]. The Dudycha–Jafri model has been adopted by Cortassa and co-workers and recently extended to account for the production of reactive oxygen species by the respiratory chain [14,15]. The Magnus–Keizer model, developed based on Hill's formalism for biochemical kinetics and free energy transduction [16], is self-consistent and thermodynamically balanced. While it has the advantage over the Korzeniewski model in that the electrostatic potential is treated as a state variable, charge is balanced, and bulk electroneutrality is obeyed in the steady state, the Magnus–Keizer model treats the pH gradient across the inner membrane as a constant. Also, the Magnus–Keizer model addresses certain aspects of mitochondrial ion transport—specifically the transport of calcium across the inner membrane—that are not included at the present stage in the current model. 
However, details that are unique to the present model—specifically pH buffering by potassium ion exchange [17,18]—are necessary to analyze the extensive experimental dataset considered here. The goal of the current work is to introduce a quantitative model representing the chemiosmotic theory of the respiratory chain and ATP synthesis that treats the proton gradient and mitochondrial membrane potential as distinct state variables. To be of maximal utility, this biophysically based model of mitochondrial respiration must conserve charge, mass, and energy, and use a treatment of the electron transport system, ATP synthesis, and substrate transporters that does not violate the laws of thermodynamics. The model is developed based on the dataset of Bose et al. [10]. Values of 16 adjustable parameters are estimated based on fitting a relatively large number of experimentally measured data curves (nine data curves). The resulting parameterized model is further validated by comparison to data measured at low oxygen level in isolated mitochondria by Wilson et al. [19] and Gnaiger and Kuznetsov [20,21]. This section is organized as follows. First, a thermodynamically balanced biophysical model of mitochondrial oxidative phosphorylation is developed, along with identification of model parameters based on the measured dependence of NADH and cytochrome c redox states, oxygen consumption, and matrix pH on levels of buffer phosphate. It is shown that the developed model cannot match data on the inner membrane electrostatic potential without incorporating phosphate-dependent control of oxidative phosphorylation. In the next section, the isolated mitochondrial model is modified to include phosphate-dependent activation of complex III and is fit to the data on inner membrane potential. In the final section, the behavior of the model at low oxygen level is shown to compare favorably to data reported by Gnaiger and Kuznetsov [20,21] and Wilson et al. [19]. 
Mitochondrial Model without Phosphate Control The basic components of the mitochondrial model, which include the reactions at complexes I, III, and IV of the respiratory chain and ATP synthesis at F[1]F[0] ATPase, are illustrated in Figure 1A. Substrate transporters, including adenine nucleotide translocase (ANT) and the phosphate–hydrogen co-transporter (PiHt) are illustrated in Figure 1B. Cation fluxes across the inner membrane, illustrated in Figure 1C, include fluxes through the K^+/H^+ antiporter and passive H^+ and K^+ permeation. The model includes two mitochondrial compartments (matrix and intermembrane [IM] space). In the following description of the model equations, subscripts “x”, “i”, and “e” denote concentrations of reactants in the matrix, IM space, and external space, respectively; the concentration variables [fATP][i] and [fATP][x], and [fADP][i] and [fADP][x], denote the magnesium-unbound components of ATP in the IM space and the matrix, and unbound ADP concentrations in the IM space and matrix, respectively; variables [mATP][i] and [mATP][x], and [mADP][i] and [mADP][x], denote the magnesium-bound components of ATP in the IM space and the matrix, and magnesium-bound ADP concentrations in the IM space and matrix, respectively. The external space corresponds to the extra-mitochondrial buffer used in the experiments simulated below. Illustration of the Components Included in the Model of Mitochondrial Oxidative Phosphorylation Note on units. All concentrations in the following sections are expressed in molar units—specifically moles per liter of compartment volume. Fluxes are expressed in units of mass per unit time per unit mitochondrial volume. To avoid confusion, the units on flux variables are written as “mol s^−1 (l mito volume)^−1” and not simplified to “M s^−1”, which would be ambiguous when referring to fluxes for membrane transporters. Dehydrogenase flux. 
In this model the TCA cycle and other NADH-producing reactions are not explicitly modeled. Instead, a phenomenological driving force is used to simulate the phosphate-dependent rate of reduction of NAD^+ to NADH via the reaction NAD^+ NADH + H^+ in the mitochondrial matrix. The following expression is used to model the dehydrogenase flux: where [NADH][x] and [NAD][x] denote the concentrations of NADH and NAD in the mitochondrial matrix; [Pi][x] denotes the matrix inorganic phosphate concentration, and X[DH], r, k[Pi,1], and k[Pi,2] are empirical parameters. Thus, the dehydrogenase flux drives the [NADH][x]/[NAD][x] ratio toward r, with a reaction rate dependent on the concentration of phosphate, which serves as a substrate for mitochondrial dehydrogenase enzymes. Electron transport system fluxes. The overall reaction for electron transfer from NADH to ubiquinol at complex I is expressed as where the notation ΔH^+ is used to indicate hydrogen ion transferred from the matrix to the cytosol, against the electrochemical gradient. The direction of positive flux for the reaction of equation 2 and for all reactions introduced below is left-to-right. The flux through complex I is modeled using the following expression: where X[C1] is an adjustable parameter, [Q] and [QH[2]] denote oxidized and reduced ubiquinol, RT is the gas constant multiplied by the absolute temperature; ΔG[o,C1] = −69.37 kJ mol^−1 is the standard free energy for the reaction H^+ + NADH + Q NAD^+ + QH[2] at pH = 7; and ΔG[H] is the proton motive energy, or the free energy change associated with pumping a proton from the matrix side to the cytosol side of the mitochondrial inner membrane. The factor of four in the exponent of equation 3 arises from the four protons pumped from the matrix space to the IM space for each pair of electrons transferred from NADH to ubiquinol. 
The proton motive energy is computed as follows: where F is Faraday's constant, ΔΨ is the membrane potential measured as the outer potential minus the inner potential, and [H^+][e]/[H^+][x] is the ratio of external hydrogen ion concentration to matrix concentration. Equation 3 represents a minimal thermodynamically balanced one-parameter model for the flux through the first step in respiratory system—i.e., the flux described by equations 3 and 4 drives concentrations towards thermodynamic equilibrium. Specifically, the flux is driven towards a thermochemical equilibrium defined by the effective equilibrium expression which is an explicit function of the proton gradient and electrostatic gradient across the inner membrane. For complex III, it is assumed that four protons are pumped for each pair of electrons transferred from ubiquinol to cytochrome c [22,23]: where cytC(ox)^3+ and cytC(red)^2+ denote the oxidized and reduced forms of cytochrome c, respectively. As indicated in Figure 1, cytochrome c is assumed to be present in the IM space. Although four protons are pumped across the inner membrane for each unit flux through this reaction, the total number of charges transferred is two, owing to the redox transfer from ubiquinol to cytochrome c, which generates two matrix hydrogen ions for each turnover of the reaction. The flux through complex III takes a form similar to equation 3: where X[C3] is an adjustable parameter, ΔG[o,C3] = −32.53 kJ mol^−1. Below, the expression for complex III flux is modified to test the hypothesis that phosphate modulates complex III activity. It will be shown that, while much of the available experimental data can be explained by the model developed in this section, which does not consider phosphate-dependent control of the respiratory chain enzymes, the observed data are better fit by a model that incorporates phosphate-dependent control of complex III activity. 
The overall reaction of complex IV involves the transfer of two protons across the membrane and a total of four charges [24,25]: with a flux computed as where X[C4] is an adjustable parameter, ΔG[o,C4] = −122.94 kJ mol^−1, and cytC[tot] = [cytC(ox)^3+] + [cytC(red)^2+]. The complex IV flux of equation 9 is expressed in a form similar to those of complexes I and III, with a few key differences. While, as for complexes I and III, the flux is formulated to drive the system toward thermodynamic equilibrium, additional multiplicative factors have been included in order to successfully reproduce the observed data. The factor is included in equation 9 to account for the observed dependence of the rates of oxygen consumption and ATP generation on oxygen concentration [6,19–21,26,27]. It will be shown that the factor [cytC(red)^2+]/cytC[tot] is found to provide better fits to the observed data than are possible without it. ATP synthesis. ADP is phosphorylated to ATP in the matrix via the F[1]F[0]-ATPase reaction: where n[A] ≈ 3 is the number of protons transported each time this reaction turns over. Since ATP synthesis requires magnesium as a cofactor, the flux through this complex is modeled using the thermodynamically balanced expression where X[F1] is an adjustable parameter, ΔG[o,ATP] = −36.03 kJ mol^−1 and K[Mg-ATP] and K[Mg-ADP] >are the equilibrium dissociation constants for ATP and ADP binding with Mg^2+. The factor 1 M multiplying [mATP][x] is used so that the term in parenthesis is balanced in terms of units. Magnesium binding. Binding between magnesium ion and ATP and ADP is driven via the following fluxes: fATP and fADP denote the concentrations of ATP and ADP that are not bound to magnesium ion in the matrix and IM space; mATP and mADP denote the magnesium-bound species. 
The parameter X[MgA] is the forward binding rate constant for these reactions; the effective unbinding constant is computed to satisfy the equilibrium dissociation relations for ATP and ADP binding with Mg^2+. Substrate transport. Permeation of ATP, ADP, AMP, and inorganic phosphate between the external buffer and the IM space is governed by the following fluxes: where the subscripts “e” and “i” denote external buffer and IM space, respectively. The buffer concentrations are set as constants in this study. The permeabilities of the outer membrane to adenine nucleotides and to inorganic phosphate are given by p[A] and p[Pi], respectively; γ denotes the ratio of mitochondrial outer membrane area to total cardiomyocyte cell volume. ANT flux involves the displacement of one negative charge from the matrix to the IM space, and is therefore coupled to the electrostatic membrane potential. The following empirical expression [4,6] is used to model the ANT flux: where the ANT is assumed to operate on magnesium-unbound ATP and ADP in the two compartments. Transport of inorganic phosphate between the matrix and IM space is coupled to the hydrogen ion gradient [28]. It is assumed that H^+ and are transported by a co-transport process, with H^+ and moving together across the membrane in a 1:1 ratio in a net electroneutral exchange. Hydrogen binding to inorganic phosphate via the reaction is assumed to be in equilibrium on either side of the membrane with and , where k[dH] is the dissociation constant for the reaction. In these expressions Pi represents the sum of species and . The phosphate-hydrogen co-transporter flux is modeled as reversible Michaelis–Menten flux: where X[PiHt] is an adjustable parameter, k[PiHt] is the Michaelis–Menten constant for on the outside of the membrane. Adenylate kinase reaction. 
High-energy phosphates are transferred between ATP, ADP, and AMP in the IM space via the adenylate kinase (AK) reaction ATP + AMP ⇌ 2 ADP. The AK flux in the IM space is computed accordingly, where K[AK] = 0.4331 is the equilibrium constant for the reaction of equation 17, and X[AK] is the AK enzyme activity. Cation transport. The present work assumes that calcium and sodium concentrations and fluxes have only secondary effects on membrane potential compared to the primary effects of currents associated with the respiratory chain, the ANT current, and the proton leak. Therefore, fluxes of sodium and calcium are not considered at this stage. Since K^+ is required to buffer the matrix pH [17] and Mg^2+ is required for ATP synthesis and the ANT flux, these ions are considered in the model. Expressions for K^+ and Mg^2+ channel and transporter fluxes are developed below. The expression for the leak of H^+ across the inner membrane is obtained by solving the one-dimensional Nernst–Planck equation, the differential equation for diffusion and drift of a charged species across a permeable membrane. The resulting flux is calculated from the Nernst–Goldman equation [29,30]. Passive flux of potassium into the matrix is modeled using a similar expression. While significant evidence exists for passive flux of potassium through various channels into the matrix [17,31], it is unclear exactly what transporters are present to prevent potassium concentrations from approaching thermodynamic equilibrium across the inner membrane. It is assumed that the outflow of potassium ions from the matrix is coupled to the proton gradient, and outflow is modeled using a simple reversible antiporter with flux given by mass-action kinetics. The above expressions for K^+ and H^+ transport assume that these ions rapidly equilibrate across the outer membrane. Therefore, the IM space concentrations are assumed to be equal to the external space concentrations. Governing equations.
The flux expressions are used to construct a kinetic model for the system; the overall system is governed by a set of 17 differential equations (equation 22), where V[x] and V[i] are the matrix and IM space water volumes, and r[buff] is the buffering capacity of the matrix space, which is set to r[buff]^−1 = (100 M^−1) · [H^+][x] [32]. The stoichiometric coefficients multiplying the complex I, III, and IV and F[1]F[0]-ATPase fluxes include two terms, one representing the number of protons transported across the inner membrane for a given reaction, and one representing the number of protons consumed by the associated biochemical reaction. For example, the complex I reaction pumps four H^+ out of the matrix and consumes one matrix H^+ for every turnover of the reference biochemical reaction H^+ + NADH + Q → NAD^+ + QH[2], resulting in a total of five matrix H^+ consumed. Thus, the net stoichiometric coefficients multiplying J[C1], J[C3], J[C4], and J[F1] are −(4 + 1), −(4 − 2), −(2 + 2), and +(n[A] − 1), respectively. The membrane potential kinetics depend on the effective membrane capacitance, C[IM], which is estimated below. In addition to the 17 state variables treated in equation 22, the concentrations of oxidized matrix NAD, Q, and cytochrome c, and the matrix ADP concentration, are computed from the conservation relations for the total pools, where NAD[tot], Q[tot], cytC[tot], and A[tot] are the total concentrations of NAD(H), ubiquinone, cytochrome c, and adenine nucleotide in the matrix, respectively. Parameter values. The values of the parameters used in this section are listed in Table 1. The units on activities are expressed as mass flux per unit time per unit total mitochondrial volume, specifically mol s^−1 (l mito volume)^−1. The parameters have been categorized into three classes, denoted classes A, B, and C. The meaning of these categories is as follows.
Mitochondrial Model Parameter Values
Class A refers to free parameters with values determined by fitting model simulations to the data published by Bose et al. [10]. In total, there are 14 adjustable parameters, which are estimated by fitting to seven data curves (described below). Of the 14 adjustable parameters, four correspond to the phenomenological model of dehydrogenase flux, while the remaining ten are associated with the biophysical model of oxidative phosphorylation and the electron transport system in cardiac mitochondria. Class B refers to 17 parameters for which values are established in the literature. These parameter values were fixed and not treated as adjustable. The values used for NAD[tot], A[tot], Q[tot], and cytC[tot] are obtained from the previous models of Vendelin et al. [8] and Korzeniewski and Zoladz [4,6]. The current model assumes cytochrome c to be distributed within the IM space, in contrast to previous models, in which cytochrome c is in the matrix. To keep the total mass of cytochrome c consistent, cytC[tot] is set to 2.7 mM, a value ten times greater than that used in the Vendelin and Korzeniewski models, since the IM space volume is assumed to be 1/10 of the mitochondrial water volume. The value for the outer membrane permeability to adenine nucleotides is estimated from Lee et al. [33]. Assuming the mitochondrial inner membrane has a capacitance of 1 μF per square centimeter of surface area [34], the inner membrane capacitance is calculated to be 6.75 × 10^−6 mol (l mito volume)^−1 mV^−1 for an inner membrane area of 60 μm^2. It is observed that the steady-state model behavior presented below is not sensitive to the assumed value of mitochondrial membrane capacitance. The steady-state membrane potential is determined by bulk electroneutrality, which imposes the constraint that the sum of the various currents across the membrane is zero.
Thus, 4J[C1] + 2J[C3] + 4J[C4] − n[A]J[F1] − J[ANT] − J[Hle] − J[K] − 2J[Mg] = 0, where each of these fluxes depends on the membrane potential. The ratio of mitochondrial outer membrane surface area to cell volume, γ, is estimated from morphological data [35] to be 5.99 μm^−1. Given this value of γ and the assumed values of outer membrane permeability, the gradients obtained for ATP and ADP across the outer membrane at the maximal respiration rate are 20 μM. In this range, the resistance to passive transport between the IM space and buffer is not great enough to be significant, and the model behavior is not sensitive to the assumed value of γ. Class C refers to two parameters that are set to extreme values such that the simulated model behavior is not sensitive to the specific value chosen. The AK and magnesium-binding activities are set to values high enough that the corresponding reactions maintain equilibrium. Table 2 lists the values of the standard free energies for the reactions of the respiratory chain at pH = 7.
Standard Free Energies of Respiratory Chain Reactions
Simulation of isolated mitochondria. The extensive dataset published by Bose et al. [10] is used to parameterize the mitochondrial model. The external K^+, H^+, ATP, ADP, AMP, and inorganic phosphate concentrations were set as constants according to the buffer concentrations imposed in the experiment in order to compare model simulations to the experimental data. Specifically, [H^+][e] = 10^−7.1 M, [ATP][e] = 0, and [AMP][e] = 0; [ADP][e] was set at either 0 or 1.3 mM, as described below, and [Pi][e] was varied from 0 to 10 mM. The total magnesium concentration in the buffer was fixed at 5.0 mM; buffer potassium concentration was fixed at 150 mM. The simulations described in this section were computed with the oxygen partial pressure in the matrix set to 20 mm Hg, or [O[2]] = 2.6 × 10^−5 M.
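The electroneutrality condition above determines the steady-state membrane potential as the root of the net membrane current. The following sketch illustrates the idea with toy flux laws — a pumping term that declines with electrical back-pressure and a Goldman–Hodgkin–Katz-style proton leak — solved by bisection. Every functional form and constant here is an invented placeholder, not one of the model's fitted expressions.

```python
import math

F, R, T = 96485.0, 8.314, 310.0  # C/mol, J/(mol K), K

def ghk_leak(dpsi_mv, c_in, c_out, p=1.0e-6):
    """Goldman-Hodgkin-Katz-style influx of a monovalent cation (sketch)."""
    u = F * dpsi_mv * 1.0e-3 / (R * T)  # nondimensional potential
    if abs(u) < 1.0e-9:
        return p * (c_out - c_in)       # small-potential limit
    return p * u * (c_out - c_in * math.exp(-u)) / (1.0 - math.exp(-u))

def net_current(dpsi_mv, h_in=10**-7.8, h_out=10**-7.1):
    """Toy charge balance: pumping (falling with dPsi) minus the proton leak."""
    pumping = 1.0e-12 * (1.0 - dpsi_mv / 220.0)  # invented pump law
    return pumping - ghk_leak(dpsi_mv, h_in, h_out)

# Bisection for the root of the net current on (0, 220) mV:
lo, hi = 0.0, 220.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if net_current(mid) > 0.0:
        lo = mid
    else:
        hi = mid
dpsi_star = 0.5 * (lo + hi)  # steady-state potential of the toy balance
```

Bisection is robust here because the net current of this toy balance decreases monotonically with the potential, so the zero crossing is unique.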
The black curves plotted in Figure 2 illustrate comparisons between model-simulated and experimentally measured values for the dataset used to estimate the 14 adjustable parameters listed in Table 1. Parameter values were adjusted to obtain the best fit (least squares error) between model simulations and experimental measures for steady-state values of NADH concentration, rate of oxygen consumption, cytochrome c redox state, and matrix pH, as shown in the figure. Optimal parameter values were found using a global Monte-Carlo-based simulated annealing algorithm that searched for the optimal set of parameter values to simultaneously fit several data curves. In total, seven independent curves were used to estimate the 14 parameter values. It was found that best-fit model solutions are obtained by setting the passive potassium and magnesium fluxes to zero. Shown in Figure 2A are model simulations of steady-state NADH (normalized to NAD[tot]) as a function of external inorganic phosphate, [Pi][e]. The two curves correspond to two different values of external ADP concentration, 0 and 1.3 mM, as indicated in the figure. Also shown are data from Bose et al. [10], collected from isolated mitochondria suspensions, with buffer ADP concentrations of 0 mM (circles) and 1.3 mM (triangles). Figure 2B illustrates the experimentally measured and model-simulated values of the rate of oxygen consumption (MV[O2]) for the same conditions as described for the NADH curves in Figure 2A. When substrate concentration (either ADP or inorganic phosphate) goes to zero, the nonzero MV[O2] corresponds to the basal oxygen consumption necessary to maintain the proton motive force against a finite proton leak across the inner membrane. Figure 2C illustrates the model-simulated and experimentally measured values of cytochrome c redox state; Figure 2D illustrates the model-simulated and experimentally measured values of matrix pH.
The matrix pH is buffered at a nearly constant value via H^+/K^+ exchange.
Comparison of Model Simulations to Experimental Data on NADH, MV[O2], Cytochrome C Redox, and Matrix pH for Model without Phosphate Control
Also shown as red curves in Figure 2C is the best fit to the cytochrome c redox state data obtained without the factor [cytC(red)^2+]/cytC[tot] multiplying the J[C4] flux expression of equation 9. It is observed that the model's fits to the cytochrome c redox data are improved by incorporating this multiplicative factor in the expression for complex IV flux. Thus the best-fit model solution assumes that the flux through complex IV depends on [cytC(red)^2+]^2, which may be explained by the fact that two reduced cytochrome c molecules are required to donate an electron pair to a single oxygen atom, generating H[2]O. In contrast to the results illustrated in Figure 2, the model described in this section is unable to reproduce data on mitochondrial membrane potential. In Figure 3 are plotted the model-simulated and experimentally measured values of ΔΨ as a function of [Pi][e] for the cases of [ADP][e] = 0 mM and [ADP][e] = 1.3 mM.
Comparison of Model Simulations to Experimental Data on Membrane Potential for Model without Phosphate Control
In the inactive state ([ADP][e] = 0), as the external (buffer) phosphate concentration is increased, the dehydrogenase flux governed by equation 1 increases, providing an increased thermodynamic driving force for electron transport and a corresponding increase in the magnitude of the membrane potential. However, the model-simulated rate of increase in ΔΨ with [Pi][e] is much smaller than that observed experimentally. When the active state is simulated ([ADP][e] = 1.3 mM), the model fit is even worse than for the inactive state.
The addition of ADP to the external buffer, representing a sink for the free energy stored in the redox state in the matrix and in the membrane potential, results in a drop in both the redox state and the membrane potential. The simulated magnitude of the potential difference decreases with increasing [Pi][e], the opposite of the trend observed experimentally. With 14 adjustable parameters, it is not possible to reproduce the experimentally observed behavior of ΔΨ while maintaining reasonable fits to the curves plotted in Figure 2.
Mitochondrial Model with Phosphate Control
Reasonable fits to the observed ΔΨ require that phosphate-dependent control be incorporated into the model, as proposed by Bose et al. [10]. It is found that by including phosphate-dependent control of complex III it is possible to obtain improved fits to the data on membrane potential. The flux expression of equation 7 is modified by introducing two new parameters, k[Pi,3] and k[Pi,4] (equation 24). Thus, the total number of adjustable parameters in the model is increased from 14 in the previous section to 16. By varying model parameters, including these additional parameters, we are able to obtain fits to the nine data curves as illustrated in Figures 4 and 5. Thus, the large number of parameters is offset by a relatively large number of data curves available for estimation of parameter values. As for the model described in the previous section, it is found that best-fit model solutions are obtained by setting the passive potassium flux to zero. Therefore, in the model parameterization presented in Table 1 the activities of the corresponding channels are set to zero, effectively reducing the number of adjustable parameters. Sensitivity analysis reveals that the activities of these channels cannot be determined based on the present dataset. The activities of potassium channels are known to be modulated by ischemic and anesthetic preconditioning [31,36,37] and will be explored in future work.
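A sketch of the kind of saturating phosphate-activation factor introduced here for the complex III flux: a ratio of two binding polynomials that equals 1 at zero phosphate and rises monotonically toward a fixed asymptote. The functional form and the constants below are illustrative assumptions, not the literal terms or fitted values of equation 24.

```python
def pi_activation(pi_x, k_a=1.3e-3, k_b=8.0e-3):
    """Phosphate-activation factor (assumed form): equals 1 at [Pi] = 0 and,
    for k_a < k_b, rises monotonically toward the asymptote k_b / k_a.
    k_a and k_b are placeholder constants, not the paper's k[Pi,3]/k[Pi,4]."""
    return (1.0 + pi_x / k_a) / (1.0 + pi_x / k_b)

# Scaling a base complex III flux (arbitrary units) by the factor at
# increasing phosphate concentrations:
j_c3_base = 1.0
fluxes = [j_c3_base * pi_activation(pi) for pi in (0.0, 1.0e-3, 5.0e-3)]
```

With this shape, raising buffer phosphate multiplies the complex III rate by an increasing but bounded factor, which is the qualitative behavior needed to steepen the simulated ΔΨ dependence on [Pi][e].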
Comparison of Model Simulations to Experimental Data on NADH, MV[O2], Cytochrome C Redox, and Matrix pH for Model with Phosphate Control
Comparison of Model Simulations to Experimental Data on Membrane Potential for Model with Phosphate Control
The fits obtained using equation 24 to model the complex III flux are plotted in Figures 4 and 5. Note that the model simulation of NADH, MV[O2], cytochrome c state, and matrix pH plotted in Figure 4 produces values similar to those obtained using the previous model (see Figure 2). Of particular note is the behavior of the NADH redox state and MV[O2] curves, plotted in Figures 4A and 4B. In these cases the mean squared error between observed data and the model fits is slightly lower than that obtained using the model with no phosphate control (see Figures 2A and 2B). The major difference between the model of this section and that of the previous section is seen in the simulated membrane potential values, plotted in Figure 5. With the phosphate-modulated control described by equation 24, the model remains unable to reproduce the observed data on membrane potential as phosphate concentration goes to zero. Yet while the model's fits to the observed data remain imperfect, the agreement between simulation and experimental data is significantly improved by incorporating the expression of equation 24 to model the complex III flux. Possible mechanisms explaining differences between model simulations and the data observed in the limit [Pi][e] → 0 at [ADP][e] = 1.3 mM are outlined in the Discussion. The values of the parameters used in generating Figures 4 and 5 are listed in Table 1. The sensitivities of the estimated values of model parameters are considered in the Discussion.
Behavior of Model at Low Oxygen Concentration
A key to understanding cellular energetics during hypoxia and ischemia is a mechanistic model of mitochondrial function at low oxygen concentration.
In this section the behavior of the model is compared to measurements of oxygen consumption and cytochrome c reduction in isolated mitochondria as functions of the oxygen concentration of the medium. The behavior of the model at low oxygen concentration is illustrated in Figure 6. Plotted are the rate of mitochondrial oxygen consumption and the predicted reduced fraction of cytochrome c as functions of the oxygen content of the medium.
Behavior of Model at Low Oxygen Concentration
The oxygen consumption curve was computed for active state-3 respiration, with [ADP][e] and [ATP][e] set to 1.0 mM and 0 mM, respectively. This curve corresponds to the curves reported in Figure 7A of [20] and Figure 2A of [21]. The model-predicted P[50] for half-maximal oxygen consumption is 0.373 μM (or an oxygen partial pressure of 0.287 mm Hg), close to the reported value of 0.35 ± 0.07 μM [20,21]. The curve for the predicted reduced fraction of cytochrome c (dashed line in Figure 6) was computed for state-3 respiration ([ADP][e] = 0.5 mM; [ATP][e] = 0.87 mM), corresponding to the measurements of Wilson et al. [19]. The predicted curve can be compared to Figures 4B and 5A of [19]. Wilson et al. found that in mitochondria isolated from rat liver, cytochrome c is approximately 15%–18% reduced for oxygen concentration in the range of 40–50 μM. The current model predicts a slightly lower value of 14% reduced at [O[2]][e] = 50 μM. Thus, beyond the dataset used for parameterization, the model was further validated by comparison to additional datasets measured from mitochondria isolated from rat heart [20,21] and liver [19] and observed at low oxygen concentration. The model agrees quantitatively with the measured dependence of oxygen consumption rate on oxygen concentration in state-3 respiration.
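The P[50] comparison above can be illustrated with a minimal hyperbolic dependence of respiration on oxygen, J = J[max][O2]/([O2] + k[O2]); for this assumed form the half-maximal point equals k[O2], set here to the model-predicted 0.373 μM purely for illustration (the full model's oxygen dependence is not literally this one-parameter curve).

```python
def mvo2(o2, j_max=1.0, k_o2=0.373e-6):
    """Assumed hyperbolic dependence of oxygen consumption on [O2] (in M),
    normalized so that j_max = 1."""
    return j_max * o2 / (o2 + k_o2)

# Locate the half-maximal oxygen concentration (P50) by bisection on [0, 100 uM].
lo, hi = 0.0, 1.0e-4
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mvo2(mid) < 0.5:
        lo = mid
    else:
        hi = mid
p50 = 0.5 * (lo + hi)  # for a pure hyperbola this recovers k_o2 exactly
```

The numerical search is redundant for a pure hyperbola (P[50] = k[O2] analytically) but shows how a half-maximal point would be extracted from a simulated consumption curve of arbitrary shape.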
Although the observed cytochrome c redox state is slightly (1%–4%) more reduced in measurements reported for isolated rat liver mitochondria at low oxygen concentrations than in the model predictions, the predicted behavior of cytochrome c reduction relative to oxygen concentration is qualitatively similar to the corresponding experimental observations. Allowing for differences in the behaviors of hepatic and cardiac mitochondria, an exact quantitative agreement may not be expected.
Model Development and Parameterization
The main contribution of the current study is the introduction of a self-consistent, thermodynamically balanced model of oxidative phosphorylation and the electron transport system in mitochondria. A biophysical model incorporating all of the components illustrated in Figure 1 required the development of a system of 17 differential equations and the introduction of 16 adjustable parameters. To identify such a large number of parameters, it was necessary to make use of a large number of independent measurements made on mitochondria isolated from rat cardiac tissue [10]. This previously published dataset consists of measures of NAD(H) redox state, cytochrome c redox state, rate of oxygen consumption, mitochondrial membrane potential, and matrix pH, for a range of buffer conditions, including a range of concentrations of inorganic phosphate and for resting and active state mitochondria ([ADP][e] = 0 and 1.3 mM, respectively). In total, 16 parameters were estimated by simultaneously fitting model-simulated steady states to nine independent data curves (see Figures 4 and 5), providing quantitative estimates of the model parameters. While an exhaustive statistical analysis of the 16-dimensional parameter space is not computationally feasible, it is possible to compute the sensitivity of the mean squared error between model solutions and data with respect to the estimated parameter values.
Parameter sensitivity can be estimated from the diagonal entries of the Hessian of the error function, ∂²E/∂x[i]², where E is the mean squared difference between model simulations and experimental data and x[i] represents the ith parameter. However, the partial derivatives of E with respect to parameter values represent local measures that do not necessarily reflect how the error changes with finite changes in the parameter values. To estimate the sensitivity to finite changes, the sensitivity to each parameter was computed as the relative change in mean squared error due to a 10% change in a given parameter value, where E* is the minimum mean squared difference between model simulations and experimental data and x[i]* is the optimal value of the ith parameter; the perturbed error is computed with parameter x[i] set to 10% above and below its optimal value. The relative sensitivities to the adjustable parameters are listed in Table 1. These sensitivity values represent a measure of the degree to which the curves plotted in Figures 4 and 5 are sensitive to the values of the individual parameters. A high sensitivity value indicates that changing a given parameter results in significant changes to the simulated curves used to identify the set of adjustable parameter values. Note that six of the adjustable parameters show relative sensitivity of less than 5%, indicating that model solutions are not particularly sensitive to these parameters in the neighborhood of the reported values and that these parameters are not well estimated by the present analysis. In particular, in comparison to the dataset presented here, the model is relatively insensitive to the values of the activities of the potassium transporters and the F[1]F[0] ATPase. The best model fits were obtained with the potassium channel activities set effectively to zero. Therefore, the relative sensitivity to this parameter is reported to be zero as well.
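The ±10% perturbation measure described above can be sketched as follows. The error function here is a toy quadratic stand-in for the model-vs-data mean squared error, and the exact normalization (averaging the up- and down-perturbed errors, relative to the minimum) is an assumption about the measure's definition.

```python
def sensitivity(err_fn, opt_params, i, frac=0.10):
    """Relative change in error when parameter i is perturbed by +/-frac
    about its optimal value (sketch of the 10% measure; normalization assumed)."""
    e_star = err_fn(opt_params)
    up = list(opt_params); up[i] *= 1.0 + frac
    dn = list(opt_params); dn[i] *= 1.0 - frac
    return (err_fn(up) + err_fn(dn) - 2.0 * e_star) / (2.0 * e_star)

# Toy error surface: curved along parameter 0, completely flat in parameter 1,
# mimicking a well-identified versus a poorly identified parameter.
def toy_error(p):
    return 1.0 + 10.0 * (p[0] - 2.0) ** 2 + 0.0 * (p[1] - 5.0) ** 2

s0 = sensitivity(toy_error, [2.0, 5.0], 0)  # sensitive parameter
s1 = sensitivity(toy_error, [2.0, 5.0], 1)  # insensitive parameter
```

A parameter with near-zero sensitivity, like the flat direction here, is exactly the situation described in the text: the fit does not constrain it, so its reported value should not be over-interpreted.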
The ATP synthesis activity X[F1] is determined to be large enough that the reaction is effectively maintained in equilibrium. To estimate the parameters that are poorly identified, it will be necessary to obtain appropriate data in the future. In fact, because the model is parameterized based solely on steady-state data, it may not accurately match time-dependent behavior of mitochondrial oxidative phosphorylation. Thus, it is expected that kinetic data, in particular, will be of great value in refining the parameter estimates, refining the model, and generally improving the ability of the model to predict observed behavior. Yet while the model represents only one component of mitochondrial energy metabolism that one may build on and refine, it does represent the most complete model of the respiratory chain that is available to date. The model includes the major components of oxidative phosphorylation and the electron transport system and appropriately balances mass, charge, and free energy. By integrating the components illustrated in Figure 1 into a self-contained model, the observations of Bose et al. [10] have been explained based on a model that incorporates phosphate control at the dehydrogenase flux (perhaps via mass action) and phosphate-dependent activation of complex III. Of course, the model does not reproduce the experimental data with perfect fidelity. The major shortcomings of the current model analysis are the inability to reproduce the observed data in state-3 mitochondria as buffer phosphate concentration approaches zero and the inability to sensitively identify parameters for membrane ion transporters. As discussed below, the observations at [Pi][e] → 0 may be influenced by an experimental artifact; detailed identification of the inner membrane ion transporters will require the design of further experiments sensitive to the kinetic behavior of these channels. 
Phosphate-Dependent Control of the Respiratory Chain
To obtain the model solutions illustrated in Figures 4, 5, and 6, an expression for complex III flux was developed based on the hypothesis that the inorganic phosphate level modulates the activity of complex III. While no data directly measuring the activity of complex III in intact mitochondria as a function of phosphate concentration are available, the hypothesis is supported by the fact that the model's fits to the observed data are significantly improved when phosphate-dependent control is included compared to the case when it is not. In work not detailed here, a number of similar control expressions were tested using phosphate and other species (ATP, ADP, Mg^2+) as putative controllers of complexes I, III, and IV and of the F[1]F[0] ATPase and the ANT system. It was found that no other choice of controlled enzyme and controller species could provide fits to the data of Figure 5 as reasonable as the hypothesized control of complex III by inorganic phosphate. Thus, 20 independent hypotheses (each formulated as the activity of one of five enzymes depending on one of four species) were quantitatively tested, and 19 were excluded as unable to reproduce the observed data. By increasing the complexity of the model, in particular by hypothesizing phosphate-dependent control of more than one of the enzymes in the system, it is possible to obtain improved fits to the observed data. However, doing so requires introducing additional free parameters that cannot be well identified. For this reason, a minimal model that satisfactorily explains the data within a reasonable error tolerance was developed. An alternative hypothesis for the biophysical mechanism behind phosphate-dependent activation of the electron transport system is that phosphate modulates the redox coupling of cytochromes b and c, as proposed by Bose et al. [10].
It has been observed that binding of phosphate and other ions changes the apparent redox potential of cytochrome c [38]. These data on binding of cytochrome c and phosphate allow one to formalize this alternative hypothesis by appropriately modifying ΔG[o,C3] and ΔG[o,C4] to depend on phosphate concentration. However, a simple model (results not shown) using the linear relationship between the apparent cytochrome c redox potential and phosphate concentration observed by Gopal et al. [38] was unable to reproduce the phosphate-dependent behavior observed by Bose et al. [10]. Thus, the model detailed in this work represents the most parsimonious explanation that was found for the data illustrated in Figures 4 and 5. In addition to reproducing the data used to parameterize the model, the mechanism of phosphate-dependent control of respiration matches observations in permeabilized cardiomyocytes indicating that phosphate represents the major feedback signal at low and medium workloads [7]. As noted above, the model remains unable to reproduce the observed data on membrane potential as phosphate concentration goes to zero. Note that as [Pi][e] → 0, the observed [NADH]/NAD[tot] ratio, which is the driving force for electron transport and proton pumping, is approximately 0.6 for both levels of buffer ADP concentration studied (see Figure 4A). Yet the observed membrane potential, which is ultimately driven by the matrix redox state, is approximately 25 mV lower at [ADP][e] = 1.3 mM than at [ADP][e] = 0. Since there can be no ATP synthesis when [Pi][e] = 0, this behavior cannot be explained by a load on the F[1]F[0] ATPase or the ANT. It is expected that there always exists trace phosphate in the buffer and matrix and that nonviable mitochondria (e.g., mitochondria with compromised membranes) present in the experimental preparation act as ATPases, consuming ATP and generating phosphate [39].
Therefore, to obtain improved fits to the membrane potential data with ADP present at low phosphate concentration, it may be appropriate to include an ATP-consuming reaction in the model of isolated mitochondria. However, introducing an ATP-consuming reaction into the model has the effect of increasing the predicted oxygen consumption. With the current model it is not possible to reproduce both the difference in ΔΨ observed at [Pi][e] = 0 and the oxygen consumption curve by introducing an ATP-consuming reaction (results not shown). Likely, the observed drop in membrane potential for [ADP][e] = 1.3 mM (and [Pi][e] = 0) compared with [ADP][e] = 0 is due to residual phosphate in the bath and in the mitochondrial matrix, as proposed by Bose et al. [10]. Another mechanism that was considered as a possible explanation for the drop in ΔΨ observed when ADP is added to the buffer at [Pi][e] = 0 is that the activity of one or more of the components of the electron transport system is enhanced when [ADP][e] = 0 compared to when [ADP][e] ≠ 0. Using the current model, one can obtain improved fits to the observed data by reducing the complex I or III activity, or by reducing the dehydrogenase activity, when [ADP][e] = 1.3 mM compared to the case when [ADP][e] = 0. Thus, inhibition of the activity of respiratory complexes by ADP represents a possible mechanism to explain the observations of Bose et al. [10] at [Pi][e] = 0. However, the present dataset does not provide enough information to determine which sites in the respiratory chain are modulated by the addition of ADP to the buffer. Further experiments are necessary to test competing hypotheses and to formulate a mechanistic model that explains the phenomenon in detail.
Future Directions
As indicated above, the model remains imperfect and there remains room for improvement in the model's fits to observed data.
One potential avenue for improving the scope and predictive power of the model is to extend the model to include additional components of cardiac energetic metabolism. This task must be undertaken under the guidelines outlined in the Introduction. This operating philosophy behind model development requires that the use of purely data-driven empiricisms be avoided wherever possible. While nonphysical empirical relationships were not introduced in modeling the central components of the current model illustrated in Figure 1, the model is driven by an arbitrary four-parameter function (equation 1) used to represent overall dehydrogenase flux in the isolated system of Bose et al. [10]. This expression invokes a phenomenological dependence of the dehydrogenase flux on phosphate concentration, required to reproduce the observed data. In general, it is often difficult to avoid invoking such driving functions at the boundaries of a given model. Since it is planned to integrate the current model of oxidative phosphorylation with other components of cardiac metabolism [40], the data-driven dehydrogenase flux will be replaced with realistic models of the TCA cycle [12,13,40] and other reactions generating intracellular reducing potential [40]. While the current model was developed to analyze data from isolated mitochondria respiring on complex I substrates, such an integrated model would require a biophysical treatment of FAD(H[2]) redox handling at complex II of the respiratory chain. Thermodynamically balanced flux expressions for complex II flux could be developed in a manner analogous to that for complexes I and III in this study. These steps represent planned progress toward a major long-term goal of the construction of a complete energetic model of the cell [41]. Large-scale models of cellular energetics can be used for a variety of applications. 
For example, the current model may be linked with excitation–contraction models and calcium-dependent control of energy metabolism by extending the model to include sodium and calcium ion exchangers, as has been done in previous models [11,13,40]. Simulation and analysis of cardiac energetics requires integrating metabolic models into models of substrate transport at the cellular and tissue levels [42,43]. Thus, the current model provides a basis for integrating a detailed model of mitochondrial function into multiple-scale models of the heart. While it was determined that the passive potassium channel flux is effectively zero in the model parameterization developed here, modeling the K^+/H^+ exchanger is required to buffer the matrix pH when both ΔΨ and [H^+][x] are treated as state variables. Mitochondrial ATP-dependent potassium channel flux [17,18,37,44–47] will need to be incorporated to investigate the cardiac metabolic response to ischemia, hypoxia, and preconditioning. In addition, kinetic data must be obtained to effectively parameterize and validate the time-dependent behavior of the model. In sum, while a great number of extensions and improvements are possible, and many are planned, the current model represents a foundation for building larger and more complex systems and investigating complex physiological and pathophysiological interactions.
Materials and Methods
The model was implemented, simulated, and analyzed using the MATLAB (The Mathworks, Natick, Massachusetts, United States) computing environment. All calculations were performed on a desktop PC. Computer codes are available from the author upon request. In addition, the model is available in the CellML exchange format at http://www.cellml.org. The author thanks Peter Hunter, Peter Villiger, and the Auckland Bioengineering Research Group for help with porting the model to the CellML model exchange format.
I am grateful to James Bassingthwaighte, Robert Balaban, Saleet Jafri, and Marko Vendelin for valuable discussions. This work was supported by National Institutes of Health grant HL072011. Abbreviations: adenylate kinase; adenine nucleotide translocase; rate of oxygen consumption; phosphate–hydrogen co-transporter. Competing interests. The author has declared that no competing interests exist. Author contributions. DAB conceived and designed the experiments, performed the experiments, analyzed the data, and wrote the paper. A previous version of this article appeared as an Early Online Release on August 4, 2005 (DOI: 10.1371/journal.pcbi.0010036.eor). Citation: Beard DA (2005) A biophysical model of the mitochondrial respiratory system and oxidative phosphorylation. PLoS Comput Biol 1(4): e36. • Qian H, Beard DA, Liang SD. Stoichiometric network theory for nonequilibrium biochemical systems. Eur J Biochem. 2003;270:415–421. [PubMed] • Beard DA, Qian H, Bassingthwaighte JB. Stoichiometric foundation of large-scale biochemical system analysis. In: Ciobanu G, Rozenberg G, editors. Modelling in molecular biology. New York: Springer; 2004. pp. 1–20. • Qian H, Beard DA. Thermodynamics of stoichiometric biochemical networks in living systems far from equilibrium. Biophys Chem. 2005;114:213–220. [PubMed] • Korzeniewski B. Regulation of ATP supply during muscle contraction: Theoretical studies. Biochem J. 1998;330:1189–1195. [PMC free article] [PubMed] • Korzeniewski B. Regulation of ATP supply in mammalian skeletal muscle during resting state→intensive work transition. Biophys Chem. 2000;83:19–34. [PubMed] • Korzeniewski B, Zoladz JA. A model of oxidative phosphorylation in mammalian skeletal muscle. Biophys Chem. 2001;92:17–34. [PubMed] • Saks VA, Kongas O, Vendelin M, Kay L. Role of the creatine/phosphocreatine system in the regulation of mitochondrial respiration. Acta Physiol Scand. 2000;168:635–641. [PubMed] • Vendelin M, Kongas O, Saks V. 
Regulation of mitochondrial respiration in heart cells analyzed by reaction-diffusion model of energy transfer. Am J Physiol Cell Physiol. 2000;278:C747–C764. [PubMed] • Matsuoka S, Sarai N, Jo H, Noma A. Simulation of ATP metabolism in cardiac excitation-contraction coupling. Prog Biophys Mol Biol. 2004;85:279–299. [PubMed] • Bose S, French S, Evans FJ, Joubert F, Balaban RS. Metabolic network control of oxidative phosphorylation: Multiple roles of inorganic phosphate. J Biol Chem. 2003;278:39155–39165. [PubMed] • Magnus G, Keizer J. Minimal model of beta-cell mitochondrial Ca2+ handling. Am J Physiol. 1997;273:C717–C733. [PubMed] • Dudycha SJ. A detailed model of the tricarboxylic acid cycle in heart cells [thesis]. Baltimore: Johns Hopkins University; 2000. 141 p. • Nguyen MHT, Ramay HR, Dudycha SJ, Jafri MS. Modeling the cellular basis of cardiac excitation-contraction coupling and energy metabolism. Recent Res Dev Biophys. 2004;3:125–145. • Cortassa S, Aon MA, Marban E, Winslow RL, O'Rourke B. An integrated model of cardiac mitochondrial energy metabolism and calcium dynamics. Biophys J. 2003;84:2734–2755. [PMC free article] [PubMed] • Cortassa S, Aon MA, Winslow RL, O'Rourke B. A mitochondrial oscillator dependent on reactive oxygen species. Biophys J. 2004;87:2060–2073. [PMC free article] [PubMed] • Hill TL. Free energy transduction in biology: The steady-state kinetic and thermodynamic formalism. New York: Academic Press; 1977. 229 p. • Garlid KD, Paucek P. Mitochondrial potassium transport: The K(+) cycle. Biochim Biophys Acta. 2003;1606:23–41. [PubMed] • Garlid KD, Dos Santos P, Xie ZJ, Costa AD, Paucek P. Mitochondrial potassium transport: The role of the mitochondrial ATP-sensitive K(+) channel in cardiac function and cardioprotection. Biochim Biophys Acta. 2003;1606:1–21. [PubMed] • Wilson DF, Rumsey WL, Green TJ, Vanderkooi JM. 
The oxygen dependence of mitochondrial oxidative phosphorylation measured by a new optical method for measuring oxygen concentration. J Biol Chem. 1988;263:2712–2718. [PubMed] • Gnaiger E. Bioenergetics at low oxygen: Dependence of respiration and phosphorylation on oxygen and adenosine diphosphate supply. Respir Physiol. 2001;128:277–297. [PubMed] • Gnaiger E, Kuznetsov AV. Mitochondrial respiration at low levels of oxygen and cytochrome c. Biochem Soc Trans. 2002;30:252–258. [PubMed] • Trumpower BL. The protonmotive Q cycle. Energy transduction by coupling of proton translocation to electron transfer by the cytochrome bc1 complex. J Biol Chem. 1990;265:11409–11412. [PubMed] • Crofts AR. The cytochrome bc1 complex: Function in the context of structure. Annu Rev Physiol. 2004;66:689–733. [PubMed] • Michel H. The mechanism of proton pumping by cytochrome c oxidase. Proc Natl Acad Sci U S A. 1998;95:12819–12824. [PMC free article] [PubMed] • Michel H. Cytochrome c oxidase: Catalytic cycle and mechanisms of proton pumping—A discussion. Biochemistry. 1999;38:15129–15140. [PubMed] • Wilson DF, Erecinska M. The oxygen dependence of cellular energy metabolism. Adv Exp Med Biol. 1986;194:229–239. [PubMed] • Wilson DF, Rumsey WL. Factors modulating the oxygen dependence of mitochondrial oxidative phosphorylation. Adv Exp Med Biol. 1988;222:121–131. [PubMed] • Greenbaum NL, Wilson DF. The distribution of inorganic phosphate and malate between intra- and extramitochondrial spaces. Relationship with the transmembrane pH difference. J Biol Chem. 1985;260:873–879. [PubMed] • Lakshminarayanaiah N. Transport phenomena in membranes. New York: Academic Press; 1969. 517 p. • Fall CP, Wagner J, Marland E, editors. Computational cell biology. New York: Springer; 2002. 468 p. • O'Rourke B. Pathophysiological and protective roles of mitochondrial ion channels. J Physiol. 2000;529:23–36. [PMC free article] [PubMed] • Korzeniewski B. 
Theoretical studies on the regulation of oxidative phosphorylation in intact tissues. Biochim Biophys Acta. 2001;1504:31–45. [PubMed] • Lee AC, Zizi M, Colombini M. Beta-NADH decreases the permeability of the mitochondrial outer membrane to ADP by a factor of 6. J Biol Chem. 1994;269:30974–30980. [PubMed] • Gentet LJ, Stuart GJ, Clements JD. Direct measurement of specific membrane capacitance in neurons. Biophys J. 2000;79:314–320. [PMC free article] [PubMed] • Munoz DR, de Almeida M, Lopes EA, Iwamura ES. Potential definition of the time of death from autolytic myocardial cells: A morphometric study. Forensic Sci Int. 1999;104:81–89. [PubMed] • da Silva MM, Sartori A, Belisle E, Kowaltowski AJ. Ischemic preconditioning inhibits mitochondrial respiration, increases H2O2 release, and enhances K+ transport. Am J Physiol Heart Circ Physiol. 2003;285:H154–H162. [PubMed] • Mironova GD, Negoda AE, Marinov BS, Paucek P, Costa AD, et al. Functional distinctions between the mitochondrial ATP-dependent K+ channel (mitoKATP) and its inward rectifier subunit (mitoKIR). J Biol Chem. 2004;279:32562–32568. [PubMed] • Gopal D, Wilson GS, Earl RA, Cusanovich MA. Cytochrome c: Ion binding and redox properties. Studies on ferri and ferro forms of horse, bovine, and tuna cytochrome c. J Biol Chem. 1988;263:11652–11656. [PubMed] • Vendelin M, Lemba M, Saks VA. Analysis of functional coupling: Mitochondrial creatine kinase and adenine nucleotide translocase. Biophys J. 2004;87:696–713. [PMC free article] [PubMed] • Jafri MS, Dudycha SJ, O'Rourke B. Cardiac energy metabolism: Models of cellular respiration. Annu Rev Biomed Eng. 2001;3:57–81. [PubMed] • Bassingthwaighte JB. The modelling of a primitive 'sustainable' conservative cell. Philos Trans R Soc London A. 2001;359:1055–1072. [PMC free article] [PubMed] • Beard DA, Schenkman KA, Feigl EO. Myocardial oxygenation in isolated hearts predicted by an anatomically realistic microvascular transport model. Am J Physiol Heart Circ Physiol. 
2003;285:H1826–H1836. [PubMed] • Schenkman KA, Beard DA, Ciesielski WA, Feigl EO. Comparison of buffer and red blood cell perfusion of guinea pig heart oxygenation. Am J Physiol Heart Circ Physiol. 2003;285:H1819–H1825. [PubMed] • Sasaki N, Sato T, Marban E, O'Rourke B. ATP consumption by uncoupled mitochondria activates sarcolemmal K(ATP) channels in cardiac myocytes. Am J Physiol Heart Circ Physiol. 2001;280:H1882–H1888. • Belisle E, Kowaltowski AJ. Opening of mitochondrial K+ channels increases ischemic ATP levels by preventing hydrolysis. J Bioenerg Biomembr. 2002;34:285–298. [PubMed] • Peart JN, Gross GJ. Sarcolemmal and mitochondrial K(ATP) channels and myocardial ischemic preconditioning. J Cell Mol Med. 2002;6:453–464. [PubMed] • Gross ER, Peart JN, Hsu AK, Grover GJ, Gross GJ. K(ATP) opener-induced delayed cardioprotection: Involvement of sarcolemmal and mitochondrial K(ATP) channels, free radicals and MEK1/2. J Mol Cell Cardiol. 2003;35:985–992. [PubMed] • Tomashek JJ, Brusilow WS. Stoichiometry of energy coupling by proton-translocating ATPases: A history of variability. J Bioenerg Biomembr. 2000;32:493–500. [PubMed] • Alberty RA. Thermodynamics of biochemical reactions. Hoboken (New Jersey): John Wiley-Interscience; 2003. 397 p. • Vinnakota KC, Bassingthwaighte JB. Myocardial density and composition: A basis for calculating intracellular metabolite concentrations. Am J Physiol Heart Circ Physiol. 2004;286:H1742–H1749. [PubMed]
Cohomology of algebraic varieties Seminar Room 1, Newton Institute My lecture will try to explain the miracle of the many ways to compute the cohomology of algebraic varieties, and associated structures.
GIVEAWAY: Breville Sous Chef Food Processor! UPDATE: The winner of the Breville Sous Chef Food Processor is: #50 – Ashley. Her comment about being a Facebook follower was the winner, but this was the last thing she baked: “I baked butterscotch and chocolate chunk cookies! I am in desperate need of a food processor :)” Congratulations Ashley! Be sure to reply to the email you’ve been sent to confirm your mailing address, and your new food processor will be shipped off to you! It’s Monday, it’s officially fall, it’s getting cooler, and before we know it the holidays are going to be here. ‘Tis the season for soups, sauces and stews! To celebrate the change in seasons and flavors, I’m giving away this amazing Breville Sous Chef Food Processor. It was recently given the highest rating by Consumer Reports – it has a 1200-watt motor, comes with a 16-cup work bowl, along with a 2.5-cup mini prep bowl (both are BPA-free!), 5 multi-function disks and 3 blades. It also has a digital timer that counts both up and down, so you can keep track of your mixing time. This food processor will serve all of your kitchen needs and then some – a true workhorse that you’re going to love. I’m excited to be giving one away to a lucky reader; read below for details on how to enter to win! Giveaway Details One winner will receive one (1) Breville Sous Chef Food Processor. How to Enter To enter to win, simply leave a comment on this post and answer the question: “What’s the last thing you baked?” You can receive up to FIVE additional entries to win by doing the following: 1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment. 2. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment. 3. Tweet the following about the giveaway: “Enter to win a @BrevilleUSA Sous Chef Food Processor from @browneyedbaker! http://wp.me/p1rsii-574”. 
Come back and let me know you’ve Tweeted in an additional comment. 4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment. 5. Follow Brown Eyed Baker on Pinterest. Come back and let me know you became a fan in an additional comment. Deadline: Sunday, September 23, 2012 at 11:59pm EST. Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 24 hours, another winner will be selected. Disclaimer: This giveaway is sponsored by Brown Eyed Baker. Good Luck!! 6,345 Responses to “GIVEAWAY: Breville Sous Chef Food Processor!” 1. I just LIKED your facebook page!!! 2. I’m subscribed via email. 3. I’m also following you on Pinterest! Thanks! Hope I win! 4. I follow beb on pinterest. 5. I’m already a follower on Pinterest. Lots of lovely stuff there! 6. The last thing I baked was the vanilla bean latte cake for my best friend’s bridal shower! 7. I follow you on Google Reader 10. I made pumpkin bread this weekend! 11. I am now following you on pinterest 12. The last thing I baked were my Mom’s oatmeal cookies, so good. Yum 13. i subscribe via email. n the last thing i baked was your strawberry cupcake with meringue strawberry cream. 14. I follow you on Facebook Samii Alexis Meyer 15. I follow you on Pintrest 16. Just yesterday I baked a delicious pound cake! 17. The last thing I baked was funfetti cupcakes for my twins’ 13th birthday. It’s their all-time favorite cake, 18. Yesterday I made a ricotta pie 19. I had a 2 day baking marathon last week because I was going to Dafuskie Island with some friends… I made your Almond-Apple Crisp, but added the shortbread crust from the Russian Grandmothers’ Apple Pie Cake on the bottom only, and it was delicious! I also made your Buckeye Peanut Butter Cup Cookies which were a huge hit. 
In addition, I made some Ruby Jewel Cookies, No Bake Nutella Cheesecake and my all time favorite – Brussels Mint Cookies. We ate really well, and finished it all off with great desserts! 20. The last thing I baked was homemade graham crackers. 22. I subscribe to your e-mails 23. I follow you on pintrest too 24. last thing I baked was Monster Cookies. 25. I made your peach crumb bars! 26. The last thing I baked was a lemon cake with lemon glaze this past weekend. 27. I follow your blog faithfully! 28. I already subscribe to your blog. Love it! 29. I follow you on facebook already 31. i’ve subscribed to emails 32. I follow you on pinterest:) 34. have been following on pinterest 37. The last thing I baked was banana nut bread. 39. I baked cinnamon scones and chocolate chunk cookies! 42. Yesterday I baked tomato corn pie and gooey butter cake (first time – well worth the effort!) 43. Last thing I baked was a red velvet cake 44. The last thing I baked was a birthday cake for my mom last Saturday. It was vanilla bean with peach filling. 46. The last thing I baked where Bajan Sweet breads. Hmmm 50. I follow Brown Eyed Baker via email 60. last thing I baked – whole wheat garlic and rosemary bread, my favorite! this time i added a mozzarella swirl…yum! keep the great recipes coming! 61. The last thing I baked was German Chocolate Bars on Sunday night. Decadently delicious! 64. I subscribed to your emails! 65. I tweeted about the giveaway. 66. The last thing I baked was a chocolate cake. Mary H. 67. The last thing I baked was peanut butter finger brownies. 69. The last think I baked were Banana Pudding Cookies. They were a huge hit! 71. The last thing I baked was chocolate chip cookies for a Children’s event for our Church last week. But I am also getting ready to bake a cake this morning for a birthday boy tomorrow. 72. I subscribe to your website. 73. Hi I am now following you on Pinterest! I also receive emails…Recipes are awesome! 
I also am a fan on the Last thing I baked was your Apple Cinnamon Bread – my kids devoured it! 75. I tweeted about your contest on Twitter. 76. I baked the Chunky Blondies from the Boston Globe food section. They’re delicious! I subscribe via email. 77. I am a fan of yours on FaceBook. 78. Last thing I baked was a peaches and cream cake. Success! 79. I love it. The last thing I baked was an apple cake! 80. I baked Italian Knots this weekend. All ready follow you on Pinterest, Facebook and receive your email updates. 81. Last thing I baked was a Mennonite Apple Cake with homemade caramel sauce, and I follow you via e-mail! 82. The last thing I baked were cookies. I am still fighting with my rental houses oven so I have not been baking much. 83. I follow your blog by email. 84. I baked cranberry scones and a lime pie! 85. The last thing I baked was chocolate chip cookies for my children as a surprise to come home to after school. The whole house was boxed up for our move and they didn’t suspect it. 86. The last thing I baked was chocolate chips cookies in yet another attempt to find the best recipe …and this wasn’t it either! 87. I follow you on Pinterest also. 88. I already receive you post by emai! 89. I follow you through e-mail! 91. My old food processor is on its last legs! 92. I follow you on twitter (mishelle42) 93. And I baked bread with my son… I also follow on pinterest 94. I follow you on facebook too 95. Following on twitter @peggyherself 96. I baked Leftover Oatmeal Muffins—family’s favorite! 97. Do Rice Krispies count as baked goods? 98. I baked a delicious sweet potato pie and today I am going to bake a bundt cake (not sure what kind yet) 99. Twitted about giveaway! Couldn’t get copy of tweet link on my iPad. 100. The last thing I baked was a batch of chocolate chip cookies!
integrate 1/(2 + 3 sin x) dx

July 19th 2011, 10:56 PM
integrate 1/(2 + 3 sin x) dx
Please help me get started on this one... I don't see how to apply substitution, integration by parts, partial fractions, trigonometric substitution, or any of the standard techniques...
$\int \frac{1}{2 + 3 \cdot \sin x} \operatorname{d}x$

July 19th 2011, 11:02 PM
Prove It
Re: integrate 1/(2 + 3 sin x) dx
See here, click "Show Steps".

July 19th 2011, 11:03 PM
Re: integrate 1/(2 + 3 sin x) dx
The substitution I described in... ... transforms in all cases integrals of this type into an integral containing rational functions in t...
Kind regards
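The substitution referred to in the last reply is the tangent half-angle (Weierstrass) substitution $t = \tan(x/2)$, under which $\sin x = 2t/(1+t^2)$ and $dx = 2\,dt/(1+t^2)$. Carrying it through for this integrand (a worked sketch, not the linked step-by-step solution):

```latex
\int \frac{dx}{2 + 3\sin x}
  = \int \frac{\frac{2\,dt}{1+t^2}}{2 + \frac{6t}{1+t^2}}
  = \int \frac{dt}{t^2 + 3t + 1}
  = \int \frac{du}{u^2 - \left(\frac{\sqrt{5}}{2}\right)^2}
    \quad \left(u = t + \frac{3}{2}\right)
  = \frac{1}{\sqrt{5}}
    \ln\left|\frac{2\tan(x/2) + 3 - \sqrt{5}}{2\tan(x/2) + 3 + \sqrt{5}}\right| + C.
```

The rational integrand is handled by completing the square and applying $\int du/(u^2 - a^2) = \frac{1}{2a}\ln\left|\frac{u-a}{u+a}\right| + C$ with $a = \sqrt{5}/2$.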
Quotations by Jacob Bernoulli We define the art of conjecture, or stochastic art, as the art of evaluating as exactly as possible the probabilities of things, so that in our judgments and actions we can always base ourselves on what has been found to be the best, the most appropriate, the most certain, the best advised; this is the only object of the wisdom of the philosopher and the prudence of the statesman. Ars Conjectandi It seems that to make a correct conjecture about any event whatever, it is necessary to calculate exactly the number of possible cases and then to determine how much more likely it is that one case will occur than another. Ars Conjectandi The sum of an infinite series whose final term vanishes perhaps is infinite, perhaps finite. Ars Conjectandi Even as the finite encloses an infinite series And in the unlimited limits appear, So the soul of immensity dwells in minutia And in the narrowest limits no limits inhere. What joy to discern the minute in infinity! The vast to perceive in the small, what divinity! Ars Conjectandi Eadem mutata resurgo Though changed I shall rise the same Inscribed on his tomb in the Münster, Basel, with an equiangular spiral, in imitation of Archimedes. JOC/EFR December 2013
[Lapack] LAPACK meeting notes, 17 March.

Many people attended, but distressingly few people at UCB wore green.

Agenda:
UCB: DoE notes
UCB: DARPA visit at UCB next week; HPCS pitch
UCB: GAMM meeting in Berlin
UTK: Remi Delmas's wrappers
UTK: Divide and conquer failure report and performance
UCB: Technical progress
UTK: Multi-precision source generation

* UCB: DoE notes
** Basic grant status
Program manager just returned from vacation; Jack Dongarra will ping him when appropriate.
** SciDAC status from UTK
230 SciDAC proposals: 120 from labs and 110 from universities. Announcements will be in June, but there is an April meeting to discuss the proposals. The institute winners will be announced on Monday and must be ready to present on Friday.

* UCB: DARPA visit at UCB next week; HPCS pitch
How do we pitch the project to the HPCS folks? Hardware will not exist for quite some time.
** Performance modeling
The HPCS systems are covered by NDAs, so we cannot publish models of ScaLAPACK and PBLAS performance that use features of specific HPCS systems. Modeling non-disclosable systems that do not exist with non-disclosable features is a challenge. [Serving as a cheap labor arm of some companies sucks.]
*** Users are requesting better models.
Later in the meeting, Xiaoye Li mentioned that one user, Julian Borril at LBNL, wants more detailed performance models. His problem of interpreting CMB data involves symmetric inversion, eigenvalue computations, and many low-level PBLAS operations. He needs better models to predict how his codes will perform on larger systems and with more data.
** Asynchrony and heterogeneous systems
Some of these non-disclosable systems may or may not require greater asynchrony to achieve full performance. So exploiting asynchrony may or may not be more important than it already is. 
Also, following some discussion of the same old broken ideas on the 754 mailing list, John Lewis asked in email about a possible future Cray system that may or may not be related to certain other topics. The system may or may not have a vector coprocessor with different floating-point semantics than the main COTS processor. Jim Demmel volunteered to relay the historical horror stories regarding heterogeneous floating-point semantics to the DARPA folks next week.

[Amusing (?) note: There's another company named Clearspeed retrying the same basic idea. They have a limited Maspar MP-2 on a PCI-X card (HTX in development) and yet another damned SIMD C variant for programming the thing. The public information is enough to know this is the same old idea, but this company at least claims full IEEE-754 compliance. Also, many desktops already have heterogeneous floating-point units in one system: the x87 and SSE FPUs. The GNU compilers have tried to use both units at once, but they dropped that idea because of performance problems and not numerical ones. The performance killer is comparisons between values in separate units.]

** Language issues
Going back to the heady days of yore, each new, non-disclosable system has its own new language semi-structured to expose each new, non-disclosable feature. These languages should make it easier to exploit the unknown features on unknown hardware that doesn't exist yet. Jack Dongarra suggests we propose to rewrite the three amigos in these HPCS languages. Jim Demmel also wants to investigate Hessenberg QR reductions.

[I nominate Chevy Chase to be re-written in Fortress.]

** Fault tolerance
UTK is interested in three levels of fault tolerance on massive systems:
1) automatic checkpointing and restarting, 2) log-based resumption, and 3) library-level solutions.
The first is the classical solution. Some monitoring system saves entire process images and restarts them when a node fails. Parallel systems need consistency between nodes. 
The second replays all the messages sent to a node before its failure to bring it back up to its pre-failure state. The third requires redundant representations of the input data along with more algorithmic and development work.

* UCB: GAMM meeting in Berlin
Send Jim your slides from SIAM PP04.

* UTK: Remi Delmas's wrappers
Slides sent to some subset of people. Remi Delmas's script produces an intermediate form and then C and Matlab wrappers. These are proper low-level wrappers and not a higher-level interface. There is little error checking; you can pass the wrong types in and the code will overwrite memory. Remi and Julien Langou have a higher-level Matlab interface to the eigenvalue routines which Julien demonstrated at Berkeley. Julien has sent the higher-level routine to show that writing higher-level wrappers is a non-trivial task.

* UTK: Divide and conquer failure email and performance
Julien Langou discussed a report of divide and conquer failures with different compilers, precisions, and systems. Jason Riedy pointed out that we know how to convince most compilers to break just about any code. Julien has since mailed the report to the UCB mailing lists.

[Note: All mailings larger than 40k sit in my mail until I approve them. Please send _pointers_ to information whenever reasonable. -- ejr]

Jack Dongarra notes that Remi's Matlab interface makes testing and timing simple. They were surprised to see that divide and conquer applied to a symmetrized rand(n,n) (uniformly random entries) performs about 3x faster than MRRR. For the Gaussian ensemble, D&C slightly out-performs MRRR. This is rather surprising to the folks at UCB, who have performance data showing the opposite situation.

* UCB: Technical progress
** Least-squares refinement
Least-squares refinement is nearing algorithmic completion.
** Interface plans
The iterative refinement routines return many error estimates and condition numbers. 
UCB proposes to return them in an array rather than as separate parameter arrays. The array would contain the following information for each right-hand side:

1: "Guaranteed" normwise error estimate
2: "Guaranteed" componentwise error estimate
3: Componentwise backward error
4: RCOND = 1/kappa_inf(R*A*C) (traditional RCOND)
5: RCOND_NRM = 1/kappa_inf(Rs*A) where Rs equilibrates the rows in the 1-norm.
6: RCOND_CMP = 1/kappa_inf(Rs*A*diag(X)).
7: 1/final pivot growth
8: Raw normwise error estimate
9: Raw componentwise error estimate

UCB will send a more detailed proposal. The least squares refinement will return even more bounds.

** Testing plans
Add new routines to existing tests to ensure all input options work. To test new numerical functionality, we will construct special systems (e.g. Hilbert) and test returned answers and bounds against explicit formulas.

* UTK: Multi-precision source generation
Yozo Hida sent a pointer to his Perl script around. The script transforms double and double-complex routines into single, quad, and other precisions. Julie Langou is reading Yozo's script and will compare it to other options, including NAG's tools.
What is really S×F ?
June 28th 2008, 07:02 PM
What is really S×F ?
Let S be a σ-algebra in X and F be a σ-algebra in Y. S×F is defined as the smallest σ-algebra in X × Y containing all measurable rectangles. Is it true that S×F is equal to the set of all countable unions of measurable rectangles? If so could you prove it, if not what is a contradictory example.
Expressing the performance in diff. notations

August 21st, 2009, 11:26 AM #1 Junior Member Join Date Aug 2009
Expressing the performance in diff. notations
When we express the performance of an algorithm in terms of average, worst, and best cases, we often use the O-notation; however, I sometimes see other notations and I do not know why we use them. Any guidance is appreciated. Thanks in advance

August 21st, 2009, 12:45 PM #2 Member + Join Date Oct 2006
Re: Expressing the performance in diff. notations
Ω(f(n)) (Omega notation) - the algorithm runs at least f(n) steps (asymptotically - meaning, there is some k for which for every x > k the algorithm runs at least f(x) steps). This is, of course, a simplification, but if you are trying to find an algorithm to solve a problem and not just analyze a given algorithm, it is often useful to know whether you can or can't find an algorithm that solves the problem in less than f(n) steps.
Θ(f(n)) (Theta notation) - best and worst complexities exhibit the same asymptotic behavior (grow at about the same speed). If you have an algorithm for which you want to show that it will run the same time for any given input of length n, this notation is useful. Usually in the "real world" people want their algorithms to run as fast as possible, and therefore are satisfied with "it won't run more than f(n) steps (asymptotically)", but the Θ(f(n)) notation gives a little bit more data, saying "it won't run less than f(n) steps (asymptotically)".
Logically you might say that: if an algorithm runs Ω(f(n)) steps and it also runs O(f(n)) steps, then this algorithm runs Θ(f(n)) steps.
Other notations are o ("little-oh") and ω ("little-omega"), which I find less useful in real-world terms; others might think differently:
o(f(n)) means that the algorithm's complexity is O(f(n)) but not Θ(f(n)).
ω(f(n)) means that the algorithm's complexity is Ω(f(n)) but not Θ(f(n)).
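The rule "O(f(n)) and Ω(f(n)) together give Θ(f(n))" can be checked empirically with a small step-counting sketch (the `insertion_sort_steps` helper below is illustrative, not from the thread): on reverse-sorted input, insertion sort performs exactly n(n-1)/2 comparisons, which is bounded above by 1·n² and below by (1/4)·n² for n >= 2, exhibiting witnesses for both O(n²) and Ω(n²).

```python
def insertion_sort_steps(arr):
    """Sort a copy of arr with insertion sort, returning the number of
    comparisons performed (our measure of 'steps')."""
    a = list(arr)
    steps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            steps += 1                      # one comparison
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return steps

# Reverse-sorted input is the worst case: exactly n(n-1)/2 comparisons.
for n in (10, 100, 500):
    s = insertion_sort_steps(range(n, 0, -1))
    assert s == n * (n - 1) // 2
    assert s <= 1.00 * n * n   # O(n^2) witness: c = 1
    assert s >= 0.25 * n * n   # Omega(n^2) witness: c = 1/4 (for n >= 2)
```

Since the same f(n) = n² works for both bounds, the worst case is Θ(n²); note this says nothing about the best case, which for insertion sort is only n - 1 comparisons on already-sorted input.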
STP1411: Prediction of Crack-Opening Stress Levels for Service Loading Spectra
Khalil, M, Research Assistant, University of Waterloo, Waterloo, Ontario
DuQuesnay, D, Professor, Royal Military College of Canada, Kingston, Ontario
Topper, TH, Professor Emeritus, University of Waterloo, Waterloo, Ontario
Pages: 15
Published: Jan 2002
The fatigue lives of automotive components subjected to variable amplitude service loading are assumed to be dominated by a process of small crack growth. It is generally accepted that crack closure is responsible for most of the variation in fatigue crack growth rates and fatigue lives. This investigation examined the effect of different types and magnitudes of service loadings on crack closure behavior. Three different Society of Automotive Engineers (SAE) standard service load histories with different mean stresses were applied to notched specimens of a 2024-T351 aluminum alloy. The three spectra are the SAE Grapple Skidder history, which has a positive mean stress, the SAE Log Skidder history, which has a zero mean stress, and the inverse of the SAE Grapple Skidder history, which has a negative mean stress. A curve of maximum stress in the history versus fatigue life was constructed for each spectrum. The crack-opening stress (COS) levels were measured at frequent intervals in order to capture the behavior of the opening stress for each spectrum, for each of a set of scaled histories with different maximum stress ranges. A crack growth analysis based on a fracture mechanics approach was used to model the fatigue behavior of the aluminum alloy specimens for the given load spectra and stress ranges. The crack growth analysis was based on an effective strain-based intensity factor, a crack growth rate curve obtained during closure-free loading cycles, and a local notch strain calculation based on Neuber's rule. 
The COS levels were modeled assuming that the COS follows an exponential build-up formula that is a function of the difference between the current crack-opening stress and the steady-state crack-opening stress of the given cycle, unless this cycle is below the intrinsic stress range or the maximum stress is below zero. However, the build-up only occurs when the steady-state crack-opening stress level for the given cycle is higher than the previously calculated crack-opening stress. The modeled crack-opening stress level was in good agreement with the measured crack-opening stress. The average of the measured crack-opening stresses and those calculated using the model were nearly the same for all the histories examined. When these average crack-opening stresses were used in the life prediction model they gave predictions as good as those obtained by modeling COS on a cycle-by-cycle basis. In the interest of simplifying the use of COS in design, the average COS was correlated with the frequency of occurrence of the cycle reducing the COS to the average level. The use of a COS level corresponding to the one-in-200 cycle gave a conservative estimate of average COS for all the histories.

Keywords: service spectrum, crack opening stress, effective strain intensity factor, steady state crack-opening stress

Paper ID: STP10612S
Committee/Subcommittee: E08.06
DOI: 10.1520/STP10612S
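The exponential build-up rule described in the abstract can be sketched in a few lines. This is only an illustration of the stated logic, not the authors' code: the build-up rate k, the intrinsic stress range, and all stress values below are hypothetical placeholders, and the guard conditions follow the abstract (no build-up for a cycle below the intrinsic stress range, for negative maximum stress, or when the cycle's steady-state COS does not exceed the current COS).

```python
import math

def update_cos(cos_prev, cos_ss, delta_s, s_max,
               delta_s_intrinsic=50.0, k=0.002):
    """One-cycle update of the modeled crack-opening stress (COS).

    cos_prev: COS after the previous cycle
    cos_ss:   steady-state COS for the current cycle
    delta_s:  stress range of the current cycle
    s_max:    maximum stress of the current cycle
    delta_s_intrinsic and k are hypothetical illustration values.
    """
    # No build-up below the intrinsic stress range or for negative max stress.
    if delta_s < delta_s_intrinsic or s_max < 0.0:
        return cos_prev
    # Build-up only occurs when the steady-state level exceeds the current COS.
    if cos_ss <= cos_prev:
        return cos_prev
    # Exponential build-up toward the steady-state level: the gap shrinks by e^-k.
    return cos_prev + (cos_ss - cos_prev) * (1.0 - math.exp(-k))
```

Iterating this update cycle by cycle makes the modeled COS relax exponentially toward the steady-state level of the applied spectrum, which is the behavior the abstract describes.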
Tests of Significance

Once sample data has been gathered through an observational study or experiment, statistical inference allows analysts to assess evidence in favor of some claim about the population from which the sample has been drawn. The methods of inference used to support or reject claims based on sample data are known as tests of significance.

Every test of significance begins with a null hypothesis H[0]. H[0] represents a theory that has been put forward, either because it is believed to be true or because it is to be used as a basis for argument, but has not been proved. For example, in a clinical trial of a new drug, the null hypothesis might be that the new drug is no better, on average, than the current drug. We would write H[0]: there is no difference between the two drugs on average.

The alternative hypothesis, H[a], is a statement of what a statistical hypothesis test is set up to establish. For example, in a clinical trial of a new drug, the alternative hypothesis might be that the new drug has a different effect, on average, compared to that of the current drug. We would write H[a]: the two drugs have different effects, on average. The alternative hypothesis might also be that the new drug is better, on average, than the current drug. In this case we would write H[a]: the new drug is better than the current drug, on average.

The final conclusion once the test has been carried out is always given in terms of the null hypothesis. We either "reject H[0] in favor of H[a]" or "do not reject H[0]"; we never conclude "reject H[a]", or even "accept H[a]". If we conclude "do not reject H[0]", this does not necessarily mean that the null hypothesis is true; it only suggests that there is not sufficient evidence against H[0] in favor of H[a]. Rejecting the null hypothesis, then, suggests that the alternative hypothesis may be true. (Definitions taken from Valerie J. Easton and John H.
McColl's Statistics Glossary v1.1)

Hypotheses are always stated in terms of a population parameter, such as the mean μ. An alternative hypothesis may be one-sided or two-sided. A one-sided hypothesis claims that a parameter is either larger or smaller than the value given by the null hypothesis. A two-sided hypothesis claims that a parameter is simply not equal to the value given by the null hypothesis -- the direction does not matter.

Hypotheses for a one-sided test for a population mean take the following form:
H[0]: μ = k, H[a]: μ > k, or
H[0]: μ = k, H[a]: μ < k.
Hypotheses for a two-sided test for a population mean take the following form:
H[0]: μ = k, H[a]: μ ≠ k.

A confidence interval gives an estimated range of values which is likely to include an unknown population parameter, the estimated range being calculated from a given set of sample data. (Definition taken from Valerie J. Easton and John H. McColl's Statistics Glossary v1.1)

Suppose a test has been given to all high school students in a certain state. The mean test score for the entire state is 70, with standard deviation equal to 10. Members of the school board suspect that female students have a higher mean score on the test than male students, because the mean score of a random sample of 64 female students is equal to 73. The null hypothesis H[0] claims that there is no difference between the mean score for female students and the mean for the entire population, so that H[0]: μ = 70; the alternative hypothesis claims that the mean for female students is higher: H[a]: μ > 70.

Significance Tests for Unknown Mean and Known Standard Deviation

Once null and alternative hypotheses have been formulated for a particular claim, the next step is to compute a test statistic. For claims about a population mean from a population with a normal distribution or for any sample with large sample size n (for which the sample mean will follow a normal distribution by the Central Limit Theorem), if the standard deviation σ is known, the appropriate significance test is known as the z-test, where the test statistic is defined as
z = (x̄ - μ[0])/(σ/sqrt(n)).
The test statistic follows the standard normal distribution (with mean = 0 and standard deviation = 1).
The test statistic z is used to compute the P-value for the standard normal distribution, the probability that a value at least as extreme as the test statistic would be observed under the null hypothesis. Given the null hypothesis that the population mean μ equals μ[0], the P-values for testing H[0] against each of the possible alternative hypotheses are:
P(Z > z) for H[a]: μ > μ[0]
P(Z < z) for H[a]: μ < μ[0]
2P(Z > |z|) for H[a]: μ ≠ μ[0].
The probability is doubled for the two-sided test, since the two-sided alternative hypothesis considers the possibility of observing extreme values on either tail of the normal distribution.

In the test score example above, where the sample mean equals 73 and the population standard deviation is equal to 10, the test statistic is computed as follows:
z = (73 - 70)/(10/sqrt(64)) = 3/1.25 = 2.4.
Since this is a one-sided test, the P-value is equal to the probability of observing a value greater than 2.4 in the standard normal distribution, or P(Z > 2.4) = 1 - P(Z < 2.4) = 1 - 0.9918 = 0.0082. The P-value is less than 0.01, indicating that it is highly unlikely that these results would be observed under the null hypothesis. The school board can confidently reject H[0] given this result, although they cannot conclude any additional information about the mean of the distribution.

Significance Levels

The significance level α for a given hypothesis test is a value for which a P-value less than or equal to α is considered statistically significant. In the test score example, the P-value is 0.0082, so the probability of observing such a value by chance is less than 0.01, and the result is significant at the 0.01 level.

In a one-sided test, α corresponds to a critical value z^* such that P(Z > z^*) = α. For example, if the desired significance level for a result is 0.05, the corresponding value for z must be greater than or equal to z^* = 1.645 (or less than or equal to -1.645 for a one-sided alternative claiming that the mean is less than the null hypothesis).
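The three P-value cases are easy to check numerically. The sketch below, plain Python using the standard library's NormalDist for the normal CDF, reproduces the test-score example; the helper name z_test is ours, not part of the original text.

```python
from statistics import NormalDist

def z_test(xbar, mu0, sigma, n, alternative="greater"):
    """One-sample z-test: returns the test statistic and its P-value."""
    z = (xbar - mu0) / (sigma / n ** 0.5)
    Phi = NormalDist().cdf
    if alternative == "greater":        # H_a: mu > mu0
        p = 1 - Phi(z)
    elif alternative == "less":         # H_a: mu < mu0
        p = Phi(z)
    else:                               # H_a: mu != mu0 (two-sided)
        p = 2 * (1 - Phi(abs(z)))
    return z, p

# Test-score example: sample mean 73, mu0 = 70, sigma = 10, n = 64
z, p = z_test(73, 70, 10, 64, "greater")
print(z, p)  # z = 2.4, P-value ≈ 0.0082
```

Using the exact normal CDF rather than a printed table gives P ≈ 0.00820, in agreement with the 0.0082 above.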
For a two-sided test, we are interested in the probability that 2P(Z > z^*) = α, so the critical value z^* corresponds to the α/2 significance level. To achieve a significance level of 0.05 for a two-sided test, the absolute value of the test statistic (|z|) must be greater than or equal to the critical value 1.96 (which corresponds to the level 0.025 for a one-sided test).

Another interpretation of the significance level α, based in decision theory, is that α corresponds to the value for which one chooses to reject or accept the null hypothesis H[0]. In the above example, the value 0.0082 would result in rejection of the null hypothesis at the 0.01 level. The probability that this is a mistake -- that, in fact, the null hypothesis is true given the z-statistic -- is less than 0.01. In decision theory, this is known as a Type I error. The probability of a Type I error is equal to the significance level α.

Of all of the individuals who develop a certain rash, suppose the mean recovery time for individuals who do not use any form of treatment is 30 days with standard deviation equal to 8. A pharmaceutical company manufacturing a certain cream wishes to determine whether the cream shortens, extends, or has no effect on the recovery time. The company chooses a random sample of 100 individuals who have used the cream, and determines that the mean recovery time for these individuals was 28.5 days. Does the cream have any effect?

Since the pharmaceutical company is interested in any difference from the mean recovery time for all individuals, the alternative hypothesis H[a] is two-sided: H[a]: μ ≠ 30. The test statistic is
z = (28.5 - 30)/(8/sqrt(100)) = -1.5/0.8 = -1.875.
The P-value for this statistic is 2P(Z > 1.875) = 2(1 - P(Z < 1.875)) = 2(1 - 0.9693) = 2(0.0307) = 0.0614. This is not significant at the 0.05 level, although it is significant at the 0.1 level.

Decision theory is also concerned with a second error possible in significance testing, known as Type II error. Contrary to Type I error, Type II error is the error made when the null hypothesis is incorrectly accepted. The probability of correctly rejecting the null hypothesis when it is false, the complement of the Type II error, is known as the power of a test.
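As a numerical check of the two-sided case, the recovery-time example can be reproduced with the standard library's normal CDF. The exact P-value is about 0.0608, slightly below the 0.0614 obtained from the rounded table value 0.9693.

```python
from statistics import NormalDist

# Two-sided z-test for the recovery-time example:
# sample mean 28.5, mu0 = 30, sigma = 8, n = 100
z = (28.5 - 30) / (8 / 100 ** 0.5)
p = 2 * (1 - NormalDist().cdf(abs(z)))
print(z, p)  # z = -1.875, P-value ≈ 0.061
```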
Formally defined, the power of a test is the probability that a fixed level α test will reject the null hypothesis H[0] when a particular alternative value of the parameter is true. In the test score example, for a fixed significance level of 0.10, suppose the school board wishes to be able to reject the null hypothesis (that the mean μ = 70) if the mean for female students is in fact 72. To determine the power of the test against this alternative, first note that the critical value for rejecting the null hypothesis is z^* = 1.282. The calculated value for z will be greater than 1.282 whenever (x̄ - 70)/1.25 > 1.282, that is, whenever x̄ > 71.6. The power of the test against the alternative μ = 72 is therefore
P(x̄ > 71.6 | μ = 72) = P(Z > (71.6 - 72)/1.25) = P(Z > -0.32) = 1 - P(Z < -0.32) = 1 - 0.3745 = 0.6255.
The power is about 0.60, indicating that although the test is more likely than not to reject the null hypothesis for this value, the probability of a Type II error is high.

Significance Tests for Unknown Mean and Unknown Standard Deviation

In most practical research, the standard deviation for the population of interest is not known. In this case, the standard deviation σ is replaced by the estimated standard deviation s, and the quantity s/sqrt(n) is known as the standard error. Since the standard error is an estimate for the true value of the standard deviation, the distribution of the sample mean x̄ is no longer normal with mean μ and standard deviation σ/sqrt(n). Instead, the sample mean follows the t distribution with mean μ and standard deviation s/sqrt(n). The t distribution is also described by its degrees of freedom. For a sample of size n, the t distribution will have n-1 degrees of freedom. The notation for a t distribution with k degrees of freedom is t(k). As the sample size n increases, the t distribution becomes closer to the normal distribution, since the standard error approaches the true standard deviation σ/sqrt(n) for large n.
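The power calculation can be verified the same way; inv_cdf supplies the critical value z^* for α = 0.10. The exact answer, about 0.625, matches the 0.6255 above up to table rounding.

```python
from statistics import NormalDist

nd = NormalDist()
alpha, mu0, mu1, se = 0.10, 70.0, 72.0, 1.25   # se = sigma/sqrt(n) = 10/8
z_star = nd.inv_cdf(1 - alpha)                 # critical z, ≈ 1.282
x_crit = mu0 + z_star * se                     # reject H0 when x̄ exceeds ≈ 71.6
power = 1 - nd.cdf((x_crit - mu1) / se)        # P(reject H0 | mu = 72)
print(power)  # ≈ 0.625
```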
For claims about a population mean from a population with a normal distribution or for any sample with large sample size n (for which the sample mean will follow a normal distribution by the Central Limit Theorem) with unknown standard deviation, the appropriate significance test is known as the t-test, where the test statistic is defined as
t = (x̄ - μ[0])/(s/sqrt(n)).
The test statistic follows the t distribution with n-1 degrees of freedom. The test statistic t is used to compute the P-value for the t distribution, the probability that a value at least as extreme as the test statistic would be observed under the null hypothesis.

The dataset "Normal Body Temperature, Gender, and Heart Rate" contains 130 observations of body temperature, along with the gender of each individual and his or her heart rate. Using the MINITAB "DESCRIBE" command provides the following information:

Descriptive Statistics

Variable  N    Mean    Median  Tr Mean  StDev  SE Mean
TEMP      130  98.249  98.300  98.253   0.733  0.064

Variable  Min     Max      Q1      Q3
TEMP      96.300  100.800  97.800  98.700

Since the normal body temperature is generally assumed to be 98.6 degrees Fahrenheit, one can use the data to test the following one-sided hypothesis:
H[0]: μ = 98.6 versus H[a]: μ < 98.6.
The t test statistic is equal to (98.249 - 98.6)/0.064 = -0.351/0.064 = -5.48, and P(t < -5.48) = P(t > 5.48). The t distribution with 129 degrees of freedom may be approximated by the t distribution with 100 degrees of freedom (found in Table E in Moore and McCabe), where P(t > 5.48) is less than 0.0005. This result is significant at the 0.01 level and beyond, indicating that the null hypothesis can be rejected with confidence.

To perform this t-test in MINITAB, the "TTEST" command with the "ALTERNATIVE" subcommand may be applied as follows:

MTB > ttest mu = 98.6 c1;
SUBC > alt = -1.

T-Test of the Mean

Test of mu = 98.6000 vs mu < 98.6000

Variable  N    Mean     StDev   SE Mean  T      P
TEMP      130  98.2492  0.7332  0.0643   -5.45  0.0000

These results represent the exact calculations for the t(129) distribution.
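The t statistic for the body-temperature data follows directly from the summary statistics above. (Computing the P-value on t(129) requires the t CDF, which MINITAB supplies; with these rounded summary inputs the statistic comes out near the reported -5.45.)

```python
import math

# t statistic for H0: mu = 98.6 from the summary statistics above
n, xbar, s, mu0 = 130, 98.249, 0.7332, 98.6
t = (xbar - mu0) / (s / math.sqrt(n))
print(t)  # ≈ -5.46 with these rounded inputs (MINITAB reports -5.45)
```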
Data source: Data presented in Mackowiak, P.A., Wasserman, S.S., and Levine, M.M. (1992), "A Critical Appraisal of 98.6 Degrees F, the Upper Limit of the Normal Body Temperature, and Other Legacies of Carl Reinhold August Wunderlich," Journal of the American Medical Association, 268, 1578-1580. Dataset available through the JSE Dataset Archive.

Matched Pairs

In many experiments, one wishes to compare measurements from two populations. This is common in medical studies involving control groups, for example, as well as in studies requiring before-and-after measurements. Such studies have a matched pairs design, where the difference between the two measurements in each pair is the parameter of interest.

Analysis of data from a matched pairs experiment compares the two measurements by subtracting one from the other and basing test hypotheses upon the differences. Usually, the null hypothesis H[0] assumes that the mean of these differences is equal to 0, while the alternative hypothesis H[a] claims that the mean of the differences is not equal to zero (the alternative hypothesis may be one- or two-sided, depending on the experiment). Using the differences between the paired measurements as single observations, the standard t procedures with n-1 degrees of freedom are followed as above.

In the "Helium Football" experiment, a punter was given two footballs to kick, one filled with air and the other filled with helium. The punter was unaware of the difference between the balls, and was asked to kick each ball 39 times. The balls were alternated for each kick, so each of the 39 trials contains one measurement for the air-filled ball and one measurement for the helium-filled ball. Given that the conditions (leg fatigue, etc.) were basically the same for each kick within a trial, a matched pairs analysis of the trials is appropriate. Is there evidence that the helium-filled ball improved the kicker's performance?
In MINITAB, subtracting the air-filled measurement from the helium-filled measurement for each trial and applying the "DESCRIBE" command to the resulting differences gives the following results:

Descriptive Statistics

Variable    N   Mean  Median  Tr Mean  StDev  SE Mean
Hel. - Air  39  0.46  1.00    0.40     6.87   1.10

Variable    Min     Max    Q1     Q3
Hel. - Air  -14.00  17.00  -2.00  4.00

Using MINITAB to perform a t-test of the null hypothesis H[0]: μ = 0 against the one-sided alternative H[a]: μ > 0 gives the following results:

T-Test of the Mean

Test of mu = 0.00 vs mu > 0.00

Variable  N   Mean  StDev  SE Mean  T     P
Hel. - A  39  0.46  6.87   1.10     0.42  0.34

The P-value of 0.34 indicates that this result is not significant at any acceptable level. A 95% confidence interval for the t-distribution with 38 degrees of freedom for the difference in measurements is (-1.76, 2.69), computed using the MINITAB "TINTERVAL" command.

Data source: Lafferty, M.B. (1993), "OSU scientists get a kick out of sports controversy," The Columbus Dispatch (November 21, 1993), B7. Dataset available through the Statlib Data and Story Library.

The Sign Test

Another method of analysis for matched pairs data is a distribution-free test known as the sign test. This test does not require any normality assumptions about the data, and simply involves counting the number of positive differences between the matched pairs and relating these to a binomial distribution. The concept behind the sign test reasons that if there is no true difference, then the probability of observing an increase in each pair is equal to the probability of observing a decrease in each pair: p = 1/2. Assuming each pair is independent, the null hypothesis follows the distribution B(n,1/2), where n is the number of pairs where some difference is observed.

To perform a sign test on matched pairs data, take the difference between the two measurements in each pair and count the number of non-zero differences n. Of these, count the number of positive differences X.
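The matched-pairs statistic is just a one-sample t statistic on the differences, so the MINITAB figures above can be recovered from the summary numbers alone:

```python
import math

# Helium-football differences: n = 39, mean 0.46, standard deviation 6.87
n, dbar, s = 39, 0.46, 6.87
se = s / math.sqrt(n)  # standard error of the mean difference, ≈ 1.10
t = dbar / se          # ≈ 0.42, matching the MINITAB output
print(se, t)
```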
Determine the probability of observing X positive differences for a B(n,1/2) distribution, and use this probability as a P-value for the null hypothesis.

In the "Helium Football" example above, 2 of the 39 trials recorded no difference between kicks for the air-filled and helium-filled balls. Of the remaining 37 trials, 20 recorded a positive difference between the two kicks. Under the null hypothesis, p = 1/2, the differences would follow the B(37,1/2) distribution. The probability of observing 20 or more positive differences is P(X ≥ 20) = 1 - P(X ≤ 19) = 1 - 0.6286 = 0.3714. This value indicates that there is not strong evidence against the null hypothesis, as observed previously with the t-test.
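The binomial tail probability in this example can be computed exactly with the standard library's math.comb:

```python
from math import comb

def sign_test_p(n, x):
    """One-sided sign-test P-value: P(X >= x) for X ~ Binomial(n, 1/2)."""
    return sum(comb(n, k) for k in range(x, n + 1)) / 2 ** n

p = sign_test_p(37, 20)
print(p)  # ≈ 0.3714, matching the value above
```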
Bipartite Random Graphs and Cuckoo Hashing - In Proceedings of the 16th Annual European Symposium on Algorithms (ESA), 2008

Cited by 19 (5 self)
Cuckoo hashing holds great potential as a high-performance hashing scheme for real applications. Up to this point, the greatest drawback of cuckoo hashing appears to be that there is a polynomially small but practically significant probability that a failure occurs during the insertion of an item, requiring an expensive rehashing of all items in the table. In this paper, we show that this failure probability can be dramatically reduced by the addition of a very small constant-sized stash. We demonstrate both analytically and through simulations that stashes of size equivalent to only three or four items yield tremendous improvements, enhancing cuckoo hashing’s practical viability in both hardware and software. Our analysis naturally extends previous analyses of multiple cuckoo hashing variants, and the approach may prove useful in further related schemes.

Cited by 9 (4 self)
Cuckoo hashing is an efficient and practical dynamic dictionary. It provides expected amortized constant update time, worst case constant lookup time, and good memory utilization.
Various experiments demonstrated that cuckoo hashing is highly suitable for modern computer architectures and distributed settings, and offers significant improvements compared to other schemes. In this work we construct a practical history-independent dynamic dictionary based on cuckoo hashing. In a history-independent data structure, the memory representation at any point in time yields no information on the specific sequence of insertions and deletions that led to its current content, other than the content itself. Such a property is significant when preventing unintended leakage of information, and was also found useful in several algorithmic settings. Our construction enjoys most of the attractive properties of cuckoo hashing. In particular, no dynamic memory allocation is required, updates are performed in expected amortized constant time, and membership queries are performed in worst case constant time. Moreover, with high probability, the lookup procedure queries only two memory entries which are independent and can be queried in parallel. The approach underlying our construction is to enforce a canonical memory representation on cuckoo hashing. That is, up to the initial randomness, each set of elements has a unique memory representation.

, 2009

Cited by 9 (0 self)
We study the following question in Random Graphs. We are given two disjoint sets L, R with |L| = n = αm and |R| = m. We construct a random graph G by allowing each x ∈ L to choose d random neighbours in R. The question discussed is as to the size µ(G) of the largest matching in G.
When considered in the context of Cuckoo Hashing, one key question is as to when is µ(G) = n whp? We answer this question exactly when d is at least three. We also establish a precise threshold for when Phase 1 of the Karp-Sipser Greedy matching algorithm suffices to compute a maximum matching whp.

Cited by 4 (2 self)
In this paper, we provide a polylogarithmic bound that holds with high probability on the insertion time for cuckoo hashing under the random-walk insertion method. Cuckoo hashing provides a useful methodology for building practical, high-performance hash tables. The essential idea of cuckoo hashing is to combine the power of schemes that allow multiple hash locations for an item with the power to dynamically change the location of an item among its possible locations. Previous work on the case where the number of choices is larger than two has required a breadth-first search analysis, which is both inefficient in practice and currently has only a polynomial high probability upper bound on the insertion time. Here we significantly advance the state of the art by proving a polylogarithmic bound on the more efficient random-walk method, where items repeatedly kick out random blocking items until a free location for an item is found.

Cited by 2 (1 self)
Abstract.
The purpose of this brief note is to describe recent work in the area of cuckoo hashing, including a clear description of several open problems, with the hope of spurring further research.
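To make the scheme concrete, here is a minimal cuckoo hash table with a small stash in the spirit of the first paper above. This is an illustrative sketch, not code from any of the papers: the table size, stash size, kick limit, and the seed-XOR hash functions are all arbitrary choices, and the insertion loop is the random-walk variant discussed in the later abstracts.

```python
import random

class CuckooHash:
    def __init__(self, size=64, stash_size=4, max_kicks=32):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.stash = []                      # constant-sized escape hatch
        self.stash_size = stash_size
        self.max_kicks = max_kicks
        self.seeds = (0x9E3779B9, 0x85EBCA6B)  # two arbitrary hash seeds

    def _h(self, i, key):
        # Two cheap, distinct hash functions (illustrative only).
        return (hash(key) ^ self.seeds[i]) % self.size

    def insert(self, key):
        for _ in range(self.max_kicks):
            for i in (0, 1):
                slot = self._h(i, key)
                if self.tables[i][slot] is None:
                    self.tables[i][slot] = key
                    return True
            # Random-walk step: kick out a random blocking item and retry.
            i = random.randrange(2)
            slot = self._h(i, key)
            self.tables[i][slot], key = key, self.tables[i][slot]
        # Insertion failed: park the orphaned key in the stash
        # instead of rehashing the whole table.
        if len(self.stash) < self.stash_size:
            self.stash.append(key)
            return True
        return False  # stash full: a full rehash would now be required

    def lookup(self, key):
        return any(self.tables[i][self._h(i, key)] == key
                   for i in (0, 1)) or key in self.stash
```

Lookups inspect exactly two table slots plus the constant-sized stash, which is what keeps the worst-case lookup time constant.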
Middle City East, PA Algebra Tutor

Find a Middle City East, PA Algebra Tutor

...I have taught in the Philadelphia School System for the past 9 years with a heavy emphasis on Algebra. I have been told by students that they enjoy my teaching and tutoring methods because I am able to make math seem practical and relevant to their lives. I have learned through the years how to make math seem easy.
11 Subjects: including algebra 1, algebra 2, statistics, geometry

...So, before every issue, I would work one-on-one with each of my writers to introduce them to new writing techniques and work to rewrite their articles to prepare for print. I'm generally a math and writing nerd. When I teach, I find it most important to -Give perspective about the field of study we're covering.
25 Subjects: including algebra 2, algebra 1, chemistry, writing

...As a part of my graduate school training, I am taught to explain difficult scientific concepts to people of all levels of education. This allows me to teach students of all ages difficult concepts they may be learning in subjects such as Biology, Physiology, Genetics, and basic Chemistry. My ba...
11 Subjects: including algebra 1, biology, grammar, anatomy

Hello! My name is Dr. Jeff.
21 Subjects: including algebra 1, algebra 2, chemistry, biology

...In high school, I tutored some of my peers for 3 years in various subjects. Now, I would like to tutor for all ages and grades. If you are an Education college student, I would be more than happy to help you with the Praxis I or the Praxis II, Elem.
15 Subjects: including algebra 1, reading, English, grammar
Assignment Problems

A math department would like to offer seven courses. There are eight professors, each of whom is willing to teach certain courses. Find a maximal matching where professors only teach courses they are interested in teaching.
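Outside Mathematica, the same assignment problem can be solved with the classical augmenting-path algorithm for maximum bipartite matching. The professor names and course preferences below are invented for illustration; only the shape of the problem (eight professors, seven courses) follows the text.

```python
def max_bipartite_matching(willing):
    """Kuhn's augmenting-path algorithm.

    willing: dict mapping each professor to the courses they will teach.
    Returns a dict course -> professor forming a maximum matching.
    """
    match = {}  # course -> professor currently assigned to it

    def try_assign(prof, seen):
        for course in willing[prof]:
            if course in seen:
                continue
            seen.add(course)
            # Take a free course, or displace its holder along an augmenting path.
            if course not in match or try_assign(match[course], seen):
                match[course] = prof
                return True
        return False

    for prof in willing:
        try_assign(prof, set())
    return match

# Hypothetical willingness table: 8 professors, 7 courses.
willing = {
    "Alice": ["Algebra", "Calculus"], "Bob": ["Calculus"],
    "Carol": ["Geometry", "Topology"], "Dan": ["Topology"],
    "Erin": ["Statistics"], "Frank": ["Logic", "Statistics"],
    "Grace": ["Analysis", "Logic"], "Henry": ["Algebra"],
}
assignment = max_bipartite_matching(willing)
# With this table, all seven courses can be covered by willing professors.
```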
Feedforward noise cancellation rejects supply noise | EE Times

Design How-To

All voltage regulators generate some level of undesirable noise. Some are designed for low-noise performance, but those may not be sufficiently quiet to supply power for certain ultra-low-noise oscillators, instruments, and high-quality audio. A simple feed-forward noise-cancellation technique can reduce the supply noise by more than 26dB, while maintaining a low input-to-output voltage drop and high power efficiency.

The feed-forward noise cancellation technique, Figure 1a, AC-couples the noise voltage to the input of a voltage-controlled current source.

Figure 1: A feed-forward noise cancellation technique for power supplies (a) employs the voltage-controlled current source g[m]V[IN]. The voltage-controlled current source in this circuit (b) is implemented with an op-amp and a MOSFET.

The noise voltage modulates the current source (g[m] • V[IN]) such that the resulting IR drop across R[S] cancels the input noise voltage: V[IN-AC] • g[m] • R[S] = V[NOISE] = V[IN-AC], so g[m] • R[S] = 1.

The voltage-controlled current source is similar to the hybrid-π small-signal model of a MOSFET or bipolar transistor. Transistors are sometimes used in the feed-forward noise-cancellation circuit, but because their parameters vary considerably from unit to unit, discrete-transistor circuits require some manual tuning to obtain a precise g[m]. The circuit of Figure 1b, based on the technique of Figure 1a, needs no fine-tuning. The voltage-controlled current source is implemented with a low-noise op-amp and an n-channel MOSFET, and produces a g[m] value precisely equal to 1/R[1].

Choose the R[S] value such that its voltage drop is small at the maximum output current (a voltage drop of 50mV to 200mV across R[S] is acceptable). R[1] and R[S] must be equal in value and well matched, so a tolerance of 1% or better is recommended.
R[S] must be rated to dissipate the power at maximum current. Next, the quiescent current for M[1] should equal the maximum noise voltage divided by R[S]:
I[Q] = V[noise-max]/R[S], and I[Q] = V[Q]/R[1],
where V[Q] is the quiescent voltage at the op amp's noninverting terminal, obtained from the voltage divider R[3]-R[6].

The circuit in Figure 1b assumes the maximum noise voltage is 1mV[PP]. Therefore, I[Q] is 10mA and V[Q] is 1mV. Note that the rejection capability is degraded if the noise voltage exceeds 1mV[PP] when V[Q] is set to 1mV. V[Q] should therefore be set equal to the maximum anticipated noise voltage. To ensure that V[Q] is unaffected by bias current, choose an op amp with low input-bias current, such as the one shown.

The AC-coupling capacitor (C[1]) should be large enough to couple broadband noise into the op amp. During power-up, while C[1] is charging, the current through R[1] and M[1] is larger because V[Q] is higher than normal. R[2] is therefore included to limit the current through M[1] during power-up: R[2] ≤ (V[OUT] - V[DSM1])/I[Q], where V[DSM1] is the drain-source voltage of M[1].

Figure 2 shows noise rejection vs. frequency for the Figure 1b circuit operating with a load current of 1A.

Figure 2: Supply-noise rejection in the circuit of Figure 1b is better than 26dB at 1kHz.

Noise rejection is better than 26dB at lower frequencies, and better than 18dB within the audio frequency range. Noise rejection decreases at higher frequencies, but the higher-frequency noise is easier to filter with a capacitor (C[2] in this circuit).

About the Author: Ken Yang is an application engineer at Maxim Integrated Products, www.maxim-ic.com
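The sizing rules above reduce to a few lines of arithmetic. The numbers below assume a 1A maximum load and a 100mV drop across R[S] (both illustrative choices within the article's guidelines), plus the article's 1mV peak-to-peak maximum noise; they reproduce the quoted I[Q] = 10mA and V[Q] = 1mV.

```python
# Feed-forward noise-canceller sizing (values in volts, amps, ohms)
i_load_max = 1.0      # maximum load current (assumed)
v_drop = 0.100        # chosen drop across RS at full load (50-200mV acceptable)
v_noise_max = 0.001   # maximum supply noise, 1mV peak-to-peak

rs = v_drop / i_load_max   # RS = 0.1 ohm
r1 = rs                    # R1 must match RS so that gm*RS = (1/R1)*RS = 1
iq = v_noise_max / rs      # quiescent current in M1: 10mA
vq = iq * r1               # quiescent voltage at the op amp input: 1mV
print(rs, iq, vq)
```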
TR05-079 | 25th July 2005 00:00

Expanders and time-restricted branching programs

The replication number of a branching program is the minimum number R such that along every accepting computation at most R variables are tested more than once. Hence 0 ≤ R ≤ n for every branching program in n variables. The best results so far were exponential lower bounds on the size of branching programs with R = o(n/log n). We improve this to R = cn for a constant c > 0. This also gives a new and simple proof of an exponential lower bound of Beame, Saks and Thathachar for branching programs of length (1+c)n. We prove our lower bounds for quadratic functions of Ramanujan graphs. The proofs are simple.
\[\int\limits_{0}^{1}\int\limits_{\tan^{-1}x}^{\frac{\pi}{4}}f(x,y)\,dy\,dx\]
Sketch the region of integration and change the order of integration.
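Assuming the lower limit is \(\tan^{-1}x\) (the natural reading, since \(\tan^{-1}1=\pi/4\) makes the region close at \(x=1\)), the region is \(\{(x,y): 0\le x\le 1,\ \tan^{-1}x\le y\le \pi/4\}\). For fixed \(y\in[0,\pi/4]\), the condition \(\tan^{-1}x\le y\) means \(0\le x\le\tan y\), so the swapped order is:

```latex
\[
\int_{0}^{1}\!\int_{\tan^{-1}x}^{\pi/4} f(x,y)\,dy\,dx
\;=\;
\int_{0}^{\pi/4}\!\int_{0}^{\tan y} f(x,y)\,dx\,dy .
\]
```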
True 3D mandelbrot type fractal

Just a suggestion - where you discuss the formula you could also mention Jos Leys' algorithm for accurate analytical DE for the formula (and related formulas such as Garth Thornton's variations and Rudy Rucker's formula).

Hi David, That's an idea, at least a mention would be good. I think Paul's done a good job of collecting the different formulas, so I probably won't repeat the math. But in any case, I may first link to some thread posts until I understand some of them better. Out of interest, how much faster is rendering after using Jos Leys' algorithm compared to before?

Cheers for the formulae Paul. I look forward to implementing it, especially the 4th power one. I take it that one can use it in conjunction with Jos Leys' DE algorithm?

Hi all, just did some accurate timing tests (using my core2duo instead of the somewhat unreliable timings on this P4HT) just to see the difference between my Delta DE and Jos Leys' Analytical DE and also to see how much faster the non-trig versions are. First to answer Daniel's question - yes you can use the non-trig versions with Jos' analytical DE *but* at the moment I only know how to do that using the trig version of the formula for calculating the derivative (with the "normal" iteration using the non-trig version). There's probably a way to get the derivative without the trig but it's beyond my maths ability. OK now the timings - all rendered within UF at the same (high) detail settings with shadowcasting and all at the same magnification.
In all cases the "entire" Mandelbrot was in view (all renders 1024):

Delta DE (trig): 9 mins 12 secs
Analytical DE (trig): 5 mins 25 secs
Delta DE (no trig): 1 min 44 secs
Analytical DE (trig for derivative only): 4 mins 26 secs

Delta DE (trig): 4 mins 53 secs
Analytical DE (trig): 3 mins 07 secs
Delta DE (no trig): 0 min 53 secs
Analytical DE (trig for derivative only): 2 mins 29 secs

Delta DE (trig): 4 mins 04 secs
Analytical DE (trig): 2 mins 30 secs
Delta DE (no trig): 0 min 59 secs
Analytical DE (trig for derivative only): 2 mins 04 secs

Pretty much as you'd expect based on the relative amounts of calculation involved. If anyone's wondering, there was essentially zero visible difference between the trig renders and the non-trig renders - just the odd pixel in the "bowl" areas of the z^3 and z^4 analytical DE renders - probably down to greater inaccuracies in the trig versions as compared to the non-trig versions (the way I calculate the normals is especially sensitive to small variations).
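For readers who want to see the shape of the distance estimate being timed above, here is a minimal sketch of a power-n "Mandelbulb"-style DE using the scalar running-derivative trick (the analytical DE the thread attributes to Jos Leys), in the trig (spherical-coordinate) form. It is an illustration of the technique, not the posters' actual UF code; the power-8 default and the bailout value are my choices:

```python
import math

def mandelbulb_de(cx, cy, cz, power=8, max_iter=20, bailout=4.0):
    """Distance estimate for a power-n Mandelbulb-style fractal using
    the scalar running derivative dr = n*r^(n-1)*dr + 1 (analytical DE).
    A sketch, not the exact formulas used by the forum posters."""
    x, y, z = cx, cy, cz
    dr = 1.0
    r = 0.0
    for _ in range(max_iter):
        r = math.sqrt(x*x + y*y + z*z)
        if r > bailout:
            break
        # spherical coordinates (trig version of the iteration)
        theta = math.acos(max(-1.0, min(1.0, z / r))) if r > 0 else 0.0
        phi = math.atan2(y, x)
        dr = power * r**(power - 1) * dr + 1.0
        # z -> z^power in "triplex" spherical form, then add c
        zr = r**power
        theta *= power
        phi *= power
        x = zr * math.sin(theta) * math.cos(phi) + cx
        y = zr * math.sin(theta) * math.sin(phi) + cy
        z = zr * math.cos(theta) + cz
    return 0.5 * math.log(r) * r / dr if r > 0 else 0.0

# A point well outside the set should get a positive distance estimate:
print(mandelbulb_de(2.0, 0.0, 0.0) > 0)  # True
```

The "no trig" versions discussed in the thread replace the spherical-coordinate step with polynomial identities, which is where the roughly 3-5x speedups in the timings come from.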
About Abacus

The Abacus Genius students initially use an abacus which is very similar to the soroban (Japanese abacus) but has 17 rods. The abacus is used while lying flat on the table. After 24 hours of classroom training and subsequent home training the children can develop the image of the abacus (soroban) in the right brain and move the beads by using the corresponding manipulations for addition. And over a period of 2 years…

A brief view on the development of numbers (numeric systems)

Men of ancient times used sign language or sounds to communicate with each other. These sounds were only oral and did not have any corresponding signs (alphabets). It was only later that each sound was given a shape in the form of alphabets. Similarly, when men counted they used sounds and their fingers for signs. The Roman numbers are a classic example of this if we analyse them closely.

Arabic Number | Roman Number | Explanation of the signage
1             | I            | One finger
2             | II           | Two fingers
4             | IV           | Five, but one finger taken to the left hand
5             | V            | Thumb and index finger
6             | VI           | Five and an additional finger
8             | VIII         | Five and the remaining fingers on the hand
10            | X            | Both hands full, so 2 fingers crossed, or 2 fives while one is inverted

But again they ran into a roadblock, as they could not calculate any numbers larger than 10. The Arabic numbers also did not yet have the zero and could not proceed any further. This being the case, when the tenth came along, the ancient man would make a mark on the floor, or use a stone to indicate each ten.

The word 'abacus' derives from the Greek word 'abax' or 'abakon' meaning table or tablet, which originated from the Semitic word 'abaq' meaning sand. The plural of 'abacus' is 'abacuses' or 'abaci', and a user is called an abacist. The first abacus was almost certainly based on a flat stone covered with sand or dust. Words and letters were drawn in the sand; eventually numbers were added and pebbles used to aid calculations.
In outdoor markets of those times, the simplest counting board involved drawing lines in the sand with one's fingers or with a stylus, and placing pebbles between those lines as place-holders representing numbers (the spaces between lines would represent the units, 10s, 100s, etc.). The more affluent could afford small wooden tables having raised borders that were filled with sand (usually coloured blue or green). A benefit of these counting boards on tables was that they could be moved without disturbing the calculation: the table could be picked up and carried indoors.

As each civilization grew, it developed an abacus that met its needs best. The Babylonians used this dust abacus as early as 2400 BC. The origin of the counter abacus with strings is obscure, but India, Mesopotamia and Egypt are seen as probable points of origin. China played an essential part in the development and evolution of the abacus.

The evolution of the abacus can be divided into three ages: Ancient Times, the Middle Ages, and Modern Times. The time-line below traces the developing abacus from its beginnings from 500 B.C. to the present. It can be noted that, like every other development, the growth of the abacus was slow during the first part of its life, but in the last millennium it changed at a rapid pace.

Babylonian abacus
Babylonians may have used the abacus for the operations of addition and subtraction. However, this primitive device proved difficult to use for more complex calculations. Some scholars point to a character from the Babylonian cuneiform which may have been derived from a representation of the abacus.

Egyptian abacus
The use of the abacus in ancient Egypt is mentioned by the Greek historian Herodotus, who writes that the manner of this disk's usage by the Egyptians was opposite in direction when compared with the Greek method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters.
However, wall depictions of this instrument have not been discovered, casting some doubt over the extent to which this instrument was used.

Grecian abacus
A tablet found on the Greek island of Salamis in 1846 dates back to 300 BC, making it the oldest counting board discovered so far. This is known as the Salamis Tablet. It is a slab of white marble 149 cm long, 75 cm wide, and 4.5 cm thick, on which are 5 groups of markings. In the center of the tablet is a set of 5 parallel lines equally divided by a vertical line, capped with a semi-circle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semi-circle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line.

Roman abacus
The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles (calculi) were used; later, and in medieval Europe, jetons were manufactured. Marked lines indicated units, fives, tens, hundreds etc., as in the Roman numeral system. This system of 'counter casting' continued into the late Roman empire and in medieval Europe, and persisted even into the nineteenth century. In addition to the more common method using loose counters, several specimens have been found of a Roman abacus, shown here in reconstruction. It has eight long grooves containing up to five beads each and eight shorter grooves having either one or no beads each. The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives: five units, five tens etc., essentially a bi-quinary coded decimal system, obviously related to the Roman numerals.
The short grooves on the right may have been used for marking Roman ounces. For more details and greater understanding of usage and workings click here – Roman Abacus.

Chinese abacus
The Chinese abacus is called a suanpan. It is about 20 cm tall and comes in various widths depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads each in the bottom deck, for both decimal and hexadecimal computation. Modern abacuses have one bead on the top deck and four beads on the bottom deck. The beads are usually rounded and made of a hardwood. The beads are counted by moving them up or down towards the beam: if you move them towards the beam, you count their value; if you move them away from it, you don't. The suanpan can be reset to the starting position instantly by a quick jerk along the horizontal axis to spin all the beads away from the horizontal beam at the center. Suanpans can be used for functions other than counting. Unlike the simple counting board used in elementary schools, very efficient suanpan techniques have been developed to do multiplication, division, addition, subtraction, square root and cube root operations at high speed. For more details and greater understanding of usage and workings click here – Suanpan.

Japanese abacus (soroban)
A soroban (meaning "counting tray") is a Japanese-modified version of the Chinese abacus. It is derived from the suanpan, imported from China to Japan through the Korean peninsula in the 15th century. The modification was that the Japanese took 2 beads from each rod, i.e. one from the upper deck and another from the lower deck. Like the suanpan, the soroban is still used in Japan today, even with the proliferation, practicality, and affordability of pocket electronic calculators.

Russian abacus (schoty)
The Russian abacus, the schoty, usually has a single slanted deck, with ten beads on each wire (except one wire which has four beads).
This wire is usually near the user. The Russian abacus is often used vertically, with wires running from left to right in the manner of a book. The wires are usually bowed to bulge upward in the center, in order to keep the beads pinned to either of the two sides. It is cleared when all the beads are moved to the right; during manipulation, beads are moved to the left. For easy viewing, the middle 2 beads on each wire (the 5th and 6th beads) usually have a colour different from the other 8 beads. Likewise, the left bead of the thousands wire (and the millions wire, if present) may have a different colour. The Russian abacus is still in use today in shops and markets throughout the former Soviet Union, although it is no longer taught in most schools.

School abacus
Around the world, abaci have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic. In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame has been common (see image). It is still often seen as a plastic or wooden toy. The type of abacus shown here is often used to represent numbers without the use of place value: each bead and each wire has the same value, and used in this way it can represent numbers up to 100. The most significant educational advantage of using an abacus, rather than loose beads or counters, when practicing counting and simple addition is that it gives the student an awareness of the groupings of 10 which are the foundation of our number system. Although adults take this base-10 structure for granted, it is actually difficult to learn. Many 6-year-olds can count to 100 by rote with only a slight…

Abacus Genius
The Abacus Genius students initially use an abacus which is very similar to the soroban (Japanese abacus) but has 17 rods. The abacus is used while lying flat on the table.
After 24 hours of classroom training and subsequent home training the children can develop the image of the abacus (soroban) in the right brain and move the beads by using the corresponding manipulations for addition. And over a period of 2 years students will be skilled in all operations like addition, subtraction, multiplication and division. So in reality, unlike all the types of abacuses (abaci) discussed above, this one is unique, as it involves real children who can perform the work of the abacus using their well-trained brain. For more details of the program click on course.
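The bead arithmetic shared by the soroban and the Roman bi-quinary grooves described above can be sketched in a few lines: each rod holds one decimal digit as a five-valued "heaven" bead plus up to four unit "earth" beads. A minimal illustration (the function names are mine, not standard terminology):

```python
def soroban_digit(d):
    """Decompose a decimal digit into (heaven, earth) bead counts on one
    soroban rod: one heaven bead worth 5, four earth beads worth 1 each."""
    if not 0 <= d <= 9:
        raise ValueError("a rod holds a single decimal digit")
    heaven = d // 5          # 0 or 1 five-bead moved toward the beam
    earth = d % 5            # 0..4 one-beads moved toward the beam
    return heaven, earth

def soroban_number(n):
    """One (heaven, earth) pair per rod, most significant rod first."""
    return [soroban_digit(int(c)) for c in str(n)]

print(soroban_number(1962))  # [(0, 1), (1, 4), (1, 1), (0, 2)]
```

This is exactly the grouping-by-fives-and-tens structure the article credits with making the abacus such an effective teaching aid.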
Are black holes surrounded by firewalls? Dilaton has noticed a new, extremely provocative concept that was introduced among the quantum gravity researchers two months ago: the firewall. For decades, people teaching general relativity – including your humble correspondent (e.g. here) – have been explaining that nothing special happens to an infalling observer when she crosses a black hole event horizon. The curvature is usually pretty small there – the curvature radius is close to the black hole radius – and you only get torn apart once you approach the black hole singularity which may be much later. Advertisement of a future text: Read also Raphael Bousso is right about firewalls The event horizon is just a coordinate singularity; with a better choice of coordinates, the vicinity of the horizon (including a region below and a region above the horizon) looks like a nearly flat piece of the Minkowski spacetime. These coordinates may be "extremely distorted" functions of some other coordinates you may use for other purposes but they exist. Because the laws of general relativity are local, the (nearly) flat geometry of the region implies that there will be (nearly) the same phenomena there as in the flat space. Later, some quantum properties of black holes have been pretty much established, too. The picture has made sense to everyone who has ever been considered a top expert in quantum gravity. That was the case until July 2012. Let me first say what the quantum insights about black holes have been. The black holes evaporate and, as seen in AdS/CFT and Matrix Theory, it's still possible without any violation of the principles of quantum mechanics. So pure states evolve into pure states. From the viewpoint of the observers at infinity, a black hole is just another object with a discrete energy spectrum (well, the levels aren't really sharp because the black hole is unstable: they have a width) that effectively exists outside the event horizon only. 
There may be an apparent contradiction between all these things and the validity of the effective field theory for low-energy processes but it's been believed that the contradictions go away because of the "black hole complementarity" paradigm, an opinion that the degrees of freedom (fields) inside the black hole aren't quite independent from those that are outside. They are complicated, scrambled functionals of them. Now, in mid July 2012, four authors – two of whom are already established as quantum gravity black hole experts you don't want to overlook – published an explosive preprint called Black Holes: Complementarity or Firewalls? Joseph Polchinski, Donald Marolf, James Sully, and Ahmed Almheiri – sorry that I sorted the names from the most famous ones – decided to claim that, after investigating some "detailed models" of what happens to the information during black hole evaporation, the usual assumptions are mutually inconsistent after all.

Joe Polchinski's guest blog at Cosmic Variance

They considered some thought experiments about entangled qubits that fall into the black hole – constructed out of the \(s\)-wave or other waves in the spherical harmonic decomposition – and decided that the only sensible conclusion is that when a black hole becomes "old" (i.e. when it emits or loses one-half of its initial Bekenstein-Hawking entropy), its event horizon gets transformed into a firewall that destroys everything that gets there. (If you want to do an experiment, note that you will only be able to say "how you feel" after you cross the event horizon to those people who also fell into a black hole and whose lives are as doomed as yours.) A song on several observers approaching a firewall. The musician suggests that it's only burning in the observer's eyes. Within days, an emotional Leonard Susskind replied.
A day later, Susskind released the second version of the manuscript. Two weeks later, he withdrew the paper because he "no longer believed the argument was right". A week after the initial provocative paper, Raphael Bousso replied with a rather intelligent paper arguing why Polchinski et al. are wrong. It's clear that Raphael Bousso had to think it was wrong because he's really closer to classical general relativity and Polchinski et al. wanted to question its validity in environments that seem completely mundane! Bousso pointed out that Polchinski et al. were sloppy about the information that various observers, especially the infalling one, may access. When one realizes that they can only evaluate the "causal diamond", all the proofs of contradictions (which typically claim that one may xerox a quantum bit which must be impossible – or which clearly is impossible, depending on your goals – in every consistent quantum theory) become impossible.

Bousso's talk at Strings 2012 about this issue

Daniel Harlow posted another seemingly intelligent reply four days after Bousso. Polchinski et al. were sloppy when they were converting the observations from one observer's reference frame to another. However, Donald Marolf, a co-author of the original paper, kept on fighting and convinced Harlow that there was a hole in his argument. So Harlow withdrew the paper, just like Susskind. The topic of the black hole firewalls surely seems to be a firewall when it comes to burning the actual papers. ;-)
Mathur and David Turton "paradoxically" disagree with the firewall, too. I say it's "paradoxical" because Mathur is the father of fuzzballs which also "brutally change" the appearance of the black hole interior. However, they actually believe that the infalling observer has a complementary "nothing happens" description. Their explanation of why Polchinski et al. are wrong is seemingly different again: Polchinski et al. assumed that an observer near the event horizon may say lots about the Hawking radiation even if he only looks outside the stretched horizon. Mathur and Turton say that he must actually go all the way to the real horizon and all the answers therefore depend on the Planckian physics. Borun D. Chowdhury and Andrea Puhm picked catchy words for the same question: Is Alice burning or fuzzing? ;-) Among the followups, they're the closest ones so far to the original paper. They claim that all the critics of Polchinski et al. are just babbling irrelevant nonsense. The only exceptions are the fuzzball guys from the previous paragraph. Chowdhury and Puhm declare that it's important to get rid of the observer-centric description and talk about decoherence. When it's done, Alice burns when she is a low-energy packet but she may keep on living in the complementary fuzzball picture when she is a high-energy excitation. I suppose that for real people falling into a large, old black hole, this means that they're burned at the stake. In mid August, Leonard Susskind posted a new preprint, unusually similar to the previous one that was withdrawn weeks earlier. It's only the singularity that is modified for an old black hole. However, in the new paper, the evolution of the singularity is rather dramatic because it is – thanks to the growing entanglement – growing towards the event horizon and it ultimately overlaps with it. So Polchinski et al.
are right that there's a firewall that burns you at that place; Susskind just says that it's more natural to call it a grown-up singularity, not an event horizon. He at least hopes that this only happens to old black holes (after the Page time, half entropy etc.), not after a much shorter scrambling time (which is just by a logarithmic factor longer than the black hole radius). The debate wasn't stopped, of course. A day later, Iosif Bena, Andrea Puhm, and Bert Vercnocke formulated the question in yet another way: Non-extremal Black Hole Microstates: Fuzzballs of Fire or Fuzzballs of Fuzz? They take the fuzzball picture as a dogma and try to figure out how the interior looks to an infalling observer. Their conclusions seem inconclusive to me but they surely say lots of general and vacuous things so that it could pass as important research. ;-) Amit Giveon and Nissan Itzhaki became supporters of the firewall when they decided to publish a related provocative concept: they think that string theory adds an extra degree of freedom, a zero mode, to the tip of the cigar (the counterpart of the event horizon in simple 1+1-dimensional examples of black holes) relative to general relativity and this extra degree (or these extra degrees) of freedom may get generalized to a firewall that kills you when you fall into a higher-dimensional black hole. Tom Banks and Willy Fischler use Tom's somewhat incomprehensible axiomatic framework, the holographic spacetime (which I was exposed to very intensely as Tom's student), and they conclude that this axiomatic framework doesn't imply any firewalls. Amos Ori prefers to assume that the semiclassical physics simply has to hold and adjusts any claims about the quantum information as necessary to agree with the primary assumption. With this attitude, he reaches a nearly comparably dramatic conclusion about the black hole information. Most of the information remains trapped throughout most of the evaporation process.
Effectively, a small black hole behaves as a black hole remnant. Ram Brustein wrote the most recent followup so far. The author chooses some very conservative language but arguably proposes a much more radical departure from the lore: the event horizon is a wrong concept; it only exists in the classical theory. In the quantum theory, the black hole's Compton wavelength is nonzero which, the author believes, creates a region near the horizon where the densities are inevitably high and quantum gravity is needed to predict what happens in this new extreme region. I guess that arXiv.org hasn't hit a firewall yet, so new followups will keep on emerging. Your humble correspondent has an opinion about what happens but I don't want to extend this cacophony. You must already feel it's crazy. There's surely no consensus here at all and if there were any majority, you would manifestly see that it's irrelevant. The researchers don't seem to agree about anything at all! ;-) Some of the papers are potentially compatible with some of the other papers but you won't find a pair of papers that are really answering the question by Polchinski et al. in equivalent ways. It's plausible that the reason is that all the questions about "what an infalling observer sees and feels" are ill-defined. He may feel "nothing special" but the transformation of the quantum information needed to produce his future state may become arbitrarily contrived once he crosses the horizon, with no need to have any simple relation to perceptions by other observers. After all, extremely singular coordinate transformations are bound to translate to extreme transformations on the Hilbert space, especially if it includes some Planckian degrees of freedom (well, degrees of freedom interpreted as "Planckian" by some of the near-horizon observers). Well, one of the papers above was making a similar point.
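For orientation on the two timescales contrasted above (the Page time vs. the much shorter scrambling time), the standard parametric estimates in Planck units for a Schwarzschild black hole of mass \(M\) are — these are textbook-level orders of magnitude, not figures from the papers under discussion:

```latex
\[
t_{\text{Page}} \sim t_{\text{evap}} \sim M^3,
\qquad
t_{\text{scrambling}} \sim \frac{\beta}{2\pi}\,\ln S \sim M \ln M,
\]
```

where \(S\sim M^2\) is the Bekenstein-Hawking entropy and \(\beta\sim M\) is the inverse Hawking temperature; the \(\ln S\) is the "logarithmic factor" beyond the light-crossing time mentioned above.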
Perceptions and observations depend on the sensory system's being described by a predictable Hilbert space that reacts in predictable ways. If you can't isolate the Hilbert space that behaves as an "ordinary Hilbert space for the sensory system", it makes no sense to talk about someone's perceptions. (I don't really need to reconstruct eyes; what may get destroyed at the event horizon are much more brute pieces of material, too.) On the other hand, when you redefine the degrees of freedom and evolve them by an ad hoc evolution you would expect outside the black hole, it's not a problem and it won't lead to real contradictions with the things outside because the infalling observer is never going to liberate herself, anyway. I also think it's problematic to assume that the radiation may be described as a pure state even before the black hole evaporates. The state of the radiation may be obtained by tracing over the interior and the horizon degrees of freedom. Even if the strictly internal degrees of freedom are reshuffled versions of the outside degrees of freedom, the influence of the near-horizon degrees of freedom could still make the state of the "radiation only" subsystem mixed. One may only be sure about the purity when the black hole is really gone. Well, I actually think that Polchinski et al. and many others are making exactly the opposite mistake, too. They think that the radiation is maximally entangled with the black hole so it must be described by a heavily mixed state and can't be maximally entangled with someone else. However, the very point of complementarity, as I understand it, is that the black hole interior's degrees of freedom are just "scrambled copies" of the external ones so you shouldn't double count them (which would be spurious quantum xeroxing). The radiation without the interior is nearly or entirely in a pure state at the Page time!
I realize this paragraph says exactly the opposite of the previous one but whichever way it goes, I feel they're not being careful about these important considerations. At any rate, it surely looks bizarre that the quantum gravity folks can't agree about such a seemingly elementary question, namely the existence and character of the hypothetical firewalls. Many of them are excellent folks but maybe they have focused on too ill-defined questions. Maybe this huge cacophony is a warning sign that the research into these "excessively conceptual" questions got stuck in a swampland like the one observed in a letter by Richard Feynman to his wife after he visited the 1962 conference on (general) relativity in Warsaw: "I am not getting anything out of the meeting. I am learning nothing. Because there are no experiments, this field is not an active one, so few of the best men are doing work in it. The result is that there are hosts of dopes here (126) and it is not good for my blood pressure. Remind me not to come to any more gravity conferences!" Are we there again? The black hole interior will always remain a mostly inaccessible place for most lucky people, so these questions will remain theoretical. But are they meaningful as theoretical questions at all? When you look at the amplitudes that string theory allows you to naturally calculate, such as the S-matrix in various Minkowski spaces, you will find out that the "perceptions of an infalling observer" are not among these calculable things. Maybe string theory has a very good reason why it's trying to hide those would-be observables from us! When I wrote about the reincarnation of the infalling observer, it wasn't quite a joke. I really feel that questions about the infalling observer may be somewhat analogous to various spiritual questions about near-death experiences etc. Some of them may be inaccessible to science – and really ill-defined from a scientific viewpoint.
snail feedback (28):

PlatoHagel: Seth Lloyd has already defined this in what is left.

Dear Lumo, for an outside observer (pun intended :-P ...), the actual agitation going on in the Arxiv, as you nicely describe it in this article, is some kind of interesting fun to watch. Maybe we just have to wait until the dust has settled down a bit, I don't know ... (?). Or could it be that some kind of a "shut up and calculate" rule should be applied and one should just try not too hard to imagine what happens to an infalling observer, in analogy to what I always think about all this quantum interpretation business (more accurately: nonsense, in my opinion ;-) ...)? And the means to calculate in the case of firewalls and infalling observers would be the holographic principle? Nevertheless, I always thought that trying to find out what the microscopic degrees of freedom of a black hole really are and how the information comes out again is some kind of a difficult but real and interesting topic :-/ ...?

Holy crap, I didn't know so many papers had been published on this. It will be interesting to look at it when the dust settles.

Seth Lloyd is a joke...

As an ignoramus (in respect of relevant details) but nevertheless interested bystander, I enjoyed and felt satisfied reading this overview/analysis and its concluding comment [one that _I interpret_ to mean that the 'Firewall idea' is an anomalous hypothesis that ought to have self-incinerated before it was hatched ;>].

The study of black holes has had an interesting trajectory. Back when I was working on them around 1971, people were looking for exact solutions to Einstein's equations for various matter and charge distributions, stationary or in motion... an example would be the Kerr solution for an uncharged rotating black hole. Another would be the eponymous solution :).
The next phase has been a transition to numerical relativity (Matt Choptuik, Franz Praetorius, etc.), still focusing on classical GR, but with the evolution of computing power, various simulations are now possible. Now, with what Lubos has reviewed, we seem to be getting into a quantum (gravity) quagmire applied to black holes. The situation is reminding me of the ancient Greek philosophers, whose untrammeled speculation was not constrained by experiment ... e.g. Parmenides and Zeno (nothing changes, block time) vs. Heraclitus (everything changes) ... I am not criticizing this, just a comment. An (admittedly uninformed) suggestion from me would be to emulate Dirac's invention of his equation and come up with models for quantum gravity and simply see where the math leads. If the equations lead to a contradiction or inconsistency, that would still be useful. If they don't seem to, explore what the results mean. We seem to have moved beyond the Descartes/Galileo/Newton model of experimental theoretical physics and back to the Greeks, armed with much more powerful tools (math). Yes, my rather long blather above does sort of boil down to "play with beautiful equations, and shut up and calculate". Einstein and Dirac and Heisenberg all did this, and the genesis of their insight often seemed like magic and not reason from experiment.

Dear Gordon, physics switched into the research of quantum physics of black holes in 1974, due to Hawking's groundbreaking discovery, so it's clearly invalid to suggest that "today" we are making the transition. Also, many people today are working on analytic solutions to GR. One could even say that not much has changed about the composition of the research. So your suggestion that the 1970s were about the exact solutions and "today" is about the quantum properties is just bullshit. You may have only worked on the less revolutionary part of these GR-related insights, but you were not all of GR research, sorry to say.
If you would dismiss even Hawking's discovery and similar theoretical ones just because they're theoretical ones, I couldn't disagree more, because it's one of the greatest discoveries of 20th century science. I think it's complete nonsense, a Smolin-style nonsense, and a kind of insulting nonsense, that we have moved "back to the Greeks". The Greeks were asking and (usually incorrectly) answering ambitious questions because those are the most attractive ones; they couldn't answer any questions really correctly, so among the possible questions to work on, they chose the most attractive ones. We are solving ambitious questions because the less ambitious ones have been genuinely solved and our knowledge and tools are marginally enough to attack the ambitious ones. This boundary, moving towards the previously "hopelessly detached" questions, is what defines the progress in science, and it's been moving largely uniformly in the positive direction, so your suggestion that 2,000 years have been undone is just shit.

I have never seen a more COMPLETE misunderstanding of what I have said ever, anywhere. Geez, Lubos. Get someone else to scan stuff before you rant. I was certainly not calling for a modification of the Dirac equation – I was suggesting that the method he used, playing with beautiful equations and following the consequences, could prove a fruitful strategy. So it goes with ALL the rest. Certainly I don't think that nothing quantum was done before the present, and I didn't say that. I have never come across anyone so black and white as you, or someone who seems incapable of getting the sense of a post – I am not suggesting we go back to the Greeks, just suggesting that speculative theory not tied to experiment may be forced on us by an inability to experiment. And the ancient Greeks were virtuosos at that. For a smart person, you are pretty dumb. Yes, string theory is the best candidate quantum gravity theory, but is it presently a canonical qgt?
Where, oh where am I denigrating Hawking's contributions? We referenced chats with him in our paper. About slinging mud on modern science – total bullshit. I was trying to praise it, not bury it. And I am not suggesting we go back to Galileo or Dirac. I am beginning to understand how people get into frustrating disagreements with you.

Dear Gordon, you always complain that you were misunderstood. How can it be misunderstood? You just repeated the same thing. What does "playing with beautiful equations" have to do with answering what the infalling observer observes when he crosses the event horizon? Try to play with the Dirac equation or the equations of string theory and answer the question. Others have tried. It hasn't been possible. The Dirac equation clearly has nothing to say about it, and it seems that string theory doesn't allow one to calculate "exact values" of any observables for an infalling observer, either. One has to use different methods than just "playing with beautiful equations" to find out what happens when the horizon is crossed. For the required answer, something conceptual – and perhaps some equations – is missing, so the equations would first have to be found if the answer boiled down to equations. So why are you pumping this completely irrelevant junk about playing with beautiful equations etc. if you must know that this has nothing whatsoever to do with the essence of the question here? Why are you introducing Dirac or Galileo into these debates, who have *absolutely* nothing to do with these matters and who wouldn't really understand any of the papers? And what about the ancient Greeks? WTF? If a theory is "forced on us", it is no longer speculative, so this part of your comment is internally inconsistent, too. The ancient Greeks were never properly forced to accept any theory – their arguments have never really worked. Why are you comparing their situation with the situation in science, especially modern science?
They have nothing to do with each other. The ancient Greeks weren't really doing science, except for some very elementary branches of it. String theory is not only the "canonical" theory of quantum gravity but it's also the only mathematically possible consistent one.

Never heard of us, huh!!?

You were denigrating Hawking's – and many others' – contribution in your comment containing the sentence: "Now, with what Lubos has reviewed, we seem to be getting into a quantum (gravity) quagmire applied to black holes." First, this uses a negatively sounding word, "quagmire", for an exciting – and largely understood as of today – science about the quantum properties of black holes. Second, this sentence is saying that quantum properties of black holes are only starting to be studied "now" (your word), which implies that you think that what Hawking realized and people started to study in 1975 is either not research of quantum properties of black holes or should be ignored. You did ignore it in all your comments, which is why you're denigrating this pillar of the field. You did the same to string theory.

Any reaction from S. Hawking regarding this matter yet?

Oh Lumo, I'm so sorry that the discussion below your very nice answer to my question has gone bad, and I hope I did not make it worse :-/ Since I'm here on TRF I always thought that Gordon likes and appreciates modern fundamental physics a lot too. And I still don't think that he is aligned with the sourballs who want the physics wisdom we have today to be thrown out of the window by a next Newton, Einstein, etc ... Maybe Gordon was just a bit clumsy in choosing his formulations (which I agree look partly some kind of similar to what Sabine Hossenfelder could say, for example ...) to explain what he wanted to say. Maybe the dust between you and Gordon has to settle a little bit too ... ;-) ?
Anyway, I think the discussion by papers in the Arxiv among your colleagues about the firewalls is interesting, and I'm curious whether some deeper insights (about the microscopic degrees of freedom of a black hole, or how the information can come out again, for example) will result from this when the dust has settled :-)

Dear Lubos, can I ask some questions about the standard view on quantum black holes? You wrote: "The black holes evaporate and, as seen in AdS/CFT and Matrix Theory, it's still possible without any violation of the principles of quantum mechanics." I thought that presently, string theory can only describe the thermodynamics of extremal black holes. But the temperature of extremal black holes is zero, so they don't evaporate. Or is this an outdated view? "So pure states evolve into pure states." If I describe the black hole evaporation in the "QFT in classical curved spacetime" framework, I think that the result should be a mixed state, because it is a finite temperature state. So am I right that in quantum gravity, this low energy effective mixed state is produced by entanglement between low and high energy degrees of freedom? Thank you.

There is no doubt we are all still blind individuals when it comes to the subject here. Us laymen more so than others. :) The real issue here is a progressive approach to the questions revealed by any theoretical approach and open discussions. This provides the framework and basis for that discussion. This has been historically verified by information already processed, so by laying out Susskind's thought experiment as Gedanken Experiments Involving Black Holes, I have provided for similarity of discussions on that basis alone. Adding Seth Lloyd to the question of entanglement shows this progressive connection.

Dear Rezso, the view is hugely outdated. It was already outdated in 1996. Only the first paper by Strominger and Vafa focused on a particular extremal, supersymmetric black hole.
It has over 2,000 followups at this moment, many of which are computing thermodynamic properties (the right values) of near-extremal or completely non-extremal black holes, including the rotating non-supersymmetric Kerr case, 7-parameter families, and infinitely many higher-order corrections to various black holes. Many things can't be calculated analytically. However, it's still possible to prove that AdS/CFT and Matrix theory contain non-extremal black holes, as they should, and so on, and so on. There's no reasonable doubt that string theory describes the thermodynamics of all black holes correctly (and all of their behavior outside the event horizon, to make the possible gap very explicit). Yes, the mixed/thermal state of the Hawking radiation is just an approximation, and in any exact theory of quantum gravity, which realistically means in any implementation or vacuum of string theory we know today, it may be seen that a pure state always evolves into a pure state, whether there is a black hole or not. The detailed information about the initial state is imprinted into subtle correlations and entanglement between all the degrees of freedom.

Ah, I see – you took the word "quagmire" and conflated it into an all-out attack by me on quantum mechanics and modern science. Also, my reference to Galileo, if you actually read it for the sense, was to say that Galileo and Newton were instrumental in tying physics to actual experiments and hard data – i.e., the scientific method. This is not a speculation by me, and is not either promoting Galileo or wanting to return to him (or Dirac, whom I do admire). What you have done is to focus on one or two words and look for negative connotations. Yes, when there is experimental evidence, direct or indirect, theory needs to conform to it.
I was simply TRYING to point out that these wars over a firewall remind me of the old Greek philosophers throwing up speculative theories – that in modern astrophysics, like in high energy particle physics, more theories will be inaccessible to direct or maybe indirect confirmation – and this does not mean that I attack these theories at all if they have explanatory power like string theory does (and also I am not saying that string theory is inaccessible to confirmation either, like Woit, whose attitudes are deplorable). I do apologize for flaming you, and admit that the way I wrote the initial post made it seem like QM just recently entered into black hole theory. But my post wasn't meant to be critical, certainly not of Hawking, whom I admire extremely and with whom my supervisor co-edited two books and spent a sabbatical year. Also, I in no way challenge your authority in scientific matters – I have, as I indicated, been out of any active physics activity since 1972. So please lighten up. Calling me Smolinian or whatever is a total insult and will simply result in my packing it in. Of course I haven't read any of the firewall papers – I would be incapable of following them at this point. I assume that you don't want to limit your audience to only active theoretical physicists. As for being off topic, I don't think so. Just like me saying I am misunderstood, your saying "off topic" doesn't make it so ... maybe a bit tangential. Anyway, that's it for at least this post. I just got back from two weeks at Cambridge and am horribly jet lagged from delays, and the initial post was quickly written after scanning your blog post (no, I didn't pay enough attention to it), but I would suggest you assume I am an ally, and if what I say sounds stupid, see if you may be over-analyzing something or taking a word literally and missing nuance or irony or maybe a misuse of a word due to fatigue.

Dear Lubos, thank you for the answer.
However, I'm still sceptical about the claim that the evaporation of non-extremal black holes is fully understood in string theory.

1. I found the following Strominger paper from 1996 about "Nonextremal Black Hole Microstates". Below equation (1.1): "We have not been able to obtain a stringy derivation of the full expression." Below equation (3.7): "We do not know of a systematic derivation of this formula using D-brane technology. However miraculously it agrees with the Bekenstein-Hawking entropy calculated in the previous section from the area of the event horizon." As it seems, my view was not outdated in 1996. :)

2. I looked at a recent CMS collaboration paper which is based on the ADD model (TeV scale quantum gravity), "Search for Microscopic Black Hole Signatures at the Large Hadron Collider". It says: "The parton-level cross section of black hole production is derived from geometrical considerations ... The exact cross section cannot be calculated without knowledge of the underlying theory of quantum gravity and is subject to significant uncertainty." It seems to me that black hole evaporation is only understood in the semiclassical approximation. An expert's opinion would be good.

Dear George, the Alcubierre warp drive is a solution of General Relativity and it was proposed by the Mexican physicist Miguel Alcubierre. The basic idea is that if you can create a special spacetime geometry (a warp bubble), where space expands behind a spaceship and contracts in front of it, then it can lead to faster-than-light travel. Note that this solution doesn't contradict the basic principle of Einstein's theory, because the speed of light is never exceeded in the LOCAL reference frame. This means that a light beam within the warp bubble would still always move faster than the ship. However, it is very, very, very hard (if not completely impossible) to create spacetimes like this, because it requires negative energy density to be present at various locations.
But there is an experimentally verified quantum phenomenon, the Casimir effect, where negative energy density exists in Nature, so the solution is not completely excluded.

Wow, Lubos. If it were not for Joseph Polchinski's name on the paper (and the names of several other serious physicists discussing it), I would think this is just a crackpot paper which does not deserve a minute of attention. We may just be falling through the Rindler horizon of some aliens with their spaceship, so we might all be burned to death soon. :-) Does nobody have an idea how to calculate the experience of an infalling observer of a black hole within string theory, or is the calculation too hard?

Dear Rezso, I am confident that such solutions are impossible in the real world, while the technical reason is probably that they violate an energy (positivity) condition. Alternatively, one may say that the warp bubble before and behind the spaceship is a gravitational wave and it is not allowed to move superluminally, another constraint you violate. The local speed limit at "c" isn't really the only constraint that may be derived from the special relativity limit of GR. There's another ramification. If the spacetime is asymptotically Minkowski flat, then impulses in it can't propagate superluminally, either, whether or not you find an excuse in the form of a modified local geometry.

Dear Mikael, their quantum-information argument is highly nontrivial, even if it is ultimately wrong. It may look like a crackpot paper but it's not. I think that string theory doesn't allow us to calculate observations by an observer inside, even in principle. Because of the finite lifetime in front of her, there are no really exact observables known that could be computed and verified. As I said, I suspect that string theory has a good reason why it keeps silent about these matters.

Dear Lubos, the lifetime may be finite, but with a big enough black hole you can make it as big as you want.
Physics suddenly breaking down in a binary way is just not plausible to me. The answers should become less sharp in a continuous way when making the horizon smaller. Also, all the paradoxes of the falling observer appearing frozen at the horizon and the horizon appearing hot to the distant observer already exist for the Rindler horizon of an accelerated observer in Minkowski space. Just read the guest blog of Polchinski. Really exciting stuff going on.
Markov Chain Monte Carlo Methods
Fall 2006, Georgia Tech
Tuesday and Thursday, 9:30-11am, in Cherry Emerson room 322
Instructor: Eric Vigoda
Textbook: I have some lecture notes which I'll post. Also there's a nice monograph by Mark Jerrum covering many of the topics in this course. They are also available on his webpage, though the book is cheap.
For project details go here.
HW 4 pdf: due Thursday October 19
HW 3 pdf: due Tuesday October 3
HW 2 pdf: due Thursday Sept 14

Here's a very rough schedule to give you an idea of the topics we'll cover. Many of the dates will probably change as we go along.

Lectures 1-2 (8/22, 8/24): Classical Exact Counting Algorithms
  Spanning trees (Kirchhoff's Matrix-Tree Theorem)
  Kasteleyn's poly-time algorithm for the permanent of planar graphs
  Lecture notes: PDF. See Section 1 of Jerrum's book for a different proof of Kirchhoff's result.
Lecture 3 (8/29): The Complexity Class #P, and the Permanent is #P-complete
  Lecture notes: PDF. See Section 2 of Jerrum's book.
Lectures 4-5 (8/31, 9/5): Counting versus Sampling
  Reductions between approximate counting and approximate sampling
  Lecture notes: PDF. See Sections 3.1/3.2 of Jerrum's book.
Lecture 6 (9/7): Sampling: Markov Chain Fundamentals
  Coupling technique; ergodic Markov chains have a unique stationary distribution
  Lecture notes: PDF
HW 2 pdf: due Thursday Sept 14
Lecture 7 (9/12): Coupling from the Past
  Lecture notes: PDF
Lectures 8-9 (9/14, 9/19): Bounding mixing time via coupling
  Random spanning trees; path coupling technique; random colorings
  Lecture notes: PDF
Lecture 10 (9/21): Coupling application: Lozenge tilings
  Lecture by Dana Randall
Lecture 11 (9/26): Linear extensions
  Generating a random linear extension of a partial order
  Notes: see Section 4.3 of Jerrum's book.
HW 3 pdf: due Tuesday October 3
Lecture 12 (9/28): Advanced coupling
  Random colorings: avoiding the worst case, and non-Markovian couplings
  Notes: see the following survey.
Lectures 13-14 (10/3, 10/5): Spectral methods
  Canonical paths; generating a random matching
  Notes: see Chapter 5 of Jerrum's book.
Lectures 15-16 (10/10, 10/12): Approximating the permanent of non-negative matrices
HW 4 pdf: due Thursday October 19
  Supplemental notes: postscript, PDF, including an algorithm/proof sketch for general bipartite graphs
Lecture 17 (10/19): Ising Model
  Connections between phase transitions in statistical physics models and fast convergence of Markov chains
  Strong spatial mixing and O(n log n) mixing time of the Glauber dynamics
Lecture 18 (10/24): Counting/Sampling Algorithms for the Ising Model
  Approximating the partition function via the high-temperature expansion
  Random sampling via the random-cluster representation
Lecture 19 (10/26): Conductance
  Bounding the mixing time via conductance
Lecture 20 (10/31): Torpid mixing for the Glauber dynamics
  Contours argument; lecture by Dana Randall
Lectures 21-23 (11/2, 11/7, 11/9): Estimating the volume of convex bodies
  Lectures by Santosh Vempala
  Notes: see the survey article by Santosh (also, Section 6 of Jerrum's book)
Lecture 24 (11/14): Approximate Counting via Dynamic Programming
  Dyer's #Knapsack result
  Lecture notes: PDF
November 20: Make sure to attend the DIMACS lectures.
Week 13 (11/28): Weitz's deterministic approximate counting algorithm for independent sets
Week 14 (11/30, 12/5, 12/7): Project Presentations
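The Glauber dynamics for random colorings that appears in Lectures 8-9 (and again in Lecture 12) is short enough to sketch. The following is an illustrative implementation, not code from the course notes, and the 5-cycle example graph is an arbitrary choice; for a graph of maximum degree Delta, path coupling shows this chain mixes rapidly when k >= 2*Delta + 1.

```python
import random

# Heat-bath Glauber dynamics for proper k-colorings: pick a random vertex,
# then recolor it uniformly among the colors not used by any neighbor.
# The stationary distribution is uniform over proper colorings.

def glauber_step(coloring, adj, k, rng=random):
    v = rng.randrange(len(coloring))
    used = {coloring[u] for u in adj[v]}
    allowed = [c for c in range(k) if c not in used]
    coloring[v] = rng.choice(allowed)

def sample_coloring(adj, k, steps=10_000, rng=random):
    # Greedy initial proper coloring (works whenever k >= Delta + 1)
    n = len(adj)
    coloring = [0] * n
    for v in range(n):
        used = {coloring[u] for u in adj[v] if u < v}
        coloring[v] = next(c for c in range(k) if c not in used)
    for _ in range(steps):
        glauber_step(coloring, adj, k, rng)
    return coloring

# Example: a 5-cycle with k = 5 colors (Delta = 2, so k = 2*Delta + 1)
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
col = sample_coloring(adj, 5)
```

Note that with k >= Delta + 2 every vertex always has at least one allowed color, so the chain never gets stuck; the stronger k >= 2*Delta + 1 condition is what the path-coupling argument needs for rapid mixing.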
Lynn, MA ACT Tutor
Find a Lynn, MA ACT Tutor

...I have helped numerous students master both the foundations and the specific skills taught in a variety of calculus courses. Geometry is the study and exploration of logic and visual reasoning. There are numerous tools and facts that you need to understand, and then you build on them throughout the year.
23 Subjects: including ACT Math, calculus, statistics, GRE

...They will have gained a large vocabulary, reading and writing skills, and a rigorous and interesting course of study of math topics and concepts; all is geared toward preparation for the ISEE. The ISEE is the entrance exam for Boston's exam schools and many Boston area private schools. The exam results in a ranking of students in the same grade.
9 Subjects: including ACT Math, geometry, algebra 2, algebra 1

...Trigonometry is when math starts getting more complicated and you move from Algebra towards pre-calculus and calculus. Still, it is very important and provides the foundation of much of the more advanced math, and so it is very important to have a good grasp of the concepts. I have extensive experience with several different types of writing activities, both professionally and
28 Subjects: including ACT Math, reading, English, writing

...I focus not only on the essential reading, quantitative, and writing skills, but also coach you on how to allocate your time during the test, how to analyze the structure of a question to gain insight into the probable answer, and how to improve your odds of guessing correctly. My broad academic...
44 Subjects: including ACT Math, chemistry, writing, physics

...I have scored in the 99th percentile on standardized tests (all sections) and have many helpful test preparation strategies to share with those planning to take the SAT and GRE exams. My schedule is very flexible, and I'm available most weekends and some weekday afternoons and evenings. My styl...
44 Subjects: including ACT Math, reading, English, chemistry
On sensitivity: Part I

Climate sensitivity is a perennial topic here, so the multiple new papers and discussions around the issue, each with different perspectives, are worth discussing. Since this can be a complicated topic, I'll focus in this post on the credible work being published. There'll be a second part from Karen Shell, and in a follow-on post I'll comment on some of the recent games being played in and around the Wall Street Journal op-ed pages.

What is climate sensitivity? Nominally, it is the response of the climate to a doubling of CO2 (or to a ~4 W/m^2 forcing); however, it should be clear that this is a function of time, with feedbacks associated with different components acting on a whole range of timescales, from seconds to multi-millennial and even longer. The following figure gives a sense of the different components (see Palaeosens (2012) for some extensions).

In practice, people often mean different things when they talk about sensitivity. For instance, the sensitivity only including the fast feedbacks (e.g. ignoring land ice and vegetation), or the sensitivity of a particular class of climate model (e.g. the 'Charney sensitivity'), or the sensitivity of the whole system except the carbon cycle (the Earth System Sensitivity), or the transient sensitivity tied to a specific date or period of time (i.e. the Transient Climate Response (TCR) to 1% increasing CO2 after 70 years). As you might expect, these are all different, and care needs to be taken to define terms before comparing things (there is a good discussion of the various definitions and their scope in the Palaeosens paper). Each of these numbers is an 'emergent' property of the climate system – i.e. something that is affected by many different processes and interactions, and isn't simply derived just based on knowledge of a small-scale process.
It is generally assumed that these are well-defined and single-valued properties of the system (and in current GCMs they clearly are), and while the paleo-climate record (for instance the glacial cycles) is supportive of this, it is not absolutely guaranteed. There are three main methodologies that have been used in the literature to constrain sensitivity: The first is to focus on a time in the past when the climate was different and in quasi-equilibrium, and estimate the relationship between the relevant forcings and temperature response (paleo constraints). The second is to find a metric in the present day climate that we think is coupled to the sensitivity and for which we have some empirical data (these could be called climatological constraints). Finally, there are constraints based on changes in forcing and response over the recent past (transient constraints). There have been new papers taking each of these approaches in recent months. All of these methods are philosophically equivalent. There is a ‘model’ which has a certain sensitivity to 2xCO2 (that is either explicitly set in the formulation or emergent), and observations to which it can be compared (in various experimental setups) and, if the data are relevant, models with different sensitivities can be judged more or less realistic (or explicitly fit to the data). This is true whether the model is a simple 1-D energy balance, an intermediate-complexity model or a fully coupled GCM – but note that there is always a model involved. This formulation highlights a couple of important issues – that the observational data doesn’t need to be direct (and the more complex the model, the wider range of possible constraints there are) and that the relationship between the observations and the sensitivity needs to be demonstrated (rather than simply assumed). 
The last point is important – while in a 1-D model there might be an easy relationship between the measured metric and climate sensitivity, that relationship might be much more complicated or non-existent in a GCM. This way of looking at things lends itself quite neatly to a Bayesian framework (as we shall see).

There are two recent papers on paleo constraints: the already mentioned PALAEOSENS (2012) paper, which gives a good survey of existing estimates from paleo-climate and of the hierarchy of different definitions of sensitivity. Their survey gives a range for the fast-feedback CS of 2.2-4.8°C. Another new paper, taking a more explicitly Bayesian approach, from Hargreaves et al., suggests a mean of 2.3°C and a 90% range of 0.5-4.0°C (with minor variations dependent on methodology). This can be compared to an earlier estimate from Köhler et al. (2010), who gave a range of 1.4-5.2°C, with a mean value near 2.4°C.

One reason why these estimates keep getting revised is that there is continual updating of the observational analyses that are used – as new data are included, as non-climatic factors get corrected for, and as models include more processes. For instance, Köhler et al. used an estimate of the cooling at the Last Glacial Maximum of 5.8±1.4°C, but a recent update from Annan and Hargreaves, used in the Hargreaves et al. estimate, is 4.0±0.8°C, which would translate into a lower CS value in the Köhler et al. calculation (roughly 1.1-3.3°C, with a most likely value near 2.0°C). A paper last year by Schmittner et al. estimated an even smaller cooling, and consequently a lower sensitivity (around 2°C on a level comparison), but the latest estimates are more credible. Note, however, that these temperature estimates are strongly dependent on still-unresolved issues with different proxies – particularly in the tropics – and may change again as further information comes in.
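The forward-model logic described above (a model maps a candidate sensitivity to an observable, and the observations then weight the candidates) can be made concrete with a toy Bayesian calculation. This is purely an illustration, not code from any of the cited papers: the total LGM forcing of -6 W/m^2 and the uniform prior are arbitrary assumptions for the demo, and only the 4.0±0.8°C LGM cooling estimate comes from the discussion above.

```python
import numpy as np

# Toy forward model: a candidate sensitivity S (degC per CO2 doubling)
# predicts an LGM cooling of S * dF_lgm / F_2x; each S is then weighted by
# how well that prediction matches the observed cooling estimate.
F_2x = 3.7                    # forcing for doubled CO2, W/m^2 (standard value)
dF_lgm = -6.0                 # ASSUMED total LGM forcing, W/m^2 (illustrative)
obs_mean, obs_sd = -4.0, 0.8  # LGM cooling estimate (Annan & Hargreaves)

S = np.linspace(0.1, 8.0, 800)        # grid of candidate sensitivities
predicted_dT = S * dF_lgm / F_2x      # predicted LGM cooling for each S

prior = np.ones_like(S)               # uniform-in-S prior (one choice of many)
likelihood = np.exp(-0.5 * ((obs_mean - predicted_dT) / obs_sd) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum() * (S[1] - S[0])   # normalize to a density

S_mode = S[np.argmax(posterior)]      # posterior mode, about 2.5 degC here
```

Changing the prior, or widening the assumed forcing uncertainty, visibly shifts the posterior, which is one reason the published ranges depend so strongly on such choices.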
There was also a recent paper based on a climatological constraint from Fasullo and Trenberth (see Karen Shell's commentary for more details). The basic idea is that across the CMIP3 models there was a strong correlation of mid-tropospheric humidity variations with the model sensitivity, and combined with observations of the real-world variations, this gives a way to suggest which models are most realistic, and by extension, which sensitivities are more likely. This paper suggests that models with sensitivity around 4°C did the best, though they didn't give a formal estimate of the range of implied sensitivities.

And then there are the recent papers examining the transient constraint. The most thorough is Aldrin et al. (2012). The transient constraint has been looked at before, of course, but efforts have been severely hampered by the uncertainty associated with historical forcings – particularly aerosols, though other terms are also important (see here for an older discussion of this). Aldrin et al. produce a number of (explicitly Bayesian) estimates: their 'main' one with a range of 1.2°C to 3.5°C (mean 2.0°C), which assumes exactly zero indirect aerosol effects, and a possibly more realistic sensitivity test including a small Aerosol Indirect Effect, giving 1.2-4.8°C (mean 2.5°C). They also demonstrate that there are important dependencies on the ocean heat uptake estimates as well as on the aerosol forcings. One nice thing they added was an application of their methodology to three CMIP3 GCM results, showing that their estimates of 3.1, 3.6 and 3.3°C were reasonably close to the true model sensitivities of 2.7, 3.4 and 4.1°C.

In each of these cases, however, there are important caveats. First, the quality of the data is important: whether it is the LGM temperature estimates, recent aerosol forcing trends, or mid-tropospheric humidity – underestimates in the uncertainty of these data will definitely bias the CS estimate.
Second, there are important conceptual issues to address – is the sensitivity to a negative forcing (at the LGM) the same as the sensitivity to positive forcings? (Not likely). Is the effective sensitivity visible over the last 100 years the same as the equilibrium sensitivity? (No). Is effective sensitivity a better constraint for the TCR? (Maybe). Some of the papers referenced above explicitly try to account for these questions (and the forward-model Bayesian approach is well suited for this). However, since a number of these estimates use simplified climate models as their input (for obvious reasons), there remain questions about whether any specific model’s scope is adequate.

Ideally, one would want to do a study across all these constraints with models that were capable of running all the important experiments – the LGM, the historical period, 1% increasing CO2 (to get the TCR), and 2xCO2 (for the model ECS) – and build a multiply constrained estimate taking into account internal variability, forcing uncertainties, and model scope. This will be possible with data from CMIP5, and so we can certainly look forward to more papers on this topic in the near future. In the meantime, the ‘meta-uncertainty’ across the methods remains stubbornly high, with support for both relatively low numbers around 2ºC and higher ones around 4ºC, so that is likely to remain the consensus range. It is worth adding, though, that temperature trends over the next few decades are more likely to be correlated to the TCR than to the equilibrium sensitivity, so if one is interested in the near-term implications of this debate, the constraints on TCR are going to be more important.

104 Responses to “On sensitivity: Part I”

1. 101 Ray Ladbury #93: “I agree that Jeffreys’ Prior is attractive in a lot of situations. However, it is not clear that it would help in this case, is it?
I mean in some cases, JP is flat.”

The form of the Jeffreys prior depends on both the relationship of the observed variable(s) to the parameter(s) and the nature of the observational errors and other uncertainties, which determine the form of the likelihood function. Typically the JP is only uniform where the estimation is of a simple location parameter, with the measured variable being the parameter (or a linear function thereof) plus an error whose distribution is independent of the parameter.

Where (equilibrium/effective) climate sensitivity (S) is the only parameter being estimated, and the estimation method works directly from the observed variables (e.g., by regression, as in Forster and Gregory, 2006, or mean estimation, as in Gregory et al., 2002) over the instrumental period, the JP for S will be almost of the form 1/S^2. That is equivalent to an almost uniform prior if instead 1/S – the climate feedback parameter (lambda) – were being estimated. The reason why a 1/S^2 prior is noninformative is that estimates of climate sensitivity depend on comparing changes in temperature with changes in {forcing minus the Earth’s net radiative balance (or its proxy, ocean heat uptake)}. Over the instrumental period, the fractional uncertainty in the latter is very much larger than the fractional uncertainty in temperature-change measurements, and is approximately normally distributed.

There is really no valid argument against using a 1/S^2 prior in cases like Forster & Gregory, 2006 and Gregory et al., 2002, and that is what frequentist statistical methods implicitly use. For instance, Forster and Gregory, 2006, used linear regression of {forcing minus the Earth’s net radiative balance} on surface temperature, which, as they stated, implicitly used a uniform-in-lambda prior for lambda.
When the normally distributed estimated PDF for lambda resulting from that approach is converted into a PDF for S, using the standard change-of-variables formula, that PDF implicitly uses a 1/S^2 prior for S. However, for presentation in the AR4 WG1 report (Fig. 9.20 and Table 3) the IPCC multiplied that PDF by S^2, converting it to a uniform-in-S prior basis, which is highly informative. As a result, the 95% bound on S shown in the AR4 report was 14.2°C, far higher than the 4.1°C bound reported in the study itself.

Where climate sensitivity is estimated in studies that compare observations with values simulated by a forced climate model at varying parameter settings (see Appendix 9.B of AR4 WG1), the JP is likely to be different from what it would be were S estimated directly from the same underlying data. Where several parameters are estimated simultaneously, the JP will be a joint prior for all parameters and may well be a complex nonlinear function of the parameters.

2. 102 I’m in need of some clarification on what we should now be using as a GWP for methane. From Archer 2007:

“…so a single molecule of additional methane has a larger impact on the radiation balance than a molecule of CO2, by about a factor of 24 (Wuebbles and Hayhoe, 2002)… To get an idea of the scale, we note that a doubling of methane from present-day concentration would be equivalent to a 60 ppm increase in CO2 from present-day, and 10 times present methane would be equivalent to about a doubling of CO2. A release of 500 Gton C as methane (order 10% of the hydrate reservoir) to the atmosphere would have an equivalent radiative impact to a factor of 10 increase in atmospheric CO2… The current inventory of methane in the atmosphere is about 3 Gton C. Therefore, the release of 1 Gton C of methane catastrophically to the atmosphere would raise the methane concentration by 33%. 10 Gton C would triple atmospheric methane.”
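The change-of-variables point can be checked numerically. The sketch below uses illustrative values (not from any of the studies cited): it transforms a normal PDF for lambda into a PDF for S = 1/lambda and shows that the 1/S² Jacobian factor – exactly the implicit 1/S² prior being discussed – still leaves a properly normalized density.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative posterior for the feedback parameter lambda = 1/S
mu_lam, sd_lam = 0.5, 0.15  # ASSUMED values for the sketch

def pdf_S(s):
    """PDF of S = 1/lambda via the change-of-variables formula:
    p_S(s) = p_lambda(1/s) * |d(1/s)/ds| = p_lambda(1/s) / s^2.
    The 1/s^2 Jacobian is the implicit 1/S^2 prior factor."""
    return normal_pdf(1.0 / s, mu_lam, sd_lam) / s ** 2

# Check the transformed density still integrates to ~1 (midpoint rule)
lo, hi, n = 0.2, 50.0, 200_000
h = (hi - lo) / n
area = sum(pdf_S(lo + (i + 0.5) * h) for i in range(n)) * h
print(f"integral of p_S over [{lo}, {hi}] = {area:.4f}")
```

Multiplying `pdf_S` by S² (the AR4 presentation described above) would re-weight the upper tail heavily, which is why the reported 95% bound moved so far.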
(So doubling atmospheric methane requires a 3 Gton release, and 10x present methane requires 30 Gton released?) Here also the GWP of methane is taken as 24. As we know, the 20-yr GWP of methane is commonly stated as 72 (IPCC) or 105 (Shindell). Factoring in the findings of “Large methane releases lead to strong aerosol forcing and reduced cloudiness” (2011) by T. Kurtén, L. Zhou, R. Makkonen, J. Merikanto, P. Räisänen, M. Boy, N. Richards, A. Rap, S. Smolander, A. Sogachev, A. Guenther, G. W. Mann, K. Carslaw, and M. Kulmala – that previous methane GWP figures need a x1.8 correction factor – we should be using a 20-yr GWP for methane of 130 or 180. This is 5.4 or 7.5 times the GWP of 24 that Archer 2007 appears to be using. So maybe the above should say, looking at a 20-yr period (using the GWP of 180):

…To get an idea of the scale, we note that a [100% increase/7.5 = 13% increase] of methane from present-day concentration would be equivalent to a 60 ppm increase in CO2 from present-day, and [10 times/7.5 = 1.33 times] present methane would be equivalent to about a doubling of CO2. A release of [500/7.5 = 66.7] Gton C as methane (order [10%/7.5 = 1.3%] of the hydrate reservoir) to the atmosphere would have an equivalent radiative impact to a factor of 10 increase in atmospheric CO2…

3. 103 Aaron Franklin (102): I wouldn’t go so far as to say that the collective climate science community has completely moved on from the idea, but I’d argue that GWP is a rather outdated and fairly useless metric for comparing various greenhouse gases. It is also very sensitive to the timescale over which it is calculated. It’s correct that an extra methane molecule is something like 25 times more influential than an extra CO2 molecule, although that ratio is primarily determined by the background atmospheric concentrations of the two gases, and GWP typically assumes that forcing is linear in the emission pulse, which is not valid for very large perturbations.
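The commenter’s rescalings are simple ratios and can be checked directly; this snippet only reproduces the arithmetic in the comment and takes the 180 GWP figure at face value (it makes no claim that scaling by GWP is the physically right operation – the next comment argues it is not).

```python
# If methane's 20-yr GWP were 180 rather than the 24 used in Archer (2007),
# each quantity in the quoted passage shrinks by the ratio 180/24 = 7.5.
ratio = 180 / 24
assert ratio == 7.5

scaled = {
    "doubling (100%) of methane -> % increase": 100 / ratio,
    "10x present methane -> multiple":          10 / ratio,
    "500 Gton C release -> Gton C":             500 / ratio,
    "10% of hydrate reservoir -> %":            10 / ratio,
}
for label, value in scaled.items():
    print(f"{label}: {value:.3g}")
```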
But because there’s not much methane to begin with, it’s not true that 1.33x methane has more impact than a doubling of CO2 (we’ve already increased methane by well over this amount)… a doubling of methane doesn’t have nearly as much impact as a doubling of CO2. The key point, however, is the much longer residence time of CO2 in the atmosphere… GWP tries to address this in its own mystical way, but there are much better ways of thinking about the issue. See the recent paper from Susan Solomon, Ray Pierrehumbert, and others.

4. 104 Re: the Norwegian findings (#96–100), they’re still under review. Scroll down to “update”: clicking the Cicero link provided there takes you round trip — RealClimate is extensively referenced.
class description - source file - inheritance tree

TGeoChecker

    TGeoChecker()
    TGeoChecker(TGeoManager* geom)
    TGeoChecker(const char* treename, const char* filename)
    TGeoChecker(const TGeoChecker&)
    virtual void ~TGeoChecker()
    void CheckPoint(Double_t x = 0, Double_t y = 0, Double_t z = 0, Option_t* option)
    static TClass* Class()
    void CreateTree(const char* treename, const char* filename)
    void Generate(UInt_t npoints = 1000000)
    virtual TClass* IsA() const
    void RandomPoints(TGeoVolume* vol, Int_t npoints, Option_t* option)
    void RandomRays(Int_t nrays, Double_t startx, Double_t starty, Double_t startz)
    void Raytrace(Double_t* startpoint, UInt_t npoints = 1000000)
    TGeoNode* SamplePoints(Int_t npoints, Double_t& dist, Double_t epsil, const char* g3path)
    virtual void ShowMembers(TMemberInspector& insp, char* parent)
    void ShowPoints(Option_t* option)
    virtual void Streamer(TBuffer& b)
    void StreamerNVirtual(TBuffer& b)
    void Test(Int_t npoints, Option_t* option)
    void TestOverlaps(const char* path)

Data members:
    TGeoManager* fGeom
    TTree* fTreePts

A simple geometry checker. Points can be randomly generated inside the bounding box of a node. For each point the distance to the nearest surface and the corresponding point on that surface are computed. These points are stored in a tree and can be directly visualized within ROOT. A second algorithm shoots multiple rays from a given point into a geometry branch and stores the intersection points with surfaces in the same tree. Rays can be traced backwards in order to find overlaps by comparing direct and inverse points.

TGeoChecker()
    Default constructor.
TGeoChecker(TGeoManager *geom)
    Constructor for a given geometry.
TGeoChecker(const char *treename, const char *filename)
void CheckPoint(Double_t x, Double_t y, Double_t z, Option_t *)
    Draw point (x,y,z) over the picture of the daughters of the volume containing this point.
    Generates a report regarding the path to the node containing this point and the distance to the closest boundary.
void RandomPoints(TGeoVolume *vol, Int_t npoints, Option_t *option)
    Draw random points in the bounding box of a volume.
void RandomRays(Int_t nrays, Double_t startx, Double_t starty, Double_t startz)
    Randomly shoot nrays from point (startx,starty,startz) and plot intersections with surfaces for the current top node.
TGeoNode* SamplePoints(Int_t npoints, Double_t &dist, Double_t epsil, const char* g3path)
    Shoot npoints randomly in a box of 1E-5 around the current point. Return the minimum distance to points outside. Make sure that the path to the current node is updated; get the response of TGeo.
void Test(Int_t npoints, Option_t *option)
    Check the time of finding "Where am I" for n points.
void TestOverlaps(const char* path)
    Geometry overlap checker based on sampling.
void CreateTree(const char *treename, const char *filename)
    These points are stored in a tree and can be directly visualized within ROOT.
void Generate(UInt_t npoint)
    Points are randomly generated inside the bounding box of a node. For each point the distance to the nearest surface and the corresponding point on that surface are computed.
void Raytrace(Double_t *startpoint, UInt_t npoints)
    A second algorithm shoots multiple rays from a given point into a geometry branch and stores the intersection points with surfaces in the same tree. Rays can be traced backwards in order to find overlaps by comparing direct and inverse points.
void ShowPoints(Option_t *option)

Inline Functions:
    TClass* Class()
    TClass* IsA() const
    void ShowMembers(TMemberInspector& insp, char* parent)
    void Streamer(TBuffer& b)
    void StreamerNVirtual(TBuffer& b)
    TGeoChecker TGeoChecker(const TGeoChecker&)

Author: Andrei Gheata 01/11/01
Last update: root/geom:$Name: $:$Id: TGeoChecker.cxx,v 1.3 2002/07/17 14:22:54 brun Exp $
Copyright (C) 1995-2000, Rene Brun and Fons Rademakers.
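The sampling idea behind RandomPoints/Generate can be sketched outside ROOT. The Python snippet below is not TGeoChecker code; it only illustrates the underlying technique – generating random points inside a bounding box and computing each point’s distance to the nearest surface – using a plain axis-aligned box as a stand-in geometry.

```python
import random

def dist_to_box_surface(p, half):
    """Distance from an interior point p to the nearest face of an
    axis-aligned box centred at the origin with half-lengths `half`."""
    return min(h - abs(c) for c, h in zip(p, half))

def random_points_check(half=(10.0, 10.0, 10.0), npoints=10_000, seed=1):
    """Sample uniform random points in the box and record each point's
    distance to the nearest surface (the quantity TGeoChecker stores)."""
    rng = random.Random(seed)
    dists = []
    for _ in range(npoints):
        p = tuple(rng.uniform(-h, h) for h in half)
        dists.append(dist_to_box_surface(p, half))
    return dists

dists = random_points_check()
print(f"min surface distance over sample: {min(dists):.4f}")
```

In a real geometry these distances come from the solid’s own distance functions, and suspiciously small or negative values flag points sitting on (or outside) a boundary – the basis of the sampling-style overlap checks.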
pressure piping code

Help for calculating maximum allowable piping pressure according to the ASME pressure piping code B31.3. Applets are programs based on the Java language that are designed to run on your computer using the Java Run Time environment.

The ASME code recommends an allowable tensile stress level in the pipe material (see the terminology section at the end of this article). The pressure that can generate this tensile stress level can be calculated taking into account the type of material, temperature and other factors. The formula (see B31.3-1999 code, page 20) which gives the relationship between the pressure (p), the outside diameter (D), the allowable tensile stress (S) and the thickness (t) of the pipe is:

    t = p D / (2 (S E + p Y))     [1]

where
E: material and pipe construction quality factor as defined in ASME Process Piping code B31.3-1999, Table A-1A
Y: wall thickness coefficient with values listed in ASME Process Piping code B31.3-1999, Table 304.1.1

Formula [1] is re-written in terms of the pressure (p) of the fluid within the pipe:

    p = 2 S E t / (D − 2 Y t)     [2]

Calculation example

The pipe is a typical spiral-weld construction assembled according to the specification ASTM A 139-96. The material is carbon steel ASTM A 139. The outside diameter of the pipe is 20.5 inches and the wall thickness is 0.25 inch. For this material, the ASME code recommends that an allowable stress (S) of 16,000 psi be used for a temperature range of -20°F to +100°F. The quality factor E for steel A139 is 0.8; the wall thickness coefficient Y is 0.4.

    Material     Minimum tensile strength (psi)     ASME code allowable stress S (psi)
    ASTM A139    48,000                             16,000

The value of the internal fluid pressure that will produce the tensile stress level stipulated by the ASME code is:

    p = 2 × 16,000 × 0.8 × 0.25 / (20.5 − 2 × 0.4 × 0.25) ≈ 315 psig     [3]

This pressure should be compared to the normal operating pressure. The pressure in a pump system can vary dramatically from place to place. The pressure level vs. location can only be determined on a case by case basis.
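The allowable-pressure formula (p = 2SEt/(D − 2Yt), the B31.3 rearrangement described above) is easy to wrap in a small helper; this sketch reproduces the article’s worked example:

```python
def allowable_pressure(S, E, t, D, Y):
    """Max allowable internal pressure per the ASME B31.3 relationship:
    p = 2*S*E*t / (D - 2*Y*t)
    S: allowable stress (psi), E: quality factor, t: wall thickness (in),
    D: outside diameter (in), Y: wall thickness coefficient."""
    return 2 * S * E * t / (D - 2 * Y * t)

# Worked example from the article: ASTM A139 spiral-weld pipe
p = allowable_pressure(S=16_000, E=0.8, t=0.25, D=20.5, Y=0.4)
print(f"allowable pressure: {p:.0f} psig")  # ~315 psig, matching the article
```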
However, typically the pressure is maximum near the pump discharge and decreases towards the outlet of the system. It is possible that the system could become plugged. When the system plugs, the pump head increases and reaches (at zero flow) the shut-off head in the case of a centrifugal pump. The maximum pressure in the pump system will then be the pressure corresponding to the shut-off head plus the pressure corresponding to the pump inlet suction head. Since the system is plugged, this pressure will extend all the way from the pump discharge to the plug if the plug is at the same elevation as the pump discharge. The relationship between pressure head and pressure is:

    H = 2.31 p / SG     [4]

where (H) is the pressure head in feet of fluid, (p) the pressure in psi, and (SG) the specific gravity of the fluid.

If the shut-off pressure exceeds the allowable operating pressure as calculated by the ASME code, then pressure relief devices may have to be installed. This is not likely to occur in single-pump systems, but multiple series pump systems may produce excessive shut-off pressures, since the pressure at the outlet of the last pump depends on the sum of the shut-off pressures of each pump. Exceptions are provided for in the code relative to the duration of the maximum-pressure events; if they are of short duration, these events may be allowed for short periods. Rupture disks are often used in these situations. They are accurate, reliable pressure relief devices. However, these devices are not mandatory in many systems, and their installation is then a matter of engineering judgment.

Existing systems

In an existing system, one should not rely on the original thickness of the pipe to do the pressure calculations. The pipe may suffer from corrosion, erosion or other chemical attack, which may reduce the wall thickness in certain areas. The pipe wall thickness can easily be measured by devices such as the Doppler ultrasound portable flow meter.
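The head–pressure relationship (equation [4]) can be inverted to get a pressure from a head. The standard form H = 2.31·p/SG (H in feet, p in psi) is assumed here, and the 200 ft shut-off head is an arbitrary illustrative value:

```python
def head_to_pressure(H_ft, SG):
    """Pressure (psi) from head (ft of fluid): p = H * SG / 2.31."""
    return H_ft * SG / 2.31

# e.g. a hypothetical pump with a 200 ft shut-off head on water (SG = 1.0)
p_shutoff = head_to_pressure(200.0, 1.0)
print(f"shut-off pressure: {p_shutoff:.1f} psi")
```

This is the number to compare against the allowable pressure from the B31.3 calculation when deciding whether relief devices are needed.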
The smallest wall thickness should be used as the basis for the allowable pressure calculations, or the damaged areas should be replaced.

New systems

In new systems, consider whether a corrosion allowance (depending on the material) should be used. The corrosion allowance will reduce the wall thickness that is used in the allowable pressure calculation. Also, the piping code allows pipe manufacturers a fabrication tolerance which can be as high as 12.5% on the wall thickness; this allowance should be considered when determining the design pipe wall thickness.

Figure 1 shows the location of the various stress levels in a typical stress vs. strain graph. TS: tensile strength; YP: yield point; BS: breaking strength.

The following four figures are excerpts from the ASME Process Piping Code B31.3. This pdf document provides information on different flange pressure ratings, construction, ANSI class and materials based on the ASME code B16.5. This applet will help you calculate the allowable pressure according to the pressure piping code B31.3. You can download an example of this type of calculation as well as the formulas used, including an extract of the pressure piping code.

The formula for maximum pressure is based on the well-known hoop stress formula, in which two additional factors have been added: Y, a factor based on the type of steel, and E, a factor based on the type and quality of the weld. The pressure piping code is not readily available on the internet; you can probably find it in your local technical university or college library. The book "Piping Handbook" by Mohinder L. Nayyar, published by McGraw Hill, has extracts of the code. Remember that when you check the maximum allowable piping pressure you must also check the maximum allowable flange pressure; this depends on the ANSI class of the flange, the material and the temperature.
Cryptology ePrint Archive: Report 2000/013

Concurrent Zero-Knowledge in Poly-logarithmic Rounds
Joe Kilian and Erez Petrank

Abstract: A proof is concurrent zero-knowledge if it remains zero-knowledge when run in an asynchronous environment, such as the Internet. It is known that zero-knowledge is not necessarily preserved in such an environment; Kilian, Petrank and Rackoff have shown that any {\bf 4}-round zero-knowledge interactive proof (for a non-trivial language) is not concurrent zero-knowledge. On the other hand, Richardson and Kilian have shown that there exists a concurrent zero-knowledge argument for all languages in NP, but it requires a {\bf polynomial} number of rounds. In this paper, we present a concurrent zero-knowledge proof for all languages in NP with a drastically improved complexity: our proof requires only a poly-logarithmic, specifically $\omega(\log^2 k)$, number of rounds. Thus, we narrow the huge gap between the known upper and lower bounds on the number of rounds required for a zero-knowledge proof that is robust under asynchronous composition.

Category / Keywords: foundations / zero-knowledge
Date: received 24 Apr 2000, revised 28 May 2000
Contact author: erez at cs technion ac il
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20000528:112402 (All versions of this report)
Page:The Time Machine (1st edition).djvu/21

"It is simply this, That space, as our mathematicians have it, is spoken of as having three dimensions, which one may call Length, Breadth, and Thickness, and is always definable by reference to these planes, each at right angles to the others. But some philosophical people have been asking why three dimensions particularly—why not another direction at right angles to the other three?—and have even tried to construct a Four-Dimensional geometry. Professor Simon Newcomb was expounding this to the New York Mathematical Society only a month or so ago. You know how on a flat surface, which has only two dimensions, we can represent a figure of a Three-Dimensional solid, and similarly they think that by models of three dimensions they could represent one of four—if they could master the perspective of the thing. See?"

"I think so," murmured the Provincial Mayor; and, knitting his
Hunters Point, New York, NY New York, NY 10016 GRE, GMAT, SAT, NYS Exams, and Math ...I specialize in tutoring math and English for success in school and on the SAT, GED, GRE, GMAT, and the NYS Regents exams. Whether we are working on high school proofs or GRE vocabulary, one of my goals for each session is to keep the student challenged,... Offering 10+ subjects including geometry
Fundamental Theorem of Symmetric Polynomials

December 2nd 2008, 09:37 AM  #1  Junior Member  (joined Nov 2008)

I do not quite understand this theorem. Please show me how to convert a symmetric polynomial to elementary symmetric polynomials. Show me nontrivial examples, please!

Let us work with the field $\mathbb{C}$; it ultimately does not matter because it will work for any field, I just want to use a specific field to make it clearer to you.

Consider the polynomial $f(x_1,x_2) = x_1^2 + x_2^2$; this is a symmetric polynomial (of two variables) because $f(x_1,x_2)=f(x_2,x_1)$. The elementary symmetric polynomials (of two variables) are $s_1 = x_1+x_2$ and $s_2 = x_1x_2$. We see that $x_1^2 + x_2^2 = x_1^2 + 2x_1x_2 + x_2^2 - 2x_1x_2$. Therefore, $f(x_1,x_2) = s_1^2 - 2s_2$.

In general let $f(x_1,...,x_n)$ be a polynomial in $n$ variables. We say that $f$ is symmetric iff $f(x_{\sigma(1)}, ... , x_{\sigma(n)}) = f(x_1,...,x_n)$ where $\sigma$ is any permutation of $\{1,2,...,n\}$, i.e. $\sigma \in S_n$. For example, the polynomial $f(x_1,...,x_n) = x_1^2 + ... + x_n^2$ is obviously symmetric because no matter how we permute the summands we still have the same polynomial. The polynomials $s_1 = x_1 + ... + x_n$, $s_2 = x_1x_2 + x_1x_3 + ... + x_{n-1}x_n$, ..., $s_n = x_1x_2...x_n$ are referred to as the elementary symmetric polynomials in $n$ variables. There is a theorem that says that if $f$ is a symmetric polynomial then we can write it in terms of the elementary symmetric polynomials $s_1,...,s_n$. This is what we want to prove.

Let $K = \mathbb{C}(x_1,...,x_n)$ be the field of rational functions in $n$ variables, i.e. the field of quotients $\frac{f(x_1,...,x_n)}{g(x_1,...,x_n)}$ (where $f,g$ are any polynomials; they need not be symmetric). This is a field under addition and multiplication of polynomials.
Now for any $\sigma \in S_n$ we can define $\sigma \left( \frac{f(x_1,...,x_n)}{g(x_1,...,x_n)} \right) = \frac{f(x_{\sigma(1)},...,x_{\sigma(n)})}{g(x_{\sigma(1)},...,x_{\sigma(n)})}$. It can be shown (in a straightforward manner) that $\sigma$ is an automorphism of $K$, so $\sigma \in \text{Aut}(K)$. We can therefore identify $S_n$ with a subgroup of $\text{Aut}(K)$.

Now define $F = K^{S_n}$, i.e. $F$ is the fixed field under the automorphism subgroup $S_n$. Then $K/F$ is a Galois extension and $\text{Gal}(K/F) = S_n$. Notice that $F$ is the field of all symmetric rational functions. Let $s_1,...,s_n$ be the elementary symmetric polynomials; we wish to show that $\mathbb{C}(s_1,...,s_n) = F$. This will show that any symmetric rational function can be expressed in terms of the elementary symmetric polynomials alone, exactly what we are trying to prove.

Let $p(t) = t^n - s_1t^{n-1} + s_2t^{n-2} - ... + (-1)^n s_n \in \mathbb{C}(s_1,...,s_n)[t]$. Notice that $p(t) = (t - x_1)(t-x_2)...(t-x_n)$ (just expand the RHS to see this). Therefore, $K$ is a splitting field of $p(t)$ over $\mathbb{C}(s_1,...,s_n)$. This forces $[K: \mathbb{C}(s_1,...,s_n)] \leq n!$*. But $[K:F] = |S_n| = n!$, and since $\mathbb{C}(s_1,...,s_n)\subseteq F$ we have $[K:\mathbb{C}(s_1,...,s_n)] \geq [K:F] = n!$. This forces $[K:\mathbb{C}(s_1,...,s_n)] = n!$ and so $\mathbb{C}(s_1,...,s_n) = F$. And this completes the proof.

*) Theorem: Let $K$ be a splitting field of a non-constant polynomial $f(x) \in F[x]$ of degree $n$; then $[K:F]\leq n!$.

December 2nd 2008, 12:23 PM  #3  Junior Member  (joined Nov 2008)

Thank you!
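The two-variable identity at the start of the reply is easy to check numerically (and in fact $x_1^2+\dots+x_n^2 = s_1^2-2s_2$ holds for any $n$). A quick sketch, with hypothetical helper names:

```python
import itertools
import random

def power_sum(xs):
    """The symmetric polynomial x1^2 + ... + xn^2."""
    return sum(x * x for x in xs)

def via_elementary(xs):
    """The same quantity written in elementary symmetric polynomials:
    s1^2 - 2*s2, with s1 = sum of the xi and s2 = sum of pairwise products."""
    s1 = sum(xs)
    s2 = sum(a * b for a, b in itertools.combinations(xs, 2))
    return s1 ** 2 - 2 * s2

rng = random.Random(42)
for n in (2, 3, 5):
    xs = [rng.uniform(-10, 10) for _ in range(n)]
    assert abs(power_sum(xs) - via_elementary(xs)) < 1e-9
print("x1^2 + ... + xn^2 == s1^2 - 2*s2 verified for n = 2, 3, 5")
```

Of course a numerical check at random points is only a sanity check, not a proof; the Galois-theoretic argument above is what establishes the theorem in general.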
The Best Barrel Length For .17 Mach2

by Scott E. Mayer | January 4th, 2011

Scott Mayer reveals how to maximize the velocity of the .17 Mach 2.

Barrel length has an effect on muzzle velocity, but does an inch or two one way or the other make a meaningful difference? Is it possible to have a barrel so long that it reduces velocity? Is there an optimal barrel length for highest velocity? These questions often come up among shooters, and I recently had the chance to answer them as regards the .17 Mach2 (.17M2) rimfire cartridge.

I just completed a report on the Thompson/Center R55 semiautomatic in .17M2, which will appear in an upcoming issue of Shooting Times, and during that project I had occasion to shorten the barrel on a T/C Contender also chambered for .17M2 to the same 20-inch length as that of the T/C R55. The purpose of the amputation was to compare the muzzle velocity of the fixed-breech Contender to that of the R55 autoloader to see if there was any significant velocity loss from the self-loader. There wasn't. In fact, the R55 produced slightly higher velocity. But the project also presented an opportunity to cut the Contender barrel off in one-inch increments to see if there was an apparent optimal barrel length for velocity that I could ascribe to the little .17-caliber rimfire.

Published experiments on how barrel length affects muzzle velocity have been done with the .22 Long Rifle, and from those experiments it has been concluded that generally any barrel length greater than 18 inches is actually causing the .22 Long Rifle bullet to slow down. The precise optimal barrel length for a .22 will vary from one load to another and one gun to the next because of different powder charges in the loads and tolerances in the bore dimensions.
Regardless, the reason for the bullet slow down at that short a barrel length is because the expansion ratio (the sum of the volume of the bore and powder chamber divided by the volume of the powder chamber) for the .22 Long Rifle is so high. In other words, it has a very small powder chamber relative to the bore. As powder burns it increases in volume about 1000 times, which increases pressure if contained as within a chamber. That pressure starts the bullet down the bore against the engraving forces, bullet-on-bore friction and the pressure of the air in the bore in front of the bullet. As the bullet travels down the bore, the volume of the space behind the bullet is increasing such that after reaching a certain point, gas pressure no longer increases. Eventually, the gas pressure and bullet friction reach a point of equilibrium, followed by a transition to the effects of bore friction being greater than gas pressure. If the bullet is still in the bore after that transition, it slows down. While it happens at around 18 inches in a .22 Long Rifle, it would take a barrel several feet long in a cartridge such as the .308 Winchester because it has a much lower expansion ratio. The closer you get to optimal barrel length for maximum velocity, the less significant each increase in velocity becomes, which is why we get along fine with sporter barrel lengths in centerfire rifles. I calculated the expansion ratio for a .22 Long Rifle in an 18-inch barrel and came up with 39. For the .308 Winchester, an 18-inch barrel results in an expansion ratio of about 7. To get an expansion ratio of about 39 with the .308 Winchester takes a barrel around nine feet long. That does not mean a .308 Winchester obtains its highest velocity in a nine-foot long barrel, nor is an expansion ratio of 39 a magic number. 
There are other factors that influence velocity, including the difference in engraving force and coefficient of friction between the outside-lubricated lead .22 bullet and the copper-jacketed .308 bullet. There is also the difference in surface area between the base of a .308-inch bullet and a .224-inch bullet to consider.

So all this brings me back to the Contender project and cutting off its barrel in one-inch increments and how the velocity changed as a result of it. Because the .17M2 has such a small powder chamber (it measured .0176 in3), I assumed optimal length would also be about 18 inches, as with the .22 Long Rifle, so I started with a 23-inch factory barrel. Using an Oehler Model 35P chronograph set up with four-foot screen spacing at 15 feet from the muzzle, my procedure was to chronograph and record five shots, then clamp the barrel in a Wheeler Engineering barrel vise, cut off an inch using a hacksaw and record five new shots. With each inch, velocity declined until I got to 20 inches, then velocity appeared to climb again. By the time I had the barrel cut to 17 inches, velocity appeared to be going back down. Physically, that little increase in the velocity can't occur, and indeed when I checked the data using Tioga Engineering's Baltec1 program, it showed that the blip in velocity was not statistically significant and might not have actually occurred.

Blip or not, I decided to repeat the test using a special 27-inch barrel T/C made for the occasion and to fire 20 shots per inch of barrel to try and lower the standard deviation. I also made it a point to control my variables a lot better by crowning the muzzle between cuts using a brass round-head screw and valve-grinding compound in a cordless drill.
This time I also cleaned the bore thoroughly between groups of shots using a Hoppe's BoreSnake, fired one fouling shot before recording velocities for each group of shots, and used an indicator on the shooting bench top to make sure that regardless of barrel length, the muzzle was always at the same distance from the "start" chronograph screen. This time, muzzle velocity appeared to increase steadily as the barrel was shortened until a barrel length of 23 inches was reached. There was no "blip" in velocity like I had with the previous barrel, and velocity appeared stable between 16.5 and 23 inches of barrel length. A line graph of those results is shown nearby. Because of the .17M2's small powder chamber, I assumed it, like the .22 Long Rifle, had a high expansion ratio and expected similarly to find a barrel length near 18 inches where velocity would be highest. To check expansion ratio, I measured the volume of the little .17's powder chamber by pulling a bullet, dumping the powder, filing a small groove along the length of the bullet, and then weighing the two in grams on a PACT electronic scale. Next, I filled the case with water, reseated the bullet allowing the water to be displaced through the groove I had filed, and weighed the assembled cartridge. The difference between the empty and full case is the weight of the water in grams, which, through the magic of the metric system, was also the volume of the powder area in cubic centimeters, which I converted to cubic inches. Bore volume was obtained by multiplying the effective length of the barrel (distance from the boltface to the muzzle, plus the seating depth of the bullet, minus the case length) by πr². That value was taken at 98.5 percent to account for volume displaced by rifling. Measured as such, the .17M2 fired in an 18-inch barrel has an expansion ratio of about 25.
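The volume bookkeeping just described is simple enough to script. The 0.288 g water weight below is back-derived from the article's measured 0.0176 in³ (it is not a reported weighing), the 0.172-inch groove diameter is an assumed value for the .17 caliber, and nominal barrel length is used in place of the author's effective length, so the ratios land slightly under his 25 and 38:

```python
import math

CC_PER_IN3 = 16.387            # cubic centimeters per cubic inch

# Water method: grams of water displaced = volume in cc.
water_grams = 0.288            # back-derived, consistent with 0.0176 in^3
chamber_in3 = water_grams / CC_PER_IN3

def expansion_ratio_17m2(barrel_in, bore_dia=0.172, chamber=0.0176, rifling=0.985):
    """Bore volume (discounted 1.5 percent for rifling) plus chamber
    volume, divided by chamber volume."""
    bore_vol = math.pi * (bore_dia / 2.0) ** 2 * barrel_in * rifling
    return (bore_vol + chamber) / chamber

r18 = expansion_ratio_17m2(18.0)   # just under the article's ~25
r27 = expansion_ratio_17m2(27.0)   # just under the article's ~38
```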
With the special 27-inch barrel T/C made, the expansion ratio is around 38, so I reasoned 27 inches would be about optimum for highest velocity with the .17M2 and expected velocity to decline with each inch of barrel removed. Because my experiment indicated otherwise, I contacted Dave Emary of Hornady to see if he had done any similar experiments when initially developing the .17M2. "We did the same thing very early on with a minimum spec test barrel. We found that with barrel lengths over 21 to 22 inches, we started to lose velocity. Barrel lengths from 16 to 21 inches produced virtually the same velocity," Emary said. You can clearly see on the line graph that velocity for the .17M2 is reduced using a barrel more than 23 inches long. From previous discussions with ballistics experts William C. Davis Jr. and Charles R. Fagg of Tioga Engineering, I knew there could be more information in the raw data than was readily apparent. I consulted with them, and Davis was kind enough to use various data smoothing techniques to find a smooth curve that best fit the raw data. Smooth curves filter out normal fluctuations in the data, allowing you to see through to the changes occurring in the sample. From the nine different curves Davis tried, a parabola was chosen as the best fit. The smoothed data is shown as the red trend line. With the data smoothed, it showed that maximum velocity for the .17M2 is obtained with a 19- to 20-inch barrel and, as Hornady found in the development of the cartridge, that velocity is essentially the same for all sporter barrel lengths. What this means to a .17M2 shooter, then, is that you can opt for a rifle with a shorter barrel and not sacrifice anything in the way of velocity, and if maximum velocity is what you're after, you should realize it with a barrel 19 to 20 inches long.
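Davis's parabola smoothing is the standard least-squares quadratic fit; a minimal sketch with made-up velocities (not the article's raw data), built around a curve peaking near 19.5 inches, shows how the vertex of the fitted parabola recovers the optimum length:

```python
import numpy as np

lengths = np.arange(16.0, 28.0)          # barrel lengths, inches
rng = np.random.default_rng(1)
# Illustrative muzzle velocities (fps): a gentle parabola plus noise.
velocity = 2100.0 - 0.8 * (lengths - 19.5) ** 2 + rng.normal(0.0, 2.0, lengths.size)

a, b, c = np.polyfit(lengths, velocity, 2)   # least-squares parabola
best_length = -b / (2.0 * a)                 # vertex = length of peak velocity
```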
Adding and Subtracting Fractions Without Common Denominators

1. Add these fractions together. Simplify where needed and change all Improper Fractions into Mixed Numerals.
2. Subtract these fractions. Simplify where needed and change all Improper Fractions into Mixed Numerals.
3. Add these fractions. Simplify where needed and change all Improper Fractions into Mixed Numerals.
4. Subtract these fractions. Simplify where needed and change all Improper Fractions into Mixed Numerals.
5. Add these fractions. Simplify where needed and change all Improper Fractions into Mixed Numerals.
6. Subtract these fractions. Simplify where needed and change all Improper Fractions into Mixed Numerals.
7. Add these quantities. Simplify where needed and change all Improper Fractions into Mixed Numerals.
8. Subtract these quantities. Simplify where needed and change all Improper Fractions into Mixed Numerals.
9. Add these quantities. Simplify where needed and change all Improper Fractions into Mixed Numerals.
10. Subtract these quantities. Simplify where needed and change all Improper Fractions into Mixed Numerals.
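For checking answers, Python's fractions module does exactly this arithmetic, rewriting unlike denominators over a common one and reducing the result; the mixed-numeral helper below is a small assumed convenience, not part of the standard library:

```python
from fractions import Fraction

def to_mixed(f):
    """Render a Fraction as a mixed numeral, e.g. Fraction(19, 12) -> '1 7/12'."""
    sign = '-' if f < 0 else ''
    whole, rem = divmod(abs(f.numerator), f.denominator)
    if rem == 0:
        return f"{sign}{whole}"
    if whole == 0:
        return f"{sign}{rem}/{f.denominator}"
    return f"{sign}{whole} {rem}/{f.denominator}"

a = Fraction(3, 4) + Fraction(5, 6)   # 9/12 + 10/12 = 19/12, an improper fraction
b = Fraction(5, 6) - Fraction(3, 8)   # 20/24 - 9/24 = 11/24
```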
Quantum Diaries

I know in my life at least, there are periods when all I want to do is talk to the public about physics, and then periods when all I would like to do is focus on my work and not talk to anyone. Unfortunately, the last 4 or so months falls into the latter category. Thank goodness, however, I am now able to take some time and write about some interesting physics which had been presented both this year and last. And while polar bears don't really hibernate, I share the sentiments of this one. A little while ago, I posted on Dalitz Plots, with the intention of listing a result. Well, now is the time. At the 7th International Workshop on the CKM Unitarity Triangle, LHCb presented preliminary results for CP asymmetry in the channels \(B\to hhh\), where \(h\) is either a \(K\) or \(\pi\). Specifically, the presentation was to report on searches for direct CP violation in the decays \(B^{\pm}\to \pi^{\pm}\pi^{+}\pi^{-}\) and \(B^{\pm}\to\pi^{\pm}K^{+}K^{-}\). If CP were conserved in this decay, we would expect decays from \(B^+\) and \(B^-\) to occur in equal amounts. If, however, CP is violated, then we expect a difference in the number of times the final state comes from a \(B^+\) versus a \(B^-\). Searches of this type are effectively "direct" probes of the matter-antimatter asymmetry in the universe. By performing a sophisticated counting of signal events, CP violation is found with a statistical significance of \(4.2\sigma\) for \(B^\pm\to\pi^\pm\pi^+\pi^-\) and \(3.0\sigma\) for \(B^\pm\to\pi^\pm K^+K^-\). This is indeed evidence for CP violation, which requires a statistical significance >\(3\sigma\). The puzzling part, however, comes when the Dalitz plot of the 3-body state is considered. It is possible to map the CP asymmetry as a function of position in the Dalitz plot, which is shown on the right. It's important to note that these asymmetries are for both signal and background.

Also, the binning looks funny in this plot because all bins are of approximately equal populations. In particular, notice the red bins on the top left of the \(\pi\pi\pi\) Dalitz plot and the dark blue and purple section on the left of the \(\pi K K\) Dalitz plot. By zooming in on these regions, specifically \(m^2(\pi\pi_{high})>\)15 GeV\(^2\)/c\(^4\) and \(m^2(K K)<\)3 GeV\(^2\)/c\(^4\), and separating by \(B^+\) and \(B^-\), a clear and large asymmetry is shown (see plots below). Now, I'd like to put these asymmetries in a little bit of perspective. Integrated over the Dalitz plot, the resulting asymmetries are \(A_{CP}(B^\pm\to\pi^\pm\pi^+\pi^-) = +0.120\pm 0.020(stat)\pm 0.019(syst)\pm 0.007(J/\psi K^\pm)\) and \(A_{CP}(B^\pm\to\pi^\pm K^+K^-) = -0.153\pm 0.046(stat)\pm 0.019(syst)\pm 0.007(J/\psi K^\pm)\). Whereas, in the regions which stick out, we find: \(A_{CP}(B^\pm\to\pi^\pm\pi^+\pi^-\text{ region}) = +0.622\pm 0.075(stat)\pm 0.032(syst)\pm 0.007(J/\psi K^\pm)\) and \(A_{CP}(B^\pm\to\pi^\pm K^+K^-\text{ region}) = -0.671\pm 0.067(stat)\pm 0.028(syst)\pm 0.007(J/\psi K^\pm)\). These latter regions correspond to statistical significances of >\(7\sigma\) and >\(9\sigma\), respectively. The interpretation of these results is a bit difficult: the asymmetries are four to five times those of the integrated asymmetries, and are not necessarily associated with a single resonance. We would expect the \(\rho^0\) and \(f_0\) resonances to appear in the lowest region of the \(\pi\pi\pi\) Dalitz plot, in the asymmetry. In the \(K K\pi\) Dalitz plot, there are really no scalar particles which we expect to give us an asymmetry of the kind we see. One possible answer to both these problems is that the quantum mechanical amplitudes are only partially interfering and giving the structure that we see. The only way to check this would be to do a more detailed analysis involving a fit to all of the possible resonances in these Dalitz plots.
All I can say is that this result is certainly puzzling, and the explanation is not necessarily clear.
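The "sophisticated counting" behind these numbers reduces, at its crudest, to a raw charge asymmetry with a binomial error bar. The counts below are toy values chosen only so the result lands near the quoted \(+0.120\) and \(4.2\sigma\); they are not LHCb's actual signal yields, and the real analysis also corrects for production and detection asymmetries:

```python
import math

def cp_asymmetry(n_minus, n_plus):
    """Raw asymmetry A = (N- - N+)/(N- + N+) and its naive statistical
    significance A / sigma_A, with sigma_A = sqrt((1 - A**2) / N)."""
    n = n_minus + n_plus
    a = (n_minus - n_plus) / n
    sigma_a = math.sqrt((1.0 - a * a) / n)
    return a, a / sigma_a

asym, signif = cp_asymmetry(675, 531)   # toy counts, not real yields
```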
Emerson, GA Algebra Tutor

Find an Emerson, GA Algebra Tutor

I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry, algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe...
20 Subjects: including algebra 2, algebra 1, chemistry, reading

...I try to improve upon their homework habits, study skills, method of approach and observational acuteness with tips and strategies. It's not all about working hard, but working smart as well. I guarantee I can teach study skills!
25 Subjects: including algebra 1, algebra 2, reading, chemistry

...One of my skills is breaking down subject matter into smaller chunks that make it meaningful to the student. The best tutorial method for special needs students is the hands-on, kinesthetic approach to learning. I currently teach math to ADD/ADHD students.
18 Subjects: including algebra 2, algebra 1, reading, English

Hello! I am a flexible and creative certified educator available for private tutoring sessions. I have a Master's Degree in Teaching from USC and experience at the high school and middle school levels.
25 Subjects: including algebra 1, reading, geometry, GED

...I worked to establish an ESL institute in San Pedro de Macoris, Dominican Republic, for over a year. I developed the curriculum as well as taught courses at all levels. I taught ESL in a private institution in Chihuahua, Mexico for 3 months.
22 Subjects: including algebra 1, English, Spanish, reading
Summary: Chapter 1

On-line Choice of On-line Algorithms
Yossi Azar, Andrei Z. Broder, Mark S. Manasse

Let {A_1, A_2, ..., A_m} be a set of on-line algorithms for a problem P with input set I. We assume that P can be represented as a metrical task system. Each A_i has a competitive ratio a_i with respect to the optimum off-line algorithm, but only for a subset of the possible inputs, such that the union of these subsets covers I. Given this setup, we construct a generic deterministic on-line algorithm and a generic randomized on-line algorithm for P that are competitive over all possible inputs. We show that their competitive ratios are optimal up to constant factors. Our analysis proceeds via an amusing card game.

1 Introduction

A common trick of the trade in algorithm design is to combine several algorithms using round robin execution. The basic idea is that, given a set of m algorithms for a problem P, one can simulate them one at a time
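The round-robin idea the introduction starts from can be illustrated with a toy combiner: follow one candidate until its accumulated cost exhausts a budget, hand over to the next, and double the budget after each full round. This is a loose sketch of the generic trick under simplifying assumptions (per-request costs known as they are paid, no charge for switching), not the paper's construction for metrical task systems:

```python
def combine_online(costs, base=2.0):
    """costs[i][t] is the cost algorithm i would pay on request t.
    Returns the total cost paid by the budgeted round-robin combiner."""
    m, n = len(costs), len(costs[0])
    budget, cur, total = base, 0, 0.0
    spent = [0.0] * m                 # running cost of each candidate
    for t in range(n):
        # Hand over until the current candidate fits within the budget;
        # completing a full round raises the budget geometrically.
        while spent[cur] + costs[cur][t] > budget:
            cur = (cur + 1) % m
            if cur == 0:
                budget *= base
        spent[cur] += costs[cur][t]
        total += costs[cur][t]
    return total
```

On two cost streams of 1 and 100 per request, the combiner ends up tracking the cheap candidate almost exclusively.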
Oblivion Continuum

Time Dilation

Time dilation is pretty much the basis for Einstein's theory of special relativity. To explain what it is, it's important to know what a frame of reference is. A common analogy used is to imagine someone is on a train bouncing a basketball. When the train is moving, whoever is bouncing the ball will simply see the ball moving up and down. However, to someone who is not on the train, the ball will appear to move a bit like this: This is an example of two different frames of reference: the one on the train and the one off the train both observe different things. Now imagine that train is going really fast, like half of the speed of light, and replace the basketball with a device to measure time. Since electromagnetic radiation (light) travels at the same speed in a vacuum, the 'timer' should measure light in some way. The timer, shown below, has an emitter that emits a photon or beam of light, and a receiver that detects the light. For simplicity, I'm going to assume that L₀ is very large compared to d₀ so that the light travels directly up and down. The time taken for this process, which we call ∆t₀, is equal to the distance the light travels divided by the speed of light (c). ∆t₀ = 2L₀/c (equation 1) So while the train is not moving, both observers can agree on what ∆t₀ equals. When the train reaches a constant velocity, half the speed of light, the observer who didn't get on the train will see train-person's 'timer' do this: Now the light has to travel much further to reach the receiver (rather than just straight up and down), and therefore the stationary observer sees the person on the train having a slower time interval. Focusing on the frame of reference of the person on the train, the time interval ∆t₀ is the same as equation 1, but from the stationary observer's point of view, the time interval will be different.
If you want to read the math part: The distance, 2r, that the light travels will equal: 2r = 2√(L₀² + (0.5c·∆t′/2)²) (equation 2) Here 0.5c·∆t′ is the horizontal distance the train covers during the stationary observer's interval, because the train is moving at half the speed of light. The new time measured by the stationary observer will be: ∆t′ = 2r/c (equation 3) Then after substituting equation 2 into equation 3 and a bit of algebra (which I won't do because there's already too much maths here), eventually you get this: ∆t′ = ∆t₀ / √(1 − (0.5c)²/c²)
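The factor on the bottom is the Lorentz factor, so the dilated interval is easy to tabulate; a minimal sketch:

```python
import math

def dilated_interval(dt0, v_frac):
    """Stationary-frame interval for a clock moving at v_frac * c,
    given the proper interval dt0:  dt' = dt0 / sqrt(1 - v^2/c^2)."""
    return dt0 / math.sqrt(1.0 - v_frac ** 2)

# At half the speed of light, each second on the train stretches to
# about 1.155 seconds for the stationary observer.
stretch = dilated_interval(1.0, 0.5)
```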
Into the next dimension with HIT f/x Note: Last week I wrote an article on hit f/x, and within 15 minutes a number of astute and informed readers pointed out that I was wrong about where the point of contact was being measured from, which led to some incorrect conclusions. Mea culpa, but it did start an interesting discussion which you can still find in the comments below, although the numbers and conclusions in the following article have been corrected: Warren Spahn once said: “Hitting is timing. Pitching is upsetting timing.” And for all that PITCHf/x has told us about who has the nastiest pitches and what locations hitters feast off of, it has had very little to say about such a critical aspect of the hitter-pitcher duel as timing. As a result, while PITCHf/x can easily tell where in the strike zone batters are succeeding or struggling to make solid contact, for the most part why that is the case is just guesswork. Further complicating saying anything useful about hitters is that the normal PITCHf/x data gives the location of each pitch as it crosses the very front of the plate. That’s fine for pitchers, since that’s where balls and strikes are decided. But with breaking balls moving wildly as they pass the plate and a full six feet of batter’s box in which the hitter could theoretically be standing as he reaches for the ball, where the pitch crossed the front of the plate and where the batter actually made contact with it could be far apart. Enter HITf/x to save the day on both counts. Along with the horizontal and vertical location at which the bat struck the ball, it measures how far along its flight path the ball was when it was struck. This will be especially useful once we have enough data to compare a hitter’s normal contact point to where it is when he is in a slump or on a streak to see if anything has changed. But for now, here are the average travel distances for each type of pitch using the available April data. 
I have converted the values hit f/x provides to where the ball is in relation to the front of the plate (as pitch f/x measures things), with positive numbers indicating how far in front of the plate the ball was struck, and negative numbers for when a ball was hit after crossing the front of the plate.

Pitch Type   Average Distance in Front of Home Plate (ft)   Average Velocity (mph)
Fastball     1.06                                           91.2
Slider       0.84                                           83.1
Changeup     0.79                                           82.4
Curve        0.71                                           76.5
Total        0.95                                           87.0

So on average, batters make contact with the ball about a foot in front of the plate, with fastballs hit sooner than offspeed pitches by about four inches. Here's a look at the distribution of contact for every batted ball in April (hat tip to Alan Nathan in the comments): There isn't much variation between hitters, with batters at the extremes (of earliest and latest contact) in the league taking their cuts at balls within a foot of each other. Here are the top five of each through April—for the full list, you can download this Excel file.

Earliest contact (at least 25 balls in play)
Player Name       Average Distance in Front of Plate (ft)
Alexei Ramirez    1.44
Mike Lowell       1.42
Hank Blalock      1.35
Justin Morneau    1.32
Garrett Atkins    1.30

Latest contact (at least 25 balls in play)
Player Name          Average Distance in Front of Plate (ft)
Anderson Hernandez   0.40
Emmanuel Burriss     0.46
Nick Johnson         0.51
Kazuo Matsui         0.53
Chipper Jones        0.55

One player on this list who demands attention is Alexei Ramirez: he hit .214 in April but has hit .288 since, suggesting that he may have been jumping at pitches too soon in April. But it's really too early to know if these are typical numbers for these hitters, let alone what this might all mean about their swings and approaches at the plate. But in general, should different pitches be hit at different depths? We saw above that offspeed pitches on average travel about an extra four inches before they are hit.
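The front-of-plate conversion behind these tables is just a fixed offset: HITf/x reports hit_y from the point of home plate, and the plate is 1.417 feet deep, so (as a minimal sketch):

```python
PLATE_DEPTH_FT = 1.417    # point of home plate to its front edge

def in_front_of_plate(hit_y_ft):
    """Convert a raw hit_y (feet from the point of the plate) to
    distance in front of the front edge; positive means out in front."""
    return hit_y_ft - PLATE_DEPTH_FT

# The average fastball contact of 1.06 ft in front of the plate
# corresponds to a raw hit_y of roughly 2.48 ft.
raw_fastball = 1.06 + PLATE_DEPTH_FT
```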
But should batters actually try to let certain pitches travel further, and contact others aggressively before they cross the plate, or is that just a result of the type of pitch? The following two graphs show how batting averages are affected on balls hit into play by how far in front of the plate contact is made. Zero is the very front of the plate, which extends 1.417 feet past that (picture a pitcher throwing from the right hand side of the following graphs, with 0 being the front edge of the plate). [Graphs: fastball distance; offspeed distance] As both groups of pitches (I lumped all the offspeed pitches together because they were essentially identical) peak at about the same point, there seems to be no difference in where hitters should try to make contact with the ball for maximum results. However, while offspeed pitches are equally hittable wherever they are contacted, batting average against fastballs rises steadily the later they are contacted, to a difference of almost 200 points between relatively early and late contact. Once all the data are available, that will open up a new area of study—not just for hitters, in terms of little tidbits like who has the best timing on each pitch, or who can still get a hit when they're fooled, but for the whole art of pitching that is disrupting timing, and how they do it as well. For now, here's a rundown of what we've learned during our preview of the third dimension:
— Hitters tend to hit the ball about a foot before it crosses the front edge of the plate.
— Batters hit fastballs earlier, and let offspeed pitches travel further before making contact.
— However, if hitters are able to stay back and contact fastballs later, they get better results.
— On the other hand, it doesn't matter when a player makes contact with offspeed pitches.
— Very late contact is equally bad on both fastballs and offspeed pitches.
1. Nick Steiner said... That second strikezone chart doesn't look right.
How could the pitch rise a foot after it crosses home plate? 2. Sky Kalkman said... Great point about needing to know how much a pitch moves from the front of the strike zone to when it’s contacted. I’m having trouble believing some of the data in the charts, however. For example, the change-up to Chipper dropped a foot vertically while only traveling 1.6 feet horizontally? That seems really steep. And the curve to Josh Johnson traveled 1.5 feet sideways over 1.3 feet horizontally? That’s a ridiculous angle. I’m open to having my world flipped upside down, but those seem unlikely. Or I’m reading the graphs wrong. 3. Sergei said... >Almost all balls are hit BEFORE they reach home plate, not after. Bravo Mr. Jensen! Some of those saberanalysts musta never seen real baseball. 4. Shane Elzer said... Hey Jon. This is great work and certainly provides an early look into the possibilities of HIT/fx. One thing that I must challenge, though – and this, in a way, reflects Peter’s sentiments – is your measurement and reporting of “travel”. Home plate is roughly about 17” in depth from the front edge to the back corner. So I’m finding it difficult to believe that most hitters, on average, are making contact with a fastball 2.5’ beyond the front edge of home plate, or over one foot BEHIND home plate. In a recent article at Viva El Birdos (http://www.vivaelbirdos.com/2009/8/19/992832/albert-pujols-anatomy-of-the-swing), you can see that Pujols is making contact just about at the front edge of the plate. While Pujols is certainly not representative of the average hitter, I imagine that this contact point is a better estimate across the board. While this article is very exciting and interesting, I can’t buy into the information that most big league hitters are making contact with pitches in foul territory. 5. Sergei said... 
Otherwise, the author makes a great point: “… that will open up a new area of study … for the whole art of pitching that is disrupting timing” And that’s why I wanna see not just Pitch f/x and Hit f/x, but BAT F/X! PS Mr. Hale, why doesn’t your article mention MGL? This study was all based on his original ideas, no? 6. Jonathan Hale said... Peter: Thanks, what a boondoggle. I should have known by where all the bunted balls were contacted. So yes, there are a lot of opposite conclusions to be made here: - Hitters do manage to contact fastballs earlier in the zone and wait on changeups. - But the more they’re ahead of fastballs the worse they make contact with them. - Alexei Ramirez was probably standing forward in the zone or lunging at balls in April. That’s really too bad that it might not be accurate at all, though – it makes a lot of these excited ramblings less rambling. Maybe game f/x can incorporate something about where the ball lands as you did. 7. Jonathan Hale said... Nick: Bad graph! It should be the other way around. 8. Nick Steiner said... Thanks Jon, I probably should have figured that out. Also, I hate to be a bother, but do you think that you or Peter could give me a quick explanation of some of the fields in Hit f/x? I’ve just started playing around with my data, and I’d be nice to know what I’m dealing with. 9. Alan Nathan said... Re Peter Jensen’s comments: 1. The hit_y values in hitf/x are the distance from the point of home plate (with a positive number meaning toward the pitcher). They are NOT measured from the front edge of home plate. For example, if hit_y=1.417, the contact is made right at the front of home plate. If hit_y is larger than 1.417, contact is made in front of home plate. etc. etc. 2. Peter’s 2nd comment is worth expanding on a bit. The hitf/x data do not extend right to the contact point. 
Instead, they start some number of feet from the front of home plate (I forget the exact number and in any case it is ball-park dependent, and even batter-dependent). But in any case, the trajectory is measured some distance from the contact point and is only measured over a short distance. The basic idea is that the measured trajectory is extrapolated backwards to find where it intersects the pitch trajectory. Now, as Peter has said, the extrapolation assumes a linear trajectory (i.e., constant velocity), since there is not enough information to do a constant-acceleration fit. This was discussed briefly at the summit. Rand Pendleton (from Sportvision) and I were working on a technique to use the reasonably-well-known drag coefficients and gravity (which is very well known) to calculate the acceleration, then use that to extrapolate back to the contact point. It is still a work in progress. I doubt it will have a great deal of effect on the actual contact point (it has more of an effect on the contact velocity—the so-called speed off bat—which is actually a bit higher than hitf/x tells you because of drag). 10. Jonathan Hale said... Sky: I know, they messed me up too. But I checked them again and against others and that's what we're looking at in terms of amplitude at least (although as I mentioned that sideways curve to Johnson was kind of unusual) of the distance between the two. Maybe something is just completely wacky with the data, but I managed to convince myself that a curveball travelling down and away at a 45 degree angle (i.e. down and sideways as much as it's going forward) right before it got to the catcher's glove wasn't a completely ridiculous idea. 11. Peter Jensen said... Alan – Thanks Alan for correcting me. All Pitch f/x and Hit f/x Y values are measured from the back of home plate, not the front. I knew that, of course, but in my haste to correct Jonathan's error I made one of my own. 12. Nick Steiner said...
Jon, but if your initial assumption was wrong, and contact is being made in front of the plate, I don’t see how your graphs can be right. Again, that would assume that the ball is rising. 13. Jonathan Hale said... Sergei: If MGL has written an similar article, please share. Otherwise these are my original ideas (and mistakes). 14. Nick Steiner said... Scratch my last comment. 15. Sergei said... With all due respect to everybody else, I have to confess that Mr. Alan Nathan is my king! Keep up the good work. Alan, I’d only ask you to maybe forward the Bat f/x idea – I’m sure you understand what I’m talking about – to people who might be actually able to make it possible someday. Or maybe even tomorrow. - Sergei from Moscow 16. Alan Nathan said... Sergei: I saw your bat f/x comment but actually don’t know exactly what you refer to. Please expand on this a bit. And while you’re at it, tell us how someone from Moscow got interested in baseball (privately, if you prefer, at 17. Alan Nathan said... Here is a link to a hastily prepared plot of the distribution of hit_y values for the 15k hits in the hitf/x data base. Note that y=0 is the point of home plate, y=1.417 is the front of home plate. The data are binned in 0.25 ft buckets. (I just posted this also at The Book) Note that the distribution is peaked just in front of the front edge of home plate (1.5-1.75 ft) 18. Alan Nathan said... Jonathan: Despite your misinterpretation of the meaning of hit_y, your idea for an analysis is a good one. I hope you are able to continue it with your redefined values. Let me know if I can be of any help. Nick: you wanted a definition of the various fields in the hitf/x data. To my knowledge, no one has written that down yet. 
Here is a quickee (assuming you are already familiar with the pitchf/x coordinate system): hit_initial_speed: batted ball speed at contact, in mph hit_horizontal_angle: spray angle, in degrees, with the convention that 1B line is 45 deg, 2B 90 deg, 3B line is 135 deg. hit_vertical_angle: vertical launch angle, in degrees hit_vx0, hit_vy0, hit_vz0: initial x,y,z components of batted ball speed, in ft/s (you can check that this is consistent with hit_initial_speed) hit_x0,hit_y0,hit_z0: location of ball x,y,z at contact, in feet. hit_y0 (called hit_y in several posts) is the quantity currently under discussion. hit_ave_lop: you can ignore this. It is some measure of how well the constant-vel curve fits the data (smaller is better). Most of the remainder of the fields are pitchf/x quantities, which should already be familiar to you. Given the 9P fit to the pitchf/x trajectory and the initial hitf/x trajectory, you should be able to produce a plot showing the full pitch trajectory and the initial part of the batted ball trajectory. 19. Sergei said... Alan, e-mailed you with some thoughts on bat f/x. Don’t see a real reason to post it here – briefly, it’s just about capturing the bat itself – because you’re the only one to reply on it anyway. But I’ll answer you second question right here as well – just for fun > And while you’re at it, tell us how someone from Moscow got interested in baseball. Mostly a typical case of “seeing the greener grass where we ain’t” – it was back in the 80s. But now, having watched enough of Russian baseball, turning to cricket instead. 20. Gilbert said... While it was commented on that the FB avg dropped sooner than the offspeed avg, the offspeed avg drops suddenly off a cliff at 4’ while FB is still solid to 4.5. If there is not a measurement problem it may mean that extra late contact on an offspeed pitch is hit softly and the late break makes it harder to hit solidly, while late FB contact still produces some off-field liners. 
They both have a bump at 4.5 possibly because so few are hit there that the fielders are not in a position for it. After that, outs in foul territory without any credit for fouls that drop probably cause the steep drop. 21. Dave Studeman said... Great stuff, Jonathan! 22. Peter Jensen said... Jonathan – I am afraid that you are misinterpreting the Hit f/x data. The Hit_Y values are the distance toward the pitcher from the front of the plate where contact was made. Almost all balls are hit BEFORE they reach home plate, not after. 23. Peter Jensen said... I also don't think that Sportvision has been able to resolve the point of contact very well. I mentioned this to Marv White at the recent summit. Since they don't have the full 9 parameters (they don't have enough information to estimate accelerations) on the hit ball it is almost impossible to extrapolate back toward the point of contact and get an accurate measurement. When I worked with the actual Pitch f/x-Hit f/x camera images last year I was not able to locate a precise point of contact until I estimated the balls landing location from the commercial video feed and calculated the acceleration values for the hit ball. Then I was able to resolve the pitch path with the hit ball path, usually to within 1/2" and .001 second in time. I did this for about 10 hit balls and every one was hit before it reached the front of the plate. 24. cody k said... Just want to thank Professor Nathan for representing the University well. Go Illini 25. Dave said... This analysis is getting very interesting. I suspect the key to how early a ball is struck is not the pitch type but the location. Offspeed pitches are more likely to be outside, so they must be struck later, otherwise the batter will miss or hit the pitch off the end of the bat. At the same time, the hitter has to get the bat around more quickly and hit the ball further in front to get around on an inside fastball.
Doing this same analysis but breaking the hitting zone into 9 quadrants might be very informative.
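The unit-consistency check mentioned in the hitf/x field rundown earlier in the thread (that the magnitude of hit_vx0, hit_vy0, hit_vz0 in ft/s should reproduce hit_initial_speed in mph) can be sketched in a few lines of Python. The velocity components below are made up for illustration, not real hitf/x data:

```python
import math

# Hypothetical velocity components at contact, in ft/s (not real hitf/x data)
hit_vx0, hit_vy0, hit_vz0 = 30.0, -140.0, 55.0

# Speed magnitude in ft/s, then convert to mph (1 mph = 5280 ft per 3600 s)
speed_fts = math.sqrt(hit_vx0**2 + hit_vy0**2 + hit_vz0**2)
speed_mph = speed_fts * 3600.0 / 5280.0

# speed_mph should match the reported hit_initial_speed for the same ball
print(round(speed_mph, 1))
```

If the rounded result disagrees noticeably with hit_initial_speed for a given row, one of the fields is suspect.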
Why does so much recent work involve K3 surfaces?

I've been noticing that a whole lot of papers published to the arXiv recently involve K3 surfaces. Can anyone give me (someone who, at this point, knows little more about K3 surfaces than their definition) an idea why they are coming up so often? Some questions that might be relevant: Are there particular reasons that they are so important? Are there special techniques that are available for K3 surfaces, but not more generally, making them easier to study? Are they just "in vogue" at the moment? Are they more like a subject of research (e.g., people are carrying out some sort of program to better understand K3 surfaces) or a testing ground (people with ideas in all sorts of different areas end up working the ideas out over K3 surfaces, because more general versions are much more difficult)?

ag.algebraic-geometry cv.complex-variables

J.C. Ottem's answer seems quite reasonable to me, although I might add to it that they are also hard enough to deal with compared to curves, rational varieties and abelian varieties to have stood as a challenge to leading algebraic geometers for the last 60 or more years. (Weil's 1958 coining of the term K3 surface references the K2 peak -- i.e., very hard to master.) But I would also say that the love affair with K3 surfaces is neither especially recent nor especially particular to K3 surfaces: algebraic geometers have been interested in lots of algebraic varieties for some time now... – Pete L. Clark Mar 23 '11 at 21:49

In my opinion, a better question would be something like: "What properties of K3 surfaces have people been especially interested in recently? What is known for K3 surfaces that is not known for (e.g.) other surfaces or for other Calabi-Yau varieties?" – Pete L.
Clark Mar 23 '11 at 21:51

For example, if you look at papers on the arxiv with "K3" in the title posted in 2011, you'll see that many of them involve moduli spaces of certain sheaves on K3s. These kinds of moduli spaces of sheaves are supposed to be related to or analogous to the moduli spaces of curves that are considered in, e.g., Gromov-Witten theory... (Very rough idea: consider ideal sheaves of dimension 1 subvarieties of your variety, rather than maps of curves into your variety.) – Kevin H. Lin Mar 24 '11 at 2:07

There are certain aspects in which one just can not beat K3 surfaces, in the sense that they have unique properties that simply don't hold for (almost) any other algebraic variety. One thing is that spaces of sheaves on $K3$ always have a holomorphic symplectic structure (this is due to dim(K3)=2, and the fact that K3 has a holomorphic volume form). – Dmitri Mar 24 '11 at 5:54

Because they're there? – Tom Goodwillie Mar 24 '11 at 22:05

Projective algebraic surfaces are classified first by their Kodaira dimension $k(X)$. Surfaces with $k(X) = -1$ have been much studied; they are either rational or ruled. Surfaces with $k(X) = 2$ are of general type. Surfaces with $k(X) = 0$ are of several types (abelian, K3, Enriques, or hyperelliptic). Notice the rough analogy with curves, where genus 0 ($k(X)=-1$) gives rational curves, genus 2 or greater ($k(X)=1$) gives curves of general type, and genus 1 ($k(X)=0$) gives elliptic curves. So surfaces with $k(X)=0$ provide a testing ground for surface theory similar to the testing ground for curves provided by elliptic curves. Among the $k(X)=0$ surfaces, certainly abelian surfaces have been the most studied. On the other hand, Enriques and hyperelliptic surfaces are rather special. That leaves K3 surfaces as the surfaces with $k(X)=0$ that do not have a group structure, yet exist in vast quantities.
(The moduli space of algebraic K3 surfaces consists of a countable union of 19 dimensional varieties.) So presumably for geometers, K3 surfaces are a challenge because they have no group structure, yet are much easier than surfaces of general type. As a number theorist, I look on K3 surfaces as providing a huge challenge to understand their arithmetic, e.g., the distribution of rational points, or the distribution of integral points on affine pieces. (Vojta's conjecture implies that the latter set is not Zariski dense, so this would be a great place to prove a piece of Vojta's conjecture that does not use an underlying group structure.) Another big conjecture (known in many cases) is that if a K3 surface $X$ is defined over a number field $K$, then there is a finite extension $L$ of $K$ such that $X(L)$ is Zariski dense in $X$. [I know I omitted the $k(X)=1$ surfaces. They are elliptic surfaces, so also extremely interesting from both a geometric and an arithmetic perspective. But not relevant to the question about K3 surfaces.]

@Joe: I'm pretty sure that where you say "18", you mean "19". – Pete L. Clark Mar 24 '11 at 0:25

@Pete: Thanks, you're right, I meant 19. I fixed my answer. The space of (not necessarily algebraic) K3s has dimension 20, and the algebraic ones form a countable union of 19 dimensional families. – Joe Silverman Mar 24 '11 at 17:25

From the viewpoint of classical algebraic geometry, the reason is simple: they are easy to deal with and many things can be computed on them (e.g., their moduli, Picard lattices, automorphism groups, etc). For example, complete linear systems on projective K3 surfaces are particularly easy to study using results of Saint-Donat and Nikulin. Moreover, many special K3s turn up naturally in other problems from algebraic geometry, e.g., as complete intersections (and this is not so much the case for other 'exotic' surfaces like Enriques surfaces).
This makes K3 surfaces a nice testing ground for results in algebraic geometry. Some examples:

-- Let $D$ be an effective divisor on a K3 surface. Then $|D|$ has no base-points outside its fixed components.

-- K3 surfaces have nice vanishing theorems: the vanishing of $h^2(X,D)=\dim H^2(X,O(D))$ for $D\neq 0$ effective is immediate by Serre duality: $h^2(X,D)=h^0(X,-D)=0$.

-- Let now $D$ be an effective nef divisor (so that $D.C\ge 0$ for every curve $C$). If $D^2>0$, then either $|D|$ is base-point free, in which case $h^1(X,D)=0$ and the generic member of $|D|$ is smooth and irreducible, or $|D|$ is not base-point free, and then we have $D\sim kE+\Gamma$, where $k\geq2$, $E$ is an elliptic curve, $\Gamma$ is a rational curve and $E.\Gamma=1$. If $D^2= 0$, then $|D|$ is composed with a pencil, i.e., $D=kE$, where $E$ is an elliptic curve and $h^1(X,D)=k-1$. In sum, if you have a divisor that is free from fixed components, then you can calculate the dimensions of all the cohomology groups using Riemann-Roch.

-- Moreover, Saint-Donat gives precise results on the degrees of the generators and relations of the section ring $A=\bigoplus_{n\ge 0}H^0(X,nD)$. This means that one can easily study concrete projective models of K3 surfaces.

-- There are also strong results by Kovács on the effective cone of a K3 surface.

Also worth mentioning: Deligne proved the Weil conjectures first for K3 surfaces (1972). – Xandi Tuni Mar 24 '11 at 12:48

The name is Saint-Donat. – Georges Elencwajg Mar 24 '11 at 18:42

Thanks! – J.C. Ottem Mar 24 '11 at 20:03

A famous instance of "K3 surfaces as proving ground" is: Deligne, Pierre. La conjecture de Weil pour les surfaces $K3$. (French) Invent. Math. 15 (1972), 206–226. Compare with: Deligne, Pierre. La conjecture de Weil. I. (French) Inst. Hautes Études Sci. Publ. Math. No. 43 (1974), 273–307.

K3 surfaces are also interesting from the point of view of complex dynamics.
To quote from Curtis T. McMullen's introduction to his paper "Dynamics on K3 surfaces: Salem numbers and Siegel disks", Journal für die Reine und Angewandte Mathematik 2002(545): 201–233:

"The first dynamically interesting automorphisms of compact complex manifolds arise on K3 surfaces. Indeed, automorphisms of curves are linear (genus 0 or 1) or of finite order (genus 2 or more). Similarly, automorphisms of most surfaces (including $\mathbb{P}^2$, surfaces of general type and ruled surfaces) are either linear, finite order or skew-products over automorphisms of curves. Only K3 surfaces, Enriques surfaces, complex tori and certain non-minimal rational surfaces admit automorphisms of positive topological entropy [Ca2]. The automorphisms of tori are linear, and the Enriques examples are double-covered by K3 examples."

In this paper McMullen gives examples of K3 surfaces admitting automorphisms with Siegel disks (i.e., domains on which the automorphism is conjugate to a rotation). There are countably many such surfaces, all of them non-projective. The citation [Ca2] is to the paper by S. Cantat, Dynamique des automorphismes des surfaces projectives complexes. CRAS Paris Sér. I Math. 328 (1999), 901–906, which grew into a larger work: S. Cantat, Dynamique des automorphismes des surfaces K3. Acta Math. 187:1 (2001), 1–57.

These papers are only a few examples (perhaps of landmark character); there has been a lot of study of dynamics on K3 surfaces going on indeed. They were important for arithmetic dynamics, too. The K3 surfaces with a pair of non-commuting involutions (the ones that are the intersection of a (1,1) form and a (2,2) form in $\mathbb{P}^2 \times \mathbb{P}^2$) were, I believe, the first varieties beyond abelian varieties where canonical heights were constructed and used to prove arithmetic results, such as the fact that the periodic points for $i_1\circ i_2$ are a set of bounded height.
– Joe Silverman Mar 3 at 1:18

In the mathematical physics community, there was recently a resurgence of interest in K3 surfaces due to the observation of Eguchi, Ooguri and Tachikawa that the elliptic genus of K3 surfaces seems to be built out of representations of the Mathieu group M24. This phenomenon has been dubbed the "Mathieu moonshine". There is still no definitive understanding of this relation, as far as I know.
Physics Forums - View Single Post - Constraint in Lagrangian Mechanics, explanation please? Ok, so in the example you have a marble on an inclined plane with generalized coordinates theta and x. You need the constraints to get the Lagrangian? I don't understand what the constraint is or how it allows you to get the Lagrangian and the equations of motion. Also, in this particular case would there be two equations of motion or one?
the first resource for mathematics

The author's abstract: "It is proved that
$$K(k_+) = \left[(4-\eta)^{1/2} - (1-\eta)^{1/2}\right] K(k_-),$$
where $\eta$ is a complex variable which lies in a certain region $\mathcal{R}_z$ of the $\eta$ plane, and $K(k_\mp)$ are complete elliptic integrals of the first kind with moduli $k_\mp$ which are given by
$$k_\mp^2 \equiv k_\mp^2(\eta) = \tfrac{1}{2} \mp \tfrac{1}{4}\eta(4-\eta)^{1/2} - \tfrac{1}{4}(2-\eta)(1-\eta)^{1/2}.$$
This basic result is then used to express the face-centred cubic and simple cubic lattice Green functions at the origin in terms of the square of a complete elliptic integral of the first kind. Several new identities involving the Heun function $F(a,b;\alpha,\beta,\gamma,\delta;\eta)$ are also derived. Next it is shown that the three cubic lattice Green functions all have parametric representations which involve the Green function for the two-dimensional honeycomb lattice. Finally, the results are applied to a variety of problems in lattice statistics. In particular, a new simplified formula for the generating function of staircase polygons on a four-dimensional hypercubic lattice is derived."

33E05 Elliptic functions and integrals
33E20 Functions defined by series and integrals
33E30 Functions coming from differential, difference and integral equations
60G50 Sums of independent random variables; random walks
Volume and Capacity for Numeracy Test Questions

You'd be forgiven for thinking that examiners had an obsession with putting things in boxes. Numeracy tests often ask two things about the capacity of boxes (in other words, how much they hold):

1. The volume of the box (roughly, how much liquid you could fit in it).

2. How many objects of a given size fit in it.

In numeracy tests, you usually measure volume and capacity in either millilitres (ml) or centimetres cubed (cm^3) – they're actually two different ways of saying the same size. You may also see litres (1,000ml) or metres cubed (m^3); a metre cubed works out to be the same as 1,000 litres or 1,000,000cm^3.

Working out the volume of a box is very straightforward:

1. Work out the three dimensions – the height, width and depth of the box – and make sure they're all in the same units (usually centimetres).

2. Multiply the height by the width, and the result by the depth.

That's your answer – if you measured everything in centimetres, your answer is in cm^3; if everything was in metres, your answer would be in m^3. It doesn't matter which order you multiply the three dimensions in (because the volume doesn't change if you rotate the box), so feel free to change the order if you can come up with an easy sum.

Working out how many boxes fit into a bigger box is a slightly trickier problem. You'll normally be given something like the figure, showing the dimensions of a big box and a smaller box – luckily, the smaller box will be facing in the direction you'll be packing it (so you don't need to worry about twisting it around). If you've ever moved house, you'll know that you can fit more cuboids – boxes or books – into a big box if you twist some of them around. As far as numeracy tests go, you can safely ignore this strategy – the question just asks about packing boxes in a boring fixed orientation. Here are the steps:

1. Work out how many small boxes fit along the front of the bigger box.
Divide the width of the big box by the width of the smaller box. If the number doesn't go exactly, you round down, no matter how close you are to the higher number. Write this number down.

2. Work out how many boxes fit along the side.

Divide the depth of the big box by the depth of the smaller box. Again, round down if it doesn't go exactly. Write this number down as well.

3. Work out how many boxes fit the height.

Divide the height of the bigger box by the height of the smaller box. Once more, round down if you need to, and write the number down.
{"url":"http://www.dummies.com/how-to/content/volume-and-capacity-for-numeracy-test-questions.html","timestamp":"2014-04-16T11:41:53Z","content_type":null,"content_length":"51995","record_id":"<urn:uuid:8c01f09d-2144-45b6-ad09-12ec186f1b19>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
The Perpendicular Distance Calculator - Documentation

Peter J. Ersts*, Ned Horning*, and Marco Polin**

*American Museum of Natural History, Center for Biodiversity and Conservation, Central Park West at 79th Street, New York, New York, 10024 USA
**Department of Physics and Center for Soft Matter Research, New York University, 4 Washington Place, New York, New York, 10003 USA

Introduction

Distance sampling is a suite of related methods commonly used for estimating the density and/or abundance of biological populations (Buckland et al. 2001). Of these methods, line transect distance sampling is the most broadly applied technique. The general concept behind line transect surveys, as summarized in Thomas et al. (2002), involves an observer or team of observers traversing a linear search path (i.e., transect or transect leg) looking for objects of interest. Objects of interest can be either a single animal or plant, or groups of animals or plants. For each object of interest detected, observers record the perpendicular distance from the object of interest or center of group to the transect leg. Perpendicular distances are used to model detection functions that estimate the proportion of objects present but undetected during the survey process. Detection functions are a pivotal component in calculating reliable estimates of population abundance and/or densities. Detailed, in-depth descriptions of distance sampling methods can be found in Buckland et al. (2001, 2004). Under ideal conditions, observers can directly measure perpendicular distance using a tape measure or more advanced equipment, such as a laser range finder. However, for perpendicular distances exceeding 1000 meters or surveys for highly mobile, inconspicuous or easily displaced animals, it may not be possible to directly measure perpendicular distance; in which case it is also necessary to evaluate if this sampling method is appropriate to begin with.
Alternatives to direct measurement involve the derivation of perpendicular distance from measurements taken at the time of initial detection. These measurements become inputs to standard trigonometric or geodesic equations. The most widely implemented method for deriving perpendicular distance is founded in planar geometry and simply involves solving for one side of a right-angled triangle. This planar method only requires two inputs: a radial distance, which we will call a detection distance, and an angle. The detection distance is the distance from the observers to the object of interest at the time of initial detection. The angle is traditionally defined as the angle between the transect leg and a line connecting the observers and the object of interest, or center of group, at the initial time of observation. While the planar method is intuitive and easy to implement, this method requires an angle to be measured relative to the transect and does not directly provide any geographically explicit information regarding the objects of interest. As GPS receivers become common pieces of equipment in researchers' inventories, other alternatives to the planar method for calculating perpendicular distance are possible. We describe a robust, alternative method for calculating perpendicular distance that implements spherical geometry. The following methods implement basic vector, geometric and trigonometric procedures, which form the basis for all geodetic (e.g., Bomford 1971), celestial (e.g., Green 1985, Danby 1988) and navigational (e.g., Bowditch 1995) calculations. Various implementations and discussions of the equations that follow can also be found on numerous Internet resources (e.g., Ed Williams's Aviation Formulary and The Math Forum at Drexel University). For this discussion, all angles and geographic coordinates are expressed in radians, unless otherwise noted. Several of the following geodesic equations require modulus and arctangent operations.
The modulus or MOD function returns the remainder when one number is divided by another. For example, 22 / 7 = 3 remainder 1; thus 22 MOD 7, or MOD(22, 7) as it may appear in a programming language or spreadsheet application, would return 1. Likewise, 2 MOD 7 would return 2 because 2 / 7 = 0 remainder 2. Many spreadsheet applications and programming languages have several arctangent functions. The arctangent operation implemented in the following methods, typically called atan2, uses the signs of the two input variables to determine the quadrant of the result. Additionally, to fully understand the methods that follow, a general understanding of basic vector mathematics is required.

Planar method

To calculate perpendicular distances (Δpd) using planar geometry, the observers are required to record a detection (radial) distance (D) and an angle (Θooi) relative to the transect leg. The perpendicular distance can simply be expressed as one side of a right-angled triangle:

Δpd = sin(Θooi) × D

The resulting perpendicular distance will be in the same units as the detection distance. While this planar approach for calculating perpendicular distance is simple and intuitive, this method requires that angles relative to the transect leg be recorded at the time of initial detection. A common method for determining the angle is through the use of an angle board. Angle boards are easy to implement and construct, but hastily or poorly constructed angle boards can be very imprecise. Without the use of specialized equipment, which may be prohibitively expensive or require careful calibration and large, stable working platforms, it can be difficult or even impossible under many field conditions for observers to accurately record the angle relative to the transect leg.
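As a quick sketch (an illustration only, not code from the Perpendicular Distance Calculator itself), the planar calculation is:

```python
import math

def planar_perpendicular_distance(detection_distance, angle_deg):
    """Planar method: perpendicular distance as one side of a right triangle.

    detection_distance: observer-to-object distance at initial detection
        (any units; the result is in the same units).
    angle_deg: angle, in degrees, between the transect leg and the line
        from the observers to the object of interest.
    """
    return math.sin(math.radians(angle_deg)) * detection_distance

# An object detected 500 m away, 30 degrees off the transect leg:
print(round(planar_perpendicular_distance(500.0, 30.0), 1))
```

Note that the result is only as good as the angle measurement, which is the weakness discussed above.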
Therefore, an angle board often provides an angle, and a subsequent perpendicular distance, relative to the orientation or midline of the survey platform rather than the transect leg itself. Thus, navigational error and movement of the survey platform (e.g., a boat on rough water) may cause the midline of the survey platform to deviate from the transect, impacting the accuracy and validity of the angle measurement.

Spherical approach

The spherical approach for deriving perpendicular distances is accomplished by determining the intersection of two great circles. A great circle is a circle that is produced by the intersection of the Earth's surface with a plane passing through the center of the Earth. The shortest distance between two points on the Earth's surface is an arc or segment of a great circle. The first great circle (G1) referenced by this method is the great circle passing through the start and end point of the transect leg. The second great circle (G2) intersects G1 at a 90 degree angle and passes through the geographic location of a sighting or object of interest. The perpendicular distance is the shortest arc length between the object of interest and the intersection of G1 and G2. In this method, the angle is just a compass bearing. Thus, the angle is neither relative to the transect leg nor to the survey platform, but instead represents the angle east of True North. The benefits of this spherical approach to calculating perpendicular distance are 1) the result is "geometrically" correct, 2) coordinate data are derived for each object of interest, and 3) the resulting perpendicular distance is not affected by navigational error or orientation of the survey platform.
This spherical approach assumes that 1) the Earth is a perfect sphere, 2) the observers or survey platform implement great circle navigation rather than rhumb line navigation (simply following a course of constant bearing), and 3) the observers correct their compass for magnetic declination and the magnetic signature of the survey platform. The geographic coordinates of the object of interest (Lat2, Lon2) are derived post survey from the geographic coordinates of the observers or survey platform at the time of initial detection (Lat1 and Lon1), the detection distance (D), and the bearing to the object of interest (θooi), using the following equations (stated here with longitudes measured positive east):

Lat2 = asin( sin(Lat1) cos(D / r) + cos(Lat1) sin(D / r) cos(θooi) )

Lon2 = Lon1 + atan2( sin(θooi) sin(D / r) cos(Lat1), cos(D / r) − sin(Lat1) sin(Lat2) )

where r is the radius of the spherical representation of the Earth. It is important to note that D and r must be in the same units of measurement, and that the calculation of Lon2 utilizes the derived latitude Lat2, not Lat1.

The initial step in calculating perpendicular distance using spherical geometry is to transform the geographic coordinate system into a three dimensional Cartesian coordinate system so that each geographic coordinate pair (i.e., latitude and longitude) is represented as a vector with three terms (i.e., x y z). For this transformation, 0 ≤ longitude ≤ 2π, and latitude is replaced by the colatitude (π/2 − latitude), where 0 ≤ colatitude ≤ π. This transformation is applied to the spherical coordinates for the start of the transect leg (Ts), the end of the transect leg (Te) and the derived location of the object of interest (Pooi). The second step is to cross normalize the vectors representing the start (Ts) and end (Te) of the transect leg, which produces a vector (T) that has unit length and is perpendicular to the great circle (G1) passing through the start and end of the transect leg.
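The coordinate derivation can be sketched in Python as follows. This is an illustration, not the application's own source, and it assumes longitudes measured positive east and bearings given in degrees east of True North:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS 84 equatorial radius, as quoted later in the text

def derive_object_position(lat1_deg, lon1_deg, bearing_deg, distance_m,
                           r=EARTH_RADIUS_M):
    """Great-circle destination point: position of the object of interest
    given the observer position, compass bearing and detection distance."""
    lat1 = math.radians(lat1_deg)
    lon1 = math.radians(lon1_deg)
    theta = math.radians(bearing_deg)
    d = distance_m / r  # angular distance (D and r in the same units)

    lat2 = math.asin(math.sin(lat1) * math.cos(d)
                     + math.cos(lat1) * math.sin(d) * math.cos(theta))
    # Note that the longitude calculation uses the derived latitude lat2.
    lon2 = lon1 + math.atan2(math.sin(theta) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

For example, a bearing of 0 (due north) changes only the latitude, while a bearing of 90 from a point on the Equator changes only the longitude.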
The next step is to calculate the dot product of T and Pooi, which equals the cosine of the angle θTOP between the vector T and the vector Pooi, measured at the center or origin of the Earth (O). The difference between θTOP and π/2 is the angle subtended by the segment of the great circle (G2) that passes through the object of interest (Pooi) perpendicular to the transect leg, and that represents our perpendicular distance. The length of such a great-arc segment is the perpendicular distance (δpd), obtained by multiplying that angle by r, the radius of the spherical representation of the Earth. The resulting perpendicular distance will be in the same units as r. Spherical computations are undoubtedly more involved. Therefore, we have developed and released the Perpendicular Distance Calculator, a platform independent, open-source implementation of the approach described above. This spherical approach for calculating perpendicular distance is an alternative method to the planar approach. The greatest advantage of the spherical approach over the planar approach is that the results are not affected by navigational error. An added benefit of this spherical approach is that the actual geographic localities for each object of interest or sighting can easily be derived. Having the coordinates of sightings creates possibilities for other types of spatial analysis and modeling opportunities (e.g., Hedley and Buckland 2004). Furthermore, perpendicular distances can be calculated directly from geographic coordinates (i.e., latitude and longitude) without the need for additional spatial software, which is often expensive and requires extensive specialized knowledge to properly use. Results obtained using this spherical method will only be as accurate and precise as the measurements used as inputs. High quality sighting compasses allow for readings in half degree increments and, if used properly, represent a significant improvement over handmade angle boards.
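Putting the steps above together, here is a compact, self-contained sketch under the stated perfect-sphere assumption (again an illustration, not the application's Java source):

```python
import math

def to_cartesian(lat_deg, lon_deg):
    """Unit vector from latitude/longitude, using the colatitude pi/2 - lat."""
    colat = math.pi / 2.0 - math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return (math.sin(colat) * math.cos(lon),
            math.sin(colat) * math.sin(lon),
            math.cos(colat))

def cross_normalized(a, b):
    """Cross product of a and b, scaled to unit length."""
    c = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    n = math.sqrt(c[0]**2 + c[1]**2 + c[2]**2)
    return (c[0]/n, c[1]/n, c[2]/n)

def spherical_perpendicular_distance(ts, te, p, r=6378137.0):
    """Perpendicular distance (same units as r) from point p to the great
    circle through transect start ts and end te; points are (lat, lon) deg."""
    T = cross_normalized(to_cartesian(*ts), to_cartesian(*te))  # normal to G1
    P = to_cartesian(*p)
    dot = T[0]*P[0] + T[1]*P[1] + T[2]*P[2]
    theta_top = math.acos(max(-1.0, min(1.0, dot)))  # clamp for float safety
    return abs(math.pi / 2.0 - theta_top) * r
```

As a sanity check, a point half a degree of latitude off an equatorial transect is half a degree of arc from it, and a point on the transect itself gives essentially zero.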
Detection distances that are estimated by eye will be far less accurate than distances estimated through other means, and imprecise distance estimates probably represent the single greatest source of error in distance sampling. In general, inaccurate or imprecise inputs will affect the spherical approach as much as any other method. Furthermore, accuracy of results can also vary with the GPS receiver and the numerical precision used throughout the calculation. For example, assuming that the radius of the Earth is 6378137 meters (WGS 84 definition), one meter at the Equator is equivalent to 0.000008983 degrees. Thus, it is important to maintain as many significant digits as possible during data recording and subsequent calculations. The assumption that the Earth is a perfect sphere is inaccurate, but the violation of this assumption does not introduce significant adverse effects or biases to the resulting perpendicular distances, due to the length of the distances being calculated and traversed relative to the radius of the Earth; poor detection distance estimates are a far greater source of error. The assumption that observers implement great circle navigation rather than rhumb line navigation requires special attention. As an observer moves along a great circle, the compass bearing to their destination is constantly changing, which makes precise navigation along great circles over long distances by compass impractical. The disparity between the rhumb line and great circle distance becomes more significant as one approaches the poles. Accurate and precise great circle navigation is only possible with the aid of an electronic navigation device. However, with the increased accessibility and popularity of GPS receivers, researchers wishing to implement this spherical methodology will not be limited by the requirement to implement great circle navigation along transect legs.

Literature cited

Bomford, G. 1971. Geodesy. Oxford University Press, Ely House, London, UK.

Bowditch, N.
1995. The American Practical Navigator: An Epitome of Navigation, 1995 edition. Defense Mapping Agency Hydrographic/Topographic Center, Bethesda, MD, USA.

Buckland, S. T., D. R. Anderson, K. P. Burnham, J. L. Laake, D. L. Borchers, and L. Thomas. 2001. Introduction to Distance Sampling. Oxford University Press, Oxford, UK.

Buckland, S. T., D. R. Anderson, K. P. Burnham, J. L. Laake, D. L. Borchers, and L. Thomas, eds. 2004. Advanced Distance Sampling. Oxford University Press, Oxford, UK.

Danby, J. M. 1988. Fundamentals of Celestial Mechanics, 2nd ed., rev. Willmann-Bell, Richmond, USA.

Drexel University. 2007 Jan 10. The Math Forum @ Drexel. Accessed 10 Jan 2007.

Green, R. M. 1985. Spherical Astronomy. Cambridge University Press, New York, USA.

Hedley, S. L., and S. T. Buckland. 2004. Spatial models for line transect sampling. Journal of Agricultural, Biological and Environmental Statistics 9, 181-199.

Thomas, L., S. T. Buckland, K. P. Burnham, D. R. Anderson, J. L. Laake, D. L. Borchers, and S. Strindberg. 2002. Distance Sampling. Pages 544-552 in A. H. El-Shaarawi and W. W. Piegorsch, eds. Encyclopedia of Environmetrics. John Wiley & Sons, Chichester, UK.

Williams, E. 2007 Jan 10. Aviation Formulary V1.43. Accessed on 10 Jan 2007.

We would like to thank Eleanor Sterling for her unwavering support and dedication to the study and conservation of biodiversity. Liz Nichols, Eugenia Naro-Maciel, Kevin Koy, Matthew Leslie, Samantha Strindberg, and an anonymous reviewer provided valuable comments and suggestions that strengthened the description of this method. We would also like to thank FFEM, Collectivite Departementale de Mayotte, Observatoire des Mammiferes Marins, Office National de la Chasse et de la Faune Sauvage, Direction de l'Agriculture et de la Foret, l'Association MEGAPTERA, Howard Rosenbaum, and the Captain and crew of the Turquoise for making the survey, for which this application was developed, possible.
This work was partially funded by NASA under award No. NNG05G041G and by award No. NA055SEC46391002 from the National Oceanic and Atmospheric Administration, U.S. Department of Commerce.

Using the Perpendicular Distance Calculator

On most systems, you should be able to activate the application by double clicking on the JAR file. The application can also be started by opening a terminal window or command prompt and entering java -jar PerpendicularDistanceCalculator_v1.2.2.jar to manually launch the application. PerpendicularDistanceCalculator_v1.2.2.jar requires JRE version 1.5.x or higher. If the application does not load, check your version of Java by opening a command prompt or terminal window and typing java -version. You can also just download the newest version of Java from http://java.com

Calculating Perpendicular Distances

Manually calculating perpendicular distances using spherical methods is time consuming and prone to errors. The Perpendicular Distance Calculator greatly simplifies this process with its intuitive, easy to use interface. To calculate perpendicular distance using spherical methods you will need the following data:

• geographic coordinates for the start and the end of each transect or transect leg
• geographic coordinates of the observer or survey platform at the time of initial detection
• the compass bearing to the observation or object of interest at the time of initial detection
• the detection distance

All fields in the Perpendicular Distance Calculator main window must be filled in. Fields in the upper half of the input area pertain to the transect or transect leg. Fields in the lower portion of the input area pertain to data collected at the time of the initial detection. Once all fields are filled in, the calculation can be completed by pressing the Process OBS button. Results will appear as rows of the table in the output area.
The calculation returns the following data:
• Leg ID: This is the same value entered into the Transect Leg ID field found in the input area
• Current Bearing: The current bearing from the observer or observation platform to the end of the transect
• Sighting ID: This is the same value entered into the Sighting ID field found in the input area
• Sighting Latitude: This is the latitude of the sighting or object of interest derived from the geographic coordinates of the observer, detection distance, and compass bearing recorded at the time of initial detection
• Sighting Longitude: This is the longitude of the sighting or object of interest derived from the geographic coordinates of the observer, detection distance, and compass bearing recorded at the time of initial detection
• Perpendicular Distance: This value represents the great circle distance of the arc, perpendicular to the great circle represented by the start and end of the transect or transect leg, passing through the derived geographic coordinates of the sighting
You can clear all rows in the output area by clicking the Clear All button. You can delete the last row by clicking the Clear Last Row button. Alternatively, you can delete an interior row by selecting it with your mouse and clicking the Clear Selected Row button. Results can be exported to a tab-delimited file by simply clicking the Export button. A file dialog will open and ask you for a filename.

Examining Navigational Accuracy
You can check the navigational accuracy of the observer or survey platform by entering the geographic coordinates of the observer or survey platform at the time of the initial detection, the geographic coordinates for the start and end of the corresponding transect or transect leg, and simply 0 for the Detection Distance and Bearing to Observation.
Clicking the Process OBS button will calculate the perpendicular distance between the observer or survey platform and the transect, thus indicating the level of navigational accuracy and potential bias associated with the corresponding transect or transect leg.

Calculating Great Circle Distances
A great circle is a circle that is produced by the intersection of the Earth's surface with a plane passing through the center of the Earth. The shortest distance between two points on the Earth's surface is an arc, which is simply a segment of the great circle passing through said points. The Perpendicular Distance Calculator provides you with the ability to calculate the distance between two geographic locations. This calculation can be activated by selecting Calculations -> Great Circle Distance from the menu in the main window. It simply requires the geographic coordinates, in decimal degrees, for two points on the Earth's surface. After selecting your output units and spheroid, press the Calculate button to compute the results. This calculation will return not only the distance between your two points but also the initial compass bearing from Position 1 to Position 2. Remember, as you travel along a great circle, your compass bearing to your destination is always changing.

Calculating Intermediate Great Circle Points
In certain circumstances, you may wish to divide a great circle into even intervals. The Intermediate Great Circle Points feature will do just that. You can activate this feature by selecting Calculations -> Intermediate Great Circle Point from the menu in the main window. In addition to the geographic coordinates for two points on the Earth's surface, the Intermediate Great Circle Points calculation requires you to enter the percentage of the distance along the great circle, connecting Position 1 and Position 2, at which Position 3 lies. After selecting your output units and spheroid, press the Calculate button to compute the results.
This calculation will return the great circle distance between Position 1 and Position 2, the great circle distance between Position 1 and Position 3, and the longitude and latitude for Position 3. If you wish to cite the background or documentation we suggest the following format: Ersts, P.J., Horning, N., and M. Polin [Internet] Perpendicular Distance Calculator (version 1.2.2) Documentation. American Museum of Natural History, Center for Biodiversity and Conservation. Available from http://biodiversityinformatics.amnh.org/open_source/pdc. Accessed on . If you use the application on data that results in a publication, report, or online analysis, we ask that you include the following reference: Ersts, P.J. [Internet] Perpendicular Distance Calculator (version 1.2.2). American Museum of Natural History, Center for Biodiversity and Conservation. Available from http://biodiversityinformatics.amnh.org/open_source/pdc. Accessed on .
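For readers who want to see the spherical relationships the calculator is built on, here is an illustrative Python sketch of the standard great-circle formulas (this is not the application's source code — the calculator itself ships as a Java JAR — and it assumes a spherical Earth with a mean radius of 6371 km rather than a selectable spheroid):

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius; the application lets you choose a spheroid

def central_angle(lat1, lon1, lat2, lon2):
    """Great-circle central angle (radians) between two points (haversine form)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * math.asin(math.sqrt(a))

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (radians) from point 1 toward point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    y = math.sin(dlam) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlam)
    return math.atan2(y, x)

def cross_track_km(lat_a, lon_a, lat_b, lon_b, lat_p, lon_p):
    """Signed perpendicular (cross-track) distance of sighting P from the
    great circle through transect start A and end B."""
    d_ap = central_angle(lat_a, lon_a, lat_p, lon_p)
    theta_ap = initial_bearing(lat_a, lon_a, lat_p, lon_p)
    theta_ab = initial_bearing(lat_a, lon_a, lat_b, lon_b)
    return math.asin(math.sin(d_ap) * math.sin(theta_ap - theta_ab)) * R_EARTH_KM

# A transect along the equator and a sighting at 1 degree N, 5 degrees E:
print(abs(cross_track_km(0, 0, 0, 10, 1, 5)))  # ~111.2 km (one degree of latitude)
```

For a transect running along the equator, the perpendicular distance of a sighting is exactly its latitude expressed as arc length, which is a convenient sanity check on the formulas.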
Get ball path based on angle and start/end point 01-18-2012, 07:09 AM #1 Senior Member Join Date Nov 2004 If I know the start point and the end point of an object that has been 'launched', for instance a ball has been kicked in the air, and I know the angle with which it has been launched, would it then be possible to (roughly) describe the path that it followed (and the maximum height) and the velocity based on these values? Ultimately I'd want to reproduce that path by tweening a line, so that the user can see how he kicked the ball... Any tips and ideas and code examples are very welcome. Actually I just want to know the maximum height that the ball has reached. So the values I know are: start point, end point, angle of launch. Is that possible if you don't know the initial velocity? should be possible, let me try to solve it right here... assuming you start at x=0, and land at x=r, given launch angle a, you have: vy/vx = tan a, y = vy * t - g * t * t, x = vx * t. put 3rd into 2nd: y = x tan a - x^2 * (g / vx^2) the height would be y when x=r/2: h = (r/2) tan a - (r/2)^2 * (g / vx^2) now to find (g / vx^2) remember that y(x=r)=0: 0 = r tan a - r^2 * (g / vx^2) (g / vx^2) = (r tan a) / (r^2) = (tan a) / r so, finally, h = (r/2) tan a - (r/2)^2 * (tan a) / r = (r/2 - r/4) tan a = (r/4) * tan a Looks like you might have meant y = vy * t - 0.5 * g * t * t for that second equation, although you must have corrected it somewhere along the line because the final answer is correct: max height = tan(angle) * range / 4 yes, there should be 0.5 - my mistake but since the final answer does not include g, it does not matter if you use g or 0.5g or 25g That's right - it's interesting because on first glance at that equation, it looks like the maximum height of a projectile is independent of gravity .... but of course it isn't, because the range is in turn dependent on gravity.
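The thread's closed-form answer, h = (r/4) tan a, is easy to sanity-check numerically. The sketch below (a Python illustration, not the ActionScript the original poster would have used) plugs the standard drag-free kinematics formulas for peak height and range into that ratio and confirms it is independent of both launch speed and gravity:

```python
import math

def peak_and_range(v0, angle_deg, g=9.81):
    """Peak height and horizontal range of an ideal (drag-free) projectile."""
    a = math.radians(angle_deg)
    h = (v0 * math.sin(a)) ** 2 / (2.0 * g)  # maximum height
    r = v0 ** 2 * math.sin(2.0 * a) / g      # horizontal range
    return h, r

# The ratio h / (r * tan(a)) is 1/4 regardless of the speed or gravity used:
for v0, angle, g in [(20.0, 35.0, 9.81), (33.0, 60.0, 1.62)]:
    h, r = peak_and_range(v0, angle, g)
    print(h / (r * math.tan(math.radians(angle))))  # ~0.25 each time
```

Trying wildly different values of v0 and g (the second case uses lunar gravity) makes the thread's closing observation concrete: gravity cancels out of the ratio because the range itself depends on gravity.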
Theorema: Towards computer-aided mathematical theory exploration Buchberger, Bruno and Craciun, Adrian and Jebelean, Tudor and Kovacs, Laura and Kutsia, Temur and Nakagawa, Koji and Piroi, Florina and Popov, Nikolaj and Robu, Judit and Rosenkranz, Markus and Windsteiger, Wolfgang (2006) Theorema: Towards computer-aided mathematical theory exploration. Journal of Applied Logic, 4 (4). pp. 470-504. ISSN 1570-8683. (The full text of this publication is not available from this repository) Theorema is a project that aims at supporting the entire process of mathematical theory exploration within one coherent logic and software system. This survey paper illustrates the style of Theorema-supported mathematical theory exploration by a case study (the automated synthesis of an algorithm for the construction of Groebner Bases) and gives an overview of some reasoners and organizational tools for theory exploration developed in the Theorema project.
Kittredge Prealgebra Tutor Find a Kittredge Prealgebra Tutor ...I love working with kids of any school age as well as adults, helping them understand math and helping them find joy in it! I am very patient and work from where the student is, solidifying existing understanding, clarifying misconceptions, and building from there. I have a Bachelor's degree in Civil Engineering with a minor in mathematics and a Master's degree in Education. 14 Subjects: including prealgebra, reading, geometry, algebra 1 ...A very important part of a successful career is being able to communicate ideas and/or research results successfully to a broad audience. One of the most effective ways to do this is through the use of a PowerPoint presentation. I have been using PowerPoint consistently since my freshman year of high school. 18 Subjects: including prealgebra, calculus, physics, GRE ...Then, each session will start with the 'flipped model'. This means that I will have the student watch a 5-10 minute YouTube video on the topic and try a few problems as homework. Then, when we have our session, I will work with the student to pinpoint where they are doing things incorrectly. 41 Subjects: including prealgebra, reading, Spanish, English ...I completed medical school in Urumqi, China at Xinjiang Medical University in 2011. After conducting surgery at Kashgar No.1 hospital two years, I accepted a position at Hokkaido University, Japan, as a visiting scholar. Then I came to the USA to study for PhD. 3 Subjects: including prealgebra, English, algebra 1 I have over ten years of experience teaching and tutoring at the high school and college levels. I received my bachelor's degree in Physics from Lewis and Clark College in Portland, Oregon, and my master's degree in Physics from the University of Utah. Subjects I have taught include the following:... 11 Subjects: including prealgebra, physics, calculus, geometry
Lucas Numbers and Generating Functions Here is my problem and my attempt at the answer. Any help or advice is highly appreciated. With the famous sequence of Lucas numbers 1, 3, 4, 7, 11, 18... (Where each number is the sum of the last two and the first two are defined as 1 and 3.) use generating functions to find an explicit formula for the Lucas function. Attempted Solution We have where F[j] denotes the j^th Fibonacci number and n is going to infinity. Then we add that to Where F[-1] = -1 and F[0] = 0 And that should get us a function of Lucas numbers right?
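For reference while working through the generating-function algebra, the explicit formula the problem is driving at is the Binet-style identity L(n) = phi**n + psi**n, with phi = (1+sqrt(5))/2 and psi = (1-sqrt(5))/2; in the thread's indexing, L(1) = 1 and L(2) = 3. The Python sketch below is only a numerical cross-check of that closed form against the recurrence, not a derivation of it:

```python
from math import sqrt

def lucas_recursive(n):
    """Lucas numbers in the thread's indexing: L(1) = 1, L(2) = 3."""
    a, b = 1, 3
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, a + b
    return b

def lucas_closed_form(n):
    """Candidate explicit formula: L(n) = phi**n + psi**n."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round(phi ** n + psi ** n)

print([lucas_recursive(n) for n in range(1, 7)])    # [1, 3, 4, 7, 11, 18]
print([lucas_closed_form(n) for n in range(1, 7)])  # the same list
```

Since |psi| < 1, the psi**n term dies off quickly, which is why rounding phi**n alone already gives the right integer for large n.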
The rank-nullity theorem in linear algebra says that dimensions either get • thrown in the trash • or show up after the mapping. By "the trash" I mean the origin—that black hole of linear algebra, the /dev/null, the ultimate crisscross paper shredder, the ashpile, the wormhole to void and cancelled oblivion; that country from whose bourn no traveller ever returns. The way I think about rank-nullity is this. I start out with all my dimensions lined up—separated, independent, not touching each other, not mixing with each other. ||||||||||||| like columns in an Excel table. I can think of the dimensions as separable, countable entities like this whenever it's possible to rejigger the basis to make the dimensions linearly independent. I prefer to always think about the linear stuff in its preferably jiggered state and treat how to do that as a separate issue. So you've got your 172 row × 81 column matrix mapping 172→ separate dimensions into →81 dimensions. I'll also forget about the fact that some of the resultant →81 dimensions might end up as linear combinations of the input dimensions. Just pretend that each input dimension is getting its own linear λ stretch. Now linear just means multiplication. Linear stretches λ affect the entire dimension the same. They turn a list like [1 2 3 4 5] into [3 6 9 12 15] (λ=3). It couldn't be into [10 20 30 −42856712 50] (λ=10, except not the same everywhere). Also remember – everything has to stay centred on 0. (That's why you always know there will be a zero subspace.) This is linear, not affine. Things stay in place and basically just stretch (or shrink). So if my entire 18th input dimension [… −2 −1 0 1 2 3 4 5 …] has to get transformed the same, to [… −2λ −λ 0 λ 2λ 3λ 4λ 5λ …], then linearity has simplified this large thing full of possibility and data, into something so simple I can basically treat it as a stick |.
If that's the case—if I can't put dimensions together but just have to λ stretch them or nothing, and if what happens to an element of the dimension happens to everybody in that dimension exactly equally—then of course I can't stick all the 172→ input dimensions into the →81 dimension output space. 172−81 of them have to go in the trash. (Effectively, λ=0 on those inputs.) So then the rank-nullity theorem, at least in the linear context, has turned the huge concept of dimension (try to picture 11-D space again, would you mind?) into something as simple as counting to 11.
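The bookkeeping here (rank + nullity = number of input dimensions) can be checked mechanically. Below is a small Python sketch, pure standard library for portability (NumPy's linalg.matrix_rank would do the same job), that computes the rank of a 3×5 matrix by Gaussian elimination and counts how many input dimensions land in the trash:

```python
def matrix_rank(rows, tol=1e-10):
    """Rank via plain Gaussian elimination (floats; fine for small matrices)."""
    m = [list(map(float, r)) for r in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank, pivot_row = 0, 0
    for col in range(n_cols):
        # find a row at or below pivot_row with a usable pivot in this column
        pivot = next((r for r in range(pivot_row, n_rows) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        for r in range(pivot_row + 1, n_rows):
            f = m[r][col] / m[pivot_row][col]
            for c in range(col, n_cols):
                m[r][c] -= f * m[pivot_row][c]
        pivot_row += 1
        rank += 1
    return rank

# A maps 5 input dimensions into 3 output dimensions; row 2 is twice row 1.
A = [[1, 2, 3, 4, 5],
     [2, 4, 6, 8, 10],
     [1, 1, 1, 1, 1]]
rank = matrix_rank(A)
nullity = len(A[0]) - rank
print(rank, nullity)  # 2 3 — rank + nullity = 5 input dimensions
```

Here two of the three rows are dependent, so the rank is 2 and three of the five input dimensions get λ=0 and go to the origin.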
Min/Max Calculus Problems January 9th 2007, 03:44 PM #1 Jan 2006 Hi, I have two problems of bounded areas which I just don't even know how to start. If you could give me some pointers on how to do bounded area problems in general as well, that would be great. A rectangle is bounded by the x-axis and the semicircle y = sqrt(25 - x^2). What length and width should the rectangle have so that its area is a maximum? A rectangular package to be sent by a postal service can have a maximum combined length and girth of 108 inches. Find the dimensions of the package of maximum volume that can be sent. Thanks a lot. Hi, I have two problems of bounded areas which I just don't even know how to start. If you could give me some pointers on how to do bounded area problems in general as well, that would be great. A rectangle is bounded by the x-axis and the semicircle y = sqrt(25 - x^2). What length and width should the rectangle have so that its area is a maximum? Maximum in area? (The problem probably assumes symmetry in the region.) Let the rectangle extend $x$ units on either side of the origin. That means the length is $2x$ and the height up to the semicircle is $f(x)=\sqrt{25-x^2}$. Thus, the area is $A(x)=2x\sqrt{25-x^2}$. Setting the derivative equal to zero, $A'(x)=2\sqrt{25-x^2}+(2x)\left( -\frac{x}{\sqrt{25-x^2}} \right)=0$ Multiply through by the denominator: $2(25-x^2)-2x^2=0$, so $x=\pm \frac{\sqrt{50}}{2}$ Note the plus-minus does not matter; all it says is that we can start drawing the rectangle from either side. January 9th 2007, 04:34 PM #2 Global Moderator Nov 2005 New York City
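The critical point x = sqrt(50)/2 = 5/sqrt(2) found above can be confirmed with a brute-force numerical check. The short Python sketch below grid-searches the area function A(x) = 2x·sqrt(25 − x²) on [0, 5]; the winner agrees with the calculus, and the maximal area comes out to 25 (a rectangle 5·sqrt(2) wide by 5/sqrt(2) tall):

```python
import math

def area(x):
    """Area of the inscribed rectangle with half-width x, for 0 <= x <= 5."""
    return 2.0 * x * math.sqrt(25.0 - x * x)

# Brute-force grid search as a numerical sanity check on the calculus:
best_x = max((i * 1e-4 for i in range(50001)), key=area)
print(best_x, area(best_x))  # best_x ~3.5355 (= 5/sqrt(2)), area ~25
```

The same pattern (write the objective, search a grid, compare with the derivative's root) is a useful habit for checking any single-variable max/min problem, including the postal-package one.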
[FOM] Extending Set Theory with Indiscernibles Ali Enayat enayat at american.edu Wed Mar 2 13:50:00 EST 2005 Taranovsky's interesting recent note [Feb 27, 2005] on extensions of set theory has inspired me to briefly discuss some recent results that characterize the rather surprising strength of a natural system of set theory with a class of indiscernibles, here dubbed ZFCI. ZFCI is a theory formulated in the language {epsilon, I(x), <}, where I(x) is a unary predicate [to distinguish the indiscernibles] and < is a binary relation [for a global well-ordering]. Intuitively speaking, ZFCI is an extension of ZFC that strongly negates Leibniz's dictum on the identity of indiscernibles by asserting "there is a proper class of indiscernibles". The axioms of ZFCI are as follows: 0. The axioms of ZFC; 1. the sentence expressing "I is a proper class of ordinals"; 2. a schema expressing that < is a global well-ordering; 3. the Replacement scheme for formulae using I and <; and 4. the Indiscernibility scheme, which is a scheme asserting that (I,<) is a class of order indiscernibles [in the usual sense of model theory] for formulae in the language {epsilon, <}. It turns out that ZFCI goes well beyond ZFC since it proves the existence of n-Mahlo cardinals for each concrete natural number n. However, if consistent, it will not prove the statement "for each natural number n, there is an n-Mahlo cardinal". Indeed, one can precisely describe the first order consequences of ZFCI in the usual language of set theory {epsilon}. In order to do so, let PHI be the set of sentences of the following form (where n is a concrete natural number): "there is an n-Mahlo cardinal kappa such that V(kappa) is a SIGMA_n elementary submodel of the universe". It is not hard to see that ZFC + PHI is equiconsistent with ZFC plus axioms of the form "there is an n-Mahlo cardinal" (again, where n is a concrete natural number). The former theory of course proves the latter theory, but not vice versa. Here is the first main result: Theorem A. For any sentence S in the usual language of set theory {epsilon}, the following two conditions are equivalent: (i) ZFCI proves S; (ii) ZFC + PHI proves S. It is worth pointing out that the situation for Peano arithmetic (or equivalently: the theory of finite sets) is quite different, i.e., if one formulates an analogous theory, PAI, extending PA, then the formalized version of Ramsey's theorem can be used to show that PAI proves precisely the same arithmetical sentences as PA itself. This adds more plausibility to the theory ZFCI since it shows that the indiscernibles allow ZFC to "catch up" with PA. One may also wish to *iterate* the idea of adding indiscernibles by adding countably many new unary predicates I_n(x) for each natural number n in order to formulate a theory ZFCI# extending ZFCI by adding axioms asserting that I_(n+1) is a proper class of indiscernibles for formulae in the language {epsilon, <} augmented with I_1, ..., I_n. As it turns out, this will not buy us any new theorem of set theory that ZFCI could not prove already, i.e., Theorem A can be improved to the following result, which connects ZFCI and ZFCI# to other systems of set theory: Theorem B. For any sentence S in the usual language of set theory {epsilon}, the following five conditions are equivalent: (i) ZFCI# proves S; (ii) ZFCI proves S; (iii) GBC + "the class of ordinals is weakly compact" proves S; (iv) NFUA proves "S holds in CZ"; (v) ZFC + PHI proves S. Here GBC is the Godel-Bernays theory of classes; NFUA is the extension of the Quine-Jensen system of set theory NFU with a universal set, obtained by adding the axioms of Choice, Infinity, and the axiom expressing "every Cantorian set is strongly Cantorian"; and CZ is the canonical model of ZFC that can be *interpreted* in models of NFUA. The equivalence of (i), (ii), and (v) will soon be available, but the equivalence of (iii), (iv), and (v) [which was inspired by the work of Solovay and Holmes] appears in my paper: Automorphisms, Mahlo Cardinals, and NFU, Nonstandard Models of Arithmetic and Set Theory, Contemporary Mathematics (Enayat and Kossak, ed.), volume 361, American Mathematical Society, 2004. Also available at: http://academic2.american.edu/~enayat/Aut.pdf Best regards, Ali Enayat
From Hanlon Financial Systems Lab Web Encyclopedia What is R? Where to obtain it? R is a software package, one of the most versatile and useful in existence. In principal it is thought as a statistical software, however its versatility makes it useful for virtually any problem that requires use of software for its analysis. Differences and similarities with other statistical software There is one important difference between R and any other professional software. R IS FREE. Normally, when one uses free versions one expects to use a stripped down, no good version of a commercial software. The reasoning in the commercial world we live in usually is: if it is valuable – it should cost money. R is an exception. It actually is more powerful that its commercial predecessor (S and Splus) and it is absolutely free. Even more, one can obtain support for it by subscribing to R-help mailing list, https://stat.ethz.ch/mailman/listinfo/r-help, which has over 200 posts every day. Amazingly, this free support was by far my best experience with support of any software I have ever used. If the question is interesting you should expect an answer very quickly. Careful what you post though, the people who maintain and answer questions are well known statistics professors and take very harshly at silly questions which could be easily answered by reading the introductory R vs Matlab In spirit R is similar with Matlab through the uses of objects such as vectors and matrices rather than one dimensional variables. It is superior to Matlab when dealing with specific statistical analysis such as we see in the current course. However, linear algebra (matrix decomposition and numerical analysis of PDE is debatably faster with Matlab). R vs Splus Commercial software similar with R. Very expensive. Predecessor to R. Going into point and click direction to try and differentiate itself from R. Not advised. R vs SAS SAS is a commercial version used for statistical analysis. 
It is a strong software that essentially is different in approach and analysis from R. It is complementing R. Better and much faster at dealing with large datasets (millions of observations). Harder to program nested algorithms than R. Spreadsheet software EXCEL, SPSS, Minitab etc. These are all point and click programs. Not recommended for serious analysis. They can work with small data sets but crash quickly. They were designed to understand statistical concepts in undergraduate courses as well as manipulate data in a simple way. Symbolic software: Mathematica, Maple etc. These programs serve a different purpose than R. Mathematicians use these programs to essentially deal with non-random problems. Since statistics is by definition studying randomness they are not R with C, C++, JAVA These are low level languages. Any program you write in any of the software mentioned thus far you should be able to program using low level language. The development of the program will take much longer, thus they are not suitable for use in a class such as the one presented. They, however have no equal when the speed of execution is an issue. Conclusion: R pretty much has no equal. A good statistician should know how to use R, SAS and a low level language preferably C. Obtaining R. Useful tools For a detailed description please see the file “R Installation and Administration” on the course web site or on the R web site directly(see Figure above). You can find a wealth of information about R on the [Manuals] page and on the [Wiki page for R]. R editor There are lots of editors(free or not) designed for R, such as Tinn-R, RStudio, SourceEdit and so on. One can choose the editor that he/her prefers. Online Tutorial You can easily find more details about R code, R packages, and R examples on r-project.org.
{"url":"http://web.stevens.edu/hfslwiki/index.php?title=R","timestamp":"2014-04-20T02:10:26Z","content_type":null,"content_length":"23449","record_id":"<urn:uuid:2483bb4f-439f-40eb-b6d9-fa426743a528>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Can a harmonic number be a rational number for non-integer rational argument? up vote 11 down vote favorite Define harmonic numbers for a complex argument $z$ as $H_z=\frac{\Gamma'(z+1)}{\Gamma(z+1)}-\Gamma'(1)$. For $n\in\mathbb{N}$, $H_n$ are usual harmonic numbers $\sum^n_{k=1} k^{-1}$ . They are obviously rational and are known (Taeisinger 1915) to be non-integers for $n>1$. Question: Is there a non-integer rational $q$ such that $H_q\in\mathbb{Q}$? nt.number-theory special-functions gamma-function rational-points add comment 1 Answer active oldest votes The answer is "no". Your function $H_z$ which is the same as $\psi(z+1)+\gamma$, where $\psi$ is the digamma function, and $\gamma$ is the Euler-Mascheroni constant, takes transcendental values at non-integer rationals. This is a theorem of M. Ram Murty and N. Saradha, "Transcendental values of the digamma function". up vote 10 down vote accepted Notice that at rational values the digamma function has an explicit evaluation given by Gauss's formula. Oops! Thanks Vladimir! – Gjergji Zaimi May 7 '13 at 2:07 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory special-functions gamma-function rational-points or ask your own question.
{"url":"http://mathoverflow.net/questions/129914/can-a-harmonic-number-be-a-rational-number-for-non-integer-rational-argument","timestamp":"2014-04-19T22:28:58Z","content_type":null,"content_length":"54111","record_id":"<urn:uuid:ff12bf79-632c-4308-bf3e-2ba3f1aff487>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming, Probability, and the Modern Mathematics Classroom Consider the fact that you are reading this blog on some device, be it a computer, tablet, mobile phone, phablet, etc. Every aspect of what is being displayed is a result of some form of programming. Programming is everywhere, be it the CSS to handle the layout, or the HTML to render the text, or the protocols to transmit the information electronically from some server to your viewing device, or or or or … As such, the sooner students get exposure to the various aspects of programming, the better equipped they will be for the fast changing world that lies ahead. There are a lot of ways to introduce programming into the classroom and this has been an ongoing effort. Nowadays there are so many programming languages that it seems like a daunting task to even start. To rattle off some current active programming languages in no specific order and with no intention of being exhaustive we have C/C++, Java, Python, Ruby, Lisp, Visual Basic, Haskell, etc. So where to start? The answer is, anywhere! There’s programming for developing graphical user interfaces (GUIs), video game programming, database programming, programming for mobile applications, numerical programming, etc. If you don’t start, then you won’t start! So just start! As such, this series will be focused on ways to introduce quantitative / numerical programming into the math classroom by considering problems that can be solved via probabilistic methods (Monte Carlo Methods). One of the things that this series will aim (hope) to do is get away from the “boring” problems of writing a computer program to, say, find the area of a circle if the radius were provided, but rather to engage students in solving problems that are probabilistic in nature. 
The added positive side effect is that it will expose students early on to Probability and Statistics even if in an informal sense and it will demonstrate that mathematics isn’t just about tidy little formulas that always resolve nicely into an answer. Real problems in industry cannot all be solved in a complete theoretical manner. The constraints of the industrial mathematician, researcher, engineer are often rooted in budgets, schedules, profitability, etc. Take, for example, meteorology — a very, very sophisticated field of study. One of the areas of study in meteorology is weather forecasting. If, I know that at 12pm it is raining heavily in New York City, I can say with almost absolute certainty that at 12:01pm it will still be raining heavily in New York City. But what can be said about the weather one hour later? One day later? One week later? It may not be raining heavily for one continuous week, but what will the weather be like? Sunny? Warm? Windy? When Superstorm Sandy devastated the New Jersey coastline, weather forecasting models provided several plausible scenarios for the trajectory of the storm. Figuring out where a storm like Superstorm Sandy will hit exactly is a daunting, if not impossible task. However, the uncertainty in a forecast can be controlled by using sophisticated simulation and weather modelling techniques. And this requires quantitative programming. Monte Carlo Methods are not the only, nor are they the final, word in quantitative programming. To name-drop a little bit so that the uninitiated reader can have searchable terms, here is a non-exhaustive, unordered list of topics: • Quasi-Monte Carlo and randomized quasi-Monte Carlo Methods (I have some published work here.) 
• Genetic algorithms (I have some work here too) • Genetic programming (technically different from genetic algorithms) • Bootstrapping • Finite difference schemes for solving partial differential equations • Finite element methods • Symbolic computation Some first steps Here is what, you, the instructor will need to ensure exists. This is geared for high school students in their junior and senior year. • You will have to have a decent understanding of a programming language of your choice. If you are not programming savvy and would like to learn, please contact me and let’s see if something can be arranged with me either directly with your school or private instruction. If you want to learn on your own, I recommend learning Python. Here are some links that seem reasonable for self-learning: learnpython.org, codeacademy.com, and Python’s tutorial. Additionally, if you stick with Python, then hopefully the examples in this series will be intuitive enough so that you can just move forward on your own. • You should have a classroom or get access to a classroom equipped with computers. • Your students will need to have installed, preferably, the same version of the programming language that you are working with. • You and your students should have patience, fun, and a willingness to learn and try something new. The next thing to do is to get students familiar with the programming environment. I will use Python in this series. I’m on Python 3.1.x and I’ll use the default IDE (IDLE) that comes with a standard install of Python. One way to get students started with programming is to provide a full programming course. However, I would recommend that the students just be given the code and their tasks should be to • comment it line by line and • modify it as per the exercises. When students can comment the code they will immediately know what they know and what they don’t know. 
Additionally, for those who have some anxiety about programming, they don't have to learn "by fire"; they can take a "look-and-see" approach to gain familiarity. The instructor can step in and fill in the gaps for the student and perhaps the class. Of course, there are going to be those students who will catch on quite quickly. For them, let them advance.

Some tips for the instructor

In any classroom setting, there will never be a uniform rate of learning; some students will advance faster than others. So don't worry about controlling pace. Students will learn at the pace they will learn — it's not synchronized swimming. Just set goals, keep tabs, and motivate. That is all that an instructor really should have to do in terms of teaching.

What to look for in posts of this topic

This post was about the "why". The remainder of this week's posts will focus on the actual "how-to" for instructors and educational institutions interested in integrating programming into their math classrooms. Posts in this series will be tagged as "ppmmc". Need help? Interested in introducing something like this at your educational institution? Get in touch! We'll be happy to discuss, advise, assist, implement, demonstrate, etc.

20 thoughts on "Programming, Probability, and the Modern Mathematics Classroom"
David Wees: I've taught early programming concepts to children as young as four years old, and Python is not the best choice for children that young (Turtle Art, Blockly, or Scratch are good), but I think it is an excellent choice for children who are already text-literate, or building on solid skills with text (learning how to program almost certainly helps build literacy skills). Thank you for working on this project, Manan. I've been working on the same project at my school, with some success: I have introduced programming concepts to kindies, 2nd, 3rd, 4th, 5th, 6th, 8th, 9th, and 10th grade students at some point during the past three years, and it has absolutely been rewarding.

Manan Shah (post author): Hi David, yes, I completely agree. Python is probably not a good choice for the young kids. I remember doing Turtle Art when I was a kid and it was quite enjoyable! I also remember taking Pascal in the 9th grade and it was incomprehensible. I finally learned C++ after I finished undergrad because I was working and had to learn it on my own. Go figure!

Manan Shah (post author): Agreed! Java is also a great language to introduce as it too is very popular.

James Slocum: I fully agree with teaching Python as a first language! There are many aspects that make it ideal.
The syntax is robust and straightforward, it forces good indentation practices, and it is used by real industry (unlike Alice or Scratch). While Alice and Scratch do have an audience with younger kids, I don't think it wise to underestimate what a child, even a young child, can learn with practice and interest. I also think Java is a good first language for teens, since Minecraft mods can be written in Java and this will generate interest.
module Genarray: sig .. end

type ('a, 'b, 'c) t

The type ('a, 'b, 'c) t is the type of big arrays with variable numbers of dimensions. Any number of dimensions between 1 and 16 is supported. The three type parameters to Genarray.t identify the array element kind and layout, as follows:
• the first parameter, 'a, is the Caml type for accessing array elements (float, int, int32, int64, nativeint);
• the second parameter, 'b, is the actual kind of array elements (float32_elt, float64_elt, int8_signed_elt, int8_unsigned_elt, etc);
• the third parameter, 'c, identifies the array layout (c_layout or fortran_layout).

For instance, (float, float32_elt, fortran_layout) Genarray.t is the type of generic big arrays containing 32-bit floats in Fortran layout; reads and writes in this array use the Caml type float.

val create : ('a, 'b) Bigarray.kind -> 'c Bigarray.layout -> int array -> ('a, 'b, 'c) t

Genarray.create kind layout dimensions returns a new big array whose element kind is determined by the parameter kind (one of float32, float64, int32, etc) and whose layout is determined by the parameter layout (one of c_layout or fortran_layout). The dimensions parameter is an array of integers that indicate the size of the big array in each dimension. The length of dimensions determines the number of dimensions of the bigarray. For instance, Genarray.create int32 c_layout [|4;6;8|] returns a fresh big array of 32-bit integers, in C layout, having three dimensions, the three dimensions being 4, 6 and 8 respectively. Big arrays returned by Genarray.create are not initialized: the initial values of array elements are unspecified. Genarray.create raises Invalid_arg if the number of dimensions is not in the range 1 to 16 inclusive, or if one of the dimensions is negative.

val num_dims : ('a, 'b, 'c) t -> int

Return the number of dimensions of the given big array.

val dims : ('a, 'b, 'c) t -> int array

Genarray.dims a returns all dimensions of the big array a, as an array of integers of length Genarray.num_dims a.
val nth_dim : ('a, 'b, 'c) t -> int -> int

Genarray.nth_dim a n returns the n-th dimension of the big array a. The first dimension corresponds to n = 0; the second dimension corresponds to n = 1; the last dimension, to n = Genarray.num_dims a - 1. Raise Invalid_arg if n is less than 0 or greater than or equal to Genarray.num_dims a.

val kind : ('a, 'b, 'c) t -> ('a, 'b) Bigarray.kind

Return the kind of the given big array.

val layout : ('a, 'b, 'c) t -> 'c Bigarray.layout

Return the layout of the given big array.

val get : ('a, 'b, 'c) t -> int array -> 'a

Read an element of a generic big array. Genarray.get a [|i1; ...; iN|] returns the element of a whose coordinates are i1 in the first dimension, i2 in the second dimension, ..., iN in the N-th dimension. If a has C layout, the coordinates must be greater than or equal to 0 and strictly less than the corresponding dimensions of a. If a has Fortran layout, the coordinates must be greater than or equal to 1 and less than or equal to the corresponding dimensions of a. Raise Invalid_arg if the array a does not have exactly N dimensions, or if the coordinates are outside the array bounds. If N > 3, alternate syntax is provided: you can write a.{i1, i2, ..., iN} instead of Genarray.get a [|i1; ...; iN|]. (The syntax a.{...} with one, two or three coordinates is reserved for accessing one-, two- and three-dimensional arrays as described below.)

val set : ('a, 'b, 'c) t -> int array -> 'a -> unit

Assign an element of a generic big array. Genarray.set a [|i1; ...; iN|] v stores the value v in the element of a whose coordinates are i1 in the first dimension, i2 in the second dimension, ..., iN in the N-th dimension. The array a must have exactly N dimensions, and all coordinates must lie inside the array bounds, as described for Genarray.get; otherwise, Invalid_arg is raised. If N > 3, alternate syntax is provided: you can write a.{i1, i2, ..., iN} <- v instead of Genarray.set a [|i1; ...; iN|] v.
(The syntax a.{...} <- v with one, two or three coordinates is reserved for updating one-, two- and three-dimensional arrays as described below.)

val sub_left : ('a, 'b, Bigarray.c_layout) t -> int -> int -> ('a, 'b, Bigarray.c_layout) t

Extract a sub-array of the given big array by restricting the first (left-most) dimension. Genarray.sub_left a ofs len returns a big array with the same number of dimensions as a, and the same dimensions as a, except the first dimension, which corresponds to the interval [ofs ... ofs + len - 1] of the first dimension of a. No copying of elements is involved: the sub-array and the original array share the same storage space. In other terms, the element at coordinates [|i1; ...; iN|] of the sub-array is identical to the element at coordinates [|i1+ofs; ...; iN|] of the original array a. Genarray.sub_left applies only to big arrays in C layout. Raise Invalid_arg if ofs and len do not designate a valid sub-array of a, that is, if ofs < 0, or len < 0, or ofs + len > Genarray.nth_dim a 0.

val sub_right : ('a, 'b, Bigarray.fortran_layout) t -> int -> int -> ('a, 'b, Bigarray.fortran_layout) t

Extract a sub-array of the given big array by restricting the last (right-most) dimension. Genarray.sub_right a ofs len returns a big array with the same number of dimensions as a, and the same dimensions as a, except the last dimension, which corresponds to the interval [ofs ... ofs + len - 1] of the last dimension of a. No copying of elements is involved: the sub-array and the original array share the same storage space. In other terms, the element at coordinates [|i1; ...; iN|] of the sub-array is identical to the element at coordinates [|i1; ...; iN+ofs|] of the original array a. Genarray.sub_right applies only to big arrays in Fortran layout. Raise Invalid_arg if ofs and len do not designate a valid sub-array of a, that is, if ofs < 1, or len < 0, or ofs + len > Genarray.nth_dim a (Genarray.num_dims a - 1).
val slice_left : ('a, 'b, Bigarray.c_layout) t -> int array -> ('a, 'b, Bigarray.c_layout) t

Extract a sub-array of lower dimension from the given big array by fixing one or several of the first (left-most) coordinates. Genarray.slice_left a [|i1; ... ; iM|] returns the ``slice'' of a obtained by setting the first M coordinates to i1, ..., iM. If a has N dimensions, the slice has dimension N - M, and the element at coordinates [|j1; ...; j(N-M)|] in the slice is identical to the element at coordinates [|i1; ...; iM; j1; ...; j(N-M)|] in the original array a. No copying of elements is involved: the slice and the original array share the same storage space. Genarray.slice_left applies only to big arrays in C layout. Raise Invalid_arg if M >= N, or if [|i1; ... ; iM|] is outside the bounds of a.

val slice_right : ('a, 'b, Bigarray.fortran_layout) t -> int array -> ('a, 'b, Bigarray.fortran_layout) t

Extract a sub-array of lower dimension from the given big array by fixing one or several of the last (right-most) coordinates. Genarray.slice_right a [|i1; ... ; iM|] returns the ``slice'' of a obtained by setting the last M coordinates to i1, ..., iM. If a has N dimensions, the slice has dimension N - M, and the element at coordinates [|j1; ...; j(N-M)|] in the slice is identical to the element at coordinates [|j1; ...; j(N-M); i1; ...; iM|] in the original array a. No copying of elements is involved: the slice and the original array share the same storage space. Genarray.slice_right applies only to big arrays in Fortran layout. Raise Invalid_arg if M >= N, or if [|i1; ... ; iM|] is outside the bounds of a.

val blit : ('a, 'b, 'c) t -> ('a, 'b, 'c) t -> unit

Copy all elements of a big array in another big array. Genarray.blit src dst copies all elements of src into dst. Both arrays src and dst must have the same number of dimensions and equal dimensions. Copying a sub-array of src to a sub-array of dst can be achieved by applying Genarray.blit to sub-arrays or slices of src and dst.
val fill : ('a, 'b, 'c) t -> 'a -> unit

Set all elements of a big array to a given value. Genarray.fill a v stores the value v in all elements of the big array a. Setting only some elements of a to v can be achieved by applying Genarray.fill to a sub-array or a slice of a.

val map_file : Unix.file_descr -> ('a, 'b) Bigarray.kind -> 'c Bigarray.layout -> bool -> int array -> ('a, 'b, 'c) t

Memory mapping of a file as a big array. Genarray.map_file fd kind layout shared dims returns a big array of kind kind, layout layout, and dimensions as specified in dims. The data contained in this big array are the contents of the file referred to by the file descriptor fd (as opened previously with Unix.openfile, for example). If shared is true, all modifications performed on the array are reflected in the file. This requires that fd be opened with write permissions. If shared is false, modifications performed on the array are done in memory only, using copy-on-write of the modified pages; the underlying file is not affected. Genarray.map_file is much more efficient than reading the whole file in a big array, modifying that big array, and writing it afterwards. To adjust automatically the dimensions of the big array to the actual size of the file, the major dimension (that is, the first dimension for an array with C layout, and the last dimension for an array with Fortran layout) can be given as -1. Genarray.map_file then determines the major dimension from the size of the file. The file must contain an integral number of sub-arrays as determined by the non-major dimensions, otherwise Failure is raised. If all dimensions of the big array are given, the file size is matched against the size of the big array. If the file is larger than the big array, only the initial portion of the file is mapped to the big array. If the file is smaller than the big array, the file is automatically grown to the size of the big array. This requires write permissions on fd.
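A minimal usage sketch of the interface documented above (creating, filling, updating, and querying a two-dimensional array; this is an illustration, not part of the manual):

```ocaml
let () =
  (* A 2x3 array of 32-bit integers in C layout. *)
  let a = Bigarray.Genarray.create Bigarray.int32 Bigarray.c_layout [|2; 3|] in
  Bigarray.Genarray.fill a 0l;            (* create does not initialize *)
  Bigarray.Genarray.set a [|1; 2|] 42l;   (* coordinates start at 0 in C layout *)
  assert (Bigarray.Genarray.get a [|1; 2|] = 42l);
  assert (Bigarray.Genarray.num_dims a = 2);
  assert (Bigarray.Genarray.dims a = [|2; 3|]);
  print_endline "ok"
```

Note the int32 kind pairs with the Caml type int32 (hence the 42l literal), exactly as the ('a, 'b) pairing at the top of this page describes.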
Math Forum Discussions - Re: Matheology § 224
Date: Mar 17, 2013 8:11 PM
Author: ross.finlayson@gmail.com
Subject: Re: Matheology § 224

On Mar 17, 4:59 pm, Virgil <vir...@ligriv.com> wrote:
> In article
> <ba28932b-ac48-4567-8e5c-a7e9262f8...@z4g2000vbz.googlegroups.com>,
> WM <mueck...@rz.fh-augsburg.de> wrote:
> > On 17 Mrz., 21:48, Virgil <vir...@ligriv.com> wrote:
> > > Mathematical truth is independent of time.
> > In fact??? Amazing! After Cantor's list has been diagonalized, it is
> > possible to include all diagonals into the list. But someone has
> > forbidden to change the list after time t_0 when the diagonalizers
> > start to do their work.
> Why does WM claim that after what WM calls "Cantor's list" has been
> diagonalized, he can include all anti-diagonals, when it is always
> possible to find others that have been so far overlooked?
> After each anti-diagonal of any list is found, prefix it to that list and
> then the anti-diagonal to the new list is not in the new list or the old
> sub-list.
> This procedure always finds new lines which are non-members of any of
> the prior lists of lines including all lines of any original list and
> all previously found anti-diagonals of those prior lists.
> WM is just not paying attention!
> ######################################################################
> WM has frequently claimed that HIS mapping from the set of all infinite
> binary sequences to the set of paths of a CIBT is a linear mapping.
> In order to show that such a mapping is a linear mapping, WM would first
> have to show that the set of all binary sequences is a linear space
> (which he has not done and apparently cannot do) and that the set of
> paths of a CIBT is also a vector space (which he also has not done and
> apparently cannot do) and then show that his mapping, say f, satisfies
> the linearity requirement that f(ax + by) = af(x) + bf(y),
> where a and b are arbitrary members of the field of scalars and x and y
> and f(x) and f(y) are arbitrary members of suitable linear spaces.
> While this is possible, and fairly trivial for a competent mathematician
> to do, WM has not yet been able to do it.
> But frequently claims already to have done it.
> --

With whatever expansions EF would have, the binary antidiagonal is at the end. Obviously, prepending .111... to the beginning is not then EF (the function modeled by n/d, n->d, d->oo). Bo-ring.

Virgil my good man: at this rate your work will be that So, the other day, you appended to your signature, as it were, though it's not the four lines nor is it split from the body with --, that there was such a linear mapping, then, how is [0,1] or the CIBT or Cantor space of the Cantor set of the sequences 2^w, a linear space? It's simple to define operations that would be fields except for associativity of *, distributivity, or multiplicative inverses, so I wonder what positive input you had in mind (and that they would otherwise satisfy the vector space axioms). A simple and trivial continuous mapping was noted.

Ross Finlayson
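The prefix-and-rediagonalize construction Virgil describes can be illustrated on finite truncations. This is a toy sketch of my own (the real argument concerns infinite lists, which no finite program exhausts):

```python
def antidiagonal(rows):
    # Flip the k-th digit of the k-th row; the result differs from
    # row k at position k, so it appears nowhere in the list.
    return [1 - rows[k][k] for k in range(len(rows))]

rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = antidiagonal(rows)
print(d)  # [1, 0, 1, 1], not equal to any row above
```

Prefixing d to the list (suitably extended to stay square) and rediagonalizing produces yet another absent sequence, which is the point pressed in the quoted post.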
What Mathematical Concepts Do Infants and Toddlers Learn? (page 2)
E. Geist — Pearson Allyn Bacon Prentice Hall. Updated on Jul 20, 2010

This example shows how a child acts like a scientist by testing many different variables, including the height and number of blocks, as well as experimenting with the method of knocking over the tower. Infants use logic and scientific processes to reconstruct, mentally, the world around them (Forman, 1982; Sinclair & Kamii, 1995). Progressive organizing behaviors, such as the following, exist at a very young age:
• making comparisons between and among objects based on similarity
• putting one object in one hole (one-to-one correspondence)
• putting objects into a series from smallest to biggest as preparation for more complex number concepts

As teachers of children of this age, we have to be mindful of the complex constructions going on in their minds. Every time they touch, see, smell, taste, or move something they put it into their minds in a certain way (Sinclair & Kamii, 1995). The brain instinctively wants to make sense of this new information and puts it into a framework that Piaget called a schema. For infants and toddlers, these schema are very concrete and are directly linked to their senses and motor activity. From 24–36 months, children are developing some representational thought, that is, they begin representing their knowledge using language, drawings, and objects. As the child grows, representational thought is going to be very important in the construction of mathematics. A large part of mathematics is based on representational thought. Representation is the process of making one thing stand for another. For example, we use the numeral "4" to stand for | | | |. We use all sorts of symbols and signs to represent more abstract ideas in mathematics such as addition (+), subtraction (−), and multiplication (×).
In the infant and toddler years, the simple act of supporting a child’s burgeoning representational thought process is supporting his future mathematical ability. Teachers of children from 2–3 years are on the cutting edge of fostering this vital intelligence. Children this age will begin to incorporate what they know into their imaginative play and this becomes a fertile area for mathematics. When a child pretends to use a block as a telephone, he is using representational thought. He has made a block stand for the phone just like “4” stands for | | | |. Many times, children develop a better understanding of these concepts if they are presented in a play context. For example, children playing in the housekeeping area could be asked questions like “how many apples or spoons?” or “how much soup will you need?” They could be asked to put toy cans in their places in the pantry, which supports one-to-one correspondence. The children are also developing rudimentary counting and number skills and can be asked to count, even if they are not always accurate. “Let’s see how many spoons are here. Let’s count together: 1, 2, 3, 4. . .” Even before infants can count, however, they begin to discern similarities and differences in their environment that form the basis for forming mathematical relationships (E. M. Brannon et al., 2004; Kuhlmeier, Bloom, & Wynn, 2004; McCrink & Wynn, 2004), as in the following observation: Twelve-and-a-half-month-old Xu is given a set of cups that fit inside one another and a group of sticks of different lengths. He takes the second-to-longest stick, looks at it, and keeps it in his left hand. With the other hand, he takes the second biggest cup. With the stick he firmly touches the cup. Then using the rod he firmly touches the largest cup, then the next smallest cup, and then the cup he is holding. 
As children try to apply order to their environment, through mentally and physically acting on objects in their environment, they are thinking logically and even mathematically. When they do very simple actions such as using a stick to touch three cups in sequence they are systematically applying order to their environment using number, and logic (Sinclairs 1989). This is the child making sense of the world. As teachers in programs for young children, we need to recognize the rich mathematical learning that is occurring, and that the environment is vital to the infant's construction of mathematics. Setting up a stimulating environment is the most important thing a teacher can do for infants and toddlers. Make objects available that children can observe, sort, and act on mentally, as we saw in the vignette with Xu. Interactions with objects will aid the child in developing the basic concepts needed for higher-level mathematics, such as one of the first basic concepts they will learn: the concept of "more." Excerpt from Children are Born Mathematicians: Supporting Mathematical Development, Birth to Age Eight, by E. Geist, 2009 edition, p. 145-149. © ______ 2009, Merrill, an imprint of Pearson Education Inc. Used by permission. All rights reserved. The reproduction, duplication, or distribution of this material by any means including but not limited to email and blogs is strictly prohibited without the explicit permission of the publisher.
Copyright © University of Cambridge. All rights reserved. 'Pie Cuts' printed from http://nrich.maths.org/ The problem here is to cut a perfectly circular pie into n equal pieces using exactly three cuts. For what values of n is it possible to do this with straight cuts, and then how many different solutions are there? What if you are not restricted to straight cuts? There are some solutions here but also more challenges; it remains in the tough nut category! Ling Xiang Ning, Allan, Tao Nan School, Singapore sent in this solution for 4 equal parts. It is not a trivial matter to find A but you might like to use an approximation method. Without loss of generality you can suppose the radius of the circle is 1 unit. The problem is easier for 6 parts. Use 3 cuts along diameters to divide the pie into sectors with angles of 60 degrees at the centre. Find the endpoint by marking chords equal in length to the radius of the circle. For 8 pieces Lewis O'Neill from Roundwood Primary School, Hertfordshire said "you cut it into 4, then you put it on its side and cut it collaterally". A super solution Lewis, well done. Edwin Taylor remembered being set this in primary school and described the same solution as follows: "Looking from above the pie you cut it into quarters with two cuts. Then, looking at it side on, you make a horizontal cut parallel to the base of the pie. (I know the third cut isn't really a chord, but I assume that rule does not count if "cuts don't have to be straight line.)" Here is another solution for 8 equal areas which gives equal shares of the icing on the top. Can you find the radius of the smaller circle?
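The approximation hinted at above can be carried out numerically. As a sketch, consider one particular configuration for four equal pieces (my own assumption, not necessarily the one in the published solution): three parallel cuts, the middle one a diameter, and the outer two chords at distance d from the center chosen so that each outer segment has area π/4 of the unit circle. By symmetry each middle strip then has area π/2 − π/4 = π/4 as well. A bisection search finds d:

```python
import math

def segment_area(d):
    # Area of the circular segment cut off by a chord at distance d
    # from the center of a unit circle (0 <= d <= 1).
    return math.acos(d) - d * math.sqrt(1.0 - d * d)

# Bisection: find d so the outer segment is exactly a quarter of the circle.
target = math.pi / 4
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if segment_area(mid) > target:  # segment too big: move the chord outward
        lo = mid
    else:
        hi = mid

d = (lo + hi) / 2
print(round(d, 4))  # roughly 0.404
```

The same segment-area function, with a different target, handles the "equal icing" variant, where the inner circle's radius is chosen so areas match.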
Physics Forums - View Single Post - Split of fuel oil consumption to heatup multiple type liquid

I have three different liquids, l1, l2 and l3, stored for 10 days. The liquids are required to be kept at temperatures x1, x2 and x3. As input I have the following: 1) daily temperature of all three liquids, and 2) daily quantity. I need to keep the temperature of all three liquids constant; to do that, heating is done. Due to heating, fuel is consumed, and I keep track of the daily consumed fuel. Now my question is: how can the total daily consumed fuel be split so that I can know how much fuel is spent on l1, l2 and l3? I cannot change the following (they are recorded and are to be treated as fixed): 1. daily consumed fuel (total of all three liquids), 2. daily individual temperature of each liquid, 3. sum total of fuel consumed for 10 days. Can anyone guide me as to how I should proceed? The simple qty*variation-in-temp formula does not work (it allocates excessive fuel oil consumption in cases where there has been a drop in liquid temperature). Would really appreciate if someone out there can help me.
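One plausible refinement of the qty*ΔT idea mentioned in the post (a suggestion of mine, not an answer given in the thread): allocate each day's recorded fuel in proportion to each liquid's estimated heat demand, quantity × temperature rise × specific heat, with negative rises clipped to zero so that days with a temperature drop draw no allocation. The helper below is a hypothetical sketch; the numbers are made up:

```python
def split_fuel(total_fuel, quantities, temp_rises, specific_heats):
    # Proportional allocation by estimated heat demand q * dT * c.
    # Negative temperature rises (cooling) are clipped to zero so they
    # receive no share, addressing the over-allocation on cooling days.
    demands = [q * max(dt, 0.0) * c
               for q, dt, c in zip(quantities, temp_rises, specific_heats)]
    total = sum(demands)
    if total == 0:
        return [0.0] * len(demands)
    return [total_fuel * demand / total for demand in demands]

# One day's split: 100 units of recorded fuel over liquids l1, l2, l3.
shares = split_fuel(100.0, [10, 20, 5], [15, -2, 30], [1.0, 2.0, 1.8])
print(shares)  # l2 cooled that day, so its share is 0; shares sum to 100
```

By construction the per-liquid shares always sum to the recorded daily total, so the 10-day totals are preserved as the post requires.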
MNP – Multinomial Probit Regression models

MNP is a useful R package for modeling discrete choices, such as choosing among a finite number of alternatives. What makes this an interesting alternative to software such as Stata or Limdep is that model parameters are estimated via Bayesian Markov chain Monte Carlo (MCMC) methods. Covariates are allowed, and control over MCMC tuning is provided. Predictions under a model are available via the posterior predictive distribution.
Student: Which of the following inequalities matches the graph? (see attached photo)
A. y less than or greater to -1/2 x - 4
B. y > -1/2 x - 4
C. y < -1/2 x - 4
D. The correct inequality is not listed.

LogicalApple: It looks like the slope of the line is -1/2, and the y-intercept is -4. The shaded region is below this line, so that means y is less than -1/2 x - 4. If the line was solid instead of dashed it would be y <= -1/2 x - 4, but since it is dashed we just say y < -1/2 x - 4.

Student: @LogicalApple so it'd be C

Student: could you help me with another

Student: Which of the following inequalities matches the graph? (See attached photo)
A. -6x + y < 3
B. 6x + y < 3
C. 6x - y < -3
D. The correct inequality is not listed.

LogicalApple: Can you identify the slope of this line?

LogicalApple: It's hard to say, but it looks like a slope of 6 with a y-intercept of 3. So the equation of the line is y = 6x + 3. But since the shaded region is above the line this becomes y > 6x + 3. Rewriting, this becomes y - 6x > 3. Then if we multiply both sides by -1 and change the inequality: 6x - y < -3.

Student: @LogicalApple thank you, can you help me with another
Student: Which of the following inequalities matches the graph? (see attached photo)
A. x greater than or equal to -7
B. x less than or greater to -7
C. y less than or greater to -7
D. y greater than or equal to -7

LogicalApple: Do you mean "less than or equal to" for B and C?

LogicalApple: Ah ok. This line is vertical and crosses x = -7. So it is either x >= -7 or x <= -7. Which one do you think? Notice which side of the line the shaded region is at.

Student: @LogicalApple is it x >= -7

LogicalApple: The shaded region is to the left of -7, so that would indicate less than -7. Since the line is solid, it is less than or equal: x <= -7.

Student: so the answer is B? @LogicalApple

Student: @LogicalApple okay i only have 4 more questions could you help me with them?

Student: Which of the following inequalities matches the graph? (See attached photo)
A. x < 2
B. y > 2
C. y < 2
D. x > 2

LogicalApple: The dashed line is vertical and passes through 2. The shaded region lies to the right. Is it A or D?

Student: @LogicalApple is it D

LogicalApple: Yes it is :)

Student: @LogicalApple Which of the following inequalities matches the graph?
(see attached photo)
A. x less than or greater to 0
B. x greater than or equal to 0
C. y less than or greater to 0
D. y greater than or equal to 0

LogicalApple: The line is solid and horizontal. The shaded region is below 0. What is your choice?

Asker: @LogicalApple is it A?

LogicalApple: The line is y = 0, but the shaded region is less than or equal to this. So y <= 0.

Asker: @LogicalApple that's A. Do you need me to draw all of them again?

LogicalApple: I think based on what you wrote down, it is C. It should be "y is less than or equal to 0".

Asker: @LogicalApple okay, thanks. I'll send you the next one.

Question: Which of the following inequalities matches the graph? (see attached photo)
A. x less than or greater to -1
B. x greater than or equal to -1
C. y less than or greater to -1
D. y greater than or equal to -1

LogicalApple: Which one do you think? The line is y = -1 and it is solid, and the shaded region is above it.

Asker: Would it be B? @LogicalApple

LogicalApple: Nope. Remember that any time you see a horizontal line, it's going to be y. If it is a vertical line, it's going to be x. The solid line is actually y = -1, and the shaded region is greater than or equal to it. So y >= -1, or D.

Asker: You know, I was stuck between B and D lol @LogicalApple

LogicalApple: You will get the hang of it :)

Question: The non-profit organization you volunteer for is throwing a fundraiser cookout.
You are in charge of buying the hamburgers, which cost $3 per pound, and hotdogs, which cost $2 per pound. The meat budget you are given totals $600. The inequality 3x + 2y <= 600 represents the possible combinations of pounds of hamburgers (x) and hotdogs (y) you can buy. (see attached photo) Which of the following represents a solution to the inequality?
A. 200 pounds of hamburgers and 140 pounds of hotdogs
B. 150 pounds of hamburgers and 60 pounds of hotdogs
C. 100 pounds of hamburgers and 240 pounds of hotdogs
D. 240 pounds of hamburgers and 40 pounds of hotdogs

LogicalApple: If hamburgers are on the x-axis and hotdogs on the y-axis, the points become A: (200, 140), B: (150, 60), C: (100, 240), D: (240, 40). Now the question is, which one of these points is either on the line or inside the shaded region?

Asker: @LogicalApple so would the answer be A?

LogicalApple: I plotted the points -- which one is within the shaded region?

Asker: 150, so it would be B @LogicalApple

Asker: @LogicalApple that was it! THANKS for all your help! Have a great new year! :)

LogicalApple: You too! Happy 2013.
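The budget check in this last problem is easy to verify directly; a quick sketch (the option coordinates are taken from the thread):

```python
# Which option satisfies 3x + 2y <= 600 (x = lbs of hamburgers, y = lbs of hotdogs)?
options = {"A": (200, 140), "B": (150, 60), "C": (100, 240), "D": (240, 40)}
feasible = [label for label, (x, y) in options.items() if 3 * x + 2 * y <= 600]
print(feasible)  # ['B'] -- only B stays within the $600 budget
```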
{"url":"http://openstudy.com/updates/50e22c99e4b0e36e3513f025","timestamp":"2014-04-20T03:40:09Z","content_type":null,"content_length":"170550","record_id":"<urn:uuid:6d1a3f12-4779-4d45-be24-75aaf54d5cd8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization Problem
November 18th 2007, 11:41 AM #1

Here is the problem: A straight piece of wire is to be cut into two pieces. One piece is to be formed into a circle and the other piece into a square. Where should the wire be cut so that the sum of the areas of the circle and square will be greatest? The least?

I know the square area formula: if x = side, the area is x^2. The same stuff for the circle: if x = circumference, then x = 2(pi)r. I do not know the length of the wire, but that length is a constant (right? I will assume this for now), so I will use "L". So the side of the square will be x, and the circumference will be L - x. Stuck right here, not sure what to do next.

Good start, except that the perimeter of the square will be x (not the side). So the side of the square will be x/4, and the radius of the circle will be (L-x)/2π. That means that the total area enclosed will be $(x/4)^2 + \pi\Bigl({\textstyle\frac{L-x}{2\pi}}\Bigr)^2$. Now use calculus to find when that is a maximum or a minimum.

wow, thanks for the great start, looks like I am just a "little" algebra away now. What now Batman???

Optimization Problem Cont.....
Okay, I found the algebra easier if I made the square "L - x" and let the circle be "x". The sum of the two areas would be: Circle + Square = 2π(x/(2π))^2 + ((L - x)/4)^2. I think that is right.
So the next step should be to take a derivative. I will do the left-hand side first and add the right-hand side later (this can get ugly looking quickly here):

Left-hand side: x/(2π). Right-hand side: (-L + x)/8. Combined back: x/(2π) + (-L + x)/8, so set it to zero.

Stuck again, been at this for hours. I know I need to find both the max and min, but the L is causing me great problems. The original problem does not address L or ask for it. Could I use a simple value of 1 for it, since the ratio of any cut of the wire would apply to all lengths? I need to walk away for a moment, bang my head against a solid object multiple times (no, I will not take a derivative to find v of that), and hope someone reads this and gives me a push in the right direction.

Soon to have a headache

Edit: I figured I should add an overview to make this easier to follow since it's so long.
1. Set up the problem (define values and default relationships)
2. Solve for the area of the square in terms of the length of its wire
3. Solve for the area of the circle in terms of the length of its wire
4. Add them together to get a formula for total area
5. Find where the slope of the area = 0
6. Test that point.
7. Test the end points of your domain to see where they lie
8. Wrap up the process into a conclusion

SET UP THE PROBLEM
The length of the wire is L; the length of the piece that will make the square is s; the length of the piece that will make the circle is c.

FIND THE AREA OF THE SQUARE
Now you need to find an equation for the square. You know the perimeter of a square is 4 times one side, and that is equal to the length of your piece of wire which will make the square. So one side of the square will be 1/4 of s, or s/4.
And since the area of a square is one side squared, you know that the area of the square is $(\frac{s}{4})^{2}$, which is $\frac{s^{2}}{16}$.

FIND THE AREA OF THE CIRCLE
Your piece of wire c will become the circumference of the circle, and the circumference of a circle is $2\pi r$, so $c=2\pi r$ and thus $r=\frac{c}{2\pi}$. And you know the area of the circle is $\pi r^{2}$, so substitute the value of r into the equation for area and get $\pi(\frac{c}{2\pi})^{2}$, which can be simplified to $\pi(\frac{c^{2}}{4\pi^{2}})$, which simplifies to $\frac{c^{2}}{4\pi}$.

Now, you don't want two variables (s and c), and we know that c = L - s (L is not a variable, it is a constant), so we substitute that into our equation, $\frac{(L-s)^{2}}{4\pi}$, and simplify: $\frac{L^{2}-2Ls+s^{2}}{4\pi}$

FIND THE EQUATION FOR THE SUM OF THE SQUARE AND THE CIRCLE
This is easy, just add the area of the square to the area of the circle.

Common denominator: $\frac{\pi s^{2}}{16\pi}+\frac{4(L^{2}-2Ls+s^{2})}{4(4\pi)}$

$\frac{\pi s^{2} + 4L^{2}-8Ls+4s^{2}}{16\pi}$

And turn this into an equation, by realizing it is equal to our area, A, in terms of s (s is the only variable in this equation since we substituted out the c, and L is a constant):

$A(s)=\frac{\pi s^{2} + 4L^{2}-8Ls+4s^{2}}{16\pi}$

A = area in terms of s, so the value of A will give us our total area, and the graph of A will show us our area as it changes according to s.

Now, we want the point where area is greatest. At that point, we know that it will be the highest on the graph of area. And we know that because it is highest, it must be higher than all the points around it, which means A is increasing up to it, decreasing after it, and the slope is equal to 0 at that point. By the same thinking, wherever the area is minimized, A will be decreasing before that point, increasing after it, and the slope will be zero on it. So wherever the slope is equal to zero is a potential maximum/minimum value of area.
So let's find the slope and set it to zero, then test the points around it to see what the slope is doing. We want the equation for the slope of this line, and then to set that equal to zero. The derivative is the equation for the slope of the line, so let's find the derivative.

FIND THE DERIVATIVE
Initial equation: $A(s)=\frac{\pi s^{2} + 4L^{2}-8Ls+4s^{2}}{16\pi}$

$A(s)=\frac{1}{16\pi}(\pi s^{2} + 4L^{2}-8Ls+4s^{2})$

Differentiate (remember that L is a constant, and the derivative of a constant is zero):

$A\prime(s)=\frac{1}{16\pi}(2\pi s -8L+8s)$

FIND THE ZERO OF THE SLOPE
Now we have the equation of our slope; let's find where it is equal to zero, as that will be a potential extreme value:

$0=\frac{1}{16\pi}(2\pi s -8L+8s)$

Divide out the irrelevant constant: $0=2\pi s -8L+8s$

Divide out a 2: $0=\pi s -4L+4s$

Factor out an s: $0=s(\pi +4) -4L$

Add 4L: $4L=s(\pi +4)$

Divide by the coefficient of s: $\frac{4L}{\pi +4}=s$

Now we have a value for s where the change in the total area of our circle and square is zero, but let's make sure whether it is a maximum. Because all of the terms we are using are arbitrary (s, c, L), it is very difficult to choose actual points to plug into our first derivative, so what we can instead do is find the second derivative, plug our point into the second derivative, and see whether it is concave up or down. If it is concave down, then the open end is down, so A must be increasing up to that point and decreasing after it, making it a maximum. If the open end is up, it looks like a parabola, and we can see that our point is at the bottom, making it a minimum.

FIND THE SECOND DERIVATIVE
First derivative: $A\prime(s)=\frac{1}{16\pi}(2\pi s -8L+8s)$

$A\prime\prime(s)=\frac{1}{16\pi}(2\pi +8)$

$A\prime\prime(s)=\frac{\pi +4}{8\pi}$

Now we can see that the second derivative is ALWAYS positive; this means the graph is always concave up, so our s value must be a minimum.
This tells us that when $s=\frac{4L}{\pi +4}$ our area is at a minimum. Well, what do we do now? What about our maximum? There was only 1 value where the derivative was equal to zero, so where does our other point come from? Well, we have 2 other critical numbers: the two endpoints of our graph, where the domain begins and ends. We know we can't have a length less than zero, so s must be >= 0. And we can't have a length of s that is greater than the length of the wire, so s must be <= L. This means the domain is 0 <= s <= L. Now we have two more critical values to check. Whichever is higher must be our maximum value.

CHECK THE AREA AT OUR TWO ENDPOINTS
Initial equation: $A(s)=\frac{\pi s^{2} + 4L^{2}-8Ls+4s^{2}}{16\pi}$

Check the area at s = 0:
$A(0)=\frac{\pi\cdot 0^{2} + 4L^{2}-8L\cdot 0+4\cdot 0^{2}}{16\pi}$
$A(0)=\frac{L^{2}}{4\pi} \approx 0.0796L^{2}$

Check the area at s = L:
$A(L)=\frac{\pi L^{2} + 4L^{2}-8L\cdot L+4L^{2}}{16\pi}$
$A(L)=\frac{\pi L^{2}}{16\pi}$
$A(L)=\frac{L^{2}}{16} = 0.0625L^{2}$

Now our two endpoint areas are each in terms of L^2, and we know that L is positive, so we can just compare their coefficients and see which is greater: 0.0796 > 0.0625. Therefore our area is maximized when s equals 0.

We know our area is minimized when $s=\frac{4L}{\pi +4}$ and maximized when $s=0$. So to get the greatest area, do not cut the wire at all and instead use it to make only a circle; and to get the least area, cut off $\frac{4}{\pi+4}$ of the length and use that to make a square, and the remaining bit to make a circle.

There was a lot in there, I hope I didn't make any errors :/
Last edited by angel.white; November 18th 2007 at 09:04 PM.

TY, Angel. If you are not a teacher, you should be. Great explanation. It's early, but I will review and review. Thank you.
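The conclusion is easy to sanity-check numerically. A short sketch with L = 1 (names are ours, not from the thread):

```python
import math

L = 1.0

def area(s):
    """Total area when a piece of length s makes the square and L - s makes the circle."""
    return (s / 4) ** 2 + (L - s) ** 2 / (4 * math.pi)

s_min = 4 * L / (math.pi + 4)    # critical point found in the thread
print(round(area(0), 4))         # 0.0796  (all wire to the circle: the maximum)
print(round(area(L), 4))         # 0.0625  (all wire to the square)
print(round(area(s_min), 4))     # 0.035   (the minimum)
```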
{"url":"http://mathhelpforum.com/calculus/23043-optimization-problem.html","timestamp":"2014-04-17T15:35:09Z","content_type":null,"content_length":"60999","record_id":"<urn:uuid:51b5d72d-8bdb-48be-bb0d-b52ebdb4b12b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Why does the power series of sin(x) require radian?
September 9th 2013, 12:22 PM #1

I learnt the Maclaurin series of sin x and then I realised it can't work for both degrees and radians. Of course it should work in radians, but I can't seem to understand why. In the proof of the series, which is x - x^3/3! + x^5/5! - x^7/7! + ... (cba with latex), you only require sin 0 = 0 and cos 0 = 1. This is true for both degrees and radians, so that can't be the reason why degrees don't work.

Now I understand that you need sin(x)/x -> 1 as x -> 0, but this seems to be true for degrees as well, because as an angle x in degrees gets smaller, so does sin x, so they both -> 0. So sin(x)/x -> 1 for degrees too (surely?)

Then I found a proof that proves sin(x)/x -> 1 without referring to degrees or radians again. link

Direct Proof from Definition of Sine
This proof works directly from the definition of the sine function:

$\sin x = \sum_{n=0}^{\infty}\frac{(-1)^{n}x^{2n+1}}{(2n+1)!}$ (by the definition of the sine function)

$= \frac{(-1)^{0}x^{2\cdot 0+1}}{(2\cdot 0+1)!}+\sum_{n=1}^{\infty}\frac{(-1)^{n}x^{2n+1}}{(2n+1)!} = x+\sum_{n=1}^{\infty}\frac{(-1)^{n}x^{2n+1}}{(2n+1)!}$

$\lim_{x\to 0}\frac{\sin x}{x} = \lim_{x\to 0}\frac{x+\sum_{n=1}^{\infty}\frac{(-1)^{n}x^{2n+1}}{(2n+1)!}}{x} = \lim_{x\to 0}\frac{x}{x}+\lim_{x\to 0}\frac{\sum_{n=1}^{\infty}\frac{(-1)^{n}x^{2n+1}}{(2n+1)!}}{x}$

$= 1+\lim_{x\to 0}\frac{\sum_{n=1}^{\infty}\frac{(-1)^{n}x^{2n}}{(2n)!}}{1}$ (by Power Series Differentiable on Interval of Convergence and L'Hôpital's Rule)

$= 1+\sum_{n=1}^{\infty}\frac{(-1)^{n}0^{2n}}{(2n)!}$ (by Polynomial is Continuous)

$= 1$

Now this confuses me even more, because now we have a proof for sin(x)/x without referring to degrees or radians, which suggests that both will work. I know degrees can't work for the power series, because then you could divide the 360-degree angle into any number of parts, call that a measurement, and it would work. But why radians? It's just dividing the 360 angle into 2π parts.

Sorry if anything is unclear, please ask. This is probably a noob question with an obvious answer, but it's been bugging me for ages. I just can't find the answer or understand it.

Re: Why does the power series of sin(x) require radian?
The issue is that the Maclaurin series uses the derivatives of a function to develop the series:

$f(a) = f(0) + af'(0) + \frac {a^2}{2!}f''(0) + \frac {a^3}{3!} f'''(0) + ...$

For the case of f(x) = sin(x), this yields:

$\sin(x) = \sin (0)+ x \cos (0) + \frac {x^2}{2!} (- \sin (0)) + \frac {x^3}{3!}(- \cos (0)) + ...$

which becomes the series you are familiar with:

$\sin (x) = x - \frac {x^3}{3!} + \frac {x^5}{5!} - ...$

This formula works because if x is in radians, then the derivative of sin(x) is cos(x), the second derivative of sin(x) is -sin(x), etc. If x is in units of degrees, then the derivative would be $\frac {d(\sin (x))}{dx} = \frac {\pi}{180} \cos(x)$. The second derivative is $\frac {d^2(\sin (x))}{dx^2} = - (\frac {\pi}{180})^2 \sin(x)$, etc. The infinite series for x in degrees then becomes:

$\sin(x) = \frac {\pi}{180} x - (\frac {\pi}{180})^3 \frac {x^3}{3!} + (\frac {\pi}{180})^5 \frac {x^5}{5!} - ...$

Last edited by ebaines; September 9th 2013 at 01:01 PM.

Re: Why does the power series of sin(x) require radian?
Oh thanks, that's it! But I can't remember that you need radians for differentiation. Where does it say that in the proof of the derivative of sin? I vaguely remember that there was an arc and a circle.

Re: Why does the power series of sin(x) require radian?

Re: Why does the power series of sin(x) require radian?
The degree is an extremely arbitrary unit of measure, which disrupts all the subtle connections between the trigonometric functions. Radians are natural because they take the arc length and give you the x and y coordinates.

Re: Why does the power series of sin(x) require radian?
Thank you so much guys! One last thing: is it true that only in radians does sin(x)/x -> 1 as x -> 0? Is this not true for degrees? If so, why?

Re: Why does the power series of sin(x) require radian?
It cannot possibly be true for both of them. The function which takes x in degrees and gives the sine of the angle is $\sin(\frac{\pi x}{180})$, where the inner sine function works in radians. If we try to apply the same limit to this function, we get:

$\lim_{x \to 0} \frac{\sin(\frac{\pi x}{180})}{x} = \frac{\pi}{180}$

So no, it certainly does not hold, and there's no reason why it would.
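The two limits under discussion are easy to check numerically; a small sketch (using math.radians for the degree-to-radian conversion):

```python
import math

def sin_deg(x):
    """Sine of an angle given in degrees."""
    return math.sin(math.radians(x))

x = 1e-6
print(math.sin(x) / x)   # ~1.0        (radians: sin x / x -> 1)
print(sin_deg(x) / x)    # ~0.0174533  (degrees: sin x / x -> pi/180)
```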
{"url":"http://mathhelpforum.com/trigonometry/221846-why-does-power-series-sin-x-require-radian.html","timestamp":"2014-04-17T19:32:05Z","content_type":null,"content_length":"72919","record_id":"<urn:uuid:872b7df5-ca0e-4048-8a3d-b171c6496d71>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
CLEP Professor

CLEP Professor College Algebra contains a diagnostic test, 20 lessons with over 200 practice problems plus video solutions, and 3 computer-based practice exams with video solutions. Pre-requisite: High School Algebra 1 and Algebra 2 (Saxon Algebra 2, 2nd or 3rd ed.). This course teaches every topic on the CLEP College Algebra exam. The College Board does not offer an AP College Algebra exam. Up to three hours of college credit can be earned by passing this exam. Click on the image or the title for a complete description.

CLEP Professor Precalculus contains a diagnostic test, 20 lessons with over 200 practice problems plus video solutions, and 3 computer-based practice exams with video solutions. Pre-requisite: High School Pre-Calculus (Saxon Advanced Math, 2nd ed.). This course teaches every topic on the CLEP Pre-Calculus exam. The College Board does not offer an AP Pre-Calculus exam. Up to three hours of college credit can be earned by passing this exam. Click the image or title for a more complete description.

CLEP Professor Calculus contains a diagnostic test, 20 lessons with over 200 practice problems plus video solutions, and 4 computer-based practice exams (2 CLEP, 2 AP) with video solutions. Pre-requisite: High School Calculus (Saxon Calculus, 2nd ed.). This course teaches every topic on the CLEP Calculus and the AP Calculus AB exams, which cover material normally taught in a one-semester college calculus course. Students pursuing a non-science college major should consider CLEP Calculus, while science and engineering students should consider the AP Calculus AB exam. (NOTE: CLEP Professor does not prepare students for the AP Calculus BC Exam.) Up to four hours of college credit are granted for the AP exam and up to three hours are granted for the CLEP exam. Click on the image or the title for a complete description.
CLEP Professor Biology contains a diagnostic test, 20 lessons with over 200 practice problems plus video solutions, and 4 computer-based practice exams (2 CLEP, 2 AP) with video solutions. Pre-requisite: High school Biology (DIVE Biology recommended). This course teaches every topic on both the CLEP Biology and the AP Biology exams. Students pursuing a non-science college major should consider CLEP Biology, while science and engineering students should consider the AP Biology Exam. Up to 8 hours of college credit can be earned by passing either of these exams. Click the image or title for a complete description.

CLEP Professor Chemistry contains a diagnostic test, 20 lessons with over 200 practice problems plus video solutions, and 4 computer-based practice exams (2 CLEP, 2 AP) with video solutions. Pre-requisite: High School Chemistry (DIVE Chemistry recommended) and Algebra 2 (Saxon Algebra 2, 2nd or 3rd ed.). This course teaches every topic on both the CLEP Chemistry and AP Chemistry exams. Students pursuing a non-science college major should consider CLEP Chemistry, while science and engineering students should consider the AP Chemistry Exam. Up to eight hours of college credit can be earned by passing either of these exams. Click the image or title for a complete description.

CLEP Professor for AP Physics contains a diagnostic test, 20 lessons with over 200 practice problems plus video solutions, and 3 computer-based AP practice exams with video solutions. Pre-requisite: High school Physics (Saxon Physics recommended). This course teaches every topic on the AP Physics "B" exam. The College Board does not offer a CLEP Physics exam. Up to 8 hours of college credit can be earned by passing this exam.
{"url":"http://www.clepprofessor.com/products.html","timestamp":"2014-04-20T14:18:08Z","content_type":null,"content_length":"9992","record_id":"<urn:uuid:16a1e49f-fe6d-4ca7-a599-f40a15214cfe>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Really silly things to do with C# expression trees

As you may remember from my last post, I'm currently reading Structure and Interpretation of Computer Programs as part of a study group we started during Pablo's Fiesta. I continue to find Lisp to be a fascinating language, especially this really interesting piece of code. Ok, so what's going on here? In my previous post I demonstrated how you might implement this in javascript. Well, I found it so interesting that I decided to find out if I could do it in C# as well. The following is my attempt at trying to return the + or – symbol from an if statement.

Shut up troll! I'm not telling anybody to go off and write their code in expression trees when the following code is clearly the correct way to write it. See, that's way easier, but this isn't a post about smart things to do with expression trees now, is it? Let's up the silliness some more and make it an extension method just for fun. I was having so much fun with this that I even roped some of my teammates in on it. Here's Chad Myers' solution.

So I think it's cool that you can kind of return + or – from an if statement, but what started as a fun little exercise turned into an amazing adventure into learning expression trees. So as my final brain-beating exercise I tried to make this into a Func<int,int,int> completely built using expression trees. So here's a little challenge: in the comment section I invite you to completely school me in the ways of Expression Trees. Try and come up with a way more awesome version of a_plus_abs_b using expression trees or IL or whatever awesome sauce is in the .NET framework that I'm not aware of.

How about some ridiculous operator overloading?

I'm not sure what you mean. Link me a gist. EDIT: didn't see the gist link in my email. Weird. That's pretty awesome!
I forgot you could overload operators for classes. Too bad you can't do that without the Number class, but definitely cool, thanks!

I would wrap the expression compile with a Lazy<Func> so that you only need to compile once.

@e.z. Hart: You could also use an implicit cast to int in your example. That would make it even easier to get the result, or an implicit cast to the function delegate.

This seems a fairly literal version of the Lisp code: https://gist.github.com/1335868

Nice, this seems a lot like Chad's example. I wasn't aware that you could just cast a lambda like that: (Func) (x,y) => x + y. Kind of interesting, actually.
You can define almost any operator like ??, |?, , +-, ~> by defining it between () like I did for ??. I actually kind of like the prefix notation. I hear thats what most people hate the most about Lisp Chad & Sam seem to be on the right path, but both seem to go off in an odd direction. I think this combines simplicity, reusablity, and comes closest to the original lisp code: Nice to see another RedditorProgrammer. LULZ, how could you tell This entry was posted in Functional Programming and tagged csharp, Lisp. Bookmark the permalink. Follow any comments here with the RSS feed for this post.
{"url":"http://lostechies.com/ryanrauh/2011/11/02/really-silly-things-to-do-with-c-expression-trees/","timestamp":"2014-04-19T03:02:05Z","content_type":null,"content_length":"36764","record_id":"<urn:uuid:50f4415f-7b8a-419d-8e9d-73b986fad47b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem 379. Chromatic Tuner

Given a frequency, return the number of cents difference between the given frequency and its nearest semitone (in 12-tone equal temperament with A440), as well as the semitone it is closest to (in scientific pitch notation, using # for sharp and using natural pitches whenever possible), with A440 referring to A4. Refer to the wikipedia articles below for more information.

Problem Comments

Venu Lolla on 23 Feb 2012: @bmtran: Could you please verify that the octave numbers in the semitone names are correct in the test suite? Some of them do not seem to be consistent with http://en.wikipedia.org/wiki/Scientific_pitch_notation Cheers, VL.

on 24 Feb 2012: Sorry about that. I've fixed it to the best of my knowledge. I hope this didn't cause you too much distress.

Venu Lolla on 24 Feb 2012: Thanks for fixing it. No distress anymore. Cheers.

on 24 Aug 2012: Nice problem, especially when coming from Austin, Live Music Capital of the World.
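A minimal sketch of the computation the problem describes follows; the octave bookkeeping uses the scientific-pitch convention discussed in the comments, and the exact output format of the official test suite is not reproduced here:

```python
import math

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def tune(freq):
    """Return (cents, name): signed cents from the nearest 12-TET semitone, A4 = 440 Hz."""
    semis = 12 * math.log2(freq / 440.0)   # signed semitone distance from A4
    nearest = round(semis)
    cents = 100 * (semis - nearest)
    idx = (nearest + 9) % 12               # A sits at index 9 within the octave
    octave = 4 + (nearest + 9) // 12       # octave number in scientific pitch notation
    return cents, f"{NOTES[idx]}{octave}"

print(tune(440.0))     # (0.0, 'A4')
print(tune(261.63))    # roughly (0, 'C4'), i.e. middle C
```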
{"url":"http://www.mathworks.com/matlabcentral/cody/problems/379-chromatic-tuner","timestamp":"2014-04-18T13:23:29Z","content_type":null,"content_length":"29817","record_id":"<urn:uuid:b5b868d8-b453-433d-a88e-ad052d9febd4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Solve Inequalities with Absolute Value on the ACT

As when solving an ACT Math equation that includes an expression with absolute value, you also need to split an inequality with absolute value into two separate inequalities. However, keep in mind one twist: one of the two resulting inequalities is simply the original inequality with the bars removed. The other inequality is the original inequality with
• The bars removed
• The opposite side negated (as with absolute value equations)
• The inequality reversed (as with inequalities when you multiply or divide by a negative number)
These rules aren't difficult, but they're a little complicated, so be careful to do all three parts correctly.

Example 1
Which of the following values is in the solution set of
(A) 0 (B) 2 (C) –2 (D) 4 (E) –4

Begin by splitting the inequality: Notice that the second of these two inequalities has the bars removed, the right side negated, and the inequality sign reversed. You're now ready to solve both of these inequalities for t: To make these inequalities a little easier to read, put them in the following form: Thus, 0 falls into the range of solutions, so the right answer is Choice (A).

In some cases, the solution to an inequality with absolute value can lead to a pair of inequalities that appear to contradict each other. When this happens, both inequalities aren't true, but at least one of them is, so link them with the word or. This concept is a little tricky, so don't worry if it's not making sense. The next problem provides a concrete example.

Example 2
What is the solution set for

Before you begin, notice that the original inequality is so no solution can include either As a result, you can rule out Choices (G) and (J). Now isolate on the left side of the inequality: You're now ready to remove the bars and split the inequality: Notice that the second of these two inequalities has the bars removed, the right side negated, and the inequality sign reversed.
You’re now ready to solve the first one: Next, solve the second inequality: Notice that the two solutions seem to contradict each other: If n is greater than 4, how can it be less than 1? When this situation occurs, either solution can be true, so link the two resulting solutions with the word or: Thus, the correct answer is Choice (K). Be extra careful when working with an inequality that sets an absolute value either greater than or greater than or equal to another value that includes a variable. This type of inequality can sometimes produce a false (or extraneous) solution — that is, a solution that appears correct but doesn’t work when plugged back into the problem. The next example shows you how and why this can Which of the following is the solution set for To begin, remove the absolute value bars, split the inequality, and solve each separately: According to this result, x < 1 and x < –3 both appear correct, so you may be tempted to choose Choice (E). However, if this answer were correct, then x = 0 should be outside the solution set. So plugging 0 into the original inequality should give you the wrong answer: This solution is unexpected. In fact, x = 0 is in the solution set for this inequality. What went wrong? Take another look at the original inequality: This inequality sets an absolute value greater than 2x. So if x is any negative number, the absolute value (which can never be negative) must be in the solution set. Therefore, the solution x < –3 is false because it tells you that only certain negative values of x are in the solution set. Throwing out this false solution leaves you with the correct answer, which is x < 1; so the correct answer is Choice (A).
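The inequalities in the worked examples above were images that did not survive extraction, so here is the three-part splitting rule applied to a made-up example, |2x + 1| >= 7; the example itself is our assumption, not one from the article:

```python
def original(x):
    return abs(2 * x + 1) >= 7

def split(x):
    # First inequality:  bars removed ->                       2x + 1 >= 7
    # Second inequality: bars removed, opposite side negated,
    #                    inequality reversed ->                2x + 1 <= -7
    return (2 * x + 1 >= 7) or (2 * x + 1 <= -7)

# The split version agrees with the original at every point on a test grid
print(all(original(x / 10) == split(x / 10) for x in range(-100, 101)))  # True
```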
Electrons are not enough: Cuprate superconductors defy convention

The latest news from academia, regulators, research labs and other things of interest

Posted: Mar 19, 2013

(Nanowerk News) To engineers, it’s a tale as old as time: Electrical current is carried through materials by flowing electrons. But physicists at the University of Illinois and the University of Pennsylvania found that for copper-containing superconductors, known as cuprates, electrons are not enough to carry the current ("Absence of Luttinger’s Theorem due to Zeros in the Single-Particle Green Function").

“The story of electrical conduction in metals is told entirely in terms of electrons. The cuprates show that there is something completely new to be understood beyond what electrons are doing,” said Philip Phillips, a professor of physics and of chemistry at the U. of I.

Graph showing the breakdown of Luttinger's theorem in the normal state of cuprate superconductors. The horizontal axis is the expected number of mobile electrons while the vertical axis is the measured number. The two should be equal if the theorem were true. (Graphic: Philip Phillips)

In physics, Luttinger’s theorem states that the number of electrons in a material is the same as the number of electrons in all of its atoms added together. Electrons are the sub-atomic particles that carry the current in a conductive material. Much-studied conducting materials, such as metals and semiconductors, hold true to the theorem.

Phillips’ group works on the theory behind high-temperature superconductors. In superconductors, current flows freely without resistance. Cuprate superconductors have puzzled physicists with their superconducting ability since their discovery in 1987.
The researchers developed a model outlining the breakdown of Luttinger’s theorem that is applicable to cuprate superconductors, since the hypotheses that the theorem is built on are violated at certain energies in these materials. The group tested it and indeed found discrepancies between the measured charge and the number of mobile electrons in cuprate superconductors, defying Luttinger.

“This result is telling us that the physics cannot be described by electrons alone,” Phillips said. “This means that the cuprates are even weirder than previously thought: Something other than electrons carries the current.”

“Theorists have suspected that something like this was true but no one has been able to prove it,” Phillips said. “Electrons are charged. Therefore, if an electron does not contribute to the charge count, then there is a lot of explaining to do.”

Now the researchers are exploring possible candidates for current-carriers, particularly a novel kind of excitation called unparticles.
Topology and Its Applications “What’s it good for?” We have heard this question from disgruntled students countless times. In part to combat the attitude that mathematics is a theoretical discipline studied only for its own sake (not that there is anything wrong with that!), textbook authors now include real-world applications along with the usual abstract theory, then append the term “with applications” to the title. Even traditionally “pure” subjects such as abstract algebra have jumped on this bandwagon. With the publication of William Basener’s Topology and its Applications, topology is the latest theoretical subject to get the treatment. First and foremost, this is a topology textbook. It covers point set topology, topological and smooth manifolds, simplicial complexes, topological groups, fixed point theorems, vector fields, the fundamental group, and homology. This may seem like an ambitious list of topics for a text just over 300 pages, but the author strategically chooses to sacrifice depth for breadth, and does not delve too deeply into any of these topics. For example, in the chapters on algebraic topology he discusses the fundamental group and simplicial and singular homology, but does not mention exact sequences, torsion, homology with arbitrary coefficients, cohomology, duality, or functors. The applications are numerous and are spread throughout the book. They come in three varieties — topological applications (e.g., the ham sandwich theorem, the hairy ball theorem, Cantor sets), applications to other areas of mathematics (e.g., number theory, dynamical systems, geometry, differential equations, algebra), and real-world applications (e.g., population dynamics, robotics, cosmology, computer drawing software, economic game theory, computer graphics, condensed matter physics, computational algorithms). Overall the choice of applications is good — they are interesting and show a real use of the topological ideas. 
Most of the applications are presented as snapshots of how topology could be applied, not as an exhaustive study. Thus the text could be used as a springboard to further investigation. I discovered one or two ideas that I will stash away as future research projects for talented mathematics majors. With this purpose in mind, it would have been nice to have a more extensive bibliography with suggested readings.

Perhaps most useful to the budding mathematician are the applications of topology to other areas of mathematics. Mathematicians know that topology is ubiquitous in modern mathematics, but most students have to learn this the hard way — seeing a little topology here, a little more there. This textbook does an excellent job of showing how topology pervades the other mathematical disciplines.

My main complaint about the book involves the physical presentation of the material. I was repeatedly frustrated trying to find things in the text. The bare-bones index was of little use and the internal numbering scheme made it difficult to refer back to items (for instance, Chapter 5 begins with Lemma 31, Core Intuition 13, Lemmas 32 and 33, Definition 64, Theorem 61, and Proposition 25). The artwork in the book is very inconsistent. The hand-drawn line art is often wiggly and amateurish (of course, it was the artistically-challenged Poincaré who called topology “the art of reasoning well on badly made figures”). Color images that were taken from outside sources or generated by software were poorly reproduced in low-resolution grayscale containing distracting horizontal lines across the images. All of the images in the appendix on knot theory are missing their right half.

The material in Topology and its Applications should be accessible to any mathematics graduate student or to a strong undergraduate. Because of the emphasis on breadth over depth, it would be an excellent text for a student who wants to learn some topology but not to be a topologist.
This book would not be a substitute for one of the traditional textbooks on algebraic topology such as Munkres, Hatcher, Massey, Bredon, etc., but it could be used to supplement those. On the other hand, one could argue that Topology and its Applications contains the topology we would like any non-topologist, PhD mathematician to know. This book is a celebration of topology and its many applications. I enjoyed reading it and believe that it would be an interesting textbook from which to learn. The theory is presented well and the applications are varied, interesting, and not contrived. With a book such as this one, one can always point to topics or applications that the author could have included (differential topology, knot theory, applications to biology, etc.), but that is just personal preference. Basener does an excellent job of surveying the field of topology and arguing for its usefulness. After reading this textbook, a student will know “what topology is good for.” Dave Richeson is an Associate Professor of Mathematics at Dickinson College in Carlisle, PA. His interests include dynamical systems, topology, and the history of mathematics.
Splitting infinite sets

There are two questions here, an explicit one, and another (more vague) one that motivates it:

I am pretty certain the following should have a negative answer, but at the moment I'm not seeing how to argue about this and cannot locate an appropriate reference.

In set theory without choice, suppose $X$ is an infinite set such that for every positive integer $n$, we can split $X$ into $n$ (disjoint) infinite sets. Does it follow that $X$ can be split into infinitely many infinite sets? What would be a reasonably weak additional assumption to ensure the conclusion. ("Reasonably weak" would ideally be something that by itself does not suffice to give us that $X$ admits such a splitting, but I am flexible.)

This was motivated by a question at Math.SE, namely whether an infinite set can be partitioned into infinitely many infinite sets. This is of course trivial with choice. In fact, all we need to split $X$ is that it can be mapped surjectively onto ${\mathbb N}$. However, without choice there may be counterexamples: A set $X$ is amorphous iff any subset of $X$ is either finite or else its complement in $X$ is finite. It is consistent that there are infinite amorphous sets. If $X$ is infinite and a finite union of amorphous sets, then $X$ is a counterexample. The question is a baby step towards trying to understand the nature of other counterexamples.

Note that any counterexample must be an infinite Dedekind finite (iDf) set $X$. One can show that for any iDf $X$, ${\mathcal P}^2(X)$ is Dedekind infinite. For any $Y$, if ${\mathcal P}(Y)$ is Dedekind infinite, then $Y$ can be mapped onto $\omega$ (this is a result of Kuratowski, it appears in pages 94, 95 of Alfred Tarski, "Sur les ensembles finis", Fundamenta Mathematicae 6 (1924), 45–95). As mentioned above, our counterexample $X$ cannot be mapped onto $\omega$, so ${\mathcal P}(X)$ must also be an iDf set.
The second, more vague, question asks what additional conditions should a counterexample satisfy.

set-theory axiom-of-choice

If X is infinite, then 2^X can be mapped onto omega. This is provable in ZF and has nothing to do with X being Dedekind finite. – Ricky Demer Jan 5 '11 at 2:18

@Ricky: Sure. That was poorly phrased. I've changed the sentence into what I really meant. – Andres Caicedo Jan 5 '11 at 2:53

Could one of the compactness theorems of logic be used here? – Michael Hardy Jan 6 '11 at 7:47

@Michael: I am not sure I see how. Given $X$ you would need to devise a theory (presumably with infinitely many relational symbols that would play the role of the partition) for which there is a model with universe equipotent to $X$. This seems very delicate (we do not have the Löwenheim–Skolem theorem without choice). Moreover, we need to ensure that the theory has no models of size $Y$ for any amorphous $Y$. Plus, we would need to anticipate (to set up the language!) the infinite set $Z$ such that $X$ is a union indexed by $Z$ of infinite sets (I think $Z$ doesn't need to be countable). – Andres Caicedo Jan 6 '11 at 7:58

@Michael: Given $X$, the other possibility I can think of would be to have the language contain a relational symbol $R$ and a function symbol $f$, and set up a theory that would have a model of the form $X\sqcup Y$ where $R$ is interpreted by $X$, $Y$ is infinite, and $f\upharpoonright X$ is a function from $X$ onto $Y$ with the preimage of each element of $Y$ being infinite. Again, the lack of an appropriate version of Löwenheim–Skolem is a serious issue here. – Andres Caicedo Jan 6 '11 at 8:01

1 Answer

Define a permutation model of ZFA as follows. Starting as usual (Ch 4 of Jech's Axiom of Choice) from a well-founded model $\mathcal M$ of ZFAC with infinite set $A$ of atoms, let $G$ be the group of all permutations of $A$; so $G$ can be identified with the group of all automorphisms of $\mathcal M$.
For each finite partition $T$ of $A$, let $G_{(T)}$ be the group of permutations in $G$ which fix each element of $T$ (meaning $\sigma\in G_{(T)}$ iff for each $B\in T$ we have that $b\in B$ implies $\sigma b\in B$). Let $\mathcal F$ be the set of subgroups of $G$ which contain $G_{(T)}$ for some finite partition $T$; then $\mathcal F$ is a normal filter of subgroups of $G$, and contains the stabilizer subgroup of each atom in $A$.

As usual, a set or atom $x\in\mathcal M$ is called symmetric if its stabilizer subgroup is a member of $\mathcal F$, and we let $\mathcal N$ be the class of hereditarily symmetric elements of $\mathcal M$. Then $\mathcal N$ is a model of ZFA providing a counterexample. The model $\mathcal N$ has all the finite partitions of $A$ found in $\mathcal M$, but every infinite partition of $A$ into non-singletons would fail to be symmetric.

Hi Eric. Welcome to MO! Many thanks; I definitely need to become more comfortable with this technique. – Andres Caicedo Jan 9 '11 at 15:14

Hello and thanks. Yes, if you like this kind of question, then the technique is worthwhile. But permutation groups may not be necessary. If you prefer, you could think about $L(S)$, where, in a model of ZFAC, $S$ is the set of finite partitions of the set of atoms. (Does anyone in fact prefer to think of it that way?) So far, I don't have an answer to your second question. – Eric Hall Jan 9 '11 at 16:26
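As an editorial aside on the remark in the question that a surjection onto $\mathbb N$ already suffices: the with-choice-free argument can be written in one display. Fix any partition of $\mathbb N$ into infinitely many infinite pieces, say by the exponent of $2$ in $m+1$, and pull it back along the surjection; each preimage is infinite because it maps onto an infinite set.

```latex
% Given a surjection f : X -> N, pull back a partition of N into
% infinitely many infinite pieces, e.g. A_n = { m : v_2(m+1) = n }.
\[
\mathbb{N} \;=\; \bigsqcup_{n\in\mathbb{N}} A_n,
\qquad
X \;=\; \bigsqcup_{n\in\mathbb{N}} f^{-1}(A_n),
\]
% and each f^{-1}(A_n) is infinite, since f maps it onto the infinite set A_n.
```

No choice is used here: the partition of $\mathbb N$ is explicitly definable, and the preimages are determined by $f$ itself.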
A Class of Weingarten Surfaces in Euclidean 3-Space

Abstract and Applied Analysis
Volume 2013 (2013), Article ID 398158, 6 pages

Research Article

^1School of Mathematics and Quantitative Economics, Dongbei University of Finance and Economics, Dalian 116025, China
^2College of Mathematics and Computational Science, Shenzhen University, Shenzhen 518060, China

Received 21 April 2013; Revised 31 July 2013; Accepted 19 August 2013

Academic Editor: Ondřej Došlý

Copyright © 2013 Yu Fu and Lan Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The class of biconservative surfaces in Euclidean 3-space $\mathbb{E}^3$ is defined in (Caddeo et al., 2012) by the equation $A(\nabla H) = -H\nabla H$ for the mean curvature function $H$ and the Weingarten operator $A$. In this paper, we consider the more general case: surfaces in $\mathbb{E}^3$ satisfying $A(\nabla H) = kH\nabla H$ for some constant $k$ are called generalized biconservative surfaces. We show that this class of surfaces consists of linear Weingarten surfaces. We also give a complete classification of generalized biconservative surfaces in $\mathbb{E}^3$.

1. Introduction

Let $x\colon M \to \mathbb{E}^m$ be an isometric immersion of a submanifold into Euclidean (pseudo-Euclidean) space. We denote by $x$, $\vec{H}$, and $\Delta$ the position vector, the mean curvature vector field, and the Laplace operator of $M$, respectively, with respect to the induced metric. The submanifold $M$ in $\mathbb{E}^m$ is said to be biharmonic if it satisfies the equation $\Delta\vec{H} = 0$. According to the well-known Beltrami formula, the biharmonic condition in Euclidean space is equivalent to the equation $\Delta^2 x = 0$. There is a well-known conjecture of Chen [1].

Chen's Conjecture. The only biharmonic submanifolds of Euclidean spaces are the minimal ones.

This conjecture has been proved by some geometers for some special cases. For example, Chen proved that every biharmonic surface in the Euclidean 3-space $\mathbb{E}^3$ is minimal.
Hasanis and Vlachos [2] proved that every biharmonic hypersurface inis minimal, also see [3]. For the general case, the conjecture is still open so far. The study of biharmonic submanifolds is nowadays a very active research subject. Many interesting results on biharmonic maps and submanifolds have been obtained in the last decade, see [1–13]. Very recently, Caddeo et al. introduced the notion of biconservative submanifolds in [14], which is a natural generalization of biharmonic submanifolds. It is interesting that biconservative submanifolds form a much bigger family of submanifolds including biharmonic submanifolds. Recall a well-known result; see, for instance, [3]. Theorem A. Letbe a hypersurface with mean curvature vector. Then,is biharmonic if and only if the following equations hold: whereis the shape operator of the hypersurface with respect to the unit normal vector. Following the definition of Caddeo et. al. in [14], a hypersurfacein an -dimensional Euclidean spaceis called biconservative if In general, a submanifold is biconservative if the divergence of the stress bienergy tensor vanishes. In 1995, Hasanis and Vlachos, in [2], firstly studied biconservative hypersurfaces, which are also called -hypersurfaces. The authors gave a classification of biconservative hypersurfaces in Euclidean 3-spaces and 4-spaces. Recently, Caddeo et al. [14] investigated biconservative surfaces in the three-dimensional Riemannian space forms. Moreover, they proved that a biconservative surface in Euclidean 3-space is either a CMC (constant mean curvature) surface or a surface of revolution. This class of surfaces carry some interesting geometry. It was proved in [14] that the mean curvature functionof a non-CMC biconservative surface in a three-dimensional space formsatisfies the following relation: whereanddenote the Gaussian curvature and mean curvature of the surfaces, respectively. 
Clearly, the forementioned relation implies that all the biconservative surfaces in the Euclidean 3-space are linear Weingarten surfaces. A surface is called a Weingarten surface if there exists the Jacobi equationbetween the Gaussian curvatureand the mean curvatureon the surface. Weingarten surfaces were introduced by Weingarten in 1861 in the context of the problem of finding all surfaces isometric to a given surface of revolution. Along the history, they have been of interest for geometers. There is a great amount of literature on Weingarten surfaces, beginning with works of Chern, Hartman, Winter, and Hopf in the fifties of the last century. For a long time, many geometers tried to look for examples of linear Weingarten surfaces, for example, see [10]. For surfaces in, the biconservative condition is equivalent to the equation Observe from [14] that the forementioned equation corresponds to a class of linear Weingarten surfaces, which include CMC surfaces and a family of surfaces of revolution. From the view of equation, a natural idea is to extend this class of surfaces in order to search for more examples of Weingarten surfaces of revolution. Hence, from the view of geometry, we propose to study the surfaces insatisfying a more general equation: We would like to call this new class of surfaces is generalized biconservative In this note, we focus on the equation and study this class of surfaces in. Precisely, we will prove that any generalized biconservative surface in Euclidean 3-space is a linear Weingarten surface satisfying a more general relationfor some constant. A local classification of generalized biconservative surfaces inis also obtained. Note that our method is slightly different from the method developed by Caddeo et al. in [14]. 2. Preliminaries Let be an isometric immersion of a surfaceinto. Denote the Levi-Civita connections of andbyand, respectively. Letanddenote vector fields tangent to, and letbe a normal vector field. 
The Gauss and Weingarten formulas are given, respectively, by (cf. [8, 15]) where,are the second fundamental form and the shape operator. It is well known that the second fundamental formand the shape operatorare related by The Gauss and Codazzi equations are given respectively by whereis the curvature tensor of the Levi-Civita connection on. The mean curvature vector fieldand the Gauss curvature ofare given respectively, by As known from the Introduction, a surfaceinis biconservative if the mean curvature functionsatisfies Motivated by the above equation for biconservative surfaces in, we propose to the notion of generalized biconservative surfaces in. Definition 1. A surface in Euclidean 3-spaceis called generalized biconservative surface if the mean curvature functionand the Weingarten operator satisfy a equation for some. Note that this class of surfaces include all the biconservative surfaces as a subclass when. Clearly, all of the CMC surfaces inare trivially generalized biconservative surfaces. This is also the case of biconservative surfaces. We are interested in the case of non-CMC surfaces in. 3. The Characterizations of Generalized Biconservative Surfaces In this section, let us focus on the situation of non-CMC generalized biconservative surfaces in. Suppose thaton any point. It follows from (12) thatis a principal direction andis the corresponding principal curvature. We can choose a local orthonormal frame fieldsuch thatis parallel to. Therefore, we have. Since (12) gives, it follows that. According to the Gauss equation, the Gaussian curvatureis given by which implies the following. Theorem 2. The generalized biconservative surfaces inare linear Weingarten surfaces. If we put, then. Using the remark above, the Codazzi equation reduces to Sinceis nonconstant, fromone has. So, the second equation of (14) yields. Moreover, the first equation of (14) implies that. Without the loss of generality, one assumes that . 
According to the second equation of (14), one divides it into the following two cases. Case A (). In this case, the surface is flat. Then, the second equation of (14) yieldsas well. Choose the local coordinates onasand. By applying the Gauss and Weingarten formulas (6) and (7) respectively, the immersion satisfies that By solving the second and third equations of (15), we obtain that for a constant vectorand a curvein. Substitute (17) into the first equation of (15) and the first equation of (16), respectively. Combining these equations, we obtain a three-order differential equation as follows: In order to solve the above equation, we introduce two functions,(vector-valued function) andby puttingand. Note thatis the nonzero principal curvature andis not constant. Denote by “” the derivative with respect to the new variable. With these symbols, (18) becomes whose solution is given by whereandare constant vectors in. Consequently, by a suitable translation, the immersionis given by Considering the metric of surfaces, we may choose that Hence, the surface can be expressed as Remark that the surface (23) is a cylinder, but not a circular cylinder, since the curvature of the curveis not constant. Case B (). Let. Since, it follows from the second equation of (14) that. Therefore, there exist local coordinatesonsuch thatand. Then, the metric tensor ofis given by Since, we have thatis a function depending only on the variable. Consequently, the Levi-Civita connectionis given by the expressions and the second fundamental form is given by Moreover, it follows from (25) and second fund form that the Gauss and Weingarten formulas (6) and (7) yield, respectively By (28), the compatibility condition of PDE system (27) is given by Integrating on (29), we obtain for some integral constant. Clearly, we havefor nonconstant function. Solving the second equation of (27), the immersionis given by for two vector-valued functionsandin. Case B.1(k = 0). 
In this case, (27) and (28) become It follows from (30) that for some constant, and. Substituting (31) into the first equation of (32), after a suitable translation, we obtain the immersion whereis another curve in. Substituting (35) into the third equation of (32) and applying (33), we have the following three-order differential equation: By (30), the solution of (36) is given by for constant vector,in. Hence, the immersion becomes In view of the metric (24), one can obtain that,,are mutual orthonormal and After choosing,, andas the immersion can be expressed as Note that this surface is a cone. Case B.2 (k ≠ 0,1,2). Substituting (31) into the first and third equations of (27), we have which is equivalent to Substituting (29) and (30) into the previous equation in succession, we have In view of (44), the two sides of the equation have different variables, respectively. Hence, we have for some constant vectorin. Solving (45) gives for two constant vectorsandin. Looking at (31), we may assume that . In fact, the immersion can be rewritten as whereand. Hence, the immersion becomes Solving (46) gives for a constant vector. One can compute from (49) that It follows from the above expressions and the metric (24) that Combining the previous expression (52) with (30) gives After a change of the variable, we can assume. Hence, the three vectors,,incan be chosen as Now, let us consider (50), which can be rewritten as whereis the derivative ofwith respect to. By applying (30), (55) becomes By solving (56) for some constant vectorin. Combining (49) with (54) and (57), and by a suitable translation, we obtain the immersion where In this case, the immersion is a surface of revolution with non-constant mean curvature. In summary, we have the following classification result. Theorem 3. Letbe a nondegenerate generalized biconservative surface immersed in the 3-dimensional Euclidean space. 
Then, the immersionis either a CMC surface or locally given by one of the following three surfaces: (1) a cylinder given by where the function satisfies ; (2) a cone given by where ; (3) a surface of revolution given by where is defined as for . 4. Some Examples of Generalized Biconservative Surfaces In this section, we give some examples of generalized biconservative surfaces (3) in Theorem 3, depending on different values for. Example 1. In the case, the functioncan be integrated as for some integral constant. Hence, by a suitable translation, the non-CMC biconservative surface in(see also [14]) is given by whereis defined as See Figure 1. Example 2. For, the functioncan be integrated as for some integral constantand. We have a non-CMC generalized biconservative surface (after a suitable translation) in, given by The authors would like to thank the referee for giving very valuable suggestions and comments to improve the present paper. Lan Li author was supported by the National Natural Science Foundation of China (11201309), the Tianyuan Youth Fund of Mathematics in China (11126070), and the Natural Science Foundation of SZU (Grant no. 201111).
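The curvature machinery quoted in Section 2 (mean curvature from the trace of the shape operator, Gaussian curvature from its determinant) can be sanity-checked numerically. The sketch below is generic differential geometry, not code from the paper: it builds the first and second fundamental forms of a round sphere of radius R by finite differences and recovers |H| = 1/R and K = 1/R² up to the orientation of the normal.

```python
import math

# Numerical check of the classical curvature formulas H = (1/2) tr A,
# K = det A, on a sphere of radius R, where |H| = 1/R and K = 1/R^2.

R = 2.0

def r(u, v):
    # Standard parametrization of the sphere of radius R (u = polar angle).
    return (R * math.sin(u) * math.cos(v),
            R * math.sin(u) * math.sin(v),
            R * math.cos(u))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

h = 1e-5

def d(f, u, v, i):
    # Central first partial derivative in argument i (0 -> u, 1 -> v).
    e = (h, 0) if i == 0 else (0, h)
    p, m = f(u + e[0], v + e[1]), f(u - e[0], v - e[1])
    return tuple((a - b) / (2 * h) for a, b in zip(p, m))

def d2(u, v, i, j):
    # Second partials of r via nested central differences.
    return d(lambda a, b: d(r, a, b, j), u, v, i)

u, v = 0.7, 0.3
ru, rv = d(r, u, v, 0), d(r, u, v, 1)
n = cross(ru, rv)
nn = math.sqrt(dot(n, n))
n = tuple(x / nn for x in n)                 # unit normal

E, F, G = dot(ru, ru), dot(ru, rv), dot(rv, rv)        # first fundamental form
L = dot(d2(u, v, 0, 0), n)                             # second fundamental form
M = dot(d2(u, v, 0, 1), n)
N = dot(d2(u, v, 1, 1), n)

K = (L * N - M * M) / (E * G - F * F)                  # Gaussian curvature, det A
H = (E * N + G * L - 2 * F * M) / (2 * (E * G - F * F))  # mean curvature, tr A / 2

assert abs(K - 1.0 / R**2) < 1e-4
assert abs(abs(H) - 1.0 / R) < 1e-4
```

Applied to an explicit parametrization of one of the surfaces in Theorem 3, the same routine would let one verify a linear Weingarten relation between K and H point by point.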
RootsWeb: GENEALOGY-DNA-L [DNA] Time to Most Recent Common Ancestor

GENEALOGY-DNA-L Archives
Archiver > GENEALOGY-DNA > 2003-11 > 1068849443

From: "Larry Dean" <>
Subject: [DNA] Time to Most Recent Common Ancestor
Date: Fri, 14 Nov 2003 14:37:23 -0800

I would appreciate some assistance understanding the probability of time to most recent common ancestor. The material FTDNA sent me with my 25-marker YDNA results includes some graphs of probability vs. Time to MRCA. If I eyeball those probability distributions I note that the mode of the 24/25 curve is about 7 generations and the median or 50% point is roughly 11 generations. Yet the table included with the FTDNA material says that the 50% probability point in that MRCA distribution is no longer than 18 generations. Similarly, eyeballing the 23/25 graph indicates that the mode is about 14 generations and the median or 50% point is roughly 18 generations. But Ann Turner's MRCA calculator gives 28 generations for the median of this probability distribution. Can someone help me understand this?

Larry Dean
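For what it's worth, the qualitative gap Larry describes (mode well below the median for the same mismatch count) already shows up in a toy Bayesian calculation. The sketch below is emphatically not FTDNA's or Ann Turner's actual model: it assumes a flat prior, a single made-up mutation rate of 0.002 per marker per generation per line, and no back-mutation. It only illustrates that the posterior over generations is strongly right-skewed, so the mode sits below the median, and that more mismatches push both numbers further back.

```python
from math import comb

# Toy TMRCA posterior -- illustrative only, NOT FTDNA's model.
MU = 0.002          # assumed per-marker, per-generation mutation rate (per line)
MARKERS = 25
MAX_GEN = 200

def likelihood(g, matches):
    # P(`matches` of MARKERS markers agree) after g generations on each of
    # two lines (2g transmission events), ignoring back-mutation.
    p = (1.0 - MU) ** (2 * g)          # a single marker still matches
    return comb(MARKERS, matches) * p**matches * (1 - p)**(MARKERS - matches)

def posterior(matches):
    # Flat prior over 1..MAX_GEN generations, normalized.
    w = [likelihood(g, matches) for g in range(1, MAX_GEN + 1)]
    total = sum(w)
    return [x / total for x in w]

def median_gen(matches):
    acc = 0.0
    for g, p in enumerate(posterior(matches), start=1):
        acc += p
        if acc >= 0.5:
            return g

def mode_gen(matches):
    post = posterior(matches)
    return max(range(1, MAX_GEN + 1), key=lambda g: post[g - 1])

# Right skew: the mode sits below the median for the same mismatch count.
assert mode_gen(24) < median_gen(24)

# More mismatches push the whole distribution toward the past.
assert median_gen(24) < median_gen(23)
```

The specific generation counts this toy model produces depend entirely on the assumed mutation rate and prior, which is one reason different tables and calculators quote different 50% points for the same 24/25 or 23/25 result.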
The Comonad.Reader

Sun 25 Dec 2011

Andrej Bauer recently gave a really nice talk on how you can exploit side-effects to make a faster version of Martin Escardo's pseudo-paradoxical combinators. A video of his talk is available over on his blog, and his presentation is remarkably clear, and would serve as a good preamble to the code I'm going to present below. Andrej gave a related invited talk back at MSFP 2008 in Iceland, and afterwards over lunch I cornered him (with Dan Piponi) and explained how you could use parametricity to close over the side-effects of monads (or arrows, etc) but I think that trick was lost in the chaos of the weekend, so I've chosen to resurrect it here, and improve it to handle some of his more recent performance enhancements, and show that you don't need side-effects to speed up the search after all!

Last time we derived an entailment relation for constraints, now let's get some use out of it.

Reflecting Classes and Instances

Most of the implications we use on a day to day basis come from our class and instance declarations, but last time we only really dealt with constraint products.

Max Bolingbroke has done a wonderful job on adding Constraint kinds to GHC. Constraint Kinds adds a new kind Constraint, such that Eq :: * -> Constraint, Monad :: (* -> *) -> Constraint, but since it is a kind, we can make type families for constraints, and even parameterize constraints on constraints. So, let's play with them and see what we can come up with!

Today I hope to start a new series of posts exploring constructive abstract algebra in Haskell. In particular, I want to talk about a novel encoding of linear functionals, polynomials and linear maps in Haskell, but first we're going to have to build up some common terminology. Having obtained the blessing of Wolfgang Jeltsch, I replaced the algebra package on hackage with something... bigger, although still very much a work in progress.

I gave a talk last night at Boston Haskell on finger trees.
In particular I spent a lot of time focusing on how to derive the construction of Hinze and Paterson's 2-3 finger trees via an extended detour into a whole menagerie of tree types, and less on particular applications of the final resulting structure.

I have updated the reflection package on hackage to use an idea for avoiding dummy arguments posted to the Haskell cafe mailing list by Bertram Felgenhauer, which adapts nicely to the case of handling Reflection. The reflection package implements the ideas from the Functional Pearl: Implicit Configurations paper by Oleg Kiselyov and Chung-chieh Shan. Now, you no longer need to use big scary undefineds throughout your code and can instead program with implicit configurations more naturally, using Applicative and Monad sugar.

Updated my little type-level 2s and 16s complement integer library to be ghc 6.6 friendly and uploaded it to hackage based on popular (er.. ok, well, singular) demand. O.K. it was more of a polite request, but I did update it.
Michigan State University Libraries
MSU Libraries Research Guides

• Electronic Mathematics Journals: A list of electronic journals in mathematics/statistics in alphabetical order.
• Mathematics Research Guide: This general guide contains information on the most frequently used print and electronic resources in mathematics.
• Statistics and Probability Research Guide: This general guide contains information on the most frequently used print and electronic resources in statistics and probability.
• Mathematical Biology Research Guide: Mathematical Biology is an interdisciplinary field which uses mathematics to model biological processes.
• Famous Mathematicians: Information on famous mathematicians and their contributions to mathematics.
• Library Resources for College Mathematics Preparation: Suggested library resources for preparing for your college mathematics courses.
[Numpy-discussion] float32 to float64 casting
Olivier Delalleau shish@keba... Fri Nov 16 05:47:42 CST 2012

2012/11/16 Charles R Harris <charlesr.harris@gmail.com>
> On Thu, Nov 15, 2012 at 8:24 PM, Gökhan Sever <gokhansever@gmail.com> wrote:
>> Hello,
>> Could someone briefly explain why these two operations are casting my
>> float32 arrays to float64?
>> I1 (np.arange(5, dtype='float32')).dtype
>> O1 dtype('float32')
>> I2 (100000*np.arange(5, dtype='float32')).dtype
>> O2 dtype('float64')
> This one depends on the size of the multiplier and is first present in
> 1.6.0. I suspect it is a side effect of making the type conversion code
> sensitive to magnitude.
>> I3 (np.arange(5, dtype='float32')[0]).dtype
>> O3 dtype('float32')
>> I4 (1*np.arange(5, dtype='float32')[0]).dtype
>> O4 dtype('float64')
> This one probably depends on the fact that the element is a scalar, but
> doesn't look right. Scalars are promoted differently. Also holds in numpy
> 1.5.0 so is of old provenance.
> Chuck

My understanding is that non-mixed operations (scalar/scalar or array/array) use casting rules that don't depend on magnitude, and the upcast of int{32,64} mixed with float32 has always been float64 (probably because the result has to be a kind of float, and float64 makes it possible to represent exactly a larger integer range than float32). Note that if you cast 1 into int16 the result will be float32 (I guess float32 can represent exactly all int16 integers).

-=- Olivier
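Olivier's claim about exact integer representation (float64 covers a larger exact-integer range than float32, and float32 holds every int16 value exactly) can be checked without NumPy, using only the standard struct module. This is an illustrative sketch, not part of the original thread; the helper name is made up here:

```python
import struct

def roundtrips_float32(n):
    # True if the integer n survives a round-trip through IEEE-754 single precision
    return int(struct.unpack('f', struct.pack('f', float(n)))[0]) == n

# every int16 value fits exactly in a float32 (24-bit significand); sampled here
assert all(roundtrips_float32(n) for n in range(-2**15, 2**15, 257))

# but not every int32 does: 2**24 + 1 is the first positive integer float32 drops
print(roundtrips_float32(2**24), roundtrips_float32(2**24 + 1))  # True False
```

This is why promoting an int32 or int64 mixed with float32 up to float64 is the safe choice: float64's 53-bit significand represents the entire int32 range exactly.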
Do your kids cheer when you tell them it's time to practice math facts? Do they beg you for more time to work on math? Do they boast to you about how good they are at math? If not, then take a free trial of Reflex and see how it can change your kids' attitudes about practicing math facts and about their own math abilities. ExploreLearning Reflex is a next-generation online system that assesses students' abilities, coaches them in individualized sessions, challenges and engages them with exciting games tailored to their ability levels, and rewards their success. Try Reflex today.

Greetings! I have again tried to find something for everybody, no matter what your grade level. Hope these resources are helpful!

1. Math Mammoth news
2. Strategies for addition & subtraction facts
3. Two free math books: Geometry in Art & Fun Math for Young Learners
4. Area Battle game
5. Tidbits

1. Math Mammoth news

Just an advance sale notice... I will be running a sale around Thanksgiving, and all downloads & CDs will be 25% off! More upcoming news: the Math Mammoth PDF files will soon be enabled for annotating. This means that you will be able to open the file in Adobe Acrobat Reader (version 9 or X) and use a neat "typewriter" tool to actually TYPE in the document. You can also highlight words, add sticky notes, or draw circles, lines, polygons, and more! So... your children can actually fill in the book on the computer, if they so wish, including completing geometric drawing problems. While I cannot check this personally, I THINK this will also work on a tablet device such as an iPad. However, it looks like this will NOT work on the PDFs sold at Currclick, because they add their own security settings (and watermark) to the files... I have total control over the ones sold at Kagi but not over those sold at Currclick. If you are an old customer from Kagi and would need these types of files, please email me and I can provide you new download links.
Make sure to include the email address or name you used when ordering at Kagi. If you're an old customer from Currclick, and purchased Light Blue series products, you will be able to simply redownload your order once I upload the new files. For other books, email me, and attach your email receipt from Currclick. (I have no access to the customer database at Currclick.) Note: If I get lots of requests for these new files, it may take WEEKS or even over a month to go through them. So please don't expect a fast response. And if I get over a thousand requests, I might not be able to do this at all... since there is no automated way to do it. We'll have to see.

2. Strategies for addition & subtraction facts

Learning math facts seems to be a constant "hot topic"; it never goes out of popularity! So... I want to help (or remind) you once again about the basics. I heartily recommend first concentrating on the concepts and strategies, and only later adding in random drills. That is true whether it is addition, subtraction, multiplication, or division. I've made a video about strategies for learning addition facts. In it, I list several strategies to learn addition facts for first and second grade math. I show the pattern of "Sums with 7", which also is used with other sums, then the 9-trick, the 8-trick, the doubles, doubles plus one more, and how to do random drill using the structure of the addition table. There is also a new video of mine about strategies for subtraction facts. I recommend the use of FACT FAMILIES in order to learn the basic subtraction facts. That way, when children have a subtraction problem, such as 7 − 5 = ____, they will learn to think through addition and fact families: 5 and 2 and 7 form a fact family, OR that 5 + 2 = 7, so 7 − 5 = 2. The video also shows a fun tool for subtraction facts called number rainbows!

DadsWorksheets.com is your source for easy-to-use math practice. Timed tests, fractions, patterns, rounding, word problems and more!
No registration or signup - just click, print and practice!

3. Two free math books: Geometry in Art & Fun Math for Young Learners

Renee from SchoolSparks.com is offering a totally free ebook for kindergarten math! It is basically a worksheet collection for K, colorful and beautiful. The book covers number recognition, counting, patterns, sorting and classifying, and an introduction to graphs, and has 53 pages. I was told about a free download of the book Geometry in Art, by Hilton Andrade de Mello. (On the page, you need to scroll down to the words "free downloading".) This book is a basic introduction to geometry in art, with topics such as polygons, spirals, polyhedrons, tessellations, perspective, the golden ratio, symmetry, geometry and symbolism, and geometry and informatics. It has lots of illustrations and artwork by various artists, and can serve as a nice introduction for anyone who hasn't studied these topics before.

4. Area "Battle" game

I talked about area and perimeter in my last newsletter as well, but this is too good to pass by! Math Hombre has posted a neat game for students to practice area and perimeter, with lots of images to show what's going on... It's basically the game of "war" adapted to area, and the students themselves make the cards for the game, thereby having to really apply the concept. Suitable for about 5th grade. Area Battle game

5. Tidbits

• Mowing the lawn - mathematically. Mowing the lawn is an oh-so-familiar task to many of us. Here's an article that approaches the problem of pushing the lawnmower the shortest distance possible mathematically.
It seems to be getting better and better, and there are more and more supportive resources as well, including a new GeoGebraTube! • Kakuro - cross-sums puzzle You fill in the white space using digits 1 to 9. The little numbers indicate what the sum of the neighboring row or column of blocks should be. You cannot use the same digit twice in any sum, so for example 12 cannot be 4 + 4 + 4. I find it gives nice simple addition practice for elementary school kids, yet is fun. This game is immensely popular in Japan, according to Wikipedia. • Tiling word problem This is good for middle school - whenever students are studying decimals. Square polystyrene tiles, 50 cm by 50 cm, are used to cover the ceiling of a classroom measuring 7.4 m by 4.5 m. Find the number of tiles that are needed, and the total cost if one tile costs $130. • Mathematics and Music I stumbled upon a nice article explaining some of the basics of how MATH and MUSIC connect together. This article talks about frequencies & wavelengths, shows graphs for the frequencies of the notes using the exponential function, graphs of sine waves, the harmonic series, the semi-tones within an octave, and so on. Feel free to forward this issue to a friend/colleague! Subscribe here Till next time, Maria Miller
6.2. Continuous Functions

If one looks up continuity in a thesaurus, one finds synonyms like lack of interruption. Descartes said that a function is continuous if its graph can be drawn without lifting the pencil from the paper.

Example 6.2.1:
• Use the above imprecise meaning of continuity to decide which of the two functions are continuous:
1. f(x) = 1 if x > 0 and f(x) = -1 if x < 0. Is this function continuous?
2. f(x) = 5x - 6. Is this function continuous?

However, if we want to deal with more complicated functions, we need mathematical concepts that we can manipulate.

Definition 6.2.2: Continuity
A function f is continuous at a point c in its domain D if: given any ε > 0 there exists a δ > 0 such that: if x ∈ D and |x - c| < δ then |f(x) - f(c)| < ε.
A function is continuous in its domain D if it is continuous at every point of its domain.

This, like many definitions and arguments, is not easy to understand. Continuous functions are precisely the functions that preserve limits, as the next proposition indicates:

Proposition 6.2.3: Continuity preserves Limits
If f is continuous at a point c in the domain D, and { x[n] } is a sequence of points in D converging to c, then lim f(x[n]) = f(c).
If lim f(x[n]) = f(c) for every sequence { x[n] } of points in D converging to c, then f is continuous at the point c.

Again, as with limits, this proposition gives us two equivalent mathematical conditions for a function to be continuous, and either one can be used in a particular situation.

Example 6.2.4:
• Which of the following two functions is continuous:
1. If f(x) = 5x - 6, prove that f is continuous in its domain.
2. If f(x) = 1 if x is rational and f(x) = 0 if x is irrational, prove that f is not continuous at any point of its domain.
• If f(x) = x if x is rational and f(x) = 0 if x is irrational, prove that f is continuous at 0.
• If f(x) is continuous in a domain D, and { x[n] } is a Cauchy sequence in D, is the sequence { f(x[n]) } also Cauchy?

Continuous functions can be added, multiplied, divided, and composed with one another and yield again continuous functions.

Proposition 6.2.5: Algebra with Continuous Functions
• The identity function f(x) = x is continuous in its domain.
• If f(x) and g(x) are both continuous at x = c, so is f(x) + g(x) at x = c.
• If f(x) and g(x) are both continuous at x = c, so is f(x) * g(x) at x = c.
• If f(x) and g(x) are both continuous at x = c, and g(c) ≠ 0, then f(x) / g(x) is continuous at x = c.
• If f(x) is continuous at x = c, and g(x) is continuous at x = f(c), then the composition g(f(x)) is continuous at x = c.

While this proposition may seem unimportant, it can be used to quickly prove the following:

Examples 6.2.6:
• Every polynomial is continuous in R, and every rational function r(x) = p(x) / q(x) is continuous whenever q(x) ≠ 0.
• The absolute value of any continuous function is continuous.

Continuity is defined at a single point, and the epsilon and delta appearing in the definition may be different from one point of continuity to another one. There is, however, another kind of continuity that works for all points of the domain at the same time.

Definition 6.2.7: Uniform Continuity
A function f with domain D is called uniformly continuous on the domain D if for any ε > 0 there exists a δ > 0 such that: if s, t ∈ D and |s - t| < δ then |f(s) - f(t)| < ε.

While this definition looks very similar to the original definition of continuity, it is in fact not the same: a function can be continuous, but not uniformly continuous. The difference is that the delta in the definition of uniform continuity depends only on epsilon, whereas in the definition of simple continuity delta depends on epsilon as well as on the particular point c in question.
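The dependence of delta on the base point can be made concrete numerically. The sketch below (not part of the original text) takes f(x) = 1/x on (0,1), the classic example of a function that is continuous but not uniformly continuous, and finds by bisection the largest workable delta at several points c for a fixed epsilon. The helper name and the monotonicity shortcut are assumptions made for this illustration:

```python
def max_delta(f, c, eps, lo=0.0, hi=1.0, steps=60):
    """Largest delta such that |x - c| < delta implies |f(x) - f(c)| < eps,
    found by bisection. Since f(x) = 1/x is monotone, the worst deviation on
    [c - delta, c + delta] occurs at an endpoint, so checking endpoints suffices."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        xs = (c - mid, c + mid)
        if all(x > 0 and abs(f(x) - f(c)) < eps for x in xs):
            lo = mid      # condition holds: delta can grow
        else:
            hi = mid      # condition fails: delta must shrink
    return lo

f = lambda x: 1 / x
for c in (0.5, 0.1, 0.01):
    print(c, max_delta(f, c, eps=0.5))
```

The printed deltas shrink with c (roughly like ε·c²), so no single δ serves every point of (0,1): exactly the failure of uniform continuity described above.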
The next theorem illustrates the connection between continuity and uniform continuity, and gives an easy condition for a continuous function to be uniformly continuous.

Theorem 6.2.9: Continuity and Uniform Continuity
If f is uniformly continuous in a domain D, then f is continuous in D.
If f is continuous on a compact domain D, then f is uniformly continuous in D.

Next, we will look at functions that are not continuous.
Math Help
March 24th 2009, 11:51 AM  #1

I have the polynomial $f(x) = x^4-10x^2+1$ over $\mathbb Q$. It can easily be seen that the Klein four group is the Galois group of this polynomial. How can I show that the corresponding polynomial $g(x) \in \mathbb F_p[x]$ is irreducible in $\mathbb F_p[x]$ for every prime $p$?

Last edited by Jhevon; March 27th 2009 at 12:05 AM.

Oh man, please, it's polynomial!

This is not true. If you take $p=3$, $x^4 - 10x^2 + 1 = (x^2+1)^2$. I think you want to prove that this polynomial is always reducible as opposed to irreducible. One way to prove it is by Dedekind's theorem (again) and by an understanding of transitive subgroups of $S_4$. The transitive subgroup $V$ (the Klein four group) consists of an identity element and products of $2$-cycles ($V=\{\text{id},(12)(34),(13)(24),(14)(23)\}$). Therefore, mod any $p$ the polynomial $x^4-10x^2+1$ either splits or factors into quadratic factors. Therefore, it is always reducible.

Sorry, what I meant was reducible, not irreducible; it was a mistake.

Here is another way of proving this which is elementary. Let $p$ be prime with $p\geq 7$. Notice the following factorizations:

$x^4-10x^2+1 = (x^2 + 2\sqrt{2}x-1)(x^2-2\sqrt{2}x-1)$ [1]
$x^4-10x^2+1 = (x^2 + 2\sqrt{3}x+1)(x^2 - 2\sqrt{3}x+1)$ [2]
$x^4-10x^2+1 = (x^2 - 5 + 2\sqrt{6})(x^2 - 5 - 2\sqrt{6})$ [3]

If $2$ is a square in $\mathbb{F}_p$ then we can use the factorization in [1], where we replace $\sqrt{2}$ by a square root of $2$ modulo $p$. If $3$ is a square in $\mathbb{F}_p$ then we can use the factorization in [2], where we replace $\sqrt{3}$ by a square root of $3$ modulo $p$. If neither $2$ nor $3$ is a square mod $p$, then $(2/p)=(3/p)=-1$ (Legendre symbols). However, this means $(6/p) = (2/p)(3/p) = (-1)(-1) = 1$. Therefore, $6$ is a square in $\mathbb{F}_p$ and so we can use the factorization in [3]. Therefore, $\overline{f(x)}\in \mathbb{F}_p[x]$ is always reducible for any $p$.
Last edited by ThePerfectHacker; March 28th 2009 at 03:11 PM.

why should p be greater than or equal to 7?
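The three-case proof above can be verified mechanically. The following sketch (my own illustration, not from the thread) picks factorization [1], [2], or [3] according to which of 2, 3, 6 is a square mod p, then multiplies the two quadratics back together to confirm they reproduce x^4 - 10x^2 + 1 over F_p:

```python
def is_qr(a, p):
    # Euler's criterion: for odd prime p, a is a square mod p iff a^((p-1)/2) == 1
    return pow(a % p, (p - 1) // 2, p) == 1

def sqrt_mod(a, p):
    # brute-force square root mod a small prime -- fine for illustration
    return next(r for r in range(p) if r * r % p == a % p)

def factor_pair(p):
    """Split x^4 - 10x^2 + 1 into two quadratics over F_p (p >= 5), following
    the three cases of the proof. Coefficients are listed lowest degree first."""
    if is_qr(2, p):
        s = sqrt_mod(2, p)                      # case [1]
        return (-1, 2 * s, 1), (-1, -2 * s, 1)
    if is_qr(3, p):
        s = sqrt_mod(3, p)                      # case [2]
        return (1, 2 * s, 1), (1, -2 * s, 1)
    s = sqrt_mod(6, p)                          # (2/p) = (3/p) = -1 forces (6/p) = 1
    return (-5 + 2 * s, 0, 1), (-5 - 2 * s, 0, 1)

def poly_mul(f, g, p):
    # multiply two quadratics mod p; result is a degree-4 coefficient list, lowest first
    out = [0] * 5
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

for p in [5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]:
    f, g = factor_pair(p)
    assert poly_mul(f, g, p) == [1 % p, 0, -10 % p, 0, 1 % p], p
print("x^4 - 10x^2 + 1 splits into quadratics mod every tested prime")
```

Incidentally, the check also succeeds for p = 5, suggesting the restriction to p ≥ 7 in the post is more conservative than necessary.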
Mass Concept Name: B Hope Status: educator Age: 30s Location: N/A Country: N/A Date: 9/6/2004 Question: One of my 11th grade students is having a really tough time with the concept of mass. Any suggestions? How can I go about giving her the simplest of explanations and clear it up for her. She thinks the definitions for Mass and matter are circular. Mass is the amount of matter in something and matter is anything with mass.... Replies: Your student is very perceptive and is to be congratulated for recognizing the circularity of the terms mass and matter, in most texts and references the two definitions ARE CIRCULAR. Van Nostrand's Scientific Encyclopedia defines "mass" as: " The physical measure of the principal inertial property of a body, i.e., its resistance to change of motion." It continues: " Under these circumstances [speeds small compared to the speed of light], the masses M1 and M2 of two bodies may be compared by allowing the two bodies to interact [whatever that means]. Then M1/M2 = |a1|/|a2| where |a1| and |a2| are the magnitudes of the respective accelerations of the two bodies as a result of the interaction." The two citations below are equally useless [that is a value judgment on my part]. What is glossed over in physics is that there are certain terms in the theory that cannot be defined within the system itself. "Matter" (or mass) is one of these. As you see from above there are logical difficulties because the definition introduces an undefined term "interact". Such theories are called "effective theories" because we study the "effects" of these undefined terms. Geometry provides an example. In plane geometry we study the behavior of points and lines, but these terms are not defined within the subject itself. They are the "undefined" terms whose relations to one another is the subject matter of geometry. 
If we introduce the concept of real number plane, we can define a point as an ordered pair of real numbers (X,Y), which gets us out of trouble with the definition of "point" only to introduce other undefined terms. For example what is the definition of "number"? Similarly, within the context of real numbers "i", the square root of (-1), is undefined. This only becomes defined when the theory of complex numbers is introduced. In my opinion we must grasp that within any description of nature there are going to be certain fundamental terms that are undefined within the system. This is also related to Goedel's incompleteness theorem. Vince Calder They really are not circular, but as you have indicated below, you can use one to describe the other. In our macro universe, and I am not going into quantum mechanics here, Matter is a term used to describe a collection of molecules/atoms which make up objects. We have either matter or energy (which of course are interrelated). When we generally speak of matter, that matter has a property called Mass. Mass is different then weight. Mass is a constant that remains with that object and has some control over how that object reacts in the universe. Bob Hartwell Mass and matter are completely different concepts. Matter is anything that can be touched -- that takes up space and is distinguishable from empty space. Matter is just another word for "stuff". Mass is a property of matter -- one of many properties, including color, shape, and charge. Mass is the property of matter that defines what force a piece of matter feels when it's in a gravitational field (i.e., near some other thing that has mass), and how strong a gravitational field the piece of matter generates. (Similarly, charge defines what force a piece of matter feels from an electromagnetic field, and how strong an electromagnetic field it generates.) But mass also defines what an object does when any force acts on it. 
Exactly the same property tells you two completely different things: 1) what gravitational force there is on a object, and 2) how that object responds to (is accelerated by) forces of any kind. I don't think anybody understands why, and people have gone nuts trying to measure any difference between (1) and (2). Tim Mooney B. Hope H., Mass is a measurement, a measure of how much matter there is. Volume is also a measure of how much matter there is, but a different kind of measure. An example is groceries. Some groceries have volume on the package. Some groceries have how much mass. Both tell you how much material you are buying. Matter is a state of existence. Some things in the universe qualify as matter. Some qualify as waves. When you get down to super-tiny scales, i.e. quantum physics, matter and waves start to merge. Things begin to have both matter and wave properties at the same time. On an everyday scale, matter interacts according to Newtonian physics. Waves interact according to different laws. Energy of matter is in the mass and speed. Energy of a wave is in the frequency. Matter is made of particles (quarks and leptons). Waves are not. A wave can be made of photons, e.g. microwaves. A wave can be made of the motion of matter, e.g. sound. Dr. Ken Mellendorf Physics Instructor Illinois Central College Dear B Hope, This is a common misconception that many students (and former students) have trouble with -- mass and matter are different (but related) concepts, and mass and weight are different (but related) concepts. So, start with a definition of matter. Matter is anything that takes up space (has volume) and has mass. So here are some examples of matter: rocks (solid), water (liquid), air (gas), your desk, your arm, your shoe. Now students seem convinced that everything (even the air!!) is matter. So, what are some things that are NOT matter (things that do not take up space and do not have mass)? 
Here are some "non-matter" things to get you started: heat, light, an idea. These are things that are NOT matter. Because of its definition, matter has four general properties, that is, four quantities that can always be assigned to something that is matter: mass, weight, volume, and density. Mass and weight are different, but related quantities. An object's mass is a measure of its inertia -- the more mass an object has, the more resistant it is to changes in motion. So a big rock has more mass than a small rock. The big rock also weighs more than the little rock in the earth's gravitational field. If you took both rocks way into the universe, outside of any gravitational field, both rocks would not have any weight -- but the big rock would still be more resistant to a change in motion than the little rock -- that is, the big rock still has more mass than the little rock, even though both rocks are "weightless". In the metric system, weight and mass are designated with different units. An object's weight is a measure of the force of gravity acting on the object, measured in Newtons. An object's mass is a measure of its resistance to inertia, measured in kilograms. Let us say that an object has a weight of about 60 Newtons (about 13.5 pounds) on the earth. Its mass would be about 6 kilograms. If you took that object to the surface of the moon, where the gravitational is less, its weight would now be about 10 Newtons (about 2.25 pounds), but its mass is still 6 kilograms. I hope this helps. The key point is that mass is a property of matter; by definition, anything that is matter has mass and takes up space. The property of mass is the object's resistance to changes in motion (its inertia). The greater the quantity of matter, the more inertia (hence, more mass) it has. Todd Clark, Office of Science U.S. Department of Energy Matter can be held. Mass is a measure of the amount of matter. 
Greg Bradburn

NEWTON is an electronic community for Science, Math, and Computer Science K-12 Educators, sponsored and operated by Argonne National Laboratory Educational Programs.
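The worked numbers in Todd Clark's answer (a 6-kilogram object weighing about 60 newtons on Earth and about 10 newtons on the Moon) follow from weight = mass × g. The g values below are standard figures assumed for this sketch, not taken from the answer itself:

```python
g_earth = 9.8    # m/s^2, surface gravity on Earth (assumed standard value)
g_moon = 1.62    # m/s^2, surface gravity on the Moon (assumed standard value)

mass = 6.0       # kg; mass does not change with location
weight_earth = mass * g_earth   # newtons
weight_moon = mass * g_moon     # newtons
print(round(weight_earth), round(weight_moon))  # 59 10
```

On either world the mass stays 6 kg; only the weight, the gravitational force acting on that mass, changes.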
Math Forum Discussions
Topic: Trigonometric area optimization
Replies: 13    Last Post: Dec 20, 2012 3:29 PM

bobmck, Re: Trigonometric area optimization. Posted: Dec 17, 2012 1:48 AM

On December 11, 2012, AdamThor wrote:

Hello. Earlier today, I took a math exam and there was one problem that I just couldn't solve. The problem is as follows: The lines y=10-2x, y=mx and y=(-1/m)x, where m > 1/2, form a right triangle. Find an m so that the area of the triangle is as small as possible. I just couldn't, for the life of me, find a suitable function which I could take the derivative of. I made some honest attempts at solving this, but nothing gave me a definitive answer.

A response to the responders: Gee, I wouldn't have guessed that this topic could run off in a direction of "who solved first" or who deserves "credit/applause." I have little interest in those aspects, but I will toss in my 2 cents... take it for what it's worth. I, too, appreciate the OP bringing an interesting problem to our attention and I think that we have all been trying to be helpful. My approach to the query by AdamThor was simply this: he indicated that this came up in an exam and he obviously had tried hard to find a solution, but was unsuccessful, and he wanted some answer. This, to me, shows that the exam had achieved one of its important purposes: help point out what you *don't* know... and possibly entice the testee to study further and fill in weak points in the subject matter. (Sometimes, in tests I have designed, I followed the maxim that challenge can be a good thing: if you achieve a perfect score on tests, then you haven't really been "tested"!)
As an instructor I would bend over backwards to help any student who showed that interest... particularly post-exam. My attention was drawn to the OP's mention of his exam, though he didn't go into details. But since this was posted on a geometry.college board, we might be able to guess at the appropriate level for a response. I like to work out problems with trig and calculus as much as anyone, and the OP was obviously already familiar with derivatives. But... we don't know exactly what kind of exam it was, only that this was "one problem (he) just couldn't solve," but I would guess that it was a timed event (versus a "take home" or "take all the time you want"). It might have been "multiple choice" (unlikely) or a "show all your work." (The "multiple choice" format becomes ineffective if the testee can just plug in the choices and see which one works, except if one of the choices is "none of the above".) But if this exam was timed then time management can be an important consideration. When time to solve becomes paramount, it can be useful to assume that there IS a solution that the exam constructor had in mind and that it is achievable within the time constraints. So, this becomes a search for one's mathematical insight or at least for an "Aha!" moment. Most any approach that requires deep calculation or solutions to equations that seem intractable is not likely to be successful (nor was that the exam's intent). So, even if this particular exam came at the end of a teaching unit on "the use of derivatives in analytical geometry," sometimes an exam question is intended to show you when a specific approach is NOT the most expeditious path. What was behind my own insight into this problem? I once observed a wire coat-hanger swinging on a nail on a wall. Its shape, other than the hook, was similar to an isosceles triangle (although not a right triangle), and it happened to be hanging not from its hook but rather from the top of its triangular shape.
I noticed a horizontal trim line on the wall behind the hanger (but above and parallel to the hanger's bottom when the hanger was at rest). When the hanger would swing back and forth in pendulum fashion, the area outlined by the triangle sides and the trim line varied from a larger to a smaller amount. But this was truly an example of triangles of equal height and third sides having a common direction: obviously the areas subtended depended only upon the length of wall trim subtended, and this led to the insight that isosceles yielded the smallest area. It didn't matter whether or not this was proven back in high school geometry class... the commonplace demonstration stuck with me! Thank you AdamThor and all responders for taking the time/effort and for taking me down a path of questioning methods and motives. Well... maybe all that was worth 3 cents!

Date      Subject                              Author
12/11/12  Trigonometric area optimization      AdamThor
12/11/12  Re: Trigonometric area optimization  Peter Scales
12/14/12  Re: Trigonometric area optimization  bobmck
12/14/12  Re: Trigonometric area optimization  Peter Duveen
12/15/12  Re: Trigonometric area optimization  AdamThor
12/15/12  Re: Trigonometric area optimization  Peter Duveen
12/15/12  Re: Trigonometric area optimization  Peter Duveen
12/16/12  Re: Trigonometric area optimization  Peter Scales
12/16/12  Re: Trigonometric area optimization  Peter Duveen
12/16/12  Re: Trigonometric area optimization  Peter Scales
12/17/12  Re: Trigonometric area optimization  bobmck
12/17/12  Re: Trigonometric area optimization  Peter Duveen
12/20/12  Re: Trigonometric area optimization  AdamThor
12/20/12  Re: Trigonometric area optimization  Peter Duveen
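For reference, the exam problem that started the thread can also be settled numerically. This sketch (mine, not from any post) writes the triangle's area as a function of m, using the fact that y = mx and y = -x/m always meet at the origin, and minimizes it by golden-section search; the function names are made up for the illustration:

```python
import math

def area(m):
    # vertices: the origin, plus the intersections of y = mx and y = -x/m
    # with the line y = 10 - 2x; area from the cross product of the two legs
    x1, y1 = 10 / (m + 2), 10 * m / (m + 2)
    x2, y2 = 10 * m / (2 * m - 1), -10 / (2 * m - 1)
    return abs(x1 * y2 - x2 * y1) / 2

def golden_min(f, a, b, tol=1e-9):
    # standard golden-section search for a unimodal minimum on [a, b]
    phi = (math.sqrt(5) - 1) / 2
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

m_best = golden_min(area, 0.51, 10)
print(round(m_best, 6), round(area(m_best), 6))  # ~3.0 and ~20.0
```

The search lands on m = 3 with minimum area 20, which agrees with the calculus route: differentiating the area expression and setting it to zero reduces to 3m² - 8m - 3 = 0, whose root with m > 1/2 is m = 3.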
Looking for a C code for roots of high order polynomials

05-16-2011, #1 (Registered User, Join Date: May 2011):
Hi there, I need to solve high-order polynomial equations like ax^15 + bx^12 + cx^9 + dx^6 + ex^3 + fx^2 + g = 0 in my research. Does anybody know where to find C code to solve for the roots? Many thanks.

Say hello to your new best friend: GSL - GNU Scientific Library - GNU Project - Free Software Foundation (FSF)

Can you give a little education about how to find what I want?

The red bit of text is a link. You click on it.

Why don't you use Matlab?

If you're asking what I think you're asking, then it is believed to be impossible. Wikipedia backs this up: For a univariate polynomial of degree less than five, we have closed-form solutions such as the quadratic formula. However, even this degree-two solution should be used with care to ensure numerical stability. The degree-four solution is unwieldy and troublesome. Higher-degree polynomials have no such general solution, according to the Abel-Ruffini theorem (1824, 1799).
(Signature: My homepage. Advice: Take only as directed - If symptoms persist, please see your debugger. Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong")

There are numerical methods to solve high-order polynomial equations. I've written one that gives me all the roots (including complex roots) using the Durand algorithm.
Last edited by Bayint Naung; 05-17-2011 at 12:39 AM.
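The thread never shows code. As a minimal sketch of the companion-matrix approach the later posts hint at (using NumPy's `roots` rather than GSL; the coefficient values below are placeholders I chose, not from the thread):

```python
import numpy as np

# p(x) = a*x^15 + b*x^12 + c*x^9 + d*x^6 + e*x^3 + f*x^2 + g
# Coefficients are listed highest degree first; the values are arbitrary.
a, b, c, d, e, f, g = 1.0, -2.0, 0.5, 3.0, -1.0, 2.0, -4.0
coeffs = [a, 0, 0, b, 0, 0, c, 0, 0, d, 0, 0, e, f, 0, g]

# np.roots computes eigenvalues of the companion matrix: all 15 roots,
# real and complex, of the degree-15 polynomial.
roots = np.roots(coeffs)
residuals = [abs(np.polyval(coeffs, r)) for r in roots]
print(len(roots), max(residuals))
```

GSL's polynomial solver (`gsl_poly_complex_solve`) implements the same companion-matrix idea in C, which fits the original request.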
Calculus/Related Rates/Solutions

1. A spherical balloon is inflated at a rate of $100 ft^3/min$. Assuming the rate of inflation remains constant, how fast is the radius of the balloon increasing at the instant the radius is $4 ft$?
$V=\frac{4}{3}\pi r^{3}$
Take the time derivative: $\dot{V}=4\pi r^{2}\dot{r}$
Solve for $\dot{r}$: $\dot{r}=\frac{\dot{V}}{4\pi r^{2}}$
Plug in known values: $\dot{r}=\frac{100}{4\pi4^{2}}=\mathbf{\frac{25}{16\pi} \frac{ft}{min}}$

2. Water is pumped from a cone-shaped reservoir (the vertex is pointed down) $10 ft$ in diameter and $10 ft$ deep at a constant rate of $3 ft^3/min$. How fast is the water level falling when the depth of the water is $6 ft$?
$V=\frac{1}{3}\pi r^{2}h=\frac{1}{3}\pi(\frac{h}{2})^{2}h=\frac{1}{12}\pi h^{3}$
Take the time derivative: $\dot{V}=\frac{1}{4}\pi h^{2}\dot{h}$
Solve for $\dot{h}$: $\dot{h}=\frac{4\dot{V}}{\pi h^{2}}$
Plug in known values: $\dot{h}=\frac{(4)(3)}{\pi6^{2}}=\mathbf{\frac{1}{3\pi} \frac{ft}{min}}$

3. A boat is pulled into a dock via a rope with one end attached to the bow of the boat and the other wound around a winch that is $2 ft$ in diameter. If the winch turns at a constant rate of $2 rpm$, how fast is the boat moving toward the dock?
Let $R$ be the number of revolutions made and $s$ be the distance the boat has moved toward the dock.
$\frac{R}{s}=\frac{1}{2\pi r}$ (each revolution adds one circumference of distance to $s$)
Solve for $s$: $s=2\pi rR$
Take the time derivative: $\dot{s}=2\pi r\dot{R}$
Plug in known values: $\dot{s}=2\pi(1)(2)=\mathbf{4\pi \frac{ft}{min}}$

4. At time $t=0$ a pump begins filling a cylindrical reservoir with radius 1 meter at a rate of $e^{-t}$ cubic meters per second. At what time is the liquid height increasing at 0.001 meters per second?

Last modified on 19 August 2011, at 17:59
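A quick numerical cross-check of solution 1 (this script is an added illustration, not part of the original solutions page):

```python
import math

# Balloon: V = (4/3)*pi*r^3 with dV/dt = 100 ft^3/min; evaluate dr/dt at r = 4 ft.
dV_dt, r = 100.0, 4.0
dr_dt = dV_dt / (4 * math.pi * r**2)   # chain rule, as in the worked solution
assert math.isclose(dr_dt, 25 / (16 * math.pi))

# Finite-difference check: push V forward a tiny time step and recover dr/dt.
dt = 1e-8
V = (4 / 3) * math.pi * r**3
r_next = (3 * (V + dV_dt * dt) / (4 * math.pi)) ** (1 / 3)
assert math.isclose((r_next - r) / dt, dr_dt, rel_tol=1e-5)
print(dr_dt)
```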
Velocity Reviews - Choosing a matrix library for image processing. Blitz++, MTL or others?

Guch:
I'll switch from Matlab to C++. So I want to find a C++ matrix library which can process images as conveniently as Matlab does. I've googled, and found that Blitz++ and MTL are popular and powerful. I want to know which of them fits image processing better. Or are there other choices? Any suggestion will be appreciated.

benben, 03-22-2006 09:09 AM:
Guch wrote:
> I'll switch from Matlab to C++. So I want to find a matrix library of
> C++, which can process images conveniently as Matlab does.
> I've googled, and found that Blitz++ and MTL are popular and powerful.
> I want to know which of them fits image processing better.
> Or have other choices?
> Any suggestion will be appreciated.

Guch:
But their functions are mostly the same.

syntheticpp@yahoo.com, 03-22-2006 12:08 PM:
Re: Choosing a matrix library for image processing. Blitz++, MTL or others?

Guch:
What's Adobe GIL's advantage over Blitz++? Is it efficient enough?

Peter Kümmel, 03-22-2006 12:36 PM:
GIL is specialized for images, so there are functions which Blitz does not have (all the color stuff). I don't know the internals of GIL (does it use expression templates?). But if you ask on the mailing list or the forum there, I'm sure they'll answer all your questions. When you only need standard image manipulations I assume GIL is the better choice, but if you want to implement complicated image-manipulation algorithms, then maybe Blitz is better, because it was designed to make the implementation of complex calculations more convenient than using pure C.

Catalin Pitis, 03-22-2006 02:44 PM:
I remember that Intel also has some matrix library, optimized for Intel processors (SIMD). I don't know what the constraints are from a platform-portability point of view...
Math with Mr. Chase

Friday, June 15--bruce.chase@lmsvsd.k12.ca.us if any questions--have a great summer!
Thursday, June 14--no homework
Wednesday, June 13--no homework
Tuesday, June 12--no homework
Monday, June 11--no homework
Friday, June 8--any missing work. Study to retake any old test
Thursday, June 7--any missing work. Study to retake any old test
Wednesday, June 6--any missing work. Study to retake any old test
Tuesday, June 5--Any missing work, study for 4th quarter test on Wednesday
Monday, June 4--Any missing work, study for 4th quarter test on Wednesday
Friday, June 1--Any missing work, study for any retakes
Thursday, May 31--Any missing work, study for any retakes
Wednesday, May 30--Study practice test for tomorrow's Grossmont Test
Tuesday, May 29--review sheet to practice for this Thursday's Grossmont Algebra Readiness Test
Friday, May 25--Any missing work
Thursday, May 24--Study for test
Wednesday, May 23--Practice test for Friday's test
Tuesday, May 22--12-3 worksheet
Monday, May 21--12-2 worksheet
Friday, May 18--11-6 worksheet--did in class, any missing work
Thursday, May 17--11-7 worksheet--did in class
Wednesday, May 16--11-4 worksheet
Tuesday, May 15--10-7 worksheet
Monday, May 14--10-5 and 10-6 #'s 11-26 in workbook
Friday, May 11--any missing work and studying to retake test
Thursday, May 10--any missing work
Wednesday, May 9--any missing work
Tuesday, May 8--any missing work
Monday, May 7--any missing work
Friday, May 4--any missing work
Thursday, May 3--any missing work
Wednesday, May 2--any missing work
Tuesday, May 1--Period 1: 11-7 #'s 1-4, and review worksheet. Periods 3-5: 12-3 and review worksheet
Monday, April 30--Period 1: 11-6 #'s 1 and 2, and review worksheet. Periods 3-5: 12-2 evens, and review worksheet
Friday, April 27--any missing work and study for test retake
Thursday, April 26--Period 1: 11-4 #'s 1 and 2 and review worksheet. Periods 3-5: 11-7 and review worksheet
Wednesday, April 25--Period 1 has lesson 11-4 in workbook and review sheet; Periods 2-5 have lesson 11-6 in workbook and review sheet
Tuesday, April 24--Period 1 has lesson 10-7 in workbook, skipping #14 and 20, plus review worksheet. Periods 2-5 have 11-4 in workbook and review worksheet
Monday, April 23--All classes have a review worksheet. Period 1 has #'s 1-10 in lessons 10-5 and 10-6. Periods 3-5 have all problems in lesson 11-3. They can estimate their graph if they do not have a protractor, and they can use a calculator for their data.
Friday, April 20--Period 4 has a review worksheet. All classes should work on missing work or study to retake any tests.
Thursday, April 19--Review worksheet and numbers 1-10 in workbook lessons 10-6 & 10-7. Period 1 did review worksheet and numbers 1-3 in workbook lessons 10-2 and 10-4
Wednesday, April 18--Review worksheet and numbers 1-10 in workbook lesson 10-5
Tuesday, April 17--Period 1 test tomorrow! Also review worksheet. Periods 3, 4, 5--review worksheet and first 3 problems in workbook lesson 10-4
Monday, April 16--Period 1--practice test; Periods 3, 4, 5--#'s 1-3 in lesson 10-2 in workbook, as well as a review worksheet as we get ready for CST testing.
Friday, March 30--any missing work from the first week of school, and study for test retake if you feel you did not know the material when we took the test.
Thursday, March 29--test today. Periods 3, 4, 6 took the test on their vote as being ready, so they have no homework unless they want to study tonight and take it tomorrow. Period 5 needs to study.
Wednesday, March 28--Period 1: 9-8 in workbook. Periods 3-5: practice test for test on Friday.
Tuesday, March 27--9-3 #9-7 in workbook--and any missing work for report card
Monday, March 26--9-5 in workbook
Friday, March 23--missing work or study to retake test
Thursday, March 22--9-3 in workbook, #'s 1-8
Wednesday, March 21--9-2 in workbook
Tuesday, March 20--9-1 in workbook
Monday, March 19--Worksheet on equations
Friday, March 16--Test on the worksheets from this week. This is all in their notes as well.
Thursday, March 15--7-7 worksheet on Surface Area of 3D figures
Wednesday, March 14--7-5 worksheet on Volume of Prisms and Cylinders
Tuesday, March 13--7-3 worksheet on Complex figures using area formulas
Monday, March 12--7-1 worksheet on Circumference and Area
Friday, March 9--test today. Any missing work can be done on the weekend
Thursday, March 8--study for test on equations
Wednesday, March 7--Practice Test Form D for Friday's test--can use any notes, show work on both sides of equation
Tuesday, March 6--Practice test for Friday's test--can use any notes, show work on both sides of equation
Monday, March 5--8-8 in workbook
Friday, March 2--any missing work
Thursday, March 1--8-7 in workbook, #'s 1-15 and 22-29
Wednesday, Feb 29--8-4 in workbook, on separate page
Tuesday, Feb 28--8-2 worksheet
Monday, Feb 27--8-3 in workbook, on separate page
Friday, Feb 24--Any missing work, and study to retake test if needed
Thursday, Feb 23--8-1 #'s 1-12, 26-28 in workbook
Wednesday, Feb 22--8-1 #'s 13-25 in workbook
Tuesday, Feb 21--Worksheet
Friday, Feb 17--Any missing work
Thursday, Feb 16--Any missing work
Wednesday, Feb 15--Study practice test, workbook, notes for test tomorrow
Tuesday, Feb 14--Finish practice test for Thursday's test
Friday, Feb 10--any missing work, and study for any test retake
Thursday, Feb 9--7-7 in workbook, on separate paper
Wednesday, Feb 8--7-5 in workbook, #'s 1-7, 11, 13
Tuesday, Feb 7--7-4 worksheet
Monday, Feb 6--7-3 in workbook, on separate paper
Friday, Feb 3--missing work or study to retake test
Thursday, Feb 2--worksheet on area
Wednesday, Feb 1--7-1 in workbook, numbers 5-8, 9-12 (just the area questions), and 13-17
Tuesday, January 31--7-1 in workbook, numbers 1-4 and 9-12
Monday, January 30--I am out today; Mrs. Lindsay was going to give a worksheet that can be done in class if they work hard, otherwise it is homework. It is a worksheet of skills needed for the next unit of study.
Friday, January 27--finished test; no homework this weekend unless they are behind on the 6-7 lesson and the practice test for the new quarter
Thursday, January 26--Test today; no homework unless they didn't finish, then study some more!
Wednesday, January 25--Study lessons 6-1 thru 6-7 and notes and practice test for a test tomorrow!
Tuesday, January 24--Practice test, Form D. They can use notes or any other resource. We also started in class.
Monday, January 23--6-7 in workbook
Friday, January 20--missing work, study to retake tests for report cards!
Thursday, January 19--6-5 in class, 6-6 in workbook
Wednesday, January 18--6-4 in workbook
Tuesday, January 17--5-7 worksheet, odds only
Friday, January 13--missing work or study for test retake if needed; report cards in two weeks
Thursday, January 12--Worksheet on triangles and quadrilaterals, except period 1, which did not have homework today
Wednesday, January 11--6-1 in workbook
Tuesday, January 10--test corrections
Monday, January 9--5-7 worksheet, evens. Do on back or separate paper.
Wednesday, Dec 21--Any missing work and studying to retake any test
Tuesday, Dec 20--Study practice test and any notes for test on Percent problems tomorrow! And return signed goal sheet
Monday, Dec 19--Percent practice test, Form C
Friday, Dec 16--any missing work
Thursday, Dec 15--5-9 evens in workbook
Wednesday, Dec 14--5-8 evens in workbook
Tuesday, Dec 13--5-4 evens in class, 5-6 all in workbook
Monday, Dec 12--5-7 evens in workbook, on separate paper
Friday, Dec 9--Missing work. Test was returned; they can study to retake a different version.
Thursday, Dec 8--5-1 and 5-2, evens for both, from workbook
Wednesday, Dec 7--review worksheet
Tuesday, Dec 6--Study for test tomorrow! You have the practice test with answers
Monday, Dec 5--Practice test on Proportions for test on Wednesday
Friday, Dec 2--any missing work
Thursday, Dec 1--4-8 worksheet, both sides, on separate paper
Wednesday, Nov 30--4-9 and 4-10 in workbook
Tuesday, Nov 29--4-6 in workbook
Monday, Nov 28--4-5 in workbook
Friday, Nov 18--missing work
Thursday, Nov 17--4-3 in workbook
Wednesday, Nov 16--4-1 #'s 9-14. Use separate page, and show all division.
Tuesday, Nov 15--4-1 #'s 1-8 in workbook
Monday, Nov 14--study test corrections if wanting to retake test
Friday, Nov 11--no school--work on missing work
Thursday, Nov 10--any missing work
Wednesday, Nov 9--Study practice test for test tomorrow!
Tuesday, Nov 8--Practice test for unit test on Thursday
Monday, Nov 7--2-8 in workbook--tests this week!
Friday, Nov 4--missing work. First week of the quarter and there is lots of missing work! Check web grades
Thursday, Nov 3--lesson 3-5, numbers 3, 4, 8, and 9 in workbook, and lesson 3-6, #'s 2 and 5. Can use calculators, but show all steps on a separate paper.
Wednesday, Nov 2--3-1 and 3-2 in workbook
Tuesday, Nov 1--2-9 in workbook, except for period 3, which has one field question
Monday, Oct 31--2-10 in workbook
Friday, Oct 28--Test today; no homework, but work on missing work!
Thursday, Oct 27--Study for test retake! They have notes!
Wednesday, Oct 26--2-3/2-4 worksheet, evens
Tuesday, Oct 25--2-1 worksheet. One side done in class, back at home.
Monday, Oct 24--2-6 worksheet. One side done in class, evens on back.
Friday, Oct 7--missing work, study for retake of tests; if you need to talk to me, I will be checking email and I am not going out of town. Have a nice two weeks!
Thursday, Oct 6--missing work, tests passed back
Wednesday, Oct 5--Unit 2 test today, no homework
Tuesday, Oct 4--Study practice test for tomorrow's test!
Monday, Oct 3--Practice Test Form D for test on Wednesday
Friday, Sept 30--just missing work
Thursday, Sept 29--2-7 in workbook, on separate paper, showing the property of equality
Wednesday, Sept 28--2-6 in workbook
Tuesday, Sept 27--students had a midquarter test today, so just any missing work for homework
Monday, Sept 26--2-5 in workbook
Friday, Sept 23--any missing work
Thursday, Sept 22--2-4 in workbook
Wednesday, Sept 21--2-3 in workbook
Tuesday, Sept 20--2-2 in workbook, skip #4, 7
Monday, Sept 19--2-1 odds in workbook--tests were passed back today
Friday, Sept 16--Ch 1 test today; homework is any missing work
Thursday, Sept 15--Study corrected practice test for 40 minutes at least!
Wednesday, Sept 14--Ch 1 Form D practice for test on Friday. They can use notes--they did most of it in class today.
Tuesday, Sept 13--lesson 1-8 in workbook. Also, any missing work, as progress report is on Friday!
Monday, Sept 12--lesson 1-10, all, in workbook. Must use property of equality!
Thursday, Sept 8--lesson 1-9 in workbook. Pick 2 word problems. Must use property of equality--see notes!
Wednesday, Sept 7--lesson 1-7, all, in workbook
Tuesday, Sept 6--1-6 #'s 1-18 and 29
Friday, Sept 2--missing work
Thursday, Sept 1--Take-home quiz; they can use notes
Wednesday, Aug 31--Lesson 1-4 in practice booklet
Tuesday, Aug 30--Lesson 1-4 in new practice booklet
Monday, Aug 29--Worksheet on Integers and Absolute Value
Friday, Aug 26--no homework, but bring marker for whiteboard practice
Thursday, Aug 25--Pg 232, #'s 1-9, and pg 233, #'s 1-9
Wednesday, Aug 24--Worksheet problems 1-8 and 17
Tuesday, Aug 23--no homework
Monday, Aug 22--no homework
A Match Made in Math-Heaven

If series are peanut butter, then integrals are jelly. You could put one or the other on a couple slices of bread and have a satisfying sandwich. You could also put them both together and have a tantalizing treat in your lunch box.

The two go together so nicely for a reason. Although they are very different, they have some properties that are very similar. It's no coincidence. Remember that an integral is defined in terms of a limit in which the left-hand sum (LHS) and the right-hand sum (RHS) agree. An infinite number of intervals is usually used in this limit, so these sums look just like infinite series.

There is one difference. To use these properties, we have to know already that the series converge. After all, peanut butter doesn't taste like jelly. They just go well together.

Assuming the series $\sum a_n$ and $\sum b_n$ both converge, then

$\sum\left(a_n \pm b_n\right) = \sum a_n \pm \sum b_n$

converges, and if $c$ is a constant, then

$\sum c\,a_n = c\sum a_n.$

Exercises (five of this form): Find the sum of the series. You may assume ... [series not shown]

Does the series [not shown] converge or diverge? Does the answer depend on the particular choice of $a_n$ and/or $b_n$?
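A numerical illustration of these two properties; the particular geometric series below are my own choice, not the ones from the (missing) exercises:

```python
import math

# Two convergent geometric series, summed from n = 1:
#   sum (1/2)^n = 1   and   sum (1/3)^n = 1/2
N = 200  # partial-sum cutoff; both tails are far below machine precision
a = [(1 / 2) ** n for n in range(1, N)]
b = [(1 / 3) ** n for n in range(1, N)]
c = 5.0

sum_a, sum_b = sum(a), sum(b)
assert math.isclose(sum_a, 1.0) and math.isclose(sum_b, 0.5)

# sum(a_n + b_n) = sum(a_n) + sum(b_n)
assert math.isclose(sum(x + y for x, y in zip(a, b)), sum_a + sum_b)
# sum(c * a_n) = c * sum(a_n)
assert math.isclose(sum(c * x for x in a), c * sum_a)
print(sum_a + sum_b)
```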
Groups of Order 8

November 15th 2009, 06:23 AM, #1:
Prove that no pair of the following groups of order 8 are isomorphic: I8 (congruence classes mod 8), I4 x I2, I2 x I2 x I2, D8 (the dihedral group), Q. The link http://www.mathhelpforum.com/math-he...tml#post392182 shows how D8 and Q are not isomorphic, but I have no idea how to prove the rest of the pairs are not isomorphic.

November 15th 2009, 07:56 AM, #2:
Well, the first three are abelian: $\mathbb{Z}_8\,,\,\mathbb{Z}_4\times \mathbb{Z}_2\,,\,\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2$, so they can't be isomorphic to the non-abelian quaternion and dihedral groups. Now, how do we see that these three aren't isomorphic to one another? Easy: count the orders of their elements. For example, $\mathbb{Z}_8$ has an element of order 8 and neither of the other groups has such an element, and all the elements in the third group have order 2, but not all in the second one...
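The order-counting argument can be checked mechanically. The sketch below (my addition, not from the thread) tabulates element orders in the three abelian groups:

```python
from itertools import product
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def order_profile(moduli):
    """Multiset {order: count} of element orders in Z_m1 x ... x Z_mk."""
    profile = {}
    for elem in product(*(range(m) for m in moduli)):
        o = 1
        for m, x in zip(moduli, elem):
            o = lcm(o, m // gcd(m, x))  # order of x in Z_m is m / gcd(m, x)
        profile[o] = profile.get(o, 0) + 1
    return profile

print(order_profile([8]))        # Z_8: four elements of order 8
print(order_profile([4, 2]))     # Z_4 x Z_2: largest order is 4
print(order_profile([2, 2, 2]))  # Z_2^3: every non-identity element has order 2
```

Since the three profiles differ, no two of the groups can be isomorphic.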
Social Dilemma

Social Dilemma and related Definitions

Informally, a social dilemma is a collective action situation in which there is a conflict between individual and collective interest. It is a situation in which individuals could do better if they either changed their strategies or changed the rules of the game.

Some technical definitions:

Nash equilibrium: A set of strategies for the players in a game with the property that no player can improve his or her payoff by deviating from his or her strategy while the strategies of the other players are unchanged.

Pareto optimal: An outcome is Pareto optimal if there is no alternative outcome that makes some individuals better off and leaves all other individuals at least as well off.

More formally, a social dilemma is a collective action situation in which the Nash equilibrium yields an outcome that falls below the Pareto optimum.

Dominating strategy: A dominating strategy is a strategy that yields the best outcome for an individual regardless of what anyone else does.

Definitions of Nash and Pareto are from Ostrom and Walker (eds.), Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research, Russell Sage Foundation, 2003. Definition of Dominating Strategy from Kollock, P., "Social Dilemmas: The Anatomy of Cooperation", Annual Review of Sociology, 24: 183-214, 1998.
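The textbook instance of these definitions is the Prisoner's Dilemma; the payoff numbers in this sketch are a standard illustrative choice of mine, not taken from the page above:

```python
# Prisoner's Dilemma. Strategies: 0 = cooperate, 1 = defect.
# payoff[(i, j)] = (row player's payoff, column player's payoff).
payoff = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),  # both defect
}

def is_nash(i, j):
    """No unilateral deviation improves the deviator's payoff."""
    row_ok = all(payoff[(k, j)][0] <= payoff[(i, j)][0] for k in (0, 1))
    col_ok = all(payoff[(i, k)][1] <= payoff[(i, j)][1] for k in (0, 1))
    return row_ok and col_ok

def pareto_dominated(i, j):
    """Some other outcome leaves no one worse off and someone strictly better."""
    p = payoff[(i, j)]
    return any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in payoff.values())

# (defect, defect) is the unique Nash equilibrium, yet it is Pareto-dominated
# by (cooperate, cooperate): the hallmark of a social dilemma.
print([s for s in payoff if is_nash(*s)])
print(pareto_dominated(1, 1))
```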
BDBComp - Biblioteca Digital Brasileira de Computação

Análise Transiente de Modelos de Fonte Multimídia (Transient Analysis of Multimedia Source Models)
Carlos Fish de Brito, Edmundo de Souza e Silva, Morganna C. Diniz, Rosa M. M. Leão.

The traffic carried by modern high speed networks has drastically different characteristics. New multimedia, real-time applications require that a minimum QoS be offered by the network. Therefore a lot of modeling effort has been spent in order to better understand the issues involved in multiplexing multimedia traffic over high speed links, and to maximize the allocation of network resources. Among the main problems that cause QoS degradation is the loss of cells due to buffer overflow in the switches. The main goal of this work is to present a new algorithm to calculate efficiently the transient queue length distribution, when the queue is fed by given traffic sources, directly from the traffic model being used. Thus, the algorithm we developed allows the study of the queue length variation over a given period of time, unlike most studies that are aimed at the steady-state behavior. As a byproduct, we obtain the fraction of time the queue length is above a given value, during a finite observation interval. These results are important, for instance, to study the influence of different types of traffic over a high speed channel.
Proof by contradiction in a topos

Question: In a topos which is not a Boolean topos, can we use proof by contradiction?

Answer: It depends on what examples you have in mind when you say "proof by contradiction". This topic has come up a number of times recently at MO, but I recommend to your attention the useful blog post by Andrej Bauer, which explains that there is a subtle distinction to be made between "proof of negation" and "proof by contradiction". If the proposition to be proved is already of the form $\neg p$, then it may help to recall that $\neg p$ is (by definition) the weakest assumption one could make such that its conjunction with $p$ entails falsity (in symbols, $x \leq \neg p$ iff $x \wedge p \leq 0$). This is true in intuitionistic logic as well as in classical logic. So a proof of a negated proposition $\neg p$ would quite properly begin, "suppose $p$, then ... contradiction". Many people call this a proof by contradiction, because the structure of the argument-phrasing looks just like any old proof by contradiction. An example of this is Cantor's theorem (that there is no surjection from a set to its power set, or $\neg$ "there exists a surjection..."). This can be formulated in any topos and is true in any topos, Boolean or not. (If this helps, notice that in intuitionistic logic, we have that $\neg p$ is equivalent to $\neg \neg \neg p$: a negated proposition is always equivalent to its double negation.)

But contrast this with, for example, the Hahn-Banach theorem: every locally convex topological vector space admits a nonzero continuous linear functional to the ground field. This proposition, which is not in negated form, is a prime example of something which has no constructive proof. A typical method of proof would be something like "by Zorn's lemma, there is a maximal closed subspace that admits such a continuous functional; suppose this were not the whole space" and eventually derive a contradiction. This type of reasoning is not valid in a general topos.

For another example, consider "$\sqrt{2}$ is irrational". This is a negative proposition: "$\neg (\exists p, q \in \mathbb{Z}_+ \; p^2 = 2 q^2)$". The usual arithmetic proofs are valid in any topos.

Comment (Paul Taylor, May 2 '13 at 18:55): Yes, that's a good answer. The word topos is a distraction. One should just learn how to write proofs constructively wherever possible. Maybe the result that alphaa wants can be proved

Another answer: There is no need to say any more than that, since the answer is in the question, except that MathOverflow will not let me submit something with so few characters.
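For completeness, the cited equivalence $\neg p \leftrightarrow \neg\neg\neg p$ has a short derivation in the Heyting-algebra notation used above; this sketch is my own addition, not part of the original answer:

```latex
% Working from the adjunction  x \le \neg p \iff x \wedge p \le 0:
\begin{align*}
&\text{(1)}\quad p \le \neg\neg p
  && \text{since } p \wedge \neg p \le 0,\\
&\text{(2)}\quad \neg\neg\neg p \le \neg p
  && \text{apply the antitone map } \neg \text{ to (1)},\\
&\text{(3)}\quad \neg p \le \neg\neg\neg p
  && \text{instantiate (1) at } \neg p.
\end{align*}
% Together, (2) and (3) give  \neg p = \neg\neg\neg p.
```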
Results 1 - 10 of 50

- Theoretical Informatics and Applications, 1990. Cited by 100 (4 self).
"We show how to compute $x^k$ using multiplications and divisions. We use this method in the context of elliptic curves for which a law exists with the property that division has the same cost as multiplication. Our best algorithm is 11.11% faster than the ordinary binary algorithm and speeds up accordingly the factorization and primality testing algorithms using elliptic curves. 1. Introduction. Recent algorithms used in primality testing and integer factorization make use of elliptic curves defined over finite fields or Artinian rings (cf. Section 2). One can define over these sets an abelian law. As a consequence, one can transpose over the corresponding groups all the classical algorithms that were designed over Z/NZ. In particular, one has the analogue of the p-1 factorization algorithm of Pollard [29, 5, 20, 22], the Fermat-like primality testing algorithms [1, 14, 21, 26] and the public key cryptosystems based on RSA [30, 17, 19]. The basic operation performed on an elli..."

- Cited by 69 (4 self).
"This paper presents a new probabilistic primality test. Upon termination the test outputs 'composite' or 'prime', along with a short proof of correctness, which can be verified in deterministic polynomial time. The test is different from the tests of Miller [M], Solovay-Strassen [SS], and Rabin [R] in that its assertions of primality are certain, rather than being correct with high probability or dependent on an unproven assumption. The test terminates in expected polynomial time on all but at most an exponentially vanishing fraction of the inputs of length k, for every k. This result implies: (1) There exists an infinite set of primes which can be recognized in expected polynomial time. (2) Large certified primes can be generated in expected polynomial time. Under a very plausible condition on the distribution of primes in 'small' intervals, the proposed algorithm can be shown to run in expected polynomial time on every input. This ..."

- Invent. Math., 1997.
"2. More detailed results. 2.1. The algebraic geometry of even periodic ring spectra ..."

- Cited by 31 (9 self).
"Based on an NSF-CBMS lecture series given by the author at the University of Central Florida in Orlando from August 8 to 12, 2001, this monograph surveys some recent developments in the arithmetic of modular elliptic curves, with special emphasis on the Birch and Swinnerton-Dyer conjecture, the construction of rational points on modular elliptic curves, and the crucial role played by modularity in shedding light on these questions."

- 2001. Cited by 28 (1 self).
"We produce explicit elliptic curves over Fp(t) whose Mordell-Weil groups have arbitrarily large rank. Our method is to prove the conjecture of Birch and Swinnerton-Dyer for these curves (or rather the Tate conjecture for related elliptic surfaces) and then use zeta functions to determine the rank. In contrast to earlier examples of Shafarevitch and Tate, our curves are not isotrivial. Asymptotically these curves have maximal rank for their conductor. Motivated by this fact, we make a conjecture about the growth of ranks of elliptic curves over number fields."

- Journal of the ACM, 1999. Cited by 22 (0 self).
"Abstract. We present a primality proving algorithm: a probabilistic primality test that produces short certificates of primality on prime inputs. We prove that the test runs in expected polynomial time for all but a vanishingly small fraction of the primes. As a corollary, we obtain an algorithm for generating large certified primes with distribution statistically close to uniform. Under the conjecture that the gap between consecutive primes is bounded by some polynomial in their size, the test is shown to run in expected polynomial time for all primes, yielding a Las Vegas primality test. Our test is based on a new methodology for applying group theory to the problem of prime certification, and the application of this methodology using groups generated by elliptic curves over finite fields. We note that our methodology and methods have been subsequently used and improved upon, most notably in the primality proving algorithm of Adleman and Huang using hyperelliptic curves."

- Bull. Amer. Math. Soc. (N.S.), 1993. Cited by 21 (0 self).
"Would a reader be able to predict the branch of mathematics that is the subject of this article if its title had not included the phrase 'in Number Theory'? The distinction 'local' versus 'global', with various connotations, has found a home in almost every part of mathematics, local problems being often a stepping-stone to ..."

- Heegner Points and Rankin L-Series, edited by Henri Darmon and Shou-Wu ..., 2004. Cited by 13 (0 self).
"Well-known analogies between number fields and function fields have led to the transposition of many problems from one domain to the other. In this paper, we discuss traffic of this sort, in both directions, in the theory of elliptic curves. In the first part of the paper, we consider various works on Heegner points and Gross-Zagier formulas in the function field context; these works lead to a complete proof of the conjecture of Birch and Swinnerton-Dyer for elliptic curves of analytic rank at most 1 over function fields of characteristic > 3. In the second part of the paper, we review the fact that the rank conjecture for elliptic curves over function fields is now known to be true, and that the curves which prove this have asymptotically maximal rank for their conductors."
The fact that these curves meet rank bounds suggests interesting problems on elliptic curves over number fields, cyclotomic fields, and function fields over number fields. These , 2008 "... Abstract. We study algebraic K3 surfaces (defined over the complex number field) with a symplectic automorphism of prime order. In particular we consider the action of the automorphism on the second cohomology with integer coefficients (by a result of Nikulin this action is independent on the choice ..." Cited by 13 (8 self) Add to MetaCart Abstract. We study algebraic K3 surfaces (defined over the complex number field) with a symplectic automorphism of prime order. In particular we consider the action of the automorphism on the second cohomology with integer coefficients (by a result of Nikulin this action is independent on the choice of the K3 surface). With the help of elliptic fibrations we determine the invariant sublattice and its perpendicular complement, and show that the latter coincides with the Coxeter-Todd lattice in the case of automorphism of order three. In the paper [Ni1] Nikulin studies finite abelian groups G acting symplectically (i.e. G |H2,0(X,C) = id |H2,0(X,C)) on K3 surfaces (defined over C). One of his main result is that the action induced by G on the cohomology group H2 (X, Z) is unique up to isometry. In [Ni1] all abelian finite groups of automorphisms of a K3 surface acting symplectically
general relativity

1. The theory states that the quality of the definition given is inversely proportional to the length of the definition. Thus, the longer the definition, the crapper it is. Through the theory of general relativity, we can deduce that the top definition is not worth reading.

2. A theory which states that space and time are actually a four-dimensional continuum known as "spacetime", and that gravity is a manifestation of the curvature of the geometry of this four-dimensional spacetime. General relativity explains the gravitational bending of starlight, how gravity slows down the flow of time, predicts the existence of black holes, and even predicts the expansion of the universe. The theory was formulated by Albert Einstein in order to incorporate gravity into his theory of relativity. The famous eclipse experiment of 1919 confirmed Einstein's prediction that light rays would bend in a gravitational field, according to his theory of general relativity.
intuitionistic logic

<logic, mathematics> Brouwer's foundational theory of mathematics, which says that you should not count a proof of (There exists x such that P(x)) as valid unless the proof actually gives a method of constructing such an x. Similarly, a proof of (A or B) is valid only if it actually exhibits either a proof of A or a proof of B.

In intuitionism, you cannot in general assert the statement (A or not-A) (the principle of the excluded middle); (A or not-A) is not proven unless you have a proof of A or a proof of not-A. If A happens to be undecidable in your system (some things certainly will be), then there will be no proof of (A or not-A). This is pretty annoying; some kinds of perfectly healthy-looking examples of proof by contradiction just stop working. Of course, excluded middle is a theorem of classical logic (i.e., non-intuitionistic logic).

Last updated: 2001-03-18

Copyright Denis Howe 1985
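The constructive reading of disjunction described above can be seen directly in a proof assistant. In Lean 4 (syntax assumed here), a proof of A ∨ B must actually name a disjunct, while excluded middle is only available by invoking the classical axioms:

```lean
-- A constructive proof of `A ∨ B` must exhibit one side:
example {A B : Prop} (h : A) : A ∨ B := Or.inl h

-- `A ∨ ¬A` has no intuitionistic proof in general; in Lean it is obtained
-- only through the classical axioms:
example (A : Prop) : A ∨ ¬A := Classical.em A
```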
The exact time period for baryogenesis is not known, but most certainly it is completed no later than the electroweak phase transition. There are of course many mechanisms for baryogenesis which have been proposed in the literature [36], too many to be comprehensive here. All require baryon number violation, C and CP violation, and a departure from thermal equilibrium [37]. The original out-of-equilibrium decay (OOED) scenario [38] is probably still the simplest of all mechanisms. Originally formulated in the context of grand unified theories, OOED involved the decay of a superheavy gauge or Higgs boson with baryon number violating couplings. For example, the X gauge boson of SU(5) couples to both quark pairs and lepton-quark pairs such as $(e^- d)$. The decay rate for X will be $\Gamma_D \simeq \alpha M_X$. However, decays can only begin occurring when the age of the Universe is longer than the X lifetime $\Gamma_D^{-1}$, i.e., when $\Gamma_D > H$. The out-of-equilibrium condition is that at $T = M_X$, $\Gamma_D < H$, or $M_X \gtrsim \alpha\, g_*^{-1/2} M_P$. In this case, we would expect a maximal net baryon asymmetry to be produced, $n_B/s \sim 10^{-2}$. In the context of inflation, OOED requires either strong reheating (or preheating) [39], or that the decaying particles have masses less than the inflaton mass so that they can be produced (necessarily out of equilibrium) by inflaton decays. For example, if the inflaton potential carries only a single scale, which is fixed by the magnitude of density fluctuations as measured in the microwave background radiation [40], then the inflaton mass is $m_\phi \sim \mu^2/M_P \sim O(10^{11})$ GeV. In this case a relatively light Higgs is necessary, since the inflaton is typically light and the baryon number violating Higgs would have to be produced during inflaton decay. Clever model building could allow for such a light Higgs, even in the context of supersymmetry [41]. In this case, the baryon asymmetry is given simply by $n_B/s \sim \epsilon\, n_\phi T_R/\rho_\phi \sim \epsilon\, T_R/m_\phi$, where $\epsilon$ measures the CP violation in the decay, $T_R$ is the reheat temperature after inflation, $n_\phi = \rho_\phi/m_\phi \sim \Gamma_\phi^2 M_P^2/m_\phi$, and $\Gamma_\phi \sim m_\phi^3/M_P^2$ is the inflaton decay rate.
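Dropping all order-one numerical factors, the out-of-equilibrium condition above follows from comparing the decay rate with the radiation-era Hubble rate (this is only the standard back-of-the-envelope estimate; $g_*$ counts the relativistic degrees of freedom):

```latex
\Gamma_D \simeq \alpha M_X, \qquad
H(T) \simeq \sqrt{g_*}\,\frac{T^2}{M_P}, \qquad
\Gamma_D < H\big|_{T=M_X}
\;\Longrightarrow\;
\alpha M_X \lesssim \sqrt{g_*}\,\frac{M_X^2}{M_P}
\;\Longrightarrow\;
M_X \gtrsim \frac{\alpha}{\sqrt{g_*}}\, M_P .
```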
In the context of supersymmetry, there is an extremely natural mechanism for the generation of the baryon asymmetry utilizing flat directions of the scalar potential [42]. One can show that there are many directions in field space such that the scalar potential vanishes identically along them when SUSY is unbroken. One such example is a combination of squark and slepton fields [42], where $a$, $v$ are arbitrary complex vacuum expectation values. SUSY breaking lifts this degeneracy, so that $V \sim \tilde m^2 \phi^2$. For an initial field value $\phi_0 \sim M_{GUT}$, a large baryon asymmetry can be generated [42, 43]. This requires the presence of baryon number violating operators such as $O = qqql$ such that $\langle O \rangle \ne 0$. When combined with inflation, it is important to verify that the AD flat directions remain flat. In general, during inflation, supersymmetry is broken. The gravitino mass is related to the vacuum energy, $m_{3/2}^2 \sim V/M_P^2 \sim H^2$, thus lifting the flat directions and potentially preventing the realization of the AD scenario, as argued in [44]. To see this, consider a minimal supergravity model whose Kähler potential is defined by $G = zz^* + \phi^i \phi_i^* + \ln |W|^2$, where $z$ is a Polonyi-like field [45] needed to break supergravity, and we denote the scalar components of the usual matter chiral supermultiplets by $\phi^i$; the superpotential is assumed to split into pieces depending on $\phi^i$ and $z$ respectively. In this case, the scalar potential contains a mass term for the matter fields $\phi^i$, $e^G \phi^i \phi_i^* = m_{3/2}^2 \phi^i \phi_i^*$ [46]. As it applies to all scalar fields (in the matter sector), all flat directions are lifted by it as well. The above arguments can be generalized to supergravity models with non-minimal Kähler potentials. There is, however, a special class of models, called no-scale supergravity models, that were first introduced in [47] and have the remarkable property that the gravitino mass is undetermined at the tree level despite the fact that supergravity is broken.
No-scale supergravity has been used heavily in constructing supergravity models in which all mass scales below the Planck scale are determined radiatively [48], [49]. These models emerge naturally in torus [50] or, for the untwisted sector, orbifold [51] compactifications of the heterotic string. In no-scale models (or more generally in models which possess a Heisenberg symmetry [52]), the Kähler potential becomes a function $f(\eta)$ of the single combination $\eta = z + z^* - \phi^i \phi_i^*$. Writing out the scalar potential, it is important to notice that the cross term $|\phi_i^* W|^2$ has disappeared. Because of the absence of the cross term, flat directions remain flat even during inflation [53]. The no-scale model corresponds to $f = -3 \ln \eta$, for which $f'^2 = 3 f''$ and the first term in (31) vanishes. The potential then takes a form which is positive definite. The requirement that the vacuum energy vanishes implies $\langle W_i \rangle = \langle g_a \rangle = 0$ at the minimum. As a consequence, $m_{3/2}$ is undetermined at the tree level. The above argument is only valid at the tree level. An explicit one-loop calculation [54] shows that the effective potential along the flat direction is lifted only by loop corrections, so that a large initial field value $\phi_0 \sim M_P$ will be generated; in this case the subsequent sfermion oscillations will dominate the energy density, and a baryon asymmetry will result which is independent of inflationary parameters, as originally discussed in [42, 43], and will produce $n_B/s \sim O(1)$. Thus we are left with the problem that the baryon asymmetry in no-scale type models is too large [55, 53, 56]. In [56], several possible solutions were presented to dilute the baryon asymmetry. These included (1) moduli decay, (2) the presence of non-renormalizable interactions, and (3) electroweak effects. Moduli decay, in this context, turns out to be insufficient to bring an initial asymmetry of order $n_B/s \sim 1$ down to acceptable levels. However, as a by-product one can show that there is no moduli problem [57] either.
In contrast, adding non-renormalizable Planck-scale operators of the form $\phi^{2n-2}/M_P^{2n-6}$ leads to a smaller initial value for $\phi_0$ and hence a smaller value for $n_B/s$. For dimension-6 operators ($n = 4$), a baryon asymmetry of order $n_B/s \sim 10^{-10}$ is produced. Finally, another possible suppression mechanism is to employ the smallness of the fermion masses. The baryon asymmetry is known to be wiped out if the net $B - L$ asymmetry vanishes, because of the sphaleron transitions at high temperature. However, Kuzmin, Rubakov and Shaposhnikov [58] pointed out that this erasure can be partially circumvented if the individual $(B - 3L_i)$ asymmetries, where $i = 1, 2, 3$ refers to the three generations, do not vanish even when the total asymmetry vanishes. Even though there is still a tendency for the baryon asymmetry to be erased by the chemical equilibrium due to the sphaleron transitions, the finite mass of the tau lepton shifts the chemical equilibrium between $B$ and $L_3$ towards the $B$ side and leaves a finite asymmetry in the end. Their estimate is a residual asymmetry suppressed by powers of $m_\tau(T)/T$, where the temperature $T \sim T_C \sim 200$ GeV is when the sphaleron transition freezes out (similar to the temperature of the electroweak phase transition) and $m_\tau(T)$ is expected to be somewhat smaller than $m_\tau(0) = 1.777$ GeV. Overall, the sphaleron transition suppresses the baryon asymmetry by a factor of $\sim 10^{-6}$. This suppression factor is sufficient to keep the total baryon asymmetry at a reasonable order of magnitude in many of the cases discussed above. Finally, it is necessary to mention one other extremely simple mechanism, based on the OOED of a heavy Majorana neutrino [59]. This mechanism does not require grand unification at all. By simply adding to the Lagrangian a Dirac and a Majorana mass term for a new right-handed neutrino state, the out-of-equilibrium decays $\nu^c \rightarrow L + H^*$ and $\nu^c \rightarrow L^* + H$ will generate a non-zero lepton number $L$, provided the decay is out of equilibrium, $10^{-3} h^2 M_P < M$; here $M$ could be as low as $O(10)$ TeV.
(Note that once again, in order to have a non-vanishing contribution to the C and CP violation in this process at 1 loop, at least 2 flavors of $\nu^c$ are required. For the generation of masses of all three neutrino flavors, 3 flavors of $\nu^c$ are required.) Sphaleron effects can transfer this lepton asymmetry into a baryon asymmetry, since now $B - L \ne 0$ [40, 60]. As the Inner Space/Outer Space II workshop and these proceedings are in memory of Dave Schramm, it is fitting to acknowledge Dave's role. Dave's research interests were predominantly concerned with the epoch of Big Bang Nucleosynthesis and the post-BBN Universe. His research interests are known to have been extremely diverse, covering such areas as chemical evolution, cosmic rays, cosmochronology, dark matter, galaxy formation and mergers, the gamma-ray background, magnetic fields, mass extinctions, neutrinos, (late-time) phase transitions, supernovae, ... In addition, he was a master at synthesizing and finding relationships between these topics. Though pre-BBN cosmology was not Dave's mainstay, he did of course make important contributions in areas such as baryogenesis and topological defects. But Dave's most important contribution was the early recognition of the impact of cosmology on particle physics. He can legitimately be considered a founding father of astro-particle physics. On a more personal note, it is difficult to put into words the immense impact that Dave has had on my research, as an advisor, colleague, and friend. This work was supported in part by DOE grant DE-FG02-94ER40823 at Minnesota.
Training a Neural Network in PHP, Page 2

Before being able to solve the problem, the artificial neural network has to learn how to solve it. Think of this network as an equation. You have added test data and the expected output, and the network has to solve the equation by finding the connection between input and output. This process is called training. In neural networks, these connections are neuron weights. A few algorithms are used for network training, but backpropagation is used most often. Backpropagation actually means backwards propagation of errors. After initializing random weights in the network, the next steps are to:

1. Loop through the test data
2. Calculate the actual output
3. Calculate the error (desired output - current network output)
4. Compute the delta weights backwards
5. Update the weights

The process continues until all test data has been correctly classified or the algorithm has reached a stopping criterion. Usually, training is attempted at most three times, with a maximum of 1000 training rounds (epochs) per attempt. Also, each learning algorithm needs an activation function. For backpropagation, the activation function is the hyperbolic tangent (tanh).

Let's see how to train a neural network in PHP:

$max = 3;
// train the network in max 1000 epochs, with a max squared error of 0.01
while (!($success = $n->train(1000, 0.01)) && $max-- > 0) {
    // training failed -- re-initialize the network weights and try again
}

if ($success) {
    // training successful
    $epochs = $n->getEpoch(); // get the number of epochs needed for training
}

Mean squared error (MSE) is the average of the squares of the errors; its square root is the root-mean-square error. The default mean squared error threshold usually is 0.01, which means that training is successful if the mean squared error drops below 0.01.
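The five training steps above can be mirrored in a self-contained sketch. This is not the PHP class's implementation; it is a from-scratch Python toy, where the network size, learning rate, and the margin-filtered data set are all arbitrary choices, following the same steps and the same stop-when-MSE-drops-below-0.01 rule:

```python
import math, random

random.seed(0)

def tanh_prime(y):
    # derivative of tanh, expressed in terms of the tanh output y
    return 1.0 - y * y

n_in, n_hid = 3, 4
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w2 = [random.uniform(-1, 1) for _ in range(n_hid)]

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    o = math.tanh(sum(w * hi for w, hi in zip(w2, h)))
    return h, o

# Toy data: target +1 if blue clearly dominates red, -1 if red clearly
# dominates; samples with |b - r| < 0.2 are skipped so training converges fast.
data = []
while len(data) < 200:
    r, g, b = random.random(), random.random(), random.random()
    if abs(b - r) >= 0.2:
        data.append(([r, g, b], 1.0 if b > r else -1.0))

lr, mse = 0.1, 1.0
for epoch in range(1000):                      # at most 1000 epochs
    mse = 0.0
    for x, t in data:                          # 1. loop through the data
        h, o = forward(x)                      # 2. actual output
        err = t - o                            # 3. error
        mse += err * err
        d_o = err * tanh_prime(o)              # 4. deltas, computed backwards
        d_h = [d_o * w2[j] * tanh_prime(h[j]) for j in range(n_hid)]
        for j in range(n_hid):                 # 5. update the weights
            w2[j] += lr * d_o * h[j]
            for i in range(n_in):
                w1[j][i] += lr * d_h[j] * x[i]
    mse /= len(data)
    if mse < 0.01:                             # stop once the MSE is small enough
        break

print("epochs:", epoch + 1, "mse:", round(mse, 4))
```

The same thresholding idea as in the article then classifies a colour: `forward([0.9, 0.1, 0.2])[1]` should come out negative (red dominant) and `forward([0.1, 0.2, 0.9])[1]` positive (blue dominant).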
Before seeing the working example of an artificial neural network in PHP, note that it is good practice to save your neural network to a file or a SQL database. If you don't save it, you will have to do the training every time someone executes your application. Simple tasks are learned quickly, but training takes much longer for more complex problems, and you want your users to wait as little as possible. Fortunately, there are save and load functions in the PHP class in this example:

$n->save('my_network.ini');
// ... and later ...
$n->load('my_network.ini');

Note that the file extension must be .ini.

The PHP Code for Our Neural Network

Let's look at the PHP code of the working application that receives red, green and blue values and calculates whether the blue or red color is dominant:

$r = $_POST['red'];
$g = $_POST['green'];
$b = $_POST['blue'];

$n = new NeuralNetwork(4, 4, 1); // initialize the neural network

// Load the saved weights into the initialized neural network. This way you
// won't need to train the network each time the application is executed.
$n->load('my_network.ini');

// Note that you will have to write a normalize() function, depending on your needs.
$input = array(normalize($r), normalize($g), normalize($b));
$result = $n->calculate($input);

if ($result > 0.5) {
    // the dominant color is blue
} else {
    // the dominant color is red
}

Neural Network Limitations

A single-layer perceptron can solve only linearly separable problems, and many problems are not linearly separable. Such problems require more network layers or another artificial intelligence algorithm. However, neural networks solve enough problems that require computer intelligence to earn an important place among artificial intelligence algorithms.
The Cap-Set Problem and Frankl-Rodl Theorem (C) Update: This is the third of three posts (part I, part II) proposing some extensions of the cap set problem and some connections with the Frankl-Rodl theorem. Here is a post presenting the problem on Terry Tao’s blog (March 2007). Here is an open discussion post around Roth’s theorem (March 2009). Here are two “open discussion” posts on the cap set problem (first, and second) from Gowers’s blog (January 2011). These posts include several interesting Fourier theoretic approaches towards improvement of the Roth-Meshulam bound. The second post describes a startling example by Seva Lev. Suppose that the maximum size of a cap set in $\Gamma(n)=\{0,1,2\}^n$ is $3^n/g(n)$. Here is an obvious fact: The maximum size of a set $F$ in $G$ with the property that every $x,y\in F$ ($x \ne y$) satisfy $x+y \notin G$ is at most the maximum size of a cap set in $G$. Proof: Indeed the condition for $F$ is stronger than being a cap set. We require for every $x,y \in F$, $x \ne y$, not only that $x+y \notin F$ but even $x+y \notin G$. Part C. A more direct relation between Frankl-Rodl’s theorem and the cap set problem and all sorts of fine gradings on subsets of {1,2,…,n}. In part A we moved from sets avoiding affine lines (cap sets) to sets avoiding “modular lines” and used the Frankl-Rodl theorem to deal with the latter. This may look artificial. Here is a simpler connection between the cap set problem and the Frankl-Rodl theorem. 17. Why weaker forms of the Frankl-Rodl Theorem follow from upper bounds on cap sets. Consider a subset $F$ of $\Gamma_{m,m,m}$, where $n=3m$. Suppose that $F$ does not contain the following configuration: two vectors $x,y$ such that for every $a=0,1,2$, $|\{i:x_i+y_i=a \pmod 3\}|=m$. This is stronger than the assertion that $F$ is a cap set. The Frankl-Rodl theorem implies that the size of $F$ is at most $(3 - \epsilon)^n$.
In other words, both the optimistic upper bound of $(3-\epsilon)^n$ for the cap set problem and the Frankl-Rodl theorem have the same weaker consequence: A subset of $\Gamma_{m,m,m}$ without two vectors whose sum is also in $\Gamma_{m,m,m}$ has size at most $(3-\epsilon)^{3m}$. 18. Diversion: various gradings on subsets. Let us talk a little about subsets. We have a grading on subsets by their size. Problems about sets of fixed size are usually more delicate than problems on arbitrary sets. We have a finer grading on sets according to the sum of their elements. Problems about sets whose sums of elements are fixed are even more delicate (and rare). We do not have to stop there: we can associate to a set $S$ the parameters $a_k(S)= \sum_{i \in S} {{i-1} \choose {k}}$. Studying extremal problems for sets where $a_1(S), a_2(S), \dots, a_r(S)$ are fixed (or just $a_r(S)$ is fixed) is not common. There are a few such results, and related ideas come into play in various contexts. Let me mention one: the Sperner property for the poset of sets with prescribed sums of coordinates is closely related to the famous Erdos-Moser problem: what is the maximum number of equal sums for subsets of $n$ distinct positive real numbers? The precise bound (attained by {1,2,…,n}) was proved by Stanley using the hard Lefschetz theorem. (Simpler proofs based on elementary representation theory were subsequently found.) We can ask if the Frankl-Rodl theorem can be extended to this setting. The type of theorem we are looking for will start with a family $\cal F$ of sets whose fine parameters are fixed and consider pairwise intersections of such sets, also with prescribed parameters. Then we can conjecture that if ${\cal G} \subset {\cal F}$ is a subfamily which contains a proportion $(1-\epsilon)^n$ of the sets, the number of intersections with the prescribed parameters will go down by only $(1-\delta)^n$ (where $\delta$ tends to 0 as $\epsilon$ does).
While at it, we can consider several subfamilies, $t$-fold intersections, larger alphabets, etc., just like Frankl and Rodl did. The point is that (for an alphabet of size 3) the optimistic upper bound for the cap set problem implies some weak consequences of these hypothetical Frankl-Rodl generalizations. 19. Applications of upper bounds for cap sets in the Frankl-Rodl spirit. Consider now a vector $x\in \Gamma(n)$. Again, we can write for $j=0,1,2$, $a^j_k(x)= \sum_{i: x_i=j} {{i-1} \choose {k}}$. Let $\Gamma^r(n)$ be the set of vectors $x \in \Gamma(n)$ so that $a^0_k(x)=a^1_k(x)=a^2_k(x)$ for every $k=1,2,\dots, t$. (We need to assume that ${{n} \choose {k}}$ is divisible by 3 for $k=1,2,\dots, t+1$.) An upper bound for the cap set problem of the form $(3-\epsilon)^n$ will imply that a subset of $\Gamma^r(n)$ without two vectors $x,y$ whose sum also belongs to $\Gamma^r(n)$ has size at most $(3-\epsilon)^n$. This looks like a strong theorem of the Frankl-Rodl type. The question is whether Frankl-Rodl’s methods can be extended to our finer gradings of $\Gamma$. This is believable (and perhaps doable) for bounded values of $t$. The case where $t$ goes slowly to infinity may be a good place to look for counterexamples. 20. An averaging fact worth knowing: Suppose that the maximum size of a cap set in $\Gamma(n)=\{0,1,2\}^n$ is $3^n/g(n)$. The following fact follows from a simple averaging argument. Let $G \subset \Gamma(n)$. If the maximum size of a cap set in $G$ is $c(G)$ and $|G|/c(G) = u$, then $u \le g(n)$. As a matter of fact, $G$ can be a multi-subset (elements can appear several times) or even a general distribution. To prove this assertion, consider the intersection of a cap set of maximum size in $\Gamma(n)$ with all sets of the form $G+x$, and note that $F$ is a cap set in $G$ iff $F+x$ is a cap set in $G+x$. Remark: This simple averaging trick is useful on many occasions. One example is the “Elias bounds” for error-correcting codes.
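The averaging proof in point 20 can be written out in one line (with $A$ a maximum cap set, so $|A| = 3^n/g(n)$, and $x$ ranging uniformly over $\Gamma(n)$):

```latex
\mathbb{E}_{x}\,\bigl|A \cap (G+x)\bigr|
  = \sum_{a \in A} \Pr_{x}\bigl[\,a \in G+x\,\bigr]
  = \frac{|A|\,|G|}{3^{n}}
\quad\Longrightarrow\quad
c(G) \;\ge\; \frac{|A|\,|G|}{3^{n}} \;=\; \frac{|G|}{g(n)}
\quad\Longrightarrow\quad
u = \frac{|G|}{c(G)} \;\le\; g(n),
```

using that some translate attains the average, and that a translate of a cap set is again a cap set, since $(a-x)+(b-x)+(c-x) \equiv a+b+c \pmod 3$.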
For the related problem of “density Hales-Jewett” discussed on Gowers’s blog, it turns out that while the problem is not invariant under a group action, it is still correct that its statements for various different measures on $\Gamma(n)$ are (qualitatively) equivalent, using a somewhat more complicated averaging argument. This phenomenon deserves further study. (This is the third and last post in this series; the first two are here and here.)
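As a concluding sanity check on the definitions, the maximum cap-set sizes in very small dimensions can be found by brute force (a toy sketch; the interesting regime is of course large $n$, where nothing like this is feasible):

```python
from itertools import combinations, product

def is_cap(points):
    # A cap set contains no three distinct collinear points;
    # in F_3^n, distinct {a, b, c} are collinear iff a + b + c == 0 (mod 3).
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

def max_cap_size(n):
    pts = list(product(range(3), repeat=n))
    best = 0
    # Enumerate all subsets (only feasible for tiny n).
    for mask in range(1 << len(pts)):
        subset = [p for i, p in enumerate(pts) if mask >> i & 1]
        if len(subset) > best and is_cap(subset):
            best = len(subset)
    return best

print(max_cap_size(2))  # -> 4
```

For $n = 2$ this recovers the classical fact that a cap in the affine plane over $F_3$ has at most 4 points.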
Sensors 2012, 12(8), 11294-11306; doi:10.3390/s120811294. ISSN 1424-8220, Molecular Diversity Preservation International (MDPI).

Article

Switching Algorithm for Maglev Train Double-Modular Redundant Positioning Sensors

Ning He, Zhiqiang Long * and Song Xue

College of Mechatronics Engineering and Automation, National University of Defense Technology, Changsha 410073, China; E-Mails: hening0606@126.com (N.H.); songself@126.com (S.X.)

* Author to whom correspondence should be addressed; E-Mail: zhqlong@263.net; Tel.: +86-1387-4967-811; Fax: +86-731-8451-6000.

Received: 28 June 2012; in revised form: 30 July 2012; Accepted: 1 August 2012; Published: 15 August 2012

© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Contents: Introduction; Analysis of Positioning Signals near Large Joint Gaps; Switching Algorithm Based on Adaptive Linear Prediction; Noise Suppression Pretreatment Based on Wavelet Analysis; Time Delay Characteristics Analysis of the Switching Algorithm; Switching Experiments of the Sensor; Conclusions; References; Figures.

Abstract: High-resolution positioning for maglev trains is implemented by detecting the tooth-slot structure of the long stator installed along the rail, but there are large joint gaps between long stator sections. When a positioning sensor is below a large joint gap, its positioning signal is invalidated, thus double-modular redundant positioning sensors are introduced into the system. This paper studies switching algorithms for these redundant positioning sensors. At first, adaptive prediction is applied to the sensor signals. The prediction errors are used to trigger sensor switching.
In order to enhance the reliability of the switching algorithm, wavelet analysis is introduced to suppress measuring disturbances without weakening the signal characteristics reflecting the stator joint gap, based on the correlation between the wavelet coefficients of adjacent scales. The time delay characteristics of the method are analyzed to guide the algorithm simplification. Finally, the effectiveness of the simplified switching algorithm is verified through experiments.

Keywords: maglev train; positioning sensor; switching algorithm; wavelet analysis; adaptive prediction

1. Introduction

The suspension function of high speed maglev trains is carried out by the electromagnetic attractive force between the electromagnets and the rail, and the train is driven by a linear synchronous motor [1–3]. The three-phase primary windings are inlaid in the slots of the long stator fixed along the rail, and the secondary windings are the electromagnets, as shown in Figure 1. In order to implement highly efficient synchronous traction, the traction system needs the precise relative position between the electromagnets and the long stator. Because there is no contact between the train and the rail, the technical requirements for the position and speed measurements of a maglev train are different from those for wheel-track systems [4]. Considering the dimensional accuracy of the tooth-slot structure of the long stator, high-precision positioning with millimeter-sized resolution can be achieved by detecting the tooth-slot structure with variable magnetic-resistance type sensors [5–9]. Detecting coils fixed on the side of the positioning sensor facing the long stator are driven by high-frequency signal sources. Taking one of the coils as an example, when the coil moves along the stator at a certain suspension gap, its inductance varies periodically. As a result, the amplitude of the coil signal varies accordingly, forming an amplitude modulation signal.
Then, a square wave corresponding to the tooth-slot structure can be obtained by comparing the amplitude-demodulated signal to a certain threshold. By counting the jumping edges of the square wave, the number of tooth-slot periods passed by the train can be determined, and positioning with higher resolution within a tooth-slot period can be achieved by looking up the mapping table between the sampled values of the amplitude-demodulated signals and the relative position. A magnetic pole period of the 3-phase windings contains six tooth-slot periods, as shown in Figure 1(b). Thus, the position within a tooth-slot period can be denoted by an electrical phase angle between 0° and 60°, as shown in Figure 2.

In practice, due to installation requirements, there are some large joint gaps between long stator sections. The length of a gap is about two tooth-slot periods, as shown in Figure 3. When the sensor is moving below a large joint gap, the positioning signals are invalidated, but the traction system still requires normal positioning signals, as shown in Figure 3, so redundant positioning sensors are needed. Because of the space constraints of the train structure, only two-modular redundancy is adopted, which increases the difficulty of the switching algorithms.

In order to identify the invalidated positioning signal in time, fault diagnosis technologies can be adopted. Generally speaking, fault diagnosis technologies can be classified into three categories: methods based on system models, methods based on signal processing, and methods based on knowledge. Because model parameters such as carriage mass, tractive force, electrical brush friction and slope grade of the rail are unknown to the positioning sensor, model-based methods are not feasible. Moreover, knowledge-based methods usually require complicated inference procedures and knowledge bases, so it is hard for them to satisfy the time limit in this situation.
Therefore, methods based on signal processing are considered, to implement the switching of the positioning sensors in real time. In [8], some simulation studies of two-modular switching algorithms for the positioning sensor based on adaptive filters are performed. However, the waveform of the signals collected during actual runs of a maglev train is much worse than that of the simulated signals, because of the various disturbances and uncertainties present in practice. Therefore, the performance of the method in [8] will be reduced considerably in practical applications. In order to enhance the reliability of the switching algorithm, wavelet analysis is adopted in this paper to suppress measurement disturbances without weakening the signal characteristics caused by the stator joint gaps, based on the correlation between the wavelet coefficients of adjacent scales. The time delay characteristics of the method are analyzed to guide the simplification of the algorithm. Finally, the effectiveness of the simplified algorithm is proven through simulations and experiments.

2. Analysis of Positioning Signals near Large Joint Gaps

Figure 4 shows the phase signal waveforms of a positioning sensor recorded during a test run. As Figure 4 shows, the phase signal near a large joint gap suffers serious waveform distortion. A large joint gap will also cause tooth-slot period counting losses; that is to say, the tooth-slot period number obtained by the sensor will be one less than the number required by the traction system. When the sensor has passed across a large joint gap, the phase signal becomes normal again, but the phase error caused by the counting loss accumulates and cannot be corrected automatically. The counting loss of one tooth-slot period corresponds to a phase lag of 60°.
The phase difference will break the synchronization between the traveling magnetic field and the electromagnets' magnetic field, reduce the traction efficiency considerably, and may even cause overcurrent protection or damage to the traction system. If the accumulated phase difference reaches 180° after passing several such gaps, the traction system will generate a wrong tractive force in an unexpected direction. This is a potential safety hazard. Therefore, the switching algorithm should identify the signal distortion caused by the large gaps accurately and in time, switch to the other positioning sensor before a counting loss happens, and then correct the tooth-slot period count value of the invalid sensor according to that of the normal one.

3. Switching Algorithm Based on Adaptive Linear Prediction

The jumping edges of the sawtooth phase wave contain many high frequency harmonic components, which would complicate signal prediction and characteristic extraction. Therefore, the phase signal first needs to be converted to a form that eliminates the influence of the jumping edges. Let p_h(k) denote the current phase value and n(k) denote the current tooth-slot period number. Considering that a tooth-slot period corresponds to a phase angle of 60°, the converted signal is calculated as follows [10]:

\( p_{ha}(k) = 60\,n(k) + p_h(k) \)  (1)

Because of the slight time differences between the jumping edges of the phase signal and those of the tooth-slot number signal, there are spike pulses in the signal p_{ha}(k). The spike pulses can be eliminated through simple logical pretreatment [10]. Since the spike pulses are about 60°, they can be identified by examining the difference between adjacent sampled phases: if the phase difference is about 60°, the current sampled phase is set equal to the former one, so that the spike pulse is eliminated. The converted phase signal near a large joint gap is shown in Figure 5, where the dashed line denotes the converted signal of an ideal phase signal.
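As a minimal sketch (not the authors' implementation), the conversion of Equation (1) and the spike-pulse pretreatment can be written as follows; the function names and the ±10° tolerance are illustrative assumptions:

```python
def convert_phase(ph, n):
    """Equation (1): p_ha(k) = 60*n(k) + p_h(k).
    ph: sawtooth phase samples in degrees (0..60); n: tooth-slot counts."""
    return [60 * nk + phk for phk, nk in zip(ph, n)]

def remove_spikes(pha, tol=10.0):
    """Suppress spike pulses of about 60 degrees caused by the slight
    timing mismatch between phase and counter jumping edges: when a
    sample differs from its predecessor by roughly 60 degrees, hold
    the previous value instead."""
    out = list(pha)
    for k in range(1, len(out)):
        if abs(abs(out[k] - out[k - 1]) - 60.0) < tol:
            out[k] = out[k - 1]
    return out
```

With this pretreatment, a genuine count jump shows up as a smooth ramp in the converted signal, while the short edge-mismatch spikes are held flat.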
It can be seen that the tooth-slot period counting loss results in a phase lag.

The converted phase signal is not a stationary random process; its statistical properties vary continually. In this case, least-mean-square-error adaptive linear prediction is applicable for predicting the phase signal, based on an evaluation of the short-time statistics of the signal. The basic idea of the switching algorithm is to predict p_{ha}(k) from the historical data p_{ha}(k − 1), p_{ha}(k − 2), …, p_{ha}(k − m). The prediction error is then e(k) = p_{ha}(k) − p̂_{ha}(k), where p̂_{ha}(k) denotes the predicted value of p_{ha}(k). Thus sensor switching can be triggered by comparing the prediction error to a suitable threshold.

Let m denote the length of the predictive filter; the predicted value p̂_{ha}(k) is calculated as follows [11]:

\( \hat p_{ha}(k) = \mathbf{w}(k)^T \mathbf{p}_{ha}(k) \)  (2)

where:

\( \mathbf{p}_{ha}(k) = [\, p_{ha}(k-1) \;\; p_{ha}(k-2) \;\; \cdots \;\; p_{ha}(k-m) \,]^T \)  (3)

\( \mathbf{w}(k) = [\, w_1(k) \;\; w_2(k) \;\; \ldots \;\; w_m(k) \,]^T \)  (4)

and w(k) is the weight vector of the predictive filter. The adaptation of the weight vector under the least-mean-square-error criterion is:

\( \mathbf{w}(k+1) = \mathbf{w}(k) + \mu\, \mathbf{p}_{ha}(k)\, e(k) \)  (5)

Reference [11] gives the range of μ that makes the iterative process of Equation (5) converge when m is relatively large; however, the phase signal varies quickly when the train is running, so the degree of nonstationarity of the signal is high. Therefore, m should be set to a small value. In this case, the upper limit for μ cannot be calculated exactly, so in this paper μ is chosen empirically as:

\( \mu = 1 / \lambda_{\max} \)  (6)

where λ_max denotes the maximal eigenvalue of the matrix p_{ha}(k) p_{ha}(k)^T. Figure 6(a) shows the comparison between the converted phase signal and the signal predicted by the adaptive linear prediction method discussed above. Figure 6(b) shows the absolute value of the prediction error.
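The predictor of Equations (2)–(5) can be sketched as below. Note one deliberate substitution: instead of the paper's μ = 1/λ_max of Equation (6), this sketch uses the per-sample normalized step μ(k) = 1/(xᵀx), which plays the same role (scaling the step by the signal power) and keeps the toy example stable; all names are illustrative:

```python
import numpy as np

def adaptive_predict(signal, m=4):
    """One-step adaptive linear prediction (Equations (2)-(5)).

    Assumption: a normalized step mu(k) = 1/(x^T x) stands in for the
    paper's mu = 1/lambda_max.  Returns (predictions, errors e(k))."""
    x_all = np.asarray(signal, dtype=float)
    w = np.zeros(m)                              # weight vector w(k)
    pred = np.zeros_like(x_all)
    err = np.zeros_like(x_all)
    for k in range(m, len(x_all)):
        x = x_all[k - m:k][::-1]                 # [p(k-1), ..., p(k-m)]
        pred[k] = w @ x                          # Equation (2)
        err[k] = x_all[k] - pred[k]              # prediction error e(k)
        w = w + x * err[k] / (x @ x + 1e-12)     # normalized Equation (5)
    return pred, err
```

On a smooth ramp (a train moving at near-constant speed) the error decays toward zero, while a sudden 60° lag from a counting loss would reappear as a large isolated prediction error, which is exactly what the threshold comparison exploits.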
In Figure 6(a), the jumping distortion of the waveform at about the 410th step indicates that the sensor is beginning to be affected by the large joint gap. This jumping distortion corresponds to the peak at about the 410th step in Figure 6(b). The peak at about the 534th step indicates that the sensor is moving out of the influence range of the large joint gap. However, as Figure 6(b) shows, the prediction error peaks due to the large joint gap are not larger than the prediction errors due to measurement noise. Considering the various uncertainties encountered during practical runs of a maglev train, sensor switching based on straightforwardly comparing the prediction error to a certain threshold may result in false operation or switching failure. Therefore, it is necessary to further amplify the difference between the signal characteristics reflecting large joint gaps and those of the measurement disturbances.

4. Noise Suppression Pretreatment Based on Wavelet Analysis

Low pass filters (LPF) can be used to smooth the converted signal and suppress noise, but they also weaken the signal characteristics reflecting the stator joint gaps. In contrast, the method based on the correlation between the wavelet coefficients of adjacent scales [12] can suppress measurement disturbances without weakening the signal characteristics. Generally, for a signal, the Lipschitz exponent is larger than zero on continuous sections and equal to zero at step-type discontinuities, whereas the Lipschitz exponent of a noise signal is less than zero. Accordingly, the wavelet coefficients of the three cases propagate differently across the transformation scales: for the former two cases, the wavelet coefficients of adjacent scales have a relatively strong correlation, but for a noise signal the correlation is not obvious.
Hence, by multiplying each wavelet coefficient of one scale by the corresponding coefficient of an adjacent scale, the noise can be suppressed while the valid signal characteristics are enhanced [12]. For a discrete-parameter wavelet transform, the numbers of wavelet coefficients at different scales are not the same because of the binary downsampling, so it is not feasible to perform the one-to-one multiplication of the coefficients of adjacent scales directly. To solve this problem, the stationary wavelet transform algorithm (à trous algorithm) is adopted, which makes the number of wavelet coefficients at each scale equal to the length of the original data when the finite length problem is not considered.

Consider an orthogonal discrete-parameter wavelet with a limited support interval. Let h denote the low pass analysis filter determined by the wavelet and g denote the corresponding high pass analysis filter; h and g are thus Finite Impulse Response (FIR) filters. Suppose the length of the filters is 2N. Let h^j and g^j respectively denote the filters of the jth scale, obtained by inserting (2^j − 1) zero elements behind each element of h and g. They are expressed as follows:

\( h^j = [\, h_0^j \;\; h_1^j \;\; \cdots \;\; h_{2^{j+1}N-1}^j \,] \)  (7)

\( g^j = [\, g_0^j \;\; g_1^j \;\; \cdots \;\; g_{2^{j+1}N-1}^j \,] \)  (8)

Let p_{ha}^j(k) and d^j(k) respectively denote the scale coefficients and the wavelet coefficients obtained by applying the stationary wavelet transform to the signal p_{ha}(k) on the jth scale.
They are calculated as follows:

\( p_{ha}^{j+1}(k) = [\, p_{ha}^{j}(k) \;\; p_{ha}^{j}(k+1) \;\; \cdots \;\; p_{ha}^{j}(k+2^{j+1}N-1) \,]\, (h^j)^T \)  (9)

\( d^{j+1}(k) = [\, p_{ha}^{j}(k) \;\; p_{ha}^{j}(k+1) \;\; \cdots \;\; p_{ha}^{j}(k+2^{j+1}N-1) \,]\, (g^j)^T \)  (10)

p_{ha}^j(k) can be perfectly reconstructed as follows:

\( p_{ha}^{j}(k) = [\, p_{ha}^{j+1}(k) \;\; p_{ha}^{j+1}(k-1) \;\; \cdots \;\; p_{ha}^{j+1}(k-2^{j+1}N+1) \,]\, (h^j)^T + [\, d^{j+1}(k) \;\; d^{j+1}(k-1) \;\; \cdots \;\; d^{j+1}(k-2^{j+1}N+1) \,]\, (g^j)^T \)  (11)

According to Equations (9) and (10), d^j(k) is a linear combination of p_{ha}(k), p_{ha}(k + 1), …, p_{ha}(k + 2^{j+1}N − 2N − j), and d^{j+1}(k) is a linear combination of p_{ha}(k), p_{ha}(k + 1), …, p_{ha}(k + 2^{j+2}N − 2N − j − 1). So d^j(k), …, d^j(k + 2^{j+1}N − 1) all have strong correlations with d^{j+1}(k). In order to enhance the signal characteristics and suppress noise more effectively, it is necessary to find the particular d^j(k′) that has the maximal correlation with d^{j+1}(k), and then multiply d^j(k′) by d^{j+1}(k). An empirical formula for choosing k′ is:

\( k' = k + \mathrm{int}\!\left( c\,(2^{j+1}N - 1) \right), \quad c \in [0, 1] \)  (12)

For different wavelets, c is different and can be determined by experiments.

The analysis results for the converted signal p_{ha}(k) based on the stationary wavelet transform algorithm are shown in Figure 7. The signal is transformed on two scales (j = 1, 2). Figure 7(a,b) show the scale coefficients, and Figure 7(c,d) show the wavelet coefficients. Considering the time limit and computational load of the sensor's practical operating conditions, the "db1" wavelet ("haar" wavelet) is selected; it has the shortest filter length, with N = 1, and is not affected by the finite length problem. Figure 8 shows the waveform of d^{j+1}(k) · d^j(k′), where c = 0.5.
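For the Haar ("db1") case with N = 1, Equations (9), (10) and (12) admit a compact sketch. This is an illustrative implementation, not the authors' code: the boundary handling (simply dropping the last samples of each scale) and the per-scale normalization are simplifying assumptions:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)   # Haar filters: h = [1, 1]/sqrt(2), g = [1, -1]/sqrt(2)

def swt_haar(x, levels=2):
    """Undecimated (a trous) Haar transform, Equations (9)-(10).
    At scale j the two filter taps sit 2^j samples apart (zero insertion).
    Returns scale coefficients p[0..levels] and wavelet coeffs d[1..levels]."""
    p = [np.asarray(x, dtype=float)]
    d = []
    for j in range(levels):
        step = 2 ** j
        pj = p[-1]
        n = len(pj) - step                             # drop boundary samples
        p.append((pj[:n] + pj[step:step + n]) / SQRT2)  # Equation (9)
        d.append((pj[:n] - pj[step:step + n]) / SQRT2)  # Equation (10)
    return p, d

def adjacent_scale_product(d, c=0.5, N=1):
    """d^2(k) * d^1(k'), with k' = k + int(c(2^{j+1}N - 1)) from Eq. (12), j = 1."""
    shift = int(c * (2 ** 2 * N - 1))                  # = 1 for c = 0.5, N = 1
    d1, d2 = d[0], d[1]
    n = min(len(d2), len(d1) - shift)
    return d2[:n] * d1[shift:shift + n]
```

For a step-type distortion the product is large and positive right at the step, while zero-mean noise tends to cancel across scales, which is the amplification effect visible in Figure 8.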
It can be seen that after the multiplication of the wavelet coefficients of adjacent scales, the wavelet coefficients indicating the moments when the sensor moves into and out of the influence range of the large joint gap are enhanced noticeably. Choose a threshold T = 10, and update the wavelet coefficients by setting d^{j+1}(k) and d^j(k′) to zero when d^{j+1}(k) · d^j(k′) < T. A reconstructed signal can then be obtained by applying Equation (11) to the scale coefficients and the updated wavelet coefficients, as shown in Figure 9. Compared to the original signal p_{ha}(k), the reconstructed signal is smoother without the key characteristics being weakened. The signal predicted from the reconstructed signal by adaptive linear prediction is also shown in Figure 9. Figure 10 shows the prediction error. In contrast with Figure 6(b), after the noise suppression pretreatment, the distinctions between the signal distortions caused by large joint gaps and the measurement noise are amplified effectively. So the effectiveness and reliability of the sensor switching decision can be improved by comparing the signal shown in Figure 10 to a suitable threshold.

5. Time Delay Characteristics Analysis of the Switching Algorithm

First, consider the switching method based on adaptive linear prediction without the noise suppression process. Because the prediction error e(k) can be computed in the same sampling period in which p_{ha}(k) is obtained, the time delay of this method is within one sampling period. When the discrete wavelet transform is introduced into the process, the finite length problem (boundary effect) must be considered, except for the "db1" wavelet. The data affected by the boundary effect are always the latest sampled values of the phase signal.
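The thresholding step can be sketched as follows; the function is an illustrative assumption (the paper compares the adjacent-scale product to T = 10, zeroing the paired coefficients where the product stays below T before reconstructing via Equation (11)):

```python
import numpy as np

def denoise_and_detect(product, T=10.0):
    """Return (mask, first_hit) for an adjacent-scale product sequence.

    mask[k] is True where the product reaches the threshold T; the paired
    coefficients d^{j+1}(k) and d^j(k') would be kept there and zeroed
    elsewhere before reconstruction.  first_hit is the index of the first
    above-threshold sample (None if there is none) - in the simplified
    algorithm this index directly triggers the sensor switching."""
    product = np.asarray(product, dtype=float)
    mask = product >= T
    hits = np.flatnonzero(mask)
    return mask, (int(hits[0]) if hits.size else None)
```

Keeping the first-hit index separate from the mask mirrors the two uses of the product in the paper: denoising before reconstruction, and direct switching in the simplified algorithm.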
For the algorithm based on Equations (9) and (10), the boundary effect means that the transformation coefficient sequence is shorter than the original signal, and the serial number of the latest transformation coefficient is smaller than that of the latest sampled value. That is, the signal characteristics reflected by the latest coefficient lag behind the latest sampled signal. As a result, the switching moment is postponed by several sampling steps. In general, the time delay of the switching becomes more serious as the length of the FIR wavelet filters and the number of transformation layers (scales) increase.

One way to address the boundary problem is to append artificial data after the latest sampled datum, extending the original signal until the serial number of the latest transformation coefficient equals that of the latest sampled value. However, the appended data differ from the real data sampled later, so the corresponding coefficients cannot reflect the signal characteristics exactly. References [13,14] investigate the boundary effect of the discrete wavelet transform; there, the Gram-Schmidt orthogonalization method is adopted to orthogonalize the boundary vectors of the wavelet transformation matrix. This technique guarantees the orthogonality and reversibility of the transformation for finite-length data sequences; however, the orthogonalized boundary vectors lose their frequency-selective function. The wavelet coefficients and scale coefficients corresponding to these orthogonalized vectors still cannot clearly reflect the detailed information and rough tendency of the original signals. These coefficients are therefore effectively invalid data and cannot be used for the switching decision. Consequently, the switching delay caused by the inherent boundary problem of the wavelet transform cannot be avoided, except by choosing a wavelet filter with a small N. Furthermore, the signal reconstruction also introduces a delay.
Let p_{ha}(k) denote the datum at the jumping distortion point indicating that the sensor is beginning to be affected by the large joint gap. Let j denote the scale of the wavelet transform, with the scale of the original data denoted as j = 0. Denote by d^j(k_j) and p_{ha}^j(k_j) the latest wavelet coefficient and scale coefficient on the jth scale obtained from Equations (9) and (10) using the data p_{ha}(k), p_{ha}(k − 1), p_{ha}(k − 2), …. The serial number k_j can be calculated as follows:

\( k_j = k - (2^1 N - 1) - (2^2 N - 1) - \cdots - (2^j N - 1) = k - 2^{j+1}N + 2N + j \)  (13)

Furthermore, denote by k_0 the serial number of the latest reconstructed datum on the 0th scale obtained from Equation (11) using the wavelet coefficients and scale coefficients on the jth scale with serial numbers no larger than k_j. According to Equation (11):

\( k_0 = k_j = k - 2^{j+1}N + 2N + j \)  (14)

The analysis above indicates that the switching algorithm combining adaptive linear prediction and the stationary wavelet transform has a time delay of about 2^{j+1}N − 2N − j steps. The sampling period of the positioning sensor is 500 μs. Since the "db1" wavelet is chosen in Section 4 and the sampled signal is transformed to the 2nd scale, the time delay is about 2 ms. In engineering practice, the positioning sensor is only enabled when the train is running at a speed below 20 km/h; when the running speed exceeds 20 km/h, the position and phase information can be obtained by detecting the back electromotive force of the primary windings. In a time span of 2 ms, the train travels about 11 mm at a speed of 20 km/h. Considering that the length of a tooth-slot period is about 86 mm, this algorithm can avoid the tooth-slot period counting loss in time. In order to further reduce the time delay and the computational load, the algorithm discussed in Section 4 needs to be simplified.
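The delay bookkeeping above is easy to check numerically; this small sketch uses the values stated in the text (db1 wavelet, N = 1, two scales j = 2, 500 μs sampling period):

```python
def full_delay_steps(N, j):
    """Boundary-induced delay of the wavelet-based algorithm in sampling
    steps: k - k_0 = 2^{j+1} N - 2N - j, i.e. the sum of (2^i N - 1)
    over i = 1..j."""
    return 2 ** (j + 1) * N - 2 * N - j

SAMPLE_US = 500                                   # sensor sampling period
delay_ms = full_delay_steps(N=1, j=2) * SAMPLE_US / 1000.0   # 4 steps -> 2.0 ms
```

At 20 km/h (about 5.6 mm/ms), the resulting 2 ms delay corresponds to roughly 11 mm of travel, well inside one 86 mm tooth-slot period.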
Indeed, after the noise suppression pretreatment, the signal characteristics due to the stator joint gap are already clearly distinguished from the noise, as Figure 8 shows. So the switching can be triggered directly by comparing the product of the wavelet coefficients of adjacent scales to a suitable threshold; the reconstruction and adaptive prediction processes can then be omitted, simplifying the algorithm considerably. According to Equation (12), the serial number of the original sample most relevant to the k_jth wavelet coefficient on the jth scale can be calculated as follows:

\( k_0 = k_j + \mathrm{int}\!\left( c\,(2^j N - 1) + c\,(2^{j-1} N - 1) + \cdots + c\,(2^1 N - 1) \right) \approx k - \mathrm{int}\!\left( (1-c)\big( (2^j N - 1) + (2^{j-1} N - 1) + \cdots + (2^1 N - 1) \big) \right) = k - \mathrm{int}\!\left( (1-c)(2^{j+1}N - 2N - j) \right) \)  (15)

That is to say, if p_{ha}(k) is the datum at the first jumping distortion point, the corresponding peak of the product of the wavelet coefficients is postponed by about int((1 − c)(2^{j+1}N − 2N − j)) steps. Comparing Equation (15) with Equation (14), it can be seen that the algorithm simplification reduces the time delay by about int(c(2^{j+1}N − 2N − j)) steps. When N = 1, j = 2, c = 0.5, and the sampling period is again 500 μs, the time delay is about 1 ms. In addition, with the simplified algorithm, a shorter sampling period can be adopted to decrease the switching delay further.

6. Switching Experiments of the Sensor

The experiments were carried out on the 1.5 km high speed maglev train test line in Shanghai, China. Large joint gaps, as shown in Figure 3, are located along the test line at intervals. The two-modular redundant positioning sensors are installed on the box girder of the maglev train, as shown in Figure 1, at a distance of about three tooth-slot periods (258 mm); the phase signals of the two sensors are therefore almost the same, and even with switching between the phase signals, the final phase signal remains continuous.
The sensors are connected to an upper computer via communication cables and upload the phase signal through an RS485 interface in real time. The upper computer identifies the signal characteristics due to the joint gaps, based on the correlation between the wavelet coefficients of adjacent scales, and then implements the sensor switching. The flow chart of the switching algorithm running on the upper computer is shown in Figure 11.

The switching experiment results are shown in Figure 12. For a maglev train, there are two positioning sensors in one set of the speed and position detection system. As Figure 12 shows, when the train is passing a large joint gap, the phase signal distortions of the two sensors occur one after another. At about the 220th step, the phase signal of sensor 1 (denoted by the red dashed line) begins to be distorted, and the final signal (denoted by the black line) is switched to sensor 2, which is normal at that moment. About three tooth-slot periods later (at about the 440th step), the phase signal of sensor 2 (denoted by the blue line) begins to be distorted, and the final signal is switched back to sensor 1. Therefore, when the train is passing a large joint gap, the final signal always remains normal.

From Figure 12, it can be seen that the switching algorithm effectively avoids the accumulated phase errors caused by the tooth-slot period counting loss. However, because of the switching delay, the waveform distortions near the switching moments are not eliminated completely. This can be improved by introducing proper filtering and shaping processes [10] into the algorithm.

7. Conclusions

This paper studies two-modular switching algorithms for the positioning sensors, to solve the problem caused by the stator joint gaps. First, adaptive filtering is applied to predict the phase signal of the sensor, and the switching is triggered based on the prediction error.
In order to enhance the reliability and effectiveness of the switching algorithm, wavelet analysis is introduced to suppress measurement disturbances without weakening the signal characteristics affected by the stator joint gaps, based on the correlation between the wavelet coefficients of adjacent scales. To improve the response speed of the algorithm, a simplified algorithm is proposed and its time delay characteristics are analyzed. The analytical and experimental results show that when the train is running at a speed below 20 km/h, the designed algorithm can switch the positioning sensors in good time and can effectively eliminate the accumulated phase errors due to the tooth-slot period counting losses.

Acknowledgments

This work was performed at the Engineering Research Center of Maglev Technology at the National University of Defense Technology, with funding from the National Natural Science Foundation of China under grant No. 60974128.

References

1. Liu, H.Q. University of Electronic Science and Technology of China Press: Chengdu, China, 1995; pp. 74–88.
2. Wu, X.M. Shanghai Science and Technology Press: Shanghai, China, 2008; pp. 1808–1811.
3. Long, Z.Q.; He, G.; Xue, S. Study of EDS and EMS hybrid suspension system with permanent-magnet Halbach array. IEEE Trans. Magn. 2011, 47, 4717–4724; doi:10.1109/TMAG.2011.2159237.
4. García-Martín, J.; Gómez-Gil, J.; Vázquez-Sánchez, E. Non-destructive techniques based on eddy current testing. Sensors 2011, 11, 2525–2565; doi:10.3390/s110302525.
5. Qian, C.Y.; Han, Z.Z.; Shao, D.R.; Xie, W.D. Survey on the techniques of speed and position measurement of maglev train. 2004, 11, 1902–1906.
6. Dai, C.H.; Long, Z.Q.; Xie, Y.D.; Xue, S. Research on the filtering algorithm in speed and position detection of maglev trains. Sensors 2011, 11, 7204–7218; doi:10.3390/s110707204.
7. Xue, S.; Dai, C.H.; Long, Z.Q. Research on location and speed detection for high speed maglev train based on long stator. In Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA), Jinan, China, 7–9 July 2010; pp. 6953–6958.
8. Li, L.; Wu, J.; Luo, H.H. Research of joint problem in speed and position detection system of maglev train. 2009, 31, 69–72.
9. Guo, X.Z.; Wang, X.; Wang, S.X. Location and speed detection system for high-speed maglev train. 2004, 4, 455–459.
10. Xue, S.; Long, Z.Q.; He, N. High precision position sensor design and its signal processing algorithm for maglev train. 2012, 5225–5245.
11. Simon, H.; Thomas, L. 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2005.
12. Sun, Y.K. China Machine Press: Beijing, China, 2005.
13. Herley, C.; Vetterli, M. Orthogonal time-varying filter banks and wavelet packets. 1994, 2650–2663.
14. Herley, C. Boundary filters for finite-length signals and time-varying filter banks. 1995, 102–114.

Figure Captions

Figure 1. (a) Sketch map of a high speed maglev train; (b) sketch map of the substructure of a high speed maglev train.
Figure 2. The relationship between the electrical phase angle and the tooth-slot structure.
Figure 3. The phase requirement at a large joint gap.
Figure 4. (a) Normal phase signal; (b) phase signal near a large joint gap.
Figure 5. The converted phase signal.
Figure 6. (a) The predicted phase signal; (b) the prediction error.
Figure 7. (a) The scale coefficients on the scale j = 1; (b) the scale coefficients on the scale j = 2; (c) the wavelet coefficients on the scale j = 1; (d) the wavelet coefficients on the scale j = 2.
Figure 8. The product of the wavelet coefficients of the 1st and 2nd scales.
Figure 9. The reconstructed signal and the predicted signal.
Figure 10. The prediction error.
Figure 11. The flow chart of the switching algorithm.
Figure 12. The experimental results of the two-modular switching.