content
stringlengths
86
994k
meta
stringlengths
288
619
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: I want to solve it, just walk me through it:) Assume you are rolling two dice; the first one is red, and the second is green. Use a systematic listing to determine the number of ways you can roll a total less than 12 on the two dice. In how many ways can you roll less than 12? • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. ^ see the table. Best Response You've already chosen the best response. All ways are valid, besides {6,6} Best Response You've already chosen the best response. so 35? Best Response You've already chosen the best response. There are 36 sample spaces. Best Response You've already chosen the best response. you want a total less than 12. Cool. So, how many ways can I get it? all except 6,6 i.e. when both dice have 6 on their face. so, one outcome is unfavorable out of 36 possible ones. whaddya think Best Response You've already chosen the best response. yes 35!! right-o! Best Response You've already chosen the best response. Thanks! && Thanks for that picture! Best Response You've already chosen the best response. Best Response You've already chosen the best response. Anytime :) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4fd23b4ee4b057e7d221494b","timestamp":"2014-04-20T00:42:37Z","content_type":null,"content_length":"50045","record_id":"<urn:uuid:f852161f-ddf3-4367-a4ee-64a2bcbf74fc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 10 Resolution: standard / high Figure 10. Number of predicted motifs and number of matches in ScerTF at 15% FDR, for RED^2 (mutual-information scoring function; same as Figure 8), FIRE and MatrixREDUCE. FIRE was run with default parameters (optimized for yeast), with k = 5, 10, 20, 40 and 80 clusters, and the number of clusters that yields the highest number of motifs was selected a posteriori for each dataset. MatrixREDUCE was run with default parameters, with seeds of length 7, 8 and 9. (Left) Average results of the three methods on the 24 yeast datasets. (Center) RED^2 and FIRE. Number of predicted motifs that match a known motif at 15% FDR in the ScerTF database for the 24 yeast datasets. The y-axis corresponds to the number achieved by RED^2 and the x-axis to the number achieved by FIRE with the best clustering procedure. Superimposed points are indicated by shading. RED^2 has more matches than FIRE in 21 datasets and fewer in three datasets, which gives a sign test P value of 0.0003. (Right) RED^2 and MatrixREDUCE (same explanations as for the center panel). RED^2 has more matches than MatrixREDUCE for 22 datasets and is on par for the remaining two, which gives a sign test P value of 4.77 × 10^ Lajoie et al. Genome Biology 2012 13:R109 doi:10.1186/gb-2012-13-11-r109 Download authors' original image
{"url":"http://genomebiology.com/2012/13/11/R109/figure/F10?highres=y","timestamp":"2014-04-20T09:20:55Z","content_type":null,"content_length":"12721","record_id":"<urn:uuid:7bffa7cd-a4e4-44d5-a73c-4f9b84601ca9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry, Prove the Identity and more do you like them? it's from Algebra and Trigonometry, Beecher (Calc-prep book) ... i love it! i didn't learn Trigonometry properly b/c i took it in a 3-week mini-mester ... i love Math now so i'm studying harddd Thanks you. Yes I do. Just finished the first one. wow, I almost missed these ones! I only did examples from 6.3, and nothing exciting was there, so I moved to the next exercise also because I had less time that day, and was tired from the work. I am on the last chapter of that I am doing this because I thought I need to brush up my basic skills before I start my first year of university(in EE). In calculus, I got to integration by parts, and then got reluctant.
{"url":"http://www.physicsforums.com/showthread.php?t=181548","timestamp":"2014-04-19T15:13:14Z","content_type":null,"content_length":"74092","record_id":"<urn:uuid:e23c95ac-ab60-48cb-80f6-cba4f6ed08f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Category Archives: Optimization As we said in the previous lecture it seems stupid to consider that we are in a black-box situation when in fact we know entirely the function to be optimized. Consider for instance the LASSO objective where one wants to minimize over . By … Continue reading In the following table we summarize our findings of previous lectures in terms of oracle convergence rates (we denote ). Note that in the last two lines the upper bounds and lower bounds are not matching. In both cases one can … Continue reading The reader is encouraged to read Part I of this series before reading this post. For ease of reference, we recall the polynomial optimization problem from our previous post: (1) This is the task of minimizing a multivariate polynomial … Continue reading Today we will talk about another property of convex functions that can significantly speed-up the convergence of first-order methods: strong convexity. We say that is -strongly convex if it satisfies (1) Of course this definition does not require differentiability … Continue reading Sum of squares optimization is an active area of research at the interface of algorithmic algebra and convex optimization. Over the last decade, it has made significant impact on both discrete and continuous optimization, as well as several other disciplines, … Continue reading In this lecture we consider the same setting than in the previous post (that is we want to minimize a smooth convex function over ). Previously we saw that the plain Gradient Descent algorithm has a rate of convergence of … Continue reading In this lecture we consider the unconstrained minimization of a function that satisfy the following requirements: (i) admits a minimizer on . (ii) is continuously differentiable and convex on . (iii) is smooth in the sense that the gradient mapping … Continue reading In this lecture we consider the unconstrained minimization of a function that satisfy the following requirements: (i) admits a minimizer on such that . (ii) is convex on . Recall that in particular this implies for any . (iii) is … Continue reading Our study of the computational complexity of mathematical optimization led us to realize that we are still far from a complete understanding of what exactly can be optimized with a computer. On the positive side we proved that some problems can be … Continue reading In this post I will go over some simple examples of applications for the optimization techniques that we have seen in the previous lectures. The general theme of these examples is the canonical Machine Learning problem of classification. Since the … Continue reading
{"url":"http://blogs.princeton.edu/imabandit/category/optimization/page/2/","timestamp":"2014-04-19T02:03:23Z","content_type":null,"content_length":"44067","record_id":"<urn:uuid:c1a7354d-63a8-4a64-939f-4267e248f85b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
This website requires JavaScript in order to work correctly. Please enable JavaScript in your browser. ^probability/ˌprɑːbəˈbɪləti/ noun Learner's definition of PROBABILITY 1 : the chance that something will happen • The probability [=likelihood] of an earthquake is low/high. • There is a low/high probability that you will be chosen. 2 [singular] : something that has a chance of happening 3 : a measure of how often a particular event will happen if something (such as tossing a coin) is done repeatedly in all probability : almost certainly: very likely • In all probability, he will go home tomorrow. • We will contact you, in all probability, next week. Comments & Questions See the answer » Learn More »
{"url":"http://www.learnersdictionary.com/definition/probability","timestamp":"2014-04-17T15:53:51Z","content_type":null,"content_length":"53898","record_id":"<urn:uuid:3d0791c1-5d21-40eb-879f-402503a2c6e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Epsilon - Delta Definition of Limit. October 8th 2012, 10:46 PM #1 Sep 2012 San Jose, CA Epsilon - Delta Definition of Limit. I need to prove Lim X --> (-2) of (3x)/(3x+2) = 3/2 Using the Epsilon - Delta definition of limit. ( | are my absolute value signs, not a form of parentheses ) For preliminary i have: if, 0<|x+2|< (delta), then, | { (3x)/(3x+2) } - { 3/2 } | < (epsilon) I believe I need to then figure that | { (3x) (2) / (3x+2) (2) } - { (3) (3x+2) / (2) (3x+2) } | < (epsilon) So I have | { (6x)/(3x+2)(2) } - { (9x+6)/(3x+2)(2) } | < (epsilon) Then I have | { (-3x-6)/(3x+2)(2) } | < (epsilon) If that part is correct so far, I have no idea where to go next. Re: Epsilon - Delta Definition of Limit. We want $\left|\frac{-3x-6}{2(3x+2)}\right|=\frac{3}{2}\frac{|x+2|}{|3x+2|} <\varepsilon$. This is the case if $\frac{|x+2|}{|3x+2|}<\frac{2\varepsilon}{3}$. We have an upper bound for the numerator: $|x+2|<\delta$. To bound $\frac{|x+2|}{|3x+2|}$ from above, try bounding $|3x+2|$from below when $|x+2|<\delta$. Re: Epsilon - Delta Definition of Limit. Hey powercroat783. For these kind of problems, you want to get an epsilon in terms of a delta. So you have a bound for x in terms of delta, and now you want to get an epsilon in terms of that delta so that when you use the epsilon relation above (i.e. |blah| < epsilon) the x's are in terms of epsilons (or at least some algebraic expression is) and then you show that the stuff in the expression meets the criteria with the given epsilon. So the real thing for these problems becomes picking a epsilon that is a function of delta and you know that |x+2| is in-between 0 and delta. So you have 3x+6 = 3(x+2) > 3x+2 so you want to consider your epsilon in terms of this function. The triangle inequality is useful in that |A +- B| <= |A| + |B| < epsilon. Usually the way that this kind of thing is done is that if you have the A and B terms above, you want to show that each term is less than epsilon/2 and then you use this triangle inequality to bring things together. So try and focus on getting the larger term less than epsilon/2 and show that the other ones is less than that and you're done. Re: Epsilon - Delta Definition of Limit. For these kind of problems, you want to get an epsilon in terms of a delta. So you have a bound for x in terms of delta, and now you want to get an epsilon in terms of that delta so that when you use the epsilon relation above (i.e. |blah| < epsilon) the x's are in terms of epsilons (or at least some algebraic expression is) and then you show that the stuff in the expression meets the criteria with the given epsilon. I am having trouble understanding this, in particular phrases like "the x's are in terms of epsilons" and "the stuff in the expression meets the criteria with the given epsilon." Of course, in the end you want a delta in terms of an epsilon, not the other way around. Re: Epsilon - Delta Definition of Limit. When I did these kind of things a long time ago, we got the expression regarding |f(x) - f(l)| < epsilon by showing that the expression inside the norm had those properties and thus got a condition of the terms with respect to epsilon. I understand that all this is doing is saying that mult-dimensionally, you shrink the range of the domain around a point and the norm of the mapping between the two values shrinks as well intuitively, but your delta's can be written in epsilons as well as the epsilons written in delta. I'm just explaining how I used to do this: you get a way of showing the norm of the difference of the final map has the properties by relating them to an expression involving epsilon and showing that these quantities maintain the inequality of the whole norm (i.e. your |f(x) - l| term). October 9th 2012, 12:55 AM #2 MHF Contributor Oct 2009 October 9th 2012, 01:01 AM #3 MHF Contributor Sep 2012 October 9th 2012, 01:08 AM #4 MHF Contributor Oct 2009 October 9th 2012, 02:19 AM #5 MHF Contributor Sep 2012
{"url":"http://mathhelpforum.com/calculus/204915-epsilon-delta-definition-limit.html","timestamp":"2014-04-20T13:03:27Z","content_type":null,"content_length":"44987","record_id":"<urn:uuid:757251b9-9da4-4f0b-8950-8fa0d6c89b98>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
M@h*(pOet)?ica—Circles, Part 3 | Guest Blog, Scientific American Blog Network Editor’s note (11/7/13): Find the entry point and new posts of Bob Grumman’s M@h*(pOet)?ica at http://poeticks.com/ #StorySaturday is a Guest Blog weekend experiment in which we invite people to write about science in a different, unusual format – fiction, science fiction, lablit, personal story, fable, fairy tale, poetry, or comic strip. We hope you like it. Today’s lesson will start with something of mine that is so elementary I won’t bother to try to explain it: More complex is the following poem by the only other mathexpressive poetry specialist besides myself that I know of, Kaz Maslanka, “Beginner’s Mind”: Kaz kindly agreed to provide a brief lecture on this for presentation here: “There are two ideas of perfection and they both take the form of a circle. The mathematical expression shown in the piece is the analytic geometrical equation for a circle and it is so perfect in its conception that nothing in nature comes close to its perfection – it is infinitely perfect in its denotation yet, ultimately it is a manifestation of the mind. The circle is also used as a symbol of infinity. The swirling circle image was originally done with ink on paper and later digitized. The image is of the Korean Il Won Sang or Japanese Enso. It connotes the perfect idea of emptiness. Many Zen monks practice drawing this circle as an exercise to focus on the present moment. The Chinese characters in the background literally say beginner’s mind.” I love its title. “Beginner’s mind.” That, I feel is exactly what we should all be using when encountering a poem—especially what I call an “otherstream” poem, the otherstream being where poems of a kind (like mathexpressive poems) not yet certified by the Poetry Establishment can be found. To pin down (tediously) what Kaz’s poem objectively is most exactly, because I deem it worthy of a full-scale examination, I say it is first of all something to be read since it consists of Chinese words, and mathematical symbols I consider likewise to be words because they represent sounds those understanding them will interpret as words (e.g., “squared”–and “ex,” which here is a letter used as a word rather than just a letter). It is also what I call a “visimage”—a visual image in a much larger sense than printed words are. The verbal and the visual interacting: hence, a visual poem. This is important to know; otherwise one may take it as just an equation with a pleasant design as a background—an equation with no aesthetic significance. But, to me (aided by its author’s commentary on it), the equation, by existing all by itself in otherwise empty space, poetically represents something reduced (but also intensified) to the perfection of the wholly symbolic. Below it is the clearly unabstract, roughly and swiftly made, incomplete circle, acting as a specific specimen of what the equation universally denotes. It, too, is alone in otherwise empty space. I think what it connotes is especially vivid in comparison with the equation’s lack of connotative value—of sensual connotative value, at any rate. (Nothing can avoid being in some way connotative. One thing this, or probably any, equation, connotes—as the image on the page it can’t avoid being–is a kind of philosophic ethereality). As Kaz points out, his Zen circle speaks in an oddly earthy way of infinity, of rushing somewhere it will never get to as the infinite does, but speaking of the never-beginning, never-ending structure that it attempts to portray. Note that the circle has a texture, as the equation does not. Width and thickness, too. Then we have what suggests to me a calendar behind the equation and uncompleted circle, an implicitly unending repetition of durations. A mind, eternally churning—at its beginning, turning a seen circle into a felt image, then ascending into an equation for, its perfect essence. Final result: the marriage of the physical and the conceptual in a manner almost impossible for either words alone or mathematical symbols alone to achieve. With Connie Tettenborn’s “Staying Centered,” we return to the formula for a circle’s circumference, “pi times twice the radius,” which she makes equal to the never-beginning, never-endings circle Kaz Maslanka had in mind, as a way of evoking what the inner remoteness of jogging long distances is like. Nicely adding to the experience are the repeated glimpses of the ground the countless footsteps are covering, covering, covering.. But note how these glimpses ever so subtly suggest where the wandering thoughts may be going. . . . Next up, “The Mind of Wallace Stevens,” a work of mine inspired in part by “Beginner’s Mind,” but also by a fear I might not have enough poems for a full entry this time around. As is the case with all mathexpressive poems, it has a good deal to do with the abstract versus the material. To the left of the equals sign is the standard formula for the volume of a sphere, using “the eternal sky” as the radius of the sphere involved, which the righthand side of the equation says equals the mind of Wallace Stevens. The sky seemed appropriate to me because above us, and seemingly close to being abstract because so unsensual, as well as huge, the way Stevens’s poetry so often seems. Note that I didn’t use just “sky” for the radius, I used “the sky”—because I meant the whole thing, not what might be only a local piece of it, or less. And I said, “the eternal sky,” even though aware how that risked sounding pretentious, because I wanted to underscore the size of the sky in time as well as in space, thus implying how much sensual matter it must have, and Stevens’s poetry and mind must have (clouds and birds being only the most obvious of this matter). To repeat that point, I give the words denoting Stevens’s mind color, color which at times cannot be contained by it. Working with the letters on the other side of the equation, which include the Greek pi (which is also in this case a full word), I provided outlining to them in hopes of subtly making what they denote more than what they denote. Again, the concern in my work with the idea of the difference between a mathematical symbol (or any other kind of symbol) of one color and the same symbol colored differently. And of employing color where it normally is not in poems to make a poem new, which is by now a boringly unnew idea, but still valid. Now for a circle whose area rather than circumference is what’s important, Andrew Topel’s Sound1. He did not plan it as a mathexpressive poem, but I liked it so much that I made it one so I could show it here without guilt feelings. It was easy: I just determined the value of its radius: I did the same with a second circle of Andrew’s that I liked, coming up with: Okay, let me be honest: neither of the values of these radiuses is completely accurate. The first, in fact, is off by more than a quarter tone. But do keep in mind that this is only Mathexpressive Poetry I. And if you use an exponent of the correct color, you’ll be off less than a trillionth. I couldn’t resist including the untitled work by John Moore Williams above, so ordained that the symbol for infinity is two circles to make it fit the circle theme of my entry. I laughed when I first saw it—laughed in awe of it! And at its sheer cleverness. The plainest element of arithmetic, one, is shown in contrast with the weirdest (if zero isn’t, and zero is visually there, too!) Matter and anti-matter as well as a kind of negative/positive dichotomy. And the wonderfulness of scientific symmetry (however incomplete physicists may be finding it of late). Somehow it describes the entire cosmos for me! Speaking of which, here, from Sue Simon, more explicitly, is such a description: According to its creator, “this painting describes the 4 dimensions that we know: first, second, third and the fourth, which is time. String theory proposes several more dimensions so I thought this painting would ask the question ‘how many are there?’ “The equation: time dilation. In the equation t=time, v=velocity, c=speed of light.” (Note from Blogmeister Grumman: there are only 4 dimensions.) I included this because it has a circle—a sphere, too! I find it extremely visually appealing, but especially delight in the tension between ethereally Platonic visual representations of geometry and colorful ungeometry—with the simple red 4 and the elegant equation taking the work beyond visuality into pure thought. On one hand, too, it is crazy—but so carefully, uncrazily laid out in nine squares (for 9 dimensions?) The way the outlined square (as I take it as) in the top left interacts with the also white non-square in the middle is just one little move the work carries out that makes it a continuing source of fun simply as a painting, as well as being what I consider to be a visiomathexpressive poem. It suddenly occurred to me to bring back the circle below, Carlyle Baker’s, “Random Number,” because I think it works interestingly with “Are There Only 4 Dimensions?” Maybe someday I’ll be able to say why. I was so pleased with Sue Simon’s piece, by the way, that I asked her if she had any pieces with circles in them that I could use here, and she sent me three really nice ones, including, “Black Holes,” which I decided to include because it was the only one of the three with explicit math in it. About this, Sue says, “I am fascinated by the thought of a black hole where nothing can escape. This painting uses a bit of artistic license to think about these strange and interesting places. The equation: Maximum black hole angular momentum.” It is not really a poem so much as a labeled painting, but I’m not letting taxonomical rigor keep me from presenting first-rate material! To finish off my entry, I thought I would show you three pieces by Karl Kempton, whose work will be the feature of my next entry. They aren’t mathematical, but they are circles, so not wholly off topic. The top two are from a series of his called Rose Window: 29 “windows,” the first three containing all the letters, the other 26 each built of repetitions of one of the letters of the alphabet and nothing else, ending with the same sort of views into the world around us that the best of cathedral windows provide—ergo: the alphabet as the means toward stained glass enlightenment. . . . In his third piece, Karl uses the word for the rose and fragments of the flower to reach a similar peak. I didn’t see the word, “rose,” at first, but assured by the author that it was there, I found it: Previously in this series: M@h*(pOet)?ica: Summerthings M@h*(pOet)?ica–Louis Zukofsky’s Integral M@h*(pOet)?ica—Scott Helmes M@h*(pOet)?ica—of Pi and the Circle, Part 1 M@h*(pOet)?ica – Happy Holidays! 1. Log Norm 11:12 am 01/16/2013 Mmm… and on top of it :- “Ex squared and why squared equals are squared” also happens to represent the Pythagorean Theorem, in its circular argument form! Link to this Add a Comment You must sign in or register as a ScientificAmerican.com member to submit a comment.
{"url":"http://blogs.scientificamerican.com/guest-blog/2013/01/12/mhpoetica-circles-part-3./","timestamp":"2014-04-17T21:42:52Z","content_type":null,"content_length":"108108","record_id":"<urn:uuid:3f720150-934a-4e4a-918d-0d69d0806113>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
How far an object is? November 24th 2009, 02:42 PM #1 Oct 2009 Suppose that an object is thrown down towards the ground from the top of the 1100 foot tall sears tower in Chicago with an initial velocity of -20 feet per second.Given that the acceleration of the object is constant at -32 feet per second,determine how far above the ground the object is exactly six seconds after being thrown? Remember the distance formula: s = 0.5at^2 + vt + s0? Could you pls elaborate on that? All I could understand from the problem is that h=1100 foot How does distance formula used in this?Iam confused Your textbook... Your text should have a section entitled: Constant Velocity Motion Uniformly accelerated Motion or some such title. See the examples in this section. Your problem on free fall is motion under constant acceleration with a non-zero initial velocity. The kinematic formulas can be derived with calculus, but are easy to remember. Start with a(t)=-32. Integrate to find v(t) and use the info that v(0)=-20 to solve for the +C that results from the integral. Integrate again to get s(t) and use s(0)=1100 to solve for the +C OP - You need to show effort in your posts. If this all seems new to you then your teacher hasn't covered enough material for the problem. Does this make sense to you? November 24th 2009, 03:06 PM #2 Senior Member Nov 2009 November 24th 2009, 03:15 PM #3 Oct 2009 November 24th 2009, 03:21 PM #4 Senior Member Nov 2009 November 24th 2009, 05:32 PM #5 MHF Contributor Oct 2005
{"url":"http://mathhelpforum.com/calculus/116557-how-far-object.html","timestamp":"2014-04-16T17:12:56Z","content_type":null,"content_length":"39644","record_id":"<urn:uuid:2f04215d-afbe-47fc-bf91-b5072b6b4cb4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: iszero Up: Parameters - Alphabethical Previous: istop when interpolating the boundaries and velocities using izint, ivuint and ivlint, the corresponding switches equal to 0, 1 or -1 (see lines c, f and i in the description of the file r.in for the program RAYINVR) will be set equal to 1 if either of the switches across which the interpolation is performed is equal to 1 and it will be set equal to -1 if the two switches equal 0 and -1; if iswit =0, the corresponding switch will be set equal to 0 if the switches across which the interpolation is performed are different (default: 1) Ingo Pecher Sat Mar 7 19:13:54 EST 1998
{"url":"http://pubs.usgs.gov/of/2004/1426/rayinvr/node82.html","timestamp":"2014-04-20T03:17:12Z","content_type":null,"content_length":"2137","record_id":"<urn:uuid:1cb65304-9e20-42db-8017-4b71d1ffdbea>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Section: USER COMMANDS (1) Updated: 28 May 2010 Local index Up mcxmap - permute or remap the indices of graphs and matrices. mcxmap -imx fname (input) [-o fname (output)] [-make-map (output map file name)] [-make-mapc (output map file name)] [-make-mapr (output map file name)] [-cmul a (coefficient)] [-cshift b (translate) ] [-rmul c (coefficient)] [-rshift d (translate)] [-mul e (coefficient)] [-shift f (translate)] [-map fname (row/col map file)] [-rmap fname (row map file)] [-cmap fname (column map file)] [-mapi fname (row/col map file (use inverse))] [-rmapi fname (row map file (use inverse))] [-cmapi fname (column map file (use inverse))] [-tab fname (read (and map) tab file)] This utility relabels graphs or matrices. Its main use is in applying a map file to a given matrix or graph. A map file contains a so called map matrix in mcl format that has some special properties (given further below). The functionality of mcxmap can also be provided by mcx, as a mapped matrix (i.e. the result of applying a map matrix to another matrix) is simply the usual matrix product of a matrix and a map matrix. However, mcx will construct a new matrix and leave the original matrix to be mapped alone. When dealing with huge matrices, considerable gains in efficiency memory-wise and time-wise can be achieved by doing the mapping in-place. This is what mcxmap does. In the future, its functionality may be embedded in mcx with new mcx operators. The special properties of a map matrix are 1m 1m • The column domain and row domain are of the same cardinality. 1m 1m • Each column has exactly one entry. 1m 1m • Each row domain index occurs in exactly one column. These properties imply that the matrix can be used as a map from the column domain onto the row domain. An example map matrix is found in the EXAMPLES Section. 2m 2m -o fname (output file) Output file. 2m 2m -imx fname (input file) Input file. 2m 2m -map fname (row/col map file)) 2m 2m -rmap fname (row map file) 2m 2m -cmap fname (column map file) 2m 2m -mapi fname (row/col map file (use inverse)) 2m 2m -rmapi fname (row map fil (use inverse)) 2m 2m -cmapi fname (column map fil (use inverse)) Different ways to specify map files. 2m 2m -make-map (output map file name) 2m 2m -make-mapc (output map file name) 2m 2m -make-mapr (output map file name) Generate a map that maps the specified domain onto the appropriate canonical domain and write the map matrix to file. 2m 2m -cmul a (coefficient) 2m 2m -cshift b (translate) These options have affect if neither a column map file nor column canonification is specified. If any of the first two options is used, column indices i are mapped to a*i+b. 2m 2m -rmul c (coefficient) 2m 2m -rshift d (translate) These options have affect if neither a row map file nor row canonification is specified. If any of the first two options is used, indices i are mapped to c*i+d. 2m 2m -mul e (coefficient) 2m 2m -shift f (translate) If a map file is specified for a given domain, neither a map file nor canonification is specified. If any of the first two options is used, the indices i will be mapped to e*i+f. 2m 2m -tab fname (read (and map) tab file) This option requires the -map option. mcxmap will output the mapped tab definition. The matrix below has two canonical domains which are identical. It denotes a map of the canonical domain onto itself, in which node 0 is relabeled to 8, node 1 is relabeled to 5, et cetera. mcltype matrix dimensions 12x12 0 8 $ 1 5 $ 2 3 $ 3 2 $ 4 4 $ 5 6 $ 6 7 $ 7 9 $ 8 1 $ 9 10 $ 10 11 $ 11 0 $ Stijn van Dongen. mcxio(5), mcx(1), mcxsubs(1), and mclfamily(7) for an overview of all the documentation and the utilities in the mcl family. This document was created by man2html, using the manual pages. Time: 21:23:48 GMT, April 16, 2011
{"url":"http://www.makelinux.net/man/1/M/mcxmap","timestamp":"2014-04-17T09:37:27Z","content_type":null,"content_length":"12384","record_id":"<urn:uuid:1d37a526-bac3-4341-9f06-4235769917c2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Light bulbs in parallel and series FYI, I've moved this thread over to Introductory Physics. On to your question: Am I right in thinking this? You sure are! And can someone explain this more to me? Yes, but rather than type it all out I am going to ask you some leading questions. You don't have to answer them of course, but if you do then you will be well on your way towards expressing your thoughts in the same way that a physicist would. Let the voltage of the battery in each circuit be [itex]V[/itex] and let the resistance of each bulb be [itex]R[/itex] and the current drawn from the battery be [itex]I[/itex]. The answers to all the questions I am going to ask should be put in terms of these three symbols. 1.) In the first circuit, what is the voltage [itex]V_L[/itex] across the lightbulb? 2.) In the first circuit, what is the current [itex]I[/itex] drawn by the lighbulb? 3.) In the first circuit, what is the power [itex]P_L[/itex] drawn by the lightbulb? You should have an equation for power. 4.) In circuit 2, what is the voltage [itex]V_L[/itex] across each lightbulb? (Hint: The two voltages are the same. Can you explain why?) 5.) In circuit 2, what is the current [itex]I_L[/itex] drawn by each lightbulb? (Hint: The two currents are the same. Can you explain why?) 6.) In circuit 2, what is the power [itex]P_L[/itex] drawn by each battery? (Hint: If you get 4 and 5, then it should be clear that the two values of the power are the same).
{"url":"http://www.physicsforums.com/showthread.php?t=95253","timestamp":"2014-04-19T02:13:30Z","content_type":null,"content_length":"28887","record_id":"<urn:uuid:30f14cb4-808b-4f9c-867e-93e37995392b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Are You Right Brained or Left Brained? I've been writing instructions for drafting a full circle, half circle, and 3/4 circle skirt to go in my book. Readers, this involves math. And not just any math: this is algebra and stuff. In a particularly frustrating moment, I t , "Don't you hate it when sewing involves math?" Interestingly enough, the responses were generally 50/50. Some of you hate math too, some of you love it. And one of you (thanks, Tatiana!) was particularly practical about sewing intersecting with math: "It always does." Indeed, sewing always involves some sort of math. It can be simple, like adding 2" to a skirt to make it knee length. Or it can be pretty involved, like figuring out the radius of a 3/4 circle skirt. I've always had a bad relationship with math. And I was sort of the black sheep in my family: my mother was an accountant. My father was an engineer, and my brother followed in his footsteps. I was the stereotypically artsy one; I loved theater and music and arts and crafts. One of my most vivid memories of learning math is pretty depressing. We moved around a bit when I was growing up, and when I entered second grade in a new school I was really behind in math. My new classmates were already subtracting a 3 digit number from another three digit number. (Like 543 minus 256, for instance.) My first night of school, my accountant mom stayed up late with me, trying to help me understand the homework. She really had her work cut out for her. I remember getting teary, looking at the numbers and thinking: these numbers mean absolutely nothing to me. Seriously, my brain would just sort of shut down when I tried to make sense of the 6 digits stacked in rows of 3 on top of each other. My math career continued along this dismal trajectory. In high school, a D in algebra was the blight on my otherwise A-average report card. Geometry, however, was a bright spot. I easily earned an A, and it made perfect sense to me. What did it all mean? There's a pretty simple term for this affliction: right-brained. It's been figured out at this point that there are two types of people in this world, those who are predominately right-brained and those who are left-brained. The right brain controls creative functions, while the left brain is analytical. Geometry tends to be easier for right-brained people since it involves visual learning and 3D thinking. The cool thing about sewing is that it attracts both the right-brained and the left-brained. There's pretty much something for everyone! The engineering and mathematics of sewing will attract left-brained types, while right-brainers enjoy the intuitive and creative parts of the craft. But I would argue that a bigger plus of the hobby is that we get to flex our non-dominant brain. When else would I voluntarily do , for goodness sake? Just like doing crosswords or other puzzles can keep us young and sharp, I would imagine that getting out of our comfort zone intellectually can do us a world of good. I'm curious if sewing folks are really generally 50/50 left or right brained. What's your experience? 91 comments: 1. I would say I am an equal mix. I did terrible in math, but I do like it (however that might work). I also use math for my work everyday...which is designing houses...so I definitely fit into both sides. I'm quite creative, but very analytical at the same time. -Kath 2. I'm definetly a mathsy person (studying engineering) and have recently discovered that I love drafting - it still baffles my brain that the 2D drawing actually works on my body though. 3. I am much like you (in fact, reading this post was much like revisiting my school-age math history! lol.). More abstract mathematics has never been my strong point, and I've struggled a lot with that. But, geometry is something I adore, which has come in mighty handy with sewing! I do find though, that when it comes to it, I can figure something out--even using the "dreaded" Algebra to figure out things as needed. Though math has never been and will never be my area of expertise, I am glad that I have to flex that side of my brain regularly thanks to sewing--keeps my grey matter nimble! ;) lol. ♥ Casey 4. I'm very right-brained, and my school experiences with math sound almost exactly like yours, down to the long teary evenings with the frustrated parent/tutor. I went on to get a fine arts degree. But later I went back for a degree in computer science, and found that while I still hate "mathy" math, I enjoy stuff like algorithms. There are a lot of engineering-like aspects to sewing. Like knowing when and how to change the shape of a dart, for example -- there's certainly an algorithm for that! 5. I actually love the part of sewing that involves maths. It makes my whole brain work at a time. Unfortunately, it also more time-intensive, and that's why I still haven't made up anything from my Pattern magic books... 6. You're so right! I often find myself thinking "this hurts my brain!" when I'm trying to do pattern alterations or figure out ways of attaching things that need to be flipped inside out, upside-down etc. I was very good at maths in school but I disliked it, and have since moved very much in a creative direction. I think my 'maths brain' has kind of dried up. But if there was no challenge to sewing, I wouldn't enjoy it half as much. I think it's very healthy for my brain to have to oil up its maths cogs now and then... however much I might curse the work at the time. Since I work in a 'creative' sector, people often say to me "oh I could never do that, I'm not creative". Yes I do believe people are born with certain tendencies, but I generally reply that it takes practice, and the more you 'work out' the creative side of your brain, the stronger it becomes. Thanks for mentioning the geometry-3D-visual thinking part. I didn't realise that was linked to the right brain. That explains a lot about one of my kids! 7. i'm definitely right-brained... geometry was always very easy and clear to me, but all other math stuff was just a soup of numbers that exited my head again as soon as the test was over. i now study illustration and sometimes have to calculate the number of pages in a book or how large the page has to be to fit a certain paper size after all the printer's marks have been added... it's not even that hard, but i still don't like it! 8. I'm with you on the math, I shut down completely whenever numbers come up, theres a dial tone in my brain whenever I have to add. But then the maths that I've been using when drafting patterns is giving me an 'I can do this' that I've never had before. Makes me wish I was drafting instead of staring blankly at maths books at school! 9. I dont think I am particularly either sided brain. I love math and science, I'm good at artsy things. I dont think of myself as a lefted or righted. I think also a lot of times people aren't bad at math, but more afraid of it. I went to alternative school with a lot of kids who were doing poorly in math. Once they got over the anxiety and fear of testing a lot of people did better in math, and other things! 10. Actually, as I understand it, credible neurologists/neurophysicists think the right brained/left brained dichotomy is a total myth. The split-brain research of the 60s that birthed this myth has been largely rejected, after brain-scanning technologies came into play which showed both hemispheres working hard on tasks which were supposed to be right or left brained. That's not to say there's not some lateralization to the brain, but it is a whole lot more subtle and complex than "left=math, right=creative". I'm a scientist (currently working as a lab tech in the medical field), with all the logic and analytical skills that implies. I'm also hugely creative. Take from that what you will. 11. I think I tend to both. I really enjoyed math, am an analytical type, but I like the creative aspects of sewing etc too. BTW, musical talent is on the same side as math, and is the opposite side as spoken language. 12. I'm the same way. Geometry was easy for me because it is visual and "real". Algebra was hard because it was theoretical and didn't apply to anything physical. Even now, if I can see and touch something I learn it very easily. If it is all theory...zzzzzzzz. 13. corvustristis, you're definitely right when you say it's a lot more subtle/complexthis than this. I'm a neuroscience major who just did a semester long course in laterality, so I'm just gonna back right outta this thread while people try to decide which hemisphere is their 'dominant' one, haha. 14. I too hated math in school, and I'm convinced that memorizing songs from Multiplication Rock is the sole reason I got through my times tables. In high school I was thrilled and amazed at doing well in geometry and decided to end my scholastic math career immediately following. I never mind sewing math through, and pattern drafting just seems more like drawing to me. My preference for flat pattern over draping is frankly surprising to me, but it's definitely the case. Years of work in retail has definitely made me adept at everyday math, as well as no teacher expectation that all your work needs to be shown in longhand. It makes me lame at helping my kids with their math woes however, and my oldest is as left brained as can be. 15. We don't even have to give it a name...each of us has a unique brain with its own quirky traits. I have always 'considered' myself very right brained. Many things that are very easy for those with an analytical brain are hard for me. It is also a question of HOW we learn. Are we visual learners? Or auditory learners? or a mixture. It is very complicated. I can do math sort of...but spelling. Ugh!!! I read constantly...put no matter how often I see certain words, when it comes to spelling them on paper I can't. Just writing this comment, I have had to go to google 3 times to get correct spelling. 16. Contrary to popular belief, accountants can't do the math you're talking about. All we know are how to make the best of reporting laws and memorize tax calculations. Pi? What's that? Personally I like theoretical math better so geometry was my worst subject while algebra was pretty good. Sewing is both fabulous and dreadful because of that. 17. I'm actually naturally ambidextrous, but learned more and more as a kid to be right handed because the school system thought you should be just one or the other, not both. But my brain still held on to that so I'm a very visual person, but I am also very good at math and science. I am getting a degree in engineering after all. 18. algebra is the devil. My downfall was actually keeping track of the negatives and decimals and little fiddly things. Geometry and trig are fabulous. You know what else is nice? Calculus. I had to take calculus in high school. And while it was one of the hardest classes ever, I actually enjoyed it. The numbers go away, and you play with concepts. Once I got my brain around the concepts (that was the hard part) things started to flow in ways they never did in algebra. I almost took calculus in college. 19. I'm glad to hear from some scientist commenters that this dicotomy it might be a myth because I've always been a 'middle brained' person. I'm a programmer/web designer and work with the arty web design and the mathy programing. But I don't really like hardcore back-end coding, I'm purely a object oriented girl, for web pages and Flash games. I'm also not a huge fan of serious art. I like looking at it but can't manage to paint/draw something that is really 'high art'. I just like drawing flowers and puppies and stuff. 20. I love math! Normally, I see myself as a more left brain typed person, but I also love reading and writing. I really appreciate art (even if I have no talent what so ever in drawing, painting, etc.) I also tend to be quite emotional, which is usually right brain associated. Physics is always the bit that gets me and there are definitely some physics-y bits in sewing. I have issues visualizing how all of the shapes of fabrics I just cut out are going to combine into something meaningful. 21. I loooove this post. It hits on so many of my favorite ideas. I have always done well in Math, but I am also reasonably adept and comfortable in right-brained activities. I have spent my life (48 years and counting) dabbling in all sorts of things. I find creative ways to function in the scientific world (and am misunderstood), and scientific ways to function in the arts (which seems far more acceptable). I did, however, experience a similar 2nd grade Math trauma. My Dad was sent to Management Training for IBM. This involved 2-3 months in Rochester, MN. So the whole family took an apt. there, a long way from Mass. My new class did math by recitation, the old-fashioned way, where you had to stand next to your desk and recite the equations. I was terrified. And on top of it, the teacher's method of correcting you, was to have you repeat it over and over, while your classmates snickered behind their hands. I remember waiting my turn, and giving myself a public-speaking pep talk, but at least confident of my math skills. Imagine my horror, to hear teacher say, "No, try again." over and over and over, while I gave the same wrong answer over and over and over. I thought I would die. As it turned out, my math was indeed correct. My error was in using the term "take away" (which I was then told was for babies). Apparently, sophisticated Minnesotan second-graders were expected to use the term "minus". (Gah! This experience is permanently embedded in my brain as an example of why not to use humiliation as a teaching tool!) And finally, I really applaud your attitude of challenging yourself to use all of your intellect as a hedge against losing it! I heartily agree! 22. The math versus creativity stereotype always makes me a bit sad. In my experience, mathematicians tend to be very creative, also visually. Most of friends from when I studied math at university either sews or knits. Several considered dropping out to do art or design, but stayed on because they also loved math. One girl later got accepted at the Danish school of design and another changed to history of art. 23. Mixed. I'm mildly learning disabled: I understand math concepts easily but have a very difficult time with the arithmetic needed to express them in mathematical terms. If I were allowed to write out the process of solving them, in paragraphs, I would have done fine. But that's not how school works, of course. I like both geometry and algebra, and I liked physics, even though I stink at them. I did, however, inherit my mother's spatial skills. I was absolutely wicked in drafting class, where we had to draw an object at different angles. Everybody else had wrong lines connecting to wrong lines, and I could do it standing on my head. That definitely makes sewing easier! I'm moderately artistic, at best. I appear to be more artistic than I am because I'm confident enough to try things even if I'm not at all sure I'll succeed. I'm always shocked by people--some of my quilting friends come to mind--who need commercial patterns for everything. Somebody tried to sell me a commercial template for a tumbler quilt once. Like I can't cut a cardboard tumbler?? Or draw a primitive chicken appliqué? But I am not extraordinarily creative. I don't come up with really outrageous artistic ideas. The most frustrating thing is my almost complete lack of ability to improvise music. People who can be told a key signature and then go hog wild improvising amaze me. I am far too mathematical for that. 24. Yay for neuroscience majors! I agree with Gry, I think maths can very creative and, as your post describes, creative pursuits can involve a lot of maths. So maybe there is not so much difference? I was always much better at, I guess, more abstract mathematics than ones that involved things in the real world, like geometry. Which is strange because I have really good spatial reasoning skills and imagining how patterns fit together is not a problem for me. 25. The right/left brained dichotomy sends me into a bit of identity crisis. I am left handed and was very good at drawing & painting from a very early age, so I grew up thinking that I was "creative" and right-brained. But in college (in art school, no less!) I began to suspect that I'm not so creative after all and noticed that I had strengths in a lot of left-brained areas. I had been pretty good at math in high school, especially geometry, but wasn't interested in it. 26. I think this is true in the knitting world too...I'm a lefty who knits and crochets with my right hand (sheer necessity really I'd have to redo the instructions otherwise. Pain, pain!). Now I think it's cool that I can 'drive' the machine the way I would drive a car - with my right foot! 27. I always HATED math class growing up, especially word problems, uggh! But I find when it comes to figuring out tasks I want to accomplish (ex: grading a pattern), I don't hate math nearly as 28. Another who knows about brains (studed neuropysch at uni) who agrees that the dichotomy is not nearly so clearcut. I can do maths, but only really enjoy it when it's applied. Pure mathematics just doesn't work for me, but I'm a stats geek and I love drafting patterns using both a good eye and a mathematical A lot of the whole "Oh, I can't do maths/I'm not creative" malarkey is down to how you were taught and supported as a child. I had good maths teachers but also lots of support to be creative, so I'm quite happy in both fields. I know I'm lucky in this, though. 29. I always liked calculus and algebra FAR more than geometry---yet actually enjoy the pattern drafting and construction part of sewing quite a bit. 30. I think I might be no-brained. Geometry made no sense to me in high school. Neither, however, did algebra or precalculus. Give me languages, or physics. My seamstress mother is continually baffled by the fact that I have a photographic memory, but can't figure out what she's talking about when she tries to describe how an article of clothing goes together. She starts talking, and my brain shuts down. I can make bread from scratch, I can knit a hat in under 3 hours, and I cannot picture how a sleeve fits into a shoulder seam when someone tries to explain it to me. But once I see it, I'll always be able to do it. 31. I am right brained for sure. Math drives me crazy. Although in some areas I feel I am left brained, just not when it comes to math. I've been planning on brushing up a bit on it, though. It would be very helpful. 32. I was pretty good at math until I got to parts where you really need to be creative, people who do high level math are some of the most creative around, and I just didn't have that sort of brain, I could follow explainations; but not forge new ground. I wonder if a lot of this dichotomy is either in learning styles (visual, auditory, kinetic etc.) or differences in whether you're a detail person (who loves algebra) or a whole object person (who loves geometry). My method for teaching multiplication tables was to use car trips and start with the easy ones. 2x3=6 3x2=6 6/2=3 6/3=2 so that they could get the dance steps started, that multiplication and division are partners. Starting low and going to the twelves. It didn't take long until they felt it was too simple; and the ten minutes per trip was about right for learning. 33. My story is 100% the inverse of yours. I was a math major and my current job is designing math software. I remember sobbing over homework, but only when it involved drawing. My least favorite math class was geometry, though. It wasn't until I started sewing that I discovered that I could be "creative" and "artsy" without having to be able to draw. 34. I have the same issue-- can't do complex arithmetic, can't handle algebra, OK at geometry. It's not actually left- or right-brainedness, it's actually a learning disorder called dyscalculia. It's similar to dyslexia, only with mathematical concepts instead of words. It's a very poorly-studied and under-diagnosed learning disability, but it's gaining currency as a concept. I mean, lots of people are bad at math; some people's brains just can't accept it at all as it's generally taught. 35. I love sewing because the math is easy: it usually involves multiplication or division which is simply a matter of folding a tape measure to get the results! p.s. I get circle skirt waistlines by finding the waistline length on a tape measure and folding it to get 1/4 then I arrange this length in a quarter circle on my folded fabric where the waistline should be before I chalk it--see? no math, and it works great! 36. I think this dichotomy is way too loosey-goosey to be worth bickering with. But I come to sewing as a former math major and I think it's a perfect fit as sewing regularly offers lovely little puzzles. Anecdotally it almost seems more people come to sewing from this background. I've even suspected that women with math/science aptitude were expressing themselves somewhere in domestic arts before academic fields were open to them and I figured garment sewing was a likely outlet. Anyway I can't stand hearing creativity balanced against math skills. After all math gives access to the most fundamental ideas and there is no better foundation for creativity. 37. ha! i LOVE math - i'm one of those people who mentally calculates their total (including the 9.25% sales tax) while waiting in line at the cashier :) everyone told me during high school that i would either love algebra or geometry, but i loved them both equally & did very well in each. anyway, as far as sewing goes, i do enjoy the mathematical part that tends to pop up every now and again. not to say i don't occasionally get lazy - my little brother lived with me for a while a few years ago, and as he's a math major i definitely asked him the answers to some sewing math problems because i knew he would be quicker & more accurate. 38. I loved math and had straight A's in it. Algebra was a favorite in high school, I just loved getting to the "X = ..." =) However, I've also found that I'm very fond of pattern drafting. I just get the geometrical aspect of it, as well as the algebra part. In all else in school I shone in the humanistic disciplines and struggled with chemestry etc. I'm generally an "artsy" person, but I seem to get both sides of math (on a lower level). I'm an analytic person, but I prefer to apply it history rather than physics. Maybe both my brain-halfs work half-well? =) 39. I am totally not a math person. Could not do algebra to save my life...in my head, letters and numbers do not go together!!!! The only time I was able to squeek by in math, is if it had a $ in front of the numbers. I am much better at grammer and literature. give me classic books and a sentence to diagram and I'm in heaven. 40. I love sewing for two reasons: First, I get to be creative. I love working with color and texture. Second, I get to be analytical. I love rectangular construction with means I get to work with graph paper and calculate measurements. 41. I remember trying so hard to study math in high school thinking if I just sat down and worked at it I could do better but it always ended in tears of frustration. But, I'm also not one of those people who sees a lamp and runs to their dress form to drape a dress inspired by the lamp. I need instructions. I guess I'm a Type A right brainer? 42. I'm very much like you as well! Oh goodness, I'm such a fail when it comes to math. Being a music major in college didn't help either. Me and counting rhythms did not work. I was the emotional diva, not the precise technician. Sigh.... That sad story of your younger years brought me back. When I was in second grade, I had to be taken aside by the teacher everyday for math. I was on the brink of going into a resource class for it. And it didn't help that I would write in the corner of all of my assignments in very tiny letters "I hate math." Oh my gosh! I got in so much trouble. She really stuck it to me one day. Told me I couldn't write that anymore and made me erase all it off of all of my previous assignments. So sad. 43. I have both math and engineering degrees so that pretty much says a lot. I was not the best student in both of these degrees. I had to work and study...they were both a challenge. But I love to sew and craft. And both sewing and crafting are because of the creative aspect. I would say the ratio (yeah, I totally talk math) is 70/30. 44. I'm good at math and have very good visual spacial relations both of which help with sewing. I'm not sure that I love the math so much as that I don't really notice it/it doesn't bother me. I very much enjoy the creative part as well but am more of a cook then a chef. I'm great at using patterns and making knockoffs but can't start from scratch easily. 45. I'm a biologist who deals with statistics a lot, so the math side comes fairly easily to me. I always enjoy the math/geometrical side of sewing - it's like flexible architecture. I think I sew because it gives me a chance to combine the physicality of working with my hands with the mental exercise of figuring out how it all goes together. Plus you often get a cute garment out of it. :) 46. My math story is very similar to yours (I loved geometry and hated everything else). I always have to have my physicist fiancé double check my math, though I love the geometry of changing dart positions and such on patterns. I really wish someone had used sewing, knitting, and cooking to teach me math when I was a kid. I think I would have learned it much more quickly and easily if someone had shown me how useful math would be to change a sewing or knitting pattern! 47. I think I am a mix. I love math, and I love to create. 48. I love math good thing for an engineering major :) I like Lizzie recently discovered drafting...what joy. My bodice sloper isn't perfect but I'm having a whole lotta fun with it :) It's funny that you did well with geometry because that was the one math subject that gave me fits. I could visualize any of it. 49. I had a similar high school track record - nearly all A's except math (and gym). But I was also really good at geometry. My biggest problem with sewing is the math bit...and the spatial relationships (which I think might be math related, or at any rate a left brain thing). I'm glad to know I'm not alone, and sucky math skills do not necessarily mean inability to sew. 50. I'm slightly more right-brained than left. I got decent grades in math, but it's easier when it's for something practical and preferably either tangible or visually represented. Sixteen years out of high school, I could no longer tell you the point of a sine, cosine, or tangent. Ask me to figure out area or volume, however, or how to alter seam width or draft a circle skirt, and I'm your 51. GOD BLESS GEOMETRY!! I had an amazingly similar experience to yours. Subtraction (particularly 'borrowing') left me guessing on tests. I would mask the fact that I was guessing by squinting, tapping my pencil a few times, and then saying a random number. Later in college we had to take an entrance exam for math, to place us in the right class. With the idea that I should always 'do my best' on tests, I placed myself neatly in a class full of people who had already taken calculus and just wanted a refresher before moving on to vectors. Me and a girl who went to a special art school were the only ones who didn't know what the eff was goign on. BUT that teacher was so good he made me think for about a year that I WAS good at math, and that I COULD understand it and be good at it. To this day I really respect the kind of teacher who can change your mindset to that extent. 52. On the surface I'm a math and science person, but I've always been drawn to arts and crafts, and it has been so gratifying to realize how much my methodical analytical brain can help with knitting and sewing. Here's to balance! 53. I'm a bit of both. I'm a CPA, but I'm pretty creative, enjoying a myriad of crafting hobbies (sewing, drawing, photography, etc.). I actually enjoy drafting patterns, because it allows me to use my analytical side along with my creative side. Oh, and yes, accountants don't do complicated math anymore. That is what Excel is for LOL! 54. I'm a left-brained math loving engineer. I find math being really helpful in sewing. 55. Haha, I like to think of myself as right-brained, but as my Bachelors degree is going to be in algebra, I thiiiiink I might reconsider that! hahaha. 56. I rejected the whole right brain/left brain theory at an early age. I hated it, because my siblings would tease me by calling me "left brain", thus implying that I was not creative. And I *am* a very creative person, always heavily involved in some artistic endeavor. I did well in math when I had a good math teacher, and badly when I had a bad math teacher. I also did well in music theory (the mathematical aspect of music), and learning pattern-drafting was something I considered fun. But these things never made me less creative. 57. I am 100% right brained. My experiences learning math and algebra (even chemistry) were exactly like yours, Gertie. Anxiety, confusion, tears, screams of frustration. Well, maybe yours didn't involve the screaming. But math--anything besides the basics and geometry--makes no sense to me. It's a religion. You have to BELIEVE that adding x and y will yield z (whatever those numbers represent). I could do the same problem/calculation five times and get a different answer every time. And don't get me started on trig! I'm never more anxious and my belief in my own stupidity is never more sure than when I'm confronting my complete inability to comprehend higher math. 58. I hate math and math in school was pure torture. I really don't understand it and when ever someone talks numbers at me (like when we were buying our house) I turn into a Homer Simpson type person and start thinking about a unicorn jumping over a rainbow. However, lately I have made a real effort to focus, driven by my need to get my small business book-keeping under control. Sewing too is great because it does challenge me and make me use the dried up other side of my brain! 59. I like to call myself mathtarded. I spent so much time hopping from one math class to failing and starting an even more basic math class, that I collected my needed math credits in HS without advancing beyond algebra. A sad story really... It wasn't until starting Fashion Design at AIS that adding fractions became second nature, which is also helpful in baking. I find fractions work for me, though sometimes I'd like to try the metric system, as some people have said in fashion designing. I'm quite happy working with my sometimes impossible fraction dividing. 60. This is a GREAT question, and I have to say that I am 50/50. I did terrible in math in high school except for geometry - just like you! Ironically, I have a degree in accounting, and I use algebra quite often in sewing and dollmaking. Crazy fun if you ask me. 61. I don't understand math. I don't think its funny or anything to say, I honestly, when I start to do a problem, see a black wall in my head blocking any sort of calculations. In sewing, I have to ask my husband for help because I make stupid mistakes when measuring. It's embarrassing. I can spot a typo in an instant in one thousand plus words but if I add a column of numbers five times I'll get five different answers. Even using a calculator! 62. Sheryl in TXMarch 14, 2011 at 4:14 PM I totally understand! Give me a paint brush, thread and needle, fabric, any craft and I'm a happy camper. Give me math? No Way! LOL 63. I am right brained. I too had trouble with maths. Although I loved Algebra and maths in upper grades with formulas, I think my brain saw the pictures. Generally, even now, doing quick problems in my head doesn't happen. There is no craft without maths. I have a degree in textile design, I specialised in weaving, you can't make cloth without maths. I knit,crochet and machine knit, you can do none of these without math. And of course I sew. My brain happily does the math for all of these. I once read that people can learn math skills and apply them to a specific task without having a greater understanding of math. We may not love it but no craft can work without maths. 64. You know, it seems rather narrow-minded to define left and right brained people (whether that's a real thing or not, which I doubt) as "creative" and "analytical". If you do any research into the history of physics or chemistry or math, you'll find that there is a shocking amount of creativity that goes into solving the world's problems, big and small. I'd bet almost everyone reading this site would agree that Elias Howe was a pretty darn creative guy, but I've never read anything about him painting a landscape or writing a poem. There's more than one kind of creativity out there. I think it's important to value both. As someone who often finds herself stuck in the middle of the art vs. science debate. I wish we could all just have a little more respect for each other. 65. Wow, I had no idea so many would take offense at the labeling of left-brainers as analytical rather than creative. Especially since analytical types seem more valued in American culture at least. And I'm not the one who came up with it anyway! But . . . this is a great opportunity for an op-ed if any of your creative science/math types would like to contribute one! It could be really cool! 66. people keep giving me tape measures [maybe as a hint, who knows] but i just use pieces of string to measure... 67. Right brained, most definitely. How's this for weird--I've done trig with few problems, but calculus baffled me. And they're related! It was just that unlike calculus, trig had *pictures*. I've always been able to see how things fit together and most of my sewing is self-taught. Funny thing is, I've found techniques listed in places as sewing 'secrets' that I figured out by looking at the pieces. Drives my instruction-prone mom insane how I work, because my first action is almost always to toss the pattern instructions over one shoulder and ignore them! 68. I think I must be right brained. Not only am I left handed, and this apparently goes hand in hand, but numbers? No way. Geometry (aka drawing pictures and pretending it's maths)? Big yes! Algebra (letters pretending to be numbers)? Yes. Simple arithmetic? absolutely no way! 69. My career requires both left and right brain functions, often simultaneously. I'm a music editor for tv/film, which is a creative job - playing with music, emotion - but it's all done on computer, which requires logic and technical knowledge and sometimes confusing, complicated, math. I love using both sides, especially when creative thinking solves a problem that logic cannot. (and in my spare time, I quilt; math and creative design go hand in hand there too!) 70. My father was an artist and a hairdresser, my mother was a chemistry, math and physics teacher. I seem to have inherited both sides. When trying to decide on a University degree it was a toss up between science and design. Design won but I work in a science driven field as an educator. I am drawn to design and the arts but am very analytical and methodical in my approach. With my sewing I would agree that I'm drawn to the beauty of it but relish the science behind it. I still don't like math though *cringe* haha. 71. Im very creative and learn by visual means best but I must confess that I love algebra. I love the idea of making sums out of letters and symbols it speaks to me and I just 'get it'. Weird. Other maths I find challenging 72. I'm definitely more like you. I never really liked math, being the girl who always had my nose stuck in a book and later became as stereotypical a band/art geek as my not-so-artsy private school would allow. (Like the Cupcake Goddess, I went on to major in music. Though rhythm does generally make sense to me.) I was actually ok at math until about 6th grade, and then I hit a wall. The only test I ever failed (or even got below a C)in my life was an algebra one, and not for lack of studying. But like you, geometry made a lot more sense. I'm pretty sure that math is the reason that I've never truly had a success in drafting my own patterns, actually. Stupid numbers. 73. I don't think you can draw such a sharp distinction between left and right braininess - many people have aptitudes for both. Just the other day my friend - who I had met in a technical, science-y graduate program - was telling me that she thinks of me as more creative than scientific ... "Not that I don't think of you as scientific. But even the way you do science is creative. You find answers or methods that aren't in books." The world doesn't have to be an either/or place! It can be a both/and place too! 74. Right brained for sure - I remember my mum trying to teach me fractions and getting teary because I just didn't get it! I hate math, but interestingly really enjoy it when it comes to sewing. Probably because I can actually see my math-in-action. 75. My strengh turned out to be thinking in 3 dimensional forms, creative math for sewing and package design! 76. Funny, but I've always thought of math - even the theoretical stuff - in very visual terms. I "see" the numbers, much the same way I "see" patterns and practical math as I'm manipulating my sewing projects. It's the detail in both math and sewing that attracts me. Small steps that come together to make a whole. 77. I am good at math, and have a Mechanical Engineering degree. I can accurately judge volume visually. I am very creative too (design and make doll clothes now) so I guess I am right brained. Pattern drafting is something I enjoy tremendously. Drafting poofy sleeves and fitted corsets are some of my faves to think about! Pattern making is almost more fun than sewing. I do the sewing mostly to see if my pattern fits and then I am wanting to draft something else! 78. I loved math while I was in school. It was my favorite subject. But, I have been out of school now for 12 years and it is much harder. I realized this just a few days ago when my third grader came home with improper fractions for homework. I sat there for an hour trying to remember what an improper fraction was. lol. But I think I am a mix of analytical and creativity. 79. How inappropriate to have a circle-skirts-hurt-my-head post on pi day, I am offended. 80. This is such an appropriate post for international Pi day (3/14)!!! 81. haha - just spent the morning doing tests for a recruitment agency. don't think the numbers ones went so well. but can manage to adjust a pattern without breaking into a sweat! 82. I don't know that I have the greatest pure aptitude in math, but I do have a lot of math phobia, which does not help. I regret it, because math is important. As applied to garment construction, in a pattern making class I tried to measure things that weren't practical to measure. I was trying to measure to the 16th of an inch. The teacher said it didn't matter. Or there was one occasion, I don't recall the details, when it was easier to take the tape, walk it into position, make the mark, and then copy it to the other side by tracing. Measuring a mannequin reminded me of Carl Sandburg's famous poem, "Arithmetic." Arithmetic is where the answer is right and everything is nice and you can look out of the window and see the blue sky-or the answer is wrong and you have to start all over and try again and see how it comes out this time. 83. Mega right brained, left handed here... 84. I'm not sure about this RB/LB thing. The original idea seems to come from clinical neurologists Fink and Marshall's (1996) work at the London's Institute of Neurology which showed differences in the activity of right and left hemispheres according to the type of task - creative or logical. The idea was so neat it passed rapidly into print and became an accepted theory. However, later work by psychologist Joseph Hellige at University of Southern California suggests that both hemispheres are active when processing all tasks: its the speed of processing and the type of brain activity that differs in each hemisphere. Anyway, whether you favour creative or logical processes, it makes sense to try and develop the ability to cope with both. As sewing involves elements of creativity and logicality perhaps it should be promoted/marketed as a brain development tool! But those of us who sew probably knew that already, didn't we? 85. I would say I'm a bit more left-brained. I have a degree in mathematics, and am very much an analytical/logical thinker. However, I started sewing less than a year ago and absolutely love it. I've not progress much beyond skirts, but I'm exploring ways to alter patterns to fit me better and am very excited to learn more techniques and advance to tops and dresses. Would the book you are working on be helpful to someone of my abilities, as well as, my desire to alter patterns to fit me perfectly? 86. I'm actually 50/50. I started college with 1 year in electrical engineering and graduate with a bachelors in computer information systems, which is comp sci + business. However when I ended up working it was discovered that I could design, so now I design and code iPhone apps and websites. I love sewing because it lets me use both parts of my brain on one project! 87. Thank you SO much for FINALLY explaining me to myself! I am EXACTLY what you have described here. I was so bad in Math that to get me to graduate they put me in special ed math courses! My mother had little tricks in sewing to avoid the math involved, folding things a certain way to cut them right etc. As always you amaze me! 88. When I read this it made me smile because as a weird coincidence I recently posted on my blog about my algebra/knitting headache! I would say that im 50/50 in the left/right brain thing, perhaps slightly more creative with old age (and out of practise with mathematics!) 89. Well, actually I think part of the problem is considering adding 2" to a skirt to be MATH. Really basic arithmetic, perhaps, (a flea on math's back) but I can hardly believe y'all don't have either calculators or yardsticks to take care of these details. I'm afraid your right and left brain thing is way too simplistic, neurologically speaking. There is no actual center for creativity. The right brain is more a center of spatial ability, something which is at least a useful in geometry as in crafts. If anything's getting clear from all the research done on it, it's that truly creative people have more connections between sides of the brain, and thus are better able to attack a problem from different angles and synthetize different points of view. But assuming that math takes less creativity than sewing is just so off the wall, so utterly blind, I'm speechless. Of course I'm an engineer, who was a mathematician at first. I enjoy sewing, but if you think that's advanced math it's a sad commentary on the sorry state of US education. You know, during those retro times that you admire so much sartorially, girls were not so scientifically challenged. Many fine female scientists did great work at that time (Rosalind Russel and the structure of DNA, Barbara McClintock and transposable genetic elements, Julia Robinson and decision problems..) you could look them up yourself if you weren't so dead set against the whole field. 90. This is interesting. I taught my self to sew less than a year ago it is the first time I have ever been good at something crafty. It is such a cool thing to flex this creative side of my brain while still using technical knowledge as well.! 91. this girlApril 20, 2011 at 2:36 PM RIGHT-BRAINED! RIGHT-BRAINED!! RIGHT-BRAINED!!!
{"url":"http://www.blogforbettersewing.com/2011/03/are-you-right-brained-or-left-brained.html?showComment=1303324562391","timestamp":"2014-04-19T14:38:05Z","content_type":null,"content_length":"337482","record_id":"<urn:uuid:35c4483e-5408-4094-af13-783552209fac>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Port Orchard Trigonometry Tutor Find a Port Orchard Trigonometry Tutor ...Sincerely, Mary AnnI enjoy tutoring Algebra 1, trying to make it interesting and easy to learn. I've helped many students with their math and improved their grades. If you don't understand something or can't solve an algebra problem, I can simplify it until you get it and solve it all by yourself. 13 Subjects: including trigonometry, geometry, Chinese, algebra 1 ...I teach by breaking problems down into simple steps and keeping careful track of all quantities as we work. Working as a technical writer in the software industry, I wrote, edited, illustrated, and published professional documentation. I have a deep understanding of English grammar and usage, and a keen eye for readability. 18 Subjects: including trigonometry, chemistry, physics, geometry ...I hold a PhD in Aeronautical and Astronautical Engineering from the University of Washington, and I have more than 40 years of project experience in science and engineering. I am uniquely qualified to tutor precalculus, with a PhD in Aeronautical and Astronautical Engineering from the University... 21 Subjects: including trigonometry, chemistry, English, calculus ...I have been dealing with these issues all my life. I was diagnosed with ADD a few years ago, just before I turned 21. Everyone kind of figured I had it, but we didn't really do anything about it so I needed to fend for myself. 26 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...Have extensive IT industry experience and have been actively tutoring for 2 years. I excel in helping people learn to compute fast without or without calculators, and prepare for standardized tests. Handle all levels of math through undergraduate levels. 43 Subjects: including trigonometry, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Port_Orchard_trigonometry_tutors.php","timestamp":"2014-04-21T02:46:11Z","content_type":null,"content_length":"24310","record_id":"<urn:uuid:4d47944d-1fc9-49c2-931e-bb1ac951a186>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
What is Mathematics? Mathematics is the science and study of quality, structure, space, and change. Mathematicians seek out patterns,^ ^ formulate new conjectures, and establish truth by rigorous deduction from appropriately chosen axioms and definitions.^ There is debate over whether mathematical objects such as numbers and points exist naturally or are human creations. The mathematician Benjamin Peirce called mathematics "the science that draws necessary conclusions".^ Albert Einstein, on the other hand, stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."^ Through abstraction and logical reasoning mathematics evolved from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity for as far back as written records exist. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Mathematics continued to develop, in fitful bursts, until the Renaissance, when mathematical innovations interacted with new scientific discoveries, leading to an acceleration in research that continues to the present day. Today, mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, and the social sciences. Applied mathematics, the branch of mathematics concerned with application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries and sometimes leads to the development of entirely new disciplines. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind, although practical applications for what began as pure mathematics are often discovered later.^
{"url":"http://www.tntech.edu/math/whatismath/print/?tmpl=component","timestamp":"2014-04-17T04:23:10Z","content_type":null,"content_length":"5885","record_id":"<urn:uuid:f033dc9a-876e-4b97-b4fb-ada90bf0fc45>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: EXPAND sqrt{/frac{1-x}{1+x}} • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/510cb618e4b0d9aa3c475a18","timestamp":"2014-04-18T18:28:11Z","content_type":null,"content_length":"141401","record_id":"<urn:uuid:8fd8183b-e283-49b2-b1d1-68f6ee05a985>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
On-diagonal to off-diagonal heat kernel lower bounds, Davies' argument up vote 3 down vote favorite Theorem 3.3.4 in Davies' Heat Kernels and Spectral Theory begins with ``on-diagonal'' lower bounds for the heat kernel $K$ of $H$, (i.e. $K = e^{-Ht}$), where $H$ is a uniformly elliptic operator acting on $L^{2}(\mathbb{R}^{N})$. That is, Davies has already proved $$K(t,x,y) \geq C t^{-N/2} \qquad \text{ when } \qquad |x-y|^{2} \leq C t$$ Now for arbitrary $x,y \in \mathbb{R}^{N}$ and $t > 0$, he defines a sequence of points $$x_{r} = x + r(y-x)/M$$ where $0 \leq r \leq M$ and $M$ is the smallest integer such that $4(y-x)^{2}/Ct \leq M$. Then he uses the inequality $$K(t,x, y) \geq \int\cdots\int K(t/M,x,y_{1})K(t/M,y_{1},y_{2})\cdots K(t/M, y_{M-1},y)\,dy_{1}\cdots dy_{M-1}$$ where $y_{r}$ is being integrated over the set $$\{|y_{r} - x_{r}| < 1/4\,C(t/M)^{1/2}\}$$ But this step I do not understand at all. Where does the iterated integral come from?? pr.probability ap.analysis-of-pdes parabolic-pde add comment 1 Answer active oldest votes This is a classic trick for heat kernel proofs: The idea is that if $f(t,x)$ is a solution to the heat equation with initial data $f_0(x)$, then $g(t,x) := f(t+s,x)$ is a solution to the heat equation with initial data $g_0 (x) = f(s,x)$. Rephrasing that $f(t,x)$ is a solution to the heat equation as: $$ f(t,x) = \int K(t,x,y) f_0(y) dy, $$ we have that $$ f(t+s,x) = \int K(t+s,x,y) f_0(y)dy. $$ up vote 4 down vote On the other hand, we also have that $$ f(t+s,x) = \int K(t,x,y) f(s,y) dy $$ by our second observation. Now, combine the first and third lines to give $$ f(t+s,x) = \int \int accepted K(t+s,x,y) K(s,y,z) f_0(z) dz dy. $$ By uniqueness of the heat kernel, this shows that for $0 < s < t$ $$ K(t,x,y) = \int K(t-s,x,w) K(s,w,y) dw. $$ To go from this to your desired inequality, iterate, and then shrink the domain of integration as desired (using positivity of the heat kernel). 3 In short, it's the fact that $e^{tH}$ is a semigroup. – Nate Eldredge Jun 26 '13 at 19:39 add comment Not the answer you're looking for? Browse other questions tagged pr.probability ap.analysis-of-pdes parabolic-pde or ask your own question.
{"url":"http://mathoverflow.net/questions/134822/on-diagonal-to-off-diagonal-heat-kernel-lower-bounds-davies-argument","timestamp":"2014-04-16T07:44:36Z","content_type":null,"content_length":"53598","record_id":"<urn:uuid:5011de2e-5cd7-4ac5-84ad-ae814109606d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the Best Lock Can you help me make this into a 3 Acts problem? I was thinking some thing along these lines: • Act 1: movie clip of someone trying to crack a combination lock. I want to set up the question “how long will it take?” • Act 2: What are the rules for these combination locks? Maybe I could even be so lucky as to find these listed on a website with the number of permutations. • Act 3: Which lock will take longer to crack?* * Interesting factoid: I started this idea with the extension, not the first act. That is, I knew I wanted to present my students with a permutations-of-a-lock problem. I spotted these two locks on a website. Then wondered if kids could tell me which was more secure. For those who have written 3 Acts problems, is this a typical workflow? Standard schmandard… Georgia Performance Standards (GPS) for Math: MM1D1b MM1D1. Students will determine the number of outcomes related to a given event. b. Calculate and use simple permutations and combinations. 2 thoughts on “Finding the Best Lock” 1. Super fun, Megan. The third act is going to be purely speculative, I imagine, but the first act is a blast. I need to film that video. That’s a great starter. □ Thanks, Dan. I’m digging the Meyer-esque clips from recent flicks. Aside from inspiration hitting when watching a film, how *do* you find the supporting scenes? I wish there was a way to search films for useful math scenes. Then I’d just have to ask: “movie crack a combination lock”. Oh wait, there is. Off to the Google machines!
{"url":"http://kalamitykat.com/2011/08/14/finding-the-best-lock/","timestamp":"2014-04-17T21:23:38Z","content_type":null,"content_length":"49019","record_id":"<urn:uuid:ee533d31-a84b-4430-a12a-f13124c7e48d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
physics and maths 01-23-2002 #1 physics and maths i havnt seen any physics and maths forums for discussions and algorithms so ive created some - pay a visit at www.forums.iainpb.com - ive also created boards for the new vxml if anyone would like to discuss it. Monday - what a way to spend a seventh of your life Them weird ass europeans. :P Prove you can code in C++ or C# at TopCoder, referrer rrenaud Read my livejournal Computer Science makes Math useful. Fourier once said: math is just a set of tools needed to do physics. Originally posted by Shiro Fourier once said: math is just a set of tools needed to do physics. Also, thanks to Fourier for making it possible to have the Internet, it's some of his math that makes it possible. Seems that we have a State Diagram or an odd life cycle the . are needed to perserve spaceing ..............................MATH -----+----- COMPUTERS ......................................\........|.. .... / Last edited by bman1176; 01-23-2002 at 02:09 PM. Another great French mathematician, Laplace, didn't agree with Fourier at all. According to Laplace mathematics was a science on it's own with no intention to be a tool for other sciences. Also Laplace did some nice mathematical discoveries that in some way brought us the internet. the arguement my physics DR used to put forward was matematics is a tool, alone its useless - adapted to real world it is very powerful. now visit those boards and discuss fibonnacci (sp), fourier transforms and advanced maths algorithms until your heart is content.! Monday - what a way to spend a seventh of your life I wish Laplace and Fourier never existed. I would rather live in a cave then learn what they have come up with. SVG is the future >>fibonnacci (sp), << fibonacci - but there's not a lot to discuss (start with the terms 1 and 1, and each successive term should be equal to the sum of the last 2 terms). well me being dumb and all that made a million posts to answer a simple question (http://www.iainpb.co.uk/forums/uploa...hp?threadid=64). Iain, can't you upgrade your vBoard to allow and edit function for posts? Last edited by mithrandir; 01-24-2002 at 12:02 AM. Someone explain this, Issac Newton invented Calculus to do his Physics. Does this mean Calculus is a product of Physics? Wouldn't it mean the opposite? That his physics theories were as a result of calculus? Originally posted by [stealth] Wouldn't it mean the opposite? That his physics theories were as a result of calculus? He had the theories in his head and invented Calculus to prove them. His theries would not work with out calculus, but there was no need for the Calculus until he had the theries to prove. <shameless and feeble plug> hey there! That little problem sounds like a question for the mathematics and physics forum over at www.forums.iainpb.com the friendly folk will be happy to help </shameless and feeble plug> Monday - what a way to spend a seventh of your life Computer Science is something what makes Math interesting(like Physics). But I don“t have it as Subject at School although I“ve lots of math and physics lessons When I close my eyes nobody can see me... >Computer Science is something what makes Math interesting >(like Physics). But would physics be interesting without mathematics? I like math and need it sometimes to do my work or at least understand what I'm doing. Applied mathematics is very useful, though pure mathematics is also very interesting. Some math doesn't have an application yet, but I think that one they it will. Look at number theory, it found an application in cryptography. 01-23-2002 #2 01-23-2002 #3 Registered User Join Date Aug 2001 Fort Worth, TX 01-23-2002 #4 Join Date Aug 2001 Groningen (NL) 01-23-2002 #5 Registered User Join Date Aug 2001 Fort Worth, TX 01-23-2002 #6 Join Date Aug 2001 Groningen (NL) 01-23-2002 #7 01-23-2002 #8 01-23-2002 #9 01-24-2002 #10 Registered User Join Date Aug 2001 Fort Worth, TX 01-24-2002 #11 01-24-2002 #12 Registered User Join Date Aug 2001 Fort Worth, TX 01-24-2002 #13 01-24-2002 #14 Registered User Join Date Oct 2001 01-24-2002 #15 Join Date Aug 2001 Groningen (NL)
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/9344-physics-maths.html","timestamp":"2014-04-17T12:51:46Z","content_type":null,"content_length":"95018","record_id":"<urn:uuid:13ba8599-e2b0-4c84-bf29-fdd3d77b66d4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Discrete Math Tutors Concord, CA 94521 Math, Statistics, & Computer Science ...I started tutoring in college and found that I have a keen insight into where students miss things and how to cultivate their understanding. My undergraduate degree is in computer science (CSU Hayward) and included discrete math (combinatorics, graphs, automata... Offering 10+ subjects including discrete math
{"url":"http://www.wyzant.com/Concord_CA_Discrete_Math_tutors.aspx","timestamp":"2014-04-21T06:02:59Z","content_type":null,"content_length":"60449","record_id":"<urn:uuid:cf5891de-d85a-46dd-8894-8e68763c50ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Radiation Pressure Consider a sheet of surface current K: KKKKKKKKKKKKKKKKKKKKKKKKKK with the EM wave incident from above. The B field comes from two sources 1) the wave B_W, and 2)the current, B_K. Below the surface (in the metal) B is zero. That mean the two components of B cancel so B_W=B_K in magnitude and opposite in sign. Above the surface, B_K changes sign, but B_W doesn't, so the total B field outside is twice the field B_W which acts on the surface current.
{"url":"http://www.physicsforums.com/showthread.php?t=575272","timestamp":"2014-04-17T07:21:26Z","content_type":null,"content_length":"34573","record_id":"<urn:uuid:ee20dfeb-8889-4673-a0af-84bcca443db5>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Colloquium Publications 1905; 188 pp; softcover Volume: 1 ISBN-10: 0-8218-4588-8 ISBN-13: 978-0-8218-4588-2 List Price: US$46 Member Price: US$36.80 Order Code: COLL/1 The 1903 colloquium of the American Mathematical Society was held as part of the summer meeting that took place in Boston. Three sets of lectures were presented: Linear Systems of Curves on Algebraic Surfaces, by H. S. White, Forms of Non-Euclidean Space, by F. S. Woods, and Selected Topics in the Theory of Divergent Series and of Continued Fractions, by Edward B. Van Vleck. White's lectures are devoted to the theory of systems of curves on an algebraic surface, with particular reference to properties that are invariant under birational transformations and the kinds of surfaces that admit given systems. Woods' lectures deal with the problem of the classification of three-dimensional Riemannian spaces of constant curvature. The author presents and discusses Riemann postulates characterizing manifolds of constant curvature, and explains in detail the results of Clifford, Klein, and Killing devoted to the local and global classification problems. The subject of Van Vleck's lectures is the theory of divergent series. The author presents results of Poincaré, Stieltjes, E. Borel, and others about the foundations of this theory. In particular, he shows "how to determine the conditions under which a divergent series may be manipulated as the analytic representative of an unknown function, to develop the properties of the function, and to formulate methods of deriving a function uniquely from the series." In the concluding portion of these lectures, some results about continuous fractions of algebraic functions are presented. Graduate students and research mathematicians interested in analysis. • H. S. White -- Linear systems of curves on algebraic surfaces • F. S. Woods -- Forms of non-Euclidean space • E. B. Van Vleck -- Selected topics in the theory of divergent series and of continued fractions • Bibliography
{"url":"http://ams.org/bookstore?fn=20&arg1=collseries&ikey=COLL-1","timestamp":"2014-04-17T07:54:44Z","content_type":null,"content_length":"15957","record_id":"<urn:uuid:91c0f276-aecb-410f-ae8d-31b22d7d7ec8>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematicians' glasses Mathematicians’ glasses October 26, 2011 | 4 Comments Many of the papers I’m reading leap between equations with phrases like • “The equation may readily be solved to give [next equation which looks totally unlike previous equation],” • “The reader may easily supply the details [of the derivation I read this paper to learn],” and • “[the next step I really need to make my model work] is obvious, and hence left as an exercise for the reader.” Clearly I need to get a pair of the special glasses mathematicians must be wearing to make three pages of torturous algebraic manipulation and tricky rearrangements and substitutions “obvious.” Or perhaps “clear,” “obvious,” “trivial,” and “easy” are defined differently in math? 4 Comments 1. At least you are getting past the equations to the sentences that follow. I think plenty of people probably stop at the sight of those weird long S shapes and go straight to the discussion! A thing that bugs me about those phrases is that the authors likely spent AT LEAST a few months working through the ideas. For a reader to follow them through the steps would require being either a close colleague who saw the development of the ideas or similar large amounts of time. Then again, if it is not in a mathematics-focused journal, there might not be a need to go through all of the details. And if the author skips the details, they need to make it seem like they are confident that it is correct (by taunting readers into challenging the theory, haha). □ Some of the papers I’ve read lately are hardly anything _but_ equations, like this Kimura paper. One of the things that’s helped me keep plugging away at these papers (and kept me from feeling like there’s no hope for me as a scientist) is talking about these sorts of papers in discussion groups. Last week I asked a question I thought was probably stupid about how to get from equation c to equation d in some paper. But my question sent someone I really admire running to her office for an entire book chapter describing how to do it and the hidden assumptions in the paper. 2. Mathematician’s glasses, I’ve found (inasmuch as they resemble theoretical physicists’ glasses), just instantly show everything as few degrees’ worth of polynomial approximation around a point of At least, they did in my field of physics. 3. R. A. Fisher was infamous for this kind of thing. It’s why it took decades for people to figure out what he was talking about with the Fundamental Theorem of Natural Selection.
{"url":"http://sarcozona.org/2011/10/26/mathematicians-glasses/","timestamp":"2014-04-19T01:46:44Z","content_type":null,"content_length":"34377","record_id":"<urn:uuid:ca8983ca-248a-4edc-ab12-b51dce7def82>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling boundary measurements of scattered light using the corrected diffusion approximation We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. © 2012 OSA OCIS Codes (000.3860) General : Mathematical methods in physics (030.5620) Coherence and statistical optics : Radiative transfer (170.3660) Medical optics and biotechnology : Light propagation in tissues (170.7050) Medical optics and biotechnology : Turbid media (290.1990) Scattering : Diffusion ToC Category: Optics of Tissue and Turbid Media Original Manuscript: January 5, 2012 Revised Manuscript: February 9, 2012 Manuscript Accepted: February 9, 2012 Published: February 21, 2012 Ossi Lehtikangas, Tanja Tarvainen, and Arnold D. Kim, "Modeling boundary measurements of scattered light using the corrected diffusion approximation," Biomed. Opt. Express 3, 552-571 (2012) Sort: Year | Journal | Reset 1. A. Ishimaru, Wave Propagation and Scattering in Random Media (IEEE, New York, 1997). 2. L.V. Wang and H. Wu, Biomedical Optics: Principles and Imaging (Wiley, Hoboken, NJ, 2007). 3. S.R. Arridge, “Optical tomography in medical imaging,” Inv. Prob.15, R41–R93 (1999). [CrossRef] 4. S.R. Arridge and J.C. Schotland, “Optical tomography: forward and inverse problems,” Inv. Prob.25123010 (2009). [CrossRef] 5. A.P. Gibson, J.C. Hebden, and S.R. Arridge, “Recent advances in diffuse optical imaging,” Phys. Med. Biol.50R1–R43 (2005). [CrossRef] [PubMed] 6. B.T. Cox, S.R. Arridge, K.P. Köstli, and P.C. Beard, “Two-dimensional quantitative photoacoustic image reconstruction of absorption distributions in scattering media by use of a simple iterative method,” Appl. Opt.45, 1866–1875 (2006). [CrossRef] [PubMed] 7. G. Bal and K. Ren, “Multi-source quantitative photoacoustic tomography in a diffusive regime,” Inv. Prob.27, 075003 (2011). [CrossRef] 8. A.Q. Bauer, R.E. Nothdurft, T.N. Erpelding, L.V. Wang, and J.P. Culver, “Quantitative photoacoustic imaging: correcting for heterogeneous light fluence distributions using diffuse optical tomography,” J. Biomed. Opt.16, 096016 (2011). [CrossRef] [PubMed] 9. R. Aronson, “Extrapolation distance for diffusion of light,” Proc. SPIE1888, 297–304 (1993). [CrossRef] 10. R. C. Haskell, L. O. Svaasand, T.-T. Tsay, T.-C. Feng, M. S. McAdams, and B. J. Tromberg, “Boundary conditions for the diffusion equation in radiative transfer,” J. Opt. Soc. Am. A11, 2727–2741 (1994). [CrossRef] 11. R. Aronson, “Boundary conditions for the diffusion of light,” J. Opt. Soc. Am. A12, 2532–2539 (1995). [CrossRef] 12. R. Aronson and N. Corngold, “Photon diffusion coefficient in an absorbing medium,” J. Opt. Soc. Am. A16, 1066–1071 (1999). [CrossRef] 13. J. Ripoll and M. Nieto-Vesperinas, “Index mismatch for diffuse photon density waves at both flat and rough diffuse-diffuse interfaces,” J. Opt. Soc. Am. A16, 1947–1957 (1999). [CrossRef] 14. S. Fantini, M. A. Franceschini, and E. Gratton, “Effective source term in the diffusion equation for photon transport in turbid media,” Appl. Opt.36, 156–163 (1997). [CrossRef] [PubMed] 15. X. Intes, B. Le Jeune, F. Pellen, Y. Guern, J. Cariou, and J. Lotrian, “Localization of the virtual point source used in the diffusion approximation to model a collimated beam source,” Waves Random Media9, 489–499 (1999). [CrossRef] 16. T. Spott and L. O. Svaasand, “Collimated light sources in the diffusion approximation,” Appl. Opt.39, 6453–6465 (2000). [CrossRef] 17. L.-H. Wang and S. L. Jacques, “Hybrid model of Monte Carlo simulation and diffusion theory for light reflectance by turbid media,” J. Opt. Soc. Am. A10, 1746–1752 (1993). [CrossRef] 18. T. Tarvainen, M. Vauhkonen, V. Kolehmainen, and J. P. Kaipio, “Finite element model for the coupled radiative transfer equation and diffusion approximation,” Int. J. Num. Math. Eng.65, 383–405 (2006). [CrossRef] 19. H. Gao and H. Zhao, “A fast forward solver of radiative transfer equation,” Transp. Theory Stat. Phys38, 149–192 (2009). [CrossRef] 20. A.D. Kim, “Correcting the diffusion approximation at the boundary,” J. Opt. Soc. Am. A28, 1007–1015 (2011). [CrossRef] 21. E.W. Larsen and J. B. Keller, “Asymptotic solution of neutron transport problems for small mean free paths,” J. Math. Phys.15, 75–81 (1974). [CrossRef] 22. G.J. Habetler and B. J. Matkowsky, “Uniform asymptotic expansions in transport theory with small mean free paths, and the diffusion approximation,” J. Math. Phys.16, 846–854 (1975). [CrossRef] 23. G.C. Pomraning and B. D. Ganapol, “Asymptotically consistent reflection boundary conditions for diffusion theory,” Ann. Nucl. Energy22, 787–817 (1995). [CrossRef] 24. L.G. Henyey and J.L. Greenstein, “Diffuse radiation in the Galaxy,” Astrophys. J.93, 70–83 (1941). [CrossRef] 25. J. Heino, S.R. Arridge, J. Sikora, and E. Somersalo, “Anisotropic effects in highly scattering media,” Phys. Rev. E68, 031908 (2003). [CrossRef] 26. A.D. Kim and J. B. Keller, “Light propagation in biological tissue,” J. Opt. Soc. Am. A20, 92–98 (2003). [CrossRef] 27. A.D. Kim, “Transport theory for light propagation in biological tissue,” J. Opt. Soc. Am. A21, 820–827 (2004). [CrossRef] 28. S.R. Arridge, M. Schweiger, M. Hiraoka, and D. T. Delpy, “A finite element approach for modeling photon transport in tissue,” Med. Phys.20, 299–299 (1993). [CrossRef] [PubMed] 29. K.D. Paulsen and H. Jiang, “Spatially varying optical property reconstruction using a finite element diffusion equation approximation,” Med. Phys.22, 691–702 (1995). [CrossRef] [PubMed] 30. M. Schweiger and S.R. Arridge, “The finite-element method for the propagation of light in scattering media: frequency domain case,” Med. Phys.24, 895–902 (1997). [CrossRef] [PubMed] 31. V. Kolehmainen, S.R. Arridge, W.R.B Lionheart, M. Vauhkonen, and J.P. Kaipio, “Recovery of region boundaries of piecewise constant coefficients of an elliptic PDE from boundary data,” Inv. Prob.15, 1375–1391 (1999). [CrossRef] 32. T. Tarvainen, M. Vauhkonen, V. Kolehmainen, and J.P. Kaipio, “Hybrid radiative-transfer-diffusion model for optical tomography,” Appl. Opt.44, 876–886 (2005). [CrossRef] [PubMed] 33. T. Tarvainen, M. Vauhkonen, V. Kolehmainen, S.R. Arridge, and J.P. Kaipio, “Coupled radiative transfer equation and diffusion approximation model for photon migration in turbid medium with low-scattering and non-scattering regions,” Phys. Med. Biol, 50, 4913–4930 (2005). [CrossRef] [PubMed] 34. A.D. Kim and M. Moscoso, “Diffusion of polarized light,” Multiscale Model. Simul.9, 1624–1645 (2011). [CrossRef] 35. T. Tarvainen, V. Kolehmainen, S.R. Arridge, and J.P. Kaipio, “Image reconstruction in diffuse optical tomography using the coupled radiative transport-diffusion model,” J. Quant. Spect. Rad. Trans.1122600–2608 (2011). [CrossRef] 36. M. Schweiger, S.R. Arridge, M. Hiraoka, and D.T. Delpy, “The finite element method for the propagation of light in scattering media: boundary and source conditions,” Med. Phys.22, 1779–1792 (1995). [CrossRef] [PubMed] 37. M. Keijzer, W.M. Star, and P.R.M. Storchi, “Optical diffusion in layered media,” Appl. Opt.27, 1820–1824 (1988). [CrossRef] [PubMed] 38. G. Kanschat, “A robust finite element discretization for radiative transfer problems with scattering,” East West J. Num. Math.6, 265–272 (1998). 39. S. Richling, E. Meinköhn, N. Kryzhevoi, and G. Kanschat, “Radiative transfer with finite elements,” Astron. Astrophys.380, 776–788 (2001). [CrossRef] OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/boe/abstract.cfm?uri=boe-3-3-552","timestamp":"2014-04-19T23:25:15Z","content_type":null,"content_length":"229841","record_id":"<urn:uuid:f866dc11-8e89-4259-b921-43d67592ef18>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - showing F is not continuous Re: showing F is not continuous Sorry, I can't access the 'Edit' button for some reason. A standard counter to having a function beeing continuous in each variable, yet not overall continuous is the one given by tomboi03. My point is that a homotopy between functions f,g, is defined to be a _continuous_ map H(x,t) with H(x,0)=f and H(x,1)=g. Since we cannot count on H(x,t) being continuous when each of H(x,.) and H (.,y) is continuous :what kind of result do we use to show that our map H(x,t) is continuous? Do we use the 'good-old' inverse image of an open set is open , or do we use the sequential continuity result that [{x_n}->x ] ->[f(x_n)=f(x)] (with nets if necessary, i.e., if XxI is not 1st-countable)? I saw a while back an interesting argument that if continuity on each variable alone was enough to guarantee continuity, then every space would have trivial fundamental group: e.g, for S^1, use H(e^i*2Pi*t,s) :=e^i2Pi(t)^s
{"url":"http://www.physicsforums.com/printthread.php?t=294826","timestamp":"2014-04-19T04:34:39Z","content_type":null,"content_length":"7286","record_id":"<urn:uuid:16b4bcfa-a1cb-46e6-875b-debbb89df002>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
Hip roof calculator One of users asked us to create calculator which will help him estimate hip roof parameters, like rafter's lengths, roof rise and roof area. Here it is. To get results you need to provide roof base dimensions (length and width) and roof pitch (we assume it is identical for all sides). All formulae are below the calculator.
{"url":"http://planetcalc.com/1147/","timestamp":"2014-04-17T21:22:56Z","content_type":null,"content_length":"59736","record_id":"<urn:uuid:79068ff3-377b-43e1-99a3-1beeac993287>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Studies of Wave Propagation in Sea-Ice Abstract (Summary) This thesis presents mathematical studies of wave propagation at the surface of the ocean covered with sea-ice sheets. During the winter months the sea around the coast of Antarctica freezes to form a vast area of ice covered region. The ice sheets found in this area are often large and apparently featureless. From these properties, a large scale sea-ice sheet has been modelled as a homogeneous thin elastic plate with a constant Young's modulus when the deformation of the ice sheet is small. A partial differential equation describing the motion of an ice sheet is de-rived from the classical theory of linear elasticity. A well known system of partial differential equations is derived to describe the vertical deflection of a thin elastic plate coupled with a linearized Bernoulli's equation at the surface and Laplace's equation for incompressible fluid of finite depth. The Fourier transform is used to derive the vertical deflection of the ice sheet when a localized time harmonic vertical load is applied. A fundamental solution can be expressed by infinite summations of fractional functions at complex roots of the dispersion equation. The inverse Fourier transform is calculated by hand using the Hankel transform and the resulting formulae are directly turned into numerically stable computer codes without any modification. It is shown from the finite water depth solution, that the infinite series expansion of the deflection can be reduced to a sum of special functions whose modes are three roots of a fifth order polynomial. Furthermore, the space and time variables are non-dimensionalized using a characteristic length and characteristic time which are determined by the thickness and Young's modulus of the ice sheet. As a consequence of the non-dimensionalization, the solutions are insensitive to the ice thickness and categorized according to distinctive physical characteristics. It is conversely shown that the characteristic length and characteristic time can be measured from direct observations of the flexural motion, such as the surface strain or tilt, of the ice sheet. Interaction between two semi-infinite ice sheets that are joined by a straight infinite line transition is considered. When a plane wave is obliquely incident on the transition, reflection and transmission of wave energy occurs. Analytical formulae for the coefficients of a modal expansion of the waves in the ice sheet are derived using the Wiener-Hopf technique and formulae for the reflection and transmission coefficients of the surface waves are also presented. The formulae are directly turned into stable computer codes. The application of the Wiener-Hopf technique given here is modified from a standard method to accommodate the incident wave from infinity better. A set of physical conditions at the transition can be expressed using a pair of 4 x 4 matrix and 4-element vector, whose elements are computed from the coefficients of the modal expansion of the solutions. Hence the solution procedure is able to cope with various transition conditions simply by changing the matrix and the vector. Modelling of finite ice sheets using the boundary integral equation method is also considered. Formulation of boundary integral equations is given and it is shown that the dynamics of the ice sheet is represented by the boundary integrals only on the edge of the ice sheet. Connection between the Wiener-Hopf technique and boundary integral method is made. Bibliographical Information:
{"url":"http://www.openthesis.org/documents/Mathematical-Studies-Wave-Propagation-in-551220.html","timestamp":"2014-04-20T19:06:53Z","content_type":null,"content_length":"11337","record_id":"<urn:uuid:af8545b6-d143-4cc7-9cd5-d8482a9bc6e9>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
New to discrete and in need of help!! Set question. February 24th 2009, 07:36 PM #1 Junior Member Feb 2009 Hello everyone, I am really new to discrete and totally dont get it!! I have a question that I have no idea how to get started or really approach it. It is a Set theory question. I really appreciate it if someone can give me the step by step procedure of how to solve it and explanations so I can use it as a learning guide. THANK YOU!!! If the statement is true, give a proof. If it is false then write its negation and prove it. Assume all sets are subsets of a universal set U. For all sets A, B,and C, A-(B-C) = (A-B)-C Suppose that you want to prove two sets A and B to be equal. There are two main ways to do that: 1) Take one of them, say A, and performing algebraic transformations try to turn it into B. 2) Prove that $A\subseteq B$ and $B\subseteq A$. To do this, first let $x\epsilon A$ and try to show that $x\epsilon B$ ($A\subseteq B$), and then let $x\epsilon B$ and try to show that $x\ epsilon A$ ($B\subseteq A$). Hello everyone, I am really new to discrete and totally dont get it!! I have a question that I have no idea how to get started or really approach it. It is a Set theory question. I really appreciate it if someone can give me the step by step procedure of how to solve it and explanations so I can use it as a learning guide. THANK YOU!!! If the statement is true, give a proof. If it is false then write its negation and prove it. Assume all sets are subsets of a universal set U. For all sets A, B,and C, A-(B-C) = (A-B)-C Take A={1,2,3,4},B={1,2,3},C={3}. Now A-(B-C)= {1,2,3,4}-[ {1,2,3}-{3}]= {1,2,3,4}-{1,2}={3,4} But (A-B)-C = [{1,2,3,4}-{1,2,3}]-{3}= {4}-{3} = {4} Hence $A-(B-C)eq (A-B)-C$ Awesome guys, you've both been a great help. However I got another question.. How would I prove this statement is true? The previous one wasnt too bad cause it was false and I would just provide a counter-example, but this one is true. For all sets A, B and C, A x (B-C) = (A x B) - (A X C) Would I assume the negation is true and provide a counter example to prove the negation is false and conclude that the original statement is true? Awesome guys, you've both been a great help. However I got another question.. How would I prove this statement is true? The previous one wasnt too bad cause it was false and I would just provide a counter-example, but this one is true. For all sets A, B and C, A x (B-C) = (A x B) - (A X C) Would I assume the negation is true and provide a counter example to prove the negation is false and conclude that the original statement is true? No because like that is like proving the above by using only one example .We want to give a general proof Anyway if the above is true why not try to prove it > So let xε[Ax(B-C)] THAT implies xεA AND xε(B-C)====> xεA AND xεB AND ~xεC(= x does not belong to C) and that implies: (xεA AND xεB)AND(~xεC v ~xεA)====> xε(AxB) AND ~xε(ΑxC) ===>..........................BY using de morgan.......... ====>xε[(AxB)-(AxC)] ====> $Ax(B-C)\subseteq (AxB)-(AxC)$ You try the converse Last edited by benes; February 25th 2009 at 07:19 PM. Reason: WRONG SENTENCE No because like that is like proving the above by using only one example .We want to give a general proof Anyway if the above is true why not try to prove it > So let xε[Ax(B-C)] THAT implies xεA AND xε(B-C)====> xεA AND xεB AND ~xεC(= x does not belong to C) and that implies: (xεA AND xεB)AND(~xεC v ~xεA)====> xε(AxB) AND ~xε(ΑxC) ===>..........................BY using de morgan.......... ====>xε[(AxB)-(AxC)] ====> $Ax(B-C)\subseteq (AxB)-(AxC)$ You try the converse Ahh perfect thanks alot. However to complete the proof I have to do the other way around where (AxB)-(AxC) = Ax(B-C) as well right? $x\in[(AxB)-(AxC)]\Longrightarrow[(x\in A\wedge x\in B)\wedge(eg x\in A\veeeg x\in C)]\Longrightarrow$$[(x\in A\wedge x\in B)\wedge(x\in A\rightarrow eg x\in C)]$$\Longrightarrow[x\in A\wedge x\ in B\wedgeeg x\in C]\Longrightarrow[x\in A\wedge(x\in B\wedgeeg x\in C)]\Longrightarrow x\in[Ax(B-C)]$ Hence $[(AxB)-(AxC)]\subseteq Ax(B-C)$ Thanks alot!! February 25th 2009, 12:14 AM #2 February 25th 2009, 03:43 PM #3 Feb 2009 February 25th 2009, 06:18 PM #4 Junior Member Feb 2009 February 25th 2009, 07:12 PM #5 Feb 2009 February 26th 2009, 12:30 PM #6 Junior Member Feb 2009 February 26th 2009, 04:05 PM #7 Feb 2009 February 26th 2009, 05:26 PM #8 Junior Member Feb 2009
{"url":"http://mathhelpforum.com/discrete-math/75636-new-discrete-need-help-set-question.html","timestamp":"2014-04-19T21:03:38Z","content_type":null,"content_length":"52608","record_id":"<urn:uuid:e5e94722-c5a6-43d7-8743-5b9962f8644d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Intersecting Parallel And Perpendicular Lines > Intersecting, Parallel And Perpendicular Lines < you are here Intersecting, Parallel And Perpendicular Lines Intersecting Lines cross each other at a point. There is no symbol for intersecting lines. Parallel Lines never meet (never intersect) and is always the same distance apart. The symbol for Parallel Line is a pair of vertical lines. Perpendicular Lines meet at right angles. Perpendicular lines crosses each (intersect) to form a right angle or 90 degree angle. Perpendicular symbol is
{"url":"http://mathatube.com/geometry-intersec-parallel-perpendi.html","timestamp":"2014-04-17T18:25:35Z","content_type":null,"content_length":"15597","record_id":"<urn:uuid:f509c9fb-7ca4-40d2-969b-a9e2b20fe03c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
strory problem help January 6th 2007, 06:31 PM strory problem help Sam lives on a lot that he thought was a square, 157 feet by 157 feet. When he had it surveyed , he discovered that one side was actaully 2 feet longer then he thought and the other was actually 2 feet shorter then he thought. How much less area does he have then he thought he had? I was coming up with this anwer 457 ft less then he thought, would that be right? January 6th 2007, 06:53 PM 4 feet less? January 6th 2007, 07:45 PM Hello, fancyface! This doesn't require any algebra at all. So exactly where is your difficulty? Sam lives on a lot that he thought was a square, 157 feet by 157 feet. When he had it surveyed, he discovered that one side was actaully 2 feet longer then he thought and the other was actually 2 feet shorter then he thought. How much less area does he have then he thought he had? Sam thought he had: $157 \times 157 \:=\:24,649$ ft². Instead, his lot was actually 159 feet by 155 feet. . . So he had: $159 \times 155 \:=\:24,645$ ft². Let's see, that's a difference of . . . um . . . (Where's my calculator?) January 6th 2007, 08:26 PM Sam lives on a lot that he thought was a square, 157 feet by 157 feet. When he had it surveyed , he discovered that one side was actaully 2 feet longer then he thought and the other was actually 2 feet shorter then he thought. How much less area does he have then he thought he had? I was coming up with this anwer 457 ft less then he thought, would that be right? It depends on where those two sides with errors are on the lot. a) If two opposite sides were actually 159 ft each, and so the other two opposites were actually 155 ft each, then, He thought the area was (157)(157) = 157^2 But actual area = = (157 +2)(157 -2) = 157^2 -2^2 = (Area of square lot he thought) minus 4. Therfore, the actual lot is 4 sq. ft. less than he thought. b) If the two sides with errors are adjacent to each other, then the actual lot area can be computed by dividing the actual lot into two triangles. One triangle is an isosceles rigth triangle; the other is a not a right triangle but whose 3 sides are 159, 157 and the hypotenuse of the isosceles right triangle. Not as easy to do. Lots of square roots. Heron's formula. c) If the two erroneous sides are opposite each other, then there are two possible ways, depending on which erroneous side is perpendicular to one of the 157-ft side. Again, same method as in part b) above for each way.
{"url":"http://mathhelpforum.com/math-topics/9624-strory-problem-help-print.html","timestamp":"2014-04-16T16:36:15Z","content_type":null,"content_length":"7307","record_id":"<urn:uuid:b7e2c519-d3d2-40ba-b581-eaff53194ccc>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Walnut Creek, CA Geometry Tutor Find a Walnut Creek, CA Geometry Tutor ...Before I became a tutor, I worked as a Copy Editor in the business world (e.g. Dolby Laboratories) and academia (UC Berkeley). I have created and taught a highly acclaimed Grammar Course for high school students. I teach grammar in a way that is accessible to all students, including those for whom English is a non-native tongue. 53 Subjects: including geometry, English, reading, Spanish ...I have years of experience using STATA. I used this program in a number of settings and am well-versed in it uses. I have previously tutored a number of students in this application. 49 Subjects: including geometry, calculus, physics, statistics ...I adapt to each student’s needs. I want to learn from my students as much as they want to learn from me. This is a two-way process. 24 Subjects: including geometry, chemistry, physics, algebra 1 I am a former university professor and have taught in the undergraduate and graduate levels for more than 10 years. In all my years of teaching, I have consistently received &quot;Excellent&quot; rating in the university SET (Student Evaluation of Teachers). I am a very patient and conscien... 10 Subjects: including geometry, chemistry, statistics, algebra 1 ...These students have gained acceptance from universities including: UC Davis, UC Berkeley, San Jose State University, Sacramento State University, and others. I'm proud of the work these students have completed, am delighted to see parents taking interest in their students, and look forward at th... 13 Subjects: including geometry, calculus, ASVAB, algebra 1 Related Walnut Creek, CA Tutors Walnut Creek, CA Accounting Tutors Walnut Creek, CA ACT Tutors Walnut Creek, CA Algebra Tutors Walnut Creek, CA Algebra 2 Tutors Walnut Creek, CA Calculus Tutors Walnut Creek, CA Geometry Tutors Walnut Creek, CA Math Tutors Walnut Creek, CA Prealgebra Tutors Walnut Creek, CA Precalculus Tutors Walnut Creek, CA SAT Tutors Walnut Creek, CA SAT Math Tutors Walnut Creek, CA Science Tutors Walnut Creek, CA Statistics Tutors Walnut Creek, CA Trigonometry Tutors Nearby Cities With geometry Tutor Alameda geometry Tutors Alamo, CA geometry Tutors Antioch, CA geometry Tutors Berkeley, CA geometry Tutors Concord, CA geometry Tutors Danville, CA geometry Tutors Lafayette, CA geometry Tutors Moraga geometry Tutors Oakland, CA geometry Tutors Piedmont, CA geometry Tutors Pittsburg, CA geometry Tutors Pleasant Hill, CA geometry Tutors San Leandro geometry Tutors San Pablo, CA geometry Tutors San Ramon geometry Tutors
{"url":"http://www.purplemath.com/Walnut_Creek_CA_geometry_tutors.php","timestamp":"2014-04-16T04:35:56Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:3e9fff6c-3b58-4fa0-a9ab-ad32f01ca9d7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
randquant(q,n) uses quantizer object q to generate an n-by-n matrix with random entries whose values cover the range of q when q is a fixed-point quantizer object. When q is a floating-point quantizer object, randquant populates the n-by-n array with values covering the range -[square root of realmax(q)] to [square root of realmax(q)] randquant(q,m,n) uses quantizer object q to generate an m-by-n matrix with random entries whose values cover the range of q when q is a fixed-point quantizer object. When q is a floating-point quantizer object, randquant populates the m-by-n array with values covering the range -[square root of realmax(q)] to [square root of realmax(q)] randquant(q,m,n,p,...) uses quantizer object q to generate an m-by-n-by-p-by ... matrix with random entries whose values cover the range of q when q is fixed-point quantizer object. When q is a floating-point quantizer object, randquant populates the matrix with values covering the range -[square root of realmax(q)] to [square root of realmax(q)] randquant(q,[m,n]) uses quantizer object q to generate an m-by-n matrix with random entries whose values cover the range of q when q is a fixed-point quantizer object. When q is a floating-point quantizer object, randquant populates the m-by-n array with values covering the range -[square root of realmax(q)] to [square root of realmax(q)] randquant(q,[m,n,p,...]) uses quantizer object q to generate p m-by-n matrices containing random entries whose values cover the range of q when q is a fixed-point quantizer object. When q is a floating-point quantizer object, randquant populates the m-by-n arrays with values covering the range -[square root of realmax(q)] to [square root of realmax(q)] randquant produces pseudorandom numbers. The number sequence randquant generates during each call is determined by the state of the generator. Because MATLAB^® resets the random number generator state at startup, the sequence of random numbers generated by the function remains the same unless you change the state. randquant works like rng in most respects.
{"url":"http://www.mathworks.com/help/fixedpoint/ref/randquant.html?nocookie=true","timestamp":"2014-04-23T15:05:46Z","content_type":null,"content_length":"39940","record_id":"<urn:uuid:6f854a26-60ec-4fa4-8b77-1e829c0e2e21>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Is there always a prime number between \(n\) and \(2n\)? Where, approximately, is the millionth prime? And just what does calculus have to do with answering either of these questions? It turns out that calculus has a lot to do with both questions, as this book can show you. The theme of the book is approximations. Calculus is a powerful tool because it allows us to approximate complicated functions with simpler ones. Indeed, replacing a function locally with a linear--or higher order--approximation is at the heart of calculus. The real star of the book, though, is the task of approximating the number of primes up to a number \(x\). This leads to the famous Prime Number Theorem--and to the answers to the two questions about primes. While emphasizing the role of approximations in calculus, most major topics are addressed, such as derivatives, integrals, the Fundamental Theorem of Calculus, sequences, series, and so on. However, our particular point of view also leads us to many unusual topics: curvature, Padé approximations, public key cryptography, and an analysis of the logistic equation, to name a few. The reader takes an active role in developing the material by solving problems. Most topics are broken down into a series of manageable problems, which guide you to an understanding of the important ideas. There is also ample exposition to fill in background material and to get you thinking appropriately about the concepts. Approximately Calculus is intended for the reader who has already had an introduction to calculus, but wants to engage the concepts and ideas at a deeper level. It is suitable as a text for an honors or alternative second semester calculus course. Request an examination or desk copy. Undergraduate students interested in calculus and number theory. "This fascinating book is a novel approach to undergraduate analysis, which combines most topics in single variable calculus with some elementary number theory. ... It is very well written and fully engages readers in its developments, often beginning with examples and leading them to develop generalizations and, ultimately, theorems and proofs. ... An attractive book, well worth consulting for ideas on presenting topics, or for examples." -- J.H. Ellison, Choice "The book is very well written and contains many references to articles in journals that are accessible to students ..." -- MAA Reviews
{"url":"http://ams.org/bookstore?fn=20&arg1=tb-an&ikey=ACALC","timestamp":"2014-04-19T15:56:58Z","content_type":null,"content_length":"16878","record_id":"<urn:uuid:d0df681d-627e-4f9f-8777-ffd22db08477>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
icas math paper year 3 2008 answers Search results 70 Articles (Search results 1 - 10) : Math for Moms and Dads 6 July 2012 Math for Moms and Dads Math for Moms and Dads - A Dictionary of Terms and Concepts … Just for Parents by Catherine V. Jeremko Kaplan Publishing | October 2008 | ISBN-10: 1427798192 | ePUB/PDF | 3.15/18.3 mb Kids are struggling with math in school, on tests, and with homework. Parents feel stressed, helpless, and math-phobic. They struggle to encourage and assist on the very subject they are least prepared to manage: MATH. Broken down using straightforward, simple language, this guide offers parents who are easily intimidated by math instructive and handy concepts to use when helping their students with homework or studying for a big test. Parents banish math phobia once and for all by facing math head-on in Math for Moms and Dads. Frequently, the issue isn’t “how to,” it’s actually “what do they want me to do?” Learning the language of math in context is the first step in the right direction for helping yourself in today’s math morass so parents can help their child find his or her way out of any math quagmire. Using a similar methodology applied in SAT Score-Raising Math Dictionary, Kaplan now focuses on the parent in this no-nonsense guide to the lexicon of math. Math terminology and key concepts are defined and decoded into regular, everyday language to promote authentic understanding of what’s at the heart of any math problem. Other helpful elements are sample problems (with answers, so parents won’t sweat it!) broken down step-by-step; calculator tips so parents can troubleshoot technology-related concerns facing their kids; visual representations of math for visual learners; and National Council of Teachers of Mathematics standards so parents can plan for what their children are responsible for in the upcoming grade and get the help they may need in the appropriate time frame. Finally, a handy timeline detailing which ages/grades kids need to know and master certain math skills helps parents understand the overall math snapshot for the middle school and high school years ahead. Parents (and then students) learn how to kick word problems to the curb once they figure out how to simplify the language of math with Math for Moms and Dads' easy-to-follow lexicon and resource 1001 Math Problems: Fast, Focused Practice that Improves Your Math Skills 23 January 2011 1001 Math Problems: Fast, Focused Practice that Improves Your Math Skills English | 2009 | 300 pages | PDF | 7 MB Whether you are student who needs more practice than your textbook provides or a professional eager to brush up on your skills, 1001 Math Problems gives you all the practice you need to succeed. The ultimate learn-by-doing preparation guide, 1001 Math Problems will teach you how to: Prepare for important exams Develop multiple-choice test strategies Learn math rules and how to apply them to problems Overcome math anxiety through skills reinforcement and focused practice How does 1001 Math Problems build your math skills? Janice VanCleave's Math for Every Kid: Easy Activities that Make Learning Math Fun 13 December 2010 Janice VanCleave's Math for Every Kid: Easy Activities that Make Learning Math Fun 1991 | 224 | ISBN: 0471542652 | PDF | 2 Mb How long is the world’s longest earthworm? How tall was a brachiosaurus? What’s the average diameter of human hair? What’s the circumference of the earth at the equator? Now you can discover the answers to these and other fascinating questions about math. Packed with illustrations, Math for Every Kid uses simple problems and activities to teach you about measurements, fractions, graphs, problem solving, and much more! Using activities that relate math to everyday life, this book will help you feel comfortable with math—right from the start. You’ll make a sun clock, create a thermometer from a straw, race a paper boat, grow your own bean plant, and even play a game of ring the bottle. Each of the many problems and activities is broken down into its purpose, a list of materials, step-by-step instructions, expected results, and an easy to understand explanation. Every activity has been pretested and can be performed safely and inexpensively in the classroom or at home. ... Math Matters: Understanding the Math You Teach Grades K-8, 2nd Edition 15 December 2010 Math Matters: Understanding the Math You Teach Grades K-8, 2nd Edition By Suzanne H. Chapin, Art Johnson Publisher: Math Solutions; 2nd edition | 2006 | 376 Pages | ISBN: 0941355713 | PDF | 14 MB Math Matters, first published in 2000, quickly became an invaluable resource for math educators nationwide, helping them clarify their own understanding of the math concepts they are required to teach. This important book contains activities and discussions on key elementary topics such as whole number computation, fractions, algebra, geometry, and measurement. The scope in this second edition has now been expanded to address key topics in the middle school math curriculum as well, including sections on integers, exponents, similarity, the Pythagorean Theorem, and more. Schoolhouse Technologies Math Resource Studio 4.4.11.1 23 February 2011 Schoolhouse Technologies Math Resource Studio 4.4.11.1 | 7.67 Mb Provide printable math worksheets and activities for the differentiated classroom with this much-anticipated upgrade to Mathematics Worksheet Factory. The new upgrade to Mathematics Worskheet Factory. Create single-page, single-concept math worksheets or multi-page, multi-concept math reviews almost effortlessly with Math Resource Studio. Math Resource Studio combines the same ease-of-use that made Mathematics Worksheet Factory a favorite instructional tool of teachers around the world with greatly improved design flexibility. Now you can go beyond single-page, single-concept math worksheets to multi-page, multi-concept math reviews, learning-packs and workbooks almost effortlessly. Generate printable math worksheets and activities to provide students with the precise skills development and practice they need as part of a differentiated numeracy program. Math Resource Studio makes it easy to create differentiated activities to support your lesson objectives and target the learning needs of all of your students. Match the varied skill levels in your classroom with the exact practice required to advance those skills to the next level. And do it in seconds. The Little Green Math Book: 30 Powerful Principles for Building Math and Numeracy Skills 8 November 2011 The Little Green Math Book: 30 Powerful Principles for Building Math and Numeracy Skills by Brandon Royal Ma ven Publi shing | English | 2010 | ISBN: 1897393504 | 240 pages | PDF | 1,5 MB For Math Aficionados From All Walks of Life THE LITTLE GREEN MATH BOOK is based on a simple but powerful observation: Individuals who develop outstanding math and numeracy skills do so primarily by mastering a limited number of the most important math principles and problem solving techniques, which they use over and over again. What are these recurring principles and techniques? The answer to this question is the basis of this Bookmark and Share 1001 Math Problems: Fast, Focused Practice that Improves Your Math Skills-Cayenne 5 June 2012 English - -300 pages -PDF -1.83 MB Whether you are student who needs more practice than your textbook provides or a professional eager to brush up on your skills, 1001 Math Problems gives you all the practice you need to succeed. The ultimate learn-by-doing preparation guide, 1001 Math Problems will teach you how to: Prepare for important exams Develop multiple-choice test strategies Learn math rules and how to apply them to problems Overcome math anxiety through skills reinforcement and focused practice How does 1001 Math Problems build your math skills? Master Math Basic Math And Pre Algebra 14 April 2010 Master Math: Basic Math and Pre-Algebra Publisher: Delmar Cengage Learning | pages: 192 | 1996 | ISBN: 1564142140 | PDF | 26 mb Master Math: Basic Math and Pre-Algebra teaches the reader in a very user-friendly and accessible manner the principles and formulas for establishing a solid math foundation. This book covers topics such as complex fractions, mixed numbers and improper fractions; converting fractions, percents, and decimals; solving equations with logarithms or exponents, and much more. How Chefs Use Math (Math in the Real World) 10 September 2010 How Chefs Use Math (Math in the Real World) Chelsea Clubhouse | ISBN: 1604136081 | 2009-10-30 | PDF | 32 pages | 12 Mb If you have ever eaten food at a restaurant, had someone cook for you, or cooked something yourself, you have seen math in action. How Chefs Use Math demonstrateshow chefs use math to measure, prepare, and cook to create tasty, delicious food. Concepts and skills emphasized include: Teach Your Child Math : Making Math Fun for the Both of You 29 November 2010 Teach Your Child Math : Making Math Fun for the Both of You 224 pages | Dec 12, 2007 |ISBN:0737301341 | PDF | 8 Mb By transforming math "problems" into games, this easy-to-follow book gives parents a fun way to help their children learn math. With an expanded section on problem solving, fun word problems, and entertaining visual concepts, it proves that math can be interesting.
{"url":"http://www.downeu.me/d/icas+math+paper+year+3+2008+answers.html","timestamp":"2014-04-19T17:03:55Z","content_type":null,"content_length":"25635","record_id":"<urn:uuid:643cdaf4-ca8a-47f1-9f2a-5e22dcb6c20b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-Dev] SciPy Goal josef.pktd@gmai... josef.pktd@gmai... Thu Jan 5 17:04:18 CST 2012 On Thu, Jan 5, 2012 at 5:30 PM, Neal Becker <ndbecker2@gmail.com> wrote: > Travis Oliphant wrote: >> On Jan 5, 2012, at 1:19 PM, Neal Becker wrote: >>> Travis Oliphant wrote: >>>> On Jan 5, 2012, at 10:00 AM, josef.pktd@gmail.com wrote: >>>>> On Thu, Jan 5, 2012 at 10:32 AM, Neal Becker <ndbecker2@gmail.com> wrote: >>>>>> Some comments on signal processing: >>>>>> Correct me if I'm wrong, but I think scipy signal (like matlab) implement >>>>>> only a >>>>>> general purpose filter, which is an IIR filter, single rate. Efficiency >>>>>> is very important in my work, so I implement many optimized variations. >>>>>> Most of the time, FIR filters are used. These then come in variations for >>>>>> single rate, interpolation, and decimation (there is also another design >>>>>> for >>>>>> rational rate conversion). Then these have variants for scalar/complex >>>>>> input/output, as well as complex in/out with scalar coefficients. >>>>>> IIR filters are seperate. >>>>>> FFT based FIR filters are another type, and include both complex in/out as >>>>>> well as scalar in/out (taking advantage of the 'two channel' trick for >>>>>> fft). >>>>> just out of curiosity: why no FFT base IIR filter? >>>>> It looks like a small change in the implementation, but it is slower >>>>> than lfilter for shorter time series so I mostly dropped fft based >>>>> filtering. >>>> I think he is talking about filter design, correct? >>> The comments I made were all about efficient filter implementation, not about >>> filter design. >>> About FFT-based IIR filter, I never heard of it. I was talking about the >>> fact that fft can be used to efficiently implement a linear convolution >>> exactly (for the case of convolution of a finite or short sequence - the >>> impulse response of the filter - with a long or infinite sequence, the >>> overlap-add or overlap-save techniques are used). >> Sure, of course. It's hard to know the way people are using terms. I agree >> that people don't usually use the term IIR when talking about an FFT-based >> filter (but there is an "effective" time-domain response for every filtering >> operation done in the Fourier domain --- as you noted). That's what I was >> referring to. >> It's been a while since I wrote lfilter, but it transposes the filtering >> operation into Direct Form II, and then does a straightforward implementation >> of the feed-back and feed-forward equations. >> Here is some information on the approach: >> https://ccrma.stanford.edu/~jos/fp/Direct_Form_II.html >> IIR filters implemented in the time-domain need something like lfilter. FIR >> filters are "just" convolution in the time domain --- and there are different >> approaches to doing that discrete-time convolution as you've noted. IIR >> filters are *just* convolution as well (but convolution with an infinite >> sequence). Of course, if you use the FFT-domain to implement the filter, >> then you can just as well design in that space the filtering-function you want >> to multiply the input signal with (it's just important to keep in mind the >> impact in the time-domain of what you are doing in the frequency domain --- >> i.e. sharp-edges result in ringing, the basic time-frequency product >> limitations, etc.) >> These same ideas come under different names and have different emphasis in >> different disciplines. >> -Travis > Here, I claim the best approach is to realize that > 1. Just making the coefficients in the freq domain be samples of a desired > response gives you no exact result (as you noted), but > 2. On the other hand, fft can be used to perform fast convolution, which is (can > be) mathematically exactly the same as time domain convolution. Therefore, just > realize that > * use your favorite FIR filter design tool (e.g., remez) to design the filter > Now the only approximation is in the fir filter design step, and you should know > precisely what is the nature of any approximation Thanks, if I understand both of you correctly, then the difference comes down to whether we want to have a parsimonious IIR parameterization, with only a few parameters that can be estimated as in time series analysis (Box-Jenkins), or whether you want to design a filter where having a "long" FIR representation doesn't have any disadvantages (in frequency domain, FFT, the filter might be full length anyway). > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev@scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev More information about the SciPy-Dev mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2012-January/016884.html","timestamp":"2014-04-17T01:44:19Z","content_type":null,"content_length":"8880","record_id":"<urn:uuid:b84f69e7-92f9-4a95-83df-d0a2cbc61f50>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Extreme value problems in random matrix theory, spin glasses and directed polymers Seminar Room 1, Newton Institute We will review a few applications of extreme values in the theory of disordered systems and mention several open problems, in particular concerning the generalisation of Parisi's solution or of the Tracy-Widom distribution when the disorder has "fat tails".
{"url":"http://www.newton.ac.uk/programmes/PDS/seminars/2006063009001.html","timestamp":"2014-04-20T03:16:30Z","content_type":null,"content_length":"4375","record_id":"<urn:uuid:9db9d87c-b3e6-4ac3-9fd2-eeb164d91cac>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
Transforming left-terminating programs: The reordering problem Results 1 - 10 of 12 - THEORY AND PRACTICE OF LOGIC PROGRAMMING , 2002 "... Program specialisation aims at improving the overall performance of programs by performing source to source transformations. A common approach within functional and logic programming, known respectively as partial evaluation and partial deduction, is to exploit partial knowledge about the input. It ..." Cited by 54 (12 self) Add to MetaCart Program specialisation aims at improving the overall performance of programs by performing source to source transformations. A common approach within functional and logic programming, known respectively as partial evaluation and partial deduction, is to exploit partial knowledge about the input. It is achieved through a well-automated application of parts of the Burstall-Darlington unfold/fold transformation framework. The main challenge in developing systems is to design automatic control that ensures correctness, efficiency, and termination. This survey and tutorial presents the main developments in controlling partial deduction over the past 10 years and analyses their respective merits and shortcomings. It ends with an assessment of current achievements and sketches some remaining research challenges. - J. LOGIC PROGRAMMING , 1999 "... ..." - Proceedings of the International Workshop on Logic Program Synthesis and Transformation (LOPSTR'96), LNCS 1207 , 1996 "... . Recently, partial deduction of logic programs has been extended to conceptually embed folding. To this end, partial deductions are no longer computed of single atoms, but rather of entire conjunctions; Hence the term "conjunctive partial deduction". Conjunctive partial deduction aims at achieving ..." Cited by 27 (19 self) Add to MetaCart . Recently, partial deduction of logic programs has been extended to conceptually embed folding. To this end, partial deductions are no longer computed of single atoms, but rather of entire conjunctions; Hence the term "conjunctive partial deduction". Conjunctive partial deduction aims at achieving unfold/fold-like program transformations such as tupling and deforestation within fully automated partial deduction. However, its merits greatly surpass that limited context: Also other major efficiency improvements are obtained through considerably improved side-ways information propagation. In this extended abstract, we investigate conjunctive partial deduction in practice. We describe the concrete options used in the implementation(s), look at abstraction in a practical Prolog context, include and discuss an extensive set of benchmark results. From these, we can conclude that conjunctive partial deduction indeed pays off in practice, thoroughly beating its conventional precursor on a wide... - Partial Evaluation: Practice and Theory, LNCS 1706 , 1999 "... ..." - ACM Transactions on Programming Languages and Systems , 1998 "... Given a program and some input data, partial deduction computes a specialized program handling any remaining input more efficiently. However, controlling the process well is a rather difficult problem. In this article, we elaborate global control for partial deduction: for which atoms, among possibl ..." Cited by 12 (0 self) Add to MetaCart Given a program and some input data, partial deduction computes a specialized program handling any remaining input more efficiently. However, controlling the process well is a rather difficult problem. In this article, we elaborate global control for partial deduction: for which atoms, among possibly infinitely many, should specialized relations be produced, meanwhile guaranteeing correctness as well as termination? Our work is based on two ingredients. First, we use the concept of a characteristic tree, encapsulating specialization behavior rather than syntactic structure, to guide generalization and polyvariance, and we show how this can be done in a correct and elegant way. Second, we structure combinations of atoms and associated characteristic trees in global trees registering “causal ” relationships among such pairs. This allows us to spot looming nontermination and perform proper generalization in order to avert the danger, without having to impose a depth bound on characteristic trees. The practical relevance and benefits of the work are illustrated through extensive experiments. Finally, a similar approach may improve upon current (on-line) control strategies for program transformation in general such as (positive) supercompilation of functional programs. It also seems valuable in the context of abstract interpretation to handle infinite domains of infinite height with more precision. "... We consider the replacement transformation operation, a very general and powerful transformation, and study under which conditions it preserves universal termination besides computed answer substitutions. With this safe replacement we can significantly extend the safe unfold/fold transformation sequ ..." Cited by 9 (3 self) Add to MetaCart We consider the replacement transformation operation, a very general and powerful transformation, and study under which conditions it preserves universal termination besides computed answer substitutions. With this safe replacement we can significantly extend the safe unfold/fold transformation sequence presented in [11]. By exploiting typing information, more useful conditions can be defined and we may deal with some special cases of replacement very common in practice, namely switching two atoms in the body of a clause and the associativity of a predicate. This is a first step in the direction of exploiting a Pre/Post specification on the intended use of the program to be transformed. Such specification can restrict the instances of queries and clauses to be considered and then relax the applicability conditions on the transformation operations. - PROCEEDINGS OF THE NINTH INTERNATIONAL WORKSHOP ON LOGIC-BASED PROGRAM SYNTHESIS, LOPSTR'99 , 2000 "... We propose an unfold-fold transformation system which preserves left termination for definite programs besides its declarative semantics. The system extends our previous proposal in [BCE95] by allowing to switch the atoms in the clause bodies when a specific applicability condition is satisfied. The ..." Cited by 3 (1 self) Add to MetaCart We propose an unfold-fold transformation system which preserves left termination for definite programs besides its declarative semantics. The system extends our previous proposal in [BCE95] by allowing to switch the atoms in the clause bodies when a specific applicability condition is satisfied. The applicability condition is very simple to verify, yet very common in practice. We also discuss how to verify such condition by exploiting mode information. - Knowledge Engineering Review , 1996 "... this paper, from formal specifications one may obtain executable, efficient programs by using techniques for transforming logic programs. This is, indeed, one of the reasons that makes logic programming very attractive for program construction. During this final step from specifications to programs, ..." Cited by 2 (0 self) Add to MetaCart this paper, from formal specifications one may obtain executable, efficient programs by using techniques for transforming logic programs. This is, indeed, one of the reasons that makes logic programming very attractive for program construction. During this final step from specifications to programs, in order to improve efficiency one may want to use program transformation for avoiding multiple visits of data structures, or replacing complex forms of recursion by tail recursion, or reducing nondeterminism of procedures. This paper is structured as follows. In Section 2 we present the rule-based approach to program transformation and its use for the derivation and synthesis of logic programs from specifications. In Section 3 we consider the schema-based transformation technique for the development of efficient programs. In Section 4 we consider the partial evaluation technique and its use for the specialization of logic programs when the input data are partially known at compile time. In the final section we discuss some of the achievements and challanges of program transformation as a tool for logic-based software engineering. For simplicity reasons in this paper we will only consider definite logic programs, although most of the techniques we will describe can be applied also in the case of general logic programs. We refer to [35, 41] for all notions concerning logic programming and logic program transformation which are not explicitly presented here. , 2000 "... We introduce a logic programming language with higher order features. In particular, in this language the arguments of the predicate symbols may be both terms and goals. We define the operational semantics of our language by extending SLD-resolution, and we propose for this language a set of program ..." Cited by 1 (0 self) Add to MetaCart We introduce a logic programming language with higher order features. In particular, in this language the arguments of the predicate symbols may be both terms and goals. We define the operational semantics of our language by extending SLD-resolution, and we propose for this language a set of program transformation rules. The transformation rules are shown to be correct in the sense that they preserve the operational semantics. In our higher order logic language we may transform logic programs using higher order generalizations and continuation arguments, as it is done in the case of functional programs. These program transformation techniques allow us to derive very efficient logic programs and also to avoid goal rearrangements which may not preserve correctness.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=291716","timestamp":"2014-04-21T02:16:44Z","content_type":null,"content_length":"37448","record_id":"<urn:uuid:5222ad3d-7fc7-4d8c-86d2-e614bba29062>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
From Encyclopedia of Mathematics 2010 Mathematics Subject Classification: Primary: 26B15 [MSN][ZBL] The perimeter of a planar region bounded by a rectifiable curve is the total length of the corresponding curve. In higher dimension, the perimeter of an open set $U\subset \mathbb R^n$ with $C^1$ boundary $\partial U$ (i.e. such that $\partial U$ is a $C^1$ submanifold) is the $n-1$-dimensional volume of $\partial U$, namely \[ {\rm Vol}^{n-1} (\partial U) = \int_{\partial U} {\rm d vol} \] where ${\rm d vol}$ is the $n-1$ dimensional volume form, for the Riemannian structure induced by the restriction of the scalar Euclidean product on the tangent space to $\partial U$. In fact such integral coincides with the $n-1$-dimensional Hausdorff measure of $\partial U$, see Area formula. An analogous definition can be given for open subsets of a Riemannian manifold with $C^1$ boundary. For Lebesgue measurable subsets of $\mathbb R^n$ a very general definition was proposed originally by Caccioppoli in [Ca] and used later to build a far-reaching theory by De Giorgi (see for instance Definition 1 Let $E\subset \mathbb R^n$. The perimeter of $E$ is defined as the infimum of \[ \liminf_{k\to \infty} {\rm Vol}^{n-1} (\partial U_k) \] taken over all sequences of open sets $U_k$ with smooth boundary such that $\lambda ((U_k\setminus E) \cup (E \setminus U_k)) \to 0$ (here $\lambda$ denotes the $n$-dimensional Lebesgue measure). This notion of perimeter coincides with the usual one if, for instance, $E$ is an open set with piecewise smooth boundary. Remark 2 Actually, in the original definition of Caccioppoli and De Giorgi the infimum is taken over sequences of polytopes, where ${\rm Vol}^{n-1} (\partial U_k)$ is defined as the sum of the $n-1$-dimensional volumes of the corresponding faces. The definition given above is however much more convenient, it is the most common in modern textbooks and it is equivalent to the original one of A set $E$ for which the perimeter is finite is called set of finite perimeter or Caccioppoli set. A fundamental characterization, due to De Giorgi is the following Theorem 3 A measurable set $E$ with $\lambda (E) < \infty$ has finite perimeter if and only if the indicator function \[ {\bf 1}_E (x):= \left\{\begin{array}{ll} 1\qquad & \mbox{if } x\in E\\ 0 & \ mbox{otherwise} \end{array} \right. \] is a function of bounded variation. See Function of bounded variation for the most important properties of the sets of finite perimeter. [Be] M. Berger, "Geometry" , 1–2 , Springer (1987) (Translated from French) [BZ] Yu.D. Burago, V.A. Zalgaller, "Geometric inequalities" , Springer (1988) (Translated from Russian) [Ca] R. Caccioppoli, "Misura e integrazione sugli insiemi dimensionalmente orientati I" Rend. Accad. Naz. Lincei Ser. 8 , 12 : 1 (1952) pp. 3–11 [Ca1] R. Caccioppoli, "Misura e integrazione sugli insiemi dimensionalmente orientati II" Rend. Accad. Naz. Lincei Ser. 8 , 12 : 2 (1952) pp. 137–146 [DG] E. de Giorgi, "Sulla proprietà isoperimetrica dell'ipersfera, nella classe degli insiemi aventi frontiera orientata di misura finita" Rend. Accad. Naz. Lincei Ser. 1 , 5 : 2 (1958) pp. 33–34 [EG] L.C. Evans, R.F. Gariepy, "Measure theory and fine properties of functions" Studies in Advanced Mathematics. CRC Press, Boca Raton, FL, 1992. MR1158660 Zbl 0804.2800 [Fe] H. Federer, "Geometric measure theory", Springer-Verlag (1979). MR0257325 Zbl 0874.49001 [Gi] E. Giusti, "Minimal surfaces and functions of bounded variation" , Birkhäuser (1984) [Si] L. Simon, "Lectures on geometric measure theory", Proceedings of the Centre for Mathematical Analysis, 3. Australian National University. Canberra (1983) MR0756417 Zbl 0546.49019 [Sp] M. Spivak, "Calculus on manifolds" , Benjamin/Cummings (1965) MR0209411 Zbl 0141.05403 How to Cite This Entry: Perimeter. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Perimeter&oldid=30848 This article was adapted from an original article by V.A. Zalgaller (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"url":"http://www.encyclopediaofmath.org/index.php/Perimeter","timestamp":"2014-04-18T18:12:26Z","content_type":null,"content_length":"25183","record_id":"<urn:uuid:bc758dbf-30f8-429f-b1f7-970ab6aa4954>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Test Statistics / Confidence Intervals - Lost : need a bit of help please February 6th 2011, 03:41 PM #1 Feb 2011 Problem 4: Susan Sound predicts that students will learn most effectively with a constant background sound, as opposed to no sound at all. She randomly divides sixteen students into two groups of eight. All students study a passage of text for 30 minutes. Those in group 1 study with background sound at a constant volume in the background. Those in group 2 study with no sound at all. After studying, all students take a 10 point multiple choice test over the material. Assuming equal population variance, do the data provide sufficient evidence to indicate that the mean scores differ between two groups with a 0.1 significance level? Their scores follow. For the constant sound group: 5, 9, 3, 7, 5, 6, 9, 8. And for the no sound group: 6, 4, 5, 3, 10, 6, 5, 3. A. Assume that the standard deviations are equal and find the test statistic for the test that the between mu1 and mu2 is zero. Susan Sound predicts that students will learn most effectively with a constant background sound, as opposed to no sound at all. She randomly divides sixteen students into two groups of eight. All students study a passage of text for 30 minutes. Those in group 1 study with background sound at a constant volume in the background. Those in group 2 study with no sound at all. After studying, all students take a 10 point multiple choice test over the material. Assuming equal population variance, do the data provide sufficient evidence to indicate that the mean scores differ between two groups with a 0.1 significance level? Their scores follow. For the constant sound group: 6, 3, 10, 9, 4, 5, 1, 6. And for the no sound group: 8, 3, 6, 4, 5, 3, 2, 7. A. Assume that the standard deviations are equal and find the upper limit of the appropriate confidence interval for the difference between mu1 and mu2. Problem 1 (population proportions): In a survey conducted by Wright State University , senior high school students were if they had ever used marijuana. We are interested in whether there is a difference in use by male and female students. If 460 out of 934 male, and 463 out of 997 female students reported that they have tired marijuana, find the lower bound on a 99% confidence interval for the difference in the population proportions. Problem 2: In a survey conducted by Wright State University , senior high school students were if they had ever used marijuana. We are interested in whether there is a difference in use by male and female students. If 387 out of 1,138 male, and 376 out of 996 female students reported that they have tired marijuana, find the z-test statistic that you would use to test whether H0: p1 = Can someone show me how to do some of these? I'm so completely lost. Problem 4: Susan Sound predicts that students will learn most effectively with a constant background sound, as opposed to no sound at all. She randomly divides sixteen students into two groups of eight. All students study a passage of text for 30 minutes. Those in group 1 study with background sound at a constant volume in the background. Those in group 2 study with no sound at all. After studying, all students take a 10 point multiple choice test over the material. Assuming equal population variance, do the data provide sufficient evidence to indicate that the mean scores differ between two groups with a 0.1 significance level? Their scores follow. For the constant sound group: 5, 9, 3, 7, 5, 6, 9, 8. And for the no sound group: 6, 4, 5, 3, 10, 6, 5, 3. A. Assume that the standard deviations are equal and find the test statistic for the test that the between mu1 and mu2 is zero. The confidence interval for this is $\displaystyle (\bar{x_1}-\bar{x_2})\pm t_{(\frac{0.1}{2},n_1+n_2-2)}\times s\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}$ What do you get? a question 4 Seems to me you are looking at a straightforward between-subjects t-test, with an alpha level of .01 You need to compute the difference between the two sample means and divide by the estimated standard error. As with other versions of standard error this is Sum of Squared divided by Degrees of Freedom, except that in this case you have two SS and two DF, so you need to pool the variance, i.e. (SS for sample 1 + SS for sample 2) divided by (DF for sample 1 + DF for sample 2). For the calculation you can forget about mu - mu, as under the null hypothesis this always sums to zero. Hope this helps, if you need further please feel free to reply. I'm new to this site so feel free to send me a personal email if that's easiest. February 6th 2011, 03:55 PM #2 February 8th 2011, 03:58 AM #3 Feb 2011
{"url":"http://mathhelpforum.com/advanced-statistics/170387-test-statistics-confidence-intervals-lost-need-bit-help-please.html","timestamp":"2014-04-18T02:12:39Z","content_type":null,"content_length":"41152","record_id":"<urn:uuid:934adce9-53c1-407f-b3ab-56e95a90c405>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Causal diagrams Causal diagrams (2008) Download Links author = {Sander Greenland}, title = {Causal diagrams}, year = {2008} Abstract: From their inception, causal systems models (more commonly known as structural-equations models) have been accompanied by graphical representations or path diagrams that provide compact summaries of qualitative assumptions made by the models. These diagrams can be reinterpreted as probability models, enabling use of graph theory in probabilistic inference, and allowing easy deduction of independence conditions implied by the assumptions. They can also be used as a formal tool for causal inference, such as predicting the effects of external interventions. Given that the diagram is correct, one can see whether the causal effects of interest (target effects, or causal estimands) can be estimated from available data, or what additional observations are needed to validly estimate those effects. One can also see how to represent the effects as familiar standardized effect measures. The present article gives an overview of: (1) components of causal graph theory; (2) probability interpretations of graphical models; and (3) methodologic implications of the causal and probability structures encoded in the graph, such as sources of bias and the data needed for their control. 7056 Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference - Pearl - 1988 134 Causal diagrams for epidemiologic research - Greenland, Pearl, et al. - 1999 63 Probabilistic Evaluation of Sequential Plans from Causal Models with Hidden Variables - Pearl, Robins - 1995 58 Causal inference from graphical models - Lauritzen - 2001 39 Causal diagrams for empirical research (with discussion - Pearl - 1995 35 An introduction to instrumental variables for epidemiologists - Greenland - 2000 35 An overview of relations among causal modelling methods - Greenland, Brumback - 2002 31 background knowledge in etiologic inference - Data 28 Quantifying biases in causal models: Classical confounding vs colliderstratification bias. Epidemiology - Greenland - 2003 23 Causal inference in statistics: an overview - Pearl 22 Humphreys: ‘Are there algorithms that discover causal structure?’, Synthese 121 - Freedman, Paul 13 Fallibility in estimating direct effects - Cole, Hernán - 2002 11 On the Impossibility of Inferring Causation from Association Without Background Knowledge, Unpublished manuscript, CMU Dept. of Statistics - Robins, Wasserman - 1996 9 Causal knowledge as a prerequisite of confounding evaluation: An application to birth defects epidemiology - Hernán, Hernández-Diaz, et al. - 2002 9 Statistics for Epidemiology - Jewell 8 Signed directed acyclic graphs for causal inference - VanderWeele, Robins - 2009 4 Causal Diagrams. Ch. 12 - Glymour, Greenland - 2008
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.153.6739","timestamp":"2014-04-16T08:24:07Z","content_type":null,"content_length":"23401","record_id":"<urn:uuid:5c379272-aa5c-48ec-b6fa-18544fffa03d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
OVI Traders Club There are seven factors that influence an option’s price: 1. The type of option (call or put) 2. The price of the underlying asset 3. The exercise price (or strike price) of the option 4. The expiration date 5. Volatility – Implied and Historical 6. Risk-free interest rate 7. Dividends and stock splits Vega starts with a V and stands for Volatility When you trade stocks, you must be aware ofvolatility. Volatility is a measure of how a security’s price ismoving. Volatility is recognized as a measure of risk. If a stockprice fluctuates all over the place in wild swings, then you’d find ituncomfortable because you wouldn’t have a clue what it was going to donext and it would feel risky. If a stock price remains static all thetime, then you might get a bit bored, but you wouldn’t have to reachfor the Pepto-Bismol! So, higher volatility is predicated by wider,faster price fluctuations. This translates into greater risk. Thegreater the volatility and risk, the more expensive options premiumsbecome. Volatility is calculated by measuring thestandard deviation of closing prices, then expressed as an annualizedpercentage figure. Volatility is not directional. If a stock ispriced at $100 and has volatility of 20%, then we expect the stock totrade in the range of $80-$120 for the next year. Vega Κ (also known as Kappa or Lambda) Vega measures an option’s sensitivity to the stock’s volatility. This volatility is known as Historical or Statistical volatility. Diagram: Volatility There are two categories of volatility: Historical and Implied. │Historical Volatility│is derived from the standard deviation of the underlying asset price movement over a known period of time. │ │Implied Volatility │is derived from the market price of the option itself. │ Remember that there are 7 variables that affect an option's premium. Six of these variables are known with certainty: 1. stock price 2. strike price 3. type of option 4. time to expiration 5. interest rates 6. dividends The final variable can be considered not to be known with certainty and is the expected volatility of the stock going forward. Implied Volatility There are several mathematical models for calculating the theoretical value of an option. In the main they manipulate the above seven variables to arrive at the correct theoretical option price. I stress the word theoretical because the theoretical price is not themarket price for the option. Sometimes the figures will be the same,sometimes they’ll be different, there’s no magic rule. • The thing to remember is that the Theoretical option price uses Historical Volatility (of the stock) to calculate the theoretical value of an option. So, all the seven factors go into the pot and we emerge with a theoretical option price. • The market price of an option premium has a volatility figure implied within it. We reverse the theoretical option price model in order to find out what figure for volatility was implied. So, with a real market option where we know what price it is tradingat, we mix the 6 factors (not volatility) into the pot with the actualmarket option price to work out what the Implied Volatility figure must be to create that market price. Diagram: Theoretical option price Diagram: Implied Volatility calculated from real option prices in the market This expected volatility figure is expressed as anannualized percentage and, working back from the option premium itself,is an "implied" figure, hence Implied Volatility. A reminder: Historical Volatility is the annualized standard deviation of past price movements of the stock. We can use Historical Volatility as a reference figure for calculating what the Fair Value of the option should be, given the stock's Historical Volatility. Inthe real world, option premiums frequently trade away from their fairvalues, adopting trading ranges driven more by demand and supply in thecut and thrust of market activity. │Volatility │Based on … │ │Historical:│Underlying stock volatilityover a period of time, for example the past 20 trading days. Expressedas a % reflecting the average annual standard deviation.│ │Implied: │The volatility derived from the option’s traded market price using an option pricing model. Expressed as a %. │ The mechanical pricing of options involves complex mathematicalformulae, which we don’t need to explore here. There are also a numberof different methodologies available for options pricing models, eachwith their associated merits. Typically I’ll be tacitly referring tothe Black-Scholes Options Pricing Model (for stocks and American-style(early exercise) options) and Black’s option Pricing Model (for futuresand European-style (no early exercise) options). What we need to remember is that there are sevenmajor influences for pricing an option (above). We also need toremember that Volatility is one of them. In the actual marketplace thevalue assigned to an option is determined by market forces. This cangive rise to inconsistency between the Fair Value of an option and theactual price of the option in the marketplace. The Fair Value of anoption is the mathematically based calculation of the option price,using Historical Volatility as the figure for volatility. The inconsistency emerges when the market pricediffers from the Fair Value, which is a common occurrence. Out of allseven factors that influence the option price, the only one which couldbe subject to any form of debate is Volatility. Let’s go through theseven factors again: │Factor influencing │Comment │ │option price │ │ │The type of option (call│This is fixed and cannot be changed, the option is either a call or a put │ │or put): │ │ │The price of the │No room for manoeuvre here because the option price is directly correlated with the underlying asset price │ │underlying asset: │ │ │Strike Price: │The strike price is fixed for each option │ │The expiration date: │The expiration date is fixed for each option │ │ │Although Historical Volatility itself is fixed(with respect to whatever time period we’re assigning to it, say, 20trading days), the choice of time frame can be somewhat │ │Volatility* – Implied │arbitrary anddoesn’t necessarily fit with the time left to the option’s expiration. │ │and Historic: │ │ │ │The discretion between the option’s market valueand its Fair Value is therefore interpreted as an anomaly of volatility(it simply cannot be any of the other six factors). │ │ │Implied Volatilityis a calculated figure arising from the actual market price itself. │ │Risk-free interest rate:│The risk-free rate is fixed │ │Dividends and stock │This is fixed │ │splits: │ │ *Volatility is always expressed as a percentage. Question: What does Historical Volatility mean? Answer: Historical Volatility is a reflection of how the underlying asset has moved in the past. Consider a stock priced at $41.41 on 1 May and with July $40 strike calls and puts priced at $9.30 and $7.40 respectively. │Option │Expiration│Option premium│Historical Volatility (23 days) │Implied Volatility│ │Call strike 40│July │$9.30 │196.74% │111% │ │Put strike 40 │July │$7.40 │196.74% │111% │ If the options were priced in the market according to the Historical Volatility, the call would be worth $15.41 and the put would be priced at $13.51.Are we getting a bargain here for our options?4 Well, that would dependon whether Implied Volatility is usually at a discount or premium toHistorical Volatility with this particular stock, as well as a numberof other factors. Each stock, each underlying asset will have differentcharacteristics with regard to the relationship between Implied andHistorical Volatility of their options chains. Just like you have tofamiliarize yourself with a stock’s personality, you also have tofamiliarized yourself with its option chain’s personality and thehistorical relationship between Historical and Implied Volatility. For now, just remember that Historical Volatilityis figure derived from the underlying asset price movement, and ImpliedVolatility is derived from the actual market premium of the optionitself. │Volatility │Based on │ │ │Underlying asset volatility over a period of time, for example, the past 20 trading days. │ │Historical/Statistical:│ │ │ │Expressed as a percentage reflecting the average annual range (i.e. standard deviation) │ │ │The volatility derived from the option’s traded market price using an option pricing model. │ │ │ │ │Implied: │Expressed as a percentage and based on the perception of where market will be in the future. │ │ │ │ │ │This is the volatility figure derived from the Black-Scholes Options Pricing Model. │ (Thehigher the Implied Volatility, the higher the option price will be andvice versa. If Implied Volatility is substantially lower thanHistorical Volatility, there could be an argument to suggest good valuein the option price itself.) In terms of trading, if you can recognize how Implied and HistoricalVolatility relate to each other with a specific stock, you can alsoidentify powerful ways with which to trade the options. The following table is a typical guide to how totrade the relationship between Implied and Historical Volatility, but Iurge you to exercise caution here. Typical does not necessarilymean it’s right! The key is what the relationship has been like in thepast and whether the present is significantly different. Volatilityswings are often likened to the ‘rubber band effect’ where if therubber band is stretched too tight in one direction or too loose in theother, it will generally revert back to its most natural position mostof the time. Therefore, if Implied Volatility is generally around 70%for a stock, but for a period of time it plummets to, say 30%, could itbe possible that the options prices might be good value? Or, using thesame example, say Implied Volatility rockets up to 110%, could theoptions perhaps be overvalued? This is how the rubber band effect isbest illustrated. Over the medium to long term, Implied Volatility doestend to veer towards the Historical Volatility figure, but this willdepend upon how consistent the Historical Volatility of the underlyingasset is. │Look for │Typical interpretation (not necessarily the right interpretation) │ │Implied > Historical:│Options prices could be overvalued as a result of higher implied volatility, therefore look to sell options premiums │ │Historical > Implied:│Options prices could be undervalued, indicatinggood buying opportunities, particularly if you anticipate underlyingasset price movement│ Diagram: Implied Volatility and the rubber band effect So, in simplistic terms some traders look to buyoptions with low Implied Volatility (because the option premium will below) compared with the Historical Volatility of the underlying stock. In this way, the perception is that the options are cheap orundervalued; therefore they must represent a good trade. As stated above, this is a dangerous assumptionto make. For a start, option premiums often have Implied Volatilitiesconsistently inconsistent with the Historical Volatility of theunderlying stock. Secondly, just because an option is cheap today,doesn’t mean it’ll be expensive tomorrow. So the rationale for thattactic is flawed. Of far more relevance would be to look at thehistory of Implied Volatility and see if current options prices aretrading away from their own averages of Implied Volatility. Similarly, some traders look to sell options withpremiums reflecting high Implied Volatility (because the option premiumwill be high) compared with the Historical Volatility of the stock. Again, this is a flawed methodology in the real world of trading, evenif the logic initially looks plausible. Vega characteristics Vega is identical and positive for (long) callsand puts. This reflects the fact that higher volatility increases theoption’s premium. When Vega is positive, it generally suggests thatincreasing volatility is helping our position. When Vega is negative,it generally suggests that increasing volatility is hurting ourposition. Let’s take a stock on 1 May. The stock price is$49.40 and we’ll look at the December 50 strike calls and puts, whichare priced at $7.50 and $6.90 respectively. Chart: Long Call Vega profile Chart: Long Put Vega profile As you can see, Vega is identical for both callsand puts. Notice how it increases around the money (strike price) andalso how Vega is vastly reduced where there is less time to expiration.This is because there is less time for increased volatility to make animpact on the Time Value component of the option’s value. Diagram: Vega summary
{"url":"http://www.privatetradersclub.com/post/2007/11/18/Volatility-and-Options.aspx","timestamp":"2014-04-19T02:40:56Z","content_type":null,"content_length":"59132","record_id":"<urn:uuid:4e8a25d3-2855-47ae-a818-8ad92dd296d6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: 116:Communicating Minds IV Harvey Friedman friedman at math.ohio-state.edu Fri Jan 4 02:02:01 EST 2002 Harvey M. Friedman January 4, 2002 This posting is based on two minds communicating about the concept of extensional classes. The basic idea is clean and close to the Russell paradox. Mind I forms a virtual class as usual, {x: phi(x)}, where x ranges over I's classes, the bound variables range over I's classes, and there are free variables - parameters - standing for some of I's classes. As usual, this may not form one of I's classes by the Russell paradox. I.e., phi(x) may be "x notin x". We want a sufficient condition for {x: phi(x)} to form a class of I. We will take this in the sense that we want there to be a class y of I such that for all classes x of I, x in y if and only if phi(x). Axiom scheme 6c asserts that the following condition is sufficient. We first reinterpret phi(x) by mind II, where the parameters are unchanged, but the bound variables now range over II's classes, and x ranges over II's classes. We write this interpretation by mind II as phi*(y). The sufficient condition is that if phi*(y) then y is one of mind I's classes. I.e., the sufficient condition is that we don't pick up new classes when reinterpreting by mind II. Axiom scheme 6c goes further and asserts that not only does {x: phi(x)} form a class of I, but it also forms a domain for full comprehension for mind I. Axiom scheme 6d also provides a domain for full comprehension. This time the hypothesis is the exact opposite: that phi*(y) holds of some y that is among II's classes but not among I's classes. The conclusion is that such a y can be found so that we can assert full comprehension for mind I. With 6c we get to ZFC, and with both 6c and 6d we get to measureable cardinals and beyond. NOTE: There are some related ways of getting to ZFC and also beyond measureables that are somewhat stronger. These use full comprehension for classes, whereas 6c and 6d just use full comprehension for certain classes. These are presented in sections 15 and 16. Finally, in sections 17 and 18 we give formulations which are single sorted and meant to be as simple as possible. The ones in 17 and 18 are somewhat further away from the two minds idea. 13. I's CLASSES, II's CLASSES, SPECIAL COMPREHENSION. We use variables x1,x2,... over I's classes. We use variables y1,y2,... over II's classes. The atomic formulas are of the following forms: i. u = v, where u,v are variables of either sort. ii. u in v, where u,v are variables of either sort. We close under atomic formulas under connectives and quantifiers in the usual way. We call this language L6. We use the standard 2 sorted predicate calculus with equality appropriate for L6. 6a. (therexists y1)(x1 = y1). 6b. y1 = y2 iff (forall y3)(y3 in y1 iff y3 in y2). 6c. (forall y1)(phi* implies (therexists x1)(x1 = y1)) implies (therexists xk+1)(forall x1)(x1 in xk+1 iff (phi and psi)), where phi is a formula of L6 with all variables among x1,x2,...,xk, phi* is the result of replacing each bound occurrence of xi by yi and each free occurrence of x1 by y1, and psi is a formula of L6 in which xk+1 is not free. THEOREM 13.1. The system 6a-6c is mutually interpretable with ZFC. 14. I's CLASSES, II's CLASSES, ADDITIONAL SPECIAL COMPREHENSION. We add an additional restricted comprehension axiom scheme. 6d. (therexists y1)(phi* and not(therexists x1)(x1 = y1)) implies (therexists y1)(phi* and not(therexists x1)(x1 = y1) and (therexists x1)(forall x2)(x2 in x1 iff (x2 in y1 and psi))), where phi is a formula of L6 with all variables among x1,x2,..., phi* is the result of replacing each bound occurrence of xi by yi and each free occurrence of x1 by y1, and psi is a formula of L6 in which x1 is not free. THEOREM 14.1. The system 6a-6d can be proved consistent in, and hence is interpretable in, ZFC + "there exists a nontrivial elementary embedding j:V(kappa + 1) into V(lambda + 1). The system 6a-6d proves the consistency of, and hence interprets, ZFC + "there ZFC + there exists arbitrarily large Woodin cardinals". 15. I's CLASSES, II's CLASSES, COMPREHENSION, COMMUNICATION. 6a. (therexists y1)(x1 = y1). 6b. y1 = y2 iff (forall y3)(y3 in y1 iff y3 in y2). 6e. (therexists y1)(forall x1)(x1 in y1). 6f. (therexists x1)(forall x2)(x2 in x1 iff (x2 in x3 and phi)), where phi is a formula of L6 in which x1 is not free. 6g. (therexists x1)(forall y1)(y1 in x1 iff (y1 in x3 and phi)), where phi is a formula of L6 in which x1 is not free. 6h. (therexists y1)(forall y2)(y2 in y1 iff (y2 in y3 and phi)), where phi is a formula of L6 in which y1 is not free. 6i. phi iff phi*, where phi is a formula of L6 whose variables are among x1,x2,..., and phi* is the result of replacing each bound occurrence of xi by yi. Note that 6g implies 6f in the presence of 6a. Also 6f implies 6h in the presence of 6i. THEOREM 15.1. The system 6a,6b,6e-6i and the system 6a,6b,6e,6f,6i are each mutually interpretable with ZFC. 16. I's CLASSES, II's CLASSES, COMPREHENSION, COMMUNICATION, DOMINANCE. We consider a dominance axiom scheme. 6j. (therexists y1)(phi and (forall x1)(x1 not= y1)) implies (therexists y1)(phi and (forall x1)(x1 not= y1) and (therexists x1)(y1 in x1)), where phi is a formula of L6 with all free variables among y1,x1,x2,..., and all bound variables among y1,y2,... . THEOREM 16.1. The system 6a,6b,6e-6j and the system 6a,6b,6e,6f,6i,6j have the properties in Theorem 14.1. 17. SINGLE SORTED SYSTEM CORRESPONDING TO ZFC. Here we use variables x1,x2,..., membership, and the constant symbol W. Call this single sorted language L7. 7a. (forall x3)(x3 in x1 iff x3 in x2) implies (x1 in x3 iff x2 in x3). 7b. (therexists x1)(forall x2)(x2 in x1 iff (x2 in x3 and phi)), where phi is a formula of L7 in which x1 is not free. 7c. (x1,...,xk in W and (therexists xk+1)(phi)) implies (therexists xk+1)(phi and x1 in W), where phi is a formula of L7 not mentioning W, and all free variables are among x1,...,xk+1. This is the system we discussed in #90:Two Universes, 6/23/00 1:34PM. THEOREM 17.1. The system 7a-7c and the system 7b-7c are mutually interpretable with ZFC. We strengthen 7c as follows. 7d. (x1,...,xk in W and (therexists xk+1 notin W)(phi)) implies ((therexists xk+1 in W)(phi) and (therexists xk+1 notin W)(phi and (therexists xk+2 in W)(xk+1 in xk+2))), where phi is a formula of L7 not mentioning W, and all free variables are among x1,...,xk. THEOREM 18.1. The system 7a,7b,7d has the properties in Theorem 14.1. I use http://www.mathpreprints.com/math/Preprint/show/ for manuscripts with proofs. Type Harvey Friedman in the window. This is the 115th in a series of self contained postings to FOM covering a wide range of topics in f.o.m. Previous ones counting from #100 are: 100:Boolean Relation Theory IV corrected 3/21/01 11:29AM 101:Turing Degrees/1 4/2/01 3:32AM 102: Turing Degrees/2 4/8/01 5:20PM 103:Hilbert's Program for Consistency Proofs/1 4/11/01 11:10AM 104:Turing Degrees/3 4/12/01 3:19PM 105:Turing Degrees/4 4/26/01 7:44PM 106.Degenerative Cloning 5/4/01 10:57AM 107:Automated Proof Checking 5/25/01 4:32AM 108:Finite Boolean Relation Theory 9/18/01 12:20PM 109:Natural Nonrecursive Sets 9/26/01 4:41PM 110:Communicating Minds I 12/19/01 1:27PM 111:Communicating Minds II 12/22/01 8:28AM 112:Communicating MInds III 12/23/01 8:11PM 113:Coloring Integers 12/31/01 12:42PM 114:Borel Functions on HC 1/1/02 1:38PM 115:Aspects of Coloring Integers More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-January/005123.html","timestamp":"2014-04-19T17:05:21Z","content_type":null,"content_length":"9902","record_id":"<urn:uuid:45c5b280-f628-4e8f-bb90-6310b1b32f09>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Rocket Math Free Downloads To fully understand how to use each of our products click on and read the complete directions below (or download them if you wish). • Observation form: Observing and evaluating Rocket Math during practice--a must for principals who want Rocket Math to run effectively in their schools. • Oops! Multiplication Answer Keys had some errors until we caught them in March of 2014. Here's the Set C Practice Answer Key corrected page you can download and print out. Set D Practice answer key's corrected page is here. Set Y corrected page is here. • NEW!! Rocket Charts for Rocket Writing for Numerals. By customer request we added special Rocket Charts to this program. • Handouts for Training DVD. Here are the handouts you will want to use to follow along with the training DVD to learn how to use the original, worksheet-based Rocket Math curriculum.
{"url":"http://www.rocketmath.com/p/free-downloads.html","timestamp":"2014-04-19T07:13:38Z","content_type":null,"content_length":"20904","record_id":"<urn:uuid:18b5fb61-1b81-4796-99d5-420f5c23130d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Ancient Babylonia - Mathematics Babylonians excelled. Theoretical mathematics intrigued them and a large number of texts involving geometry and algebra of a quite sophisticated sort has been preserved. The theorems of Euclid and Pythagoras were already known in the Old Babylonian period. As their civilization developed the Sumerians developed the need for a numerical system. They needed it for measurements and business transactions and for all the other requirements a civilized society has. From these beginnings Babylonian mathematics arose and was soon highly developed. The Sumerians and thus the Babylonians were one of the first peoples to have some fairly complex mathematics, some of which were not learned in parts of the world until recent centuries. Babylonian influence can still be clearly seen in such things as the measurement of time and degrees of angles. The Babylonian numerical system was sexagesimal i.e. base sixty. This is why there are 60 minutes in an hour and 360 degrees in a circle. Strangely the Babylonians by the time of Hammurapi also had symbols for ten, one hundred and one thousand making their system part decimal. The Babylonians were very advanced for their time. They knew about square roots and completing the square and they knew the value of p quite accurately. Ancient Babylonia Return to Bible History Online
{"url":"http://www.bible-history.com/babylonia/BabyloniaMathematics.htm","timestamp":"2014-04-21T09:37:35Z","content_type":null,"content_length":"5762","record_id":"<urn:uuid:cc64caf1-d280-452a-b6ad-afb2f3ee13b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Taboo I participated in a great twitter conversation the other day where we brainstormed a few strategies to help make our courses more accessible to English Language Learners (we used the hashtag # ELLmath, the approximate transcript is here if you are interested). It was a great start to what needs to be a running dialogue for me, as I teach almost 100% students for whom English is not their first language. If anyone has any ideas about #ELLmath, I would love to hear them in the comments. The conversation reminded me of a little idea I had last year, playing the game Math Taboo to help students expand “definitions” to actual understandings of concepts. Now, I’m sure other people do this, and a quick Google search leads me to believe it’s not all that novel, but while discussing # ELLmath, it struck me as a particularly good exercise for ELL students. The idea of the real game is to get your partner to guess a word by describing without using any of the five taboo words, which are usually the first words that anyone would go to in a description. So the obvious math equivalent is to pick a term that you are throwing around in your class and get students to describe it without using their go-to math descriptors. We played during beginning-of-the-year-review as a class, with the word to guess already known to everyone, and I gave students a chance to take a stab at verbalizing a definition without using the taboo words, one at a time until we got an acceptable description. However, this could easily be adapted to be a much more interactive activity (though its creation might take just a bit of time). So why play this? Whenever working one on one with students, I found myself trying to diagnose why they were not understanding a problem. I would ask them things like, “Well, what is a derivative anyway?” and they would often answer with something that I found acceptable, but perhaps could have been just something that they had figured out should be said as the “correct” answer. Even if they weren’t saying book definitions (which would actually be easier to deal with), many times they were using my informal definitions – words that they had internalized about the concept that might not actually display a deep understanding, but that I had been mistakenly accepting as evidence of learning. Definitions are important, but assuming that those are indicators of deep understanding is, of course, very problematic, no matter where those definitions come from. So, this Taboo game serves a two-fold purpose: learning for the students (by forcing them to think deeply about a mathematical concept; by having them trade in math jargon for conceptual understanding; and by hearing classmates describe something in more accessible vernacular) and learning for me (by seeing how well students actually understand a concept; and by seeing what language students use to talk math in the hopes that my mathematical narrative can better reflect theirs in the future). Alternative game: In how few words can you express this definition? I have never tried this game I’m about to describe, but the idea is to start out with a long definition from a math textbook and see how few words you can use to express the same idea. Delving into the Twitter world this summer I have realized how wordy I am, and the process of editing my tweets down has made me realize how many words I use that are unnecessary. Twitter forces me to think about what is the core of my idea, which led me to think up this exercise. This could be done competitively (give groups 5 minutes to brainstorm), or you could do it countdown style, trying to lower the number of words by one each time. This could get students to really consider what is important about a mathematical concept and to get them to realize that the thing itself is more important the words you use to express it. 13 thoughts on “Math Taboo” 1. For what it’s worth, I’ve never played this myself, but I’ve always thought that if I did, it would be a worthwhile activity for the kids to make the cards too. You give out the vocabulary and the students have to come up with the five taboo words. 2. One of my co-workers allowed students a single text message as a reference sheet on a test. I really like the emphasis on concise definitions that they write themselves. 3. What an interesting idea! Thanks for sharing. I think it would help ELL and Special Education students especially, but I’m sure the benefits could be seen across all classrooms to help students “talk math” and/or describe the concepts in their own words. 4. Good stuff. The second game you describe is “25 Words Or Less” (less popular than Taboo). I agree that kids should make the cards. I was always partial to Pyramid for this. “Things a Rhombus Would Say”. Slope taboo card needs the word CHANGE! 5. Hey~ I thought I was brilliant in inventing this game but I guess I just reinvented the wheel. =P I love having my students make the cards, like Stacy had mentioned. I think it helped my kids more than the game itself because they got pretty clever by the end. If you’re still looking for ELL vocab stuff, I’m trying something new this year that I’m LOVING and it’s super simple is my favorite part. I taught at a school that was 60%+ ELL for six years in LA. =) 6. Fantastic idea and thanks for sharing. I think any activity that combines literacy and numeracy makes for a richer learning experience, ELL or otherwise. Thanks,too, for mentioning the #ellmath tag -will follow that. I find that with maths strugglers – as with new language learners – decoding a lump of text can be tricky (think word problems). So, I devised this method to help students and called it GGSC. I know this has helped some students. 7. Love this idea! Thanks for sharing! (Found this through Megan Golding’s blog). I shared it on my facebook page just now. If you get a chance, I would love for you to check out my blog! I added Teaching Thursday’s dedicated to teaching tips, activities, etc. Have a great day! 8. so many great #ELLmath ideas! thank you all. 9. thank you so very, very much! I appreciate your sharing. I am a vocabulary finatic in algebra. You are most wonderful
{"url":"http://samjshah.com/2011/08/13/math-taboo/","timestamp":"2014-04-16T16:26:06Z","content_type":null,"content_length":"94453","record_id":"<urn:uuid:d47d10b3-4fdc-47b9-810e-693ab89ce926>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
log rule for integration confusion July 15th 2011, 08:17 AM #1 Oct 2009 log rule for integration confusion If I have a function: f(x) = 1 / (x+1), and I want to find it's indefinite integral, I can apply the log rule and get: F(x) = ln |x+1| + C If, however, I multiply both the numerator and the denominator by 2, before integrating, I get: f(x) = 1 / (x+1) = 2 / 2x + 2 F(x) = ln |2x+2| + C Clearly, f(x) are equal in both cases, but I can't see how ln |2x+2| equals ln |x+1| Any explanation? Thanks Re: log rule for integration confusion If I have a function: f(x) = 1 / (x+1), and I want to find it's indefinite integral, I can apply the log rule and get: F(x) = ln |x+1| + C If, however, I multiply both the numerator and the denominator by 2, before integrating, I get: f(x) = 1 / (x+1) = 2 / 2x + 2 F(x) = ln |2x+2| + C Clearly, f(x) are equal in both cases, but I can't see how ln |2x+2| equals ln |x+1| Any explanation? Thanks Let $u = 2x+2$ therefore $du = 2 dx \leftrightarrow dx = \dfrac{du}{2}$ and $\int \dfrac{2}{2x+2} dx = \int \left(\dfrac{2}{u} \cdot \dfrac{du}{2}\right)$ We can cancel those two's to give $\int \dfrac{du}{u}$ and continue as normal from there. Last edited by e^(i*pi); July 15th 2011 at 08:25 AM. Reason: fixing missing integral sign Re: log rule for integration confusion If you write: $\ln|2x+2|+C=\ln|2(x+1)|+C=\ln(2)+\ln|x+1|+C$ Because $\ln(2)$ is also an constant number you can say: with $C'$ a new constant integration term. Re: log rule for integration confusion Your mistake in using the same constant, i.e. they are different constants. Recall that $\ln(|2x+2|)=\ln(|x+1|)+ln(2)$. If $C$is the constant in the first then $C+\ln(2)$ is in the second. Re: log rule for integration confusion Thanks! Now I get it. July 15th 2011, 08:24 AM #2 July 15th 2011, 08:24 AM #3 July 15th 2011, 08:26 AM #4 July 15th 2011, 08:51 AM #5 Oct 2009
{"url":"http://mathhelpforum.com/calculus/184604-log-rule-integration-confusion.html","timestamp":"2014-04-16T20:17:34Z","content_type":null,"content_length":"46385","record_id":"<urn:uuid:c80e8290-7181-4cee-b34f-ded65b2d9aa7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Public Library Websites from Piper Mountain Webs General Math Help Sites websites Return to Index This is a really good site, because it lets you type in a math problem that you're working on, and then it gives you a step-by-step guide on how to solve it. The site covers general math, algebra, geometry, calculus, and other stuff. A very comprehensive site - there is help with basic math, algebra, geometry, trigonometry, calculus, and more. Again, though, be careful to follow links that are part of the site, and not other ads. Stay to the "Select Subject" menu on the left. A good site with some basic math reference tables. One really cool feature of this site is a math message board where you can post your specific math question, and get answers from other students and from experts.
{"url":"http://www.bournelibrary.org/jbl/teenhomework2.asp?subcategory=General%20Math%20Help%20Sites","timestamp":"2014-04-21T09:50:18Z","content_type":null,"content_length":"6257","record_id":"<urn:uuid:2f156f42-8fc9-4f83-9a25-39fcbb4c88e7>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical proofs? Re: Mathematical proofs? Let the numbers be and for . Then we have that for . We see that all of the numbers are the same. It is now obvious that they must all be zero. Last edited by anonimnystefy (2013-11-25 05:28:02) The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=291922","timestamp":"2014-04-17T01:02:15Z","content_type":null,"content_length":"19715","record_id":"<urn:uuid:41223daa-940e-4319-a98c-5f2e716a06ea>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Uncertainty aversion and equilibrium existence in games with incomplete information Azrieli, Yaron and Teper, Roee (2009): Uncertainty aversion and equilibrium existence in games with incomplete information. This is the latest version of this item. Download (207Kb) | Preview We consider games with incomplete information a la Harsanyi, where the payoff of a player depends on an unknown state of nature as well as on the profile of chosen actions. As opposed to the standard model, players' preferences over state--contingent utility vectors are represented by arbitrary functionals. The definitions of Nash and Bayes equilibria naturally extend to this generalized setting. We characterize equilibrium existence in terms of the preferences of the participating players. It turns out that, given continuity and monotonicity of the preferences, equilibrium exists in every game if and only if all players are averse to uncertainty (i.e., all the functionals are quasi--concave). We further show that if the functionals are either homogeneous or translation invariant then equilibrium existence is equivalent to concavity of the functionals. Item Type: MPRA Paper Original Uncertainty aversion and equilibrium existence in games with incomplete information Language: English Keywords: Games with incomplete information, equilibrium existence, uncertainty aversion, convex preferences. Subjects: D - Microeconomics > D8 - Information, Knowledge, and Uncertainty > D81 - Criteria for Decision-Making under Risk and Uncertainty C - Mathematical and Quantitative Methods > C7 - Game Theory and Bargaining Theory > C72 - Noncooperative Games Item ID: 17617 Depositing Yaron Azrieli Date 01. Oct 2009 18:21 Last 21. Feb 2013 02:30 [1] P. Artzner, F. Delbaen, J.M. Eber, D. Heath, Coherent measures of risk, Mathematical Finance 9 (1999), 203-228. [2] R. J. Aumann, Correlated equilibrium as an expression of Bayesian rationality, Econometrica 55 (1987), 1-18. [3] S. Bade, Ambiguous act equilibria, (2008) Manuscript. [4] S. Bade, Electoral competition with uncertainty averse parties, (2008) Manuscript. [5] S. Bose, E. Ozdenoren, A. Pape, Optimal auctions with ambiguity, Theoretical Economics 1 (2006), 411-438. [6] S. Cerreia, F. Maccheroni, M. Marinacci, L. Montrucchio, Uncertainty averse preferences, (2008) Manuscript. [7] V. Crawford, Equilibrium without independence, Journal of Economic Theorey, 50 (1990), 127-154. [8] F. Delbaen, Coherent risk measures on general probability spaces, in: K. Sandmann et al. (Eds.), Advances in Finance and Stochastics (Essays in Honour of Dieter Sondermann), Springer, pp1-37. [9] D. Ellsberg, Risk, ambiguity, and the Savage axioms, Quarterly Journal of Economics 75 (1961), 643-669. [10] L.G. Epstein, A definition of uncertainty aversion, Review of Economic Stusied 66 (1999), 579-608. [11] L.G. Epstein, T. Wang, Beliefs about beliefs without probabilities, Econometrica 64 (1996), 1343- 1373. [12] H. Fäollmer, A. Scheid, Convex measures of risk and trading constraints, Finance and Stochastics 6 (2002), 429-447. [13] P. Ghirardato, M. Marinacci, Ambiguity made precise: A comparative foundation, Journal of Economic Theory 102 (2002), 251-289. [14] I. Gilboa, D. Schmeidler, Maxmin expected utility with a non-unique prior, Journal of Mathemat- ical Economics 18 (1989), 141-153. [15] E. Hanany, P. Klibanoff, Updating ambiguity averse preferences, (2007) Manuscript. [16] J.C. Harsanyi, Games with incomplete information played by Bayesian players I-III, Management Science 14 (1967), 159-182, 320-334, 486-502. [17] A. Kajii, T. Ui, Incomplete information games with multiple priors, Japanese Economic Review 56 (2005), 332-351. [18] E. Karni, State{dependent utility, in: P. Anand, P. K. Pattanaik, C. Puppe (Eds.), Handbook of rational and social choice, Oxford University Press, Oxford, Forthcoming. [19] D. Levin, E. Ozdenoren, Auctions with a set of priors: uncertain number of bidders, Journal of Economic Theory 118 (2004), 229-251. [20] K.C. Lo, Sealed bid auctions with uncertainty averse bidders, Economic Theory 12 (1998), 1-20. [21] F. Maccheroni, M. Marinacci, A. Rustichini, Ambiguity aversion, robustness, and the variational representation of preferences, Econometrica 74 (2006), 1447-1498. [22] A. Mas-Collel, The recoverability of consumers' preferences from market demand behavior, Econo- metrica, 45 (1977), 1409-1430. [23] P.R. Milgrom, R.J. Weber, Distributional strategies in games with incomplete information, Math- ematics of Operations Research 10 (1985), 619-632. [24] S. Mukerji, J. M. Tallon, An overview of economic applications of David Schmeidler's models of decision making under uncertainty, in: I. Gilboa (Editor), Uncertainty in Economic Theory: A collection of essays in honor of David Schmeidler's 65 th birthday, Routledge Publishers, London, 2004, pp283-302. [25] T. Rader, Theory of Microeconomics, Academic Press, New York, 1972. [26] A. Salo, M. Weber, Ambiguity aversion in first-price sealed-bid auctions, Journal of Risk and Uncertainty 11 (1995), 123-137. [27] D. Schmeidler, Subjective probability and expected utility without additivity, Econometrica 57 (1989), 571-587. [28] W.R. Shephard, Theory of Cost and Production Functions, Princton University Press, Princeton, NJ., 1970. URI: http://mpra.ub.uni-muenchen.de/id/eprint/17617 Available Versions of this Item
{"url":"http://mpra.ub.uni-muenchen.de/17617/","timestamp":"2014-04-17T09:37:02Z","content_type":null,"content_length":"26082","record_id":"<urn:uuid:0630f1a1-232a-4b19-9a84-dd57f0898c39>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Why $O(4n,\mathbb{C})$ (orthogonal group) acts transitively on the space of maximal isotropics of $V\bigotimes \mathbb{C}$ ? up vote 1 down vote favorite We say $L< (V\oplus V^{*})\bigotimes \mathbb{C}$ is isotropic when $< X,Y>=0$ for all $X,Y\in L$ Why $O(4n,\mathbb{C})$ (orthogonal group) acts transitively on the space of maximal isotropics of $V\bigotimes \mathbb{C}$ ? (here $V$ is a vector space of finite dimention $2n$) geometry gr.group-theory spin-geometry mp.mathematical-physics You need to explain what you mean by $V$. Is it $V = \mathbb{R}^{4n}$? – Robert Bryant Dec 6 '12 at 13:17 1 I'm afraid that just saying that $V$ is a vector space of finite dimension doesn't give us enough information. You have to tell us how you think $\mathrm{O}(4n,\mathbb{C})$ is acting on $V\otimes\ mathbb{C}$. Obviously, this transitivity won't hold for any finite dimensional $V$ and any action of $\mathrm{O}(4n,\mathbb{C})$ on $V\otimes\mathbb{C}$. What are you not telling us? – Robert Bryant Dec 6 '12 at 13:48 I revised my question – Hassan Jolany Dec 6 '12 at 15:14 Would your question be more simply phrased as "How can one prove that $\mathrm{O}(4n,\mathbb{C})$ acts transitively on the maximal isotropic subspaces of $\mathbb{C}^{4n}$?" What I'm getting at is 1 whether it's important for you that your complex vector space be written as $V\otimes\mathbb{C}$ for some (I presume) real vector space $V$ of dimension $2n$ over $\mathbb{R}$. If the 'real structure' is not important to you, then the above version of your question would be adequate. (By the way, in this case, you'd have transitivity for all $n$, so I'm not sure why you are assuming $4n$.) – Robert Bryant Dec 6 '12 at 16:33 In fact maximal isotropic $L$ corresponds to a generalized complex structure on $V$ and so the dimension of $V$ have to be even, – Hassan Jolany Dec 7 '12 at 11:14 add comment 1 Answer active oldest votes I think a more general statement can be proved along the following line: Let $V= \mathbb R^n$. Then on $W:= V\oplus V^*$ the symmetric bilinear form $((v,v^*)|(w,w^*)) = \langle w^*,v\rangle + \langle v^*, w\rangle$ has signature $(n,n)$. Now we are quite up vote 2 similar to a symplectic vector space. Given isotropic $L$, choose a basis $w_1,\dots w_n$ of $L$. Then $(w_i|w_j)=0$ for all $i,j$. We can extend this to a basis $w_1,\dots,w_n, w^1,\ down vote dots,w^n$ of $W$ such that $(w_i,w_j)=0$, $(w^i,w^j)=0$ for all $i,j$ and $(w_i,w^j)=\delta_i^j$. Then $w^1,\dots, w^n$ spans a complementary isotropic subspace. The group $O(n,n,\mathbb accepted R)$ acts transitively on the set of all such bases. Thus it acts transitively on the set of pairs of complementary isotropic subspaces. Thus also transitively on the set of isotropic subspaces. Now $O(2n,\mathbb C)$ is the complexification of $O(n,n,\mathbb R)$. 3 And this idea can be extended to give a proof of Witt's Theorem, that in a non-degenerate "formed space" (quadratic form, hermitian form, symplectic, etc.) an isometry from one subspace to another extends to an isometry of the whole space. – paul garrett Dec 6 '12 at 17:41 Dear Paul, if possible for you please more explain your second comment – Hassan Jolany Dec 8 '12 at 1:42 add comment Not the answer you're looking for? Browse other questions tagged geometry gr.group-theory spin-geometry mp.mathematical-physics or ask your own question.
{"url":"http://mathoverflow.net/questions/115607/why-o4n-mathbbc-orthogonal-group-acts-transitively-on-the-space-of-maxi?sort=newest","timestamp":"2014-04-18T10:55:50Z","content_type":null,"content_length":"61615","record_id":"<urn:uuid:3e69bd1a-849e-45e8-9188-2c93b9f2cbae>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
Subect Help: Math - Cartoon Dolls Community - Doll forums and doll maker discussion boards iluvdolphin, you had the first one correct As for the second, you made a minor mistake. If 18 is in front of the x, it is a multiple of a number. Notice how the last was 6, which is a multiple of 18. You divided each number inside by 6, but you kept the 18 for some reason even though its a multiple So, while you had the inside correct, the 18 should be 6. To check this, you would need to try to redistribute it and see if it equals the original problem. When doing problems like this, you should always see if the number without the letter could be divided into the one in front of the letter. If it can, you can reduce the numbers in antimaie, do you remember how those formulas are factored? In the end, you are supposed to get something like (#x+/-#)(#x+/-#). I'll take your first problem and show you how. y^2 - 6y - 55 So, looking at this problem, you know absolutely that the first letters of each parenthese does not have a number in front of it because there are none proceeding the ys, so you have part of your problem (y+/-#)(y+/-#). Next, we look at the signs. Notice that they are -s, which means that there is a + in one and a - in the other parenthese because when you distribute the equation, a positive number times a negative number produces negative numbers. So, we know that it must now be (y+#)(y-#). We are left now figuring out what numbers not only multiply to equal -55, but also will equal -6 when added together. You'll want to list out all of the multiples. -11 x 5 11 x -5 -1 x 55 1 x -55 Those are all you can get with whole numbers. Next, you determine if any will equal -6. So, if you look below, I have added them together now. -11 + 5 = -6 11 + -5 = 6 -1 + 55 = 54 1 + -55 = -54 So, the first pair is the only option, so your factored equation would be (y+5)(y-11). This would change a bit will numbers in front of the "y" value. In your second problem, 6t^2 - t - 5, you begin differently, first figuring out what values will multiply together to get 6. That is 6 and 1, 2 and 3, the best option would be the 6 and 1 because you need to determine what numbers will add/subtract to equal -1(-6+5=-1). So, you begin with (6t+/-#)(1t+/-#). Next, you'll want to figure out what'll equal 5, which is simply 1 x 5. The tough part is then figuring out which one each number belongs in. If 6 is multiplied by 5, you will end up with -29, so you know now that 6 and 5 have to be in the same parenthese so they aren't multiplied together so your factoring is now at (6t+/-5)(1t+/-1). The final step is determining the signs. To get a -1, the 6 would have to be multipled with a negative number since it is larger, so you now have (6t+5)(1t-1). Just redistribute to check. Grim, first off, with exponents(which is what I'm assuming you mean by "to the power of 4"), it is not the lower number times the exponent(what you did was 6 x 4 = 24), it is the lower number multipled by itself the exponent number's amount of times(in this case, 4, so it'd be 6 x 6 x 6 x6 = 1296). So, your equation would then be 1296 + 3(t-6) = 3t -7 not what you listed. Your second issue is when you distribute a number through parentheses(in this case 3(t-6)), you need to multiply everything incide the parentheses by the outside number, in this case positive three, so your problem would then be: Strangely enough, I'm looking at that equation, and it wouldn't work because the ts would cancel each other out. To explain, you don't subtract the 3 from the t since they are being multipled together, so you would divide each side by three. Once that is done, you would want to isolate the t, but since they are equal to each other, they would cancel each other out(t-t=0). Are you sure that was written down right? That doesn't make sense and there is something seriously wrong with the equation since my only other thought is you are supposed to factor the left, but there is no variable with the exponent part. Last edited by Silver_Wolf_Kitty : 09-22-2009 at 12:15 PM.
{"url":"http://www.thedollpalace.com/forum/homework-help/15741-subect-help-math.html?mode=hybrid","timestamp":"2014-04-16T08:00:50Z","content_type":null,"content_length":"97319","record_id":"<urn:uuid:1e43f0a8-a541-44c6-84e7-574ad1327e2a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Hello Hi Howardroark; Thanks, I did not realize that. She never came back so I had no chance to ask. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=169332","timestamp":"2014-04-17T07:03:21Z","content_type":null,"content_length":"27733","record_id":"<urn:uuid:935fe38f-3208-4d49-b0c4-50457aa0fc47>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Teachers need to be patient. Really patient. SO INCREDIBLY PATIENT. Because here’s the thing. Some kids will get stuff the first time. And then there are some kids who will kind of get it the first time, but then they will really get it the second time. And then there are some kids who need to see something 3, 4, 5 times before they get it. All of that takes patience. But then there are the kids who still don’t get it after six billion examples, three billion leading questions, nineteen thousand either/or choices, and 392 erasures. For these kids, teachers need the patience of a…searching for a good metaphor…they just need to be really patient. Like a saint or something. I am not a saint. Just ask my parents. So sometimes I get frustrated with making the same mistake over and over and still not knowing it, and then I let that frustration show on my face, and then the kid can tell I’m frustrated, and since their fragile self-confidence was halfway hanging on the fact that I believed they could do it, they crumple. And then I’m the worst person in the world, because what kind of awful jerkface gets upset with a kid because they don’t understand? I have a student who cannot multiply. It’s bizarre, because sometimes she can miraculously do it, but we have been working on things like 53 x 12 since literally November, me reteaching her multiplication every time it comes up (which, as you might imagine, is often). We worked together, one on one, half an hour a day for three months. I still have to reteach every time it comes up so that we don’t get things like 7 x 14 = 38. (For extra bonus points, can you see the mistake she always makes? Sometimes she gets 92, which is a different mistake. Write it up and down and it might jump out at you…) And it doesn’t help that this particular kid has such language issues, she just doesn’t understand what I’m saying half the time. So then we get situations where she does one problem wrong three times, we work like heck and she knows she’s getting it wrong and she knows I just explained how to do it right, but she can’t for the life of her understand what I’m saying. She’s upset because I’m using this awful excessively patient tone, which exacerbates the language issue because she’s flustered and desperate to do it right, and she still can’t do the problem. Sometimes she cries. And then I feel like crap. I have another student who cannot remember pretty much anything, especially multiplication facts. Here’s a recap of our conversation with flash cards today: Me: “What’s 7 x 7?” Her: “Uh….” (Ten seconds later.) Me: “It’s 49. What’s 7 x 7?” Her: “49.” Me: “Great! What’s 8 x 8?” Her: “Uh…” Me: “8 x 8 fell on the floor, picked itself up it was…” Her: “64!” Me: “Awesome! What’s 7 x 7 again?” Her: “Uh…” &*(&#@!&$!!!!!! Blach! It’s so incredibly frustrating. For her and for me. Anyone got ideas for memorizing multiplication facts for kids who can’t remember things reliably? I’m thinking taping facts to her desk until she memorizes one, then switching it out. Or starting everything I say to her with a math fact, the same one all day. I’m hoping desperately that one billion more repetitions will do the trick, because I’m kind of at a loss as to what else to do. I’m tired of feeling like a jerk when they don’t get it but although my patience is less than saintly my determination is terrier-like, and I’m not ready to give up yet. Until then, I guess I take some deep breaths. This is no quick fix, but have you tried the multiplication table game at freerice.com (perhaps on your TFA iPad)? It gets my kids doing times table drilling when it seems like nothing else will. Good call on the iPad games–we started today and made a Top Score chart for her desk. We’ll see how it goes! Do these two students have learning disabilities? Their profiles sound very much like the profiles of students who have learning disabilities. It is extremely common for students with LD to remember something one day, not remember it the next day, then remember it again an hour after an exam. I’ve seen LD students remember something on page one of a test and forget it by page three. I am preparing a referral packet for one, but unfortunately she was so good at faking what everyone else is doing, and her performance is so randomly right while being mostly wrong that it took me a while to realize how severe her math problems were. So we’re working on it. The other just has immense memory issues, which are legitimate problems but hard to get approval for a referral. Maybe I should try anyway? I think it’s worth it to try with both if you think there might be legitimate issues. The worst that can happen is that one or both get denied services. Even then, at least it’s documented that they had previously been referred, which may be important in the future if other teachers also think there might be a reason for referral – schools usually do keep that in mind when considering whether to test a student. As for the student being good at faking what other people are doing – time to get on your A+++ game for documentation. Photocopy her work whenever you can, that way you can show how, even if she happened to make a few mistakes that canceled one another out to get the right answer, she missed the important concepts. Also, be sure to document any strategies you have tried and if they were at all successful, and if so, to what degree. Especially with the RTI model, it is important to show that other interventions are not enough. Good luck!! • You stole my thunder! I was plnnaing on writing about that trick for tipping. I guess I still could. I typically tip 15% so it makes that math a little easier. I just take 10% of the bill and then add half of that to the 10%. So, with a $80 bill, I would take 10% and get $8 and then half of the $8 is $4. Add those two together to get your $12 tip. Pretty simple! Have you tried raising your expectations? Care to elaborate? I don’t think I see the connection you do. Oh, I was just joking — giving what I perceived to be the TFA staff response. I really liked this post because it does show how tough teaching is. Kids often don’t ‘get it’ despite the fact that the teacher has seemingly done everything possible. This is a great honest post you wrote — keep up the good work. OK, that’s a relief. Half of my brain thought your comment was so useless not even my MTLD would try to offer that as advice (to be fair, I have never had TFA staff spout jargon at me when I am seriously asking for help). The other half of my brain started to do some serious soul-searching about whether I really DO have high expectations, until I decided that merely expecting them to learn things is useless unless I expect myself to find new ways to teach them. Which is probably a good conversation to have with oneself every now and again, but thanks for the clarification! Not saying your kids don’t need intervention or that memorizing multiplication tables isn’t important, or that these aren’t symptoms of a larger cognitive issue. But I’ve honestly never been able to remember my multiplication and division tables beyond the 6s (and the 10s, of course), and I aced high school and college math all the way through calculus. Thank goodness adults are allowed to use calculators. I’m not sure what it is about 7 through 9 that keeps them from sticking in my memory, but I honestly find it necessary to do, say, 6*4*2 in place of 6*8 when mental math is required. Oh, absolutely – if this problem is localized to multiplication, or even just math, then I wouldn’t be concerned about a learning disability or anything like that. Considering the fact that the first student has “such language issues” (unless by language issues, you mean English is not the first language, which is different from what I assumed) and the second student “cannot remember pretty much anything,” I made the possibly false assumption that these issues are presenting themselves across a variety of settings and subjects. No, out here language issues means receptive and expressive language impairments, which are a legacy of the boarding schools where students were beaten for speaking their own language but not really taught English. A whole generation of people has poor language skills because of it, and because language is something you learn from your parents, especially when you live in rural isolation, it only continues down the generations to my kids. The problems, both language and memory (separate issue), present themselves across nearly every setting and subject. If they could do some multiplication in their heads and use it to circumvent fact memorization, I’d be ecstatic because that’s better conceptual understanding anyway.
{"url":"http://eminnm.teachforus.org/2012/04/30/patience/","timestamp":"2014-04-17T06:42:08Z","content_type":null,"content_length":"48733","record_id":"<urn:uuid:ebc15fb4-8da7-4959-b268-d71c2dc76e9f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: arXiv:math.CA/0609827v128Sep2006 September 28, 2006 I. ASSANI Abstract. Consider v a Lipschitz unit vector field on Rn and K its Lipschitz constant. We show that the maps Ss : Ss(X) = X + sv(X) are invertible for 0 |s| < 1/K and define nonsingular point transformations. We use these properties to prove first the differentiation in Lp norm for 1 p < . Then we show the existence of a universal set of values s [-1/2K, 1/2K] of measure 1/K for which the Lipschitz unit vector fields vS-1 satisfy Zygmund's conjecture for all functions in Lp ) and for each p, 1 p < . 1. Introduction Lebesgue differentiation theorem states that given a function f L1(R) the averages
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/248/2625545.html","timestamp":"2014-04-17T19:23:39Z","content_type":null,"content_length":"7804","record_id":"<urn:uuid:49296936-2d07-48a4-a67c-38d5e9991698>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Interplay of Biomechanical, Energetic, Coordinative, and Muscular Factors in a 200 BioMed Research International Volume 2013 (2013), Article ID 897232, 12 pages Research Article Interplay of Biomechanical, Energetic, Coordinative, and Muscular Factors in a 200m Front Crawl Swim ^1Centre of Research, Education, Innovation and Intervention in Sport, Faculty of Sport, University of Porto, Rua Dr. Plácido Costa 91, 4200-450 Porto, Portugal ^2Higher Education Institute of Maia (ISMAI), Avenida Carlos Oliveira Campos, 4475-690 Maia, Portugal ^3Center for Research and Education in Special Environments, Department of Physiology and Biophysics, University at Buffalo, 3435 Main Street, Buffalo, NY 14214, USA ^4Porto Biomechanics Laboratory, University of Porto, Rua Dr. Plácido Costa 91, 4200-450 Porto, Portugal Received 9 July 2012; Accepted 5 February 2013 Academic Editor: Francisco Miró Copyright © 2013 Pedro Figueiredo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This study aimed to determine the relative contribution of selected biomechanical, energetic, coordinative, and muscular factors for the 200m front crawl and each of its four laps. Ten swimmers performed a 200m front crawl swim, as well as 50, 100, and 150m at the 200m pace. Biomechanical, energetic, coordinative, and muscular factors were assessed during the 200m swim. Multiple linear regression analysis was used to identify the weight of the factors to the performance. For each lap, the contributions to the 200m performance were 17.6, 21.1, 18.4, and 7.6% for stroke length, 16.1, 18.7, 32.1, and 3.2% for stroke rate, 11.2, 13.2, 6.8, and 5.7% for intracycle velocity variation in x, 9.7, 7.5, 1.3, and 5.4% for intracycle velocity variation in y, 17.8, 10.5, 2.0, and 6.4% for propelling efficiency, 4.5, 5.8, 10.9, and 23.7% for total energy expenditure, 10.1, 5.1, 8.3, and 23.7% for interarm coordination, 9.0, 6.2, 8.5, and 5.5% for muscular activity amplitude, and 3.9, 11.9, 11.8, and 18.7% for muscular frequency). The relative contribution of the factors was closely related to the task constraints, especially fatigue, as the major changes occurred from the first to the last lap. 1. Introduction The goal of competitive swimming is to perform the race distance as fast as possible, for that swimmers must achieve their highest average velocity for that distance. Swimming velocity () is the product of the stroke rate (SR) and the distance moved through the water with each complete stroke cycle (SL) [1] and can be expressed as For the same several combinations of SR and SL are possible and are a result of modifications of the time spent in different phases of the stroke cycle (interarm coordination), which can be measure in front crawl with the index of coordination (IdC; [2–4]). However, swimmers do not move at a constant velocity within each stroke cycle, and variations in the action of the arms, legs, and trunk result in intermittent application of force and lead to variations in the swimming velocity around the mean velocity within each stroke cycle. These intermittent movements and resultant variations in velocity increase the work done by the swimmer [5], compared to swimming at a constant velocity. The average velocity attained by the swimmer results from the average of the instantaneous velocity, resulting from intracycle velocity variation (IVV): In addition to these factors, maximal swimming velocity () depends on the maximal metabolic power of the swimmers () and on their energy cost of locomotion (): where can be computed based on measures /estimates of the aerobic, anaerobic lactic, and anaerobic alactic energy contributions and (i.e., the amount of metabolic energy spent to cover one unit of distance, KJ·m^−1). The depends on biomechanical factors such as the mechanical efficiency (), the propelling efficiency (), and the mechanical work to overcome hydrodynamic resistance (): To assess several methods have been proposed; however there is no agreement on the most valid method [6–8], and thus it remains difficult to determine active drag during a competitive event while preserving the ecology of the movement. On the other hand, propelling efficiency includes work done against drag and is defined as the ratio of useful mechanical work () to total mechanical work (): where in aquatic environments is lower than , since a fraction of the work produced by the contracting muscles is used to accelerate a variable amount of water backwards (wasted work) [9] and for the internal work [10]. The includes and is dependent on the swimmers’ technique and is velocity-dependent and affected by fatigue. In addition, mechanical efficiency is related to how muscles produce the mechanical work needed to sustain a given speed [10, 11]. Muscle efficiency arises from the range of either their force/length and/or force/speed relationships. Relations between force and iEMG have been used to estimate different efficiencies. Also, it has been suggested that the reduction in electrical efficiency with fatigue indicated that more motor units were recruited to generate the same amount of force compared with the nonfatigued muscle [12, 13]. However, the diagnostic value of the time domain analysis (iEMG) in muscle fatigue evaluation is considered to be more limited than that of the frequency domain analysis (Freq; [14]). So, to minimize the metabolic cost of high performance activities, the limbs must generate large power outputs while the muscles perform work at high efficiencies. As described above, theoretical models have been developed that attempt to explain the influence of various factors on performance. In spite of the fact that velocity is common to the theoretical approaches, they cannot be combined due to incompatibility of terms and units. This has led to attempts at practical approaches, relating swimming performance to different anthropometrical, physiological, and biomechanical parameters [15–18]. This kind of research can be developed by comparing different competitive level swimmers, employing the neural network, computing cluster analysis, or developing statistical models from the swimmer’s profile [19]. However, these studies have not theorized/assessed swimming performance completely using a biophysical approach, particularly at high swimming speeds [19–21]. The 200m swim and freestyle swimming are the dominant competitive events and thus of great interest. Therefore, the purpose of this study was to determine the relative contribution of selected biomechanical (SL, SR, horizontal IVV, vertical IVV, ), energetic (), coordinative (IdC), and muscular factors (iEMG and Freq) for the 200m front crawl performance and each of its four laps. The approach used, in the absent of an appropriate theoretical approach, was a multivariate analysis of the important factors among those listed above that would account for the average swimming velocity in a 200m front crawl swim and its component lengths, in well-trained swimmers. It was hypothesized that the biomechanical and energetic factors would be most important, with the coordinative and muscular factors also playing an important, but lesser, role. 2. Methods 2.1. Subjects Ten well-trained swimmers (yr) who were specialists in the 200m front crawl event participated in this study. Height, arm span, body mass, and percentage of adipose tissue were cm, cm, kg, and %, respectively. The subjects had an average of yrs of competitive experience. Their performances in the 200m front crawl were s, which correspond to a mean velocity that represents % of the mean velocity of the short course pool world record for men. The protocol was approved by the local ethics committee and followed the rules of the Declaration of Helsinki (2000). Swimmers were informed of the procedure, the potential risks involved, and the benefits of the study and then gave a written consent to participate. During the testing period, subjects were asked to adapt the intensity and the total volume of training to avoid stressful training programs. Swimmers’ practiced with and were accustomed to all procedures, particularly swimming with the snorkel used for measurement of . 2.2. Experimental Procedures All tests were conducted in a 25m indoor pool and each subject swam alone in the middle lane, avoiding pacing or drafting effects. Following a warm-up that consisted of a self-selected swim of about 1000m, including some swimming with the snorkel, swimmers performed a 200m maximum effort front crawl swim after a push start and using open turns without a glide. They were instructed to replicate their pacing and strategy used in competition. After 90 min of active rest, swimmers performed a 50m front crawl test and twenty-four hours later a 150m and a 100m tests, with 90 min active rest interval between them. Together 50, 100, and 150m tests were at the same swimming speed as in the previous 200m paced by a visual light pacing system placed in the bottom of the pool. The pacing lights led the swimmers as the lights progressed down the pool with a flash every 5m (TAR 1.1, GBK-Electronics, Aveiro, Portugal). 2.3. Data Collection and Analysis 2.3.1. Biomechanical Factors Each swimmer’s performance was recorded with a total of six stationary and synchronized video cameras (Sony, DCR-HC42E, Tokyo, Japan), four below and two above the water. The calibration set-up, accuracy, and reliability procedures have been previously described in detail [22]. The twenty-one landmarks videoed (Zatsiorsky’s model adapted by [23]) that define the three-dimensional position and orientation of the head, torso, upper arms, forearms, hands, thighs, shanks, and feet were manually digitized at 50Hz using a commercial software package (Ariel Performance Analysis System, Ariel Dynamics, Inc., USA). The Direct Linear Transformation Algorithm [24] was used for three-dimensional reconstruction and a digital low-pass filter at 6Hz was used to smooth the data. 2.3.2. Stroking Parameters One complete stroke cycle (defined as the period between the instant of entry of one hand to the next instant of entry of the same hand) for each of the 50m laps of the 200m front crawl was analyzed. From these data, the center of mass position as a function of time was computed. The mean velocity () was calculated by dividing the horizontal displacement of the center of mass in one stroke cycle over its total duration. Additionally, the horizontal distance travelled by the center of mass during the stroke cycle was used to determine the stroke length (SL). The stroke rate (SR) was determined as the inverse of the time (seconds) to complete one stroke cycle, which was then multiplied by 60 to yield units of strokes per minute. 2.3.3. Intracycle Velocity Variation To determine and analyze the whole body centre of mass’ IVV in the , , and axes of motion, the coefficient of variation (/mean) was computed as previously suggested [19, 22, 25]. 2.3.4. Propelling Efficiency Propelling efficiency was calculated from the computed 3D hand velocity as the sum of the instantaneous 3D velocity of the right and left hand combined during the underwater phase of the stroke (3Du). The was calculated from the ratio of the speed of the center of mass to the 3D mean hand velocity (), since this ratio represents the theoretical efficiency in all fluid machines and has been used in swimming [18, 26]. 2.4. Energetic Factors 2.4.1. Total Energy Expenditure and Energy Cost of Swimming () Oxygen uptake () was recorded by means of the K4b² telemetric gas exchange system (COSMED, Rome, Italy), during the 200m front crawl test. This equipment was connected to the swimmer by a low hydrodynamic resistance respiratory snorkel and valve system. This system was previously validated and widely used [15, 25, 26]. Expired gas concentrations were measured breath-by-breath and averaged every 5s, to get the used in subsequent calculations. Net was calculated by subtracting the resting from the steady state measured during swimming. Before, and after, the 50, 100, 150, and 200m tests, capillary blood samples (5µL) were collected from the ear lobe to assess rest and postexercise blood lactate () using a portable lactate analyzer (Lactate Pro, ARKRAY, Inc.). Lactate was measured at 1, 3, 5, and 7 min after test, and the peak value was used for further analysis. Since the 200m front crawl energy contribution is supplied from the three energy sources [26–28], was calculated for each 50m lap (for review see [28]): where is the total energy expenditure, is the aerobic contribution (calculated from the time integral of the net versus time), is the net accumulation of lactate after exercise, is the energy equivalent for lactate accumulation in blood (2.7mL O[2]·mM^−1·kg^−1), PCr is the alactic contribution, is the time duration, and is the time constant of PCr splitting at work onset (23.4s). The contribution of each energy pathway was calculated for each lap, and on the basis of these data, was computed and was calculated as the ratio between and . 2.5. Coordinative Factors 2.5.1. Index of Coordination The calculation of the index of coordination (IdC) requires the identification of key points in the stroke cycle [2, 4], specifically, (A) entry and catch of the hand in the water, (B) pull in the water, (C) push in the water, and (D) recovery out of the water. Each phase, within the stroke cycle, was determined from the swimmer’s horizontal () and vertical () displacement of the hand noting the time corresponding to start and end of these phases for two arm stroke cycles previously digitized. The IdC was calculated as the time gap between the propulsion (pull and push phases) of the two arms and expressed as a percentage of the duration of the complete arm-stroke cycle (sum of the propulsive and nonpropulsive phases (catch and exit phases)) [3, 29, 30]. IdC was the mean of IdC left and IdC right. 2.6. Muscular Factors The EMG signals of eight muscles (flexor carpi radialis, biceps brachii, triceps brachii, pectoralis major, upper trapezius, rectus femoris, biceps femoris, and tibialis anterior), which have been shown to have high activity during front crawl swimming [31, 32], were recorded simultaneously from the right side of the body using bipolar (interelectrode distance of 2.0cm) Ag–AgCl circular surface electrodes. The skin of the swimmer was shaved and cleaned with alcohol and the electrodes with preamplifiers placed in line with the muscle’s fibre orientation on the surface in the midpoint of the contracted muscle belly according to international standards [33] and covered with an adhesive bandage (OPSITE FLEXIFIX) [34, 35]. A reference electrode was attached to the body’s patella. All cables were fixed to the skin by adhesive tape to minimize artifacts during swimming. Additionally, swimmers wore a total body coverage swimsuit (Fastskin, Speedo) to cover the electrodes and recording wires. The total gain of the amplifier was set at 1100 times with a common mode rejection ratio of 110. The data were sampled at 1000Hz with a 16-bit analog to digital conversion and recording system (BIOPAC System, Inc) and stored on a computer for later analysis. An electronic flashlight signal synchronized with an electronic trigger marked simultaneously the video and EMG recordings, respectively, to synchronize EMG and video recordings. The EMG data analysis was performed using the MATLAB 2008a software environment (MathWorks Inc., Natick, Massachusetts, USA). 2.6.1. iEMG Raw EMG signals were band-passed (8–500Hz), rectified to obtain the full wave signals, and smoothed with a 4th order Butterworth filter (10Hz) for the linear envelope. The integration of the rectified EMG was calculated, per unit of time, to eliminate the stoke cycle duration effect (iEMG/T) and normalized to the maximum iEMG observed (signal was partitioned in 40ms windows to identify the maximal iEMG) [36]. All iEMG values from the measured muscles taken in the mid-pool section for each 50m were averaged. In addition, the average iEMG values of all 8 muscles were added together (iEMG) and used to represent the total electrical activity of swimming. 2.6.2. Frequency Analysis For the frequency analysis (Freq), spectral indices were calculated [37] and averaged. Spectral indices were obtained for each stroke, defined by video analysis, in the mid-section of the pool for each 50m lap and they were averaged for each muscle. The spectral indices for each muscle were then averaged to determine the Freq factor used to represent spectral muscle information. Spectral indices have been shown to most accurately detect changes in muscle power during dynamic contractions [38], and their increases indicate fatigue [37, 38]. 2.6.3. Statistical Analysis Mean (SD) computations for descriptive analysis were obtained for all variables (normal Gaussian distribution of the data was verified by the Shapiro-Wilk’s test). A one-way repeated measures ANOVA was used to compare each factor along the 200m. When a significant -value was achieved, Bonferroni post-hoc procedures were performed to locate the pairwise differences between the means. All the statistical analysis was performed using STATA 10.1 (StataCorp, USA), with the level of significance set at 0.05. The effect size () for each variable was calculated in accordance with Cohen [39] to measure the magnitude of difference. 2.7. Modeling of Performance As described in the Introduction, absence of a theoretical model to combine the factors that contribute to swimming performance, a multiple linear regression was used to identify the relative contributions of factors that are associated with swimming performance. These, among the previous defined, factors are biomechanical (SL, SR, , , ), energetic (), coordinative (IdC), and muscular (iEMG and Freq). This analysis was carried out for the 200m front crawl velocity and then repeated for the velocities of each of the component 50m laps to examine and compare the relative contribution of the factors in each segment of the swim. A common general multiple linear regression analysis was used to identify the weight of the factors identified as contributing to 200m swim velocity and attaining 100% of the variance of the performance. The equation used for all the models tested was where is the mean swimming velocity for the 200m or the mean velocity of each 50m lap that equals the sum of the model’ constant with the factors, stroke length, stroke rate, intracycle velocity variation ( and ), propulsive efficiency, total energy expenditure, index of coordination, muscular activation, and spectral indices weighted by their specific beta coefficients (). Both and IVV were not used in the model to limit the number of factors and they were reflected in and or , , respectively. To better express the relative importance of the factor, the weights of the regression were converted to standardized regression coefficients (beta weights). 3. Results Mean velocity for the total 200m front crawl was 1.41 (±0.04) m·s^−1. Figure 1 shows the data for the average velocity of each 50m lap, along with the observed stroke rate and stroke length, expressed as a percentage of their mean for the 200m swim. The velocity in the first lap was faster than the average velocity but decreased below the average in the second lap, after which it remained constant (). Swimming velocity is the product of SR and SL, and they both decreased concomitantly with velocity (Figure 1). SR had a mean value for the 200m of 38.41 (±3.05) cycles·min^−1 but decreased across the swim, reaching a statistical difference after the third lap (). SL decreased below the mean for the 200m of 2.20 (±0.14) m in lap 3 but reached significance only in the last lap (). Figure 2 shows the four groups of factors identified as contributing to the 200m front crawl swim (i.e., biomechanical, energetic, coordinative, and muscular). Biomechanical factor IVV (, , and ) (Figure 2(a)) mean values for the 200m were 0.22 (0.03), 0.76 (0.08), and 0.83 (0.03), respectively. A stable pattern over the 50m laps was observed (: ; : ; : ,). Another biomechanical factor, , presented a mean value over the four laps of 0.42 (0.02) (Figure 2(a)), however, showed a significant reduction in the 4th 50m lap (,). Energetic factors, () and () (Figure 2(b)), showed significant changes for the 50m laps, with a mean of 80.11 (7.97) mml ·kg^−1·min^−1 and 1.60 (0.16) KJ·m^−1, respectively. The coordinative factor, IdC, presented a mean value of −14.94 (2.15)% (Figure 2(c)) and showed a significant increase in the 4th 50m lap (). The two muscular factors, Freq () and iEMG (Figure 2(d)), showed a significant increase (), in the last 50m lap and the mean values were 1.97e^−14 (0.22e^−14) and 1.76 (±0.37), respectively. The beta coefficients for all factors are presented in Table 1, for their contribution in the four laps to the 200m velocity (upper half) and to the average velocity in each 50m lap (lower half). Standardized coefficients from the multiple linear regression model showed that the contributions of the first and last 50m laps velocity to the mean 200m velocity were higher (26.1 and 30.8%, resp.) than the contributions of the second and third laps (21.7 and 21.4%, resp.) of the 200m front crawl. The model had an , , , and adjusted for these factors. These data are consistent with the changes in velocity shown in Figure 1. The biomechanical factors showed a great importance, manly the SL and SR (Figure 3) to the overall performance of the 200m front crawl (16.2% and 17.5%, resp.). However, their contribution decreased in the final lap (from 17.6% and 16.1% to 7.6% and 3.2%, resp.). The SR had a very high contribution in the third 50m lap (32.1%); concomitant with this, there was a great decrease in the contribution of the other biomechanical factors (6.7% for , 1.3% for , and 2.0% for ), with the and factors increasing afterwards (5.4% and 6.4%, resp.). The contribution increases continually during the four laps (4.5%, 5.8%, 10.9% and 23.7%), while the IdC factor shows a “U” pattern with a large contribution at the beginning (10.1%), a decrease in the middle (5.1%), and then increase at the end of the swim (23.7% for the fourth lap). Relative to the muscular parameters (iEMG and Freq), iEMG appears to be quite stable (ranging from 5.5 to 9.0%), with only small oscillations, while the contribution of Freq increased over the length of the swim (from 3.9% in the first lap to 18.7% in the last lap). In Figure 4 the contributions of the relative importance of the factors used in the analysis for the average velocity in each lap individually are showed. The biomechanical factors (SL, SR, , , and ) had a higher contribution (81.1%) than the energetic (, 3.9%), coordinative (IdC, 5.5%), and muscular (iEMG and Freq, 9.5%) factors. Within all the analyzed factors SL and SR showed the highest contribution (26.4% and 34.6%, respectively) the remaining ones (, , , , IdC, iEMG and Freq) had a similar contributions (ranging from 3.8 to 6.9%). It should be noted that SL and SR are related mathematically with the . However, the contribution of each of these two factors for each 50m lap performance showed that SL contribution decreased in the third lap (from 27.9% to 24.8%), in spite of its increase tendency over the four laps (from 20.0% in the first lap to 33.1% in the last lap), while the SR increased throughout the entire 200m swim (from 17.6% to 49.4%). All the other factors used in the model showed a tendency to decrease their contribution from the beginning until the end of the swim, as the contributions of SL and SR increase. 4. Discussion Although previous studies have evaluated the role of biomechanical [1, 40, 41], energetic [26, 27, 42], muscular [32, 35, 43], or coordinative [2, 4, 29] factors on the performance and others developed models to predict performance combining several factors [44], we are unaware of a study that examined their combined interactive effects as was performed in this study. The regression analysis performed was not intended to predict performance but to determine the contribution of the important factors to it. For the mean velocity of 1.41m·s^−1, the biomechanical, energetic, coordinative, and muscular factors were 58.1%, 11.2%, 18.9%, and 11.8%, respectively, with SL and SR factors explaining 33.7% of the 200m mean velocity. A decrease in velocity during the second 50m lap was observed, and then velocity was constant. Although the patterns were different, SL and SR decreased from the first 50m and together accounted for the decrease in velocity. These changes in SL and SR are in agreement with previous studies [1, 29], showing the increase on the last lap of the SR to compensate for the SL decrease, in an attempt to maintain the velocity as high as possible. Also, the velocities that account for the major contribution to the overall performance of the 200m front crawl were the first and last lap velocities, suggesting two important stages during this particular event. On the first lap, the highest velocities are achieved and on the last lap the consequences of fatigue were felt, and although velocity was constant, the contribution of the factors determining it changed. Among the 50m laps, the contribution of biomechanical, energetic, coordinative, and muscular factors was on average 81.1%, 3.9%, 9.5%, and 5.5%, respectively, and 61% of the biomechanical contribution was attributed to the SL and SR. 4.1. Biomechanical Factors Stability in the IVV (, , and ) was observed over the four laps, as previously reported by Psycharakis et al. [40] and Alberty et al. [29]. IVV stability seems to be related with a coordinative adaptation of the upper limbs, as IdC changes along the effort, as well as the SR, mainly in the last 50m lap [45, 46]. IVV (, ) accounted for 15.2% of the variability of the 200m swim and 13.2% for the 50m laps. In spite of the stability of , the decreased in the last lap likely due to fatigue, as fatigue has been shown to evolve during the 200m front crawl [26, 29]. Also, it indicates a reduction in stroke technique quality; at the end of effort [47], higher lactate accumulation occurs [48], as well as neuromuscular fatigue [49]. As a result, accounted for 9.2% of the variability of the 200m swim and on average 6.9% for the 50m laps individual performance. 4.2. Energetic Factors and decreased over the second and third 50m laps, concomitant with the velocity decrease. However, taking into account the determinants of (the hydrodynamic resistance and the propelling efficiency) and since decreased due to the development of fatigue, , and thus , increased in the last lap, which is in agreement with previous studies [26, 50]. accounted for 11.2% of the variability of the 200m swim and on average 3.9% for the 50m laps. 4.3. Muscular Factors The assessed muscular factors revealed in spite of swimming at maximum effort that the observed muscles were involved at a submaximum level, as amplitude increased and frequency decreased (i.e., increase in the spectral indices), as previously reported for amplitude [43] and frequency [51] for a m test simulating the 200m front crawl, and also shown for the 100m swim [32]. Similar results were observed in other sports activities [52, 53]. Most of these studies interpreted the increase in the EMG activity amplitude as increased motor units recruitment and increased motor units synchronization, as well as the decrease in muscle fiber conduction velocity, due to an accumulation of metabolic products. The iEMG and Freq factors contributed by 7.3% and 11.6%, respectively, to the variance of the 200m swim and on average by 5.1% and 4.4%, respectively, to the 50m laps. 4.4. Coordinative Factors As velocity and the SL-SR ratio changed, interarm coordination adapted, with an increase in IdC in the final stages of the 200m event. This observation is consistent with the development of fatigue as reported previously [29, 30]. Interlimb coordination is adapted, as an optimization mechanism to obtain as much speed as possible in face of constraints imposed [54], showing that an effective front crawl swimming technique must be sufficiently flexible and adaptable [55]. This factor (IdC) accounted for 18.9% of the variance of the 200m swim performance and on average for 5.5% of the 50m laps. 4.5. Interplay among Factors A theoretical framework for the interaction of the biomechanical, energetic, coordinative, and muscular factors is presented in Figure 5 and used in the subsequent discussion. The biomechanical factors had the highest contribution to the 200m front crawl and also to each 50m lap mean velocities, where together they accounted for up to 33.7% and 61.0%, respectively. These contributions are understandable, as the product of two of these factors (SL and SR) determines swimming velocity [1]. The contributions of SL and SR to the total performance are very important to achieve high velocities however their contributions decreased during the swim, which suggested that several other factors had increased importance in determining the last 50m lap velocity (see Figure 5). The contribution of SL showed a higher contribution than SR in the last 50m. This observation is supported by a study of Craig et al. [1], where the best swimmers in the 200m front crawl could maintain higher SL distances at the end of the event, in spite of having similar SRs. Changes in SL and SR are associated with and (see Figure 5); however, the latter showed a stable pattern over the 200m. In spite of its stability, showed a decreased contribution over the length of the swim. IVVy’s contribution decreased even more than in the third lap, and then it increased in the fourth lap. Relative to the individual lap performances, had similar mean contributions to velocity as and , and all of them decreased from the beginning to the end of the swim. IVVx’s and SL contributions to the 200m performance showed a similar pattern, which could be explained by the increased time between propulsive phases as SL decreases and SR increases [2–4]. This change is also associated with a decrease in (see Figure 5) [9, 18, 26] and increase in the IVVx and [56]. This can be explained as a smaller IVV will lead to a lower energy cost, for example, if two swimmers swim at equal mean velocity but the in swimmer 1 and in swimmer 2 , then mean power of swimmer 1 will be but in swimmer 2 it will be >, as has the same relation with [17, 27]. Supporting this concept, it was found that swimming with hand paddles, which increases and SL [57], results in decrease of and increase of IdC [58]. On the other hand, and showed a contribution pattern that was inverse of that of SR. can be linked to the medial-lateral hand movements that account for vertical displacement changes suggesting great importance of the sideways movements during the stroke’s propulsive actions, which have been highlighted by previous studies (for review see [59]) and are decreased with higher SRs. In addition, as is SL-related [18, 26], its contribution to the variance in performance decreases when the SR contribution increases. The similar pattern observed for and seems to confirm the possible link between IVVy and the sideways hand pattern motion, which resulted in a high . This may also account for the larger contribution of than . Propelling efficiency’s decreasing contribution to the laps performance might be linked to reduced muscles force production during the stroke due to fatigue. It is likely that a reduced muscle force production occurs, as indicated by the changes in EMG factors, and the swimmers became unable to sustain the initial SL [1, 60], as observed in this study. The spectral indices (Freq) have been suggested as one of the first indicators of fatigue [37, 38]. The SL and decreases shown in this study are likely the result of fatigue developing toward the end of the 200m swim. As the biomechanical factors show a decreased contribution to the variance of the 200m in each 50m swim performance between the first and the last laps, other factor’s contributions must increase (see Figure 5). This was the case for the energetic and coordinative factors. Over the 50m laps, the contribution of to the overall performance increased; thus the swimmer’s capacity to deliver higher energy expenditure became more important over the 200m. Swimmers can have the same time splits for the 50, 100, and 150m, but if cannot be increased to match the increase in C in the last 50m, velocity cannot be sustained. The contribution of in the three final laps is similar to that of IdC, which could be explained by the swimmer naturally adopting a movement pattern to minimize his metabolic energy expenditure [61, 62]. The reduction in , and thus , in laps 2 and 3 may involve the process of self-optimization [62] which occurs to overcome the constraints imposed, in this case by the fatigue task constraint [3]. The IdC factor had the inverse pattern of contribution to performance in the first three laps compared to that of SL. As indicated by previous studies, based on the dynamical theories of motor organization, stroke rate is the first determinant of motor organization in swimming [3] and it has an inverse relation with SL [4, 60]. As SR and IdC are associated with each other (see Figure 5), IdC had a higher contribution to first 50m lap, as was the case in overall performance analysis. This is likely due to the direct relationship between IdC and velocity that has been suggested [2, 4 ]. After the first 50m, the contribution of IdC starts to decrease, as a result of the decrease in velocity and SL, until the development of fatigue, which resulted in an increase in the contribution of the SR to a greater extent than SL. When strokes are closer to one another, or overlap, this has the effect of increasing the average propulsive force while the mean force per stroke is maintained [30]. These changes in stroke patterns increase the contribution of IdC in the latter stages of the 200m swim, which also has previously been shown [29]. The increased contribution to overall performance reported in this study is related to the changes in the balance of the three energy pathways (aerobic, anaerobic lactic, and anaerobic alactic) as a function of time as previously reported [26–28]. The increase in contribution of the anaerobic lactate contribution and resultant lactate accumulation by the end of the effort [26] contribute to the explanation of the decrease in SL and . These changes are consistent with the deterioration of stroke mechanics observed by other authors [50, 60]. The reduced SL and increased Freq are associated with muscle fatigue most likely brought about by high lactate levels and reduced muscle glycogen [63]. This conclusion is supported by the suggestion that the increase in blood lactate concentration may change the stroking strategy significantly [60] and thus and . These deviations from the optimal combination of SL and SR result in a significant increase in energetic demand (see Figure 5), suggesting that minimizing energy cost may be an important factor contributing to cadence determination in cyclical forms of locomotion [62]. Supporting this, swimmers preferred to swim front crawl at the lowest SR (or the longest SL) that does not require an increase in oxygen uptake [64], as a significant decrease in the preferred SR, for example, determines the decline in time limit exercise duration [65], which might be caused by an unusual muscular recruitment. The increase in , particularly of anaerobic lactic contribution, in the final lap due to muscle fatigue is generally (although not exclusively) attributed to the reduced muscular fibre conduction velocity, which is causally related to a decrease in the pH [66]. Although pH was not directly measured, high values of blood lactate concentration collected after the 200m swim [26, 27, 67] implied a significant pH decrease during swimming. As muscles fatigue, power output is reduced during the swim [41], as is the case for SL. Since the SL is an index of propelling efficiency [18, 59], should decrease, as was observed in this study. The resultant deterioration of stroke mechanics in fatigued subjects is expected to lead to a progressive increase in the energy cost of swimming (see Figure 5), as was observed in this study. However, to maintain the total mechanical power output as Craig et al. [1] have shown for races of 200m and longer, the distance per stroke tends to decrease as fatigue develops and SR has to increase to compensate to maintain the speed constant, or if SR and cannot be increased velocity decreases, which happens in this study in the second lap. In addition, increases in muscle activity can lead to decreases in efficiency (see Figure 5) with no increase in power output if the muscle coordination is inappropriate [11]. Muscle coordination changes due to fatigue in swimming have been shown [35]. In the first lap, the contribution of the SL is higher than the SR, but in the last lap SR is greater suggesting fatigue in the last lap, which is supported by the EMG data (see Figure 5). The muscular factor iEMG (amplitude analysis) has a tendency to decrease its contribution to the overall performance of the 200m during subsequent 50m laps. In the first 50m lap, the highest contribution of the iEMG over the 200m could be associated with the high velocity and also a higher contribution of the SL, which is linked to higher force production (see Figure 5) [68]. This would also be associated with a higher power output and velocity, as was the case for the first lap and concomitant with the high contribution observed. On the third lap, the iEMG contribution increases after its decrease in the second lap, which might explain the decrease in the absolute value of SR in this particular lap. This is supported by the higher SR in this lap that was associated with higher EMG activity [69] and its increase in contribution. Also, additional recruitment or increased synchronization of muscle fibers as a result of submaximal fatigue [70] most likely explains the reduced contribution in the last 50m. If the velocity is an indicator of the power output and it was stable in the last three laps, mechanical efficiency and concomitant efficiency of the electrical activity was decreased, as the iEMG increased, if the electrical efficiency the ratio force to iEMG was considered [12, 13]. In spite of these associations described above, the relationship between iEMG and force is not linear and the diagnostic value of the time domain analysis (iEMG) in muscle fatigue evaluation is considered to be more limited than that of the frequency domain analysis (Freq) [14]. Freq showed a higher contribution to the 200m swim than iEMG in the mean values and for the second, third, and fourth laps. These higher contributions might be explained by the absolute values and contributions, as absolute value is higher on the first lap, because of the higher velocity when swimmers are not fatigued. However, after the first lap velocity starts to decline, as did , and a statistical stability during the second and third laps is being maintained. Freq’s contribution increases during these two laps. As swimmers reach the fourth lap, Freq increases and SL decreases suggesting the presence of fatigue; increases in both absolute value and contribution to velocity in spite of the constant velocity. The contribution of the increased Freq over the swim distance attained a similar contribution to the overall performance as did the energetic and coordinative factors. For the mean velocity in each lap, both iEMG and Freq present a similar mean contribution; however their pattern of change over the laps is different. The iEMG has its highest contribution on the first lap, whereas Freq has a small contribution. However, Freq is higher, and iEMG lower, in the last lap. This can suggest that at the beginning of the effort higher muscular activation is needed to recruit more fast-twitch muscle fibers and achieve the higher SR at this stage. In the second lapes, the contribution of Freq surpasses that of iEMG, and after this, it decreases constantly until the end of the 200m effort. The decreased contribution of iEMG is contrary to the increase in absolute values relative to the mean value for 200m. This pattern of changes is similar to the decrease in spectral parameters that indicate the evolvement of fatigue. As higher requires higher effective application of propulsive force, the decrease of the contribution of iEMG might be associated with a decrease in the contribution of and be associated with muscle fatigue. Notwithstanding the results and discussion, as well as the combined interactive effects of performance influencing factors on several research fields in well-trained swimmers, the approach used has some limitations that have to be acknowledged. The regression analysis was not intended to predict performance, only to determine the contribution of the factors, and the variables used represent discrete and extremely important outcomes, each of them for the understanding of the swimming performance and aquatic human locomotion. The relation between the number of variables and subjects evaluated was poor, which may influence the results of the analysis performed, over- or underestimating the contribution of the factors. 5. Conclusion The swimmers in this study had the highest velocity in the first lap of the 200m swim. The factors contributing to this were a balance of SL, SR, , , IdC, and iEMG, denoting particular importance for the biomechanical factors (SL, SR, and ), as this first lap is done comfortable enough, without fatigue constraints. From the second through the fourth lap, although the velocity was similar, dynamical changes occurred in the importance of the contributing factors, especially in the fourth lap. In this last, the contributions of Freq and IdC were high and suggest fatigue of the muscles used in swimming, resulting in a high contribution of and lower contribution of . These data may suggest swimming at a uniform velocity, to avoid the effects of fatigue, and/or training to increase and muscular endurance. This investigation was supported by Grants of the Portuguese Science and Technology Foundation (SFRH/BD/38462/2007) (PTDC/DES/101224/2008—FCOMP-01-0124-FEDER-009577). 1. A. B. Craig Jr., P. L. Skehan, J. A. Pawelczyk, and W. L. Boomer, “Velocity, stroke rate, and distance per stroke during elite swimming competition,” Medicine and Science in Sports and Exercise, vol. 17, no. 6, pp. 625–634, 1985. View at Scopus 2. D. Chollet, S. Chalies, and J. C. Chatard, “A new index of coordination for the crawl: description and usefulness,” International Journal of Sports Medicine, vol. 21, no. 1, pp. 54–59, 2000. View at Publisher · View at Google Scholar · View at Scopus 3. L. Seifert, D. Chollet, and A. Rouard, “Swimming constraints and arm coordination,” Human Movement Science, vol. 26, no. 1, pp. 68–86, 2007. View at Publisher · View at Google Scholar · View at 4. L. Seifert, H. M. Toussaint, M. Alberty, C. Schnitzler, and D. Chollet, “Arm coordination, power, and swim efficiency in national and regional front crawl swimmers,” Human Movement Science, vol. 29, no. 3, pp. 426–439, 2010. View at Publisher · View at Google Scholar · View at Scopus 5. L. D’Acquisto, D. Ed, and D. Costill, “Relationship between intracyclic linear body velocity fluctuations, power and sprint breaststroke performance,” Journal of Swimming Research, vol. 13, pp. 8–14, 1998. 6. H. M. Toussaint, P. E. Roos, and S. Kolmogorov, “The determination of drag in front crawl swimming,” Journal of Biomechanics, vol. 37, no. 11, pp. 1655–1663, 2004. View at Publisher · View at Google Scholar · View at Scopus 7. P. Zamparo, G. Gatta, D. Pendergast, and C. Capelli, “Active and passive drag: the role of trunk incline,” European Journal of Applied Physiology, vol. 106, no. 2, pp. 195–205, 2009. View at Publisher · View at Google Scholar · View at Scopus 8. S. V. Kolmogorov and O. A. Duplishcheva, “Active drag, useful mechanical power output and hydrodynamic force coefficient in different swimming strokes at maximal velocity,” Journal of Biomechanics, vol. 25, no. 3, pp. 311–318, 1992. View at Publisher · View at Google Scholar · View at Scopus 9. H. M. Toussaint, W. Knops, G. de Groot, and A. P. Hollander, “The mechanical efficiency of front crawl swimming,” Medicine and Science in Sports and Exercise, vol. 22, no. 3, pp. 402–408, 1990. View at Scopus 10. P. Zamparo, D. R. Pendergast, B. Termin, and A. E. Minetti, “How fins affect the economy and efficiency of human swimming,” Journal of Experimental Biology, vol. 205, part 17, pp. 2665–2676, 2002. View at Scopus 11. J. M. Wakeling, O. M. Blake, and H. K. Chan, “Muscle coordination is key to the power output and mechanical efficiency of limb movements,” Journal of Experimental Biology, vol. 213, no. 3, pp. 487–492, 2010. View at Publisher · View at Google Scholar · View at Scopus 12. T. I. Arabadzhiev, V. G. Dimitrov, N. A. Dimitrova, and G. V. Dimitrov, “Interpretation of EMG integral or RMS and estimates of “neuromuscular efficiency” can be misleading in fatiguing contraction,” Journal of Electromyography and Kinesiology, vol. 20, no. 2, pp. 223–232, 2010. View at Publisher · View at Google Scholar · View at Scopus 13. M. R. Deschenes, J. A. Giles, R. W. McCoy, J. S. Volek, A. L. Gomez, and W. J. Kraemer, “Neural factors account for strength decrements observed after short-term muscle unloading,” The American Journal of Physiology, vol. 282, no. 2, pp. R578–R583, 2002. View at Scopus 14. R. Merletti, A. Rainoldi, and D. Farina, “Myoelectric manifestations of muscle fatigue,” in Electromyography, R. Merletti and F. A. Parker, Eds., IEEE Press, Piscataway, NJ, USA, 2004. 15. R. J. Fernandes, K. L. Keskinen, P. Colaço et al., “Time limit at $\text{V}·{\text{O}}_{2\text{max}}$ velocity in elite crawl swimmers,” International Journal of Sports Medicine, vol. 29, no. 2, pp. 145–150, 2008. View at Publisher · View at Google Scholar · View at Scopus 16. D. Pendergast, P. Zamparo, P. E. di Prampero et al., “Energy balance of human locomotion in water,” European Journal of Applied Physiology, vol. 90, no. 3-4, pp. 377–386, 2003. View at Publisher · View at Google Scholar · View at Scopus 17. H. M. Toussaint and A. P. Hollander, “Energetics of competitive swimming: implications for training programmes,” Sports Medicine, vol. 18, no. 6, pp. 384–405, 1994. View at Scopus 18. P. Zamparo, D. R. Pendergast, J. Mollendorf, A. Termin, and A. E. Minetti, “An energy balance of front crawl,” European Journal of Applied Physiology, vol. 94, no. 1-2, pp. 134–144, 2005. View at Publisher · View at Google Scholar · View at Scopus 19. T. M. Barbosa, J. A. Bragada, V. M. Reis, D. A. Marinho, C. Carvalho, and A. J. Silva, “Energetics and biomechanics as determining factors of swimming performance: Updating the state of the art,” Journal of Science and Medicine in Sport, vol. 13, no. 2, pp. 262–269, 2010. View at Publisher · View at Google Scholar · View at Scopus 20. D. R. Pendergast, C. Capelli, A. Craig et al., “Biophysics in swimming,” in Proceedings of the 10th International Symposium of Biomechanics and Medicine in Swimming, pp. 185–189, Porto, Portugal, 21. J. P. Vilas-Boas, “Biomechanics and medicine in swimming, past, present and future,” in Biomechanics and Medicine in Swimming XI, K. L. Kjendlie, R. K. Stallman, and J. Cabri, Eds., pp. 11–19, Norwegian School of Sport Science, Oslo, Norway, 2010. 22. P. Figueiredo, J. P. Vilas-Boas, J. Maia, P. Gonçalves, and R. J. Fernandes, “Does the hip reflect the centre of mass swimming kinematics?” International Journal of Sports Medicine, vol. 30, no. 11, pp. 779–781, 2009. View at Publisher · View at Google Scholar · View at Scopus 23. P. de Leva, “Adjustments to zatsiorsky-seluyanov's segment inertia parameters,” Journal of Biomechanics, vol. 29, no. 9, pp. 1223–1230, 1996. View at Publisher · View at Google Scholar · View at 24. Y. Abdel-Aziz and H. Karara, “Direct linear transformation: from comparator coordinates into object coordinates in close range photogrammetry,” in Proceedings of the Symposium on Close-Range Photogrammetry, pp. 1–18, American Society of Photogrammetry, Falls Church, Va, UA, 1971. 25. T. M. Barbosa, K. L. Keskinen, R. Fernandes, P. Colaço, A. B. Lima, and J. P. Vilas-Boas, “Energy cost and intracyclic variation of the velocity of the centre of mass in butterfly stroke,” European Journal of Applied Physiology, vol. 93, no. 5-6, pp. 519–523, 2005. View at Publisher · View at Google Scholar · View at Scopus 26. P. Figueiredo, P. Zamparo, A. Sousa, J. P. Vilas-Boas, and R. J. Fernandes, “An energy balance of the 200m front crawl race,” European Journal of Applied Physiology, vol. 111, no. 5, pp. 767–777, 2011. View at Publisher · View at Google Scholar · View at Scopus 27. C. Capelli, D. R. Pendergast, and B. Termin, “Energetics of swimming at maximal speeds in humans,” European Journal of Applied Physiology and Occupational Physiology, vol. 78, no. 5, pp. 385–393, 1998. View at Publisher · View at Google Scholar · View at Scopus 28. P. Zamparo, C. Capelli, and D. Pendergast, “Energetics of swimming: a historical perspective,” European Journal of Applied Physiology, vol. 111, no. 3, pp. 367–378, 2011. View at Publisher · View at Google Scholar · View at Scopus 29. M. Alberty, M. Sidney, F. Huot-Marchand, J. M. Hespel, and P. Pelayo, “Intracyclic velocity variations and arm coordination during exhaustive exercise in front crawl stroke,” International Journal of Sports Medicine, vol. 26, no. 6, pp. 471–475, 2005. View at Publisher · View at Google Scholar · View at Scopus 30. M. Alberty, M. Sidney, P. Pelayo, and H. M. Toussaint, “Stroking characteristics during time to exhaustion tests,” Medicine and Science in Sports and Exercise, vol. 41, no. 3, pp. 637–644, 2009. View at Publisher · View at Google Scholar 31. J. P. Clarys and J. Cabri, “Electromyography and the study of sports movements: a review,” Journal of Sports Sciences, vol. 11, no. 5, pp. 379–448, 1993. View at Scopus 32. I. Stirn, T. Jarm, V. Kapus, and V. Strojnik, “Evaluation of muscle fatigue during 100-m front crawl,” European Journal of Applied Physiology, vol. 111, no. 1, pp. 101–113, 2011. View at Publisher · View at Google Scholar 33. H. J. Hermens, B. Freriks, C. Disselhorst-Klug, and G. Rau, “Development of recommendations for SEMG sensors and sensor placement procedures,” Journal of Electromyography and Kinesiology, vol. 10, no. 5, pp. 361–374, 2000. View at Publisher · View at Google Scholar · View at Scopus 34. K. de Jesus, K. de Jesus, P. Figueiredo et al., “Biomechanical analysis of backstroke swimming starts,” International Journal of Sports Medicine, vol. 32, no. 7, pp. 546–551, 2011. View at Publisher · View at Google Scholar · View at Scopus 35. A. H. Rouard, R. P. Billat, V. Deschodt, and J. P. Clarys, “Muscular activations during repetitions of sculling movements up to exhaustion in swimming,” Archives of Physiology and Biochemistry, vol. 105, no. 7, pp. 655–662, 1997. View at Publisher · View at Google Scholar · View at Scopus 36. V. Caty, Y. Aujouannet, F. Hintzy, M. Bonifazi, J. P. Clarys, and A. H. Rouard, “Wrist stabilisation and forearm muscle coactivation during freestyle swimming,” Journal of Electromyography and Kinesiology, vol. 17, no. 3, pp. 285–291, 2007. View at Publisher · View at Google Scholar · View at Scopus 37. G. V. Dimitrov, T. I. Arabadzhiev, K. N. Mileva, J. L. Bowtell, N. Crichton, and N. A. Dimitrova, “Muscle fatigue during dynamic contractions assessed by new spectral indices,” Medicine and Science in Sports and Exercise, vol. 38, no. 11, pp. 1971–1979, 2006. View at Publisher · View at Google Scholar · View at Scopus 38. M. González-Izal, A. Malanda, I. Navarro-Amézqueta et al., “EMG spectral indices and muscle power fatigue during dynamic contractions,” Journal of Electromyography and Kinesiology, vol. 20, no. 2, pp. 233–240, 2010. View at Publisher · View at Google Scholar · View at Scopus 39. J. Cohen, Statistical Power Analysis for the Behavioral Sciences, Lawrence Erlbaum, Hillsdale, NJ, USA, 2nd edition, 1988. 40. S. G. Psycharakis, R. Naemi, C. Connaboy, C. McCabe, and R. H. Sanders, “Three-dimensional analysis of intracycle velocity fluctuations in frontcrawl swimming,” Scandinavian Journal of Medicine and Science in Sports, vol. 20, no. 1, pp. 128–135, 2010. View at Publisher · View at Google Scholar · View at Scopus 41. H. M. Toussaint, A. Carol, H. Kranenborg, and M. J. Truijens, “Effect of fatigue on stroking characteristics in an arms-only 100-m front-crawl race,” Medicine and Science in Sports and Exercise, vol. 38, no. 9, pp. 1635–1642, 2006. View at Publisher · View at Google Scholar · View at Scopus 42. R. J. Fernandes, V. L. Billat, A. C. Cruz, P. J. Colaço, C. S. Cardoso, and J. P. Vilas-Boas, “Does net energy cost of swimming affect time to exhaustion at the individual's maximal oxygen consumption velocity?” Journal of Sports Medicine and Physical Fitness, vol. 46, no. 3, pp. 373–380, 2006. View at Scopus 43. Y. A. Aujouannet, M. Bonifazi, F. Hintzy, N. Vuillerme, and A. H. Rouard, “Effects of a high-intensity swim test on kinematic parameters in high-level athletes,” Applied Physiology, Nutrition and Metabolism, vol. 31, no. 2, pp. 150–158, 2006. View at Publisher · View at Google Scholar · View at Scopus 44. T. M. Barbosa, M. Costa, D. A. Marinho, J. Coelho, M. Moreira, and A. J. Silva, “Modeling the links between young swimmers' performance: energetic and biomechanic profiles,” Pediatric Exercise Science, vol. 22, no. 3, pp. 379–391, 2010. View at Scopus 45. P. Figueiredo, J. P. Vilas-Boas, L. Seifert, D. Chollet, and R. J. Fernandes, “Inter-limb coordinative structure in a 200m front crawl event,” Open Sports Sciences Journal, vol. 3, pp. 25–27, 2010. View at Publisher · View at Google Scholar 46. C. Schnitzler, L. Seifert, M. Alberty, and D. Chollet, “Hip velocity and arm coordination in front crawl swimming,” International Journal of Sports Medicine, vol. 31, no. 12, pp. 875–881, 2010. View at Publisher · View at Google Scholar 47. K. Wakayoshi, L. J. D'Acquisto, J. M. Cappaert, and J. P. Troup, “Relationship between oxygen uptake, stroke rate and swimming velocity in competitive swimming,” International Journal of Sports Medicine, vol. 16, no. 1, pp. 19–23, 1995. View at Scopus 48. K. Wakayoshi, J. Acquisto, J. M. Cappaert, and J. P. Troup, “Relationship between metabolic parameters and stroking technique characteristics in front crawl,” in Biomechanics and Medicine in Swimming VII, J. P. Troup, A. P. Hollander, D. Strasse, S. W. Trappe, J. M. Cappaert, and T. A. Trappe, Eds., pp. 152–158, E & FN Spon, London, UK, 1996. 49. P. Figueiredo, R. Sanders, T. Gorski, J. P. Vilas-Boas, and R. J. Fernandes, “Kinematic and electromyographic changes during 200m front crawl at race pace,” International Journal of Sports Medicine, vol. 34, no. 1, pp. 49–55, 2013. View at Publisher · View at Google Scholar 50. P. Zamparo, M. Bonifazi, M. Faina et al., “Energy cost of swimming of elite long-distance swimmers,” European Journal of Applied Physiology, vol. 94, no. 5-6, pp. 697–704, 2005. View at Publisher · View at Google Scholar · View at Scopus 51. V. Y. Caty, A. H. Rouard, F. Hintzy, Y. A. Aujouannet, F. Molinari, and M. Knaflitz, “Time-frequency parameters of wrist muscles EMG after an exhaustive freestyle test,” Portuguese Journal of Sport Sciences, vol. 6, no. S2, pp. 28–30, 2006. 52. K. N. Mileva, J. Morgan, and J. Bowtell, “Differentiation of power and endurance athletes based on their muscle fatigability assessed by new spectral electromyographic indices,” Journal of Sports Sciences, vol. 27, no. 6, pp. 611–623, 2009. View at Publisher · View at Google Scholar · View at Scopus 53. K. Watanabe, K. Katayama, K. Ishida, and H. Akima, “Electromyographic analysis of hip adductor muscles during incremental fatiguing pedaling exercise,” European Journal of Applied Physiology, vol. 106, no. 6, pp. 815–825, 2009. View at Publisher · View at Google Scholar · View at Scopus 54. K. M. Newell, “Constraints on the development of coordination,” in Motor Development in Children: Aspect of Coordination and Control, M. G. Wade and H. T. A. Whiting, Eds., pp. 341–360, Nijhoff, Dordrecht, The Netherlands, 1986. 55. P. S. Glazier, J. S. Wheat, D. L. Pease, and R. M. Bartlett, “Dynamic systems theory and the functional role of movement variability,” in Movement System Variability, K. Davids, S. Bennett, and K. M. Newell, Eds., pp. 49–72, Human Kinetics, Champaign, Ill, USA, 2006. 56. T. M. Barbosa, F. Lima, A. Portela et al., “Relationships between energy cost, swimming velocity and speed fluctuation in competitive swimming strokes,” Portuguese Journal of Sport Sciences, vol. 6, no. S2, pp. 28–29, 2006. 57. H. M. Toussaint, T. Janssen, and M. Kluft, “Effect of propelling surface size on the mechanics and energetics of front crawl swimming,” Journal of Biomechanics, vol. 24, no. 3-4, pp. 205–211, 1991. View at Scopus 58. M. Sidney, S. Paillette, J. Hespel, D. Chollet, and P. Pelayo, “Effect of swim paddles on the intra-cycle velocity variations and on the arm coordenation of front crawl stroke,” in Proceedings of the 21st International Symposium on Biomechanics in Sports, J. R. Blackwell and R. H. Sanders, Eds., pp. 39–42, University of San Francisco, San Francisco, Calif, USA, 2001. 59. H. M. Toussaint and P. J. Beek, “Biomechanics of competitive front crawl swimming,” Sports Medicine, vol. 13, no. 1, pp. 8–24, 1992. View at Scopus 60. K. L. Keskinen and P. V. Komi, “Stroking characteristics of front crawl swimming during exercice,” Journal of Biomechanics, vol. 9, pp. 219–226, 1993. 61. L. Seifert, J. Komar, P. M. Leprêtre et al., “Swim specialty affects energy cost and motor organization,” International Journal of Sports Medicine, vol. 31, no. 9, pp. 624–630, 2010. View at Publisher · View at Google Scholar · View at Scopus 62. W. A. Sparrow and K. M. Newell, “Metabolic energy expenditure and the regulation of movement economy,” Psychonomic Bulletin and Review, vol. 5, no. 2, pp. 173–196, 1998. View at Scopus 63. D. L. Costill, M. G. Flynn, J. P. Kirwan et al., “Effects of repeated days of intensified training on muscle glycogen and swimming performance,” Medicine and Science in Sports and Exercise, vol. 20, no. 3, pp. 249–254, 1988. View at Scopus 64. S. P. McLean, D. Palmer, G. Ice, M. Truijens, and J. C. Smith, “Oxygen uptake response to stroke rate manipulation in freestyle swimming,” Medicine and Science in Sports and Exercise, vol. 42, no. 10, pp. 1909–1913, 2010. View at Publisher · View at Google Scholar · View at Scopus 65. M. R. Alberty, F. P. Potdevin, J. Dekerle, P. P. Pelayo, and M. C. Sidney, “Effect of stroke rate reduction on swimming technique during paced exercise,” Journal of Strength and Conditioning Research, vol. 25, no. 2, pp. 392–397, 2011. View at Publisher · View at Google Scholar · View at Scopus 66. D. G. Allen, G. D. Lamb, and H. Westerblad, “Skeletal muscle fatigue: cellular mechanisms,” Physiological Reviews, vol. 88, no. 1, pp. 287–332, 2008. View at Publisher · View at Google Scholar · View at Scopus 67. P. Pelayo, I. Mujika, M. Sidney, and J. C. Chatard, “Blood lactate recovery measurements, training, and performance during a 23-week period of competitive swimming,” European Journal of Applied Physiology and Occupational Physiology, vol. 74, no. 1-2, pp. 107–113, 1996. View at Scopus 68. K. L. Keskinen, L. J. Tilli, and P. V. Komi, “Maximum velocity swimming: interrelationships of stroking characteristics, force production and anthropometric variables,” Scandinavian Journal of Sports Sciences, vol. 11, no. 2, pp. 87–92, 1989. View at Scopus 69. J. M. H. Cabri, L. Annemans, J. P. Clarys, E. Bollens, and J. Publie, “The relation of stroke frequency, force, and EMG in front crawl tethered swimming,” in Swimming Science V, pp. 183–189, 70. S. C. Gandevia, “Spinal and supraspinal factors in human muscle fatigue,” Physiological Reviews, vol. 81, no. 4, pp. 1725–1789, 2001. View at Scopus
{"url":"http://www.hindawi.com/journals/bmri/2013/897232/","timestamp":"2014-04-17T07:04:57Z","content_type":null,"content_length":"276619","record_id":"<urn:uuid:81734a88-997b-4d26-934c-0d71bb765035>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
for Finite Projective Planes pzip: A Compression Utility for Finite Projective Planes This package consists of two C programs designed to run under linux: pzip (for compressing) and punzip (for decompressing) data storage files for finite projective planes (expressed as incidence matrices, lists of lines through each point, or complete sets of mutually orthogonal Latin squares). Compression attained exceeds that possible with general compression utilities such as gzip (see Performance) since we take advantage of the special nature of this type of incidence information. • Version 1 (Sept 2009) developed at the University of Delaware during my sabbatical. I gratefully acknowledge their hospitality. • Version 1.1 (June 2010) is current. Changes since Version 1.1 are minor, and include the deletion of obsolete lines of diagnostic code, and the introduction of dynamic memory allocation to conserve memory. pzip n f m filename punzip filename n ∈ {2, 3, ..., 255} is the order of the plane f ∈ {i, l, m} is the input format (incidence matrix, list of incidences, or mutually orthogonal Latin squares). Incidence matrices are lists of 0's and 1's (whitespace characters, including blanks and end-of-line characters, if present, are ignored). Lists consist of n^2+n+1 lists of n+1 integers each, separated by whitespace characters; each list represents the set of lines incident with a given point. Lines in these lists are denoted by integer values 0, 1, 2, ..., n^2+n (or 1, 2, 3, ..., n^2+n+1; but please be consistent). Mutually orthogonal Latin squares (i.e. MOLS) are listed using any set of n symbols (each symbol represented by a single ASCII character) and whitespace characters, if present, are ignored. m ∈ {e, i} is the mode of recovery for the plane for purposes of decompression (exact or isomorph-only). Replacing the plane by an isomorph (a copy isomorphic to the original) allows for a slightly greater rate of compression. If an exact copy of the original plane is required, use the exact mode. In the case of Latin squares, the original set of symbols will be preserved in exact mode; but in isomorph-only mode, they will be replaced by a standard set of characters 0, 1, 2, ..., 9, a, b, c, ... filename = name of file, in the current directory, containing the projective plane Compression using pzip stores the compressed plane in the current directory in a new file named filename.pz and decompression using punzip will send the extracted plane to the standard output (by default, the display terminal). We compare typical file sizes for projective planes. File size varies slightly for different planes of the same order; and even for isomorphic copies of the same plane; so sizes listed are only approximate. Execution time required for compression/decompression is minimal. Storage requirements for planes of order 11: │ │original file│gzipped original│pzipped, exact mode│pzipped, isomorph-only mode │ │incidence matrix │ 17 KB │ 2 KB │ 264 bytes │ 64 bytes │ │list of line sets│ 5 KB │ 2 KB │ 264 bytes │ 64 bytes │ │ 10 MOLS(11) │ 1.3 KB │ 250 bytes │ 115 bytes │ 64 bytes │ Storage requirements for planes of order 25: │ │original file│gzipped original│pzipped, exact mode│pzipped, isomorph-only mode │ │incidence matrix │ 410KB │ 23 KB │ 2300 bytes │ 950 bytes │ │list of line sets│ 63 KB │ 23 KB │ 2300 bytes │ 950 bytes │ │ 24 MOLS(25) │ 15 KB │ 9 KB │ 1300 bytes │ 950 bytes │ Storage requirements for planes of order 49: (no one would bother here with incidence matrices!) │ │original file│gzipped original│pzipped, exact mode│pzipped, isomorph-only mode │ │list of line sets│ 550 KB │ 210 KB │ 12 KB │ 6 KB │ │ 48 MOLS(49) │ 110 KB │ 81 KB │ 8 KB │ 6 KB │ For Desarguesian planes stored (in any format) with points and lines ordered systematically, gzip actually does better than the average cases tabulated above (almost as well as pzip!). But pzip is always the leader; usually by a long shot. 1. Download pzip.tar into your favourite directory under linux. (File size 30 KB; check the md5sum or sha256sum.) 2. Execute tar -xvf pzip.tar 3. Enter the directory ./pzip and execute make 4. I'd be interested in hearing from you if you make use of pzip Description of Algorithm Elementary... If the input format is an incidence matrix or a list of line sets, this is first converted to n−1 MOLS(n) (using O(n^3log(n)) bits of storage). Now process the MOLS one at a time, line by line, entry by entry. Each entry is constrained (by the symbols previously read) to lie in some k-subset of the n symbols, where 1 ≤ k ≤ n ; it can therefore be represented using log(k) bits instead of the naïve log(n) bits. These bits are written to filename.pz along with any additional information required to reconstruct the original file (including the value of n, format specifications, choice of permutations of rows and columns used to convert matrices to standard form, etc.) To Do • Currently the package works on 64-bit processors as well as 32-bit processors (the version of processor is dynamically detected), but issues of compatibility between the two versions have not yet been resolved. • I plan to extend the package to allow nets and sets of mutually orthogonal latin squares. • Currently n is bounded by N=255 (and a smaller limit in the case of Latin squares). These limits could be raised. If you email me to bug me about these issues, I might get version 2.0 ready sooner. Under Construction
{"url":"http://www.uwyo.edu/moorhouse/pzip.html","timestamp":"2014-04-18T00:33:11Z","content_type":null,"content_length":"15034","record_id":"<urn:uuid:8a9fe4ec-03dd-43e6-b3cb-d2b6ac8bcdb9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
need help! problem solving involving quadratic equations January 30th 2009, 09:44 AM #1 Jan 2009 need help! problem solving involving quadratic equations Ok, here's the question.... A lifeguard uses 700 m of marker buoys to rope off a rectangular swimming area in a lake. One side of the swimming area is a sandy beach. a) Determine the dimensions that will result in the maximum swimming area. b) What is the maximum swimming area? I'm totally stuck.... my head's is spinning right now... please help! $2a+b=700$. $A=ab=a\cdot\left(700-2a\right)$ It's roots are: $0\text{ and } 350$ therefore its maximum is at $a=175$. So the answer: swimming area 175 x 350 which is $61250\text{ }\rm{m^2}$. Thanks james_bond. Last edited by mr fantastic; January 30th 2009 at 12:30 PM. Reason: Moved the new question to a new thread January 30th 2009, 09:58 AM #2 Senior Member Nov 2007 January 30th 2009, 10:29 AM #3 Jan 2009
{"url":"http://mathhelpforum.com/algebra/70845-need-help-problem-solving-involving-quadratic-equations.html","timestamp":"2014-04-17T05:36:58Z","content_type":null,"content_length":"35099","record_id":"<urn:uuid:8680d9f5-8717-4c9f-aee3-d1ce80cecba4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Conversions of Length, Mass, Capacity in Metric Units ( Read ) | Measurement Josh and his sister Karen were working on homework when the topic of Everest and the metric system came up. “What about meters?” Karen asked. “How many meters high is Mount Everest?” “Why are you always thinking of things that cause me more work?” Josh asked, but then he smiled at Karen. “It’s alright. I was thinking of that today anyway.” “How can we figure it out?” Karen asked. “Well first, we need to know how many feet are in 1 meter. I already looked that up online, and I found out that there are 3.28 feet in 1 meter. Now I know that the height of Mount Everest is 29,035 feet high, so we can work from there,” Josh explained. “Yeah, but how?” “Well, we can use proportions.” Let’s stop right there. Do you know how to use a proportion to figure out this metric conversion? Well, pay attention to this Concept and if you aren’t sure how to do it now, you will know by the end of it. The metric system of measurement is the primary measurement system in many countries; it contains units such as meters, kilometers and liters. You can remember the conversions by learning the prefixes: Milli-means thousandth, centi-means hundredth, and kilo-means thousand. So a millimeter is one-thousandth of a meter, and a kilometer is one thousand meters. Write these units of measurement down in your notebooks. Now that you have reviewed these units of measurement, we can look at converting among the different units of measurement. Just like we used proportions when we converted among customary units of measurement, we can use proportions and ratios here too. How do we use proportions to convert among metric units of measure? First, set up the proportion in the same way you used to find actual measurements from scale drawings. Use the conversion factor as the first ratio, and the known and unknown units in the second How many centimeters are in 5 meters? First, set up a proportion. The conversion factor is the number of centimeters in 1 meter. We can look at the chart above and see that there are 100 centimeters in 1 meters. That is our first ratio: $\frac{100 \ centimeters}{1 \ meter}$ Now write the second ratio. The known unit is 5 meters. The unknown unit is $x$ $\frac{100 \ centimeters}{1 \ meters} = \frac{x \ centimeters}{5 \ meters}$ Now cross-multiply to solve for $x$ $(1)x &= 100(5)\\x &= 500$ There are 500 centimeters in 5 meters. Henry is making a recipe for lemonade that uses 2 liters of water. If he makes 3 batches of the recipe, how many milliliters of water will he need? First find the total number of liters he needs. If there are 2 liters in one batch, and he is making 3 batches, then he will need $2 \times 3 = 6 \ liters$ Next, set up a proportion. The conversion factor is the number of milliliters in a liter. $\frac{1000 \ milliliters}{1 \ liter}$ Now write the second ratio, making sure it follows the form of the first ratio. $\frac{1000 \ milliliters}{1 \ liter} = \frac{x \ milliliters}{6 \ liters}$ Cross-multiply to solve for $x$ $(1)x &= 1000(6)\\x &= 6000$ He will need 6000 milliliters of water. Convert each measurement. Example A 4500 ml = ____ Liters Solution: 4.5 liters Example B 5.5 grams = ____milligrams Solution: 5500 mg Example C 40 mm = ____centimeters Solution: 4 cm Now let's go back to the dilemma from the beginning of the Concept. First, let’s write a proportion. Josh told us that there are 3.28 feet in 1 meter. That is our first ratio in the proportion. Next, we write the second ratio. That compares the unknown number of meters, our variable with the current height of Everest in feet. Our proportion is: $\frac{3.28}{1}= \frac{29035}{x}$ Next, we cross multiply and solve. The answer is that Mount Everest is about 8852 meters high. We did need to round the answer, so that is why we used the word “about” in our answer. Metric System a system of measurement commonly used outside of the United States. It contains units such as meters, milliliters and grams. Guided Practice Here is one for you to try on your own. Kyle is going to be traveling with his family over the winter holidays. He wants to figure out how many kilometers it is from his home in Cincinatti to his grandparents home in Chicago. Which unit of measurement should Kyle use? First, let’s think about the correct unit of measurement for Kyle to use. If Kyle is measuring a far distance, he needs a measure of length. We know that the metric units for measuring length are millimeters, centimeters, meters and kilometers. Kyle is measuring the distance between two cities. It makes the most sense for him to use the largest unit for measuring length, and that is kilometers. Kyle would use kilometers to measure the distance. Video Review Directions: Solve each problem. 1. 3 km = _____ m 2. 2000 m = _____ km 3. 5.5 km = _____ m 4. 2500 m = _____ km 5. 12000 m = _____ km 6. 500 cm = _____ m 7. 6000 cm = _____ m 8. 4 m = _____ cm 9. 11 m = _____ cm 10. 50 mm = _____ cm 11. 3 cm = _____ mm 12. 15 cm = _____ mm 13. 2000 g = _____ kg 14. 35000 g = _____ kg 15. 7 kg = _____ g
{"url":"http://www.ck12.org/measurement/Conversions-of-Length-Mass-Capacity-in-Metric-Units/lesson/Convert-Metric-Units-of-Measurement/","timestamp":"2014-04-20T11:54:02Z","content_type":null,"content_length":"111949","record_id":"<urn:uuid:88d9e437-03e8-4f5e-8875-78e843627a36>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 362 Algebraic Structures Catalog Description: Groups, rings, fields, homomorphisms, and quotient structures. (Offered fall semester only.) MATH 261 Linear Algebra 1. Abstract vector spaces. 2. Function concept including real-valued function, linear transformations of vector spaces, determinant as a mapping, etc. 3. Polynomials and rational root test. Required Course Materials: John B. Fraleigh, A First Course in Abstract Algebra, 7^th edition, Pearson, 2003 (ISBN: 9780201763904) Course Coordinator: Angela C. Hare, Ph.D., Professor of Mathematics Course Audience: Juniors and seniors majoring in Mathematics Course Objectives: 1. To gain an introduction to groups and rings. 2. Mastery of computational details in algebraic structures is foundational to the work. 3. Theoretical concepts will be central to cover the major theorems and their proofs. 4. Deductive proof is the distinctive reasoning and writing form of the mathematical discipline. 5. Emphasis on deductive proof and problem solving with complete and clear written presentation of solutions. 1. Binary operations and isomorphic structures 2. Definitions of a group, examples 3. Subgroups, cyclic groups 4. Permutation groups, orbits, cycles and alternating groups 5. Cosets, Lagrange’s theorem 6. Direct products of groups 7. Homomorphisms, factor groups 8. Rings and fields, definitions and examples 9. Integral Domains 10. Fermat’s and Euler’s theorems 11. Quotient fields 12. Polynomial rings 13. Homomorphisms, factor rings, ideals
{"url":"http://www.messiah.edu/departments/mathsci/courses/syllabi/MATH362.html","timestamp":"2014-04-16T16:00:16Z","content_type":null,"content_length":"4493","record_id":"<urn:uuid:ec9a199f-df38-4afd-b537-e75b1170c8c3>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mackey Topology on a Von Neumann Algebra up vote 14 down vote favorite Every von Neumann algebra $\mathcal M$ is the dual of a unique Banach space $\mathcal M_* $. The Mackey topology on $\mathcal M$ is the topology of uniform convergence on weakly compact subsets of $\ mathcal M_*$. Is it known whether given a von Neumann subalgebra $\mathcal N \subseteq \mathcal M$, the Mackey topology on $\mathcal M$ restricts to the Mackey topology on $\mathcal N$? The article below indicates that the answer was unknown at the time of its publication. Aarnes, J. F., On the Mackey-Topology for a Von Neumann Algebra, Math. Scand. 22(1968), 87-107 http://www.mscand.dk/article.php?id=1864 von-neumann-algebras fa.functional-analysis oa.operator-algebras add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged von-neumann-algebras fa.functional-analysis oa.operator-algebras or ask your own question.
{"url":"http://mathoverflow.net/questions/71696/the-mackey-topology-on-a-von-neumann-algebra","timestamp":"2014-04-18T15:44:19Z","content_type":null,"content_length":"46930","record_id":"<urn:uuid:7a587808-3283-4aa7-ab92-0a90a705f528>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
[citation needed] UPDATE: I’ve posted a very classy email response from Friston here. In a “comments and controversies” piece published in NeuroImage last week, Karl Friston describes “Ten ironic rules for non-statistical reviewers”. As the title suggests, the piece is presented ironically; Friston frames it as a series of guidelines reviewers can follow in order to ensure successful rejection of any neuroimaging paper. But of course, Friston’s real goal is to convince you that the practices described in the commentary are bad ones, and that reviewers should stop picking on papers for such things as having too little power, not cross-validating results, and not being important enough to warrant publication. Friston’s piece is, simultaneously, an entertaining satire of some lamentable reviewer practices, and—in my view, at least—a frustratingly misplaced commentary on the relationship between sample size, effect size, and inference in neuroimaging. While it’s easy to laugh at some of the examples Friston gives, many of the positions Friston presents and then skewers aren’t just humorous portrayals of common criticisms; they’re simply bad caricatures of comments that I suspect only a small fraction of reviewers ever make. Moreover, the cures Friston proposes—most notably, the recommendation that sample sizes on the order of 16 to 32 are just fine for neuroimaging studies—are, I’ll argue, much worse than the diseases he diagnoses. Before taking up the objectionable parts of Friston’s commentary, I’ll just touch on the parts I don’t think are particularly problematic. Of the ten rules Friston discusses, seven seem palatable, if not always helpful: • Rule 6 seems reasonable; there does seem to be excessive concern about the violation of assumptions of standard parametric tests. It’s not that this type of thing isn’t worth worrying about at some point, just that there are usually much more egregious things to worry about, and it’s been demonstrated that the most common parametric tests are (relatively) insensitive to violations of normality under realistic conditions. • Rule 10 is also on point; given that we know the reliability of peer review is very low, it’s problematic when reviewers make the subjective assertion that a paper just isn’t important enough to be published in such-and-such journal, even as they accept that it’s technically sound. Subjective judgments about importance and innovation should be left to the community to decide. That’s the philosophy espoused by open-access venues like PLoS ONE and Frontiers, and I think it’s a good one. • Rules 7 and 9—criticizing a lack of validation or a failure to run certain procedures—aren’t wrong, but seem to me much too broad to support blanket pronouncements. Surely much of the time when reviewers highlight missing procedures, or complain about a lack of validation, there are perfectly good reasons for doing so. I don’t imagine Friston is really suggesting that reviewers should stop asking authors for more information or for additional controls when they think it’s appropriate, so it’s not clear what the point of including this here is. The example Friston gives in Rule 9 (of requesting retinotopic mapping in an olfactory study), while humorous, is so absurd as to be worthless as an indictment of actual reviewer practices. In fact, I suspect it’s so absurd precisely because anything less extreme Friston could have come up with would have caused readers to think, “but wait, that could actually be a reasonable concern…” • Rules 1, 2, and 3 seem reasonable as far as they go; it’s just common sense to avoid overconfidence, arguments from emotion, and tardiness. Still, I’m not sure what’s really accomplished by pointing this out; I doubt there are very many reviewers who will read Friston’s commentary and say “you know what, I’m an overconfident, emotional jerk, and I’m always late with my reviews–I never realized this before.” I suspect the people who fit that description—and for all I know, I may be one of them—will be nodding and chuckling along with everyone else. This leaves Rules 4, 5, and 8, which, conveniently, all focus on a set of interrelated issues surrounding low power, effect size estimation, and sample size. Because Friston’s treatment of these issues strikes me as dangerously wrong, and liable to send a very bad message to the neuroimaging community, I’ve laid out some of these issues in considerably more detail than you might be interested in. If you just want the direct rebuttal, skip to the “Reprising the rules” section below; otherwise the next two sections sketch Friston’s argument for using small sample sizes in fMRI studies, and then describe some of the things wrong with it. Friston’s argument Friston’s argument is based on three central claims: 1. Classical inference (i.e., the null hypothesis testing framework) suffers from a critical flaw, which is that the null is always false: no effects (at least in psychology) are ever truly zero. Collect enough data and you will always end up rejecting the null hypothesis with probability of 1. 2. Researchers care more about large effects than about small ones. In particular, there is some size of effect that any given researcher will call ‘trivial’, below which that researcher is uninterested in the effect. 3. If the null hypothesis is always false, and if some effects are not worth caring about in practical terms, then researchers who collect very large samples will invariably end up identifying many effects that are statistically significant but completely uninteresting. I think it would be hard to dispute any of these claims. The first one is the source of persistent statistical criticism of the null hypothesis testing framework, and the second one is self-evidently true (if you doubt it, ask yourself whether you would really care to continue your research if you knew with 100% confidence that all of your effects would never be any larger than one one-thousandth of a standard deviation). The third one follows directly from the first two. Where Friston’s commentary starts to depart from conventional wisdom is in the implications he thinks these premises have for the sample sizes researchers should use in neuroimaging studies. Specifically, he argues that since large samples will invariably end up identifying trivial effects, whereas small samples will generally only have power to detect large effects, it’s actually in neuroimaging researchers’ best interest not to collect a lot of data. In other words, Friston turns what most commentators have long considered a weakness of fMRI studies—their small sample size—into a virtue. Here’s how he characterizes an imaginary reviewer’s misguided concern about low power: Reviewer: Unfortunately, this paper cannot be accepted due to the small number of subjects. The significant results reported by the authors are unsafe because the small sample size renders their design insufficiently powered. It may be appropriate to reconsider this work if the authors recruit more subjects. Friston suggests that the appropriate response from a clever author would be something like the following: Response: We would like to thank the reviewer for his or her comments on sample size; however, his or her conclusions are statistically misplaced. This is because a significant result (properly controlled for false positives), based on a small sample indicates the treatment effect is actually larger than the equivalent result with a large sample. In short, not only is our result statistically valid. It is quantitatively more significant than the same result with a larger number of subjects. This is supported by an extensive appendix (written non-ironically), where Friston presents a series of nice sensitivity and classification analyses intended to give the reader an intuitive sense of what different standardized effect sizes mean, and what the implications are for the detection of statistically significant effects using a classical inference (i.e., hypothesis testing) approach. The centerpiece of the appendix is a loss-function analysis where Friston pits the benefit of successfully detecting a large effect (which he defines as a Cohen’s d of 1, i.e., an effect of one standard deviation) against the cost of rejecting the null when the effect is actually trivial (defined as a d of 0.125 or less). Friston notes that the loss function is minimized (i.e., the difference between the hit rate for large effects and the miss rate for trivial effects is maximized) when n = 16, which is where the number he repeatedly quotes as a reasonable sample size for fMRI studies comes from. (Actually, as I discuss in my Appendix I below, I think Friston’s power calculations are off, and the right number, even given his assumptions, is more like 22. But the point is, it’s a small number either way.) It’s important to note that Friston is not shy about asserting his conclusion that small samples are just fine for neuroimaging studies—especially in the Appendices, which are not intended to be ironic. He makes claims like the following: The first appendix presents an analysis of effect size in classical inference that suggests the optimum sample size for a study is between 16 and 32 subjects. Crucially, this analysis suggests significant results from small samples should be taken more seriously than the equivalent results in oversized studies. In short, if we wanted to optimise the sensitivity to large effects but not expose ourselves to trivial effects, sixteen subjects would be the optimum number. In short, if you cannot demonstrate a significant effect with sixteen subjects, it is probably not worth demonstrating. These are very strong claims delivered with minimal qualification, and given Friston’s influence, could potentially lead many reviewers to discount their own prior concerns about small sample size and low power—which would be disastrous for the field. So I think it’s important to explain exactly why Friston is wrong and why his recommendations regarding sample size shouldn’t be taken What’s wrong with the argument Broadly speaking, there are three problems with Friston’s argument. The first one is that Friston presents the absolute best-case scenario as if it were typical. Specifically, the recommendation that a sample of 16 – 32 subjects is generally adequate for fMRI studies assumes that fMRI researchers are conducting single-sample t-tests at an uncorrected threshold of p < .05; that they only care about effects on the order of 1 sd in size; and that any effect smaller than d = .125 is trivially small and is to be avoided. If all of this were true, an n of 16 (or rather, 22—see Appendix I below) might be reasonable. But it doesn’t really matter, because if you make even slightly less optimistic assumptions, you end up in a very different place. For example, for a two-sample t-test at p < .001 (a very common scenario in group difference studies), the optimal sample size, according to Friston’s own loss-function analysis, turns out to be 87 per group, or 174 subjects in total. I discuss the problems with the loss-function analysis in much more detail in Appendix I below; the main point here is that even if you take Friston’s argument at face value, his own numbers put the lie to the notion that a sample size of 16 – 32 is sufficient for the majority of cases. It flatly isn’t. There’s nothing magic about 16, and it’s very bad advice to suggest that authors should routinely shoot for sample sizes this small when conducting their studies given that Friston’s own analysis would seem to demand a much larger sample size the vast majority of the time. What about uncertainty? The second problem is that Friston’s argument entirely ignores the role of uncertainty in drawing inferences about effect sizes. The notion that an effect that comes from a small study is likely to be bigger than one that comes from a larger study may be strictly true in the sense that, for any fixed p value, the observed effect size necessarily varies inversely with sample size. It’s true, but it’s also not very helpful. The reason it’s not helpful is that while the point estimate of statistically significant effects obtained from a small study will tend to be larger, the uncertainty around that estimate is also greater—and with sample sizes in the neighborhood of 16 – 20, will typically be so large as to be nearly worthless. For example, a correlation of r = .75 sounds huge, right? But when that correlation is detected at a threshold of p < .001 in a sample of 16 subjects, the corresponding 99.9% confidence interval is .06 – .95—a range so wide as to be almost completely Fortunately, what Friston argues small samples can do for us indirectly—namely, establish that effect sizes are big enough to care about—can be done much more directly, simply by looking at the uncertainty associated with our estimates. That’s exactly what confidence intervals are for. If our goal is to ensure that we only end up talking about results big enough to care about, it’s surely better to answer the question “how big is the effect?” by saying, “d = 1.1, with a 95% confidence interval of 0.2 – 2.1″ than by saying “well it’s statistically significant at p < .001 in a sample of 16 subjects, so it’s probably pretty big”. In fact, if you take the latter approach, you’ll be wrong quite often, for the simple reason that p values will generally be closer to the statistical threshold with small samples than with big ones. Remember that, by definition, the point at which one is allowed to reject the null hypothesis is also the point at which the relevant confidence interval borders on zero. So it doesn’t really matter whether your sample is small or large; if you only just barely managed to reject the null hypothesis, you cannot possibly be in a good position to conclude that the effect is likely to be a big one. As far as I can tell, Friston completely ignores the role of uncertainty in his commentary. For example, he gives the following example, which is supposed to convince you that you don’t really need large samples: Imagine we compared the intelligence quotient (IQ) between the pupils of two schools. When comparing two groups of 800 pupils, we found mean IQs of 107.1 and 108.2, with a difference of 1.1. Given that the standard deviation of IQ is 15, this would be a trivial effect size … In short, although the differential IQ may be extremely significant, it is scientifically uninteresting … Now imagine that your research assistant had the bright idea of comparing the IQ of students who had and had not recently changed schools. On selecting 16 students who had changed schools within the past five years and 16 matched pupils who had not, she found an IQ difference of 11.6, where this medium effect size just reached significance. This example highlights the difference between an uninformed overpowered hypothesis test that gives very significant, but uninformative results and a more mechanistically grounded hypothesis that can only be significant with a meaningful effect But the example highlights no such thing. One is not entitled to conclude, in the latter case, that the true effect must be medium-sized just because it came from a small sample. If the effect only just reached significance, the confidence interval by definition just barely excludes zero, and we can’t say anything meaningful about the size of the effect, but only about its sign (i.e., that it was in the expected direction)—which is (in most cases) not nearly as useful. In fact, we will generally be in a much worse position with a small sample than a large one, because at least with a large sample, we at least stand a chance of being able to distinguish small effects from large ones. Recall that Friston suggests against collecting very large samples for the very reason that they are likely to produce a wealth of statistically-significant-but-trivially-small effects. Well, maybe so, but so what? Why would it be a bad thing to detect trivial effects so long as we were also in an excellent position to know that those effects were trivial? Nothing about the hypothesis-testing framework commits us to treating all of our statistically significant results like they’re equally important. If we have a very large sample, and some of our effects have confidence intervals from 0.02 to 0.15 while others have CIs from 0.42 to 0.52, we would be wise to focus most of our attention on the latter rather than the former. At the very least this seems like a more reasonable approach than deliberately collecting samples so small that they will rarely be able to tell us anything meaningful about the size of our What about the prior? The third, and arguably biggest, problem with Friston’s argument is that it completely ignores the prior—i.e., the expected distribution of effect sizes across the brain. Friston’s commentary assumes a uniform prior everywhere; for the analysis to go through, one has to believe that trivial effects and very large effects are equally likely to occur. But this is patently absurd; while that might be true in select situations, by and large, we should expect small effects to be much more common than large ones. In a previous commentary (on the Vul et al “voodoo correlations” paper), I discussed several reasons for this; rather than go into detail here, I’ll just summarize them: • It’s frankly just not plausible to suppose that effects are really as big as they would have to be in order to support adequately powered analyses with small samples. For example, a correlational analysis with 20 subjects at p < .001 would require a population effect size of r = .77 to have 80% power. If you think it’s plausible that focal activation in a single brain region can explain 60% of the variance in a complex trait like fluid intelligence or extraversion, I have some property under a bridge I’d like you to come by and look at. • The low-hanging fruit get picked off first. Back when fMRI was in its infancy in the mid-1990s, people could indeed publish findings based on samples of 4 or 5 subjects. I’m not knocking those studies; they taught us a huge amount about brain function. In fact, it’s precisely because they taught us so much about the brain that researchers can no longer stick 5 people in a scanner and report that doing a working memory task robustly activates the frontal cortex. Nowadays, identifying an interesting effect is more difficult—and if that effect were really enormous, odds are someone would have found it years ago. But this shouldn’t surprise us; neuroimaging is now a relatively mature discipline, and effects on the order of 1 sd or more are extremely rare in most mature fields (for a nice review, see Meyer et al (2001)). • fMRI studies with very large samples invariably seem to report much smaller effects than fMRI studies with small samples. This can only mean one of two things: (a) large studies are done much more poorly than small studies (implausible—if anything, the opposite should be true); or (b) the true effects are actually quite small in both small and large fMRI studies, but they’re inflated by selection bias in small studies, whereas large studies give an accurate estimate of their magnitude (very plausible). • Individual differences or between-group analyses, which have much less power than within-subject analyses, tend to report much more sparing activations. Again, this is consistent with the true population effects being on the small side. To be clear, I’m not saying there are never any large effects in fMRI studies. Under the right circumstances, there certainly will be. What I’m saying is that, in the absence of very good reasons to suppose that a particular experimental manipulation is going to produce a large effect, our default assumption should be that the vast majority of (interesting) experimental contrasts are going to produce diffuse and relatively weak effects. Note that Friston’s assertion that “if one finds a significant effect with a small sample size, it is likely to have been caused by a large effect size” depends entirely on the prior effect size distribution. If the brain maps we look at are actually dominated by truly small effects, then it’s simply not true that a statistically significant effect obtained from a small sample is likely to have been caused by a large effect size. We can see this easily by thinking of a situation in which an experiment has a weak but very diffuse effect on brain activity. Suppose that the entire brain showed ‘trivial’ effects of d = 0.125 in the population, and that there were actually no large effects at all. A one-sample t-test at p < .001 has less than 1% power to detect this effect, so you might suppose, as Friston does, that we could discount the possibility that a significant effect would have come from a trivial effect size. And yet, because a whole-brain analysis typically involves tens of thousands of tests, there’s a very good chance such an analysis will end up identifying statistically significant effects somewhere in the brain. Unfortunately, because the only way to identify a trivial effect with a small sample is to capitalize on chance (Friston discusses this point in his Appendix II, and additional treatments can be found in Ionnadis (2008), or in my 2009 commentary), that tiny effect won’t look tiny when we examine it; it will in all likelihood look enormous. Since they say a picture is worth a thousand words, here’s one (from an unpublished paper in progress): The top panel shows you a hypothetical distribution of effects (Pearson’s r) in a 2-dimensional ‘brain’ in the population. Note that there aren’t any astronomically strong effects (though the white circles indicate correlations of .5 or greater, which are certainly very large). The bottom panel shows what happens when you draw random samples of various sizes from the population and use different correction thresholds/approaches. You can see that the conclusion you’d draw if you followed Friston’s advice—i.e., that any effect you observe with n = 20 must be pretty robust to survive correction—is wrong; the isolated region that survives correction at FDR = .05, while ‘real’ in a trivial sense, is not in fact very strong in the true map—it just happens to be grossly inflated by sampling error. This is to be expected; when power is very low but the number of tests you’re performing is very large, the odds are good that you’ll end up identifying some real effect somewhere in the brain–and the estimated effect size within that region will be grossly distorted because of the selection process. Encouraging people to use small samples is a sure way to ensure that researchers continue to publish highly biased findings that lead other researchers down garden paths trying unsuccessfully to replicate ‘huge’ effects. It may make for an interesting, more publishable story (who wouldn’t rather talk about the single cluster that supports human intelligence than about the complex, highly distributed pattern of relatively weak effects?), but it’s bad science. It’s exactly the same problem geneticists confronted ten or fifteen years ago when the first candidate gene and genome-wide association studies (GWAS) seemed to reveal remarkably strong effects of single genetic variants that subsequently failed to replicate. And it’s the same reason geneticists now run association studies with 10,000+ subjects and not 300. Unfortunately, the costs of fMRI scanning haven’t come down the same way the costs of genotyping have, so there’s tremendous resistance at present to the idea that we really do need to routinely acquire much larger samples if we want to get a clear picture of how big effects really are. Be that as it may, we shouldn’t indulge in wishful thinking just because of logistical constraints. The fact that it’s difficult to get good estimates doesn’t mean we should pretend our bad estimates are actually good ones. What’s right with the argument Having criticized much of Friston’s commentary, I should note that there’s one part I like a lot, and that’s the section on protected inference in Appendix I. The point Friston makes here is that you can still use a standard hypothesis testing approach fruitfully—i.e., without falling prey to the problem of classical inference—so long as you explicitly protect against the possibility of identifying trivial effects. Friston’s treatment is mathematical, but all he’s really saying here is that it makes sense to use non-zero ranges instead of true null hypotheses. I’ve advocated the same approach before (e.g., here), as I’m sure many other people have. The point is simple: if you think an effect of, say, 1/8th of a standard deviation is too small to care about, then you should define a ‘pseudonull’ hypothesis of d = -.125 to .125 instead of a null of exactly zero. Once you do that, any time you reject the null, you’re now entitled to conclude with reasonable certainty that your effects are in fact non-trivial in size. So I completely agree with Friston when he observes in the conclusion to the Appendix I that: …the adage ‘you can never have enough data’ is also true, provided one takes care to protect against inference on trivial effect sizes – for example using protected inference as described above. Of course, the reason I agree with it is precisely because it directly contradicts Friston’s dominant recommendation to use small samples. In fact, since rejecting non-zero values is more difficult than rejecting a null of zero, when you actually perform power calculations based on protected inference, it becomes immediately apparent just how inadequate samples on the order of 16 – 32 subjects will be most of the time (e.g., rejecting a null of zero when detecting an effect of d = 0.5 with 80% power using a one-sample t-test at p < .05 requires 33 subjects, but if you want to reject a ‘trivial’ effect size of d <= |.125|, that n is now upwards of 50). Reprising the rules With the above considerations in mind, we can now turn back to Friston’s rules 4, 5, and 8, and see why his admonitions to reviewers are uncharitable at best and insensible at worst. First, Rule 4 (the under-sampled study). Here’s the kind of comment Friston (ironically) argues reviewers should avoid: Reviewer: Unfortunately, this paper cannot be accepted due to the small number of subjects. The significant results reported by the authors are unsafe because the small sample size renders their design insufficiently powered. It may be appropriate to reconsider this work if the authors recruit more subjects. Perhaps many reviewers make exactly this argument; I haven’t been an editor, so I don’t know (though I can say that I’ve read many reviews of papers I’ve co-reviewed and have never actually seen this particular variant). But even if we give Friston the benefit of the doubt and accept that one shouldn’t question the validity of a finding on the basis of small samples (i.e., we accept that p values mean the same thing in large and small samples), that doesn’t mean the more general critique from low power is itself a bad one. To the contrary, a much better form of the same criticism–and one that I’ve raised frequently myself in my own reviews–is the following: Reviewer: the authors draw some very strong conclusions in their Discussion about the implications of their main finding. But their finding issues from a sample of only 16 subjects, and the confidence interval around the effect is consequently very large, and nearly include zero. In other words, the authors’ findings are entirely consistent with the effect they report actually being very small–quite possibly too small to care about. The authors should either weaken their assertions considerably, or provide additional evidence for the importance of the effect. Or another closely related one, which I’ve also raised frequently: Reviewer: the authors tout their results as evidence that region R is ‘selectively’ activated by task T. However, this claim is based entirely on the fact that region R was the only part of the brain to survive correction for multiple comparisons. Given that the sample size in question is very small, and power to detect all but the very largest effects is consequently very low, the authors are in no position to conclude that the absence of significant effects elsewhere in the brain suggests selectivity in region R. With this small a sample, the authors’ data are entirely consistent with the possibility that many other brain regions are just as strongly activated by task T, but failed to attain significance due to sampling error. The authors should either avoid making any claim that the activity they observed is selective, or provide direct statistical support for their assertion of selectivity. Neither of these criticisms can be defused by suggesting that effect sizes from smaller samples are likely to be larger than effect sizes from large studies. And it would be disastrous for the field of neuroimaging if Friston’s commentary succeeded in convincing reviewers to stop criticizing studies on the basis of low power. If anything, we collectively need to focus far greater attention on issues surrounding statistical power. Next, Rule 5 (the over-sampled study): Reviewer: I would like to commend the authors for studying such a large number of subjects; however, I suspect they have not heard of the fallacy of classical inference. Put simply, when a study is overpowered (with too many subjects), even the smallest treatment effect will appear significant. In this case, although I am sure the population effects reported by the authors are significant; they are probably trivial in quantitative terms. It would have been much more compelling had the authors been able to show a significant effect without resorting to large sample sizes. However, this was not the case and I cannot recommend publication. I’ve already addressed this above; the problem with this line of reasoning is that nothing says you have to care equally about every statistically significant effect you detect. If you ever run into a reviewer who insists that your sample is overpowered and has consequently produced too many statistically significant effects, you can simply respond like this: Response: we appreciate the reviewer’s concern that our sample is potentially overpowered. However, this strikes us as a limitation of classical inference rather than a problem with our study. To the contrary, the benefit of having a large sample is that we are able to focus on effect sizes rather than on rejecting a null hypothesis that we would argue is meaningless to begin with. To this end, we now display a second, more conservative, brain activation map alongside our original one that raises the statistical threshold to the point where the confidence intervals around all surviving voxels exclude effects smaller than d = .125. The reviewer can now rest assured that our results protect against trivial effects. We would also note that this stronger inference would not have been possible if our study had had a much smaller sample. There is rarely if ever a good reason to criticize authors for having a large sample after it’s already collected. You can always raise the statistical threshold to protect against trivial effects if you need to; what you can’t easily do is magic more data into existence in order to shrink your confidence intervals. Lastly, Rule 8 (exploiting ‘superstitious’ thinking about effect sizes): Reviewer: It appears that the authors are unaware of the dangers of voodoo correlations and double dipping. For example, they report effect sizes based upon data (regions of interest) previously identified as significant in their whole brain analysis. This is not valid and represents a pernicious form of double dipping (biased sampling or non-independence problem). I would urge the authors to read Vul et al. (2009) and Kriegeskorte et al. (2009) and present unbiased estimates of their effect size using independent data or some form of cross validation. Friston’s recommended response is to point out that concerns about double-dipping are misplaced, because the authors are typically not making any claims that the reported effect size is an accurate representation of the population value, but only following standard best-practice guidelines to include effect size measures alongside p values. This would be a fair recommendation if it were true that reviewers frequently object to the mere act of reporting effect sizes based on the specter of double-dipping; but I simply don’t think this is an accurate characterization. In my experience, the impetus for bringing up double-dipping is almost always one of two things: (a) authors getting overly excited about the magnitude of the effects they have obtained, or (b) authors conducting non-independent tests and treating them as though they were independent (e.g., when identifying an ROI based on a comparison of conditions A and B, and then reporting a comparison of A and C without considering the bias inherent in this second test). Both of these concerns are valid and important, and it’s a very good thing that reviewers bring them up. The right way to determine sample size If we can’t rely on blanket recommendations to guide our choice of sample size, then what? Simple: perform a power calculation. There’s no mystery to this; both brief and extended treatises on statistical power are all over the place, and power calculators for most standard statistical tests are available online as well as in most off-line statistical packages (e.g., I use the pwr package for R). For more complicated statistical tests for which analytical solutions aren’t readily available (e.g., fancy interactions involving multiple within- and between-subject variables), you can get reasonably good power estimates through simulation. Of course, there’s no guarantee you’ll like the answers you get. Actually, in most cases, if you’re honest about the numbers you plug in, you probably won’t like the answer you get. But that’s life; nature doesn’t care about making things convenient for us. If it turns out that it takes 80 subjects to have adequate power to detect the effects we care about and expect, we can (a) suck it up and go for n = 80, (b) decide not to run the study, or (c) accept that logistical constraints mean our study will have less power than we’d like (which implies that any results we obtain will offer only a fractional view of what’s really going on). What we don’t get to do is look the other way and pretend that it’s just fine to go with 16 subjects simply because the last time we did that, we got this amazingly strong, highly selective activation that successfully made it into a good journal. That’s the same logic that repeatedly produced unreplicable candidate gene findings in the 1990s, and, if it continues to go unchecked in fMRI research, risks turning the field into a laughing stock among other scientific disciplines. The point of all this is not to convince you that it’s impossible to do good fMRI research with just 16 subjects, or that reviewers don’t sometimes say silly things. There are many questions that can be answered with 16 or even fewer subjects, and reviewers most certainly do say silly things (I sometimes cringe when re-reading my own older reviews). The point is that blanket pronouncements, particularly when made ironically and with minimal qualification, are not helpful in advancing the field, and can be very damaging. It simply isn’t true that there’s some magic sample size range like 16 to 32 that researchers can bank on reflexively. If there’s any generalization that we can allow ourselves, it’s probably that, under reasonable assumptions, Friston’s recommendations are much too conservative. Typical effect sizes and analysis procedures will generally require much larger samples than neuroimaging researchers are used to collecting. But again, there’s no substitute for careful case-by-case consideration. In the natural course of things, there will be cases where n = 4 is enough to detect an effect, and others where the effort is questionable even with 100 subjects; unfortunately, we won’t know which situation we’re in unless we take the time to think carefully and dispassionately about what we’re doing. It would be nice to believe otherwise; certainly, it would make life easier for the neuroimaging community in the short term. But since the point of doing science is to discover what’s true about the world, and not to publish an endless series of findings that sound exciting but don’t replicate, I think we have an obligation to both ourselves and to the taxpayers that fund our research to take the exercise more seriously. Appendix I: Evaluating Friston’s loss-function analysis In this appendix I review a number of weaknesses in Friston’s loss-function analysis, and show that under realistic assumptions, the recommendation to use sample sizes of 16 – 32 subjects is far too First, the numbers don’t seem to be right. I say this with a good deal of hesitation, because I have very poor mathematical skills, and I’m sure Friston is much smarter than I am. That said, I’ve tried several different power packages in R and finally resorted to empirically estimating power with simulated draws, and all approaches converge on numbers quite different from Friston’s. Even the sensitivity plots seem off by a good deal (for instance, Friston’s Figure 3 suggests around 30% sensitivity with n = 80 and d = 0.125, whereas all the sources I’ve consulted produce a value around 20%). In my analysis, the loss function is minimized at n = 22 rather than n = 16. I suspect the problem is with Friston’s approximation, but I’m open to the possibility that I’ve done something very wrong, and confirmations or disconfirmations are welcome in the comments below. In what follows, I’ll report the numbers I get rather than Friston’s (mine are somewhat more pessimistic, but the overarching point doesn’t change either way). Second, there’s the statistical threshold. Friston’s analysis assumes that all of our tests are conducted without correction for multiple comparisions (i.e., at p < .05), but this clearly doesn’t apply to the vast majority of neuroimaging studies, which are either conducting massive univariate (whole-brain) analyses, or testing at least a few different ROIs or networks. As soon as you lower the threshold, the optimal sample size returned by the loss-function analysis increases dramatically. If the threshold is a still-relatively-liberal (for whole-brain analysis) p < .001, the loss function is now minimized at 48 subjects–hardly a welcome conclusion, and a far cry from 16 subjects. Since this is probably still the modal fMRI threshold, one could argue Friston should have been trumpeting a sample size of 48 all along—not exactly a ‘small’ sample size given the associated costs. Third, the n = 16 (or 22) figure only holds for the simplest of within-subject tests (e.g., a one-sample t-test)–again, a best-case scenario (though certainly a common one). It doesn’t apply to many other kinds of tests that are the primary focus of a huge proportion of neuroimaging studies–for instance, two-sample t-tests, or interactions between multiple within-subject factors. In fact, if you apply the same analysis to a two-sample t-test (or equivalently, a correlation test), the optimal sample size turns out to be 82 (41 per group) at a threshold of p < .05, and a whopping 174 (87 per group) at a threshold of p < .001. In other words, if we were to follow Friston’s own guidelines, the typical fMRI researcher who aims to conduct a (liberal) whole-brain individual differences analysis should be collecting 174 subjects a pop. For other kinds of tests (e.g., 3-way interactions), even larger samples might be required. Fourth, the claim that only large effects–i.e., those that can be readily detected with a sample size of 16–are worth worrying about is likely to annoy and perhaps offend any number of researchers who have perfectly good reasons for caring about effects much smaller than half a standard deviation. A cursory look at most literatures suggests that effects of 1 sd are not the norm; they’re actually highly unusual in mature fields. For perspective, the standardized difference in height between genders is about 1.5 sd; the validity of job interviews for predicting success is about .4 sd; and the effect of gender on risk-taking (men take more risks) is about .2 sd—what Friston would call a very small effect (for other examples, see Meyer et al., 2001). Against this backdrop, suggesting that only effects greater than 1 sd (about the strength of the association between height and weight in adults) are of interest would seem to preclude many, and perhaps most, questions that researchers currently use fMRI to address. Imaging genetics studies are immediately out of the picture; so too, in all likelihood, are cognitive training studies, most investigations of individual differences, and pretty much any experimental contrast that claims to very carefully isolate a relatively subtle cognitive difference. Put simply, if the field were to take Friston’s analysis seriously, the majority of its practitioners would have to pack up their bags and go home. Entire domains of inquiry would shutter overnight. To be fair, Friston briefly considers the possibility that small sample sizes could be important. But he doesn’t seem to take it very seriously: Can true but trivial effect sizes can ever be interesting? It could be that a very small effect size may have important implications for understanding the mechanisms behind a treatment effect – and that one should maximise sensitivity by using large numbers of subjects. The argument against this is that reporting a significant but trivial effect size is equivalent to saying that one can be fairly confident the treatment effect exists but its contribution to the outcome measure is trivial in relation to other unknown effects… The problem with the latter argument is that the real world is a complicated place, and most interesting phenomena have many causes. A priori, it is reasonable to expect that the vast majority of effects will be small. We probably shouldn’t expect any single genetic variant to account for more than a small fraction of the variation in brain activity, but that doesn’t mean we should give up entirely on imaging genetics. And of course, it’s worth remembering that, in the context of fMRI studies, when Friston talks about ‘very small effect sizes,’ that’s a bit misleading; even medium-sized effects that Friston presumably allows are interesting could be almost impossible to detect at the sample sizes he recommends. For example, a one-sample t-test with n = 16 subjects detects an effect of d = 0.5 only 46% or 5% of the time at p < .05 and p < .001, respectively. Applying Friston’s own loss function analysis to detection of d = 0.5 returns an optimal sample size of n = 63 at p < .05 and n = 139 at p < .001—a message not entirely consistent with the recommendations elsewhere in his commentary. Friston, K. (2012). Ten ironic rules for non-statistical reviewers NeuroImage DOI: 10.1016/j.neuroimage.2012.04.018 time-on-task effects in fMRI research: why you should care There’s a ubiquitous problem in experimental psychology studies that use behavioral measures that require participants to make speeded responses. The problem is that, in general, the longer people take to do something, the more likely they are to do it correctly. If I have you do a visual search task and ask you to tell me whether or not a display full of letters contains a red ‘X’, I’m not going to be very impressed that you can give me the right answer if I let you stare at the screen for five minutes before responding. In most experimental situations, the only way we can learn something meaningful about people’s capacity to perform a task is by imposing some restriction on how long people can take to respond. And the problem that then presents is that any changes we observe in the resulting variable we care about (say, the proportion of times you successfully detect the red ‘X’) are going to be confounded with the time people took to respond. Raise the response deadline and performance goes up; shorten it and performance goes down. This fundamental fact about human performance is commonly referred to as the speed-accuracy tradeoff. The speed-accuracy tradeoff isn’t a law in any sense; it allows for violations, and there certainly are situations in which responding quickly can actually promote accuracy. But as a general rule, when researchers run psychology experiments involving response deaadlines, they usually work hard to rule out the speed-accuracy tradeoff as an explanation for any observed results. For instance, if I have a group of adolescents with ADHD do a task requiring inhibitory control, and compare their performance to a group of adolescents without ADHD, I may very well find that the ADHD group performs more poorly, as reflected by lower accuracy rates. But the interpretation of that result depends heavily on whether or not there are also any differences in reaction times (RT). If the ADHD group took about as long on average to respond as the non-ADHD group, it might be reasonable to conclude that the ADHD group suffers a deficit in inhibitory control: they take as long as the control group to do the task, but they still do worse. On the other hand, if the ADHD group responded much faster than the control group on average, the interpretation would become more complicated. For instance, one possibility would be that the accuracy difference reflects differences in motivation rather than capacity per se. That is, maybe the ADHD group just doesn’t care as much about being accurate as about responding quickly. Maybe if you motivated the ADHD group appropriately (e.g., by giving them a task that was intrinsically interesting), you’d find that performance was actually equivalent across groups. Without explicitly considering the role of reaction time–and ideally, controlling for it statistically–the types of inferences you can draw about underlying cognitive processes are somewhat limited. An important point to note about the speed-accuracy tradeoff is that it isn’t just a tradeoff between speed and accuracy; in principle, any variable that bears some systematic relation to how long people take to respond is going to be confounded with reaction time. In the world of behavioral studies, there aren’t that many other variables we need to worry about. But when we move to the realm of brain imaging, the game changes considerably. Nearly all fMRI studies measure something known as the blood-oxygen-level-dependent (BOLD) signal. I’m not going to bother explaining exactly what the BOLD signal is (there are plenty of other excellent explanations at varying levels of technical detail, e.g., here, here, or here); for present purposes, we can just pretend that the BOLD signal is basically a proxy for the amount of neural activity going on in different parts of the brain (that’s actually a pretty reasonable assumption, as emerging studies continue to demonstrate). In other words, a simplistic but not terribly inaccurate model is that when neurons in region X increase their firing rate, blood flow in region X also increases, and so in turn does the BOLD signal that fMRI scanners detect. A critical question that naturally arises is just how strong the temporal relation is between the BOLD signal and underlying neuronal processes. From a modeling perspective, what we’d really like is a system that’s completely linear and time-invariant–meaning that if you double the duration of a stimulus presented to the brain, the BOLD response elicited by that stimulus also doubles, and it doesn’t matter when the stimulus is presented (i.e., there aren’t any funny interactions between different phases of the response, or with the responses to other stimuli). As it turns out, the BOLD response isn’t perfectly linear, but it’s pretty close. In a seminal series of studies in the mid-90s, Randy Buckner, Anders Dale and others showed that, at least for stimuli that aren’t presented extremely rapidly (i.e., a minimum of 1 – 2 seconds apart), we can reasonably pretend that the BOLD response sums linearly over time without suffering any serious ill effects. And that’s extremely fortunate, because it makes modeling brain activation with fMRI much easier to do. In fact, the vast majority of fMRI studies, which employ what are known as rapid event-related designs, implicitly assume linearity. If the hemodynamic response wasn’t approximately linear, we would have to throw out a very large chunk of the existing literature–or at least seriously question its conclusions. Aside from the fact that it lets us model things nicely, the assumption of linearity has another critical, but underappreciated, ramification for the way we do fMRI research. Which is this: if the BOLD response sums approximately linearly over time, it follows that two neural responses that have the same amplitude but differ in duration will produce BOLD responses with different amplitudes. To characterize that visually, here’s a figure from a paper I published with Deanna Barch, Jeremy Gray, Tom Conturo, and Todd Braver last year: Each of these panels shows you the firing rates and durations of two hypothetical populations of neurons (on the left), along with the (observable) BOLD response that would result (on the right). Focus your attention on panel C first. What this panel shows you is what, I would argue, most people intuitively think of when they come across a difference in activation between two conditions. When you see time courses that clearly differ in their amplitude, it’s very natural to attribute a similar difference to the underlying neuronal mechanisms, and suppose that there must just be more firing going on in one condition than the other–where ‘more’ is taken to mean something like “firing at a higher rate”. The problem, though, is that this inference isn’t justified. If you look at panel B, you can see that you get exactly the same pattern of observed differences in the BOLD response even when the amplitude of neuronal activation is identical, simply because there’s a difference in duration. In other words, if someone shows you a plot of two BOLD time courses for different experimental conditions, and one has a higher amplitude than the other, you don’t know whether that’s because there’s more neuronal activation in one condition than the other, or if processing is identical in both conditions but simply lasts longer in one than in the other. (As a technical aside, this equivalence only holds for short trials, when the BOLD response doesn’t have time to saturate. If you’re using longer trials–say 4 seconds more more–then it becomes fairly easy to tell apart changes in duration from changes in amplitude. But the vast majority of fMRI studies use much shorter trials, in which case the problem I describe holds.) Now, functionally, this has some potentially very serious implications for the inferences we can draw about psychological processes based on observed differences in the BOLD response. What we would usually like to conclude when we report “more” activation for condition X than condition Y is that there’s some fundamental difference in the nature of the processes involved in the two conditions that’s reflected at the neuronal level. If it turns out that the reason we see more activation in one condition than the other is simply that people took longer to respond in one condition than in the other, and so were sustaining attention for longer, that can potentially undermine that conclusion. For instance, if you’re contrasting a feature search condition with a conjunction search condition, you’re quite likely to observe greater activation in regions known to support visual attention. But since a central feature of conjunction search is that it takes longer than a feature search, it could theoretically be that the same general regions support both types of search, and what we’re seeing is purely a time-on-task effect: visual attention regions are activated for longer because it takes longer to complete the conjunction search, but these regions aren’t doing anything fundamentally different in the two conditions (at least at the level we can see with fMRI). So this raises an issue similar to the speed-accuracy tradeoff we started with. Other things being equal, the longer it takes you to respond, the more activation you’ll tend to see in a given region. Unless you explicitly control for differences in reaction time, your ability to draw conclusions about underlying neuronal processes on the basis of observed BOLD differences may be severely hampered. It turns out that very few fMRI studies actually control for differences in RT. In an elegant 2008 study discussing different ways of modeling time-varying signals, Jack Grinband and colleagues reviewed a random sample of 170 studies and found that, “Although response times were recorded in 82% of event-related studies with a decision component, only 9% actually used this information to construct a regression model for detecting brain activity”. Here’s what that looks like (Panel C), along with some other interesting information about the procedures used in fMRI studies: So only one in ten studies made any effort to control for RT differences; and Grinband et al argue in their paper that most of those papers didn’t model RT the right way anyway (personally I’m not sure I agree; I think there are tradeoffs associated with every approach to modeling RT–but that’s a topic for another post). The relative lack of attention to RT differences is particularly striking when you consider what cognitive neuroscientists do care a lot about: differences in response accuracy. The majority of researchers nowadays make a habit of discarding all trials on which participants made errors. The justification we give for this approach–which is an entirely reasonable one–is that if we analyzed correct and incorrect trials together, we’d be confounding the processes we care about (e.g., differences between conditions) with activation that simply reflects error-related processes. So we drop trials with errors, and that gives us cleaner results. I suspect that the reasons for our concern with accuracy effects but not RT effects in fMRI research are largely historical. In the mid-90s, when a lot of formative cognitive neuroscience was being done, people (most of them then located in Pittsburgh, working in Jonathan Cohen‘s group) discovered that the brain doesn’t like to make errors. When people make mistakes during task performance, they tend to recognize that fact; on a neural level, frontoparietal regions implicated in goal-directed processing–and particularly the anterior cingulate cortex–ramp up activation substantially. The interpretation of this basic finding has been a source of much contention among cognitive neuroscientists for the past 15 years, and remains a hot area of investigation. For present purposes though, we don’t really care why error-related activation arises; the point is simply that it does arise, and so we do the obvious thing and try to eliminate it as a source of error from our analyses. I suspect we don’t do the same for RT not because we lack principled reasons to, but because there haven’t historically been clear-cut demonstrations of the effects of RT differences on brain The goal of the 2009 study I mentioned earlier was precisely to try to quantify those effects. The hypothesis my co-authors and I tested was straightforward: if brain activity scales approximately linearly with RT (as standard assumptions would seem to entail), we should see a strong “time-on-task” effect in brain areas that are associated with the general capacity to engage in goal-directed processing. In other words, on trials when people take longer to respond, activation in frontal and parietal regions implicated in goal-directed processing and cognitive control should increase. These regions are often collectively referred to as the “task-positive” network (Fox et al., 2005), in reference to the fact that they tend to show activation increases any time people are engaging in goal-directed processing, irrespective of the precise demands of the task. We figured that identifying a time-on-task effect in the task-positive network would provide a nice demonstration of the relation between RT differences and the BOLD response, since it would underscore the generality of the problem. Concretely, what we did was take five datasets that were lying around from previous studies, and do a multi-study analysis focusing specifically on RT-related activation. We deliberately selected studies that employed very different tasks, designs, and even scanners, with the aim of ensuring the generalizability of the results. Then, we identified regions in each study in which activation covaried with RT on a trial-by-trial basis. When we put all of the resulting maps together and picked out only those regions that showed an association with RT in all five studies, here’s the map we There’s a lot of stuff going on here, but in the interest of keeping this post short slightly less excruciatingly long, I’ll stick to the frontal areas. What we found, when we looked at the timecourse of activation in those regions, was the predicted time-on-task effect. Here’s a plot of the timecourses from all five studies for selected regions: If you focus on the left time course plot for the medial frontal cortex (labeled R1, in row B), you can see that increases in RT are associated with increased activation in medial frontal cortex in all five studies (the way RT effects are plotted here is not completely intuitive, so you may want to read the paper for a clearer explanation). It’s worth pointing out that while these regions were all defined based on the presence of an RT effect in all five studies, the precise shape of that RT effect wasn’t constrained; in principle, RT could have exerted very different effects across the five studies (e.g., positive in some, negative in others; early in some, later in others; etc.). So the fact that the timecourses look very similar in all five studies isn’t entailed by the analysis, and it’s an independent indicator that there’s something important going on here. The clear-cut implication of these findings is that a good deal of BOLD activation in most studies can be explained simply as a time-on-task effect. The longer you spend sustaining goal-directed attention to an on-screen stimulus, the more activation you’ll show in frontal regions. It doesn’t much matter what it is that you’re doing; these are ubiquitous effects (since this study, I’ve analyzed many other datasets in the same way, and never fail to find the same basic relationship). And it’s worth keeping in mind that these are just the regions that show common RT-related activation across multiple studies; what you’re not seeing are regions that covary with RT only within one (or for that matter, four) studies. I’d argue that most regions that show involvement in a task are probably going to show variations with RT. After all, that’s just what falls out of the assumption of linearity–an assumption we all depend on in order to do our analyses in the first place. Exactly what proportion of results can be explained away as time-on-task effects? That’s impossible to determine, unfortunately. I suspect that if you could go back through the entire fMRI literature and magically control for trial-by-trial RT differences in every study, a very large number of published differences between experimental conditions would disappear. That doesn’t mean those findings were wrong or unimportant, I hasten to note; there are many cases in which it’s perfectly appropriate to argue that differences between conditions should reflect a difference in quantity rather than quality. Still, it’s clear that in many cases that isn’t the preferred interpretation, and controlling for RT differences probably would have changed the conclusions. As just one example, much of what we think of as a “conflict” effect in the medial frontal cortex/anterior cingulate could simply reflect prolonged attention on high-conflict trials. When you’re experiencing cognitive difficulty or conflict, you tend to slow down and take longer to respond, which is naturally going to produce BOLD increases that scale with reaction time. The question as to what remains of the putative conflict signal after you control for RT differences is one that hasn’t really been adequately addressed yet. The practical question, of course, is what we should do about this. How can we minimize the impact of the time-on-task effect on our results, and, in turn, on the conclusions we draw? I think the most general suggestion is to always control for reaction time differences. That’s really the only way to rule out the possibility that any observed differences between conditions simply reflect differences in how long it took people to respond. This leaves aside the question of exactly how one should model out the effect of RT, which is a topic for another time (though I discuss it at length in the paper, and the Grinband paper goes into even more detail). Unfortunately, there isn’t any perfect solution; as with most things, there are tradeoffs inherent in pretty much any choice you make. But my personal feeling is that almost any approach one could take to modeling RT explicitly is a big step in the right direction. A second, and nearly as important, suggestion is to not only control for RT differences, but to do it both ways. Meaning, you should run your model both with and without an RT covariate, and carefully inspect both sets of results. Comparing the results across the two models is what really lets you draw the strongest conclusions about whether activation differences between two conditions reflect a difference of quality or quantity. This point applies regardless of which hypothesis you favor: if you think two conditions draw on very similar neural processes that differ only in degree, your prediction is that controlling for RT should make effects disappear. Conversely, if you think that a difference in activation reflects the recruitment of qualitatively different processes, you’re making the prediction that the difference will remain largely unchanged after controlling for RT. Either way, you gain important information by comparing the two models. The last suggestion I have to offer is probably obvious, and not very helpful, but for what it’s worth: be cautious about how you interpret differences in activation any time there are sizable differences in task difficulty and/or mean response time. It’s tempting to think that if you always analyze only trials with correct responses and follow the suggestions above to explicitly model RT, you’ve done all you need in order to perfectly control for the various tradeoffs and relationships between speed, accuracy, and cognitive effort. It really would be nice if we could all sleep well knowing that our data have unambiguous interpretations. But the truth is that all of these techniques for “controlling” for confounds like difficulty and reaction time are imperfect, and in some cases have known deficiencies (for instance, it’s not really true that throwing out error trials eliminates all error-related activation from analysis–sometimes when people don’t know the answer, they guess right!). That’s not to say we should stop using the tools we have–which offer an incredibly powerful way to peer inside our gourds–just that we should use them carefully. Yarkoni T, Barch DM, Gray JR, Conturo TE, & Braver TS (2009). BOLD correlates of trial-by-trial reaction time variability in gray and white matter: a multi-study fMRI analysis. PloS one, 4 (1) PMID: Grinband J, Wager TD, Lindquist M, Ferrera VP, & Hirsch J (2008). Detection of time-varying signals in event-related fMRI designs. NeuroImage, 43 (3), 509-20 PMID: 18775784
{"url":"http://www.talyarkoni.org/blog/tag/neuroimaging/","timestamp":"2014-04-17T21:41:26Z","content_type":null,"content_length":"110311","record_id":"<urn:uuid:0cb906fa-0752-46a0-ac98-1d77db70b8e4>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Twins paradox and ageing The folks left behind are moving at exactly the same speed as the folks in the ship (when seen from the ship) (I say speed not velocity, the velocities are equal and opposite). This is simply not true. The coordinate velocity of earth in the ship's frame doesn't match the velocity of the ship in earth's frame for the entire trip. In the ship's frame(s), earth's coordinate distance from the ship increases at a constant rate, then during the turnaround goes from the length contracted distance to the proper distance and back again in a very short time, then decreases at a constant rate back to zero. This is very different from the ship's coordinate distance from earth in earth's frame, which just increases at a constant rate then decreases at a constant rate. Maybe someone could graph the coordinate distance between the earth and ship against each twin's clock reading to show how different they are.
{"url":"http://www.physicsforums.com/showpost.php?p=2501951&postcount=24","timestamp":"2014-04-19T15:13:16Z","content_type":null,"content_length":"8317","record_id":"<urn:uuid:63cf8d63-2eaf-42c9-ae70-ba17a047d690>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Electrically-aware design improves analog/mixed-signal productivity | EE Times Design How-To Electrically-aware design improves analog/mixed-signal productivity Electromigration and its causes Electromigration and its causes Electromigration (EM) effects can seriously damage interconnect wires and vias, having an adverse impact on IC reliability. Electrically-aware design provides new in-design methodology opportunities for EM verification, and the same general methodology could be applied to a range of in-design electrical checking and simulation solutions. EM is the transport of material in a solid conductor that results from collisions between the flow of electrons and metal atoms in the interconnect. Proportional to the current per unit area, the continual movement of metal atoms from their lattice position can lead to a degradation in performance as the resistance of the interconnect increases. At some point the wire eventually fails, creating an electrical open (void) or short connection (hillock) downstream. The dominant factor in determining mean time to failure (MTTF) is the current density. Since current density depends on the wire geometry, it cannot be determined until the routed net is generated. Given that the width of the wire or via is a variable, designers need to know whether the current flowing through that particular area has exceeded the maximum allowed current. The maximum current limit for a given geometry is expressed as rules that every wire segment or via must adhere to for a specified operating or maximum temperature. This set of rules is contained in a technology file that is computed and distributed according to each process technology specification. With the aggressive scaling of interconnect geometries at advanced technology nodes, the EM effects are dependent not only on the local current density of a geometric wire segment or via, but also on the homogeneity of the current flow through a region. As such, current density limits or rules vary based on not just each geometric dimension of the wire segments and vias, but on the geometries of the connecting wires and vias as well. There are additional complications. There is a minimum length below which the net won't fail due to EM, commonly referred to as Blech length. This adds another dimension to the problem of sizing a wire for EM. The EM rules for a metal layer are specified in buckets for different limits based on the width and length of shapes on the layer. Without assistance from an EDA tool, it would be cumbersome for a layout engineer to route nets on various layers and account for the ever-increasing number of EM rules associated with these buckets. The EM rules become even more complicated when vias and contacts are considered. Traditionally, single via or contact cuts had a current density number associated with them, while clusters of vias would have more relaxed rules. At advanced technology nodes, a single via or contact will have different current density limits based on different shapes (e.g. square or rectangular). This issue is compounded when additional conductors are connected to the via(s) and the current density limits can change based on the width and length of the connecting conductors. This behavior is illustrated in Figure1 with three interconnect examples that are based on different widths and lengths of connecting wires. All three have different current density (EM) limits. Each has an upper and lower interconnect wire connected through the same via shape, but each example forms a different geometric profile where the length or width dimensions of the interconnect differ. The leftmost connection in the figure has the shortest lengths in the dimension of the upper and lower interconnect wires, with the result that the connecting via cut has the highest current density limit and thus can carry the most current without violating the limit. The middle connection has the same interconnect wire width as the left one, but the wire lengths of the upper and lower interconnect are much longer. The impact is that the current density limit for the via cut is lower than the left example. As such, the middle connection can carry less current and remain within the EM-related reliability limits. In the rightmost connection in Figure 1, the upper and lower interconnect lengths are the same but the width of the wires coming into the via are reduced. The result is that the current density limit for the via cut is significantly reduced from the middle and leftmost examples. Note that in all three examples the via geometry is the same and, if examined without any context of the surrounding wire geometry, it is impossible to know which current density limits to apply. Figure 1: Geometric-based rules require proximity of current calculations with conductor (net) geometry In addition to the geometric properties of the interconnect, the underlying topology of the routing can also significantly change the distribution of current flow and, subsequently, current densities. Thus the same geometric description (such as a specific width and length of a wire) may or may not meet pre-specified current density limits, depending on the topology of the surrounding Figures 2 and 3 highlight a case where the incoming current to the MOS device may be connected to the strap in two different locations. In Figure 2, the current is sourced into one side of the M2 strap, which results in a large amount of current flowing through the strap to reach the M1 vertical finger connections on the other end. Color coding is used to indicate the proximity of that wire location to the specified current density limit. In this case, the green coding indicates that the current is far from reaching the limit; red coding represents that the current is over the limit for that particular wire segment. Figure 2: Connection to MOS device resulting in current density violations with highlighted circle noting the current density violations in finger connections to the source In Figure 3, the incoming current is connected to the middle of the M2 strap and, as such, the current is distributed more evenly throughout the source vertical connections (fingers). As the color coding indicates, the current limit is not exceeded, even though the geometric properties of the fingers, strap, and incoming wire are very similar—the primary distinction being where the connection has been made. This example also illustrates why it is difficult to specify current-correct routing decisions a priori without knowledge of the underlying topology. This rather simple case was chosen to illustrate the problem, but there are many such cases where the correction to the routing topology is not straightforward, often resulting in a cycle of layout modification, extraction, and Figure 3: Alternate connection to same MOS device from previous figure, resulting in no current density violations The prior examples illustrate just some of the complexities of designing at advanced nodes. As the rules become more geometry-dependent, it will become increasingly difficult to know which wire segments or vias to fix and how to do so without creating another violation. The following paragraphs propose an electrically-aware design methodology for in-design EM checking that improves productivity while reducing the risk and uncertainty of moving to advanced nodes.
{"url":"http://www.eetimes.com/document.asp?doc_id=1280068&page_number=2","timestamp":"2014-04-23T14:33:28Z","content_type":null,"content_length":"144649","record_id":"<urn:uuid:80efde45-9b33-4880-971b-0081c6103d5e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
8 Queens 8 Queens is one of the simple strategy games based on one of the chess rules, demonstrating the behavior of the queen in the board, to win you have to find spots to move the queen 8 times. The eight queens puzzle is based on the classic stategy games problem which is in this case putting eight chess queens on an 8×8 chessboard such that none of them is able to capture any other using the standard chess queen’s moves. The color of the queens is meaningless in this puzzle, and any queen is assumed to be able to attack any other. Thus, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an example of the more general n queens puzzle of placing n8 queens on an n×n chessboard like the board below: Finding all solutions to this strategy game (the 8 queens puzzle) is a good example of a simple but nontrivial problem. For this reason, it is often used as an example problem for various programming techniques, including nontraditional approaches such as constraint programming, logic programming or genetic algorithms. Most often, it is used as an example of a problem which can be solved with a recursive algorithm, by phrasing the n queens problem inductively in terms of adding a single queen to any solution to the n?1 queens problem. The induction bottoms out with the solution to the 0 queens problem, which is an empty chessboard. Related words: Functional programming, Strategy Mathematical game, Backtracking.
{"url":"http://www.brainmetrix.com/8-queens/","timestamp":"2014-04-18T09:06:27Z","content_type":null,"content_length":"16068","record_id":"<urn:uuid:249353a1-6452-4abe-be91-b4087bb04c83>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Collision detection [Archive] - OpenGL Discussion and Help Forums 12-31-2000, 04:49 AM I made a simple 3D world, based on 2D squares. (Each square can have 4 walls, N, E, S or W) To check if the viewer collides with one of the walls, i make the following steps: -calculate the point the viewer is heading to (p2, p1 would be the viewers previous position) -rico = (p2.y - p1.y) / (p2.x - p2.y) -for each wall in a square if (wall.orientation = north) if (p1.x < xt) && ( xt < p2.x) collision; (for south, east and west the code is mush the same) This does work in all cases except -if you walk straight (angle = 0°) on to a wall (i can understand that (div/0)) -in a corner between two walls, you can walk through them What am i doing wrong? (If there are thing you want to know, i'll post the full code)
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-131426.html","timestamp":"2014-04-17T07:27:31Z","content_type":null,"content_length":"4506","record_id":"<urn:uuid:06ddd38d-1ff1-48ab-b539-d65c5616e444>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
strong approximation and one-dimensional automorphic representations up vote 6 down vote favorite Let $D$ be a quaternion algebra over $\mathbf Q$ such that $D\otimes\mathbf R = M_2(\mathbf R)$. Let $\pi = \pi_\infty \otimes_p \pi_p$ be an irreducible automorphic representation of $D^\times$. Supposedly if $\pi_\infty$ is one-dimensional, then every $\pi_p$ is one-dimensional, and it should follow from strong approximation, but how to prove it? automorphic-forms rt.representation-theory nt.number-theory add comment 1 Answer active oldest votes Let $G$ be the group of norm one elements in $D^{\times}$. An easy argument shows that it suffices to prove the claim for $G$ in place of $D^{\times}$. In other words, I will let $\ pi$ be an automorphic rep. of $G$. Now suppose that $\pi_{\infty}$ is one-dimensional. This means that in fact $\pi_{\infty}$ is trivial (because $G(\mathbb Q_v) = SL_2(\mathbb R)$ has no non-trivial characters). Now we use the fact that the automorphic representation lives inside the space of automorphic forms on $G(\mathbb Q)\backslash G(\mathbb A).$ Thus $\pi$ is a space of functions $f(g)$ on $G(\mathbb A)$ such that $f(\gamma g u) = f(g)$ for any $\gamma \in G(\mathbb Q)$ and $u \in U$, a compact open subgroup of $G(\mathbb A^{\infty})$ (with $U$ depending on $f$). Our assumption on $\pi$ shows that furthermore $g_{\infty}f = f$ for all $g_{\infty} \in G(\mathbb R)$. Thus if $\gamma \in G(\mathbb Q)$, $g \in G(\mathbb A)$, $g_{\infty} \in G(\mathbb R)$, and $u \in U$, then $$f(\gamma g ug_{\infty}) = (g_{\infty} f)(\gamma g u) = f(\gamma g u) = f up vote 10 down vote accepted Now strong approximation says that the double coset space $G(\mathbb Q)\backslash G(\mathbb A)/ U G(\mathbb R)$ is a point, and combined with the above calculation, this shows that $f$ is constant, and thus generates the trivial representation under the action of $G(\mathbb A)$. Since $f$ was an arbitrary element of $\pi,$ we see that $\pi$ is trivial, i.e. that all $\pi_v$ are one-dimesional. Note that it is important here that $\pi$ was a subspace of automorphic forms, and not just a subquotient. This is automatic when $D$ is a (non-trivial) quaternion algebra, because then $G$ is anisotropic, hence all automorphic forms are automatically $L^2$, and so the space of automorphic forms is semi-simple. On the other hand, if we consider $GL_2$, then while the above proof goes through for cuspidal representations (which are necessarily subreps., and not just subquotients, of automorphic forms), one can find (non-cuspidal) automorphic reps. of $GL_2$ which are trivial at $\infty$ but infinite-dimensional at the other places. (Of course, these are then subquotients of automorphic forms which can't be split off as subrepresentations.) Thanks, Matt. As always, pure clarity! – unknown Jun 8 '11 at 11:37 add comment Not the answer you're looking for? Browse other questions tagged automorphic-forms rt.representation-theory nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/67207/strong-approximation-and-one-dimensional-automorphic-representations?sort=votes","timestamp":"2014-04-16T22:06:52Z","content_type":null,"content_length":"53467","record_id":"<urn:uuid:1932728c-a532-4cb4-bb18-b9296db29ca0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Functions, Graphs, and Limits There are three steps to solving a math problem. • Figure out what the problem is asking. • Solve the problem. • Check the answer. Sample Problem Graph the function • Figure out what the problem is asking. This problem is asking for a lot. While it may seem like it's asking for a picture, the picture needs to show any holes, vertical asymptotes, horizontal asymptotes, and x or y intercepts of the function. First we need to factor the function: If we simplify, we find Since the term x is removed from the denominator after simplifying, the function has a hole at x = 0. The full coordinates of the hole are (0,-\frac49). Since the expressions (x - 3) and (x + 3) are still in the denominator after simplifying, there will be vertical asymptotes at 3 and -3. Since the degree of the numerator is less than the degree of the denominator, the function will have a horizontal asymptote at 0. In terms of the picture, here's what we have now: Now we need to figure out the sign of the function f using a number line: When x < -3, both (x-3) and (x + 3) are negative, therefore f is positive. When -3 < x < 3 we know (x - 3) is negative and (x + 3) is positive, therefore f is negative. When 3 < x both (x - 3) and (x + 3) are positive, therefore f is positive. Now we can fill in the number line: Now we have enough information to draw the graph. Since f can't change sign on the interval (-∞,-3) or on the interval (3,∞), it must look like this: For a problem like this, it is probably necessary to double check the solution. Look over the graph to make sure everything is labeled. If necessary, make sure the asymptotes are correct and known values of the function are labeled. Use a graphing calculator to check that any graph makes sense.
{"url":"http://www.shmoop.com/functions-graphs-limits/solving-math-problem.html","timestamp":"2014-04-19T09:43:20Z","content_type":null,"content_length":"29903","record_id":"<urn:uuid:ca7080f4-19d1-4119-bc1e-e39a4959d76f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Modern Physics Physics 201: Modern Physics Fall 2012 Class schedule: M,W,F 10:00-10:50am, Small Hall, Room 111 Tutorial: F 2-2:50pm, Small Hall, Room 111 Office hours: Wed 3-4pm (Phys 201 priority), Wed 4-5pm (Phys 101 priority) Instructor - Josh Erlich Small Hall, Room 223 Office Phone: 757-221-3763 Email: erlich@physics.wm.edu Grader - Zhen Wang (zwang01@email.wm.edu) Course website:&nbsp http://physics.wm.edu/~erlich/201F12/ Course Summary: In the 20th century the classical understanding of physical law was replaced by a new world view. The speed of light was discovered to be a fundamental constant; the notions of space and time were redefined; and the outcome of experiments could no longer be predicted with certainty. This course will cover the experiments and concepts underlying Special Relativity, General Relativity, and Quantum Mechanics. We will also discuss some applications of these revolutionary theories, selected from the fields of Atomic, Condensed Matter, Nuclear, Particle, and Gravitational Physics. Text:&nbsp R. Serway, C. Moses and C. Moyer, Modern Physics, Third Edition. This course assumes a background in first year physics at the level of Phys 101-102 or Phys 107-108, and calculus at the level of Math 111-112. Course requirements and grade: • Problem sets (30%) • Midterm Exams (2 x 20%) • Final Exam (30%) Problem Sets: Weekly problem sets will generally be due on Fridays in class and returned the following Friday. Questions about the grading should first be addressed to the grader. After speaking with the grader if you feel that a grading issue was not adequately addressed, then see me. • Midterm Exam 1: Monday, October 8, in class. • Midterm Exam 2: Monday, November 12, in class. • Final Exam: Monday, December 10, 2-5pm.
{"url":"http://physics.wm.edu/~erlich/201F12/index.html","timestamp":"2014-04-16T16:03:47Z","content_type":null,"content_length":"6318","record_id":"<urn:uuid:d177aadb-4324-4260-b077-97471816252c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
The Median block computes the median value of each row or column of the input, along vectors of a specified dimension of the input, or of the entire input. The median of a set of input values is calculated as follows: 1. The values are sorted. 2. If the number of values is odd, the median is the middle value. 3. If the number of values is even, the median is the average of the two middle values. For a given input u, the size of the output array y depends on the setting of the Find the median value over parameter. For example, consider a 3-dimensional input signal of size M-by-N-by-P: ● Entire input — The output at each sample time is a scalar that contains the median value of the M-by-N-by-P input matrix. y = median(u(:)) % Equivalent MATLAB code ● Each row — The output at each sample time consists of an M-by-1-by-P array, where each element contains the median value of each vector over the second dimension of the input. For an input that is an M-by-N matrix, the output is an M-by-1 column vector. y = median(u,2) % Equivalent MATLAB code ● Each column — The output at each sample time consists of a 1-by-N-by-P array, where each element contains the median value of each vector over the first dimension of the input. For an input that is an M-by-N matrix, the output at each sample time is a 1-by-N row vector. y = median(u) % Equivalent MATLAB code For convenience, length-M 1-D vector inputs are treated as M-by-1 column vectors when the block is in this mode. Sample-based length-M row vector inputs are also treated as M-by-1 column vectors when the Treat sample-based row input as a column check box is selected. ● Specified dimension — The output at each sample time depends on Dimension. If Dimension is set to 1, the output is the same as when you select Each column. If Dimension is set to 2, the output is the same as when you select Each row. If Dimension is set to 3, the output at each sample time is an M-by-N matrix containing the median value of each vector over the third dimension of the y = median(u,Dimension) % Equivalent MATLAB code The block sorts complex inputs according to their magnitude. Fixed-Point Data Types For fixed-point inputs, you can specify accumulator, product output, and output data types as discussed in Dialog Box. Not all these fixed-point parameters are applicable for all types of fixed-point inputs. The following table shows when each kind of data type and scaling is used. ┃ │ Output data type │ Accumulator data type │ Product output data type ┃ ┃ Even M │ X │ X │ ┃ ┃ Odd M │ X │ │ ┃ ┃ Odd M and complex │ X │ X │ X ┃ ┃ Even M and complex │ X │ X │ X ┃ The accumulator and output data types and scalings are used for fixed-point signals when M is even. The result of the sum performed while calculating the average of the two central rows of the input matrix is stored in the accumulator data type and scaling. The total result of the average is then put into the output data type and scaling. The accumulator and product output parameters are used for complex fixed-point inputs. The sum of the squares of the real and imaginary parts of such an input are formed before the input elements are sorted, as described in Description. The results of the squares of the real and imaginary parts are placed into the product output data type and scaling. The result of the sum of the squares is placed into the accumulator data type and scaling. For fixed-point inputs that are both complex and have even M, the data types are used in all of the ways described. Therefore, in such cases, the accumulator type is used in two different ways. Dialog Box The Main pane of the Median block dialog appears as follows. Specify whether to sort the elements of the input using a Quick sort or an Insertion sort algorithm. Specify whether to find the median value along rows, columns, entire input, or the dimension specified in the Dimension parameter. For more information, see Description. Select to treat sample-based length-M row vector inputs as M-by-1 column vectors. This parameter is only visible when the Find the median value over parameter is set to Each column. Specify the dimension (one-based value) of the input signal, over which the median is computed. The value of this parameter cannot exceed the number of dimensions in the input signal. This parameter is only visible when the Find the median value over parameter is set to Specified dimension. The Data Types pane of the Median block dialog appears as follows. │ Note: Floating-point inheritance takes precedence over the data type settings defined on this pane. When inputs are floating point, the block ignores these settings, and all internal data │ │ types are floating point. │ Select the rounding mode for fixed-point operations. Select the overflow mode for fixed-point operations. Specify the product output data type. See Fixed-Point Data Types and Multiplication Data Types for illustrations depicting the use of the product output data type in this block. You can set it ● A rule that inherits a data type, for example, Inherit: Same as input ● An expression that evaluates to a valid data type, for example, fixdt([],16,0) Click the Show data type assistant button to display the Data Type Assistant, which helps you set the Product output data type parameter. See Specify Data Types Using Data Type Assistant for more information. Specify the accumulator data type. See Fixed-Point Data Types for illustrations depicting the use of the accumulator data type in this block. You can set this parameter to: ● A rule that inherits a data type, for example, Inherit: Same as product output ● An expression that evaluates to a valid data type, for example, fixdt([],16,0) Click the Show data type assistant button to display the Data Type Assistant, which helps you set the Accumulator data type parameter. See Specify Data Types Using Data Type Assistant for more information. Specify the output data type. See Fixed-Point Data Types for illustrations depicting the use of the output data type in this block. You can set it to: ● A rule that inherits a data type, for example, Inherit: Same as accumulator ● An expression that evaluates to a valid data type, for example, fixdt([],16,0) Click the Show data type assistant button to display the Data Type Assistant, which helps you set the Output data type parameter. See Specify Block Output Data Types for more information. Specify the minimum value that the block should output. The default value is [] (unspecified). Simulink^® software uses this value to perform: ● Simulation range checking (see Signal Ranges) ● Automatic scaling of fixed-point data types Specify the maximum value that the block should output. The default value is [] (unspecified). Simulink software uses this value to perform: ● Simulation range checking (see Signal Ranges) ● Automatic scaling of fixed-point data types Select this parameter to prevent the fixed-point tools from overriding the data types you specify on the block mask.
{"url":"http://www.mathworks.com/help/dsp/ref/median.html?nocookie=true","timestamp":"2014-04-25T05:45:41Z","content_type":null,"content_length":"54134","record_id":"<urn:uuid:287294c7-a639-4809-a478-4b8a9e502a95>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from February 2009 on The Gauge Connection Following the discussion with Rafael Frigori (see here) and returning to sane questions, I discuss here a class of exact classical solutions that must be considered in the class of Maximal Abelian Gauge. As usual, we consider the following Yang-Mills action $S=\int d^4x\left[\frac{1}{2}\partial_\mu A^a_u\partial^\mu A^{au}+\partial^\mu\bar c^a\partial_\mu c^a\right.$ $-gf^{abc}\partial_\mu A_u^aA^{b\mu}A^{cu}+\frac{g^2}{4}f^{abc}f^{ars}A^b_\mu A^c_u A^{r\mu}A^{su}$ $\left.+gf^{abc}\partial_\mu\bar c^a A^{b\mu}c^c\right]$ being $c,\ \bar c$ the ghost field, $g$ the coupling constant and, for the moment we omit the gauge fixing term. Let us fix the gauge group being SU(2). We choose the following (Smilga’s choice, see the book): being $\phi$ a scalar field. The other components are taken to be zero. It easy to see that the action becomes $S=-6\int d^4x\left[\frac{1}{2}\partial_\mu\phi\partial^\mu\phi+\partial\bar c\partial c\right]+6\int d^4x\frac{2g^2}{4}\phi^4.$ This is a very nice result as if we have a solution of the scalar field theory we get immediately a classical solution of Yang-Mills equations while the ghost field decouples and behaves as that of a free particle. But such solutions do exist. We can solve exactly the equation $\phi = \mu\left(\frac{2}{2g^2}\right)^\frac{1}{4}{\textit sn}(p\cdot x+\theta,i)$ being sn Jacobi snoidal function, $\mu,\ \theta$ two arbitrary constants, if holds We see that the field acquired a mass notwithstanding it was massless and the same happens to the Yang-Mills field. These are known as non-linear waves. These solutions do not represent a new theoretical view. A new theoretical view is given when they are used to build a quantum field theory. This is the core of the question. What happens when we keep the gauge fixing term as $\frac{1}{\xi}(\partial\cdot A)^2?$ If you substitute Smilga’s choice in this term you will find a correction to the kinematic term implying a rescaling of space variables. This is harmless for the obtained solutions resulting in the end into a multiplicative factor for the action of the scalar field. The set of Smilga’s choice is very large and increases with the choice of the gauge group. But such solutions always exist. If you want more information about their use in QCD see the following: Infrared gluon and ghost propagators Phys. Lett. B Yang-Mills propagators and QCD to appear in Nucl. Phys. B: Proc. Suppl. Update: Together with Terry Tao, we agreed that these solutions hold in a perturbative sense, i.e. being $\eta_\mu^a$ a constant and $g$ the coupling taken to be very large. These become exact solutions when just time dependence is retained. So, the theorem contained in the above papers is correct for this latter case and approximate in the general case. As the case of interest is that of a large coupling, these results permit to say that all the conclusions drawn in the above papers are correct. This completed proof will appear shortly in Modern Physics Letters A (see http://arxiv.org/abs/0903.2357 ). Thanks a lot to Terry for the very helpful criticism. Wikipedia and abuses Till now, I have avoided to feed this flooding about abuses on Wikipedia. But I think that a few words are needed in order to clarify my position and to let my own point of view widely known. The problem started when Peter Woit took a look at the Yang-Mills entry of Wikipedia. He has found a section apparently self-promoting my work. What was about this section? The title said “Integrable solutions of classical Yang-Mills equations and QFT “. I think that there is a lot to say about this matter as classical solutions of Y-M exist and is well acquired matter. But in this section a class of solutions were put, cited in the Smilga’s book, that I have generalized and introduced in my papers. Should they be there? I think yes as they belong to the class of solutions stated in the title. They appear to be too recent for inclusion but this is plain mathematics. Mathematics is a two-way switch: It is either right or wrong and so, if these solutions are right, they should be there as a bookkeeping for the readers. The worst question is anyhow self-promotion. On this ground Peter Woit did worst: He is self-promoting his book (see here , thank you Lubos). Promoting a book means to earn money for the author while promoting a scientific idea may have the right side that, being the idea good, a good service has been done to the community. The worst aspect of the story has been the intervention of Woit through his blog. This is a perfect war machine that when activated may leave a lot of casualties. People should be smart at their defense as otherwise the risk is to be counted in that number. A flood of people moved toward Wikipedia with any means trying to remove the questioned section and attacking me and whoever has written it. A lot of comments in Woit’s blog was posted attacking me. I was forced to introduce moderation for comments in my blog. Of course this appears like a kind of lynching without any understanding of scientific merit. Curators of Wikipedia decided that majority was right and Woit have had his win: The section was finally removed. What next? This situation is quite interesting by my side. The reason is that the physical matter is Yang-Mills theory that is one of the biggest open problems both in phyics and mathematics. There is a lot of very good people working on that in this moment and my view is that a complete understanding is at hand. Ask yourself this question: What would be Woit’s position if I am right? This is not like string theory that we do not know when a confirmation will be at hand. Here we have computers, accelerators and a lot of smart people crunching this problem. In a very short time an eventual Woit’s error will be exposed. And by irony, Wikipedia’s entry will be updated with my ideas. Much better than now. The question to be asked is: Should Wikipedia support new material? Since the editors of scientific entries in Wikipedia are scientists themselves one cannot ask them impartiality. Science is a dynamic endeavor and Wikipedia a dynamic source of information. They should be merged to meet each other in the right way. Quote of the day Today let me quote Einstein: Great spirits have often encountered violent opposition from weak minds My dedication is for people working on string theory. Osaka and Berlin merge their data! Today in arxiv appeared a relevant paper by Osaka and Berlin groups (see here). This is a really important paper as these two groups merged their data for the lattice computation of the gluon and ghost propagators for SU(3) in the Coulomb gauge. As usual I give here a picture summing up their results about gluon propagator I would like to emphasize a couple of points that should be discussed with these results at hand. There is a paper, published on Physical Review Letters, that was claiming that the gluon propagator in the Coulomb gauge should take the Gribov form going to zero at lower momenta. You can find this paper here and here. I think that authors should reconsider their computations as the disagreement with lattice is really serious. All the research lines aimed at a proof of confinement scenarios heavily relying on Gribov ideas seem to have reached a failure point. There could be a lot of reasons for this but it seems to me that, as lattice computations improve, we are left with the only option that the starting points of all these studies are to be reconsidered. A second point to be made is the completely missing link between people working on the computation of propagators and those working on the spectrum of QCD. I think this is the moment to try to connet these two relevant areas as times are mature to try a consistency check between them. After the failure in view of some functional methods do we have to believe yet that Kaellen-Lehman formula does not apply in the infrared limit? I would like to point out to my readers Scholarpedia. This represents a significant effort of the scientific community to grant a wiki-like resource with the benefit of peer-review. This means that articles are written on invitation and reviewed by referees chosen by the Editorial Board. This resource is important as correctness of information is granted by the review process and by the choice of the authors that are generally main contributors to the considered fields. It is interesting to point out that, currently, there are articles written by 15 Nobelists and 4 Fields medalists. The most relevant aspect to be emphasized is that the information is freely accessible to everybody exactly in the spirit of Wikipedia. Quantum field theory and gradient expansion In a preceding post (see here) I showed as a covariant gradient expansion can be accomplished maintaining Lorentz invariance during computation. Now I discuss here how to manage the corresponding generating functional $Z[j]=\int[d\phi]e^{i\int d^4x\frac{1}{2}[(\partial\phi)^2-m^2\phi^2]+i\int d^4xj\phi}.$ This integral can be computed exactly, the theory being free and the integral is a Gaussian one, to give $Z[j]=e^{\frac{i}{2}\int d^4xd^4yj(x)\Delta(x-y)j(y)}$ where we have introduced the Feynman propagator $\Delta(x-y)$. This is well-knwon matter. But now we rewrite down the above integral introducing another spatial coordinate and write down $Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-(\partial\phi)^2-m^2\phi^2]+i\int d\tau d^4xj\phi}.$ Feynman propagator solving this integral is given by and a gradient expansion just means a series into $p^2$ of this propagator. From this we learn immeadiately two things: • When one takes $p=0$ we get the right spectrum of the theory: a pole at $p_\tau^2=m^2.$ • When one takes $p_\tau=0$ and Wick-rotates one of the four spatial coordinates we recover the right Feynman propagator. All works fine and we have kept Lorentz invariance everywhere hidden into the Euclidean part of a five-dimensional theory. Neglecting the Euclidean part gives us back the spectrum of the theory. This is the leading order of a gradient expansion. So, the next step is to see what happens with an interaction term. I have already solved this problem here and was published by Physical Review D (see here). In this paper I did not care about Lorentz invariance as I expected it would be recovered in the end of computations as indeed happens. But here we can recover the main result of the paper keeping Lorentz invariance. One has $Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-(\partial\phi)^2-m^2\phi^2-\frac{\lambda}{2}\phi^4]+i\int d\tau d^4xj\phi}$ and if we want something not trivial we have to keep the interaction term into the leading order of our gradient expansion. So we will break the exponent as $Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-\frac{\lambda}{2}\phi^4]-i\int d\tau d^4x\frac{1}{2}[(\partial\phi)^2+m^2\phi^2]+i\int d\tau d^4xj\phi}$ and our leading order functional is now $Z_0[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-\frac{\lambda}{2}\phi^4]+i\int d\tau d^4xj\phi}.$ This can be cast into a Gaussian form as, in the infrared limit, the one of our interest, one can use the following small time approximation $\phi(x,\tau)\approx\int d\tau' d^4y \delta^4(x-y)\Delta(\tau-\tau')j(y,\tau')$ being now that can be exactly solved giving back all the results of my paper. When the Gaussian form of the theory is obtained one can easily show that, in the infrared limit, the quartic scalar field theory is trivial as we obtain again a generating functional in the form $Z[j]=e^{\frac{i}{2}\int d^4xd^4yj(x)\Delta(x-y)j(y)}$ being now after Wick-rotated a spatial variable and having set $p_\tau=0$. The spectrum is proper to a trivial theory being that of an harmonic oscillator. I think that all this machinery does work very well and is quite robust opening up a lot of possibilities to have a look at the other side of the world. Most extreme gamma-ray blast yet As my blog’s readers know, I follow as far as I can space missions that can have a deep impact on our knowledge of universe. Most of them are from NASA. One of these missions is Fermi-GLAST that has produced a beautiful result quite recently. here). The paper with the results is appeared on Science (see here). The burst was seen in Carina constellation. These explosions are the most energetic processes in the universe and were uncovered by chance with military satellites named Vela used to find nuclear explosions in the atmosphere in the sixties of the last century. Understanding gamma-ray bursts implies a deeper understanding of stellar explosions. Too hilarious to not cite I have found this very nice piece of humor about Higgs particle at Cosmic Variance (see here). This text is a combination of quotes from famous movies. E.g., the last paragraph is taken from Ocean’s eleven and runs like this: Terry: All right. Now I have complied with your every request, would you agree? Rusty: I would. Terry: Good, ’cause now I have one of my own. Run and hide, asshole. Run and hide. If you should be picked up next week buying a hundred-thousand dollar sports car in Newport Beach, I am going to be supremely disappointed. Because I want my people to find you, and when they do, rest assured we are not going to hand you over to the police. So my advice to you again is this: run and hide. That is all that I ask. The first paragraph is taken from The Matrix and runs like this: Neo: I know you’re out there. I can feel you now. I know that you’re afraid… you’re afraid of us. You’re afraid of change. I don’t know the future. I didn’t come here to tell you how this is going to end. I came here to tell you how it’s going to begin. I’m going to hang up this phone, and then I’m going to show these people what you don’t want them to see. I’m going to show them a world without you. A world without rules and controls, without borders or boundaries. A world where anything is possible. Where we go from there is a choice I leave to you. Nobody at Cosmic Variance was able to hit the citation for the second paragraph. It is very well accustomed to LHC situation so not that easy to identify. If you know it please, share your knowledge. Movie quotes are from Internet Movie DB. Lubos and divergent series I have read this nice post and I have found it really interesting. The reason is the kind of approach of Lubos Motl, being physicist-like, on such somewhat old mathematical matter. The question of divergent series and their summation is as old as at least Euler and there is a wonderful book written by a great British mathematician, G. H. Hardy, that treats this problem here. Hardy is well-known for several discoveries in mathematics and one of this is Ramanujan. He had a long time collaboration with John Littlewood. Hardy’s book is really shocking for people that do not know divergent series. In mathematics several well coded resummation techniques exist for these series. With a proper choice of one of these techniques a meaning can be attached to them. A typical example can be and this is true exactly in the same way is true that the sum of all integers is -1/12. Of course, this means that discoveries by string theorists are surely others and most important than this one that is just good and known mathematics. I agree with Lubos that these techniques are not routinely taught to physics students and come out as a surprise also to most mathematics students. I am convinced that Hardy’s book can be used for a very good series of lectures, for a short time, to make people acquainted with this deep matter that can have unexpected uses. I think that mathematicians have something to teach us that is really profound: Do not throw anything out of the window. It could turn back in an unexpected way. Update: I have three beautiful links about this matter that is very well explained leaving readers with a puzzle: Quantum field theory and prejudices It happened somehow, discussing in this and others blogs, that I have declared that today there are several prejudices about quantum field theory. I never justified this claim and now I will try to do this making the arguments as clearest as I can. The point is quite easy to be realized when we recognize that the only way we know to manage a quantum field theory is small perturbation theory. So, all we know about this matter is obtained through this ancient mathematical technique. You should try to think about our oldest thinkers looking at the Earth and claiming no other continent exists rather than Europe. Whatever variation you get is about Europe. After a lot of time, a brave man took the sea and uncovered America. But this is another story. There is people in our community that is ready to claim that a strong coupling expansion is not possible at all. I have read this on a beautiful textbook, Peskin and Scroeder. This is my textbook of choice but is plainly wrong about this matter. It is like our ancient thinkers claiming that nothing else is possible other than Europe because this is the best of all the possible worlds. Claims like this come from our renormalization group understanding of quantum field theory. As you may know, such understandings are realized through small perturbation theory and we are taken back to the beginning. How is the other side of the World? What should one say about renormalization? Renormalization appears in any attempt to use small perturbation theory in quantum field theory. It naturally arises from the product of distributions at the same point. This is not quite a sensible mathematical situation. The question to ask is then: Is this true with any kind of perturbation technique? One could answer: we haven’t any other and so the requirement of renormalizability becomes a key element to have a valid quantum field theory. The reason for this is the same again: The only technique we believe to exist to do computations in quantum field theory is small perturbation technique and this must work to have a sensible theory. Meantime, we have also learned to work with non-renormalizable theories and called them effective theories. All this is well-known matter. Of course, the wrong point about such a question is the claim that there are no other techniques than small perturbation theory. There is always another face to a medal as the readers of my blog know. But this implies a great cultural jump to be accepted. The same that happened to Columbus.
{"url":"http://marcofrasca.wordpress.com/2009/02/","timestamp":"2014-04-19T06:52:19Z","content_type":null,"content_length":"142167","record_id":"<urn:uuid:f7bd034d-e0a1-457c-9f93-fd0cccb14d66>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Murphy, TX Math Tutor Find a Murphy, TX Math Tutor ...In my tutoring sessions, I have each student review and practice grammar, punctuation, organization, style, and other concepts covered on the ACT English section. If a student is taking the ACT Writing section, I include ACT essay practice as well. I’ve tutored over 100 hours of ACT and SAT prep, including over 30 hours of ACT and SAT Math. 15 Subjects: including algebra 1, algebra 2, grammar, geometry ...I have taught students grades K - high school. I also have many years experience writing math activities and assessments for elementary. I have a special interest in early number development as well. 10 Subjects: including SAT math, algebra 1, geometry, prealgebra ...But the pace will be entirely your own! My area of expertise is Biology and Chemistry, but I can easily teach any of the other math or science subjects. I also taught AP English Literature for three years as an undergraduate at MIT, so I am very comfortable with teaching English as well. 30 Subjects: including geometry, precalculus, trigonometry, SAT math ...I have learned how to work with and motivate many different personality and learning types.I tutor high school, introductory and first chemistry. I am NOT available to tutor organic chemistry or biochemistry. I typically tutor introductory and general statistics courses. 4 Subjects: including statistics, chemistry, economics, vocabulary ...My name's Cody. I have a degree in civil engineering and I worked in that field for eight years. I'm now attending UT Dallas to get my certification as a high school physics teacher. 28 Subjects: including algebra 1, algebra 2, biology, chemistry Related Murphy, TX Tutors Murphy, TX Accounting Tutors Murphy, TX ACT Tutors Murphy, TX Algebra Tutors Murphy, TX Algebra 2 Tutors Murphy, TX Calculus Tutors Murphy, TX Geometry Tutors Murphy, TX Math Tutors Murphy, TX Prealgebra Tutors Murphy, TX Precalculus Tutors Murphy, TX SAT Tutors Murphy, TX SAT Math Tutors Murphy, TX Science Tutors Murphy, TX Statistics Tutors Murphy, TX Trigonometry Tutors Nearby Cities With Math Tutor Allen, TX Math Tutors Fairview, TX Math Tutors Farmers Branch, TX Math Tutors Garland, TX Math Tutors Highland Park, TX Math Tutors Lucas, TX Math Tutors Parker, TX Math Tutors Plano, TX Math Tutors Richardson Math Tutors Rockwall Math Tutors Rowlett Math Tutors Sachse Math Tutors St Paul, TX Math Tutors University Park, TX Math Tutors Wylie Math Tutors
{"url":"http://www.purplemath.com/murphy_tx_math_tutors.php","timestamp":"2014-04-19T17:34:56Z","content_type":null,"content_length":"23464","record_id":"<urn:uuid:f03726d3-374f-44ad-b42c-1a37142f2382>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Union Square, NJ Statistics Tutor Find an Union Square, NJ Statistics Tutor ...Practical experience working through concrete examples coupled with theory is the best instruction and what I know to be most effective. Prealgebra is the foundation for all secondary level math. It is vital that the math fundamentals are present for any student in order to become successful in any capacity. 26 Subjects: including statistics, calculus, writing, GRE ...I find research fun and make stats simple. Mostly grad students seek me out for their thesis research and/or their classes. Social work or psychology students are common; of course, I am open to whatever kind of program you are in. 4 Subjects: including statistics, SPSS, SAS, biostatistics ...When a student does well, I feel like I have done well. Some of the students have told me that they wish I was their teacher. I really enjoy what I do and love children/people.I am currently tutoring students from K-6th grade, and have been doing well. 47 Subjects: including statistics, chemistry, reading, accounting ...I have completed numerous hours of volunteer work which has included tutoring students in a variety of subjects and working with elementary aged children. I specialize in language arts, mathematics, social studies, and science for grades K-5. In regard to teaching, I tend to use hands-on activities and outside resources that engage a student. 49 Subjects: including statistics, Spanish, English, reading ...I am also a graduate of Union County College and hold an Associate of Arts degree in Early Childhood Education. In addition, I am a well respected substitute teacher for the Township Of Union Public School system for the past 7 years. I'm a WyzAnt certified tutor in English, American History, G... 16 Subjects: including statistics, reading, English, writing Related Union Square, NJ Tutors Union Square, NJ Accounting Tutors Union Square, NJ ACT Tutors Union Square, NJ Algebra Tutors Union Square, NJ Algebra 2 Tutors Union Square, NJ Calculus Tutors Union Square, NJ Geometry Tutors Union Square, NJ Math Tutors Union Square, NJ Prealgebra Tutors Union Square, NJ Precalculus Tutors Union Square, NJ SAT Tutors Union Square, NJ SAT Math Tutors Union Square, NJ Science Tutors Union Square, NJ Statistics Tutors Union Square, NJ Trigonometry Tutors Nearby Cities With statistics Tutor Arlington, NJ statistics Tutors Bayway, NJ statistics Tutors Elizabeth, NJ statistics Tutors Elmora, NJ statistics Tutors Greystone Park, NJ statistics Tutors Hopelawn, NJ statistics Tutors Menlo Park, NJ statistics Tutors Midtown, NJ statistics Tutors Monroe, NJ statistics Tutors North Elizabeth, NJ statistics Tutors Parkandbush, NJ statistics Tutors Peterstown, NJ statistics Tutors Rockaway Point, NY statistics Tutors Tabor, NJ statistics Tutors West Arlington, NJ statistics Tutors
{"url":"http://www.purplemath.com/Union_Square_NJ_statistics_tutors.php","timestamp":"2014-04-18T01:07:49Z","content_type":null,"content_length":"24396","record_id":"<urn:uuid:b3bda7a5-1c21-4dfb-b246-8ca6d196d0ac>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
George Karniadakis George Karniadakis Charles Pitts Robinson and John Palmer Barstow Professor of Applied Mathematics Brown University; also Research Scientist at MIT George_Karniadakis at Brown.edu "People who wish to analyze nature without using mathematics must settle for a reduced understanding", Richard Feynman 2013-Spring Course: [APMA2580] Exciting News!!! My former PhD student, Ronald D. Henrdesron, received an Oscar (2014) for the fluid system he built for DreamWorks! See the press release: A new report by the National Research Council points to the direction and philosophy of my group practiced over the last 20 years --Thank you NRC!: Here's a small sample: "...But the value of the mathematical sciences to the overall science and engineering enterprise and to the nation would be heightened if the number of mathematical scientists who share the following characteristics could be increased: (1) They are knowledgeable across a broad range of the discipline, beyond theirown area(s) of expertise; (2) They communicate well with researchers in other (3) They understand the role of the mathematical sciences in the wider world of science, engineering, medicine, defense, and business; and (4)They have some experience with computation..." Academic background: George Karniadakis received his S.M. (1984) and Ph.D. (1987) from Massachusetts Institute of Technology. He was appointed Lecturer in the Department of Mechanical Engineering at MIT in 1987 and subsequently he joined the Center for Turbulence Research at Stanford / Nasa Ames. He joined Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program of Applied and Computational Mathematics. He was a Visiting Professor at Caltech (1993) in the Aeronautics Department. He joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics on January 1, 1994. He became a full professor on July 1, 1996. He has been a Visiting Professor and Senior Lecturer of Ocean/ Mechanical Engineering at MIT since September 1, 2000. He was Visiting Professor at Peking University (Fall 2007 & 2013). He is a Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2010-), Fellow of the American Physical Society (APS, 2004-), Fellow of the American Society of Mechanical Engineers (ASME, 2003-) and Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006-). He received the CFD award (2007) and the J Tinsley Oden Medal (2013) by the US Association in Computational Mechanics. His h-index is 63 and he has been cited about 20,000 times (see my google scholar citations). See complete CV here. You can also check out our Geneaology Tree (remember to zoom in!). Karniadakis is the lead PI of an OSD/AFOSR MURI on Uncertainty Quantification and Director of a new DOE Center of Mathematics for Mesoscale Modeling of Materials (CM4). Research Interests: His research interests include diverse topics in computational science both on algorithms and applications. A main current thrust is stochastic simulation (in the context of uncertainty quantification and beyond), fractional PDEs, and multiscale modeling of physical and biological systems (especially the brain). Can you believe that we solve problems in 100 dimensions - check this out!. Read here about the exciting field of "New Biology" described by the National Research Council (2009). Read here about our work on sickle cell anemia and also on modeling malaria from first principles, which was also featured on the web site of the National Public Radio. Read here about our work on the first large multiscale modeling of a brain aneurysm (finalist in the Gordon Bell Award, Supercomputing'11). Our new area is neurovascular coupling in the brain, i.e., bridging the gap between neuroscience and vascular mechanics. New experimental evidence suggests the intriguing possibility that by slightly modulating the brain blood flow one can control information processing -- read our paper here! Recent feature article of our work ("Blood in Motion") in American Particular aspects include: Honors and Awards : The USACM J Tinsley Oden Medal, 2013 The USACM Computational Fluid Dynamics Award, 2007 Fellow of the American Society of Mechanical Engineers (ASME) 2003 Where to contact George Karniadakis: Professor George Karniadakis Box F, Division of Applied Mathematics, Brown University, Providence RI 02912, USA. Additional Links: MAY 22 - 31, 2013 JUNE 3 - 5, 2013
{"url":"http://www.cfm.brown.edu/people/gk/","timestamp":"2014-04-16T19:15:22Z","content_type":null,"content_length":"25533","record_id":"<urn:uuid:38ca2180-36ec-4deb-b6e9-64ef499292fa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
IAS 17 Leases Example 1 1. Hi sir, for Example 1, #4, you said Dr OUFL A/C Dr F.L. Int Cr Cash 3500 What are the figures for the debit entry? 2. Hi Mr Mike – I understand what you are doing in the lecture, yet when I tried to solve the first question in the mini questions, I was not able to figure out how did they arrive in the answer (pilot paper) that 55,000 is the outstanding non current obligation – Here is the question: On 1 April 2010 Kala entered into a lease for an item of plant which had an estimated life of five years. The lease period is also five years with annual rentals of $22 million payable in advance from 1 April 2010. The plant is expected to have a nil residual value at the end of its life. If purchased this plant would have a cost of $92 million and be depreciated on a straight-line basis. The lessor includes a finance cost of 10% per annum when calculating annual rentals. (Note: you are not required to calculate the present value of the minimum lease payments.) Why in the answer the way they calculated as follows: the non current obligation is simply 77000 (outstanding obligation at the end of the year) less 22,000 (which is the next installment). This is not the same thing you are doing here, or is it? I am really confused, could you please show me how to arrive at the current and non current element of the outstanding obligation the same way you did in the lecture? Many thanks! □ If the cash price is 92 and the first installment of 22 is entirely capital (because it’s paid in advance and therefore includes no interest) then the capital outstanding after that deposit is paid is 70. Add 10% interest to get to the end of the first year and the year end outstanding amount is 77 and we pay 22 “tomorrow”? The amount of CAPITAL outstanding at the end of the first year is 70. Move forward to tomorrow and we pay 22. That 22 settles to 7 accrued interest + 15 of the capital. So the capital now outstanding after that second payment of 22 is now 55 ie 70 capital outstanding less the capital element of the second payment of 22. As at the end of the first year, the CAPITAL OUTSTANDING payable more than 12 months hence is therefore 55 Is that clear? If not, post again ☆ Thank you Mr Mike for this clarification, I get it but it is confusing me this way, I want to solve it the same way you did it in the lecture, but when I do, I don’t get the same answer, here is my answer, can you tell me where did I go wrong? Fv 92,000 (22,000) >> Acts as a deposit 70,000 >> FV at 1.4.2010 77,000 (outstanding amount at 31.3.2011) 60,500 (outstanding amount at 31.3,2012) As per the above calculation, the current liab. should be 16,500 (77,000-60,500), which means that the long term liability is 60,500 (16,500-77,000). This is how I would solve it according to the method in the lecture which I am pretty much comfortable with, but I am not sure what is wrong in my answer!!! Thank you ☆ The long term liability should be JUST the capital element. In the figures quoted above, the capital element is 55.000. At the previous year end, the capital element outstanding is 70,000. So, of that 70, 55,000 is payable >12 months hence and is therefore long term debt and the remaining 15,000 is a current liability. Of course, there is a further current liability and that is the 7,000 interest which is also payable “tomorrow” within the 22,000 payment Is that better? ☆ Mr. Little, I FINALLY get my mistake! This is what you have been doing in the lecture and I understood it, I am not sure why I got confused over this example specifically! Maybe because the payment is made in advance. Anyways, I appreciate the time you gave explaining this to me. Thank you very much! I hope our efforts will pay off and I will pass! 3. NCA is leased(finance lease) by lessee and 1st payment is made.the requirement is to charge 1st payment to CGS. □ Is it because the first payment is IN ADVANCE and therefore includes no interest – it’s entirely a payment of capital 4. anyone can tell me why lease payment is deducted from CGS….?? □ Was it added into CGS? Surely it would be better to show it as a finance charge (interest element) whilst the capital element should have been deducted from the Obligation Account Does that answer it? 5. HI Please explain the calculation used for disclosure for reconciling Minimum Lease payments to Fair Value on the gross basis. Initially we said that there are 7 instalments to be paid which is R24 000. But the calculation presented in the lecture is R21 000. Am I missing something □ Yes, the answer asks for the situation at the end of the first year – so only 6 more years to go. OK? 6. good and helpful lecture thanks! 7. Sir, I do not understand how you got $15244, canyou please explain □ @nigs001, Have you actually watched the video? The fair value is 17,500. A deposit is paid of 460. So the amount “borrowed” is 17,040, This is outstanding for one complete year and interest is accruing at 10%. So, at the end of the first year Sergijus owes capital of 17,040 plus interest of 1,704 ( a total of 18,744 ) And then he pays 3,500 and reduces the amount owing to 18,744 – 3,500 which equals ???? ☆ sir your reply made it much easier! 8. i enjoy and concerntrate more wen i see your face mike □ @taftedmelody, TM, you and me alike! I have to concentrate when I see my face – it’s normally when I have a razor in my hand! ☆ @MikeLittle, you are very funny sir either way im loving your lectures You must be logged in to post a comment.
{"url":"http://opentuition.com/acca/f7/paper-f7-ias-17-leases-example-1/","timestamp":"2014-04-19T10:45:04Z","content_type":null,"content_length":"61863","record_id":"<urn:uuid:33264bbe-c9f8-42d9-bce5-948cedb75a4f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Determine the domain f(x) = 4/x^2 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4ffcaa3ae4b00c7a70c57ab9","timestamp":"2014-04-23T09:26:40Z","content_type":null,"content_length":"62746","record_id":"<urn:uuid:1d797673-6932-4c4d-9cf3-f53bd64bd1e2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
General inductive inference types based on linearly ordered sets - Fundamenta Informaticae , 1999 "... In this paper, we reconsider the denition of procrastinating learning machines. In the original denition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate possibility of using arbitrary linearly ordered sets to bound mindchanges in similar way. It ..." Cited by 8 (2 self) Add to MetaCart In this paper, we reconsider the denition of procrastinating learning machines. In the original denition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate possibility of using arbitrary linearly ordered sets to bound mindchanges in similar way. It turns out that using certain ordered sets it is possible to dene inductive inference types dierent from the previously known ones. We investigate properties of the new inductive inference types and compare them to other types. This research was supported by Latvian Science Council Grant No.93.599 and NSF Grant 9421640. Some of the results from this paper were presented earlier [AFS96]. y The third author was supported in part by NSF Grant 9301339. 1 Introduction We study inductive inference using the model developed by Gold [Gol67]. There is a well known hierarchy of larger and larger classes of learnable sets of phenomena based on the number of time a learning machine is - Information and Computation "... This paper proposes the use of constructive ordinals as mistake bounds in the on-line learning model. This approach elegantly generalizes the applicability of the on-line mistake bound model to learnability analysis of very expressive concept classes like pattern languages, unions of pattern languag ..." Cited by 2 (2 self) Add to MetaCart This paper proposes the use of constructive ordinals as mistake bounds in the on-line learning model. This approach elegantly generalizes the applicability of the on-line mistake bound model to learnability analysis of very expressive concept classes like pattern languages, unions of pattern languages, elementary formal systems, and minimal models of logic programs. The main result in the paper shows that the topological property of effective finite bounded thickness is a sufficient condition for on-line learnability with a certain ordinal mistake bound. An interesting characterization of the on-line learning model is shown in terms of the identification in the limit framework. It is established that the classes of languages learnable in the on-line model with a mistake bound of α are exactly the same as the classes of languages learnable in the limit from both positive and negative data by a Popperian, consistent learner with a mind change bound of α. This result nicely builds a bridge between the two models. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=827291","timestamp":"2014-04-21T11:37:49Z","content_type":null,"content_length":"15922","record_id":"<urn:uuid:cdb7a361-d592-43ed-9e33-79d2d2ce8527>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Can I keep my amp with these speakers? [Archive] - Car Audio Forum - CarAudio.com 03-30-2008, 03:20 PM I'm sort of a noob, and I really need help. I have been blowing speakers the past 6 or so years (about 1 speaker every 2 years), I think from underpowering them. I am bad with ohms but each speaker is running off of one terminal so I think its at 4 ohms. If the rear speakers need 110, and the fronts need 55, what do I do as far as power? Because I can't direct an amount of power to each speaker as I see fit. Do I keep my amp or get a new one? Or for the same price get a different set of speakers for the front or something? Help me out... My amp is: MTX Thunder4320 • 40 watts x 4 into a 4 OHM load with less than .1% THD. • 80 watts x 4 into a 2 OHM load with less than .3% THD. • 160 Watts bridged x 2 into a 4 Ohm load with less than .3% THD. Dynamic Power (IHF-202 Standard) measured at 14.4 Volts DC: • 85 watts x 4 into a 4 Oh load. • 140 Watts x 4 into a 2 Ohm lead. • 280 watts bridged x 2 into a 4 Ohm load. Front speakers are: Infinity Kappa 52.5i (5 1/4 in) Power Handling, RMS 55 Watts Power Handling, Peak 165 Watts Sensitivity 94dB Frequency Response 55Hz - 25kHz Mounting Depth 2-1/16" Impedance 2 Ohms Rear speakers are: Infinity Kappa 693.5i (6x9) Power Handling, RMS 110 Watts Power Handling, Peak 330 Watts Sensitivity 95dB Frequency Response 35Hz - 25kHz Mounting Depth 3-3/16" Impedance 2 Ohms
{"url":"http://www.caraudio.com/forums/archive/index.php/t-303256.html?s=11518f424d522e03c130006454ad354e","timestamp":"2014-04-20T17:29:43Z","content_type":null,"content_length":"8384","record_id":"<urn:uuid:f2ebff8a-1949-456c-93eb-43e7d4b6f001>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
discounted payback period As there aren't any financial functions in Excel itself to find the discounted payback period thus it would take 3rd party programs such as tadXL to bridge the gap. Excel has been around since 1990 when Windows v3.0 was first launched and GUI based operating systems became commonplace. Even the graphical user interface offered by Windows v3.0 had already been around with programs such as Mac OS. Now speaking of the upcoming Excel 2015 or 2016, the word has it that Microsoft is planning to release its own set of financial functions to the Excel users. But if the story is infact true then more than likely most if not all of the new financial functions will be based on or taken from the tadXL add-in for Excel 2007, 2010 and 2013. I got to hear about Microsoft plans while answering a question about the Excel XIRR function on Bytes magazine site where a high ranking contributor justified the copied work that Microsoft is undertaking by saying that author of the original work, yours sincerely does not have the authority on the subject of finance as he lacks a degree. But you know those who steal will cook up excuses to justify the deed they had done. Having or not having a degree is not an issue here, what is at stake is one persons intellectual copyrights and you would think a company such as Microsoft who cries foul that it loses out on millions of dollars of revenue due to piracy would at least think twice before trampling upon someone else's work. But who cares I suppose when you have one billion installation of Microsoft Excel around the world, cooking up new financial functions for hungry masses brings in quite a lot of dough. Who loses out in the end is the one who invested his time and efforts to research, design and develop the set of financial functions that is called tadXL add-on. Now back to the topic of finding discounted payback period in Excel, the DPP as it is called is the time period required to recover the initial cost incurred in undertaking the capital investment. If you were to be concerned about recovery of all costs incurred in undertaking an investment then you are seeking the real discounted payback period instead. The task of finding the payback period is quite cumbersome as there are no formulas to find it and analsyt has to make use of tables to find it. Excel is a good fit product for crunching numbers and here one can exploit the programming capabilities of Excel engine to create the financial functions that are not native to Excel. 3rd party solutions such as tadXL offer series of financial functions to find the discounted payback period depending upon how much of information is available about the investment. There are 8 different financial functions in tadXL to find the discounted payback period namely tadDPP, tadDPPSchedule, tadTDPP, tadTDPPSchedule, tadXDPP, tadXDPPSchedule, tadXTDPP and tadXTDPPSchedule. So which one of these functions should you use, the answer depends upon how much information do you have about the cash flows from the investment. For the simplest of payback period calculations, you would be using the tadDPP function that will take a series of cash flows and the discount rate. Yet if you have a term structure for interest rates then finding the payback period is made easy by using tadDPPSchedule that not only accepts the series of cash flows but also a schedule of discount rates. But if you have access to a schedule of transaction dates for the cash flows then the tadXDPP function would take care of the payback period calculations. However if you happen to have a schedule of discount rates along with schedule of transaction dates then the tadXDPPSchedule is the function required. There are four other financial functions the list that allow you to find the real discounted payback period that ensure recovery of all the costs as opposed to just the initial cost. In addition to the 8 different financial functions mentioned thus far, there are even more financial functions found in tadXL add-in to find the discounted payback period when comparing investments that have unequal life spans. For example finding incremental and decremental discounted payback period is possible by using the tadIncDPP and tadDecDPP functions found in tadXL. On the same lines it is also possible to find the incremental and decremental real discounted payback period using the tadIncTDPP and tadDecTDPP functions. And if you haven't already guessed the other variants of incremental and decremental discounted payback period functions will allow for schedule of discount rates and transaction dates. And for sensitivity analysis, a number of financial functions in tadXL will allow for expected values when discounted payback period is considered as a random variable. An example of such as financial function in tadXL is the tadeDPP and tadeTDPP functions that accept the values for discounted payback periods of series of investmenst and their respective probability values to find the mean or average discounted payback period. The tadXL add-in is available for 32-bit Excel 2007, 2010 and 2013 and for 64-bit Excel 2010 and 2013. You will find the links to download the latest version of tadXL v2.5 from the following download cart that will permit you to find the discounted payback period using all of the financial functions that were discussed earlier. Please feel at home and get yourself copy of this unique, quality Excel add-in. If you thought that was the last of the financial functions in tadXL to find the discounted payback period then you are dead wrong, there are financial functions in the latest version of tadXL that permit you to find the discounted payback period for a porfolio of investments. This would require you to enter the series of data for each of the investments and their respective discount rates to find the discounted payback period of a portfolio of investments. The version of tadXL offering incremental, decremental discounted payback period and the options for portfolio analysis is currently under development and will be released soon. However you are free to download the latest of the currently available tadXL v2.5 from the following cart that offers the 32-bit tadXL and 64-bit tadXL for Excel 2007, 2010 and 2013 for Windows. │Item │Trial version │Full version│ │32-bit tadXL v2.5 │Download │Buy Now │ │64-bit tadXL v2.5 │Download │Buy Now │ Well if you do not have Microsoft Excel installed on your Windows based computer, then we will still offer you the tools that help find the discounted payback period. The tadBCR Windows 7 and 8 calculator runs on the desktop without requiring an installation file. Simply download this discounted payback period calculator from the shopping cart listed below and copy it onto your Windows desktop. Run the program by double clicking on the file icon to launch the program, once the calculator area is displayed you may enter the series of cash flows and the discount rate before clicking on the calculate button to find the discounted payback period. As usual it is free of cost and is available to download from the following shopping cart. │Item │Trial version│Full version│ │discounted payback period calculator │Download │Buy Now │ Now that you have used the Excel add-in and the windows calculator to find the discounted payback period, we will now offer you a study guide that will teach you the basics of finding the discounted payback period. It will begin by defining the discounted payback period and its uses in financial analysis and then to present you with the formula needed to find the discounted payback period. An example calculation will make it possible for you to see the steps required in finding the discounted payback period with paper and pencil. Information about Excel functions that find the discounted payback period will be provided. The study guide may be downloaded from the following shopping cart for your personal or corporate use. │Item │Trial version│Full version│ │discounted payback period e-Book │Download │Buy Now │ What if you haven't got Windows running on your computer and are using operating systems such as Mac OS, Linux or are connecting to this website from a smart phone. Obviously you have come here to find the discounted payback period, and we will be happy to assist you in finding the discounted payback period while you stay with us. Following this announcement you will notice an online discounted payback period calculator that accepts a series of cash flows and the discount rate to find the discounted payback period. Please feel at home and move ahead with using this online calculator to compute the discounted payback period. Data input Data output discounted payback period = years
{"url":"http://finance.thinkanddone.com/using-ms-excel-to-find-discounted-payback-period.html","timestamp":"2014-04-18T23:28:18Z","content_type":null,"content_length":"18206","record_id":"<urn:uuid:444aed0f-5785-4de6-81ee-e3457bc435fd>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
ePrep SAT Subject Test - Math Level 1 - Premium Expert Review When should I take it? Most students take the Math Level 1 subject test as they are nearing the end of Algebra II in school—June of junior year for most students. What’s “on” it? Topics covered by the Math Level 1 subject test include: • Numbers and Operations • Algebra and Functions • Geometry and Measurement (plane geometry, three-dimensional geometry, and trigonometry) • Data Analysis, Statistics, and Probability In other words, if you’ve taken Algebra I, Geometry, and Algebra II with Trigonometry, you should have covered most of what you’ll need to know to do well on the Math Level 1 test. Do I have to take it? Many selective colleges require applicants to take 2 or 3 SAT subject tests. You should check with the schools on your college list on a case-by-case basis. If you do not yet have a college list, you should probably go ahead and take 1 or 2 SAT subject tests before senior year. Why? Your college list may change . . . and scrambling to complete applications, prepare for (and take) subject tests, and keep up in school, all during the fall of senior year, is not a good idea. How long is it? The Math Level 1 test is composed of 50 multiple-choice questions. Students are given 60 minutes to complete the test. Should I take Level 1 or Level 2? This is a tough question to answer. • If you’re already in Calculus and you’ve been a solid A/B math student throughout high school, you should seriously consider Level 2. If you take a few practice tests and get really frustrated, however, experiment with Level 1. (For expert advice and guidance, purchase ePrep’s Math Level 2 study program or ePrep’s discounted Math Combo study program.) • If you’ve taken Pre-Calculus and are a solid A/B math student, you should experiment by taking practice tests in both Level 1 and Level 2. (For expert advice and guidance, purchase ePrep’s discounted Math Combo study program.) • If you’re finishing up Algebra II and you are a solid A math student, you should experiment by taking practice tests in both. With some help from ePrep, you may be able to handle the Level 2. (For expert advice and guidance, purchase ePrep’s discounted Math Combo study program or ePrep’s Math Level 1 study program.) • If you’re finishing up Algebra II and you’re a B/C math student, you should take Level 1. (For expert advice and guidance, purchase ePrep’s Math Level 1 study program.) Math Level 1 Math Level 2 Math Combo SAT Subject Test SAT Subject Test Level 1 & Level 2 Suggested Time to Complete At Your Pace At Your Pace At Your Pace Full-Length Practice Tests 3 3 6 Expert Video Answers 150 150 300 Online Subscription Access 365 Days 365 Days 365 Days Score Improvement Guarantee 100 Points 100 Points 100 points (each course) $99 $99 $149
{"url":"http://www.eprep.com/courses/sat-subject/math1","timestamp":"2014-04-18T13:18:46Z","content_type":null,"content_length":"22622","record_id":"<urn:uuid:3131cbb3-cd55-4ed6-91a9-77624b99b189>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Professor Frank Bonsall, DSc, FRS, FRSE. Distinguished mathematician who contributed to the debate on Munro definitions Frank Bonsall, Emeritus Professor of Mathematics at the University of Edinburgh, was a distinguished mathematician and a significant figure in the mathematical life of the United Kingdom in the post-war years, particularly in Scotland and the north of England. His family moved to Welwyn Garden City in 1923, where Frank attended Fretherne House Preparatory School before moving on to Bishops Stortford College. He went up to Merton College, Oxford in 1938 to read mathematics. He later recalled how much he enjoyed the freedom of university life during that first year at Oxford, as well as his first encounters with rigorous analysis, the area of mathematics that was to become his speciality. His university career was interrupted by war service in the Royal Engineers from 1940 to 1946. The final two years of this were spent in India testing equipment under jungle He returned to Oxford, graduating with First Class Honours, and met a fellow mathematics undergraduate, Gillian (Jill) Patrick; they married in 1947. He could have stayed on at Oxford as a graduate student but decided instead to accept a one-year temporary lectureship at the University of Edinburgh. The following year, he moved to a lectureship at Newcastle where, encouraged by Prof WW Rogosinski, he made a start in research. As a self-taught research mathematician, he benefited greatly from Rogosinski's influence. His early research was along fairly classical lines but he was soon attracted to more abstract analysis. This interest was reinforced when he spent the academic year 1950-51 with a strong research group at Stillwater, Oklahoma. There, he had his first opportunity for the serious study of functional analysis, a subject that unifies different parts of mathematical analysis within a single more abstract framework and which was to be the focus of his research for the rest of his life. He was appointed to the chair at Newcastle in 1959 when Rogosinski retired but returned to the University of Edinburgh in 1965 to the newly created McLaurin Chair. He built up active groups in functional analysis at both Newcastle and Edinburgh, supervising numerous research students and doing much to strengthen the position of the subject more generally across the UK. Of particular note was the key role he played in founding the North British Functional Analysis Seminar, one of the first inter-university seminars in mathematics, and a model for many others. He also took undergraduate teaching seriously; countless former students will remember his lectures for their elegance and lucidity. The characteristic of Bonsall's research work was its aesthetic simplicity. His interests were wide and his contributions as significant as they were influential. They were recognised in his election to the Royal Society of Edinburgh in 1966 and to the Royal Society in 1970. He was awarded the Senior Berwick Prize of the London Mathematical Society in 1966 and was president of the Edinburgh Mathematical Society in 1976-77. He never sought committee work but took such work seriously when it came his way. Over the years, he served on the council of the London Mathematical Society, committees of the Royal Society and the Science Research Council, and on numerous editorial boards. Beyond mathematics, Bonsall had a great interest in mountain climbing, ascending his 280th Munro in 1977. He also contributed to the debate as to when two close tops count as separate Munros (that is, as separate mountains of height at least 3,000 feet). In two articles in the Scottish Mountaineering Club Journal (SMC) in 1973 and 1974, Bonsall developed a rule for determining this. His rule yields a list very close to the original one compiled by Sir Hugh Munro in 1891 and has influenced some subsequent revisions of SMC's definitive list. When Bonsall retired in 1984, he and Jill moved to Harrogate, where he had more time to devote to gardening, his other great interest. His garden was described by one friend as no less than spectacular. At the same time, he maintained his interest in mathematics, attending seminars at the University of Leeds, where he was an Honorary Fellow, and at the University of York, which awarded him an honorary degree in 1990. The flow of research papers continued for some years, the last appearing in 2000, just two years before he and Jill moved into a retirement home. Frank Bonsall was a gifted mathematician for whom his students and colleagues had respect and affection in equal measure. He is survived by his wife. Alastair Gillespie Professor Frank Bonsall, DSc, FRS, FRSE. Born: 31 March, 1920, in London. Died: 22 February, 2011, in Harrogate, aged 90. 7 April 2011 © Scotsman
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Obits2/Bonsall_Scotsman.html","timestamp":"2014-04-16T04:33:27Z","content_type":null,"content_length":"5599","record_id":"<urn:uuid:38d7e7b3-64e4-489d-b80d-6708d8cf1545>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
This popular Indonesian game is a form of block dominoes with several variations. The following description is based on information from Leo Isen and Riza Purwo Nugroho. Players and Equipment A Western style double-six set of 28 dominoes is used. Four or five people can play. When there are four players there are two variations: • each player takes a hand of six tiles, leaving four tiles face down in the middle of the table; • each player takes seven tiles, leaving none in the middle. Four can play as individuals or in teams, partners facing each other. When there are five players, each takes a hand of five tiles leaving three face down on the table. The Play This is a "single train" game. Tiles are played end to end in a single line on the table. At your turn you can add one tile to the line, at either end, in such a way that the touching ends of the tiles match. In the four-player game with seven tiles each, the first hand is begun by the holder of the [0-0], who must play this tile to the empty table. Subsequent hands are begun by the winner of the previous hand, and this player must begin with a double. If the player who is to start has no double, the right to begin passes on to the next player in turn who has a double. In the four- or five-player game where not all the tiles are dealt, one of the tiles left in the middle of the table is turned face up to begin the layout. In the first player the starting player is chosen at random, and thereafter the winner of each hand starts the next. Play continues in rotation with each player in turn adding a matching tile to one end of a layout. A player who is unable to place a tile (having no tiles that match either end of the layout) must discard one tile face down. This "dead" tile can no longer be used but will count against the player in the final scoring. Variation: There is another version in which a player who cannot play does not discard a tile but simply passes. Play continues until either a player runs out of tiles or no more tiles can be played. Players counts the total number of pips on their unplayed tiles (including dead tiles that were discard during the game, if any). For example [4-4] counts 8 and [0-3] counts 3. Some play that the [0-0] counts 25 points if the player has no other unplayed tiles with blanks. But a player who had for example [0-0] and [0-5] unplayed would count just 5 points (nothing for the The winner is the player with least points. If players tie for least points, the winner is the one holding the single tile with the lowest pip count. If these are also tied the winner is the one whose lowest tile has the lowest end. • Player A has [1-1] and [1-3], player B has [1-5], player C has [2-4] and player D has [2-3]. Player D wins, having only 5 points while the ohers have 6. • Player A has [1-1] and [1-3], B has [0-6], C has [2-4] and D has [3-3]. Everyone has 6 points, so player A wins because he has the lowest single tile - the [1-1]. • Player A has [2-2] and [1-5], B has [1-3] and [3-3], C has [6-4] and D has [0-5] and [2-3]. Everyone has 10 points, and both A and B have a 4-point tile. B wins because the [1-3] has the lowest Sometimes the game is just played to find a winner of each deal. When playing in teams the winning team is the team to which the winner belongs (even if the winner's partner has more points than the losing team have in total). Alternatively, when playing for money or chips, the players may settle up according to their points, each player paying each other player according to the difference in their scores. Example: Player A has a total of 12 points, player B has a total of 5 points, player C has 11 points and player D has 7 points. Player B, who has the lowest score, collects (12-5 = 7) chips from A, (11-5 = 6) chips from C and (7-5 = 2) chips from D, winning 15 in total. The next lowest player is D, who collects (12-7 = 5) chips from A and (11-7 = 4) chips from C for a net gain of 7 (since he has paid 2 to A). Player C collects (12-11 = 1) chip from A and thus has a net loss of 9 (having paid out 10 to A and B) while D has paid out a total of 13 to the other players.
{"url":"http://www.pagat.com/tile/wdom/gaple.html","timestamp":"2014-04-18T23:16:19Z","content_type":null,"content_length":"9237","record_id":"<urn:uuid:72250290-5513-4b3d-8c70-0a7f6d97b53f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Find the Inverse of the function: f(x)= (x/6)^3 - 7 My answer was f^-1(x) = 6(x+7)^3 is this correct? • one year ago • one year ago Best Response You've already chosen the best response. You exponent is incorrect. Best Response You've already chosen the best response. so would i write it like this? f^-1(x) = [6(x+7)]3 Best Response You've already chosen the best response. ohhh or is it f^-1(x) = 6(x^3 + 7)? Best Response You've already chosen the best response. No, what you did wrong was to assume that \[(y/6)^3 =x+7 \to y/6=(x+7)^3\] That's not how you get rid of exponents. if \[y^4=x\to y=x^{1/4}\] Best Response You've already chosen the best response. okay, so it should be f-1(x) = 18(x + 7) then? Thats the only other answer i got... Best Response You've already chosen the best response. You first answer was correct EXCEPT the actual number you put in the exponent. if you had \[y=(x/2)^3-1\] then to find the inverse swap x and y and solve \[x=(y/2)^3-1\] then \[x+1=(y/2)^3\] this is where you did something weird. Where I would take the 1/3 root of both sides: \[(x+2)^{1/3} = ((y/2)^3)^{\frac{1}{3}}=y\] you took both sides to the third power. Best Response You've already chosen the best response. Oh okay, i see what i types wrong...thank you so much! Best Response You've already chosen the best response. Good luck! Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50aa57bde4b064039cbd30e2","timestamp":"2014-04-21T15:52:22Z","content_type":null,"content_length":"44686","record_id":"<urn:uuid:b8431df1-a864-4d09-8526-7b745ffb7387>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Features 1)Matlab calls the result as "ans" For example when you type at the command 4 + 6 and press "enter" the result is displayed as ans = 10. You can also save the result in a variable let us say x .Just type x = 4 + 6 at the command and press "enter" the result is displayed as x = 10. 2)Simple Math Operations in Matlab │ Operation │ Symbol │ Example │ │ addition ,a+b │ + │ 3 + 4.2 │ │ subtraction ,a-b │ - │ 4-2.4 │ │ multiplication ,a . b │ * │ 3*5 │ │ division ,a / b │ / │ 56/8 │ │ exponential function ,a power b │ ^ │ 2^7 │ The evaluation of expressions always take from left to right with the order of precedence being: b)exponential function("^") c)multiplication (*),division(/) d)addition (+), subtraction (-) • Use parenthesis as much possible for better readability and understandability. • parenthesis are evaluated from innermost parenthesis to the outer most parenthesis. Matlab Workspace 1)Matlab remembers the commands and variables as they are typed in a workspace called "Matlab Workspace". 2)The command "who" displays all the variables present in the workspace at any instant of time. 3)The cursor arrows can be used at the command prompt to scroll through the commands typed in the particular session. 4)The command "clear" deletes the variables present in the workspace. Example: Type a =3 ,b=4 at the command prompt. Type "who" at command prompt (displays the variables present in the session ) Scroll "UP" arrow key of the keyboard (The commands typed earlier are displayed at the command prompt) Now Type "a" and press enter(Value of "a" is displayed) Now Type "clear" (Clears the variables present in the memory) For a check now type "who" (Nothing is displayed). 1)Variable names case sensitive. 2)They may contain up to 31 characters. 3)Variables must start with a letter followed by a characters or numbers or under score. 4)Punctuation marks are not allowed in the variable names as they have a special meaning. 5)Apart from the variables matlab has some in built variables ,Some of them are │ Special variables │ Description │ │ pi │ Ratio of circumference to diameter of a circle(22/7) │ │ i(and) j │ Square Root (-1) │ │ nargin │ Number of function input arguments used. │ │ nargout │ Number of function output arguments used. │ • Values stored in a variable is erased when a new value is assigned to it. • Special variables can be assigned any value but when matlab is restarted or after execution of clear command the original values are restored . Comments and Punctuation 1)A comment can be written by using a "%" at the beginning of the comment. Example : a = 4 %This is a comment. 2)Two or more matlab statements can be placed on the same line if they are separated by a comma or a semicolon. 3)A semicolon after a statement suppresses the value to be printed at the command line . Example : a = 5;(The value of a is not displayed at the command) 4)"Ctrl + C" is pressed to stop the matlab processing .(e.g. Can be used to stop an infinite loop process) Complex Numbers 1)Assignment of a complex number to a variable "a" can be done in several ways a = x +yi(e.g. a = 2+6i) b = x+yi (e.g. a = 2+6j) c = x +sqrt(-1)*y(e.g. a = 2+sqrt(-1)*6) 2) real(a) Returns the real part of the complex number. imag(a) Returns the Imaginary part of the complex number. (e.g. real(2+6i) returns 2 and imag(2+6i) returns 6) 3) let the Polar form of the a+ib be M<x = M.e^jx [x is the angle of gradient and M is the Magnitude].Matlab provides functions to convert from rectangular form to the polar form. 4)abs(a) Returns the magnitude of a [ abs(a) = sqrt(a^2 + b^2)] 5)angle(a) Returns the gradient in radians [angle(a) = atan(b/a)] e.g. abs(2+6i) = 6.3246 and angle(2+6i) = 1.2490
{"url":"http://www.ece.utah.edu/~cfurse/Tutorials/matlab/basicfeatures.html","timestamp":"2014-04-16T16:19:18Z","content_type":null,"content_length":"6924","record_id":"<urn:uuid:10c3cb58-f501-41a0-9229-11d8aeef3986>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: Minimal Surfaces Extend Shortest Path Segmentation Methods to 3D February 2010 (vol. 32 no. 2) pp. 321-334 ASCII Text x Leo Grady, "Minimal Surfaces Extend Shortest Path Segmentation Methods to 3D," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 321-334, February, 2010. BibTex x @article{ 10.1109/TPAMI.2008.289, author = {Leo Grady}, title = {Minimal Surfaces Extend Shortest Path Segmentation Methods to 3D}, journal ={IEEE Transactions on Pattern Analysis and Machine Intelligence}, volume = {32}, number = {2}, issn = {0162-8828}, year = {2010}, pages = {321-334}, doi = {http://doi.ieeecomputersociety.org/10.1109/TPAMI.2008.289}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Pattern Analysis and Machine Intelligence TI - Minimal Surfaces Extend Shortest Path Segmentation Methods to 3D IS - 2 SN - 0162-8828 EPD - 321-334 A1 - Leo Grady, PY - 2010 KW - 3D image segmentation KW - minimal surfaces KW - shortest paths KW - Dijkstra's algorithm KW - boundary operator KW - total unimodularity KW - linear programming KW - minimum-cost circulation network flow. VL - 32 JA - IEEE Transactions on Pattern Analysis and Machine Intelligence ER - Shortest paths have been used to segment object boundaries with both continuous and discrete image models. Although these techniques are well defined in 2D, the character of the path as an object boundary is not preserved in 3D. An object boundary in three dimensions is a 2D surface. However, many different extensions of the shortest path techniques to 3D have been previously proposed in which the 3D object is segmented via a collection of shortest paths rather than a minimal surface, leading to a solution which bears an uncertain relationship to the true minimal surface. Specifically, there is no guarantee that a minimal path between points on two closed contours will lie on the minimal surface joining these contours. We observe that an elegant solution to the computation of a minimal surface on a cellular complex (e.g., a 3D lattice) was given by Sullivan [47]. Sullivan showed that the discrete minimal surface connecting one or more closed contours may be found efficiently by solving a Minimum-cost Circulation Network Flow (MCNF) problem. In this work, we detail why a minimal surface properly extends a shortest path (in the context of a boundary) to three dimensions, present Sullivan's solution to this minimal surface problem via an MCNF calculation, and demonstrate the use of these minimal surfaces on the segmentation of image data. [1] K. Aoshima and M. Iri, “Comments on F. Hadlock's Paper: Finding a Maximum Cut of a Planar Graph in Polynomial Time,” SIAM J. Computing, vol. 6, pp. 86-87, 1977. [2] B. Appleton and H. Talbot, “Globally Optimal Surfaces by Continuous Maximal Flows,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 106-118, Jan. 2006. [3] R. Ardon and L.D. Cohen, “Fast Constrained Surface Extraction by Minimal Paths,” Int'l J. Computer Vision, vol. 69, no. 1, pp. 127-136, Aug. 2006. [4] R. Ardon, L.D. Cohen, and A. Yezzi, “A New Implicit Method for Surface Segmentation by Minimal Paths: Applications in 3D Medical Images,” Proc. Int'l Workshop Energy Minimization Methods in Computer Vision and Pattern Recognition, A.Rangarajan, ed., pp.520-535, 2005. [5] C.J. Armstrong, W.A. Barrett, and B. Price, “Live Surface,” Proc. Volume Graphics '06, vol. 22, pp. 661-670, Sept. 2006. [6] N. Biggs, Algebraic Graph Theory. Cambridge Univ. Press, 1974. [7] I. Bitter, A.E. Kaufman, and M. Sato, “Penalized-Distance Volumetric Skeleton Algorithm,” IEEE Trans. Visualization and Computer Graphics, vol. 7, no. 3, pp. 195-206, July-Sept. 2001. [8] M.J. Black, G. Sapiro, D.H. Marimont, and D. Heeger, “Robust Anisotropic Diffusion,” IEEE Trans. Image Processing, vol. 7, no. 3, pp. 421-432, Mar. 1998. [9] Y. Boykov and M.-P. Jolly, “Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images,” Proc. Int'l Conf. Computer Vision, pp. 105-112, 2001. [10] Y. Boykov and V. Kolmogorov, “Computing Geodesics and Minimal Surfaces via Graph Cuts,” Proc. Int'l Conf. Computer Vision, vol. 1, Oct. 2003. [11] A.J. Briggs, C. Detweiler, D. Scharstein, M. College, and A. Vandenberg-Rodes, “Expected Shortest Paths for Landmark-Based Robot Navigation,” Int'l J. Robotics Research, vol. 23, nos.7/8, pp. 717-728, 2004. [12] C. Buehler, S.J. Gortler, M.F. Cohen, and L. McMillan, “Minimal Surfaces for Stereo,” Proc. Seventh European Conf. Computer Vision, vol. III, pp. 885-899, May 2002. [13] L. Cohen and T. Deschamps, “Grouping Connected Components Using Minimal Path Techniques. Application to Reconstruction of Vessels in 2D and 3D Images,” Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, vol. 2, pp 102-109, 2001. [14] L.D. Cohen and R. Kimmel, “Global Minimum for Active Contour Models: A Minimal Path Approach,” Int'l J. Computer Vision, vol. 24, no. 1, pp. 57-78, 1997. [15] A.X. Falcão and J.K. Udupa, “A 3D Generalization of User-Steered Live-Wire Segmentation,” Medical Image Analysis, vol. 4, pp. 389-402, 2000. [16] A.X. Falcão, J.K. Udupa, S. Samarasekera, S. Sharma, B.H. Elliot, and R. de A. Lotufo, “User-Steered Image Segmentation Paradigms: Live Wire and Live Lane,” Graphical Models and Image Processing, vol. 60, no. 4, pp. 233-260, 1998. [17] L.R. Ford and D.R. Fulkerson, “A Primal-Dual Algorithm for the Capacitated Hitchcock Problem,” Naval Research Logistics Quarterly, vol. 4, pp. 47-54, 1957. [18] L.R. Ford and D.R. Fulkerson, Flows in Networks. Princeton Univ. Press, 1962. [19] J. Forrest, D. de la Nuez, and R. Lougee-Heimer, CLP User Guide. IBM Research, 2004. [20] A.V. Goldberg, E. Tardos, and R.E. Tarjan, “Network Flow Algorithms,” Paths, Flows and VLSI-Design, B. Korte, L. Lovasz, H. Proemel, and A. Schrijver, eds., pp. 101-164, Springer-Verlag, 1990. [21] L. Grady, “Computing Exact Discrete Minimal Surfaces: Extending and Solving the Shortest Path Problem in 3D with Application to Segmentation,” Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 69-78, June 2006. [22] F. Hadlock, “Finding a Maximum Cut of a Planar Graph in Polynomial Time,” SIAM J. Computing, vol. 4, no. 3, pp. 221-225, 1975. [23] G. Hamarneh, J. Yang, C. McIntosh, and M. Langille, “3D Live-Wire-Based Semi-Automatic Segmentation of Medical Images,” Proc. SPIE Medical Imaging '05: Image Processing, pp. 1597-1603, 2005. [24] P.J. Hilton and S. Wylie, Homology Theory. Cambridge Univ. Press, 1960. [25] D. Kirsanov, “Minimal Discrete Curves and Surfaces,” PhD thesis, Harvard Univ., 2004. [26] M. Knapp, A. Kanitsar, and M.E. Gröller, “Semi-Automatic Topology Independent Contour-Based 2${1\over 2}$ D Segmentation Using Live-Wire,” J. WSCG, vol. 12, no. 2, pp. 229-236, 2004. [27] V. Kolmogorov, “Primal-Dual Algorithm for Convex Markov Random Fields,” Technical Report MSR-TR-2005-117, Microsoft, Sept. 2005. [28] V. Kolmogorov and C. Rother, “Minimizing Nonsubmodular Functions with Graph Cuts—A Review,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 7, pp. 1274-1279, July 2007. [29] S. König and J. Hesser, “3D Live-Wires on Pre-Segmented Volume Data,” Proc. SPIE Medical Imaging '05: Image Processing, pp. 1674-1679, 2005. [30] S. Lefschetz, Algebraic Topology, vol. 27. Am. Math. Soc. Colloquium Publications, 1942. [31] K. Li, X. Wu, D.Z. Chen, and M. Sonka, “Optimal Surface Segmentation in Volumetric Images—A Graph-Theoretic Approach,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 119-134, Jan. 2006. [32] C. Mattiussi, “The Finite Volume, Finite Element and Finite Difference Methods as Numerical Methods for Physical Field Problems,” Advances in Imaging and Electron Physics, pp. 1-146, Academic Press, Inc., Apr. 2000. [33] F. Morgan, Geometric Measure Theory, third ed. Academic Press, 2000. [34] E. Mortensen and W. Barrett, “Interactive Segmentation with Intelligent Scissors,” Graphical Models in Image Processing, vol. 60, no. 5, pp. 349-384, 1998. [35] G.L. Nemhauser and L.A. Wolsey, Integer and Combinatorial Optimization. John Wiley & Sons, 1999. [36] S. Okada, “On Mesh and Node Determinants,” Proc. IRE, vol. 43, p. 1527, 1955. [37] C.H. Papadimitriou and K. Steiglitz, Combinatorial Optimization. Dover, 1998. [38] S.V. Porter, M. Mirmehdi, and B.T. Thomas, “A Shortest Path Representation for Video Summarisation,” Proc. 12th Int'l Conf. Image Analysis and Processing, pp. 460-465, Sept. 2003. [39] S. Roy and I. Cox, “A Maximum-Flow Formulation of the n-Camera Stereo Correspondence Problem,” Proc. Int'l Conf. Computer Vision, pp. 492-499, 1998. [40] Z. Salah and J.O.D. Bartz, “Live-Wire Revisited,” Proc. Workshop Bildverarbeitung in der Medizin, pp. 158-162, 2005. [41] A. Schenk, G. Prause, and H.-O. Peitgen, “Efficient Semiautomatic Segmentation of 3D Objects in Medical Images,” Proc. Int'l Conf. Medical Image Computing and Computer-Assisted Intervention, pp. 186-195, 2000. [42] A. Schenk, G. Prause, and H.-O. Peitgen, “Local Cost Computation for Efficient Segmentation of 3D Objects with Live Wire,” Proc. SPIE Medical Imaging, M. Sonka and K.M. Hanson, eds., pp.1357-1364, 2001. [43] S. Seshu, “The Mesh Counterpart of Shekel's Theorem,” Proc. IRE, vol. 43, p. 342, 1955. [44] J.A. Sethian, “A Fast Marching Level Set Method for Monotonically Advancing Fronts,” Proc. Nat'l Academy of Sciences USA, vol. 93, no. 4, pp. 1591-1595, 1996. [45] J. Shi and J. Malik, “Normalized Cuts and Image Segmentation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, Aug. 2000. [46] W.-K. Shih, S. Wu, and Y.S. Kuo, “Unifying Maximum Cut and Minimum Cut of a Planar Graph,” IEEE Trans. Computers, vol. 39, no. 5, pp. 694-697, May 1990. [47] J.M. Sullivan, “A Crystalline Approximation Theorem for Hypersurfaces,” PhD thesis, Princeton Univ., Oct. 1990. [48] C. Sun, “Fast Optical Flow Using 3D Shortest Path Techniques,” Image and Vision Computing, vol. 20, nos. 13/14, pp. 981-991, Dec. 2002. [49] E. Tonti, “On the Geometrical Structure of Electromagnetism,” Gravitation, Electromagnetism and Geometrical Structures, G.Ferraese, ed., pp. 281-308, Pitagora, 1996. [50] K. Truemper, “Algebraic Characterizations of Unimodular Matrices,” SIAM J. Applied Math., vol. 35, no. 2, pp. 328-332, Sept. 1978. [51] J.N. Tsitsiklis, “Efficient Algorithms for Globally Optimal Trajectories,” IEEE Trans. Automatic Control, vol. 40, no. 9, pp. 1528-1538, Sept. 1995. [52] A.J. Zomorodian, Topology for Computing. Cambridge Univ. Press, 2005. Index Terms: 3D image segmentation, minimal surfaces, shortest paths, Dijkstra's algorithm, boundary operator, total unimodularity, linear programming, minimum-cost circulation network flow. Leo Grady, "Minimal Surfaces Extend Shortest Path Segmentation Methods to 3D," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 321-334, Feb. 2010, doi:10.1109/ Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tp/2010/02/ttp2010020321-abs.html","timestamp":"2014-04-16T07:36:19Z","content_type":null,"content_length":"60248","record_id":"<urn:uuid:7dfdd871-6349-4bfd-9ca1-e2a547857b6e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Public digital signature with secret message (OR an alternative to password salting) No Profile Picture Registered User Devshed Newbie (0 - 499 posts) Join Date Jun 2011 Rep Power Public digital signature with secret message (OR an alternative to password salting) I want to store the signature of a secret message and a public key, without storing the message itself or the private key that was used to sign it. If I receive the same message in the future, I need to be able to verify that it matches the original signature using only the public key. This sounds like a job for a digital signature algorithm like RSA-PSS or DSA, but I need to to guarantee that the message itself is not revealed if the signature were made public. I don't know which, if any, signature algorithms have this property. In fact, I'm pretty sure RSA-PSS and PKCS1-v1_5 do not. What about DSA? Rabin? BLS? Can anyone help me out here? Here's my concrete use case: verifying passwords without storing them. The pervasive thinking is to hash passwords with a salt, either concatenated with the password before hashing or as an HMAC key. While this renders big precomputed dictionaries ineffective, it doesn't prevent an attacker who's compromised a system from generating new dictionaries once he knows the salts. I believe the current state of the art is to employ a key derivation function or other form of key stretching, which makes the former attack infeasible. Key stretching may be the best approach, but I'm wondering why I've never seen anyone store passwords as digital signatures as an alternative salting. Here's how it would work to store a new password: 1. Generate a public/private key pair 2. Sign the password with the private key 3. Discard the private key 4. Store the signature and the public key As long as the password itself (or the password digest depending on the algorithm) cannot be extracted from the signature, I think this would work great. Any thoughts? Are there any digital signature algorithms where the signed message is not compromised if the signature is made public along with the public key? This is not a good way to protect your database. First in PKCS1.5 or RSA-PSS signatures, the hash can be recovered. If the attacker has the public key (which seams likely since the server will need it and you have it included in the database) they can recover the hash and your passwords are no more secure than if you ran them through an unsalted SHA (or your hash of choice)--i.e. making them less secure than the traditional salted hash. Other algorithms like DSA don't provide a way to recover the hash of the message being signed. However I would not trust this to protect your passwords. Using a crypto non-traditionally is always bad since there is little to no research to prove the security of the implementation. Also you've only changed the dictionary attack to perform a DSA-verify between the dictionary of hashes and the user's password (yes it's more expensive, but again that's not a well-researched use of DSA--there may be flaws one can exploit). The design of signature algorithms is working against you here. The idea is to make it infeasible to forge a signature while leaving it easy to verify a signature--I.e. easy to perform a dictionary attack. (Since you usually don't sign 8-byte messages composed of common words this isn't a big deal.) For protecting passwords what you really want is an algorithm that is expensive to verify--like how bcrypt runs a few hundred salted hashes over the input. Current methods for improving protection on passwords involve: * Using a different salt for every user so that a separate dictionary would have to be computed to attack each password. * Using an expensive hashing algorithm such as bcrypt or HMACs to make it that much harder to compute a dictionary. (edit: Generating a new key pair is a rather costly operation. I'd worry that your server could be DOS'd by an attacker firing off a stream of password-change requests) Last edited by OmegaZero; June 24th, 2011 at 05:37 PM. sub{*{$::{$_}}{CODE}==$_[0]&& print for(%:: )}->(\&Meh); To save you the trouble of reading my entire post, let me start by saying that no asymmetric encryption algorithms consist of a public key that cannot decrypt messages encrypted using the private key, as this defeats the entire point of asymmetric encryption. This fact alone defeats your idea, but if you keep reading I will explain why. In your particular use-case, it would be a very bad idea to encrypt the original password using the private key and then save that as your signature. If you encrypt the password using the private key and then store the encrypted password along with the public key, and attacker can simply obtain the password by decrypting the encrypted message using the stored public key (assuming that if they can obtain the encrypted password they can also obtain the public key). If you first hash the password and then encrypt the hash using the private key, then it is impossible for the attacker to easily recover the original password using only the public key and the signature. However, if the attacker has the public key and the signature, then they can decrypt the signature and get the hash of the password, and they are right back at the step of attacking a hash (this is why your algorithm is pointless). Since this decryption can be done in essentially O(1) time, there is no benefit in terms of security to this. Even worse, if you hadn't salted the hash as part of the algorithm then it could easily be attacked using a rainbow table. I need to to guarantee that the message itself is not revealed if the signature were made public That is essentially the definition of a hash. In a digital signature algorithm, it is assumed that the receiver already has the message; thus there is no requirement for the signature to protect the message (although signatures are frequently based on hashing algorithms, so often they do actually protect the message, but it's the hash part of the algorithm that protects the message, not the digital signature part). PHP FAQ Originally Posted by Spad Ah USB, the only rectangular connector where you have to make 3 attempts before you get it the right way around Thank you both for your comments. Originally Posted by OmegaZero First in PKCS1.5 or RSA-PSS signatures, the hash can be recovered. If the attacker has the public key (which seams likely since the server will need it and you have it included in the database) they can recover the hash and your passwords are no more secure than if you ran them through an unsalted SHA (or your hash of choice)--i.e. making them less secure than the traditional salted Agreed, of course. This was the basic premise of my original post: which algorithms don't provide a way to recover the message or message digest from the signature? Originally Posted by OmegaZero Other algorithms like DSA don't provide a way to recover the hash of the message being signed. However I would not trust this to protect your passwords. Using a crypto non-traditionally is always I absolutely agree that crypto technologies shouldn't be used beyond their intended purpose, but I'm suggesting that by restating the password protection problem it can be seen as a subset of the problems that digital signatures are designed to address. With the only added requirement being that the signature itself does not reveal the message or message digest. I suspect the presence of this feature can be easily proved or disproved for DSA. Originally Posted by OmegaZero Also you've only changed the dictionary attack to perform a DSA-verify between the dictionary of hashes and the user's password... Yes. I'm not trying to contrive a solution to that; I'm simply exploring an alternative to salting, which is vulnerable to the same attack. E-Oreo, no offense, but your comments seem a little off target. Originally Posted by E-Oreo To save you the trouble of reading my entire post, let me start by saying that no asymmetric encryption algorithms consist of a public key that cannot decrypt messages encrypted using the private key, as this defeats the entire point of asymmetric encryption. No one is disputing this. Originally Posted by E-Oreo In your particular use-case, it would be a very bad idea to encrypt the original password using the private key and then save that as your signature... I'm not talking about encrypting passwords. I'm talking about actual digital signatures using signature algorithms, not encryption algorithms. Originally Posted by E-Oreo If you first hash the password and then encrypt the hash using the private key, then it is impossible for the attacker to easily recover the original password using only the public key and the signature. However, if the attacker has the public key and the signature, then they can decrypt the signature and get the hash of the password, and they are right back at the step of attacking a hash (this is why your algorithm is pointless). You've basically described the RSA digital signature algorithms, which were ruled out from the get-go. Why is knowing the hash a disadvantage? The whole point is that you can't easily reverse back to the original string that results in that hash. If you create a HMAC and run that operation several hundred times, there should be no issue. At the end of the day, if an attacker is snooping around in your database, regardless of the form you chose, he/she will still have deterministic output from the original input. Don't go trying to invent a method - chances are it won't be secure. You'd also be wise to try and prevent your database being accessed in the first place. Security isn't one thing in isolation, less so when you make it up. Best regards, No Profile Picture Contributing User Devshed Novice (500 - 999 posts) Join Date May 2007 Rep Power No Profile Picture Lost in code Devshed Supreme Being (6500+ posts) Join Date Dec 2004 Rep Power No Profile Picture Registered User Devshed Newbie (0 - 499 posts) Join Date Jun 2011 Rep Power No Profile Picture Contributing User Devshed Novice (500 - 999 posts) Join Date Feb 2008 Rep Power
{"url":"http://forums.devshed.com/security-cryptography/827512-public-digital-signature-secret-message-alternative-password-salting-last-post.html","timestamp":"2014-04-20T00:05:42Z","content_type":null,"content_length":"77325","record_id":"<urn:uuid:bfba7274-415e-4785-ae53-fbd2c5280008>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Windows Version of TGFF 3.1 (binary only) Original design: David Rhodes and Robert Dick. Subsequent development: Keith Vallerio. Task Graphs For Free (TGFF) was designed to provide a flexible and standard way of generating pseudo-random task-graphs for use in scheduling and allocation research. This includes the areas of embedded systems, hardware/software co-design, operating systems (both real-time and general-purpose), parallel or distributed hardware or software studies, flow-shop scheduling, as well as any other area which requires problem instances consisting of partially-ordered or directed acyclic graphs (DAGs) of tasks, i.e., task-graphs. While TGFF strives to be general-purpose, it generates graphs and associated resource parameters in accordance with the user's parameterized graph and database specifications and thereby allows the generation of graphs tailored to particular domains. The task-graphs (DAGs) generated by TGFF can be readily restricted to out-trees via user parameters. For example, a random out-tree can be generated by limiting the in-degree parameter to 1. Series-parallel graphs, graphs that consist of multiple chains of sequentially linked nodes, can be generated using the new algorithm. The resulting graphs may be viewed as a postscript file or using VCG. (An example comparing eps and vcg versions of TGFF output can be found here.) Either periodic or aperiodic cases can be generated. For periodic cases, the user specifies a set of integer period-multipliers which determine the relative periods of the multiple task-graphs in each case. For example, least-common multiple (LCM) solution approaches can be hampered by choosing relatively prime numbers. A period laxity factor is used to determine the relative laxity of the task-graph's period with respect to the deadlines of its tasks. Deadlines for each childless task within all task-graphs are determined based on the task's level and a deadline laxity factor. Challenging cases can be generated by making the laxity small while relatively easier instances can be created by making it large. For task-sets produced with TGFF's default settings, there is no guarantee of feasibility for the schedules produced. For another deadline setting approach, see the packed schedules feature. (Please note that packed schedules may not be used with some of the new features. Notably, the VCG output and series-parallel task graph options are not available for packed schedules.) Processor, communication, and task attributes can be declared and related in a correlated manner along with an inter-task communication data-size attribute. The program and its internal pseudo-random number generator are written in C++ and are portable to architectures with 32 bit floating point numbers which have 24 bit mantissas. This allows researchers to generate identical task graphs, given identical parameters and input seeds, Enabling direct comparison of their schedulers/allocators. The program provides the primary output in a text file and also generates an encapsulated PostScript file depicting the graphs. An example of the PostScript output is shown at the top of this page. Download the source code Download paper (Old algorithm only) Download documentation (pdf) Other links Extension suggestions Packed schedules Version notes Romanian translation of website by Aleksandra Seremina. Page maintained by Robert Dick.
{"url":"http://ziyang.eecs.umich.edu/~dickrp/tgff/","timestamp":"2014-04-17T22:27:03Z","content_type":null,"content_length":"5524","record_id":"<urn:uuid:a69c94a9-1045-493b-a752-18cba15e3aa8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Part 5. Appendixes XL Fortran for AIX 8.1 Language Reference This information is provided for the benefit of FORTRAN 77 users who are unfamiliar with Fortran 95, Fortran 90 and XL Fortran. Except as noted here, the Fortran 90 and Fortran 95 standards are upward-compatible extensions to the preceding Fortran International Standard, ISO 1539-1:1980, informally referred to as FORTRAN 77. Any standard-conforming FORTRAN 77 program remains standard-conforming under the Fortran 90 standard, except as noted under item 4 below regarding intrinsic procedures. Any standard-conforming FORTRAN 77 program remains standard-conforming under the Fortran 95 standard as long as none of the deleted features are used in the program, except as noted under item 4 below regarding intrinsic procedures. The Fortran 90 and Fortran 95 standard restricts the behavior of some features that are processor-dependent in FORTRAN 77. Therefore, a standard-conforming FORTRAN 77 program that uses one of these processor-dependent features may have a different interpretation under the Fortran 90 or Fortran 95 standard, yet remain a standard-conforming program. The following FORTRAN 77 features have different interpretations in Fortran 90 and Fortran 95: 1. FORTRAN 77 permitted a processor to supply more precision derived from a real constant than can be contained in a real datum when the constant is used to initialize a DOUBLE PRECISION data object in a DATA statement. Fortran 90 and Fortran 95 do not permit a processor this option. Previous releases of XL Fortran have been consistent with the Fortran 90 and Fortran 95 behavior. 2. If a named variable that is not in a common block is initialized in a DATA statement and does not have the SAVE attribute specified, FORTRAN 77 left its SAVE attribute processor-dependent. The Fortran 90 and Fortan 95 standards specify that this named variable has the SAVE attribute. Previous releases of XL Fortran have been consistent with the Fortran 90 and Fortran 95 behavior. 3. FORTRAN 77 required that the number of characters required by the input list must be less than or equal to the number of characters in the record during formatted input. The Fortran 90 and Fortran 95 standards specify that the input record is logically padded with blanks if there are not enough characters in the record, unless the PAD='NO' specifier is indicated in an appropriate OPEN statement. With XL Fortran, the input record is not padded with blanks if the noblankpad suboption of the -qxlf77 compiler option is specified. 4. The Fortran 90 and Fortan 95 standards have more intrinsic functions than FORTRAN 77, in addition to a few intrinsic subroutines. Therefore, a standard-conforming FORTRAN 77 program may have a different interpretation under Fortran 90 and Fortran 95 if it invokes a procedure having the same name as one of the new standard intrinsic procedures, unless that procedure is specified in an EXTERNAL statement. With XL Fortran, the -qextern compiler option also treats specified names as if they appear in an EXTERNAL statement. 5. In Fortran 95, for some edit descriptors a value of 0 for a list item in a formatted output statement will be formatted differently. In addition, the Fortran 95 standard unlike the FORTRAN 77 standard specifies how rounding of values will affect the output field form. Therefore, for certain combinations of values and edit descriptors FORTRAN 77 processors may produce a different output form than Fortran 95 processors. 6. Fortran 95 allows a processor to distinguish between a positive and a negative real zero, whereas Fortran 90 did not. Fortran 95 changes the behavior of the SIGN intrinsic function when the second argument is negative real zero. [ Top of Page | Previous Page | Next Page | Table of Contents | Index ]
{"url":"http://sc.tamu.edu/IBM.Tutorial/docs/Compilers/xlf_8.1/html/lr421.HTM","timestamp":"2014-04-18T05:31:39Z","content_type":null,"content_length":"5923","record_id":"<urn:uuid:b9cfd309-f49f-43ce-bb2a-543b6ab73104>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
crossing over line October 29th 2009, 08:38 AM #1 Junior Member Oct 2009 Let $l$ be a line through $(x_0,y_0)$ cutting the line $k: y=mx+c$ and forming an angle $\alpha$. Prove that the equation of the line $l$ is $y-y_0=\frac{m-\tan\alpha}{1+m\tan\alpha}(x-x_0)$ or Hello GTK X Hunter I think you have an error in the second of these two equations. Here is what I believe to be the correct result. The equation of the line with gradient $m_1$ passing through the point $(x_0,y_0)$ is This makes an angle $\theta$, say, with the direction of the $x$-axis, where $\tan\theta = m_1$. The line $y = mx + c$ makes an angle $\phi$, say, with the $x$-axis, where $\tan\phi = m$. Then, depending on which side of the line $y = mx + c$ the angle $\alpha$ is made: $\theta= \phi\pm\alpha$ (See the attached diagram. The lines shown are not necessarily the actual lines, but simply parallel to them through a single point.) $\Rightarrow m_1=\tan\theta = \tan(\phi\pm\alpha)= \frac{\tan\phi \pm\tan\alpha}{1\mp\tan\phi\tan\alpha}$ $=\frac{m\pm\tan\alpha}{1\mp m\tan\alpha}$ So the two gradients are $\frac{m+\tan\alpha}{1-\tan\alpha}$ and $\frac{m-\tan\alpha}{1+\tan\alpha}$. The error in the question can be easily verified, because the gradient of one of the the given lines is $(-1) \times$ the gradient of the other. Therefore they make equal angles above and below the $x$-axis. They should, of course, make equal angles above and below the line $y = mx + c$. Thankyou so much grandad October 30th 2009, 05:06 AM #2 March 27th 2010, 01:04 AM #3 Junior Member Oct 2009
{"url":"http://mathhelpforum.com/pre-calculus/111175-crossing-over-line.html","timestamp":"2014-04-20T00:05:20Z","content_type":null,"content_length":"43224","record_id":"<urn:uuid:70566a44-7306-427f-852a-1b80cea49158>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Rockwall Prealgebra Tutor Find a Rockwall Prealgebra Tutor ...I'm a very intuitive person, and from experience in studying with, and teaching peers (not an uncommon experience in my history) I have a knack for seeing where they people get stuck and accordingly how to best help realign them on the right track. I also know many memory tricks, and tips for do... 17 Subjects: including prealgebra, reading, chemistry, geometry ...I am certifying in Spanish, but I was also a physics major until I was forced to choose between Spanish or physics and opted out of the physics major. I grew up in Texas, so naturally I've been around Spanish my whole life. By the time I got to high school I felt I had a grasp on the language, so I took German for three years instead. 12 Subjects: including prealgebra, Spanish, English, physics ...I was a tutor in college for students that needed help in math. I have a Master's degree in civil engineering and have practiced engineering for almost 40 years where math is important to performing my job. I hold a Master's Degree in Education with emphasize on instruction in math and science for grades 4th through 8th. 11 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...As a economic analyst, I developed an advanced MS Excel tool to mine raw data, organize and analyze it which saved our organization countless man-hours of work. I'm well versed in MS Excel functions and the writing of macros in VBA. I have more than three years of paid economics tutoring experience at the undergraduate level. 7 Subjects: including prealgebra, algebra 1, economics, business ...I have a lot of activities put together to assist students in understanding many of the Algebra II concepts. I taught for 10 years in surrounding districts. I taught Geometry each year and also taught ESL Geometry for 2 years in one district. 10 Subjects: including prealgebra, geometry, algebra 1, algebra 2 Nearby Cities With prealgebra Tutor Allen, TX prealgebra Tutors Balch Springs, TX prealgebra Tutors Duncanville, TX prealgebra Tutors Farmers Branch, TX prealgebra Tutors Garland, TX prealgebra Tutors Heath, TX prealgebra Tutors Highland Park, TX prealgebra Tutors Lancaster, TX prealgebra Tutors Lucas, TX prealgebra Tutors Mesquite, TX prealgebra Tutors Murphy, TX prealgebra Tutors Parker, TX prealgebra Tutors Rowlett prealgebra Tutors Sachse prealgebra Tutors Wylie prealgebra Tutors
{"url":"http://www.purplemath.com/rockwall_tx_prealgebra_tutors.php","timestamp":"2014-04-16T10:31:19Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:a95599eb-d5f4-45e1-8302-aa804ac3628e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Holography: Vasiliev's higher-spin theories and O(N) models In December 2009, Simone Giombi and Xi Yin of Harvard University wrote a fascinating 90-page paper, Higher Spin Gauge Theory and Holography: The Three-Point Functions They have offered a new nontrivial piece of evidence supporting a holographic duality that I will explain momentarily. They calculated three-point functions on both sides, to obtain a function of the spins "s". The result is a ratio of gamma functions involving these spins "s", describing the fields, often shifted by 1/2. And yes, it completely agrees on both sides. There are some entertaining technical details of their calculation, like the analytic continuation in spins, but you have to read the paper to learn more about them. What duality am I talking about? Well, on the bulk side, you have Vasiliev's (and Fradkin's - I've met this funny chap) higher-spin theories in AdS4, see the 1999 review Higher Spin Gauge Theories: Star-Product and AdS Space. On the boundary side, there is an O(N) model in 3 dimensions. Its matter fields - scalars - transform in the vector representations of O(N). And one should naturally add the "(phi^a.phi^a)^2" quartic self-interaction (although many interesting things already appear in the free theory). See the 2002 paper by Klebanov and Polyakov who conjectured the duality: AdS Dual of the Critical O(N) Vector Model The theories on both sides have been known and considered somewhat important before the duality was formulated in the paper above. For obvious reasons, such dualities that relate previously known objects are more interesting than the dualities that have to construct the other side of the equivalence from scratch (especially if the new side is awkward). Difference from ordinary gravitating vacua Now, the theories on both sides look somewhat unusual, in comparison e.g. with the "AdS5 x S5" vacuum of type IIB string theory which is equivalent to N=4 SU(N) gauge theory on the boundary. To understand the differences, and why the novelties probably match on both sides, I must say some things about the Vasiliev theories. The Vasiliev theories in the bulk - which are somewhat manageable and understood in 3+1 dimensions we discuss here as well as 2+1 dimensions (because the higher-spin representation theory of the Lorentz group is more straightforward than it is in higher dimensions) - are conveniently formulated with a large gauge invariance. What is it? In normal Yang-Mills theories, generalizing electromagnetism, the transformations (or at least their global versions) are generated with a "charge" such as the "electric charge". This quantity is a spacetime scalar i.e. a spin-zero field. The corresponding gauge field must be able to compensate the effects of a spacetime gradient of the transformation parameter - so it must be a spin-one field. Recall that "δ A_m = ∂_m λ". Similarly, spin-1/2 charges (supercharges) that generate supersymmetry transformations require a spin-3/2 field, the gravitino, to play the role of the "compensator" of the local supersymmetry transformations in the the theory. And if the conserved quantities are spin-one objects - actually only one of them, namely one energy-momentum vector - you need a spin-two field, the metric tensor (or perhaps its unusual generalizations with antisymmetric pairs of indices), to undo the effects of the coordinate transformations - of the local versions of the spacetime translations generated by the energy-momentum vector. The Coleman-Mandula-like theorems are enough to prove that the conserved quantities with spins exceeding one - which would lead to "gauge fields" whose spin exceeds two (spin-5/2 is already too much!) - can only be included in non-interacting and therefore uninteresting theories. Why? The number of components of the conserved tensors (or spintensors) would constrain the theory too much and the scattering amplitudes would essentially have to vanish. There exists another loophole and it is given by the Vasiliev theories. There can be higher-spin fields and their corresponding gauge fields. But the higher-spin generators actually have to have nonzero commutators. And their commutators inevitably include generators of an even higher spin. So once you go in this path, you inevitably end with a theory whose fields can have an arbitrarily high spin. The theory's interactions are still limited in some way but the tower of fields make this theory non-trivial, anyway. Now the real differences Even the "ordinary" perturbative string theory kind of contains fields of an arbitrarily high spin: the excited string modes. But they're massive, with masses being comparable to the string scale. However, in the Vasiliev theory, all the new high-spin fields are actually massless! Note that the AdS curvature reinterprets what the word "massless" actually does, depending on the fields' spin. Also, there is a very different dependence of the number of "species" of the fields as a function of the spin. In perturbative string theory, there exists a whole "Hagedorn tower" of states which is getting exponentially dense as you go to higher excited string states. The number of states grows exponentially with the spin - or exponentially with a power of the spin, which is qualitatively similar. The density of black hole microstates which is relevant at strong coupling (or truly high masses) morally continues with this exponential growth that is already seen in perturbative string But in the Vasiliev theories, the number of distinct fields only grows linearly with the spin. For different dimensions, it could perhaps grow as another power law but there's no exponential growth over there. There exist differences on the boundary side, too. In the N=4 d=4 gauge theory, the fields transform in the adjoint representation of SU(N). However, in the O(N) model dual to the Vasiliev theory, they transform in the fundamental representation of O(N). In the context of noncommutative geometry or Matrix theory, the difference between adjoint and fundamental representations is simple to describe (although not necessarily relevant for our Vasiliev AdS/CFT pair). If the matrix indices are chosen to encode the dependence of a function on two emergent noncommutative dimensions - think of a fuzzy sphere or a fuzzy torus - then the adjoint representation of U(N) produces fields that live in the bulk of a "membrane" while the fundamental representation of U(N) may give birth to fields that live on one-dimensional curves inside the membrane. For example, the O(N) heterotic E8 Matrix theory for the 11D space with one Hořava-Witten boundary has 16 fundamental fermionic fields which produce the extra E8 degrees of freedom on the boundaries of cylindrical boundaries. However, the difference between the fundamental and adjoint representations seems to be somewhat more dramatic in this Vasiliev case. The absence of the adjoint representation eliminates the whole Hagedorn tower of states and replaces it with a simpler "linear" (or "power-law") tower of higher-spin states. It's very questionable whether such a theory or vacuum without gravity and without the corresponding growth of the density of states should be considered a part of string theory - whether it is another solution of some unified equations of motion. We don't have the most universal definition of "string theory" yet so we only "add" things that are manifestly connected with the old ones but my answer would probably be "no" at this point: it doesn't seem to be "the same theory". Nevertheless, the holographic duality still seems to be working here. In some sense, the theory is less complex than the vacua of string theory so one should try to understand where the holography comes from as comprehensibly as possible. Automatic holography of AdS spaces There is one feature of the holography that may be puzzling: the bulk theory is not a typical theory of gravity, with the black holes etc. So why is it holographic in the first place? How can something that looks more like a "local field theory in the bulk without gravity" be equivalent to a field theory in a lower dimension? Well, the Vasiliev theory clearly can't be just another generic local field theory of the usual type. The infinite collection of the fields must actually yet paradoxically "reduce" its number of degrees of freedom in a similar way as in the gravitating vacua of string theory. But maybe, this is not really necessary. It's because holography is kind of automatic in any theory defined on anti de Sitter space. It's because the volume of a large (much larger than the AdS radius) "spherical" region of "AdS_{d}" is actually proportional to the surface "area" (multiplied by the AdS curvature radius). So most of the volume "sits" near the boundary, anyway. That's where the warp factor squeezes most of the volume. In this sense, one could expect any theory, and not just a gravitating theory, on the AdS space to be equivalent to a lower-dimensional theory. However, it's still true that it won't be a simple theory with a finite number of fields - like the O(N) model - if you start with a generic local field theory in the bulk. The Vasiliev theory is special in this regard. It's special because of some features that make it analogous to string theory even though the detailed realization of the "X-factor" needed to get a nice local boundary dual are different in both qualitative cases. During the last 20 years, theoretical physics has made a stunning progress in the understanding of the equivalence of - and the transmutations between - seemingly very different physical objects and environments. Various dualities and transitions have taught us that many things that we used to consider very different are in fact completely equivalent descriptions of the same physics, at least in certain situations. Charges are the same things as momenta and windings, electrically charged particles and magnetic monopoles are two descriptions of the same things, confinement may look like the Higgs mechanism, and so on. However, this unified perspective on all conceivable physical phenomena is not quite complete yet. That's why it's important to look at different types of physical models that are seemingly as constrained as the vacua of string theory (or almost) but the precise character of the constraints differs in some technical aspects. The Vasiliev theories may be an example of this cutting edge. Because they're similar to the normal stringy vacua but they may be simpler from some viewpoint, we should be able to fully understand these "toy models", shouldn't we? And that's the memo. snail feedback (2) : reader Brian G Valentine said... I measured the distance between hyperbola verticies on (Escher's representation of) Klein's model of the Lobachewskian plane to find a representation of the metric Escher used to prepare his reader Eugene said... Hi, Lubos! I would like to comment a little bit since it seems to me that you have wrote that there is no gravity in Vasiliev theory ("local field theory in the bulk without gravity"). In fact, there is a spin-two field in Vasiliev theory, so the gravity is included. There are also black-hole type solutions in this theory. One more comment is that the number of states with some fixed spin does not grows with the spin, it is a constant, e.g. the minimal theory containts the files of each spin from 0 to infinity in one copy. Thanks for this topic!
{"url":"http://motls.blogspot.com/2010/02/holography-vasilievs-higher-spin.html","timestamp":"2014-04-20T21:09:58Z","content_type":null,"content_length":"204162","record_id":"<urn:uuid:d3f95d96-6c47-47ef-b56f-155bb929ccca>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Ch. 8 Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support. Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with the best way to expand their education. Below is a small sample set of documents: Quinnipiac - EC - 271 EconomicsThe study of efficient utilization or management of limited productive resources which purpose is to attain maximum satisfaction from human material wants A limited amount of available resources to fulfill the unlimited wants Comparing mar Quinnipiac - EC - 111 Market Ceteris Paribus Demand and Supply CurvesAny mechanism that Brings Buyers and Sellers Together &quot;All else equals&quot; relationship between 2 Variables (Price &amp; Quantity) must be Held constant As Price Increases the Quantity Decreases, and vice-ver Quinnipiac - EC - 111 Channels of Distribution Retail Margin- The percentage of a product's retail selling price that the retailer receives as commission for selling that product. Channel Support- Funding a promotion campaign to encourage retailers to stock and promote yo University of Michigan - PSYCH - 111 POLSCI 160 Chapter 16 Notes: THE CAUSES OF WAR: STRUCTURAL ACCOUNTS WAR VERSUS NEGOTIATION: INDIVISIBILITY, UNCERTAINTY, AND COMMITMENT There are three factors that contribute to the risk of war. 1. Asymmetric information or in common parlance or unc University of Michigan - PSYCH - 111 POLSCI 111 Chapter 2 Notes: Constructing a Government: The Founding and the Constitution THE FIRST FOUNDING: INTERESTS AND CONFLICTS Five sectors of society had interests taht were important in colonial politics. 1. New England merchants. 2. Southern University of Michigan - PSYCH - 111 POLSCI 111 Chapter 4 Notes: The Constitutional Framework and the Individual CIVIL LIBERTIES: NATIONALIZING THE BILL OF RIGHTS Dual Citizenship University of Michigan - PSYCH - 111 POLSCI 111 CHAPTER 1: FIVE PRINCIPLES OF POLITICS There are several levels of government in the US. Federal, state, county, city, and town government exist and every state has a considerable amount of soveirgnty. There are 15 cabinet departments. MAK University of Michigan - PSYCH - 111 POLSCI 160 Chapter 7 Notes: What is Power? Power is divided into five parts. Persuasion, rewards, punishment, force, and fungibility of power. DEFINING POWER The ability of one actor, A, to get another actor, B, to do something B would not otherwise University of Michigan - PSYCH - 111 POLSCI 160 Chapter 4: INTERNATIONAL POLITICS FROM A STRUCTURAL PERSPECTIVE STRUCTURAL PERSPECTIVES Structural features are the characteristics of the set of nations that make up the international system, rather than the characteristics of any one par University of Michigan - PSYCH - 111 POLSCI 160 Chapter 2 Notes: Christopher Columbus and International Relations Decision Theory: How an individual weighs alternatives by taking probabilities, the likelihood that different possible outcomes will arise, and utilities, the value the choo University of Michigan - PSYCH - 111 POLSCI 160 Chapter 1 Notes: Modern Political Economic History and International Politics Policymakers must weigh the facts and logic of their circumstances in deciding what to do. If history is viewed as something that occurs by random chance, then t University of Michigan - PSYCH - 111 POLSCI 160 CHAPTER 2 NOTES: Evaluating arguments About International Politics THEORIES AS SIMPLIFICATIONS OF REALITY Theories: Provide a prospective explanation of reality. Theory provides a simplification of reality to allow people to say what is tr University of Michigan - POLISCI - 395 August Coup: This occurred in August of 1991 when a group of Soviet Government officials banded together to depose Gorbachev and take control of the country. They were hard-line members. They protested that the reforms were too radical and that it de University of Michigan - POLISCI - 160 POLSCI 160 Chapter 2 Notes: Christopher Columbus and International Relations Decision Theory: How an individual weighs alternatives by taking probabilities, the likelihood that different possible outcomes will arise, and utilities, the value the choo University of Michigan - POLISCI - 160 POLSCI 160 Chapter 4: INTERNATIONAL POLITICS FROM A STRUCTURAL PERSPECTIVE STRUCTURAL PERSPECTIVES Structural features are the characteristics of the set of nations that make up the international system, rather than the characteristics of any one par N. Arizona - BIO - 205 Heather Alden 11/06/07 Bio 205 Ebola: A Violent Disease Ebola virus, along with Marburg virus, makes up a family called filoviruses (Warfield 2007). Filoviruses cause severe hemorrhagic fever in humans and nonhuman primates (DiCarlo 2007). There are University of Michigan - POLISCI - 160 POLSCI 160 Chapter 1 Notes: Modern Political Economic History and International Politics Policymakers must weigh the facts and logic of their circumstances in deciding what to do. If history is viewed as something that occurs by random chance, then t University of Michigan - POLISCI - 160 POLSCI 160 CHAPTER 2 NOTES: Evaluating arguments About International Politics THEORIES AS SIMPLIFICATIONS OF REALITY Theories: Provide a prospective explanation of reality. Theory provides a simplification of reality to allow people to say what is tr Rice - STAT - 280 4: 2, 22, 23, 30, 34, 36, 40, 42, 50, 58Paul Tucker STAT 280 Sarah Thomas HW #34.2 a) Observational study; the study did not create control groups in order to isolate the response variable. b) because mobile phone use could not be isolated as the Rice - STAT - 280 STAT 280: Elementary Applied Statistics, Section 002 Spring 2008Instructor: Ms. Sarah Thomas email: sarah.thomas@rice.edu Office: 1041 Duncan Hall, phone: x6057 Mailbox: 1092 Duncan Hall (copy room) Office Hours: Monday 4-5pm (temporary) and by appt U. Houston - ACCT - intermedia View Attempt 1 of 4Title: Started: Submitted: Time spent: Review Exam February 14, 2008 10:31 PM February 15, 2008 12:18 AM 01:46:12 MaximumTotal score: 33/40 = 82.5% Total score adjusted by 0.0 possible score: 40 1.Generally Accepted Accounting U. Houston - ACCT - intermedia Total score: 38/40 = 95% Total score adjusted by 0.0 possible score: 40 1.MaximumThe body that has the power to prescribe the accounting practices and standards to be employed by companies that fall under its jurisdiction is the:Student Respons U. Houston - ACCT - intermedia 1. Generally accepted accounting principles:Student Response a. include detailed practices and procedures as well as broad guidelines of general application. b. are influenced by pronouncements of the SEC and IRS. c. change over time as the nature U. Houston - ACCT - intermedia View Attempt 3 of 4Title: Started: Submitted: Time spent: Review Exam February 16, 2008 1:29 PM February 16, 2008 2:59 PM 01:30:36 Total score adjusted by 0.0 Maximum possible score: 40Total score: 38/40 = 95% 1.Accounting principles are &quot;genera U. Houston - ACCT - intermedia View Attempt 2 of 4Title: Started: Submitted: Time spent: Review Exam February 15, 2008 3:12 PM February 15, 2008 4:57 PM 01:45:06 MaximumTotal score: 31/40 = 77.5% Total score adjusted by 0.0 possible score: 40 1.The body that has the power to Lehigh - PSYCH - 001 INTRODUCTION (Ch 1): Definition of psychology; goals of psychology. Psychology- scientific study of behavior and the mind. Goals-describe behavior, understand and predict behavior, optimize behavior The different influences on psychological thought ( Lehigh - PSYCH - 001 PSYC 001 - EXAM 2 STUDY GUIDE WHEN: Wednesday Feb. 27 FROM 9:10-10:00 WHERE: PA 101 BE SURE TO BRING YOUR STUDENT ID NUMBER AND TWO #2 LEAD PENCILS. If you have questions as you work through this study guide, here are several ways to get assistance: Lehigh - ENGL - 001 Dempsey 1Tim Dempsey American History X Paper 2American History X Heroes are everywhere. They are the police and the firefighters that protect us to the kid who stands up to the bully on the school yard. The only difference between the two is tha Lehigh - PSYCH - 001 PSYC 001 - STUDY GUIDE FOR EXAM III WHEN: MONDAY MARCH 31 from 9:10-10:00 WHERE: PA 101BE SURE TO BRING AT LEAST TWO #2 LEAD PENCILS TO THE EXAM. MAKE SURE YOU KNOW YOUR LEHIGH STUDENT ID NUMBER.PERSONALITY (CH 12): Psychoanalytic approaches: Id- Lehigh - AMERICAN M - 0 1 Hist 110 MidtermAnnihilation- To completely destroy and turn into nothing. This goes alongside with total war in which aims to completely destroy every part of an enemy. In battles of annihilation the army aims to destroy the enemy in that specif Youngstown - CRJUS - 5830 Blood Spatter Interpretation David RankinAbstract 3 27 FEB 2008The article is from the FBI Law Enforcement Bulletin and is written by Louis Akin. The article is titled &quot;Blood Spatter Interpretation at crime and accident scenes: a basic approach&quot; Youngstown - CRJUS - 5830 David P Rankin 18 March 2008 Violence in America: Topic Paper 2 Rape, by definition, is still not up to terms with todays society. According to the dictionary; rape is unlawful compelling of a woman through physical force or duress to have sexual int Youngstown - CRJUS - 5830 The Rational Choice Theory and Violence in America David P. Rankin CRJUS 5831The Rational Choice theory to me is the Occam's razor of the theories of violence, bluntly put, the easiest, simplest answer is usually the best and the right one. Same ho Youngstown - CRJUS - 5830 McLaughlin v. Florida 379 U.S 184 (1964) Court History: An interracial couple, identified in this case only as &quot;McLaughlin&quot; were prohibited from being married under Florida law, but decided to cohabit anyway, and they were subsequently convicted of F Youngstown - CRJUS - 5830 Data: The basics of Computer Forensics David RankinAbstract 2 13 Feb 2008The article is by Edward Pscheidt, who is a leading computer forensic authority, he provides expert witness testimony in complex litigations and the article is basically a l Wisconsin Milwaukee - BUS ADM - 455 Chapter 8Conduct of Monetary Policy: Tools, Goals, and TargetsThe Federal Reserve's Balance SheetThe conduct of monetary policy by the Federal Reserve involves actions that affect its balance sheet. This is a simplified version of its balance she Wisconsin Milwaukee - BUS ADM - 370 16SchedulingMcGraw-Hill/IrwinCopyright 2007 by The McGraw-Hill Companies, Inc. All rights reserved.Scheduling Scheduling: Establishing the timing of the use of equipment, facilities and human activities in an organization Effective schedul UC Riverside - BIO - 5c Ch 50 50.1 Organisms are open systems Ecology organism interaction with the environment o Determines distribution and abundance of organisms Ecological time (minutes, months, years) Evolutionary time (decades, centuries, millennia) Abiotic f UC Riverside - ETST - 001 Daniel SarkisRace and Ethnicity Prof: Dr. Lowy TA: Jake Section: 27 (TR 10:10-11:00) Topic #7Page 1Stannard defines genocide as &quot;the coordinated and planned annihilation of a national, religious, or racial group by a variety of actions aimed at Ithaca College - ICSM - 10107 Diviya Agrawal Collapse Chapter 14 Summary9/26/07In Chapter 14, Diamond gives various reasons for why societies make decisions which lead to the collapse of their society. One reason Diamond suggests is that a society may not see a problem coming UC Riverside - ETST - 001 Daniel SarkisRace and Ethnicity Prof: Dr. Lowy TA: Jake Section: 27 (TR 10:10-11:00) Topic #8 ImmigrationPage 1Westward expansion started with Manifest Destiny as European explorers felt that it was their mission sent by God to expand over all o Ithaca College - ICSM - 10107 Diviya Agrawal Chapter 15 Chapter 15 talks mostly about the relationship between big corporations and environmentalists. Some key ideas: -businesses can profit from hurting people and the environment in the short term (ex. fisherman in unmanaged fish UC Riverside - BIO - 5c Ch 52 52.1 Density is not a static property of a population but a result of a dynamic interplay between processes that add and remove individuals. Variations in a local density is the most important characteristic that a population ecologist might Ithaca College - ICSM - 10107 Diviya Agrawal Collapse: Chapter 49/17/071. What are the environmental, societal, and psychological causes of the collapse of the society in the chapter you were assigned to read? The environmental causes of the collapse of the Anasazi and the ne UC Riverside - BIO - 5c Ch 53 Ch 53.1 Features of Community Structure: (species diversity and feeding relationships) Species Diversity diff. organisms that make up the community with 2 components o Species Richness total number species in the community o Species Abundan Ithaca College - ICSM - 10107 DIviya Agrawal ICSM-10107 Buttermilk Falls Walk Reflection9/3/07I come from New York City and was delighted to find out we were going to explore the wilderness. I love the outdoors and take advantage of every opportunity to appreciate nature. The Ithaca College - ICSM - 10107 Ithaca Seminar Lecture on Diversity by Dr. Maura Cullen in the Emerson Suites on September 19th. Please make sure that you sign in with your instructor at the event.Please answer the following questions (~ page) and submit to section WebCT site by Cornell - CHEM - 2070 Recycling Aluminum October 12, 2007Results and Discussion Statement of purpose Observations/data tables Calculations Balanced equationsEvaulation of results Final conclusionsThe purpose of this experiment was to investigate the synthesis of alu Cornell - CHEM - 2070 The Spectrochemical Series 11/23/07Purpose: To measure wavelength absorbance of 5 unknown cobalt complexes, and thus determine their identities based on the spectrochemical series.The Spectrochemical Series Br- &lt; Cl- &lt; F- &lt; H2O &lt; NH3 (The above l Cornell - CHEM - 342132 Optical Spectometry 10/26/07The intention of this lab was to determine the identity of an unknown salt solution, based on its emission spectra and the measured emission spectra of known salt solutions, and to create a molecular orbital diagram for Cornell - CHEM - 2070 Purpose: Find the composition of an unknown salt solution. Create an orbital diagram of Hydrogen. First, a spectrometer was tested. It was used to look at an incandescent light, and a continuous emission spectrum was observed, with wavelengths at all Cornell - CHEM - 2070 Molecular Shape and Polarity 11/2/07Results NH3Structural data (unoptimized; optimized) H H bond distance: 1.653 A N-H bond length: 1.012 A; 1.003 A H-N-H bond angle: 109.47o; 107.21 o N-H-H bond angle: 35.26 oPartial charges: N = -1.087 H = 0 Cornell - BIO G - 102 2/4/08 Mitsley and Ris: DNA amount is constant but gametes have amt A=T C=G (Chargaff's rule) DNA Structure Purines (cytosine and thymine, one ring) pair with pyrimidines via hydrogen bonds Backbone = deoxyribose+ phosphate 5' pairs to 3' (&quot;antipara Cornell - BIO G - 102 Bio 2/18/08 Studying how gene functions or to learn its sequence often requries: 1. single gene away from other genes in the genome - restriction enzymes: cut DNA into specific sequences 2. methods to produce large quantity of the gene so that its fu Cornell - BIO G - 102 2/22/08 19.2 Histones: DNA wrapped around protein called histone, total unit called nucleosomes; DNA between nucleosomes called &quot;linker DNA&quot; DNA = 2 nm DNA + histone ~10 nm Second level of packing: nucleosomes pack together: ~30 nm fiber Third level: Cornell - CHEM - 2070 Densities of Liquids and SolidsExperiment #1 Due Friday, 9/14/07PurposeDesign and perform an experiment to determine the densities of an unknown liquid and an unknown solid. The unknown solid and liquid densities must be determined in the most p Cornell - CHEM - 342132 Hot- and Coldpacks 2/10/08Results Equations-qdissolution = qcalorimeter + qmixture -Hsolution = Ccalorimeter * Tcalorimeter + mmixture * smixture *Tmixture Percent error = accepted value mean value accepted value x 100The first part of the exp Sonoma - NAMS - 346 Placidity is having the ability to remain quiet and still - silence is comfortable. It also means being content with an acceptable situation and not always wanting more (i.e. the newest car when your old one works well) Ex: N.A. have few nervous mann Sonoma - NAMS - 346 off the road. They will say, Theres the edge of the road, pointing out the condition but leaving the decision up to the driver. Interconnectedness is where everything and everyone is connected in the universe in some form. Ex: We are connected to tre Sonoma - NAMS - 346 Orientation to the Present focus on living in the now -&quot;being rather than becoming.&quot; You cant really live in the past or future. Ex: A medical student may tell you he is studying medicine rather going to be a doctor. Paradox Two contradictory things/ Sonoma - NAMS - 346 Importance of Family is a major survival mechanism-of support for each individual. Ex: An uncle will probably get you your first job. Indifference to Ownership Non-materialistic being a good person is more valued than acquiring material goods/wealth
{"url":"http://www.coursehero.com/file/150189/Ch-8/","timestamp":"2014-04-19T02:37:34Z","content_type":null,"content_length":"52658","record_id":"<urn:uuid:18245543-19b2-441c-a7fd-b79c794d3f46>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
How to assess research "impact" for tenure/promotion committees up vote 20 down vote favorite Over the last several years, the college-level promotion & tenure committee at my university has increasingly been seeking to apply "objective" criteria for assessing the impact of candidates' research. Journal impact factors have been a favored metric, and we have tried to argue that these are not a reliable measure, for all the usual well-documented reasons. (The AMS statements on "The Culture of Research and Scholarship in Mathematics" have been helpful, if not entirely convincing.) Now they want to use more individual metrics, such as h-indices and g-indices. We would like to discourage this, but it's hard to argue convincingly to non-mathematicians about the flaws of such measures, and at the end of the day, they're demanding SOME numerical measure that they can use to compare candidates, both internally and to faculty at peer institutions. So, I'd like to hear how other math departments have dealt with such pressures. In particular, how can you articulate standards in such a way as to maintain high expectations, but also to minimize the damage to candidates who might be doing well-respected research, but for whatever reasons this results in relatively few papers, or papers with relatively few citations? And if we do end up having to use something like an h-index, is there any way to collect data from comparable math departments so we can at least say something about what is a "good" value for a particular index for advice career 3 publishing.mathforge.org – darij grinberg Apr 6 '12 at 22:30 16 I disagree with you on this one, Will. The OP is looking for information ("how other math departments have dealt with such pressures"). – Bill Johnson Apr 6 '12 at 22:31 8 I also think this is a fine question. – Andy Putman Apr 6 '12 at 23:44 6 Probably this should still be CW since it is asking for a list of what has been done. – Benjamin Steinberg Apr 7 '12 at 0:33 2 I think this question is more appropriate for academia.stackexchange.com – Joel Reyes Noche Apr 7 '12 at 0:50 show 5 more comments 3 Answers active oldest votes We have produced a list of top 10 journals for each area of mathematics represented in department plus a list of top 10 general subject journals so our candidates for tenure/promotion up vote 3 need to have publications in one of these journals. However I know for a fact that this has not stopped the administration from using impact factors, h-indices etc. Additionally tenure down vote decisions seem to be more and more conditioned on having outside funding, NSF, NSA etc. 4 I find this too restrictive. The value and impact of a math paper is all too often apparent only after it has been published, so it is unfair to discriminate against papers published in a lower ranked journal, if they have been cited by other good papers. – Deane Yang Apr 7 '12 at 9:29 I agree with you, but that was the compromise we reached with the administration to mollify the over-reliance on the impact factor which for mathematics journals is much smaller than say biology journals. However our annual dean's report asks us to use web of science to generate a citation report and an h-index. This is a disaster since web of science citation statistics are very inaccurate for mathematicians. Some of my chemist friends have similar complaints about this over-priced data basis. – Liviu Nicolaescu Apr 7 '12 at 10:35 7 It is very important to have a dean to understand that it is a very ba idea to compare numbers across different fields and that when you are hiring a mathematician, you're not trying to hire one as good or better than the biologist you already have but you're trying to hire one that is as good as or better than the ones at universities you're trying to match up against. – Deane Yang Apr 7 '12 at 12:09 2 Wouldn't that be nice! – Liviu Nicolaescu Apr 7 '12 at 13:47 add comment I think it is dangerous to be too qualitative or too simplistically quantitative about judging a candidate's accomplishments. I do believe, however, that looking at Mathscinet citations is a good start in evaluating both a person and the person's papers. Basically, you want to find evidence that a person's papers have had good impact in the sense that it has led to other respected work (as just by, say, the quality of journals and number of citations of those papers) that use or build on the person's contributions. All of this is quite problematic when you are judging someone who has been publishing for less than 10 years, but in my experience, for someone out 10 years or more, looking at citations per up vote paper or citations per year, as well as total number of citations, is a remarkably good guide for identifying the better mathematicians, in the sense that the ranking I get agrees rather 6 down well with my own subjective views. Then I simply focus on the exceptions and try to decide whether the citations are telling me that I've misjudged or whether the citations are simply vote misleading in that particular case. Even there, my conclusion is usually but not always the former. But there are still difficulties. It is quite noticeable, even within pure mathematics, that people working in some fields (like PDE's) get a lot more citations than others. So, you have to be careful about comparing people across different fields. Also, some work just doesn't get cited at all, even if it is used by thousands of mathematicians. For example people who work on efficient algorithms for computing software are at a huge 3 disadvantage, whenever publications and citations are used to asses faculty: they do serious mathematical work, their results are often more widely used in the mathematical community than the most brilliant publications; yet, they are almost never personally acknowledged in the papers that use the software and they often do not publish in high profile journals. – Alex B. Apr 7 '12 at 12:48 3 Another danger of focusing on Mathscinet citations is that they don't count citations appearing in the CS or physics literature, so this can substantially undercount citations for work near the interface between mathematics and other fields. – Henry Cohn Apr 7 '12 at 13:10 And of course, the problem at hand is that issues such as how typical citation counts vary across fields of mathematics are too subtle for university-level committees; they just want one standard for "mathematics." Attempts to explain that candidate X works in a field where publication rates (and hence citations) are typically on the low side tend to be met with skepticism, even if that statement is backed up by the external letter-writers. – Jeanne Clelland Apr 7 '12 at 13:49 Speaking from the point of view of a dean, I rely on my evaluation of the credibility of the departmental committee and the department head. Do I believe that they have real standards? 7 Can I see evidence that the department knows what it's doing? When the department reviews candidates, are they realistic about the strengths AND weaknesses of cases? Do they get letters from high-profile people and read those letters carefully? I am much more comfortable taking the advice of people who I trust than I am in any citation ranking system. – Jeremy Teitelbaum Apr 7 '12 at 15:36 1 If you must use citations, you should use Google Scholar in addition to Mathscinet because it captures citations missed by Mathscinet (vice versa also occasionally happens). – Bill Johnson Apr 7 '12 at 15:58 show 1 more comment This is not a direct answer to your question, but I think it is related. To get a perspective on why such things have been happening, I recommend the paper Neo-liberalism and up vote 2 Marketisation: the implications for higher education especially the section The Implications of Marketisation, starting on page 6. down vote Thanks for the link! – Jeanne Clelland Apr 7 '12 at 18:36 add comment Not the answer you're looking for? Browse other questions tagged advice career or ask your own question.
{"url":"http://mathoverflow.net/questions/93362/how-to-assess-research-impact-for-tenure-promotion-committees?answertab=votes","timestamp":"2014-04-23T23:49:24Z","content_type":null,"content_length":"79843","record_id":"<urn:uuid:4e3ca307-1222-4b40-a9b5-58ace43d0e07>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by jane Total # Posts: 1,315 2N2O(g) → 2N2(g) + O2(g) d[N2O]/dt = -5.8×10-4 M/s at a particular temperature and set of concentrations. What are d[O2]/dt and d[N2]/dt? d[O2]/dt = d[N2]/dt = What is the rate of the reaction? my answer for the first one im getting -.00016 and the second one 2.9e-... 2N2O(g) → 2N2(g) + O2(g) d[N2O]/dt = -5.8×10-4 M/s at a particular temperature and set of concentrations. What are d[O2]/dt and d[N2]/dt? d[O2]/dt = d[N2]/dt = What is the rate of the reaction? my answer for the first one im getting -.00016 and the second one 2.9e-... name at least 5 bodies of water ? what are they thank you Math homework oh... 3-(-4+8)=3-[-4+(-8)] 3-(-12) 3+12 15=3-[-4+(-8)] 15=3+12 15=15 Math homework what are you solving for? Math homework 3-8+3-8+2 =-5+3-8+2 =-2-8+2 =-10+2 =-8 social studies It was an attempt to prevent colonial tensions with Native Americans by establishing the western boundaries of the 13 coastal colonies. It is an alloy of copper and zinc, therefore it is a mixture, since a compound is chemically formed while an alloy is just a dissolving of one metal in another. A baseball is thrown straight up. The drag force is proportional to v^2. In terms of g, what is the y-component of the ball's acceleration when its speed is half its terminal speed and it is moving up? In terms of g, what is the y-component of the ball's acceleration w... Why would it be better to titrate a solution rather than just dilute it? University Research-level Biology Hello, this is an upper-level question, and I would sincerely appreciate help before Tuesday. I am reading a journal article, and there is one conclusion in it that I do not understand. It predicts a postsynaptic mechanism when EPSP amplitudes do not vary across trials, and a ... expand 9x - 36 + x = 10x - 40 + 4 put like terms on the same side 9x + x - 10x = 36 - 40 + 4 simplify 0=0 Well your question doesn't make sense. 6) We have afternoon classes on Tuesdays from 2pm to 4pm. 7) I prefer scientific subjects such as physics and maths to humanities. 8) We are tested orally twice a term and we get three written marks each term (is there a better way of expressing it??). 9) A calendar and an at... Hello everybody At a picnic, there is a contest in which hoses are used to shoot water at a beach ball from three directions. As a result, three forces act on the ball, F1, F2, and F3 (see drawing). The magitudes of 1 and 2 are F1 = 55.0 newtons and F2 = 95.0 newtons. Using a scale drawing an... They will hit the ground at the same time. the accelerate at the same rate of 9.8m/s^2 2x cubed minus x squared minus 6x plus 3. I know the answer is (2x-1)(x squared-3). Please help me with how they got that answer. Thank you! 2x cubed minus x squared minus 6x plus 3. I know the answer is (2x-1)(x squared-3). Please help me with how they got that answer. Thank you! Spain and Portugal Why is Cairo an important city? 9th grade math Correct. 3/4 times 3/4 equals 9/16 Why is rice important to the Middle East? If a soda can is 11.8 cm tall, how many would it take to make a stack to reach the moon? PLEASE write 2x+y=5 in polar form. a. -sqrt5 = r cos (theta -27degrees) b. sqrt5 = r cos (theta -27degrees) c. -sqrt5 = r cos (theta +27degrees) d. sqrt5 = r cos (theta +27degrees) is the answer c. Health, Safety, and Nutrition a or d prefix of osterperosis =8-2[3-(15x+5)] =8-2[-15x-2] =8+30x+4 =30x+12 I have this same problem. I just cannot figure out if the labor content is needed to calculate the NPV. Any ideas? I need help with 3(x-1)+5=15x +7-4-4(3x+1)+ 3 An equilateral triangle is one in with all three sides are length. If two vertices of an equilateral trangle (0,4) and (0,0), find the third vertex. How mant of these triangles are possible? How would you slove this?? Thanks I see where I made the mistake Thank you! The total area enclosed by the graphs of y=10x^2 x^3+x y=x^2+19x This is a really long problem I keep getting the answer of 20.25 but it is incorrect I dont know where I am going wrong? Leggio Corporation issued 20-year, 7% annual coupon bonds at their par value of $1,000 one year ago. Today, the market interest rate on these bonds has dropped to 6%. What is the new price of the bonds, given that they now have 19 years to maturity American History What two colonies were resistant towards slavery? If a diagram describes an initial state of the reaction A2 + B2 --> 2AB. Would the picture with more AB products represent the result at the higher temperature? I don t understand how to write this equation. Write an equation relating the concentration for a reactant A at t=0 to that at t=t for a first-order reaction. Define all the terms, and give their units. Do the same for a second- order reaction. -Would it be the rate law e... How would you solve this problem? C4H8(g)--> 2C2H4 Determine the order of the reaction and the rate constant based on the following pressures, which were recorded when the reaction was carried out 430 C in a constant volume vessel. Time (s) P(C4H8)(mmHg) 0 400 2,000 316 4,0... why do u say d?? some child psychologists have concluded that___ is the strongest predictor of both scholastic and career achievement A. physical illness b. respect for authority c. emotional well being d inherited Encouraging children to eat certain foods their families enjoy aroud the holidays is an example of? College Physics An airplane flies 200 km due west from city A to city B and then 240 km in the direction of 31.0° north of west from city B to city C. (a) In straight-line distance, how far is city C from city A? (b) Relative to city A, in what direction is city C? A silver ring contains 0.0134 mmol Ag. How many silver atoms does it contain? A silver ring contains 0.0134 mmol Ag. How many silver atoms does it contain? 8. We enjoyed some entertaining numbers on the program. what is .50 times 95000 lbs? Why moles? A chemical problem may be presented to you in units of moles, mass, or volume. Which one of these can be directly used in arithmetic no matter what the conditions are? You are working in the product development department of a company that creates household products. Your team has come up with an idea for a revolutionary new cleaning product. Using the seven phases of new product development as a guide, describe how your company will develop... A participant in a cognitive psychology study is given 50 words to remember and later asked to recall as many as he can of them. This participant recalls 17. What is the (a) variable, (b) possible values, and (c) score? I just don't understand what it all means. I've r... contrast ionic and molecular substances in terms of the types of attractive forces that govern their behavior I can't send you my homework. What shall I do? P $26 X Q 14 = $ 364 - Cost $238 = $126 , Which is max Profit among those. How Children learn what kind of responses do behaviorists look for in their students? find the LCM of (7 4z), (49-16zsquared), and (7-4z) is the answer 2/25 you roll a fair number cube twice. what is the probability of rolling two 3's if the first roll is 5 is the answer 1/657800 a sock drawer contains 10 white socks, 6 black socks, and 8 blue socks. if 2 socks are chosen at random, what is the probability of getting a pair of white socks? A spherical convex mirror has a radius of curvature of magnitude of 20cm. At what distance from the mirror should an object be placed to obtain an image that is virtual, four times smaller and how to say FUN in a PLURAL way my sentence is fundraising is the easiest and (fun) way in school Solve for x: (4x + 1)1/2 = 3 $20768 the percentage for this is 21% please help me through the formula on how this is done get 21% how do do you find the percentage of $38676? what does this mean? " when you judge another you do not define them ,you define yourself." what does this quote mean? " the surest way to corrupt the youth is to instruct him to hold in higher esteem those who think alike , then those who thinks differently" is it right!? sara looked happy going to school, but it was odd because she hated school. maybe it had do with BOY CRUSHES my entire snetence is this : sara looked happy going to school, but it was odd becuz she hated school. maybe it had to do with boy crushes i want to say this word in a plural way is this the correct way boy crushes or boys crushes how do you find the area of a square If f(x)= 3 + |x-2|, then f'(2) is A. -1 B. 1 C. 2 D. 3 E. nonexistent Mrs Munoz algebra class was collecting money for a party. The class decided that each student should give the same amount of money. They collected a total of $9.61. If everyone used 5 coins, how many nickels were collected? Can you please tell me which of the two is better? a polka-dotted T-shirt or a polka-dot T-shirt? I just want to know if I can send you my homework. is this correct? 1.a jar contains 5 white, 15 violet,and 21 blue marbles. a marble is drawn at random. P(violet or white) answer: 5x15x21= 1575 15x5=75 75/1575= 1/21 2. a number from 10 to 16 is drawn at random. P(a number greater than 14) answer: 2/7 find the number of possible choices when you choose one item from each category. 1. 3 hats, 7 shirts, 13 scarves 2. 6 pens, 2 nbk 3. 4 cars, 2 colors i dont get the question is it reasonable to expect the heat of neutralization to be the same regardless of which acid is used? or should we expect each acid and base combination to have its own heat of neutralization? In the neutralization of HCl and NaOH i was instructed to measure the temperature of the acid HCl prior to mixing the reaction components. I was to assume however, that the NaOH solution was at this same temperature. Is this a reasonable assumption? If a kicker produces a torque of 8Nm with her muscles and the resulting angular acceleration of her leg is 20rad/m2, what is the moment of inertia of her leg? Well, I know that the reaction is endothermic, because delta Hº is also positive. I just don't understand why delta G is positive, since the reaction seems to be spontaneous... If delta G for the dissociation of Borax at 30 degrees C (higher than room temperature) is positive, why does it dissolve at room temperature? Help!! I am doing an assignment with 3 scenarios and I have got the first 2 but having trouble with the last. I have read through my chapter and searched webmd and other sites and still can't figure it out. Richard has noted over the past several weeks that he is having mo... The slope intercept is 1/3 and the y intercept is 4xsquaredk. What is the slope intercept form. 10th grade The slope intercept is 1/3 and the y intercept is 4xsquaredk. What is the slope intercept form. Trig Math I need help with this problem. I don't get how to do it or how they got that answer. A force of 50 pounds acts on an object at an angle of 45 degrees. A second force of 75 pounds acts on the object at an angle of -30 degrees. What was the direction and the magnitude of the... The smallest object visible to the unaided eye is of 100 microns. If you use a microscope of m=1000, what is the size of the smallest object you can now see? (So=25cm) The answer is apparently 100nm. I am still confused as to why. Please help! world history why are farmers today not as tightly bound by geography and climate as they were in the past uummm i want to know how can telescope cxan see so far???? inorganic chemistry In the final step of making the nitro complex it involves the addtion of concentrated HCl. What are the role(s) of the HCl in this step of the synthesis? nitro complex: [Co(NH3)5(NO2)]Cl2 A mass spectrometer is used to separate isotopes. If the bean emerges with a speed of 374 km/s and the magnetic field in the mass selector is 1.5 T, what is the distance between the collectors for 235U+1 and 238U+1? The mass of 235U+1 is 235 u and the mass of 238U+1 is 238 u, ... Two horizontal parallel wires which carry currents I1 = 45 A (top)and I2 (bottom). The top wire is held in position, the bottom wire is prevented from moving sideways but can slide up and down without friction. If the wires have a mass of 9.2 g per metre of length, calculate c... What is 38 tmes 93 divided by 3 minus 4 Nevermind I figured it out thanks. I am still confused as to which direction the wires will rotate. Using the right hand rule, I found that the magnetic field of both wires will rotate counterclockwise (if you look from above and from the left side). Two current carrying wires cross at right angles (like a + sign). Wire 1 has current directed horizontally to the left. Wire 2 has current directed vertically up. If the wires are not restrained, what statement best describes the motion of the wires? A) Wire 1 moves out of the... How do you solve the expotential notation equation 120-6^2/4x8? We compare a silicon chip to a nerve membrane, with silicon having (about) twice the dielectric constant of the nerve membrane. Which of the following statements is true? A) 1cm^2 silicon chip has a higher capacitance than 1cm^2 nerve membrane. B) 1cm^2 silicon has the same ca... The frequency of an ambulance siren is 700 Hz. If a pedestrian on the side of the road hears the siren at 756 Hz approximately how fast and in what direction is the ambulance moving? The answer is 90 km/h and towards the person..im not sure how to come to this answer though...... Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=jane&page=10","timestamp":"2014-04-16T19:29:46Z","content_type":null,"content_length":"26799","record_id":"<urn:uuid:9c0a0249-86d0-4a39-afbc-8d5f0ba5ea08>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Plato Center Algebra Tutor I attained my Math degree in 2004 from Augustana College in Rock Island, IL. Since that time, I have been working and tutoring on the side. I recently went back to school at North Central College to get my teaching certification. 7 Subjects: including algebra 1, algebra 2, geometry, trigonometry ...Having taught mathematics in both a traditional high school as well as an alternative education setting, I have learned effective ways to develop rapport with students to help them become successful with regard to mathematics concepts. My recent work with students includes two years of tutoring ... 16 Subjects: including algebra 1, algebra 2, calculus, SAT math ...I can also help students who are preparing for the math portion of the SAT or ACT. When teaching lessons, I put the material into a context that the student can understand. My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. 12 Subjects: including algebra 1, algebra 2, calculus, geometry ...As a physicist I work everyday with math and science, and I have a long experience in teaching and tutoring at all levels (university, high school, middle and elementary school). My son (a 5th grader) scores above 99 percentile in all math tests, and you too can have high scores.My PhD in Physics... 23 Subjects: including algebra 2, algebra 1, calculus, geometry I am a Software Engineer by profession but like to tutor math courses as hobby. I am an experienced tutor, who has taught Quantitative Aptitude and Analytic Reasoning. I have coached students on College algebra and trigonometry. 12 Subjects: including algebra 2, algebra 1, geometry, trigonometry
{"url":"http://www.purplemath.com/plato_center_algebra_tutors.php","timestamp":"2014-04-19T19:45:45Z","content_type":null,"content_length":"23943","record_id":"<urn:uuid:3e151e59-1c16-46fd-98c4-f03f4e25ac3f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
The Market Portfolio Next: Portfolio Management and Information Up: CAPM Previous: Separation Theorem From the Separation Theorem we can see that in equilibrium, every security must be part of the investor's risky portion of the portfolio. The reason is that if a security isn't in T, no one is investing in it, meaning that its prices will fall, causing the expected returns of it to rise until the resulting tangency portfolio has a nonzero proportion associated with them. When all the price adjusting stops, the market will have been brought into equilibrium. • Each investor will want to hold a certain positive amount of each risky security. • The current market price of each security will be at a level where the number of shares demanded equals the number of shares outstanding. • The riskfree rate will be at a level where the total amount of money borrowed equals the total amount of money lent. This gives rise to the following definition of the market portfolio: Definition 1 The market portfolio is a portfolio consisting of all securities where the proportion invested in each security correspons to its relative market value. The relative market value of a security is simply equal to the aggreagte market value of the security divided by the sum of the aggregate market values of all securities. In equilibrium the proportions of the tangency portfolio will correspond to the proportions of the market portfolio. This tells us that the market portfolio plays a central role in the CAPM, since the efficient set consists of an investment in the market portfolio, coupled with a desired amount of either riskfree borrowing or lending. Figure: The Capital Market Line. M is the market portfolio and r[f] represents the riskfree rate of return. All portfolios other than those employing the market portfolio and riskfree borrowing or lending would lie below the CML. The linear efficient set of the CAPM is known as the Capital Market Line, which has the following equation We now know that using the CAPM we can decide whether the market price for a stock is too high or too low by looking at the market portfolio. Next: Portfolio Management and Information Up: CAPM Previous: Separation Theorem Magnus Bjornsson
{"url":"http://www.cs.brandeis.edu/~magnus/stocks/node7.html","timestamp":"2014-04-17T12:35:17Z","content_type":null,"content_length":"6026","record_id":"<urn:uuid:028a0adc-0d51-447f-8312-3b51e1129312>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
The Burden of Proof in the Inequality/Growth Debate Take a look at the figure below, which displays comprehensive household income data from the Congressional Budget Office (a fantastic data set).^1 The bottom line charts actual household income for the middle income fifth—it’s the average income of households between the 40^th and 60^th income percentiles. So, it’s households that are richer than forty percent of households as well as poorer than 40 percent of households. Think of it as a representative, if narrow, slice of the middle class. This income rose by 19.1 percent in the 28 years before the Great Recession (1979-2007), or 0.6 percent per year. Better than zero growth for sure, but, could it have been higher? The top line shows household incomes that start with middle-fifth incomes in 1979, but then are allowed to grow as fast as the overall average growth rate of household incomes. And since the very rich saw extraordinarily fast growth over this period (241 percent cumulative growth for the top 1 percent over this period!), this made overall average growth run much faster than growth for the middle-fifth—which is why that top line pulls progressively farther and farther away from the bottom line over time. By 2007, if middle-fifth incomes had grown simply as fast as overall average incomes, then they would be 27 percent higher (about $19,000). This is big money for moderate-income families. not imply: that the rise in inequality (i.e., the growing wedge between average and middle-fifth growth) somehow “hurt average income growth.” And yet it shows that rising inequality is enormously costly to middle-income families. If this rise in inequality ready did “hurt average income growth,” then this would all be worse—the counter-factual where middle-fifth income growth tracked overall average growth would see both the wedge between the two close and the top-line rise faster. It may actually be true that inequality has hurt growth and my “inequality tax” calculation is too conservative. But it doesn’t have to be to say that rising inequality did big damage to living standards at the bottom and middle. Of course, conservatives would say that my calculation is wrong because it assumes that we could have redistributed enough to allow middle-fifth families to match average income growth without actively harming average growth. But the conservatives’ assumption is that (nearly by definition) efforts at redistribution harm growth. But it’s an assumption not borne out in evidence. We summarize a lot of this evidence in our JEP piece. But, again, note who “wins” the inequality/growth argument in the grey zone between “can be proven to hurt growth” and “can be proven to aid growth.” [Update: I made a mess of the following sentence in the original. Corrected now] So long as rising inequality is either bad for growth or neutral, then policy measures to check it will aid middle-income families (actually, the vast majority, but, that’s another piece). In order to win an argument calling for pure complacency on inequality, you really have to prove that rising inequality has been a growth bonanza, and that’s just really hard to see in the data. 1. Note, CBO data has been updated since this came out, and they made a couple of methodological changes that make numbers different from what’s in this chart. I talk about some of those changes in the appendix to this paper, for those who care.
{"url":"http://www.epi.org/blog/burden-proof-inequalitygrowth-debate/","timestamp":"2014-04-20T16:37:44Z","content_type":null,"content_length":"82503","record_id":"<urn:uuid:48021b03-93dc-40f0-a88b-1eaedac20c45>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
White Plains, NY Precalculus Tutor Find a White Plains, NY Precalculus Tutor ...The geometry currently taught in US high schools is comprised mostly of aspects of the ancient Euclidean geometry utilizing a local, restricted, set of coordinates. Such a study is good for getting a grasp of basic facts about two-dimensional figures such as polygons, circles, and triangles, and... 19 Subjects: including precalculus, reading, writing, calculus ...He is now tutoring Westchester and Connecticut students in math, chemistry and physics. Typically Ken explains the core principle or subject clearly, often with an example. He then lets the students talk to gauge their initial understanding before working through specific problems. 12 Subjects: including precalculus, chemistry, calculus, physics ...As you can see by my ratings and reviews, I am a very experienced, patient and passionate tutor. I am very familiar with the new comon core standards that students are expected to demonstrate. In one or two sessions, I am able to assess each student's needs and tailor a lesson plan to ensure th... 29 Subjects: including precalculus, reading, biology, ASVAB ...However, it is equally important to reach students at their level of understanding. I always focus on finding, and then improving, my students' understanding of the basic concepts of their course. I try to illuminate what is essential, and what should be more readily grasped once the essential material is mastered. 6 Subjects: including precalculus, calculus, geometry, algebra 1 ...My lessons have a way of making you feel smart, as opposed to Isaac Newton, who made us all feel stupid by inventing calculus by his early twenties. I am also available for AP Calculus AB and BC, as well as Calculus II WCSU Mathematics Major - Tutoring since 2007 Geometry, one of the most ancie... 10 Subjects: including precalculus, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/White_Plains_NY_precalculus_tutors.php","timestamp":"2014-04-18T18:56:17Z","content_type":null,"content_length":"24477","record_id":"<urn:uuid:3aaf06e3-830f-48fe-acf5-6202b1923f1c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Tanya prepared 4 letters to be sent to 4 different Hermione wrote: sorrry guys... OK in that case let me see if I can offer some help - I hope you are familiar with basic probability fundas - Let's say you have just ONE letter and TWO envelopes ONE of which is correctly addressed and the other addressed incorrectly. What's the probability of putting the Letter in the correctly addressed envelope - To answer this question - we see IN HOW MANY WAYS can the letter be put into the envelope - you could put it (assuming you don't know which envelope is which) in either of the two - so in total you have TWO ways of shoving the letter in. However, there's only ONE way in which it can go into the correctly addressed envelope - so 1/2 is the prob of putting in correct envelope. This is easy. Now in our current problem - let's say we have just ONE letter but FOUR envelopes. Only one of these envelopes has the address corresponding to the letter. The remaining three envelopes are incorrectly addressed. So the probability that you will put the letter correctly is 1/4. Right? What happens if i ask you the reverse question? what is the prob for putting it in the incorrect envelope. Suddenly you have three envs that are incorrect so you can put the letter incorrectly with a prob of 3/4. Right? The whole problem can be broken down into Four Events that will fulfill the requirement of the question Event 1 - E1 We know that prob of putting ONE Letter correctly is 1/4. Now once ONE letter has been put CORRECTLY, what are you LEFT with? You are left with THREE ENVELOPES and the remaining THREE letters. Since the one letter has been put correctly (though technically we have just calculated the PROBABILITY that the first letter goes into the correct envelope) we have the remaining THREE Letters and THREE Event 2 - E2 Let's take letter number 2 now - what is the probability that it LANDS in the INCORRECT envelope. Again by the same logic as above - there are 3 envelopes remaining out of which ONLY ONE has the correct address for LETTER number 2. The remaining 2 have INCORRECT address and LETTER NUMBER 2 could go in either of these 2 to meet our condition. Thus the probability of this event is 2/3 So till now what we have done is - we have calculated the prob of shoving Letter number 1 in correct env -- 1/4 we have calculated the prob of shoving Letter number 2 in INcorrect env --- 2/3 Event 3 - E3 Now let's take letter number 3 - again according to question we want to shove this in the WRONG envelope. There are 2 remaining envelopes and hence the prob of shoving this in the wrong env (or equally in the RIght env) is 1/2. Finally we come to event E4 - the Letter number 4. This has only one way of going in so its probability of being put into the WRONG envelope is 1. ok so we can see that our grand event is actually a combination of FOUR EVENTS happening - each with a probability of its own. So to calculate the total probability of the Grand Event itself we just multiply the individual probabilities since each event happens INDEPENDENTLY of each other Egrand = 1/4 * 2/3 * 1/2 * 1/1 = 1/12 However at this point - I must introduce one last element in this question -since there are FOUR Letters - what we saw above was JUST ONE SEQUENCE of events leading to the desired result. If we arbitrarily call the letters L1 thru L4, and let's say the above was an example in which we started by Picking up Letter L1 and worked thru the remaining letters, we could have equally well started out with letter L2 or L3 or L4. Thus since each of these events ARE MUTUALLY EXCLUSIVE, meaning THEY CAN NEVER HAPPEN ALL THE SAME TIME BUT ONLY ONE LETTER AT A TIME, to calculate the TOTAL PROBABILITY of we will add the individual probabilities 1/12 + 1/12 + 1/12 + 1/12 which works out to 1/3.
{"url":"http://gmatclub.com/forum/tanya-prepared-4-letters-to-be-sent-to-4-different-36775.html?fl=similar","timestamp":"2014-04-24T23:57:58Z","content_type":null,"content_length":"166583","record_id":"<urn:uuid:4fb57cf1-2037-492b-8407-7d9ce5e347f5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
SI Base Units The SI (Système International d'Unités) is a globally agreed system of units, with seven base units. Formally agreed by the 11th General Conference on Weights and Measures (CGPM) in 1960, the SI is at the centre of all modern science and technology. The definition and realisation of the base and derived units is an active research topic for metrologists with more precise methods being introduced as they become available. There are two classes of units in the SI: base units and derived units. The base units provide the reference used to define all the measurement units of the system, whilst the derived units are products of base units and are used as measures of derived quantities: There are recommendations as to how to use SI units. SI prefixes are used to form decimal multiples and submultiples of the units: Some non-SI units are still widely used: The definitions of the SI units have a continuing history of change: Fundamental Constants and Units The fundamental physical constants, such as the speed of light, the Planck constant and the mass of the electron provide a system of natural units. However, these must be related to the SI units by experiment. This experimental work is a global effort mostly undertaken in national standards laboratories to which NPL contributes. The constants provide the link between the SI units and theory and also between one part of physics and the SI and another. For more information, a review article describing the background to the change to units based on fundamental constants is available. NPL has activity in the Planck constant (watt balance) Rydberg constant (Hydrogen spectroscopy) Stefan-Boltzman constant (ARD). Recommended Values of the Constants A list of values and uncertainties of the most frequently used constants to CODATA Recommended Values (2005) is available. These values are taken from the recommended values of the constants which are produced by the CODATA Task Group on Fundamental Constants, based on a review of all the available data. The latest review is available at the CODATA fundamental constants page at NIST. This should be consulted for values of the less frequently used constants or for covariances between the constants. SI, Units & Constants FAQs • Metrology is a service discipline - responding to a perceived need for a particular measurement accuracy, either now or in the near future. • The recommended values of the fundamental constants are produced by the CODATA Task Group on Fundamental Constants the most recent evaluation was in 1998. • We already can set limits on the drift over a long period by looking at the values obtained for fundamental constants that depend critically on mass. • The very term fundamental physical constants invites two questions: are they fundamental and are they constant. • There are several reasons for maintaining separate national capabilities. • What is evolving is our knowledge of the constants not as far as we know their values, which for the purposes of evaluation are considered constant. • SI units are divided into two classes, base units and derived units. The base units are dimensionally independent. • The word metrology is derived from the Greek word `metron': to measure. • The relationships are many and complex. • A difficult question perhaps impossible. In what way the minimum? Some would say seven have been introduced into the SI because seven are needed. It has also been argued that with the use of fundamental constants only one unit is needed. • No dimensioned measurement can be made more accurately than its corresponding SI unit is known. Thus the measurements with the smallest uncertainty are those of frequencies as the second is the most precisely realised unit. • The international system is a set of seven base units chosen to fulfil the requirements of science and technology. The selection of seven base units is a matter of choice. • Temperature is an intensive property and we can only measure thermodynamic temperature via measurable quantities which change with temperature. Because this is re-measured from time to time and the values revised, this scale may differ from the true thermodynamic temperature scale. • If we did, we would have to change the definition of the metre each time we were able to make a more precise laser. • The name 'kilogram' is a historical quirk. • Discussions in Europe under the MERA project are pointing in the direction of this alternative to a single world or European institute, but some duplication and collaboration will probably always be required.
{"url":"http://www.npl.co.uk/reference/measurement-units/","timestamp":"2014-04-17T08:00:07Z","content_type":null,"content_length":"54900","record_id":"<urn:uuid:f756183f-b333-4a0f-8daf-355530233f54>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Godfrey Harold Hardy, F.R.S., Hon.F.R.S.E. by J M Whittaker Godfrey Harold Hardy was born on February 7, 1877. He was educated at Winchester and at Trinity College, Cambridge. At that time the principal examination, on which the wranglers were elected, was Part I of the Tripos, taken at the end of the third year. Hardy and one of his fellow scholars of Trinity, J H Jeans (afterwards Sir James Jeans, O.M.), took the unprecedented course of entering in their second year, and were so far successful that Jeans was bracketed second and Hardy fourth. They gained the Smith Prizes in 1901 and both were elected to Fellowships. Hardy held his Trinity Fellowship till his election, in 1919, to the Savilian Chair of Geometry at Oxford, when he became a Fellow of New College. He returned to Trinity in 1931 as Sadleirian Professor, in succession to E W Hobson, and continued in residence till his death on December 1, 1947. He was not married. Hardy came up to Cambridge at a time when profound changes in the mathematical teaching were in progress. The theory of functions of a complex variable, created by Cauchy in 1825, had been the backbone of continental mathematics for seventy years. In England it was virtually unknown till, in the nineties, A R Forsyth gave some post-graduate lectures on it at Cambridge. They were attended by E T (now Sir Edmund) Whittaker, who perceived its cardinal importance and gained for it the status, which it now occupies in all Universities, of being the principal branch of Pure Mathematics studied by undergraduates in their final year. The theory of functions of a real variable, also of continental origin, was introduced soon afterwards by E W Hobson. Both the complex and the real variable were subjects after Hardy's own heart. He had a strong feeling for the elegance of the formulae of the former, and an equally keen realisation of the need to establish them with the rigour demanded in real variable proofs. Rigorous investigation of the conditions in which a formula is true was to form the subject of an enormous number of his papers. They include the best of his early papers, and continued all through his life. Hardy's most vital work was, however, in a different field. In approaching it the biographer is met by a great difficulty. Almost all of this work was done in collaboration. No other mathematician has collaborated so much or so fruitfully. In 1914 he wrote two brilliant papers, one proving that Riemann's zeta function has an infinity of zeros on the critical line, the other on the convexity of the mean value of the modulus of an analytic function. Apart from these none of his very numerous papers can bear comparison with the best work of Hardy and J E Littlewood, or of Hardy and S Ramanujan. One is impressed by their elegance, their learning, their usefulness, their technique, but they lack the boldness of conception which is so evident in the best of his joint work. The papers of Hardy and Littlewood cover many fields, especially Tauberian theorems, Fourier series and problems of partitio numerorum. In the latter field, in which he also collaborated with Ramanujan, his work is universally regarded as one of the highest achievements of the century in mathematics. The great work of Hadamard in the last decade of the nineteenth century had added a new chapter, a kind of physical geography, to complex variable theory, by showing that the main features of functions, their zeros, rates of growth, and so forth, were connected in a manner much closer than had been suspected; and he had made a brilliant application of this discovery to prove the prime number theorem for the first time. Hardy and Littlewood applied this new conception of complex variable theory to problems even more ancient and intractable. Such is the conjecture that every even number is the sum of two primes. This was asserted by Goldbach in 1742, but no progress whatever had been made towards establishing it till Hardy and Littlewood brought their "circle method" to bear on it. They did not, indeed, solve the problem but they made a great step in advance, and Russian developments of this work leave little to be done. Still more complete was the work of Hardy and Ramanujan on partitions, culminating in an astonishingly accurate approximate formula for p(n). Hardy was elected an Honorary Fellow of the Society in 1946. Godfrey Harold Hardy's RSE obituary by J M Whittaker appeared in Royal Society of Edinburgh Year Book 1948 and 1949, 24-25. See also Obituary Notices of Fellows of the Royal Society, vi, 1948-49, pp. 447-462.
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Obits2/Hardy_RSE_Obituary.html","timestamp":"2014-04-18T13:14:13Z","content_type":null,"content_length":"5165","record_id":"<urn:uuid:7eeaa3a8-9b7e-4635-982a-b1b48d1a9c4d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: need help with a question Replies: 8 Last Post: Nov 27, 2012 10:19 AM Messages: [ Previous | Next ] Re: need help with a question Posted: Oct 18, 2010 9:56 AM > Prove that a! E O(aa) > p.s (aa) is suppose to be "a to the power of a" Does this mean that a! (a factorial) is bounded by some positive constant K times a^a (where a^a is standard for a to the power a)? Usually to say that f(x) = O(g(x)) as x --> infinity means there is a positive constant K such that (for x sufficiently large) f(x) <= k*g(x). You may want to look up Serling's approximation to n!. It gives an asymptotic estimate of n! which may be of use to answer your question. Date Subject Author 10/16/10 need help with a question Izzy 10/16/10 RE: need help with a question Ben Brink 10/16/10 Re: RE: need help with a question Izzy 10/18/10 Re: need help with a question Dan Cass 10/18/10 Re: need help with a question Dan Cass 10/18/10 RE: need help with a question Ben Brink 11/3/12 Re: need help with a question John 11/4/12 Re: need help with a question Salahuddin 11/27/12 Re: need help with a question grei
{"url":"http://mathforum.org/kb/message.jspa?messageID=7242100","timestamp":"2014-04-16T05:43:46Z","content_type":null,"content_length":"25747","record_id":"<urn:uuid:67ce1870-3418-441b-961c-4aadc312ab96>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00310-ip-10-147-4-33.ec2.internal.warc.gz"}