[racket] Making a contract between a function and "the world in general"
From: Matthias Felleisen (matthias at ccs.neu.edu)
Date: Sat Oct 8 12:12:25 EDT 2011

(1) I do not understand Neil's problem. Say I have module A and want to protect its exports from abuses by clients, say module B -- why do you use define/contract at all? The define/contract form is for splitting modules into module-lets -- in case your module is too large and you can't manage invariants in your head. If you believe that this is true for even small modules, I urge you to use Typed Racket. That's the better solution, and real soon now TR will allow you to add contracts on top of types at provides. Right, Sam?

(2) I object to because I think programmers should explicitly state what they want (if they want something logical). We can already do this:

  (define primes-to-primes (-> (listof prime?) (listof prime?)))
  [f primes-to-primes]
  [primes-to-primes contract?])

So in some client module we can write [f primes-to-primes])

-- Matthias
{"url":"http://lists.racket-lang.org/users/archive/2011-October/048542.html","timestamp":"2014-04-21T15:52:17Z","content_type":null,"content_length":"6582","record_id":"<urn:uuid:29bc0964-2d88-4e67-8b2c-e97a76eb8313>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Q-lattices and commensurability, any insight into the definition and intuition?

I've been coming across $\mathbb{Q}$-lattices in $\mathbb{R}^n$ in my reading, and I'm having trouble understanding the definitions. Connes and Marcolli define one as a lattice $\Lambda \subset \mathbb{R}^n$ together with a homomorphism $\phi : \mathbb{Q}^n / \mathbb{Z}^n \to \mathbb{Q} \Lambda / \Lambda$. Moreover, two $\mathbb{Q}$-lattices $\Lambda_1$ and $\Lambda_2$ are commensurable iff 1) $\mathbb{Q} \Lambda_1 = \mathbb{Q}\Lambda_2$ and 2) $\phi_1 = \phi_2$ mod $\Lambda_1 + \Lambda_2$.

I think I understand condition 1): the lattices must be rational multiples of each other to be commensurable. I don't even understand the notation for condition 2). The best I can gather is that the homomorphism $\phi$ labels which positions in $\mathbb{Q} \Lambda / \Lambda$ come from your more normal "discrete hyper-torus" $\mathbb{Q}^n / \mathbb{Z}^n$. Condition 2) then says that the same points are labelled. Is this anywhere near the right ballpark? Can anyone recommend any literature on the subject? I'm a pretty young mathematician (not even in a PhD program...yet) so please forgive me if this question seems basic.

nt.number-theory lattices

Can you, please, give a reference to the paper where this is defined? As for your notational question: after identification $\mathbb{Q}\Lambda_1\simeq \mathbb{Q}\Lambda_2,$ you can mod out this group by the sublattice $\Lambda_1+\Lambda_2$ that contains $\Lambda_1$ and $\Lambda_2.$ The requirement is that compositions of $\phi_1$ and $\phi_2$ with the projection become equal. – Victor Protsak Aug 10 '10 at 18:04

Connes & Marcolli, Noncommutative Geometry, Quantum Fields, and Motives. Marcolli, Lectures on Arithmetic Noncommutative Geometry. Various other papers from those two. alainconnes.org/docs/Qlattices.pdf is a short one. Thanks for the comments on the notation, very helpful. – mebassett Aug 10 '10 at 19:34

For condition 1, the lattices don't have to be rational multiples of each other, e.g. take $\Lambda_1=\langle(1,0),(0,1)\rangle$, $\Lambda_2=\langle(\frac{1}{2},0),(0,2)\rangle$. They just have to be contained in the same $n$-dimensional $\mathbb{Q}$-vector space. Equivalently, there exist integers $N$ and $M$ so that $M\Lambda_1\subset\Lambda_2$ and $N\Lambda_2\subset\Lambda_1$. – Kevin Ventullo Aug 10 '10 at 21:24

1 Answer

The condition $\mathbb Q\Lambda_1=\mathbb Q\Lambda_2=:X$ means that we have $\Lambda_1,\Lambda_2\subseteq X$ and then we have $\Lambda_1,\Lambda_2\subseteq \Lambda_1+\Lambda_2\subseteq X$. This means that we have quotient maps $$X/\Lambda_1\rightarrow X/(\Lambda_1+\Lambda_2){\rm\quad and\quad }X/\Lambda_2\rightarrow X/(\Lambda_1+\Lambda_2).$$ Condition 2) then means that the two composites $\mathbb Q^n/\mathbb Z^n\rightarrow X/\Lambda_i \rightarrow X/(\Lambda_1+\Lambda_2)$ are equal.

As for your interpretation of condition 2), it seems to be more or less OK, though the way you have phrased it $\phi$ could very well be $0$. To understand what 2) means in concrete terms it is convenient to use some more advanced notions (which may or may not be unfamiliar to you). First, the source and target of $\phi$ are torsion groups. We then have that for each prime $p$ the $p$-torsion of $\mathbb Q^n/\mathbb Z^n$ is taken into the $p$-torsion of $\mathbb Q\Lambda/\Lambda$.
Any map of these $p$-torsion groups corresponds exactly to a $\mathbb Z_p$-module map (where $\mathbb Z_p$ is the ring of $p$-adic integers) $\phi_p\colon\mathbb Z_p^n\rightarrow \Lambda\otimes \mathbb Z_p$. This is very analogous to the situation when we instead would start with a continuous map $\phi\colon\mathbb R^n/\mathbb Z^n\rightarrow \mathbb R\Lambda/\Lambda$, which would correspond to a map $\phi_{\mathbb Z}\colon\mathbb Z^n\rightarrow$
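To make condition 2) concrete in the simplest case (an illustration in the same spirit, not from the original answer): take $n=1$, $\Lambda_1=\mathbb Z$ and $\Lambda_2=\tfrac{1}{2}\mathbb Z$. Then $\mathbb Q\Lambda_1=\mathbb Q\Lambda_2=\mathbb Q$, so condition 1) holds, and $\Lambda_1+\Lambda_2=\tfrac{1}{2}\mathbb Z$. Two $\mathbb Q$-lattice structures $\phi_i\colon\mathbb Q/\mathbb Z\to\mathbb Q/\Lambda_i$ satisfy condition 2) exactly when the two composites $\mathbb Q/\mathbb Z\to\mathbb Q/\Lambda_i\to\mathbb Q/\tfrac{1}{2}\mathbb Z$ coincide, i.e. when for every $x$ any lifts of $\phi_1(x)$ and $\phi_2(x)$ to $\mathbb Q$ differ by an element of $\tfrac{1}{2}\mathbb Z$ (this is well defined because both $\Lambda_1$ and $\Lambda_2$ lie in $\tfrac{1}{2}\mathbb Z$).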
{"url":"http://mathoverflow.net/questions/35142/q-lattices-and-commensurability-any-insight-into-the-definition-and-intuition","timestamp":"2014-04-17T01:32:08Z","content_type":null,"content_length":"57763","record_id":"<urn:uuid:fad733fb-57f1-48bb-8fc9-c7133243b0fd>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Mrs. Jenkins is giving her students an Algebra test over systems of equations. She tells her students that the test has 38 questions worth 100 points. Each problem on the test is worth either 5 points or 2 points. She tells her students that she will give them extra credit on the test if they can figure out how many of each type of problem are on the test. How many of each type of problem are on the test?
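One way to set it up: let x be the number of 5-point problems and y the number of 2-point problems. Then x + y = 38 and 5x + 2y = 100. Subtracting twice the first equation from the second gives 3x = 24, so x = 8 and y = 30: eight 5-point problems and thirty 2-point problems (check: 8·5 + 30·2 = 100).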
{"url":"http://openstudy.com/updates/5107ffe5e4b08a15e78482e4","timestamp":"2014-04-19T20:04:02Z","content_type":null,"content_length":"44650","record_id":"<urn:uuid:03b18aec-7ad3-44ed-983d-5b7ff24fa8ed>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear 1st Order PDE Query

Post #1 (May 27th 2011, 12:08 PM):

Looking at past exam papers. There is one section as follows which I have no idea on: Prove that a characteristic curve is parallel to a solution surface and hence explain why a characteristic curve that intersects any initial data must be a solution curve.

Post #2 (June 3rd 2011, 09:57 PM):

Basically a characteristic is a curve along which the directional derivative is known and given by the coefficients. To get these we solve an ODE, and if we are given initial data (Cauchy problem) we have initial conditions to impose, so everything is determined. On the other hand, if you want the general solution these initial conditions are missing, so you get a family of curves all satisfying the same differential equation; and if such a curve intersects a solution surface, by fixing this point and using the existence and uniqueness theorem for ODEs, we get that there is an 'interval' around this point in the curve which is contained in the surface. We can then conclude that the whole characteristic is contained in the surface as long as both exist.

Post #3 (June 4th 2011, 07:35 AM):

Thanks for the description. I wonder, was the question looking for an algebraic method etc.?

Post #4 (June 9th 2011, 09:23 PM):

Let's rephrase the claim to something that may be more familiar (and hopefully that you have seen proved, or can prove): if we are given a Cauchy problem along a characteristic curve (i.e. the transversality condition is violated) then this problem has either no solution or an infinite number of solutions.
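A sketch of the tangency argument the exam question asks for (my notation, quasilinear case): for $a(x,y)u_x + b(x,y)u_y = c(x,y,u)$, a solution surface $z = u(x,y)$ has normal $(u_x, u_y, -1)$, and the characteristic direction $(a, b, c)$ satisfies $(a,b,c)\cdot(u_x,u_y,-1) = a u_x + b u_y - c = 0$, so the characteristic field is tangent to every solution surface. The characteristics are the integral curves of $dx/dt = a$, $dy/dt = b$, $dz/dt = c$; the curve obtained by solving the first two equations and lifting it onto the surface via $z = u(x,y)$ also satisfies the third, so by uniqueness for the ODE system a characteristic that meets a solution surface at one point stays in it.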
{"url":"http://mathhelpforum.com/differential-equations/181822-linear-1st-order-pde-query.html","timestamp":"2014-04-16T14:51:20Z","content_type":null,"content_length":"39950","record_id":"<urn:uuid:af96a49b-169f-4f9a-a87c-b0162292224d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: standard errors for dummy variables in logit

From: "Sayer, Bryan" <BSayer@s-3.com>
To: "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject: st: RE: standard errors for dummy variables in logit
Date: Thu, 13 Feb 2003 17:44:30 -0500

I'm pretty sure that this is a marginal effect, and the variance can be determined via -mfx-.

Or see Graubard, B.I. and Korn, E.L., "Predictive margins for survey data," Biometrics 55, about June 1999.

Bryan Sayer
Statistician, SSS Inc.

-----Original Message-----
From: Anderson, Soren [mailto:SANDERSON@rff.org]
Sent: Thursday, February 13, 2003 5:22 PM
To: statalist@hsphsun2.harvard.edu
Subject: st: standard errors for dummy variables in logit

I am trying to calculate the standard error for the discrete change in probability associated with a dummy variable in a logit model by hand. The effect itself is given by the difference in predicted probability with and without the dummy variable equal to 1: E[P|d=1] - E[P|d=0], where E[P] is the predicted probability and d is the dummy variable. The variance of the difference should be given by Var(E[P|d=1]) + Var(E[P|d=0]) - 2*Cov(E[P|d=1],E[P|d=0]). Greene (2000, p. 824) has a nice little formula for the variance of the individual predicted probabilities (i.e., the first two terms). Does anybody know the formula for the covariance of two predicted probabilities (i.e., the last term)?

Soren Anderson
Resources for the Future
1616 P Street NW
Washington, DC 20036

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
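For what it's worth, a minimal numpy sketch of the delta-method calculation under discussion (hypothetical names, not Stata code; b is the fitted logit coefficient vector and V its estimated covariance matrix):

```python
import numpy as np

def predicted_prob(x, b):
    """Predicted probability from a logit model at covariate vector x."""
    return 1.0 / (1.0 + np.exp(-x @ b))

def diff_prob_se(x1, x0, b, V):
    """Delta-method SE of P(x1) - P(x0), e.g. x1 and x0 differing only in a dummy."""
    p1, p0 = predicted_prob(x1, b), predicted_prob(x0, b)
    g1 = p1 * (1.0 - p1) * x1        # gradient of P(x1) with respect to b
    g0 = p0 * (1.0 - p0) * x0        # gradient of P(x0) with respect to b
    var1 = g1 @ V @ g1               # Var of the first predicted probability
    var0 = g0 @ V @ g0               # Var of the second predicted probability
    cov10 = g1 @ V @ g0              # the covariance term asked about
    return np.sqrt(var1 + var0 - 2.0 * cov10)
```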
{"url":"http://www.stata.com/statalist/archive/2003-02/msg00392.html","timestamp":"2014-04-20T23:32:33Z","content_type":null,"content_length":"6403","record_id":"<urn:uuid:aa3511c1-d2db-4a16-a658-291076283ff8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
No. 1928: Malt Balls or M&Ms

Today, a thought about commerce, cannonballs and M&Ms. The University of Houston presents this series about the machines that make our civilization run, and the people whose ingenuity created them.

The other day at the candy counter, I got malt balls and my wife picked up M&Ms. Now an article in Science magazine tells about packaging objects. A picture shows groups of smooth spheres and groups of ellipsoids. They look just like M&Ms and malt balls. The article talks about creating the densest packages -- the smallest ones for a given number of items -- candy, grain or, for that matter, cannon balls.

First, spherical objects: You might arrange ten rows of ten balls in the bottom of a square box. Then lay in identical layers, up to the top. That is very poor packaging. Only 52 percent of the space gets used. But if you shake the box, the spheres will find a much closer packing -- 64 percent. (Notice how dry cereal often settles, so the package looks only half full when you open it.)

Now the catch: 64 percent is far from the tightest packing for spheres. Eighteenth-century sailors did much better when they stacked cannonballs in pyramids, nesting them within one another. That calls up an urban legend: On the old warships, stacks of cannonballs were held in place by frames called brass monkeys. They were brass, since cannonballs rusted and stuck to an iron frame. The story says that, since brass contracts more than iron, balls could be dislodged in very cold weather. Hence the saying that "It's cold enough to freeze the ..." Well, you know the rest. (And if you don't, I should not be the one to tell you.) It's an unlikely story.

What is true is that, in that kind of hexagonal nest, each sphere occupies a space shaped as a dodecahedron -- a figure with twelve equal sides. That arrangement is 74 percent full. (A lot more malt balls in the package.) Kepler suggested that optimal packing for spheres, four hundred years ago. Ever since, mathematicians have been trying to prove that you can do no better in packing spheres. Only now, in the early twenty-first century, are they succeeding.

But come back to those boxes of candy. Shake a box of malt balls and they won't reach Kepler's optimum. Shake a box of M&Ms and they will. That's because you can push a sphere only along a line through its center. Push on an M&M, and you can exert a torque. The M&M can be twisted about, but the sphere cannot. Shaking a box of M&Ms, or grains of sand, will nudge particles in far more ways. They'll find their optimal packing. That's probably the reason M&Ms are shaped like little flying saucers. So my wife got more candy in her box than I did in mine. To get the maximum number of malt balls in a box, you'd have to stack them manually -- the way sailors once stacked cannonballs.

This may sound like a wedding of frivolity with arcane math. But think about our vast traffic in small objects -- ball bearings, rice, gravel, oranges. The people who manage all this commerce think long and hard about the weight and volume of moving produce. In the end, we've found a place far beyond just math or malt balls.

I'm John Lienhard, at the University of Houston, where we're interested in the way inventive minds work.

(Theme music)

David A. Weitz, Packing in the Spheres. Science Magazine, Vol. 303, 13 February 2004, pp. 968-969.

You'll find many websites on this subject. See e.g.: http://mathworld.wolfram.com/HexagonalClosePacking.html

Another related matter is that of walking on wet sand. See Episode 1529.
Optimal packing -- the same form as used for cannonballs (photo by John Lienhard) The Engines of Our Ingenuity is Copyright © 1988-2004 by John H. Lienhard.
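The quoted percentages correspond to the standard packing fractions: a simple cubic stack of spheres fills π/6 ≈ 52% of space, shaken ("random close") packing reaches about 64% (an empirical figure), and the pyramidal cannonball stack -- face-centred cubic or hexagonal close packing -- fills π/(3√2) ≈ 74%, the maximum Kepler conjectured.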
{"url":"http://uh.edu/engines/epi1928.htm","timestamp":"2014-04-18T20:44:14Z","content_type":null,"content_length":"8795","record_id":"<urn:uuid:10b3f75b-a86b-4f9c-9d1e-4436f94a6475>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Jurjan-Paul's blog

Though the typical mathematics-heavy curriculum for 'Computer Science' studies seems to suggest otherwise, even in its simplest form with only one dimension this is probably the most advanced mathematical problem many programmers ever encounter in their jobs: "Do two intervals overlap?" Naturally, a whole range of applications need to have such a check in place with respect to the scheduling of resources within a time frame.

Over time I've stumbled on a number of implementations in production code by different programmers who undoubtedly set out to take the task very seriously. I imagine sheets of paper with pairs of bars in all mutual configurations possible... (No, my imagination is not usually that creative... Let's say I know someone... And no, I am not mocking anybody.) Still, the resulting code was either overly complicated or flat out wrong, possibly because one possible configuration of two intervals was overlooked. The possibility of open ended intervals, sometimes in combination with using separate fields for date and time, didn't help in keeping the intent of the code immediately obvious, let alone easy to review for correctness.

Of course the solution to the 'interval overlap' question is very simple and the code can be very short and understandable. In mathematical terms: two intervals [a1, a2] and [b1, b2] overlap if and only if a1 < b2 ∧ a2 > b1. The details of writing that in your language of choice, dealing with open ended intervals and separate fields for date and time, are left as an exercise to the reader (a sketch in Python appears after the comment below).

"But wait...", you say. "That looks so simple it can't be true. That expression can't possibly cover all the bar configurations I drew on these sheets of paper... Oh well, I can check with them of course. Hmm, it seems it might be true after all..." That was my voice (perhaps without the drama added here for effect). My colleague challenged me to prove that the expression above holds after all.

Only months later, after encountering yet another piece of complex code that was written with the exact same goal, did I sit down to take up the challenge. Trying to remember from 'Analytics' class how one goes about proving stuff, not much more came up than just the phrases 'Complete Induction' and 'Complete Intimidation'. The latter proving technique is pretty attractive and often rather effective as well, but it's really frowned upon by the elite (of which I obviously want to be part desperately). As the Induction technique didn't seem suitable either... well, I came up with the following, which I think is conclusive enough:

Intervals [a1, a2] and [b1, b2] overlap if a c exists that is part of both intervals:

a1 ≦ c ≦ a2
b1 ≦ c ≦ b2

From these equations one can easily deduce that:

(a1 ≦ b2) ∧ (b1 ≦ a2)

Note that when (a1 = b2) ∨ (b1 = a2) the overlap has a length of 0, which you wouldn't consider an actual overlap. So, for an overlap with length > 0 the condition remains:

a1 < b2 ∧ a2 > b1

Please let me know if I'm oversimplifying things... although you may be forgiven for thinking I just 'overcomplicated' the issue. And in case you were really looking for something slightly more challenging, please have a go at this. ;-)

Update: An anonymous commenter points out that I was indeed oversimplifying and provides the remainder of the proof. Thanks for that! ;-) I really like it when a little thinking helps to keep our code base maintainable by expressing the logic as compactly (yet readable) as possible!
I do hope however that the above is hardly my most important contribution towards that goal...

1 comment:

1. What you have proved is "if [a1, a2] and [b1, b2] overlap then a1 <= b2 ∧ a2 >= b1". But for an "if and only if", you also need to prove it the other way, i.e. that if a1 <= b2 ∧ a2 >= b1 then [a1, a2] and [b1, b2] overlap. So assume that a1 <= b2 ∧ a2 >= b1 and show that it necessarily follows that there must be at least one element in both [a1, a2] and [b1, b2]. To do so, you could consider the two numbers a1 and b1. One of the following statements must be true:

   1) a1 = b1
   2) a1 < b1
   3) a1 > b1

   If a1 = b1, then we are done because a1 is in both ranges. If a1 < b1, recall we have assumed that a2 >= b1. So a1 < b1 <= a2. Thus b1 is in [a1, a2] and so is in both ranges. If a1 > b1, recall we have assumed that a1 <= b2. So b1 < a1 <= b2. Thus a1 is in [b1, b2] and so is in both ranges. For each possibility we have shown that it follows that there exists at least one element in both [a1, a2] and [b1, b2]. Q.E.D.

   I disagree that you should change the <= to <. I would consider a1 = b1 to be an actual overlap - the ranges would then overlap at a point.
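A minimal sketch of the check in Python (my own illustration, not from the post; None marks an open end, and strict comparisons are used so that a single shared endpoint does not count as an overlap, following the post rather than the comment):

```python
from datetime import datetime
from typing import Optional

def overlaps(a_start: Optional[datetime], a_end: Optional[datetime],
             b_start: Optional[datetime], b_end: Optional[datetime]) -> bool:
    """True if the two intervals overlap with length > 0 (None = open-ended)."""
    # a1 < b2 holds automatically when either bound is open-ended
    a_starts_before_b_ends = a_start is None or b_end is None or a_start < b_end
    # a2 > b1 holds automatically when either bound is open-ended
    a_ends_after_b_starts = a_end is None or b_start is None or a_end > b_start
    return a_starts_before_b_ends and a_ends_after_b_starts
```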
{"url":"http://jurjanpaul.blogspot.com/2008/09/overlap.html","timestamp":"2014-04-17T08:20:24Z","content_type":null,"content_length":"55299","record_id":"<urn:uuid:b3178a9b-c1e7-46e6-af5c-442f2a029876>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Is Sherlock Holmes a Good Detective? Is Sherlock Holmes a Good Detective? As a new season of the hit TV show debuts, Noah Charney asks if the famous sleuth was really all that good after all. Without Arthur Conan Doyle making him solve everything with ease, does his reasoning stand up to reasoning today. In the Arthur Conan Doyle story, “The Silver Blaze,” Sherlock Holmes discusses the theft of a race horse from a country estate that is guarded by a fierce watch dog. "Is there any point to which you would wish to draw my attention?" Holmes: "To the curious incident of the dog in the night-time." "The dog did nothing in the night-time." Holmes: "That was the curious incident.” Holmes later explains how the “dog that didn’t bark” helped him to solve the crime: I had grasped the significance of the silence of the dog, for one true inference invariably suggests others… A dog was kept in the stables, and yet, though someone had been in, and had fetched out a horse, he had not barked enough to arouse the two lads in the loft. Obviously the midnight visitor was someone whom the dog knew well. This is an example of abductive reasoning: an inference is made based on known facts, in an effort to explain them. It certainly sounds good here. Holmes is working on the premise that because (a) dogs bark loudly at strangers, but not at people they know; and (b) the dog didn’t bark loudly, if he barked at all; then (c) the dog knew the intruder. This is how many detectives and police officers work out a problem. But this reasoning contains fundamental flaws, claims Dr. Robin Bryant, Director of Criminal Justice Practice at Christ Church University, in Canterbury, England, a criminologist with an expertise in how detectives think. With the new season of the hugely-popular Sherlock television series just kicking off, it seems like an apt time to consider the question: outside the realm of fiction, where Holmes’ “deductions” all seem to end up correct, would Conan Doyle’s detective be considered a sound, logical thinker in today’s world of policing? A new book, Master-Mind: How To Think Like Sherlock Holmes by Maria Konnikova takes the famous fictional detective’s problem-solving skills and transforms them into a sort of self-help book. That sounds fine at first, and of course Holmes is a renowned problem-solver, at least in the world of fictional ink. But this raises the question: If Sherlock Holmes worked for a modern police force, would he be considered a good detective? According to Dr. Bryant, alas, he would not. His over-reliance on abductive reasoning, at the sacrifice of more powerful logical tools, make his conclusions suspect at best. Dr. Bryant could teach Sherlock Holmes a thing or two. He now travels Europe, teaching police how to analyze their own problem-solving processes, helping them to understand how they make decisions, where there are opportunities for logical inconsistencies, and how to avoid such pitfalls. “In the 21st century, with the advent of large databases and mathematical modeling, inductive forms of reasoning have become the more reliable methods of criminal investigation,” explains Dr. Bryant, when asked what lessons he might give, should Sherlock Holmes one day show up in his class. Under the cold light of mathematical logic, there are holes in Holmes’ conclusion (despite the fact that, in the novel, he solved the case). Holmes assumes that dogs behave in a rational manner when, in fact, there might be various reasons why the dog wouldn’t bark, even at a stranger. 
The stranger might have brought a sausage to appease the dog. The dog might have barked, but no one heard. The dog might even have been drugged (we might call this the Scooby Doo explanation). Because Holmes did not take these variables into consideration, one might conclude that the logic of Holmes’ argument is flawed. It is based on probability (dogs normally bark at strangers), not absolute fact. In film and fiction, the mental process of moving from observation (clues at a crime scene, the behavior of a suspect in an interrogation) to conclusion is presented in a dramatically foreshortened manner. The problem is that most real-life detectives attempt a Holmesian method, rather than the other logic methods in one’s mental arsenal. To demonstrate the sort of flexible thinking Dr. Bryant and other mental processes specialists advocate, consider a mathematical puzzle. Based on the following numbers, what would you guess comes next in this sequence: 2, 4, 8, 16, X. Most people guess that the next number, X, will be 32, each number doubling. That’s a fair guess. But the answer to this particular question is not 32. The next most common estimate is 8, respondents concluding that the sequence will reverse itself. This is also reasonable, but incorrect. The correct answer is 31. This defies our logic when we consider a sequence of numbers like this, but it does make sense as a plausible answer, if we reveal that the sequence refers to the number of points on a circle, and the number of sections into which the circle is divided. Sometimes called Moser’s Circle Problem, this determines the number of sections into which a circle is divided, if X number of points on its circumference are joined by lines. If you place two points along the circle, connecting them with a line, then you will have divided the circle into two sections. Add a third point to the circle and connect all three points and you will have created a triangle, dividing the circle into four sections. If you add a fourth point to the circle and connect all the points with lines, then the circle is divided into eight regions. This continues until you have six points along the circle which, when lines connect each of them, leaves you with 31 sections within the circle. The point of this exercise is not mathematical, nor does it have anything to do with circles or slices of apple pie. When used by Dr. Bryant, it is employed in the hope to demonstrate that there are multiple ways of looking at a problem, and that the most obvious solution, our instinctive reaction, is not necessarily the correct one. Look at the problem from one angle only, and you risk getting it wrong. That is the most frequent logical pitfall into which Sherlock Holmes falls. But because Holmes’ fate was in the hands of Sir Arthur Conan Doyle, he was set up to always succeed. Real-life detectives and amateur problem-solvers do not have that advantage. Whenever there is a problem to be solved, a decision to be made, we already use the logical methods (abduction, deduction) that neurologists use as categories—we just don’t tend to think of them in those terms. By considering the strengths and weaknesses of various logical methods, we can identify our own built-in prejudices and come to clearer, more logical conclusions. Dr. Bryant recommends that his students imagine growing a second brain, one whose role is to double-check the assumptions of one’s primary brain. 
The second brain is there to challenge how conclusions were reached, to ask “what form of reasoning is this?” and “How do I know that if a then b must be true?” This is a point that Maria Konnikova makes in her book: that we would do well to question what we hear immediately, rather than absorbing it as fact and only then questioning it in reflection. By way of example, consider hearing someone mention a “pink elephant.” We instantly imagine an elephant with pink skin, before we “engage in disbelieving it.” Konnikova writes “Holmes’ trick is to treat every thought, every experience, and every perception the way he would a pink elephant…begin with a healthy dose of skepticism instead of credulity…” The implication is that our instinctive credulity puts us at a disadvantage—when we accept what is said (an eyewitness account of a crime, for example), we absorb the prejudice of at first believing what we hear, and only later considering how we might change our mind. Skip this initial step of believing what you hear, and you’ll think more clearly. That sounds good on paper, but Sherlock Holmes did not regularly follow the advice that Maria Konnikova teaches based on Holmesian examples. In the instance of the dog in the night, Holmes solved the crime, but did so with flawed logic. Sherlock Holmes should have asked himself why he concluded that a dog not barking meant that the race horse thief was someone the dog knew. Holmes did not consider alternative explanation for why the dog might not have barked, just like Holmes might have assumed that the solution to Moser’s Circle Problem is 32, and have therefore gotten it wrong. I teach criminology as well as art history, and criminological techniques, of the sort that Dr. Bryant teaches and that we might apply to everyday life, as Konnikova seeks to do in her book, recommend the following method to avoid logical traps. Instead of going with the most obvious and immediate explanation for any action, try to first set that explanation aside and see whether the same facts fit an alternative explanation. For Holmes, this would mean considering all of the various ways in which a theft could occur without the guard dog barking. For a mathematician, it would mean accepting that there are various solutions to Moser’s Circle Problem: 32 is correct if you are doubling each number in the sequence. 8 is correct if the sequence is reversing. But the answer we’re looking for is 31. Would Holmes have solved Moser’s Circle Problem? He would likely have concluded 32 or 8, long before he reached the “right” answer of 31. Sometimes the answer lies not with what you conclude, but with how you approach the question.
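For the record, the number of regions in Moser's circle problem has a closed form, R(n) = C(n,4) + C(n,2) + 1, which gives 2, 4, 8, 16 and 31 for n = 2 through 6 points on the circle.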
{"url":"http://www.thedailybeast.com/articles/2014/01/26/is-sherlock-holmes-a-good-detective.html","timestamp":"2014-04-17T01:08:47Z","content_type":null,"content_length":"198552","record_id":"<urn:uuid:afdb3c20-cc70-4b11-b4a1-afa25bf6adbe>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Method of electrical prospecting - Esme, Rosaire E.

This invention relates to geophysical exploration by measuring certain electrical properties of the earth's crust exhibited under certain states of excitation. An object of the invention is to determine the point-to-point variation on the earth's surface of an electrical property of the earth which varies with a factor related to time.

In the electrical transient method of prospecting proposed by Blau in U. S. Patent 1,911,137, it was proposed to note the change of potential with respect to time as the result of a suddenly applied steady current. In various subsequent patents and publications, it has been shown that a quantity susceptible to measurement in the transient prospecting technique is the time-constant or some quantity related thereto. For example, Saibara et al., in U. S. Patent 2,177,346, disclose a simplified method of determining such a time constant. K. E. Gould has treated some aspects of the relation between alternating current steady state response and transient response of the earth in a paper "Coupling between parallel earth-return circuits under D. C. transient conditions" published in Electrical Engineering, September 1937, page 1159.

Unfortunately the function which would be necessary to describe the transient response at any given location on the earth's surface would ordinarily be quite complicated, the number of terms in some cases being quite large. Usually, however, it occurs that one of the transient terms predominates, and the time-constant of this single term is used to express the transient property of the earth. Also, the shape of the frequency response curve or curve of amplitude against frequency generally displays a typical characteristic, and from such characteristic the time-constant may be computed. Equivalent electrical circuits have been devised to nearly duplicate the shape of potential transients due to suddenly applied direct currents and have been disclosed in copending application 226,668 filed Aug. 25, 1939, now Patent No. 2,230,803 of February 4, 1941. Obviously such circuits can be solved for frequency response as well as time constants, so that there exists a practical link between the transient and alternating current steady state performance.

The relation between steady state and transient phenomena is well established. For any linear circuit, for example, determination of the transient response from the alternating current steady state response may be accomplished by using Equation 405 of "Operational-Circuit Analysis" by V. Bush, John Wiley and Sons, 1932. The reverse procedure of determining the alternating current steady state response from the transient response is effected by use of Equation 115 in the same treatise. Besides the work of Gould mentioned above in treating the earth as a transmission network between generation and detection areas, the recent work of White is cited (G. E. White, "Application of Rapid Current Surges to Electric Transient Prospecting," American Institute of Mining and Metallurgical Engineering, Technical Publication 1216, February 1940). Reference is also made to "A note on the relation of suddenly applied direct current earth transients to pulse response transients," Geophysics IV, October 4, 1939. In the Geophysics article mention is made (page 281) to the relation between the alternating current steady state and the transient.
The results shown in the former paper prove that the mathematical processes relating various types of response, as for example the short impulse transient to the direct current transient, are as applicable to the earth as to a conventional circuit.

Thus, further objects of the invention are to determine the time-constant of the earth by alternating current steady-state electrical measurements and to determine quantities related to the time-constant as, for example, the reciprocal-time-constant which has the dimension of frequency, and apparent resistivities and mutual impedances at such frequencies that they show greater contrast and have better prospecting value than the resistivity determined at zero frequency or direct current. A further object is to furnish a method of prospecting having a high contrast by using alternating current frequencies which are optimum with respect to the resistivity of the area to be surveyed. Another object is to gain high contrast by using optimum geometry of electrode spacing for the available frequency range with a given resistivity range in an area to be prospected.

These and other objects will be apparent from the specification taken in connection with the accompanying drawings in which:

Fig. 1 shows a preferred layout on the earth's surface for measuring the various electrical quantities, and is a vertical section of the earth's surface;
Fig. 2 shows an equivalent circuit of the earth as a four terminal impedance;
Fig. 3 is a diagram showing the mutual impedance plotted against frequency for various station locations over a profile, and is a three-dimensional illustration in which the vertical axis shows mutual impedance in appropriate units, the horizontal axis shows the frequency range, and the diagonal axis represents distance along the profile;
Fig. 4 is a map of the surface of Fig. 3 showing contours of equal mutual impedance;
Fig. 5 is a map of the surface of Fig. 3 looked at from above showing the variation of mutual impedance along the profile at certain specific frequencies;
Fig. 6 shows the reciprocal-time-constant in frequency units determined by transient and steady state methods;
Fig. 7 shows a comparison of the reciprocal-time-constant and the variation in frequency required to give a constant mutual impedance at the various stations along the profile;
Fig. 8 shows two typical earth current transients;
Fig. 9 shows the type of mutual impedance-frequency curve obtainable under certain arbitrary conditions; and
Figs. 10 and 11 show the transient responses of Fig. 8 replotted to a logarithmic inverted time scale along with the magnitude of the frequency response corresponding to the respective transients, with the same logarithmic inverted time scale corresponding to frequency.

In Fig. 1, electrodes 1, 2, 3 and 4 are embedded in the earth's surface, and current of known value is applied by means of generator G to electrodes 1 and 2. Lines of current flow are indicated at 6, 7 and 8. For conditions of near zero frequency, equipotential surfaces exist, the vertical traces of which are shown as 9 and 10. It is the potential between these surfaces which is measured by potential detecting and measuring instrument D. In Fig. 2 is shown a four terminal network which is the equivalent of the prospecting circuit of Fig. 1 in which R1, R2, etc. represent the electrode resistances of electrodes 1, 2, 3, and 4 of Fig.
1 and Z is the mutual impedance which may be regarded as the impedance contained between the surfaces represented by traces 9 and 10 in Fig. 1 although only instantaneously during times of change in current state. The equivalent electrodes or terminals in this figure are shown as like primed reference characters I', 2', 3' and 4'. Fig. 3 is a three dimensional illustration of results obtainable by the invention and is constructed from data collected along a profile located over a known geological structure. The curves marked I , 12, 13, etc. are curves of mutual impedance plotted against frequency at each station 1, 2, etc. along the profile. The impedance axis or ordinates represent values of mutual impedance expressed in logarithm units and in terms of decibels above an arbitrarily chosen zero level of 0.001 volt per ampere. The horizontal axis represents frequency in log units so that each division represents one octave change in frequency. The diagonal axis represents distance along the profile, each division representing 1000 feet of distance. The contours on the surface represent equal levels of mutual impedance or potential-current ratio, expressed in decibels above the aforementioned zero level. Fig. 4 is a map of the surface shown in Fig. 3, the contours representing the variation in frequency necessary to produce a measured mutual impedance which is a constant for all stations. For example, the curves marked "15" show the variation in frequency necessary to produce a measured mutual impedance 15 decibels above the reference level of 0.001 volt per ampere. Fig. 5 is another map of the same surface viewed from the side, and each curve represents the mutual impedance measured at a given frequency; for example, the curve marked /=400 shows the mutual impedance measured at 400 cycles at the various stations along the profile, the units of mutual impedance again being in terms of decibels above the 0.001 volt per ampere reference level. In Fig. 6 is shown a comparison of reciprocaltime-constants determined in accordance with the present invention and as measured by the means and method of the aforementioned Salbara et al. patent. The reciprocal-time-constant values have been converted into frequency units. The absolute difference between the two methods is about 30% which is not large considering all the factors involved including instrumental errors, and the personal factors which pertain to obtaining a balance with the transient method. The comparative error between the two curves is less than 5% however, except at station O which discrepancy could quite possibly have been due to an error. Fig. 7 is a comparison of the time-constantdata, determined by the transient method, with the frequency anomaly data of the 17.5 decibel contour of Fig. 4, the solid curve being the frequency at which the mutual impedance was 17.5 decibels above the reference level and the dotted curve being the reciprocal-time-constant converted to frequency units. Figs. 10 and 11 graphically illustrate the relation between the transient and the alternating current steady state. The dotted curves show the transient response in inverted log-frequency units, and the solid curves represent the absolute values of frequency response which correspond to the given transient responses. 
As is usual the frequency response curves have been plotted with a logarithmic frequency scale, but the transient curves, departing from custom have been plotted with the time axis inverted so that the reciprocal of time progresses to the right, and the log units have been used so there is a correspondence between the frequency and reciprocal time scales. The transients of Figs. 10 and 11 have been plotted in the customary way in Fig. 8 where curve 16 shows the transient of Fig. 10 and curve 17 shows the transient of Pig. 11, both recognizable as transients frequently met with in earth current transient prospecting. It will be observed from these curves (Figs. 10 and 11) that there is a close enough similarity to enable the time constant to be estimated with a reasonable degree of accuracy from the frequency response curves, as close perhaps as from actual transient measurements. It should be noted that the slope of the frequency response curves is greater than the slope of the transient curves, a fact which enables a greater contrast to be attained with the steady state method than with the transient method. It was from the frequency response curves of Fig. 3 that the reciprocal-time-constant curve of Fig. 6 (curve marked "A. C. steady state") was determined. If the phase angle of the alternating current steady state response is measured along with the 1 magnitude, more rigorous methods of determining the transient response may be employed as was done by Gould in the aforementioned paper. From the foregoing, it is evident that by using currents of different frequencies, it is possible to determine to any desired degree of accuracy the' quantities which would be measurable directly with a square wave or transient, including the time-constant. As a method of prospecting however, there are other quantities dimensionally the same as the reciprocal-time-constant but which in some cases offer more contrast over a given profile than the reciprocal-time-constant. Thus in Fig. 7 is a comparison of the reciprocal-time-constant value compared with the frequency necessary to be used in order to measure a constant mutual impedance at the various stations. This frequency anomaly is in the same units since frequency has the dimension of the reciprocal of time, and the time constant has the dimension of time. The improved contrast is Another quantity that can profitably be measured is the mutual impedance at a selected frequency. In Fig. 5 is shown the values of mutual impedance measured at various frequencies. The best contrast is seen to have been measured at 400 cycles, the curve for 25 and 100 cycles showing smaller variations. The curve for 1000 cycles is very rough and erratic, and apparently meaningless as far as the anomaly in question is concerned. An examination of Fig. 3 will reveal that 400 cycles is the frequency which for all stations shown cuts the figure where the slope downward to the right is a maximum. Thus high contrast prospecting can be conducted using a single frequency if that frequency is selected so that for most or all the stations on a prospect that frequency falls on the amplitude-frequency response curve within a range of frequencies where the slope is decidely negative or is at a negative maximum. I prefer to choose that frequency such that the slope of the curve is at least nearly at its negative maximum value when increasing frequencies are plotted to the right and increasing values of mutual impedance are plotted upwards. 
In the practice of the invention, the spread, or distance between electrodes is of importance. With a given spread and resistivity, a definite frequency range will be required to produce a frequency response curve which flattens to the left and has adequate slope to the right for optimum contrast. In the example shown, the spread between current electrodes was 1000 feet, the spread between potential electrodes was likewise 1000 feet, and the distance between the nearest current and potential electrodes was 500 feet. For ,a spread of 1000 feet between center electrodes it would be expected that a frequency range of from about 6 cycles to about 250 cycles would have been needed. In another area where the average resistivity is twice as great as that in the example cited, the frequency range when using the same spreads should probably be in the order of 50 to 2000 cycles. For the same frequency range used in the example a spread of about 700 feet would have been appropriate in such an area. With a given set of equipment having the.frequency range used in the example, I prefer to adjust the spread so as to obtain curves of frequency response of the shapes shown. Generally speaking the frequency range depends upon the reciprocal of the square of the spread and directly upon the resistivity or, in other words, the spread depends upon the square root of the reciprocal of the frequency and upon the square root of the resistivity. To illustrate the relation between frequency range, spread and resistivity, a curve of frequency response or mutual impedance plotted against frequency is shown in Fig. 9 plotted in dimensionless units, based on computations for a homogeneous earth, and a distance a between collinearly spaced electrodes for each of the three spaces between electrodes. The ordinates are expressed in terms of the dimension a, the mutual impedance Z and the resistivity p in such form that the quantity aZ/p is dimensionless since a has the dimension of length, L, Z has the dimension of resistance R, and p has the dimension of resistance times length RL, so that aZ/p=LR/LR and is hence dimensionless. The abscissa is plotted in units of the product of the square of the spread times the ratio of frequency to resistivity which is therefore also dimensionless since frequency f has the dimension of reciprocal time or T-1 and P has the dimension L2T-1 so that a2J/p=L2T-1/L2T-1. The curve of Fig. 9 is distinctly an idealized curve, to be expected to be met with only approximately in practice since ordinarily the resistivity varies with depth and also changes along the horizontal. However, it does show that for a given resistivity range, the spread must be adjusted in terms of the square root of the frequency, and with a given frequency range the spread must be adjusted in terms of the reciprocal of the square root of the resistivity, in order that the desired range of frequencies will be caused to fall on the desired range of the curves actually encountered. C The essence of the present invention lies in the determination of a frequency, or a frequency range, which may be profitably employed to give electrical prospecting data with higher contrast and greater reliability than would be obtained by the use of frequencies outside this range. 
The determination of this frequency range depends upon the use of a given range of frequencies and different spreads to determine the curves of mutual impedance against frequency at various stations over an area to be prospected, and then selecting the particular frequency or range which gives rise to such mutual impedance-frequency curves possessing a substantially fiat top, a knee and a downiwardly sloping portion. The particular frequency or frequencies are then selected from the range on the curves where the slope is appreciable and preferably a maximum, or else a point on the curve which is either at constant level (mutual impedance) at the various stations, or is a constant ratio down from the maximum above the knee of the curve. Such a determination at a few stations will lead to information which makes possible economical prospecting at a single spread for the remaining stations in a given geological province. In the example shown, the slope of the mutual impedance-frequency curves does not become positive within the range of frequencies used. However, profiles have been run at various spreads and frequency ranges in which the response curves frequently were found to turn upward when the frequency become high enough. It is important in the practice of the invention to select the frequencies below any such region of positive slope, even in the range where the slope is leveling off into a lower knee as, for example around 1000 cycles in the profile illustrated here, results are apt to be very erratic as shown by the 1000 cycles curve of Fig. 5. After the essential step of determining the frequency range in which it is necessary to work or determining the spread necessary to meet the conditions of available frequencies and existing resistivities, there are three measurements which can be profitably made. One of these is the determination of the reciprocal-time-constant, another is the determination of the mutual impedance at a single fixed frequency within the optimum range, and the third is the determination of that frequency which gives rise to a constant value of mutual impedance measured at the various stations. These three measurements are illustrated respectively; reciprocal-time-constant in Fig. 6, mutual impedance at a single optimum frequency in Fig. 5 (curve of 400 cycles) and the frequency for constant mutual impedance in Fig. 4. Still another use to which the invention may be put is the determination of equivalent direct current resistivities by the use of a frequency below the value at or above the knee of the curve. Here again it is necessary to determine the shape of the mutual impedance frequency curve or the value of a few points on the 2 curve in order to be sure the frequency used is below that value of frequency represented by the top of the knee of the curve. I prefer to work in the higher range of frequencies, however, be- 3 cause of the greater contrast. The objection may be raised that the utility of the method is reduced when the spreads subtended by the respective pairs of electrodes are shortened in order to be able to use a given fre- 3 quency range. In some cases it may be better to reduce the frequency range rather than the spread. 
In the majority of cases, however, and in all the cases where this method has its greatest value, the reduced depth of penetration due to the shortened spread is not of serious consequence, since the method is most applicable in the art of shallow stratigraphic prospecting, where the shallow evidence of deeply buried deposits is sought rather than the deposits themselves. See, for example, "Shallow stratigraphic variations over Gulf Coast structures," by E. E. Rosair, Geophysics III, 2, March, 1938. Experiments subsequent to this paper show generally that the conclusions reached in that paper are applicable to other geological provinces. Obviously the same principles may be applied to well logging, wherein electrodes are lowered into a drilled well and resistivities determined to indicate the nature of strata traversed. Instead of measuring the resistivity, steady state alternating current may be employed and a frequency used which is high enough to result in a curvilinear relation between mutual impedance and frequency, whereby a higher contrast will result than would be the case with direct current. In other words, the arrangement of Fig. 1 may be turned vertically, or a pair of electrodes may be arranged horizontally and the other pair arranged to be lowered into a bore hole, and the invention practiced just as in the case of horizontal prospecting to which attention has been directed in the greater part of this specification.

What is claimed:

1. The method of geoelectric prospecting in which the electrical transmission properties of the earth are measured comprising the steps of, causing an alternating current of known amplitude to flow in a region of the earth's crust, detecting and measuring the magnitude of a potential between points subjected to the flow of current, and varying the frequency of the current between such limits that the detecting and measuring step reveals the frequency range within which the slope of the curve of mutual impedance with respect to frequency is negative.

2. The method of geoelectric prospecting in which the electrical transmission properties of the earth are measured comprising the steps of, causing an alternating current of known amplitude to flow in the earth between points located substantially on the earth's surface, detecting and measuring the magnitude of potential between points collinearly spaced from the current points and lying on an extended line passing through said current points, and varying the frequency of the current between such limits that the detecting and measuring step reveals the frequency range where the slope of the curve of mutual impedance with respect to frequency is negative.

3. The method of geoelectric prospecting in which the electrical transmission properties of the earth are measured comprising the steps of, causing an alternating current of known amplitude to flow in a region of the earth's crust, detecting and measuring the magnitude of a potential between points subjected to the flow of current, varying the frequency of the current, and adjusting the spacing between the nearest current and potential points until the region of greatest negative slope of the curves of mutual impedance plotted against frequency falls within an arbitrary frequency range.

4. The method of geoelectrical prospecting in which the time constant of the earth acting as a four terminal electrical transmission medium is measured comprising the steps of, causing an alternating current of known value to flow in a region of the earth's crust, detecting and measuring the magnitude of a potential between points subjected to the flow of current, varying the frequency of the current throughout a range of frequencies of which the lowermost gives rise to a substantially constant measured potential-current ratio, and measuring the potential-current ratio during the varying step, such range of frequencies including a specific frequency at which the ratio is a predetermined fraction of the ratio at said lowest frequency, such specific frequency being related to the reciprocal time constant of the earth by a simple constant.

5. The method of geoelectric prospecting in which the electrical transmission properties of the earth are measured comprising the steps of, causing an alternating current to flow in a region of the earth's crust, detecting and measuring the magnitude of a potential between points subjected to the flow of current, varying the frequency of the current between such limits that some of the higher frequencies give rise to measured mutual impedances which are materially less than the mutual impedance measured at the lowest frequency, determining the frequency range over which the mutual impedance decreases as the frequency increases, repeating the measurements at a plurality of stations, selecting a mutual impedance value which for at least most of the stations is produced by frequencies within the determined range, and determining for the various stations the frequency value required to exhibit the selected mutual impedance value.

6. The method of geoelectric prospecting in which the electrical transmission properties of the earth are measured comprising the steps of, causing an alternating current to flow in a region of the earth's crust, detecting and measuring the magnitude of a potential between points subjected to the flow of current, repeating the measurements at a plurality of stations, varying the frequency of the current between such limits that some high frequencies give rise to measured mutual impedances which are materially less than the mutual impedance measured at the lowest frequency, determining a frequency range which for the majority of the stations exhibits a negative slope of the curve of mutual impedance plotted against frequency, and determining for the stations the mutual impedances at a single frequency, said frequency being selected from within the determined frequency range.

7. The method of exploration for subsurface structural anomalies comprising the steps of causing an alternating electrical current to flow in the earth, detecting the potential between points subject to the influence of the current, repeating the test at a plurality of frequencies including a range in which the mutual impedance decreases as the frequency increases, measuring at each of the test frequencies the mutual impedance of the current and detection paths, and repeating the group of measurements at a plurality of stations to determine the variation of mutual impedance with frequency at each station for the respective frequencies as an indication of the location and extent of subsurface anomalies.

8. The method of geophysical exploration comprising the steps of causing an alternating current of known amplitude to flow between points on the earth's surface, detecting the potential difference between spaced points in the area of current conduction, varying the frequency of the current to determine the frequency below which there is no substantial change in mutual impedance, and measuring the mutual impedance at a plurality of stations while utilizing the determined frequency.

9. The method of geophysical exploration comprising the steps of causing an alternating current of known amplitude to flow between points on the earth's surface, detecting the potential difference between spaced points in the area of current conduction, varying the frequency of the current, measuring the mutual impedance at a plurality of frequencies, repeating the measurements at a plurality of stations, and determining at each station the frequency which produces a constant mutual impedance.

10. The method of geophysical exploration comprising the steps of, causing an alternating current of known amplitude to flow between points on the earth's surface, detecting the potential difference between spaced points in the area of current conduction, varying the frequency of the current to determine the frequency below which there is no substantial change in mutual impedance, determining the optimum spread giving rise to a frequency response curve which includes a range in which the response decreases as the frequency increases within an arbitrary frequency range, and measuring the mutual impedance at a plurality of stations at the determined frequency and spread.

11. The method of geoelectric prospecting in which the electrical transmission properties of the earth are measured comprising the steps of, causing an alternating current to flow in a region of the earth's crust, detecting and measuring the magnitude of a potential between points subjected to the flow of current, varying the frequency of the current over a range within which the mutual impedance decreases with increasing frequency, repeating the measurements at a plurality of station locations, and determining at each station the frequency which gives rise to a selected value of mutual impedance at each station.

12. The method of geoelectric prospecting in which the electrical transmission properties of the earth are measured comprising the steps of, causing an alternating current to flow in a region of the earth's crust, detecting and measuring the magnitude of a potential between points subjected to the flow of current, varying the frequency of the current and the spread between the current and detection points until a spread is determined which results in a decrease in mutual impedance as the frequency is increased from an arbitrary lower limit to an arbitrary upper limit, repeating the test at a plurality of locations with the determined spread and the arbitrary frequency range, and determining at each location the frequency which will produce a selected value of mutual impedance at each of the stations.

PAUL W. KLIPSCH.
{"url":"http://www.freepatentsonline.com/2293024.html","timestamp":"2014-04-21T12:50:37Z","content_type":null,"content_length":"51579","record_id":"<urn:uuid:a6742ab2-8d5f-46a7-a5ae-216ca152be69>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
GDP/Equilibrium and MPC

GDP/Equilibrium and MPC questions. Please use the equation above for the following questions:

1. If the planned investment is $200 billion, the equilibrium level of GDP is:
2. If the equilibrium is $2000 billion, autonomous investment is:

Please use the equation above to answer the questions below.

3. Private sector equilibrium occurs at GDP of:
4. The equation for private sector equilibrium can be expressed as:
5. What is the value of the marginal propensity to consume?
6. What is the value of the multiplier?
7. If planned investment increases by $10, by how much will equilibrium GDP increase?
8. The new equilibrium GDP after the $10 increase in investment is:
9. Assume government decided to spend $30. Equilibrium GDP at investment of $30 and government spending of $30 is:
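The specific consumption and investment equations referred to above ("the equation above") are not reproduced on this page, so the following is only a general template. Assuming a linear consumption function of the form $C = a + bY$, where $a$ is autonomous consumption and $b$ is the marginal propensity to consume (MPC), the private-sector equilibrium condition $Y = C + I$ gives

$$Y^{*} = \frac{a + I}{1 - b}, \qquad \text{multiplier} = \frac{1}{1 - b},$$

so a \$10 increase in planned investment raises equilibrium GDP by $10/(1-b)$, and adding government spending $G$ simply extends the condition to $Y = C + I + G$. Substituting the intercept and slope of the actual equation given in the assignment yields the numerical answers to questions 1–9.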
{"url":"https://brainmass.com/economics/investments/51031","timestamp":"2014-04-17T00:50:09Z","content_type":null,"content_length":"29484","record_id":"<urn:uuid:2fe591a3-dc76-4aed-94f6-1ed2eb92771d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Partially optimal solutions in integer linear programming

Linear programs with a totally unimodular system matrix are known to have an optimal integer point. They are therefore solvable by relaxing the integer constraints to intervals. Another interesting phenomenon occurs in linear programming relaxations of quadratic pseudo-Boolean functions. If such a relaxation has an optimal point $x$ for which $x_i=0$, then there exists an optimal solution $x^*$ to the original problem for which $x^*_i=0$. The analogue holds for $x_i=1$. That is, even if the solution is not integral (Boolean), the coordinates which are integral still tell us something about the solution to the original problem. Is there a general property of integer (0/1) programs, similar to total unimodularity, which enables one to draw conclusions about partially optimal solutions like this? integer-programming linear-programming

1 Answer (accepted)

Here is an example where this property is proved for a very special type of integer program: G. L. Nemhauser and L. E. Trotter, Jr.: Vertex Packings: Structural Properties and Algorithms, Mathematical Programming, 1975. I am also interested in other examples.
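For reference, the property described in the question is usually called persistency. In the vertex packing paper cited in the answer, Nemhauser and Trotter study the LP relaxation

$$\max \sum_{v\in V} x_v \quad \text{s.t.} \quad x_u + x_v \le 1 \ \ \forall \{u,v\}\in E, \qquad 0 \le x_v \le 1,$$

and show that its vertex solutions are half-integral (all entries in $\{0,\tfrac12,1\}$) and persistent: every variable that takes the value 0 or 1 in an optimal LP solution keeps that value in some optimal integer solution. The quadratic pseudo-Boolean statement in the question is the analogous persistency (roof duality) result.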
{"url":"http://mathoverflow.net/questions/81500/partially-optimal-solutions-in-integer-linear-programming/81594","timestamp":"2014-04-21T02:21:10Z","content_type":null,"content_length":"49341","record_id":"<urn:uuid:7019041a-d68b-4976-876b-64aa255ec7db>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Phoebus: Erlang-based Implementation of Google's Pregel

Chad DePue about Phoebus, the first (?) open source implementation of Google's Pregel algorithm: Essentially, Phoebus makes calculating data for each vertex and edge in parallel possible on a cluster of nodes. Makes me wish I had a massively large graph to test it with.

Developed by Arun Suresh (Yahoo!), the project ☞ page includes a bullet description of the Pregel computational model:
• A Graph is partitioned into groups of Records.
• A Record consists of a Vertex and its outgoing Edges (An Edge is a Tuple consisting of the edge weight and the target vertex name).
• A User specifies a 'Compute' function that is applied to each Record.
• Computation on the graph happens in a sequence of incremental Super Steps.
• At each Super Step, the Compute function is applied to all 'active' vertices of the graph.
• Vertices communicate with each other via Message Passing.
• The Compute function is provided with the Vertex record and all Messages sent to the Vertex in the previous SuperStep.
• A Compute function can:
□ Mutate the value associated to a vertex
□ Add/Remove outgoing edges.
□ Mutate Edge weight
□ Send a Message to any other vertex in the graph.
□ Change state of the vertex from 'active' to 'hold'.
• At the beginning of each SuperStep, if there are no more active vertices -and- if there are no messages to be sent to any vertex, the algorithm terminates.
• A User may additionally specify a 'MaxSteps' to stop the algorithm after some number of super steps.
• A User may additionally specify a 'Combine' function that is applied to all the Messages targeted at a Vertex before the Compute function is applied to it.

(A minimal sketch of this superstep loop is given below.) While it sounds similar to MapReduce, Pregel is optimized for graph operations, by reducing I/O, ensuring data locality, but also preserving processing state between phases.

Original title and link: Phoebus: Erlang-based Implementation of Google's Pregel (NoSQL databases © myNoSQL)
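To make the bulleted model concrete, here is a minimal vertex-centric superstep loop. It is an illustrative sketch in Python only — Phoebus itself is written in Erlang, and none of the names below come from its actual API:

```python
# Illustrative Pregel-style superstep loop (not Phoebus's API).
# A record is a vertex value plus its outgoing edges; compute() may mutate the
# value, send messages to other vertices, and put the vertex on 'hold'.
from collections import defaultdict

def run(vertices, compute, combine=None, max_steps=30):
    """vertices: {name: {"value": v, "edges": [(weight, target)], "active": True}}"""
    inbox = {}
    for _ in range(max_steps):
        active = [v for v, r in vertices.items() if r["active"] or inbox.get(v)]
        if not active:                      # no active vertices and no pending messages
            break
        outbox = defaultdict(list)
        for name in active:
            rec, msgs = vertices[name], inbox.pop(name, [])
            if combine and msgs:            # optional 'Combine' applied to incoming messages
                msgs = [combine(msgs)]
            rec["active"] = True            # a message re-activates a held vertex
            compute(name, rec, msgs, outbox)
        inbox = outbox
    return vertices

def max_value_compute(name, rec, msgs, outbox):
    """Example Compute: propagate the maximum vertex value through the graph."""
    new_val = max([rec["value"]] + msgs)
    if new_val > rec["value"] or (not msgs and rec["active"]):
        rec["value"] = new_val
        for _w, target in rec["edges"]:     # message every out-neighbour
            outbox[target].append(new_val)
    rec["active"] = False                   # 'hold' until a new message arrives
```

Each pass over the active set is one superstep, and the loop stops exactly under the termination condition listed above (no active vertices and no outstanding messages) or after `max_steps`.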
{"url":"http://nosql.mypopescu.com/post/1292881817/phoebus-erlang-based-implementation-of-googles-pregel","timestamp":"2014-04-20T18:47:47Z","content_type":null,"content_length":"45206","record_id":"<urn:uuid:ed29de96-5b9b-4cbf-9df7-433200511a3e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
High Performance 3D PET Reconstruction Using Spherical Basis Functions on a Polar Grid International Journal of Biomedical Imaging Volume 2012 (2012), Article ID 452910, 11 pages Research Article High Performance 3D PET Reconstruction Using Spherical Basis Functions on a Polar Grid ^1Instituto de Física Corpuscular, Universitat de València/CSIC, Edificio Institutos de Investigación, 22085 Valencia, Spain ^2Departamento de Física Atómica, Molecular y Nuclear, Universitat de València, 46100 Valencia, Spain Received 14 October 2011; Revised 18 January 2012; Accepted 26 January 2012 Academic Editor: Habib Zaidi Copyright © 2012 J. Cabello et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Statistical iterative methods are a widely used method of image reconstruction in emission tomography. Traditionally, the image space is modelled as a combination of cubic voxels as a matter of simplicity. After reconstruction, images are routinely filtered to reduce statistical noise at the cost of spatial resolution degradation. An alternative to produce lower noise during reconstruction is to model the image space with spherical basis functions. These basis functions overlap in space producing a significantly large number of non-zero elements in the system response matrix (SRM) to store, which additionally leads to long reconstruction times. These two problems are partly overcome by exploiting spherical symmetries, although computation time is still slower compared to non-overlapping basis functions. In this work, we have implemented the reconstruction algorithm using Graphical Processing Unit (GPU) technology for speed and a precomputed Monte-Carlo-calculated SRM for accuracy. The reconstruction time achieved using spherical basis functions on a GPU was 4.3 times faster than the Central Processing Unit (CPU) and 2.5 times faster than a CPU-multi-core parallel implementation using eight cores. Overwriting hazards are minimized by combining a random line of response ordering and constrained atomic writing. Small differences in image quality were observed between implementations. 1. Introduction Iterative statistical methods are the preferred reconstruction algorithms in emission tomography. Image quality greatly depends on how accurately physical phenomena are modelled in the system, which is represented as the system response matrix (SRM) [1, 2]. The SRM models the probability of detection, of an annihilation produced in voxel , in a detector element or crystal pair , . The size of the SRM is imposed by the number of voxels that comprises the field of view (FOV) and by the number of detector elements of the scanner, . The size of the SRM, , is significant due to the large number of crystals used by high resolution scanners and the fine granularity of the FOV [3]. Fortunately, most elements in an SRM are typically zero, which means that the number of nonzero SRM elements to store is significantly lower than . The SRM can be directly obtained by taking measurements where a point source is placed in different positions over the FOV, interpolating intermediate positions when needed. This produces a highly accurate SRM at the cost of long acquisition times [4, 5]. To greatly reduce acquisition time, a single scan of an array of point sources can be simultaneously acquired [6]. 
However, rather than directly measuring the system matrix such methods are used only to estimate the shift-variant point spread function. Alternatively, the SRM can be calculated using analytical methods. The speed of these methods outperforms others, at the expense of limited precision [7, 8]. Finally, the SRM can be calculated using Monte Carlo methods [9–11]. These methods produce a more accurate SRM than analytical methods, but do not include the physical phenomena that are only found by measuring the SRM. Regarding the time necessary to calculate the SRM, a simulation is longer than analytical computations, but less time consuming than that necessary to obtain measurements experimentally. The high granularity of current PET scanners, used to obtain the best possible spatial resolution, makes the calculation of the SRM elements a cumbersome process. The use of cylindrical symmetries, taking advantage of the polygonal architecture of PET scanners, is a common approach to reduce the number of SRM elements required [9–15]. In the specific case of a Monte Carlo-based SRM, the simulation time and storage required can be greatly reduced. Although the most common model used to represent the image space is the cubic voxel, there exists a number of alternative basis functions. The use of polar voxels provides a convenient model to exploit scanner symmetries, which significantly reduces the simulation time and space necessary to store the SRM only modelling a small portion of the FOV [16]. Alternatively, spherical basis functions (blobs) have been shown to represent a more suitable basis to model the continuous radiotracer distribution [17]. Improved noise performance has been demonstrated, compared to cubic and polar voxels [18–20]. Moreover, better spatial resolution can be obtained with spherical basis functions, compared to postreconstruction filtered cubic and polar voxels [15]. Spherical basis functions overlap in space. Therefore, the number of basis functions intersected by any given line of response (LOR) is higher using blobs compared to voxels. This characteristic produces that the SRM has a significantly large number of nonzero SRM elements, resulting in long CPU-based reconstruction times. The forward and backward projections in a Maximum-Likelihood- (ML-) based reconstruction algorithm perform independent operations on each LOR and voxel, respectively, which makes these operations highly parallelizable. Graphical processing units (GPUs) technology have been successfully employed to enhance speed of image reconstruction in both precomputed [21–23] and on-the-fly [24, 25] SRMs, which makes it a potential candidate for blob-based reconstruction as well. An alternative approach to parallelize the reconstruction process is to use CPU-multicore architectures. In this work we have implemented an optimized CPU-multicore version of the reconstruction algorithm using the Message-Passing-Interface (MPI) libraries (http://www.mcs.anl.gov/research/projects/mpi/index.htm). This paper explores the suitability of these two approaches for the special case of ML-Expectation-Maximization (EM) PET reconstruction using a blob-based Monte Carlo precomputed SRM. The use of a large precomputed SRM in GPU technology represents one of the major challenges of this work. 2. Materials and Methods 2.1. Scanner Description In this study the small animal scanner MADPET-II [26], shown in Figure 1, is used as a model for all the simulations carried out. 
MADPET-II has a radial diameter of 71mm and an axial FOV of 18.1mm. The ring contains 18 modules, where each module has two layers of 8 × 4 LYSO crystals with individually read out electronics based on avalanche photodiodes. The size of the crystals in the front layer is and that of the rear layer is . The dual layer provides information of the depth of interaction, which mitigates parallax error. 2.2. Hardware Description The graphics card used in image reconstruction was the NVIDIA Tesla C2070, based on a Fermi architecture. The GPU contains 448 cores (thread processors or shaders) running at 1.15GHz (575MHz core), 6Gb GDDR5 on-board global memory, 48Kb of shared memory per block, and a bandwidth of 144Gb/s. Threads are internally organized as a grid of thread blocks, where threads in the same block communicate through shared memory. The maximum number of blocks is 65535 and the maximum number of threads per block is 1 024. The card has a peak performance of 515.2Gflops/s for double precision The algorithm was implemented for both the GPU and an eight-core desktop PC (Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz) with 12Gb of RAM. An optimized algorithm allowed the use of either a single core or multiple cores and distributed memory. 2.3. Polar Symmetries and SRM Calculation The total number of crystals of MADPET-II is 1152; for image reconstruction using cubic voxels an image space of (voxels of size mm^3) was used, resulting in elements. However, the SRM is highly sparse, significantly reducing the number of elements necessary for storage. A popular method to reduce the number of SRM elements to calculate and to reduce the SRM storage size is to exploit the scanner symmetries. The typical polygonal architecture in PET scanners allows the use of polar symmetries. For a Monte Carlo-based SRM this approach allows not only the reduction of the file size of the SRM, but also the simulation time. The level of reduction depends on the number of symmetries which can be exploited. The number of symmetries in a PET system is intrinsically linked to its geometry, specifically to the number of detector modules employed over the circumference. The high resolution small animal scanner MADPET-II contains 18 block detectors. Using 18 rotational symmetries and 2 reflection symmetries transaxially and 2 reflection symmetries axially, up to 72 symmetries can be exploited. Therefore, a factor of 72 reduction in the simulation time and in the SRM file size is achieved. More details about this implementation of symmetries in MADPET-II can be found in [16]. The SRM was calculated using GATE [27] in the local GRID facility which had 108 nodes (two Quad Core Xeon E5420 @ 2.50 Ghz machines) for sequential processing. The total number of simulated events in the SRM was , which represents events per voxel. The simulation of the events was split in 200 parallel jobs, where each parallel job took approximately 12 hours. The SRM was simulated using a back-to-back gamma source, thus half life, noncollinearity and positron range were ignored. A low energy threshold of 200keV was applied at singles level. Singles list mode data were processed after simulation to select coincidences using a coincidence timing window of 20ns. Accidental and multiple-scattered events are discarded in a postprocessing step. 2.4. 
Object Representation: Spherical Basis Functions An estimation of the continuous radiotracer distribution is represented in image space as a linear combination of image coefficients and basis functions, expressed as

$$\hat f(\mathbf{x}) = \sum_{j=1}^{N} c_j \, b(\mathbf{x}-\mathbf{x}_j), \qquad \mathbf{x}_j \in G,$$

where $\mathbf{x}$ represents the space coordinate, $\hat f$ the estimated distribution, $c_j$ the image coefficients, $b$ the basis functions, $N$ the total number of voxels, and $G$ the placement grid. Spherical basis functions provide better noise properties compared to cubic voxels. Spherical basis functions have compact support, that is, the function is zero beyond a given value (blob radius), but have a smoother behaviour than the traditional cubic voxels. The spherical basis functions used in this work are based on the Kaiser-Bessel function, described as

$$b_{m,a,\alpha}(r) = \frac{1}{I_m(\alpha)}\left(\sqrt{1-(r/a)^2}\right)^{m} I_m\!\left(\alpha\sqrt{1-(r/a)^2}\right), \qquad 0 \le r \le a \ \ (\text{and } 0 \text{ for } r>a),$$

where $m$ is the order of the modified Bessel function $I_m$, $a$ is the blob radius, and $\alpha$ is the taper parameter. A thorough investigation of these parameters is found in [28] in which the optimal values are obtained. The study, performed in frequency space, determines that the optimal value of $m$ is 2 (the first derivative is continuous at the blob boundary), the optimal radius is $a = 1.994\,\Delta$ ($\Delta$ being the distance between elements of the underlying grid), and the optimal $\alpha$ is 10.4. Figure 2 shows the wedge-like source simulated to calculate the SRM elements using symmetries, with an underlying polar grid and blobs placed over the grid using a body-centred strategy with the polar voxels used as reference. Using spherically symmetric basis functions and exploiting spherical symmetries, the final SRM was 5.3Gb in size and stored only a fraction of the possible $8740 \times 662976 \approx 5.8\times10^{9}$ elements (8740 blobs, 662976 LORs) as nonzero entries. If eight cubic symmetries were used (2 rotational + 2 reflectional transaxially + 2 reflectional axially), instead of spherical symmetries, the resulting SRM file size would be 47.2Gb with the same statistical quality as that used. 2.5. Description of Phantoms Three different phantoms were simulated for this study. All phantoms were simulated using a back-to-back gamma source in agreement with the simulation of the SRM. Therefore, only true coincidence events remained for image reconstruction. Simulated data is stored in LOR-histograms where each histogram bin corresponds to an LOR, and no preprocessing was applied to the data [9, 29]. The three phantoms used here are detailed below. (1) A homogeneously filled phantom with a hot and a cold rod insert has been simulated to study the image quality. The cylinder is 20mm long and 30mm radius, while the rod inserts are 20mm long and 10mm radius. The simulated activity of the phantom was 32MBq (0.86mCi) and 74MBq (2mCi) in the hot and warm regions, respectively (3:1 ratio). The experiment time simulated was 155 seconds, producing a total of ~4.5 ×10^7 coincidences. (2) To study the spatial resolution, an ellipsoidal phantom with six hot point sources and six cold spheres (1.5mm), placed radially along the ellipsoid and separated by 5mm on a warm background, was simulated. The ellipsoid was 35mm by 20mm transaxially and 2mm long. The phantom was placed 12.5mm off-centre covering more than one half of the FOV. The activity simulated in the point sources was 3.7MBq (0.1mCi) and 207.2MBq (5.6mCi) in the background (200:1 ratio), producing a total of ~9.6 ×10^6 coincidences. The background was used in order to mitigate the resolution enhancement caused by the nonnegativity constraint of ML-EM [5, 30]. (3) The digital mouse phantom MOBY [31] was simulated with a total activity of 0.2mCi.
Five bed positions were necessary to acquire the whole mouse with six overlapping slices (3mm) between bed positions, with a time scan of four minutes per bed position. The activity simulated in each organ is the default relative activity set by the MOBY phantom files. It was not a purpose of this study to simulate realistic activity concentrations in each organ. 2.6. Quality Assessment: Figures of Merit The noise performance obtained with each of the implementations explored was measured in the image quality phantom using three figures of merit, the coefficient of variation (CV), the contrast to noise ratio (CNR), and the correlation coefficient (CC). The spatial resolution was also studied by measuring the full width at half maximum (FWHM) across the point sources of the ellipsoidal phantom described in Section 2.5. These figures of merit were measured to compare the different implementations presented here, and not to assess the performance of spherical basis functions. The CV and CNR were measured over regions of interest (ROI) of size 8 × 9 × 0.5mm^3, placed far enough from the boundaries so that there were no edge effects. The CC was measured over the entire image volumes. In all cases only one realization of the phantom was used. The CV is commonly used as a normalized measurement of noise in a given ROI and is described as where is the mean value and is the standard deviation measured in the ROI. The CNR is a measure of noise performance between two ROIs [32] given by where is the mean value and is the standard deviation measured in the background. The CC between two images is a statistical similarity measure defined as where is the intensity value at voxel in image , is the mean value of image , and similarly for image . A CC value of 1 represents two perfectly correlated images, while a value of −1 represents two completely uncorrelated images. The spatial resolution was measured as the FWHM taken from a profile drawn across the hot point sources embedded in the ellipsoidal phantom. 3. Hardware Implementations 3.1. GPU Implementation The SRM was precomputed and stored in sparse format. One of the main challenges addressed in this work was the difficulty to cope with the amount of information needed by a single thread to perform both forward and backward projection, given that the SRM was too large to be stored in local memory, registers or shared memory. While a memory access to a register or shared memory takes only one clock cycle, access to global memory takes 400–600 clock cycles and access to texture memory takes 1–100 clock cycles (depending on cache locality). The floating point values used in this work were distributed as follows: given the significant size of the SRM (5.3Gb) this was stored in global memory, while the arrays stored in texture memory were the ratio between measurements and projected image estimate (2.52Mb), a look-up table used to unfold the symmetries (2.40Mb) and the image estimate used in the forward projection (2.40Mb). The number of blocks in the GPU grid was optimized to achieve the shortest reconstruction time per iteration, being 32 blocks obtained empirically. The SRM was ordered by consecutive LOR indices in sparse format. Therefore, memory access to the SRM elements in the forward projection operation was consecutive, as opposed to the backward projection, where access to the SRM elements was highly irregular. 
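As a point of reference for the ordering discussion above, one ML-EM iteration with a precomputed sparse SRM can be written very compactly. This is an illustrative CPU-side sketch in Python/SciPy, not the authors' code, and it omits the symmetry unfolding and all GPU-specific details:

```python
import numpy as np
from scipy import sparse

def mlem_iteration(A, y, c, eps=1e-12):
    """One ML-EM update.
    A : sparse SRM of shape (n_lors, n_blobs); A[i, j] ~ p(detected in LOR i | decay in blob j)
    y : measured counts per LOR, shape (n_lors,)
    c : current blob coefficients, shape (n_blobs,)
    """
    proj = A @ c                                   # forward projection: per-LOR row gather
    ratio = np.where(proj > eps, y / np.maximum(proj, eps), 0.0)
    backproj = A.T @ ratio                         # backward projection: scatter into blobs
    sens = np.asarray(A.sum(axis=0)).ravel()       # sensitivity image (column sums of the SRM)
    return np.where(sens > eps, c * backproj / np.maximum(sens, eps), 0.0)
```

The forward step reads each LOR's row of the LOR-ordered SRM contiguously (a gather), whereas the `A.T @ ratio` step accumulates contributions from many LORs into each blob coefficient (a scatter); this asymmetry is exactly what motivates the transposed-SRM, atomic-write, and LOR-reordering strategies described next.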
The SRM organization implies that the forward projection was a gathering operation, while the backward projection was a scattering operation, which is slower. During the forward projection, each thread reads from a number of blob coordinates and calculates the projection of one LOR, hence writing in different memory positions. However, during the backward projection, each thread reads from a number of measurements, back-projects the current estimate to image space, and writes in the corresponding blob coordinate. Given that threads are organized by LORs, different threads can write in the same memory position. This represents what is known as a race condition. To avoid such problem three different approaches have been implemented as follows. An SRM ordered by consecutive LOR indices (detection) was used in the forward projection, while an SRM ordered by consecutive blob indices (image) was used in the backward projection. This second SRM represents the transpose of the initial SRM although stored in a different file due to the use of sparse format. Using this approach the LOR-ordered SRM is loaded for the forward projection and subsequently unloaded. The blobs-ordered SRM is then loaded for the backward projection. The process was repeated for each iteration. The advantages are that the access to SRM elements was in consecutive memory positions for both operations and that there was no overwrite hazards. The disadvantage of this implementation is the time required for each iteration to load and unload each SRM file in the GPU. To refer to this implementation the term SRM[reload] was used.An SRM ordered by consecutive LOR indices was used in both, the forward and backward projections. This approach represented an overwrite hazard during the backward projection when LORs intersecting the same voxels were processed by threads in parallel. This situation is likely to happen in those areas where most LORs intersect, that is, the centre of the FOV. Atomic writing prevents this situation from occurring, at the cost of longer computation time. To refer to this implementation the term SRM [atomic] was used.Similarly to the implementation above, an SRM ordered by consecutive LOR indices was used in both, the forward and backward projections. To mitigate the speed problem caused by atomic operations, a combination of two strategies was followed:the LORs were sent to the threads using a random ordering, hence not following any spatial correlation. By introducing spatial randomness in the execution of LORs in parallel, the probability of processing intersecting LORs simultaneously decreased drastically. However, writing in the same memory positions still happened in a region at the centre of the FOV;atomic writing was exclusively used for those voxels located in the two central slices. Otherwise a nonatomic operation was performed. To refer to this implementation the term SRM[rand-at] was used. 3.2. Implementation Using the MPI Libraries This approach used distributed memory, hence a portion of the SRM is sent to each core to perform the forward and backward projections, respectively. The SRM ordered by LOR indices was used in the forward projection, while a blobs-ordered SRM is used for the backward projection, similar to the GPU implementation using SRM[reload]. Subsequently, the SRM was loaded and unloaded each time a forward and a backward operation was performed, adding a computational overhead. 4. Results and Discussion 4.1. 
Timing Performance The reconstruction time obtained in this work using GPU technology and spherical basis functions is comparable to the reconstruction time obtained using polar or cubic basis functions on a CPU [33]. However, the time performance is not as high as that published in other works [22, 25] due to the nature of this approach, that is, the use of a large Monte Carlo precomputed SRM. This implies that the number of global memory accesses by each thread in a forward/backward projection corresponds to the number of nonzero elements of each LOR/blob multiplied by the number of symmetries. This clearly represents the main bottle-neck in this approach. Table 1 shows a comparison between the time performance measured using a single Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz, that is, nonparallelized (np), with the time performance measured using the CPU-multicore implementation for 1 (MPI-1), 2 (MPI-2), 4 (MPI-4), and 8 (MPI-8) cores, and finally the time performance measured with the GPU, using the different implementations described above (SRM ignoring the overwrite hazard, SRM[reload], SRM[atomic], and SRM[rand-at]). The results shown in Table 1 were obtained using the image quality phantom described in Section 2.5. Nevertheless, small variations were observed between different phantoms. These reconstruction times represent average times measured after several iterations. The CPU-single-core implementation has been taken as reference to calculate the improvement factors of the parallelized implementations. Table 1 shows that the GPU-based implementation where the overwrite hazard is ignored is the fastest implementation because atomic writing is not used, at the cost of producing unacceptable artefacts in the final reconstructed images (see Figure 3(c)). Using SRM[reload], memory access to SRM elements is performed consecutively, both in the forward and backward projection. In every other implementation, consecutive memory access is performed only in the forward projection but not in the backward projection, hence decreasing the time performance. Using the implementation with SRM[reload] the forward projection takes 97s and 58s to load the SRM, while the backward projection requires only 87s and 57s for loading. This represents an iteration time of 184s if we consider only processing time. However, an extra 115s is taken to load the SRM in each operation. Strict reconstruction time was very consistent for all the iterations. However, SRM loading/unloading time varied slightly for each iteration. The atomic operation clearly increases the backward projection time. However, by using the atomic operation only for those critical voxels where the probability of over-writing is high, the backward projection time of implementation using SRM[rand-at] is reduced to 132s, close to the backward projection time measured in the implementation where the overwrite hazard is ignored (121s). Moreover, the artefacts obtained in the reconstructed images when the overwrite hazard is ignored are removed using SRM[rand-at] (Figure 3(d)). 4.2. Quantitative Assessment and Image Quality When enhancing speed of an image reconstruction algorithm, it is of critical importance to produce the same image for each implementation. 
To demonstrate that the implementations detailed in this work do not have an impact on image quality, the phantoms described in Section 2.5 have been reconstructed using the CPU-single-core implementation, the CPU-multicore implementation using eight cores, and the GPU-based implementation using SRM[rand-at] listed in Table 1. 4.2.1. Noise Assessment The impact on the noise performance has been assessed using the image quality phantom described in Section 2.5, which has been reconstructed after 300 iterations using four of the implementations studied (Figure 3): the CPU-single-core, the CPU-multicore, and two of the GPU-based implementations, without atomic operation and with atomic operation used only in the two central slices (SRM [rand-at]). Special mention is required for the GPU-based implementation without atomic writing (Figure 3(c)) where significant artefacts are observed, mainly in the centre of the FOV. As explained above, these artefacts are due to parallel threads overwriting in the same memory positions during the backward projection, due to the high overlapping between LORs in the centre of the FOV. The artefacts are removed by performing an atomic operation (Figure 3(d)). The images obtained with SRM[reload] and SRM[atomic] (not shown in this work) are practically identical to the one obtained with SRM[rand-at]. A profile across the four reconstructed phantoms is shown in Figure 3(e), demonstrating great resemblance between the images obtained with the CPU-single-core, the CPU-multicore, and the GPU-based implementation with atomic writing, while the artefact observed in Figure 3(c) is clearly visible in Figure 3(e) in the black profile. For quantitative assessment, the CV (Figure 4) was measured in the hot, warm, and cold ROIs for the three different implementations, every 10 iterations for 300 iterations. The CV at iteration 300 is 0.15 and 0.28 for the hot and warm ROIs, respectively, for all three implementations, while the CV in the cold ROI for the CPU-based implementations is 0.72 and for the GPU-based implementation 0.73. In all cases the CV follows an increasing trend due to the known noise increase as more ML-EM iterations are calculated. Differences between the CPU-based implementations (single-core and multicore) are below 0.08 (Figure 4(b)), while higher differences were observed between the CPU-single-core and GPU-based implementations, where differences between −0.37 (hot ROI) and 0.81 (cold ROI) were measured. These differences are due to the different floating point precisions available in the CPU and the GPU. While the precision had little effect on individual SRM elements, the cumulative effect produced differences in the reconstructed images. However, these differences are expected to be dominated by statistical errors in the data. Figure 5 shows the evolution of the CNR measured between the hot and warm ROIs for 300 iterations at every 10 iteration. The differences between the CPU-based and the GPU-based implementations are shown in Figure 5(b). Similarly to the CV study, differences between the CPU-based implementations (single-core and multicore) are below 0.02, while small differences are observed between the CPU-single-core and the GPU implementations. The maximum difference is 2.5 and stabilizes after 200 iterations. However, from Figure 3 the images are visually indistinguishable. 
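For reference, the three figures of merit tracked above have the standard forms matching the verbal definitions in Section 2.6:

$$\mathrm{CV}=\frac{\sigma_{\mathrm{ROI}}}{\mu_{\mathrm{ROI}}},\qquad \mathrm{CNR}=\frac{\mu_{\mathrm{ROI}}-\mu_{\mathrm{bkg}}}{\sigma_{\mathrm{bkg}}},\qquad \mathrm{CC}=\frac{\sum_i (a_i-\bar a)(b_i-\bar b)}{\sqrt{\sum_i (a_i-\bar a)^2 \sum_i (b_i-\bar b)^2}},$$

where $\mu$ and $\sigma$ are the mean and standard deviation in the region of interest or background, and $a_i$, $b_i$ are the voxel values of the two image volumes being compared.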
Finally, the CC (Figure 6) was measured between the entire volumes of the resulting reconstructed image quality phantoms obtained with the CPU-based implementation, the MPI-based implementation, and the GPU-based implementation. Similarly to the CV and the CNR studies, the CC measured between the CPU-based and the MPI-based implementations shows perfect correlation, while the comparison between the GPU-based implementation with the CPU-based and the MPI-based implementations show high correlation initially, but the trend is to slightly decrease at later iterations. However, the CC at 300 iterations is over 98.5% so that the difference can be considered negligible. To confirm that the trend was not exacerbated as more iterations are computed, the CC between the MPI-based implementation and the GPU-based implementation was computed over 600 iterations, showing a CC of 94% at 600 iterations and a slight trend correction. 4.2.2. Spatial Resolution Assessment Figure 7 shows the spatial resolution phantom reconstructed after 300 iterations using the CPU-single-core, the CPU-multicore, and the GPU-based implementations (using SRM[rand-at]). The three reconstructed phantoms show great resemblance by visual inspection. The profile drawn across the point sources in the three reconstructed phantoms, shown in Figure 8, confirms the conclusions made in the noise study. The CPU-single-core and the CPU-multicore implementations produce identical results while the images reconstructed using the GPU implementation are slightly different. This is further confirmed by Table 2, where the FWHM (mm) measured from each point source and each profile is shown (Ps 1 corresponds to the point source located closer to the centre of the FOV while Ps 6 corresponds to the point source located closer to the edge of the FOV). The spatial resolution decreases as the point source is located farther from the centre of the FOV due to the parallax effect. Similarly to the noise assessment study, the FWHM was measured over 300 iterations at steps of 10 iterations (Figure 9(a)), showing small differences measured between the CPU-based and the GPU-based implementations below 0.4 at 300 iterations (Figure 9(b)). 4.3. Qualitative Assessment For qualitative assessment, the MOBY phantom [31] has been reconstructed using blobs in the GPU and using the CPU with the MPI libraries. For comparison purposes, the phantom was also reconstructed using traditional cubic voxels. However, it is important to highlight that the focus of this work is not to compare these two basis functions. Spherical basis functions provide better noise performance than cubic voxels, so in order to perform a fair comparison, the image reconstructed using cubic voxels was filtered to match the noise performance of that achieved using spherical basis functions [33], which produces a visible image detail degradation. Figure 10(a) shows the ideal MOBY phantom. Figure 10(b) shows the reconstructed phantom obtained using cubic voxels with a postreconstruction Gaussian filter of = 0.5mm for comparison purposes. A = 0.5mm for the Gaussian filter has been applied to match the noise levels obtained with cubic voxels and blobs. Figure 10(c) shows the phantom reconstructed with spherically symmetric basis functions using the GPU implementation and Figure 10(d) shows the phantom reconstructed with spherically symmetric basis functions using the CPU-multicore implementation. 
300 iterations were used to reconstruct each bed position in the three reconstructed MOBY phantoms presented here. It can be noticed that the thyroids and brain are more visible, and boundaries better delineated using spherical basis functions, compared to filtered cubic voxels (Figure 10(b)), as shown in Figures 10(c) and 10(d). The reconstruction time necessary using cubic voxels in a single core in the CPU (Figure 10(b)) was ~110 hours, the CPU-multicore implementation using spherical basis functions (Figure 10(d)) was ~ 220 hours, and the GPU-based implementation using spherical basis functions (Figure 10(c)) was ~88 hours. If the same phantom was reconstructed using spherical basis functions on a single core, the necessary reconstruction time would be ~380 hours. From Figure 10 it can be seen that the combination of blob-based reconstruction and a precomputed Monte Carlo SRM using GPU technology is a feasible alternative, not only for simple phantom geometries as those shown in Section 4.2, but also for multistage and highly realistic phantoms as the MOBY phantom. 5. Conclusions Increasing granularity in PET scanners provides improved spatial resolution while increasing the number of detector elements at the same granularity improves sensitivity. However, increased resolution means that the calculation of the SRM is an extremely cumbersome task, particularly for simulation-based SRM calculation. Nevertheless, Monte Carlo-based system matrices for iterative statistical image reconstruction applied to emission tomography are growing in popularity due to their image quality advantages. The extended availability of affordable computing power means that significant efforts are being put into sophisticated improvements of the system response model. Overlapping spherically symmetrical basis functions have clear advantages over nonoverlapping (cubic or polar voxels) even at the cost of a significantly high number of nonzero elements in the SRM, resulting in large SRM file sizes and long reconstruction times. These problems can be partly overcome by exploiting cylindrical symmetries to reduce the simulation time, the number of nonzero SRM elements, and hence the file size necessary to store the SRM. The combination of spherically symmetric basis functions and cylindrical symmetries makes this approach feasible for use in a clinical or preclinical application. However, reconstruction time is then the main concern due to the still large number of nonzero SRM elements required to process the forward and backward projection. Ordered-Subsets- (OS-) EM represents a common approach to speed up the reconstruction process. While OS-EM can be implemented in GPU technology, it requires a device-dependent level of complexity, and its inclusion may reduce the generality of the study presented here. Moreover, subset choice interacts with both speed and image quality while hardware-based solutions decouple this relationship. While it is expected that the combination of OS-EM and GPU technology can effectively further reduce reconstruction times, this is beyond the scope of this investigation. This work presents a SRM-generic hardware implementation that achieved reconstructed images 4.3 times faster using GPU technology compared to an optimized CPU-single-core implementation and 2.5 times faster than a CPU-multicore (8) implementation. A CPU-multicore implementation decreased the reconstruction time by only 1.7 times compared to the single-core implementation. 
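As a quick consistency check, the whole-mouse timings quoted in Section 4.3 reproduce these factors: $380/88 \approx 4.3$ (blobs on the GPU versus a single core), $220/88 = 2.5$ (GPU versus eight cores), and $380/220 \approx 1.7$ (eight cores versus one).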
Differences in image performance were assessed from an image quality and a spatial resolution perspective. Negligible differences were demonstrated between CPU-based single-core and multicore implementations, and small differences between CPU-single-core and GPU-based implementations were observed. Differences below 1 for the CV, below 2.5 for the CNR, and below 0.4 for the spatial resolution were measured between the CPU-single-core implementation and the GPU-based implementation. This work was supported by the Spanish Ministerio de Ciencia e Innovación/Plan Nacional de I + D + i Grants TEC2007-61047 and FPA2010-14891. J. E. Gillam is supported by the Spanish Ministerio de Ciencia e Innovación through Juan de la Cierva contract. The authors also acknowledge the support of the Nvidia Professor Partnership Program. 1. E. Ü. Mumcuoǧlu, R. M. Leahy, and S. R. Cherry, “Bayesian reconstruction of PET images: methodology and performance analysis,” Physics in Medicine and Biology, vol. 41, no. 9, pp. 1777–1807, 1996. View at Publisher · View at Google Scholar · View at Scopus 2. J. Qi and R. M. Leahy, “Iterative reconstruction techniques in emission computed tomography,” Physics in Medicine and Biology, vol. 51, no. 15, pp. R541–R578, 2006. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus 3. R. Lecomte, “Technology challenges in small animal PET imaging,” Nuclear Instruments and Methods in Physics Research A, vol. 527, no. 1-2, pp. 157–165, 2004. View at Publisher · View at Google Scholar · View at Scopus 4. V. Y. Panin, F. Kehren, C. Michel, and M. Casey, “Fully 3-D PET reconstruction with system matrix derived from point source measurements,” IEEE Transactions on Medical Imaging, vol. 25, no. 7, Article ID 1644806, pp. 907–921, 2006. View at Publisher · View at Google Scholar · View at Scopus 5. A. M. Alessio, C. W. Stearns, S. Tong et al., “Application and evaluation of a measured spatially variant system model for PET image reconstruction,” IEEE Transactions on Medical Imaging, vol. 29, no. 3, pp. 938–949, 2010. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus 6. F. A. Kotasidis, J. C. Matthews, G. I. Angelis et al., “Single scan parameterization of space-variant point spread functions in image space via a printed array: the impact for two PET/CT scanners,” Physics in Medicine and Biology, vol. 56, no. 10, pp. 2917–2942, 2011. View at Publisher · View at Google Scholar · View at PubMed 7. S. Moehrs, M. Defrise, N. Belcari et al., “Multi-ray-based system matrix generation for 3D PET reconstruction,” Physics in Medicine and Biology, vol. 53, no. 23, pp. 6925–6945, 2008. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus 8. P. Aguiar, M. Rafecas, J. E. Ortuo et al., “Geometrical and Monte Carlo projectors in 3D PET reconstruction,” Medical Physics, vol. 37, no. 11, pp. 5691–5702, 2010. View at Publisher · View at Google Scholar · View at Scopus 9. M. Rafecas, B. Mosler, M. Dietz et al., “Use of a monte carlo-based probability matrix for 3-D iterative reconstruction of MADPET-II data,” IEEE Transactions on Nuclear Science, vol. 51, no. 5, pp. 2597–2605, 2004. View at Publisher · View at Google Scholar · View at Scopus 10. J. L. Herraiz, S. España, J. J. Vaquero, M. Desco, and J. M. Udías, “FIRST: Fast Iterative Reconstruction Software for (PET) tomography,” Physics in Medicine and Biology, vol. 51, no. 18, pp. 4547–4565, 2006. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus 11. L. 
Zhang, S. Staelens, R. Van Holen et al., “Fast and memory-efficient Monte Carlo-based image reconstruction for whole-body PET,” Medical Physics, vol. 37, no. 7, pp. 3667–3676, 2010. View at Publisher · View at Google Scholar · View at Scopus 12. C. Mora and M. Rafecas, “Polar pixels for high resolution small animal PET,” in IEEE Nuclear Science Symposium Conference Record, vol. 5, pp. 2812–2817, October 2007. View at Publisher · View at Google Scholar 13. R. Ansorge, “List mode 3D PET reconstruction using an exact system matrix and polar voxels,” in IEEE Nuclear Science Symposium and Medical Imaging Conference, vol. 5, pp. 3454–3457, October 2007. View at Publisher · View at Google Scholar 14. J. J. Scheins, H. Herzog, and N. J. Shah, “Fully-3D PET image reconstruction using scanner-independent, adaptive projection data and highly rotation-symmetric voxel assemblies,” IEEE Transactions on Medical Imaging, vol. 30, no. 3, pp. 879–892, 2011. View at Publisher · View at Google Scholar · View at PubMed 15. J. Cabello and M. Rafecas, “Comparison of basis functions for 3D PET reconstruction using a Monte Carlo system matrix,” Physics in Medicine and Biology, vol. 57, no. 7, pp. 1759–1777, 2012. 16. J. Cabello, J. F. Oliver, I. Torres-Espallardo, and M. Rafecas, “Polar voxelization schemes combined with a Monte-Carlo based system matrix for image reconstruction in high resolution PET,” in IEEE Nuclear Science Symposium, Medical Imaging Conference, pp. 3256–3261, October 2010. View at Publisher · View at Google Scholar 17. R. M. Lewitt, “Multidimensional digital image representations using generalized Kaiser-Bessel window functions,” Journal of the Optical Society of America A, vol. 7, no. 10, pp. 1834–1846, 1990. View at Scopus 18. M. E. Daube-Witherspoon, S. Matej, J. S. Karp, and R. M. Lewitt, “Application of the row action maximum likelihood algorithm with spherical basis functions to clinical PET imaging,” IEEE Transactions on Nuclear Science, vol. 48, no. 1, pp. 24–30, 2001. View at Publisher · View at Google Scholar · View at Scopus 19. A. Yendiki and J. A. Fessler, “A comparison of rotation- and blob-based system models for 3D SPECT with depth-dependent detector response,” Physics in Medicine and Biology, vol. 49, no. 11, pp. 2157–2165, 2004. View at Publisher · View at Google Scholar · View at Scopus 20. A. Andreyev, M. Defrise, and C. Vanhove, “Pinhole SPECT reconstruction using blobs and resolution recovery,” IEEE Transactions on Nuclear Science, vol. 53, no. 5, Article ID 1710261, pp. 2719–2728, 2006. View at Publisher · View at Google Scholar · View at Scopus 21. A. Andreyev, A. Sitek, and A. Celler, “Acceleration of blob-based iterative reconstruction algorithm using tesla GPU,” in IEEE Nuclear Science Symposium Conference Record (NSS/MIC '09), pp. 4095–4098, October 2009. View at Publisher · View at Google Scholar · View at Scopus 22. J. L. Herraiz, S. España, R. Cabido et al., “GPU-based fast iterative reconstruction of fully 3-D PET Sinograms,” IEEE Transactions on Nuclear Science, vol. 58, no. 5, pp. 2257–2263, 2011. View at Publisher · View at Google Scholar 23. J. Zhou and J. Qi, “Fast and efficient fully 3D PET image reconstruction using sparse system matrix factorization with GPU acceleration,” Physics in Medicine and Biology, vol. 56, no. 20, pp. 6739–6757, 2011. View at Publisher · View at Google Scholar · View at PubMed 24. J.-Y. Cui, G. Pratx, S. Prevrhal, and C. S. 
Levin, “Fully 3D list-mode time-of-flight PET image reconstruction on GPUs using CUDA,” Medical Physics, vol. 38, no. 12, pp. 6775–6786, 2011. View at Publisher · View at Google Scholar · View at PubMed 25. G. Pratx and C. Levin, “Online detector response calculations for high-resolution PET image reconstruction,” Physics in Medicine and Biology, vol. 56, no. 13, pp. 4023–4040, 2011. View at Publisher · View at Google Scholar · View at PubMed 26. D. P. McElroy, W. Pimpl, B. J. Pichler, M. Rafecas, T. Schüler, and S. I. Ziegler, “Characterization and readout of MADPET-II detector modules: validation of a unique design concept for high resolution small animal PET,” IEEE Transactions on Nuclear Science, vol. 52, no. 1, pp. 199–204, 2005. View at Publisher · View at Google Scholar · View at Scopus 27. S. Jan, G. Santin, D. Strul et al., “GATE: a simulation toolkit for PET and SPECT,” Physics in Medicine and Biology, vol. 49, no. 19, pp. 4543–4561, 2004. View at Publisher · View at Google Scholar · View at Scopus 28. S. Matej and R. M. Lewitt, “Practical considerations for 3-D image reconstruction using spherically symmetric volume elements,” IEEE Transactions on Medical Imaging, vol. 15, no. 1, pp. 68–78, 1996. View at Scopus 29. D. J. Kadrmas, “LOR-OSEM: statistical PET reconstruction from raw line-of-response histograms,” Physics in Medicine and Biology, vol. 49, no. 20, pp. 4731–4744, 2004. View at Publisher · View at Google Scholar · View at Scopus 30. M. S. Tohme and J. Qi, “Iterative reconstruction of Fourier-rebinned PET data using sinogram blurring function estimated from point source scans,” Medical Physics, vol. 37, no. 10, pp. 5530–5540, 2010. View at Publisher · View at Google Scholar · View at Scopus 31. W. P. Segars, B. M. W. Tsui, E. C. Frey, G. A. Johnson, and S. S. Berr, “Development of a 4-digital mouse phantom for molecular imaging research,” Molecular Imaging and Biology, vol. 6, no. 3, pp. 149–159, 2004. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus 32. X. Song, B. W. Pogue, S. Jiang et al., “Automated region detection based on the contrast-to-noise ratio in near-infrared tomography,” Applied Optics, vol. 43, no. 5, pp. 1053–1062, 2004. View at 33. J. Cabello, J. F. Oliver, and M. Rafecas, “Using spherical basis functions on a polar grid for iterative image reconstruction in small animal PET,” in Medical Imaging, vol. 7961 of Proceedings of SPIE, February 2011. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/ijbi/2012/452910/","timestamp":"2014-04-18T18:43:23Z","content_type":null,"content_length":"152887","record_id":"<urn:uuid:6561cbca-a5cf-4be1-86b4-ee52812508a7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Glendora, CA Math Tutor Find a Glendora, CA Math Tutor ...While maintaining a 3.8 GPA at UC Berkeley, Sharon worked as a research associate in the nutritional genomics department and taught standardized test prep to local high school students. She was voted "Favorite Teacher" two semesters in a row. Combining her passion in science, teaching and resea... 19 Subjects: including geometry, ACT Math, SAT math, trigonometry ...I continued private tutoring during my time at Occidental College, a private liberal arts school in Los Angeles. Over the course of my time at Occidental, I tutored two students in calculus AB and one student in algebra. I also have 3.5 years of experience working directly with students in the ... 16 Subjects: including algebra 2, European history, geometry, precalculus ...I mainly specialize in tutoring English including grammar, literature, reading, and spelling. However, I am able to tutor some math as well, if it is prealgebra or algebra 1. My goal is to make sure that the student goes home after every lesson fully understanding what they just learned and not just copying my examples. 5 Subjects: including algebra 1, reading, grammar, vocabulary I'm a graduate of Stanford University's Graduate School of Education. I've attended some of the best colleges in the country, and understand that success is about working HARD and working SMART. If you are looking for a tutor who understands the principles of high achievement and the patience to mentor students to better academic success and higher test scores, please contact me. 21 Subjects: including trigonometry, algebra 1, algebra 2, GRE ...When we came back to the USA I had to learn English. I speak SAE (Standard American English - Media, non-ascent). I had to learn phonics so I understand how to teach it. I was a single mom raising my daughter & going to school online at night...study skills are a must! 23 Subjects: including algebra 1, prealgebra, reading, English Related Glendora, CA Tutors Glendora, CA Accounting Tutors Glendora, CA ACT Tutors Glendora, CA Algebra Tutors Glendora, CA Algebra 2 Tutors Glendora, CA Calculus Tutors Glendora, CA Geometry Tutors Glendora, CA Math Tutors Glendora, CA Prealgebra Tutors Glendora, CA Precalculus Tutors Glendora, CA SAT Tutors Glendora, CA SAT Math Tutors Glendora, CA Science Tutors Glendora, CA Statistics Tutors Glendora, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/glendora_ca_math_tutors.php","timestamp":"2014-04-20T07:11:51Z","content_type":null,"content_length":"23860","record_id":"<urn:uuid:e840b79f-f13a-4cca-88b9-68d4b04d7c0c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Molecular field theory for biaxial smectic A liquid crystals
Seminar Room 1, Newton Institute

Stable biaxial nematics (Nb) have been reported in a few experimental systems and the phases are often difficult to prove conclusively; however, stable biaxial smectic A phases (SmAb) have been found in a larger number of systems in which the evidence is conclusive. To understand the stability difference between Nb and SmAb, we use a molecular field theory that combines Straley's theory [1] for biaxial nematics and McMillan's theory [2] for uniaxial smectic A phases. To simplify the calculation, we use alternatively the geometric mean [3] and the Sonnet-Virga-Durand [4] approximation to reduce the number of biaxiality parameters to one; in addition, we use the Kventsel-Luckhurst-Zewdie [5] approximation to decouple the orientational and translational distribution functions. Thus our simple theory has one biaxiality parameter and one smecticity parameter, together with three order parameters. The resulting phase diagrams showed that, for a large region of the parameter space, the presence of the smectic A phases prevented Nb from forming. On the other hand, SmAb is always stable at the ground state for a positive smecticity parameter. This may explain why SmAb has been found more often than Nb.

[1] J. P. Straley, Phys. Rev. A 10, 1881 (1974).
[2] W. L. McMillan, Phys. Rev. A 4, 1238 (1971).
[3] G. R. Luckhurst, C. Zannoni, P. L. Nordio, and U. Segre, Mol. Phys. 30, 1345 (1975).
[4] A. Sonnet, E. G. Virga, and G. E. Durand, Phys. Rev. E 67, 061701 (2003).
[5] G. F. Kventsel, G. R. Luckhurst, and H. B. Zewdie, Mol. Phys. 56, 589 (1985).
{"url":"http://www.newton.ac.uk/programmes/MLC/seminars/2013052210001.html","timestamp":"2014-04-18T14:06:51Z","content_type":null,"content_length":"6824","record_id":"<urn:uuid:4b19c17c-47ab-481e-8d53-546d17945b3a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Employment Test Samples | Employment Test Samples | Solution Key Math Employment Test Samples Math employment test samples are given here so that people get an idea of the questions asked of an applicant. A math employment test plays an important role in the hiring process: it helps determine whether the applicant has the essential skills and knowledge required for the job, since employers select employees partly on the basis of the test scores. Now, let's work step by step through some of the problems on these math employment test samples. From these samples, we can learn a lot about what a math employment test involves. Math-Only-Math covers the roots of mathematics, with questions ranging from easy to difficult, each explained step by step. If you work through these math employment test samples you should not need any other help; you can improve your knowledge by practicing the solutions step by step and working on the worksheets. ● Sample 1 Answers of Sample 1 ● Sample 2 Answers of Sample 2 ● Sample 3 Answers of Sample 3 ● Sample 4 Answers of Sample 4 ● Sample 5 Answers of Sample 5 ● Sample 6 Answers of Sample 6 ● Sample 7 Answers of Sample 7 ● Sample 8
{"url":"http://www.math-only-math.com/Math-Employment-Test-Samples.html","timestamp":"2014-04-21T07:03:40Z","content_type":null,"content_length":"22260","record_id":"<urn:uuid:7af93536-980b-4cc0-af1a-a45806a048e3>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: Mata question [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] RE: st: Mata question From "Ben Jann" <ben.jann@soz.gess.ethz.ch> To <statalist@hsphsun2.harvard.edu> Subject RE: st: Mata question Date Thu, 20 Apr 2006 18:38:03 +0200 Thanks Bill. Very helpful. > -----Original Message----- > From: owner-statalist@hsphsun2.harvard.edu [mailto:owner- > statalist@hsphsun2.harvard.edu] On Behalf Of William Gould, Stata > Sent: Thursday, April 20, 2006 5:10 PM > To: statalist@hsphsun2.harvard.edu > Subject: Re: st: Mata question > I just answered the question > > Is there a Mata equivalent to Stata's -capture- statement? > from Daniel Hoechle <daniel.hoechle@gmail.com>, and I now see that > Benn Jann <ben.jann@soz.gess.ethz.ch> chimed in, > > I would be interested in that, too. > In case you haven't noticed, Ben is pretty proficient with Mata and is > busy writing functions for all of us to use. So I would like to write > second answer aimed at function writers on how functions should > This will be of interest too to function consumers, because it will > reveal what one can expect when approaching a new function for the > time. > Let's say you are writing function xyz(). The function has a set of > requirements, let's call the set R, that must be met in order to > its actions. R might be that the input matrix is square and full > or that the file exist, etc. > There are three possible actions a function can take when a > is not met, > A1. Abort with error. > A2. Return a missing result with the appropriate number of rows > columns. > A3. Return a special value that indicates problems. > The purpose of this posting is to outline when the function should do > which. > Divide the requirements R into two subsets, R_1 and R_2. R_1 is the > subset > of requirements of that it is easy for the user to verify. R_2 are > remaining, the subset difficult to establish before calling. > The following are the guidelines we try to follow: > G1. For all elements in R_1, action A1 is appropriate. > G2. For all elements in R_2, actions A2 or A3 are appropriate. > G3. For an element in R_2, action A1 is allowed, but then > there should be a corresponding _xyz() function that takes > action > A2 or A3. > G4. For numerical functions, action A3 is to be avoided whenever > possible. Action A2 is preferred. > G5. Action A3 is appropriate only for nonnumerical functions, > ... > G1. For all elements in R_1, action A_1 is appropriate > ------------------------------------------------------- > xyz() might require that input matrix A be square. If the user wants > to be robust to nonsquare matrices, it is easy enough to code > if (rows(A)==cols(A) result = xyz(A) > else { > // do something else > } > G2. For all elements in R_2, actions A2 or A3 are appropriate > -------------------------------------------------------------- > xyz() might require input matrix A be positive definite. It is > for > the caller to know whether A really is positive definite, and > the > xyz() must take some action other than A1 in the non positive definite > case. > xyz() might require that input matrix A be full rank. It is easy > for the user to check that, > if (rank(A)!= rows(A)) ... > but look carefully at the documentation of rank(). Function rank() > considerable calculation in order to obtain its result. Thus, full > is considered R_2, not R_1. 
> In most cases, that A is not full rank will be easily discovered in > code > of xyz() because there will be a division by zero, an unexpected > intermediate calculation, and the like. If, however, xyz() would > discover that A is not full rank in the natural order of things, it > becomes > even more important that xyz() check that A be of full rank lest xyz() > return misleading results. > There would be an exception to the above: xyz() will be used > and it is desirable that xyz() be fast. Moreover, xyz() is typically > used along with a suite of other functions, all of which also require > full rankedness. Hence, xyz() does not want to waste time checking > something that is likely to be true. In such cases, if xyz() would > return a misleading result with a non full-rank matrix, xyz() should > be renamed _xyz(), and the documentation should emphasize that it is > the caller's responsibility to check that the matrix is full rank. > There are lots of other examples having nothing to do with matrices > that fit into the above model, such as whether a file exists, a > variable exists, etc. > G3. For an elements in R_2, action A1 is allowed, but ... > --------------------------------------------------------- > It is often the case, especially in numerical subroutines, that the > caller desires action A1. In 99.9% of cases, the requirement (say > positive definiteness) will be met, and in the .1% of cases where it > isn't, the user never intended to write code to handle the case, > Crashing out is a fine solution. > In that case, there needs to be a companion function _xyz() that does > not take action A1. Programmers implementing complicated systems > need to be able to capture unlikely situations. > Think of guideline G3 as the escape clause for G2. G3 allows you to > ignore G2 and make an easy-to-use function xyz() for most callers. > G4. For numerical functions, action A3 is to be avoided whenever > possible. Action A2 is preferred. > ----------------------------------------------------------------- > In the case of numerical functions, A2 is the preferred action. > The returned result should contain missing values, it should be of > the appropriate numerical type, and it should be of the appropriate > dimension. > For instance, function xyz(A) might return A^(-1). It might require > that A be square and positive definite. Action A1 would be > for handling the square restriction. To handle the second > appropriate action would be to return an n x n matrix of missing > The reason for this is that the caller can then ignore such issues if > or > she wishes. Subsequent calculations will work because matrices will > be conformable, but the missing values will propagate, just as they > should. > G5. Action A3 is appropriate only for nonnumerical functions, but ... > ---------------------------------------------------------------------- > Try to avoid returning special values, especially when they are mixed > in with valid values. > Sometimes it is unavoidable. In such cases, the function name should > start > with an underscore. Function _fopen() returns a positive or negative > result. > A positive result is a file handle. A negative result is a problem > Missing value is never considered a "special value", and this > does > not apply to missing value. Returning a missing value when > are not met is desirable. > Concerning special values, when only special values are returned, > convention is that 0 indicate success. 
1 might indicate failure, or > different positive or negative values might be used to indicate the > type of failure. THIS IS DIFFERENT FROM THE CONVENTIONS USED IN > MANY OTHER PROGRAMMING LANGUAGES, where 0 is often used to indicate > failure. > -- Bill > wgould@stata.com > * > * For searches and help try: > * http://www.stata.com/support/faqs/res/findit.html > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
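To make the A1/A2 distinction above concrete, here is a minimal sketch in Python rather than Mata (the names inv_scale and _inv_scale are made up for illustration, not part of Stata or Mata); it mirrors the xyz()/_xyz() convention the email describes: the underscore variant returns a result of the right shape filled with missing values, while the plain variant aborts with an error.

```python
import math

def _inv_scale(values, divisor):
    """Permissive variant (analogue of _xyz()): never aborts on the
    hard-to-verify requirement 'divisor != 0'.  Following action A2,
    it returns a result of the right length filled with missing values
    (NaN) so problems propagate instead of crashing the caller."""
    if divisor == 0:
        return [math.nan] * len(values)
    return [v / divisor for v in values]

def inv_scale(values, divisor):
    """Convenient variant (analogue of xyz()): takes action A1 and
    aborts with an error when the requirement is not met, which is
    what most casual callers want."""
    if divisor == 0:
        raise ZeroDivisionError("divisor must be nonzero")
    return _inv_scale(values, divisor)

if __name__ == "__main__":
    print(inv_scale([2.0, 4.0], 2.0))    # [1.0, 2.0]
    print(_inv_scale([2.0, 4.0], 0.0))   # [nan, nan] -- missing values propagate
```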
{"url":"http://www.stata.com/statalist/archive/2006-04/msg00761.html","timestamp":"2014-04-16T10:23:01Z","content_type":null,"content_length":"13650","record_id":"<urn:uuid:0615a706-7a45-4f74-8c7b-37df46865219>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Vacuous Cases, Empty Sets, and Empty Functions Date: 04/10/2004 at 08:38:12 From: Dua Subject: vacuous case, empty set, empty function I am having difficulty understanding 'vacuous' situations as in, if A is an empty set and B is a non-empty set then (i) there is one function f: A \to B namely the empty function but (ii) there is no function f: B \to A. An empty set is a set with no element but what is an empty function? There is a function from an empty set to a non-empty set (how) but not I am used to the case A and B are non-empty so A x B does not go against (my) intuition. In the case A is empty and B is non-empty, A x B is non-empty but B x A is empty? Date: 04/12/2004 at 08:16:36 From: Doctor Jacques Subject: Re: vacuous case, empty set, empty function Hi Dua, There are two ideas involved here: * Logical statements about the empty set * The definition of a function Let us first consider a statement about the elements of a set A. Assume S(x) is a statement about the object x (a logical proposition): depending on the particular object x, S(x) is either true or false. We can make a statement S(A) about the set A, by asserting that S(x) is true for every element of A : S(A) ::= "For all x in A, S(x) is true". For example, assume that x represents a ball, and S(x) is the statement "the ball x is red". Now, if A is a bag of balls, S(A) would mean: "For all balls x in the bag A, the ball x is red" or, more simply said: "All the balls in A are red" The question is now, what does this mean if A is empty? S(A) can only be false if you can find in A a ball that is not red. If A is empty, this is impossible, so S(A) cannot be false, and we conclude that S(A) is true--if the bag is empty, all the balls in it are red (although there are no balls at all). Note that it is also true that all the balls in the bag are black--there is no contradiction in this if the bag is empty. In a more abstract way, if S(A) is a statement of the form: For all x in A, S(x) is true then, whenever A is empty, S(A) is true--this does not depend on the particular form of the statement S(x). We can also see it in another way--S(A) means that A is a subset of the set of objects such that S(x) is true. Now, the empty set is a subset of any set, so, if A is empty, A is indeed a subset of the set of objects that verify S(x), and S(A) is true. The second aspect is the definition of a function. A function f : A -> B is merely a _set_, namely a set of ordered pairs (a,b) with a in A and b in B, and such that every element of A appears in exactly one pair. Like any set, a function can be empty. If A is empty, there are no ordered pairs (a,b), since there are no elements a to pick from A. The only possible function is then the empty function--(the empty set). To see that it is indeed a function, let f be the empty set. We must verify that: "For all a in A, a appears in exactly one element of f" As this statement is of the form "for all a in A, (whatever)", it is always true if A is empty. Now, if A is not empty, and B is empty, we can choose an element a in A. If the set f is to be a function, it must contain exactly one element (a,b) for the given a and some b in B. As there are no b in B available, this is impossible. Note that, if both A and B are empty, the empty set is also a function from A to B. If A and B are finite sets, of m and n elements respectively, then each of the m elements of A must appear in one pair (a,b), and there are n elements of B to choose from. 
This means that there are a total of n^m possible functions from A to B. If A is empty and B is not, n^m = n^0 = 1--there is one function from A to B. If A is not empty and B is empty, n^m = 0^m = 0--there are no functions from A to B. This breaks down if both A and B are empty, because it depends on how you define 0^0--this is essentially a matter of convention. In this case, we must define 0^0 = 1, which is not the most commonly accepted convention. Does this help? Write back if you'd like to talk about this some more, or if you have any other questions. - Doctor Jacques, The Math Forum
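As a quick check of the counting argument above, the following Python sketch (added here as an illustration; the helper name functions() is made up) enumerates all functions between two small finite sets, including the empty-domain and empty-codomain cases.

```python
from itertools import product

def functions(A, B):
    """All functions f: A -> B, each represented as a dict {a: b}.
    When A is empty, product() yields exactly one empty assignment --
    the empty function; when B is empty but A is not, it yields
    nothing at all."""
    A, B = list(A), list(B)
    return [dict(zip(A, choice)) for choice in product(B, repeat=len(A))]

print(len(functions({1, 2, 3}, {'x', 'y'})))  # 2^3 = 8 functions
print(functions(set(), {'x', 'y'}))           # [{}]  -- one function, the empty one
print(functions({1, 2}, set()))               # []    -- no functions at all
```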
{"url":"http://mathforum.org/library/drmath/view/71760.html","timestamp":"2014-04-19T02:01:53Z","content_type":null,"content_length":"9258","record_id":"<urn:uuid:68766fb1-3f88-4567-9b1c-82f071b0032d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra Kenneth Hoffman Ray Kunze Sponsored High Speed Downloads LINEAR ALGEBRA KENNETH HOFFMAN Professor of Mathematics Massachusetts Institute of Technology RAY KUNZE Professor of Mathematics University of California, Irvine LINEAR ALGEBRA KENNETH HOFFMAN Professor of Mathematics Massachusetts Institute of Technology RAY KUNZE Professor of Mathematics University of California, Irvine Linear Algebra and Matrix Theory MAT 5283 Spring, 2010 ... Kenneth Hoffman and Ray Kunze, Linear Algebra, second edition, Prentice Hall, 1971. Paul Halmos, Finite-Dimensional Vector Spaces, second edition, Springer-Verlag, 1974. Evaluation LINEAR ALGEBRA Second Edition KENNETH HOFFMAN Professor of Mathematics Massachusetts Institute of Technology RAY KUNZE Professor of Mathematics Kenneth Hoffman & Ray Kunze, Linear Algebra, Prentice-Hall of India, 1975 [Chapter 6: Sections 6.2 to 6.8, Chapter 7: Sections 7.1 to 7.4, ... 2. Ben Noble, James W. Daniel, Applied Linear Algebra, Prentice-Hall of India . MT – 1811 REAL ANALYSIS Category of the Course: MC Hrs/Week: 6 (Linear Algebra by 2.Kenneth Hoffman and Ray Kunze, Pearson Education, New Delhi.) (Linear Algebra by 3.Stephen H.Friedberg and others, published by Prentice-Hall International, Inc.) Multiple Integrals and Vector Calculus Unit - 3 Linear algebra , Kenneth Hoffman, Ray Alden Kunze, 1961, , 332 pages. . Elementary linear algebra , William Leon Perry, Jan 1, 1988, , 495 pages. . Back in the early works Landau showed that the impact tempting. The reaction product Linear Algebra by Kenneth Hoffman and Ray Kunze, Pearson Education (low priced edition), New Delhi 2. Linear Algebra by Stephen H. Friedberg et al Prentice Hall of India Pvt. Ltd. 4th edition 2007 Part B : Multiple integrals and Vector Calculus Linear Algebra by Kenneth Hoffman & Ray Kunze ATLAST Computer Exercises for Linear Algebra by Leon, et al Topic: Introduction to Vectors Solving Linear Equations Vector Spaces and Subspaces Orthogonality Determinants Eigenvalues and Eigenvectors Kenneth Hoffman and Ray Kunze. Linear Algebra (II Edition), Prentice-Hall of India Pvt. Ltd. , New Delhi, 2000. Unit I: Chapter 2: Sections 2.1 to 2.6, Unit II : Chapter 3: Section 3.1 to 3.5 Unit III: Chapter 6: Sections 6.1 to 6.5, Unit ... Kenneth Hoffman, Ray Kunze, Linear Algebra, 2nd edition, Prentice Hall of India, New Delhi. (1971) 2. P.B. Bhattacharya, Phani Bhushan Bhattacharya, S. K Jain, S. R. Nagpaul , First course in linear algebra, , New Age International Ltd Publishers, New Delhi. Kenneth Hoffman and Ray Kunze, "Linear Algebra," 2 nd edition, Pearson Education (Asia) Pte. Ltd/ Prentice Hall of India, 2004. 3. David C. Lay, “Linear Algebra and its Applications,” 3 rd edition, Pearson Education (Asia) Pte. Ltd, 2005. 6 4. Linear Algebra by Kenneth Hoffman and Ray Kunze, Pearson Education (low priced edition), New Delhi 2. Linear Algebra by Stephen H. Friedberg et al Prentice Hall of India Pvt. Ltd. 4 th edition 2007 3.A Text book of Matrices by Santhinarayana. Kenneth Hoffman and Ray Kunze, Linear Algebra, Second Edition, Prentice – Hall of India Private Limited, New Delhi :1975. UNIT – I - Chapters 1 and 2 UNIT – II - Chapter 3 UNIT – III - Chapter 4 and Chapter 5: Sections 5.1 to 5.4 UNIT ... LINEAR ALGEBRA AND VECTOR CALCULUS (Syllabus for the academic years 2010-2011 and onwards) PART A : LINEAR ALGEBRA UNIT - 1 ... Reference Books : l. Linear Algebra by Kenneth Hoffman and Ray Kunze, Pearson Ed cation (low priced edition), ... LINEAR ALGEBRA Gateway to Mathematics Robert Messer 1994 94. 
linear Algebra with applications W .KEITH NICHOLSON 1995 95. linear Algebra second edition KENNETH HOFFMAN,RAY KUNZE 1971 96. Linear Algebraic Groups Second Enlarged Edition Armand Borel 1991 97. logic and ... Robert A. Beezer A First Course in Linear Algebra , .5 T.Y ... HOMOLOGICAL ALGEBRA 2003 L. R. VERMANI 2003 Kenneth Kuttler An Introduction To Linear Algebra .21 ... KENNETH HOFFMAN and LINEAR ALGEBRA .82 RAY KUNZE 1971 ADVANCED MATHEMATICAL METHODS .83 Gilbert Strang, "Linear Algebra and its Applications”, 3rd edition, Thomson Learning Asia, 2003. 2. Kenneth Hoffman and Ray Kunze, "Linear Algebra," 2nd edition, Pearson Education (Asia) Pte. Ltd/ Prentice Hall of India, 2004. 3. David C. Lay, “Linear Algebra and its Applications,” 3rd ... LINEAR ALGEBRA CODE MMM 101 Unit I Vector Spaces: Definition, ... Kenneth Hoffman & Ray Kunze, Linear Algebra , Pearson Education. REAL ANALYSIS CODE MMM 102 ... Blanchard, Kenneth H and Johnson Dewey E., Pearson Education Kenneth Hoffman and Ray Kunze, Linear Algebra, Second edition, Prentice Hall of India Pvt.Ltd. 2. I.N. Herstein, Topics in Algebra, Second Edition, Wiley Eastern Ltd. 3. Michael Artin, Algebra, Prentice Hall of India Pvt Ltd, 1994. MT 2903 - MATHEMATICAL PHYSICS [Gel61] I. M. Gelfand, Lectures on Linear Algebra, Interscience Publishers, New York, 1961. ... [HK61] Kenneth Hoffman and Ray Alden Kunze, Linear Algebra, Prentice-Hall, Englewood Cliffs, N.J., 1961. [Hur45] Henry Hurwitz, Jr., LINEAR ALGEBRA KENNETH HOFFMAN Professor of Mathematics Massachusetts Institute of Technology RAY KUNZE Professor of Mathematics University of California, Irvine Linear Algebra in Twenty Five Lectures ... ... Kenneth Hoffman & Ray Kunze, Linear Algebra, 2nd edition, Pearson Education Inc., India, 2005 [2] S.Lang, Linear Algebra, 3rd edition, Undergraduate Texts in Mathematics, Springer, 1987 [3] I.N.herstein, Topics in Algebra, 2nd edition, John Wiley & Sons, 2007 12PMA1102 Linear Algebra 6 5 12PMA1103 Ordinary Differential Equations 6 5 12PMA1104 Classical Dynamics 6 5 I 12PMA1105 ... Linear Algebra, Kenneth Hoffman, Ray Alden Kunze, Second Edition, Prentice Hall of India Private Limited, New Delhi, 1975. Kenneth Hoffman and Ray Kunze, Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, 1971. 3. L. Janossy, Theory of Relativity Based on Physical Reality, Akademiai Kiado, Budapest, 1971. 4. Ian R. Porteous, Topological Groups, Van Nostrand Reinhold, London, 1969. 5. H. P. Linear Algebra 2nd Edition (Paperback) by Kenneth Hoffman, Ray Kunze, PHI Learning... Bokbeslut möte 4/2012 (121213) • Paperback: 276 pages • ... introduction to functional analysis with applications / Yamamoto, Yutaka . Hardback: 268 s. Kenneth Hoffman and Ray Kunze, "Linear Algebra," 2nd edition, Pearson Education (Asia) Pte. Ltd/ Prentice Hall of India, 2004. 3. Bernard Kolman and David R. Hill, "Introductory Linear Algebra with Applications”, Pearson Education (Asia) Pte. Ltd, 7th edition, 2003. ... Linear Algebra No. of credit points:6 M203: Topology – II ... Linear Algebra second edition By Kenneth Hoffman and Ray Kunze, ... LINEAR ALGEBRA by KENNETH HOFFMAN , RAY KUNZ (Second Edition) ... Linear Algebra 2nd Edition (Paperback) by Kenneth Hoffman, Ray Kunze, PHI Learning, 2009. NITTPGCSE13 8 CS604: Service Oriented Architecture and Web Security Credit: 3 Objective: To provide an overview of XML Technology and modeling databases in XML Linear Algebra . determinants . eigenvalues and eigenvectors . Cayley-Hamilton Theorem . ... 
Kenneth Hoffman and Ray Kunze, Linear Algebra, Prentice-Hall, 1971. [3] Thomas W. Hungerford, Algebra, Springer, 1974. [4] Roy Smith, Algebra Course Notes ... Kenneth Hoffman and Ray Kunze, Linear Algebra - Second Edition - Prentice–Hall of India Private Limited - New Delhi - 1975. UNIT – I Sections 1 to 4 of Chapter V of (1). UNIT – II Sections 5 and 6 of Chapter V of (1). Kenneth Hoffman and Ray Kunze, “Linear Algebra” 2nd edition, Pearson Education (Asia) Pvt . ... David C Lay, Linear algebra and its applications , 3rd edition, Pearson Education, (Asia) Pte, Ltd 2005 . APPLIED MATHEMATICS SUB CODE: 09MAT04 Total hours 52 UNIT 1. ... Linear Algebra by J.N. Sharma and A.R. Vasishtha, Krishna Prakashan Mandir, ... Linear Algebra by Kenneth Hoffman and Ray Kunze, Pearson Education (low priced edition), New Delhi. 2. Linear Algebra by Stephen H. Friedberg, Arnold J. Insal and Lawrance E. Spence, Prentice Hall of India Linear algebra: theory and application Ward Cheney and David Kincaid ... Linear algebra Kenneth Hoffman [and] Ray Kunze Prentice-Hall Van Nostrand N. Curle and H. J. Davies Modern fluid dynamics Business data processing and systems analysis Reference books: Linear Algebra, Stephen H.Friedberg , A.J. Insel, L.E. Spence, Prentice- Hill Course Outline: 1. Linear Equations ( 3 weeks) ... Text book: Linear Algebra, Kenneth Hoffman, Ray Kunze ; Prentice - Hill Reference books: Linear Algebra Stephen H.Friedberg , A.J. Insel, ... ... Linear Algebra by J.N.Sharma and A.R.Vasista, Krishna Prakasham Mandir, Meerut-250002. Reference Books: 1. Linear Algebra by Kenneth Hoffman and Ray Kunze, Pearson Education (low priced edition), New Delhi 2. Linear ... Kenneth Hoffman and Ray Kunze, Linear Algebra, Second Edition, Prentice – Hall of India Private Limited, New Delhi ,1975. REFERENCE(S) 1. S. Kumaresan, Linear Algebra, Prentice-Hall of India Ltd, 2000. 2. V. Krishnamurthy et al, Introduction to Linear Algebra, East Gilbert Strang, "Linear Algebra and its Applications”, 3rd edition, Thomson Learning Asia, 2003. Kenneth Hoffman and Ray Kunze, "Linear Algebra," 2nd edition, Pearson Education (Asia) Pte. Ltd/ Prentice Hall of India, 2004. Kenneth Hoffman and Ray Kunze, “Linear Algebra”, ... Prerequisites: Linear Algebra, Discrete Fourier Transforms, elementary Hilbert Space Theorems (No questions from the pre-requisites) UNIT I Construction of Wavelets on ZN the first stage. Linear Algebra 2nd Edition (Paperback) by Kenneth Hoffman, Ray Kunze, PHI Learning, 2009. NITTPGCSE13 7. CS604: Service Oriented Architecture and Web Security Credit: 3 Objective: • To provide an overview of XML Technology and modeling databases in XML Hoffman, Kenneth & Kunze, Ray, Linear Algebra (Second Edition), Prentice-Hall, 1971, viii + 407 pp. Excellent junior/senior-level text. The chapter headings are: linear equations, vector spaces, Kenneth Hoffman and Ray Kunze, Linear Algebra, 2nd edition, Prentice-Hall, Englewood Cliffs, NJ, 1971. A One-Sentence Proof That V2 Is Irrational DAVID M. BLOOM Brooklyn College of CUNY Brooklyn, NY 11210 If V2 were rational, say F = mn/n in lowest terms, then also ? Rowe, 2004]use linear algebra to find the best approximation that lies in a linear subspace of the space of pseudo-Boolean functions; for example, ... [Hoffman and Kunze, 1971] Kenneth Hoffman and Ray Kunze. Linear Algebra, 2nd edition. Prentice-Hall, En-gle wood Cliffs, Ne Jersey, 1971. LINEAR ALGEBRA Subject Code : 08EC046 IA Marks : 50 No. of Lecture Hours /week : 04 Exam Hours : 03 Total no. 
of Lecture Hours : 52 Exam Marks : 100 Linear Equations: Fields ... Kenneth Hoffman and Ray Kunze, "Linear Algebra ... Gilbert Strang ,Linear Algebra and its Applications, 4th ed., Thomson Learning Co., Belmont CA, 2006. 2. Kenneth Hoffman and Ray Kunze, Linear Algebra, 2nd ed., , Prentice‐ Hall 1971. 3. Roger Horn and ... 1.Gilbert Strang , "Linear Algebra and its Applications ”, 3 rd edition, Thomson Learning Asia, 2003. 2.Kenneth Hoffman and Ray Kunze, " Linear Algebra ," 2 nd edition, Pearson Education (Asia) Pte. Ltd/ Prentice Hall of India, 2004. necessary back ground in linear algebra and tensors needed for analyzing problems in mechanical engineering. Module 1 (15 Hrs.) ... Kenneth Hoffman and Ray Kunze, Linear Algebra , PHI Private Limitted, Newdelhi, INDIA. University of Calicut Kenneth Hoffman and Ray Kunze, Linear algebra, Prentice-Hall, Englewood Cliffs, N. J., 1964, pp. 187-201. 5. Heydar Radjavi and James Williams, Products of self-adjoint operators, Michi-gan Math. J. (to appear). University of Toronto necessary back ground in linear algebra and tensors needed for analyzing problems in mechanical engineering. ... Kenneth Hoffman and Ray Kunze, Linear Algebra, PHI Private Limitted, Newdelhi, INDIA. Internal continuous assessment: 100 marks Linear Algebra Kenneth Hoffman, Ray Kunze Pearson Education, New Delhi 512.5 H 698 Schaum’s Outline of Theory and Problem of Linear Algebra ... Linear Algebra A Ramachandya Rao & P Bhimasankaran Hindustan Book, New Delhi 512.5 R215 The Universal History of Numbers Georges Ifrah
{"url":"http://ebookily.org/pdf/linear-algebra-kenneth-hoffman-ray-kunze","timestamp":"2014-04-23T17:30:33Z","content_type":null,"content_length":"45506","record_id":"<urn:uuid:2cdcc518-2b84-4bdb-96e7-bf4c12396f95>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
generic etale morphism from curve to projective line up vote 1 down vote favorite Let $X$ be a smooth projective curve over $k$, ch$k=p>0$; does there exist a generically etale morphism from $X$ to the projective line? 5 Yes. First, smoothness implies that $k(X)$ is a separable field extension of $k$, cf. Corollary 16.17 of Eisenbud's "Commutative Algebra". Thus there exists an element $f\in k(X)$ such that $k(X)$ is a finite, separable extension of the subfield $k(f)$. Considering $f$ as a rational function on $X$, $f$ extends to a regular $k$-morphism $f:X\to \mathbb{P}^1_k$ which is generically \'etale. – Jason Starr May 1 '12 at 12:01 I'm 3 minutes late. The only thing I can add is that, in case you don't have Eisenbud at hand, but you have Lang's "Algebra", then the reference for infinite separable extensions is Prop VIII.4.1. – Jef May 1 '12 at 12:08 Thank you for answering ! – kiseki May 1 '12 at 13:14 add comment
{"url":"http://mathoverflow.net/questions/95661/generic-etale-morphism-from-curve-to-projective-line","timestamp":"2014-04-18T08:32:59Z","content_type":null,"content_length":"48869","record_id":"<urn:uuid:8c9b7a5b-acc6-4d55-b6a3-338f71f21c70>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Plus Advent Calendar Door #5: Five won't fit Triangles do it, squares do it, even hexagons do it — but pentagons don't. They just won't fit together to tile your bathroom wall. That's the reason why you'll find it very difficult to find pentagonal tiles in any hardware shop. It's actually really easy to see why pentagons won't tile the plane. We are talking regular pentagons here, shapes that have five sides of equal length and angles between them. In a regular pentagon the internal angle between two sides is 108°. In a regular tiling, adjacent tiles share whole edges, rather than just parts of edges, so corners of tiles meet corners of other tiles. To fit a number of tiles around a corner point, their internal angles must add up to 360°, since that's a full turn. If you try to fit three pentagons, you only get 3 x 108° = 324°, so there is a gap. If you fit four pentagons, you get 4 x 108° = 432°, so two of them overlap. The same isn't true for equilateral triangles, squares, or regular hexagons. Here the internal angles are 60°, 90° and 120° respectively, so you can fit six triangles, four squares and three hexagons around a corner point. (You can try and work out for yourself if any other regular polygons can give you a regular tiling.) Three pentagons arranged around a point leave a gap, and four overlap. Image: Craig Kaplan. If pentagons don't work can we perhaps use other shapes with five-fold symmetry to tile the plane? Find out more in The trouble with five. Return to the Plus Advent Calendar
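A short Python sketch (added here as an illustration, not part of the original article) runs the same angle check for other regular polygons, answering the "try it yourself" question above:

```python
# For a regular n-gon the interior angle is 180*(n-2)/n degrees.
# A regular tiling needs a whole number k of tiles meeting at each
# corner, i.e. k * interior_angle == 360.
for n in range(3, 13):
    interior = 180 * (n - 2) / n        # interior angle of a regular n-gon
    k = 360 / interior                  # how many such angles fit around a point
    if k.is_integer():
        print(f"{n}-gon: interior angle {interior:g} degrees, {int(k)} tiles meet at a corner")
    else:
        print(f"{n}-gon: interior angle {interior:g} degrees, does not give a regular tiling")
```

Only the triangle, square, and hexagon come out as regular tilings, matching the article.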
{"url":"http://plus.maths.org/content/plus-advent-calendar-door-5-five-wont-fit","timestamp":"2014-04-19T15:38:30Z","content_type":null,"content_length":"24024","record_id":"<urn:uuid:4696c676-90cd-4487-9a78-5d398358917b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Chapter 6 Test 1. The diagonals of a quadrilateral are perpendicular bisectors of each other. What name best describes the quadrilateral? 2. The vertices of a square are located at (a, 0), (a, a), (0, a), (0, 0). What is the length of a diagonal? 3. The vertices of a rhombus are located at (a, 0), (0, b), (-a, 0), (0, -b), where a, b > 0. What is the midpoint of the side that is in Quadrant II? 5. Quadrilateral EFGH is a kite. What is the value of x? 6. The vertices of a kite are located at (0, a), (b, 0), (0, -c), and (-b,0), where a, b, c, d > 0. What is the slope of the side in Quadrant IV? 8. The diagonals of a quadrilateral bisect both pairs of opposite angles. What name best describes the quadrilateral? 10. A parallelogram has four congruent sides. Which name best describes the figure? 11. Two consecutive angles of a trapezoid are right angles. Three of the following statements about the trapezoid could be true. Which statement CANNOT be true? 12. Which name best describes a parallelogram with four congruent figures? 15. Which statement is true for some, but not all, rectangles?
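For the coordinate-geometry items (questions 2 and 3), one possible working — shown here only as an illustrative SymPy sketch, not as an official answer key for the quiz — looks like this:

```python
from sympy import symbols, sqrt, simplify

a, b = symbols('a b', positive=True)

# Question 2: square with vertices (a,0), (a,a), (0,a), (0,0);
# one diagonal runs from (0,0) to (a,a).
diagonal = sqrt(a**2 + a**2)
print(simplify(diagonal))                # sqrt(2)*a

# Question 3: rhombus with vertices (a,0), (0,b), (-a,0), (0,-b);
# the side lying in Quadrant II joins (0,b) and (-a,0).
midpoint = ((-a + 0) / 2, (0 + b) / 2)
print(midpoint)                          # (-a/2, b/2)
```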
{"url":"http://www.proprofs.com/quiz-school/story.php?title=geometry-chapter-6-test","timestamp":"2014-04-17T07:15:35Z","content_type":null,"content_length":"148451","record_id":"<urn:uuid:fef58fb5-1ac5-4380-9540-35bf403b39ea>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Splish, Splash, I Was Takin' a Drink... [Archive] - Straightbourbon.com Forums 08-19-2003, 15:24 Any Bobby Darrin fans out there? http://www.straightbourbon.com/forums/images/graemlins/grin.gif Actually this is about a different kind of splash, namely that "splash" that we all refer to from time to time when we add water to our bourbon before drinking it. Today I decided to finish off a bottle of GTS, and for my first drink I added water in two doses before I achieved the balance of flavor and smoothness that most appealed to me today. In the process, I started wondering what proof I had achieved, knowing full well that there was no way to measure after the fact how much bourbon I started with, much less how much water I had added. But suppose I had measured the amount of bourbon that I started with and the amount of water I added. How hard would it be, I wondered, to calculate the proof of the resulting mixture. Or, perhaps more usefully, what if I knew the quantity and proof of the bourbon I started with and the desired proof of the end-product. How could I calculate the amount of water to add? It took me an embarrassingly long time to come up with an answer, and I know this wheel has probably rolled before. Nevertheless, here's what I came up with, albeit handicapped by my inability to use subscripts in this forum as I did originally. q = initial total amount of bourbon a = " " " " alcohol w1 = " " " " water w2= new " " " water splash = w2 - w1 p1 (i.e. initial %abv) = a / q = a / (a + w1) p2 (i.e. new %abv) = a / (a + w2) A few algebraic steps later, I came to the following: splash = a/p2 - q Now let's suppose that my GTS is an even 138 proof. Then the initial abv% ( or p1) is 69% or 0.69. If I start with exactly one ounce, just to make the calculation more illuminating, and if I want to end up with a drink that is 100 proof (or 50% abv), the following calculation tells me how to do splash = a/p2 - q = .69/.50 - 1 = 0.38 In other words, I must add 0.38 ounces of water to one ounce of GTS at 138 proof to yield a drink of 100 proof. If the target is 90 proof (45% abv), then splash = .69/.45 - 1 = 0.53 If the target is 80 proof (40% abv), then splash = .69/.4 - 1 = .73 I find the above counter-intuitive. Could it actually be that one can add almost three quarters of the original amount and still have 80 proof bourbon? I can't see an error in my math, but, of course, I am nearing the end of my second glass of Stagg. Bleeee! http://www.straightbourbon.com/forums/images/graemlins/grin.gif Yours truly, Dave Morefield
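Here is a small Python helper (an added illustration; the function name splash() is made up) that implements the poster's formula splash = a/p2 - q and reproduces the worked numbers above:

```python
def splash(q_oz, abv_initial, abv_target):
    """Ounces of water to add to q_oz of spirit at abv_initial
    (a fraction, e.g. 0.69 for 138 proof) to reach abv_target."""
    a = q_oz * abv_initial          # ounces of pure alcohol
    return a / abv_target - q_oz    # splash = a/p2 - q

for target_proof in (100, 90, 80):
    abv_target = target_proof / 200          # US proof is twice the percent ABV
    w = splash(1.0, 0.69, abv_target)
    print(f"{target_proof} proof: add {w:.2f} oz of water per 1 oz of 138-proof bourbon")
```

Running it gives 0.38, 0.53, and 0.73 ounces — the same figures as in the post, so the algebra checks out: diluting to 80 proof really does take nearly three quarters of an ounce of water per ounce of 138-proof spirit.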
{"url":"http://www.straightbourbon.com/forums/archive/index.php/t-1908.html","timestamp":"2014-04-17T21:35:22Z","content_type":null,"content_length":"9934","record_id":"<urn:uuid:3f979d5a-104f-43bb-ad95-77dd9f1862b0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Penn Valley, PA Calculus Tutor Find a Penn Valley, PA Calculus Tutor ...I used MatLAB during my junior and senior year of college to help model civil engineering problems. I took a semester-long class that involved projects created in MatLAB, and learning the essentials of the programming language. My specialty is helping novice users become more comfortable with the software. 21 Subjects: including calculus, reading, physics, geometry ...What will this course not include? -Provide a thorough understanding of basic concepts. If you understand the foundations of a subject very well, you can keep your head in a reasonable place when things get complicated. -Work on the same level as my students. As you work on a problem set, I will work through the same thing. 25 Subjects: including calculus, chemistry, physics, writing ...At college level, he has tutored students from the Universities of Princeton, Oxford, Pennsylvania State, Drexel, Temple, Phoenix, and the College of New Jersey. Dr Peter offers assistance with algebra, pre-calculus, SAT, AP calculus, college calculus 1,2 and 3, GMAT and GRE. He is a retired Vice-President of an international Aerospace company. 10 Subjects: including calculus, GRE, algebra 1, GED ...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and non-euclidean geometry. As an undergraduate, I took three semesters of elementary calculus, which included tw... 12 Subjects: including calculus, writing, geometry, algebra 1 ...MathAs an engineering graduate I have a very strong math background. I can teach any topic including geometry, trigonometry, algebra, calculus, linear algebra, probability and statistics. While in high school I scored a 780 on the math SAT and an 800 on the math SAT II. 15 Subjects: including calculus, chemistry, physics, algebra 1
{"url":"http://www.purplemath.com/Penn_Valley_PA_Calculus_tutors.php","timestamp":"2014-04-16T07:34:32Z","content_type":null,"content_length":"24447","record_id":"<urn:uuid:17353071-fdb8-4a71-9e7e-1e1b4067b648>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
What's in the Box? Copyright © University of Cambridge. All rights reserved. Why do this problem? Of course this problem is rather like a function machine, but it can be more interesting for the pupils and easily extended to challenge a wide range of pupils. It could be used to introduce children to the idea of common factors and offers opportunities for learners to record in whatever way they choose. Possible approach It may be necessary to introduce the class to just one number going in and to give them one outcome to start with so that they understand the process. Then, gradually increase the number of numbers going in until you reach four, as in the problem. Your own examples can be adjusted in complexity according to the level of your pupils. Once learners have had some time to work on the first part of the problem in pairs, ask them to share their ways of working with the whole group. Look out for those who give good reasons for choosing particular methods. At this stage, you could introduce the vocabulary of common factors if appropriate. You may also wish to draw attention to interesting ways of recording. Some children may have drawn pictures, others may have written calculations and others may have done both. Key questions What might have gone on in the box to get this number answer? Could that have produced the other answers too? Possible extension Outputs like $165, 45, 135$ and $315$ could obviously have "$ \times 5$" in the box, meaning the inputs were $33, 9, 27$ and $63$ but there is another possibility when fraction multiplication is allowed. If "$ \times 3.75$" was in the box then the input numbers would have been $44, 12, 36$ and $84$. In this example it would be appropriate to ask experienced pupils what was happening - in other words, encourage them to recognise that there are two solutions and ask them to explain how and why the numbers relate to each other. Challenging pupils in this way will almost certainly get them to consider number relationships very seriously, reinforcing what they have learnt and opening doors to further learning. Some pupils could go on to invent their own for others to do. Possible support For just one number going in you can use counters and a cloth. Cover the counters with the cloth and then secretly add the required extra number of counters under the cover before revealing them to the pupil. Then a number of probing questions can be asked: How many counters now? What must have happened under the cover? As they tackle the main problem, some learners might find it useful to have a multiplication square or calculator available.
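As an illustration of the extension above (not part of the original teachers' notes), a short Python check confirms that both multipliers produce the same outputs; exact fractions are used so that 3.75 is represented as 15/4.

```python
from fractions import Fraction

outputs = [165, 45, 135, 315]

for multiplier, inputs in [(Fraction(5), [33, 9, 27, 63]),
                           (Fraction(15, 4), [44, 12, 36, 84])]:   # 15/4 = 3.75
    produced = [multiplier * x for x in inputs]
    print(multiplier, produced, produced == outputs)
```

Both lines print True, showing that the two different boxes are indistinguishable from these outputs alone.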
{"url":"http://nrich.maths.org/5576/note?nomenu=1","timestamp":"2014-04-18T01:16:14Z","content_type":null,"content_length":"6819","record_id":"<urn:uuid:83247691-a80f-4b08-9699-302c6bfcae1c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Signature-based algorithms to compute standard bases • Standard bases are one of the main tools in computational commutative algebra. In 1965 Buchberger presented a criterion for such bases and thus was able to introduce a first approach for their computation. Since the basic version of this algorithm is rather inefficient due to the fact that it processes lots of useless data during its execution, active research for improvements of those kind of algorithms is quite important. In this thesis we introduce the reader to the area of computational commutative algebra with a focus on so-called signature-based standard basis algorithms. We do not only present the basic version of Buchberger’s algorithm, but give an extensive discussion of different attempts optimizing standard basis computations, from several sorting algorithms for internal data up to different reduction processes. Afterwards the reader gets a complete introduction to the origin of signature-based algorithms in general, explaining the under- lying ideas in detail. Furthermore, we give an extensive discussion in terms of correctness, termination, and efficiency, presenting various different variants of signature-based standard basis algorithms. Whereas Buchberger and others found criteria to discard useless computations which are completely based on the polynomial structure of the elements considered, Faugère presented a first signature-based algorithm in 2002, the F5 Algorithm. This algorithm is famous for generating much less computational overhead during its execution. Within this thesis we not only present Faugère’s ideas, we also generalize them and end up with several different, optimized variants of his criteria for detecting redundant data. Being not completely focussed on theory, we also present information about practical aspects, comparing the performance of various implementations of those algorithms in the computer algebra system Singular over a wide range of example sets. In the end we give a rather extensive overview of recent research in this area of computational commutative algebra. Author: Christian Eder URN (permanent link): urn:nbn:de:hbz:386-kluedo-29756 Advisor: Gerhard Pfister Document Type: Doctoral Thesis Language of publication: English Publication Date: 2012/04/13 Year of Publication: 2012 Publishing Institute: Technische Universität Kaiserslautern Granting Institute: Technische Universität Kaiserslautern Acceptance Date of the Thesis: 2012/04/13 Faculties / Organisational entities: Fachbereich Mathematik DDC-Cassification: 512 Algebra MSC-Classification (mathematics): 13P10 Gröbner bases; other bases for ideals and modules (e.g., Janet and border bases)
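As a purely illustrative aside (not drawn from the thesis), the sketch below computes a Gröbner basis for a small ideal with SymPy, to show what a standard-basis computation looks like in practice; note that SymPy's groebner() uses a Buchberger-style algorithm, not the signature-based F5 variants the thesis studies, and not Singular.

```python
from sympy import groebner, symbols

x, y, z = symbols('x y z')

# A small polynomial ideal; groebner() runs a Buchberger-style
# computation with the chosen monomial order.
G = groebner([x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1],
             x, y, z, order='lex')
for g in G:
    print(g)
```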
{"url":"https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2975","timestamp":"2014-04-16T17:43:17Z","content_type":null,"content_length":"21217","record_id":"<urn:uuid:a293e429-24e8-4692-90c4-e8b07c10c205>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Oak Hill, VA Math Tutor Find an Oak Hill, VA Math Tutor ...Math was one of my strongest subjects (750/800 on SAT Math and 35/36 on ACT Math) and I currently tutor a Freshman within FCPS for geometry. I scored very high on my math portions of the SAT and ACT (750 out of 800 SAT, 35 out of 36 ACT). I have helped several other students with study tips and ... 16 Subjects: including statistics, probability, algebra 1, algebra 2 ...I've tutored elementary math through Algebra II, GED prep, 100-level college algebra, test prep, etc. I have solid math content knowledge. I have a bachelor's degree in math and I passed the Praxis II math exam with a 162 (VA required 147, and 162 was an above-average score). The best way to maximize success is ensure the student completes all assigned homework correctly all the 10 Subjects: including prealgebra, algebra 1, algebra 2, geometry ...While I was in college, I was a professor's assistant for 3 years in a calculus class, which included me lecturing twice a week, and working one-on-one with students. After graduating, I taught high school math for one year (courses were College Prep, Algebra II, and Geometry), but I am well ver... 10 Subjects: including algebra 1, algebra 2, calculus, geometry ...He is especially strong in SAT/ACT Math and worked extensively with both his son and daughter at an early age to where they both attended TJ (Thomas Jefferson High School for Science & Technology). This tutoring in math, chemistry and physics continued at a high level to where they went on to th... 17 Subjects: including probability, ACT Math, SAT math, trigonometry ...I have experience teaching students with ADD/ADHD as well as a medical background and thorough understanding of what they are going through. I am very patient and know how to help them work efficiently. I am able to teach concepts and material in ways that accommodate to them. 17 Subjects: including geometry, algebra 1, ACT Math, SAT math Related Oak Hill, VA Tutors Oak Hill, VA Accounting Tutors Oak Hill, VA ACT Tutors Oak Hill, VA Algebra Tutors Oak Hill, VA Algebra 2 Tutors Oak Hill, VA Calculus Tutors Oak Hill, VA Geometry Tutors Oak Hill, VA Math Tutors Oak Hill, VA Prealgebra Tutors Oak Hill, VA Precalculus Tutors Oak Hill, VA SAT Tutors Oak Hill, VA SAT Math Tutors Oak Hill, VA Science Tutors Oak Hill, VA Statistics Tutors Oak Hill, VA Trigonometry Tutors Nearby Cities With Math Tutor Adelphi, MD Math Tutors Aspen Hill, MD Math Tutors Chantilly Math Tutors Colesville, MD Math Tutors Dale City, VA Math Tutors Darnestown, MD Math Tutors Franconia, VA Math Tutors Herndon, VA Math Tutors North Bethesda, MD Math Tutors North Potomac, MD Math Tutors Potomac Falls, VA Math Tutors Reston Math Tutors South Riding, VA Math Tutors Sully Station, VA Math Tutors West Springfield, VA Math Tutors
{"url":"http://www.purplemath.com/Oak_Hill_VA_Math_tutors.php","timestamp":"2014-04-20T13:38:45Z","content_type":null,"content_length":"24073","record_id":"<urn:uuid:ef9b1e60-c465-45f7-ac36-21b8d2279f7f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Interleaving Spurs: More Math Details for Gain Mismatch Now things are getting interesting. We've been looking at where the interleaving spurs are located and have taken a look at the level of the spur produced from the offset mismatch. By doing some calculations we were able to see how big of a spur would result from offset mismatch between two interleaved ADCs. Just as we did when looking at the locations of the spurs, we'll take a similar path now. We had first looked at offset mismatch, so now let's dive into how we can calculate the level of the spur produced at f[S]/2 ± f[in] due to the gain mismatch. It's time again to put on our mathematician's hat for just a moment… Don't worry though, we won't be wearing it for too much longer. We'll need it just for a while as we continue to look at some mismatch and dive into the gain mismatch spur. So how do we know how big the spur from the gain mismatch is going to be? Let's take a look at Equation 1 below where V[FS1] and V[FS2] are the full scale peak-to-peak voltages of the two ADCs that we are interleaving. Now, let's consider we have a typical gain mismatch between two 14-bit ADCs in a dual channel device. Typically this is about 1 percent of full scale for the nominal value. This means that ADC1 has a full-scale voltage of 2V[P-P] that ADC2 would have a full-scale voltage of 1.98V[P-P]. Substituting this in Equation 1 we get the following: Wow, that is quite interesting! One percent of full-scale doesn't seem like much of a gain error, but it results in a fairly large offset spur of 46dBc. I doubt that there are many applications today for high speed ADCs that could tolerate this level of spur in the output spectrum. This would easily dominate the spurious free dynamic range (SFDR) specification for the interleaved ADCs. Most applications require an SFDR of at least 70dBc or better which means that 46dBc is much too high. Let's take a look at where we need to be in order to meet or exceed a 70dBc level. Below in Figure 1 the magnitude of the gain mismatch spur is shown with respect to the gain mismatch given in percent full scale. This plot gives us some good information and some insight into what gain mismatch levels we can tolerate. In order to meet typical spurious requirements of 70 dBc, the gain mismatch must be less than 0.05% of full-scale for a 14-bit converter. This gives us an idea of how closely the gain between the two ADCs needs to be matched. It's pretty small. However, as process technologies shrink and matching techniques improve it becomes easier to minimize the gain mismatch. On a device like the AD9286, the typical gain mismatch is about 0.05% of full scale, which places us right on the 70dBc specification we are looking for. If we can reduce the mismatch by another 0.025% then we can lower the gain mismatch spur down to 78dBc. If we can go further and reduce the mismatch down to 0.005% then we can lower the spur down to 92dBc. This tells us that there is hope; we just need to figure out a good way to reduce the mismatch. This math hat is coming in handy. We can use it again next time as we look at calculating the level of the timing mismatch spur. Stay tuned and keep those comments and questions coming! Related posts:
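The article refers to an "Equation 1" that did not survive extraction here. The sketch below uses the expression 20·log10[(V_FS1 − V_FS2)/(V_FS1 + V_FS2)], which is an assumption inferred from the worked numbers in the text (about −46 dBc at 1% mismatch, −78 dBc at 0.025%, −92 dBc at 0.005%), so it should be checked against the original article before being relied on.

```python
import math

def gain_spur_dbc(vfs1, vfs2):
    """Level of the fs/2 +/- fin interleaving spur due to gain mismatch,
    relative to the carrier.  This expression is inferred from the
    article's worked examples, not quoted from it."""
    return 20 * math.log10(abs(vfs1 - vfs2) / (vfs1 + vfs2))

for mismatch_pct in (1.0, 0.05, 0.025, 0.005):
    vfs1 = 2.0                                   # 2 Vpp full scale for ADC1
    vfs2 = vfs1 * (1 - mismatch_pct / 100)       # ADC2 full scale with mismatch
    print(f"{mismatch_pct}% gain mismatch -> {gain_spur_dbc(vfs1, vfs2):.1f} dBc")
```

With this formula the 1% case lands at −46.0 dBc and the 0.025% and 0.005% cases at −78.1 dBc and −92.0 dBc, consistent with the figures quoted in the text.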
{"url":"http://www.planetanalog.com/author.asp?section_id=3041&doc_id=561304","timestamp":"2014-04-16T20:16:44Z","content_type":null,"content_length":"118449","record_id":"<urn:uuid:e6c00a4f-3b97-46fa-9dfc-14292750ffe0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability of Matching Times on a Clock Date: 09/14/2004 at 12:17:07 From: Michael Subject: What is the probability What is the probability of two different times within the same hour ending in the same last digit? Like 08:13 and 08:43? I don't know the process of calculating the probability. Date: 09/14/2004 at 12:59:58 From: Doctor Edwin Subject: Re: What is the probability Hi, Michael. In general, the chance of an event occurring is (# of ways the event could happen) / (total # of ways things could turn out). So for example my chance of rolling a one or a two on a six-sided die is (# of ways I could roll 1 or 2) / (total # of ways the die could come up), which is just 1/3. Your question has an interesting twist. The most obvious answer is 1/10 or 10%. Let's ignore everything but the last digit. If I pick a number between zero and nine, and you do the same, the chance that you picked the same number I did is 1/10. So if I pick a time at random, and you pick a time at random, the chance that the last digits will match is still 1/10. But your problem adds two additional pieces of information. First, it says that the times must be different. So we can't both pick 8:13, for example. The other piece is that the two times must fall within the same hour. In order to figure out the probability of your time event, we have to figure out how many ways we could have the same number at the end, while picking DIFFERENT times in the same hour. Suppose like in your example I pick 8:13. Now, how many ways can you pick a time that ends with the same digit as mine? There are six of them: 8:03, 8:13, 8:23, 8:33, 8:43, and 8:53. But one of those is the same time I picked, and you're not allowed to pick that one, so you're down to 5 possible ways to pick the time. So your probability is 5 / (total # of times you could have picked). So how many times could you have picked? There are 60 minutes in an hour, and you're not allowed to pick one of them. Can you figure out the probability from there? Write back if you're stuck. - Doctor Edwin, The Math Forum
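For readers who want to check the final step, a brute-force enumeration (added here as an illustration, not part of the original answer) counts the favourable ordered pairs of distinct minutes directly:

```python
from fractions import Fraction
from itertools import product

minutes = range(60)                      # the 60 possible times within one hour
favorable = 0
total = 0
for mine, yours in product(minutes, repeat=2):
    if mine == yours:                    # the two times must be different
        continue
    total += 1
    if mine % 10 == yours % 10:          # last digits of the minutes match
        favorable += 1

print(Fraction(favorable, total))        # 5/59, the same as 5 matches out of 59 choices
```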
{"url":"http://mathforum.org/library/drmath/view/65591.html","timestamp":"2014-04-17T01:37:35Z","content_type":null,"content_length":"7415","record_id":"<urn:uuid:65411d45-837f-4fc7-af1b-da59f4e60bac>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Chuck's Chatter Some recent discussions via various media have spurred me to post something about how pathetic mathematics education in this country has been for a long time -- long enough that I myself was influenced by its sorry state. As a boy, I began to encounter math phobia in grade school, where a tyrannical math teacher instilled the fear of math in me. I suppose her intentions were good, perhaps hoping to motivate me to greater achievement, but her methods were not successful in stimulating the response she probably wanted. Many subsequent teachers reinforced my fears, including my math teacher for 4 years in high school. His plan was to humiliate me into doing better, not unlike the tactics adopted by the fearsome math teacher in grade school. It worked no better in high school than it did in grade school. Instead, I began to doubt my ability to master the subject. Those doubts grew with time, to the point where in my college days, it was undermining my confidence that I could pursue the career I wanted for myself because I was performing so badly in math. Meteorology is a subject that uses mathematics to a considerable extent -- being incompetent at math was simply not an option. It took the insight of a graduate student teaching assistant at the University of Wisconsin to begin the process of overcoming my math phobia. At that point, I had two C's and a 5-credit D in 3rd semester Calculus behind me, with no prospect for a good performance in Differential Equations. But Mr. Hunte r found a way to open my eyes to the subject and I managed a B! He showed me that I could understand the material and enjoy it at the same time! In graduate school, my advisor looked over my transcript and made it clear to me I needed to minor in mathematics! I swallowed the lump of fear in my throat and did what I was told. By some miracle, I had three straight excellent math teachers (in Tensors and Vectors, Complex Variables, and Fourier Series & Boundary Value Problems) and aced all three!! My first A's in math since 2nd grade!! My fears were vanquished and I became fully confident regarding math from that point on. It was math that had been holding me back, not my ability! I wonder how many people that applies to? If I overcame my fear, then I came to believe others could, too. Since then, I've seen many people who have been denied a career in my field owing to their lack of math skills, and I believe that the abysmal teaching of math is largely responsible. Learning how to do math can be inspiring and insightful -- skills that are useful in a technological world that can be fun and exciting to apply to real-world problems. Why are so many turned off by math? I believe it's because math teachers are among the worst at teaching their subject. Math isn't about cookbook recipes that need to be memorized -- it's about understanding the abstract world that math occupies and being able to use it to solve problems of significance to the world. Math teachers usually suck at connecting the abstract world they inhabit to reality. Most math teachers have no clue why anyone besides a mathematician might need mathematics It's a universal truth that no one can teach a topic they themselves don't understand. Most math teachers feel at home in their abstract world but can't relate to those who seek to see the value of those abstractions in reality! These so-called teachers can't relate to the reality inhabited by their students because those mathematicians don't inhabit it! 
When I was a graduate student, the Engineering School at OU taught several math courses because the Math Department pretty much sucked at teaching those subjects, and the topics were very relevant to engineering. I benefited from several good teachers of mathematics in the Engineering School! If you have math phobia, believe me when I say it's largely been manufactured in your own mind, likely the result of lousy teaching. If you believe me when I say that math skills are useful to you, no matter what your profession, then you should accept the responsibility to learn math in spite of the lousy teachers!! I promise you it will be worth it!!
{"url":"http://cadiiitalk.blogspot.com/2011_06_19_archive.html","timestamp":"2014-04-16T10:09:45Z","content_type":null,"content_length":"121686","record_id":"<urn:uuid:fc49d85e-eb92-4b2b-8ffb-5f758b9f4ece>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
How to measure Wireshark parsing time of PCAP file?

Server Fault is a question and answer site for professional system and network administrators.

Question: I need to find out how to measure the parsing time of a PCAP file when using Wireshark. Does anyone know how to do this?

Answer 1: Not sure exactly what you are trying to do, but it may be impossible to do whatever that is. Calculating it accurately would be impossible without knowing enough of the variables that go into it, and measuring it would only accurately apply to a particular PCAP file on the same system/installation. To get an idea of the variables that play a part in how long it would take to parse, here are some of them:
• the hardware/computer parsing the file (disk I/O, memory, CPU)
• the OS Wireshark is running on
• the contents of the PCAP (number of packets, how much payload, etc)
• the user configurable settings within Wireshark (which data needs to be pulled for the selected columns, coloring rules, etc)

Answer 2: Not sure what you are trying to accomplish but you could try:
time tshark -r foo.pcap > /dev/null
This is just parsing; no filtering is done.
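For readers who want to benchmark this themselves, the shell one-liner in the second answer can be wrapped in a small script. The sketch below is only illustrative: it assumes tshark is on the PATH, uses a placeholder capture file name, and, as the first answer notes, the numbers it prints are only meaningful for that particular capture, machine, and configuration.

import subprocess
import time

# Path to the capture to test -- replace with a real file (placeholder, not from the thread)
PCAP = "capture.pcap"

def time_tshark(extra_args=()):
    """Time one full read/dissection pass of tshark over PCAP, discarding all output."""
    start = time.perf_counter()
    subprocess.run(
        ["tshark", "-r", PCAP, *extra_args],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        check=True,
    )
    return time.perf_counter() - start

if __name__ == "__main__":
    # One-line summary dissection, roughly what `time tshark -r foo.pcap > /dev/null` measures
    print("summary pass:", time_tshark(), "s")
    # -V builds and prints the full protocol tree for every packet, which is more work
    print("full detail pass:", time_tshark(["-V"]), "s")

Comparing the two passes gives a rough feel for how much of the cost is raw file reading versus full dissection, but the caveats in the first answer still apply.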
{"url":"http://serverfault.com/questions/495698/how-to-measure-wireshark-parsing-time-of-pcap-file","timestamp":"2014-04-16T13:05:13Z","content_type":null,"content_length":"64722","record_id":"<urn:uuid:12c78369-30a2-4de9-a6cd-c969d4a97677>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2006 [00600] [Date Index] [Thread Index] [Author Index] Re: Uniform arc length basis curve fitting • To: mathgroup at smc.vnet.net • Subject: [mg67429] Re: Uniform arc length basis curve fitting • From: christopherpurcell <christopherpurcell at mac.com> • Date: Fri, 23 Jun 2006 04:31:59 -0400 (EDT) • Sender: owner-wri-mathgroup at wolfram.com This is a nice problem you are posing, with practical applications. I have the start of a solution here - leaving lots for you to add. The challenge comes in solving for the arc length. I do it here with NIntegrate and FindMinimum, but it's slow, and there must be a faster (* Fit a curve composed from a set of 2 (or 3) Interpolation functions (one for each vector component of the points) to a list of points. The fit works for points in 2D and 3D. The curve is parameterized by a parameter, say u, such that 0<u<1. *) p = {{0,0},{1,2},{-1,3},{0,1},{3,0}}; (* the 2d points to fit, could also be 3d *) (* Define the arc length S of the curve defined by the list of points pts, at parameter tt. *) In[10]:= S[p,1.] (* the length of the curve *) Out[10]= 10.6388 (* Now we re-parametrize the curve, using arc length s as the parameter. Note to sweep out the entire curve, we now have to let s vary from 0 to S[p,1.]. We see the plot looks the same, but it will be swept out using arc length as the parameter. This is painfully slow to evaluate. *) ParametricPlot[Curve[p,(tt /. FindMinimum[Abs[S[p,tt]-s],{tt, 0, 1}] (* We can demonstrate that this is arc length by creating points along the curve, and we see the points are nicely spaced. *) ListPlot[Table[Curve[p,(tt /. FindMinimum[Abs[S[p,tt]-s],{tt, 0, 1}] (* By comparison here we plot points using our initial parametrization, and if you look closely you will see they are not evenly spaced along the curve. *) I leave it to you to implement your tangent vectors, curvatures, etc. christopherpurcell at mac.com On Jun 19, 2006, at 1:01 AM, Narasimham wrote: > How to find slopes, curvature etc. of cubic splines as a function of > arc length? With this example from Help,attempted to find piecewise > derivatives, but it cannot be right as the given pts need not be > spaced > evenly on the arc. TIA. > << NumericalMath`SplineFit` > pts = {{0,0},{1,2},{-1,3},{0,1},{3,0} }; > spline = SplineFit[pts, Cubic] ; > plspl=ParametricPlot[spline[u], {u, 0, 4}, PlotRange -> All, Compiled > -> False]; > "derivative components" > der[x_]:=( spline[x+10^-10]-spline[x-10^-10] ) /( 2 10^-10); > dxu[x_]:=der[x].{1,0}; dxv[x_]:=der[x].{0,1}; > Plot[{dxu[v],dxv[v]},{v,0,4}]; > ">>> slopes >>>" > Plot[ArcTan[dxu[v],dxv[v]] ,{v,0,4}] ; > plder=ParametricPlot[der[v],{v,0,4}] ; > der[2] > Show[plspl,plder];
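As an editorial aside: the Mathematica listing above arrives with some lines truncated, so it will not run exactly as shown. For readers who only want the idea -- reparameterize a fitted curve by arc length and then sample it uniformly in that parameter -- here is a rough Python/SciPy sketch of the same approach. It is an assumption-laden translation, not the original code: it uses a natural cubic spline rather than Mathematica's SplineFit parameterization, a trapezoidal cumulative integral in place of NIntegrate, and a lookup table in place of the repeated FindMinimum calls (which also addresses the "slow" complaint in the post).

import numpy as np
from scipy.interpolate import CubicSpline, interp1d

# Same sample points as in the post
pts = np.array([[0, 0], [1, 2], [-1, 3], [0, 1], [3, 0]], dtype=float)

# Parameterize by u = 0..4 (one unit per segment), one spline per coordinate
u = np.arange(len(pts), dtype=float)
sx, sy = CubicSpline(u, pts[:, 0]), CubicSpline(u, pts[:, 1])

# Cumulative arc length on a fine grid (trapezoidal rule instead of NIntegrate)
uu = np.linspace(u[0], u[-1], 2001)
speed = np.hypot(sx(uu, 1), sy(uu, 1))          # |dC/du| from the spline derivatives
s = np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(uu))))
print("curve length ~", s[-1])

# Invert s(u) once with a lookup table instead of a FindMinimum call per point
u_of_s = interp1d(s, uu)

# Points evenly spaced in arc length along the curve
s_even = np.linspace(0.0, s[-1], 25)
even_pts = np.column_stack([sx(u_of_s(s_even)), sy(u_of_s(s_even))])

Slopes and curvature as functions of arc length then follow by evaluating the spline derivatives at u_of_s(s), which is what the original poster was after.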
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Jun/msg00600.html","timestamp":"2014-04-21T14:58:46Z","content_type":null,"content_length":"37136","record_id":"<urn:uuid:a8ad5f86-52e8-4f41-beaa-c9f96de39df8>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Area of a Region Between 2 Curves

Area of a Region Between 2 Curves. Section 6.1. General Solution ... If 2 curves intersect at more than 2 points, then to find the area of the region ... – PowerPoint PPT presentation
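The listing above is only a stub, but the technique the slides refer to is standard: when two curves cross more than twice, the signed difference f(x) - g(x) changes sign, so the region must be split at the intersection points and the absolute difference integrated piece by piece. A small illustrative sketch follows; the curves and numbers are made up for the example and are not taken from the presentation.

from scipy import integrate

def f(x):
    return x**3

def g(x):
    return x

# Intersection points; known analytically here (x^3 = x at -1, 0, 1). In general
# they can be found numerically, e.g. with scipy.optimize.brentq on f(x) - g(x).
crossings = [-1.0, 0.0, 1.0]

# Split the region at each crossing and integrate |f - g| piece by piece
area = 0.0
for a, b in zip(crossings[:-1], crossings[1:]):
    piece, _ = integrate.quad(lambda x: abs(f(x) - g(x)), a, b)
    area += piece

print(area)  # 0.5 for this pair of curves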
{"url":"http://www.powershow.com/view/3424b-N2M4N/Area_of_a_Region_Between_2_Curves_powerpoint_ppt_presentation","timestamp":"2014-04-16T13:10:04Z","content_type":null,"content_length":"101552","record_id":"<urn:uuid:c2d8156b-04a7-405e-86e1-169f881a07e9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Nonequilibrium thermodynamics and maximum entropy production in the Earth system
by Judith Curry

Albert Einstein on thermodynamics: A theory is more impressive the greater the simplicity of its premises, the more different are the kinds of things it relates, and the more extended its range of applicability. Therefore, the deep impression which classical thermodynamics made on me. It is the only physical theory of universal content, which I am convinced, that within the framework of applicability of its basic concepts will never be overthrown.

Nonequilibrium thermodynamics and maximum entropy production in the Earth system: Applications and implications
Axel Kleidon

Abstract. The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

Naturwissenschaften (2009) 96:653–677 DOI 10.1007/s00114-009-0509-x [link to full paper].

This is the best paper that I've come across that clearly explains nonequilibrium thermodynamics and maximum entropy production with application to the climate system. The paper can probably be understood by anyone with an undergraduate degree in engineering, physics or chemistry. It's a long and complex paper; I will try to do it justice with some excerpts from the background and then cutting to the part that interested me most, on feedbacks:

The parts of thermodynamics that we are usually most familiar with deal with equilibrium systems, systems that maintain a state of thermodynamic equilibrium (TE) and that are isolated, that is, they do not exchange energy or matter with their surroundings. In contrast, the Earth is a thermodynamic system for which the exchange of energy with space is essential.
Earth system processes are fueled by absorption of incoming sunlight. Sunlight heats the ground, causes atmospheric motion, is being utilized by photosynthesis, and ultimately is emitted back into space as terrestrial radiation at a wavelength much longer than the incoming solar radiation. Without the radiative exchanges across the Earth–space boundary, not much would happen on Earth and the Earth would rest in a state of TE. Systems that are maintained far from TE dissipate energy, resulting in entropy production.

The first and second laws of thermodynamics provide fundamental constraints on any process that occurs in nature. While the first law essentially states the conservation of energy, the second law makes a specific statement on the direction into which processes are likely to proceed. It states that the entropy of an isolated system, i.e., a system that does not exchange energy or mass with its surroundings, can only increase, or, in other words, that free energy and gradients are depleted in time. In the absence of external exchange fluxes, gradients would be dissipated in time, and hence, entropy production would diminish in time, reaching a state of TE. To sustain gradients and dissipative activity within the system, exchange fluxes with the surroundings are essential. A steady state of a system is reached when the entropy change averaged over sufficiently long time vanishes.

The proposed principle of MEP states that, if there are sufficient degrees of freedom within the system, it will adopt a steady state at which entropy production by irreversible processes is maximized. While MEP has been proposed for concrete examples, in particular, poleward transport of heat in the climate system, entropy production in steady state is a very general property of nonequilibrium thermodynamics, so that MEP should be applicable to a wide variety of nonequilibrium systems.

MEP and feedbacks

One of the most important implications of MEP is that it implies that the associated thermodynamic processes react to perturbations with negative feedbacks in the steady state behavior. This follows directly from the maximization of entropy production, which essentially corresponds to the maximization of the work done and the free energy dissipated by a process, as explained above. Imagine that a thermodynamic flux at MEP is perturbed and temporarily reduced. This reduction in flux would result in a build-up of the thermodynamic force, e.g., temperature gradient in the case of poleward heat transport. In this case, the process would not generate as much kinetic energy as possible. The enhanced temperature gradient would then act to enhance the generation of kinetic energy, and thereby the flux, thus bringing it back to its optimal value and the MEP state. If the boundary conditions change the shape of the optimum, then a perturbation of the state would be amplified until the new optimum is reached, which could be interpreted as a positive feedback to the perturbation. What MEP states is that the functional relationship itself takes a shape that maximizes entropy production and thereby results in negative feedbacks. This maximization can be understood as the direct consequence of the system to achieve its most probable configuration of states, as in the case of equilibrium statistical mechanics.

This discussion of feedbacks and MEP is quite different from the conventional treatment of feedbacks in climatology, which are usually based on temperature sensitivities.
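Before turning to that conventional framework, it may help to see the maximization in a toy calculation. The sketch below follows the classic two-box picture of poleward heat transport that the MEP literature uses as its standard illustration: a warm (tropical) box and a cold (polar) box exchange a heat flux F, the box temperatures adjust through their radiative balance, and entropy production vanishes both at F = 0 (maximum gradient, no transport) and at the flux that equalizes the temperatures (no gradient left to dissipate), with a maximum in between. All numbers here are made up for illustration and are not taken from Kleidon's paper.

import numpy as np

# Illustrative absorbed solar fluxes (W/m^2) for a warm and a cold box
S_WARM, S_COLD = 300.0, 170.0
SIGMA_SB = 5.67e-8   # Stefan-Boltzmann constant
EPS = 0.6            # assumed effective emissivity, purely for illustration

def temperatures(F):
    """Steady-state box temperatures for a given poleward heat flux F (W/m^2)."""
    T_warm = ((S_WARM - F) / (EPS * SIGMA_SB)) ** 0.25
    T_cold = ((S_COLD + F) / (EPS * SIGMA_SB)) ** 0.25
    return T_warm, T_cold

def entropy_production(F):
    """Entropy production of the transport: F * (1/T_cold - 1/T_warm)."""
    T_warm, T_cold = temperatures(F)
    return F * (1.0 / T_cold - 1.0 / T_warm)

# Scan fluxes from zero up to the value that equalizes the two boxes
F_grid = np.linspace(0.0, (S_WARM - S_COLD) / 2.0, 1000)
sigma = np.array([entropy_production(F) for F in F_grid])
F_mep = F_grid[np.argmax(sigma)]
T_w, T_c = temperatures(F_mep)
print(f"MEP flux ~ {F_mep:.0f} W/m^2, T_warm ~ {T_w:.0f} K, T_cold ~ {T_c:.0f} K")

The negative-feedback behavior described in the excerpt corresponds to the flat top of this curve: once the system sits at the maximum, a small nudge in F changes the entropy production only at second order, and the restored gradient pushes the flux back toward the optimum.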
In the usual analysis, the total change in temperature ΔTtotal is expressed as the sum of the direct response of temperature to the change in external forcing (ΔT0) and the contribution of feedbacks (ΔTfeedbacks): ΔTtotal = ΔT0 + ΔTfeedbacks. If the total change in temperature is expressed as ΔTtotal = f · ΔT0, with f being the feedback factor, then a positive feedback is defined as f > 1, while a negative feedback is defined as f < 1. The feedback framework plays a very important role in the analysis of anthropogenic climatic change.

In principle, one could develop a similar feedback framework using entropy production rather than temperature as the central metric under consideration. The change in entropy production Δσtotal would then be expressed as the sum of the changes due to the external forcings and due to feedbacks: Δσtotal = Δσ0 + Δσfeedbacks, or Δσtotal = f · Δσ0. In steady state, MEP would be associated with f < 1, i.e., a negative feedback as discussed above. In case of changes in the external forcing, these would result in a change in the boundary conditions while the feedback would be associated with the change in internal configuration of the flux and gradients. With a change in external forcing, the tendency of systems to maximize entropy production would then state that, after the change, the feedback factor would initially be f > 1. That is, a small change in the flux would be amplified since the flux is no longer at the MEP state. This tendency would continue up to the point when the flux again reached the optimum value, at which point the feedback factor would change to values of f ≤ 1. This points out that optimality is a strong nonlinear aspect that is unlikely to be adequately treated in a linearized feedback framework. However, more work needs to be done to place MEP and optimality into the common feedback framework.

MEP and Earth system evolution

However, the Earth system has changed dramatically in the past. The early Earth very likely had an atmosphere with a high carbon dioxide concentration and in which free oxygen was basically absent. Over time, carbon dioxide was removed to trace-gas amounts, while oxygen increased substantially during the great oxidation event some 2.3 billion years ago, and again about 0.5 billion years ago, to near current levels. So how can nonequilibrium thermodynamics inform us about how the evolution of the Earth system has proceeded in the past? Kleidon (2009) proposes that the Earth system over time has evolved further away from the planetary TE state towards states of higher entropy production, and suggests that this overarching trend can be used to derive how the Earth's environment has changed through time.

Central here is that the reference states of TE with respect to motion and fluxes of water and carbon, as described in the section "Entropy production by earth system processes" above, are interconnected. TE at the planetary scale would be associated with the absence of large-scale motion, since only in the absence of motion would there be no frictional dissipation, hence, no entropy production by motion. Such a state of an atmosphere at rest would be saturated with water vapor since atmospheric motion acts to dehumidify the atmosphere. A saturated atmosphere in turn would likely be associated with high cloud cover and no net exchange of moisture between the surface and the atmosphere. This implies that there is no continental runoff, and no associated cycling of rock-derived, geochemical elements.
For the geologic carbon cycle, this implies no carbon sink, so that the atmospheric carbon dioxide concentration would be high, in turn resulting in a strong greenhouse effect and high surface temperatures. High surface temperatures would result in ice- and snow-free conditions. Overall, because of the high cloud cover, absorption of solar radiation would be low, as would be planetary entropy production. While it is unlikely that the Earth actually ever was in a state of TE, what is shown in Table 1 nevertheless provides an association of what the Earth's environment should look like closer and further away from a state of planetary TE.

A basic positive feedback between the water, carbon, and atmospheric dynamics was also postulated to be modulated by life: Stronger atmospheric dynamics ("motion") would result in an atmosphere in which the hydrologic variables would be maintained further away from TE, which would imply a drier atmosphere, higher fluxes of precipitation and evapotranspiration, higher ocean–land transport, etc. This in turn would drive the geologic carbon cycle to lower carbon dioxide concentrations, resulting in a weaker greenhouse effect, which in turn would cool the Earth. A cooler Earth could maintain more extensive snow and ice cover, thus enhancing the radiative forcing gradient between the tropics and the poles. This, in turn, would strengthen the atmospheric dynamics and close the positive feedback loop.

This positive feedback would cause fundamental, thermodynamic thresholds in the whole Earth system. These thresholds would imply that planetary entropy production would unlikely increase continuously during the evolution of the Earth system, but in a stepwise fashion. Once such a thermodynamic threshold is reached, the positive feedback would cause the Earth system to rapidly evolve to a state of higher entropy production, after which the system would be maintained in a stable, MEP state. These climatic trends associated with how far the Earth system is maintained away from TE at the planetary level could help us to better reconstruct and understand the past evolution of the Earth system. This would, however, need to be further evaluated, e.g., with more detailed simulation models that explicitly consider the nonequilibrium thermodynamic nature of Earth system processes.

Summary and Conclusions

At the same time, a more solid foundation of MEP is needed. Once this foundation is successfully established, it implies that the dynamical description of complex systems far from TE follows from the maximization of entropy production. This would have quite far-reaching implications for how we model the Earth system and understand Earth system change. It will provide us with a fundamental approach to understand the success of optimality approaches that have previously been used to understand complex systems. Nonequilibrium thermodynamic measures such as entropy production may also be a more useful property to express climate sensitivity than the conventional temperature measures, as it is closely associated with the dissipative activity of the process under consideration. In conclusion, nonequilibrium thermodynamics and MEP show great promise in allowing us to formulate a quantifiable, holistic perspective of the Earth system at a fundamental level. This perspective would allow us to understand how the Earth system organizes itself in its functioning, how it reacts to change, and how it has evolved through time.
Further studies are needed to better establish the nonequilibrium thermodynamic basis of many Earth system processes, which can then serve as test cases for demonstrating the applicability and implications of MEP. JC comments: The 2nd law of thermodynamics is an underutilized piece of physics in climate science. It is not a simple beast to wrestle with, but I think there are some important insights to gain. Optimality, self-organizing criticality, and nonlinearity are factors that are not adequately accounted for in traditional climate feedback analyses, and an entropy-based framework would be more consistent with the climate shifts that are actually observed. With regards to the previous too big to know post, it is this kind of analysis and conceptual framework that is needed to advance our understanding, an idea that provides a blueprint for assembling the bricks into a structure. 364 responses to “Nonequilibrium thermodynamics and maximum entropy production in the Earth system” 1. Thank you for the non-paywalled link here □ Yes, Axel Kleidon’s opening statement, “The Earth system is maintained in a unique state far from thermodynamic equilibrium, . . .” is beautiful and could have been referenced in the 2011 paper Karo Michaelian and I wrote on the origin and evolution of life, “Life arose as a non-equilibrium thermodynamic process to dissipate the photon potential generated by the hot Sun and cold outer space.” □ Humility is the admission price to reality. World leaders print money. Mysterious science solved. □ Leaders of nations and experimental sciences compromised observations in an attempt to control reality after ~1971*. We have limited control over things in their own ego cages: Cause-and-effect controls everything outside – in reality. *Science 174, 1334-1336 (1971); Nature 240, 99-101 (1972); Trans. MO. Acad. Sci. 9, 104-122 (1975); Science 195, 208-209 (1977); Nature 270, 159 – 160 (1977); Science 201, 51-56 (1978); Geochem. J. 15, 245-267 (1981); Meteoritics 18, 209-222 (1983); Astron. Astrophys. 149, 65-72 (1985); Meteoritics Planet. Sci. 33, A97 (1998); ibid., 33, A99 (1998); J. Fusion Energy 19, 93-98 (2001); 32nd Lunar Sci. Conf., paper 1041, LPI Contribution 1080, ISSN No. 0161-5297 (2001); J. Fusion Energy 21, 193-198 (2002); National Geographic Magazine, feature story: “The Sun: Living with the Stormy Star (July 2004)]. 2. “Nonequilibrium thermodynamic measures such as entropy production may also be a more useful property to express climate sensitivity than the conventional temperature measures, as it is closely associated with the dissipative activity of the process under consideration.” Separating work and entropy in the maintaining of the lapse rate is one of my issues. It is also one of the issues that seems to be throwing the Unified Climate guys and the atmospheric mass fans into a Tizzy. Even Willis bouncing around the issue, conductivity. CO2 enhances the energy transfer in collisions of molecules in a mixed gas environment. Call me the Girma of conductivity. :) Which brings me back to MLEV. The Arctic has large enough variations in mixed-phase clouds to be noticeable. I see no reason why the same situation, on a less noticeable scale, is not equally possible anywhere in the atmosphere approaching the same temperature range and moisture level. Low clouds in the Arctic would be similar to a little higher, but still lower with respect to the average radiant layer clouds as you approach the tropics. 
Virtually transparent clouds are still a bit of a thermodynamic mystery and quite capable of being mixed-phase under some conditions. Anywho, that is a part of my crackpot theory, because below that cloud layer region, conductive transfer would be more dominate with respect to radiant transfer that seems to be estimated. □ “CO2 enhances the energy transfer in collisions of molecules in a mixed gas environment. ” So, CO2 acts as an atmospheric cooler, not a heater. I agree with stefanthe denier on this. CO2 is misconceptualized as a GHG from the very start of the theory or should I say hypothesis. □ @ “bollocks” I’ve been wondering about this and wanting to discuss it. I don’t actually see a “refutation” of the GHGs cool the earth idea on, say, SkS, but here is what I think: You get absorbption of the IR via GHGs. That “excites” GHG molecules in various vibrational quantum states. Since they’re constantly colliding with other gas molecules, including non-GHGs, some of that energy is transferred via collisions with the non-GHGs into rotational or translational motion which does not re-radiate significantly, leading to net warming. OTOH, GHG molecules are constantly getting smacked by other molecules and absorbing thermal energy that way, which they are then free to re-radiate, leading to net cooling. I think the key is that for cooling, the probability of the kinetic absorption coinciding with an energy state that allows the GHG molecule to dissipate the energy via radiation, is much lower than the probability that it will transfer some of the absorped IR kinetically. So net warming results. Perhaps in a highly saturated, IR opaque atmosphere, GHGs would provide a net □ BillC, Far from the surface the warming by absorption if IR and transfer of that energy to kinetic energy is almost exactly as strong as the cooling due to the inverse process. That’s what a thermal equilibrium is about. There’s a very small warming effect, because the extra radiation from warmer layers below is stronger than the reduced radiation from cooler layers above, but the net effect is small. Near surface there’s extra IR from the surface and warming is significantly stronger. That’s one of the ways surface transfers energy to the atmosphere. □ Pekka, This appears also to be the mechanism by which increasing CO2 concentrations cool the stratosphere and higher levels…basically, the CO2 is able to absorb more collisional energy here and release it by radiation, than it can absorb upwelling IR and release it via collision. □ It’s interesting what it says about rates. If I am an individual CO2 molecule near the surface, bumping into other molecules every nanosecond(?) or so, it is still more likely that I will absorb an IR photon between collisions, than collide and gain enough energy to radiate an IR photon (and even if I do, I radiate it in all directions, whereas the “incoming” radiation is greater from below than above). Yet I imagine that collisions of all sorts must happen more frequently than IR absorption (since we are talking about near-saturated conditions), and so the PDF of my energy gain from any collision must reside pretty far below the energy of the relevant IR photons. □ Stratosphere is, indeed, different, in particular the upper stratosphere that’s heated by solar UV. There the temperature of air is significantly higher than the temperature equivalent of the intensity of IR. Gas is heated by UV and looses energy by IR the coupling between them is rather weak as the collision rate is relatively low. 
Under these conditions the “temperatures” related to UV induced molecular excitations, kinetic energy of the molecules and molecular excitations that correspond to the IR energies are all significantly different. The different degrees of freedom are not in thermodynamic equilibrium ot even close to that as they are in the denser troposphere. I put the word “temperature” in quotes, because the temperature is not perfectly defined without full local thermal equlibrium and the deviations of the occupancies of the vibrational states are also a breakdown of local thermal equilibrium. □ Pekka Sreekanth Kolan extends to above 11 km Robert Essenhigh’s detailed thermo model of the lapse rate. See: Study of energy balance between lower and upper atmosphere The variation in temperature is shown in his Fig. 17 p 53. From the temperature vs. ozone mole fraction analysis, it is concluded that the temperature and ozone concentration are related and more research needs to be done in that area to be able to determine the relationship. Do you have any thoughts on his development? □ David, The Master’s Thesis that you linked is quite interesting and has some results that I haven’t seen elsewhere. The altitude profiles shown in Figure 16 are perhaps the best example. The model used is, however, based on a simplified integral equation, which is known to be only approximately valid as stated also in the paper and in Essenhigh’s earlier papers. It’s a nice simplification, which gives certainly qualitatively correct results and some new insight into the phenomena. Still it’s not accurate enough to serve as a real alternative for more accurate numerical models. It may be very near the best that can be achieved by so simple methods, but not more than that. The most severe approximation is probably the use of a single gray-body equivalent absorption-emission factor for the atmospheric gas rather than a set of banded absorption factors. As in most cases, where such simplified models are used, it’s not easy to estimate, how far the quantitative results are from the correct ones or where they deviate most seriously. The insight obtained by the simple models is useful, but more accurate models must be used to check the quantitative validity of the insight. All the above applies to, what we can learn from the basic assumptions comparing simpler approximate solutions of these assumptions to more accurate calculations. The further and very essential problem concerning the validity of the basic assumptions applies to both. Therefore I consider it an objective statement that Essenhigh’s approach is less accurate than many alternatives. That statement by itself is not a claim on the good accuracy of any model in describing the real atmosphere, only a comparison of two classes of models. The comment on the relationship between the temperature and ozone concentration appears to apply to the approach of the study rather than science in general as there are numerous other studies where the properties of the stratosphere are analyzed, while the paper doesn’t link to any of these. Here we see again that this is a Master’s Thesis, not a fully finalized scientific paper, where such a comment without references would not be acceptable. □ Thanks Pekka These could be enhanced by Ferenc Miskolczi’s Line By Line quantitative infra red absorption models. He includes all 11 greenhouse gases across 3490 IR lines with 9 different view directions, for 150 layers etc. 
□ David, I think that it’s not possible to use Essenhigh’s approach with line by line absorption coefficients. That’s at least what I remember from the time I looked more closely at his work. □ Pekka Is the difficulty with using LBL absorptivity evaluation because that would vary the absorptivity with elevation, making it difficult to do the analytical integration? Could the method otherwise be used to with numerical integration? □ The full equations with wavelength dependent absorptions cannot be solved analytically. Solving them directly leads essentially to those numerical methods that have been developed over years by scientists and that are in regular use like Modtran. One of the major differences is due to the fact that a fraction of radiation penetrates long distances in the atmosphere or even through tho whole atmosphere without being absorbed on the way. The integral equation is based on the assumption that all radiation has a free path that’s so short that the temperature is only little different at the points of emission and □ As my Venus/Earth temperature comparison definitively demonstrates, increasing atmospheric carbon dioxide neither increases nor decreases atmospheric temperature. Adding more carbon dioxide (Venus has 96.5%, to Earth’s 0.04%) merely increases the efficiency, thus the speed, with which deviations from the thermodynamically predominant, gravitationally-imposed lapse rate structure are dissipated. The Standard Atmosphere rules, the atmosphere is stable — not balanced on a knife edge between runaway heating and ice age as “consensus” science believes — and particularly so against changes in carbon dioxide concentration. These are the simple facts provided by the proper comparison of Venus and Earth, which current science needs to face, but which everyone seems determined to ignore. □ “which current science needs to face, but which everyone seems determined to ignore” because it’s bollocks □ @lolwot – “which current science needs to face, but which everyone seems determined to ignore” because it’s bollocks Can you elaborate on why this comment is bollocks? □ yes □ See my response above to Sam NC, which probably should have been a response to Capt. (or lolwot or Veritas) in order not to lose the threading. 3. This is an interesting perspective by Kleidon – one that he has apparently developed over many years. It will take me a while to digest the entirety of the full text version (which is worth visiting). My impression is that the MEP is reasonably consistent with existing data, but oversimplifies. For example, Kleidon illustrates one element by postulating an MEP feedback relationship in which surface warming induces an increase in cloud cover, but that appears to be an overgeneralization – see for example Variations in Cloud Cover. Apparently, the climate system is more complex than encompassed in Kleidon’s perspective. This doesn’t invalidate the MEP as a principle, of course, but it raises two questions: (1) is the MEP falsifiable, or is it so flexible that it can be adapted to fit any set of observations?; and, related: (2) is the MEP a more useful construct for understanding climate dynamics than the more conventional approaches? It should be noted that the two are not obviously in contradiction, and if the MEP is flexible enough, they never will be. 
(The issue of falsifiability has of course been raised many times in these threads, but relating it to mainstream climate science is a topic far too broad to resolve here) For an alternative to Kleidon’s perspective, although not an outright rejection, see Ken Caldeira’s editorial in Climatic Change. □ Kleidon discusses Caldeira’s argument in the paper □ Yes, his comment is as follows: “Kleidon (2007) points out the lack of appreciation of the thermodynamic nature of Earth system processes and several misunderstandings. Caldeira (2007) raises the comment that thermodynamics and MEP ‘may be true, but trivially so.’ Without going into much further detail on these discussions, one issue that becomes clear from these criticisms is that the thermodynamic basis of the Earth system far from TE seems to be mostly misunderstood and needs to be clarified further. At this point, no one would argue that MEP is well established and that applications are without their I think that’s an appropriately cautious statement, although exactly by whom the thermodynamics are “misunderstood” isn’t specified. I think it’s the “limitations” Kleidon refers to that need to be better defined by testing against real world data. I also want to reread the paper to better appreciate some of the specific principles that Kleidon calls on to develop his MEP □ Fred, I am going to have to either get better glasses or fix my printer, but it deserves a good pouring over. From what I have read so far, it tends to agree with what I would expect from a non-ergodic system. The biggest problem I see is the magnitude of the various impacts with changing conditions. If CO2 forcing is over estimated for current conditions, which is what I suspect, the relative magnitude of the various feed backs take on a different light. □ I’m not sure what point you’re making, Dallas. Kleidon doesn’t quantitatively go into the magnitude of feedbacks, nor does he dispute mainstream estimates of their magnitude. This is one of the reasons why it’s not clear how his approach would differ in its ultimate conclusions from mainstream approaches. □ Fred, Kleidon’s approach uses the 2nd law of thermodynamics. The mainstream approach uses the first law of thermodynamics. Different physics are in play here, there is no reason to expect the same answer, but also no evidence here of a different answer. The linear approach that evolves from energy balance models is almost certainly oversimplistic, IMO. □ It’s my impression that mainstream approaches recognize the need to conform to both laws of thermodynamics. I’m not sure I see the particular relevance to energy balance models here, but if their limitations are a concern, this must be spelled out mathematically, explaining what and by how much a preferable approach would deviate. I actually think that those who utilize these models have done that to a commendable extent (e.g., Gregory and Forster, Padilla et al, and others), and have provided uncertainty estimates that are reasonable in regard to transient climate responses, but if anyone disagrees, then we need the exact numbers that represent the area of disagreement.. □ Nope, 2nd law does not appear in the traditional analyses of feedback. □ The paper parallels some of the things I have been trying to quantify. 
When I work from the glacial conditions to today, I get a better fit for the CO2 temperature change with an indication that CO2 is approaching the limit estimated by Calendar in the 1930s and going back to Arrhenius’ paper, his estimate, once the overly optimistic H2O feedback is removed, (the 1.6 (2.1) with water vapor). I don’t know if you are aware, but the approximate concentrations in his 1986 paper are 187PPM for the 0.67K and 420PPM for the 1.5K, where K is the CO2 concentration he used relative to his time. It is listed in the last table of his paper. You can compare the current observed to his estimates by latitude which tends to drive me toward Calendar and Manabe. That significantly reduces the surface impact of CO2 doubling. approximately 0.8 to 1.2 C, which changes the relative magnitude of the feed back potential of clouds and even my radical conductivity angle, though that is an approximate millennial scale feedback. It is interesting to me, but I am looking at the entire range of climate, glacial to interglacial not just end of the interglacial. □ Judy – I would argue that the Second Law is critical to feedback analysis, because it’s the basis of the Planck Response (via Stefan-Boltzmann) that limits feedback amplification of forcing to defined levels rather than allowing runaway climates due to positive feedbacks from water vapor and other moieties. It’s implicit in standard feedback parameter estimates, and is reflected in the Taylor series describing feedback iterations of fractional value f that lead to the parameter 1/1 – f by which no-feedback responses are multiplied. Without the Second Law, the other feedbacks would destabilize climate rather than simply leading to stabilization at a new temperature. □ I agree 2nd law is critical to feedback analysis, but I don’t see the 2nd law explicitly used in what you discuss □ I don’t think he means feedback in the sense that it’s used in climate sensitivity. I think he’s using it to mean the way a steady state is achieved. I think this does need to have a few more □ P.E. – I agree that Kleidon is using feedback as the process generating a steady state, and that “feedback” as typically (but not invariably) used in climatology lingo only refers to part of that process. Nevertheless, the final principles are the same, because the part often excluded from the formal use of “feedback” in climatology is the Planck Response- the tendency of a warmer body to shed more heat in accordance with the Stefan-Boltzmann equation and the Second Law of Thermodynamics. Even though it is not called a feedback, the operation of the Planck Response is fully incorporated into climate models and feedback analysis. Therefore, from both the standard and the Kleidon perspective, the climate follows what in standard control theory would be referred to as negative feedback. It’s too bad climatology has adopted its own terminology, but we have to accept it because it’s currently too deeply entrenched to change. Some references to feedback, however, do in fact include values for the Planck Response – e.g., Soden and Held 2008. □ In response to Dr. Curry, I agree that the Second Law, explicitly addressed in the MEP approach, is not explicit in conventional feedback analysis. □ You’re reading it the same way I am, Fred. It might have some impact on climate feedback, but it’s not obvious that it will. □ All of the thermodynamics is totally dependent on the use of the second law. With the first law alone almost nothing can be said. 
As a specific example the derivation of the adaibatic lapse rate is directly linked to the second law as the concept “adiabatic” gets quantitative meaning only through the second law. One can proceed without an explicit reference to the second law, when one uses concepts, which contain implicitly the second law, but that doesn’t mean that the second law would not be used in a totally essential way, when that’s done. □ “It will take me a while to digest the entirety of the full text version”. Kleidon has obviously complicated simple thermodynamics very succefully! I admit I did not bother to go for the full 4. This paper is a good follow-up to the Too Big post because it applies free energy arguments to demonstrate how to simplify our thinking about perturbed steady state systems. Notice how Kleidon aggregates entire categories of dissipation structures to extract the pertinent info. I use the maximum entropy principle to estimate distributions in many applications and can see how the maximum entropy production principle is a natural progression to the basic idea. The http:// AzimuthProject.org blog has recent posts on this topic and the related minimum energy principle. I sense that there is momentum in the belief that the general approach will help solve some of the climate change problems mired in complexity. As a nitpick, Kleidon probably should have used the acronym MEPP to distinguish it from MEP. There are three different principles with the same acronym (Maximum Entropy Principle, Maximum Entropy Production, and Minimum Entropy Production). □ WHT, any specific links at azimuth we should be looking at? □ The Azimuth post called Quantropy is interesting, and the comments are still active: John is trying to unify the methods that get to an energy minimum (dynamics, MEPP) and the methods used at an energy minimum (statics, MEP). It’s fun to follow because they never know where the math will take them. 5. The believers will be displeased. □ The deniers will be displeased also, CO2 has an impact, just not the exact impact predicted. If it is any consolation, land use has a much bigger impact :) □ Capt, the numbers who believe CO2 has no role at all has been inflated far more than those who believe CO2 is THE driver of climate. The biggest points this paper makes imho is that the science is not only not settled, it is not even well defined by the AGW leaders. The other is to also down the positive feedback runaway tipping point boogeyman. □ Hunter, I agree, though I am not sure how inflated the numbers may be. □ From both the empirical and modelling work Iv seen on land use, I dont know how you arrive at such a certain conclusion. its hardly a settled matter □ Those who scorn the CAGW consensus, because they don’t get basic radiative physics, are right for the wrong reason. If they are voters, then we can call them useful idiots. □ Steven Mosher said, “From both the empirical and modelling work Iv seen on land use, I dont know how you arrive at such a certain conclusion. its hardly a settled matter” It probably will never be a settled matter. It is hinted at quite strongly though. Since you have all the surface station data, if you compare minimum temperatures for true rural, the remote state and federal parks with suburban, farm land and urban, you should see a land use related trend. The UHI is obvious, but weighted properly should be included. How significant that trend is would depend on how significant the true CO2 trend is. 
Your estimate is 1.5 for doubling, mine is 0.8 at the surface for a doubling. □ Capt., I don’t think he has any data from areas that are truly rural. They did not put thermometers, where there weren’t any people to read them. Remote sensing posts in parks, etc. are a recent development. See the RAWS system: Not much to work with. □ Funny how any uncertainty has to work in one direction. Once again no-one even suggests that uncertainty here might make climate sensitivity even higher. Everyone just presumes it would make it lower if anything. □ lolwot, sensitivity can get be greater at times and less in others since the response times of the mixing layers vary. This just indicates that there are limits, maximum/minimum entropy, which are not exactly easy to figure out for each variable. □ lolwot, Yes, for decades the team’s consensus was never doubted. Now it is clear that they were massively wrong. The science is not even well framed, much less settled. It is astonishing that you have forgotten the Lovelock/Hansen school of fear mongering, with Earth becoming Venus. We have had plenty of push to the extreme side of sensitivity. Now we are presented with a sound case that shows this to be wrong, but you are complaining? It seems you are only really complaining about your side not controlling the conversation so much. □ “Yes, for decades the team’s consensus was never doubted.” Not true. The dominant theory has always been questioned and always will be. Every year alternative ideas are produced that either proclaim to overturn the dominant theory, or potentially could. From “Pressure-induced Thermal Enhancement” to “Skydragons” to cosmic rays. All these ideas hold the possibility of changing the mainstream position on climate sensitivity if they turn out correct. But they also have a history of being shot down in droves, which is why the mere existence of such alternative ideas doesn’t alter my acceptance of the dominant theory one bit. Until such alternative ideas actually get traction in the scientific community and become widely accepted I will regard them as unlikely possibilities. Entirely compatible with my view that there is an *unlikely chance* that climate sensitivity is low. Some people on the otherhand will cling to these alternative ideas and overplay their likelihood of them turning out true so they can pretend the dominant theory is in severe doubt. Yet if these alternative ideas *were* so challenging to the dominant theory then when they fall (which many of them do as mentioned above) it should provide a credibility boost to the dominant theory – ie it has just survived an important challenge. But certain people will just throw alternative ideas down the memory hole when they fall and move on to new ones while not altering their perception of the dominant theory at all. □ Is the part about “controlling the conversation” an admission that you only talk about these alternative ideas so that the subject matter is anything but the dominant theory? As if we had thousands of threads about the greenhouse effect being a fraud somehow that would rub off enough uncertainty that we could excuse ourselves for not accepting the GHE? □ Certainties or uncertainties here have no effect whatsoever on climate sensitivity. In your mind you might be a all deity, but in reality, you are just a dumb human like the rest of us. 6. JC wrote (emphasis by this poster); “The 2nd law of thermodynamics is an underutilized piece of physics in climate science. 
IT IS NOT A SIMPLE BEAST TO WRESTLE WITH, but I think there are some important insights to gain.” With all due respect, the Second (and also the First) Laws of thermodynamics are in fact very simple beasts to deal with. If your hypothesis or analysis or model appears to violate the Laws of Thermodynamics it very likely does (99.999%) and you should step back from your computer screen and start over in a few weeks. When I present my engineering design for a peer review (yes, we do those too) I would be ashamed if my peers suggested that my design violates the Laws of Thermodynamics, that is like the ultimate shame, I would slink away. Apparently in the climate science field when professionals from other fields suggest that your theories may violate the Laws, the response is to ridicule the suggestors, Well….. you can take that approach if you want to, but the Laws of Thermodynamics will “bite you (someplace uncomfortable)” if you choose to ignore them. Back in the day all engineering curriculums required a “thermodynamics 101” course which (if you passed it) gave you plenty of tools to “wrestle with the beast”. Perhaps climate science curriculums should add this course? There are lots of textbooks and the basics have been “settled science” for about two centuries. The basic summary is; Heat (also water) flows to colder (lower) locations, it does this all the time, at all locations. It does this at different velocities depending on the material(s) it is travelling through. Electromagnetic radiation (i.e. Infrared Light) travels through the system at about the speed of light, which is SIGNIFICANTLY faster than any known velocity of heat flow. Any climate science hypothesis that is described with or alleges “Net Energy Gains” or “Extra Energy” violates the First Law and properly trained engineers shake their heads when they hear these In summary, my hypothesis is that increases in “GHGs” in the atmosphere only cause the Gases in the atmosphere to warm up/cool down faster when the arriving amount of energy increases/decreases (i.e. sunrise/sunset). In the Electrical Engineering field we refer to this as the “response time” of a circuit/system. Cheers, KevinK (MSEE, Georgia Tech 1981) □ I think the main issue with Climate science and the second law is the merging of control theory. When they use the feed back parameter 1/(1-f). Perfectly proper looking, but f should be limited to the range 0 to 0.5, to allow for minimum entropy, or the perfect return for isotropic greenhouse gas. At least that is what I come up with using multi disc models. □ Well, your hypothesis needs work. Modifications to the Planck response by GHGs can obviously change the average temperature of an emitting body. This is perfectly in keeping with energy conservation and the laws of thermodynamics. Statistical mechanics is an outgrowth of this, and climate scientists do understand these fundamental principles. □ Web, Of course, they can, but what is the limit? I say the limit is 2 for perfect insulation or perfect return of OLR. Where do that require modification? Venus? □ Dear WebHubTelescope; Please define exactly what you mean by “Modifications to the Planck response by GHGs can obviously change the average temperature of an emitting body.” My understanding is that the “Planck response” is a model that predicts the spectral content of the radiation emitted by a surface. This model is in fact pretty good, but NO REAL emitting surface EXACTLY matches this model (not the SUN, not the Earth, not Edison’s lightbulb). 
In any case the “GHG’s” are not capable of modifying the “Planck response” of a surface UNLESS they can raise its temperature. Per the Second law a COLDER MEDIA CANNOT RAISE the temperature of a WARMER MEDIA. Yes I know that a colder media can deliver heat to a warmer surface (i.e. “warm” it) but unless the colder media can slow the rate of cooling enough to cause the warmer media’s temperature to rise it cannot modify its Planck response. My hypothesis states that the “GH” effect does not slow the rate at which a surface cools, in fact by displacing “non-GHG’s” the “effect” actually INCREASES the rate at which a surface cools (or ironically enough the rate at which it warms), albeit by such a small amount we will probably never be able to measure it. Climate scientists seem to have confused two effects, one is the rate at which a surface emits radiation while cooling (the number of ping pong balls I can throw every minute) with how fast the radiation travels away from the emitting surface (the speed of the ping pong balls, some of which return (i.e. back radiation) which I then throw again). Climate scientists have done a good job on calculating effect number one, while completely overlooking effect number two. To truly know how many “extra” ping pong balls there are left at the surface you need to consider BOTH effects. Cheers, Kevin. □ The ping pong analogy is for ding dongs. It all sounds like SkyDragon talk. □ Hey there WebHubTelescope, I certainly appreciate your equating my ping pong ball analogy with the “ding dong” ball case. However, please note that the the believers in the “greenhouse effect” have failed after THREE FULL DECADES to observe their alleged effect. So, please continue with your beliefs, but the REAL WORLD is not accommodating you at this time. Cheers, Kevin. 7. I don’t quite understand this post. It is intriguing that there is another maximization principle we can use in our modeling. These usually give rise to superior methods because they conserve critical quantities. For example the Bateman variational principle in fluid dynamics. However, and this is a critical point, these principles are ONLY useful when they are discretized and used to predict dynamics. These principles are very weak constraints in terms of what the actual behaviour is. To state that energy is globally conserved tells us virtually nothing about a system of any importance. To say that mass, momentum, and energy are conserved over EVERY control volume tells you virtually everything about the system. One thing is indeed true and that is that formulating things in terms of an optimality condition subject to constraints can yield a tremendous variety of ways to look at the problem, most are of no utility whatsoever. □ David – Like you, I’m concerned about the utility of the Maximum Entropy Production (MEP) Principle. To me, there is great appeal in the notion that the principle of maximum entropy defining an equilibrium state can be extended to MEP for a steady state. I’d like to have it turn out to be true, and I want to reread Kleidon for more insight into the evidence so far, which I take to be inconclusive. On the other hand, I suspect it’s easier to compute what maximizes entropy to produce an equilibrium than to compute what might maximize entropy production to produce a steady state – I’m not sure about this and would like to learn more, but the number of possible real world variables may be too formidable to render the concept very useful. 
It’s also true that our climate is affected by external factors that follow their own entropy considerations. These include varying solar output and Earth/sun geometry, as well as tectonic shifts that affect circulation patterns and the carbon cycle. .Sorting out the contributions of these factors vis-a-vis internal climate MEP will be difficult. A critical question will be whether MEP can help us understand or predict climate dynamics better than standard approaches. Predicting the past may be a rough guide, but a real test of competing theories is how well they predict future events. Given the slowness of climate change, that may require some time to find out. □ When you start documenting everything that can be potentially modeled with the maximum entropy principle, the list starts to get impressive. For example, distributions of: Wind speed Wave height Atmospheric pressure Planck response This is directly a consequence of nature tending to disorder, and as Jaynes explains the close ties between maximum entropy, statistical mechanics, the second law, and conservation laws. Applying constraints under uncertainty is what makes the approach so practical. In many cases, all we know are the moments of physical observables, such as the mean, and that is perfectly adequate for the maximum entropy principle. □ WHT, But the question remains: How should the problem be framed? What is the system, whose entropy production is considered and what constraints are applied to restrict immediate transition to the state of maximum entropy. The Kleidon paper is a live example that framing the problem is rather arbitrary and results are totally dependent on the particular choices made. That becomes obvious, when one starts to go through it’s basic example and to ponder, why it’s set up as it is. I’m sure that there are problems, where some particular choices are more natural than some others, but the problem of arbitrariness remains at some level. I haven’t looked at your work, and don’t say anything on that, but the Kleidon paper is not to the least convincing. □ Pekka, It is true that some of the variational problem solving approaches can seem kind of arbitrary, and that is why I have been chipping around the edges instead of trying to solve the whole ball of wax (so to speak). Take the example of wind energy, for example. Would you tend to believe that the aggregate wind energy summed over the entire planet approaches a constant value? I can’t imagine this amount of kinetic energy fluctuating that wildly. In my interpretation, this leads to a spatio-temporal distribution of wind speeds which complies with a maximum entropy principle. That is one chip off the block, and the entire system can get similarly decomposed. Kleidon, in my view, may be treating these as interlocking pieces in the bigger puzzle. 8. One of the seminal papers in this field – proposing a maximum entropy principle for poleward energy movement – was by Garth Paltridge in the 1970s. Paltridge is a prominent skeptic. □ Paltridge looks to have run into the same problem most scientists have, it’s complex. Different boundary conditions lead to different preferred steady states which leads to the extremal One of the most difficult to figure out, laminar flow regions, has a long history of aggravating scientists and engineers. Another is chemical reactions that are reversible. Add the right amount of the right kind of energy and the preferred direction can change. 
The criticism of Paltridge is mainly that he misinterpreted the extremal principle, which I would think is open to pretty wide interpretations, since extremal principles would lead to: “At present, for this area of investigation, the prospects for useful extremal principles seem clouded at best. C. Nicolis (1999)[52] concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production” (Wikipedia), which is funny to me, because MEP and extremal principles tend to indicate exactly what happens in the climate, non-ergodic behavior. So the question I think remains, what trips the switches? 9. No one can go too far wrong starting with the fact that Einstein’s appreciation for classical thermodynamics would have from the outset distanced him from the pseudo-science runaway global warming fearmongers. □ There may well be very few skeptics who actually believe that CO2 does not cause a “greenhouse” effect. There are certainly no mainstream scientists who believe that runaway global warming is likely. Dangerous warming yes but runaway no. □ So Hansen is no longer mainstream? Glad you cleared that up. And this guy is not mainstream? So will climate scientists come out and condemn this founding member of AGW? The IPCC Fourth Assessment Report talks about runaway climate change. “Anthropogenic warming could lead to some effects that are abrupt or irreversible, depending upon the rate and magnitude of the climate change.” That sure sounds like a tipping point. And certainly those of us who have followed the devolution of the IPCC would agree that it is not a mainstream science group, but rather a political marketing group. □ Is this one of the kook, non-mainstream scientists you were talking about? □ Neither a tipping point nor abrupt climate change corresponds to runaway global warming, and saying that “over a few centuries it is conceivable that …” is not equivalent to saying that it is.
What part of Judith’s statements, “Nope, 2nd law does not appear in the traditional analyses of feedback”, and “Fred, Kleidon’s approach uses the 2nd law of thermodynamics. The mainstream approach uses the first law of thermodynamics. Different physics are in play here, there is no reason to expect the same answer”, do you not understand? □ I don’t get the different physics are in play here part. Maybe they should use this equation instead of picking either the first or second law dU = TdS – PdV “The different physics are in play here” is the fundamentally most perplexing thing Judith has written on this blog, in my opinion. LOL, you have demonstrated the mistake many have made. Energy can take many forms (heat, kinetic, potential, entropy, etc), what makes you think entropy can turn into any form of energy you can do anything with such as kinetic, potential etc. □ This equation is nice for reversible processes, doesn’t hold tho for irreversible processes which are at issue here. □ bob droege said; what makes you think entropy can turn into any form of energy you can do anything with such as kinetic, potential etc. I never said that it did! That is what entropy is, missing energy. Energy that in the past could be observed, measured and accounted for, that after going through some process, can no longer be observed, measured and accounted for. Did you read the link I posted? Here, I’ll post it again. The title of the article is…. Where does the Entropy go? If you know where energy is, then it is not entropy, but enthalpy. □ The deep fundamental questions related to black holes and entropy are so remote from more common occurrence of the concepts of energy and entropy that they can safely be forgotten. Thinking about them will only add to the confusion that the concept of entropy creates in engineering or in understanding the atmosphere. Entropy is related to energy, but it’s not a form of energy. The entropy can change – and tends to change – even when energy stays constant. That’s just the second law: The entropy of a closed system increases although by definition the energy of the closed system is constant. (The second law allows for constancy of entropy, but that’s only an idealization that’s never The increase of entropy is due to two types of changes. In the first other types of energy are transformed to heat, in the second heat flows from higher temperature to lower. Both involve addition of heat somewhere but the source can be either higher temperature heat or some other form of energy like chemical energy, potential energy, kinetic energy of macroscopic volumes of matter or electricity. □ As a further hint, other really practical applications of entropy exist when one starts thinking about energy spread in terms of probability distributions. Sharper distributions have lower entropy and broader distributions have higher entropy, corresponding to more disorder. This gets to the dual views of entropy, corresponding to those who want to venture down the statistical mechanics path (probabilities, etc) and those that want to stay in the short-hand thermodynamics realm. □ Web said, “Sharper distributions have lower entropy and broader distributions have higher entropy, corresponding to more disorder.” Very true, one of the reasons I mentioned the super El Nino signature. Fred interprets that as a phenomenon where the ocean releases more heat. I interpret that as where radiant forcing reached minimum entropy, (or close anyway, because of the signature.) 
What that minimum radiant entropy value is, I would think would be a fairly valuable thing to know. This is the 185K or 65Wm-2 puzzle I mentioned. If Venus has THE maximum greenhouse effect, its black body temperature, 185K may be the minimum radiant entropy. What was the minimum temperature of the tropopause during the super El Nino? □ Pekka Pirilä said; A lot of nonsense. The purpose of posting the article was not to propose that studying entropy from black holes is pertinent the climate debate directly, but to help explain the concept of just what one is talking about when discussing entropy. All thermodynamic processes have entropy, everything from simple flow into steam tanks to black holes. Any form of energy that you can locate is not missing and therefore is not entropy. That is why Sean Carroll (a top theoretical physicist from Cal Tech) is asking where the heck the missing energy(entropy) has gone. He’s asking (and stating his theory on just where he thinks it goes) because nobody knows. Get it? “Entropy is related to energy, but it’s not a form of energy.” Entropy is energy loss that result from any thermodynamic process. Period. Entropy is energy loss that result from any thermodynamic process. Period. Barring the fact that entropy does not have the same units as energy. □ .. and barring also that energy is conserved and that therefore no thermodynamic process loses energy. □ WHT said; “Barring the fact that entropy does not have the same units as energy.” Nonsense. The energy of the enthalpy and entropy involved can have whatever units of measure you want them to have. Pekka Pirilä said; “…..energy is conserved and that therefore no thermodynamic process loses energy.” Well since you seem to think that no energy is lost, why don’t you tell us just exactly where the entropy is, so that the rest of the world can finally gain this useful knowledge. Unless of course you think that there is no entropy in the Earth’s climate system. You stated earlier in this same thread, “All of the thermodynamics is totally dependent on the use of the second law. With the first law alone almost nothing can be said.” Yep. The first law is only an idealized statement of little use. The second law is what applies in the real world. You seem to have forgotten this since you wrote it a few hours ago. □ Entropy is not a measure of the total amount of energy, it’s a measure of how far the distribution of energy to its various forms is from the equilibrium that extends throughout the system. Its sign is defined in such a way that the maximum value is reached at the equilibrium. As long as we are not at the equilibrium, the deviations can be taken advantage of. Temperature differences can be used to drive an engine, chemical energy to create temperature differences or perhaps to produce electricity in a fuel cell or battery, the possible uses of electricity are well known, Deviations from equilibrium drive also natural processes like weather phenomena or ocean currents. A closed system moves naturally closer and closer to equilibrium. In that it’s entropy increases asymptotically towards its maximum value given the constitution and the total energy. There are other concepts like free energy and exergy, which are related to the same phenomena. They have the unit of energy and they do indeed disappear, or diminish, when entropy is increasing, while the energy itself is conserved. □ Pekka, good overview. I wanted to add that entropy is in some sense hierarchical as well. 
As you describe, the global fluctuations can give rise to wind. And this wind can also show Maximum Entropy locally via the Rayleigh distribution of wind speeds. And within a small volume of moving air, the mixing of the contents will also follow the principle of Maximum Entropy. So this follows through many levels of coarse graining, which makes it a wonderfully intuitive concept. □ Pekka Pirilä said; “Entropy is not a measure of the total amount of energy, it’s a measure of how far the distribution of energy to its various forms is from the equilibrium that extends throughout the system. Its sign is defined in such a way that the maximum value is reached at the equilibrium.” The sign of the entropy you say? That is beyond nonsensical. I won’t belabor the point, but the idea of negative entropy is something that implies the thermodynamic processes of the universe can run in reverse to what they are doing now. This would be like driving your car and having it fill the tank by itself while you drive!! I have explained my opinion on the subject as well as I know how. I’ll leave you to it. You can have the last word if you want. “The sign of the entropy you say? That is beyond nonsensical. I won’t belabor the point, but the idea of negative entropy is something that implies the thermodynamic processes of the universe can run in reverse to what they are doing now. This would be like driving your car and having it fill the tank by itself while you drive!!” I think Pekka is referring to the concept of relative entropy or cross entropy, where the entropy values can go negative, as it is used to compare the relative strengths of two competing probability distributions. That is related to techniques that statisticians have long used, such as log-likelihood or maximum likelihood, to establish confidence in a model. Bottom line is that if a statistical measure can’t go negative, there is a hole in your quantitative toolbox. “I have explained my opinion on the subject as well as I know how. I’ll leave you to it. You can have the last word if you want.” This is no longer for your benefit but for those that want to advance our understanding. □ WHT, That wasn’t part of my comment. The traditionally defined entropy of a finite closed system is bound on both sides, by zero from below and some maximum from above. The lower limit was irrelevant for my comment as the entropy is always increasing getting further from zero and closer to the upper limit. □ I agree if you say the sign is set by the sum of p * ln(p) and since p is between 0 and 1, then the leading sign is negative to keep the absolute entropy always positive. It looks like the CSI guy Gil thought he was onto something based on flimsy forensic evidence. 11. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. Are the given constraints and maximum possible rate measurable, so that the claim is potentially falsifiable? In my brief studies of thermodynamics, equilibria and energy balance very seldom predict rates: nitroglycerin and gasoline, to pick two examples, may sit around for a very long time without producing measurable entropy at all, and then produce it at a very high rate; sugar produces entropy at different rates depending on whether it is burning in an oxygen atmosphere or powering biological processes via enzymes.
Enzymes (and other catalysts) can change the rate of entropy production by factors of about 10^15 — are enzymes “prevailing constraints”? Are blasting caps and spark plugs “prevailing constraints”? Photosynthesis uses solar power to reduce entropy locally, though the production of the solar energy in the sun increases entropy globally. The second law specifies an inequality, and as long as the inequality is satisfied it imposes few restraints on rates by which mechanisms operate. In the daytime, warming and photosynthesis occur, and at night cooling occurs; depending on the rate of energy influx in the daytime and the rate of energy efflux at night, the net entropy change over many cycles may be negative on Earth, as may have happened when atmospheric CO2 was sequestered in the first place. That’s not the only way that energy inflow may produce persistent structures in place of random variation. Is there even one potentially falsifiable new claim about the effects of changing CO2 concentration on Earth energy transfer? Last question, nonlinear dissipative systems with fluctuating inputs don’t generally produce equilibria, so is the concept of “driven to steady state” even applicable? □ “Last question, nonlinear dissipative systems with fluctuating inputs don’t generally produce equilibrium, so is the concept of “driven to steady state” even applicable?” A. No. It should read “driven to unattainable equilibrium”. Our climate is an endlessly dynamic nonlinear dissipative system with fluctuating inputs. It is forever changing, unpredictably. The fluctuations are too many to quantify, within current scientific knowledge and instrumentation modelling, and basically it is too big to understand. A few scientists tried climate predictions using our best methods of scientific endeavor, although some say the method was corrupt. Nevertheless, the qualifications needed for such an expedition were never applied, which is why the theory produced under this method is a flawed hypothesis, not only questionable but questioned for the last decade or more, unresolved. Humans will likely never have an answer for chaotic climate systems in totality. The most we can say about when we will know a truthful theory is that it will be in the future for the very reasoning expressed here. Einstein: “A theory is more impressive the greater the simplicity of its premises, the more different are the kinds of things it relates, and the more extended its range of applicability.” We defer to Einstein’s theory on thermodynamics but disregard his hypothesis about the strength and applicability of scientific theory. I’d rather roll the dice towards Einstein whilst we are developing strategies to enhance benefits from a naturally changing biodiversity climate, rather than the current paradigm of fearing the worst and stunting humankind’s development. Whether his statement is theory or philosophy, I do wonder what his thoughts would be on the methods of climatic reasoning today. 12. For ten points extra credit in a Physics final exam I once wrote that the apparent reversal of entropy on earth was a miracle consistent with the existence of God. Fortunately, I’d gauged the graduate assistant correctly; he had a sense of humour. □ That’s a pretty good exam question for a bot to think of. It’s hard not to think of something as evidence for the existence of God.
In this case, the miracle is the existence of the Sun at exactly the right distance from the Earth to produce an environment capable of supporting life, which itself is another whole set of miracles. In earlier centuries, Newton’s laws were considered evidence for the existence of God. □ In an NPR interview Saul Perlmuter stated something similar to your second paragraph, w/o reference to God, but pointing out that somehow humans are just the right size to observe the phenomena of the very large (relativity) and the very small (quantum mechanics). Or, we see what we see because of where we stand? □ God is it not, nor are we in earlier centuries. Imagination predicts progress. Universal material equilibrium is possible, Probably has happened before, in a fraction of a moment before the kenetic energy of that moment dissiptated into a different universal state of all the the universes matter. Entropy, is it the driver of big bangs? P.S. No offence intended to God. 13. The paper of Kleidon has been discussed briefly in some earlier threads. I read the paper at that time and was not convinced as can be seen from this comment: □ If a behavior does show maximum entropy, then it must get to that state space through some fundamental process. It is entirely possible that maximum entropy production is analogous to the Principle of Least Action, or minimizing the free action as the folks at the AzimuthProject are trying to frame the problem. To side with you, I can’t read any of Roderick Dewar’s papers on max entropy production in that they suffer from a circular reasoning problem. Axel Kleidon is much better because he tries to apply some numbers, and bridges the gap between understanding the climate with the laudable goal of trying to estimate how much renewable energy that we can extract from our environment. □ WHT, I have noticed that Axel Kleidon studies relevant problems. His goals are laudable and there is nothing wrong in experimenting with new approaches to find out, haw far they can get. It’s, however, common in such work that the final results are meagre. My view is that the paper that this thread is discussing has not reached significant results. Kleidon states explicitly in the paper that the formal basis for the approach is lacking. Thus the paper is highly dependent on the examples. As I have written, my conclusion is that the examples testify rather on failure than on success. □ Some have argued that entropy arguments in general lack formality. Rota considers a dozen problems in probability that no one likes to bring up and two of these concern entropy : “The maximum entropy principle. which may be gleaned from the preceding examples, states that, in the absence of any further information, the best guess of a random variable satisfying given conditions is the random variable of maximum entropy that satisfies those conditions. Among all mathematical recipes, this is to the best of my knowledge the one that has found the most striking applications in engineering practice. The best techniques of image reconstruction known to date rely on the maximum entropy principle. I have myself been witness to police cases where the correct number plate of an automobile was reconstructed by maximum entropy from a photograph which to the naked eye looked like chaos. Even the solution of overdetermined systems of equations is at present best carried out by maximum entropy computations. 
In view of such a variety of lucrative applications, the complete and utter lack of justification of the maximum entropy principle is nothing short of a scandal. On learning that a normally distributed random variable of infinite variance has maximum entropy, it is natural to ask for an intuitive proof of the central limit theorem that relies on maximum entropy; such a proof has never been given, to the best of my knowledge, although several mathematicians attempted it, among them Linnik and Rényi.” This doesn’t mean it isn’t practical and useful, just that people can tear their hair out trying to understand it and formalize it. □ Web said, “This doesn’t mean it isn’t practical and useful, just that people can tear their hair out trying to understand it and formalize it.” Thanks to the increase in my male pattern baldness, I can testify to that! :) Think of the implication for just the basic up/down radiant model/kernel. Thermal diffusion in the mid to upper troposphere to the tropopause sink. The paper provides some detail of diffusion/conduction toward the polar regions, the tropopause expands that impact. That is not a bad place for you to use your maximum entropy diffusion methods. 15. Thanks greatly for this excellent paper, Judith. Although I would like to have seen a much deeper investigation of the constructal law and its relation to the maximization of entropy, this is the first paper that I’ve seen that was aware of the constructal law, so we’re making progress. As someone who has pushed for some time for greater understanding and use of the constructal law and its application to the climate, I can’t thank you enough for bringing this paper to our attention. It will take more time to digest, but the important point is clear—flow systems far from equilibrium evolve and change to maximize certain aspects of the system, including the entropy and the work produced. My thanks as always for your site, it’s a great addition to the climate discussion. □ Interesting. The Constructal Law is simply an amalgamation of the well known Principle of Least Action, the 2nd Law as exemplified by the Maximum Entropy Principle, and symmetry arguments tossed in. This may be one of those ideas that engineers have a different name for than physicists, □ Climate science’s selection of terminology versus engineering is a great deal of the battle. □ Re the constructual law, I am in discussions with Adrian Bejan re his new book and possibility of a guest post. □ There are laws that are strictly true for every system for which they can be properly applied. The First and Second law belong to those. That requires also that they can be formulated exactly in mathematical terms. Then there are “laws” that classify commonly occurring situations or developments. Such laws are often ill defined. Knowledgeable people may be likely to define them roughly in the same way, when given the field of application, but not exactly. These laws are also difficult to test as they are not followed rigorously, but are rather rules of thumb. To me it appears clear that the constructal law belongs to this second class. The situation is not quite as obvious for the principle of maximal entropy production, but every practical application of the principle that I have seen has the characteristic of this second class of “laws”. The laws of the second type may be useful, but their limits of applicability are unknown and one should never expect that they are valid for a new application until that has been verified. 
In that they are fundamentally different from the best established laws of physics, which are very likely to be true over a wide range of situations that have not yet been specifically studied. (They are less certain to be valid, when we proceed to essentially new areas like those that led to the breakdown of classical mechanics in areas, where QM or relativity are important.) □ Very true Pekka, engineering has quite a few rules of thumb to work in the real world. Climate science may need a few rules of thumb as long as they obey the laws of thermodynamics :) □ Possibly the following article shares Pekka’s viewpoint? From the concluding comments, “Long ago, Katchalsky and Prigogine described the formation of complex structures in nonequilibrium systems. Their dissipative structures could have a degree of complication that could grow rapidly in time. It is believed that comparably complex structures do not exist in equilibrium. … As science turns to complexity, one must realize that complexity demands attitudes quite different from those heretofore common in physics. Up to now, physicists looked for fundamental laws true for all times and all places. But each complex system is different; apparently there are no general laws for complexity. Instead, one must reach for lessons that might, with insight and understanding, be learned in one system and applied to another.” □ The related topic of fluctuation theorem – and fluctuation/dissipation theorem – may also be interesting. At least I’d like to understand a little better what they mean in climate context. For example, Kleidon references several papers by Dewar including http://arxiv.org/ftp/cond-mat/papers/0005/0005382.pdf. □ Pekka There are laws that are strictly true for every system for which they can be properly applied. Then there are “laws” that classify commonly occurring situations or developments…These laws are also difficult to test as they are not followed rigorously, but are rather rules of thumb. Maybe the problem is one of semantics. Isn’t is such that a “law” is a “law” (applies without exception for every system), while the other “rule of thumb” you cite is really not a “law”, but rather a “suggestion”, or at best an “uncorroborated hypothesis”? Just trying to get the wording straight here. □ Constructual law is coming soon, hopefully via a guest post (or at least Q&A) with Adrian Bejan □ Thanks Judith. Look forward to Bejan’s post. There are numerous papers being published around Maximum Entropy Production by Kleidon and others, which rarely overlap or cite with Bejan’s papers. Carnot’s theorem inherently incorporates the 2nd Law of Thermodynamics: All irreversible heat engines between two heat reservoirs are less efficient than a Carnot engine operating between the same reservoirs. Kleidon, Bejan and others modeling climate and winds as a Carnot heat engine driven by the temperature difference between the tropics and poles implicitly incorporate the 2nd Law of The atmosphere and oceans are not reversible systems. Consequently, the efficiency of earth’s climate engine is always less than the Carnot efficiency. SLOT thus enforces rigorous bounds on wind efficiency and the consequent temperature distribution over the Earth and with elevation. Note that Ferenc Miskolczi also appeals to entropy maximization – and is commonly derided as few understand the power of it. e.g. 
Miskolczi 2007: “We believe that the β parameter is governed by the maximum entropy principle, the system tries to convert as much SW radiation to LW radiation as possible, while obeying the 2OLR/(3 f ) = F0 + P0 condition.” Woe betide anyone who even imagines violating SLOT. Russian thermodynamicist Ivan P. Bazarov (Thermodynamics, Pergamon 1964) stated: “The second law of thermodynamics is, without a doubt, one of the most perfect laws in physics. Any reproducible violation of it, however small, would bring the discoverer great riches as well as a trip to Stockholm. The world’s energy problems would be solved at one stroke. It is not possible to find any other law (except, perhaps, for super selection rules such as charge conservation) for which a proposed violation would bring more skepticism than this one. Not even Maxwell’s laws of electricity or Newton’s law of gravitation are so sacrosanct, for each has measurable corrections coming from quantum effects or general relativity.” PS The US Patent Office requires an inventor to physically demonstrate any invention that appears to require “perpetual energy” (of the 2nd kind) and violate the 2nd law of thermodynamics. 16. Of course, the most profound deduction from the 2nd Law is that the common use of the ‘back radiation’ concept in climate science is bunkum except for the particular case of a temperature inversion. In all cases the relevant radiative heat transfer phenomenon is the difference between Up and Down radiative signals! A second issue is the claim in climate science that net present GHG warming is 33 K. Again it is bunkum because this is lapse rate conflated with real GHG warming, probably about 9 K and set by the difference between mostly H2O GHG warming and cooling by clouds. As far as the MEP concept is concerned, this is deeply bound to statistical thermodynamics and the Gibbs Paradox. Its application to climate science is with respect to the assumption of 100% direct thermalisation of IR. The resolution of the Gibbs paradox is to understand that at thermodynamic equilibrium, gas molecules have no specific identity. Thus when an IR photon is absorbed, an identical photon emitted almost immediately restores the Equipartition of Energy so except at high pressures and very high collision frequency, there can be little direct thermalisation. Indeed, unless otherwise proven, most thermalisation may be at second phases. Perhaps this is why the simplistic 1990 IPCC predictions deviate so greatly from experiment: http://www.sciencebits.com/IPCC_nowarming Be warned; non-equilibrium thermodynamics is deceptively seductive. At all times refer to direct, simple experiments and never believe propaganda like the 33 K and back radiation claims! A second issue is the claim in climate science that net present GHG warming is 33 K. Again it is bunkum because this is lapse rate conflated with real GHG warming, probably about 9 K and set by the difference between mostly H2O GHG warming and cooling by clouds. The effect of 0% albedo or complete absorption leads to a 9 to 10 K differential, while 30% albedo leads to 33 K for perfect black-body. So are you saying that all radiation is at least temporarily absorbed by the planet? And that none of the earth has mirror-reflecting properties at all? □ In essence, the IPCC’s claim is that if you take away the atmosphere, the -18°C of the composite emitter at the top of atmosphere would move to the earth’s surface. Subtract [-18°C] from +15°C and you get 33 K.
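For anyone wanting to check where the 33 K and the 9 to 10 K figures in this exchange come from, here is a back-of-envelope Stefan-Boltzmann sketch. It is nothing more than the standard textbook bookkeeping (solar constant, albedo, fourth root) and says nothing about lapse rates, clouds or convection:

```python
# Back-of-envelope Stefan-Boltzmann check of the effective radiating temperatures.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0         # solar constant, W m^-2
T_SURFACE = 288.0   # rough global mean surface temperature, K

def t_effective(albedo):
    """Effective radiating temperature of a bare black-body planet with the given albedo."""
    return (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

for albedo in (0.30, 0.00):
    te = t_effective(albedo)
    print(f"albedo = {albedo:.2f}: T_eff ≈ {te:.0f} K, "
          f"difference from a 288 K surface ≈ {T_SURFACE - te:.0f} K")
# albedo 0.30 gives ~255 K, i.e. the familiar 33 K figure;
# albedo 0.00 gives ~278 K, i.e. the 9-10 K differential mentioned above.
```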
Lacis is on record as saying the models predict the same and it’s all GHG warming.. But that’s not true. By eliminating clouds and precipitated water = ice caps, albedo falls from .3 to .07 so the equilibrium radiative temperature increases to ~0°C, ~15 K GHG warming. Iterate to use the residual aerosols and the low IR radiative properties of N2 and O2 and you get ~9 K. The problem the modellers apparently have is that they fail to take into account the most basic part of the basis of their modelling. It’s that GHG warming raises the tropopause so must be separated from lapse rate warming at the surface. The zero albedo result is spurious. GHG warming is the result of the increase of IR impedance in the atmosphere which represents itself as the apparent emissivity/absorptivity of the atmosphere near to the Earth’s surface and facing it. As the CO2 in this is near IR band saturation, you get into the realm of self-absorption and reduction of that emissivity, an increase in the opposite direction, reducing net incremental CO2 climate sensitivity because IR impedance falls as [CO2] increases. There are other issues, e.g. the 13,1% change of CO2 Cp from 250 K to 350 K, and the variation of this in gas mixtures you get by partial molar Cp data. This change of Cp is the development of longer wavelength IR absorption bands so there is variable IR impedance as a function of temperature. It looks like the simple but very wrong World of climate science is suddenly developing into real science with thermodynamics. About time but expect radical changes from the assumptions made by those who grew up when climate science was to real science as painting by numbers is to fine art. Welcome to the real World! □ “Mydogsgotnonose | January 11, 2012 at 8:44 am | In essence, the IPCC’c claim is that if you take away the atmosphere, the -18°C of the composite emitter at the top of atmosphere would move to the earth’s surface. Subtract [-18°C] from +15°C and you get 33 K. Lacis is on record as saying the models predict the same and it’s all GHG warming.. But that’s not true.” If you only consider latent and sensible cooling (thermals) the effective radiant temperature of the surface is 267K degrees, That is 21 degrees which would be made up by radiant impact. That makes things interesting. The total atmospheric radiant or ghg impact would actually be greater than 33C degrees. This may seem counter intuitive, but that reduces the impact that the change in CO2 has as a percentage of the total impact. Manabe estimated roughly 70C total radiant impact, my quick and dirty estimate is 54C degrees impact. Dr. Roy Spencer stated, “One of the more significant aspects of the above discussion, which was demonstrated theoretically back in the mid-1960s by Manabe and Strickler, is that the cooling effects of weather short-circuit at least 50% of the greenhouse effect’s warming of the surface. In other words, without surface evaporation and convective heat loss, the Earth’s surface would be about 70 deg. C warmer, rather than 33 deg. C warmer, than simple solar absorption by the surface would suggest. “ 17. Fantastic post!!! Best technical thread yet, IMHO. CurryJA said, “Nope, 2nd law does not appear in the traditional analyses of feedback.” I posted some comments about this last year and only got a few takers willing to discuss the fact that most climate energy balance research was mostly ignoring the second law of thermodynamics. Link below; also at WUWT “Fred, Kleidon’s approach uses the 2nd law of thermodynamics. 
The mainstream approach uses the first law of thermodynamics. Different physics are in play here, there is no reason to expect the same answer, but also no evidence here of a different answer. The linear approach that evolves from energy balance models is almost certainly oversimplistic, IMO.” I agree with Judith. I think Trenberth will find his missing heat hiding in the entropy room. Wondering where entropy goes is not a problem exclusively for climate science. Example here, The Earth and Atmospheric Sciences department at Ga Tech is in great hands!! □ Heat can disperse into the ocean, just like CO2 disperses into sequestering sites. Given a constant increment of thermal forcing over time, the diffusion of heat can show an understandable diversion into a thermal heat sink. This is what the divergence from an expected temperature increase will look like given the heat sink never reaches a steady state in temperature: Alpha plot of dispersive MaxEnt diffusion. This can turn around if the heat sink thermalizes, and the temperature gradient disappears. I am working on this idea in more depth. □ Heat dispersion in the ocean still has to follow the second law. Mass absorption, like the change in vapor pressure allowing gases to return to solution, would be tricky, but the oceans are electrolytic. That is one reason I concentrated on the 4C density boundary. It does drive the ocean over turning current. □ First off, thermal or heat diffusion is a strikingly clear manifestation of the second law. Secondly, I apply maximum entropy to the thermal diffusion coefficient because I know that it varies over the planet. So it turns into diffusion with dispersion, which is real disorder. The result is an analytical expression that I hope will get some traction because of its □ Kevin Trenberth made the comment that the “missing” heat that had gone into the deep ocean could return. its “conservation of energy”. Not possible owing to the 2nd law (and the results of mixing), unless the temperature of the heat sink substantially rises. Another case of climate scientists using the 1st law, and not the 2nd law □ Trenberth is a climate scientist, not a climate scientists. Maybe you could take an art course where they use a lot of fine brushes. [Response: The circulation time for the deep ocean is on the order of hundreds to thousands of years. Change there is very slow - which makes the changes seen so far quite surprising. At any new (warmer) equilibrium, there will be a significant increase of OHC over what there was before. The damping of the rate of surface warming or the warming in the pipeline isn't anything to do deep ocean heat coming back out. I have no idea where this idea originated, but it is not accurate. - gavin] □ I have been wondering, what Trenberth really meant by the statement. A process where heat literarily goes into deep ocean and returns to warm doesn’t appear plausible, but something along the same lines is possible. It’s possible that the overall net heat transfer between deep ocean and the upper ocean varies in direction. It’s normal that the heat flux is down in some regions and up in others. The relative size of these fluxes may vary, and certainly does vary to some extent. It’s not even necessary that the net flux changes sign. For significant effects it’s enough that the downward flux varies in size. If it has been exceptionally large over some period that may lead to a significant reduction of the flux later. This would speed up the warming during the latter period. 
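Pekka's mechanism is easy to illustrate with a toy two-box calculation: a downward heat flux that merely varies in size first damps and later speeds up surface warming, with no heat ever flowing back up from the deep box. Every number below is a rough placeholder (mixed-layer and deep-ocean heat capacities, a 1 W/m2 forcing step, a guessed restoring coefficient), so this is a sketch of the mechanism, not a model of the real ocean:

```python
import numpy as np

# Toy two-box ocean: an upper mixed layer coupled to a deep reservoir.
# Every number is an illustrative guess, chosen only to show the mechanism
# described above: uptake that is first strong and then weak damps and then
# speeds up surface warming, while the deep box never gives heat back.
C_UP, C_DEEP = 4.0e8, 1.2e10     # heat capacities, J m^-2 K^-1 (~100 m and ~3000 m of water)
LAM = 1.5                         # radiative restoring, W m^-2 K^-1
FORCING = 1.0                     # constant forcing step, W m^-2
YEAR = 3.15e7                     # seconds per year

def run(kappa_by_year):
    """Step the upper (Tu) and deep (Td) temperature anomalies forward in time."""
    Tu = Td = 0.0
    history = []
    for kappa in kappa_by_year:            # kappa: upper-to-deep exchange, W m^-2 K^-1
        q_down = kappa * (Tu - Td)          # stays downward as long as Tu > Td
        Tu += YEAR * (FORCING - LAM * Tu - q_down) / C_UP
        Td += YEAR * q_down / C_DEEP
        history.append(Tu)
    return np.array(history)

years = np.arange(100)
strong_then_weak = np.where(years < 50, 1.5, 0.5)   # the uptake only varies in size
steady = np.full(100, 1.0)
print("Tu after 100 yr, varying uptake: ", round(run(strong_then_weak)[-1], 3))
print("Tu after 100 yr, constant uptake:", round(run(steady)[-1], 3))
```

In the varying-uptake run the upper box warms faster in the second half even though the exchange term never changes sign, which is exactly the point Pekka is making.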
□ Pekka said, “It’s not even necessary that the net flux changes sign. For significant effects it’s enough that the downward flux varies in size. If it has been exceptionally large over some period that may lead to a significant reduction of the flux later. This would speed up the warming during the latter period.” □ The temperature differences eventually equilibrate and since the forcing function is still there (remember that co2 has a long adjustment time), the continually produced excess heat will have no where to go. And that is when the temperature would start to catch up. So I agree that the heat coming back is the wrong interpretation. It will remain mixed. □ “Kevin Trenberth made the comment that the “missing” heat that had gone into the deep ocean could return. its “conservation of energy”. Not possible owing to the 2nd law (and the results of mixing), unless the temperature of the heat sink substantially rises. Another case of climate scientists using the 1st law, and not the 2nd law” Judy – Could you quote his statement exactly, including the context, or better yet, link to the source so we can read the context? I’m skeptical that Trenberth was making an unrealistic claim, but I would want to see his exact point. As to whether heat that had gone into the deep ocean can return, of course it can – eventually -, but the direction of heat flow depends on temperature gradients, and the inevitably slow time course would reflect the vast heat capacity of the deep ocean. It wouldn’t require the temperature of the deep ocean heat sink to rise, but could happen if the temperature of the upper ocean and the surface falls. It’s one of the reasons why the long term surface warming from CO2 forcing would fail to subside rapidly from a later reduction in atmospheric CO2. Susan Solomon et al made this point in a PNAS paper a few years ago. □ http://www.dailycamera.com/boulder-county-news/ci_18932226 That the heat is buried in the ocean, and not lost into space, is troublesome, Trenberth said, since the heat energy isn’t likely to stay in the ocean forever, perhaps releasing back into the atmosphere during a strong El Nino, when sea surface temperatures in the tropical Pacific are warmer than average. “It can come back quite fast,” he said. “The energy is not lost, and it can come back to haunt us, so to speak, in the future.” □ Dr. C., my understanding is that the theory that the extra heat disappeared into the deep oceans assumes a very slow laminar flow that precludes significant mixing. In theory this seems possible, but not terribly plausible. It seems like this is a real key issue that should be possible to model without the complications from turbulence. Do you know of any such efforts? □ Fred, Pekka – See here for Trenberth’s statement. □ Pekka – I don’t think anybody knows what he was talking about! Once it’s measured in the deep ocean, found so no longer missing, it’s not coming back. It’s here among us – apparently for a very long time. I think he meant what caused the deep ocean warming could stop causing it and start heating up the things that I get to play with on Wood for Trees, ’cause from I can see, nothing changed at Wood for Trees when they found warming below 700 meters. □ So, either Trenberth failed undergraduate thermodynamics iserably or he never studied thermodynamics, had no idea of thermodynamics! □ Thanks, Pat. Trenberth’s comments are consistent with my understanding of his position, and with climate data. 
The one additional point he made that went beyond that general understanding was that a reversal of the heat flux could also occur fairly rapidly, but transiently, during an El Nino – a phenomenon in which the ocean loses heat to the atmosphere. As I understand this, most of the ocean heat loss occurs from the upper mixed layer, but flux out of the deep ocean would presumably contribute to the ability of the mixed layer to release heat upward. □ Fred, “during an El Nino – a phenomenon in which the ocean loses heat to the atmosphere. ” Wouldn’t the ocean just be losing more radiant heat to the atmosphere? La nina is an increased convection (westerlies) event, radiant impact is greater when there is less convection. Warmer SST during an El Nino would indicate less heat loss from the ocean. □ Dallas – The El Nino response is complicated, because it varies both with region and with time. During certain phases, however, heat moves to the surface from below and is released into the atmosphere, with a consequent net heat loss from the ocean. Increased atmospheric heating, however, can elicit transient positive feedbacks that lead to heat transfer back into the ocean. Because these phenomena are not synchronous in every region, the global pattern is not always easy to interpret. Nevetheless, the First Law is relevant in that as long as an El Nino is not a forced response to a radiative imbalance imposed at the top of the atmosphere, heat gain at the surface and atmosphere must signify heat loss from somewhere else. There have been suggestions that ENSO phenomena may partially reflect anthropogenic forcing, but outside of this possibility, total energy must be conserved. □ Fred – during an El Nino, can heat from below 3000 meters be brought to the surface? How about below 700 meters, from below 2000 meters. I’ve never seen “deep ocean” defined. During El Nino the GMT seems get hotter. During La Nina seems to get colder. That is what I meant by something I get to play with on Wood for Trees. □ JCH – During an El Nino, as long as energy is conserved, heat added to the surface and atmosphere must come from the ocean. The immediate source will be the upper layers, but I think Trenberth’s point is that their ability to release heat to the surface will be reinforced if some of their heat loss is replaced by heat from deeper levels. The net result would therefore be a contribution to surface and atmospheric warming from the deep ocean that is mostly indirect via the upper mixed layer rather than a bulk convective transfer of large amounts of heat upward over thousands of meters. The larger point, I think, and one I tried to make earlier, is that heat transfer out of the deep ocean will be triggered when the upper ocean starts to lose heat. Ordinarily, this would be associated with global cooling (e.g., during a sustained reduction in atmospheric CO2), but in the case of El Nino, one can argue that the upper ocean heat loss is transiently associated with surface warming – a phenomenon that will ultimately reverse itself within one or a few years. □ Hi Everybody - Non-equilibrium thermodynamics is a fundamental description of the Earth system and can be captured in a simple 1st order difference calculation. dS/dt = Ein/s – Eout/s – where dS/dt is the change in energy storage in the system and Ein/s and Eout/s are the average energies into and out of the system over a period. 
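That first-order difference relation is simple enough to mock up in a few lines. The heat capacity, restoring coefficient and noisy input below are placeholders chosen only to show how a fluctuating Ein makes the storage term dS/dt change sign from year to year even under a steady mean forcing:

```python
import numpy as np

# Throwaway mock-up of the difference relation above, dS/dt = Ein - Eout,
# with Eout written as a linear restoring term. All numbers are placeholders.
rng = np.random.default_rng(0)
C_EFF = 8.0e8            # effective heat capacity, J m^-2 K^-1 (placeholder)
LAM = 1.2                # outgoing response per degree of warming, W m^-2 K^-1 (placeholder)
YEAR = 3.15e7            # seconds per year

T = 0.0
storage_rate = []
for year in range(200):
    e_in = 0.5 + 0.4 * rng.standard_normal()   # noisy input anomaly (ENSO-like), W m^-2
    e_out = LAM * T                             # outgoing anomaly, W m^-2
    ds_dt = e_in - e_out                        # rate of change of energy storage, W m^-2
    T += YEAR * ds_dt / C_EFF
    storage_rate.append(ds_dt)

neg_years = np.mean(np.array(storage_rate) < 0.0)
print(f"fraction of years in which the system loses energy (dS/dt < 0): {neg_years:.2f}")
```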
Within the simple global overview are of course myriad – and powerful – processes through which energy cascades in the deterministically chaotic system that is the fundamental mode of operation of Earth’s climate. ENSO is a sub-system that is itself deterministically chaotic. The intensity and frequency of ENSO events varies at least over at least 11 millennia that we know of – http:// We are used to thinking of the oceans as layer of warm water over cold water separated by the thermocline – the depth at which the rate of decrease of temperature with increase of depth is the largest. In terms of energy dynamics – this seems relatively arbitrary. The oceans heat as a whole and cool as a whole – but within this there are hydrodynamical and atmospheric processes that influence both local and average rates of warming or cooling. ENSO is a key process involving upwelling in the eastern Pacific in a La Niña and – when the trade winds falter – the flow eastward of a pool of warm water that had been piled up against Australia and Indonesia. The cold surface of the central Pacific in a La Niña loses less heat than the warm surface in an El Niño – remembering the net direction of energy flux. There are in addition cloud feedbacks in ENSO that again change the planetary energy dynamic. There are 2 lessons in this. First – that energy flux is complex and dynamic and that a maximum entropy principle tells us little about specific dissipation pathways. The specific and complex pathways cannot be neglected in simplifying assumptions without catastrophic loss of verisimilitude Secondly – that a La Niña cools the planet and an El Niño warms the planet – suggesting both a contribution to warming between 1977 and 1998 and a cooling influence for 20 to 40 years from 1998. Unless we can get this from an equation of maximum entropy – we are as far from the truth as ever. Robert I Ellison □ Spencer and Pielke Sr. discuss Trenberth’s missing energy, with numerous cites to Climate Etc. Where’s the missing heat. e.g. Spencer 2010 A Response to Kevin Trenberth I posted some comments here about my view that the missing energy does not really exist. I also pointed out that they failed to mention that the missing energy over the period since about 2000 was in the reflected sunlight component, not the emitted infrared. This now makes two “missing energy” sources…the other one being the lack of expected warming from increasing carbon dioxide concentrations, which causes a steadily increasing global radiative imbalance in the infrared. Kevin has apparently learned nothing from the released East Anglia e-mails. To refer to a published paper as “rubbish” without substantiating that claim is arrogant. This behaviour is what has gotten us to the politicization of climate science. A constructive way for Kevin Trenberth to respond would be to post a comment on Judy’s weblog that could then be debated, while he simultaneously prepares a rebuttal paper to the scientifically sound paper by Knox and Douglas. □ Trenberth could not understand thermodynamics, how the hell he could response! □ It’s been rather warm (mid 60F’s) and very clear due to a high pressure system being stuck over my area (N. CA) the last six weeks This has been very advantageous for my little PV system. Lots of kwh produced- record producing output actually. I was wondering if any of the climate models include changes to biological systems? I bring this up as my water buckets for the horses and wild life (deer, gophers, moles, etc.) 
are teeming with life currently- lots of various colored algae. For them to be growing aren’t they using a bit of energy. And in my case more heat more growth. 18. The variational principle that the integral of the scalar product of a flux and a potential gradient be stationary is satisfied by their direct proportionality, with solutions then those of the Laplacian=0. This is limited to a linear region near equilibrium. The rate of total entropy production is the volume integral of the scalar product of an energy flux density and a temperature gradient. This was all spelled out by Onsager in 1931 and he was duly awarded for his work. In the linear region, entropy production is a maximum only when boundary temperatures are fixed, implying maximum energy transport. When boundary fluxes are fixed, steady-state dissipation and the temperature differential are minima. To apply such an analysis to the troposphere requires that tropospheric dissipation be proportional to the square of its temperature differential – not too realistic. Beyond the linear region, entropy production remains given by the same scalar product but the assumption of local proportionality no longer holds. For the case of electric dissipation, the expression W=J(V1-V2) gives us an exact solution under virtually all steady-state conditions. One can show that an equally ‘trivial’ expression holds for thermal dissipation, W=J1(T1-T2)/T1, provided one assumes that the total dissipation of a system equals that of the sum of its individual parts (W=<J). Rigorously, as Onsager quite explicitly states, his solutions are not valid when forces are velocity dependent, i.e. magnetic and Coriolus, unless of even order in velocity. (A system's dissipation in a given external field equals that of its mirror image.) □ Ah, the Onsager reciprocity relationship! A fruitful area of research may be the sinking of ice melt water as it is heated and the diffusion of dissolved salts giving rise to an apparent density maximum at 2°C which exists at high pressures according to the equation of state. This more than anything else controls World climate via the deep ocean currents. The reciprocal part is the down ward diffusion of salinity at the tropics. Heats of mixing etc? □ Mydogsgotnonose | January 11, 2012 at 8:55 am “A fruitful area of research may be…” I tend to agree with much of what you post. But a “fruit area of research” would be how many commercial products are available based solely on the “back radiation” idea to gain 33K. I would have guessed that profit motive being what it is that someone somewhere would be selling coats, blankets, cookware, housing wrap etc to gain 33K. So far I have found none. But I think a nice blanket might be a nice invention. □ God has already given us cloud covers. □ I believe commercial smelting ovens use the properties of CO2 to create high temperatures. If engineers did not understand the process they couldn’t control it. CO2 lasers assumes strong interactions with infrared. Is this not what you want to hear? □ “”mkelly | January 11, 2012 at 9:49 am | I tend to agree with much of what you post. But a “fruit area of research” would be how many commercial products are available based solely on the “back radiation” idea to gain 33K.”” I love commerce. I got shares in Darwin, Australia on issue. Great place, 33K every day. Bloody hot but. Best I can say it has ugly mornings, unbearable days, chaotic evenings and restless nights. 
Still, as you are limited by the ecosystem to the function of drinking beer, life can be very merry indeed, for the current inhabitants. Send me a personal email if you’d like to buy into paradise. □ WebHubTelescope | January 11, 2012 at 12:46 pm | Both of your examples (laser and smelting) require work input. That is not so in the “back radiation” that causes an increase from 255K to 288K. Gases dissipate heat and our lives depend on this natural occurrence. □ Now mkelly is requiring a perpetual motion machine. Nice use of the raising-of-the-bar fallacious argument. 19. Question: Is the maximum of entropy production that is possible within the system limited to the difference in entropy between the entropy of the incoming shortwave radiation and the more or less equal wattage of outgoing longwave radiation? □ That is a good question. I would think that maximum entropy would be limited by each thermal boundary layer. The oceans can lose an order of magnitude or more heat than the atmosphere can accept at the surface mixing layer. The latent portion of the lower atmosphere can lose more heat more rapidly than the drier portion of the atmosphere can accept. Above the tropopause, CO2 is limited by its spectrum, so it is a space blanket with a lot of holes in it without significant spectral broadening. Most of the broadening should be below the tropopause, so the change in local emissivity would be a major impact. □ The difference in entropy between incoming and outgoing radiation is significant. Incoming occupies a more narrow peaked frequency spectrum while outgoing is much broader with gaps, thus higher entropy. To balance out the energy gaps, the earth emitter slightly increases its temperature thus producing more energetic photons to radiate. This is basic statistical mechanics of Bose-Einstein particles, aka bosons. 20. Sorry “between the entropy of the incoming shortwave radiation and THE ENTROPY of the more or less equal wattage of outgoing longwave radiation” □ Why not go the whole hog and claim that the CO2 molecules line themselves up in the atmosphere to form a Fabry-Perot Etalon, effectively a dichroic mirror, with the stored energy a standing wave. This is what an Aussie climate scientist suggested, seriously, to me as the explanation of ‘back radiation’! Ludicrous. But seriously, entropy change of a system as defined by statistical thermodynamics is delta q/T where delta q is the infinitesimal heat involved in reversible work at temperature T. Remember T is average molecular kinetic energy. So, by this definition radiation has no entropy until it is converted to heat! Therefore we have SW energy in, converted to heat, increasing disorder as it moves upwards mainly convectively, then converted back to radiation at the upper atmosphere. The key parameter is the ‘back radiation’, the product of gas temperature and emissivity ~0.2 for clear sky but this is not an energy source; instead it’s a measure of resistance to IR transmission. Climate science wrongly thinks that because it has IR emission lines it is the heating of the GHGs; wrong, it’s just emissivity [see the works of Hoyt C. Hottel at MIT in the …]. Under a cloud it rises because emissivity gets near 1.0. There are few IR emission lines here because of much lower optical path length. This radiation is ‘Prevost Exchange Energy’ which can do no thermodynamic work.
In effect it couples the IR density of states at the Earth’s surface and the immediate atmosphere and if the atmosphere cools, less Exchange Energy causes the rate of heat energy in the solid being converted to radiation to increase. It’s a very subtle effect that no-one has researched apparently, and might prove a use for all those ‘back radiation’ measurements made by climate science but hitherto used wrongly! □ Noseless Dog said, “So, by this definition radiation has no entropy until it is converted to heat! Thermal mass? The thermosphere has high temperature and little energy. Temperature can be substituted for energy in the laws of thermodynamics, but only if the real energy plays along. The Aussie’s standing wave would be similar to a capacitor charging, the voltage doubling is great, but the energy has to be available for work to be done. □ Nonose’s comments are not very clear, but if he/she is suggesting that back radiation is due merely to redirected energy rather than atmospheric heating via GHG absorption of infrared radiation, then that misconception is an example of what is known as the “surface budget fallacy”. In fact, for a given increase in CO2, the large proportion of back radiation increase comes from the CO2-mediated atmospheric heating (a result of reduced OLR and consequent radiative imbalance) and a much smaller fraction from the redirection of energy downward; this can be calculated from the radiative transfer equations, and measurements of downwelling longwave radiation are consistent with this effect. The surface budget fallacy was prevalent in the first half of the twentieth century but was finally refuted by Manabe in the 1960s. Details and examples can be found in Chapter 6 of Pierrehumbert’s “Principles of Planetary Climate”. □ Mr. Molton says: “…this can be calculated from the radiative transfer equations…” What emissivity do you assign to CO2 in these heat transfer equations? □ That’s a coincidence. Mkelly was asking about GHG applications, and now Hottel’s name is brought up. Read up on his work with water vapor and co2 in furnaces. 21. Yes, this is a great technical thread. I think we would all benefit here from working on some specific examples, at least, us plebes. So, with respect to feedback: What does the MEP have to say about the combination of water vapor and lapse rate feedbacks? My initial answer, please correct me etc., is that the “constraints” on the MEP in systems exhibiting significant thermal activity in all 3 radiative, convective, and conductive process types, include constraints imposed by the quantum structure of the individual molecules in the atmosphere. So we’re back to the spectral line databases. So, the Earth will act to maximize entropy, given the constraints imposed by individual orbital energies in gas molecules. Back to square one? □ BillC said, “So, the Earth will act to maximize entropy, given the constraints imposed by individual orbital energies in gas molecules. Back to square one?” Kinda, but at least we have a square peg for a square hole this time. My biggest issue has been abuse of the second law, but misapplication of the zeroth law, http://en.wikipedia.org/wiki/ When people assume that energy is fungible, it can turn into fudge-able. Work is not fungible. The Arrhenius equation dT=5.25ln(Cf/Co) assumes that the temperature change is only dependent on the concentration change. The increase in concentration raises the average radiant layer of the CO2 “forcing”. 
The potential energy decreases with altitude, so the impact of CO2 would decrease with altitude. In order for the maximum CO2 impact, the spectrum of CO2 at the average radiant layer would have to broaden, but broadening is itself a negative impact on CO2 forcing felt at the surface. That increases the impact or the latent, conductive/convective and atmospheric window cooling mechanisms. So maximum entropy is a more realistic approach, but not exactly a simple approach, imnsho.

□ Cap’n: “The Arrhenius equation dT = 5.25 ln(Cf/Co) assumes that the temperature change is only dependent on the concentration change.” I thought the modern database/GCM code version of this was a bit more sophisticated? “The increase in concentration raises the average radiant layer of the CO2 ‘forcing’.” Yes, this is the explanation commonly invoked by Pierrehumbert et al. “The potential energy decreases with altitude so the impact of CO2 would decrease with altitude.” What potential energy? “In order for the maximum CO2 impact, the spectrum of CO2 at the average radiant layer would have to broaden, but broadening is itself a negative impact on CO2 forcing felt at the surface.” Not sure what you mean by this. Won’t the width of the absorption spectrum have a sort of a “lapse rate” where the broadest absorption will be near the surface? “That increases the impact or the latent, conductive/convective and atmospheric window cooling mechanisms.” Assuming you mean “impact of”, I draw a parallel with a statement like “all atmospheric heat loss mechanisms will operate faster on a hotter planet”. Not much to argue with there (aside from clouds and water vapor feedback!) but the argument is all in the quantification of the effects, no? If in sum you are saying that applying the MEP will help this quantification, I agree; see my reply to P.E. below about “directing” the parameterization of variables in GCMs other than the line-by-line or block radiative transfer codes.

□ BillC, “I thought the modern database/GCM code version of this was a bit more sophisticated?” It is supposed to be, not sophisticated enough to deal with the Antarctic or southern upper troposphere though. “What potential energy?” The potential energy of the parcel of air being warmed. The potential energy and temperature of the air being warmed are required to know the amount of impact it would have. Hence the troposphere hot spot that is not as apparent as estimated. “Not sure what you mean by this. Won’t the width of the absorption spectrum have a sort of a ‘lapse rate’ where the broadest absorption will be near the surface?” Yes it would; mixed with water vapor the total spectrum is near saturation. As the average radiant layer rises, the layer of saturation thickens; if DWLR is to be a real energy, it has to follow the real laws. In the Arctic, the DWLR impact does increase water vapor feedback, which is initially low enough to impact surface temperature, the GHE gameplan. In the Antarctic, water vapor does not respond due to the much lower temperature range; CO2 is saturated with itself, minimal broadening, minimal impact. In the tropics, latent and sensible cooling vary below the average radiant layer, minimizing the impact. “But the argument is all in the quantification of the effects, no?” Exactly! http://wattsupwiththat.com/2012/01/10/the-climate-science-peer-pressure-cooker/ I would expect many more papers stating the same thing, that the impact is overestimated. I want to know why, I already knew it was.
The key to quantification I believe is thermal diffusion, thermal dissipation or thermal conductivity, whichever term you prefer, near the radiant boundary layer and the ocean mixing layer.

□ Dude, Cap’n, nice find! WUWT I don’t read, but they do sometimes get the scoop! I don’t really care what Pat Michaels said, but the fact that the new paper says “Our analysis also leads to a relatively low and tightly-constrained estimate of Transient Climate Response of 1.3–1.8°C, and relatively low projections of 21st-century warming under the Representative Concentration Pathways” IN THE ABSTRACT is key.

22. Something bothered me about this last night, and now that it’s morning, I know what it is. At best, this approach can remove a degree of freedom. Did we have an extra one all along, or is this going to end up overconstraining the system?

□ P.E. – I think the place this could help the most is in the parameterization of sub-grid scale dynamical processes in GCMs. AFAIK, in some sense these parameterizations have lots of DOFs, because they are not specifically solving the equations of motion (Navier-Stokes), even numerically. I don’t see its application much to the radiative transfer part of the models.

□ Pekka made a comment above that the second law is built into a lot of constituent pieces, such as the adiabatic lapse rate. That’s fine, but doesn’t give us anything new. This may also be of some help on the micro scale as you say, but without having anything to say about the macro scale picture, I don’t see this as having much impact. Somebody needs to run with this and show what it has to do with the overall energy flow.

□ I think “bettering the models” using this approach to direct sub-grid scale parameterizations has the potential to improve our understanding of most feedbacks – S-B, water vapor, lapse rate, clouds, ice etc.

□ Heh heh, the S-B feedback, formerly known as Stefan-Boltzmann, now renamed in honor of Spencer and Braswell.

23. Two points from a non climate scientist who has done a lot of chemical thermodynamics: 1. N. Atlantic OHC: http://bobtisdale.files.wordpress.com/2011/12/figure-101.png This should not happen unless GHG-AGW << 1990’s natural heating, now cooling. [It’s actually associated with the forthcoming Arctic freezing in its 70 year cycle, over by 2020 according to Russian contacts, when it’ll be as cold as 1900.] 2. How can there be 100% thermalisation of absorbed IR when the Law of Equipartition of Energy means there is almost instantaneous random emission of an already thermally-excited molecule? [Remember, the CO2 in a PET bottle experiment measures warming from two other aspects - absorption or IR by scattered IR, the constrained pressurisation of CO2 which has a higher CTE than air. Nahle has picked up on this Cp change, as climate science should have done but did not - it explains an awful lot of the physics.]

24. Sorry, absorption of scattered IR by the walls of the PET [or glass] bottle. Has anyone done IR absorption measurements in a NaCl [not IR absorbing] tube?

□ Organic chemists have been taking IR measurements of millions of synthetic compounds between KBr discs for many decades [frequently as a "Nujol mull"]. As an undergrad, I recall learning that the C=O stretch of carbon dioxide was often one of the significant background impurities of the spectrum. The chemical literature, and the Sigma-Aldrich reference database, probably hold a wealth of historical [inadvertent] IR measurements of carbon dioxide from around the world, under ambient lab conditions.
Knocking it into some usable shape might be an insurmountable task.

This is a great post, JC. Since my school days I’ve been wondering about how the “earth system” exchanges entropy with the wider universe.

25. Here is a quote about the second law from the textbook that I used in my thermo classes at Ga Tech (Sonntag and Van Wylen were the authors). Basically they say the second law is evidence of a Creator: ‘…the authors see the Second Law of Thermodynamics as man’s description of the prior and continuing work of a Creator, who also holds the answer to the future destiny of man and the universe.’ Provocative words!! I remember being struck by it when I was first studying the textbook. Verifying link here;

□ That was also in a hydraulic engineering textbook I had for a college course (Morris and Wiggert); they also noted that it shows evolution is a load of crap (paraphrasing).

□ See Granville Sewell’s mathematical development, A second look at the second law, where he develops the second law equations required for an open system. Thus the equations for entropy change do not support the illogical “compensation” idea; instead, they illustrate the tautology that “if an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable”. And see Sewell’s discussion on the Second Law. I have not seen any sound rebuttal to either Sewell’s math or his tautology.

□ What is the entropy diagram for photosynthesis?

□ BillC, better yet, consider the reduction in entropy from stochastic processes with distributions of elements to a self-reproducing organism relying on photosynthesis for energy. This massive change of entropy over the Origin Of Life (OOL) scenario is where Sewell’s equations for the second law in an open system need to be applied.

26. “The proposed principle of MEP states that, if there are sufficient degrees of freedom within the system, it will adopt a steady state at which entropy production by irreversible processes is maximized.” That would not happen per se, would it? Take for instance wet wood. After it is burned everything complex and ordered is transferred to simple and unordered gases, heat and a minor residue, ash. That is why wood burns, whereas these gases that result from that fire would not magically turn into wood, cooling the environment. Hence the burning, even of wet wood, is an entropy production. Still, it takes considerable effort to burn wet wood. I may not understand what the man says, but doesn’t that refute his idea of maximization? Or does my example have “sufficient degrees of freedom within the system”?

□ Oh, I see my mistake now. Indeed it is what is called “freedom within the system”.

27. See MattStat’s comment above:

□ The above was meant as a 2nd reply to David L. Hagen:

28. It is great to watch the scientists debate, just to show that the debate is not based on ignorance. I know nothing about MEP but articles like the following suggest that it too is seriously contested.

29. Excellent article on MEP and its application to understanding both past and present climate. We really do need to understand MEP better, which, along with spatio-temporal chaos, are the driving forces which create our world. However, I wonder if the Earth was ever even near TE, even billions of years ago?
Since very early in the solar system’s genesis, our planet has always had our moon and sister planets as companions, and their tidal effects will have always kept things on Earth well stirred.

□ MEP is a step in the right direction, as is spatio-temporal chaos, but I think the cool part will be relativistic heat conduction. Every time I see the 1997/8 El Niño temperature profile I think of thermal resonance and the wave nature of heat. I am pushing my luck just yakking about the impact of conductivity changes at thermal boundaries. Breaking out imaginary temperatures would be loony bin stuff :)

□ Cap’n. You are insane to bring up relativistic heat conduction. This is a solution to a non-problem. Yes, diffusion can show infinite propagation speeds, but the width of this is a delta and so can be integrated out. Also, resonances are pressure waves and can be acoustically modeled. More stuff pulled out of nether regions.

□ Web, I said it was loony :) It still interests me though. When I was playing with the Kimoto equation, that started such a row at Lucia’s, it got me thinking about RHC and self-organizing criticality. Maximum Entropy production defines a state the system is seeking, right? What happens when it finds it? It would change the source, sink or a little of both so it looks for a new state. That should be fairly predictable. Some of the climate shifts don’t seem to be that predictable and never will be if they are chaotic. With two approaches, MEP and something else, there may be more that is predictable. Anywho, the Kimoto equation needs some kind of validation and it looks like MEP may benefit from it, but there is still some weirdness or chaotic influence I don’t think it will find around 185K. If that something is thermal/non-thermal flux interaction, RHC may be the better approach in the end to fill in some gaps, or I am just nutz :)

30. Dr. Curry – What a great site where the layman and the scientist can converse, and reputation and stature must be checked at the door. So much to learn for layman… and scientist. Great post!

31. Judith, this thermodynamic lesson misses many areas that were not taken into consideration: No mention whatsoever of water loss. Different velocities missed. 90% of the planet being water and the differences. Angles of solar energy from the sun. Planetary tilting. Centrifugal force energy. Differentiations of gases in heat exchange. Planetary slowdown. HOW DID THEY GET THIS SINGLE CALCULATION ON AN ORB THAT HAS MANY DIFFERENT PHYSICAL PARAMETERS???

□ If this single calculation used averaging, then the planet it describes is a cylinder and NOT an orb. Just try reapplying that calculation back on an orb and it is totally different from the original data taken.

32. Entropy is not original sin, chaps! I believe some of the more, shall we say, esoteric arguments in this thread were discussed somewhat more entertainingly in this 1956 essay: “The energy of the sun was stored, converted, and utilized directly on a planet-wide scale. All Earth turned off its burning coal, its fissioning uranium, and flipped the switch that connected all of it to a small station, one mile in diameter, circling the Earth at half the distance of the Moon. All Earth ran by invisible beams of sunpower.”

33. Although the title seems to indicate that MEP is something associated with non-equilibrium thermodynamics, it is fair to say that currently accepted formulations of non-equilibrium thermodynamics (TIP, EIT…) have nothing to do with the MEP hypothesis.
The author acknowledges in that article that: “However, the theoretical foundation for MEP is still work in progress, and some deficiencies have been pointed out.” However, in his book http://books.google.com/books?id=YRjfuEP_QycC&pg=PA42 Kleidon provides a more accurate review of the real status of MAXENT and MEP. In that book he also presents a “MAXENT derivation of MEP”. There is a serious problem, however: MAXENT has been systematically shown to be wrong by thermodynamicians (e.g. by members of the Prigogine school such as Radu Balescu). I find many other interesting points in the Kleidon paper: (i) He cites “Prigogine’s principle of minimum entropy production”, when this is not a principle but a proven theorem. It cannot be naive ignorance, because in another part Kleidon cites the Kondepudi and Prigogine book, where the theorem is proven. Kondepudi and Prigogine also present an extension of the minimum entropy production to nonlinear regimes, but Kleidon does not cite this well-known result. (ii) He says that systems in thermodynamic equilibrium are characterized by S = k ln W. This is only true for microcanonical equilibrium, for which each microstate has the same probability, but the equation is not valid for other kinds of equilibrium. The application of microcanonical results outside of the microcanonical regime has been a characteristic of the MAXENT school which has received strong criticism in the literature. (iii) “The resulting entropy production can be derived from the thermodynamic definition of entropy as dS = δQ/T”. First, in nonequilibrium thermodynamics dQ is an exact differential, not an inexact one as in classical thermodynamics. Second, that is only valid for closed systems; for open systems it is dQ/T + d_matter S. Third, that is not the definition of entropy but the definition of entropy flow, d_e S = dQ/T (for closed systems); the identity d_e S = dS is only valid when d_i S = 0. Fourth, dQ is not “the change in heat content”, but the flow of heat. The change in heat content, in Kondepudi and Prigogine, includes the production-of-heat term. See Kondepudi and Prigogine for details. For a modern definition of heat and comparison with Kondepudi and Prigogine see http://vixra.org/pdf/1111.0024v1.pdf (iv) Equations from (15) to (21) are standard TIP equations. The comments made about (23) are plain wrong. The production term in (2) is always non-negative. dG in (23) is only negative for closed systems at constant pressure and temperature. A simple derivation shows that if G is being considered a thermodynamic potential then dH in (23) cannot be the “change in internal energy and changes in pressure/volume work” but represents the flow of heat with the surroundings. There are more issues, but this post is getting too long…

□ Good to have your insight around.

□ I have spent time studying the proposed principle of maximum rate of entropy production. My impression is that it is a fad. The very acronym MEP advertises to me that it is the province of lazy writers and slick thinkers. Juan Ramón González seems to me to be a serious expert and I would more or less echo his post, though I am not a serious expert like him. I find his post helpful to me. As I interpret it within my limited competence, my reading of experts is that there cannot exist a valid and reliable general principle of far-from-equilibrium thermodynamics based simply on the rate of entropy production, as is the proposed principle of maximum rate of entropy production. Special cases require appropriate special modes of analysis.
One has to deal with the diverse special cases on their respective merits.

□ The main question goes beyond whether I am an expert or not. The same textbook by Kondepudi and Prigogine, cited by Kleidon in his own paper, has a section titled “17.2 The Theorem of Minimum Entropy Production”. His calling this theorem the “minimum entropy production principle” indicates either that he does not know basic nonequilibrium thermodynamics or that he pretends to over-emphasize the hypothetical MEP principle associated with MAXENT. Virtually any textbook in thermodynamics, Kondepudi and Prigogine as well, explains that dG ≤ 0 holds only for constant N, p, and T. Kondepudi and Prigogine devote a page to explaining why it is dQ in nonequilibrium thermodynamics instead of δQ as in classical equilibrium thermodynamics. Why does Kleidon cite a well-known textbook, but then ignore most of what it says? I leave this to readers as an exercise. You are right about far-from-equilibrium thermodynamics. Effectively, outside the linear regime, the production of entropy remains non-negative, but this production only covers the average behaviour, not the fluctuations. Near equilibrium fluctuations are small, but near bifurcation points fluctuations are abnormally large and you cannot approximate the instantaneous rate of production σ̃ by the averaged value σ used in equations (1) and (2) in his paper.

□ Slightly off track, B.H. Lavenda (pages 64-65 of his TIP 1978) says that Prigogine’s proof does not use properly independent variables. This is at the boundaries of my level of understanding. Would you comment on this, and give references and links to criticism, discussion, or development? Judith Curry has my email address.

□ Lavenda starts by criticizing a statement made by Nicolis about the production always being a minimum. Although finally, on page 65, he remarks that the production is minimum for stationary regimes near equilibrium. Any presentation of the Prigogine theorem that I know emphasizes that its validity is restricted to linear nonequilibrium thermodynamics where the Onsager relations hold.

□ Thank you Juan Ramón González Álvarez for your kind response. The objection of Lavenda is to Prigogine’s proposed proof for the case of stationary regimes near equilibrium. Lavenda on page 62 writes that “the principle of minimum entropy production is a by-product of Onsager’s variational principle.” Lavenda writes that Prigogine did not [as he claimed to do, comment inserted by present writer] prove a theorem for a non-equilibrium stationary state, that is to say, for a state finitely far from thermodynamic equilibrium. Instead, according to my (possibly mistaken) reading of Lavenda, Prigogine proved a theorem for infinitesimally small displacements from thermodynamic equilibrium. As I read Lavenda, Prigogine derived only the principle of least dissipation of energy applied to infinitesimally small deviations from thermodynamic equilibrium. These are subtle matters, and call for careful thought and careful statements, especially for non-experts such as me. What I have written just here is compatible with your statement “Any presentation of the Prigogine theorem that I know emphasizes that its validity is restricted to linear nonequilibrium thermodynamics where the Onsager relations hold.”

□ As far as I understand, the Onsager relations require that gradients are small, i.e. that the deviations from equilibrium are small in small volumes, not that the whole system is near thermodynamic equilibrium.
Based on the limited reading that I have done, the controversy is related to this distinction. (Full linearity is reached only at the limit, where all deviations go to zero, but I don’t think that this should be interpreted to imply that large scale smooth deviations from equilibrium should be excluded.) The Wikipedia article on non-equilibrium thermodynamics gives a good impression, but only a real expert might perhaps tell whether it gives a balanced view of various opinions (and the expert should be one that accepts the value of views that differ from his own). The only thing that I dare to conclude with some confidence is that there remain open questions even on the basics of non-reversible thermodynamics.

□ That sounds reasonable for a minimum entropy production regime at steady state. At maximum entropy, deviations should give less entropy, which makes it a less likely ensemble state. Production is a time-derivative rate. Remember that steady state does not imply equilibrium. This is perhaps a naive reading, but it could explain the seeming contradiction between max entropy and min entropy production.

□ Pekka, Onsager linearity assumes a local and fixed linearity between fluxes and potential gradients which prevails throughout the system. His proof of maximum entropy production is also only with respect to flux variations within a prescribed temperature distribution – a constraint infrequently noted. Even with absolute linearity as a given, however, when fluxes are conservative, there remains an issue of total energy dissipation exceeding associated internal flux when system temperature differences become commensurate with their absolute values, unless an additional condition is introduced that energy once dissipated is no longer available for subsequent dissipation within the system. I have yet to discover a proof or even mention thereof, but my resources are rather limited.

□ “He says that systems in thermodynamic equilibrium are characterized by S = k ln W. This is only true for microcanonical equilibrium, for which each microstate has the same probability, but the equation is not valid for other kinds of equilibrium.” This is an opportunity for me to enhance my understanding of entropy as a statistical mechanical concept. Could you cite examples of equilibria in which microstates have different probabilities? How serious an error would result from assuming equal probabilities when applied to climate variables? If we were simply looking at the behavior of atmospheric gases, would the errors be large?

□ Drop an amount of ideal dye into a tank of water. The dye will disperse to uniform concentration, giving a constant value of W over the XYZ coordinates of the volume. This is a Maximum Entropy condition according to the p*ln(p) definition. Microstates of different probabilities would occur if the dye was charged in an electric field or had differential buoyancy with gravity. That adds a constraint to MaxEnt, modifying the probability result. How it gets there is a Maximum Entropy Production problem. One could just as soon use a master diffusion equation to try to solve the dynamical time evolution, but the production advocates suggest that there is an easier way, akin to plain old max entropy alone.

□ Fred, for a system in equilibrium the relative probabilities of microstates are proportional to exp(-E/kT). In a microcanonical ensemble all microstates have the same energy and are therefore equally probable in equilibrium. In a canonical ensemble the energy varies and therefore the probabilities vary as well.
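A minimal numerical sketch of that canonical weighting, with arbitrary toy energy levels and temperature (nothing here corresponds to any particular physical system):

```python
import numpy as np

k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # illustrative temperature, K

# Arbitrary toy energy levels, expressed in units of kT (nothing physical is implied).
E = np.array([0.0, 1.0, 2.0, 5.0]) * k * T

# Canonical (Boltzmann) weights: p_j proportional to exp(-E_j / kT).
w = np.exp(-E / (k * T))
Z = w.sum()               # the 'normalization' constant Z
p = w / Z

for e_over_kT, pj in zip(E / (k * T), p):
    print(f"E_j = {e_over_kT:.1f} kT   p_j = {pj:.3f}")

print("sum of probabilities:", p.sum())
# If all E_j were equal (the microcanonical case described above),
# every p_j would come out identical.
```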
□ Web said, “Microstates of different probabilities would occur if the dye was charged in an electric field or had differential buoyancy with gravity.” Is this a light bulb moment?

□ Perhaps for dim bulbs. I might add that this can be used to solve the atmospheric lapse rate expression to first order by invoking an average gravity head as the constraint.

□ For the attitude, you will still not be receptive, but what the heck. The 4C boundary layer, i.e. the maximum density of salt water, drives the ocean circulation. It is not a MaxEnt problem. The temperatures, viscosity and possible geomagnetic potential in the Antarctic atmosphere make it not a MaxEnt problem. Do you agree?

□ Concerning the lapse rate, it must be taken into account that it is a property of a stationary non-equilibrium atmosphere, and any valid derivation must be consistent with that.

□ To Pekka – Thanks. I wasn’t challenging the statement that assuming a microcanonical ensemble and equal probabilities of microstates introduces errors. I was interested in the extent of those errors – e.g., the variability of E for a given T as applied to climate states. For a given macroscopic volume of gas, how much deviation from equal probabilities is likely? This is just for my own interest, because I think the earlier comments on the MEP principle were sufficient to raise doubts about its ability to determine real world behavior. My interest here related more to my interest in interpreting macroscopic data on the basis of probabilities, which for me has great intuitive appeal.

□ Someone has suggested that the main ocean thermocline shape is a Maximum Entropy configuration. So you will have to argue with them.

□ Fred, in statistical mechanics the number of particles and degrees of freedom is usually huge even in the smallest macroscopic volumes. Therefore the relative deviations from the average are extremely small. Basically it’s expected that the microcanonical, canonical and grand canonical ensembles have essentially identical macroscopic properties, when the energy of the microcanonical ensemble is chosen to be the same as the average energy of the canonical ensemble and the number of particles in the canonical and microcanonical ensembles is chosen to agree with the expectation value of the grand canonical ensemble. The ensemble is in many ways similar to a set of separate volumes in identical circumstances, e.g. small neighboring volumes of gas, but formally the concept of ensemble is different. There’s a lot of literature on the relationship of ensembles to states of the same volume at different times or to the separate volumes at a specific time. From the short paper of Juan Ramón González we can also read that even nonequilibrium thermodynamics is based on the assumption that it’s possible to consider small volumes in local thermal equilibrium. By all the above I want to say that the variability between states of the canonical (or grand canonical) ensemble is not closely related to the differences between small volumes with even slightly different macroscopic properties.

□ Web, I didn’t think I was being argumentative. I just noted your tone was not particularly receptive. From what you didn’t say, I assume that you think that the Antarctic atmospheric part of the question might have a different answer. I mentioned quite some time ago that I firmly believe that the Arrhenius CO2 equation falls apart at lower temperatures.
I mentioned before the convergence of the 4C boundary and the change in the conductive properties of the atmosphere at lower temperatures, -20C is the peak. There is a lot happening in the Antarctic; MEP is a step in the right direction, but there are some oddities I don’t know how it can handle.

□ Cap’n, with all due respect, I still think your thought process is borderline insane. Let me parse your sentences to show how you conflate principles into a confusing mix of nonsense. “From what you didn’t say, I assume that you think that the Antarctic atmospheric part of the question might have a different answer.” So you are talking about the atmosphere and not the ocean thermocline now. “I mentioned quite some time ago that I firmly believe that the Arrhenius CO2 equation falls apart at lower temperatures.” How can a radiation transfer principle “fall apart” at lower temperatures? “I mentioned before the convergence of the 4C boundary and the change in the conductive properties of the atmosphere at lower temperatures, -20C is the peak.” The 4C boundary has to do with liquid water, not gases. And as we all know, the conductive properties of gases at atmospheric pressure are minimal in comparison to the convective properties. So I can’t see how this liquid thermocline boundary can have any impact on the twice-removed conductive properties of the atmosphere. I sense that these seem to all be random thoughts that are not pieced together in any intelligent way. The only reason I am trying to help you is that I have an interest in communicating scientific research in more intuitive ways. The last book I wrote is chock-full of applications of the Maximum Entropy Principle to various environmental and natural processes. My partial list of MaxEnt derivations is described in this Google Spreadsheet: http://bit.ly/wZIwnY This list continues to grow, and an interesting one that I am working on is deriving wave energy spectra, which comes out very cleanly from a simple first order energy model and maximum entropy dispersion of the energy content in a wave. That illustrates my goal — to try to convey what many would think are complicated concepts using some rather simple mathematical concepts relating to disorder in our environment. Cheers, and I seriously hope that you can try to create some order out of all those seemingly random thoughts that are racing through your mind.

□ A simple example would be a closed system at thermal equilibrium with a heat bath. The probability of a microstate j is given by exp(-E_j/kT)/Z, where E_j is the energy associated with a microstate and Z a ‘normalization’ constant. Consider an atmospheric element of volume at equilibrium with its surroundings. The different microstates have different probabilities. If you were to assume that any accessible microstate is equally probable, then the element of volume would have the same probability of being in a microstate with energy and composition equal to that measured as of being in a hypothetical microstate containing the energy and matter of the whole atmosphere!

□ Pekka Pirilä: My paper states that (i) the TIP formulation of nonequilibrium thermodynamics assumes local equilibrium (not microcanonical) and (ii) this assumption does not work for large gradients or fast processes. The EIT formulation of non-equilibrium thermodynamics does not assume local equilibrium and can study those processes.
My paper cites relevant books on both.

□ “The probability of a microstate j is given by exp(-E_j/kT)/Z, where E_j is the energy associated with a microstate and Z a ‘normalization’ constant.” So using Jaynes’ formulation for Maximum Entropy, he defines entropy as the expected value of the (-) log of probability considered over all states. In this case, the E_j range from 0 upward, so the final result is E_mean/T, which is the thermodynamic result for the ensemble. Then, working backwards from E_mean, one can get back the probability distribution by applying variational techniques to find the maximum entropy result. Perhaps that is the mathematical contrivance that Jaynes discovered. The MaxEnt principle seems to reduce to the classical statistical mechanics and thermodynamics in a very convenient way, and scientists and engineers just find this way of thinking practical and useful.

□ WebHubTelescope: The canonical ensemble has been known in physics since Gibbs’ epoch. And that thermodynamic entropy is a maximum at equilibrium has been known since Clausius’ epoch, approximately. Nor was Jaynes the one who defined entropy as “the expected value of the (-) log of probability considered over all states.” This too was done by Gibbs and is known as the Gibbs entropy. Maybe you are confounding Jaynes’ theory with the standard and well-tested classical thermodynamics and equilibrium statistical mechanics. I do not know if this is the case. To be clear, what Jaynes does in his theory is to assume that entropy is always a maximum, also at non-equilibrium states, which is not true. Moreover, Jaynes’ entropy is an informational entropy, which is different from the physical entropy used in thermodynamics and statistical mechanics. Your claims about the popularity of Jaynes’ theory were already corrected.

34. It is nice to see that where MEP is concerned, different schools of thought are recognized and respected. The same cannot be said for the climate debate, and this is a good measure of the politicization of the science. It is all that many skeptics are asking for,
An academic search of “Minimum Entropy Production” returns more than 2600 results; almost all are works in chemical physics, physics, thermodynamics, meteorology, applied physics, engineering…, and 100% deal with nonequilibrium regimes.

□ So is this an issue of whether Maximum Entropy is useful for solving problems, versus whether it is not useful for describing physical behaviors? This is a quote from a reference book with a section on applying Maximum Entropy priors, “Statistical decision theory and Bayesian analysis” by James O. Berger, 1985: “Of course, in many physical problems available information is in the form of moment restrictions (cf. Jaynes (1983)), in which case maximum entropy is enormously successful.” This is really about reasoning under uncertainty, or with limited information, which is what much of science, and one of the main themes of Climate Etc, is all about. We have physical models of the environment, yet these models are not complete and can have aleatory uncertainty. The principle of Maximum Entropy allows us to fill in some of the gaps. So I suggest that it is useful both for modeling (filling in the unknowns) and for solving problems (estimating the unknowns).

□ WebHubTelescope: About your citation of Berger from an old textbook on statistics: most textbooks on physics, chemistry, biology, and engineering do not use Jaynes’ ideas but the physical, chemical, biological… theories developed and tested in labs over centuries. The citation is ambiguous because maximum entropy methods were not invented by Jaynes, but have been used in physics since Clausius. Nobody doubts that entropy at equilibrium is a maximum, and techniques to exploit this fact are well-known and used in ordinary textbooks on equilibrium statistical mechanics, for instance. Jaynes’ main idea is that entropy would also be a maximum outside equilibrium. It is this idea which has been rejected. And even when his theory is cured with the kind of ad hoc corrections denounced by Balescu (one of the fathers of the first formulation of non-equilibrium statistical mechanics), there is no way to obtain the evolution equations for the non-conserved variables, making Jaynes et al.’s theory essentially useless for many problems of physical, chemical, or biological interest.

35. There are a few URL links, and links-to-links, here. Some applications of the approach seem to be open to question. I can move all the links over to here if the hostess prefers.

□ I must be really missing something, because this whole discussion of MEP leaves me mystified – and I’ve taught thermodynamics. Viewing the Earth system as a whole, the entropy production problem in steady state is defined simply by the entropy difference between outgoing radiation (at the effective radiating temperature 255 K) and incoming radiation (at an effective solar surface temperature around 5000 K). At steady state nothing else is changing, only incoming radiation and outgoing radiation, right? So what is there to maximize? The number is unchanging and determined by the first law energy balance (at steady state). I.e. given any process that converts a quantity of heat dQ from temperature T1 to a lower temperature T2, the entropy change is dS = dQ (1/T2 – 1/T1). So the overall entropy change per unit time for the planet has to be constrained to be dS/dt = dQ/dt (1/255 K – 1/5000 K). How can it be different?
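(Purely as a back-of-envelope check of that number per unit of absorbed flux; a sketch only. The 255 K and ~5000 K temperatures are the figures quoted above; the ~240 W/m2 absorbed solar flux is an assumed round number, not something stated in this thread.)

```python
# Back-of-envelope entropy production for the planet treated as a steady-state
# converter of heat from ~5000 K to 255 K, per square metre.

T_sun = 5000.0    # effective solar surface temperature, K (figure quoted above)
T_earth = 255.0   # effective radiating temperature, K (figure quoted above)
q_flux = 240.0    # assumed absorbed solar flux, W/m^2 (illustrative round number)

dS_dt = q_flux * (1.0 / T_earth - 1.0 / T_sun)   # W m^-2 K^-1
print(f"entropy production per unit area: {dS_dt:.3f} W m^-2 K^-1")
# About 0.9 W m^-2 K^-1: fixed by the two temperatures and the flux alone,
# which is the point being made above about the steady-state total being constrained.
```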
If the temperature change goes through a series of steps rather than dropping in one jump, the final total change is still given by the change from initial to final temperature. Now if the entropy maximizing principle is intended to apply after the point of absorption by some surface system (i.e. T1 is the Earth surface temperature, not the solar surface temperature), then what is it that constrains MEP from forcing the Earth surface temperature higher and higher and higher? Doesn’t that form of MEP imply an Earth surface temperature as hot as the Sun’s? Not that I expect a coherent answer here, but I just wanted to express how this whole idea makes no sense to me…

□ A good example application is here:

□ Dan, thanks for spotting an online copy of this paper.

□ Author, I was hoping someone would have tried to answer your question by now. I was thinking about the same issue, but not going all the way back to the sun. 1367 W m-2 for 394 K would be my T1 and space at 3 K my T2 for maximum global entropy. I am thinking a multilayer model though, to try and incorporate all of the thermal boundary layers. Then half, allowing for shape, would make the hemispherical T1 = 331 K, which I think should be used for minimum entropy to give a maximum global average surface temperature of 331 K for perfect insulation. Jim Hansen would disagree, since we would not be boiling oceans off very soon :) Individual processes and boundary conditions would set limits within those maximum/minimum bounds.

□ Very nice link Dan. Duke University, power plant models, constructal theory, real work, real entropy, real temperatures, real limits, this is getting to be real interesting :)

36. I don’t see how the MEP principle is applicable to the regulation of albedo. It is regulated somehow after all, basically by the fraction of snow & cloud cover over the surface, which is known to fluctuate in a narrow range. However, a simple black body would surely produce entropy at a higher rate for the same incoming radiation flux than this globe, covered with its haphazardly distributed white patches. Therefore the question is: if MEP holds, why is the Earth not pitch black?

37. The proposed principle of maximum rate of entropy production should have nothing specific to do with Jaynes’ principle of maximum entropy. One is proposed to be a physical principle, the other a rule for constructing probability distributions not necessarily related to physics. If Jaynes’ principle is not giving right answers to physical problems, that is surely due to the wrong use of the principle, not to any alleged invalidity for its proper uses. I think the case against the proposed physical principle of maximum rate of entropy production does not depend on a case against Jaynes’ principle. The case against the proposed physical principle of maximum rate of entropy production is properly stated in strictly physical terms, with no reference to Jaynes’ principle.

□ I agree about the conflation between production and the original maximum entropy principle, which as Christopher says is only a way of constructing probability distributions. This is a partial list of probability distribution behaviors that I have characterized using Jaynes’ originally conceived maximum entropy principle:
Reservoir Size, Reserve Growth, Earthquake Size, City Size, Species Diversity, Crystal Growth, Human Transport, Project Completion, Volatile Investments, Hyperbolic Discounting, TCP/IP Latency, Train Statistics, Wind Energy, Signal Fading, Global Oil Discovery, Bathtub-shaped Reliability Curve, GPS Acquisition, Popcorn Popping, Dispersive Transport in Semiconductors, Porous Transport, Heat Conduction, Labor Productivity, Chemical Reaction – CO2 Residence, 1/f Noise, Classical Reliability.

So with all due respect to Juan Ramón González and his expertise, I am still puzzled by his assertion that MAXENT is “rejected by the immense majority of scientists”. I can side with him on the max entropy production principle, only insofar as it is still a weakly defined concept. My own interpretation is that many of the physical processes that I have characterized follow from rate concepts; and these rates are maximally dispersed in the steady state, leading to the empirically measured probability distributions.

□ Maybe, if you were to read what was actually said, you would discover that the assertion was made by Kleidon himself, who acknowledges how MAXENT has been rejected by the majority of scientists. The reasons for which MAXENT is rejected are well known.

□ Not to nitpick, but please tell me of any physical events that do not have to do with physics? “Maybe, if you were to read what was actually said, you would discover that the assertion was made by Kleidon himself, who acknowledges how MAXENT has been rejected by the majority of scientists.” That chapter you link to was written by Roderick Dewar, not by Kleidon, who appears to be co-editor of the volume. Moreover, that is a common tactic, to say that an approach is rejected by a majority of scientists, as it puts the contrary theory in more of a novel, even revolutionary light. In this case, I think Dewar is exaggerating. I really don’t understand how Jaynes’ Maximum Entropy principle can be rejected outright though. Once maximum entropy is reached, it seems that the extra entropy produced is minimal. A maximum is defined whereby the first derivative is zero — thereby the production rate at maximum entropy would be at a minimum.

□ Yes, you are right, the chapter was written by Dewar. However, Kleidon is editor and in his own chapter he cites the chapter by Dewar in the same volume. You think that Dewar is exaggerating, but he is being sincere. You do not accept why Jaynes’ ideas are rejected, but technical reasons were given.

□ “You do not accept why Jaynes’ ideas are rejected, but technical reasons were given.” I know of a few issues with the Jaynes formulation. For one, the continuous form for probability is problematic and there is a representation dependence for a specific result. Another is that many believe it doesn’t handle fat tails well, hence the research on Rényi entropy. Do those match your concerns? I tried looking up “non-transitive evolution law” but the hits are mainly circular refs back to the Wikipedia article. It sounds like this concern implies that ordering of causal actions or events is important. My personal research involves rethinking the whole CTRW formulation of random walks. I think they are much too complicated and are much easier to explain in terms of disorder in the parameters, and thus in simplifying the stochastic equations. This is probably closer to the ideas you are espousing with respect to non-equilibrium dynamics.

39. The “continuous form for probability” is not a real problem once one acknowledges that nature is discrete.
The problems are the assumption of maximum entropy outside equilibrium, the neglect of relative components to the ‘thermodynamic’ branch f^c used in the distribution, confusion between physical entropy and informational entropy, lack of an evolution law for non-conserved quantities, and more. The concern by Balescu about the lack of a transitive evolution law is as follows. In Jaynes’ theory one starts at t_0 by maximizing entropy subject to certain ad hoc constraints. Now, application of the evolution of the microstates gives a constant entropy, which does not agree with the second law of thermodynamics for irreversible processes. Then Jaynes et al. try to solve this by repeating the maximization process at later times t_1 < t_2 to force an increase in entropy in agreement with observations. This gives a serious problem, since the subsequent maximization processes are quite arbitrary: the one-step evolution from t_0 to t_2 does not yield the same results as the evolution from t_0 to t_1 followed by an evolution from t_1 to t_2.

40. MEP sceptics should consult this: “We show that (1) an error invalidates the derivation of the maximum entropy production (MaxEP) principle for systems far from equilibrium, for which the constitutive relations are nonlinear; and (2) the claim that the phenomenon of ‘self-organized criticality’ is a consequence of MaxEP for slowly driven systems is unjustified.”

□ Thanks for elaborating on the issues. I am not defending the derivation of Maximum Entropy — note that I already pointed out Gian-Carlo Rota’s description of the issues much earlier in this comment thread. The major issue he sees is the “mathematical axiomatization of thermodynamics”. Rota also describes the issues between Shannon (information) entropy and Boltzmann entropy, with the skin-deep surface similarities not enough to keep scientists from using Boltzmann entropy exclusively for statistical mechanics and thermodynamics. I also said upthread that I couldn’t follow Dewar’s work as it appeared to require some circular reasoning. Thanks for the link to Grinstein and Linsker, which I will read carefully. There is a continuing discussion of this topic (deeper mathematical meaning of entropy in the context of dynamics and quantum mechanics) at the Azimuth Project blog, with a new post.

41. May I recommend a 2008 discussion of some of these questions, by Walter T. Grandy, Jr., ‘Entropy and the Time Evolution of Macroscopic Systems’, Oxford University Press, Oxford UK, ISBN 978-019-954617-6?

42. Typo: 978-0-19-954617-6

43. All of you make this way too difficult. The Temperature of Earth has cycled in a narrow range for Ten Thousand Years. Earth has a set point for Temperature. In the past Ten Thousand Years, if the Temperature of Earth got one or two degrees above the set point, it always cooled. If the Temperature of Earth got one or two degrees below the set point, it always warmed. Of all the things that can be used to control Earth’s Temperature, the only one with a set point is Ice and Water. When Earth is warm, it melts Arctic Sea Ice and Massive Snowfall advances ice and increases Albedo. When Earth is cool, Arctic Sea Ice Freezes and reduced Snowfall allows the Sun to melt ice and Albedo Decreases and Earth Warms. This is the only forcing that Earth has that has a set point based on Temperature that is powerful enough and quick enough to have kept the Temperature of Earth Regulated in the bounds of the past Ten Thousand Years.
□ HAP, so varying thermal inertia and hysteresis associated with related system processes in near equilibrium produces pseudo-cyclic fluctuations on varying timescales? Never heard of such a thing :)

□ When the Arctic is liquid, Earth is cooling. When the Arctic is ice, Earth is warming. This is the Thermostat of Earth.

□ This is in spite of orbit cycles, tilt cycles, CO2, Solar Cycles and whatever else you can drive temperature with.

44. Christopher Game: This is what Lavenda says on page 65: “Consequently, Prigogine’s formulation of the principle of minimum entropy production is valid only for small displacements from equilibrium.” Lavenda is not saying that the theorem is not valid, but that it only applies to linear nonequilibrium regimes. As I said, and you quote, “Any presentation of the Prigogine theorem that I know emphasizes that its validity is restricted to linear nonequilibrium thermodynamics where the Onsager relations hold.” Precisely, the section “17.2 The theorem of minimum entropy production” in the Kondepudi and Prigogine book is found in the chapter “17 Nonequilibrium stationary states and their stability: linear regime”. What is the problem?

45. WebHubTelescope: You again cite another book which is not about thermodynamics or statistical mechanics. And the author only cites image recovery as an application of maximum [information] entropy. I already referred to information theory before. You have extensively quoted this author in this thread about thermodynamics. On the same page that you quoted, the author says: “I do not know any thermodynamics.” In a first reading of the rest of that chapter, it seems that the author confounds different entropies: Boltzmann, Gibbs, Shannon, physical, informational… Maybe this explains why he does not understand why Shannon entropy is ignored in most applications of statistical mechanics and thermodynamics. But at least he confirms what I have said in this thread: Jaynes’ ideas, MAXENT, MEP and similar ideas play virtually no role in science (physics, chemistry, biology, geology…) and associated engineerings. Sorry, I do not find any “deeper mathematical meaning of entropy” in the Azimuth blog.

□ IMO, all of thermodynamics and statistical mechanics comes about from applying combinatorics via the mathematical shortcut known as Stirling’s approximation. This turns a factorial representation into a logarithm and from that, all of the different entropies fall out. Maybe this is not representative of a physical process, but it is representative of the statistics that can describe variations of a physical process, or the variations in the parameters of a physical process. If you want to say that maximum entropy “plays virtually no role in science (physics, chemistry, biology, geology…) and associated engineerings”, that is your choice and you can use your own methods. However, I am mystified by all the progress that scientists and engineers have made by using maximum entropy techniques for problem solving. Granted, some of these may just be Lagrangian variational techniques known long before Jaynes started applying them with a fresh perspective, but we still have all the evidence of the popular use. Here are a few examples: Geologists use maximum entropy all the time in exploring for natural resources. I have examples of using it for modeling oil reservoir sizes. Biologists use maximum entropy all the time for modeling ecological diversity. I have examples of using it for modeling relative species abundance.
As an engineer who understands physics and chemistry, I use MaxEnt for modeling electrical transport in disordered semiconductors and in modeling crystal growth. The question is how do we recast all these solutions, that I and many other people have branded as MaxEnt techniques, into your vision of a science and engineering theory? There is always a dichotomy between the purity of an approach versus the practical applications of an applied model. We need a little help here in getting beyond this dichotomy. That was partly the theme of the lecture by Gian-Carlo Rota. He said that the issue is one of unifying the mathematics of probabilities with that of the physics.

□ I would be careful to say that statistical mechanics relies essentially on combinatorics, while equilibrium macroscopic thermodynamics does not. Utterly non-equilibrium thermodynamics must, I think, rely also on combinatorics, but the scope of it is still a research program. I think that the account given by Solomon Kullback (Information Theory and Statistics, 1958) is often not used when it would help considerably. It makes more sense of the customary “Shannon” entropy.

46. WebHubTelescope: It is not true that “all of thermodynamics and statistical mechanics comes about from applying combinatorics via the mathematical shortcut known as Stirling’s approximation.” Combinatorics and Stirling approximations play no fundamental role in nonequilibrium thermodynamics and nonequilibrium statistical mechanics. The reasons are well known: equilibrium is essentially statistical, nonequilibrium is essentially dynamical. This is another reason for which the ‘statistical’ ideas of Jaynes et al. have often failed in the science and engineering of nonequilibrium systems. Combinatorics and Stirling approximations play an important role in equilibrium statistical mechanics of large systems (macro scale). For small systems (nano scale), Stirling is not used. Combinatorics and Stirling approximations play little or no role in classical thermodynamics. You can give an entire course in equilibrium thermodynamics without even mentioning them, although they can be mentioned in some appendix dealing with the microscopic foundations of thermodynamics. It is not true that “all of the different entropies fall out”. Boltzmann, Gibbs, and Shannon are different entropies; the first two are physical and are found in textbooks of physics and chemistry, the last is informational and found in textbooks dealing with information theory. Some people confound them, like the guy you cited, but he acknowledged “I do not know any thermodynamics.” Your words: “If you want to say that maximum entropy ‘plays virtually no role in science (physics, chemistry, biology, geology…) and associated engineerings’ that is your choice and you can use your own methods.” are a gross misinterpretation of what I said. First, my exact words were: Jaynes’ ideas, MAXENT, MEP and similar ideas play virtually no role in science (physics, chemistry, biology, geology…) and associated engineerings. Second, by “Jaynes and similar ideas” I already explained to you that I mean his theory that entropy must be a maximum at nonequilibrium. Third, I already explained in this thread that maximum entropy methods have been known in the science of equilibrium since Clausius and were applied by Gibbs et al. to equilibrium statistical mechanics. Jaynes does not have a monopoly on the methods. Jaynes merely believes that he can extend Gibbs’ methods outside equilibrium. And it is this belief which has been criticized by scientists.
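On the Stirling point a few paragraphs up, a quick numerical check (the sample sizes are arbitrary): the simple form N ln N - N tracks ln N! closely only for large N, which is the macro (large-N) versus nano (small-N) distinction just described.

```python
import math

# Compare ln(N!) (exact, via lgamma) with the simple Stirling form N ln N - N.
for N in (5, 50, 5_000, 5_000_000):
    exact = math.lgamma(N + 1)            # ln(N!)
    stirling = N * math.log(N) - N
    rel_err = (exact - stirling) / exact
    print(f"N = {N:>9}: ln N! = {exact:.6g}, N ln N - N = {stirling:.6g}, "
          f"relative error = {rel_err:.2e}")
# The relative error is large for small N and shrinks as N grows,
# i.e. the approximation is a macro-scale (large-N) tool.
```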
Either you deliberately pretend to confound MAXENT and Jaynes’ ideas with any maximum entropy method used in science (e.g. with standard methods in equilibrium statistical mechanics), or you are deliberately ignoring what the MAXENT literature itself says about the lack of popularity and use of Jaynes’ ideas. Two different textbooks acknowledging that MAXENT and Shannon entropy are almost ignored by scientists and engineers were given here. I repeat the quotes: “Jaynes’ MAXENT formulation of NESM has for so long failed to be accepted by the majority of scientists” and “Boltzmann entropy is the one that dominates in applications to statistical mechanics and to thermodynamics.” Your response was: “Moreover, that is a common tactic, to say that an approach is rejected by a majority of scientists, as it puts the contrary theory in more of a novel, even revolutionary light.” However, both textbooks are by authors who support Jaynes/Shannon ideas; the textbooks are not by people working in rival theories; therefore, you cannot accuse them of being biased against Jaynes/Shannon ideas. I am a scientist, whereas you are not; but if you want to believe that in your own universe MAXENT and Shannon entropy are used each day in scientific labs (or in chemical engineering labs), or if you want to believe that the ideas that you endorse give “deeper mathematical meaning”, that is all OK with me.

□ Well, Juan Ramón González claims I don’t know science. I know that science is a long tough slog, but I am not ready to withdraw my doctoral dissertation quite yet. You really should consider participating in the http://azimuthproject.org blog discussion. This topic is worthy of discussion over there.

□ Remark to Juan Ramón González Álvarez. I find your admirable article linked above, entitled ‘Non-redundant and natural variables definition of heat valid for open systems’, to be very valuable and helpful. Haase is not in my local university library and it will take some time for me to get a copy. Also for library reasons it will take me some time to get a copy of Balescu 1997. (Balescu 1975 does not mention Jaynes.)

□ By Balescu 1975 I suppose that you mean “Equilibrium and nonequilibrium statistical mechanics”. I do not remember now whether he mentions Jaynes or not, but probably he does not even mention him, because his ideas are not needed (or are wrong) for developing statistical mechanics, kinetic theory and thermodynamics. Balescu 1997 only mentions Jaynes in a final chapter where he discusses grand theories of irreversibility. He reports theories that work and are used each day by scientists and engineers, he discusses some recent advances made by Prigogine and coworkers, and finally he reports Jaynes’ maximum entropy theory as a theory with problems that has not given any new result.

□ Thank you for your reply. I think that irreversibility is explained epistemically. Irreversibility is reported in macroscopic thermodynamic accounts but not in purely mechanical (no statistics, only Newton’s laws and the like, describing every detail for every “particle”) accounts. The purely mechanical accounts, as I read them, have perfect knowledge of all the details of all the “particles”. The Laplace idea is that they can give perfect predictions of every detail supposing they are given perfect initial data and exact dynamical laws such as Newton’s. Macroscopic thermodynamic accounts have only statistical or smoothed information, which is by definition less informative than the complete and perfect data of the purely mechanical accounts.
Consequently macroscopic thermodynamic accounts cannot give perfect predictions of every detail. As it is attempted to predict further into the future or to retrodict further into the past with imperfect initial (time zero) data, the lack of detailed information in the data sees more and more accumulated error of detailed pre-(retro-)diction. The entropy, suitably defined, tells how far short of perfect detail is the macroscopic account. The result is that a macroscopic prediction of a macroscopic thermodynamic account shows increasing entropy as the time interval of prediction increases. This is another way of saying that the prediction of microscopic details gets worse as the interval of prediction increases, and this is due to ignorance expressed in the macroscopic account, not due to irreversibility of the basic laws such as Newton’s. The irreversibility works both ways in time (reckoned from time zero): errors of retrodiction of microscopic detail are just as necessary as are errors of prediction of microscopic detail. The ignorance is not a dynamical factor, it is only an epistemic commentary. An attempt to use an epistemic commentary as if it were a dynamical factor is of course ridiculous. As I understand Balescu, he is pointing to examples of attempts to use epistemic commentary as if it were a dynamic factor, and of course I agree with Balescu (as I read it here) that such attempts are ridiculous. Balescu (as I read it here) is saying that Jaynes’ epistemic principle has not produced new results in statistical mechanics (which of course has a great dynamical content, in addition to the epistemic step in which the molecular detail is eventually translated into macroscopic quantities for the macroscopic thermodynamic account); I would not dispute that statement of Balescu. Balescu 1975 gives a definition of irreversibility, on page 420. Irreversibility in that definition requires particle interactions, such as collisions. With perfect data and exact dynamical laws, the collisions would not have their unpredictability. Imperfect data about the initial trajectories of a collision leads to greater imperfection of predicted final trajectories even when the dynamical laws of the collision are exactly known. The epistemic commentary describes this as loss of information or increase of entropy due the collision. No one has to read the epistemic commentary if he doesn’t find it interesting. (Of course the idea of perfect data is different for quantum mechanics, but that is regarded as a physical factor, not as an epistemic □ Second law and entropy in thermodynamics are not only statements about information; they are about what happens to macroscopic physical properties of the system being considered. The difficulties in putting together fully deterministic dynamic equations and the principles of statistical mechanics have troubled theoreticians for long. What is ergodicity, how to resolve the black hole information paradox, the list of problems is very long. The difficulty is in a sense exemplified also by an experience from my time as a student. We had a mathematics professor, who decided that he should give a special course on the mathematical foundations of statistical mechanics. A thin 180 page textbook Mathematical foundations of statistical mechanics by I. Khinchin was selected as textbook for the course. Being a mathematician, our professor got stuck very early in the book in the problem of measuring phase space volumes and in the Liouville theorem. 
The whole one semester course was spent on these issues, i.e. in background for the first 20 pages of the book. This story is directly connected with the paradox that every single state is equally likely in a microcanonical ensemble. Every state by itself has zero entropy and according to the ergodic theorem also states that represent extremely unlikely macroscopic configurations will occur when given enough time (all gas molecules in a room will be located in one half, while the other half is empty, as an example). This paradox is not a paradox for the information theory application, but it can be considered such for thermodynamics. □ Response to the post of Pekka Pirilä of January 17, 2012 at 4:01 am. Pekka Pirilä writes: “Second law and entropy in thermodynamics are not only statements about information; they are about what happens to macroscopic physical properties of the system being Christopher agrees. Macroscopic physical properties are properties of the system of interest stated in a particular way, namely in terms of macroscopic variables for the system of interest. The choice to describe the system in terms of macroscopic variables is an epistemic choice, a choice of how to frame a description. An alternate epistemic choice might be to describe the system in terms of microscopic details about every “particle”. With suitable definitions, there is a natural correspondence between the definition of entropy in terms of macroscopic variables and the microscopic one in terms of information about details of “particles”. Pekka Pirilä’s comment just quoted above is another way of saying this. □ As Murray Gell-Mann describes in his book The Quark and the Jaguar, information theoretic approaches are clever ways to reduce complexity, and thus a shortcut to physics problem solving. □ Actually, i have a post coming on Gell-Mann’s Plectics □ WHT, I think that one can find at least three classes of problems where MAXENT -type methods work. First class consists of problems where the law of large numbers determines the results almost totally and all details of interaction have little effect. In these problems MAXENT is valid, but the problems are usually rather easy to solve in many different ways like the barymetric formula. The second class consists of problems, where making some generally true additional assumptions to support the MAXENT principle gives right results. Here the role of the MAXENT principle is less clear, because the role of the other assumptions is also essential. The third class consists of problems that can with care be tweaked to a form, where MAXENT gives good results, not necessarily exact results, but good enough for many uses. Using MAXENT in these cases is, however, more art than science, because tweaking of the setup is not based on solid knowledge, but rather on experience and intuition. I do believe that the cases, where MAXENT has been of practical value belong to this third class. □ Pekka, said, “I do believe that the cases, where MAXENT has been of practical value belong to this third class.” Exactly, engineers accept imperfection and modify equations into useful rules of thumb. I can modify the Kimoko equations and they become a useful tool. The scientist’s job is to prove why. □ The issue is not whether MaxEnt is useful or not, the issue appears as Juan asserts that it has not lead to the discovery of any new physics and that the general technique is mostly a warmed over variational approach that scientists have used for years. 
Frankly, I don’t care if it is not revolutionary and that it has unfairly usurped some other techniques, I have used the MaxEnt principle in many situations as I have described further up in this thread: Now, these couple dozen cases are all documented in my work, many help explain some rather anomalous behaviors, and I reference MaxEnt at least once in each argument. Would someone like Juan want to offer up an explanation where I went terribly wrong in building up ideas based on my readings of Jaynes, Gell-Mann, and others ??? 47. With respect, I don’t think it is part of the probability theory maximum entropy principle of Jaynes that “his theory that entropy must be a maximum at nonequilibrium”. It may be that some writers have put such an interpretation on Jaynes’ principle, but I think that such an interpretation does not rightly read Jaynes’ principle, which in itself has no physical content at all. I see a direct reading of Jaynes’ principle into a physical question like the proposed principle of maximum rate of entropy production as a travesty of physical reasoning and a travesty of probabilitistic reasoning. At first I thought the proposed physical principle of maximum rate of entropy production seemed like a good idea, but that got not the slightest support from my view of Jaynes’ principle. On reading more I grew to think that the proposed physical principle of maximum rate of entropy production could not be generally valid. That did not affect my view that Jaynes’ principle has its valid place in probability theory. I see no reason why Jaynes’ probabilistic principle should not be used to prove that the proposed physical principle of maximum rate of entropy production cannot be generally valid in physics. There is a logical link between physical entropy and information theoretic entropy, but they are of course of different natures. Planck was a strict macroscopic entropy man (no statistics) till he was persuaded to change his mind by his discovery of his law of thermal radiative spectrum, which was a turning point from classical physics as he had previously known it. But that physical link does not mean that the probabilisitic principle has a simple and direct application in support of any proposed physical principle of maximum rate of entropy production. People like to criticize and passionately reject the Jaynes principle because they passionately dislike his logical approach to probability theory. Impassioned attacks like that were also made on Harold Jeffreys’ views on probability theory. But both Jaynes and Jeffreys were respectable physicists, and they found their common views on probability theory compatible with their respectable physics. I agree with them on this. In a nutshell, I think that the question of the proposed physical principle of maximum rate of entropy production has nothing essential to do with Jaynes’ principle. I reject the proposed physical principle of maximum rate of entropy production and accept Jaynes’ principle of probability theory. □ Christopher Game, I definitely respect your pragmatic view. Perhaps many scientists resent the audacity that Jaynes had in referring to Probability Theory as the “Logic of Science”? □ The reasons why some or many physicists reject the Jaynes approach to probability theory are to do with their philosophical outllook, best not discussed further here. □ Sorry, but when I wrote that “his theory that entropy must be a maximum at nonequilibrium” I am refering to entropy not to entropy production. 
What I have said is correct. Jaynes writes that the Principle of Maximum Entropy is sufficient to construct ensembles representing a wide of nonequilbrium conditions and has tried to present his theory as a the extension of Gibbs’ formalism to irreversible processes“, but this theory has failed by the reasons stated before. 48. One reason why I’m enjoying this thread is that it has helped me deepen my understanding of a topic that I know at only a very general level. I’ll try to continue this process by asking a few questions. Let me start with a statement by Juan Ramon Gonzalez – “equilibrium is essentially statistical, nonequilibrium is essentially dynamical. This is another reason which the ‘statistical’ ideas by Jaynes et al have failed often in the science and engineering of nonequilibrium.” I’ve read some of Jaynes – I don’t know his work well enough to know exactly what is being referred to here, but let me continue to explore the possibility that probability theory can be applied in some way to the dynamics of non-equilibrium entropy production. Corrections to any misconceptions will be welcome. Imagine a large empty box into which a demon inserts a huge number of energized molecules into an upper right hand corner. Based on probabilistic considerations, the molecules will assume various microstates that eventuate over a time interval (based on their energies) in a macrostate encompassing a range that is highly probable because the total number of individual microstates adjusted for their relative probabilities greatly exceeds all combinations outside of that range. That macrostate, and fluctuations that maintain that range at almost all times will constitute an equilibrium state. Question 1: Given the initial conditions, can we not predict the rate at which equilibrium will be approached, based on probabilistic considerations? Even if the asymptotic portion of the change is a problem, the earlier phases should be better determined, should they not? Assume now that as the molecules are moving away from the corner toward more probable macrostates (i.e., characterized by a larger number of microstates), the demon sucks molecules out of the box at random locations and replaces them with a new batch of molecules in the upper right hand corner. The process described earlier will be repeated, with these molecules spreading out toward more probable macrostates. Imagine now that this phenomenon is continuously repeated, so that molecules are always being removed from the entire box contents and being replaced in the upper right hand corner. At some point, I assume a steady state will ensue in which the rate at which the well distributed molecules are removed is balanced by the rate at which new molecules are moving toward an equilibrium distribution. Question 2: Are these rates not a characteristic of a process that can be described in probabilistic terms? Would these rates not be determined by the state of the system, so that rates outside of a particular range would be highly improbable? Obviously, the above very general questions relate to the maximum entropy production principle, but I’m not suggesting that the MEP principle is proved by this logic. In addition, all the comments in this thread as well as my own reading leave me with much doubt that the MEP principle (which must address more than simple gas-filled boxes) can yield enough predictive information to be of practical value – that seems to be unlikely. 
Rather, my questions relate to whether MEP production can have a theoretical basis in probability. □ Interesting. I think the issue is how well the statistical method compares the brakes. Different processes would have different inertia. How do the “brakes” compare? The over shoots/ under shoots with respect to each other is the most important dynamical consideration. That is the main non-linear issue in my mind. □ Web said, “My strong assertion based on applying these techniques is that many of the power law observations attributed to critical phenomenon are in actuality disorder in the physical space. In many cases I can agree with that, one of the reasons I used the tides and currents example. Note there that you approximate a base width large for larger waves. Very true, statistically. Rogue waves though, the base narrows to increase the height. So the NOAA radio predicts significant wave height, the average of the highest 1/3 of the waves with waves greater than twice the significant average being possible. Now, let’s say I use a similar method to predict the potential warming due to CO2. I use a method that predicts the maximum impact. I am now predicting the rogue waves. They can happen, but they are twice the average of the significant mean. That is valuable information, but more valuable is the probability of that perfect wave. That is why I say that the Arrhenius CO2 equation is misrepresented. Once you allow for real world coniditions, the disorder, the tropopause instead of the assumption that the TOA would have been the tropopause, you get closer to the reality, Arrhenius attempted to predict perfection. So climate science is in some ways mixing apples and oranges, much like linear no-threshold models tend to do, perfection should be compared with perfection, not averages and definitely not smoothed averages. □ Fred, you are on my wavelength. I use maxent for just that reason, to create a maximal spread of rates corresponding to the physical characteristics of the system (mean rate, etc). This is a very easy way to model the physical behavior of dispersion within a number of different contexts. I especially like to use it in steady state situations where I can average out different initial conditions as you describe with the box example. This can generate distributions of growth values, determined from the integration of rates. I always think back to my high school math problem solving days with this approach — the math and probabilities are easily in reach for a plodder like myself. Even though I think I use this approach correctly, I am positive Juan Carlos Ramirez would not approve. □ I meant Juan Ramon Gonzalez, sorry. □ But it doesn’t consider the brakes. LaPlace’s tidal equations do a good job. They don’t do a great job. To fine tune the tidal predictions you have to consider harmonics. Even considering harmonics, variations in wind direction and velocity can change the actual tide by a significant amount. The oceans absorb energy in the day and discharge a portion of that energy at night. The rates of absorption and discharge vary with the seasons, radiant conditions of the atmosphere, convective circulation patterns and several other smaller but not necessarily negligible factors. So any method of determining the rate of diffusion or dissipation can only be an approximation in a dynamic system. Forget that and you end up with possible maximum values without any indication of the variation from those values that can be possible or even should be expected. 
Comparing two methods, MEP with a simple linear method for example, would give you a better idea of the changes in the rates. Any method can be useful, but I doubt any one will be the best in all situations. □ Cap'n, You are crazy again. I am working on a model of wave energy spectra and the envelope is entirely explained by dispersion of rates given a mean energy of a wave crest. This is like one of those high school math problems: from the height of a wave, derived from the potential energy, one can determine how long it takes to drop. The frequency is the reciprocal of time and that together with the maxent pdf generates the power spectral density. I am in the midst of writing this up and found some really good coastal data taken from sensor buoys to fit the model to. It's a quick derivation but I don't know if it has been done before. Start small and build your way up. □ Web said, "I am working on a model of wave energy spectra and the envelope is entirely explained by dispersion of rates given a mean energy of a wave crest." Then you have a good example. A tidal crest leads the current slack by 1.5 hours down here, but can vary by an hour. Tarpon fishing, you want to know the slack tide; it is harder to predict than the crest/valley of the tide. So your energy spectra would be great, unless you want to know when the flow changes. □ OK, here is as short a Maximum Entropy derivation as I can give for the wave energy spectra of a steady state wave in deep water. First we make a maximum entropy estimation of the energy of a one-dimensional propagating wave driven by a prevailing wind direction. The mean energy of the wave is related to the wave height by the square of the height, H. This makes sense because a taller wave needs a broader base to support that height, leading to a triangular shape. Since the area of such a scaled triangle goes as H^2, the MaxEnt cumulative property is P(H) = exp(-a*H^2), where a is related to the mean energy of an ensemble of waves. This is enough and we can stop here if we want, since this relationship is empirically observed from measurements of ocean wave heights over a time period. However, we can proceed further and try to derive the dispersion results of wave frequency, which is another very common measure. Consider that, from the energy stored in a wave, the time t it will take to drop is related to the height by a Newtonian relation t^2 ~ H, and since t goes as 1/f, we can create a new PDF from the height cumulative as follows:
p(f)*df = dP(H)/dH * dH/df * df
H ~ 1/f^2
dH/df ~ -1/f^3
p(f) ~ 1/f^5 * exp(-c/f^4)
which is just the Pierson-Moskowitz wave spectrum that oceanographers have observed for years. Now, I am interested in using these ideas for actual potential application, so I went to a coastal measuring station to evaluate some real data. The following is from the first two places I looked, measuring stations located off the coast of San Diego, and I picked the first day of this year, 1/1/2012, and this is the averaged wave spectrum for the entire day: If you want to play around with the data yourself, here is a link to the interactive page: I don't know what more I can add. This is practical Maximum Entropy principle analysis. If you want to attack it go ahead, I will continue to consider using this approach for any statistical phenomena I will come across in the future. □ WHT, What makes your calculation an application of MaxEnt? I think that many people might derive the same formulas having in mind nothing about entropy at all.
To me it appears rather a typical physics based semiquantitative reasoning with little connection to entropy. In your description you mention maximum entropy a couple of times, but give no hint on it’s relevance for the reasoning. □ Pekka, It might be because you can’t see the nose in front of your face. The first relation I showed is a Maximum Entropy prior given an uncertainty in wave energy. Like I said, the physicists problem is that they don’t like the fact that the terminology has been hijacked and used without their permission by the maximum entropy people. So would you prefer to call P(H) = exp(-a*H^2) a Gibbs estimate? Fine with me, but that is just different terminology for the same first-order approximation. □ Here we come back to the question: Does MaxEnt produce something scientifically significant? When it’s applied at simple enough level like this one it doesn’t as there’s nothing new to discover. Are its results scientifically relevant for more difficult problems is a different question not answered by this kind of examples. Approaches that are rather rules of thumb than theories may be very useful, but they make theories only, if they can be formulated precisely for non non-trivial problems and if they give then correct non-trivial answers. I haven’t seen examples of that and Juan Ramon Gonzalez appears to belive that such examples do not exist. □ Pekka, Point me to a first-principles derivation of that wave energy formula I derived. I don’t think you can because it is driven by complete disorder in the mixing and dispersion of waves. I have hypothesized about this in the past. Physicists are always looking for something new because that is a laudable scientific goal. But the everyday occurrence is the mundane what I refer to as “garden variety” disorder. Unfortunately, characterizing this disorder does not lead to Nobel prizes, because like has been said, this is all statistical mush and it doesn’t get at the heart of the physical mechanisms. Well, I really don’t necessarily care about that. What I care about is the natural world and characterizing all the uncertainty that exists in that world. That’s why I have an interest in this blog, because I think it has a similar objective. □ I don’t claim that there are any first-principle derivations, only that similar formulas have been presented without reference to entropy, not necessarily for this specific application, but to many similar ones. The rationale behind those may be quite similat, and I don’t imply that they would be any better. My point was rather that none of these, including MaxEnt, has been formulated rigorously for the particular applications. Such approaches represent physical knowledge, but not in it’s precise form, but rather in the form of semiquantitative explanation of That’s all fine, but not yet science. To make science out of it requires quite a lot more. Perhaps it can be done, perhaps not. □ The range for what we consider science has suddenly narrowed. Math is not a science because it is abstract and does not discover new physical principles by itself. And statistics must just be The fact that this discussion has evolved into a philosophy of Science indicates we are at a standstill. For example, I spent some time incorporating maximum Entropy uncertainties into Fokker-Planck transport equations to characterize electrical transport. This seemed at least somewhat scientific to me, but I didn’t realize that I had crossed the boundary into an unscientific realm. 
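For what it is worth, the wave-spectrum argument a few comments up is easy to sanity-check numerically. What follows is only a sketch of the change-of-variables step: the constants a and b, the sample size, and the frequency grid are numbers I made up for illustration, and nothing here touches the real buoy data. It just shows that drawing heights from the exp(-a*H^2) form and mapping them through H ~ 1/f^2 reproduces the f^-5 * exp(-c/f^4) envelope.

# Minimal Monte Carlo check of the wave-spectrum change of variables.
# Assumptions (for illustration only): survival P(H > h) = exp(-a*h^2) with a = 2,
# and the height-to-frequency map H = b/f^2 with b = 1.  Nothing here is tied
# to any real buoy record.
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 2.0, 1.0, 200_000

# MaxEnt (fixed mean of H^2) height distribution, sampled by inverse transform
u = rng.uniform(size=n)
H = np.sqrt(-np.log(1.0 - u) / a)

# Newtonian drop-time argument: t^2 ~ H and f = 1/t  =>  f = sqrt(b/H)
f = np.sqrt(b / H)

# Compare a histogram of f with the claimed shape p(f) ~ f^-5 * exp(-c/f^4), c = a*b^2
edges = np.linspace(0.5, 3.0, 61)
hist, _ = np.histogram(f, bins=edges, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
shape = mid**-5 * np.exp(-a * b**2 / mid**4)
shape *= hist.max() / shape.max()          # crude normalization, just for eyeballing

for m, h_, s in zip(mid[::6], hist[::6], shape[::6]):
    print(f"f = {m:4.2f}   sampled {h_:6.3f}   f^-5 exp(-c/f^4) {s:6.3f}")

The agreement is as close as the histogram noise allows, which is all this little check is meant to show.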
□ Pekka Pirilä is right, nothing of MAXENT was used in WebHubTelescope derivation. That result follows from classical thermodynamics plus Gibbs statistical mechanics methods. □ Aha. Now let me change the parameters so there is a mean energy and a very tight variance about that energy. Or let me ask what happens if the mean is not well defined. The latter would allow one to model the fat-tail long wave dynamics. The exponential PDF no longer fits in to either of these solutions, meaning that we don’t invoke straight Gibbs but something more general in the probability realm. I believe that’s where Max Entropy and other approaches, including Christian Beck’s superstatistics allow for more flexibility in solving problems. My strong assertion based on applying these techniques is that many of the power law observations attributed to critical phenomenon are in actuality disorder in the physical space. Physicists want to believe it is some emergent phenomena, potentially related to some undiscovered phase, and are taught that is what power laws are associated with. But the dreary fact is that most power laws are due to plain vanilla aleatory uncertainty, and these methods are useful to root this behavior out. Now I suppose we will hear about the problems with superstatistics? □ WebHubTelescope, you are who said us that P(H) = exp(-a*H^2), where H^2 is area, was a MAXENT result… Pekka was suspicious and with good reason, because using plain Gibbs P(E) = exp(-beta*E) and using your “energy of the wave is related to the wave height by the square of the height, H”; i.e. E=E(H^2) we obtain P(H) = exp(-beta*E) = exp(-a*H^2) as a pure Gibbs result. Apart from being a trivial problem (as Pekka pointed), you are not using Jaynes MAXENT but Gibbs statistical mechanics. Now, you change completely your argument to let me ask what happens if the mean is not well defined and to The exponential PDF no longer fits in to either of these solutions, meaning that we don’t invoke straight Gibbs but something more general in the probability realm. The first is wrong, since the mean value of any observable is always defined = Tr{\hat{O} \rho}, unless you have discovered some system that does not follow quantum theory, which would be a The second is another non-issue. The exponential form is obtained from the equations of motion, when one ignores the inhomogeneous term, works in Markovian approximation and sets to zero the dissipative part. When we relax those we obtain generalizations, evidently. Long tails, power-laws, non-Markovian corrections, and all that is well-known for scientists. It seems that you have not still digested Balescu’s criticism of MAXENT. The theory fails and when it is cured ad hoc, MAXENT does not give any new result that was not derived before using the scientific theories developed by physicists, chemists… □ Juan, Web’s attempt to force fit MAXENT is not that unrealistic. Its commonality with other physical processes that have acquired other common names is not unexpected. Nature is pretty simple. What separates the applications is the degree of inertia. Gases diffusing have less inertia than fluids etc. Heat transfer from different boundary layers have differing rates and different inertia. Classic equations have to be modified for different applications, because the real world has exceptions. 
Those exceptions are commonly due to changing rates of change in a process, system While finding just the right name and giving the right person the credit is a wonderful goal, the end goal is solving the problem. Let’s call MAXENT WebENT and look at where it fails, then we have a clue of what needs to be adjusted to develop the Curry theory of dynamic non-linear non-equilibrium, pseudo-chaotic thermodynamic systems :) □ Juan, Stepping back for a moment, MaxEnt or Gibbs, I really have no preference either way. I only use the ideas that are available and that help me solve problems. The fact that you said my solution to the wave energy problem is trivial, I am happy with. That was how I was educated in my graduate physics classes, to come up with the simplest most concise derivation as one can. I guess I succeeded in that. I still do have issues with the other things you stated. Fat-tail processes result in PDF’s such as Cauchy and Levy that do not generate a mean value. That is what I was getting at when I said that mean values are not defined. In reality, the tails get truncated so that a mean value does exist in practice. Another example that is very pertinent to the climate change debate is the mean adjustment time of atmospheric CO2, which is either hundreds of years or thousands of years depending on how the tails of the impulse response function are truncated. The tails are obviously from a diffusional response as it goes 1/sqrt(time). And you say this: The exponential form is obtained from the equations of motion, when one ignores the inhomogeneous term, works in Markovian approximation and sets to zero the dissipative part. The Markovian approximation is purely a statistical conjecture so it looks like you have evoked a circular argument here. By making the Markovian assumption of a memory-less process you are making an inference with the least amount of bias (i.e. ignorance of higher order statistical moments), something that Jaynes is criticized for. I routinely interchange my invocation of MaxEnt Principle with a Markov approximation because they really point to the same thing — an intuitive probability concept for solving problems. Now I can invoke MaxEnt, Gibbs, or Markov depending on what aspect of the data I am ignorant about. That is really what we are arguing about, the probability and statistics forming the aleatory uncertainty in observed behaviors. Isn’t it? □ Cap’n raises some interesting points which fit under the heading of aleatory uncertainty. How do we model the diffusion of a trace gas such as CO2 to sequestering sites when we know that the diffusion coefficient can vary by a wide margin considering the varying ocean vs land pathways? The answer lies in modeling a prior in the proper way, and MaxEnt can help with this. Same thing with heat sequestering from the excess radiative forcing from GHG. How do we model thermal diffusion when we know that the diffusion coefficient can also vary quite a bit. Is this important for considering the heat in the pipeline? This is getting away from the way that Axel Kleidon applies maximum entropy principles, but the way I have consistently thought about the problem — that of rooting out the aleatory □ Thanks for that Fred, you said: “I assume a steady state will ensue in which the rate at which the well distributed molecules are removed is balanced by the rate at which new molecules are moving toward an equilibrium The conception of your hypothesis removes gravity. 
So, “I assume a steady state will ensue in which the rate at which the well distributed molecules are removed is balanced by the rate at which new molecules are moving toward an equilibrium distribution”, would need to quantify the rate at which the new molecules are balanced, relative to the different fluctuations now in the range created when the first equilibrium was created by MEP. I surmise, the entropy created during the MEP process of the first theoretical equilibrium process forces potential gravitational energy to the range of its central gravitational point in the field. It must follow, that matter within this range will be driven to a different MEP state of equilibrium each time the process occurs. I’m not fully sure if your principal would be applicable in a three dimensional universe. □ One of the classic examples of maxent is deriving atmospheric pressure with altitude in the simplest approximation. This brings in gravity by incrementally calculating the weight of the gravity head. I have seen this derivation described in many places with the caveat that of assuming both abiabatic or non-adiabatic conditions. □ The barometric formula had been previously obtained from the ideal gas law plus the ‘electrochemical’ potential of classical electrodynamics. This confirms again Balescu criticism about MAXENT: “has not led to any new concrete result”… □ Web, as Juan mentions, the Electrochemical issue is where I am really stuck at the poles. That 184K boundary I have mentioned, approximately 65Wm-2, is in the thermal/non-thermal range. That can fluctuate with the geomagnetic field, gravity (more likely since Venus has roughly the same boundary) and/or atmospheric composition. Ozone depletion or rebuilding appears to change with temperature excursions into the 184K range, note the Arctic now and the tropopause during the bigger El Ninos. That is why I use the surface/tropopause emissivity instead of the TOA to estimate CO2 forcing impact. Being able to measure changes in the maximum local emissivity variation would be nice, but it looks like it would need to be calculated or modeled because of the small changes. With the multilayer bucky model, these impacts would be easier to hone in on to determine relative magnitudes at varying initial conditions. □ Cap’n, I am glad to see a forthcoming post on Gell-Mann’s concept of Plectics. Pay attention on the arguments for simplicity and you might be able to build up a cohesive mathematical argument. As it stands, I don’t think anybody can figure out what you are going on about, and it almost looks like you are taking us for a ride to nowhere. Ultimately, the only way that you will be able to make headway in convincing anyone on your complex thought pattern is to adopt some math formalism. It doesn’t have to be perfect, just enough that someone could follow along or test the ideas on their own. It may be that you are way above the rest of us in intellectual capacity, but your flashy style doesn’t help us any. Multilateral bucky model ???? Give me a break. □ Web said, “Ultimately, the only way that you will be able to make headway in convincing anyone on your complex thought pattern is to adopt some math formalism. ” The math formalism for the processes is known. What is not know is the limits of the math. Circular reasoning to a point, but the diffusion equations reach limits of accuracy as they approach boundary conditions, temperature, pressure, viscosity etc. 
When a process is nearing a limit, near equilibrium, there is instability possible with respect to a related process. The radiant impact of CO2 with respect to the conductive impact of CO2 for example. An accurate calculation of the impact of CO2 is not easy as it interacts with water vapor, itself and the source temperature plus the sink temperature. I do not see a reliable approach to solve that problem to the accuracy required. This is where the model enters the picture. Using the known math to determine the rates and expected impacts of each, variations from the expected provides the information. The model learns based on observation. No different that adjusting diffusions rate in a lab for dopants that tend to lag or out pace the calculated dispersion. That is the reason I modified the Kimoto equation. You use the fungibility of energy to perform band by band energy transfer calculations. with the Bucky sphere model, Together, the two provide a way to locate the significant unknowns, either in the limits of the equations/assumptions or unsuspected things like the stupid 184K boundary which I can’t tell if it is real, happenstance, instrumentation error or a combination. As far as I know there is no known mathematical solution for the combination of non-linear dynamic interrelations in the Earth climate system. This is just a way to find out if there may be one :) 49. We are not discussing if probability theory applies or not to non-equilibrium. Of course, it applies. What we are emphasizing here is that Jaynes MAXENT theory and similar developments as MEP are ill-founded and/or fail. Question 1: No. Because the rate at which equilibrium will be approached is given by the relaxation function/operator, which is at least second order in the coupling constant in the interaction Liouvillian, which itself is a function of the interaction term in the Hamiltonian. Question 2: No. If the system is in a stationary state, then the free motion term is not zero, the dynamics is not trivial, all the microstates are not equally probable. To obtain the rates you must solve the dynamical equation, with the constraint that the dissipation term must equal the free term. □ Even I understood that. □ Juan said: “No. Because the rate at which equilibrium will be approached is given by the relaxation function/operator, which is at least second order in the coupling constant in the interaction Liouvillian, which itself is a function of the interaction term in the Hamiltonian.” Markus then said: “Even I understood that.” Yet, elsewhere in this thread Markus said: “Pekka, I only know of basic understandings, I am not a scientist” Something does not square with those two statements, and I have a feeling that Markus’s understanding is at the level of “Yes, that sentence parses, and the grammar is correct, so I understand that it is a statement of some sort”. But beyond that he hasn’t a clue, and his support for the Unified Climate Theory is at the most superficial level. I can only conclude that Markus is but an ignorant cheerleader for a bone-headed theory. □ Rah. Rah. Rah. □ Interesting, that you would probably insist climategate emails would be taken out of context, but you have fully conceptualized my separate posts into one. Something does not square with those two inferences. □ “Yes, that sentence parses, and the grammar is correct, so I understand that it is a statement of some sort” Excepting, he knows that the predictive relevance of my syntax is false. 50. 
WebHubTelescope: You affirm that you have no preference for MAXENT, but you have taken a Gibbs distribution and renamed it as MAXENT. Before you said that Jaynes defined entropy as "the expected value of the (-) log of probability", but this was done by Boltzmann and Gibbs much earlier. You also told us that the barometric formula relating atmospheric pressure to height was a typical MAXENT result, when this is a trivial result derived in classical thermodynamics by using the 'electrochemical' potential and the gas law. You told us that MAXENT theory is very popular and used each day by scientists and engineers, but the MAXENT people themselves report that this theory is rejected by the majority of the scientific community and not used. As remarked before, the mean value of an observable is always defined and is given by the formula written above. If you think that you have discovered some system (atmospheric CO2?) for which the mean values are "not well defined" then receive my congrats, because you have discovered a system that, for instance, violates one of the basic postulates of quantum mechanics. A Nobel Prize must be waiting for you… :-) The Markovian approximation is neither "statistical" nor a "conjecture". The Markovian approximation (also Markovian limit or Markovian regime) is a purely dynamical approximation of the dissipative part of the equations of motion. The analytical proof of this approximation is too long but can be checked in the literature, and numerical checks are available also for computational applications. The fundamental idea is that the non-Markovian terms decay within a time scale of the order of t_corr (which for typical macroscopic systems can be so short that it is experimentally inaccessible with current technology); therefore, for any time t >> t_corr, you can set the non-Markovian corrections to zero, obtaining a purely Markovian equation such as the equations of hydrodynamics, of chemical kinetics, or Fourier's law… I would emphasize, once again, that this discussion is not about the utility of "probability and statistics", which is beyond doubt. What we are emphasizing here is that Jaynes' MAXENT theory and similar developments such as MEP are ill-founded and/or fail. □ Thanks for pointing out how approximations are useful. The atmospheric CO2 adjustment time is a real problem. Based on the current models, the mean value of this does diverge. I don't see this as any Nobel prize revelation either, but just a consequence of how marginal probability distributions are set up. The practical fact is that mean values of observables can diverge. Here is an example: what is the maximum entropy distribution for a random variable for the situation whereby we only know the MEDIAN value? Is this physical? I don't know. Is it practical? I tend to think so. □ WHT, You can have an infinite number of strongly different answers for that kind of question by making a change in the variable. Which of them is the correct one? You cannot know the "correct" variable without a deeper theory. No statistical principle can tell the answer. □ That is what I am getting at, which is that it depends on how one marginalizes the parameters. Look to see how I solved that "trivial" wave energy formulation. Then consider a random walk problem. Does one place an uncertainty on the time constant, which is the way that current physics does it? Or does one put the uncertainty on the diffusion coefficient, which may be more realistic? Note that the two have a reciprocal relationship.
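To make that reciprocal relationship concrete, here is a toy sketch. The numbers L, D0 and the time points are mine and arbitrary, and this is not a model of any real system; the point is only that the averaged relaxation behaves very differently depending on whether the exponential (maximum entropy, fixed mean) prior is placed on D or on the time constant tau = L^2/D. Placing it on D produces the fat 1/(1 + t/tau0) tail; placing it on tau does not.

import numpy as np

rng = np.random.default_rng(1)
n, L, D0 = 200_000, 1.0, 1.0
tau0 = L**2 / D0

# exponential (MaxEnt with a fixed mean) prior placed on the diffusion coefficient D
D = rng.exponential(D0, n)
tau_from_D = L**2 / D              # the reciprocal relationship

# the same exponential prior placed directly on the time constant tau instead
tau_direct = rng.exponential(tau0, n)

for t in (0.5, 1.0, 2.0, 5.0, 10.0):
    avg_D   = np.mean(np.exp(-t / tau_from_D))     # analytically 1/(1 + t/tau0)
    avg_tau = np.mean(np.exp(-t / tau_direct))
    print(f"t={t:5.1f}   prior on D: {avg_D:.4f} (expect {1.0/(1.0 + t/tau0):.4f})   prior on tau: {avg_tau:.4f}")

Only the first column develops the slow hyperbolic tail.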
This is the interesting part of modeling disordered behaviors and that is what I try to characterize through my work. Nothing grandiose, but that’s what I do. □ We have more or less agreed already earlier that practical use of MaxEnt involves additional assumptions. I think that we agree also on the observation that these additional assumptions are not based on solid theory but rather on experience and practices that could be called rules of thumb. Juan Ramon Gonzales (JRG) has shown interest on solid physical theory and the kind of use of MaxEnt that I discuss above is of little value for that. If and when the methods are widely valuable in practices, it’s possible to study, why that’s the case and what are the limits of applicability. That kind of studies may lead to scientifically interesting results, but that does require serious work which lead to publishable results. This thread was based on the paper of Kleidon. JRG has presented serious critique on that paper, and so have I. Our critiques are formulated quite differently, but I think that they are related as they both are based on the observation that the approach as formulated is in an essential way based on questionable additional assumptions. I see no reason to expect that Kleidons approach could produce any useful insight to the understanding of atmospheric processes. Where it gives right results, these results are almost certainly already known, where it differs, it’s very likely in error. □ I think this is wrong. I predict that somebody applying maximum entropy will eventually use it to infer missing data that no one will be able to measure. The reason no one can measure these parameters is that the disorder interfering with the observables or the model will make a complete simulation intractable. That is why Gell-Mann’s plectic argument has some authority. I know it is again simplistic but I used this to show the CO2 impulse response. Now I want to apply it to the thermal response. Both of these issues have as a description a “missing fraction” of some expected result. I contend that the missing pieces are best resolved as maximum entropy estimators. For CO2 I adopt a maximum entropy spread in diffusion coefficients for adjustment sequestering, and for thermal response, I assume a maximum entropy spread in thermal diffusion coefficient. This will allow us to estimate how much of the heat is going to the unobservable locations. I am sure somebody has done this analysis before but the hardest part is finding the correct citation. I will try to generate a thermal example over the weekend. □ The problem is that you have to assume. That may lead to good results and may sometimes give even better results than other known methods, but as long as the scientific basis is severely lacking, you’ll never know. Having success in many cases may give reason that results are good even other similar enough cases, but extrapolating further from the earlier successes adds rapidly to the uncertainty as always in fields that are more art than science. (Extrapolating may be questionable also in science, but the limits are typically known better in science.) □ In a perfect world I suppose you can get away without assuming but I do come from a background of experimental semiconductor research. In that discipline, everything is about characterization, be it electrical or spectroscopic or other measurements. You always have to fill in the holes (no pun intended) because you are never counting individual particles. 
You didn't know the density of defects or the contaminants every time. The modeling was done as part of the characterization — i.e., what parametric model fit the curves the best. I really don't see anything foreign to what I have always done; it's just a bigger system and trying to find pieces that I can chew on. In any case, with the thermal example, I will try an extrapolation. I think you are being supportive and I might be naive on certain issues, but that is not necessarily bad. □ WebHubTelescope: Divergences in observables have been known in science since Poincaré's work in classical (non-relativistic) mechanics or even before, and methods to cure them are available. Divergences have nothing to do with MAXENT. I cannot say if your question is physical, practical, or nonsensical because it is rather ill-defined by the usual scientific standards. What median value do we know? The median value of the variable? Of the distribution? What is the variable? The IBEX index is not one of the variables describing the state of a chemical system in a vessel; therefore, whether or not you know the value of this index today is irrelevant when studying the system. Indeed, this index is not even a physicochemical variable and does not appear in the theories of physics, chemistry, biology, geology… Is the random variable that you allude to the strength of a magnetic field? If the system is an ideal gas, its physical state is not given by such a physical variable, again making it useless, and so on… What entropy? Informational entropy? Physical entropy? And if it is the latter, what kind of physical entropy? Thermodynamical entropy? If the response is affirmative, what formulation are you using? Entropy in EIT is not the same as entropy in TIP, for instance. Why do you claim that entropy is a maximum? If the system is at equilibrium (x = x_eq), then its entropy is a maximum as classical thermodynamics predicts, but if the system is found in some non-equilibrium regime, its entropy is not a maximum. If the system is at equilibrium, what kind of equilibrium? Is the system open or closed? Closed or isolated? If it is isolated, each 'microstate' associated with the variable x is equally probable. If the system is closed then this is not true… Why do you assume that the system's state is described by a distribution? Jaynes worked with the classical concept of probability, but we, scientists, usually work in a more general framework. When I wrote above the generalized formula for the average value of an observable I used \rho to describe the state of the system, but this is not a distribution: \rho(x) ≠ P(x). It is only under certain approximations, or in special cases, that a probability distribution P(x) can be derived from \rho and used to reproduce the properties of the system… These kinds of questions may be irrelevant to you, but they are basic for us scientists. □ Divergences have nothing to do with MAXENT. I have no idea what you mean by this statement. Divergence differs from dispersion. I mentioned earlier that I would bring up a thermal example. I have this documented already but let me put a new spin on it. What I will do is solve the heat equation (aka Fourier's law) with initial conditions and boundary conditions for a simple experiment. And then I will add two dimensions of Maximum Entropy priors. The situation is measuring the temperature of a buried sensor situated at some distance below the surface after an impulse of thermal energy is applied.
The physics solution to this problem is the heat kernel function, which is the impulse response or Green's function for that variation of the master equation. This is pure diffusion with no convection involved (heat is not sensitive to fields, gravity or electrical, so no convection). However, the diffusion coefficient involved in the solution is not known to any degree of precision. The earthen material that the heat is diffusing through is heterogeneously disordered, and all we can really guess is that it has a mean value for the diffusion coefficient. By inferring through the maximum entropy principle, we can say that the diffusion coefficient has a PDF that is exponentially distributed with a mean value D. We then work the original heat equation solution with this smeared version of D, and then the kernel simplifies to an exp() solution. But we also don't know the value of x that well and have uncertainty in its value. If we give a Maximum Entropy uncertainty in that value, then the solution simplifies to a form where x0 is a smeared value for x. This is a valid approximation to the solution of this particular problem and this figure is a fit to experimental data. There are two parameters to the model, an asymptotic value that is used to extrapolate a steady state value based on the initial thermal impulse and the smearing value which generates the red line. The slightly noisy blue line is the data, and one can note the good agreement. That is an example of how you apply the Maximum Entropy principle to what is essentially a non-equilibrium problem. I think that such a technique will be useful for a climate model where the thermal impulse due to the forcing function will likely diffuse into the ocean (mainly) and into the earth and freshwater lakes in a highly dispersed fashion. We don't know the diffusion coefficient nor the spatial positions well, so we use the MaxEnt priors which reflect this ignorance (in Jaynes' definition of ignorance). □ WebHubTelescope, After you have ignored the previous post (without answering any of the dozen questions asked of you), now you claim that you want to solve Fourier's law of heat conduction, which you also call the heat equation. You do not write any equation. I will. Fourier's law is q = – kappa nabla T, where kappa is the thermal conductivity and q the heat flux. The heat equation is @T/@t = (kappa/C) @^2T/@x^2 = lambda @^2T/@x^2, where C is the volumetric heat capacity and @ denotes a partial derivative. Then you jump to talking about some "diffusion coefficient" D, but the diffusion coefficient D appears in Fick's law of diffusion, not in Fourier's law nor in the heat equation. Using a bit of imagination, it seems that you call the coefficient lambda a "diffusion coefficient" and denote it by D. In fact, the thermal diffusion coefficient (sometimes denoted by D') is something completely different from kappa and lambda. The main point here is that Fick's law of diffusion, Fourier's law of conduction, and similar laws are only valid in situations where the local equilibrium approximation holds. Since Fourier's law is more fundamental than the heat equation, I will continue with the former. Using standard statistical mechanics methods we can obtain both Fourier's law and the value of the coefficient kappa. As is well known, the value of kappa is computed by using the local equilibrium PDF (often named the Maxwellian PDF), which, of course, has an exponential shape.
But once again this is not a MAXENT result, but the standard equilibrium theory known much before… The standard theory shows that kappa depends on the values of local parameters such as the density of particles n(x). If you do not know the value of those local parameters you could try to obtain an average value n by using the Maxwellian PDF. Of course, you do not need to compute the value of any transport coefficient (Fick's, Fourier's…); you can merely leave it as a free parameter to be fit to the data a posteriori. Precisely in this way experimental values of many transport coefficients have been obtained by scientists and engineers. Evidently none of this is an application of Jaynes' "Maximum Entropy principle to what is essentially a non-equilibrium problem", but an application of the standard theory developed much earlier by Boltzmann, Maxwell… There are other issues in your post. For example, you allude to the heat kernel associated with the heat equation, but the heat kernel associated with that equation is proportional to e^{-x^2/Dt}, whereas you give e^{-x/sqrt{Dt}}. □ There are other issues in your post. For example, you allude to the heat kernel associated with the heat equation, but the heat kernel associated with that equation is proportional to e^{-x^2/Dt}, whereas you give e^{-x/sqrt{Dt}}. Now I understand where you are confused with my approach. Yes, the e^{-x^2/Dt} is the well-known kernel solution I referred to earlier. I probably should have explicitly written that part out. However, the e^{-x/sqrt{Dt}} solution comes about after I applied the Maximum Entropy uncertainty to D. This essentially describes a model where many parallel paths of diffusion occur, corresponding to a heterogeneous environment. Essentially, I am trying to model a very mushy disordered behavior. We can't control nature, but can only attempt to model what it gives us. It looks like we are in complete agreement, as you were able to infer the heat kernel I was basing my premise on. 51. I have got the impression that the words ill-founded and/or fail are not understood in the same way by all participants. It may even be that a major part of the disagreement is due to that and to the related talking past each other. □ Examples of "ill-founded" were given, for example, when Jaynes' MAXENT gives an evolution law which violates transitivity and gives different predictions for the same system and process. Or, for instance, in the paper criticizing MEP, showing how the supposed 'derivations' ignore nonlinearities, and when these are reintroduced the derivation is gone. Examples of "fail" are also in the menu… Since MAXENT ignores the relative component of f^c, it cannot study the stability of systems far from equilibrium, giving wrong predictions. Since Jaynes' MAXENT theory lacks the equations for the non-conserved variables, it cannot make any prediction about the dynamics of those variables and the theory fails to explain phenomena. In my criticism of Kleidon's paper, I showed how his attempt to equate a minus G term with an entropy production term fails miserably when pressure is not constant and the system is not closed. Both requirements were omitted by Kleidon, but are needed for G to be a thermodynamic potential. To be more clear, someone saying that dG < 0 for some system for which dG > 0 would be an easy-to-understand example of the concept "fails". □ So, as a summary, MaxEnt both fails as a method, and succeeds in the way it copies the conventional approach. That sounds like a tie, and so I won't update my usage of the MaxEnt method.
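Since the kernel is where the wires got crossed, here is a bare-bones numerical check of the smearing step. The only ingredient taken from the exchange above is the exponential (fixed-mean) spread in D; the values of D0, t and x are made up for illustration. Averaging the Gaussian kernel exp(-x^2/(4Dt))/sqrt(4*pi*D*t) over that spread does collapse to an exp(-x/sqrt(D0*t)) form, which is the shape I quoted.

import numpy as np

rng = np.random.default_rng(2)
D0, t, n = 1.0, 2.0, 500_000
D = rng.exponential(D0, n)      # exponential spread of diffusion coefficients, mean D0

def gaussian_kernel(x, D, t):
    # standard 1-D heat kernel for a fixed diffusion coefficient
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

for x in (0.5, 1.0, 2.0, 4.0):
    mc = np.mean(gaussian_kernel(x, D, t))
    smeared = np.exp(-x / np.sqrt(D0 * t)) / (2.0 * np.sqrt(D0 * t))
    print(f"x={x:3.1f}   averaged kernel {mc:.4f}   exp(-x/sqrt(D0*t))/(2*sqrt(D0*t)) = {smeared:.4f}")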
I will just have to apologize to all the science grammar police. □ WebHubTelescope: If you ignore totally Jaynes’ own words and redefine MAXENT to be anything that you want then it could be so satisfactory like you want. □ Pekka. Your thoughts are critical, but your reasoning is biased. You are of the genre that will be able to conceive the new paradigm of Unified Climate Theory, whereas, those of Josh’s genre, who cannot think critically, never overcome their bias. □ I try to be open to new ideas, but there’s a lot of physics and related science that I do consider so strongly justified that it’s usually better to disregard the possibility that that will There’s also a lot of such solid background in climate science. Thick books like that of Pierrehumbert are based on such knowledge, but reading such books with understanding reveals that they don’t even try to provide anything close to a full picture of the actual state of the Earth climate. Textbook theories help in understanding, what’s behind observations, but based on them alone the climate could be much more different from the present than we know as true any time in millions of years of history or or than anybody has projected for the future. All “theories” that I have seen proposed as alternatives for the kind of theory that Pierrehumbert presents are most certainly wrong, but there remain many possibilities for improving the understanding without attempting to invent some revolutionary new theory. Unfortunately real improvements are difficult to achieve and one has to first understand all the basics, before any hope of success can be considered possible. □ markus - Please tell me how you’ve reached your ability to reason critically without bias. Is it simply a function of your superior intelligence, or is there some specific technique that you use? □ More a matter, that my bias points towards truth. It is the bias of Skepticism that gives me truthful knowledge □ Pekka, I only know of basic understandings, I am not a scientist You say:.”There’s also a lot of such solid background in climate science. Thick books like that of Pierrehumbert are based on such knowledge, but reading such books with understanding reveals that they don’t even try to provide anything close to a full picture of the actual state of the Earth climate”. And in that statement is reason enough why I don’t read the bible. As with the debunked C02 climate theory, they based their faith on a false premise. Do you believe a man was raised from the dead? □ Markus, I wrote, what I wrote, because I have noticed that you present ideas that I’m 100% certain to be seriously wrong. The Unified Climate Theory is total crap as are several earlier “theories” that have been discussed on this site. I would not be surprised to find out, if Judith’s conclusion is that there’s no need to give more weight to this latest casse of hopeless attempts to discredit theories, which have not led to any reason for being discredited. I think that the new “theories” like UCT are supported by two groups of people, those who just like to oppose the present and don’t care the least about evidence, and those who want to give weight to empirical evidence but don’t realize how huge the total empirical support for the basic physical theories is and how well and unambiguously physicists can apply it to basic understanding of climate processes. 
There simply are no problems in need of new theories in the basics; the need is in the ability to use any theory in accurate understanding of a very complex system. The system is complex, because it’s large, consists of very many processes, is influenced by complex topography, etc., etc. This complexity is something that cannot be removed by any theory. All new theories make their attacks against those features of the present undestanding, which are well known and they do the attacks always making explicitly and provably wrong assumptions or strong assumptions that lack all real empirical or theoretical support, they all are just hoaxes. □ My main point briefly: It’s important to be open to new ideas and it’s important to remember that even rather well confirmed “facts” may turn out to be in error or incomplete in an unexpected way. In this sense one should never feel fully certain. Taking any specific new idea, it’s often 100% certain that they are wrong, because they contradict in an obvious way knowledge based on empirical observations directly or indirectly but still without doubt. The Unified Climate Theory is wrong in this way. I’m sure the right arguments are given on WUWT as well, although I haven’t looked at them carefully enough to tell, which message presents them best. □ Thank you for your thoughts Pekka. I will not bore you by waxing lyrically of my own philosophical perspectives. But, “Taking any specific new idea, it’s often 100% certain that they are wrong, because they contradict in an obvious way knowledge based on empirical observations directly or indirectly but still without doubt”. Without rhetoric, give me the empirical evidence relied upon, to support the AGW theory. No proxies please. I’ve searched without success. “Unfortunately real improvements are difficult to achieve and one has to first understand all the basics, before any hope of success can be considered possible”. Pekka, I would not throw my hat into a ring, unless I knew I was on a winner. My profession is such disciplined. It is my appeal to the authority of my own reasoning. Unlike many here, coached in the scientific method, I have no blinkers attached, I am free to the musing of a open mind. I am able to make my own mistakes and not be embarrassed by them. The basic premise of N&K is this: Kinetic Energy is (forced) employed by Potential Energy until mass re-radiates the employed kinetic energy to space. So Enhanced Energy is the kinetic energy plus the potential energy of mass. The mechanism of conjoining energies causes heating the mechanism of decoupling causes cooling. That is my own very, very short hypothesis. The long one would perplex you, no offence intended. But I am MAD, so none of it is truth. Right? □ I know perfectly well that it appears useless to argue with your views on the net. There isn’t slightist hope that either one of us would tell openly that he has changed his mind to the least, but I always have a tiny hope that something gets through and starts to have it’s effect. I’ve not written anything about AGW theory, neither is the book of Pierrehumbert about AGW theory. We both discuss physics and it’s application in understanding atmosphere. The empirical evidence that I refer to is all the empirical evidence that proves about the validity of understanding of physics. That’s the huge and powerful evidence, whose value is dismissed by all those who promote theories that contrdict with well known physics. 
the need is in the ability to use any theory in accurate understanding of a very complex system. The system is complex, because it’s large, consists of very many processes, is influenced by complex topography, etc., etc. This complexity is something that cannot be removed by any theory. Yes, that is indeed our view of the situation, that of a a very complex system. Yet, the reason statistical mechanics and the coarse-grained thermodynamic concept got invented was to convert complexity into a more simple concept. This is part of Murray Gell-Mann’s Holy Grail of plectics, and part of the rationale for his starting the Sante Fe Institute. Too many people think that Gell-Mann and company intended to study complexity for complexity’s sake, and don’t realize that Gell-Mann’s real interest lies in extracting simplicity from the complex. Consider that besides statistical mechanics and maximum entropy, physicists also rely on concepts of symmetry and group theory to model the essence of a behavior. Yet, should we expect that someone like Juan Ramon Gonzalez to cry foul at the use of symmetry or group theory to model a system, just because it is not real physics? Nothing of the sort, and we should try whatever technique that is in our arsenal to divide and conquer the beast. Sorry, but that was just the way I was educated by my physics professors and thesis advisors. Pekka also mentions the role of complex topography in the system. Coincidentally, that is something I have studied as well through maximum entropy principles. I have actually gone through number crunching the terrain profile of the entire USA down to the 100 meter post level and analyzed that in terms of maximum entropy at different scales. The superstatistics of the distribution is quite interesting. And incidentally, I just discovered that this approach is justified by invoking the Giry monad from category theory, which you can read more about from visiting the Azimuth blog, which is continuing a parallel discussion of the topic of thermodynamics, but at a much deeper and more fundamental level than we are having here: Again this all goes into the hopper of scientific analysis in the hope that we can put this all together and reduce the complexity of the climate system. Some people may consider this hopeless but obviously I and many others do not. □ WHT, The number of components in a system is certainly not enough for making it complex in the sense I mean. A small volume of gas has a huge number of molecules and is actually simple to analyse just because of that, when the boundary conditions are favorable. Adding larger scale structure makes the issue more difficult, but it may happen that the larger scale structure is such that a new statistical simplicity emerges. If that happens, it’s likely that some kind of MaxEnt principle can be developed to give valid results, but the order of inference goes from other knowledge to MaxEnt. When there are larger scale structures to the point they occur in the Earth system with a small number of continents and oceans, there’s no hope that any statistical approach can give a full and accurate description of all important phenomena. In the Earth system we have all scales from subatomic to global. Some subsets of these structures may follow some scaling laws (power laws with some exponents), but that cannot be a full description of the system and that cannot provide full results. Thus we are left with the situation that some rules of thumb apply somewhere, some others somewhere else. 
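(Aside, not part of the original thread: the terrain analysis mentioned above is only described, not shown, so here is a hedged sketch of that kind of calculation on synthetic data. Given only a mean absolute slope, the maximum-entropy density is exponential; the "superstatistics" question is how that mean itself varies from region to region. The regional scales, sample sizes and gamma parameters below are invented.)

import numpy as np

rng = np.random.default_rng(0)
# each "region" gets its own characteristic slope scale (the superstatistics part)
regional_scale = rng.gamma(shape=2.0, scale=0.5, size=200)          # invented values
slopes = np.concatenate([rng.exponential(s, size=500) for s in regional_scale])

mean_slope = slopes.mean()
edges = np.linspace(0.0, slopes.max(), 60)
hist, _ = np.histogram(slopes, bins=edges, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
maxent = np.exp(-centers / mean_slope) / mean_slope                  # single-scale MaxEnt fit

print("mean slope:", round(mean_slope, 3))
print("largest gap between data and the single-scale fit:", round(float(np.max(np.abs(hist - maxent))), 3))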
These can be identified empirically and perhaps also from results of more complete models, but does this observation really help us in deeper understanding of the system. I don’t know any evidence that it □ “The empirical evidence that I refer to is all the empirical evidence that proves about the validity of understanding of physics. That’s the huge and powerful evidence, whose value is dismissed by all those who promote theories that contrdict with well known physics”. Dear Pekka, I am glad that you do not give up hope that some will eventually get trough. We are at equilibrium in that, but I should try to make you should understand, the AGE does not validly employ all that evidence that proves relativity. The theoretical discussion of current climate physics is that it does in fact breach the first law – the creation of energy. A unenclosed GH system can not create energy, as with a closed Get the basic principles right first Pekka and truer results will follow. The roof of our atmosphere green house starts at the surface with the earth and ends with the TOA, u]nlike AGW that claim GHG’s act like a roof over us. So, adding Co2 does change the composition of Whole Of Atmosphere GH, but as the greenhouse works from top to bottom and from bottom to top, this change also effect the amount of incoming energy into the system, rendering the composition of the Whole of Atmosphere GH basically irrelevant. Peeka, we are very likely on the cusp of enlightenment. Imagine that the GH is in fact the force by pressure on mass, not the composition of the atmosphere. N&K haven’t, I repeat, haven’t envisaged a theory and then worked the science. No, they worked the science and then envisaged a theory. They have made a discovery, AGW proponents, by your own reckoning, have only shuffled the deck. □ Pekka, Einstein would be alarmed you put little faith in relativity, and, I would think, be a little bit miffed that his future fellows were overwhelmed by the understanding of it. “When there are larger scale structures to the point they occur in the Earth system with a small number of continents and oceans, there’s no hope that any statistical approach can give a full and accurate description of all important phenomena”. □ Pekka, I do agree that it will break down at the most coarse-grained level — as you say the handful of oceans and continents do create that last bit of determinism that cannot be modeled So there is bad news and there is good news. The bad news is that the probabilistic view breaks down, but the good news is that there are not many oceans to deal with. Here is an old but interesting article on modeling the ocean’s thermocline at a gross level. I think this was published after Paltridge’s work but he does not reference Paltridge. 52. Capt. Dallas: Nature is very very complex, and this complexity has little to see with your “degree of inertia”. 
There are many issues in your posts that I am going to ignore; such as you thinking that gases are different than fluids, when any student knows that a gas is just a kind of fluid… Maybe you are happy if tomorrow someone claims that Newton laws of gravitation were invented by Jaynes in the 20th and are a MAXENT result (or your “MAXENT WebENT”), but I would react when reading such nonsense; specially if such nonsenses are used to overemphasize an ill-defined theory (MAXENT) that has given nothing that was not known before in the physics and chemistry of □ Juan, “degree of inertia” is only one of the complexities, especially as related to the pseudo-chaotic behavior of non-linear systems. I, by the way, do not think gases are different that fluids, gases are fluids, in general though, the properties of gases are different enough from liquids that engineers use a less exact terminology. Terminology is one of the largest barriers of climate science progress, IMHO MAXENT (WebENT) could be a useful tool in a combined method approach to finding IF a reasonable solution is practical. So far, I am convinced that no single approach is up to the task. Personally, I see constructional theory and/or Maximum Entropy Production as more useful, but I am open to Web’s approach, provided he realizes the limitations of that approach. Inertia is but one of those limitations. □ Capt Dallas: “degree of inertia” is not a scientific concept in any fundamental science that I know. It must be a pseudo-scientific concept used by some camp to try to give self-importance. I know that the term is used by economists and in marketing. I cannot know what you think, but I am replying to what you write, and you wrote that “gases diffusing have less inertia than fluids”. It seemed to me that you though that gases and fluids are two different things. If you really think that gases are fluids (as you affirm NOW), then what you said was “fluids diffusing have less inertia than fluids” and I can object to this phrase as well. □ Juan, I am sorry that my command of the language is not up to your standards. Engineers do have a nasty habit of thinking of matter in phase states, solid, liquid and gas. All can be fluids in the strict meaning of the term. If I remember correctly, an object in motion tends to remain in motion until persuaded to stop or change the rate and/or direction of that motion. It is a lot easier persuade a less massive object to change its direction and rate of motion than a more massive object. When I use the term, gas, I am implying it would be less massive than a liquid, which some engineers refer to as “fluids” inappropriately, when the context of the sentence should indicate there is a not so subtle difference in the properties of the “fluids” being If that is cleared up to a sufficient degree, why would I use “degree of inertia” instead of precisely defining the term? Laziness. I would assume that a discussion on methods of modeling a non-linear dynamic, non-equilibrium thermodynamic system, that varying viscosity and mass of fluids in motion with different “braking distances” might contribute to what some might call chaotic or unpredictable changes in rates of energy transfer. A somewhat complex problem it appears to me. One that may require an outside of the box approach. 
□ Juan - There are many issues in your posts that I am going to ignore; such as you thinking that gases are different than fluids, when any student knows that a gas is just a kind of fluid… I have to say that you write very well, and you seem very logical and rational in your thinking in general. But then again, you do write a statement like that above – which amounts to an obvious self-contradiction (and you continue with that contradiction below where you again discuss what you say that you’re going to ignore). It’s a very minor point w/r/t the scientific debate – except that it speaks to what might potentially be a larger characterization of your reasoning: That it is influenced by biases – as reflected in an interest in somehow proving that Cap’n knows less than what “any student” might know (which seems quite dubious) – as opposed to simply objective scientific analysis. 53. The Google led me to the original Paltridge applications, and other papers. Very interesting applications, IMO. 54. WebHubTelescope: you claim often your are familiar with scientific topics, but continue writing completely distorted views of well-known scientific issues. Neither statistical mechanics nor “the coarse-grained thermodynamic concept” got invented “to convert complexity into a more simple concept”. Statistical mechanics was developed for searching a link between the microscopic world and the macroscopic world, so that one could, for instance compute macroscopic parameters from molecular structure. Statistical mechanics is more complex than hydrodynamics or thermodynamics because considers atomic-molecular structure. You have named Gell-Mann often but his contributions to statistical mechanics or to thermodynamics are easy to count: zero. Gell-Mann invented the term plectics and defines it as Plectics is then the study of simplicity and complexity. But this is not a revolutionary concept, it is what statistical mechanics has been doing for a century! As a consequence, only Gell-Mann and maybe a pair of persons more use the term plectics, with the immense majority of scientists and engineers ignoring it. Do you really pretend to compare MAXENT, to symmetry or group theory? Wow! Open a textbook on physics or chemistry and you will find applications of symmetry or group theory, when the same textbooks will not even mention MAXENT. You have been said and shown the technical reasons which MAXENT is rejected, but you seem to believe in some kind of conspiracy theory by physicists :-) About the Azimuth blog, sorry but there is absolutely no “deeper and more fundamental level” discussion therein, but a lot of confusion and some nonsense by people whose contribution to thermodynamics or statistical mechanics is zero. □ Read again what you just wrote, Juan, as you are starting to contradict yourself. I am not hung up on the purity of a subject as much as you seem to be. I am simply trying to provide some motivation and spirit to the discussion. The motivation is always to try to make the problems tractable. Working out the microscopic motions of all the particles is obviously impossible, so that the macroscopic stat mech was invented. And this made the solution simpler, and by implication, solvable. How can anyone argue that? About the Azimuth blog, sorry but there is absolutely no “deeper and more fundamental level” discussion therein, but a lot of confusion and some nonsense by people whose contribution to thermodynamics or statistical mechanics is zero. Wow. 
I don’t know that I would draw the conclusion that they were spouting nonsense. I pick up all sorts of interesting nuggets from there. At least I feel myself in good company. I always thought the Azimuth guys like Baez, Corfield, etc are way too smart for what they are trying to do, as they keep finding interesting diversions. I would pay big money to have their knowledge of math. □ I’ve added azimuth to my blogroll, i find it very interesting and unique in the blogosphere. □ Gosh, what a great blog, I’d not seen it before! How beautiful is that Hamilton-Maxwell comparison? Very grateful to you for pointing this blog out. □ WHT, the goal of statistical mechanics is not merely to simplify the equations of mechanics. If tomorrow you could solve the Hamilton equations for a N-body system (N ~ 10^23) using a powerful supercomputer, this would not help us to understand why a fluid behaves diffusively, less still to compute a diffusion coefficient. As stated before, statistical mechanics provides a link between microscopic equations and macroscopic equations. Moreover, the techniques developed in statistical mechanics have found novel applications to systems so simple as one degree of I already stated my opinion about the discussion on ‘thermodynamics’ in the Azimuth blog. If you are happy discussing such stuff in the company of mathematician and philosophers with zero contributions in the topic, that is fine for me. In the past I already corrected some similar mistakes of one of them, who I know rather well, in his own blog. You consider him “too smart”, but the fact is that even in his own field of ‘expertise’ (not thermodynamics as said) he has been dubbed crackpot in public by well-known physicists working in the same field than him. In the past I already corrected some similar mistakes of one of them, who I know rather well, in his own blog. The Azimuth people seem to appreciate having readers contribute to the discussion and make corrections when necessary. I believe that they are interested in exploring the mathematics to see how it may apply. You consider him “too smart”, but the fact is that even in his own field of ‘expertise’ (not thermodynamics as said) he has been dubbed crackpot in public by well-known physicists working in the same field than him. Well those look like unreferenced assertions to me. Baez provided a good explanation for the territory they are traversing when he wrote recently: The difference between ‘strange papers’ and ‘crackpot websites’ is that the former do mathematically valid things without making grandiose claims about their physical significance, while the latter make grandiose claims without any real calculations to back them up. I think they know exactly the boundaries of what they are trying to accomplish. Mathematically exploring the physics is one area that a collaborative effort works. No need for a lab and concrete experiments. □ WebHubTelescope, You say “Well those look like unreferenced assertions to me.” That must be right for outsiders unaware of the flame wars. People in the field knows that he has been considered a crackpot by several physicists, both in public and in private □ I only mentioned a couple of names and can’t imagine them as crackpots. They actually moderate the groups so have to deal with bizarre notions all the time. Since they have deep math backgrounds, they may use abstract notions, but that’s not a reason to make a blanket statement. 
□ Any physicist (or mathematical physicist) who “keeps realizing more and more that our little planet is in deep trouble!” due to changes in CO2 is a crackpot by very definition. Sorry, cannot □ Most natural scientists are observant about their environment. Something about the way they were educated and inspired about nature. Sorry couldn’t resist. □ Most natural scientists might be observant about their environment, but are apparently less clueful about physics that drives that environment. There must be something about the way they were educated and inspired that makes them to believe in MaxEnt (or was it a “MinEnt” a while ago, in chemistry?), or in any other simplified construction that would govern nonlinear dynamics and stationary state of open system of coupled reservoirs of Earth fluids… □ WHT, You say you “can’t imagine”… but was not about imagination. You claim that they “have deep math backgrounds”, but some people who has won a Fields medal thinks otherwise. In any case that was not the point, because was not about math but about physics. □ Al Tekhasski, if by “MinEnt”, you really mean the minimum entropy production theorem, the comparison to MaxEnt is not fair. Maybe some scientists initially proposed the theorem as a generic nonequilibrium evolution criteria, when the theory of nonequilibrium thermodynamics was being developed and still in its infancy, but posterior analysis restricted its use to the linear regime (the own Prigogine published results showing why one would wait the lack of a general ‘potential’ in far from equilibrium regimes). Today the theorem is acknowledged as one of the main results derived in linear non-equilibrium thermodynamics. The situation here is not very different from Einstein initially obtaining wrong field equations of general relativity, when he and other were developing the theory and was still in its infancy. Posteriorly, the initial field equations were amended with the famous trace term. Both situations would not be compared to that with MaxEnt. MaxEnt has been proved wrong both in a foundational basis and from the perspective of applications, although a very tiny community of about half-dozen of people ignores the physical and mathematical facts and take MaxEnt as gospel WHT, You say you “can’t imagine”… but was not about imagination. You claim that they “have deep math backgrounds”, but some people who has won a Fields medal thinks otherwise. In any case that was not the point, because was not about math but about physics. The Fields medal winner Terence Tao has encouraged what Baez is trying to do at least a few times So what I suggest you do is wander over to the Azimuth blog, the N-category blog, and Tao’s blog, and proclaim that they all stay away from discussing physics at all. I am sure that they would appreciate getting some feedback. Here is another Fields winner, David Mumford, who wrote a book published last year called “Pattern Theory: The Stochastic Analysis of Real-World Signals”, which I have been studying for its applications of entropy to understanding stochastic phenomena. This is what David Corfield says about Mumford: I do like to read what these guys have to say because the applied math stimulates thought. Your mileage appears to vary from mine. □ Juan, From what I found about 30 years ago, philosophical musings about dissipative structures are no more than a wishful thinking about ordinary well-known (in certain circles of course) linear theory of hydrodynamical stability by a newcomer from chemistry. 
I don’t know any other application of this principle other than to the ordinary Rayleigh-Benard convection, where the same result just have a fancy interpretation and new (at the time) buzzwords. It is probably quite better than the maxent, so it might be not fair, I agree. But still in the same ballpark of brutal [in]applicability to real far-from-equilibrium non-stationary situation in Earth climate dynamics. Same goes for other philosophical re-incarnations as “constructal theory”… – Al Tekhasski □ WebHubTelescope, I emphasized the word physics in bold face… because I am referring to research in this field. But you have ignored this and presents us, as support for your beliefs/credo, two links to Terence Tao blog. In the first link he acknowledge Baez article in a Notices journal about blogging and the second is about the same: blogging Thanks for the laugh. You make a suggestion about blogs. This is a difference between you and me. You want everyone to share your beliefs/credo. This explains why you are so angry when most of scientists and engineers ignore MaxEnt and similar flawed stuff. You even presented here conspiracy theories: Perhaps many scientists resent the audacity that Jaynes had in referring to Probability Theory as the “Logic of Science”? Like I said, the physicists problem is that they don’t like the fact that the terminology has been hijacked and used without their permission by the maximum entropy people. I have hypothesized about this in the past. Physicists are always looking for something new because that is a laudable scientific goal. But the everyday occurrence is the mundane what I refer to as “garden variety” disorder. Unfortunately, characterizing this disorder does not lead to Nobel prizes, because like has been said, this is all statistical mush and it doesn’t get at the heart of the physical mechanisms. The reading of the “Philosophy of Real Mathematics” article that you linked was very boring. □ Al Tekhasski: What you write has nothing to see with minimum entropy production, which is quoted often as one of main results on linear non-equilibrium thermodynamics. Many examples of dissipative structures are known and studied. Turing structures (a stationary spatial dissipative structure) are observed in chlorite-iodide-malonic acid reaction in an acidic aqueous solution. They can be studied using a Brussels molecular model, which, of course, goes beyond hydrodynamics theory. □ Juan, You were the one that brought up criticisms of the Azimuth people from Fields Medal winners, who would have to be mathematicians. Your whole argument seems to revolve around the purity of a discipline, whether it be math or physics, and then you defer to mathematicians to resolve the issue ? And then you start whining about a link I reference being boring. At this point, the arguments have become rhetorical. □ WHT, I wrote what you do not still understand WHT, I wrote what you do not still understand And that is why it has turned into a rhetorical debate, just as I said. I think the reference to a mathematical technique is just shorthand for describing the modeling some physical behavior. Your opinion obviously differs. Incidentally, I found what I think is a cool application of max entropy uncertainty propagation to characterizing behaviors in my semiconductor technology specialty. If interested, stay tuned 55. 
WebHubTelescope, After seven paragraphs showing why MAXENT plays no role in heat and Fourier equations, I did only a minor comment, in the eight paragraph, about your heat kernel. Let us study this specific issue a bit more… The derivation of the heat equation is easy, one starts from the energy balance law, substitutes the Fourier law, assumes a homogeneous medium (then kappa and C are independent of the position), and one obtains the heat equation written above, where lambda == (kappa/C) is independent of position. Now that you have confirmed that you call D to lambda, what you are doing seems to be the You start from the heat equation [in my post of above, powers of 2 are lacking in some partial derivatives] @T/@t = -D @^2T/@x^2 which is only valid for homogeneous systems (D is constant), but you decide that can be also applied to non-homogeneous systems with non-constant D(x). In your own words “The earthen material that the heat is diffusing through is heterogeneously disordered”. After this you decide that you need to obtain an average value for D(x), what you call “mean value” or “smeared version” of D(x) before solving the heat equation (ignoring that D in the heat equation is already independent of position) then you claim that your ‘kernel’ e^{-x/sqrt{Dt}}, with a smeared D independent of position, is a solution of the “original heat equation”. However, your ‘kernel’ is not a solution of the heat equation. The solution to the heat equation is the heat kernel e^{-x^2/Dt}, where D is constant. There are more issues with your ‘kernel’ but I think that is enough. □ Juan, It is a superposition of solutions, which is a perfectly acceptable way of doing propagation of uncertainty. This superposes a maximally non-commital view of the thermal diffusion coefficient assuming all we know is the mean. This is very intuitive to understand, and I wrote about a practical application elsewhere. This has to do with an insulated home where there may be multiple pathways for heat to escape. If you know an average thermal resistivity but due to variable insulation constructions, you have to guess at the variance, this is one way to do it.. The least informative is the maximum You may not like it, but the proof is in how useful it is from an applied physics or engineering perspective. In that case, you may not like it because it is trivial. Well, I like trivial. □ I already explained to you how the heat conduction coefficient is derived using ordinary statistical mechanics and how we can obtain an average value using Maxwellian distribution. Evidently, nothing of this has anything to see with MAXENT, although you continue renaming standard theory of equilibrium as MAXENT in a rather unfair way, like when you renamed the well-known Gibbs distribution as a Jaynes’ MAXENT distribution! Moreover, you have ignored any technical question about the heat equation and its kernel. You can invent any equation that you want and give any solution that you want, was nonsensical or not, but please do not continue blaming over scientists because are ignoring such ‘brilliant’ ideas. Finally, I only want to add that I did a sign mistake in an above post and that the coefficient lambda in the heat equation is related to Fourier by lambda = – kappa/C. Also, the Fourier law given above is ignoring the presence of fields and needs to be generalized for such cases □ I agree with you that Fourier’s Law is a component of the heat equation and the two are not the same thing. 
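(For reference, the textbook derivation sketched in words above, written out in one spatial dimension with the standard sign conventions; kappa is the conductivity and C the volumetric heat capacity.)

\begin{align*}
  \text{energy balance:}\quad & C\,\frac{\partial T}{\partial t} = -\,\frac{\partial q}{\partial x},\\
  \text{Fourier's law:}\quad & q = -\,\kappa\,\frac{\partial T}{\partial x},\\
  \text{homogeneous medium } (\kappa, C \text{ constant}):\quad &
      \frac{\partial T}{\partial t} = \frac{\kappa}{C}\,\frac{\partial^{2} T}{\partial x^{2}}
      \equiv \lambda\,\frac{\partial^{2} T}{\partial x^{2}},\\
  \text{fundamental solution (heat kernel):}\quad &
      T(x,t) = \frac{1}{\sqrt{4\pi\lambda t}}\,\exp\!\Big(-\frac{x^{2}}{4\lambda t}\Big).
\end{align*}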
Sorry for the confusion in implying that they were the same (when I parenthetically stated “aka”), and want to state plainly that Fourier’s law is more fundamental and is used to derive the heat equation. That said, this misstatement had nothing to do with the rest of my analysis, and I stand by what I was trying to do with respect to uncertainty propagation. I put together a blog post on this topic, that has some implications for transient AGW analysis. Carry on, and you can continue to go postal on what I am trying to do. Believe me, I really don’t mind. □ The issue was not if Fourier law and heat equation are the same or not. Evidently are not. The real issues are the limitations of both laws, when they apply and when they do not apply, which is the kernel of the heat equation, and why there is not trace of MaxEnt in your musings. Evidently, you continue ignore such issues. □ I added some more information to my thermal diffusion model here. The results are very complementary to a 1D box diffusion model that James Hansen published in 1985. Ramble on, Juan. You can call it what you want, I don’t really care any more. I am satisfied that I am on an interesting track. □ You have missed another opportunity to correct the nonsense that you said… but I found this page where you are presented as A crank who believes that fossil fuel depletion is the ignored step-child in the climate science debate. He goes way overboard with math and shows an obsession for The Mentaculus. You claim now that your nonsensical claims about MaxEnT and the heat equation are related to an ancient box model published in Science. Of course that paper does not say the nonsense that you say about the heat equation, neither mention MaxEnT. Currently climate scientists use two box models as the simplest model for interpreting data. Although those simple models are more complex than your MaxEnt ruminations. Scientists also know what are the limitations of the two box model and how to develop corrections to it using science. Of course MaxEnt ruminations are absent in all this. I took a look to your crank blog and I can see that in your “wave-energy-spectrum” entry you continue naming to the Gibbs result P = exp(-bE), where energy E=E(A), being A area, a MaxEnt result, despite the fact at least two posters in this thread corrected you. The fact you pretend to rewrite the history of physics for supporting your silly ideas would be added to the presentation given in above google doc I will not waste more time with a crank. □ Juan, You have been what we call punk’d. That web page that you link to is something that I wrote recently. This is hysterical. I am making fun of myself for getting into the math, and you take it as evidence for your argument. Thanks for the laugh and you made my day. This entry was posted in Sensitivity & feedbacks. Bookmark the permalink.
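(Appendix to the exchange above, not part of the original page: the closing comments refer to Hansen's 1985 one-dimensional box-diffusion ocean and to simple two-box fits. The Python sketch below shows only the shape of the first idea, a step surface warming diffusing down a column with constant eddy diffusivity; the diffusivity, depth, grid and run length are made-up illustrative values, not the published configuration.)

import numpy as np

kappa = 1.0e-4                      # assumed eddy diffusivity, m^2/s
depth, nz = 500.0, 100
dz = depth / nz
dt = 0.4 * dz * dz / kappa          # explicit-scheme stability limit
nsteps = 20000

T = np.zeros(nz)                    # temperature anomaly in the column
T_surface = 1.0                     # step forcing held fixed at the top

for _ in range(nsteps):
    Tpad = np.concatenate(([T_surface], T, [T[-1]]))    # fixed top, no-flux bottom
    T = T + kappa * dt / dz ** 2 * (Tpad[2:] - 2.0 * Tpad[1:-1] + Tpad[:-2])

years = nsteps * dt / 3.15e7
print(f"after about {years:.0f} years, anomaly at 100 m depth: {T[int(100 / dz)]:.2f} K")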
{"url":"http://judithcurry.com/2012/01/10/nonequilibrium-thermodynamics-and-maximum-entropy-production-in-the-earth-system/","timestamp":"2014-04-20T20:55:59Z","content_type":null,"content_length":"620066","record_id":"<urn:uuid:4ffe2cce-be0e-4bf2-abed-2978573125b9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Alpha, NJ Prealgebra Tutor Find an Alpha, NJ Prealgebra Tutor ...But really, there are no tricks. No shortcuts. Just hard work and sleepless nights, chocolate and coffee, and talking to yourself while brushing your teeth in the morning trying to remember the quadratics formula. ***************************************************** I like taking exams, and I enjoy helping others do the same. 34 Subjects: including prealgebra, English, physics, calculus ...I focus on identifying the way individual students learn and providing them with new ways to understand the information to increase their success. I currently hold an Elementary Education Certificate from New Jersey and passed my Praxis II test in Elementary Education. I currently teach math an... 19 Subjects: including prealgebra, geometry, biology, algebra 1 ...Although my primary love is science, I also tutor algebra 1. I have worked with students in all levels both in the classroom and during private tutoring sessions. Whether it is a general science class or an AP chemistry class, I am comfortable with the material. 6 Subjects: including prealgebra, chemistry, algebra 1, algebra 2 ...I have the utmost confidence in my ability to relate this material in a comprehensible manner to the student. Furthermore, I am proficient in econometrics, having taught several students how to use SAS and STATA to perform regressions and analyses. I am also available to tutor for the quantitative section of the GRE. 19 Subjects: including prealgebra, calculus, precalculus, statistics ...Currently, I am a Doctoral student at Lehigh University. I have a Bachelor's degree in Psychology from the University of Pennsylvania and a Master's from Lehigh University in Human Development. My current background checks are available upon request. 14 Subjects: including prealgebra, English, reading, writing Related Alpha, NJ Tutors Alpha, NJ Accounting Tutors Alpha, NJ ACT Tutors Alpha, NJ Algebra Tutors Alpha, NJ Algebra 2 Tutors Alpha, NJ Calculus Tutors Alpha, NJ Geometry Tutors Alpha, NJ Math Tutors Alpha, NJ Prealgebra Tutors Alpha, NJ Precalculus Tutors Alpha, NJ SAT Tutors Alpha, NJ SAT Math Tutors Alpha, NJ Science Tutors Alpha, NJ Statistics Tutors Alpha, NJ Trigonometry Tutors Nearby Cities With prealgebra Tutor Asbury, NJ prealgebra Tutors Broadway, NJ prealgebra Tutors Durham, PA prealgebra Tutors Glendon, PA prealgebra Tutors Kintnersville prealgebra Tutors Little York, NJ prealgebra Tutors Milford, NJ prealgebra Tutors Phillipsburg, NJ prealgebra Tutors Riegelsville prealgebra Tutors Springtown, PA prealgebra Tutors Stewartsville, NJ prealgebra Tutors Stockertown prealgebra Tutors Tatamy prealgebra Tutors Upper Black Eddy prealgebra Tutors West Easton, PA prealgebra Tutors
{"url":"http://www.purplemath.com/Alpha_NJ_Prealgebra_tutors.php","timestamp":"2014-04-16T07:42:46Z","content_type":null,"content_length":"24134","record_id":"<urn:uuid:941fc9a3-8543-4031-af1a-5361bccb89b8>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Russell Gardens, NY Algebra 2 Tutor Find a Russell Gardens, NY Algebra 2 Tutor ...As long as the student is trying hard to understand something, I will not give up on trying to help them learn. TRAVEL I am based in Manhattan, however I am able to travel to other boroughs or Jersey via train. Depending on your distance and the time needed to travel, I may request an additional amount to my rate to compensate for travel time/train fare. 4 Subjects: including algebra 2, geometry, algebra 1, prealgebra ...Please do not hesitate to contact me if I could be of any help. I look forward to working with you. Sincerely, DheerajI taught Algebra 1 at the 9th grade level during my time as a teacher. 26 Subjects: including algebra 2, calculus, writing, GRE ...I also recently taught a physics course at a local university. My student evaluations said that I was an excellent one-on-one teacher. Based on teaching in the classroom and one-on-one tutoring session, I think that anyone can learn math and science. 10 Subjects: including algebra 2, writing, physics, algebra 1 ...My other tasks included peer tutoring in the Office of Undergraduate Biology in all of the biology, chemistry and physics courses. Through my experience, I have recognized the importance in placing comprehension over memorization on my own grades and the students I have tutored so far. I have b... 17 Subjects: including algebra 2, chemistry, algebra 1, MCAT ...Students that hone these techniques over considerable practice have had great success! I developed techniques that helped a recent student raise his score from the low 20s to 30! I work with students to develop a custom study plan that attacks their weaknesses and enhances their strengths. 34 Subjects: including algebra 2, writing, geometry, calculus Related Russell Gardens, NY Tutors Russell Gardens, NY Accounting Tutors Russell Gardens, NY ACT Tutors Russell Gardens, NY Algebra Tutors Russell Gardens, NY Algebra 2 Tutors Russell Gardens, NY Calculus Tutors Russell Gardens, NY Geometry Tutors Russell Gardens, NY Math Tutors Russell Gardens, NY Prealgebra Tutors Russell Gardens, NY Precalculus Tutors Russell Gardens, NY SAT Tutors Russell Gardens, NY SAT Math Tutors Russell Gardens, NY Science Tutors Russell Gardens, NY Statistics Tutors Russell Gardens, NY Trigonometry Tutors Nearby Cities With algebra 2 Tutor Glen Oaks algebra 2 Tutors Great Nck Plz, NY algebra 2 Tutors Great Neck algebra 2 Tutors Great Neck Estates, NY algebra 2 Tutors Great Neck Plaza, NY algebra 2 Tutors Harbor Hills, NY algebra 2 Tutors Kensington, NY algebra 2 Tutors Lake Success, NY algebra 2 Tutors Little Neck algebra 2 Tutors Manhasset algebra 2 Tutors Plandome, NY algebra 2 Tutors Saddle Rock Estates, NY algebra 2 Tutors Saddle Rock, NY algebra 2 Tutors Thomaston, NY algebra 2 Tutors University Gardens, NY algebra 2 Tutors
{"url":"http://www.purplemath.com/Russell_Gardens_NY_algebra_2_tutors.php","timestamp":"2014-04-18T08:53:12Z","content_type":null,"content_length":"24517","record_id":"<urn:uuid:79acf0bb-4f11-4aa5-ac00-2cca96e638b2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverse Trig Functions

Definition of inverse trig functions

To find the inverse trig functions, we have to know that trig functions are periodic functions, so each one must first be restricted to a suitable interval before an inverse can be defined. (The restricted-domain formulas shown on the original page were images and did not survive extraction.) Simply speaking, the restriction is the shortest range of θ that still allows the respective trigonometry function to take its maximum range of values. This is the same for the other functions. The inverse trig functions are then defined on those restricted ranges; in particular, arcsin x only exists when x lies between -1 and 1, and if x is out of that range the inverse sine is undefined.

Scholar's note: inverse trig functions are different from reciprocal functions, i.e. arcsin x (also written sin^(-1) x) is not the same as 1/sin x.

Principal Values of Inverse Trig Functions

When the horizontal line y = k cuts the graph of a trigonometry function, we are finding the principal value of θ within the respective interval: [-π/2, π/2] for arcsin, [0, π] for arccos, and (-π/2, π/2) for arctan. (The worked "Evaluate without the use of tables or calculators" example also relied on images, so only its outline remains: let A and B be the two inverse-trig angles, use the addition formula to expand cos(A + B), and find the values of the trigonometry ratios by drawing out the triangles.) We hope you understood how to apply inverse trig functions to trig problems.

Return to Trigonometry Help or Basic Trigonometry.
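(Because the formulas above were lost to formatting, here is a quick numeric check of the stated principal values; Python's math module returns exactly these principal values, and rejects arguments outside [-1, 1] for the inverse sine.)

import math

print(math.asin(0.5))            # 0.5236 = pi/6, inside [-pi/2, pi/2]
print(math.acos(-0.5))           # 2.0944 = 2*pi/3, inside [0, pi]
print(math.atan(1.0))            # 0.7854 = pi/4, inside (-pi/2, pi/2)

try:                             # arcsin is only defined for x between -1 and 1
    math.asin(1.5)
except ValueError as err:
    print("asin(1.5) ->", err)   # math domain error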
{"url":"http://www.trigonometry-help.net/inverse-trig-functions.php","timestamp":"2014-04-21T12:49:01Z","content_type":null,"content_length":"10958","record_id":"<urn:uuid:c6633197-d30f-4954-8189-cf13130e7d27>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Constrained linear regression... is not linear?

From   Austin Nichols <austinnichols@gmail.com>
To     statalist@hsphsun2.harvard.edu
Subject   Re: st: Constrained linear regression... is not linear?
Date   Tue, 7 Dec 2010 11:14:30 -0500

If you're getting an answer outside [0,1] then perhaps your model is incorrectly specified, and you should rethink it. That said, try:

drawnorm x z e, n(1000) clear seed(1)
g y=min(max(1,round(3+.3*x+.7*z+e)),5)
g ylz=y-z
constraint define 1 x+z = 1
cnsreg y x z, c(1-3)
nl (y={a}+{b}*x+(1-{b})*z)
loc i=logit(.3)
qui nl (y={a=0}+invlogit({b=`i'})*x+(1-invlogit({b=`i'}))*z)
nlcom (invlogit([b]_cons)) (1-invlogit([b]_cons))
qui nl (ylz={a=0}+invlogit({b=`i'})*(x-z))
nlcom (invlogit([b]_cons)) (1-invlogit([b]_cons)), post
test _b[ 1]=_b[ 2]

On Tue, Dec 7, 2010 at 3:06 AM, Maarten buis <maartenbuis@yahoo.co.uk> wrote:
> --- On Tue, 7/12/10, kokootchke wrote:
>> I am trying to run the following constrained linear
>> regression:
>> y = ax + (1-a)z, with a in [0,1]
> <snip>
>> What I'm doing is the following:
>> constraint define 1 x+z = 1
>> constraint define 2 x >= 0
>> constraint define 3 x <= 1
>> cnsreg y x z, c(1-3)
>
> Constraints 2 and 3 are not allowed with -cnsreg-. The
> problem is the fact that you want to constrain the parameter
> within a certain range, and this is not considered to be a
> linear constraint. If you want to estimate this model you'll
> have to use either -nl- or -ml- as is discussed here:
> <http://www.stata.com/support/faqs/stat/intconst.html>

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
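(For readers who do not use Stata: the invlogit reparameterisation in Austin's -nl- lines can be sketched the same way in Python. The data, sample size and starting values below are synthetic and purely illustrative; this is not the original poster's model or data.)

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit                  # the inverse logit

rng = np.random.default_rng(1)
x, z, e = rng.standard_normal((3, 1000))
a_true = 0.3
y = a_true * x + (1.0 - a_true) * z + e

def model(xz, c, b):
    x_, z_ = xz
    a = expit(b)                                 # keeps the weight inside (0, 1)
    return c + a * x_ + (1.0 - a) * z_

(c_hat, b_hat), _ = curve_fit(model, (x, z), y, p0=(0.0, 0.0))
print("estimated a:", expit(b_hat))              # close to 0.3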
{"url":"http://www.stata.com/statalist/archive/2010-12/msg00258.html","timestamp":"2014-04-16T07:31:46Z","content_type":null,"content_length":"9382","record_id":"<urn:uuid:baa149a5-95ce-4e16-ba1b-674b49c23007>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Any hope for EdE's defense? (long) [Archive] - RedsZone.com - Cincinnati Reds Fans' Home for Baseball Discussion

05-26-2007, 03:46 PM

With all the talk about Edwin Encarnacion's defense it felt like a good time to look into what we might expect from EdE defensively in the future. The commonly-accepted book on Encarnacion is that he will make his fair share of errors, and probably more, but is a range-monster. His supporters would submit that his range more than offsets his miscues. (And boy can EdE rake.)

The two questions to be answered here are as follows: 1) Will Edwin make fewer errors as his career progresses? 2) Just what kind of range does Edwin have?

To try to glean the answers to EdE's future it was thought best to look to the past. The career fielding percentages and range factors of ten very recent major league third basemen were examined for trends. I took the first six years of data for each of the following third basemen with regard to fielding percentage and range factor: Chipper Jones, Scott Rolen, Aramis Ramirez, Mike Lowell, Vinnie Castilla, Edgardo Alfonzo, Adrian Beltre, Aaron Boone, Joe Randa and Jeff Cirillo. These players were picked at random from about 25 available. Their fielding percentage and range factor for each of the first six years they played third base regularly was divided by their career fielding percentage and range factor and then multiplied by 100. Playing 120 games at the position was considered to be playing regularly. (Out of the 60 seasons looked at, perhaps 3 or 4 dipped slightly under the 120-game boundary.) The ratios for all ten third basemen were then added for each category for each of the first six years and then divided by 10. This was done to see if any group trends emerged. The following chart shows the results (the chart did not survive the archive formatting).

It is quickly evident that fielding percentage stays constant. Individually, as well as for the group, this was true. The skill level in fielding the ball stayed at a constant rate. One ray of hope shone through here: Adrian Beltre improved his fielding percentage gradually every year. Still, this would indicate that there is a very good chance that the fielding percentage of Edwin is where it will be for the rest of his career. It is not likely to improve much if at all.

Range factor shows a greater deviation, but is still fairly constant. An improvement was shown in many of the players in years 2 and 3 before leveling off. Hopefully EdE will be an exception and improve drastically. But probably not.

What about Edwin being a range monster? For this I looked at the zone rating numbers from ESPN. In 2005 Edwin started about a third of the games for Cincinnati. His Zone Rating was a healthy .794. If he had played enough games at this rate to qualify, this would have ranked him slightly above average for all regular NL third basemen. In 2006, his first full season, his Zone Rating fell off sharply to .741. This ranked dead last among regular NL third basemen. To date in 2007 his Zone Rating has fallen off the map to .686. This again ranks dead last among NL third basemen. It seems as if EdE possesses the ability to make rangy plays at the hot corner, but something is keeping him from doing so. I don't feel that something is a lack of ability.

In conclusion, the future doesn't look all that bright for Encarnacion down at third. He will likely continue to make more than an average number of errors. And unless he turns things around considerably, his range must also be questioned.
Hopefully he bounces back strong in the latter area.

Several times in discussions about EdE's defense someone has mentioned the name Mike Schmidt. They pointed out that Schmidt made a huge number of errors early on but improved greatly as his career progressed. I had no idea, so I looked up his numbers at baseball-reference.com. If he made a lot of errors it had to be in the minors. Here is a look at the ratios for the first nine years of his career: his fielding percentage has been very steady, and like many of the other third basemen looked at, his range jumped initially and then evened out. (The year-by-year table of FPct-R, RgF-R and Year columns did not survive the archive formatting.)
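(A Python sketch of the arithmetic described in the post: each of a player's first N seasons is divided by his career figure, multiplied by 100, and the ratios are then averaged across players year by year. The two players and all numbers below are invented purely to show the computation; they are not the data used in the post.)

def year_by_year_ratios(players):
    """players: dict of name -> {'career': float, 'seasons': [float, ...]}"""
    n_years = min(len(p["seasons"]) for p in players.values())
    averages = []
    for yr in range(n_years):
        ratios = [100.0 * p["seasons"][yr] / p["career"] for p in players.values()]
        averages.append(sum(ratios) / len(ratios))
    return averages

demo = {
    "Player A": {"career": 0.960, "seasons": [0.950, 0.958, 0.963, 0.961]},
    "Player B": {"career": 0.955, "seasons": [0.948, 0.955, 0.957, 0.956]},
}
print([round(r, 1) for r in year_by_year_ratios(demo)])   # hovers near 100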
{"url":"http://www.redszone.com/forums/archive/index.php/t-58516.html","timestamp":"2014-04-20T08:53:48Z","content_type":null,"content_length":"7696","record_id":"<urn:uuid:d62d776a-8f12-4e11-b8e0-c7cf094f58a5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
The limit of R_n as n-->inf is... September 21st 2013, 06:26 PM #1 The limit of R_n as n-->inf is... It seems that I have some issues with the sums... I have the function $f(x) = \sqrt{4 - x^2}$ I need to evaluate it $\displaystyle \int_0^2 (\sqrt{4 - x^2})dx$ by using the definition of Riemann sum for $R_n$ - right end-points, $\sum_{n=1}^{n} (\sqrt{4 - x^2})$ $\delta x = \frac{2-0}{n} = \frac{2}{n}$ $a) R_n = \sum_{n=1}^{n} \sqrt{4 - (\frac{2i}{n})^2}) *(\frac{2}{n}) =$ $b) R_n = (\frac{2}{n}) \sum_{n=1}^{n} \sqrt{4 - (\frac{4i^2}{n^2})} =$ $c) R_n = (\frac{2}{n}) \sum_{n=1}^{n} \sqrt{4n^2[\frac{1}{n^2} - i^2]} =$ $d) R_n = (\frac{2}{n})(2n) \sum_{n=1}^{n} \sqrt{\frac{1}{n^2} - [\frac{n(n+1)(2n+1)}{6}]} =$... I'm not sure I did right so far, and how should I proceed further... thanks for hints, Re: The limit of R_n as n-->inf is... It seems that I have some issues with the sums... I have the function $f(x) = \sqrt{4 - x^2}$ I need to evaluate it $\displaystyle \int_0^2 (\sqrt{4 - x^2})dx$ by using the definition of Riemann sum for $R_n$ - right end-points, $\sum_{n=1}^{n} (\sqrt{4 - x^2})$ $\delta x = \frac{2-0}{n} = \frac{2}{n}$ $a) R_n = \sum_{n=1}^{n} \sqrt{4 - (\frac{2i}{n})^2}) *(\frac{2}{n}) =$ The sum is over i not n. $b) R_n = (\frac{2}{n}) \sum_{i=1}^{n} \sqrt{4 - (\frac{4i^2}{n^2})} =$ $c) R_n = (\frac{2}{n}) \sum_{i=1}^{n} \sqrt{4n^2[\frac{1}{n^2} - i^2]} =$ $d) R_n = (\frac{2}{n})(2n) \sum_{n=1}^{n} \sqrt{\frac{1}{n^2} - [\frac{n(n+1)(2n+1)}{6}]} =$... At this point you have already done the sum. You should not have summation sign. I'm not sure I did right so far, and how should I proceed further... thanks for hints, Re: The limit of R_n as n-->inf is... $a) R_n = \sum_{i=1}^{n} \sqrt{4 - (\frac{2i}{n})^2}) *(\frac{2}{n}) =$ $b) R_n = (\frac{2}{n}) \sum_{n=1}^{n} \sqrt{4 - (\frac{4i^2}{n^2})} =$ $c) R_n = (\frac{2}{n}) \sum_{n=1}^{n} \sqrt{4n^2[\frac{1}{n^2} - i^2]} =$ $(\frac{2}{n})(2n) \lim_{n \to \infty} \sqrt{\frac{1}{n^2} - [\frac{n(n+1)(2n+1)}{6}]} =$ is that right... $d) (4) \lim_{n \to \infty} \sqrt{\frac{1}{n^2} - [\frac{n(n+1)(2n+1)}{6}]} =$ what can I do with all this under the radical? I have no idea Re: The limit of R_n as n-->inf is... $a) R_n = \sum_{i=1}^{n} \sqrt{4 - (\frac{2i}{n})^2}) *(\frac{2}{n}) =$ $b) R_n = (\frac{2}{n}) \sum_{n=1}^{n} \sqrt{4 - (\frac{4i^2}{n^2})} =$ $c) R_n = (\frac{2}{n}) \sum_{n=1}^{n} \sqrt{4n^2[\frac{1}{n^2} - i^2]} =$ $(\frac{2}{n})(2n) \lim_{n \to \infty} \sqrt{\frac{1}{n^2} - [\frac{n(n+1)(2n+1)}{6}]} =$ is that right... $d) (4) \lim_{n \to \infty} \sqrt{\frac{1}{n^2} - [\frac{n(n+1)(2n+1)}{6}]} =$ what can I do with all this under the radical? I have no idea Re: The limit of R_n as n-->inf is... Re: The limit of R_n as n-->inf is... I'll try, the question is like this: The following sum $\sqrt{4 - (\frac{2}{n})^2}*(\frac{2}{n}) + \sqrt{4 - (\frac{4}{n})^2}*(\frac{2}{n}) + ...+ \sqrt{4 - (\frac{2n}{n})^2}*(\frac{2}{n})$ is a Right Riemann sum for the definite integral $\displaystyle \int_0^b (f(x))dx$ where b = ... and f(x) = ... I checked my answer for $b=2$ and $f(x) = (\sqrt{4 - x^2})$ and they are correct; The next question is: the limit of these Riemann sums as n-> infinity is.... does that change my approach? Re: The limit of R_n as n-->inf is... does that change my approach? Yes, this changes everything. 
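(Numeric aside, not part of the original thread: before the algebra gets sorted out in the replies below, it helps to know where the limit is heading. The right-endpoint sums for sqrt(4 - x^2) on [0, 2] approach the area of a quarter circle of radius 2, which is pi. A small Python check:)

import math

def right_riemann(n):
    dx = 2.0 / n
    return sum(math.sqrt(4.0 - (i * dx) ** 2) for i in range(1, n + 1)) * dx

for n in (10, 100, 1000, 10000):
    print(n, right_riemann(n))
print(math.pi)                    # the sums creep up toward pi = 3.14159...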
I'll try, the question is like this: The following sum $\sqrt{4 - (\frac{2}{n})^2}*(\frac{2}{n}) + \sqrt{4 - (\frac{4}{n})^2}*(\frac{2}{n}) + ...+ \sqrt{4 - (\frac{2n}{n})^2}*(\frac{2}{n})$ is a Right Riemann sum for the definite integral $\displaystyle \int_0^b (f(x))dx$ where b = ... and f(x) = ... We use the definition of the Riemann sum to be $\sum_{i=1}^n f(x_i) \Delta x$ $\Delta x = \frac{b-a}{n} = \frac{b}{n} = \frac{2}{n} \implies b = 2$ as you've done. And $x_i = 0 + i\Delta x = \frac{2i}{n} \implies f(x_i) = \sqrt{4 - \left(\frac{2i}{n}\right)^2}$, thus $f(x) = \sqrt{4 - x^2}$ which you've also done correctly. The next question is: the limit of these Riemann sums as n-> infinity is.... When they ask for this, they're not asking you to compute this integral $\int_0^2 \sqrt{4 - x^2} \ dx$ by evaluating this limit: $\lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{2i}{n}\right)^2} \cdot \frac{2}{n}$ But rather, what they're asking you to do is evaluate this limit: $\lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{2i}{n}\right)^2} \cdot \frac{2}{n}$ by using this relation: $\lim_{n\to\infty} \sum_{i=1}^n f(x_i) \Delta x = \int_a^b f(x) \ dx$ where $\Delta x = \frac{b-a}{n}, \ x_i = a + i\Delta x$. In other words, they're not asking you to compute the integral by using the Riemann sum, they're asking you to compute the Riemann sum by using the integral. Last edited by FelixFelicis28; September 22nd 2013 at 06:23 PM. Re: The limit of R_n as n-->inf is... Yes, this changes everything. We use the definition of the Riemann sum to be $\sum_{i=1}^n f(x_i) \Delta x$ $\Delta x = \frac{b-a}{n} = \frac{b}{n} = \frac{2}{n} \implies b = 2$ as you've done. And $x_i = 0 + i\Delta x = \frac{2i}{n} \implies f(x_i) = \sqrt{4 - \left(\frac{2i}{n}\right)^2}$, thus $f(x) = \sqrt{4 - x^2}$ which you've also done correctly. When they ask for this, they're not asking you to compute this integral $\int_0^2 \sqrt{4 - x^2} \ dx$ by evaluating this limit: $\lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{2i}{n}\right)^2} \cdot \frac{2}{n}$ But rather, what they're asking you to do is evaluate this limit: $\lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{2i}{n}\right)^2} \cdot \frac{2}{n}$ by using this relation: $\lim_{n\to\infty} \sum_{i=1}^n f(x_i) \Delta x = \int_a^b f(x) \ dx$ where $\Delta x = \frac{b-a}{n}, \ x_i = a + i\Delta x$. In other words, they're not asking you to compute the integral by using the Riemann sum, they're asking you to compute the Riemann sum by using the integral. I'm not sure I understand really how it should be done, do I really need in this case to evaluate it as $\lim_{n\to\infty} \sum_{i=1}^n \left[ \sqrt{4 - \left(\frac{bi}{n}\right)^2} \cdot \frac{b}{n} - \sqrt{4 - \left(\frac{ai}{n}\right)^2} \cdot \frac{a}{n} \right] =$ $\left[ \lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{bi}{n}\right)^2} \cdot \frac{b}{n} - \lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{ai}{n}\right)^2} \cdot \frac{a}{n} \right] = $ and so on, without using the actual values of $b$ and $a$ ? Re: The limit of R_n as n-->inf is... 
I'm not sure I understand really how it should be done, do I really need in this case to evaluate it as $\lim_{n\to\infty} \sum_{i=1}^n \left[ \sqrt{4 - \left(\frac{bi}{n}\right)^2} \cdot \frac{b}{n} - \sqrt{4 - \left(\frac{ai}{n}\right)^2} \cdot \frac{a}{n} \right] =$ $\left[ \lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{bi}{n}\right)^2} \cdot \frac{b}{n} - \lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{ai}{n}\right)^2} \cdot \frac{a}{n} \right] = $ and so on, without using the actual values of $b$ and $a$ ? Hmmm? I'm not quite sure what you're doing... Your nth Riemann sum is: $R_n = \sum_{i=1}^n \sqrt{4 - \left(\frac{2i}{n}\right)^2} \cdot \frac{2}{n}$. And they're asking you to evaluate $\lim_{n \to \infty} R_n$. We proceed with the definition that: $\lim_{n\to\infty} \sum_{i=1}^n f(x_i)\Delta x = \int_a^b f(x) \ dx$. $\lim_{n\to\infty} \sum_{i=1}^n \sqrt{4 - \left(\frac{2i}{n}\right)^2} \cdot \frac{2}{n} = \int_0^2 \sqrt{4 - x^2} \ dx$. All that's left to do is compute the integral on the RHS and you're done. Last edited by FelixFelicis28; September 23rd 2013 at 04:30 PM. September 21st 2013, 06:31 PM #2 MHF Contributor Apr 2005 September 21st 2013, 06:48 PM #3 September 22nd 2013, 03:34 PM #4 September 22nd 2013, 04:28 PM #5 September 22nd 2013, 05:41 PM #6 September 22nd 2013, 06:17 PM #7 September 23rd 2013, 03:25 PM #8 September 23rd 2013, 04:28 PM #9
{"url":"http://mathhelpforum.com/calculus/222158-limit-r_n-n-inf.html","timestamp":"2014-04-17T07:04:02Z","content_type":null,"content_length":"83704","record_id":"<urn:uuid:55fca7ce-d497-46fb-8f50-9d29975432d4>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume of Cylinder May 5th 2009, 07:12 AM #1 MHF Contributor Jul 2008 Volume of Cylinder A cylinder is measured and found to be 27 m long and 33 m in diameter. The volume of a cylinder is where R is the radius and L is the length. I used the volume formula: V = pi(R^2)(L).....Is this the correct formula for this question? #1 What is the radius of the cylinder? I found the radius to be 33/2. Is this correct? #2 What is the volume of the cylinder in m^3? I found the volume to be 23081.355 m^3...Is this correct? How do I convert this volume to cm^2 A cylinder is measured and found to be 27 m long and 33 m in diameter. The volume of a cylinder is where R is the radius and L is the length. I used the volume formula: V = pi(R^2)(L).....Is this the correct formula for this question?.......Yes #1 What is the radius of the cylinder? I found the radius to be 33/2. Is this correct?.......Yes #2 What is the volume of the cylinder in m^3? I found the volume to be 23081.355 m^3...Is this correct?.......nearly How do I convert this volume to cm^2 1. You used $\pi = 3.14$. If you use the number $\pi$ a little bit more exact the volume would be $V = 23093.622\ m^3$ to #2: $1\ m^3 = 1\ m \cdot1\ m \cdot1\ m = 100\ cm \cdot100\ cm \cdot100\ cm = 1,000,000\ cm^3$ That means: Multiply the value of the volume by 1,000,000 to convert it into cm³. Are you saying??? 1. You used $\pi = 3.14$. If you use the number $\pi$ a little bit more exact the volume would be $V = 23093.622\ m^3$ to #2: $1\ m^3 = 1\ m \cdot1\ m \cdot1\ m = 100\ cm \cdot100\ cm \cdot100\ cm = 1,000,000\ cm^3$ That means: Multiply the value of the volume by 1,000,000 to convert it into cm³. Are you saying to multiply 1,000,000 by 23093.622 m^2 to convert to cm^3? Also, how did you get 23093.622 m^3 for the volume using 3.14 for pi? If you did not use 3.14 for pi, which value of pi did you use? Yes. I've made a tiny mistake: The coefficient which converts m³ into cm³ actually is: $1,000,000\ \dfrac{cm^3}{m^3}$ Also, how did you get 23093.622 m^3 for the volume using 3.14 for pi? If you did not use 3.14 for pi, which value of pi did you use? I used as many digits as my calculator uses: $\pi \approx 3.14159265359$ and then rounded the final result to 3 digits. got it May 5th 2009, 07:38 AM #2 May 6th 2009, 05:17 AM #3 MHF Contributor Jul 2008 May 6th 2009, 11:12 AM #4 May 6th 2009, 01:37 PM #5 MHF Contributor Jul 2008
{"url":"http://mathhelpforum.com/geometry/87603-volume-cylinder.html","timestamp":"2014-04-19T23:05:25Z","content_type":null,"content_length":"49386","record_id":"<urn:uuid:ee23d63f-f923-4683-a737-eee53238b231>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: PRECISION SUB-RADIX2 DAC WITH LINEARITY CALIBRATION

A system includes an N bit sub-binary radix digital-to-analog converter (DAC) that converts an m bit digital input signal to an analog output signal, where m and N are integers greater than or equal to 1 and N>m. A radix conversion module determines a code ratio, the code ratio being a ratio of a total number of available monotonic codes to 2^m, and performs radix conversion on the m bit digital input signal based on the code ratio.

1. A system comprising: an N bit sub-binary radix digital-to-analog converter (DAC) that converts an m bit digital input signal to an analog output signal, where m and N are integers greater than or equal to 1 and N>m; and a radix conversion module that determines a code ratio, the code ratio being a ratio of a total number of available monotonic codes to 2^m, and that performs radix conversion on the m bit digital input signal based on the code ratio.

2. The system of claim 1 wherein the radix conversion includes converting the m bit digital input signal to an N bit sub-radix DAC code.

3. The system of claim 1 wherein the radix conversion includes multiplying the digital input signal by the code ratio.

4. The system of claim 1 wherein the radix conversion module adjusts the code ratio to achieve a desired gain trim.

5. The system of claim 4 further comprising: an amplifier coupled to the DAC; and a gain resistor coupled to the amplifier and the DAC, wherein the radix conversion module adjusts the code ratio based on a resistance Rgain of the gain resistor.

6. The system of claim 5 wherein the radix conversion module adjusts the code ratio further based on a DAC output resistance RDAC that is less than Rgain.

7. The system of claim 6 wherein the radix conversion module adjusts the code ratio by multiplying the code ratio by RDAC/Rgain.

8. The system of claim 1 wherein the DAC includes: an NL bit ladder module having NL ladder resistors connected in parallel; and an NS bit segment module having 2^NS - 1 segment resistors connected in parallel, where NS is an integer greater than or equal to 1.

9. The system of claim 8 wherein a first number of the NL bits are associated with a first radix and a second number of the NL bits are associated with a second radix that is different than the first radix.

10. The system of claim 8 wherein the radix conversion includes selectively setting and clearing bits of the NL bit ladder module and the NS bit segment module based on the code ratio.

11. A method comprising: converting an m bit digital input signal to an analog output signal using an N bit sub-binary radix digital-to-analog converter (DAC), where m and N are integers greater than or equal to 1 and N>m; determining a code ratio, the code ratio being a ratio of a total number of available monotonic codes to 2^m; and performing radix conversion on the m bit digital input signal based on the code ratio.

12. The method of claim 11 further comprising converting the m bit digital input signal to an N bit sub-radix DAC code.

13. The method of claim 11 further comprising multiplying the digital input signal by the code ratio.

14. The method of claim 11 further comprising adjusting the code ratio to achieve a desired gain trim.

15. The method of claim 14 further comprising: coupling an amplifier to the DAC; coupling a gain resistor to the amplifier and the DAC; and adjusting the code ratio based on a resistance Rgain of the gain resistor.
16. The method of claim 15 further comprising adjusting the code ratio further based on a DAC output resistance RDAC that is less than Rgain.

17. The method of claim 16 further comprising adjusting the code ratio by multiplying the code ratio by RDAC/Rgain.

18. The method of claim 11 wherein the DAC includes: an NL bit ladder module having NL ladder resistors connected in parallel; and an NS bit segment module having 2^NS - 1 segment resistors connected in parallel, where NS is an integer greater than or equal to 1.

19. The method of claim 18 wherein a first number of the NL bits are associated with a first radix and a second number of the NL bits are associated with a second radix that is different than the first radix.

20. The method of claim 18 further comprising selectively setting and clearing bits of the NL bit ladder module and the NS bit segment module based on the code ratio.

FIELD [0001] The present disclosure relates to a sub-radix digital-to-analog converter (DAC).

BACKGROUND [0002] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

Digital-to-analog converters (DACs) receive a digital input signal and convert the digital input signal into an analog output signal. The digital input signal has a range of digital codes that are converted into a continuous range of analog signal levels of the analog output signal. Accordingly, DACs are typically used to convert data between applications operating in digital and analog domains. For example only, applications of DACs include, but are not limited to, video display drivers, audio systems, digital signal processing, function generators, digital attenuators, data storage and transmission, precision instruments, and data acquisition systems.

A variety of types of DACs are available based upon desired functionality. For example only, DACs may have varying predetermined resolutions of the digital input signal, receive different encoded digital input signals, have different ranges of analog output signals using a fixed reference or a multiplied reference, and provide different types of analog output signals. Various DAC performance factors include, but are not limited to, settling time, full scale transition time, accuracy or linearity, and resolution. A number of bits (i.e. a bit width) of the digital input signal defines the resolution, a number of output (quantization) levels, and a total number of digital codes that are acceptable for the DAC. For example, if the digital input signal is m-bits wide, the DAC has 2^m output levels.

In sub-binary radix (i.e. sub-radix 2) DACs, the ratio of a weighted DAC element to the next (lower) weighted DAC element is a constant less than 2 (i.e. sub-binary). For example only, the ratio may be approximately 1.85.

Referring now to FIG. 1, an example sub-binary radix DAC 10 includes a ladder module 12 having m ladder bits and a switch control module 14. For example only, the ladder module 12 is an R-βR ladder. The ladder module 12 receives analog reference signals 16 and 18. For example only, the analog reference signal 16 may be ground and the analog reference signal 18 may be a positive reference voltage.
The switch control module 14 receives bits b_0, b_1, . . . , b_(m-1) of an m-bit binary digital input signal 20 and controls switches (not shown) of the ladder module 12 based on the m bits of the digital input signal 20. The ladder module 12 generates an analog output signal 22 based on the digital input signal 20 (i.e. the controlled switches of the ladder module 12) and the analog reference signals 16 and 18. Accordingly, the analog output signal 22 corresponds to the digital-to-analog conversion of the digital input signal 20.

Referring now to FIG. 2, the ladder module 12 of the DAC 10 is shown to include resistors RL_0 . . . RL_(m-1), referred to collectively as RL, and resistors RDL_0 . . . RDL_(m-1), referred to collectively as resistors RDL. Each of the resistors RL has a value R and each of the resistors RDL has a value βR. In other words, β corresponds to a ratio of an RDL resistor value to an RL resistor value. A termination resistor RT has a value of γR. The values of β and γ satisfy the equation β^2 = β + γ. The radix of the DAC 10 corresponds to γ/(γ - 1). The analog reference signals 16 and 18 are selectively provided to the resistors RT and RDL via switches 30.

The sub-binary radix DAC 10 is not monotonic. In other words, a transfer function of the DAC 10 is non-monotonic and a conversion between the non-monotonic transfer function and a monotonic transfer function is needed. Further, due to code overlapping, a dynamic range of the DAC 10 is reduced. Consequently, the DAC 10 uses additional bits to recover the dynamic range, and an algorithm is used to convert the bits of the m-bit binary digital input signal 20 to a sub-radix DAC code having additional bits. Conversion between the non-monotonic transfer function and the monotonic transfer function is performed via a calibration step and a radix conversion step.

The calibration step is performed using an example recursive successive approximation method. The method determines a last code having a smaller value than an analog bit weight of a current bit for each of the bits of the digital input signal 20 (from the LSB to the MSB). Results of the method are used to generate a calibration table that associates each bit i from 0 to m-1 with a corresponding digital weight WL_i. An example calibration table 50 for m=4 is shown in FIG. 3. The example calibration table 50 corresponds to the following design parameters: effective number of bits (i.e. bits of input DAC code) = 3; radix DAC number of bits = 4; and radix = 1.5.

The radix conversion step is performed using an example successive subtraction method. The method performs successive subtraction of the digital weights WL_i from the binary input value of the digital input signal 20 to determine which bits of the DAC 10 are set and which bits of the DAC 10 are cleared. Results of the method are used to generate a radix DAC code, and subsequently an output value, for each input DAC code. For example only, a code mapping table 70 as shown in FIG. 4 illustrates a relationship between input DAC codes from 000 to 111 and corresponding radix DAC codes and output values. The example code mapping table 70 corresponds to the following design parameters: effective number of bits = 3; radix DAC number of bits = 4; and radix = 1.5.

SUMMARY [0012] A system includes an N bit sub-binary radix digital-to-analog converter (DAC) that converts an m bit digital input signal to an analog output signal, where m and N are integers greater than or equal to 1 and N>m.
A radix conversion module determines a code ratio, the code ratio being a ratio of a total number of available monotonic codes to 2 , and performs radix conversion on the m bit digital input signal based on the code ratio. Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the disclosure. BRIEF DESCRIPTION OF THE DRAWINGS [0014] The present disclosure will become more fully understood from the detailed description and the accompanying drawings, wherein: FIG. 1 is a functional block diagram of a sub-binary radix DAC according to the prior art; FIG. 2 is a schematic of a ladder module of a sub-binary radix DAC according to the prior art; FIG. 3 is a calibration table for a sub-binary radix DAC according to the prior art; FIG. 4 is a code mapping table of a sub-binary radix DAC according to the prior art; FIG. 5 is a functional block diagram of a sub-binary radix DAC according to the present disclosure; FIG. 6 is a schematic of a combination of a ladder module and an MSB segment module of a sub-binary radix DAC according to the present disclosure; FIG. 7 illustrates a ladder calibration method using recursive successive approximation according to the present disclosure; FIG. 8 illustrates a segment calibration method using recursive successive approximation according to the present disclosure; FIG. 9 is a flow diagram illustrating steps of the ladder calibration method and the segment calibration method according to the present disclosure; FIG. 10 illustrates a radix conversion method for performing a radix conversion step according to the present disclosure; FIG. 11 is a flow diagram illustrating steps of the radix conversion method according to the present disclosure; FIG. 12 is a code mapping table of a sub-binary radix DAC according to the present disclosure; FIG. 13 is a schematic of a sub-binary radix DAC incorporating gain trim according to the present disclosure; FIG. 14A illustrates DAC output after calibration according to the prior art; FIG. 14B illustrates DAC output after calibration according to the present disclosure; FIG. 15A illustrates DNL after calibration according to the prior art; FIG. 15B illustrates DNL after calibration according to the present disclosure; FIG. 16A illustrates INL after calibration according to the prior art; and FIG. 16B illustrates INL after calibration according to the present disclosure. DETAILED DESCRIPTION [0034] The following description is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements. As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A or B or C), using a non-exclusive logical or. It should be understood that steps within a method may be executed in different order without altering the principles of the present disclosure. 
As used herein, the term module may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module may include memory (shared, dedicated, or group) that stores code executed by the processor. The term code, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, and/or objects. The term shared, as used above, means that some or all code from multiple modules may be executed using a single (shared) processor. In addition, some or all code from multiple modules may be stored by a single (shared) memory. The term group, as used above, means that some or all code from a single module may be executed using a group of processors. In addition, some or all code from a single module may be stored using a group of The apparatuses and methods described herein may be implemented by one or more computer programs executed by one or more processors. The computer programs include processor-executable instructions that are stored on a non-transitory tangible computer readable medium. The computer programs may also include stored data. Non-limiting examples of the non-transitory tangible computer readable medium are nonvolatile memory, magnetic storage, and optical storage. Referring now to FIG. 5, a sub-binary radix DAC 100 according to the present disclosure includes an LSB ladder module 102, an MSB segment module 104, and a switch control module 106. For example only, the LSB ladder module 102 is an R-βR ladder. The LSB ladder module 102 and the MSB segment module 104 receive analog reference signals 108 and 110. For example only, the analog reference signal 108 may be ground and the analog reference signal 110 may be a positive reference voltage. A radix conversion module 112 receives a binary digital input signal 114 and outputs an N bit switch control signal 116, where N (a number of radix DAC bits) corresponds to NL (a number of ladder bits)+NS (a number of segment bits). In other words, N=NL+NS. For example only, for an 18 bit DAC (i.e. for an 18 bit digital input signal), N is greater than 18. The value of NS (and therefore N) may be selected based on desired linearity or other performance parameters. The switch control module 106 receives the N bits of the switch control signal 116 and controls switches (not shown) of the LSB ladder module 102 and the MSB segment module 104 based on the switch control signal 116. For example only, MSB segments of the MSB segment module 104 may be thermometer encoded. The MSB segment module 104 provides NS segment bits and generates an analog output signal 124 based on the controlled switches of the LSB ladder module 102 and the MSB segment module 104 and the analog reference signals 108 and 110. Accordingly, the analog output signal 124 corresponds to the digital-to-analog conversion of the digital input signal 114 after the radix conversion module 112 converts the digital input signal 114 to the N bit switch control signal 116. Referring now to FIG. 6, the LSB ladder module 102 of the DAC 100 is shown to include resistors RL . . . RL -1, referred to collectively as RL , resistors RDL . . . RDL -1, referred to collectively as resistors RDL , and termination resistor RT. 
Each of the resistors RL has a value R and each of the resistors RDL has a value βR. The termination resistor RT has a value of γR. The values of β and γ satisfy the equation β^2 = β + γ. The MSB segment module 104 is shown to include resistors RDS_0 . . . RDS_(2^NS - 2), referred to collectively as resistors RDS. Each of the resistors RDS has a value βR. The analog reference signals 108 and 110 are selectively provided to the resistors RT, RDL, and RDS via switches 130.

The modified structure of the DAC 100, including the bits provided by the MSB segment module 104, improves resistance and drift sensitivities of the switch and metal connections of the DAC 100. Further, the MSB segment module 104 improves output noise of the DAC 100 without lowering DAC unit resistance. Bits of the ladder module 102 and the segment module 104 are set or cleared using the switches 130. For example, a bit may be set when a corresponding one of the switches 130 connected to the analog reference signal 110 is closed. Conversely, a bit may be cleared when a corresponding one of the switches 130 connected to ground is closed.

Although the DAC 100 as described above implements a fixed radix for each bit, any of the techniques described herein may be applied to a mixed radix. For example only, a first number of bits associated with the LSB ladder module 102 may have a first radix (e.g. 2). Accordingly, a first number of stages of the LSB ladder module 102 associated with the first number of bits may operate as an R-2R DAC. For example only, the first number of bits may correspond to a number of stages that ensures monotonic output without any calibration. A remaining number of bits associated with the LSB ladder module 102 may have a different radix. The number of bits having the different radix may be determined based on, for example only, resistor matching and desired monotonic output.

An algorithm according to the present disclosure performs conversion between a non-monotonic transfer function of the DAC 100 and a monotonic transfer function via a calibration step and a radix conversion step. The calibration step includes an LSB ladder calibration step, an MSB segment calibration step, and a calculation of a good code ratio (e.g. a ratio based on a total number of monotonic codes). The radix conversion step converts the incoming digital code to a sub-radix DAC setting.

Referring now to FIG. 7, the LSB ladder calibration step is performed using, for example only, a recursive successive approximation method 150. The method 150 determines a last code having a smaller value than an analog bit weight of a current bit for each of the bits of the digital input signal 114. The method 150 calibrates each bit (for i from 1 to NL-1), starting from the LSB, of the digital input signal 114. In other words, the method 150 iteratively calibrates each bit i to determine a digital weight WL_i.

Referring now to FIG. 8, the MSB segment calibration step is performed using a segment calibration method 160. The method 160 asserts and calibrates each segment seg from the LSB segments to the MSB segments (from 0 to 2^NS - 2). When a current segment seg is asserted, segments 0 through seg are each turned on. A total number of monotonic codes below segment seg equals a sum of a total number of monotonic codes below segment seg-1 (or zero if seg=0) and a total number of monotonic codes between segment seg and segment seg-1 (or a zero code output if seg=0).
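As a quick numeric illustration (an addition, not from the patent text) of the ladder relationships stated above, β^2 = β + γ and radix = γ/(γ - 1), the following hypothetical Python sketch recovers β and γ for a chosen radix; radix 2 should give the familiar R-2R values β = γ = 2.

import math

def ladder_values(radix):
    # radix = gamma / (gamma - 1)  =>  gamma = radix / (radix - 1)
    gamma = radix / (radix - 1.0)
    # beta^2 = beta + gamma  =>  beta = (1 + sqrt(1 + 4*gamma)) / 2  (positive root)
    beta = (1.0 + math.sqrt(1.0 + 4.0 * gamma)) / 2.0
    return beta, gamma

for r in (2.0, 1.857, 1.5):
    beta, gamma = ladder_values(r)
    print(f"radix={r}: beta={beta:.4f}, gamma={gamma:.4f}")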
Referring now to FIG. 9, the methods 150 and 160 are shown as a flow diagram 190 that begins in step 192. In step 194, WL_0 is set to 1. In other words, the digital weight of bit b_0 is set to 1. In step 196, ladder calibration begins in order to calibrate bit i from 1 to NL-1, and the values of WL_i and Vout are initialized to 1 and 0, respectively. The LSBs below bit i are then evaluated in step 198 to determine whether to keep or ignore each bit. Among the LSBs below bit i and starting from the MSB, j bits (from i-1 to 0) are iteratively evaluated.

In step 198, control determines whether the sum of Vout and b_j (i.e. the analog bit weight of the current bit j) is less than b_i. If true, control continues to step 200 to keep (i.e. set to 1) the current bit j. If false, control ignores (i.e. sets to 0) the current bit j. If false and j is greater than 0, control repeats step 198. If false, j=0, and i is less than NL-1, control continues to step 196. If false, j=0, and i=NL-1, control continues to step 202 to begin segment calibration.

In step 200, control keeps bit j, sets Vout equal to the sum of Vout and b_j, sets WL_i equal to the sum of WL_i and WL_j, and determines whether all bits (from i-1 to 0 and from i+1 to NL-1) have been evaluated. If true (e.g. j=0 and i=NL-1), control continues to step 202 to begin segment calibration. If j is greater than 0, control returns to step 198. If j=0 and i is less than NL-1, control returns to step 196.

In step 202, segment calibration begins in order to calibrate each segment, for seg from 0 to 2^NS - 2, and the values of WS_seg and Vout are initialized to 1 and 0, respectively. If seg is greater than 0, control continues to step 204. If seg=0, control continues to step 206. In step 204, for seg greater than 0, the sum of WS_seg and WS_(seg-1) is stored as the new value of WS_seg, and the output seg_sum_(seg-1) obtained when asserting segment seg-1 (i.e. when segments 0 through seg-1 are each turned on) is stored as the new value of Vout.

In steps 206 and 208, control determines whether to keep or ignore each bit of the ladder module 102 for j bits (for j from NL-1 to 0). In step 206, control determines whether the sum of Vout and the analog bit weight of the current bit j is less than seg_sum_seg. If true, control continues to step 208 to keep (i.e. set to 1) the current bit j. If false, control ignores (i.e. sets to 0) the current bit j. If false, j=0, and seg=2^NS - 2, control continues to step 210. If false and j is greater than 0, control repeats step 206. If false, j=0, and seg is less than 2^NS - 2, control returns to step 202.

In step 208, control keeps bit j, sets Vout equal to the sum of Vout and the analog bit weight of the current bit j, sets WS_seg equal to the sum of WS_seg and WL_j, and determines whether all bits (for j from NL-1 to 0) and all segments (i.e. through segment 2^NS - 2) have been evaluated. If true (e.g. j=0 and seg=2^NS - 2), control continues to step 210. If false and j is greater than 0, control returns to step 206. If false, j=0, and seg is less than 2^NS - 2, control returns to step 202.

In step 210, control calculates the good code ratio. For example only, control calculates the good code ratio according to good code ratio = (ΣWL_i + WS_(2^NS - 2) + 1) / 2^m. Control ends calibration in step 212.
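The flow above is easier to see in code. The following Python sketch is an illustrative re-implementation of the ladder/segment weight calibration described for FIG. 9, not the patent's actual firmware; the analog bit weights and segment outputs (b, seg_sum) are hypothetical measured values supplied by the caller.

def calibrate(b, seg_sum):
    """b[i]: measured analog weight of ladder bit i (i = 0..NL-1).
    seg_sum[seg]: measured output with segments 0..seg turned on.
    Returns ladder digital weights WL and segment digital weights WS."""
    NL = len(b)
    WL = [1] * NL                          # WL[0] = 1 (step 194)
    for i in range(1, NL):                 # step 196: calibrate bit i
        vout = 0.0
        WL[i] = 1
        for j in range(i - 1, -1, -1):     # evaluate LSBs below bit i, MSB first
            if vout + b[j] < b[i]:         # steps 198/200: keep bit j
                vout += b[j]
                WL[i] += WL[j]
    WS = [1] * len(seg_sum)                # step 202: calibrate each segment
    for seg in range(len(seg_sum)):
        vout = 0.0
        WS[seg] = 1
        if seg > 0:                        # step 204
            WS[seg] += WS[seg - 1]
            vout = seg_sum[seg - 1]
        for j in range(NL - 1, -1, -1):    # steps 206/208
            if vout + b[j] < seg_sum[seg]: # keep ladder bit j under this segment
                vout += b[j]
                WS[seg] += WL[j]
    return WL, WS

With these weights, the good code ratio of step 210 is simply (sum(WL) + WS[-1] + 1) divided by 2 to the power of the input bit width.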
Referring now to FIG. 10, the radix conversion step is performed using a radix conversion method 220. The radix conversion method 220 determines which bits of the DAC 100 are kept (i.e. set to 1) and which bits are cleared or ignored (i.e. set to 0). The radix conversion method 220 according to the present disclosure calculates a total number of monotonic codes (code_total) and the good code ratio (code_ratio) based on the code_total, and performs the radix conversion step based in part on the good code ratio. Incorporating the good code ratio into the radix conversion allows all available monotonic codes to be selected to form the DAC transfer function. Consequently, both differential non-linearity (DNL) and integral non-linearity (INL) performance are significantly improved.

Assuming an input DAC code (e.g. the digital input signal 114) is m bits and a sub-binary radix DAC (e.g. the DAC 100) is N bits (where N=NL+NS and N>m), an input DAC code is indicated by d. The code_total is calculated according to code_total = ΣWL_i + 1. The code ratio corresponds to a ratio of the code_total to an m bit full code, or code_total / 2^m.

Referring now to FIG. 11, the method 220 is shown as a flow diagram 230 that begins in step 232. In step 234, control calculates a scaled input DAC code. For example, an "error" value is initialized to d*ratio (where "ratio" corresponds to the ratio of code_total to the m bit full code, and d is the m bit pre-scaled input DAC code). In step 236, control begins a segment search. For example, starting from the MSB segment (for seg from 2^NS - 2 to 0), control finds a first segment having a total number of monotonic codes less than the scaled input code (error). If no segment meets this criterion, then no segments are turned on. Control determines whether error is greater than or equal to WS_seg. If true, control continues to step 238. If false and seg is greater than 0, control repeats step 236 for the next WS_seg. If false and seg=0, control continues to step 240 to begin a ladder search. In step 238, control sets the new value of error to error - WS_seg (for the first segment whose weight is less than the error), and turns on segments 0 through seg of the MSB bits (i.e. sets the MSB code (NS bits) to seg+1).

Control performs the ladder search for each bit, for i from NL-1 down to 0, in steps 240 and 242. In step 240, control determines whether error is greater than or equal to WL_i of the current bit i. If true, control continues to step 242. If false, control ignores bit i (i.e. sets bit i to 0). If false and i is greater than 0, control repeats step 240. If false and i=0, control continues to step 244. In step 242, control sets the new value of error to error - WL_i and keeps bit i (i.e. sets bit i to 1), and determines whether all bits (for i from NL-1 to 0) have been evaluated (i.e. i=0). If true, control continues to step 244. If false (i.e. i is greater than 0), control returns to step 240. In step 244, code conversion is completed and the converted code (e.g. a 22-bit code for NS=4 and NL=18) is stored. For example, control may load the code into a DAC register. Control ends radix conversion in step 246.

An example code mapping table 250 according to the present disclosure for input DAC codes from 000 to 111 and a code ratio of 1.5 is shown in FIG. 12. The example code mapping table 250 corresponds to the following design parameters: effective number of bits (i.e. bits of input DAC code) = 3; radix DAC number of bits = 4; and radix = 1.5.
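Again purely as an illustration (a hypothetical helper, not the patent's implementation), the segment and ladder searches of FIG. 11 can be sketched as follows, reusing the WL and WS weights from the calibration sketch above.

def radix_convert(d, WL, WS, code_ratio):
    """Convert an m-bit input code d into segment and ladder bit settings."""
    error = d * code_ratio                  # step 234: scale the input code
    seg_on = 0                              # number of MSB segments turned on
    for seg in range(len(WS) - 1, -1, -1):  # step 236: search segments, MSB first
        if error >= WS[seg]:
            error -= WS[seg]                # step 238
            seg_on = seg + 1                # segments 0..seg are turned on
            break
    bits = [0] * len(WL)
    for i in range(len(WL) - 1, -1, -1):    # steps 240/242: ladder search
        if error >= WL[i]:
            error -= WL[i]
            bits[i] = 1                     # keep bit i
    return seg_on, bits                     # step 244: load into the DAC register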
Referring now to FIG. 13, the incorporation of the code ratio into the radix conversion method 220 allows a desired gain trim to be achieved without an additional analog or digital trim network. In particular, the code ratio may be adjusted to achieve a high resolution gain trim. For example only, the DAC 100 may include an inverting output amplifier 300 and a gain resistor (resistance Rgain) connected to the resistors RDS of the MSB segment module 104. When the value of Rgain is larger than the nominal DAC output resistance RDAC, a positive initial gain error is introduced to the DAC 100. Accordingly, the code ratio can be adjusted downward to achieve the desired gain trim. For example only, the code ratio can be calculated according to code_ratio = (code_total / 2^m) * (RDAC / Rgain).

Referring now to FIGS. 14A and 14B, DAC output after calibration is shown for a conventional DAC and the DAC 100 according to the present disclosure, respectively. Referring now to FIGS. 15A and 15B, DNL after calibration is shown for a conventional DAC and the DAC 100 according to the present disclosure, respectively. Referring now to FIGS. 16A and 16B, INL after calibration is shown for a conventional DAC and the DAC 100 according to the present disclosure, respectively. For each of FIGS. 14A, 14B, 15A, 15B, 16A, and 16B, the following design parameters are assumed: effective number of bits=4; radix DAC number of bits=7; and radix=1.857.

The broad teachings of the disclosure can be implemented in a variety of forms. Therefore, while this disclosure includes particular examples, the true scope of the disclosure should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, the specification, and the following claims.
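As a final illustrative fragment (assumed values, not from the patent), the gain-trimmed code ratio is just a scaling of the nominal one:

def trimmed_code_ratio(code_total, m, r_dac, r_gain):
    # Nominal ratio scaled down when R_gain > R_DAC introduces a positive gain error
    return (code_total / 2 ** m) * (r_dac / r_gain)

print(trimmed_code_ratio(code_total=12, m=3, r_dac=10_000.0, r_gain=10_100.0))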
{"url":"http://www.faqs.org/patents/app/20120200442","timestamp":"2014-04-18T20:24:19Z","content_type":null,"content_length":"57200","record_id":"<urn:uuid:377f2630-83d0-4cfb-b0b7-bbdfd744edd5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Hot Potato: Momentum As An Investment Strategy Momentum investing has important features in common with other factor-based Smart Beta strategies. For example, it has straightforward index or portfolio construction rules that are easily explained and implemented. And, although momentum investing is emphatically not a contrarian strategy, neither is it necessarily inconsistent with the Smart Beta thesis that prices are noisy and mean-reverting. In this interpretation, momentum investing is a lively game of hot potato—buying rapidly appreciating stocks, holding them for a relatively short period, and selling them before their price trends reverse direction. And in favorable conditions it works very well. Nonetheless, our research raises serious theoretical and practical questions about momentum as an investment strategy in its own right. In this issue, I review the evidence for momentum investing, consider momentum in comparison with other equity risk factors, and briefly touch upon implementation issues, including portfolio construction and rebalancing policies. I argue in favor of choosing another factor for the core investment strategy and using momentum only as an ancillary trading strategy.^1 Evidence and Explanations Momentum has shown itself to be quite robust across U.S. and foreign equity markets, within industries and countries, and across many different asset classes such as stocks, currencies, commodities, and bonds. In 1993, UCLA professors Narasimhan Jegadeesh and Sheridan Titman (1993) published what is considered to be the first comprehensive study of the momentum effect. They found strong evidence, over the 1965–1989 period, that stock prices trend—at least in the “short-term” of up to two years. In Jegadeesh and Titman’s study, the best performing portfolio selected stocks on the basis of the previous 12 months of price returns, bought winners, sold losers short, and held those positions for the subsequent three months. Other academics confirmed that momentum is at work in international equities, emerging markets, industries and sectors, mutual funds, and asset classes.^2 In fact, commodity trading advisors (CTAs) have built a profitable business around trading momentum.^3 Empirical studies have shown the momentum effect to be strong, but financial theory hasn’t definitively explained why momentum exists. Describing investors’ behavioral tendencies in the 1970s, Daniel Kahneman and Amos Tversky (1979) identified what they called the “anchoring and adjustment” heuristic.^4 In the face of uncertainty, individuals estimate the expected future value of an asset by making adjustments to a reference price, that is, an “anchored” value. Investors manifest this tendency by anchoring to the current information (stock price) and being slow to adjust expected future values in light of new information. Thus, prices lag fundamental information and play “catch up” for a few quarters, leading to serial correlation in stock prices. Jegadeesh and Titman concluded that an under-reaction to firm-specific information was the likely cause of momentum. In further support of the anchoring hypothesis, Hong and Stein (1999) found that it takes time for information to be fully reflected in stock prices. Other financial and psychological considerations may also prolong momentum by postponing price adjustments due to new information. Tax liabilities might make it preferable to defer the realization of capital gains. 
Company insiders may decide it is prudent to reduce their holdings over an extended period. Investors’ sentimental attachment to a company may discourage them from divesting the stock. (For instance, an individual might have inherited the stock, or the officers of a charitable organization may be loath to sell the stock of their founders’ company.) Serial correlation in earnings announcements might also lead to stock price momentum.^5 Taken one by one, these insights make good sense. However, there isn’t a generally accepted theory that explains the causes of momentum in the financial markets. For example, it is not clear why the anchoring-and-adjustment heuristic would prevail over another psychological trait—investors’ tendency to overreact to new information. Nor is it clear that behavioral patterns which are perceptible in individual decision-making can be applied by simple extrapolation to untold numbers of investors interacting with one another. The lack of a cogent theoretical explanation is not a trivial matter. Maintaining that, because the stock price has risen, it will continue to rise—as though the conservation of linear momentum applied, by analogy, to financial assets—is scientifically dubious. After all, an investment thesis that supports buying stocks solely on the basis of past prices violates even the weak form of the efficient market hypothesis. More than just a beauty contest, investing becomes Keynes’s (1936) third degree of speculation: It is not a case of choosing those [faces] that, to the best of one’s judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. Momentum as an Equity Risk Factor From our vantage point, it appears that investors are starting to look at equity risk factors more closely. We believe there are two reasons for this new interest. One reason is to understand the nature and relative magnitude of the risks in their portfolio. This is a very sensible exercise because many actively managed portfolios have inapparent risk exposures. The other reason is that, with the growing acceptance of Smart Beta strategies, many investors are shifting their equity portfolios to capture specific long term risk premia. The commonly accepted equity risk factors are market beta (MKT – RF), value (HML), small size (SMB), momentum (MOM), and low volatility (BAB).^6 Among the first four equity risk factors, over a period longer than 40 years, momentum registered the highest return and Sharpe ratio (see Figure 1).^7 As attractive as momentum appears in Figure 1, it must be borne in mind that all equity risk factors are time-varying. That is, risk factor exposures will not add value consistently and all of the time. There will be some periods when certain risk factors are in favor and others when they are not—including extended intervals when factor-based investing is very discomfiting. As shown in Figure 2, the momentum risk factor has earned a negative risk premium for the 13 years ending June 30, 2013. We have also observed that momentum’s strength has eroded over the past decade. Factor-based investing requires strong conviction and a steady hand. The other major challenges with momentum include higher volatility and the associated left-tail risk of severe performance crashes. 
These traits make it difficult to adopt momentum as an investment strategy and may explain why we don’t see many pure momentum strategies in the marketplace, where value strategies are ubiquitous. Although momentum and value factors have similar Sharpe ratios over time, momentum has 50% higher volatility, whereas value is more stable and, perhaps, more intuitively appealing. Additionally, the momentum anomaly works best in illiquid, smaller cap stocks (Fama and French, 2011), and turnover is very high. Jegadeesh and Titman calculated turnover at 170% annually for their long/short portfolios. The trading costs are real and can substantially erode the risk premium due to momentum. Research is mixed about the alpha net of trading costs, but there is evidence that momentum’s high transaction costs offset the alpha potential at a fairly low level of assets invested in momentum strategies (Korajczyk and Sadka, 2004).^8 Implementation Matters A better form of momentum strategy can be implemented by adopting portfolio construction rules that adjust for systematic risk. Naïve momentum strategies hold high beta stocks that lead to crowding into expensive stocks during bubbles. When the inevitable market correction occurs, high momentum stocks reverse (that is, revert to the mean) strongly, and the high beta names naturally tend to overcorrect. One of our colleagues, Denis Chaves (2012), finds that the alpha produced by idiosyncratic momentum is significantly more robust than the alpha associated with traditional momentum. The Carhart four-factor model explains less than half of the return generated by an idiosyncratic momentum strategy. Chaves corrects for beta in calculating momentum for the purpose of stock selection. For example, if the market rises 20% and a stock with a beta of 2.0 rises 40%, the idiosyncratic momentum of that stock is zero because the stock is expected to rise twice as much as the market. All else equal, this stock is unlikely to be selected for an idiosyncratic momentum portfolio, but it would probably be held in a traditional or naïve momentum portfolio. Intuitively, adjusting for beta allows us to differentiate between stocks whose prices are rising for “authentic” reasons, and those that are just moving with the market.^9 Apart from the equity market factor, the value factor is probably the best documented and most commonly targeted source of risk premium. Nonetheless, it is not entirely clear what value is. Some theorists refer to an unknown or hidden risk (e.g., default). We have a different view. We maintain that the value premium (and the size premium as well) is a byproduct of noisy, mean-reverting stock prices, and it can be captured through contra-trading.^10 In the RAFI® Fundamental Index® methodology, contra-trading is accomplished by means of systematic rebalancing to constituent weights that are not related to prices. Rebalancing, in this approach, does not merely correct for style drift; it is integral to the strategy. We favor annual rebalancing because it minimizes turnover and, therefore, transaction costs. Value investing is a long-term proposition. Momentum strategies, in contrast, are profitable in the short run, and they call for more frequent rebalancing.^11 But, obviously, more frequent rebalancing entails higher transaction costs. In addition, rebalancing a non-price-weighted portfolio has a strong positive value factor loading and a negative loading to momentum. 
These opposing characteristics are hardly surprising; momentum and value strategies are themselves opposites—procyclical vs. contrarian, short-term vs. long-term, and based upon trending vs. reverting to the mean. Recognizing these oppositions, I submit that complementing a long-term fundamentals-weighted strategy with a judicious commitment to a short-term momentum strategy might, in aggregate, produce attractive risk-adjusted returns. Indeed, Morningstar found a blended portfolio of value and momentum outperformed a blended portfolio of value and growth by nearly 1% annually.^12 And Yet… So what are investors to do with momentum? Our conclusion is that momentum is inadvisable as a stand-alone strategy due to the risk of precipitous losses. Rather, we suggest that long-term investors seeking to tap more than one source of equity premium choose another, more stable factor for their core investment strategy (value is certainly a strong candidate), and consider adding momentum as a short-term trading strategy when market conditions are favorable. 1. For the record, the opinions expressed in this piece of writing are the author’s; they do not necessarily reflect Research Affiliates’ views. 2. See, for example, Rouwenhorst (1998), Griffin, Ji, and Martin (2005), Rouwenhorst (1999), Moskowitz and Grinblatt (1999), Carhart (1997), and Asness, Moskowitz, and Pedersen (2009). 3. For a fee on the order of 2 + 20%, CTAs will gladly provide you with the momentum returns across assets. 4. Kahneman devotes a very readable chapter to anchoring in Kahneman (2011). 5. Soffer and Walther (2000); Chordia and Shivakumar (2002). 6. Beta, value, size, and momentum constitute the classic “four-factor” Fama–French–Carhart risk model. 7. The risk factor portfolios are courtesy of Ken French at Dartmouth. Risk factor returns are calculated for zero-cost long/short portfolios. Momentum is calculated by taking the returns of all stocks from 12 months ago to 2 months ago, ranking them and selecting the top returning 30% of stocks for the long portfolio and shorting the worst 30% performing stocks. 8. The authors estimated that, for a single fund, momentum loses its statistical significance at $1–2 billion, and its profits at $5 billion. 9. Of course, traditional momentum portfolios have alpha beyond the beta risk factor, but idiosyncratic momentum dampens volatility, resulting in a more attractive risk premium. 10. If the stock market is not perfectly efficient for any reason, half of stocks are overpriced and half are underpriced. As market participants seek fair value, prices mean revert resulting in a return that has been shown to be approximately 2% over the capitalization-weighted index in developed markets such as the United States (Arnott, Hsu, and Moore, 2005). 11. Vayanos and Woolley (2013) determined that the Sharpe ratio of the momentum strategy is a function of the length of the window over which past returns are calculated, and they found that the highest Sharpe ratio was achieved using a window of four months. This implies a rebalancing frequency of three times per year. 12. Beginning in 1993 through June 2013, an equal-weighted Russell 1000 Value Index and AQR Momentum Index returned 9.53% relative to an equal-weighted Russell 1000 Value Index and Russell 1000 Growth Index that returned 8.63%. They had similar standard deviations of 15.4% (Bryan, 2013). Arnott, Robert D., Jason C. Hsu, and Philip Moore. 2005. “Fundamental Indexation.” Financial Analysts Journal, vol. 61, no. 2 (March/April):83–99. 
Asness, Clifford S., Tobias J. Moskowitz, and Lasse Heje Pedersen. 2010. “Value and Momentum Everywhere.” American Finance Association 2010 Atlanta Meetings Paper. Bryan, Alex. 2013. “Does Momentum Investing Work?” Morningstar (April 10). Carhart, Mark M. 1997. “On Persistence in Mutual Fund Performance.” Journal of Finance, vol. 52, no. 1 (March):57–82. Chaves, Denis. 2012. “Eureka! A Momentum Strategy that Also Works in Japan.” Research Affiliates Working Paper (January 9). Chordia, Tarun, and Lakshmanan Shivakumar. 2002. “Momentum, Business Cycle, and Time-Varying Expected Returns.” Journal of Finance, vol. 57, no. 2 (April):985–1019. Fama, Eugene F., and Kenneth R. French. 2011. “Size, Value, and Momentum in International Stock Returns.” Fama–Miller Working Paper; Tuck School of Business Working Paper No. 2011-85; Chicago Booth Research Paper No. 11-10. Griffin, John M., Xiuqing Ji, and J. Spencer Martin. 2005. “Global Momentum Strategies: A Portfolio Perspective.” Journal of Portfolio Management, vol. 31 no. 2 (Winter):23–39. Hong, Harrison, and Jeremy C. Stein. 1999. “A Unified Theory of Underreaction, Momentum Trading, and Overreaction in Asset Markets.” Journal of Finance, vol. 54, no. 6 (December):2143–2184. Jegadeesh, Narasimhan, and Sheridan Titman. 1993. “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency.” Journal of Finance, vol. 48, no. 1 (March):65–91. Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Strauss, and Giroux. Kahneman, Daniel, and Amos Tversky. 1979. “Prospect Theory: An Analysis of Decision Under Risk.” Econometrica, vol. 47, no. 2 (March):263–292. Keynes, John Maynard. 1936. The General Theory of Employment, Interest and Money. London: Macmillan Cambridge University Press. Korajczyk, Robert A., and Ronnie Sadka. 2004. “Are Momentum Profits Robust to Trading Costs?” Journal of Finance, vol. 59, no. 3 (June):1039–1082. Moskowitz, Tobias J., and Mark Grinblatt. 1999. “Do Industries Explain Momentum?” Journal of Finance, vol. 54, no. 4 (August):1249–1290. Rouwenhorst, K. Geert. 1998. “International Momentum Strategies.” Journal of Finance, vol. 53, no. 1 (February):267–284. ———.1999. “Local Return Factors and Turnover in Emerging Stock Markets.” Journal of Finance, vol. 54, no. 4 (August):1439–1464. Soffer, Leonard C., and Beverly R. Walther. 2000. “Returns Momentum, Returns Reversals, and Earnings Surprises.” Working Paper (January). Vayanos, Dimitri, and Paul Woolley. 2013. “An Institutional Theory of Momentum and Reversal.” Review of Financial Studies, vol. 26 no. 5 (May):1087–1145.
{"url":"http://www.researchaffiliates.com/Our%20Ideas/Insights/Fundamentals/Pages/F_2013_08_Momentum_Factor.aspx","timestamp":"2014-04-18T18:19:27Z","content_type":null,"content_length":"137653","record_id":"<urn:uuid:d4dacb7e-a0d6-4299-bf6e-ac46582a6f58>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
Special Development Fund The Special Development Fund helps IMU to fulfill the important obligation of helping developing countries within the framework of mathematical research. The means of the Fund, which go unreduced to mathematicians from developing countries, are used primarily for travel grants to young mathematicians to make it possible for them to participate in International Congresses of Mathematicians. The Executive Committee of IMU elects an international committee to distribute the grants. Means to the Special Development Fund come from donations. Donations can be sent, at any time and in any convertible currency, to the following account: IMU Account at the Institute for Advanced Study PNC Bank 76 Nassau Street Princeton, NJ 08540 ABA # 031207607 Account # 8011913872 The goal now is to collect funds for travel grants for the 2006 International Congress of Mathematicians in Madrid, to have as many qualified young mathematicians from developing countries as participants. For the ICM-2002 in Beijing the IMU financed the trip of 95 young mathematicians and the Chinese Local Organizing Committee kindly covered the local expenses. We hope to increase this number to 120 or even 130 in 2006. As you may know, the American Mathematical Society has asked its members to make a donation to the SDF when paying their membership fees. We hope that other societies could consider a similar action. Also, from the start the London Mathematical Society and the Royal Society have made major contributions. Other countries that have been making important contributions to the Fund are: Brazil, Germany, Finland, France, Holland, Japan, Norway, Sweden, Switzerland and United Kingdom. Donations to the SDF can be sent at any time in any convertible currency to any of the following accounts: IMU Account at the Institute for Advanced Study PNC Bank 76 Nassau Street Princeton, NJ 08540 ABA # 031207607 Account # 8011913872 The following contributions have been received in the years 1991-2004: American Mathematical Society, USA US $ 14.772,93 Royal Society, Great Britain US $ 8.780,27 London Mathematical Society US $ 1.730,10 American Mathematical Society, USA US $ 27.787,00 Wiskundig Gennotschap, Netherlands US $ 1.825,40 Royal Society, Great Britain US $ 8.377,21 Deutsche Mathematics Vereinigung, German US $ 6.406,74 American Mathematical Society, USA US $ 32.500,95 Wiskundig Gennotschap, Netherlands US $ 1.418,43 American Mathematical Society, USA US $ 30.550,06 Mathematical Society of Japan, Japan US $ 18.881,11 Royal Society, Great Britain US $ 4.477,00 Société Mathématique de France, France US $ 3.404,86 National Council for S&T Development, Brazil US $ 6.944,44 American Mathematical Society, USA US $ 33.227,89 National Council for S&T Development, Brazil US $ 10.000,00 London Mathematical Society US $ 3.263,12 American Mathematical Society, USA US $ 31.807,41 London Mathematical Society US $ 3.639,60 Societe Mathematique de France and Societe des Math. Appl.et Ind. 
US $ 2.341,81 American Mathematical Society, USA US $ 30.872,76 National Council for S&T Development, Brazil US $ 9.708,05 Royal Swedish Academy of Sciences, Sweden US $ 265,95 London Mathematical Society, Great Britain US $ 3.121,05 American Mathematical Society, USA US $ 30.972,63 National Council for S&T Development, Brazil US $ 4.727,65 Mathematical Society of Japan, Japan US $ 14.084,50 Société Mathematique de France, France US $ 3.092,76 London Mathematical Society, Great Britain US $ 3,321.17 American Mathematical Society, USA US $ 32,081.10 Wiskundig Genootschap Netherlands US $ 5,349.80 London Mathematical Society, Great Britain US $ 5,083.89 Mathematique France Institute, France US $ 3,120.10 Brazil US $ 5,000.00 American Mathematical Society, USA US $ 29,972.27 London Mathematical Society, Great Britain US $ 5,000.00 American Mathematical Society, USA US $ 41,048.79 London Mathematical Society, Great Britain US $ 5,000.00 American Mathematical Society, USA US $ 23,471.51 Mathematical Society of Japan, Japan US $ 14,642.57 American Mathematical Society, USA US $ 20,361.50 Het Wiskundig Genootschap, Netherlands US $ 5,114.12 Unione Matematica Italiana, Italy US $ 684.97 American Mathematical Society, USA US $ 18,636.50 Unione Matematica Italiana, Italy US $ 763.49 London Mathematical Society, UK US $ 4,972.00
{"url":"http://www.mathunion.org/general/o/Activities/Travel_Grants_DC/fund.html","timestamp":"2014-04-17T03:54:39Z","content_type":null,"content_length":"13128","record_id":"<urn:uuid:e539aa89-e20d-4426-9a23-37b0b4578568>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Tough vectors question...

Let r be the position vector of a variable point in the Cartesian plane OXY such that r.(10j - 8i - r/|r|) = 40, and let p1 = max{ |r + 2i - 3j|^2 }, p2 = min{ |r + 2i - 3j|^2 }. A tangent line is drawn to the curve y = 8/(x^2) at the point A with abscissa 2. The drawn line cuts the x-axis at a point B. Now answer the following questions based on the above paragraph.
Q1 p2 is equal to ?
Q2 p1 + p2 = ?
Q3 vector AB . vector OB = ?
Please explain exactly what vector r is.

Let's say that $\mathbf{r}=\langle r_{1},r_{2}\rangle.$ If you plug this into the condition $\mathbf{r}\cdot\left(\langle -8,10\rangle-\dfrac{\mathbf{r}}{\|\mathbf{r}\|}\right)=40,$ you'll end up with the equation for an ellipse in the coordinates $\langle r_{1},r_{2}\rangle.$ You can verify that the equation for the ellipse is the following: $\sqrt{r_{1}^{2}+r_{2}^{2}}=-8r_{1}+10r_{2}-40,$ or, after squaring, $63r_{1}^{2}-160r_{1}r_{2}+99r_{2}^{2}+640r_{1}-800r_{2}+1600=0.$ So the answer to your question of "what is $\mathbf{r}?$" is this: $\mathbf{r}$ is the coordinate for a point on the ellipse described by the above equation. Does that help?

How do you know that it is an ellipse? I mean... how did you come to know that it is an ellipse at first sight?

Ah ok.. Now I get it... so to find p2 or p1 we just substitute r as <x,y>, dot product it with itself, then d/dx the expression and equate d/dx = 0, find the maxima or minima depending on p1 or p2's demand, then find y from the ellipse equation... nice. If you have an easier way please suggest. Thanks for making me understand what vector r is.

I would highly recommend drawing a picture. Drawing a picture enabled me to conclude that, after all, it's not an ellipse. If you look here, you will see that you need the discriminant to be negative, which is not true in this case. The graph looks something like a hyperbola, but I don't think it's a hyperbola. The graph to which I've linked is centered about the point (-2,3), which figures largely in your expressions for p1 and p2. I do think that p2 is well-defined. The point (-2,3) lies just above the upper branch of the relation in question, and finding the minimum distance from that point to the graph should be doable, at least theoretically. However, considering that the graph appears to go on to infinity in multiple directions, I don't think your p1 is well-defined.
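As an aside (an addition, not part of the original thread), p2 can be estimated numerically: the defining condition is monotone in y for fixed x, so a rough, hypothetical Python sketch can trace the curve and track the squared distance to (-2, 3).

import math

def f(x, y):
    # the condition r.(10j - 8i - r/|r|) = 40 written in coordinates
    return -8.0 * x + 10.0 * y - math.hypot(x, y) - 40.0

best = float("inf")
for k in range(-4000, 4001):
    x = k / 100.0
    lo, hi = -100.0, 200.0            # f is increasing in y, so bisect for the root
    if f(x, lo) > 0 or f(x, hi) < 0:
        continue
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(x, mid) < 0:
            lo = mid
        else:
            hi = mid
    y = 0.5 * (lo + hi)
    best = min(best, (x + 2.0) ** 2 + (y - 3.0) ** 2)

print("numerical estimate of p2:", best)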
{"url":"http://mathhelpforum.com/advanced-applied-math/170521-tough-vectors-question.html","timestamp":"2014-04-20T10:34:52Z","content_type":null,"content_length":"50894","record_id":"<urn:uuid:8824fdbd-ab93-4fa4-99c3-7c501909e306>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Upper Bounds and Supremums

Suppose that A is contained in the set of all real numbers and is bounded above. Prove that if A contains one of its upper bounds, then this upper bound is sup A. It seems like it should be a simple problem, but I'm lost.

Re: Upper Bounds and Supremums

Suppose that $u \in A$ and $u$ is an upper bound of $A$. Now prove that $u = \sup A$. Hint: If you assume that some upper bound $v$ of $A$ satisfies $v < u$, then $u \in A$ shows that $v$ is not an upper bound of $A$ after all, a contradiction; so $u$ is the least upper bound.
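For completeness (this worked argument is an addition, not part of the original thread): let $u \in A$ be an upper bound of $A$. First, $u$ is an upper bound of $A$ by assumption. Second, if $v$ is any upper bound of $A$, then $v \ge a$ for every $a \in A$; in particular $v \ge u$ because $u \in A$. So $u$ is an upper bound that is less than or equal to every other upper bound, which is exactly the definition of $\sup A$. Hence $u = \sup A$.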
{"url":"http://mathhelpforum.com/discrete-math/203926-upper-bounds-supremums-print.html","timestamp":"2014-04-23T20:16:02Z","content_type":null,"content_length":"4966","record_id":"<urn:uuid:dd5c731e-1c46-4874-8ffc-0af4c5bb4191>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
arithmetic sequence

Also known as an arithmetic progression, a finite sequence of at least three numbers, or an infinite sequence, whose terms differ by a constant, known as the common difference. For example, starting with 1 and using a common difference of 4 we can get the finite arithmetic sequence: 1, 5, 9, 13, 17, 21, and also the infinite sequence 1, 5, 9, 13, 17, 21, 25, 29, ..., 4n + 1, ... In general, the terms of an arithmetic sequence with first term a[0] and common difference d have the form a[n] = dn + a[0] (n = 1, 2, 3, ...).

Does every increasing sequence of integers have to contain a (three-term) arithmetic progression among its terms? Surprisingly, the answer is no. To construct a counter-example, start with 0. Then for the next term in the sequence, take the smallest possible integer that doesn't cause an arithmetic progression to form in the sequence constructed thus far. (There must be such an integer because there are infinitely many integers beyond the last term, and only finitely many arithmetic progressions that the new term could complete.) This gives the non-arithmetic sequence 0, 1, 3, 4, 9, 10, 12, 13, 27, 28, ...

If the terms of an arithmetic sequence are added together the result is an arithmetic series, a[0] + (a[0] + d) + ... + (a[0] + (n - 1)d), the sum of which is given by: S[n] = n/2 (2a[0] + (n - 1)d) = n/2 (a[0] + a[n-1]). The arithmetic mean of two terms, a[s] and a[s+2], is given by (a[s] + a[s+2])/2 = a[s+1].

Related entries • geometric sequence • harmonic sequence
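The greedy construction described above is easy to reproduce. Here is a short Python sketch (illustrative, not from the original entry) that regenerates the sequence 0, 1, 3, 4, 9, 10, 12, 13, 27, 28, ...

def greedy_ap_free(n_terms):
    seq = [0]
    candidate = 1
    while len(seq) < n_terms:
        # Reject candidate c if some pair a < b already in seq satisfies c = 2b - a,
        # i.e. a, b, c would form a three-term arithmetic progression.
        forbidden = {2 * b - a for i, a in enumerate(seq) for b in seq[i + 1:]}
        if candidate not in forbidden:
            seq.append(candidate)
        candidate += 1
    return seq

print(greedy_ap_free(10))   # [0, 1, 3, 4, 9, 10, 12, 13, 27, 28]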
{"url":"http://www.daviddarling.info/encyclopedia/A/arithmetic_sequence.html","timestamp":"2014-04-20T11:01:54Z","content_type":null,"content_length":"8183","record_id":"<urn:uuid:e72ce18a-a7a6-49ce-a79f-aef1f46080b0>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Santa Ana Trigonometry Tutor Find a Santa Ana Trigonometry Tutor ...It brings my great pleasure to know that I have helped other students . I have recently finished my 3rd year of college in which I was able to tutor my classmates and friends. My expertise and knowledge is in math, physics, and of course chemistry. My goal is to make that every student understands the lesson being taught. 13 Subjects: including trigonometry, chemistry, physics, geometry ...I specialize in high school math subjects like Pre-Algebra, Algebra, Algebra 2/Trigonometry, Precalculus and Calculus. I can also tutor college math subjects like Linear Algebra, Abstract Algebra, Differential Equations, and more. My teaching strategy is to emphasize the importance of dedicatio... 9 Subjects: including trigonometry, calculus, geometry, algebra 1 ...I also seek to build on these strengths by creating a learning environment that minimizes rote learning and focuses on stretching the student's thinking and problem-solving instincts by guiding them to the solution via a series of questions. Training the student to break down a large problem int... 20 Subjects: including trigonometry, physics, calculus, geometry ...You'll see have quite a bit of experience as a teacher. I've taught high school chemistry (Sage Hill School) for 2 years, and I've taught college-level General Chemistry and Organic Chemistry at the following local junior colleges: OCC, IVC, Mt. SAC, Cypress College, and Mesa College in San Diego...I've also written 2 chemistry books. 20 Subjects: including trigonometry, chemistry, geometry, algebra 1 ...I am currently at Fuller Seminary in Pasadena preparing for a career in community work as a pastor. Here is a testimonial from one of my students: “I love this teacher. He’s funny and interesting, which doesn’t usually happen with a math teacher. 9 Subjects: including trigonometry, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/santa_ana_ca_trigonometry_tutors.php","timestamp":"2014-04-17T13:00:25Z","content_type":null,"content_length":"24276","record_id":"<urn:uuid:637e7e42-b45e-4428-8b25-f9f415e5af12>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Time series cross-validation 2
November 22, 2011
By Zach Mayer
In my previous post, I shared a function for parallel time-series cross-validation, based on Rob Hyndman's code. I thought I'd expand on that example a little bit, and share some additional wrapper functions I wrote to test other forecasting algorithms. Before you try this at home, be sure to load the functions from my last post.
These functions add random walk models, the theta method, structural time series, seasonal decomposition, and simple mean forecasts to our cross-validation repertoire. The following code fits each of these models to the example dataset, and charts their accuracies out to a forecast horizon of 12 months. Note that none of this code runs in parallel, but if you wish, you can parallelize things by loading your favorite backend. I would suggest running meanf, rwf, thetaf, and the linear model before loading a parallel backend, as all of these methods run very fast and do not need parallelization.
Here is the resulting figure. As you can see, the mean forecast is very inaccurate, but provides a useful baseline. The random walk forecast and the theta forecast are both improvements, but they ignore the function's seasonal component and have a clear seasonal error pattern. StructTS and STL are clustered down at the bottom with accuracies similar to the linear model, arima model, and exponential smoothing model.
If we ignore the mean, random walk, and theta forecasts, we get the following figure:
As you can see, the structural time series is close to the exponential smoothing model in accuracy, while the two seasonal decomposition models are consistently worse. The arima model still outperforms all other models, at every forecast horizon.
(Note that I've found a couple of bugs in my ts.cv function. It seems to not be working when fixed=TRUE, and it also doesn't like being told to just look at 1-step forecasts. I'll try to fix both bugs soon.)
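The R code and the two accuracy charts from the original post are not reproduced in this copy. As a language-neutral sketch of the same rolling-origin cross-validation idea, here is a small Python example (my own illustration; the function and series are invented and are not the post's ts.cv code). It scores a mean forecast and a naive last-value forecast, the analogues of meanf and rwf, by forecast horizon.

import numpy as np

def rolling_origin_mae(series, horizon=12, min_train=24):
    # Rolling-origin evaluation: at each cut point, forecast the next
    # `horizon` values from the history and record absolute errors by horizon.
    methods = {
        "mean":  lambda hist: np.full(horizon, hist.mean()),  # mean forecast
        "naive": lambda hist: np.full(horizon, hist[-1]),     # random-walk (last value) forecast
    }
    errors = {name: [[] for _ in range(horizon)] for name in methods}
    for cut in range(min_train, len(series) - horizon):
        hist, future = series[:cut], series[cut:cut + horizon]
        for name, forecast in methods.items():
            pred = forecast(hist)
            for h in range(horizon):
                errors[name][h].append(abs(pred[h] - future[h]))
    return {name: [float(np.mean(e)) for e in errs] for name, errs in errors.items()}

# A toy monthly series with a trend and a 12-month seasonal cycle.
rng = np.random.default_rng(0)
t = np.arange(120)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)

for name, mae in rolling_origin_mae(y).items():
    print(name, [round(m, 2) for m in mae])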
{"url":"http://www.r-bloggers.com/time-series-cross-validation-2/","timestamp":"2014-04-19T17:17:52Z","content_type":null,"content_length":"39168","record_id":"<urn:uuid:65591e92-38eb-4334-a727-b25170c3ce95>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
February 13, 2013 COMS W4115 Programming Languages and Translators Lecture 7: Parsing Context-Free Grammars February 13, 2013 1. Yacc: a language for specifying syntax-directed translators 2. The pumping lemma for context-free languages 3. The parsing problem for context-free grammars 4. Top-down parsing 5. Transformations on grammars 1. Yacc: a Language for Specifying Syntax-Directed Translators • Yacc is popular language, first implemented by Steve Johnson of Bell Labs, for implementing syntax-directed translators. • Bison is a gnu version of Yacc, upward compatible with the original Yacc, written by Charles Donnelly and Richard Stallman. Many other versions of Yacc are also available. • The original Yacc used C for semantic actions. Yacc has been rewritten for many other languages including Java, ML, OCaml, and Python. • Yacc specifications □ A Yacc program has three parts: translation rules supporting C-routines The declarations part may be empty and the last part (%% followed by the supporting C-routines) may be omitted. Here is a Yacc program for a desk calculator that adds and multiplies numbers. (From ALSU, p. 292, Fig. 4.59, a more advanced desk calculator.) #include <ctype.h> #include <stdio.h> #define YYSTYPE double %token NUMBER %left '+' %left '*' lines : lines expr '\n' { printf("%g\n", $2); } | lines '\n' | /* empty */ expr : expr '+' expr { $$ = $1 + $3; } | expr '*' expr { $$ = $1 * $3; } | '(' expr ')' { $$ = $2; } | NUMBER /* the lexical analyzer; returns <token-name, yylval> */ int yylex() { int c; while ((c = getchar()) == ' '); if ((c == '.') || (isdigit(c))) { ungetc(c, stdin); scanf("%lf", &yylval); return NUMBER; return c; On Linux, we can make a desk calculator from this Yacc program as follows: 1. Put the yacc program in a file, say desk.y. 2. Invoke yacc desk.y to create the yacc output file y.tab.c. 3. Compile this output file with a C compiler by typing gcc y.tab.c -ly to get a.out. (The library -ly contains the Yacc parsing program.) 4. a.out is the desk calculator. Try it! 2. The Pumping Lemma for Context-Free Languages □ The pumping lemma for context-free languages can be used to show certain languages are not context free. □ The pumping lemma: If L is a context-free language, then there exists a constant n such that if z is any string in L of length n or more, then z can be written as uvwxy subject to the following conditions: 1. The length of vwx is less than or equal to n. 2. The length of vx is one or more. (That is, not both of v and x can be empty.) 3. For all i ≥ 0, uv^iwx^iy is in L. □ A typical proof using the pumping lemma to show a language L is not context free proceeds by assuming L is context free, and then finding a long string in L which, when pumped, yields a string not in L, thereby deriving a contradiction. □ Examples of non-context-free languages: ☆ {a^nb^nc^n | n ≥ 0 } ☆ {ww | w is in (a|b)* } ☆ {a^mb^na^mb^n | n ≥ 0 } 3. The Parsing Problem for Context-Free Grammars □ The parsing problem for context-free grammars is given a CFG G and an input string w to construct all parse trees for w according to G, if w is in L(G). □ The Cocke-Younger-Kasami algorithm is a dynamic programming algorithm that given a Chomsky Normal Form grammar G and an input string w will create in O(|w|^3) time a table from which all parse trees for w according to G can be constructed. □ For compiler applications two styles of parsing algorithms are common: top-down parsing and bottom-up parsing. 4. 
Top-Down Parsing □ Top-down parsing consists of trying to construct a parse tree for an input string starting from the root and creating the nodes of the parse tree in preorder. □ Equivalently, top-down parsing consists of trying to find a leftmost derivation for the input string. □ Consider grammar G: S → + S S | * S S | a Leftmost derivation for + a * a a: S ⇒ + S S ⇒ + a S ⇒ + a * S S ⇒ + a * a S ⇒ + a * a a Recursive-descent parsing □ Recursive-descent parsing is a top-down method of syntax analysis in which a set of recursive procedures is used to process the input string. □ One procedure is associated with each nonterminal of the grammar. See Fig. 4.13, p. 219. □ The sequence of successful procedure calls defines the parse tree. Nonrecursive predictive parsing □ A nonrecursive predictive parser uses an explicit stack. □ See Fig. 4.19, p. 227, for a model of table-driven predictive parser. □ Parsing table for G: Input Symbol Nonterminal a + * $ S S → a S → +SS S → *SS Moves made by this predictive parser on input +a*aa. (The top of the stack is to the left.) Stack Input Output S$ +a*aa$ +SS$ +a*aa$ S → +SS SS$ a*aa$ aS$ a*aa$ S → a S$ *aa$ *SS$ *aa$ S → *SS SS$ aa$ aS$ aa$ S → a S$ a$ a$ a$ S → a $ $ Note that these moves trace out a leftmost derivation for the input. 5. Transformations on Grammars □ Two common language-preserving transformations are often applied to grammars to try to make them parsable by top-down methods. These are eliminating left recursion and left factoring. □ Eliminating left recursion: expr → expr + term | term expr → term expr' expr' → + term expr' | ε Left factoring: stmt → if ( expr ) stmt else stmt | if (expr) stmt | other stmt → if ( expr ) stmt stmt' | other stmt' → else stmt | ε 6. Practice Problems 1. Write down a CFG for regular expressions over the alphabet {a, b}. Show a parse tree for the regular expression a | b*a. 2. Using the nonterminals stmt and expr, design context-free grammar productions to model 1. C while-statements 2. C for-statements 3. C do-while statements 3. Consider grammar G: S → S S + | S S * | a 1. What language does this grammar generate? 2. Eliminate the left recursion from this grammar. 4. Use the pumping lemma to show that {a^nb^nc^n | n ≥ 0 } is not context free. 7. Reading
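As an implementation note on Section 4 (an addition, not part of the lecture handout), here is a small Python sketch of the table-driven predictive parser for the grammar S -> + S S | * S S | a. Running it on the input +a*aa prints the same sequence of productions as the move table above.

TABLE = {                       # parsing table: (nonterminal, lookahead) -> production body
    ("S", "a"): ["a"],
    ("S", "+"): ["+", "S", "S"],
    ("S", "*"): ["*", "S", "S"],
}

def parse(tokens):
    tokens = list(tokens) + ["$"]        # end marker
    stack = ["$", "S"]                   # bottom marker, then the start symbol
    pos = 0
    while stack:
        top = stack.pop()
        look = tokens[pos]
        if top == "$" and look == "$":
            print("accept")
            return True
        if top in ("a", "+", "*", "$"):  # terminal on top of stack: must match the input
            if top != look:
                print("reject: expected", top, "but saw", look)
                return False
            pos += 1
        else:                            # nonterminal on top: expand using the parsing table
            body = TABLE.get((top, look))
            if body is None:
                print("reject: no table entry for", (top, look))
                return False
            print(top, "->", " ".join(body))
            stack.extend(reversed(body)) # push the body so its left end is on top
    return False

parse("+a*aa")   # prints S -> + S S, S -> a, S -> * S S, S -> a, S -> a, accept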
{"url":"http://www1.cs.columbia.edu/~aho/cs4115/lectures/13-02-13.htm","timestamp":"2014-04-18T18:26:22Z","content_type":null,"content_length":"9746","record_id":"<urn:uuid:177719bf-e092-4d4f-a21f-d20ea4458092>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving Limit of a Sequence using Epsilon N November 5th 2012, 06:17 PM #1 May 2012 United States Proving Limit of a Sequence using Epsilon N I am having a lot of trouble with the concept of proving the limit of a sequence using epsilon n. As an example, I am working on trying to prove that the limit of the sequnce $\frac{n+1}{2n}$ is $\frac{1}{2}$. I understand that the first step is to set $\frac{n+1}{2n} - \frac{1}{2} < \epsilon$. I then solve for n and get $n>\frac{1}{2\epsilon}$. So I think I understand this tells me that for any $\epsilon$, picking an $n>\frac{1}{2\epsilon}$ will give a value less than $\epsilon$. I'm just not sure where to go next with the proof. I've arbitraily tried different values than $\ frac{1}{2}$ for the limit and every time I can solve $\frac{n+1}{2n} -$ arbitrary number for $n >$ something with $\epsilon$ in it. So I must be doing something wrong as it seems using my method I can prove the limit to be any arbitrary number. In searching online and other forums it seems there is maybe a last step where I take my $n>\frac{1}{2\epsilon}$ and somehow work in the "other direction" to prove that indeed any $n>\frac{1}{2\epsilon}$ works, but I've been unable to follow. Thanks for any help. Re: Proving Limit of a Sequence using Epsilon N As another note, the way the concept was explained in class was that the limit L exists if I can always beat you at a game where you pick an $\epsilon$ then I pick an N. I win if given my N for every n > N, $|a_{n} - L| < \epsilon$ Re: Proving Limit of a Sequence using Epsilon N To prove the limit of the sequence \displaystyle \begin{align*} \frac{n+1}{2n} \end{align*} is \displaystyle \begin{align*} \frac{1}{2} \end{align*}, you need to prove \displaystyle \begin {align*} \lim_{n \to \infty}\frac{n +1}{2n} = \frac{1}{2} \end{align*} by showing \displaystyle \begin{align*} n > M \implies \left| \frac{n + 1}{2n} - \frac{1}{2} \right| < \epsilon \end{align*} . Working on the second inequality we have \displaystyle \begin{align*} \left| \frac{n + 1}{2n} - \frac{1}{2} \right| &< \epsilon \\ \left| \frac{n + 1 - n}{2n} \right| &< \epsilon \\ \left| \frac{1}{2n} \right| &< \epsilon \\ \frac{1}{2| n|} &< \epsilon \\ 2|n| &> \frac{1}{\epsilon} \\ |n| &> \frac{1}{2\epsilon} \end{align*} So choose \displaystyle \begin{align*} M = \frac{1}{2\epsilon} \end{align*} and reverse each step and you will have your proof Re: Proving Limit of a Sequence using Epsilon N Thanks for the quick reply! I fully understand everything right up to the last part. So choose \displaystyle \begin{align*} M = \frac{1}{2\epsilon} \end{align*} and reverse each step and you will have your proof This is the part that trips me up. so when I say $M = \frac{1}{2\epsilon}$ I just plug that straight back into the right side of $n > \frac{1}{2\epsilon}$ and go back to the beginning? I guess I'm have trouble understanding how that proves $\frac{1}{2}$ is the limit. For example, if I instead begin with \displaystyle \begin{align*} n > M \implies \left| \frac{n + 1}{2n} - \frac{1}{4} \ right| < \epsilon \end{align*} I can get down to $n > \frac{2}{4\epsilon -1}$ and also reverse my steps, but $\frac{1}{4}$ isn't the limit. 
Re: Proving Limit of a Sequence using Epsilon N \displaystyle \begin{align*} n &> \frac{1}{2\epsilon} \\ |n| &> \frac{1}{2\epsilon} \textrm{ since } n > 0 \implies n = |n| \\ \frac{1}{|n|} &< 2\epsilon \\ \frac{1}{2|n|} &< \epsilon \\ \left| \ frac{1}{2n} \right| &< \epsilon \\ \left| \frac{n + 1 - n}{2n} \right| &< \epsilon \\ \left| \frac{n + 1}{2n} - \frac{n}{2n} \right| &< \epsilon \\ \left| \frac{n + 1}{2n} - \frac{1}{2} \right| & < \epsilon \end{align*} The reason it works is because you need to show there exists some value \displaystyle \begin{align*} M \end{align*} such that no matter how big it is, your function value is always within a band of width \displaystyle \begin{align*} \epsilon \end{align*} from the limit value. In symbols, that means \displaystyle \begin{align*} \lim_{n \to \infty} f(n) = L \end{align*} if \displaystyle \ begin{align*} x > M \implies \left| f(n) - L \right| < \epsilon \end{align*}. Re: Proving Limit of a Sequence using Epsilon N Ok, great, that clears it up for me. Thanks for all the help! November 5th 2012, 06:29 PM #2 May 2012 United States November 5th 2012, 07:02 PM #3 November 5th 2012, 08:50 PM #4 May 2012 United States November 5th 2012, 09:19 PM #5 November 6th 2012, 09:17 AM #6 May 2012 United States
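As a quick numerical sanity check of the argument above (my own addition, not from the thread): for a_n = (n + 1)/(2n) we have |a_n - 1/2| = 1/(2n), so any integer n > M = 1/(2*epsilon) keeps the term within epsilon of 1/2.

from fractions import Fraction
import math

def a(n):                        # the sequence (n + 1) / (2n)
    return Fraction(n + 1, 2 * n)

for eps in (Fraction(1, 10), Fraction(1, 100), Fraction(1, 10000)):
    M = 1 / (2 * eps)            # the bound chosen in the proof
    n = math.floor(M) + 1        # the smallest integer beyond M
    gap = abs(a(n) - Fraction(1, 2))
    assert gap < eps             # |a_n - 1/2| = 1/(2n) < eps whenever n > M
    print(f"eps={float(eps)}: n={n}, |a_n - 1/2|={float(gap)}")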
{"url":"http://mathhelpforum.com/calculus/206837-proving-limit-sequence-using-epsilon-n.html","timestamp":"2014-04-21T16:39:38Z","content_type":null,"content_length":"55019","record_id":"<urn:uuid:24df16ef-6d84-4288-b160-1050c865fea5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
von zur Gathen and Results 1 - 10 of 38 , 2009 "... Not to be reproduced or distributed without the authors ’ permissioniiTo our wives — Silvia and RavitivAbout this book Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990 alone could fill a book: these incl ..." Cited by 151 (2 self) Add to MetaCart Not to be reproduced or distributed without the authors ’ permissioniiTo our wives — Silvia and RavitivAbout this book Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990 alone could fill a book: these include new probabilistic definitions of classical complexity classes (IP = PSPACE and the PCP Theorems) and their implications for the field of approximation algorithms; Shor’s algorithm to factor integers using a quantum computer; an understanding of why current approaches to the famous P versus NP will not be successful; a theory of derandomization and pseudorandomness based upon computational hardness; and beautiful constructions of pseudorandom objects such as extractors and expanders. This book aims to describe such recent achievements of complexity theory in the context of more classical results. It is intended to both serve as a textbook and as a reference for self-study. This means it must simultaneously cater to many audiences, and it is carefully designed with that goal. We assume essentially no computational background and very minimal mathematical background, which we review in Appendix A. We have also provided a web site for this book at "... We prove that monotone circuits computing the perfect matching function on n-vertex graphs require\Omega\Gamma n) depth. This implies an exponential gap between the depth of monotone and nonmonotone circuits. ..." Cited by 77 (8 self) Add to MetaCart We prove that monotone circuits computing the perfect matching function on n-vertex graphs require\Omega\Gamma n) depth. This implies an exponential gap between the depth of monotone and nonmonotone - Directions and Recent Results in Algorithms and Complexity , 1976 "... The binary Euclidean algorithm is a variant of the classical Euclidean algorithm. It avoids multiplications and divisions, except by powers of two, so is potentially faster than the classical algorithm on a binary machine. We describe the binary algorithm and consider its average case behaviour. In ..." Cited by 28 (2 self) Add to MetaCart The binary Euclidean algorithm is a variant of the classical Euclidean algorithm. It avoids multiplications and divisions, except by powers of two, so is potentially faster than the classical algorithm on a binary machine. We describe the binary algorithm and consider its average case behaviour. In particular, we correct some errors in the literature, discuss some recent results of Vallée, and describe a numerical computation which supports a conjecture of Vallée. 1 - ACM Transactions on CHI , 2004 "... It is usually very hard, both for designers and users, to reason reliably about user interfaces. This article shows that ‘push button ’ and ‘point and click ’ user interfaces are algebraic structures. Users effectively do algebra when they interact, and therefore we can be precise about some importa ..." Cited by 22 (11 self) Add to MetaCart It is usually very hard, both for designers and users, to reason reliably about user interfaces. 
This article shows that ‘push button ’ and ‘point and click ’ user interfaces are algebraic structures. Users effectively do algebra when they interact, and therefore we can be precise about some important design issues and issues of usability. Matrix algebra, in particular, is useful for explicit calculation and for proof of various user interface properties. With matrix algebra, we are able to undertake with ease unusally thorough reviews of real user interfaces: this article examines a mobile phone, a handheld calculator and a digital multimeter as case studies, and draws general conclusions about the approach and its relevance to design. - London Mathematical Society Lecture Note Series 336 , 2006 "... v Table of basic properties ix ..." , 1992 "... For any fixed dimension d, the linear programming problem with n inequality constraints can be solved on a probabilistic CRCW PRAM with O(n) processors almost surely in constant time. The algorithm always finds the correct solution. With nd/log² d processors, the probability that the algorithm wi ..." Cited by 17 (1 self) Add to MetaCart For any fixed dimension d, the linear programming problem with n inequality constraints can be solved on a probabilistic CRCW PRAM with O(n) processors almost surely in constant time. The algorithm always finds the correct solution. With nd/log² d processors, the probability that the algorithm will not finish within O(d² log² d) time tends to zero exponentially with n. - In Proc. 20th Annu. ACM Symp. Comput. Geom , 2004 "... The Bentley-Ottmann sweep-line method can be used to compute the arrangement of planar curves provided a number of geometric primitives operating on the curves are available. We discuss the mathematics of the primitives for planar algebraic curves of degree three or less and derive efficient realiza ..." Cited by 17 (6 self) Add to MetaCart The Bentley-Ottmann sweep-line method can be used to compute the arrangement of planar curves provided a number of geometric primitives operating on the curves are available. We discuss the mathematics of the primitives for planar algebraic curves of degree three or less and derive efficient realizations. As a result, we obtain a complete, exact, and efficient algorithm for computing arrangements of cubic curves. Conics and cubic splines are special cases of cubic curves. The algorithm is complete in that it handles all possible degeneracies including singularities. It is exact in that it provides the mathematically correct result. It is efficient in that it can handle hundreds of curves with a quarter million of segments in the final arrangement. , 2008 "... We give an algorithm for modular composition of degree n univariate polynomials over a finite field Fq requiring n 1+o(1) log 1+o(1) q bit operations; this had earlier been achieved in characteristic n o(1) by Umans (2008). As an application, we obtain a randomized algorithm for factoring degree n p ..." Cited by 16 (1 self) Add to MetaCart We give an algorithm for modular composition of degree n univariate polynomials over a finite field Fq requiring n 1+o(1) log 1+o(1) q bit operations; this had earlier been achieved in characteristic n o(1) by Umans (2008). As an application, we obtain a randomized algorithm for factoring degree n polynomials over Fq requiring (n 1.5+o(1) + n 1+o(1) log q) log 1+o(1) q bit operations, improving upon the methods of von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998). 
Our results also imply algorithms for irreducibility testing and computing minimal polynomials whose running times are best-possible, up to lower order terms. As in Umans (2008), we reduce modular composition to certain instances of multipoint evaluation of multivariate polynomials. We then give an algorithm that solves this problem optimally (up to lower order terms), in arbitrary characteristic. The main idea is to lift to characteristic 0, apply a small number of rounds of multimodular reduction, and finish with a small number of multidimensional FFTs. The final evaluations are then reconstructed using the Chinese Remainder Theorem. As a bonus, we obtain a very efficient data structure supporting polynomial evaluation queries, which is of independent interest. Our algorithm uses techniques which are commonly employed in practice, so it may be competitive for real problem sizes. This contrasts with previous asymptotically fast methods relying on fast matrix multiplication. Supported by NSF DMS-0545904 (CAREER) and a Sloan Research Fellowship. - in Proceedings of the Theory of Cryptography Conference, ser. Lecture Notes in Computer Science "... Abstract. In recent years there has been massive progress in the development of technologies for storing and processing of data. If statistical analysis could be applied to such data when it is distributed between several organisations, there could be huge benefits. Unfortunately, in many cases, for ..." Cited by 16 (0 self) Add to MetaCart Abstract. In recent years there has been massive progress in the development of technologies for storing and processing of data. If statistical analysis could be applied to such data when it is distributed between several organisations, there could be huge benefits. Unfortunately, in many cases, for legal or commercial reasons, this is not possible. The idea of using the theory of multi-party computation to analyse efficient algorithms for privacy preserving data-mining was proposed by Pinkas and Lindell. The point is that algorithms developed in this way can be used to overcome the apparent impasse described above: the owners of data can, in effect, pool their data while ensuring that privacy is maintained. Motivated by this, we describe how to securely compute the mean of an attribute value in a database that is shared between two parties. We also demonstrate that existing solutions in the literature that could be used to do this leak information, therefore underlining the importance of applying rigorous theoretical analysis rather than settling for ad hoc techniques. 1 - THE PROCEEDINGS OF ESA 2003 , 2003 "... It is shown that the optimum of an integer program in fixed dimension with a fixed number of constraints can be computed with O(s) basic arithmetic operations, where s is the binary encoding length of the input. This improves on the quadratic running time of previous algorithms, which are based o ..." Cited by 13 (1 self) Add to MetaCart It is shown that the optimum of an integer program in fixed dimension with a fixed number of constraints can be computed with O(s) basic arithmetic operations, where s is the binary encoding length of the input. This improves on the quadratic running time of previous algorithms, which are based on Lenstra's algorithm and binary search. It follows that
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1146263","timestamp":"2014-04-18T08:26:12Z","content_type":null,"content_length":"36459","record_id":"<urn:uuid:fbc46605-6f80-4ba1-ab5a-4098946b0447>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Draw a triangle with two sides January 6th 2007, 08:09 PM Draw a triangle with two sides I know.....this is going to sound retarded....but stay with me. At the beginning of the year, my Alg II teacher told us that half way through the year, he would ask us a hard question worth bonus points. He told us the problem, and won't tell us what it is until we have to solve it (less then a week). I am having a hard time remembering it, but I think it was "draw a triangle with only two sides" or something along those lines. It had something to do with drawing a figure that one would think is impossible to draw with x number of sides. I just don't remember the figure or number of sides. "Draw a triangle with four sides" for example. I have asked students of his from last year, and they told me that he DID prove that the problem could be solved....but couldn't recall the details. If anybody has any clue what I am talking about, PLEASE help. January 6th 2007, 08:36 PM I know.....this is going to sound retarded....but stay with me. At the beginning of the year, my Alg II teacher told us that half way through the year, he would ask us a hard question worth bonus points. He told us the problem, and won't tell us what it is until we have to solve it (less then a week). I am having a hard time remembering it, but I think it was "draw a triangle with only two sides" or something along those lines. It had something to do with drawing a figure that one would think is impossible to draw with x number of sides. I just don't remember the figure or number of sides. "Draw a triangle with four sides" for example. I have asked students of his from last year, and they told me that he DID prove that the problem could be solved....but couldn't recall the details. If anybody has any clue what I am talking about, PLEASE help. If you think it is a triangle that is not 3-sided, or a triangle not with 3 sides, then give your Teacher the answer, "Impossible!" If your Teacher ask why, then say, "By definition." A triangle is a closed polygon with 3 sides. So if the figure the Teacher wants you to draw is a triangle that is not 3-sided, then your Teacher's wish is retarded. January 6th 2007, 08:51 PM Thats what I was thinking too. But it is a trick question of some sort....and there is a solution. It ay not be a triangle...it might be a circle or some other shape....I don't remember. This guy is far from retarded, and he isn't making it up. I was thinking that this was some popular trick or something that you learn is college. Anybody ever hear of anything like this? Thanks Again. January 7th 2007, 04:05 AM Thats what I was thinking too. But it is a trick question of some sort....and there is a solution. It ay not be a triangle...it might be a circle or some other shape....I don't remember. This guy is far from retarded, and he isn't making it up. I was thinking that this was some popular trick or something that you learn is college. Anybody ever hear of anything like this? Thanks Again. The only possibility I can think of is he's cheating and not using plane geometry. One example would be to consider the surface of a sphere. Draw half a circumference (call this the "equator" for reference). On one end of the equator line, draw another half circumference at right angles that crosses one of the poles and meets the other side of the line on the equator. The resulting figure LOOKS something like a triangle, but only has two sides. 
It doesn't bear a great resemblance to a triangle, but perhaps your teacher was thinking of something else along similar lines. (Pardon the pun!)
January 7th 2007, 08:46 AM
I don't like this, but it might be what your teacher wanted.
[ASCII sketch of a figure with its three angles labelled a, b and c; the drawing did not survive the text extraction]
Notice it is a triangle because it has 3 angles, a, b, and c
I still don't like it though
{"url":"http://mathhelpforum.com/geometry/9631-draw-triangle-two-sides-print.html","timestamp":"2014-04-19T22:39:55Z","content_type":null,"content_length":"9229","record_id":"<urn:uuid:ce154062-fccf-4440-8c32-d3c26b353dac>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring Length with Everyday Objects If you’ve ever estimated the length of a room by using your own two feet, then you understand the concept of measuring length with everyday objects. Your children will enjoy these kindergarten and first grade math skills as they are introduced to the concept of measuring. What is it? Measuring length with everyday objects is sometimes called measuring using non-standard units. This type of measurement is the first step in measuring the length of objects. Instead of using a ruler, children typically use objects in their everyday lives such as paper clips or blocks. This develops the concept of measuring the length of an object, without worrying about using a ruler or learning about various units of measure. • Select a tool. Some ideas for measuring tools that your child could use to measure objects around your house are: paper clips, blocks, pencils, pennies, hand spans, straws, and clothespins. Children typically love these types of math games or activities. They have fun without necessarily realizing they are being taught a new concept. • Discuss what you’re doing. Talk about the word “length” with your child, explaining that it means how long something is. Tell him he’s going to have a chance to find out how long some things are around the house – his choice! • Find an appropriate object to measure. Encourage him to choose objects to measure that are horizontal or flat, not vertical such as the height of a door. • Get your tools. Gather some measuring tools and let your child explore, placing the measuring tools down alongside an object, such as a book. • Let him guess. Once your child has had some experience with measuring using various household tools, ask him to guess how many. For example, ask him how many paper clips he thinks it will take to measure the length of his shoe. Then have him actually measure it with the chosen tool, and talk about the difference between his guess (“estimate”) and the real measure. You can decide if you’d like to introduce the word “estimate” at this time. • Chart your results. You might make a chart so that your child can record his measurements and estimates. You can use the following headings or make up your own: What to Watch For: • Be sure that your child places the first measuring tool (like a paper clip) at the edge of the object being measured (such as a book). • Be sure there aren’t any spaces between the measuring tools • Watch that your child chooses appropriate measuring tools depending on what he is measuring. For instance, if he is measuring the length of a pencil, an appropriate measuring tool might be paper clips. Something like straws, on the other hand, would be too long and would make it difficult to measure properly. • Remind your child that it is important that he chooses all the same measuring tools to measure an object (not a combination of tools). • As always, playing math games at home is a great way to reinforce math skills learned in school. • Have questions or ideas about this story? • Need help or advice about your child’s learning? • Have ideas for future Parent Homework Help stories? Go to “Leave a Reply” at the bottom of this page. I’d love to help! { 1 trackback } November 29, 2011 at 10:03 AM { 0 comments… add one now } Leave a Comment
{"url":"http://www.parent-homework-help.com/2011/03/29/measuring-length-with-everyday-objects/","timestamp":"2014-04-17T21:27:54Z","content_type":null,"content_length":"29173","record_id":"<urn:uuid:c567acba-101b-44a8-a7de-80cce2df84a2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
negative mass propulsion
A hypothetical propulsion system based on the juxtaposition of ordinary positive mass and negative mass. In theory, such a system would be able to provide continuous thrust, without violating the principles of conservation of momentum or energy. It would require no input energy and no reaction mass. Workability of the scheme, however, hinges on the existence of negative mass and also on negative mass having negative inertia. The combined interactions of the two types of mass would then result in a sustained acceleration of both masses in the same direction. The concept of negative mass was first considered in depth by Hermann Bondi^1 in 1957 and revisited in the context of interstellar spaceflight by Winterberg^2 and Robert Forward^3 in the 1980s.
1. Bondi, H. "Negative Mass in General Relativity," Reviews of Modern Physics, Vol. 29, No. 3, July 1957, pp. 423-428.
2. Winterberg, F. "On Negative Mass Propulsion," International Astronautical Federation, Paper 89-668, 40th Congress of the International Astronautical Federation, Malaga, Spain, Oct. 1989.
3. Forward, R. L. "Negative Matter Propulsion," Journal of Propulsion and Power (AIAA), Vol. 6, No. 1, Jan.-Feb. 1990, pp. 28-37.
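The "sustained acceleration of both masses in the same direction" is easy to see in a toy calculation. The sketch below is my own illustration, not taken from the entry or its references: it naively extends Newtonian gravity to a body whose gravitational and inertial mass are both negative (the assumption discussed above) and pairs it with an ordinary body of equal magnitude. The separation stays constant while both bodies pick up speed in the same direction.

# Toy 1-D sketch of the negative-mass runaway pair (speculative physics, illustration only).
G, m, d, dt = 1.0, 1.0, 1.0, 1e-3

x_pos, v_pos = 0.0, 0.0      # ordinary +m body
x_neg, v_neg = -d, 0.0       # -m body placed behind it

def accel_on(xi, xj, mj):
    # Acceleration of a body at xi due to a body of mass mj at xj:
    # a_i = G * m_j * (x_j - x_i) / |x_j - x_i|**3; the attracted body's own
    # mass cancels out, so the formula holds whatever sign that mass has.
    r = xj - xi
    return G * mj * r / abs(r) ** 3

for step in range(5000):
    a_pos = accel_on(x_pos, x_neg, -m)   # the +m body is pushed away from the -m body
    a_neg = accel_on(x_neg, x_pos, +m)   # the -m body nevertheless accelerates toward the
                                         # +m body: negative inertia flips the repulsive force
    v_pos += a_pos * dt; x_pos += v_pos * dt
    v_neg += a_neg * dt; x_neg += v_neg * dt

print(f"separation ~ {x_pos - x_neg:.4f} (started at {d})")
print(f"velocities ~ {v_pos:.3f} and {v_neg:.3f}, both in the same direction")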
{"url":"http://www.daviddarling.info/encyclopedia/N/negative_mass_propulsion.html","timestamp":"2014-04-16T10:14:08Z","content_type":null,"content_length":"7451","record_id":"<urn:uuid:ce4d050b-a696-4ecc-a27c-0ec24c56888f>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Word Problem - Simultaneous Equation?
July 9th 2009, 05:16 AM #1
Jul 2009
Word Problem - Simultaneous Equation?
Hi guys, I'm struggling to set up this problem into numbers:
If a teacher can place her students four to a bench, there will be three students on the final bench. But, if five children are placed on each bench, there will be four students on the last bench. What is the smallest number of children the class could have?
I'm thinking x = number of benches and y = different number of benches, so
Not really sure where to go from there? Any light shed would be greatly appreciated!
With 5 children on a bench + 4 left over, the number of students must have a 4 or 9 as the last digit. But with 4 children on a bench + 3 left over, it's impossible for the last digit to be 4, so it must be 9. The smallest number that ends in a 9 that fulfills the conditions is 19.
Hello, BoulderBrow!
Another approach . . .
If a teacher places her students 4 to a bench, there will be 3 students on the final bench. But if 5 children are placed on each bench, there will be 4 students on the last bench. What is the smallest number of children the class could have?
I'm thinking $x$ = number of benches and $y$ = different number of benches. So: . $4x+3 \:=\:5y+4$ . . . . Good!
Solve for $y\!:\;\;y \:=\:\frac{4x-1}{5}$
Since $y$ is a positive integer, $4x-1$ must be divisible by 5.
Trying $x \:=\:1,2,3,\hdots$ we find the first occurrence is: $x = 4.$ . . And hence: $y = 3$
Therefore, the number of students is: . $4(4)+3 \:=\:5(3)+4 \:=\:\boxed{19}$
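(A quick computational check of the answer above; this snippet is an addition, not part of the thread.) The class size must leave remainder 3 on division by 4 and remainder 4 on division by 5, and a brute-force search confirms that 19 is the smallest such number.

smallest = next(n for n in range(1, 200) if n % 4 == 3 and n % 5 == 4)
print(smallest)        # 19, since 4*4 + 3 = 5*3 + 4 = 19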
{"url":"http://mathhelpforum.com/algebra/94713-word-problem-simultaneous-equation.html","timestamp":"2014-04-19T12:36:14Z","content_type":null,"content_length":"38495","record_id":"<urn:uuid:149cbbea-4e23-4b46-a9bb-9dca099160c4>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculations with bounds
March 13th 2012, 08:26 AM #1
Junior Member
Mar 2012
Andy Mann is building a garden fence out of planks. The planks are 14 cm wide to the nearest cm. He needs to make 64 m of fencing altogether, correct to the nearest metre. What is the smallest number of planks he should order to make sure he has enough to complete the fence?
Re: Calculations with bounds
He needs to make at least 63.5 m of fencing (this will be 64 m to the nearest metre). Each plank is at least 13.5 cm wide (this is 14 cm to the nearest cm). Can you finish?
Note: it is also possible to interpret the question as "he needs to be able to make 64.5 m of fencing". The question is ambiguous, so you will just have to pick one interpretation.
Last edited by SpringFan25; March 13th 2012 at 12:28 PM.
Re: Calculations with bounds
Thank you. I get it now.
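Following the hint above, the arithmetic can be checked directly. This sketch is an addition (not part of the thread) and shows both readings of the ambiguous question, using the lower bound of 13.5 cm for the plank width.

import math

plank_width_m = 0.135        # a plank that is 14 cm to the nearest cm is at least 13.5 cm wide

for label, length_m in [("fencing of at least 63.5 m", 63.5),
                        ("fencing of up to 64.5 m", 64.5)]:
    planks = math.ceil(length_m / plank_width_m)
    print(label, "->", planks, "planks")
# 63.5 / 0.135 = 470.37...  ->  471 planks
# 64.5 / 0.135 = 477.77...  ->  478 planks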
{"url":"http://mathhelpforum.com/algebra/195911-calculations-bounds.html","timestamp":"2014-04-18T18:53:16Z","content_type":null,"content_length":"35719","record_id":"<urn:uuid:0aa1a148-e971-4303-ab98-58923107807b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Short certificates for tournaments
Noga Alon    Miklós Ruszinkó
February 22, 2002
An isomorphism certificate of a labeled tournament T is a labeled subdigraph of T which together with an unlabeled copy of T allows the errorless reconstruction of T. It is shown that any tournament on n vertices contains an isomorphism certificate with at most n log₂ n edges. This answers a question of Fishburn, Kim and Tetali. A score certificate of T is a labeled subdigraph of T which together with the score sequence of T allows its errorless reconstruction. It is shown that there is an absolute constant ε > 0 so that any tournament on n vertices contains a score certificate with at most (1/2 - ε)n² edges.
1 Introduction
A tournament is an oriented complete graph. An isomorphism certificate of a labeled tournament T is a labeled subdigraph D of T which together with an unlabeled copy of T allows the errorless reconstruction of T. More precisely, if V = {v1, . . . , vn} denotes the vertex set of T, then a subdigraph D of T is such a certificate if for any tournament T′ on V which is isomorphic to T and contains D, T′ is, in fact, identical to T. The size of the certificate D is the number of its edges, and D is a minimum certificate if no isomorphism certificate has a smaller size.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/325/2258446.html","timestamp":"2014-04-17T02:09:43Z","content_type":null,"content_length":"8364","record_id":"<urn:uuid:379142cb-bd28-4d50-953a-b6502a674ab5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Force and Motion An Illustrated Guide to Newton's Laws Publication Year: 2009 Isaac Newton developed three laws of motion that govern the everyday world. These laws are usually presented in purely mathematical forms, but Jason Zimba breaks with tradition and treats them visually. This unique approach allows students to appreciate the conceptual underpinnings of each law before moving on to qualitative descriptions of motion and, finally, to the equations and their solutions. Zimba has organized the book into seventeen brief and well-sequenced lessons, which focus on simple, manageable topics and delve into areas that often cause students to stumble. Each lesson is followed by a set of original problems that have been student-tested and refined over twenty years. Zimba illustrates the laws with more than 350 diagrams, an innovative presentation that offers a fresh way to teach the fundamentals in introductory physics, mechanics, and kinematics courses. Published by: The Johns Hopkins University Press Download PDF (62.8 KB) pp. vii-viii Newtonian mechanics, the subject of this book, is no longer considered a fundamental theory of nature.We live in a world of quantum theory and nanotechnology. But ask a physicist of today, even a quantum physicist, to explain how a curve ball works, and he or she will certainly use the methods and concepts of Newtonian mechanics to do so. Newtonian... 1 Graphing Relationships Download PDF (117.4 KB) pp. 1-5 Being comfortable with graphs is a basic requirement for citizenship in modern society. To become knowledgeable about issues such as global climate change, for example, we have to know how to interpret graphs like the one shown in Figure 1. This figure is a line graph. Line graphs 2 Rates of Change Download PDF (181.9 KB) pp. 7-31 Change is at the heart of physics. That’s because physics is the science that seeks to explain why anything happens at all! The physicist wants to know what makes any given situation develop as time progresses. In order to begin to answer this question, it is important to be able to understand change, and rates of change, intuitively as well as quantitatively... 3 Introducing Position and Velocity Download PDF (398.5 KB) pp. 33-42 The three basic concepts in the study of motion are position, velocity, and acceleration. We begin our study of motion in earnest with the first two of these concepts: position and velocity... 4 Vectors Download PDF (243.8 KB) pp. 43-73 In Chapter 3 I explained two kinds of vectors, position (r) and velocity (v). We’ll be working with vectors constantly in this book, so let’s take some time now to learn how to analyze them in 5 Position and Velocity, Revisited Download PDF (148.3 KB) pp. 75-86 Chapter 3 introduced a few basic ideas about position and velocity: • An object’s position vector r tells how far from the origin the object is and in what direction. • To draw the position vector, start with the tail at the origin and end with the tip at the location of the object. • An object’s velocity vector v tells how fast the object is moving and in what direction... 6 Introducing Acceleration Download PDF (181.7 KB) pp. 87-102 The word acceleration means something different in physics than it does in everyday speech. In everyday speech, acceleration means only “speeding up.” But in physics, speeding up is just one of the meanings of the word acceleration. Strangely enough, in physics acceleration can also mean slowing down! In common speech, slowing down is sometimes... 
7 Acceleration as a Rate of Change Download PDF (251.9 KB) pp. 103-125 Imagine cruising along in your car with a constant, unchanging velocity vector v. Your car is just humming along, following a straight, flat road, maintaining a constant speed. Bored by the monotony, at risk of falling asleep at the wheel, you decide that you want to change your velocity vector. What are some ways you could do it... 8 Focus on a-Perp Download PDF (171.7 KB) pp. 127-138 As the concept map in Chapter 7 (Figure 108) illustrates, there are two basic ways to view the acceleration vector. • As we discussed in Chapter 6, we can view the acceleration vector as an indication of whether an object is speeding up, slowing down, or turning. • Equally well, as we discussed... 9 Case Study: Straight-Line Motion Download PDF (307.7 KB) pp. 139-190 Sometimes there are simple situations in which the motion of an object is confined more or less to a straight line. Think of driving a car along a straight (and flat) road, stepping off of a diving board and falling straight down, working a yo-yo straight up and down, or riding on an escalator... 10 The Concept of Force Download PDF (195.5 KB) pp. 193-213 In Chapter 6 we saw that in physics the word acceleration means much more than simply “speeding up,” its meaning in everyday speech. Physics also uses other words from everyday speech: words like force, momentum, and energy... 11 Combining Forces That Act on the Same Target Download PDF (193.1 KB) pp. 215-229 At any given time, your body is subject to a number of noticeable influences. The earth pulls you down. Your chair pushes you up. The moon and sun tug on you slightly, the other celestial bodies negligibly so. Now a breeze ruffles your hair; the air is exerting a force on you as well. All of these forces can be combined, leading to a single net influence. This net... 12 “Newton‘s Little Law” Download PDF (184.3 KB) pp. 231-245 Congratulations! You’ve reached the very threshold of the System of the World. Already you have journeyed across a varied landscape of physics. You have studied rates of change, and also rates of change of rates of change. You have added and subtracted vectors. You have examined... 13 Newton’s Second Law Download PDF (232.5 KB) pp. 247-265 “What is mass?” is a profound question. Indeed, modern physical theories such as field theory and string theory are still trying to sort this out. In this book we’ll take the commonsense view Newton himself took: Mass is simply the amount of “material stuff” contained in an object... 14 Dynamics Download PDF (190.2 KB) pp. 267-282 Newton’s Second Law permits us to solve two basic kinds of problems. These problems are in a sense the reverse of one another: • Problem Type 1. By observing the motion of an object, deduce the nature of the forces at work on it. • Problem Type 2. By knowing something about the forces at work on an object, predict its... 15 Newton’s Third Law Download PDF (157.5 KB) pp. 283-290 In Chapter 10 I explained that every action is an interaction. You can’t touch without being touched. Newton’s Third Law formalizes this idea and makes it quantitative. Here it is: Newton’s Third Law : If object 1 exerts a force on object 2, object 2 must also exert an equal and opposite force on object... 16 Kinds of Force Download PDF (271.5 KB) pp. 291-316 Every force has a type, or kind. For example, the force that the sun exerts on the earth is of the gravitational kind. The force that a magnet exerts on a nail is of the magnetic kind. 
The force that lifts your hairs after you rub a balloon on your head is of the electrostatic kind. Table 5 lists... 17 Strategies for Applying Newton’s Laws Download PDF (183.2 KB) pp. 317-338 In this chapter I’ll suggest some organized strategies that can help you in applying Newton’s Laws to understand physical situations and solve challenging physics problems. But before we get started, let’s talk about attitude... E-ISBN-13: 9780801896323 E-ISBN-10: 0801896320 Print-ISBN-13: 9780801891601 Print-ISBN-10: 0801891604 Page Count: 440 Illustrations: 3 halftones, 349 line drawings Publication Year: 2009
{"url":"http://muse.jhu.edu/books/9780801896323","timestamp":"2014-04-19T22:14:12Z","content_type":null,"content_length":"54161","record_id":"<urn:uuid:13fd9cce-691a-4f41-b268-ca971513e703>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Welcome to the School of Mathematical Sciences Variational Methods and Optimal Control III Go to this course in the University Course Planner. Many problems of optimisation and control in the sciences and engineering seek to find the shape of a curve or surface satisfying certain conditions so as to maximise or minimise some quantity. For example, shape a yacht hull so as to minimise fluid drag. Variational methods involve an extension of calculus techniques to handle such problems. This course develops an appropriate methodology, illustrated by a variety of physical and engineering problems. Topics covered are: Classical Calculus of Variations problems such as calculation of the shape of geodesics, the Cantenary, and the Brachystochrone; the derivation and use of the simpler Euler-Lagrange equations for second-order (the Euler-Poisson equation), multiple dependent variables (Hamilton's equations), and multiple independent variables (minimal surfaces); constrained problems and problems with non-integral constraints; Euler's finite differences, Ritz's method and Kantorich's method; conservation laws and Noether's theorem; classification of extremals using second variation; optimal control via the Pontryagin Maximum Principle, and its applications to space-flight calculations. Year Semester Level Units Matthew Roughan Lecturer for this course Graduate attributes Linkage future This course is not recorded as prequisite for other courses. Recommended text
{"url":"http://www.maths.adelaide.edu.au/courses/6128","timestamp":"2014-04-19T14:33:17Z","content_type":null,"content_length":"13655","record_id":"<urn:uuid:865cd53b-a8f3-4776-84c6-fe20c3a05aad>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Fanwood Math Tutor Find a Fanwood Math Tutor ...College-bound Precalculus students (high school sophomores and juniors) should seriously think about taking the SAT II - Math Level II in June. Most competitive schools require it and it is also an excellent way to study for the school finals. I have much experience teaching SAT I Math and SAT II Math. 11 Subjects: including calculus, algebra 1, algebra 2, geometry ...The GRE test includes math up to algebra and geometry, both of which I have been tutoring for a long time. I can help with the material covered on the tests as well as test taking strategies which will result in a top score. I have successfully tutored many students who went on to be admitted to great colleges because they scored high on the entrance exams. 15 Subjects: including algebra 1, algebra 2, calculus, geometry ...My course work included three main courses - Evolution, Ecology, and Behavior; General Ecology; and Conservation Biology. I learned a lot from these classes, and even took on extra work in General Ecology for Honor's credit. I gained a lot of knowledge from these classes and it is one of the more fun ones to discuss, so I would enjoy teaching it to anyone interested in it. 15 Subjects: including algebra 2, chemistry, physics, trigonometry ...Speaking of my qualifications, I think you should know the details of my background which include attending Rutger's University and majoring in Biomathematics with a minor in Psychology. Math is one of those subjects that seems to follow you wherever you go in life doesn't it?? Mathemati... 14 Subjects: including prealgebra, algebra 1, algebra 2, calculus ...I am hoping to become a college professor one day. Teaching is my passion. I have worked with kids of all ages for the best six year, from one-on-one home tutoring to group tutoring in class rooms and after-school programs. 26 Subjects: including algebra 1, SAT math, trigonometry, statistics
{"url":"http://www.purplemath.com/fanwood_nj_math_tutors.php","timestamp":"2014-04-17T01:25:23Z","content_type":null,"content_length":"23766","record_id":"<urn:uuid:6ab36750-1166-4309-bd99-f9d0373fa399>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
[no subject] [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] [no subject] There might very well be an analytical method in the literature for power analysis for your situation; Roger Newson has already mentioned one approach. But have you considered using simulation as another? See www.stata.com/support/faqs/stat/power.html and www.stata-press.com/journals/sjabstracts/st0010.pdf for some contributions by Alan Feiveson on the topic. If an analytical method that fits the bill is readily available and facile, then Monte Carlo simulation won't be attractive, but the latter is always doable and often competitive with a trip to the library (and perhaps even with JSTOR) with respect to overall time spent. In your case, you can assess sensitivity toward assumptions regarding correlation, check out link functions, and explore population-average verus subject-specific modeling approaches, as well. See the do-file below for an example of link function exploration using GEE with your ANCOVA-like set up and assumptions for proportions and correlations; it displays power ("Mean" in -summarize-'s output) for the canonical link function ("A") and identity link function ("B") as it climbs through a series of three candidate treatment group sample sizes under the alternative hypothesis, and confirms the Type I error rate (displaying it in the same way as it does power) using the smallest of the probe per-group sample sizes under the null hypothesis. (The illustration do-file requires a user-written command, -ovbd-, that you can download from the SSC archive.) With a subject-specific model (-xtlogit , re- or -gllamm-), the simulation approach will demand increasing patience with increasing integration points. (It's not included below because the program as originally written used -xtlogit , re-, but you can often take advantage of -contract- and [fweight =] to speed things up.) Although I took a chance in the do-file below (especially with Model B), putting a limit on the number of iterations is a good idea when using an iterative analytical method in the simulation Note that the illustration do-file and output shown below shouldn't be construed as advocating a risk difference (identity link) model, but if the observed proportions are all in the neighborhood of 50% as you expect, then you might be lucky enough to get away with it. 
Joseph Coveney clear * set more off set seed `=date("2007-09-16", "YMD")' capture program drop binsimem program define binsimem, rclass version 10 syntax , n(integer) [Corr(real 0.5) NUll] tempname Control Experimental Correlation tempfile tmpfil0 if ("`null'" == "") { matrix define `Control' = (0.5, 0.55, 0.55) matrix define `Experimental' = (0.5, 0.45, 0.45) else { matrix define `Control' = (0.5, 0.5, 0.5) matrix define `Experimental' = (0.5, 0.5, 0.5) matrix define `Correlation' = J(3, 3, `corr') + /// I(3) * (1 - `corr') drop _all ovbd bas res0 res1, means(`Control') /// corr(`Correlation') n(`n') clear generate byte trt = 0 save `tmpfil0' drop _all ovbd bas res0 res1, means(`Experimental') /// corr(`Correlation') n(`n') clear generate byte trt = 1 append using `tmpfil0' erase `tmpfil0' generate int pid = _n reshape long res, i(pid) j(tim) generate byte bia = trt * bas generate byte tia = trt * tim xtgee res bas trt tim bia tia, i(pid) t(tim) /// family(binomial) link(logit) corr(exchangeable) test trt bia tia return scalar A_p = r(p) /* xtlogit res bas trt tim bia tia, i(pid) re /// intmethod(mvaghermite) intpoints(30) */ xtgee res bas trt tim bia tia, i(pid) t(tim) /// family(binomial) link(identity) corr(exchangeable) test trt bia tia return scalar B_p = r(p) forvalues n = 250(50)350 { quietly simulate A_p = r(A_p) /// B_p = r(B_p), reps(1000): binsimem , n(`n') display in smcl as text _newline(1) "n = " as result %3.0f `n' capture noisily assert !missing(A_p) & !missing(B_p) generate byte A_pos = (A_p < 0.05) generate byte B_pos = (B_p < 0.05) summarize *_pos local n 250 quietly simulate A_p = r(A_p) B_p = r(B_p), /// reps(1000): binsimem , n(`n') null display in smcl as text _newline(1) "NULL n = " as result %3.0f `n' capture noisily assert !missing(A_p) & !missing(B_p) generate byte A_pos = (A_p < 0.05) generate byte B_pos = (B_p < 0.05) summarize *_pos Edited results: n = 250 Variable | Obs Mean Std. Dev. Min Max A_pos | 1000 .754 .430894 0 1 B_pos | 1000 .768 .4223202 0 1 n = 300 Variable | Obs Mean Std. Dev. Min Max A_pos | 1000 .825 .3801572 0 1 B_pos | 1000 .833 .3731625 0 1 n = 350 Variable | Obs Mean Std. Dev. Min Max A_pos | 1000 .877 .3286016 0 1 B_pos | 1000 .884 .3203852 0 1 NULL n = 250 Variable | Obs Mean Std. Dev. Min Max A_pos | 1000 .046 .2095899 0 1 B_pos | 1000 .051 .2201078 0 1 . exit end of do-file * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-09/msg00491.html","timestamp":"2014-04-16T13:27:24Z","content_type":null,"content_length":"9496","record_id":"<urn:uuid:58f74dca-c066-4dee-8a69-f503ad6ad84f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
On the 8th Day of Christmas… A Le Creuset Giveaway! (winners announced) UPDATE: The winners of the Le Creuset giveaway are: Grand Prize: #3,637 – Rachael: “I follow you on Twitter” Runner-Up: #1,370 – shoshana blum: “I follow you on instagram” Congratulations, Rachael and Shoshana! Be sure to reply to the email you’ve been sent, and your new Le Creuset items will be shipped out to you! I am so excited for today’s giveaway because my Le Creuset Dutch oven (sometimes also referred to as a French oven) has become attached to my hip ever since I got it 5+ years ago. It was my biggest kitchen purchase to date at the time, and I had been salivating over one for ages. I finally broke down and bought one, and I’ve never looked back. I bought the 7¼-quart size in cherry red and promptly named it “Big Red”. I use it for everything and anything in the kitchen – big pots of soup, stew, chili and spaghetti sauce… braising meat… frying doughnuts… boiling bagels. You name it, and I’ve most likely used my Dutch oven to do it. Needless to say, I am thrilled to be giving away TWO Le Creuset items today! Read below for the details and how to enter to win… To enter to win, simply leave a comment on this post and answer the question: “What’s your favorite Christmas cookie?” You can receive up to FIVE additional entries to win by doing the following: 1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment on this post. 2. Follow @thebrowneyedbaker on Instagram. Come back and let me know you’ve followed in an additional comment on this post. 3. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment on this post. 4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment on this post. 5. Follow Brown Eyed Baker on Pinterest. Come back and let me know you became a fan in an additional comment on this post. Deadline: Thursday, December 13, 2012 at 11:59pm EST. Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 48 hours, another winner will be selected. Disclaimer: This giveaway is sponsored by Brown Eyed Baker and Le Creuset. 7,443 Responses to “On the 8th Day of Christmas… A Le Creuset Giveaway! (winners announced)” 4. I am subscribed via email. 7. I follow on pinterest as labellaluna. 8. I Follow @thebrowneyedbaker on Instagram as LABELLALUNA64 9. I like you! (on Facebook too). 10. I love a decorated sugar cookie 12. peanut blossoms are my favorite. 14. i follow you on instagram 17. I make almond cookies with chocolate icing–I love them! 19. I’m a fb fan (liz.nelson.531) 20. red velvet whoopie pies with peppermint cream cheese filling 22. I follow on pinterest (liznelson531) 29. My favorite Christmas cookie are gingerbread men 30. I subscribe to your email 31. I subscribe to your emails 32. I follow you on Instagram 33. I follow you on Instagram (kd6708) 35. I follow you on twitter (kdbabbles) 36. I like you on Facebook (Kate Donahue) 37. I follow you on Pinterest (kdbabbles) 38. Anything with mint and chocolate 42. I follow you on Pinterest 43. Peanut butter kiss cookies! 44. My grandmother’s date balls! 46. Pizzels & Snickerdoodles. Follow on FB & Pinterest 51. Favorite holiday cookies are cinnamon roll sugar cookies! 54. Followed you on Instagram 55. Followed you on Pinterest 57. I love molasses crinkles! 58. 
Christmas Sandwich Cremes are my favorite cookie. 59. I love any cookies with chocolate and peppermint together. Yum, yum 63. I follow you on pinterest. 66. I follow you on Pinterest. 68. Favorite cookie – like choosing favorite child! Let’s go with White chocolate chip and cranberry cookies. 69. Sugar Cookies with Royal Icing 75. chocolate chip with walnut cookies 80. I subscribe through e-mail 81. My favorite Christmas cookie is Springerle’s 82. I already follow you on Twitter 83. I’m already a fan on Facebook 84. I love Norwegian Butterknots!! There is over a pound of butter in one batch, but oh my they are delicious!! 85. I already follow you on Pinterest 89. pfeffernusse or gingerbread 93. Peanut butter blossoms and almo d creaents 95. Classic sugar cookies – love them! 96. i subscribe to your email 97. i follow you on Pinterest 99. My favorite Christmas cookies are Peanut Blossoms! They are so fun to make. It wouldn’t be Christmas without them 100. I subscribed to your posts via email!
{"url":"http://www.browneyedbaker.com/2012/12/12/on-the-8th-day-of-christmas-a-le-creuset-giveaway/comment-page-72/","timestamp":"2014-04-19T22:07:46Z","content_type":null,"content_length":"119413","record_id":"<urn:uuid:bd069196-3b34-4042-b1e5-eea6d3f55b55>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Fachbereich Physik
26 search hits

Calculation of Wannier-Bloch and Wannier-Stark states (1998)
Markus Glück, A. R. Kolovsky, Hans Jürgen Korsch, Nimrod Moseyev
The paper discusses the metastable states of a quantum particle in a periodic potential under a constant force (the model of a crystal electron in a homogeneous electric field), which are known as the Wannier-Stark ladder of resonances. An efficient procedure to find the positions and widths of resonances is suggested and illustrated by numerical calculation for a cosine potential.

Universal and non-universal behavior in Dirac spectra (1998)
M. E. Berbenni-Bitsch, M. Göckeler, S. Meyer, A. Schäfer, J. J. M. Verbaarschot, T. Wettig
We have computed ensembles of complete spectra of the staggered Dirac operator using four-dimensional SU(2) gauge fields, both in the quenched approximation and with dynamical fermions. To identify universal features in the Dirac spectrum, we compare the lattice data with predictions from chiral random matrix theory for the distribution of the low-lying eigenvalues. Good agreement is found up to some limiting energy, the so-called Thouless energy, above which random matrix theory no longer applies. We determine the dependence of the Thouless energy on the simulation parameters using the scalar susceptibility and the number variance.

Periodic Instantons and Quantum-Classical Transitions in Spin Systems (1998)
Jiu-Qing Liang, H. J. W. Müller-Kirsten, F. Zimmerschied, D.K. Park
The tunneling splitting of the energy levels of a ferromagnetic particle in the presence of an applied magnetic field - previously derived only for the ground state with the path integral method - is obtained in a simple way from Schrödinger theory. The origin of the factors entering the result is clearly understood, in particular the effect of the asymmetry of the barriers of the potential. The method should appeal particularly to experimentalists searching for evidence of macroscopic spin tunneling.

Quantum Tunneling and Phase Transitions in Spin Systems with an Applied Magnetic Field (1998)
S.-Y. Lee, H. J. W. Müller-Kirsten, F. Zimmerschied, D.K. Park
Transitions from classical to quantum behaviour in a spin system with two degenerate ground states separated by twin energy barriers which are asymmetric due to an applied magnetic field are investigated. It is shown that these transitions can be interpreted as first- or second-order phase transitions depending on the anisotropy and magnetic parameters defining the system in an effective Lagrangian description.

Correspondence, Poincaré Vacuum State and Greybody Factors in BTZ Black Holes (1998)
Nobuyoshi Ohta, H.J.W. Müller-Kirsten, Jian-Ge Zhou
The greybody factors in BTZ black holes are evaluated from 2D CFT in the spirit of AdS3/CFT correspondence. The initial state of black holes in the usual calculation of greybody factors by effective CFT is described as Poincaré vacuum state in 2D CFT. The normalization factor which cannot be fixed in the effective CFT without appealing to string theory is shown to be determined by the normalized bulk-to-boundary Green function. The relation among the greybody factors in different dimensional black holes is exhibited. Two kinds of (h, h̄) = (1, 1) operators which couple with the boundary value of massless scalar field are discussed.

Light-Cone Formulation of Super D2-Brane (1998)
Ruben Manvelyan, H. J. W. Müller-Kirsten, A. Melikyan, R. Mkrtchyan
The light-cone Hamiltonian approach is applied to the super D2-brane, and the equivalent area-preserving and U(1) gauge-invariant effective Lagrangian, which is quadratic in the U(1) gauge field, is derived. The latter is recognised to be that of the three-dimensional U(1) gauge theory, interacting with matter supermultiplets, in a special external induced supergravity metric and the gravitino field, depending on matter fields. The duality between this theory and 11d supermembrane theory is demonstrated in the light-cone gauge.

Dilute Instanton Gas of an O(3) Skyrme Model (1998)
H. J. W. Müller-Kirsten, S.-N. Tamariana, D.H. Tchrakian, Frank Zimmerschied
The pure-Skyrme limit of a scale-breaking Skyrmed O(3) sigma model in 1+1 dimensions is employed to study the effect of the Skyrme term on the semiclassical analysis of a field theory with instantons. The instantons of this model are self-dual and can be evaluated explicitly. They are also localised to an absolute scale, and their fluctuation action can be reduced to a scalar subsystem. This permits the explicit calculation of the fluctuation determinant and the shift in vacuum energy due to instantons. The model also illustrates the semiclassical quantisation of a Skyrmed field theory.

Observation of spatiotemporal self-focusing of spin waves in magnetic films (1998)
Martin Bauer, Oliver Büttner, Serguei Demokritov, Burkard Hillebrands, Y. Grimalsky, Yu Rapoport, A.N. Slavin
The first observation of spatiotemporal self-focusing of spin waves is reported. The experimental results are obtained for dipolar spin waves in yttrium-iron-garnet films by means of a newly developed space- and time-resolved Brillouin light scattering technique. They demonstrate self-focusing of a moving wave pulse in two spatial dimensions, and formation of localized two-dimensional wave packets, the collapse of which is stopped by dissipation. The experimental results are in good qualitative agreement with numerical simulations.

In-plane anomalies of the exchange bias field in Ni80Fe20/Fe50Mn50 bilayers on Cu(110) (1998)
S Riedling, Martin Bauer, C. Mathieu, Burkard Hillebrands, R. Jungblut, Kohlhepp J., A. Reinders
We report on the exchange bias effect as a function of the in-plane direction of the applied field in twofold symmetric, epitaxial Ni80Fe20/Fe50Mn50 bilayers grown on Cu(110) single-crystal substrates. An enhancement of the exchange bias field, H_eb, up to a factor of 2 is observed if the external field is nearly, but not fully aligned perpendicular to the symmetry direction of the exchange bias field. From the measurement of the exchange bias field as a function of the in-plane angle of the applied field, the unidirectional, uniaxial and fourfold anisotropy contributions are determined with high precision. The symmetry direction of the unidirectional anisotropy switches with increasing NiFe thickness from [110] to [001].

An exactly solvable model of the Calogero type for the icosahedral group (1998)
Oliver Haschke, Werner Rühl
We construct a quantum mechanical model of the Calogero type for the icosahedral group as the structural group. Exact solvability is proved and the spectrum is derived explicitly.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/15998/start/0/rows/10/doctypefq/preprint/yearfq/1998","timestamp":"2014-04-19T08:21:48Z","content_type":null,"content_length":"44650","record_id":"<urn:uuid:78be2996-4e5d-4b54-b3ee-74989cf853a9>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Click on a 'View Solution' below for other questions:

Sunny earns $39 per day by washing cars. He is paid an extra $5 per car for washing after 6 P.M. Which of the following equations represents his total earnings for the day, if y represents his earnings per day and x represents the number of cars washed after 6 P.M.? (View Solution)
Find the equation of a line whose slope is 3 and y-intercept is 2. (View Solution)
Select the equation of the line whose slope is -2 and y-intercept is -5/6. (View Solution)
Each worker is paid $1360 per month in a factory. Workers are paid an additional $84 per day if they work during weekends. Which of the following equations represents the monthly earnings of a worker, if y represents the wage per month and x represents the number of days he works during weekends? (View Solution)
Select the equation of a line whose slope is 4/5 and y-intercept 3. (View Solution)
Which is the equation of a line whose slope is 2/3 and y-intercept is 4/5? (View Solution)
Choose an equation of a line in slope-intercept form that is parallel to x-axis and has y-intercept as 3. (View Solution)
What is the equation of a line whose slope is -3 and y-intercept 4/7? (View Solution)
Dennis goes for basketball coaching class three times a week. The time of practice increases every week. The increase in the time of practice is modeled with the equation y = 3x + 1, where y is the time of practice and x is the number of the week. Find the time of practice in the 5th week. (View Solution)
Select the equation of the line whose slope is -4 and y-intercept is -3/4. (View Solution)
Select the equation of a line whose slope is 3/4 and y-intercept 2. (View Solution)
Brad earns $23 per day by washing cars. He is paid an extra $4 per car for washing after 6 P.M. Which of the following equations represents his total earnings for the day, if y represents his earnings per day and x represents the number of cars washed after 6 P.M.? (View Solution)
Each worker is paid $1392 per month in a factory. Workers are paid an additional $84 per day if they work during weekends. Which of the following equations represents the monthly earnings of a worker, if y represents the wage per month and x represents the number of days he works during weekends? (View Solution)
The graph shows a linear increase in the price of oil. What is the slope of the line shown in the graph? (View Solution)
In the slope-intercept form y = mx + b, m is the ______ and b is the ______. (View Solution)
Find the slope of the line AB in the graph. (View Solution)
Find the slope of the line in the graph. (View Solution)
Find the equation of a line whose slope is 5 and y-intercept is 5. (View Solution)
Select the equation of the line whose slope is -2 and y-intercept is -4/5. (View Solution)
Choose the equation of the line in slope-intercept form for the graph shown. (View Solution)
Which equation is in slope-intercept form? (View Solution)
Find the y-intercept of the line in the graph shown. (View Solution)
What is the slope of the diameter AB of the circle shown in the graph? (View Solution)
What is the slope of the line in the graph? (View Solution)
m is the slope and b is the y-intercept of a line. Which of the following equations is in the slope-intercept form? (View Solution)
Select the equation of a line whose slope is 2/3 and y-intercept 5. (View Solution)
Which is the equation of a line whose slope is 3/4 and y-intercept is 5/6? (View Solution)
Choose an equation of a line in slope-intercept form that is parallel to x-axis and has y-intercept as 4. (View Solution)
What is the equation of the line shown in the graph in slope-intercept form? (View Solution)
Find the equation of the line shown in the graph using slope-intercept form. (View Solution)
Which of the following equations is in slope-intercept form? (View Solution)
Which of the following equations is not in slope-intercept form? (View Solution)
Using y-intercept and the slope, find which of the graphs represents the line y = -(3/2)x - 1. (View Solution)
Choose the equation of the line in slope-intercept form. (View Solution)
What is the equation of a line whose slope is -4 and y-intercept 5/9? (View Solution)
Which of the following equations is not in the slope-intercept form? (View Solution)
Which graph represents the equation y = -(1/3)x + 1? (View Solution)
Bill goes for basketball coaching class three times a week. The time of practice increases every week. The increase in the time of practice is modeled with the equation y = 3x + 1, where y is the time of practice and x is the number of the week. Find the time of practice in the 5th week. (View Solution)
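To make the slope-intercept form concrete, here is a small Python check (my own addition, not part of the worksheet; the helper name is made up). It simply evaluates y = mx + b for a given slope and intercept, which is all that questions like the practice-time model y = 3x + 1 require.

def line_value(slope, intercept, x):
    # Evaluate y = m*x + b for a line given in slope-intercept form.
    return slope * x + intercept

# Practice-time model from the worksheet, y = 3x + 1, at week 5:
print(line_value(3, 1, 5))        # 16

# A line with slope 4/5 and y-intercept 3, evaluated at x = 10:
print(line_value(4 / 5, 3, 10))   # 11.0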
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgehmxkjkdb&.html","timestamp":"2014-04-17T04:01:40Z","content_type":null,"content_length":"84699","record_id":"<urn:uuid:ab49aa6f-93cc-4389-aa30-aa4725412196>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
link group of the trivial $n$ component link

Currently I am reading Milnor's paper "Link Groups". In the paper he defines the link group, for every $n$ component link $L$, as a certain quotient of the fundamental group $\pi_1(S^3 \setminus L)$. On wikipedia in the article titled "link group" one can read that for trivial links this link group is isomorphic to the free group. However I think that this is impossible because the normal closure of the group generated by every meridian is abelian. So are the link groups of the trivial links free?

Tags: knot-theory, gt.geometric-topology

Comments:

Presumably the term "link group" is being used in different ways. Typically it just means $\pi_1(S^3\setminus L)$, and for the trivial link this is free on the meridians. I don't recall Milnor's definition, but if he is taking a quotient, his "link group" means something different. – Paul May 29 '12 at 13:05

Indeed, read the wikipedia article and the mathscinet review of Milnor's paper with more care and you will see that they are using the term "link group" in two different ways. – Lee Mosher May 29 '12 at 13:12

Milnor's link group is not the same as $\pi_1(S^3 \setminus L)$, but nonetheless the article and the review use the term differently. – Lee Mosher May 29 '12 at 13:18

Milnor's link group is invariant under "link homotopy", which allows strands to pass through themselves but not through other strands (this isn't obvious -- it's one of the main theorems of his paper). In particular, under Milnor's definition the group of any knot is $\mathbb{Z}$ (since there are no other strands to get in the way of making the knot trivial). This is very different from the fundamental group of the link complement. – Andy Putman May 29 '12 at 15:16

Today I realized that the term link group means something else in both articles. Thanks for your comments anyway. – W. Politarczyk May 30 '12 at 19:57
{"url":"http://mathoverflow.net/questions/98269/link-group-of-the-trivial-n-component-link","timestamp":"2014-04-19T09:27:40Z","content_type":null,"content_length":"52372","record_id":"<urn:uuid:5f38702a-3347-4524-bbd5-84ec45afbaec>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
Pearson r correlation coefficients for various distributions of paired data (Credit: Denis Boigelot, Wikimedia Commons)

A paper published this week outlines a new statistic called the maximal information coefficient (MIC), which is able to equally describe the correlation between paired variables regardless of linear or nonlinear relationship. In other words, as Pearson's r gives a measure of the noise surrounding a linear regression, MIC should give similar scores to equally noisy relationships regardless of type.

Maximum Covariance Analysis (MCA) (Mode 1; scaled) of Sea Level Pressure (SLP) and Sea Surface Temperature (SST) monthly anomalies for the region between -180 °W to -70 °W and +30 °N to -30 °S. MCA coefficients (scaled) are below. The mode represents 94% of the squared covariance fraction (SCF).

Maximum Covariance Analysis (MCA) is similar to Empirical Orthogonal Function Analysis (EOF) in that they both deal with the decomposition of a covariance matrix. In EOF, this is a covariance matrix based on a single spatio-temporal field, while MCA is based on the decomposition of a "cross-covariance" matrix derived from two fields.
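A quick numerical illustration of why a statistic like MIC is attractive (a Python sketch of my own, not from the post): Pearson's r picks up a noisy linear relationship but scores a clean parabolic relationship near zero, even though y is almost perfectly determined by x in both cases.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
noise = rng.normal(0, 0.05, 1000)

linear = 2 * x + noise        # noisy linear relationship
parabola = x ** 2 + noise     # equally clean, but non-monotonic, relationship

def pearson_r(a, b):
    # Pearson product-moment correlation coefficient.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

print(pearson_r(x, linear))     # close to 1
print(pearson_r(x, parabola))   # close to 0, despite the strong dependence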
{"url":"http://menugget.blogspot.com/2011_12_01_archive.html","timestamp":"2014-04-16T11:02:47Z","content_type":null,"content_length":"98128","record_id":"<urn:uuid:2f48ec2d-c66f-4456-b994-0f0ae944829e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
In how many ways could the letters in the word MINIMUM be ar Author Message In how many ways could the letters in the word MINIMUM be ar [#permalink] 01 Oct 2003, 15:06 Joined: 15 Aug 2003 Posts: 3470 5% (low) Followers: 57 Question Stats: Kudos [?]: 649 [0], given: 781 0% (00:00) correct 0% (00:00) based on 0 sessions In how many ways could the letters in the word MINIMUM be arranged if the U must not come before the I's? M 3 SVP I 2 N 1 Joined: 03 Feb 2003 U 1 Posts: 1619 total number of arranging the letters is 7!/(3!*2!*1!*1!)=420 consider U and Is among other letters. Followers: 5 Possible variations: I I U, I U I, U I I. I think the three combinations are equally probable, and we need the first one. 420/3=140 Kudos [?]: 29 [0], given: 0 sudzpwc Praet, whats the official answer? Intern i'm getting 400..... Joined: 13 Sep 2003 reasoning:-- total ways of arranging is 420. lets keep the 2 I's at the end. Posts: 43 so total ways of arranging the remaining 5,among which there is an I = 5!/3!1!1! i.e 20 ways. Location: US therefore the number of ways to arrange where the U doesnt come after the I's = 420 - 20 ..ways. Followers: 0 I am getting 80. There are 7 location U can not be in the first 2 positions. Because in that case I will definately follow U and violate the condition. We need to calculate the number of ways to arrange the letter when U is in 3rd, 4th, ...7th position. Consider U is in the 3rd place. That means we can select only M and N for positions 4 to 7 and 2 I's in the first two positions. Number of ways for this = 4! / 3! = 4 Similarly if U is in the 4th position, the only letters to the left could be (I's and M) OR (I's and N). Joined: 11 Mar 2003 If the left side letters are I's and M, Number of ways in this case = (Number of ways for the positions to the left of U) * (Number of ways for the positions to the right Posts: 54 of U) = (3!/2! X 3!/2!) Location: Chicago Sililarly, if the letters in the left are I's and N = 3!/2! X 1 Followers: 1 fOR U to be in the 4th position , total ways = (3!/2! X 3!/2!) + 3!/2! X 1 Similarly find out the number of ways for all the positions of U and sum them up. That will give 80. Do not know if I am correct. praetorian123, please let us know the answer. Intern I go with 140 too. By placing U at all the places following the third one, placing the two I's before them and then finding all possible permutations for the remaining Joined: 10 Oct 2003 Posts: 45 Location: Finland Followers: 1 stolyar wrote: AkamaiBrah M 3 I 2 GMAT Instructor N 1 U 1 Joined: 07 Jul 2003 total number of arranging the letters is 7!/(3!*2!*1!*1!)=420 Posts: 771 consider U and Is among other letters. Possible variations: I I U, I U I, U I I. I think the three combinations are equally probable, and we need the first one. 420/3=140 Location: New York NY 10024 I concur that this is the simplest way to solve this. Schools: Haas, MFE; Anderson, MBA; USC, MSEE _________________ Followers: 9 Best, Kudos [?]: 21 [0], given: 0 AkamaiBrah Former Senior Instructor, Manhattan GMAT and VeritasPrep Vice President, Midtown NYC Investment Bank, Structured Finance IT MFE, Haas School of Business, UC Berkeley, Class of 2005 MBA, Anderson School of Management, UCLA, Class of 1993 M 3 I 2 csperber N 1 U 1 total number of arranging the letters is 7!/(3!*2!*1!*1!)=420 Joined: 22 Nov 2003 consider U and Is among other letters. Possible variations: I I U, I U I, U I I. I think the three combinations are equally probable, and we need the first one. 
420/3=140 Posts: 54 Location: New Orleans Would you explain the 3!*2!*1!*1! part of this equation? I understand where you got the numbers, but why do we divide by this value? Followers: 1 Praetorian stolyar wrote: CEO M 3 I 2 Joined: 15 Aug 2003 N 1 U 1 Posts: 3470 total number of arranging the letters is 7!/(3!*2!*1!*1!)=420 Followers: 57 consider U and Is among other letters. Possible variations: I I U, I U I, U I I. I think the three combinations are equally probable, and we need the first one. 420/3=140 Kudos [?]: 649 [0], given: 781 140 is correct, stolyar explain why you divide by 3. praetorian123 wrote: stolyar wrote: M 3 AkamaiBrah I 2 N 1 GMAT Instructor U 1 Joined: 07 Jul 2003 total number of arranging the letters is 7!/(3!*2!*1!*1!)=420 consider U and Is among other letters. Posts: 771 Possible variations: I I U, I U I, U I I. I think the three combinations are equally probable, and we need the first one. 420/3=140 Location: New York NY 10024 140 is correct, stolyar explain why you divide by 3. Schools: Haas, MFE; Simple logic. In all of the arrangements, either the U is after the two Is, before the 2 Is, or between the 2 Is. All of them are equally likely so the one we want Anderson, MBA; USC, MSEE happens 1/3 of the time. Followers: 9 _________________ Kudos [?]: 21 [0], given: 0 Best, Former Senior Instructor, Manhattan GMAT and VeritasPrep Vice President, Midtown NYC Investment Bank, Structured Finance IT MFE, Haas School of Business, UC Berkeley, Class of 2005 MBA, Anderson School of Management, UCLA, Class of 1993 Senior Manager In how many ways could the letters in the word MINIMUM [#permalink] 15 May 2004, 15:47 Joined: 02 Mar 2004 In how many ways could the letters in the word MINIMUM be arranged if the U must not come before the I's? Posts: 330 Location: There Followers: 1 hallelujah1234 wrote: Senior Manager In how many ways could the letters in the word MINIMUM be arranged if the U must not come before the I's? Joined: 02 Feb 2004 7!-(combinations where U come before I) Posts: 345 Followers: 1 mirhaque wrote: hallelujah1234 wrote: GMAT Club Legend In how many ways could the letters in the word MINIMUM be arranged if the U must not come before the I's? Joined: 15 Dec 2003 7!-(combinations where U come before I) Posts: 4318 And what is within the bracket is the tougher part to calculate. Do you want to try? Followers: 16 Kudos [?]: 123 [0], given: 0 Best Regards, Paul wrote: mirhaque mirhaque wrote: Senior Manager hallelujah1234 wrote: Joined: 02 Feb 2004 In how many ways could the letters in the word MINIMUM be arranged if the U must not come before the I's? Posts: 345 7!-(combinations where U come before I) Followers: 1 And what is within the bracket is the tougher part to calculate. Do you want to try? ahhhhhhhhhhhh! nah! it's been only a week I learned combination. Can't run before I learn to walk. hallelujah1234 wrote: Senior Manager Total # words = 7!/(3!2!), not 7! Joined: 02 Feb 2004 can you explain this as humanely as possible Posts: 345 Followers: 1 mirhaque wrote: hallelujah1234 wrote: Total # words = 7!/(3!2!), not 7! :-) Senior Manager can you explain this as humanely as possible :panel Joined: 02 Mar 2004 Replace 3.M and 2.N with M1, M2, and M2 and N1, and N2 respectively. Posts: 330 So, we can have 7! different words. However, M1 = M2 = M3, and N1 = N2. Hence, we need to divide the total words by 3! and 2! 
respectively in order to knock dummies off Location: There of the list, because Followers: 1 {M1M2M3, M1M3M2, M2M3M1, M2M1M3, M3M1M2, M3M2M1} --> MMM 3! to 1 map. Similarly {N1N2, N2N1} --> NN (2! to 1 map) Total number of ways to arrange letters of MINIMUM to form distinct words: 7!/2!*3! = 420 Paul Unfavorable outcomes are when U is in front of I: GMAT Club Legend (UI)-X-X-X-X-X --> 6! Joined: 15 Dec 2003 Similar outcomes with M's interchanged: 3! Posts: 4318 Total unfavorable outcomes: 6!/3! = 120 Followers: 16 Total # of ways to arrange letters of MINIMUM such that U does not come before I: 420-120 = 300 Kudos [?]: 123 [0], given: 0 _________________ Best Regards, hallelujah1234 wrote: mirhaque wrote: hallelujah1234 wrote: mirhaque Total # words = 7!/(3!2!), not 7! Senior Manager can you explain this as humanely as possible Joined: 02 Feb 2004 Replace 3.M and 2.N with M1, M2, and M2 and N1, and N2 respectively. Posts: 345 So, we can have 7! different words. However, M1 = M2 = M3, and N1 = N2. Hence, we need to divide the total words by 3! and 2! respectively in order to knock dummies off of the list, because Followers: 1 {M1M2M3, M1M3M2, M2M3M1, M2M1M3, M3M1M2, M3M2M1} --> MMM 3! to 1 map. Similarly {N1N2, N2N1} --> NN (2! to 1 map) why not knock off dummies for "I"s as well. there are two I's Paul wrote: Total number of ways to arrange letters of MINIMUM to form distinct words: 7!/2!*3! = 420 mirhaque Unfavorable outcomes are when U is in front of I: (UI)-X-X-X-X-X --> 6! Senior Manager Similar outcomes with M's interchanged: 3! Total unfavorable outcomes: 6!/3! = 120 Joined: 02 Feb 2004 Total # of ways to arrange letters of MINIMUM such that U does not come before I: 420-120 = 300 Posts: 345 Problem with this Followers: 1 (UI)-X-X-X-X-X --> 6! is: there are two "I"s & with this combination the second "I" will come before "U". However, if you assume "UII" as one, that eliminates that problem but does not count all combinations where other letters could be between the two "I"s but not before "U". What is the solution Halle? To figure out ways U can come before the I's, think of the I's as a unit. U _ _ _ _ I -------> there are 4! for the positions in the middle. Senior Manager There are also 5 ways U can come before I. The above and: Joined: 23 Sep 2003 _ U _ _ _ I Posts: 295 _ _ U _ _ I _ _ _ U _ I Location: US _ _ _ _ U I Followers: 1 = 5(4!) = 120 (7!/3!2!) - 5(4!) = 420 - 120 = 300 ndidi204 wrote: To figure out ways U can come before the I's, think of the I's as a unit. mirhaque U _ _ _ _ I -------> there are 4! for the positions in the middle. Senior Manager There are also 5 ways U can come before I. The above and: Joined: 02 Feb 2004 _ U _ _ _ I _ _ U _ _ I Posts: 345 _ _ _ U _ I _ _ _ _ U I Followers: 1 = 5(4!) = 120 (7!/3!2!) - 5(4!) = 420 - 120 = 300 but you are not counting other letters that can come betwee the two I's
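Since the thread never fully settles the arithmetic, a brute-force check is easy (a Python sketch of my own, not from the forum). Enumerating the distinct arrangements of MINIMUM confirms 420 in total; under the reading that the U must come after both I's there are 140 such arrangements, and under the looser reading that the U merely must not precede both I's there are 420 - 140 = 280.

from itertools import permutations

distinct = set(permutations("MINIMUM"))   # 420 distinct arrangements

def i_positions(word):
    return [i for i, c in enumerate(word) if c == "I"]

u_after_both = [w for w in distinct if w.index("U") > max(i_positions(w))]
u_before_both = [w for w in distinct if w.index("U") < min(i_positions(w))]

print(len(distinct))        # 420
print(len(u_after_both))    # 140 (U follows both I's)
print(len(u_before_both))   # 140, so 420 - 140 = 280 have at least one I before the U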
{"url":"http://gmatclub.com/forum/in-how-many-ways-could-the-letters-in-the-word-minimum-be-ar-2701.html?fl=similar","timestamp":"2014-04-18T15:50:36Z","content_type":null,"content_length":"212273","record_id":"<urn:uuid:5e035235-1b09-4a6d-9c04-17fe7deba66f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
More on modelsMore on models More on models George Selgin wrote a post on expressing ideas in words rather than mathematics. Here are my two cents of commentary. Math is useful when we want to know “how much?” How much is the U.S. economy growing? How much is the value of the dollar changing against a trade-weighted basket of other currencies? How much more revenue would income taxes raise if their rates were doubled? The math that is useful for answering such questions is often nothing more advanced than what college-bound high school students learn, combined with some accounting identities. Deirde (formerly Donald) McCloskey and Arjo Klamer have made the case that accounting rather than higher mathematics is in fact the master metaphor of economics, because so much of applied economics is about what to count and how to account for it. At bottom, a model is a device for isolating and examining what we consider to be important features of a situation. A model need not be a forest of equations. A verbal description can be a model. So can a balance sheet. So can a historical case—that’s why we say, for instance, “Bank X’s lending practices were a model of good risk management.” A verbal model is more appropriate than a mathematical one in cases where generality is more important than extreme precision. A small change in the assumptions need not lead to a big change in the conclusions, as can happen with mathematical models (another favorite theme of Deirdre McCloskey). Economists continue to rely heavily on verbal models in practice. But rather than following Alfred Marshall’s habit of favoring verbal exposition even for results derived from mathematical inquiry, they express insights that were originally verbal only in mathematics. Readers waste much time mentally translating the math back into words. W Raftshol says: June 26th, 2011 7:18 pm I have posted this figure on the Where's My Model? thread but as it is still being moderated I will repost it here: Goodwin Simulation - US Economy 1913-2100 The Goodwin Model has an undeservedly bad reputation. Goodwin himself, a Harvard Marxist, even gave up on it as "useless". If you look into its elements, it is easy to see where the errors were made. The basic assumption connecting output to money is Q = ms (Actually Q = m*sigma, but I use s instead because Greek characters may not show up on your site) Q is real output, m is real money and s is the output/capital ratio. For some reason, the post-Keynesians who tried to use this model always assumed that s was a constant. However, the Quantity Theory of Money also connects output and money as pQ = mV or, Q = mV/p. Comparing the two expressions for Q, it can be inferred that s = V/p The 2 equations that make up the Goodwin Model are: (1) u'/u = Ph(v)- a'/a - p'/p (2) v'/v = (1-u)s - a'/a - n'/n - d Here, u is the wages share of output = wL/pQ = wL/paL = w/pa p= price level, a = labor productivity, w = nominal wages L is workforce, v is employment percentage, n is population, d is capital depreciation and Ph(v) is a general exponential function which gives the Phillips curve. The Phillips curve has a well established empirical basis. The difference between my model and previous efforts is that I use s = V/p, which is to say that the output/capital ratio is a function of the price level. The simulation on the above chart shows the response of the economy to a constant 4% inflation rate since 1940 except for the period 1979-83 which is shown as a zero inflation period. 
The model shows what I had expected. ie., that inflation destroys the wage share and the collapse arrives right on schedule. I have often thought that Austrians should use models. I have read enough 40,000 essays with nary a chart or equation to come to that conclusion. The Austrians have a sound theory but can't communicate with anyone! On my blog johanraft.wordpress.com I am working on a lengthy post entitled "A Dynamic Austrian Macro Model" which I am slowly piecing together. I have not had any comments so far from any serious person. The only "serious" people to look at it have been Keynesians who dogmatically reject pQ = mV. Eitan says: June 28th, 2011 4:28 am All this trashing of maths is bugging me. It seems to me that you economists don't realize that there is math other than calculus and statistics, but math is more than numbers! You decry that math cannot deal effectively with more general situations than those which may be described numerically, but honestly, to me, you only betray your limited scope of math knowledge. The verbal model you prefer is still math! Just written in imprecise forms that are subject to (mis)interpretation, with assertions made that are not rigorously proven. Not being able to rigorously show that your conclusions follow from your assumptions is not a strength. Sorry, I think I'm taking this too personally. :-) • Eitan says: June 28th, 2011 4:32 am There's a paper by Karl Menger, Carl Menger's son and a mathematician, about this topic which really explains much better than I can. Menger, K., 1973. Austrian Marginalism and Mathematical Economics. Carl Menger and the Austrian School of Economics, p.38-60, edited by Hicks, J.R. &Weber, W. • Kurt Schuler says: June 28th, 2011 10:51 pm Economics is about human behavior, so most of it should be understandable verbally, because that is how we usually communicate with each other. You did not try to reply to me with a bunch of mathematical symbols. I don't think you would have considered such a reply more precise and rigorous. What constitutes rigor depends on the context. Sometimes it involves mathematics; more often it does not. □ Eitan says: June 29th, 2011 6:26 am I can say "good A is preferred to good B", or I can use symbols "A>B". Both statements have the same level of rigor. Both statements are mathematical statements about an economic concept. You prefer the first statement because you don't have to translate back. That's fine. One should write in a way that is comprehensible to one's intended audience. I argue that mathematics is useful for economics, not that symbolic language is always better than verbal language. They're both mathematical. Rigor is the absolute certainty that every logical step follows from the previous one. For instance, the statement "Economics is about human behavior, so most of it should be understandable verbally, because that is how we usually communicate with each other" is not rigorous, which is fine, but does not convince me.
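The Goodwin-type system in the comment above is just a pair of coupled ODEs, so it is easy to integrate numerically. Below is a minimal Python sketch of my own; the linear Phillips curve, the constant output/capital ratio, and every parameter value are illustrative assumptions rather than the commenter's calibration (indeed the comment's whole point is that the output/capital ratio varies with the price level, which this sketch ignores).

# Illustrative parameters (not the commenter's calibration)
alpha = 0.02    # labour-productivity growth a'/a
beta  = 0.01    # population growth n'/n
infl  = 0.04    # inflation p'/p
sigma = 0.5     # output/capital ratio, held constant here for simplicity
delta = 0.03    # depreciation d
gamma, rho = 0.90, 1.00   # linear Phillips curve Ph(v) = -gamma + rho*v

def rhs(u, v):
    # Equation (1): u'/u = Ph(v) - a'/a - p'/p
    du = u * ((-gamma + rho * v) - alpha - infl)
    # Equation (2): v'/v = (1 - u)*sigma - a'/a - n'/n - d
    dv = v * ((1 - u) * sigma - alpha - beta - delta)
    return du, dv

u, v = 0.80, 0.90          # initial wage share and employment rate
dt, years = 0.01, 100
for step in range(int(years / dt)):
    du, dv = rhs(u, v)
    u, v = u + du * dt, v + dv * dt
    if step % int(10 / dt) == 0:
        print(round(step * dt), round(u, 3), round(v, 3))

# The orbit cycles around the equilibrium u* = 1 - (alpha + beta + delta)/sigma
# and v* = (gamma + alpha + infl)/rho, the classic Goodwin growth cycle.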
{"url":"http://www.freebanking.org/2011/06/26/more-on-models/","timestamp":"2014-04-18T15:49:47Z","content_type":null,"content_length":"38395","record_id":"<urn:uuid:fd48f190-3403-4832-b65b-d6667be06b12>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Different confidence intervals from proportions and tabulates (also in survey)

From     "Nick Cox" <n.j.cox@durham.ac.uk>
To       <statalist@hsphsun2.harvard.edu>
Subject  st: RE: Different confidence intervals from proportions and tabulates (also in survey)
Date     Sun, 29 Oct 2006 22:55:15 -0000

The example code you give does not give any instances with lower confidence limits below zero, which I assume is what you mean by negative confidence intervals. But clearly the CIs do differ.

Setting aside the complications of -svy-, which is naturally a big set-aside: There is no carved on stone, handed down from on high, method of getting "CORRECT" [your word] binomial confidence intervals. This is why -ci- (pure and simple) offers a variety of ways of doing it, and what you get over a range of real situations is interestingly scary. Sometimes methods agree nicely; other times they don't. Also, sometimes a confidence level means about that much coverage, but often not. The manual entry for [R] ci gives one entry into the literature. The paper by Brown and friends in Statistical Science 2002 is relatively friendly, and likely to be web-accessible to you.

Regardless of that, as proportions approach 0 (or 1, really the same problem, modulo some measurement convention), then on any reasonable view the problem becomes increasingly asymmetric, and thus not best to be thought in terms of estimate +/- some multiple of standard error, which, whatever they may say in introductory treatments, is at best a crude approximation to what is going on. A much better scale to work on is logit. Thus negative confidence limits are in essence a clear sign that you are using an inappropriate method, and/or that a one-sided interval would be more appropriate.

Jason Ferris

> I have been running survey proportions and observing results with
> negative confidence intervals (which doesn't make sense). When I use
> survey tab (with column percent, se and ci) I get the same point
> estimates and standard errors but different 95% confidence intervals. I
> assume this is an issue with the proportion calculations using "Binomial
> Wald" for confidence intervals.
> I checked the survey manual and have not been able to find why:
> Paste the following command to see my dilemma:
> webuse nhanes2b, clear
> svy: proportion race
> svy: tab race, ci se
> The results show the same point estimates and standard errors (with
> rounding) but different CI's. As mentioned, for my data, I get some
> negative CI's for svy: proportions commands but not for the svy:
> tabulate commands. My ultimate concern is being able to automatically
> extract the CORRECT estimates to excel (from using matrix e(b) and e(V)
> - and calculating 95% CI from square-root of e(V) *1.96).
> I am using the latest version of Stata 9.2, on Windows XP.
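To see concretely why the Wald ("estimate +/- 1.96 standard errors") interval can dip below zero while other binomial intervals do not, here is a small Python comparison of the Wald and Wilson score intervals (a sketch of my own, not Stata output; -ci- offers these and other methods).

from math import sqrt

def wald_ci(k, n, z=1.96):
    # Normal-approximation (Wald) interval; can fall below 0 near the boundary.
    p = k / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(k, n, z=1.96):
    # Wilson score interval; always stays inside [0, 1].
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# A proportion near the boundary: 2 "successes" out of 100
print(wald_ci(2, 100))     # lower limit is negative
print(wilson_ci(2, 100))   # lower limit stays above zero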
{"url":"http://www.stata.com/statalist/archive/2006-10/msg01126.html","timestamp":"2014-04-23T07:18:33Z","content_type":null,"content_length":"8403","record_id":"<urn:uuid:6695a158-e1dd-4ee8-8a7a-170f867f06cb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
In the previous blog we found that the pixel FPN in dark was equal to 18.4 % of the average signal at 25 % of saturation (corresponding to 8 s exposure time at 30 deg.C). These relative values can be translated into absolute values: the rms value of the DSNU is equal to 150.5 DN, while the average signal in dark is 1637 DN, the offset (in dark) being equal to 819 DN and saturation defined at 4095 DN. (In the previous blog the DSNU was not correctly calculated.)
"Where is this DSNU coming from?" is a more than valid question. In this blog we will analyze the column FPN. To calculate (!) the column FPN in dark, the same data or images as before are used. The following procedure is followed:
- After removing/correcting the defect pixels, all images taken at a particular exposure time are averaged on pixel level, resulting in one (average) image per exposure time,
- Next, per column all pixels are averaged, yielding an average value for every column (at every value of the exposure time),
- Once the column averages are available, the standard deviation of the average column values is calculated. In principle this yields a single number for each exposure time.
The result of this calculation is shown in Figure 1, indicating the column FPN in dark as a function of exposure time.
Figure 1: fixed-pattern noise in dark as a function of the exposure time.
There are two curves shown:
- the first represents the pixel-level FPN, already discussed previously,
- the second shows the column FPN in dark, which is remarkably lower than the pixel FPN.
The following data can be obtained from the curve: 2.9 DN is the FPN at 0 s exposure time, and the time-dependent part of the column FPN equals 0.0008 DN/s. At 25 % of the saturation level, or an exposure time of 8 s, the column FPN is equal to 7.8 DN rms. Taking into account the absolute values mentioned earlier, the column FPN can be calculated to be equal to 0.95 % at 25 % of saturation.
The ratio between the DSNU on pixel level and the column FPN is equal to: 0.0188/0.0008 = 23.5, whereas the theoretical value would predict: (number of lines)^0.5 = 15.5. To find out where this discrepancy is coming from, the uniformity of the average column value of every column is checked at a particular exposure time (8 s). The result is shown in Figure 2.
Figure 2: average column signal in dark at 8 s exposure time.
As can be learned from the data in Figure 2, the average column value is very constant (also expressed by the low column FPN rms value), and it is not expected that something is wrong with the calculation of the column FPN in dark.
"There is a warning sign on the road ahead": actually there is more than one warning sign on the road ahead, but they are already listed in the previous blog.
Albert, 11-10-2011.
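The per-column procedure above is easy to express directly on an image stack. Below is a minimal NumPy sketch of it; the array shape, frame count and dark level in the example are made-up illustrations, not values from the measurement discussed here.

```python
import numpy as np

def column_fpn(frames):
    """Column fixed-pattern noise from a stack of dark frames at one exposure time.

    frames: 3-D array (num_frames, rows, cols) of dark images in DN,
            already corrected for defect pixels.
    Returns the rms column FPN in DN.
    """
    # 1. Average all frames taken at this exposure time, pixel by pixel.
    mean_frame = frames.mean(axis=0)
    # 2. Average each column over all rows.
    column_means = mean_frame.mean(axis=0)
    # 3. Column FPN = standard deviation of the column averages.
    return column_means.std()

# Example with synthetic data: 10 frames of 240 x 320 pixels around a 1637 DN dark level.
rng = np.random.default_rng(0)
frames = rng.normal(1637.0, 20.0, size=(10, 240, 320))
print(f"column FPN ~ {column_fpn(frames):.2f} DN")
```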
{"url":"http://harvestimaging.com/blog/?m=201110","timestamp":"2014-04-25T06:45:19Z","content_type":null,"content_length":"22283","record_id":"<urn:uuid:cfb5aa96-ad7e-4f48-9b77-985199cba427>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
General Departmental Seminar Series
Deformation Based Morphometry, Roy's Maximum Root and Recent Advances in Random Fields
Jonathan Taylor, PhD, Dept. of Statistics, Stanford University
October 15, 2004, 12 - 1 pm in room 5235-5275 Medical Sciences Center (NOTE - DIFFERENT LOCATION THAN PREVIOUSLY ADVERTISED!), 1300 University Ave.
The starting point of our talk is a study of anatomical differences between controls and patients who have suffered non-missile trauma. We use a multivariate linear model at each location in space, using Hotelling's T^2 to detect differences between cases and controls. If we include further covariates in the model, Roy's maximum root is a natural generalization of Hotelling's T^2. This leads to the Roy's maximum root random field, which includes many special types of random fields: Hotelling's T^2, T, and F; so, in effect, the Roy's maximum root random field "unifies" many different random fields.
This leads to the recent advances in random fields. In this part of the talk we will briefly describe some recent advances both in the "theory" and "application" of smooth random fields, particularly the behaviour of the maximum of a smooth random field. We will describe some recent results about the accuracy of the (arguably) well-known expected Euler characteristic (EC) approximation to the distribution of the maximum of a smooth random field; an integral-geometric "recipe" for using the EC approximation; and, finally, some important recent applications of such approximations, from classical multivariate problems to perturbation models, as well as open problems. This talk is based on joint work with Robert Adler, Keith Worsley and Akimichi Takemura.
{"url":"http://www.biostat.wisc.edu/Seminars/SeminarAbstracts2004-2005/dept101504.htm","timestamp":"2014-04-20T13:51:15Z","content_type":null,"content_length":"17566","record_id":"<urn:uuid:48472af6-a81e-4220-9099-d421ae7aff3f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial order - Unbounded normal operators affiliated with von Neumann algebra. up vote 4 down vote favorite Hello, I have a question which is related to a partial order in a set of self-adjoint operators. Let $\mathcal{M}$ be a semifinite von Neumann algebra with a faithful semi-finite normal trace $\tau$. Let $T$ and $S$ be two self-adjoint operators (possibly unbounded) $\tau$-measurable (here probably the assumption that they are affiliated with $\mathcal{M}$ is enough) such that $0 \leq T \leq S$ i.e. $S-T$ is positive. How to get that $$E_{(s, \infty)}(|T|) \preceq E_{(s, \infty)}(|S|), \ \ s \geq 0,$$ where $E_I(|T|)$ (resp. $E_I(|S|)$) stands for a spectral projection of $T$ (resp. $S$) corresponding to the interval $I$ and $\preceq$ means sub-equivalence relation in Murray-von Neumann sense. I am looking also for some good references which describe the relation between $U|T|$ the elements of the polar decomposition of closed densely defined (possibly unbounded) operator $T$ affiliated with some von Neumann algebra $\mathcal{M}$. I mean that $U$ and each spectral projection of $|T|$ are in this von Neumann algebra. Probably, I can find this in Takesaki vol 2 or vol 3. I will be really grateful for any help. Thank you, VdM I don't quite understand the bit about "describe the relation between ..." as it's not clear what is between what, as it were. – Matthew Daws Jun 14 '11 at 18:42 Sorry, my mistake I mean the relation between $U|T|$ and von Neumann algebra $\mathcal{M}$ i.e. that $U$ and each spectral projection of $|T|$ are in this von Neumann algebra. I know it suffices to show that $U$ and $\textbf{1}(|T|)$ are in $\mathcal{M}$, because by virtue of Double Commutant Theorem the spectral projection of $f(|T|)$ are there since $\textbf{1}(|T|)$ is. I am looking for some good references for the theory of the operators affiliated with some von Neumann algebra. – Romanov Jun 14 '11 at 18:52 1 In Takesaki II Problem IX.7 there is a outline of a proof to the statment that $\tau(f(S))\leq \tau(f(T))$ for any $f\geq 0$ continuous with $f(0)=0.$ My guess is that one can work with this a little bit to show that the same inequality holds for $f=\chi_{(s,\infty)}$ which at least handles the factor case. The general case you may be able to do by working with the extended center-valued trace but I don't really know. – Benjamin Hayes Jun 15 '11 at 3:20 The first part of my question is in particular a part of this problem in Takesaki. Because $\mu_t(T) \leq \mu_s(T)$ iff $\lambda_s(T)= \tau(E_{(s,\infty)}(|T|)) \leq \tau(E_{(s,\infty)}(|S|))=\ lambda_s(S)$ iff $E_{(s,\infty)}(|T|) \preceq E_{(s,\infty)}(|S|)$. – Romanov Jun 16 '11 at 15:03 Another property of $s$-numbers is that $\mu_t(f(T))= f(\mu_t(T))$ for increasing continuous $f$ on $[0,\infty) with f(0) \geq 0$ $\tau(T) = \int_{0}^{\infty}\mu_t(T) dt$ for positive $\tau$ measurable $T$ we have $$\tau(f(S)) = \int_{0}^{\infty} f(\mu_t(S)) dt \leq \int_{0}^{\infty} f(\mu_t(T)) dt = \int_{0}^{\infty} \mu_t(f(T))= \tau(f(T)).$$ So this is not a good point. – Romanov Jun 16 '11 at 15:03 add comment 1 Answer active oldest votes I assume you are following the proof in Fack-Kosaki (if you are not, we are talking here about Proposition 2.2 and 2.5 there). Note that there is no need for absolute value bars since both $T,S$ are positive. The key fact is that $E_{(s,\infty)}(T)\wedge E_{[0,s]}(S)=0$ (to be proven afterwards). 
Using this, we have (using Kaplansky's formula)
\[ E_{(s,\infty)}(T)=E_{(s,\infty)}(T)-E_{(s,\infty)}(T)\wedge E_{[0,s]}(S)\sim E_{(s,\infty)}(T)\vee E_{[0,s]}(S)-E_{[0,s]}(S)\leq I-E_{[0,s]}(S)=E_{(s,\infty)}(S) \]
So we only need to prove that $E_{(s,\infty)}(T)\wedge E_{[0,s]}(S)=0$. Now, if $\xi\in E_{(s,\infty)}(T)H \cap E_{[0,s]}(S)H$ with $\|\xi\|=1$, the following happens:
\[ \langle T\xi,\xi\rangle=\langle TE_{(s,\infty)}(T)\xi,E_{(s,\infty)}(T)\xi\rangle =\|T^{1/2}E_{(s,\infty)}(T)\xi\|^2>s, \]
\[ \langle T\xi,\xi\rangle=\langle E_{[0,s]}(S)TE_{[0,s]}(S)\xi,\xi\rangle \leq\langle E_{[0,s]}(S)SE_{[0,s]}(S)\xi,\xi\rangle=\|S^{1/2}E_{[0,s]}(S)\xi\|^2\leq s \]
The contradiction implies that $\xi$ cannot exist.
I definitely agree. It was the key point! Thank you very much! – Romanov Jun 16 '11 at 17:00
{"url":"http://mathoverflow.net/questions/67791/partial-order-unbounded-normal-operators-affiliated-with-von-neumann-algebra/67974","timestamp":"2014-04-16T04:39:13Z","content_type":null,"content_length":"58814","record_id":"<urn:uuid:4913b5f9-56ba-48dd-861a-8a98665f1efe>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix Conjugates over Finite Fields
up vote 1 down vote favorite Thinking about Diffie-Hellman for matrices brought me to the following question. Given $\mathbb{F}_{p^k}$ the finite field with $p^k$ elements, when can we find non-trivial solutions to for $A,B,Q\in Mat_n(\mathbb{F}_{p^k})$ and $r\neq s$? linear-algebra cryptography matrices add comment
1 Answer active oldest votes
This occurs if and only if the matrices $Q^r$ and $Q^s$ are conjugate. This is the case if and only if these matrices are conjugate over the algebraic closure of $\mathbb{F}_p$. If $Q$ is diagonalizable, then things are straightforward: if its eigenvalues are $\alpha_1,\ldots,\alpha_n$ then $Q^r$ is conjugate to $Q^s$ if and only if $\alpha_1^r,\ldots,\alpha_n^r$ are a permutation of $\alpha_1^s,\ldots,\alpha_n^s$. An interesting case is where the characteristic polynomial of $Q$ is irreducible over $\mathbb{F}_q=\mathbb{F}_{p^k}$. In this case the eigenvalues of $Q$ are $\alpha,\alpha^q,\alpha^{q^2},\ldots,\alpha^{q^{n-1}}$. Then $Q^r$ and $Q^s$ are conjugate if and only if $s\equiv q^i r$ (mod $t$) for some $i$ with $0\le i < n$ and where $t$ is the multiplicative order of $\alpha$ (and of $Q$). When $Q$ is not diagonalizable, things get rather tedious. It's not too bad if $Q$ is invertible and $r$ and $s$ are not divisible by $p$. If $Q$ has a Jordan block of size $k$ with eigenvalue $\alpha$ then $Q^r$ also has a Jordan block of size $k$ with eigenvalue $\alpha^r$ as long as $r$ is coprime to $p$. If $Q$ is singular or $r$ is a multiple of $p$ then the Jordan block sizes of $Q^r$ might be different from those of $Q$. Tedium ensues :-)
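A small Python sketch of the criterion from the irreducible-characteristic-polynomial case in the answer above; the concrete values of q, n, t, r and s in the example calls are invented for illustration.

```python
def powers_conjugate_irreducible(q, n, t, r, s):
    """Criterion from the answer for Q with irreducible characteristic polynomial
    over F_q: Q^r and Q^s are conjugate iff s = q^i * r (mod t) for some
    0 <= i < n, where t is the multiplicative order of the eigenvalue alpha
    (and of Q)."""
    return any((pow(q, i, t) * r - s) % t == 0 for i in range(n))

# Illustrative values only: q = 5, n = 3, eigenvalue order t = 124.
print(powers_conjugate_irreducible(5, 3, 124, 7, 35))  # s = 5 * 7 mod 124 -> True
print(powers_conjugate_irreducible(5, 3, 124, 7, 11))  # no i works here  -> False
```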
{"url":"http://mathoverflow.net/questions/29471/matrix-conjugates-over-finite-fields","timestamp":"2014-04-20T06:04:30Z","content_type":null,"content_length":"51310","record_id":"<urn:uuid:f721b89c-1b0f-4d05-bec8-8294962c9cc5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Temperature Correction Factors for the Volume of LPG / NGL – part 2 Temperature Correction Factor for the volume of LPG/NGL- part 2 In part 1 of “Temperature Correction Factors for LPG and NGL” we talked about table 24E, which is used to calculate a CTL based on relative density at 60 deg Fahrenheit. In part 2 we will discuss the use of table 23E, which is used to convert a relative density at observed density to a relative density at 60 deg Fahrenheit. This article is the second in a series of three (or possibly more) and attempts to explain the practical use of these tables, and table 23E in particular. Okay, let’s get started! The implementation methods for all standards are explained in detail in the API MPMS publication 11.2.4 and all formulas required to do the calculations are supplied in that publication. It is however not that easy to translate the explanations into for example an excel spreadsheet without a thorough study of the implementation methods. Especially for table 23E where one needs to carry out one or more iterations in order to arrive at a satisfactory answer. Table 23E: Calculating relative density at 60 deg Fahrenheit based on observed relative density and observed temperature in deg Fahrenheit. In all the implementation methods the calculations are performed using reference fluids that are close in density on either side of the input density. These reference fluids have a number of defined parameters that represent the equations for their corresponding states (i.e. whether they are in liquid or solid state, saturated or not etc). Utilizing these reference fluids and their given critical temperatures, the temperature correction factor can be calculated from the densities of the reference fluids, after they have been scaled to the observed reduced temperature (reduced by the critical temperature of the input fluid). As described in part 1, the table with all data for the reference fluids is again necessary. The required data is given on page 13 of chapter 11.2.4, at the end of section 5.1.1.3 and can be viewed NOTE: for all calculations double precision is required and figures are rounded to 12 decimals. It is important to follow this convention otherwise results will be different from those as calculated by API. Input: (example figures and calculated results in blue letters) - Relative density at observed temperature (Ύx) (0.24573) - Observed temperature in °F (Tf) (189.98) Step 23/1: Round the relative density to the nearest 0.0001 and round the observed temperature to the nearest 0.1 °F: - Ύx = 0.2457, Tf = 190.0 Step 23/2: Convert the rounded observed temperature to units of °Kelvin: Tx = (Tf + 459.67) / 1.8 Tx = 360.927777777778 Step 23/3: Check if Tx and Ύx fall within the required boundaries: - temperature: 227.15 <= Tx <= 366.15 °K - density: 0.2100 <= Ύx <= 0.7400 If either one of the values is not within the boundaries the CTL cannot be calculated. Step 23/4: Determine the two reference fluids that are immediately smaller and bigger than the input density. The approach here used is a bit different from the one used for use of table 24E since we must now first calculate a density for each reference fluid at the observed temperature and only afterwards (in step 23/5) determine which reference fluids to use. The densities of all reference fluids must be calculated at the observed temperature Tx as follows: 1. Use each reference fluid’s critical temperature Tc,ref to compute its reduced observed temperature Tr,x: Tr,x = Tx / Tc,ref. 2. 
If Tr,x <= 1 calculate the saturation density for this reference fluid using Tr,x and the formula from step 24/10 in the calculations of table 24E: Rosatx,ref = Roc * (1 + ((k1 * Tau^0.35) + (k3 * Tau^2) + (k4 * Tau^3))/(1 + (k2 * Tau^0.65))) whereas Tau = 1 – Tr,x k1, k2, k3 and k4 are factors taken from the table above for each reference fluid. 1. Also calculate the saturation density for this reference fluid at 60°F using the reduced temperature Tr,60: Tr,60 = 519.67 / (1.8 * Tc,ref) Rosat60,ref = Roc * (1 + ((k1 * Tau^0.35) + (k3 * Tau^2) + (k4 * Tau^3))/(1 + (k2 * Tau^0.65))) whereas Tau = 1 – Tr,60 Calculate the relative density at the observed temperature for this reference fluid as: Ύx,ref = Ύ60,ref * (Rosatx,ref / Rosat60,ref) where Ύx,ref is the reference fluid’s relative density at 60°F If Tr,x > 1, this reference fluid will not be liquid at the observed temperature and no value of Ύx,ref can be calculated and the result should be flagged ‘-1’ in this case. Step 23/5: Determine the two reference fluids to be used for the calculation. First choose the lowest density reference fluid that has a density value greater than Ύx and name this fluid ‘reference fluid 2’. Likewise find the next lowest density (i.e. the fluid that has a density value smaller than Ύx) and name this fluid ‘reference fluid 1’. - fluid 1 = EP (35/65), Ύx,ref = -0.470381000000 - fluid 2 = Ethane, Ύx,ref = 0.341646473673 If Ύx is below that for “EE 68/32” (the reference fluid with the lowest density) then set “EE 68/32” as reference fluid 1 and “ethane” as reference fluid 2. Likewise if Ύx is above that for “n-heptane” then set “n-hexane” as fluid 1 and “n-heptane” as fluid 2. Step 23/6: This is where the iteration process is about to start. Because of the fact that there is no formula that accurately describes the relationship between relative density at 60°F and relative density at observed temperature, we need to find the relative density at 60°F that corresponds to our observed density by trial and error. First we establish the high and low limits between which the relative density at 60°F and the observed density lie using the calculated data from step 23/5. Then, based on these limits we calculate a middle value for the relative density at 60°F. Then we calculate a CTL based on this middle value and Tx, and obtain an observed density by applying the CTL to this middle relative density at 60°F. After comparing the obtained observed density with our input relative density (Ύx) we check to see if the difference between the obtained observed density and (Ύx) is less then 0.000000001. If so, then we have found the solution. If not then we must make a new iteration as will be described in step 23/9. Setting the limits: 1. Set the upper limit for the observed fluid’s 60°F relative density, Ύ60,high as Ύ60,high = Ύ60,2 2. Set the high limit for the relative density at observed temperature Ύx,high as Ύx,high = Ύx,2 However, if the relative density Ύx is greater than the reference fluid “2” relative density at observed temperature Ύx,2, then no answer exists. In this case Ύx,60 should be flagged as -1 and exit. 1. Set the low limit for the observed fluid’s 60°F relative density, Ύ60,low as Ύ60,low = Ύ60,1 2. Set the low limit for the relative density at observed temperature Ύx,low as Ύx,low = Ύx,1 However, if reference fluid “1” is not a liquid at the observed temperature (i.e. 
Tr,x >1 for the reference fluid), then set the lower limit 60°F relative density using the following equation:
Ύ60,low = ((Tx – Tc,1) * (Ύ60,2 – Ύ60,1) / (Tc,2 – Tc,1)) + Ύ60,1
Also, if Ύ60,low is less than 0.3500, set it to 0.3500.
1. If Ύ60,low has been reset using the preceding technique then recalculate the corresponding Ύx,low value, using table 24E steps 24/4 until 24/13 to calculate its CTL. The corresponding relative density at observed temperature will be: Ύx,low = CTL * Ύ60,low
2. If the observed relative density Ύx is less than the observed lower limit Ύx,low, then no answer exists, and Ύ60 should be flagged as -1 and we should exit the procedure.

Step 23/7: Calculate an intermediate 60°F relative density value, Ύ60,mid.
If a value for Ύ60,low exists, then calculate Ύ60,mid from:
1. δ = (Ύx - Ύx,low) / (Ύx,high - Ύx,low)
2. If δ is less than 0.001, set it equal to 0.001. If δ is more than 0.999, set it equal to 0.999.
3. Ύ60,mid = Ύ60,low + δ * (Ύ60,high – Ύ60,low)
If however no value for Ύx,low exists, then calculate Ύ60,mid from:
Ύ60,mid = (Ύ60,high + Ύ60,low) / 2
Calculate the CTL using this value of Ύ60,mid and Tx (both unrounded!), using steps 24/5 to 24/13 from table 24E. The relative density Ύx,mid at observed temperature Tx will be:
Ύx,mid = Ύ60,mid * CTL

Step 23/8: Check for convergence of the 60°F relative density to see if the result is already satisfactory. Convergence has been accomplished either:
1. If Ύx is between Ύx,low and Ύx,mid and |Ύ60,low – Ύ60,mid| < 0.000000001, or
2. If Ύx is between Ύx,high and Ύx,mid and |Ύ60,high – Ύ60,mid| < 0.000000001.
If convergence has been achieved, set Ύ60 = Ύ60,mid and skip to step 23/12.

Step 23/9: Since convergence has not been achieved, we now need to step through an iteration routine. First we calculate an approximation to the relationship between the three pairs of relative density values (Ύx,low, Ύ60,low), (Ύx,mid, Ύ60,mid) and (Ύx,high, Ύ60,high). We can calculate a quadratic equation of the form y = Ax^2 + Bx + C that will fit through these three points, using the following formulas to obtain parameters A, B and C:
1. α = (Ύ60,high – Ύ60,low)
2. β = Ύx,high^2 – Ύx,low^2
3. φ = (Ύx,high – Ύx,low) / (Ύx,mid – Ύx,low)
4. A = (α – φ*(Ύ60,mid – Ύ60,low)) / (β – φ*(Ύx,mid^2 – Ύx,low^2))
5. B = (α – A * β) / (Ύx,high – Ύx,low)
6. C = Ύ60,low – B * Ύx,low – A * Ύx,low^2
7. Consequently Ύ60,trial = A * Ύx^2 + B * Ύx + C
The resulting value of Ύ60,trial may need to be adjusted if it is outside the range Ύ60,low – Ύ60,high:
If Ύ60,trial < Ύ60,low then reset Ύ60,trial to: Ύ60,trial = Ύ60,low + (Ύ60,mid – Ύ60,low) * (Ύx – Ύx,low) / (Ύx,mid – Ύx,low)
If Ύ60,trial > Ύ60,high then reset Ύ60,trial to: Ύ60,trial = Ύ60,mid + (Ύ60,high – Ύ60,mid) * (Ύx – Ύx,mid) / (Ύx,high – Ύx,mid)
Next, calculate the temperature correction factor CTL, using the new value of Ύ60,trial and steps 24/4 until 24/13 from table 24E. Do not round off the output for CTL! The relative density Ύx,trial at observed temperature will be:
Ύx,trial = CTL * Ύ60,trial

Step 23/10: Check for convergence of the 60°F relative density. The calculation will be considered converged if the absolute difference between Ύx,trial and Ύx is less than 0.00000001. If converged, Ύ60 = Ύ60,trial and skip to step 23/12.

Step 23/11: Since the calculation has not yet converged, the iteration upper and lower limits must be updated:
1. If Ύx,trial > Ύx then reset the upper limits to: Ύx,high = Ύx,trial and Ύ60,high = Ύ60,trial
1.
Also, if Ύx,mid < Ύx then reset the lower limits to: Ύx,low = Ύx,mid Ύ60,low = Ύ60,mid 1. If Ύx,trial < Ύx then reset the lower limits to: Ύx,low = Ύx,trial Ύ60,low = Ύ60,trial 1. If Ύx,mid > Ύx then reset the upper limits to: Ύx,high = Ύx,mid Ύ60,high = Ύ60,mid Now return to step 23/7 and continue the iteration process. At most 10 iterations should be done. If after 10 iterations no convergence has been achieved it means there is no solution. In that case flag the result as -1 and exit this procedure. According to the API MPMS white paper, all known cases have been found to return a solution within less than 10 iterations at present. Step 23/12: Round off the 60°F relative density Ύ60 to the nearest 0.0001. If the value is less than 0.3500 or greater than 0.6880 the result is outside the scope of this standard and must be Well, this brings us to the end of the procedure. Attached to this post is an excel spreadsheet that has been compiled using the here described procedure. The spreadsheet makes use of a macro for table 24E to make it easier to carry out the iteration process. The procedure for table 23E itself however is done completely on the spreadsheet so as to make it easy to follow the necessary steps. The spreadsheet is protected without a password. Please feel free to download and use the spreadsheet for your own purposes. If you want to use it in your own products then please include a reference to this website (mooringmarineconsultancy.wordpress.com) for it: LPG table 23e standalone The calculations done in these articles can all be carried out and verified using my iPhone app OilCalcs, which can be downloaded in the Appstore. In the next article we will discuss table 54E, which is used to calculate the temperature correction factor based on a density at 15°C and observed temperature.
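To make the flow of the table 23E search easier to see, here is a rough Python sketch of the bracketing loop described in steps 23/6 through 23/12. The ctl_24e() helper is a stand-in name for the full table 24E calculation (steps 24/4 to 24/13), which is not reproduced here; its placeholder body and the example call are illustrative assumptions only, not code from API MPMS 11.2.4 or from the spreadsheet attached to this post.

```python
def ctl_24e(gamma60, temp_F):
    """Placeholder for the table 24E correction factor CTL(gamma60, Tx).
    A real implementation follows steps 24/4 to 24/13 of API MPMS 11.2.4;
    this stub only mimics a factor that shrinks as temperature rises."""
    return 1.0 - 0.002 * (temp_F - 60.0)  # illustrative only

def table_23e(gamma_x, temp_F, g60_low, g60_high, tol=1e-9, max_iter=10):
    """Find the 60 deg F relative density whose table 24E correction reproduces
    the observed relative density gamma_x, by repeatedly narrowing a
    [low, high] bracket in the spirit of steps 23/6 to 23/11."""
    gx_low = g60_low * ctl_24e(g60_low, temp_F)
    gx_high = g60_high * ctl_24e(g60_high, temp_F)
    if not (gx_low <= gamma_x <= gx_high):
        return -1.0                          # flag: no answer within the bracket
    for _ in range(max_iter):
        # Interpolate a trial 60F density inside the bracket (step 23/7 idea).
        delta = (gamma_x - gx_low) / (gx_high - gx_low)
        delta = min(max(delta, 0.001), 0.999)
        g60_mid = g60_low + delta * (g60_high - g60_low)
        gx_mid = g60_mid * ctl_24e(g60_mid, temp_F)
        if abs(gx_mid - gamma_x) < tol:      # convergence test (step 23/10)
            return round(g60_mid, 4)         # final rounding (step 23/12)
        if gx_mid > gamma_x:                 # tighten the bracket (step 23/11)
            g60_high, gx_high = g60_mid, gx_mid
        else:
            g60_low, gx_low = g60_mid, gx_mid
    return -1.0                              # no convergence within 10 iterations

# Illustrative call only (placeholder CTL, not the worked example from the article):
print(table_23e(0.2457, 190.0, 0.21, 0.74))
```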
{"url":"http://mooringmarineconsultancy.wordpress.com/2013/08/31/temperature-correction-factors-for-the-volume-of-lpg-ngl-part-2/","timestamp":"2014-04-18T02:59:13Z","content_type":null,"content_length":"77612","record_id":"<urn:uuid:f6ef71bf-3111-48ae-93da-ffeff2f45f67>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
ATS 673, Lightning (3 credits) This course will provide an introduction to our present understanding of various aspects of lightning. After completing the course, you should be able to: 1. qualitatively and quantitatively discuss the complete lightning discharge, including nomenclature, characteristics, etc. 2. describe electrification of thunderstorms and the electrical properties of the atmosphere 3. explain how lightning varies on different spatial and temporal scales 4. compare and contrast the various types of lightning 5. generate basic models of various processes in a lightning flash 6. compare and contrast current methods of measuring lightning 7. relate lightning to other weather phenomena Other topics that are of particular interest to students will be considered. ATS 606, Data Analysis for Atmospheric Scientists (3 credits) This course will provide a theoretical and practical introduction to various data analysis methods commonly used by researchers in atmospheric science. After completing the course, you should be able 1. understand the theoretical underpinnings and practical applications of various statistical methods intrinsic to atmospheric science 2. quantify empirical data sets using numerical summary measures and probability theory 3. apply forecasting techniques to generate models to fit various data sets 4. quantify the validity of various models using appropriate parametric tests 5. use Monte Carlo methods to solve a variety of problems Other topics may be covered, as time allows. It is strongly recommended that you have taken ATS/ESS 509 prior to taking this class. IDL will be used in class to illustrate concepts, but experience in any scientific programming language is required (e.g.\ IDL, Mathematica, Maple, etc.). ATS/ESS 409/509, Applications of Computers in Meteorology (3 credits) This course will provide an introduction to programming. We use the Interactive Data Language (IDL) from Exelisvis (formerly ITT, formerly RSI), but we will occasionally discuss other languages. After completing the course, you should be able to: 1. use basic Linux commands 2. compare and contrast basic programming constructs, e.g., variables, arrays, structures 3. compare and contrast basic control statements, e.g., if/then, for loops, case statements 4. read in and write basic data files (ASCII, binary, netCDF, etc.) 5. efficiently program in IDL 6. harness the power of the IDL commands WHERE, HISTOGRAM, VALUE _LOCATE 7. create programs to analyze common atmospheric science data sets, including radar, lightning, and images Other topics may be covered, such as objects, widget-based programming, and version control, as time allows. Links (formatting in progress, pardon our progress!) Using Git for the Coyote Library Documentation for the PMB Library ATS/ESS 409/509 Notes
{"url":"http://www.nsstc.uah.edu/ats/bitzer/courses.html","timestamp":"2014-04-18T00:13:36Z","content_type":null,"content_length":"8965","record_id":"<urn:uuid:b7232908-c173-4b11-bcc5-2bcb16f4eb7b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Introductory Mathematical Analysis for Business, Economics and the Life and Social Sciences Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/bk-detail?isbn=9780132404228","timestamp":"2014-04-20T22:03:58Z","content_type":null,"content_length":"30417","record_id":"<urn:uuid:4a694a87-53bf-4772-9c7f-11ef505c1abc>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
probability | GeekDad | Wired.com
GeekDad Puzzle of the Week Answer: How to Bullseye a Womp Rat
• By Garth Sundem • 02.16.13 • 5:30 AM
Get ready for some serious puzzling. Basically this week's puzzle asked you to calculate the probability of success of Luke's shot at the Death Star exhaust port. First he had to survive the surface run. Then he had to survive long enough in the trench with Vader on his tail to get off a clean shot. Then he had to pull the trigger within the very short time that a photon torpedo would enter the exhaust port instead of simply exploding on the surface — i.e., once he was in position, Luke had to bullseye the womp rat (which sounds a bit like a euphemism you might use with your significant other when the kids are listening). In any case, Andy and independently Blaine and Felicia sent spectacular and correct answers to this important conundrum: From the time Luke found himself in the sky around the Death Star, he had a 0.0168% or a 1-in-5946 chance of success. Here is Andy's excellent explanation:
Okay, so let's break this down into three components:
P_s: the probability that Luke survives the surface run
P_t: the probability that he survives in the trench long enough to get a shot off (given that he survived the surface run)
P_h: the probability that he actually hits the exhaust port (given that he survived the trench run)
Then, the overall probability of success is simply P_s * P_t * P_h. Now let's figure out what the values for each of those components are.
P_s is easy, because it was given to us. P_s = 10% = 0.1
P_t is determined by an exponential decay function: P_t = P_0 * e^(-k*t)
P_0 = the probability of surviving until the start of the trench run = 1 (because P_t is already conditioned on surviving the surface run)
k = the decay constant = 1.15 (given)
t = time (in minutes) that Luke has to survive in the trench
Of course, now we need to calculate t: t = d / s
d = distance traveled (in km)
s = speed = 1050 km/h (given) = 17.5 km/min
Now, we need to calculate d: d = (1/8)C = (1/8)*2πr = (1/4)*πr
C = circumference of the midhemisphere trench (in km)
r = radius of the midhemisphere trench (in km)
As illustrated in the above picture, because the Death Star has a radius of 80 km, r is given by: r = sin(45°) * 80 km = 40*sqrt(2) km ≈ 56.569 km
Plugging that into the equation for d, we get: d ≈ (1/4)*π*56.569 km ≈ 44.429 km
Plugging that into the equation for t, we get: t ≈ 44.429 / 17.5 ≈ 2.539 min
Finally, plugging that into our original equation, we get: P_t ≈ 1 * e^(-1.15 * 2.539) ≈ 0.0540
So, assuming Luke makes it to the start of the trench, he has around a 5.4% chance of making it to the end.
Finally, let's figure out the likelihood of Luke's shot hitting the target: P_h = t_p / t_r
t_p = the amount of time the exhaust port is in the target zone (in seconds)
t_r = Luke's reaction time = 0.22 s (given)
We can calculate t_p using the following equation: t_p = l_p / s
l_p = length of the exhaust port = 2 m (given)
s = Luke's speed = 1050 km/h (given) = 1050000 m/h ≈ 291.667 m/s
Plugging those values into the equation for t_p gives us: t_p ≈ 2 / 291.667 ≈ 0.00686 s
Plugging that into the equation for P_h gives us: P_h ≈ 0.00686 / 0.22 ≈ 0.0312
So, assuming Luke survives long enough to get a shot off, he has a little better than 3% chance of hitting the port.
Putting this all together, the overall probability that Luke makes it to the trench, survives the trench run, and manages to hit the exhaust port (starting a chain reaction that should destroy the station), is given by: P ≈ 0.1 * 0.0540 * 0.0312 ≈ 0.000168 Luke has around 0.0168% chance of success, a little better than your chances of flipping 13 heads in a row with a fair coin. So, unlikely, but nowhere near winning-the-lottery-unlikely. The second part of the puzzle asked entrants to pontificate about the role of the Force in guiding Luke’s shot. Again, I defer to Andy: Now then, we have to consider what effect the influence of the Force would have on his chances of success. I would argue that the Force does not preordain that Luke should succeed. The Force does not care whether Luke succeeds or fails. It is simply an energy field that surrounds and permeates all living things. However, it does significantly improve Luke’s awareness of his surroundings and his reaction time, and therefore his likelihood of success in each of the three stages listed above. As seen in the prequels, Jedis have little difficulty surviving barrages of blaster fire in the midst of heated battles. They are able to dodge, deflect, and even redirect incoming shots to hit their opponents. It is difficult to estimate the reaction time necessary to accomplish these feats, but according to the analysis on this page, blaster shots travel around 78 mph, which is why even non-Jedis are often able to dodge them. 78 mph is a reasonable speed for a major league curveball, which gives us a good frame of reference. A reasonably skilled professional baseball player would have a chance of hitting a curveball, but it is not a certainty, and no normal human could hit a curveball into a human sized target while simultaneously dodging 10 or so other curveballs. Let’s estimate that in order to regularly be able to pull off those kinds of stunts, a Jedi would need to be able to react around 20 times faster than a normal human. (This clearly involves a bit of hand waving, but say 10x for the number of incoming shots involved, with an additional 2x for the difficulty of deflecting the shot back into a foe.) Of course, Luke is not a full Jedi – so let’s say that the Force only makes him 10x faster/more aware of his surroundings than an average human. A 10x reaction speed translates directly into a 10x likelihood of firing his shot at the right time, which raises P_h to around 31.2%. Let’s say it also reduces his chance of being hit on the surface run by 10x, so instead of a 90% chance of failure, he has a 9% chance, or in other words, a 91% chance of success. The trench run is a little trickier, because there is so much less room to maneuver, so let’s say his chance of failure is only cut by 5x, from around 95% to around 19%, giving him an 81% chance of success (ignoring the Captain Solo effect). Putting this all together, with the influence of the Force, Luke has around a 22.7% chance of success, around 1350x what his chances are without the Force. Not bad for a hokey religion! And the winner of this week’s puzzle and the $50 ThinkGeek gift certificate is, of course, Andy. Congratulations! Don’t forget to tune in Monday for another installment of POTW! GeekDad Puzzle of the Week Solution: Ping Pong Probability • By Judd Schorr • 01.17.13 • 6:00 AM This past week’s puzzle, as presented: Ping pong (or table tennis) is a game of both odds and luck. 
Most games I have seen have been rather one sided — it is really rare that two players are really at the same level. Pretty much every game I see at our ping pong (that is, table tennis) table at work is rather unbalanced. Even if two opponents had the same chance at winning any given point, differences in their excitement levels across multiple points can impact the next points — people can be “on a roll” or “choke” and temporarily increase or decrease their change of winning the next point. This week’s puzzle is about two players, one who is completely constant (let’s call him Troy), and one who occasionally gets “on a roll” and “chokes” (let’s call her Katharine.) For purposes of this puzzle, on any given day Katharine will be prone to either being “on a roll” or “choking,” but not both. Here is how we will define each: Choking – If Katharine wins two points in a row at her “standard” point probability, her odds for winning the next point will decrease by 25 percentage points. Furthermore, her odds will continue to decrease by an additional 25 percentage points for each additional consecutive point. For example, if she has a standard 60% chance of winning any given point, after winning two points in a row her odds for the next point will drop to 35%. If she wins that third point, her odds for the fourth point in a row fall to 10%. Clearly, it is not possible for her to win five consecutive points on days when she is prone to choking. After losing the point by choking, Katharine immediately returns to her standard odds for the very next point. On a Roll – If Katharine is prone to being “on a roll” for a given day, winning two points in a row improves her odds for winning the next point by 25 percentage points. She will continue to keep that 25 percentage point advantage until she loses a point or the game ends — at which point she reverts back to her “standard” chance of winning the next point. For example, if Katharine has a standard 55% change of winning any given point, after winning two points in a row her odds will increase to 80% until she either loses a point or the game ends, and which point she immediate returns to her standard odds for the very next point. Clearly if Katharine’s standard odds are even with Troy’s (i.e., 50% / 50%), Katharine will tend to win over time on days that she is on a roll, and will tend to lose over time on days that she is prone to choking. However, looking back through the years at their game records, they both won and lost the exact number of games. If their records are indeed accurate, what is Katharine’s “standard” point probability for days when she is prone to being “on a roll?” Additionally, what is Katharine’s “standard” point probability for days when she is prone to choking? NOTE: Troy is completely constant in his odds for winning a point throughout any given day, but changes to be the exact counterpart of Katharine’s for each day. A standard game of ping pong (table tennis) is 21 points, with the winner having to win by two points. This week’s puzzle was best and most often solved using a straightforward simulation; write some code to “play” the game using an initial set of odds for Katharine for either he “on a roll” or “choke” day, and run it a few thousand times to see what proportion of the games she won. 
Then simply re-run it with a different set of “on a roll” or “choke” odds to dial in to the 50/50 split While there were not many responses to this past week’s puzzle (note to self — don’t get carried away with puzzles solved by complex simulations!), congratulations to Katherine W. (note the different spelling!), who submitted a correct solution and earned herself a $50 ThinkGeek Gift Certificate. Her solution listed the “choke” and “on a roll” rates of 56.7% and 43.6%, respectively. Many thanks to everyone that submitted an entry, and good luck with this week’s puzzle! GeekDad Puzzle of the Week Answer: Present Game • By Garth Sundem • 12.22.12 • 6:00 AM It turns out my kids are experts at determining expected value: the size, weight and number of presents under the tree have now been precisely determined and weighed against each other to determine fairness, which, it also turns out, is an extremely complex concept and one into whose waters I highly recommend parents do not wade. In any case, here was this week’s puzzle: Imagine a row of seven presents. Their values are $1, $2, $3, $4, $5, $6 and $40. You don’t know what’s inside and so don’t know which present matches each value. It costs $7 to pick a present at random. Is it a good bet? Now imagine you remove a present at random from the line-up. Is it now a good deal to spend $7 on the blind bet? There are a couple correct-ish answers, depending on how you define “good bet”. First, there’s a 6-in-7 chance that you will lose money — so if you can only play once, you could make a semi-reasonable case that it’s not a good bet to play at all. But the more widely held definition of “good bet” is a situation in which you expect to win more than you lose. In our first case, the value of the presents added together is $61 and 61/7 = $8.71. So your $7 buys an expected $8.71 and it’s a good bet (really, that’s the answer). Now the second part gets trickier. What happens when you pull a present at random? Is it still a good bet? There are two ways to think of it, and here are both: 1. By pulling a present at random, you remove an expected $8.71 from the system. So now the value remaining is $61-$8.71=$52.29 with six presents left. 52.29/6=$8.71. Dang! You still get $8.71 for your $7 bet! 2. Imagine there’s a 1/7 chance you pulled the $1 present (yay!) and a 1/7 chance you pulled the $40 present (d’oh!) and, in fact, a 1/7 chance you pulled any of the others, as well. In the best case, you get 60/6 and in the worst case you get 21/6 and all cases are equally likely, so (60/6 + 59/6 + 58/6 + 57/6 + 56/6 + 55/6 + 21/6) / 7 = 61/7 ≈ $8.71. Still a good bet! The winner, drawn from the many correct entrants, is frequent entrant Andy! Congrats and thanks for all your correct answers over many, many weeks! Andy is the proud winner of a ThinkGeek $50 gift certificate. Sorry, the discount code we had for ThinkGeek has expired; we’ll have a new one soon! And remember: only 126 days until Arbor Day! GeekDad Puzzle of the Week: How to Bullseye a Womp Rat • By Garth Sundem • 02.11.13 • 5:00 AM The Death Star has been all the rage lately, from the White House petition to the recent Kickstarter. So I thought it would be worth taking a look at the fate of the DS-1 Orbital Battle Station. Specifically, what was probability of success of Luke’s shot? First, according to Wookieepedia, Death Star I was a sphere of diameter 160k. 
The rebel plan in the Battle of Yavin was to fly DS-1′s midhemisphere trench, which ran around the Star exactly half the distance between the equatorial trench and the command sector north, which sat like an evil Santa Claus at DS-1′s north pole. Judging by the surface fail rate, the chance of surviving to even make a trench run was about 10 percent. Then once in the trench, the chance of surviving to pull the trigger decayed exponentially as a function of time in the presence of Vader. This was the mistake of the first wave of Gold Squadron Y-Wings: they failed to appreciate the precipitous decline in their chances of success due to the necessity of their slower but tougher Y-Wings simply taking too long in the trench. To a lesser degree, this was also the mistake of Red Leader flanked by Reds 10 and 12, who considered forward fire and not being caught from behind as the primary danger during their trench run. This, of course, left Luke, Biggs and Wedge. Remember, Luke had only a 1/10 chance of making it to the trench and then his chance of firing further decayed over time. Imagine he had to fly 1/8 of the distance of the midhemisphere trench at the T-65 X-Wing’s stated maximum speed of 1050km/h. And imagine that time is measured in minutes and the decay constant due to the presence of Vader is 1.15. Now you have the probability of Luke surviving to take a shot. But what is the probability of that shot succeeding? Recall that Luke used to bullseye womp rats in his T-16 back home and that a womp rate is not much bigger than 2 meters. For our purposes, we’ll call the area of a womp rat and thus that of the exhaust port opening that of a circle with diameter 2 meters. Now Luke is speeding along at 1050km/h and must pull the trigger at exactly the right time to land a photon torpedo anywhere along the length of the exhaust port opening (disregarding horizontal skew due to the power of the targeting computer). In what time window must he pull the trigger? The average human reaction time is 0.2-0.25 seconds, which we’ll approximate at 0.22 seconds. Compare this 0.22s to the time window of Luke’s possible, successful trigger-pulling to discover the probability that he pulls the trigger within this window. Put it all together: the chance Luke survives to make a trench run, the probability he survives long enough in the trench to pull the trigger, and then the chance that he pulls the trigger in the short time in which a photon torpedo would enter the exhaust port and cause a chain reaction destroying the Death Star. What is the probability of Luke’s successful shot? Of course, this disregards the influence of the Force. The Force’s effect on this probability depends very much on your interpretation of its powers. Can the Force preordain that Luke succeed? And then can the Force preordain that Luke choose to use the Force — in which case nothing from the moment the galaxy formed long, long ago could anything have any resemblance to probability? Or does the Force simply help to guide Luke’s shot? There is a correct answer to the Force-less proposition of this problem. Please send it to Geekdad Central by Friday afternoon. And then I will happily enter your name a second time into this week’s drawing for a $50 ThinkGeek gift certificate if you can offer a well-reasoned interpretation of adjusted probability based on the influence of the Force. Note that well-reasoned and harebrained are not mutually exclusive. 
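For the numerically inclined, here is a small Python sketch of the calculation the puzzle asks for, using only the figures stated in the post (10% surface survival, decay constant 1.15, one eighth of the midhemisphere trench at 1050 km/h, a 2 m port, and a 0.22 s reaction time). The code is an editorial illustration, not part of the original post.

```python
import math

surface_survival = 0.10                        # given: chance of reaching the trench
radius_mid = math.sin(math.radians(45)) * 80   # midhemisphere trench radius, km
trench_km = (1 / 8) * 2 * math.pi * radius_mid
speed_kmh = 1050.0
minutes_in_trench = trench_km / (speed_kmh / 60)
trench_survival = math.exp(-1.15 * minutes_in_trench)

port_window_s = 2.0 / (speed_kmh * 1000 / 3600)  # time the 2 m port is under the crosshair
shot_prob = port_window_s / 0.22                 # human reaction time ~0.22 s

total = surface_survival * trench_survival * shot_prob
print(f"{total:.6f}  (~1 in {1 / total:.0f})")   # roughly 0.00017, about 1 in 6000
```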
GeekDad Puzzle of the Week: Ping Pong Probability • By Judd Schorr • 01.08.13 • 5:00 AM Ping pong (or table tennis) is a game of both odds and luck. Most games I have seen have been rather one sided — it is really rare that two players are really at the same level. Pretty much every game I see at our ping pong (that is, table tennis) table at work is rather unbalanced. Even if two opponents had the same chance at winning any given point, differences in their excitement levels across multiple points can impact the next points — people can be “on a roll” or “choke” and temporarily increase or decrease their change of winning the next point. This week’s puzzle is about two players, one who is completely constant (let’s call him Troy), and one who occasionally gets “on a roll” and “chokes” (let’s call her Katharine.) For purposes of this puzzle, on any given day Katharine will be prone to either being “on a roll” or “choking,” but not both. Continue Reading “GeekDad Puzzle of the Week: Ping Pong Probability” » GeekDad Puzzle of the Week: The Present Game • By Garth Sundem • 12.17.12 • 4:45 AM I’m the world’s worst gift buyer. And so rather taking responsibility for gifting my wife a waffle iron, Leif an 8-nozzle sprinkler head, and my 4yo girl a soldering iron, it would be much, much better if I can remain in the dark as well, blaming gift fail and success on I-don’t-know-what’s-in-there-either surprise. Like the following puzzle. Imagine a row of seven presents. Their values are $1, $2, $3, $4, $5, $6 and $40. You don’t know what’s inside and so don’t know which present matches each value. It costs $7 to pick a present at random. Is it a good bet? Now imagine you remove a present at random from the line-up. Is it now a good deal to spend $7 on the blind bet? Submit your answers to GeekDad Central by Friday afternoon for you chance at a $50 ThinkGeek gift certificate — perhaps dearly needed at this time of year? GeekDad Puzzle of the Week Solution: Back to School Bugs • By Judd Schorr • 08.20.12 • 5:30 AM Everyone has had that dream that wakes you up in the middle of the night. You know the one, where you suddenly remember that you are not only still enrolled in school, but it is also finals time, and you have to take a test on a book that you have never read? The one where something that you should have done is unfinished, or simply slipped through the cracks? As I sat down to brainstorm new puzzles for this week, it dawned upon me that the winning entry for the previous week (Back to School Bugs) was never revealed. I would like to blame this on actually being sick from a bug, but as the solution shows, it is definitely not the case that with the parameters given that I could possibly be sick for quite that long a period. Here is the puzzle as originally posted: After some 104 days of summer vacation, Max and Nora are back in school. One of the big things that my wife Allison and I are concerned about is the fact that most schools are “breeder reactors” for coughs, sniffles, and stomach bugs. Case in point: Max’s third day of school this year was a sick day. As I was putting Max back to bed on Thursday evening, the question hit me: just how “contagious” would a stomach bug have to be to impact a majority of students in Max’s class? For purposes of this puzzle, Max’s class has 18 students, and they sit at six tables of three students each. Each table is a “work unit,” where the kids share schoolwork, ideas, and basic biologicals. There is no table-to-table conversation or sharing. 
Kids sit at random tables for the morning session, and then pick brand new tables in the afternoon. Any given child has the same odds of picking something up from the outside and bringing it to the classroom, and each kid at a table has the same chance of picking it up as any other child at that table. Kids are “contagious” for 2 days before they actually succumb to the bug, and are out just one day when it hits. Absent students’ seats are randomly assigned, and both completely empty tables and having single students at their own table is possible. If we pick a child-to-child transmission rate of 30% and give any individual child a 10% chance of picking something up from the outside and bringing it into the classroom, what are the odds that more than half of the class will be out during a given day? If it can’t happen, how would these rates need to be change to make it happen within a reasonable timeframe? The winner of this (past) week’s puzzle solved it the same way that I did: a simulation. After writing a small bit of code to simulate 18 students and follow them through a few weeks of school, I received answers that were really quite close to those sent in by several puzzlers. This week’s winner is Blaine, and he also included several graphs describing the situation in Max and Nora’s class over time, such as the chart depicting the odds of 1/2 of the class being sick over time, shown below. With the parameters originally in the puzzle, it turned out to be the case that there was never really a time when there were good odds that more than half of the class would be ill. In fact, my instance of the code required me to ramp up both the transmission rate and the infection rates well over 50% to have this happen. Thank goodness that these types of bugs are rare! Once again, congratulations to Blaine, the winner of last week’s $50 ThinkGeek gift certificate. For those of you under the weather from a bug of your own, sitting home shopping at ThinkGeek, please feel free to use the discount code GEEKDAD81AD for $10 off an order of $50 or more. GeekDad Puzzle of the Week: Back to School Bugs • By Judd Schorr • 08.07.12 • 5:00 AM After some 104 days of summer vacation, Max and Nora are back in school. One of the big things that my wife Allison and I are concerned about is the fact that most schools are “breeder reactors” for coughs, sniffles, and stomach bugs. Case in point: Max’s third day of school this year was a sick day. As I was putting Max back to bed on Thursday evening, the question hit me: just how “contagious” would a stomach bug have to be to impact a majority of students in Max’s class? For purposes of this puzzle, Max’s class has 18 students, and they sit at six tables of three students each. Each table is a “work unit,” where the kids share schoolwork, ideas, and basic biologicals. There is no table-to-table conversation or sharing. Kids sit at random tables for the morning session, and then pick brand new tables in the afternoon. Any given child has the same odds of picking something up from the outside and bringing it to the classroom, and each kid at a table has the same chance of picking it up as any other child at that table. Kids are “contagious” for 2 days before they actually succumb to the bug, and are out just one day when it hits. Absent students’ seats are randomly assigned, and both completely empty tables and having single students at their own table is possible. 
If we pick a child-to-child transmission rate of 30% and give any individual child a 10% chance of picking something up from the outside and bringing it into the classroom, what are the odds that more than half of the class will be out during a given day? If it can’t happen, how would these rates need to be change to make it happen within a reasonable timeframe? As always, please submit your answer to GeekDad Puzzle Central by Friday for your chance at a $50 ThinkGeek gift certificate. GeekDad Puzzle of the Week Answer: Dog Siblings • By Garth Sundem • 07.22.12 • 5:00 AM I discovered yesterday that Labradors don’t coexist peacefully with hook-wielding four-year-olds, and so I’m crossing the fishing beach off the list of 200 possible dog outings in Boulder, Colorado. In fact, I’m crossing it off with gusto. At least for me, these things that sound like good ideas sometimes require revision in hindsight… In any case, this week’s puzzle was a twist on the classic Birthday Problem. Here ’twas: Imagine that each of six dogs goes out somewhere an average of once every three days. And imagine that between trails and parks and fields and out-and-abouts and algae-choked scum holes there are 200 places a dog can go in and around Boulder, all (let’s say…) with equal probability. If it’s been exactly two years — 730 days — since Selkie’s owner picked her up from the litter, what are the chances that during this time Selkie would NOT see one of her five doggie siblings? Though it seems kinda tractable, this turns out to be way too horribly hard for a GeekDad puzzle. Oops. Like taking labradors swimming at the fishing beach, in hindsight I think I’ll be revising the involvement needed to solve these puzzles… Continue Reading “GeekDad Puzzle of the Week Answer: Dog Siblings” » GeekDad Puzzle of the Week: Dog Siblings • By Garth Sundem • 07.16.12 • 5:00 AM Every once in a while you get a puzzle handed to you. This week I was at the scum hole the city of Boulder, Colorado, calls the dog swimming “beach,” and a guy with a black lab said that his dog, Selkie, has five brothers and sisters in town. “But I’ve never run into one of them,” he said. “I wonder what are the chances of that?” Imagine that each of the six dogs goes out somewhere an average of once every three days. And imagine that between trails and parks and fields and out-and-abouts and algae-choked scum holes there are 200 places a dog can go in and around Boulder, all (let’s say…) with equal probability. If it’s been exactly two years — 730 days — since Selkie’s owner picked her up from the litter, what are the chances that during this time Selkie would NOT see a doggie sibling? For extra credit and a second entry into this week’s drawing for the $50 ThinkGeek gift certificate, what are the chances over the same time that any sibling will meet any other sibling? (And if you need a hint, try Googling “birthday problem.”) Submit your answer to Geekdad Puzzle Central by Friday for your chance at a the $50 ThinkGeek gift certificate!
{"url":"http://archive.wired.com/geekdad/tag/probability/","timestamp":"2014-04-21T08:37:54Z","content_type":null,"content_length":"180475","record_id":"<urn:uuid:fda16e95-56a0-43ce-96ca-2c7bdbaf59ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Cost Per Mile To Drive | Cost To Operate A Car, Truck, Vehicle Escalating gas prices in recent years have compelled American car owners to reconsider how they use their vehicles. For every driver who continues to log incredible miles on a daily basis, there is now one more person who rides a bike to work each day. Most Americans fall somewhere in-between these two extremes, and most of these people have found ways to reduce the amount of driving they do in the course of a day, week, month and year. Many car drivers use the miles-per-gallon averages of their vehicles in order to maximize the number of miles they get out of each tank of gas, and the result is hundreds of dollars saved. But what happens when automobiles no longer run on fuel? How will frugal drivers calculate mileage costs once the electric car dominates the road? What is your cost per mile to operate a car, truck or other vehicle? With electricity being measured in terms of kilowatts, the new cost per mile rating must reflect this. Therefore, when consumers go to calculate the cost per mile to drive a particular vehicle, they will do so according to the amount of miles per kilowatt that vehicle averages per hour. The mathematics will essentially remain the same as they are with the current miles per gallon ratings, but with a quantity of electricity supplanting a given amount of gasoline. The days of obsessing over MPG ratings will soon be gone, yet the automobile won’t be disappearing with it. Instead, CPM ratings will rule the minds of frugal drivers who search for ways to reduce transportation costs.
{"url":"http://costpermiletodrive.com/","timestamp":"2014-04-16T07:33:49Z","content_type":null,"content_length":"5874","record_id":"<urn:uuid:280ad26d-938a-4f0d-82d6-e691ab2a58ed>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
A CoD-based stationary control policy for intervening in large gene regulatory networks

One of the most important goals of the mathematical modeling of gene regulatory networks is to alter their behavior toward desirable phenotypes. Therapeutic techniques are derived for intervention in terms of stationary control policies. In large networks, it becomes computationally burdensome to derive an optimal control policy. To overcome this problem, greedy intervention approaches based on the concept of the Mean First Passage Time or the steady-state probability mass of the network states were previously proposed. Another possible approach is to use reduction mappings to compress the network and develop control policies on its reduced version. However, such mappings lead to loss of information and require an induction step when designing the control policy for the original network.

In this paper, we propose a novel solution, CoD-CP, for designing intervention policies for large Boolean networks. The new method utilizes the Coefficient of Determination (CoD) and the Steady-State Distribution (SSD) of the model. The main advantage of CoD-CP in comparison with the previously proposed methods is that it does not require any compression of the original model, and thus can be directly designed on large networks. The simulation studies on small synthetic networks show that CoD-CP performs comparably to previously proposed greedy policies that were induced from the compressed versions of the networks. Furthermore, on a large 17-gene gastrointestinal cancer network, CoD-CP outperforms the other two available greedy techniques, which is precisely the kind of case for which CoD-CP has been developed. Finally, our experiments show that CoD-CP is robust with respect to the attractor structure of the model.

The newly proposed CoD-CP provides an attractive alternative for intervening in large networks, where the other available greedy methods require size reduction of the network and an extra induction step before designing a control policy.

A key purpose of modeling gene regulation via gene regulatory networks (GRNs) is to derive strategies to shift long-run cell behavior towards desirable phenotypes. To date, the majority of the research regarding intervention in GRNs has been carried out in the context of probabilistic Boolean networks (PBNs) [1]. Assuming random gene perturbation in a PBN, the associated Markov chain is ergodic, and thus it possesses a steady-state distribution (SSD), and (from a theoretical standpoint) one can always change the long-run behavior using an optimal control policy derived via dynamic programming [2,3]. In practice, however, the computational requirements of dynamic programming limit this approach to small networks [4,5]. As an alternative to such optimal intervention, greedy control approaches using mean-first-passage time (MFPT-CP algorithm) or the steady-state distribution directly (SSD-CP algorithm) have been proposed (CP denoting control policy) [6,7]; nonetheless, these algorithms have their own computational issues owing to their need to use the state transition matrix (STM) of the Markov chain. To overcome the computational problems associated with the design of control policies for larger PBNs, previous studies have proposed reduction mappings that either delete genes [8] or states [9]. Deletion of network components compresses large networks, but at the cost of information loss. Furthermore, reduction mappings themselves can be computationally demanding [8,9].
The control approach taken in this paper circumvents many of the computational impediments of previous methods by basing its intervention strategy directly on inter-predictability among genes. Referring to a gene that characterizes a particular phenotype as a Target (T) gene and a gene used to alter the long-run behavior of the network by controlling the expression of T as a Control (C) gene, the method proposed herein relies on the predictive power of a small group of genes, which includes the control gene, and designs a stationary control policy that alters the steady-state distribution of the model. The algorithm is designed for the specific class of networks where there is a path from the control to the target gene – an assumption having a natural interpretation in terms of the biochemical regulatory pathways present in cells. Our method simplifies the procedure of designing the stationary control policy and eliminates the need to have a complete knowledge about the STM. Most importantly, the new algorithm can be used to design stationary control policy directly on large networks without deleting any genes/states. It only requires knowledge about the SSD of the network which can be estimated without inferring the STM. The coefficient of determination (CoD) is used for measuring the power of gene interactions [10]. Thus, our new algorithm is optimized for and performs especially well on network models that are inferred from data using CoD-based approaches, e.g. the well-known seed-growing algorithm [11]. The proposed algorithm, called CoD-CP because the CoD is the main tool, uses the marginal probabilities of the individual genes obtained from the steady-state distribution of the network to calculate the CoDs. The most important advantage of the proposed CoD-CP is that it can be designed on networks with many genes, and without any compression of the model. All of the previously proposed methods for working with large GRNs, e.g. CoD-Reduce[8] or state reduction [9], require ‘deletion’ of network components to achieve a compressed model, which allows for the design of the control policy. An induction step is then required in order to induce those control policies back to the original networks. In this paper, we propose a new approach, which designs control policies directly on the original network and requires neither reduction/compression nor induction. We performed a series of simulation studies to validate CoD-CP performance. Our experiments show that in small networks, where it is possible to derive the currently available greedy MFPT-CP [6] and SSD-CP [7] policies, CoD-CP achieves a similar performance. Most importantly, when the size of the network is large and MFPT-CP or SSD-CP cannot be designed directly on the original model, CoD-CP is easily constructed and applied to the network without any reduction mappings and induction of the control policy from the reduced network back to the original model. Section describes our simulations results. When the network is large, a reduction step is needed before designing the MFPT-CP or SSD-CP. In these cases, CoD-CP can be designed directly on the large networks and performs better than the induced MFPT-CP and SSD-CP on average for networks with singleton attractors only or models where cyclic attractors are allowed. We examined CoD-CP performance for two different perturbation probabilities and the results show consistent patterns. 
Furthermore, we examined the performance of the three algorithms on a 17-gene gastrointestinal cancer network derived from microarray data. CoD-CP designed on that model network outperforms the stationary MFPT-CP and SSD-CP policies induced from the reduced versions of the 17-gene model. Thus, our new approach provides an attractive alternative to the methods that require network reduction and an extra induction step before designing a control policy.

Boolean networks

A Boolean network with perturbation p, BN[p] = (V, f), on n genes is defined by a set of nodes V = {x[1], …, x[n]} and a vector of Boolean functions f = [f^1, …, f^n]. The variable x[i] ∈ {0,1} represents the expression level of gene i, with 1 representing high and 0 representing low expression [12]. f represents the regulatory rules between genes. At every time step, the value of x[i] is predicted by the values of a set, W[i], of genes at the previous time step, based on the regulatory function f^i. W[i] = {x[i[1]], …, x[i[k[i]]]} is called the predictor set and the function f^i is called the predictor function of x[i]. A state of the BN[p] is a vector s = (x[1], …, x[n]) ∈ {0, 1}^n, and the state space of the BN[p] is the collection S of all possible network states. The perturbation probability p ∈ (0,1] models random gene mutations, i.e. at each time point there is a probability p of any gene changing its value uniformly randomly. The underlying model of a BN[p] is a finite Markov chain and its dynamics are completely described by its 2^n × 2^n state transition matrix P = [p(s[i], s[j])], where p(s[i], s[j]) is the probability of the chain undergoing the transition from the state s[i] to the state s[j]. The perturbation probability p makes the chain ergodic and therefore it possesses a steady-state probability distribution π which satisfies [13]:

π^T P = π^T,  Σ[s ∈ S] π(s) = 1.   (1)

Coefficient of determination (CoD)

The coefficient of determination (CoD) measures how a set of random variables improves the prediction of a target variable, relative to the best prediction in the absence of any conditioning observation [10]. Let X = (X[1], X[2], …, X[n]) be a vector of binary predictor variables, Y a binary target variable, and f a Boolean function such that f(X) predicts Y. The mean-squared error (MSE) of f(X) as a predictor of Y is the expected squared difference, E[|f(X) – Y|^2]. Let ε[opt](Y, X) be the minimum MSE among all predictor functions f(X) for Y and ε[0](Y) be the error of the best estimate of Y without any predictors. The CoD is defined as

CoD[X](Y) = (ε[0](Y) – ε[opt](Y, X)) / ε[0](Y).

Letting x[1], x[2], …, x[2^n] denote the 2^n possible values for X, running from (0, 0, …, 0) to (1, 1, …, 1), the relevant quantities are given by [10]

ε[0](Y) = min{P(Y = 0), P(Y = 1)},
ε[opt](Y, X) = Σ[i] P(X = x[i]) min{P(Y = 0 | X = x[i]), P(Y = 1 | X = x[i])},

since in the binary case the mean-squared error of the best predictor reduces to its error probability. The CoD can be used to measure the strength of the connection between a target gene and its predictors and has been used since the early days of DNA microarray analysis to characterize the nonlinear multivariate interactions between genes [14]. More recently, the CoD was used to characterize canalizing genes [15] and contextual genomic regulation [16]. We have restricted ourselves to the Boolean case, thereby arriving at the preceding representations of ε[opt](Y, X) and ε[0](Y); however, the basic definition of CoD[X](Y) is not so restricted [10].
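As a concrete illustration of the CoD just defined, the short Python sketch below computes ε[0], ε[opt] and the CoD from an empirical joint distribution of a binary target and its candidate predictors. The sketch and its variable names are ours, not part of the original paper; it simply implements the binary-case formulas above.

```python
from collections import defaultdict

def cod(samples):
    """Coefficient of determination for binary predictors.

    samples: list of (x, y) pairs, where x is a tuple of 0/1 predictor
    values and y is the 0/1 target value.
    """
    n = len(samples)
    p_y1 = sum(y for _, y in samples) / n
    eps0 = min(p_y1, 1 - p_y1)          # error of the best constant predictor

    # Group samples by predictor pattern; the optimal predictor is the
    # majority vote within each pattern.
    by_x = defaultdict(list)
    for x, y in samples:
        by_x[x].append(y)
    eps_opt = 0.0
    for x, ys in by_x.items():
        p_x = len(ys) / n
        p_y1_given_x = sum(ys) / len(ys)
        eps_opt += p_x * min(p_y1_given_x, 1 - p_y1_given_x)

    return 0.0 if eps0 == 0 else (eps0 - eps_opt) / eps0

# Toy example: Y is (almost) the AND of the two predictors.
data = [((0, 0), 0)] * 40 + [((0, 1), 0)] * 25 + [((1, 0), 0)] * 25 \
     + [((1, 1), 1)] * 9 + [((1, 1), 0)] * 1
print(f"CoD = {cod(data):.3f}")   # close to 1, since X predicts Y well
```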
MFPT control policy (MFPT-CP)

Optimal intervention is usually formulated as an optimal stochastic control problem [4]. We focus on intervention via a single control gene c, and stationary control policies µ[c] : S → {0,1} based on c. The values 0/1 are interpreted as off/on for the application of the control: 1 meaning that the current value of c is flipped, and 0 meaning that no control is applied. The mean-first-passage-time (MFPT) policy is based on the comparison between the MFPT of a state s and its flipped (with respect to c) state [6]. When considering intervention, the state space S can be partitioned into desirable (D) and undesirable (U) states according to the expression values of a given target set T of genes. For simplicity, we assume T = {t}, the target gene t is the leftmost gene in the states' binary representations, i.e. x[1] = t, s = (t, x[2], …, x[n]), and the desirable states correspond to the value t = 0. With these assumptions, the state transition matrix P of the network can be written in block form as

P = [ P[DD]  P[DU] ; P[UD]  P[UU] ],

where P[DD] collects the transitions between desirable states, P[DU] the transitions from desirable to undesirable states, and so on. Using this representation, one can compute the mean-first-passage-time required for a state s to reach the boundary between desirable and undesirable states. Computation of these average times is performed in the time scale used for the state transitions of the network. If one uses the states of the network to index the components of the vectors in the 2^n-dimensional Euclidean space ℝ^2^n, then one can form the vectors K[U] and K[D] that contain the mean-first-passage-times needed for the states in D and U to reach the undesirable and the desirable states, respectively. For example, the co-ordinate K[D](s) of K[D] gives the mean-first-passage-time for the undesirable state s to reach the set D of desirable states. The two vectors K[U] and K[D] are of dimension 2^(n–1), and, according to a well-known result from the theory of Markov chains [13], are given as solutions to the following system of linear equations:

(I – P[DD]) K[U] = e,  (I – P[UU]) K[D] = e,

where e denotes the vector of dimension 2^(n–1) with all of its co-ordinates equal to 1. To understand the intuition behind the MFPT-CP algorithm it is important to notice that, because the control gene c is different from the target gene, every state s belongs to the same class of states, D or U, as its flipped state s̃. The policy compares the mean-first-passage-times of s and s̃: roughly, control is applied when the flipped state is expected to reach the desirable states faster, or to reach the undesirable states more slowly, than the original state, and these differences are compared to the value of a threshold γ > 0, which is related to the cost of applying control. For example, γ is set to a larger value when the ratio of the cost of control to the cost of the undesirable states is higher, the intent being to apply the control less frequently [6]. The MFPT concept could be used in two different ways to design the intervention strategy. The first approach is called "model-dependent" and needs the state transition matrix of the Markov chain. The time-course measurements can be used to estimate the transition probabilities for all states. Then the STM is used to find the K[U] and K[D] vectors to design the control policy. In the second approach, called "model-free," the MFPTs are directly estimated from the time-course data and the inference of the STM is skipped. In this paper we focus on the model-dependent MFPT-CP.
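For readers who want to experiment, the following sketch (ours, not the implementation of [6]) computes K[U] and K[D] for a small network by solving the two linear systems above with NumPy, given the state transition matrix and a boolean mask marking the desirable states.

```python
import numpy as np

def mfpt_vectors(P, desirable):
    """Mean-first-passage-time vectors for a D/U partition of the state space.

    P         : (N, N) state transition matrix of the ergodic chain
    desirable : boolean array of length N, True for states in D
    Returns (K_U, K_D): MFPT from each D-state to U, and from each U-state to D.
    """
    D = np.where(desirable)[0]
    U = np.where(~desirable)[0]
    # First-step analysis: (I - P_DD) K_U = 1 and (I - P_UU) K_D = 1.
    K_U = np.linalg.solve(np.eye(len(D)) - P[np.ix_(D, D)], np.ones(len(D)))
    K_D = np.linalg.solve(np.eye(len(U)) - P[np.ix_(U, U)], np.ones(len(U)))
    return K_U, K_D

# Tiny 4-state example; the first two states are taken as desirable.
P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.3, 0.5, 0.1, 0.1],
              [0.1, 0.1, 0.5, 0.3],
              [0.1, 0.1, 0.2, 0.6]])
desirable = np.array([True, True, False, False])
K_U, K_D = mfpt_vectors(P, desirable)
print("K_U =", K_U, " K_D =", K_D)
```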
SSD control policy (SSD-CP)

The steady-state-distribution (SSD-CP) policy [7] uses the steady-state distribution of a perturbed Markov chain, given in [17], to quantify the shift in the steady-state mass after applying possible controls. A perturbation (change) in the logic defining the Boolean network changes the original transition probability matrix P and steady-state distribution π into perturbed versions P̃ and π̃. Following [17], the fundamental matrix Z is used to represent π̃, where Z = [I – P + eπ^T]^(–1), T denotes transpose and e is a column vector whose components are all unity [18]. For a rank-one perturbation, the perturbed Markov chain has the transition matrix P̃ = P + ab^T, where a, b are two arbitrary vectors satisfying b^T e = 0, and ab^T represents a rank-one perturbation to the original Markov chain P. In this case the perturbed steady-state distribution can be written as

π̃^T = π^T + (π^T a / (1 – b^T Z a)) b^T Z.

In the special case where the transition mechanisms before and after perturbation differ only in one state, say state k, we have a = e(k) and

π̃^T = π^T + (π(k) / (1 – β(k))) β^T,

where β^T = b^T Z and e(k) is the elementary vector with a 1 in the kth position and 0s elsewhere [17-19]. To define the SSD-CP policy, let s̃ denote the flipped state (with respect to the control gene c) corresponding to state s (as with MFPT-CP). Let π[U] be the original steady-state mass of the undesirable states and let π̃[U](s) and π̃[U](s̃) be the undesirable steady-state masses of the chains perturbed at s and at s̃, respectively. The SSD-CP policy is defined on pairs of states s and s̃: if neither π̃[U](s) nor π̃[U](s̃) is smaller than π[U], then control is applied to neither state; otherwise, control is applied so that the network follows the perturbation yielding the smaller undesirable steady-state mass [7].

Two step design of control policy: reduction followed by induction

The derivation of the optimal or greedy control policies becomes infeasible as the number of genes in the GRN increases. As a solution, deleting genes is proposed by the methods outlined in [8]. The idea is to delete genes sequentially until the size of the network is small enough for designing the control policy. Because the dimension of the control policy designed on the reduced network is not compatible with the original network, it is necessary to induce the control policy from the reduced network to the original one. The best candidate gene for deletion is selected by an algorithm that measures the strength of gene connectivity using the CoD. Genes that neither predict any other genes nor are predicted by any other genes are called constant genes and are the first choice for deletion. If there are not any constant genes, then the gene that has minimum CoD for predicting the target gene is selected as the best candidate, d, for deletion. After selecting d, a reduction mapping is used to define the transition rules for states in the reduced network [20]. The design of the reduction mapping is based on the notion of a selection policy [8]. A selection policy ν^d corresponding to the deleted gene d is a 2^n dimensional vector, ν^d ∈ {0, 1}^(2^n), indexed by the states of S and having components equal to 1 at exactly one of the positions corresponding to each pair of states that differ only in the value of the deleted gene d. For each gene d there are 2^(2^(n–1)) different selection policies. Since finding the optimal selection policy is computationally impossible in large GRNs, a heuristic approach is proposed in [8]: if either state of such a pair is a (singleton) attractor state, the selection policy picks that state; otherwise the choice between the two states is made by a simple rule. Finally, after a control policy is designed on the reduced network, it is necessary to induce it back to the original model. The induction procedure repeats the same control action for the two states that collapsed together to form a single state š in the reduced network. The induction is formally defined as follows, where n is the number of genes in the original model. Assume that after n – m gene deletions the reduced network has m < n genes. Then, for any state (x[1], x[2], …, x[m]) in the reduced network, there are 2^(n–m) states in the original network of the form (x[1], …, x[m], z[1], …, z[n–m]). If µ[red] is the control policy designed on the reduced network, then the induced policy on the original network is defined by

µ[ind](x[1], …, x[m], z[1], …, z[n–m]) = µ[red](x[1], …, x[m])

for any z[1], …, z[n–m] ∈ {0,1}.
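A minimal sketch of this induction step follows; it is our own illustration, with made-up function names. Given a policy on the reduced m-gene network, the induced policy simply ignores the trailing n – m bits of the original state.

```python
def induce_policy(mu_red, m, n):
    """Lift a control policy from a reduced m-gene network to the original n-gene one.

    mu_red : dict mapping an m-tuple of 0/1 values to a control action (0 or 1)
    Returns a function mapping a full n-gene state (tuple of 0/1) to an action.
    """
    assert m < n
    def mu_original(state):
        # The induced policy repeats the reduced-network action for every state
        # that collapses onto the same reduced state (x_1, ..., x_m).
        return mu_red[tuple(state[:m])]
    return mu_original

# Example: a 2-gene reduced policy induced onto a 4-gene network.
mu_red = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}
mu = induce_policy(mu_red, m=2, n=4)
print(mu((0, 1, 1, 0)))   # -> 1, the same action as reduced state (0, 1)
```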
Proposed methodology

This section describes our new algorithm, CoD-CP. The algorithm takes advantage of the predictive power of triplets of genes that include the control gene to predict the expression of the target gene with a small estimated error. To achieve the best performance of the algorithm, it is necessary to have a direct connection or a path from the control gene to the target gene in the regulatory network. The algorithm uses the CoD to measure that predictive power and to design a control policy. CoD-CP is a greedy technique for designing a stationary control policy. The target gene defines the phenotype and divides states into two mutually disjoint sets, D (desirable) and U (undesirable). The gene with the most predictive power over the target gene T among the genes connected with a path to T is used as the control gene C. The goal of the algorithm is to increase the total probability mass of desirable states in the long run by controlling C. CoD-CP starts by generating all 3-gene combinations that include C. We use three genes for predicting T because, as Kauffman points out, the average connectivity of the model cannot be too high if its dynamics are not chaotic [21], and 3-gene predictors are commonly assumed in BN and PBN modeling [1]. CoD-CP uses the CoDs for determining the strength of the connection between a target gene and its predictors. The CoDs are calculated using the SSD of the network and the respective conditional probability distribution (CPD) tables. After examining all 3-gene combinations, they are sorted based on their CoDs. The triple that has the maximum CoD with respect to T and its corresponding CPD is stored and used for designing the control policy. If there is more than one such triple, we can uniformly randomly decide to use one of them. We refer to this triple as MAXCOD and its CPD is called MAXCPD. Table 1 represents an example of a MAXCPD table, where the first three columns contain the binary combinations of the MAXCOD genes. Using T and the MAXCOD genes, the state space of the network is broken down into blocks with 2^(n–4) states. All states in a block share the same values for T and the MAXCOD genes. The details about the entries of the MAXCPD table are given in Example 1, part a.

Table 1. MAXCPD Table: the first three columns represent the binary combinations of the three MAXCOD genes. The last two columns are filled by summing up the SSD probabilities of the states in each corresponding block.

Example 1, part a: This example explains the entries of the MAXCPD table using a 7-gene network with 128 states. Without loss of generality, assume that x[1] and x[2] are the T and C genes, respectively, and x[1] = 0 defines desirable states. After examining all the triples, MAXCOD is found to be {x[2], x[3], x[4]}, which has maximum CoD for predicting x[1]. The first three columns of the MAXCPD table contain the 8 binary combinations of x[2], x[3] and x[4], as Table 1 shows. The last two columns of the table contain the summation of the SSD probabilities of the states with common values for the MAXCOD genes. The only difference between columns four and five is the value of the T gene. The size of each block of states is 2^(n–4) = 2^3 = 8. The first block is Block(1) = {0000000, 0000001, 0000010, 0000011, 0000100, 0000101, 0000110, 0000111}, where all have {x[2], x[3], x[4]} = 000 and x[1] = 0. The second block is Block(2) = {1000000, 1000001, 1000010, 1000011, 1000100, 1000101, 1000110, 1000111}, where {x[2], x[3], x[4]} = 000 and x[1] = 1. Each entry of the fourth and fifth columns of the CPD table is represented by P[ij], where i ∈ {1, …, 8} represents a row and j ∈ {0,1} is the T value. Each P[ij] is the summation of the SSD probabilities of the states in a block.
For columns four and five of the first row (i = 1), we have to sum up all the SSD probabilities for the states in Block(1) to find P[10]. The summation of the SSD probabilities of Block(2) forms P[11]. The rest of the P[ij]s are calculated similarly.

In the PBN setting, control of the network is achieved by toggling the value of the control gene. The derivation of a stationary control policy µ ∈ {0, 1}^(2^n) means defining a control action for each state s ∈ S. If the control action for the state s is set to 1, it means that the value of the control gene C is flipped, so that the network makes its next transition from the flipped state s̃ (flipped with respect to C) rather than from s. The CoD-CP algorithm uses the MAXCPD table in order to specify these control actions. Algorithm 1 details all the steps of CoD-CP. In the binary representation of each state s, we find the values of the MAXCOD genes. The decimal conversion of the values of the MAXCOD genes determines the row of the MAXCPD table corresponding to state s. Then, the total probabilities P[ij] are used to find D(·), as described by Algorithm 1, where D(·) is the difference between the total long-run probability of a block of states being desirable and that of being undesirable. Using this difference we can define the control actions: if D(s) ≥ D(s̃), do not flip the value of C in s; otherwise, flip the value of C in s and start the next transition of the Markov chain from s̃.

Example 1, part b: Following the same 7-gene example, consider state s = 0000000. We calculate D(s) = P[10] – P[11]. The flipped state with respect to the control gene is s̃ = 0100000; the MAXCOD genes in the binary representation of s̃ are {C = 1, Predictor1 = 0, Predictor2 = 0}, which maps to row 5 of the MAXCPD table, so D(s̃) = P[50] – P[51]. If D(s) ≥ D(s̃), the control action for s is 0, but if D(s̃) > D(s), the control action for s is set to 1. For all the states in Block(1) the same control action is applied. This greatly simplifies the design of the control policy. Figure 1 shows a numerical example of how the CoD-CP can be designed on this 7-gene example network.

Figure 1. Deriving CoD-CP for a small 7-gene network. The x[1] and x[2] genes are the T and C genes, respectively. x[1] = 0 defines Desirable states. The MAXCOD genes are {x[2], x[3], x[4]}. The control action for state s is 1 and the control action for state s̃ is 0, because D(s̃) > D(s).
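To make the construction concrete, here is a compact sketch (our own, not the authors' Algorithm 1) that builds the MAXCPD table from a steady-state distribution and then assigns control actions by the D(s) versus D(s̃) comparison described above. The gene indices, the choice of predictors and the data structures are illustrative assumptions.

```python
import numpy as np
from itertools import product

def cod_cp_policy(ssd, n, target, control, predictors):
    """Greedy CoD-CP-style stationary policy for an n-gene Boolean network.

    ssd        : length-2**n array, steady-state probability of each state
                 (states indexed by their binary encoding, gene 1 = MSB)
    target     : index of the target gene T (0-based), value 0 = desirable
    control    : index of the control gene C
    predictors : the two remaining MAXCOD genes (found beforehand by the CoD search)
    Returns a dict: state index -> control action (0 = leave, 1 = flip C).
    """
    maxcod = [control] + list(predictors)
    bit = lambda s, g: (s >> (n - 1 - g)) & 1          # value of gene g in state s

    # MAXCPD table: total SSD mass per (MAXCOD pattern, target value) block.
    table = {(pat, j): 0.0 for pat in product((0, 1), repeat=3) for j in (0, 1)}
    for s, mass in enumerate(ssd):
        pat = tuple(bit(s, g) for g in maxcod)
        table[(pat, bit(s, target))] += mass

    # D(pattern) = desirable mass minus undesirable mass of that block.
    D = {pat: table[(pat, 0)] - table[(pat, 1)] for pat in product((0, 1), repeat=3)}

    policy = {}
    for s in range(2 ** n):
        s_flip = s ^ (1 << (n - 1 - control))          # flip the control gene
        pat = tuple(bit(s, g) for g in maxcod)
        pat_flip = tuple(bit(s_flip, g) for g in maxcod)
        policy[s] = 1 if D[pat_flip] > D[pat] else 0
    return policy

# Toy usage with a random (normalized) SSD on a 7-gene network.
rng = np.random.default_rng(0)
ssd = rng.random(2 ** 7)
ssd /= ssd.sum()
pol = cod_cp_policy(ssd, n=7, target=0, control=1, predictors=(2, 3))
print(sum(pol.values()), "of", len(pol), "states get control applied")
```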
Performance comparison

In this section we compare the performances of CoD-CP, SSD-CP, and MFPT-CP, first with respect to run time and then to the shift of the steady-state distribution.

Run-time comparison

The dynamics of a GRN and its associated Markov chain are determined by its state transition matrix. The STM provides full knowledge about the states and their transitions in the network; however, inferring the STM is difficult, especially when available data about the network are limited or the size of the network is large. The main advantage of the CoD-CP algorithm is that it can be directly designed on large networks without inferring the STM; it only needs an estimation of the SSD of the Markov chain. This section provides a comparison of CoD-CP with MFPT-CP [6] and SSD-CP [7]. In the case of large GRNs, CoD-CP can be directly designed on the model, while MFPT-CP and SSD-CP are two-step procedures: first reducing the size of the network so that the policy can be designed and then inducing that control policy back to the original network. These necessary steps increase the computational time associated with MFPT-CP and SSD-CP.

To compare the three algorithms, we measured the running time needed for designing control policies on gene networks containing 7, 8, 9, and 10 genes, averaged over 100 randomly designed BN[p]s. For MFPT-CP and SSD-CP, the best gene for deletion was selected and then the original network was reduced by deleting that gene, according to the methodology introduced in [8]. Subsequently, the control policies were designed on the reduced networks and then induced back to the original networks. CoD-CP was designed directly on the original network as described by our new algorithm. All computations were performed on a computer with 4GB of RAM and an Intel(R) Core(TM) i5 CPU, 2.53 GHz. Figure 2 shows the average running times for 100 BN[p]s in seconds. The running times tend to grow exponentially as the number of genes increases.

Figure 2. Comparing the average running times (in seconds) for designing a stationary control policy for 100 randomly generated 10-gene, 9-gene, 8-gene and 7-gene BN[p]s. The running time for the CoD-CP algorithm is always less than for MFPT-CP and SSD-CP. The running time grows exponentially as the number of genes increases.

For comparing the performance of the three algorithms one needs to keep in mind their important characteristics. The CoD-CP algorithm needs the SSD to design the control policy. In cases when the SSD is known, one can directly proceed to the CoD calculations and design the control policy for the network. When the SSD is not known, it can be calculated using equation (1) or can be estimated by methods described in [22]. The model-dependent version of the MFPT algorithm requires an extra step to infer the STM. It then uses matrix inversion to find the mean-first-passage-time vectors K[D] and K[U], this step having the same time complexity as finding the SSD. The model-free version of MFPT-CP requires time-course measurements to estimate the necessary mean-first-passage-time vectors. In such a case the algorithm can skip the inference of the STM, and the complexity of estimating MFPT vectors is constant with respect to the number of genes. However, the availability of time-course data is very limited in practice. The other available greedy approach, SSD-CP, also requires the SSD and STM of the network. Moreover, the SSD-CP algorithm needs to find the perturbed SSD for each state, which increases the time spent designing the control policy. As described in the previous section, CoD-CP uses the MAXCPD table to design the control policy, which divides the state space into blocks of size 2^(n–4). These blocks are used to assign the same control action to all of the states in a given block and the complementary control action to the block of flipped states. This significantly reduces the complexity of the control policy design and leads to shorter run times.

Steady-state performance

This section provides simulation experiments to demonstrate the performance of the CoD-CP algorithm with respect to its main goal, to shift undesirable steady-state mass to desirable steady-state mass. In the first part, the algorithm is applied to randomly generated networks. In the second part, we demonstrate CoD-CP on a real-world-derived gastrointestinal cancer network with 17 genes, which can be considered large, given that even with binary quantization the dimension of its Boolean network STM is 2^17 × 2^17.
Synthetic networks

CoD-CP has been designed for networks that are too large for direct application of greedy algorithms such as MFPT-CP and SSD-CP, while at the same time not suffering from the loss of information incurred when designing control policies on reduced networks and then inducing them onto the corresponding original networks. Hence, our desire is to demonstrate the improved performance of CoD-CP in comparison to the induced greedy control policies when reduction-inducement is necessary; otherwise, one can simply use the previously developed greedy policies directly. In this section, we discuss the results of a simulation study that compares the performance of CoD-CP to MFPT-CP and SSD-CP on a set of BN[p]s that are randomly generated using the algorithm from [23], for two different perturbation probabilities: p = 0.1 and p = 0.01. The latter probability is the one most commonly used in GRN control studies [2, 6, 7, 24]; nevertheless, we also use p = 0.1 to see the effect, if any, of a less stable network where less mass is concentrated in the attractors. In order to examine how the attractor structure affects performance, we test the CoD-CP algorithm on two model classes: (1) networks with singleton attractors only, and (2) networks that allow cyclic attractors. In the first class, we randomly choose 100 unique attractor sets for a different number of genes n, where n ∈ {7, 8, 9, 10}. The attractor sets are restricted to be evenly distributed between the desirable and undesirable states. In the second class, the attractor sets are unique, but the criterion of even distribution between D and U is no longer required, and attractors are allowed to be cyclic and of unequal length. We used the absolute shift of the SSD as the algorithm performance measure. It is given by the difference Σ[λ ∈ D] π̂(λ) – Σ[λ ∈ D] π(λ), with λ ranging over the desirable states and π̂ and π denoting the steady-state distributions with and without control, respectively.

In real-world situations the target (T) and control (C) genes are often pre-selected by the biologists/clinicians, the basis for choice being that a phenotypically related target is to be up- or down-regulated and the control gene is known to be related to the target. However, in our simulation studies, where knowledge about T and C does not exist, we have designed a procedure to identify reasonable target and control genes. The objective of the procedure is to select a (C, T) pair such that there is a direct connection, or path, from C to T, which would be a natural constraint in applications. The strength of connection between C and T is measured by the CoD. The selected pair is called a CoD-strongly-connected pair. To select this pair, we consider all two-gene combinations such that each gene in a given pair is treated as both the candidate target and the candidate control gene, and the CoD of the candidate C for predicting the candidate T is calculated. The pair with the maximum CoD of the candidate C for predicting the candidate T is picked. Then the algorithm checks if there is a path from C to T. If such a path exists, then the (C, T) pair is chosen. If no path exists, then the pair is discarded and the next highest CoD pair is considered as the candidate (C, T) pair. For checking the existence of a path, we use the breadth-first-search (BFS) algorithm [25]. For more information please refer to the supplemental document (Additional file 1).

Additional file 1. This is a file in PDF format and contains additional and supportive material.
It provides details about SSD estimation methods, compares results from this paper to the previously published results, and outlines the method used for selecting CoD-strongly-connected T-C pairs in the simulations. The link to the file is: http://gsp.tamu.edu/Publications/supplementary/ghaffari11a/ghaffari-cod-cp-supplemental-document.pdf (Format: PDF, Size: 77KB)

To compare CoD-CP to the reduction-inducement versions of MFPT-CP and SSD-CP, we use the reduction method described in [8], called CoD-Reduce. The CoD-Reduce algorithm is designed for networks with singleton attractors only, because its selection policy heuristically uses the singleton attractors to generate the structure of the reduced network. Therefore, in this paper, when reduction of the network is needed for comparison of the control policies, we focus on the networks with singleton attractors only (we return to cyclic attractors later). Figure 3 illustrates that the CoD-CP policy designed on the original network outperforms the induced MFPT-CP and SSD-CP policies when there is significant network reduction, in the case of a 10-gene network and p = 0.1. Each set of bars in the graph shows the average SSD shifts for the three policies with different amounts of reduction for the MFPT-CP and SSD-CP policies, beginning with no reduction-induction, then reduction to 9 genes and induction back to 10, and so on. The performance of CoD-CP is invariant because it is designed directly from the original network. Absent reduction, we see that CoD-CP is outperformed by the induced policies and continues to be outperformed with a 2-gene reduction. But after that, for reductions of 3 or more genes, CoD-CP outperforms the induced policies, with its superiority increasing as the extent of the reduction grows. This is precisely the behavior we desire. While both the MFPT-CP and SSD-CP policies can be used directly for 10-gene networks, they must be induced from reductions for large networks and, as we observe, the reduction-induction paradigm provides decreasing SSD shift as the amount of reduction increases. Figure 4 shows a similar phenomenon with p = 0.01.

Figure 3. Comparing the original CoD-CP to the original and induced MFPT-CP and SSD-CP for 100 randomly generated 10-gene BN[p]s with half of the attractors in D states. In the first set of bars, CoD-CP, MFPT-CP and SSD-CP are designed on the 10-gene networks. In the next sets, the CoD-CP was designed on the original 10-gene networks and compared to the induced MFPT-CP and SSD-CP. At each step, one gene was deleted, and then MFPT-CP and SSD-CP were designed and induced back to the original network, until each BN[p] had only 4 genes. The perturbation probability is 0.1.

Figure 4. Comparing the original CoD-CP to the original and induced MFPT-CP and SSD-CP for 100 randomly generated 10-gene BN[p]s with half of the attractors in D states. In the first set of bars, CoD-CP, MFPT-CP and SSD-CP are designed on the 10-gene networks. In the next sets, the CoD-CP was designed on the original 10-gene networks and compared to the induced MFPT-CP and SSD-CP. At each step, one gene was deleted, and then MFPT-CP and SSD-CP were designed and induced back to the original network, until each BN[p] had only 4 genes. The perturbation probability is 0.01.
Having demonstrated the advantage of CoD-CP over the induced polices as the degree of reduction (and, therefore, induction) increases, we now turn to two other aspects of CoD-CP: the effect of cyclic attractors and the selection of target-control pairs. For each issue we consider two cases. For attractors, as previously noted, we have: (a) only singleton attractors and (b) cyclic attractors allowed. Regarding target-control pairs, we have: (a) CoD-strongly-connected target-control pairs and (b) randomly selected target-control pairs. If we combine these choices, we have four factors to consider: network size (n), perturbation probability (p), attractor structure, and target-control structure. Table 2 provides the SSD shifts for network size n ∈ {7, 8, 9,10}, p ∈ {0.1, 0.01}, and the two possibilities for attractors and target-control pairs. Table 2. CoD-CP performance for p ∈ {0.1, 0.01} and singleton or cyclic attractors, averaged for 100 BN[p]s with 7, 8, 9 and 10 gene networks. The first point to recognize is that using CoD-strongly-connected target-control pairs is more realistic because in practice one would control a target with gene that is strongly connected to it via prediction and the CoD is a measure of prediction. On the other hand, one could hardly expect to achieve as good results by randomly selecting targets and controls. We see this contrast reflected in the SSD shifts in Table 2. In addition, we see that using CoD-strongly-connected target-control pairs results in decreasing SSD shift for increasing network size, whereas this trend is replaced by sporadic behavior for randomly selected target-control pairs. Finally, we note the better performance for p = 0.01 than for p = 0.1. This reflects the more random network behavior for higher perturbation probability because the control algorithm utilizes the predictive structure in the network (as measured by the CoD) and this structure is less determinative when perturbations are more likely. In this regard we note that both MFPT-CP and SSD-CP also perform better for p = 0.01 than for p = 0.1, in both their non-induced and induced modes. Gastrointestinal cancer network This section accomplishes two purposes: to examine CoD-CP performance on a real data-based network and on a network sufficiently large that neither MFPT-CP nor SSD-CP (nor, for that matter, dynamic programming) can be applied in their non-induced forms. To do so, we use a BN[p] derived from gastrointestinal cancer microarray dataset [26] and initially inferred in [8]. The 17-gene network has the genes OBSCN and GREM2 as target and control, respectively. This selection is based on biological knowledge and CoD-measured strength of the connectivity between them. The 17 genes comprising the model are: OBSCN, GREM2, HSD11B1, UCHL1, A_24_P920699, BNC1, FMO3, LOC441047, THC2123516, NLN, COL1A1, IBSP, C20or f166, KUB3, TPM1, D90075, and BC042026. Figure 5 shows the connectivity graph of the 17-gene gastrointestinal cancer network. Figure 5. 17-gene Gastrointestinal Cancer Network. Details about the network and this graphical display are provided in [8,9] The 17-gene network has a 2^17 × 2^17 state transition matrix. The generation and manipulation of the STM needed for the design of the MFPT-CP and SSD-CP is a hard computational problem, thus, reduction and induction are necessary steps for obtaining the two control policies. We use an estimation of the SSD because for such a large network it is infeasible to derive it analytically. 
The approximation method proposed in [22] is used to estimate the SSD of the network. Since CoD-CP can use the estimated SSD of the network, it can be used for directly designing the stationary control policy on the 17-gene network. The estimation procedure uses the Kolmogorov-Smirnov test to decide if the network has reached its steady state. To apply MFPT-CP and SSD-CP, we reduce the network via the gene reduction method introduced in [8] and delete genes consecutively until only 10 genes are left in the network. At that point it is possible to design the MFPT-CP and SSD-CP policies, after which they are induced back to the original 17-gene network. The resulting performance comparison of the CoD-CP policy with the induced MFPT-CP and SSD-CP policies is shown in Table 3. The difference is dramatic, with the SSD shift for the CoD-CP far superior to the shift for the induced MFPT-CP and SSD-CP policies, which are about the same. The perturbation probability used in this experiment is p = 0.1. More results using p = 0.01 are shown in Table 4. It is important to point out that the small perturbation probability, p = 0.01, makes such a large network behave nearly deterministically. Thus, all three control policies produce significant shifts in the network SSD towards the desirable states. In addition, one can notice that the CoD-CP performs extremely well, which can be attributed to the use of the CoD to infer the network structure from data. This result illustrates the importance of the proper combination of network inference and control policy design methods.

Table 3. Comparing SSD shift before and after applying control policies, p = 0.1. The CoD-CP is designed on the 17-gene gastrointestinal cancer network. For MFPT-CP and SSD-CP, the 17-gene network is reduced to 10 genes, the control policies are designed for it, and then these policies are induced back and applied to the original 17-gene network.

Table 4. Comparing SSD shift before and after applying control policies, p = 0.01. The CoD-CP is designed on the 17-gene gastrointestinal cancer network. For MFPT-CP and SSD-CP, the 17-gene network is reduced to 10 genes, the control policies are designed for it, and then these policies are induced back and applied to the original 17-gene network.

Conclusions

In this paper we propose a new algorithm, CoD-CP, for designing a greedy stationary control policy that beneficially alters the dynamics of large gene regulatory networks. The proposed algorithm needs minimum knowledge about the structure of the model and only uses the steady-state distribution of the associated Markov chain. This is particularly important for large networks, where it is computationally prohibitive to use the previously proposed optimal or greedy approaches for designing stationary control policies. The CoD-CP algorithm uses CoD computations based on the steady-state distribution for measuring the strength of connection between the target gene and its candidate predictor genes. CoD-CP is particularly designed for the class of network models where there is a path between the target and control genes, a condition that is reasonable in practical applications. The control action for each state of the network is defined based on the values of the strongest predictor set for the target gene.
Simulations demonstrate that CoD-CP outperforms the induced versions of the MFPT-CP and SSD-CP algorithms relative to shifting the steady-state distribution of the network toward more desirable states when there is a significant amount of reduction, a requirement for large networks.

Authors' contributions

NG proposed the main idea, developed the algorithm, designed and performed the simulations, and prepared the manuscript. II collaborated on the design of the algorithm and simulations, interpretation of the results and manuscript preparation. XQ provided insights on the interpretation of the algorithm and results and helped on the manuscript. ERD conceived the study, participated in the analysis and interpretation of the results, and helped draft the manuscript. All authors read and approved the final manuscript.

This article has been published as part of BMC Bioinformatics Volume 12 Supplement 10, 2011: Proceedings of the Eighth Annual MCBIOS Conference. Computational Biology and Bioinformatics for a New Decade. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2105/12?issue=S10.
{"url":"http://www.biomedcentral.com/1471-2105/12/S10/S10","timestamp":"2014-04-18T13:19:03Z","content_type":null,"content_length":"144329","record_id":"<urn:uuid:b315725a-c552-445d-a913-309322c9e50f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Not sure how to solve this problem?

October 17th 2011, 04:59 PM  #1 (joined Sep 2011)
Not sure how to solve this problem?
"You collect baseball and football cards. Your uncle has an old collection of 360 cards that he gives to you. The collection has more baseball cards than football cards. In fact it has 30 more than twice the number of football cards. How many of each are in your uncle's collection?"
I know I can use elimination to solve this problem.. I'm having problems making the equations though.. So far I have: I'm not sure of the other equations. Can anyone help and explain? Thank you.

October 17th 2011, 05:02 PM  #2
Re: Not sure how to solve this problem?
Let b = number of baseball cards and f = number of football cards. Then
b + f = 360
b = 30 + 2f

October 18th 2011, 09:27 AM  #3 (Senior Member, joined Jul 2008)
Re: Not sure how to solve this problem?
Substitute the second equation into the first:
f + (30 + 2f) = 360
3f = 330
f = 110, so b = 30 + 2(110) = 250

October 18th 2011, 10:33 AM  #4
Re: Not sure how to solve this problem?
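For anyone who wants to check the elimination by machine, here is a tiny sketch (ours, not from the thread) that solves the same two equations as a linear system.

```python
import numpy as np

# b + f = 360 and b - 2f = 30, written as A @ [b, f] = rhs
A = np.array([[1.0, 1.0],
              [1.0, -2.0]])
rhs = np.array([360.0, 30.0])
b, f = np.linalg.solve(A, rhs)
print(f"baseball cards: {b:.0f}, football cards: {f:.0f}")   # 250 and 110
```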
{"url":"http://mathhelpforum.com/algebra/190655-not-sure-how-solve-problem.html","timestamp":"2014-04-18T11:31:51Z","content_type":null,"content_length":"41478","record_id":"<urn:uuid:9fc069f4-47d1-4eab-adab-23a4d1f03141>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
The Johnson Circles
The Johnson circles are a triplet of congruent circles sharing a single point. Every triangle has exactly two Johnson triplets.
• The locators are the centers of the three circles. They form the Johnson triangle with a circumcircle of the same radius.
• Johnson's theorem: the "reference triangle" with vertices at the points of two-fold intersection has, surprisingly, a circumcircle of the same radius.
• The reference triangle is congruent to the Johnson triangle by a homothety of factor -1.
• The anticomplementary circle with twice the radius touches the Johnson circles.
• The inscribed anticomplementary triangle is homothetic to the Johnson triangle with factor 2.
• The three locators and the origin are, surprisingly, such that each is the orthocenter of the three others.
• The homothetic center of the Johnson and reference triangles is the center of the nine-point circle of the reference triangle.
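As a quick numerical illustration of Johnson's theorem (our own check, not part of the Demonstration): starting from an arbitrary reference triangle, the three circles through its orthocenter and each pair of vertices form a Johnson triplet, and each of their radii equals the circumradius of the reference triangle.

```python
import numpy as np

def circumcenter_radius(p1, p2, p3):
    """Circumcenter and circumradius of the triangle p1 p2 p3 (2D points)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.array(p1))

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
O, R = circumcenter_radius(A, B, C)
H = A + B + C - 2 * O          # orthocenter, via the identity H = A + B + C - 2O

# The three Johnson circles pass through H and two vertices each.
for pair in [(B, C), (C, A), (A, B)]:
    _, r = circumcenter_radius(H, *pair)
    print(f"Johnson circle radius: {r:.6f}  (circumradius R = {R:.6f})")
```

Running this prints three radii identical to R, which is exactly the "surprising" congruence the Demonstration visualizes.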
{"url":"http://demonstrations.wolfram.com/TheJohnsonCircles/","timestamp":"2014-04-19T01:47:59Z","content_type":null,"content_length":"43781","record_id":"<urn:uuid:7be2645a-8239-4b72-bcda-c860684ccee0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyzing Assemblies billcutlerpuzzles.com Contents previous: Constructing Assemblies next: Uniqueness and Disassembly The GENDA Program The analysis presented in this booklet would not have been possible without the development of a computer program to analyze the disassembly of puzzles made of pieces "built-up" from cubes. Pieces in the puzzle are allowed to move only in one of the three orthogonal directions, and the distances they move must be a multiple of the cube width. They may not be twisted or moved a fraction of a cube width. Pieces may either move individually or in groups. The first such program to be written is called GENDA (GENeral DisAssembly); it was written in FORTRAN and has the most sophisticated logic of any of the programs used in the 6-piece burr analysis. The following is an outline of the procedure used by the program for doing disassembly analysis: 1. The program starts with the assembled puzzle. If the program is successful in completely disassembling the puzzle, then the results can be used 'backwards' to assemble the puzzle from its separate pieces. An assembled puzzle is a 3-dimensional grid of cubes in which each cube is part of a particular piece or is empty space. If there are N pieces in the puzzle, this is represented in the computer by a 3-dimensional array containing integers from 0 to N. The value of a particular array element indicates the piece number that occupies the corresponding cube, or 0 if the cube is empty. 2. The allowable movements in the physical puzzle correspond to simple changes in the values of the array. 3. During movement prior to disassembly, the puzzle will pass through different 'states'. Each state is a different arrangement of the pieces in the same grid of cubes. Within the computer program, a state is represented by the amounts that each of the pieces have been moved in each of the three directions from their starting position. If one fixes piece #1 in its initial location, then each state is uniquely represented by the offsets of the other N-1 pieces, or 3 x (N-1) integers. 4. The problem can thus be reduced to analyzing movement in a single direction and determining which states can be reached from another state by movement in this direction. The program must be able to identify when one or more pieces may be separated from the rest of the pieces by movement in a single direction. This is called a partial solution, as the remaining groups of pieces may still not come completely apart. Movement in a single direction will be described in more depth in the next section. 5. The logic for completely disassembling a puzzle is just repeated applications of the same logic for disassembling the whole puzzle. Each time a group of pieces is disassembled, the resulting sets of pieces are cataloged as sub-assemblies. Any sub-assembly with more than one piece is saved for later analysis. The key to the analysis is keeping track of the states: 1. At the beginning of the analysis, the list contains only one state: the starting position, in which the offsets of all the pieces are 0. 2. Pick the next unanalyzed state in the list, and choose one of the orthogonal directions. Determine how much movement of the pieces relative to each other in this direction is possible. Use this information to construct all states which can be reached from the original state with one move. If the movement allows for complete separation of two or more sets of pieces, then we have found a partial solution and can stop. 3. 
For each of these new states, determine if it has already been recorded in the list of states. This is easily done by comparing the offsets of the pieces of the new state with the offsets of the pieces in each state in the list. (Recall that Piece #1 is not allowed to move - its displacements are always 0). If it has not previously been recorded, add it to the list; otherwise, discard it.
4. Continue with the analysis until either:
□ Some state leads to disassembly in one move or
□ The state list has been completely analyzed; no other new states can be reached from any of the states in the list.
5. Since there are only a finite number of states in which all the pieces are still interlocked, the process must end at some time. For some burrs with many pieces and independent movement of parallel pieces (ex. Van der Poel's 18-piece burr), this may involve hundreds of states and may take a significant amount of computer time; but the process is still finite.

Analyzing Movement in One Direction

This section is more technical than the rest of the booklet. It is included because it contains the single most interesting piece of logic or mathematics used by the programs. How is movement in one direction analyzed? Let N be the number of pieces in the puzzle. Let GRID(x,y,z) be the array containing the piece numbers which occupy each cube, and assume that the dimensions of GRID are less than 100 in each direction. We will analyze movement in the first orthogonal direction, that is the first subscript (x) of GRID. We will construct an N by N matrix, MOVE(i,j), which will show how each pair of pieces may move in relation to each other in the fixed direction. MOVE(i,j) is a non-negative integer which is the number of cube widths piece # i can be moved in the positive direction while keeping piece # j fixed. If there is no limit to how far piece # i can be moved in this direction without moving piece # j, then this value is set to 100. The following steps are used to construct the matrix MOVE:
1. Initialize the main diagonal of MOVE to 0 and all other entries to 100.
2. Determine simple piece interactions (compute values for MOVE(i,j) by ignoring pieces except for i and j) as follows: For each i from 1 to N; For each j from 1 to N except j=i; For each (x1,y1,z1) for which GRID(x1,y1,z1)=i; For each (x2,y2,z2) for which GRID(x2,y2,z2)=j; If y2=y1 and z2=z1 and x2>x1, then let k=x2-x1-1; if MOVE(i,j) > k, then MOVE(i,j)=k. (The actual programming of the above step can be done more efficiently, but it is easier to explain this way).
3. Introduce interactions of other pieces. It is geometrically evident that the final MOVE matrix must satisfy the following transitive relationship: For all pieces i,j,k, MOVE(i,j) <= MOVE(i,k)+MOVE(k,j). What is not so immediately apparent is that this is the only additional change that must be made to get the desired result. Loop through all values of i,j,k and compare MOVE(i,j) with MOVE(i,k)+MOVE(k,j). If MOVE(i,j) is larger, replace it with the value on the right. Continue doing this over and over until no further changes are required for any values of i,j,k.
If any MOVE(i,j)=100, a partial solution has been found. If all values of MOVE are 0, then there is no movement. Otherwise, there is some movement in the chosen direction.
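The following Python sketch (ours, not Cutler's FORTRAN) implements the MOVE-matrix construction just described: pairwise gaps along each x column, followed by repeated relaxation with the transitive inequality. The value 100 stands in for "unlimited", as in the text.

```python
import numpy as np

UNLIMITED = 100

def move_matrix(grid, n_pieces):
    """MOVE(i, j): cube widths piece i can slide in +x while piece j stays fixed.

    grid: 3-D integer array, 0 = empty, 1..n_pieces = piece occupying that cube.
    """
    move = np.full((n_pieces + 1, n_pieces + 1), UNLIMITED, dtype=int)
    np.fill_diagonal(move, 0)

    # Step 2: simple pairwise interactions along each (y, z) column.
    cells = {i: np.argwhere(grid == i) for i in range(1, n_pieces + 1)}
    for i in range(1, n_pieces + 1):
        for j in range(1, n_pieces + 1):
            if i == j:
                continue
            for x1, y1, z1 in cells[i]:
                for x2, y2, z2 in cells[j]:
                    if y1 == y2 and z1 == z2 and x2 > x1:
                        move[i, j] = min(move[i, j], x2 - x1 - 1)

    # Step 3: relax with MOVE(i, j) <= MOVE(i, k) + MOVE(k, j) until stable.
    changed = True
    while changed:
        changed = False
        for i in range(1, n_pieces + 1):
            for j in range(1, n_pieces + 1):
                for k in range(1, n_pieces + 1):
                    cap = min(move[i, k] + move[k, j], UNLIMITED)
                    if move[i, j] > cap:
                        move[i, j] = cap
                        changed = True
    return move

# Two 1x1x3 bars side by side, free to slide past each other along x.
grid = np.zeros((3, 2, 1), dtype=int)
grid[:, 0, 0] = 1
grid[:, 1, 0] = 2
print(move_matrix(grid, 2)[1:, 1:])   # off-diagonal entries stay at 100 (unbounded)
```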
The FDA Program

When the project to analyze all 6-piece burr assemblies was started, it was clear that a faster, more efficient program would be needed in order to have any hope of completing the project in a reasonable time. To this end, the following modifications were made to the GENDA program:
1. Only do partial disassembly analysis. In other words, only determine the first separation of the pieces and how many moves it took to achieve this. Assemblies which took a lot of moves for the first disassembly would be saved for a later, more thorough, analysis. Since the goal of the program was to find 6-piece burrs which took as many moves as possible to remove the first piece, all candidates for this honor would be identified.
2. Rewrite all routines with a fixed number of 6 pieces, and take advantage of this where possible to make the code more efficient.
3. Rewrite the most time-consuming routines in 8086 Assembler (PC machine language) to get the most speed possible.
An additional complication was also added to the routines - varying piece lengths. Early on, it was realized that a thorough investigation of 6-piece burr assemblies must be sensitive to the lengths of the pieces. Lengths of 6, 8, 10 and 12 may produce different results, although it is rare for there to be a difference between lengths 10 and 12. Even though the original intention of the analysis was only to identify high-level solutions, no short cuts on piece length are possible - an assembly which is level-8 with length 10 pieces may be only level-3 with length 6 pieces; and, conversely, an assembly which is level-8 with length 6 pieces may have no solution with length 8 pieces. The result was a new program for 6-piece burr analysis called FDA (Fast Disassembly Analysis). For each assembly analyzed, the resulting output is a group of 4 numbers called the 'level-type'. These are the number of moves required for the first disassembly with length 6, 8, 10 and 12 pieces. The FDA routines handle the multiple length analysis as follows:
1. Movement of the initial position is analyzed - this movement is independent of piece length. If no movement is found, then the level type is 0-0-0-0. If a level-1 solution is found, then the level-type is 1-1-1-1.
2. If there is movement, but no level-1 solution, save all states created.
3. Continue the analysis with length 6 pieces. During the analysis, each movement and resulting state is analyzed as to whether it can be "generalized" using unlimited length pieces. If a solution is found, and all movements leading to the solution can generalize, then the assembly has the same level solution for all longer pieces also. If no solution is found, then there is no solution with longer pieces either.
4. If a length-6 solution is found which does not generalize, then redo the analysis with length 8 pieces, again checking for generalization possibilities.
5. Repeat with length 10 and 12 pieces if necessary.
6. Save the solution level found at each length.
Note: Level-type 0-0-0-1 is used to indicate that there is movement in the assembly, but no solution at any length.
After the FDA analysis for an assembly is completed, the program does the following:
1. Record the results by level-type, number of holes, whether notchable or general, and any symmetry of the assembly.
2. If the assembly is deemed to be 'special', write out the assembly to a separate file so that it can be analyzed in more detail later. An assembly was deemed to be 'special' if either:
□ The level of solution with some length of pieces was 8 or more
□ The level-type was a new combination not seen before.
The format used to print out 'special' assemblies is called the "LL Format".
D. The LL Format
The following are LL listings for 3 assemblies:
LL-Format                        E H S  6  8  A  C  Name
-------------------------------- - - - -- -- -- --  ---------------
35350030443000251130143320220666 0 9 0 12  0  0  0  Love's Dozen
55555530043022551131642000220636 0 7 0  5  0  0  0  Bill's Baffling Burr
05550110503004001116603664226666 0 9 0  4  6 10 10  L46AA Notchable
The most important information is the first 32 numbers under the heading "LL-Format". These represent the internal cube arrangements of the assembly. Compare these numbers with Figure 2 (below). Each of the 32 numbers represents the allocation of one of the unassigned cubes in Figure 2. A '0' in the listing indicates that the cube is empty. A number from 1-6 indicates the number of the piece to which the cube belongs. The additional entries in the LL listing are less important, and can be recomputed if necessary: • The 'E' column is an error return which was used during the program run to indicate non-critical problems which occurred when that assembly was run, such as too much movement within the assembly or a new level-type. • The 'H' column is the number of holes in the assembly (same as the number of 0's in the LL-format). • The 'S' column was used for an analysis variable which will not be discussed here. • The '6', '8', 'A' and 'C' columns give the levels at lengths 6, 8, 10 and 12, respectively. Together, these 4 numbers are the level-type. (A short example of recomputing the simpler columns directly from the 32-character string is given below.) See Love's Dozen or Computer's Choice Unique-10 in the examples for demonstrations of the use of the LL-Format. Love's Dozen LL Format to Assembly Computer's Choice Unique 10 LL Format to Assembly
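Without Figure 2 (which fixes which interior cube each of the 32 characters refers to) the full geometry cannot be rebuilt here, but the simpler columns really can be recomputed straight from the string, as the text says. A small illustration - the string is the Love's Dozen row from the table above, and the function name is mine, not from the original programs:

```python
def ll_summary(ll_string):
    """Recompute the simple columns of an LL listing.
    '0' = empty cube (a hole); '1'-'6' = cube owned by that piece."""
    assert len(ll_string) == 32 and set(ll_string) <= set("0123456")
    holes = ll_string.count("0")                      # the 'H' column
    cubes_per_piece = {p: ll_string.count(p) for p in "123456"}
    return holes, cubes_per_piece

holes, per_piece = ll_summary("35350030443000251130143320220666")
print(holes)       # 9, matching the 'H' column for Love's Dozen
print(per_piece)   # interior cubes assigned to each of the six pieces
```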
{"url":"http://home.comcast.net/~billcutler/docs/CA6PB/analyzing.html","timestamp":"2014-04-16T16:10:37Z","content_type":null,"content_length":"18552","record_id":"<urn:uuid:43dd4116-3391-4cbd-be9f-49d783d66d9a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Spectral contaminant identifier for off-axis integrated cavity output spectroscopy measurements of liquid water isotopes
FIG. 1. Measured off-axis ICOS transmission spectra of an uncontaminated water standard, a 100 ppm[v] methanol-in-water mixture (blue), and a 1% ethanol-in-water mixture (green). Insets show the non-linear, least-squares fits to the measured water standard and methanol mixture in red with residuals shown in grey. The methanol adds discrete, narrowband absorptions that can be clearly identified in the marked regions of interest. Ethanol (and larger organics) acts as a broadband absorber, which shifts the baseline offset coefficient, b0.
FIG. 2. Δδ18O scales linearly with m[BB], whereas Δδ2H follows a 3rd order polynomial. Standard #1 and Standard #2 were measured twice for each ethanol concentration (total of four points at each doping level). Data points are an average of 4 injections and error bars show the standard error of the average. Note that the data are plotted versus (m[BB] − 1) such that isotope measurement error is zero at m[BB] = 1. Fits are forced through (0,0). The approximate ethanol concentration is shown on the upper x axis. Data plotted are from instrument #4.
FIG. 3. Δδ18O and Δδ2H scale linearly with log(m[NB]). Standard #1 and Standard #2 were measured twice for each methanol concentration (total of four points at each doping level). Data points are an average of 4 injections and error bars show the standard error of the average. Note that the x axis zero is defined by the average log(m[NB]) value of uncontaminated water standards. Fits are forced through (0,0). For larger values of m[NB], a small deviation from linear behavior is visible; other groups have used a piecewise function to describe this relationship but observed a similar logarithmic trend [10]. Approximate methanol concentration is shown on the upper x axis. Data plotted are from instrument #7.
FIG. 4. Isotope error vs. metric fits for all 14 instruments: (a) 3rd order polynomial fits to Δδ2H vs. m[BB] − 1, showing a wide variety of responses to contamination with ethanol. Poor fits (i.e., low R^2) typically have a small total deviation, indicating minimal error dependence on m[BB] and thus ethanol contamination. (b) Linear fits to Δδ18O vs. m[BB] − 1. The terminal markers in (a) and (b) correspond to 2% ethanol. (c) Linear fits to Δδ2H vs. log(m[NB]). (d) Linear fits to Δδ18O vs. log(m[NB]). The terminal markers in (c) and (d) correspond to 100 ppm[v] methanol. R^2 values for each fit are shown in the legend.
FIG. 5. Average absolute deviation after correction using the metrics described in this note. Black points were contaminated with a maximum of 100 ppm[v] methanol, red points with a maximum 2% ethanol.
{"url":"http://scitation.aip.org/content/aip/journal/rsi/83/4/10.1063/1.4704843","timestamp":"2014-04-18T12:07:25Z","content_type":null,"content_length":"79375","record_id":"<urn:uuid:fe5076e9-d92a-426f-9ed9-e34e39bb91a0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
What are the regulation length and width, in feet, of a table tennis table? Under the ITTF rules, the table is 2.74 m long and 1.525 m wide - roughly 9 feet by 5 feet - with the playing surface 76 cm (about 2.5 feet) above the floor. Table tennis or ping-pong is a sport in which two or four players hit a lightweight ball back and forth using table tennis rackets. The game takes place on a hard table divided by a net. Except for the initial serve, players must allow a ball played toward them only one bounce on their side of the table and must return it so that it bounces on the opposite side. Points are scored when a player fails to return the ball within the rules. Play is fast and demands quick reactions. Spinning the ball alters its trajectory and limits an opponent's options, giving the hitter a great advantage; a successful spin therefore gives a good chance of scoring. Table tennis is governed by the worldwide organization International Table Tennis Federation (ITTF), founded in 1926. The ITTF currently includes 218 member associations, and the official rules are specified in the ITTF handbook. Since 1988, table tennis has been an Olympic sport, with several event categories. In particular, from 1988 until 2004, these were: men's singles, women's singles, men's doubles and women's doubles. Since 2008 a team event has been played instead of the doubles.
{"url":"http://answerparty.com/question/answer/what-is-the-regulation-by-length-and-width-by-feet-in-table-tennis","timestamp":"2014-04-19T22:09:25Z","content_type":null,"content_length":"24456","record_id":"<urn:uuid:395ba940-c498-4ad5-b5ed-5505c3ae7ea2>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Double integrals September 12th 2010, 09:44 AM #1 Senior Member Apr 2009 Double integrals Find a solution to this integral $I = \int_0^{\infty} e^{-x^2} dx$ analytically by calculating $I^2 = \int_0^{\infty} \int_0^{\infty} e^{-x^2} e^{-y^2} dydx$. I am not sure about how to go about this question, how did they get $I^2 = \int_0^{\infty} \int_0^{\infty} e^{-x^2} e^{-y^2} dydx$? Shouldn't $I^2 = \int_0^{\infty} e^{-x^2} dx \times \int_0^{\infty} e^{-x^2} dx = \int_0^{\infty}\int_0^{\infty} e^{-x^2} \cdot e^{-x^2} dxdy$? Thanks very much! (Just a bit confused on this question because I am working ahead of class.) Use Google: Google There is a typo in your final line- you surely don't want $\int_0^\infty e^{-x^2} dx\times \int_0^\infty e^{-x^2}dx= \int_0^\infty\int_0^\infty e^{-x^2}e^{-x^2} dx dy$. Where did that final "dy" come from? I suspect you meant $\int_0^\infty e^{-x^2} dx\times \int_0^\infty e^{-x^2}dx= \int_0^\infty\int_0^\infty e^{-x^2}e^{-x^2} dx dx$ but that is also wrong. It is true that $I^2= \int_0^\infty e^{-x^2} dx\times\int_0^\infty e^{-x^2} dx$ but that is NOT the same as $\int_0^\infty\int_0^\infty e^{-x^2} e^{-x^2}dx dx$. In fact, that last double integral makes no sense. You cannot integrate twice with respect to the same variable. Remember that "x" is a dummy variable- the final answer will be a number so that it doesn't matter what you call the variable. If $I= \int_0^\infty e^{-x^2}dx$ then it is also true that $I= \int_0^\infty e^{-y^2}dy$ or, for that matter, $I= \int_0^\infty e^{-t^2}dt$ or $I= \int_0^\infty e^{-u^2}du$. We then can say that $I^2= \left(\int_0^\infty e^{-x^2}dx\right)\left(\int_0^\infty e^{-y^2} dy\right)= \int_{y= 0}^\infty \int_{x= 0}^\infty e^{-x^2}e^{-y^2} dx dy$ That last part, saying that the product of the two integrals (each integral involving a different variable) is equal to the double integral, is a version of "Fubini's theorem". Of course, $e^{-x^2}e^{-y^2}= e^{-(x^2+ y^2)}$ so we can write $I^2= \int_{y=0}^\infty\int_{x=0}^\infty e^{-(x^2+ y^2)} dx dy$. The whole point of doing that is that now the integral is over the entire first quadrant and we can cast the integral into polar coordinates with $\theta$ going from 0 to $\pi/2$ and r from 0 to $\infty$.
Of course, in polar coordinates, $x^2+ y^2= r^2$ and $dx dy= r dr d\theta$ so we have $I^2= \int_{\theta= 0}^{\pi/2}\int_{r= 0}^\infty e^{-r^2} r drd\theta= \left(\int_{\theta=0}^{\pi/2} d\theta\right)\left(\int_{r=0}^\infty e^{-r^2}r dr\right)$ where I have used Fubini's theorem the other way to separate that double integral into a product of integrals. Now that $\theta$ integral is easy and, because of the extra "r", so is the r integral. Thanks so much both of you, I read the wiki article and HallsofIvy's great explanation, understand it now!
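For completeness - the thread stops just short of the final numbers - the two remaining integrals evaluate as follows (a worked finish added here, not part of the original posts): $\int_{0}^{\pi/2} d\theta = \frac{\pi}{2}$ and $\int_{0}^{\infty} e^{-r^{2}}\, r\, dr = \left[-\tfrac{1}{2}e^{-r^{2}}\right]_{0}^{\infty} = \frac{1}{2}$, so $I^{2} = \frac{\pi}{2}\cdot\frac{1}{2} = \frac{\pi}{4}$ and therefore $I = \int_{0}^{\infty} e^{-x^{2}}\,dx = \frac{\sqrt{\pi}}{2}$.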
{"url":"http://mathhelpforum.com/calculus/155917-double-integrals.html","timestamp":"2014-04-20T21:07:01Z","content_type":null,"content_length":"49681","record_id":"<urn:uuid:fe87fc64-7fd7-4299-962f-4b3307f222af>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
CJM: Volume 63 Number 6 (Dec 2011) 1201 Resonant Tunneling of Fast Solitons through Large Potential Barriers 1220 Similar Sublattices of Planar Lattices 1238 Casselman's Basis of Iwahori Vectors and the Bruhat Order 1254 Constructions of Chiral Polytopes of Small Rank 1284 Non-Existence of Ramanujan Congruences in Modular Forms of Level Four 1307 A Bott-Borel-Weil Theorem for Diagonal Ind-groups 1328 On a Conjecture of Chowla and Milnor 1345 Pointed Torsors 1364 The Cubic Dirac Operator for Infinite-Dimensonal Lie Algebras 1388 Nonabelian $H^1$ and the Étale Van Kampen Theorem 1416 MAD Saturated Families and SANE Player
{"url":"https://cms.math.ca/cjm/v63/n6/","timestamp":"2014-04-17T09:47:40Z","content_type":null,"content_length":"61932","record_id":"<urn:uuid:9880f4f1-eb1e-405b-91aa-c5c2c6b7cb9a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Skew Spectra of Cartesian Products of Graphs An oriented graph ${G^{\sigma}}$ is a simple undirected graph $G$ with an orientation, which assigns to each edge of $G$ a direction so that ${G^{\sigma}}$ becomes a directed graph. $G$ is called the underlying graph of ${G^{\sigma}}$, and we denote by $S({G^{\sigma}})$ the skew-adjacency matrix of ${G^{\sigma}}$; its spectrum $Sp({G^{\sigma}})$ is called the skew-spectrum of ${G^{\sigma}}$. In this paper, the skew spectra of two orientations of the Cartesian product of graphs are discussed. As applications, new families of oriented bipartite graphs ${G^{\sigma}}$ with $Sp({G^{\sigma}})={\bf i}\, Sp(G)$ are given, and the orientation of a product graph with maximum skew energy is obtained. Keywords: Oriented graphs; Spectra; Pfaffian graph
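To make the definitions concrete, here is a small numerical check (an illustration added alongside the abstract, not an excerpt from the paper): for the 4-cycle $C_4$ with its edges oriented cyclically, the skew-adjacency matrix is real and skew-symmetric, and its spectrum is exactly ${\bf i}$ times the spectrum of the underlying graph, the property named above. The matrices and the orientation below are my own choices.

```python
import numpy as np

# Underlying graph: the 4-cycle C4. Adjacency spectrum is {2, 0, 0, -2}.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

# Orientation sigma: 1->2->3->4->1. Skew-adjacency: S[u,v] = 1 if u->v, -1 if v->u.
S = np.array([[ 0,  1,  0, -1],
              [-1,  0,  1,  0],
              [ 0, -1,  0,  1],
              [ 1,  0, -1,  0]])

print(np.round(np.linalg.eigvalsh(A), 6))  # [-2. 0. 0. 2.]
print(np.round(np.linalg.eigvals(S), 6))   # 0, 0, 2j, -2j (in some order) = i * Sp(C4)
```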
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v20i2p19","timestamp":"2014-04-17T12:31:06Z","content_type":null,"content_length":"15432","record_id":"<urn:uuid:b14bf1ba-c173-4830-be01-1928d9a9580b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Box Score Analysis: Wins by Differential, Part 1 Good afternoon, sports fans – I’m going to start out by saying that the whole ‘entry every two days’ thing isn’t going to continue all summer, so if you’re getting tired of reading a novel every couple days, never fear, this rate of posting will only continue through shortly after the season ends. I’ll probably settle into a once- or twice-a-week schedule over the summer, depending on how long my ideas for analysis hold out. Speaking of which, does anyone know the plural for ‘analysis’? Analysises? Analyses? Analysi? Today we’re going to look at one of the two things I previewed in the last Box Score Analysis entry – how quarter differentials correlate to wins. Essentially, we’re asking the question “how often did a team winning by X points after the first quarter go on to win the game?” for every possible value of X (and for every quarter and half). In this case, I’ll be splitting the analysis in half. Unknown to me when I set out on this part of this research, there are a lot of conclusions, some far more important than others. Putting them all in one entry would dilute the impact of the more meaningful ones, so in this entry we’ll be covering the less impactful (though still interesting) ones. Next entry we’ll cover the real heavy-hitters. So today we want to see if there’s a certain time when the probability of winning drastically increases – for example, how much more likely to win is a team leading by 7 at halftime compared to a team leading by 5? Is it significant at all? Unlike the last entry, I’m going to spend a good bit less time covering the statistical reasoning behind the conclusions and more time covering the conclusions themselves. If you want to see the proof behind the numbers, by all means let me know and I’d be glad to send it to you; or, you can run the numbers yourself: I’m posting the data sheet that’s being used to derive all this information right here. Statistical Significance Overview But let me start by going back to that pesky ‘statistical significance’ idea (which, if you understand already, jump ahead three paragraphs). Again, the upcoming ‘Stats Primer for a Sports Fan’ will detail what statistical significance is, but basically if something doesn’t have it, it’s not proven. A stat is ‘statistically significant’, by definition, if it is very unlikely to have simply happened by chance. For example, if a player is listed as a 60% free-throw shooter and misses three times out of three free-throw attempts, that’s not statistically significant enough to make us doubt that he’s really a 60% shooter (because statistically there was a 1-in-20 chance he’d miss all three). But, if a player is listed as a 95% free-throw shooter and misses three straight, that’s pretty significant because it’s unlikely that a shooter who was really that good would miss three out of three (statistically, it’s about a 1-in-10,000 chance). (Important note: we’re saying this as if we only observed the shooter taking three free-throws. The best free-throw shooter in the world will miss three straight at some point in his career – but what are the odds that the specific time we say ‘hey, take three free-throws’ and observe only those three that he misses all three?) Statistical significance is thrown around a lot because it’s a pretty general term, but here we’re going to mainly use it when talking about comparing two statistics. For example, Peja Stojakovic shot 92.9% from the free-throw line this year, and Dirk Nowitzki shot 87.9%. 
Is that difference statistically significant? If so, we can say that there’s statistical proof that Stojakovic was a better free-throw shooter than Nowitzki this year; but if not, we can’t conclusively assert that (incidentally, it’s not statistically significant, although the difference between Chauncey Billups shooting 91.8% and Dirk is significant even though Chauncey shot worse than Stojakovic. See why we call it ‘Little White Statistics’?). And a final note: when we refer to ‘confidence’ in terms of statistical significance, it means something pretty simple: basically, we can that confident that the observed results come from an actual difference, rather than just a random sampling error. So basically, when we say “we can conclude this at 95% confidence”, it means we’re 95% sure what we’re concluding is true. Study Background Alright, enough fluff. The reason I bring up statistical significance is because this analysis really depends on it to make any kind of conclusions. But before we get to the takeaways, a brief This portion of the study was completed by taking all the box scores from the 2007-2008 NBA regular season, computing the quarter/half differentials for each quarter (with respect to the home team, so a negative differential means the away team outscored the home team), and then looking at how many wins and losses each quarter/half differential led to. Then, we did our correlation voo-doo magic to see what increase in win percentage each point added to the differential gave. And finally, we looked to see if any of that crap was statistically significant. And if you really want to see the numbers, I can show them to you – but I’d recommend taking my word on it. If I was making stuff up, I’d make up far more conclusions than this. And with that, on to the results, subdivided into topics for your reading convenience: The Halftime Differential Let’s lead off with something bizarre. In the 2007-08 season, what halftime differential from leading-by-5 to trailing-by-5 was most likely to lead to a home team victory? Leading-by-5? No – within that range, the home team won most often (over that margin) when they were trailing by three points at halftime. This season, the home team trailing at halftime by 3 points won a bizarre 75.7% of their games (28 out of 37), compared to about 65% from margins +1 to +5, and around 55% from -1 to -2. That’s statistically significant at 95% confidence compared to differentials -2 through 1, but not statistically significant compared to 2 and higher. Similarly bizarre, in games that were tied at halftime, the home team actually lost more often than they won – the home team won only 46% of games that were tied at halftime (24 out of 52). That’s not statistically significant compared to most negative differentials, but it is compared to that -3 halftime differential (at an excessively high confidence level, too). So is the home team really more likely to win when they’re down by 3 at halftime than if they’re tied? I’m taking this conclusion with a grain of salt. 95% confidence is a high level, but statistically that means that for every 20 conclusions you make at 95% confidence, one will likely be wrong. I have a feeling this might be that one – but fortunately, this topic is very easy for further research (which I’ll mention later). And yes, in case you’re keeping score at home, we just used statistics to analyze statistics. To be specific, we statistically proved that statistics aren’t always reliable. But is that a reliable conclusion? 
And with that, this blog disappeared in a puff of logic. But by that same token, we’re not talking 95% confidence in this statistic. According to the numbers, we can (apparently, note I’m still as skeptical as you) assume a 3-point halftime deficit leads to more home team wins (than a halftime tie) with a remarkable 99.7% confidence. So either I completely screwed up the math somewhere, or we’re on to something (if anyone’s skeptical enough to check my math, we have a proportion of .757 with 37 samples and a proportion of .462 with 52 samples). But I’m still skeptical, so this will definitely be one of the items touched on when we re-do certain parts of this analysis for all the games over the past ten years (oops, gave away the ending). I should also note I’m not implying any causation here – I’m certainly not saying it’s wise for a home team to drop down 3 points before halftime. What we’re looking at here are measures that predict what would happen anyway. We aren’t saying that trailing by three at halftime leads to a win – what we’re saying is that the conditions that lead to a 3-point halftime deficit also lead to a victory by the end of the game. Through-Three Differential The team leading at the end of three quarters was always more likely to win this season, regardless of whether they were home or away, and regardless of the differential. Away teams leading by as little as one point after three quarters won 61.5% of the time, while the home team leading by as little as one point won 54.7% of the time. The difference in the winner is certainly statistically significant (at 94% confidence). Also interesting (and touched on more in the next analysis) is that once you get to a meager 4-point lead going into the fourth quarter, your victory percentage is sky-high – 75% for the home team, 71% for the away at a 4-point differential, and the percentages only get higher from there. Critical Points There’s absolutely no way to phrase this section title that completely prevents any possible puns. At the beginning, we said we wanted to see if there’s a certain differential in each quarter/half that signifies greatly increased odds of a win. And, as it turns out, one does appear. Analyzing statistical significance here is difficult (because we’d have to compare every pair of differentials’ winning percentages over a large range, for each of the seven time periods), but just some random sampling (yes, now we’re randomly sampling our statistics) for statistical significance revealed these are likely significant at the 90% confidence level, at the least. • 1st Quarter: Home: 2; Away: 6 • 2nd Quarter: Home: 4; Away: 6 • 3rd Quarter: Home: 5; Away: 5 • 4th Quarter: Home: 3; Away: 7 • First Half: Home: -3; Away: 5 • Second Half: Home: 1; Away: 4 • Through-3: Home: 2; Away: 1 There’s some pretty interesting stuff in there, believe it or not. In most cases, those point differentials correspond to a point at which teams become around 20% more likely to win the game, and sustain that increased win percentage over higher differentials. There’s a couple notable items in this: • First of all, it’s pretty notable how much less the home team needs to do to raise their win percentage. In most cases, a differential of -2 (the away team leading by 2) is what corresponds to an even winning percentage between the two teams. • Even more notable is that the home team still has a strong chance of winning as long as they’re losing by 3 or less points at the end of the first half. 
We covered in great length the fact that a 3-point halftime deficit this season still resulted in a winning record for the home team – but after 3, the drop is significant – trailing by four only brings victory 41% of the time, and the ratio decreases steadily after that. And, conveniently, the different between -3 and -4 is statistically significant, adding to the intrigue of the -3 differential. • We mentioned this earlier, but it’s also notable how delicate the through-three differential is – one 3-pointer drastically changes the odds of victory from the home team’s favor (70% when winning by 2 entering the fourth) to the away team’s (62%), a pretty ridiculous 32% swing. As I said above, no causation is implied here; I’m not trying to say that the act of winning the first quarter by 2 points causes the home team to be substantially more likely to win. Instead, I’m suggesting that whatever causes the home team to be up by 2 or more also causes the home team to eventually win the game. Leading by those differentials is a sign that they stand a good chance of winning the game – not the reason they do. Regression Analysis Like last time, I ran a regression analysis, seeking a correlation between differential (for each quarter and half) and winning percentage. There is one – an incredibly strong one. The second, third and fourth quarter differentials each correlate incredibly strongly to winning percentage (the first quarter differential correlates as well, but not quite as strongly – R=.9 for the first quarter whereas R=.94 for two, three and four). What this means is basically, outscoring your opponents by more points during a certain period of time does raise your chance of winning. We’re really uncovering deep, hidden secrets now, aren’t we? I think we just statistically proved that you win a game by outscoring your opponent. Groundbreaking, absolutely groundbreaking. The slopes of these regression lines border on relevant, though. The quarter regressions all hold slopes of roughly .023, implying that for every point added to the differential, winning percentage increases by .023. To put that in terms that make sense, it means statistically if a team outscores its opponent by 5 points in the second quarter in every game, they’ll likely win two more games (over a season) than if they outscored their opponent by only 4 points in those quarters. More relevantly, that means that if a team raises its average differential in one quarter by 1 point, it’ll average 2 more wins over an 82-game season. For a long-term coach, that’s a great goal. Raise it by 1 point per quarter and that’s possibly an 8 game improvement. That might sound drastic, but consider how strong a 4-point average differential difference makes in the league – in 2007-08, a 4-point difference is what separated the Jazz and the Raptors. And beyond all of the above, there are a few things in this analysis that I just find flat-out interesting. There’s no statistical relevance to any of them, but they’re interesting observations. • No home team recovered from being down 16, 17 or 19 points after one quarter (total of eight occurences), but two of the three home teams down 20 after the first recovered: Minnesota against Indiana and Phoenix against Seattle. Minnesota completely erased the 20-point deficit and led at halftime by 1, whereas Phoenix trailed by only 2. • The home team actually held a winning record when being outscored by 10 in the second quarter, or by 6, 7 or 10 in the third. 
They did not, however, hold a winning record when being outscored by anything more than 4 in the first quarter, or 5 in the fourth. • The lowest quarter differential to yield a 100% winning percentage was 13, when scored in the fourth quarter by the home team. The away team required a 16-point quarter-differential, but could have it occur in either the first or third quarters. • The away team won 3 times when being outscored by 19 points in the second half, but never won when being outscored by more than 16 unless it was 19. One of the things I plan to do later in the summer is re-hash the more ‘controversial’ or ‘fuzzy’ conclusions from this analysis by expanding the sample pool ten-fold and looking at the statistics for every game over the past ten years. If the conclusion on halftime differential holds up then, it’ll be only a one in a trillion (in other words, impossible) chance that it’s by coincidence. I think that’s about all the information I can beat out of this data without stepping into the second half of our analysis. If anyone has any other questions that might be answered by this data, feel free to e-mail me at the heavily disguised e-mail address on the left. Wait until tomorrow though, since I’m only half-done with this portion of the analysis. Now, on to the takeaways. So, in this analysis, we looked at how often each point differential led to a win (or, more specifically, the winning percentage associated with each point differential). As always, teams were separated by location, since it’s been thoroughly discovered that differential trends are very different between home and away teams. • Halftime Differential: Crazy stuff – I recommend reading this part regardless of your knowledge or interest in statistics. Basically, there’s evidence that the home team wins more often when trailing by 3 points at halftime than if the game is tied at halftime. It sounds bizarre, but the statistics behind it are extremely straightforward. Later this summer I’ll look at this again with data from the past 10 years (or 7, depending on how far back Yahoo!’s box scores go) and see if it still holds true. • Through-Three Differential: The team leading at the end of three quarters, regardless of home or away and regardless of the amount they lead by, is always statistically more likely to win the game (though not extensively – 82% of games are won by the team leading after three, but only around 60% are won by the team leading if they lead by less than 3). • Critical Points: There are critical points in the differential for each quarter and half, meaning that there is a certain differential that begins to lead to a much larger chance of winning. For example, a 1-3 point advantage for the home team in the second quarter (only the second quarter, not first and second) yields about a 57% chance of victory – however, 4 and above yields a 70%+ • Regression Analysis: A regression analysis showed there’s a very strong correlation between quarter differentials (in every quarter) and final result. Especially interesting from this part is the impact that a small improvement in differential can have – this part is also interesting reading even for those not interested in statistics. • Miscellaneous: Weird stuff happens. Don’t miss next entry, though. In the next entry we unveil some very interesting statistics about the power of individual quarters, and what periods of the game are most important to perform well in. 
It’s definitely interesting even to the casual fan, so come back tomorrow when it’s finished and posted. Until then, wish me luck on my last week as a college student. rbs Says: June 23rd, 2008 at 4:02 am Nice Site! JNO Says: March 2nd, 2009 at 1:23 pm Interesting site. I am looking for NBA quarter differential info. Is that available anywhere? Thanks, JNO
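A quick check of the halftime-differential comparison quoted in the post above (28 wins in 37 games when down 3 at the half, versus 24 wins in 52 games when tied) can be done with a standard two-proportion z-test. The short script below is only a sketch of that check, not the code used for the original analysis.

```python
from math import sqrt, erf

def two_proportion_z(successes1, n1, successes2, n2):
    """Pooled two-proportion z-test; returns z and the one-sided p-value."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_one_sided = 0.5 * (1 - erf(z / sqrt(2)))   # P(Z > z) for a standard normal
    return z, p_one_sided

# Home team down 3 at the half vs. tied at the half -- counts quoted in the post.
z, p = two_proportion_z(28, 37, 24, 52)
print(round(z, 2), round(p, 4))   # roughly z = 2.79, one-sided p = 0.003
```

The one-sided p-value of roughly 0.003 lines up with the 99.7% confidence figure quoted in the post.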
{"url":"http://www.littlewhitestatistics.com/?p=14","timestamp":"2014-04-21T14:40:52Z","content_type":null,"content_length":"33911","record_id":"<urn:uuid:d2d0e809-b315-4a27-af23-c7cc239cc0dd>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Martindale's Calculators On-Line Center: Statistics - Statistics By Specialty - Databases, Courses, Textbooks, Lessons, Manuals, Guides, Publications, Technical Reports, Videos, Movies, Calculators, Spreadsheets, Applets, Animations, etc. PSYCHOLOGY CALCULATORS, APPLETS, ANIMATIONS & SIMULATIONS CONFIGURAL WEIGHT, RAM MODEL AND CUMULATIVE PROSPECT MODEL - CALCULATOR - M.H. Birnbaum, Department of Psychology, California State University, Fullerton, California Configural Weight, RAM Model and Cumulative Prospect Model Calculator "...handles only outcomes greater than or equal to zero. Zero outcomes will create a warning, but the results should be ok. However, negative outcomes will create a warning and the results will not agree with the model of CPT for negative and mixed outcomes. When shifting between cases of differing numbers of outcomes, be sure to erase the weights if you want the program to supply them for you..." For more information see Bayesian Research Conference; Michael H. Birnbaum's Home Page; Michael H. Birnbaum's Home Page or the Department of Psychology CONFIGURAL WEIGHT, TAX MODEL AND CUMULATIVE PROSPECT MODEL - CALCULATOR - M.H. Birnbaum, Department of Psychology, California State University, Fullerton, California Configural Weight, TAX Model and Cumulative Prospect Model Calculator "...implements the program DMCALC as an on-line JavaScript calculator. These programs are very useful in the design and analysis of experiments in decision making. For example, BEFORE running an experiment, one can calculate the predictions of these models to see if different theories make different predictions in the experiment. It is also possible to vary parameters to examine their effects in the models..." For more information see Bayesian Research Conference; Michael H. Birnbaum's Home Page or the Department of Psychology NORMAL PROBABILITY CALCULATION DEMONSTRATIONS FROM SEEING STATISTICS (JAVA APPLETS) - G.H. McClelland, Department of Psychology, University of Colorado For more information see Seeing Statistics; Gary H. McClelland's Home Page or the Department of Psychology Some examples include PSYCHOLOGICAL STATISTICAL METHODS: INTERACTIVE STATISTICAL EXERCIESES - D.W. Stockburger, Department of Psychology, Southwest Missouri State University Multimedia Course (Text, Images & JAVA). VERY VERY EXTENSIVE. Psychology Statistics Calculators include: View different interval sizes for grouped frequency polygon Calculator; View changes in normal curve Calculator; Correlation coeficients and Scatterplots Calculator; Sampling distribution of the mean illustrated Calculator; Simulated ANOVA for power Analysis Calculator; Error probabilities in hypothesis testing Calculator; etc..." For more information see David W. Stockburger's Home Page or the Department of Psychology STATISTICS: BOX PLOTS, CORRELATION & DISTRIBUTION FUNCTIONS CALCULATORS - A. Strader, Department of Educational Psychology, College of Education, Texas A&M University VERY VERY EXTENSIVE. For more information see Mathematics Applications or Arlen Strader's Home Page Some examples from over "10" Psychology Statistics Calculators include STATISTICS CALCULATORS - K.L. Norman, Department of Psychology, University of Maryland VERY VERY EXTENSIVE. 
Psychology Statistics Calculators include: "...Sampling Distribution for the Mean Calculator; Simple Frequency Distribution Calculator; Grouped Frequency Distribution Calculator; Mean: Raw Score Computations Calculator; Standard Deviation: Raw Score Computations Calculator; Confidence Interval for the Mean Calculator; t-Test: One Mean Calculator; t-Test: Between Groups Calculator; One-Way Chi-Square Calculator; Correlation Coefficient Calculator; etc..." For more information see Kent L. Norman's HyperCourseware THE WISE PROJECT'S APPLETS - D.E. Berger, Project Director, School of Behavioral and Organizational Sciences (SBOS), Claremont Graduate University, The Claremont Colleges VERY VERY EXTENSIVE. For more information see the Web Interface for Statistics Education (WISE); Claremont Graduate University or the The Claremont Colleges VASSARSTATS: STATISTICAL COMPUTATION - R. Lowry, Department of Psychology, Vassar College, Poughkeepsie, N.Y. Multimedia Statistics Course (Text, Images, Calculators & Java Applets). VERY VERY VERY...EXTENSIVE. VassarStats: Statistical Computation "...This site is dedicated to the free dissemination of knowledge on the world-wide web..." VassarStats Statistics Calculators include: Utilities Calculators Statistical Tables Calculator; Randomizer Calculator; Simple Graph Maker Calculator; etc..." Clinical Research Calculators Probabilities Calculators "...Randomness and the Appearance of Pattern Calculator; Pascal (Negative Binomial) Probabilities Calculator; Backward Probability Template Calculator; etc..." Distribution Calculators "...Binomial Distributions Calculator; Poisson Distributions Calculator; Chi-Square Distributions Calculator; t-Distributions Calculator; Distributions of r Calculator; F-Distributions Calculator; etc..." Frequency Data Calculators "...F-Distributions Calculator; Kolmogorov-Smirnov One-Sample Test Calculator; etc..." Proportions Calculators The Confidence Interval of a Proportion Calculator; Significance of the Difference Between Two Correlated Proportions Calculator; etc..." Ordinal Data Calculators Rank Order Correlation Calculator; Mann-Whitney Test Calculator; Wilcoxon Signed-Ranks Test Calculator; etc..." Correlation & Regression Calculators t-Tests & Procedures Calculators t-Tests for the Significance of the Difference between the Means of Two Samples Calculator; t-Test for Independent or Correlated Samples Calculator; etc..." ANOVA Calculators; ANCOVA Calculators; etc..." For more information see Richard Lowry's Home Page or the Department of Psychology RESEARCH: MARKETING & OPINION CALCULATORS & APPLETS ONLINE SAMPLE SIZE & STATISTICAL POWER CALCULATORS - DSS Research, Arlington, Texas VERY EXTENSIVE. For more information see DSS Research Examples of Research, Marketing & Opinion Statistics Calculators include RESPONSE RATE CALCULATOR - Answers Research, Inc., Response Rate Calculator "...measures the percentage of qualified or eligible respondents completing the survey..." For more information see Calculators or Answers Research, Inc. Examples of Research, Marketing & Opinion Statistics Calculators include SAMPLE SIZE CALCULATOR - SurveySystem Sample Size Calculator "...determine how many people you need to interview in order to get results that reflect the target population as precisely as needed. You can also find the level of precision you have in an existing sample..." For more information see SurveySystem SAMPLING ANALYSIS CALCULATORS & APPLETS
{"url":"http://www.martindalecenter.com/Calculators2A_3_Sub.html","timestamp":"2014-04-17T09:34:35Z","content_type":null,"content_length":"31737","record_id":"<urn:uuid:41951d23-5c1d-42f2-94a6-99c83751d0b3>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Unifying Theory 2011-Sep-12, 12:40 PM #1 Join Date Aug 2011 Unifying Theory 1 of 2 I never started this with the intent of defining some ridiculously complex formula which only an Albert Einstein or Max Plank could understand, I went into it with the notion the answer was already there. I figured it should be a fairly simple problem ultimately, because absolute nothing was the lowest common denominator. It doesn't get any simpler than that, so it should be understandable in plain English, and definable in plain English, with one simple formula requiring nothing but a logical understanding of nature. That's exactly what I went looking for in From that simple point absolute chaos ensues and the problem grows exponentially, but the answer in my mind would come from a logical understanding of the process, not by wrangling the chaos one quanta at a time. Maybe I'm wrong, but I don't think so. Ultimately the solution should exceed hard mathematical proof, because the answer lies in the infinite. It's our brains job to devise a logical solution when infinity is involved. The hard math simply represents a tool for understanding, but math is not understanding itself. I think that's probably what Einstein meant when he said, “Imagination is more important than knowledge.” The math can help lead you to the wobbly edge of knowledge, but eventually that math becomes useless in defining our sense of reality. Imagination combined with logic are the only tools that can succeed when the math fails us, and applied correctly, will gain us the knowledge to move beyond to the next level of understanding. So, without any further ado, the magic formula is; -i / +i = -1 (i=infinity) I know it doesn't look to difficult to grasp, but give this to anyone in physics and they'll most likely blow a gasket. They tend to hate the use of infinity in any mathematical formula. They don't want it in the universe, and they don't want it in a formula, and they generally don't even want to discuss it. We are finite. We are not expanding into any part of infinity. There is absolutely nothing beyond the universe, etc, etc, etc. Personally, I think it's the only way to solve the problem logically, so I really don't care about opinions on the matter, because an opinion is all it truly is when it comes right down to it. It is a logical opinion, or perspective on an infinite state, and what that state represents to us physically. No one knows what it means with any sense of certainty, period. Just because someone has schooling in physics, doesn't make them anymore of an authority on the subject then anyone else. It's an open subject begging for a definition as far as I'm concerned. Infinity is an intrinsic part of who we are, because we are immersed in an infinite vastness. It's undeniable. What does infinity represent? When I imagine the infinite vastness around us, I see motion. It's like reaching out to grab something that keeps moving a little further out of reach. As hard as you try to grab that something, it just keeps inching a little further away. It is a perpetual process which expands outward forever, never reaching a conclusion, because there is no final destination. That something just keeps on going and going forever, infinitely moving away, infinitely growing in size and magnitude, but never reaching that unobtainable end point within the infinite vastness. There is no edge, no sides, no middle. There is no top or bottom, left or right. There's no up or down. 
There is nothing but a dark empty vastness waiting to be defined as you reach further and further outward to grab that something. This state of expansiveness represents the positive side of infinity. On the reciprocal side, things are exactly opposite, but still equal. It's like trying to place that same something in the palm of your hand, only to have it disappear. You thought it was there a second ago, but now you're not sure. You squint your eyes, move a little closer, and you think it's there again, and then it's gone. It keeps slipping through your grasp forever. You never could really see it, but you thought it was something, and then it was nothing. It represents a constant motion inward, perpetually moving that perceived something to an ever smaller state. There is no bottom limit, only an implied perception of something. This contracting state represents the negative side of infinity. Both the positive and negative side of infinity represents a perpetual vastness inward and outward, which tugs against a virtual middle. It is an unresolvable mathematical problem stuck in the resolution process. Every quanta of somethingness that we perceive is a product of this infinite calculation forever trying to reach a point of equilibrium between two conflicting forces of nature. It is the positive side of infinity against the negative side of infinity, and the proof lies in +i / -i = -1. Science has pondered the question of why matter dominated over antimatter in the early universe. It's a fairly simple answer in my mind really, because our universe has a preference of -1, not +1. Although I recognize the probability of multiverses, their relevance in defining our universe is extraneous information at this point, so I'll just touch on the subject for a moment. Sure, there probably is another universe where +i / -i = +1, but that universe would work reciprocal to ours, and would lie outside our finite limits. There's probably trillions upon trillions of universes being born every fraction of a second, and they could be stacked one on top of another. We could be an atom within a larger universal structure, and that larger universal structure could be an atom in another. The possibilities are endless, but virtually meaningless in understanding ours. One thing is fairly certain in my mind; all the physics in each possible universe would be perceived exactly the same as ours, because each one would either be represented as a -1, or +1. For us, that answer was -1. The proof? We're here asking the questions. If a point of equilibrium was ever possible nothing would happen. Infinity would reach a perfect balance, so +i / -i would always equal a neutral i, and we wouldn't be here. There would be a lot of potential, but the energy within that potential would forever build. I tend to think the underlying geometry of pi is ultimately the reason we're here, because pi never resolves. (continued on next post) 2 of 2 The shape of our universe is somewhat tricky to define. Infinity is similar to a 2-dimensional line forever stretching out in opposite directions. An infinite line though, runs in and out. Our universe is a break, or a division of that line, which can best be described as a line segment. The only way that break in infinity can form is spherically, which opens dimension between the vastness outward and vastness inwards. Our universe is similar in shape to a ping pong ball, and all that we experience occurs within that fragile wall. 
It's a pretty thick wall though from our relative perspective, because it stretches out about 13.7 billion light years. By all observations though, this shape is more a virtual shape than anything else. Yes, we have an interior and exterior wall that wraps around to form a hollow spherical shape, but you can't claim the interior wall is the center of our universe. It's a line segment that happens to be spherical in nature, and neither end of a line segment can represent a center. The exterior and interior walls are simply end points that represent a finite limit to us. We can imagine it as a center, but the center is not a real destination, like the exterior is not a real destination. They are virtual limitations that make our universe finite in nature. For the sake of simplicity though, we can call it the center of the universe, because there is a definite orientation to the universe from our relative perspective. Every piece of observable something within our universe can be represented as points within that universal line segment. Those points can also be considered the approximate middle, or virtual centers of our universe. Each one is unique in its own way, and its view of the universe is always relative to its scale. The reason matter always appears relative to the center of the universe, is because matter is a division of the universal line segment, so it also takes on the exact same properties as the universal line segment at -i/+i=-1. Although the -1 is a completely conceptual or logical concept, it represents a relative constant to that matter. This is roughly why the constancy of C always appears the same no matter how fast you are perceived to be traveling, because that perspective is always from a state of -1. Like the universe, matter doesn't have a center, because matter is also a line segment separating the infinite vastness outward from an infinite vastness inward. The exterior and interior walls are simply finite limits defined by that line segment, and those limits are defined by the universal line segment at -1. This doesn't change any of the physics that have already been defined at the quantum level, but it says our perspective on the quantum level is a relative perspective. The smallest Plank plank length could be the size of an entire galaxy, because the base from which we measure is not static, its malleable and ever changing. I made what was considered a fairly irrelevant prediction about 2 years ago based on this concept, and stated the Higgs particle would not be found. Although I still feel it's possible we may find something mathematically that looks and acts similar to a Higgs, because that's what the math appears to tell us. I saw it as an improbable cause of anything though, and unnecessary. There is nothing in the center of mass except a potential for infinite vastness, and the limit of that vastness is defined by the finite universal line segment at -1. That -1 is simply an unobtainable point of reference though, because it represents a constant of perspective. Simply put, there is no source of energy within mass, because that energy source lies outside of mass in both directions of infinity, and also lies outside the universe in both directions of infinity. Nothing within our universe constitutes a cause of energy within mass. We are an effect of an infinite source of energy which acts upon us in a finite manner. 
So no, energy cannot be created or destroyed within our universe, because we are an effect or a definition of that infinite energy source, not energy itself. All of the energy we experience is derived kinetically from the source. From what I saw at the time I made the prediction, science would simply keep accelerating particles at higher and higher energies and just keep finding more pieces to the puzzle. Ultimately, nothing lies in the middle except an infinite potential of scale which is not viewable in finite terms. It is that something which will forever slip out of our grasp no matter how hard we try to observe it. You thought it was there, and then you're not sure. We then accelerate the particle a little more to magnify it, and maybe we find a few more pieces inside, but then nothing. Mass is kind of like one of those Russian eggs, where you open one, and find another, and open that one, and find another. Once you get to the last egg though, you realize nothing is there except the same space you're looking at in the vastness that surrounds you. Then we're left scratching our heads wondering what e=mc^2 actually means. An odd thing happened in this discovery process. I could no longer determine which way the universe was heading, inward or outward. I think if we follow the laws of physics though, an answer becomes apparent. It is known physics that energy attenuates from the source. I've suggested that all the energy in the universe is a result of an infinite condition, which pits the expansive potential of infinity outward, versus the contractive potential inward. It is the positive against the negative, and the result is a -1 perceptual perspective. Size becomes a totally irrelevant concept, and virtually meaningless in describing nature. In the very beginning of the universe, the positive and negative end points of the universal line segment were touching more or less. Dimension opens up between those two points, so the source of energy that caused the universe is separating. Our mass represents middle points along that universal line segment, so we are attenuating from the initial source of the energy exactly as we understand normally in physics, because both sources are moving away from all the middles. Our mass is depleting at a constant rate of -1 as the universe is perceived to expand outward. Overall, the positive and negative source of the infinite energy is also moving away from the overall universe, so the entire universe is depleting in energy at a rate of -1 as well. One of these can be considered expansion, and the other acceleration, exactly as we observe. Expansion itself though, is a mere perspective, or virtual state, not a physical reality. Our entire universe is simply attenuating from the source of energy, so overall we are are shrinking in scale at a rate of X, not expanding outward as we observe. The Red Shift never defined a direction, we did based on our Earthly perspective of solid ground and fixed points of reference. Yes, you could look at it that way, and mathematically everything would seem to check, but the problem is reciprocal in nature, so either view would result in the same apparent answer. The reason is fairly simple, because our entire perspective on the universe stems from a relative perspective of -1, which is a malleable and depleting state, not static. At some point in time, our atoms could have been as large as basketballs or entire galaxies, but we would never have notice because scale is a relative perspective. 
The entire universe is scaling inward, most likely at a rate of C as near as I can imagine. Size means nothing to the universe, only to us and our relative perspective. Within the contracting process there are a lot of bumps and density changes, which always maintains a relative perspective of -1. That -1 represents a constant of energy flow, which enters through one end and exits out the other. There isn't any property in the universe that is not impacted by this negative flow. Space is contracting, mass is contracting, mass-less particles are contracting, waves are contracting, and anything else that can be observed or experienced. The entire universe, and all contents within the universe contracts inward. Expansion itself is a virtual perspective, not a physical reality like a balloon inflating. It's more like a balloon deflating, and all the little balloons (particles) inside the bigger balloon are deflating at a slightly higher rate, and all the little balloons inside those are once again deflating at a slightly higher rate. It's a slightly imperfect deflation process working against all matter in the I could go on explaining gravity in general terms, or explaining motion and time in general terms, but it becomes difficult without the core mathematics to back it up, which I am incapable of performing. I don't have those tools at my disposal. In general though, this theory is the unifying theory between GR and QP. It doesn't come in the form of a nice clean definable answer though, like e=mc^2, it comes to us in the form of logic at -i/+i=-1. We know in no uncertain terms that the answer is correct, but what we've had trouble grasping is the infinite. All we need to do is trust the logic which has served us well for 100's of years. What we gain is not a new brand of physics, rather a deeper understanding of the processes which animate our sense of reality. Both GR and QP are relative to -1. It is that simple understanding that helps make this universe comprehensible. The illusion of substance for instance, is easily understood as 2 points contracting away from each other. It's about the only way to get something out of nothing, so it's not very hard for the average person to grasp. The details of the process are enormously complex though, but I think we already have a fairly good grasp on those intrinsic details as defined by quantum physics. I never set out to redefine what we already knew, I set out to to understand what made the whole thing work. Yes, some ideas will change dramatically, like the BB, and motion, but in the end they are simply inverted perspectives we've been chasing. They were right to a degree, but the actual solution was their reciprocal state. The vast majority in physics will reject this theory, not because it's wrong, because they reject the use of infinity. Infinity is the universal engine driving all that we are. It is perfect, and perpetual in nature. We are a temporary finite resolve to an infinitely large logic problem that's still working on a final answer that will never be reached. We are the entanglement between two virtual particles interacting at the very spooky distance of the entire length of the universe. it comes to us in the form of logic at -i/+i=-1 There is precious little logic in your composition. Way too many words. Too much "poetry." You are presenting a highly dubious "answer," but you have not identified the question. By the way, the letter "i" in math and physics is already taken. 
The vast majority in physics will reject this theory I question whether you have presented a "theory," but other than that, this is one statement you are right about. Everyone is entitled to his own opinion, but not his own facts. There is precious little logic in your composition. Way too many words. Too much "poetry." You are presenting a highly dubious "answer," but you have not identified the question. By the way, the letter "i" in math and physics is already taken. I question whether you have presented a "theory," but other than that, this is one statement you are right about. Are you suggesting (-a / +a) doesn't equal -1? The logic is irrefutable if you ask me. I know the letter I was taken. In programming however, a variable can be anything I want to make it, so it's an irrelevant comment. The rest of your comments are simply opinions. Infinity is not a specific number, so -infinity/infinity is indeterminate. Infinity is a concept that has had mathematicians scratching their heads throughout history. On top of that there are hierarchies of infinity. For example there are infinitely many integers. Between any two of them there are infinitely many rational fractions. It goes on and on. Perhaps there is a profound line of thought in your head, but I am unable to ascertain it from your voluminous strings of words. Not when a = infinity. The rest of your comments are simply opinions. As are your posts... And I struggle to see any meaning in them. They don't want it [infinity] in the universe, and they don't want it in a formula, and they generally don't even want to discuss it. What do you base this on? It isn't known if the universe is finite or infinite, but it has certainly been discussed. It's a fairly simple answer in my mind really, because our universe has a preference of -1, not +1. How do you know that the "universe has a preference of -1"? (If that even means anything.) Infinity is similar to a 2-dimensional line forever stretching out in opposite directions. Wouldn't a "2 dimensional line" be a surface? And an infinite line doesn't have to stretch out in both directions. Take the number line of positive integers; it starts at 1 and stretches out to infinity in one direction. An infinite line though, runs in and out. Ian and out of what? Our universe is a break, or a division of that line, which can best be described as a line segment. The only way that break in infinity can form is spherically... Why? And how can a line segment be spherical? You have made your line (which should be 1 dimensional) go from 2 to 3 dimensions. This is mathematically meaningless. ... which opens dimension between the vastness outward and vastness inwards. What does that mean? matter doesn't have a center What about the centre of the earth? matter is also a line segment separating the infinite vastness outward from an infinite vastness inward I gave up trying to extract any meaning at that point ... Infinity is not a specific number, so -infinity/infinity is indeterminate. Infinity is a concept that has had mathematicians scratching their heads throughout history. On top of that there are hierarchies of infinity. For example there are infinitely many integers. Between any two of them there are infinitely many rational fractions. It goes on and on. Perhaps there is a profound line of thought in your head, but I am unable to ascertain it from your voluminous strings of words. No, infinity is not a specific number, we are. I don't care about numbers, I care about answers. 
We're here, so infinity can eke out a finite result, even if there are an infinite number of potential finite results. Infinity is a process with no resolve, which lies outside our finite universe. We are a result, not a cause of ourselves.

I'm not sure how you are able to make any predictions about quantum theory based on one (incorrect) mathematical statement.

The vast majority in physics will reject this theory, not because it's wrong, but because they reject the use of infinity.

No they don't. What is your evidence for that?

Infinity is the universal engine driving all that we are. It is perfect, and perpetual in nature.

Which particular infinity are you talking about? (There are an infinite number of them, as I'm sure you know.) In what way is it "perfect"?

Well, we're here having this conversation within the infinite, so I beg to differ. We obviously can't punch it into a calculator within our universe, but it seems to me the universe doesn't have a problem punching the results of our universe into its calculator. I stand by the mathematical logic that -i/+i=-1. That's seemed to work pretty well so far, even if we can't physically identify what those variables actually are.

Non sequitur (even if the universe is infinite, which it might not be).

We obviously can't punch it into a calculator

Maybe you are confusing arithmetic and mathematics.

I stand by the mathematical logic that -i/+i=-1.

It is certainly not mathematical logic when i = ∞. You can even use more non-mathematical logic to prove it wrong: presumably you would think that 1/0 = ∞? Therefore 1/∞ = 0, but everything multiplied by 0 is 0, so -∞/∞ => -∞ * 0 => 0, not -1.

It would probably be helpful if you'd identify any other terms/operators whose meanings you're importing from another domain. I assume "=" refers to identity here?

No, infinity is not a specific number, we are. I don't care about numbers, I care about answers. We're here, so infinity can eke out a finite result, even if there are an infinite number of potential finite results. Infinity is a process with no resolve, which lies outside our finite universe. We are a result, not a cause of ourselves.

Emphasis added. If "infinity is a process", how do you define the negative of that process? How can you apply a simple algebraic operator (division) to that process? The mystical application of a simple algebraic operator to a process--an undefined process at that--makes your "magic formula" utterly meaningless. Best regards,

I identified what the i represented for the contents of this document (i=infinity). I never considered my use of i would be considered anything but a variable. I mean this respectfully... It's a petty side topic and totally irrelevant to the discussion...

The problem with this statement, as a mathematical claim, is that it cannot be held as generally true. Normally, the mathematical concept of infinity is addressed via taking limits -- so we don't talk about -i/+i, we talk about -x/+x, where x is a variable that can take on any value, and then ask what happens in the limit as x -> infinity. In that situation, it is in fact -1. But we can also consider the ratio -x/2x, and take the limit, and in that case we get -1/2. So to know what -i/+i means, we have to understand the history of the "i"s that appear in the numerator and denominator, because 2x also goes to infinity as x does, so -x/2x is also an example of a negative infinity divided by a positive infinity in that limit, yet it is always -1/2.
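To make that concrete, here is the standard worked pair of limits behind the remark (here x is an ordinary real variable; nothing in it depends on the "i" of the earlier posts):

$$\lim_{x\to\infty}\frac{-x}{x} = -1, \qquad \lim_{x\to\infty}\frac{-x}{2x} = -\frac{1}{2}.$$

In both ratios the numerator tends to $-\infty$ and the denominator to $+\infty$, yet the limits differ. That is exactly why the bare symbol $-\infty/+\infty$ is treated as an indeterminate form rather than assigned the value $-1$.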
So you need to know more information about your expression -i/i; you need to know where the i's come from. Specifying that will essentially specify the answer -- so the answer does not have a general meaning.

That's subjective, Strange. It really is. Mass=energy, and I say the energy lies outside our total universe, so the Higgs does not provide mass. I said there may be something that looks like a Higgs, acts like a Higgs, but will ultimately fall short of a Higgs based on this theory. The answer does not lie within the universe, it lies outside the universe. A particle does not cause itself to exist, it is caused by an external energy source, and that source is moving away from us in both directions. We are attenuating from the source of energy, so we are contracting.

The problem with this statement, as a mathematical claim, is that it cannot be held as generally true. Normally, the mathematical concept of infinity is addressed via taking limits -- so we don't talk about -i/+i, we talk about -x/+x, where x is a variable that can take on any value, and then ask what happens in the limit as x -> infinity. In that situation, it is in fact -1. But we can also consider the ratio -x/2x, and take the limit, and in that case we get -1/2. So to know what -i/+i means, we have to understand the history of the "i"s that appear in the numerator and denominator, because 2x also goes to infinity as x does, so -x/2x is also an example of a negative infinity divided by a positive infinity in that limit, yet it is always -1/2. So you need to know more information about your expression -i/i; you need to know where the i's come from. Specifying that will essentially specify the answer -- so the answer does not have a general meaning.

Personally, I think you're overthinking the problem, something I've done thousands of times over the years. Infinity is a singular state, which is divided by a universe, or an infinitely growing number of universes. As I said in the original post though, additional universes would be extraneous information. There is only one infinity to divide. If you prefer x, then -x/+x=-1. (x=infinity)

That's subjective, Strange. It really is. Mass=energy, and I say the energy lies outside our total universe, so the Higgs does not provide mass. I said there may be something that looks like a Higgs, acts like a Higgs, but will ultimately fall short of a Higgs based on this theory. The answer does not lie within the universe, it lies outside the universe. A particle does not cause itself to exist, it is caused by an external energy source, and that source is moving away from us in both directions. We are attenuating from the source of energy, so we are contracting.

Emphasis added. "Subjective" has been introduced to the discussion in a rather dismissive manner. Thus a direct question seems relevant.

ES1. What is the objective evidence supporting the assertion that "the energy lies outside our total universe"?

Best regards,

Personally, I think you're overthinking the problem, something I've done thousands of times over the years. Infinity is a singular state, which is divided by a universe, or an infinitely growing number of universes. As I said in the original post though, additional universes would be extraneous information. There is only one infinity to divide. If you prefer x, then -x/+x=-1. (x=infinity)

Emphasis added. This is becoming tedious. How does one "divide" a "singular state" by a "universe"?

Best regards,

It's all a matter of perspective. It is spherical because we comprehend it as spherical.
It is still a 2-dimensional line segment, and we represent points within the segment.

I have moved this thread from Q&A to ATM, as I didn't see a question, nor did it seem a mainstream concept. This means all the rules of ATM apply. If you have questions or concerns about this move, Report this post or PM me.

Personally, I think you're overthinking the problem, something I've done thousands of times over the years. Infinity is a singular state, which is divided by a universe, or an infinitely growing number of universes. As I said in the original post though, additional universes would be extraneous information. There is only one infinity to divide. If you prefer x, then -x/+x=-1. (x=infinity)

Actually, what you are doing is you are dividing

MQ1: Prove that
{"url":"http://cosmoquest.org/forum/showthread.php?120868-Unifying-Theory&p=1933594","timestamp":"2014-04-17T18:43:04Z","content_type":null,"content_length":"202768","record_id":"<urn:uuid:65ca1978-5a0d-41aa-85ef-c45f0cc5e609>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Cauchy Mean Value Theorem Consequence problem

Suppose that the function $f: \mathbb{R} \rightarrow \mathbb{R}$ has two derivatives, with $f(0)=f'(0)=0$ and $|f''(x)| \leq 1$ if $|x| \leq 1$. Prove that $f(x) \leq \frac{1}{2}$ if $|x| \leq 1$.

Proof so far. Suppose that $x \in \mathbb{R}$ with $|x| \leq 1$, pick $x_0 \in \mathbb{R}$, find $z \in \mathbb{R}$ strictly between $x$ and $x_0$ such that $f(x) = \frac{f''(z)}{2}(x-x_0)^2$.

Can I conclude that $f''(z) \leq 1$?
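One way the estimate can be finished -- a sketch only, assuming $x_0$ is taken to be $0$ (which is what makes the displayed second-order Taylor form $f(x) = \frac{f''(z)}{2}x^2$ valid, since $f(0)=f'(0)=0$) and assuming the truncated bound in the statement is $|x| \leq 1$:

Since $z$ lies strictly between $0$ and $x$ and $|x| \leq 1$, we have $|z| < 1$, so the hypothesis gives $|f''(z)| \leq 1$. Hence

$$|f(x)| = \frac{|f''(z)|}{2}\,x^2 \leq \frac{1}{2}\cdot 1 = \frac{1}{2}.$$

So yes: $|f''(z)| \leq 1$ (and in particular $f''(z) \leq 1$) does follow from the hypothesis, and it is exactly what closes the estimate.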
{"url":"http://mathhelpforum.com/calculus/61647-cauchy-mean-value-theorem-consequence-problem.html","timestamp":"2014-04-18T09:31:28Z","content_type":null,"content_length":"32056","record_id":"<urn:uuid:2efff0ea-cb1b-45d5-93a3-0f2bd6948708>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Aurora, IL Algebra 2 Tutor

Find an Aurora, IL Algebra 2 Tutor

...Before I knew I was going to teach French, I was originally going to become a math teacher. I have had numerous students tell me that I should teach math, as they really enjoy my step-by-step, simple breakdown method. I have helped a lot of people conquer their fear of math.
16 Subjects: including algebra 2, English, chemistry, French

...I have assisted in Pre-Algebra, Algebra, and Pre-Calculus classes. I have also tutored Geometry and Calculus students. I have a degree in Mathematics from Augustana College.
7 Subjects: including algebra 2, geometry, algebra 1, trigonometry

...This is probably because I tend to think in a logical and orderly way and I am a determined and creative problem solver. I've spent the last six years teaching math (and English) prerequisite courses at a small private nursing college. I still tutor there and lead workshops for the school's admission test.
17 Subjects: including algebra 2, reading, writing, English

...I have a strong desire to share knowledge and educate people; I believe that education makes people think better and also become better persons. Mathematics, Physics and Chemistry are strong areas of expertise. I have taught high school students back in India.
16 Subjects: including algebra 2, chemistry, physics, calculus

...I served as a teaching assistant for graduate and undergraduate structural engineering classes. I am looking to transition into a later-in-life career assisting people. With the above experience, I can offer a structured, systematic, and hopefully enjoyable learning experience with the student ...
10 Subjects: including algebra 2, geometry, algebra 1, GED

Related Aurora, IL Tutors: Accounting, ACT, Algebra, Algebra 2, Calculus, Geometry, Math, Prealgebra, Precalculus, SAT, SAT Math, Science, Statistics, Trigonometry
{"url":"http://www.purplemath.com/Aurora_IL_algebra_2_tutors.php","timestamp":"2014-04-19T10:11:20Z","content_type":null,"content_length":"23926","record_id":"<urn:uuid:0b3983c1-bede-4006-a9ea-05eb5426014e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Pursuit Curve

"If A moves along a known curve, then P describes a pursuit curve if P is always directed toward A and A and P move with uniform velocities. Pursuit curves were considered in general by the French scientist Pierre Bouguer in 1732, and subsequently by the English mathematician Boole."

Pursuit Curve

[Thanks Yuri!]

This entry was posted in Science.
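Written out as an equation -- a sketch in notation not used in the quoted definition, with $P(t)$ the pursuer, $A(t)$ the pursued point, and $v_P$ the pursuer's constant speed -- the defining condition ("always directed toward A, uniform velocity") becomes

$$\frac{dP}{dt} \;=\; v_P\,\frac{A(t)-P(t)}{\lVert A(t)-P(t)\rVert},$$

i.e. the pursuer's velocity has constant magnitude $v_P$ and always points from $P$ toward $A$.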
{"url":"http://www.moleskinerie.com/2005/06/pursuit_curve.html","timestamp":"2014-04-18T22:06:17Z","content_type":null,"content_length":"18835","record_id":"<urn:uuid:b19d9d5c-4b3e-44c9-8b23-6712e68d0f61>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00495-ip-10-147-4-33.ec2.internal.warc.gz"}